How to effectively manage datacentre temperature and humidity

A datacentre is a complex facility with different cooling equipment and power systems. Here's how to monitor its temperature and humidity

A datacentre facility is not just a collection of servers and storage systems connected via cables. It is a complex, dynamic environment with a mix of different types of cooling equipment and power systems, creating a need for energy monitoring not just at the technical level, but also at the environmental and cost levels.

The datacentre houses uninterruptible power supplies (UPSs), switches and power systems ranging from 415V three-phase AC to 5V DC and below.

The main areas datacentre managers need to monitor and manage for disaster recovery are temperature, fire and water. 

Strategies for datacentre temperature monitoring

Temperature monitoring is a core focus for datacentre managers. Running a datacentre at low temperatures – between 18°C and 22°C – has long been the norm, and standard temperature monitors have been used to ensure that ambient temperatures remain within specified limits.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) now allows for a datacentre to run hotter, provided adequate cooling is applied where it is required. This results in potentially massive energy savings because the overall cooling requirement is far lower.

Monitoring the temperature of the overall datacentre is therefore less of an issue – but monitoring for distinct hotspots within a datacentre becomes more critical. If a hotspot is left alone, it can result in a fire, which can damage not only the equipment where the fire starts, but also other equipment through smoke damage.
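
As a rough illustration of what this means in practice, the sketch below (in Python, with made-up sensor names and thresholds) checks a set of ambient readings against a wider allowable band and flags individual sensors running well above the rest, rather than alarming on the room average alone. The 18-27°C band loosely reflects ASHRAE's recommended envelope; the exact limits for a given facility should come from the equipment vendors and the current ASHRAE guidelines.

```python
from statistics import mean

# Hypothetical ambient readings in °C, keyed by sensor location.
readings = {
    "aisle-1-top": 24.5,
    "aisle-1-mid": 23.8,
    "aisle-2-top": 31.2,   # a developing hotspot
    "aisle-2-mid": 24.1,
}

# Illustrative envelope, loosely based on the ASHRAE recommended range.
ENVELOPE_LOW_C = 18.0
ENVELOPE_HIGH_C = 27.0
HOTSPOT_MARGIN_C = 5.0     # how far above the room average counts as a hotspot

room_average = mean(readings.values())

for sensor, temp_c in readings.items():
    if not ENVELOPE_LOW_C <= temp_c <= ENVELOPE_HIGH_C:
        print(f"ALERT: {sensor} at {temp_c:.1f}°C is outside the allowable envelope")
    if temp_c - room_average > HOTSPOT_MARGIN_C:
        print(f"HOTSPOT: {sensor} is {temp_c - room_average:.1f}°C above the room average")
```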

Temperature monitoring should therefore go hand-in-hand with fire monitoring.

Firefighting in datacentres

Should a fire occur, the traditional approach has been to use heat-sensitive triggers that release either water or a fire-suppressing gas.

The use of water has receded over the years because trying to put out an early-stage fire in a datacentre with water can short out electrical circuits and destroy the vast majority of equipment. Most facilities now use it only as a backstop measure, to extinguish a fire where all else has failed.

A more practical approach is to use a blanketing gas. Such gases can be naturally occurring non-flammable gases such as CO2, nitrogen or argon, or specific commercial gases such as FM200, IG55 or Novec 1230.

The use of heat triggers means that a fire must already have started, so it is better to use early-detection systems. Here, very early smoke detection apparatus (VESDA) systems can help.

Before a fire starts, smoke will be given off that is undetectable to the human eye or nose. VESDA systems can pick up these small particles of smoke and react according to rules placed in the system – for example, raising an alarm and pinpointing where in the datacentre the problem is, allowing a datacentre administrator to shut down equipment in the area or to investigate further.
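
A minimal sketch of that kind of rule-driven response is shown below. The detector zones, particle-density thresholds and rack mappings are all invented for illustration; a real VESDA installation would supply its own thresholds and would integrate with the building management or DCIM system rather than printing messages.

```python
# Hypothetical mapping of smoke-detector zones to the racks they cover.
ZONE_TO_RACKS = {
    "zone-A": ["rack-01", "rack-02"],
    "zone-B": ["rack-10", "rack-11"],
}

# Illustrative particle-density thresholds (arbitrary units).
PRE_ALARM_THRESHOLD = 0.05
FIRE_ALARM_THRESHOLD = 0.20

def handle_reading(zone: str, particle_density: float) -> None:
    """Apply simple escalation rules to a smoke-detector reading."""
    racks = ", ".join(ZONE_TO_RACKS.get(zone, ["unknown"]))
    if particle_density >= FIRE_ALARM_THRESHOLD:
        print(f"FIRE ALARM in {zone}: shut down or isolate {racks}")
    elif particle_density >= PRE_ALARM_THRESHOLD:
        print(f"Pre-alarm in {zone}: investigate {racks}")

handle_reading("zone-A", 0.07)   # early warning
handle_reading("zone-B", 0.25)   # full alarm
```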

Even earlier detection can be enabled through the use of thermal cameras. These tools monitor the datacentre and look for thermal hotspots, and again can be programmed such that a change in temperature at a specific spot can raise an alarm. 
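
One way to express "a change in temperature at a specific spot" programmatically is to compare successive thermal frames and alarm on any cell that has risen by more than a set margin. The sketch below assumes the camera exposes each frame as a small grid of °C values (here built by hand); real thermal cameras provide this through their own software or SDKs, and the grid size and threshold are placeholders.

```python
import numpy as np

# Two hypothetical 3x3 thermal frames (°C), e.g. one minute apart.
previous_frame = np.array([[24.0, 24.5, 25.0],
                           [24.2, 24.8, 25.1],
                           [24.1, 24.6, 24.9]])
current_frame = np.array([[24.1, 24.6, 25.0],
                          [24.3, 31.5, 25.2],   # one cell heating rapidly
                          [24.2, 24.7, 25.0]])

RISE_THRESHOLD_C = 3.0   # illustrative: alarm if a spot rises this much between frames

delta = current_frame - previous_frame
for row, col in zip(*np.where(delta > RISE_THRESHOLD_C)):
    print(f"Thermal alarm: cell ({row}, {col}) rose {delta[row, col]:.1f}°C "
          f"to {current_frame[row, col]:.1f}°C")
```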

Therefore, we come full circle, using temperature monitoring to help to prevent fires in the first place.

Tactics to avoid water damage or excess humidity in a datacentre

Flooding, leaking roofs and humidity must also be monitored and managed so they do not damage the datacentre equipment.

To avoid flooding in the IT facility, a sloped underfloor with drainage will allow certain flood situations to flow straight through the datacentre without causing major damage. As with fire, though, it is better to aim for early detection and avoidance rather than trying to deal with a full-scale flood.

Moisture sensors in the ceiling, to monitor for roof leakage, and at floor level will help detect slow failures of the physical environment that could lead to water ingress and problems in the facility.

General atmospheric moisture monitoring should be carried out anyway, as datacentres operate best within a specific humidity envelope, and the same systems can be used to monitor for any trend or step change in the moisture content of the facility’s air.
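
As an illustration, the sketch below compares the average of the most recent relative-humidity samples against the longer-running average and flags a step change. The window sizes, thresholds and the 40-60% RH envelope are placeholders, not a recommendation; the appropriate envelope depends on the equipment and the ASHRAE guidance the facility follows.

```python
from statistics import mean

# Hypothetical relative-humidity samples (%), oldest first.
samples = [48, 49, 48, 47, 48, 49, 48, 55, 56, 57]

RH_LOW, RH_HIGH = 40, 60       # illustrative operating envelope
STEP_CHANGE_PCT = 5            # jump between long- and short-term averages to flag

long_term = mean(samples[:-3])
recent = mean(samples[-3:])

if not RH_LOW <= recent <= RH_HIGH:
    print(f"ALERT: recent humidity {recent:.1f}% RH is outside the operating envelope")
if abs(recent - long_term) >= STEP_CHANGE_PCT:
    print(f"Step change detected: {long_term:.1f}% -> {recent:.1f}% RH; "
          "check for leaks or failed humidity control")
```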

For a rapid flood situation, such as a river breaching its banks, internal monitoring will not be of much use. Here, a rapid-response system, such as raising flood barriers to create an embankment around the facility, should be considered to keep the water at bay – at least long enough for systems to be shut down gracefully and control switched to an alternative facility away from the flood.
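
The barriers buy time; what that time is spent on can be scripted in advance. The outline below is purely illustrative: the service names, shutdown order and failover step are hypothetical stand-ins for whatever orchestration or DCIM tooling the facility actually uses.

```python
# Hypothetical ordered shutdown plan: least critical services first,
# storage and control systems last.
SHUTDOWN_ORDER = ["batch-workers", "app-servers", "databases", "storage-arrays"]

def graceful_shutdown_and_failover(flood_warning_level: str) -> None:
    """React to an external flood warning by shutting down in order, then failing over."""
    if flood_warning_level != "severe":
        print(f"Flood warning '{flood_warning_level}': monitor, no action yet")
        return
    for service in SHUTDOWN_ORDER:
        # In practice this would call the orchestration or DCIM API for each service.
        print(f"Shutting down {service} cleanly...")
    print("Switching control to the alternative facility")

graceful_shutdown_and_failover("severe")
```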

Managing airflow in the datacentre

The last area that has moved from being a relatively simple environmental task to a far more complex one is the management of airflow. 

With a standard rack-based open datacentre, the aim was to maintain a minimum rate of airflow through the entire facility to keep the average temperature within the desired range.

Higher equipment densities, with racks that used to draw 10-15kW now running at up to 35kW, combined with less free air space, have made simple cooling approaches inefficient. Also, as higher temperatures are now accepted for running some datacentre equipment, IT executives should use more targeted cooling.

For example, spinning magnetic disk drives and central processing units (CPUs) will tend to run hot, whereas peripheral chips and switches will tend to run cooler. Combining this with dynamic load balancing and workload provisioning means that cooling also has to be dynamic – and this needs monitoring at a highly granular level. 

Therefore, the cooling air should be targeted to where it is most needed, using ducting and managed airflows. Ascertaining where that is requires thermal monitoring, as detailed above.
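
A very simplified version of "target the cold air where it is needed" is to rank racks by how far they sit above their setpoint and apportion airflow accordingly. The sketch below is illustrative only: the rack names, setpoint and single adjustable airflow budget are assumptions, and a real system would drive variable-speed fans or floor dampers through the DCIM or building management system rather than printing a plan.

```python
# Hypothetical per-rack inlet temperatures (°C) and a common setpoint.
rack_temps = {"rack-01": 26.5, "rack-02": 24.0, "rack-03": 29.0, "rack-04": 23.5}
SETPOINT_C = 24.0
TOTAL_AIRFLOW_UNITS = 100   # abstract airflow budget to share out

# Weight each rack by how far it exceeds the setpoint (zero if at or below it).
excess = {rack: max(temp - SETPOINT_C, 0.0) for rack, temp in rack_temps.items()}
total_excess = sum(excess.values())

for rack, over in excess.items():
    share = (over / total_excess) * TOTAL_AIRFLOW_UNITS if total_excess else 0.0
    print(f"{rack}: {over:.1f}°C over setpoint -> {share:.0f} airflow units")
```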

Beyond this, computational fluid dynamics (CFD) capabilities, as found in datacentre infrastructure management (DCIM) software such as that from Nlyte and Emerson, can help in designing the optimum use of hot and cold aisles, baffles and ducting to ensure that the minimum amount of cold air provides the maximum amount of cooling.

The importance of environmental monitoring

Environmental monitoring is more important in managing a datacentre facility today than ever before. 

Ensuring that temperature, fire, moisture and airflows are all covered is critical. Pulling all of these together in a coordinated and sensible manner will require an overall software and hardware solution built around a DCIM package.

Investment in DCIM and efficient monitoring strategies by datacentre managers will soon pay back – an environmentally stable datacentre is more energy efficient and able to deal with problems at an early stage, allowing greater availability of a robust technology platform to the business.


Clive Longbottom is a service director at UK analyst firm Quocirca

