The datacentre chill-factor: Finding the right conditions
Computer Weekly explores a variety of cost-effective, energy-efficient and innovative approaches currently being used to keep datacentres cool
The volume of data stored and processed directly affects the energy and cooling requirements of datacentres, which must keep equipment running optimally. Analyst IDC estimates that the amount of data stored will increase from 33ZB (zettabytes) in 2018 to 175ZB by 2025.
Globally, datacentres currently use around 205TWh (terawatt-hours) of power each year, and the predicted growth in data and processing over the next few years will have a significant impact on power demands. As a result, the datacentre industry needs to look at new and innovative ways to drive efficiencies, especially when it comes to cooling. Facebook, for instance, runs a huge datacentre campus in Luleå, Sweden, where the average air temperature is 17°C, enabling the social media giant to take advantage of fresh air cooling.
According to Alibaba Cloud, conventional datacentres, particularly older ones, are often incredibly inefficient when it comes to cooling, as Selina Yuan, president of international business at Alibaba Cloud Intelligence, explains: “Consumption for standard air-conditioning is estimated at around 40% of the total energy bill. Not only does this inefficiency have a financial cost, it also has a huge environmental impact.”
Balancing hot servers and energy-efficient cooling
In a presentation at the OCP Virtual Summit in May, Jeffrey Sarkinen, a researcher in the ICE Data Centre unit at Swedish research institute Rise, discussed an experiment to test the effects on power efficiency of running central processing units (CPUs) at different temperatures.
Datacentres need to manage two types of cooling – the facility-level systems that cool the racks, and the server fans that regulate the temperature of the CPUs. The experiment involved a server wind tunnel, comprising three servers in a controlled environment running different workloads at different CPU temperatures.
“To hold cooler CPU temperatures, more server fan power is needed. More air has to pass over the heatsink to keep the CPU temperatures cooler,” he told the audience.
The experiment showed a datacentre could make savings of 0.7-1.5% in power consumption if CPUs are run at 65°C rather than 77°C. Where outside temperatures allow it, a lower inlet temperature also decreases the energy used for direct fresh air cooling.
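To put those percentages in context, here is a minimal sketch, in Python, of what a 0.7-1.5% saving means in annual energy terms. The 1MW IT load is an assumed figure for illustration, not one taken from the Rise experiment.

```python
# What the Rise range of 0.7-1.5% means in annual energy terms.
# The 1MW IT load is assumed for the sake of the example.

IT_LOAD_KW = 1_000         # assumed IT load of 1MW
HOURS_PER_YEAR = 8_760

for saving in (0.007, 0.015):  # the 0.7% and 1.5% bounds reported by Rise
    saved_kwh = IT_LOAD_KW * HOURS_PER_YEAR * saving
    print(f"{saving:.1%} saving -> {saved_kwh:,.0f} kWh per year")
```

For a facility of that assumed size, even the lower bound works out at tens of thousands of kilowatt-hours a year from a software-level temperature policy alone.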
Beyond fresh air cooling
Yuan says Alibaba Cloud has been deploying immersion cooling, which involves submerging servers in an insulating liquid coolant rather than relying on traditional air-cooling equipment.
“All IT equipment in the datacentre is immersed in the cooling fluid and heat is efficiently transmitted to the outside through the liquid cooling system,” says Yuan. “As a result, low-efficiency traditional fans and air-conditioners are no longer used.”

Such techniques are not limited to hyperscale cloud providers. According to the Global data center cooling industry report from Reportlinker.com, as artificial intelligence–based electronics and machine learning programs are installed across European datacentres, the demand for efficient and low-cost cooling systems will rise. This will lead to a shift towards liquid cooling.

The report states: “As companies seek a balance between efficiency and cost, the datacentre cooling market is expected to witness high compound annual growth rate of nearly 7%.”
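The efficiency case for immersion rests on basic physics: liquids carry far more heat per unit volume than air. The sketch below makes a rough comparison using approximate textbook fluid properties; none of the figures comes from Alibaba Cloud, and water is included only as a familiar reference point, since immersion systems use dielectric fluids rather than water.

```python
# Rough comparison of volumetric heat capacity - how much heat a cubic
# metre of coolant carries per degree of temperature rise. Fluid
# properties are approximate textbook values, not vendor figures.

coolants = {
    "air (20C)":   (1.2, 1005),   # density kg/m^3, specific heat J/(kg*K)
    "mineral oil": (850, 1900),
    "water":       (998, 4186),   # reference only; immersion systems
}                                 # use dielectric fluids, not water

air_vhc = 1.2 * 1005
for name, (density, cp) in coolants.items():
    vhc = density * cp            # volumetric heat capacity, J/(m^3*K)
    print(f"{name:12s} {vhc/1e3:8.1f} kJ/(m3*K)  ~{vhc/air_vhc:5.0f}x air")
```

On these rough numbers, a litre of coolant absorbs on the order of a thousand times more heat than a litre of air for the same temperature rise, which is why fans and air-conditioners can be removed entirely.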
Case study: Alibaba’s green facility in Zhangjiakou, Hebei Province
A year after opening its Qiandao Lake green datacentre facility, Alibaba Cloud opened another green facility in Zhangjiakou, Hebei Province. The new facility incorporates wind and solar power, and was subsequently upgraded with a large-scale deployment of immersion cooling in 2018.
This datacentre serves as a blueprint for energy-efficient datacentres and has been shared with the Open Compute Project, an organisation whose mission is to apply the benefits of open source to hardware and rapidly increase the pace of innovation in, near and around the datacentre and beyond.
Redesigning a conventional datacentre for immersion cooling requires preparation in four key areas: reconstructing the infrastructure, preparing specially designed IT equipment, selecting the right coolant, and mechanising the maintenance and monitoring systems.
For the Zhangjiakou datacentre, Alibaba Cloud completely redesigned the server, storage and network architecture for immersion cooling, ensuring all IT equipment in the datacentre could be wholly immersed in cooling fluids that are electrically insulating and non-corrosive. The heat is then transmitted to the outside through the cooling system.
As it does not require air-conditioning or other large refrigeration equipment, the technology can be deployed anywhere, delivering space savings of up to 75%.
It also significantly improves computing density, with its power usage effectiveness (PUE) reaching as low as 1.0, beating the current industry average worldwide.
Source: Selina Yuan, president of international business, Alibaba Cloud Intelligence
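Power usage effectiveness is simply the ratio of total facility power to the power delivered to the IT equipment itself, so a PUE of 1.0 is the theoretical floor. A minimal sketch, using hypothetical figures rather than any from the case study:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power.

    A PUE of 1.0 means every watt drawn from the grid reaches the IT
    equipment, with nothing spent on cooling, lighting or power losses.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical figures for illustration only (not from the article):
print(pue(1_570, 1_000))   # 1.57 - in line with commonly cited averages
print(pue(1_020, 1_000))   # 1.02 - approaching the case study's claim
```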
The Reportlinker study’s authors expect traditional cooling to continue dominating the datacentre market for the foreseeable future, as companies in Europe focus on making maximum use of free cooling. “The Nordics is expected to be a hotspot for datacentre activity throughout the next decade as companies shift to the region in search of free cooling. Increasing regulation on storage of data will lead to more localised large-scale datacentre installations across Europe, which aids in the growth of the datacentre UPS [uninterruptible power supply] and cooling market,” the report notes.
However, new, processor-intensive workloads will drive the need for more advanced cooling. Thermal design power (TDP) and heat dissipation requirements are increasing, and the wider use of multicore CPU and graphics processing unit (GPU) combinations is resulting in uneven heat distribution and transient hot spots on the surface of the microprocessor.
This means that at the device level we will need more effective cooling than is possible with forced air, according to Robert Bunger, liquid cooling programme director at Schneider Electric. “Liquid cooling use cases will become inevitable in datacentre facilities across the globe due to a number of key drivers, such as greater adoption of distributed IT applications, the advocacy of 5G services, innovation in hyperscale datacentre design, technological developments and the impact of social media,” he says.
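A back-of-the-envelope sketch illustrates why forced air eventually runs out of headroom: the airflow needed to remove a given heat load follows from the heat balance Q = flow × density × specific heat × temperature rise. The rack powers and the 10°C air temperature rise below are assumed for illustration, not figures from Schneider Electric.

```python
# Airflow needed to remove a given rack heat load with forced air,
# from Q = flow * density * specific_heat * delta_T. Rack powers and
# the 10C inlet-to-outlet temperature rise are assumed values.

RHO_AIR = 1.2     # air density, kg/m^3
CP_AIR = 1005     # specific heat of air, J/(kg*K)
DELTA_T = 10      # assumed inlet-to-outlet temperature rise, K

for rack_kw in (5, 20, 50):   # low-, mid- and high-density racks
    flow = rack_kw * 1_000 / (RHO_AIR * CP_AIR * DELTA_T)  # m^3/s
    print(f"{rack_kw:>3} kW rack needs ~{flow:5.2f} m^3/s "
          f"({flow * 2118.88:6.0f} CFM) of air")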
Process manufacturing
The earliest adopters of liquid-cooled systems include process manufacturing firms, which require high-performance and exascale computing. Bunger says the hyperscalers and colocation companies are also gearing themselves up to become “liquid cooled-ready”.
As well as providing greater energy efficiency and savings, Bunger says liquid immersion cooling provides a safe, secure environment for IT components, protecting them from dust, humidity, vibration, sudden shocks and other external changes to the environment.
“While the field sample is still relatively small, indications from early adopters are that IT environment [ITE] component life is significantly extended, potentially leading to a reduction, or possibly the elimination of, break-fix requirements,” he adds. This, in turn, allows for the use of hot-swappable “sealed for life” servers.
Read more about datacentre efficiency
- As demand for increasingly dense racks and compute-heavy workloads increases within datacentres, could this lead to a wholesale shift in preference from air-based to liquid-based cooling systems? If so, at what cost to the environment?
- As organisations increase server rack density, more C-level execs investigate liquid cooling technology. Industry experts see these systems as an operational and cost necessity.
According to Bunger, one of the most significant advantages provided by liquid cooling is the design freedom it provides by eliminating the constraints inherent in any air-cooled system. “At the server, layout, orientation and position of components are no longer dictated by ‘shadow’ heating considerations, nor maintaining airflow paths through dense component groupings,” he says. “Elimination of fans provides additional space for ITE and eliminates noise. At a larger facility level, a simpler, less complex and less costly supporting infrastructure is also possible.”
Fans are also noisy. For distributed edge computing environments – such as smart city internet of things (IoT) applications and 5G networks in densely populated or urbanised areas – Bunger believes liquid cooling removes that problem, too.
Free air cooling
Not every datacentre is fortunate enough to be located in a region where it can take advantage of free air cooling. As the research from Rise found, those that are can lower the energy needed for cooling by tuning the temperature at which their processors run, based on workload requirements.
But where free air cooling is not possible, or workload requirements dictate more radical approaches to cooling, liquid cooling will become increasingly relevant.
In the past, many datacentre equipment manufacturers used liquid cooling for high-performance computing applications.
But Bunger says liquid cooling specialists are now providing standard or custom white-box servers that incorporate liquid cooling technology. Some PC server providers are beginning to work with these liquid cooling specialists to deliver “liquid cooled-ready” variants of their standard servers.