
Lean, mean and green: The datacentres of the future

Working out how to develop more efficient datacentres in an increasingly resource-hungry world is not easy, but it is possible

It’s not easy being lean, although doing more with less sounds simple enough. The sustainability-seeking datacentre operator needs to pinpoint waste and reduce it, without losing value from other parts of the system. Yet the race is on to support more capacity and high-performance computing than ever.

“The industry wrestles with capacity challenges and advanced applications that are forcing significant changes to datacentres of all shapes and sizes,” says Vertiv CEO Rob Johnson. “The message to datacentre equipment providers is clear: the status quo is not acceptable.”

According to French thinktank The Shift Project’s 90-page 2019 Lean ICT report, digital energy consumption rose globally by 9% a year between 2015 and 2020, with digital technology expected to be responsible for about 8% of greenhouse gas (GHG) emissions by 2025.

The thinktank’s research pegged the average 1MW, 1,000m² datacentre in 2019 as achieving a power usage effectiveness (PUE) of 2.0. Meanwhile, risk-averse operators typically err on the side of maximum uptime and can be over-provisioned.
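
PUE is simply total facility energy divided by the energy delivered to the IT equipment, so a PUE of 2.0 means every kilowatt of compute drags along another kilowatt of cooling and other overhead. A minimal back-of-envelope sketch in Python, using assumed figures for a facility of that size (the numbers are illustrative, not from the report):

# Illustrative only: what a PUE of 2.0 implies for a nominal 1MW IT load
it_load_kw = 1000                      # assumed IT load of a 1MW facility
pue = 2.0                              # total facility energy / IT equipment energy
total_kw = it_load_kw * pue
overhead_kw = total_kw - it_load_kw    # cooling, power conversion, lighting, losses
print(f"Total draw {total_kw:.0f}kW, of which {overhead_kw:.0f}kW is overhead")
# Cutting PUE to 1.2 would shrink that overhead from 1,000kW to 200kW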

Jennifer Cooke, research director of the cloud-to-edge datacentre trends team at IDC, notes that although many are making sustainability announcements, there’s still a lot of waste to reduce before considering entirely new solutions.

“Are you making good use of the energy you have? You can have giant facilities cooled down to the temp of a meat locker, only using 40% of the space, with 25-30% of the servers running with no one knowing what they’re contributing to,” she tells Computer Weekly.

Datacentre operators probably have more of an understanding of their energy use than other businesses – and they can have a relatively standardised environment, with tighter controls and processes than mainstream enterprises.

Large hyperscalers may be able to devote a whole area to high-performance computing (HPC) – other enterprises may need to have it next to general business applications that must be kept running.

New technology will be required to tackle that disruption, confirms Cooke, but small tweaks over time can trim a lot of fat and add up to big cost savings.

“Liquid cooling has a lot of promise. It’s kind of new in the datacentre, but more workloads are taking on AI [artificial intelligence], GPUs and so on, heating up the datacentre,” she says.

The need to streamline operations

Virtualisation meant operators no longer had to buy new “stuff” for each new workload. Now, the industry needs to achieve a similar result around the physical facilities themselves.

This might include creating something like a digital twin of the datacentre to enable modelling of data volumes, power consumption, temperature controls and so on. It might also mean collaborative efforts between risk-averse datacentres protecting their own mission-critical environments and suppliers on big data projects.

“I would say we’re on the cusp of having the right technologies, with a lot down to process as well. We have cloud-based datacentre management tools, but getting people to use these consistently and getting datacentre operators OK with sharing data are big hurdles for many,” says Cooke.

Smart datacentres with sensors all around the racks can pinpoint heat emissions linked to specific workload patterns, and spot temperature changes that disrupt other parts of the datacentre environment in situ. This is where AI-assisted cooling technology might come in.
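
As a purely illustrative sketch of the idea (not a description of any particular product), correlating per-rack temperature readings with workload telemetry is conceptually straightforward; the data structures and thresholds below are assumptions:

# Illustrative only: flag racks whose temperature rise tracks a workload spike
rack_temps = {"rack-01": [22.1, 22.6, 28.4], "rack-02": [22.0, 22.2, 22.3]}  # degrees C over time
rack_load = {"rack-01": [0.35, 0.42, 0.93], "rack-02": [0.30, 0.31, 0.29]}   # utilisation over time

for rack, temps in rack_temps.items():
    temp_rise = temps[-1] - temps[0]
    load_rise = rack_load[rack][-1] - rack_load[rack][0]
    if temp_rise > 5.0 and load_rise > 0.3:   # assumed thresholds
        print(f"{rack}: up {temp_rise:.1f}C alongside a workload spike, so target cooling there")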

Of course, energy-intensive AI adoption itself spurs datacentre efficiency requirements. Facilities can no longer be defined simply by their ability to house high-performance computing and power-dense equipment. But IDC says organisations are at an early stage here, with Europe potentially a little ahead and the Asia-Pacific region further behind.

“Despite greenwashing, companies that reduce waste and try to use their resources better will save money, and that makes sense,” says Cooke. “That’s going to be the tipping point for ‘I know I need to do this to get investors and to attract talent’ – students are signing pacts not to go to a company unless they have aligned sustainability goals, and so on.”

Coming full circle

Societies are embracing the circular economy concept, and datacentres must follow. Cooke says suppliers such as HP might blaze a trail, perhaps because they’ve long worked with used equipment strategies and as-a-service models.

“It’s not all about cloud and cloud providers. Organisations are trying to get cloud to run on-premise, so look at how you can do that efficiently and what to do if you’re not an expert,” she says. “District heating – taking ‘waste’ heat from one place and using it in another – might be one idea.”

Datacentre operators must take a good look at their hardware and performance requirements. Rather than going out and buying the datacentre equivalent of a new Prius, ask whether the old banger still has something to offer. Even if an old rack of servers cannot support new requirements, can it be cascaded into another use?

“It’s really hard to recycle tech gear – it takes a lot of energy and it’s hard to use the components for something else. It makes sense to keep it in use longer or find a new home for it,” says Cooke. “How can waste from one process become inputs for another?”


Knowing exactly what to do to trim energy use in specific instances is difficult as good research-based data is lacking. The Shift Project discovered some 170 papers dated 2014-2017 that simply regurgitate findings from other papers without relevant context or examination. In addition, sampling methodologies and heuristics are often obsolete.

Rigorous quantitative clarification and benchmarking of the direct environmental impacts of all digital technology is lacking, the thinktank concludes, as is quantitative measurement and analysis of the impact of investment policies, management practices and company practices.

Cisco has estimated that 67ZB (zettabytes) of “useful” data will be produced by the internet of things (IoT) and industrial internet of things (IIoT) sectors in 2020 – 35 times more than the storage capacity planned in datacentres at that time. New architectures, from edge to fog computing, and additional storage capacity based on SSD, including 3D NAND, are probably needed.

Mike Mattera, director of sustainability at Akamai Technologies, says change is absolutely achievable even with older equipment. In one US datacentre, the company achieved a PUE of 1.09-1.15 in 2019 using a mix of outside air and direct expansion (DX) cooling.

“After seeing the results and understanding the operating heat tolerance of our hardware, we can operate warmer than what you would traditionally see,” says Mattera.

“In addition, we reduce our need for power through software and hardware advances. Since 2015, Akamai’s platform has used 61% less energy per gigabit of network capacity while still growing by over 182%.”
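
Taken at face value, those two figures roughly cancel out. A back-of-envelope check, assuming the percentages simply compose multiplicatively (an assumption, not Akamai’s own calculation):

# Assumed reading of the quoted figures, for illustration only
energy_per_gbit_factor = 1 - 0.61   # 61% less energy per gigabit of capacity
capacity_factor = 1 + 1.82          # capacity growth of 182%
total_energy_factor = energy_per_gbit_factor * capacity_factor
print(f"Total platform energy is roughly {total_energy_factor:.2f}x the 2015 level")
# About 1.10x: capacity nearly trebles while total energy rises only around 10%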

Future forward in Cornwall

Operators need to work out where they are now, where they want to be, and how to get between those two points. But without better data, it’s hard to identify the best levers for improved energy efficiency, which means falling back on anecdotal evidence such as case studies.

Chris Roberts, head of datacentre and cloud at Goonhilly Earth Station in Cornwall, pinpoints the difficulty. “You’re trying to save the world yet need all these hugely power-intensive AI models, so you’ve got to look at efficient ways of doing that,” he says.

“Sometimes that means not taking small steps but big adventurous steps, and at Goonhilly, we’ve got the closeness to the wave project and wind farm, and ultimately our ambition is to become a kind of supercomputer hub starting to address some carbon emission challenges.”

Goonhilly Earth Station functions as an advanced information hub for satellite observation data streams, covering some 168 acres (with its own herd of alpacas to nibble the grass). Huge data volumes need timely processing and they consume a lot of power.

“For example, we’re deploying deep-earth observation on Nvidia on a unit geared around things like optimising crop yields. That in itself would generate more heat, so we use more efficient ways to cool it,” he says. “In the standard workplace server, you can use lower-power chips and so on. But with our kinds of workloads, you have to work quite hard to mitigate that – hence immersion cooling.”

As workloads rose, Goonhilly moved to liquid cooling, using Submer’s non-conductive biodegradable fluid, with a posted efficiency gain of around 65% compared with standard cooling. “We looked at firms like 3M, but they’re not biodegradable,” says Roberts.
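
To see what a gain of that order could mean for PUE, a rough sketch helps; the 1.5 baseline and the assumption that all overhead is cooling are simplifications for illustration, not Goonhilly figures:

# Illustration only: how a 65% cut in cooling energy might move PUE
baseline_pue = 1.5                              # assumed starting point
cooling_overhead = baseline_pue - 1.0           # treat all overhead as cooling (simplification)
new_overhead = cooling_overhead * (1 - 0.65)    # 65% efficiency gain
new_pue = 1.0 + new_overhead
print(f"PUE falls from {baseline_pue:.2f} to roughly {new_pue:.2f}")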

Goonhilly’s new datacentre launched mid-2019, superseding its largely traditional historic (yet relatively efficient) comms room. It uses lower energy platforms wherever possible, but must work hard to develop the most efficient processes, looking carefully and continually at the way the infrastructure is managed, from cooling to unnecessary energy expenditure.

“We’ve got space for 90 racks in the new datacentre in the first data hall, and there’s capacity to expand that at least a few times. That’s why we’re looking at new technologies,” says Roberts.

Energy efficiency and sustainability are the overall drivers, and the datacentre is set to become denser over time to enhance both, even as power requirements are expected to keep rising. The station uses its own on-site wood pellet biomass generator, with additional power acquired from other renewable sources.

“The next stage is talking to the wind farm half a mile away about how we can take in 33kV [kilovolts] rather than low voltage, and the wave project. But we’re almost waiting for it all to catch up with us, because we’re not chasing these standard workloads,” says Roberts.

Goonhilly is identifying areas on-site for more solar panels, supporting its full power requirements of 500kW, pushing energy efficiency higher still. In 12 months, it could become the world’s first carbon-neutral supercomputer and 100% solar datacentre.
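
A rough sizing exercise shows the scale of that ambition; the capacity factor below is an assumed typical UK figure, not a Goonhilly number:

# Back-of-envelope solar sizing, illustration only
load_kw = 500                    # stated full power requirement
capacity_factor = 0.11           # assumed average UK solar output as a share of nameplate
annual_demand_kwh = load_kw * 24 * 365
nameplate_kw = load_kw / capacity_factor
print(f"Annual demand about {annual_demand_kwh / 1e6:.1f}GWh, needing roughly {nameplate_kw / 1000:.1f}MW of panels")
# Around 4.5MW of nameplate capacity, plus storage or grid balancing for nights and winter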

“There are other solar-powered supercomputers, but in the fine print it’s only 50%. So this will be the first,” says Roberts. “We’re not as advanced in some ways as, say, Google, but we’re using the tools we’ve got and it’s absolutely about the future.”
