
Datacentre operators ‘hesitant’ over how to proceed with server farm builds amid growing AI hype

As the hype surrounding artificial intelligence enters a new phase, with rising enterprise and hyperscale interest in generative AI, operators are unsure how to proceed with new datacentre builds, it is claimed

The hype surrounding generative artificial intelligence (AI) is giving datacentre operators pause for thought about how best to proceed with the buildouts of their newer facilities, it is claimed.

With operators in some of the major European colocation hubs struggling to meet the demand for datacentre capacity amid space and power constraints, they need to make the sites they have under development as appealing as possible for prospective tenants. Depending on the type of colocation provider, this has typically meant tailoring facilities to meet the needs of the hyperscale cloud giants (wholesale colocation) or enterprises (retail colocation).

However, in light of the rising hype and demand for more energy-intensive and compute-heavy AI services from enterprises and hyperscalers, operators are finding themselves at something of a crossroads and pressing pause on their developments as they work out how best to cater to this trend.

That’s according to Niklas Lindqvist, Nordic general manager at datacentre networking infrastructure provider Onnec, who told Computer Weekly the market is hesitating over how to respond to the hype surrounding AI because it will require a total rethink of how to kit out sites.

“We have seen [operators] pause the buildout of datacentres so they are able to have a design that fits AI needs,” he told Computer Weekly. “The market right now is a little bit hesitant about AI because it’s a lot of investment… [because you need] new network topology, different cooling systems, and how [these sites] use power will be different too.”

For example, liquid cooling is often the preferred means of temperature regulation for servers inside AI clusters, and these setups typically require more floorspace and greater raised-floor heights to accommodate the piping needed to carry the coolant in. For operators used to relying on air-cooled systems to keep their servers cool, the added requirements that come with deploying liquid-cooling systems can represent a sizeable logistical and financial undertaking.

Lindqvist said that while operators need to “do something to keep up with the market because AI is happening”, the requirements for building an AI datacentre are “so intense” there is no certainty that building a datacentre now that can only handle AI will pay off.

Debates about datacentre design

The hesitancy Lindqvist talks about is backed by data from real estate consultancy CBRE, which regularly publishes reports tracking how the supply and demand for colocation capacity changes on a quarterly basis in Frankfurt, London, Amsterdam, Paris and Dublin.

As reported by Computer Weekly in August 2023, the company picked up on a slowdown in the amount of new colocation supply coming online in the second quarter of 2023, although it cited the “tremendous amount” of new capacity deployed in the second half of 2022 as being the causal factor, rather than operators having misgivings about the longevity of the AI trend.

Forecast data released by IT market watcher Gartner in August 2023, meanwhile, predicts that revenue from sales of semiconductors designed to run AI workloads in datacentres, edge environments and endpoint devices will rise 20.9% year on year in 2023, to $53.4bn.

“The developments in generative AI and the increasing use of a wide range of AI-based applications in datacentres, edge infrastructure and endpoint devices require the deployment of high-performance graphics processing units and optimised semiconductor devices,” said Alan Priestley, vice-president analyst at Gartner. “This is driving the production and deployment of AI chips.”

And while Gartner’s figures suggest the AI trend is going to continue to take the world of tech by storm, the market watcher’s recently published Hype Cycle for emerging technologies lists generative AI as being at the “peak of inflated expectations”, which might go some way to explaining why operators are reluctant to rush to kit out their sites to accommodate this trend.

For colocation operators that are targeting hyperscale cloud firms, many of which regularly talk up the potential for generative AI to transform how enterprises operate, there is perhaps less reticence, said Onnec’s Lindqvist.

“If you look at the hyperscalers, yes, they will do that because they have a big future plan that AI is happening within their software, but the [retail] colocation providers could be in a different spot,” he added.

“Should they aim for AI, or should they [continue] to aim for traditional CPU racks? Traditional cooling like airflow, or should they use liquid cooling? That is a decision on a high level because it’s going to be, ‘Which market am I targeting?’”

So what does Lindqvist think it will take to get operators to overcome their hesitancy? More proven use cases for AI in all its forms would be a start, he said.

“It would be a good idea to have that, but it is hard to get that, so [it] is important to have an open-minded, holistic view on how we design [datacentres] so we’re not just designing them [to accommodate] AI,” he said. “It needs to be flexible, scalable and modular.”

Expanding on this theme, he said it was important that operators take into account the interconnectedness that exists between the different disciplines that contribute to the overall way a datacentre operates.

“When you design a datacentre, you need to think about power, cooling and all the other disciplines that need to be connected, because if you change one thing, the other one will also need to be changed, and you need to have them as aligned as possible,” he continued.

“You also need to think a little bit outside the box. Not just about what’s happening this year or next year, because, for us, the cabling that we are providing is an infrastructure we think is going to live in the datacentre for 10 to 15 years without being changed. [But] the hardware is going to change within three years, so how can we make sure the cabling that will support that hardware is going to be as good as it can be?”

To address these challenges, Lindqvist advocates operators take a multi-disciplinary approach to designing their facilities, to ensure they are future-proofed against whatever new technology trends come along.

“You need to engage all disciplines. You need to work together in teams, getting out the details before you actually start, and everyone needs to sit down together. And it’s a lot, because in the datacentre you have a network team, a storage team, a facility team, a datacentre director and a manager,” he said. “All these people need to get aligned and look into the future a little open minded [as they figure out] what they want to provide to the market.”
