Uptime Institute highlights patchy reporting of water use by datacentre operators
With the water usage habits of datacentres coming under increased scrutiny, research from resiliency think tank Uptime Institute highlights shortcomings in the sector's reporting of how much water facilities use
The datacentre industry still has more work to do when it comes to embracing sustainability, according to the findings of the Uptime Institute’s 2021 global datacentre survey.
More than 800 datacentre operators and owners took part in this year’s survey, which saw participants quizzed on a wide range of issues pertaining to how these organisations run their server farms.
The report looked at the financial impact that outages have on operators, the sector’s track record on diversity and inclusion, and how the datacentre community is responding to calls to improve its sustainability and environmental friendliness.
Where the latter is concerned, many colocation and hyperscale datacentre operators have, over the past two years, publicly committed to curbing their carbon emissions and improving the energy efficiency of their facilities.
Despite these declarations, the Uptime Institute report painted a picture of an industry that is struggling to get to grips with its sustainability commitments, given how few operators track key metrics that indicate how efficient their facilities are.
For instance, the majority of datacentre owners and operators said they use the power usage effectiveness (PUE) metric to track how efficiently their facilities use energy, with the average score across the industry now standing at 1.57, Uptime’s research shows.
The use of the PUE metric is also most prevalent among operators with facilities in excess of 1MW, although Uptime said this is likely to change as smaller edge facilities come online in greater numbers over the coming years.
In the meantime, operator efforts to drive down their PUE scores appear to have stalled in recent years, the research found, with Uptime attributing that trend to the number of legacy datacentres still in operation around the world.
“After large efficiency gains through the first half of the 2010s, average PUEs have remained relatively stable for the past five years or so,” the report stated.
“There is a clear explanation for this. Even as a growing number of new builds sport design PUEs of 1.3 or better, it is not economically or technically feasible for many operators to perform the major overhauls needed for much better efficiency in many older facilities.
“Across much of this large population of older datacentres, the easy gains from better airflow management, optimised controls and replacement of ageing equipment have already been achieved,” the report added.
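For readers unfamiliar with the metric, PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, so a score of 1.0 would mean zero overhead from cooling, power distribution and lighting. The sketch below illustrates the arithmetic in Python; the annual figures are invented for illustration and are not drawn from the survey.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT energy.

    A PUE of 1.0 would mean every kilowatt-hour drawn by the site reaches the
    IT equipment; cooling, power distribution and lighting push the ratio higher.
    """
    return total_facility_kwh / it_equipment_kwh

# Invented annual figures, for illustration only (not survey data)
legacy_site = pue(total_facility_kwh=15_700_000, it_equipment_kwh=10_000_000)
new_build = pue(total_facility_kwh=13_000_000, it_equipment_kwh=10_000_000)

print(f"Legacy site PUE: {legacy_site:.2f}")  # 1.57 -- the survey's industry average
print(f"New build PUE: {new_build:.2f}")      # 1.30 -- a typical design target for new builds
```

In other words, an average facility still draws roughly 1.57kWh from the grid for every kilowatt-hour that reaches its servers, and the gap between that figure and the 1.3 design target is the overhead that older sites struggle to engineer away.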
Presently, it is less common for operators of smaller facilities to track their power use, but reporting of other performance metrics is decidedly patchy across the whole industry, the report suggested.
“Most [operators] do not track server utilisation, arguably the most important factor in overall digital infrastructure efficiency,” the report stated. “Even fewer operators track emissions or the disposal of end-of-life equipment, which underscores the datacentre sector’s overall immaturity in adopting comprehensive sustainability practices.”
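Uptime does not prescribe how utilisation should be measured, but the principle is straightforward: sample how busy each server actually is and aggregate the readings across the fleet. The snippet below is a hypothetical, single-machine sketch using the open source psutil library, intended only to show the kind of measurement the report says most operators are not taking.

```python
import psutil  # third-party library: pip install psutil

def average_cpu_utilisation(samples: int = 5, interval_s: float = 1.0) -> float:
    """Average CPU utilisation (%) over a handful of short samples.

    A fleet-wide view would aggregate readings like this from every server;
    persistently low averages point to stranded capacity and wasted energy.
    """
    readings = [psutil.cpu_percent(interval=interval_s) for _ in range(samples)]
    return sum(readings) / len(readings)

if __name__ == "__main__":
    print(f"Average CPU utilisation: {average_cpu_utilisation():.1f}%")
```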
The report went on to state that just half of operators keep tabs on how much water their datacentres consume for cooling purposes, and those that do typically track it only on a site-by-site basis, rather than monitoring usage across their entire datacentre portfolio.
When Uptime quizzed the operators that said they do not track their sites’ water usage habits, 63% said there was “no business justification” for doing so, while 23% said they lacked the technical capabilities needed to monitor how much water their sites use.
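The survey itself does not report water figures in these terms, but a commonly cited yardstick is water usage effectiveness (WUE), published by The Green Grid, which divides a site's annual water consumption in litres by its IT equipment energy in kilowatt-hours. A minimal sketch, using invented numbers for illustration:

```python
def wue(annual_water_litres: float, it_equipment_kwh: float) -> float:
    """Water usage effectiveness in litres per kilowatt-hour of IT energy.

    Lower is better: a site with little or no evaporative cooling can approach
    zero direct water use, while heavily evaporative designs sit well above 1 L/kWh.
    """
    return annual_water_litres / it_equipment_kwh

# Invented figures, for illustration only (not survey data)
site_wue = wue(annual_water_litres=18_000_000, it_equipment_kwh=10_000_000)
print(f"Site WUE: {site_wue:.2f} L/kWh")  # 1.80 L/kWh
```

Tracking even this single ratio per site, and then portfolio-wide, is the kind of reporting the survey suggests is still the exception rather than the rule.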
These admissions come at a time when the water usage habits of datacentres are coming under increased scrutiny from environmentalists and government policymakers, on the back of predictions that climate change and population growth are set to exacerbate water scarcity in drought-prone regions of the world.
This has prompted several members of the hyperscale cloud community (Microsoft, Google and Facebook) to set out plans to become “water-positive” by 2030, committing to ensure their global operations replenish more water than they consume by that date.
It is likely that other datacentre operators will be forced to follow suit in the years to come in response to regulatory pressure, said the Uptime report.
“A growing number of municipalities will permit new datacentre developments only if they are designed for minimal or near-zero direct water consumption,” the Uptime report stated. “These types of rules will heavily influence facility design and product choices in the future, mandating cooling equipment that uses water sparingly (or not at all).”
Staffing challenges persist in datacentres
The report also highlighted the staffing issues that operators are encountering, with nearly half of respondents admitting to difficulties finding skilled datacentre professionals to fill vacancies at their firms, up from 38% in 2018.
The situation is worsening as the number and size of facilities continue to grow, with the report stating that this was creating jobs at a rate that recruiters were finding hard to match.
There is potential for artificial intelligence (AI) technologies to “decouple” the demand for datacentre workers from the market’s overall growth, the report stated.
Respondents claimed, however, that it would probably take several years for AI to directly impact staffing requirements in the datacentre market.
“Any replacement of datacentre staff with AI will require higher trust in the technology,” the report stated. “Most operators view AI and its risks – some still unknown – with caution.”
In North America and Europe specifically, there is the added pressure of organisations losing experienced staff to retirement, the report noted.
“There is an additional threat of an ageing workforce, with many experienced professionals set to retire around the same time – leaving more unfilled jobs, as well as a shortfall of experience,” it stated. “An industry-wide drive to attract more staff, with more diversity, has yet to bring widespread change.”
Datacentre downtime increasingly damaging
From an outages perspective, this year’s report echoed last year’s findings in that, when downtime incidents do occur, they are becoming increasingly damaging and expensive for operators to bounce back from.
A third of operators said they had experienced no outages during the past three years, while 69% said they had suffered “some form of outage” during that timeframe, down from 78% in 2020.
The report cited the onset of the Covid-19 coronavirus pandemic as a possible cause of this drop, as operators introduced limits on how many people could access their sites for social distancing reasons and delayed upgrades that could have created downtime incidents.
“There is, nevertheless, still a troubling number of outages and other major failures and interruptions. A proportion of these cause significant disruption and are costly,” the report stated. “As the world becomes more dependent on IT services, reliability will receive greater scrutiny and calls for further improvements.”