Tier 3 data center design: The cooling checklist
Our expert gives insights into data center cooling system considerations that will ensure adherence to tier 3 data center standards.
Note: This is the concluding part of a two-part series on the tier 3 data center specifications checklist. Read the first part of the tier 3 data center specifications checklist.
A data center is said to have been designed as a tier 3 data center when it meets the prime requirements of redundancy and concurrent maintainability. In this context, cooling infrastructure is as critical to a data center as power, since it keeps the data center’s overall heat load in check. Therefore, an organization aspiring to tier 3 data center certification must ensure that the following guidelines are met.
Cooling system essentials
Data centers typically feature precision air conditioning systems, which are either direct expansion type air conditioning systems or chilled water cooling systems. The tier 3 data center specifications do not mandate any particular type of air conditioning system. However, they require air conditioning systems to be designed such that they can handle the data center’s maximum IT heat load and function normally in extreme ambient conditions.
A facility designed to meet tier 3 data center standards should have a cooling system capable of handling the most extreme ambient temperature recorded for that region in the last 25 years. This requirement is put in place by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).
The tier 3 data center specifications further require that cooling systems have n+1 redundancy built in at every component level. For instance, if the data center requires an air conditioner (AC) of 10 ton capacity, it could either deploy two 10 ton ACs, or meet the redundancy requirement with three 5 ton ACs (two to carry the load and one as standby). Concurrent maintainability should also be ensured at all component levels.
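As a rough illustration, the n+1 check behind that sizing example reduces to a simple calculation: with the largest unit out of service, the remaining units must still carry the full heat load. The Python sketch below is only illustrative; the function name and capacity figures are assumptions, not part of the tier specifications.

```python
# Minimal sketch: checking n+1 redundancy for a cooling unit configuration.
# The helper name and tonnage figures are illustrative assumptions.

def meets_n_plus_1(unit_capacities_tons, required_load_tons):
    """Return True if the units still cover the load after losing any single unit."""
    total = sum(unit_capacities_tons)
    worst_case = total - max(unit_capacities_tons)  # the largest unit fails
    return worst_case >= required_load_tons

# A 10 ton requirement is met by two 10 ton units, or by three 5 ton units.
print(meets_n_plus_1([10, 10], 10))   # True
print(meets_n_plus_1([5, 5, 5], 10))  # True
print(meets_n_plus_1([5, 5], 10))     # False: no spare capacity
```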
Direct expansion type air conditioners
A direct expansion type AC is divided into three parts: the indoor unit, the outdoor unit and the piping. n+1 redundancy in accordance with the tier 3 data center standards can be achieved with two or more such cooling systems. If the data center is spread over a large area, it is efficient design practice to distribute the indoor units evenly throughout the facility. An outdoor unit should connect to only one indoor unit. There can be multiple pipes connecting the indoor and outdoor units.
Concurrent maintainability for tier 3 data center standards can be achieved by deploying ACs with dual power supplies. Alternatively, each AC should connect to one power distribution unit (PDU) and have an automatic transfer switch (ATS) mechanism that switches over to a second PDU if the first fails.
It is not good practice to have more than one AC unit connecting to a PDU when ensuring redundancy. For instance, consider a tier 3 data center with a cooling requirement of 20 ton. It is better to have two 20 ton units, each connected to its own PDU, than five 5 ton ACs divided (two on one and three on the other) between two PDUs. If either PDU goes down in the latter arrangement, the remaining 10 or 15 ton of capacity can never meet the overall cooling requirement.
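To make the arithmetic behind that comparison explicit, the sketch below checks whether cooling capacity survives the loss of either PDU. The PDU names and tonnage figures simply mirror the 20 ton scenario above and are illustrative assumptions.

```python
# Minimal sketch: verifying that cooling capacity survives the loss of any one PDU.
# PDU groupings and capacities mirror the 20 ton example; names are illustrative.

def survives_pdu_failure(pdu_groups, required_load_tons):
    """pdu_groups maps a PDU name to the AC capacities (in tons) it feeds."""
    total = sum(sum(units) for units in pdu_groups.values())
    # Losing any single PDU must still leave enough capacity for the full load.
    return all(total - sum(units) >= required_load_tons
               for units in pdu_groups.values())

# Two 20 ton units on separate PDUs: either PDU can fail safely.
print(survives_pdu_failure({"PDU-A": [20], "PDU-B": [20]}, 20))          # True
# Five 5 ton units split 2/3 across two PDUs: neither failure leaves 20 ton.
print(survives_pdu_failure({"PDU-A": [5, 5], "PDU-B": [5, 5, 5]}, 20))  # False
```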
Chilled water cooling system
A chilled water cooling system uses chiller pipes that carry cold water to an air handling unit. Hot air from the data center is drawn into the air handling unit, cooled, and returned to the data center as cold air. The tier 3 data center specifications require redundancy and concurrent maintainability to be achieved for chilled water cooling system components.
n+1 redundancy can be achieved with two or more water sources, chiller pipes, pumps and air handling units. The pump, air handling unit and condenser need dual power supplies, or multiple PDU connections with an ATS arrangement. It is advisable for an organization designing a tier 3 data center to connect multiple air handling units to two chiller pipe systems, each capable of independently meeting the heat load requirement. However, this leads to complexity in the piping between the air handling units and the chiller system, which in turn makes it difficult to meet the tier 3 data center requirement of concurrent maintainability.
Each chiller system should connect to two or more water sources so that if one dries up, supply continues from the other via a valve for the switchover. Valves may be required at various stages of the piping system, so the piping design can become complex while trying to ensure redundancy and concurrent maintainability. Replacing or maintaining a valve (or a pump) requires isolating it with additional valves on either side of the component to cut the water supply.
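One way to reason about concurrent maintainability in such a layout is to check that each chiller or pump can be taken out of service while an alternate path still carries chilled water to the air handling unit. The sketch below models this as a simple reachability check over a toy topology; the node names and connections are illustrative assumptions, not a real piping design.

```python
# Minimal sketch: confirming that any single chiller or pump can be isolated
# for maintenance while chilled water still reaches the air handling unit (AHU).
# The topology below is an illustrative toy example.

from collections import deque

def reachable(edges, start, goal, removed=None):
    """Breadth-first search from start to goal, skipping the removed component."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two water sources feed two chiller/pump branches that both reach the AHU.
edges = [("source-1", "chiller-1"), ("source-2", "chiller-2"),
         ("chiller-1", "pump-1"), ("chiller-2", "pump-2"),
         ("pump-1", "AHU"), ("pump-2", "AHU")]

for component in sorted({"chiller-1", "chiller-2", "pump-1", "pump-2"}):
    ok = (reachable(edges, "source-1", "AHU", removed=component) or
          reachable(edges, "source-2", "AHU", removed=component))
    print(f"{component} can be taken out of service: {ok}")
```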
Piping design can be the most complex part of meeting tier 3 data center requirements with chilled water cooling systems. Software solutions are now available for piping design, and specialized manpower can also be engaged to help with it.
About the author: Mahalingam Ramasamy is the managing director of 4T technology consulting, a company specializing in data center design, implementation and certification. He is an accredited tier designer (ATD) from The Uptime Institute, USA and the first Indian to get this certification.
(As told to Harshal Kallyanpur.)