Sep 12, 2014

High power density data centers: 3 essential design elements

INAP

When it comes to high power density data centers, not all are created equal. Many customers, particularly those focused on ad tech and big data analytics, are specifically looking for colocation space that can support high power densities of 12+ kW per rack. Here at Internap, we have several customers that need at least 17 kW per rack, which requires significant air flow management, temperature control and electricity. To put this in perspective, 17 kW equates to roughly 60,000 BTUs per hour – and a gas grill with 60,000 BTUs can cook a pretty good steak in about five minutes.
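For anyone who wants to check the math, the conversion is simple: one kilowatt of IT load throws off roughly 3,412 BTUs of heat per hour. A quick illustrative sketch in Python:

```python
# Rough conversion from rack power draw to heat output.
# 1 kW of IT load dissipates about 3,412 BTUs of heat per hour.
BTU_PER_HOUR_PER_KW = 3412

def rack_heat_btu_per_hour(rack_kw: float) -> float:
    """Approximate heat output of a rack, in BTU per hour."""
    return rack_kw * BTU_PER_HOUR_PER_KW

print(rack_heat_btu_per_hour(17))  # ~58,000 BTU/hr -- roughly a full-size gas grill
```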

Delivering super high power density that meets customer demands and ensures tolerable working conditions requires careful planning. When designing a high power density data center, there are three essential elements to consider.

1. Hot aisle vs. cold aisle containment.
To effectively separate hot and cold air and keep equipment cool, data centers use either hot aisle or cold aisle containment. With cold aisle containment, all the space outside the enclosed cold aisle is considered hot aisle, and enough cold air must be pumped across the front side of the servers to keep them cool. However, the hot aisles can become too hot – over 90 degrees – which creates intolerable working conditions for customers who need to access their equipment.
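To get a feel for how much cold air that actually means, a common HVAC rule of thumb puts required airflow at the heat load in BTUs per hour divided by 1.08 times the temperature rise across the servers (in °F). Here's a rough back-of-the-envelope sketch – the 17 kW rack and 25°F inlet-to-exhaust rise are assumed figures for illustration, not a specific customer's numbers:

```python
# Back-of-the-envelope airflow estimate for one rack (assumed numbers).
# Standard air-cooling rule of thumb: CFM = BTU/hr / (1.08 * delta_T_F)
RACK_KW = 17              # assumed high-density rack
BTU_PER_HR_PER_KW = 3412
DELTA_T_F = 25            # assumed inlet-to-exhaust temperature rise

heat_btu_hr = RACK_KW * BTU_PER_HR_PER_KW
airflow_cfm = heat_btu_hr / (1.08 * DELTA_T_F)

print(f"~{airflow_cfm:.0f} CFM of cold air needed for this single rack")  # ~2,150 CFM
```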

As power densities rise, temperature control becomes even more important. Using true hot aisle containment instead of cold aisle containment creates better working conditions for customers and maintains a reasonable temperature across the entire data center floor. With hot aisle containment, there’s still heat coming from the racks, but you only have to deal with the heat coming from the rack you’re working on at the time, instead of getting roasted by all of them at once. This approach helps avoid the “walking up into the attic” effect for data center technicians.

2. Super resilient cooling systems.
In a typical data center, if the computer room air conditioning (CRAC) units go offline, you have about 10-15 minutes to get the chillers restarted before temperatures start to rise significantly. But when every rack is giving off tens of thousands of BTUs per hour, you don’t have that luxury. To avoid an oven-like atmosphere, cooling systems must be ultra-resilient and designed for concurrent maintainability, including N+1 chillers and separate loops for the entire cooling infrastructure.

Hot aisle containment also makes a cooling outage less painful because the entire data center floor becomes a cool air pocket that can be sucked through the machines, giving you a few extra minutes before things start getting – well, sweaty.
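To see why that air pocket buys minutes rather than hours, here's a rough ride-through estimate. All the figures below (floor air volume, total IT load, tolerable temperature rise) are assumptions for illustration, and the result is a lower bound because it ignores the thermal mass of the chilled-water loop, the building and the equipment itself:

```python
# Rough ride-through estimate after a cooling failure (all numbers assumed).
AIR_DENSITY_KG_M3 = 1.2      # approximate density of air at room conditions
AIR_CP_KJ_KG_K = 1.006       # specific heat of air
ROOM_VOLUME_M3 = 5000        # assumed open-floor air volume
IT_LOAD_KW = 500             # assumed total heat load on the floor
ALLOWED_RISE_K = 10          # assumed tolerable temperature rise (~18 degrees F)

air_mass_kg = ROOM_VOLUME_M3 * AIR_DENSITY_KG_M3
energy_budget_kj = air_mass_kg * AIR_CP_KJ_KG_K * ALLOWED_RISE_K
minutes = energy_budget_kj / IT_LOAD_KW / 60   # 1 kW = 1 kJ/s

# Prints roughly 2 minutes -- a lower bound, since other thermal mass is ignored.
print(f"~{minutes:.1f} minutes before the floor air warms {ALLOWED_RISE_K} K")
```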

3. Electrical distribution.
Data centers must be designed to support high density power from day one. We have a mobile analytics customer that uses nine breaker positions in a single footprint. You can’t simply add more breaker panels when customers need them; you have to plan ahead to accommodate future breaker requests from the start. Keep in mind that breaker positions are consumed by both primary and redundant circuits – more customers than ever are requesting redundant power, so that demand needs to be factored in as well.
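As a rough sketch of the breaker-position math – assuming 208 V three-phase, 30 A circuits derated to 80% for continuous load, and a fully redundant A/B feed (all hypothetical figures, not a specific deployment):

```python
# Hypothetical breaker-position estimate for one high-density footprint.
import math

RACK_KW = 17                 # assumed rack load
CIRCUIT_VOLTS = 208          # assumed three-phase circuit voltage
CIRCUIT_AMPS = 30            # assumed breaker rating
DERATE = 0.8                 # 80% continuous-load derating

# Usable capacity of one three-phase circuit, in kW.
circuit_kw = CIRCUIT_VOLTS * CIRCUIT_AMPS * math.sqrt(3) * DERATE / 1000

primary_circuits = math.ceil(RACK_KW / circuit_kw)
total_breaker_positions = primary_circuits * 2   # primary (A) plus redundant (B) feeds

print(f"{circuit_kw:.1f} kW per circuit -> {total_breaker_positions} breaker positions")
# ~8.6 kW per circuit -> 4 breaker positions for this one rack
```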

The flexibility of modular design
Internap’s high density data centers are flexible enough to work with custom cabinets if the customer prefers to use their own. As long as the cabinet can be attached to the ceiling and connected to the return air plenum, we can meet the customer’s power density requirements.

Data centers designed to support high power density allow companies to get more out of their colocation footprint. The ability to use rack space more efficiently and avoid wasted space can help address changing needs and save money in the long run. But be sure to choose a data center originally designed to accommodate high power density – otherwise you and your equipment may have trouble keeping cool.

 
