
Aug 30, 2018

High-Density Colocation 101: What You Need to Know About Data Center Power Density and Why it Matters

Nick Chang, Editor

You’ve probably heard about the potential upsides of high data center power density: More power in a smaller footprint equals more efficiency. For applications that have specific, high-compute workloads, the benefits of high-density colocation are apparent.

Data Center Power Density Considerations

But high data center power density also brings with it some considerations, and not all environments can support it. For many, high-density colocation is a simple, straightforward way to reap the benefits of high data center power density, while also allowing you to lean on the expertise of a trusted colocation service provider.

This post will cover:

  1. What is high data center power density?
  2. What enables high-density colocation?
  3. What are the benefits of high-density colocation?

What is “High” Data Center Power Density?

First things first: What constitutes “high” power density? For background, a lot of data centers—including many owned and operated by colocation service providers—are simply not designed for high density.

Many average anywhere from 90 to 150 watts per square foot. "Normal" deployed data center power density ranges from 5 to 6 kW per rack, and anything above that could be considered high density, up to around 30 kW per rack. Some data centers exceed even this, reaching ultra-high densities of 50 to 60 kW per rack and beyond.

Two takeaways here: 1) Data center power density can greatly vary from one data center to another, and more importantly, 2) high-density colocation is something that a data center or colocation space must be built to support.

But even when data centers are provisioned for high-density colocation, there is often a discrepancy between what they can theoretically support per cabinet and what actually gets used by any given customer. Often, it can be hard for even savvy customers to understand how to set up their infrastructure to optimize power draw or even properly assess their power needs.

This is why working with the right colocation service provider is so crucial. They can help you rightsize your setup, avoiding overprovisioning power you won't end up needing, and they will ensure that your equipment is properly configured to protect it from overheating.
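To make the rightsizing point concrete, here is a minimal sketch of estimating a cabinet's expected draw versus its nameplate total. The device counts, wattages and utilization factors are purely illustrative assumptions, not figures from any real deployment:

```python
# Rough rightsizing sketch: estimate a cabinet's likely power draw from
# per-device nameplate ratings. Device counts, wattages and utilization
# factors are illustrative assumptions only.

devices = [
    # (description, quantity, nameplate watts, typical utilization factor)
    ("1U dual-socket server", 20, 750, 0.6),
    ("Top-of-rack switch", 2, 350, 0.8),
]

nameplate_kw = sum(qty * watts for _, qty, watts, _ in devices) / 1000
expected_kw = sum(qty * watts * util for _, qty, watts, util in devices) / 1000

print(f"Nameplate total: {nameplate_kw:.1f} kW per cabinet")
print(f"Expected draw:   {expected_kw:.1f} kW per cabinet")
# Provisioning to the nameplate figure alone can leave a large gap between
# contracted and actual power -- the discrepancy described above.
```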

What Enables High-Density Colocation?

This brings us to the primary limiting factor for power density: heat rejection, since high-density equipment configurations generate a tremendous amount of heat. Airflow management using well-designed hot aisles and cold aisles is essential, forming a tightly controlled, closed-loop environment that prevents inefficient mixing of hot and cold air. Ultra-high-density configurations may require further steps, such as close-coupled cooling or other technologies like liquid cooling.

Deploying airflow management devices and containment technologies that maximize cooling efficiency is also necessary for a trouble-free high-density colocation configuration. So when it comes to physical infrastructure, an experienced data center and colocation service provider is invaluable; they will know how to set up and implement the right solution for your configuration.

At INAP data centers, our own specially engineered containment technology allows us to maintain tight control of our colocation environments, in addition to being highly scalable at lower costs. This empowers us to quickly deploy the right containment solutions for specific, high-density colocation equipment configurations, while keeping us flexible enough to adapt our solutions as requirements shift and evolve over time.

What Are the Benefits of High-Density Colocation?

Data centers that are enabled for high power density, like many of INAP's, allow customers to achieve the same amount of compute in a smaller footprint, consolidating equipment needs. Any application with high-intensity, high-compute workloads, such as cryptocurrency mining and blockchain, artificial intelligence or certain gaming applications, will make great use of high-density space.
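As a rough, purely illustrative back-of-the-envelope calculation (the 150 kW load is an arbitrary example, and the densities come from the ranges cited earlier in this post):

```python
import math

# Footprint consolidation sketch: cabinets needed for a fixed compute load
# at different per-rack densities.
total_load_kw = 150

for density_kw_per_rack in (5, 15, 30):
    racks = math.ceil(total_load_kw / density_kw_per_rack)
    print(f"{density_kw_per_rack:>2} kW/rack -> {racks} cabinets")
# 5 kW/rack needs 30 cabinets; 30 kW/rack needs just 5 for the same compute.
```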

But an expert colocation and data center service provider doesn't just provide the space: They can ensure that your hardware is properly configured, your equipment is racked correctly with proper airflow, and airflow augmentation devices are installed where needed.

And perhaps counterintuitively, higher data center power density also increases the effectiveness of cooling systems: Hotter air returning to the cooling system means a larger temperature differential, and a low differential, which forces cooling infrastructure to stage up and down, is the primary source of inefficiency. In other words, the system is most efficient when it runs constantly at a high temperature differential.
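A minimal sketch of the underlying physics helps make this concrete. Using the standard sensible-heat relation (with approximate constants for air), the airflow a cooling system must move for a given heat load falls as the temperature differential rises:

```python
# Sensible-heat sketch: airflow required to remove a given heat load at a
# given supply/return temperature differential (delta-T). Constants are
# approximate values for air at room conditions.

RHO_AIR = 1.2   # kg/m^3
CP_AIR = 1005   # J/(kg*K)

def airflow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to absorb heat_load_w at a given delta-T."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

for dt in (5, 10, 15):  # delta-T in kelvin
    print(f"delta-T {dt:>2} K -> {airflow_m3_per_s(30_000, dt):.2f} m^3/s for a 30 kW rack")
# Doubling the differential halves the airflow the cooling plant must move
# for the same load, which is where the efficiency gain comes from.
```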

Feb 21, 2018

The Importance of Redundancies in Your Infrastructure

INAP

In December 2017, an electrical fire knocked out power at Atlanta’s Hartsfield-Jackson International Airport.

The outage forced the FAA to issue a ground stop for hours in Atlanta, canceling thousands of flights and stranding thousands of passengers at the world’s busiest airport. The power came back on after about 11 hours, but it took days for the airport to return to normal operations.

The outage created a financial ripple effect, impacting the airport, airlines and travelers. Delta Air Lines uses Atlanta as its major hub and reported the outage cost the company up to $50 million.

Just imagine if something like this happened in your data center or to your network.

Why Your Environment Needs Redundancies

This scenario illustrates why it’s so important to have built-in redundancies in your environment.

Redundancies work by placing multiple channels of power or communication within your infrastructure and network. Think of your redundancies as insurance against failures. If you have multiple paths of connection, the loss of a single path would be inconsequential because your connection would be switched to another source.

To be fair, Hartsfield-Jackson International Airport did have a redundant power system in place. Unfortunately, the fire was so intense that it damaged the two substations providing power to the airport, including the backup system.

When you consider the financial, technical and PR damage caused by unplanned outages, it’s almost a no-brainer to include redundancies in your footprint.

Physical Redundancies

The first level you need to consider is physical-level redundancy, which backs up your utilities, such as power and water. Adding these redundant systems will help eliminate single points of failure in your environment.

If there is an unplanned disruption or scheduled maintenance, these secondary components will automatically take control, keeping your servers and applications online. A setup like this is necessary for your critical applications. (Shameless plug: INAP’s Tier 3-type data centers include N+1 concurrently maintainable design.)

Network Redundancies

Network-level redundancy involves both redundant links and redundant network equipment, such as routers and switches. The concept is similar to physical-level redundancy: should your main communication path go down, your servers can use your backup links to maintain availability and keep your business online.

In layman’s terms, think of your network redundancy like the directions you’d get from your car’s GPS. If you are driving down the highway and there is an accident, your navigation system will divert you to a route that’s less crowded. It may not be the shortest route in distance, but it ends up being the quickest to your destination.

Of course, your network-level redundancy won’t have unlimited paths from which to choose like your car GPS. It will only work with paths that you’ve already established as your backups.
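As a purely illustrative sketch of that idea, the snippet below checks pre-established paths in priority order and falls back when the primary is unreachable. The endpoints are hypothetical, documentation-range addresses; in practice this logic lives in routing protocols and redundant network hardware rather than application code:

```python
import socket

# Illustrative failover sketch: try pre-established paths in priority order.
PATHS = [
    ("primary", "203.0.113.10", 443),
    ("backup-1", "198.51.100.20", 443),
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_path():
    """Return the name of the first reachable path, or None if all are down."""
    for name, host, port in PATHS:
        if reachable(host, port):
            return name
    return None

print("Active path:", pick_path())
```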

Facility Redundancies

Even with the best redundancy plans, there are always situations that are out of your control. Whether it be a man-made incident or natural disaster, there are certain instances in which an entire data center (or even city) could go offline. To keep your business up and running should the unthinkable happen, it’s important to consider facility redundancies.

Facility redundancies are very similar to disaster recovery solutions, but rather than having your backup site on standby for failover, you are normally running off servers in both locations. For instance, if you have a footprint in Atlanta and Dallas, you would set up your environment so it is equipped to handle your entire infrastructure should something happen in one of those sites. (Shameless plug #2: INAP has 51 data centers in 21 metropolitan markets around the world that you could utilize for your footprint.)

Implementing Your Redundancies

Implementing redundancies within a network or infrastructure is more than simply duplicating all your connections. Redundancies are necessary for maintaining availability, but when used in excess, they can be a drain on overall speed and performance.

A network can be overbuilt. When implementing redundancies, the key is to create backup paths built for efficiency, speed and availability. This means having a clear design that identifies where paths are likely to fail and builds redundancies to address those exact pain points.

Remember, every IT network is unique. It’s important to be acquainted with your network’s strengths and weaknesses. Assess where your business’s connections terminate and where your resources are most available.

The best way to ensure you’re doing it right is leaving the strategy and setup to the experts. Our team has experience providing the best possible networking and infrastructure services for some of the most successful businesses in the world. INAP’s concurrently maintainable data centers are designed with built-in redundancies, so your network and servers will remain online even if there’s a disruption. Contact us today to learn about how we can provide a footprint for your infrastructure that’s always online, so you can focus on your core business.

Jan 30, 2018

Two of the Biggest Data Center and Colocation Myths Debunked

INAP

Data Centers Are Here to Stay.

Data centers play an essential role in the storage and management of a company’s data and digital information.

Large corporations may opt to store their digital information within their own data centers on site. However, many organizations rely on other companies to run the data center and will pay for the power and space – this service is known as colocation.

Colocation provides safe, reliable and affordable options that are essential to the growth and operation of many organizations and businesses. But due to the integral role they play, colocation and data centers are often also subjected to industry myths and misconceptions.

Here are two of the most common misunderstandings about data centers and their services.

Myth 1: The Growth of Cloud Computing Will Render Colocation Obsolete

One of the most popular data center myths is that the rapid expansion of cloud computing and services will eventually eliminate the need for data centers and colocation services.

It is true that the cloud has grown astonishingly in the last several years, and this growth is sure to continue; however, fear of the cloud’s size and strength is misguided. The cloud has not replaced on-site servers, so it’s unlikely it will replace data centers and the need for colocation.

Many businesses make use of cloud services to facilitate and improve their business processes, but very few businesses move all their data to the cloud. In most cases, living completely in the cloud simply isn’t feasible. Some organizations just feel more comfortable storing sensitive data on-site or in dedicated data centers. While cloud computing is certainly growing, it is not growing monolithically. As a result, blended infrastructure, with both cloud and colocation environments, is among the most popular setups for businesses.

For example, a business may choose to outsource repeatable business practices, such as email or internal documents, to the cloud, but that same business would choose to keep sensitive information and data housed on a dedicated server within a data center.

The growth of blended infrastructure solutions means that both colocation and cloud services will continue to have a role in handling the IT infrastructure needs of the future.

Myth 2: Data Centers Can’t Handle New Workloads

A second myth is that current legacy data centers lack the capabilities necessary to handle new IT workloads.

These infrastructure doomsday scenarios generally focus on the assertion that data centers lack the physical space and necessary power to properly handle our modern IT needs.

But once again, the reality is far less dramatic than critics suggest. New technologies and innovations in cooling and power usage allow data centers to be radically more efficient. So, while the amount of data being processed may increase, the amount of power being used stays relatively constant.

It is true that data centers need to adapt to a changing IT landscape; however, this change can be both sustainable and gradual. Rather than focusing on the dramatic, data centers can improve upon existing equipment. Data centers can continue to train and retain good employees, thus keeping performance and efficiency running at an optimal level. While new problems and situations may arise, data centers can still thrive in the current IT environment.

How Can Data Centers Support My Needs?

Data centers continue to play a valuable role in handling IT needs for organizations and businesses. Finding the right data center for your business is often about finding the provider who can give you the services and products you’ll need to keep your business applications running smoothly.

INAP’s data center specialists can assess your organization’s needs and find the right plan that fits your scale, scope and budget. Contact us today to learn more about INAP’s data centers and colocation services.

Dec 21, 2017

5 Highlights from the Gartner IO Conference 2017

INAP

Insights and Advice from our Experts

INAP was fortunate to be a sponsor at Gartner’s annual IT Infrastructure, Operations Management & Data Center Conference 2017 in Las Vegas.

In addition to exhibiting our high-performance managed hosting and service solutions, our team of experts had the opportunity to attend some of the popular keynotes and sessions throughout the four-day event.

The conference included more than 150 sessions, so naturally we weren’t able to attend every one. We would have liked to, but since time travel is still unreliable at best, our experts picked the sessions they knew would be most relevant to the future of our business and our ever-evolving industry.

And they weren’t disappointed.

Here are five key industry insights and trends our experts brought home with them from the Gartner IO Conference.

1. Make Way for Artificial Intelligence and Machine Learning

You probably already use some form of automation in your business. Chatbots and virtual assistants are increasing in popularity, but are you doing enough to improve the efficiency of your infrastructure?

During their opening keynote address, Gartner’s Milind Govekar and Dave Russell predicted that if you don’t effectively adopt artificial intelligence (AI) and machine learning (ML) in your environment and workloads by 2020, your infrastructure may not be operationally and economically viable.

As a result, they expect an increase in software-centric or programmable infrastructure to support advanced platform thinking and integration with minimal human intervention. If utilized correctly, this technology will enable your environment to process more data faster with less cost.

Stay tuned.

2. Living on the Edge

It was just a few years ago that the internet of things (IoT) took off as the next big advancement in digital technology.

Businesses now need to embrace the edge by blending physical and digital resources to create an experience that provides value and makes a difference.

It’s not about rolling out technology for the sake of doing it. In a session about top trends in 2018 and their impact on infrastructure and operations, Gartner VP David Cappuccio pointed out the necessity of creating an intelligent edge. This focuses on utilizing connected devices that provide a real-time reaction and allow for interaction between things and people to solve a critical business need.

3. Data is More Valuable Than Ever

In a digital world of AI, connected devices and intelligent edges, data is becoming even more important.

Machine learning and automated systems will require additional data to analyze trends and behaviors to make logical decisions to improve efficiency, especially when connected with multiple devices. To manage the influx of digital information, a greater priority will be placed on data storage and backup. (Shameless self-promotion: INAP launched a new managed storage offering during this conference.)

More data also means more opportunities for hackers, and businesses are being forced to take additional steps to combat this risk. In a session about the state of business continuity management, we learned that average disaster recovery budgets were expected to increase in 2017.

4. Cloud Reaches New Heights

One of the overwhelming themes that kept coming up during sessions and keynotes was a focus on the cloud.

You’re probably already familiar with some of the stats that predict massive increases in cloud computing over the next few years. Gartner’s Govekar and Russell doubled down on those forecasts, claiming that by 2021, 80 percent of organizations using DevOps will deploy new services in the public cloud.

It appears we can expect more businesses to transition to a cloud-only model, where before it was just cloud first. The impact remains to be seen.

5. Mind the Skills Gap

With technical innovation and the transition to a more cloud-focused infrastructure, IT teams are being driven to master additional skills.

Some employees may be fast learners, but the reality for most businesses is that they’ll likely experience disruptions due to infrastructure and operational skills gaps.

Rather than being specialists or generalists, IT talent should strive to become versatilists – meaning they are specialists in a certain discipline, but can easily switch to another role. In the meantime, companies need to consider the experience level of their existing teams when rushing to adopt new technology.

Implementing New Trends

Your business may already be in the process of implementing changes based on these trends. Or perhaps you’re aware that you need to get the ball rolling, but you’re not ready just yet.

Regardless of where you currently sit, you should consider how these trends will impact your industry and business model or you risk being left in your competitors’ dust.

It may seem like a daunting task, but you don’t have to do it alone. Consider a trusted partner who will be there every step of the way to provide guidance, support and the necessary services to help you achieve your business goals. That’s where INAP comes in. Our team of experts will assist you in preparing your organization and infrastructure for the technology of tomorrow. Contact us to learn how we can help you build a better IT infrastructure for today and the future.

Dec 17, 2013

2014 IT security priority: Business continuity/disaster recovery

INAP

Security is a central topic and concern in the digital economy. IT security attracts a significant amount of attention because of its business risk and far-reaching consequences, including significant revenue loss, customer loss and legal ramifications. Recent surveys completed in 2013 by accounting firms PwC and EY provide interesting insight into security priorities and future funding plans by corporate America.

In its 2013 survey, “Under cyber attack,” EY interviewed 1,900 respondents, primarily C-suite professionals and executives from finance, IT and security. The top three security concerns, rated by respondents as a number 1 or 2 priority, include:

  1. Business continuity/disaster recovery – 68%
  2. Cyber risks/threats – 62%
  3. Data leakage/data loss prevention – 56%

Indeed, business continuity/disaster recovery requires considerable foresight and planning. It is essential to understand the business impact in the unfortunate case that disaster strikes. This is more than a theoretical argument as evidenced in the EY survey; 10% of the respondents claimed that the threat of natural disasters has increased risk exposure for their business in the past 12 months. Organizations should set acceptable downtime limits for restoring critical business functions and plan accordingly. Consider the ramifications not only for your business, but for your customers as well, should a worst case scenario occur.
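One simple way to frame those downtime limits is to translate an availability target into an annual downtime budget. The targets below are generic examples, not figures from the surveys cited here:

```python
# Translating an availability target into an annual downtime budget -- a
# common first step when setting acceptable downtime limits.

MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} uptime allows {downtime_min:,.0f} minutes of downtime per year")
# 99% allows roughly 88 hours a year; 99.99% allows under an hour -- a gap
# that drives the choice between cold, warm and hot recovery sites.
```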

Part of business continuity planning involves the data center. What plans will you have in place for data center recovery? The shortest recovery times may be achieved by establishing a hot site, an alternate secure facility fully equipped and on standby to take over operations. If this level of response is unnecessary, warm or cold sites are possible options. Alternatively, the cloud, public or private, may provide the best solution for your requirements. “PwC’s 5th Annual Digital IQ Survey” shows that 2013 investment in the private and public cloud was expected to increase significantly; we will need to wait and see if this prediction came true, and the extent to which this investment was targeted for business recovery purposes.

Clearly, selecting the appropriate data center recovery option is critical to the success of the overall business continuity plan. To what extent will your corporation develop contingencies for 2014 and beyond?

Aug 14, 2012

The truth about usage-based power: Cost savings and energy efficiency

INAP

I recently had the chance to sit down with our vice president of data center services, Mike Frank, to ask him all about our new usage-based power feature in our data centers. Mike gave me the scoop on why being able to monitor your power consumption is a step up in terms of controlling costs and optimizing your infrastructure for maximum efficiency. Check out what he had to say…

What is usage-based power?

Usage-based power is exactly as the name implies. It allows customers in our colocation facilities to pay for only the power they are using instead of paying a flat rate — in many cases helping them save on costs. In today’s economy things have gotten incredibly expensive, and power is included in that. It’s definitely a customer-friendly feature since being able to understand your power consumption is critical to knowing if you are running at maximum efficiency. In turn, customers have more control over their environment and more flexibility over what they can do.

How does it work? 

There is something called a branch circuit monitoring system which puts a physical monitor on your power circuit that reports back to the network. Customers can then get that data on their invoice at the end of the month so they can compare it over previous months and years.
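As a simplified illustration of how such readings might translate into a usage-based charge, here is a small sketch with made-up readings, billing period and rate; actual metering, rates and invoicing will differ by facility and contract:

```python
# Simplified usage-based billing sketch. All figures are illustrative.

HOURS_IN_PERIOD = 730      # roughly one month
RATE_PER_KWH = 0.12        # hypothetical $/kWh

avg_draw_kw = 4.2          # average draw reported by the branch circuit monitor
metered_kwh = avg_draw_kw * HOURS_IN_PERIOD
print(f"Metered usage: {metered_kwh:,.0f} kWh -> ${metered_kwh * RATE_PER_KWH:,.2f}")

flat_rate_kw = 6.0         # power contracted under a flat-rate model
flat_kwh = flat_rate_kw * HOURS_IN_PERIOD
print(f"Flat-rate equivalent: {flat_kwh:,.0f} kWh -> ${flat_kwh * RATE_PER_KWH:,.2f}")
```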

Why is this feature important?

By allowing companies visibility into how much power they are using, they can create more efficient environments and create more efficient configurations with their devices, servers, racks and the way they deploy their gear. For example, if I can look at my power consumption and I see that I am using a lot of power, I have the ability to upgrade my machines and change my infrastructure to lower my power consumption — something that could also encourage the industry to become more conscious of energy usage as it relates to being green.

Where is this being deployed?

Usage-based power is available now as a beta at our Dallas facility. The Dallas team is prepared and trained to implement this feature for customers today. Over the course of 2013 we’ll be rolling it out to other markets.

Thanks Mike! We’ll be looking forward to the implications this has for customers and the industry as a whole.

What are your thoughts on usage-based power in the data center?

May 29, 2012

Performance Data Centers on the open water

INAP

What’s a “Performance Data Center”? I’m glad you asked… I recently purchased my first boat so let’s use that as an analogy to help explain. We spend a lot of time on one particular lake, and towing my boat back and forth from the house to the lake can sometimes be a hassle. So, I’ve been looking into those huge dry docks where you can have your boat plucked from the water with a forklift when you’re done for the day. Your boat, which is no small investment I might add, is stored safe and sound, in the comfort of a huge warehouse filled with other boats. It’s dry, safe, secure, and I can access my boat whenever I want, for any reason. If I need work done on it, well, the dry dock facility can offer those services too. That seems pretty convenient to me, allowing me to focus on enjoying time with my family − which is why I bought the boat in the first place. As a bonus, varying levels of service are available based on individual needs.

Not unlike the dry dock housing above, a Performance Data Center offers the fundamental services needed to manage your IT Infrastructure, and so much more. It has the controlled environment you need and a staff at the ready with optional services like remote hands or 24/7 support. More robust than traditional data centers, Internap Performance Data Centers offer our entire product portfolio, so if you need more flexibility down the line (think cloud, managed/dedicated hosting or a combination thereof), it’s all there for you.

Over the last several years Internap has been busy building new Performance Data Centers to meet the growing demand for high-density power colocation, hosting and cloud services. Late last year we opened our Dallas facility, a 55,000 square foot, state-of-the-art data center with lots of great features and services that today’s technology professionals demand. Our newest Performance Data Center is currently being built in Los Angeles, CA, opening for business in summer of 2012. We’re excited about the opportunity to serve a new market and look forward to seeing you at our grand opening. Until then, we’ll keep you updated on progress.

Looking for more information on our Los Angeles Performance Data Center? Check out the data sheet.
