Aug 30, 2018

High-Density Colocation 101: What You Need to Know About Data Center Power Density and Why it Matters

Nick Chang, Editor

You’ve probably heard about the potential upsides of high data center power density: More power in a smaller footprint equals more efficiency. For applications with specialized, high-compute workloads, the benefits of high-density colocation are apparent.

Data Center Power Density Considerations

But high data center power density also brings with it some considerations, and not all environments can support it. For many, high-density colocation is a simple, straightforward way to reap the benefits of high data center power density, while also allowing you to lean on the expertise of a trusted colocation service provider.

This post will cover:

  1. What is high data center power density?
  2. What enables high-density colocation?
  3. What are the benefits of high-density colocation?

What is “High” Data Center Power Density?

First things first: What constitutes “high” power density? For background, a lot of data centers—including many owned and operated by colocation service providers—are simply not designed for high density.

Many average anywhere from 90 to 150 watts per square foot. “Normal” deployed data center power density ranges from 5 to 6 kW per rack; anything above that can be considered high density, up to around 30 kW per rack. Some data centers exceed even this, reaching ultra-high densities of 50-60 kW per rack and beyond.
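
To connect those two ways of measuring density, here is a quick back-of-the-envelope sketch. The 40 square feet of gross floor space per rack (the rack plus its share of aisles and clearances) is an illustrative assumption, not a figure from this post:

```python
# Rough conversion between floor-area density (W/sq ft) and per-rack
# density (kW/rack). The 40 sq ft of gross floor space per rack is an
# assumption for illustration only.
SQ_FT_PER_RACK = 40

def per_rack_kw(watts_per_sq_ft: float) -> float:
    """Convert watts per square foot into kW per rack."""
    return watts_per_sq_ft * SQ_FT_PER_RACK / 1000

for wpsf in (90, 150):
    print(f"{wpsf} W/sq ft -> about {per_rack_kw(wpsf):.1f} kW per rack")
# 90 W/sq ft -> about 3.6 kW per rack
# 150 W/sq ft -> about 6.0 kW per rack
# A 30 kW rack implies roughly 750 W/sq ft -- hence purpose-built space.
```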

Two takeaways here: 1) Data center power density can vary greatly from one data center to another, and more importantly, 2) high-density colocation is something a data center or colocation space must be built to support.

But even when data centers are provisioned for high-density colocation, there is often a discrepancy between what they can theoretically support per cabinet and what actually gets used by any given customer. Often, it can be hard for even savvy customers to understand how to set up their infrastructure to optimize power draw or even properly assess their power needs.

This is why working with the right colocation service provider is so crucial. They can help you rightsize your setup so you avoid overprovisioning power you don’t end up needing, and they will ensure your equipment is properly configured to protect it from overheating.

What Enables High-Density Colocation?

This brings us to the primary limiting factor for power density: heat rejection, since high-density equipment configurations generate a tremendous amount of heat. Air flow management using well-designed hot aisles and cold aisles is essential, forming a tightly controlled, closed-loop environment that prevents inefficient mixing of hot and cold air. Ultra-high-density configurations may require further steps, including close-coupled cooling or other technologies, like liquid cooling.

Deploying air flow management devices and containment technologies that maximize cooling efficiency is also necessary for a trouble-free high-density colocation configuration. So when it comes to physical infrastructure, an experienced data center and colocation service provider is invaluable; they will know how to set up and implement the right solution for your configuration.

At INAP data centers, our own specially engineered containment technology allows us to maintain tight control of our colocation environments while remaining highly scalable at lower cost. This empowers us to quickly deploy the right containment solutions for specific high-density colocation equipment configurations, while staying flexible enough to adapt our solutions as requirements shift and evolve over time.

What Are the Benefits of High-Density Colocation?

Data centers that are enabled for high power density, like many of INAP’s, allow customers to achieve the same amount of compute in a smaller footprint, consolidating equipment needs. So any application with high-intensity, high-compute workloads will make great use of high-density space: cryptocurrency mining and blockchain, artificial intelligence or certain gaming applications, for example.

But an expert colocation and data center service provider doesn’t just provide the space: They can ensure your hardware is properly configured, your equipment is racked correctly, airflow is routed as intended and airflow augmentation devices are installed where needed.

And perhaps counterintuitively, higher data center power density also increases the effectiveness of cooling systems: Hotter air returning to the cooling system means greater efficiency, since a primary source of inefficiency is a low differential between return and supply temperatures, which forces cooling infrastructure to stage up and down. In other words, when the system runs at a consistently high temperature differential, it operates at its most efficient.
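
The underlying physics is the sensible-heat relationship for air: heat removed equals air density times specific heat times airflow times the temperature differential. A minimal sketch, using approximate constants for air and an assumed 30 kW rack load, shows why a higher differential moves the same heat with less airflow:

```python
# Sensible heat removed by air: Q = rho * cp * flow * dT.
# rho ~= 1.2 kg/m^3 and cp ~= 1.005 kJ/(kg*K) are approximate values
# for air; the 30 kW rack load is an illustrative assumption.
RHO_CP = 1.2 * 1.005  # kJ/(m^3 * K)

def airflow_m3_per_s(heat_kw: float, delta_t_c: float) -> float:
    """Airflow needed to carry away heat_kw at a given differential."""
    return heat_kw / (RHO_CP * delta_t_c)

for dt in (6, 12):
    print(f"dT = {dt:2d} C -> {airflow_m3_per_s(30, dt):.1f} m^3/s for a 30 kW rack")
# dT =  6 C -> 4.1 m^3/s
# dT = 12 C -> 2.1 m^3/s  (double the differential, half the airflow)
```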

Aug 22, 2018

INAP Summer 2018 News Round-Up

Nick Chang, Editor

While summertime usually means a break from the busy pace of work, things at INAP have been anything but slow. As August draws to a close, we wanted to take a moment to highlight some of the big news and events from the past few months.

In May, we announced partnerships with DediPath and the world’s largest social casino games company, adding them to our New York metro and Santa Clara, California, data centers, respectively.

Later that month, we announced a new collaboration with Ciena, whom we chose to work with for our multiphase network enhancement initiative. Keep an eye out—we’ll be launching Metro Ethernet services for increased connectivity and performance in six markets: Boston, Chicago, Dallas, New York, Phoenix, and the Bay Area.

In June, Jim Keeley joined INAP as our new CFO. That same month, US Dedicated also expanded their footprint with us from Atlanta and Seattle into our newly expanded Dallas data center.

In July, our Montreal data center got a new colocation customer: a leading global provider of information and communication technology. We were also pleased to hear that INAP was certified as a Great Place To Work™ based on employee feedback. You can read what people here think about the company, and if you like what you see, check out our current openings.

At the end of the month, we signed a new lease in Phoenix, Arizona, to add a second Tier 3-design data center, anchored by key tenant Bank of America. Around the same time, we released our second quarter earnings report.

Earlier this month, we announced an exciting partnership with Colt Data Centre Services in its North London facility, where we will be building out a standalone, self-contained Tier 3-design data center space.

We also announced a new agreement with Sovrn, an online advertising technology firm. This is the second expansion to our cloud hosting agreement with the company. Our international footprint, in addition to our route-optimized IP service, was a key deciding factor.

We are looking forward to seeing what the last few months of 2018 will bring, and we hope you are too.

Aug 16, 2018

Bare Metal for Big Data: Performance, Price and Value

Mike Jones, Sr. Dir., Product Management

In 1597, Sir Francis Bacon wrote, “Knowledge is power.” This may be no less true now than it was then, but a few things have changed: most notably, the sheer volume of information we now have access to. And since the dawn of the Information Age, that information has been growing exponentially.

Companies today need to leverage that information and turn it into knowledge quickly to remain competitive. As web traffic increases, wearable technologies are introduced, and the Internet of Things (IoT) continues to proliferate, what used to be measured in megabytes and gigabytes is now measured in petabytes and beyond.

Data analytics powered by batch processing overnight no longer suffices when data-driven decisions must be made in real time.

The Opportunity of Bare Metal for the Challenge of Big Data

Whether it’s on-demand advertising, financial analysis, preventive security, health care research, or any of the numerous other use cases for data analytics, companies and organizations are challenged with processing, in real time, the growing volume of data captured daily. And as data volumes have grown, the cost of the infrastructure needed to store and analyze that data has also increased substantially.

New technologies like Apache Hadoop and Aerospike have become standard frameworks for big data processing, and bare metal servers have become the most common method of delivering these frameworks in a high-performance and scalable architecture.

By running big data frameworks on bare metal servers, companies can leverage 100 percent of the hardware performance, while still maintaining scalability and automation through APIs—just like provisioning virtual instances in the cloud. This allows companies to not only get value from their data quickly to make informed decisions but also to easily scale the process as the data set continues to grow.
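
To make this concrete, here is a minimal sketch of API-driven provisioning. The base URL, endpoint, payload fields and plan names are hypothetical placeholders, not INAP’s actual API; the point is that bare metal can be scripted just like cloud instances:

```python
# A sketch of API-driven bare metal provisioning. The endpoint, auth
# scheme and payload fields are hypothetical placeholders; consult
# your provider's actual API documentation.
import requests

API = "https://api.example-provider.com/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}

def provision_server(hostname: str, plan: str, location: str) -> dict:
    """Request a new bare metal server, just as you would a cloud VM."""
    resp = requests.post(
        f"{API}/bare-metal/servers",
        headers=HEADERS,
        json={"hostname": hostname, "plan": plan, "location": location},
    )
    resp.raise_for_status()
    return resp.json()

# Scale out a Hadoop worker pool one API call at a time.
for i in range(3):
    server = provision_server(f"hadoop-worker-{i}", "highmem-256", "dal")
    print(server.get("id"), server.get("status"))
```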

Big Data Performance and Bare Metal Value

But performance is just one piece of the equation. Even as demand for big data-driven decision-making grows, cost efficiency remains an ever-present consideration for many businesses. With a variety of service providers available, it can be difficult to evaluate your options.

INAP commissioned Cloud Spectator to compare our Bare Metal infrastructure running Aerospike and Hadoop against well-known industry heavyweights Amazon Web Services and IBM Cloud. The study compared both raw performance and performance per dollar spent.

The conclusion was clear: INAP’s Bare Metal goes toe-to-toe on performance with Amazon and IBM’s offerings at a more competitive price point—offering greater value. In Cloud Spectator’s testing, INAP Bare Metal provided up to 4.6 times better price-performance.
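
For reference, price-performance is simply the benchmark result divided by the spend required to achieve it. A minimal sketch with purely illustrative numbers (not figures from the Cloud Spectator report):

```python
# Price-performance = benchmark throughput per dollar of monthly spend.
# The numbers below are purely illustrative, not from the report.
def price_performance(ops_per_sec: float, monthly_cost: float) -> float:
    return ops_per_sec / monthly_cost

a = price_performance(ops_per_sec=500_000, monthly_cost=1_000)  # provider A
b = price_performance(ops_per_sec=450_000, monthly_cost=2_500)  # provider B
print(f"Provider A delivers {a / b:.1f}x the price-performance of provider B")
# Provider A delivers 2.8x the price-performance of provider B
```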

For more information on the performance and price-performance of INAP Bare Metal compared to IBM and AWS’s offerings, download Cloud Spectator’s report.

Aug 9, 2018

Why CIOs Are Critical to the C-Suite (and Maybe Not for the Reasons You Think)

INAP

Chief information officer (CIO) is a job title commonly given to the most senior executive in an enterprise responsible for the traditional information technology and computer systems that support enterprise goals. – Wikipedia

Sure, that’s part of the job. But it’s a shrinking part of what the modern CIO actually does and is responsible for. That definition only covers what Gartner refers to as “Mode 1 IT” in its Bimodal framework. Mode 1, at its core, is the traditional upkeep of IT systems that enable a business to operate its digital assets—“keeping the lights on,” so to speak. This work is fundamental, of course, but Mode 1 IT is not what’s driving businesses into their next stage of innovation.

CIOs are in a position to do a lot more for the organization than worry about how to keep all their systems patched while still saving money by moving to the cloud (even if that’s still part of the gig).

A Strategic Partner in the C-Suite

So why then are CIOs so critical to the C-suite if not for cost savings purposes or general systems upkeep? A recent Forbes article sums it up nicely: “Today’s CIO has rightfully assumed a much more prominent place in the strategic thinking of the business, not simply enabling other members of the C-suite to achieve their vision, but rather actively setting the agenda for the future of the digital enterprise.”

The modern CIO is first and foremost an agent of change and innovation with a mission for revenue growth and market share. The days of only being focused on the bottom line are over. Today, the CIO truly drives the thinking—and resulting strategy—on what to do with all that data, the systems and the emerging technologies that will capture value for the business. In other words, they’re a true partner to the CEO and other members of the C-suite.

The mission of the CIO has shifted quickly toward innovation, transformation and revenue generation, with organizations leaning on these executives more for their sociology and business school training than IT acumen. This is the case regardless of industry, even if some industries haven’t admitted it yet.

“The potential role for the CIO is to be the digital transformation person who’s going to understand what’s going on with business and then apply technology to get something out of it,” says David Higginson of Phoenix Children’s Hospital. “We’re moving toward more of an information science. All that effort and all that money we’ve spent getting data into the system—now what are we going to do with it?”

Managing Self-Disruption

The CIO role is undoubtedly evolving into a super-executive charged with generating revenue and scaling the digital business to outpace the competition, whoever that may be. Take the automotive industry as a prime example of this: Companies like Google and Apple have begun competing in a space long dominated by big automakers, but competitors such as Zipcar and Uber have extended the very boundaries of the personal mobility industry.

So where does that leave traditional giants like Ford and GM? Well, according to Ford’s CIO Marcy Klevorn, they’re aiming to disrupt themselves. “We know we have to expand into some new areas because if we don’t, someone else will,” she says. Technology, Klevorn acknowledges, is now driving the business at its core. “The F-150 today is powered not just by a choice of four high-performance engines and a 10-speed transmission, but also by 150 million lines of software code.”

For CIOs like Klevorn and others like her, emerging technology is where their time increasingly needs to be focused. One of the biggest hurdles to this endeavor is accumulating the skills to actually make it happen. There’s something we refer to as the IT Skills Gap that these leaders have to solve for. “What keeps me up at night is not the complexity of running a huge network, as some might expect,” Klevorn says. “It’s making sure we have a team with the skills and the passion to deliver at a different agility level than ever before. Competition for this talent is huge.”

Growing Complexity for the Business—and for CIOs Themselves

This points back to the distinction between Mode 1 IT and Mode 2 IT. Where CIOs need to spend more time, as evidenced throughout these examples, is on Mode 2: innovation, change and revenue growth. This will impact the business as much as it will the individual careers of CIOs. “The CIO role has become one of the most complicated in the C-suite,” says Bask Iyer, CIO of Dell and VMware. “But it’s also preparing IT executives to serve as presidents, general managers, and CEOs.”

As the role itself grows more complex, the need for smart simplification will only increase—particularly as multicloud infrastructure becomes the norm. “CIOs will be shifting their attention toward brokering services that enable them to plan, procure and orchestrate cloud services from multiple vendors across hybrid clouds from a single pane of glass,” writes Marc Wilczek for CIO magazine.

Change is here to stay; transformation is a must. It only makes sense that the role that IT plays will continue to grow and expand—and with it, so too will the role of CIO, whether they’re ready or not.

Aug 1, 2018

6 Processes You Need to Mature Your Managed Services

Paul Painter, Director, Solutions Engineering

Are you struggling to rapidly mature your services?

You’re probably not alone.

As Director of Solutions Engineering at INAP, I see how tough it can be for my customers. All of them know their particular business sector, and the vast majority are highly technology-literate, but that doesn’t necessarily make managing technology-based services any easier.

The IT Infrastructure Library (ITIL) is a logical process framework to start from and build upon, but even more fundamental is constructing a set of Minimum Written Processes.

In the interest of full disclosure, I did not come up with these myself; this checklist is something I learned from others during my career—and it directly corresponds to the concerns and challenges I hear from INAP customers. In my experience, if an organization has these minimum processes, it is well on its way to maturing its managed services.

There are six Minimum Written Processes required to have a managed service:

  • Installation
  • Monitoring
  • Troubleshooting
  • Recovery
  • Changes
  • Decommission

Let me explain each in a bit more detail.

Installation

Having a written process for how to install a new customer provides a path for consistency and standardization between customers. This also allows you to implement a new customer without affecting any others.

Consistency can be a double-edged sword, though. If your installation process is workable, anyone on your team can successfully implement a new customer. On the other hand, if there is a mistake in your process, you’ve replicated it with every new customer. That said, even a flawed installation procedure has been applied consistently, so you can go back and rectify it accordingly. This is far preferable to a random error that requires an audit of all customers. A defined installation process loosely addresses release management within the ITIL framework.

Monitoring

An effective monitoring plan will let you know when your service has stalled, and it should include billing and reporting as well. Monitoring data is used for capacity management, which fits within the ITIL framework, as does financial management, part of which comes down to how billing should be done. Billing can be a flat fee or usage-based. If usage-based, a feed of data is needed to determine each customer’s usage, which might be gleaned from monitoring systems. For example, many AWS services, such as Route 53, Elastic Load Balancing and S3, charge based on usage, as sketched below.
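
Here is a minimal sketch of how a billing job might roll a monitoring feed up into per-customer invoices; the record schema and rate are hypothetical:

```python
# Derive a usage-based bill from a monitoring feed. The record schema
# and the per-GB rate are hypothetical.
from collections import defaultdict

RATE_PER_GB = 0.02  # hypothetical price per GB transferred

usage_feed = [  # e.g., exported from your monitoring system
    {"customer": "acme", "gb_transferred": 120.5},
    {"customer": "acme", "gb_transferred": 80.0},
    {"customer": "globex", "gb_transferred": 42.3},
]

invoices: dict[str, float] = defaultdict(float)
for record in usage_feed:
    invoices[record["customer"]] += record["gb_transferred"] * RATE_PER_GB

for customer, amount in sorted(invoices.items()):
    print(f"{customer}: ${amount:.2f}")
# acme: $4.01
# globex: $0.85
```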

Troubleshooting

If you are monitoring the uptime of a service, then you need to prepare your team for what happens when any given alarm is tripped. For example, if a server happens to halt, I want to know about it, and I want my team to follow a troubleshooting process to determine the failure and be prepared to announce the impact to affected customers. Again, in the ITIL framework, this has tentacles into incident management and problem management processes.
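
A minimal sketch of what that looks like in practice: a tripped alarm is routed to its written runbook and triggers notifications for affected customers. The runbook paths and lookup tables are hypothetical:

```python
# Route a tripped alarm to its written troubleshooting procedure and
# notify affected customers. Runbook paths and lookup tables are
# hypothetical placeholders.
RUNBOOKS = {
    "server_halt": "runbooks/server-halt.md",
    "disk_full": "runbooks/disk-full.md",
}
AFFECTED = {"server_halt": ["acme", "globex"]}  # alarm type -> customers

def handle_alarm(alarm_type: str, host: str) -> None:
    runbook = RUNBOOKS.get(alarm_type, "runbooks/triage-unknown.md")
    print(f"[{host}] follow {runbook}")
    for customer in AFFECTED.get(alarm_type, []):
        print(f"notify {customer}: investigating impact on {host}")

handle_alarm("server_halt", "db01")
```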

Recovery

If I am troubleshooting something, there is a strong chance that a component has failed and I need to replace it. Put another way, an upgrade can be thought of as a planned recovery effort, so this process has a dual purpose. For example, say I have a failed firewall and a spare on the shelf: The process for installing the replacement might be identical to the one for installing a higher-capacity model.

Changes

Once you build the environment, part of managing it is adapting to minor changes. Changes for users need to accommodate personnel events like new hires, departures and name changes. There are also minor configuration changes to be concerned with, such as firewall rule or backup policy updates. What you want to avoid is having to flush and recreate a user or a component configuration; it’s far easier to adopt processes that allow for easy changes. Linking back to ITIL, this addresses a big portion of change management.

Decommission

Customers will eventually stop using one of your services. Many firms lack a process to disconnect and decommission a former customer.

I once worked at a startup firm that was acquired by a much larger one, and in a bout of reverse acquisition, we ended up inheriting the larger line of business—the one we had previously competed against. In the interest of standardization and efficiency, we decided to consolidate the larger firm’s monitoring configurations into our own monitoring tools.

Unfortunately, the larger firm never had a decommissioning process and would simply suppress alarms, which put them to sleep but didn’t eliminate them. During the ingestion process, the monitoring configurations were imported and the suppressed alarms became active. All those monitors tried to poll for old customer equipment, some of which had been gone for years.

We found ourselves going from a manageable number of alerts to several hundred thousand alerts each month. A proper decommissioning process—rather than a workaround—could have saved us a lot of trouble.
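
The root cause is worth spelling out: a suppressed monitor still exists, so any migration that drops the suppression flag resurrects it, while a decommission process deletes the definition outright. A simplified sketch with a hypothetical monitor model:

```python
# Why suppression is not decommissioning: a suppressed monitor still
# exists, and a migration that resets its flags brings it back to
# life. The Monitor model here is a hypothetical simplification.
from dataclasses import dataclass

@dataclass
class Monitor:
    target: str
    suppressed: bool = False

monitors = [Monitor("old-customer-fw01", suppressed=True),
            Monitor("active-customer-db01")]

# Import into a new tool: only the target survives; flags are dropped.
imported = [Monitor(m.target) for m in monitors]
print(sum(1 for m in imported if not m.suppressed), "active monitors")  # 2

# A decommission process deletes the stale monitor outright instead:
monitors = [m for m in monitors if m.target != "old-customer-fw01"]
```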

It’s not uncommon to have resources still reserved for long-departed customers, and it takes audits to uncover them. Instead of going through audit cycles, it makes more sense to have a process that notifies operations to decommission a customer as soon as they depart.

One Process Doesn’t Fit All

Checklists are great things. I like to go backcountry hiking in the Rocky Mountains, which entails loading up a backpack for a multiday adventure in the wilderness. I usually plan out all the things that I need before I even leave home and have built a standard checklist of things to pack. Some things on my backpack checklist are very specific, such as a water filter, a stove, my sleeping bag. Those items never change.

Then there are items on my list that allow for some flexibility, depending on the trip: for example, 2,000 calories of food per day. My checklist doesn’t have exactly what food to take, just that I need that much. Similarly, this framework doesn’t say exactly what the process will look like for you, just that you need a process for it.

If you are building anything as a service, you will eventually get to these minimum processes. Instead of falling into processes by happenstance or trial and error, it is far better to determine a checklist of minimum processes before releasing a service. Having these defined when you release your service will put you in a far better and more mature position to support the entire customer life cycle.
