
Oct 30, 2013

Internap is buying iWeb!


It’s been a busy few months, punctuated by secret off-site meetings, i-banker code words, and lots of short-hop flights between NYC and Montreal.

The news is finally out. As of a few hours ago — Internap is buying iWeb!

I’m super excited about the announcement and the implications for the combined company, although it’s really weird being on the other side of the table this time around! (Just about two years ago, Internap acquired Voxel, the company I founded back in 1999.)

We have been undergoing a transformation at Internap over the last few years. Acquiring Voxel was part of the foundation for that story. Today’s iWeb acquisition accelerates things significantly, and positions us to round the corner as a different animal altogether – a global IT infrastructure services company.

We are shedding our telco roots.

iWeb has built a truly global (50% of revenue outside North America) company, with a hyper-localized footprint in Montreal, Canada. They have a very sophisticated online marketing machine, and most of their demand is generated from their website. Internap has a truly global footprint, but most of our revenue comes from the domestic market. We have a big enterprise sales force that targets each local market; almost nothing is sold off our website.

See the obvious fit here?

iWeb recently chose to build out their back office data center automation platform with Ubersmith – Internap’s purpose-built platform. During my time working on iWeb’s Ubersmith implementation, it became clear to me we were dealing with an organization that really ‘gets hosting and cloud’. There is a similar story of commonality with OpenStack – something key to the future strategy of both companies. iWeb is punching above their weight already (#24 contributor in the community), and we are both hard at work deploying OpenStack under the hoods of our respective cloud offerings.

By joining forces, we should be able to increase velocity in these two key areas and, hopefully, work towards a single consolidated infrastructure platform.

Oh, and iWeb guys and gals: Don’t worry too much about these Internap people. They aren’t that bad; they just like wearing suits. The passion and culture of iWeb (as originally instilled by founders Eric and Martin and as carried on by folks like Christian and Cyrille) is one of your biggest assets. We absolutely recognize that. Not messing that up is really important to us, and to me personally.

Things are certainly going to get interesting!

Explore HorizonIQ
Bare Metal


About Author


Read More
Oct 25, 2013

Getting existential on your IaaS


It’s hard to believe that there are such widespread misconceptions about what constitutes a “cloud” in the IaaS space.

This is evidenced by cloud blogger Ben Kepes’ article yesterday, “Server Huggers and Henry T Ford’s Faster Horse,” which took pretty aggressive aim at bare-metal clouds, and more specifically, our Cloud Spectator report that showed the price-performance benefits of bare-metal vs. virtual cloud. The reality is that IaaS is evolving, and it isn’t always tied to virtualization, although it seems difficult for some to leave this notion behind.

I’m actually surprised that Ben suffers from this misconception since I’ve enjoyed many of his previous articles on OpenStack and open source, but in this case, I think he’s just plain wrong. As many still do, Ben equates cloud with virtualization and goes on to cite two important cloud influencers whom we both respect – Dave Nielsen and Joe Weinman – on the attributes that make up “cloud”. Here they are (combined and consolidated):

  • On-demand
  • Self-service
  • Scalable
  • Measurable
  • Common
  • Location-independent
  • Online
  • Utility

I totally agree with these characteristics, and I’ll even add another requirement of my own – programmable. It’s not enough to simply provide a self-service portal to create a true infrastructure “cloud”; functionality has to be completely exposed and programmable over an API.

I also absolutely agree with Ben that cloud is more about changing business models than it is about changing technologies. Indeed, in the infrastructure world, how IT is consumed is undergoing a fundamental shift. With “cloud”, you don’t have to spend the capex or plan for the worst case. Instead, you can pay for what you need, when you need it. And, it’s all commoditizing rapidly.

So, back to the virtualization topic. I’ll agree that virtualization is an enabler of a lot of clouds, but it’s definitely not a requirement. It’s interesting that “virtualized” isn’t on either Nielsen’s or Weinman’s list. Nor is it in the NIST definition.

What if we could get all of the things on our mutually-agreed-upon list for “cloud” but without virtualization? We can.

Our bare-metal cloud is just commodity compute. Each server has CPU, RAM, and disk(s). It’s provisioned over an API (or a portal). You can install pretty much any flavor of Linux and Windows. It’s available to you within a few minutes (generally about 5-10). It’s very elastic and is billed on an hourly basis – you can spin them up and throw them away when you’re done.
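The lifecycle described above – provision over an API, use by the hour, throw away when done – can be sketched as a toy client. The `BareMetalAPI` class and its methods below are hypothetical, written only to illustrate the pattern; they are not Internap’s actual API.

```python
# Toy in-memory sketch of an API-driven bare-metal lifecycle.
# BareMetalAPI and its methods are hypothetical, for illustration only.
from dataclasses import dataclass
import itertools

_ids = itertools.count(1)

@dataclass
class Server:
    id: int
    os: str
    status: str = "provisioning"

class BareMetalAPI:
    """Stand-in for a provisioning API; real calls would be HTTP."""
    def __init__(self):
        self.servers = {}

    def create(self, os="ubuntu-12.04"):
        s = Server(id=next(_ids), os=os)
        self.servers[s.id] = s
        s.status = "active"          # real hardware takes ~5-10 minutes
        return s

    def destroy(self, server_id):
        # Returning the node ends the hourly billing clock.
        return self.servers.pop(server_id)

api = BareMetalAPI()
db = api.create(os="centos-6")       # a dedicated node: no hypervisor layer
print(db.status)                     # -> active
api.destroy(db.id)                   # spin it up, throw it away when done
```

The point of the sketch is the shape of the interaction: everything a portal can do is equally reachable from code, which is what makes the infrastructure programmable.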

Only, with our bare-metal cloud, there is no hypervisor. The server nodes aren’t sliced and diced by the likes of KVM and Xen. Instead, each node is dedicated to you. All of the disk’s blinking lights are yours. There’s no noisy neighbor problem or co-tenancy issue and no virtualization overhead.

I think it’s also worth pointing out the confusion and overlapping usage around “bare metal” terminology in the industry. A true bare-metal cloud delivers on the agility (and the shift in business models) that the cloud brings. Many providers are cloud-washing dedicated hosting, which is often sold on a term contract and offers zero automation or elasticity. That’s not what is happening here.

We think bare-metal cloud has a lot of interesting implications, and we’re not alone. Before Voxel was bought by Internap, it was one of the first companies to make dedicated, bare-metal servers available over an API, back in 2009. Soon after, many of our competitors (including SoftLayer, recently bought by IBM) did the same thing.

As I mentioned above, we also understand the value of and offer a fully-virtualized cloud. There’s no doubt that the hypervisor gives you interesting options, and it’s a big enabler. We leave the choice with our customers as to whether they want their instances to be virtualized or not. They can make that decision on a per instance basis, depending on the best match for their specific application and workload. And, they can use the same API, IP addresses and images regardless of whether the server is virtualized or not. A lot of them choose to deploy a beefy bare-metal server (loaded to the gills with SSDs) for high-end databases. Do they suddenly turn into “server huggers” if they do?

But don’t just take my word for it. OpenStack has a top-level project that’s focused on bare-metal cloud. The reason? Because for some workloads, it makes more sense. A lot more sense.

Admittedly, it’s early days in OpenStack land and the project is just getting rolling. Even in the latest stable release of OpenStack (Havana), they use the term “hard hat required” with regard to bare-metal functionality. Nonetheless, we are actively working on launching the next version of our bare-metal cloud platform that will be largely OpenStack under the hood. We think it’s one of the most interesting and fast-moving projects in the infrastructure cloud space. We’re adding a lot of our own value on top of OpenStack, including expertise and technology around bare-metal cloud garnered over the last 4 years.

How am I going to break the bad news to our engineering team and all of the bare-metal OpenStack developers in the industry that we are all “server huggers”? Creating so-called “cloud” server instances that aren’t virtualized? Blasphemy!

At the end of the day, I’ll put an Internap bare-metal cloud server up against the equivalent price-point Amazon instance any time.

It’s not a religious argument – it’s all about the right tool for the job. This is the reason for our broad approach to hybrid IT infrastructure.

Ben’s article was well-written and provocative, I’ll give him that. Maybe he should consider trying his hand at fiction instead.

Oct 22, 2013

Customer spotlight: Treato uses big data to reveal health insights

Ansley Kilgore

In today’s Internet-driven society, many of us choose to “ask Google” for information on countless topics – including questions about our health and prescription drugs. A staggering amount of data exists in online health communities where patients compare notes about their conditions, medications and treatments. While the Internet is no substitute for consulting with a medical professional, a startup company called Treato is using big data analytics to bridge the gap between patients, pharmaceutical companies and healthcare providers.

Treato, a big data startup based in Israel, aims to provide meaningful insights from the plethora of information in online health forums. By extracting, aggregating and analyzing data from blogs and other qualified health websites, Treato “creates the big picture of what people say about their medications and conditions.” The resulting analytics are available for free to consumers and as a brand intelligence service for pharmaceutical marketers.

Treato is experiencing rapid growth, and traffic on their website has already surpassed 100,000 visits per day. In addition to acquiring new site users, the amount of data from source content has expanded. To handle increased site traffic and content, Treato recently added a new Internap data center near Dallas, Texas.

Data centers are a critical aspect of Treato’s strategy to expand its world-class SaaS infrastructure. In addition to supporting the Treato site and the Treato Pharma applications, the data centers process and store the content collected from online health blogs and forum posts. Advanced Natural Language Processing (NLP) analysis is used to extract relevant information from this data, ultimately providing insights that can influence future patient experiences.

High availability – Expanding its data center footprint allows Treato to reduce the risk of downtime for the site and backend processing capabilities. With a growing number of users relying on their analytics, resilient infrastructure helps ensure a positive online experience.

Increased capacity – Thanks to the expanded data center space, Treato has increased its capacity by 150%. With its new 200-terabyte Hadoop cluster, Treato can process 150% more patient conversations per day.

Scalability – As a result of this increased capacity, Treato can expand its sources of content even further, and expects to invest significantly in this area moving forward. More than a million patient conversations are added each day, perpetually expanding the knowledge base available to Treato visitors.

Using big data analytics to glean consumer insights is a rapidly growing business strategy that is still evolving today. By successfully applying this concept to real-time healthcare data, Treato is opening new doors for the advancement of healthcare. The use of data centers for expanding storage and processing capacity will be an essential factor in achieving analytics goals. While the Internet still can’t diagnose your ailment, it can work together with big data to create better, healthier lives.

Oct 15, 2013

Making the Business Case for Infrastructure – Part I: Rent or Buy? (Capex vs Opex)


There’s a rent or buy dimension to infrastructure decisions, and the direction you go has significant long term implications for a company, so it’s important to know the difference between the two.

You can think of it like a car. Should you buy or lease? There are reasons why you’d do either. It’s a similar question when you’re looking at infrastructure.

Do you want to buy, and maintain all this yourself, or do you want to rent?

RENT OR BUY? (Capex vs Opex)

Note for readers – this short introductory article is oriented towards a typical IT professional, and it includes the language of finance, which is useful to get familiar with, in case you find yourself in the position of needing to discuss long-term, significant infrastructure investments.

BUY (CAPEX)

This is the traditional model from an IT standpoint, also known as “capital expenditure”, which is a fancy way of saying “cash”. When you need a solution, you go out and buy everything. For example, if a CFO wants a new accounting system, the IT team says: “we need a database server and an application server, and each one will cost x amount”.

In the CAPEX scenario:
– the company buys equipment
– you write a check upfront
– the equipment goes into a company’s own data center or a colocated space
– cost of ownership: upfront cost

For financial professionals, here’s a description of CAPEX:

“In financial terms, when you are buying the equipment, you’re spending all the cash upfront; from a financial perspective, it’s all happening on your balance sheet. Cash goes down, assets go up. Ultimately, the way that is expensed is through depreciation on your P&L statement. So if the equipment has 3 or 4 years of life, you’d depreciate that $4,400 by spreading it over 36 or 48 months, and that’s the cost on your P&L.

But when a business is being valued, you don’t look at depreciation; you use a metric called EBITDA (Earnings Before Interest, Taxes, Depreciation and Amortization). So under the CAPEX model, it’s more of a cash flow analysis than a P&L analysis. And there are reasons why businesses may want to consider placing more prominence on EBITDA; if so, OPEX can make a difference.”

RENT (OPEX)

This is the rental option for infrastructure. OpEx = “operating expense”; the cost occurs over time rather than upfront.

For example, instead of paying $4,400 cash down for a couple of servers, you sign up for a monthly charge. In simple terms, from a cash flow perspective, you’re no longer putting out the $4,400 upfront.

CFO Chris Locke on OPEX:

“That monthly charge is actually going to be on your P&L. It will be a ‘Server Rental Expense’, an infrastructure expense; it will actually be in your EBITDA, and it will lower the EBITDA.”

A general “pro” of going with rental is that it improves cash flow; a potential “con” is that it lowers reported earnings (EBITDA).
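To make the trade-off concrete, here’s a back-of-the-envelope sketch using the article’s $4,400 example. The 36-month depreciation schedule and the $180 monthly rental rate are assumed figures for illustration only.

```python
# Back-of-the-envelope CapEx vs OpEx comparison.
# The 36-month life and $180/month rent are assumptions, not quoted rates.
capex_price = 4400            # upfront cash for the servers
life_months = 36              # 3-year depreciation schedule (assumed)
monthly_rent = 180            # hypothetical rental rate

capex_monthly_depreciation = capex_price / life_months   # below the EBITDA line
opex_monthly_expense = monthly_rent                      # hits EBITDA directly

print(round(capex_monthly_depreciation, 2))  # -> 122.22
# CapEx: cash goes out today; the P&L sees only depreciation each month,
# which EBITDA ignores. OpEx: no upfront cash, but every month's rent
# lowers EBITDA. Which looks better depends on what you're optimizing for.
```

The numbers themselves are trivial; the point is that the two models put the same economic cost in different places on the financial statements.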

So what are the other hidden benefits of rental that need to be considered?

In order to uncover them, you need to dig into the concept of “total cost of ownership”.


TOTAL COST OF OWNERSHIP

In our data center scenario, if I’m buying the equipment, I’m either paying for my own data center and the people required to maintain it, or I’m renting colocation space for the servers.

In either case, I’m taking on operational risk. A common scenario: the IT department or person buys the servers and puts them in a closet, without the infrastructure to provide full security, continuity or high availability. There is a real possibility of something catastrophic happening.

Operational risks include:

  • Legal exposure from data loss or data breach
  • Interruption of business
  • Loss of brand equity
  • Loss of customers
  • Lost revenue

Or you can offload the risk to an expert who has the right infrastructure.


IaaS (Infrastructure as a Service) providers have fueled the intense growth in cloud computing by helping hundreds of thousands of everyday businesses offload their infrastructure.


In the end, IT professionals and small to mid-sized businesses have circled back to the age-old principle of focusing on what you do best. The tools are there for offloading infrastructure, reducing risk and saving money, and IT professionals can harness them to help their businesses run more efficiently and with more focus.

Updated: January 2019

Oct 15, 2013

Stay cool: The data center design that prevents lost revenue, unhappy customers


Recently, my apartment building here in Atlanta had a water leak. To fix it, building maintenance sent a notification that they needed to shut off the water supply to the entire building for a few hours. What they failed to mention was that the air conditioning doesn’t work without the water supply. So that morning, I woke up sweating thanks to no AC and 90-degree temperatures. We were told the water and the AC would be back on in a few hours, but “a few hours” quickly turned into an entire day and night, and the problem still wasn’t fixed by the next morning. So there I was, with no water and no AC, sweating in the hot, humid Atlanta weather for more than 24 hours.

All types of equipment and systems require fixing occasionally, or need preventative maintenance so that they don’t break. I just wish that my building had a way to help me avoid that experience.

This type of frustration is similar to the pain that customers, employees and partners go through when business-critical services and applications go down. Even if your AC still works, you may break into a sweat when you start quantifying the pain in terms of dollars lost as a result of the outage. You can avoid that pain and lost revenue by choosing a data center for your IT equipment that is concurrently maintainable.

What is concurrent maintainability?
It is a design standard that keeps critical IT equipment running when one component fails or needs to be shut down for maintenance. How does it work? By creating two parallel systems that work independently, with no single points of failure and at least two distribution paths. If a critical component in one system goes down, the other system can take over, regardless of where the fault lies.

Concurrent maintainability means having at least two of every component in a system, connected in such a way that power or cooling to the IT equipment is never interrupted. This is much better than bypassing the failed/shut down components, which exposes the equipment to unconditioned power. Internap’s state-of-the-art New York Metro data center is a great example of a concurrently maintainable facility.

Components within a system rely on each other to function properly, which is why the AC in my building went down when the water supply was turned off. Concurrently maintainable design reduces the risk of this happening in your data center. Continuing with the analogy, it’s like having a parallel water supply, power feed and AC unit for your residence.
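A rough way to see why the second independent system pays off: an outage requires both systems to be down at once, so per-system unavailability gets multiplied, not added. The 99% per-system figure below is an assumed illustration, not a facility specification.

```python
# Availability of two independent, parallel systems.
# 0.99 per system is an assumed illustrative figure.
per_system = 0.99

# Both must fail simultaneously for the load to lose power/cooling,
# assuming the two systems fail independently.
combined = round(1 - (1 - per_system) ** 2, 6)

print(combined)  # -> 0.9999
```

Going from 99% to 99.99% cuts expected downtime by a factor of 100, which is the arithmetic behind paying for a second fully independent distribution path.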


Building another independent system may cost more, but it’s worth it when compared to the damaging effects of customer dissatisfaction and lost dollars on your business. Even though my apartment building wasn’t designed with concurrent maintainability, your data center should be.

Oct 8, 2013

Georgia Tech uses CDN to reach global student base

Ansley Kilgore

For universities with a large international student base, the ability to distribute timely class content is critical. As one of the nation’s top research universities, Georgia Tech ensures that students around the world receive the same top-quality education as those on campus. A Content Delivery Network (CDN) provides a seamless content distribution process without placing additional burden on the University’s finances or infrastructure.

The Georgia Tech Professional Education (GTPE) unit meets the demand for timely, high quality content with Internap’s CDN. The technology has delivered results across multiple areas, including decreased cost, streamlined processes and increased scalability.

The CDN solution
CDNs are purpose-built to distribute content, and offer multiple ways to create a high-quality online user experience through media distribution, large file download and website caching. Internap’s CDN includes multiple Points of Presence (POPs) across the globe, which means content can be cached in edge POPs that are located in closer proximity to the end users. This provides a high-quality, low-latency online experience for students regardless of their location, because they are accessing the content from a nearby edge POP. The global reach of Internap’s CDN allows students to participate in broadcasts via live streaming or download large files on demand.
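At its simplest, the edge-POP idea above reduces to “serve each user from the lowest-latency POP.” The POP names and latency numbers below are made up for illustration; real CDNs typically steer users via DNS or anycast rather than an explicit lookup like this.

```python
# Toy sketch of nearest-POP selection. Names and latencies are invented.
pop_latency_ms = {           # hypothetical round-trip times for one student
    "atlanta": 12,
    "amsterdam": 95,
    "singapore": 210,
}

def nearest_pop(latencies):
    """Pick the POP with the lowest measured latency for this user."""
    return min(latencies, key=latencies.get)

print(nearest_pop(pop_latency_ms))  # -> atlanta
```

A student in Singapore would get the inverse result, which is the whole point: the same cached file is served from whichever edge is closest to each user.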

Thanks to Internap’s CDN, students are able to quickly and reliably access live and on-demand content. Even with increasingly high demand for online learning, the CDN provides high performance, smooth streaming and an overall high quality experience for users.

Decreased cost – Prior to using a CDN, Georgia Tech was shipping CDs and DVDs of class content to students around the globe, costing approximately $250,000 per year. Additional expenses included reproduction of CDs and DVDs, along with the cost of blank media, packaging and labels. Without the need to ship course content to students, Georgia Tech is able to cut costs significantly.

Streamlined processes – Managing and distributing content in a timely manner becomes more difficult as the number of remote students increases. The ability to store media files on the CDN significantly reduces the administrative processes required to create, package and send CDs and DVDs to locations around the world.

Reduced infrastructure burden – The capacity of Georgia Tech’s campus-based servers quickly filled up with the large amount of content, creating a need for scalable storage options that are not dependent upon the limitations of University resources. The CDN provides scalable storage capacity that takes the burden off the University’s physical servers and infrastructure.

Using a CDN, Georgia Tech Professional Education has successfully reduced the cost of delivering content to students, decreased the administrative burden on internal processes and removed the load and bandwidth concerns from its own architecture. Most importantly, students around the world have timely access to course content and can tune in live or download past broadcasts via the Internet, regardless of their geographic location.

For more information on how CDN can benefit your organization, download the CDN Buyer’s Guide.

Oct 1, 2013

Data centers of the future, no flux capacitor required


18 kilowatts (kW) is a lot of power.

The average US residential utility customer draws just 1.3 – 1.8 kW of electricity. So 18 kW is about the same amount of power used by 10 to 14 average US homes.

18 kW is also the equivalent of about 24 horsepower. That’s enough power to run a very nice riding lawn mower. Or keep two average American cars with air conditioners running, cruising along at 60 mph.

An 18 kW tankless water heater can deliver 105-degree water from input water at 62 degrees at a rate of 2.5 gallons per minute. So a never-ending supply of comfortable hot water in your shower is just 18 kW away.

This is possible because 18 kW can be used to generate about 61,400 BTUs per hour of heat. A burner on your gas stove might offer up to 12,000 BTUs per hour. A wood stove that might be used to heat a small home is probably around 55,000 BTUs per hour.

And in case none of that hit home: 1 BTU is about 252 calories, so 61,400 BTUs per hour works out to roughly 15,500 food Calories – the energy in about 28 Big Macs, every single hour.

Got it now? 18 kW is a lot of power.

Aside from being able to share some “interesting” trivia, I thought 18 kW was notable because Internap now has customers using 18 kW per cabinet in our data centers. And not just one cabinet here or there, but a cage of 30 cabinets drawing up to 18 kW per cabinet. And it’s not some monstrous, football field-sized cage. It’s a tidy 576 square feet of space. That’s right, better than half a megawatt in less than 600 square feet. It’s not quite 1.21 gigawatts, but some say that at 10:04pm on Saturday nights, this space can actually travel through time.
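The conversions above are easy to check. This sketch redoes the arithmetic for one 18 kW cabinet and the 30-cabinet cage, using the standard factors of 3,412 BTU/hr per kW and 745.7 W per horsepower:

```python
# Verifying the article's power arithmetic.
kw_per_cabinet = 18
cabinets = 30
sq_ft = 576

total_kw = kw_per_cabinet * cabinets          # the whole cage
btu_per_hr = kw_per_cabinet * 1000 * 3.412    # heat output per cabinet
horsepower = kw_per_cabinet * 1000 / 745.7    # mechanical equivalent

print(total_kw, round(btu_per_hr), round(horsepower, 1))
# -> 540 61416 24.1
```

540 kW in 576 square feet is nearly a kilowatt per square foot, which is what makes this density unusual for colocation space.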

Of course, Internap customers know all about time travel. They’ve been to one of our facilities, where instead of confining your business to 2005, we have solutions for today’s users. Like the fastest, most consistent Performance IP™ service. Like a flexible AgileCLOUD and hosting solution. And now, ultra-high-density colocation services.

Does your organization want to get back to the future? Learn more about high data center power density in our Next-Generation Colocation white paper. Or, come see what 18 kW of power looks like in person by touring our New York metro data center.
