
Aug 28, 2014

The agile side of colocation

Ansley Kilgore

For companies running distributed applications at scale, colocation remains an essential piece of a high-performance infrastructure. While traditional colocation is often viewed as simply a physical location with power, cooling and networking functionality, today's colocation services offer increased flexibility and control over your environment.


Let’s take a look at some real-world examples of companies that are using colocation as a core element of their infrastructure to run a distributed app at scale.

Outbrain
Outbrain is the leading content discovery platform on the web, helping companies grow their audience and increase reader engagement through an online content recommendations engine. The company's data centers are designed to be DR-ready and operate in active-active mode, so everything is always available when it's needed.

Outbrain’s continuous deployment process involves pushing around 100 changes per day to their production environment, including code and configuration changes. This agile, controlled process demonstrates how a traditional solution like colocation can be flexible enough to support a truly distributed application at scale.

Watch how Outbrain drives content discovery and engagement.

eXelate
eXelate is the smart data and technology company that powers smarter digital marketing decisions worldwide. As a real-time data provider, they need to operate as a distributed application to handle large amounts of consumer-generated traffic and transactions on their networks around the world. Their infrastructure has to support dynamic content and data in order to provide meaningful insights for consumers and marketers.

eXelate's colocation environment includes specialized hardware that goes beyond standard commodity servers. The ability to incorporate Fusion-io and data warehousing appliances like Netezza, as well as make CPU changes and RAM upgrades, helps eXelate support the high number of optimizations required by their application. The company also uses bare-metal cloud to spin up additional instances through the API as needed. This combination of colocation and cloud creates a best-fit infrastructure for eXelate's data-intensive application.
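Programmatic provisioning of that kind is typically just an authenticated REST call. The sketch below is purely illustrative; the endpoint, parameters and payload are hypothetical placeholders, not Internap's actual bare-metal API:

```python
import requests

# Hypothetical example only: the endpoint, parameters and auth shown here are
# placeholders, not Internap's actual bare-metal API.
API = "https://api.example-baremetal-provider.com/v1"

def provision_server(token, location, config):
    """Request one additional bare-metal instance in the given location."""
    resp = requests.post(
        f"{API}/servers",
        headers={"Authorization": f"Bearer {token}"},
        json={"location": location, "configuration": config},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["server_id"]

# e.g. provision_server(token, location="AMS", config="dual-xeon-128gb")
```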

Watch how eXelate powers smarter digital marketing decisions.

Whether your organization runs a continuous deployment process or needs to process real-time data, colocation provides the flexibility to create a best-fit infrastructure. State-of-the-art colocation facilities support a hybrid approach, allowing you to combine colocation and cloud in the manner that best meets the requirements of distributed apps at scale.

Get the white paper: Next-Generation Colocation Drives Operational Efficiencies

Aug 18, 2014

Growing pains of the Internet global routing table

INAP

Recently, some businesses experienced outages as a result of older routers hitting the default 512k routing table limit. Here at Internap, we have long been aware of "the TCAM problem" and have taken steps to prepare for it, but many companies are now getting caught off guard. As the global routing table continues to grow, there will likely be an increase in routing instability over the coming months and years, and smaller enterprises could learn some very painful lessons.

If a company is humming along with a BGP routing table of 500,000 routes from its Internet provider and a Tier 1 provider suddenly adds 15,000 routes to the table, the company is pushed over the 512,000-route limit and everything goes sideways. I expect to see a lot of that happening as we hit the 512,000 threshold; today we are at about 500,000 routes in the global table, which grows by about 1,000 a week on average. (The larger Tier 1 providers such as Verizon, AT&T and Level 3 largely know about and have planned for this issue. I would be surprised if they experience any impact.)
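As a rough back-of-the-envelope sketch using only the figures above (and ignoring bursts such as a provider suddenly announcing thousands of new routes), the remaining runway works out to roughly a few months:

```python
# Back-of-the-envelope estimate of when the global table crosses the
# 512k default limit, using the figures quoted above.
current_routes = 500_000      # approximate size of the global table today
limit = 512_000               # common default TCAM allocation for IPv4
growth_per_week = 1_000       # average weekly growth

weeks_remaining = (limit - current_routes) / growth_per_week
print(f"~{weeks_remaining:.0f} weeks until the 512k limit")   # ~12 weeks
```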

The Cisco 6500 and 7600 router platforms are among the most widely deployed pieces of network hardware on the Internet. (At Internap, we are actively replacing them with more scalable platforms and expect to finish up in the next few quarters.) Older hardware platforms have the same limited-memory problem, but for the most part they were EOL'ed years ago by vendors such as Cisco, Juniper and Brocade, so anyone still using them for full BGP tables is living dangerously against their vendor's recommendation. However, the 6500/7600 have not been EOL'ed and continue to be a core part of Cisco's revenue stream, so this is a very real problem for a lot of companies.

Internap lands all of our upstream NSPs on newer-generation Cisco ASR1000 and Cisco ASR9000 platforms, which are built to scale to the much larger routing tables of the future, so we are not too worried. One of the reasons that we purchased these next-gen routers to land our upstream NSPs is our Managed Internet Route Optimizer™ (MIRO) technology. MIRO requires a full routing table from each NSP on the router, which uses up TCAM very quickly. Most enterprise/SMB companies out there are not landing multiple providers on a single router like we are doing; most of our markets have 10-12 providers spread across 3-4 cores, so we were forced to confront TCAM limitations some time ago.

On the 6500/7600 platforms, the previous-generation supervisor module (the "SUP2," which was EOL'ed a few years ago) can only hold 512k routes total, so as that tipping point is reached, lots of companies are going to need emergency hardware upgrades, or they will have to take less than a full BGP table from their provider. Taking less than a full table from the upstream provider limits how granularly a company can control its routing and how much insight it has into what's going on with the full Internet table, which is definitely a step backwards. Most companies will choose to upgrade the hardware instead, in my opinion.

The current-generation 6500/7600 supervisor modules (the "SUP720" on the 6500 and the "RSP720" on the 7600) that are widely deployed on millions of production chassis can hold 1,024,000 routes total. The default memory allocation on those modules is 512k IPv4 routes and 256k IPv6 routes (an IPv6 route takes up twice as much memory as an IPv4 route). While the supervisor modules can hold more than 512k IPv4 routes, a lot of companies are going to learn the hard way that they have not manually re-allocated the memory to accommodate the ever-growing routing table. You have to make a config change and reload the router entirely, which is painful to roll out across a global footprint, and you might not even know you need to do it.

At Internap, we have retuned our remaining 6500s for 800k IPv4 and 100k IPv6 routes, which should last us for the next 2-3 years while we phase out our Cisco 6500s and 7600s. We did this specifically to address routing table growth. We are also auditing all of our MCPEs (Managed Customer Premises Equipment), since those are much smaller hardware platforms with less memory available, to make sure there are no issues.
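A quick sketch of the arithmetic behind those allocations, using the rule stated above that an IPv6 route consumes two entries:

```python
# The SUP720/RSP720 hold 1,024,000 entries total; an IPv6 route uses two entries.
TOTAL_ENTRIES = 1_024_000

def entries_used(ipv4_routes, ipv6_routes):
    return ipv4_routes + 2 * ipv6_routes

# Cisco's default allocation: 512k IPv4 + 256k IPv6 -- exactly fills the table.
assert entries_used(512_000, 256_000) == TOTAL_ENTRIES

# Internap's retuned allocation: 800k IPv4 + 100k IPv6 -- 1,000,000 entries,
# leaving room for several more years of IPv4 table growth.
assert entries_used(800_000, 100_000) <= TOTAL_ENTRIES
```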

Over the next few years, millions of chassis will hit their physical limits. But you can’t just upgrade the supervisor module to the latest and greatest to get a few more years of runway — the entire chassis has to be replaced. The next-gen supervisor module for Cisco’s 6500/7600 platform that started shipping last year (the “SUP2T”) has the exact same limit of 1,024,000 routing table entries, which means if you are using the 6500/7600 platform, you have to replace the whole chassis with a next-gen model like the Cisco ASR9000, Juniper MX, Brocade MLX, etc.

The only other option is to take a partial or default-only BGP feed. This graph of BGP table growth should be very scary for anyone running hardware with a 1M route limitation.

Compounding this problem, the American Registry for Internet Numbers (ARIN), the regional authority that hands out IP addresses in North America, continues to run out of IPv4 space. (Update: The IPv4 pool was officially depleted as of September 24, 2015.)

Right now, the "BGP boundary" for a route in the global routing table is /24, meaning the global table only carries prefixes of /24 or shorter (blocks of /24 or larger), specifically to keep the size of the routing table down to accommodate hardware limitations. The purpose of this boundary has been to control de-aggregation of the routing table, because the vast majority of hardware deployed today can't really support the routing table blowing up any larger than it already is. However, ARIN, trying to squeeze as much lifespan out of its remaining IPv4 allocations as possible, has started giving out smaller and smaller blocks and asking providers to route smaller allocations. Just last week, ARIN conducted a test in which it tried to route /27s in the global routing table to see which providers might or might not be able to route blocks smaller than the /24 boundary. That is a further indication that ARIN and the network operator community want to continue to de-aggregate the remaining address pool and stave off IPv4 exhaustion for as long as they can, but this will be incredibly problematic for everyone because it balloons the routing table and brings hardware limitations to the forefront.

By squeezing as much life as possible out of the remaining IPv4 pool, network operators can delay the migration to IPv6. Routing IPv6 packets is well supported in most hardware these days, but we find customers struggling with all of the ancillary things that have to happen: retraining their NOC, rebuilding their management, monitoring and troubleshooting tools to speak both IPv4 and IPv6, developing IPv6 operational experience and so forth. Routing IPv6 packets is the easy part; all the other work that goes along with supporting IPv6 can scare off less-experienced customers. At that point, "just let the routing table get a little bigger" seems like an easy fix to avoid making a wholesale migration to IPv6, which might require new hardware, new tools and some operational struggles. Network operators will always put off large-scale technology leaps in favor of having more time to fight today's fires, but that will not last forever.

So, back to ARIN. Breaking a /24 in half gets you two /25s, or four /26s, or eight /27s… imagine if a large share of companies took all their /24s and started de-aggregating down to the /27 level, causing an 8x increase in their portions of the routing table. That would be a nightmare for almost everyone, and a financial windfall for hardware manufacturers.
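A quick illustration with Python's ipaddress module, using the documentation prefix 192.0.2.0/24 as a stand-in, shows the multiplier:

```python
import ipaddress

# De-aggregating a single /24 down to /27s turns one route into eight.
block = ipaddress.ip_network("192.0.2.0/24")
more_specifics = list(block.subnets(new_prefix=27))

print(len(more_specifics))   # 8
for net in more_specifics:
    print(net)               # 192.0.2.0/27, 192.0.2.32/27, ..., 192.0.2.224/27
```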

One of the most basic lessons from The Art of War is not to fight a war on two fronts simultaneously, but that is exactly what's happening. On one hand, companies don't want the headache of fully migrating to IPv6, so they're encouraging ARIN to de-aggregate the routing table and squeeze as much IPv4 out of the remaining allocations as possible, which is inflating the routing table. On the other hand, massive wholesale hardware upgrades will be upon us in the near future, and companies must be ready to fight that battle when the time comes.

Aug 13, 2014

Behind the scenes: How Content Delivery Networks leverage optimization technologies

INAP

Establishing a reliable web presence is one of the best ways to maintain competitive advantage for your business. Large websites often rely on Content Delivery Networks (CDNs) as an effective way to scale to a larger and more geographically distributed audience. A CDN acts as a network of caching proxy servers, which transparently cache and deliver static content to end users.

There are many CDN providers to choose from, but what makes Internap unique is the combination of optimization technologies that we employ. Let’s take a behind-the-scenes look at how these technologies complement one another and work transparently to improve the user experience and accelerate performance.

You may be familiar with Managed Internet Route Optimizer (MIRO), Internap's route optimization technology that forms the basis of our Performance IP product. MIRO constantly watches traffic flows and performs active topology discovery and probing to determine the best possible route between networks. After our recent revamp of MIRO, some of our busiest markets are exceeding half a million route optimizations per hour, resulting in significantly lower latency and more consistent performance.

Our CDN also employs a proprietary TCP congestion avoidance algorithm, which evaluates and dynamically adjusts to network conditions. It ensures that short data transfers, such as HTML, JavaScript libraries, style sheets and images, occur as quickly as possible, while larger file downloads maintain consistent throughput.

Finally, the CDN's geographic DNS routing system sends requests to the nearest available CDN POP based on service provisioning, geographic proximity, network and server load, and available capacity.
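As a purely illustrative sketch (not Internap's actual selection logic), a geographic DNS router weighing those factors might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Pop:
    name: str
    distance_km: float   # great-circle distance to the resolving client
    load: float          # 0.0 (idle) .. 1.0 (saturated)
    provisioned: bool    # is the customer's service enabled in this POP?

def choose_pop(pops, max_load=0.9):
    """Pick the nearest provisioned POP that still has capacity."""
    candidates = [p for p in pops if p.provisioned and p.load < max_load]
    return min(candidates, key=lambda p: p.distance_km, default=None)

pops = [
    Pop("LHR", distance_km=80,   load=0.95, provisioned=True),   # close but saturated
    Pop("AMS", distance_km=500,  load=0.40, provisioned=True),
    Pop("JFK", distance_km=5500, load=0.10, provisioned=True),
]
print(choose_pop(pops).name)   # AMS -- nearest POP with headroom
```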

All CDN transactions begin with DNS
When a client issues a DNS request to the CDN, the request is routed using a methodology called anycast. Internap has a large deployment of CDN DNS servers around the globe, and with anycast, we use BGP to announce the same prefix for our DNS servers in each of these locations. The client's request gets routed to the nearest DNS server based on BGP hop count.

When a DNS request is received, MIRO observes this DNS activity and immediately begins probing and optimizing to find the best possible provider for that DNS traffic. The CDN DNS system evaluates the request and responds with the address of the nearest available CDN POP. The client then establishes a connection and sends a request to an edge cache server in the selected POP. Once again, MIRO observes the traffic and immediately begins probing and optimizing to find the best possible provider for it.

If the requested content is in the cache, the cache server begins sending it, and TCP acceleration takes over to optimize the connection, ensuring CDN content is delivered as quickly and smoothly as network conditions allow. If the requested content is not in the cache, the cycle repeats itself, this time between the CDN edge server and the content's origin.
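In simplified form (illustrative only, not the CDN's actual implementation), the edge server's hit-or-miss decision looks like this:

```python
cache = {}  # url -> cached content

def fetch_from_origin(url):
    # Placeholder for the edge-to-origin request described above; in practice
    # this hop is itself route-optimized and TCP-accelerated.
    return f"<content of {url}>"

def serve(url):
    if url in cache:                  # cache hit: deliver immediately
        return cache[url]
    content = fetch_from_origin(url)  # cache miss: repeat the cycle toward the origin
    cache[url] = content              # store for subsequent requests
    return content
```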

The best experience is achieved with a globally distributed origin that employs geographic DNS routing, such as Internap’s distributed CDN storage solution. With CDN storage, content can be replicated in up to 5 different locations across the globe, and the CDN DNS system routes the request to the nearest available location. MIRO also optimizes the experience between the edge and origin servers, both for the DNS request and the content retrieval. TCP acceleration ensures that the transfer happens with the lowest latency and highest throughput possible.

With the recent revamp of Internap's HTTP content delivery platform, we're maintaining our commitment to performance. We have upgraded our cache servers to use SSDs instead of hard disks and added new performance-oriented features, such as the SPDY protocol. All of these capabilities further enhance the user experience and accelerate performance.

Aug 6, 2014

Bare metal goes global in London, Hong Kong

Ansley Kilgore

Bare metal remains the go-to choice for companies running data-intensive applications, such as big data and analytics solutions, online gaming and mobile advertising platforms. As businesses scale their operations worldwide, the need for global availability of these applications has increased. To meet this demand, Internap's bare metal offering is now available in our London and Hong Kong data centers.

Big data and latency-sensitive applications run better on bare metal than in traditional virtual cloud environments, as demonstrated by our customers and the latest Cloud Spectator benchmarking report. In addition to significant cost savings, the findings show the benefits of running big data workloads on high-performance NoSQL databases such as Aerospike atop Internap's bare-metal servers. As powerful, transaction-intensive applications become an integral part of business strategy, a high-performance infrastructure that includes bare metal can enable optimal performance.

Here are some recent articles highlighting the expanded availability of bare metal in Europe and Asia, as well as the benefits of incorporating bare metal into your hosting environment.

EnterpriseTech: Big Data Prefers Bare Metal

Recent benchmark tests by Cloud Spectator compare bare metal servers with similar high-performance virtual public cloud configurations. The findings reveal lower latency, higher throughput and more cost-efficiency, particularly for “fast big data” workloads running Aerospike’s in-memory NoSQL database on bare metal servers. The test results speak for themselves – check out the full report here.

Cloud of Data (blog): Internap joins Jungle Book song chorus

Many companies claim to offer bare metal solutions, but few are considered to be true contenders in this sector. However, according to Paul Miller’s article, bare metal may be a “bare necessity” for today’s cloud providers. Bare metal reminds us that “the ‘traditional’ virtual machine isn’t the only way to run a data centre”.

Data Center Knowledge: Internap Expands Bare-Metal Cloud Servers to London, Hong Kong Data Centers

With bare metal now available in London and Hong Kong, Internap has addressed the increased demand for globally distributed data-intensive applications. Global locations also include Amsterdam, Singapore, Dallas, New York and Santa Clara. The bare metal offering is backed by Internap’s patented Managed Internet Route Optimizer (MIRO) technology, which continuously monitors Internet performance and routes traffic along the best available path.

Download the Cloud Spectator Benchmarking Report to learn more about the benefits of running big data workloads on bare-metal servers.
