
Aug 29, 2013

How agile is your hosting API?


At Internap, we really believe in the power of APIs.

APIs give developers the ability to create things that we can’t even imagine. They use our API to interact with our infrastructure, interact with our staff and interact with their servers on a programmatic basis.

Our API is very complete – it’s not just about the infrastructure that you manage. You can pay your bill or file a support ticket through the API, which means you can integrate your application directly with our platform programmatically. Potentially, your server could decide that it’s overloaded and spin up more servers. It could notice a problem with one of your applications and open a support ticket for us to go and take a look.

Having a single API makes it easier to deploy the right kind of resources for any need, without having to integrate with multiple different vendors’ APIs. This means less time integrating and coding and more time focusing on the applications running on the platform.

Our API is really powerful – it’s open, it’s licensed under Creative Commons and we have many toolkits available in a variety of languages to help get you started.

Explore HorizonIQ
Bare Metal


About Author


Read More
Aug 28, 2013

A Quick Guide for Troubleshooting Network Latency—How to Improve Latency


As a colo, cloud and network provider, troubleshooting network latency is one of the most common requests we receive. In this post, I’ll walk you through what’s normal and what’s not, and what we look for to diagnose the source of lag.

What is Network Latency?

Network latency (also called lag) is the amount of time it takes for a packet of data to be encapsulated, transmitted, processed through multiple network devices, received at its destination and decoded by the receiving computer. The most common signs of network latency include:

  • Data takes a long time to send, for example an email with a large attachment
  • Accessing web-based applications or servers is slow
  • Websites do not load in a timely manner

Read further to:

  • Learn what causes network latency
  • Understand latency issues
  • See how to improve network latency

What Causes Network Latency?

Some network latency is unavoidable and considered normal – the propagation delay over a long-haul link such as a transoceanic fiber, for example, simply reflects the distance the packet has to travel. However, if you notice an increase in latency before or after the hop over the transoceanic fiber, chances are there is another reason for the increased network latency.

Before you can improve your network latency, you must first understand how to determine your latency and the different ways you can measure it. By knowing your network latency issues, you can better troubleshoot any problems you’re having to ensure data travels more quickly. 

Other causes of network latency include network interface port saturation, interface errors, packet fragmentation, upstream provider outages and routing issues. The most common cause, however, is packet queuing at a gateway along the packet’s course of travel. To pinpoint where the network latency bottleneck is occurring, follow these steps:

How to Test Network Latency

  1. Ping your server from your location. Generally, a constant ping of 100 packets is sufficient.
  2. From your server, ping the IP address of your physical location.
  3. Provide traceroute results from your location to your server.
  4. Provide traceroute results from your server to your location.
  5. Provide traceroute results from a looking glass server to both your server, and your location.
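If you want a programmatic stand-in for step 1, a constant ping can be approximated by timing repeated TCP connects, which needs no root privileges. The sketch below is illustrative only – the function name and the local listener used for the demo are mine, not part of any INAP tooling; in practice you would point it at your server’s address and an open port.

```python
# Rough stand-in for a constant ping: time repeated TCP connects.
# ICMP ping needs raw sockets (root); TCP connect timing does not.
import socket
import time

def tcp_rtt(host, port, samples=10, timeout=2.0):
    """Return a list of TCP connect round-trip times in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection completes the TCP handshake, so the elapsed
        # time approximates one network round trip.
        with socket.create_connection((host, port), timeout=timeout):
            rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

# Quick self-demo against a throwaway local listener; substitute your
# server's address and port in practice.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(16)
rtts = tcp_rtt("127.0.0.1", listener.getsockname()[1], samples=5)
print("min/avg/max = %.2f/%.2f/%.2f ms"
      % (min(rtts), sum(rtts) / len(rtts), max(rtts)))
listener.close()
```

Like the ping in step 1, what matters here is the spread: a stable minimum with occasional large spikes points at queuing, while a uniformly high baseline points at distance or routing.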

At INAP, our network operations team troubleshoots network latency from our network and from yours. We look at a constant ping from your location to your server. We look at traceroute results from your location to your server and from your server to your location. Finally, we look further outside the box with traceroute results from a looking glass server in order to determine where the latency issue is located.

Here is a quick example of the process. I executed a traceroute from one of our core routers to a Russian telecommunication company’s IP:


The results are as follows:

 Tracing the route to
1: ( 0 msec
2: ( [AS 6461] 0 msec
3: ( [AS 6461] 28 msec
4: ( [AS 6461] 28 msec
5: ( [AS 6461] 32 msec 28 msec 28 msec
6: ( [AS 3257] 32 msec 212 msec 216 msec
7: ( [AS 3257] 220 msec
8: ( [AS 3257] 152 msec 152 msec 196 msec
9: [AS 12389] 212 msec 216 msec 208 msec
10: ( [AS 12389] 220 msec 208 msec 208 msec
11: [AS 35400] 236 msec 228 msec 228 msec
12: [AS 35400] 220 msec 224 msec 220 msec
13: ( [AS 34875] 240 msec 232 msec 240 msec
14: ( [AS 34875] 232 msec 220 msec 56 msec
15: [AS 50699] 236 msec 232 msec 236 msec

As you can see, once the packet reaches the ingress interface at hop 6, there is a dramatic increase in latency due to packet queuing over a transoceanic fiber. We can confirm this as the cause by running a traceroute from a looking glass server in Russia back to INAP and comparing the two results:

traceroute to (, 30 hops max, 40 byte packets
1: (  9.151 ms  8.939 ms  8.882 ms
2: (  24.711 ms (  33.279 ms  33.238 ms
3: (  18.090 ms  21.387 ms (  27.436 ms
4: (  27.314 ms  30.517 ms (  21.392 ms
5: (  48.528 ms (  60.268 ms  60.254 ms
6: (  47.184 ms  47.252 ms (  49.665 ms
9: (  64.572 ms  64.610 ms  62.171 ms
10: (  133.746 ms  131.793 ms  131.284 ms
11: (  150.693 ms  156.502 ms  154.735 ms
12: (  157.564 ms  157.499 ms dr6506b.ord02.******.net (  156.213 ms
13:  dr6506a.ord03.******.net (  166.293 ms  166.203 ms  166.059 ms
14:  dr6506b.ord02.******.net (  145.481 ms  148.003 ms  147.968 ms
15:  dr6506a.ord03.******.net (  165.771 ms dr6506a.ord02.******.net (  157.431 ms  156.985 ms

As you can see, at hop 11 latency again increased due to packet queuing at the transoceanic fiber. Additionally, when you compare the two results you will notice that on each side of the transoceanic fiber there is very little latency.

Latency Issues: How to Reduce Latency, the INAP Way

Finally, and most importantly, INAP customers can take advantage of Performance IP, our automated route optimization engine. It watches popular destination IP prefixes, and it automatically puts all of our customers’ outbound traffic on the fastest, most stable routes and providers to improve network latency issues. Learn more about how it works by jumping straight into the demo.

Updated: January 2019

Aug 27, 2013

IT security audit 101: Four rules you need to know


By Clinton Henry, CISM, CISSP, Senior Director, Datacenter Infrastructure & Security for Worldnow

From time to time, it’s common to undergo an IT security audit. Having participated in more than 30 audits across multiple standards (SAS 70, SSAE 16, HIPAA, PCI, SOC 1 and SOC 2), I’ve gained some insights that may assist others embarking on the experience for the first time. Below are four rules to help you get through an audit quickly and efficiently – especially when the auditor is on site.

1. Ducks in a Row
Mike Tyson, the infamous boxer, was once asked how he handles boxing unknown opponents who’ve spent months studying everything about him and have developed elaborate strategies to defeat him. His response: “Everyone has a plan until they get punched in the face.”

Amusing quotes aside, planning ahead is essential for a successful audit. If you have a well-run team with clear policies, controls and enforcement, then you’re halfway there. Audits are about controls – you need to demonstrate that those controls are in place, documented, enforced, reevaluated and tested against regularly. Preparing and organizing documentation for an auditor prior to the audit is a key process, and allows you to respond to their requests quickly when they arise. It also forces you to re-evaluate policies that you may not have looked at in a while, and gives you a chance to document policies that may already be in place, but haven’t been officially documented or disseminated yet.

If your organization deals with third-party providers, it’s important to show an auditor that these vendors have been thoroughly vetted and held to stringent controls. At Worldnow, we leverage several vendors, including Internap and Salesforce. Internap provides colocation services and managed hosting for some of our critical equipment. Having their SOC 2 reports on hand is incredibly helpful to us and our auditor. Never leverage a provider who is not subject to standard industry controls such as SOC, HIPAA, or ISO 27002/17799 – you’re only asking for a headache when undergoing an audit.

2. Chinese Wall
In large firms, when a single organization represents the interests of opposing parties, a “Chinese Wall” must be established to avoid conflicts. In financial firms, the trading desks are not allowed to know what analysts at the firm are going to say about a stock or company before it is released to the general public. During a security audit, a different kind of Chinese Wall should be established between the auditor and the company being audited. When the auditor is on site, be extremely mindful of “hallway meetings,” because an overheard or misunderstood statement can lead to additional questions, which can bog down an audit for weeks or months. This is an adversarial relationship – it’s cordial, but please remember not to “speak out of school.”

It’s usually best to have a single point of contact with the auditor. This person interfaces with the auditor, collects and provides all documentation and is effectively a gatekeeper. This creates a streamlined process, prevents confusing email chains and will be appreciated by the auditor as it’s much easier to go through a single person for all information than coordinate with multiple people.

3. Don’t volunteer, elaborate, distort (lie) or speculate.
If you do interact directly with the auditor, and they ask you a “yes” or “no” question and you know the answer, say “yes” or “no.” If you elaborate, it could lead to multiple follow-ups that wouldn’t have been asked otherwise – this should be avoided. Remember: don’t answer a question that isn’t asked. If you’ve ever been deposed, it’s the exact same process. Providing a history of the company, your architecture or anything else can only hurt you – this is a “point in time” audit, and discussing what was or what will be is counterproductive.

What happens when you are asked a question that you don’t understand, don’t know the answer to, or know the answer but don’t think the auditor will like it? Don’t feel pressure to respond right away. The correct answer is, “I need to confirm that” or “I’m not sure” and offer to provide the information as soon as you can. This will prevent a lot of headaches — please trust me on this.

The auditor usually has an assistant who takes detailed notes of all your responses; these will be reviewed off site and will generate more follow-ups. This is where most people get burned – follow these rules to minimize the number of follow-ups.

4. Keep your team in the loop
As with anything else, communication is key. Before, during, and after an audit, keep your team apprised of the situation. They should be just as prepared as you for the audit and kept updated with any significant developments. Keep your third-party partners in the loop as well. They are there to help you succeed and will usually provide a resource if questions arise from the auditor that pertain directly to them. Internap gave my auditor a guided tour of one of their data center facilities. This sort of service from your partners goes a long way with the auditor – it makes their job easier, which only helps you.

Audits can be a stressful thing, with a lot riding on successful completion. Each audit presents its own puzzles and challenges, but they do get easier over time. Those who surround themselves with smart people, communicate effectively, and prepare accordingly are usually rewarded with a passing grade. At least that’s the plan – just ask Mike Tyson.

Aug 20, 2013

Is your cloud noisy and slow?

Ansley Kilgore

Now that most IT organizations have transitioned some of their infrastructure to the cloud, the game has changed yet again. While you may have already moved your email applications, disaster recovery, ERP or CRM systems to the cloud, now your CEO wants to incorporate big data, business intelligence and predictive analytics into the corporate strategy. But these large enterprise applications require more computing power than your current cloud architecture can support. How can you accommodate the CEO’s requests without sacrificing performance and inviting problems from noisy neighbors?

In an effort to not throw out the proverbial baby with the bath water, IT is faced with the challenge of using the current cloud infrastructure to meet the requirements of these new systems. A one-size-fits-all cloud solution doesn’t work for most enterprises, and few businesses can afford to sacrifice the automation and flexibility of the cloud and start manually provisioning physical servers again. Diversifying your infrastructure to include bare-metal cloud can help fill this gap. Bare metal provides the high performance processing capabilities of a dedicated environment, with the service delivery model of the cloud.

Establishing a mixed cloud environment
Understanding the requirements of your use case will help determine which mix of cloud is right for you. Workloads that require high disk I/O are usually better suited for physical, dedicated servers. Bare-metal cloud provides a new way to leverage cloud technology for high-performance, data-intensive workloads, such as big data applications and media encoding. Since bare-metal servers do not run a hypervisor, are not virtualized and are completely dedicated, including storage, you don’t have to worry about noisy neighbors or overhead delays.

Bare-metal cloud can be used in conjunction with virtualized cloud infrastructure to meet a wider range of business requirements. IT managers can balance the capabilities of various cloud models to create a cost-effective operating environment. This reduces capital costs, is operationally efficient and establishes a foundation for agility through adaptable hosting models. At the same time, businesses investigating virtualized clouds as their only hosting solution often prefer to host many of their high-performance, and most complex, applications internally. The bare-metal cloud offers an alternative to virtualized clouds and in-house environments, positioning IT managers to maximize the value of their application and service architectures.

The value of diversity
The ability to create a mixed cloud environment means cloud computing now offers more options than traditional virtualization, while still providing flexibility, scalability and utility-based pricing models. Using different types of cloud together provides organizations with exponentially more opportunities for cost-effective infrastructure.

As the cloud has evolved to include public, private, hybrid and now bare-metal options, IT now has more opportunities to create the right cloud mix to meet the needs of the enterprise. Taking a workload-centric approach can help establish a more strategic, cost-effective cloud solution. The bare-metal cloud is an integral part of an agile infrastructure that allows IT to efficiently meet the demands of business-critical applications.

Aug 17, 2013

Stop Running Circles When Troubleshooting Linux with LSOF


As a system administrator at INAP, one common scenario that we run into almost every day is a client calling or sending us a ticket saying, “Help! My server is slow and we’re not sure why!” Depending on the server OS, there are a lot of different ways to approach troubleshooting this type of issue. This post will concentrate on Linux-based systems.

When troubleshooting performance on most Linux-based servers, one of my favorite tools is lsof, which stands for “list open files.” Sure, there are simpler ways to look at similar information, such as top, which shows the top processes in real time, or ps aux, which shows all the processes and their status. However, when the answer isn’t obvious or you need more information about what exactly is bogging your server down, lsof is a great place to start.

Let me give you an example that I ran into recently while on shift. A client sent in a ticket: his server’s memory usage was running entirely too high, and its I/O wait percentage was spiking – basically, he was almost out of memory and his hard drive was being used heavily to compensate. I accessed the server via SSH and ran top. In the output, I noted two processes using the majority of the server’s memory; both were zip. In most Linux tools, on the far left, you’ll see the PID, or process ID. For this example, let’s assume a PID of 12345.

At the command line, we run a simple command:

lsof -p 12345

The output is a list of all the files being held open by that process. You will often see a lot of log files and system files there, but in the haystack there will be a needle – usually it will reveal the source of the problem.
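On Linux, the per-process view that lsof -p reports is also exposed under /proc, so you can script a rough approximation of it directly. This is a Linux-only sketch; open_files is my own illustrative helper, not an lsof interface.

```python
# Minimal, Linux-only approximation of `lsof -p <pid>`: read the
# process's open file descriptors straight from /proc/<pid>/fd.
import os

def open_files(pid):
    """Map each open fd of `pid` to the path (or socket/pipe label) it points at."""
    fd_dir = "/proc/%d/fd" % pid
    files = {}
    for fd in os.listdir(fd_dir):
        try:
            # Each entry is a symlink to the file, socket or pipe.
            files[int(fd)] = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd was closed between listdir() and readlink()
    return files

# Demo: inspect our own process.
for fd, target in sorted(open_files(os.getpid()).items()):
    print(fd, target)
```

Pointing this at another user’s process requires the same privileges lsof would – typically root – since /proc/<pid>/fd is readable only by the process owner.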

Back to the example I spoke of earlier. In the lsof output, we found that the process was actually zipping up some large SQL dump files from the customer’s e-commerce software. We looked at the software’s log files and identified who started the process.

Mystery solved!

Here are a few other things you might have a lot of luck doing with lsof:

Finding and stopping compromised scripts.
If your server is rooted or hacked, it’s pretty common for the attackers to dump scripts inside hidden folders buried far within your system folders. I’ve used lsof several times to help weed out scripts buried deep within system files and stop or delete them. For example, let’s say there is a rogue Perl script running port scans or being used to DDoS another machine: ps aux | grep perl to get the PID, followed by lsof -p <PID>, will help you drill down to exactly where the script is; from there you can delete the script and start figuring out how to fix the problem.

Find out what is running on a certain port.
This is very useful when you note some bizarre things in a netstat -na command. There are several ways to do this, but let’s say you want to see what is listening on port 2525 of your server. You could do this:

lsof -Pnl +M -i4 | grep :2525

or a simpler version:

lsof -i :2525


See what files a system user has open.
So, let’s say we want to see what files the user ‘archie’ is currently using on the server:

lsof -u archie

Finding out what is running when trying to unmount a device.
Ever try to unmount a CD drive, virtual drive or even a USB thumb drive and get the pesky ‘Device is busy’ error message? lsof can help here too. If you are trying to unmount /dev/sda3:

lsof /dev/sda3

lsof really is that simple and that powerful as a troubleshooting tool. If you use it in conjunction with other commands, such as grep, it can make for some powerful one-line commands.

Updated: January 2019

Aug 13, 2013

Big data: Two critical definitions you need to know


Big data is (clearly) a broadly defined and overused term. It’s been used to describe everything from general “information overload” to specific data mining and analytics to large-scale databases. In Internap’s hosting and cloud customer base, we see two main approaches to big data. In order to make better decisions about the infrastructure required to achieve your goals, you need to understand these different approaches and know where your needs fall.

There is a haystack, go find needles
One class of big data can be thought of as the “needle in a haystack” type. In this scenario, you have mountains of data already, and a very broad idea about the possibility of insights, analytics, and interrelationships within the data. Therefore, your goal is to crunch the data and find the relationships that allow you to understand and gain insight about the data over time. This type of static “big” data requires big backend processing power from technologies such as Hadoop. These applications tend to be mostly batch jobs with sporadic and often unpredictable infrastructure needs.

Massive real-time “big” database
The term “big data” is also used to describe the more mainstream, real-time database applications that have a scale problem well beyond the means of traditional SQL databases. Real-time big data applications, such as MongoDB, Cassandra and others, deliver the scale and performance needed for modern scale-out applications. Relational databases are often too limiting for large amounts of unstructured data. NoSQL and key-value databases are better suited for the task, but they require high-performance storage, high IOPS and the ability to rapidly scale in place. These requirements are vastly different from those of the data-crunching, needle-in-a-haystack type of big data, yet the same term is often used to describe both.

The performance question
Performance isn’t unimportant in the first type of big data, but it has a different meaning versus the real-time database scenario. For large data-mining applications, real-time data insertion isn’t as important, because you already have the data. The importance of performance in this case is the ability to extract the data fast enough and process it quickly, and this depends on the type of data you are mining and the business application of it. With that said, the type of infrastructure has a big impact on how long it takes to process your “big data” job. If you can reduce the processing time from three days to two days thanks to a more powerful cloud infrastructure, that can change how you define your business model.

For real-time big database applications, I/O becomes critical. For example, mobile advertising technology companies require real-time data insertion and performance in order to capture the right data at the right time and subsequently deliver timely, relevant ads. What really happens when millions of users simultaneously “check in” at their favorite restaurants and then at the movies via a social media mobile app? Extracting and capturing this information relies on real-time data insertion, but quickly processing and learning from that data relies on compute performance. The ads you see are formulated and delivered based on your real-time location information, behavior patterns and preferences. Dynamic, real-time data requires high I/O storage and superior compute performance in order to provide such targeted ads.

From the proverbial needle-in-a-haystack backend processing to modern, real-time database applications, the term “big data” is used for both. Once you understand the distinct qualities of each type, you can make better decisions regarding the infrastructure and IaaS (Infrastructure-as-a-Service) models that fit one versus the other. Your organization likely has both types of “big data” challenges. Talk to Internap to find out how we can help you meet the needs of both.

Next: How to make IaaS work for your big data needs

Aug 6, 2013

Customer Spotlight: Onavo improves performance and reduces costs with bare-metal cloud

Ansley Kilgore

Onavo reduces costs with bare-metal cloudAs the expert in mobile data savings, Onavo creates a suite of award-winning data utility apps that help users manage their mobile data usage, protect themselves from malware and save money. With more than one million users around the world, Onavo relies on high network speed and superior performance to provide the best quality experience for consumers.

In this customer spotlight, Galed Friedmann, Head of Operations at Onavo, discusses how the move to bare-metal servers allowed them to increase performance without sacrificing the automation and flexibility of the cloud.

Q: When you started researching solutions, what were your requirements?

A: To provide quality service to our growing customer base, we needed high network speed and reliable server performance with a stringent SLA for guaranteed uptime. We also required a predictable cost approach to keep expenses under control.

Internap’s bare-metal cloud helped us meet network and performance goals using dedicated physical servers that could be scaled up and down like a traditional virtual cloud based on workload needs. We utilize bare metal in Internap’s data center locations in New York and Amsterdam to support our global users, and can provision physical servers within 24 hours in most cases using Internap’s self-service customer portal.

Q: What changes have you noticed since the move to bare-metal servers?

A: Our move from a virtual cloud environment to dedicated physical servers has led to a boost in performance. Bare-metal servers with no virtualization layer allow us to take advantage of the full processing capacity of the physical servers, which has let us reduce our overall server count. With better performance and more control over our server usage, costs are more predictable.

Also, our NOC is now able to focus on business-critical issues instead of infrastructure maintenance and troubleshooting. We have fewer unexpected outages and less need for constant network monitoring, which makes it easier to provide customers with fast, reliable access to their mobile data applications.
Internap’s SLA supports 100% network availability across multiple networks, which takes this additional burden off our support personnel.

We are regularly testing, developing and adding new products, and the ability to self-provision new physical servers and delete instances when finished gives us the flexibility we need. As our business grows, it’s important for us to have these automation capabilities along with better performance.
