Aug 28, 2013

A Quick Guide for Troubleshooting Network Latency: How to Improve Latency


As a colo, cloud and network provider, troubleshooting network latency is one of the most common requests we receive. In this post, I’ll walk you through what’s normal and what’s not, and what we look for to diagnose the source of lag.

What is Network Latency?

Network latency (also called lag) is the amount of time it takes for a packet of data to be encapsulated, transmitted, processed through multiple network devices, received at its destination, and decoded by the receiving computer. The most common signs of network latency include:

  • Data takes a long time to send, for example an email with a large attachment
  • Accessing web-based applications or servers is slow
  • Websites do not load in a timely manner

Read further to:

  • Learn what causes network latency
  • Understand latency issues
  • See how to improve network latency

What Causes Network Latency?

Distance itself is the most basic cause of latency: a packet that must cross an ocean over transoceanic fiber will always incur some propagation delay. Typically, this type of network latency is unavoidable and considered normal. However, if you notice an increase in latency before or after the hop over the transoceanic fiber, chances are there is another reason for the increased network latency.

Before you can improve your network latency, you must first understand how to determine your latency and the different ways you can measure it. Once you know where your latency issues lie, you can troubleshoot them more effectively and ensure data travels more quickly.

Other causes of network latency include network interface port saturation, interface errors, packet fragmentation, upstream provider outages, routing issues and so on. The most common cause of network latency, however, is packet queuing at a gateway along the packet's course of travel. To pinpoint where the network latency bottleneck is occurring, follow these steps:

How to Test Network Latency

  1. Ping your server from your location. Generally, a constant ping of 100 packets is sufficient.
  2. From your server, ping the IP address of your physical location.
  3. Provide traceroute results from your location to your server.
  4. Provide traceroute results from your server to your location.
  5. Provide traceroute results from a looking glass server to both your server, and your location.

At INAP, our network operations team troubleshoots network latency from our network and from yours. We look at a constant ping from your location to your server. We look at traceroute results from your location to your server and from your server back to your location. Finally, we look further outside the box with traceroute results from a looking glass server in order to determine where the latency issue is located.

Here is a quick example of the process. I executed a traceroute from one of our core routers to a Russian telecommunication company’s IP:


The results are as follows:

 Tracing the route to
1: ( 0 msec
2: ( [AS 6461] 0 msec
3: ( [AS 6461] 28 msec
4: ( [AS 6461] 28 msec
5: ( [AS 6461] 32 msec 28 msec 28 msec
6: ( [AS 3257] 32 msec 212 msec 216 msec
7: ( [AS 3257] 220 msec
8: ( [AS 3257] 152 msec 152 msec 196 msec
9: [AS 12389] 212 msec 216 msec 208 msec
10: ( [AS 12389] 220 msec 208 msec 208 msec
11: [AS 35400] 236 msec 228 msec 228 msec
12: [AS 35400] 220 msec 224 msec 220 msec
13: ( [AS 34875] 240 msec 232 msec 240 msec
14: ( [AS 34875] 232 msec 220 msec 56 msec
15: [AS 50699] 236 msec 232 msec 236 msec

As you can see, once the packet reaches the ingress interface at hop 6, there is a dramatic increase in latency due to packet queuing over a transoceanic fiber. We can confirm this as the cause by running a traceroute from a looking glass server in Russia back to INAP and comparing the two results:

traceroute to (, 30 hops max, 40 byte packets
1: (  9.151 ms  8.939 ms  8.882 ms
2: (  24.711 ms (  33.279 ms  33.238 ms
3: (  18.090 ms  21.387 ms (  27.436 ms
4: (  27.314 ms  30.517 ms (  21.392 ms
5: (  48.528 ms (  60.268 ms  60.254 ms
6: (  47.184 ms  47.252 ms (  49.665 ms
9: (  64.572 ms  64.610 ms  62.171 ms
10: (  133.746 ms  131.793 ms  131.284 ms
11: (  150.693 ms  156.502 ms  154.735 ms
12: (  157.564 ms  157.499 ms dr6506b.ord02.******.net (  156.213 ms
13:  dr6506a.ord03.******.net (  166.293 ms  166.203 ms  166.059 ms
14:  dr6506b.ord02.******.net (  145.481 ms  148.003 ms  147.968 ms
15:  dr6506a.ord03.******.net (  165.771 ms dr6506a.ord02.******.net (  157.431 ms  156.985 ms

As you can see, at hop 10 latency again increases sharply due to packet queuing at the transoceanic fiber. Additionally, when you compare the two results, you will notice that there is very little latency on each side of the transoceanic link.
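The comparison we just walked through can be mechanized: given per-hop average round-trip times, flag the first hop where latency jumps sharply relative to the previous hop. This is a minimal sketch; the `find_latency_jump` helper and the 50 ms threshold are illustrative assumptions, and the hop averages are transcribed from the first traceroute above.

```python
def find_latency_jump(hop_avgs, threshold_ms=50.0):
    """Return (hop_number, delta_ms) for the first hop whose average RTT
    exceeds the previous hop's average by more than threshold_ms."""
    for i in range(1, len(hop_avgs)):
        delta = hop_avgs[i][1] - hop_avgs[i - 1][1]
        if delta > threshold_ms:
            return hop_avgs[i][0], delta
    return None  # no abrupt jump: latency grew gradually, if at all

# Average per-hop RTTs (ms) from the first traceroute above:
hops = [(5, 29.3), (6, 153.3), (7, 220.0), (8, 166.7), (9, 212.0), (10, 212.0)]
print(find_latency_jump(hops))  # flags hop 6, where the transoceanic queuing begins
```

Running the same check on the return trace and seeing the jump land at the matching transoceanic hop is what lets you rule out a problem on either provider's side.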

Latency Issues: How to Reduce Latency, the INAP Way

Finally, and most importantly, INAP customers can take advantage of Performance IP, our automated route optimization engine. It monitors popular destination IP prefixes and automatically places all of our customers' outbound traffic on the fastest, most stable routes and providers to improve network latency. Learn more about how it works by jumping straight into the demo.

Updated: January 2019
