
Aug 31, 2017

A Beginner’s Guide to RPO and RTO

HorizonIQ

I’m constantly reminding my wife to hit ‘Ctrl + S’ in Microsoft Word as she’s writing papers or lesson plans for her classroom. She’s a first-grade teacher, and I love her dearly. But why do I harp on this subject? Because too many times she’s been a page or more into her writing when the application locks up or crashes, or her computer shuts down. Oops… All that work has to be redone. She has to restart her PC, load up Word, and start again, losing 10 minutes to the restart process and another 20-30 minutes just to regain her train of thought on where she left off.

Why am I telling you about my wife’s questionable habits when it comes to saving her work at reasonable intervals? Well, it’s because I work in information technology, specifically within the managed cloud hosting and infrastructure space. The scenario I describe above is a simple way to understand some commonly misunderstood terms—RPO and RTO. That is, ‘Recovery Point Objective’ and ‘Recovery Time Objective’, respectively.

What is your Recovery Point Objective?

Imagine you’re writing a document in Microsoft Word. After each short paragraph, you hit ‘Ctrl+S’, the shortcut for saving your work. You can also mouse up to the floppy disk icon and click, but nobody has time for that. Either way, each time you save, you are effectively establishing your recovery point for that particular document.

In other words, if you hit Ctrl+S at 1:43 PM and your PC crashes at 1:57 PM, even though you typed another few paragraphs in between, you will only be able to recover your work as of 1:43 PM. When you restart your computer, boot up Microsoft Word, and reopen your document, you will see the last thing you typed before 1:43 PM. That’s your recovery point. I set my own RPO at every few sentences: after writing 2-3 sentences, I hit Ctrl+S.
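The arithmetic behind a recovery point is simple enough to sketch in code. Here is a minimal, hypothetical Python helper (the function name is invented for illustration) that computes the window of work lost between your last save and a failure, using the times from the example above:

```python
from datetime import datetime, timedelta

def data_loss_window(last_save: datetime, failure: datetime) -> timedelta:
    """Anything done after the last save (the recovery point) is lost at failure."""
    return failure - last_save

# The example above: last Ctrl+S at 1:43 PM, crash at 1:57 PM.
last_save = datetime(2017, 8, 31, 13, 43)
crash = datetime(2017, 8, 31, 13, 57)

lost = data_loss_window(last_save, crash)
print(f"Work lost: {lost.seconds // 60} minutes")  # Work lost: 14 minutes
```

Tightening your RPO (saving more often) shrinks this window; loosening it grows the amount of work you must redo.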

What is your Recovery Time Objective?

We covered Recovery Point Objective, so now let’s look at the other three-letter term: RTO, or Recovery Time Objective. Again using the Word document scenario above, we’ll focus on how long it takes to get back to work once you’ve experienced a failure on your computer or within the application you’re working in. In my wife’s case, it’s not the end of the world if it takes her 30 minutes to get back to work. Frustrating and annoying, yes, but there’s no major impact if it happens: restarting the computer, booting up the application, and refocusing her thoughts on the work at hand, which is finishing the document.

Imagine, though, if she were working under an incredibly tight timeline and every minute was critical to her success. That 30 minutes could make or break completing an assignment on time or submitting it to her boss. This is a more serious situation that warrants a continuity plan to shrink that 30 minutes down to, say, 5 minutes. If you don’t have the right capabilities or tools in place, however, this may be challenging to solve. Your tolerance for ‘time not at work’ in this example is the equivalent of your Recovery Time Objective. Can it be an hour, 30 minutes, 2 minutes? That’s for you to decide based on the criticality of the work you are doing and the time constraints you are working under.
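To make that tolerance concrete, here is a small hypothetical sketch (the function name and numbers are illustrative, not from any real tool) that checks a measured outage against an RTO target:

```python
def meets_rto(downtime_minutes: float, rto_minutes: float) -> bool:
    """True if recovery finished within the Recovery Time Objective."""
    return downtime_minutes <= rto_minutes

# The scenario above: 10 minutes to restart plus ~20 minutes to refocus.
actual_downtime = 10 + 20

print(meets_rto(actual_downtime, rto_minutes=60))  # relaxed 1-hour RTO: True
print(meets_rto(actual_downtime, rto_minutes=5))   # tight 5-minute RTO: False
```

The same 30-minute outage is acceptable or unacceptable purely depending on the objective you set.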

How do RPO and RTO impact my business?

Hopefully, you now have a better understanding of the terminology and concept of Recovery Point and Recovery Time Objectives. Let’s take it a step further and draw a direct parallel to the businesses we work in.

Backups
You probably already have a good understanding of what backups are, but as a refresher, I’ll give a quick explanation. Simply put, backup refers to the copying of physical or virtual files or databases to a secondary location or site for preservation in case of equipment failure or other catastrophe. Using the Word document analogy from earlier in the article, a backup in that example would be to have the document saved also on another PC, in a Google Doc, or otherwise. It’s someplace else you can go get the document should you lose it completely from your primary source.

For businesses, backups could consist of a wide variety of data. Customer records, billing information and financial records, patient data, web files, intellectual property such as application code, and so on. In the unfortunate event of a system or application failure, you never want to be in the position of not having a backup plan in place and ready to execute when the time calls for it.

There are a number of ways to achieve a sound backup strategy in the B2B world. We promote Veeam’s 3-2-1 method, which consists of:

  • Have at least three copies of your data.
  • Store the copies on two different media.
  • Keep one backup copy offsite.
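The 3-2-1 rule is easy to check mechanically: count copies, count distinct media types, and confirm at least one copy lives offsite. The sketch below is hypothetical; the copy records are invented for illustration:

```python
def satisfies_3_2_1(copies: list) -> bool:
    """copies: list of dicts with 'media' and 'offsite' keys."""
    enough_copies = len(copies) >= 3                         # at least 3 copies
    enough_media = len({c["media"] for c in copies}) >= 2    # on 2+ media types
    has_offsite = any(c["offsite"] for c in copies)          # 1+ copy offsite
    return enough_copies and enough_media and has_offsite

backups = [
    {"media": "local-disk", "offsite": False},   # production copy
    {"media": "nas", "offsite": False},          # second media type
    {"media": "cloud-object", "offsite": True},  # offsite copy
]
print(satisfies_3_2_1(backups))  # True
```

If any of the three checks fails, that is precisely the gap to fill in your backup strategy.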


At a basic level, it’s important to think about how you are currently backing up your data and what gaps you may need to fill to satisfy the 3-2-1 rule above.

Disaster Recovery

The ultimate in business continuity planning is having a thoughtful Disaster Recovery plan in place. This answers the question stakeholders ask: “What happens in the worst-case scenario?” That could mean a natural disaster, catastrophic hardware failure, malicious activity from an employee, a software bug or failure, and so on. How long will it take your business to be operational again when disaster strikes? Do you currently have an answer to that question? If not, read on.

With backups, the good news is that your data is retrievable. It’s somewhere you can go get it, and it’s in good enough shape to get you to an operational state again, at some point. But just because you have backups doesn’t mean you have the capabilities or delivery mechanisms in place to actually do anything with them. Let’s use the example of a catastrophic hardware failure.

Say your servers were so old that parts failed and you simply can’t get them working again. In this scenario, you would need secondary servers to connect to your backup system in order to retrieve that data and begin allowing access to your users again. If you don’t already have those secondary servers on hand, it may take weeks, if not months, just to get the hardware to your site. Then you still have to do all the setup for them to run and function as you need. You could be inoperable for months. There are not many businesses, if any, that can afford to be in that situation.

A much better scenario is one where you have a plan in place that would enable you to quickly restore data and services and maintain operations with little to no downtime. Having a disaster recovery solution ready to go at the time of disaster keeps the risk of downtime to a minimum. Think about the question we asked earlier. “What happens in the worst-case scenario?”

What if you could answer that by saying: “Our worst-case scenario is mitigated by our ability to replicate our critical data and applications to our secondary site. In the event of a disaster, our maximum downtime would be only one hour because we have the systems and technology in place to seamlessly fail over and continue operations.”

Now that’s an answer your boss would love to hear!

If you want to learn more about how you can have the right answers to these important business and technology questions, contact us and one of our specialists would be delighted to help.

Explore HorizonIQ
Bare Metal


About Author



Read More
Aug 22, 2017

Will My Application Work Better on Dedicated Servers?


We live in a “cloud world.” Nobody can deny that, but are there still use cases that make sense for dedicated servers? The team here at INAP says “yes,” and that answer is backed up by the requirements and demands of some of our customers. This post explores some of the specific applications and workloads that still thrive on dedicated servers rather than a cloud platform, public, private or otherwise.

What Applications Run Better on Dedicated Servers?

At a high level, the best fits are applications that are I/O intensive, built on legacy frameworks, or that simply need more direct access to and flexibility over server resources such as CPU, RAM, and storage. More specifically, some examples include:

  • Highly accessed, I/O intensive databases such as Oracle, SQL, NoSQL and others.
  • Video Transcoding that requires specific chipsets like the Intel QuickSync technology.
  • 3-D Rendering for AutoCAD and other rendering applications.
  • Financial and Scientific modeling that requires an immense amount of constant computations.
  • Shared and VPS web hosting.
  • Voice over IP (VoIP) services (for legacy platforms).
  • High Traffic eCommerce platforms like Oracle ATG.
  • SaaS applications that sustain constant high loads from users.

This is not an exhaustive list, but it aims to highlight some of the workloads that still make quite a bit of sense for dedicated servers.

Why are dedicated servers a better choice for these applications?

If I were to boil it down to just a few key reasons, they would be pure horsepower, flexibility, and the price/performance ratio. I’ll explain each in a bit more detail.

Pure Horsepower
For applications that are CPU-bound and reliant on doing lots of computations, having direct access to all of the CPU power is important. While hypervisor technology has reduced the amount of resource overhead needed, there’s still anywhere from 5-15% (or more) you lose when you add that hypervisor layer atop the host’s hardware. This overhead comes from things like HA failover capacity and system resource reservations. For example, with VMware, vmkernel processes reserve some resources for themselves in order to operate properly. In other words, you lose those resources for the core tasks of your application, which means you ultimately need to pay for additional resources (CPU, RAM, etc.) just to accomplish your performance objectives. When I get into the price/performance ratio below, you’ll see where this becomes ever more important for some use cases.
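To see why that overhead matters at scale, here’s a back-of-the-envelope calculation. The 10% figure below is just an assumed value inside the 5-15% range quoted above, and the function is a hypothetical sketch, not a sizing tool:

```python
import math

def cores_needed(required_cores: float, overhead_fraction: float) -> int:
    """Cores you must provision so that, after hypervisor overhead,
    enough usable capacity remains to meet the requirement."""
    return math.ceil(required_cores / (1 - overhead_fraction))

# Bare metal: 32 cores buys 32 usable cores (zero overhead).
# Virtualized at an assumed 10% overhead: you must buy more to net 32.
print(cores_needed(32, 0.10))  # 36
```

Those four extra cores are resources you pay for but never apply to the application itself, which is the overhead the quote below is describing.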

Flexibility
Does your application require a specific CPU and chipset? Do you need a custom and unique network interface card (NIC)? Will your application run better with high-performance, 1.6TB NVMe solid state drives (SSDs)? If the answer is ‘yes’ to any of these, dedicated servers are a great fit. They provide the flexibility to build to spec rather than forcing you into a potentially vanilla, one-size-fits-all environment. At INAP, we advertise some basic dedicated server plans, but each is fully customizable to the exact needs of the client. We’ve done some really neat things for eCommerce, video transcoding, VoIP services, and others that simply could not be achieved on a cloud platform. Not today, at least.

Price/Performance Ratio

This is where the rubber meets the road. Everything discussed above is encapsulated in this final idea: the price-to-performance ratio. In other words, how much do you have to spend to get the desired performance from your applications in order to deliver the best possible customer experience? For these high-performance applications, every percent of resource availability counts, so being able to sidestep hypervisor overhead losses is a big deal. You’re still paying for those resources even though you’re not using them for the core functionality of the application, so they’re not delivering value to the end user. Paul Kreiner from TSheets explains it best:

“Dedicated servers are optimal because we don’t pay a 10 to 15 percent performance penalty off-the-top for virtualizing. Unlike a typical IT shop, where most systems sit idle and user traffic is sporadic, our systems sustain high loads for hours on end.”

Updated: January 2019

Aug 18, 2017

Building a Multi-Cloud Strategy with AWS Direct Connect and Azure ExpressRoute

Paul Painter, Director, Solutions Engineering

With the growth of public cloud options such as AWS and Azure, more and more businesses are choosing a hybrid cloud and multi-cloud approach to their digital transformation and infrastructure needs. By placing high-security applications in a private cloud and customer-facing applications in the public cloud, savvy IT departments are able to optimize their environments without compromising compliance or performance requirements.

But with this strategy comes the need for a reliable, secure connection between the two clouds. Often a VPN of some kind will meet the need, but there are many use cases where low-latency, highly stable connections are crucial.

This is where AWS Direct Connect and Azure ExpressRoute come into play.

Using ExpressRoute or Direct Connect

Direct Connect from AWS and ExpressRoute from Microsoft Azure let you create private connections between their data centers around the world and your on-premises or hosted infrastructure. These connections do not traverse the public internet, so the setups naturally provide the highest possible degree of security, as well as lower and more consistent latency than you will experience on a site-to-site VPN.

Configuring either of these requires you to work with a partner (such as Level 3) for the private connection to the data center and a vendor to provide the “last mile” of the connection. For instance, we partner with the automation and software-defined networking experts at Megaport for connections between our customers’ dedicated and private cloud environments and Amazon and Microsoft data centers. Additionally, HorizonIQ is able to connect into any of Megaport’s 157 points of presence around the globe. Because it’s a month-to-month service, it’s a truly flexible hybrid solution with a variety of use cases that go beyond compute, e.g., short-term migrations, DDoS mitigation, disaster recovery, etc.

The main benefit of both ExpressRoute and Direct Connect is that a consistent background connection allows your application or environments to talk to one another over a private and secure connection.

Depending on your needs and budget, cost is potentially the only real drawback to both of these. The higher tiers can get very pricey. This becomes even more of a factor if you have services in multiple regions, as you may need to have multiple private connections into your environments.

Consider Your Broader Multi-Cloud Strategy

If the cost consideration doesn’t drive this home on its own, it’s important to note that the hybrid environments enabled by ExpressRoute and Direct Connect are only as strong as your multi-cloud strategy in its entirety. Before you spin up a public cloud instance, make sure you follow a few key rules:

  1. Design your strategy around the needs of applications & workloads, not cloud platforms.
  2. Take the time to understand the different payment models available in Azure and AWS.
  3. Explore feature set availability, specifically which other add-on services like ExpressRoute and Direct Connect you’ll need to add to fully power your public cloud instances.
  4. Build a plan for ongoing management of multi-cloud platforms — e.g., skills/certs training, hiring a public cloud MSP for AWS or Azure, etc.

At the end of the day, with both Azure ExpressRoute and AWS Direct Connect you will need to take some time to carefully consider the needs of your cloud strategy before you make any choices. However, if you do decide you need one of these services, you can rest assured you’ll get the most reliable options for connecting public cloud applications to your data center.

Updated: January 2019

Aug 15, 2017

Business Continuity Management 101: Resources for Compliance Minded Organizations


In recent years, corporate governance has taken on increased significance in the U.S. as more and more legislation, regulations, and external standards require organizations to provide proof of control measures to external auditors and assessors.

Compliance with these laws, regulations, and standards is a key concern of business continuity planning/disaster recovery (BCP/DR) personnel. There is a silver lining to the requirements outlined in these mandatory regulatory frameworks: A safer and more responsible organization!

Think of your business as your home, and compliance and business continuity requirements as your list of household chores. We all feel the same about chores: nobody likes doing them even though they must be done. Why? Perhaps we feel we could better spend our time on other things. Maybe we are not very good at doing them. It is not enjoyable work. Ironically, these are some of the same arguments against implementing compliance processes within a company.

Yet compliance and business continuity management are not optional if we are to foster safer business practices that protect our businesses.

Surprisingly, business continuity management and planning is not always as complicated as it seems at first glance. The perception is that a lot of time and money must be spent implementing processes, technology, and personnel with expertise in compliance. Oftentimes, however, the bulk of the work is expanding existing corporate policies to ensure there is accountability and oversight.

Organizations not only must have disaster recovery plans, but full business continuity plans to ensure that key parts of the organization—not just the IT systems, but also the personnel, functions, and processes—can continue operating in the event of an emergency. By creating a comprehensive plan accounting for the following questions, you would be well ahead of the pack from a business continuity management standpoint:


  • Who is responsible for which aspects of the business continuity procedures and plans?
  • How will disasters be avoided and mitigated?
  • Which risks have been identified?
  • How will various scenarios (flood, fire, natural disaster) be handled?
  • How will employees be evacuated, and to where?
  • How will medical emergencies be handled?
  • Where are your alternate site locations, and how will they be used?
  • What are your internal communications/notification procedures?
  • How will the business continuity plan be tested, updated, reviewed, and approved?


This routine “housekeeping” is not expensive to do and should be the first steps taken to become a more compliant organization. All it takes is the time and willingness to put in the effort!

If you are still thinking “I hate chores!” you are in luck. Managed service providers like INAP have the technology and expertise in house to help businesses of all sizes incorporate compliant systems and processes for business continuity solutions like cloud backups and DR as a Service. A good MSP acts as an extension of your internal support teams, offering 24/7 support and services backed by Service Level Agreements (SLAs). Some even go so far as to provide complimentary documentation and emergency services. If you feel that compliance and business continuity is a “chore,” think about reaching out to INAP to schedule a consultation on how we can help you become more compliant with a rock-solid business continuity strategy!

Updated: January 2019

Aug 1, 2017

Current Trends in the Public Cloud: Migrating VMware Licenses


Over here at INAP, we’ve been hearing the buzz around VMware and its shifting cloud strategy. RightScale’s latest State of the Cloud Report confirms that “public cloud adoption is growing while private cloud adoption flattened and fewer companies are prioritizing building a private cloud” (RightScale 2017 State of the Cloud Report, p. 36).

The report goes on to state that VMware vSphere adoption was flat year over year, confirming what we’re feeling in the market. That said, VMware still holds a high share of the private cloud deployments, and our customers have a significant investment in their licenses.


For many VMware users looking to move into the cloud, the options for a VMware-based public cloud solution that preserves their VMware license investment seem limited. I have had several conversations with IT Directors and CTOs about this problem, and most were unaware that they can migrate to INAP’s AgileSERVER cloud and still use their VMware licenses.

But wait…I can run my vSphere cluster on INAP’s public cloud?

It’s true, and it has been for a while now! INAP launched AgileSERVER, our OpenStack bare-metal platform, in 2015, laying the foundation for a VMware cluster in a public cloud environment.

Shortly thereafter, we added the ability to support dedicated SAN devices to our Bare-Metal cloud, rounding out the infrastructure needed for a VMware cluster.

In fact, INAP provides the following features to make migrating VMware simple for our clients:

  • Dedicated servers that can be provisioned in a cloud-like manner
  • Over 70 different configurations covering multiple processor, RAM, and disk needs
  • High-speed 10Gb networking in all servers
  • Entry level to high performance dedicated SAN options
  • Optional monitoring and management services

The design of our infrastructure will enable your IT department to migrate to the cloud with minimal difficulty. Because INAP serves companies at all points in their growth cycles, we know that some companies may need assistance configuring their VMware cluster or migrating their existing environment. We can even help you with that! Our Solution Engineering team will work with you to size and configure the VMware environment you need. We have the tools and the experience to review your current environment and right-size it based on the current performance.

The cloud is the limit tomorrow…

Vendor lock-in is a material reality of the corporate world. Companies protect their proprietary information by securing a customer’s loyalty for an agreed-upon time, creating a mutually beneficial relationship. That is, until a customer realizes that they may have a better option elsewhere, with the weight of prior commitments keeping them where they are.

At INAP, the customers I talk to recognize the cost and performance benefits of moving to the cloud, but they want to make the most of their VMware investments first. In addition to license costs, their IT operations and platforms are dependent on VMware. These customers believe that their sunk costs would be wasted in migrating to the cloud, so they are always surprised to learn that they can use their existing VMware licenses in our cloud environment!

Your new environment will have the same look and feel with the same management practices as a privately hosted VMware cluster, but it’s built on OpenStack Cloud technology. As your teams and applications become more cloud-friendly, you can slowly migrate off VMware, providing further savings in the future when you do not have to renew your licenses.

But we can save that discussion for another time…

The most important thing to realize is that if you are looking to migrate to the cloud but feel stuck in terms of licenses, technologies or skill-sets, you do have options. Our customers find this solution is an easy step to the cloud with limited risk. Contact one of our specialists to find out what INAP can do to help you transition to the cloud with licenses intact!
