
Nov 23, 2015

6 Things to Consider While Designing and Prepping Your Disaster Recovery Plan

INAP

Disaster recovery

No organization looks forward to the day they implement their disaster recovery (DR) plan. But like any good insurance policy, DR is an essential component of business continuity and preservation.

Most people understand that much. Where organizations often fall short, however, is in the details and preparation. The only thing worse than having no plan at all is having one that wasn't properly researched and reviewed before the event that takes you offline.

In other words, don't wait. Start the process now, and keep in mind these six baseline items:

1 – Identify and plan for your most critical assets. Because a good DR plan demands significant resources, focus only on the processes and applications most crucial to your business while you restore normal operations. For many companies, those may be customer-facing applications and systems like e-commerce sites or portals. Applications like email, while important, may take a secondary position, and internal-only systems – like HR or accounting applications – may fill out a third tier. (A sketch capturing these tiers, along with the recovery objectives from item 2, follows this list.)

2 – Determine RPO/RTO. The Recovery Point Objective (RPO) is the maximum amount of time your business can tolerate between data backups. If your RPO is one day, that means you can survive losing one day’s worth of data, but no more. Your Recovery Time Objective (RTO), on the other hand, is the target for restoring regular service after the disaster strikes. Neither metric is arbitrary, and you’ll likely have to crunch a lot of numbers and coordinate with virtually every business unit to determine the most accurate objectives.     

3 – Scope out the technical mechanics. The hybrid era of IT, for all its benefits, only makes DR planning more difficult. Critical business processes and applications exist in a complex web of interdependencies. You'll have to map relationships across server, storage and network infrastructure and develop accurate scripts to ensure apps function as they're supposed to in the recovery environment. (A dependency-ordering sketch also follows this list.)

4 – Select an appropriate failover site. Traditional DR requires redundant infrastructure to fail over to. Not only is this pricey, but you also have to choose a site that makes sense geographically (i.e., low odds of being affected by the same event) and offers an SLA that's up to your current standards.

5 – Take advantage of the cloud. For many organizations, a robust DR plan is put out of reach by extremely high cost and resource requirements. Disaster Recovery as a Service (DRaaS), however, is a cloud-based solution that eliminates the heavy capital expense, putting DR within reach of companies unable to acquire the redundant infrastructure needed to restore service. INAP offers a comprehensive DRaaS solution for hosted private cloud customers that includes seamless failover and failback, an RPO within seconds and an RTO within minutes.

6 – Document, Test, Refine. This is the hat trick of effective execution, and each component is critical. Your plan needs to be specific and detailed: plans, procedures, responsibilities and checklists should be clearly documented. You want your team to have clear marching orders, with little left to interpretation in the middle of a crisis.
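To make items 1 and 2 concrete, here is a minimal sketch of how recovery tiers and objectives might be recorded, assuming hypothetical applications and targets; every value below is a placeholder, not a recommendation:

```python
from datetime import timedelta

# Hypothetical recovery tiers: the applications and RPO/RTO targets
# are illustrative placeholders only.
RECOVERY_TIERS = {
    1: {"apps": ["ecommerce-site", "customer-portal"],
        "rpo": timedelta(minutes=15), "rto": timedelta(hours=1)},
    2: {"apps": ["email"],
        "rpo": timedelta(hours=4), "rto": timedelta(hours=8)},
    3: {"apps": ["hr-system", "accounting"],
        "rpo": timedelta(hours=24), "rto": timedelta(hours=48)},
}

def backup_interval_ok(interval: timedelta, rpo: timedelta) -> bool:
    """A backup schedule meets an RPO only if backups run at least as
    often as the RPO allows; otherwise a failure between backups could
    lose more data than the business said it can tolerate."""
    return interval <= rpo

# Nightly backups satisfy a 24-hour RPO but not a 15-minute one.
assert backup_interval_ok(timedelta(hours=24), RECOVERY_TIERS[3]["rpo"])
assert not backup_interval_ok(timedelta(hours=24), RECOVERY_TIERS[1]["rpo"])
```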
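Item 3's dependency mapping lends itself to a graph. Here is a sketch using Python's standard-library graphlib (the services and dependencies are invented): a topological sort yields a recovery order in which nothing is restored before the things it depends on, and it flags circular dependencies for free.

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical dependency map: each service lists what it depends on.
deps = {
    "ecommerce-site":  {"app-server", "payment-gateway"},
    "app-server":      {"database", "network"},
    "payment-gateway": {"network"},
    "database":        {"storage", "network"},
    "storage":         set(),
    "network":         set(),
}

try:
    # static_order() yields dependencies before their dependents.
    recovery_order = list(TopologicalSorter(deps).static_order())
    print(recovery_order)
    # e.g. ['storage', 'network', 'database', 'payment-gateway',
    #       'app-server', 'ecommerce-site']
except CycleError as err:
    # A cycle means the map itself needs fixing before any DR drill.
    print("circular dependency:", err)
```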

Nov 13, 2015

Route optimization and business continuity

Ansley Kilgore

For most businesses today, the Internet is mission-critical. Computer networks and IT systems are important strategic assets to modern companies, but each facet of a business that relies on the Internet is also vulnerable to risk.

Multiple factors can jeopardize network availability, including slow Border Gateway Protocol (BGP) convergence, invalid route updates, human error, malicious DDoS attacks and fiber cuts. The results of these complications can range from minor delays or degradations to complete loss of connectivity across the Internet. So how can system administrators and other IT professionals ensure business continuity in spite of the risks?

Business continuity and Internet optimization

Unfortunately, many organizations depend on suboptimal network protocols and remedial redundancy to ensure availability of critical network elements. In many cases, traditional operational practices are reactive and fail to prevent dramatic impacts to organizational productivity. To ensure business continuity, IT organizations need a proactive approach to automate BGP and prevent service degradations from becoming business-critical disasters.

Here are two ways Internet route optimization can enhance business continuity.

Improve multi-homed route optimization

Intelligent route control devices take business objectives and risks into account by maintaining routing policies and service demands as defined by the enterprise. These systems leverage redundant infrastructure, proactively measure service levels across alternate Internet paths, and assert routing entries consistent with the principles of business continuity. On-premise route control devices use dynamic network analysis to provide real-time BGP route optimization, proactively reducing the risk of outages that can affect productivity and revenue.
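As a rough sketch of the measurement half of this (not any vendor's actual implementation), the code below times TCP handshakes to per-provider probe targets and reports the currently fastest path; the hostnames are hypothetical placeholders:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate round-trip latency by timing a TCP handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

# Hypothetical probe targets, one reachable through each upstream ISP.
PROVIDERS = {
    "isp-a": "probe-a.example.net",
    "isp-b": "probe-b.example.net",
}

rtts = {name: tcp_rtt_ms(host) for name, host in PROVIDERS.items()}
best = min(rtts, key=rtts.get)
print(f"lowest-latency provider right now: {best} ({rtts[best]:.1f} ms)")
```

A production route controller asserts the winner as a BGP routing entry rather than printing it, and it probes continuously so the choice tracks changing conditions.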

Measure and monitor network performance

In addition to addressing these risks, route control systems provide metrics in the form of performance, distribution and economic reports, giving you visibility into your network's current state. This includes transparency into carrier commit reserves and price-indexed ISP performance. Access to these vital metrics allows you to fine-tune your parameters and achieve network performance goals with reduced costs, easier troubleshooting and intelligent traffic management across multiple providers.
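Carrier commits are typically billed on 95th-percentile usage, which is presumably what "carrier commit reserves" refers to here. A back-of-the-envelope sketch of that calculation, with invented five-minute samples and an invented commit:

```python
def percentile_95(samples_mbps):
    """Burstable-billing convention: sort the five-minute samples,
    discard roughly the top 5%, and bill on the highest remaining one."""
    ordered = sorted(samples_mbps)
    return ordered[int(len(ordered) * 0.95) - 1]

# One hypothetical day of five-minute samples (288 readings):
samples = [40] * 250 + [90] * 24 + [400] * 14  # a few short bursts

billable = percentile_95(samples)  # 90 Mbps; the bursts fall in the top 5%
commit_mbps = 100
print(f"95th percentile: {billable} Mbps; "
      f"headroom under commit: {commit_mbps - billable} Mbps")
```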

IT systems and networks play a huge role in the success of your business. Incorporating route optimization into your business continuity plan can reduce risk for your mission-critical operations. Anything else is an acceptance of risk and an invitation for disaster.

Download the Intelligent Route Control white paper to learn more about on-premise route optimization.

Nov 10, 2015

What is a sustainability consultant?

Ansley Kilgore

Sustainability consultants develop and implement energy-efficient strategies to help reduce a company's overall carbon footprint. But designing environmentally sustainable solutions isn't their only purpose. The policies they implement can improve a company's performance while drastically reducing utility and resource costs.

As our dependence on technology grows, the data center industry – the backbone of the Digital Age – has seen substantial growth. As a result, data centers have become one of the fastest-growing consumers of electricity in the United States, creating a need for intelligent, sustainable architecture within the industry.

The Natural Resources Defense Council (NRDC) projects that by 2020, the energy consumed by data centers will cost American businesses $13 billion in electric bills and will emit 150 million metric tons of carbon pollution annually. These numbers seem staggering, especially when you consider that the data centers run by large cloud providers aren't the culprits of this massive energy consumption. In fact, they account for only about 5% of the industry's energy use. The other 95% comes from corporate and multi-tenant data centers that lack a sustainable design.

Another alarming stat presented by the NRDC is that the US data center industry is consuming enough electricity to power all the households in New York City for two years. This massive amount of energy and pollution output is equal to that of 34 coal-fired power plants. And by 2020, the output is projected to be equivalent to 50 power plants.

These statistics make clear that, in the data center industry alone, sustainability consultants are essential to reducing energy consumption through green data center design.

In the video below, Randy Ortiz, VP of Data Center Design and Engineering at Internap, and Dan Prows, Sustainability Consultant at Morrison Hershfield, discuss how Internap’s design team works with sustainability consultants to construct a highly energy-efficient and sustainable data center.

As demand for data center capacity continues to grow, it is both fiscally and environmentally responsible for providers to design their facilities to be as energy-efficient and sustainable as possible. And by working with sustainability consultants, data center engineers can ensure that their facilities are designed to drive performance while reducing operational costs and environmental impact.

Learn more about Internap’s energy-efficient data centers.

Nov 5, 2015

Software-defined WAN and route optimization

INAP

Software-defined WAN (SD-WAN) is generating a lot of buzz these days as a way to simplify network management. The growing demand for WAN optimization is driven in part by emerging applications with a low tolerance for latency. Data-intensive applications such as big data and analytics workloads require high read/write speeds and more powerful compute capabilities, making increased network performance and reliability more important than ever.

At Internap, we view software-defined networking (SDN) as the ability to automate standardized network engineering tasks and maintain that automation through an entire networking system. As a subset of software-defined networking, software-defined WAN means using software to make decisions on routing across a wide area network.

Challenges of BGP

Route optimization is a core component of an overall SD-WAN framework. Fundamentally, both approaches share the common goal of maximizing network reliability, availability and performance.

It’s no secret that BGP wasn’t designed for performance. In the early days of the Internet, the pragmatic decision was made to choose reliability and availability over speed. Route optimization solutions offer the ability to solve for performance measures that BGP does not consider. Learn more about the challenges of BGP.

Network engineers are tasked with finding the delicate balance between rates and commits across providers to keep costs low while maintaining the required performance levels. Multi-homing is often used to increase availability and eliminate single points of failure by connecting applications to multiple upstream ISPs. If the connection from one provider is lost, traffic is routed to backup carriers so that mission-critical applications remain online. But managing traffic volume, latency and cost in a multi-homed network is a manual process, and when multiple ISPs are involved, diagnosing and troubleshooting problems and implementing solutions can be difficult and time-consuming.

Maximizing performance with route control

At Internap, we consider our on-premise Managed Internet Route Optimizer™ (MIRO) Controller to be an extension of SD-WAN. Backed by our patented MIRO technology, MIRO Controller improves performance and reduces costs for multi-homed networks.

While BGP only looks at AS path length, MIRO Controller offers visibility into latency, packet loss, congestion and other factors that can impact network performance. Aside from taking the burden off your network engineers, MIRO Controller automatically reroutes traffic based on your specified performance metrics, cost-saving considerations, or a combination of the two. As a result, traffic is routed along the optimal path at any given time. With advanced reporting and a comprehensive dashboard, MIRO Controller offers visibility into outbound traffic, average latency, average packet loss, optimizations made and route changes.
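As a toy illustration of that decision logic (not MIRO's actual algorithm), the sketch below scores candidate routes on measured latency, loss and per-Mbps price, with weights standing in for operator policy; all figures are invented:

```python
# Candidate routes to one prefix; all metrics and prices are invented.
ROUTES = {
    "isp-a": {"as_path_len": 2, "latency_ms": 80, "loss_pct": 0.5, "usd_per_mbps": 1.0},
    "isp-b": {"as_path_len": 3, "latency_ms": 35, "loss_pct": 0.1, "usd_per_mbps": 1.5},
}

# Policy weights: lower total score is better. Plain BGP would, in
# effect, look only at as_path_len and pick isp-a despite its latency.
WEIGHTS = {"latency_ms": 1.0, "loss_pct": 50.0, "usd_per_mbps": 20.0}

def score(metrics: dict) -> float:
    return sum(metrics[key] * weight for key, weight in WEIGHTS.items())

best = min(ROUTES, key=lambda name: score(ROUTES[name]))
print(best)  # isp-b: better latency and loss outweigh its higher price
```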

For almost 20 years, Internap has been using software to optimize the WAN and overcome performance challenges.

Learn more about how MIRO Controller improves performance for multi-homed networks.

Nov 4, 2015

Webinar series: Optimizing BGP networks through route control

Ansley Kilgore

While multi-homing your network can provide better performance, it also comes with a few challenges. For instance, how can you ensure optimal application performance when you lack visibility into your network's performance, availability and cost structure?

If you’re looking to efficiently manage your multi-homed networks, join us for a webinar series to learn the benefits of multi-homing and how organizations are dynamically optimizing BGP networks.

In this webinar, you’ll also see:

  • A live demonstration of Managed Internet Route Optimizer™ Controller, Internap's real-time route-optimization appliance that enhances BGP by evaluating path characteristics such as latency, packet loss, traffic and route stability.
  • Real-world examples of how MIRO Controller can improve visibility, enhance performance and reduce carrier costs.

Register for whichever session time is most convenient for you.

Want to know more beforehand? Learn more about MIRO Controller and multi-homed networks.

Nov 2, 2015

News roundup: OpenStack Summit Tokyo

Ansley Kilgore

On the heels of last week's OpenStack Summit Tokyo, below is a collection of articles to keep you updated on announcements and news from the OpenStack ecosystem.

OpenStack Adoption Accelerates to Support App Deployment, Study Finds

The findings from the most recent user survey were presented at the OpenStack Summit in Tokyo, which ran from Oct. 27-30. Results indicate that the percentage of production deployments is growing: in May 2015, 49 percent of deployments were in production, and that number has since risen to 60 percent. Respondents cited multiple reasons for choosing to deploy OpenStack. The top reason, cited by 77 percent, is a desire to accelerate innovation and deploy code and applications faster; another 76 percent identified avoiding vendor lock-in as a key driver for adoption. North America represents 44 percent of all users, with Asia holding the number-two spot at 28 percent. Watch the eWEEK slideshow to see the key findings of the October 2015 OpenStack User Survey. Read entire article here.

How OpenStack’s Project Navigator aims to steer users’ cloud choices

The new online Project Navigator tool from the OpenStack Foundation is designed to help firms pick through open-source cloud components. According to OpenStack Foundation COO Mark Collier, there are more than two dozen different services available that can be put into production based on different OpenStack projects. There’s a small number of projects that every cloud uses, or services that every cloud provides, but there are quite a few projects that offer optional services. For example, if you’re doing big data, you might want to use the Sahara project. If you want to automate databases, you might look at Trove. While it’s good to have options, it can be overwhelming. Project Navigator aims to help make sense of the various projects by offering potential users information drawn from a number of sources. Read entire article here.

OpenStack vs Amazon: Where’s the Money?

At the OpenStack Summit, Jonathan Bryce, executive director of the OpenStack Foundation, delivered the opening keynote address which outlined the successes of the past six months. While OpenStack is growing, media and analysts asked how OpenStack compares to Amazon. “There is no question that AWS is extremely impressive and doing a great job of creating a market, but I think if you look at public OpenStack clouds, there are more points of presence around the world than Amazon,” said Mark Collier, COO of the OpenStack Foundation. “If you look at private clouds, the largest company in the world, Walmart, is running OpenStack.” Watch a video of the post-keynote session here. Read entire article here.

OpenStack Foundation Announces Certification Program For Cloud Admins

At the OpenStack Summit Tokyo developer conference, a new certification program for OpenStack cloud admins was announced. Given the complexity of OpenStack, which consists of a large number of sub-projects, it can be difficult for businesses that want to adopt the technology to find qualified administrators. Admins who want to be certified will have to take a virtual certification test that will be available globally. The foundation expects to administer the first tests in 2016. About 20 training providers will offer courses to help admins prepare for the test. OpenStack Foundation COO Mark Collier said the organization plans to look at offering similar certifications for OpenStack developers and other roles in the future. Read entire article here.

The top 6 new guides for working with OpenStack

With so much going on in OpenStack, the open source cloud computing project, it can be difficult to keep track of what’s new and to learn how to use it. Fortunately, there are a lot of resources out there to help, including third-party training, listservs, IRC channels, and of course the official documentation. There are also a great number of community-written tutorials, guides, and howtos for OpenStack that can be a helpful way to learn. Every month, Opensource.com collects the best in new community tutorials written in the prior month and brings them to you in one handy collection. Without further ado, check out some of our favorites from last month. Read entire article here.
