
Feb 27, 2017

AWS Availability: How to Achieve Fault Tolerance and Redundancy in EC2

Paul Painter, Director, Solutions Engineering

With millions of customers and a host of entry-level services that help companies quickly on-board the cloud, it’s no surprise that organizations are shifting mission-critical apps and services onto the elastic compute cloud (EC2). According to AWS (Amazon Web Services), “Amazon Elastic Compute Cloud (Amazon EC2) provides computing resources, literally server instances, that you use to build and host your software systems.

“Amazon EC2 is a natural entry point to AWS for your application development. You can build a highly reliable and fault-tolerant system using multiple EC2 instances and ancillary services such as Auto Scaling and Elastic Load Balancing.”

Fault Tolerance & Redundancy: The Same, But Different

First up? Defining the difference between redundant and fault-tolerant solutions. While the terms are certainly related — and often used interchangeably — they’re not exactly the same. And although there’s no hard-and-fast rule regarding the definitions, the commonly accepted answer goes like this:

  • Components — such as disks, racks or servers — are redundant.
  • Systems — such as disk arrays or cloud computing networks — are fault tolerant.

Put simply, redundant means having more than one of something in case the first instance fails. Having two disks on the same system that are regularly backed up makes them redundant, since if one fails the other can pick up the slack. If the entire system fails, however, both disks are useless. This is the role of fault tolerance: to keep the system as a whole operating even if portions of it fail.

According to AWS, “Fault-tolerance is the ability for a system to remain in operation even if some of the components used to build the system fail.” The AWS platform enables you to build fault-tolerant systems that operate with a minimal amount of human interaction and up-front financial investment.

So, how does this apply to EC2 and the Amazon cloud?

Saving Grace

For many companies, the cloud acts as both home for applications and a flexible DR service in the event of local systems failure. But what happens when the cloud itself goes down? Like all cloud providers, Amazon has experienced outages due to weather, power failures and other disasters; and while the company promises 99.95 percent uptime for its compute instances, that SLA still allows roughly four and a half hours of downtime per year.
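
That figure is easy to sanity-check. The short Python calculation below is a minimal sketch that converts an uptime SLA into the downtime it still allows each year; the 99.95 percent figure comes from the paragraph above, and the rest is simple arithmetic.

```python
# Convert an uptime SLA into the downtime it still allows per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years


def allowed_downtime_hours(sla: float) -> float:
    """Return the hours per year an uptime SLA leaves uncovered."""
    return (1 - sla) * HOURS_PER_YEAR


print(f"{allowed_downtime_hours(0.9995):.2f} hours/year")  # ~4.38 hours
```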

Using Amazon as a DR solution is now both possible and recommended, but it isn’t perfect. To address this, EC2 includes several tools that help companies increase both total redundancy and overall fault tolerance — among them Availability Zones, elastic IP addresses and snapshots — and a fault-tolerant, highly available system must take advantage of these features and use them correctly.

Ramping Up AWS Redundancy

How do companies address the issue of redundancy in their EC2 instances? It starts with Availability Zones (AZs). These zones are divided by region — meaning if you’re on the West Coast of the United States you’ll have a choice of multiple zones along the coast that are independently powered and cooled, and have their own network and security architectures. AZs are insulated from the failures of other zones in the group, making them a simple form of redundancy. By replicating your EC2 instance across multiple AZs, you significantly reduce the chance of total outage or failure.
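
As a concrete illustration, here is a minimal boto3 sketch that spreads instances across two AZs in a single region. The AMI ID, key pair and zone names are placeholders rather than values from the article.

```python
import boto3

# Launch one instance in each of two Availability Zones so the loss of a
# single zone does not take the whole service offline.
ec2 = boto3.client("ec2", region_name="us-west-2")

for zone in ("us-west-2a", "us-west-2b"):      # hypothetical zone names
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",       # placeholder AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
        KeyName="my-keypair",                  # placeholder key pair
    )
```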

It’s worth noting that bandwidth across zone boundaries costs $0.01/GB, which is a fraction of the cost of Internet traffic at large but is important to consider when calculating cloud costs. It’s also important to remember that information transfer does have an upper limit bounded by the speed of light, meaning that if you’re using two geographically distant AZs to house your EC2 instances you may experience some latency in the event of a failure.
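
For budgeting purposes, the per-gigabyte rate quoted above translates directly into a monthly estimate; the traffic volume in this sketch is hypothetical.

```python
CROSS_AZ_RATE_USD_PER_GB = 0.01   # rate cited above
replicated_gb_per_month = 2_000   # hypothetical cross-AZ replication volume

monthly_cost = replicated_gb_per_month * CROSS_AZ_RATE_USD_PER_GB
print(f"Estimated cross-AZ transfer cost: ${monthly_cost:.2f}/month")  # $20.00
```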

Amazon Web Services is available in multiple geographic Regions, each with multiple Availability Zones (AZs), providing easy access to redundant deployment locations.

Finding Fault Tolerance

As noted by the AWS Reference Architecture for Fault Tolerance and High Availability, while higher-level services such as the Amazon Simple Storage Service (S3), Amazon SimpleDB, Simple Queue Service (SQS) and Elastic Load Balancing (ELB) are inherently fault-tolerant, EC2 instances come with a number of tools that must be properly used to achieve overall fault tolerance.

For example, employing ELB can help migrate workloads off failed EC2 instances and ensure you’re not wasting resources, while creating an Auto Scaling group behind an existing ELB load balancer will automatically terminate “unhealthy” instances and launch new ones. Also critical is the use of elastic IP addresses, which are public IP addresses that can be mapped to any EC2 instance in the same region, since they’re associated with your AWS account rather than with the instance itself.
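
A minimal boto3 sketch of that pairing might look like the following; the launch configuration, group and load balancer names are placeholders, not values from the article.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

# Launch configuration the group will use to replace failed instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",        # placeholder name
    ImageId="ami-0123456789abcdef0",         # placeholder AMI
    InstanceType="t2.micro",
)

# Auto Scaling group tied to an existing Classic ELB load balancer;
# instances the ELB marks unhealthy are terminated and replaced.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=4,
    AvailabilityZones=["us-west-2a", "us-west-2b"],
    LoadBalancerNames=["web-elb"],           # placeholder ELB name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```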

In the event of a sudden EC2 failure, an elastic IP address lets you shift network requests and traffic to a healthy instance in under two minutes. It’s also a good idea to make use of snapshots in combination with S3 — by taking regular point-in-time snapshots of your EC2 instance, saving them to S3 and replicating them across multiple AZs, it’s possible to reduce the impact of unexpected or emerging faults.
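
In boto3 terms, the failover and snapshot steps might look like the sketch below; all instance, volume and allocation IDs are placeholders. EBS snapshots are stored durably by AWS (backed by S3), and copying one into a second region is an optional extra layer of protection beyond the multi-AZ replication described above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Remap an Elastic IP from a failed instance to a healthy standby.
ec2.associate_address(
    InstanceId="i-0abc123def4567890",           # placeholder standby instance
    AllocationId="eipalloc-0123456789abcdef0",  # placeholder Elastic IP
    AllowReassociation=True,                    # allow moving an in-use address
)

# Take a point-in-time snapshot of the instance's EBS volume...
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",           # placeholder volume
    Description="Nightly point-in-time snapshot",
)

# ...and copy it into another region for additional protection.
ec2_dr = boto3.client("ec2", region_name="us-east-1")
ec2_dr.copy_snapshot(
    SourceRegion="us-west-2",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="Cross-region copy of nightly snapshot",
)
```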

Mission-critical workloads now have a place in Amazon’s EC2 offering. Ensuring the high availability these workloads demand, however, means making the best use of both the redundancy and fault-tolerance tools included with any Elastic Compute Cloud instance.

Feb 14, 2017

The Future of SQL: Database Standard or Sidelined Solution?

INAP

The rise of big data and real-time analytics has changed the value proposition of SQL Server. As noted by Datanami, SQL-on-Hadoop offers significant benefits and is driving market competition, while TechTarget highlights that SQL pros are often left out of the loop as Java plays a more important role. What’s the future of SQL?

Current Conditions

SQL Server 2016 was well received by users, and although many still prefer the 2014, 2012 or even 2008 R2 versions, features of interest — such as Microsoft’s new “Nano” SQL Server deployments, which require significantly less space to deliver a high-quality database experience — have encouraged many companies to switch. Thankfully, the lingering attachment to older versions hasn’t stopped the Redmond giant from pushing SQL forward and finding new ways to broaden the appeal of its relational database technology.

According to TechCrunch, for example, Microsoft has now opened the Linux beta version of its SQL Server to the public. And thanks to support for Docker containers, it’s possible to run the Linux iteration on macOS, creating a crossover many users thought would never occur. In fact, this is a significant departure for Microsoft, which has historically looked for ways to compete rather than collaborate with other tech giants, especially Apple. Tools such as the massively popular productivity suite Office have long been Windows-only and PC-only offerings — until now.

In large measure, this change stems from a recognition that cloud and other distributed technology services are the way forward for IT, and that companies will no longer accept the notion that a single provider can handle all of their service and security needs. Given the massive success of Azure, it appears Microsoft is taking this lesson to heart and now providing improved opportunities for companies and users who love SQL but prefer a non-Windows OS.

Along with the public release of the SQL Linux client, the company also announced changes to programmability features: A large number of SQL users, including those running the free “Express” variant, now have access to features previously gated for Enterprise edition.

The Hadoop Happening

One of the biggest forward pushes for SQL in the marketplace at large is the development of SQL-on-Hadoop. According to the Datanami piece, efforts by multiple companies to build better Hadoop/SQL combinations have led to significant performance gains from mainstream products such as Hive, Impala, Spark SQL and Presto, resulting in query results two to four times faster than just a few months ago. Impala and Spark especially are delivering great large-join performance thanks to a feature called “runtime filtering,” which reduces the total volume of data that needs to be scanned.
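
Runtime filtering happens inside the engine rather than in user code, but the kind of query it accelerates looks like an ordinary large join. The PySpark sketch below uses hypothetical table paths and columns; the selective filter on the smaller table is what runtime filtering uses to prune scans of the larger one.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("large-join-example").getOrCreate()

# Hypothetical warehouse tables: a large fact table and a smaller dimension.
spark.read.parquet("hdfs:///warehouse/orders").createOrReplaceTempView("orders")
spark.read.parquet("hdfs:///warehouse/customers").createOrReplaceTempView("customers")

# The filter on the small side lets the engine skip most of the big table.
result = spark.sql("""
    SELECT c.region, COUNT(*) AS order_count
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    WHERE c.segment = 'enterprise'
    GROUP BY c.region
""")
result.show()
```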

Better still? Taking advantage of these gains doesn’t require companies to buy extra hardware or change query structure. It’s also worth noting that while open-source alternatives can’t quite match the speed of proprietary solutions yet, their development is progressing faster than that of similarly equipped single-company engines. In other words? Proprietary engines have the advantage for the moment — but not for long.

No discussion of Hadoop would be complete without talking about language. As discussed in the TechTarget piece, what many data pros don’t talk about when it comes to SQL-on-Hadoop is that Java plays a larger role than T-SQL, meaning some database programming veterans are at risk of getting left behind. And while Microsoft could simply chalk this up to the inevitable shift of databases away from single-language design, the company is looking for a way to empower T-SQL users and get them back on board.

The answer? U-SQL. Although it’s a dialect of T-SQL, there are several differences between the new offering and standard SQL: it can handle disparate data; it supports C# extensions and .NET libraries; it automatically deploys code to run in parallel; and it supports queries on all kinds of data, not just the relational data found in SQL. Currently part of Microsoft’s Azure Data Lake Analytics public preview, the new language is fundamentally designed to give data professionals “the access to a big data platform without requiring as much learning,” which may be a critical feature as IT pros are bombarded with new tools to master and new techniques to help streamline the performance of technology departments.

Moving Forward

As noted by DZone, the first computerized database models emerged in the 1960s. By the 1970s, dedicated tools such as SQL had been developed, and in the following decade SQL became the de facto industry standard. The 1990s brought a significant shift, however, as single-server databases struggled to contend with massive data volumes and resource requirements. By the turn of the century, alternatives such as NoSQL and Hadoop emerged, even as data velocity, variety and volume continued to skyrocket. Today, companies are turning to scale-out SQL solutions rather than trying to match the pace of data by scaling up internally, especially as real-time analytics becomes a critical component of long-term corporate strategy.

Yet where does SQL go from here? One possibility is all-out replacement, in which proprietary SQL databases are replaced by open-source alternatives or flexible databases from other companies. A more likely scenario, however, is the development of distributed SQL environments that provide access to scale on demand, support the integration of other toolsets and enable the addition of real-time analytics tools. Microsoft’s current trajectory supports this aim: the move to develop a Linux SQL offering, support for ongoing Hadoop integration, and a focus on cooperating with rather than competing against open-source alternatives. What’s more, the company seems focused on lowering the barrier to entry with initiatives such as U-SQL, which will help data professionals on-board more quickly and make better use of SQL resources.

Bottom line? The relational database isn’t dead; it’s simply undergoing a cloud-enabled transformation. While it’s unlikely that SQL will retain its position as the de facto standard for organizations, Microsoft’s current efforts and future plans point to database development that should support SQL as an integral part of the new, cloud-connected, scale-out database environment.

Updated: January 2019

Feb 9, 2017

Microsoft Azure 101: What You Need to Know Before Deploying

INAP

Microsoft’s cloud offering was late to the party, but still a big hit among business users — 82 percent of North American businesses have evaluated Azure, and the same number say it’s part of their overall cloud strategy. While Microsoft has done its best to ensure a simple transition from on-prem offerings or other clouds, Azure adoption can get complicated if you’re not prepared.

Here’s what you need to know about making the move, what’s currently on offer and what may come next for Azure.

The Basics of Azure

Most companies are familiar with Amazon’s AWS cloud services, which are widely considered simple, powerful and relatively inexpensive. While Azure tops Amazon in some cases and pulls on par in others, there are both overlaps and distinct differences that can make Azure adoption somewhat confusing. Let’s look at the basics.

First up, some history. Microsoft rolled out Azure in 2010 to compete with Amazon. Thanks to a solid service ramp-up of PaaS and IaaS, cost-effective pricing for Enterprise Agreement customers and the cloud linkage of familiar Microsoft services such as Office and SQL Server, Azure has enjoyed substantial growth year over year.

However, it’s worth breaking down the details of this cloud offering before making a move. For example, Azure virtual instances come in three different abstraction levels to suit multiple business needs:

  • Virtual Machines — This IaaS deployment provides customizable Windows and Linux virtual machines and gives local IT complete control over the OS.
  • Cloud Services — Azure’s middle ground, this tier represents Microsoft’s original PaaS offering, which provides scalable cloud apps and affords in-house pros some control over VMs and OS architecture.
  • App Services — This PaaS play provides fully managed Web, Mobile and API apps for companies, while Microsoft handles the hardware, OS, networking and load balancing.

Azure Terms and Terminology

Before moving to Azure, it’s a good idea to get familiar with Microsoft’s block and object storage terminology to avoid any confusion down the line. While AWS calls its block storage offering “Elastic Block Store,” Microsoft opted for “Page Blobs.” Object storage instances, meanwhile, are known as “Block Blobs” in Azure, which often trips up admins trying to provision disk-like block storage — in Azure, it’s Page Blobs, not Block Blobs, that serve that role.
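
The naming matters once you reach for the SDK. The sketch below uses the current azure-storage-blob Python package, which post-dates the original article, and hypothetical connection, container and blob names; note that Page Blob payloads must be aligned to 512-byte boundaries.

```python
from azure.storage.blob import BlobServiceClient

# Connection string and container name are placeholders.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("demo-container")

# Object storage: a Block Blob (Azure's equivalent of an S3-style object).
container.upload_blob(name="reports/report.csv",
                      data=b"id,value\n1,42\n",
                      blob_type="BlockBlob")

# Disk-like block storage: a Page Blob, sized in 512-byte pages.
container.upload_blob(name="disks/disk0.vhd",
                      data=b"\x00" * 512 * 1024,
                      blob_type="PageBlob")
```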

Pricing, meanwhile, is straightforward: It’s entirely per-hour and on-demand. But that doesn’t mean there’s a single price for Azure. Instead, Microsoft leverages its Enterprise Agreements (EA) to great effect here, providing organizations steep discounts depending on how much companies want to spend upfront and the length of term they’re willing to accept. By and large, the use of these EAs has significantly accelerated the growth of Azure over the last few years.

Azure Instance Types

Microsoft has also broken down Azure into multiple instance “types,” giving companies the ability to select services and features that best meet their needs. While more than 80 types are available, Microsoft groups them into larger categories, which include the following (a short sketch after the list shows how to enumerate the available sizes programmatically):

  • A-Series — General-purpose instances, A-series offerings promise consistent processor performance and “modest” amounts of memory and storage. Some are tailored to handle more compute-expensive tasks with increased core numbers.
  • D-Series — More compute power and storage thanks to more cores, more memory on each core and solid-state drives (SSD) for temp storage.
  • Dv2-Series — This series offers 35 percent more processing power than its predecessor thanks to newer processors running in Turbo mode.
  • F-Series — Using the same processor cores as the Dv2-series, the F-series comes in at a lower per-hour cost for budget-conscious companies.
  • G-Series — The big memory series, offering the highest capacity and using Intel Xeon E5 V3 processors.
  • H-Series — Designed for companies with compute-intensive projects, the H-series can handle large-scale modeling, HPC clusters and simulations.
  • N-Series — Pure performance; the N-series adds graphical processing units (GPUs) to take on the most demanding workloads.
  • DS, DSv2, FS and GS-Series — A mixed bag that offers high-performance storage, processors and memory with the ability to use SSD storage and caching, in turn maximizing I/O operation. Ideal for organizations that require specialized cloud instances.
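
For reference, the sizes available in a given region can be enumerated with the Azure SDK for Python. This is a minimal sketch using the azure-mgmt-compute and azure-identity packages, which post-date the article; the subscription ID and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; region chosen for illustration.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for size in client.virtual_machine_sizes.list(location="eastus"):
    print(f"{size.name}: {size.number_of_cores} cores, {size.memory_in_mb} MB RAM")
```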

General Advice

Once you’ve made the decision to move any or all of your workload to Azure, it’s worth taking the time to draft a solid migration plan. First, decide why you’re moving specific services to the cloud and which aspect of the Azure stack best suits your needs. While it’s a good idea to slightly overprovision rather than opt for a 1:1 ratio between demands and performance, resist the urge to purchase above and beyond a reasonable buffer. There are two reasons for this. First, cloud computing costs continue to fall as large-scale providers “race to zero,” meaning you’ll get more bang for your buck over time rather than upfront. In addition, the evolution of processor cores, storage media and GPUs means that “best of breed” today may be middle-of-the-road in 12 months.

The Future of Azure

So, what’s next for Azure? Microsoft plans to transition calling service Skype away from P2P and entirely into the cloud over the next few months as the company looks to support real-time collaboration across both desktops and mobile devices.

Meanwhile, Microsoft also has its eye on the private cloud: At the company’s recent Ignite conference, new Azure Stack private cloud deployments were announced for Dell, Lenovo and HPE. These new Stacks won’t be available until mid-2017 but speak to a savvy move on Microsoft’s part to tap the growing private cloud market with an enterprise-ready solution. Azure is also partnering with Docker to accelerate the use of containers on Azure Stack — as part of this plan, any commercial customers who purchase a new Stack will also receive a Docker commercial license.

While Microsoft wasn’t an early adopter of the cloud and faced significant competition from established offerings such as AWS, the combination of high-value services, straightforward pricing and continual evolution of the Azure offering have made it a force to be reckoned with in the cloud computing market. Before adopting Azure, however, it’s worth knowing the basics, identifying the ideal deployment type and getting a handle on where this cloud is headed.

Feb 7, 2017

INAP’s Dallas Data Center Earns EPA’s ENERGY STAR® Certification for Superior Energy Efficiency

INAP

Internap Corporation (NASDAQ: INAP) (“INAP” or the “Company”), a leading technology provider of high-performance Internet infrastructure services, today announced that its Dallas Data Center, located at 1221 Coit Road in Plano, Texas, has earned the U.S. Environmental Protection Agency’s (EPA’s) ENERGY STAR certification, which signifies that the industrial facility performs in the top 25 percent of similar facilities nationwide for energy efficiency and meets strict energy efficiency performance levels set by the EPA.

“We take great pride in our Dallas data center being the fourth INAP facility to achieve the Energy Star certification seal,” stated Corey Needles, senior vice president and general manager of INAP Colocation. “This achievement exemplifies our continued commitment to keeping our facilities as operationally efficient as possible while simultaneously being responsible stewards of the environment.”

“INAP now places in the top three nationally for data center operators with Energy Star certification,” Needles concluded. “We’re excited to remain in a leadership position in the energy efficient data center conversation.”

Only 101 data centers nationally have earned the ENERGY STAR certification. Unique to this and other INAP facilities is achieving the certification while providing high-power-density infrastructure services, meaning customers can power more of their equipment in the colocation environment using less space, which saves them unnecessary space and infrastructure cost.

“Improving the energy efficiency of our nation’s industrial facilities is critical to protecting our environment,” said Jean Lupinacci, Chief of the ENERGY STAR Commercial & Industrial Branch. “From the plant floor to the board room, organizations are leading the way by making their facilities more efficient and earning EPA’s ENERGY STAR certification.”

To earn the ENERGY STAR certification, INAP collected and shared 12 full months of detailed total energy data (“data center energy intensity”) that met the stipulated guidelines, and procured and installed energy-efficient appliances and infrastructure in our Dallas data center. Moreover, among ENERGY STAR-certified data centers, the INAP Dallas Data Center is:

  • 1.8 times more energy efficient than the national average for data centers (92 ENERGY STAR performance rating)
  • 19.4% below the national average of energy intensity – the amount of energy the data center consumes
  • 19.4% below the national average of greenhouse gas emissions – based on an analysis of the data center’s power consumption, utility provider and the power plant

ENERGY STAR was introduced by EPA in 1992 as a voluntary, market-based partnership to reduce greenhouse gas emissions through energy efficiency. Today, the ENERGY STAR label can be found on more than 65 different kinds of products, 1.4 million new homes, and 20,000 commercial buildings and industrial plants that meet strict energy-efficiency specifications set by the EPA. Over the past twenty years, American families and businesses have saved more than $230 billion on utility bills and prevented more than 1.8 billion metric tons of greenhouse gas emissions with help from ENERGY STAR.

INAP is excited to offer the Dallas data center market an EPA ENERGY STAR certified data center. For more information about ENERGY STAR Certification for Industrial Facilities: www.energystar.gov/labeledbuildings

Our Dallas data centers are conveniently located in the Dallas-Fort Worth metroplex:

  • Flagship Dallas Data Center
    1221 Coit Road
    Plano, Texas 75075
  • Dallas Data Center
    1950 N Stemmons Fwy
    Dallas, Texas 75207
  • Data Center Downtown Dallas POP
    400 S Akard Street
    Dallas, Texas 75202

More About Our Flagship Dallas Data Center

Our flagship Dallas data center is located at 1221 Coit Road, Plano, TX 75075. This Dallas colocation data center connects to Atlanta, Silicon Valley, Los Angeles, Phoenix, Chicago and Washington, D.C. data centers via our reliable, high-performing backbone. Our carrier-neutral, SOC 2 Type II Dallas data center facilities are concurrently maintainable, energy efficient and support high-power density environments of 20+ kW per rack. If you need colocation Dallas services, or other data center solutions in the Dallas-Fort Worth metroplex, contact us.
