
Jun 29, 2015

Server backup: Is your data at risk?

Paul Painter, Director, Solutions Engineering

Most people view backups like insurance policies. Nobody likes spending money on something intangible that only benefits you in certain situations. But backups are incredibly useful to have when a disk crash occurs, and they are a requirement for those of us responsible for data recovery. We all know that DDoS attacks, bots, scripts, disk crashes and any number of unforeseeable data-loss events make it necessary to have a data backup solution.

Backup history

The concept of backup has always existed. Even before today’s technology, people made copies of documents by hand and usually stored them in different locations. This took time and energy and was prone to errors.

The development of new backup solutions has evolved alongside advances in storage and computing. Punch cards and magnetic tape were introduced, but these media were not reliable enough.

Today, we are usually backing up on hard disks and using software like R1Soft Server Backup Manager to copy, compress and retrieve our data when necessary. This approach offers a faster, more reliable and economical backup solution compared with tape-based backup.

HorizonIQ backup solutions

HorizonIQ provides simple ways to protect against data loss without requiring the user to do any scripting. We use existing infrastructure technologies to provide incremental data backup services for on-premises as well as off-premises servers.

Dedicated backup servers
For those with large backup needs (more than 1 TB), we can turn a dedicated server into a dedicated backup server. By doing so, you are able to protect several servers at once, whether or not they are hosted at HorizonIQ.

Shared backups
For smaller backup needs (up to 1 TB) on HorizonIQ-hosted servers, shared backup offers an affordable way to protect critical data in a shared environment.

Benefits of block-level backups

Our server backup solution includes R1Soft, which leverages block-level backups. These operate below the filesystem and examine blocks of data on the disk, regardless of the files with which those blocks are associated. When the data in a particular block changes, only that block is included in the incremental or differential backup rather than the whole file.

Although implementing a block-level backup system requires a deeper understanding of lower-level computer architecture, performing server backups in this manner offers several benefits:

  • Only changed blocks are included in incremental and differential backups. The backup process is faster, reduces wear on the disk drive and takes up less network bandwidth.
  • Unlike file-level backup systems, a block-level backup system can do a bare-metal disk restore. A bare-metal disk restore provides a bit-for-bit copy of the original, with all data in the same physical location on the restored disk as it was on the original. A file-level restoration enables the operating system to determine where to place data, which may or may not be optimal.
  • Because a block-based backup system takes a snapshot of the backed-up disk, the impact on the disk is reduced. Generally, creating a snapshot takes less than a second, which isn’t long enough to cause performance problems that could affect end users. The reduced impact on servers allows system administrators to perform backups more frequently, which shrinks the backup window and reduces the chance of data loss.
  • Some block-level backup software tracks which blocks belong to each file in the file system. This can be challenging, as the blocks associated with a file are not necessarily located in a contiguous chunk on the disk and may even be scattered across different locations. The advantage of this method shows when restoring an individual file or collection of files: the software needs to do less upfront processing of the backup data before collecting and restoring the files.
  • Another advantage of some block-level backup systems is the ability to work across various operating systems. For example, a Windows server can be backed up onto a Linux-based backup server and vice versa.
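
To make the idea concrete, below is a minimal, hypothetical sketch of block-level change detection in Python. It is not R1Soft's implementation; it simply hashes fixed-size blocks of a disk image and stores only the blocks whose hashes changed since the previous run, together with an index that maps block numbers to content hashes.

```python
import hashlib
import json
import os

BLOCK_SIZE = 4096  # bytes per block; real products use device- or filesystem-specific sizes


def incremental_backup(device_path, backup_dir, prev_index=None):
    """Store only the blocks whose contents changed since the previous backup.

    Returns an index mapping block number -> content hash; together with the
    stored blocks, the index is enough to reassemble the disk bit for bit.
    """
    prev_index = prev_index or {}
    new_index = {}
    os.makedirs(backup_dir, exist_ok=True)
    with open(device_path, "rb") as dev:
        block_no = 0
        while True:
            block = dev.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            new_index[str(block_no)] = digest
            if prev_index.get(str(block_no)) != digest:
                # Changed (or new) block: store it once under its content hash.
                with open(os.path.join(backup_dir, digest), "wb") as out:
                    out.write(block)
            block_no += 1
    with open(os.path.join(backup_dir, "index.json"), "w") as idx:
        json.dump(new_index, idx)
    return new_index
```

A restore would simply walk the index in block order and write each stored block back to the same offset, which is what makes the bare-metal, bit-for-bit restore described above possible.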

Data is increasingly becoming an organization’s most valuable commodity. According to EMC’s Global Data Protection Index, data loss now costs companies more than $1.7 trillion per year worldwide. We strongly recommend choosing a backup strategy that will help ensure you stay in business.

Jun 25, 2015

News from the gaming front: E3 2015

Ansley Kilgore

The video game world witnessed some groundbreaking announcements during the Electronic Entertainment Expo (E3 2015) in Los Angeles last week. We’ve compiled news highlights from the show to keep you informed on the latest happenings in the video game industry.

E3 2015: Five Days in LA that Left Gamers Stunned

E3 presented new console and PC games, including the next FIFA game, which was introduced by the world’s most famous footballer, Pelé. As expected, this year’s event included a large focus on virtual reality (VR) technology, with Ubisoft and Sony making heavy investments in VR titles. However, few VR games were readily available for play, which will presumably change in the near future when new VR headsets hit the market. Read entire article here.

Virtual Reality is Finally Growing Up

The virtual reality headsets on display at E3 included some expected household names, as well as other game publishers that have risen to the challenge. The Morpheus VR headset from Sony PlayStation and the Oculus Rift headset are both anticipated to be released in the first quarter of 2016. As part of the PlayStation console ecosystem, Morpheus has a well-established distribution mechanism, while the Rift headset requires a powerful PC. Additional headsets were unveiled from Fove Inc., Starbreeze and others. Read entire article here.

HoloLens Brings 3D Media Into the Physical World

What could be more fascinating than virtual reality? Augmented reality. Specifically, Microsoft’s HoloLens demo, which populates the user’s surroundings with holographic images that look real. The Minecraft HoloLens demo stole the show at E3, making digital objects appear as part of your field of vision. Microsoft hasn’t announced a definite release date at this time, but HoloLens could be available in the fourth quarter of this year. Read entire article here.

These Are the 10 Most Promising Games of E3 2015

The E3 show is large enough for everyone, not just the big game publishers with seemingly unlimited budgets. While Microsoft, Sony and other huge names may get most of the spotlight, here’s a top ten list of awesome games that didn’t get as much attention. Read entire article here.

Jun 23, 2015

Customer Spotlight: Twelvefold targets online audiences with big data

Ansley Kilgore

Twelvefold is an advertising technology company built on big data, helping marketers reach the right consumers, in the right places, at the right time. The company’s unique approach is driven by understanding content – what people are reading or watching – rather than tracking users or devices, as most of the industry does.

The company has built a search engine on top of a high-frequency trading system. It leverages the real-time nature of programmatic buying, uniquely allowing advertisers to say “I want to appear on articles like these.” Twelvefold’s SPECTRUM™ platform effectively surrounds trending stories or breaking news that a brand’s prospective customers are likely to read. The solution cherry-picks relevant content, rather than targeting a whole website or something broad like “the health section” of the local paper.

Twelvefold’s unique value proposition relies on ultra-low latency network connectivity. This is required to evaluate advertising opportunities in real time, and to determine which placements are most relevant to specific brand campaigns. Twelvefold buys its inventory through live auctions that require bids to be placed within 10 milliseconds.

How it works

Twelvefold has the potential to bid on 200 billion advertisements each day. It scores nearly 20 billion of those opportunities: placements on brand-safe websites whose article text it scrapes, much like a search engine. Proprietary algorithms and natural language processing are used to calculate relevancy at the page (URL) level, while most companies buy advertising at the site (domain) level.
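
As a purely illustrative sketch (not Twelvefold’s proprietary algorithm), page-level relevancy scoring can be thought of as comparing a campaign’s keywords against the scraped text of a specific URL rather than against the site as a whole. The keyword set and article snippets below are made up.

```python
import math
import re
from collections import Counter


def term_frequencies(text):
    """Lowercased word counts for a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def relevancy(campaign_keywords, page_text):
    """Cosine similarity between campaign keywords and a page's scraped text.

    A toy stand-in for page (URL) level scoring; real systems use far richer
    NLP features and must return an answer within a few milliseconds.
    """
    a, b = term_frequencies(campaign_keywords), term_frequencies(page_text)
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Score two hypothetical article pages for a running-shoe campaign.
keywords = "marathon training running shoes injury prevention"
print(relevancy(keywords, "Five tips for marathon training and choosing the right running shoes"))
print(relevancy(keywords, "Quarterly earnings beat expectations as cloud revenue grew sharply"))
```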

The underlying analytics and big data that fuel Twelvefold’s real-time buying decisions require streaming enormous volumes of data, without the benefit of batch processing. Internap provides route optimization and BGP management to support this mission-critical traffic.

The opportunities (called “impressions”) arrive at a peak rate of 300,000 per second. With such high transaction volume, speed is business-critical. Twelvefold has been fine-tuning its systems for years. The company works with Internap’s engineering group to optimize network connectivity at all levels, from the data center to the servers.

Establishing and implementing a comparable integrated solution, including routers, switches, fiber, etc., would cost millions of dollars that would be better spent on business-critical technologies. Internap’s aggregated network and colocation solutions allow Twelvefold to focus on their application and their business, instead of spending time and money managing their network and dealing with carriers directly to ensure consistent performance.

“Our partnership with Internap lets us focus on what makes our technology unique,” explains Twelvefold president, David Simon. “With colocation, route-optimized IP and Internap’s world-class team of NOC engineers, we can move mountains of data at rapid speeds, and still employ only one full-time technical operations resource.”

Jun 19, 2015

What are modular data centers?

INAP

Modular data centers seem to be a hot topic these days, but to a large degree, much of the discussion is “modular-washing”. As a result, the importance of modularity, like that of the equally important “green data center”, which was muted by “greenwashing”, is being lost.

With that said, let me explain what modularity means to me. A modular data center is one that uses prefabricated infrastructure or deploys standardized infrastructure in a modular fashion. That definition is general enough to be ripe for exploitation, and so it is being exploited. I mean, is anyone out there deploying their infrastructure or growing their data center in a way that couldn’t be considered modular?

From brick-and-mortar to containers

Interestingly, I see two primary camps on modular data centers. There are the “traditional” “brick-and-mortar” buildings – I’m doing a terrible injustice to some of the facilities out there with this description, but bear with me – and then there are the “containers”.

A lot of the initial discussion about modularity was from brick-and-mortar multi-tenant data center providers, including Internap. For us, modularity meant predictability for our customers. Each new phase of our data centers mirrored the data center experience that our customers were already enjoying. Modularity also created predictability from our vendors, designers, manufacturers, permitters and planners.

Modularity also meant speed in deployment at a lower cost, because it was the same plan executed over and over again. That’s exciting stuff to someone who has outsourced their data center needs to a third-party data center provider. But even the private data center providers recognized that predictability and speed were good things. At this point, any brick-and-mortar data center being built with scale in mind is likely being built with modularity as a key component. It’s still incredibly beneficial but about as exciting as N+1.

A container data center is the ultimate expression of a modular data center. It’s a steel box containing a bunch of compute equipment along with power and cooling management infrastructure (another gross oversimplification). As a result, each container is an operating data center. To grow the data center, you add another container. Super modular. But I haven’t seen this work in a multi-tenant environment, so in my opinion, this option is best suited for those who want to manage and operate their own infrastructure and who have relatively small, specialized needs.

Modular washing

Some might say that I have left out a newer modular solution. They see “container” within a “brick and mortar” as the ultimate expression of a modular data center. Providers who offer such solutions often talk about how they differ from containers by being purpose-built, integrated and user-friendly. While this is correct, it’s also “modular washing”. Less often, there is talk about how these solutions differ from a modern brick-and-mortar data center. That’s because the multi-tenant data center world has had those since the beginning. We call them cabinets, cages and suites.

Watch the video to learn more about the benefits of Internap’s modular data centers.

Jun 12, 2015

CTOs and CIOs can finally have it all with bare-metal cloud

Ansley Kilgore

For decades, the challenges of infrastructure and operations professionals have involved an eternal conflict: you can have either cheap and efficient infrastructure, or reliable and available infrastructure, but not both. While achieving this balance is still not easy, new advancements in cloud services are making it possible to get the best of both worlds.

Performance always matters

Definitions and expectations of cloud solutions may differ depending on whether you view them from a business or a technical perspective, but at the end of the day, end-user requirements will dictate platform choices. While the cloud offers CIOs the ability to scale rapidly and optimize infrastructure costs, technical decision makers have to focus on the application characteristics that need to be translated into architectural choices.

A few considerations:

  • Scalability – Business and technical decision makers understand the need to make their infrastructure future-proof from a scalability perspective. Workloads are generally not static, and must adapt to changing needs.
  • Openness – How open is the platform itself? The ability to programmatically manage infrastructure (see the sketch below) and determine which ecosystem you belong to are key considerations.
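
To make the “programmatically manage infrastructure” point concrete, here is a minimal, hypothetical sketch that provisions a bare-metal server through a generic REST API. The endpoint, field names and token are placeholders for illustration, not Internap’s or any specific provider’s actual API.

```python
import json
import urllib.request

API_URL = "https://api.example-baremetal.com/v1/servers"  # placeholder endpoint
API_TOKEN = "REPLACE_WITH_YOUR_TOKEN"                      # placeholder credential


def provision_server(hostname, plan, location, image):
    """Request a new bare-metal server from a hypothetical provisioning API."""
    payload = json.dumps({
        "hostname": hostname,
        "plan": plan,          # e.g. a CPU/RAM/disk configuration identifier
        "location": location,  # e.g. a data center code
        "image": image,        # e.g. an operating system image name
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


# Example call (requires a real endpoint and token):
# server = provision_server("app-01", "bm-8c-32gb", "nyc", "ubuntu-14.04")
```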

Bare-metal cloud: The holy grail

With emerging cloud options, you don’t have to choose between better, faster or cheaper – you can have all three without compromising. Bare-metal cloud offers dedicated physical servers optimized for application performance. A key factor is the ability to deliver that performance along with the scalability and elasticity of the cloud. Additionally, its price-performance advantages translate into significant cost efficiencies.

Bare-metal cloud can help you achieve the best of both worlds. Learn more in the webinar recording, Considering Bare Metal as a Viable Cloud Option, featuring guest speaker, Forrester Research’s Richard Fichera.

Jun 10, 2015

Challenges of hybrid cloud architecture

Ansley Kilgore

In the quest to create a hybrid cloud environment, many enterprises inadvertently create a monster. Instead of the benefits that come with the perfect balance of public and private cloud, the end result is sometimes an amalgamation of disparate legacy and cloud infrastructures that don’t play well together.

In a recent Network Computing article, “In Pursuit of the Purpose-Built Hybrid Cloud,” Susan Fogarty clearly states the problem: “The question isn’t public cloud vs. private cloud, but rather how to blend the two efficiently.” So how can organizations successfully create a hybrid cloud architecture?

Achieving success

In the article, Satish Hemachandran, Internap’s senior vice president and general manager of cloud and hosting, describes two types of customers that have the best success with cloud.

First is the “tech-savvy startup that is disrupting how the world is working fundamentally.” These businesses come to cloud without the constraints of legacy infrastructure and can capitalize on its benefits more easily. The second type is the enterprise that proactively decides to focus on using technology as a differentiator. Such companies may be large or small, but they have “woken up and realized they cannot afford to be disrupted,” Hemachandran said. “They are using technology to reinvent their businesses.”

Distil Networks chooses a different hybrid model: Case study

Distil Networks has created a different kind of hybrid cloud, one based on private hosted services that burst into shared resources. The website security provider protects its clients against attacks from malicious bots, which requires extremely high performance, said CTO Engin Akyol. “To be able to do bot protection on every single request, we need high compute, and to keep the latency in check, we need a global footprint and dedicated high-performance internetworking connections,” he explained. The four-year-old company has been migrating its environment from Amazon Web Services and five different managed service providers to the Internap cloud.

The bulk of Distil’s infrastructure is hosted on Internap’s bare-metal cloud services, Akyol said. Bare metal lets customers customize their own dedicated servers – hosted in Internap’s data center – as they wish with their own operating system and applications. These servers are not virtualized, said Internap’s Hemachandran, in order to provide the highest possible performance. “Virtualization adds overhead to your server in terms of how it performs, and with bare metal you don’t have that overhead. It’s that elimination of the overhead that gives customers an incremental performance advantage over a virtual server.”

Distil creates a hybrid environment by integrating with Internap’s other virtualized services. “We start at a baseline using bare metal, but we don’t overprovision there,” Akyol said. “We can utilize cloud virtual instances to scale up and handle huge traffic dumps, and then when the traffic goes away, we can turn the instances down to keep our costs relatively low.”
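
The pattern Akyol describes (a fixed bare-metal baseline plus virtual cloud instances that absorb traffic bursts) can be sketched as a simple scaling loop. This is an illustrative example only, with made-up capacity figures and placeholder functions, not Distil’s or Internap’s actual tooling.

```python
BAREMETAL_CAPACITY_RPS = 80_000  # requests/sec the bare-metal baseline absorbs (made-up figure)
INSTANCE_CAPACITY_RPS = 5_000    # requests/sec each virtual instance absorbs (made-up figure)

virtual_instances = 0


def current_traffic_rps():
    """Placeholder: return the current request rate from your metrics system."""
    raise NotImplementedError


def scale_virtual_instances(count):
    """Placeholder: call your cloud provider's API to resize the instance pool."""
    print(f"scaling virtual instance pool to {count}")


def autoscale_step():
    """Add virtual instances above the bare-metal baseline, remove them when traffic falls."""
    global virtual_instances
    overflow = max(0, current_traffic_rps() - BAREMETAL_CAPACITY_RPS)
    needed = -(-overflow // INSTANCE_CAPACITY_RPS)  # ceiling division
    if needed != virtual_instances:
        scale_virtual_instances(needed)
        virtual_instances = needed

# In production this step would run on a schedule (for example, every 30 seconds),
# so costs drop automatically as soon as a traffic spike subsides.
```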

Distil estimated it saved 10 percent on its immediate migration and expects that rate to triple. Moving from Amazon instances with multiple points of cost to a baseline of always-available resources provided a better cost structure, Akyol said. On top of that, Distil experienced double-digit performance gains and the security benefits of dedicated servers, which is especially important to the company as a security provider.

Learn more about how Distil Networks uses cloud services from Internap.

Jun 4, 2015

Colocation Q&A: Data center airflow for high-density environments

Ansley Kilgore

In our recent webinar on Critical Design Elements of High Power Density Data Centers, we received several questions about data center power and cooling. Let’s take an in-depth look at your colocation inquiries regarding data center air temperature and power needs.

Have a question about colocation? Tweet us @Internap with hashtag #AskTheExpert.
Ask The Expert is a new video series where we answer your questions on any topic within our industry. Follow us on Twitter to know when new videos are published and to ask us your questions.

Data center airflow for high-density racks

Q: What air temperature would you advise for 25kW racks?
A: While temperature is important, it is not the only factor in high-density environments. Managing high-density environments requires fine-tuning of multiple items, including inlet temperature, return air temperature, static pressure and airflow (cubic feet per minute, or CFM) at the face of a cabinet. Most data center operators will maintain the ASHRAE range, roughly 65-80°F inlet temperature. Obviously, the higher the inlet temperature, the less energy you are using to cool the air. In a colocation environment where strict service level agreements (SLAs) are in place, we keep our inlet temperature on the lower side.

Temperature aside, the airflow (CFM) that is required in front of higher-density cabinets is important. High-density cabinets pack many servers into a small area, and each one needs air pulled through it to cool its processors. This means you will need a steady stream of air (CFM) that matches the density of the cabinet. To provide that air, the raised floor tile is perforated to allow airflow from the pressurized underfloor plenum. In the case of a high-density cabinet, there are more and larger openings in the perforated tile to allow the required amount of air for that density. The rule of thumb for the amount of CFM needed per kW ranges from 100-160 CFM/kW, depending on design and proper airflow management. In addition, depending on density, you might need anywhere from a 4’-6’ cold aisle, which would hold at least two to three perforated tiles directly in front of the cabinet. The perforations allow a consistent stream of air to be pulled up through the floor across the front of the server equipment. The correct temperature and number of perforated tiles should be carefully reviewed during design and managed once deployed to ensure proper temperature and airflow.
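
As a quick worked example using the rule of thumb above (the 100-160 CFM/kW range comes from the answer; the per-tile airflow figure is an assumption for illustration, since it depends heavily on tile open area and underfloor static pressure):

```python
def required_airflow_cfm(rack_kw, cfm_per_kw_low=100, cfm_per_kw_high=160):
    """Airflow range a cabinet needs, using the 100-160 CFM/kW rule of thumb."""
    return rack_kw * cfm_per_kw_low, rack_kw * cfm_per_kw_high


low, high = required_airflow_cfm(25)
print(f"A 25 kW rack needs roughly {low:,.0f}-{high:,.0f} CFM at its face.")

# Assuming each perforated tile delivers about 1,500 CFM (an illustrative figure),
# that works out to roughly two to three tiles directly in front of the cabinet.
tile_cfm = 1500
print(f"About {low / tile_cfm:.1f}-{high / tile_cfm:.1f} perforated tiles.")
```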

Measuring kW per rack

Q: Is your measurement of kW per rack based on full A/B coverage? (e.g., does 10 kW mean 5 kW on the A side + 5 kW on the B side, or 10 kW on the A side with full failover coverage on a 10 kW B side?)
A: This is a great question that is commonly misunderstood. A server with dual power supplies will draw half of its required load on an A circuit and half on a B circuit in a primary/redundant configuration. This configuration provides the server redundancy in the event of a power supply failure. There are instances where a primary/primary configuration is needed, meaning that both circuits together provide the power the server needs, with no redundancy in the event of a power supply failure. Most data center operators set rules on the percentage of each circuit that can be used, to ensure that power is not overdrawn against the rating of the breaker, or drawn as if the pair were primary/primary when it is meant to be primary/redundant. This means that if I wish to draw 6 kW on an 8 kW circuit (the 2 kW difference provides a buffer on the circuit – always size the circuit for 20% more than needed), then 3 kW will be drawn from each circuit to provide the total requirement of 6 kW. If a failure occurs in one of the power supplies, the other power supply can draw the full amount required. In a primary/primary configuration, this redundancy is not possible.
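
A short worked example of the arithmetic above (the 6 kW draw, 8 kW circuit and 20% sizing buffer come straight from the answer):

```python
def ab_feed_plan(server_kw, circuit_kw, buffer_pct=20):
    """Split a load across A/B feeds and check the circuit against the sizing buffer."""
    per_feed_kw = server_kw / 2                          # primary/redundant: half on A, half on B
    required_circuit_kw = server_kw * (1 + buffer_pct / 100)
    return per_feed_kw, circuit_kw >= required_circuit_kw


per_feed, sized_ok = ab_feed_plan(server_kw=6, circuit_kw=8)
print(f"Normal operation: {per_feed} kW on the A feed and {per_feed} kW on the B feed.")
print(f"8 kW circuits leave the recommended 20% buffer above 6 kW: {sized_ok}")

# If one power supply fails, the surviving feed must carry the full 6 kW,
# which still fits on the 8 kW circuit. In a primary/primary setup there is
# no such headroom, so a failed supply means lost capacity.
```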

Voltage

Q: 120 volts per phase vs. 220 volts per phase?
A: The answer to this question comes down to the requirements of your equipment. Most data center providers have the ability to deliver 120V, 208V and 208V three-phase power. Most server equipment is designed to support one or all of these if requested. Many manufacturers understand that your needs will change over time – while you’re only using 2 kW today, you may need 6 kW in the future. For this reason, most equipment has the ability to go from 120V to 208V, which will work fine as long as the equipment can handle it. Typically, the more compute or CPU that is required, the higher the power needs will be to support it. Each server manufacturer typically offers different specifications to accommodate this.
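
A quick illustration of why the voltage choice matters as power needs grow, using nothing more than P = V × I (and assuming a unity power factor to keep the arithmetic simple):

```python
import math


def amps_single_phase(kw, volts):
    """Current draw for a single-phase load, assuming a power factor of 1."""
    return kw * 1000 / volts


def amps_three_phase(kw, volts):
    """Current per phase for a balanced three-phase load, assuming a power factor of 1."""
    return kw * 1000 / (math.sqrt(3) * volts)


for kw in (2, 6):
    print(f"{kw} kW: {amps_single_phase(kw, 120):.1f} A at 120V, "
          f"{amps_single_phase(kw, 208):.1f} A at 208V, "
          f"{amps_three_phase(kw, 208):.1f} A per phase at 208V three-phase")
```

Going from 2 kW to 6 kW at 120V pushes the current from roughly 17 A to 50 A, which quickly outgrows common circuit sizes; the same 6 kW load draws only about 29 A at 208V and about 17 A per phase on 208V three-phase, which is why higher-density cabinets generally move to higher voltages.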

Watch the webinar recording to learn more: Critical Design Elements of High Power Density Data Centers
