
Mar 27, 2015

Is your DNS an overlooked source of latency?

Ansley Kilgore

Poor website performance has a direct impact on revenue. Even just a few seconds of delay on an ecommerce website can lead to shopping cart abandonment and a decline in the conversion rate. DNS (Domain Name System) allows users to find and connect to websites, and can be a hidden source of latency.

What is DNS and how does it work?

DNS maintains a catalog of domain names, such as “internap.com”, and resolves them into IP addresses. Anyone who has a presence online uses DNS – it’s required in order to access websites.
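
To make the concept concrete, here is a minimal sketch (in Python, using only the standard library) of the name-to-address lookup that happens behind the scenes whenever someone types a domain into a browser; the hostname is just an example.

```python
# Minimal sketch: the name-to-address lookup a browser performs behind the scenes.
# Uses only the Python standard library; any reachable hostname works the same way.
import socket

def resolve(hostname: str) -> list[str]:
    """Return the IP addresses that DNS reports for a hostname."""
    results = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address is
    # the first element of sockaddr.
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("internap.com"))
```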

Sample use case: Ecommerce

Let’s look at a growing ecommerce site that recently expanded into a new market in Europe. Following the expansion, the company noticed that users were experiencing unacceptable levels of latency while connecting to their site. During the past month, users had to wait up to 10 minutes before they were able to reach the site.

The company had been handling their DNS needs through their ISP up to this point. Possible causes of such high latency include the following (a quick way to measure resolution times yourself is sketched after the list):

  • DNS name servers may not be in close geographic proximity to a large percentage of users, and routing table errors could be misdirecting requests to name servers that are physically far away from the user.
  • Network congestion may contribute to slow resolution of DNS queries, resulting in high wait times to connect to the site.
  • Poor performance can also be caused by hardware failure at one of the name server nodes, and without an active DNS failover in place, this can keep some users from accessing the site.
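
As a rough illustration of how you might check whether lookups themselves are slow, the sketch below times resolution against different public name servers. It assumes the third-party dnspython package, and the resolver addresses and domain are arbitrary examples rather than recommendations.

```python
# Rough sketch of timing DNS resolution against different resolvers, to see
# whether lookups themselves are adding latency. Assumes the third-party
# dnspython package (pip install dnspython); the resolver IPs below are
# arbitrary public examples, not a recommendation.
import time
import dns.resolver

def time_lookup(domain: str, nameserver: str, attempts: int = 5) -> float:
    """Return the average seconds taken to resolve `domain` via `nameserver`."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        resolver.resolve(domain, "A")
        total += time.perf_counter() - start
    return total / attempts

for server in ("8.8.8.8", "1.1.1.1"):
    print(server, round(time_lookup("example.com", server) * 1000, 1), "ms")
```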

To prevent these issues from affecting your business, we recommend a Managed DNS Service to support the performance needs of today’s websites.

In our presentation, DNS: An Overlooked Source of Latency, you will learn:

  • Factors that affect webpage load times
  • Important DNS features and functions
  • Different types of DNS solutions available

View the presentation here.

Mar 26, 2015

More than just hot air: 3 data center cooling considerations

Ansley Kilgore

Effective data center cooling is one of the most important aspects of a colocation facility. Lack of temperature control can limit the amount of IT equipment that can physically operate in your colocation footprint. Making large capital investments to upgrade or build out a high-density data center typically is not feasible for the average enterprise. An effective data center cooling system can make sure your footprint is prepared to support rising power usage trends.

From a data center design and engineering perspective, here are a few things to consider when evaluating data center cooling systems:

1. Location requirements
Consider the amount of kW per cabinet you require, along with the geographic location of the data center. As an example, low-density users in the Pacific Northwest may be able to take advantage of free cooling via an air-cooled system. However, for high-density users in the Southeast, a water-cooled system will offer greater efficiency than an air-cooled system. Other variables should also be considered, including humidity, access to backup water, the amount of free cooling that can be achieved and the space available on site.

2. The right team
To meet your data center cooling needs, the importance of conducting due diligence and evaluating solutions against your goals, budget and timeline cannot be overstated. Before purchasing any equipment, evaluate your needs and cross-reference them against design solutions. Selecting a team of good mechanical designers, contractors and commissioning agents who understand your basis of design and goals will keep the project in full alignment.

3. Future-proof considerations
Your data center design should be flexible and modular enough to adapt to changes in company goals and technology. While today’s cooling systems can support the current average power draw of 2-4 kW per cabinet, this may change as more and more organizations require higher densities. If the average power draw per cabinet increases to 6-8 kW, we could see innovations in data center cooling systems. Data centers with a flexible, modular design will be better equipped to accommodate new cooling technologies and higher power densities.
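
To see why rising densities matter for cooling, here is a back-of-the-envelope sketch. The 20-cabinet row is hypothetical; the per-cabinet figures come from the ranges above, and the only other input is the standard conversion of roughly 3,412 BTU/hr of heat per kW of IT load.

```python
# Back-of-the-envelope illustration of why density growth stresses cooling.
# The 20-cabinet row is hypothetical; 1 kW of IT load is roughly 3,412 BTU/hr
# of heat that the cooling system must reject.
BTU_PER_KW_HR = 3412

def heat_load_btu_hr(cabinets: int, kw_per_cabinet: float) -> float:
    """Approximate heat load for a row of cabinets."""
    return cabinets * kw_per_cabinet * BTU_PER_KW_HR

for kw in (4, 8):  # today's high end vs. a possible future average
    print(f"{kw} kW/cabinet x 20 cabinets ≈ {heat_load_btu_hr(20, kw):,.0f} BTU/hr")
```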

To learn more, download Colocation: The Essential Buyer’s Guide.

Mar 19, 2015

Buyer beware: The ugly truth behind cloud services

Ansley Kilgore

The process of selecting a public cloud IaaS provider can be complicated. Evaluating cloud solutions involves a multitude of variables, but in many cases, comparing them on a level playing field is difficult. During the decision-making process, a lot of emphasis is placed on cost, which can be a deceptive metric for determining the true value of a cloud solution.

This recent Cloud Spectator benchmark report provides like-to-like comparisons designed to help IT buyers evaluate virtual machine (VM) performance.

Here are a few use cases where specific performance metrics were explored for web servers and databases.

Use case 1: Static web server
For web servers such as personal webpages, online portfolios and image galleries, the most important metrics are virtual processor performance and read IOPS. In tests of processor performance on specific CPU-bound tasks, such as data encryption and image compression/decompression, Internap AgileCLOUD’s virtual processors showed around 50% higher performance than comparable Amazon Web Services (AWS) counterparts. From a business perspective, higher read IOPS translates into a faster, more consistent experience for users of static web applications.
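
As a rough illustration of the kind of CPU-bound work such tests exercise (this is not Cloud Spectator’s methodology, just a sketch), the snippet below times hashing and compression of a sample payload; running it on two otherwise identical VMs gives a crude feel for relative processor throughput.

```python
# Rough illustration of CPU-bound benchmark work (hashing stands in for
# encryption here). Not a rigorous benchmark, just a quick way to compare raw
# processor throughput between two VMs.
import hashlib
import time
import zlib

PAYLOAD = b"x" * (8 * 1024 * 1024)  # 8 MB of sample data

def time_op(label: str, fn) -> None:
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f} s")

compressed = zlib.compress(PAYLOAD, level=6)
time_op("sha256 hash", lambda: hashlib.sha256(PAYLOAD).hexdigest())
time_op("zlib compress", lambda: zlib.compress(PAYLOAD, level=6))
time_op("zlib decompress", lambda: zlib.decompress(compressed))
```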

Use case 2: High traffic web servers
In addition to virtual processors and read IOPS, high-traffic web servers need to consider write IOPS and network throughput. Examples include large news outlets and travel and ecommerce websites that leverage horizontal scalability and serve requests concurrently on a large scale. Poor internal network throughput across a VM cluster can negatively impact traffic distribution across servers and result in suboptimal application performance.
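
For a very rough feel for small-block write performance on a VM, a sketch like the one below can be run on each candidate instance; the file path, block size and iteration count are arbitrary, and a purpose-built tool such as fio will give far more rigorous numbers.

```python
# Very rough sketch of estimating small-block write performance on a VM's disk.
# The path, block size and count are arbitrary placeholders.
import os
import time

def rough_write_iops(path: str = "iops_test.bin", block: int = 4096, count: int = 2000) -> float:
    """Return an approximate synchronous write IOPS figure for the local disk."""
    data = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    try:
        for _ in range(count):
            os.write(fd, data)
            os.fsync(fd)  # force each block to disk so we measure the device, not the cache
    finally:
        os.close(fd)
        os.remove(path)
    return count / (time.perf_counter() - start)

print(f"approx. write IOPS: {rough_write_iops():,.0f}")
```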

Use case 3: Clustered database environments
The internal network also plays a major role for clustered database environments, making network throughput extremely important. Other factors such as virtual processors, memory bandwidth and read/write IOPS should be considered in order to achieve optimal price performance, as shown in the chart below.
[Chart: Clustered database environment price-performance comparison]

Decisions about your architecture should ultimately be driven by the performance needs of your application or workload. When looking at specific metrics that matter to your use case, be sure to evaluate the price-for-performance ratio. Many buyers learn the hard way that big-box providers with commodity cloud offerings end up being very expensive in the long run.

Download the report, Performance Analysis: Benchmarking Public Clouds, to see the specifications of the VMs used during testing, along with other database use case comparisons.

Mar 17, 2015

Bare metal restore vs. backup: What’s the difference?

INAP

In this blog, we’ll cover the difference between a bare metal restore and a bare metal backup. As bare-metal servers begin to reach feature parity with the cloud, they offer the ability to create new servers from an existing server. The typical dedicated server has many challenges, one of which is quickly copying your server configuration to another server; with traditional dedicated servers this is usually a lengthy process, and it definitely takes more than a single click.

Bare metal restore

Let’s have a look at how a cloud snapshot works to better understand a bare metal restoration.

A snapshot (or image, but we’ll refer to it as a snapshot through the rest of this article) is a copy of a server or virtual machine, such as a cloud instance or a bare-metal server, at a point in time. Everything is copied: the drive content, memory content, the applications running, everything! A snapshot is a good way to keep track of static data, but it is in no way how you should back up your data. You could compare a snapshot to a picture – it preserves the state of things as they were at a precise moment in time.

Restoring from a snapshot means you are reverting to how things were at the time of the snapshot, without worrying about data that was added after that point in time. It’s just like closing a document without saving your changes; you revert to the last time you clicked the save button.
[Diagram: Bare metal restore]

Backup vs. restore

The key difference in bare metal backup vs. restore is that the user can select specifically what they want to back up and recover, instead of having to revert the entire server or virtual machine to a previous state. A backup differentiates itself from a restoration by allowing the user to recover partitions, tables, files or even an entire server if required.

Backups typically require an agent in order to be performed. An agent is software that allows you to execute backups locally or to initiate secure communication between the backup platform and the server itself. This allows you to store the backup outside of the server for faster recovery and avoid data loss in case of hardware failure. It provides a level of flexibility that a bare metal or cloud restoration cannot offer.
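
As a minimal sketch of the file-level selectivity described above (not any particular vendor’s agent), the snippet below archives only chosen paths rather than imaging the whole machine; the paths shown are placeholders, and a real agent would also encrypt the archive and ship it off the server.

```python
# Minimal sketch of file-level backup selectivity, in contrast to restoring a
# whole snapshot. Paths are placeholders; a real backup agent would also
# encrypt the archive and copy it off the server.
import tarfile
from datetime import datetime, timezone

def backup_paths(paths: list[str], dest_dir: str = ".") -> str:
    """Archive only the chosen files/directories into a timestamped tarball."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = f"{dest_dir}/backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in paths:
            tar.add(path)
    return archive

# Example: back up just a config directory and a database dump, not the whole disk.
# print(backup_paths(["/etc/myapp", "/var/backups/db.sql"]))
```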

Backup solutions are entirely independent of, and complementary to, the restoration functionality of a cloud or bare-metal server.
[Diagram: Bare metal backup]
A best practice for bare metal restore and backup is to leverage both solutions. Backups are an effective way to recover precise data, down to the file level, without wiping your entire server or virtual machine clean. In comparison, bare metal restore is a great way to copy a server configuration to other bare-metal servers without repeating the same manual tasks over and over again. It enables system administrators and IT departments to deliver environments faster and avoid repetitive work, and it lets them roll back to a previous state in case of maintenance failure, reducing downtime and impact on your business.

Read Next: What’s the difference between bare-metal servers with and without a hypervisor?

Mar 11, 2015

GDC 2015: Gaming industry trends

INAP

It’s time for our annual GDC wrap-up! We were back at this year’s conference in San Francisco last week to learn about the state of the industry and listen to both large and small developers discuss their online infrastructure needs. Just like last year, we hosted our own session, “Learn from the pros: building fast, massively scalable games,” which included a great panel of industry leaders: Stewart Chisam (Hi-Rez Studios), Haitham Rowley (Square Enix) and Tachu Avila (Crowdstar). Look for it on the GDC Vault in a few weeks.

At GDC 2015, there was a lot to experience both on the show floor, and during the diverse conference sessions. Here are some big takeaways:

A VR Explosion
Walking down the show floor this year, you could quickly see that VR was well represented. From cool new VR demos (especially the double light saber demo courtesy of Sixense) to new VR headsets (from Sony’s and Valve’s to the fun Google Cardboard), VR was almost everywhere you looked. And the lines for the Oculus demo were just as long as last year!

Broadening Audiences
Twitch.tv has been growing in popularity for a while, and the fact that many players learn about new games through the service has made Twitch a valuable marketing tool. Recently, many players have begun using Twitch to watch their favorite streamers review new games, expansions or patches, using these streams for pre-purchase research instead of traditional game journalism. This newfound audience will definitely encourage other streaming providers to jump in, which in turn will increase the demand for high-performance networks and content delivery networks (CDNs).

Big Data and Analytics
Throughout the conference, there were many sessions devoted to understanding your audience better through behavioral analytics. Common questions included: Who buys your games? Why do they start (or stop) playing? How many sessions do they play? There were good discussions about the scalable infrastructure required to handle data from a growing player base, including high-performance cloud solutions needed for Hadoop deployments. I was surprised that other solutions for resource-intensive use cases (like bare-metal servers) were not brought up more often. Another analytical tool that has become popular in the industry is Tableau, which is used to analyze and draw conclusions from players’ activity and purchase behavior.

The state of Free to Play
What about F2P? Last year there was a bigger emphasis on the morality of F2P, while this year the focus was on different strategies to monetize players. That’s not to say the user experience should always come second, since most developers by now realize that pay gates (where only paying players can enjoy the full experience) are not well-received. In-game ads were widely talked about as speakers highlighted the importance of recognizing how and when to use ads, as well as the pitfalls of overusing them to the point they hinder the experience. Another takeaway was that developers should not underestimate the value of nonpaying gamers, since they bring others through word of mouth, help maintain a healthy community and may transition to paying customers with time.

Managing your Traffic
One of the biggest challenges developers face is providing a seamless experience for their players, with low load times, a stable connection and as little lag as possible. While discussing SMITE’s upcoming console release (on Xbox One specifically), Hi-Rez Studios shed some light on how they handle their traffic. Currently, Hi-Rez uses a mix of bare-metal servers and cloud to handle typical day-to-day traffic (around 84% and 16%, respectively). However, during events or times when a high number of players is expected, Hi-Rez shifts to a higher percentage of cloud, allowing the cloud’s flexibility and scalability to accommodate the incoming traffic.

That’s all for now. For more information about Internap’s gaming solutions, check out our gaming resources, scalable media streaming solution and the ParStream report on database performance. GDC was fun and we learned a lot; we’re looking forward to next year!

Mar 6, 2015

GDC 2015 news roundup

Ansley Kilgore

The online gaming industry saw several big announcements during this year’s Game Developers Conference. Here are some GDC 2015 news highlights to keep you in the know.

GDC 2015: SMITE Alpha Launching on Xbox One Next Week

Hi-Rez Studios’ third-person MOBA, SMITE, will be available next week for some Xbox One players. The console version will enter Alpha on March 11, and codes will be available at the Hi-Rez booth during PAX East 2015. Get behind-the-scenes information about SMITE and meet the developers in the new show, “IGN Godlike.” This week, Hi-Rez Studios stops by to discuss SMITE’s past, present and future, including the game experience on Xbox One.
Read entire article here.

Games dominate App Annie’s annual list of top-grossing app publishers

App Annie provides reports that focus specifically on gaming, and many game publishers made its list of the 52 top-grossing app publishers for 2014, which is interesting because many mobile games are free-to-play. Game companies dominated App Annie’s list for both iOS and Android in 2014. Of the top 10, all were either game publishers or had dipped into gaming in some manner.
Read entire article here.

‘Dragon Quest Heroes’ To Receive A Western Release

Square Enix has announced that they are finally bringing Dragon Quest Heroes to the West on PS4. While the series never took off outside of Japan the same way Final Fantasy did, Dragon Quest is arguably one of the most culturally important game series ever created. This new Dragon Quest-themed action role-playing game was developed by the team behind the Dynasty Warriors games. Read entire article here.

Middle-earth: Shadow of Mordor Wins Game of the Year at GDC Awards

The 15th annual Game Developers Choice Awards and the 2015 Independent Games Festival were held in San Francisco as part of this year’s Game Developers Conference. Middle-earth: Shadow of Mordor won overall Game of the Year at the Game Developers Choice Awards, while Outer Wilds took home the Seumas McNally Grand Prize at the Independent Games Festival awards.
Read entire article here.

Watch: Five Key Considerations for Online Gaming Infrastructure

Image courtesy of Hi-Rez Studios

Mar 3, 2015

Top 4 data center challenges of always-on applications

Ansley Kilgore

Real-time applications that process data-intensive workloads require a reliable, high-performance infrastructure to provide the best experience for end users. In a recent webinar, Internap and AudienceScience discussed four essential data center technologies to ensure success for always-on applications.

Scalable density

AudienceScience, an enterprise advertising management platform, experienced a rapid increase in media transaction volume and needed to expand its infrastructure to support rising demand for its real-time, data-driven workloads. The ability to accommodate higher power densities is critical for AudienceScience to scale its infrastructure and ensure a high-quality end-user experience.

Watch the webinar recording to hear Peter Szabo, Director of Technical Operations at AudienceScience, discuss the four essential data center technologies.

As AudienceScience added more servers and equipment to their existing data center footprint, they pushed the power limits of the facility and began experiencing problems with temperature control and power outages. Without access to additional power, they could not provide adequate cooling for their equipment, and as a result, they were unable to fill their racks to maximum capacity.

The ability to densify their colocation footprint allowed AudienceScience to manage the heat from additional equipment and completely fill their racks. Making full use of their servers and circuits allowed the company to reduce spending on new IT equipment and provide a more stable environment for their application.

High availability

Real-time ad bidding is essential for AudienceScience to provide data-driven media buying for their global customer base. The company doesn’t have the option to not bid on ads, and their application must be up and running at all times.

Data centers designed for concurrent maintainability can minimize the risk of downtime and ensure availability for business-critical applications. This design eliminates single points of failure and includes backups for the components within the data center. Essentially, this means that a component can be shut down without disrupting other systems or, most importantly, customers.

Establishing a scalable, highly available data center footprint is critical for future-proofing your infrastructure.

Watch the webinar recording to learn the importance of flexibility and connectivity for your data center environment.
