
Aug 16, 2012

The six keys to disaster preparedness: Preventative maintenance

Ansley Kilgore

In addition to the design and infrastructure, emergency response plans and mock drills I have explained in my previous posts, a highly structured and robust maintenance program is crucial in preventing a disaster from impacting your business.

Your data center services provider should have a Computerized Maintenance Management System (CMMS) to keep track of when maintenance is due, as well as repairs that have been completed. This also helps identify equipment that has required numerous repairs and may need replacement before it reaches end-of-life. Only through a regularly scheduled preventative maintenance program performed by OEM representatives can you be assured that a data center is prepared for a disaster.

For example, batteries are a weak point in any system and, if not monitored and maintained properly, can actually cause an outage during a loss of utility power. Real-time monitoring can help, not only by reporting when the batteries fall out of OEM specifications but also by supporting load testing to ensure the UPS can carry the critical load. Although many providers perform quarterly maintenance on their batteries, that isn’t always enough – batteries can, and often do, fail shortly after scheduled preventative maintenance.
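To make that concrete, here’s a minimal sketch of the kind of threshold check a real-time battery monitoring system performs. The metric names and spec windows below are hypothetical illustrations, not any OEM’s actual limits.

```python
# Hypothetical battery telemetry check -- thresholds are illustrative,
# not actual OEM specifications.
OEM_SPEC = {
    "voltage_v": (12.2, 13.8),               # acceptable float-voltage window
    "internal_resistance_mohm": (0.0, 5.0),  # rising resistance signals aging
    "temperature_c": (15.0, 30.0),
}

def out_of_spec(reading):
    """Return a description of every metric outside its OEM window."""
    failures = []
    for metric, (low, high) in OEM_SPEC.items():
        value = reading[metric]
        if not low <= value <= high:
            failures.append(f"{metric}={value} outside [{low}, {high}]")
    return failures

# Example: one battery string drifting high on internal resistance.
reading = {"voltage_v": 13.1, "internal_resistance_mohm": 6.2, "temperature_c": 24.0}
for failure in out_of_spec(reading):
    print("ALERT:", failure)  # in practice, page the facilities team
```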

The way in which maintenance on critical equipment is planned and executed is also extremely important. For example, a “critical environment work authorization program” ensures that each element of the maintenance procedure is reviewed not only by the local facilities engineering team but also by a committee of engineering staff and management from across the enterprise. Maintenance on critical equipment should only be performed when the provider has 100% confidence in the “method of procedure,” the contractors performing the work, and the documented contingency plans. You should also ask to see maintenance records, including associated repairs, to gain confidence in the provider’s ability to predict needed maintenance and prevent failures.

Predictive maintenance is as important as preventive measures when it comes to end-of-life equipment replacement decisions. Once again batteries – specifically their timely replacement – are a perfect example. Ask about the age of the UPS batteries, the OEM-recommended life expectancy, and when your data center provider plans to replace them. Even well-maintained equipment will eventually reach end-of-life, which can lead to a catastrophic failure without proper predictive planning for replacement.

For IP network maintenance, ask your data center services provider how long their equipment has been in service, when the last failure was and how recently its software has been updated. How do they monitor the health of the devices? Do they monitor device logs proactively or primarily react to events that occur? Do they maintain certain devices in the network differently than others, and if so, why? How do they react to impactful software bugs that are found? What is their QA process to validate new software and/or configurations before deploying these to the network?

Stay tuned for more on data center disaster preparedness in our next segment on communication best practices for data center providers.

Aug 15, 2012

Technology for the Olympics — what about your business?

INAP

We all know that prepping for and conducting the Olympics is a mammoth undertaking. Now that the Games are over, I was curious about some of the mind-boggling technology stats.

According to a ComputerWeekly article:

  • London2012.com became the most popular sports website in the world. It had 38.3 billion page views, peaking at 96,871 page views per second.
      • Some 1.2 petabytes of data were transferred over the website, at a peak rate of 22.8Gbps.
      • On the busiest day there were 13.1 million unique visitors.

  • During the Games, the Olympic network which connected 94 locations (including 34 competition venues) carried 961 terabytes of information.
  • Olympic traffic to bbc.co.uk exceeded that of the BBC’s entire coverage of the 2010 FIFA World Cup.
  • The BBC saw 12 million requests for video on mobile across the whole of the Games.
  • Around 13.2 million minutes (or 220,000 hours) of BT Wi-Fi were used across the Olympic Park venues.
  • A single company provided 13,500 desktops; 2,900 notebooks; 950 servers and storage systems and a number of tablet PCs.

Now, what you and I do on a daily basis doesn’t come close to comparing with running the Olympic Games (then again, maybe it does). Regardless, technology plays a critical role, and with more and more businesses moving to the cloud, here are some articles to ponder:

What upcoming super event has you losing sleep at night? Check out our flexible cloud hosting solutions to see how we can help.

Aug 14, 2012

The truth about usage-based power: Cost savings and energy efficiency

INAP

I recently had the chance to sit down with our vice president of data center services, Mike Frank, to ask him about our new usage-based power feature in our data centers. Mike gave me the scoop on why being able to monitor your power consumption is a step up in controlling costs and optimizing your infrastructure for maximum efficiency. Check out what he had to say…

What is usage-based power?

Usage-based power is exactly what the name implies. It allows customers in our colocation facilities to pay for only the power they actually use instead of a flat rate — in many cases helping them save on costs. In today’s economy everything has gotten more expensive, and power is no exception. It’s a customer-friendly feature, since understanding your power consumption is critical to knowing whether you are running at maximum efficiency. In turn, customers have more control over their environment and more flexibility in what they can do.

How does it work? 

There is something called a branch circuit monitoring system, which puts a physical monitor on your power circuit that reports back over the network. Customers then get that data on their invoice at the end of the month so they can compare it against previous months and years.
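As a rough illustration of what that reporting enables, here’s a sketch of turning periodic circuit readings into a monthly energy figure. The rate, readings and sampling interval are assumptions of mine, not Internap’s billing terms.

```python
# Hypothetical usage-based power calculation -- the rate and readings
# are illustrative, not an actual billing formula.
RATE_PER_KWH = 0.10  # assumed $/kWh

def monthly_energy_kwh(samples_kw, interval_hours):
    """Integrate periodic branch-circuit power readings (kW) into kWh."""
    return sum(samples_kw) * interval_hours

# Hourly readings from one monitored circuit over a 30-day month,
# drawing a steady ~3.2 kW:
samples = [3.2] * (24 * 30)
kwh = monthly_energy_kwh(samples, interval_hours=1.0)
print(f"{kwh:.0f} kWh -> ${kwh * RATE_PER_KWH:,.2f} this month")
```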

Why is this feature important?

By giving companies visibility into how much power they are using, usage-based power lets them build more efficient environments and configurations across their devices, servers, racks and the way they deploy their gear. For example, if I look at my power consumption and see that I am using a lot of power, I can upgrade my machines and change my infrastructure to lower it — something that could also encourage the industry to become more conscious of energy usage as it relates to being green.

Where is this being deployed?

Usage-based power is available now as a beta at our Dallas facility. The Dallas team is prepared and trained to implement this feature for customers today. Over the course of 2013 we’ll be rolling it out to other markets.

Thanks, Mike! We look forward to seeing the implications this has for customers and the industry as a whole.

What are your thoughts on usage-based power in the data center?

Aug 10, 2012

Textbooks get an iMakeover

INAP

Turn on a TV, take a trip to your nearest superstore or listen to the radio right now and you’ll quickly find yourself immersed in advertising for clothing bargains, office supply blowouts and expanded selections of backpacks; yep, it’s back-to-school time. For me, the beginning of a school year brings back memories of freshly sharpened number two pencils, new unscuffed tennis shoes and a heaping pile of heavy textbooks. Those books were never new: always a few years old and littered with the previous owner’s sketched opinions of the teacher throughout the pages. Wrapped in kraft paper to avoid further damage and a fine at the end of the year, five or six of these monsters got stuffed into my backpack and toted home and back every day. They made my back hurt, were boring to read and smelled kinda weird.

Given this description of my daily plight, you can see why I’m totally jealous of the new generation of textbooks hitting schools. Digital textbooks are rolling out at a rapid pace, and school systems all over the country are incorporating this new technology into their academic programs. Experts say that school-age kids have grown up in a world of technology, and as a result the way they learn has changed. Digital textbooks allow for a level of interactivity that could never be achieved with a traditional book. Interactive diagrams, end-of-chapter quizzes, embedded video and dynamically updated HTML content are just a few of the attractive features of these digital publications. This interactive content isn’t limited to your everyday algebra textbook, either; it’s being used in flight manuals, medical books, music theory and more. Using Apple’s iBooks Author, users can even create and publish their own books to Apple’s iBookstore, meaning teachers can create custom content for their classes and get relevant material into students’ hands quickly and easily.

The technology behind digital textbooks is just a small part of the growing movement toward on-demand content. Consumers expect their digital books, online video and streaming media to be available where they want it, when they want it, and fast. Can your IT infrastructure handle the strain of users all over the globe? A content delivery network can help your company get content to users faster and more seamlessly. Check out our CDN Overview to learn more about how Internap’s CDN can put your content in the fast lane.

Aug 9, 2012

The six keys to disaster preparedness: Mock disaster drills

Ansley Kilgore

Last week I discussed the importance of choosing a data center services provider with a documented emergency response plan, number two on my list of six keys to disaster preparedness.

Now it’s time to put that response plan into practice with mock disaster drills. A provider may have seemingly perfect site disaster preparedness plans — on paper! Only through testing and conducting drills, however, will they truly be prepared for an event. Your provider should test their plans at least twice a year. By conducting simulations of an event, they can verify whether operations personnel know their responsibilities and whether the infrastructure equipment performs as intended. Look for providers who perform quarterly (or similar) simulations of equipment failures, power outages and other critical equipment events that may occur as a result of a disaster. The findings of these mock drills should be documented on an ongoing basis, and the training program modified as needed to ensure all on-site personnel are well trained in case of an event. In essence, a provider’s disaster preparedness philosophy should be, “If we’re not finding problems when we test our plans and equipment, we’re not testing thoroughly enough!”

Your provider should also be an integral part of your IP network disaster testing. You can run your own network disaster readiness tests, but that, in and of itself, is not enough. As you run tests on your end, ask your provider questions, including:

• What do they see when you fail over?

• Do they have to take corrective action on their side?

• Who should you contact, work with, and escalate to?

• Has your provider made any changes to their infrastructure since your last disaster drill that might change how your drill needs to be run?

Service provider networks are fluid — don’t be fooled into thinking that things will stay the same indefinitely. Test regularly to ensure that your service provider’s network hasn’t rendered your disaster preparedness useless!

I’ll end here this week, but stay tuned for the fourth installment in this series — preventative maintenance.

Aug 8, 2012

Getting crowded is a good thing

INAP

Crowdsourcing isn’t new, but it is increasingly being used by organizations big and small to solve challenges more effectively – faster, more creatively and with better engagement. In fact, US lawmakers, scientists, educators, online gaming enterprises and Netflix, to name a few, are crowdsourcing.

According to a BusinessWeek article, in February Jenny Drinkard, an industrial designer and recent Georgia Institute of Technology graduate, proposed an idea to a website called Quirky.com: turn modified milk crates into a home storage system. Then 1,791 Quirky community members around the world refined the design, suggesting accessories and ranking them in order of preference on the site. The result: plastic cubes that can be stacked, connected, and customized with drawers, slide-in wooden shelves, cork bulletin boards, wooden feet, and rollers, fit for a college dorm room. By July, the cubes had shipped to retailers such as Target and Staples to capture back-to-school shoppers.

This is a prime example of crowdsourcing, and while the process occurs both offline and online, for obvious reasons it is happening more and more online. Whether it’s a website to crowdsource the next great fast food menu item, an award-winning graphic design, or top-quality e-learning content, the technology that makes crowdsourcing possible becomes ever more important to keeping contributors engaged. Because let’s face it: if contributors can’t do it easily, they just won’t do it.

Is your IT infrastructure flexible and robust enough to take advantage of this increasingly popular technique? You may want to check out our platform, performance and support videos for your future needs.

Aug 7, 2012

AgileFILES now integrated with the AgileCLOUD platform

Ansley Kilgore

Internap has been offering cloud storage solutions since June of 2011, allowing users to leverage our scalable, OpenStack-based cloud storage platform for their applications and deliver their content via Internap’s Performance IP™ and CDN service.

As of this month, we’ve consolidated our object storage service, officially dubbed AgileFILES, with our virtual compute, bare-metal cloud server and on-demand CDN offerings to form a unified cloud platform for our users. That means existing Internap customers and anybody who signs up for our Agile Hosting Platform will now have access to our complete platform:

  • AgileSERVERS – Instantly delivered bare-metal servers in popular configurations
  • AgileCLOUD – Instantly delivered virtual compute servers with up to 11 CPU cores and 22GB of memory
  • AgileFILES – Scalable object storage built on and compatible with OpenStack
  • AgileCDN – On-demand CDN service that provides global delivery of static content and progressive downloads

What is Cloud Storage, Really?

Like most things cloud, you’ll probably find as many definitions of cloud storage as you will find cloud providers! At Internap, we’ve named our cloud storage product AgileFILES to describe exactly what we’re offering…file storage! The short version is that instead of presenting storage to you or your programs as a disk or formatted POSIX-style filesystem (e.g., ext3, NTFS, ZFS), we provide an interface over HTTP to a distributed file system. The benefits are really about scalability, availability and cost: by providing an HTTP interface and consuming/presenting files or objects to users, we gain greater control over where files are stored and how we ensure availability, while improving the price/performance ratio of the offering behind the scenes. Another big benefit for web-scale users is that AgileFILES isn’t limited by the traditional size limitations of file systems — users can continue to scale way past the limits of traditional DAS or SAN units.
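Because AgileFILES is built on and compatible with OpenStack Swift, talking to it is plain HTTP. Here’s a minimal sketch using Python’s requests library against the Swift-style object API; the endpoint URL and token are placeholders for the values the auth service would hand you.

```python
# A sketch of the Swift-style HTTP object API that AgileFILES exposes.
# The endpoint URL and auth token below are hypothetical placeholders.
import requests

ENDPOINT = "https://storage.example.com/v1/AUTH_myaccount"  # placeholder
HEADERS = {"X-Auth-Token": "<auth-token>"}

# Create a container, then PUT an object into it over plain HTTP.
requests.put(ENDPOINT + "/backups", headers=HEADERS)
with open("site-assets.tar.gz", "rb") as f:
    requests.put(ENDPOINT + "/backups/site-assets.tar.gz",
                 headers=HEADERS, data=f)

# GET it back -- no block device or mounted filesystem involved.
resp = requests.get(ENDPOINT + "/backups/site-assets.tar.gz", headers=HEADERS)
resp.raise_for_status()
```

Note that container creation is just a PUT on the container URL, which is why no separate “mkdir” step appears above.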

(Figures: “Traditional Filesystem…” and “AgileFILES looks different…”)

How It’s Priced

Just as with our Universal Transfer bandwidth billing model, we aimed to make AgileFILES simple to use and easy to understand from a pricing standpoint. As such, we leveraged the same tiering setup and volume breaks as Universal Transfer for the AgileFILES billing model:

Storage (priced per GB)

  • Tier 1 – First 50TB = $0.10/GB
  • Tier 2 – Next 450TB (50–500TB) = $0.09/GB
  • Tier 3 – Next 500TB (500TB–1PB) = $0.08/GB
  • Beyond 1PB, call for rates

Requests are billed at $0.01 per 10,000 requests (for GETs) and $0.01 per 1,000 for all other requests. The sketch below shows how the storage tiers compound for a given footprint.
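Here’s a minimal sketch of that tier arithmetic. The per-GB prices are the ones quoted above; treating 1TB as 1,000GB is my simplification for illustration, not a billing definition.

```python
# Tiered storage pricing, using the published rates quoted above.
TIERS = [             # (tier size in GB, price per GB)
    (50_000, 0.10),   # Tier 1: first 50TB
    (450_000, 0.09),  # Tier 2: next 450TB (50-500TB)
    (500_000, 0.08),  # Tier 3: next 500TB (500TB-1PB)
]

def monthly_storage_cost(gb):
    """Walk the tiers, charging each slice of usage at its own rate."""
    cost, remaining = 0.0, gb
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Example: 120TB stored = 50TB @ $0.10 + 70TB @ $0.09 = $11,300
print(f"${monthly_storage_cost(120_000):,.2f}")
```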

How to Use AgileFILES
Getting started with AgileFILES is quick and easy. If you’re a developer, leverage the API documentation for OpenStack’s Swift project or the many language bindings available for download (currently PHP, Java, .NET, Python and Ruby). If you’re not über techy and just want a good, solid way to move files into the cloud, download Cyberduck or Gladinet, two popular GUI tools that let you start uploading objects into AgileFILES within just a few minutes.

One thing to note is that moving to an object storage mechanism can often be a paradigm change if you’re used to dealing with traditional file systems. Recently, we’ve been working with our friends over at SourceForge, who power most of the world’s open source project downloads, much of that via our global mirror servers, storage and Performance IP delivery. As they work to make their platform fully native to and aware of object storage, we’ve implemented a FUSE (“Filesystem in Userspace”) setup that makes the API-driven object store look and act like a POSIX-compatible file system. Recommended and compatible libraries include CloudFuse and S3QL. This can dramatically ease the migration to object storage for users moving from a traditional storage setup.

More tips, tricks and getting started information is listed on our public wiki, reachable here.

Aug 3, 2012

The brains behind intelligent route control

Ansley Kilgore

The brains behind intelligent route control“In a picture, it should be possible to discover new things every time you see it.” – Joan Miró

Internap’s Managed Internet Route Optimizer™ (MIRO), the brain behind our Performance IP™, is really astounding when you think about it. It makes millions of calibrations every day to measure the Internet and route around problems. Every fraction of a second MIRO collects massive amounts of data. The algorithm then analyzes, parses and prioritizes so that it can perform routing changes that make sure our customers’ content reaches every destination optimally. It makes our Performance IP three times less likely to suffer a network brownout or blackout than commodity bandwidth providers. It reduces packet latency across destination prefixes from any one Private Network Access Point (P-NAP®) by 25 milliseconds on average relative to any single competing backbone.

I’ve been writing about MIRO for more than four years now, in earnings scripts, press releases, marketing collateral and missives to investors and clients. Our customers have spoken volumes on MIRO and the benefits it provides to their own clients. Today, there are 546 mentions of MIRO on Internap’s website. On this very blog we’ve compared MIRO’s awesomeness to everything from parts on an F1 race car to Mom’s meatloaf. Other MIRO comparisons we could have referenced but didn’t (until now) include Professor Badass, Jetpacks and Bacon.

One of our optimization patents (US6912222, for you patent buffs out there) describes MIRO’s inner workings as:

“A Private Network Access Point (P-NAP) packet switched network is provided where two customers connected to the same P-NAP exchange traffic through the P-NAP without transiting over the backbones of the Internet. In addition, a multi-homed customer connected to the P-NAP is provided with access to the P-NAP optimized routing table so that the customer will also have the ability to know the best route for a particular destination. In this way, if a multi-homed customer connected to a particular National Service Provider (NSP) to which a destination is also connected, the P-NAP customer can use the P-NAP information regarding the NSP to send the information to the destination through that commonly connected NSP in the most direct fashion.”

But ultimately, words can’t fully describe how awesome MIRO truly is. With our new Route Performance Monitor (RPM), I think we’ve started to address that difficult issue. RPM has two views:

The Monitor Internet tab


Here you can see some of the significant network events that MIRO sees emerging across the Internet every day. Optimal Internet paths are determined by running trace routes between the selected P-NAP and destination points. After mapping the IP address of each hop through a mapping database to obtain city names and longitude/latitude information, RPM ascertains the worst round trip time (RTT) to each probed prefix and compares that path to the alternative carrier route MIRO would select (i.e., the “best” route). RPM then randomly selects 50 events per day that show more than 500 ms of difference between the best and worst route options. Clicking on any event dot opens a pop-up window displaying each carrier’s latency performance from the origin P-NAP to the selected destination.
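In code terms, the selection rule reduces to something like the sketch below; the probe record layout is a hypothetical stand-in for RPM’s internal data, not its actual implementation.

```python
# Sketch of the event-selection rule: keep probed prefixes where the worst
# carrier round trip beats the MIRO-selected path by more than 500 ms,
# then randomly sample 50 per day.
import random

def daily_events(probes, threshold_ms=500.0, k=50):
    """probes: [{'prefix': ..., 'best_rtt_ms': ..., 'worst_rtt_ms': ...}, ...]"""
    significant = [p for p in probes
                   if p["worst_rtt_ms"] - p["best_rtt_ms"] > threshold_ms]
    return random.sample(significant, min(k, len(significant)))
```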

And the Compare Carriers tab –


This tab displays average latency values (using pings and trace routes) across all probed prefixes (~400,000 at any one P-NAP) for each carrier over a 24-hour period and compares these suboptimal routes to Internap’s latency across the same prefixes and time period (MIRO leverages only the optimal routes across all carriers). The “split-flap” display in the screen-left dialog border shows the average latency difference between Internap and the highest-latency carrier on any particular day.


This figure represents the average improvement you might see by using Internap’s Performance IP versus a single poorly performing carrier, per packet round trip. So if you have 40 objects on a web page and the average improvement is 25 ms across all prefixes, you are potentially saving one second per page load…that’s HUGE (see our “every second counts” page).
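That math checks out under the worst-case assumption that each object is fetched serially and costs one round trip:

```python
# Back-of-the-envelope savings, assuming one serial round trip per object
# (a worst-case simplification of how browsers actually load pages).
objects_per_page = 40
improvement_ms = 25
print(objects_per_page * improvement_ms, "ms saved per page load")  # 1000 ms
```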

We’ve posted the RPM data collection and display methodology here if you want to delve into more detail. Take RPM for a test drive, and if you want to see how your bandwidth provider performs to certain geographies or how it compares with Internap, leave us your email and we’ll be happy to walk you through the details.

Aug 2, 2012

The six keys to disaster preparedness: Documented response plans

Ansley Kilgore

In my previous post, I mentioned there are six keys to disaster preparedness that you should look for in any data center provider: infrastructure design, documented response plans, mock disaster drills, preventative maintenance, clear communication and the right people. This week, I’ll talk about number two on the list: a well-documented response plan.

It is critical that your data center provider has well-documented emergency preparedness and disaster response plans. While similar, both plans should be specific to the geographic location and type of facility. These plans identify the actions that will prepare the data center operations team in case of an emergency, including the necessary steps that must be taken before, during and after an event. For example, a prototypical “inclement weather preparedness plan” will specifically address the risks associated with severe weather, including tornadoes, thunderstorms, hurricanes and floods. Your provider’s plan should include specific tasks the operations team should perform at predetermined times leading up to the event – such as arrangements for contractor and supplier support, any changes to staffing levels and hotel reservations if an extended event is expected, among other things. These tasks should be repeated at regular intervals, with a final plan in place a minimum of 12–24 hours before the event.

Documentation should be easily accessible to customers upon request in both soft and hard copy, and should contain critical contact information, including the provider’s management team and escalation procedures, to maintain command and control throughout the event.

When it comes to your IP network, it is imperative that your data center services provider knows how to react in a disaster scenario, whether a problem is caused by a hardware or network failure. Do they have escalation procedures? An on-call rotation? Who can assist, and how quickly? Unlike data center facility or system issues, where the cause of a problem is often more obvious, network failures often require deeper inspection and detection before troubleshooting can begin. Does your provider have automated tools that monitor network health and routing stability? If so, are response measures taken automatically or do they require human intervention? Are those response measures documented and is everyone aware of them?

I’ll leave you to ponder these questions; in the meantime, check out the full-length version by reading my eBook. See ya here next week as I discuss the third item on the list — mock disaster drills.

Aug 1, 2012

Creeper alert — Minecraft 1.3 arrives

Ansley Kilgore

In case you didn’t realize it — there is a revolution happening today. A revolution that affects just about every video game-obsessed boy I know, and even some grown-up boys too. Today is the launch of Minecraft 1.3 — the sandbox video game played on your PC in single or multiplayer modes. What? You haven’t heard of Minecraft and are wondering why this is such a big deal? Well, as of this morning, there are 36,141,718 registered users of Minecraft. It’s so popular, in fact, that an Xbox version has been released, apparently bringing in about $20M in sales in just the first five days.

In the online gaming world, the one issue that will not be tolerated? Latency. It is the cardinal sin. My tween calls it “lag,” and his screams of frustration during Minecraft sessions with his cousins on the opposite coast have less to do with Creepers and Endermen and more to do with this ever-present aggravation. Gaming online brings the issue of latency right out of the data center and directly into our home.

To learn more about gaming and the effects of latency, check out some of these articles:

No doubt this Halloween will have my neighborhood swarming with Creepers, but if there was a way to dress up as “lag”, that would be my son’s pick for scariest costume. And I’m sure his Christmas list will have Lego® Minecraft at the top along with a plea for a low-latency Internet connection.

Check out our online gaming solutions sheet for more information on how to avoid lag with web acceleration technologies.
