INAP is now HorizonIQ.

Aug 13, 2019

How to Keep your IT Infrastructure Safe from Natural Disasters

Laura Vietmeyer, Managing Editor

Costly natural disasters—think disasters that cost over $1 billion—are occurring with increased frequency. According to the National Oceanic and Atmospheric Administration, there was an average of 6.3 billion-dollar events per year from 1980 to 2018, yet in the last five years alone, the average doubled to 12.6.

Last year, natural disasters cost the U.S. $91 billion, and there were 30 events in total over 2017 and 2018 with losses exceeding $1 billion.

Whether the event is a hurricane, flood, tornado or wildfire, businesses can be blindsided when disaster strikes, and many are woefully unprepared. As many as 50 percent of affected organizations won’t survive these kinds of events, according to IDC’s State of IT Resilience white paper.

Of those businesses that do survive, IDC found that the average cost of downtime is $250,000 per hour across all industries and organizational sizes.

Disaster Recovery Stats

Imagine what would happen if your business took a direct hit and your data, applications and infrastructure were disabled. We all know these events are unpredictable, but that doesn’t mean we can’t prepare now for any eventuality.

Here are a few basic steps you should take to protect your IT infrastructure and keep your business up and running after a natural disaster.

Perform a Self-Evaluation

The first step in protecting your sensitive information is to determine exactly what needs to be safeguarded.

For most companies, the biggest risk is data loss. Determine how many instances of your data exist and where they are located. If your company only backs up data onsite, or stores data off-site with no additional backup, you need to reevaluate your strategy. Putting all your eggs in one basket makes it easy for a natural disaster to wipe out your information.

Think About Off-Site Backups in Different Locations

If you do use off-site backups for your information, you’re taking a step in the right direction, but depending on their physical location, your data might not yet be fully protected.

Consider this scenario: Your business is headquartered in San Francisco and you back up your data in nearby Silicon Valley. A massive earthquake strikes the Bay Area (seismologists say California is overdue for the next “big one”), disabling your building as well as the data center where your backup data is located. Depending on the size of the disaster, it could take hours, days or even weeks before your data is accessible. Would your company be able to survive this disruption?

A smarter option would be to select a backup site that’s not in the same geographic region, reducing the chances that both locations would be impacted by the same disaster.

Consider the Cloud

An increasingly popular option for businesses is to use cloud storage as their backup solution. INAP offers a cloud storage option that is cost-effective, scalable, flexible and dependable.

Another, more robust option is Disaster Recovery as a Service (DRaaS), which replicates mission-critical data and applications so your business can avoid downtime during a natural disaster. DRaaS provides automatic failover to a secondary site should your main environment go down, while allowing your IT team to monitor and control the replicated infrastructure without end users knowing anything is wrong.

Think of DRaaS as facility-level redundancy in your infrastructure: rather than running your servers simultaneously from multiple sites, a secondary site stands by, ready to go in case of an emergency.
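The standby behavior described above can be sketched in miniature. The snippet below is a minimal illustration, not a real DRaaS implementation; the hostnames and health probe are hypothetical, and an actual service would use real network checks and orchestration:

```python
import time

# Hypothetical hostnames for illustration only -- a real DRaaS
# product handles detection and failover behind the scenes.
PRIMARY = "primary.example.com"
STANDBY = "standby.example.com"

def is_healthy(host):
    """Placeholder health probe; a real check would open a TCP
    connection or hit an HTTP health endpoint."""
    return host == PRIMARY  # pretend the primary is currently up

def active_site(probe=is_healthy, max_failures=3, interval=0):
    """Return the site traffic should go to. Fail over to the
    standby only after several consecutive failed probes, so a
    transient network blip doesn't trigger a spurious failover."""
    failures = 0
    while failures < max_failures:
        if probe(PRIMARY):
            return PRIMARY
        failures += 1
        time.sleep(interval)
    return STANDBY

print(active_site())  # primary healthy -> "primary.example.com"
```

Requiring several consecutive failed probes before switching is a common way to avoid “flapping” between sites on momentary glitches.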

Don’t Wait Until It’s Too Late

It’s never a bad time to evaluate your disaster recovery strategy. But if you’re waiting for a natural disaster to come barreling toward your city, then you’re waiting too long to establish and activate your backup strategy.

It’s just up to you and your IT team to determine which services are most appropriate for your business needs.

Explore HorizonIQ
Bare Metal

LEARN MORE

About Author

Laura Vietmeyer

Managing Editor

Read More
Aug 1, 2019

Bare Metal Cloud: Key Advantages and Critical Use Cases to Gain a Competitive Edge

Layachi Khodja, Solutions Engineer

Cloud environments today are part of the IT infrastructure of most enterprises due to all the benefits they provide, including flexibility, scalability, ease of use and pay-as-you-go consumption and billing.

But not all cloud infrastructure is the same.

In this multicloud world, finding the right fit between a workload and a cloud provider becomes a new challenge. Application components, such as web-based content serving platforms, real-time analytics engines, machine learning clusters and Real-Time Bidding (RTB) engines integrating dozens of partners, all require different features and may call for different providers. Enterprises are looking at application components and IT initiatives on a project-by-project basis, seeking the right cloud provider for each use case. Easy cloud-to-cloud interconnectivity allows scalable applications to be distributed over infrastructure from multiple providers.

What is Bare Metal Cloud?

Bare Metal cloud is a deployment model that provides unique and valuable advantages, especially compared to the popular virtualized/VM cloud models that are common with hyperscale providers. Let’s explore the benefits of the bare metal cloud model and highlight some use cases where it offers a distinctive edge.

Advantages of the Bare Metal Cloud Model

Both bare metal cloud and the VM-based hyperscale cloud model provide flexibility and scalability. They both allow for DevOps-driven provisioning and the infrastructure-as-code approach. They both help with demand-based capacity management and a pay-as-you-go budget allocation.

But bare metal cloud has unique advantages:

Customizability
Whether you need NVMe storage for high IOPS, a specific GPU model, or a unique RAM-to-CPU ratio or RAID level, bare metal cloud is highly customizable. Your physical server can be built to the unique specifications required by your application.

Dedicated Resources
Bare metal cloud enables high-performance computing: with no virtualization layer there is no hypervisor overhead, so all compute cycles and resources are dedicated to the application.

Tuned for Performance
Bare metal hardware can be tuned for performance and features, be it disabling hyperthreading in the CPU or changing BIOS and IPMI configurations. In the 2018 report, Price-Performance Analysis: Bare Metal vs. Cloud Hosting, HorizonIQ Bare Metal was tested against IBM and Amazon AWS cloud offerings. In Hadoop cluster performance testing, HorizonIQ’s cluster completed the workload 6 percent faster than IBM Cloud’s bare metal cluster, 6 percent faster than AWS’s EC2 offering and 3 percent faster than AWS’s EMR offering.

Additional Security on Dedicated Machine Instances
With a bare metal server, security measures, like full end-to-end encryption or Intel’s Trusted Execution and Open Attestation, can be easily integrated.

Full Hardware Control
Bare metal servers allow full control of the hardware environment. This is especially important when integrating SAN storage, specific firewalls and other unique appliances required by your applications.

Cost Predictability
Bare metal server instances are generally bundled with bandwidth. This eliminates the need to worry about bandwidth cost overages, which tend to cause significant variations in cloud consumption costs and are a major concern for many organizations. For example, the Price Performance Analysis report concluded that HorizonIQ’s Bare Metal machine configuration was 32 percent less expensive than the same configuration running on IBM Cloud.

Efficient Compute Resources
Bare metal cloud offers more cost-effective compute resources when compared to the VM-based model for similar compute capacity in terms of cores, memory and storage.

Bare Metal Cloud Workload Application Use Cases

Given these benefits, a bare metal cloud provides a competitive advantage for many applications. Feedback from customers indicates it is critical for some use cases. Here is a long—but not exhaustive—list of use cases:

  • High-performance computing, where any overhead should be avoided, and hardware components are selected and tuned for maximum performance: e.g., computing clusters for silicon chip design.
  • AdTech and Fintech applications, especially where Real-Time Bidding (RTB) is involved and speedy access to user profiles and assets data is required.
  • Real-time analytics/recommendation engine clusters where specific hardware and storage is needed to support the real-time nature of the workloads.
  • Gaming applications where performance is needed either for raw compute or 3-D rendering. Hardware is commonly tuned for such applications.
  • Workloads where database access time is essential. In such cases, special hardware components are used, or high performance NVMe-based SAN arrays are integrated.
  • Security-oriented applications that leverage unique Intel/AMD CPU features: end-to-end encryption including memory, trust execution environments, etc.
  • Applications with high outbound bandwidth usage, especially collaboration applications based on real-time communications and webRTC platforms.
  • Cases where a dedicated compute environment is needed either by policy, due to business requirements or for compliance.
  • Most applications where compute resource usage is steady and continuous, the application is not dependent on PaaS services, the hardware footprint size is considerable, and cost is a limiting concern.

Is Bare Metal Cloud Your Best Fit?

Bare metal cloud provides many benefits when compared to virtualization-based cloud offerings.

Bare metal allows for high-performance computing with highly customizable hardware resources that can be tuned for maximum performance. It offers a dedicated compute environment with more control over resources and more security, in a cost-effective way.

Bare metal cloud can be an attractive solution to consider for your next workload or application and it is a choice validated and proven by some of the largest enterprises with mission-critical applications.

Jun 20, 2018

Colo vs. Cloud: What’s Best for Your Business?

INAP

Yankees or Red Sox, Linux or Windows, Star Wars or Star Trek: There’s no shortage of choices life asks us to make. When it comes to cloud versus colocation, it may be tempting to see it as just another either-or decision. But the question you should be asking isn’t “colo or cloud”—it’s “what’s the right mix for my applications?”

Colo is sometimes forgotten because of its more popular, younger and shinier cousin the cloud, but there are use cases for both, and your particular mix will depend on your applications. For example, a financial services company that wants to leverage cloud to gain cost efficiency might use a public cloud for its end-of-day or end-of-month batch processing, while also using colo or hosted private cloud for its mission-critical databases and supporting applications. This configuration would provide the cost efficiency of public cloud for short-term workloads while also utilizing a dedicated, secure platform optimized for applications that are always on.

Regardless of your situation, developing a comprehensive cloud strategy will help you avoid lock-in, providing flexibility, adaptability and room to grow as your needs evolve. And that multicloud strategy just might include some smart usage of colocation if, for example, you have a need for specific hardware or want a network presence in certain locations. Here’s a primer for understanding the big pieces of cloud, colo and anything in between.

The Hidden Cost of On-Premise Solutions

For any organization facing the decision to “build” or “buy” their infrastructure, “buying”—whether bringing your hardware to rented space in a colo facility or shifting entirely to the cloud—can be a straightforward way to level up your IT. Yet the conversation about colo and cloud is usually focused on dollars spent and saved. This is understandable, especially since on-premise data centers are often expensive to secure and maintain, and going off-premise can have a clear impact on cost savings. But what could the conversation be if CAPEX or OPEX weren’t the primary drivers of your IT infrastructure decisions?

Now don’t get me wrong—I know keeping costs reasonable is important—but I also think it might be helpful to think about your choice in terms of a different resource: time. The math is simple: If you can offload certain tasks to a service provider, that’s time you get back. Every minute not spent handling maintenance and administration is a minute you now have free to focus on your actual applications. With that being said, here are the ways colo and cloud can make your life better.

Security and Compliance Improvement with Cloud and Colo

With a colo or cloud service provider, the work of physical data center security and maintenance is no longer on your to-do list—and, depending on your provider, much of the compliance burden isn’t either. A managed service provider can take care of your routine data security and compliance tasks, or even help you architect your infrastructure to fit the specific compliance needs of your applications.

Connectivity Ease with Colo and Cloud

A big part of the decision to move off-premise may be a simple need for connectivity. Your on-premise solution might lack certain connectivity altogether or you may have trouble with reliability or latency. Colo can solve these issues, whether you need to connect to certain geographies, carriers or third-party clouds like AWS or Azure. Managed services from your provider can give you an edge here too, ensuring dependable connectivity and minimizing latency even in spread-out networks.

Colo and Cloud Backup and Disaster Recovery

A huge upside to partnering with a comprehensive service provider is that regardless of your infrastructure solution, backup and disaster recovery services can be easily implemented. Whether using a colo facility or a hosted private cloud, both are effective, efficient ways to build redundancy into your systems—without having to build and operate your own second site.

The Biggest Difference-Maker: A Trusted Service Provider for Cloud and Colo

When choosing the right colo or cloud mix, it’s a good idea to start by asking a few questions:

  1. Where do you see your IT infrastructure and operations strategy in three to five years?
  2. What do you predict your service needs will be then?
  3. And most importantly: Are you working with a provider that gives you the capability to do the things you need to do today and won’t hinder you from doing what you need to do in the future?

Choosing the right colo or cloud provider can determine whether you have the flexibility and freedom to meet your future needs. The right provider can be an invaluable partner in helping you rightsize for today without limiting your options for the future. So pick a cloud or colo provider with a wide range of infrastructure solutions and managed services, one that is skilled, knowledgeable and experienced in multiple competencies, whether colo or cloud. At INAP, the solutions engineers on my team help customers navigate the process, identify hard-to-spot downsides and share knowledge based on our experience assisting other customers.

Applications that aren’t a good fit for a legacy infrastructure model can be easily migrated with the help of a service provider like INAP, while maintaining a single partner that knows you and your business. The right cloud or colo solution will depend on your applications, and that will inevitably evolve over time. Rather than pitting colo against cloud, start from what your applications require, then find the right mix that makes sense for you.

May 24, 2018

How Artificial Intelligence Is Solving Your Business Needs

Paul Painter, Director, Solutions Engineering

When you hear artificial intelligence (AI), it’s easy to start thinking about Skynet or R2-D2. Or that episode of The Simpsons where the robots at Itchy and Scratchy Land run amok.

But these aren’t the droids you’re looking for.

The fact is, AI is very real and very applicable to today’s increasingly demanding business and personal world. The sky’s the limit: AI assists in everything from spam filtering to flying airplanes. Using advanced algorithms, it determines the fare for your late-night Uber ride home or allows you to tell Alexa to order that pizza you know you deserve!

So, how does AI really work? Here are a few questions answered about artificial intelligence and some examples of industries that are using the technology well today.

What Is Artificial Intelligence?

Neural networking and cognitive computing are other commonly used terms to describe AI technology. Both refer to essentially the same thing: advanced computing using algorithmic patterns for various purposes and applications.

Cognitive computing enables neural networking and expert systems through advanced learning and data mining processes. Mimicking the human brain, the system will gather data and information and adapt to accomplish a set of tasks.

What Applications Does Artificial Intelligence Have?

AI is inherently designed to solve complex problems or needs that humans may not be able to effectively address.

We are increasingly seeing cognitive computing influence many areas of everyday life. From personal assistants and security analysis to sales automation, the uses are nearly endless. Consider IBM’s Watson, a cloud-based data analysis platform, or Salesforce’s Einstein, a CRM automation tool. Both are AI platforms rooted in cognitive computing and designed to make your life a lot easier.

One area the technology is really transforming is healthcare. AI offers the ability to efficiently sort, analyze, catalog and apply complex data sets. This helps medical professionals and practitioners conduct further research in addition to diagnosing disease. The idea of being able to provide informative evidence-based solutions to patients has driven many to further AI development in this industry.

Other great examples of AI advancements can be found in the gaming and software industries. For more than a decade, certain games have incorporated options to play against “bots”—artificial players that contain advanced techniques and protocols. This is another example of cognitive computing being able to predict, learn and anticipate situations and problems. While the concept isn’t new, each year, we’re seeing developers advance the technology.

How You Can Use Artificial Intelligence to Manage Your Infrastructure

With the wide variety of possible AI applications, it stands to reason that artificial intelligence can also benefit your IT infrastructure. Having a program that can learn and adapt to different situations while managing and monitoring aspects of your systems would be invaluable.

Imagine the man-hours saved when it comes to managing cloud architecture, reducing OPEX or mitigating a sudden DDoS attack.

With so many potential applications, we will definitely be keeping our eyes on further innovations in artificial intelligence technology.

May 7, 2018

3 Easy Steps to Create a Comprehensive Backup Strategy

INAP

It’s a tale as old as time.

Company gets data. Company doesn’t back up data. Disaster strikes. Company loses data.

But before we get all Shakespearean tragedy on you, there’s a simple way to give this story a happy ending.

A comprehensive backup strategy.

Building an Off-Site and On-Premise Backup Strategy

By now you’ve probably heard about how important it is to have a backup solution to protect your infrastructure and critical files from natural disasters, human error and cyber incidents.

Traditionally, companies would center their backup strategies around on-site storage systems. But recently, off-site cloud solutions have been giving organizations more flexibility to back up their files while providing the security and business continuity strategies necessary to keep their infrastructures running.

Backing up your data in off-site cloud solutions ensures you can still access your critical information if your primary site goes down (similar to disaster recovery as a service) – something that might not be possible with on-site backup options.

So, how do you best create a backup strategy that utilizes on-premise and off-site resources? Here are three best practices to get you started.

1. Calculate the Amount of Data You’ll Need to Back Up

In most cases, you’ll want a solution that provides a complete backup of your data. This strategy allows you to still access your information should there be a disruption at your main site.

But before you begin building anything, you need to calculate exactly how much data you’ll need to back up. Basically, the whole “measure twice, cut once” philosophy. You don’t want to create a strategy and then discover it’s insufficient to meet your backup needs.

2. Determine Which Backup Files Go on Which Platform

It’s important to note you don’t need to put all of your backup files in one location. In fact, it’s encouraged that you use multiple locations. If your primary site is hit by a natural disaster or goes offline due to human error, chances are your on-premise backup will be disrupted as well.

If you keep non-mission-critical data in an on-premise backup, you can save space in the cloud for your most important files. You just need to determine which resources you absolutely must be able to access. Storage can be expensive, so it’s important to map out which data needs to be backed up on each platform.

3. Analyze Your Backup Timeframe

Whether you run your backup on-premise or in the cloud, it’s important to determine how long the backup process will take to complete.

Companies will generally run their backups overnight or on weekends, so they don’t interfere with daily business operations. This typically gives organizations a large enough backup window to accomplish everything they need. However, you never want to assume. While not an exact science, you can estimate how long you’ll need to back up your files by calculating the amount of total storage you’ll need backed up and dividing it by the storage read/write speed.

For instance, assume you have a data set that is 800GB and your storage read/write speed is 150MBps:

800GB ÷ 150MBps ≈ 5,333 seconds, or roughly 1.5 hours of backup time for that specific data set.

Add up all of your time to give you an estimate of how long you’ll need so you can effectively plan your backup strategy.
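The arithmetic above is easy to wrap in a small helper so you can run the estimate for each data set. A minimal sketch, assuming decimal units (1 GB = 1,000 MB) and that the speed is in megabytes, not megabits, per second:

```python
def backup_hours(data_gb, throughput_mbps):
    """Estimated backup window: total data divided by sustained
    read/write throughput. Assumes decimal units (1 GB = 1,000 MB)
    and megabytes per second for throughput."""
    seconds = data_gb * 1000 / throughput_mbps
    return seconds / 3600

# The article's example: an 800GB data set at 150MBps
print(round(backup_hours(800, 150), 1))  # -> 1.5
```

Real-world backups rarely sustain the drive’s rated speed, so pad the result before committing to a backup window.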

Of course, each organization has unique data and storage requirements. Determine exactly what type of backup strategy would be best for your company and how off-site cloud options can help. In addition to securely housing your data in a location that’s separate from your main site so it’s accessible if your facility goes offline, off-site cloud backups are managed by third-party providers and can help free up your IT team to focus on more critical needs.

Apr 27, 2018

3 Steps to Convince Your Boss You Need Disaster Recovery

INAP

Try as you might, there’s nothing you can do to completely protect against infrastructure failure.

Even the biggest fish in the sea suffer from unplanned disruptions. Remember when Amazon’s S3 had a massive outage in February 2017? To Amazon’s credit, they acted quickly and remediated the problem within hours, but more than 145,000 websites and 120,000 domains were impacted.

For IT professionals, outages like this are often top of mind when trying to pitch disaster recovery (DR) solutions to their boss and company executives. Professionals are often faced with two major hurdles: Some members of the C-suite only think about natural disasters and do not consider equipment failure, human error and cyber incidents. Plus, widespread outages caused by disasters are rare, so executives may doubt the need for a comprehensive DR strategy.

Here are three steps you can take to make a solid argument that a robust disaster recovery solution should be included in your IT budget.

Explain Exactly What Constitutes a Disaster

Mention the word “disaster” to your bosses and what’s the first thing that comes to mind?

Probably hurricanes, tornadoes, earthquakes and locusts (OK, maybe not the last one). And that would make sense. After all, the U.S. spent more on weather disasters in 2017 than any other year on record.

But it’s not just these large acts of God that can cause problems for your infrastructure. An electrical fire in two substations caused a power outage at Atlanta’s Hartsfield-Jackson International Airport in December 2017. The disruption forced airlines to cancel flights for days – Delta Air Lines reported losses up to $50 million.

It’s important that you emphasize that the term disaster can be used to describe anything from major weather events to technology failures and even cyber incidents. Explain how even minor incidents can cause major problems.

Explain Disruptions in Dollars and Cents

If you haven’t already gotten the attention of your executives by describing how disasters are more common than they think, hit them with information about how much these disruptions could cost your business.

Think worst-case scenario. According to ITIC, 98 percent of organizations say a single hour of downtime costs their business more than $100,000. Even more staggering, nearly a third of organizations say this cost is more than $1 million an hour. And that’s just the cost of the disruption itself. Point out the potential financial impact of customers who abandon your brand, or of negative publicity resulting from a public outage.
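Framing downtime in dollars is simple multiplication, which makes it easy to put in front of executives. A quick sketch using the ITIC survey’s most conservative figure (substitute an hourly estimate for your own business):

```python
def downtime_cost(hours, cost_per_hour=100_000):
    """Back-of-the-envelope outage cost: duration times hourly cost.
    The $100,000/hour default is the ITIC survey figure cited above;
    replace it with an estimate for your own business."""
    return hours * cost_per_hour

# A four-hour outage at the survey's most conservative figure
print(f"${downtime_cost(4):,}")  # -> $400,000
```

Even at the low end, a few hours of downtime can dwarf the annual cost of a DR solution, which is the comparison executives need to see.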

Provide Disaster Recovery Options

Now that you’ve pitched the doom and gloom scenario, come prepared to offer a solution – and most importantly, its cost.

When executives are budget planning, they aren’t going to hand IT a blank check for resources when managers can’t provide specifics about how they will be used and why they will benefit the business. Consider your audience: You don’t need to provide extremely technical information, but you should be able to give a high-level overview of the solutions you are proposing.

Examine your options for protecting your data during a disaster. Synchronous disaster recovery and data backups duplicate your critical information, so it remains accessible if your servers go down. Since this replication is done in near real-time, this is generally the most expensive option, but it provides the most comprehensive recovery solution. On the other hand, asynchronous solutions won’t be as pricey but have delayed duplication – anything from a few seconds to minutes. This means you will lose some of your data if your infrastructure goes down. Figure out if the additional cost is worth limiting the potential loss of data.

In addition, it’s important you consider the type of DR service that meets your specific business needs. Sure, you could build something yourself, but this might not be the most cost-effective option. A more reasonable option might be cloud-based disaster recovery as a service (DRaaS), which is run by a third-party provider, freeing up your IT staff to focus on core business needs. Each vendor will price this solution differently based on your specific service and functionality, so do your homework.

Sign up today for a free consultation with an INAP expert to learn more about our disaster recovery solutions.

Apr 19, 2018

7 Incredible Examples of Augmented Reality Technology

INAP

Remember Google Glass?

The much-hyped augmented reality product from the search engine company promised to revolutionize the way we interact with the world. It didn’t quite pan out.

While most of the tech world labeled Google Glass a magnificent failure, the underlying technology highlighted the enormous potential of augmented reality.

What is Augmented Reality?

This technology superimposes a CG image on a user’s view of the real world. Unlike virtual reality, where everything a user sees is generated by a computer, augmented reality keeps the real-world focus, but just adds elements that aren’t really there to enhance the user’s experience.

This application has been used everywhere from the healthcare tech industry to retail and even manufacturing—Google is reintroducing Google Glass as an augmented reality tool for the workforce.

A few innovative professionals are now even using augmented reality in their business cards.

Pretty cool, right?

Best Current Examples of Augmented Reality

The potential of augmented reality is endless.

It’s only a matter of time before this technology becomes a part of our everyday lives. We’re not quite there yet, but a few companies and organizations are making great strides with this technology today.

Here are seven of the best examples of augmented reality technology we’ve seen to date.

IKEA Mobile App

Aside from furniture with fun names that you need to assemble yourself and cheap Swedish meatballs, IKEA is also known in the tech world as one of the first companies to use augmented reality well.

The retailer began experimenting with augmented reality back in 2012, when shoppers could use its app to see how tables and shelves would look in various places around their homes. IKEA is taking it a step further with its IKEA Place app, which now allows you to select anything from the store’s catalog and see how it will look, to scale, anywhere in your house.

This is an extremely helpful tool for people who are wondering if a certain piece of furniture will fit in a tight space, or if the color of their prospective purchase will match the motif of the room.

Nintendo’s Pokémon Go App

You really can’t have an augmented reality conversation without mentioning Nintendo’s Pokémon Go app.

The smash hit of 2016, Pokémon Go allowed users to catch their favorite Pokémon by looking through their phones at the real world – but with superimposed images.

The game was an overwhelming success, with up to 65 million users at the peak of its popularity. It was probably also the reason you noticed scores of teens and young adults wandering around your neighborhood staring at their phones the whole time.

Google Pixel’s Star Wars Stickers

If you saw “Star Wars: The Last Jedi” in theaters – or watched TV at all this past December – you probably saw the commercial for Google Pixel’s AR stickers.

Using the same basic idea as Pokémon Go, Google added a feature to the camera of its Pixel phones that allowed users to insert AR stickers into pictures and videos. If you had a friend with a Google phone, you probably received at least one photo of a random Stormtrooper.

Disney Coloring Book

A few years ago, Disney created a unique way for children to see their favorite characters in 3D using augmented reality.

The research team developed technology using AR that would project colored images from a coloring book into 3D renderings using a mobile phone or tablet. This technology is still in its infancy and has not yet been released to the public, but this could usher in a whole new way for children to explore and play with their imagination.

L’Oréal Makeup App

Similar to the idea behind IKEA’s app, beauty company L’Oréal has a mobile app that lets users try on various types of makeup.

Think of it like a Snapchat filter. The app identifies your face and then virtually shows you what you would look like wearing a certain shade or color of a specific product.

Weather Channel Studio Effects

Television news has used special effects to enhance its program quality for years. Weather forecasters, for example, have long delivered their reports in front of green screens that appear to viewers as maps.

The Weather Channel is now taking the technology a step further to illustrate extreme weather and its effects. Over the past several years, the broadcaster has used augmented reality to display a 3D tornado on set, show the height of flooding during storm surges and hurricanes and, most recently, drive a virtual car through the studio to show how vehicles lose control on snowy or icy roads.

Expect news, weather and sports programming to continue experimenting with augmented reality as a way to improve the television experience for viewers.

U.S. Army

Not all examples of augmented reality are fun and games.

The United States Army is experimenting with augmented reality programs to be used in combat that would help soldiers distinguish between enemies and friendly troops, as well as improve night vision.

This technology is still in development and may be years away from deployment, but military officials say this innovation would improve combat efficiency and help save lives.

The Infrastructure Needed to Run Augmented Reality

As you might expect, running an effective augmented reality program requires lots of data and a high-performing infrastructure. Any lag or delay in an augmented reality program defeats the purpose of using the technology and leaves the end user with a far from optimal experience.
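To put that in rough numbers: a smooth AR experience typically targets 60 frames per second, which leaves only about 16.7 ms per frame for capture, tracking, network round trips and rendering. The sketch below is a back-of-the-envelope illustration of that budget; the fps and latency figures are assumptions, not measurements from any particular platform.

```python
# Back-of-the-envelope AR frame budget. All numbers are illustrative
# assumptions, not measurements from any specific AR platform.

def frame_budget_ms(fps: float) -> float:
    """Total time available per frame at a given frame rate."""
    return 1000.0 / fps

def rendering_budget_ms(fps: float, network_rtt_ms: float, tracking_ms: float) -> float:
    """Time left for rendering once network and tracking costs are paid."""
    return frame_budget_ms(fps) - network_rtt_ms - tracking_ms
```

At 60 fps with a 10 ms network round trip and 4 ms of tracking overhead, less than 3 ms remain for rendering each frame – which is why low-latency infrastructure matters so much here.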

INAP’s data center and infrastructure services provide the solutions you need to power your augmented reality applications. Whether you are looking for colocation, managed or network services, or hope to run your servers in the cloud, our specialists can help you find the right solutions for your business.

Contact us today to speak with an INAP specialist about your specific business needs.

Explore HorizonIQ
Bare Metal

LEARN MORE

About Author

INAP

Read More
Mar 28, 2018

It’s Time to Evaluate Your Company’s Backup Strategy

INAP

There are a few things you can always depend on at the end of March.

The weather gets a little warmer, a Cinderella team will bust your college basketball bracket (we see you, UMBC and Loyola-Chicago!) and that one guy in your office pulls off an April Fools’ Day prank that isn’t as funny as he thinks.

But one event that happens every year in March is often overlooked. We’re talking about World Backup Day.

What is World Backup Day

Held every year on March 31, World Backup Day was created by a biology student at Youngstown State in 2011 to encourage users to back up their files, like cell phone photos, home videos, documents and emails.

Users are asked to take a pledge, declaring:

“I solemnly swear to backup my important documents and precious memories on March 31.”

Moving these personal files to the cloud or an external hard drive gives you insurance in case your phone is stolen or your computer gets a virus. Of course, smart IT people like you already know these risks and take precautions against personal data loss, but enough of the general public is unaware of the risks, so World Backup Day became a thing.

Your Corporate Backup Strategy

World Backup Day is also the perfect opportunity to review your company’s backup strategy.

You’ve probably seen the statistics highlighting the consequences of data loss. For instance, 60 percent of small and medium-sized businesses (SMBs) that lose data will shut down within six months. And perhaps more startling, nearly 60 percent of SMBs are not prepared for data loss.

So, what can you do to make sure your business isn’t just another statistic?

A few years ago, INAP published this list of essential tasks you need to consider when evaluating your corporate backup plan.

  • Determine what data is critical to your business
  • Evaluate backup solutions that meet your data’s security requirements
  • Select a backup solution provider
  • Implement your backup solution
  • Regularly test your backups to ensure everything is working as expected
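The last item on that list is the one most often skipped. As a minimal sketch of what “regularly test your backups” can mean in practice, the script below compares SHA-256 checksums of source files against their backup copies. The directory layout is an assumption, and a real restore test should go further than checksum matching.

```python
# Minimal backup-verification sketch: flag files that are missing from
# the backup or whose contents no longer match the source.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list[str]:
    """Return relative paths that are missing or corrupted in the backup."""
    source, backup = Path(source_dir), Path(backup_dir)
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        bak_file = backup / rel
        if not bak_file.is_file() or sha256_of(src_file) != sha256_of(bak_file):
            problems.append(str(rel))
    return problems
```

Run on a schedule (cron, a task runner, or your backup tool’s hooks), an empty result means every source file has an intact copy in the backup.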

Pay close attention to the third bullet on that list. Sure, you could go it alone and back up your files internally, but that doesn’t protect you from a catastrophic event that completely wipes your servers.

Your best bet is to work with a backup solution provider.

INAP’s Backup Solutions

Fortunately for you and your business, INAP is equipped to help with your backup needs. Our managed backup services let you replicate your important information in dedicated and shared environments, on either INAP or off-site servers.

If budget is a factor (and we know it always is), cloud backup provides an affordable alternative. Your server will be shared with other users, but you’ll receive the same level of encryption and security as you would with a dedicated server, so your files and critical information will remain safe.

Regardless of the type of solution you require, our robust backup offerings will give you the flexibility and performance you desire to protect your important files from data loss.

Contact us today and speak with an INAP specialist about the backup services best suited for your business needs.

Mar 7, 2018

Cloud Gaming: A Revolutionary New Streaming Option

Charles Parry

Over the last decade, we’ve seen how the cloud has reshaped the gaming community. Some might remember LAN parties or internet cafes, which have now been replaced by high-powered servers capable of hosting hundreds of players.

Recently, cloud gaming took its next big step: Gaming as a Service.

What is Cloud Gaming

Not unlike media streaming services such as Netflix or Hulu, Gaming as a Service allows users to “stream” programs directly to their PCs.

This is groundbreaking given that most gamers have always been limited by hardware constraints. Only those able to afford higher-priced components could enjoy playing their games in full 60fps bliss and beyond, and with the introduction of 4K gaming, that group has only grown smaller. Cloud gaming, of course, changes this.

Who Uses Cloud Gaming

While Gaming as a Service may go by several names in its early stage, such as gaming on demand or GaaS, the basic idea is the same: the service lets gamers use their PC as the gateway to a high-end server. The games run on the server and are streamed directly to the user’s desktop. The only requirements are a dependable high-speed internet connection and a basic desktop setup.
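As a rough illustration of that “dependable internet connection” requirement, the snippet below times a TCP handshake to estimate round-trip latency to a server. Any host and port you pass are placeholders, not a real cloud gaming endpoint, and production streaming services measure latency far more rigorously than this.

```python
# Rough latency probe: time a TCP handshake to a server. A handshake
# takes roughly one network round trip, so this approximates RTT.
import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the time, in milliseconds, taken to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # close immediately; we only wanted the handshake timing
    return (time.perf_counter() - start) * 1000.0
```

A stream that needs, say, sub-30 ms round trips to feel responsive would fail this kind of check against a distant server, which is why providers put game servers close to their players.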

While this is not an entirely new concept to the community, it’s still revolutionary to the PC gaming industry in particular. Next generation consoles have been using a similar model for many of their games. In this case, a portion of the software would be downloaded to the user’s device before allowing them to play.

Another good example is mobile gaming platforms such as Crowdstar that use bare-metal cloud servers to support their applications. In both cases, the cloud provides a highly scalable service in which large amounts of data are stored and streamed directly to the end user.

Why You Need to Pay Attention to Cloud Gaming

Unless you’re an avid gamer, it’s unlikely that much of this will excite you. The technology is still in very early (even alpha) stages, and companies won’t be releasing their final products for some time. However, a few businesses are already standing out in this field.

For one, Nvidia is currently developing a game streaming service, GeForce NOW, built on its GRID cloud gaming platform. Small startups, such as Snoost and Vortex, are also trying to get in on the action.

The flexibility that cloud gaming creates will certainly rattle the industry, and many see it as a conceivable new frontier. No more hardware upgrades, complicated installations and configurations, or software patches. All of this will be handled by cloud gaming providers, letting gamers simply enjoy their games.

Just as the service is groundbreaking, so too should be the infrastructure supporting it. Cloud technology has reached the point where this is now possible. To effectively support this growing trend, providers should be able to offer high-performance, scalable cloud backed by low-latency IP. INAP can provide exactly this type of solution, custom fit to your specific business needs.

Hi-Rez Studios, which is highly engaged in both the mobile and MMO gaming communities, fully understands this need. The studio utilizes our Cloud, Colocation and Performance IP services. We provide many of our gaming customers with this type of support, enabling them to push past industry limits, achieve new heights and even revolutionize their industry.

If you would like to learn more, please let us know and one of our product experts will be in touch.

Dec 21, 2017

5 Highlights from the Gartner IO Conference 2017

INAP

Insights and Advice from our Experts

INAP was fortunate to be a sponsor at Gartner’s annual IT Infrastructure, Operations Management & Data Center Conference 2017 in Las Vegas.

In addition to exhibiting our high-performance managed hosting and service solutions, our team of experts had the opportunity to attend some of the popular keynotes and sessions throughout the four-day event.

The conference included more than 150 sessions, so naturally we weren’t able to attend every one. We would have liked to, but since time travel is still unreliable at best, our experts picked the sessions they knew would be most relevant to the future of our business and our ever-evolving industry.

And they weren’t disappointed.

Here are five key industry insights and trends our experts brought home with them from the Gartner IO Conference.

1. Make Way for Artificial Intelligence and Machine Learning

You probably already use some form of automation in your business. Chatbots and virtual assistants are increasing in popularity, but are you doing enough to improve the efficiency of your infrastructure?

During their opening keynote address, Gartner’s Milind Govekar and Dave Russell predicted that if you don’t effectively integrate artificial intelligence (AI) and machine learning (ML) into your environment and workloads by 2020, your infrastructure may not be operationally and economically viable.

As a result, they expect an increase in software-centric or programmable infrastructure to support advanced platform thinking and integration with minimal human intervention. If utilized correctly, this technology will enable your environment to process more data faster and at lower cost.

Stay tuned.

2. Living on the Edge

It was just a few years ago that the internet of things (IoT) took off as the next big advancement in digital technology.

Businesses now need to embrace the edge by blending physical and digital resources to create an experience that provides value and makes a difference.

It’s not about rolling out technology for the sake of doing it. In a session about top trends in 2018 and their impact on infrastructure and operations, Gartner VP David Cappuccio pointed out the necessity of creating an intelligent edge: utilizing connected devices that respond in real time and allow interaction between things and people to solve critical business needs.

3. Data is More Valuable Than Ever

In a digital world of AI, connected devices and intelligent edges, data is becoming even more important.

Machine learning and automated systems will require additional data to analyze trends and behaviors to make logical decisions to improve efficiency, especially when connected with multiple devices. To manage the influx of digital information, a greater priority will be placed on data storage and backup. (Shameless self-promotion: INAP launched a new managed storage offering during this conference.)

More data also means more opportunities for hackers, and businesses are being forced to take additional steps to combat this risk. In a session about the state of business continuity management, we learned that average disaster recovery budgets were expected to increase in 2017.

4. Cloud Reaches New Heights

One of the overwhelming themes that kept coming up during sessions and keynotes was a focus on the cloud.

You’re probably already familiar with some of the stats that predict massive increases in cloud computing over the next few years. Gartner’s Govekar and Russell doubled down on those forecasts, claiming that by 2021, 80 percent of organizations using DevOps will deploy new services in the public cloud.

It appears we can expect more businesses to transition to a cloud-only model, where before it was just cloud first. The impact remains to be seen.

5. Mind the Skills Gap

With technical innovation and the transition to a more cloud-focused infrastructure, IT teams are being driven to master additional skills.

Some employees may be fast learners, but the reality for most businesses is that they’ll likely experience disruptions due to infrastructure and operational skills gaps.

Rather than being specialists or generalists, IT talent should strive to become versatilists – specialists in a certain discipline who can easily switch to another role. In the meantime, companies need to consider the experience level of their existing teams when rushing to adopt new technology.

Implementing New Trends

Your business may already be in the process of implementing changes based on these trends. Or perhaps you’re aware that you need to get the ball rolling, but you’re not ready just yet.

Regardless of where you currently sit, you should consider how these trends will impact your industry and business model or you risk being left in your competitors’ dust.

It may seem like a daunting task, but you don’t have to do it alone. Consider a trusted partner who will be there every step of the way to provide guidance, support and the necessary services to help you achieve your business goals. That’s where INAP comes in. Our team of experts will assist you in preparing your organization and infrastructure for the technology of tomorrow. Contact us to learn how we can help you build a better IT infrastructure for today and the future.
