
Dec 30, 2014

Three tips for evaluating colocation providers

Ansley Kilgore


Watch Internap’s Randy Ortiz, VP of data center design and engineering, discuss the important features to look for when evaluating colocation providers.

Before you begin investigating colocation facilities, you should have a basic understanding of your connectivity, fiber and power requirements. This will help you make the best decision for your data center services.

Ask the right questions to make sure the facility meets your requirements.

  1. How many carriers will you have access to in the facility?
  2. What power capacity can the facility support? Data centers with higher power density allow you to grow your colocation footprint in place, without purchasing additional square footage.
  3. How modular and flexible is the facility? One important aspect of modular design is concurrent maintainability, which becomes critical when it’s time for a colocation facility to expand. Any time you grow the space or add or upgrade equipment, there are risks. Certain equipment must be taken offline for maintenance or upgrades, and concurrent maintainability builds in redundant systems so this work can happen without creating single points of failure or causing outages.

Learn how colocation drives operational efficiencies by downloading our white paper, Next-Generation Colocation.

Dec 18, 2014

News roundup: Top technology predictions for 2015

Ansley Kilgore

The technology landscape of tomorrow could look very different than it does today. Change is in the air across several areas, ranging from cloud computing to networking technologies; new developments are on the horizon for the Internet of Things (IoT) and even online gaming.

Below is a compilation of some recent articles to help you stay informed on the technology predictions for 2015.

Gartner Identifies the Top 10 Strategic Technology Trends for 2015
At the Gartner Symposium/ITxpo held in October, analysts outlined 10 trends that companies should consider during the strategic planning process for 2015. These impending shifts could affect end users, IT or the business in general. At minimum, these trends will compel organizations to make decisions on how to deal with the changing technology landscape. According to David Cearley, vice president & Gartner Fellow, most trends fall within three main themes: the “merging of the real and virtual worlds, the advent of intelligence everywhere, and the technology impact of the digital business shift.”

IDC: Top 10 Technology Predictions For 2015
This comprehensive list of predictions covers everything from consumer gadgets to disruptive enterprise-wide shifts. Get ready for major growth in new technologies, especially big data analytics, Internet of Things (IoT), cloud and mobile. The cloud computing landscape may see new partnerships as well as more industry-specific cloud-based service offerings.

ParStream Announces 2015 Predictions for Internet of Things
IoT could be one of the most disruptive technology shifts in 2015. ParStream, which recently launched its IoT analytics platform, has outlined four growth areas, including business adoption, edge analytics, platform integration and industrial/enterprise IoT.

Seven Network Technologies That Will Disrupt IT in 2015
2015 will be a pivotal year for network technologies. Industry-wide shifts that previously seemed far away are just around the corner, and demand for higher bandwidth and improved network efficiency will require new approaches from IT networking professionals. Internap’s Mike Palladino, VP of network and support, discusses seven predictions that service providers, hardware vendors and IT organizations need to know in order to stay competitive.

Six Predictions for the Future of Cloud Infrastructure
In the coming year, cloud performance will become a critical factor. Organizations will be challenged to build a best-fit cloud environment that meets workload requirements and provides optimal performance. Learn six predictions about the future of cloud infrastructure for 2015.

Top 5 Online Gaming Industry Trends for 2015
The gaming industry will continue to gain strength this year, and the global games market is forecast to reach $86.1B by 2016. In this article, Todd Harris, Co-Founder and COO of Hi-Rez Studios, discusses five trends that are influencing the growth of the online gaming industry.

Dec 16, 2014

Everything’s Bigger in Texas: More Power in Dallas Data Center

INAP

We have seen a steady increase in demand for high power density in our data centers. In the last 12 months, the average power draw per rack has increased by 24% across all Internap data centers. The customer power draw at our Dallas data center facility has grown by more than 20% in the last year as well.

To continue supporting the high-density power requirements of customers in our Dallas data center market, Internap has added more power and cooling capacity to our Dallas data center with a 1,200kW UPS and a 350-ton chiller.

The latest capacity addition brings the total power available at the Dallas data center to 3.6MW and over 1,000 tons of cooling capability. The Dallas data center is designed to handle configurations of up to 18kW per rack, or 720 watts per square foot, which is a lot of energy in a small area. To put this in perspective, a kitchen oven uses around 1kW of power, depending on the heating temperature.
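
To make those figures concrete, the quick calculation below uses only the numbers quoted above (3.6MW total, 18kW per rack, 720 watts per square foot) to derive the implied floor area per rack and how many racks the facility could support at maximum density.

    # Derived solely from the figures quoted in this post.
    total_power_w = 3_600_000     # 3.6 MW of total power
    rack_power_w = 18_000         # maximum design density per rack (18 kW)
    watts_per_sqft = 720          # stated floor-level power density

    sqft_per_rack = rack_power_w / watts_per_sqft         # implied area per rack
    racks_at_full_density = total_power_w / rack_power_w  # racks at 18 kW each

    print(f"Implied area per rack: {sqft_per_rack:.0f} sq ft")                # 25 sq ft
    print(f"Racks supportable at full density: {racks_at_full_density:.0f}")  # 200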

Advancements in processor technologies, and Moore’s Law in general, mean that state-of-the-art processors can do far more work in the same physical footprint. The high-density infrastructure at Internap facilities enables customers to upgrade to newer, more powerful equipment that consumes more power but does not require additional space.

Check out our Dallas data center to learn more about the features and services available at the facility.

Our Dallas data centers are conveniently located in the Dallas-Fort Worth metroplex:

  • Flagship Dallas Data Center
    1221 Coit Road
    Plano, Texas 75075
  • Dallas Data Center
    1950 N Stemmons Fwy
    Dallas, Texas 75207
  • Data Center Downtown Dallas POP
    400 S Akard Street
    Dallas, Texas 75202

More About Our Flagship Dallas Data Center

Our flagship Dallas data center is located at 1221 Coit Road, Plano, TX 75075. This Dallas colocation data center connects to our Atlanta, Silicon Valley, Los Angeles, Phoenix, Chicago and Washington, D.C. data centers via our reliable, high-performing backbone. Our carrier-neutral, SOC 2 Type II Dallas data center facilities are concurrently maintainable, energy efficient and support high-power-density environments of 20+ kW per rack. If you need Dallas colocation services or other data center solutions in the Dallas-Fort Worth metroplex, contact us.

Dec 10, 2014

6 predictions for the future of cloud infrastructure

INAP

The future of cloud is bright, but in the coming year, cloud performance will become a critical factor. Organizations will be challenged to build a best-fit cloud environment that meets workload requirements and provides optimal performance.

Below are six predictions about the future of cloud infrastructure for 2015.

1. For the cloud, network latency will take the spotlight
As my colleague Mike Palladino alluded to in his recent post, having optimal cloud compute and storage specs alone does not ensure the best possible cloud performance. The performance of your network can be just as important as, and sometimes more important than, the quality of the rest of your Internet infrastructure.

This year, Internap found that networks only deliver data over the best path 15 percent of the time, which is a huge issue for companies that are hosting their mission-critical applications in the cloud and need to ensure high-speed access. I expect 2015 will be the year when more companies that have invested heavily in Internet infrastructure will take a closer look at how they can improve their network to ensure the best possible performance.

2. Hybrid infrastructure will go beyond cloud-only solutions
The term “hybrid” has traditionally been limited to public and private cloud infrastructure, but in 2015 and beyond, we’ll see “hybrid” take on an expanded definition as companies concerned with delivering the highest possible levels of performance, security, control and cost-efficiency implement broader hybrid infrastructure approaches.

A few years ago, the only people who cared about achieving the highest possible levels of performance might have been high-frequency traders or scientists using high-capacity R&E networks, but today performance-sensitive applications are becoming table stakes across industries. From ecommerce to ad tech and mobile analytics companies, the reliability, scalability and performance of their Internet infrastructure are not only critical to the business; they are the business.

Get the white paper: Deciphering Hybrid Infrastructure

For these customers, public or private clouds alone are not optimal for cost-efficiently scaling their applications to deliver a flawless user experience for their end customers on any device, anywhere, at any time, and this is what is driving the growth in hybrid infrastructure environments. In 2015, expect more companies to get realistic about virtual cloud’s role as one element of their overall IT infrastructure mix, along with bare metal and more custom services like managed hosting and colocation, to best fit their specific applications and use cases.

3. Bare-metal adoption will become mainstream for both stable and variable workloads in performance-sensitive industries
Companies in industries that rely on performance-sensitive, fast data applications will increasingly weave bare-metal infrastructure – which can offer the flexibility of virtualized public cloud with the performance of dedicated servers – into their IT infrastructure mix. As customers become more familiar and comfortable with running their stable workloads on bare-metal infrastructure, we also expect to see companies increasingly take advantage of hourly bare-metal configurations to support their performance-sensitive variable workloads.

We were one of the first to market with our bare-metal infrastructure solutions back in 2012, and over the last year we’ve seen many other providers move into the bare-metal space as companies seek higher-performing, but equally flexible and on-demand, alternatives to public cloud. We’ve already started to see widespread adoption of bare metal emerging in the adtech industry, and we expect continued adoption in industries like online gaming, financial services, healthcare and ecommerce in 2015 and beyond.

4. Reality check: public cloud does not always equal virtualization
The NIST definition of cloud computing includes on-demand self-service, rapid elasticity and resource pooling, among other criteria. Although virtualization is nowhere to be found in the definition, there is still a widespread misconception that it is inextricably linked to public cloud services. According to a recent Internap survey of 250 Internet infrastructure decision makers, 66% of respondents cited virtualization as a defining characteristic of a public cloud.

With the rise in popularity of bare-metal infrastructure, which can offer the agility that defines the cloud without using a hypervisor layer or multi-tenant environment, it’s becoming increasingly apparent that virtualization is not necessary for cloud computing. In 2015 and beyond, this traction will continue to drive greater awareness that public clouds can come in various forms – including non-virtualized bare-metal alternatives – and are still evolving to fit the needs of new and more specific applications.

5. OpenStack will gain more public cloud traction
Over the last few years OpenStack-based private clouds have been adopted by some of the world’s largest corporations like PayPal, Time Warner and BMW, and in 2015 we will start seeing more OpenStack-based public cloud deployments.

For public cloud providers, this private-cloud activity we’ve seen over the last few years has helped to build the overall OpenStack ecosystem, and laid the foundation for widespread OpenStack-based public cloud implementations. Each OpenStack-based private cloud that is deployed represents a future hybrid cloud and an enterprise customer that will appreciate the interoperability that OpenStack offers. With expanding geographic footprints, enterprise-class features and the ability to avoid vendor lock-in, these customers will be increasingly attracted to OpenStack-based public clouds in the same way enterprises started warming up to Linux.

6. Cloud-wary organizations adopting cloud infrastructure for the first time will likely experience fewer security issues than they anticipated
In Internap’s 2014 Cloud Services Landscape Report, we found that 40 percent of “cloud wary” organizations (those that have not yet adopted cloud infrastructure) cited security as a concern, whereas only 15 percent of the “cloud wise” (those currently using cloud infrastructure) cited security as a challenge they’ve encountered. While a portion of cloud-wary organizations are from security conscious industries, such as financial services, healthcare and government, these findings suggest that the majority of “cloud wary” organizations may be overestimating security risks. As more companies adopt cloud infrastructure for the first time in 2015, we expect that many will experience fewer security issues than they might have expected prior to adoption.

Dec 9, 2014

The Hybrid RAID

INAP

The Hybrid RAID – Are You Maximizing Your Cost-to-Performance Ratio?

Traditionally, when looking to set up a RAID array, the goal is to balance performance and capacity while keeping overhead low enough to remain profitable. With this in mind, we need to figure out how to meet these competing goals.

A traditional SATA array will meet the requirements of capacity and low cost but will lack the performance needed by modern high-I/O applications. By using SAS drives, you can move toward higher performance, but the cost per GB increases. The last option in a traditional RAID setup is an all-SSD array, which provides both the highest performance and the highest cost per GB.

This is where the Hybrid RAID solution comes in.

Hybrid RAID arrays allow you to combine the high capacity and low cost of SATA drives with the performance of SSDs. By implementing logic in the RAID controller, your server can use the SSDs for highly accessed data while still storing large amounts of static data on the SATA disks. In this configuration, the SSDs act as a form of cache memory. Although it’s not as fast as the RAID controller’s onboard cache, it can store much more data. Testing of this technology has yielded gains of up to 1,200% increased I/O over traditional SATA arrays in high-I/O situations.

What kind of applications do you regard as being high I/O?

The servers that see the most benefit from this solution are those running SQL databases, virtual machines and web workloads. SQL servers tend to see the greatest benefit from the Hybrid RAID configuration. Often in these situations, the biggest bottleneck facing the server is serving data as quickly as clients request it. By allowing the RAID controller to learn which data is most requested, it can move that information to the SSDs for quick access. As I/O on the server drops, the data is synced back to the traditional SATA drives. This is similar to using write cache on the RAID controller, but on a much larger scale. Most modern RAID controllers offer between 512MB and 2GB of onboard memory that can be used for caching data. By using an SSD, you can easily cache up to 512GB worth of data – a theoretical gain of several hundred times the cache space.
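
The caching logic itself lives in the RAID controller’s firmware and is vendor-specific, but the general idea of promoting hot blocks to a small, fast tier can be sketched in a few lines of Python. The promotion threshold and cache capacity below are arbitrary examples for illustration, not values from any real controller.

    # Simplified software model of the hot-data promotion described above.
    # Real hybrid RAID caching happens in proprietary controller firmware;
    # the promotion threshold and cache size here are arbitrary examples.
    from collections import Counter, OrderedDict

    class HotBlockCache:
        def __init__(self, ssd_capacity_blocks=1024, promote_after=3):
            self.ssd = OrderedDict()              # block_id -> data, in LRU order
            self.capacity = ssd_capacity_blocks
            self.hits = Counter()                 # access count per block
            self.promote_after = promote_after

        def read(self, block_id, read_from_sata):
            self.hits[block_id] += 1
            if block_id in self.ssd:              # hot block already on SSD
                self.ssd.move_to_end(block_id)
                return self.ssd[block_id], "ssd"
            data = read_from_sata(block_id)       # slow path: SATA spindles
            if self.hits[block_id] >= self.promote_after:
                if len(self.ssd) >= self.capacity:
                    self.ssd.popitem(last=False)  # evict least-recently-used block
                self.ssd[block_id] = data
            return data, "sata"

    # A block that is read repeatedly ends up being served from the SSD tier.
    cache = HotBlockCache()
    tiers = [cache.read(7, lambda b: f"data-{b}")[1] for _ in range(4)]
    print(tiers)   # ['sata', 'sata', 'sata', 'ssd']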

If the Hybrid RAID solution is so great, why doesn’t everyone use it?

While it does provide advantages over traditional RAID configurations, there are also drawbacks. Adding one or two SSDs to your RAID configuration often means you need a larger server chassis to accommodate the extra hardware. This technology is also only available once an upgraded license has been purchased from the RAID controller manufacturer. Both factors add cost to the server.

Is a Hybrid RAID right for me?

You need to look at the performance of your server to answer that question. Every server eventually reaches a point where one resource becomes the performance bottleneck. If you are not fully utilizing your CPU or RAM, then you may need to increase the rate at which you can access your data. Tools like “top” and “iostat” can help you figure this out on Linux; on Windows, you can check system performance in the Resource Monitor. The key point is that if your disk utilization is high while the rest of the hardware remains underutilized, there is a good chance you would benefit from faster disk access. In this case, the Hybrid RAID may be your solution.
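
If you would rather script that check on Linux than eyeball top or iostat, here is a minimal sketch that samples /proc/diskstats twice and estimates how busy each disk was over the interval. A device that stays near 100% busy while CPU and RAM sit idle points to a disk bottleneck.

    # Rough disk-utilization check on Linux, similar in spirit to `iostat -x`.
    # The 13th whitespace-separated column of /proc/diskstats (io_ticks) is the
    # total time in ms the device has spent doing I/O, so its growth over an
    # interval approximates how busy the disk is.
    import time

    def io_ticks():
        ticks = {}
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                name = parts[2]
                if name.startswith(("sd", "nvme", "vd")):  # disks and partitions
                    ticks[name] = int(parts[12])           # io time in ms
        return ticks

    interval = 5.0
    before = io_ticks()
    time.sleep(interval)
    after = io_ticks()

    for dev in sorted(after):
        busy_ms = after[dev] - before.get(dev, after[dev])
        print(f"{dev:<10} ~{100.0 * busy_ms / (interval * 1000):5.1f}% busy")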

Oftentimes you may find that, once you remove this bottleneck, you can get better overall performance with fewer servers, which helps offset the increased cost of the server hardware.

Updated: January 2019

Dec 9, 2014

1000 Servers in 48 Hours: The Outbrain Data Center Migration

inap admin

Last May, Outbrain migrated our data center from downtown New York to Secaucus, NJ. We found a strong connection between the numbers 1000, 48, 9 and 0, and we lived to tell about it.

Moving one of your main data centers, including roughly 1,000 physical servers, is no easy task. It requires extensive planning and preparation, and can directly affect the business if not done right. In order to reduce the impact on our team, we decided to do it in an accelerated timeframe of just under 5 weeks. This is the story of Outbrain’s data center migration from 111 8th Avenue, NY to Secaucus, NJ – or in other words: 1000 servers, 48 hours, 9 miles, and 0 downtime.

Outbrain is a content discovery platform that helps readers find the most interesting, relevant, and trusted content wherever they are. Through Outbrain’s content recommendations across a network of premium publishers, including CNN, ESPN, Le Monde, Fox News, The Guardian, Slate, The Telegraph, New York Post, Times of India and Sky News, brands, publishers, and marketers amplify their audience engagement by driving traffic to their content – on their site and around the web. Founded in 2006 and headquartered in New York, the company has 15 offices around the world including Israel, the UK, Australia, Japan, Singapore, and multiple European locations.

Outbrain operates from three data centers in the U.S., one of which is HorizonIQ’s Secaucus, NJ facility, serving more than 72,000 links to content every second.

On April 8th, we toured HorizonIQ’s newly-built Secaucus data center and saw how it was coming together. The new facility was impressive and very spacious, so we decided to go for it. The chosen date was May 23rd, Memorial Day Weekend, in order to provide us with the advantage of a long weekend with lower traffic on our service.

Once the date was set, the countdown began.
We broke the project into 2 phases:
1. Pre-move, which included application preparations and tests, and logistics.
2. The move, which included the physical transfer of equipment, and application setup and sync in the new location.

Application preparations and tests

At Outbrain, we have fully implemented Continuous Integration methodologies, so on any given day we have about 100 different production deployments. It was crucial for us to maintain this ability during the migration, in addition to the overall health of the system once we shut down the servers at the existing 111 8th location. We conducted numerous tests to verify that all the redundancy measures we had put in place – what we refer to as our “immune system” – would still be fully functional even after all the services in 111 8th became unavailable to our other functioning data centers. Those tests included scenarios such as a controlled disconnection of the network to 111 8th, which simulated complete unavailability.

In addition, we analyzed specific high-risk components and looked for different ways to set those up in the new Secaucus data center in advance.

The iterative process of testing and analyzing required a great deal of collaboration between the different engineering groups within Outbrain (operations, developers, etc.) – which was key to the success of the project.

Logistics

As the famous phrase goes, “God is in the details,” and this type of project included many, many details.
It started with finalizing the contractual agreement for the new space and making sure it would be ready within the extremely aggressive timeline we set: power, AC, cage build-out, planning the new space layout, taking the opportunity to prepare for projected growth, and more. We held frequent meetings with the HorizonIQ team, as we all realized that communication was key and every day counted.

Our preparation also involved bidding between different vendors to perform the heavy lifting of actually moving the 1000 servers, labeling every component (with 3 labels each, in case one fell off – redundancy), planning the server allocation into the moving trucks (you do not want all of your redundant servers on the same truck), insurance aspects, booking elevator time and docking space, and many more small details that at the end of the day made a big difference.

Move day

The time had come, and on Friday, May 23rd at 4:00 pm, we hit the button. An automated shutdown script, which we prepared in advance, managed the shutdown of all services in the desired sequence. By that time, the movers were on site, the trucks were parked in the loading docks, and the cage became quiet – no more static noise, and no more hot aisle to stand in.
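
Outbrain has not published the shutdown script itself, but the general pattern – stop services tier by tier, in a fixed order, and verify each tier is down before touching the next – can be sketched roughly as below. The tier names and the stop command are hypothetical placeholders, not Outbrain’s actual services.

    # Minimal sketch of an ordered shutdown driver. Outbrain's real script is not
    # public; the tier names and stop mechanism below are hypothetical examples.
    import subprocess

    # Stop outward-facing tiers first, stateful tiers last.
    SHUTDOWN_ORDER = [
        ["frontend-a", "frontend-b"],        # tier 1: stop accepting traffic
        ["recommendation-workers"],          # tier 2: drain processing
        ["kafka-brokers", "cassandra"],      # tier 3: stateful services last
    ]

    def stop(service):
        # Placeholder: in practice this would be an SSH, config-management or
        # service-manager call targeting the real host; `true` always succeeds.
        print(f"stopping {service}")
        return subprocess.run(["true"]).returncode == 0

    for tier in SHUTDOWN_ORDER:
        failed = [svc for svc in tier if not stop(svc)]
        if failed:
            raise SystemExit(f"aborting: {failed} did not stop cleanly")
        print(f"tier {tier} confirmed down, moving on")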

We split into two teams: the first team drove with the first loaded truck to the new Secaucus data center, where the floor was already pre-labeled with the new location of each rack; the second team remained at 111 8th and continued loading the next truck. Both teams worked in parallel to reduce the duration of the physical move.

All in all, it took 5 trucks to complete the move, and after 29 hours, we had all the equipment relocated, racked, and cabled in the new site, and we were ready to start our final and potentially most daunting phase – starting up the equipment and services.

The startup process was also done in a controlled manner, to ensure the correct startup sequence. We took advantage of the fact that Outbrain is a global company with offices in Tel Aviv, so when it was nighttime in NY, Tel Aviv was in the middle of its business day. We ran a full task force in Tel Aviv to help us make sure that services were coming up correctly and that the syncing processes were working well.

As a result of careful planning, advance testing, and more than anything, the commitment of the Outbrain and HorizonIQ teams, we began serving real-time content recommendations from our new NJ data center within 48 hours, with 0 downtime or impact to our customers. We still had the opportunity to enjoy a nice trip to the NJ coast on Memorial Day weekend (and some shopping).

Mission accomplished.

 

Dec 4, 2014

7 network technologies that will disrupt IT in 2015

INAP

2015 will be a pivotal year for the future of network technologies. Industry-wide shifts that previously seemed far away are just around the corner, and demand for higher bandwidth and improved network efficiency will require new approaches from IT networking professionals.

Below are seven predictions that service providers, hardware vendors and IT organizations need to consider in order to stay competitive in the coming year.

1. The 512k routing table problem will strike again
In 2015, many more organizations will experience network instability as millions of legacy routers hit their physical limits. This year, we saw several high-profile websites and networks knocked offline when widely deployed, older routers hit their default 512k routing table limit. The global Internet routing table – the total number of destination networks on the Internet – stands at approximately 500k routes today, is growing by around 1k routes per week, and shows no signs of slowing.
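
Taking those figures at face value, a quick back-of-the-envelope projection shows just how little headroom a 512k-limited device has left.

    # Back-of-the-envelope projection using the figures quoted above
    # (~500k routes at the end of 2014, growing by roughly 1k routes per week).
    current_routes = 500_000
    growth_per_week = 1_000
    legacy_limit = 512_000      # default route-table capacity of many older routers

    weeks_left = (legacy_limit - current_routes) / growth_per_week
    print(f"Weeks of headroom at the current growth rate: {weeks_left:.0f}")
    # -> about 12 weeks, i.e. well inside 2015 for any router still at its defaults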

2. Slow IPv6 migration
What makes additional Internet instability likely in 2015 is that companies don’t want the headache of fully migrating to IPv6, so they are trying to squeeze as much as possible out of the remaining IPv4 allocations, often by announcing smaller, more fragmented blocks of address space. This balloons the routing table and brings hardware limitations to the forefront. Some organizations have been aware of these issues, but many companies will be caught off guard in 2015, and smaller enterprises in particular could learn some very painful lessons.

3. Businesses large and small will pay more attention to network speed
Internap’s 2014 data indicates that networks only deliver data over the best path 15 percent of the time. Many large organizations in ad-tech, financial services and e-commerce industries already recognize this issue, and look for ways to improve network efficiency when a difference of milliseconds can impact their bottom lines.

In the coming year, we’ll see even more companies outside of these large organizations look for ways to increase network efficiency. For instance, as more traditional brick-and-mortar businesses leverage the web to sell their products and services, they will increasingly implement strategies and solutions that give their customers the best possible web experience – from choosing appropriate geographic locations for their data centers to using technology that optimizes data routes to ensure the fastest possible end-user delivery.

4. Service providers better be ready to handle everything from IoT…
Recently, I saw a refrigerator in Home Depot with Facebook running on its touch screen. If posting a Facebook update from your fridge is not a sign that the Internet of Things is almost here, then I have no idea what is. How long will it be before your fridge can place an order with your local grocery store when you’re running low on something?

As the Internet of Things continues to gain momentum, we’ll start seeing increased network congestion which can significantly degrade the performance of these new connected devices. Increased network efficiency will be a key factor in supporting a high-quality user experience.

5. … to massive DDoS attacks
Not only have we seen a significant increase in the frequency of DDoS attacks, but we have also seen a monumental increase in their size. A typical DDoS attack used to be a spike of a few gigabits/second, and now it’s typically HUNDREDS of gigabits/second.

That is a crushing, overwhelming amount of traffic to prepare for – the capital cost to build that amount of headroom into your network is something that only very large companies can afford. There are two main factors here: higher broadband speeds to end users, coupled with more devices connected to the Internet, which means more at-risk devices that can be used in an attack.

6. How will service providers prepare? 10G will become the new 1G
Bandwidth-crushing trends in technology, from IoT to the growing size of DDoS attacks, have put service providers in a never-ending arms race against demand that is growing faster than they can deploy hardware. These providers are trying to compensate by building bigger pipes so they’re not constantly behind the curve.

2015 will be the year that IT vendors step up to 40/100G networks and provide support for them. We’re on the cusp of large-scale deployments, and 10G is not enough anymore. Price points will drop as demand for 40/100G increases.

7. Hardware vendors will get serious about SDN
Hardware vendors have been resistant to support Software Defined Networking (SDN), since it could potentially eat into their high margins and reduce their ability to lock in customers. If vendors do choose to support SDN, it will be in ways that are vendor proprietary.

The bottom line is that many of these big hardware vendors are not offering TRUE SDN support even though users are starting to see the power and potential of SDN. 2015 could be the year where traditional networking vendors have to start making a sea change culturally to accept something they have not yet embraced.

While these predictions may not be a surprise to most IT networking professionals, making such large-scale changes is never easy. “Someday” is almost here, and now is the time to make sure your IT organization is equipped to handle these inevitable new demands.

Dec 1, 2014

Top 5 online gaming industry trends for 2015

INAP

It’s no secret that the online gaming industry has experienced rapid growth recently. By 2016, the global games market is forecast to reach $86.1B. Here at Hi-Rez Studios, the number of registered SMITE players is well over 5 million, and part of our strategy is to make sure the right technology is in place to keep up with the growing demand.

Let’s take a look at five trends that are influencing growth of the online gaming industry.

1. Free-to-play (F2P) business model
The F2P model will remain popular in 2015 and beyond. Free game distribution provides a marketing and business approach for developers and publishers to monetize games. With so many entertainment products available, F2P helps attract new gamers and allows them to try new things with zero commitment to buy. People can pay absolutely nothing and still enjoy the game, or they can choose to spend money and customize their experience by purchasing items through microtransactions during the game. The flexibility of this pricing model benefits online game publishers and consumers alike.

Developers are increasingly deploying the F2P model across all platforms, including mobile and consoles. F2P is already the most popular type of game for mobile users, and this trend will continue to grow. Next-generation consoles will also provide much more support for F2P games than in the past; for example, Sony’s PlayStation 4 and Microsoft’s Xbox One will offer more F2P options than previous generations. In 2015, SMITE will be free to play on Xbox One.

2. Games as a service
In the days of yore, games were purchased in a box from a retailer and played at home by yourself. When a new game was released, it marked the “finish line” for development and management.

Today, most games are delivered digitally and are available in a multiplayer format. Games as a Service, or cloud gaming, allows gaming companies to provide regular updates, including new content, events or options for downloadable content (DLC). This requires ongoing management and makes a new game release seem more like the “starting line” for developers to continue updating the game. The user state is saved in the cloud and available on multiple devices, allowing gamers to easily pick up where they left off. This type of game delivery requires a more sophisticated Internet infrastructure with ultra-low latency to ensure a highly available game experience for users, regardless of location or device.

3. Esports
The popularity of video games as a spectator sport has grown dramatically in the past two years. Esports fuse the multiplayer game experience with real-world physical sports, and attract large sponsors including Red Bull and Coke. Esports were originally popular in Asia, but the main enabler for their recent growth has been Twitch, which was acquired by Amazon for just under $1B. Professional gamers like Matt Haag are earning a comfortable living through competitive careers. Four game franchises have now held Esports events with a prize pool of more than $1M: League of Legends, DoTA2, Call of Duty and SMITE. The SMITE World Championship event will take place in Atlanta in January, and the prize pool has already grown to more than $1.4M.

4. Going global
The shift to digital game distribution and broader global access to the Internet have encouraged franchises to expand beyond North America. Online game developers are incorporating more diverse themes and genres into their games to attract worldwide audiences. In the case of SMITE, this includes Greek, Norse, Mayan and Chinese mythology.
Achieving global domination is easier than ever before with cloud-based technology. Online gaming companies need to plan ahead and put the right infrastructure in place to make sure their games are available to a worldwide audience – poor online performance will only lead to game abandonment. As a result of increased global reach, 2015 should see increased global revenues as well.

5. Mobile
Revenue from mobile gaming is predicted to surpass console gaming in 2015. Games are the most popular apps on smartphones, and developers need to plan for this.

Smartphones and tablets, with help from social media apps, are changing the demographics of gamers. The audience is broadening to include more young gamers and more women, including the so-called “mom gamer” segment. The flexibility of multiple devices makes games more accessible to audiences that previously were not associated with video games.

These online gaming trends will continue into 2015 and beyond. Game developers need to be prepared for these shifts, so they can benefit from opportunities that will arise from industry growth.
