Month: July 2013
The cloud hasn’t always been the ideal choice for large, resource-intensive workloads. Scenarios that require high disk I/O and large amounts of compute are generally better suited to physical, dedicated servers. While the automation and self-service capabilities of the cloud are valuable, the virtualization layer consumes resources that could otherwise go toward processing your workload. The bare-metal cloud offers a best-of-both-worlds approach: a dedicated environment without the overhead of virtualization, and without sacrificing the flexibility of the cloud.
Bare-metal servers do not run a hypervisor and are not virtualized, yet they can be delivered on a cloud-like service model. This balances the scalability of the cloud with the performance of a monthly dedicated server hosting plan. The hardware is fully dedicated to the customer, including any additional storage that may be required. Bare-metal instances can be provisioned and decommissioned as needed, providing access to high-performance dedicated servers on demand. Depending on the application and use case, a single bare-metal server can support a greater workload than multiple similarly sized VMs.
Bare-metal servers work well for high-performance use cases such as media encoding and render farms: workloads that need short bursts of data-intensive processing without the latency or overhead of a virtualization layer.
Use Case: Media Transcoding
Many sites with user-generated content need media transcoding before publishing to a Content Delivery Network (CDN) origin store. When a user uploads a video, the site needs to transcode the file into a common format viewable by site visitors. Transcoding software for both audio and video is processor-intensive, and if it runs on the same machine as the web server or in a multi-tenant environment, performance may suffer. A site can send the original file to a remote server for processing so that dedicated resources are available for the task, but a traditional monthly dedicated server may sit idle during slow traffic periods or be overextended during peaks. Bare-metal cloud lets this site take advantage of cloud-like, on-demand provisioning while maintaining the dedicated processing resources of a physical server.
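As a rough sketch of what the offloaded transcoding step might look like, the snippet below converts an uploaded video to an H.264/AAC MP4 suitable for a CDN origin store. It assumes ffmpeg is installed on the transcoding node; the paths and encoding settings are illustrative, not any particular site's pipeline.

```python
import subprocess
from pathlib import Path

def transcode_to_mp4(source: str, output_dir: str) -> Path:
    """Transcode an uploaded video to H.264/AAC MP4 for web playback."""
    out = Path(output_dir) / (Path(source).stem + ".mp4")
    cmd = [
        "ffmpeg", "-y",
        "-i", source,                                        # original upload
        "-c:v", "libx264", "-preset", "fast", "-crf", "23",  # video: H.264
        "-c:a", "aac", "-b:a", "128k",                       # audio: AAC
        str(out),
    ]
    subprocess.run(cmd, check=True)  # CPU-bound step: benefits from dedicated cores
    return out

# e.g. transcode_to_mp4("/uploads/user123/clip.mov", "/srv/cdn-origin/videos")
```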
Use Case: Render Farm
Many commercial 3D animation and CAD packages support a “render farm” mode, where a regular desktop workstation can be turned into a node in a rendering cluster. Animation companies have used this to develop original media assets during the day and turn their workstations into a rendering cluster after hours to process their files. With bare-metal cloud, a designer could maintain a single always-on “master” node to submit rendering jobs. This master node would then interact with other hardware nodes to process the individual frames that need to be rendered. These hardware nodes could be provisioned by design staff as needed for large or small jobs, or the master node software could be adapted to provision additional instances on its own through a provisioning API, as sketched below.
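To make that last idea concrete, here is a minimal sketch of a master node provisioning and releasing render nodes through a REST-style provisioning API. The endpoint, payload fields, and hardware profile name are hypothetical placeholders, not any specific provider's actual API.

```python
import requests

API = "https://api.example-provider.com/v1"        # hypothetical provisioning endpoint
HEADERS = {"Authorization": "Bearer <api-key>"}    # placeholder credentials

def provision_render_nodes(count: int, profile: str = "render-16core") -> list[str]:
    """Spin up `count` bare-metal render nodes and return their IDs."""
    nodes = []
    for _ in range(count):
        resp = requests.post(
            f"{API}/instances",
            json={"profile": profile, "image": "render-node"},  # assumed request fields
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        nodes.append(resp.json()["id"])                          # assumed response shape
    return nodes

def decommission(node_ids: list[str]) -> None:
    """Release the nodes once the render queue is empty."""
    for node_id in node_ids:
        requests.delete(f"{API}/instances/{node_id}", headers=HEADERS, timeout=30).raise_for_status()
```

The master node could call provision_render_nodes() when the job queue grows and decommission() when it drains, paying for the extra hardware only while frames are actually rendering.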
By combining non-virtualized, physical resources with the service delivery and automation capabilities of the cloud, bare-metal servers can focus all of their computing power on your use case. This increases your operational efficiency and provides better performance for a smaller investment.
Big data is increasingly important across all industries, and C-level executives are placing more demands on their IT departments to provide the necessary data and analytics. Collecting vast amounts of information and drawing meaningful conclusions creates opportunities for improved business agility, visibility, and most importantly, profits. Having such valuable information allows business leaders to make more informed, real-time decisions and gain competitive advantage.
While big data may be part of an overall corporate strategy, businesses also need a technology strategy to meet the demands of this new challenge. Hint: it requires more than purchasing additional data storage.
In this webinar, guest speaker Brian Hopkins, Principal Analyst at Forrester Research, Inc., will present his latest research aimed at helping you get past the nonsense, understand what big data is all about, and leverage the concepts to lower cost, increase agility and deliver more data and analytics to your business.
Attend this webinar to learn:
- What is big data, really?
- How does big data relate to an overall data management strategy?
- What are the architectural components of solutions that exploit big data concepts?
- What is a real-world example of addressing a big data challenge?
- What should you do with this information right now?
Amazon price cuts still no match for bare-metal cloud economics and performance
At first glance, the recent 80% price cut announced by Amazon Web Services seems like a huge savings. But in reality, Amazon has simply reduced a cost that customers shouldn’t be paying in the first place.
The 80% reduction refers to a privilege fee that has been cut from $10 per hour to $2 per hour. As a result, the monthly cost just to gain access to any dedicated instances has dropped from $7,200 to $1,440. In addition, this fee is charged per region, which means companies that require cloud instances in more than one region will pay it in each.
In order for Amazon’s privilege fee to be effectively amortized, customers would need to purchase 100 or more instances. That scale is beyond the budget of most small- and medium-sized businesses, and it doesn’t make sense even for enterprises that can afford it, as the quick math below suggests. A true bare-metal cloud offers better performance, more customization and dedicated servers (including storage) for less money than Amazon’s dedicated instances.
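A quick back-of-the-envelope check (using the reduced $2/hour fee and the 720-hour month behind the $1,440 figure above) shows how slowly that fee amortizes as instance counts grow:

```python
# Per-region dedicated-instance fee spread across different instance counts.
FEE_PER_HOUR = 2          # USD, after the 80% reduction
HOURS_PER_MONTH = 720     # 30-day month, matching the $1,440 figure above

monthly_fee = FEE_PER_HOUR * HOURS_PER_MONTH          # $1,440 per region
for instances in (1, 10, 100):
    overhead = monthly_fee / instances
    print(f"{instances:>3} instances: ${overhead:,.2f} fee overhead per instance per month")
# A single instance carries the full $1,440; even at 100 instances, each still carries $14.40.
```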
Price performance
With fully dedicated hardware and no enforced virtualization layer, bare-metal servers can focus all of their computing power on the needs of your workload. This is especially important for smaller use cases that need to get the most out of their investment: bare-metal cloud offers better performance at a lower price. Amazon’s privilege fee alone could buy at least four bare-metal servers with compute specs comparable to Amazon’s second-generation extra-large instances. Even setting the fee aside, bare-metal servers offer better performance and still cost 40% less than comparably specced Amazon on-demand dedicated instances. With bare-metal cloud, customers save upfront and on an ongoing basis.
Customization
Bare-metal servers don’t run a hypervisor and are not virtualized out of the gate. This gives the customer more freedom for customization – a “blank slate” – as opposed to the take-what-you-get approach when provisioning an Amazon EC2 instance. When you spin up a bare-metal server, you can provision dedicated hardware in minutes, customize as required (even additional virtualization) and then decommission once your workload is complete.
Dedicated instances
For customers that require dedicated resources with hardware separation for compliance or performance reasons, bare-metal cloud is hands-down the best choice. Bare-metal servers are fully dedicated to the customer, and any additional storage is directly attached and dedicated as well. Amazon’s dedicated instances offer a physical hardware-level separation, but only at the compute layer – everything beyond this is shared.
It’s also important to note that dedicated servers in a managed hosting model are not the same as bare-metal servers offered in a cloud model. A true bare-metal cloud server is a non-virtualized, physical resource with the service delivery and automation capabilities of the cloud. Rackspace, for example, touted their managed hosting offering as an alternative to Amazon’s dedicated instances, but this isn’t a valid comparison. Choosing managed hosting to get dedicated servers is a false choice. There is a better answer between expensive Amazon dedicated instances and managed hosting. Bare-metal cloud fills this gap well by providing better price/performance with all the agility of cloud.
At Internap, we don’t force customers into false choices. We allow them to self-select into virtual public cloud, bare-metal cloud and managed hosting based on their workload needs. As for the issue at hand, if you’re paying for or considering Amazon’s dedicated instances, contact Internap today to find out how far your $1440 privilege fee can really go with our bare-metal cloud.
By default, most computers are left vulnerable to attack the moment they are attached to a network, because they have either an inadequate firewall or no firewall configured at all. In many situations this is true of both servers and your own desktop at home.
It is for this reason that I am often asked the question “How good is an operating system-based firewall?” The answer is different for each situation.
In the home situation, an operating system firewall can serve as your primary firewall (for example, when you do not have a router between your DSL, cable, or FiOS modem and your computer), provided you have configured it correctly (or at least allow the operating system to manage it for you).
In a dedicated server hosting environment, however, it is extremely important to have a firewall of some type enabled and to manually configure it to filter your incoming traffic.
The operating system firewall is, in most cases, a sufficient way to protect your server from being attacked. The Windows firewall service lets you designate which applications are allowed to connect to the Internet and configure which services should be accessible from the Internet.
What are the Benefits of Software Firewalls?
The Advanced Firewall in Windows Server 2008 and the Linux firewall (ipchains on the older 2.2.x kernels, iptables on 2.4.x and later) both allow very advanced filtering, matching traffic coming into and going out of the server against a set of pre-configured rules. The Windows Advanced Firewall provides an easy-to-use interface, whereas the Linux firewall requires either a third-party application to manage it or extensive knowledge of its command-line options.
Of the third-party options for Linux, CSF (ConfigServer Firewall), Firehol, and APF allow for a much easier configuration scheme, and CSF even provides a web-based GUI when installed on a server running cPanel/WHM.
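For a sense of what configuring the Linux firewall by hand involves, here is a minimal sketch of a default-deny inbound policy, wrapped in a small Python script for repeatability. The open ports (SSH and HTTP) are assumptions and should be adjusted to match the services your server actually runs.

```python
import subprocess

# Allow loopback, established connections, SSH and HTTP; drop all other inbound traffic.
RULES = [
    "iptables -F INPUT",                                                  # start from a clean chain
    "iptables -A INPUT -i lo -j ACCEPT",                                  # loopback traffic
    "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",   # replies to outbound traffic
    "iptables -A INPUT -p tcp --dport 22 -j ACCEPT",                      # SSH (assumed)
    "iptables -A INPUT -p tcp --dport 80 -j ACCEPT",                      # HTTP (assumed)
    "iptables -P INPUT DROP",                                             # default-deny last, so an active SSH session isn't cut off
]

for rule in RULES:
    subprocess.run(rule.split(), check=True)  # must be run as root
```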
What are the Downsides to Software Firewalls?
However, there are some drawbacks to using a software-based firewall. For starters, the firewall runs on the server itself and may be unable to handle the amount of traffic that busier networks can bring to the server.
In the case of web hosting, this can be detrimental to your server’s ability to handle incoming traffic, because the server must spend time filtering each connection through the firewall before it can service the request itself.
If your operating system firewall is misconfigured, it can leave your server completely inaccessible or even more vulnerable to attack. In theory, a malicious user could exploit a website hosted on your server to gain access to an administrator’s account and then modify your firewall to let themselves through. In cases like these, a hardware firewall can prove much more reliable and secure than its software equivalent.
What are the Benefits of a Hardware Firewall?
For starters, a hardware firewall takes the work of processing firewall rules, blocking and permitting traffic, and logging off of your servers (logs can be sent to a syslog server behind the firewall or kept in the appliance’s own memory).
Hardware firewalls tend to be much more robust in their ability to block certain types of traffic and tend to play more nicely with other Internet protocols. They also give you a single point of control for filtering the traffic on your network.
Hardware firewalls can handle the traffic for multiple servers and can differentiate between traffic that is allowed to one server but not to another. A software firewall cannot be managed as easily in an enterprise-level deployment without extensive scripting or, on a Windows-based network, a centralized domain controller.
The Case for Cisco PIX or ASA Firewalls
In the case of the Cisco PIX or ASA firewalls, they can provide a 1-to-1 NAT-based firewall solution, where machines behind the firewall have internal IP addresses, and the external interface of the firewall is configured with the individual globally accessible IP addresses forwarding to the appropriate internal IP address. This can further secure your network by obfuscating your internal network’s topology, making it harder for an intruder to map out your network and plan his or her attack.
Furthermore, the Cisco ASA series of firewall appliances can even provide transparent firewall functionality, which operates on a level similar to a software-based firewall in that you do not have to configure your protected machines with an internal IP – the appliance will filter the traffic transparently and allow what you define as acceptable traffic through and reject or silently drop the rest of the traffic.
Another benefit of using a hardware firewall is the proven access control around managing the appliance itself. The Cisco firewall appliances come with a web-based GUI as well as software that can be installed on any machine you wish to designate as the management terminal for your firewall.
Lastly, many firewall appliances can also act as a VPN access point, providing secure access to the internal network and taking yet another load off of the servers inside.
Which Firewall is Right For Me?
All in all, your best option will depend on the individual needs of your dedicated hosting solution. If you are an end-user without much experience in securing a server for use on the Internet, or the administrator of a relatively low-traffic website, a software-based firewall may be the right choice for you.
But if you are an experienced system administrator of a network of 2 or more machines with high-security requirements for your network, a hardware firewall would be the best solution for your situation.
For added security, you can implement both solutions and have something to fall back on in the event that a personal computer accessing your VPN connection gets compromised and tries to infect the servers behind your firewall.
Language is important for communicating with others and navigating your way in life – whether it is around a city, medical situation, or data center. To express your requirements or evaluate services, it helps to share a common understanding of terminology. Here is a sample of key terms that speak to your data center needs and highlight Internap’s approach within our data centers.
Uptime – Measure of availability. Internap offers a 100% uptime Service Level Agreement (SLA) guarantee for IT infrastructure and IP.
Uninterruptible Power Supply (UPS) – Backup power supply that, in case of power failure or fluctuations, allows enough time for an orderly shutdown. Internap offers properly sized units that maintain operations until generator power kicks in in the event of power loss, supporting our 100% uptime SLA.
Computer Room Air Conditioning Unit (CRAC Unit) – Monitors and maintains the temperature, air distribution and humidity in the data center. State-of-the-art units control the ambient temperature in the room based on ASHRAE industry standards.
Closed Circuit Television (CCTV) – Surveillance cameras to view and record activity. Rigorous security is maintained in data centers 24/7 through strategically placed cameras, HID card access, biometric scanners, and on-site security personnel.
Cabinets/Racks – Physical units to house customer devices in a data center. Locking cabinets and racks with scalable power can safely host valuable equipment.
Cages – Mesh-enclosed areas that are occupied by a single customer. Internap provides an optimum, cost-effective area that is calculated based on your specific requirements.
Scalable Power Density – Power in kW per unit area. Scalable power density of up to 18 kW per cabinet is available, allowing for cost-effective, flexible solutions.
SOC 2 – An independent, professional audit of security, availability, processing integrity, privacy and confidentiality. Data centers with SOC 2 certification offer stringent control systems that safeguard customer data and resources.
To discuss your specific data center requirements and learn how to customize the best colocation, managed hosting, or cloud solution for your company, contact Internap today.
I recently had the pleasure of attending one of the most anticipated F1 races of the year: the British Grand Prix at Silverstone. As anyone passionate about Formula One knows, this is where it all started. The first world championship race was held at this circuit in 1950, so there was definitely a sense of excitement for my first European race and the mystique that surrounds Silverstone. To make it extra special, we hosted customers on the lawn of Sahara Force India’s headquarters – practically right on the circuit – at their annual partner event. Will Buxton and Jason Swales from NBC Sports gave us some media muscle and helped document the festivities.
Watch four of our customers discuss their impressions of the Grand Prix weekend, the race and their partnership with Internap.
Bango
CCP Games
Onavo
Sahara Force India
Not so long ago, I would head to my friend’s house after school to play Oregon Trail on her desktop computer. Back in those days, you couldn’t just download apps and games from the Internet, because there was no Internet. There were no smartphones, and handheld tablet devices hadn’t even crossed our minds. We loaded our games onto the computer from multiple floppy disks.
Fast-forward to today. I am surrounded by four computing devices at any given time, all of which are capable of accessing the world at lightning speeds. It’s easy to get excited about these technologies, but what I find most fascinating is what goes on “behind-the-scenes.”
Recently, while studying network reference models, I learned about four different standards bodies that govern the way we experience the Internet, allowing what happens on our devices to be a seamless, magical experience.
A journey down the network highway
Our first stop is the International Telecommunication Union (ITU), headquartered in Geneva, Switzerland. This organization was established in 1865 and created what are known as letter standards. Examples include ADSL (Asymmetric Digital Subscriber Line), which enables a faster connection over copper telephone lines, and MPEG-4, which allows us to enjoy audio and video content.
The next stop is the Institute of Electrical and Electronics Engineers (IEEE). This group traces its roots to 1884, when a few electrical professionals in New York founded its predecessor. The IEEE created the numbered 802 standards that govern how we access modern networks. Some familiar protocols are 802.3 (Ethernet) and 802.11 (Wi-Fi). Simply put, my neighborhood coffee shop without 802.11 would be like enjoying my coffee without cream and sugar. Thanks, IEEE!
We continue our journey to our next destination, the Internet Engineering Task Force (IETF). This stop takes us to the west coast, in California, where the RFC (Request for Comments) series was created. These standards govern how we reach content on the Internet. Some familiar protocols developed here are RFC 2616, or HTTP (Hypertext Transfer Protocol), and RFCs 1034/1035, better known as DNS (Domain Name System).
Our last stop on this network field trip is the W3C, or World Wide Web Consortium. This organization was founded in 1994 (right about the time I stopped playing Oregon Trail) by Tim Berners-Lee at the Massachusetts Institute of Technology. The W3C created familiar standards such as HTML5 (the fifth iteration of Hypertext Markup Language), which allows us to experience multimedia content like never before, and CSS (Cascading Style Sheets), which lets us manage and enjoy web pages in a more beautiful way.
Now that you’re in acronym overload, I hope you have a better understanding of how our modern Internet became what it is today. I guess I’ll use those floppy disks for drink coasters while I download the latest app to my tablet.
More than technology: Startup communities cultivate art, entrepreneurs
The Atlantic Cities posted an article last month on the leading metro areas for venture capital, based on the number of deals and the dollar amount of each deal. Earlier this year, Mashable.com posted a similar list from the National Venture Capital Association. A spokesperson from the NVCA stated that these cities “tend to have strong universities where technology is developed and research is completed and commercialized.” Both lists are a great read for any aspiring entrepreneur looking to relocate from SmallTown, USA to a larger metropolitan area, or even for someone who wants to validate that their city is the right place to be.
But universities crank out more than just tech research. Many universities in the same cities graduate aspiring artists and musicians who, in turn, help drive the creative scene and cultural landscape of the city. The Jail Break reported on Artbistro (a division of Monster.com), which posted a list of the top 25 cities for artists and designers in 2010. CNBC followed up in 2011 with their list (slideshow) of the top ten cities for “young people”, with criteria focusing on art, culture, cost of living and employment.
The creative desire spurring technological innovation is similar to the inspiration that creative artists have when producing groundbreaking art. So it’s no surprise that many of the cities with the highest number of startups and venture funding are also cities with a strong arts community. Some of the cities that showed up on all four lists are Boston (in which Internap is sponsoring an event with GigaOM on Thursday), as well as Austin and Seattle. Internap’s home city of Atlanta made it to three of the lists, as did several California cities like San Francisco, San Jose, and Los Angeles (the list for “young people” assumes they want a lower cost of living, which isn’t always a reality in the larger urban hives). A few cities with large surrounding metropolitan areas made it to all four lists if you include their immediate neighbors, like Denver/Boulder and New York City/Newark.
Correlations between data like this aren’t new, of course, and there’s no shortage of interpretations looking for cause-and-effect. Naturally, I’ll throw in my opinion as well: Startups don’t just thrive where there’s VC money. Startups are usually founded by neophiles who have passion and zeal around an idea and want to create something to share with the world (just like artists do). They tend to thrive in cities that have a flourishing arts community, because, like artists, entrepreneurs tend to focus on the new, the exciting and the interesting. They all start from the same creative impulses; while one group moves towards the arts, the other moves toward technology and business. These groups continue to intermingle, creating a symbiotic culture that defines and redefines the city. Venture capital will go looking for the best and the brightest to get the best returns on funding opportunities, leading them to the same cities where artists and entrepreneurs already live.
Whether you’re an artistic entrepreneur or an entrepreneurial artist, you may want to check out events centered around the startup community. Internap is sponsoring a few of these events in Boston and Atlanta. If you’re in the area, we highly encourage you to attend.
GigaOM Boston – July 11, 2013
GigaOM Atlanta – July 17, 2013
TechZulu Final Friday – July 26, 2013
Venture Atlanta – October 22-23, 2013
If you’re interested in the source data for the correlations above, here is the Excel file of the lists compiled from the linked articles. Feel free to manipulate the data and draw your own correlations, and share your comments below.
In our previous post, we described IaaS (Infrastructure as a Service) and PaaS (Platform as a Service). Today we’ll discuss how Software as a Service (SaaS) fits into the stack and explore which type of infrastructure makes sense in different scenarios. Various combinations of these services can help meet your needs for flexibility and resource control so you can manage and create applications as desired.
SaaS
Most SaaS providers deliver end-user applications on-demand, as a service, over the Internet. Salesforce.com and Google Apps are two common examples of this. However, many SaaS applications can provide services to other applications, usually via a SOAP or RESTful API over the Internet. A good example of this is how Yelp integrates Google Maps into its mobile application – Google is the SaaS provider, and the Yelp application becomes the consumer of the software service.
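To make the application-to-application case concrete, here is a minimal sketch of one service consuming another over a RESTful API. The geocoding endpoint, parameters, and response fields below are hypothetical placeholders rather than any particular provider's actual interface.

```python
import requests

def geocode(address: str) -> tuple[float, float]:
    """Resolve a street address to coordinates via a third-party SaaS API."""
    resp = requests.get(
        "https://maps.example-saas.com/v1/geocode",       # hypothetical endpoint
        params={"address": address, "key": "<api-key>"},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]                    # assumed response shape
    return result["lat"], result["lng"]

# A listings or review application could call geocode() when content is created,
# consuming the mapping provider as a building block instead of building its own.
```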
A SaaS provider may build on a PaaS provider or an IaaS provider, depending on how it wants to balance development effort against operational effort.
Here’s an illustration of how an application may interact with another SaaS provider:
As you can see, the application now interacts with other applications for complete functionality. Just as a developer can increase speed-to-launch by using a PaaS provider (at the expense of flexibility and customization), this developer can use existing third-party SaaS applications without having to recreate each piece of functionality desired.
Ultimately, the more you move your application “up the stack”, the fewer resources you are required to manage. Integrating an application with SaaS providers allows a developer to consume SaaS services as components and building blocks without having to reinvent the wheel (e.g., using Google Maps for location services, using RSS aggregators for content/news services, etc.).
Using a PaaS provider will provide a framework for you to develop your own application without having to manage an infrastructure. And if the PaaS is too limiting – say, you want to use certain web server features not provided by the platform, or you want to use a preferred development framework – then IaaS may make more sense.
As indicated in the final illustration below, the further down the stack, the more lower-level components a developer will need to recreate and maintain.
There is plenty of opportunity for “mix and match”. No one service is correct for all use cases, and I can certainly see distributed applications consuming one or more portions of the stack on multiple cloud service providers.
The decision to go with one or another ultimately comes down to a few considerations:
- The resources the team wants to manage.
- The capabilities of the development (and operations) teams building and supporting the application.
- How quickly the team needs to deploy production software with the resources available.
For game developers and publishers, launching a new game into the market and creating loyal fans takes a lot of work. Whether you are an established gaming company or a new publisher entering the highly-competitive marketplace, having the right IT Infrastructure in place is critical. Your game must deliver the availability, performance and scale that online gamers have come to expect on their digital quests. Internap solutions help set game developers up for success by providing the right environment for testing, development and deployment of online games.
High-performance cloud services
When the Massachusetts Digital Games Institute (MassDigi) needed a server to host their new online game, Internap’s AgileCLOUD provided a cost-effective way to spin up virtual servers that could support their development and collaboration needs. MassDigi was able to continuously test their online game with live users, and scale dynamically up to thousands of players as needed. With cloud hosting solutions, game developers have more flexibility to test, develop and deploy without worrying about the limitations of technology.
Performance
When introducing new games into the market, speed and performance are key aspects of high-quality game delivery. No matter how awesome your game may be, users will abandon it if there is too much latency. With route-optimized Performance IP™, Infinite Game Publishing (IGP) can provide gamers with a flawless online experience during the initial game launch and beyond. Meeting the expectations of online gamers is the first step to creating a loyal customer base. This is especially important in the free-to-play revenue model, which is less predictable than a subscription-based model.
Hybrid infrastructure
Gaming companies that use Internap services have the ability to mix and match different infrastructure offerings, including public and private cloud, bare-metal, managed hosting and colocation. As the gaming industry continues to grow and more businesses enter the market, the successful gaming publishers will be those who can seamlessly deliver their game to end users with low latency, high availability and high performance. The ability to establish and maintain your competitive edge depends on having the right gaming infrastructure in place.