
Apr 30, 2013

Improve application security, compliance with Web Application Firewall (WAF)

INAP

We recently added Web Application Firewall (WAF) to our portfolio of security technologies for our Custom and Agile Hosting platforms. A WAF is deployed to protect web-based applications or services from external malicious attacks. Unlike a network firewall, which monitors general network activity, a WAF focuses on application-level protocols such as HTTP, HTTPS, XML and SOAP to protect applications from attacks, including malicious inputs, cross-site scripting, website information scraping, path traversal, tampering with protocol or session data, business logic abuse and injection attacks.

Compliance with industry or government security requirements is one of the most common reasons organizations deploy security services such as a Web Application Firewall (WAF). For example, Section 6.6 of the Payment Card Industry Data Security Standard (PCI DSS) identifies a WAF as one way to protect public-facing applications that process credit card data.

WAFs are also deployed when an organization is unable to directly secure its application code. This can happen with legacy applications where either the source code is unavailable or the knowledge of how the application works has left the organization. Since a secure software development life cycle (SDLC) can’t fix such a problem, a bolt-on application security solution such as a WAF can provide the required protection.

WAFs should not be confused with network firewalls, although both can be part of a comprehensive intrusion detection and prevention strategy. Network firewalls are designed to protect against TCP/IP-level network attacks but are largely ineffective at protecting the application layer. Although all WAFs can be configured simply to monitor activity, most are used to block malicious requests before they reach the application; some can even return altered results to the requestor.
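To give a flavor of the application-layer inspection described above, here is a minimal, hypothetical sketch in Java; the class, method and patterns are purely illustrative, and a commercial WAF such as the one described below applies far richer rule sets, normalization and anomaly scoring.

import java.util.List;
import java.util.regex.Pattern;

public class NaiveRequestInspector {
    // Illustrative signatures only; real WAF rule sets are far larger and smarter.
    private static final List<Pattern> BLOCKLIST = List.of(
            Pattern.compile("(?i)<script\\b"),            // reflected cross-site scripting
            Pattern.compile("(?i)\\bunion\\s+select\\b"), // SQL injection
            Pattern.compile("\\.\\./")                    // path traversal
    );

    // Returns true if the query string matches any known-bad pattern.
    public static boolean isSuspicious(String queryString) {
        return BLOCKLIST.stream().anyMatch(p -> p.matcher(queryString).find());
    }

    public static void main(String[] args) {
        System.out.println(isSuspicious("id=42"));                        // false
        System.out.println(isSuspicious("q=<script>alert(1)</script>")); // true
        System.out.println(isSuspicious("file=../../etc/passwd"));       // true
    }
}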

Internap’s WAF service includes Alert Logic’s Web Security Manager™, a dedicated physical appliance with a service component. Pricing is based on the volume of application traffic, with current rate bands of 100, 250, 500 and 1,000 Mbps.

In addition to WAF, Internap provides a wide range of security and compliance services, including:

  • Vulnerability assessment
  • Intrusion detection and prevention
  • Managed network firewalls
  • Anti-virus protection
  • SSL certificates
  • Log management
  • Patch management
  • SOC 2 compliant data centers

Learn more about protecting your applications with Web Application Firewall (WAF).

Apr 28, 2013

CTF: Damo Security Challenge 8 Walkthrough

INAP

For the most part, I rarely indulge in CTF exercises due to a combination of limited free time and the fact that many of the solutions are annoyingly convoluted. The other day, someone on the reverse engineering subreddit was kind enough to post links to their web challenges and, after taking a look, one of them caught my eye, mostly because it involved a Java client component, which is something I always make time to hack on! So, without further ado, here’s a walkthrough of the solution to challenge 8: Government File Store .04. You can view the full challenge description here.

Getting Started

After downloading the client component and following the instructions, we have our client up and running and we see the below screen:

[Screenshot: the client’s login screen]

The first part of this challenge is to find the username and password required to log in. To accomplish this, we’re going to use JD-GUI as our Java decompiler of choice. After opening the .jar file in the decompiler and expanding challenge 8, we see the class “Login Panel,” which seems like a good starting point. After looking through this class, it’s pretty easy to spot the two string constants, “username,” and “password,” and sure enough, by using these values we can successfully log into the application.

[Screenshot: the decompiled Login Panel class in JD-GUI]

And after logging in…

[Screenshot: the application after logging in]

Next, we select a .txt file to upload and after doing so, the UI gives us some interesting information…

Compressing & Encrypting …done!
Uploading …done!
Members with proper clearance can now access your file via the government repositories.

Neat. So the next task is to find where the file went. After using my method (which we’ll see shortly), I realized that you could probably get at this information using Wireshark, but because I was feeling sassy, I decided to create a new project in Eclipse, import the .jar into my libraries, and extract the information using methods already provided in the code. For example, in the decompiled .jar file we see the fields uploadurl, username, and password, which are stored as encrypted byte arrays.

private static final byte[] uploadurl = { -117, 46, 126, 45, 117,
    100, -114, 47, 86, -115, -108, 52, -74,
    118, 19, -1, 1, 35, -115, 92, 28,
    -121, 82, 43, -83, 104, -4, -72, -128,
    34, -115, 37, -108, -14, -86, 63, 18,
    20, 68, -87, 6, -9, 97, 44, -109,
    27, 101 };

  private static final byte[] username = { -126, 62, 103, 52, 33,
    34, -46, 63, 69, -127, -113, 117, -89 };

  private static final byte[] password = { -126, 62, 108, 109, 119,
    114, -109, 120, 83, -120, -97, 124, -90,
    126, 20, -10, 66, 114, -39, 90, 94,
    -100 };

Within this class, we also have a cipher variable and an unCipher() method:

private static String unCipher(byte[] ciphertext)
  {
    byte[] deciphered = new byte[ciphertext.length];
    for (int i = 0; i < ciphertext.length; i++) {
      deciphered[i] = ((byte)(cipher[i] ^ ciphertext[i]));
    }

    return new String(deciphered);
  }

Even more interesting and useful to us are the three public methods getURL(), getUsername(), and getPassword(), which return the decrypted values. So, we can use some simple Java like the code below to retrieve this data:

package ghettoHax;

import com.security.challenge.eight.*;

public class Main {

    public static void main(String[] args) {
        System.out.println("URL: " + com.security.challenge.eight.Config.getURL());
        System.out.println("Username: " + com.security.challenge.eight.Config.getUsername());
        System.out.println("Password: " + com.security.challenge.eight.Config.getPassword());

    }
}

And after running our code, we receive the below output:

URL: http://damo.clanteam.com/sch8/file_upload/u.php
Username: administrator
Password: adf08923dhdfsdfg745klx

Cool, now we’re making progress. After going to http://damo.clanteam.com/sch8/file_upload/u.php and providing our credentials, we are presented with a page that simply displays “gfs:1.” If we strip off u.php and try to view the document root, we see what we’re looking for:

[Screenshot: the directory listing of uploaded files]

Oh boy, this secure government file store sure doesn’t seem very secure. Anyway, for this post, the file we uploaded is mytest.txt_1367182640000.zip, and after downloading it and trying to extract it, we’re asked for a password, and none of the credentials we have so far do the trick. I initially thought the secret was in the cipher variable and/or the unCipher() method, but after looking at these more closely, that was clearly not the solution. Back in JD-GUI, the SubmitFile class contains the method CompressAndEncrypt, which has exactly what we’re looking for.

      String password = getSHA1Hash(inputfile.getName());
      parameters.setPassword(password);

These two lines tell us everything we need to know: the password for the file is the SHA-1 hash of the file name itself, and for mytest.txt this is 7106a9c7891aadea01397158206793afe19ec369. Using this as the password allows us to open our file, but the file hasn’t been changed and doesn’t give us any new information.
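As a sanity check, that hash can be reproduced outside the client with a few lines of standalone Java (a sketch of my own, assuming, as the decompiled code suggests, that getSHA1Hash simply hashes the plain file name):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class FilenameHash {
    public static void main(String[] args) throws Exception {
        String filename = args.length > 0 ? args[0] : "mytest.txt";
        // SHA-1 over the raw bytes of the file name, hex-encoded
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest(filename.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex); // this is the zip password for that upload
    }
}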

At this point, even though we were able to open our file, we still can’t get to the leaderboard to add our name, but we do know one very important thing – how to derive the password for every other “securely” uploaded file. The file “memo_to_hof_admin.txt_1337017286000.zip” sounds pretty interesting, and based on the file naming convention, the original file name was memo_to_hof_admin.txt, so the password for this zip will be the SHA-1 hash of that filename. After downloading the zip file and providing the password “293b663b729409237c28c2ff5659b0ba22caf50b,” we have arrived at the end of our challenge.

[Screenshot: the extracted memo to the hall of fame admin]

And let’s not forget to add ourselves to the hall of fame.

[Screenshot: our entry added to the hall of fame]

Overall, the exercise wasn’t that complicated, but it was pretty fun and a nice example of the poor programming practices you often see in the wild, such as flawed crypto, hardcoded credentials, insecure transmission of sensitive data, etc.

Hats off to Damien Reilly for putting this challenge together and I hope to see similar ones in the future!

Updated: January 2019

Apr 25, 2013

Hybridize your data center, be agile and reduce IT spend

INAP



If you’re an IT Director at a growing company today, you are probably faced with the challenge of frequent requests for changes at your colocation facility. Now that business is heating up again, the folks in development operations are requesting additional servers, but you’re not sure how to justify the capital expense. While the DevOps team already has lots of projects in their queue and wants to be more nimble, they are likely more concerned with getting immediate access to new servers than with what happens to the equipment afterwards. How do you feed business growth while holding off the capital expense until there’s a solid demand to buy new servers?

One solution to this dilemma is hybridization, where the data center becomes a blend of dedicated and virtualized computing resources. With Internap, you can expand inside the data center using AgileCLOUD over a private, high-performance network called Platform Connect to meet the demand for new servers while avoiding large, up-front costs. (See these capabilities in action at Internap’s New York Metro data center.)

AgileCLOUD is an on-demand service that lets companies add and take down servers as they need them, sometimes by the hour. It’s perfect for development teams who need to test new releases on scalable infrastructure before putting the code into production. Platform Connect acts as a private network where servers communicate at wire speed over either 1GigE or 10GigE links. Once the physical network is in place and a colocation cage is connected to the private cloud, development teams can easily add and take down computing resources on demand. Because of these self-service capabilities, the corporate IT group is free to concentrate on more business-critical operations.

There are several advantages to using a service like Platform Connect with AgileCLOUD and colocation resources:

  • It’s private. Communications between physical and cloud servers take place over a secure, private network. Internap manages the connections, the network and the security point-to-point.
  • It’s fast. 1GigE and 10GigE speeds are available, providing very low latency even when servers are in different operating environments within the same data center or across two data centers in the same metropolitan area.
  • It’s cost-effective. It’s easier to spin up new servers as a service until there is a solid demand to buy new servers and install them in your colocation space.
  • It’s scalable. Business and operations can sometimes be unpredictable, so scalability—both up and down—becomes important today.
  • It’s easy. Leaping over human barriers means getting things done quickly. Rather than wait for another individual to set up additional computing infrastructure, this can be done via self-service.
  • It pays for itself. When a growing enterprise has many irons in the fire, simply setting up new servers can take an entire day. The time required from key personnel to do this, again and again, to meet the ebb and flow of development teams’ demands slowly adds up. Soon, you have to hire someone new just to handle this amount of set-up. The delays experienced by developers waiting for their servers to be brought online should also be considered.
  • It’s worry free. Management of the network and cloud servers is handled by Internap’s engineers, which frees both IT and DevOps to relax and focus on what’s important.
  • It’s flexible. Once the Platform Connect service is in place and different resources have been interconnected, cloud instances can be added and deleted as needed.

Connecting to the cloud is not new for developers. However, when flexibility, security and performance are required, the number of service providers that can meet all those demands is limited. With colocation services, cloud resources and the ability to hybridize your data center space using Platform Connect, Internap gives your development team the IT resources they need, while keeping your CFO happy at the same time.

Learn more about how Platform Connect can help you support business growth in a cost-effective way.

Apr 24, 2013

Fedora, FreeBSD, or CentOS?

INAP

Today, I’d like to explain the similarities, differences and details of some of the most popular Unix-like operating systems available, as denoted in the title of this post.

Differences and Similarities Between Fedora, FreeBSD and CentOS

To begin, let me go over some of the things they have in common. Each of these operating systems is based on an enterprise-level Unix-like foundation and is very much suited to running a web server or any other type of server. They all have the same software available to them. You can compile anything from source or install a binary package. Some examples of those packages are Apache, PHP, MySQL and Exim.

One of the major differences between the three is that FreeBSD uses a different kernel and userland (general commands and applications such as ls and shells) from CentOS and Fedora Core. The FreeBSD kernel is generally easier to recompile: there is a simple text file you can edit to change kernel modules, with the drivers grouped by type. Linux has an ncurses-based frontend for doing the same thing, also grouped by driver type. Both have many tutorials on how to do this, and both Linux and FreeBSD include good descriptions of what each driver does, which is very helpful when adding new modules. Some of these modules provide VPN functionality, firewalling and file systems.

There is also a major difference with the licensing used between the two.

FreeBSD uses the BSD license, while Linux distributions such as CentOS and Fedora use the GNU GPL. The GPL basically says you can use and modify the source code, but you must release any modifications you distribute. With the BSD license, you can take the source code, modify it and keep your changes proprietary, even selling the result. These licenses can also be used for software you write, not just operating systems.

Here is a copy of the BSD license from the FreeBSD website, http://www.freebsd.org/copyright/license.html. Here is a copy of the GNU GPL http://www.gnu.org/copyleft/gpl.html. There are many debates concerning which is better. We here at SingleHop do not have a particular preference as far as these licenses go.

FreeBSD is also a source-based operating system; the best way to patch it is to recompile from source. This helps ensure your binaries are secure. You may also wish to enable certain CPU optimizations at the same time, which can improve the performance of your server.

CentOS and Fedora are binary distributions. They release the source as well, but they also publish regular binary updates. These updates address security issues and stability issues and bring in newer versions of software. They are critical to running your server effectively: if you do not update your server, it could become unstable or, worse yet, be compromised. This is why we suggest signing up for our Security and OS updates. Updating your server is an important part of security and stability, and this applies even to our Windows Dedicated Servers. With our Security and OS updates, our expert staff handles these tasks for you.

Comparing CentOS and Fedora

There are two major differences between CentOS and Fedora. CentOS is based on an enterprise-level distribution released by a company called Red Hat. This is one of the premier Linux distributions, and many applications such as Oracle and VMware go out of their way to support it. Fedora is another distribution sponsored by Red Hat; it is where Red Hat tests out new features that may eventually make their way into Red Hat Enterprise Linux.

CentOS 4.0 was released on 03/02/2005, and this version of the operating system will be supported until 02/29/2012. That means CentOS 4.0 will receive approximately seven years of patches. Full updates for the 4.0 version ran until February 29th, 2008, so it is now in a maintenance update phase, meaning only security problems and select mission-critical fixes will be released. You may notice that a server originally installed with 4.0 has been upgraded to CentOS 4.6; this is normal and is due to several minor releases.

Fedora Core 5 was released on 3/20/2006 and was supported until 6/29/2007. This is significantly shorter than the lifespan of CentOS. Fedora does not have minor releases between major ones, shortening the lifespan of a supported install, which means you have to do a major operating system upgrade sooner. This can be a bad thing for a server: in a production environment, you want the OS to be supported as long as possible to ensure stability for your environment.

FreeBSD 6.0 Release branch was released on 11/4/2005. This will be supported until 1/31/2010. You may need to upgrade the minor release on FreeBSD from 6.0 to 6.1 to 6.2 to 6.3 for this to be true, however.

All in all, the longer your Distribution or Operating System is supported, the longer you have to go before a reinstall or major upgrade. This ensures that you will always have a stable platform for whatever applications you would like to run.

Updated: January 2019

Apr 22, 2013

What is a Hyper-Threaded Processor and Does it Matter?

Sam Bowling, Manager, Infrastructure Engineering

There is a lot of confusion these days around all the new processor technology that has come out over the last few years. Intel has released a number of different technologies such as Hyper-Threading, the Core series and VT. Now, I’m sure after reading all this terminology that Intel has coined, you’re thinking, “What does that mean for me?”

Well I am so glad you asked!

Hyper-Threading (HT) is a technology Intel released on their Pentium 4 and equivalent Xeon series of processors. When you use an HT-enabled CPU, you will notice in your system profile that it reports two processors instead of just one. While you may think, “That’s amazing! I have two processors IN ONE!”, that is not quite the case. What HT actually does is duplicate certain sections of the processor that hold the state of a thread; it does not duplicate the main execution resources of the processor.
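You can observe this from code. Here is a quick, hypothetical sketch (the class name is mine); the JVM reports the logical processor count that the operating system sees, so a single-core, HT-enabled chip shows up as two:

public class LogicalCpuCount {
    public static void main(String[] args) {
        // Reports logical processors as seen by the OS, not physical cores;
        // a single-core Pentium 4 with Hyper-Threading enabled prints 2 here.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors visible to the OS: " + logical);
    }
}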

This allows the operating system to schedule two threads at the same time. A thread is basically a stream of instructions given to the CPU that waits for results. When a program is written to be multi-threaded, the processor can in theory execute two threads at the same time, since the operating system sees two processors. This improves processing performance because a lot of the work a processor does is actually speculative: educated guesses based on statistics.

This technology is known as branch prediction. When the branch prediction is wrong, the thread being executed stalls while the mispredicted work is thrown away and re-run down the correct path. Since you now have a stalled thread and another waiting to execute, Hyper-Threading lets the processor work on the waiting thread while the stalled thread recovers.
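The cost of a mispredicted branch is easy to observe with a small, self-contained sketch of my own (timings will vary by CPU and JIT): summing only the values above a threshold runs noticeably faster once the data is sorted, because the branch becomes almost perfectly predictable.

import java.util.Arrays;
import java.util.Random;

public class BranchPredictionDemo {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(1 << 20, 0, 256).toArray();
        System.out.println("unsorted: " + time(data) + " ms");
        Arrays.sort(data); // sorted input makes the branch below highly predictable
        System.out.println("sorted:   " + time(data) + " ms");
    }

    static long time(int[] data) {
        long start = System.nanoTime();
        long sum = 0;
        for (int pass = 0; pass < 100; pass++) {
            for (int v : data) {
                if (v >= 128) { // the branch the CPU is trying to predict
                    sum += v;
                }
            }
        }
        if (sum == 42) System.out.print(""); // keep the JIT from discarding the loop
        return (System.nanoTime() - start) / 1_000_000;
    }
}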

With Intel’s newer line of processors, commonly known as the Core series, there is a whole new approach to multi-threaded processing. Rather than duplicating only a portion of the CPU, as in the Pentium 4 line, Intel now places multiple complete cores on the same CPU. You can tell the number of cores each processor has by the naming scheme.

For instance, a Core Solo has only one 32-bit core while a Core 2 Quad has four 64-bit cores. The main difference between the Core and Core 2 lines is that the Core line has only 32-bit capabilities while the Core 2 line is 64-bit. Now you’re probably thinking, “So what? Intel already has Hyper-Threading, so what good is this?” The difference is that there are actually multiple full cores on the same CPU. There is no stalling and waiting on a core when running a multi-threaded program, as threads can be passed to an entirely different core. This is great for systems that need a lot of threads executed as soon as possible. These processors also have larger caches and faster bus speeds, so they can execute instructions at a much greater pace than the Pentium 4 line.

Now that you understand cores, let me point out one cool feature of the Core series that doesn’t exist on most of the Pentium 4 line: the VT bit. This is a flag on the processor used for hardware virtualization, and it allows more efficient use of the processor when running a virtualized operating system. A virtualized operating system is one you run inside your existing one using software such as VMware or Parallels. For instance, using VMware you could load Linux onto a virtual machine and give it access to the internet, all while running Windows on your machine. To the outside world, the Linux machine would look like its own server entirely, when really it is sharing resources with your Windows machine.

With all the budding technology in processors it’s hard to keep up and understand all the terminology the manufacturers introduce. I hope I was able to explain to you some of the main technologies Intel has implemented into their processors, and plan to provide you information in the future on how you can take advantage of some of these features.

Updated: January 2019

Apr 19, 2013

What Happens When You Traceroute?

Paul Painter, Director, Solutions Engineering

This is thought to be common knowledge by most, but there are quite a few tech people out there who still do not know what happens when you trace to an IP address. And troubleshooting traceroutes can be even more complicated. We will cover the basics here.

Traceroutes and pings are how we troubleshoot the internet. They are how we see where our traffic is going, how close our game servers are, and how we quickly identify issues. Ping and traceroute programs use a protocol called ICMP (Internet Control Message Protocol). Traceroute uses this protocol because of its diagnostic and error-reporting features. These features, combined with the TTL (Time To Live) and response time, give the user a wealth of information about their packets.

The TTL is used to identify router hops as your packets travel across the Internet. Traceroute reveals the hops by setting a specific TTL: it sends the first packet out with a TTL of 1, and each router along the way decrements the TTL by one. Once the TTL reaches 0, the router responds with a TTL exceeded error message. The source host receives this error message and then knows that the device that reported it is the first router “hop” on the path. Traceroute repeats this process, increasing the TTL by 1 for each hop, to display the path. Typically, after 30 hops traceroute considers the destination unreachable and quits.
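Here is a rough, hypothetical sketch of that loop in Java (Linux-specific: it shells out to the iputils ping command, whose -t flag sets the outgoing TTL, -W the per-probe timeout and -n disables name resolution; real traceroute implementations craft their own probe packets and parse replies far more carefully):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class TtlProbe {
    public static void main(String[] args) throws Exception {
        String target = args.length > 0 ? args[0] : "8.8.8.8";
        for (int ttl = 1; ttl <= 30; ttl++) {      // increase the TTL one hop at a time
            Process p = new ProcessBuilder("ping", "-n", "-c", "1",
                    "-t", String.valueOf(ttl), "-W", "2", target)
                    .redirectErrorStream(true).start();
            String hop = null;
            boolean reached = false;
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("Time to live exceeded")) {
                        hop = line.split(" ")[1];  // "From <router-ip> ..." (naive parse)
                    } else if (line.contains("bytes from")) {
                        reached = true;            // the destination itself answered
                        hop = target;
                    }
                }
            }
            p.waitFor();
            System.out.printf("hop %2d: %s%n", ttl, hop != null ? hop : "*");
            if (reached) break;
        }
    }
}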

Traceroute Facts

You only see the direction your packets are being sent. This does not mean traffic is coming back the same way. Return traffic could be sent back through another network, and you would never know unless you ran a traceroute in the other direction.

What you see identified in the trace is the interface (port) your packets entered each router on. You are never able to see how traffic exits a specific router; you can only see how it enters the router and then enters the next one in the path. Even a traceroute in the other direction may not show you this if the packets take a different path.

There are ways to use the name of the device to identify the port type and location of each hop. This information can be crucial in identifying routing loops and suboptimal routing. More information on this can be found in the detailed presentation.

Just because there is a spike in latency does not mean there is an issue. Remember how I said the router must respond to a TTL of 0 with the exceeded error? Well, that response takes precious CPU cycles. Other, normal packets going through the router are switched in hardware, totally bypassing the CPU. So if you see a spike in latency at one hop, it is typically because the router was busy doing something else. Pretty much everything takes higher priority than generating ICMP responses, which is why you are likely to see a spike in latency from hop to hop when running a traceroute.

ICMP packets are often blocked or rate-limited, whether for security reasons or as a DDoS protection measure. So don’t be worried if you can ping the destination but the traceroute never completes. The best way to look for legitimate traceroute issues is to see a consistent, above-average latency increase across the path, in conjunction with pings sent to the destination that show packet loss or above-average latency.

At the end of the day, if you ever need to submit a ticket, we always recommend traceroutes in both directions and 100 pings. This way we will have the information we need to effectively troubleshoot the issue for you. Pings that show your baseline latency are always helpful as well. We hope this cleared up some of the questions you have had about traceroutes and pings.

Updated: January 2019

Apr 18, 2013

Not your grandmother’s online gaming

inap admin

Recently, I was invited to attend a meeting of the Georgia Game Developers Association (GGDA). It made me remember my first experience with gaming, which involved the original Nintendo game console, and I fondly recalled my grandmother saying that she’d never “waste her time on one of those things.” By the following week, she was helping Link save Zelda in “The Legend of Zelda” and was forever hooked on the iconic role-playing game.

As I listened to the panel of speakers at GGDA who had just returned from the Game Developers Conference (GDC) in San Francisco, one new piece of technology they were raving about was Sifteo Cubes. These are 1.7” cubes packed with more technology than my grandmother ever could have imagined. They communicate wirelessly with each other, and up to 12 cubes can be played at one time.

The techie in me began to think about the technology behind these innovative devices and the games that are played on them. To add games, one must download them via a desktop application and transfer them to a device using a USB cable. It made me wonder if Sifteo uses a Content Delivery Network (CDN) to support this type of download capability for their customers.

Internap’s CDN allows gaming companies to efficiently deliver content to gamers anywhere in the world at lightning-fast speeds. Using a CDN together with our Performance IP™ service creates a faster, more reliable online experience for users. The benefits are two-fold: gaming companies are able to distribute their content globally with minimal latency, and their customers are able to download games more quickly and get back to gaming sooner. I wonder what my grandmother would have to say about that.

To learn more about the technologies that are essential to game developers, check out Internap’s Online Gaming Industry Handbook.

Apr 17, 2013

Managed hosting can solve healthcare patient engagement problems

Ansley Kilgore

If you’ve spent any time interacting with the healthcare industry in the past couple of years, whether as an employee or a patient, you’ve probably noticed that IT systems are a much bigger part of the healthcare experience. The rise of EHRs (Electronic Health Records) and other IT solutions has created a major opportunity for patient engagement in the sector. However, developing patient engagement models and finding the right technology to accommodate them can be difficult, making managed hosting services important in the sector.

The rise of patient engagement strategies
EHRs create an opportunity for hospital CIOs to increase patient engagement by making more data available. When patients have access to information about their health and treatment options, they can be more engaged with their care and gain a greater understanding of how to get the help they need.

In most current care models, primary care physicians will advise patients through various chains of care, which is sometimes as simple as collecting a prescription at the pharmacy. In other instances, it can involve visiting a test lab or seeing a specialist to get additional care that the primary care doctor can’t provide.

Traditionally, patients have not had much say in how this chain of care progresses. Furthermore, the complex nature of treatment strategies and medical processes has created an environment in which physicians tend to keep patients in the dark because they cannot always provide definitive answers.

Challenges in patient engagement
While hospital CIOs can use patient engagement plans to improve patient care, doing so requires a strategic IT infrastructure to handle the large amounts of data needed. However, the solution has to be simple from an end user perspective, despite the complex web of interdependent technologies. Managed hosting makes this possible by giving CIOs a secure and cost-effective way to host patient engagement applications and data. This operational model allows hospitals to connect meaningfully with patients and provide access to data without making costly internal IT upgrades.

Learn more about managed hosting plans and how they can benefit your business by downloading our white paper, Ten Considerations When Choosing a Managed Hosting Provider.

Apr 16, 2013

Cloud and CDN: Friends or Foes?

Ansley Kilgore

Today’s online users expect high-quality, ‘anytime, anywhere’ access from a multitude of devices. This presents a challenge for content providers, who must deliver many types of content, including streaming media, Video on Demand (VOD) and other large files, to tablets and smartphones, all while maintaining the high-quality online experience that users have come to expect. Content Delivery Networks (CDNs) are often used as an efficient way to distribute large amounts of content in this manner.

But with the growth of cloud computing, companies have embraced new, cost-effective approaches to IT Infrastructure. Scaling is no longer prohibitively expensive, and the ability to scale virtually, on demand, has leveled the playing field, allowing small- and medium-sized businesses to compete with large enterprises for market share. The substantial performance and cost improvements provided by the cloud often lead to the misconception that the cloud alone can maintain the high-quality online experience that consumers demand.

In reality, the cloud and CDNs have specific purposes that make them well-suited to work together.

The cloud is a utility computing platform consisting of large physical stacks of computational resources, or multi-tenant slices of a pre-built mass computational array. This type of dynamic computing power is ideal for processing big data and business intelligence problems, and it evolved from the mainframe concept of past decades.

CDNs are utility delivery platforms that specialize in one-to-many distribution as opposed to the two-way interactive exchange performed by utility computing platforms. In contrast to the cloud, CDNs are designed specifically to deliver content from servers to the end-users as part of a repeatable process.

High-performance content delivery is a must for websites or online applications serving geographically-dispersed end users. Using the cloud and a CDN together creates a holistic system that meets the demands for content delivery as well as economical computing power. This best-of-both-worlds combination results in an optimal online user experience when incorporated into your IT Infrastructure strategy.

To learn more about the specific purposes and benefits of CDNs and the cloud and how these two platforms work together to meet the content delivery needs of today’s online users, download our white paper, CDN: A Cloud Accelerant.

Apr 11, 2013

F1 returns to China, fans rejoice

Ansley Kilgore

It has been three agonizingly long weeks for Formula One fans as we anxiously await this weekend’s Chinese Grand Prix. Long breaks between race weekends are never any fun for fans, but this break has been particularly maddening. Why? Because we are only two races into the season and both races so far have been all kinds of awesome to say the least.

The season opened with a surprise winner of the Australian Grand Prix – Kimi Räikkönen, who became the focus of much attention last season with his now infamous retort, “leave me alone, I know what I’m doing!” A fellow F1 fan even sent me a button with the phrase emblazoned on it. Well, apparently Kimi knew exactly what he was doing in Australia. We barely had time to catch our breath and then it was on to the Malaysian Grand Prix where utter chaos ensued in the pits and a new catch phrase was born. Welcome “multi-21” to the F1 vernacular.

So how does a fan half-way across the world keep up? Well, it’s all about the F1 digital ecosystem where online information and updates are rampant, opinions and analysis are readily available and fans are whipped into a frenzy – at least this fan is, anyway.

Our friends over at Sahara Force India (SFI) certainly understand that engaging their global fan base is key, which means their website is of utmost importance. At Internap, we get to be part of the F1 action through our relationship with SFI. We provide them with an online performance package for their website that includes Managed Hosting, Performance IP™, and a Content Delivery Network (CDN). With the support of Internap’s IT Infrastructure services, SFI can engage its global fan base via www.saharaforceindiaf1.com with videos, photos, blogs and much more.

So the next time you are scouring the web and social media sites for the information you crave, remember that your online experience is powered by some serious IT muscle. The right technology and infrastructure need to be in place so that you can interact with your favorite F1 teams and drivers and become part of the virtual global fan base.

How will I get the inside scoop from the pitlane and paddock this race weekend? Check out Force India on Twitter, Facebook and of course at www.saharaforceindiaf1.com.

To learn more about Internap’s partnership with Sahara Force India, watch our video.
