
Sep 27, 2013

Webinar recap: The nuts and bolts of bare-metal clouds

Ansley Kilgore

Bare-metal cloud is emerging as the next evolution of cloud computing. More workloads and use cases demand high-performance processing power, and in many cases these requirements are best served by dedicated, physical infrastructure. Bare-metal cloud offers a cost-effective way to get full server performance with no virtualization penalty.

Recently, Internap and GigaOM Research discussed the nuts and bolts of bare-metal clouds (watch the webinar here). As enterprises continue to embrace the cloud, bare metal is quickly becoming an economical way to meet high performance demands, making it an essential piece in the future of cloud solutions.

Price for performance – The bare-metal cloud's usage-based pricing model and automation capabilities offer upfront and long-term savings, along with substantial performance gains compared to traditional public clouds. For workloads or databases with a low tolerance for the noisy neighbors and oversubscription found in multi-tenant environments, the bare-metal alternative gives you the full compute and processing power of a dedicated server. Bare-metal cloud is a natural choice for performance-sensitive use cases such as big data and media encoding.

Hybridization – For organizations just starting out, the ability to use bare-metal cloud together with virtualized resources is a key driver. This approach lets you get the most from your existing infrastructure while seamlessly adding new performance capabilities. Connecting your cloud environment with other services such as colocation offers a higher degree of control and flexibility that physical infrastructure alone cannot provide. Hybridization also allows you to use the cloud for bursting needs, which is a model used by many Internap customers.

Innovation – Moving forward, IT departments will need the scalability, storage and security advantages of bare-metal cloud in order to meet requirements from the lines of business. As more cloud providers start offering bare metal capabilities to meet this demand, customers should expect further innovations around automation and flexibility. Within the next few years, bare metal will become mainstream and concerns about data control and compliance will no longer justify buying your own hardware. Many companies have already incorporated bare metal into their cloud strategy.

As more enterprises move workloads into the cloud, bare metal should be considered part of a successful IT Infrastructure strategy. Is your current cloud service meeting your requirements from a performance, security and cost standpoint?

Learn more about the Nuts and Bolts of Bare-Metal Clouds.

Explore HorizonIQ
Bare Metal


About Author

Ansley Kilgore

Read More
Sep 24, 2013

Ten innovative features that your data center can’t live without


Are you looking for a data center provider that can meet the needs of your business as you grow and scale? Cutting-edge data centers provide state-of-the-art features that can help you maximize performance, lower costs and increase efficiency.

Internap’s New York Metro Data Center offers some of the most advanced features available. The list below includes ten features to look for when evaluating innovative data center and colocation space.

  1. Wide breadth of services to build the best fit for your needs
    The NY Metro Data Center offers a flexible menu of Cloud, Hosting, Colocation and Hybrid infrastructure services to help you build the optimal environment for your particular needs.
  2. High-density power allows you to scale in place
    With power configurations up to 18kW per rack, you can simply add power as you grow – instead of adding a new physical footprint – and avoid incremental equipment costs, installation fees and hassle.
  3. Customer portal gives you a single-pane-of-glass view into your infrastructure
    View and manage your colocation environment remotely with our customer portal. Reboot servers, check power utilization and environmentals, monitor bandwidth and hybridize your infrastructure all via a single interface.
  4. Hybridize your environment easily within the same data center
    Seamlessly connect your colocation, managed hosting and cloud environments to maximize performance and lower bandwidth costs.
  5. Concurrent maintainability maximizes uptime and performance
    More than just N+1, industry-leading data centers are designed for concurrent maintainability of electrical and mechanical systems, resulting in maximum uptime and performance for your applications.
  6. Performance IP™ service enables optimized connectivity
    Internap’s Performance IP™ service provides optimized connectivity at speeds up to 10 Gbps through seven top-tier carriers. In addition, our facilities offer a robust carrier Meet-Me-Room, including alternative transit and local access options.
  7. Energy-efficient design supports a “green” working environment
    Designed to achieve LEED certification, our facility’s efficient design components include outdoor air economizers, variable frequency drives on chillers and pumps and cold aisle/hot aisle containment zones.
  8. State-of-the-art security features ensure your equipment is highly protected
    Our facility’s comprehensive security measures include 24/7 on-site security personnel, closed-circuit television, as well as biometric, electronic and key card locks.
  9. On-site experts proactively monitor facility performance
    Our experienced team of data center engineers and technical support staff are on-site 24/7, proactively monitoring facility performance and resolving issues before they impact your business.
  10. New York Metro location offers convenience and accessibility
    Located in Secaucus, N.J., our facility is within easy access of the bridges and tunnels into Manhattan, the NJ Turnpike, the Secaucus Junction train station and the New York metropolitan area’s three international airports.

Learn more about the New York Metro Data Center here.

Sep 20, 2013

Sahara Force India: This F1 website doesn’t brake for pit stops


Sahara Force India

Formula One is rightly regarded as one of the greatest expressions of technology. In a world living on the edge, where a thousandth of a second can mean the difference between triumph and defeat, there is no space for compromise.

The same mentality applies to the IT solutions that power the Sahara Force India website, our most effective platform to interact with our fans and engage them. In a technological world that is connected 24/7, nothing is left to chance. Through our website, we connect with Formula One followers all over the world, sharing news, behind-the-scenes insights and information, making them an integral part of our team.

Top performance online and on the circuit
Formula One is a global sport and our team has fans on every continent. It is crucial for our website to be live and accessible at every hour of the day and night – no downtime is acceptable. Our partnership with Internap guarantees the most reliable platform, so that no fan is left out in the cold – whatever their location and needs.

Especially on race weekends, when traffic reaches its peak, the platform created by Internap allows us to achieve top performance, while the knowledge that no fan is ever let down by our website gives our team the peace of mind to concentrate on bringing new, exciting content to our fans.

Being able to count on a website that delivers unrivalled performance has allowed us to build a unique relationship with our followers, bringing them exclusive content with the confidence our fans will be able to enjoy it.

“I visit the website every day” says Patricia, who hails from the Asturias in Spain; “I am a massive fan of your blogs where you talk about your summer break, how you feel when going somewhere for the first time or how is it like for the team to race in India. This kind of entries put us fans closer to the team and I appreciate that a lot.”

Young fan Cassie, from England, attributes her passion for Formula One to the team’s website, her one-stop-shop for information. “I read every article, even older ones – each one helps my Formula One knowledge grow” is her verdict. “Without the website I wouldn’t have my Paul Di Resta shirt, I wouldn’t know where to get the full race previews from all the team – I wouldn’t be one of your Best Fans!”

The website is the way our team connects to our home in India and to the passionate fans following our team there. Superfan Darshan, from Kolkata, is “really thankful for the website, the best source of information about the team.

“The blogs give us so much about what is going on behind the scenes. When you go through them, one can imagine the situation and feel just like you’re there with the team. It is also great to read about [Academy Driver] Jehan’s excellent performances. It’s tough to get hold of any news of him on the net but there are so many updates by the team!”

Through our website, we try to bridge the gap between the team and the millions of fans who wish us well every race weekend. Getting closer and sharing insights into what happens backstage at a Grand Prix gives the hundreds of thousands who cannot travel to a race the opportunity to be part of the team. There are a few fans, however, for whom this connection reaches an even more personal level – those for whom interaction with our site is a way to keep in touch with loved ones: the families of our team members.

“The team website is so important for us” says Oana, originally from Bucharest. “It makes us feel part of the race weekends and therefore part of the lives of our dear ones when they’re away. The same goes for all our friends, work colleagues and extended family so the virus of F1 and love for SFI spreads through its people.

“As a fan of the team, I love all the drivers’ pre- and post-race comments. I love their optimism and determination and the explanations making things clear for all us non-engineers!”

The positive feedback from our loyal fans is a huge reward for everyone in the team and renews our commitment to work hard, in partnership with Internap, to bring an even better website experience for our followers.

We listen carefully to the suggestions and comments of the best Formula One fans in the world. As we keep improving our offer, we want to bring you even more. Loyal fan Tim, from England, enjoys “one of the best and easiest to access sites of all the F1 teams so keep up the good work” and asks for “more about the team – the key players, the factory and the staff behind running a large operation like a F1 team.”

Working with Internap, we can keep working on bringing you the best content – today and tomorrow.

Sep 17, 2013

Industry News: Data center expansions meet growing demand for services


Within the data center industry, new technologies are being deployed at a very fast pace. The future of data center environments and how they will be managed is the focus of many conversations. Data Center Infrastructure Management (DCIM) software implementation is on the rise, and many data center providers, including Internap, are expanding, building and otherwise adding space to keep up with the high demand.

In the past few months, a number of data center expansions and build outs in the New York metro area have been announced as a result of the growing demand for data center services in the region. In fact, later this year Internap will unveil our state-of-the-art New York Metro data center, located in Secaucus, NJ.

So, with all that in mind, we thought this might be a good time to do a roundup of some recent articles about what is happening across the data center industry as a whole:

  • Optimizing Physical Infrastructure to Get More from Virtualization and the Cloud
    Tips and approaches to manage the onslaught of digital data and its impact on a data center. If ignored, the benefits of virtualization and cloud computing can be severely constrained in meeting highly dynamic compute power demand.
  • Colocation has role to play in growing cloud landscape
    Many companies need to put some systems in the private cloud because public cloud technologies are not a perfect fit for some applications and services. This is where colocation comes into play. Colocation can provide an ideal path to the cloud. However, not every company has the kind of data center setup needed to host a secure, reliable and high-performance private cloud.
  • DCIM software implementation, cloud use on the rise in data centers
    Moving data offsite to a cloud or colocation provider and using data center infrastructure management tools are common practices in enterprise data centers. In fact, recent IDC data indicates that DCIM software is growing in popularity and taking hold as the trend in data outsourcing continues.
  • Defining the New Data Center Operating System
    There is a paradigm shift occurring in the modern data center, which is at the center of any major organization. As more systems are being placed within the data center environment and new technologies are being born directly within a cloud infrastructure, there needs to be a way to better manage all of the services being relied upon. This is forcing a look at how the future data center environment will be managed.

New developments in colocation management and data center services are an important aspect of establishing and managing the right IT Infrastructure for your business.

Sep 10, 2013

The Truth about Creative Problem Solving


Are you creative?

Like many other people, you might feel inclined to answer this question with a “yes” or a “no”. But what you might be surprised to learn is that when it comes to creative problem solving, everyone is capable of it.

Creative problem solving isn’t a trait; it’s a process. One of the best definitions of this process comes from a company called FourSight, the developer of a research-based training tool that helps individuals, teams, and organizations innovate and solve problems more effectively. Their research breaks the creative problem solving process into four parts: Clarifying, Ideating, Developing, and Implementing.

Clarifying – In order to solve a problem, you first need to define what the problem is. Clarifying is the part of the creative process where you spend time “clarifying” what the problem is and getting all the facts and parameters you need in order to solve that problem.

Four Squares of Creative Problem Solving (FourSight LLC)

Ideating – This is where you begin to brainstorm potential ideas or solutions that will solve the problem. The ideas generated are what you use to design the solution to the problem.

Developing – Developing is the part of the process where you’ll refine and perfect your idea/design and form it into the best workable solution.

Implementing – This is the final stage, where you deliver or execute on the solution you’ve developed, e.g. launching a new business or delivering a presentation.

Right Hand vs. Left Hand

All things considered, the FourSight model for the creative process is a pretty simple concept; however, the value of the model is further revealed once you’ve taken the assessment. The FourSight assessment works by measuring individuals’ peak preferences in the creative process. Notice that I said “preferences”: everyone is capable of spending time in all four parts of the process, but our peak preferences determine where we like to spend most of our time.

The best analogy for this is writing with your right or left hand. Most of us are right- or left-handed; the hand we write with is considered our dominant hand. That’s not to say we can’t write with our non-dominant hand, but doing so takes additional focus and energy. The same holds true in the creative process. We all have preferred parts of the process where we like to focus our energy and spend our time, but that doesn’t mean we can’t make a conscious effort to work on the others.

The Four Types (FourSight LLC)

Turning Insight into Action

Once you’ve taken the FourSight assessment, you’ll receive results showing where your peak preferences lie in the process. People can have peak preferences in one or multiple parts of the process. There are also people who don’t have any peak preferences but instead show moderate energy across all four stages; these people are called “integrators”. While it’s a good thing that integrators spend time in all four parts of the process, the thing to watch out for is that they don’t carry as much energy in each of those stages. In fact, because they spend time in all of the stages, you’ll sometimes find that they’re running low on energy when it comes time to implement.

Therefore, knowing which peak preferences you or your team have is valuable for identifying where potential blind spots may be.

For example, someone with peak preferences in ideation and implementation is likely to spend the majority of their time coming up with lots of ideas; they may not clarify the problem correctly, and they may then go on to implement those ideas without ever having fully developed them.

The same holds true in a team or organizational dynamic. If a team lacks anyone with a particular preference, it may find itself overlooking a critical stage in the process. Knowing your preferences, and being aware of where others’ preferences lie, will improve both communication and productivity within the team and ensure that the right amount of time is spent in each area.

FourSight Training

Understanding the creative process is a great framework for undertaking any project and can be applied both inside and outside the office. If you’d like to take the FourSight assessment and have your team work through the creative problem solving process, I’d highly recommend talking to Bliss Training & Consulting. With over 20 years of experience in corporate training, they specialize in designing active learning environments that let your team experience and work through the creative problem solving content in a customized, engaging manner.

Sep 10, 2013

How to make IaaS work for your big data needs


In our previous blog, we discussed two main classes of big data that we have observed in our customer base: “needle in the haystack” style data mining and mass-scale NoSQL-style “big” database applications. In this post, I want to talk about the importance of choosing the right infrastructure services for your needle-in-the-haystack big data workloads.

The needle in the haystack approach to big data involves searching for relationships and patterns within a static or steadily growing mountain of information, hoping to find insights that will help you make better business decisions. These workloads can be highly variable, with constant changes in scope and size, especially when you’re just starting out, and they normally require large amounts of backend processing power to analyze high volumes of data. To effectively crunch this type of data and find meaningful needles in your haystack, you need an infrastructure that can accommodate:

Dynamically changing, periodic usage – Most big data jobs are processed in batches, and require flexible infrastructure that can handle unpredictable, variable workloads.
Large computational needs – “Big” data requires serious processing power to get through your jobs in a reasonable amount of time and provide effective analysis.

So what kind of infrastructure options can support these requirements? While the multi-tenant virtual cloud platforms offer a great economic model and can handle the variable workloads, performance demands become extremely difficult to manage as your use cases evolve and grow. Big data mining technologies such as Hadoop may work at acceptable levels in virtual environments when you’re just starting out, but they tend to struggle at scale due to high storage I/O, network and computational demands. The virtual, shared and oversubscribed aspects of multi-tenant clouds can lead to problems with noisy neighbors. Big data jobs are some of the noisiest, and ultimately everyone in the same shared virtual environment will suffer, including your big data jobs. An alternative is to build out dedicated infrastructure to alleviate these problems.

This leaves you with two bad options: either deal with subpar performance of virtual pay-as-you-go cloud platforms, or start building your own “expensive” infrastructure. How do you get both the flexibility you need and the high level of performance required to efficiently process big data jobs?

Bare-metal cloud can provide the dedicated storage and compute that you need, along with flexibility for unpredictable workloads. In a bare-metal cloud platform, all compute and direct-attached storage are completely dedicated to your workloads. There are no neighbors, let alone noisy ones, to adversely impact your needs. Best of all, you can get and pay for what your workload specifically needs, and then spin down the whole thing. One caveat – even with dedicated servers and storage, the network layer is still shared among multiple tenants, which could be a limiting factor for some large-scale Hadoop jobs where wire-speed performance is a must. Even though bare metal is one of the best price for performance cloud options, your workload may not be able to tolerate such limitations as your big data needs grow. Managed hosting or private cloud to the rescue.

Managed hosting or private cloud is a better option in some cases, as the infrastructure is dedicated to you on a private network and can be customized to accommodate your specific needs. These options deliver wire-speed network performance along with dedicated compute, storage and reasonable agility. Of course, this won’t be the most economical option, but if your workload requirements demand it, the tradeoff is well worth it.

Whether you begin your big data endeavor with virtual cloud or bare-metal cloud, it’s important to recognize that your infrastructure needs will change over time. When starting out, a virtual cloud or a bare-metal cloud can suffice, with bare metal providing better performance and scale capabilities. But as your big data needs expand, a fully dedicated, managed private cloud may fit better, without the limitations of a shared network.

Given that change is the only constant in big data, choosing a provider that offers more options and allows you to adjust as your needs change is key. Talk to Internap about your “needle in the haystack” big data needs and we will help you find the right options now and for the future.

Sep 8, 2013

Binary Vivisection (Part 1)


Earlier this month, a new version of Vivisect was released that fixed a large number of bugs and added a tremendous number of new features.  While looking over the changelog and documentation, I realized that there doesn’t seem to be a good tutorial or primer for getting familiar with the Vivisect framework, so hopefully we can remedy that today.  In this series, we’ll cover the usage of VDB (the dynamic debugging component) and vivisect (the static analysis tool).  One thing that makes Vivisect so powerful is that it’s completely scriptable in Python.  If you’re looking for an example of scripting VDB, check out fitblip’s post, since I don’t plan on covering it at this time.  This post assumes the reader has some introductory knowledge of reverse engineering and reading (dis)assembly.
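To give a taste of that scriptability, a headless batch analysis can be sketched in a few lines. This is a sketch only: it assumes the vivisect package is installed, and the API names (VivWorkspace, loadFromFile, analyze, getFunctions, getName) are taken from the framework’s source, so verify them against your installed version.

```python
# Hypothetical headless-analysis sketch; assumes the vivisect package is installed.
# API names are taken from the framework's source -- check them against your version.
def list_functions(path):
    import vivisect  # deferred import so the sketch reads standalone
    vw = vivisect.VivWorkspace()
    vw.loadFromFile(path)   # parse the binary (PE/ELF/Mach-O supported)
    vw.analyze()            # the same auto-analysis pass vivbin -B performs
    return {fva: vw.getName(fva) for fva in vw.getFunctions()}

# Usage (not run here): list_functions("reverseme.exe") -> {address: name, ...}
```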

For the rest of this post, we’ll be working with a 32-bit version of Ubuntu 12.04, Python 2.7.3 and three trivial binaries (source code to follow).  The only dependencies for Vivisect on Ubuntu are Python, which ships with the distro, and PyQt4, which can be installed with “sudo apt-get install python-qt4”.  The code for the two Linux binaries follows and can be compiled by simply running:

user@ubuntu:~/vivisect_20130901$ gcc readword.c -o readword
user@ubuntu:~/vivisect_20130901$ gcc conditional.c -o conditional


#include <stdio.h>

#define MAX_LEN 80

int main (int argc, char *argv[])
{
  char a_word[MAX_LEN];

  printf ("Enter a word: ");
  scanf ("%s", a_word);
  printf ("You entered: %s\n", a_word);
  return 0;
}


#include <stdio.h>

int main()
{
    int age;

    printf( "Please enter your age: " );
    scanf( "%d", &age );
    if ( age < 100 ) {
        printf( "You are pretty young!\n" );
    } else if ( age == 100 ) {
        printf( "You are old\n" );
    } else {
        printf( "You are really old\n" );
    }
    return 0;
}

Before continuing, please ignore the ugly code above and pay no mind to the use of deprecated functions…

The two programs above will be built and compiled on our Ubuntu machine; the third file we’re going to use is a Win32 PE, which is also the file used in the well-known series “Lena’s Reversing for Newbies.”

Now that we have all that business out of the way, let’s get started.


While we do distinguish between “vivisect,” “vdb” and sometimes “vtrace,” they’re all part of the same framework, which is simply called Vivisect.  Within this framework there is also a tool called vivisect, which is what we’re talking about now.  Vivisect is the static analysis component of the framework and allows us to generate a .viv workspace for a given binary file.  Currently, Vivisect supports PE, PE32+, Elf32, Elf64 and Mach-O.  To get started, let’s generate a .viv workspace for reverseme.exe and begin some static analysis of the binary.  Oh, and in case you’re curious, reverseme.exe is a very simple crackme that checks for a keyfile and spits out an error message if the keyfile isn’t found or is invalid.

No keyfile.dat present


Invalid keyfile.dat in Path


Using these error messages and the Vivisect workspace, we’ll start to navigate through the binary to see more about how it works, but most importantly, how to use Vivisect.*

*We don’t focus too much on the binary, what it does, and how to crack it since that’s already been done in multiple tutorials elsewhere.  If you want to see the solution to the crackme and patching the application, check out TiGa’s tutorial which uses IDA 5.x and this binary.

To generate a workspace for our binary file, run the following command:

user@ubuntu:~/vivisect_20130901$ ./vivbin -B reverseme.exe
Loaded (0.2901 sec) reverseme.exe
ANALYSIS TIME: 0.217390775681
Saving workspace: reverseme.exe.viv

Now that we’ve generated our workspace, we don’t need the binary for static analysis, since everything is contained within the output .viv file.  To open this file within vivisect, simply run ./vivbin reverseme.exe.viv; upon completion, you’ll have a boring screen that looks something like the below:


While you can pick and choose items from the menu bar up top, I recommend loading the default layout as documented in the default display screen.  Once you load the layout, your workspace should look like the image below.


At a glance, we can immediately see some very useful and familiar things in our new layout.  The top-left window is the function graph panel, which displays the graph for a given function, as we’ll see shortly.  To the right of the function graph is another window, currently displaying the functions in the analyzed application.  Immediately under this window are five tabs labeled “Strings,” “Segments,” “Imports,” “Exports,” and “Functions” (the current display).  As you can guess, these tabs change the window to display the functions imported and exported by the target binary, the strings found in the binary, and the segments which make up the executable.  For the sake of learning, let’s play dumb and act like we don’t know anything about this application other than what the error messages showed us, and let’s start with the first error message, “Evaluation period out of date.”  To find out where this string comes from, we’ll locate the string in Vivisect, find cross-references to it, identify the function responsible, and see what else the program can do.

Click the Strings tab and find the string which shows this text.  Right click the string, choose “xrefs to >” from the popup menu, mouse over “0x00401084 reverseme.__entry,” and select “send to FuncGraph0.”  Once this is done, open and maximize FuncGraph0, which will show a graph of the corresponding function.



To zoom in/out of the graph, hold shift and scroll your mouse wheel up or down.  To move left, right, up, or down, hold shift and simply drag your mouse.  Zooming out gives us a better idea of the flow of the application, even if it’s not as legible.

Zoomed Out funcGraph0


Because this is a very simple program, it’s pretty easy to scroll around and get an idea of what does what and what goes where.  In looking at the first block (entry point 0x00401000), we can see at the end that there is a comparison (0x00401078) and a conditional branch (0x0040107b).

CODE:0x0040106e  6879204000       push str_Keyfile.dat_00402079
CODE:0x00401073  e80b020000       call CreateFileA_00401283    ;kernel32.CreateFileA(0x00402079,0xc0000000,3,0,3,0x0040216f,0)
CODE:0x00401078  83f8ff           cmp eax,0xffffffff
CODE:0x0040107b  751d             jnz loc_0040109a

The code path to the left is what we hit when no key file is in the currently running path and the program immediately exits.

No Key File Present

CODE:0x0040107d  6a00             push 0
CODE:0x0040107f  6800204000       push str_ Key File Revers_00402000
CODE:0x00401084  6817204000       push str_Evaluation perio_00402017
CODE:0x00401089  6a00             push 0
CODE:0x0040108b  e8d7020000       call MessageBoxA_00401367    ;user32.MessageBoxA(0,0x00402017,0x00402000,0)
CODE:0x00401090  e824020000       call ExitProcess_004012b9    ;kernel32.ExitProcess(sp+0)
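Read back into source-level terms, this failure path amounts to something like the following (a pseudocode reconstruction for illustration, not the program’s actual source):

```
/* pseudocode reconstruction of 0x0040106e-0x00401090 */
handle = CreateFileA("Keyfile.dat", ...);      /* call at 0x00401073 */
if (handle == INVALID_HANDLE_VALUE) {          /* cmp eax,0xffffffff */
    MessageBoxA(0, "Evaluation perio...", "Key File Revers...", 0);
    ExitProcess(0);
}
/* otherwise the jnz at 0x0040107b is taken and execution continues at loc_0040109a */
```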

Without digging into the disassembly too much, we can see some fairly obvious things here.  Let’s go backwards.  At 0x00401090, ExitProcess is called and the program exits.  At 0x0040108b, MessageBoxA is called; this is what causes the message box displaying the text to appear.  Next, at 0x00401084, “push str_Evaluation perio_00402017.”  This looks like it’s related to the evaluation period error message, but because we’re playing dumb, we can’t be sure, so let’s get some additional information.  Highlight “CODE:0x00401084,” right click, and send to viv.  Once that’s done, click the viv tab under the FuncGraph0 window and you’ll see some new stuff.

Send to Viv


Viv View


The viv view doesn’t show us the function graph (since it’s a different view and all…), but it does give us some navigation improvements.  For example, we want to know more about str_Evaluation perio_00402017 but the graph view had nothing to say about it.  In viv view, double click str_Evaluation perio_00402017 and voila!  We now know everything there is to know about this string, and what a beautiful string it is.

Our String (what a beauty)


If you’re feeling particularly fancy, you should notice that the viv view, when displaying the DATA section of the binary, shows us that there are 3 cross-references (XREFS) to this string.  Right click the DATA block and select “xrefs to” if you care to see where these xrefs are.  Following any of these xrefs to funcGraph0 will adjust the function graph to show you where that xref exists in the program.
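Under the hood, an xref scan like this is conceptually simple: find every 4-byte little-endian pointer in the image whose value is the string’s virtual address. Here is a toy Python sketch of the idea; the image base and the direct offset-to-address mapping are simplifying assumptions, since a real tool like Vivisect parses the PE section headers to do this properly.

```python
import struct

IMAGE_BASE = 0x00400000  # assumed; a real tool reads this from the PE header

def find_xrefs(data, target_va):
    """Return addresses of 4-byte little-endian pointers to target_va."""
    needle = struct.pack("<I", target_va)
    refs, start = [], 0
    while (idx := data.find(needle, start)) != -1:
        refs.append(IMAGE_BASE + idx)  # simplification: file offset == RVA
        start = idx + 1
    return refs

# Toy image: two embedded references to the string at 0x00402017
blob = (b"\x00" * 8 + struct.pack("<I", 0x00402017)
        + b"\x90" * 4 + struct.pack("<I", 0x00402017))
print([hex(r) for r in find_xrefs(blob, 0x00402017)])  # ['0x400008', '0x400010']
```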

Well, that’s all for now and thank you for reading.  Be sure to check back next week where we’ll do some more work with reverseme.exe, manipulate our ELF32 binaries, add code comments, rename functions, identify code paths through the binary, and get started working with the vivisect debugger.

Updated: January 2019

Sep 4, 2013

Customer Spotlight: Realview TV launches StudentBridge using CDN

Ansley Kilgore

Realview TV specializes in creating and delivering custom online video experiences to help their clients strategically connect with target audiences. Jonathan Clues, CEO, has long believed that video is poised to take over the Internet, and now sees the tide turning as demand increases for more interactive, video-based web experiences. Static websites as we know them will become a thing of the past.

In support of this strategy, Realview TV recently launched StudentBridge, which connects prospective college students with educational institutions. With today’s students visiting on average 30 college websites while researching their choices, access to compelling information regarding the campus and its culture is essential for differentiation. Through branded online video experiences such as virtual campus tours, StudentBridge helps traditional universities strategically promote themselves, while also helping prospective students research and make better decisions about their ideal school.

With more than 1 million videos viewed per month, StudentBridge requires a reliable Content Delivery Network (CDN) to distribute such large amounts of online content. As an Internap customer since 2006, Realview TV uses the CDN to store and deliver rich media files across multiple devices.

The content cloud
According to Clues, the future vision for StudentBridge is to establish multiple distribution outlets and become the content cloud for their clients. This would allow universities to store content on the StudentBridge CDN and seamlessly deliver it to users via different platforms, including social media outlets, websites and mobile devices.

While many universities and businesses are using YouTube to host their online videos, StudentBridge’s content cloud approach offers a more targeted way to drive results. Even though YouTube is an effective way to do research online, it also serves as a search engine that provides information on similar companies and services. This means consumers that are actively looking at video content about your organization on YouTube are also seeing information about your competition. With a CDN, businesses can drive users to their own environment and have a higher degree of control over the content. This type of strategy will allow StudentBridge clients to achieve better results through more targeted messaging.

StudentBridge is headquartered in the USA with a satellite office in London. With the global capabilities of Internap’s CDN, they are well-prepared to expand into international markets. CDNs are purpose-built for distributing rich media to origin points in geographically dispersed locations, allowing StudentBridge to provide quality online video to students around the world.

As online content continues to become more video-centric, having a strategic plan that governs how your organization is perceived by users is essential. Consumers will keep consulting the Internet as a first step in many buying decisions, and businesses of all types, including universities, need to deliver the right content to the right audience in order to stay competitive.
