
Apr 28, 2016

Data Center Migration

INAP

While CFOs dream that a data center relocation (physical, not logical) will save money, these projects often translate into an IT team’s nightmare. It’s easy to understand why: an IT department’s experience with migrations, if any, is probably limited, and as a result there is a material risk of failure. While Internap doesn’t perform data center migrations as part of our service offering, as a colocation provider we see data center consolidations happen a lot as new customers join us.

Here are some best practices on data center migration planning based on observation and customer feedback.

  • Establish a budget
    Don’t kid yourself about what this really will cost in terms of people and time. Revisit the budget often; sometimes what seemed like a good idea may not deliver the return on investment originally anticipated.
  • Establish a project team
    Like every other well-planned project, define roles, responsibilities and expectations. Establish a regular meeting cadence and a method for communication.
  • Set a scope and then focus the team’s initial efforts on discovery
    Understand which applications and which hardware are being moved. Don’t forget to scope the risks.
  • Develop a detailed plan ahead of time
    It needs to be a detailed plan, not simply a general guideline. We’ve seen plans detailed down to the individual piece of hardware. (That migration went very well.) But more than that, assemble the plan in a way that makes sense for your business: prioritize the applications being migrated and categorize the hardware.
  • Create a checklist
    Don’t forget to plan for risk mitigation in the event that something goes wrong.
  • Involve the data center provider
    Hopefully, it’s Internap. As a colocation provider, we understand migration challenges and the need to be flexible when coordinating timelines and service availability.


Communication with the service provider ensures that space and people are available at the loading dock, that security is prepared for any third parties involved, and that no major maintenance events or other large customer installs are scheduled at the same time.

Hopefully, this helps. If all this did was make you more nervous, there are data center movers who specialize in developing data center migration plans and who can also perform the relocation work itself. In most cases, Internap can recommend a local partner we’ve successfully worked with before. So give us a call.

Apr 21, 2016

How to Accurately Size Your Backup Storage (And Save Money)

Paul Painter, Director, Solutions Engineering

My parents made sure the phrase “planning is the key to success” became the mantra of my childhood. And for good reason too. Whether it was a hiking weekend, fishing trip or exotic vacation, planning was essential to the pleasant memories I took from these adventures simply because a lack thereof very well could’ve led to disastrous results.

As it turns out, it’s now a mantra I apply every day toward sizing cloud environments.

When working with clients, a topic we frequently discuss is the current state of their environment and its anticipated growth over a period of time. More often than not, this is a “guesstimate” science due to the complexity of most setups and the lack of statistical data for greenfield deployments.

These facts lead some IT professionals to manage their environments via the “learn and adjust on the fly” approach. While that may work in some scenarios, it’s hardly a reliable method for managing critical business assets. Having a ballpark idea of future state requirements goes a long way.

The same principle applies to backup and disaster recovery strategy. For just a small time investment, proper planning can save you thousands of dollars.

To show you how, let’s walk through a hypothetical, but far from uncommon, example of sizing requirements. Note that this is the same process our solutions engineers use for sizing HorizonIQ Cloud Backup powered by Veeam Cloud Connect, but it’s still applicable for all types of backup jobs.

Covered in six steps below, the goal of this exercise is to help you understand two things: 1) the process for making sure your backup jobs will work given specific retention requirements, recovery targets and backup windows; and 2) the method for getting just enough space for your current needs while simultaneously creating a roadmap with expansion milestones.

(If you want to skip all the details and access a very helpful sizing calculator, jump down to step four!)

Step One — Assess the Job Setup and Key Requirements

Let’s assume we have five critical workloads that need to be backed up. They each require high availability and a long-term retention policy:

  • VM1 – Active Directory
  • VM2 – SQL server
  • VM3 – App server
  • VM4-5 – Web frontend

First off, we need to make note of our RPO/RTO and data retention policy. We can’t arrive at an accurate sizing estimate without them.

In this case, all VMs require the same RPO/RTO, as they belong to the same application group:

  • RPO – 6 hours
  • RTO – 24 hours

The retention policy is as follows:

  • Last 7 days of backups
  • Monthly backup job copies

Next, we need to gather some key info regarding the VMs, data and backup windows.
Operating System — All guest OSes are Microsoft Windows Server 2012 R2
Change Rate — All data on the VMs, except the Active Directory domain controller, has a change rate of 5%
Source Data Size — The total size of our VMs is 800GB:

  • VM1 – 100GB
  • VM2 – 250GB
  • VM3-5 – 150GB

Backup Windows — The idle/non-busy time periods deemed suitable for backups are established as such: Central Time Zone: 7-9 a.m., 1-2 p.m., 5-6 p.m., and 1-2 a.m.
Read/Write — The storage can handle up to 300MB/s reads and 300MB/s writes at a given time
Uplinks — The environment is configured to use 2 x 1Gbps uplinks to Backup server, no LACP — 1Gbps maximum at a time

Step Two — Make Sure Backup Windows Align with RPO/RTO

Now we have to review the above information to ensure there aren’t any potential conflicts with our requirements.

In this scenario, the backup windows conflict with the 6-hour RPO. The 5 p.m. and 1 a.m. windows are spaced eight hours apart, which causes the violation. The 1 p.m. and 5 p.m. windows, on the other hand, are only four hours apart, which is not a violation, since the RPO is met (the most recent restore point is never more than six hours old).

To mitigate the risk of a RPO policy violation, we decide to re-arrange the non-busy periods and adjust application-level tasks to allow for a backup schedule within the following windows: 7-9 a.m., 1-2 p.m., 7-9 p.m., and 1-2 a.m. This may require a meeting or two to sort out, but that’s why we’re planning ahead!
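If you want a quick sanity check on a schedule like this, the back-of-envelope math is easy to script. Below is a minimal Python sketch (our own illustration, not part of any Veeam tooling; the window start times come from the example above) that confirms the largest gap between backup starts never exceeds the six-hour RPO:

    # Sanity-check the adjusted backup windows against the 6-hour RPO.
    RPO_HOURS = 6
    window_starts = [7, 13, 19, 1]   # 7 a.m., 1 p.m., 7 p.m., 1 a.m. (24-hour clock)

    starts = sorted(window_starts)
    # Gaps between consecutive backup starts, including the wrap past midnight.
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    gaps.append(24 - starts[-1] + starts[0])

    worst = max(gaps)
    print(f"Largest gap: {worst}h -> "
          f"{'meets' if worst <= RPO_HOURS else 'violates'} the {RPO_HOURS}h RPO")

Swap in the original windows (7 a.m., 1 p.m., 5 p.m., 1 a.m.) and the script reports an eight-hour gap, flagging the same violation we found by hand.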

To ensure the 24-hour RTO is achievable, simply:

    1. Take the total size of the initial, Day 1 data set: 800GB
    2. Determine how long the backup will take based on the storage read/write speed: 300MB/s read and 300MB/s write. Because our storage also handles production VMs, let’s estimate that only 50% of the top capability will be available for backup/restore purposes (150MB/s).
    3. Initial backup time: 800GB / 150MB/s ≈ 1.5 hours

Plenty of time to spare!
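Here’s the same arithmetic as a tiny Python sketch (an illustration only, using the figures from this example), which is handy if you want to rerun the estimate with different data sizes or throughput assumptions:

    # Estimate the initial full backup time against the 24-hour RTO.
    source_gb = 800                   # Day 1 data set
    storage_mb_s = 300 * 0.5          # 300 MB/s, but only 50% reserved for backups

    hours = (source_gb * 1000) / storage_mb_s / 3600
    print(f"Initial backup at the storage layer: ~{hours:.1f} hours")   # ~1.5 hours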

Step Three — Make Sure the Network Can Handle the Backup Jobs

We also want to make sure our network equipment can tolerate the extra load.

Out of the two available 1Gbps uplinks, only half the overall capacity (1Gbps, roughly 125MB/s) can be allotted for backup purposes. This means the initial backup will take an estimated two hours. We’ll use this value, rather than the 1.5 hours at the storage level, for all further timing calculations.

One last thing before getting to the sizing: Before we can calculate the total storage requirements, we have to make sure our backup windows, occurring four times daily, are sufficient for copying incremental backups, based on the rate of change.

Every incremental backup is expected to contain up to 5% changed data, which means we need to copy 40GB every six hours. Based on our 150MB/s benchmark, this will take approximately 15-20 minutes.

All good!
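The network-side numbers can be scripted the same way. The sketch below (again, just our illustration, limited to one 1Gbps uplink at roughly 125MB/s) reproduces the two-hour initial backup estimate and shows that the raw transfer time for each 40GB increment is only a few minutes; the 15-20 minute estimate above leaves additional headroom for compression and job processing:

    # Network-limited timing for Step Three (raw transfer time only).
    uplink_mb_s = 1000 / 8            # one 1Gbps uplink, no LACP -> ~125 MB/s

    initial_gb = 800                  # full Day 1 data set
    incremental_gb = 800 * 0.05       # 5% change rate -> 40 GB per increment

    initial_hours = (initial_gb * 1000) / uplink_mb_s / 3600
    incremental_min = (incremental_gb * 1000) / uplink_mb_s / 60

    print(f"Initial backup over the network: ~{initial_hours:.1f} hours")   # ~1.8, call it 2
    print(f"Each incremental copy: ~{incremental_min:.0f} minutes of raw transfer")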

Step Four — Calculate Total Backup Storage Requirements

How much space do we need to store the backup data? As with everything in IT, that depends on the type of data and the configuration chosen for the backups.

In our scenario, two types of jobs will run: the Backup Job (our last seven days of backups) and the Backup Job Copy (our monthly backup copies).

To make our calculations, we’ll use this handy calculator developed by one of Veeam’s talented team members:

http://rps.dewin.me/

Calculating Storage for the Backup Job

The default backup job type in Veeam v9 is Forever Forward Incremental, in which no synthetic full backups are created. This job type delivers significant space savings and helps us avoid longer backup windows and restore times.

Using the info from our assessment, fill out the calculator fields as follows:

[Screenshot: Restore Point Simulator inputs for the Backup Job]

These inputs will deliver the following retention interval schedule, with a total storage size of 1,360GB. This includes 420GB of workspace.

[Screenshot: retention interval schedule totaling 1,360GB]

The dates and times on the right represent the 6-hour intervals over the 7-day retention span. While those dates and times aren’t as critical for the short-term backup retention policy, their impact over the long term is huge, as we’ll see in our provisioning plan.
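If you’re curious what’s happening behind the calculator, a rough back-of-envelope version is sketched below. This is only our approximation, assuming a 50% data-reduction ratio on backup files and a merge workspace of roughly one full plus one increment (the simulator’s own defaults and logic may differ), but it lands close to the 1,360GB figure:

    # Rough approximation of forever forward incremental storage for the Backup Job.
    source_gb = 800
    change_rate = 0.05          # 5% changed data per increment
    reduction = 0.5             # assumed compression/dedup ratio
    points = 7 * 4              # 7 days of retention, one backup every 6 hours = 28 points

    full_gb = source_gb * reduction                      # ~400 GB
    increment_gb = source_gb * change_rate * reduction   # ~20 GB each
    chain_gb = full_gb + (points - 1) * increment_gb     # ~940 GB
    workspace_gb = full_gb + increment_gb                # ~420 GB for merge operations

    print(f"~{chain_gb + workspace_gb:.0f} GB total")    # ~1,360 GB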

Calculating Storage for the Backup Copy Job

For the monthly Backup Copy Jobs, use these inputs.

[Screenshot: Restore Point Simulator inputs for the Backup Copy Job]

That will produce the following:

[Screenshot: retention interval schedule totaling 5,620GB]

We now know the total space needed to store all of the data given the requirements: 5,620GB + 1,360GB = ~7TB.

Step Five — Map Your Provisioning Plan

Since the total space can be estimated before we even begin the initial job, why not provision the 7TB and call it a day? Simple: Because that would cost us some serious money.

We know that on Day 1 our Backup Copy Job will not require all the estimated space. Therefore, we can provision a smaller amount of storage, sufficient for the needs of the first four months and then add the space gradually.

For example, if we choose not to provision gradually based on these milestones, we’d have to request all 7TB of space starting Day 1, costing us about $7,200/year.

Compare that to provisioning the space gradually:

  • Day 1 – provision space required for the first 3 months – 4 x 400GB = 1,600GB, $160/mo, $480/period
  • Day 90 – provision space required for 6 months – 7 x 400GB = 2,800GB, $280/mo, $840/period
  • Day 180 – provision space required for 9 months – 10 x 400GB = 4TB, $400/mo, $1,200/period
  • Day 240 – provision all space – $1,800/period

The grand total for the year in this case is $4,320 – nearly $3,000 less than the “no planning ahead” plan.
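The cost comparison is simple enough to script as well; the sketch below just totals the figures from the schedule above:

    # Year-one cost: flat provisioning vs. the gradual schedule above.
    gradual_periods = {          # 3-month periods, cost per period in USD
        "Day 1":   480,          # 1,600 GB at $160/mo
        "Day 90":  840,          # 2,800 GB at $280/mo
        "Day 180": 1200,         # 4 TB at $400/mo
        "Day 240": 1800,         # all space for the final period
    }
    gradual_total = sum(gradual_periods.values())        # $4,320
    flat_total = 7200                                    # ~7 TB provisioned from Day 1

    print(f"Gradual: ${gradual_total}, flat: ${flat_total}, "
          f"savings: ${flat_total - gradual_total}")     # ~$2,880, nearly $3,000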

Step Six — Plan How to Spend Your Savings!

Storage planning is not new conceptually, but you’d be surprised how often it’s skipped during development and deployment stages. Don’t make that mistake.

Planning not only allows you to validate your decisions at an early stage of the project, but also gives company management confidence that your operation is efficient and sound. The cherry on top: the extra savings you can capture once you have a definitive set of milestones and know exactly when to provision and how much to add or remove. In our example, planning saved about $3,000 a year, but for many organizations the savings can balloon to 10X that size.

So there we have it. Happy planning!

Apr 20, 2016

How Do I Identify a Root Compromise and Reset My Root/Admin Password?

INAP

In our tutorial for setting up a new server, we defined the root user as “the administrative user with heightened privileges to all rights and permissions on the server.” A root compromise is simply a security breach that has affected your server at the root, or admin, level.

Step 1 – Identify the Compromise

Vernon Haberstetzer, an information security engineer at Wells Fargo, provides a few common pieces of evidence that could indicate your system has been hacked:

Suspicious-looking user accounts. These tend not to follow your company’s conventions for valid user accounts. Audit logs (if available) should be able to tell you who created them. Regardless, they should be disabled and investigated.

Rogue applications. Incoming connections can be used as a backdoor for hackers. Tools such as TCPView or Fpipe (Windows) and netstat or lsof (Unix) will show you what applications are using open ports on your system. Make sure you scan your compromised server from another machine, if possible.

OS job scheduler anomalies. Malware sometimes launches from the operating system’s job schedule. To look for anomalies on a Windows system, go to a command prompt and type AT. On a Unix system, check the job list using the cron or crontab commands.

Rootkit access. Hackers also prey on systems using vulnerabilities in either your operating system or your applications. However, there are so many rootkits that it’s difficult to find the files they’ve modified. Tools such as chkrootkit can help you with that.

The Ubuntu wiki adds the following things to look for in your log files:

Incorrect time stamps. To mask their activities, hackers will often copy one legitimate log file over another, creating a timestamp discrepancy.

Missing log files. A less subtle way of hiding activities – or the nature of those activities – is to simply delete the log file. If you’re missing a file, you can be reasonably sure it’s not an accident.

Partially sanitized log files. If your logs are missing more than five minutes of time at any stretch, it’s a clear indication that someone removed entries to hide what they were doing.
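If you’d rather not eyeball the timestamps, a short script can flag those gaps for you. Here’s a minimal Python sketch (our own illustration, assuming classic syslog-style timestamps; the log path is just an example) that reports any stretch of more than five minutes with no entries:

    # Flag gaps of more than five minutes in a syslog-style log file.
    from datetime import datetime, timedelta

    MAX_GAP = timedelta(minutes=5)
    LOG_PATH = "/var/log/auth.log"    # example path; point this at the log you care about

    def parse_timestamp(line, year=2016):
        # "Apr 19 03:12:45 host sshd[...]" -> datetime; None if the line has no timestamp
        try:
            return datetime.strptime(f"{year} {line[:15]}", "%Y %b %d %H:%M:%S")
        except ValueError:
            return None

    previous = None
    with open(LOG_PATH) as log:
        for line in log:
            current = parse_timestamp(line)
            if current is None:
                continue
            if previous and current - previous > MAX_GAP:
                print(f"Gap of {current - previous} before: {line.rstrip()}")
            previous = current

A clean log will print nothing; any output is worth a closer look alongside the other indicators above.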

Engineers at OmniTraining suggest that if you’re running Unix – and you’ve kept current with your kernel patches – you won’t necessarily need to reformat and reinstall the OS in the case of a user account compromise. Disabling the suspicious account should be enough, as long as there’s no evidence of a root compromise.

However, changing your root password is a prudent precaution to take, no matter how insignificant the breach seems to be on the surface.

Step 2 – Change Your Root Password

Linux/Mac Terminal

Switch to the root user. Type "su" or "su root" at the terminal prompt.

Enter the current root password. Don’t be alarmed if nothing displays as you type. This is normal and intentional for security reasons.

Type the command “passwd” at the root prompt.

Enter your new password; then enter it again at the confirmation prompt.

Log out by typing the “exit” command.

Windows PC

Log into your system as Administrator.

Enter Ctrl-Alt-Delete. The Ctrl-Alt-Delete key combination will bring up a prompt that takes you to the change password screen.

Change the Windows administrator password by completing the form.

Step 3 – Tighten Your Security

Once you’ve changed your root password, make sure you’ve completed these steps from our previous article:

Create user accounts. Minimize use of the root account by creating normal user accounts with limited access to the system; this helps prevent the server from being compromised or mistakenly damaged. If you haven’t already done so, create these accounts once you’ve changed the root password.

Ban outside IPs. To protect your server from intruders, install software that bans IP addresses exhibiting dangerous behavior, such as too many failed login attempts. Fail2ban, for instance, reduces the rate of incorrect login attempts; however, it cannot eliminate the risk that weak authentication presents.

Configure your firewall. Install security and firewall software to help limit access to the server and temporarily block potential intruders.

Consequences of Root Compromise Attacks

In addition to the direct loss of revenue, the cost of repairing or rebuilding data, and the loss of both external and internal customers’ confidence, a compromised server and its sites can be used to enable larger-scale attacks, such as a denial of service.

If your business is small to midsize, don’t make the mistake of thinking you’re safe from attack. According to some experts, nearly 60% of online attacks in 2014 targeted small and midsized businesses, partly because these businesses are easier to hack. According to an article by Constance Gustke, a writer for the New York Times, “Limited security budgets, outdated security and lax employees can leave holes that are easily exploited by ever-more-sophisticated digital criminals.”

The bottom line for any business is that the best, least costly way of dealing with a cyber attack is simply to prevent it.

Updated: January 2019

 

Apr 19, 2016

Should I Use Windows Services or PowerShell for Infrastructure Automation?

INAP

Microsoft includes PowerShell in its desktop and server operating systems. It’s a popular command language for IT departments that need to automate tasks. Think of PowerShell as the old Windows DOS command line on steroids: its programs are called cmdlets (“commandlets”), small commands written in a scripting language built on .NET, and that foundation gives it some advantages over Windows services.

Windows services, however, are still the way to go in certain contexts. Here are some tips for choosing the right automation tool for your projects.

Synchronizing Folders and Files

As the network grows, users store their files on different servers throughout the infrastructure. The network could have several file servers, SharePoint servers, application servers, as well as the user’s personal local directories. It’s much more efficient if users store their files in one location, but most network administrators must give up some organization for the sake of user convenience.

PowerShell is the best tool for moving files and folders across different network shares. What takes several recursive functions in C# Windows services takes only a few lines of code in PowerShell. PowerShell can be used to scan several servers and copy folders and files to a specific target directory.

Advantage: PowerShell

Managing User Permissions and Active Directory Policies

Large networks have numerous users and Active Directory (AD) policies. They can be difficult to manage as users change positions or leave the company. Each time a user resigns, their permissions must be removed and the account deactivated. Handling this every day can be tedious for network administrators.

PowerShell or Windows services work well to clean up AD policies and permissions. PowerShell works directly with AD, and so does any Windows service. If you have developers within the operations department, they can create a Windows service that scans for deactivated users and cleans up AD. PowerShell can do the same, but you must use the server’s Windows Task Scheduler to run it each night. Both solutions are effective ways to manage AD changes each day, but PowerShell is a bit more difficult to schedule.

Advantage: Windows services

Synchronize Data with Third-Party Cloud Applications

The cloud is integrated into many systems. For instance, Salesforce is a common cloud application used for customer relationship management (CRM). Companies synchronize data between the popular CRM and their internal systems. The synchronization can happen within minutes, every hour or each night during off-peak hours.

When synchronizing data with off-site services, it’s best to use Windows services. Most of these external cloud services have an API that integrates well with .NET code. Using PowerShell would be much more cumbersome. Windows services can run in intervals that you set within the code without any scheduling. The services run on any Windows server without any interaction from the administrator.

With Windows services, you can extract data from the external cloud application, process it and store it in your own database. You can also send data to the CRM tool. This type of synchronization calls for strong security, since you’re passing data across the Internet, and secure applications are easier to build and run efficiently as Windows services.

Advantage: Windows services

Upload Data to a SharePoint Server

SharePoint is a Microsoft product used for intranets and document collaboration. Users can create their own sites, upload documents to share with members of their team, and place their own permissions on their site structure.

PowerShell works seamlessly with SharePoint. As a matter of fact, the SharePoint application has its own API that can be used with your PowerShell cmdlets. You can use PowerShell to synchronize SharePoint data or upload mass documents to the intranet site. It’s also useful for cleanup and maintenance tasks. For instance, if a user wants to move an intranet site to a new URL, you can automate the task using PowerShell.

Advantage: PowerShell

Data Transformation in SQL Server

Data transformation is the act of pulling data from one database, manipulating the data in some way, and then storing the changed information to a new database. The data “transforms” to a customized format.

For instance, you might have a sales platform that stores subscriptions from web traffic. You pull the data from the public web server database and store it in an internal, secure database. The best platform for this type of operation is Windows services. Windows programs work well with Microsoft SQL Server databases, which is usually the platform used in Windows environments. You don’t need any extra configurations, and the .NET language includes libraries that work directly with the database platform. You could work with PowerShell, but it would be far less efficient.

Advantage: Windows services

Managing Desktop Services

Network administrators manage hundreds of desktops, and it’s difficult to keep track of the services and programs installed on each machine. With PowerShell, administrators can remotely manage services, programs and desktop configurations. You need administrative permissions to the desktops, but PowerShell can be scheduled to clean up user services and detect any malware running on a machine.

PowerShell works with the Windows API, so it can easily delete, start, and add a service to a desktop. Administrators can even use it to edit local services each time the user logs into the domain. PowerShell cmdlets efficiently reduce management time when the administrator needs to deploy or edit services on several remote machines.

Advantage: PowerShell

Audit Network Desktops and Servers

PowerShell can scan a list of computers and audit them. The administrators can get a list of programs installed, services running on the system, and the machine’s information such as CPU or serial number. Windows services can do the same, but you’d need to feed the service a list of computers unless you want it to scan the network. The problem with network scanning is that it can kill your bandwidth.

When you want to audit machines for licenses and program usage, it’s best to use PowerShell.

Advantage: PowerShell

Reboot Frozen Processes

Problematic services on a Windows server sometimes freeze without any warning. These services could be critical to network operations. You can write a PowerShell cmdlet to automatically restart the frozen service and avoid any costly downtime.

It only takes one line of code to restart a service in PowerShell, which makes it more efficient for this administrative task.

Advantage: PowerShell

Conclusion: Windows or PowerShell?

Both of these options have their own benefits. If you need simple network automation, go with PowerShell. If you need more complex services such as data transformation or synchronizing with external APIs, you should use Windows services. Just remember to test your code before you deploy it, because both of these solutions can have a powerful effect on your network.

Updated: January 2019

Apr 7, 2016

News Roundup: GDC 2016

Ansley Kilgore

That’s a wrap for GDC 2016! We were back at this year’s conference in San Francisco to learn about the state of the industry and listen to developers discuss their online infrastructure needs.

Just like last year’s event, we hosted our own session: “Bare Metal Cloud: The Key to Delivering Online Gaming Worldwide,” which was presented by Todd Harris from Hi-Rez Studios, an Internap customer. Be sure to look for it on the GDC vault in a few weeks.

The most notable names in the tech and games industry lined the show floors, offering product and tech demonstrations, networking and recruitment opportunities. But the main story and focus at the week-long conference was virtual reality. Here is a round-up of newsworthy articles to keep you in the loop on highlights of this year’s show:

GDC 2016: PlayStation VR Release Date and Price Confirmed by Sony

At a special event held alongside GDC, Sony announced the price and release date for PlayStation VR – the company’s virtual reality headset for PlayStation 4. PlayStation VR will cost $399 and will launch in October 2016.
Read this article.

The Witcher 3 Wins Top Prize at GDC 2016 Game Awards

The Witcher 3: Wild Hunt received the Best Technology award and the overall Game of the Year award at the 2016 Game Developers Choice Awards. The epic fantasy RPG beat out competition from Fallout 4, Metal Gear Solid V, Bloodborne and Rocket League.
Read this article.

Twitch GDC 2016 Announcements: Developer Success and Stream First Showcase

At this year’s Game Developers Conference, Twitch, the world’s leading social video platform and community for gamers, revealed its new focus on Developer Success and Stream First, which promise to enhance the streaming experience for both developers and broadcasters. Stream First is a new service that provides broadcasters and their viewers with a deeper level of engagement.
Read this article.

Market Report: GDC 2016

The 2016 Game Developers Conference was packed with news, gameplay footage, reveals and surprises. From VR and HMDs to F2F and awards, read this article for the top GDC highlights.

Apr 6, 2016

BANDAI NAMCO Entertainment America Inc. accelerates its infrastructure with Internap’s colocation services

Ansley Kilgore

Internap’s ability to provide game publishers a dependable range of bare-metal, cloud and colocation services, along with Performance IP™, enables developers to find the ideal environment in which to test, develop and launch their interactive games.

Most recently, BANDAI NAMCO Entertainment America Inc. announced it is using Internap’s colocation and Performance IP services to optimize network operations at its new location in Santa Clara, California. BANDAI NAMCO Entertainment America Inc. is a prominent name in the gaming industry, delivering products and services for today’s multi-platform gaming audience. The company is best known for some of the industry’s most iconic video games, including PAC-MAN®, Dark Souls® and TEKKEN®.

With its relocation from Northern California to a new facility in Santa Clara, BANDAI NAMCO Entertainment America Inc. needed Internap’s scalable, high-density infrastructure to support its demanding game testing and development environment. This infrastructure allows BANDAI NAMCO Entertainment America Inc. to utilize up to 18kW of power per rack – without consuming additional floor space – to cost-effectively meet future workload demands.

Internap’s colocation services feature concurrently maintainable designs to eliminate single points of failure and ensure reliability, as well as on-site data center engineers, advanced security features and remote management self-service tools. Backed by a 100% uptime guarantee, Internap’s global Performance IP™ connectivity service with patented Managed Internet Route Optimizer™ (MIRO) technology delivers the ultimate player experience by evaluating available networks in real time and routing web traffic over the optimal path.
