
Jun 29, 2016

Understanding Microsoft Deployment Toolkit: Introduction and Cost Benefit Analysis

INAP

Microsoft Deployment Toolkit (MDT) is a software package primarily used to deploy images to a large number of physical machines. Unlike its big brother, System Center Configuration Manager (SCCM), MDT is a free product from Microsoft and is relatively simple to use. It runs on a dedicated Windows Server (physical or virtual) and uses a set of VBScript-based task sequence scripts to execute instructions on the target machine.

In this post, I’ll briefly define the scope and capabilities of MDT, as well as summarize its benefits and potential costs.

MDT Image Deployment

Image deployment in MDT is defined by task sequences, which are sets of instructions that tell the program how to create or deploy an image. For example, a task sequence can be created that deploys an operating system and performs Windows updates on a reference virtual machine. That image can then be captured using a “Sysprep and Capture” task sequence. The image is captured in the standard Windows Imaging Format (WIM), at which point the fully patched image can be deployed to a target physical machine. When the system administrator is ready to deploy an image to a target machine, he or she writes the LiteTouch ISO to a USB stick (or mounts it on a virtual machine) and boots to that media. This installation media contains a Windows Preinstallation Environment (WinPE), which is used to push an image to the target machine without interference from the underlying operating system.

When the reference virtual machine has no third-party applications installed on top of the operating system, the result is known as a “thin” image. Sometimes a company has already adopted a set of standard applications for a specific department, in which case a “thick” image may be more useful, since it would already contain those applications on top of the fully patched operating system. A hybrid approach is to install applications after the operating system is installed and patched: applications can be added to the Deployment Workbench (an MMC snap-in) on the MDT server and then added to a task sequence. MDT 2013 introduced a new task sequence type known as a “Post OS Installation Task Sequence.” For physical machines that already ship with a fully patched operating system, a Post OS Installation Task Sequence can be used to install third-party applications unattended.

Benefits of MDT

Now that we have a basic understanding of the inner workings of MDT, let’s dig into some of the benefits.

The primary advantage of adopting MDT is simply that it is a free product backed by Microsoft. Product support comes by submitting questions to Microsoft’s TechNet forums, where they will be answered by qualified MDT specialists. Phone support is available at additional cost but should not be necessary. Because this is a Microsoft product, it is practical for any Windows-based IT shop. Its integration with Hyper-V and VMware virtual machines is straightforward, although ease of use favors Hyper-V, since it is a Microsoft product and has more direct integration with WIM and VHDX image files.

Another perk: Ongoing maintenance is minimal. Once the MDT environment is set up (hardware drivers and applications imported, task sequences created, and reference virtual machines configured), the system administrator should only have to patch the virtual machines on occasion, and updates to applications will be infrequent. Note, however, that adopting thick images creates additional complexity, since it requires numerous reference virtual machines and the maintenance of applications on those machines. For this reason, Microsoft consultants generally recommend the thin or hybrid image approach.

Finally, it’s important to note that hardware drivers are often provided by major notebook and desktop vendors such as Lenovo, Dell and HP. These drivers are wrapped in CAB files that can be imported directly into the Deployment Workbench, and they are maintained by the vendors and updated frequently. Moreover, MDT allows for dynamic driver selection: it queries the target physical machine for its make and model, then deploys the hardware drivers for that specific model (for example, a Dell Latitude D430) to the target machine.
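
To make that concrete, MDT’s gather step reads the machine’s manufacturer and model from WMI before matching them to a driver folder. Here is a rough Python sketch of that same lookup, run on a Windows machine purely to illustrate the mechanism; the MDT01 deployment share path and folder layout are hypothetical stand-ins for whatever your environment actually uses.

```python
import subprocess

def get_make_and_model():
    """Return (manufacturer, model) as reported by WMI on a Windows machine."""
    output = subprocess.check_output(
        ["wmic", "computersystem", "get", "manufacturer,model", "/format:list"],
        text=True,
    )
    values = dict(
        line.split("=", 1) for line in output.splitlines() if "=" in line
    )
    return values.get("Manufacturer", "").strip(), values.get("Model", "").strip()

if __name__ == "__main__":
    make, model = get_make_and_model()
    # Hypothetical layout: one driver folder per make\model under the deployment share.
    driver_folder = rf"\\MDT01\DeploymentShare$\Out-of-Box Drivers\{make}\{model}"
    print(f"Detected {make} {model}; drivers would come from {driver_folder}")
```

In MDT itself this mapping is handled by the deployment share’s driver folder structure and selection profiles rather than by a script you write, but the underlying make/model query is the same.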

Potential Costs of MDT

While MDT itself is free, using it will incur some indirect costs. Perhaps the biggest of these is the time required to implement MDT successfully. Many companies choose not to adopt MDT simply because the learning curve of any new technology is deemed too steep. Many of today’s IT shops are stretched very thin and simply do not have the time, manpower or resources to adopt a new methodology.

Speaking of resources, the product still requires at least one physical server and one Windows Server 2012 license to get up and running. Furthermore, additional reference virtual machines will add license costs, although these will be minimal in a thin image scenario.

Another caveat to consider is that MDT is generally used in small- to medium-sized environments (fewer than 500 machines). While it can be used in larger environments, the LiteTouch interface can eventually become cumbersome, especially in a large virtual desktop environment. MDT, unlike the paid product SCCM, is not a zero-touch deployment product: it requires the user to click through a “wizard,” hence the name LiteTouch. While this may be seen as more of a limitation than a cost, it will eventually cost the system administrator time, since deployment takes longer as the number of target machines increases.

The Verdict

Overall, the benefits of adopting MDT far exceed the costs in most cases. MDT is a product roughly 10 years in the making and has the backing of Microsoft. It is also important to note that while successful adoption is gratifying, getting there will require a great deal of determination, patience and willingness to explore new areas. For organizations with limited IT staff, it might make sense to bring in a consultant to help strategize and plan the implementation of MDT.

Jun 28, 2016

How Do I Run a Malware Scan on My Server?

INAP

Keeping your server free of malware is a necessity. As businesses are more “plugged in” than ever, a secure transfer of data between client and server is critical to most operations. That’s why running a malware scan on your cloud server is an important part of your overall security strategy.

What to know before running a malware scan

First and foremost, it’s important to understand that these scans can take time, and how long depends on the system and the state it’s in. You’ll also need to identify whether you’re scanning a Windows or Linux system, because the procedure differs for each:
– For Windows, use Malwarebytes Anti-Malware.
– For Linux, use Maldet (Linux Malware Detect) and findbot.pl.

Make sure you have backups of any directory you’re scanning, in case you have to remove infected files in production. Always scan the directories that have content facing public internet users, as well as directories that are writable by your web app’s user accounts. Remember that scans may not remove every infection. If a typical malware scan doesn’t get everything and you need a deeper investigation, open a support request.

Running a malware scan on your computer and server

To run a successful scan, follow these steps (a scripted sketch of the Linux workflow appears after the list):

1.) To get started, download the proper malware scanning program for Windows or Linux, as noted above.

2.) Take a backup of any important files in case they’re removed.

3.) Run the malware scan against your web directories as indicated by the software’s manual. Also, ensure that you choose options that will quarantine or remove any detected items. Some software will only scan and report detections, but not remove them unless you choose to do so.

4.) Review your site directories. Are there any unusual files that you don’t recognize? Those files could potentially be from malware.
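
For the Linux case, steps 2 and 3 can be scripted. The sketch below is a minimal example, assuming maldet is already installed and on the PATH; the web root and backup paths are placeholders you would swap for your own.

```python
import subprocess
import tarfile
import time
from pathlib import Path

# Minimal sketch of steps 2-3 on a Linux server: back up the web directory,
# then run a maldet scan against it. Paths are placeholders; adjust for your server.
WEB_ROOT = Path("/var/www/example.com/public_html")   # directory facing public users
BACKUP_DIR = Path("/root/scan-backups")

def backup(target: Path) -> Path:
    """Create a timestamped tarball of the target directory before scanning."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    archive = BACKUP_DIR / f"{target.name}-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(target, arcname=target.name)
    return archive

def scan(target: Path) -> None:
    """Run maldet against the target directory (assumes maldet is installed)."""
    # -a (--scan-all) scans every file under the given path.
    subprocess.run(["maldet", "-a", str(target)], check=True)

if __name__ == "__main__":
    print(f"Backup written to {backup(WEB_ROOT)}")
    scan(WEB_ROOT)
```

After the scan finishes, review maldet’s report and quarantine anything it flagged (for example with maldet -q and the scan ID), then confirm the site still works before deleting anything for good.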

Common mistakes when running a malware scan

If a malware scan isn’t performed correctly, your cloud server could be in danger. Don’t assume that a scan will be the only action needed; in many cases there will be follow-up steps to take in order to resolve the issue. Make sure you perform the scan with quarantine or removal flags enabled. Otherwise, if the scan detects malware, you’ll have to go back and confirm the removal of each flagged file yourself.

Scanning takes time, because the software has to investigate the contents of every file it covers. If you need to address a compromise quickly, limit the scan to the affected domain’s directory, or to the directories that contain website files, rather than scanning every file on the server.

Sometimes malware is written into site files that are still able to perform their original functions. This is tricky, because the scan may remove these critical files along with the infection. That’s why you should keep backups in case the scan removes an important file over a compromise.

Jun 22, 2016

Page Speed

INAP

Mike McGuire: I help big data, enterprise, ad-tech and e-commerce clients resolve slow page loads, traffic management problems, runaway costs and a number of other IT headaches. Most of the time, the clients I speak with are unaware of the solution that’s out there.

In this post, I’ll share how I helped resolve slow page load times.

The Problem – Slow Page Load Time

It’s probably safe to say that all e-commerce folks grasp the concept that slow page loading is the kiss of death for website sales and revenue.

There are some excellent articles written on how to accelerate site performance, accelerate content delivery, reduce web page resource requirements, compress images, reduce time-to-first-byte and help gain some speed or the perception of speed.

Read: Page Speed from Moz and 10 Ways to Speed Up Your Website from CrazyEgg.

What? You’ve done all of that? But it just isn’t enough.

Read on.

According to a recent study published by Radware, the median e-commerce web page now contains 99 resources (images, CSS, scripts, etc.), making pages bigger than ever rather than leaner.

These resources create latency. Funny thing about latency… you can’t fix latency with bandwidth, but you CAN effectively fix bandwidth with latency, since lower round-trip times mean higher achievable throughput. You can tweet that.
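
Here’s a quick back-of-the-envelope calculation in Python to show why. The numbers are hypothetical (80 ms round trips, 20 KB average resource size, six parallel fetches), with the 99-resource figure borrowed from the Radware study above:

```python
# Back-of-the-envelope illustration of the latency vs. bandwidth point.
RESOURCES = 99                         # resources on the median e-commerce page
AVG_SIZE_BITS = 20 * 1024 * 8          # 20 KB per resource (hypothetical)
PARALLEL_FETCHES = 6                   # typical per-host browser connection limit
RTT_SECONDS = 0.080                    # 80 ms round trip (hypothetical)

def page_fetch_time(bandwidth_bps: float) -> float:
    """Rough fetch time: one round trip per batch of requests plus transfer time."""
    batches = -(-RESOURCES // PARALLEL_FETCHES)        # ceiling division
    round_trip_cost = batches * RTT_SECONDS
    transfer_cost = RESOURCES * AVG_SIZE_BITS / bandwidth_bps
    return round_trip_cost + transfer_cost

for mbps in (10, 100, 1000):
    print(f"{mbps:>5} Mbps -> {page_fetch_time(mbps * 1_000_000):.2f} s")
# Past a point, extra bandwidth barely helps; the fixed round-trip cost dominates.
```

Throwing more bandwidth at the page shrinks the transfer portion, but the stack of round trips stays put; only a lower-latency path reduces it.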


Cisco, Riverbed and others have gotten very good at reducing the size of the data being transmitted, reducing the need for retransmission and streamlining the conversations between routers, all in order to cut transmission time and latency. Meanwhile, market prices for bandwidth have plummeted, and clients have built networks using multiple carriers.

But here’s the thing.

Once your traffic leaves your router and enters that glorious World Wide Web, it is ruled by an old, static set of tables fondly known as the Border Gateway Protocol, or BGP.

BGP is the Google Maps of the internet. It is the ultimate address matrix, using predefined routing tables to determine the shortest path (measured in network hops, not milliseconds) that data should take between any two points (and then the next two points, and so on). Which is fine, except that BGP was never designed to address performance, nor does it attempt to.

So what happens – and it happens a lot – is that BGP takes all that wonderfully accelerated website data and sends it down the wrong path. Well, maybe not the wrong path but certainly not the best performing path with the lowest latency.

The solution is simple. Use the fastest path available. All the time.

Backed by 20 U.S. patents, and scalable from 1G to 80G+, Internap’s Managed Internet Route Optimizer (MIRO) Controller is an intelligent routing device that solves the latency problem and does it automatically up to a million times per hour.

Yes, automatically, as in NO network engineering or consulting hours. No human error. And, with an average ROI of about nine months, it will even save you money while it does it.

MIRO Controller vastly improves IP network performance. It guarantees that your traffic will use the fastest path available within your network, for faster page loads that translate into more click-throughs, cart loads, checkouts and revenue. Your routers need to support flow export (NetFlow, J-Flow or IPFIX) and the BGP Additional Paths feature. If they do, you’re good to go.

Jun 15, 2016

The Basics of Cloud Economics: Why The Wrong Pricing Model Could Cost You

INAP

The public cloud, so goes the conventional wisdom, is the least expensive option for deploying cloud computing environments. While perhaps true in some cases, the claim carries a major caveat—namely, it depends entirely on how those resources are provisioned.

The question of provisioning is critical because, unfortunately, designing public cloud environments at major providers can be tortuously complex to laypersons and IT pros alike. Regardless of provider, users will navigate countless service options and multiple ways to pay for them, which too often results in deployments that inflate costs and render the conventional wisdom obsolete.  

Simply put, there’s still much about “the cloud” and how people pay for it that needs demystifying. So let’s start that process by comparing the basic economics of two popular pricing models  —  one most commonly used in major public clouds and one typically associated with custom-built private clouds.  

The Instance-Based Pricing Model and Public Cloud

A common way companies provision public cloud computing resources follows what I like to call the Instance-Based Pricing Model. In this model, you spin up a public cloud instance for an application, pay up-front for a discounted rate and reserve the instance’s computing capacity for a specified period of time. You choose among a list of fixed, pre-configured instances defined by resource parameters  —  typically CPU, RAM, and storage.  It’s the opposite of pay-as-you-go models, where you’re only charged for the resources you use at a higher hourly rate.  

The Instance-Based model (known at AWS as Reserved Instances) offers several benefits to public cloud customers, including the ability to scale workloads rapidly and flexibly (in case of sudden spikes in app usage), business continuity in case of a major data center failure, and predictable, discounted monthly rates.

However, the intersection of scale and predictable billing can lead to major inefficiencies if you’re not paying attention.

By default, the big public cloud providers require a new server instance for each additional application. It’s an ideal system for apps with unpredictable usage; those resources will be critical in the event of a spike.

But here’s what many public cloud users don’t realize: the vast majority of applications will never need flexible workload scaling. In other words, most apps are what we call steady-state  —  their workloads are relatively predictable. A steady-state app running at 20 percent resource utilization every month for 12 months isn’t in the right environment.

Underutilization, it turns out, is a way bigger problem than you might suspect. In fact, every year applications use less and less of the total capacity of a single server. Analysts at Gartner, Accenture and McKinsey estimate that anywhere from 88 percent to 94 percent of public cloud capacity is wasted.

Companies end up paying for all of that waste, which is essentially the equivalent of renting a box truck every time you make a trip to the grocery store. You may think 1,000 cubic feet of storage is warranted – just in case – but an SUV or hatchback will invariably get the job done, with room to spare for the occasional overspend in the frozen pizza aisle.  

Whether they know it or not, public cloud users in Instance-Based models are nudged toward the box truck. Underutilization isn’t a bug; it’s a feature. In some cases users don’t take the time to find the right-sized vehicle, or they overestimate their resource requirements; more likely, their requirements leave them stuck between two suboptimal options, forcing them to choose the larger, more expensive model.

Say you have a high-performance web server requiring 4 vCPUs and 9 GiB of memory. AWS’s c4.xlarge instance (4 vCPUs and 7.5 GiB) is too small, but the next size up, the c4.2xlarge, gives you 8 vCPUs and 15 GiB of memory. You end up settling on the latter at double the cost of the smaller option.

Now multiply that scenario across an entire organization and imagine what happens to the company’s monthly cloud bill!
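
For the curious, here’s that math in a few lines of Python. The hourly rates are hypothetical placeholders (check current AWS pricing for real numbers); the vCPU and memory figures are the ones quoted above:

```python
# Quick arithmetic for the sizing example above. Rates are hypothetical placeholders.
need_vcpu, need_mem_gib = 4, 9

instances = {
    # name: (vCPUs, memory GiB, hypothetical $/hour)
    "c4.xlarge":  (4, 7.5, 0.20),
    "c4.2xlarge": (8, 15.0, 0.40),
}

HOURS_PER_MONTH = 730

for name, (vcpu, mem, rate) in instances.items():
    fits = vcpu >= need_vcpu and mem >= need_mem_gib
    utilization = min(1.0, need_mem_gib / mem) if fits else None
    monthly = rate * HOURS_PER_MONTH
    status = f"fits, ~{utilization:.0%} of memory used" if fits else "too small"
    print(f"{name:<11} ${monthly:>6.2f}/month  ({status})")

# The workload is forced onto c4.2xlarge at roughly double the monthly cost,
# while using about 60% of its memory and half of its vCPUs.
```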

Pre-configured instance sizes in public clouds recreate the problem of over-allocation we saw with physical servers 15 years ago: a marginal increase in resources today requires jumping from one instance size to the next, much like upgrading from one server model to another in days past.

This phenomenon marks a major step back in efficiency. Virtualization was supposed to eradicate this problem, and fortunately, it still can. It just takes a different model.

The Resource Pool Pricing Model and Private Cloud

Hosted private cloud environments typically follow a Resource Pool Pricing Model. In this system, applications and workloads run on virtual machines (VMs) that share a clustered pool of server capacity. It differs from the Instance-Based Model in that new servers are added only when the pool’s available capacity is consumed.

While the cost is still tied to the amount of capacity reserved, the environments are custom-sized to workload resource requirements and do away with the 1:1 app-to-server instance logic that leads to rampant underutilization. It’s sort of like always having the right car in the garage for the task at hand.  

In short, Resource Pool models, if approached deliberately via a rigorous assessment, can considerably lower costs and improve infrastructure manageability by limiting the number of machines required to run steady-state apps. At INAP, we’ve found that converting customers from the Instance-Based to the Resource Pool model can result in savings of up to 70 percent. If that figure sounds lofty, remember what the analysts found: public cloud over-provisioning is getting out of hand. Major savings are easily discoverable when the average public cloud server uses just 6 to 12 percent of its resources.
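
As a rough illustration of where savings like that come from, here’s a hypothetical consolidation calculation. None of the inputs below are INAP or analyst figures; they’re just plausible stand-ins consistent with the utilization range cited above:

```python
# Rough consolidation math behind the savings claim above. All inputs are hypothetical.
instances = 40                 # steady-state apps, one public cloud instance each
vcpu_per_instance = 4
avg_utilization = 0.10         # roughly in line with the 6-12% range cited above
pool_headroom = 0.30           # keep 30% spare capacity in the private pool

vcpu_paid_for = instances * vcpu_per_instance
vcpu_actually_used = vcpu_paid_for * avg_utilization
pool_vcpu_needed = vcpu_actually_used * (1 + pool_headroom)

print(f"Instance-based: paying for {vcpu_paid_for} vCPUs, using ~{vcpu_actually_used:.0f}")
print(f"Resource pool:  ~{pool_vcpu_needed:.0f} vCPUs sized to actual demand plus headroom")
# Even with generous headroom, the pool is a fraction of the capacity being rented.
# Actual dollar savings depend on the per-unit pricing of each model.
```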

That’s not to say selecting the model that best fits your business is an either-or decision. There are plenty of valid use cases for the public cloud and the Instance-Based Pricing Model, and every day we help customers manage such deployments. After all, in the era of hybrid IT, it’s unlikely you’ll ever have a single home for your servers and critical applications.

In future posts, I’ll highlight a few real-world examples to help you select the right cloud solution and pricing model for your computing needs, and our solutions architecture team will walk through INAP’s Resource Pool sizing assessment in more detail. Or, if you’d rather not wait, contact us now to get started with a free assessment.

Updated: January 2019

Jun 9, 2016

Three Redundancy Pitfalls to Avoid When Getting Started with AWS

INAP

So, you’re thinking about moving your application to the public cloud and leaning toward Amazon Web Services (AWS) as the best fit?

Well, that is no surprise. More and more businesses are looking to leverage the powerful tools developed by our friends in Seattle. As AWS approaches the $10 billion revenue mark this year, the popularity of the platform is unlikely to change anytime soon.

While on the surface AWS appears easy enough to set up and manage, there are countless deployment decisions you must make to ensure mission-critical apps can withstand a severe outage. The considerations and process for moving to the public cloud should be just as rigorous as for any major change to your IT strategy. With that in mind, here are three common pitfalls you’ll want to avoid.

Pitfall #1: Assuming your Cloud Environment is Redundant  

First, don’t make the mistake of thinking that just because something is in the “cloud,” it is automatically redundant. Although the physical hardware underlying your EC2 instance is redundant, you still need to maintain at least two copies of each EC2 instance via load balancing or a primary/replica setup.

Pitfall #2: Running your Critical Apps in a Single Zone

Second, you need to be multi-zone and, if your budget permits, multi-region. Even if you have primary and replica EC2 instances for your application, they will do you no good unless they are in different zones.
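
To make this concrete, here’s a minimal boto3 sketch that launches a primary and a replica instance into two different Sydney-region Availability Zones. The AMI ID is a placeholder, and a production setup would more likely use an Auto Scaling group or a load balancer spanning the zones:

```python
import boto3

# Minimal sketch: launch a primary and a replica EC2 instance into two different
# Availability Zones. The AMI ID is a placeholder.
ec2 = boto3.client("ec2", region_name="ap-southeast-2")

AMI_ID = "ami-0123456789abcdef0"   # placeholder
ZONES = ["ap-southeast-2a", "ap-southeast-2b"]

for zone in ZONES:
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="t2.medium",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": f"web-{zone}"}],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id} in {zone}")
```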

Are zone-wide outages really something that happens with AWS? Absolutely.

Last Sunday, an AWS data center outage in one of the Sydney, Australia availability zones (AZs) caused significant downtime for many major “Down Under” web entities, including Foxtel Play (a popular video streaming service) and Channel Nine (a leading television and entertainment network). To complicate matters, API call failures even prevented customers with multi-zone redundancy from staying online, meaning only multi-region customers or customers with redundant infrastructure outside of AWS were safe.

The CIO of REA Group, an Australian digital advertising firm that weathered the outage, provided this on-point advice and insight in ITNews.com.au:

“Multi AZ and ultimately, multi-region, with some smart architecture for deployment is key to cloud resilience today . . . We learned a lot. Power failure is a tough event for anyone to suffer, and we have an A-team of engineers. Others will be learning different, tougher lessons about good AZ management.”

 

Pitfall #3: Failing to Back Up Storage Volumes

Third, make sure you plan for EBS (Elastic Block Store) failures. AWS best practices will tell you to store all your data on EBS volumes, which allows you to easily move those volumes to new hardware should that hardware fail. While that is sound advice, you also need to account for the possibility of a failure of the EBS system itself.

To prepare for that scenario, keep backups or copies of your data on local (ephemeral) storage. That way, if the EBS system fails, you will still be able to restore your application and stay within your disaster recovery plan.
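
One complementary safeguard (not a replacement for the local copies described above) is to snapshot your EBS volumes on a schedule. A minimal boto3 sketch, with placeholder volume IDs, might look like this:

```python
import boto3
from datetime import datetime, timezone

# Minimal sketch: snapshot a set of EBS volumes on a schedule. Volume IDs are
# placeholders; snapshots are stored in AWS-managed (S3-backed) storage, so they
# complement, rather than replace, copies kept on local ephemeral disks.
ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_IDS = ["vol-0123456789abcdef0", "vol-0fedcba9876543210"]  # placeholders

for volume_id in VOLUME_IDS:
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    snapshot = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"Scheduled backup of {volume_id} at {stamp} UTC",
    )
    print(f"Started snapshot {snapshot['SnapshotId']} for {volume_id}")
```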

If All Else Fails

Lastly, it’s wise to maintain an “if all else fails” option. If you do encounter an issue where all of your load balancing and redundancy measures fail, it is helpful to have a mechanism that kicks in and displays an error message on your web page or application. This can be done quite easily by leveraging Route 53 DNS failover and Amazon CloudFront.
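
As a sketch of how that mechanism fits together, the boto3 snippet below upserts a primary/secondary failover record pair in Route 53. The hosted zone ID, health check ID and IP addresses are placeholders, and a real fallback would often point at a CloudFront- or S3-hosted error page instead of a static IP:

```python
import boto3

# Minimal sketch of a Route 53 failover record pair: a PRIMARY record tied to a
# health check, and a SECONDARY record pointing at a static fallback host that
# serves an error page. Zone ID, health check ID and IPs are placeholders.
route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000000000"                     # placeholder
HEALTH_CHECK_ID = "00000000-0000-0000-0000-000000000000"    # placeholder

def failover_record(identifier, role, ip, health_check_id=None):
    record = {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": identifier,
        "Failover": role,                     # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        failover_record("app-primary", "PRIMARY", "203.0.113.10", HEALTH_CHECK_ID),
        failover_record("app-fallback", "SECONDARY", "203.0.113.99"),
    ]},
)
```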

Moral of the story: Plan wisely when getting started in AWS — just as you would with any of your key infrastructure.   

Updated: January 2019

Jun 3, 2016

Tulix Systems Trusts Internap to Enhance its Network Performance

Ansley Kilgore

Tulix Systems is a premier global video delivery company that serves a broad spectrum of customers by handling the intricacies of live streaming. Tulix provides end-to-end streaming for live and on-demand video to any screen, as well as a comprehensive set of tools for broadcasters, content providers and aggregators, all helping them deliver and monetize their content.


Tulix recently announced that it is using Internap’s Managed Internet Route Optimizer (MIRO) Controller to manage and speed up the video streaming services that content providers use to reach their audiences around the world. MIRO Controller will allow Tulix to enhance network performance by managing even the heaviest traffic in the best possible way to ensure a great end-user experience.

Download the Case Study

Internap’s MIRO Controller will enable Tulix to more efficiently balance network capacity across its multiple networks while reducing bandwidth costs.

Backed by more than 15 patents, MIRO Controller uses the same powerful, carrier-class software that underpins Internap’s terabit/second Performance IP connectivity, which optimizes traffic for its cloud and data center customers across 80 network access points and 15 data centers worldwide. Unlike other solutions, which optimize traffic every few hours, MIRO Controller optimizes network routes every 90 seconds and is capable of making up to 1 million path adjustments per hour, resulting in fastest-path routing more than 99% of the time.
