Jul 27, 2017

Popular Multifactor Authentication (MFA) Options and How They Work  

INAP

Multifactor authentication is not new; however, we're entering a time when it is finally being widely adopted. Multifactor authentication (MFA) is a simple, convenient and effective mechanism for stopping hackers in their tracks. So for this post, let's talk about the value of multifactor (often called two-factor) authentication and some of the MFA options out there.

The Value of Multifactor Authentication

When you log into your email or Facebook account, you are prompted for your username or email and password. And sometimes, you hear stories of a hacker getting hold of someone's credentials and accessing their account. When you throw MFA into the equation, you now require a secondary code or extra method to gain access to that account. So, if a hacker does obtain your credentials and attempts to break into your accounts, they'd still need to complete that extra step. Most services these days, from social media sites to banking apps, allow you to turn on MFA if you look into the settings menu of your accounts.

Types of Multifactor Authentication

SMS Authentication

One of the most common, but somewhat outdated, methods for receiving an authentication code is to register your phone number with the service you are attempting to secure. In doing so, any time there is a login to your account (using your credentials), access is halted until that person also enters the code that was texted to your phone. This is effective because it requires the hacker to have access to your phone to retrieve that code; otherwise, the attacker is stopped in their tracks, with no way into your account.

While still used on many services, SMS two-factor is considered an outdated method due to vulnerabilities found in SS7, the carrier signaling protocol used to route our calls and texts.

Authenticator Apps

Most major tech and gaming companies have proprietary authenticator apps. Instead of receiving the secret code via text, it's delivered to your phone over an encrypted channel. Then, all you have to do is retrieve that code from the app and enter it. Again, this requires an attacker to have access to your phone. So if a malicious actor on the other side of the world gets your credentials, they're still not going to be able to access your account. This is far more secure than utilizing the SMS method.
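Under the hood, most authenticator apps implement the TOTP standard (RFC 6238), deriving a short-lived code from a shared secret and the current time. Purely as an illustration (and not how any particular vendor's proprietary app works), you can generate a TOTP code on a Linux box with the oath-toolkit utility; the secret below is made up:

# Generate a 6-digit TOTP code from a base32 secret (illustrative secret only).
# Install the tool first, e.g. "yum install oathtool" or "apt-get install oathtool".
oathtool --totp -b "JBSWY3DPEHPK3PXP"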

Hardware Token

This method is quickly gaining popularity and widespread adoption. Tokens are physical keys that resemble tiny USB drives; YubiKeys are a popular example. The key stores the encryption algorithm used to generate the secret code you need to log in, and it activates with a simple tap. So, once you enter your username and password, you simply insert the key into your USB port and tap it. This makes tokens the hardest to crack of the three methods mentioned in this post, because the encryption key lives on a physical device you are holding, meaning an attacker would need physical access to your token. Major corporations, such as Google and Amazon, employ these keys for their employees and critical systems as well. Hardware tokens are becoming very cheap and viable options for the average consumer. Varieties also include smart cards that contain encrypted certificates, similar to what the Department of Defense uses.

The outlined methods are not rare, by any means. I urge everyone to spend an hour over the weekend sifting through all of their online accounts (social media, financial accounts, work accounts, etc.). I suspect that everyone will be pleasantly surprised to find that most information systems are ready to integrate with these effective MFA solutions.

Let’s keep hackers out of our hair, permanently.

Updated: January 2019

Jul 25, 2017

How to Use SSH Keys On Your Linux Server

INAP

One of the easiest and most important things you can do to secure a Linux-based server (other than to change the default SSH port to prevent automated bots from profiling your server in the first place!) is to enforce the usage of SSH keys as an authentication requirement across all of your Linux-based systems. Fortunately, not only is implementing one key for access across all servers pretty easy, but it’s extremely convenient, as well!

Before we get into the details, let’s cover why this is a critical step.

Imagine waking up one morning to find your data encrypted and an electronic ransom note holding your data, and with it your business, hostage. Security can unfortunately be an aspect of your business that's neglected until it's too late.

Working as the proverbial "eyes on the ground" at INAP, I tend to see the wider patterns and trends. Frequently, when I log into a managed client's server to troubleshoot an issue, I see SSH brute-force attacks, usually because the client allows logins as the "root" user instead of creating a "wheel" or "sudo" user, uses the default SSH port (22) and, most importantly, has not enabled SSH keys as a requirement to authenticate. Requiring an SSH key (with or without a passphrase) renders the other issues practically moot.

These brute-force attempts can cause a myriad of problems, the most important of which include excessive load, since your server has to handle every connection attempt, and, if an attempt succeeds, data theft or loss and root-level compromise (typically with an associated rootkit dropped to hide any malicious tools and maintain access). This long-standing attack vector has recently become even more effective thanks to distributed botnets and modified algorithms that guess common password formats (such as "1-2 words plus 3-7 numbers").

Steps for Setting Up SSH Keys

SSH keys are composed of two parts: a private key and a public key. Through the use of public key cryptography, you log in with the private key, and the server uses the public key to verify that you hold the matching private key.

I do encourage the use of passphrases when possible —  otherwise, anyone who has the key has access to your server.

 

1.  Generate a public and private SSH key pair

This can be accomplished on Windows platforms using a tool such as "PuTTYgen," but I find it's quicker and easier to do directly on your server!

To implement SSH key authorization on the "root" account only, there are two methods. If you have cPanel, the key will appear in WHM with either method, should you use WHM to assist in administering your server.

  1. If you’re using a cPanel-based server:
  • Log into WHM, and navigate to “Security Center > Manage root’s SSH Keys”
  • Click on "Generate a new key" and fill in the name and passphrase (if desired).
  • A secure key type would be RSA, and a good size would be 2048.
  • Click “Generate Key”
  • Head back to “Security Center” in WHM, and click “SSH Password Authorization Tweak”.
  • Verify that your Private key works (see below).
  • Hit “Disable Password Auth” to disable password authorization for root (your SSH key’s passphrase will still work of course!).
  2. If you're using any Linux-based server, you can also use the "ssh-keygen" command:
  • Log into your server as “root” or su to root using your wheel user
  • Run the following command for a key using RSA with 2048 bit key strength:

ssh-keygen -t rsa -b 2048 -f (key_name)

(Feel free to read the ssh-keygen manual page for full usage and options.)

The default directory for the root user “/root/.ssh/” is OK, so hit enter. Enter your desired passphrase, if any, and once more for confirmation.

Using the default name, your private key will be stored in “/root/.ssh/id_rsa”.

Grab the private key and keep it safe – “cat /root/.ssh/id_rsa” to copy and paste the key into a local file, or download the key using an SFTP client (then encrypt it!).  
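While you're at it, it's worth double-checking the permissions on the key files; the SSH client will typically refuse to use a private key that other users can read. A quick sketch using standard permissions:

chmod 700 /root/.ssh              # only root may enter the key directory
chmod 600 /root/.ssh/id_rsa       # private key readable and writable by root only
chmod 644 /root/.ssh/id_rsa.pub   # the public key can remain world-readable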

2.  Verify your SSH keys work.

On all platforms, before disabling password authentication, test your keys using PuTTY or another SSH tool (on Windows machines, we prefer Cmder; it's an excellent tool).

To log in using Cmder, open the application and run the following command:

ssh user@yourserver -i /location/of/privatekey


3.  Authorize the SSH key.

cat id_rsa.pub >> ~/.ssh/authorized_keys

Should you wish to use this key with multiple servers, download the public key as well. Append the public key to the "~/.ssh/authorized_keys" file of the appropriate user on your other servers, and your private key will be valid for that user; in this way, you can use one key pair for many servers and/or users. Combined with proper security in keeping the private key safe, this is a powerful, secure (and convenient!) way of administering multiple servers.
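If the other server still accepts password logins, ssh-copy-id can append the key for you; here's a minimal sketch, where the hostname is only a placeholder:

# otherserver.example.com is an illustrative placeholder.
ssh-copy-id -i /root/.ssh/id_rsa.pub root@otherserver.example.com

# Or append it manually:
cat /root/.ssh/id_rsa.pub | ssh root@otherserver.example.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"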

Then, feel free to delete the "id_rsa" file from the server once you've downloaded it (this is the key you'll be using to authenticate, so it's a bit more secure to keep it tucked away). I highly encourage the use of an encrypted USB drive to manage your SSH keys, especially if you don't plan to use a passphrase.

Be sure not to delete the ".pub" file until it has been imported into "/root/.ssh/authorized_keys," of course.

This will be your method of root-level authentication to the server from now on, so be sure to keep it in a safe place.

4. Disable password-based authentication for "root."

For any Linux-based server, edit your SSH config file (/etc/ssh/sshd_config) using "vi/vim" or "nano." Set these directives to the following values to allow every user except root to authenticate with a password:

PasswordAuthentication yes

PermitRootLogin without-password
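After saving the file, validate the configuration and restart the SSH daemon so the change takes effect; exact service commands vary by distribution, so treat this as a sketch:

sshd -t                        # checks sshd_config for syntax errors before applying it
systemctl restart sshd         # on systemd-based distros; older systems: /etc/init.d/sshd restart

# Keep your current session open and confirm key-based root login works
# from a second terminal before logging out.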

That’s it! You’ve now significantly increased your server’s resiliency to automated brute-force attacks. Be sure to keep your key secured; I keep my SSH keys on an encrypted USB drive on my keyring right alongside my car keys. They’re equally important tools.

 

Jul 17, 2017

Is It Time to Start Looking for an Alternative Cloud Provider?

INAP

When evaluating cloud solutions, how do you determine which one will best meet your requirements? Should you go with a brand name everyone recognizes? Have you outgrown mega-companies’ tiered customer support and vendor lock-in? Whether you are a startup developing cloud-native applications or your organization needs to adopt cloud technology in conjunction with legacy infrastructure, INAP’s cloud offers differentiation in terms of customer support, performance, cost, scalability and efficiency.

If you're a tech startup with a cool new idea and need some simple, cheap infrastructure, you may find yourself thinking, "Where can I find a good cloud provider for startups?" So off you go to a giant who offers cloud services as one of their many products: you pick a plan, give them your credit card information and bam! Instant access to cloud servers. Your developers are off on their merry coding way.

Many companies make this move and enjoy the benefits of trusting a recognizable brand, but for others it may be more like the old wives’ tale of how to boil a frog. The water is room temperature, but as the heat gradually increases so does the danger to the unsuspecting frog who can’t sense the changes.

Fast forward a year: with a little luck and a lot of sweat, the business starts to grow. You nailed a niche, and customers love it. Traffic increases, transactions per second grow, and with it so does your cloud footprint. Every month the cloud bill comes in and, yeah, it's getting more and more expensive, but you think, "We can manage, and, honestly, the focus is still on perfecting the technical side of the product, not the economic side."

You start to add features that require more infrastructure (storage services, DNS, maybe even some relational database services). But in the back of your mind there is that little voice asking, “Is it getting hot in here?”

All the while, you’re growing fast. You’ve hired more DevOps type staff, people who understand both your application and the infrastructure on which it runs.  In their eyes, infrastructure is code. You’ve invested countless hours and gobs of money training them on your cloud provider’s way. You’ve got serious investors and a solid customer base who are relying on you day in and day out.

Remember that voice in your head?  Now it’s screaming, “This is hot. This is HOT!”

One morning, in the not-too-distant future, Accounts Payable drops the hefty bill on your desk, and you see a number now quickly approaching six figures! At the same moment, your lead engineer hits you up on Slack™ to warn you that your cool yet cumbersome service is having more performance issues that require you to buy more instances to accommodate the slowdown. Then it hits you like that five-gallon bucket of ice water you dumped on your head last summer: you've hit the tipping point.

Despite careful planning, training and investment, you’re seeing issues all over the place:

  • Inconsistent application performance due to cloud oversubscription
  • Noisy neighbors slowing down your service
  • An ongoing struggle to meet performance expectations due to a lack of visibility and control over the cloud environment

You add and delete cloud instances, but ultimately it just becomes more instances, more services and bigger bills. Despite your best efforts, the latency issues for end users don’t go away, and the bills keep getting higher.

And then it happens… the service goes down.

Oh, and by the way, unless you're paying five figures per month, most major cloud providers only offer partial support. If you're at a lower rate, just keep quiet and watch the Twitter feed for updates like anyone else. By this point, it may be too late for the frog to jump out of the pot!

What’s your exit strategy?

Here at INAP, we hear this story all the time. We call many of our customers “graduates.” They are at the end of their rope with performance, support and billing issues and have come to us looking for a better way.

They have then run benchmark tests comparing us to the big-box cloud providers and found that our bare-metal servers and optimized IP transport technology outperform the cloud Goliaths at a fraction of the cost, while avoiding vendor lock-in as INAP’s Agile services are deployed using the open source OpenStack platform.

As a result, companies like AppLovin have significantly reduced operating costs and improved performance with INAP.

So, if you are feeling the heat from rising bills, a lack of support at your customer tier, or declining efficiency from the product you are paying for, get out of the pot before it gets too hot! Contact INAP to see how one of our services can help get you back to your comfort level so that you can focus on continuing to grow your business and profits.

Jul 17, 2017

The Right Way to Size Your Private Cloud Storage Environment

Paul Painter, Director, Solutions Engineering

Private cloud storage is an increasingly popular mechanism for securing data in virtualized environments, but knowing how to rightsize environments for current and future needs is one of the most important facets of a successful strategy.

In this post, I’ll briefly explain my process for sizing cloud storage environments and share with you the most important element of any solution.

The No. 1 Thing Your Private Cloud Storage Solution Needs

Once the metrics are collected and the capacity/performance requirements are identified, it’s time to process it all. Back in the day, this was a fairly manual process. While vendors did offer tools to size the storage arrays, they were, for the most part, not self-serve and only available to internal engineers and resellers.

Regardless, the logical question for both then and now is this: “What should we do if our storage requirements change or the workload changes its behavior?”

The answer most of us had back then? “We do a re-assessment and re-sizing and then propose a new storage solution.” In other words, not so useful.

The correct answer today? “The storage solution employed is adaptive, flexible, proactive and intelligent.”  

While, again, there's a plethora of blog posts highlighting the pros and cons of each competing storage vendor on the market, the above answer is non-negotiable. A service provider must be able to spell out how quickly and effectively they can react to workload changes or performance concerns.

These days in IT, it's a crime not to leverage data analytics to predict the behavior of storage solutions, which in turn allows you to address performance issues before they arise or, at the very least, brings them to your attention.

Service providers like HorizonIQ rightsize each and every storage solution. But we augment those initial assessments with technology that allows us to proactively account for unpredictable changes.

For our cloud backup environments, we use Nimble Storage's InfoSight analytics to flag workloads that fall into the categories of "estimated to exceed the defined boundaries" or "requires a performance or capacity upgrade."

Private Cloud Storage Sizing Examples

In Part 1, I identified the primary metrics needed to complete the sizing. These included data size, storage block size, bandwidth, IOPS and read/write ratio. After collecting the data, we can estimate the storage requirements. Take a look at two hypothetical workload examples below:

File server workload assessment results:

  • Data: 20 TB
  • Total IOPS: 8,000
  • Reads: 60%
  • Writes: 40%
  • Block size: 4 KB
  • Total Storage Throughput: ??

MS SQL workload assessment results:

  • SQL Data: 300 GB
  • SQL Logs: 50 GB
  • Total IOPS: 20,000
  • Reads: 60%
  • Writes: 40%
  • Block size: ranges from 512B to 8MB
  • Total Storage Throughput: ??

Clearly, we're missing our bandwidth numbers. For the file server, the calculation is easy:

Total number of IOPS * Block size = Total storage throughput for the workload type

8000 * 4KB = 32,000 KB/s or 32MB/s

(Please note, the total IOPS is assumed to be the "peak" number, which the workload is not expected to exceed.)

But how do we estimate it for the SQL workload, where the block size ranges? This is where your storage vendor tools come in handy. Using Nimble Storage SiteAnalyzer, there’s an option to select “real world block sizes seen for this application type” and leverage the appropriate ratios of different blocks seen in SQL environments.
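If you just want a rough sanity check alongside the vendor tool, you can apply the same formula with an assumed weighted-average block size. A minimal sketch, where the 16 KB average is purely illustrative and not taken from any real SiteAnalyzer output:

# Rough throughput estimate for the SQL workload.
# The weighted-average block size below is an assumption for illustration only.
IOPS=20000
AVG_BLOCK_KB=16
echo "Estimated throughput: $(( IOPS * AVG_BLOCK_KB / 1024 )) MB/s"   # prints 312 MB/s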

From here, all we have to do is plug the numbers above into the sizing tool. In this example, the recommended array type is CS3000 with 9.6TB of Raw SSD Cache and 25TB of usable capacity on spinning disks.

How does this address specific environment requirements like latency, e.g., "across 100 workloads, latency shouldn't exceed 10 ms"?

Thanks to the sizing tool, we can pick the configuration that estimates a 99 percent probability for cache hits (data served off SSD disks), which in turn ensures the fastest response for the application without forcing us to break the bank and acquire an all-flash solution.

Most importantly: should any of the assessed parameters change by the time the project goes live on the new storage array, we can easily collect all performance metrics on the array and proactively identify any potential performance or capacity bottleneck that requires attention.

Jul 13, 2017

How to Move Your Data from S3 to Redshift

INAP

Amazon Redshift is a powerful and fully managed data warehouse solution from AWS. It makes it extremely easy and cost-effective to analyze your data using standard Business Intelligence tools. But before we get into what Redshift can do for you, it is important to also say what it can't, or rather, shouldn't do for you. Redshift is not built to be an alternative to a standard database (MySQL, Postgres, etc.) and is not particularly good at:

  1. On demand, real-time updates/deletes
  2. Answering tens of thousands of concurrent queries in milliseconds
  3. Searching specific values by a pre-defined index
  4. Other, non-OLAP use cases

The key point to remember here: Redshift was built for Business Intelligence, so its concurrency is low and limited. Instead, it is wise to ingest data into the system in batches.

So in this post we are going to explore a simple example of getting your data into Redshift. In short, we'll set up a basic EC2 instance for SFTP that will allow users to upload the data they want to put into Redshift. From there, we'll transfer the data from the EC2 instance to an S3 bucket, and finally, into our Redshift instance.

Create S3 Bucket

Let’s start by creating the S3 bucket. After logging into your AWS account, head to the S3 console and select ”Create Bucket.”

Create IAM Policy

Next we are going to create an IAM policy that allows our EC2 instance  to upload to the S3 bucket. We do this by going to the IAM section of the AWS console and clicking on the policies link.

Next, we create a custom policy and choose the following:

  •       AWS Service: Amazon S3
  •       Actions: PutObject, PutObjectAcl, ListAllMyBuckets
  •       Amazon Resource Name (ARN): arn:aws:s3:::redshifts3bucket01 (the ARN will always be arn:aws:s3:::BUCKETNAME)

 

Next we will name the policy and set a description. We’ll also be able to view the JSON version of the policy, which will look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1498839988000",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::redshifts3bucket01"
      ]
    }
  ]
}

From here, we need to create a Role. In the IAM console, click on "Roles" and then click "Create New Role." Select the Amazon EC2 option and then type in the first few letters of what you named your policy. Click the checkbox next to the name.

Click "Next," name your role, and click "Create."

Spin Up EC2 Instance

Now we are going to spin up our EC2 instance. Head to the EC2 section of the AWS console and choose to launch an instance. Select the Amazon Linux AMI and the t2.micro free tier.

On the “Configure Instance Details” page, specify your VPC and subnet and also select the option for assigning a public IP. Be sure to select the role you just created for the instance in the “IAM role” field.

You can leave the defaults in place for the "Storage" and "Tags" pages or make any changes you would like. On the "Configure Security Group" page, we will need to create a new group and allow the following:

  •       SSH, Port 22 – 0.0.0.0/0
  •       Custom,  Ports 1024-1048 – 0.0.0.0/0

Click “Review” and then  “Launch,” at which point you will need to either use an existing keypair or create a new one.

Create Redshift Cluster

While waiting for the instance to spin up, we will create our Redshift cluster. Go to the Redshift page in the AWS console.

First, if you are not using the default VPC and do not have a subnet group, you will need to create one. You can do so by clicking "Security" on the Redshift console page and following the steps.

Now click “Launch Cluster” on the Redshift console page:

Provide the cluster details:

  •       Cluster Identifier: redshifts3
  •       Database name: redshifts3
  •       Port: 5439
  •       Master User name: masteruser
  •       Password: ##########

Next select the size of the node and other cluster parameters. In this case, let’s just leave the default.

Continue to the review page and then click “Launch Cluster.”

Next, choose the correct options for your VPC, Subnet group and Security group.

Configure EC2 Instance

Next up, it's time to log in to and configure the EC2 instance we just created. Connect to the instance using SSH; once logged in, type the following commands:

sudo -i

yum update -y

yum install vsftpd

chkconfig --level 345 vsftpd on

vi /etc/vsftpd/vsftpd.conf

Change:

anonymous_enable=YES

to

anonymous_enable=NO

Uncomment:

chroot_local_user=YES

And add the following to the bottom of the file:

pasv_enable=YES

pasv_min_port=1024

pasv_max_port=1048

pasv_address=[YOURPUBLICIP]

Now save the file and restart vsftp.

/etc/init.d/vsftpd restart

Now create an FTP user and set its password (you'll be prompted to enter the password):

adduser redshifts3

passwd redshifts3

Now we are going to install s3cmd, which will let us easily sync folders to our S3 bucket. Enter the following commands:

wget http://ufpr.dl.sourceforge.net/project/s3tools/s3cmd/1.6.1/s3cmd-1.6.1.tar.gz

tar xzf s3cmd-1.6.1.tar.gz

cd s3cmd-1.6.1

sudo python setup.py install

/usr/local/bin/s3cmd --configure

Then enter the config information:

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3

Access Key: xxxxxxxxxxxxxxxxxxxxxx

Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Encryption password is used to protect your files from reading

by unauthorized persons while in transfer to S3

Encryption password: xxxxxxxxxx

Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3

servers is protected from 3rd party eavesdropping. This method is

slower than plain HTTP and can’t be used if you’re behind a proxy

Use HTTPS protocol [No]: Yes

New settings:

 Access Key: xxxxxxxxxxxxxxxxxxxxxx

 Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

 Encryption password: xxxxxxxxxx

 Path to GPG program: /usr/bin/gpg

 Use HTTPS protocol: True

 HTTP Proxy server name:

 HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y

Please wait, attempting to list all buckets…

Success. Your access key and secret key worked fine 🙂

Now verifying that encryption works…

Success. Encryption and decryption worked fine 🙂

Save settings? [y/N] y

Configuration saved to ‘/root/.s3cfg’

Lastly, we just need to set up a bash script to copy the data into Redshift:

copy customer
from 's3://mybucket/mydata'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
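As a rough sketch of what that script might look like, with the bucket path, Redshift endpoint and password as illustrative placeholders only:

#!/bin/bash
# Sync the SFTP upload directory to the S3 bucket, then run the COPY in Redshift.
# The endpoint, paths and credentials below are placeholders for illustration.

/usr/local/bin/s3cmd sync /home/redshifts3/ s3://redshifts3bucket01/

PGPASSWORD='##########' psql \
  -h redshifts3.example.us-east-1.redshift.amazonaws.com \
  -p 5439 -U masteruser -d redshifts3 \
  -c "copy customer from 's3://redshifts3bucket01/' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';"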

There you have it. While this may seem like a lot, the whole process of moving your data from S3 to Redshift is fairly straightforward. Once you’re up and running, I have no doubt that you’ll find Amazon Redshift to be a reliable data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL along with your existing Business Intelligence (BI) tools. Any questions about these steps? Get in touch.

Jul 6, 2017

Why Network Isolation is Crucial for PCI Compliance

INAP

Here’s a simple, single-question poll. Choose only the best answer.

Your household contains many small, high-value items  — rare postage stamps, gaudy diamonds, an original copy of Action Comics #1, Hank Aaron’s 755th home run ball, etc. How would you secure them?  

  1. Display items prominently throughout your abode; lock your house doors and windows each night.
  2. Display items prominently in multiple rooms throughout the house;  install high-security locks and surveillance for individual rooms.
  3. Lock items in a secure safe located in a discreet location of your house, bringing them out only for the occasional show-and-tell session.  
  4. Outsource the security — lock that stuff up at the bank!

The risk of losing any of these items would undoubtedly put a dent in your net wealth, so it’s an important choice. “A” is the riskiest solution. It places too much of the security burden on the perimeter of your home. “B” adds valuable additional layers of security, but also requires you to install the requisite tech and monitor multiple locations within the household. “C” and “D” are far better options simply because they reduce the scope of your operation by isolating the prized assets.

OK, here’s the part where I spell out the analogy, assuming the title of the post hasn’t already spoiled it: The above scenario broadly mirrors the right (C & D) and wrong (A & B) strategies for protecting sensitive data stored or processed on a network.

But there’s a problem. Too many organizations are choosing data security strategies that look a lot like A & B, making compliance and robust information security way more difficult (and expensive) than it needs to be.

Consider PCI DSS, the compliance standards for safeguarding credit cardholder data. Any organization that stores, processes or transmits this sensitive data must adhere to 12 core standards (and many more sub-standards) covering items like establishing firewalls, access controls, clearly defined policies and vulnerability management. Those that fail to adhere risk incurring major fines from their credit card company and bank or, worse yet, suffering a data breach that tanks their brand and puts consumers at risk.

The easiest way to knock off most items on the PCI DSS list follows our examples in C and D  —  it’s all about efficient isolation.

PCI and Network Isolation  —  Why it Works  

Points of weakness for a data breach can occur anywhere in the network chain. Without proper isolation, identifying and monitoring entry points for unauthorized access becomes a huge operational headache.

Remember, PCI applies to the entire cardholder data environment (CDE). The CDE includes all people, processes and system components (servers, network devices, apps, etc.) that interact with the sensitive data.

So we have a choice: Make sure the whole house (which includes everyone in it or with access to it) can pass a PCI DSS audit or segment the data to a highly restricted, isolated room of the house where snooping children and guests can’t enter.

The first option is simply not practical  —  it’s too expensive, too risky, and too unwieldy to monitor and manage. An isolated compute and storage environment protected by firewalls from other areas of the network and internet is the simplest option for applications and databases  touching cardholder data.

PCI, Hosted Private Clouds and Managed Security

So what about building your own fully isolated environment on-premise — like in option C?

It's a viable solution, but one with significant downsides. Maintaining these standards in a rapidly evolving cybersecurity threat landscape requires carefully designed infrastructure and best-practice management protocols. Even if your organization is willing to take on the capital expenditures required to run this part of your data center, managing an isolated PCI DSS environment requires headcount, additional security technologies and a lot of time.

Rather than attempt to meet each one of the 200-plus sub-requirements by yourself, opting for a hosted private cloud and managed security service partner solves for substantial portions of PCI DSS. INAP customers save dozens of hours preparing for an audit, all the while benefiting from an enterprise compute environment tailored to their workload performance demands.

Updated: January 2019
