February 20, 2019 CLOUD SECURITY, WEB APPLICATION FIREWALL (WAF)

Cloud – Private or Public?

Private Cloud? Or Public Cloud? To be honest, deciding whether to adopt the cloud as part of your infrastructure is no longer a matter of yes or no, but rather a choice of whether you should use a private or public solution. Or even a mix of the two.

There are many considerations to make when selecting an infrastructure solution. You’ll need to review security, safety, reachability, and availability. In regulated industries, you may also have compliance requirements. And even in non-regulated industries, if you host clients’ data, you may be subject to geographical restrictions. GDPR clearly isn’t making things easier either.

However, the single most important factor in deciding between private and public is cost. And when I say cost, I mean both the initial investment in setting up the infrastructure and the operational costs of maintaining it.

While there are certainly many ways of skinning this cat, I can only provide a case study based on our direct experience. I’ll therefore try to illustrate the process that led us to adopt a mixed solution, incorporating both private and public infrastructure, and why we believe it to be our best option.

When you set up a cloud environment, you’ll have many objectives in mind in terms of what it needs to be or do. You want this new environment to be:

Available: it should never go down.

Ideally you want 100% uptime. That’s one of the main goals of the cloud in the first place. To be available, it needs to be redundant in every aspect – disks, CPUs, networking. When I say redundant, I don’t mean 2 computers, 4 disks, and maybe 2 ISPs. I mean extensive, obsessive, almost excessive redundancy. Take the Boeing 747: it can fly on one engine, but it has four. That’s the kind of redundancy I’m referring to! I’ll dive into it momentarily.
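To put a number on why that obsessive redundancy pays off, here’s a back-of-the-envelope availability model (a sketch with made-up figures, assuming components fail independently – real systems only approximate that):

```python
def combined_availability(single: float, copies: int) -> float:
    """Probability that at least one of `copies` independent replicas is up."""
    return 1 - (1 - single) ** copies

# A single machine at 99% uptime is down roughly 3.65 days a year.
# Each extra independent copy cuts the remaining downtime by another 99%:
for n in range(1, 5):
    print(f"{n} copies -> {combined_availability(0.99, n):.6%} available")
```

This is exactly why two of everything isn’t enough: going from two copies to four takes you from “four nines” toward the kind of availability where users never notice a failure.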

Scalable: you want more computing power, disk space, network capacity, and RAM, and you want it as soon as you need it.

In a way, this goes back to the redundancy requirement. A redundant system will have capacity to spare, by default. However, don’t fall into the temptation of using all the resources simply because you spent the money and now feel you have to make use of them. Otherwise you no longer have redundancy. Sure, you can over-subscribe a little; just don’t exaggerate. My rule of thumb? No more than 20%.
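One way to encode that rule of thumb (the 20% ceiling is from the text; treating “redundant” as “survive the loss of one node” is my own assumption for the sketch):

```python
def safe_committed_capacity(node_capacity: float, nodes: int,
                            oversubscription: float = 0.20) -> float:
    """Capacity you can commit while keeping one node's worth in reserve.

    `oversubscription` is the rule-of-thumb ceiling: no more than 20%.
    """
    usable = node_capacity * (nodes - 1)   # survive the loss of one node
    return usable * (1 + oversubscription)

# Four nodes of 100 units each: commit at most 360 units, never the full 400.
print(round(safe_committed_capacity(100, 4), 2))  # 360.0
```

Anything beyond that and a single node failure turns into visible degradation – the redundancy you paid for is gone.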

Resilient: it needs to withstand attacks and failures in such a way that users will never even know something happened.

The first thing is to protect yourself against DDoS attacks. In the cloud, you’re more exposed than ever to such attacks, and it’s critical to withstand them as much as possible, in keeping with the availability requirement. I know, I know; there are DDoS attacks and there are DDoS attacks, and you don’t think you can build a platform that can resist an attack of any size. And while I somewhat agree with this assessment, I also know there are ways around it, and that’s the main reason why we ourselves adopted a mixed infrastructure.

Other considerations pertain to maintenance. With all the public infrastructure available, do you still wish to be burdened with the responsibility of maintaining your own? Time, labor, parts where hardware is involved; releases, versions, compatibility; skills. Costs, costs, costs. Isn’t it better to just delegate this to someone else? Perhaps. But nothing is ever free, and cost is often a primary consideration.

Speaking of costs, there are three aspects to consider for the private cloud.

Initial cost: buying the hardware, setting it up, configuring it, and purchasing the software licenses. Operational cost: rental of rack space, redundant ISPs, recurring licenses, and maintenance agreements. And finally, skills – always the most expensive aspect of all. If you set up your own private cloud, you need people on hand who know exactly what’s going on: how to configure it, how to fix it when it breaks (without taking down the whole rack, thus defeating the ‘availability’ bit of your requirements), how to diagnose issues – people capable, essentially, of maintaining this “monster”.

When you adopt the public cloud, all this vanishes. Those providing the public cloud are completely responsible for the hardware. They maintain it; they ensure it’s always on; they provide availability at every level. When I rent a virtual machine from AWS, I expect it to be there, all the time, 100%, always available. If an entire rack of their hardware crashes, I don’t want to know. And if I need more power, or bigger machines, I expect to be able to get them with the push of a button. Setup should be extremely quick. And most public cloud providers today give you just that – the peace of mind you crave: not having to worry about anything at all, except making sure your own applications are running.

However, that kind of peace of mind comes at a cost, and it’s not cheap. AWS and all its competitors charge you for CPU, RAM, disk, networking, uptime, software installed (assuming you purchased it from them). They charge you by the hour, and if your servers are always on, that can add up to quite a lot.
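Just how quickly “by the hour” adds up is easy to work out. The rates below are hypothetical, not any provider’s actual pricing; 730 is the usual hours-per-month billing approximation:

```python
HOURS_PER_MONTH = 730  # common billing approximation: 365 * 24 / 12

def monthly_bill(hourly_rate: float, instances: int,
                 hours: float = HOURS_PER_MONTH) -> float:
    """Cost of a fleet of always-on instances billed by the hour."""
    return hourly_rate * instances * hours

# Hypothetical numbers: ten always-on machines at $0.20/hour.
print(f"${monthly_bill(0.20, 10):,.2f} per month")   # $1,460.00 per month
print(f"${monthly_bill(0.20, 10) * 12:,.2f} per year")  # $17,520.00 per year
```

A fleet that never sleeps pays for every one of those 8,760 hours a year – which is exactly why the comparison against owning your own hardware is worth making.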

There are also other considerations about the public cloud that keep the private solution appealing. At least partially.

One is geolocation.

Although providers have made great strides in recent years in assuring the general geographic location of your data, this choice isn’t always available (365 Ireland), and in any case, it’s very coarse. For example, with AWS you can choose for your data to reside in the US East region, in Virginia. But that’s it. You really can’t choose anything else. And you truly have no way of knowing where your data is, how it’s distributed across all the hard drives in that location, or how your bytes are intermingled with those of another AWS client. There isn’t a “rack” with your computers and your disks. You’re sharing infrastructure and disks with everybody else.

Which brings me to the next consideration.

You don’t know your neighbors, and they don’t know you either. Does that even matter, though? You could argue that you really have nothing to do with the other companies hosted on the same computer, with whom you’re sharing disk space. But in reality, they could affect you, and what you don’t know may indeed harm you. This is a generic, high-level consideration made by virtualization experts. I’m not aware of anything having happened yet; but theoretically, if your neighbor is compromised because they lack security, hackers could eventually gain access to your data as well. AWS and all the other virtualization providers may tell you otherwise; but when it comes to computing, I tire of statements such as “this is impossible”.

Earlier, I mentioned the need for skilled individuals to maintain your private cloud. When you use the public one, the provider will employ the necessary personnel to do just that – maintain the infrastructure and base software, and ensure everything’s running smoothly. However, their employees, just like yours, aren’t wizards; they’re human beings. And even if the provider is much larger than you and can afford to pay higher salaries to hire the best people, well, they are still just people. And people make mistakes. Only now their mistakes can become your problem, and this has already happened several times at AWS lately, where customer data was compromised because of an AWS employee’s mistake. While you may exercise some control over your employees, you have absolutely none over theirs, and are in a way completely at their mercy.

I don’t wish to sound overly pessimistic. Maybe (hopefully), these were isolated incidents that won’t be repeated any time soon. But the fact remains that their employees’ mistakes affected their clients, and that the latter could do absolutely nothing but be resigned to the fact that certain things are simply beyond their control.

One final consideration, about data/cyber security.

Whether you adopt private or public, this is your problem and your problem alone. With the private solution, you’ll need to protect both your hardware layer (main OS, virtualization OS) and the virtual machines. With the public solution, you’re only responsible for protecting your instances; but don’t forget to do so, because the provider isn’t offering you any protection whatsoever. Those security groups you set up on AWS are nothing more than a firewall controlling port access. There’s really zero protection beyond that. As such, if you have a web server exposed to the internet, you must consider a WAF to protect those transactions.
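To make that last point concrete, here’s a toy sketch – not any real product’s logic – of the difference between a port-level firewall and a WAF. The SQL-injection signature is a deliberately naive stand-in for a real rule set:

```python
import re

ALLOWED_PORTS = {80, 443}

def security_group_allows(port: int, request: str) -> bool:
    """A security group filters on port (and source), never on the payload."""
    return port in ALLOWED_PORTS

# One naive SQL-injection signature, standing in for a real WAF rule set.
SQLI = re.compile(r"'\s*(or|union)\b", re.IGNORECASE)

def toy_waf_allows(port: int, request: str) -> bool:
    """A WAF inspects the request itself before letting it through."""
    return port in ALLOWED_PORTS and not SQLI.search(request)

attack = "GET /login?user=admin' OR 1=1 HTTP/1.1"
print(security_group_allows(443, attack))  # True  - port 443 is open
print(toy_waf_allows(443, attack))         # False - the payload is inspected
```

The attack arrives on a perfectly legitimate open port, so the port filter waves it through; only something that reads the request itself can stop it.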

So, now that we’ve made all these considerations, what do we do?

Well, you’ll have to wait till the next post, where I’ll share with you the actual hard numbers of what it cost us to do what we did.

Until then, do feel free to leave a comment or suggestion for future post topics so we can write more of what you’d like to read.