Big Blue Hat

Hybrid cloud, or who's helping who in the IBM / Red Hat deal

“How much?!?”

34 billion (with a b!) USD, approximately. After this Sunday’s announcement (28 October 2018) that IBM was buying Red Hat, the specialized media went crazy and flooded the Internet with all sorts of articles about the acquisition. Most of them hit the nail on the head (that IBM wants to play the cloud game), but without going too deep into what Ginni Rometty specifically meant when she said that “IBM will become the world’s #1 hybrid cloud provider.” Everyone is familiar with the now clichéd concept of cloud computing, but the hybrid qualifier is not something everyone has a full grasp of.

Who's going to tattoo this on their arms now?

A bit about containers

Containerization is a paradigm shift comparable to virtualization, but containers alone only get us through the development phase of a software project’s life cycle; in production, we need another piece of software to oversee those containers and keep them pointed in the right direction. Apache Mesos was the first such guide I worked with, back in 2015, but ever since its release in 2014, Kubernetes has been swallowing its credible challengers (Docker Swarm and DC/OS, which uses Mesos under the hood). This was not only due to the maturity of Kubernetes as a project (it borrowed a lot from Google’s Borg system), but also to the fact that Google knew how to play its cards to make competing with Kubernetes a losing proposition, for instance by spurring the creation of the CNCF, the Cloud Native Computing Foundation.

Fast-forward to 2018, and what we have is an astounding piece of software that every ops person I know wants to work with. Kubernetes is not for the faint of heart, but once you learn it, the advantages far outweigh the initially steep learning curve: its design is elegant, its resource consumption is fairly lean, and, most importantly, it’s standardized. I can’t emphasize that enough: the guarantee that I can run my containers on top of a Kubernetes cluster installed on AWS the same way I can on one installed on Azure or GCP is a game changer.1 Do you know why it’s so easy for us end users to navigate the Internet today? Because things like TCP, IP, and Ethernet follow strict standards that every vendor who wants to play ball must comply with.

Take a web application written in Java, for example. Imagine your company runs a bunch of them on bare metal, with a load balancer for high availability. You could wrap that application in a Docker container and deploy it to Amazon EKS (Elastic Kubernetes Service). Not happy with AWS? Fine, take the very same containerized application and deploy it to AKS (Azure Kubernetes Service) instead. It’s going to work the same way. Oh, and you can also kiss that hand-managed load balancer goodbye now.
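To make that portability concrete, here is a minimal sketch using the official Kubernetes Python client. The image name, kubeconfig context, and ports are hypothetical placeholders; the same objects are more commonly written as YAML manifests and applied with kubectl, but the portability argument is identical either way.

```python
# A minimal sketch: the same Deployment and Service objects work unchanged
# on EKS, AKS, or GKE. Only the kubeconfig context differs per vendor.
# Image name, context name, and ports are hypothetical placeholders.
from kubernetes import client, config

# Point at an EKS cluster today, an AKS cluster tomorrow.
config.load_kube_config(context="my-eks-cluster")

labels = {"app": "java-web"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="java-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # high availability without a hand-managed balancer
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="java-web",
                    image="registry.example.com/java-web:1.0",
                    ports=[client.V1ContainerPort(container_port=8080)],
                ),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment("default", deployment)

# A Service of type LoadBalancer asks whichever cloud the cluster runs on
# to provision and wire up a load balancer automatically. This is why the
# hand-managed balancer from the bare-metal setup becomes unnecessary.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="java-web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service("default", service)
```

In principle, pointing load_kube_config at an AKS or GKE context is the only change needed to move the workload.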

That’s not enough

What I described above is great if we only consider what runs on top of Kubernetes (your application); but what about Kubernetes itself and whatever sits under it? Someone still has to deploy those things somewhere. As you may imagine, that part is entirely up to the vendors. The way we deploy and manage a GKE (Google Kubernetes Engine) cluster is nothing like the way we handle an EKS or AKS cluster. In other words, the infrastructure behind Kubernetes, just like the infrastructure behind any other cloud service, from relational databases to network interfaces, depends on how each cloud provider does its thing; that part is still vendor lock-in.
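To illustrate, compare what merely creating a cluster looks like against two different vendors’ APIs. This is a rough sketch with hypothetical names, ARNs, subnets, and project IDs, using boto3 for EKS and the google-cloud-container client for GKE; the point is that, unlike the workload API, nothing here carries over from one provider to the other.

```python
# Provisioning the cluster itself is vendor-specific: compare creating an
# EKS cluster (boto3) with creating a GKE cluster (google-cloud-container).
# All names, ARNs, subnets, and project IDs below are hypothetical.
import boto3
from google.cloud import container_v1

# AWS: EKS wants an IAM role and VPC subnets up front.
eks = boto3.client("eks", region_name="us-east-1")
eks.create_cluster(
    name="demo",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
)

# Google Cloud: GKE wants a project/location parent and a Cluster message.
gke = container_v1.ClusterManagerClient()
gke.create_cluster(
    parent="projects/my-gcp-project/locations/us-central1",
    cluster=container_v1.Cluster(name="demo", initial_node_count=3),
)
```

Two clusters that will run the exact same workloads, yet two provisioning APIs with nothing in common.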

Which may not be an issue for a startup. All things considered, being locked in to AWS or Azure is generally not a problem if you’re a small, niche business. Even if an AWS region gets hammered by a DDoS attack and takes your application down with it, you can easily redeploy to another region; there’s no realistic need to go multi-cloud just to achieve high availability. Not every company likes the idea of being locked in to a vendor, though. The reasons are numerous, but the ones I hear most often are price and liability.

That’s even more true if you’re a big enterprise with a vast array of resources.2 Imagine your AWS bill keeps rising, month after month, until you decide to shop around and find that Alibaba Cloud will offer the same services for half the price. What do you do then? Call your ops team and tell them to “Pack it up, boys, we’re moving to Alibaba”? If only it were so simple… As for liability, many businesses, especially those that store sensitive data and are subject to all sorts of compliance requirements, don’t feel entirely safe putting all their eggs in someone else’s basket. Of course, they’re okay using the cloud for certain things (Active Directory, data analytics, static file and website hosting), but subjects like permanent storage of sensitive data and network latency remain, and will probably keep being, points of contention.

Many companies in that position find in Terraform the perfect tool for implementing a multi-cloud solution. If they want their application to talk to a database on GCP and an email service on AWS, all they have to do is declare those services in a Terraform template and “run” it, as sketched below. For some big enterprises, though, that’s still not enough; they need that process to be more streamlined.
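Here is roughly what that looks like. The template is ordinary Terraform HCL, embedded in a Python script here to keep the sketch self-contained; the resource choices (a Cloud SQL instance on GCP, an SES domain identity on AWS), project names, and regions are hypothetical, and it assumes the terraform binary is on your PATH with credentials configured for both providers.

```python
# A sketch of the multi-cloud Terraform workflow: one template declares
# resources on two clouds, and a single apply provisions both.
# All names, projects, regions, and domains below are hypothetical.
import pathlib
import subprocess
import textwrap

template = textwrap.dedent("""
    provider "google" {
      project = "my-gcp-project"
      region  = "us-central1"
    }

    provider "aws" {
      region = "us-east-1"
    }

    # A PostgreSQL instance on GCP for the application database.
    resource "google_sql_database_instance" "app_db" {
      name             = "app-db"
      database_version = "POSTGRES_9_6"
      settings {
        tier = "db-f1-micro"
      }
    }

    # An SES domain identity on AWS for the email service.
    resource "aws_ses_domain_identity" "mail" {
      domain = "example.com"
    }
""")

workdir = pathlib.Path("multi-cloud")
workdir.mkdir(exist_ok=True)
(workdir / "main.tf").write_text(template)

# "Running" the template: init downloads both providers, apply provisions
# the GCP database and the AWS email identity in one go.
subprocess.run(["terraform", "init"], cwd=workdir, check=True)
subprocess.run(["terraform", "apply", "-auto-approve"], cwd=workdir, check=True)
```

A single apply touches both clouds, which is exactly the kind of convenience those enterprises want more of.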

That’s where Red Hat’s OpenShift comes into play. OpenShift tries to minimize vendor discrepancies by being more than just a container orchestrator. It does so by offering a service catalog from which a user (an ops person or even a developer) can pick an item and deploy it. Think of a cPanel-like screen with all sorts of services, on premises or on a public cloud, at your fingertips. Support for some AWS services was announced at Red Hat Summit 2017, with many more probably coming in the near future. As Red Hat put it in the announcement:

Through this unique offering, Red Hat will make AWS services accessible directly within Red Hat OpenShift Container Platform, allowing customers to take advantage of the world’s most comprehensive and broadly adopted cloud whether they’re using Red Hat OpenShift Container Platform on AWS or in an on-premises environment. Customers will be able to seamlessly configure and deploy a range of AWS services such as Amazon Aurora, Amazon Redshift, Amazon EMR, Amazon Athena, Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing with just a few clicks from directly within the Red Hat OpenShift console.

Red Hat’s proposition is to natively integrate access to both private and public cloud services into OpenShift so that it can bring together two or more disparate vendor environments to serve the same workload or application through a single management plane. “This will enable Red Hat OpenShift Container Platform customers to be more agile as they’ll be able to use the same application development platform to build on premises or in the cloud.”3 The end result is a seamless user experience that substantially simplifies day-to-day operations. In short, OpenShift is not just a Kubernetes flavor; it’s a full-blown platform of its own.

And OpenShift and hybrid cloud are what Red Hat has been betting on for at least the past two or three years.4 You can confirm that by peeking into Red Hat’s latest earnings call transcripts. That’s not to say IBM paid 34 billion for OpenShift alone, though.

Papa Smurf on the clouds

IBM and Red Hat have been partners for over two decades now, and a lot of IBM clients need just what Red Hat is specializing in: hybrid cloud. As often happens with companies “too big to fail,” IBM was a late player in the cloud game. But instead of directly competing with 800-pound gorillas like AWS and Azure, IBM wisely decided to go boutique: hybrid cloud is a niche. What IBM gets with this acquisition is not only OpenShift, but also the assurance of working with top developers who, among other things, know Linux like the back of their hands and contribute to Kubernetes as core members. Selling OpenShift to clients when you have those sorts of credentials may be easier than the skeptics think.

But still, that doesn’t explain why IBM paid such an exorbitant amount for Red Hat. I can’t confirm this, but I suspect other big players had their eyes on it. As you may know, companies such as HP, Cisco, VMware, and even Google are already working on Kubernetes-based hybrid cloud solutions. I think IBM didn’t want to risk missing the last boat; instead, they accepted the fact that they were late to the party and resigned themselves to buying the entry ticket at a (very high) premium.

Conclusion

As far as I know, IBM has always been a benevolent actor in the FOSS community, and I don’t believe this acquisition puts open source at risk. Red Hat is helping IBM as much as IBM is helping Red Hat. (34 billion! What else would you call that?) I understand why some people in the open source community are worried, but I think IBM is smart enough to let the folks at Red Hat operate on their own terms.

Don’t take me for a financial advisor, but if you own IBM stock and are upset about the price drop caused by the acquisition, consider hodling for a little while; IBM may still make a lot of money out of this expensive deal.

As for the future, this move could be the first of many, and I wouldn’t be surprised to see Microsoft or even Amazon acquire SUSE in the next couple of years.