It has been a little over ten years since Kubernetes made its debut, and organizations continue to migrate their legacy applications and services from dedicated servers to Kubernetes clusters, while others have yet to adopt the technology. Before going all-in on Kubernetes, however, there are some important factors to take into account.
Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. In practice, the word Kubernetes is largely synonymous with cluster, and deploying and managing the containerized applications that make up a larger application via a Kubernetes cluster has become the de facto standard.
Prior to the advent of Kubernetes, standing up a web application comprised of multiple services may have taken hours, if not days: from preparing a virtual machine to house, isolate, and serve each service, to configuring the nuanced details surrounding each service, whether related to networking, security, or service-specific configuration.
Today, with Kubernetes serving as the underlying platform to manage and host web services, the very same web application may be stood up within hours, or even minutes, all while providing greater flexibility around management, additional observability, and more granular control over the compute resources being consumed.
For many organizations, whether to adopt Kubernetes and leverage a cluster to manage enterprise web services and applications is no longer an open question; it is a must.
Organizations that are looking to migrate their existing workloads to a Kubernetes cluster have three options:
1.) Opt for a fully-managed cloud provider Kubernetes-as-a-Service offering, namely AKS (Microsoft), EKS (Amazon), and GKE (Google), or
2.) Opt for a partially-managed approach by leasing the infrastructure from a data center/cloud provider, building out and self-managing a Kubernetes cluster, or
3.) Leverage on-premise infrastructure, build out and self-manage a Kubernetes cluster
Each of the above options has its pros and cons, but there are some key factors to bear in mind before making a final decision. These include, but are not limited to: maintenance and administration overhead, personnel technical skill sets, desired total cost of ownership, and any application requirements for bleeding-edge technologies (e.g., a specific container runtime, networking requirements, or proprietary storage).
While there are advanced cluster designs, such as hybrid (an architecture comprised of both on-premise and data center backed compute resources) and highly available (an architecture comprised of multiple clusters working cohesively as one), the purpose of this article is to discuss the differences between fully-managed Kubernetes-as-a-Service solutions, partially-managed Kubernetes solutions, and on-premise self-managed Kubernetes solutions.
Brief Overview of Kubernetes
Before going into detail about the different options for hosting and managing a Kubernetes platform, it may serve well to touch on the underlying components that make up the Kubernetes platform and clarify some of the jargon surrounding the Kubernetes ecosystem.
As mentioned earlier, Kubernetes is a platform for automating deployment, scaling, and management of containerized applications. But what is a containerized application? To better understand, let us reflect on the traditional hosting of web applications and services, and take a look under the hood of Kubernetes.
Traditional Application Hosting
Applications are typically comprised of multiple services, with each service isolated on its own server. Isolation may also come in the form of a hypervisor housing multiple virtual machines within one physical server.
In the case of a simple web application, one may have an nginx server for web traffic, a MySQL server for the database, a Redis server for caching, and the code base for the application itself.
If we break down the example application above, a recommended approach would require at least three virtual machines (one for each service) plus a storage device for the code base. While this approach would suffice for hosting a production-grade web application, it comes with drawbacks.
When virtual machines are created, they are allotted a static amount of memory, CPU, and storage. These resources are committed to each server regardless of the traffic or throughput to the services. Scaling any one of the servers up would require an additional virtual machine, with additional CPU, memory, and storage resources, as well as installing a base operating system and configuring the network to account for the added virtual machine.
Moreover, the dedicated server or virtual machine approach consumes the allotted resources whether they are being used or not: a very inefficient use of resources that continuously draws power and, over time, can be very costly.
This is where Kubernetes comes into play: a Kubernetes cluster provides an environment where CPU, memory, storage, and networking, to name a few, are abstracted from the application.
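To make the contrast concrete, the nginx service from the earlier example could be declared as a Kubernetes Deployment. The following is a minimal sketch; the names, image tag, replica count, and resource figures are illustrative assumptions, not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name for the nginx service
spec:
  replicas: 2               # scaling up is a one-line change, not a new VM
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25       # illustrative image tag
          resources:
            requests:             # the scheduler places the pod based on these
              cpu: 250m
              memory: 128Mi
            limits:               # a hard ceiling, unlike a VM's static allotment
              cpu: 500m
              memory: 256Mi
```

Unlike a virtual machine, the container reserves only the requested slice of a node's resources, and changing `replicas` scales the service without provisioning a new machine, installing an operating system, or reconfiguring the network.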