A few weeks ago, I started blogging about Kubernetes. Turns out there is a lot to talk about. Today, I’ll get back to finally answering the original question: what is Kubernetes?
First of all, Kubernetes is a platform for running containerized applications across a cluster of machines.
As mentioned before, it grew out of the best ideas Google refined over years of running all of its software in billions of containers per week, and it takes those ideas to a new level.
Kubernetes is intelligent about how it runs containers. Its scheduler is aware of the state of each host machine and of what is running cluster-wide, so it places containers and pods where resources will be used efficiently. It also supports horizontal scaling of your application, either manually or automatically based on CPU usage.
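As a rough sketch of the automatic case, a HorizontalPodAutoscaler resource like the following (the Deployment name "web" and the thresholds are hypothetical, not from the example in this post) tells Kubernetes to add or remove replicas as CPU usage changes:

```yaml
# Hypothetical autoscaler: keeps a Deployment named "web" between
# 2 and 10 replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```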
Not only that: when Kubernetes runs a workload, it gives each pod its own IP address and each Service a stable DNS name. This built-in service discovery means an application can reach, say, its database at a well-known, consistent name, rather than the legacy pattern of explicitly supplying the database host and editing application configuration whenever it changes.
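To make that concrete, here is a minimal sketch (the name "mysql" and label are illustrative): because the Service is named "mysql", any pod in the same namespace can reach the database simply at the host name "mysql", with no hard-coded IPs in application configuration.

```yaml
# A Service gets a cluster-internal DNS name matching metadata.name,
# so applications connect to host "mysql" on port 3306.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
```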
Take, for instance, Vitess, which I recently spoke about at the Cloud Native Computing Foundation's KubeCon. Vitess is the database system YouTube uses to scale out MySQL, where video metadata is stored. Though it predates Kubernetes (and Borg), Kubernetes's flexibility and scaling features have lent themselves well to Vitess sharding a huge dataset across thousands of machines in containers.
Kubernetes makes running complex applications easier, and because containers are smaller and lighter-weight than virtual machines, it does so with fewer resources.
Kubernetes provides a common, extensible API for reading and writing Kubernetes resource objects, and in turn for managing applications. Applications and their infrastructure are defined in clean YAML manifest files describing the containers that comprise the application.
For instance, here is a simple manifest for running WordPress with a backing MySQL database.
First, the WordPress deployment (https://github.com/CaptTofu/kubeconf-2017/blob/master/mysql-wordpress-pd/wordpress-deployment.yaml):
– a front-end Service exposing WordPress to the outside world
– a PersistentVolumeClaim for WordPress files
– a Deployment defining which WordPress container to use, database connection information, and which volumes to mount
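The three objects above can be sketched roughly as follows (abbreviated here; the full file is in the linked repository, and the exact names and image tag may differ):

```yaml
# Front-end Service exposing WordPress externally
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: wordpress
---
# Storage for WordPress files
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
# The WordPress container, its database connection, and its volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:4.8-apache
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql   # resolved via Service DNS
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
```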
Then the Deployment for MySQL (Percona distribution):
– a Service that WordPress connects to in order to reach the underlying MySQL database pod
– a PersistentVolumeClaim for the database storage
– a Deployment for running the MySQL container, setting the database password and the volume information
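The MySQL half follows the same pattern; an abbreviated sketch (again, names and image tag are illustrative, not copied verbatim from the repository):

```yaml
# Service that WordPress connects to by name
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  ports:
    - port: 3306
  clusterIP: None   # headless: routes directly to the database pod
  selector:
    app: mysql
---
# Storage for the database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
# The MySQL (Percona) container with its password and volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: percona:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```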
As can be seen, the YAML files are intuitive to read and somewhat self-documenting.
The following video shows just how easy it is to use Kubernetes to run something like WordPress and a backing database. In this example, the above deployments are installed using a Helm chart (another blog post!).
Running Kubernetes used to require a bit of patience and knowledge and the tools were a bit disparate and varying in difficulty of use. I can remember numerous methods we tested at HPE — using CoreOS Fleet unit files, Vagrant, custom Ansible playbooks, Samsung’s Kraken, an in-house proof of concept written in Python, and many other permutations of trying to find something simple. The community has done a great job of consolidating these efforts into top-level projects that attempt to satisfy any number of desired cluster sizes and use-cases.
There are many ways to run Kubernetes: on bare metal, virtual machines, containers, or LXD, whether you are a developer getting familiar with it on a single laptop or an organization deploying Kubernetes to run applications across a large cluster. The Kubernetes website has a good section explaining these options at https://kubernetes.io/docs/setup/pick-right-solution/#local-machine-solutions
Some of the highlights are:
Minikube – useful for running a local cluster on your laptop to become familiar with Kubernetes and do development. (https://kubernetes.io/docs/getting-started-guides/minikube/)
Kubespray – a great tool (and one that Oracle Dyn is using for building their clusters) for spinning up clusters on any number of environments. It utilizes Ansible to install and configure Kubernetes components depending on the desired configuration set up in inventory. It also provides Terraform scripts for provisioning environments on AWS and OpenStack.
Kops – builds Kubernetes clusters on AWS (and can also export Terraform configuration). It is simple to set up, and you can scale a cluster up or down just by editing and applying its configuration with the kops command-line utility.
conjure-up (Ubuntu) – spins up Kubernetes clusters on Ubuntu running on any number of cloud IaaS providers, including Oracle Cloud, AWS, Azure, etc. (https://kubernetes.io/docs/getting-started-guides/ubuntu/)
Of course, there is the cloud: pretty much any cloud provider (Oracle Cloud, GCE, AWS, etc.) offers any number of ways to run Kubernetes, each warranting its own blog post, and something future Dyn blog posts will cover!
If you’re interested in this topic, you can find my other posts here: