Kubernetes, also known as K8s, was built by Google based on their experience of running containers in production. It is now an open-source project and is arguably one of the best and most popular container orchestration technologies out there.
In this post I want to share some of my #SummerLearning project and try to explain Kubernetes at a high level.
My first observation is that Kubernetes is the buzz at the moment, but in order to understand Kubernetes we first have to understand two other areas: containers and orchestration.
The container technology I chose to try and learn over the summer was Docker, but we probably need to take a step back here and ask ourselves why containers are going to be part of the future.
In my opinion, the application world is changing what infrastructure needs to provide for the application stack. Previously we would have to review compatibility with the underlying operating system or systems, check that all the different services were compatible with that version of the operating system, and, even more cumbersome, ensure compatibility between those services and the libraries and dependencies on the OS as well. And that is just day one; you then have the added bonus of going through it all again when things need to be upgraded or refreshed. Quite the cumbersome task. Then there is the other laborious side of application development: as an infrastructure guy, how do you get developers the exact versions and copy of data they require, so they are working on, let's say, near-production workloads? Well, this is the challenge.
Why containers? Why Docker? Docker, or containers in general, allows you to run each component in a separate container, with its own libraries and its own dependencies, all on the same machine or virtual machine and OS, but within separate environments, or containers. You build the Docker configuration once, and then all of the application developers can get started on their environment with a simple docker run command, regardless of what the infrastructure or operating system is. All they need to do is make sure they have Docker installed on their systems.
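As a minimal sketch of what that looks like (assuming Docker is installed on the developer's machine, and using the public nginx image purely as an illustration):

```shell
# Pull and start a containerised web server in one command;
# Docker fetches the nginx image from Docker Hub if it is not already local.
docker run -d --name web -p 8080:80 nginx

# The same command gives every developer an identical environment,
# whatever Linux distribution their host happens to run.
```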
What are containers?
Containers are completely isolated environments: they can have their own processes or services, their own network interfaces, and their own mounts, just like virtual machines, except they all share the same operating system kernel.
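A quick way to see that isolation for yourself (a sketch, assuming Docker is available; busybox is just a convenient tiny image):

```shell
# Each container gets its own process namespace:
# from inside, only the container's own processes are visible.
docker run --rm busybox ps

# The host's processes do not appear in that listing, and the
# container's first process runs as PID 1.
```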
It's also important to note that containers did not arrive with Docker; containers have existed for well over ten years now, and there are many different types of containers.
Docker was originally built on LXC containers. Setting up these container environments by hand is hard, as they are very low-level, and that is where Docker comes in: it offers a high-level tool with several powerful functions, making it really easy for end users like us to work with containers.
If you look at operating systems like Ubuntu, Fedora, SUSE or CentOS, they all consist of two things: an OS kernel and a set of software. The operating system kernel is responsible for interacting with the underlying hardware. While the OS kernel remains the same, Linux in this case, it is the software above it that makes these operating systems different. That software may consist of a different user interface, drivers, compilers, file managers, developer tools and so on. So you have a common Linux kernel shared across all of these operating systems, and some custom software that differentiates them from each other.
We mentioned earlier that Docker containers share the underlying kernel. What does sharing the kernel actually mean? Let's say we have a system running Ubuntu with Docker installed on it. Docker can run a container based on any flavour of OS, as long as it is built on the same kernel, in this case Linux. So if the underlying operating system is Ubuntu, Docker can run a container based on another distribution like Debian, Fedora, SUSE or CentOS. Each Docker container only carries the additional software we mentioned previously, the software that makes these operating systems different, and Docker utilises the underlying kernel of the Docker host, which works with all of the operating systems above. So what is an OS that does not share this kernel?
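You can see the shared kernel for yourself; a small sketch, assuming an Ubuntu host with Docker installed, using the public debian image as the example:

```shell
# Kernel version reported by the Ubuntu host
uname -r

# Kernel version reported from inside a Debian-based container:
# it prints the same value, because the container shares the host's kernel.
docker run --rm debian uname -r
```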
Windows. And so you won't be able to run a Windows-based container on a Docker host with a Linux OS on it; for that you would require Docker on a Windows server. You might ask, isn't that a disadvantage then, not being able to run another kernel on the OS? The answer is no, because unlike hypervisors, Docker is not meant to virtualise and run different operating systems and kernels on the same hardware. The main purpose of Docker is to containerise applications, ship them and run them.
The difference between VMs and Containers
Which then brings us to the differences between virtual machines and containers, a comparison we tend to make, especially those of us with a virtualisation background.
In the case of Docker, we have the underlying hardware infrastructure, then the operating system, and then Docker installed on the OS. Docker can then manage the containers, which run with just their libraries and dependencies.
In the case of a virtual machine, we have the underlying hardware, then a hypervisor like ESXi, then the virtual machines. Each virtual machine has its own operating system inside it, then the dependencies, and then the application. This overhead causes higher utilisation of the underlying resources, as there are multiple virtual operating systems and kernels running. It also consumes more disk space, as each VM is heavy, usually gigabytes in size.
Docker containers, on the other hand, are lightweight and usually megabytes in size. This allows Docker containers to boot up faster, usually in a matter of seconds, whereas virtual machines, as we know, take minutes to boot because they need to bring up an entire operating system.
It is also important to note that Docker has less isolation, as more resources, like the kernel, are shared between containers, whereas VMs are completely isolated from each other. Since VMs don't rely on the underlying operating system or kernel, you can have different types of operating systems, such as Linux-based and Windows-based, on the same hypervisor, which is not possible on a single Docker host. So those are some of the differences between the two.
How does it happen with Docker?
There are a lot of containerised versions of applications readily available today. Most organisations have their products containerised and available in a public Docker registry called Docker Hub (also known as Docker Store). For example, you can find images of the most common operating systems, databases and other services and tools.
Once you identify the images you need and install Docker on your host, bringing up an application stack is as easy as running a docker run command with the name of the image.
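For example (a sketch assuming Docker is installed; redis is just one of the many official images on Docker Hub):

```shell
# Download the image from Docker Hub
# (optional; docker run pulls it automatically if missing)
docker pull redis

# Start a Redis service from that image in the background
docker run -d --name cache redis
```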
I mentioned images there, so let's just talk about the difference between images and containers.
An image is a package or a template, just like a VM template that you might have worked with in the virtualisation world. It is used to create one or more containers.
Containers are running instances of images that are isolated and have their own environments and sets of processes. As we have seen, a lot of products have been dockerised already; in case you cannot find what you're looking for, you can create an image yourself and push it to the Docker Hub repository, making it available to the public.
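The relationship is easy to see on the command line (again a sketch, assuming Docker is installed; the container names are made up):

```shell
# One image, many containers: each docker run creates
# a new, isolated instance of the same nginx image.
docker run -d --name web1 nginx
docker run -d --name web2 nginx

docker ps        # lists the running containers
docker images    # lists the images they were created from
```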
If we go back and look at the traditional problem we have today with the application stack: you have developers building their applications and then handing them over to the operations team to deploy and manage in the production environments. They do that by providing a set of instructions, such as information about how the host must be set up, what prerequisites are to be installed on the host, and how the dependencies are to be configured. The operations team uses this guide to set up the application. However, the challenge is that the operations team did not develop the application, so on their own they struggle with setting it up, and when they hit an issue they have to work with the developers to resolve it.
With Docker, a major portion of the work involved in setting up the infrastructure is now in the hands of the developers, in the form of a Dockerfile. The guide that the developers previously wrote to set up the infrastructure can now easily be put together into a Dockerfile to create an image for the application. This image can run on any container platform and is guaranteed to run the same way everywhere, so the operations team can simply use the image to deploy the application. Since the image was already working when the developer built it, and operations are not modifying it, it continues to work the same way when deployed in production.
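As a hedged sketch of what such a Dockerfile might look like for a simple Python web application (the base image, file names and start command here are illustrative assumptions, not taken from any real project):

```dockerfile
# Start from a known base image with the runtime preinstalled.
FROM python:3.12-slim

# Install the application's dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

Building this with docker build -t myapp . produces an image that runs the same way on a developer's laptop as it does in production.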