container learning – vZilla (https://vzilla.co.uk)
One Step into Kubernetes and Cloud Native at a time, not forgetting the world before

Kubernetes playground – Backups in a Kubernetes world

This post wraps up the 10-part series on getting started with my hands-on learning journey of Kubernetes. The idea has been to touch on a lot of areas without going too deep into theory in these posts; much of that theory I have picked up through the various learning assets I have listed here. In the previous posts we created a platform for our Kubernetes cluster to run on and touched on stateless and stateful applications, load balancers and object storage, amongst a few other topics to get going. We have only scratched the surface, though, and I fully intend to continue documenting the public cloud and managed Kubernetes services that are available.

In this post we wrap the series up by talking about data management, and what better way to do that than to cover the installation and deployment of Kasten K10 in our lab to help with our lab backups and more; the "more" is something we can get into over another series and potentially a video series. After spending the time getting up and running, you will want to spin that cluster up and down, so it makes sense to store some backups to get things back quickly, and at the very least to keep the data protection angle in the back of your mind as we all navigate this new world.

Everything Free

031321 1625 Buildingthe1

First of all, everything so far in the series has been built on free tools and products, and we continue that here with Kasten K10 Free Edition. There are a few ways to take advantage of this free edition: it covers you for 10 worker nodes and it is free forever, which is ideal for testing and home lab learning scenarios where a lot of us are right now. This mantra has been the case at Veeam for a long time; there is always a free tier available with Veeam software. How do you get started? Both options on the page above deserve more coverage in another post, but in short: the Test Drive option walks you through getting Kasten K10 up and running in a hands-on-lab type environment without needing any home lab or cloud access to a Kubernetes cluster, and the Free Edition itself can be obtained from cloud-based marketplaces. I have also written about this in one of my opening blogs for Kasten by Veeam.

Documentation

031321 1625 Buildingthe2

Another thing I have found is that the Kasten K10 documentation is good and thorough. Don't worry, it's not thorough because it's hard; it details the install options and process for each of the well-known Kubernetes deployments and platforms, and then goes into specific details you may want to consider, from the home lab user through to the enterprise paid-for product, which includes the same functionality with added enterprise support and a custom node count. You can find the link to the documentation here, which is ultimately where the steps I am going to run through come from.

Let’s get deploying

First, we need to create a new namespace.

kubectl create namespace kasten-io

we also need to add the helm repo for Kasten K10. We can do this by running the following command.

helm repo add kasten https://charts.kasten.io/

We should then run a pre-flight check on our cluster to make sure the environment is going to be able to host the K10 application and perform backups against our applications. This is documented under Pre-Flight Checks; it will create and clean up a number of objects to confirm everything will run when we come to install K10 later on.

curl https://docs.kasten.io/tools/k10_primer.sh | bash

This command should look something like the following when you run it. It checks for access to your Kubernetes cluster using kubectl, access to Helm for deployment (which we covered in a previous post) and validates that the Kubernetes settings meet the K10 requirements.

031321 1625 Buildingthe3

Continued

031321 1625 Buildingthe4

Installing K10

If the above did not come back with errors or warnings, then we can continue to install Kasten K10 into our cluster. This command leverages the MetalLB load balancer that we covered in a previous post to give us network access to the K10 dashboard later on. You could also use a port forward to gain access, which is the default behaviour without the additional externalGateway option in the following helm command.

helm install k10 kasten/k10 --namespace=kasten-io \
  --set externalGateway.create=true \
  --set auth.tokenAuth.enabled=true
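If you would rather keep these options in a values file than on the command line, the equivalent values would look roughly like the sketch below (the file name k10-values.yaml is just an example, and you would pass it with helm install k10 kasten/k10 --namespace=kasten-io -f k10-values.yaml):

# k10-values.yaml – mirrors the --set flags used above
externalGateway:
  create: true      # expose the dashboard through an external gateway service
auth:
  tokenAuth:
    enabled: true   # require token authentication for the dashboard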

Once this is complete you can watch the pods being created, and when everything has completed successfully you can run the following command to see the status of our namespace.

kubectl get all -n kasten-io

031321 1625 Buildingthe5

You will see from the above that we have an external IP on one of our services. With our configuration, service/gateway-ext should be using the LoadBalancer type and should have an address from the range you configured in MetalLB that you can reach on your network. If you are running this on one of the public cloud offerings, it will use the native load balancing capabilities and will also give you an externally facing address. Depending on your configuration in the public cloud you may or may not have to make further changes to enable access to the K10 dashboard, something else we will cover in a later post.

Upgrading K10

Before we move on, I also wanted to cover upgrades; we will go into more detail later, but roughly every two weeks there is an update release available, so being able to run an upgrade to stay up to date with new enhancements is important. The following command makes that quick and easy.

helm upgrade k10 kasten/k10 --namespace=kasten-io \
  --reuse-values \
  --set externalGateway.create=true \
  --set auth.tokenAuth.enabled=true

Accessing the K10 Dashboard

We confirmed above that the services and pods are all up and running, but if we want to check again we can do so with the following commands.

Confirm all pods are running

kubectl get pods -n kasten-io

031321 1625 Buildingthe6

Confirm your IP address for dashboard access

kubectl get svc gateway-ext --namespace kasten-io -o wide

031321 1625 Buildingthe7

Take the external IP listed above and put it into your web browser in the form http://192.168.169.241/k10/#. You will be greeted with the following sign-in and token authentication request.

031321 1625 Buildingthe8

To obtain that token, run the following commands against the default service account that is created with the deployment. If you require further RBAC configuration then refer to the documentation listed above.

kubectl describe sa k10-k10 -n kasten-io

031321 1625 Buildingthe9

kubectl describe secret k10-k10-token-b2tnz -n kasten-io

031321 1625 Buildingthe10

Use the above token to authenticate and you will be greeted with the EULA. Fill in the details, obviously read the whole agreement at least twice, and then click accept.

031321 1625 Buildingthe11

You will then see your Kasten K10 cluster dashboard, where you can see your available applications, policies, and which backups (snapshots) and exports (backups) you have, with a summary and overview of the jobs that have run down below.

031321 1625 Buildingthe12

The next series of posts will continue the theme of learning Kubernetes, and we will get back to the K10 journey as well, since we will want and need it as we continue to test more and more stateful workloads that require that backup functionality. There is also a lot of other cool tech and features within this product, which is the same product regardless of whether it is the free or the enterprise edition.

I hope the series was useful; any feedback would be greatly appreciated. Let me know if it has helped or not as well.

Kubernetes playground – How to deploy your Mission Critical App – Pacman

The last post focused a little more on applications, not so much on the difference between stateful and stateless types but on the shape of application deployment: deploying KubeApps and using it as an application dashboard for Kubernetes. This post is going to focus on a deployment that is firstly "mission critical" and that contains a front end and a back end.

Recently Dean and I covered this in a demo session we did at the London VMUG.

I would also like to add that the example Node.js application with a MongoDB back end was first created here. Dean also has his own GitHub repository, which is where we are going to get the YAML files.

“Mission Critical App – Pac-Man”

Let's start by explaining a little about our mission critical app: it is an HTML5 Pac-Man game with Node.js as the web front end and a MongoDB database as the back end to store our high scores. You can find out more about how it is built in the first link above.

Getting started

Over the next few sections, we will look at the building blocks to create our mission critical application. We are going to start by creating a namespace for the app.

You can see here we do not have a pacman namespace

031021 1632 Buildingthe1

Let’s create our pacman namespace

kubectl create namespace pacman

031021 1632 Buildingthe2

The next stage is to download the YAML files to build out our application, using the following command.

git clone https://github.com/saintdle/pacman-tanzu.git

You could then simply apply each of those YAML files to get your app up and running (one warning here is that you will need a load balancer in place; if you followed the MetalLB post, though, you will already be in a good spot).

You should now have a folder called pacman-tanzu with the following contents to get going.

031021 1632 Buildingthe3

We will now take a look at those YAML files and explain a little about each one and what they do.

Deployments

A deployment provides declarative updates for Pods and ReplicaSets. This is where we define the Pods that we wish to deploy and how many replicas of each pod we need. In your deployments folder you will see two files, one referring to MongoDB and one referring to Pac-Man. Notice the replica counts in both deployments, and also that the MongoDB deployment references a persistent volume claim, which we will cover later.

mongo-deployment.yaml

031021 1632 Buildingthe4

pacman-deployment.yaml

031021 1632 Buildingthe5
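To give a rough idea of the shape of these deployment files, a Deployment for the MongoDB back end with a volume backed by a persistent volume claim might look something like the sketch below (the image tag, labels and claim name are illustrative and not necessarily what the repo uses):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: pacman
spec:
  replicas: 1                       # how many pods the ReplicaSet should keep running
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.4            # illustrative tag
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-db
          mountPath: /data/db       # MongoDB data directory
      volumes:
      - name: mongo-db
        persistentVolumeClaim:
          claimName: mongo-storage  # the PVC covered in the next section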

Persistent Volume Claim

A persistent volume claim (PVC) is a request for storage. By design, container storage is ephemeral and can disappear when containers are deleted and recreated. To provide a location where data will not be lost, in our example the MongoDB data, we will leverage a persistent volume outside of the container. You can find out much more about the world of storage and persistent volumes here in the official documentation.

When you download the YAML files from GitHub, they assume you have a default StorageClass configured and ready to satisfy persistent volume claims. The YAML file will look like the below.

031021 1632 Buildingthe6

If you do not, or you have multiple storage classes and want to pick one, you can define that here using the storageClassName spec.

031021 1632 Buildingthe7
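For reference, a minimal PVC that names an explicit storage class might look like this sketch (the claim name, size and nfs-client class are illustrative; match them to your own environment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-storage
  namespace: pacman
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client   # omit this line to fall back to the default StorageClass
  resources:
    requests:
      storage: 1Gi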

RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. You will see below in the YAML file that we have a ClusterRole (non-namespaced) and a RoleBinding (namespaced); this is to enable connectivity between our front end and back end within the namespace. Once again, more detailed information can be found here.

031021 1632 Buildingthe8
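As an illustration of the non-namespaced versus namespaced split, a ClusterRole plus a RoleBinding in the pacman namespace could look like the sketch below (the rules and names here are examples only; the repo's rbac.yaml is the source of truth):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole                  # cluster-scoped, no namespace
metadata:
  name: pacman-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                  # namespaced, grants the role inside pacman only
metadata:
  name: pacman-rolebinding
  namespace: pacman
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pacman-clusterrole
subjects:
- kind: ServiceAccount
  name: default
  namespace: pacman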

Services

Next, we need to expose our app to the front end, i.e. our users, and we also need to bridge the gap between the Pac-Man front end and the MongoDB back end.

mongo-service.yaml

031021 1632 Buildingthe9

pacman-service.yaml

031021 1632 Buildingthe10
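As a sketch of what the front-end service is doing, a LoadBalancer service selecting the Pac-Man pods might look roughly like this (the port numbers and labels are assumptions; the repo's pacman-service.yaml is the source of truth):

apiVersion: v1
kind: Service
metadata:
  name: pacman
  namespace: pacman
spec:
  type: LoadBalancer    # MetalLB (or a cloud load balancer) hands out the external IP
  selector:
    app: pacman         # must match the labels on the Pac-Man pods
  ports:
  - name: http
    port: 80            # port exposed on the external IP
    targetPort: 80      # port the container listens on (an assumption here)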

OK, now that we have briefly explained the files that make up our application, let's go ahead and apply them. I don't think it actually matters which order you run these in, but I will go in the order I explained them. Running the following commands will get you up and running.

kubectl create -f pacman-tanzu/deployments/mongo-deployment.yaml -n pacman

kubectl create -f pacman-tanzu/deployments/pacman-deployment.yaml -n pacman

kubectl create -f pacman-tanzu/persistentvolumeclaim/mongo-pvc.yaml -n pacman

kubectl create -f pacman-tanzu/rbac/rbac.yaml -n pacman

kubectl create -f pacman-tanzu/services/mongo-service.yaml -n pacman

kubectl create -f pacman-tanzu/services/pacman-service.yaml -n pacman

031021 1632 Buildingthe11

If you want to delete everything we just created, you can simply find and replace "create" with "delete" and run the following commands to remove all the same components.

kubectl delete -f pacman-tanzu/deployments/mongo-deployment.yaml -n pacman

kubectl delete -f pacman-tanzu/deployments/pacman-deployment.yaml -n pacman

kubectl delete -f pacman-tanzu/persistentvolumeclaim/mongo-pvc.yaml -n pacman

kubectl delete -f pacman-tanzu/rbac/rbac.yaml -n pacman

kubectl delete -f pacman-tanzu/services/mongo-service.yaml -n pacman

kubectl delete -f pacman-tanzu/services/pacman-service.yaml -n pacman

And then finally, to confirm that everything is running as it should, we can run the following command and see all of those components.

kubectl get all -n pacman

031021 1632 Buildingthe12

From the above you will also see that we have an external IP for our MongoDB instance and our pacman front end. Let’s take that pacman IP address and put it in our web browser to play some pacman.

031021 1632 Buildingthe13

Hopefully this was helpful to somebody. It also leads into a great demo that Dean and I have been doing, where Kasten K10 comes in and protects that stateful data, the mission critical high scores that you don't want to lose. Obviously this is all out there and available, and there are many other viable demos you can use to play in your home lab and get to grips with the different components. In the next post we will finish off this series by looking at Kasten and the deployment and configuration of K10, and how simple it is to get going, even more so if you have been following along here.

Tweet me with your high scores

031021 1632 Buildingthe14

Kubernetes playground – How to Load Balance with MetalLB

In the last post, we talked about the Kubernetes context and how you can flip between different Kubernetes cluster control contexts from your Windows machine. We have also spoken in this series about how load balancing gives us better access to our application versus using a node port.

This post will highlight how simple it is to deploy your load balancer and configure it for your home lab Kubernetes cluster.

Roll your own Kubernetes Load Balancer

If you deployed your Kubernetes cluster in the cloud, the cloud provider takes care of creating load balancer instances. But if you are using bare metal for the Kubernetes cluster, you have very limited choices, which is where we are in this home lab scenario; it also gives us a choice to make and a chance to understand why. As I mentioned, this is going to use MetalLB.

Let's start with what it looks like without a load balancer on bare metal, when we are limited to NodePort or ClusterIP configurations. So I am going to create an NGINX deployment.

030521 2153 Buildingthe1

If we did not have a load balancer configured but used the following command, the service would stay in the pending state until we did have one.

kubectl expose deploy nginx --port 80 --type LoadBalancer

Installing MetalLB into your Kubernetes Cluster

To start, you can find the installation instructions here. The following commands deploy MetalLB to your cluster: they create a namespace called metallb-system, a controller, which handles IP address assignments, and a speaker, which handles the protocols you wish to use.

kubectl create namespace metallb-system

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml

# On the first install only

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

When you have run these, you should see the new metallb-system namespace and be able to run the following command.

kubectl get all -n metallb-system

030521 2153 Buildingthe2

We then need a ConfigMap to make it do something, or at least to use specific IP addresses on our network. I am using Layer 2 in my lab configuration, but there are other options that you can find here.

030521 2153 Buildingthe3
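For reference, the Layer 2 configuration for this version of MetalLB is just a ConfigMap in the metallb-system namespace; a minimal sketch looks like the following (the address range is an example, pick a free range on your own network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.169.240-192.168.169.250   # example range of spare IPs on the LAN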

Create your YAML as above for Layer 2, with a range of IP addresses available on your home lab network, and then apply it to your cluster, where config.yaml is the file containing your configuration.

kubectl apply -f config.yaml

Now when you deploy a service that requires the LoadBalancer service type:

kubectl expose deploy nginx --port 80 --type LoadBalancer

Instead of pending, this will now give you an IP address available on your home lab network, which is great if you want to access the service from outside your cluster. If we check another application I already have running in my cluster, you will see the following when you use the LoadBalancer type on a deployment.

030521 2153 Buildingthe4

And then if we describe that service, we can see that configuration.

030521 2153 Buildingthe5

I want to give another shout out to Just me and Opensource: if you consume video content as well as (or instead of) written, this guy has created an amazing playlist covering all things Kubernetes and more.

In the next post, we are going to focus on hitting the easy button for our apps using KubeApps. Things do not all need to be done in the shell; there are also UI options, and KubeApps is "Your Application Dashboard for Kubernetes".

Kubernetes playground – Context is important

In the last post, we covered an overview of Helm and the MinIO deployment to give us an option for testing later on workloads that require object storage. In this post, we are going to focus on context and how to make sure you have access from your desktop to your Kubernetes Cluster.

Context

030521 1320 Buildingthe1

Image is taken from Kubernetes.io

Context is important: you need the ability to access your Kubernetes cluster from your desktop or laptop. There are lots of different options out there, and people obviously use different operating systems as their daily drivers.

In this post we are going to be talking about Windows, but as I said, there are options out there for other operating systems. It matters even more if you are managing multiple Kubernetes clusters for different projects or for learning.

By default, the Kubernetes CLI client uses C:\Users\username\.kube\config to store the Kubernetes cluster details such as endpoint and credentials. If you have deployed a cluster from your machine, you will see this file in that location. But if you have been using, say, the master node to run all of your kubectl commands so far via SSH or other methods, then this post will hopefully help you get to grips with connecting from your workstation.

Once again, Kubernetes.io has a document covering this.

Install the Kubernetes-CLI

First, we need the Kubernetes CLI installed on our Windows Machine, I used chocolatey with the following command.

choco install kubernetes-cli

We then need to grab the kubeconfig file from the cluster, either copying it via SCP or opening a console session to the master node and copying the contents to the local Windows machine. The location of the config is listed below.

$HOME/.kube/config

If you have taken the console approach, then you will need to get the contents of that file and paste them into the config location on your Windows machine. You could run the following command, but its output contains redacted values, so copying that output to your Windows machine will not work.

kubectl config view

030521 1320 Buildingthe2

What we need is the unredacted values to copy over to our Windows machine, which you can get by running the following commands.

cd $HOME/.kube/

ls

cat config

030521 1320 Buildingthe3

Take the above, starting at apiVersion: v1 down to the bottom of the file, and copy it into your .kube directory on Windows. The process is similar for other operating systems.

C:\Users\micha\.kube\config

If you want to open the file, then you will be able to compare that to what you saw on the shell of your master node.

030521 1320 Buildingthe4

You will now be able to check in on your Kubernetes cluster from the Windows machine.

kubectl cluster-info

kubectl get nodes

030521 1320 Buildingthe5

This not only allows for connectivity and control from your Windows machine, it also allows us to do some port forwarding to access certain services from it. We can cover that in a later post.

Multiple Clusters

A single cluster is simple, and we are there with the above specifically on Windows. But accessing multiple clusters using contexts is really what you likely came here to see.

Again, there is some awesome documentation that you can easily run through.

For this post though I have my home lab cluster that we have been walking through and then I have also just deployed a new EKS cluster in AWS. The first thing to notice is that the config file is now updated with multiple clusters. Also, note I do not use notepad as my usual go-to for editing YAML files.

030521 1320 Buildingthe6

Then also notice in the same screen grab that we have multiple contexts displayed.

030521 1320 Buildingthe7
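Structurally, the kubeconfig file is just a YAML document with three lists (clusters, contexts and users) plus a current-context, which is why several clusters can coexist in one file. A trimmed sketch, with certificate data omitted and the EKS names invented purely for illustration:

apiVersion: v1
kind: Config
current-context: kubernetes-admin@kubernetes
clusters:
- name: kubernetes                     # the home lab cluster
  cluster:
    server: https://192.168.169.200:6443
    certificate-authority-data: <redacted>
- name: my-eks-cluster                 # illustrative name for the EKS cluster
  cluster:
    server: https://<eks-endpoint>
    certificate-authority-data: <redacted>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
- name: my-eks-context                 # illustrative
  context:
    cluster: my-eks-cluster
    user: my-eks-user
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <redacted>
    client-key-data: <redacted>
- name: my-eks-user                    # cloud clusters typically add an exec plugin entry here
  user: {}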

So now if I run the same commands we ran before.

kubectl cluster-info

kubectl get nodes

030521 1320 Buildingthe8

We can see that the context has been switched over; in fact, this is done automatically by the EKS commands, and I am not sure if the same happens for other cloud providers, something we will get to in later posts. But now we are on the AWS cluster and can work with it from our Windows machine. So how do we view all of the possible contexts that we may have in our config file?

kubectl config get-contexts

030521 1320 Buildingthe9

And if we want to flip between the clusters, we simply run the following command; you will then see how we switched over from the other context back into our home lab cluster.

kubectl config use-context kubernetes-admin@kubernetes

030521 1320 Buildingthe10

One thing to note is that I also store my .pem file in the same location as my config file. I have been reading about some best practices suggesting that if you have multiple config requirements you could create a folder structure with all of your test clusters, all of your development clusters, then live, and so on.

Note/update – as I have been playing a little with AWS EKS and Microsoft AKS, AWS seems to take care of cleaning up your kubeconfig file whereas AKS does not, so I found the following commands very useful for keeping that config file clean and tidy.

kubectl config delete-cluster my-cluster

kubectl config delete-context my-cluster-context

Hopefully, that was useful, and in the next post, we will take a look at the load balancer that I am using in the home lab.

Kubernetes playground – How to use and setup Helm & MinIO?

In the last post, we covered setting up dynamic shared storage with my NETGEAR ReadyNAS system for our Kubernetes storage configuration. This is what I have in my home lab but any NFS server would bring the same outcome for you in your configuration.

This post covers two areas. We will continue to talk about Kubernetes storage options, but this time object storage: I am going to use MinIO to have an object storage option in my lab, which I can use to practise some tasks and demo scenarios between Veeam Backup & Replication and Kasten, such as storing backup files. We will also cover Helm and Helm charts.

What is Helm?

Helm is a package manager for Kubernetes; it could be considered the Kubernetes equivalent of yum or apt. Helm deploys charts, which you can think of as a packaged application: a blueprint for your pre-configured application resources which can be deployed as one easy-to-use chart. You can then deploy another release of the chart with a different set of configuration values.

They have a site where you can browse all the Helm charts available and of course you can create your own. The documentation is also clear and concise and not as daunting as when I first started hearing the term helm amongst all of the other new words in this space.

How do I get helm up and running?

It is super simple to get Helm installed. You can find the binaries and download links here for pretty much all distributions, including your Raspberry Pi arm64 devices.

Or you can use the installer script; the benefit here is that the latest version of Helm will be downloaded and installed.

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

$ chmod 700 get_helm.sh

$ ./get_helm.sh

Finally, there is also the option to use a package manager: Homebrew for macOS, Chocolatey for Windows, apt on Ubuntu/Debian, and snap and pkg as well.

Helm so far seems to be the go-to way to get different test applications downloaded and installed in your cluster. Something we will also cover later is KubeApps, which gives a nice web interface to deploy your applications, although I believe it still uses Helm charts under the hood to deploy them.

MinIO deployment

I think I mentioned in a previous post that I wanted an object storage option built on Kubernetes to test out scenarios where object storage is required for exports and backups. This being a home lab automatically means we will not be doing any heavy load or performance testing, but for demos it is useful. It also means the footprint of running MinIO within my cluster is very low compared to running a dedicated virtual machine or physical hardware.

Once again the documentation from MinIO is on point. A misconception I maybe had of the Kubernetes and CNCF world was that documentation might be lacking across the board, but that is not the case at all; everything I have found has been really good.

Obviously, as we went to the trouble of installing Helm on our system, we should use the MinIO Helm chart to bootstrap the MinIO deployment into our Kubernetes cluster.

Configure the helm repo


<span style="color: #24292e; font-family: Consolas;">helm repo add minio https://helm.min.io/
</span>

Install the chart


<span style="color: #24292e; font-family: Consolas;">helm install --namespace minio --generate-name minio/minio
</span>
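If you want to tweak the deployment, the chart takes values like any other; something along the lines of the sketch below would enable persistence and expose the service through a load balancer (the key names are as I understand the legacy minio/minio chart, so double-check them against the chart's values.yaml, and pass the file to helm install with -f minio-values.yaml):

# minio-values.yaml – illustrative values only
persistence:
  enabled: true
  size: 10Gi          # backed by the default StorageClass (nfs-client in this lab)
service:
  type: LoadBalancer  # hand out an external IP via MetalLB instead of ClusterIP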

I also went through the steps to create a self-signed certificate to use; those steps can be found here.

How to get the default secret and access keys

I deployed my MinIO release into the default namespace by mistake and have not fixed it, so the following commands take that into account. First, get a list of all the secrets in the namespace; if you have a namespace exclusive to MinIO then you will see only those secrets. I added a grep to show only the MinIO secrets.

kubectl get secret | grep -i minio

030121 1807 Buildingthe1

If you have set up a self-signed or third-party certificate, then you will likely have a secret called “tls-ssl-minio”

kubectl get secret tls-ssl-minio

030121 1807 Buildingthe2

You will also have a service account token secret that will look similar to the one in my command below, although the generated names are random.

kubectl describe secret wrong-lumber-minio-token-mx6fp

030121 1807 Buildingthe3

Finally, you will have the one we need, containing the access and secret keys.

kubectl describe secret wrong-lumber-minio

030121 1807 Buildingthe4

You should notice at the bottom two data fields, access-key and secret-key, and we need their values next. If we run the following we will get those values.

kubectl get secret wrong-lumber-minio -o jsonpath='{.data}'

030121 1807 Buildingthe5

One more thing: we need to decode them. Let's start with the access key.

echo "MHo0blBReFJwcg==" | base64 --decode

030121 1807 Buildingthe6

and now the secret key

echo "aTBWMlNvbUtSMmY5MnhRQVNGV3NrWEphVTZIZ3hLT1ppVHl5MUFSdg==" | base64 --decode

030121 1807 Buildingthe7

Now we can confirm access to the front-end web interface with the following command

kubectl get svc

030121 1807 Buildingthe8

Note that I am using a load balancer here which I added later to the configuration.

030121 1807 Buildingthe9

Now with this configuration and the access and secret keys you can open a web browser and navigate to http://192.168.169.243:9000

030121 1807 Buildingthe10

You will then have the ability to start creating S3 buckets for your use cases; as you can see here, a future post will cover the use case of exporting backups to object storage using Kasten K10.

030121 1807 Buildingthe11

In the next post, I will be working on how to access your Kubernetes cluster from your Windows machine.

Kubernetes playground – How to setup dynamic shared storage

In the last three parts we covered starting from scratch and getting the Kubernetes platform ready, using some old hardware and creating some virtual machines to act as my nodes. If you don't have old hardware but still wish to build out your cluster, those virtual machines can really sit wherever they need to; for example, they could be in the public cloud, but remember this is going to cost you. My intention was to remove as many costs as possible, as the system I am using is always running in my home network anyway, acting as my backup server as well as for tasks like this. We also covered how we created the Kubernetes cluster using kubeadm and then started playing with some stateless applications and pods.

In this post we are going to start exploring the requirements around stateful workloads by setting up some shared persistent storage for stateful applications. I was also playing with local persistent volumes; you can read more about those here on the Kubernetes Blog.

Stateful vs Stateless

Stateless, which we mentioned and went through in the last post, is where the process or application can be understood in isolation; there is no storage associated with the process or application, therefore it is stateless. Stateless applications provide one service or function.

Taken from RedHat: An example of a stateless transaction would be doing a search online to answer a question you’ve thought of. You type your question into a search engine and hit enter. If your transaction is interrupted or closed accidentally, you just start a new one. Think of stateless transactions as a vending machine: a single request and a response.

Stateful processes or applications are those that can be returned to again and again. Think about your shopping trolley or basket in an online store: if you leave the site and come back an hour later, a well-configured site will remember your choices so you can easily make that purchase rather than having to pick everything into your cart again. A good description I read whilst researching this was to think of stateful like an ongoing conversation with a friend or colleague on a chat platform; it is always going to be there regardless of the time between talking. Whereas with stateless, when you leave that chat, or after a period, those messages are lost forever.

If you google "stateful vs stateless" you will find plenty of information and examples, but for my walkthrough the best way to describe stateless is through what we covered in the last post, web servers and load balancers, versus what we are going to cover here and in the next post around databases (stateful). There are many other stateful workloads, such as messaging queues, analytics, data science, machine learning (ML) and deep learning (DL) applications.

Back to the lab

I am running a NETGEAR ReadyNAS 716 in my home lab that can serve both NAS protocols (SMB & NFS) and iSCSI. It has been a perfect backup repository for my home laptops and desktop machines, and this is an ideal candidate for use in my Kubernetes cluster for stateful workloads.

I went ahead and created a new share on the NAS called “K8s” that you can see on the image below.

022821 1033 Buildingthe1

I then wanted to make sure that the folder was accessible over NFS by my nodes in the Kubernetes cluster

022821 1033 Buildingthe2

This next setting caused some strange issues until I worked out how it was affecting what we were trying to achieve. Basically, with the default setting (root squash) persistent volumes could be created, but additional folders or folder structure could not always be created; it was very sporadic, although consistent each time we tested.

Root squash is a special mapping of the remote superuser (root) identity when using identity authentication (local user is the same as remote user). Under root squash, a client’s uid 0 (root) is mapped to 65534 (nobody). It is primarily a feature of NFS but may be available on other systems as well.

Root squash is a technique to avoid privilege escalation on the client machine via suid executables (setuid). Without root squash, an attacker can generate suid binaries on the server that are executed as root on other clients, even if the client user does not have superuser privileges. Hence it protects client machines against other malicious clients. It does not protect clients against a malicious server (where root can generate suid binaries), nor does it protect the files of any user other than root (as malicious clients can impersonate any user).

A big shout out to Dean Lewis here, who helped massively in getting this up and running. He also has some great content over on his site.

022821 1033 Buildingthe3

I also enabled SMB so that I could see what was happening from my Windows machine during some of the stages. This is how we discovered the first issue, when some folders were not being created; we created them by hand and the process would get one step further each time, so that No Root Squash setting is super important.

022821 1033 Buildingthe4

Kubernetes – NFS External Provisioner

Next, we needed an automatic provisioner that would use our NFS server / shares to support dynamic provisioning of Kubernetes persistent volumes via persistent volume claims. We did work through several before we hit on this one.

The Kubernetes NFS Subdir External Provisioner enables what we need for our stateful workloads: dynamically created persistent volumes backed by the NFS share. It is deployed using a Helm command.

Note – I would also run this on all your nodes to install the NFS Client

apt-get install nfs-common


helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
 --set nfs.server=192.168.169.3 \
 --set nfs.path=/data/K8s

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now when we cover stateful applications you will understand how the magic is happening under the hood. In the next post we will look at helm in more detail and also start to look at a stateful workload with MinIO.

Kubernetes playground – How to setup stateless workloads

In the last post, we went through creating our home lab Kubernetes cluster and deploying the Kubernetes dashboard. In this post we are going to create a couple more stateless applications.

What is BusyBox

Several stripped-down Unix tools in a single executable file.

Commonly referred to as the “Swiss army knife tool in Linux distributions”

I began by just testing this following a walkthrough tutorial but then later realised that this is a great tool for troubleshooting within your Kubernetes pod.

You can find out more information here on Docker Hub.

kubectl run myshell --rm -it --image busybox -- sh

As noted in "Some things you didn't know about kubectl", the above kubectl run command is equivalent to docker run -i -t busybox sh.

When you run the above kubectl run command for BusyBox, it gives you a shell that can be used for connectivity tests and debugging your Kubernetes deployments. Kubernetes lets you run interactive pods, so you can easily spin up a BusyBox pod and explore your deployment with it.

022621 1354 Buildingthe1
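For reference, the same interactive pod expressed as a manifest rather than through kubectl run would look roughly like this sketch (you would then attach to it with kubectl attach -it myshell):

apiVersion: v1
kind: Pod
metadata:
  name: myshell
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh"]
    stdin: true    # keep stdin open so the shell is usable
    tty: true      # allocate a TTY for an interactive session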

NGINX deployment

Next, we wanted to look at NGINX. NGINX seems to be the de facto example in all the tutorials and blogs I have come across when it comes to getting started with Kubernetes. It is open-source software for web serving, reverse proxying, caching, load balancing and media streaming, and the description here goes into much more detail. I began walking through the steps below to get my NGINX deployment up and running in my home Kubernetes cluster.

kubectl create deployment nginx --image=nginx

At this point, if we were to run kubectl get pods we would see our NGINX pod in a running state.

022621 1354 Buildingthe2

Now if we want to scale this deployment, we can do so by running the following command. For the demo I am running this in the default namespace; if you were actually going to keep this and work on it, you would likely define a better namespace for the workload.

kubectl scale deploy nginx --replicas 2

022621 1354 Buildingthe3

You can manually scale your pods as you can see above, or you can use the kubectl autoscale command, which lets you set a minimum and maximum number of pods. This should be the moment where you go "hang on, this sounds like where things get really interesting" and see the reason Kubernetes exists at all. By running the following commands and configuring autoscale we get the minimum, and if we were to put load onto these pods then more would be provisioned dynamically to handle it. I was impressed, and the lightbulb was flashing in my head.

022621 1354 Buildingthe4
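The same behaviour can also be captured declaratively with a HorizontalPodAutoscaler; a minimal sketch is below (the CPU target and replica bounds are examples, and CPU-based autoscaling assumes the metrics-server is installed in the cluster):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2                       # never drop below two pods
  maxReplicas: 5                       # cap the scale-out
  targetCPUUtilizationPercentage: 80   # add pods when average CPU passes 80%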

OK, so we have pods that will serve our web traffic, but for this to be useful we need to create a service and expose it to the network. This can be done with the following command; we are going to use NodePort to begin with, as we did with the Kubernetes dashboard in the previous post.

kubectl expose deployment nginx --type NodePort --port 80

With this command you can find the NodePort you need to use to access the NGINX deployment on the worker nodes.

kubectl describe svc nginx

022621 1354 Buildingthe5

We then run the following command to understand which node address we need to connect to

kubectl get pods --selector="app=nginx" --output=wide

And we can see from the below that node is either node2 or node3

022621 1354 Buildingthe6

Open a web page to your worker IP address:NodePort and you should see the NGINX welcome page.

022621 1354 Buildingthe7

OK, so all good: we have our application up and running and we can start taking advantage of it. But we should also cover how you can take what you have just created and generate YAML files from it, so you can recreate the same configuration and deployment again and again.

We created a deployment, so we capture it by running the command below, outputting to a location of your choice.

kubectl get deploy nginx -o yaml > /tmp/nginx-deployment.yaml

Same for the service:

kubectl get svc nginx -o yaml > /tmp/nginx-service.yaml

Then you can use these YAML files to deploy and version your application.

kubectl create -f /tmp/nginx-deployment.yaml

kubectl create -f /tmp/nginx-service.yaml

If created from YAML, you can also delete with:

kubectl delete -f /tmp/nginx-deployment.yaml

kubectl delete -f /tmp/nginx-service.yaml

We can also delete what we have just created by running the two following commands

kubectl delete deployment nginx

kubectl delete service nginx

I think that covered quite a bit. Next we are going to get into persistent storage and some of the more stateful applications, such as databases, that need that persistent storage layer. As always, please leave me feedback; I am learning with the rest of us, so any pointers would be great.

 

Kubernetes playground – Setting up your cluster

In the last post, we walked through the pretty basic way of getting our physical and virtual environment ready for our Kubernetes cluster. This post will cover the Kubernetes cluster setup steps.

Kubernetes Cluster

At this stage, we just have three virtual machines running Ubuntu on a flat layer 2 network. Now things get a little more interesting. In one of the last steps of part 1 we touched on installing kubeadm, and this is where I am going to focus my installation and configuration; however, you could take the more challenging approach and build everything out from scratch.

What is kubeadm?

Well, it's the easy way to get Kubernetes up and running, and it is ideal for our learning environment and the BYOH (bring your own hardware) option, since many of us have varied kit in our home labs, if we still have them. It is also an easy way for existing and more advanced users to automate setting up a cluster and testing their apps. We should have installed kubeadm on each of our nodes in the previous article, but if not, make sure that is the case at this point.

I am also going to scatter links throughout this series highlighting the most useful resources I am using to learn more about Kubernetes; specifically for kubeadm, you can find out more here.

We have not covered the components or services that make up a Kubernetes cluster, but my understanding is that the master node is where the API server and etcd (the cluster database) reside. The API server is also what the kubectl CLI tool communicates with.

And so, we begin….

Firstly, we need to initialise the master, or control-plane, node (I don't know if these are always the same or whether they can be different or exclusive; we are all learning here). Run the following command on your master node:

kubeadm init

What you should see on the screen after a few minutes is the following confirmation and detail; this is what we will then use to add our worker nodes into the cluster. The directory and permission changes at the top of the output are to ensure that a non-root user can use kubectl.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.

Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:

/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Those a little more familiar may have noticed that I have not installed a pod network. This is because I am using host networking at the virtualisation layer and believe this is the easy option for now, but I will be exploring what pod networks are and what they give us the ability to do.

Adding Kubernetes worker nodes

OK, if that was your first time getting to this stage you might be thinking "well, that wasn't too bad", and it really isn't, although I also don't believe this is the difficult part just yet. Next, we need to add our worker nodes into the Kubernetes cluster. My init gave me the following output, so in my environment it is as simple as SSHing to each of my worker nodes and copying the command to each.

sudo kubeadm join 192.168.169.200:6443 --token r46351.5r6n6nquviz9mu67 --discovery-token-ca-cert-hash sha256:4db470e5c4caa58ce43238951c88fc8b0416267e073306d1144769e787c3b516

Another thing I found useful: if you were to lose that token, you can retrieve it by running

kubeadm token list

Once all your workers are added to the cluster, you can run the following to ensure you have the required number of nodes.

kubectl get nodes

022521 1156 Buildingthe1

You can also check that the Kubernetes master and cluster are running with the following command.

kubectl cluster-info

022521 1156 Buildingthe2

Deploying the Kubernetes Dashboard

Job done, right? We now know Kubernetes! Hmmm, maybe not; this is just the start. As much as I am trying, I still love a good UI experience, so the Kubernetes Dashboard was the first deployment I wanted to tackle to get something up and running in the lab.

I must also give this guy a huge shoutout for his content; I have been glued to YouTube for most of 2020 and his videos have been great, with concise overviews and demos of varying Kubernetes topics. In particular, this video demo walks through the exact steps I am going to share here. OK, let's get started. We are going to create resources from a file using the following command.


kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

If you follow the link to the YAML file above you will see what this is going to create and where. In short, it creates a namespace, service account, service, secrets, config maps, a role, a cluster role and deployments.

If you now run the following command you will get a list of your namespaces; you should see default and kubernetes-dashboard.

kubectl get namespace

Next, we want to make sure everything is looking good with the deployment by running the following command; this will show you your newly created pods, services, deployments and replicasets.

kubectl -n kubernetes-dashboard get all

022521 1156 Buildingthe3

You might have noticed above that the service/kubernetes-dashboard is using the NodePort type rather than, I believe, the default ClusterIP. With a ClusterIP you can only access the service from within the cluster, and with the dashboard being a web interface, I would have no way to reach the web page. I want to expose this via the NodePort option, which we will walk through next; that then means that from my Windows machine, or any machine on my network, I can reach the dashboard web interface.

You can be more granular and just look into the service we want to change for this by running

kubectl -n kubernetes-dashboard describe service kubernetes-dashboard

022521 1156 Buildingthe4

Again, the above has already been changed. This is done by editing the service (kubectl -n kubernetes-dashboard edit service kubernetes-dashboard); I am using Windows Terminal, which nicely gives me a Notepad option to change the file, but if you are just in the shell it will open vi for your edits. I have highlighted the change you need to make, from ClusterIP to NodePort.

022521 1156 Buildingthe5
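After the edit, the relevant part of the service spec ends up looking roughly like the sketch below (the nodePort value is assigned automatically from the 30000-32767 range unless you set one explicitly; 30443 here is just an example):

spec:
  type: NodePort        # changed from ClusterIP
  ports:
  - port: 443           # port inside the cluster
    targetPort: 8443    # port the dashboard container listens on
    nodePort: 30443     # example value; Kubernetes picks one if omitted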

Confirm the change by running the get service command printed above.

At this point you have the default dashboard service account with very few privileges; confirm with the following.

kubectl -n kubernetes-dashboard get sa

Confirm more details, and let's make sure we can at least connect using token authentication.

kubectl -n kubernetes-dashboard describe sa kubernetes-dashboard

Take the token name and use it at the end of the following command; you will then get the token used to authenticate on the web page.

kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-token-m5gw8

022521 1156 Buildingthe6

Then open your browser and put in your address bar your node port address and then copy that token to get access into your Kubernetes dashboard.

022521 1156 Buildingthe7

You will notice that this is restricted, so we need to create a better service account with more access and control.

022521 1156 Buildingthe8

Another shout out to "Just me and opensource" on YouTube. Not only does he make some awesome video content, but he also makes his YAML files available, which is great for someone like me who is still very much a beginner when it comes to learning this new way. Let's grab the files we need.

git clone https://github.com/justmeandopensource/kubernetes

Navigate to the dashboard folder and you will see an sa_cluster_admin.yaml file; modify it and make sure it looks like the below:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

Then we can create our new service account based on that YAML file.

kubectl create -f sa_cluster_admin.yaml

Now if we check all available service accounts, we will see the new service account created and listed.

kubectl -n kubernetes-dashboard get serviceaccounts

Let's get that token name again with the following:

022521 1156 Buildingthe9

We then use that token name to get the secret that can be used on the dashboard.

022521 1156 Buildingthe10

We will then have more rights when we log in to the dashboard.

Next up we are going to take a look at how I navigated the lack of enterprise storage in my home lab and how I was able to get some persistent volumes for stateful data.

Kubernetes playground – How to choose your platform

This series documents my journey in creating a Kubernetes cluster in my home lab using, basically, the hardware I have available. I do also have some Raspberry Pi options, but for the purpose of this series I am going to focus on the x86 architecture. The reason for doing this is purely for learning; I fully expect the majority of companies to take the easy button and leverage one of the many managed Kubernetes services, possibly in the public cloud or from a service provider, because it really does offload the overhead of management and administration to the service provider rather than it falling to you. But if you are going to learn, then I at least find that getting your hands dirty is a good way to start.

I started my learning efforts around cloud native and Kubernetes back in the summer of 2019 and at the time only managed to release this overview blog on the new world of Kubernetes – containers and orchestration.

Physical hardware

First of all, we need somewhere to host our Kubernetes cluster. During the summer of 2020, mid pandemic, I actually got rid of the majority of the home lab but made sure I was left with one of my trusted HP ML110 servers. I packed this full of disks and built a backup server that lives in my garage on top of the beer fridge. As a technologist you always feel the need to tinker with new technologies, but I only had this Windows 2019 HP server in my possession, along with a lot of Raspberry Pis performing various home robot and automation projects.

But as many will be aware, this Windows 2019 server could also act as a Hyper-V host where I could run a few virtual machines, as well as acting as my Veeam Backup & Replication server (I mean, we all have those at home to look after all of that important data, right? If you don't, shame on you).

I set about enabling the relevant Windows features and roles on my server, which doesn't take long, and then we can start configuring our host platform for our Kubernetes cluster.

Virtualisation

In an ideal world I would have had several vSphere ESXi hosts and no resource constraints, but where is the fun in that?

With the Hyper-V feature and role installed on the server, it was time to configure the network for Hyper-V. This machine has two physical network adapters, both going into the same physical switch in my garage.

022521 0913 Buildingthe1

LAN is used for access and the Hyper-V Network is what is used as the Lab virtual switch you see below.

022521 0913 Buildingthe2

I then needed to create my master node and two worker nodes.

Virtual Machine configuration

Next up, we created three virtual machines:

Role         Name   IP Address       CPU   Memory
Master node  Node1  192.168.169.200  2     2GB
Worker node  Node2  192.168.169.201  2     4GB
Worker node  Node3  192.168.169.202  2     4GB

Each machine was configured the same apart from the master node memory being only 2GB. With this configuration, and the host already running my backup operations at home, the server sits at around 85-90% capacity, so everything runs but we are close to the ceiling.

022521 0913 Buildingthe3

The three VMs all have Ubuntu 20.04 LTS installed. The next section walks through the installation steps needed to be ready to create the Kubernetes cluster.

Getting Kubernetes Ready – Installation Steps

All our hosts need docker and Kubernetes tools: kubeadm, kubectl and kubelet.

Before we get those new installations, we should first start by making sure our systems are all up to date by running apt-get update

The following commands are what I ran on all nodes, for reference.


apt-get update

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker


sudo apt-get update && sudo apt-get install -y \
  apt-transport-https ca-certificates curl software-properties-common gnupg2

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

apt-get update

apt-get install -y kubelet kubeadm kubectl

sudo swapoff -a

vi /etc/fstab (you need to comment out #/swapfile) (escape and :wq)

sudo rm -f /swapfile

vi /etc/sysctl.conf

(add the following line; I added it to the bottom of the file with a comment #Kubernetes for reference, then escape and :wq)

net.bridge.bridge-nf-call-iptables = 1

Enable the Docker service with:

sudo systemctl enable docker.service

Next, we will talk about some of the Day 2 operations I tackled with the cluster in place. This includes deploying and configuring the Kubernetes Dashboard, deploying Kasten K10 with a focus on making sure I had the capability of backing up applications within my cluster, and some more usable Day 2 configurations.
