Introducing Kubestr – A handy tool for Kubernetes Storage

My big project over the last month has not only been getting up to speed with Kubernetes, but has also included a parallel effort around Kubernetes storage and an open-source project that has been in development and is released today. In this post we are going to touch on how to get going with Kubestr, a handy set of tools to help you identify, validate, and evaluate your Kubernetes storage.

The Challenge

The challenge with Kubernetes storage is that it is not all that easy, and many of the tasks Kubestr helps with are very manual today. The adoption of CSI drivers and the choice of storage available within our Kubernetes clusters are growing fast. Kubestr can validate that a CSI driver is configured correctly for snapshots, which in turn means we can use data protection methods within our cluster. Benchmarking storage is another hard task: it can be done without Kubestr, but it is a potential pain to set up and it takes time. Kubestr gives us an easy button for evaluation.

With so many storage options out there, we want to make sure we are using the right storage for the right task. You can always pay for the most expensive disk, especially in the public cloud, but let's make sure you actually need it and don't overspend. Instead of spending your time building benchmarking tools manually, Kubestr saves you time and gives you a better understanding of, and visibility into, your storage options.

You can find out more here on the Kasten by Veeam blog explaining in more detail the challenges and the reasons Kubestr was born.

Getting Started with Kubestr

We all use different operating systems to manage our Kubernetes clusters. Kubestr is available for Windows, macOS and Linux, and you can find links to these releases as well as the source code here.

Once you have this installed on your OS, the first command I suggest is the following (I am running Windows). It shows the simplicity of the command line as well as the additional available commands.

.\kubestr.exe --help

Kubernetes

Identify your Kubernetes Storage options

The first step this handy little tool can help you with is giving you visibility into the Kubernetes storage options available to you. I am running it below against an Amazon EKS cluster using the Bottlerocket OS on the nodes, where I have also installed the AWS EBS CSI driver and snapshot capabilities, which are not deployed by default. My cluster is new and has been configured correctly, but this tool will highlight when things are not configured: maybe you have the storage class available but no volume snapshot class, or maybe you have multiple storage classes attached and some are not being used, which could highlight storage you could save money on by removing.

.\kubestr.exe

032921 1559 Introducing2

Validate your Storage

Now that we have our storage class and our volume snapshot class, we can run a check against the CSI driver to confirm it is configured correctly. If we run the same help command with the csicheck command, you get the following options.

032921 1559 Introducing3

If we run it against our Kubernetes cluster, storage class and volume snapshot class, the image below shows the process: creating an application, taking a snapshot, restoring the snapshot and confirming that the configuration is complete.

032921 1559 Introducing4

.\kubestr.exe csicheck -s ebs-sc -v csi-aws-vsc

032921 1559 Introducing5

Evaluate your Storage

Obviously, most people will have access to more than one Kubernetes cluster; to run against additional clusters you simply change the kubectl config context to the cluster you would like to test against. In this section, we want to look at the options for evaluating your Kubernetes storage. The walkthrough is very similar to the csicheck we covered above, except there is no restore; instead we get performance results from FIO (Flexible I/O Tester).

032921 1559 Introducing6

Let’s start with the help command to see our options.

.\kubestr.exe fio --help

032921 1559 Introducing7

Now we can run a test against our storage class with the following and default configurations as listed above.

.\kubestr.exe fio -s ebs-sc

032921 1559 Introducing8

Now we can cater more to specific workloads by using different file sizes for the tests.

.\kubestr.exe fio -s ebs-sc -z 400Gi

032921 1559 Introducing9

Then we can output this to JSON. This is where the community can help: by extracting that JSON we can build better reporting on all of the results, so that people can understand their storage options without having to run these tests manually on their own clusters.

.\kubestr.exe fio -s ebs-sc -z 400Gi -o json


.\kubestr.exe fio -s ebs-sc -z 400Gi -o json > results.json

I won’t post the whole JSON but you get the idea.

032921 1559 Introducing10

Finally, you can also bring your own FIO configurations; you can find these open-source example files here.

#BYOFIO - demonstrates how to read backwards in a file.

.\kubestr.exe fio -s ebs-sc -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\Kubestr\fio\examples\backwards-read.fio"


#BYOFIO - fio-seq-RW job - takes a long time!


.\kubestr.exe fio -s ebs-sc -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\Kubestr\fio\examples\fio-seq-RW.fio"

I have just uploaded a quick lightning talk I gave at KubeCon EU 2021 on this handy little tool.

My next ask is simple: please go and give it a go, then give us some feedback.

032921 1559 Introducing11

Kubernetes, How to – AWS Bottlerocket + Amazon EKS

Over the last week or so I have been diving into the three main public clouds, covering Microsoft Azure Kubernetes Service, Google Kubernetes Engine and Amazon Elastic Kubernetes Service. We are heading back to Amazon EKS for this post, focusing on a lightweight, container-focused, open-source Linux operating system that will be the node operating system in our EKS cluster.

What is Bottlerocket?

“Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.”

Kubernetes

Bottlerocket was released around a year ago, in March 2020, as an operating system designed for hosting Linux containers. The key areas of focus and improvement for Bottlerocket were enhancing security, ensuring the instances in the cluster are identical, and providing good operational behaviours and tooling. This is why I wanted to look into it a little deeper on my learning curve around Kubernetes and cloud-native workloads.

Security-focused

Key ingredients when focusing on security, regardless of running on-premises or in the public cloud, are reducing the attack surface, having verified software and images, and enforcing permissions. Bottlerocket does not ship SSH or many other components, which removes a lot of the security headaches we see with traditional VM operating systems. The attack surface is further reduced by hardening the operating system with position-independent executables, relocation read-only linking, and building all first-party software with memory-safe languages like Rust and Go.

Open Source

Bottlerocket is also fully open source: its own components are written in Rust and Go, alongside the Linux kernel and some other open-source components, all under the MIT or Apache 2.0 licences.

Another interesting angle is that not only is Bottlerocket itself open source, the roadmap is open source too. That lets you see what is coming and pin your efforts on a container-focused OS that you know is moving in the right direction.

You can find more of a description here as well from the official AWS documentation.

EKS + Bottlerocket

A few posts back we covered EKS and deployment using the AWS CLI, here we are going to walk through creating an EKS cluster using the Bottlerocket OS. With all the benefits listed above about Bottlerocket, I wanted to explore the use case for running the Bottlerocket OS as my nodes in an EKS cluster.

In the next section, we walk through how I did this using the AWS CLI. I was also intrigued that, because this is a lightweight open-source operating system, there is no licence fee for the OS; you only pay for the EC2 instances and AWS EKS.

Now don’t get me wrong, Bottlerocket is not the first and will not be the last container-optimised operating system, nor is AWS the first company to build one on Linux. The first and most notable would be CoreOS; when we think of container-optimised operating systems, we think of a small, stripped-down version of Linux.

The final thing I will mention is that Bottlerocket can perform automated OS updates seamlessly. It keeps two identical OS partitions on the disk; an update is applied only to the inactive partition, and once it completes without errors the partitions are swapped. This opens up possibilities for updates, rollbacks and simply keeping the lights on for the workloads we need to serve.

How to create your Kubernetes Cluster

That is enough theory for one day, but hopefully, that gives you a good grasp on some of the benefits and reasons why this little OS is popping up more and more out there in the wild a year after its launch and release.

To begin we are going to create a new key pair using the following command.

#Create a keypair


aws ec2 create-key-pair --key-name bottlerocket --query "KeyMaterial" --output text > bottlerocket.pem

Next, we are going to modify this YAML file to suit your requirements. I have labelled some of the key parts that you may wish to change, and I will make sure this YAML is stored in the repository I have been building up from these learning posts. I have not highlighted the AMI family; this is obviously bottlerocket, and if you run through the UI it becomes clear why it is chosen. You will also notice the publicKeyName that we created in the previous step.

032821 1630 Gettingstar2
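For reference, a minimal Bottlerocket cluster config looks roughly like the following; the cluster name, node sizing and key pair name below are placeholders rather than the exact values in my file, so adjust them to your own environment.

# Assumed example eksctl ClusterConfig for a Bottlerocket node group
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mc-bottlerocket        # assumed cluster name
  region: eu-west-2            # assumed region
nodeGroups:
  - name: ng-bottlerocket
    instanceType: t3.small     # assumed instance size
    desiredCapacity: 3
    amiFamily: Bottlerocket
    ssh:
      allow: true
      publicKeyName: bottlerocket   # the key pair created above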

Then we need to create our cluster based on our YAML cluster configuration file above. You can find more information here. You can see I have added how long this took in the comments and this will also be stored in the above repository.

#Create EKS Cluster based on yaml configuration (16 mins)


eksctl create cluster --config-file "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\bottlerocket-cluster.yaml"

When the above command is completed you will be able to confirm this with the following command.

#Confirm you have access to your new EKS Cluster


kubectl get nodes

032821 1630 Gettingstar3

But the above command just looks the same as it does for any OS being used as the node operating system.

#The above doesn't show your OS image used so run the following to confirm Bottlerocket is being used.


kubectl get nodes -o=wide

032821 1630 Gettingstar4

Now you can go about deploying your workloads in your new Kubernetes cluster. I have not found any limitations so far, and in a later blog I will cover installing the CSI driver and then deploying Kasten K10 into my EKS cluster to start protecting my stateful workloads.

Getting started with Amazon Elastic Kubernetes Service (Amazon EKS)

Over the last few weeks, since completing the 10-part series covering my home lab Kubernetes playground, I have started to look more into Amazon Elastic Kubernetes Service (Amazon EKS), a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.

I will say that the theme of “this is not that hard” continues here, if anything even more so, as you would probably expect when you start looking into managed services. Don’t get me wrong, I am sure that running multiple clusters and hundreds of nodes might change that perception, although the premise stays the same.

Pre-requisites

I am running everything on a Windows machine, but as you can imagine everything we talk about can be run on Linux, macOS and of course Windows. In some places, it can also be run in a Docker container.

AWS CLI

Top of the tree is the management CLI to control all of your AWS services. Depending on your OS, you can find the instructions here.

031921 1226 Gettingread1

The installation is straightforward once you have the MSI downloaded. Just follow these next few steps.

031921 1226 Gettingread2

Everyone should read the license agreement. This one is a short one.

031921 1226 Gettingread3

031921 1226 Gettingread4

031921 1226 Gettingread5

031921 1226 Gettingread6

Confirm that you have installed everything successfully.

031921 1226 Gettingread7

Install kubectl

The best advice is to check here for the kubectl version to use with AWS EKS; for stable working conditions you need a supported version of kubectl installed on your workstation. If you have been playing a lot with kubectl you may have a newer version depending on your cluster; my workstation is using v1.20.4, as you can see below. Note that it is the client version you need to focus on here; the second line (“Server Version”) contains the apiserver version.

031921 1226 Gettingread8

My suggestion is to grab the latest MSI here.

Install eksctl CLI

This is what we will specifically be using to work with our EKS cluster; the official AWS documentation can be found here. There are various OS options, but since we are on Windows we will install eksctl using Chocolatey.

031921 1226 Gettingread9

IAM & VPC

I am not going to cover this in depth, as it would make this a monster post, but you need an IAM user with permissions that allow you to create and manage EKS clusters in your AWS account, and you need a VPC configuration. For lab and education testing, I found this walkthrough very helpful.

Let’s get to it

Now we have our prerequisites we can begin the next easy stages of deploying our EKS cluster. We will start by configuring our workstation AWS CLI to be able to interact with our AWS IAM along with the region we wish to use.

031921 1226 Gettingread10

Next, we will use eksctl to build out our cluster; the following command is what I used for test purposes. Notice that we will not have SSH access to our nodes because we did not specify it, but I will cover how to add that later. The command creates a cluster called mc-eks in the eu-west-2 (London) region with a standard node group using t3.small instances (see the example after the screenshot below). This is my warning shot: if you do not specify a node type it will use m5.large, and for those using this for education things will get costly. Another option, to really simplify things, is to run eksctl create cluster on its own, which will create an EKS cluster in the default region we configured with the AWS CLI, with one node group of two of those monster nodes.

031921 1226 Gettingread11
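For reference, that test command looks roughly like the following; treat the exact flags as an approximation and adjust them to your own needs.

#Assumed test cluster command – small nodes, no SSH access
eksctl create cluster --name mc-eks --region eu-west-2 --nodegroup-name standard --node-type t3.small --managed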

Once you are happy you have the correct command then hit enter and watch the cluster build start to commence.

031921 1226 Gettingread12

If you would like to understand what the above is doing, you can head into your AWS management console, locate CloudFormation, and there you will see the progress of your new EKS stack being created.

031921 1226 Gettingread13

When this completes you will have your managed Kubernetes cluster running in AWS and accessible via your local kubectl. Because I also wanted to connect via SSH to my nodes, I went with a different EKS build-out for longer-term education and plans. Here is the command I run when I require a new EKS cluster. It looks similar to what we had above, but when I created the IAM role I also wanted an SSH key so I could connect to my nodes; this is reflected in --ssh-access being enabled and the --ssh-public-key that is used to connect. You will also notice that I am creating the cluster with 3 nodes, with a minimum of 1 and a maximum of 4. There are lots of other options you can use when creating the cluster, including Kubernetes versions.

eksctl create cluster --name mc-eks --region eu-west-2 --nodegroup-name standard --managed --ssh-access --ssh-public-key=MCEKS1 --nodes 3 --nodes-min 1 --nodes-max 4

031921 1226 Gettingread14

Accessing the nodes

If you followed the above and you have the PEM file from when you created the key, you can now SSH into your nodes using a command similar to the one below, making sure you use the correct EC2 instance and the location of your PEM file.

ssh ec2-user@ec2-18-130-232-27.eu-west-2.compute.amazonaws.com -i C:\Users\micha\.kube\MCEKS1.pem

In order to get the public DNS name or public IP, you can run the following command. Note that I am filtering to only show m5.large because I know these are the only instances I have running with that EC2 instance type.

aws ec2 describe-instances --filters Name=instance-type,Values=m5.large

If these are the only machines you have running in the default region we provided, then you can just run the following command.

aws ec2 describe-instances

Accessing the Kubernetes Cluster

Finally, we just need to connect to our Kubernetes cluster. When the cluster creation command we ran earlier finishes, you will see output like the below.

031921 1226 Gettingread15

We can then check access:

031921 1226 Gettingread16

eksctl created a kubectl config file in ~/.kube, or added the new cluster's configuration to an existing config file in ~/.kube. If you already had, say, a home lab in your kubectl config then you can see this or switch to it using the commands below (sketched after the screenshot, and also covered in a previous post about contexts).

031921 1226 Gettingread17
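The commands in question are the standard kubectl context ones, along these lines; substitute your own context name for the placeholder.

#List the contexts in your kubeconfig and switch between them
kubectl config get-contexts
kubectl config use-context <context-name>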

The final thing to note is that this is costing you money while it is running, so my advice is to get quick at deploying and destroying this cluster: use it for what you want and need to learn, and then destroy it (see the example below). This is why I still have a Kubernetes cluster at home that costs me nothing other than keeping it available.

031921 1226 Gettingread18
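Tearing the cluster down again is a single eksctl command; assuming the same cluster name and region as above, it looks like this.

#Destroy the EKS cluster (and its CloudFormation stacks) when you are finished
eksctl delete cluster --name mc-eks --region eu-west-2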

Hopefully this will be useful to someone. As always, I am open to feedback, and if I am doing something not quite right I am happy to be educated by the community so we can all learn.

Kubernetes playground – Backups in a Kubernetes world

This post wraps up the 10-part series on getting started with my hands-on Kubernetes learning journey. The idea was to touch on a lot of areas without going deep into theory; much of that theory I have picked up through the various learning assets listed here. In the previous posts we created a platform for our Kubernetes cluster to run on and touched on stateless and stateful applications, load balancers and object storage, amongst other topics. We have only scratched the surface, though, and I fully intend to keep documenting the public cloud and the managed Kubernetes services that are available.

In this post we wrap the series up by talking about data management, and what better way to attack this than to cover the installation and deployment of K10 in our lab to help with our lab backups and more (the "more" we can get into in another series, and a potential video series). After spending the time getting up and running you will want to spin that cluster up and down, so it makes sense to store some backups to get things back, and at the very least to keep the data protection angle in the back of your mind as we all navigate this new world.

Everything Free

031321 1625 Buildingthe1

First of all, everything so far in this series has used free tools and products, and we continue that here with Kasten K10 Free Edition. There are a few ways to take advantage of it: it covers you for 10 worker nodes and it is free forever, which is ideal for testing and home lab learning scenarios where a lot of us are right now. This mantra has been the case at Veeam for a long time; there is always a free tier available with Veeam software. How do you get started? On the page above there are two options, both of which deserve more coverage in another post: the Test Drive option, which walks you through getting Kasten K10 up and running in a hands-on-lab type environment without needing your own home lab or cloud access to a Kubernetes cluster, and the Free Edition itself, which can be obtained from the cloud marketplaces. I have also written about this in one of my opening blogs for Kasten by Veeam.

Documentation

031321 1625 Buildingthe2

Another thing I have found is that the Kasten K10 documentation is good and thorough. Don't worry, it's not thorough because the product is hard; it details the install options and process for each of the well-known Kubernetes distributions and platforms, and then the specific details you may want to consider, from the home lab user through to the enterprise paid-for product, which includes the same functionality but with added enterprise support and a custom node count. You can find the link to the documentation here, which is where the steps I am going to run through ultimately come from.

Let’s get deploying

First, we need to create a new namespace.

kubectl create namespace kasten-io

We also need to add the Helm repo for Kasten K10, which we can do by running the following command.

helm repo add kasten https://charts.kasten.io/

We should then run a pre-flight check on our cluster to make sure the environment is able to host the K10 application and perform backups of our applications. This is documented under Pre-Flight Checks; it will create and clean up a number of objects to confirm everything will run when we come to install K10 later on.

curl https://docs.kasten.io/tools/k10_primer.sh | bash

This command should look something like the following when you run it. It checks for access to your Kubernetes cluster using kubectl, access to Helm for deployment (which we covered in a previous post), and validates that the Kubernetes settings meet the K10 requirements.

031321 1625 Buildingthe3

Continued

031321 1625 Buildingthe4

Installing K10

If the above did not come back with errors or warnings, we can continue to install Kasten K10 into our cluster. This command leverages the MetalLB load balancer we covered in a previous post to give us network access to the K10 dashboard later on; you could also use a port forward to gain access, which is the default behaviour without the additional externalGateway option in the following helm command.

helm install k10 kasten/k10 --namespace=kasten-io \
  --set externalGateway.create=true \
  --set auth.tokenAuth.enabled=true

Once this is complete you can watch the pods being created and, in the end, when everything has completed successfully you will be able to run the following command to see the status of our namespace.

kubectl get all -n kasten-io

031321 1625 Buildingthe5

You will see from the above that we have an external IP on one of our services: with our configuration, service/gateway-ext should be using the LoadBalancer type and should have an address from the range you configured in MetalLB, accessible on your network. If you are running this on a public cloud offering it will use the native load balancing capabilities and also give you an external-facing address; depending on your public cloud configuration you may or may not have to make further changes to enable access to the K10 dashboard, something we will cover in a later post.

Upgrading K10

Before we move on, we also want to cover upgrades (again, in more detail later). Every two weeks there is an update release available, so being able to run an upgrade to stay up to date with new enhancements is important. The following command makes this quick and easy.

helm upgrade k10 kasten/k10 --namespace=kasten-io \
  --reuse-values \
  --set externalGateway.create=true \
  --set auth.tokenAuth.enabled=true

Accessing the K10 Dashboard

We have confirmed above the services and pods are all up and running but if we wanted to confirm this again we can do so with the following commands.

Confirm all pods are running

kubectl get pods -n kasten-io

031321 1625 Buildingthe6

Confirm your IP address for dashboard access

kubectl get svc gateway-ext --namespace kasten-io -o wide

031321 1625 Buildingthe7

Take the external IP listed above and put it into your web browser in the form http://192.168.169.241/k10/#, and you will be greeted with the following sign-in and token authentication request.

031321 1625 Buildingthe8

To obtain that token, run the following commands against the default service account created with the deployment. If you require further RBAC configuration then refer to the documentation listed above.

kubectl describe sa k10-k10 -n kasten-io

031321 1625 Buildingthe9

kubectl describe secret k10-k10-token-b2tnz -n kasten-io

031321 1625 Buildingthe10

Use the above token to authenticate and then you will be greeted with the EULA, fill in the details, obviously read all the agreement at least twice and then click accept.

031321 1625 Buildingthe11

You will then see your Kasten K10 cluster dashboard, showing your available applications, policies, and what snapshots and exports (backups) you have, with a summary and overview of the jobs that have run down below.

031321 1625 Buildingthe12

The next series of posts will continue the theme of learning Kubernetes, and we will get back to the K10 journey as well, since we will want and need it as we test more and more stateful workloads that require backup functionality. There is also a lot of other cool tech and features within the product, which is the same product regardless of whether it is the free or enterprise edition.

Hope the series was useful, any feedback would be greatly appreciated. Let me know if it has helped or not as well.

Kubernetes playground – How to deploy your Mission Critical App – Pacman

The last post focused a little more on applications, not so much on the difference between stateful and stateless but on application deployment: deploying KubeApps and using it as an application dashboard for Kubernetes. This post focuses on a deployment that is, firstly, "mission critical" and that contains both a front end and a back end.

Recently Dean and I covered this in a demo session we did at the London VMUG.

I would also like to add that the example Node.js application and MongoDB back end were first created here. Dean also has his GitHub repository, which is where the YAML files we are going to focus on live.

“Mission Critical App – Pac-Man”

Let's start by explaining a little about our mission critical app: it is an HTML5 Pac-Man game with Node.js as the web front end and a MongoDB database back end to store our high scores. You can find out more about how it was built at the first link above.

Getting started

Over the next few sections, we will look at the building blocks to create our mission critical application. We are going to start by creating a namespace for the app.

You can see here that we do not have a pacman namespace.

031021 1632 Buildingthe1

Let’s create our pacman namespace

kubectl create namespace pacman

031021 1632 Buildingthe2

The next stage is to download the YAML files to build out our application, using the following command.

git clone https://github.com/saintdle/pacman-tanzu.git

Then you could simply run each of those YAML files to get your app up and running (one warning: you will need a load balancer in place, but if you followed the MetalLB post you will already be in a good spot).

You should now have a folder called pacman-tanzu with the following contents to get going.

031021 1632 Buildingthe3

We will now take a look at those YAML files and explain a little about each one and what they do.

Deployments

A Deployment provides declarative updates for Pods and ReplicaSets. This is where we define the Pods we wish to deploy and how many of each we need. In the deployments folder you will see two files, one referring to MongoDB and one referring to Pac-Man. Notice the replica counts on both deployments, and also that the MongoDB deployment references a persistent volume claim, which we will cover next.

mongo-deployment.yaml

031021 1632 Buildingthe4

pacman-deployment.yaml

031021 1632 Buildingthe5

Persistent Volume Claim

A persistent volume claim (PVC) is a request for storage. By design, container storage is ephemeral and can disappear when containers are deleted and recreated. To give MongoDB a location where data will not be lost, we will leverage a persistent volume outside of the container. You can find out much more about the world of storage and persistent volumes in the official documentation.

When you download the YAML files from GitHub, they assume that you have a default StorageClass configured and ready to satisfy persistent volume claims. The YAML file will look like the below.

031021 1632 Buildingthe6

If you do not have a default, or you have multiple storage classes you wish to choose between, you can define the one to use here with the storageClassName spec, as in the sketch after the screenshot below.

031021 1632 Buildingthe7
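As a rough sketch (the claim name, size and storage class below are illustrative placeholders rather than the exact values in the repository), a PVC with an explicit storageClassName looks like this:

# Assumed example PVC with an explicit storageClassName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-storage            # assumed claim name
spec:
  storageClassName: nfs-client   # assumed storage class from the earlier NFS post
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # assumed size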

RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. You will see in the YAML file below that we have a ClusterRole (non-namespaced) and a RoleBinding (namespaced); this enables connectivity between our front and back ends within the namespace. Once again, more detailed information can be found here.

031021 1632 Buildingthe8

Services

Next, we need to expose our app to its users via the front end, and we also need to bridge the gap between Pac-Man (the front end) and MongoDB (the back end).

mongo-service.yaml

031021 1632 Buildingthe9

pacman-service.yaml

031021 1632 Buildingthe10
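As a rough sketch of what such a front-end service looks like (the selector label and ports below are illustrative placeholders rather than the exact values in the repository):

# Assumed example front-end Service exposed via the MetalLB load balancer
apiVersion: v1
kind: Service
metadata:
  name: pacman
spec:
  type: LoadBalancer     # picks up an address from the MetalLB pool
  selector:
    name: pacman         # assumed label used by the pacman deployment
  ports:
    - port: 80
      targetPort: 8080   # assumed container port for the Node.js front end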

OK, now that we have briefly explained the files that make up our application, let's go ahead and run them. I don't think it actually matters which order you run these in, but I will go in the order I have explained them. Running the following commands will get you up and running.

kubectl create -f pacman-tanzu/deployments/mongo-deployment.yaml -n pacman

kubectl create -f pacman-tanzu/deployments/pacman-deployment.yaml -n pacman

kubectl create -f pacman-tanzu/persistentvolumeclaim/mongo-pvc.yaml -n pacman

kubectl create -f pacman-tanzu/rbac/rbac.yaml -n pacman

kubectl create -f pacman-tanzu/services/mongo-service.yaml -n pacman

kubectl create -f pacman-tanzu/services/pacman-service.yaml -n pacman

031021 1632 Buildingthe11

If you want to delete everything we just created, you can simply replace "create" with "delete" and run the following commands to remove all the same components.

kubectl delete -f pacman-tanzu/deployments/mongo-deployment.yaml -n pacman

kubectl delete -f pacman-tanzu/deployments/pacman-deployment.yaml -n pacman

kubectl delete -f pacman-tanzu/persistentvolumeclaim/mongo-pvc.yaml -n pacman

kubectl delete -f pacman-tanzu/rbac/rbac.yaml -n pacman

kubectl delete -f pacman-tanzu/services/mongo-service.yaml -n pacman

kubectl delete -f pacman-tanzu/services/pacman-service.yaml -n pacman

And then finally, to confirm that everything is running as it should, we can list all of those components with the command sketched below.
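This is presumably just the standard namespace listing:

#List all resources in the pacman namespace
kubectl get all -n pacman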

031021 1632 Buildingthe12

From the above you will also see that we have an external IP for our MongoDB instance and our pacman front end. Let’s take that pacman IP address and put it in our web browser to play some pacman.

031021 1632 Buildingthe13

Hopefully this was helpful to somebody. It also leads into a great demo that Dean and I have been doing, where Kasten K10 protects that stateful data: the mission critical high scores you don't want to lose. Obviously this is all out there and available, and there are many other viable demos you can play with in your home lab to get to grips with the different components. In the next post we will finish off this series by looking at Kasten, the deployment and configuration of K10, and how simple it is to get going, even more so if you have been following along here.

Tweet me with your high scores

031021 1632 Buildingthe14

Kubernetes playground – How to Deploy KubeApps the visual marketplace

The last post covered how to implement a load balancer such as MetalLB if you are running your learning environment outside the public cloud, where this capability generally comes natively. This post focuses a little more on applications, not so much on stateful versus stateless but on application deployment. We also covered Helm and Helm charts in a previous post, and how they can help when you want to build out an application or deployment.

This post will focus on KubeApps. Your Application Dashboard for Kubernetes.

Getting KubeApps installed

It is super simple to get started. We begin by adding the Helm chart for KubeApps; we have already covered Helm and the benefits and ease this package manager brings, and it makes life really easy when deploying KubeApps, which then acts (at least as it seems to me) as a UI for those Helm charts. Let's start with the following commands:

helm repo add bitnami https://charts.bitnami.com/bitnami

kubectl create namespace kubeapps

helm install kubeapps --namespace kubeapps bitnami/kubeapps

The above commands add the Helm repository and charts to your local machine, create a kubeapps namespace, and then install the chart into your Kubernetes cluster in that newly created namespace.

After running the above, if we run kubectl get all -n kubeapps we get the following output showing all the components that make up KubeApps.

030621 2304 Buildingthe1

Continued

030621 2304 Buildingthe2

As you can see there is quite a lot happening above. You will also notice that the service/kubeapps service is using the LoadBalancer type; if we run the following command we can see the description of this service.

kubectl describe service/kubeapps -n kubeapps

030621 2304 Buildingthe3

If you did not go through the load balancer post, you could use a NodePort configuration or a simple port forward to access the application via a web browser:

kubectl port-forward -n kubeapps svc/kubeapps 8080:80

If you need to update your service configuration to the correct service type, you can do this by running the following command and changing the type (see also the patch example after the screenshot below).

kubectl edit service/kubeapps -n kubeapps

030621 2304 Buildingthe4
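As an alternative to editing the service interactively, a one-line patch achieves the same change; this is an illustrative equivalent rather than the approach used above.

#Assumed non-interactive way to switch the service to the LoadBalancer type
kubectl patch service kubeapps -n kubeapps -p '{"spec":{"type":"LoadBalancer"}}'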

Ok, now either with your Load balancer IP or your node address you will be able to open a web browser.

kubectl get service/kubeapps -n kubeapps 

030621 2304 Buildingthe5

From the above, we need to navigate to http://192.168.169.242 in your web browser, and you should see the following page appear.

030621 2304 Buildingthe6

You will notice from the above that we now need an API token, so let's go and grab that to get in. First of all, for demo or home lab purposes, we are going to create a service account and cluster role binding with the following commands.

kubectl create --namespace default serviceaccount kubeapps-operator

kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

Then, to get that API token, we need the following command:

kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep kubeapps-operator-token) -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo

030621 2304 Buildingthe7

Let's now copy that token into our web browser and authenticate into the dashboard. If, like me, you have deployed anything that KubeApps also knows about into your default namespace (against best practice), it will look like this. Here we can see the NFS provisioner and MinIO.

030621 2304 Buildingthe8

You can also select show apps in all namespaces, and, you guessed it, all the apps in all your namespaces will appear.

030621 2304 Buildingthe9

Now you can click into these applications and you can see details about each one including versions, upgrade options, Access URLs and some general details you might need as well as rollback and delete options.

030621 2304 Buildingthe10

But here is where I really like this, as a fan of the app-store UI look and feel over the command line for the most part: I can navigate to Catalog at the top of the page, which opens the door to a long list of different applications we may wish to deploy in our environment in a super simple way.

030621 2304 Buildingthe11

If you select one of these apps, let's take Harbor for example, a local container registry option, we can easily deploy it to our Kubernetes cluster, and we can also see in the description what is happening under the hood in terms of the Helm chart it is going to use and which version.

030621 2304 Buildingthe12

When you click on deploy, you will see that you have the ability to change the configuration YAML to suit your requirements. This is the ideal place to change the service type for your deployment, so that out-of-the-box deployments land correctly rather than you having to go and change that configuration afterwards, which in the long run works against the declarative nature of Kubernetes.

030621 2304 Buildingthe13

Once you have checked the YAML and possibly changed the name, as it is randomly generated (like my MinIO application), you can hit deploy.

030621 2304 Buildingthe14

In real life that was actually super quick, I know I could be just saying this but it’s true. Once the deployment has finished you can see your access URLs and application secrets that you need to connect. What you will see though is that it is not ready, and we are currently waiting for the Not Ready to change to ready before we can access those URLs.

030621 2304 Buildingthe15

So once that is complete and ready, we can then navigate to the access URL and play with our application.

030621 2304 Buildingthe16

The username is going to be admin; for the password, go back to your shell and run the following command, which will give you the password to log in.

 echo Password: $(kubectl get secret --namespace harbor harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)

030621 2304 Buildingthe17

Once again, hopefully that was useful and helps at least one person get their head around this new world. I also hope that, if you have been following along, the world of Kubernetes is not too daunting anymore, especially if you have come from a vSphere and storage background; we have seen a lot of this before, and yes, it is different when you get to the theory and component build-out of Kubernetes, but that's how virtualisation was back in the day. In the next post we will take a deeper look into a fun deployment that we used at a recent London VMUG.

Kubernetes playground – How to Load Balance with MetalLB

In the last post, we talked about Kubernetes contexts and how you can flip between different cluster contexts from your Windows machine. We have also spoken in this series about how load balancing gives us better access to our applications versus using a NodePort.

This post will highlight how simple it is to deploy your load balancer and configure it for your home lab Kubernetes cluster.

Roll your own Kubernetes Load Balancer

If you deployed your Kubernetes cluster in the cloud, the cloud provider takes care of creating load balancer instances. But if you are using bare metal for your Kubernetes cluster, which is where we are in this home lab scenario, you have very limited choices; the upside is that it forces us to make a choice and understand why. As I mentioned, this is going to use MetalLB.

Let's start with what it looks like without a load balancer on bare metal, when we are limited to NodePort or ClusterIP configurations. I am going to create an Nginx deployment (see the sketch below the screenshot).

030521 2153 Buildingthe1
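The deployment can be created with something along these lines; the exact command may differ slightly from the one in the screenshot.

#Assumed command to create the Nginx deployment used in this example
kubectl create deployment nginx --image=nginx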

If we did not have a load balancer configured but used the following command, the service's external IP would stay in the pending state until we did have one.

kubectl expose deploy nginx --port 80 --type LoadBalancer

Installing MetalLB into your Kubernetes Cluster

To start, you can find the installation instructions here. The following commands deploy MetalLB to your cluster: they create a namespace called metallb-system, a controller that handles IP address assignments, and a speaker that handles the protocols you wish to use.

kubectl create namespace metallb-system

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml

# On the first install only

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

When you have run these, you should see the new metallb-system namespace and be able to run the following command.

kubectl get all -n metallb-system

030521 2153 Buildingthe2

We then need a ConfigMap to make it do something, or at least to give it specific IP addresses to use on our network. I am using Layer 2 in my lab configuration, but there are other options that you can find here.

030521 2153 Buildingthe3
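For MetalLB v0.9.x, a Layer 2 config looks roughly like the following; the pool name and address range below are placeholders, so use a free range on your own network (mine is in the 192.168.169.x range).

# Assumed example MetalLB Layer 2 ConfigMap (config.yaml)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.169.240-192.168.169.250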

Create your YAML for Layer 2 as above, with a range of IP addresses available on your home lab network, and then apply it, where config.yaml is the file containing your configuration.

kubectl apply -f config.yaml

Now, when you deploy a service with the LoadBalancer type:

kubectl expose deploy nginx --port 80 --type LoadBalancer

Instead of pending, this now gives you an IP address on your home lab network, which is great if you want to access the application from outside your cluster. If we check another application I already have running in my cluster, you will see the following when the LoadBalancer type is used on deployment.

030521 2153 Buildingthe4

And then if we describe that service, we can see that configuration.

030521 2153 Buildingthe5

I want to give another shout out to Just me and Opensource: if you consume video as well as (or instead of) written content, this guy has created an amazing playlist covering all things Kubernetes and more.

In the next post, we will focus on hitting the easy button for our apps using KubeApps; things do not need to be done entirely in the shell, there are UI options too, and KubeApps bills itself as "Your Application Dashboard for Kubernetes".

Kubernetes playground – Context is important

In the last post, we covered an overview of Helm and the MinIO deployment to give us an option for testing later on workloads that require object storage. In this post, we are going to focus on context and how to make sure you have access from your desktop to your Kubernetes Cluster.

Context

030521 1320 Buildingthe1

Image is taken from Kubernetes.io

Context is important: you need the ability to access your Kubernetes cluster from your desktop or laptop. There are lots of different options out there, and people obviously use different operating systems as their daily drivers.

In this post we are going to talk about Windows, but as I said there are options for other operating systems too. This matters even more if you are managing multiple Kubernetes clusters for different projects or for learning.

By default, the Kubernetes CLI client uses C:\Users\username\.kube\config to store cluster details such as the endpoint and credentials. If you have deployed a cluster from your machine you will see this file in that location, but if so far you have been using the master node (via SSH or other methods) to run all of your kubectl commands, this post will hopefully help you get to grips with connecting from your workstation.

Once again, Kubernetes.io has this covered in its documentation.

Install the Kubernetes-CLI

First, we need the Kubernetes CLI installed on our Windows machine; I used Chocolatey with the following command.

choco install kubernetes-cli

We then need to grab the kubeconfig file from the cluster: copy the contents of this file either via SCP, or just open a console session to your master node and copy it to the local Windows machine. The location of the config is listed below.

$HOME/.kube/config

If you have taken the console approach, you will need to get the contents of that file and paste them into the config location on your Windows machine. You could go ahead and run the following command, but its output contains redacted information, so copying that to your Windows machine will not work.

kubectl config view

030521 1320 Buildingthe2

What we need is the file with those redacted values intact to copy over to our Windows machine; you can get it by running the following commands.

cd $HOME/.kube/

ls

cat config

030521 1320 Buildingthe3

Take the above, starting at apiVersion: v1 down to the bottom of the file, and copy it into your .kube directory on Windows. The process is similar for other operating systems.

C:\Users\micha\.kube\config

If you want to open the file, then you will be able to compare that to what you saw on the shell of your master node.

030521 1320 Buildingthe4

You will now be able to check in on your Kubernetes cluster from the Windows machine:

kubectl cluster-info

kubectl get nodes

030521 1320 Buildingthe5

This not only gives us connectivity and control from the Windows machine, it also allows us to do port forwarding to access certain services from it. We can cover that in a later post.

Multiple Clusters

A single cluster is simple, and we are there with the above specifically on Windows. But accessing multiple clusters using contexts is really what you likely came here to see.

Again some awesome documentation that you can easily run through.

For this post though I have my home lab cluster that we have been walking through and then I have also just deployed a new EKS cluster in AWS. The first thing to notice is that the config file is now updated with multiple clusters. Also, note I do not use notepad as my usual go-to for editing YAML files.

030521 1320 Buildingthe6

Then also notice in the same screen grab that we have multiple contexts displayed.

030521 1320 Buildingthe7

So now if I run the same commands we ran before.

kubectl cluster-info

kubectl get nodes

030521 1320 Buildingthe8

We can see that the context has been changed over; this is actually done automatically by the EKS commands, and I am not sure if other cloud providers do the same, something we will get to in later posts. But now we are on the AWS cluster and can work with it from our Windows machine. So how do we view all of the contexts we may have in our config file?

kubectl config get-contexts

030521 1320 Buildingthe9

And if we want to flip between the clusters, you simply run the following command; you can then see how we switched over to the other context and back into our home lab cluster.

kubectl config use-context kubernetes-admin@kubernetes

030521 1320 Buildingthe10

One thing to note is that I also store my .pem file in the same location as my config file. I have been reading about best practices suggesting that, if you have multiple config requirements, you could create a folder structure with all of your test clusters, all of your development clusters, then live, and so on.

Note/update – Having played a little with AWS EKS and Microsoft AKS, AWS seems to take care of cleaning up your kubeconfig entries whereas AKS does not, so I found the following commands very useful for keeping that config file clean and tidy.

kubectl config delete-cluster my-cluster

kubectl config delete-context my-cluster-context

Hopefully, that was useful, and in the next post, we will take a look at the load balancer that I am using in the home lab.

Kubernetes playground – How to use and setup Helm & MinIO?

In the last post, we covered setting up dynamic shared storage with my NETGEAR ReadyNAS system for our Kubernetes storage configuration. This is what I have in my home lab but any NFS server would bring the same outcome for you in your configuration.

This post covers two areas. We will continue to speak to Kubernetes storage options, but this time object storage: I am going to use MinIO to have an object storage option in my lab, which I can use to practise tasks and demo things between Veeam Backup & Replication and Kasten, and for storing backup files. We will also cover Helm and Helm charts.

What is Helm?

Helm is a package manager for Kubernetes; it could be considered the Kubernetes equivalent of yum or apt. Helm deploys charts, which you can think of as packaged applications: a blueprint for your pre-configured application resources which can be deployed as one easy-to-use chart. You can then deploy another release of the same chart with a different set of configuration values, as sketched below.
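As a quick illustration (using the Bitnami Nginx chart purely as an example), two releases of the same chart with different values might look like this:

#Illustrative example: two releases of one chart with different configuration values
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install web-dev bitnami/nginx --set service.type=ClusterIP
helm install web-prod bitnami/nginx --set service.type=LoadBalancer --set replicaCount=3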

They have a site where you can browse all the Helm charts available and of course you can create your own. The documentation is also clear and concise and not as daunting as when I first started hearing the term helm amongst all of the other new words in this space.

How do I get helm up and running?

It is super simple to get Helm installed and up and running. You can find the binaries and download links here for pretty much all distributions, including your Raspberry Pi arm64 devices.

Or you can use an installer script; the benefit here is that the latest version of Helm will be downloaded and installed.

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

$ chmod 700 get_helm.sh

$ ./get_helm.sh

Finally, there is also the option to install Helm with a package manager: Homebrew for macOS, Chocolatey for Windows, apt on Ubuntu/Debian, and snap and pkg as well.

Helm so far seems to be the go-to way to get different test applications downloaded and installed in your cluster. Something we will also cover later is KubeApps, which gives you a nice web interface to deploy your applications, although it still uses Helm charts underneath for the way applications are deployed.

MinIO deployment

I think I mentioned in a previous post that I wanted an object storage option built on Kubernetes to test scenarios where object storage is required for exports and backups. This being a home lab means we are not going to do any heavy load or performance testing, but for demos it is useful. It also means the footprint of running MinIO within my cluster is very low compared to having to run a virtual machine or physical hardware.

Once again, the documentation from MinIO is on point. A misconception I maybe had about this Kubernetes and CNCF world was that the documentation might be lacking across the board, but that is not the case at all; everything I have found has been really good.

Obviously, as we went to the trouble above installing Helm on our system we should go ahead and use the MinIO helm chart to bootstrap the MinIO deployment into our Kubernetes cluster.

Configure the helm repo


helm repo add minio https://helm.min.io/

Install the chart


helm install --namespace minio --generate-name minio/minio

I also went through the steps to create a self-signed certificate to use here; those steps can be found here.

How to get the default secret and access keys

I deployed MinIO into my default namespace by mistake and have not resolved this, so the following commands take that into consideration. First, get a list of all the secrets in the namespace; if you have a namespace exclusive to MinIO you will only see those secrets. I added a grep search to only show MinIO secrets.

kubectl get secret | grep -i minio


If you have set up a self-signed or third-party certificate, then you will likely have a secret called “tls-ssl-minio”

kubectl get secret tls-ssl-minio


You will also have a service account token secret that may look similar to the one in my command below, although I think the generated names are random.

kubectl describe secret wrong-lumber-minio-token-mx6fp


Then, finally, you will have the one we need, containing the access and secret keys.

kubectl describe secret wrong-lumber-minio

You should notice at the bottom two data entries, access-key and secret-key; we need to extract their values next. If we run the following, we will get those values (base64 encoded).

kubectl get secret wrong-lumber-minio -o jsonpath='{.data}'

But there is one more step: we need to decode them. Let's start with the access key.

echo "MHo0blBReFJwcg==" | base64 --decode


and now the secret key

echo "aTBWMlNvbUtSMmY5MnhRQVNGV3NrWEphVTZIZ3hLT1ppVHl5MUFSdg==" | base64 --decode
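If you prefer, you can pull each key out and decode it in one go with jsonpath. This is just a convenience sketch; the data key names (access-key and secret-key) are the ones shown above, so adjust them if your chart version uses accesskey/secretkey instead, and swap in your own secret name:

kubectl get secret wrong-lumber-minio -o jsonpath="{.data['access-key']}" | base64 --decode
kubectl get secret wrong-lumber-minio -o jsonpath="{.data['secret-key']}" | base64 --decode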


Now we can confirm access to the front-end web interface with the following command

kubectl get svc


Note that I am using a load balancer here which I added later to the configuration.
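If you do not have a load balancer available in your lab, a quick way to reach the UI for testing is to port-forward the service to your local machine, using the service name from the kubectl get svc output above:

kubectl port-forward svc/wrong-lumber-minio 9000:9000

You can then browse to http://localhost:9000 instead.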


Now with this configuration and the access and secret keys you can open a web browser and navigate to http://192.168.169.243:9000


You will then have the ability to start creating S3 buckets for your use cases; a future post will cover one of these, exporting backups to object storage using Kasten K10.
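You can create buckets through the web interface, or with the MinIO client (mc) if you prefer the command line. A rough sketch, assuming a recent mc release (older versions use mc config host add instead of mc alias set); the alias and bucket names below are placeholders of my own:

mc alias set homelab http://192.168.169.243:9000 <access-key> <secret-key>
mc mb homelab/k10-exports
mc ls homelab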


In the next post, I will be working on how to access your Kubernetes cluster from your Windows machine.

Kubernetes playground – How to setup dynamic shared storage https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-4 https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-4#comments Sun, 28 Feb 2021 10:42:18 +0000 https://vzilla.co.uk/?p=2611 kubernetes

In the last three parts we covered starting from scratch and getting the Kubernetes platform ready, using some old hardware and creating virtual machines to act as my nodes. If you don't have old hardware but still wish to build out your cluster, these virtual machines can really sit wherever they need to; for example, they could be in the public cloud, but remember that this is going to cost you. My intention was to remove as much cost as possible, as the system I am using is always running in my home network, acting as my backup server as well as handling tasks like this. We also covered how we created the Kubernetes cluster using kubeadm, and then we started playing with some stateless applications and pods.

In this post we are going to start exploring the requirements around stateful workloads by setting up some shared persistent storage for stateful applications. I was also playing with local persistent volumes; you can read more about those here on the Kubernetes Blog.

Stateful vs Stateless

Stateless, which we mentioned and went through in the last post, is where the process or application can be understood in isolation: there is no persistent storage associated with the process or application, therefore it is stateless. Stateless applications typically provide a single service or function.

Taken from RedHat: An example of a stateless transaction would be doing a search online to answer a question you’ve thought of. You type your question into a search engine and hit enter. If your transaction is interrupted or closed accidentally, you just start a new one. Think of stateless transactions as a vending machine: a single request and a response.

Stateful processes or applications are those that can be returned to again and again. Think of your shopping trolley or basket in an online store: if you leave the site and come back an hour later, a well-configured site will likely remember your choices so you can easily complete that purchase rather than having to pick everything into your cart again. A good description I read whilst researching this was to think of stateful like an ongoing conversation with a friend or colleague on a chat platform; it is always going to be there regardless of the time between talking. Whereas with stateless, when you leave that chat, or after a period, those messages are lost forever.

If you google "stateful vs stateless" you will find plenty of information and examples, but for my walkthrough the best way to describe stateless is through what we covered in the last post, web servers and load balancers (stateless), versus what we are going to cover here and in the next post around databases (stateful). There are many other stateful workloads, such as messaging queues, analytics, data science, machine learning (ML) and deep learning (DL) applications.

Back to the lab

I am running a NETGEAR ReadyNAS 716 in my home lab that can serve both NAS protocols (SMB & NFS) and iSCSI. It has been a perfect backup repository for my home laptops and desktop machines, and this is an ideal candidate for use in my Kubernetes cluster for stateful workloads.

I went ahead and created a new share on the NAS called “K8s” that you can see on the image below.


I then wanted to make sure that the folder was accessible over NFS by my nodes in the Kubernetes cluster.
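A quick way to check this is to mount the export manually from one of the nodes before involving Kubernetes at all. A rough sketch, using the NFS server IP and export path that appear later in this post (192.168.169.3:/data/K8s); your own values will differ:

sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/k8s-test
sudo mount -t nfs 192.168.169.3:/data/K8s /mnt/k8s-test
sudo touch /mnt/k8s-test/hello-from-node && ls -l /mnt/k8s-test
sudo umount /mnt/k8s-test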


This next setting caused some strange issues until I worked out how it was affecting what we were trying to achieve. Basically, with the default setting (root squash) enabled, persistent volumes could be created, but additional folders or folder structures could not always be created underneath them. It seemed sporadic, although the behaviour was the same each time we tested.

Root squash is a special mapping of the remote superuser (root) identity when using identity authentication (local user is the same as remote user). Under root squash, a client’s uid 0 (root) is mapped to 65534 (nobody). It is primarily a feature of NFS but may be available on other systems as well.

Root squash is a technique to avoid privilege escalation on the client machine via suid executables (setuid). Without root squash, an attacker can generate suid binaries on the server that are executed as root on other clients, even if the client user does not have superuser privileges. Hence it protects client machines against other malicious clients. It does not protect clients against a malicious server (where root can generate suid binaries), nor does it protect the files of any user other than root (as malicious clients can impersonate any user).
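On the ReadyNAS this is just a toggle in the share's NFS settings, but for reference, on a generic Linux NFS server the equivalent would be an export entry with no_root_squash set. A sketch only; the subnet here is an assumption based on the addresses used elsewhere in this post:

# /etc/exports
/data/K8s 192.168.169.0/24(rw,sync,no_subtree_check,no_root_squash)

# reload the export table after editing
sudo exportfs -ra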

A big shout out to Dean Lewis here who helped massively get this up and running. He also has some great content over on his site.


I also enabled SMB so that I could see what was happening from my Windows machine during some of the stages. This is also how we discovered the first issue: some folders were not being created, so we created them manually and the process would get one step further. That is why the No Root Squash setting is so important.


Kubernetes – NFS External Provisioner

Next, we needed an automatic provisioner that would use our NFS server / shares to support dynamic provisioning of Kubernetes persistent volumes via persistent volume claims. We did work through several before we hit on this one.

The Kubernetes NFS Subdir External Provisioner enabled us to achieve what we needed for our stateful workloads, with the ability to create those dynamic persistent volumes. It is deployed using a Helm command.

Note – I would also run this on all of your nodes to install the NFS client:

apt-get install nfs-common


helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
 --set nfs.server=192.168.169.3 \
 --set nfs.path=/data/K8s

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
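To confirm dynamic provisioning is actually working, a minimal sketch is to create a throwaway PVC against the new nfs-client storage class and check that a volume gets bound (the claim name and size here are arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc nfs-test-claim
kubectl delete pvc nfs-test-claim

If the claim shows as Bound and a matching folder appears under the K8s share on the NAS, the provisioner is doing its job.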

Now, when we cover stateful applications, you will understand how the magic is happening under the hood. In the next post we will look at Helm in more detail and also start to look at a stateful workload with MinIO.
