Dark Kubernetes Clusters & managing multi clusters – Part 2

In the last post we focused on using inlets to create a WebSocket tunnel, giving us a secure public endpoint for the Kubernetes API and for port 8080 used by Kasten K10, neither of which is otherwise publicly reachable. In this post we are going to concentrate on the Kasten K10 multi-cluster configuration. I also want to share a great article on Kasten multi-cluster from Dean Lewis.

Deploying K10

Deploying Kasten K10 is a simple helm chart deployment that I covered in a post a few months back here.

 kubectl create ns kasten-io
namespace/kasten-io created
 helm install k10 kasten/k10 --namespace=kasten-io

Accessing K10

For the purposes of this demo I am just port forwarding each cluster, but you could use an ingress to expose the dashboard to specific network addresses. If I were doing this again I would set up ingress on each of the clusters, which would slightly change the inlets configuration.
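For reference, the port forward I am using looks like the following; a minimal sketch, assuming the default kasten-io namespace and the gateway service created by the Helm chart, with the dashboard then available at http://127.0.0.1:8080/k10/#/.

kubectl --namespace kasten-io port-forward service/gateway 8080:8000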

Multi-cluster setup-primary

We have 3 clusters, and we need to decide our primary cluster so that we can start the configuration and bootstrap process. In this demo I have chosen the CIVO cluster located in NYC1. More about this configuration setup can be found here in the official documentation.

You will see from the commands and the images below that we are using the k10multicluster tool. This is a binary available from the Kasten GitHub page, and it provides the functionality to bootstrap your multi-cluster configuration.

k10multicluster setup-primary --context=mcade-civo-cluster01 --name=mcade-civo-cluster01

080721 1029 DarkKuberne1

Bootstrap the secondary (dark site)

The main purpose of the demo is to prove that we can bring our local K3D cluster, from a data management perspective, into the same single location as the cloud clusters.

k10multicluster bootstrap --primary-context=mcade-civo-cluster01 --primary-name=mcade-civo-cluster01 --secondary-context=k3d-darksite --secondary-name=k3d-darksite --secondary-cluster-ingress-tls-insecure=true --secondary-cluster-ingress=http://209.97.177.194:8080/k10

or

k10multicluster bootstrap \
--primary-context=mcade-civo-cluster01 \
--primary-name=mcade-civo-cluster01 \
--secondary-context=k3d-darksite \
--secondary-name=k3d-darksite \
--secondary-cluster-ingress-tls-insecure=true \
--secondary-cluster-ingress=http://209.97.177.194:8080/k10

080721 1029 DarkKuberne2

Managing Kasten K10 multi-cluster

I will make more content going into detail about Kasten K10 multi-cluster, but for the purposes of the demo, if you now log in to your primary cluster web interface you will have the multi-cluster dashboard, and with the above commands run you will see that we are managing the K3D cluster.

080721 1029 DarkKuberne3

From here we can create global backup policies and other global configurations, which could also enable you to move applications between your clusters easily. I think there is a lot more to cover when it comes to multi-cluster and the capabilities there. The purpose of this blog was to highlight how inlets could enable access not only to the Kubernetes API but also to other services within your Kubernetes clusters.

You will have noticed in the above that I am using the TLS insecure flag; this was due to me changing my environment throughout the demo. Inlets very much allows you to use TLS with verification enabled.

Useful Resources

I mentioned in the first post that I would also share some useful posts that helped me get things up and running, along with a lot of help from Alex Ellis:

https://blog.alexellis.io/get-private-kubectl-access-anywhere/

https://docs.inlets.dev/#/?id=for-companies-hybrid-cloud-multi-cluster-and-partner-access

https://inlets.dev/blog/2021/06/02/argocd-private-clusters.html

I have obviously used Kasten K10 and the Kubernetes API here, but this same process could be used for anything inside a private environment that needs to be punched out to the internet for access.

Dark Kubernetes Clusters & managing multi clusters

Let’s first start by defining the “Dark” mentioned in the title. It could relate to a cluster that needs minimal to no access from the internet, or it could be a home Kubernetes cluster. The example I will be using in this post is a K3D (K3s) cluster deployed in my home network. I do not have a static IP address with my ISP, and I would like others to be able to connect to my cluster for collaboration, or for the data management use case that we will get to later.

What is the problem?

How do you access dark sites over the internet?

How do you access dark Kubernetes clusters over the internet? Not to be confused with dark deployment or A/B testing.

Do you really want to put a full-blown VPN configuration in place?

If you are collaborating with multiple developers, do you want KUBECONFIG files shared everywhere?

And my concern, and the reason for writing this post: how would Kasten K10 Multi-Cluster access a dark-site Kubernetes cluster to provide data management for that cluster and its data?

080721 1005 DarkKuberne1

What is Inlets?

080721 1005 DarkKuberne2

First, I went looking for a solution. I could have implemented a VPN so that people could VPN into my entire network and then get to the K3D cluster I have locally, but this seemed an overkill and complicated way to give access. It is a much bigger opening than is needed.

Anyway, Inlets enables “Self-hosted tunnels, to connect anything.”

Another important pro to inlets is that it replaces opening firewall-ports, setting up VPNs, managing IP ranges, and keeping track of port-forwarding rules.

I was looking for something that would provide a secure public endpoint for my Kubernetes cluster (6443) and Kasten K10 deployment (8080), which would not otherwise be publicly reachable.

You can find a lot more information about Inlets at https://inlets.dev/. Later in this post I am also going to share some very good blog posts that helped me along the way.

Let’s now paint the picture

What if we have some public cloud clusters but also some private clusters, maybe running locally on our laptops, or even dark sites? For this example I am using Civo; in my last post I went through creating these clusters with both the UI and the CLI, and as they were already there I wanted to take advantage of that. As you can also see, we have our local K3D cluster running within my network. With the Civo clusters we have our KUBECONFIG files available with a public IP to access; the managed service offerings make it much simpler to have that public ingress to your cluster. It is a little different when you are on home ISP-backed internet, but you still have the same requirement.

080721 1005 DarkKuberne3

My local K3D Cluster

If you were not on my network, you would have no access to my cluster from the internet. That stops any collaboration, but it also stops me from being able to use Kasten K10 multi-cluster to protect the stateful workloads within this cluster.

080721 1005 DarkKuberne4

Now for the steps to change this access

There are six steps to get this up and running:

  1. Install inletsctl on a dev machine to deploy the exit-server (taken from https://docs.inlets.dev/#/ – The remote server is called an “exit-node” or “exit-server” because that is where traffic from the private network appears. The user’s laptop has gained a “VirtualIP” and users on the Internet can now connect to it using that IP.)
  2. Inlets-Pro Server droplet deployed in Digital Ocean using inletsctl (I am using Digital Ocean but there are other options – https://docs.inlets.dev/#/?id=exit-servers)
  3. License file obtained from Inlets.dev, monthly or annual subscriptions
  4. Export TCP ports (6443) and define the upstream of the local Kubernetes cluster (localhost); for Kasten K10 I also exposed 8080, which is what is used for the ingress service for the multi-cluster functionality.
  5. curl -k https://Inlets-ProServerPublicIPAddress:6443
  6. Update KUBECONFIG to access through websocket from the internet

Deploying your exit-server

I used Arkade to install inletsctl; more can be found here. The first step once you have the CLI is to get your exit-server deployed. I created a droplet in Digital Ocean to act as our exit-server, though it could live in many other locations, as mentioned and shown in the link above. The following command is what I used to get my exit-server created.

inletsctl create \
--provider digitalocean \
--access-token-file do-access-token.txt \
--region lon1

080721 1005 DarkKuberne5

Define Ports and Local (Dark Network) IP

You can see from the above screenshot that the tool also gives you handy tips on what commands you now need to run to configure your inlets PRO exit-server within Digital Ocean. We now have to define our ports, which for us will be 6443 (Kubernetes API) and 8080 (Kasten K10 ingress), and we also need to define the upstream address on our local network.

export TCP_PORTS="6443,8080"   # Kubernetes API server and Kasten K10 ingress
export UPSTREAM="localhost"    # My local network address; for ease, localhost works

inlets-pro tcp client --url "wss://209.97.177.194:8123" \
 --token "S8Qdc8j5PxoMZ9GVajqzbDxsCn8maxfAaonKv4DuraUt27koXIgM0bnpnUMwQl6t" \
 --upstream $UPSTREAM \
 --ports $TCP_PORTS \
 --license "$LICENSE"

080721 1005 DarkKuberne6

Image note – I had to go back and add export TCP_PORTS="6443,8080" for the Kasten dashboard to be exposed.

Secure WebSocket is now established

When you run the commands above to configure inlets PRO, you will see the following if everything is configured correctly. Leave this open in a terminal; it is the connection between the exit-server and your local network.

080721 1005 DarkKuberne7

Confirm access with curl

As we are hitting the Kubernetes API, we are not expecting a fully authorised experience via curl, but the following command does show that you have external connectivity.

curl -k https://178.128.38.160:6443

080721 1005 DarkKuberne8

Updating KubeConfig with Public IP

We already had our KUBECONFIG for our local K3D deployment; for the record, I used the following command to create my cluster. If you do not specify the API port as 6443 then a random high port will be used, which will skew everything we have done at this stage.

k3d cluster create darksite --api-port 0.0.0.0:6443

Anyway, back to updating the kubeconfig file: you will currently have the following in there, which is fine for local access from the same host.

080721 1005 DarkKuberne9

Make that change with the public-facing IP of the exit-server.
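If you prefer not to hand-edit the file, kubectl can make the same change; a minimal sketch, assuming the cluster entry follows the usual k3d naming convention (k3d-darksite) and using the exit-server IP from the curl test above.

kubectl config set-cluster k3d-darksite --server=https://178.128.38.160:6443

kubectl get nodes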

080721 1005 DarkKuberne10

Then locally you can confirm you still have access

080721 1005 DarkKuberne11

Overview of Inlets configuration

Now we have a secure WebSocket configured and we have external access to our hidden or dark Kubernetes cluster. You can see below how this looks.

080721 1005 DarkKuberne12

At this stage we can share the KUBECONFIG file, and we have shared access to our K3D cluster within our private network.

I am going to end this post here. In the next post we will cover how I configured Kasten K10 multi-cluster so that I can manage my two Civo clusters and my K3D cluster from a data management perspective, using inlets to provide that secure WebSocket.

Welcome to Kubernetes Korner

We have created an open community forum where we can discuss all things Kubernetes, data management, Kasten and DevOps in one place, where anyone can contribute, learn, or share. At the end of the post you will see why we decided on this route versus what now seems to be the de facto option of creating a Discord or Slack channel.

Basically, my ask is that I would love to see you all in there, sharing your experiences and asking your questions.

https://community.veeam.com/groups/kubernetes-korner-90

080521 0652 WelcometoKu1

Welcome!

I wanted to start by welcoming everyone to our new Kubernetes Korner, where we can discuss all things cloud-native, Kubernetes and DevOps. More importantly, it is where we can come to share our experiences in learning this still relatively new world and ask questions of our fellow community members.

We are also hoping to gather feedback around the Kasten K10 platform and open-source projects so that we can better understand the product strategy, where to go next and how to improve the overall experience. We also want this to be a community space to ask your Kasten questions; I will be active in here, as will some of the Kasten product managers, and I expect a few community members who are hands on with Kasten K10 daily will also offer their advice and solutions.

My Ask

I also wanted to kick off the Korner with a question for you all. Each and every one of us will have a different background and learning journey when it comes to Kubernetes and DevOps. I want to know what your biggest challenge has been so far: what is the one thing you felt you really struggled with, and how did you overcome it? Or maybe a topic you thought was going to be daunting turned out not to be, and you were able to sail through and build a better foundational knowledge of it? Or did you have the skill already and simply reinforce the learning you already had?

Mine was Linux; everything in Kubernetes, and pretty much all of DevOps, is Linux orientated. I was under the impression you needed years of experience and a massive amount of time behind a Linux OS. Now, I am no Linux expert by any stretch, but the years of messing around, deploying apps and making things happen have massively helped when it comes to finding my way around. My biggest advice to anyone who thinks they are in the same boat as me: get hands on. Convert a laptop to Ubuntu, or one of the overwhelming number of Linux distributions out there, and get hands on every day.

I am going to be suggesting this a lot over my next plan of content: get hands on! I started with an introduction to DevOps covering, in brief, 12 steps to get into and understand more about DevOps. You can find that article here – https://blog.kasten.io/devops-learning-curve. Over the next few weeks and months I plan to get into each of these topics a little deeper and share that, as I did with my Kubernetes learning journey a few months back.

Community First

Feel free to ask your questions and share your experiences, and as you can see, any mention of boats, the sea or anything nautical is very much welcome.

Another big part of creating this community forum is that I feel a lot of the community has gone behind closed doors into hidden Slack and Discord channels, and what was once a super useful, searchable, open community is now very much locked away unless you join thousands of channels and platforms. We wanted something open, where everyone can discuss and share experiences.

Getting started with CIVO Cloud

I have been meaning to jump in here for a while, and today I finally got the chance; it was super quick to get things up and running, especially when you get the $250 free credits as well! As a playground for learning, this is a great place to get started, with quick deployment.

This post is going to walk through pretty much from step 1 when you sign in for the first time and how you can easily deploy a Kubernetes cluster from both the UI portal and the Civo CLI.

When you sign up for your CIVO account and your free $250 credit balance, you need to add your credit card and then you can start exploring.

080221 1819 Gettingstar1

My next task was to get the Civo CLI on my WSL instance; to get this I used arkade to install the CLI.

arkade get civo

To add your newly created account to your Civo CLI, follow these next simple steps. First you will need your API key from the portal; you can find this under Account > Security, and you need to take a copy of the string I have blurred out.

080221 1819 Gettingstar2

On your system where you have deployed the CIVO CLI you can now take this API Key and add this using the following command.

civo apikey add MichaelCade <API KEY>

I called my account my name, but it seems you can choose whatever account name you wish; it does not have to line up with a username. We can confirm that we added this API key with the following command:

civo apikey list

and then if you want to see the API Key and compare to what we found in the portal then you could run the following command also.

civo apikey show MichaelCade

080221 1819 Gettingstar3

There are many other things you can get from the CLI and obviously incorporate a lot of this into your workflows and automation. For now I am just getting things set up and ready for my first deployment. The other commands can be found here.

From the UI

We can start by creating a Kubernetes cluster through the UI. Simply select Kubernetes from the menu on the left, then create new Kubernetes cluster, and you are greeted with this simple wizard to build out your cluster, along with a great overview of how much your cluster is going to cost you.

080221 1819 Gettingstar4

We then have the option to add marketplace applications and storage to your cluster if you would like to hit the ground running; for the purpose of my walkthrough I am not going to do that just yet. But you can see there are a lot of options to choose from.

080221 1819 Gettingstar5

We then hit create cluster down at the bottom and, no joke, in 2 minutes you have a cluster available to you.

080221 1819 Gettingstar6

Now we can also go and jump back to our Civo CLI and confirm we have some visibility into that cluster by using the following command.

civo kubernetes list

080221 1819 Gettingstar7

Connecting to your cluster

From the UI we can see below that it is as simple as downloading the kubeconfig file to access your cluster from your local machine. I have been reading up on this approach not being so secure, but for the purpose of learning and labbing I think this way of accessing is just fine. We should all be aware, though, of the reasons for not exposing the kubeconfig and the Kubernetes API over the public internet.

080221 1819 Gettingstar8

I downloaded the config file, put it in my local .kube folder and renamed it to config (there might be a better way to handle this or merge it with an existing config file; a merge approach is sketched below, but point me in the right direction if you know a good resource).
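One common approach is to merge the downloaded file into your existing config with kubectl itself; a minimal sketch, where the downloaded file name and paths are examples rather than what Civo actually names the file.

cp ~/.kube/config ~/.kube/config.bak

KUBECONFIG=~/.kube/config:~/Downloads/civo-kubeconfig kubectl config view --flatten > /tmp/merged-config

mv /tmp/merged-config ~/.kube/config

kubectl config get-contexts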

080221 1819 Gettingstar9

OK, so pretty quick: in less than 5 minutes I have a 3-node Kubernetes cluster up and running and ready for some applications. If you decided to use the UI to create your first cluster but you would like to use the CLI to get your kubeconfig file, then carry on to the next section.

Create a cluster from the CLI

Creating the cluster through the UI was super quick, but we always want to have a way of creating a cluster through the CLI. Maybe it is a few lines of code that means we can have a new cluster up and running in seconds with no reason to touch a UI, or maybe it is a build that is part of a wider demo; there are lots of reasons for using a CLI to deploy your Kubernetes cluster.

When I first installed my Civo CLI in WSL2 I did not have a region configured, so I checked this using the following command. You can see that neither London nor NYC is set to current.

civo region ls

080221 1819 Gettingstar10

To change this so that LON1 is my default I ran the following command and then ran the ls command again.

civo region current LON1

080221 1819 Gettingstar11

And now if I run civo kubernetes list to show the cluster created in the UI, I will not see it, as that cluster was created in NYC, so I would have to switch regions to see it again.

Let’s now create a Kubernetes cluster from the CLI by issuing the following command. This is going to create a medium 3-node cluster; obviously you can get granular on size, networking, and other details that you wish to configure as part of your cluster, as sketched just after the basic command.

civo kubernetes create mcade-civo-cluster02
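If you do want to be specific about node count and instance size, the CLI accepts flags along these lines; the flag names and the size value here are assumptions rather than values taken from this demo, so check civo kubernetes create --help for the exact options available in your account.

civo kubernetes create mcade-civo-cluster03 --nodes 3 --size g3.k3s.medium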

Once your cluster is created and ready, you can issue this command to see your clusters. In my account I now have one cluster, shown below, in LON1 and another in NYC1.

civo kubernetes list

080221 1819 Gettingstar12

If you wish to save your configuration from the CLI so that you can use kubectl locally, then you can do this using the following command.

civo kubernetes config mcade-civo-cluster02 -s

080221 1819 Gettingstar13

Now I want to have access to both my London cluster and my New York cluster via kubectl, and that can be done using the following command, which will give you access to both contexts. In order to run this, you need to be in the correct region. If you do not use the merge flag then you will overwrite your kubeconfig; if you are like me and have several configs for different clusters across multiple environments, then always make sure you protect that file, merge carefully and keep it tidy.

civo kubernetes config mcade-civo-cluster02 -s --merge

080221 1819 Gettingstar14

Obviously this post only touches the surface of what Civo has going on. I am planning to revisit with some applications being deployed, and then get into the data management side of things and how we can protect these workloads in Civo.

Ransomware is real! – Exposing yourself via the Cloud

Ransomware is a threat we seem to hear about daily, and it is hitting every sector. I have been saying that everyone should be concerned: it is just a matter of time before you are attacked and have to face the ransomware story. This post is all about highlighting how to prevent your cloud workloads from being easily exposed, as well as talking briefly about remediation and how to get back up on your feet.

In a previous post, I wrote about Pac-Man as a mission-critical application. I decided this is a great way to show off the stateful approach to data within Kubernetes, as you have your stateful data, the high scores, residing in a MongoDB database. I have been running ad hoc demos of Kasten K10 in various clusters and platforms, but something I found in AWS was worth sharing.

Without repeating the build-up of the Pac-Man configuration mentioned in the blog just linked: we have a front-end Node.js web server (this is where we play Pac-Man) and a MongoDB backend, which is where we store the high scores. There is a service created for both pods that exposes them using the AWS load balancer.

041521 1601 TheRansomwa1

In the deployment we leverage load balancers. If we apply this to an EKS cluster, we use the ELB by default, which gives us an AWS DNS name linked to the load balancer that forwards to our pods. As you can see in the below screenshot, the associated security group created for this load balancer is wide open to the world.

041521 1601 TheRansomwa2

Obviously, there are some gaping holes here, both in the security group configuration and in the very limited access control for the application itself. But I wanted to highlight that bad things happen, or mistakes happen. Let’s get into this.

High level – Bad Practices

Basically, by configuring things in this way our services are very exposed. Whilst our application works and takes advantage of all the good things about Kubernetes, AWS and the public cloud in general (this is not limited to AWS), setting things up the way shown above is not going to be best practice, especially when it is a little more critical than Pac-Man and its back-end high scores.

Before we talk about the considerations for turning these bad practices into best practices, let me talk about the honeypot and some of the reasons why I did this.

The Ransomware Attack

I have been involved in a lot (a lot!) of online video demos throughout the last 12 months, and the creativity must be on point to keep people interested and to get the point across.

The service created for Mongo, shown below, takes advantage of the LoadBalancer available within the Kubernetes cluster when deployed. When I wrote the original blog this was MetalLB, and it was exposed only over my internal home network. When you get to AWS or any of the public cloud offerings, this becomes a public-facing IP address, which means you have to be more aware of it; more on this later.

041521 1601 TheRansomwa3

It is very easy at that point, with the default security group settings configured within AWS, to gain access to your Mongo instance from any internet-connected device. I will highlight this process now. First of all, you will need MongoDB Compass; you can find the download for your OS here.

041521 1601 TheRansomwa4

Once downloaded, you can run this and then it is time to test out your unsecured connectivity to your Mongo instance. From here you will need that forward-facing DNS name from AWS, or in our case we have access to our Kubernetes cluster, so we can run the following command.

kubectl get svc --namespace pacman

041521 1601 TheRansomwa5

Then within MongoDB Compass you can add the following and connect from anywhere, because everything is open. Notice as well that we are using the default port; this is the attack surface. How many Mongo deployments out there are using this same approach with access not secured?

mongodb://External-IP:27017

041521 1601 TheRansomwa6

Here is a good copy of our data; you can see our Pac-Man database there gathering our high scores.

041521 1601 TheRansomwa7

Now we can flip to what happens next. Once this was exposed, it was likely 12 hours at most before the attack was made, sometime between 4 am and 5 am. Remember, there is no important data here, and the experiment was to highlight two things: make sure you have thought about access security for your application so that everything is not exposed to the world, and, my main point and the reason for the demo, make sure you have a backup! The first point protects you through prevention; the latter is what you need when things go wrong. I cannot help you much with the data you are storing in your database, but make sure that you are regulating that data and know what it is and why you are keeping it.

041521 1601 TheRansomwa8

As you can see from the above, we now have a new database with a readme entry that gives us the detail of the attack, and no Pac-Man database; it has been removed and is no longer available to our front-end web server. Just like that, because of an “accident” or misconfiguration, we have exposed our data and in fact lost our data in return for a ransom demand.

The Fix and Best Practices

I can only imagine what this feels like when it is real life and not a honeypot test for a demo! But that is why I wanted to share this. I have been mentioning throughout the requirement to check security and access on everything you do: least privilege, and then work from there. DO NOT OPEN EVERYTHING TO THE WORLD. That probably seems like simple advice, but if you google MongoDB ransomware attacks you will be amazed at how many real companies get attacked and suffer from this same access issue.
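One simple guard rail at the Kubernetes layer is to restrict which source ranges the cloud load balancer will accept connections from. A minimal sketch of what the Mongo service could look like; the service name, selector and CIDR below are illustrative rather than taken from the Pac-Man manifests.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: pacman
spec:
  type: LoadBalancer
  # Only accept traffic from a known range, for example an office or VPN CIDR
  loadBalancerSourceRanges:
    - 203.0.113.0/24
  selector:
    name: mongo
  ports:
    - port: 27017
      targetPort: 27017
EOF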

The second bit, after configuring your security correctly, is making sure you have a solid backup; the failure scenarios we have with our physical systems, virtualisation, cloud, and cloud-native are all the same. The attacker did not care that this was a Mongo pod within a Kubernetes cluster; it could easily have been a Mongo IaaS EC2 instance exposed to the public in the same way. Backups are what will help remediate the issue and get you back up and running.

I was of course using Kasten K10 to protect my workloads, so I was able to restore and get back up and running quickly; it is all part of the demo.

041521 1601 TheRansomwa9

and we are back in business with that restore

041521 1601 TheRansomwa7

Any questions, let me know; no data was harmed in the making of this blog and demo. I have also deleted everything that may have been exposed in the screenshots above. I would also note that if you are walking through my lab and running through the examples, be conscious of where you are running them. At home in your own network using MetalLB you are going to be fine, as it will only expose to your home network; in AWS or any of the other public cloud offerings it will be public-facing and available for the internet to see and access.

How to – Amazon EBS CSI Driver

In a previous post we hopefully covered the why: where the CSI has come from, where it is going, and the benefits that come with having an industry-standard interface that enables storage vendors to develop a plugin once and have it work across a number of container orchestration systems.

The reason for this post is to highlight how to install the driver and enable volume snapshots. The driver itself is still in the beta phase and the volume snapshot functionality is in the alpha phase; alpha-phase software is not supported within Amazon EKS clusters, whereas the driver itself is well tested and supported in Amazon EKS for production use. The fact that we must deploy it ourselves in our new Amazon EKS clusters means that the CSI driver for Amazon EBS volumes is not the default option today, but it will become the standard or default in the future.

It implements the CSI interface for consuming Amazon EBS volumes.

Before we start, the first thing we need is an EKS cluster. To achieve this you can follow either this post, which walks through creating an EKS cluster, or this one, which walks through creating an AWS Bottlerocket EKS cluster. If you want the official documentation from Amazon, you can also find that here.

OIDC Provider for your cluster

For the use case, or at least my use case here with the CSI driver, I needed to use IAM roles for service accounts; to do this you need an IAM OIDC provider to exist for your cluster. First up, run the following command against your EKS cluster to see whether you have an existing IAM OIDC provider.

#Determine if you have an existing IAM OIDC provider for your cluster


aws eks describe-cluster --name bottlerocket --query "cluster.identity.oidc.issuer" --output text

040621 0648 HowtoAmazon1

Now we can run the following command to see whether we have any OIDC providers; you can take the ID from the end of the issuer URL shown above and pipe the output below through a grep search for it, as shown after the screenshot.

#List the IAM OIDC Providers if nothing is here then you need to move on and create


aws iam list-open-id-connect-providers

040621 0648 HowtoAmazon2
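For reference, that grep looks something like the following; the ID string here is a placeholder in the style used by the AWS documentation, not a value from this cluster.

aws iam list-open-id-connect-providers | grep EXAMPLED539D4633E53DE1B71EXAMPLE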

If the above command did not return anything, then we need to create an IAM OIDC provider. We can do this with the following command.

#Create an IAM OIDC identity provider for your cluster


eksctl utils associate-iam-oidc-provider --cluster bottlerocket --approve

Repeat the aws iam list-open-id-connect-providers command, which should now return something, as per the above screenshot.

IAM Policy Creation

The IAM policy that we now need to create is what will be used by the CSI driver's service account. This service account will be used to speak to the AWS APIs.

Download the IAM policy example; if this is a test cluster you can use it as-is. You can see the actions allowed for this IAM policy in the JSON screenshot below the command.

#Download IAM Policy - https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json


curl -o example-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.9.0/docs/example-iam-policy.json

040621 0648 HowtoAmazon3

For test purposes, I am also going to keep the same name as the documentation walkthrough. Following the command, I will show how this looks within the AWS Management Console.

#Create policy


aws iam create-policy --policy-name AmazonEKS_EBS_CSI_Driver_Policy --policy-document file://example-iam-policy.json

The below shows the policy from within the AWS Management Console, where you can (hopefully) see that it matches the JSON file contents.

040621 0648 HowtoAmazon4

Next, we need to create the IAM role

#Create an IAM Role

aws eks describe-cluster --name bottlerocket --query "cluster.identity.oidc.issuer" --output text

aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole --assume-role-policy-document "file://D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\trust-policy.json"

040621 0648 HowtoAmazon5

The reason for the first command is to gather the OIDC issuer so that it can be added to the trust-policy.json file; you would need to replace the Federated line with your own AWS account ID and OIDC provider. Further information can be found in the official AWS documentation. You can see the trust-policy.json below.

040621 0648 HowtoAmazon6
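For reference, the AWS documentation describes a trust policy along these lines; the account ID, region and OIDC provider ID below are placeholders that you would swap for your own values, and the service account name assumes the default ebs-csi-controller-sa in kube-system.

cat <<'EOF' > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF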

Next, we need to attach the policy to the role; this can be done with the following command. Take a copy of the ARN output from the above command.

#Attach policy to IAM Role


aws iam attach-role-policy --policy-arn arn:aws:iam::197325178561:policy/AmazonEKS_EBS_CSI_Driver_Policy --role-name AmazonEKS_EBS_CSI_DriverRole

040621 0648 HowtoAmazon7

Installing the CSI Driver

There seem to be quite a few different ways to install the CSI driver but Helm is going to be the easy option.

#Install EBS CSI Driver - https://github.com/kubernetes-sigs/aws-ebs-csi-driver#deploy-driver


helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver


helm repo update


helm upgrade --install aws-ebs-csi-driver --namespace kube-system --set enableVolumeScheduling=true --set enableVolumeResizing=true --set enableVolumeSnapshot=true aws-ebs-csi-driver/aws-ebs-csi-driver

Now annotate the controller service account so that the driver knows which IAM role to assume when talking to AWS to create EBS volumes and attach them to nodes, then restart the controller pods to pick up the change.

kubectl annotate serviceaccount ebs-csi-controller-sa -n kube-system eks.amazonaws.com/role-arn=arn:aws:iam::197325178561:role/AmazonEKS_EBS_CSI_DriverRole

kubectl delete pods -n kube-system -l=app=ebs-csi-controller

Regardless of how you deployed the driver, you will then want to run the following command to confirm that the driver is running. On the screenshot you will see the CSI controller and CSI node pods; the number of node pods should be equal to the number of worker nodes you have within your cluster.

#Verify driver is running (ebs-csi-controller pods should be running)


kubectl get pods -n kube-system

040621 0648 HowtoAmazon8

Now that we have everything running that we should have, we will create a storage class.

#Create a StorageClass


kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/specs/classes/storageclass.yaml


kubectl apply -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\storageclass.yaml"

CSI Volume Snapshots

Before we continue to check and configure volume snapshots, confirm that you have the ebs-snapshot-controller-0 running in your kube-system namespace.


You then need to install the following CRDs that can be found at this location if you wish to view them before implementing them.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml

040621 0648 HowtoAmazon10

Finally, we need to create a volume snapshot class; much like a storage class, this enables operators to describe the storage used when provisioning a snapshot.

#Create volume snapshot class using the link https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot


kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/specs/classes/snapshotclass.yaml


kubectl apply -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\snapshotclass.yaml"

Those steps should get you up and running with the CSI driver within your AWS EKS cluster. There are a few steps I need to clarify for myself, especially around the snapshot configuration. The reason I did all of this was so that I could use Kasten K10 to create snapshots of my applications and export them to S3, which is why I am unsure whether every one of these snapshot steps is strictly required.

If you have any feedback, either comment down below or find me on Twitter; I am OK with being wrong, as this is a learning curve for a lot of people.

Understanding the Kubernetes storage journey

Some may say that Kubernetes is built only for stateless workloads, but one thing we have seen over the last 18-24 months is an increase in stateful workloads: think of your databases, messaging queues and batch processing functions, all requiring some state to be consistent and work. Some people will also believe that this state should land outside the cluster but be consumed by the stateless workloads run within the Kubernetes cluster.

The people have spoken

040421 0732 Understandi1

In this post, we are going to briefly talk about the storage options available in Kubernetes and then spend some time on the Container Storage Interface (CSI), which has given storage vendors and cloud providers the ability to fast-track development of cloud-native storage solutions for those stateful workloads.

Before CSI

Let’s rewind a little. Before CSI there was the concept of in-tree, meaning that this code was part of the Kubernetes core code. New in-tree providers from the various storage offerings would be delayed, or would only be released when the main Kubernetes code was shipped and released. It was not just new in-tree provisioner plugins; any bug fixes would also have to wait, which meant a real slowdown in adoption for all those storage vendors and cloud vendors wanting to bring their offerings to the table.

From the Kubernetes side, the code would also carry potential risks if the third-party code caused reliability or security issues. Then there is testing: how would the code maintainers be able to test and ensure everything was good without, in some cases, physical access to the storage systems?

The CSI massively helps resolve most of these issues and we are going to get into this shortly.

Kubernetes Storage Today

Basically, we have a blend of the in-tree providers and the new CSI drivers; we are in the transition period before everything spins over to CSI and in-tree is removed completely. Today you will find, especially within the hyperscalers AWS, Azure and GCP, that the default storage options use the in-tree provider, and you have access to alpha and beta code to test out the CSI functionality. I have more specific content upcoming around this in some later posts.

With in-tree, as we mentioned, you do not need to install any additional components, whereas you do with CSI. In-tree is the easy button, but easy is not always the best option.

Before you can start consuming underlying infrastructure resources with CSI, the drivers must be installed in your cluster. I am not sure whether this will change moving forward or how this will look in the future, to be honest. The table below shows the current and targeted time frames for some of the specific CSI driver releases; some are here now for us to test and try, and some are targeted for later releases.

040421 0732 Understandi2

Source – https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/

What is the CSI

CSI is a way in which third-party storage providers can provide storage operations for container orchestration systems (Kubernetes, Docker Swarm, Apache Mesos, etc.); it is an open and independent interface specification. As mentioned before, this also enables those third-party providers to develop their plugins and add code without having to wait for Kubernetes code releases. Overall, it is a great community effort from members of the Kubernetes, Docker and Mesosphere communities, and this interface standardises the model for integrating storage systems.

This also means developers and operators only have to worry about one storage configuration, which stays in line with the premise of Kubernetes and other container orchestrators around portability.

CSI Driver Responsibility

Going a little deeper into the responsibilities here, and I may come back to this in a follow-up post as I find the process that has been standardised intriguing, there are four things to consider for what is happening under the hood with the CSI driver.

CSI Driver – Must be installed on each node that will leverage the storage. I have only seen the CSI pods running within the kube-system namespace, so my assumption at this stage is that it needs to be in there, and it runs as a privileged pod. There are three services worth mentioning:

Identity Service – This must be on any node that will use the CSI driver; it informs the node about the instance and the driver capabilities, such as whether snapshots or storage-topology-aware pod scheduling are supported.

Controller Service – Makes the decisions but does not need to run on a worker node.

Node Service – Like the identity service, it must run on every node that will use the driver.

Example workflow

040421 0732 Understandi3

This was more of a theory post for me to get my head around storage in Kubernetes. It was of particular interest because of the newly released open-source project called Kubestr. This handy little tool gives you the ability to identify your storage, both in-tree provisioners and CSI; it enables you to validate that your CSI driver is configured correctly; and lastly it lets you run a Flexible I/O (FIO) test against your storage, both in-tree and CSI, which gives you a nice way to automate benchmarking of your storage systems. In the next posts we are going to walk through getting the CSI driver configured in the public cloud, likely starting with AWS and Microsoft Azure, which both have pre-release versions available today.

Any feedback or if I have missed something, drop it in the comments down below.

Introducing Kubestr – A handy tool for Kubernetes Storage

My big project over the last month has not only been getting up to speed on Kubernetes; there has been a parallel effort around Kubernetes storage and an open-source project that has been developed and is released today. In this post we are going to touch on how to get going with Kubestr. The first thing to mention is that this is a handy set of tools to help you identify, validate, and evaluate your Kubernetes storage.

The Challenge

The challenge we have with Kubernetes storage is that it is not all that easy, and some of the tasks that Kubestr helps you with are very manual. For example, the adoption of CSI drivers and the choice of storage available to us within our Kubernetes clusters are growing so fast; this tool is going to assist in validating that the CSI driver is configured correctly for snapshots, which in turn means we can use data protection methods within our cluster. Another hard task is benchmarking storage; it could be done before Kubestr, but it is a potential pain to make happen and it takes time. Kubestr allows us to hit the easy button to evaluate.

All of this while there are so many storage options out there; we want to make sure we are using the right storage for the right task. At the end of the day you can go and pay for the most expensive disk, especially in the public cloud, but let's make sure you need it and do not overspend. And instead of spending your time building benchmarking tools manually, this will save you time while giving you better understanding and visibility into your storage options.

You can find out more here on the Kasten by Veeam blog explaining in more detail the challenges and the reasons Kubestr was born.

Getting Started with Kubestr

We all use different operating systems to manage our Kubernetes clusters, and first and foremost Kubestr is available across Windows, macOS and Linux; you can find links to these releases, as well as the source code, here.

Once you have this installed on your OS, the first command I suggest is the one below (I am running Windows). We can then see the simplicity of what can be used from a command point of view, as well as the additional available commands.

.\kubestr.exe --help


Identify your Kubernetes Storage options

The first step this handy little tool can help you with is simply giving you visibility into the Kubernetes storage options available to you. I am running it below against an Amazon EKS cluster using the Bottlerocket OS on the nodes. I have also installed the AWS EBS CSI driver and snapshot capabilities, which are not deployed by default. My cluster is new and has been configured correctly, but this tool will highlight when things are not configured: maybe you have the storage class available but do not have the volume snapshot class, or maybe you have multiple storage options available and some are not being used, in which case it highlights that you have this storage attached and could save by removing it.

.\kubestr.exe

032921 1559 Introducing2

Validate your Storage

Now that we have our storage classes and our volume snapshot class, we can run a check against the CSI driver to confirm it was configured correctly. If we run the same help command with the csicheck command, you get the following options.

032921 1559 Introducing3

If we run this against our Kubernetes cluster, storage class and volume snapshot class, we will see in the image below the process that runs through creating an application, taking a snapshot, restoring the snapshot and confirming that the configuration is correct.

032921 1559 Introducing4

.\kubestr.exe csicheck -s ebs-sc -v csi-aws-vsc

032921 1559 Introducing5

Evaluate your Storage

Obviously, most people will have access to more than one Kubernetes cluster; to run against additional clusters you simply change the kubectl config context to the cluster you would like to perform the tests against, as shown in the short example below. In this section we want to look into the options around evaluating your Kubernetes storage. This has a very similar walkthrough to the csicheck we covered above, except that there is no restore; instead we are going to get the performance results from Flexible I/O.
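Switching clusters is just a context change; a minimal sketch, where the context name is an example from the earlier posts in this series.

kubectl config get-contexts

kubectl config use-context mcade-civo-cluster01

.\kubestr.exe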

032921 1559 Introducing6

Let’s start with the help command to see our options.

.\kubestr.exe fio --help

032921 1559 Introducing7

Now we can run a test against our storage class with the following and default configurations as listed above.

.\kubestr.exe fio -s ebs-sc

032921 1559 Introducing8

Now we can cater more to specific workloads by using different file sizes for the tests.

.\kubestr.exe fio -s ebs-sc -z 400Gi

032921 1559 Introducing9

Then we can output this to JSON, and this is where we see the community helping: being able to extract that JSON allows for better reporting on all of the results, so that the community can understand storage options without having to run these tests manually on their own clusters.

.\kubestr.exe fio -s ebs-sc -z 400Gi -o json


.\kubestr.exe fio -s ebs-sc -z 400Gi -o json > results.json

I won’t post the whole JSON but you get the idea.

032921 1559 Introducing10

Finally, you can also bring your own FIO configurations; you can find these open-source files here.

#BYOFIO - Demonstrates how to read backwards in a file.

.\kubestr.exe fio -s ebs-sc -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\Kubestr\fio\examples\backwards-read.fio"


#BYOFIO - fio-seq-RW job - takes a long time!


.\kubestr.exe fio -s ebs-sc -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\Kubestr\fio\examples\fio-seq-RW.fio"

I have just uploaded a quick lightning talk I gave at KubeCon 2021 EU on this handy little tool

My next ask is simple: please go and give it a go, and then give us some feedback.

032921 1559 Introducing11

Getting started with Google Kubernetes Service (GKE)

In this post we will cover getting started with Google Kubernetes Engine (GKE). Much the same as the previous posts covering Amazon EKS and Microsoft AKS, we will walk through getting a Kubernetes cluster up and running. Now, we could walk through the Google Cloud portal, which is pretty straightforward, and if you would like to see that as a walkthrough let me know and I will cover it, but I think the most appropriate way is gearing up for Infrastructure as Code.

As with all the public cloud managed Kubernetes posts I have covered they all have great documentation and walkthroughs on getting up and running.

Pre-Requisites

Before we get started you will need to go to the Kubernetes Engine page and choose or create your project.

032221 1935 Gettingstar1

After creating your project or selecting your project you will need to enable the Kubernetes Engine API.

032221 1935 Gettingstar2

Choose your Weapon (Shell)

Next, we need to decide the shell we will use to deploy our Kubernetes cluster. For the purpose of this walkthrough, I am going to be using the local shell which involves us downloading and installing the Google Cloud SDK Shell. It is a super easy install but I have included the steps here with the configuration settings I have chosen.

032221 1935 Gettingstar3

Next up is that agreement, not very long make sure you read this.

032221 1935 Gettingstar4

Which user will be using this shell?

032221 1935 Gettingstar5

Where do you want the installation folder to be?

032221 1935 Gettingstar6

Which components do you wish to install?

032221 1935 Gettingstar7

Installation progress

032221 1935 Gettingstar8

Install complete, choose where you want to find the shell.

032221 1935 Gettingstar9

Because I ticked “start Google Cloud SDK Shell” guess what happened… it starts the shell and you are then prompted to log in to your Google Cloud Platform account.

032221 1935 Gettingstar10

This then opens a web browser to authenticate with your account.

032221 1935 Gettingstar11

You then get the confirmation that all is good and that you are authenticated.

032221 1935 Gettingstar12

Now, back in the shell, you will have the ability to choose the default project you want to use, if you wish.

032221 1935 Gettingstar13

Finally, this is a good time, if you have not already, to install kubectl to interact with your Kubernetes cluster.

gcloud components install kubectl

Configuring GKE

As the above screenshot shows, this is where we can configure some of the defaults that we wish to use for our deployments. We choose the project we wish to use, and we then choose our default compute region and zone.

032221 1935 Gettingstar14

In the end, when you have selected your region and zone it will be confirmed

032221 1935 Gettingstar15

Deploying a GKE Cluster

For test purposes I am simply going to deploy a 1-node cluster and name it cluster-name; I can do this by running the command below.

gcloud container clusters create cluster-name --num-nodes=1

032221 1935 Gettingstar16

Checking in on the GCP Kubernetes engine portal you can see we also have that cluster building out there.

032221 1935 Gettingstar17

Once this is complete in the shell you will see the cluster name, location, Kubernetes version, IP address, Machine type, node version and number of nodes.

032221 1935 Gettingstar18

You will also notice the kubeconfig entry generated for cluster-name just above. What this means is that our kubectl config has been updated with our new GKE cluster details, so that if we ran the following command we would get back information about our GKE cluster and be able to work with it and deploy our applications.

kubectl get nodes

032221 1935 Gettingstar19
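If you ever need to pull those credentials again, for example from another machine, gcloud can regenerate the kubeconfig entry for you; a minimal sketch, where the zone is an assumption based on the region used in this post.

gcloud container clusters get-credentials cluster-name --zone europe-west2-a

kubectl get nodes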

Customise your Kubernetes Cluster

In the example above I have just used defaults, but you are likely going to want to determine the Kubernetes version you are using, the machine types and the number of nodes; I have added some of these examples below.

Kubernetes version

In the above example we just used the default version and did not specify anything. In this section, the first command I am going to share shows how to check what versions are available in the region you have selected.

gcloud container get-server-config

So, you can choose your version from the list above; for example, you could replace the latest shown below with 1.18.16-gke.500.

gcloud container clusters create cluster-name --num-nodes=1 --cluster-version=latest

032221 1935 Gettingstar20

Machine Types

Next up could be the choice you have for machine types; the first line of code will give you a list of available machine types that you can use.

gcloud compute machine-types list

Once we have selected our machine type from the list, we can then add this to our creation command. Notice that in our examples above we were using the e2-medium machine type.

gcloud container clusters create cluster-name --num-nodes=1 --cluster-version=latest --machine-type=n1-standard-2

032221 1935 Gettingstar21

Number of Nodes

If you miss out the --num-nodes flag it will automatically default to 3 nodes, but by using this flag you can determine the number of nodes you require.

gcloud container clusters create cluster-name --num-nodes=4 --cluster-version=latest --machine-type=n1-standard-2

032221 1935 Gettingstar22

Regional Cluster

Everything we have created so far in this post has focused on a single zone (we have been using Europe West 2), but there might be a requirement to have nodes running in multiple zones of a region. This is going to help more in a production environment where you need to keep systems alive during upgrades as well as potential zone outages.

gcloud container clusters create cluster-name --num-nodes=2 --cluster-version=latest --machine-type=n1-standard-2 --region=europe-west2

032221 1935 Gettingstar23

This is how it looks in the GCP portal: you can see that the location is set to europe-west2, which covers 3 zones, and that the number of nodes we specified means we have 2 nodes in each zone.

032221 1935 Gettingstar24

You can see here as we go down into the node pool details how this looks.

032221 1935 Gettingstar25

Deleting the Kubernetes cluster

Ok so before things start getting out of hand and costing me lots of money, we need to quickly remove what we have done from our GCP account and organisation. This can easily be done with the following command.

gcloud container clusters delete cluster-name


And then once complete you see the following confirmation.

032221 1935 Gettingstar27

One more note: if you are running the regional cluster that we described last, then your delete command needs the region added, as shown below.

gcloud container clusters delete cluster-name --region=europe-west2

Hopefully, this will be useful to someone, as always open for feedback and if I am doing something not quite right then I am fine also to be educated and open to the community to help us all learn.

Getting Started with Microsoft AKS – Azure PowerShell Edition

This post is going to cover using Azure PowerShell to get a Microsoft Azure Kubernetes Service (AKS) cluster up and running in your Azure subscription.

In the previous post, we went through the same AKS cluster creation using the Azure CLI.

Which one you choose will depend on your background and usage; if you are familiar with PowerShell then you might choose this option, as you might be more comfortable with the object output. There are lots of posts already out there comparing the Azure CLI and Azure PowerShell (here is one), but I am not going to get into that here.

Install Azure PowerShell

Spoiler alert! To use Azure PowerShell, you are going to need to install it on your system. This article explains how to install Azure PowerShell. Or, before doing this, confirm that you already have it installed by running the following command in your PowerShell console.

# Connect to Azure with a browser sign in token
Connect-AzAccount

With the above command, you are either going to get a wall of red text saying module not found or you will be prompted to log in to your Azure portal. Alternatively, you can just check which modules you have installed with the Get-Module command.

032221 1433 GettingStar1

Either way, you need to connect to your Azure Account and authenticate.

032221 1433 GettingStar2

Authenticate to the account you wish to use and then you will see the following in the same browser.

032221 1433 GettingStar3

Back in your PowerShell console (I am using Visual Studio Code to run through these commands), I now see the following:

032221 1433 GettingStar4

Variables

I generally want to define some variables before we begin creating our AKS cluster. We will use these variables later on in our commands and you will get the complete script linked at the bottom.

$ResourceGroupName = "CadeAKS"
$ClusterName = "CadeAKSCluster"
$ClusterLocation = "eastus"
$NodeCount = "3"

Creating the Azure Resource Group

Next, we need to create a new resource group where our AKS cluster will be hosted. Broadly speaking, the Azure resource group construct is a group where resources are deployed and managed; when creating a resource group, you define a location and a name. For more information, you can look here.

#Create a New resource group
New-AzResourceGroup -Name $ResourceGroupName -Location $ClusterLocation

032221 1433 GettingStar5

Creating your AKS Cluster

For this example I will be using Azure PowerShell to also generate a new SSH public key, but if you wish to create or use an existing key then you can see the detailed process for creating that public SSH key here. The command to create your AKS cluster with an existing SSH key is as follows, obviously pointing to the correct location of your SSH key.

New-AzAksCluster -ResourceGroupName $ResourceGroupName -Name $ClusterName -NodeCount $NodeCount -SshKeyValue 'C:\Users\micha\.ssh\id_rsa'

As I mentioned I will be creating a new cluster and with that also creating new SSH keys with the following command.

#Create the AKS cluster, GenerateSshKey is used here to authenticate to the cluster from the local machine.

New-AzAksCluster -ResourceGroupName $ResourceGroupName -Name $ClusterName -NodeCount $NodeCount -GenerateSshKey -KubernetesVersion 1.19.7

032221 1433 GettingStar6

When this is complete you will get the cluster, information posted like below.

032221 1433 GettingStar7

Accessing the Kubernetes Cluster

The first part of accessing the cluster is making sure you have kubectl available on your system; you can sort this by running the below command.

#This will install kubectl, though I am not sure if it is needed if you already have kubectl on your system - will have to test that.


Install-AzAksKubectl

Once you have this, we can now import the AKS cluster context to our kubectl configuration to access the cluster.

#Now we need to add our AKS context so we can connect


Import-AzAksCredential -ResourceGroupName $ResourceGroupName -Name $ClusterName -Force

032221 1433 GettingStar8

Now if we check the kubectl config contexts:
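For completeness, that check is just the standard kubectl command:

kubectl config get-contexts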

032221 1433 GettingStar9

Deleting the AKS Cluster

When you have finished your testing and learning tasks, I would advise removing your cluster; do not just leave it running unless you really need to. By leaving it running you are going to be spending money, and potentially lots of it.

When you are finished, run the following command based on what we have created above.

#To Delete your cluster run the following command
Remove-AzResourceGroup -Name $ResourceGroupName -force

At this stage you might also want to delete that SSH Public Key we created above as well, and this can be done with the following command.

Remove-Item C:\Users\micha\.ssh\id_rsa


You might also find this repository on GitHub useful where I store my scripts for the above as well as Azure PowerShell which I will cover in another post.

Hopefully, this will be useful to someone, as always open for feedback and if I am doing something not quite right then I am fine also to be educated and open to the community to help us all learn.
