AWS – vZilla https://vzilla.co.uk One Step into Kubernetes and Cloud Native at a time, not forgetting the world before

Kubernetes, How to – AWS Bottlerocket + Amazon EKS https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks Sun, 28 Mar 2021 16:45:03 +0000

Over the last week or so I have been diving into the three main public clouds: I covered Microsoft Azure Kubernetes Service, Google Kubernetes Engine and Amazon Elastic Kubernetes Service. We are heading back to Amazon EKS for this post, focusing on a lightweight, container-focused, open-source Linux operating system that will be the node operating system in our EKS cluster.

What is Bottlerocket?

Bottlerocket is "a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts."


Bottlerocket was released around a year ago, in March 2020, as an operating system designed for hosting Linux containers. The key areas of focus and improvement for Bottlerocket were enhancing security, ensuring the instances in the cluster are identical, and providing good operational behaviours and tooling. This is why I wanted to look into it a little deeper on my learning curve around Kubernetes and cloud-native workloads.

Security-focused

The key ingredients when focusing on security, whether running on-premises or in the public cloud, are reducing the attack surface, having verified software and images, and enforcing permissions. Bottlerocket does not have SSH or many other components, which removes a lot of the security headaches we see with traditional VM operating systems. Reducing the attack surface also comes in the form of hardening the operating system with position-independent executables, using relocation read-only linking, and building all first-party software with memory-safe languages like Rust and Go.

Open Source

Bottlerocket is also fully open source, with specific components written in Rust and Go, the Linux kernel of course, and some other open-source components, all under the MIT or Apache 2.0 licence.

Another interesting angle I found is that not only is Bottlerocket itself open source, the roadmap is open too. This not only lets you see what is coming but also lets you pin your efforts on a container-based OS that you know is moving in the right direction.

You can find more of a description here as well from the official AWS documentation.

EKS + Bottlerocket

A few posts back we covered EKS and deployment using the AWS CLI; here we are going to walk through creating an EKS cluster using the Bottlerocket OS. With all the benefits listed above, I wanted to explore the use case of running Bottlerocket as the node OS in an EKS cluster.

In the next section we are going to walk through the way I did this using the AWS CLI. I was also intrigued because, as a lightweight open-source operating system, there is no licence fee for the OS; you only have to pay for the EC2 instances and AWS EKS.

Now don’t get me wrong, Bottlerocket is not the first and will not be the last container-optimised operating system, nor is AWS the first company to build one on Linux. The first and most notable would be CoreOS. When we think container-optimised operating systems, we think of a small, stripped-down version of Linux.

The final thing I will mention is that Bottlerocket can perform automated OS updates seamlessly. It does this by keeping two identical OS partitions on the OS disk: an update is written only to the inactive partition, and once the update completes without errors the partitions are swapped. This opens up possibilities when it comes to updates, rollbacks and just keeping the lights on for the workloads we need to serve.
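As a sketch of what that two-partition flow looks like in practice, the Bottlerocket project documents an apiclient tool you can run from the host's control or admin container (reachable via SSM rather than SSH):

```
#Check whether a newer Bottlerocket version is available
apiclient update check

#Download the update and write it to the inactive partition set
apiclient update apply

#Reboot into the freshly updated partition set; a failed boot rolls back to the old one
apiclient reboot
```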

How to create your Kubernetes Cluster

That is enough theory for one day, but hopefully, that gives you a good grasp on some of the benefits and reasons why this little OS is popping up more and more out there in the wild a year after its launch and release.

To begin we are going to create a new key pair using the following command.

#Create a keypair


aws ec2 create-key-pair --key-name bottlerocket --query "KeyMaterial" --output text > bottlerocket.pem

Next, we are going to modify this YAML file to suit your requirements. I have labelled some of the key parts you may wish to change, and I will also make sure this YAML is stored in the repository I have been building up from these learning posts. I have not highlighted the AMI family here; it is obviously bottlerocket, and if you run through the UI it becomes clear enough why it is chosen. You will also notice the publicKeyName that we created in the previous step.

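A minimal bottlerocket-cluster.yaml along these lines illustrates the shape of the file (the cluster name, region and instance sizes here are my assumptions for a lab; the exact file lives in the repository mentioned above):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: bottlerocket        # cluster name - change to suit
  region: eu-west-2         # your preferred region

nodeGroups:
  - name: ng-bottlerocket
    instanceType: t3.small  # small instances keep lab costs down
    desiredCapacity: 3
    amiFamily: Bottlerocket # tells eksctl to use Bottlerocket AMIs
    ssh:
      allow: true
      publicKeyName: bottlerocket  # the key pair created in the previous step
```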

Then we need to create our cluster based on our YAML cluster configuration file above. You can find more information here. You can see I have added how long this took in the comments and this will also be stored in the above repository.

#Create EKS Cluster based on yaml configuration (16 mins)


eksctl create cluster --config-file "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\bottlerocket-cluster.yaml"

When the above command has completed, you can confirm access with the following command.

#Confirm you have access to your new EKS Cluster


kubectl get nodes


But that output looks the same no matter which operating system the nodes are running.

#The above doesn't show your OS image used so run the following to confirm Bottlerocket is being used.


kubectl get nodes -o=wide

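If you only want the OS image rather than the whole wide output, a jsonpath query along these lines (a sketch, assuming the cluster from above is reachable) pulls out just the relevant column:

```
#Node name and OS image; Bottlerocket nodes report "Bottlerocket OS <version>"
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\n"}{end}'
```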

Now you can go about deploying your workloads in your new Kubernetes cluster. I have not found any limitations so far, but I will cover installing the CSI driver and then deploying Kasten K10 into my EKS cluster to start protecting my stateful workloads in a later blog.

Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks Fri, 19 Mar 2021 13:01:37 +0000

Over the last few weeks, since completing the 10-part series covering my home lab Kubernetes playground, I have started to look more into Amazon Elastic Kubernetes Service (Amazon EKS), a managed service that you can use to run Kubernetes on AWS without needing to install, operate and maintain your own Kubernetes control plane or nodes.

I will say here that "this is not that hard" still holds, if anything even more so, as you would probably expect when you start looking into managed services. Don't get me wrong, I am sure that perception might change if you were running multiple clusters and hundreds of nodes, but the premise is still the same.

Pre-requisites

I am running everything on a Windows machine, but as you can imagine, everything we talk about can be run on Linux, macOS and of course Windows. In some places it can also be run in a Docker container.

AWS CLI

Top of the tree is the management CLI used to control all of your AWS services. Depending on your OS, you can find the installation instructions here.

The installation is straightforward once you have the MSI downloaded: run the installer, read and accept the licence agreement (everyone should read the licence agreement; this one is short), and click through the remaining steps of the wizard. Then confirm that everything installed successfully.
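A quick way to confirm the install from a terminal (the exact version string will differ from machine to machine):

```
#Prints the CLI version if the install and PATH entry worked
aws --version
```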

Install kubectl

The best advice here is to check the AWS documentation for the kubectl version to use with EKS; for stable working conditions you need a supported version of kubectl installed on your workstation. If you have been playing a lot with kubectl then you may have a newer version depending on your cluster; my workstation is using v1.20.4. Note that it is the client version you need to focus on here; the second line ("Server Version") contains the apiserver version.

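To check just the client side, something like this works:

```
#Client version only - this is the number to match against the EKS-supported range
kubectl version --client
```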

My suggestion is to grab the latest MSI here.

Install eksctl CLI

This is what we are specifically going to use to work with our EKS cluster. Again, the official AWS documentation can be found here. There are various OS options, but since we are on Windows we will install eksctl using Chocolatey.


IAM & VPC

Now, I am not going to cover this here, as it would make this a monster post, but you need an IAM account with permissions that allow you to create and manage EKS clusters in your AWS account, and you need a VPC configuration. For lab and education testing, I found this walkthrough very helpful.

Let’s get to it

Now we have our prerequisites, we can begin the next easy stages of deploying our EKS cluster. We will start by configuring the AWS CLI on our workstation to interact with AWS using our IAM credentials, along with the region we wish to use.

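The configuration step is the interactive aws configure prompt; a sketch of what it asks (the values below are placeholders, not real credentials):

```
aws configure
# AWS Access Key ID [None]: <your access key ID>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: eu-west-2
# Default output format [None]: json
```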

Next, we will use eksctl to build out our cluster; the following command is what I used for test purposes. Notice that with this we will not have SSH access to our nodes, as we did not specify it, but I will cover how to add that later. This command creates a cluster called mc-eks in the eu-west-2 (London) region with a standard node group using t3.small instances. This is my warning shot: if you do not specify a node type, eksctl uses m5.large, and for those using this for education things will get costly. Another option, to really simplify things, is to run eksctl create cluster on its own; this creates an EKS cluster in the default region we configured above with the AWS CLI, with one nodegroup containing two of those monster nodes.
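Based on the description above, the test command would look something like this (a sketch; the node type, name and region are from the text, the rest are illustrative defaults):

```
#Small managed nodegroup to keep lab costs down - t3.small instead of the m5.large default
eksctl create cluster --name mc-eks --region eu-west-2 --nodegroup-name standard --node-type t3.small --managed
```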


Once you are happy you have the correct command then hit enter and watch the cluster build start to commence.


If you would like to understand what is happening above, head into your AWS Management Console and locate CloudFormation; there you will see the progress of your new EKS stack being created.


When this completes you will have your managed Kubernetes cluster running in AWS, accessible via your local kubectl. Because I also wanted to connect via SSH to my nodes, I went with a different EKS build-out for longer-term education and plans. Here is the command I run when I require a new EKS cluster. It looks similar to what we had above, but because I wanted the SSH key created alongside the IAM role so I could connect to my nodes, --ssh-access is enabled and the --ssh-public-key to connect with is specified. You will also notice that I am creating my cluster with 3 nodes, a minimum of 1 and a maximum of 4. There are lots of other options you can pass when creating the cluster, including versions.

eksctl create cluster --name mc-eks --region eu-west-2 --nodegroup-name standard --managed --ssh-access --ssh-public-key=MCEKS1 --nodes 3 --nodes-min 1 --nodes-max 4


Accessing the nodes

If you followed the above and you downloaded the PEM file when you created the IAM role, you can now SSH into your nodes using a command similar to the one below, obviously making sure you have the correct EC2 instance and the location of your PEM file.

ssh ec2-user@ec2-18-130-232-27.eu-west-2.compute.amazonaws.com -i C:\Users\micha\.kube\MCEKS1.pem

In order to get the public DNS name or public IP, you can run the following command. Note that I am filtering to only show m5.large because I know these are the only instances I have running with that EC2 instance type.

aws ec2 describe-instances --filters Name=instance-type,Values=m5.large
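If you only want the DNS names rather than the full JSON document, a JMESPath query along these lines trims the output down:

```
#Print just the public DNS name of each matching instance
aws ec2 describe-instances --filters Name=instance-type,Values=m5.large --query "Reservations[].Instances[].PublicDnsName" --output text
```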

If these are the only machines you have running in the default region we provided, you can just run the following command.

aws ec2 describe-instances

Accessing the Kubernetes Cluster

Finally, we just need to connect to our Kubernetes cluster; the end of the output from the cluster-create command confirms this, as per below.


We can then check access.


eksctl created a kubectl config file in ~/.kube, or added the new cluster's configuration to an existing config file there. If you already had, say, a home lab in your kubectl config, you can see this or switch to it using contexts, which I also covered in a previous post.
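Listing and switching contexts looks like this (the context names will match whatever is already in your kubeconfig):

```
#Show every cluster/context kubectl knows about; the current one is starred
kubectl config get-contexts

#Switch back to, say, a home lab context
kubectl config use-context <your-home-lab-context>
```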


The final thing to note: obviously this is costing you money while it is running, so my advice is to get quick at deploying and destroying this cluster; use it for what you want and need to learn, and then destroy it. This is why I still have a Kubernetes cluster available at home that costs me nothing other than being available to me.
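Tearing the cluster down is the mirror of the create command (assuming the mc-eks name and region used earlier):

```
#Deletes the EKS cluster, its nodegroups and the associated CloudFormation stacks
eksctl delete cluster --name mc-eks --region eu-west-2
```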


Hopefully this will be useful to someone. As always, I am open to feedback, and if I am doing something not quite right I am happy to be educated and for the community to help us all learn.

#VMworld 2018 – Day 2 – #Veeam in The #AWS Marketplace https://vzilla.co.uk/vzilla-blog/vmworld-2018-day-2-veeam-in-the-aws-marketplace Wed, 29 Aug 2018 00:00:46 +0000

That's a wrap for day 2 of VMworld. There were two big bits worth mentioning from a Veeam perspective; the first is the VMware on AWS marketplace and the addition of Veeam Backup & Replication as an option for automated deployment.

(The screens described in this post were taken from beta testing.)

To summarise, this means that Veeam customers who have taken the step of leveraging VMware on AWS get the same simple, easy-to-use and seamless way of getting things protected from a backup and replication point of view, using the same toolset we know from our on-premises vSphere environments.

Yesterday I spoke about automation, and we also shared in our session that this feature is becoming available; both this and the Chef and Terraform work we spoke about yesterday are types of automation, but aimed at possibly different end users and prospects.

The beauty of the Chef deployment we discussed is that it allows us to be more distributed and dynamic; this marketplace offering is about getting things up and running as fast as possible and starting to protect workloads. There will also be some customers that don't want to explore the open-source community of our Chef cookbook.

It's as simple as deploying to your SDDC and then choosing the CloudFormation template. The template uses the VPC that is linked to your VMware on AWS instance and is executed against that instance.


This template from CloudFormation will then run through the creation of the stack required.


When that CloudFormation template has run, it's time to start the environment configuration: resource pools, network configuration and so on.


Then there is the summary screen, showing all the configuration you are about to commit.

This will then continue the deployment with your configuration and you will be able to see the Veeam server deployed within your SDDC.


When you first log in to the newly created Veeam server, you will see that the repository server has been added as per the stack configuration of the CloudFormation template. It has also added your vSphere vCenter server. You can now see the VMs within your SDDC and can begin protecting those instances.

The final thing I wanted to share is the capabilities: it's not just backup. You also have the ability to replicate these virtual machines from this vSphere environment to any other environment, including an on-premises environment, a Cloud Connect service provider offering replication as a service, or another vSphere environment anywhere.

