AWS – vZilla https://vzilla.co.uk

How to – Amazon EBS CSI Driver
https://vzilla.co.uk/vzilla-blog/how-to-amazon-ebs-csi-driver
Tue, 06 Apr 2021 11:02:48 +0000

In a previous post, we covered where the CSI came from, where it is going, and the benefits of having an industry-standard interface that lets storage vendors develop a plugin once and have it work across a number of container orchestration systems.

The reason for this post is to highlight how to install the driver and enable volume snapshots. The driver itself is still in the beta phase and volume snapshots are in the alpha phase; alpha-phase software is not supported within Amazon EKS clusters, although the driver itself is well tested and supported in Amazon EKS for production use. The fact that we must deploy it into our new Amazon EKS clusters means the CSI driver for Amazon EBS volumes is not the default option today, but it should become the default in the future.

The driver implements the CSI interface for consuming Amazon EBS volumes.

Before we start, the first thing we need is an EKS cluster. You can follow either this post, which walks through creating an EKS cluster, or this one, which walks through creating an AWS Bottlerocket EKS cluster. If you want the official documentation from Amazon, you can find that here.

OIDC Provider for your cluster

For my use case here with the CSI driver, I needed to use IAM roles for service accounts, and for that an IAM OIDC provider must exist for your cluster. First up, run the following command to check whether your cluster already has an IAM OIDC provider.

#Determine if you have an existing IAM OIDC provider for your cluster


aws eks describe-cluster --name bottlerocket --query "cluster.identity.oidc.issuer" --output text


Now we can run the following command to see whether any OIDC providers exist; take the ID shown above and pipe the output through grep to search for it.

#List the IAM OIDC providers - if nothing is returned then you need to create one


aws iam list-open-id-connect-providers
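The "take the ID and grep for it" step can be sketched as below; the issuer URL is a hypothetical example of the format describe-cluster returns, and the AWS commands are left as comments since they need cluster access:

```shell
# Hypothetical issuer URL, as returned by:
#   aws eks describe-cluster --name bottlerocket \
#     --query "cluster.identity.oidc.issuer" --output text
issuer="https://oidc.eks.eu-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"

# The provider ID is the last path segment (fifth '/'-separated field)
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
echo "$oidc_id"

# Then search the provider list for it:
#   aws iam list-open-id-connect-providers | grep "$oidc_id"
```

If the grep returns a match, a provider already exists and you can skip the creation step.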


If the above command did not return anything, then we need to create an IAM OIDC provider. We can do this with the following command.

#Create an IAM OIDC identity provider for your cluster


eksctl utils associate-iam-oidc-provider --cluster bottlerocket --approve

Repeat the aws iam list-open-id-connect-providers command; it should now return the new provider.

IAM Policy Creation

The IAM policy that we now need to create is what will be used by the CSI driver's service account. This service account is what talks to the AWS APIs.

Download the example IAM policy; if this is a test cluster you can use it as-is. You can see the actions this policy allows by inspecting the JSON once downloaded.

#Download IAM Policy - https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json


curl -o example-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.9.0/docs/example-iam-policy.json


For test purposes, I am also going to keep the same name as the documentation walkthrough. Following the command, I will show how this looks within the AWS Management Console.

#Create policy


aws iam create-policy --policy-name AmazonEKS_EBS_CSI_Driver_Policy --policy-document file://example-iam-policy.json

The AWS Management Console shows the policy, and you can see that it matches the JSON file we downloaded.


Next, we need to create the IAM role.

#Create an IAM Role

aws eks describe-cluster --name bottlerocket --query "cluster.identity.oidc.issuer" --output text

aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole --assume-role-policy-document "file://D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\trust-policy.json"


The reason for the first command is to gather the OIDC issuer and add it to the trust-policy.json file; you need to replace the Federated line with your own AWS account ID and OIDC provider. Further information, and an example trust-policy.json, can be found in the official AWS documentation.
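For illustration, a trust policy for the CSI driver's service account generally takes the shape below; the account ID, region and OIDC provider ID are placeholders you would replace with your own values, and the service account name matches the ebs-csi-controller-sa we annotate later in this post:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.eu-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.eu-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B7:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
```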


Next, we need to attach the policy to the role; this can be done with the following command. Take a copy of the policy ARN output from the create-policy command above.

#Attach policy to IAM Role


aws iam attach-role-policy --policy-arn arn:aws:iam::197325178561:policy/AmazonEKS_EBS_CSI_Driver_Policy --role-name AmazonEKS_EBS_CSI_DriverRole
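Rather than hard-coding the account ID (mine, 197325178561, appears in the command above), you can build the policy ARN from your own account ID. This is a sketch assuming the policy name used earlier; the commented commands show how you would normally look it up and attach it:

```shell
# Normally: account_id=$(aws sts get-caller-identity --query Account --output text)
account_id="197325178561"   # example account ID from this post

policy_arn="arn:aws:iam::${account_id}:policy/AmazonEKS_EBS_CSI_Driver_Policy"
echo "$policy_arn"

# Then attach it:
#   aws iam attach-role-policy --policy-arn "$policy_arn" \
#     --role-name AmazonEKS_EBS_CSI_DriverRole
```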


Installing the CSI Driver

There are quite a few different ways to install the CSI driver, but Helm is going to be the easy option.

#Install EBS CSI Driver - https://github.com/kubernetes-sigs/aws-ebs-csi-driver#deploy-driver


helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver


helm repo update


helm upgrade --install aws-ebs-csi-driver --namespace kube-system --set enableVolumeScheduling=true --set enableVolumeResizing=true --set enableVolumeSnapshot=true aws-ebs-csi-driver/aws-ebs-csi-driver

Now annotate the controller's service account with the role ARN so the controller pods know which IAM role to assume when talking to AWS to create EBS volumes and attach them to nodes, then delete the pods so they restart and pick up the annotation.

kubectl annotate serviceaccount ebs-csi-controller-sa -n kube-system eks.amazonaws.com/role-arn=arn:aws:iam::197325178561:role/AmazonEKS_EBS_CSI_DriverRole

kubectl delete pods -n kube-system -l=app=ebs-csi-controller

Regardless of how you deployed the driver, run the following command to confirm it is running. You should see the CSI controller pods plus one CSI node pod per worker node in your cluster.

#Verify driver is running (ebs-csi-controller pods should be running)


kubectl get pods -n kube-system


Now that we have everything running that should be running, we will create a storage class.

#Create a StorageClass


kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/specs/classes/storageclass.yaml


kubectl apply -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\storageclass.yaml"
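If you prefer to write the StorageClass yourself rather than apply it from the repository, it is a short manifest. This sketch mirrors the example in the aws-ebs-csi-driver repo; the class name is arbitrary:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com        # the EBS CSI driver
volumeBindingMode: WaitForFirstConsumer
```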

CSI Volume Snapshots

Before we continue to check and configure volume snapshots, confirm that you have the ebs-snapshot-controller-0 running in your kube-system namespace.


You then need to install the following CRDs, which can be found at this location if you wish to view them before applying.

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml


Finally, we need to create a volume snapshot class; much like a storage class, this lets operators describe the storage used when provisioning a snapshot.

#Create volume snapshot class using the link https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot


kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/specs/classes/snapshotclass.yaml


kubectl apply -f "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\snapshotclass.yaml"
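For reference, the snapshot class is a similarly small manifest. This is a sketch of what the repository example contains; at the time of this driver version the snapshot API was still v1beta1, and the class name is arbitrary:

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com     # snapshots are taken as EBS snapshots
deletionPolicy: Delete
```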

Those steps should get you up and running with the CSI driver in your AWS EKS cluster. There are a few steps I still need to clarify for myself, especially around the snapshot pieces. My reason for doing all this was so that I could use Kasten K10 to create snapshots of my applications and export them to S3, which is why I am unsure whether every snapshot step here is strictly required.

If you have any feedback either comment down below or find me on Twitter, I am ok to be wrong as this is a learning curve for a lot of people.

Kubernetes, How to – AWS Bottlerocket + Amazon EKS
https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks
Sun, 28 Mar 2021 16:45:03 +0000

Over the last week or so I have been diving into the three main public clouds: I covered Microsoft Azure Kubernetes Service, Google Kubernetes Engine and Amazon Elastic Kubernetes Service. We are heading back to Amazon EKS for this post, focusing on a lightweight, container-focused, open-source Linux operating system that will be the node operating system in our EKS cluster.

What is Bottlerocket?

"Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts."


Bottlerocket was released around a year ago, in March 2020, as an operating system designed for hosting Linux containers. The key areas of focus and improvement were enhancing security, ensuring the instances in the cluster are identical, and providing good operational behaviours and tooling; Bottlerocket improves each of these areas. This is why I wanted to look into it a little deeper on my learning curve around Kubernetes and cloud-native workloads.

Security-focused

Key ingredients of security, whether running on-premises or in the public cloud, are reducing the attack surface, having verified software and images, and enforcing permissions. Bottlerocket does not ship SSH or many other components, which removes a lot of the security headaches we see with traditional VM operating systems. Reducing the attack surface also comes by way of hardening the operating system with position-independent executables, using relocation read-only linking, and building all first-party software with memory-safe languages like Rust and Go.

Open Source

Bottlerocket is also fully open source, with specific components written in Rust and Go, plus the Linux kernel and some other open-source components, all under the MIT or Apache 2.0 licences.

Another interesting angle is that not only is Bottlerocket open source, but the roadmap is open source too. This lets you see what is coming and means you can pin your efforts on a container-based OS that you know is moving in the right direction.

You can find more of a description here as well from the official AWS documentation.

EKS + Bottlerocket

A few posts back we covered EKS and deployment using the AWS CLI, here we are going to walk through creating an EKS cluster using the Bottlerocket OS. With all the benefits listed above about Bottlerocket, I wanted to explore the use case for running the Bottlerocket OS as my nodes in an EKS cluster.

In the next section, we are going to walk through the way in which I did this using the AWS CLI, I was also intrigued that because this is a lightweight open-source operating system it would also mean that I am not having to pay a license fee for the OS and would only have to pay for the EC2 instances and AWS EKS.

Now don’t get me wrong: Bottlerocket is not the first and will not be the last container-optimised operating system, and AWS is not the first company to build one on Linux. The first and most notable would be CoreOS; when we think of container-optimised operating systems, we think of small, stripped-down versions of Linux.

The final thing I will mention is that Bottlerocket can perform automated OS updates seamlessly. It keeps two identical OS partitions on the OS disk; an update is written only to the inactive partition, and once the update completes without errors the partitions are swapped. This opens up possibilities around updates, rollbacks and simply keeping the lights on for the workloads we need to serve.

How to create your Kubernetes Cluster

That is enough theory for one day, but hopefully that gives you a good grasp of some of the benefits and reasons why this little OS is popping up more and more in the wild a year after its launch and release.

To begin we are going to create a new key pair using the following command.

#Create a keypair


aws ec2 create-key-pair --key-name bottlerocket --query "KeyMaterial" --output text > bottlerocket.pem

Next, we are going to modify this YAML file to suit your requirements. I have labelled some of the key parts that you may wish to change, and I will also make sure this YAML is stored in the repository I have been building up from these learning posts. I have not highlighted the AMI family; this is obviously bottlerocket, and if you run through the UI it becomes clear why it is being chosen. You will also notice the publicKeyName we created in the previous step.
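As a sketch of what bottlerocket-cluster.yaml might contain (cluster name, region and sizing here are assumptions, to illustrate the amiFamily and publicKeyName fields mentioned above):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: bottlerocket
  region: eu-west-2
nodeGroups:
  - name: ng-bottlerocket
    instanceType: t3.small
    desiredCapacity: 3
    amiFamily: Bottlerocket       # selects the Bottlerocket AMI
    ssh:
      allow: true
      publicKeyName: bottlerocket # the key pair created above
```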


Then we need to create our cluster based on our YAML cluster configuration file above. You can find more information here. You can see I have added how long this took in the comments and this will also be stored in the above repository.

#Create EKS Cluster based on yaml configuration (16 mins)


eksctl create cluster --config-file "D:\Personal OneDrive\OneDrive\Veeam Live Documentation\Blog\AWS EKS Setup\bottlerocket-cluster.yaml"

When the above command is completed you will be able to confirm this with the following command.

#Confirm you have access to your new EKS Cluster


kubectl get nodes


But the above output looks the same as it would for any OS being used as the node operating system.

#The above doesn't show your OS image used so run the following to confirm Bottlerocket is being used.


kubectl get nodes -o=wide


Now you can go about deploying your workloads to your new Kubernetes cluster. I have not found any limitations so far, and in a later blog I will cover installing the CSI driver and then deploying Kasten K10 into my EKS cluster to start protecting my stateful workloads.

Getting started with Amazon Elastic Kubernetes Service (Amazon EKS)
https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks
Fri, 19 Mar 2021 13:01:37 +0000

Over the last few weeks, since completing the 10-part series covering my home lab Kubernetes playground, I have started to look more into Amazon Elastic Kubernetes Service (Amazon EKS), a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.

I will say that "this is not that hard" continues to be the case here, as you would probably expect once you start looking at managed services. Don't get me wrong: if you were running multiple clusters and hundreds of nodes, that perception might change, although the premise would stay the same.

Pre-requisites

I am running everything on a Windows machine, but everything we talk about can be run on Linux, macOS and of course Windows; in some places, it can also be run in a Docker container.

AWS CLI

Top of the tree is the management CLI used to control all of your AWS services. Depending on your OS, you can find the instructions here.


The installation is straightforward once you have the MSI downloaded. Just follow the next few steps.


Everyone should read the license agreement. This one is a short one.


Confirm that you have installed everything successfully.


Install kubectl

The best advice here is to check which kubectl version to use with Amazon EKS; for stable working conditions you need a supported version of kubectl on your workstation. If you have been playing a lot with kubectl you may have a newer version depending on your cluster; my workstation is using v1.20.4, as you can see below. Note that it is the client version you need to focus on here; the second line ("Server Version") shows the API server version.


My suggestion is to grab the latest MSI here.

Install eksctl CLI

This is what we are specifically going to use to work with our EKS cluster. The official AWS documentation can be found here. There are various OS options, but as we are on Windows we will install eksctl using Chocolatey.


IAM & VPC

I am not going to cover this in depth, as it would make this a monster post, but you need an IAM account with permissions that allow you to create and manage EKS clusters in your AWS account, and you need a VPC configuration. For lab and education testing, I found this walkthrough very helpful.

Let’s get to it

Now we have our prerequisites, we can begin the next easy stages of deploying our EKS cluster. We will start by configuring the AWS CLI on our workstation to interact with AWS IAM, along with the region we wish to use.


Next, we will use eksctl to build out our cluster; the following command is what I used for test purposes. Notice that with this we will not have SSH access to our nodes, as we did not specify it, but I will cover how to get that later. This command creates a cluster called mc-eks in the eu-west-2 (London) region with a standard node group using t3.small instances. This is my warning shot: if you do not specify a node type, eksctl uses m5.large, and for those using this for education, things will get costly. Another option to really simplify things is to run eksctl create cluster on its own, which creates an EKS cluster in the default region we configured with the AWS CLI, with one nodegroup containing two of those monster nodes.
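Putting the described flags together, the test command looks something like the following; this is a reconstruction from the description above (the original was shown as a screenshot), echoed here so you can review it before running, since eksctl will start billing for EC2 instances:

```shell
# Reconstructed test-cluster command; review before running it for real
cmd="eksctl create cluster --name mc-eks --region eu-west-2 --nodegroup-name standard --node-type t3.small --managed"
echo "$cmd"
```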


Once you are happy you have the correct command then hit enter and watch the cluster build start to commence.


If you would like to understand what is happening above, head into your AWS Management Console and locate CloudFormation; there you will see the progress of your new EKS stack being created.


Then, when this completes, you will have your managed Kubernetes cluster running in AWS and accessible via your local kubectl. Because I also wanted to connect via SSH to my nodes, I went with a different EKS build-out for longer-term education and plans. Here is the command I run when I require a new EKS cluster. It looks similar to the one above, but because I wanted an SSH key so I could connect to my nodes, --ssh-access is enabled and an --ssh-public-key is supplied. You will also notice that I am creating my cluster with 3 nodes, with a minimum of 1 and a maximum of 4. There are lots of options you can pass when creating the cluster, including versions.

eksctl create cluster --name mc-eks --region eu-west-2 --nodegroup-name standard --managed --ssh-access --ssh-public-key=MCEKS1 --nodes 3 --nodes-min 1 --nodes-max 4


Accessing the nodes

If you followed the above and got the PEM file when you created the key pair, you can now SSH into your nodes using a command similar to the one below, obviously making sure you have the correct EC2 instance and the location of your PEM file.

ssh ec2-user@ec2-18-130-232-27.eu-west-2.compute.amazonaws.com -i C:\Users\micha\.kube\MCEKS1.pem

In order to get the public DNS name or public IP, you can run the following command. Note that I am filtering to only show m5.large instances because I know these are the only EC2 instances of that type I have running.

aws ec2 describe-instances --filters Name=instance-type,Values=m5.large

If these are the only machines you have running in the default region we configured, then you can just run the following command.

aws ec2 describe-instances

Accessing the Kubernetes Cluster

Finally, we just need to connect to our Kubernetes cluster. When the cluster-creation command finishes, you will see output as per below.


We can then check access,


eksctl created a kubectl config file in ~/.kube, or added the new cluster's configuration to an existing config file there. If you already had, say, a home lab in your kubectl config, you can view it or switch to it using the following commands, also covered in a previous post about contexts.


The final thing to note: obviously this is costing you money while it is running, so my advice is to get quick at deploying and destroying this cluster. Use it for what you want and need to learn, and then destroy it. This is why I still keep a Kubernetes cluster at home that costs me nothing other than being available to me.


Hopefully, this will be useful to someone, as always open for feedback and if I am doing something not quite right then I am fine also to be educated and open to the community to help us all learn.

Disaster Recovery to the Cloud
https://vzilla.co.uk/vzilla-blog/disaster-recovery-to-the-cloud
Wed, 19 Aug 2020 12:20:48 +0000

I think it is fair to say the public cloud is very much on everyone's mind when looking at an IT refresh, or when considering the constant requirement to innovate and enable your business to do more. A conversation we keep having is about sending workloads to the cloud using our Direct Restore to Microsoft Azure or AWS, which takes care of the conversion process and configuration migration. The most common use case to date has been testing against specific application stacks. Then it comes down to data recovery: for example, a failure scenario on premises that doesn't require a complete failover to a DR site, but where some virtualisation hosts are in an outage and the workloads that lived on them need to run somewhere while remediation takes place. Both are use cases Veeam Backup & Replication has been able to address for several years and versions.

But disaster recovery always carries a need for speed. The process of taking your virtual machine backups and restoring them to the public cloud takes time, possibly outside the SLAs the business has set. With the most recent update, Veeam Backup & Replication v10a, the conversion process has been dramatically enhanced; the speed is now a game changer, and disaster recovery to the cloud may fit within SLAs that were once impossible to meet this way.

10,000ft view

Let's think about your environment, or an environment: you have your vSphere / Hyper-V / Nutanix virtualisation environment on premises running your virtual machines, and you are using Veeam Backup & Replication to protect these machines on a daily, twice-daily or more frequent schedule. You may have had the requirement to directly restore certain image-based backups to Microsoft Azure or AWS for testing or development, but you likely would not have considered this as a way of recovering those workloads should a failure happen in your environment. What you likely had, or have, for disaster recovery is another site running similar hardware, with replication technologies moving your workloads between the sites for failover.

Maybe you are not familiar with Direct Restore to Microsoft Azure you can find out more here in a previous post. A similar post can be found here for AWS.

Speed Improvements

As previously mentioned, the key to being able to treat this direct restore option as a disaster recovery option is the speed improvements introduced in the recent Veeam Backup & Replication 10a update. Comparing against v10, which was released in early 2020, lets me show how much faster this process now is.

This video demo walks through in detail of some of those restore scenarios generally focused around test and development or data recovery but not full disaster recovery options.

You will see in the 10a update post linked above that a test was also performed at the time to show when and where to use the Azure proxy, and what speeds you could expect for direct restore to Microsoft Azure depending on your environment variables, comparing v10 and 10a across the board.


This video demo in the section below shows the final two results and how this can be achieved.

The Situation

Let's think about the situation where our local site is toast and we may not have access to our local on-premises Veeam Backup & Replication server either. Hopefully you are sending your data offsite to a different location, preferably into object storage; for the purposes of this post, we are sending our backups into Microsoft Azure Blob Storage as our offsite copy.

We are using Scale Out Backup Repository on premises as our performance tier and Microsoft Azure Blob Storage for our capacity tier.

But we cannot access that Veeam Backup & Replication server! That is OK: the Veeam Backup & Replication server is just software that can be installed on any supported Windows OS (client versions can work if really need be).

We have also made it super easy to deploy a Veeam Backup & Replication server from the Microsoft Azure Marketplace, and this takes five minutes! You then add your object storage, import your backup metadata, and start the improved direct restore to Microsoft Azure from this machine.

This video shows this process from top to bottom and highlights the speed improvements from the version 10 release.

Other thoughts?

OK, so we have mentioned disaster recovery; this is only applicable if your SLAs allow it, as we must get the data converted and up and running in the public cloud, and all of this takes time. There are ways to streamline the deployment and configuration of the Azure-based Veeam Backup & Replication server, and I am currently working on making this process super fast and streamlined.

I also want to shout out Ian, one of our Senior Systems Engineers here at Veeam. He has been doing some stuff and helping me with some of this process here.

The other angle that could be taken here is DR testing, without affecting the live production systems or having to run through a real outage or failure.

You should be able to automate most of the process, so the machines are brought up in Microsoft Azure or AWS, verified as running and talking to each other, and then automatically powered off, either sat waiting for an actual failure scenario or removed from the public cloud.

More of these ideas to come.

Veeam Backup & Replication 10a Released – Another Release
https://vzilla.co.uk/vzilla-blog/veeam-backup-replication-10a-released-another-release
Tue, 28 Jul 2020 10:57:34 +0000

The releases just keep coming from Veeam this year. We have seen releases to the flagship products within the Veeam Availability Suite, which consists of Veeam Backup & Replication and Veeam ONE; releases for your cloud IaaS backup requirements, with offerings for both AWS and Azure; SaaS backup aimed at Microsoft Office 365; and business continuity and disaster recovery with Veeam Availability Orchestrator. And that's just the big releases; there have also been some updates.

10a was released today, and this release concentrates on addressing issues reported by customers; but it wouldn't be a Veeam release if it did not also contain some new features and functionality.

Platform Support

I mentioned earlier that we have been releasing new products and new versions at some pace, and that included Veeam Backup for AWS. I have covered this product in this YouTube playlist. It was very much released as a standalone product, but it also introduced the new Veeam Universal License, giving you a flexible way to move workloads around your environments while always ensuring Veeam can protect your workloads and data.

The new feature in 10a addresses that standalone management of Veeam Backup & Replication and Veeam Backup for AWS. A new plug-in, also released today, unlocks more centralised management for the Veeam Backup for AWS product: the ability to connect to an existing appliance or deploy a new one, and then configure the S3 backup repository for the appliance, also known as your external repository. External repositories were already there in v10 earlier in the year, but you had to create them manually in the AWS Management Console and add them to both Veeam Backup & Replication and Veeam Backup for AWS.

But that's not all: you can also create, edit, start and stop your AWS backup policies from within Veeam Backup & Replication, amongst some more pretty cool features. You can download this plugin from here.

  • Monitor session statistics.
  • View created snapshots and image-level backups.
  • Restore entire EC2 instances as Amazon EC2 instances or Microsoft Azure VMs.
  • Instantly restore EC2 instances as VMs into VMware vSphere or Hyper-V environment.
  • Export EC2 instance volumes as virtual disks.


OK, that was probably the biggest thing in 10a, but there is also support for many new operating system versions, taken from the linked KB article above.

  • Microsoft Windows 10 version 2004 and Microsoft Windows Server SAC version 2004 support as guest OS, as Hyper-V servers, for the installation of Veeam Backup & Replication components, and for agent-based backup with the Veeam Agent for Microsoft Windows 4.0.1 (included in the update).
  • Linux Kernel version 5.7 support for guest VMs, for the installation of Veeam Backup & Replication components, and for agent-based backup with the Veeam Agent for Linux 4.0.1 (included in the update).
  • RHEL 8.2, CentOS 8.2, Oracle Linux 8.2 (RHCK) and VMware Photon OS support for guest processing functionality in host-based backup jobs.
  • RHEL 8.2, CentOS 8.2, Oracle Linux 8.2 (RHCK), Ubuntu 20.04, Debian 10.4, openSUSE Leap 15.2, Oracle Linux 8 (up to UEK R6) and Fedora 32 (up to kernel 5.7.7) distributions support in the Veeam Backup & Replication agent management functionality.
  • VMware vCloud Director 10.1 support and better handling of deployments without network access to vCD cells.
  • Recent Azure Stack versions support. Please refer to the documentation for additional registry settings required depending on your version.

There were also some storage enhancements: primary storage integration support for HPE Primera and secondary storage system support for Dell EMC Data Domain. All the detail can be found in the KB article linked above.

NAS Backup

The notable new enhancement around NAS backup is support for Azure File Sync. For those not familiar with it, Azure File Sync enables the following:

  • Synchronise file shares between offices
  • Fix problems with full file servers by using tiered storage in the cloud
  • Use online backup
  • Get a DR solution for file servers, e.g. small business or branch office


The issue with Veeam Backup & Replication prior to this 10a release was that it was not aware of the cloud tiering; now files not present in the local cache will be backed up directly from Azure. I am going to try to set this up and create a video demo, as I think it might be useful for all. Let me know in the comments below if you are interested.

Ah yes, another thing added in 10a related to NAS backup: the first 250GB of NAS data is protected for FREE!

Restore

It would not be a Veeam release without some focus on restore speed and performance. The one I have picked from the notes is the enhancement around Direct Restore to Azure. I have spent a lot of time testing restore scenarios from different backup locations and whether using a proxy within Azure makes a difference.

I will have to jump back on this testing and confirm how much quicker the same process is with the new enhancements. The image below shows the testing prior to this 10a release.

072820 1044 VeeamBackup3

Another massive release, and we are just calling it an update. I have seen the list of what else we can expect for the rest of the year, and it will not disappoint.

]]>
https://vzilla.co.uk/vzilla-blog/veeam-backup-replication-10a-released-another-release/feed 3
How-To – Veeam Backup for Microsoft Office 365 – Portability Part 2 https://vzilla.co.uk/vzilla-blog/how-to-veeam-backup-for-microsoft-office-365-portability-part-2 https://vzilla.co.uk/vzilla-blog/how-to-veeam-backup-for-microsoft-office-365-portability-part-2#respond Sun, 22 Dec 2019 15:29:00 +0000 https://vzilla.co.uk/?p=1926

This post will continue from part 1 and will cover protecting the VBO Server in AWS with Veeam Backup for AWS and bringing it back to vSphere on premises.

We now have our VBO server running as an Amazon EC2 instance and continued protection of our Microsoft Office 365 environment.

What if we wanted to move that back on premises?

The last thing we did was confirm that we had a successful backup of our Microsoft Office 365 environment; this was covered in part 1.

Veeam Backup for AWS (FREE)

I actually wrote about this a few weeks back, introducing the latest product release in the Veeam Availability Platform; more is covered in that overview here.


It is very simple to get up and running with the free version of the product, and the paid-for versions are the same. I am using the free version for this task as it gives us the ability to protect up to 10 Amazon EC2 instances.

My first task was to create a new policy that would enable me to get a backup of the Veeam Backup for Office 365 instance.


This policy will then start at the scheduled times and start protecting your AWS EC2 Instance of Veeam Backup for Office 365.


The important part to note: in the same way as when taking the machine from vSphere to AWS, we take our final backup and then shut down the machine to avoid any misconfiguration or inconsistency. We should perform those same steps here when we are ready to migrate our workload back to vSphere.

Veeam Backup & Replication – External Repository

How do we get that data from the AWS S3 bucket, now that it is stored in our portable data format, the .VBK? That S3 bucket can also be added to our on-premises (or any other) installation of Veeam Backup & Replication.

The External Repository feature was added in 9.5 Update 4 and gives the ability to bring backups from AWS into Veeam Backup & Replication for recovery tasks or additional backup capabilities, such as sending backups to tape media. You can find a little more of that description here.

You will also find below a silent video on how to easily add your external repository to Veeam Backup & Replication.

Once you have added that S3 bucket in Veeam Backup & Replication you will see your backups that you have taken with Veeam Backup for AWS.


Recovery / Migration – Portability Full Circle

From here, as I said, you can do many things: create backup copy jobs and send that data elsewhere, send it to tape, or send it to one of our service providers using Cloud Connect. Or you may wish to use it for recovery, whether of individual files or folders, application items or a full instance.


Now we could use Instant VM Recovery: the ability to mount this backup in our vSphere environment and boot the machine; once it is up, you can confirm all is well and then commit the Storage vMotion back to production storage. Performance will depend on link speed. I did perform this step, but there was some lag in what I was trying to achieve.

The alternative is to backup copy that instance backup to a more local repository and then perform the same task. The above would likely perform better in an actual non-lab environment.


A quick recap: we began with a virtual machine running on-premises in our vSphere environment, and a business decision was made to move it to AWS. We performed a consistent backup and migrated our workload using Direct Restore to EC2, so our Veeam Backup for Office 365 instance was then running in Amazon as an EC2 instance. We then decided that, for other business benefits, it needed to be back on-premises in our vSphere environment, so we used Veeam Backup for AWS (Free) to perform a backup of that EC2 instance and bring it back to vSphere.

We powered on that machine, and our Veeam Backup for Office 365 was able to take a backup and store the data on the connected AWS S3 object storage, the same as it did at each of the other stages of the process.


The closing point I will make is that if you are bringing machines back from the external repository, and you need a certain amount of resources on the recovered machine, make sure you set that appropriately; by default it will not take the same specification it once had. This can also work the other way: if you are recovering a smaller workload, be sure the recovery process is not giving away resources.

I really wanted to show off that portability message: it comes not only from the VBK file format, but from all of Veeam's products, which have this functionality in mind.

]]>
https://vzilla.co.uk/vzilla-blog/how-to-veeam-backup-for-microsoft-office-365-portability-part-2/feed 0
How-To – Veeam Backup for Microsoft Office 365 – Portability Part 1 https://vzilla.co.uk/vzilla-blog/how-to-veeam-backup-for-microsoft-office-365-portability-part-1 https://vzilla.co.uk/vzilla-blog/how-to-veeam-backup-for-microsoft-office-365-portability-part-1#respond Sun, 22 Dec 2019 10:28:00 +0000 https://vzilla.co.uk/?p=1918

How do we back up the backup? Veeam has a solution for that as well. Although today the Veeam Backup for Office 365 product is separate from the flagship Veeam Backup & Replication, they share the same simple, easy approach to boring backup.

Our customers running Veeam Backup for Office 365 are today most likely running the software within their datacentre as a virtual machine, most likely on VMware vSphere.

Because of that scenario, we can leverage Veeam Backup & Replication to perform a backup of that machine, and that opens the door to lots of possibilities.

Veeam Backup & Replication backing up the backup

It is extremely easy to get a backup job up and running in Veeam Backup & Replication. Even if you are not licensed for Veeam Backup & Replication, everything I am showing below can be achieved using the Community Edition; this is the free edition and allows for protecting up to 10 instances.


Portability

OK, very good: we have a backup of our Veeam Backup for Office 365 server. This would also be easy to achieve wherever it resides; if physical, we have our agents, and we have offerings in the public cloud that would achieve a similarly protected machine. I am going to touch on that aspect later.

What I really want to talk about now is the major feature released in Veeam Backup for Office 365 v4: backups can now be sent directly to object storage.


What this means is that our Veeam Backup for Office 365 server is now quite minimal: it is a Windows machine that no longer requires the space to act as a repository. My example is a typical 50GB Windows Server, of which the operating system and applications use just over 20GB. We still have to be mindful of the metadata and cache files that are required to be stored on the Veeam Backup for Office 365 server.


This opens the door to many flexible options for where this machine can now reside, especially when we are using public cloud object storage as our backup target.

Because we are now protecting this machine with Veeam Backup & Replication, we open the door to the portability and mobility features built into the software, even with the Community Edition (FREE).

Prepping for Migration

Before we get on with moving our server, we need to make sure we follow the correct steps; if we do not, we could waste a lot of time finding that something is not quite right.

  1. To ensure consistency, perform a final backup on your Veeam Backup for Office 365 server.
  2. Once the job is complete, you can safely power down that server.
  3. Perform a Veeam Backup & Replication incremental backup of your Veeam Backup for Office 365 server whilst the server is powered down.

Direct Restore to AWS EC2

Should the business require it, or for performance or migration reasons, we now have the ability to migrate our Veeam Backup for Office 365 server somewhere else. I am using AWS EC2 as my option, but this could also be Microsoft Azure, as well as other on-premises options in vSphere, Hyper-V or Nutanix AHV.


We run through this wizard to get our machine into its AWS EC2 state. Depending on the options you selected, it may or may not have powered on once complete.


When you are ready you can now power this server on; again, depending on the configuration, you will either have a public IP to connect to or a private way of connecting to the machine's private IP.


Portability Level: Complete

We have now moved our Veeam Backup for Office 365 server to an AWS EC2 instance. To complete the level, we need to get in and confirm all is well; if we performed the clean shutdown, then when we open the console we can confirm all components are in a good state.

You can then wait for the schedule to start or you can run a manual backup job.


OK, so we managed to get our Veeam Backup for Office 365 machine from our vSphere environment into AWS, but how do we get it back? That would show true portability. Well, we have an answer to that too.

The next part of the series will show how we protect the machine in AWS and get that back to our vSphere environment.

]]>
https://vzilla.co.uk/vzilla-blog/how-to-veeam-backup-for-microsoft-office-365-portability-part-1/feed 0
Veeam Backup for AWS FREE GA https://vzilla.co.uk/vzilla-blog/veeam-backup-for-aws-free-ga https://vzilla.co.uk/vzilla-blog/veeam-backup-for-aws-free-ga#comments Tue, 03 Dec 2019 17:22:48 +0000 https://vzilla.co.uk/?p=1787 At AWS re:Invent this week Veeam announced and released the free version of Veeam Backup for AWS. The free version is only the start; expect to see additional versions later down the line, in particular where the product integrates with Veeam Backup & Replication for those customers that have a hybrid cloud approach.

What is Veeam Backup for AWS?

Veeam Backup for AWS Free Edition and subsequent versions are available within the AWS Marketplace.


The FREE edition allows you to protect 10 Amazon EC2 instances using native EBS snapshots and then tier those snapshots to an Amazon S3 repository.

Within the S3 repository, these snapshots are stored in the portable data format that Veeam has had for a while. This allows the Veeam Backup & Replication External Repository feature to be leveraged, enabling further data protection or other tasks such as migrations or on-premises data recovery.

As you would expect, the offering also allows you to recover those EC2 instances, not only back to where they initially resided but also across accounts and even across regions, as well as providing file-level recovery for a more granular option.

Another cool feature is the ability to see a level of cloud cost: when you create your policies through the wizard-driven approach, you start to see some cost forecasting so you can make better decisions about your cloud cost consumption.

Instances, Policies, Workers & Protected Data

Those familiar with Veeam will notice a different approach to some of the key functions and naming; while you can liken these new terms to those found in Veeam Backup & Replication, they have some differences.

Before we go through these functions in more detail: once you have deployed your Veeam Backup for AWS instance and authenticated, you will see the following configuration walkthrough.

image 1

First of all, to access your existing or future Amazon EC2 instances, we need to add our AWS IAM role and authenticate against it. Workers are next, and we will cover them in more detail below. Finally, as I mentioned previously, add a repository: this is the AWS S3 bucket where we can store our Veeam portable backup file format, both for long-term retention and for the ability to leverage it with Veeam Backup & Replication.

Finally, we can then create our policy; again, we will cover this shortly.

Instances

The Instances tab is where we can see all the Amazon EC2 instances that the added IAM account has visibility of. This screen gives you some good initial information about your instances: disk size, instance size, region and when the last backup was performed. From here you can select your instances and create ad hoc snapshots of them. There is also the ability to export this as a CSV or XML file, which can be found on most of the screens within the product.


Policies

Those familiar with Veeam Backup & Replication will recognise policies as something more commonly known as backup jobs; however, even in the Veeam Backup & Replication world we are seeing policies enter the fold, with the CDP policy coming in later releases.

Policies give you the ability to define several requirements when it comes to your cloud data management, with the same very easy-to-use wizard-driven approach that all Veeam customers will be familiar with. First, we give our policy a name and a description.


Next you will need to define your account (you can have several IAM accounts added to your Veeam Backup for AWS instance); we need to choose the one that has the appropriate access to the instances and storage you require this policy to look after.


We then define the regions we wish to protect from; this will depend on your cloud architecture as well as on access and management.


Next we can choose to protect everything in those regions, or we can be granular about what to protect. An awesome feature here is that you can select either by instance or by AWS tag. Tags lend themselves well to the fast-moving pace of cloud instances being spun up and down all the time; the ability to use them means we can protect in a more dynamic fashion. The screen following the resources also enables you to exclude certain resources.

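To illustrate why tag-based selection suits a fast-moving cloud estate, here is a minimal Python sketch of the idea; the tag key `backup-policy` and the instance data are hypothetical, and this is a conceptual model rather than Veeam's actual implementation:

```python
# Hypothetical EC2 inventory; in reality this would come from the AWS API.
instances = [
    {"id": "i-0aa1", "tags": {"backup-policy": "daily", "env": "prod"}},
    {"id": "i-0bb2", "tags": {"env": "dev"}},
    {"id": "i-0cc3", "tags": {"backup-policy": "daily"}},
]

def select_by_tag(inventory, key, value):
    """Return the IDs of every instance carrying the given tag."""
    return [i["id"] for i in inventory if i["tags"].get(key) == value]

# Everything tagged backup-policy=daily falls under the policy,
# however many instances exist when the policy runs.
print(select_by_tag(instances, "backup-policy", "daily"))
```

Any new instance launched with the matching tag is included on the next policy run without the policy itself ever being edited, which is exactly why tags beat selecting individual instances by name.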

We then have the ability to define snapshot settings and backup settings. You may wish to take only snapshots of some workloads, only backups of others, or both. Snapshot settings allow you to define when snapshots will be taken and how many you intend to keep.

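A snapshot retention of this kind is essentially a rolling window: each run adds a new snapshot and prunes whatever falls outside the keep-count. A minimal sketch of that pruning logic, with made-up snapshot IDs and dates:

```python
from datetime import date

def prune(snapshots, keep):
    """Return the snapshots that fall outside the `keep` most recent."""
    ordered = sorted(snapshots, key=lambda s: s["taken"], reverse=True)
    return ordered[keep:]

snaps = [
    {"id": "snap-1", "taken": date(2019, 11, 28)},
    {"id": "snap-2", "taken": date(2019, 11, 29)},
    {"id": "snap-3", "taken": date(2019, 11, 30)},
    {"id": "snap-4", "taken": date(2019, 12, 1)},
]

# With a keep-count of 3, only the oldest snapshot is due for deletion.
print([s["id"] for s in prune(snaps, keep=3)])
```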

Backup settings are where we define the AWS S3 bucket in which we wish to store those backups; this also plays the part of making that data visible within Veeam Backup & Replication if you wish to see it there. You also have the same retention setting to define here.


A unique feature built into Veeam Backup for AWS Free Edition (and obviously included across other versions) is the ability to estimate the cost of taking backups and storing the retention you have defined. More information about that cost estimation can be found here in a Veeam KB article.

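To give a feel for what such a forecast involves, here is a back-of-the-envelope sketch in Python. The per-GB prices and the 10% change rate are placeholder assumptions, not AWS's actual rates, and Veeam's estimator is certainly more sophisticated than this:

```python
def estimate_monthly_cost(source_gb, snapshots_kept, backups_kept,
                          snapshot_price_gb=0.05, s3_price_gb=0.023,
                          change_rate=0.1):
    """Naive monthly storage cost forecast for a snapshot + backup policy.

    Assumes each retained point beyond the first stores only the
    changed blocks (change_rate of the source size). Prices per GB
    per month are illustrative placeholders, not real AWS rates.
    """
    snap_gb = source_gb + source_gb * change_rate * max(snapshots_kept - 1, 0)
    backup_gb = source_gb + source_gb * change_rate * max(backups_kept - 1, 0)
    return round(snap_gb * snapshot_price_gb + backup_gb * s3_price_gb, 2)

# A 100GB instance with 7 snapshots and 14 S3 backups retained.
print(estimate_monthly_cost(100, snapshots_kept=7, backups_kept=14))
```

Even a naive model like this makes the trade-off visible: lengthening retention grows the S3 side of the bill far more slowly than keeping extra EBS snapshots, which is the kind of decision the built-in forecast helps you make.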

Finally, the settings allow you to define the number of retries and also notification settings.


The summary screen gives a nice breakdown of everything that you have configured through the policy wizard.

Workers

The workers are configured during the initial configuration and setup of Veeam Backup for AWS. Those familiar with Veeam Backup & Replication could liken these worker nodes to the Veeam backup proxy component within VBR.

The worker is a non-persistent, Linux-based instance that is dynamically deployed when data needs to be transferred; it is used for both backup and recovery. When the policy completes, the workers are shut down and destroyed. The nature and size of these workers mean they can be deployed extremely fast, and they are deployed dynamically in the regions where they are required. Should the backup workload require it, multiple workers can and will be deployed in each region and then dynamically removed.


Protected Data

The final screen I wanted to share in this walkthrough is Protected Data. We have created our policies and started protecting our Amazon EC2 instances; this view allows you to see the associated restore points along with some other information.


I think that covers the basics for now, and expect to see a number of other posts over the coming weeks and months. I am super excited about this release: even as a v1 it really enables our customers to protect those workloads in AWS, and the Veeam Availability Platform is expanding at great speed. It is a super exciting place to be at the moment.

]]>
https://vzilla.co.uk/vzilla-blog/veeam-backup-for-aws-free-ga/feed 3