Veeam – vZilla (https://vzilla.co.uk) – One Step into Kubernetes and Cloud Native at a time, not forgetting the world before

Veeam Kasten: ARM Support (Raspberry PI – How to)
https://vzilla.co.uk/vzilla-blog/veeam-kasten-arm-support-raspberry-pi-how-to
Sun, 22 Dec 2024 22:16:29 +0000

This has been on my product bucket list for a while; in fact, the initial feature request went in on 9 September 2021. My reasons then were not sales orientated: I was seeing the Kubernetes community using the trusty Raspberry Pi as part of a Kubernetes cluster at home.

In my eyes, supporting this architecture would open the door for home users, technologists and the community to a trusted way of protecting their learning environments at home.

Here we are, three years on, and we have the support.


I have a single-node K3s cluster running on a single Raspberry Pi 4 with 4GB of memory, and I had to make some changes to get things up and running.


I chose K3s due to its lightweight approach, and because I was limited to this one box for now; the others are elsewhere in the house serving as print servers and other useful things.

I actually started with minikube on the Pi using some nightly builds, as it is a very fast way to rinse and repeat things, but it consumed too many resources.

As Veeam Kasten for Kubernetes is focused on protecting, moving and restoring your Kubernetes applications and data, I also need a layer of storage to play with. The CSI hostpath driver is quite easy to deploy and mimics any other CSI driver in a single-node cluster. With this in mind, we also created a StorageClass and a VolumeSnapshotClass.
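For reference, here is a minimal sketch of what that pair of objects can look like. It assumes the CSI hostpath driver's default provisioner name (hostpath.csi.k8s.io); the object names are illustrative, and the annotation is what tells Kasten which VolumeSnapshotClass to use.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
  annotations:
    # Marks this class as the one Kasten should use for CSI snapshots
    k10.kasten.io/is-snapshot-class: "true"
driver: hostpath.csi.k8s.io
deletionPolicy: Delete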


I am not going to repeat the steps as they can be found here.

Deploying Veeam Kasten

With the above Kubernetes storage foundations in place we can now get Kasten deployed and working on our single node cluster.

We will start this process with a script that runs a primer on your cluster to check that you have met the requirements, that storage classes are present, and whether a CSI provisioner exists. We run the following command on our system (this is the same process for any deployment of Kasten; air-gap methods can also be found in the documentation).


curl https://docs.kasten.io/tools/k10_primer.sh | bash

At this point you should have Helm and everything else pre-installed and available for use.

As of today, the process to get things installed is the same as with any x86 or IBM Power based cluster deployment of Kasten, and can be as simple as the command below, although you will likely want to check the documentation.


helm install k10 kasten/k10 --namespace=kasten-io --create-namespace
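One note: the command above assumes the Kasten chart repository is already known to Helm. If it is not, adding it is a one-time step, using the standard chart location:

# Add the Kasten Helm repository and refresh the local index
helm repo add kasten https://charts.kasten.io/
helm repo update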

In an ideal world all pods will come up and be running, and this might be the case on your cluster or single node, depending on resources. Within my cluster I had also deployed the Bitnami Postgres chart, so resources were low. But in an ideal world you have this.


I did not… so I had to make some modifications… I will state here that this is not supported, but then I don’t think Raspberry PI deployments on a single node are something we will have to deal with either. I also believe that resources are going to play a crucial part later on when we come to protecting some data.

My gateway pod did not have enough memory to get up and running, so I simply modified the deployment and made some reductions to get everything into a running state.

Backing up

In the demo below, I have created a simple policy that is considerate of local storage space, keeping only a couple of snapshots for test and demo purposes.

My Deployment modification


    resources:
      limits:
        cpu: "1"
        memory: 100Mi
      requests:
        cpu: 200m
        memory: 100Mi

By default, the gateway deployment is:


    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 200m
        memory: 300Mi
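For what it is worth, a quick way to apply the same reduction without hand-editing the manifest is kubectl set resources; a sketch, assuming the default deployment name of gateway in the kasten-io namespace:

# Reduce the gateway memory footprint and wait for the rollout to settle
kubectl -n kasten-io set resources deployment gateway --requests=cpu=200m,memory=100Mi --limits=cpu=1,memory=100Mi
kubectl -n kasten-io rollout status deployment gateway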
Veeam Data Platform: 12.3 Release Summary (a dot release)
https://vzilla.co.uk/vzilla-blog/veeam-data-platform-12-3-release-summary-a-dot-release
Tue, 03 Dec 2024 13:06:22 +0000

Just another dot release at the end of a busy year… the second dot release this year, and as I review the 12.3 what’s new document for Veeam Backup & Replication alone (so minus Veeam ONE), we are looking at 14 pages!

I want this post to be a quick look at some of the features and hopefully I can get into some of these areas in more dedicated posts.

Platform Support (Windows Server 2025)

It’s a very standard thing for Veeam to include platform support updates in these releases. Microsoft Ignite happened only a few weeks back, where Windows Server 2025 was announced and released.

This provides the ability for Veeam to be installed on Windows Server 2025, but also to protect this operating system, including Hyper-V 2025 and SCVMM for those virtual machine environments.

Cloudy Authentication Protection (Entra ID)

Probably the biggest ticket item in this release is the ability to protect, restore and compare Microsoft Entra ID. Entra ID is fast becoming the de facto authentication engine across the industry.

It’s not just for Azure and Microsoft 365 workloads; it’s being adopted across multiple clouds and applications.

This is one of those big-ticket items that I cannot do justice here in this summary post.


Hypervisor Hunger Games – Season 2 (Nutanix AHV)

It’s been another year of hypervisor hunger games, and we have customers moving to the cloud, sticking with their current licensing dilemma, moving to another hypervisor, or even considering virtual machines on Kubernetes.

We have had the ability to protect Nutanix AHV VMs for a while now; I wrote about this when we first launched the capability. In 12.3 we are releasing the ability to provide application consistency, in an experimental mode.


Database Support

Having been over on the cloud and cloud-native train for a few years, what struck me on coming back was the database support, and the breadth with which Veeam had continued to innovate and provide more options when it comes to protecting the critical databases in your environments.

One of the first areas I explored was the integration with Microsoft SQL Server via the enterprise application plug-in. This goes straight into SQL Server Management Studio, giving the DBA the freedom to still control their backups while putting some control and visibility in the hands of the data protection and security teams. 12.3 brings the ability to use SSMS version 20.

There were also some more version updates as per below:

  • MongoDB 8
  • Postgres 17
  • Oracle 23ai
  • SAP HANA – SLES 15 SP6 and RHEL 8.10 support

If you have not had the chance to read my posts covering some of these database protection areas in more detail, here are some of them.

MongoDB ReplicaSets + Pac-Man – Mission Critical High Scores

The SQL continues…. Automating the deployment of the homelab


Agent Updates

Another area that always gets a list of new features in every release is agent-based protection. I think we do a very good job of talking about protecting Windows and Linux physical servers, and also the hypervisors we do not have native support for, but we have a handful of other agents that are covered in this release as well.

I was really impressed with the latest update of the agent for macOS; my work laptop is a Mac, so I need a good way to protect things here. The 12.3 release enables me to move to macOS 15 (Sequoia) and still get things protected.

The other noticeable addition is the Veeam Agent for Linux being able to support the IBM Power architecture, something that has been an ask in the Kubernetes landscape as well.


Cyber, Cyber, Cyber

I think it’s also fair to say that Veeam is not a security company, but over the last 18 months we have had a lot of smart people focused on data security, both left and right of the bang.

We have always been hot on the right of bang, the remediation of workloads and data when bad things happen, but these last few releases have focused on left of bang, regarding prevention.

IoCs (Indicators of Compromise), also known as the hacker’s toolbox, could be the same tools we have all been using for years, but in the wrong hands they provide the ability to extract data out of a network. My favourite example here is FileZilla: we have likely all used it, but if it is on your mission-critical servers then you potentially have a concern, or an indicator that something is afoot. This feature in VBR brings the Recon Scanner from Coveware by Veeam into the backup phase.

Veeam Threat Hunter is another area I want to touch on. This is an evolution of the malware and YARA scanning features added in previous versions; Threat Hunter provides the ability to use the most up-to-date signatures from a built-in virus scanning tool set. Have a read of the what’s new to get a better understanding of how this whole feature complements the story so far.


Veeam Data Cloud Vault

The first-party backup target, hosted by Veeam since early 2024, now has an easy route within Veeam Backup & Replication to be added as a backup repository, both for your primary backups and for your longer-term retention.

I am very excited to see the overall Veeam Data Cloud story in 2025 continue to grow.


Veeam Intelligence

It would not be a piece of content in 2024 without a mention of AI, but this is not your 2023 AI chatbot: it is a way to interact with your backup resources and gain valuable insight into what’s going on, without having to navigate through all the jobs across potentially many different servers.

This is more focused on the data that Veeam ONE can gather across your environment; be sure to look at that what’s new as it is packed with new stuff as well.

NAS Backup

Another area of my focus a few years back was bringing our NAS backup feature to market. I have had a blog about the SnapDiff integration up for those 4 years, and we can finally go back and update things, as this is now available after some communications with NetApp on the licensing.

Another topic close to my heart is the ability to protect FSx in AWS; in particular this is NetApp FSx, as the other FSx offerings in AWS can already be protected with the Veeam Backup for AWS appliance. This completes the FSx list, though.

It will be good to cover some NetApp & Veeam content again…


REST API

Finally, I wanted to touch on APIs. It’s been a journey when it comes to APIs; public APIs just were not a thing 18 years ago when Veeam came to market, and with each release we are striving towards more and more public API functionality.

In this release the notable candidates that I am intrigued to investigate are the Entra ID APIs and the Data Integration API.

VMs on Kubernetes protected unofficially by Veeam*
https://vzilla.co.uk/vzilla-blog/vms-on-kubernetes-protected-unofficially-by-veeam
Fri, 29 Nov 2024 19:20:19 +0000

*As the title suggests, in this post we are going to be talking about the upstream project KubeVirt. As a standalone project release, KubeVirt and the protection of its VMs is not supported; today this is only supported for Red Hat OpenShift Virtualisation (OCP-V) and Harvester from SUSE. This is because of all the varying hardware KubeVirt can be deployed on.

With that caveat out of the way: in a home lab we are able to tinker around with whatever we want. I will also clarify that I am using the 5 nodes that we have available for the community to protect these virtual machines.

We are going to cover getting KubeVirt deployed on my bare-metal Talos Kubernetes cluster, getting a virtual machine up and running, and then protecting said machine.

One pre-req for this is to make sure you follow this guide, ensuring you have virtualisation enabled and a bridge network defined in the Talos configuration.

Here is my configuration repository for both my virtual cluster and bare metal. I will say though that this documentation was really handy in finding the way. Remember these commands are based on my environment.

Installing virtctl

We will start with virtctl, a command-line utility for managing KubeVirt virtual machines. It extends kubectl functionality to include VM-specific operations like starting, stopping, accessing consoles, and live migration. Designed to streamline VM lifecycle management within Kubernetes, it simplifies tasks that would otherwise require complex YAML configurations or direct API calls.


export VERSION=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64

A warning here: be sure to check the copy and paste, as it broke on mine.
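To finish the virtctl install, the downloaded binary needs to be made executable and moved onto your PATH; a sketch, assuming the amd64 build downloaded above (adjust the filename for other architectures):

chmod +x virtctl-${VERSION}-linux-amd64
sudo mv virtctl-${VERSION}-linux-amd64 /usr/local/bin/virtctl
# Quick sanity check
virtctl version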

Deploying KubeVirt

Keeping things simple, we will now deploy KubeVirt via YAML manifests, as per the Talos docs linked above.


export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml

Now that we have the operator installed in our bare-metal cluster, we need to apply the custom resource. I have modified this slightly from the Talos example.


apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - LiveMigration
        - NetworkBindingPlugins
  certificateRotateStrategy: {}
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  workloadUpdateStrategy:
    workloadUpdateMethods:
      - LiveMigrate
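Once this custom resource has been applied (the deployment step is shown below), it can take a few minutes for the virt components to roll out. A quick way to block until KubeVirt reports ready, as a sketch:

# Wait for the KubeVirt CR to report the Available condition
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m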

Finally, before we get to deploying a VM, we are going to deploy CDI (Containerised Data Importer), which is needed to import disk images. I again modified mine here to suit the storage classes I have available.


apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  config:
    scratchSpaceStorageClass: ceph-block
    podResourceRequirements:
      requests:
        cpu: "100m"
        memory: "60M"
      limits:
        cpu: "750m"
        memory: "2Gi"

All of these will then be deployed using the following, which you can also see in the demo below:

kubectl create -f <filename>

Create a VM

Next up we can create our virtual machine. I am again going to copy, but slightly modify, the example that we have from Talos. Here is my VM YAML manifest.

Note: the SSH configuration is redacted, and you would want to add your own here.


apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: fedora-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-vm
      annotations:
        kubevirt.io/allow-pod-bridge-network-live-migration: "true"
    spec:
      evictionStrategy: LiveMigrate
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4G
        devices:
          disks:
            - name: fedora-vm-pvc
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: podnet
              masquerade: {}
        networks:
          - name: podnet
            pod: {}
        volumes:
          - name: fedora-vm-pvc
            persistentVolumeClaim:
              claimName: fedora-vm-pvc
          - name: cloudinitdisk
            cloudInitNoCloud:
              networkData: |
                network:
                  version: 1
                  config:
                    - type: physical
                      name: eth0
                      subnets:
                        - type: dhcp
              userData: |-
                #cloud-config
                users:
                  - name: cloud-user
                    ssh_authorized_keys:
                      - ssh-rsa <REDACTED>
                    sudo: ['ALL=(ALL) NOPASSWD:ALL']
                    groups: sudo
                    shell: /bin/bash
                runcmd:
                  - "sudo touch /root/installed"
                  - "sudo dnf update"
                  - "sudo dnf install httpd fastfetch -y"
                  - "sudo systemctl daemon-reload"
                  - "sudo systemctl enable httpd"
                  - "sudo systemctl start --no-block httpd"

  dataVolumeTemplates:
  - metadata:
      name: fedora-vm-pvc
      namespace: fedora-vm
    spec:
      storage:
        resources:
          requests:
            storage: 35Gi
        accessModes:
          - ReadWriteMany
        storageClassName: "ceph-filesystem"
      source:
        http:
          url: "https://fedora.mirror.wearetriple.com/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2"

The final piece to this puzzle, which I have not mentioned, is that I am using Cilium as my CNI, and I am also using it to provide some IP addresses accessible from my LAN. I created a service so that I could SSH to the newly created VM.


apiVersion: v1
kind: Service
metadata:
  labels:
    kubevirt.io/vm: fedora-vm
  name: fedora-vm
  namespace: fedora-vm
spec:
  ipFamilyPolicy: PreferDualStack
  externalTrafficPolicy: Local
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
  - name: httpd
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    kubevirt.io/vm: fedora-vm
  type: LoadBalancer

Below is a demo; you will notice that I had to remove a previous known host with the same IP from my known_hosts file.
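For reference, the stale entry can be cleared with ssh-keygen; the address below is hypothetical, so substitute the LoadBalancer IP that Cilium assigned to your service:

# Remove the old host key entry for the reused IP
ssh-keygen -R 192.168.1.240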

Some other interesting commands using virtctl would be the following; I am going to let you guess what each does:


virtctl start fedora-vm -n fedora-vm

virtctl console fedora-vm -n fedora-vm

virtctl stop fedora-vm -n fedora-vm

Protect with Veeam Kasten

Now that we have a working machine running on our Kubernetes cluster, we should probably back it up and protect it. It is a similar process to the last post covering protecting your stateful workloads within Kubernetes: we can create a policy to protect this VM and everything in the namespace.
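For those who prefer YAML over the dashboard, a Kasten policy for this namespace can look roughly like the sketch below. The name, schedule and retention are illustrative; the selector matches how Kasten identifies applications by namespace:

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: fedora-vm-backup
  namespace: kasten-io
spec:
  comment: Protect the fedora-vm namespace, including the KubeVirt VM
  frequency: '@daily'
  retention:
    daily: 2
  actions:
    - action: backup
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: fedora-vm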

Wrap Up…

I got things protected with Kasten, but I need to go back and check a few things are correct in regard to the Ceph filesystem StorageClass, and make sure I am protecting the VMs in the correct way moving forward.

This was really to focus on getting virtual machines up and running in my lab at home, to get to grips with virtualisation on Kubernetes. I want to get another post done on Kanister and the specifics around application consistency, and then come back to a more relevant workload on these VMs alongside your containerised workloads.

Updating your Veeam Backup for Microsoft Office 365 to v5
https://vzilla.co.uk/vzilla-blog/updating-your-veeam-backup-for-microsoft-office-365-to-v5
Thu, 03 Dec 2020 20:30:01 +0000

Yesterday I decided to walk through, and record for the first time, the upgrade process from Veeam Backup for Microsoft Office 365 v4 to v5, to take advantage of all the good stuff in v5 around Microsoft Teams and some proxy enhancements. You can catch that demo below, along with the GA blog post that also went live on the day of GA.

One of the areas that I stumbled upon was having to enable something during the process to take advantage of the new Teams functionality above, so I wanted to document that also.

Firstly, head on over to the download link posted in the blog linked above; that post will also give you a short overview of what you can expect in v5.

Once you have that downloaded, you are good to close the console and begin the upgrade process. My advice here is to make sure all jobs are finished and nothing is scheduled for the next 10 minutes, maybe longer depending on the size of the environment.

Run through the pretty simple next, next upgrade process.

Then open the console and check you have the correct version by heading here.


By selecting the About option, you will then see the following screen showing your build number and version.


This shows we have successfully updated our server to v5 and we can start protecting those Microsoft Teams objects.

But before we can do that, especially if you are an existing Veeam Backup for Microsoft Office 365 user, you will need to enable this option on the organisation.


What you are going to find is that the Microsoft Teams checkbox is unselected; if you wish to protect this within the organisation, select that checkbox.


If you are a greenfield, first-time installation of Veeam Backup for Microsoft Office 365 and you are starting with v5 or newer, then when you add your organisation it is going to look like this.


You can see there that this is automatically selected.

Veeam Backup for Microsoft Office 365 v5 is GA
https://vzilla.co.uk/vzilla-blog/veeam-backup-for-microsoft-office-365-v5-is-ga
Thu, 03 Dec 2020 16:28:08 +0000

In a year where the world has been reliant on remote working and collaboration tools like Microsoft Office 365, the emphasis in this space has also grown on whether and how we protect that data. Our roadmap for Veeam Backup for Office 365 had always planned a better way to protect Microsoft Teams, even before the surge of companies and users switching to remote working during 2020.

v5 also makes things much faster when it comes to backing up the data, and more importantly it improves the granularity and speed of recovery back into your Office 365 environment.

Microsoft Teams

Microsoft Teams data was already being protected when Veeam Backup for Microsoft Office 365 protected your SharePoint Online environment; however, recovery was not as nice and granular. There was a great post back in 2019 by Veeam Vanguard Falko Banaszek talking about this way of protecting and restoring Teams data.

Now with v5 we have a much better way to not only capture the Microsoft Teams data but also a much faster way to recover granular items with the new Veeam Explorer for Microsoft Teams functionality.

In terms of what granular levels of recovery we can get to, this includes your team channels, settings and permissions, as well as the files and data stored within Microsoft Teams. There is also search functionality, able to search across chats and files to find the objects you require for recovery. Then, for the final step of the restore, you can either grab individual files, or grab multiple files and chats, and restore those back to Microsoft Office 365.

The one thing not possible is backing up those GIFs but I feel the internet has a big repository of these some place already.

Proxy placement and deployment

With every Veeam release there is always a focus on performance and scalability, and this v5 release is no different. There is the ability to leverage concurrent tasks with SharePoint backup, making those backups faster; and around proxies and scalability, the number of supported proxies for Veeam Backup for Microsoft Office 365 has been increased by something silly like 5 times.

Prior to this release, the proxies that deal with the movement of data between Microsoft Office 365 and the repository location had to be joined to a trusted domain, the same as the Veeam Backup & Replication server. For most cases this is fine, but there are some environments where this is not possible or wanted. V5 brings the ability to deploy those proxies in a non-domain-joined fashion. Not only that, but the Veeam Backup for Microsoft Office 365 management server also does not need to be joined to a domain. This really does enable complete flexibility and scalability in those environments that require it.

Cloud Field Day – Demo Time

For those that know me and the Veeam Product Strategy team, you know we don’t leave home without the ability to perform a live demo, especially when it comes to Cloud Field Day and big events like that. Back earlier in 2020, when we were able to do a session there, we decided to show off Veeam Backup for Microsoft Office 365 in general, while also highlighting the features and functionality that have now arrived in v5 of the product.

You can see that demo below.

Free

A lot of us will be running our own personal Office 365, and for that we still have you covered with our Community Edition. This will enable you to protect your Office 365 data to either disk or directly to object storage. You can find out more about that here.


Release Notes

There is so much more than what I have mentioned here in this post, but as always I think we do a great job of noting down all of the what’s new features and functionality in the new releases here.


Download

You can download the update or the whole install file by using this link.


I will also be recording the update process from my current v4 version of Veeam Backup for Microsoft Office 365 to this latest version, just to highlight some of these new features, but also how super simple and easy the upgrade process is. You will find it here on my YouTube channel, alongside the existing Veeam Backup for Microsoft Office 365 demos.

Automated deployment of Veeam in Microsoft Azure – Part 2
https://vzilla.co.uk/vzilla-blog/automated-deployment-of-veeam-in-microsoft-azure-part-2
Thu, 27 Aug 2020 08:07:59 +0000

The first part of this series was aimed at getting a Veeam Backup & Replication Azure VM up and running from the Azure Marketplace using Azure PowerShell. A really quick and easy way to spin the system up.

The use case we are talking about is the ability to recover your backups from maybe on premises up into Microsoft Azure.

I was asked, “what about AWS?”, and yes, of course: if you are using the capacity tier option within Veeam Backup & Replication on premises, and you are using the copy mode function to land a copy of your backups on AWS S3, IBM Cloud, or any S3-compatible storage, then there could be similar synergies in doing this in AWS. Why I chose Microsoft Azure is simply because there is an Azure Marketplace offering we can take advantage of.

If you would like to see a similar series with AWS, then let me know either on Twitter or in the comments below. It would involve a different way of automating the provisioning of a Windows OS and the installation of Veeam Backup & Replication, but not too hard, as we already have this functionality using Terraform & CHEF, currently only for vSphere; the code can be changed to work with AWS and really any platform that requires this functionality.

Veeam Configuration

As I said, if you followed Part 1 of this series then you will now have your Veeam server running in Azure with no Veeam configuration.

In order for us to automate the direct restore process, we need to provide some details in the script, which I will share in stages and in full at the end of the post. At a high level we need to:

Add Azure Storage Account
Import Backups
Add Azure Compute Account

Then we will take the appropriate backups and run Direct Restore to Microsoft Azure, leaving them in a converted state ready to be powered on; or you can choose to power them on as part of this script process.

Firstly, we need to add the Veeam snap-in and connect to the local Veeam Backup & Replication server. Depending on where you run this script, you will need to change the localhost below to the relevant DNS name or IP address. My recommendation is that this is done on the server itself, but I am exploring how this PowerShell script could be hosted on your network (not publicly) and used that way to fill in the secure details.


Add-PSSnapin VeeamPSSnapin

#Connects to Veeam backup server.
Connect-VBRServer -server "localhost"

Next we will add the Microsoft Azure Compute account; this command will prompt you to log in and authenticate to Microsoft Azure. I use MFA, so this was the only way I could find to achieve this.


#Add Azure Compute Account

Add-VBRAzureAccount -Region Global

Next we will add the storage account. You will need to update the script with the requirements below.

Access Key – this will be based on a storage account that you have already created and you will need the long access key for authentication.

Azure Blob Account – this is the name of the storage blob account you have previously created. This is the same blob account and process that you used for adding Microsoft Azure Blob Storage to Veeam Backup & Replication on premises.


#Add Azure Storage Account

$accesskey = "ADD AZURE ACCESS KEY"
 
$blob1 = Add-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT" -SharedKey $accesskey

Now we need to add our capacity tier; this is where you have been sending those backups.


#Add Capacity Tier (Microsoft Azure Blob Storage) Repository

$account = Get-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT"
 
$connect = Connect-VBRAzureBlobService -Account $account -RegionType Global -ServiceType CapacityTier

$container = Get-VBRAzureBlobContainer -Connection $connect | where {$_.name -eq 'AZURECONTAINER'}

$folder = Get-VBRAzureBlobFolder -Container $container -Connection $connect

The next part of adding the capacity tier is important, and I have also noted it in the script: this repository needs to be added with exactly the same name that it has in your production Veeam Backup & Replication server.


#The name needs to be exactly the same as you find in your production Veeam Backup & Replication server
$repositoryname = "REPOSITORYNAME"

Add-VBRAzureBlobRepository -AzureBlobFolder $folder -Connection $connect -Name $repositoryname

Next we need to import and rescan those backups that are in the Azure Blob Storage.


#Import backups from Capacity Tier Repository

$repository = Get-VBRObjectStorageRepository -Name $repositoryname

Mount-VBRObjectStorageRepository -Repository $repository
Rescan-VBREntity -AllRepositories

Now if you are using encryption then you will need the following commands instead of the one above.


#if you have used an encryption key then configure this section

$key = Get-VBREncryptionKey -Description "Object Storage Key"
Mount-VBRObjectStorageRepository -Repository $repository -EncryptionKey $key

At this point, if we were to jump into the Veeam Backup & Replication console, we would see our storage and compute accounts added to the Cloud Credential Manager, the Microsoft Azure Blob Storage container added to our backup repositories, and, on the home screen, the object storage (imported), which is where you will also see the backups that reside there.

Next we need to create the variables in order to start our Direct Restore scenarios to Microsoft Azure.

A lot of the variables are quite self-explanatory, but as a brief overview you will need to change the following to suit your backups.

VMBACKUPNAME = Which VM is it you want to restore

AZURECOMPUTEACCOUNT = this is the Azure Compute Account you added to Veeam Backup & Replication at the beginning of the script.

SUBSCRIPTIONNAME = you may have multiple subscriptions on one Azure compute account pick the appropriate one here.

STORAGEACCOUNTFORRESTOREDMACHINE = we are going to be converting that backup into this Azure storage account

REGION = Which Azure region would you like this to be restored to

$vmsize = this is where you will define what size of Azure VM you wish to use. In this example Basic_A0 is being used; you can change this to suit your workload.

AZURENETWORK = define the Azure Virtual Network you wish this converted machine to live.

SUBNET = which subnet the machine should live in

AZURERESOURCEGROUP = the Azure resource group you wish the VM to live in

NAMEOFRESTOREDMACHINEINAZURE = maybe a different naming convention, but this is what you wish to call your machine in Azure.


 #This next section will enable you to automate the Direct Restore to Microsoft Azure

$restorepoint = Get-VBRRestorePoint -Name "VMBACKUPNAME" | Sort-Object $_.creationtime -Descending | Select -First 1

$account = Get-VBRAzureAccount -Type ResourceManager -Name "AZURECOMPUTEACCOUNT"

$subscription = Get-VBRAzureSubscription -Account $account -name "SUBSCRIPTIONNAME"

$storageaccount = Get-VBRAzureStorageAccount -Subscription $subscription -Name "STORAGEACCOUNTFORRESTOREDMACHINE"

$location = Get-VBRAzureLocation -Subscription $subscription -Name "REGION"

$vmsize = Get-VBRAzureVMSize -Subscription $subscription -Location $location -Name Basic_A0

$network = Get-VBRAzureVirtualNetwork -Subscription $subscription -Name "AZURENETWORK"

$subnet = Get-VBRAzureVirtualNetworkSubnet -Network $network -Name "SUBNET"

$resourcegroup = Get-VBRAzureResourceGroup -Subscription $subscription -Name "AZURERESOURCEGROUP"

$RestoredVMName1 = "NAMEOFRESTOREDMACHINEINAZURE"

Now we have everything added to Veeam Backup & Replication, and we have all the variables for the machines that we wish to convert and recover to Microsoft Azure VMs. Next is to start the restore process.


Start-VBRVMRestoreToAzure -RestorePoint $restorepoint -Subscription $subscription -StorageAccount $storageaccount -VmSize $vmsize -VirtualNetwork $network -VirtualSubnet $subnet -ResourceGroup $resourcegroup -VmName $RestoredVMName1 -Reason "Automated DR to the Cloud Testing"
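If you have more than one machine to bring up, the same variables can be reused in a small loop. A minimal sketch, with the VM names purely illustrative:

# Restore several machines with the same target settings
$vmNames = @("APP01", "DB01")
foreach ($name in $vmNames) {
    # Grab the latest restore point for each machine and kick off the restore
    $rp = Get-VBRRestorePoint -Name $name | Sort-Object $_.creationtime -Descending | Select -First 1
    Start-VBRVMRestoreToAzure -RestorePoint $rp -Subscription $subscription -StorageAccount $storageaccount -VmSize $vmsize -VirtualNetwork $network -VirtualSubnet $subnet -ResourceGroup $resourcegroup -VmName "$name-Restored" -Reason "Automated DR to the Cloud Testing"
}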

The full script can be found here:


#This script will automate the configuration steps of adding the following steps
#Add Azure Compute Account
#Add Azure Storage Account
#Add Capacity Tier (Microsoft Azure Blob Storage) Repository
#Import backups from Capacity Tier Repository
#This will then enable you to perform Direct Restore to Azure the image based backups you require.

Add-PSSnapin VeeamPSSnapin

#Connects to Veeam backup server.
Connect-VBRServer -server "localhost"

#Add Azure Compute Account

#Need to think of a better way to run this as this will close down PowerShell when installing
msiexec.exe /I "C:\Program Files\Veeam\Backup and Replication\Console\azure-powershell.5.1.1.msi"

Add-VBRAzureAccount -Region Global

#Add Azure Storage Account

$accesskey = "ADD AZURE ACCESS KEY"
 
$blob1 = Add-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT" -SharedKey $accesskey

#Add Capacity Tier (Microsoft Azure Blob Storage) Repository

$account = Get-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT"
 
$connect = Connect-VBRAzureBlobService -Account $account -RegionType Global -ServiceType CapacityTier

$container = Get-VBRAzureBlobContainer -Connection $connect | where {$_.name -eq 'AZURECONTAINER'}

$folder = Get-VBRAzureBlobFolder -Container $container -Connection $connect

#The name needs to be exactly the same as you find in your production Veeam Backup & Replication server
$repositoryname = "REPOSITORYNAME"

Add-VBRAzureBlobRepository -AzureBlobFolder $folder -Connection $connect -Name $repositoryname

#Import backups from Capacity Tier Repository

$repository = Get-VBRObjectStorageRepository -Name $repositoryname

Mount-VBRObjectStorageRepository -Repository $repository
Rescan-VBREntity -AllRepositories

#if you have used an encryption key then configure this section

#$key = Get-VBREncryptionKey -Description "Object Storage Key"
#Mount-VBRObjectStorageRepository -Repository $repository -EncryptionKey $key

 #This next section will enable you to automate the Direct Restore to Microsoft Azure

$restorepoint = Get-VBRRestorePoint -Name "VMBACKUPNAME" | Sort-Object $_.creationtime -Descending | Select -First 1

$account = Get-VBRAzureAccount -Type ResourceManager -Name "AZURECOMPUTEACCOUNT"

$subscription = Get-VBRAzureSubscription -Account $account -name "SUBSCRIPTIONNAME"

$storageaccount = Get-VBRAzureStorageAccount -Subscription $subscription -Name "STORAGEACCOUNTFORRESTOREDMACHINE"

$location = Get-VBRAzureLocation -Subscription $subscription -Name "REGION"

$vmsize = Get-VBRAzureVMSize -Subscription $subscription -Location $location -Name Basic_A0

$network = Get-VBRAzureVirtualNetwork -Subscription $subscription -Name "AZURENETWORK"

$subnet = Get-VBRAzureVirtualNetworkSubnet -Network $network -Name "SUBNET"

$resourcegroup = Get-VBRAzureResourceGroup -Subscription $subscription -Name "AZURERESOURCEGROUP"

$RestoredVMName1 = "NAMEOFRESTOREDMACHINEINAZURE"


Start-VBRVMRestoreToAzure -RestorePoint $restorepoint -Subscription $subscription -StorageAccount $storageaccount -VmSize $vmsize -VirtualNetwork $network -VirtualSubnet $subnet -ResourceGroup $resourcegroup -VmName $RestoredVMName1 -Reason "Automated DR to the Cloud Testing"

You will also find the most up-to-date, committed PowerShell script here within the GitHub repository.

Feedback is key on this one, as I would love to make this work better and faster. Comments are welcome below, and you can also get hold of me on Twitter.

Automated deployment of Veeam in Microsoft Azure – Part 1
https://vzilla.co.uk/vzilla-blog/automated-deployment-of-veeam-in-microsoft-azure-part-1
Wed, 26 Aug 2020 15:58:43 +0000

For those that saw this post and the video demo walking through the manual steps to get your instance of Veeam Backup & Replication running in Microsoft Azure: I decided that although that was quick to deploy, it can always be quicker. Following on from this post, we will then look at automating the Veeam configuration, as well as the direct restore functionality, in this instance from Microsoft Azure Blob Storage into Azure VMs.

Installing Azure PowerShell

In order for us to start this automated deployment, we need to install the Azure PowerShell module locally on our machine.

More details of that can be found here.

Run the following code on your system.


if ($PSVersionTable.PSEdition -eq 'Desktop' -and (Get-Module -Name AzureRM -ListAvailable)) {
    Write-Warning -Message ('Az module not installed. Having both the AzureRM and ' +
      'Az modules installed at the same time is not supported.')
} else {
    Install-Module -Name Az -AllowClobber -Scope CurrentUser
}

Select either [Y] Yes or [A] Yes to All, as this is an untrusted repository. You can also change CurrentUser to AllUsers if you wish to install the module for all users on the local machine.
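Once the install completes, it is worth a quick sanity check that the module is visible before carrying on; a sketch:

# Confirm the Az module is installed and list its version
Get-Module -Name Az -ListAvailable | Select-Object Name, Version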

Breaking down the code

This section is going to talk through the steps taken in the code. The way this works is that by taking the code from the GitHub repository, you will be able to modify the variables and begin testing yourself without any actual code changes.

First we need to connect to our Azure account. This will open a web browser to log in to your Azure Portal; if you are using MFA, this will let you authenticate that way also.


# Connect to Azure with a browser sign in token
Connect-AzAccount

Next we want to start defining what, where and how we want this to look in our Azure account. The following should be pretty straightforward to understand:

locName = Azure Location

Publisher Name = Veeam

Offer Name = the particular offering we wish to deploy from the publisher; there are quite a few, so expect to see other options using this method.

SkuName = which product SKU of the offering you wish to use

version = which version of the product


# Set the Marketplace image
$locName="EASTUS"
$pubName="veeam"
$offerName="veeam-backup-replication"
$skuName="veeam-backup-replication-v10"
$version = "10.0.1"

The following are aligned to the environment.

resourcegroup = which resource group you wish to use; this can be an existing resource group or a new name

vmname = what name do you wish your Veeam Backup & Replication server to have within your Azure environment

vmsize = the VM size that will be used; my advice is to pick one of the supported sizes, and this is the default size used for production environments.


# Variables for common values
$resourceGroup = "CadeTestingVBR"
$vmName = "CadeVBR"
$vmSize = "Standard_F4s_v2"

Next we need to agree to the license terms of deploying from the marketplace for this specific VM Image. The following commands will do this.


Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version

$agreementTerms=Get-AzMarketplaceterms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1"

Set-AzMarketplaceTerms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1" -Terms $agreementTerms -Accept

If you wish to review the terms, you can do so by running the following command. Spoiler alert: the command will give you a link to a txt file. To save you the hassle, here is the link from that txt file, where you will find the Veeam EULA – https://www.veeam.com/eula.html


Get-AzMarketplaceTerms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1"

Next we need to start defining how our Veeam Backup & Replication server will look with regard to network configuration, authentication and security.

I also wanted to keep this script following best practice and not containing any usernames or passwords so the first config setting is to gather the username and password for your deployed machine in a secure string.


# Create user object
$cred = Get-Credential -Message "Enter a username and password for the virtual machine."

Create a resource group


# Create a resource group

New-AzResourceGroup -Name $resourceGroup -Location $locname -force

Create a subnet configuration


# Create a subnet configuration
$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name "cadesubvbr" -AddressPrefix 10.0.0.0/24

Create a virtual network


# Create a virtual network
$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNet -AddressPrefix 10.0.0.0/24 -Subnet $subnetConfig

Create a public IP Address


# Create a public IP address and specify a DNS name
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $locName `
  -Name "CadeVBR$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4

Create inbound security group rule for RDP


# Create an inbound network security group rule for port 3389
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name CadeVBRSecurityGroupRuleRDP  -Protocol Tcp `
  -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
  -DestinationPortRange 3389 -Access Allow

Create network security group


# Create a network security group
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNetSecurityGroup -SecurityRules $nsgRuleRDP

Create a virtual network card


# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name CadeVBRNIC -ResourceGroupName $resourceGroup -Location $locName `
  -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

Next we need to define what the virtual machine configuration is going to look like in our environment, using the above settings.


#Create a virtual machine configuration

$vmConfig = New-AzVMConfig -VMName "$vmName" -VMSize $vmSize
$vmConfig = Set-AzVMPlan -VM $vmConfig -Publisher $pubName -Product $offerName -Name $skuName
$vmConfig = Set-AzVMOperatingSystem -Windows -VM $vmConfig -ComputerName $vmName -Credential $cred
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
$vmConfig = Add-AzVMNetworkInterface -Id $nic.Id -VM $vmConfig

Now we have everything we need, we can begin deploying the machine.


# Create a virtual machine
New-AzVM -ResourceGroupName $resourceGroup -Location $locName -VM $vmConfig

If you saw the video demo, you would have seen that the deployment really does not take long at all. I actually think using this method is a little faster; either way, it is less than 5 minutes to deploy a Veeam Backup & Replication server in Microsoft Azure.

Now that we have our machine, there is one thing we want to do to ensure the next stages of configuration run smoothly. Out of the box, Azure PowerShell needs to be installed on the server to be able to use Azure Compute accounts and Direct Restore to Microsoft Azure. The installer is already on the deployed box; going through manually, you would just install that MSI. Instead, in this script we remotely run a PowerShell script from GitHub that will do it for you.


# Start Script installation of Azure PowerShell requirement for adding Azure Compute Account
Set-AzVMCustomScriptExtension -ResourceGroupName $resourceGroup `
    -VMName $vmName `
    -Location $locName `
    -FileUri https://raw.githubusercontent.com/MichaelCade/veeamdr/master/AzurePowerShellInstaller.ps1 `
    -Run 'AzurePowerShellInstaller.ps1' `
    -Name DemoScriptExtension

At this stage the PowerShell installation for me required a reboot, but it is very fast and generally up within 10-15 seconds. So we run the following commands to pause the script, report the public IP, and then start a Windows Remote Desktop session to that IP address.


Start-Sleep -s 15

Write-host "Your public IP address is $($pip.IpAddress)"
mstsc /v:$($pip.IpAddress)

Now, this might seem like a long-winded approach to getting something up and running, but having this combined into one script, with the ability to create all of it on demand, brings a powerful story when it comes to recovering workloads into Microsoft Azure.

The next part of this series will concentrate on a configuration script, where we will configure Veeam Backup & Replication to attach the Microsoft Azure Blob Storage where our backups reside, along with our Azure Compute account; then we can look at how to automate this process end to end, bringing your machines up in Microsoft Azure when you need them, or before you need them.

Here is the complete script:


# Connect to Azure with a browser sign in token
Connect-AzAccount

# Set the Marketplace image
$locName="EASTUS"
$pubName="veeam"
$offerName="veeam-backup-replication"
$skuName="veeam-backup-replication-v10"
$version = "10.0.1"

# Variables for common values
$resourceGroup = "CadeTestingVBR"
$vmName = "CadeVBR"
$vmSize = "Standard_F4s_v2"
$StorageSku = "Premium_LRS"
$StorageName = "cadestorage"

Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version

$agreementTerms=Get-AzMarketplaceterms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1"

Set-AzMarketplaceTerms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1" -Terms $agreementTerms -Accept


# Create user object
$cred = Get-Credential -Message "Enter a username and password for the virtual machine."

# Create a resource group

New-AzResourceGroup -Name $resourceGroup -Location $locname -force

# Create a subnet configuration
$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name "cadesubvbr" -AddressPrefix 10.0.0.0/24

# Create a virtual network
$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNet -AddressPrefix 10.0.0.0/24 -Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $locName `
  -Name "CadeVBR$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4

# Create an inbound network security group rule for port 3389
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name CadeVBRSecurityGroupRuleRDP  -Protocol Tcp `
  -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
  -DestinationPortRange 3389 -Access Allow

# Create a network security group
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNetSecurityGroup -SecurityRules $nsgRuleRDP

# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name CadeVBRNIC -ResourceGroupName $resourceGroup -Location $locName `
  -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

# Create a virtual machine configuration
#vmConfig = New-AzVMConfig -VMName $vmName -VMSize $vmSize | `
#Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
#Set-AzVMSourceImage -VM $vmConfig -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version | `
#Add-AzVMNetworkInterface -Id $nic.Id

#Create a virtual machine configuration

$vmConfig = New-AzVMConfig -VMName "$vmName" -VMSize $vmSize
$vmConfig = Set-AzVMPlan -VM $vmConfig -Publisher $pubName -Product $offerName -Name $skuName
$vmConfig = Set-AzVMOperatingSystem -Windows -VM $vmConfig -ComputerName $vmName -Credential $cred
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
$vmConfig = Add-AzVMNetworkInterface -Id $nic.Id -VM $vmConfig

# Create a virtual machine
New-AzVM -ResourceGroupName $resourceGroup -Location $locName -VM $vmConfig

# Start Script installation of Azure PowerShell requirement for adding Azure Compute Account
Set-AzVMCustomScriptExtension -ResourceGroupName $resourceGroup `
    -VMName $vmName `
    -Location $locName `
    -FileUri https://raw.githubusercontent.com/MichaelCade/veeamdr/master/AzurePowerShellInstaller.ps1 `
    -Run 'AzurePowerShellInstaller.ps1' `
    -Name DemoScriptExtension

Start-Sleep -s 15

Write-host "Your public IP address is $($pip.IpAddress)"
mstsc /v:$($pip.IpAddress)

You can also find this version and updated versions of this script here in my GitHub repository.

Any comments or feedback, either down below, on Twitter, or on GitHub.

Getting your XFS Repository set up for some block cloning
https://vzilla.co.uk/vzilla-blog/getting-your-xfs-repository-set-up-for-some-block-cloning
Thu, 20 Aug 2020 14:00:24 +0000

Way back when Veeam Backup & Replication v10 was released, there were a lot of new features and functionality focused on the Linux ecosystem. These ranged from the ability to leverage Linux proxies in hot-add mode to protect your VMware virtualised environment, to the ability to use NFS repositories. (The latter was possible pre-v10, but it required a middle man: a Linux server to write the data to the NFS share, which was ideal for some smaller NAS devices.) VIX for Linux was another important feature, enabling file-level restores, application-aware processing and more.

The feature we want to talk about in this post, though, is the ability to leverage XFS as a backup repository (not new in v10), integrating with the block cloning technology, or data block sharing feature, of XFS, also known as reflink. It is a similar story to Windows ReFS, about which there has been a lot of content over the last few years.

Using XFS and Block Cloning for a Veeam repository

As I mentioned before, Windows ReFS integration came about a few years ago from a Veeam Backup & Replication point of view, bringing benefits to certain parts of the Veeam backup process. In general, what block cloning technology enables is the ability to perform faster merges as well as space-less synthetic full backups.

Ultimately this will enable faster backup jobs (faster merges) and reduce space consumed when using synthetic full backups.

This post will be a short introduction to XFS on Ubuntu. We have been able to use XFS as a repository for years, running on Ubuntu and other Linux distributions; this post will run through adding a disk to an Ubuntu machine and getting things set up and ready to take advantage of this block cloning technology.

XFS as a file system is available on other distributions of Linux, but the one being used here today is Ubuntu; the requirements can be found in the Veeam Help Centre.

Benefits of Block Clone for Backup Repository

I have already mentioned some of the benefits in previous sections, but I think the visual concept of these benefits is also worth seeing. The YouTube demo below is based on Windows ReFS, but the concept is the same for those fast merges and space-less synthetic full backups.

https://youtu.be/NndMBCDPBDY

More details from a Windows ReFS point of view are explained in this blog post.

Preparing your Ubuntu XFS repository

For the purposes of the demo I am using a virtual machine; I am going to add a disk to that virtual machine and then run through these steps. The same steps will apply on a physical Linux system.

We first need to add the new disk to our virtual machine. Obviously you can use a previously added disk, but you will be formatting this drive, so please take care before running through the following steps, especially if you are holding existing backups on the disk you are about to format.

Once added, reboot the VM.

Now that your system is back up and you are connected again, run the following command; this will show you your newly added disk.

fdisk -l


At this stage the disk is not mounted or usable, so we must run the following command, where /dev/sdc is the label of your disk shown above.

mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdc


Next we need to create a mount point with the following command. I am using the name /backup; you can call this what you wish, but my advice is something relevant to what you will be storing in this extent.

mkdir /backup

At this point you have a usable disk where you can store data, but on reboot this mount would be lost, so we need to make it persistent across reboots. We do this by adding the new disk device and mount point to /etc/fstab. Run the following command.

vi /etc/fstab
then hit i to insert
/dev/sdc    /backup    xfs    defaults    1 2

To commit these changes hit ESC and the following.

:wq
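A small aside: device names like /dev/sdc can change across reboots when disks are added or removed, so a more robust fstab entry references the filesystem UUID instead. The UUID below is a placeholder; take yours from the output of blkid:

blkid /dev/sdc
# then in /etc/fstab, using the UUID reported above:
UUID=<your-uuid-here>    /backup    xfs    defaults    1 2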

Now we can mount the new disk and file system using the following command.

mount /dev/sdc

Now, just to confirm that you have your newly created disk and mount point, type the following command.

df -h
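It is also worth double-checking that reflink really is enabled on the new filesystem before pointing backups at it; xfs_info will show it, as a sketch:

xfs_info /backup | grep reflink
# expect to see reflink=1 in the output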

Adding new XFS disk to Veeam Backup & Replication as a Backup Repository

Now we are ready to add the machine above into Veeam Backup & Replication, so we can start sending our backups and take advantage of the data block sharing. First, open the Veeam Backup & Replication console, navigate to Backup Infrastructure > Backup Repositories, and hit Add Repository. Select Direct attached storage and walk through the wizard below (or edit, if you previously had this machine added). Give the repository an appropriate name.


If this is a new Linux machine, you will have to add the Linux box as a managed server; if it is already added, it will be available in the drop-down. When you add it and select Populate, you will see all the available paths that can be used, along with their capacity and free space.


Next, we need to define the path to the folder we want to use for the backup repository. This is where we must remember to tick "Use fast cloning on XFS volumes".

082020 1315 Gettingyour5

The following screens through to completion define the mount server. This is used for restore scenarios, so choose something close to the repository, or use the local Veeam Backup & Replication server if everything is in the same environment. Then we have review, apply and the summary of what you have just created.

From here you can go and create your backup job and point your backups to land on the new repository. As part of the job configuration be sure to enable synthetic fulls. You will find this setting under storage, then advanced settings, in the following window.

082020 1315 Gettingyour6

To confirm those space-less synthetic full backups, you can jump back onto the Linux machine and run the following command to see the space used.

df -h
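
Another way to see the savings, as a minimal sketch assuming your repository lives at /backup: compare the apparent size of the backup files with the space the file system actually reports as used.

du -sh /backup
df -h /backup

Because du counts shared extents once per file, the du total for a chain containing synthetic fulls can be considerably larger than the used space df reports; that gap is the block cloning saving.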

You should not be using root!

It was suggested that I should add a section on creating a specific user so that we are not just using the root account. I do not in fact use god mode to add the repository; I follow this process whenever creating or editing a repository server to be used within Veeam Backup & Replication. Aside from the obvious security issues, using root works fine for most lab environments but is not going to be best practice for most organisations (it should not be acceptable practice for any organisation).

Veeam only needs a regular user with the correct permissions to write to the specific repository folder. Sudo is not required, and based on the best practice I just mentioned, why would it be? The best approach is to create a restricted user, set the required permissions, and restrict the repository folder to only that user.

I will also mention that I create my Ubuntu / CentOS boxes from a template using a more privileged account; depending on your distribution, how you are attaching disks, and which user you log in with by default, you may need sudo to create the required Veeam accounts for repository use. But the point is that Veeam does not require root or sudo access to your repository servers.

Creating the user

useradd -d /home/veeam -m veeam
passwd veeam

(BYOP – Bring Your Own Password)

Configure the folder / repository permissions

chown veeam:veeam /disk3/backups
chmod 700 /disk3/backups
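
As a quick sanity check, and assuming the /disk3/backups path used above, you can confirm from your privileged account that the restricted user can write to the folder:

ls -ld /disk3/backups
sudo -u veeam touch /disk3/backups/write-test && rm /disk3/backups/write-test

If the touch succeeds as the veeam user and the listing shows veeam as owner with 700 permissions, the repository folder is ready.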

Then, when you go and add this repository server to Veeam Backup & Replication, you will add the Linux account and credentials like below.

2020 08 25 18 30 27

Let’s show some savings

I am interested in hearing from people that have already gone down this route and are seeing some good savings from using XFS and reflink; I know a number of Service Providers that have. My ask is that you share your results and show us the savings. If you mail them to me at Michael.Cade@veeam.com I will upload the results here to show the benefits from real life scenarios rather than just lab testing.

]]>
https://vzilla.co.uk/vzilla-blog/getting-your-xfs-repository-set-up-for-some-block-cloning/feed 0
Disaster Recovery to the Cloud https://vzilla.co.uk/vzilla-blog/disaster-recovery-to-the-cloud https://vzilla.co.uk/vzilla-blog/disaster-recovery-to-the-cloud#comments Wed, 19 Aug 2020 12:20:48 +0000 https://vzilla.co.uk/?p=2334 I think it is fair to say the public cloud is very much on everyone’s mind when looking at an IT refresh, or at how you approach the constant requirement to innovate in where you enable your business to do more. A conversation we keep having is about the ability to send workloads to the cloud using our Direct Restore to Microsoft Azure or AWS, which takes care of the conversion process and configuration migration. The most common use case to date has been performing testing against specific application stacks. Then it comes down to data recovery: for example, a failure scenario on premises that maybe does not require a complete failover to a DR site, but where some virtualisation hosts are in an outage and you now need the workloads that lived on those hosts to run somewhere whilst remediation takes place. Both are use cases that Veeam Backup & Replication has been able to achieve for several years and versions.

But disaster recovery always carries a need for speed. The process of taking your virtual machine backups and restoring them to the public cloud offerings takes some time, maybe outside the SLAs the business has set. With the most recent update of Veeam Backup & Replication, v10a, this conversion process has been dramatically enhanced; the speed is now a game changer, and Disaster Recovery to the Cloud may now fit within SLAs that were once impossible to meet using this process.

10,000ft view

Let’s think about your environment, or an environment: you have your vSphere / Hyper-V / Nutanix virtualisation environment on premises running your virtual machines, and you are using Veeam Backup & Replication to protect these machines on a daily, twice daily or more frequent schedule. You may have had the requirement to directly restore certain image-based backups to Microsoft Azure or AWS for some testing or development, but you likely would not have considered this as a way of recovering those workloads should a failure scenario happen in your environment. What you likely had, or have, for Disaster Recovery is another site running similar hardware, with replication technologies moving your workloads between the sites for that failover.

If you are not familiar with Direct Restore to Microsoft Azure, you can find out more here in a previous post. A similar post can be found here for AWS.

Speed Improvements

As previously mentioned, the key to being able to treat this direct restore option as a Disaster Recovery scenario is the set of speed improvements introduced in the recent Veeam Backup & Replication 10a update. Going back to v10, which was released in early 2020, lets me share how much faster this process is now.

This video demo walks through some of those restore scenarios in detail, generally focused on test and development or data recovery rather than full disaster recovery.

You will see in the 10a update post linked above that a test was also performed at the time to show when and where to use the Azure proxy, and what speed you would see for direct restore to Microsoft Azure depending on your environment variables. The table below shows the comparison between 10 and 10a across the board.

081920 1219 DisasterRec1

This video demo in the section below shows the final two results and how this can be achieved.

The Situation

Let’s consider the situation where our local site is toast and we may not have any access to our local on premises Veeam Backup & Replication server either. Hopefully you are sending your data offsite to a different location, preferably into object storage; for the purposes of this post I am going to assume we are sending our backups into Microsoft Azure Blob Storage as our offsite copy.

We are using Scale Out Backup Repository on premises as our performance tier and Microsoft Azure Blob Storage for our capacity tier.

But we cannot access that Veeam Backup & Replication server! That is OK: the Veeam Backup & Replication server is just software that can be installed on any supported Windows OS (even client versions if really need be).

We have also made it super easy to deploy a Veeam Backup & Replication server from the Microsoft Azure Marketplace, and this takes 5 minutes! You then add your object storage, import your backup metadata, and then you can start the improved direct restore to Microsoft Azure from this machine.
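
If you prefer the command line over the portal, here is a minimal sketch of finding the marketplace offer with the Azure CLI; the publisher filter is an assumption on my part, so check the marketplace listing for the exact values:

az vm image list --publisher veeam --all --output table

From the returned image URN you can then drive az vm create, or simply deploy the offer from the portal as shown in the video.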

This video shows this process from top to bottom and highlights the speed improvements from the version 10 release.

Other thoughts?

Ok, so we have mentioned Disaster Recovery. This is only applicable if your SLAs allow it: we must get the data converted, up and running in the public cloud, and all of this is going to take time. There are ways to streamline the deployment and configuration of the Azure-based Veeam Backup & Replication server, and I am currently working on this process to make things super fast and streamlined.

I also want to shout out Ian, one of our Senior Systems Engineers here at Veeam, who has been helping me with some of this process here.

The other angle that could be taken here is DR testing, without running through a bad outage or failure on the actual live production systems.

You should be able to automate most of the process to verify that these machines come up, run and talk to each other in Microsoft Azure or AWS, and then automatically power them off, either leaving them waiting for an actual failure scenario or removing them from the public cloud.
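
For that power-off step, a minimal sketch assuming the Azure CLI; the resource group and VM name below are hypothetical placeholders for your own test failover:

az vm deallocate --resource-group dr-test-rg --name restored-app-vm

Deallocating rather than just stopping the VM releases the compute, so you are not billed for it while the machine waits for a real failover.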

More of these ideas to come.

]]>
https://vzilla.co.uk/vzilla-blog/disaster-recovery-to-the-cloud/feed 1
An update to the Veeam CHEF Cookbook https://vzilla.co.uk/vzilla-blog/an-update-to-the-veeam-chef-cookbook https://vzilla.co.uk/vzilla-blog/an-update-to-the-veeam-chef-cookbook#comments Mon, 17 Aug 2020 13:45:15 +0000 https://vzilla.co.uk/?p=2329 This one is for those interested in Configuration Management, and those looking to use these tools to set established rules that your infrastructure management software, including your backup software, should adhere to for creation, deployment, maintenance and deletion. There has been an ongoing community project where the CHEF Cookbook, first released back in 2018, has been maintained mostly by one contributor, Jeremy Goodrum; you will find his other contributions over on his GitHub. You can find a deeper dive into why we chose CHEF over other configuration management options at the time, walking through the key considerations and use cases, in the two posts below.

Cooking up some Veeam deployment with CHEF automation – Part 1

Cooking up some Veeam deployment with CHEF automation – Part 2

Always be updating

At the beginning of 2020 Veeam released v10 of Veeam Backup & Replication, which packed in many new features and functionality. This was a major release, and more about what it entailed can be found here. Prior to this, Veeam Backup & Replication worked through several releases on the 9.5 code base, with updates 1 through 4, before going to v10.

The first release of the cookbook was at the beginning of 2018, covering the then-GA release of Veeam Backup & Replication, with the ability to deploy version 9.0 through to the latest available release today, 10a, which we will touch on shortly. You can see the efforts from the start to the current build throughout the release notes.

The baseline requirements of this cookbook are the following:

  • Installs Veeam Backup and Replication Server with an optional Catalog and Console plug-in plus all the Explorers. In our testing, the entire solution deploys in under 15mins including provisioning the host.
  • Allows you to quickly evaluate Veeam Backup and Replication Server or install using your own license file.
  • Get started backing up your VMware or Hyper-V environments in minutes with an industry leading backup solution.
  • Customize the Veeam cookbook by creating your own wrapper cookbook and referring to the included custom_resources for Chef 12.5+ environments.
  • Deploy to Windows 2012R2 or Windows 2016

Version support

This has fundamentally stayed the same throughout the versions of the cookbook, while the capability has been kept up to date so the latest version of Veeam Backup & Replication can be used for fresh installations and deployments, as well as for the upgrade process between the different versions.

You will see the timeline below in the next section that highlights the Veeam Backup & Replication versions that are supported with the cookbook versions.

Latest Release

The most recent release of the cookbook, published 17/08/2020, brings the ability to install the latest Veeam Backup & Replication v10 and v10a releases. The cookbook version was updated today, and as if I were sitting by waiting for the release, I saw the notification come through. The new cookbook can be found here.
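
If you want to pull the updated cookbook straight from the Chef Supermarket, a minimal sketch using the knife CLI, assuming a standard chef-repo layout:

knife supermarket install veeam

You can then reference the veeam cookbook from your own wrapper cookbook, as described in the baseline requirements above.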

081720 1344 Anupdatetot1

There has been a timeline of version support and releases since the start of this community project, there have also been several contributions from other community members.

081720 1344 Anupdatetot2

You will also notice that the latest release of Veeam Backup & Replication, 10a, is included here and has been tested with the cookbook. Although it seems like a minor update, there are some significant features in 10a worth looking at; you can find out more regarding the release here.

If you have any questions then please reach out and if you would like to contribute to the development of this cookbook then you can find the source code here. Another big thank you to Jeremy for his contributions on this.

What else would you like to see here?

]]>
https://vzilla.co.uk/vzilla-blog/an-update-to-the-veeam-chef-cookbook/feed 1