Automated deployment of Veeam in Microsoft Azure – Part 2 (https://vzilla.co.uk/vzilla-blog/automated-deployment-of-veeam-in-microsoft-azure-part-2)
Thu, 27 Aug 2020

The first part of this series was aimed at getting a Veeam Backup & Replication Azure VM up and running from the Azure Marketplace using Azure PowerShell – a really quick and easy way to spin the system up.

The use case we are talking about is the ability to recover your on-premises backups up into Microsoft Azure.

I was asked “what about AWS?” and yes, of course, if you are using the capacity tier option within Veeam Backup & Replication on premises, with the copy mode function landing a copy of your backups on AWS S3, IBM Cloud or any S3-compatible storage, then there could be similar synergies in doing this in AWS. The reason I chose Microsoft Azure is simply that there is an Azure Marketplace offering we can take advantage of.

If you would like to see a similar series with AWS then let me know, either on Twitter or in the comments below. It will involve a different way of automating the provisioning of a Windows OS and the installation of Veeam Backup & Replication, but it is not too hard as we already have this functionality using Terraform & CHEF, currently only for vSphere; the code can be changed to work with AWS and really any platform that requires this functionality.

Veeam Configuration

As I said, if you followed Part 1 of this series you will now have your Veeam server running in Azure with no Veeam configuration.

In order for us to automate the direct restore process we need to provide some details in the script, which I will share in stages and in full at the end of the post. At a high level we need to:

Add Azure Storage Account
Import Backups
Add Azure Compute Account

Then we will take those backups and run Direct Restore to Microsoft Azure against the appropriate backups, leaving them in a converted state ready to be powered on, or you can choose to power them on as part of this script process.

Firstly we need to add the Veeam snap-in and connect to the local Veeam Backup & Replication server. Depending on where you run this script, you will need to change "localhost" below to the relevant DNS name or IP address. My recommendation is that this is done on the server itself, but I am exploring how this PowerShell script could be hosted privately on your network, rather than publicly, and used that way to fill in the secure details.


Add-PSSnapin VeeamPSSnapin

#Connects to Veeam backup server.
Connect-VBRServer -server "localhost"

Next we will add the Microsoft Azure Compute Account. This command will prompt you to log in and authenticate to Microsoft Azure; I use MFA, so this was the only way I could find to achieve this.


#Add Azure Compute Account

Add-VBRAzureAccount -Region Global

Next we will add the storage account. You will need to update the script with the details below.

Access Key – this will be based on a storage account that you have already created and you will need the long access key for authentication.

Azure Blob Account – this is the name of the storage blob account you have previously created. This is the same blob account and process that you used for adding Microsoft Azure Blob Storage to Veeam Backup & Replication on premises.


#Add Azure Storage Account

$accesskey = "ADD AZURE ACCESS KEY"
 
$blob1 = Add-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT" -SharedKey $accesskey
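If you would rather not hard-code the access key in the script, one option is to prompt for it at runtime. This is a small sketch of my own (not part of the original script) that reads the key as a secure string and only converts it back to plain text at the point it is handed to the Veeam cmdlet:


#Optional: prompt for the access key instead of storing it in the script
$secureKey = Read-Host -Prompt "Enter the Azure storage access key" -AsSecureString

#Convert the secure string back to plain text for Add-VBRAzureBlobAccount
$accesskey = [System.Net.NetworkCredential]::new('', $secureKey).Password

$blob1 = Add-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT" -SharedKey $accesskey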

Now we need to add our capacity tier; this is where you have been sending those backups.


#Add Capacity Tier (Microsoft Azure Blob Storage) Repository

$account = Get-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT"
 
$connect = Connect-VBRAzureBlobService -Account $account -RegionType Global -ServiceType CapacityTier

$container = Get-VBRAzureBlobContainer -Connection $connect | where {$_.name -eq 'AZURECONTAINER'}

$folder = Get-VBRAzureBlobFolder -Container $container -Connection $connect

The next part of adding the capacity tier is important, and I have also noted it in the script: this repository needs to be added with exactly the same name as it has in your production Veeam Backup & Replication server.


#The name needs to be exactly the same as you find in your production Veeam Backup & Replication server
$repositoryname = "REPOSITORYNAME"

Add-VBRAzureBlobRepository -AzureBlobFolder $folder -Connection $connect -Name $repositoryname

Next we need to import and rescan those backups that are in the Azure Blob Storage.


#Import backups from Capacity Tier Repository

$repository = Get-VBRObjectStorageRepository -Name $repositoryname

Mount-VBRObjectStorageRepository -Repository $repository
Rescan-VBREntity -AllRepositories

Now if you are using encryption then you will need the following commands instead of the one above.


#if you have used an encryption key then configure this section

$key = Get-VBREncryptionKey -Description "Object Storage Key"
Mount-VBRObjectStorageRepository -Repository $repository -EncryptionKey $key

At this point, if we were to jump into the Veeam Backup & Replication console, we would see our storage and compute accounts added to the Cloud Credential Manager, the Microsoft Azure Blob Storage container added to our backup repositories, and on the home screen the object storage (imported) node, which is where you will also see the backups that reside there.

Next we need to create the variables in order to start our Direct Restore scenarios to Microsoft Azure.

A lot of the variables are quite self-explanatory, but as a brief overview you will need to change the following to suit your backups.

VMBACKUPNAME = the VM backup you want to restore

AZURECOMPUTEACCOUNT = the Azure Compute Account you added to Veeam Backup & Replication at the beginning of the script.

SUBSCRIPTIONNAME = you may have multiple subscriptions on one Azure compute account; pick the appropriate one here.

STORAGEACCOUNTFORRESTOREDMACHINE = the Azure storage account that the converted backup will be restored to.

REGION = the Azure region you would like this to be restored to.

$vmsize = this is where you define what size of Azure VM you wish to use. In this example Basic_A0 is being used; you can change this to suit your workload.

AZURENETWORK = the Azure Virtual Network you wish this converted machine to live in.

SUBNET = the subnet the machine should live in.

AZURERESOURCEGROUP = the Azure Resource Group you wish the VM to live in.

NAMEOFRESTOREDMACHINEINAZURE = maybe a different naming convention, but this is what you wish to call your machine in Azure.


 #This next section will enable you to automate the Direct Restore to Microsoft Azure

$restorepoint = Get-VBRRestorePoint -Name "VMBACKUPNAME" | Sort-Object -Property CreationTime -Descending | Select-Object -First 1

$account = Get-VBRAzureAccount -Type ResourceManager -Name "AZURECOMPUTEACCOUNT"

$subscription = Get-VBRAzureSubscription -Account $account -name "SUBSCRIPTIONNAME"

$storageaccount = Get-VBRAzureStorageAccount -Subscription $subscription -Name "STORAGEACCOUNTFORRESTOREDMACHINE"

$location = Get-VBRAzureLocation -Subscription $subscription -Name "REGION"

$vmsize = Get-VBRAzureVMSize -Subscription $subscription -Location $location -Name Basic_A0

$network = Get-VBRAzureVirtualNetwork -Subscription $subscription -Name "AZURENETWORK"

$subnet = Get-VBRAzureVirtualNetworkSubnet -Network $network -Name "SUBNET"

$resourcegroup = Get-VBRAzureResourceGroup -Subscription $subscription -Name "AZURERESOURCEGROUP"

$RestoredVMName1 = "NAMEOFRESTOREDMACHINEINAZURE"
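
If you are not sure which names to use for any of these, the same Get-VBR* cmdlets can be run on their own to list what is available in your subscription. A rough sketch, on the assumption that these cmdlets return the full list when the -Name parameter is omitted:


#List available subscriptions, VM sizes and networks before filling in the variables above
Get-VBRAzureSubscription -Account $account
Get-VBRAzureVMSize -Subscription $subscription -Location $location
Get-VBRAzureVirtualNetwork -Subscription $subscription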

Now we have everything added to Veeam Backup & Replication, and we have all the variables for the machines we wish to convert and recover to Microsoft Azure VMs. Next is to start the restore process.


Start-VBRVMRestoreToAzure -RestorePoint $restorepoint -Subscription $subscription -StorageAccount $storageaccount -VmSize $vmsize -VirtualNetwork $network -VirtualSubnet $subnet -ResourceGroup $resourcegroup -VmName $RestoredVMName1 -Reason "Automated DR to the Cloud Testing"
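
The restore leaves the converted machine ready to be powered on. If you want the script to power it on as the final step, one option is to use the Az PowerShell module on the same server; this is my own hedged addition rather than part of the Veeam cmdlet, and it assumes the Az module is installed and you are connected with Connect-AzAccount.


#Optional: power on the restored VM once the restore has completed (assumes the Az module and an existing Connect-AzAccount session)
Start-AzVM -ResourceGroupName "AZURERESOURCEGROUP" -Name $RestoredVMName1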

The full script can be found below.


#This script will automate the configuration steps of adding the following steps
#Add Azure Compute Account
#Add Azure Storage Account
#Add Capacity Tier (Microsoft Azure Blob Storage) Repository
#Import backups from Capacity Tier Repository
#This will then enable you to perform Direct Restore to Azure the image based backups you require.

Add-PSSnapin VeeamPSSnapin

#Connects to Veeam backup server.
Connect-VBRServer -server "localhost"

#Add Azure Compute Account

#Need to think of a better way to run this as this will close down PowerShell when installing
msiexec.exe /I "C:\Program Files\Veeam\Backup and Replication\Console\azure-powershell.5.1.1.msi"

Add-VBRAzureAccount -Region Global

#Add Azure Storage Account

$accesskey = "ADD AZURE ACCESS KEY"
 
$blob1 = Add-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT" -SharedKey $accesskey

#Add Capacity Tier (Microsoft Azure Blob Storage) Repository

$account = Get-VBRAzureBlobAccount -Name "AZUREBLOBACCOUNT"
 
$connect = Connect-VBRAzureBlobService -Account $account -RegionType Global -ServiceType CapacityTier

$container = Get-VBRAzureBlobContainer -Connection $connect | where {$_.name -eq 'AZURECONTAINER'}

$folder = Get-VBRAzureBlobFolder -Container $container -Connection $connect

#The name needs to be exactly the same as you find in your production Veeam Backup & Replication server
$repositoryname = "REPOSITORYNAME"

Add-VBRAzureBlobRepository -AzureBlobFolder $folder -Connection $connect -Name $repositoryname

#Import backups from Capacity Tier Repository

$repository = Get-VBRObjectStorageRepository -Name $repositoryname

Mount-VBRObjectStorageRepository -Repository $repository
Rescan-VBREntity -AllRepositories

#if you have used an encryption key then configure this section

#$key = Get-VBREncryptionKey -Description "Object Storage Key"
#Mount-VBRObjectStorageRepository -Repository $repository -EncryptionKey $key

 #This next section will enable you to automate the Direct Restore to Microsoft Azure

$restorepoint = Get-VBRRestorePoint -Name "VMBACKUPNAME" | Sort-Object -Property CreationTime -Descending | Select-Object -First 1

$account = Get-VBRAzureAccount -Type ResourceManager -Name "AZURECOMPUTEACCOUNT"

$subscription = Get-VBRAzureSubscription -Account $account -name "SUBSCRIPTIONNAME"

$storageaccount = Get-VBRAzureStorageAccount -Subscription $subscription -Name "STORAGEACCOUNTFORRESTOREDMACHINE"

$location = Get-VBRAzureLocation -Subscription $subscription -Name "REGION"

$vmsize = Get-VBRAzureVMSize -Subscription $subscription -Location $location -Name Basic_A0

$network = Get-VBRAzureVirtualNetwork -Subscription $subscription -Name "AZURENETWORK"

$subnet = Get-VBRAzureVirtualNetworkSubnet -Network $network -Name "SUBNET"

$resourcegroup = Get-VBRAzureResourceGroup -Subscription $subscription -Name "AZURERESOURCEGROUP"

$RestoredVMName1 = "NAMEOFRESTOREDMACHINEINAZURE"


Start-VBRVMRestoreToAzure -RestorePoint $restorepoint -Subscription $subscription -StorageAccount $storageaccount -VmSize $vmsize -VirtualNetwork $network -VirtualSubnet $subnet -ResourceGroup $resourcegroup -VmName $RestoredVMName1 -Reason "Automated DR to the Cloud Testing"

You will also find the most up to date and committed PowerShell script here within the GitHub repository.

Feedback is key on this one and I would love to make this work better and faster. Feedback is welcome below in the comments, or get hold of me on Twitter.

Automated deployment of Veeam in Microsoft Azure – Part 1 (https://vzilla.co.uk/vzilla-blog/automated-deployment-of-veeam-in-microsoft-azure-part-1)
Wed, 26 Aug 2020

For those that saw this post and the video demo that walks through the manual steps to get your instance of Veeam Backup & Replication running in Microsoft Azure: I decided that although that was quick to deploy, it can always be quicker. Following on from this post we will then look at automating the Veeam configuration as well as the direct restore functionality from, in this instance, Microsoft Azure Blob Storage into Azure VMs.

Installing Azure PowerShell

In order for us to start this automated deployment we need to install the Azure PowerShell module locally on our machine.

More details of that can be found here.

Run the following code on your system.


if ($PSVersionTable.PSEdition -eq 'Desktop' -and (Get-Module -Name AzureRM -ListAvailable)) {
    Write-Warning -Message ('Az module not installed. Having both the AzureRM and ' +
      'Az modules installed at the same time is not supported.')
} else {
    Install-Module -Name Az -AllowClobber -Scope CurrentUser
}

Select either [Y] Yes or [A] Yes to All, as this is an untrusted repository. You can also change CurrentUser to AllUsers if you wish to install the module for all users on the local machine.

Breaking down the code

This section is going to talk through the steps taken in the code. The way this works is that by taking the code from the GitHub repository you will be able to modify the variables and begin testing yourself without any actual code changes.

First we need to connect to our Azure account. This will open a web browser to log in to your Azure portal; if you are using MFA then this will enable you to authenticate that way as well.


# Connect to Azure with a browser sign in token
Connect-AzAccount

Next we want to start defining what, where and how we want this to look in our Azure accounts. The following should be pretty straightforward to understand:

locName = Azure Location

Publisher Name = Veeam

Offer Name = the particular offering we wish to deploy from the publisher; there are quite a few, so expect to see other options using this method.

SkuName = the product SKU of the offering you wish to use

version = the version of the product


# Set the Marketplace image
$locName="EASTUS"
$pubName="veeam"
$offerName="veeam-backup-replication"
$skuName="veeam-backup-replication-v10"
$version = "10.0.1"

The following are aligned to the environment.

resourcegroup = the resource group you wish to use; this can be an existing resource group or a new name

vmname = the name you wish your Veeam Backup & Replication server to have within your Azure environment

vmsize = the VM size that will be used; my advice is to pick one of the supported sizes. Standard_F4s_v2 is the default size used for production environments.


# Variables for common values
$resourceGroup = "CadeTestingVBR"
$vmName = "CadeVBR"
$vmSize = "Standard_F4s_v2"

Next we need to agree to the license terms of deploying from the marketplace for this specific VM Image. The following commands will do this.


Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version

$agreementTerms=Get-AzMarketplaceterms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1"

Set-AzMarketplaceTerms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1" -Terms $agreementTerms -Accept

If you wish to review the terms then you can do so by running the following command. Spoiler alert: the command will give you a link to a txt file, and to save you the hassle, here is the link from that txt file where you will find the Veeam EULA – https://www.veeam.com/eula.html


Get-AzMarketplaceTerms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1"

Next we need to start defining how our Veeam Backup & Replication server will look in regards to configuration of network, authentication and security.

I also wanted to keep this script following best practice and not containing any usernames or passwords so the first config setting is to gather the username and password for your deployed machine in a secure string.


# Create user object
$cred = Get-Credential -Message "Enter a username and password for the virtual machine."

Create a resource group


# Create a resource group

New-AzResourceGroup -Name $resourceGroup -Location $locname -force

Create a subnet configuration


# Create a subnet configuration
$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name "cadesubvbr" -AddressPrefix 10.0.0.0/24

Create a virtual network


# Create a virtual network
$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNet -AddressPrefix 10.0.0.0/24 -Subnet $subnetConfig

Create a public IP Address


# Create a public IP address and specify a DNS name
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $locName `
  -Name "CadeVBR$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4

Create inbound security group rule for RDP


# Create an inbound network security group rule for port 3389
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name CadeVBRSecurityGroupRuleRDP  -Protocol Tcp `
  -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
  -DestinationPortRange 3389 -Access Allow

Create network security group


# Create a network security group
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNetSecurityGroup -SecurityRules $nsgRuleRDP

Create a virtual network interface card


# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name CadeVBRNIC -ResourceGroupName $resourceGroup -Location $locName `
  -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

Next we need to define what the virtual machine configuration is going to look like, using the environment configurations above.


#Create a virtual machine configuration

$vmConfig = New-AzVMConfig -VMName "$vmName" -VMSize $vmSize
$vmConfig = Set-AzVMPlan -VM $vmConfig -Publisher $pubName -Product $offerName -Name $skuName
$vmConfig = Set-AzVMOperatingSystem -Windows -VM $vmConfig -ComputerName $vmName -Credential $cred
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
$vmConfig = Add-AzVMNetworkInterface -Id $nic.Id -VM $vmConfig

Now we have everything we need, we can begin deploying the machine.


# Create a virtual machine
New-AzVM -ResourceGroupName $resourceGroup -Location $locName -VM $vmConfig

If you saw the video demo you would have seen that the deployment really does not take long at all; I actually think using this method is a little faster. Either way, it is less than 5 minutes to deploy a Veeam Backup & Replication server in Microsoft Azure.

Now that we have our machine there is one thing we want to do to ensure the next stages of configuration run smoothly. Out of the box there is a requirement for Azure PowerShell to be installed in order to use Azure Compute accounts and Direct Restore to Microsoft Azure. The installer is already on the deployed box; if you were going through manually you would just install that MSI, but in this script we remotely run a PowerShell script from GitHub that will do it for you.


# Start Script installation of Azure PowerShell requirement for adding Azure Compute Account
Set-AzVMCustomScriptExtension -ResourceGroupName $resourceGroup `
    -VMName $vmName `
    -Location $locName `
    -FileUri https://raw.githubusercontent.com/MichaelCade/veeamdr/master/AzurePowerShellInstaller.ps1 `
    -Run 'AzurePowerShellInstaller.ps1' `
    -Name DemoScriptExtension

At this stage the Azure PowerShell installation has, for me, required a reboot, but it is very fast and the server is generally back up within 10-15 seconds. So we run the following commands to pause the script, report the public IP address, and then start a Windows Remote Desktop session to that IP address.


Start-Sleep -s 15

Write-host "Your public IP address is $($pip.IpAddress)"
mstsc /v:$($pip.IpAddress)
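
A fixed 15 second sleep has worked for me, but if you want something more deterministic you could poll the RDP port on the public IP and only launch the Remote Desktop session once it responds. A small sketch of that alternative:


# Alternative to the fixed sleep: wait until RDP (TCP 3389) answers on the public IP before connecting
while (-not (Test-NetConnection -ComputerName $pip.IpAddress -Port 3389 -InformationLevel Quiet)) {
    Start-Sleep -Seconds 5
}
Write-Host "Your public IP address is $($pip.IpAddress)"
mstsc /v:$($pip.IpAddress)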

Now, this might seem like a long-winded approach to getting something up and running, but combining all of this into one script, and having the ability to create it all on demand, brings a powerful story for recovering workloads into Microsoft Azure.

The next part of this series will concentrate on a configuration script, where we will configure Veeam Backup & Replication to attach the Microsoft Azure Blob Storage where our backups reside and our Azure Compute Account, and then we can look at how we could automate this process end to end to bring your machines up in Microsoft Azure when, or before, you need them.

Here is the complete script:


# Connect to Azure with a browser sign in token
Connect-AzAccount

# Set the Marketplace image
$locName="EASTUS"
$pubName="veeam"
$offerName="veeam-backup-replication"
$skuName="veeam-backup-replication-v10"
$version = "10.0.1"

# Variables for common values
$resourceGroup = "CadeTestingVBR"
$vmName = "CadeVBR"
$vmSize = "Standard_F4s_v2"
$StorageSku = "Premium_LRS"
$StorageName = "cadestorage"

Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version

$agreementTerms=Get-AzMarketplaceterms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1"

Set-AzMarketplaceTerms -Publisher "veeam" -Product "veeam-backup-replication" -Name "10.0.1" -Terms $agreementTerms -Accept


# Create user object
$cred = Get-Credential -Message "Enter a username and password for the virtual machine."

# Create a resource group

New-AzResourceGroup -Name $resourceGroup -Location $locname -force

# Create a subnet configuration
$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name "cadesubvbr" -AddressPrefix 10.0.0.0/24

# Create a virtual network
$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNet -AddressPrefix 10.0.0.0/24 -Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $locName `
  -Name "CadeVBR$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4

# Create an inbound network security group rule for port 3389
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name CadeVBRSecurityGroupRuleRDP  -Protocol Tcp `
  -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
  -DestinationPortRange 3389 -Access Allow

# Create a network security group
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $locName `
  -Name CadeVBRNetSecurityGroup -SecurityRules $nsgRuleRDP

# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name CadeVBRNIC -ResourceGroupName $resourceGroup -Location $locName `
  -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

# Create a virtual machine configuration
#vmConfig = New-AzVMConfig -VMName $vmName -VMSize $vmSize | `
#Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
#Set-AzVMSourceImage -VM $vmConfig -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version | `
#Add-AzVMNetworkInterface -Id $nic.Id

#Create a virtual machine configuration

$vmConfig = New-AzVMConfig -VMName "$vmName" -VMSize $vmSize
$vmConfig = Set-AzVMPlan -VM $vmConfig -Publisher $pubName -Product $offerName -Name $skuName
$vmConfig = Set-AzVMOperatingSystem -Windows -VM $vmConfig -ComputerName $vmName -Credential $cred
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
$vmConfig = Add-AzVMNetworkInterface -Id $nic.Id -VM $vmConfig

# Create a virtual machine
New-AzVM -ResourceGroupName $resourceGroup -Location $locName -VM $vmConfig

# Start Script installation of Azure PowerShell requirement for adding Azure Compute Account
Set-AzVMCustomScriptExtension -ResourceGroupName $resourceGroup `
    -VMName $vmName `
    -Location $locName `
    -FileUri https://raw.githubusercontent.com/MichaelCade/veeamdr/master/AzurePowerShellInstaller.ps1 `
    -Run 'AzurePowerShellInstaller.ps1' `
    -Name DemoScriptExtension

Start-Sleep -s 15

Write-host "Your public IP address is $($pip.IpAddress)"
mstsc /v:$($pip.IpAddress)

You can also find this version and updated versions of this script here in my GitHub repository.

Any comments or feedback, either down below, on Twitter, or on GitHub.

An update to the Veeam CHEF Cookbook (https://vzilla.co.uk/vzilla-blog/an-update-to-the-veeam-chef-cookbook)
Mon, 17 Aug 2020

For those interested in configuration management, and those looking to use these tools to set established rules that your infrastructure management software, including your backup software, should adhere to for creation, deployment, maintenance and deletion: there has been an ongoing community project where the CHEF cookbook first released back in 2018 has been maintained mostly by one contributor, Jeremy Goodrum, and you will find his other contributions over on his GitHub. You can find a deeper dive into why we chose CHEF over other configuration management options at the time, along with the key considerations and use cases, in the two posts below.

Cooking up some Veeam deployment with CHEF automation – Part 1

Cooking up some Veeam deployment with CHEF automation – Part 2

Always be updating

At the beginning of 2020 Veeam released v10 of Veeam Backup & Replication, which packed in many new features and functionality. This was a major release and more about what it entailed can be found here. Prior to this release, Veeam Backup & Replication worked through several releases of the 9.5 code, with Update 1 through to Update 4, before going to v10.

The first release of the cookbook was at the beginning of 2018, covering the GA release of Veeam Backup & Replication with the ability to deploy version 9.0, through to today's latest available release, 10a, which we will touch on shortly. You can see the efforts from the start to the current build throughout the release notes.

The baseline requirements of this cookbook are the following:

  • Installs Veeam Backup and Replication Server with an optional Catalog and Console plug-in plus all the Explorers. In our testing, the entire solution deploys in under 15mins including provisioning the host.
  • Allows you to quickly evaluate Veeam Backup and Replication Server or install using your own license file.
  • Get started backing up your VMware or Hyper-V environments in minutes with an industry leading backup solution.
  • Customize the Veeam cookbook by creating your own wrapper cookbook and referring to the included custom_resources for Chef 12.5+ environments.
  • Deploy to Windows 2012R2 or Windows 2016

Version support

This has fundamentally stayed the same throughout the versions of the cookbook, whilst adding the capability to use the latest version of Veeam Backup & Replication for fresh installations and deployments, as well as the upgrade process between the different versions.

You will see the timeline below in the next section that highlights the Veeam Backup & Replication versions that are supported with the cookbook versions.

Latest Release

The most recent release of the cookbook, released 17/08/2020, brings the ability to install the latest Veeam Backup & Replication v10 and v10a releases. The cookbook version was updated today and, as if I was sitting by and waiting for the release, I saw the notification come through. The new cookbook can be found here.


There has been a timeline of version support and releases since the start of this community project, and there have also been several contributions from other community members.

[Image: timeline of cookbook releases and the Veeam Backup & Replication versions they support]

You will also notice that the latest release, Veeam Backup & Replication 10a, is also included here and has been tested with the cookbook. You can find out more regarding the 10a release here; although it seems like a minor update, there are some significant features in there worth looking at.

If you have any questions then please reach out and if you would like to contribute to the development of this cookbook then you can find the source code here. Another big thank you to Jeremy for his contributions on this.

What else would you like to see here?

A sweet Chocolatey way of deploying your Veeam Software (https://vzilla.co.uk/vzilla-blog/a-sweet-chocolatey-way-of-deploying-your-veeam-software)
Mon, 17 Aug 2020

The ways in which you can deploy Veeam Backup & Replication, and all Veeam products for that matter, are vast. You could just take the ISO, install it on a physical or virtual machine, click next, next, and you can be protecting your workloads and data within 15 minutes. This is the same software regardless of the size of your environment. Where flexibility in deployment does become a challenge is when you have many sites that require their own Veeam Backup & Replication configuration.

This is where automation plays a huge part in this story. The great thing about Veeam is that it is software only, so you can choose where you deploy it: a virtual machine, a physical system or a cloud-based VM.

There are many angles to take when it comes to automating the deployment of Veeam Backup & Replication; we can leverage configuration management software, unattended install scripts or package management solutions.


Deployment Options

As mentioned above, there are quite a few ways to leverage configuration management software to automate the deployment and installation of Veeam Backup & Replication. Here are some of those examples:

Veeam & Chef

Cooking up some Veeam deployment with CHEF automation – Part 1

Cooking up some Veeam deployment with CHEF automation – Part 2

Veeam & Ansible

Veeam unattended installation with Ansible – Thanks to Markus Kraus for the effort put in here on this project.

Windows Package Management

Another way to quickly install Veeam Backup & Replication on your Windows operating system is by using a package manager. I hear you shouting at your screen… but there is no package manager in Windows! This is true, but there is Chocolatey.

Chocolatey is a free and open-source package manager for Windows. Package managers are great for installing and managing multiple programs at the same time, and Chocolatey also offers a pro/business version. I have touched on Chocolatey in previous posts here and also here, which cover how to install the Veeam Agent for Windows using this simple-to-use package manager. But those posts go back to 2018, which in the world of automation is a little while back; if you now go and search the Chocolatey site for Veeam you are going to find a load more options for deploying various different Veeam products.


This is where I have to make a BIG shout out to another of the Veeam Vanguards who has contributed so much in this space, helping his company with some of those deployment challenges at scale by using Chocolatey, and who has then done the community thing by sharing those efforts for everyone to take advantage of.

You can find the contributions of Maurice Kevenaar here on his Chocolatey profile, and he is also on Twitter.

Big shout out to Maurice here again on the community effort and the contributions in general across the chocolatey packages.

What is available for installation?

Those familiar with Veeam Backup & Replication may be aware that there are many components available that can be individually scaled out depending on your environment. You are able to deploy it all on one server, but you can then start deploying separate components onto different systems across the estate, such as the catalog service or the Veeam Explorers that give you application item-level recovery options for your application servers.

All in one Veeam Backup & Replication server

Maurice has also added additional packages for installing other Veeam products such as Veeam ONE and Veeam Backup for Microsoft Office 365 all via chocolatey and a simple command within your Windows OS.
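
To give a feel for how simple this is, installing one of these packages is a single command from an elevated PowerShell prompt. The package IDs below are examples from memory, so check the exact names on the Chocolatey site before using them:


# Example: install Veeam Backup & Replication via Chocolatey (verify the package ID on chocolatey.org first)
choco install veeam-backup-and-replication -y

# Or just an individual component, such as the console
choco install veeam-backup-and-replication-console -y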

The other package to shine a light on is the extract utility, a tool that allows you to extract your Veeam image-based backups (VBK, VIB, VRB) without requiring a full installation of Veeam Backup & Replication, even though Community Edition gives you all the recovery options. This means that any Veeam backup file mentioned above can be accessed, opened and extracted on both Windows and Linux (the Chocolatey package is Windows only) without the requirement of a full-blown Veeam installation. This is massive, and I mentioned here some of the benefits of why it is important: you need to ensure you have access to this data and that you are not tied to a specific vendor in years to come because the backup files can only be opened with a full copy of the software, where full restore functionality is also generally required.

The purpose of the post was to highlight yet another great community effort but also the flexibility when it comes to the deployment of Veeam products.

There will be a follow up YouTube walkthrough on how this looks and how easy it is to get things up and running using Chocolatey

HashiConf Digital 2020 – An Online Experience (https://vzilla.co.uk/vzilla-blog/hashiconf-digital-2020)
Wed, 24 Jun 2020

This is probably the first online virtual/digital event I have really attended so far this year, apart of course from VeeamON. I have to say that some of the talk amongst the community has been about platforms not really being able to offer the experience we expect, and I think that has to be true; for the most part there has never really been a need to provide a platform for this scenario and at this scale before.

HashiConf did not disappoint: not only was the platform lightweight and really quite easy and nice to navigate, but the content was the reason I came, and that was really on point.

Opening Keynote

Things kicked off on Monday with the opening keynote from the two founders and CTOs of HashiCorp. I have to say that these guys are some of the most interesting and exciting people in the industry today and will no doubt be on the radar for many years to come. Their straight-to-the-point announcements and releases, and their focus on where the company is and where it is going, are very refreshing, rather than talking about digital transformation and other dizzy-height buzzword bingo. Armon Dadgar and Mitchell Hashimoto kicked things off with a short but well-executed live opening keynote where they had release after release and beta after beta across their portfolio.

The raft of announcements was as follows.

Announcing General Availability of HashiCorp Consul 1.8

Announcing HashiCorp Terraform 0.13 Beta

Announcing HashiCorp Nomad 0.12 Beta

Announcing the HashiCorp Cloud Platform

You can catch the HashiConf Digital June 2020 – Full Opening Keynote here

Sessions


My exposure to the HashiCorp products has been pretty much limited to hands-on with Terraform and Packer. But with everything I have been learning over the last 12 months around public cloud, security and Kubernetes, I wanted to take this opportunity at HashiConf Digital to catch some of the other interesting products that HashiCorp have to offer: it was about learning more about Terraform, Vault, Consul and Nomad.

I believe all sessions will be available on demand after the event has finished; I am not sure for how long, but I wanted to share the sessions I had scheduled to go along with the education plans above.

The Hitchhiker’s Guide to Terraform Your Infrastructure, delivered by Fernanda Martins – this session covered some best practices and tricks when using Terraform.

Vault Configuration as Code via Terraform: Stories From the Trenches, delivered by Andrey Devyatkin – the draw of this session was that it started with the basics of Terraforming Vault and then went deeper along the journey they took as a company.

Panel: Kubernetes First Class Experience was another session worth grabbing on demand if available; this was a session fielding questions from the audience to the product managers for Vault and Terraform.

The Nomad Autoscaler – James Rasell covers the concepts and how you would approach autoscaling using Nomad.

Life of a Packet Through Consul Service Mesh – this was a steep learning curve, as I only know the 101 of what Consul does and what benefits it brings as a service mesh; Christoph Puhl covers this off.

Panel: HashiCorp Certification – I didn’t actually get to see this session, but it was on my list and I will go back and catch it first. I have not been into certs for a few years now, but I am interested in this well-thought-out track that HashiCorp have put on. This was also covered by two top community guys, Bryan Krausen and Ned Bellavance; I have been following a lot of Ned’s content on Pluralsight.

Oh No! I Deleted My Vault Secret! – Lucy Davinhart; anything that involves backup strategy I am in for, given my daily role.

Panel: Nomad Demystified – another panel discussion.

Closing Keynote: In Conversation with Kelsey Hightower

Protecting Workflows and Secrets – Andy Manoske

Automating Private SaaS Infrastructure Across AWS and GCP at Scale

Panel: HCS on Azure with Microsoft

Operationalizing HashiCorp Vault

Panel: Infrastructure-as-code and the Future of Terraform

But as I said all of these should be available on demand.

Also a big shout out to DevOps Rob who gave me a shout on the opening keynote which was unexpected but always good to see everything going live.

Finally

Overall, a really good event, and because of timezones I wasn’t pressured to get straight into the platform and constantly play catch-up; there were not too many sessions to choose from either, which made the selection easier to deal with. I will be trying to catch up on the sessions that I missed over the summer as I continue that journey of learning this new world further.

I am also looking forward to attending in person, I would hope in October, but the closer we get the less I can see an in-person event happening; when it does roll around again, though, the in-person event will be very much on my list.

Cooking up some Veeam deployment with CHEF automation – Part 2 (https://vzilla.co.uk/vzilla-blog/cooking-up-some-veeam-deployment-with-chef-automation-part-2)
Fri, 15 Nov 2019

This post will highlight the capabilities of the Veeam CHEF cookbook and walk through getting things going.

The first deployment mode is “Simple”: a single Windows machine that will act as the all-in-one server. This server will contain all the mandatory components required for Veeam Backup & Replication to function.

[Diagram: Simple deployment mode]

Next, we have the Advanced Deployment; this breaks up those components and allows us to deploy multiple nodes for different Veeam functions.

In the diagrams, orange lines represent requesting and sending cookbooks to and from the CHEF server; black lines represent initiating the bootstrap process for each node/component to be deployed.

In advanced deployments, the backup proxy role is manually assigned to one or more Windows servers. This approach allows for offloading the Veeam backup server, achieving better performance and reducing the backup window.

[Diagram: Advanced deployment mode]

With the above in mind we can pick and choose which Veeam components go where and on which node they reside; this allows us to deploy a truly scalable Veeam environment.


This then leads to a real-life scenario where we can have a truly scaled-out proxy deployment to attack those large environments. Being able to spin up these images this fast, and to configure them by installing the appropriate Veeam components, allows us to scale out the deployment.


Walkthrough — Let’s get cooking

Cooking up some Veeam deployment with CHEF automation – Part 1 (https://vzilla.co.uk/vzilla-blog/cooking-up-some-veeam-deployment-with-chef-automation-part-1)
Thu, 14 Nov 2019

As part of my #SummerProject of exploring deeper into Infrastructure as Code, public cloud and cloud native, I was led back to a session that I delivered last year at VeeamON.

The concept of the session was to highlight some of the things we had done from a community point of view with CHEF and Veeam to really show Day 0 operations and getting Veeam up and running in a declarative fashion.

Why did we choose to automate Veeam deployments?

We wanted to come up with a way to consistently deploy Veeam servers. When you look at the difference between simple deployments and more advanced environments, we found that everyone can benefit from this consistent model.

What is configuration management?

Established set of rules by which an application or environment is created, maintained and destroyed

Provides for a consistent methodology of delivery

Advanced concepts allow for self-healing and protection from configuration drift

Ongoing and not a one-time process

Benefits of configuration management

  • Application or environment desired states are declared in a repeatable format
  • Configurations can be versioned like any application code
  • Parameterization allows for a one-to-many delivery of solutions
  • Overall manpower hours and time-to-market reduced for new environments
  • A desired state creates an audit trail and self-healing remediation

What is CHEF?

Server / client configuration management software where configuration ‘recipes’ are applied to servers

Supports a pull method whereby a client checks into a central CHEF server to gather configuration updates and apply those to the host

A cookbook contains one or more recipes with each responsible for a specific configuration state

Key CHEF Terms

CHEF Server: A centralized, multi-tenant aware host to which all clients will connect; Serves as the central storage of configurations and environment specifics

CHEF Client: A client-side application installed on the host, which is responsible for gathering the configuration details from the CHEF Server and applying the desired state to the host

Cookbook: A collection of one or more recipes typically focused on the deployment of an application or set of applications

Recipe: The desired state to which a host should be configured


Why is configuration management important in the enterprise?

  • Reduction in large deployment lead times due to limited number of build engineers
  • Reduction in growing list of outdated templates and images
  • Reduction in yearly release cycle for updated templates
  • Development teams impacted by testing against unsupported images

    Delays in product release

    Back and forth and long lead times for servers

  • Changes to images and deployed servers required manual intervention

Why did we choose CHEF?

Chef gives us a universal delivery method that benefits every size organization. Chef also includes some of the best cross-platform support and testing for Windows environments. Since Veeam is deployed on Windows, this was a major win.

Desired State / Configuration / Consistently Deployed

It is important to consider the difference between a one-time use script and on-going delivery.

By using a tool like Chef, we can create a standardized delivery of all Veeam products that can be consistent across organizations. It is also an enterprise tool; there are many different desired state configuration tools available, but our initial investigation showed that CHEF was not only a leader in this space but a visionary in how Configuration as Code can be delivered.

Rapidly spin up / down extra components based on workload (On-Demand)

This is at the heart of what we wanted to accomplish. The ability to quickly add backup capacity as the need arises and release those licenses (this is Windows) when we don’t need them.

Grow, change, expand, shrink, evolve

We saw the DevOps and Cloud as cornerstones to how companies are evolving their business. Why should your backup software not be able to meet those goals?

The Challenge

Veeam has traditionally been really simple to install and deploy, although over the past years I have been involved in some large-scale deployments and installations of Veeam and, whilst it works, deploying the number of components required in these environments can take a while in the traditional sense.

Veeam needs at least one Windows OS to run all components, but all of the other components can also be scaled out architecturally (this is talking pre-v10, before Linux proxies were introduced). Veeam proxies pre-v10 require multiple VMs or physical systems to act as the data movers. Even with a template to deploy them, if you need 10 to 100 of them that is a lot of manual labor to get them running.

Now, imagine you have 10,000 VMs to protect, how much effort is it to deploy the scale-out architecture?

This was the reason for what we were doing with CHEF.

The second part of this post will go a little deeper into how and what we have done but also how you can get started here.

Do you use GitHub? Ever thought about backup… (https://vzilla.co.uk/vzilla-blog/do-you-use-github-ever-thought-about-backup)
Mon, 21 Oct 2019

Do you use GitHub?

How do you ensure that, if GitHub were down for some reason, your developers could still gain access to their code? And how many people actually run their code from GitHub directly into their environment?

Why do you need to backup GitHub?

As mentioned above, something could happen to your access to GitHub, and that does not just mean a site failure at their end; it could also mean internet connectivity issues or problems within your environment that stop you gaining access to GitHub.

What if one of your developers or GitHub administrators brings down an important repository or makes a change that needs to be rolled back? This approach will also give you the ability to back up any other GitHub repository that you have watched or starred.

How did we get to this topic?

Well, it was thanks to a couple of conversations, but the trigger to actually exploring things more was a quick chat with Ruairi McBride, which pushed me to go and do some digging and led me to some articles I will also mention as they could be useful.

The first resource I found was from Volkan Paksoy. Volkan is a software developer, so although he approached this with backup in mind he also talks about some tools that are not the norm for us infrastructure people, but he covers things really well here. The bulk of the script I used is actually based on Volkan's work; I have just added some additional benefits to it.

Do I need to backup my GitHub?

My argument is: how important is the code base and project work that you have within your GitHub account? Can you afford to lose it? Yes, you most likely have a copy via GitHub Desktop running somewhere, but what if mistakes occur? What if you lost that? What if you were compromised? If you feel you should, then there are lots of different scripts and open-source tools out there, as well as some paid-for offerings, that you can use to create backups.

How can I start backing up GitHub?

As I have said, there are many ways in which you can make this happen; as with any backup methodology it is down to what you want to achieve. I decided that as a test I wanted to create a daily backup of my GitHub repositories. I had no concern about space, as I know my GitHub only really contains PowerShell or code-based repositories, nothing of a huge size, so I chose to take what is effectively a full backup on a daily basis.

Following Volkan's blog above, he states he already had Git installed (software developers generally will; in my case I did not), so installing Git was the first step in order to start some level of backup.

Another resource to help with this –

https://www.atlassian.com/git/tutorials/install-git#windows

We then need to connect to your GitHub account; this involves a few commands that can be found here, but I will also include them below.

Open a terminal/shell and type:


$ git config --global user.name "Your name here"
$ git config --global user.email your_email@example.com

Next we need to set up SSH on your machine. In my case this is purely going to be a standalone machine that looks after this backup and other backup tasks; it is not a developer machine or anywhere I will likely consume the source code we are backing up.

If you have not generated an SSH key for access to GitHub this resource will also help.

Connect GIT to your GitHub – https://kbroman.org/github_tutorial/pages/first_time.html

Not sure if this is needed but this helped me get some folder structure in place - git clone https://hostname/YOUR-USERNAME/YOUR-REPOSITORY

https://help.github.com/en/enterprise/2.18/user/articles/cloning-a-repository

Creating personal access token with Repo Scope – https://github.com/settings/tokens

How to then compress a group of files – https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.archive/compress-archive?view=powershell-6

To create the public/private keys, open a terminal/shell and type:


$ ssh-keygen -t rsa -C your_email@example.com

On windows you are going to find your required files here: C:\users\username\.ssh

  • Go to your github Account Settings
  • Click “SSH Keys” on the left.
  • Click “Add SSH Key” on the right.
  • Add a label “backup” and paste the public key from id_rsa into the text box

Then we can test if the above worked by running


ssh -T git@github.com

If that worked then you will get a return of


Hi username! You've successfully authenticated, but Github does
not provide shell access.

OK, so we now have Git installed and we are connected to our GitHub account. Next we are back to Volkan's page for the backup script. I have added some additional steps here, as I want a point-in-time scheduled copy of my GitHub repositories that I can access if GitHub is not available, or if someone malicious gets in and deletes or edits my repositories.


#Script Original from https://volkanpaksoy.com/archive/2017/11/30/Backing-up-GitHub-Account-with-PowerShell/

#Define these four variables based on your own environment.
$backupDirectory = 'BACKUP LOCATION'
$backupretention = 'COMPRESSEDBACKUPLOCATION'
$token = 'GITUSERNAME:PERSONALACCESSTOKEN'
$base64Token = [System.Convert]::ToBase64String([char[]]$token)

$headers = @{
    Authorization = 'Basic {0}' -f $base64Token
};

Set-Location -Path $backupDirectory
$page = 1
$perPage = 30

Do
{
    Write-Host "Getting page: $page"
    $response = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/user/repos?page=$page&per_page=$perPage"
   
    foreach ($repo in $response)
    {
        $repoName = $repo.name
        $repoPath = "$backupDirectory/$repoName"

        Write-Host "Processing repo at path: $repoPath"

        if ( (Test-Path $repoPath) -eq 0)
        {
            Write-Host "Repo doesn't exist, clone it"
            git clone $repo.ssh_url
        }
        else
        {
            Write-Host "Repo exists, update"

            # Change to repo directory to fetch updates
            Set-Location -Path $repoPath

            git fetch --all
              #git reset --hard origin/master

            # Change back to root backup directory
            Set-Location -Path $backupDirectory
        }
    }
   
    $page = $page + 1
}
While ($response.Count -gt 0)

# Enable this command if you wish to store retention points for your GitHub repositories.

# The following commands will allow for us to take a compressed point in time version of our GitHub repository and assign the date to the compressed file and store to a relevant backup location.
# The Compress-Archive -Path <LOCATION> should be your GitHub repository location, this could also be used in conjunction with another script that on a schedule will bring down and update from the live GitHub repository to this landing area.
# The -DestinationPath should be the target location you wish your backups to reside and potentially then be further protected by your Backup Software.
 

Compress-Archive -Path $backupDirectory -CompressionLevel Optimal -DestinationPath ($backupretention + (Get-Date -Format yyyyMMdd) + '_GitHubBackup.zip') -Force

This is what I have started to do on a scheduled basis so I have at least a copy of my scripts and completed work outside of GitHub. The next challenge is going to be restoring that back into GitHub; a rough sketch of both the scheduling and the restore is below, but if anyone has a better workaround then please let me know and I will add it to this post.
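
For the scheduling side, one option is a simple Windows scheduled task that runs the backup script daily; the time and script path below are placeholders:


# Register a daily scheduled task that runs the backup script at 01:00 (script path is a placeholder)
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\GitHubBackup.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName "GitHub Backup" -Action $action -Trigger $trigger

As for restoring back into GitHub, a rough sketch of one approach is to create a new empty repository on GitHub and push the local backup copy back up to it. The repository names and URL below are placeholders and this is untested, so treat it as a starting point rather than a finished process:


# From the local backup copy of the repository
cd C:\Backup\Github\YOUR-REPOSITORY

# Point the clone at the new, empty repository created in GitHub
git remote set-url origin git@github.com:YOUR-USERNAME/YOUR-NEW-REPOSITORY.git

# Push all branches and tags back up
git push --all origin
git push --tags origin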

#SummerProject – Infrastructure As Code – Example Tools (https://vzilla.co.uk/vzilla-blog/summerproject-infrastructure-as-code-example-tools)
Thu, 08 Aug 2019

Terraform

As I said above, I wanted to get into some examples of the tools used to provision your infrastructure using code. Terraform uses the term “Execution Plans” to describe the way your code is deployed.

Terraform was created by a company called HashiCorp; they have a number of really good tools in this space.

The biggest pull factor for me, and why I wanted to kick things off with Terraform, is that Terraform is cloud agnostic, or pretty much infrastructure agnostic, as you can use Terraform with your on-premises vSphere environment as well as the AWS, Azure and GCP cloud platforms. Below is a link to an awesome but simple introduction to Terraform. All of these resources can be found here amongst other training material around the other tools available.
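
To give a feel for those execution plans, the core Terraform workflow is only a handful of commands once your configuration files are written; this is just the generic workflow rather than a full example:


# Download the providers referenced in the configuration
terraform init
# Build the execution plan - shows what would change without changing anything
terraform plan
# Apply the execution plan and provision the infrastructure
terraform apply
# Tear it all down again when finished
terraform destroy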

Azure Resource Manager Templates

Up until today I would have said that the ability to use PowerShell in Azure to deploy my Resource Groups and Storage Accounts was IAC. I was wrong: the code itself could form some of that IAC effort, but alone in a PowerShell script this is not IAC.

IAC in an Azure world centres around Azure Resource Manager templates: a declarative way of saying “this is how I want the end state to be within my Azure environment”. These are defined in a JSON file and they allow you to determine how and what your resources and infrastructure look like.

These templates can be deployed through PowerShell, Azure CLI or through the Azure console.
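
As a quick illustration of the PowerShell route, once you have a template JSON file you can deploy it into a resource group with a couple of Az module cmdlets. The resource group name and file paths here are only examples:


# Create the target resource group if it does not already exist
New-AzResourceGroup -Name "IaCDemo" -Location "EASTUS"

# Deploy the ARM template (and optional parameters file) into that resource group
New-AzResourceGroupDeployment -ResourceGroupName "IaCDemo" `
  -TemplateFile ".\azuredeploy.json" `
  -TemplateParameterFile ".\azuredeploy.parameters.json"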

The biggest thing that needs to happen here, and the benefit of IAC, is understanding and being able to use versioning. A good example of versioning would be using GIT; this allows for source control so you can see when things have changed in the configuration code.

There are alternatives to GIT, but I am pretty confident, as a noob here, that GIT is the most used out there. Really, I am not looking to be a programmer; I just need to understand and potentially be able to act on a little of this, not be fully fledged and knighted into the developer kingdom.

Azure DevOps is another resource to mention here. Azure DevOps allows your developers to collaborate on code development; again, this could be a little outside the IAC remit, but there may be some use cases where it is absolutely required as part of IAC.

Azure Repos are leveraged to centrally store code, but there are a lot of other Azure services that coexist here, and it is worth reading some more on them if you are interested.

What was interesting in the video resource listed below, "Infrastructure as Code for the IT Administrator", is that the presenter also touches on Continuous Deployment and Azure Pipelines. I found it very interesting that by pushing changes to Git, those committed changes would automatically be deployed through the pipeline or workflow.

I think the example that John Savill uses in the demo is very simple, and to be honest that task could be quicker using the UI, but obviously he did not have endless amounts of time to walk through a more involved example. Still, I think it is the best resource I have seen today for explaining what IAC is and why it should absolutely be considered.

AWS CloudFormation

I think by now we are clear that Infrastructure As Code is, yes, about code, but it is probably more important to remember that it is about version control and a declarative way of saying "this is how I want the end state to be within my environment", whichever environment you wish that to be.

Now, a question I have at this point: we first talked about Terraform and stated how it was agnostic to the environment, usable with vSphere, AWS, Azure and so on. Colour me silly, but am I right in thinking that the Azure Resource Manager templates mentioned in the last section and AWS CloudFormation are fixed to their respective public cloud offerings?

This is quite an old resource but this completely makes sense to me – https://www.techdiction.com/2017/08/24/migrating-aws-cloudformation-templates-to-azure-resource-manager-templates/

I am still convinced that maybe Terraform is the right fit but I might be missing something fundamental here.

In the same way as I mentioned in the Azure section about the nature of templates, AWS CloudFormation also uses templates, defined in a JSON file.

That JSON file serves as a blueprint to define the configuration of all the AWS resources that make up your infrastructure and application stack, or you can select one of the sample pre-built templates that CloudFormation provides for commonly used architectures, such as a LAMP stack running on Amazon EC2 and Amazon RDS.

Upload your template to CloudFormation, select parameters such as the number of instances or instance types if necessary, and CloudFormation will provision and configure your AWS resource stack.

Update your CloudFormation stack at any time by uploading a modified template through the AWS Management Console or the command line.
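From the command line that create-and-update loop is only a couple of calls. The sketch below uses the AWS CLI from PowerShell; the stack name and template file are example values I have made up, not anything from this post.

# Create the stack from a local JSON template, check on it, then push a modified template later
aws cloudformation create-stack --stack-name vzilla-demo-stack --template-body file://template.json
aws cloudformation describe-stacks --stack-name vzilla-demo-stack
aws cloudformation update-stack --stack-name vzilla-demo-stack --template-body file://template.json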

You can check your template into version control so it’s possible to keep track of all changes made to your infrastructure and application stack.

CloudFormation brings the ability to version control your infrastructure architecture the same way you would with software code.

Provisioning infrastructure seems as simple as creating and uploading a template to CloudFormation.

My first thought here, now that I have touched on three of the most commonly used IAC tools in the industry today, is that whichever one you use, it makes it very simple and easy to replicate your infrastructure again and again, either for additional site rollouts or for test and development scenarios.

Think of the ability to quickly spin up a replica of your production environment for development and test with just a few clicks (in this case in the AWS Management Console), tear it down when finished, then rebuild and rinse and repeat that process whenever you want. Manually this was always going to be a pain point, and although IAC does exist today in the traditional on-premises world, it is reliant on having the physical hardware in place, unless you are dealing with the software or application stack only, in which case it could work if spare resources were available. In the Public Cloud, with its seemingly infinite resources, this is a great story to be told.

Google Cloud Deployment Manager

Google Cloud Platform is the one Public Cloud, out of the two already mentioned above, that I have not really had any dealings with at all, so when I came to look for resources on Google Cloud Deployment Manager there was very little out there. Great from a content creation point of view if you know your way around the platform; rubbish if you are learning.

Although, looking at the product page, it follows the same footprint as the above-mentioned tools but with a focus on the Google Cloud Platform:

  • Simplify your cloud management
  • Repeatable deployment process
  • Declarative language
  • Focus on the application
  • Template-driven

One thing at first glance that I really like about Google is that they seem to have the documentation down really well, and depending on how we get on this summer, I want to be in better shape to understand more about GCP before we see the end of 2019.
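At the command line it appears to follow the same pattern as the others. The sketch below is my assumption of the typical flow with the gcloud CLI, run from PowerShell; the deployment name and config file are example values only.

# Create a deployment from a declarative config file, then update it after a change
gcloud deployment-manager deployments create example-deployment --config vm.yaml
gcloud deployment-manager deployments update example-deployment --config vm.yaml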

Git

Git is an open-source, distributed version control system. The reason for the mention is that it may be required. Generally I get the impression that it is used on projects where you have multiple developers and need some version control, but I thought it was worth mentioning as there will be some use cases within IAC where it will be relevant and matter to infrastructure admins.

This is a great resource that will actually allow you to walk through some use cases with Git.
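To make that concrete, here is a minimal sketch of putting an IAC template folder under Git version control, run from PowerShell. The folder, file name and commit messages are example values, not anything from this post.

cd C:\IAC\azure-templates
git init
git add azuredeploy.json
git commit -m "Initial version of the ARM template"
# ...edit the template, then commit the change...
git add azuredeploy.json
git commit -m "Increase the VM count for peak load"
git log --oneline    # the incremental history of what changed and when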

Resources

I cannot take any credit for this collection of resources, either used above or listed below; these were all shared in the show notes of CloudSkills.fm. I will also keep adding resources here as I find good, useful content to share.

CloudSkills.fm – Infrastructure as code in the cloud:002

Build Azure Resource Manager templates

Azure Quickstart Templates

AWS CloudFormation Getting Started

AWS Quick Start Templates

Google Cloud Deployment Manager

Learn Terraform

Infrastructure as Code for the IT Administrator

I know this was a long post, but I think it was enough as a primer into each of the areas, and it did not seem long enough for each tool to have its own post. Also, you can probably tell that a lot of the content here is basically my notes. There is going to be a huge amount I am sure I have missed, but I wanted to get my views over on what I deem to be important as we move into this new world. Depending on time, there is an endless amount of content, training and follow-ups to come back to here, and I really find this an interesting part of the future as we move more and more into the Cloud Computing space.

#SummerProject – Infrastructure As Code – Why? https://vzilla.co.uk/vzilla-blog/summerproject-infrastructure-as-code-why https://vzilla.co.uk/vzilla-blog/summerproject-infrastructure-as-code-why#respond Wed, 07 Aug 2019 08:13:21 +0000 https://vzilla.co.uk/?p=1684 From my first post I was not sure what to expect when diving head first into this newish world of Infrastructure As Code, or what it would look like specifically in another world I was not too sure about: Cloud Computing.

I felt that, although I believe I grasped the reasons behind and the benefits of Infrastructure As Code in the first post, we need to take a look at how things were traditionally managed, and for the most part still are, in the on-premises datacentre, and also highlight some of the reasons why things are changing.

How was infrastructure traditionally managed

Infrastructure was traditionally managed, and still is today by many organisations, like this: for example, let's take a common estate, VMware running inside a private data center. The classic approach would be that if I am a consumer of infrastructure I file a request, and then someone at the other end of this request queue pulls it off, logs into either a management portal or an administrative console, and points and clicks to provision that piece of infrastructure.

There is no issue with this, especially if I did not have to manage a lot of infrastructure or if the churn of my infrastructure was relatively minimal, and this was and is true for many private data centers: a virtual machine would live for months to years, there was a relatively limited scale of deployment, and so it was possible to manually point and click and administer these systems.

Things are changing

There are a couple of changes affecting the way we think about the traditional sense of managing our infrastructure. The first is that we no longer have just that one private data center to administer; we have a sprawl of other consumable cloud-based environments, and with that they are API driven. The second change is around the elasticity of infrastructure, where instead of months to years it is now days to weeks in terms of how long a resource might live.

The scale of infrastructure is much higher because instead of a handful of large instances we might have many smaller instances, so there are many more things we need to provision, and this provisioning tends to occur in cycles and repeat regularly.

We might scale up to handle our load during peak days and times and scale down at night to save on cost, because it is not a fixed cost. Unlike owning hardware that we can depreciate, we are now paying by the hour, so it makes sense to only use the infrastructure you need, and you have to have that sort of elasticity.

As you start making these changes, the thought of filing a thousand requests every morning to spin up to our peak capacity, filing another thousand requests at night to spin back down, and then manually managing all of this clearly becomes challenging: how do we even begin to operationalise it in a way that is reliable, robust and not prone to human error?

There is a change in the dynamics of our infrastructure. The idea behind Infrastructure As Code is: how do we take the process we were pointing and clicking through to achieve our end goal and capture it in a codified way? Now, if I need to do that task one time, ten times or a thousand times, I can automate it, so every morning I can run a script that brings up a thousand machines and every evening run the same script to bring it back down to whatever the required footprint should be.

We can begin to automate that, but also, now that we have captured it in code form, we can start version control and see an incremental history of who changed what. This methodology also allows you to see how the infrastructure is actually defined at any given point in time, and we gain a transparency of documentation that we do not have in the traditional point-and-click environment.
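As a hypothetical illustration of that morning and evening pattern, the sketch below changes the instance count on an Azure virtual machine scale set using the Az PowerShell module. The resource group, scale set name and capacity figures are example values, not anything from this post; the evening run would simply be the same lines with a smaller capacity.

# Morning run: scale the example scale set up to peak capacity
$vmss = Get-AzVmss -ResourceGroupName "vzilla-prod" -VMScaleSetName "web-vmss"
$vmss.Sku.Capacity = 1000
Update-AzVmss -ResourceGroupName "vzilla-prod" -VMScaleSetName "web-vmss" -VirtualMachineScaleSet $vmss
# Evening run: set $vmss.Sku.Capacity to the overnight footprint (for example 50) and apply again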

The reusability of the code, and the ability to then drive automation tasks whilst keeping version control, is the real value of Infrastructure As Code.

Next up is a long post covering some examples of Infrastructure As Code; in particular I have chosen Terraform for a cloud-agnostic approach, and then each of the major public cloud hyperscalers' options for IAC.
