VMworld 2018 Wrap Up
https://vzilla.co.uk/vzilla-blog/vmworld-2018-wrap-up – Fri, 31 Aug 2018

It’s the end of another busy conference week. VMworld US is probably on par with our own conference, VeeamON, in terms of how busy we are. Last year was my first ever VMworld, and this one well and truly topped it.

Announcements

The opening keynote was really exciting to hear, and the acquisition of CloudHealth was a highlight. I have been following these guys for a while, and the tech they bring, or will bring, to the VMware portfolio is really exciting. With CloudHealth, you gain visibility into cost, usage and performance across your physical infrastructure and virtual machines.

The second exciting announcement was around Amazon RDS and the ability to set up, operate and scale databases on-premises and in hybrid environments just as you would in AWS. That’s a huge thing for customers that have already moved into AWS and migrated their databases in; to me, this closes the gap around making that database function more available. Prior to this, how would you ever have got that RDS instance out of AWS and functioning elsewhere if there was an outage or some other reason AWS was not available?

Veeam also made a couple of announcements during the week, the major one being “Veeam Intelligent Data Management Combines with Cisco HyperFlex to Deliver New High Availability Solution”.

Veeam also announced the release of some deployment automation for customers wanting the easy button for deploying Veeam when using VMware on AWS.

Veeam Session – Automation & Orchestration

For me personally there were also lots of great things at the event. @AnthonySpiteri and I had the responsibility of delivering the first Veeam session of the week, where we shared a QR code to get going with the Terraform scripts we created for the end-to-end Veeam deployment.

You can also pick up the session here.

vBrownBag / VMTN Session

Following the Veeam session I had the chance to go a little deeper into the CHEF element of how we are using Desired State and Configuration Management to deliver a dynamic Veeam deployment within any vSphere environment.

Cooking up some Veeam Deployment with CHEF automation

Community

It was another great week for engagement within the IT community; the events outside the main conference continue to be a major success. People only seem to get more helpful, and I had such a good time chatting with everyone at this conference and sharing views and ideas.

Another shout out I have to give around the community is to the Virtually Speaking podcast guys; they are constantly busy over there capturing content. We managed to get some time on the show to talk about the week.

Wrap Up

Now it’s time to head home, and before we know it we will be over in Barcelona for the European show. We will be enhancing the script and code that we used, potentially adding even more detail and functionality, but with the same premise: a portable set of code for all to use in any vSphere environment, giving you a dynamic and scalable way of delivering your Veeam infrastructure.

I have lots of ideas that have come out of this week, around content but also around other areas where we can take the Veeam platform so it continues to be the number one choice for the availability of your data. Expect to see some of this in whitepaper or blog format.

Safe Travels everyone.

Intelligent Data Management for a Hybrid World
https://vzilla.co.uk/vzilla-blog/intelligent-data-management-for-a-hybrid-world – Mon, 27 Aug 2018

Our session this year focuses on the automation and orchestration around Veeam and VMware. But what does that mean? The point of our session is to highlight the flexibility of the Veeam Hyper-Availability Platform. Some people just want the simple, easy-to-use, wizard-driven approach to installing their Veeam components within their environments, but some will want that little bit more, and this is where APIs come in and allow us to drive a more streamlined and automated approach to delivering Veeam components.

We also highlighted this by running through everything live; I will get to the nuts and bolts of that shortly.

With VMware on AWS being such a strong focus at this year’s event, we wanted to highlight the capabilities by using it. Veeam was one of the first vendors highlighted as a supported data protection platform able to protect workloads within VMware on AWS (that was a year ago), and we wanted to show off those features and capabilities within Veeam.

Veeam Availability Orchestrator – “Replication on Steroids”

The first thing we will touch on is Veeam Availability Orchestrator, released this year. It provides a “Replication on Steroids” option for your vSphere environment, whether that is on-premises or any other vSphere environment, say VMware on AWS, where you might still like to keep your DR location on-premises and send the replicas down there in case of any service disruption within the AWS cloud. The replication concentrates on the application rather than just sending a VM from site to site, and it also enables you to run automated testing against these replicas to simulate disaster recovery scenarios. Oh, and the other large part of this is the automated documentation. Ever had to create your own DR run book? I have; this does the majority of it for you whilst staying dynamic to any changes in configuration.

Veeam DataLabs

Then we wanted to highlight some more automation goodness around Veeam DataLabs. What this gives, alongside that backup and replication capability, is an automated way of testing that your backups, replicas or storage snapshots are in a good, recoverable state. It also lets you get more leverage from those sources, providing isolated environments for gaining insight or driving better business outcomes.

I plan to follow up on this, as it is one of my passions within our technology stack; the ability to leverage Veeam DataLabs from many of the products in the platform to drive different outcomes is really where I see us differentiating in the market.

The Bulk of the session

As you can see, we are already cramming quite a bit into the session. But this is the main focus point of the day for us: delivering a fully configured Veeam environment from the ground up, all live whilst we are on stage. Oh, and because we can, we are doing this on VMware on AWS.

The driving use case for this was the Veeam proof-of-concept process. I mean, it was already fast: deploy one Windows server and, seven clicks later, you have Veeam configured. Perfect. But the issue wasn’t the Veeam installation. What if we could take an automated approach, spend the meeting understanding the customer’s pain points and needs, and in the background automate the process of building the Veeam components and automatically start protecting a subset of data, all in the first hour of that meeting?

The beauty of this is that you do not need to be a DevOps engineer skilled in configuration management or a Ruby developer. The hard work has been done already and is available for free on GitHub and in the CHEF Supermarket.

In terms of tools to get things up and running, to be honest, once we make this available for you to pull down you will only really need PowerShell, PowerCLI and Terraform installed on your workstation.

The steps we went through live were deploying the Veeam Backup & Replication server along with multiple proxy servers to deal with the load appropriately. Because of the location of the Veeam components and our production environment, we chose to also leverage native AWS services and deployed an EC2 instance for our backup repository, though this could be any storage in any location as per our repository best practices. We also added a Veeam Cloud Connect service provider to show a different media type and location for your backup or replication requirements. Finally, we automated the provisioning of vSphere tags and then created backup jobs based on those.
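To give a flavour of that last tag-driven step, the sketch below shows one way it could look using PowerCLI and the Veeam snap-in: create a vSphere tag, assign it to a group of VMs, then build a backup job from everything carrying that tag. The server, tag, repository and job names are placeholders, and this is a simplified stand-in for the Terraform/Chef code we actually used on stage, not that code itself.

#Illustrative sketch only - server, tag, repository and job names below are placeholders
#Create a vSphere tag with PowerCLI and assign it to a group of VMs
Connect-VIServer -Server "vcenter.lab.local"
$category = New-TagCategory -Name "VeeamPolicy" -Cardinality Single -EntityType VirtualMachine
$tag = New-Tag -Name "Tier1-Backup" -Category $category
Get-VM -Name "APP-*" | New-TagAssignment -Tag $tag | Out-Null

#Then create a Veeam backup job from everything carrying that tag
Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server "veeam01.lab.local"
$repository = Get-VBRBackupRepository -Name "Default Backup Repository"
$tagObject = Find-VBRViEntity -Tags -Name "Tier1-Backup"
Add-VBRViBackupJob -Name "Tier1 Tag Backup" -Entity $tagObject -BackupRepository $repository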

By the end of the session we had the following built out: a Veeam Backup & Replication server and some additional proxies running in VMware on AWS; our Veeam Cloud Connect backup-as-a-service offering; our on-premises vSphere environment, where we could send further backup files or even use it as a target for those replication jobs; and the Amazon EC2 instance where we store our initial backup files for fast recovery.

As I know some of you will be catching this whilst at the show, I want to give a shameless plug for the next session, which goes into more detail around the Chef element of this dynamic deployment.

I also want to give a huge shoutout to @VirtPirate, aka Jeremy Goodrum of Exosphere, who helped make the Terraform and Chef piece happen. He also has an article over here diving into the latest version of the cookbook and some other related code releases he has made.

Veeam Cookbook 2.1.1 released and Sample vSphere Terraform Templates

Expect to see much more content about this in the form of a whitepaper and more blogs to consume.

Zapier to Slack
https://vzilla.co.uk/vzilla-blog/zapier-to-slack – Tue, 27 Feb 2018

Following on from the post I shared on Word to WordPress, I wanted to share another thing that makes my life much easier when it comes to work: having the ability to receive notifications in one place.

I seem to have become a big user of Slack. While the email side of my life has really calmed down, the instant-messaging, always-on nature of Slack has rocketed; at last count I had nine Slack teams, including the one I am using for this notification piece.

We all read blogs, and we all consume information from various sources. This can be via RSS into a feed reader that centrally captures all articles as they are published; I use Feedly for my daily catch-up on all the content out there. That works well: when I get the chance, I check in, work my way through the many feeds and read the interesting stuff.

But it was another application and another thing to remember to look at during the day. For work I also have a similar task, where I want and need to be more active within the Veeam forums. I live in Slack pretty much all day long; phone, laptop or desktop, there is Slack installed. For this example, I am using the Veeam forums to capture new posts, via something called Zapier, into a custom-made Slack team I have created with only me in it.

Let’s head over to https://zapier.com/ where we first need to create a new account. Once you have created the account you will be on the free plan tier. I would advise starting with the free plan to get a feel for how the platform works; if it works for you and you require more tasks or Zaps, then obviously you can pay for the pleasure.

The first step in creating this notification engine to Slack is selecting the elements or Apps that we need for the workflow. You will see in this list thousands of Apps and the different things you can do with them. For this to work I needed to find RSS.

Once you find your RSS App, you then define what you want to do with it; this is where you have another search and find Slack.

By pairing these together, you are creating something called a Zap. The following screen shows what the Zap will do.

I imagine that some of the other Apps have more options here, but all I want my RSS App to do is capture all new posts, comments etc. and then send them to my Slack team channel.

Next up is adding the RSS URL that you want to capture from. With the Veeam forums there are internal channels, so for this I added an additional auth=http parameter, which allows me to authenticate and pick up the new posts that live in the hidden internal forums.

Add in those forum credentials

Finally, this will give a summary of what it is going to do and run a test against that.

We then want to set some configuration for the Slack side: we need to define which Slack team this is going to and which channel. I’m pretty sure there are some channels in my Slack list that wouldn’t want this information, so make sure you select the correct one. Within Slack I created a new channel named veeam_forum, and this channel will only show forum posts.
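For anyone who prefers to see the moving parts, the sketch below does roughly what the Zap does, but in plain PowerShell: read the RSS feed and post each item’s title and link into Slack through an incoming webhook. The feed and webhook URLs are placeholders, you would need an incoming webhook enabled for your workspace, and this is only an illustration of the pattern, not how Zapier works under the hood.

#Illustrative only - $feedUrl and $webhookUrl below are placeholders
$feedUrl = "https://example.com/feed"                 #the RSS feed you want to watch
$webhookUrl = "https://hooks.slack.com/services/XXX"  #your Slack incoming webhook

#Invoke-RestMethod parses an RSS 2.0 feed and returns one object per <item>
$items = Invoke-RestMethod -Uri $feedUrl

foreach ($item in ($items | Select-Object -First 5)) {
    $payload = @{ text = "$($item.title) - $($item.link)" } | ConvertTo-Json
    Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $payload -ContentType "application/json"
}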

I think when you first sign up for Zapier you get a 30-day trial with unlimited tasks and Zaps, after which they want you to pay. In my case it may have ended even sooner, as I think I exceeded the free tier of tasks with the other Zaps and integrations I created for Reddit and Twitter.

It’s worth a try though if this is where you spend a lot of time during the day.

Veeam Replication – PowerShell
https://vzilla.co.uk/vzilla-blog/veeam-replication-powershell – Thu, 11 Jan 2018

There has been a large adoption of PowerShell by people looking to make their lives easier and reduce the time spent on lengthy, repeatable tasks. Veeam replication could be one of those tasks: in the last post I showed the simple steps to set up your replication jobs, but if you have many different groups or even sites, doing that a number of times is going to be a pain.

Veeam Replication Series

[Series navigation: 1 – 101 · 2 – workflow & components · 3 – transport modes · 4 – walkthrough · 5 – PowerShell · 6 – advft · 7 – WAN · 8 – failover · 9 – SureReplica · 10 – sandbox · 11 – storage · 12 – CDP]

What I want to show you in this post is how easy it is to use the Veeam PowerShell Snap-In and the full set of capabilities to run and manage those replication tasks.

Before I begin, there is a more in-depth resource covering all the different PowerShell commands, which can be found here – https://helpcenter.veeam.com/docs/backup/powershell/replication.html?ver=95

For the purpose of this post I am going to be using VMware as my platform; the command structure for Hyper-V is slightly different, and the above link can also help you determine the required steps there.

Add-VBRViReplicaJob

Resource – https://helpcenter.veeam.com/docs/backup/powershell/add-vbrvireplicajob.html?ver=95

There are many parameters available for this cmdlet, which are covered in more detail at the above link. I am going to be using the same parameters that we used during the walkthrough in the last post.

First, we need to add the snap-in and connect to our Veeam Backup & Replication server; this can be done with the following. All code is available in the complete script at the end.

#Add the Veeam PSSnapIn to get access to the Veeam Backup & Replication cmdlets library. Run the following command:
Add-PSSnapin VeeamPSSnapin
#Connects to Veeam backup server.
Connect-VBRServer -server "10.0.40.10"
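One small note: if you run this more than once in the same PowerShell session, Add-PSSnapin will complain that the snap-in is already loaded. A simple guard along these lines avoids that (a sketch, not part of the original script):

#Only add the Veeam snap-in if it is not already loaded in this session
if (-not (Get-PSSnapin -Name VeeamPSSnapin -ErrorAction SilentlyContinue)) {
    Add-PSSnapin VeeamPSSnapin
}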

This command will set the server you want the replica to be sent to and save it to a variable, $server.

#Set Destination ESXi Server 
$server = Get-VBRServer -Name "tpm03-131.aperaturelabs.biz"

Next, we need to determine which VMs or objects we would like to replicate; we can do this with the following command, which sets the VM to a variable.

#Define Source Virtual Machine(s)
$vm = Find-VBRViEntity -Server vc03.aperaturelabs.biz -Name "TPM04-DC-01"

Now we need to determine the resource pool that we would want the replica to be located in and set that as a variable.

#Define destination resource pool 
$pool = Find-VBRViResourcePool -Server $server -Name "TPM04-MC"

The final task of finding and setting variables is choosing which datastore we want our replica to be stored on.

#Define destination datastore
$datastore = Find-VBRViDatastore -Server "tpm03-131.aperaturelabs.biz" -Name "SolidFire005_iSCSI"

I will share the commands that I used to find out the below information at the end of the post. Now we have set our variables and we know:

  • What we are replicating
  • Where we are replicating to

Now that we have set the variables for our script, it is worth checking that they all return the right objects; this can be done with the following code.

#Now that you have defined your variables I would suggest running the following to ensure all have been populated.
$server
$vm
$pool
$datastore

Creating the Replication Job

We can now create the replication job using the above information.

#This command will create the backup job with no schedule defined.
Add-VBRViReplicaJob -Name "PS Replication Job" -Server $server -Entity $vm -ResourcePool $pool -Datastore $datastore -Suffix "_replicated"

Now that the job is created, you can also see it in the Veeam Backup & Replication console.

At this point, or later, we also have the option to add additional VMs to the replication job. The first command below adds the additional VMs to the variable; the second grabs the job into a variable and updates it with the new list.

#If you then wanted to add additional VMs to the job, you could do so by extending this line of code
$vm = Find-VBRViEntity -Server vc03.aperaturelabs.biz -Name "TPM04-DC-01", "TPM04-SQL-02" 

#Set a variable for the replication job we just created
$job = Get-VBRJob -name "PS Replication Job"
#This command will then update the job with the newly added Virtual Machines. 
Set-VBRViReplicaJob -Job $job -Server $server -Entity $vm

The console view now shows the newly added VM in the job.

If we had a large number of virtual machines to add to the job, perhaps with a similar naming convention such as the one I have used here, then we can also add them to the job using a wildcard.

#This command allows you to add objects to your replication job using a wildcard. Note that this will add all objects matching the name.
Find-VBRViEntity -Name TPM04-* | Add-VBRViJobObject -Job $job 

We can then use the following command to show the configuration of the job.

#This command will show the job configuration 
Get-VBRJob -name "PS Replication Job" 

Setting the schedule

For those that have noticed, we have the job created but no schedule, so next up is creating a schedule to suit the requirements. The command below will set the schedule on the job.

#This command schedules the job represented by the $job variable to run every 1 hour
Set-VBRJobSchedule -Job $job -Periodicaly -FullPeriod 1 -PeriodicallyKind Hours
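The help centre resource linked earlier covers the other scheduling options as well. As one example, if you wanted a daily run at 22:00 rather than an hourly cycle, something along these lines should do it; this is a sketch based on the documented daily parameters rather than anything from our walkthrough:

#Alternative sketch: run the job once a day at 22:00 instead of every hour
Set-VBRJobSchedule -Job $job -Daily -At "22:00" -DailyKind Everyday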

As well as setting the schedule, we need to enable it on the job; this can be done like this.

#This will enable the job schedule
Enable-VBRJobSchedule -Job $job 

Navigating back into the job in the console, you will see the schedule is now configured.

Starting the Job

We have come this far so let’s continue and start our job from the PowerShell script as well.

#The following command will allow us to start the job 
Start-VBRJob -Job $job

The running job is also visible within the console.

As the job progresses, we will see our replicated virtual machine appear on the destination ESXi host.

When the job completes, we get a summary back in our PowerShell window.

Obviously, we can get that same information and some more from the console.
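If you are driving everything from a script and do not want to open the console at all, the job object returned by Get-VBRJob also exposes the outcome of the most recent run; a minimal sketch, assuming the GetLastResult() method that community reporting scripts commonly use:

#Check the result of the most recent run of the job (Success / Warning / Failed)
$job = Get-VBRJob -name "PS Replication Job"
$job.GetLastResult()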

The complete script

#Add the Veeam PSSnapIn to get access to the Veeam Backup & Replication cmdlets library. Run the following command:
Add-PSSnapin VeeamPSSnapin
#Connects to Veeam backup server.
Connect-VBRServer -server "10.0.40.10"
#Returns hosts connected to Veeam Backup & Replication.
Get-VBRServer 
#This will show all objects 
Find-VBRViEntity
#This will show all resource pools available on the defined server; either use the variable or add your ESXi host name/IP
Find-VBRViResourcePool -Server $server
#This will show available datastores on host 
Find-VBRViDatastore -Server "tpm03-131.aperaturelabs.biz"
#This will show available VMs 
Find-VBRViEntity -Server vc03.aperaturelabs.biz -name "TPM04-*"

#Set Destination ESXi Server 
$server = Get-VBRServer -Name "tpm03-131.aperaturelabs.biz"
#Define Source Virtual Machine(s)
$vm = Find-VBRViEntity -Server vc03.aperaturelabs.biz -Name "TPM04-DC-01"
#Define destination resource pool 
$pool = Find-VBRViResourcePool -Server $server -Name "TPM04-MC"
#Define destination datastore
$datastore = Find-VBRViDatastore -Server "tpm03-131.aperaturelabs.biz" -Name "SolidFire005_iSCSI"
 

#Now that you have defined your variables I would suggest running the following to ensure all have been populated.
$server
$vm
$pool
$datastore

#This command will create the backup job with no schedule defined.
Add-VBRViReplicaJob -Name "PS Replication Job" -Server $server -Entity $vm -ResourcePool $pool -Datastore $datastore -Suffix "_replicated"

#This command will show the job configuration 
Get-VBRJob -name "PS Replication Job"

#If you then wanted to add additional VMs to the job, you could do so by extending this line of code 
$vm = Find-VBRViEntity -Server vc03.aperaturelabs.biz -Name "TPM04-DC-01", "TPM04-SQL-02"

#Setting a variable to the newly created replication job
$job = Get-VBRJob -name "PS Replication Job"

#Confirming that the variable has populated
$job

#This command will then update the job with the newly added Virtual Machines. 
Set-VBRViReplicaJob -Job $job -Server $server -Entity $vm
#This command allows you to add objects to your replication job using a wildcard. Note that this will add all objects matching the name. 
Find-VBRViEntity -Name TPM04-* | Add-VBRViJobObject -Job $job

#This command schedules the job represented by the $job variable to run every 1 hour
Set-VBRJobSchedule -Job $job -Periodicaly -FullPeriod 1 -PeriodicallyKind Hours

#This will enable the job schedule
Enable-VBRJobSchedule -Job $job

#The following command will allow us to start the job 
Start-VBRJob -Job $job
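
To make this repeatable across many groups or sites, you could also wrap the same cmdlets into a parameterised function. The sketch below is just one way of doing that, reusing the example values from this post as defaults; the function name is made up for illustration, and you would swap the defaults for your own environment.

#A minimal sketch of wrapping the above into a reusable function; the function name and
#parameter defaults are illustrative and simply reuse the example values from this post.
function New-VzillaReplicaJob {
    param(
        [string]$VbrServer    = "10.0.40.10",
        [string]$SourceVC     = "vc03.aperaturelabs.biz",
        [string]$TargetHost   = "tpm03-131.aperaturelabs.biz",
        [string]$ResourcePool = "TPM04-MC",
        [string]$Datastore    = "SolidFire005_iSCSI",
        [string[]]$VmNames    = @("TPM04-DC-01"),
        [string]$JobName      = "PS Replication Job"
    )
    #Load the snap-in only if needed and connect to the backup server
    if (-not (Get-PSSnapin -Name VeeamPSSnapin -ErrorAction SilentlyContinue)) {
        Add-PSSnapin VeeamPSSnapin
    }
    Connect-VBRServer -Server $VbrServer

    #Resolve the destination host, source VMs, resource pool and datastore
    $server    = Get-VBRServer -Name $TargetHost
    $vm        = Find-VBRViEntity -Server $SourceVC -Name $VmNames
    $pool      = Find-VBRViResourcePool -Server $server -Name $ResourcePool
    $datastore = Find-VBRViDatastore -Server $server -Name $Datastore

    #Create the replica job, schedule it hourly and enable the schedule
    Add-VBRViReplicaJob -Name $JobName -Server $server -Entity $vm -ResourcePool $pool -Datastore $datastore -Suffix "_replicated"
    $job = Get-VBRJob -name $JobName
    Set-VBRJobSchedule -Job $job -Periodicaly -FullPeriod 1 -PeriodicallyKind Hours
    Enable-VBRJobSchedule -Job $job

    Disconnect-VBRServer
}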

 


Reference – https://helpcenter.veeam.com/docs/backup/powershell/create_replica_vmware.html?ver=95

