#VDM30in30 – vZilla https://vzilla.co.uk One Step into Kubernetes and Cloud Native at a time, not forgetting the world before Mon, 02 Dec 2019 15:00:18 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://vzilla.co.uk/wp-content/uploads/2018/01/cropped-profile_picture_symbol-32x32.png #VDM30in30 – vZilla https://vzilla.co.uk 32 32 #VDM30in30 – Post 14 – Tech Unplugged https://vzilla.co.uk/vzilla-blog/vdm30in30-post-14-tech-unplugged https://vzilla.co.uk/vzilla-blog/vdm30in30-post-14-tech-unplugged#respond Tue, 28 Mar 2017 08:45:44 +0000 http://vzilla.apps-1and1.net/?p=44 Last week I had the pleasure of attending NetApp Insight in Berlin. Having been to this conference in this same city for the past 3 years, and also once for a Cisco Live, I knew the lay of the land and had my bearings.

This, however, was my third Insight in Berlin and my third different role, this time attending as a Technical Evangelist, which gave a different steer to the event. I was able to attend a lot of good technical sessions, some of which I plan to break down in another post this week. It was also my debut speaking session at the event, and I want to dive a little deeper into that one too.

I was also invited, as part of the NetApp A-Team, to speak at the Tech Unplugged booth, and I have reposted this below to go towards my #VDM30in30.

 

]]>
https://vzilla.co.uk/vzilla-blog/vdm30in30-post-14-tech-unplugged/feed 0
NetApp Insight – Berlin – 2016 https://vzilla.co.uk/vzilla-blog/netapp-insight-berlin-2016 https://vzilla.co.uk/vzilla-blog/netapp-insight-berlin-2016#respond Tue, 28 Mar 2017 08:45:44 +0000 http://vzilla.apps-1and1.net/?p=46 As I write this post I am actually on a flight to Berlin for the fourth time in 3 years, heading to the NetApp Insight event, which is taking place there for the third year. The last time I was here was back in a cold February for Cisco Live, at the same conference centre, so I am hoping for something a little different this time.

Thinking about it, this will be my third NetApp Insight Berlin, and once again I am in a different role… the first time I was at Avnet delivering pre- and post-sales, last year I was at Veeam as a Systems Engineer practicing my booth skills, and this year I will be heading out as a Technical Evangelist at Veeam.

NetApp A Team

If you read my blog frequently you will know I am really passionate about the technology portfolio over at NetApp. This probably stems from my pre-Veeam days, when I was working solely on NetApp technologies and the champion of the NetApp A-Team approached me to join this amazing squad of people from around the globe, drawn from very different verticals within the ecosystem. I have been a member for coming up on 4 years now, and the passion has not gone, though it has maybe changed slightly.

The A-Team participation has helped me develop my personal brand and confidence, and given me opportunities within the industry that have really allowed me to move on and up in my career.

I was unable to attend the Las Vegas conference, which would have been my third trip to that strange city, due to other commitments that couldn't be helped. But it means it's going to be full on over in Berlin: lots of fellow A-Team members will be there, and we will be raising the bar in spreading the news from the event, using all types of outlets to endorse the NetApp portfolio where possible.

I am really looking forward to meeting up with the crew when I arrive later this evening.

My Planned Sessions

In the new role I get the chance to attend sessions again and take in some of the technical content being delivered. It may also help with some ideas for the #VDM30in30, which I am desperately trying to keep up with, hanging on by a whisker.

I will also be making my debut as a speaker at NetApp Insight, co-presenting the Veeam sponsored session with my colleague Stefan Renner. We will be using the time to discuss the new enhancements, specifically around NetApp & Veeam, in the upcoming version 9.5 release, which is imminently going to be generally available. My part of the session will cover leveraging data, including what I feel is a really great feature that neither vendor nor partners shout about enough. Details of the session are below; I would love to have a chat with anyone and answer any questions you have.

At the last check we were up to 200 registered attendees, so I am really excited to be up there delivering a great message to our customers, partners and prospects.

If you are at the event and recognise me, by all means introduce yourself and have a conversation. On that note, I hope everyone who attends has a great week.

Look out for the other NetApp A-Team members too.

]]>
https://vzilla.co.uk/vzilla-blog/netapp-insight-berlin-2016/feed 0
#ProjectTomorrow – Living the Veeam Dream – Part 2 https://vzilla.co.uk/vzilla-blog/projecttomorrow-living-the-veeam-dream-part-2 https://vzilla.co.uk/vzilla-blog/projecttomorrow-living-the-veeam-dream-part-2#respond Tue, 28 Mar 2017 08:45:44 +0000 http://vzilla.apps-1and1.net/?p=48

Here is the second part of the Living the Veeam Dream post, and also the 10th post in the #ProjectTomorrow series. I hope everyone has found at least one post useful, or will do in time to come.

The whole point of the series is really to summarise the last 6-12 months of my home lab, and to give some insight into why I have one and what I use it for on a day-to-day basis. If you have any burning questions or recommendations, please reach out to me on Twitter @MichaelCade1; I am always more than happy to have a chat and discuss.

Back on task, I wanted this post to touch on another key area of my home lab, and also some of the things I have been testing in it over the last few months that enabled some well-received posts back in the August time frame.

Veeam EndPoint

Every single time I mention either Endpoint or the new Veeam Agent for Windows, I have to mention the free tier offering. If you are anything like me and you provide IT support, willingly or unwillingly, to your family and friends, then this is the best protection of your time you are going to find. It's FREE!

This tool is going to let you take a full bare metal backup of your Windows laptop or desktop and store it on a USB drive, a network share or even a Veeam backup repository; you can also run volume-level and file-level backups too. There is a short video that outlines the key capabilities of this product on the link above.

How do I use Endpoint? Way back in the series I mentioned that I have an MSI laptop serving as the host for my management cluster. It is a Windows 8.1 laptop running VMware Workstation. It also has Office, including Outlook with my corporate email coming in, although I don't use this box much: I have a Mac for work that is always docked, and a smaller Mac which I carry around with me, but that is probably another story for another time. The MSI is essentially my jump host and my Windows option when something doesn't easily work or open on the Mac.

On top of that, working for the company I work for, it would look very silly if I were to lose any data, and I am really overprotective of my data anyway: I don't want to lose anything, and nor should you. I use Endpoint to back up and protect this MSI laptop, sending the backup data into my lab Veeam backup repository, which is then copied on with a Veeam Backup Copy job to an external Cloud Connect target for added, longer-term protection.

This FREE version is not going away, but Veeam have announced v2 of this product, which will be named Veeam Agent for Windows and will come in Free, Workstation and Server editions. I explain more about the new agents that are coming here.

Testing

I have mentioned many a time that I use my lab for a lot of testing, and this year is by far the busiest it has ever been.

Veeam announced Veeam Backup & Replication v9.5 this year, around the June and July time frame; I can't remember exactly when we were given access as SEs. With that announcement, it was also revealed that the first new feature would be the storage integration with Nimble Storage. This was something I wanted, or needed, to test to understand the differences and advantages of this new feature for the UK market. Being able to spin this environment up in the home lab made it quick and easy, and not having to deal with the red tape and barriers of a shared environment was an advantage.

Then the other features were announced, and I already had that playground set up, so it was just a case of adding new machines or resources to test the new features and really get up to speed.

That came alongside other announcements, such as ReFS support, which is a game changer from a file system point of view for all backup operations with Veeam, as well as further scalability work pushing the product to reach even bigger environments in a more efficient manner.

My favourite feature from Veeam, though, has to be Direct Restore to Microsoft Azure. It existed as a standalone product pre-v9.5 but with a lot of limitations; it is now fully integrated, giving you the ability to send any Veeam backups into Microsoft Azure and run them as virtual machines. Great for testing and troubleshooting, or maybe even for migration.

On August 23rd 2016, Veeam held a worldwide SuperCast which highlighted not only the new features of 9.5 but also some new products, which I will touch on below along with how I used them in the home lab. They also announced the Availability Platform, more of a vision or strategy, which I summarised here.

Office 365 – by the nature of this product it's not going to sit locally in my home lab, but the whole function of the newly announced Veeam Backup for Microsoft Office 365 is to bring a copy of that mail data back to your data centre for protection and availability. I discuss the benefits of this new product from Veeam in more depth here.

Veeam Managed Backup Portal – or should I say Veeam Availability Console; the newly renamed product is really there to enable service providers and enterprise environments to manage and monitor large distributed environments. More detail here.

Veeam Agent for Linux – I gave the Veeam Agent for Windows a lot of love earlier in the post, but Veeam also have an Agent for Linux that should be released around the same time. The reason I have really been testing and looking at this is, more than anything, to get back into using Linux. It's something I was involved with a few years back in my PS days, but I let it slip when I didn't really need that skill set any more; now it's back.

Cloud Connect – the last feature and product I want to quickly touch on is Cloud Connect, a Veeam offering that enables service providers to offer backup and disaster recovery as a service to their Veeam customers. There are lots of benefits from a backup point of view: the ability to leverage this as an off-site copy for long-term retention, potentially a great tape replacement. On the replication and disaster recovery side, it really allows a customer to send replicated virtual machines into a service provider location, removing the requirement for a secondary site. I have Cloud Connect set up within my lab, allowing workloads to be replicated as well as providing storage for tenant backups.

I also send several backups out to other cloud providers, and sometimes to other colleagues or community members to help them test their environments.

This wraps up the planned ten posts of #ProjectTomorrow. I do hope they have helped at least someone in some way or another. As always, please leave me some feedback on Twitter @MichaelCade1

I am now off to pack for NetApp Insight in Berlin; a post about this will follow tomorrow.

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-living-the-veeam-dream-part-2/feed 0
#ProjectTomorrow – Living the Veeam Dream – Part 1 https://vzilla.co.uk/vzilla-blog/projecttomorrow-living-the-veeam-dream-part-1 https://vzilla.co.uk/vzilla-blog/projecttomorrow-living-the-veeam-dream-part-1#respond Tue, 28 Mar 2017 08:45:43 +0000 http://vzilla.apps-1and1.net/?p=50

We have finally made it to the penultimate post of my HomeLab series, #ProjectTomorrow.

I have split this one topic into two parts, because it really is the fundamental reason why I even have a home lab today.

I have a fully functional demo environment, so I will dive into that and where the Veeam components reside. I also have a test environment, which allows me to run another instance of the Veeam software; this is there to be spun up and spun down with no real persistence. Obviously the virtualisation piece is key to the flagship products Veeam Backup & Replication and Veeam ONE, but I also run virtual storage arrays to test and demo the snapshot and storage integration against.

Up first is the demo side of things. As I said, this exists for the demo work I need to take part in on a day-to-day basis, and for this reason it needs to be handled with care: I need it running with no problems whenever I need to demo product features.

High level Veeam diagram

The diagram below shows the Veeam elements, their placement within the lab and which physical ESXi host they reside on. Earlier in the series I discussed placement and the specific resources available on these hosts, as well as the different cluster constructs that are also labelled here.

Veeam Components

The Veeam backup server, or as I like to explain it, the "brain" of the Veeam solution: this is where all the scheduling, indexing and job management takes place. Think of it as your central management console, controlling all other Veeam components.

The Veeam proxy can be either virtual or physical; in my home lab they are all virtual machines for ease, and really I am not protecting anything substantial enough to warrant dedicated hardware for performance. If the backup server is the "brain", then the proxy is the "muscle": the proxy moves the data from the live production system to the backup repository, over the most optimal route.

The third mandatory component is the storage for our backup files, named the backup repository. Let's think of this as the "stomach": its primary role is to store all image backups and copies of data, and it also keeps the metadata files for any replicated virtual machines. Technically a repository can be any storage (performance will vary depending on the disk or solution you have chosen); to summarise, it could be a Windows share, Linux via NFS, a block/SAN device, or even a deduplication appliance.

Another component I have within the environment is Veeam ONE, which gives me the ability to demo monitoring and reporting against the virtualised environment. Great for a demo, but this system is also great for seeing what is actually being used and any bottlenecks within the demo environment.

Storage Virtual Array Placement

I rely heavily on being able to show the benefits of our integration with our storage alliance partners, so having these virtual instances is a powerful way to demo that functionality. With both Nimble & NetApp we are able to offer a deeper integration, meaning we can orchestrate a lot of the snapshot and replication tasks via the storage, which is why you will see that I have two nodes of each here.

NetApp
I have at least two NetApp simulators available in my lab at any one time to demonstrate this functionality.

Nimble

Nimble also have a virtual appliance, which allows me to demo the same functionality using their technology.

I also have virtual appliances for HPE StoreVirtual and EMC VNX (vVNX), which allow me to demonstrate Backup from Storage Snapshots.

Backup target
I mentioned above that the Veeam backup repository can be literally any type of storage, and this is true: it might be the cheapest storage solution leveraging local direct attached storage, or it might be a highly efficient global deduplication device.

The following all reside on the physical disk I have been mentioning throughout the series. This is ultimately fine from a performance point of view; the appliances are generally not powered on at the same time, although we can make that happen if we need to push and demo these functions.

Local Storage – I have several backup repositories that use local storage, namely VMDK disks added to the Veeam backup server. They may span several hosts, but this allows me to back up any management machines on a regular basis, as well as other VMs that sit on that physical layer.

NetApp AltaVault – I touched on this appliance as a great cloud-integrated storage offering from NetApp in a previous post. There is in fact no direct integration with AltaVault, but there is a technical report from both NetApp & Veeam on how the two can be used together.

AWS Storage Gateway – another appliance that doesn't actually have any integration, but there is a technical paper supporting it as a solution for sending your backup files into the AWS public cloud.

There are other virtual appliances, in fact an endless number of them, that I could roll out in the environment, but these few options give me enough to demo to our prospects and partners.

Tomorrow I will touch on everything else Veeam-related that I have within the HomeLab environment. As always, please leave me some feedback; I want to make sure this was useful.

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-living-the-veeam-dream-part-1/feed 0
#ProjectTomorrow – Tag the World – vSphere tags a plenty https://vzilla.co.uk/vzilla-blog/projecttomorrow-tag-the-world-vsphere-tags-a-plenty https://vzilla.co.uk/vzilla-blog/projecttomorrow-tag-the-world-vsphere-tags-a-plenty#respond Tue, 28 Mar 2017 08:45:43 +0000 http://vzilla.apps-1and1.net/?p=53

Welcome back. Today's post is really about one of my favourite features from VMware, and I also want to touch a little on how Veeam can use tags to protect your workloads. The powerful world of tags: being able to assign this little tag to a virtual machine, a datastore, or pretty much any object within the vSphere inventory.

You may decide to tag virtual machines based on OS, or possibly in order of importance, the old adage of Gold, Silver, Bronze (and paper). Coupled with other technologies that can consume this tag profiling, it can really help automate a lot more within an environment.

It's also so simple to configure and start creating tags, today (vSphere 6.0) only through the Web Client, as I don't believe you can create tags in the fat C# client. Below you will see where this wizard-driven creation can be found. You have the concept of the tag category, which can group lots of tags together or house just one tag. You must create at least one category before creating any tags.


Once you have at least one tag category, you can go and start creating your tags. Let's take backup as a great example and use case for vSphere tags. We might create a category called "Veeam" and then create tags named Gold, Silver and Bronze; we can then assign these tags to our objects, in this case a group of virtual machines for each tag. Let's say all the Gold virtual machines are business-critical systems requiring a backup on an hourly basis, our Silver machines only really require a backup on a daily basis, and finally those Bronze machines... do we really need to back those up? Yes, but maybe only once a week, so they get the Bronze tag.

Another use case might be to differentiate between business units or departments, allowing the IT department to provide chargeback or showback on each department's usage. There are endless use cases for tags. A major benefit, though, comes when VM sprawl is occurring, or when many different vSphere administrators or users are provisioning lots of machines while someone else looks after the data protection piece in a traditional way: those newly created virtual machines can slip through the net when it comes to being included in a backup job.
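
One quick way to catch those machines is to ask PowerCLI for every VM that has no tag from the backup category. This is a minimal sketch; the vCenter address and the category name "Veeam" are from my lab, so adjust to suit:

```powershell
# Connect to vCenter (my lab address; this will prompt for credentials).
Connect-VIServer -Server 192.168.2.11

# List every VM with no tag assigned from the "Veeam" category,
# i.e. machines that would slip through a tag-based backup job.
Get-VM | Where-Object {
    -not (Get-TagAssignment -Entity $_ -Category "Veeam")
} | Select-Object Name
```

Run on a schedule, this gives a simple report of newly provisioned machines that nobody has put into a backup policy yet.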


As a massive fan of tags, I am using them in the trusty home lab. Very simply, I am using them from a Veeam point of view to demonstrate the benefits of having them within a system. Veeam can take advantage of these tags in several ways, and I want to touch on how.

Veeam Backup & Replication

As mentioned above, profiling virtual machines or vSphere objects with tags is a powerful way of making sure that all virtual machines get captured by a backup job. Veeam started supporting tags way back in v8; my colleague has a post that covers the features, benefits and configuration – https://www.veeam.com/blog/8-gems-in-veeam-availability-suite-v8-part-4-support-for-vsphere-tags.html

Running through a backup or replication job, you hit the Virtual Machines tab, and this is where within Veeam you can select objects via Hosts & Clusters, VMs & Templates, Datastores and finally Tags. Below is a screenshot of my environment with my category named "Veeam"; you will see the associated tags and a brief description of what each needs to achieve. Within vSphere these are already assigned to my virtual machines, so in the job we just need to select the appropriate tag and match the job configuration to the requirements.
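
The same tag selection can also be scripted through Veeam's PowerShell snap-in rather than the job wizard. A sketch, assuming a backup job named "Gold VMs" already exists in my lab and the snap-in is installed with the console:

```powershell
# Load the Veeam snap-in (v8/v9-era console install).
Add-PSSnapin VeeamPSSnapin

# Find the vSphere tag object and add everything under it to the job.
# The job name "Gold VMs" is an assumption from my lab setup.
$tag = Find-VBRViEntity -Tags -Name "Gold Backup - Hourly"
Add-VBRViJobObject -Job (Get-VBRJob -Name "Gold VMs") -Entities $tag
```

Because the job now targets the tag rather than individual VMs, any machine tagged later is picked up automatically on the next run.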


Veeam ONE – Business View

Beyond the awesome backup and replication of virtual machines associated with vSphere tags, Veeam can also monitor and report against them. Again, Luca's post linked above walks through this.

The Veeam ONE Business View categories can also be pushed into the vSphere environment as tags and categories.


These will also self-associate with the virtual machines; for example, the "VMs with no backups" grouping starts associated with all virtual machines in the vSphere inventory, and in Business View you can then use the workspace to show all virtual machines within the estate that are not currently being protected.


PowerShell Script

I also created the following script to speed up the process of creating these specific tags. I have actually used this script for a number of years, from when I was implementing solutions as a consultant.


#Connect to the VC, which is hosted on the MSI (MGMT) host.

Connect-VIServer -server 192.168.2.11 -Protocol https -User Administrator@vzilla.co.uk

New-TagCategory -Name "Expiry Date" -Cardinality "Single" -EntityType "VirtualMachine" -Description "Expiry Date for VM"
New-TagCategory -Name "Veeam" -Description "vSphere Tags for Backup and Replication tasks"
New-Tag -Name "Platinum Backup - 15Mins" -Category "Veeam"
New-Tag -Name "Gold Backup - Hourly" -Category "Veeam"
New-Tag -Name "Silver Backup - Daily" -Category "Veeam"
New-Tag -Name "Bronze Backup - Weekly" -Category "Veeam"
New-Tag -Name "Storage Snapshots Only" -Category "Veeam"
New-Tag -Name "Platinum Replication - 15Mins" -Category "Veeam"
New-Tag -Name "Gold Replication - Hourly" -Category "Veeam"
New-Tag -Name "Silver Replication - Daily" -Category "Veeam"
New-Tag -Name "Bronze Replication - Weekly" -Category "Veeam"
New-Tag -Name "Storage Replication Only" -Category "Veeam"
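
Creating the tags is only half the job; they still need assigning to virtual machines. A minimal sketch using PowerCLI's New-TagAssignment, with two example VM names from my lab (adjust to your own inventory):

```powershell
# Assign backup tags to example lab VMs.
New-TagAssignment -Tag (Get-Tag -Name "Gold Backup - Hourly") -Entity (Get-VM -Name "SQL01")
New-TagAssignment -Tag (Get-Tag -Name "Bronze Backup - Weekly") -Entity (Get-VM -Name "SP01")
```

From that point on, any Veeam job built against the tag will include these machines without touching the job itself.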
That wraps up another post from the #ProjectTomorrow series; this was number 7. The next post will cover the specific Veeam lab and build components I am running in the HomeLab, and it may span two posts given the size if I were to squeeze it into one.

This also caps the 10th post for the #VDM30in30; I didn't think I would be able to contribute this much given the change in roles. Hopefully the content is good and worth reading.

As always please leave me some feedback at @MichaelCade1

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-tag-the-world-vsphere-tags-a-plenty/feed 0
#ProjectTomorrow – Automation across the Nation https://vzilla.co.uk/vzilla-blog/projecttomorrow-automation-across-the-nation https://vzilla.co.uk/vzilla-blog/projecttomorrow-automation-across-the-nation#respond Tue, 28 Mar 2017 08:45:43 +0000 http://vzilla.apps-1and1.net/?p=55 Day 9 – Post 9. I wrote this prior to knowing the outcome of the US presidential vote… I stand by this: if Donald Trump is President at the time of publishing, then we are going to be in a world of pain as a world, not just the US!

EDIT – so the above happened, and maybe there is a complete post of its own here to reflect on the news I woke up to this morning, but I am not as shocked as I thought I would be. Having spoken throughout my night to a lot of my American friends, they seem fine: not overjoyed with the prospect, but not running for the Canadian border either. That might change after a night's sleep.

Let's get back on track: automation within my home lab. Over the last few years, in general and at work, I have found myself trying to make my life easier. Back when I was installing and provisioning NetApp and VMware environments, I would look at ways to automate the provisioning of volumes and LUNs and then presenting them to vSphere, with simple scripts just created in Notepad++. If you have not used this free tool, then you should; it adds so much ease to boringly long tasks.

So I wanted to include some level of automation in the home lab. Or maybe not a want but a must, as the reason for my use of automation is ultimately to save my household money, as well as bringing some organised chaos to standing up certain lab requirements.
PowerShell

I use PowerShell to achieve some basic automation tasks, and I'll share my setup and scripts in this post, but please note that this is (as always) a work in progress and by no means a perfect solution. PowerShell is a task automation and configuration management framework from Microsoft, consisting of a command-line shell and an associated scripting language built on the .NET Framework. The reason for choosing PowerShell is simple: from a vSphere point of view we can leverage it through PowerCLI, and Veeam also exposes aspects of its products through PowerShell.

I have never really got involved from a "coding" point of view, if you can even call it that; I know the scripts below are not a coding exercise as such, and I am not going to start getting all DevOps on you.

I wanted to share some of the scripts I have used and prepared within my home lab. They run through the PowerShell Integrated Scripting Environment (ISE), which is pretty much always open on my admin machine and allows me to tweak the configurable values to suit my requirements.

Shutdown VMs

#This script is designed to shut down the virtual machines, nested ESXi hosts and physical machines in the lab.

#All virtual machine IP addresses should be included here.

Restart-Computer -ComputerName localhost -Force -Credential michael.cade@outlook.com
Stop-Computer -ComputerName 192.168.2.16,192.168.2.17,192.168.2.18 -Force -Credential Administrator@vzilla.co.uk

#Connect to VC which is hosted on MSI (MGMT) and then Nested ESXi Hosts are shut down.

Connect-VIServer -server 192.168.2.11 -Protocol https -User Administrator@vzilla.co.uk
Stop-VMhost 192.168.2.125,192.168.2.126,192.168.2.127,192.168.2.128 -Confirm -Force

#The command below will shut down all physical ESXi hosts
Stop-VMhost 192.168.2.121,192.168.2.122,192.168.2.123 -Confirm -Force

Restart-Computer localhost

#These commands shut down the domain controller and the AltaVault appliance, then the
#management nested ESXi host, and finally power off the MSI (MGMT) machine itself.
Get-VM
Stop-VM -VM dc01 -Confirm
Stop-VM NetApp_AltaVault01 -Confirm
Stop-VMHost 192.168.2.124 -Confirm -Force

Stop-Computer -ComputerName 192.168.2.10 -Force -Credential Administrator@vZilla.co.uk

Start up

#This script allows specific virtual machines to be started. It contains all the start-up commands; not all of them should be run at the same time.

#Connect to the VC, which is hosted on the MSI (MGMT) host.
Connect-VIServer -server 192.168.2.11 -Protocol https -User Administrator@vzilla.co.uk
Get-VM

#Start NetApp AltaVault Appliance
Start-VM NetApp_AltaVault01 -confirm

#Start-Sleep can be used to pause between commands
Start-Sleep -Seconds 60
Get-VM

#Start Veeam servers
Start-VM Veeam_BR01,Veeam_ONE -confirm

#Start Nested ESXi hosts – takes 5 minutes for this to complete.
Start-VM ESX01,ESX02,ESX03,ESX04 -Confirm

#Start Exchange VM
Start-VM Exch01 -confirm

#Start SQL VM
Start-VM SQL01 -confirm

#Start Oracle VM
Start-VM Ora01 -confirm

#Start Sharepoint VM
Start-VM SP01 -confirm
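
Rather than a fixed Start-Sleep, PowerCLI can wait for VMware Tools to report in before moving to the next step, which is a bit more robust when boot times vary. A sketch using the Veeam servers from above:

```powershell
# Wait until VMware Tools is running in each guest, up to 5 minutes per VM.
Get-VM Veeam_BR01,Veeam_ONE | Wait-Tools -TimeoutSeconds 300
```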

Firewall & Access

#Allow remote PowerShell (WinRM) access to lab machines by adding them to the TrustedHosts list and disabling the Windows firewall.
Set-Item wsman:\localhost\Client\TrustedHosts -value *
Set-Item wsman:\localhost\Client\TrustedHosts 192.168.2.10 -Concatenate -Force
Set-Item wsman:\localhost\Client\TrustedHosts DC01 -Concatenate -Force
Set-Item wsman:\localhost\Client\TrustedHosts DC01.vzilla.co.uk -Concatenate -Force
Set-NetFirewallProfile -Profile * -Enabled False
Invoke-Command -ComputerName 192.168.2.18 -Credential Administrator@vzilla.co.uk -FilePath C:\Users\Michael\Desktop\NoFirewall.ps1
#Enable Remote Desktop

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name "fDenyTSConnections" -Value 0

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name "UserAuthentication" -Value 1
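
After opening things up, it is worth confirming the WinRM path actually works before relying on Invoke-Command. Test-WSMan gives a quick check; the IP is one of my lab machines:

```powershell
# Returns WSMan identity information if the remote listener is reachable;
# throws an error if WinRM is not set up on the target.
Test-WSMan -ComputerName 192.168.2.18
```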

Thanks for reading. The next post in the series will cover more automation, this time touching on vSphere tags and how I have created a simple profiling approach within the lab.

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-automation-across-the-nation/feed 0
#ProjectTomorrow – Deployment phase – Rack, Stack and Attack https://vzilla.co.uk/vzilla-blog/projecttomorrow-deployment-phase-rack-stack-and-attack https://vzilla.co.uk/vzilla-blog/projecttomorrow-deployment-phase-rack-stack-and-attack#respond Tue, 28 Mar 2017 08:45:42 +0000 http://vzilla.apps-1and1.net/?p=60 Welcome to post 5. In this post I want to dive a little deeper into the host configuration, physical build and networking configuration. The previous post outlined the simple approach, where I place each of the physical nodes, and even touched slightly on the nested hosts for demo purposes.

This post also signifies the 7th post of the #VDM30in30 challenge that I am attempting. I am now at the stage where I have the ideas and titles in my head, but it's now about putting them down on paper and getting constructive, useful posts out on the site for all to consume.

In capturing this information I have used the trustworthy Veeam ONE to capture and report on the environment, and I have used screen captures from those reports. Where something is not yet implemented, for example my fourth physical host, it will follow suit with the other hosts' configuration.

Host Configuration

Each host is configured in very much the same way. This is not because of DRS or HA, as these are not really a use case given the differences in hardware and, for the most part, the lack of shared storage.

General Information

Management

Physical
Site 1 & Site 2 (Nested)
Available Resources

Management

Physical
Site 1 & Site 2 (Nested)
Network Configuration

Management

Physical
*Virtual Lab 1 is a virtual switch with no physical connectivity and is used for Veeam SureBackup/SureReplica and sandbox test operations.

Site 1 & Site 2 (Nested)

*Storage Snapshots Lab VM Network is a virtual switch with no physical connectivity and is used for Veeam OnDemand SandBox from Storage Snapshots

All Hosts Configuration

*Note that this includes some nested ESXi hosts, so it is not a clear indication of the physical hardware being used.

Physical Build
I wanted to start by sharing the rack situated in my home office. I managed to pick this up really cheap around 2-3 years back, and it gives me 22U for all my home lab equipment. When I purchased it, it was soundproofed, but over time the soundproofing has not withstood the movement, so I won't be including it in the write-up.

More information can be found here – https://www.amazon.co.uk/19-Inch-Server-Rack-Cabinet/dp/B005SSSNR6

I have also purchased a rack-mounted power strip, which is fitted towards the bottom of the back of the rack.

That concludes this post. Tomorrow's post will cover a little more detail on how I am using the storage in the lab; the majority of my storage is spinning rust, but there are elements of shared SAN, NAS and SSD that I use sparingly for different tasks within the lab.

Any feedback or advice please reach out to me @MichaelCade1 / @vZillaUK

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-deployment-phase-rack-stack-and-attack/feed 0
#ProjectTomorrow – Storage, Network and Beyond https://vzilla.co.uk/vzilla-blog/projecttomorrow-storage-network-and-beyond https://vzilla.co.uk/vzilla-blog/projecttomorrow-storage-network-and-beyond#respond Tue, 28 Mar 2017 08:45:42 +0000 http://vzilla.apps-1and1.net/?p=62
Following on from post 3 of the series, I now want to outline the design I went for and some of the reasoning behind it. The equipment I have includes a lot of direct attached storage, but without a good enough RAID controller, if a RAID controller at all. This led me to using individual disks as datastores.

This method is clearly not great for resiliency: if a disk dies I lose its contents, or at the very least have to revert to a backup. This is not really an issue, as I run regular backups on the machines I need protected; everything else is non-persistent or protected by some level of RAID. Single-disk datastores also work well with the Virtual Storage Arrays, although I think I have a better plan for those and will mull it over before sharing.

I break my lab into 3 areas: Physical, Virtual and Home. Physical covers the actual physical ESXi hosts and any management virtual machines in the lab; Virtual covers the nested ESXi and Hyper-V hosts as well as all other virtual machines; Home consists of everything connected via Wi-Fi or a physical Ethernet connection.

The hub of this network is the physical BT router (I recently ordered a new Linksys), which provides the Wi-Fi and physical connectivity for the home network. For internet connectivity to the lab, I have a virtual router running Untangle as a virtual machine on the HP ML110, with a WAN and a LAN link to the Home and Lab networks respectively.

Did I consider a direct uplink from the Dell switch to the BT router? I did, but I wanted to keep at least some control over the traffic between the two networks.

Networking Subnets

My networking skills are limited, and I am sure I am missing some of the features and functionality of the Dell managed switch that I have; however, I believe the way I have configured this is the best and easiest way to segregate traffic between the Home and Lab networks.

On my HP ML110 G7 I have a virtual appliance running Untangle, which acts as the network gateway for my home lab. It has one physical connection to the Dell managed switch for lab networking and one physical connection to my broadband router for internet access. Simple stuff, and the appliance also has added features around firewall, DNS and DHCP should I need them, as well as monitoring and reporting on traffic.

From a subnetting point of view I kept things very simple, using just the two subnets in a very common configuration.
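As a minimal sketch of that two-subnet split (the post doesn't state the actual ranges, so the addresses below are assumed, common defaults rather than my real configuration):

```python
import ipaddress

# Hypothetical ranges: the actual Home/Lab subnets aren't stated in
# the post, so these are illustrative defaults only.
HOME_LAB_SUBNETS = {
    "Home": ipaddress.ip_network("192.168.1.0/24"),
    "Lab": ipaddress.ip_network("192.168.2.0/24"),
}

def subnets_are_segregated(subnets):
    """True if no two named subnets overlap (i.e. traffic is cleanly split)."""
    nets = list(subnets.values())
    return not any(
        a.overlaps(b)
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
    )

if __name__ == "__main__":
    for name, net in HOME_LAB_SUBNETS.items():
        print(f"{name}: {net} ({net.num_addresses - 2} usable hosts)")
    print("Segregated:", subnets_are_segregated(HOME_LAB_SUBNETS))
```

With the Untangle appliance routing between the two, an overlap check like this is a quick sanity test before handing a range to DHCP.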

High Level

The following sections are to give a very high level overview of how all of my systems are put together currently.

Management

As mentioned in the previous post, my MSI laptop acts as my Windows desktop but also has VMware Workstation installed with a nested ESXi host, which runs my main Domain Controller and the VMware vCenter. The MSI is directly connected to the Dell managed switch, and connected over Wi-Fi to my home network for access.

Physical

Within my configuration I have my vCenter and datacenter, and underneath that the Management cluster mentioned above, which contains only the one host. Next is a physical cluster that currently contains 3 hosts, with one more waiting to be added shortly. Finally we have the nested Site 1 and Site 2 hosts, which reside on the physical cluster; I have not detailed these in this post.

You will see from the below that only host .121 has access to both the lab network and the home network, because this is where the Untangle appliance resides.

Virtual Machines

The key point here is that the physical layer of this lab hosts the Storage Virtual Arrays, the nested hosts (VMware ESXi and Hyper-V) and some of the Veeam component servers. All other virtual machines, such as Exchange, SQL, Oracle and SharePoint, reside on those nested hosts.

For testing, though, I generally use the physical hosts directly to spin up testing and training resources, preserving the nested environments.

Next up I am going to look into the physical requirements of my home lab. Any feedback or advice please reach out to me @MichaelCade1 / @vZillaUK

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-storage-network-and-beyond/feed 0
#ProjectTomorrow – Let’s Get Physical – Outlining Physical Resources https://vzilla.co.uk/vzilla-blog/projecttomorrow-lets-get-physical-outlining-physical-resources https://vzilla.co.uk/vzilla-blog/projecttomorrow-lets-get-physical-outlining-physical-resources#respond Tue, 28 Mar 2017 08:45:42 +0000 http://vzilla.apps-1and1.net/?p=64 Welcome back! We are now on the 3rd post of my #ProjectTomorrow series and the 5th post of November, meaning I am still here and on par with what I achieved last year in #VDM30in30, which was a pretty shoddy performance. Happy Guy Fawkes / Fireworks Night! Funny story: Jack Cade was actually friends with Guy Fawkes back in the day; nothing to do with wanting to burn down Parliament, but he went against the king on numerous occasions. A legend in himself, and one of the reasons I wanted to call my first son Jack. Maybe an idea for another post, on a completely different topic.

Anyway, back on topic. The previous two posts touched on why we put ourselves through the whole home lab headache and the use cases we see; I listed some that I believe are out there, and I am sure you have more to add. The second post then covered the use cases for my own home lab and why I do things the way I do, and reading it back, some of the content could have been taken from an executive summary for a customer high-level design.

I want to touch on the hardware resources I have in my lab today, but also some of the ideas in my head about expanding the home lab footprint.

Hosts

MSI Laptop – As it says, this is my work laptop. It is listed as part of my home lab purely because it is on all day, every day, and houses my management servers: a Domain Controller and the VMware vCenter on a nested ESXi host running on VMware Workstation. The rig also runs Windows and my work applications, even though I have two Macs to choose from, one docked at home and one that follows me around on the road. It's really there just in case, and is also a good play rig for the Veeam Endpoint tools.
HP ML110 – Next up is my first ever server, which I have not been able to say goodbye to for a number of years and reasons. Some perks of this little box: it is quiet, and back in the day it had enough RAM to achieve what I needed from a testing and certification point of view.
SuperMicro 32GB – The best server in the playground: the most RAM and the best CPU, as well as plenty of local disk. This houses the majority of the home lab, and in particular my current storage VSAs (although this will change later on). Being a full-on 2U rackmount server it's not too bad on noise, and with my rack being in the same office as my WebEx sessions and calls, I don't believe this server affects the audio quality.
SuperMicro 16GB – Finally we have the second SuperMicro box: only 16GB here, and considerably noisier than its big brother above. This one is earmarked for nested Hyper-V, System Center and some other niche beta testing. Being noisier, it cannot be on during the days I am working from home, but it does add some extra power to the combined lab.
All of these hosts run a flavour of VMware vSphere ESXi. In the upcoming posts I will dive deeper into how they look and why they have been configured this way.

Below you will find a brief picture of the storage and networking I also physically have in the lab, as well as the Moar section, where we keep one eye on the eBay saved searches to see if something pops up.

Storage

Lots of Direct Attached

The MSI has an SSD running the Windows OS as well as the lab virtual machines; there is also a capacity drive not used for anything related to the home lab.

SuperMicro 32GB = 10TB of spinning rust.
SuperMicro 16GB = 6TB of spinning rust.
HPML110 G7 = 1TB of spinning rust.

NETGEAR ReadyNAS 716 – 5TB of hybrid disk: a SAS capacity tier plus 2 x SSD
NETGEAR ReadyNAS 312 – 1TB of spinning rust, really used for ISO media and home shares
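Pulling the raw figures above together as a quick tally (the MSI's SSD size isn't stated in the post, so it is left out):

```python
# Raw capacity figures from the storage list above, in TB.
# The MSI's SSD is omitted because its size isn't stated.
direct_attached = {
    "SuperMicro 32GB": 10,
    "SuperMicro 16GB": 6,
    "HP ML110 G7": 1,
}

nas = {
    "NETGEAR ReadyNAS 716": 5,
    "NETGEAR ReadyNAS 312": 1,
}

total_das = sum(direct_attached.values())
total_nas = sum(nas.values())

print(f"Direct attached: {total_das} TB")   # 17 TB
print(f"NAS:             {total_nas} TB")   # 6 TB
print(f"Total raw:       {total_das + total_nas} TB")  # 23 TB
```

So roughly 23TB of raw capacity in total, the vast majority of it spinning rust.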

Networking

Dell 1GB Managed Switch
BT Broadband Router
Linksys Wireless Router
Linksys wireless extender (Living Room)

Software

VMware – courtesy of vExpert
Microsoft – MSDN subscription

MOAR

Disclaimer – I am not a massive fan of cats, but I am a massive fan of a good meme. Thank you.

We always need moar, right? (Apart from sickness and illness; we don't need any more of that!) Anyway, as I was building out the above I started to see some flaws in this design of mine. The Virtual Storage Arrays were taking up more and more resources, and in the current configuration I could run maybe 1 or 2 of them, but it was going to put the hosts quite close to the edge.

In true home lab style I was on eBay looking for the next bit of hardware. I wasn't sure what would cover this requirement, but I knew it would be on eBay, that's for sure. Apart from the ML110 mentioned above and a few little bits, everything else has come from the eBay powerhouse of home labs.

No different on this occasion: I found a SuperMicro 4-node server in a 2U form factor.

Next up I want to walk through the storage layout and networking in more detail. As always, thanks for reading, and please send any feedback to @MichaelCade1 / @vZillaUK

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-lets-get-physical-outlining-physical-resources/feed 0
#ProjectTomorrow – My own use case. Test, Demo & Train https://vzilla.co.uk/vzilla-blog/projecttomorrow-my-own-use-case-test-demo-train https://vzilla.co.uk/vzilla-blog/projecttomorrow-my-own-use-case-test-demo-train#respond Tue, 28 Mar 2017 08:45:41 +0000 http://vzilla.apps-1and1.net/?p=66 Following on from the 1st post, I want to touch on what I will be using my lab for specifically, and the reasons why. Later in the series I will outline how I have achieved certain aspects, and I am very much open to feedback on the whole design piece to make the most of this lab equipment.

As I mentioned in the first post my lab usage will consist of Testing, Demo and some training.

To set the scene: I have recently started a new role at Veeam Software as a Technical Evangelist; prior to this I was a Systems Engineer based in the UK.

As a Systems Engineer it was my responsibility to provide pre-sales technical assistance to our customers throughout the sales cycle. This involved a lot of demo work, but also a lot of testing: making sure certain workloads would work, or finding the best way to protect them. In the new role I believe the percentage split between these areas may differ slightly, but to begin with I am going to keep the same model and design.

Before joining Veeam I had already invested in a certain amount of home lab hardware. That investment was really aimed at testing and troubleshooting, as in my role before Veeam I was a Solutions Architect and there was no real reason to demo anything. That has obviously changed, so I want to touch on these areas a little further.

Demo

In my opinion, and specifically for a Veeam demo, I want a running infrastructure with the ability to show all the features of Veeam Backup & Replication as well as the monitoring and reporting functions of Veeam ONE. This environment needs to be as clean as possible while still showing an element of usage.

This environment should consist of at least 2 sites, more than 1 host on each site, and at least 1 vCenter (my lab really consists of VMware as the underlying hypervisor of choice).

From a virtual machine point of view we should have at least one Domain Controller to authenticate against, which will also handle all demo lab DNS and DHCP functionality. The VMware VCSA will handle all vCenter management within the demo lab (due to licensing, this VCSA may reside on a separate management cluster serving every function of the home lab).

From the application layer we should have a Domain Controller (as mentioned above), Exchange, SQL, SharePoint and Oracle. Where possible these should contain some level of data, and should be the latest versions supported by Veeam, possibly several versions depending on home lab resources.

Everything in this demo lab is built to be restorable at a VM or application level, to show this functionality within the Veeam software.
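The demo-lab requirements above can be sketched as a quick checklist. The inventory values here are illustrative, not my actual lab:

```python
# Illustrative demo-lab inventory; host counts and names are examples only.
demo_lab = {
    "sites": {
        "Site 1": {"hosts": 2},
        "Site 2": {"hosts": 2},
    },
    "vcenters": 1,
    "apps": ["Domain Controller", "Exchange", "SQL", "SharePoint", "Oracle"],
}

def meets_demo_requirements(lab):
    """At least 2 sites, more than 1 host per site, and at least 1 vCenter."""
    return (
        len(lab["sites"]) >= 2
        and all(site["hosts"] > 1 for site in lab["sites"].values())
        and lab["vcenters"] >= 1
    )

print(meets_demo_requirements(demo_lab))
```

Nothing more than a sanity check, but it captures the minimum footprint a Veeam demo environment needs before the application layer goes on top.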

Testing

Even more so in this new role, I will have the ability, or requirement, to test more software, in particular beta releases from Veeam. Being part of many different community programs, I also gain early-release and beta software from other vendors; some of these community-driven betas may not be related to Veeam, but they are related to infrastructure and warrant some of my time to test and explore.

The testing element of the home lab is not as fixed as the demo; it can be left broken or half configured, but it must be completely separated from the demo lab. (The only things that may be shared are the overall domain, DNS, DHCP and the VMware vCenter, unless there is a requirement to segregate those as well; beta testing the latest VMware version would be a good use case for that.)

Testing is not something a Systems Engineer needs running 24/7 or for the whole year; it's the ability to spin up a lab to test certain things at the specific times you need, or when you are free to do so. I consider this workload non-persistent: any data protection performed here is purely because an exercise spans multiple days or weeks and warrants that level of protection, or because I am testing Veeam software that itself requires data protection to be tested.

Training

The final use case, and possibly the least used in my own home lab, is the training piece: the ability to spin up test environments to really deep dive into HA or DRS, or nested workloads that allow a deeper understanding of what is going on, giving you that preparation for something like the VMware VCP.

The training element wasn't always the least utilised, though. In fact, at the beginning it was the most used, because you are at the stage where you need to learn and understand many different elements of the infrastructure stack. I used my home lab to obtain the VCP, NetApp NCDA and NCIE, Veeam VMCE, and endless amounts of Microsoft exams back in the day. Moving through your career the training doesn't disappear, but time gets shorter; I don't think I could find the amount of time I once devoted to all of those certifications.

That's really all I have on this one. It's really about building a picture of the use cases I try to cater for in my home lab to achieve different tasks. It might be that I am doing it ALL wrong and I should just have a flat cluster for everything, using all resources equally.

Next up I want to divulge the plethora of hardware I am running in the home lab, and how once you have started you just don't stop.

]]>
https://vzilla.co.uk/vzilla-blog/projecttomorrow-my-own-use-case-test-demo-train/feed 0