Getting started with CIVO Cloud
Mon, 02 Aug 2021 – https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud

I have been meaning to jump in here for a while, and today I finally got the chance; it was super quick to get things up and running, especially with the $250 of free credit. As a playground for learning, this is a great place to get started with quick deployments.

This post walks through, pretty much from step 1, what happens when you sign in for the first time and how you can easily deploy a Kubernetes cluster from both the UI portal and the Civo CLI.

When you sign up for your CIVO account and your free $250 credit balance, you need to add your credit card and then you can start exploring.


My next task was to get the Civo CLI onto my WSL instance; to do this I used arkade to install the CLI:

arkade get civo

To add your newly created account to the Civo CLI, follow these simple steps. First you will need your API key from the portal; you can find this under Account > Security. Take a copy of this string (I have blurred mine out).


On the system where you have deployed the Civo CLI, you can now add this API key using the following command.

civo apikey add MichaelCade <API KEY>

I named the key after myself, but it seems you can choose whatever name you wish; it does not have to line up with a username. We can confirm that we added this API key with the following command:

civo apikey list

And if you want to see the API key and compare it to what we found in the portal, you can also run the following command:

civo apikey show MichaelCade


There are many other things you can do from the CLI, and you can obviously incorporate a lot of this into your workflows and automation. For now I am just getting things set up and ready for my first deployment. The other commands can be found here.
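You can also explore the commands straight from the terminal; these are the standard help flags, so a safe place to start:

civo --help
civo kubernetes --help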

From the UI

We can start by creating a Kubernetes cluster through the UI. Simply select Kubernetes from the menu on the left, then Create new Kubernetes cluster, and you are greeted with a simple wizard to build out your cluster, including a clear overview of how much the cluster is going to cost you.


We then have the option to add marketplace applications and storage to the cluster if you would like to hit the ground running; for the purpose of this walkthrough I am not going to do that just yet, but you can see there are a lot of options to choose from.


We then hit Create cluster down at the bottom and, no joke, in 2 minutes you have a cluster available to you.


Now we can jump back to our Civo CLI and confirm we have some visibility into that cluster by using the following command.

civo kubernetes list


Connecting to your cluster

From the UI, we can see below that it is as simple as downloading the kubeconfig file to access your cluster from your local machine. I have read that this approach is not especially secure, but for the purpose of learning and labbing I think it is just fine. We should all be aware of the reasons for not exposing a kubeconfig and the Kubernetes API over the public internet.


I downloaded the config file, put it in my local .kube folder and renamed it to config. (There might be a better way to handle this or to merge it with an existing config file; point me in the right direction if you know a good resource.)
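One common approach is to let kubectl do the merging for you via the KUBECONFIG variable. A minimal sketch, assuming the downloaded file is called civo-kubeconfig and sits in ~/Downloads (adjust the paths to suit):

# back up the existing config first, then flatten both files into one
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:~/Downloads/civo-kubeconfig kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config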


OK, so pretty quick: in less than 5 minutes I have a 3 node Kubernetes cluster up and running and ready for some applications. If you decided to use the UI to create your first cluster but would like to use the CLI to get your kubeconfig file, then carry on to the next section.

Create a cluster from the CLI

Creating the cluster through the UI was super quick, but we always want a way of creating a cluster through the CLI as well. Maybe it is a few lines of code that mean we can have a new cluster up and running in seconds with no reason to touch a UI, or maybe it is a build that is part of a wider demo; there are lots of reasons for using a CLI to deploy your Kubernetes cluster.

When I first installed the Civo CLI in WSL2 I did not have a region configured, so I checked this using the following command. You can see that neither London nor NYC is set to current.

civo region ls


To make LON1 my default, I ran the following command and then ran the ls command again.

civo region current LON1


Now if I run civo kubernetes list to show the cluster created in the UI, I will not see it, as that cluster was created in NYC; I would have to switch regions to see it again.

Let’s now create a Kubernetes cluster from the CLI by issuing the following command. This is going to create a medium 3 node cluster; obviously you can get granular on size, networking and any other detail you wish to configure as part of your cluster.

civo kubernetes create mcade-civo-cluster02
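If you do want to be more prescriptive, the create command takes flags for node count, size and so on. A sketch only; the exact flag names and instance sizes may differ between CLI versions, so check civo kubernetes create --help first:

civo kubernetes create mcade-civo-cluster02 --nodes 3 --size g3.k3s.medium --wait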

Once your cluster is created and ready, you can issue this command to see your clusters. In my account I now have one cluster, shown below, in LON1 and another in NYC1.

civo kubernetes list


If you wish to save the configuration from the CLI so that you can use kubectl locally, you can do so with the following command:

civo kubernetes config mcade-civo-cluster02 -s


Now I want access to both my London cluster and my New York cluster via kubectl, and that can be done using the following command, which gives you access to both contexts. In order to run this, you need to be in the correct region. If you do not use the merge flag you will overwrite your kubeconfig; if, like me, you have several configs for different clusters across multiple environments, always make sure you protect that file, merge carefully and keep it tidy.

civo kubernetes config mcade-civo-cluster02 -s --merge

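With both kubeconfigs merged, switching between the London and New York clusters becomes a kubectl context switch. The context name below is an assumption based on my cluster name; list yours first:

kubectl config get-contexts
kubectl config use-context mcade-civo-cluster02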

Obviously this post only touches the surface of what Civo has going on. I am planning to revisit with some applications being deployed, and then get into the data management side of things and how we can protect these workloads in Civo.

How to Dual Boot – Windows and Ubuntu – Razer Blade Stealth
Mon, 12 Apr 2021 – https://vzilla.co.uk/vzilla-blog/dual-boot-windows-and-ubuntu-razer-blade-stealth

Not my usual content, but over the weekend I took the plunge again, and this time succeeded in getting Ubuntu dual booting on the Razer Blade Stealth 13″ 4K that I picked up mid-pandemic last year (why oh why did I do that during a pandemic with no travel?). This post covers the steps I took to make sure this would work on the laptop; I expect the process would also work for many other laptops, especially those with Windows pre-installed.

You might also want to check your warranty if you are going to do this. I did not, because there is always a fallback plan if you have a backup! But you have been warned.

Dual boot Ubuntu with Windows 10

For this walkthrough, I have Windows 10 pre-installed and want to carve out some space to run Ubuntu 20.10 as a dual boot operating system on the laptop. There are some prerequisites within Windows that we must ensure are done before this will work (this is the reason for the blog; I lost many battles).

We are going to need the following:

  • A backup of your Windows 10 machine! I cannot stress this enough, given what I do for a living; there are free tools out there (hint hint, Veeam Agent for Windows)
  • Ubuntu ISO downloaded – https://ubuntu.com/download/desktop (I am using the Ubuntu 20.10 image)
  • Software to create a bootable USB – https://www.balena.io/etcher/
  • A USB drive (I used a 4GB drive; I did not test smaller options)
  • Free disk space on your existing Windows OS, or another disk. I decided to shrink the 500GB disk I have and carve out 150GB for my new Ubuntu installation.

Create a bootable USB drive

Using the downloaded Ubuntu ISO and the Balena Etcher software installed on your machine, we can now create a bootable USB drive. The software is super simple; I will show all three steps and completion below.


First, select the Ubuntu ISO.


Next, select the USB drive; when I am doing things like this I tend to use just one drive at a time so I do not complicate things.


Then hit flash!


This process should take around 5 minutes; remove the drive safely when complete and put it to one side for now.

Windows Configuration Steps

There are a few steps we must complete so that we maintain the Windows OS we already have installed on the laptop.

Shrinking the disk.

For the record, this is not the system I used, but the process is the same. Open Disk Management; you can find it by typing “disk management” into the search bar next to the start menu.


Locate the disk you wish to use. My laptop only has one disk, so that was straightforward; the system shown has multiple disks. A warning here: if you have a 500GB disk with 400GB of data on it, do not try to shrink it below what is being used (Windows should prevent you from doing this anyway). Right-click on the volume and you will see “Shrink Volume” in the context menu.


Select Shrink Volume and enter the amount you wish to shrink in MB. I wanted 150GB, so the number to enter is 150 × 1024 = 153,600 MB; a quick Google for “150gb in mb” will also give you a quick number to use.


Then select Shrink, and Disk Management should look like the image below, showing unallocated space; this is what we will use later for the Ubuntu installation.


Secure Boot

This must be done for you to get things working. Secure Boot, I think, was a feature that came in with Windows 8, initially, I believe, for security reasons: to prevent boot viruses and the like. In order for us to dual boot with our Ubuntu OS we need to turn it off. We start by searching for Advanced Start-up and choosing the top option in the menu below.


We then need to select Restart now. WARNING: this is going to reboot your machine, so if you are reading this blog on that same machine, you will lose this page; read the next steps first.


Your system is going to reboot into what I can best describe as a safe mode or boot menu. From here we are going to select “Troubleshoot”.


Next is Advanced Options


Next is UEFI Firmware Settings


Then finally (well, finally in this nice blue set of screens at least) hit Restart.


Next, you will find yourself in the BIOS of the system; use the arrow keys to move along to Security.


Navigate down to Secure Boot and make sure it is disabled. Then save and exit, and you will be booted into Windows again.


Ubuntu Installation

Now we are finally ready to get Ubuntu installed on the system. With your USB drive in hand, reboot Windows and make sure the boot menu points at the USB drive; on the Razer Blade Stealth, you need to press F12 on boot to access the boot menu. I am not going to walk through every step, as this has been done a hundred times across the internet; the first thorough walkthrough I found from a Google search was this one here.

The only bit I do need to mention is using that free 150GB of space we carved out in the previous steps. The installer presents options that are not covered in the above walkthrough; on this menu, we should choose “Something else”.


Then you will be back in a process similar to the linked walkthrough article (I have modified the images and hope the author does not mind). Select the free space we created in the list of drives, then use the + button to create three partitions, as highlighted below:

  • a system partition (/) of 20GB
  • a swap area partition of 8GB
  • a home partition (where you store your documents etc.) taking the remaining space from that 150GB, so around the 120GB mark


Click through the rest of the wizard and you will be asked to create user accounts and join your networks. Out of the box this all worked for me. There are lots of stories out there about graphics drivers; out of the box Ubuntu used the open-source option, which I later changed to the NVIDIA drivers for the 1650 Ti. I then started installing my apps and updates, and everything is running super smooth so far. I am yet to find anything that does not work; even the 4K touch panel works, which I was shocked but equally pleased about, as it was one of the reasons I went for the 4K option.


Another note is that the GRUB loader text is super tiny; if anyone knows a better way to increase the size of this, that would be super helpful. Although I can read it, it is very small on the 4K screen. I think that covers everything; if you have any questions then please reach out to me on Twitter @MichaelCade1 and I will gladly help or improve this article for others.


Anyone who has done this will see that the GRUB loader is super tiny and almost unreadable on the 4K version of the Razer Blade Stealth. This may not bother you, as by default it boots into your Ubuntu desktop, and you can just about select the Windows boot option there as well.

To fix this we have to make one tiny change to the GRUB defaults file, which can be found at /etc/default/grub. You need to remove the # from the

GRUB_GFXMODE=640x480

line.

I made the above change using vi in the terminal. Once you have made it, run

sudo update-grub
and reboot your system; providing you made the correct change, you should now actually be able to read the text in the GRUB loader.
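If you would rather script the change than open an editor, a one-liner like this should do the same job; a sketch only, so check the file first in case your commented line differs:

sudo sed -i 's/^#GRUB_GFXMODE=640x480/GRUB_GFXMODE=640x480/' /etc/default/grub
sudo update-grub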

#SummerProject – Infrastructure As Code – Example Tools
Thu, 08 Aug 2019 – https://vzilla.co.uk/vzilla-blog/summerproject-infrastructure-as-code-example-tools

Terraform

As I said above, I wanted to get into some examples of the tools actually used to provision your infrastructure using code. Terraform uses the term “execution plan” to describe the way your code is deployed.

Terraform was created by a company called HashiCorp, who have a number of really good tools in this space.

The biggest pull factor for me, and why I wanted to kick things off with Terraform, is that Terraform is cloud agnostic, or pretty much infrastructure agnostic: you can use Terraform with your on-premises vSphere environment as well as with the AWS, Azure and GCP cloud platforms. Below is a link to an awesome but simple introduction to Terraform; all of these resources can be found here amongst other training material around other available tools.
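To make those “execution plans” concrete, the core Terraform workflow is three commands. The subcommands are standard; the assumption is that you run them in a folder containing your .tf configuration files:

terraform init    # download the providers the configuration needs (vSphere, AWS, Azure, GCP...)
terraform plan    # show the execution plan: what would change, before anything is touched
terraform apply   # execute the plan against the target environment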

Azure Resource Manager Templates

Up until today I would have said that using PowerShell in Azure to deploy my resource groups and storage accounts was IaC. I was wrong; the code itself could form some of that IaC effort, but alone a PowerShell script is not IaC.

IaC in an Azure world centres on Azure Resource Manager templates: a declarative way of saying “this is how I want the end state of my Azure environment to be.” They are defined in a JSON file and allow you to determine how and what your resources and infrastructure look like.

These templates can be deployed through PowerShell, the Azure CLI or the Azure portal.
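As a sketch of what that looks like with the Azure CLI (the resource group and template file names here are made up, and this uses the current az syntax):

az group create --name my-rg --location uksouth
az deployment group create --resource-group my-rg --template-file azuredeploy.json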

The biggest thing that needs to happen here, and the real benefit of IaC, is understanding and being able to use versioning. A good example of versioning would be using Git: this allows for source control, so you can see when the configuration code has changed.

There are alternatives to Git, but I am pretty confident, as a noob here, that Git is the most used out there. Really, I am not looking to be a programmer; I just need to understand, and potentially act on, a little of this without being fully fledged and knighted into the developer kingdom.

Azure DevOps is another resource to mention here. It allows your developers to collaborate on code development; again this could be a little outside the IaC remit, but there may be some use cases where it is absolutely required as part of IaC.

Azure Repos are leveraged to centrally store code, but there are a lot of other Azure services that coexist here, and it is worth reading some more if you are interested.

What was interesting in the resource video listed below, “Infrastructure as Code for the IT Administrator”, is that the presenter also touches on Continuous Deployment and Azure Pipelines. I found it very interesting that pushing committed changes to Git would automatically deploy them through the pipeline or workflow.

The example John Savill uses in the demo is very simple, and to be honest that task could be quicker using the UI, but obviously he did not have endless time to walk through a more involved example. I still think it is the best resource I have seen to date for explaining what IaC is and why it should absolutely be considered.

AWS CloudFormation

I think by now we are clear that Infrastructure as Code is about code, yes, but it is probably more important to remember that it is about version control and a declarative way of saying “this is how I want the end state of my environment to be”, whichever environment that is.

Now, a question I have at this point: we first talked about Terraform and stated that it is agnostic to the environment, usable with vSphere, AWS, Azure and so on. Colour me silly, but am I right in thinking that the Azure Resource Manager templates mentioned in the last section and AWS CloudFormation are fixed to their respective public cloud offerings?

This is quite an old resource but this completely makes sense to me – https://www.techdiction.com/2017/08/24/migrating-aws-cloudformation-templates-to-azure-resource-manager-templates/

I am still convinced that Terraform may be the right fit, but I might be missing something fundamental here.

In the same way as mentioned in the Azure section, AWS CloudFormation uses templates too, defined in a JSON file.

That JSON file serves as a blueprint defining the configuration of all the AWS resources that make up your infrastructure and application stack, or you can select one of the sample pre-built templates that CloudFormation provides for commonly used architectures, such as a LAMP stack running on Amazon EC2 and Amazon RDS.

Upload your template to CloudFormation and select parameters, such as the number of instances or instance types, if necessary; CloudFormation will then provision and configure your AWS resource stack.

Update your CloudFormation stack at any time by uploading a modified template through the AWS management console or command line.
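A sketch of that lifecycle with the AWS CLI; the stack and file names are made up:

aws cloudformation create-stack --stack-name my-stack --template-body file://template.json
aws cloudformation update-stack --stack-name my-stack --template-body file://template.json
aws cloudformation delete-stack --stack-name my-stack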

You can check your template into version control, making it possible to keep track of all changes made to your infrastructure and application stack.

CloudFormation brings the ability to version control your infrastructure architecture the same way you would software code.

Provisioning infrastructure seems as simple as creating and uploading a template to CloudFormation.

My first thought, now that I have touched on three of the most commonly used IaC tools in the industry today, is that whichever one you use makes it very simple to replicate your infrastructure again and again, whether for additional site rollouts or for test and development scenarios.

Think of the ability to quickly spin up a replica of your production environment for development and test with just a few clicks (in this case in the AWS management console), tear it down when finished, and rinse and repeat that process whenever you want. Manually, this was always going to be a pain point. IaC is apparent in the traditional on-premises world too, but there it relies on having the physical hardware in place, unless you are deploying a software or application stack only and spare resources exist. In the public cloud, with those seemingly infinite resources, this is a great story to be told.

Google Cloud Deployment Manager

Google Cloud Platform is the one public cloud, of the three covered here, that I have not really had any dealings with at all, so when I came to look for resources on Google Cloud Deployment Manager there was very little out there: great from a content creation point of view if you know your way around the platform, rubbish if you are learning.

Looking at the product page, though, it follows the same footprint as the above-mentioned tools but with a focus on the Google Cloud Platform (a sketch of the CLI flow follows the list below):

  • Simplify your cloud management
  • Repeatable deployment process
  • Declarative language
  • Focus on the application
  • Template-driven
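As a sketch of the equivalent flow with the gcloud CLI (the deployment and config file names are made up):

gcloud deployment-manager deployments create my-deployment --config config.yaml
gcloud deployment-manager deployments update my-deployment --config config.yaml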

One thing I really like about Google at first glance is that they seem to have documentation down really well. Depending on how this summer goes, I want to be in better shape to understand more about GCP before the end of 2019.

Git

Git is a version control system: open source, with a distributed architecture. The reason for the mention is that it may be required. Generally I get the impression it is used on projects where you have multiple developers and need version control, but it is worth mentioning because there will be use cases within IaC where it is relevant and matters to infrastructure admins.

This is a great resource that will walk you through some use cases with Git.
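As a minimal example of version-controlling an infrastructure template (the file name is just an illustration):

git init
git add azuredeploy.json
git commit -m "initial storage account template"
git log --oneline    # the incremental history of who changed what, and when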

Resources

I cannot take any credit for this collection of resources, used above or below; these were all shared in the show notes of CloudSkills.fm. I will keep adding resources here as I find good, useful content to share.

CloudSkills.fm – Infrastructure as code in the cloud:002

Build Azure Resource Manager templates

Azure Quickstart Templates

AWS CloudFormation Getting Started

AWS Quick Start Templates

Google Cloud Deployment Manager

Learn Terraform

Infrastructure as Code for the IT Administrator

I know this was a long post, but I think it was enough as a primer into each of the areas, and it did not seem as if each tool warranted its own post. You can probably tell that a lot of the content here is basically my notes. There is a huge amount I am sure I have missed, but I wanted to share what I deem important as we move into this new world. Time permitting, there is an endless amount of content, training and follow-up to come back to, and I find this a really interesting part of the future as we move further into the cloud computing space.

#SummerProject – Infrastructure As Code – Why?
Wed, 07 Aug 2019 – https://vzilla.co.uk/vzilla-blog/summerproject-infrastructure-as-code-why

From my first post, I was not sure what to expect when diving head first into this newish world of Infrastructure as Code, or what it would look like in another world I was not too sure about: cloud computing.

Although I believe I grasped the reasons behind, and benefits of, Infrastructure as Code in the first post, I think we need to look at how things were traditionally managed, and still are for the most part in the on-premises datacentre, and also highlight some of the reasons why things are changing.

How infrastructure was traditionally managed

Infrastructure was traditionally managed, and still is today in many organisations, something like this. Take a common estate: VMware running inside a private data center. The classic approach is that a consumer of infrastructure files a request, and someone at the other end of the request queue pulls it off, logs into a management portal or an administrative console, and points and clicks to provision that piece of infrastructure.

There is no issue with this, especially if you do not have to manage a lot of infrastructure or if the churn of that infrastructure is relatively minimal, and this was, and is, true for many private data centers: a virtual machine would live for months to years, the scale of deployment was relatively limited, and so it was possible to manually point and click and administer these systems.

Things are changing

A couple of shifts are changing the way we think about traditional infrastructure management. The first is that we no longer have just one private data center to administer: we have a sprawl of other consumable cloud-based environments, and they are API driven. The second is the elasticity of infrastructure, where a resource might now live for days to weeks instead of months to years.

The scale of infrastructure is much higher because, instead of a handful of large instances, we might have many smaller ones; there are many more things to provision, and this provisioning tends to occur in cycles and repeat regularly.

We might scale up to handle load during peak days and times, and scale down at night to save on cost, because it is not a fixed cost: unlike owning hardware that we can depreciate, we are now paying by the hour, so it makes sense to use only the infrastructure you need, and for that you have to have this sort of elasticity.

As you start making these changes, the thought of filing a thousand requests every morning to spin up to peak capacity, filing another thousand at night to spin back down, and manually managing all of it clearly becomes challenging: how do we even begin to operationalise this in a way that is reliable, robust and not prone to human error?

There is a change in the dynamics of our infrastructure. The idea behind infrastructure as code is: how do we take the process we were achieving by pointing and clicking and capture it in a codified way? If I need to do that task one time, ten times or a thousand times, I can automate it; every morning I can run a script that brings up a thousand machines, and every evening run the same script to bring the footprint back down to whatever it should be.

We can automate this, and, now that we have captured it in code form, we can also start version controlling it and see an incremental history of who changed what. This methodology also lets you see how the infrastructure is actually defined at any given point in time, giving a transparency of documentation that we do not have in the traditional point-and-click environment.

The reusability of the code, and the ability to drive automation tasks whilst keeping version control, is the real value of infrastructure as code.

Next up is a long post covering some examples of Infrastructure as Code; in particular I have chosen Terraform for a cloud agnostic approach, plus each of the major public cloud hyperscalers' options for IaC.

#SummerProject – Infrastructure As Code – Learning / Foundation
Tue, 06 Aug 2019 – https://vzilla.co.uk/vzilla-blog/summerproject-infrastructure-as-code-learning-foundation

In the last post I said I was going to kick off my summer project, and this year it is going to be about becoming more aware of cloud computing. By no means will I know everything in the 3 weeks I have set aside, but I want to be in a better place than I was at the beginning of summer, and to understand enough to have a good, solid conversation with our customers and the IT community.


First Steps

I also mentioned in the first post some of the resources I was going to get into. This series will take those resources and add my own spin on what I learnt; hopefully, somewhere out there, it helps someone else. I will of course list my resources again at the end.

Initial Overview and Perspective

Having already played in this area a little for just over a year, I think I have a pretty good understanding of what Infrastructure as Code is and what benefits it brings, but I also want to make sure I portray my own thoughts here.

Infrastructure as code is the practice of defining your architecture formally in some form of code, usually a set of templates that describe the architecture along with configuration files for setting parameters. The biggest reasons to use infrastructure as code are to save yourself repeated work and to know exactly what is in the environment at any point in time. Your infrastructure becomes more reliable, repeatable and ephemeral: you can stand up environments fast, play around in them, and tear them back down to save costs.

When you use infrastructure as code, it is important to stick to it: once you describe something in the template, all updates to it need to be made in that template, otherwise you risk introducing configuration drift.
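Drift is also something the tooling can surface for you. With Terraform, for example, the plan step compares the code against the real environment, so a hand-made change shows up as a pending diff:

terraform plan    # anything changed outside the template appears here as a proposed change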

Wikipedia, the keeper of truth, has a pretty good opener on what IaC is too.

“Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources. The definitions may be in a version control system. It can use either scripts or declarative definitions, rather than manual processes, but the term is more often used to promote declarative approaches.

IaC approaches are promoted for cloud computing, which is sometimes marketed as infrastructure as a service (IaaS). IaC supports IaaS but should not be confused with it.”

The three reasons I can see people moving towards Infrastructure as Code come down to speed, risk and efficiency: it offers a highly efficient way of deploying infrastructure.

Speed – if you can take a process and just copy and paste it, it is effectively quicker than typing the lines of code or performing the process over and over again. IaC allows for that template methodology, letting people template their infrastructure or even their applications.

Risk – if you leverage the template-like function, you reduce the amount of hands-on interaction an actual human being needs to have with the infrastructure, thus removing risk, or at least some of it.

Efficiency – the templating analogy again: I can repeat this process over and over, hundreds and thousands of times, and each time we get the same output with the correct parameters and settings.

In the next posts I am going to drill into some of the key areas I have found most useful to understand, and as a follow-up I will go into each one in more detail. In the next post, though, I am going to look at WHY IaC is a thing, how things used to be, and why this shift is needed for both on-premises and cloud computing.

There are a large number of offerings here, and some that I did not touch on, such as VMware vRealize Automation, Ansible, Puppet and Chef: all absolutely valid tools for IaC, but I wanted to keep things broad and also show the public cloud native service offerings.

The Summer of 2019 – Cloud Computing for the infrastructure guy
Mon, 05 Aug 2019 – https://vzilla.co.uk/vzilla-blog/the-summer-of-2019-cloud-computing-for-the-infrastructure-guy

Every year the summer months, in the UK and all over I guess, are a good time to start reflecting and thinking about things. Last year we worked on some pretty interesting Infrastructure as Code as part of a project that saw a lot of good content come out around deploying Veeam components using Terraform and Chef; this actually made up the majority of our VMworld session and a few other events thereafter.

This summer, it felt like the right time to use these slowly dwindling 3 weeks before we head to VMworld to really focus on some of the new “Cloud Computing” or “Cloud Native” areas that I have so far only brushed over: I know enough to ask questions, but very little to add input or ideas.


Every one of us trains in different ways: some love to read, some love to watch, some love to listen, and some people span all or several of those formats. For me the best form of training is watching and listening; training videos and podcasts are my go-to, at least to start, then it is hands on to make something work. I am not a classroom fan, never have been (it brings back too many memories of school), and reading is only good for getting to sleep. I will say, though, that I have found my happy medium when it comes to “reading”, and that is Audible; I have simply amazed myself with the number of books I have been through so far this year, amazing for someone that only really read maybe a book a year.

OK, so you think you want to learn Cloud Native… where do you start? I don't know.

The first resource I found through sharing, and in fact I believe it was Nick Howell, now Field CTO at NetApp for their Cloud Data Services business unit, was the “CNCF Cloud Native Interactive Landscape”. This, my friends, is a monster syllabus for learning Cloud Native!

At the time of writing this post there are 1,172 cards. It is a really good resource, as this is the bible and it is constantly updated.

The screen grab I took is barely visible; the landscape is vast, and someone coming from an infrastructure point of view may be absolutely overwhelmed at first. I know I am, and was even more so a few weeks back.


Where do you start?

Before you start, you need to understand what the focus and end game are. For me, the end game is to know more about these areas so that I can understand what pain points customers are having as they move into this new way of IT delivery.

I come from an infrastructure point of view: I know storage systems very well, I know virtualisation very well, I know backup, and more recently I have, let's say, dabbled in the automation and configuration space.

Looking at that matrix above, full of vendors some of which I have never heard of, was overwhelming for me; but when you actually look at the sections, it becomes a lot clearer.

The first area I want to focus on is Infrastructure as Code, though on this chart the focus is really “Provisioning – Automation & Configuration”: 72 cards in total, again including vendors I had never come across.

The reason for this choice is that I know the infrastructure side of things, and this section by all accounts allows me to take that infrastructure and automate the deployment and configuration of its different aspects.


Let’s work back one step

I mentioned that I know how I train and how I learn, but before you get started on any personal project you need to know, at least roughly, where your education material is going to come from.

Over the last few years I have defaulted to checking Pluralsight first for video training. I am extremely lucky that, as part of my #CiscoChampion and #vExpert membership, I receive a rolling 12-month free subscription to the service. I would argue this is one of the most valuable perks of those advocacy programs, and if you have it and don't use it, you absolutely should.


There is a course there that I fully intend to start with, once I get through some podcasts on the same topic.

Infrastructure from Code: The Big Picture. It is from 2017, so I don't know if it will be out of step with what we have today, but the premise and overview of the course should be a good primer and probably the level of education I need right now.

My second pick for a resource is a podcast that started at the very beginning of 2019; it is weekly, so it is quite easy to get caught up and stay current. The podcast is CloudSkills.fm, hosted by Mike Pfeiffer, and listening to the opening show from January 2019 I thought, well, this is exactly where I am at.


The first show, as I said, is an introduction and touches on some of the certifications and training out there in this space. The second episode gives a good 30-minute primer on Infrastructure as Code; it is this that cemented IaC as the first endeavour of the summer project.

It is a great listen with a great list of resources to get started; in particular, Terraform is going to be a huge player moving forward.


https://cloudskills.fm/002

I will go into more detail on what I find, as well as any more great resources I discover along the way; next up, expect a post specifically on Infrastructure as Code.
