Kubernetes – vZilla (https://vzilla.co.uk) – One Step into Kubernetes and Cloud Native at a time, not forgetting the world before

Taking a look at KubeBuddy for Kubernetes
https://vzilla.co.uk/vzilla-blog/taking-a-look-at-kubebuddy-for-kubernetes (Thu, 15 May 2025)

I have been meaning to get to this little project for a while, and here we are. You can find a link to the site below. I like its in-your-face opening message, because it tells me this tool is going to tell me something about my Kubernetes cluster that I didn't know. For the record, I am going to download and run it against my home lab cluster and see what we get. This is not a production cluster!

So what is it…

KubeBuddy powered by KubeDeck helps you monitor, analyze, and report on your Kubernetes environments with ease. Whether you’re tracking cluster health, reviewing security configurations, or troubleshooting workloads, KubeBuddy provides structured insights.

image 21

Let's get started

Suspiciously, this Kubernetes tool is built using PowerShell; I don't think I can name another tool with that characteristic.

Luckily, PowerShell is now available cross-platform. I am using a Mac, so as part of getting started we will also install PowerShell via brew.


brew install powershell

Other installation steps can be found in the usage section of the page linked above. Good stuff: we have PowerShell installed, and we can use

pwsh
from our Ghostty terminal to get into the shell. We can then run a command to get the KubeBuddy module installed.


Install-Module -Name KubeBuddy -Scope CurrentUser
image 22

Also from the above we can see that the way to start playing with KubeBuddy is the

Invoke-KubeBuddy
command.

We can then also use

Get-Help Invoke-KubeBuddy -Detailed
as a way to understand some additional flags we have access to here.

image 23

Are we ready to find out something we didn’t know about our cluster?

As you can see, it was pretty easy to run against my Kubernetes cluster. I am running a Talos cluster, which is designed to be very minimal and extremely secure, so some of the reported findings may be related to that.

The Output

As you can see from the end of the video above, we have an output. For this run we chose HTML, but you can also get JSON, and I have seen a save-to-PDF feature in the report as well.

Here is the HTML output. I am not going to get into the issues it has found (maybe that is a follow-up), but I think it is great that we get a lot of detail without a lot of effort; the tool has taken away the need to go searching for all of this.

Navigation along the top allows you to dive into each of those areas and display the warnings and errors found in each of them.

image 24

When we scroll down we see some more detail about the cluster; even for a home lab, 20 Critical seems like something we should investigate further.

image 25

Finally, on this initial page we see some information about resources and cluster events. There is not much going on in the lab right now, or something is not being picked up; that is my suspicion here.

image 26

As you then go across the tabs at the top you can get more granular detail on each area. All tabs have a similar layout: an initial total of resources and those with issues, then some recommendations and the findings themselves. Again this is useful, as finding the same information using
kubectl
would be a needle in a haystack.

image 27
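
To give a sense of what that haystack looks like, here is a rough sketch of the sort of kubectl queries you would otherwise be stitching together by hand; these commands are illustrative rather than an exact equivalent of the report, and kubectl top assumes metrics-server is installed.

# Pods that are not currently running, across all namespaces
kubectl get pods -A --field-selector=status.phase!=Running

# Recent warning events, across all namespaces
kubectl get events -A --field-selector=type=Warning --sort-by=.lastTimestamp

# Node resource pressure (requires metrics-server)
kubectl top nodes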

My Thoughts?

This was a very quick overview of this little tool. I am intrigued by the PowerShell angle, and I am intrigued by how it can progress, where the project goes next, and what it can highlight in the future.

My initial thoughts on using AI to manage Kubernetes Clusters – kubectl-ai
https://vzilla.co.uk/vzilla-blog/my-initial-thoughts-on-using-ai-to-manage-kubernetes-clusters-kubectl-ai (Mon, 12 May 2025)

As with most Mondays, we start with a job and task in mind, but as we begin catching up on news from the weekend we quickly find interesting rabbit holes to investigate. This Monday morning was no different, but I do not usually have the urge to share what I find.

As you all know AI is everywhere, I mean if you do not have a chatbot can you even spell AI!?

My morning started with reading up on a tool called 'kubectl-ai' from Google – https://github.com/GoogleCloudPlatform/kubectl-ai

I had seen others doing similar things, so I was intrigued when Google came out with a project. To name one that was already on my list: k8sgpt – https://k8sgpt.ai/

K8sGPT is for understanding and debugging what’s going wrong inside a Kubernetes cluster.

kubectl-ai is for interacting with the cluster more easily, translating your intent into commands.

The premise of these tools is the ability to use AI to manage your Kubernetes cluster and resources using natural language. For me this addresses a few things. The barrier to entry in learning Kubernetes is the overwhelming set of CLI options and variations; although that is a superpower in itself, it is a challenge for many people who do not have that background. Kubernetes does have complexity to it; that is partly why we see it in so many diverse fields, and by its nature it can do many things, which brings complexity. My dad used to say to me "children should be seen but not heard". I never really understood that saying, but Kubernetes is the same... it should be used but not seen... by most people... Maybe that works, who knows...

By adding the ability to query your cluster and instruct tasks via this and other tools, we no longer need to memorise everything about kubectl; we can just tell it to run this, do that, or give us feedback on something.

I started off trying to use the Google AI Studio API key but initially it said the model was overloaded and then the key seemed to be wrong.

image

So I then tried the ability to use a local model with Ollama, but I only had my MacBook, and you need to download the Gemini Pro model, which is around 8GB; with no GPU I will need to wait and do this on my desktop PC... maybe a video on that setup.

You can bring many models and services, so I used my trusty OpenAI key and got to work… exporting the key and asking some initial questions.
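
For reference, the setup was roughly this. Treat the provider and model flag names below as assumptions, as they are from memory of the project README and may differ in your version; check kubectl-ai --help.

# Use an OpenAI key instead of the default Gemini provider
export OPENAI_API_KEY="sk-..."
kubectl-ai --llm-provider=openai --model=gpt-4o "how many pods are running in the kasten-io namespace?"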

image 1

As I am focused on, and interested in, the world of data management within Kubernetes, I wanted to see how we could go about creating a backup policy and what I needed to provide to make this work.

image 2

Meanwhile up to this point, we were barely touching the spending on our OpenAI key…

image 3

As the $$ are low, we can ask a few more things about our backup policies

image 5

I then thought, what about getting some insight into our cluster? What is the health of things... maybe things I have not been able to see yet. I can just ask, right, and get a simple output of the things I need to troubleshoot.

image 4

A very quick post to start with, but I am now intrigued by this simplification. Maybe I could release some of that RAM in my brain where I am storing all those kubectl commands and store something else.

As a beginner to Kubernetes, this gives you the best chance to accelerate and get to grips with a lot more, much faster... Just ask it to deploy your nginx deployment and expose it via a service; no longer do you have to worry about the YAML and kubectl commands.
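
For context, the sort of thing the AI is generating for you under the hood looks like this (the names here are just illustrative):

# Create an nginx deployment and expose it via a service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Check the result
kubectl get deployment,service nginx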

My final thought: this is great for home labs and dev environments... Still, be mindful running this against anything important... I also want to give K8sGPT a try, as I can see it might do the same and some more things here.

I am sure there are many other tools popping up in this area, but as a quick comparison of the two I created this table.

Feature | K8sGPT | kubectl-ai
--- | --- | ---
Purpose | Diagnoses and explains Kubernetes cluster issues | Helps write and understand kubectl commands using AI
Focus | Cluster health, error analysis, and troubleshooting | Command-line assistant for kubectl
AI Role | Uses AI to explain root causes and suggest fixes | Uses AI to translate natural language to kubectl commands
Installation | CLI tool + CRDs (optional for full diagnostics) | kubectl plugin via Krew or direct install
Integration | Can run inside clusters; supports multi-language output | Works locally in the terminal as a plugin
Common Use Case | Debugging failed pods, misconfigurations, alerts | Helping users construct or correct kubectl commands
Veeam Kasten: ARM Support (Raspberry PI – How to)
https://vzilla.co.uk/vzilla-blog/veeam-kasten-arm-support-raspberry-pi-how-to (Sun, 22 Dec 2024)

This has been on my product bucket list for a while; in fact, the initial feature request went in on 9th September 2021. My reasons then were not sales orientated: I was seeing the Kubernetes community using trusty Raspberry Pis as part of Kubernetes clusters at home.

In my eyes, supporting this architecture opens the door for home users, technologists and the community to have a trusted way to protect their learning environments at home.

Here we are 3 years on and we got the support.

image 8

I have a single-node K3s cluster running on a single Raspberry Pi 4. We have 4GB of memory, and we had to make some changes to get things up and running.

image 9

I chose K3s due to the lightweight approach and I was limited by only having this one box for now, the others are elsewhere in the house serving as print servers and other useful stuff.

I actually started with minikube on the Pi (using some nightly builds), as it is a very fast way to rinse and repeat things, but the resources consumed were too much.

As Veeam Kasten for Kubernetes is focused on protecting, moving and restoring your Kubernetes applications and data, I also need a layer of storage to play with. The CSI hostpath driver is quite easy to deploy and mimics any other CSI driver in a single-node cluster. With this in mind we also created a storageclass and volumesnapshotclass.

image 10

I am not going to repeat the steps as they can be found here.
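
For reference, the two objects mentioned above look roughly like this. This is a sketch: it assumes the upstream CSI hostpath driver's provisioner name (hostpath.csi.k8s.io) and uses the Kasten annotation that marks a VolumeSnapshotClass as usable by K10, so check the driver and Kasten documentation for your versions.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io        # CSI hostpath driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
  annotations:
    k10.kasten.io/is-snapshot-class: "true"   # lets Kasten use this class for snapshots
driver: hostpath.csi.k8s.io
deletionPolicy: Delete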

Deploying Veeam Kasten

With the above Kubernetes storage foundations in place we can now get Kasten deployed and working on our single node cluster.

We will start this process with a script that runs a primer on your cluster to ensure that the requirements are met, that storageclasses are present, and that a CSI provisioner exists, so we run the following command on our system. (This is the same process for any deployment of Kasten; air-gap methods can also be found in the documentation.)


curl https://docs.kasten.io/tools/k10_primer.sh | bash

At this point you should have helm and everything else pre installed and available for use here.

As of today, the process to get things installed is the same as for any x86 or IBM Power based cluster deployment of Kasten, and it can be as simple as the command below, although you will likely want to check the documentation.


helm install k10 kasten/k10 --namespace=kasten-io --create-namespace

In an ideal world you will have all pods come up and running, and this may well be the case on your cluster or single node, depending on resources. Within my cluster I have also deployed the Bitnami Postgres chart, so resources were low. But in an ideal world you have this.

image 11

I did not… so I had to make some modifications. I am going to state here that this is not supported, but then I don't think single-node Raspberry Pi deployments are something support will have to deal with either. I also believe that resources are going to play a crucial part later on when we come to protecting some data.

My gateway pod was stuck without enough memory to get up and running, so I simply modified the deployment and made some reductions to get to the state above.

Backing up

In the demo below, I have created a simple policy that is considerate of local storage space and only keeps a couple of snapshots, for test and demo purposes.

My Deployment modification


    resources:
      limits:
        cpu: "1"
        memory: 100Mi
      requests:
        cpu: 200m
        memory: 100Mi

by default the gateway deployment is


    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 200m
        memory: 300Mi
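
If you prefer not to edit the deployment by hand, a one-liner along these lines should apply the same reduction; this assumes the deployment is simply named gateway in the kasten-io namespace, as it was in my cluster.

kubectl -n kasten-io set resources deployment gateway --requests=memory=100Mi --limits=memory=100Mi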
VMs on Kubernetes protected unofficially by Veeam*
https://vzilla.co.uk/vzilla-blog/vms-on-kubernetes-protected-unofficially-by-veeam (Fri, 29 Nov 2024)

*As the title suggests, in this post we are going to be talking about the upstream project KubeVirt. KubeVirt as a standalone project release is not supported when it comes to protecting these VMs; today that support only covers Red Hat OpenShift Virtualisation (OCP-V) and Harvester from SUSE. This is down to all the varying hardware KubeVirt can be deployed on.

With that caveat out of the way, in a home lab we are able to tinker around with whatever we want. I am also clarifying that I am using the 5 nodes that are available free for the community to protect these virtual machines.

We are going to cover getting Kubevirt deployed on my bare metal Talos Kubernetes cluster, getting a virtual machine up and running and then protecting said machine.

Some prerequisites: make sure you follow this guide, ensuring you have virtualisation enabled and a bridge network defined in the Talos configuration.

Here is my configuration repository for both my virtual cluster and bare metal. I will say though that this documentation was really handy in finding the way. Remember these commands are based on my environment.

Installing virtctl

We will start with virtctl, virtctl is a command-line utility for managing KubeVirt virtual machines. It extends kubectl functionality to include VM-specific operations like starting, stopping, accessing consoles, and live migration. Designed to streamline VM lifecycle management within Kubernetes, it simplifies tasks otherwise requiring complex YAML configurations or direct API calls.


export VERSION=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64

A warning here: be sure to check the copy and paste, as it broke on mine.
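
Once downloaded, the binary still needs to be made executable and put on your PATH; something along these lines (adjust the filename if you are on a different architecture):

chmod +x virtctl-${VERSION}-linux-amd64
sudo mv virtctl-${VERSION}-linux-amd64 /usr/local/bin/virtctl
virtctl version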

Deploying KubeVirt

Keeping things simple we will now deploy Kubevirt via YAML manifests as per the Talos docs linked above.


export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml

Now that we have the operator installed in our bare metal cluster, we need to apply the custom resource. I have modified this slightly from the Talos example.


apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - LiveMigration
        - NetworkBindingPlugins
  certificateRotateStrategy: {}
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  workloadUpdateStrategy:
    workloadUpdateMethods:
      - LiveMigrate

Finally, before we get to deploying a VM, we are going to deploy CDI (the Containerised Data Importer), which is needed to import disk images. I again modified mine here to suit the storageclasses I have available to me.


apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  config:
    scratchSpaceStorageClass: ceph-block
    podResourceRequirements:
      requests:
        cpu: "100m"
        memory: "60M"
      limits:
        cpu: "750m"
        memory: "2Gi"

All of these will then be deployed using

kubectl create -f <filename>
but you can see this below in the demo.
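
One note on CDI: the custom resource above assumes the CDI operator itself is already installed. If you are starting from scratch, the upstream releases ship an operator manifest that is applied first, roughly along these lines (check the containerized-data-importer releases page for the current version; I use a separate CDI_VERSION variable so the KubeVirt VERSION above is not overwritten):

export CDI_VERSION=$(basename $(curl -s -w %{redirect_url} https://github.com/kubevirt/containerized-data-importer/releases/latest))
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml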

Create a VM

Next up we can create our virtual machine. I am going to again copy but modify slightly the example that we have from Talos. Here is my VM YAML manifest.

Note SSH configuration is redacted and you would want to add your own here.


apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: fedora-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-vm
      annotations:
        kubevirt.io/allow-pod-bridge-network-live-migration: "true"
    spec:
      evictionStrategy: LiveMigrate
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4G
        devices:
          disks:
            - name: fedora-vm-pvc
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: podnet
              masquerade: {}
        networks:
          - name: podnet
            pod: {}
        volumes:
          - name: fedora-vm-pvc
            persistentVolumeClaim:
              claimName: fedora-vm-pvc
          - name: cloudinitdisk
            cloudInitNoCloud:
              networkData: |
                network:
                  version: 1
                  config:
                    - type: physical
                      name: eth0
                      subnets:
                        - type: dhcp
              userData: |-
                #cloud-config
                users:
                  - name: cloud-user
                    ssh_authorized_keys:
                      - ssh-rsa <REDACTED>
                    sudo: ['ALL=(ALL) NOPASSWD:ALL']
                    groups: sudo
                    shell: /bin/bash
                runcmd:
                  - "sudo touch /root/installed"
                  - "sudo dnf update"
                  - "sudo dnf install httpd fastfetch -y"
                  - "sudo systemctl daemon-reload"
                  - "sudo systemctl enable httpd"
                  - "sudo systemctl start --no-block httpd"

  dataVolumeTemplates:
  - metadata:
      name: fedora-vm-pvc
      namespace: fedora-vm
    spec:
      storage:
        resources:
          requests:
            storage: 35Gi
        accessModes:
          - ReadWriteMany
        storageClassName: "ceph-filesystem"
      source:
        http:
          url: "https://fedora.mirror.wearetriple.com/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2"

The final piece to this puzzle that I have not mentioned is that I am using Cilium as my CNI, and I am also using it to provide me with IP addresses accessible from my LAN. I created a service so that I could SSH to the newly created VM.


apiVersion: v1
kind: Service
metadata:
  labels:
    kubevirt.io/vm: fedora-vm
  name: fedora-vm
  namespace: fedora-vm
spec:
  ipFamilyPolicy: PreferDualStack
  externalTrafficPolicy: Local
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
  - name: httpd
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    kubevirt.io/vm: fedora-vm
  type: LoadBalancer

Below is a demo; you will notice that I had to remove a previous known_hosts entry with the same IP from my file.

Some other interesting commands using virtctl would be the following, I am going to let you guess what they each do:


virtctl start fedora-vm -n fedora-vm

virtctl console fedora-vm -n fedora-vm

virtctl stop fedora-vm -n fedora-vm

Protect with Veeam Kasten

Now that we have a working machine running on our Kubernetes cluster, we should probably back up and protect it. This is a similar process to the last post, which covered protecting your stateful workloads within Kubernetes: we can create a policy to protect this VM and everything in the namespace.
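
As a rough illustration only, a Kasten policy for this namespace expressed as a custom resource looks something like the sketch below; the names and retention values are placeholders, and the exact schema should be checked against the Kasten documentation.

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: fedora-vm-backup
  namespace: kasten-io
spec:
  comment: Snapshot the fedora-vm namespace, keeping only a couple of restore points
  frequency: '@daily'
  retention:
    daily: 2
  actions:
    - action: backup
  selector:
    matchExpressions:
      - key: k10.kasten.io/appNamespace
        operator: In
        values:
          - fedora-vm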

Wrap Up…

I got things protected with Kasten but I need to go back and check a few things are correct in regards to the Ceph Filesystem storageclass and make sure I am protecting the VMs in the correct way moving forward.

This was really to focus on getting Virtual machines up and running in my lab at home to get to grips with virtualisation on Kubernetes. I want to get another post done on Kanister and the specifics around application consistency and then come back to a more relevant workload on these VMs alongside your containerised workloads.

Dark Kubernetes Clusters & managing multi clusters – Part 2
https://vzilla.co.uk/vzilla-blog/dark-kubernetes-clusters-managing-multi-clusters-part-2 (Tue, 10 Aug 2021)

In the last post we focused on using inlets to create a WebSocket that provides a secure public endpoint for the Kubernetes API and port 8080 for Kasten K10, which are otherwise not publicly reachable. In this post we are going to concentrate on the Kasten K10 multi-cluster configuration. I am also going to share a great article from Dean Lewis talking about Kasten multi-cluster.

Deploying K10

Deploying Kasten K10 is a simple helm chart deployment that I covered in a post a few months back here.

 kubectl create ns kasten-io
namespace/kasten-io created
 helm install k10 kasten/k10 --namespace=kasten-io

Accessing K10

For the purposes of this demo I am just port forwarding on each cluster, but you could use ingress to expose the dashboard to specific network addresses. If I were going to do this again, I would set up ingress on each of the clusters, which would slightly change the inlets configuration.
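
For reference, the port forward I used for the K10 dashboard on each cluster is the one from the Kasten docs; the dashboard then answers on http://127.0.0.1:8080/k10/#/.

kubectl --namespace kasten-io port-forward service/gateway 8080:8000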

Multi-cluster setup-primary

We have 3 clusters, and we need to decide our primary cluster so that we can start the configuration and bootstrap process. In this demo I have chosen the CIVO cluster located in NYC1. More about this configuration setup can be found here in the official documentation.

You will see from the commands and the images below that we are using the k10multicluster tool. This is a binary available from the Kasten GitHub page, and it provides the functionality for bootstrapping your multi-cluster configurations.

k10multicluster setup-primary --context=mcade-civo-cluster01 --name=mcade-civo-cluster01

080721 1029 DarkKuberne1

Bootstrap the secondary (dark site)

The main purpose of the demo is to prove that we can bring our local K3D cluster in, from a data management perspective, and manage everything in one location.

k10multicluster bootstrap --primary-context=mcade-civo-cluster01 --primary-name=mcade-civo-cluster01 --secondary-context=k3d-darksite --secondary-name=k3d-darksite --secondary-cluster-ingress-tls-insecure=true --secondary-cluster-ingress=http://209.97.177.194:8080/k10

or

k10multicluster bootstrap \
--primary-context=mcade-civo-cluster01 \
--primary-name=mcade-civo-cluster01 \
--secondary-context=k3d-darksite \
--secondary-name=k3d-darksite \
--secondary-cluster-ingress-tls-insecure=true \
--secondary-cluster-ingress=http://209.97.177.194:8080/k10

080721 1029 DarkKuberne2

Managing Kasten K10 multi-cluster

I will make more content going into more detail about Kasten K10 multi-cluster, but for the purposes of the demo, if you now log in to your primary cluster's web interface you will have the multi-cluster dashboard, and with the above commands run you will see that we are managing the K3D cluster.

080721 1029 DarkKuberne3

From here we can create global backup policies and other global configurations which also could enable the ability to move applications between your clusters easily. I think there is a lot more to cover when it comes to multi cluster and the capabilities there. The purpose of this blog was to highlight how inlets could enable not only access to the Kubernetes API but also to other services within your Kubernetes clusters.

You will have noticed in the above that I am using TLS insecure, this was due to me changing my environment throughout the demo. Inlets very much enables you to use TLS and have verification on.

Useful Resources

I mentioned in the first post that I would share some of the useful posts I used to get things up and running, along with a lot of help from Alex Ellis:

https://blog.alexellis.io/get-private-kubectl-access-anywhere/

https://docs.inlets.dev/#/?id=for-companies-hybrid-cloud-multi-cluster-and-partner-access

https://inlets.dev/blog/2021/06/02/argocd-private-clusters.html

I have obviously used Kasten K10 and the Kubernetes API, but this same process could be used for anything inside a private environment that needs to be punched out to the internet for access.

Dark Kubernetes Clusters & managing multi clusters
https://vzilla.co.uk/vzilla-blog/dark-kubernetes-clusters-managing-multi-clusters (Mon, 09 Aug 2021)

Let's first start by defining the "Dark" mentioned in the title. This could relate to a cluster that needs to have minimal to no access from the internet, or it could be a home Kubernetes cluster. The example I will be using in this post is a K3s cluster (run via K3D) deployed in my home network; I do not have a static IP address with my ISP, and I would like others to be able to connect to my cluster for collaboration, or for something around data management that we will get to later.

What is the problem?

How do you access dark sites over the internet?

How do you access dark Kubernetes clusters over the internet? Not to be confused with dark deployment or A/B testing.

Do you really want a full-blown VPN configuration to put in place?

If you are collaborating amongst multiple developers do you want KUBECONFIGS shared everywhere?

And my concern, and the reason for writing this post, is how Kasten K10 Multi-Cluster would access a dark-site Kubernetes cluster to provide data management for that cluster and its data.

080721 1005 DarkKuberne1

What is Inlets?

080721 1005 DarkKuberne2

First, I went looking for a solution. I could have implemented a VPN so that people could VPN into my entire network and then get to the K3D cluster I have locally, but this seemed an overkill and complicated way to give access. It is a much bigger opening than is needed.

Anyway, Inlets enables “Self-hosted tunnels, to connect anything.”

Another important pro to inlets is that it replaces opening firewall-ports, setting up VPNs, managing IP ranges, and keeping track of port-forwarding rules.

I was looking for something that would provide a secure public endpoint for my Kubernetes cluster (6443) and my Kasten K10 deployment (8080), which would not otherwise be publicly reachable.

You can find a lot more information about Inlets here at https://inlets.dev/ I am also going to share some very good blog posts that helped me along the way later in this post.

Let’s now paint the picture

What if we have some public cloud clusters but also some private clusters, maybe running locally on our laptops, or even dark sites? For this example I am using CIVO; in my last post I went through the UI and CLI to create these clusters, and as they were there I wanted to take advantage of that. As you can also see, we have our local K3D cluster running within my network. With the CIVO clusters we have our KUBECONFIG files available with a public IP to access them; the managed service offerings make it much simpler to have that public IP ingress to your cluster. It is a little different when you are on home ISP-backed internet, but you still have the same requirement.

080721 1005 DarkKuberne3

My local K3D Cluster

If you were not on my network, you would have no access from the internet to my cluster. That, for one, stops any collaboration, but it also stops me being able to use Kasten K10 to protect my stateful workloads within this cluster.

080721 1005 DarkKuberne4

Now for the steps to change this access

There are 6 steps to get this up and running,

  1. Install inletsctl on a dev machine to deploy the exit-server (taken from https://docs.inlets.dev/#/ – The remote server is called an "exit-node" or "exit-server" because that is where traffic from the private network appears. The user's laptop has gained a "VirtualIP" and users on the Internet can now connect to it using that IP.)
  2. Inlets-Pro Server droplet deployed in Digital Ocean using inletsctl (I am using Digital Ocean but there are other options – https://docs.inlets.dev/#/?id=exit-servers)
  3. License file obtained from Inlets.dev, monthly or annual subscriptions
  4. Export TCP Ports (6443) and define upstream of local Kubernetes cluster (localhost), for Kasten K10 I also exposed 8080 which is what is used for the ingress service for the multi-cluster functionality.
  5. curl -k https://Inlets-ProServerPublicIPAddress:6443
  6. Update KUBECONFIG to access through websocket from the internet

Deploying your exit-server

I used arkade to install inletsctl; more can be found here. The first step once you have the CLI is to get your exit-server deployed. I created a droplet in Digital Ocean to act as our exit-server; it could be one of many other locations, as mentioned and shown in the link above. The following command is what I used to get my exit-server created.

inletsctl create \
--provider digitalocean \
--access-token-file do-access-token.txt \
--region lon1

080721 1005 DarkKuberne5

Define Ports and Local (Dark Network IP)

You can see from the above screenshot that the tool also gives you handy tips on the commands you now need to run to configure your inlets PRO exit-server within Digital Ocean. We now have to define our ports, which for us will be 6443 (Kubernetes API) and 8080 (Kasten K10 ingress), and we also need to define the IP address on our local network.

export TCP_PORTS="6443,8080" - Kubernetes API Server
export UPSTREAM="localhost" - My local network address for ease localhost works.

inlets-pro tcp client --url "wss://209.97.177.194:8123" \
 --token "S8Qdc8j5PxoMZ9GVajqzbDxsCn8maxfAaonKv4DuraUt27koXIgM0bnpnUMwQl6t" \
 --upstream $UPSTREAM \
 --ports $TCP_PORTS \
 --license "$LICENSE"

080721 1005 DarkKuberne6

Image note – I had to go back and add export TCP_PORTS="6443,8080" for the Kasten dashboard to be exposed.

Secure WebSocket is now established

When you run the commands above to configure inlets PRO, you will then have the following if everything is configured correctly. Leave this open in a terminal; this is the connection between the exit-server and your local network.

080721 1005 DarkKuberne7

Confirm access with curl

As we are using the Kubernetes API, we are not expecting a fully authorised experience via curl but it does show you have external connectivity with the following command.

curl -k https://178.128.38.160:6443

080721 1005 DarkKuberne8

Updating KubeConfig with Public IP

We already had our KUBECONFIG for our local K3D deployment; for the record, I used the following command to create my cluster. If you do not specify the API port as 6443, a random high port will be used, which will skew everything we have done at this stage.

k3d cluster create darksite --api-port 0.0.0.0:6443

Anyway, back to updating the kubeconfig file, you will have the following in there currently which is fine for access locally inside the same host.

080721 1005 DarkKuberne9

Make that change with the public facing IP of the exit-server

080721 1005 DarkKuberne10

Then locally you can confirm you still have access

080721 1005 DarkKuberne11
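
A minimal sketch of that kubeconfig change, using the example exit-server IP from this post (the "before" value is whatever address your k3d command wrote into the file):

# before (local-only access)
    server: https://0.0.0.0:6443
# after (via the inlets exit-server public IP)
    server: https://178.128.38.160:6443

# confirm access still works
kubectl get nodes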

Overview of Inlets configuration

Now we have a secure WebSocket configured and we have external access to our hidden or dark Kubernetes cluster. You can see below how this looks.

080721 1005 DarkKuberne12

At this stage we can share the KUBECONFIG file, and we have shared access to our K3D cluster within our private network.

I am going to end this post here. In the next post we will cover how I then configured Kasten K10 multi-cluster so that I can manage my two CIVO clusters and my K3D cluster from a data management perspective, using inlets to provide that secure WebSocket.

Welcome to Kubernetes Korner
https://vzilla.co.uk/vzilla-blog/welcome-to-kubernetes-korner (Thu, 05 Aug 2021)

We have created an open community forum where we can discuss all things Kubernetes, data management, Kasten and DevOps in one place, where anyone can contribute, learn, or share. At the end of the post you will see why we decided on this route versus what now seems to be the de facto option of creating a Discord or Slack channel.

Basically, my ask is that I would love to see you all in there, sharing your experiences and asking your questions.

https://community.veeam.com/groups/kubernetes-korner-90

080521 0652 WelcometoKu1

Welcome!

I wanted to start by welcoming everyone to our new Kubernetes Korner, where we can discuss all things cloud-native, Kubernetes and DevOps. More importantly, it is where we can come to share our experiences in learning this still relatively new world and ask questions of our fellow community members.

We are also hoping to gather feedback around the Kasten K10 platform and Open-Source projects so that we can better understand the product strategy and feedback on where to go next and how to improve the overall experience. We also want this to be a community space to ask your Kasten questions, I will be active in here as well as some of the Kasten product managers and then I expect we have a few community members that are also hands on with Kasten K10 daily that can offer their advice and solutions.

My Ask

I also wanted to kick off the Korner with a question to you all. Each and every one of us will have a different background and learning journey when it comes to Kubernetes and DevOps. I want to know what your biggest challenge has been so far: what is the one thing you have felt you really struggled with, and how did you overcome it? Or maybe you thought a topic was going to be daunting, but when you got into it, it wasn't, and you were able to sail through and get a better foundational knowledge of the topic? Or did you have the skill already and just reinforce the learning that you already had?

Mine was Linux; everything in Kubernetes, and pretty much DevOps, is Linux orientated. I was under the impression you needed years of experience and a massive amount of time behind a Linux OS. Now, I am no Linux expert by any stretch, but the years of messing around, deploying apps and making things happen have massively helped when it comes to getting around. My biggest advice to anyone who thinks they are in the same boat as me: get hands on, convert a laptop to Ubuntu or one of the overwhelming number of Linux distributions out there, and get hands on every day.

I am going to be suggesting this a lot over the next plan of content I have, get hands on! I started with an introduction to DevOps covering in brief 12 steps to get into and understand more about DevOps. You can find that article here – https://blog.kasten.io/devops-learning-curve and over the next few weeks and months I plan to get into each of these topics a little deeper and share that as I did with my Kubernetes learning journey a few months back.

Community First

Feel free to ask your questions, share your experiences and as you can see any mention to boats, the sea or anything nautical is very much welcome.

Another big part of creating this community forum is that I feel a lot of the community has gone behind closed doors into hidden Slack and Discord channels, and what was once a super useful, searchable, open community is now very much locked away unless you join thousands of channels and platforms. We wanted something open, where everyone can discuss and share experiences.

Getting started with CIVO Cloud
https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud (Mon, 02 Aug 2021)

I have been meaning to jump in here for a while, and today I finally got the chance; it was super quick to get things up and running, especially when you get the $250 of free credit as well! As a playground for learning this is a great place to get started, with quick deployment.

This post is going to walk through pretty much from step 1 when you sign in for the first time and how you can easily deploy a Kubernetes cluster from both the UI portal and the Civo CLI.

When you sign up for your CIVO account and your free $250 credit balance, you need to add your credit card and then you can start exploring.

080221 1819 Gettingstar1

My next task was to get the CIVO CLI onto my WSL instance; to do this I used arkade to install the CLI.

arkade get civo

To add your newly created account to your CIVO CLI, follow these next simple steps. First you will need your API key from the portal; you can find this under Account > Security, and you need to take a copy of the string I have blurred out.

080221 1819 Gettingstar2

On your system where you have deployed the CIVO CLI you can now take this API Key and add this using the following command.

civo apikey add MichaelCade <API KEY>

I called my account my name, but it seems you can choose whichever account name you wish; it does not have to line up with a username. We can confirm that we added this API key with the following command:

civo apikey list

and then if you want to see the API Key and compare to what we found in the portal then you could run the following command also.

civo apikey show MichaelCade

080221 1819 Gettingstar3

There are many other things you can get from the CLI and obviously incorporate a lot of this into your workflows and automation. For now I am just getting things set up and ready for my first deployment. The other commands can be found here.

From the UI

We can start by creating a Kubernetes cluster through the UI. Simply select Kubernetes from the menu on the left, then Create new Kubernetes cluster, and you are greeted with this simple wizard to build out your cluster, with a great overview of how much it is going to cost you.

080221 1819 Gettingstar4

We then have the option to add marketplace applications and storage to your cluster if you would like to hit the ground running, for the purpose of my walkthrough I am not going to do that just yet. But you can see there are a lot of options to choose from.

080221 1819 Gettingstar5

We then hit Create cluster at the bottom and, no joke, in 2 minutes you have a cluster available to you.

080221 1819 Gettingstar6

Now we can also go and jump back to our Civo CLI and confirm we have some visibility into that cluster by using the following command.

civo kubernetes list

080221 1819 Gettingstar7

Connecting to your cluster

From the UI, as we can see below, it is as simple as downloading the kubeconfig file to access your cluster from your local machine. I have been reading that this approach is not the most secure, but for the purposes of learning and labbing I think this way of accessing the cluster is just fine. We should all be aware, though, of the reasons for not exposing the kubeconfig and Kubernetes over the public internet.

080221 1819 Gettingstar8

I downloaded the config file and then put that in my local .kube folder and renamed to config (there might be a better way to handle this or merge this with an existing config file, point me in the right direction if you know a good resource)

080221 1819 Gettingstar9
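
If you do want to merge rather than overwrite, a rough sketch of one way to do it with standard kubectl tooling is below; the downloaded filename here is just an example.

# Back up the existing config, then merge in the downloaded CIVO kubeconfig
cp ~/.kube/config ~/.kube/config.backup
KUBECONFIG=~/.kube/config:~/Downloads/civo-kubeconfig kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
kubectl config get-contexts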

Ok, so pretty quick: in less than 5 minutes I have a 3-node Kubernetes cluster up and running and ready for some applications. If you decide to use the UI to create your first cluster but would like to use the CLI to get your kubeconfig file, then carry on to the next section.

Create a cluster from the CLI

Creating the cluster through the UI was super quick, but we always want a way of creating a cluster through the CLI. Maybe it is a few lines of code that mean we can have a new cluster up and running in seconds with no reason to touch a UI, or maybe it is a build that is part of a wider demo; there are lots of reasons for using a CLI to deploy your Kubernetes cluster.

When I first installed my Civo CLI in WSL2 I did not have a region configured, so I checked this using the following command. You can see that neither London nor NYC is set to current.

civo region ls

080221 1819 Gettingstar10

To change this so that LON1 is my default I ran the following command and then ran the ls command again.

civo region current LON1

080221 1819 Gettingstar11

And now if I run civo kubernetes list to show the cluster created in the UI I will not see it as this was created in NYC so I would have to switch regions to see that again.

Let's now create a Kubernetes cluster from the CLI. Issue the following command; this is going to create a medium 3-node cluster, and obviously you can get granular on size, networking, and any other details you wish to configure as part of your cluster.

civo kubernetes create mcade-civo-cluster02

Once your cluster is created and ready, you can issue this command to see your clusters. In my account I now have one cluster, shown below, in LON1 and another in NYC1.

civo kubernetes list

080221 1819 Gettingstar12

If you wish to save your configuration from the CLI so that you can use kubectl locally then you can do this using the following command

civo kubernetes config mcade-civo-cluster02 -s

080221 1819 Gettingstar13

Now I want access to both my London cluster and my New York cluster via kubectl, and that can be done using the following command, which gives you access to both contexts. In order to run this, you need to be in the correct region. If you do not use the merge flag you will overwrite your kubeconfig; if, like me, you have several configs for different clusters across multiple environments, always make sure you protect that file, and merge and keep it tidy.

civo kubernetes config mcade-civo-cluster02 -s --merge

080221 1819 Gettingstar14

Obviously this post only touches the surface of what CIVO have going on, I am planning to revisit with some applications being deployed and then getting into the data management side of things and how we can then protect these workloads in CIVO.

GitOps – Including backup in your continuous deployments
https://vzilla.co.uk/vzilla-blog/gitops-including-backup-in-your-continuous-deployments (Mon, 12 Jul 2021)

In the last post we covered, at a very high level, the fundamentals of why you should consider adding a backup action to your GitOps workflows, and we also deployed ArgoCD into our Kubernetes cluster. In this post we are going to walk through a scenario showing why and how having that backup action within your process ensures that when mistakes happen (and they will), your data is protected and can be recovered easily.

This walkthrough assumes that you have Kasten K10 deployed within your Kubernetes Cluster to perform these steps. More details on this can be found at https://docs.kasten.io/latest/index.html

This is a very simple example of how we can integrate Kasten K10 with ArgoCD. It is deliberately kept very simple because we focus on using Kasten K10 with a pre-sync phase in ArgoCD.

You can follow along this walkthrough using the following GitHub Repository.

Phase 1 – Deploying the Application

071221 1254 GitOpsInclu1

First let us confirm that we do not have a namespace called mysql as this will be created within ArgoCD

We create a mysql app for sterilisation of animals in a pet clinic.

This app is deployed with Argo CD and is made of:

  * A mysql deployment
  * A PVC
  * A secret
  * A service to mysql

We also use a pre-sync job (with a corresponding service account and rolebinding) to back up the whole application with Kasten before the application sync.

At the first sync an empty restore point should be created.

071221 1254 GitOpsInclu2

Looking at the Kasten pre-sync file, you can see below the hook and sync wave that we have used here; this indicates that this job will be performed before any other task. More details can be found in the link above.

071221 1254 GitOpsInclu3

071221 1254 GitOpsInclu4
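
As a rough sketch of what such a pre-sync hook can look like: the service account name below is an assumption (it simply needs permission to create Kasten BackupActions), and the BackupAction schema should be checked against the Kasten API documentation.

apiVersion: batch/v1
kind: Job
metadata:
  name: kasten-presync-backup
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/sync-wave: "1"    # runs before the wave "2" migration job later in this post
spec:
  template:
    spec:
      serviceAccountName: k10-presync-sa   # assumed SA with rights to create BackupActions
      restartPolicy: Never
      containers:
        - name: backup
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - |
              # Ask Kasten K10 to take a backup of the mysql app before the sync continues
              kubectl create -f - <<EOF
              apiVersion: actions.kio.kasten.io/v1alpha1
              kind: BackupAction
              metadata:
                generateName: presync-backup-
                namespace: mysql
              spec:
                subject:
                  kind: App
                  name: mysql
                  namespace: mysql
              EOF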

Phase 2 – Adding Data

071221 1254 GitOpsInclu5

The scenario we are using here is of a vet clinic where there is a requirement to log all information of their patients for safe keeping and understanding what has happened to each one.

The vets create a row for each animal they will operate on.

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword

CREATE DATABASE test;
USE test;
CREATE TABLE pets (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);
INSERT INTO pets VALUES ('Puffball','Diane','hamster','f','2021-05-30',NULL);
INSERT INTO pets VALUES ('Sophie','Meg','giraffe','f','2021-05-30',NULL);
INSERT INTO pets VALUES ('Sam','Diane','snake','m','2021-05-30',NULL);
INSERT INTO pets VALUES ('Medor','Meg','dog','m','2021-05-30',NULL);
INSERT INTO pets VALUES ('Felix','Diane','cat','m','2021-05-30',NULL);
INSERT INTO pets VALUES ('Joe','Diane','crocodile','f','2021-05-30',NULL);
SELECT * FROM pets;
exit
exit

071221 1254 GitOpsInclu6

Phase 3 – ConfigMaps + Data

071221 1254 GitOpsInclu7

We create a config map that contains the list of species that will not be eligible for sterilisation. This was decided based on the experience of this clinic: operations on these species are too expensive. We can see here a link between the configuration and the data; it is very important that configuration and data are captured together.

cat <<EOF > forbidden-species-cm.yaml
apiVersion: v1
data:
  species: "('crocodile','hamster')"
kind: ConfigMap
metadata:
  name: forbidden-species
EOF

git add forbidden-species-cm.yaml
git commit -m "Adding forbidden species"
git push

When deploying the app with Argo CD we can see that a second restore point has been created.

071221 1254 GitOpsInclu8

Phase 4 – The failure scenario

071221 1254 GitOpsInclu9

At this stage of our application we want to remove all the rows that have species in the list, for that we use a job that connects to the database and that deletes the rows.

But we made a mistake in the code, and we accidentally delete every row.

Notice that we use wave 2 (argocd.argoproj.io/sync-wave: "2") to make sure this job is executed after the Kasten job.

cat <<EOF > migration-data-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migration-data-job
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/sync-wave: "2"
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
        - command:
            - /bin/bash
            - -c
            - |
              #!/bin/bash
              # Oh no !! I forgot to add the "where species in \${SPECIES}" clause in the delete command 🙁
              mysql -h mysql -p\${MYSQL_ROOT_PASSWORD} -uroot -Bse "delete from test.pets"
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-root-password
                  name: mysql
            - name: SPECIES
              valueFrom:
                configMapKeyRef:
                  name: forbidden-species
                  key: species
          image: docker.io/bitnami/mysql:8.0.23-debian-10-r0
          name: data-job
      restartPolicy: Never
EOF

git add migration-data-job.yaml
git commit -m "migrate the data to remove the forbidden species from the database, oh no I made a mistake, that will remove all the species !!"
git push

Now head on back to ArgoCD, sync again, and see what damage it has done to our database.

Let’s now take a look at the database state after making the mistake

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword
USE test;
SELECT * FROM pets;

071221 1254 GitOpsInclu10

The image below shows the 3 restore points that we created via ArgoCD before the code changes.

071221 1254 GitOpsInclu11

Phase 5 – The Recovery

071221 1254 GitOpsInclu12

At this stage we could roll back our ArgoCD to our previous version, prior to Phase 4 but you will notice that this just brings back our configuration and it is not going to bring back our data!

Fortunately, we can use kasten to restore the data using the restore point.

You will see from the above that when we check the database our data is gone! It is lucky that we had this pre-sync enabled to take those backups prior to any code change. We can now use that restore point to bring back our data.

I am going to link here to how you would configure Kasten K10 to protect your workload but also how you would recover, this post is already getting too long.

Let’s now look at the database state after recovery

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword
USE test;
SELECT * FROM pets;

If you have followed along then you should now see good data.

Phase 6 – Making things right

071221 1254 GitOpsInclu13

We have rectified our mistake in the code and would like to correctly implement this now into our application.

cat <<EOF > migration-data-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migration-data-job
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/sync-wave: "2"
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
        - command:
            - /bin/bash
            - -c
            - |
              #!/bin/bash
              # This time the delete command includes the "where species in \${SPECIES}" clause
              mysql -h mysql -p\${MYSQL_ROOT_PASSWORD} -uroot -Bse "delete from test.pets where species in \${SPECIES}"
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-root-password
                  name: mysql
            - name: SPECIES
              valueFrom:
                configMapKeyRef:
                  name: forbidden-species
                  key: species
          image: docker.io/bitnami/mysql:8.0.23-debian-10-r0
          name: data-job
      restartPolicy: Never
EOF

git add migration-data-job.yaml
git commit -m "fix the migration job so it only removes the forbidden species from the database"
git push

Another backup / restore point is created at this stage.

Let’s take a look at the database state and make sure we now have the desired outcome.

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword
USE test;
SELECT * FROM pets;

At this stage you will have your desired data in your database but peace of mind that you have a way of recovering if this accident happens again.

You can now check your database, and you will see that the ConfigMap-driven job manipulates your data as you originally planned.

Clean Up

If you are using this as a demo, then you may now want to clean up your environment to run this multiple times. You can do this by following the next steps.

Delete the app from ArgoCD in the UI. There will also be a way to remove it with the Argo CD CLI, but I have not had a chance to find it yet.
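
For reference, the CLI equivalent is likely along these lines, assuming the application was registered under the name mysql:

argocd app delete mysql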

Delete namespace

kubectl delete namespace mysql

Delete rolebinding

kubectl delete rolebinding pre-sync-k10-basic

GitOps – Getting started with ArgoCD
https://vzilla.co.uk/vzilla-blog/gitops-getting-started-with-argocd (Mon, 10 May 2021)

Last week at the Kasten booth at KubeCon 2021 EU I gave a 30-minute session on "Incorporating data management into your continuous deployment workflows and GitOps model". The TL;DR was that with Kasten K10 we can use BackupActions and hooks from your favourite CD tool to make sure that with any configuration change you also take a backup of your configuration before the change, and, most importantly, that the data is grabbed as well. This becomes more apparent and more useful when you are leveraging ConfigMaps to interact with data that is being consumed and added by an external group of people and is not stored within version control.

Continuous Integration and Continuous Deployment seem to come hand in hand in all conversations, but actually they are, or at least to me they can be, two different and completely separate workflows. It is important to note here that this walkthrough is not focusing on Continuous Integration but rather on the deployment/delivery of your application and incorporating data management into those workflows.

050921 1635 GitOpsGetti1

Deploying ArgoCD

Before we get into the steps and the scenario, we need to deploy our Continuous Deployment tool, for this demo I am going to be using ArgoCD.

I hear you cry "But what is ArgoCD?" – "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes."

Version control is the key here. Ever made a change to your environment on the fly and had no recollection of that change, and because the lights are on and everything is green you keep plodding along? Ever made a change and broken everything, or some of everything? You might have known you made the change, and you can quickly roll back that bad script or misspelling. Now, have you ever done this at massive scale, where maybe it was not you, or maybe it was not found straight away, and now the business is suffering? Therefore, version control is important. Not only that, but "Application definitions, configurations, and environments should be declarative, and version controlled." On top of this (which comes from ArgoCD), they also mention that "Application deployment and lifecycle management should be automated, auditable, and easy to understand."

Coming from an operations background, but having played a lot with Infrastructure as Code, this is the next step to ensuring all of that good stuff is taken care of along the way with continuous deployment/delivery workflows.

Now we go ahead and deploy ArgoCD into our Kubernetes cluster. Before I deploy anything I like to make sure that I am on the correct cluster, normally by running the following command to check my nodes. We then also need to create a namespace.

#Confirm you are on the correct cluster

kubectl get nodes
#Create a namespace
kubectl create namespace argocd
#Deploy CRDs
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.0.0-rc3/manifests/install.yaml

050921 1635 GitOpsGetti2

When all ArgoCD pods are up running you can confirm this by running the following command.

#Confirm all CRDs are deployed
kubectl get all -n argocd

050921 1635 GitOpsGetti3

When the above is looking good, we should then think about accessing the UI via a port forward, using the following command.

#When everything is ready, we want to access the ArgoCD UI
kubectl port-forward svc/argocd-server -n argocd 8080:443

050921 1635 GitOpsGetti4

Now we can connect to ArgoCD, navigate to your port forward using your https://localhost:8080 address and you should have the below screen.

050921 1635 GitOpsGetti5

To log in you will need the username admin, and to grab the generated secret to use as your password, run the following command. I am using WSL and an Ubuntu instance to run it; if you are using Windows there are Base64 tools out there, apparently. I have just been trying to immerse myself in Linux.

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo

050921 1635 GitOpsGetti6

When you log in for the first time you will not see the tiles that I have; those are apps I have already deployed. You will have a blank canvas.

050921 1635 GitOpsGetti7
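
To get a first application onto that canvas, the Argo CD getting-started guide uses its example guestbook app; a sketch of the CLI route looks like this (the password is the secret retrieved above, and --insecure is only there because we are going through the local port forward):

argocd login localhost:8080 --username admin --password <initial-admin-secret> --insecure
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
argocd app sync guestbook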

Another way to Deploy, Maybe easier

Now the above method works and you can then start working on the next post that walks through the actual demo I performed in the session, but I also want to shout out arkade as another option to deploy not only ArgoCD but many different other tools that are useful in your Kubernetes environments.

The following command will get arkade installed on your system

# Note: you can also run without `sudo` and move the binary yourself
curl -sLS https://dl.get-arkade.dev | sudo sh

050921 1635 GitOpsGetti8

The first thing to do is check out the awesome list of apps available on arkade.

arkade get

050921 1635 GitOpsGetti9

Now back to this way of deployment of ArgoCD, we can now simply run this one command to get up and running.

arkade get argocd

050921 1635 GitOpsGetti10

What if we want to find out more about the options available to us within the ArgoCD deployment? arkade has good info on all of its apps, giving detail about gaining access and what needs to happen next if you are unsure.
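
That info is a single command away; for example:

arkade info argocd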

050921 1635 GitOpsGetti11

In the next post, we are going to be walking through the demo aspects of the session.
