As the title suggests, in this post we are going to be talking about the upstream KubeVirt project. As a standalone project release, KubeVirt is not supported when it comes to protecting these VMs; today that support extends only to Red Hat OpenShift Virtualisation (OCP-V) and Harvester from SUSE. This comes down to the sheer variety of hardware KubeVirt can be deployed on.

With that caveat out of the way, in a home lab we are able to tinker around with whatever we want. I will also clarify that I am using the five nodes that we have available to the community to protect these virtual machines.

We are going to cover getting KubeVirt deployed on my bare metal Talos Kubernetes cluster, getting a virtual machine up and running, and then protecting said machine.

A couple of pre-reqs before we start: make sure you follow this guide, and make sure you have virtualisation enabled and a bridge network defined in the Talos configuration.
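For reference, the bridge piece of that prerequisite looks roughly like the snippet below in a Talos machine config patch. This is only a sketch; br0 and eno1 are placeholder names, so swap in whatever matches your own NICs.

machine:
  network:
    interfaces:
      - interface: br0
        dhcp: true
        bridge:
          stp:
            enabled: true
          interfaces:
            - eno1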

Here is my configuration repository for both my virtual cluster and my bare metal cluster. I will say, though, that this documentation was really handy in finding my way. Remember that these commands are based on my environment.

Installing virtctl

We will start with virtctl, a command-line utility for managing KubeVirt virtual machines. It extends kubectl with VM-specific operations such as starting, stopping, accessing consoles and live migration. Designed to streamline VM lifecycle management within Kubernetes, it simplifies tasks that would otherwise require complex YAML configurations or direct API calls.


export VERSION=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64

A warning here: double-check anything you copy and paste, as it broke on mine.
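Once the download completes, the binary still needs to be made executable and dropped somewhere on your PATH. A minimal sketch, assuming the same VERSION variable from above and /usr/local/bin as the target:

chmod +x virtctl-${VERSION}-linux-amd64
sudo install virtctl-${VERSION}-linux-amd64 /usr/local/bin/virtctl
virtctl version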

Deploying KubeVirt

Keeping things simple, we will now deploy KubeVirt via YAML manifests as per the Talos docs linked above.


export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
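Before applying anything else, it is worth confirming that the operator has rolled out. Something along these lines works; the five minute timeout is just my own choice:

kubectl get pods -n kubevirt
kubectl -n kubevirt wait deployment virt-operator --for=condition=Available --timeout=5m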

Now that we have the operator installed in our bare metal cluster, we need to apply the custom resource. I have modified this slightly from the Talos example.


apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - LiveMigration
        - NetworkBindingPlugins
  certificateRotateStrategy: {}
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  workloadUpdateStrategy:
    workloadUpdateMethods:
      - LiveMigrate

Finally, before we get to deploying a VM, we are going to deploy CDI (Containerised Data Importer), which is needed to import disk images. I modified mine again here to suit the storage classes I have available to me.
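One thing to note: the CDI custom resource below assumes the CDI operator is already running in the cluster, which you may already have if you have been following the Talos docs. If not, it can be installed from the containerized-data-importer releases, roughly like so; the version here is a placeholder, so check the releases page for the current one:

export CDI_VERSION=v1.60.1
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml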


apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  config:
    scratchSpaceStorageClass: ceph-block
    podResourceRequirements:
      requests:
        cpu: "100m"
        memory: "60M"
      limits:
        cpu: "750m"
        memory: "2Gi"

All of these are then deployed using

kubectl create -f <filename>

but you can see this in the demo below.
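Once both custom resources are in, I like to check that KubeVirt and CDI both report a Deployed phase before moving on. A quick way to do that:

kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'
kubectl get cdi cdi -o jsonpath='{.status.phase}'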

Create a VM

Next up, we can create our virtual machine. I am again going to copy, but slightly modify, the example we have from Talos. Here is my VM YAML manifest.

Note that the SSH configuration is redacted and you will want to add your own here.


apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: fedora-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-vm
      annotations:
        kubevirt.io/allow-pod-bridge-network-live-migration: "true"
    spec:
      evictionStrategy: LiveMigrate
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4G
        devices:
          disks:
            - name: fedora-vm-pvc
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: podnet
              masquerade: {}
        networks:
          - name: podnet
            pod: {}
        volumes:
          - name: fedora-vm-pvc
            persistentVolumeClaim:
              claimName: fedora-vm-pvc
          - name: cloudinitdisk
            cloudInitNoCloud:
              networkData: |
                network:
                  version: 1
                  config:
                    - type: physical
                      name: eth0
                      subnets:
                        - type: dhcp
              userData: |-
                #cloud-config
                users:
                  - name: cloud-user
                    ssh_authorized_keys:
                      - ssh-rsa <REDACTED>
                    sudo: ['ALL=(ALL) NOPASSWD:ALL']
                    groups: sudo
                    shell: /bin/bash
                runcmd:
                  - "sudo touch /root/installed"
                  - "sudo dnf update -y"
                  - "sudo dnf install httpd fastfetch -y"
                  - "sudo systemctl daemon-reload"
                  - "sudo systemctl enable httpd"
                  - "sudo systemctl start --no-block httpd"

  dataVolumeTemplates:
  - metadata:
      name: fedora-vm-pvc
      namespace: fedora-vm
    spec:
      storage:
        resources:
          requests:
            storage: 35Gi
        accessModes:
          - ReadWriteMany
        storageClassName: "ceph-filesystem"
      source:
        http:
          url: "https://fedora.mirror.wearetriple.com/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2"

The final piece to this puzzle that I have not mentioned is that I am using Cilium as my CNI, and I am also using it to provide IP addresses accessible from my LAN. I created a service so that I could SSH to the newly created VM.


apiVersion: v1
kind: Service
metadata:
  labels:
    kubevirt.io/vm: fedora-vm
  name: fedora-vm
  namespace: fedora-vm
spec:
  ipFamilyPolicy: PreferDualStack
  externalTrafficPolicy: Local
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
  - name: httpd
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    kubevirt.io/vm: fedora-vm
  type: LoadBalancer
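
With the LoadBalancer service in place, grabbing the external IP and connecting over SSH is straightforward; <EXTERNAL-IP> below is whatever address Cilium hands out from your pool:

kubectl get svc fedora-vm -n fedora-vm
ssh cloud-user@<EXTERNAL-IP>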

Below is a demo; you will notice that I had to remove a previous known host with the same IP from my known_hosts file.

Some other interesting virtctl commands would be the following; I am going to let you guess what each one does:


virtctl start fedora-vm -n fedora-vm

virtctl console fedora-vm -n fedora-vm

virtctl stop fedora-vm -n fedora-vm

Protect with Veeam Kasten

Now that we have a working machine running on our Kubernetes cluster, we should probably back it up and protect it. This follows a similar process to the last post, which covered protecting your stateful workloads within Kubernetes. We can create a policy to protect this VM and everything else in the namespace.
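Whether you build the policy through the Kasten dashboard or as YAML, a rough sketch of what such a Policy resource can look like is below. The name, schedule and retention values are purely illustrative:

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: fedora-vm-backup
  namespace: kasten-io
spec:
  frequency: '@daily'
  retention:
    daily: 7
  actions:
    - action: backup
  selector:
    matchExpressions:
      - key: k10.kasten.io/appNamespace
        operator: In
        values:
          - fedora-vm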

Wrap Up…

I got things protected with Kasten, but I need to go back and check that a few things are correct with regard to the Ceph Filesystem storage class, and make sure I am protecting the VMs in the correct way moving forward.

This was really about getting virtual machines up and running in my lab at home and getting to grips with virtualisation on Kubernetes. I want to get another post done on Kanister and the specifics around application consistency, and then come back to a more relevant workload on these VMs alongside your containerised workloads.
