I have been concentrating a lot on my home lab this year. I have covered the setup in previous posts, but in short: I have a 5-node Talos Kubernetes cluster with rook-ceph as my storage layer, and I needed some monitoring for it.

In a VM I am running Veeam Backup & Replication, and I wanted to get some hands-on time with Grafana. I have more plans, but this was project #1.

My good friend Jorge has been working on Grafana dashboards for Veeam for years. You can find one of the dashboards here.

The Plan:

We are going to use our Kubernetes cluster to host our Grafana instance. Jorge has shared a script that we are going to repurpose into a cronjob that runs on a schedule, every 5 minutes. It will grab details via the Veeam Backup & Replication API, and we will then have data visualisation inside our Grafana dashboard.
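Under the hood, the script talks to the Veeam Backup & Replication REST API. As a rough sketch of the first step it performs (the server address and credentials below are placeholders, and Jorge's script is the authoritative implementation):

```shell
# Hedged sketch: authenticate against the VBR REST API to obtain a token.
# Server, port and credentials are example values - use your own.
curl -k -s -X POST "https://192.168.169.185:9419/api/oauth2/token" \
  -H "x-api-version: 1.2-rev0" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&username=administrator&password=<your-password>"
# The returned access_token is then sent as "Authorization: Bearer <token>"
# on the data endpoints, and the results are written to InfluxDB.
```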

image

Deployment: Grafana & InfluxDB

We obviously need Grafana to show our Grafana dashboard, and we will also need InfluxDB, which is where the cronjob will store the API data collected from Veeam Backup & Replication. There are many ways to deploy Grafana into your Kubernetes cluster; you could use Helm (the Kubernetes package manager), but I am going to be using ArgoCD.
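If you want a starting point, a minimal ArgoCD Application for Grafana might look like the sketch below. The chart source, version and namespaces are assumptions, so adjust them to match your repository:

```yaml
# Hypothetical ArgoCD Application sketch - chart version and namespaces
# are examples; the real manifest lives in my GitHub repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://grafana.github.io/helm-charts
    chart: grafana
    targetRevision: 7.3.9   # example chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```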

I am storing my ArgoCD application here in this GitHub Repository.

image 1

This will get you up and running with Grafana. Next you need the IP to access your Grafana instance and the secret that goes with the default user ‘admin’.
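Assuming the deployment created a LoadBalancer service and an admin secret both named ‘grafana’ in the ‘monitoring’ namespace (the names may differ in your setup), the following should fetch both:

```shell
# Assumed names: service "grafana" and secret "grafana" in namespace
# "monitoring" - adjust to whatever your ArgoCD application created.
kubectl get svc -n monitoring grafana \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
kubectl get secret -n monitoring grafana \
  -o jsonpath='{.data.admin-password}' | base64 --decode; echo
```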

image 2

Head over to a browser and log in; from the first page you can find out more about Grafana.

image 3

Select Dashboards. You will notice that I currently have two configured; the one we are focused on is the “Grafana Dashboard for Veeam Backup & Replication”. If you have not added this in your configuration, you can add it manually using the New button in the top right.

image 4

And if you have been able to run the cronjob, you will have something resembling your Veeam environment:

image 5

Step Back

OK, all of the above is great, but I have not really helped you get there yet.

We have used ArgoCD to (hopefully) deploy Grafana, and you will also see an application in there for InfluxDB, so let’s hope those two are up and running. But we need to put a few more things in place.

First, we will need an InfluxDB token, which we can get with the following command.


kubectl get secret -n monitoring influxdb-influxdb2-auth -o jsonpath="{.data.admin-password}" | base64 --decode; echo
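As an optional sanity check (this is a hypothetical verification, not part of Jorge's script), you can confirm InfluxDB accepts the token by listing buckets through its v2 API from inside the cluster:

```shell
# Hypothetical check: list buckets via the InfluxDB v2 API using the token.
INFLUX_TOKEN=$(kubectl get secret -n monitoring influxdb-influxdb2-auth \
  -o jsonpath="{.data.admin-password}" | base64 --decode)
kubectl run influx-check --rm -it --restart=Never -n monitoring \
  --image=curlimages/curl -- curl -s \
  -H "Authorization: Token $INFLUX_TOKEN" \
  "http://influxdb-influxdb2.monitoring.svc.cluster.local/api/v2/buckets"
```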

Second, we need a secret to enable our cronjob to hit our Veeam Backup & Replication server. Obviously, add your own details here.


kubectl create secret generic veeam-influxdb-sync-secret \
  --namespace monitoring \
  --from-literal=veeamUsername=administrator \
  --from-literal=veeamPassword= \
  --from-literal=veeamInfluxDBToken=

Then, in the same GitHub repository, you will find a file called ‘veeam-influx-sync.yaml’. This is our cronjob configuration file, and we need to apply it to our cluster as well. Before we do, we need to make sure we change some of the environment variables within this file, as your environment might differ from mine.


          - name: veeamInfluxDBURL
            value: "http://influxdb-influxdb2.monitoring.svc.cluster.local"
          - name: veeamInfluxDBPort
            value: "80"
          - name: veeamInfluxDBBucket
            value: "veeam"
          - name: veeamInfluxDBOrg
            value: "influxdata"
          - name: veeamBackupServer
            value: "192.168.169.185"
          - name: veeamBackupPort
            value: "9419"
          - name: veeamAPIVersion
            value: "1.2-rev0"

Then deploy it into the cluster:


kubectl apply -f veeam-influxdb-sync.yaml

This cronjob will run every 5 minutes, but if you want to trigger it straight away, you can use this command:


kubectl create job --from=cronjob/veeam-influxdb-sync veeam-influxdb-sync-manual -n monitoring

You can then check the progress of this process using the following commands:


POD_NAME=$(kubectl get pods -n monitoring | grep '^veeam-influxdb-sync-manual-' | awk '{print $1}')
kubectl logs -f $POD_NAME -n monitoring

A big thank you to Jorge on this one; if it wasn’t for his hard work in this area, we would not have these dashboards! He has also created some amazing content around this, and it is not just Veeam dashboards; there is lots of great stuff.

Notes

In the final section of the cronjob script, I have filtered to only show the VMware platform. If you want to change this back, you will need to remove the filter below from the URL:


?platformNameFilter=VMware"

veeamVBRURL="https://$veeamBackupServer:$veeamBackupPort/api/v1/backupObjects?platformNameFilter=VMware"
image 7

I am working on an update to see if this can be resolved and catch all objects without filtering.

Iteration

If you made it this far… you must be interested! I was not happy with the situation above, where I could only display VMware, or any single platform, when I have several in my environment. I have iterated, and you will now find an updated script that loops through the different platforms, providing the data to InfluxDB and then, in turn, to Grafana.
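The core of the change can be sketched like this (a simplified, hypothetical outline; the linked script is the real implementation, and the platform names here are illustrative):

```shell
# Simplified sketch: build one request URL per platform instead of
# hardcoding platformNameFilter=VMware. Values are examples.
veeamBackupServer="192.168.169.185"
veeamBackupPort="9419"

for platform in VMware HyperV Kasten; do
  veeamVBRURL="https://$veeamBackupServer:$veeamBackupPort/api/v1/backupObjects?platformNameFilter=$platform"
  echo "$veeamVBRURL"
  # the real script queries this URL and writes the results to InfluxDB
done
```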

Here is that script

And from there you can see that my macOS backups, Hyper-V backups and Kasten backups are all now showing:

image 8
