GitOps – Including backup in your continuous deployments
https://vzilla.co.uk/vzilla-blog/gitops-including-backup-in-your-continuous-deployments – Mon, 12 Jul 2021

In the last post we covered, at a very high level, why you should consider adding a backup action to your GitOps workflows, and we deployed ArgoCD into our Kubernetes cluster. In this post we are going to walk through a scenario showing why and how having that backup action within your process ensures that when mistakes happen (and they will) your data is protected and can be recovered easily.

This walkthrough assumes that you have Kasten K10 deployed within your Kubernetes Cluster to perform these steps. More details on this can be found at https://docs.kasten.io/latest/index.html

This is a very simple example of how we can integrate Kasten K10 with ArgoCD. It is deliberately kept very simple because the focus is on using Kasten K10 with a pre-sync phase in ArgoCD.

You can follow along with this walkthrough using the following GitHub Repository.

Phase 1 – Deploying the Application


First, let us confirm that we do not have a namespace called mysql, as this will be created by ArgoCD.
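A quick way to check (assuming kubectl is already pointed at the right cluster):

```
# Expect "Error from server (NotFound)" if the namespace does not exist yet
kubectl get namespace mysql
```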

We create a MySQL app for a pet clinic that manages the sterilisation of animals.

This app is deployed with Argo CD and is made of:

* A MySQL deployment
* A PVC
* A secret
* A service for MySQL

We also use a pre-sync job (with a corresponding ServiceAccount and RoleBinding) to back up the whole application with Kasten before the application sync.

At the first sync an empty restore point should be created.


Looking at the Kasten pre-sync file in the repository, note the hook and sync-wave annotations we have used; they indicate that this job will be performed before any other task. More details can be found in the link above.
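As an illustration only, such a pre-sync hook could be shaped like the sketch below. The policy name, ServiceAccount, and image here are assumptions for illustration, not the exact contents of the repository file; an on-demand K10 backup is typically triggered by creating a RunAction against an existing Policy:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kasten-presync-backup
  annotations:
    argocd.argoproj.io/hook: PreSync      # run as part of the pre-sync phase
    argocd.argoproj.io/sync-wave: "1"     # before any higher-wave pre-sync hooks
spec:
  template:
    spec:
      serviceAccountName: k10-presync-sa  # assumed SA with RBAC to create K10 actions
      containers:
      - name: trigger-backup
        image: bitnami/kubectl:latest
        command:
        - /bin/bash
        - -c
        - |
          # Ask K10 to run an existing policy that protects the mysql namespace
          kubectl create -f - <<EOF
          apiVersion: actions.kio.kasten.io/v1alpha1
          kind: RunAction
          metadata:
            generateName: presync-backup-
          spec:
            subject:
              apiVersion: config.kio.kasten.io/v1alpha1
              kind: Policy
              name: mysql-backup-policy
              namespace: kasten-io
          EOF
      restartPolicy: Never
```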


Phase 2 – Adding Data


The scenario we are using here is a vet clinic where there is a requirement to log all information about their patients, both for safekeeping and for understanding what has happened to each one.

The vets create a row for each animal they are going to operate on.

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword

CREATE DATABASE test;
USE test;
CREATE TABLE pets (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);
INSERT INTO pets VALUES ('Puffball','Diane','hamster','f','2021-05-30',NULL);
INSERT INTO pets VALUES ('Sophie','Meg','giraffe','f','2021-05-30',NULL);
INSERT INTO pets VALUES ('Sam','Diane','snake','m','2021-05-30',NULL);
INSERT INTO pets VALUES ('Medor','Meg','dog','m','2021-05-30',NULL);
INSERT INTO pets VALUES ('Felix','Diane','cat','m','2021-05-30',NULL);
INSERT INTO pets VALUES ('Joe','Diane','crocodile','f','2021-05-30',NULL);
SELECT * FROM pets;
exit
exit


Phase 3 – ConfigMaps + Data


We create a ConfigMap that contains the list of species that are not eligible for sterilisation. This was decided based on the experience of this clinic: operations on these species are too expensive. We can see here a link between the configuration and the data; it is very important that configuration and data are captured together.

cat <<EOF > forbidden-species-cm.yaml
apiVersion: v1
data:
  species: "('crocodile','hamster')"
kind: ConfigMap
metadata:
  name: forbidden-species
EOF

git add forbidden-species-cm.yaml
git commit -m "Adding forbidden species"
git push

When we sync the app with Argo CD, we can see that a second restore point has been created.


Phase 4 – The failure scenario


At this stage of our application, we want to remove all the rows whose species are in the forbidden list. For that we use a job that connects to the database and deletes those rows.

But we made a mistake in the code, and we accidentally delete other rows.

Notice that we use the annotation argocd.argoproj.io/sync-wave: "2" to make sure this job is executed after the Kasten job.

cat <<EOF > migration-data-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migration-data-job
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/sync-wave: "2"
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - |
          #!/bin/bash
          # Oh no !! I forgot the "where species in \${SPECIES}" clause in the delete command 🙁
          mysql -h mysql -p\${MYSQL_ROOT_PASSWORD} -uroot -Bse "delete from test.pets"
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql-root-password
              name: mysql
        - name: SPECIES
          valueFrom:
            configMapKeyRef:
              name: forbidden-species
              key: species
        image: docker.io/bitnami/mysql:8.0.23-debian-10-r0
        name: data-job
      restartPolicy: Never
EOF

git add migration-data-job.yaml

git commit -m "migrate the data to remove the forbidden species from the database, oh no I made a mistake, that will remove all the species !!"

git push

Now head back to ArgoCD, sync again, and see what damage it has done to our database.

Let's now take a look at the database state after making the mistake.

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword
USE test;
SELECT * FROM pets;


At this point we have three restore points that were created via ArgoCD prior to the code changes.


Phase 5 – The Recovery


At this stage we could roll our ArgoCD application back to the previous version, prior to Phase 4, but you will notice that this only brings back our configuration; it is not going to bring back our data!

Fortunately, we can use Kasten to restore the data using the restore point.

When we check the database, our data is gone! It is lucky that we had this pre-sync hook enabled to take those backups prior to any code changes. We can now use that restore point to bring back our data.

This post is already getting long, so I am going to link here to how you would configure Kasten K10 to protect your workload and also how you would recover.
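As a pointer for the curious, restores can also be driven through K10's action API rather than the dashboard. The sketch below is illustrative only; the restore point name is a placeholder, and the exact fields may vary with your K10 version:

```yaml
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RestoreAction
metadata:
  generateName: restore-mysql-
  namespace: mysql
spec:
  subject:
    apiVersion: apps.kio.kasten.io/v1alpha1
    kind: RestorePoint
    name: <restore-point-name>   # placeholder: the restore point shown in the K10 dashboard
    namespace: mysql
  targetNamespace: mysql
```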

Let’s now look at the database state after recovery

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword
USE test;
SELECT * FROM pets;

If you have followed along then you should now see good data.

Phase 6 – Making things right


We have rectified our mistake in the code and would like to correctly implement this now into our application.

cat <<EOF > migration-data-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migration-data-job
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/sync-wave: "2"
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - |
          #!/bin/bash
          # This time we include the "where species in \${SPECIES}" clause in the delete command
          mysql -h mysql -p\${MYSQL_ROOT_PASSWORD} -uroot -Bse "delete from test.pets where species in \${SPECIES}"
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql-root-password
              name: mysql
        - name: SPECIES
          valueFrom:
            configMapKeyRef:
              name: forbidden-species
              key: species
        image: docker.io/bitnami/mysql:8.0.23-debian-10-r0
        name: data-job
      restartPolicy: Never
EOF
git add migration-data-job.yaml
git commit -m "Fix the migration job to only remove the forbidden species from the database"
git push

Another backup / restore point is created at this stage.

Let’s take a look at the database state and make sure we now have the desired outcome.

mysql_pod=$(kubectl get po -n mysql -l app=mysql -o jsonpath='{.items[*].metadata.name}')
kubectl exec -ti $mysql_pod -n mysql -- bash
mysql --user=root --password=ultrasecurepassword
USE test;
SELECT * FROM pets;

At this stage you will have your desired data in your database, plus peace of mind that you have a way of recovering if an accident like this happens again.

You can now check your database, and you will see that the ConfigMap-driven job manipulates your data as you originally planned.

Clean Up

If you are using this as a demo, you may now want to clean up your environment so that you can run it multiple times. You can do this by following the next steps.

Delete the app from ArgoCD in the UI. There will also be a way to remove it with the argocd CLI, but I have not had a chance to find this yet.
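For completeness, the argocd CLI does have an app delete command; the app name below is a placeholder for whatever you named your application:

```
# Remove the application from ArgoCD (use --cascade to control whether its resources are also deleted)
argocd app delete <app-name>
```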

Delete namespace

kubectl delete namespace mysql

Delete rolebinding

kubectl delete rolebinding pre-sync-k10-basic

GitOps – Getting started with ArgoCD
https://vzilla.co.uk/vzilla-blog/gitops-getting-started-with-argocd – Mon, 10 May 2021

Last week at the Kasten booth at KubeCon 2021 EU I gave a 30-minute session on "Incorporating data management into your continuous deployment workflows and GitOps model". The TLDR was that with Kasten K10 we can use BackupActions and hooks from your favourite CD tool to make sure that with any configuration change you also take a backup of your configuration before the change, and most importantly the data is captured as well. This becomes more apparent and more useful when you are leveraging ConfigMaps to interact with data that is being consumed and added by an external group of people, and that data is not stored within version control.

Continuous Integration and Continuous Deployment seem to go hand in hand in every conversation, but in reality they are, or at least to me can be, two different and separate workflows entirely. It is important to note that this walkthrough is not focusing on Continuous Integration but rather on the Deployment/Delivery of your application and on incorporating data management into your workflows.


Deploying ArgoCD

Before we get into the steps and the scenario, we need to deploy our Continuous Deployment tool; for this demo I am going to be using ArgoCD.

I hear you cry, "But what is ArgoCD?" In the project's own words, "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes."

Version control is the key here. Ever made a change to your environment on the fly, had no recollection of that change, and because the lights are on and everything is green you keep plodding along? Ever made a change and broken everything, or some of everything? You might have known you made the change, so you could quickly roll back that bad script or misspelling. Now imagine this at massive scale, where maybe it was not you, or maybe it was not found straight away, and now the business is suffering. Therefore, version control is important. Not only that, but "Application definitions, configurations, and environments should be declarative, and version controlled." On top of this (which comes from ArgoCD), they also mention that "Application deployment and lifecycle management should be automated, auditable, and easy to understand."

Coming from an Operations background, but having played a lot with Infrastructure as Code, this is the next step to ensuring all of that good stuff is taken care of along the way with continuous deployment/delivery workflows.

Now we can go ahead and deploy ArgoCD into our Kubernetes cluster. Before I deploy anything, I like to make sure that I am on the correct cluster, normally by running a command to check my nodes. We then also need to create a namespace.

#Confirm you are on the correct cluster
kubectl get nodes
#Create a namespace
kubectl create namespace argocd
#Deploy ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.0.0-rc3/manifests/install.yaml


You can confirm that all the ArgoCD pods are up and running with the following command.

#Confirm all the ArgoCD components are deployed
kubectl get all -n argocd


When the above is looking good, we can access the ArgoCD UI via a port forward using the following command.

#When everything is ready, we want to access the ArgoCD UI
kubectl port-forward svc/argocd-server -n argocd 8080:443


Now we can connect to ArgoCD: navigate to https://localhost:8080 and you should reach the ArgoCD login screen.


To log in you will need the username admin, and for the password you need to grab the generated secret using the following command. I am using WSL with an Ubuntu instance; if you are using Windows, there are Base64 tools available, I have simply been trying to immerse myself in Linux.

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
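As a quick aside for anyone new to this: the secret value stored in the cluster is base64-encoded, which is why the command above pipes through base64 -d. A minimal standalone example (using a sample string, not a real ArgoCD password):

```shell
# Decode a base64 string the same way the admin secret is decoded
echo "cGFzc3dvcmQ=" | base64 -d && echo
# prints "password"
```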


When you log in for the first time you will not see the application tiles that I have here from apps I have already deployed; you will have a blank canvas.


Another way to Deploy, Maybe easier

Now, the above method works, and you can then move on to the next post that walks through the actual demo I performed in the session. But I also want to shout out arkade as another option to deploy not only ArgoCD but many other tools that are useful in your Kubernetes environments.

The following command will get arkade installed on your system

# Note: you can also run without `sudo` and move the binary yourself
curl -sLS https://dl.get-arkade.dev | sudo sh


The first thing to do is check out the awesome list of apps available on arkade.

arkade get


Back to this way of deploying ArgoCD: we can now simply run one command to get up and running.

arkade get argocd


What if we want to find out more about the options available to us within the ArgoCD deployment? arkade has good info for all of its apps, detailing how to gain access and what needs to happen next if you are unsure.
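If memory serves, that detail is exposed through the info subcommand (check arkade --help if the subcommand differs in your version):

```
# Show post-install instructions and access details for the argocd app
arkade info argocd
```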


In the next post, we are going to be walking through the demo aspects of the session.
