I am getting back into the home lab race. What seems like many years ago, I had a half-height rack and some servers for my home lab; then the pandemic hit, and I decided I was all in on cloud. We have a good commercial, enterprise lab in Columbus, OH, and I no longer needed a power-hungry, noisy setup at home. The original vzilla homelab
My need for a Home Lab?
Many of you may be thinking, "sounds like you have more than enough options to run some workloads for testing and learning", and you would be right: I do have access to the public cloud and a very high-end company lab. But there is always that want or need to do something locally.
Our company lab is shared by several people, and I consider that lab to be for demos from our team, be it recorded demos or breakout sessions at the many events our team covers. We should not be putting that lab at risk, which raises the question: is it a good fit for the learning side of a lab? My take is that it is not.
Then I have access to the public cloud offerings, and I absolutely use all three (Amazon, Microsoft and Google, as well as some managed service provider offerings and some cloud-based storage vendors). I will continue to have this access, but I want something local to me that I can jump on and ruin, because it's only me and it costs next to no money (unlike cloud resources).
The Initial Plan
The initial plan was to find some mini PCs with 11th-generation Intel i7 chips, loads of memory and loads of cores, and add them to my existing home lab environment, giving us more vSphere resources.
Then I looked on eBay and some similar sites and realised that my expectation of cost was different to what the market was thinking, so I had to considerably reset my expectations and, to a degree, my plan.
The plan for this 2024 home lab is to have some bare-metal hosts capable of running workloads that let me better understand different "enterprise" offerings when it comes to Kubernetes and running virtual machines on Kubernetes. None of these workloads will be considered production; I won't be running Plex on any of these machines (I do not run Plex anyway).
A learning goal of mine for 2024 is to have a better hands-on understanding of Red Hat OpenShift and Red Hat OpenShift Virtualization. I would also like to understand a little more about the KubeVirt project, which is used for the OpenShift Virtualization layer as well as within SUSE's Harvester offering.
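To give a flavour of what KubeVirt actually looks like to use, here is a minimal VirtualMachine manifest, a sketch based on the upstream KubeVirt demo container disk; the name `testvm` and the resource sizing are just placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false            # create the VM definition without starting it
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio  # paravirtualised disk bus
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: containerdisk
          containerDisk:
            # tiny Cirros demo image published by the KubeVirt project
            image: quay.io/kubevirt/cirros-container-disk-demo
```

With the KubeVirt operator installed in the cluster, you would `kubectl apply -f` this and then start the VM with `virtctl start testvm`; the same CRD model sits underneath both OpenShift Virtualization and Harvester.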
Another goal is to make sure I have a way to demonstrate Kasten K10 protecting these resources, alongside other smaller Kubernetes projects I may stumble across.
What do we already have?
Before we dive into the new toys I have gathered recently, we should discuss the two existing nodes and the storage I have available today. While these are not changing, they will give you insight into the overall home lab environment I have to play with.
To start, the compute nodes we have came from the VMware vExpert program; they were a gift (well, one of them was a gift from the vExpert program, the other came from a good friend of mine who was not using it).
These were barebones small-form-factor boxes with the specs below:
- CPU: Intel Celeron J6412
  - Quad-core
  - 2.0 GHz, burst up to 2.5 GHz
  - 64-bit
- GPU: Intel UHD Graphics
- Memory: Dual-channel SO-DIMM DDR4, up to 32GB (not included)
- Storage: 1x M.2 2242/2280 SSD, SATA optional (not included)
- Power: 12V, external power adapter
- Ports: 2x LAN, 2x USB 3.2, 2x USB 2.0, Type-C, SIM
- Video Ports: 2x HDMI 2.0
- Network Connectivity: 1x 1GbE RJ45
Upon getting these two boxes I added the following:
- Memory: each box has 32GB of "Timetec DDR4 3200MHz PC4-21300" memory.
- Storage: each unit has a "Timetec SSD 3D NAND SATA III"; this is where the OS is installed, and a local VMFS datastore is available but unused (more on shared storage later on).
- Network: due to the joys of VMware vSphere NIC support, we could not use the onboard network ports and had to purchase a "Cable Matters USB to Ethernet Adapter" for each.
- Fan: as this is a silent, fanless system, I was concerned about the heat it would generate under the planned workloads, so I also bought an "AC Infinity MULTIFAN S1, Quiet 80mm USB Fan" for each; it sits on top.
All in all, this is a very janky setup: we have 8 CPU cores and 64GB of RAM, plus some shared storage in a relatively old NETGEAR ReadyNAS 312 (1.81TB, RAID 1) and a ReadyNAS 716 (2.89TB, RAID 5).
The 716 is what I have been using for VM storage so far, as there is an element of faster SSD in that unit.
Workloads so far
This has been my home lab for just over a year, I would say, and it runs a vCenter Server, a Veeam Backup & Replication server, a WordPress testing app server, and a DevOps Ubuntu machine with Docker installed and some CLIs for automation.
We also have a Minecraft server for my little boy to play on if he wishes.
I have played quite a bit with this system, and I have had various Kubernetes clusters running on top of vSphere, both vSphere with Tanzu and vSphere CSI-backed clusters in K3s, RKE2 and, more recently, Talos (more on Talos later).
What are we adding?
As I said, I was scouring the local sites (Facebook Marketplace, eBay and some others) and found three Dell OptiPlex 7060 units, each with an i5-8500T, 16GB RAM, a 512GB NVMe M.2 drive and a 256GB SSD. (There is another up for grabs that I might get later in the month; ideally there are another two, though, to make this a five-node cluster.) I have also upgraded one of these nodes to 32GB RAM, with the others to follow soon enough.
I then also needed to source more ports to replace the 5-port Cisco switch I currently have, so I found a Dell X1026 1GbE switch on eBay; I am now just waiting for it to arrive, hopefully Monday.
The current plan is to install Talos on each node, along with KubeVirt and some other ops/infra tools on this base-layer cluster, and then use KubeVirt to provision nested environments. Talos will be installed on the SATA SSDs, and we will then use the NVMe drives for a Ceph cluster (again, another learning point for me) across all of the nodes.
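As a rough sketch of that disk layout, a Talos machine config patch can pin the install target to the SATA SSD so the NVMe drive is left untouched for Ceph to claim later. The device paths here are assumptions for these boxes; check the real names with `talosctl get disks` before applying anything:

```yaml
# patch.yaml - hypothetical install patch; device names are assumptions
machine:
  install:
    disk: /dev/sda   # SATA SSD: Talos OS lives here
  # /dev/nvme0n1 is deliberately not referenced, leaving it raw
  # for a Ceph OSD (e.g. via Rook) to consume across the nodes
```

This would be fed in when generating the node configs, for example with `talosctl gen config homelab https://<endpoint>:6443 --config-patch @patch.yaml`, then applied to each node with `talosctl apply-config`.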
This Dell switch will replace the existing 5-port Cisco switch I mentioned previously; no doubt we will find another use for that later on.
The purpose of this blog was not to get into the weeds of the plan but to highlight why we are going back to a home lab. I hope that was useful in some way. My goal now is to get started on building this bare-metal cluster while also trying to source some additional memory and possibly some additional 7060 compute nodes.