In this series I want to build a production-grade Kubernetes cluster that is useful at least in a home environment, but maybe also for some small business environments.
I’m using Ubuntu Server on my nodes.
Download the right arm64 images from here.
Install them with the imaging tool of your choice onto your SD cards or, like me, onto SSDs.
If the nodes run only Kubernetes, you can disable Bluetooth and reclaim some of the GPU memory. And if you use only Ethernet, you can disable Wi-Fi as well.
Open the config.txt with:
sudo nano /boot/firmware/config.txt
And add the following lines:
gpu_mem=16
dtoverlay=disable-bt
dtoverlay=disable-wifi
Save the file and reboot the node.
Now it’s time to install k3s on the nodes. I’m using the excellent k3sup tool from Alex Ellis.
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
And test with:
k3sup --help
Create an SSH key for easy and secure access to the nodes.
ssh-keygen -t ed25519 -C "yournameOrEmail"
ssh-keygen will create a public and a private key.
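If you want to script this step, ssh-keygen can also run without prompts. The output path, comment, and empty passphrase below are example values only — for real use, pick your own path and a proper passphrase:

```shell
# -f sets the output path, -N "" an empty passphrase, -q suppresses output.
# Path and comment are example values; adjust them to your setup.
ssh-keygen -t ed25519 -C "you@example.com" -f /tmp/demo_ed25519 -N "" -q
# Two files are created: the private key and its public counterpart.
ls /tmp/demo_ed25519 /tmp/demo_ed25519.pub
```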
Copy and paste the public key into the
nano ~/.ssh/authorized_keys
file on every node (or let ssh-copy-id ubuntu@<node-ip> append it for you). More info on generating a key here
For easy access you can add your nodes to the SSH config file. Open or create it with
nano ~/.ssh/config
and add the three nodes:
Host kube1
HostName 192.168.2.1
User ubuntu
IdentityFile ~/.ssh/<private-ssh-key-file>
Host kube2
HostName 192.168.2.2
User ubuntu
IdentityFile ~/.ssh/<private-ssh-key-file>
Host kube3
HostName 192.168.2.3
User ubuntu
IdentityFile ~/.ssh/<private-ssh-key-file>
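Since all three nodes share the same user and key, you can also factor those two settings into a wildcard entry; ssh uses the first value it finds for each option, so per-host blocks come first. A sketch, assuming each node has its own address as above:

```
Host kube1
    HostName 192.168.2.1
Host kube2
    HostName 192.168.2.2
Host kube3
    HostName 192.168.2.3
Host kube*
    User ubuntu
    IdentityFile ~/.ssh/<private-ssh-key-file>
```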
Now you can access the kube1 node for example with:
ssh kube1
Now bootstrap the first node. The --cluster flag initializes it with embedded etcd, so further server nodes can join later:
k3sup install --ip <ip-node-1> \
--user ubuntu \
--cluster \
--k3s-extra-args '--disable servicelb,traefik,local-storage' \
--ssh-key ~/.ssh/<private-ssh-key>
The default load balancer (servicelb), the Traefik ingress controller, and the local-path storage provisioner are disabled because we are going to deploy our own alternatives. Node 2 joins as a second server; node 3 joins as an agent, which is why its command carries no --server flag.
k3sup join --ip <ip-node-2> \
--user ubuntu \
--server --k3s-extra-args '--disable servicelb,traefik,local-storage' \
--server-ip <ip-node-1> --server-user ubuntu \
--ssh-key ~/.ssh/<private-ssh-key>
k3sup join --ip <ip-node-3> \
--user ubuntu \
--server-ip <ip-node-1> --server-user ubuntu \
--ssh-key ~/.ssh/<private-ssh-key>
With kubectl installed, you can check that your cluster is working correctly:
kubectl get nodes
You should see all your nodes in Ready status:
NAME    STATUS   ROLES                       AGE    VERSION
kube1   Ready    control-plane,etcd,master   118d   v1.22.5+k3s1
kube2   Ready    control-plane,etcd,master   118d   v1.22.5+k3s1
kube3   Ready    <none>                      118d   v1.22.5+k3s1
In the next part we will install a load balancer that acts as the entry point to the cluster.