K3s cluster setup
March 25, 2022 | Cluster
K3s is Rancher's (now SUSE's) lightweight Kubernetes distribution. In the following I'll try to create a small HA Kubernetes cluster, made up of cheap VPSes:
- 3 master nodes
- Any number of workers
- Embedded etcd as the internal database
- Load balancing
BTW: K3s can actually be run rootless
Initializing
Start by getting the latest distribution of K3s onto your first server and initializing it with --cluster-init, which enables the embedded etcd database.
curl -fL https://get.k3s.io | sh -s - server --cluster-init
But don't worry if you have already installed the k3s binaries with the plain command:
curl -sfL https://get.k3s.io | sh -
If you have already installed k3s on the server with the above command, just clean it up first:
sudo systemctl stop k3s.service (or k3s-agent/k3s-server)
k3s-killall.sh
sudo rm -r /var/lib/rancher/*
And that should be it.
Options
To get an overview of all available flags, use the following command:
k3s server --help
Besides applying these at execution time, you can also provide a config file for k3s. This file is located at /etc/rancher/k3s/config.yaml,
and as mentioned we want to provide the --cluster-init flag to our first node. Edit the above-mentioned config.yaml:
cluster-init: true
The above config file replaces any need for additional flags when deploying the node.
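The keys in config.yaml map one-to-one to the CLI flags (just drop the leading --), so a slightly fuller config could look like the sketch below; node-name and tls-san are hypothetical examples, not something this setup requires:
cluster-init: true
node-name: "node1"
tls-san:
  - "k3s.example.com"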
Now we are ready to start our first node in the cluster:
sudo systemctl start k3s
run:
kubectl get nodes
to see if everything is up and running. Kubectl has its own config file located at ~/.kube/config; if you have an old file there you will get an error. Replace this config file with /etc/rancher/k3s/k3s.yaml.
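For example, assuming a default install (a sketch):
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config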
If everything is running fine, you should see something like:
NAME    STATUS   ROLES                       AGE   VERSION
node1   Ready    control-plane,etcd,master   12m   v1.22.7+k3s1
To keep this post simple we don't have a load balancer in front of the control plane. If you decide to put e.g. an Nginx load balancer in front of all the master nodes, this should be done before initializing the first node.
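As a rough sketch, a TCP load balancer for the Kubernetes API could be as simple as this Nginx stream block (placed at the top level of nginx.conf); the 10.0.0.x addresses are placeholders for your own master IPs:
stream {
  upstream k3s_masters {
    server 10.0.0.1:6443;
    server 10.0.0.2:6443;
    server 10.0.0.3:6443;
  }
  server {
    listen 6443;
    proxy_pass k3s_masters;
  }
}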
Now, we only have two servers left. Grab the token from the existing server:
sudo cat /var/lib/rancher/k3s/server/node-token
and SSH into the next master server that needs to join the cluster. Create the folder /etc/rancher/k3s/ and a config.yaml inside it:
token: "[your_token]"
server: "https://[your_initial_master_ip]:6443"
Now that the config file is already in place on the upcoming node, we can just install k3s:
curl -sfL https://get.k3s.io | sh -
Do the same for all master nodes. Etcd best practice is an odd number of members greater than two, so that'll be three, five, etc.; two members are actually worse than one in the case of etcd, since losing either of them breaks quorum.
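If all three joined successfully, kubectl get nodes on any master should show something along these lines (names and ages will of course differ):
NAME    STATUS   ROLES                       AGE   VERSION
node1   Ready    control-plane,etcd,master   34m   v1.22.7+k3s1
node2   Ready    control-plane,etcd,master   12m   v1.22.7+k3s1
node3   Ready    control-plane,etcd,master   2m    v1.22.7+k3s1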
Agents/Workers
To join an agent, create the folder and the file /etc/rancher/k3s/config.yaml again and provide the token and server:
token: "[your_token]"
server: "https://[your_initial_master_ip]:6443"
If you have clients adding worker nodes by themselves, there is actually a separate agent token; using it instead of the server token withholds master-level access to the cluster.
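Assuming a default install, the agent token is stored next to the server token and should be readable with:
sudo cat /var/lib/rancher/k3s/server/agent-token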
Now, we are going to install this node as an agent instead of a server/master:
curl -sfL https://get.k3s.io | sh -s - agent
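Back on one of the masters, the new node should appear without any control-plane role, something like:
node4   Ready    <none>   1m    v1.22.7+k3s1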
That's basically it! You have your cluster, and you don't need an external load balancer, as k3s comes with its own internal load balancer. But that may be one of the caveats: if it goes down, you're out of luck. So, if you are striving for more HA, you might consider setting up a server with only Nginx as a load balancer, or a more expensive and extensive BGP solution.
Remove a node from the cluster
I recently had a node with some issues that I was never able to debug, so I decided to remove it completely from the cluster with the intent of reinstalling the VM. The following is an attempt to remove a node.
For agents:
/usr/local/bin/k3s-agent-uninstall.sh
For servers / master:
/usr/local/bin/k3s-uninstall.sh
The node will still appear as unavailable when you run kubectl get nodes.
Remove it from the Kubernetes cluster with:
kubectl delete node [node name]
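If the node is still running workloads, it can be gentler to drain it first so pods get rescheduled before the node disappears (a sketch; on older kubectl versions the last flag is called --delete-local-data instead):
kubectl drain [node name] --ignore-daemonsets --delete-emptydir-data
kubectl delete node [node name]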