The Hotel Hero

Notes by a Sysadmin


K3s cluster setup

March 25, 2022 | Cluster

K3s is Rancher's (now SUSE's) lightweight Kubernetes distribution. In the following I'll create a small HA Kubernetes cluster out of cheap VPSes.

BTW: K3s can actually be run rootless.


Start by getting the latest distribution of K3s onto your first server and initializing it with --cluster-init, which enables the embedded etcd as datastore.

curl -sfL https://get.k3s.io | sh -s - server --cluster-init

But don't worry if you have already installed the k3s binaries with the plain command:

curl -sfL https://get.k3s.io | sh -

If you have already installed k3s on the server with the above command, just clean it up:

sudo systemctl stop k3s.service (or k3s-agent.service, depending on the role)
sudo rm -r /var/lib/rancher/*

The installer also drops uninstall scripts (/usr/local/bin/k3s-uninstall.sh and k3s-agent-uninstall.sh) that do a more thorough job.

And that should be it.


To get an overview of all available flags, use the following command:

k3s server --help

Besides applying these flags at execution time, you can also provide a config file for k3s. By default it is read from /etc/rancher/k3s/config.yaml.

As mentioned, we want to give the --cluster-init flag to our first node. Edit the above-mentioned config.yaml:

cluster-init: true

The above config file replaces any need for additional flags when deploying the node.

Now we are ready to start our first node in the cluster:

sudo systemctl start k3s
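If the service doesn't come up cleanly, the systemd journal is the first place to look. A small sketch, assuming the default k3s.service unit name:

```shell
sudo systemctl enable --now k3s   # start now and on every boot
systemctl status k3s --no-pager   # quick health check
sudo journalctl -u k3s -f         # follow the logs while the node comes up
```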


kubectl get nodes

to see if everything is up and running. Kubectl has its own config file located at ~/.kube/config; if you have an old file here you will get an error. Replace this config file with /etc/rancher/k3s/k3s.yaml.
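One way to wire kubectl up, sketched with the default paths (adjust ownership to your own user):

```shell
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config   # k3s writes its kubeconfig here
sudo chown "$(id -u):$(id -g)" ~/.kube/config      # the file is root-owned by default
chmod 600 ~/.kube/config                           # it contains cluster credentials
```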

If everything is running fine, you should see something like:

NAME     STATUS   ROLES                       AGE   VERSION
node1   Ready    control-plane,etcd,master   12m   v1.22.7+k3s1

To keep this post simple we don't have a load balancer in front of the control plane. If you decide to put e.g. an Nginx load balancer in front of all the master nodes, this should be done before initializing the first node.

Now, we have only two servers left. Grab the token from the existing server:

sudo cat /var/lib/rancher/k3s/server/node-token

and SSH into the next master server that needs to join the cluster, create the directory /etc/rancher/k3s/ and a config.yaml inside it:

token: "[your_token]"
server: "https://[your_initial_master_ip]:6443"
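Put together, joining an extra server could look like this one-shot sequence (token and IP are the placeholders from above):

```shell
sudo mkdir -p /etc/rancher/k3s
sudo tee /etc/rancher/k3s/config.yaml > /dev/null <<'EOF'
token: "[your_token]"
server: "https://[your_initial_master_ip]:6443"
EOF
curl -sfL https://get.k3s.io | sh -   # installs and joins as an additional server
```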

Now that the config file is already in place on the upcoming node, we can just install k3s:

curl -sfL https://get.k3s.io | sh -

Do the same for all master nodes. (With etcd, best practice is an odd number of members greater than one, so three, five, etc. Two is actually worse than one, because a two-member cluster still needs both votes for quorum and so tolerates zero failures.)
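The quorum arithmetic behind that rule is easy to check: a cluster of n members needs floor(n/2)+1 votes, so the tolerated failures are n minus that.

```shell
# Print quorum size and tolerated failures for a few cluster sizes
for n in 1 2 3 4 5; do
  q=$(( n / 2 + 1 ))   # votes needed for quorum
  f=$(( n - q ))       # members that may fail without losing quorum
  echo "members=$n quorum=$q tolerated_failures=$f"
done
```

Note how two members tolerate zero failures, just like one, while carrying twice the ways to break.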


To join an agent create the folder and file /etc/rancher/k3s/config.yaml again and provide the token and server:

token: "[your_token]"
server: "https://[your_initial_master_ip]:6443"

If you have clients adding worker nodes by themselves, there is actually a separate agent token, which prevents them from gaining master-level access to the cluster.
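A hedged sketch of where that agent token lives on the servers; in recent k3s versions the file below is populated by default, otherwise set --agent-token on the servers first:

```shell
# Agent-scoped token: lets nodes join as agents only, not as servers
sudo cat /var/lib/rancher/k3s/server/agent-token
```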

Now, we are gonna install this node as an agent instead of a server/master:

curl -sfL https://get.k3s.io | sh -s - agent

That's basically it! You have your cluster, and you don't need an external load balancer, as k3s comes with its own internal load balancer. That may be one of the caveats, though: if it goes down, you're out of luck. So if you are striving for more HA, you might consider setting up a server running only Nginx as a load balancer, or a more expensive and extensive BGP solution.
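For the Nginx route, a minimal TCP pass-through for the API port can be sketched with the stream module (server IPs are placeholders):

```nginx
# /etc/nginx/nginx.conf -- stream block at top level, next to http {}
stream {
    upstream k3s_servers {
        server 10.0.0.1:6443;   # master 1 (placeholder IP)
        server 10.0.0.2:6443;   # master 2
        server 10.0.0.3:6443;   # master 3
    }
    server {
        listen 6443;
        proxy_pass k3s_servers;
    }
}
```

Joining nodes would then point their server: entry at this load balancer instead of a single master.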

Remove a node from the cluster

I recently had a node with some issues that I was never able to debug, so I decided to remove it completely from the cluster with the intent of reinstalling the VM. The following is an attempt to remove a node.

For agents:

sudo systemctl stop k3s-agent

For servers / masters:

sudo systemctl stop k3s

The node will still appear as NotReady when you run "kubectl get nodes".

Remove the node from the k8s cluster with:

kubectl delete node [node name]
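Before the delete, it's usually worth cordoning and draining the node so its pods are rescheduled first; a sketch assuming the node is called node2:

```shell
kubectl cordon node2                                            # stop new pods landing here
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data  # evict the rest
kubectl delete node node2                                       # remove it from the API
```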


I'm a sysadmin, network manager and cyber security enthusiast. The main purpose of this public "notebook" is referencing repetitive tasks, but it might as well come in handy to others. Windows cannot be supported! But all other OSes compliant with the POSIX standard can (with minor adjustments) apply the configs on this site. That includes macOS, RHEL and all the Fedora-based distros, the Debian-based ones (several hundreds of OSes), all the BSD distros, Solaris, AIX and HP-UX.