MetalLB Load Balancer for K8s
January 24, 2022 | Cluster
Usually you don't have to concern yourself much with this subject when using the established cloud providers, as they typically offer their own load balancer at OSI layer 2/3. And networking in a K8s cluster is already difficult enough for most people to get comfortable with. This should not be confused with higher-level (layer 7) ingress routing, usually done with Traefik, Nginx, etc.; the L2 load balancing discussed here operates below that layer, in contrast to the load balancing often referred to in documentation for, e.g., Traefik or Nginx.
So, why even consider this subject? The main reason is that you need a load balancer when running K8s as a cluster, and if your cluster is set up on-prem or on a few cheap VPS machines, chances are you need to configure one yourself. It usually becomes obvious when your deployments are stuck "pending" for an external IP address.
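To illustrate, a minimal Service of type LoadBalancer would look something like this (the name my-app and the ports are hypothetical placeholders, not taken from this article):

```yaml
# Hypothetical example: "my-app" and the ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer     # requests an external IP from a load balancer
  selector:
    app: my-app          # matches pods labeled app=my-app
  ports:
  - port: 80             # port exposed on the external IP
    targetPort: 8080     # port the container listens on
```

Without a working load balancer in the cluster, `kubectl get svc my-app` will show EXTERNAL-IP as `<pending>` indefinitely.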
You can follow the development of MetalLB at Github: https://github.com/metallb/metallb/tree/main/manifests
First we will be creating the namespace for MetalLB:
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
(kubectl apply -f namespace.yml)
Secrets are K8s' way of distributing sensitive information between pods in a namespace (to prevent hardcoding it in the pods). You may want to consider enabling "encryption at rest", as this information is not encrypted by default. Setting up some RBAC rules to limit access to the namespace is also a very good idea.
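As a sketch of such an RBAC rule, a namespaced Role and RoleBinding restricting who may read secrets in metallb-system could look like this (the Role/RoleBinding names and the subject are assumptions for illustration; MetalLB's own service accounts get their permissions from the official manifest):

```yaml
# Sketch only: the names and the subject below are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader          # hypothetical name
  namespace: metallb-system
rules:
- apiGroups: [""]              # "" = the core API group
  resources: ["secrets"]
  verbs: ["get", "list"]       # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets           # hypothetical name
  namespace: metallb-system
subjects:
- kind: User
  name: alice                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```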
Create a secret-key that is used by the speaker to communicate:
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
secret/memberlist created
Then navigate to the "metallb.yaml" file in the manifest directory, at MetalLB's Github repo (manifests/metallb.yaml), and copy the raw URL and apply the deployment file.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/manifests/metallb.yaml

(the manifest has since moved, so the path is now)

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
The latest version of the above manifest (metallb-native.yaml) will automatically set up the namespace and secret, so you can skip those steps or delete the existing resources.
There are two options for configuring MetalLB: BGP (I guess the ideal solution if you have machines in different locations that have to work as one big cluster) or L2, which will be used here, mainly because it seems like the easier path. I might try BGP at a later stage.
So, the simple L2 configuration would look something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.1.240/28
or, let's say, for a pool of public addresses (using the newer CRD-based configuration from MetalLB's docs):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 126.96.36.199/32
  - 188.8.131.52/32
  - 184.108.40.206/32
  - 220.127.116.11/32
The important point here is that you put in your own IP address, or IP address range, that should be reserved for the load balancer. After this you should be able to see external IPs assigned to your LoadBalancer services.
Then they should be advertised:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
# Setting no specific pool uses them all
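If you want to advertise only a specific pool instead of all of them, the spec.ipAddressPools field can reference pools by name (here assuming the first-pool created above; the advertisement's own name is arbitrary):

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advert    # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:            # restrict this advertisement to the named pools
  - first-pool
```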
Disable Klipper on k3s
If you have already spun up a k3s cluster, you will want to disable servicelb (Klipper) if you want to use another load balancer.
It is important to disable it on all control-plane (CP) nodes.
There should be other ways to carry out configuration changes, but this seems to be effective.
sudo vi /etc/systemd/system/k3s.service
and change the following line (the same approach can, for instance, also be used to disable Traefik):
ExecStart=/usr/local/bin/k3s \
    server \

so that it becomes:

ExecStart=/usr/local/bin/k3s \
    server --disable servicelb \
and restart the service:
sudo systemctl daemon-reload && sudo systemctl restart k3s
If you have not yet initiated k3s:
So, to begin with, we will include the following in /etc/rancher/k3s/config.yaml on all CP nodes:
disable:
  - servicelb