PI4 Stories

Raspberry Pi 4 cluster Series - Replacing internal traefik with MetalLB

The problem we have is that our home pi4 cluster does not have a decent external load-balancer. Therefore, it is hard to access pods via an external IP address, such as the ones we have on our hosts (in our case in the range 192.168.0.200-254).

For some more in-depth learnings about metallb and ingresses, read the blog post Ingresses and Load Balancers in Kubernetes with MetalLB and nginx-ingress.

The steps we need to perform are: re-configure the pi4 cluster with ansible (disabling the internal traefik and servicelb), install the MetalLB layer 2 load-balancer, install the NGINX Ingress Controller, and install traefik2 as a replacement for the internal traefik of k3s.

Re-configure the pi4 cluster with ansible

Our k3s-ansible project was updated with:

$ cat inventory/my-cluster/group_vars/all.yml
---
k3s_version: v1.26.0+k3s2
ansible_user: gdha
systemd_dir: /etc/systemd/system
master_ip: "{{ hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) }}"
extra_server_args: "--write-kubeconfig-mode 644 --disable traefik --disable servicelb"
extra_agent_args: ""

This disables the default traefik and the internal service load-balancer (servicelb) delivered with the standard k3s implementation. While we were at it, we also moved to the latest k3s version available at this given moment.

Then it is just a matter of re-running:

ansible-playbook site.yml -i inventory/my-cluster/hosts.ini

It will remove k3s and re-install it without the internal traefik and servicelb, but all pods already installed remain present. Excellent news.
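
As a quick sanity check, we can verify that the default traefik and servicelb pods are really gone from the kube-system namespace. The grep pattern below assumes the default k3s pod name prefixes (traefik and svclb):

$ kubectl get pods -n kube-system | grep -E 'traefik|svclb'
# no output expected: both the traefik deployment and the svclb daemonset pods should be gone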

Install MetalLB layer 2 load-balancer

The main documentation of metallb can be found at https://metallb.universe.tf/installation/ [1]. We used the following steps:

$ helm repo add metallb https://metallb.github.io/metallb
$ helm repo list
NAME        URL                              
longhorn    https://charts.longhorn.io       
kiwigrid    https://kiwigrid.github.io       
metallb     https://metallb.github.io/metallb

$ cat metallb-values.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: loadbalancer-pool
  namespace: kube-system
spec:
  addresses:
  - 192.168.0.230-192.168.0.250


$ helm install metallb metallb/metallb --namespace kube-system -f metallb-values.yaml 
NAME: metallb
LAST DEPLOYED: Tue Jan 24 11:24:43 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
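
Before continuing, it does no harm to check that the MetalLB controller and speaker pods came up. The label selector below assumes the chart's default app.kubernetes.io/name label:

$ kubectl get pods -n kube-system -l app.kubernetes.io/name=metallb
# expect one metallb-controller pod plus one metallb-speaker pod per node, all Running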

We still have the old (ConfigMap-based) configuration of metallb from version 0.12.1, so we save it as config.yaml:

$ cat >config.yaml <<EOD
apiVersion: v1
data:
  config: |
    address-pools:
    - addresses:
      - 192.168.0.230-192.168.0.250
      name: default
      protocol: layer2
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: metallb
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2022-05-23T10:38:34Z"
  labels:
    app.kubernetes.io/instance: metallb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: metallb
    app.kubernetes.io/version: v0.12.1
    helm.sh/chart: metallb-0.12.1
  name: metallb
  namespace: kube-system
  resourceVersion: "3256248"
  uid: 774b05a7-7ad7-4de3-a9e1-2a636f988ed1
EOD

Following the procedure to generate a new custom resource (CR) file from this old ConfigMap, we got:

$ docker run -d -v $(pwd):/var/input quay.io/metallb/configmaptocrs
Unable to find image 'quay.io/metallb/configmaptocrs:latest' locally
latest: Pulling from metallb/configmaptocrs
9b18e9b68314: Pull complete 
24157a5425f3: Pull complete 
b73e28ff5ad3: Pull complete 
Digest: sha256:6c144621e060722a082f2d5a2c4bd72f81d84f6cedc1153c33bb8f4f1277fac0
Status: Downloaded newer image for quay.io/metallb/configmaptocrs:latest
4254ef07d9b01effb956a89628915e4f3da15624e92edfc6c6f415f6fbe201cc

$ cat resources.yaml 
# This was autogenerated by MetalLB's custom resource generator.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  creationTimestamp: null
  name: default
  namespace: kube-system
spec:
  addresses:
  - 192.168.0.230-192.168.0.250
status: {}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  creationTimestamp: null
  name: l2advertisement1
  namespace: kube-system
spec:
  ipAddressPools:
  - default
status: {}
---

Now, we can apply (or replace) this in our cluster:

$ kubectl create -f resources.yaml
ipaddresspool.metallb.io/default created
l2advertisement.metallb.io/l2advertisement1 created

To verify that the MetalLB CRDs were properly installed by our helm command we could run:

$ kubectl get customresourcedefinitions.apiextensions.k8s.io ipaddresspools.metallb.io
NAME                        CREATED AT
ipaddresspools.metallb.io   2023-01-24T10:24:52Z
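
The CRD only proves that the resource type exists; to confirm that our pool really carries the expected IP range, we can inspect the IPAddressPool object itself (named "default" by the generated resources.yaml):

$ kubectl get ipaddresspools.metallb.io default -n kube-system -o yaml
# spec.addresses should list 192.168.0.230-192.168.0.250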

Install NGINX Ingress Controller

To verify the load-balancer is working, we could execute [4]:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.43.243.82    192.168.0.231   80:30648/TCP,443:30920/TCP   2m28s
ingress-nginx-controller-admission   ClusterIP      10.43.232.246   <none>          443/TCP                      2m28s

Okay, so far so good. However, is the external IP address of our ingress-nginx-controller really open for connections? To test, execute:

$ nc -vz 192.168.0.231 80
Connection to 192.168.0.231 80 port [tcp/http] succeeded!
$ nc -vz 192.168.0.231 443
Connection to 192.168.0.231 443 port [tcp/https] succeeded!
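
Beyond the raw TCP checks we can also send an HTTP request to the controller. As no Ingress objects are defined yet, we assume the controller's default backend answers, so a 404 from nginx is the expected (and good) result:

$ curl -i http://192.168.0.231/
# expect HTTP/1.1 404 Not Found served by nginx - proof that traffic reaches the controller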

Install traefik2 as a replacement for the internal traefik of k3s

Execute the following commands:

$ helm repo add traefik https://helm.traefik.io/traefik
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metallb" chart repository
...Successfully got an update from the "longhorn" chart repository
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "kiwigrid" chart repository
Update Complete. ⎈Happy Helming!⎈

We need to define a dummy (internal) name for our traefik2 application; therefore, create a file like the one shown below:

$ cat traefik-values.yaml 
dashboard:
 enabled: true
 domain: traefik.example.com
rbac:
 enabled: true

And finally, use helm to install traefik2 with our hand-crafted values yaml file:

$ helm install traefik traefik/traefik -n kube-system -f traefik-values.yaml 
NAME: traefik
LAST DEPLOYED: Mon May 23 14:53:46 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check if it is created properly:

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-d76bd69b-8z7dh                    1/1     Running   0          4h25m
local-path-provisioner-6c79684f77-jvdf6   1/1     Running   0          4h25m
metrics-server-7cd5fcb6b7-d6wlx           1/1     Running   0          4h25m
metallb-controller-777cbcf64f-vfz5v       1/1     Running   0          136m
metallb-speaker-r7wbg                     1/1     Running   0          136m
metallb-speaker-5lxff                     1/1     Running   0          136m
metallb-speaker-cxskn                     1/1     Running   0          136m
metallb-speaker-24vgg                     1/1     Running   0          136m
metallb-speaker-wmzkg                     1/1     Running   0          136m
traefik-7b9cf77df9-cwp4l                  1/1     Running   0          67s

And also verify that the traefik service is present:

$ kubectl get svc -n kube-system | grep traefik
traefik                   LoadBalancer   10.43.78.204    192.168.0.230   80:31164/TCP,443:31610/TCP   13m
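
Analogous to the nginx-ingress check earlier, we can make sure the LoadBalancer IP that MetalLB handed to traefik (192.168.0.230 in our case) accepts connections:

$ nc -vz 192.168.0.230 80
$ nc -vz 192.168.0.230 443
# both should report that the connection succeeded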

We can also check the logs of traefik:

$ kubectl -n kube-system logs $(kubectl -n kube-system get pods --selector "app.kubernetes.io/name=traefik" --output=name)
time="2023-01-24T13:23:31Z" level=info msg="Configuration loaded from flags."

When we see the line listed above, we can be sure traefik is properly installed and configured. Now we are ready to do some more tests with our new load-balancer and traefik.
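
For example, one way to expose the traefik dashboard on the dummy domain we defined earlier is an IngressRoute like the sketch below. This is only an assumption of how it could look (the traefik.containo.us/v1alpha1 apiVersion and the built-in api@internal service come from the traefik2 CRDs shipped with the helm chart), not something we have applied yet:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard          # example name, not part of our current setup
  namespace: kube-system
spec:
  entryPoints:
    - web                          # the plain HTTP entrypoint of the traefik chart
  routes:
    - match: Host(`traefik.example.com`)   # the dummy domain from traefik-values.yaml
      kind: Rule
      services:
        - name: api@internal       # traefik's built-in dashboard/API service
          kind: TraefikService

Since traefik.example.com is only a dummy name that does not resolve in DNS, a quick test from a workstation could be: curl -H "Host: traefik.example.com" http://192.168.0.230/dashboard/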

References

[1] Metallb

[2] Setting up your own k3s home cluster

[3] Configuring Traefik 2 Ingress for Kubernetes

[4] NGINX Ingress Controller

Edit History

  • update with new metallb and NGINX Ingress Controller - 25/Jan/2023