In Part 1 of this series, I walked through the installation of Ubuntu Server 22.04.4 LTS on the Raspberry Pis.
In Part 2 of this series, we configured DHCP, DNS, and NFS, and deployed MicroK8s.
In Part 3 we will deploy the NFS CSI Driver, MetalLB, Traefik Proxy, and Ortelius.
I will be using Helm charts to configure some of the services, as this makes getting started a lot easier. Helm charts are also great for comparing configuration or resetting `values.yaml` in case the plot is totally lost. Think of `values.yaml` as the defaults for the application you are deploying.
With the NFS CSI Driver I will use Kubernetes to dynamically manage the creation and mounting of persistent volumes to pods, using the Synology NAS as the central storage server.
Now let’s get started:
Switch to the `kube-system` namespace:

```shell
kubectl config set-context --current --namespace=kube-system
```
```shell
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.6.0 \
  --set controller.dnsPolicy=ClusterFirstWithHostNet \
  --set node.dnsPolicy=ClusterFirstWithHostNet \
  --set kubeletDir="/var/snap/microk8s/common/var/lib/kubelet" # the kubelet has permissions at this location to mount the NFS shares
```
Check that the CSI driver pods are running:

```shell
kubectl get pods
```
Create a file called `nfs-setup.yaml`, copy the YAML below into it, and run `kubectl apply -f nfs-setup.yaml`.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-default
provisioner: nfs.csi.k8s.io
parameters:
  server: <your nfs server ip goes here>
  share: /volume4/pi8s/
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4
```
Verify the storage class:

```shell
kubectl get sc
```

Mark it as the default storage class (the name must match the one in the manifest, `nfs-csi-default`):

```shell
kubectl patch storageclass nfs-csi-default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

To remove the default flag again:

```shell
kubectl patch storageclass nfs-csi-default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
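To check that dynamic provisioning works end to end, you can create a small test claim against the new storage class. This is a minimal sketch; the claim name `nfs-test-pvc` and the 1Gi size are placeholders I chose for illustration:

```yaml
# Hypothetical test claim; the name and size are examples
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi-default
```

After applying it, `kubectl get pvc` should show the claim reach the `Bound` state with a volume provisioned on the NAS share; clean up with `kubectl delete pvc nfs-test-pvc` when done.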
With MetalLB we will set up a unique IP address on our home network to expose the microservices running in our Kubernetes cluster. A public cloud provider would assign you this during the deployment of your Kubernetes cluster, but since we are the cloud, we need to provide it ourselves, and that's where MetalLB comes in.
Add the Helm repo:

```shell
helm repo add metallb https://metallb.github.io/metallb
helm repo update
```

Install MetalLB into the `metallb-system` namespace (`--create-namespace` creates the namespace if it does not exist yet):

```shell
helm install metallb metallb/metallb -n metallb-system --create-namespace
```

Switch to the `metallb-system` namespace:

```shell
kubectl config set-context --current --namespace=metallb-system
```

Check the pods in the `metallb-system` namespace:

```shell
kubectl get pods
```
Create a file called `metallb-setup.yaml`, copy the YAML below into it, and run `kubectl apply -f metallb-setup.yaml`.
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.151-192.168.0.151 # change this to your private ip
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```
`ipaddresspools.metallb.io` is a CRD (Custom Resource Definition), a custom resource created in our Kubernetes cluster that adds additional magic. Show all the CRDs for MetalLB:

```shell
kubectl get crds | grep metallb
```

Show the IP address pool we just created:

```shell
kubectl get ipaddresspools.metallb.io
```
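A quick way to confirm the pool works is to create a throwaway Service of type LoadBalancer and check that MetalLB assigns it an address from `default-pool`. This is just a sketch; the name `lb-test` and the selector are hypothetical placeholders and would need to match pods you actually run:

```yaml
# Hypothetical throwaway service; the name and selector are placeholders
apiVersion: v1
kind: Service
metadata:
  name: lb-test
spec:
  type: LoadBalancer
  selector:
    app: lb-test
  ports:
    - port: 80
      targetPort: 80
```

`kubectl get svc lb-test` should show an EXTERNAL-IP from the range you configured in the pool; delete the service afterwards to release the address.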
With Traefik Proxy we can now direct traffic destined for our microservices into the Kubernetes cluster and protect our endpoints using a combination of entrypoints, routers, services, providers, and middlewares. Note that Traefik Services and Kubernetes Services are two different concepts.
Add the Helm repo and install Traefik into its own namespace:

```shell
helm repo add traefik https://traefik.github.io/charts
kubectl create ns traefik-v2
kubectl config set-context --current --namespace=traefik-v2
helm repo update
helm install traefik traefik/traefik --namespace=traefik-v2
```

Check the pods:

```shell
kubectl get pods
```
Enable `dashboard`, `kubernetesCRD`, and `kubernetesIngress` in `values.yaml`, and don't forget to save. FYI: they might already be enabled.

```yaml
## Create an IngressRoute for the dashboard
ingressRoute:
  dashboard:
    # -- Create an IngressRoute for the dashboard
    enabled: true
providers:
  kubernetesCRD:
    # -- Load Kubernetes IngressRoute provider
    enabled: true
  kubernetesIngress:
    # -- Load Kubernetes Ingress provider
    enabled: true
```
Apply the updated `values.yaml` (if you don't have the file yet, you can dump the chart defaults with `helm show values traefik/traefik > values.yaml` first):

```shell
helm upgrade traefik traefik/traefik --values values.yaml
```
Next we will create an IngressRoute, which is one of the CRDs that were installed with Traefik:

```shell
kubectl get crds | grep traefik
```
Create a file called `dashboard.yaml` and apply the YAML below with `kubectl apply -f dashboard.yaml`.

On Linux or macOS, edit your hosts file with `sudo vi /etc/hosts` by adding your private IP and `traefik.yourdomain.<your tld>`. On Windows, edit `windows\System32\drivers\etc\hosts` by adding your private IP and `traefik.yourdomain.<your tld>`.
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
  namespace: traefik-v2
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.yourdomain.com`) # this is where your DNS records come into play
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
```
Check the IngressRoute:

```shell
kubectl get ingressroutes.traefik.io
```

List the services across all namespaces:

```shell
kubectl get svc -A
```
```
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager cert-manager ClusterIP 10.152.183.42 <none> 9402/TCP 25h
cert-manager cert-manager-webhook ClusterIP 10.152.183.32 <none> 443/TCP 25h
default kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 3d3h
ingress-nginx ingress-nginx-controller NodePort 10.152.183.240 <none> 80:31709/TCP,443:30762/TCP 9h
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.152.183.118 <none> 443/TCP 9h
kube-system kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 45h
metallb-system metallb-webhook-service ClusterIP 10.152.183.117 <none> 443/TCP 3d2h
netdata netdata ClusterIP 10.152.183.164 <none> 19999/TCP 2d7h
ortelius ms-compitem-crud NodePort 10.152.183.91 <none> 80:30288/TCP 3m24s
ortelius ms-dep-pkg-cud NodePort 10.152.183.124 <none> 80:32186/TCP 3m24s
ortelius ms-dep-pkg-r NodePort 10.152.183.82 <none> 80:31347/TCP 3m22s
ortelius ms-general NodePort 10.152.183.171 <none> 8080:30704/TCP 3m21s
ortelius ms-nginx NodePort 10.152.183.158 <none> 80:32519/TCP,443:31861/TCP 3m19s
ortelius ms-postgres NodePort 10.152.183.75 <none> 5432:30852/TCP 9h
ortelius ms-scorecard NodePort 10.152.183.74 <none> 80:30674/TCP 3m18s
ortelius ms-textfile-crud NodePort 10.152.183.200 <none> 80:30126/TCP 3m16s
ortelius ms-ui NodePort 10.152.183.242 <none> 8080:31073/TCP 3m16s
ortelius ms-validate-user NodePort 10.152.183.55 <none> 80:30266/TCP 3m16s
traefik-v2 traefik LoadBalancer 10.152.183.73 192.168.0.151 80:32700/TCP,443:30988/TCP 2d7h
whoami whoami ClusterIP 10.152.183.168 <none> 80/TCP 47h
```
What you see is the `traefik` service with the TYPE `LoadBalancer`, which means it has claimed the MetalLB IP that we assigned. A CLUSTER-IP is only accessible inside Kubernetes. So now, with MetalLB and Traefik, we have built a bridge between the outside world and our internal Kubernetes world. Traefik comes with some self-discovery magic in the form of providers, which allows Traefik to query provider APIs to find relevant information about routing and then dynamically update the routes.
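The same IngressRoute pattern we used for the dashboard works for any in-cluster service. As an illustration, assuming the `whoami` service from the listing above, a route for it might look like this (the hostname is a placeholder you would point at the MetalLB IP):

```yaml
# Hypothetical route for the whoami service; adjust the Host rule to your DNS
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: whoami
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`whoami.yourdomain.com`)
      kind: Rule
      services:
        - name: whoami
          kind: Service
          port: 80
```

Here the backend is a plain Kubernetes Service (`kind: Service`) rather than a Traefik-internal one like `api@internal`, which is the usual case for your own workloads.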
Well done for making it this far! We have reached the point where we can deploy Ortelius into our Kubernetes cluster and access it through the Traefik Proxy in the `ortelius` namespace.
Ortelius currently consists of a number of microservices. The one we are most interested in at this point is `ms-nginx`, which is the gateway to all the backing microservices for Ortelius. We are going to deploy Ortelius using Helm, then configure Traefik to send requests to `ms-nginx`, and then we should get the Ortelius dashboard.
Create the `ortelius` namespace and switch to it:

```shell
kubectl create ns ortelius
kubectl config set-context --current --namespace=ortelius
```

Add the Helm repo and install Ortelius (set `ORTELIUS_VERSION` to the chart version you want to deploy):

```shell
helm repo add ortelius https://ortelius.github.io/ortelius-charts/
helm repo update
helm upgrade --install ortelius ortelius/ortelius \
  --set ms-general.dbpass=postgres \
  --set global.postgresql.enabled=true \
  --set global.nginxController.enabled=true \
  --set ms-nginx.ingress.type=k3d \
  --set ms-nginx.ingress.dnsname=<your domain name goes here> \
  --version "${ORTELIUS_VERSION}" \
  --namespace ortelius
```
Let's stop here to discuss some of these settings.

- `--set ms-general.dbpass=postgres` sets the PostgreSQL database password.
- `--set global.nginxController.enabled=true` sets the ingress controller, which could be the default NGINX ingress, the AWS Load Balancer, or the Google Load Balancer. Refer to the Helm chart on ArtifactHub for the available options.
- `--set ms-nginx.ingress.type=k3d` enables the Traefik ingress class so that Traefik is made aware of Ortelius. Even though the value is meant for K3d, another very lightweight Kubernetes distribution, K3d uses Traefik as its default ingress, so the same class works here.
- `--set ms-nginx.ingress.dnsname=<your domain name goes here>` is the URL that will go in your browser to access Ortelius.
Show the pods for Ortelius:

```shell
kubectl get pods
```
Create a file called `ortelius-traefik.yaml`, copy the YAML below into it, and then run `kubectl apply -f ortelius-traefik.yaml`.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
  labels:
    app: ms-nginx
  name: ms-nginx-traefik
  namespace: ortelius
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: ms-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
status:
  loadBalancer: {}
```
Ortelius should now be reachable at the DNS name you configured, for example https://ortelius.pangarabbit.com. Happy alien hunting…
By this stage you should have three Pis with the NFS CSI Driver, MetalLB, Traefik, and Ortelius up and running. Stay tuned for Part 4, where we use Cloudflare, Traefik, and Let's Encrypt to implement HTTPS and TLS v1.3. Yes, there is more extraterrestrial life in a cloud deployment near you…
Disclaimer: Any brands I mention in this blog post series are not monetized. This is my home setup!