How to Create a Highly Available k3s Cluster on Proxmox LXCs
I wanted to learn more about Kubernetes, so I followed a couple of tutorials:
- TechnoTim’s “HIGH AVAILABILITY k3s (Kubernetes) in minutes!”
- Garrett Mills’s “Rancher K3s: Kubernetes on Proxmox Containers”
I had to make some modifications; here’s how I set things up:
Operating Systems:
- Proxmox: pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-15-pve)
- LXCs: Ubuntu 23.04 (lunar)
- MacBook Pro: macOS Sonoma
mysql Server
# Configure a user and database for k3s to use
sudo apt install -y mysql-server # Install on an Ubuntu LXC
sudo service mysql status # Verify mysql is running
sudo mysql_secure_installation # NOTE: I chose the lowest password-validation policy because the stricter ones gave me issues, and I dropped all the extra users/dbs when prompted. Depending on your choices here, you may need to adjust the length/complexity of your user/pass.
sudo su # become root because it makes it easier
mysql -u root -p # Enter the mysql prompt as root; just press enter at the password prompt, since there shouldn't be a password yet
SHOW VARIABLES LIKE 'validate_password%'; # Show password requirements
CREATE USER 'K3S_USER'@'192.168.2.0/255.255.255.0' IDENTIFIED BY 'K3S-PASS'; # The subnet of my k3s nodes is 192.168.2.0/255.255.255.0; replace it with your subnet. Replace `K3S_USER` with your desired username, and `K3S-PASS` with your desired password.
CREATE DATABASE K3S_DATABASE; # Replace `K3S_DATABASE` with your desired database name (note the underscore; hyphens aren't allowed in unquoted mysql identifiers)
SHOW CREATE DATABASE K3S_DATABASE; # Verify it was created
USE K3S_DATABASE; # Select the database
GRANT ALL PRIVILEGES ON K3S_DATABASE.* TO 'K3S_USER'@'192.168.2.0/255.255.255.0'; # grant the user access to the database
# Press ctrl+d to exit the mysql prompt
nano /etc/mysql/mysql.conf.d/mysqld.cnf # Edit your mysqld.cnf to comment out the bind-address line.
#bind-address = 127.0.0.1
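# After saving, restart mysql so the bind-address change takes effect, then
# (optionally) verify you can reach it remotely from one of your k3s LXCs,
# assuming the mysql client is installed there.
sudo service mysql restart
mysql -h mysql.my.domain -u K3S_USER -p # run from a k3s node; replace mysql.my.domain with your mysql host's IP/local domain name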
Proxmox Host
# Add the following to /etc/sysctl.conf (or replace values if already exist)
sudo nano /etc/sysctl.conf
net.ipv4.ip_forward=1
vm.swappiness=0
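# Apply the sysctl changes immediately (otherwise they only take effect on the next boot)
sudo sysctl -p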
# Run this command to disable swap
sudo swapoff -a
# Verify that swap is disabled in /etc/fstab mount
sudo nano /etc/fstab
proc /proc proc defaults 0 0 # this is what mine looks like
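# Quick sanity check: swap should now be off
free -h # the Swap row should read 0B across the board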
# Create/Start your LXCs.
# Make sure they are privileged (untick the 'unprivileged' checkbox)
# Make sure nesting is enabled.
# You can start them one at a time if that's easier.
# I provision my LXCs using Terraform plans that call Ansible playbooks,
# but you can also create them in the web UI.
# Once your LXCs are created, add the following lines to each of your k3s-server
# and k3s-worker configs (replace LXC_ID with the ID of your LXC)
## NOTE: DO NOT ADD `lxc.mount.auto: "proc:rw sys:rw"`; this results in
# DNS failure for some reason, and everything works fine without it.
sudo nano /etc/pve/lxc/LXC_ID.conf
lxc.apparmor.profile: unconfined # Disables AppArmor
lxc.cgroup2.devices.allow: a # Allows the container’s cgroup to access all devices
lxc.cap.drop: # Prevents dropping any capabilities for the container
mp0: /usr/lib/modules/6.2.16-15-pve,mp=/usr/lib/modules/6.2.16-15-pve,backup=0,ro=1 # Mounts the necessary modules in the LXC
# Add the following to /etc/modules-load.d/modules.conf (if not already present)
sudo nano /etc/modules-load.d/modules.conf
overlay
br_netfilter
# Reboot your LXCs in order for the previous changes to take effect (replace LXC_ID with the ID of your LXC). You can also reboot them from the web UI.
sudo pct reboot LXC_ID
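# Sanity check (on the Proxmox host): confirm the modules are actually loaded.
# If overlay is built into your kernel rather than loaded as a module, it may not be listed.
lsmod | grep -E 'overlay|br_netfilter'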
LXCs
# Create the /etc/rc.local file (this is necessary for creating some folders that k3s will use)
sudo nano /etc/rc.local
#!/bin/sh -e
# Kubeadm 1.15 needs /dev/kmsg, but it doesn't exist in LXC, so we symlink /dev/console in its place
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
ln -s /dev/console /dev/kmsg
fi
# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /
# Make /etc/rc.local executable
sudo chmod +x /etc/rc.local
# Execute /etc/rc.local
sudo /etc/rc.local
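# Sanity check: confirm the symlink and the shared root mount took effect
ls -l /dev/kmsg # should be a symlink pointing at /dev/console
findmnt -o TARGET,PROPAGATION / # PROPAGATION should say "shared"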
Nginx
# I run an instance of Nginx Proxy Manager in a
# Docker Container in Portainer that is running in a Proxmox LXC (a bit complicated).
# I opened Portainer, went to the nginx container,
# opened a /bin/bash shell, and ran `sudo nano /etc/nginx/nginx.conf`
# to begin editing the file.
# NOTE: I had to open up port 6443:6443 in my docker-compose.yml for nginx.
sudo nano /etc/nginx/nginx.conf
# uncomment this next line if you are NOT running nginx in docker
# load_module /usr/lib/nginx/modules/ngx_stream_module.so;
events {} # doesn't have to be empty (it's ok if there's stuff here already)
stream { # doesn't have to be empty (it's ok if there's stuff here already)
    upstream k3s_servers {
        server 192.168.2.120:6443;
        server 192.168.2.121:6443;
    }
    server {
        listen 6443;
        proxy_pass k3s_servers;
    }
}
# After saving these changes, I had to restart the Nginx reverse proxy docker container.
# This took roughly 5 minutes, so if you're running the same setup as me,
# give it time before assuming something is wrong.
nginx -s reload # Reload the nginx config (run this inside the nginx container if you're using docker)
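# Sanity check: confirm nginx is listening on 6443 (run from any machine on your
# network; replace nginx.my.domain with your nginx IP/local domain name).
nc -zv nginx.my.domain 6443 # should report that the connection succeeded
# Once your k3s servers are installed (next section), hitting the load balancer
# should return a JSON auth error from the Kubernetes API rather than a timeout,
# which proves the stream proxy is passing traffic through:
curl -k https://nginx.my.domain:6443 # expect a 401/403 JSON response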
K3S Servers / Workers
# [BOTH K3S SERVERS] - Configure your environment variables for connecting to your mysql server
nano ~/.bashrc
export K3S_DATASTORE_ENDPOINT='mysql://K3S_USER:K3S-PASS@tcp(mysql.my.domain:3306)/K3S_DATABASE' # Replace mysql.my.domain with the IP/local domain name of your mysql instance, and the user/pass/database with the ones you created earlier.
export INSTALL_K3S_EXEC="server --no-deploy traefik" # skips deploying the bundled traefik; on newer k3s releases use --disable traefik instead, since --no-deploy is deprecated
source ~/.bashrc # apply changes
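# Sanity check: make sure the variables are set in this shell before running the
# installer, since the install script reads them from the environment.
echo "$K3S_DATASTORE_ENDPOINT" # should print your mysql connection string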
# [FIRST K3S SERVER] - Install k3s
curl -sfL https://get.k3s.io | sh -s - server \
--node-taint CriticalAddonsOnly=true:NoExecute \
--tls-san nginx.my.domain # replace nginx.my.domain with the IP/local domain name of your nginx instance.
# [FIRST K3S SERVER] - Verify your first server node is running
sudo k3s kubectl get nodes
# The above command should output something like the following:
NAME      STATUS   ROLES                  AGE   VERSION
k3ssrv1   Ready    control-plane,master   21h   v1.27.6+k3s1
# [FIRST K3S SERVER] - Copy the token for all your additional nodes to connect
sudo cat /var/lib/rancher/k3s/server/node-token # Copy this token; every additional server and worker node you add will need it (the first server doesn't).
# [SECOND K3S SERVER] - Connect the second server to the first using the node-token from the previous command. NOTE: Replace NODE_TOKEN with the ENTIRE contents of the node-token file seen above.
curl -sfL https://get.k3s.io | sh -s - server \
--token=NODE_TOKEN \
--node-taint CriticalAddonsOnly=true:NoExecute \
--tls-san nginx.my.domain # replace nginx.my.domain with the IP/local domain name of your nginx instance.
# [EITHER K3S SERVER] - Verify your second server node is running alongside your first server node
sudo k3s kubectl get nodes
# The above command should output something like the following:
NAME      STATUS   ROLES                  AGE   VERSION
k3ssrv1   Ready    control-plane,master   22h   v1.27.6+k3s1
k3ssrv2   Ready    control-plane,master   22h   v1.27.6+k3s1
# [ALL K3S WORKERS] - Replace nginx.my.domain with the IP/local domain name of your nginx instance. Replace `NODE_TOKEN` with the ENTIRE contents of the node-token file seen above.
curl -sfL https://get.k3s.io | K3S_URL=https://nginx.my.domain:6443 K3S_TOKEN=NODE_TOKEN sh -
# [EITHER K3S SERVER] - Verify your worker nodes are running
sudo k3s kubectl get nodes
# The above command should output something like the following:
NAME      STATUS   ROLES                  AGE   VERSION
k3ssrv1   Ready    control-plane,master   22h   v1.27.6+k3s1
k3ssrv2   Ready    control-plane,master   22h   v1.27.6+k3s1
k3swrk1   Ready    <none>                 21h   v1.27.6+k3s1
k3swrk2   Ready    <none>                 8h    v1.27.6+k3s1
k3swrk3   Ready    <none>                 8h    v1.27.6+k3s1
k3swrk4   Ready    <none>                 8h    v1.27.6+k3s1
At this point you should have a mysql server up and running with the k3s user, password, and database that your k3s servers connect to, and all of your k3s servers and workers should be joined into a single cluster through your nginx load balancer.
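# Optional sanity check on your mysql server: k3s stores cluster state through
# its kine shim, so the database you created should now contain a `kine` table.
mysql -u root -p -e 'SHOW TABLES;' K3S_DATABASE # replace K3S_DATABASE with your database name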
k3s Dashboard
# [MACOS] - Install kubectl on your Mac
brew install kubectl
# [FIRST K3S SERVER] - copy the contents of k3s.yaml
sudo cat /etc/rancher/k3s/k3s.yaml
# [MACOS] - Copy the k3s.yaml to your mac
mkdir -p ~/.kube # -p: don't error if the directory already exists
nano ~/.kube/config # paste the k3s.yaml contents here
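# NOTE: k3s.yaml points at https://127.0.0.1:6443 by default, which won't work
# from your Mac. Edit the server: line in ~/.kube/config to point at your nginx
# load balancer instead, e.g. with macOS sed (replace nginx.my.domain as before):
sed -i '' 's|https://127.0.0.1:6443|https://nginx.my.domain:6443|' ~/.kube/config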
# [MACOS] - Test the get nodes command
kubectl get nodes # this should display all your k3s servers/workers
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml # This was the most recent version as of publishing this post. Check the kubernetes dashboard releases page on GitHub for current install instructions.
# [MACOS] - Create yml files
nano dashboard.admin-user.yml # First yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
nano dashboard.admin-user-role.yml # second yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# [MACOS] - Create the dashboard admin user (kubectl doesn't need sudo; it reads ~/.kube/config)
kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
# [MACOS] - Create and copy the login token (set --duration to how long it should stay valid; with 0s the API server falls back to its default lifetime)
kubectl -n kubernetes-dashboard create token admin-user --duration=0s
# [MACOS] - Open the proxy connection (in a separate terminal window so you can do the test deploy later)
kubectl proxy
You should now be able to log in with the token you copied earlier at the following link: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
Test deploy
# [MACOS] - create testdeploy
nano testdeploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
  labels:
    app: mysite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
      - name: mysite
        image: nginx
        ports:
        - containerPort: 80
kubectl get pods # Check what's already running first
kubectl apply -f testdeploy.yaml # Apply the yml
kubectl get pods # View the pods
kubectl exec -it mysite-6fcf68956d-8gcnf -- curl localhost # Test the deployment (swap in your pod name from `kubectl get pods`); the output should say "Welcome to nginx!"
kubectl scale --replicas=2 deploy/mysite # scale to 2 replicas
kubectl delete deploy mysite # Delete site
kubectl get deploy mysite # should now error with "not found"
kubectl get pods # the mysite pods should be gone (or terminating)
Remove traefik from an existing install (seems tricky, might be incomplete here)
# If you forgot to add the `--no-deploy traefik` bit from earlier, here's how you fix that:
# [BOTH K3S SERVERS] - remove traefik.yaml file
sudo rm /var/lib/rancher/k3s/server/manifests/traefik.yaml
# [BOTH K3S SERVERS]
kubectl -n kube-system delete helmcharts.helm.cattle.io traefik # delete traefik
kubectl delete addon traefik -n kube-system # really delete traefik
kubectl delete addon traefik-config -n kube-system # no I'm serious delete it now
# [BOTH K3S SERVERS]
sudo systemctl restart k3s
# Also make sure to go to `http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/pod?namespace=_all`
# and delete any existing traefik processes after that.
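# Sanity check: no traefik pods or services should remain
kubectl get pods,svc -n kube-system | grep -i traefik # no output means it's gone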
How to reset if something goes wrong
# In the event that something goes wrong with setting up k3s, do the following to fix this
# 1. On all your k3s servers:
/usr/local/bin/k3s-uninstall.sh && reboot now # this uninstalls k3s
# 2. On all your k3s workers:
/usr/local/bin/k3s-agent-uninstall.sh && reboot now
# 3. On your mysql server:
USE K3S_DATABASE; # replace K3S_DATABASE with the database name you created
SHOW TABLES; # shows all tables in the selected database
DROP TABLE kine; # kine was the only table in my database at the time
# This should sufficiently reset your config to the following:
# 1. An LXC running mysql-server with the k3s user/pass you set,
# along with the (now empty) k3s database you created.
# 2. Your k3s servers waiting for you to run the k3s install script.
# 3. Your k3s workers waiting for you to run the k3s install script.
Have a wonderful rest of your day, and as always, cheers!