
Hello Kubernetes, my old friend

01 Feb 2022 » kubernetes, proxmox, containers

This post documents a more fluid installation process for Kubernetes. It exists because, unfortunately for me, when I went to power on my Kubernetes cluster from about a year ago, the certificates had expired and generating new keys was not working properly. The HAProxy configuration wasn't the issue; it was TLS within the Kubernetes api-server. So I did what most folks would do in a lab environment and wiped the entire cluster. The original cluster didn't have any production services running, so this was the logical choice for me.
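
For anyone hitting the same wall, kubeadm can report on and renew the cluster certificates; this is a rough sketch of what I would check first on a control plane node (kubeadm 1.20+; older releases used kubeadm alpha certs):

# Check when each certificate expires
sudo kubeadm certs check-expiration

# Renew all certificates (the control plane static pods need a restart afterwards)
sudo kubeadm certs renew all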

Installing K8s

I already have a K8s Ansible playbook that adds all the dependencies and installs the latest Kubernetes version. It is about 200 lines and has been excluded for brevity (a rough sketch of what it automates is below). After the base packages have been installed on the primary and secondary servers, the following commands configure them as the primary and secondary control planes:
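
For context, the playbook essentially automates the standard package setup on every node; this is a hedged sketch of roughly what it does (repository and package names as documented upstream at the time, not the actual playbook):

# Disable swap and install the container runtime plus the Kubernetes packages
sudo swapoff -a
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl containerd
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl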

Note: The Kubernetes networking component (see below) may need to be completed before any nodes are joined to the cluster!

# Initialize the Cluster using the 10.0.0.30 address (this is the CARP HAProxy IP Address configured to point to just the master node)
# note hlvk8cp is a dns entry for the CARP address
sudo kubeadm init --control-plane-endpoint hlvk8cp:6443 --pod-network-cidr 192.168.150.0/23

# On the master node I had to run the following to get the certificate key
kubeadm init phase upload-certs --upload-certs

# From the master again:
kubeadm token create --print-join-command

# From the secondary control plane, where the $variables come from the output of the two commands above:
sudo kubeadm join hlvk8cp:6443 --token $token --discovery-token-ca-cert-hash $hash --control-plane --certificate-key $certificateKeyFromLastOutput
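
Before kubectl works on the control plane node, it needs a kubeconfig; kubeadm prints these steps at the end of the init output:

# Copy the admin kubeconfig so kubectl works for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config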

I did run into an issue with certs again on the initial deploy. I believe this was because my servers were set to UTC. I set them to America/New_York with:

# Change timezone
timedatectl set-timezone America/New_York

After the certs were looking good and kubectl get nodes --v=7 was showing the two nodes, it was time to add workers!

This was easy: I provisioned 3 more worker servers and joined them to the cluster with the following:

# Joining worker nodes to the cluster
# this join command is printed by the kubeadm token create --print-join-command run on the master node above
kubeadm join hlvk8cp:6443 --token $token --discovery-token-ca-cert-hash $hash

After everything was working and kubectl get nodes was showing my 5 nodes, the cluster was ready for networking, some test pods, and a load balancer service.

Networking, Pods, and MetalLB

After the nodes are showing in Kubernetes, the pods need a network to reside on. For me this meant using Calico as the pod glue between nodes. Note: the networking component may need to be completed before any nodes are joined to the cluster! Installing Calico looks like the following:

# Grab the manifest
wget https://docs.projectcalico.org/manifests/calico.yaml

# Update the CIDR Range
vim calico.yaml

# Update these values (I'm using this CIDR because it is not in use on my network):
#   - name: CALICO_IPV4POOL_CIDR
#     value: "192.168.150.0/23"

# After the CIDR value has been updated, apply it:
kubectl apply -f calico.yaml
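
Once applied, the Calico pods should come up and the nodes should flip to Ready; a quick check (label taken from the Calico manifest):

# calico-node runs as a DaemonSet on every node
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes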

After networking is installed, it's time for some load balancing for external access in. For MetalLB, strictARP must be set to true in the kube-proxy ConfigMap in the kube-system namespace. Configuring this for MetalLB via the shell looks like the following:

# From the MetalLB Documentation!!!

# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system

After strictARP has been updated, MetalLB can be installed via manifest, which looks like the following:

# I prefer to have the yaml config files in a saved directory, hence the wget
wget https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f namespace.yaml

wget https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
kubectl apply -f metallb.yaml

These will install MetalLB on the system, but MetalLB still doesn't know where to pull IP addresses from. This is where the ConfigMap comes in, which looks like the following (metallb-configmap.yaml); note the addresses configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.1.240-10.0.1.250

This can then be applied with kubectl apply -f metallb-configmap.yaml and the LoadBalancer is configured!
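
A quick sanity check that MetalLB's pods are up and the config landed where it expects (namespace and ConfigMap name come from the manifests above):

# Controller and speaker pods should be Running
kubectl get pods -n metallb-system

# The address pool should show up in the config
kubectl describe configmap config -n metallb-system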

After the networking and load balancing (service) are configured, it's time for a test deployment and a way for external requests to be served (example.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  #annotations:
  #  metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 10.0.1.241

# Apply the deployment and service:
kubectl apply -f example.yaml

# Now we see pods
kubectl get pods

# Now we see our service
kubectl get svc

If we venture on over to 10.0.1.241 in the browser, we get the "Welcome to nginx!" page. Great, everything works!
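
The same check from the shell, assuming curl is available:

# Should return the default nginx welcome page HTML
curl http://10.0.1.241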

K9s & Go

K9s is a more interactive way to use Kubernetes. It provides a higher-level overview of and quick access to pods, services, nodes, and configuration, without having to remember where every config file lives. Getting it running is relatively easy; however, there wasn't an apt install k9s available, and installing golang with apt gives 1.13 while the latest is 1.17, so Go has to be installed from the official tarball and k9s built from source, which looks like the following:

# Installing Golang
wget https://go.dev/dl/go1.17.6.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.17.6.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# I then added go to my bash profile in $HOME/.profile
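# (assuming bash; one way to persist the PATH change)
echo 'export PATH=$PATH:/usr/local/go/bin' >> $HOME/.profile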
# To confirm it works this command should return "go version go1.17.6 linux/amd64":
go version

# Installing K9s from Source
git clone https://github.com/derailed/k9s.git
cd k9s
make build
./execs/k9s

# I then added k9s to the path as well
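# (one option; copies the built binary into /usr/local/bin)
sudo install ./execs/k9s /usr/local/bin/k9s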

Adding Metrics

Metrics weren’t working originally. Running kubectl top nodes complained bigtime: first it said the Metrics API wasn’t available, then it reported TLS errors talking to the kubelets. To alleviate this I installed the Metrics Server and then allowed insecure TLS.

This looked like the following:

# Pull down the metrics components
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Update the configuration in the components.yaml under containers:
#       containers:
#          - args:
#            - --cert-dir=/tmp
#            - --secure-port=4443
#            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
#            - --kubelet-use-node-status-port
#            - --metric-resolution=15s
# Add this line:
#            - --kubelet-insecure-tls


# Apply the configuration:
kubectl apply -f components.yaml
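
It can take a minute for the metrics-server pod to come up; a quick check before expecting top to work (the Deployment name comes from the upstream components.yaml):

# Wait for metrics-server to become available, then query node metrics
kubectl -n kube-system rollout status deployment/metrics-server
kubectl top nodes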

From here kubectl top nodes is working, the cluster can be monitored, and it's ready for pods and production services! On to Ingress, storage, and Vitess SQL deployments for a base level of services!
