
Kubernetes

25 Nov 2020 » system configuration, sysadmin, homelab, server build, kubernetes

Kubernetes, so I can run a multi-node, overkill cluster to host a blog with tens of page views. Because I can.

Shell

# Initialize the cluster
sudo kubeadm init --apiserver-advertise-address=10.0.1.2 --apiserver-cert-extra-sans=10.0.1.2  --node-name hlvkt1 --pod-network-cidr=192.168.0.0/16
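
# Join any worker nodes with the command kubeadm init prints at the end.
# (The token and hash below are placeholders, not values from this cluster.)
# sudo kubeadm join 10.0.1.2:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>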


# Initialize calico network
curl https://docs.projectcalico.org/manifests/calico.yaml -O
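# Edit CALICO_IPV4POOL_CIDR to match --pod-network-cidr above (details in the YAMLs section below)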
vim calico.yaml
kubectl apply -f calico.yaml


# Metallb
curl https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml -O
curl https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml -O
kubectl apply -f namespace.yaml
kubectl apply -f metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
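# Note: MetalLB won't hand out external IPs until an address-pool
# ConfigMap is applied -- see the ConfigMap under YAMLs below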


# IPTABLES (Not Necessary)
sudo iptables --flush
sudo iptables -t nat --flush


# Netplan (Had to configure this properly on the host)
vim /etc/netplan/00-installer-config.yaml


# Create Deployment
kubectl create deployment nginx2 --image=nginx
kubectl get deployment
kubectl scale deployment/nginx2 --replicas=2
kubectl get pod -o wide
kubectl expose deployment nginx2 --type=LoadBalancer --port=80
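

# Sanity check: the service should pick up an external IP from MetalLB
# (10.0.1.240 is what MetalLB handed out here; yours may differ)
kubectl get svc nginx2
curl http://10.0.1.240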

YAMLs

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: poc-nginx-deploy
spec:
  selector:
    matchLabels:
      app: poc-nginx-deploy
  template:
    metadata:
      labels:
        app: poc-nginx-deploy
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80

Service:

apiVersion: v1
kind: Service
metadata:
  name: poc-nginx-deploy
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: poc-nginx-deploy
  type: LoadBalancer

ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.1.240-10.0.1.250
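
Saved locally (the filenames here are my own), these get applied like any other manifest:

kubectl apply -f poc-nginx-deploy.yaml
kubectl apply -f poc-nginx-svc.yaml
kubectl apply -f metallb-config.yaml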

MetalLB and Calico each have their own config files.

With Calico, the pod network needs to be set. Find the CALICO_IPV4POOL_CIDR variable in calico.yaml, replace the value with the same subnet passed to kubeadm init as --pod-network-cidr, and save the file:

- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"

Gotchas

A few things to look out for, and some recommendations:

  • Applying manifests and files: use curl -O to download the file, then run kubectl apply from within the folder.
  • Fix DHCP / confirm the DHCP range isn't overlapping the MetalLB pool.
  • Fix the network on the host; honestly, just use a static IP.
  • Networking in k8s isn't easy, and DNS also sucks (a quick sanity check is sketched below).
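
A minimal DNS sanity check (the pod name is mine; busybox 1.28 because later busybox builds ship a broken nslookup):

# Confirm the CoreDNS pods are up
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Resolve the API service from inside a throwaway pod
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default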

DHCP

The DHCP range was 10.0.1.2-10.0.1.254. I cut it in half to 10.0.1.2-10.0.1.128.

I set the LB to pull the last few IPs from the range.

MetalLB pulls 10.0.1.240-10.0.1.250.

Static IP

For Ubuntu, it's likely Netplan. I'll probably end up using an Ansible template with the IP (a sketch follows the config below).

cat /etc/netplan/00-installer-config.yaml

network:
  ethernets:
    ens18:
      addresses:
        - 10.0.1.2/24
      gateway4: 10.0.1.1
      nameservers:
        addresses: [10.0.0.1, 8.8.8.8]
  version: 2
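
A minimal sketch of that Ansible templating, assuming a per-host node_ip variable (the paths and variable name are my own, not anything from the cluster above):

# templates/00-installer-config.yaml.j2
network:
  ethernets:
    ens18:
      addresses:
        - {{ node_ip }}/24
      gateway4: 10.0.1.1
      nameservers:
        addresses: [10.0.0.1, 8.8.8.8]
  version: 2

# tasks/main.yml
- name: Render per-host netplan config
  template:
    src: 00-installer-config.yaml.j2
    dest: /etc/netplan/00-installer-config.yaml
  become: true

- name: Apply the new netplan config
  command: netplan apply
  become: true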

A Somewhat Full Look

$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        22h
nginx        NodePort       10.106.246.153   <none>        80:30232/TCP   22h
nginx2       LoadBalancer   10.99.164.117    10.0.1.240    80:32380/TCP   21h


$ kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx    2/2     2            2           22h
nginx2   2/2     2            2           21h


$ kubectl get ep
NAME         ENDPOINTS                             AGE
kubernetes   10.0.1.2:6443                         22h
nginx        192.168.178.194:80,192.168.195.3:80   22h
nginx2       192.168.178.196:80,192.168.195.6:80   21h


$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-7n67m    1/1     Running   0          22h   192.168.195.3     hlvkt1   <none>           <none>
nginx-6799fc88d8-89jvw    1/1     Running   0          22h   192.168.178.194   hlvkt2   <none>           <none>
nginx2-5fc4444698-4gqzc   1/1     Running   0          21h   192.168.178.196   hlvkt2   <none>           <none>
nginx2-5fc4444698-bcd9q   1/1     Running   0          21h   192.168.195.6     hlvkt1   <none>           <none>

Troubleshooting: no kubectl access after a reboot

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
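
If kubectl still can't reach the cluster, confirm the control plane actually came back up (standard checks, nothing specific to this cluster):

# Is the kubelet running on the control-plane node?
sudo systemctl status kubelet

# One-off kubectl call against the admin config directly
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes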

Replicate…

The real question is: can I do it again?
