For my current Kubernetes setup I ended up using Cilium as the networking provider, configured with BGP against the OPNsense firewall I run. This setup was a bit different because Cilium didn't support having nodes on the same network as the pods, which makes sense, though it should have allowed for better configuration of the pod CIDR. Below is how I configured Cilium and OPNsense:
I made an attempt to run Cilium configured with BGP, but needed to move the nodes to the 172.16.0.0/24 network.
This was the process I used:
- Create a new VLAN on the switch for the 172.16.0.0/12 network
- Provision the nodes with Terraform
- Pluck off a space in the network for Kubernetes
- Add a firewall interface for the new L2 network
- Update Terraform to move everything to the 172.16.0.0/12 network
- Update scripts
- Try Cilium again
Installing Helm and Cilium
```shell
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.12.2 \
  --namespace kube-system \
  --set bgp.enabled=true \
  --set bgp.announce.loadbalancerIP=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
  # --set bgp.announce.podCIDR=true
```
After installing Cilium I needed to add a ConfigMap for BGP pointing at OPNsense (note: this may have needed to be done before installing Cilium):
- Note the peer address is my firewall's IP
- Note the peer ASN is the AS number of the firewall
- Note my ASN is the AS number of the Kubernetes cluster
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 172.16.0.254
        peer-asn: 64512
        my-asn: 64513
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 172.16.1.0/24
```
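One thing worth sanity-checking with a layout like this is that the LoadBalancer address pool (172.16.1.0/24) does not overlap the node subnet, or advertised service IPs would collide with node addresses. A quick hypothetical check in plain shell (the `overlap` helper is my own, not part of Cilium):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# overlap NET1 PREFIX1 NET2 PREFIX2 -> prints "yes" or "no".
# Two CIDRs overlap iff their network addresses match when both are
# masked with the shorter (smaller) prefix length.
overlap() {
  n1=$(ip_to_int "$1"); p1=$2
  n2=$(ip_to_int "$3"); p2=$4
  if [ "$p1" -lt "$p2" ]; then p=$p1; else p=$p2; fi
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  if [ $(( n1 & mask )) -eq $(( n2 & mask )) ]; then echo yes; else echo no; fi
}

overlap 172.16.0.0 24 172.16.1.0 24   # node net vs LB pool -> no
overlap 172.16.0.0 12 172.16.1.0 24   # the /12 does contain the pool -> yes
```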
On the OPNsense side I had to install the frr package, which enables dynamic routing. From there I had to configure BGP:
- Enabling BGP
- Setting the BGP AS number to 64512
- Adding all the k8s nodes as neighbors in the BGP config with the correct peer ASN (peer ASNs are 64513)
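Put together, the FRR configuration those steps produce on the firewall would look roughly like this (I did this through the OPNsense UI rather than by hand, and the neighbor addresses are hypothetical node IPs on the 172.16.0.0/24 network):

```
router bgp 64512
 bgp router-id 172.16.0.254
 neighbor 172.16.0.1 remote-as 64513
 neighbor 172.16.0.2 remote-as 64513
 neighbor 172.16.0.3 remote-as 64513
```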
After everything has been configured, LoadBalancer IPs can now be allocated from the 172.16.1.0/24 network.
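To actually use this, a Service just requests `type: LoadBalancer` and Cilium allocates an IP from the 172.16.1.0/24 pool and advertises it to the firewall over BGP. A minimal sketch (the `nginx` name and `app: nginx` label are placeholders, not from my actual cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx            # hypothetical service name
spec:
  type: LoadBalancer     # external IP comes from the BGP address pool
  selector:
    app: nginx           # hypothetical pod label
  ports:
    - port: 80
      targetPort: 80
```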