Step by Step ARM Kubernetes on Raspberry Pi (Part One)

Samuel W
18 min read · Jun 29, 2020
8 Node Kubernetes Cluster

This is Part One of a two part series designed to help you create your own Kubernetes cloud platform: multiple systems that act as a unified computing resource which can then be used to host web applications and application infrastructure components. The first step in understanding how a cloud platform works is to become familiar with containers, and specifically Docker in this case. Docker allows applications and their environments to be packaged into portable images which can then be launched as containers. This is an extremely efficient and flexible way to run applications, because everything you need is contained within the Docker image. Kubernetes works as a layer on top of Docker as a 'container orchestrator'. Its job is to manage the containers and related resources of an application deployed through Docker.
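To make that concrete: on any machine with Docker installed, a single command is enough to pull a prepackaged image and run it as a container. The nginx image below is just an illustration; any public image works the same way.

# Pull the public nginx image and run it as a background container on port 80
docker run -d --name hello-nginx -p 80:80 nginx
# List running containers, then clean up the test container
docker ps
docker rm -f hello-nginx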

The goal of Kubernetes is to maintain a desired system state by monitoring and leveraging the available cluster resources.

Kubernetes in production will likely be configured in what is considered a highly available topology, meaning multiple control plane nodes located behind a load balancer. In this project, we will set up a single control plane cluster, meaning there will be only one master node. This is a great project because the minimum requirement is just two Raspberry Pi Model 3 B+'s, but you are able to scale out to as many units as you have available. Once set up, you interact with the platform in the exact same way regardless of the number of nodes.

This part of the guide will focus on hardware configuration and establishing a basic, functioning Kubernetes cluster. You won't be left with loose ends at the end, but instead with a fresh Kubernetes platform where you can familiarize yourself with the environment and deploy simple applications. You will also be well prepared to move on to Part Two, which covers more advanced Kubernetes configuration. So let's begin!

Plan

This project is no small feat. Before proceeding, read through the process and figure out your plan. Knowing the general process in advance will save time and reduce the chance of mistakes when getting into the details of this guide. I’ll be covering my specific Kubernetes build, but the instructions should be easily adaptable if you know ahead of time how your build will differ from mine.

The minimum hardware requirements needed to follow this guide are two Raspberry Pi Model 3 B+ or newer (a master and a worker), and a network for them to connect to. The number of additional nodes used shouldn’t change the setup process but the method of connecting them to the network will. In order to follow my build exactly, you will need to have a WiFi network as well as an ethernet switch.

Take a few minutes to gather your materials and think about what the end result of your build will look like. Determine how many Raspberry Pi units you will use, which unit will act as the master and which as the worker(s), and whether you are going to connect all of the nodes directly to your network (wired or wireless) or follow my build and use the master node as a gateway for the workers.

Here is my plan:

— 8 Raspberry Pi Model 3 B+ with the following hostnames:

k8s-master
k8s-worker-01
k8s-worker-02
k8s-worker-03
k8s-worker-04
k8s-worker-05
k8s-worker-06
k8s-worker-07

— All nodes will be connected to each other over ethernet to an 8 port switch; k8s-master will connect to my home WiFi and function as a gateway for the other nodes.

My supplies:

Assorted components arranged on a table

As an alternative to copying directly from this article, supplementary resources for this tutorial can be found at https://github.com/Otterwerks/arm-kubernetes-setup

Prepare SD Cards

Each Raspberry Pi unit will need a micro SD card flashed with the latest version of Raspbian Lite (now called Raspberry Pi OS). I downloaded the 'Raspberry Pi OS Lite' image from the official Raspberry Pi downloads page.

I used this image to flash each of my 8 micro SD cards. The 'Lite' designation means that the image does not come with a desktop environment installed. A desktop environment adds no value to Kubernetes, and we can free up system resources by choosing the version of Raspbian without one. This becomes increasingly important if you are using older hardware with fewer resources to spare!

The process of preparing each micro SD card involves these three basic steps:

  1. Use a utility — I used Etcher (https://www.balena.io/etcher/) — to flash the Raspberry Pi OS image to the micro SD card
  2. Mount the newly flashed card (it should show up as ‘boot’) and place an empty file named ‘ssh’, with no file extension, into the root directory of the drive.
  3. Eject and set the card aside; don't boot any systems yet
Screenshot of flashing microSD card with Etcher

SSH connections are disabled by default in Raspbian but placing the ‘ssh’ file into the root directory changes this and will allow you to connect over SSH. This saves the trouble of having to connect a monitor and keyboard to each system to perform the initial setup.
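If you prefer the terminal, the empty file can be created with a single command once the card is mounted. The /Volumes/boot path below assumes macOS; on Linux the card may mount somewhere like /media/$USER/boot instead.

# Create the empty 'ssh' file on the boot partition (adjust the mount path for your OS)
touch /Volumes/boot/ssh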

There is an additional optional step to perform only if you want the system the card is being prepared for to connect to your WiFi network when you power it on. For my build, I only performed this step on the micro SD card that I planned to use for my master node, since my worker nodes will all be connected to each other and to the master by ethernet. If you want all of your nodes to connect to WiFi, you will need to perform this step for each micro SD card. If you will be connecting and accessing all nodes on a wired network, skip this step entirely.

To enable WiFi:

Create a file on your computer named ‘wpa_supplicant.conf’. Open it with a text editor and paste this text into the file:

# wpa_supplicant.conf
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="<WiFi network name>"
    psk="<WiFi password>"
}

Edit the configuration to match your WiFi SSID, password, and country code. Save the file and place it into the root directory of the card (same place as the ‘ssh’ file)

Keep track of your cards if you have lots of them! The method I use is to store them in their full size SD card adapter (most micro SD cards come with these) and to use a marker to write a unique label on each adapter.

The reason not to boot the nodes yet is because they all have the same default network hostname of ‘raspberrypi’, and it would cause a conflict to connect more than one at a time to the network. The next sections describe how to get all of the nodes up and running, one at a time.

Slices of Pi

Boot and Configure Master x 1

Place all of the micro SD cards into their Raspberry Pi units and connect any network cables if applicable. Raspberry Pis don't have power switches, so they will start automatically when power is connected; don't give any of the Pis power at this point.

Connecting Over SSH

Go ahead and boot up the master node by connecting the power cable or switching on the power supply. Try to connect to it over SSH using ssh pi@raspberrypi.local; it may take a minute to become ready. You should be asked whether you are sure you want to connect to the host; type 'yes'. The default password is raspberry, which will grant you terminal access and a new prompt looking like pi@raspberrypi $.

If you have SSHed into Raspberry Pis from your computer before, you might be shown this screen when trying to connect:

This is caused when there is already a host entry for the hostname ‘raspberrypi.local’ in the ‘known_hosts’ file of the computer you are trying to connect from. To fix things, run the command ssh-keygen -R raspberrypi.local. This command should work on Windows, Mac, and Linux.

Even if you were able to successfully connect on your first try, still take note of the above procedure. You will need to do this before you try to configure the next node, because each subsequent node will also have the hostname of ‘raspberrypi’.

If you run into trouble with the SSH connection, use a network scanning tool to look for the potential IP address of the system and try to connect to ‘pi@<ip_address>’.
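For example, if nmap is installed on your computer, a ping scan of your subnet will list the devices it can see (substitute your own network range for 192.168.1.0/24):

# Ping scan the local subnet and list responding hosts (adjust the range to your network)
nmap -sn 192.168.1.0/24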

Base Configuration

Once connected, I ran sudo apt-get update && sudo apt-get upgrade -y to update any existing software to the latest versions. Next, I used sudo raspi-config to set a new password and the hostname ‘k8s-master’. After changing the hostname I selected to reboot the system. Don’t forget to use the new hostname when trying to SSH back into this node later!

If you are connecting all nodes directly to your existing network through WiFi or ethernet, you are done configuring the master for now.

Extra/Optional Configuration

These steps will set up the master node to act as a gateway and create a separate subnet for the worker nodes. This is accomplished using a program called dnsmasq and the built-in iptables utility.

Set a static IP address for the ethernet adapter by editing dhcpcd.conf: sudo nano /etc/dhcpcd.conf. Uncomment the static IP entry for 'interface eth0' and the static IP entry below it by removing the '#'s, and change the IP address. This IP address should be from the new subnet that you would like to create; I used '192.168.10.1' because I chose to make my worker node subnet '192.168.10.x'. I also added a static configuration for wlan0 matching my WiFi network. This step is optional, but you can see my dhcpcd.conf file below as an example (my changes are the uncommented eth0 and wlan0 entries near the end):

# dhcpcd.conf
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.
# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel
# Inform the DHCP server of our hostname for DDNS.
hostname
# Use the hardware address of the interface for the Client ID.
clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.
#duid
# Persist interface configuration when dhcpcd exits.
persistent
# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit
# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu
# Most distributions have NTP support.
#option ntp_servers
# A ServerID is required by RFC2131.
require dhcp_server_identifier
# Generate SLAAC address using the Hardware Address of the interface
#slaac hwaddr
# OR generate Stable Private IPv6 Addresses based from the DUID
slaac private
# Example static IP configuration:
interface eth0
static ip_address=192.168.10.1
#static ip6_address=fd51:42f8:caae:d92e::ff/64
#static routers=192.168.0.1
#static domain_name_servers=192.168.0.1 8.8.8.8 fd51:42f8:caae:d92e::1
interface wlan0
static ip_address=192.168.11.224
static routers=192.168.11.1
# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=192.168.1.1
#fallback to static profile on eth0
#interface eth0
#fallback static_eth0

Next, install dnsmasq with sudo apt-get install dnsmasq -y; this will act as the DHCP and DNS server for the worker node subnet. The dnsmasq configuration is stored as '/etc/dnsmasq.conf'. The resource I followed to set it up suggested backing up the original configuration file and starting with an empty one. Back up the default configuration by adding '.orig' to the end of the file name to indicate it is the original file.

sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig

Open up a new, blank file with sudo nano /etc/dnsmasq.conf

Paste in the following lines:

# dnsmasq.conf
interface=eth0 # Use interface eth0
listen-address=192.168.10.1 # Listen on the eth0 static IP
# Bind to the interface to make sure we aren't sending things
# elsewhere
#bind-interfaces
server=8.8.8.8 # Forward DNS requests to Google DNS
domain-needed # Don't forward short names
# Never forward addresses in the non-routed address spaces.
bogus-priv
# Assign IP addresses between 192.168.10.2 and 192.168.10.20 with a
# 12 hour lease time
dhcp-range=192.168.10.2,192.168.10.20,12h

Edit ‘listen-address’ to match the static IP assigned to ‘eth0’ earlier in ‘dhcpcd.conf’. Make another change at ‘dhcp-range’ so that the IP range matches the same subnet as ‘listen-address’.

Exit nano with ‘ctrl+X’, press ‘Y’ to save, and ‘return’ to confirm the file name.

Start the dnsmasq service with sudo service dnsmasq start

Next, enable packet forwarding on the system by editing 'sysctl.conf' with nano: sudo nano /etc/sysctl.conf. Remove the '#' before the entry 'net.ipv4.ip_forward=1' to uncomment it. Exit nano and save. This will take effect on the next boot.
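If you would rather not wait for a reboot, forwarding can also be switched on for the current session; the entry in sysctl.conf still takes care of future boots.

# Enable IPv4 forwarding immediately and confirm the current value
sudo sysctl -w net.ipv4.ip_forward=1
sysctl net.ipv4.ip_forward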

Now we need to configure iptables to bridge the two network connections with NAT. Iptables is a firewall utility built into many popular Linux distributions. It shouldn’t contain any rules, but you can run sudo iptables -F and sudo iptables -t nat -F to make sure all rules are cleared.

Run the following 3 commands to set rules in iptables:

sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
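Optionally, you can list the rules back to confirm they were added:

# Show the NAT and forwarding rules that were just created
sudo iptables -t nat -L POSTROUTING -n -v
sudo iptables -L FORWARD -n -v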

We want these rules to persist after system reboots so save them to a file with this command:

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"

Now, to have the saved rules applied on every boot, open rc.local with sudo nano /etc/rc.local and insert the following line at the end, right before exit 0:

iptables-restore < /etc/iptables.ipv4.nat

Exit nano and save the file.

Restart the node with sudo reboot now

SSH back into the master node ssh pi@k8s-master.local and check that dnsmasq is running sudo service dnsmasq status. If you don’t see the ‘active’ status, start dnsmasq with sudo service dnsmasq start.

This completes basic setup for the node, continue to begin configuring the workers.

Boot and Configure Workers (x n workers)

Connecting Over SSH

This part will vary depending on your network configuration. If all of your nodes are connected directly to your main network, you should be able to SSH directly to the worker node after it starts up.

If instead you set up your network like mine, and followed the additional master node configuration steps, you won't be able to SSH directly into the workers because they will be on their own subnet. You will need to first SSH into the master node and then SSH into the worker node using ssh pi@<worker_hostname> from the master. During first time setup, each worker will have the default hostname of 'raspberrypi.local'.

If you are having trouble connecting to a worker node, try checking if any DHCP leases have been assigned by running this command on the master node:

cat /var/lib/misc/dnsmasq.leases

You may also need to use ‘.local’ after the hostname. Try rebooting all nodes if no entries are found, double check master node configuration if connection problems persist. It may also be worth checking the status of dnsmasq with sudo service dnsmasq status, and starting the service if it isn’t already running.

I found it useful to run ping google.com at various points throughout the setup process to make sure my nodes were able to reach the internet through my master node.

Worker System Setup

Once connected, follow the Base Configuration process described in the master configuration section for each worker node, running the commands sudo apt-get update && sudo apt-get upgrade -y and sudo raspi-config. Set your desired password and change the hostname to the appropriate, unique worker hostname. It is important to only power on and connect one system at a time because of the default 'raspberrypi' hostname conflict. You can set the same password or different passwords for the nodes; for simplicity, I set the same password for all of mine.
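If you would rather script these two changes than step through the raspi-config menus, something along these lines should work on recent Raspbian releases. The non-interactive raspi-config call and the example hostname/password are assumptions; adapt them to each of your nodes.

# Change the pi user's password non-interactively (replace 'newpassword')
echo 'pi:newpassword' | sudo chpasswd
# Set the hostname using raspi-config's non-interactive mode (hostname shown is an example)
sudo raspi-config nonint do_hostname k8s-worker-01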

Don’t forget you will also need to clear the ‘raspberrypi’ host entry in your ‘known_hosts’ file before connecting, ssh-keygen -R raspberrypi.local.

Reboot each node with sudo reboot now after finishing the configuration.

Passwordless SSH

This step is more a matter of convenience but is well worth the effort and will save many keystrokes if you are connecting to your workers through the master. Passwordless SSH isn’t required for Kubernetes but will facilitate setup and management by allowing the master node to connect to each worker node over SSH without prompting for a password each time.

To set this up, SSH into the master node and run ssh-keygen. Press ‘enter’ to use the default location, and ‘enter’ two more times to leave the password blank.

Still from the master node, copy the SSH key to each worker node with ssh-copy-id pi@<worker_hostname/IP>. You will need to enter the password you set for the worker to complete this action, but from now on any connection to the worker node will go through without prompting for a password.
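Optionally, a short entry in the master node's ~/.ssh/config saves typing the pi@ prefix for every worker. The hostname pattern below matches my build; adjust it to yours.

# ~/.ssh/config (on k8s-master)
Host k8s-worker-*
    User pi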

Install Docker and Kubernetes

Docker and Kubernetes are the foundation of the platform and need to be installed on each node. Complete the following steps on each node, starting with the master. Alternatively, if you are feeling ambitious, skip ahead to automate everything with a script.

Manual Process

In order for Docker to run properly on Raspberry Pi, swap memory must be disabled. Disable swap memory by running these 3 commands:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo systemctl disable dphys-swapfile

Install Docker next, with:

curl -sSL get.docker.com | sh

The Docker installation will create a ‘docker’ user group on the system. A user must be a member of this group in order to run ‘docker’ commands. Add the user ‘pi’ to this group with:

sudo usermod -aG docker pi
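The group change only applies to new sessions, so log out and back in (or just wait for the reboot at the end of this section) before testing. A quick sanity check once it applies:

# Run Docker's test image to confirm the daemon works and 'pi' can use it
docker run --rm hello-world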

To install Kubernetes, we first need to add a trusted apt repository that contains ‘kubeadm’. Add the Kubernetes repository and GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the list of available packages with sudo apt-get update, then install ‘kubeadm’ with sudo apt-get install -y kubeadm.
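To confirm the install succeeded, kubeadm can report its version:

# Print the installed kubeadm version
kubeadm version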

Reboot with sudo reboot now to finish the install process on the node.

Run these commands on each node.

Using an Install Script (optional)

I bundled all of these commands into a shell script which can then be run once on each system. On the master node, in the home directory, create a new file with the command nano install.sh and paste in the following:

#!/bin/sh
# install.sh
# Now installing on <hostname> (color formatted)
printf "\n\e[1;34mNow installing on\e[0m \e[32m$HOSTNAME\e[0m \n\n"
# Install Docker
curl -sSL get.docker.com | sh && \
sudo usermod -aG docker pi
# Disable Swap
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo systemctl disable dphys-swapfile
# Add repo list and install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm
# Restart to complete the install
sudo reboot now

Exit nano and save the file.

At this point you can run the script on the master with . install.sh, and on the workers over SSH:

ssh pi@<worker_hostname/IP> "bash -s" < ./install.sh

I went a step further by creating a second script on the master node named ‘run_install.sh’ to run ‘install.sh’ on all of my nodes:

#!/bin/sh
# run_install.sh
# Run install script on all nodes
ssh pi@k8s-worker-01 "bash -s" < ./install.sh
ssh pi@k8s-worker-02 "bash -s" < ./install.sh
ssh pi@k8s-worker-03 "bash -s" < ./install.sh
ssh pi@k8s-worker-04 "bash -s" < ./install.sh
ssh pi@k8s-worker-05 "bash -s" < ./install.sh
ssh pi@k8s-worker-06 "bash -s" < ./install.sh
ssh pi@k8s-worker-07 "bash -s" < ./install.sh
. install.sh

With this script, . run_install.sh will disable swap memory, install Docker, install Kubernetes, and reboot each node. Running this script took about 30 minutes to install the software on my 8 nodes.

Initialize Kubernetes Master and Add Workers

Master Node Configuration

From the master node, run:

sudo kubeadm init

This will set up the node as a control plane node, meaning a master node. The Kubernetes initialization may take a minute, but when it finishes, look toward the end of the output for the join command. This command can be seen in the image below.

Screenshot of kubeadm init join command

Save this command for later by copying it into a text file. It will be needed to add the rest of the nodes to Kubernetes.
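If you do misplace the join command, it isn't lost forever; kubeadm can print a fresh one from the master at any time:

# Generate a new token and print the full join command for workers
sudo kubeadm token create --print-join-command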

Finish the main Kubernetes setup by copying over ‘admin.conf’ into a user accessible file:

mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
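For a quick confirmation that kubectl can now reach the API server, ask for the cluster info:

# Show the control plane endpoint and core service addresses
kubectl cluster-info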

Kubernetes requires a pod network to be installed as a means of communication between pods. This command will install Weave Net as the pod network:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

At this point, check which pods are running with kubectl get pods -A.

Screenshot showing Kubernetes pod statuses

You can see in the above image that the system is still starting; wait a minute or two and check the status again.

Screenshot showing Kubernetes pod statuses

If you see problems with the 'coredns' pods as shown in the image above (CrashLoopBackOff), this is a DNS problem. CoreDNS is detecting an infinite DNS loop and halting the pod. The issue is caused by dnsmasq placing the entry 'nameserver 127.0.0.1' in '/etc/resolv.conf', while CoreDNS is configured by default to forward upstream DNS queries to whatever '/etc/resolv.conf' points at. This breaks things, since 127.0.0.1 or 'localhost' is relative, and in the context of the pod network it refers to the CoreDNS pod itself. It isn't typically a problem because 'resolv.conf' would usually be pointing to an external nameserver. When we fix this, keep in mind that dnsmasq intentionally made the localhost entry. We want to leave it there because it allows local DNS queries to be forwarded to dnsmasq, where they can be handled properly. Instead, we want to edit the CoreDNS configmap to forward upstream DNS queries explicitly to dnsmasq at the address it is listening on, rather than to 'localhost'.

Kubernetes, by default, will edit files with vim, and it caused some formatting issues for me when saving and editing the configmap. I changed the Kubernetes editor to nano by entering the line export KUBE_EDITOR="nano" into the terminal.

Editing the CoreDNS configmap can be done with:

kubectl -n kube-system edit cm/coredns

Look for the line ‘forward . /etc/resolv.conf’ and replace ‘/etc/resolv.conf’ with the IP address that dnsmasq is listening on.
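In my case the dnsmasq listen address is 192.168.10.1, so the change is just this one line (the surrounding plugin list in the Corefile varies slightly by Kubernetes version and stays as it is):

# Before
forward . /etc/resolv.conf
# After (192.168.10.1 is the dnsmasq listen address from my build)
forward . 192.168.10.1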

Screenshot of editing CoreDNS configmap

Check the pod status again, kubectl get pods -A. It may take a minute for the CoreDNS pods to enter the ‘Running’ state.

Screenshot of CoreDNS pods running

Worker Node Configuration

Worker nodes can be joined to Kubernetes by running the join command on each node. Here is an example of how I ran it over SSH:

ssh pi@<worker_hostname> sudo kubeadm join 192.168.11.24:6443 --token t2kfme.3hifcr9imgwz6nbl --discovery-token-ca-cert-hash sha256:e04340870272a5bd8db531d41cf509eacbb2c466f09bb51ee225441f5389250d

After joining each worker node, check that they all appear in Kubernetes:

kubectl get nodes
Screenshot of Kubernetes node statuses

If anything should go wrong and you need to start over, run sudo kubeadm reset -f on each problematic node. If the join command fails on a node because the 'kubeadm' command was not found, try rerunning the installation script on that node.

Wrap Up

This is a good stopping point if you won't have time to install and configure the remaining components detailed in Part Two. You may have noticed, when running kubectl get nodes, that the worker nodes have a role of '<none>'. As an optional task, you can assign worker labels to the nodes using the following command:

kubectl label node <node_name> node-role.kubernetes.io/worker=worker
Screenshot of assigning node roles

If you have set up a cluster with only two systems, a master and a worker, you may consider enabling the master node to host pods and other resources. This process involves removing the 'NoSchedule' taint on the master, and should only be performed if the worker nodes (or the single worker node in this case) are becoming overwhelmed. Run this command to allow regular pods to be scheduled on your master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

Read more about taints and tolerations here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/

Summary

The platform should now be ready to handle Docker container deployments, but we still have more bells and whistles to add. Continue to Part Two for instructions regarding the MetalLB load balancer, Traefik ingress controller, and NFS persistent storage.

Kubernetes uses manifests in the YAML format to easily describe and manage resources. You can follow the Kubernetes tutorial to run and update a couple of Nginx pods on your new cluster here: https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/.
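As a minimal first experiment before working through that tutorial, a few kubectl commands from the master node will spin up Nginx and scale it across the workers. The deployment name and replica count here are arbitrary choices.

# Create an Nginx deployment, scale it to two replicas, and watch where the pods land
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=2
kubectl get pods -o wide
# Clean up when finished experimenting
kubectl delete deployment nginx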

For an example of what can be done after finishing this guide, or if you already have a Kubernetes cluster, check out my other article on how to deploy Scalable Jenkins CI/CD On Bare Metal ARM Kubernetes!

Part Two is currently under development and should be ready for release soon!
