
Kubernetes on ODROID-N2

The ODROID-N2 is an amazing single board computer: it is available with 4GB RAM, and together with an eMMC card for high speed storage and a power supply it costs less than 100 Euros. With 4+2 ARM64 CPU cores, the ODROID-N2 is an interesting platform for operating a small Kubernetes cluster without worrying too much about the power bill.

This article explains how to set up Kubernetes on ODROID-N2 single board computers. Since there are several options for the operating system as well as the Kubernetes distribution and setup method, this article makes the following decisions:

  • Arch Linux ARM64 is used as the base operating system (it is quite lean and kept up to date)

  • Vanilla Kubernetes will be used, compiled and packaged as Arch ARM64 packages on the ODROID N2

  • Plain kubeadm will be used to set up the Kubernetes cluster

  • CRI-O as container runtime (instead of Docker)

  • Single master node and 4 worker nodes

Unfortunately there is no mainline Linux kernel support for the ODROID-N2 yet, but Hardkernel has promised to work on it. The following features currently do not work as desired:

  • zram for compressed memory as swap device

  • Disabling GPU memory allocation to make the full 2GB/4GB of the ODROID-N2 available

Previous experiences with Arch Linux ARM 64bit and Kubernetes on Raspberry Pi and ODROID (ODROID-C2 to be precise) can be found here:

Installing Arch Linux on ODROID-N2

Arch Linux is quite easy to set up. General installation instructions can be found here: https://archlinuxarm.org/platforms/armv8/amlogic/odroid-n2

In order to ease the setup of multiple nodes, scripting can be used to semi-automate preparing the storage (eMMC or SD card) and extracting the base system. Especially since customizations like copying SSH keys, granting sudo rights, and configuring the hostname should be applied as well, automation really pays off.

After the base setup the following packages are installed as well:

  • sudo, htop

  • socat, ethtool, ebtables (for Kubernetes CNI networking)

  • cpupower (reduces power consumption by allowing CPU throttling during idle periods)

  • nfs-utils (if NFS storage is to be used with Kubernetes)
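Assuming the package names listed above, the installation can be performed in a single pacman transaction:

```shell
# Install the additional tools in one transaction
sudo pacman -S --needed sudo htop socat ethtool ebtables cpupower nfs-utils
```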

To make use of all six CPU cores when compressing Arch Linux packages, the following parameter can be set in /etc/makepkg.conf:

Configure multithreaded compression for Arch Linux package creation
COMPRESSXZ=(xz -T0 -c -z -)

Building Kubernetes Arch Linux ARM 64 packages

First, up-to-date packages for Kubernetes and supporting services are built as Arch Linux packages. It is recommended to create a directory for each package to be built and place the corresponding PKGBUILD file in it.

You can find the used PKGBUILD files here:

Building a package is generally performed by issuing makepkg -s in each directory. At the moment all packages can be built this way except the Kubernetes Arch package.
For Kubernetes some special steps need to be taken, since the build is quite resource intensive: on a 4GB ODROID-N2 a build is possible without additional swap memory, but about 3.5 GB is the minimum. If a 2GB model is used, a swap file can be added:

Adding a swap file on ODROID-N2 2GB
$ sudo fallocate -l 1000M /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile

In addition to that, two settings need to be applied: The kernel should be allowed to overcommit the available memory instead of eagerly allocating it: sudo sysctl -w vm.overcommit_memory=1. The Go build chain must also be prevented from performing parallel builds with the number of available cores, which would strongly increase memory consumption: export GOFLAGS="-p=1". Although the packages are then built one after the other, each individual build can still leverage all cores, so major performance reductions are avoided. Since Arch uses a tmpfs filesystem for /tmp, it should be unmounted first (sudo umount /tmp); otherwise memory would be allocated for temporary build artifacts, possibly resulting in an out of memory condition.
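Summarized, the preparation steps for the Kubernetes build are:

```shell
# Allow the kernel to overcommit memory instead of allocating it eagerly
sudo sysctl -w vm.overcommit_memory=1
# Restrict the Go toolchain to one parallel package build to reduce peak memory usage
export GOFLAGS="-p=1"
# Unmount the tmpfs on /tmp so temporary build artifacts go to disk instead of RAM
sudo umount /tmp
# Build the Kubernetes Arch package
makepkg -s
```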

After these preparations, Kubernetes can be built using makepkg -s as well.

When the build is complete the following packages should be present:

  • cni-plugins-0.7.5-1-aarch64.pkg.tar.xz

  • cri-o-1.14.0-1-aarch64.pkg.tar.xz

  • crictl-bin-1.14.0-1-aarch64.pkg.tar.xz

  • runc-1.0.0rc8-1-aarch64.pkg.tar.xz

  • kubernetes-1.14.1-1-aarch64.pkg.tar.xz

These packages can now be distributed to all ODROID-N2 nodes participating in the cluster. Of course other machines can be used as well, as long as they all provide ARM64 as hardware platform.
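Distribution can be scripted as well; the node names below are examples and need to be adapted:

```shell
# Copy the built packages to every node (node names are examples)
for node in n2-master0 n2-worker0 n2-worker1 n2-worker2 n2-worker3; do
    scp ./*.pkg.tar.xz "${node}:"
done
```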

ODROID-N2 Kubernetes general node setup

Before installing the packages, some settings for the correct operation of container networking need to be applied.

The following kernel features need to be present; otherwise Kubernetes networking will not work, which can lead to hard to diagnose errors like "iptables: No chain/target/match by that name." or "Unexpected command output Device 'eth0' does not exist.":

  • CGROUP_PIDS

  • NETFILTER_XTABLES, XT_SET

If the kernel is missing a feature, as shown in the output below, the quickest solution is to build a new kernel package that includes the required features.

Kernel feature verification for Kubernetes CNI
$ zgrep XT_SET /proc/config.gz
# CONFIG_NETFILTER_XT_SET is not set
$ zgrep CONFIG_NETFILTER_XTABLES /proc/config.gz
CONFIG_NETFILTER_XTABLES=m
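All required options can also be checked in one go with a small loop. The option names below reflect the feature list above; a feature counts as present when it is built in (y) or available as a module (m):

```shell
# Check all kernel options required for Kubernetes networking at once
for opt in CONFIG_CGROUP_PIDS CONFIG_NETFILTER_XTABLES CONFIG_NETFILTER_XT_SET; do
    if zgrep -q "^${opt}=[ym]" /proc/config.gz; then
        echo "${opt}: ok"
    else
        echo "${opt}: MISSING"
    fi
done
```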

Building is quite easy since the Arch Linux kernel package can be built using the usual tooling. To speed up the build process it is recommended to edit /etc/makepkg.conf and enable multithreaded compilation using MAKEFLAGS="-j6", reflecting the six cores available on the ODROID-N2.
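The makepkg.conf change can be applied with sed, assuming the default commented-out MAKEFLAGS line shipped by Arch:

```shell
# Enable six parallel make jobs for package builds on the ODROID-N2
sudo sed -i 's/^#MAKEFLAGS=.*/MAKEFLAGS="-j6"/' /etc/makepkg.conf
```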

Building a custom kernel from a git repository
$ git clone https://github.com/everflux/PKGBUILDs.git
$ cd PKGBUILDs/core/linux-odroid-n2
$ git checkout patch-1
$ makepkg -s

Installation of the kernel package is performed using pacman. Afterwards the networking configuration can be performed.

Network configuration for container networking
$ sudo sh -c 'echo "net.ipv4.ip_forward=1" >> /etc/sysctl.d/30-ipforward.conf'
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
$ sudo sh -c 'echo "xt_set" > /etc/modules-load.d/xt_set.conf'
$ sudo modprobe br_netfilter xt_set
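Whether the settings are active can be verified directly:

```shell
# Verify that IP forwarding is enabled and the required modules are loaded
sysctl net.ipv4.ip_forward
lsmod | grep -E 'br_netfilter|xt_set'
```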

On each node the previously built Arch Linux Kubernetes and container tool packages need to be installed. If a custom kernel package was built, it needs to be installed as well.

Installation of all packages
$ sudo pacman -U *pkg.tar.xz
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (5) cni-plugins-0.7.5-1  cri-o-1.14.0-1  crictl-bin-1.14.0-1  kubernetes-1.14.1-1  runc-1.0.0rc8-1

Total Installed Size:  1065.89 MiB

:: Proceed with installation? [Y/n]
...

After installation the CRI-O container runtime requires configuration. CRI-O honors the system wide configuration of trusted container registries in /etc/containers/policy.json. In order to be able to pull images from docker.io (and other registries), a default policy can be installed: policy.json
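Such a default policy, which accepts images from any registry without signature verification, can be written like this; stricter policies are possible and preferable for production use:

```shell
# Install a permissive default policy that accepts images from any registry
sudo sh -c 'cat > /etc/containers/policy.json <<EOF
{
    "default": [
        { "type": "insecureAcceptAnything" }
    ]
}
EOF'
```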

A minimal configuration for CRI-O itself is provided here: crio.conf. It must be placed in /etc/crio/crio.conf.
To avoid CRI-O disabling container networking due to a missing default CNI network configuration, a simple loopback CNI configuration is set up.

Default CNI network configuration using loopback network
$ sudo mkdir -p /etc/cni/net.d
$ sudo sh -c 'cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.2.0",
    "type": "loopback"
}
EOF'

Afterwards the crio service can be enabled and started.

Enabling and starting the CRI-O service
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio
$ sudo mkdir -p /etc/cni/net.d
$ sudo systemctl start crio
$ sudo systemctl enable kubelet.service

ODROID-N2 Kubernetes master setup

On the master node the cluster setup is performed using kubeadm. Since even the latest ODROID-N2 with 4GB RAM is quite memory constrained, additional capacity using zram swap or a swap file comes to mind. In order to run Kubernetes with swap enabled, the option --ignore-preflight-errors Swap must be provided to kubeadm.

Kubernetes master setup on ODROID-N2
$ sudo kubeadm init --ignore-preflight-errors Swap --cri-socket=/var/run/crio/crio.sock
...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.23.200.120:6443 --token c11wrg... \
    --discovery-token-ca-cert-hash sha256:3f5dc1..

Once the kubeadm setup is finished and the join token is shown, the worker nodes can be set up. But first a copy of the cluster configuration is prepared in the home directory of the user, so it can later be retrieved to configure kubectl.

Prepare cluster configuration
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Kubernetes worker setup

Since the common setup is the same for worker and master nodes, very little is left to do: the kubeadm command is used to join the cluster; afterwards the Kubernetes worker setup on the node is finished.

If the worker nodes have swap enabled the parameter --ignore-preflight-errors Swap must be provided as well.

Join kubernetes cluster with kubeadm
$ sudo kubeadm join 10.23.200.120:6443 --ignore-preflight-errors Swap --token c11wrg.... \
    --discovery-token-ca-cert-hash sha256:3f5dc1...

Cluster networking and access

In order to access the kubernetes cluster, the generated configuration file for kubectl is obtained from the master.

Retrieving the kubectl configuration from the master
$ mkdir -p ~/.kube
$ scp master:~/.kube/config ~/.kube/config

Afterwards the cluster should be accessible from kubectl.

Accessing the newly setup ODROID-N2 kubernetes cluster
$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
n2-master0   NotReady   master   11m   v1.14.1
n2-worker0   NotReady   <none>   5s    v1.14.1
n2-worker1   NotReady   <none>   10s   v1.14.1
n2-worker2   NotReady   <none>   9s    v1.14.1
n2-worker3   NotReady   <none>   8s    v1.14.1

The nodes are all in state NotReady since no cluster networking is set up yet. This can be fixed quickly by using Weave as CNI provider:

Setup CNI with Weave
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Once Weave networking is established, the nodes change to state Ready.

Node status after finished cluster setup
$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
n2-master0   Ready    master   77m   v1.14.1
n2-worker0   Ready    <none>   65m   v1.14.1
n2-worker1   Ready    <none>   65m   v1.14.1
n2-worker2   Ready    <none>   65m   v1.14.1
n2-worker3   Ready    <none>   65m   v1.14.1

To get a web based interface for the cluster, the Kubernetes dashboard is installed. Although it is provided as an ARM64 image, the default deployment uses amd64 as platform, so a little substitution with sed is needed.

Installing Kubernetes Dashboard for ARM64
$ curl -sSL https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml | sed 's/-amd64:/-arm64:/' | kubectl apply -f -
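The dashboard is not exposed externally by default; one way to reach it is through kubectl proxy. The URL below matches the default dashboard deployment of that version and may differ for other releases:

```shell
# Forward the Kubernetes API server to localhost
kubectl proxy &
# The dashboard is then reachable at (service name assumed from the default deployment):
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```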




