PiBox / Kubesail Setup

Last Friday I received the PiBox I ordered through the PiBox Kickstarter last year. I was originally intrigued by the device since it advertises itself as a two-disk Raspberry Pi NAS.

A year later I’m now more interested in the Kubernetes capabilities of this device. Especially since the PiBox can be directly accessed through Kubesail’s website!


The setup process is surprisingly easy: plug in the device, scan a QR code, and connect it to the Kubesail web interface.

Unfortunately the default configuration has a couple of issues:

  • The default user and password are not changed on first boot (pi / kubesail).
  • Kubesail configures a JBOD with all disks by default.
  • Removing the cluster from Kubesail is not documented.
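Since the default credentials stay in place, changing the password is worth doing right after the first boot. A minimal sketch, assuming the device is reachable under the mDNS name `pibox.local` (your hostname may differ):

```shell
# log into the PiBox with the default credentials (pi / kubesail)
ssh pi@pibox.local

# then set a new password for the pi user
passwd
```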

In order to mitigate the JBOD issue we have to first remove the cluster from Kubesail and then configure Raid 1 with mdadm and lvm.

Note: It is also possible to skip the cluster removal by first copying the data from /var/lib/rancher to another location and then copying it back.

Disclaimer: Use the commands below at your own risk! You will lose all your data!

Remove a PiBox cluster from Kubesail #

First, obtain the cluster removal URL by logging into Kubesail and locating the removal link in your cluster settings.

Then remove the cluster by running the following command, either as root on the PiBox or on a host with access to the cluster:

kubectl delete -f${YOUR_CLUSTER_NAME}.yaml

After that, the kubesail-agent has to be removed as well:

kubectl delete namespace kubesail-agent
kubectl delete clusterrolebinding kubesail-agent

Now the cluster could be re-added to Kubesail. However, since we also want to switch from a JBOD to Raid 1, we need to take additional steps first.
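Before moving on, it is worth checking that the agent really is gone. A quick sanity check, assuming `kubectl` still points at the PiBox cluster:

```shell
# the namespace should no longer exist after the deletion above
kubectl get namespace kubesail-agent
# expected: Error from server (NotFound): namespaces "kubesail-agent" not found
```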

Switching from a JBOD Configuration to Raid 1 #

Log in to your PiBox and obtain root. First, remove k3s and the lvm volumes from the device (these steps were posted by user Eru on Discord - I copied the script in full here):

# uninstall k3s
sudo service k3s stop
sudo /usr/local/bin/k3s-uninstall.sh
# unmount the SSD pool
sudo umount /var/lib/rancher
# wipe the SSD pool
sudo wipefs -af /dev/pibox-group/k3s
# remove the LVM setup
sudo lvremove /dev/pibox-group/k3s
sudo vgreduce --removemissing pibox-group
sudo vgremove pibox-group
sudo pvremove /dev/sda1
sudo pvremove /dev/sdb1
# wipe each SSD
sudo wipefs -a /dev/sda1
sudo wipefs -a /dev/sdb1
# delete the partitions
sudo sfdisk --delete /dev/sda 1
sudo sfdisk --delete /dev/sdb 1

Then we can create the volume group on top of a Raid 1 mirror (assuming you have two disks attached to the PiBox):

mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda /dev/sdb
pvcreate /dev/md0
vgcreate Volume /dev/md0
lvcreate -l +100%FREE Volume -n k3s
mkfs.ext4 /dev/Volume/k3s
echo '/dev/Volume/k3s /var/lib/rancher ext4 defaults,nofail,noatime,discard,errors=remount-ro 0 0' >> /etc/fstab
mount /var/lib/rancher
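The mdadm array is not necessarily reassembled on reboot unless its definition is persisted. A hedged sketch of the usual extra steps on a Debian-based system (paths assume Debian/Raspberry Pi OS; skip `update-initramfs` if your image does not use an initramfs), plus a quick health check:

```shell
# persist the array definition so /dev/md0 is assembled on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# verify that the mirror is healthy and mounted
cat /proc/mdstat            # md0 should show raid1 with both members up ([UU])
df -h /var/lib/rancher      # should be backed by /dev/mapper/Volume-k3s
```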

Following this, we can reinstall k3s:

# finally, re-install k3s:
curl --connect-timeout 10 --retry 5 --retry-delay 3 -L https://get.k3s.io | \
	INSTALL_K3S_EXEC="server --cluster-cidr= --no-deploy traefik --disable=traefik --kubelet-arg container-log-max-files=3 --kubelet-arg container-log-max-size=10Mi" \
	sh -

# And re-install the kubesail-agent (use the agent manifest URL from the Kubesail dashboard)
sudo kubectl create -f

At this point, you can re-add the PiBox to Kubesail. Make sure to obtain the link from the Kubesail website.

If you want to revert these changes, you have to first remove the logical volume

lvremove /dev/Volume/k3s

and then also remove the volume group:

vgremove Volume

If either of these commands fails, check vgdisplay or pvdisplay to list the appropriate lvm names.
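Removing the lvm layer alone does not undo the mdadm mirror underneath it. A sketch of the remaining cleanup, assuming the array was created as /dev/md0 on /dev/sda and /dev/sdb as above:

```shell
# unmount the filesystem first (and remove the /etc/fstab entry by hand)
umount /var/lib/rancher

# stop the Raid array and clear the mdadm metadata from both disks
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda /dev/sdb
```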

Now the cluster can be re-added to Kubesail.