If you’re of a certain age, you’ll be very familiar with Aerosmith’s 1993 hit “Livin’ on the Edge.” Its refrain is simple: Livin’ on the edge, you can’t help yourself at all.
Well, nowadays, when we talk about living on the edge, we’re talking about Kubernetes — and we’ve never had more technologies available to help ourselves.
The edge is the hot topic around Kubernetes right now, as research shows. Companies big and small are pushing Kubernetes adoption to the last mile (yes, maybe even into the locomotive featured in the Aerosmith video pictured above) and replacing typical virtual machine (VM) infrastructures with bare metal.
A host of distributions have sprung up to meet the needs of managing containerized bare metal Kubernetes at the edge, including projects like Flatcar Container Linux, Talos and k3OS, all designed to provide a lightweight, “immutable” OS for the edge with “atomic” upgrades. But why does this matter?
Immutability Is the Next Step Beyond Configuration Management
An immutable OS is a carefully engineered system that boots in a restricted mode in which parts of the filesystem are not writable.
For instance, after installation it’s not possible to install additional packages in the system, and any configuration change is discarded after reboot. This reduces the attack surface, which is vitally important at the edge, where devices may be physically accessible to tampering. At the same time, it guarantees that every node runs the same version of the software stack, reducing the risk of infrastructure drift.
This approach is all about scale. In the old days before cloud native, each of our servers was a snowflake or “pet,” kept up and running with patches, updates and configuration changes layered on top of the initial OS installation, making each server a knot of unique dependencies. Tools like Ansible, Salt and Puppet were often used to handle every tiny detail of a system and to reduce infrastructure drift as much as possible.
The “cattle” approach, by contrast, treats nodes as interchangeable systems. If there is an issue with a node, instead of debugging it in production or taking corrective actions “live,” we simply remove the faulty node and swap in a new, identical one. This is where immutable OSes really pay off: when you know the OS hasn’t been modified since it was built, you know the replacement is an exact clone, and workloads will behave predictably. Immutability makes configuration management almost obsolete, beyond a central catalog of images.
Upgrading a system in an immutable infrastructure means building a new image with the new version of the OS and pushing it to your nodes with the upgrade strategy of your choice. Blue/green deployments become easy because upgrades are “atomic,” and there are no individual packages or per-node drift to handle and maintain.
An immutable OS doesn’t necessarily come with a management plane. At edge scale in a cloud native world, we want to manage our nodes inside Kubernetes, and we should be free to treat nodes the way we treat apps: as simple containers that we publish and roll upgrades to with our defined strategy.
That would be a powerful combination, allowing us to leverage container ecosystem tooling to address real-world problems, such as automated security scanning and pipelines that selectively upgrade nodes, while the nodes themselves are fully managed with Kubernetes.
Immutability: One Distribution at a Time
This is where Kairos comes in. Kairos is a new open source project designed to tackle the need for immutability and atomic upgrades. In that sense it’s similar to projects like Talos, Flatcar and k3OS.
But there are very important differences: Kairos is distribution-agnostic, Open Container Initiative (OCI)-based and cloud-init first. Let’s take a look at what this means.
Distribution agnostic: Unlike, say, k3OS, Kairos is not a Linux distribution. It’s a meta-Linux distribution, which means it lets you spin up an immutable Kubernetes cluster with the Linux distro of your choice. Kairos is distribution-agnostic by design and supports converting existing distributions, starting from their container images, into “Kairos-based” distributions. Those automatically inherit features such as A/B atomic upgrades, immutability, live layering and the rest of the Kairos feature set. Importantly, the kernel and initramfs are static and shipped with the image, which means truly atomic upgrades of the full system stack.
At the time of writing this article, Kairos is at 1.3 and supports openSUSE-, Alpine- and Ubuntu-based distributions, which can be directly downloaded from the released assets and will be used in the examples below.
OCI-based: Kairos is built from container images. The OS itself is a single container image that runs natively on the host without any container engine; it is overlaid onto the booting system with OverlayFS. Upgrades are handled atomically with an A/B scheme and automatic fallback.
Because Kairos is just an OCI image, you can find the container images in its Quay repositories and use them to build ISOs for USB sticks or other media. ISOs are also published as part of the releases, so we don’t have to build them ourselves and can simply pick the distribution we like among the published assets.
Cloud-init first: Kairos is configured exclusively via cloud-init. As a single source of truth, one cloud-init file can configure one node or every node in the cluster. This simplifies maintenance and configuration at scale, reducing the need for complex configuration infrastructure to manage nodes.
Management is optionally handed over to specific Kubernetes components that manage the life cycle of the nodes after bootstrap.
Hands-On with Kairos
Let’s have a closer look at Kairos, and use it to deploy a K3s cluster with MetalLB. In the example below I’m going to use a bare metal host to provision a Kairos node in my local network with K3s and deploy Kubedoom, but similarly you can provision nodes with a VM by following the official quickstart with different charts, manifests and setup.
Step 1: Download a Release and Flash It to a USB Stick
Since Kairos comes in different flavors, we can pick both the base distribution and the K3s version. Kairos publishes the artifacts that include K3s in a separate repository; that’s because additional extensions can also be installed at runtime, but since we want K3s here, we just use the images that ship it. We need an .iso file: for a bare metal boot we flash it to a USB stick, while for a VM we would simply load it in the hypervisor settings.
In this article we will use the openSUSE image with the latest available K3s version. Kairos has recently added support for Ubuntu and Fedora, and other distributions are available as well, but the openSUSE flavor is well tested and has been available since the early releases.
Now, from another machine, let’s download the image and flash it to the USB drive (in my case it was /dev/sda):
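A sketch of those two steps, assuming the openSUSE flavor (the asset name and version here are hypothetical; pick the actual ones from the Kairos release assets that bundle K3s):

```shell
# Hypothetical release asset; check the Kairos releases for the
# flavor/version you want.
ISO="kairos-opensuse-v1.3.0-k3sv1.25.6+k3s1.iso"
wget "https://github.com/kairos-io/provider-kairos/releases/download/v1.3.0/${ISO}"

# WARNING: dd overwrites the target device entirely. Replace /dev/sda
# with your USB stick (find it with `lsblk`).
sudo dd if="${ISO}" of=/dev/sda bs=4M status=progress conv=fsync
```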
Step 2: Install and Boot the Node
Now we can use the USB stick as a Kairos installer. If it were a VM, we could have just loaded the ISO.
A Kairos node needs a configuration, and in this article we are going to install MetalLB and Kubedoom, so it will look similar to https://gist.github.com/mudler/bde499f156513bbfe2030587295adfca:
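A minimal sketch of such a cloud config, assuming the Kairos 1.x configuration schema, K3s’s manifest auto-deploy directory and MetalLB’s v1beta1 CRDs (the Kubedoom manifests are omitted for brevity; see the linked gist for the full version):

```yaml
#cloud-config
users:
  - name: kairos
    passwd: kairos
    ssh_authorized_keys:
      - github:mudler   # replace with your GitHub username

k3s:
  enabled: true
  args:
    - --disable=traefik,servicelb  # MetalLB replaces the default LB

write_files:
  # K3s applies any manifest dropped into this directory at startup.
  - path: /var/lib/rancher/k3s/server/manifests/metallb.yaml
    content: |
      apiVersion: helm.cattle.io/v1
      kind: HelmChart
      metadata:
        name: metallb
        namespace: kube-system
      spec:
        repo: https://metallb.github.io/metallb
        chart: metallb
        targetNamespace: metallb-system
        createNamespace: true
      ---
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: default
        namespace: metallb-system
      spec:
        addresses:
          - 192.168.1.10-192.168.1.20  # adjust to your network
      ---
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: default
        namespace: metallb-system
```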
- We disable Traefik and the default load balancer that comes with K3s to use MetalLB instead. An IPAddressPool is configured to use the IPs, and an L2Advertisement is associated with it.
- Be sure to replace 192.168.1.10-192.168.1.20 in the IPAddressPool with the available IP range in your network. The service will automatically take one of the IPs in the range, and we will use that to connect to Kubedoom afterward.
- Also replace the GitHub username (github:mudler) with yours to log in automatically via SSH with your keys (this works only if you have uploaded your SSH public keys to GitHub). If you don’t have any, the config also sets kairos/kairos as username and password, so you can log in with a password prompt instead.
- If running in a VM, the network interface needs to be bridged to your local network in order to correctly connect to Kubedoom.
- Check out the documentation for more information on the available fields in the configuration file if you need to add any other setting or additional user logic.
Let’s boot the ISO now and select manual mode from the console. As we are on the same LAN, we will SSH to the node and run the installation manually. Alternatively, by default Kairos boots up and displays a QR code that the CLI can use to drive the installation without SSHing to the node; check out the official quickstart if you want to use the QR code instead.
Once the node is up, we can SSH to it as the kairos user with the kairos password. Let’s become root and run the installation:
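A sketch of that step, assuming the cloud config was saved on the node as config.yaml (the file name is hypothetical, and you should double-check the agent flags against the docs for your release; “auto” lets the agent pick the installation disk):

```shell
sudo -i

# Hypothetical file name; point this at wherever you saved your cloud config.
CONFIG=config.yaml

# Install Kairos to disk using the cloud config; the node reboots when done.
kairos-agent manual-install --device auto "$CONFIG"
```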
Step 3: Log in and Check if You Can Run Your Workload
After the installation ends, the node will reboot. The first boot might take some time to spin up the cluster, but eventually we should be able to log in, via SSH with kairos/kairos or via the console:
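For instance, from the workstation (the node address below is hypothetical; use whatever IP your node obtained):

```shell
# Log in with the kairos/kairos credentials (or your SSH key, if you
# configured github:<your-user> in the cloud config).
ssh kairos@192.168.1.5   # replace with your node's address
```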
Note that it might take a few moments after the node is fully booted for the user configuration to take effect.
We can now check that K3s is running and the node is ready to serve a workload:
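On the node, this is a couple of kubectl commands (K3s keeps its kubeconfig at /etc/rancher/k3s/k3s.yaml, so run these as root or export KUBECONFIG accordingly):

```shell
# Wait until the node reports Ready, then show it; the grep pattern
# matches the STATUS column of `kubectl get nodes`.
until kubectl get nodes 2>/dev/null | grep -q ' Ready'; do sleep 5; done
kubectl get nodes -o wide
```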
And we can also check that the service is running and that it was assigned an IP with MetalLB:
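A quick way to spot the MetalLB-assigned address is to look for the LoadBalancer service; the exact Kubedoom namespace and service names depend on the manifests you deployed, so listing across all namespaces is the safe bet:

```shell
# Services of type LoadBalancer show the MetalLB-assigned address in
# the EXTERNAL-IP column.
kubectl get svc -A | grep LoadBalancer
```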
In my case, the IP taken from the range was 192.168.1.10, so I can now access the noVNC web UI at http://192.168.1.10:6080/vnc.html. You will be presented with the noVNC dashboard, which asks for a password to connect to the Kubedoom instance (the default is “idbehold”), and voilà, our Doom game is available right in the browser:
Cheat codes for Kubedoom and instructions are available here. 🙂
Conclusion: What We Need for Life at the Edge
From the easy abstraction of the cloud, we are transitioning to the tangible, hard reality of bare metal at the edge. No one can pretend to know what every edge scenario demands, so an ideal OS needs to be flexible enough to accommodate any customization of the stack and to make changes and upgrades easy, with the same confidence we have when deploying applications to Kubernetes. This is crucial, as it helps us scale out with the same framework across the various use cases that arise while provisioning nodes at the edge.
When choosing an OS for a cluster, we have to consider how the nodes will upgrade, what fallback systems are in place and whether we can handle the automation in a familiar fashion. In the cloud native era, that means managing Kubernetes in Kubernetes!
This is why immutable OSes are becoming so popular: they are a perfect fit for running Kubernetes workloads, as they are static OSes that run and upgrade (usually) atomically.
In this article we’ve looked at what makes immutability important for adopters, especially the compelling properties that immutable infrastructure brings to the edge. Kairos’s cloud-centric, container-based approach brings version control of the OS to the edge, with atomic upgrades that can be rolled out to cluster nodes just like application upgrades, using the Linux distribution of your choice.
As Aerosmith sang,
Tell me what you think about your situation
Complication, aggravation
Is getting to you, yeah
With a tool like Kairos, our goal is to make livin’ on the edge less complicated, less aggravating, with the power of immutability. You can find out more about Kairos and get started at kairos.io. We welcome any feedback and contributions 🤗!