The simple way to deploy a Kubernetes edge cluster
Edge computing is fast becoming one of the hottest use cases for Kubernetes. While K8s started in the cloud-native data center, it’s not stopping there, and now many IT teams are using it to deploy containerized applications running closer to the business, in the domain of IoT and edge.
We’ve previously written both about the exciting applications of Kubernetes at the edge, and the challenges involved in making it happen. Our own research with Kubernetes users showed that there’s both a huge amount of interest in edge, and a lot of apprehension.
It’s no surprise why. The edge paradigm raises big questions. How do you:
- Scale to deploy Kubernetes clusters across hundreds or thousands of edge locations?
- Ensure consistent configuration across all of those edge cluster environments?
- Maintain operational visibility into Kubernetes edge nodes spread across the country or the world?
And of course, how do you secure the edge, too?
One thing is clear: you can’t do all this at scale, manually from the command line, firing out kubectl commands and hand-editing YAML files. We’ve previously posted our vision for a new Kubernetes edge architecture, so in this blog we want to run you through some of the thinking that led us to develop Palette’s Edge Native capabilities. We’ll show you step by step how our architecture answers the questions above.
Low-touch or zero-touch provisioning with Palette Edge Native
One of the goals of our Edge Native solution was to solve the challenge of Day 0 deployments to environments where the individuals on site are most likely not Kubernetes experts. It could be a store manager, a well operator, a restaurant franchisee, or a factory supervisor.
We needed a way to quickly deploy single or multi-node clusters with little interaction from the individual responsible for physically plugging in the device: in other words, as close to ‘plug ‘n’ play’ as possible.
With Palette Edge Native, we introduce the concept of a ‘stylus operator’ responsible for provisioning the core components of a Kubernetes cluster (Operating System, Kubernetes Distribution, Container Network Interface).
The stylus operator is part of our lightweight bootstrap OS, and is installed via USB flash drive, preboot execution environment (PXE) booting, or by the OEM before the appliance is shipped, providing a true zero-touch experience.
The end user provides power and network to the device; with the stylus operator loaded, it can call out to Palette, awaiting registration and further instructions to turn it into a full Kubernetes node.
Using a QR code and a lightweight Function as a Service (FaaS) customized to your requirements, the edge device can register to the appropriate project within Palette. Automatic registration is also available, requiring no interaction from the end user. Upon successful registration, the device is ready to be consumed as part of a new or existing cluster.
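To make this concrete, the registration details are typically supplied to the bootstrap OS as a cloud-init-style user-data file baked into the installer image. The sketch below is illustrative only; field names such as `edgeHostToken`, `paletteEndpoint`, and `projectName` are assumptions based on the Palette documentation and should be verified against the docs for your release.

```yaml
#cloud-config
# Illustrative sketch: configuration consumed by the stylus operator at first boot.
# Field names are assumptions -- check the Palette Edge docs before use.
stylus:
  site:
    paletteEndpoint: api.spectrocloud.com   # Palette control plane endpoint
    edgeHostToken: "<registration-token>"   # token that maps the device to a project
    projectName: edge-sites                 # hypothetical project name
install:
  poweroff: true   # power off after flashing so the appliance can be shipped
```

With a file like this embedded in the installer, the only on-site steps left are power and network.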
You can read more about how this process works on our Docs pages.
Low risk remote upgrades to edge nodes
The bootstrap OS and the Edge Native architecture leverage the open-source project Kairos.
Kairos provides an immutable K8s and OS image that allows us to ensure there are no snowflakes when distributed across thousands of nodes. Additionally, it gives the ability to have an A/B partition with stability checks at boot.
This makes a huge difference during software upgrades. When an upgrade happens, it is applied atomically. The inactive B partition is flashed with a new immutable image, and checks are run against that partition to ensure corruption has not occurred; once they pass, the device reboots, switching over to the new partition.
Remember, at the edge there may not be "smart hands" on site with access to the device. This A/B atomic upgrade gives the operations team an additional level of comfort for OS version upgrades that could otherwise brick the device or cause other issues.
If any checks fail during the upgrade, the bootloader automatically switches back to the previous partition, giving remote access for a retry or for additional troubleshooting.
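For clusters running upstream Kairos directly, an A/B upgrade like this can be driven in-cluster via Rancher's system-upgrade-controller. The Plan below is a hedged sketch based on the Kairos documentation, not the Palette-managed flow; the image and version are placeholders.

```yaml
# Sketch of a Kairos OS upgrade Plan (system-upgrade-controller);
# image/version are placeholders -- Palette automates this for you.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: os-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1                     # upgrade one node at a time
  version: "<target-image-tag>"      # the new immutable OS image tag
  nodeSelector:
    matchExpressions:
      - { key: kubernetes.io/os, operator: In, values: ["linux"] }
  serviceAccountName: system-upgrade
  upgrade:
    image: quay.io/kairos/<flavor>   # placeholder Kairos image
    command: ["/usr/sbin/suc-upgrade"]  # flashes the inactive partition, then reboots
```

The controller rolls the Plan across nodes one at a time, so a failed check on one device never takes down the whole fleet.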
Cluster Profiles for edge, just like any other cluster
Once the edge device registers with Palette, building a cluster uses the same constructs we use for building Kubernetes clusters in public clouds and private data centers. We use Cluster Profiles to specify the OS, Kubernetes distribution, and the CNI.
This profile can be applied to one or many edge clusters, providing consistency across our environments regardless of location.
Cluster Profiles provide the declarative (desired) state of a Kubernetes cluster. They can cover everything from the OS to the applications and everything in between, and they communicate that desired state to an agent deployed in the cluster. The agent enforces the declared state, even if connectivity to the Palette control plane is lost.
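Conceptually, a Cluster Profile is a layered declaration, something like the sketch below. This is pseudocode rather than the actual Palette schema (profiles are normally built via the Palette UI, API, or Terraform provider); all layer and pack names are illustrative.

```yaml
# Conceptual sketch only -- not the real Palette Cluster Profile schema
name: retail-edge-profile
type: edge-native
layers:
  - layer: os       # e.g. an immutable Kairos-based provider image
    pack: ubuntu-22.04
  - layer: k8s      # e.g. a lightweight distribution for constrained hardware
    pack: k3s-1.27
  - layer: cni      # container network interface
    pack: calico-3.26
  - layer: addon    # optional: monitoring, apps, anything in between
    pack: prometheus-operator
```

Applying one profile to many clusters is what delivers the consistency described above: every site converges on the same declared stack.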
This decentralization of policy enforcement is absolutely critical in the world of edge deployments. It means:
- Much less traffic between low-power devices over potentially low-bandwidth networks.
- A more resilient edge solution, because enforcing the desired state happens within the cluster, with no dependency on a continuous connection to a central point of management.
Importantly, this architecture also allows us to scale to thousands of clusters without impacting performance.
Are you ready to make edge happen?
As the use cases for Kubernetes at the edge become more widespread, it’s vital to achieve simplified and uniform management. With Spectro Cloud Palette, you can create and manage your edge Kubernetes clusters using the same platform you use to manage your public and private cloud Kubernetes clusters, simplifying the operational experience of Kubernetes regardless of where you deploy.
You can find out more about Palette Edge here. Or if you’re looking to learn more about how to approach your edge deployments, why not check out our Practical Guide to Kubernetes at the Edge?