August 8, 2022

Kubernetes Unlocks Innovation at the Edge at Scale

Tenry Fu
CEO & Co-Founder

This essay is adapted from the LF Edge 2022 State of the Edge report — the latest in the authoritative series of reports tracking developments in all things edge computing. Download the full report here.


Edge use cases are poised to transform business

I’m writing this in the middle of our industry’s conference season and a string of customer meetings with big retailers, banks, telcos and healthcare companies. With every conversation, I’m left with the same takeaway: in 2022, the edge is where Kubernetes is really making a difference for customers, and where business model innovation is burning white hot.

We’ve talked to retailers about using edge devices in thousands of stores and restaurants to gather and analyze customer purchasing habits to optimize stock, as well as to run point of sale systems, CCTV analytics, digital signage, environment monitoring, equipment health and more.

We’ve talked to healthcare device companies about bringing powerful analytical tools closer to the edge, to the clinicians as they diagnose and treat patients — even enabling an app store-like experience for health providers to access new clinical features, opening up a new application innovation ecosystem and business model for the device maker.

And we’ve even worked with a startup that’s putting lightweight Kubernetes worker nodes directly on drones to autonomously pick fruit, with the control plane at a nearby ground station. They have plans to scale to over a thousand clusters soon. Kubernetes doesn’t get any more edge than that.

These use cases are new, they’re fascinating, and they have huge potential to improve the customer experience and ultimately drive bottom-line growth for the business. This is exactly the stuff that IT teams want to be involved in and help drive!

Edge environments are the perfect storm for IT teams

But making it happen means deploying code to and managing potentially hundreds of thousands of edge devices. Indeed, even with the portability of containers and the orchestration features of Kubernetes, edge computing is really a perfect storm for IT and DevOps teams. They somehow have to deal with diverse, resource-constrained devices, distributed at mind-boggling scale in non-traditional environments, without on-site IT staff, all while meeting a long list of requirements for performance, security, resilience and compliance.

Clearing these infrastructural and operational roadblocks is not easy, but I’ve watched customers’ eyes light up when we show them a clever architectural approach to sidestepping a seemingly intractable obstacle.

For example, take the challenge of pushing software updates to running edge devices in unsupervised locations: how can you perform rolling updates without risking application availability, even in single-server edge configurations? This is one of the edge problems we are solving, in this instance with an A/B OS partition scheme and a multi-node Kubernetes deployment on the edge device.
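To make the multi-node half of that idea concrete, here is a minimal sketch (names, labels and the image reference are illustrative, not our actual configuration) of a Deployment spread across two local worker nodes on a single edge box, with a rolling-update budget that never takes the last healthy replica offline:

```yaml
# Hypothetical example: zero-downtime rolling updates on a single edge
# server that hosts two containerized Kubernetes worker nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pos-app                # illustrative application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pos-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # start the new pod before removing the old one
  template:
    metadata:
      labels:
        app: pos-app
    spec:
      # Spread replicas across the two local nodes, so updating (or
      # rebooting into the other OS partition on) one node never takes
      # down both copies of the application at once.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: pos-app
      containers:
        - name: pos-app
          image: registry.example.com/pos-app:2.1  # illustrative image
```

With `maxUnavailable: 0` and the replicas pinned to different nodes, Kubernetes drains and updates one node at a time while the other continues serving traffic.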

Another challenge we often face is scaling easily to thousands of edge K8s locations. Conventional edge architectures that have no separation between the management plane and control plane — or even worse, those that depend on a management server — cannot scale beyond a few hundred K8s clusters. The way to address this is to let the local edge K8s cluster enforce policies itself, so the management plane does not become a bottleneck as new edge locations are added into the mix.

I’m so bullish on edge in 2022 because I see the excitement in the eyes of our customers when we show them a path that’s free of these kinds of roadblocks.

Clearing the path is a community effort

And the great news is, there are so, so many open source projects and commercial providers working every day to make edge computing easier — not just on PowerPoint slides but in the real world, through integrations and collaborative, community effort. Take the CNCF’s Cluster API, for example. We have always been advocates of declarative management fueled by the open source community, and today Cluster API is the only way for modern K8s management to scale across multiple clusters and locations.

Last summer, we extended Cluster API to support bare metal data center environments with our open-sourced Cluster API provider for Canonical MAAS. For edge, we now further extend Cluster API through integration with Docker Engine to fully support containerized multi-node K8s on single-server or multi-server configurations.
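The shape of a Cluster API cluster backed by a Docker infrastructure provider looks roughly like this — a minimal sketch only, with an illustrative cluster name; a complete manifest would also reference a control plane and bootstrap configuration:

```yaml
# Hypothetical sketch: a Cluster API Cluster paired with a Docker-based
# infrastructure cluster, so the K8s nodes run as containers on one host.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-store-0042        # illustrative cluster name
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: edge-store-0042
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: edge-store-0042
```

The key point is the declarative pattern: the same Cluster object works whether the infrastructure reference points at a cloud provider, a bare metal MAAS provider or a Docker host, which is what lets one management model span data center and edge.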

The road to the edge may still be winding, but thanks to herculean community efforts like Cluster API, it’s getting more and more passable. That’s important progress because, fundamentally, IT teams (whether ops, platform, DevOps or somewhere in between) don’t want to spend their time on infrastructure care and feeding, nor do they want to say no to their business partners’ next big idea. IT teams want to be innovators — and the edge offers that opportunity in spades.
