Published
September 12, 2022

Kubernetes edge: solving the challenges of edge computing?

Yitaek Hwang
Guest contributor

The global edge computing infrastructure market is estimated to be worth up to $800 billion by 2028. Right now, edge seems to be everywhere!

Riding on this momentum is the use of Kubernetes as the de facto edge infrastructure management layer. In fact, 35% of Kubernetes users reported that they use Kubernetes at the edge today, according to the 2022 State of Production Kubernetes report.

But at the same time, 72% of the respondents in the same survey reported that it is challenging to deploy and manage Kubernetes on edge devices. Concerns from compliance and security to performance and scaling give pause to organizations looking to adopt Kubernetes at the edge.

So is Kubernetes the right solution for navigating the challenging edge environment?

In this article, we’ll dive into the challenges of edge computing and how Kubernetes promises to solve them. We’ll also look into the growth behind Kubernetes at the edge, fueled by some innovations by the open source community.

Challenges of edge computing

Edge computing is a framework that runs applications close to their data sources, whether that is a single edge node like an IoT device or a rack of servers at a remote data center.

**Unlike cloud computing, which sends data to the cloud to be processed, edge computing performs the computation at the edge location. This improves response times and unlocks new use cases at a lower price point.**

However, managing edge applications comes with its unique challenges. These challenges include:

  • Managing the edge hardware and the setup required to deploy applications on bare metal (e.g., configuring the OS, storage drivers, etc.).
  • Lack of a reliable internet connection in remote locations such as farms and factories.
  • Limited resources on edge devices, which can affect performance.
  • The cost of maintaining edge computing infrastructure with field engineering personnel.
  • Lifecycle management for applications and devices at the edge (e.g., upgrading the firmware of IoT devices, applying security patches when connectivity is limited).
  • Ensuring the availability of devices and applications.
  • Scaling and managing lots of edge devices.

Some of these are reminiscent of the challenges that were present in on-prem servers before the rise of cloud computing. Edge computing adds limited internet connectivity and resource constraints on top.

So how does Kubernetes address these concerns at the edge?

The promise of Kubernetes

Kubernetes alone cannot solve all the unique challenges of edge computing related to limited connectivity and resources in bespoke locations. However, it does play a role in mitigating the software-related challenges of managing and scaling edge applications.

Unifying the infrastructure layer

Solutions such as AWS Outposts and Azure Arc are literally bringing the cloud solution to edge locations. AWS Outposts, for example, provides AWS managed compute, storage, and database services to factory floors or healthcare provider locations for faster processing. This allows the user to reuse the same AWS APIs and infrastructure to develop and deploy their applications.

Standardizing the application delivery

Kubernetes goes further to standardize application delivery. Utilizing the same Kubernetes APIs, developers can deploy containerized applications with familiar CI/CD tools. The application itself may need to take resource management and offline processing into consideration. However, the supply chain and the infrastructure to run those applications can be abstracted out for maximal portability.
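To make that concrete, here is a minimal sketch of what this standardization looks like in practice: the same declarative Deployment manifest that works in a cloud cluster can target an edge cluster, with resource requests sized for constrained hardware. The workload name, image, and resource figures below are hypothetical placeholders.

```yaml
# Minimal sketch: the same Deployment API works in the cloud and at the edge.
# The workload name, image, and resource figures are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-reader            # hypothetical edge workload
spec:
  replicas: 1                    # edge sites often run a single replica
  selector:
    matchLabels:
      app: sensor-reader
  template:
    metadata:
      labels:
        app: sensor-reader
    spec:
      containers:
        - name: sensor-reader
          image: registry.example.com/sensor-reader:1.0  # hypothetical image
          resources:
            requests:            # sized for a resource-constrained device
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 250m
              memory: 128Mi
```

Because the manifest is declarative, the same CI/CD pipeline can apply it to a cloud cluster or an edge cluster without changes; only the kubeconfig it targets differs.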

Of course, this is a simplistic view of how Kubernetes solves the challenges of edge computing.

A Kubernetes cluster designed for the cloud may not be the right fit for edge devices given the resource constraints. A lightweight version of the [Kubernetes control plane](/blog/the-subtle-difference-between-management-and-control-plane-in-kubernetes/) may be required to run edge-native applications, rather than cloud native applications designed for virtually infinite resources. Still, the level of standardization Kubernetes brings to the edge is preferable to managing individual devices separately.

Growing K8s usage at the edge

Given the value that Kubernetes brings in standardizing edge application delivery, it’s no surprise that its adoption is significant. The 2021 Kubernetes Edge Survey Report from CNCF notes that over 75% of respondents are using Kubernetes for their edge applications.

2021 Kubernetes Edge Survey Report - results

Another survey released by SlashData shows that nearly two-thirds of edge developers use Kubernetes.

SlashData survey results: share of edge developers using Kubernetes

These survey results make it clear that both the demand and the preference for Kubernetes at the edge are present. The next step toward continued growth is making Kubernetes as easy to use at the edge as it is in public clouds today.

When asked about the key features that an edge Kubernetes platform should have, survey respondents cited things like:

  • The ability to manage single-device edge nodes.
  • A highly scalable architecture that performs well up to thousands of clusters.
  • Full management of bare-metal nodes including the underlying OS.
  • Automation of the entire cluster lifecycle.
  • 24x7 technical support covering the whole Kubernetes stack.
  • A single tool to deploy and manage all Kubernetes clusters across multiple environments.

These responses highlight the need for a centralized solution to manage edge Kubernetes applications at scale.

Innovations in Kubernetes and edge computing

To address the challenges of running Kubernetes in edge environments, the open source community has been innovating constantly. First, there are lightweight Kubernetes distributions such as MicroK8s that run on resource-constrained edge nodes. For full lifecycle management of bare-metal nodes, there is Canonical’s MAAS (Metal as a Service), which abstracts away the interface to different types of hardware.

However, these solutions on their own present a new issue: the lifecycle of the Kubernetes nodes is managed separately from the OS of the underlying bare metal servers. At scale, this becomes not only a significant management challenge but also an operational risk.

This is where Spectro Cloud’s Cluster API MAAS Provider comes into play. Cluster API provides a declarative endpoint for managing the lifecycle of a Kubernetes cluster, and it’s used in popular solutions like GKE Anthos and VMware Tanzu. Coupled with Canonical MAAS, both the cluster lifecycle and the bare metal server lifecycle can be managed by a single control plane. This unlocks the ability to manage a large number of edge Kubernetes clusters at scale, paving the way for wider adoption.
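As a rough illustration of that declarative model, the sketch below shows how a Cluster API `Cluster` object delegates machine provisioning to a MAAS-backed infrastructure resource. The names are hypothetical, and the exact fields of the `MaasCluster` resource depend on the provider version, so treat this as a sketch rather than a working manifest.

```yaml
# Rough sketch of Cluster API's declarative model with the MAAS provider.
# Names are hypothetical; the MaasCluster schema varies by provider version.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-site-01             # hypothetical edge cluster name
spec:
  controlPlaneRef:               # manages the cluster's own control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: edge-site-01-control-plane
  infrastructureRef:             # delegates machine provisioning to MAAS
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: MaasCluster
    name: edge-site-01
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: MaasCluster
metadata:
  name: edge-site-01
spec:
  dnsDomain: maas.example.com    # hypothetical; assumes the provider exposes a DNS domain field
```

Applying objects like these to a management cluster lets a single control plane reconcile both the Kubernetes cluster and the bare metal servers underneath it, which is the consolidation described above.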

What will the future hold?

Running Kubernetes at the edge at scale is undoubtedly hard. But the opportunity in this space is ever-growing, with strong demand coming from both developers and ops teams. While adoption of edge K8s in production is low compared to Kubernetes’ dominance in public clouds, growth in this space is inevitable.

In this article, we reviewed some of the challenges of edge computing and looked at how Kubernetes aims to solve some of them. We also analyzed survey results that called for an edge Kubernetes management solution that can solve the scaling issue currently challenging organizations. Finally, we introduced some innovations by the open-source community to make this problem space more manageable.

As computing paradigms continue to evolve, so too will the tools that support that growth. If you’re looking for a managed solution for Kubernetes at the edge, check out Palette by Spectro Cloud to fast-track a production-ready edge solution.

Tags:
Edge Computing