Published March 16, 2022

A New Kubernetes Edge Architecture

Tenry Fu
CEO & Co-Founder

Edge is becoming the next phase of multicloud.

Containers and the ever-growing Kubernetes ecosystem have now become virtual industry standards. And last year was the first in which containerization officially became developers’ primary choice, according to the State of DevOps 2021 report.

Kubernetes (K8s) is among the key driving forces for enterprises transforming into digital-first businesses. K8s continues to gain momentum and is thriving across many industries. Kubernetes’ portability makes adopting the platform a no-brainer for enterprises shifting to multicloud environments.

Edge locations represent the next frontier for accelerating innovation, and containerization is the logical choice for a variety of verticals and organizations, particularly as more data is generated closer to users. It is the logical next step for the multicloud landscape for several reasons: the cost savings of no longer sending data to clouds or data centers for processing; the ease of containerizing core network infrastructure operations; and the user-experience and performance gains from bringing applications closer to consumers.

Common use cases already exist in sectors such as smart retail and restaurants, health care, oil and gas, manufacturing and telecommunications. Underscoring the trend, the Linux Foundation’s 2021 State of the Edge report projected a 70% year-over-year increase in the edge computing market from 2021 to 2028.

With Kubernetes now the de facto standard in container infrastructure, the need for a next generation of solutions, and a supporting ecosystem, to address container management at scale at the edge has become obvious.

Conventional Edge Economics Don’t Work

Let’s be clear: Edge computing is not easy. Edge locations are, by definition, challenging environments compared with conventional data centers and public clouds. A typical edge scenario means hundreds, or even thousands, of locations; retail chains and hospitals routinely reach those totals. Most of the time, such organizations lack on-site IT personnel. Worse, those locations may have limited, intermittent, unreliable or no internet connectivity. For cost reasons, each location is often configured with a single commodity server or, where high availability is critical, at most three servers.

Without scalable centralized management for the edge, organizations must periodically send field engineers out to keep what could be thousands of locations running. That expense defeats the whole purpose of the paradigm shift. And locations that run on a single server risk downtime every time an upgrade arrives, which in most edge cases translates directly into financial loss.

Pushing Architecture Evolution

Yet cost might not be the most powerful impediment. Instead, it could be our approach to edge architecture. In the data center, an IaaS interface or controller, such as VMware vCenter, OpenStack or Canonical MAAS, or even a hardware stack such as Amazon Web Services’ Outposts or Azure Stack HCI, enables API-driven, software-defined orchestration of endpoints by design. At the edge, none of this applies.

Equally important, a key limitation of conventional edge architectures, where a central management plane orchestrates and manages all clusters, is the inability to scale beyond a few hundred locations: the more locations added, the more the management plane’s performance deteriorates. That creates an architectural bottleneck.

The Edge K8s Challenge Is Not about Distribution, but Managing at Scale

Any edge computing solution will confront these barriers, Kubernetes or not. With K8s, a cluster can actually be both more resilient and lightweight; edge-suitable Kubernetes distributions already exist, including Canonical’s MicroK8s and SUSE’s K3s.

But K8s at the edge introduces yet another problem: the ability to consistently and reliably deploy and update the “full stack,” which goes beyond just the K8s infrastructure to include the host operating system, storage and networking interfaces, and the applications and auxiliary K8s services and integrations (monitoring, logging, service mesh and so on). Finally, rolling updates are impossible on a typical “cost-efficient” single server due to the lack of additional hardware.
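
To make the idea of a layered full stack concrete, here is a minimal sketch in Go of how such a cluster definition might be modeled. The type names, layer names and versions are hypothetical illustrations, not Palette’s actual API:

```go
package main

import "fmt"

// Layer describes one layer of the edge "full stack". All names and
// versions here are hypothetical, for illustration only.
type Layer struct {
	Name    string // e.g. "os", "k8s", "cni", "csi", "app"
	Pack    string // which implementation fills this layer
	Version string // desired version of that implementation
}

// FullStackProfile is a declarative definition of everything running on
// an edge cluster, from the host OS up to applications and add-ons.
type FullStackProfile struct {
	Name   string
	Layers []Layer
}

func main() {
	profile := FullStackProfile{
		Name: "retail-store-edge",
		Layers: []Layer{
			{Name: "os", Pack: "ubuntu", Version: "20.04"},
			{Name: "k8s", Pack: "k3s", Version: "1.21.5"},
			{Name: "cni", Pack: "calico", Version: "3.19"},
			{Name: "csi", Pack: "rook-ceph", Version: "1.6"},
			{Name: "monitoring", Pack: "prometheus-operator", Version: "0.50"},
			{Name: "app", Pack: "point-of-sale", Version: "2.3.1"},
		},
	}
	for _, l := range profile.Layers {
		fmt.Printf("%-12s -> %s %s\n", l.Name, l.Pack, l.Version)
	}
}
```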

Innovation from the Open Source Community

Here at Spectro Cloud, we have been focused on supporting our customers’ expansion into multiple locations, including bare-metal and edge environments, as part of their containerization and K8s journeys. We have always been advocates of declarative management fueled by the open source community, and we believe the Cloud Native Computing Foundation’s Cluster API is the only way to manage modern K8s at scale across multiple clusters and locations. Our focus starts with providing declarative “full-stack” management in public clouds and on-premises data centers, unifying management of infrastructure and applications and minimizing the risk of configuration drift. Last summer, we extended Cluster API to support bare-metal data center environments with our open source Cluster API provider for Canonical MAAS. For edge, we now extend Cluster API further through integration with Docker Engine to fully support containerized multinode K8s on single-server or multiserver configurations.

An ‘Autonomous’ Edge Architecture Blueprint

Our architecture relies on bootstrapping the host OS and the Palette management agent, keeping the approach lightweight with no IaaS controller required: just power the server up and connect it to the internet. Once the edge server powers up, it automatically pairs with the Palette central management platform, wherever that is running, using a unique machine ID, and reports back its hardware information for management. Once the edge server is available, full-stack cluster provisioning can be initiated. Any update to the cluster definition (a feature we call Cluster Profiles) is treated as a desired-state change and triggers an update on any layer that differs from the previous desired state.
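
As a rough illustration of that desired-state behavior, the sketch below diffs two revisions of a cluster definition and flags only the layers that changed. The layer names and versions are hypothetical, not our actual implementation:

```go
package main

import "fmt"

// layerState maps a stack layer (e.g. "os", "k8s", "cni") to the version
// desired for it in one revision of a cluster definition.
type layerState map[string]string

// diffLayers returns the layers whose desired version changed between two
// revisions; only those layers need to be updated on the edge cluster.
// (Handling of removed layers is omitted to keep the sketch short.)
func diffLayers(previous, next layerState) []string {
	var changed []string
	for layer, version := range next {
		if previous[layer] != version {
			changed = append(changed, layer)
		}
	}
	return changed
}

func main() {
	previous := layerState{"os": "20.04", "k8s": "1.21.5", "cni": "3.19"}
	next := layerState{"os": "20.04", "k8s": "1.22.3", "cni": "3.19"}

	// Editing the definition is a desired-state change: only the k8s
	// layer differs, so only that layer triggers an update.
	for _, layer := range diffLayers(previous, next) {
		fmt.Println("update layer:", layer)
	}
}
```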

The control plane remains “at-cluster,” following the design principle of separating the management plane from the control plane. Each individual edge server is packaged with enough intelligence to enforce policies independently, without adding pressure on the management plane, enabling virtually infinite scale across thousands of edge locations.
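
A minimal sketch of that “at-cluster” principle, again with invented names rather than the real agent’s code: the agent reconciles against a locally cached desired state, so the management plane never sits in the hot path:

```go
package main

import (
	"fmt"
	"time"
)

// desiredState is the slice of the cluster definition the agent last
// received from the management plane. All names here are hypothetical.
type desiredState struct {
	K8sVersion string
}

// currentK8sVersion is a stub standing in for inspecting the real node.
func currentK8sVersion() string { return "1.21.5" }

// enforce reconciles the node against the locally cached desired state.
// It never calls out to the management plane, which is what lets the
// architecture scale across thousands of locations.
func enforce(cached desiredState) {
	if currentK8sVersion() != cached.K8sVersion {
		fmt.Println("drift detected: converging k8s to", cached.K8sVersion)
		// ...apply the change locally; report status upstream later...
	}
}

func main() {
	cached := desiredState{K8sVersion: "1.22.3"}
	for i := 0; i < 3; i++ { // a real agent would loop indefinitely
		enforce(cached)
		time.Sleep(10 * time.Millisecond)
	}
}
```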

[Figure: Edge architecture blueprint]

The local Palette agent can also act as a reverse proxy to provide zero-trust security and remote troubleshooting. To allow customers to perform immutable host operating system upgrades on single-server edge locations, the operating system is “A-B” partitioned: IT teams can apply updates and roll back easily if something goes wrong. Because the K8s cluster is containerized with multiple control-plane and worker nodes, rolling upgrades can be applied with zero downtime. And beyond containers alone, we support virtual machines, both managed through K8s.
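
To illustrate the A-B upgrade flow, here is a simplified sketch in Go. The types and logic model the general pattern under stated assumptions; they are not Spectro Cloud’s actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// Slot is one of the two immutable OS partitions on an A-B partitioned host.
type Slot string

const (
	SlotA Slot = "A"
	SlotB Slot = "B"
)

// host models a single-server edge node; a hypothetical simplification.
type host struct {
	active  Slot
	version map[Slot]string
}

func (h *host) inactive() Slot {
	if h.active == SlotA {
		return SlotB
	}
	return SlotA
}

// upgrade stages the new OS image on the inactive slot, "reboots" into it,
// and falls back to the untouched previous slot if the health check fails.
func (h *host) upgrade(newVersion string, healthy func() bool) error {
	target := h.inactive()
	h.version[target] = newVersion // write image to the inactive partition
	previous := h.active
	h.active = target // flip the boot slot (conceptually: reboot)

	if !healthy() {
		h.active = previous // rollback: boot the old partition again
		return errors.New("upgrade failed health check; rolled back")
	}
	return nil
}

func main() {
	h := &host{active: SlotA, version: map[Slot]string{SlotA: "1.0"}}
	if err := h.upgrade("2.0", func() bool { return true }); err != nil {
		fmt.Println(err)
	}
	fmt.Printf("active slot %s running OS %s\n", h.active, h.version[h.active])
}
```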

Beyond the specific requirements of edge locations, Palette retains complete optionality in the combinations of technologies our customers deploy on top of their K8s stacks. This empowers their development teams and supports specific use cases without sacrificing control across day 0, day 1 and, more importantly, day 2 operations.

Working closely with customers as design partners let us understand what edge K8s means to them. This fresh approach to a next-gen edge K8s architecture reflects the need for our entire industry to evolve beyond buzzwords and trends, drawing on the power of the open source ecosystem while focusing on customer outcomes.

Originally published at thenewstack.io

Tags:
Edge Computing
Enterprise Scale
Thought Leadership