Things you have probably heard:
Edge computing is the next big thing.
Kubernetes is ideally suited for edge.
That makes it sound as if edge computing platforms are all buttoned up. Hah!
There are lots of definitions of “edge computing,” but by and large when people talk about it they mean some kind of compute capacity that lives not in the cloud, but that potentially sends data back to the public cloud or data centers. These “edge locations” can be as small as IoT devices or as large as micro data centers on factory floors or in retail locations (or, over time, even compute in the SmartNICs that we are starting to see discussed in enterprise architectures). Placing compute capacity at these locations can be important for one of several reasons: the latency of sending data to a centralized location for processing and return, the cost of that transfer, or the sensitivity of where data is placed. With technologies such as 5G becoming increasingly common, the promise of and need for edge computing becomes even more important.
When considering what a platform for edge computing would look like, a couple key considerations come to mind:
Infrastructure abstraction: You want a platform that allows you to flex with the infrastructure heterogeneity that you are going to see over time.
Ability to leverage distributed compute elements: Given the nature of the compute elements available to you and the trend to scale out application design, an edge computing platform needs to be able to harness multiple compute sources for those scale-out workloads.
Extensibility: Nobody wants to invest in a single-purpose, frozen-in-time infrastructure platform. Being able to build on the platform over time is a key requirement, as both your needs and the underlying infrastructure are likely to evolve.
Community and ecosystem: Much as nobody wants to handcuff themselves to a frozen-in-time platform, most people also want to be able to take advantage of a rich integration community and general ecosystem. While every business has unique problems to solve, some problems are shared by others, and it just makes sense (it is faster and cheaper) to consume common solutions to common problems.
When you read these requirements you might say to yourself, “Hey, Kubernetes does fit this bill. It provides a common layer of abstraction across different environments, it’s fundamentally focused on distributed application and compute management, it’s designed to be extensible, and has one of the richest and fastest-growing open source ecosystems we have ever seen. Why is this person wasting my time on a 101? I thought she was going to say something interesting.”
OK, OK! Kubernetes is well-suited at a high level as a technology for edge computing. However, even the technology itself needs a little work to be tailored for edge environments. There are some great open source projects out there that you should keep an eye on if you are interested in this area:
K3s: This is a lightweight Kubernetes distribution that is fantastic for resource-constrained deployments, but still designed for production.
KubeEdge: This extends k8s to the edge, creating what you can think of as a remote worker node at the edge.
Maintainers of these projects admit that even these edge-focused extensions of Kubernetes are still pieces of a bigger story that has yet to be written. Kubernetes is not itself a full-featured solution, and there is still work to do in figuring out how best to leverage the technology. A full solution for edge computing needs more pieces:
Hardware discovery and registration: How can a platform quickly and easily discover all the resources available in the infrastructure, and what each of them is capable of?
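To make the idea concrete, here is a minimal sketch of what discovery and registration could look like: each edge node reports its capabilities to a registry the platform can query, similar in spirit to how Kubernetes exposes hardware features through node labels. All names here (NodeCapabilities, CapabilityRegistry) are illustrative, not a real API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an in-memory registry where edge nodes report
# their hardware capabilities on join, so the platform can reason
# about what resources exist across the infrastructure.

@dataclass
class NodeCapabilities:
    name: str
    cpus: int
    memory_gb: float
    features: set = field(default_factory=set)  # e.g. {"gpu", "smartnic"}

class CapabilityRegistry:
    def __init__(self):
        self._nodes = {}

    def register(self, caps: NodeCapabilities):
        # Nodes re-register periodically; the latest report wins.
        self._nodes[caps.name] = caps

    def nodes_with(self, feature: str):
        # Answer "which nodes can run GPU workloads?" style queries.
        return [n for n in self._nodes.values() if feature in n.features]

registry = CapabilityRegistry()
registry.register(NodeCapabilities("factory-edge-1", cpus=4, memory_gb=8, features={"gpu"}))
registry.register(NodeCapabilities("retail-edge-2", cpus=2, memory_gb=4))
print([n.name for n in registry.nodes_with("gpu")])  # ['factory-edge-1']
```

A real platform would of course discover capabilities automatically (PCI scans, device plugins) rather than take self-reported values, but the registry-and-query shape is the core of the requirement.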
Robust workload placement engines: Solutions for edge need placement engines that understand a broad and growing set of constraints on applications, from particular resource needs to placement requirements, to latency and cost preferences. A complete edge computing solution is going to take the burden off a developer in figuring out where things need to run across multiple clouds and edges. Workload and governance requirements should dictate where a job lands.
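The placement idea above can be sketched as a two-step engine: filter candidate locations on hard constraints (resource needs), then rank the survivors by soft preferences (latency and cost). This is a toy illustration under assumed data shapes, not how any particular scheduler works; the weights encode policy rather than anything universal.

```python
# Hypothetical sketch of a constraint-aware placement engine.

def place(workload, locations):
    # Hard constraints: the location must satisfy the workload's resource needs.
    feasible = [
        loc for loc in locations
        if loc["free_cpus"] >= workload["cpus"]
        and loc["free_memory_gb"] >= workload["memory_gb"]
    ]
    if not feasible:
        return None
    # Soft preferences: lower latency and lower cost score better;
    # the weights express the workload's governance/cost policy.
    def score(loc):
        return (workload["latency_weight"] * loc["latency_ms"]
                + workload["cost_weight"] * loc["cost_per_hour"])
    return min(feasible, key=score)

workload = {"cpus": 2, "memory_gb": 4, "latency_weight": 1.0, "cost_weight": 10.0}
locations = [
    {"name": "cloud-us-east", "free_cpus": 64, "free_memory_gb": 256,
     "latency_ms": 80, "cost_per_hour": 0.10},
    {"name": "edge-store-17", "free_cpus": 4, "free_memory_gb": 8,
     "latency_ms": 5, "cost_per_hour": 0.50},
]
print(place(workload, locations)["name"])  # edge-store-17
```

Note that the decision comes entirely from the workload's declared requirements and preferences, which is exactly the point: the developer states constraints, and the engine decides where the job lands.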
Manageability across environments: An edge compute platform lashes together the resources at a general location, but it shouldn’t be an island itself. As mentioned previously, edge compute generally feeds information back to a public cloud or data center. A platform ideally can manage across these different environments, making it easier to enforce governance and compliance, while also speeding innovation by relying on a common interface that minimizes management overhead.
Service meshes and data management across environments: Edge computing is in many ways more difficult because of data and I/O than because of compute itself. A means to manage and optimize these elements without a tremendous burden on developers is going to be an enabler to edge platform adoption.
To conclude, while I may have click-baited you with the title, Kubernetes is not really ideal for edge … yet. We still have work to do.