Published
August 11, 2020

Containers-as-a-Service (CaaS) and Kubernetes

Tenry Fu
CEO & Co-Founder

Deploy and manage container applications without boundaries

This is a follow-up to my guest post on The New Stack.

There is no doubt that Kubernetes has become the de facto container cluster management technology. With more and more enterprises adopting container technologies and more container-based applications moving into production, everyone (in enterprise IT, anyway) is screaming: I need Kubernetes!

Unfortunately, Kubernetes itself is not easy to make production-ready. Deploying and upgrading Kubernetes is no simple task, especially in a production environment that needs multi-master setups and rolling upgrades. On top of that, there are many integration requirements beyond Kubernetes itself. For example: How do you hook up an external load balancer? How do you integrate with a storage provider for persistent volumes? How do you wire up Active Directory (AD) authentication with RBAC on namespaces? How do you take care of security hardening of the base OS and the Kubernetes configuration? How do you handle host security patching? A simple answer to all these concerns is to turn to managed Kubernetes solutions, in short, Kubernetes-as-a-Service (KaaS).
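To make one of these integration points concrete, here is a minimal sketch using the official Kubernetes Python client: granting an AD group edit rights scoped to a single namespace via RBAC. It assumes the API server is already configured to resolve AD groups (e.g., via OIDC); the group DN and namespace names are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig
rbac = client.RbacAuthorizationV1Api()

# Bind a (hypothetical) AD group to the built-in "edit" ClusterRole,
# scoped to the "team-a" namespace only.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "team-a-edit", "namespace": "team-a"},
    "subjects": [{
        "kind": "Group",
        "name": "CN=team-a,OU=Groups,DC=corp,DC=example,DC=com",  # hypothetical AD group
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "ClusterRole",
        "name": "edit",  # built-in role: read/write most namespaced objects
        "apiGroup": "rbac.authorization.k8s.io",
    },
}
rbac.create_namespaced_role_binding(namespace="team-a", body=role_binding)
```

This is only one of the integration points listed above; storage classes, load balancer controllers, and OS hardening each bring their own configuration surface.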

KaaS does help solve some of the Kubernetes lifecycle management challenges. Deployment and upgrades become just a few clicks, and although the provided Kubernetes may be very opinionated with limited integration choices, it meets 80%+ of requirements. If you are already in a public cloud, using the cloud provider's KaaS can make tech ops' lives easier. However, even with KaaS, you still need a Kubernetes admin. There are many cloud-specific nuances to deal with, such as VPCs, subnets, security groups, the cloud's native load balancer, and Kubernetes cluster node management if you want to increase or decrease the cluster size, either manually or through an auto-scale policy. Kubernetes users then access the cluster via kubectl or the API server to handle application lifecycle management.
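As a sketch of the node management that stays on your plate even with KaaS, here is how resizing an EKS managed node group might look with boto3. The cluster and node group names are hypothetical; credentials and region come from the usual AWS configuration.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Manually scale a managed node group from its current size to 6 nodes.
# An auto-scale policy would adjust the same scalingConfig for you.
eks.update_nodegroup_config(
    clusterName="prod-cluster",        # hypothetical cluster name
    nodegroupName="general-workers",   # hypothetical node group name
    scalingConfig={"minSize": 3, "maxSize": 10, "desiredSize": 6},
)
```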

Container-as-a-Service (CaaS) is a novel concept that aims to free users from any underlying cluster management. Users of CaaS no longer have to worry about patching, scaling, or securing a cluster to run applications. One example of CaaS is AWS's Fargate service. It provides a managed ECS (Amazon's own container cluster service) environment without users needing to worry about ECS cluster management and scaling; instead, they just submit ECS tasks through the Fargate service. Tasks are charged based on their resource (vCPU and memory) consumption and duration. Because AWS handles the cluster management for users, it charges a premium compared to plain VM pricing. For example, running a task with 4 vCPUs and 8 GB of memory for one hour costs about 20% more than running an on-demand VM instance with similar specs (a back-of-the-envelope sketch follows the list below). With the AWS re:Invent 2019 announcement, Fargate was further extended to support EKS, Amazon's managed Kubernetes service. However, Fargate with EKS still has significant limitations at the moment:

  • Stateful workloads with persistent volumes or file systems are not supported,
  • No DaemonSets or privileged pods are allowed,
  • Load balancer integration is limited to the Application Load Balancer, and
  • The Fargate service is limited to a single region and manages each EKS cluster separately, with the cluster being the boundary for an application workload.
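To make the cost comparison above concrete, here is a rough sketch of the arithmetic. The rates are illustrative 2020 us-east-1 list prices and are assumptions; verify current pricing before relying on them.

```python
# Illustrative 2020 us-east-1 list prices (assumptions, not authoritative):
FARGATE_PER_VCPU_HOUR = 0.04048   # USD per vCPU-hour
FARGATE_PER_GB_HOUR = 0.004445    # USD per GB-hour
C5_XLARGE_ON_DEMAND = 0.17        # USD/hour for a 4 vCPU / 8 GB EC2 instance

fargate_hourly = 4 * FARGATE_PER_VCPU_HOUR + 8 * FARGATE_PER_GB_HOUR
premium = fargate_hourly / C5_XLARGE_ON_DEMAND - 1
print(f"Fargate: ${fargate_hourly:.4f}/hr, EC2: ${C5_XLARGE_ON_DEMAND}/hr, "
      f"premium: {premium:.0%}")
# -> Fargate: $0.1975/hr, EC2: $0.17/hr, premium: 16%
```

With these particular rates the premium works out closer to 16%, in the same ballpark as the roughly 20% figure cited above.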

While Fargate with EKS currently has limitations, we believe it sets the right direction for the future of container cloud infrastructure. Our view is that next-gen container cloud infrastructure should be completely transparent to the DevOps user. Kubernetes is great, and if it is the de facto container cluster manager, then it should be the consumption interface for CaaS users, so that users can continue to use their familiar CLI and tools to deploy and update applications. However, no one should really have to worry about the underlying Kubernetes cluster's lifecycle, security, and scaling. At the end of the day, Kubernetes infrastructure is just the means to run container applications; the application's faster time-to-market, agility, resilience, and security are what matter to the business, not the infrastructure itself.

At the same time, enterprises will always need flexibility and control. A switch to CaaS does not mean enterprises should lose control over their infrastructure. A CaaS admin can set policies and controls: when the infrastructure should upgrade or scale, placement rules, data replication rules, and security rules, all while maintaining full visibility into the underlying infrastructure clusters. CaaS users do not necessarily need to know all the policies, but they should be able to annotate their workloads with intent, e.g., SLA requirements, security compliance requirements, and location requirements (a sketch follows below). With these intents as input, the CaaS platform should be able to intelligently make all the decisions for the application workload, governed by the policies the CaaS admin has set.
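As a sketch of what intent annotations might look like, consider workload metadata along these lines. The annotation keys under "caas.example.com" are hypothetical, not any real platform's API; a CaaS scheduler would read them and reconcile them against the admin's policies.

```python
# Hypothetical intent annotations attached to a workload's metadata.
deployment_metadata = {
    "name": "checkout",
    "annotations": {
        "caas.example.com/sla": "99.95",           # availability intent
        "caas.example.com/compliance": "pci-dss",  # security compliance intent
        "caas.example.com/regions": "us,eu",       # location / placement intent
    },
}
```

The point is that the user declares what the workload needs, and the platform decides where and how to run it.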

In summary, CaaS is the direction for future container infrastructure: intelligent, self-driving, self-healing infrastructure with policies and controls, allowing fully intent-driven application deployment and management. The underlying cluster infrastructure should be completely transparent to the application; it could be a single cluster or multiple clusters, residing in different clouds or regions. The application should not care, and it should be able to scale and expand without cluster boundaries. The CaaS platform should manage all of this, transparently to the user.

Tags:
Thought Leadership