Containers and Kubernetes are game-changing technologies. With containers, a monolithic application can be broken into multiple lightweight microservices, and each microservice can be deployed, updated, and scaled independently. You can even run different versions of a microservice side by side and split traffic proportionally between the old and new versions for A/B testing. Kubernetes makes deploying and updating container-based applications much easier: you no longer need to be a distributed systems expert to manage cluster operations. Just declare your application's desired state in a YAML file, and Kubernetes will automagically reconcile the running system to match it. This brings huge value to the business in terms of agility, resiliency, scalability, and portability.
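To make the declarative model concrete, here is a minimal sketch of such a YAML file, a standard Kubernetes Deployment manifest (the name, labels, image tag, and replica count are illustrative, not from any particular application):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative name
spec:
  replicas: 3                    # desired state: three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19      # illustrative image and version
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to Kubernetes, which then keeps three replicas running, rescheduling or restarting pods as needed to stay in that state.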
Operating Kubernetes at its full potential is now mission-critical, yet most organizations have only partially realized the promise of containers and Kubernetes, held back by complexity and inflexible tooling.
Kubernetes is an awesome open-source technology, but to make it enterprise-grade and production-ready, users often have to stitch together 20+ components to create an end-to-end solution. Deployment, updates, and other day-2 operations such as certificate rotation and load balancer configuration can be tricky and labor-intensive. And to keep everything secure and up to date, all components must be verified and patched together: how many times has an OS patch broken the container engine, or a Kubernetes patch required host configuration changes? As if that were not fun enough, with Kubernetes going mainstream, many development teams are adopting it and ending up with multiple clusters, potentially across multiple cloud environments. Operational challenges galore. A myriad of them. Everywhere.
Existing solutions like public cloud managed Kubernetes services (e.g., EKS, AKS, GKE) and vendor pre-bundled solutions (e.g., OpenShift, Rancher, Anthos, Tanzu) all try to make Kubernetes lifecycle management easier, but they all come with a fixed stack and limited Kubernetes version support. Cloud lock-in aside, none of them provides the true flexibility developers need. One-size-fits-all often ends up being one-size-fits-nobody. DIY can bring the flexibility users want, but for most it doesn't scale, especially when dealing with multi-cluster and multi-cloud.
I have seen many developers get frustrated because they cannot use, say, the latest Kubernetes release (1.18 at the time of writing) for the new kubectl debug feature, because their existing solution is still stuck at 1.16. Some data scientists are forced to maintain and operate Kubernetes themselves because existing solutions don't offer GPU support out of the box. How about additional integrations? Vault, EFK, Prometheus/Grafana with out-of-the-box (OOB) alerting, service mesh, F5, … No one is going to be an expert on the entire Kubernetes stack, and what matters to developers is actually developing and running container applications, not dealing with Kubernetes infrastructure and operations.
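For context, the debugging feature in question lets you attach a throwaway troubleshooting container to a running pod; in 1.18 it shipped as an alpha subcommand. A hedged sketch of its invocation (the pod and container names are hypothetical, and the cluster must have the alpha EphemeralContainers feature gate enabled):

```shell
# Attach an interactive ephemeral debug container to a running pod.
# In Kubernetes 1.18 the feature was alpha, hence "kubectl alpha debug";
# later releases promoted it to "kubectl debug".
kubectl alpha debug -it my-pod --image=busybox --target=my-container
```

This is exactly the kind of capability a team loses access to when its platform lags two minor versions behind upstream.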
These limitations are slowing your teams down.
Enterprises really need a Kubernetes Easy Button: OOB templates that satisfy the majority of use cases, plus the flexibility to customize when they need to.
Spectro Cloud provides a Kubernetes multi-cluster management platform that gives enterprises the flexibility and control they need, without giving up ease of use, consistency, and manageability at scale. That holds whether you are on a single public cloud, a private cloud, hybrid cloud, or multi-cloud, and whether you want to quickly build an AI/ML cluster with GPU support, an experimental dev cluster running the latest version of Kubernetes, or a generic Kubernetes cluster for web apps with logging and monitoring. Spectro Cloud lets you easily define, deploy, and manage your Kubernetes cluster infrastructure in minutes, on any cloud. Run Kubernetes your way, anywhere.