February 23, 2023

How can we fix the Kubernetes developer experience?

Yitaek Hwang
Guest contributor

In its most recent annual report, the Cloud Native Computing Foundation (CNCF) estimated that over 5.6 million developers are using Kubernetes today.

That number is set to grow. In the 2022 State of Production Kubernetes report, respondents expected to deploy more new and existing applications to Kubernetes in the next 12 months, from more development teams.

More and more developers are deploying to Kubernetes

55% of developers who responded to that survey identified “improved developer productivity” as one of the outcomes of adopting Kubernetes.

But the situation is not entirely rosy. There’s a consensus that Kubernetes is complex, and survey respondents cited “lack of skills” as their main challenge with using it in production.

Skills shortages are the top challenge for Kubernetes users

Closing that skills gap is a priority. Naturally, for most organizations this means training and hiring to bolster operational teams.

But many organizations are also delegating Kubernetes operations work to individual developers. 30% of developers said that their organization was pushing operations work onto them!

While getting developers involved in the deployment cycle aligns with the ethos of DevOps and the shift-left movement, it creates friction when not implemented well.

Kubernetes has a steep learning curve for developers

Given Kubernetes’ steep learning curve, asking software engineers to add it to their workflows could decrease productivity. In short, it’s not great DX (developer experience).

So how can organizations reap the benefits of Kubernetes while maintaining a good developer experience and all the outcomes they look for from their developers, like feature velocity?

Define the boundaries

Every team is different. Some may have SREs or DevOps engineers embedded in the team with Kubernetes expertise, while others may have developers who want as little as possible to do with YAML and deployment pipelines. In either case, it’s important to establish boundaries and set expectations:

  • What part of the development lifecycle does each team own? Are developers expected to do production Kubernetes deployments?
  • Which elements of infrastructure management do developers need to be involved with?
  • Are developers expected to create their own ingresses, secrets, and operators?
  • Who will create the Kubernetes manifests to deploy their containers? Are templates available?
  • Should developers be able to choose their own application stacks? For example, who chooses tools for logging, monitoring, chaos or load testing?

The right answer to these questions depends on the size and structure of the team. But ultimately, all developers need to deploy their application to some remote Kubernetes cluster, and they will benefit from automation that streamlines the deployment process, plus access to debug when things go wrong.

Put simply, this could mean:

  1. CI/CD pipelines with GitOps integration to push containers to production
  2. Granular RBAC profiles for developers to safely access production clusters directly
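As a concrete sketch of the second point, a namespaced Role can give developers read and debug access to a production namespace without letting them mutate workloads. This is a minimal illustration using standard Kubernetes RBAC; the namespace `team-a` and group `developers` are hypothetical names for the example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-readonly-debug
  namespace: team-a        # hypothetical production namespace
rules:
  # Read-only visibility into workloads and their logs
  - apiGroups: [""]
    resources: ["pods", "pods/log", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  # Allow port-forwarding for live debugging
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-readonly-debug
  namespace: team-a
subjects:
  - kind: Group
    name: developers       # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-readonly-debug
  apiGroup: rbac.authorization.k8s.io
```

Binding to a group rather than individual users keeps the profile maintainable as the team changes.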

Don’t let perfect be the enemy of good. Instead of forcing Kubernetes into existing workflows, gradually build out automation to minimize friction.

Provide a consistent experience

One of the main complaints from developers interacting with Kubernetes is interoperability in their stack. Over 66% of developers report suffering interoperability issues between the different elements of their stack.

This is perhaps unsurprising given how many different software elements are deployed to a typical cluster and how often new versions get released.

Add in the concept of ‘configuration drift’ — where the actual state of the cluster gradually deviates from its ideal after ‘snowflake’ config changes and updates — and you end up with a debugging nightmare.
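One common countermeasure to configuration drift is GitOps-style reconciliation, where a controller continuously reverts the cluster to the state declared in Git. As one widely used example (not specific to any vendor mentioned here), an Argo CD Application with self-healing enabled undoes 'snowflake' changes automatically; the repository URL and paths below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-manifests.git  # placeholder repo
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual 'snowflake' changes on the cluster
```

With `selfHeal` on, a manual `kubectl edit` is detected as drift and reconciled back to the Git-declared state.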

Most people suffer issues due to stack interoperability

Vendors and the community are working hard on this problem, with various infrastructure templates that describe all the preconfigured components needed for a cluster.

For example, AWS provides EKS Blueprints for Terraform and AWS CDK to bootstrap core Kubernetes add-ons and services.

The challenge with some of these solutions is that they are 'fire and forget' (build-time only), or that they apply only to a single destination environment, e.g. a single cloud.

Here at Spectro Cloud, we have the concept of declarative Cluster Profiles, which describe all the elements of a cluster. Ops teams can build up a library of different Profiles that meet the needs of different development teams or particular use cases.

Teams can even stack together different Profiles as needed (for example, the SecOps team can own and maintain a security pack that is deployed to all clusters by default).

Building a Profile is a matter of selecting a version of a piece of software from an approved repository; updating a Profile, for example with a new version of that software, triggers a refresh of all the clusters with that Profile applied.

For a developer, and the ops teams that work with them, Cluster Profiles take away a lot of pain: of choosing and configuring cluster components from a complex ecosystem; of maintaining clusters over time; of aligning to company policies. It doesn't take control away from the developers, but it gives them guardrails.

A core infrastructure cluster profile

Provide a sandbox

As a company's codebases and dev teams grow, local development inevitably must move from laptops to a shared environment. When designing a Kubernetes software development environment, there are two major requirements:

  1. Access to an unrestricted sandbox with the same characteristics as the production cluster.
  2. Freedom to safely deploy at any time without waiting for approvals.

Most developers today report significant challenges in getting access to such sandbox environments, whether through provisioning delays or arduous security processes.

One way to provide a supercharged sandbox environment is to use Virtual Clusters. Virtual Clusters build on Loft Labs’s open source projects, vcluster and vcluster CAPI Provider, to create an isolated environment within the host cluster.

This gives developers safe access to a sandbox they fully control, while also optimizing for resources on the host cluster. Palette’s implementation of Virtual Clusters also allows you to pause and resume idle virtual clusters for cost savings.
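Even without a virtual cluster product, the underlying isolation idea can be sketched in plain Kubernetes: give each sandbox its own namespace and bound it with a ResourceQuota so experiments cannot starve the host cluster. This is an illustrative sketch of the concept, not vcluster's actual implementation; names and limits are made up:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sandbox-team-a     # hypothetical per-team sandbox namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sandbox-quota
  namespace: sandbox-team-a
spec:
  hard:
    requests.cpu: "4"      # cap total CPU requested by sandbox workloads
    requests.memory: 8Gi   # cap total memory requested
    pods: "20"             # cap the number of pods in the sandbox
```

Virtual clusters go further by giving developers their own API server and control plane, so they can even install CRDs and operators without touching the host cluster.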

Abstract away the infrastructure

Cluster Profiles can tame the complexity of Kubernetes for application developers. Virtual Clusters cut provisioning delays. But there is another step: to abstract away the infrastructure from the developer entirely.

As part of Palette 3.0, Spectro Cloud has made this simple via Palette Dev Engine (PDE). PDE provides a new mode called App Mode for developers to focus on building, testing, and deploying their Kubernetes applications.

The beauty of App Mode is that devs don't even need to think about or understand Virtual Clusters.

Instead, they focus on Apps and App Profiles, which are a more appropriate, relatable abstraction. App Profiles templatize configurations for application deployments. For example, App Profiles can pre-configure network access (e.g., private or public), environment variables, storage, and runtime settings.

Developers can then use an App Profile to quickly create their applications and deploy to Virtual Clusters for testing.

Yes, developers eventually get a kubeconfig file once they deploy an App, and might need to use it for debugging, but what they NEED is to be shielded from virtual clusters and all the complexity of the infrastructure layer for as long as possible.


It’s inevitable that developers will have to interact with Kubernetes when developing and deploying cloud native applications. But their focus should always be on their applications, not taking on the pain of serving as a shadow operations team.

So what’s the key to good DX for Kubernetes?

As always, some of the solution is down to people and process. Each organization needs to agree on who does what (and even a glance at the heated conversations on Reddit shows that this is easier said than done).

But there are technical solutions, too.

Enterprise Kubernetes management platforms like Palette streamline the process of designing, building and maintaining clusters, and managing access to them long term. This reduces the cognitive load on developers and makes it easier for platform/ops teams to provide a positive developer experience.

Innovations like Virtual Clusters help accelerate the feedback loops in the development process, eliminating the delay of firing up a sandbox cluster to deploy code changes and removing a ticket-driven touchpoint with the ops team from the development workflow.

And dev-centric environments like Palette Dev Engine provide a greater degree of abstraction away from the ‘plumbing’ of the Kubernetes infrastructure, freeing app developers to focus on their code instead of the ‘configuration tax’.

If you’re curious to learn more, you can watch our webinar on Virtual Clusters here, and learn more about the Palette Dev Engine App Mode for free here.
