Published
June 24, 2022

Three steps to controlling costs in multi-cloud Kubernetes

Dmitry Shevrin
Infrastructure Specialist

Cloud Kubernetes costs can ruin your day

Your K8s estate will inevitably incur costs when using cloud providers like AWS, and especially as your infrastructure grows more complex.

You might believe that you know exactly how expensive your cloud bill is going to be — until you actually get the billing email… and then you realize you underestimated cross-zone and cross-region traffic charges. Or storage usage. Or some other peculiar detail you never thought of.

Eventually you might end up paying twice as much as you expected. And once you’re fully committed to running your workloads on one or more cloud providers, there’s no easy way out.

Kubernetes cost meme

One of the most popular ways to control your Kubernetes expenditure is Kubecost, an open-source tool focused on real-time K8s cost visibility and savings insights. That’s not to say Kubecost is the only option: there are a number of other commercial solutions on the market, such as cast.ai, Vantage or OpsLyft. Cluster cost control is a hot topic, and it’s only going to grow more important in a recessionary economy where IT budgets are constrained.

However, all of these cost-visibility solutions require manual configuration and significant effort to make them aware of your team structure and cost centers for showback or chargeback purposes. Manual integration can also leave you with a system that reports inaccurate spend figures.

It is much easier to have cost visibility embedded natively in the management tools you use every day. That’s why here at Spectro Cloud we’ve built usage and cost visibility and controls directly into our Palette platform, making it easy to keep an eye on your consumption as well as on cumulative costs.

Because Palette tracks all your clusters, wherever they are, this visibility of costs is truly holistic. It enables you to compare multiple cloud providers and on-premises hosting options to help you make better decisions about where you run your applications.

Let’s see how you can achieve a cost-efficient multi-cloud K8s architecture with Palette, step by step.

Step 1: Watch your costs, move your workloads

Palette allows you to see which cloud provider is the best fit for your workloads. Let’s take a look at cumulative costs for the month of April.

check cloud provider cost for your clusters with Palette

This natively integrated graph shows the overall cost of running experimental K8s clusters on various clouds, all in one view. You can get a quick readout of total cloud spend for all clusters, and see how your consumption varies day by day — handy for identifying trends in demand, or even spotting where a fault is driving your costs up (for example, a misconfiguration causing a cascading failure).

So what can you do with this cost visibility? With Palette’s embedded cluster profiles, you can easily define software stacks — including identical application packs — for all major cloud providers as well as for your DC and edge instances. Your clusters become portable. This means that if, during real-world deployment, you find that costs in one cloud provider exceed your expectations, you have the option of deploying exactly the same cluster type on another hyperscaler or on-premises, then redirecting your CI/CD chain to this newly built cluster to transfer data. The result? A (fairly) straightforward cost reduction.
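To make this concrete, here’s a minimal sketch (in Python) of the kind of comparison this view gives you, assuming you’ve exported daily per-cluster cost data to a CSV. The file name and columns are invented for illustration; this is not a Palette API, just the arithmetic behind the graph.

```python
# Sketch: tally monthly cluster costs per cloud provider from a hypothetical
# CSV export with columns: date, cluster, provider, cost_usd.
import csv
from collections import defaultdict

def monthly_cost_by_provider(path: str) -> dict[str, float]:
    totals: defaultdict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["provider"]] += float(row["cost_usd"])
    return dict(totals)

if __name__ == "__main__":
    totals = monthly_cost_by_provider("april_costs.csv")  # assumed export file
    for provider, cost in sorted(totals.items(), key=lambda kv: kv[1]):
        print(f"{provider:12} ${cost:,.2f}")
    print(f"Cheapest this month: {min(totals, key=totals.get)}")
```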

Step 2: Define your on-premises costs for chargeback

Hyperscaler cloud costs are billed to you directly, at more or less transparent published rates, and your usage is visible. What about your on-premises or hybrid costs? For hybrid architectures with on-premises components, Palette enables you to define unit pricing for resources like CPU, GPU, memory and storage usage, as shown in the example below.

Define your on-premises costs

This enables you to run an accurate, transparent, usage-based internal chargeback between the different application owners in your organization, at whatever frequency is right for your business.
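As a rough illustration of how that chargeback works out, here’s a short Python sketch. The unit prices and team usage figures are made up for the example; in practice you’d plug in the rates you define in Palette and the usage your clusters actually report.

```python
# Sketch: usage-based chargeback with illustrative (made-up) unit prices.
UNIT_PRICES = {
    "cpu_core_hour":    0.04,   # $ per vCPU-hour
    "gpu_hour":         1.20,   # $ per GPU-hour
    "memory_gb_hour":   0.005,  # $ per GiB-hour
    "storage_gb_month": 0.08,   # $ per GiB-month
}

def chargeback(usage: dict[str, float]) -> float:
    """Multiply each metered quantity by its unit price and sum."""
    return sum(UNIT_PRICES[resource] * qty for resource, qty in usage.items())

# Hypothetical monthly usage per application owner
teams = {
    "payments":    {"cpu_core_hour": 12_000, "memory_gb_hour": 48_000, "storage_gb_month": 500},
    "ml-platform": {"cpu_core_hour": 4_000, "gpu_hour": 900, "memory_gb_hour": 32_000},
}

for team, usage in teams.items():
    print(f"{team:12} ${chargeback(usage):,.2f}")  # payments: $760.00, ml-platform: $1,400.00
```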

And, if you’ve architected your environment with multiple small K8s clusters, it also allows you to flexibly distribute your workload to make use of all available resources: for example, running CPU- or GPU-intensive workloads on bare metal clusters, or quickly scaling up into hyperscaler capacity when you need it.

With Palette’s native cost capabilities, it’s easy to compare public cloud and private cloud costs and to make an informed decision on where to run your most demanding workloads, such as GPU-intensive ones.
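For a quick back-of-the-envelope comparison, you can work out the utilization at which an on-prem GPU becomes cheaper than its on-demand cloud equivalent. The rates below are purely illustrative; substitute your own cloud pricing and amortized hardware costs.

```python
# Sketch: break-even utilization for a GPU workload, with made-up rates.
# cloud_rate: on-demand $ per GPU-hour; onprem_monthly: amortized $ per GPU
# per month on bare metal (hardware, power, space, support).
def breakeven_hours(cloud_rate: float, onprem_monthly: float) -> float:
    """Hours per month above which the on-prem GPU is the cheaper option."""
    return onprem_monthly / cloud_rate

print(breakeven_hours(cloud_rate=2.50, onprem_monthly=900.0))
# -> 360.0 hours, i.e. roughly half of a ~730-hour month
```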

Step 3: Spin up temporary clusters to cut waste

Do you keep all your clusters running for weeks, months or years? It’s a common pattern, but not always an efficient one if your workloads vary or your utilization fluctuates. In other words, you’re probably wasting money.

Conventionally, operations teams have put in a lot of work to get the cluster environment up and running, and may be loath to kill long-running clusters just to optimize costs in the short term. But with the right automation tooling — like Terraform and Palette (which actually work well together) — it’s suddenly feasible to start using temporary clusters.

The idea here is that you can use a Cluster Profile and automated deployment to fire up a new cluster of the right size when it’s needed for dev/test or another short-term requirement, then tear it down again as soon as the work is complete, so you don’t waste any hyperscaler budget (or occupy any on-prem resources). Applying this approach kills two birds with one stone: first, you know for sure that you’re running the latest and greatest application version, because you’ve just built the stack directly from the repo; and second, your infrastructure isn’t wasting money when no one is using it, for example at night.
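Here’s a minimal sketch of what that loop could look like when Terraform is driven from a CI job. It assumes a Terraform configuration for the cluster already lives in ./cluster (whichever provider you use), and run_tests() stands in for whatever your pipeline actually executes; neither is a prescribed Palette workflow.

```python
# Sketch: ephemeral dev/test cluster -- create, test, always destroy.
import subprocess

def terraform(*args: str) -> None:
    subprocess.run(["terraform", *args], cwd="cluster", check=True)

def run_tests() -> None:
    subprocess.run(["pytest", "tests/"], check=True)  # placeholder test step

def main() -> None:
    terraform("init", "-input=false")
    terraform("apply", "-auto-approve")        # spin the cluster up...
    try:
        run_tests()
    finally:
        terraform("destroy", "-auto-approve")  # ...and always tear it down

if __name__ == "__main__":
    main()
```

The finally block is the important part: the cluster is destroyed even if the tests fail, so a broken pipeline never leaves an expensive cluster running overnight.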

Conclusion

Running Kubernetes clusters can be unexpectedly costly — particularly when they’re scattered across different clouds and environments. It’s essential to get visibility of your total cost, both cloud and on-premises, and there are tools out there to help you do it. If you make a habit of digging into your cost metrics, and actively redistributing workloads, you can bring down your cloud spend significantly and get better utilization from your on-premises resources. And last but not least, you can do a better job of attributing real costs back to the dev teams and business units you support. The next step? Get in touch with us and we’ll take a look at your use case together, or try Palette for yourself — and don’t worry, it won’t cost you anything!

Tags:
How to
Using Palette
Enterprise Scale