How to Find an Optimal Set of Nodes for a Kubernetes Cluster

One of the most impactful ways to reduce Kubernetes infrastructure spend is to ensure your clusters are sized appropriately for the workloads they run. Working with many teams over the past year, we have seen that arriving at an optimally sized cluster is not obvious, even in autoscaling environments. In response, we are publicly launching a new tool that we have recently used to help teams do exactly that. Teams implementing this solution so far have seen savings in the range of 25-60%, including a major impact in autoscaling environments.

How It Works

This new tool generates context-aware cluster sizing recommendations by analyzing Kubernetes metrics on historical workloads alongside cloud billing data. As a user, you only need to provide the context of the cluster, i.e. whether it is being used for development, production, or high-availability workloads. We then recommend cluster configurations that optimize for the appropriate balance of cost, simplicity, and performance/headroom. Because this solution is Kubernetes native, we consider the following when providing recommendations:
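The tool itself is more sophisticated than can be shown here, but the core idea of cost-aware node selection can be sketched in a few lines. This is a minimal illustration, not the tool's actual algorithm: the node types, prices, and the `headroom` factor below are all hypothetical stand-ins for the context-dependent inputs described above.

```python
from math import ceil

# Hypothetical candidate node types; names and prices are illustrative only.
NODE_TYPES = [
    {"name": "small",  "cpu": 2, "mem_gib": 8,  "hourly_usd": 0.096},
    {"name": "medium", "cpu": 4, "mem_gib": 16, "hourly_usd": 0.192},
    {"name": "large",  "cpu": 8, "mem_gib": 32, "hourly_usd": 0.384},
]

def recommend(total_cpu, total_mem_gib, headroom=1.3):
    """Pick the cheapest homogeneous node pool that fits the workload's
    aggregate requests plus a context-dependent headroom factor
    (e.g. a larger factor for production or high-availability clusters)."""
    need_cpu = total_cpu * headroom
    need_mem = total_mem_gib * headroom
    best = None
    for nt in NODE_TYPES:
        # Node count is driven by whichever dimension (CPU or memory) binds first.
        count = max(ceil(need_cpu / nt["cpu"]), ceil(need_mem / nt["mem_gib"]))
        cost = count * nt["hourly_usd"]
        if best is None or cost < best[2]:
            best = (nt["name"], count, cost)
    return best

# e.g. 10 CPUs and 40 GiB of requests across the cluster
print(recommend(10, 40))
```

A real recommender also has to weigh bin-packing efficiency, pod-level constraints, and simplicity (fewer node pools), which is why the balance of cost, simplicity, and headroom matters rather than cost alone.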

Setting Real-Time Cost Alerts in Kubernetes With Kubecost

Engineering teams can scale their Kubernetes costs, and burn through their budgets, with the same ease with which they scale their infrastructure. Thanks to Kubecost's real-time alerting, the risk of upsetting the finance team can be mitigated. Kubernetes is well known for helping teams scale applications rapidly and easily, but this ability comes with tradeoffs. Before Kubernetes, changing capacity allocation required a more deliberate procurement approval process. Today, that scaling process has been democratized, and teams can easily scale their clusters up or down.

With the ability to change infrastructure resources more frequently come more opportunities to misallocate and over-allocate costly resources. In this model, technical teams can far exceed their expense budgets without even realizing it, while financial managers only notice after the fact, leading to avoidable organizational stress. So, how do you stay on top of your Kubernetes spending when your resources change daily?
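The kind of check a cost-alerting tool automates can be sketched simply: compare recent spend per namespace against a budget and raise an alert on overruns. This is an illustrative sketch only, not Kubecost's implementation; `get_daily_spend` is a hypothetical stand-in for whatever cost source you have (a billing export, a cost-monitoring API, etc.).

```python
# Hypothetical per-namespace daily budgets, in USD.
DAILY_BUDGETS_USD = {"team-a": 120.0, "team-b": 80.0}

def check_budgets(get_daily_spend):
    """Return alert messages for any namespace that exceeded its daily budget."""
    alerts = []
    for namespace, budget in DAILY_BUDGETS_USD.items():
        spend = get_daily_spend(namespace)
        if spend > budget:
            overrun = 100.0 * (spend - budget) / budget
            alerts.append(
                f"{namespace}: ${spend:.2f} spent vs ${budget:.2f} budget "
                f"(+{overrun:.0f}%)"
            )
    return alerts

# With a stubbed spend source, team-a is over budget and team-b is not:
print(check_budgets(lambda ns: {"team-a": 150.0, "team-b": 60.0}[ns]))
```

The value of doing this in real time, rather than at month end, is that the team that caused the overrun can act on the alert while the resources are still running.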

Gain Better Visibility Into Kubernetes Cost Allocation

The Complexity of Measuring Kubernetes Costs

Adopting Kubernetes and service-based architecture can benefit organizations – teams move faster, and applications scale more easily. However, this transition makes visibility into cloud costs more complicated. Applications and their resource needs are often dynamic, and teams share core resources without transparent prices attached to workloads. Additionally, organizations that realize the full benefit of Kubernetes often run workloads on disparate machine types and even multiple cloud providers.
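The shared-resource problem above has a simple core: when several teams' pods run on the same node, the node's cost has to be split among them somehow. A minimal sketch, assuming allocation in proportion to CPU requests (real tools also weigh memory, GPU, storage, and idle capacity; the team names and numbers here are illustrative):

```python
# Hourly cost of one shared node (illustrative figure).
NODE_HOURLY_COST = 0.40

# CPU requests per team for pods scheduled on this node (hypothetical).
requests = {"checkout": 4.0, "search": 2.0, "batch": 2.0}

def allocate(node_cost, requests_by_team):
    """Split a shared node's cost across teams in proportion to their requests."""
    total = sum(requests_by_team.values())
    return {team: node_cost * req / total for team, req in requests_by_team.items()}

print(allocate(NODE_HOURLY_COST, requests))
# checkout carries half the node's cost; search and batch a quarter each
```

Even this toy version shows why the problem is hard at scale: the inputs (which pods ran where, and for how long) change constantly, so allocation has to be computed continuously from live cluster metrics rather than from a static bill.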

In this blog post, we’ll look at best practices and different approaches for implementing cost monitoring in your organization for a showback/chargeback program, and how to empower users to act on this information. We’ll also look at Kubecost, which provides an open-source approach for ensuring consistent and accurate visibility across all Kubernetes workloads.