How to Find an Optimal Set of Nodes for a Kubernetes Cluster

One of the most impactful ways to reduce spend on Kubernetes infrastructure is to make sure your clusters are optimally sized for the workloads they run. Working with many teams over the past year, we have seen that arriving at an optimally sized cluster is not obvious, even in autoscaling environments. In response, we are publicly launching a tool that we have recently been using to help teams do exactly that. Teams implementing this solution so far have seen savings in the range of 25-60%, including in environments that already autoscale.

How It Works

This new tool generates context-aware cluster sizing recommendations by analyzing Kubernetes metrics from historical workloads together with cloud billing data. As a user, you only need to provide the context of the cluster, i.e. whether it is used for development, production, or high-availability workloads. We then recommend cluster configurations that optimize for the appropriate balance of cost, simplicity, and performance/headroom. Because this solution is Kubernetes native, we consider the following when providing recommendations:

AWS S3 Client-side Encryption in AWS SDK .NET Core

When you upload data to an S3 bucket, you need to ensure that sensitive data is protected with proper encryption. Amazon S3 lets you encrypt data or objects either on the server side or on the client side.

Here, I will use client-side encryption to encrypt data before sending it to Amazon S3 using the AWS SDK for .NET Core. The advantage of client-side encryption is that encryption is performed locally, so the data never leaves your execution environment unencrypted. Another advantage is that you can use your own master keys for encryption, and no one can access your data without your master encryption keys.
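To make this concrete, here is a minimal sketch of a client-side encrypted upload and download, assuming the Amazon.Extensions.S3.Encryption NuGet package (which provides AmazonS3EncryptionClientV2). The bucket name, object key, and content are placeholders, and the AES key is generated inline purely for illustration.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Amazon.Extensions.S3.Encryption;
using Amazon.Extensions.S3.Encryption.Primitives;
using Amazon.S3.Model;

public static class S3ClientSideEncryptionExample
{
    public static async Task UploadEncryptedAsync()
    {
        // A locally generated AES key acts as the client-side master key.
        // In practice, load it from a secure key store; losing this key
        // means losing access to every object encrypted with it.
        using var aes = Aes.Create();

        var materials = new EncryptionMaterialsV2(aes, SymmetricAlgorithmType.AesGcm);
        var config = new AmazonS3CryptoConfigurationV2(SecurityProfile.V2);

        // The encryption client encrypts object data locally before the
        // request leaves the process; S3 only ever receives ciphertext.
        using var client = new AmazonS3EncryptionClientV2(config, materials);

        await client.PutObjectAsync(new PutObjectRequest
        {
            BucketName = "my-bucket",        // placeholder bucket name
            Key = "sensitive-data.txt",      // placeholder object key
            ContentBody = "my secret payload"
        });

        // Downloading through the same client (with the same key material)
        // transparently decrypts the object.
        using var response = await client.GetObjectAsync("my-bucket", "sensitive-data.txt");
        using var reader = new StreamReader(response.ResponseStream);
        Console.WriteLine(await reader.ReadToEndAsync());
    }
}
```

Because the AES key here is the master key, anyone without it, including AWS, sees only ciphertext; the trade-off is that you are now responsible for storing and rotating that key yourself.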

Distributed Tracing in ASP.NET Core With Jaeger and Tye, Part 1: Distributed Tracing

In This Series:

  1. Distributed Tracing With Jaeger (this article)
  2. Simplifying the Setup With Tye (coming soon)

Modern microservices applications consist of many services deployed on platforms such as Kubernetes, AWS ECS, and Azure App Service, or on serverless compute services such as AWS Lambda and Azure Functions. One of the key challenges of microservices is the reduced visibility of requests that span multiple services. In distributed systems that perform operations such as querying databases, publishing and consuming messages, and triggering jobs, how would you quickly find issues and monitor the behavior of services? The answer to this problem is Distributed Tracing.

Distributed Tracing, OpenTracing, and Jaeger

Distributed Tracing is the capability of a tracing solution to track a request as it travels across multiple services. Tracing solutions use one or more correlation IDs to collate the traces of a request, which are structured log events emitted by the different services, and store them in a central database.
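As a rough illustration of how a tracing solution plugs into an ASP.NET Core service, the sketch below registers a Jaeger tracer behind the OpenTracing API. It assumes the Jaeger and OpenTracing.Contrib.NetCore NuGet packages; the service name "orders-api" is a placeholder, and localhost:6831 is the Jaeger agent's default UDP endpoint.

```csharp
using Jaeger;
using Jaeger.Reporters;
using Jaeger.Samplers;
using Jaeger.Senders.Thrift;
using Microsoft.Extensions.DependencyInjection;
using OpenTracing;
using OpenTracing.Util;

public static class TracingSetup
{
    // Registers a Jaeger tracer with the DI container so that
    // OpenTracing.Contrib.NetCore can instrument ASP.NET Core,
    // HttpClient, and other components automatically.
    public static IServiceCollection AddJaegerTracing(this IServiceCollection services)
    {
        services.AddSingleton<ITracer>(_ =>
        {
            var tracer = new Tracer.Builder("orders-api") // placeholder service name
                // Sample every request; fine for local development,
                // usually too much for production.
                .WithSampler(new ConstSampler(true))
                // Ship finished spans to the Jaeger agent over UDP.
                .WithReporter(new RemoteReporter.Builder()
                    .WithSender(new UdpSender("localhost", 6831, 0))
                    .Build())
                .Build();

            // Make the tracer reachable from code that resolves it statically.
            GlobalTracer.Register(tracer);
            return tracer;
        });

        // Enables automatic span creation for incoming and outgoing requests.
        services.AddOpenTracing();
        return services;
    }
}
```

Calling services.AddJaegerTracing() from your service registration code should then be enough for incoming HTTP requests and outgoing HttpClient calls to be recorded as spans, correlated by trace ID, and shipped to Jaeger.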