Managing Applications in Kubernetes With the Carvel Kapp Controller

A typical enterprise-grade application deployed on Kubernetes comprises several API resources that need to be deployed together. For example, the WordPress application, one of the example applications available in the Kubernetes GitHub repository, includes:

  • a wordpress frontend pod,
  • a wp-pv-claim persistent volume claim mounted to the frontend pod,
  • a wordpress-mysql MySQL database pod,
  • a mysql-pv-claim persistent volume claim mounted to the MySQL database pod,
  • two persistent volumes (wordpress-pv-1 and wordpress-pv-2) to serve the persistent volume claims,
  • services for the database and frontend pods.

Application (or app) is not a native construct in Kubernetes. However, managing applications is a primary concern of developers and operators. Application delivery on Kubernetes involves upgrading, downgrading, and customizing the individual API resources. Kubernetes lets you limit the spread of your application resources with namespaces, so you can deploy an entire app into a single namespace that can be created or deleted as a unit. However, a complex application might consist of resources spread across namespaces, and in such cases answering the following questions might be a challenge:

OpenTelemetry in Action: Optimizing Database Operations

Many software developers can attest that some of the most significant issues in their applications arise from database performance. Though many developers prefer to use a relational database for enterprise applications, typical logging and monitoring solutions provide limited signals to detect database performance issues. Rooting out common bad practices such as chatty interactions between the application code and the database is non-trivial.

As developers, we need to understand how our database is performing in the context of user transactions. Ideally, we would have a common tool that can monitor the performance of both the application and the database with respect to user transactions. OpenTelemetry has emerged as a popular tool for application monitoring, but it can also be extended to monitor databases.
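To make that concrete, here is a minimal sketch of the idea using the OpenTelemetry .NET SDK with its SQL client instrumentation package; the packages, service name, and exporter are assumptions for illustration, not necessarily the setup the article uses.

```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Build a tracer provider that records application spans (from the assumed
// "OrderService" ActivitySource) alongside spans emitted for SQL commands
// by the OpenTelemetry.Instrumentation.SqlClient package, so database calls
// show up in the same trace as the user transaction that triggered them.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("order-service"))
    .AddSource("OrderService")          // application-level spans
    .AddSqlClientInstrumentation()      // database command spans
    .AddConsoleExporter()               // swap for a Jaeger/OTLP exporter in practice
    .Build();
```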

OpenTelemetry in Action: Identifying Database Dependencies

Microservices can help any organization achieve its goal of increasing agility by addressing critical factors such as improving team autonomy, reducing time to market, scaling cost-effectively for load, and avoiding complete application outages. As organizations break their monolithic applications into microservices, one of the major hurdles they encounter is identifying database dependencies.

Database sharing can be a complex and time-consuming challenge to solve. Databases do not allow you to define what is shared and what is not. While modifying a schema to better serve one microservice, you might inadvertently break how another microservice uses that same database.

Azure Infrastructure Made Immutable With Locks

After an application is deployed to production, developers should lock down its underlying infrastructure to prevent accidental changes. Common accidents that can affect the availability of an application in production include moving, renaming, or deleting a resource crucial to the functioning of the application. To avoid such mishaps, you can apply locks that prevent anyone from performing such actions.

Creating Locks

Almost every resource in Azure supports locks, so you will find the lock option in the settings section of nearly all resources in the portal. For example, the following screenshot illustrates locks on resource groups:

Delete Multiple Resources and Resource Groups in Azure With Tags

You might have noticed that resources comprising some Azure services, such as Azure Kubernetes Service (AKS), span multiple resource groups by default. In some cases, you might intentionally segregate resources such as disks and network interfaces from VMs by placing them in different resource groups for easier management. A common problem arising from this resource spread is that deleting multiple resources and resource groups to remove a service entirely from a subscription can be challenging.

We can solve this problem by using resource tags to associate resources and resource groups with a service. Tags are key-value pairs that can be applied to your Azure resources, resource groups, and subscriptions. Of course, you can use tags for many purposes other than resource management. The Azure docs website has a detailed guide on the various resource naming and tagging strategies and patterns.

Practical Introduction to Kubernetes Autoscaling Tools with Linode Kubernetes Engine

Your cloud infrastructure can scale in real time with your application without you making a configuration change or writing a line of code. Autoscaling is the process of increasing or decreasing the capacity of application workloads without human intervention. When tuned correctly, autoscaling can reduce costs and the engineering toil of maintaining applications.

The overall process of enabling autoscaling is simple. It begins with determining the set of metrics that can provide an indicator for when Kubernetes should scale the application capacity. Next, a set of rules determines whether the application should be scaled up or down. Finally, using the Kubernetes APIs, the resources available to the application are expanded or contracted to accommodate the work that the application must perform.

Enhancing Istio Operations with Kong Istio Gateway

If you’re a developer working on a service-oriented application, routing requests between services can be overwhelming. This work may force you to focus on operational details that take you away from building great features for your customers.

Fortunately, with Kong Istio Gateway, we can solve many inter-service networking concerns such as security, resiliency, observability, and traffic control with services-first networking policies. By offloading network-related problems to the service mesh, you can focus on building features that deliver business value.

Kubernetes Container Lifecycle Events and Hooks

You might encounter cases where you need to instruct Kubernetes to start a pod only when a condition is met, such as its dependencies being up or its sidecar containers being ready. Likewise, you might want to execute a command before Kubernetes terminates a pod, to release the resources in use and gracefully shut down the application.

You can do so easily with the two container lifecycle hooks that Kubernetes provides: PostStart and PreStop.

3 Fundamental Components of a Reusable .NET Microservices Platform

Development teams frequently need to build new microservices, either to add new functionality or to replace existing ones. However, microservices must support standard features such as providing insight into their health through logging, allowing monitoring, and following the organization's security standards. A reusable microservices platform can help developers jumpstart the development process by providing ready-made components from which new microservices can be built.

To implement a reusable microservices platform, you can use the sidecar pattern or build NuGet packages that are installed in every microservice. Monitoring, tracing, and logging are the key components that must exist in your platform, and this article covers an implementation of the platform consisting of these three components. However, you may build many more platform components to implement standard technical policies such as database connection handling, retries, and timeouts.
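As a minimal sketch of the NuGet package approach, each platform concern can be exposed through an IServiceCollection extension method so that a new microservice opts in with a single call. The package and method names below are hypothetical, not the article's actual components.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public static class PlatformServiceCollectionExtensions
{
    // One call wires up the shared platform concerns so individual
    // microservices never configure logging, monitoring, or tracing themselves.
    public static IServiceCollection AddMicroservicePlatform(
        this IServiceCollection services, IConfiguration configuration)
    {
        // Logging defaults; a real platform package would plug in the
        // organization's sinks and enrichers, driven by `configuration`.
        services.AddLogging(logging => logging.AddConsole());

        // Health endpoints for monitoring.
        services.AddHealthChecks();

        // Tracing registration (for example, the OpenTelemetry SDK) would go here.
        return services;
    }
}
```

A new microservice would then call services.AddMicroservicePlatform(configuration) in its startup code instead of configuring these concerns itself.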

Limit Communication Between Microservices With Kubernetes Network Policies

Security is an important concern for microservices applications. Although security is a broad topic, I want to zoom in on a critical aspect: limiting communication between microservices. By default, microservices platforms such as Kubernetes allow unconstrained communication between services. However, to prevent a few compromised services from affecting all the services on the platform, a microservices platform needs to limit the interactions between services. In Kubernetes, this constraint is enforced by creating network policies. Network policies allow you to specify which services can communicate with each other and which can't. For example, with a network policy you can specify that a service can communicate only with other services in the same namespace.

A Kubernetes cluster needs a network controller to enforce the network policies. The network controller is a special pod that runs on every node in the cluster (that is, it is deployed as a DaemonSet). It monitors the network traffic between services and enforces the network policies.

Versatile Events in Event-Driven Architecture

Simple applications rely on synchronous request-response protocols. It is one of the most common patterns we encounter every day in applications and websites: you press a button and expect a response.

As the number of services increases, the number of synchronous interactions between them increases as well. In such a situation, the downtime of a single system also affects the availability of other systems.

Bulk Copy Data Sharing Pattern for Applications in Azure With Data Explorer, Data Factory, and Cosmos DB

In the initial stages of data platform development, data size is small, and you can easily share data via email or services such as Power BI. However, once the platform grows and different parts of the business become dependent on it, sharing data between systems becomes a big challenge.

In a majority of data-driven systems, one of two patterns is used for consuming data.

Crosspost Tweets to LinkedIn With Power Automate

Do you want your LinkedIn audience to know what you are up to on Twitter? Here's how I have set up Power Automate to crosspost specific tweets to LinkedIn.

What is Power Automate

Power Automate is one of the products in the Microsoft Power Platform family. It is a web-based service that helps you create automated workflows between your favorite apps and services to synchronize files, get notifications, collect data, and more. Power Automate is part of the Office 365 suite and is included in most Office 365 subscriptions.

Distributed Tracing in ASP.NET Core With Jaeger and Tye, Part 2: Project Tye

In This Series:

  1. Distributed Tracing With Jaeger
  2. Simplifying the Setup With Tye (this article)

Tye is an experimental dotnet tool from Microsoft that aims to make developing, testing, and deploying microservices easier. Tye's opinionated nature greatly simplifies the lifecycle of development and deployment of .NET Core microservices.

To understand the benefits of Tye, let's enumerate the steps involved in the development and deployment of the DCalculator application to Kubernetes:

Distributed Tracing in ASP.NET Core With Jaeger and Tye, Part 1: Distributed Tracing

In This Series:

  1. Distributed Tracing With Jaeger (this article)
  2. Simplifying the Setup With Tye (coming soon)

Modern microservices applications consist of many services deployed on various hosts such as Kubernetes, AWS ECS, and Azure App Service, or on serverless compute services such as AWS Lambda and Azure Functions. One of the key challenges of microservices is the reduced visibility of requests that span multiple services. In distributed systems that perform various operations, such as querying databases, publishing and consuming messages, and triggering jobs, how would you quickly find issues and monitor the behavior of services? The answer to this perplexing problem is Distributed Tracing.

Distributed Tracing, Open Tracing, and Jaeger

Distributed Tracing is the capability of a tracing solution to track a request across multiple services. Tracing solutions use one or more correlation IDs to collate the traces of a request, which are structured log events emitted by the different services, and store them in a central database.
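The article builds this with Jaeger; purely to illustrate the correlation-ID idea in .NET terms, here is a minimal sketch using System.Diagnostics, where the shared TraceId plays the role of the correlation ID. The source and operation names are made up.

```csharp
using System;
using System.Diagnostics;

var source = new ActivitySource("DCalculator.Demo");

// Normally a tracing exporter registers a listener; without one,
// StartActivity returns null and nothing is recorded.
ActivitySource.AddActivityListener(new ActivityListener
{
    ShouldListenTo = _ => true,
    Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
        ActivitySamplingResult.AllDataAndRecorded
});

// Each operation becomes a span; spans from different services share a TraceId,
// which is the correlation ID a tracing backend uses to collate them.
using (var activity = source.StartActivity("calculate"))
{
    activity?.SetTag("operation", "add");
    Console.WriteLine($"TraceId (correlation ID): {activity?.TraceId}");
}
```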

Event-Driven Architecture With Apache Kafka for .NET Developers Part 2: Event Consumer

In This Series:

  1. Development Environment and Event Producer
  2. Event Consumer (this article)
  3. Azure Integration (coming soon)

Let's carry our discussion forward and implement a consumer of the events published by the Employee service to the leave-applications Kafka topic. We will extend the application that we developed earlier with two new services that demonstrate how Kafka consumers work: the Manager service and the Result reader service.
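For a sense of what such a consumer involves, here is a bare-bones sketch using the Confluent.Kafka client; the string serialization, broker address, and consumer group name are simplifications for illustration rather than the article's actual configuration.

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",       // assumed local broker
    GroupId = "manager-service",               // illustrative consumer group
    AutoOffsetReset = AutoOffsetReset.Earliest // read from the beginning on first run
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("leave-applications");

while (true)
{
    // Blocks until a message is available, then hands it to the service logic.
    var result = consumer.Consume(CancellationToken.None);
    Console.WriteLine($"Received {result.Message.Key}: {result.Message.Value}");
}
```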

Source Code

The complete source code of the application and other artifacts is available in my GitHub repository.

Event-Driven Architecture With Apache Kafka for .NET Developers Part 1: Event Producer

In This Series:

  1. Development Environment and Event Producer (this article)
  2. Event Consumer (coming soon)
  3. Azure Integration (coming soon)

Introduction

An event-driven architecture uses events to trigger and communicate between microservices. An event is a change in a service's state, such as an item being added to the shopping cart. When an event occurs, the service produces an event notification, which is a packet of information about the event.

The architecture consists of an event producer, an event router, and an event consumer. The producer sends events to the router, and the consumer receives the events from the router. Depending on the capability, the router can push the events to the consumer or send the events to the consumer on request (poll). The producer and the consumer services are decoupled, which allows them to scale, deploy, and update independently.
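As a rough sketch of the producer side of this flow with Kafka acting as the event router, publishing an event notification boils down to a single ProduceAsync call with the Confluent.Kafka client; the broker address, topic usage, and payload below are illustrative only.

```csharp
using System;
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" }; // assumed local broker

using var producer = new ProducerBuilder<string, string>(config).Build();

// The event notification: a small packet of information describing the state change.
var notification = "{\"event\":\"leave-application-submitted\",\"employeeId\":\"1001\"}";

// The producer only knows the topic (the router); it is decoupled from any consumer.
var result = await producer.ProduceAsync(
    "leave-applications",
    new Message<string, string> { Key = "1001", Value = notification });

Console.WriteLine($"Event delivered to {result.TopicPartitionOffset}");
```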

Build a Basic GraphQL Server With ASP.NET Core and Entity Framework in 10 Minutes

Since I wrote my first GraphQL post in 2019, much has changed with GraphQL in the .NET space. The ongoing changes have also affected most of the documentation available online. This article will walk you through the steps to create a basic GraphQL API on ASP.NET Core using GraphQL for .NET, Entity Framework Core, Autofac, and the Repository design pattern. I chose the tech stack for the sample application based on the popularity of the frameworks and patterns. You can substitute the frameworks or libraries with equivalent components in your implementation.

If you are not familiar with the concepts of GraphQL, please take some time to read the learn series of articles on the GraphQL website. Let's now fire up our preferred editor or IDE to get started.
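To give a flavor of the pieces involved, the sketch below shows how GraphQL for .NET maps a CLR class to a graph type and exposes it through a root query resolver. The model, repository interface, and registration syntax are illustrative, and the exact APIs vary between library versions, as noted above.

```csharp
using System.Collections.Generic;
using GraphQL.Types;

public class Book
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
}

// Hypothetical repository abstraction, in line with the Repository pattern.
public interface IBookRepository
{
    IEnumerable<Book> GetAll();
}

// Maps the Book CLR type to a GraphQL object type.
public class BookType : ObjectGraphType<Book>
{
    public BookType()
    {
        Field(b => b.Id);
        Field(b => b.Title);
    }
}

// The root query type resolves the "books" field from the repository.
public class QueryType : ObjectGraphType
{
    public QueryType(IBookRepository repository)
    {
        Field<ListGraphType<BookType>>(
            "books",
            resolve: context => repository.GetAll());
    }
}
```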