Why Kubernetes Is the Best Technology for Running a Cloud-Native Database

We’ve been talking about migrating workloads to the cloud for a long time, but a look at the application portfolios of many IT organizations demonstrates that there’s still a lot of work to be done. In many cases, challenges with persisting and moving data in clouds continue to be the key limiting factor slowing cloud adoption, despite the fact that databases in the cloud have been available for years. 

For this reason, there has been a surge of recent interest in data infrastructure that is designed to take maximum advantage of the benefits that cloud computing provides. A cloud-native database achieves the goals of scalability, elasticity, resiliency, observability, and automation; the K8ssandra project is a great example. It packages Apache Cassandra and supporting tools into a production-ready Kubernetes deployment.

Create a Minimal Web API With ASP.NET Core and Publish To Azure API Management With Visual Studio

Minimal Web API is a new approach for building APIs without all the complex structures of MVC; in keeping with the name "minimal," it includes only the essential components needed to build HTTP APIs. All you need is a .csproj file and a Program.cs.

Benefits of Using Minimal Web API

  • Less complex than the traditional MVC approach
  • Easy to learn and use
  • Don’t need an MVC structure: no controllers!
  • Minimal code to build and compile, which means less framework overhead and better performance
  • Latest improvements and functionalities of .NET 6 and C#10

Prerequisites

  • .NET 6 SDK
  • Visual Studio 2022 or Visual Studio Code (we will use both of them)

We will use two methods to create our Minimal Web API.

Useful Links for Azure Users

It’s been a busy week here for Brian and me, with lots of stuff going on. We did find time to collect some links, however, which I share with you now. Enjoy!

Source: http://blogs.msdn.com/b/silverlining/archive/2012/02/03/pie-in-the-sky-february-3rd-2012.aspx

Diagrams as Code: The Complete How-to-Use Guide

We're seeing more and more tools that enable you to create software architecture and other diagrams as code. The main benefit of this concept is that the majority of Diagrams as Code tools can be scripted and integrated into a build pipeline for automatically generating documentation. The other benefit driving the growing use of Diagrams as Code is that it enables text-based tooling, which most software developers already use. Furthermore, text is easily version-controllable and diff’able.
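For a sense of what this looks like in practice, here is a minimal sketch using the Python Diagrams library covered below (it assumes the diagrams package and Graphviz are installed; the node choices are purely illustrative):

```python
# Minimal sketch with the Python "diagrams" library: describe the architecture
# in code and render it to an image as part of a build pipeline.
from diagrams import Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS
from diagrams.aws.network import ELB

# Renders web_service.png in the current directory; show=False skips opening a viewer.
with Diagram("Web Service", show=False):
    ELB("load balancer") >> EC2("web server") >> RDS("database")
```

Because the diagram is just a Python script, it can live in version control next to the code it documents and be regenerated on every build.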

Table of Contents

  • What is Diagram as Code?
  • How to install Diagrams
  • How to use Diagrams
  • Conclusion

Taking Your Database Beyond a Single Kubernetes Cluster

Global applications need a data layer that is as distributed as the users they serve. Apache Cassandra has risen to this challenge, handling data needs for the likes of Apple, Netflix, and Sony. Traditionally, the data layer for a distributed application was managed by dedicated teams handling the deployment and operation of thousands of nodes, both on-premises and in the cloud.

To alleviate much of the load felt by DevOps teams, we evolved a number of these practices and patterns in K8ssandra, leveraging the common control plane afforded by Kubernetes (K8s). There has been a catch, though: running a database (or indeed any application) across multiple regions or K8s clusters is tricky without proper care and planning up front.

Cloud Pricing Comparison: AWS vs Azure vs Google Cloud


AWS vs Azure vs Google: An Overview

Amazon Web Services (AWS) is the world’s leading cloud computing platform. It provides Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) offerings. AWS services can provide organizations with on-demand computing power, storage, application services, and content delivery services.

Azure Infrastructure Made Immutable With Locks

After an application is deployed to production, developers should lock down its underlying infrastructure to prevent accidental changes. Common accidents that can affect the availability of an application in production include moving, renaming, or deleting a resource crucial to the application's function. To avoid such mishaps, you can use locks that prevent anyone from performing these actions.

Creating Locks

Almost every resource in Azure supports locks, so you will find the lock option in the settings section of nearly all resources in the portal. For example, the following screenshot illustrates locks on resource groups:
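Locks can also be created programmatically. As a rough sketch (not taken from the article itself), here is how a CanNotDelete lock might be applied to a resource group with the Azure SDK for Python; the subscription ID, resource group, and lock name are placeholders:

```python
# Sketch: apply a CanNotDelete lock to a resource group, assuming the
# azure-identity and azure-mgmt-resource packages are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ManagementLockClient

client = ManagementLockClient(DefaultAzureCredential(), "<subscription-id>")

client.management_locks.create_or_update_at_resource_group_level(
    resource_group_name="my-production-rg",  # hypothetical resource group
    lock_name="do-not-delete",
    parameters={"level": "CanNotDelete", "notes": "Protect production infrastructure"},
)
```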

Using XML Policies to Log and Analyze API Calls from Azure API Management

Azure API Management (APIM) is a powerful platform that enables you to publish and scale APIs while ensuring they are secured. One of the great features of Azure APIM is that you can add plugins and transforms to your APIs without any code change or restarts.

These capabilities are deployed using XML policies, which are collections of statements. Moesif API Observability can be added in just a few minutes using an XML policy for APIM, which makes it easy to get visibility into API calls, even ones that are rejected and never reach your underlying service.

How to Provision an Azure SQL Database With Active Directory Authentication

In this article, we will talk about how to provision an Azure SQL Database with authentication restricted to Active Directory users/groups/applications. We will use Pulumi to do that.

Why This Article?

In a previous article, I already talked about connecting to an Azure SQL Database using Azure Active Directory authentication. However, my focus was on querying an Azure SQL Database from C# code (from an ASP.NET 6 Minimal API that was using Microsoft.Data.SqlClient "Active Directory Default" authentication mode, to be more precise), and not on the configuration of the Azure AD authentication itself.
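As a rough preview of the Pulumi-based setup (this is a sketch assuming the pulumi-azure-native Python provider, not the article's exact code; all names and IDs are placeholders), provisioning a SQL server restricted to an Azure AD administrator might look like this:

```python
# Sketch of a Pulumi program (run with `pulumi up`) that provisions an Azure SQL
# server restricted to Azure AD authentication; assumes pulumi and
# pulumi-azure-native are installed and Azure credentials are configured.
import pulumi_azure_native as azure_native

rg = azure_native.resources.ResourceGroup("sql-rg")

server = azure_native.sql.Server(
    "sql-server",
    resource_group_name=rg.name,
    location=rg.location,
    administrators=azure_native.sql.ServerExternalAdministratorArgs(
        administrator_type="ActiveDirectory",
        azure_ad_only_authentication=True,   # no SQL logins, AAD identities only
        login="sql-admins",                  # hypothetical AAD group name
        principal_type="Group",
        sid="<aad-group-object-id>",
        tenant_id="<tenant-id>",
    ),
)

database = azure_native.sql.Database(
    "app-db",
    resource_group_name=rg.name,
    server_name=server.name,
    location=rg.location,
)
```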

Environmental Impact of the Cloud: 5 Data-Based Insights and One Good Fix

Does using the cloud make your business sustainable? Research suggests that it’s a greener choice. 

By moving to the cloud, the e-commerce giant Etsy slashed its energy consumption by 13% (from 7330 MWh in 2018 to 6376 MWh in 2019), saving enough energy to power 450 households for a month.(1) However, migrating to the cloud doesn’t guarantee anything if you neglect to optimize your resource utilization over the long term. 

Delete Multiple Resources and Resource Groups in Azure With Tags

You might have noticed that resources comprising some Azure services such as Azure Kubernetes Service (AKS) span multiple resource groups by default. In some cases, you might intentionally want to segregate resources such as disks and network interfaces from VMs by placing them in different resource groups for better management. A common problem arising from the resource spread is that you might find it challenging to delete multiple resources and resource groups to entirely remove a service from a subscription.

We can solve the problem by using resource tags to associate resources and resource groups to a service. Tags are key-value pairs that can be applied to your Azure resources, resource groups, and subscriptions. Of course, you can use tags for many other purposes apart from resource management. The Azure docs website has a detailed guide on the various resource naming and tagging strategies and patterns.
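As a rough illustration of the idea (a sketch using the Azure SDK for Python rather than the article's own steps; the tag name and value are hypothetical), tagged resource groups can be enumerated and deleted in a few lines:

```python
# Sketch: delete every resource group tagged service=aks-demo, assuming the
# azure-identity and azure-mgmt-resource packages are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

tagged_groups = client.resource_groups.list(
    filter="tagName eq 'service' and tagValue eq 'aks-demo'"
)
for rg in tagged_groups:
    # begin_delete returns a poller; .result() blocks until the group is gone.
    client.resource_groups.begin_delete(rg.name).result()
```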

Delivering Your Code to the Cloud With JFrog Artifactory and GitHub Actions

Introduction

Artifactory and GitHub Actions work great together to manage your deployments to the cloud. In this post, we’re showing an example that deploys an application to Microsoft Azure. This case involves an Azure web app, but the techniques we’re showing today could be used to deploy to any cloud service, virtual machines, or Kubernetes. Also, the application highlighted today is written in Java, but you could use any type of application code in the same way. I’m using Azure web apps because the service has built-in CI/CD integration and integrates easily with JFrog products, including Artifactory and Xray. The IDE I’m using is Visual Studio Code, a free, open-source editor you can get at code.visualstudio.com. Our code is in a GitHub repo, and we're storing some of the dependency artifacts using Artifactory. Here’s a screenshot of the application:

It's a Spring Boot application that shows the Microsoft Developer Advocate mascot, named Bit. You can find the source code in this repo.

Next-Gen Data Pipes With Spark, Kafka, and K8s: Part 2

Introduction 

In our previous article, we discussed two emerging options for building new-age data pipes using stream processing. One option leverages Apache Spark for stream processing, and the other uses a Kafka-Kubernetes combination on any cloud platform for distributed computing. The first approach is reasonably popular, and a lot has already been written about it. However, the second option is catching up in the market, as it is far less complex to set up and easier to maintain. Also, data on the cloud is a natural outcome of the technological drivers prevailing in the market. So, this article will focus on the second approach and see how it can be implemented in different cloud environments.

Kafka-K8s Streaming Approach in Cloud

In this approach, if the number of partitions in the Kafka topic matches the number of pod replicas in the Kubernetes cluster, the pods together form a consumer group and deliver the full advantages of distributed computing. In short: topic partitions = consumer pod replicas.
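To make the idea concrete, here is a minimal consumer sketch (assuming the kafka-python client; the topic, broker address, and group id are placeholders). Every pod replica runs the same code with a shared group id, so Kafka spreads the topic's partitions across the replicas:

```python
# Sketch: each Kubernetes pod replica runs this consumer with the same group_id,
# so together the replicas form one consumer group; requires kafka-python.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                        # hypothetical topic
    bootstrap_servers="kafka:9092",  # hypothetical broker address
    group_id="stream-processors",    # shared by all pod replicas
)

for message in consumer:
    # Each partition is assigned to exactly one replica in the group,
    # so processing is distributed across the pods.
    print(message.value)
```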

Using Azure Load Balancer With CockroachDB

Motivation

The purpose of this tutorial is to provide step-by-step instructions for getting an Azure Load Balancer up and running quickly. Our docs do a great job of covering the CockroachDB portion, but the granular steps to get the load balancer up are missing. Since this is my first foray into managed load balancers, I decided to do the hard work.

High-Level Steps

  • Provision a cluster in Azure
  • Provision a load balancer
  • Test connectivity
  • Clean up

Step by Step Instructions

This article assumes you've set up a Resource Group and an associated Virtual Network in your Azure subscription. This document will walk you through setting up a CockroachDB cluster. With these prerequisites in place, we can continue with setting up a load balancer.

Testing Serverless Functions

Serverless computing, or functions-as-a-service, has picked up a lot of attention and momentum due to its cost-effective pay-as-you-go pricing, multi-language/runtime support, and easy learning curve, with no need to provision the infrastructure layer. All the major cloud providers now have a serverless computing offering as part of their services portfolio: Amazon Web Services has Lambda, Microsoft Azure has Azure Functions, and Google Cloud has Cloud Functions. Furthermore, there are on-prem/on-Kubernetes options for running serverless functions on OpenWhisk or OpenFaaS. For the sake of consistency, I will refer to all of these services as serverless functions throughout the rest of this post.

In a microservices (or even nanoservices, as serverless functions are sometimes known) architecture, there are inherently lots of components, modules, and services that form part of an application or platform. This can make testing a chore, and sometimes a neglected part of the SDLC for these platforms. This article will explore some options and techniques for testing these types of platforms to help make this aspect of your projects easier. Testing should always be a first-class citizen, regardless of the infrastructure. Irrespective of the language, framework, or tools we use, testing is vital to ensure both sustained development velocity and the quality of our deliveries to production. 
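As a simple illustration of the kind of test this implies (the function and test names here are invented for the example), a function handler can be unit-tested by invoking it directly with a fake event, with no cloud deployment involved:

```python
# handler.py -- an AWS-Lambda-style function handler (purely illustrative)
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}


# test_handler.py -- a pytest-style unit test that calls the handler directly
def test_handler_greets_by_name():
    response = handler({"name": "Ada"}, context=None)
    assert response["statusCode"] == 200
    assert response["body"] == "Hello, Ada!"
```

Tests like this cover the business logic; integration tests against the deployed function can then focus on the wiring (triggers, permissions, and configuration).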

The Role of CI/CD Pipeline in Software Development

The CI/CD pipeline encompasses continuous integration, delivery, and deployment. DevOps teams use it to build, test, and release new software automatically. The pipeline supports frequent software changes and a more collaborative, agile team process. You've probably heard about the benefits of CI/CD tools, which are used to deliver code more frequently and reliably. Let's examine what the pipeline is and how it benefits software development.

What Does CI/CD Pipeline Stand For?

CI/CD combines two abbreviations: CI stands for continuous integration, and CD for continuous delivery and deployment. The software development methodology known as Continuous Integration and Continuous Deployment (CI/CD) is based on the idea that incremental code changes are made frequently and consistently. Automated build and test stages triggered by Continuous Integration (CI) ensure that code changes merged into the source repository are trustworthy.

How to Choose a Container Registry: The Top 9 Picks

The invention of the open-source Docker Engine in 2013 resulted in containerization being one of the first steps towards modernizing the process of developing cloud applications. Before the invention of the Docker Engine, you had to configure applications for a specific computer/hardware. The downside of this approach was that it could be time-consuming to move an application from one server to another if the need arose.

But with the launch of the Docker Registry, the longstanding challenge of managing and organizing container images was solved. In fact, the Docker Registry rapidly became the software industry standard. Today, container registries help firms collect, store, and deliver container images for the different phases of their software development process from a central location.

Sample Architecture Using Amazon AWS, Microsoft Azure, Google GCP, MongoDB, and Couchbase

A drawing should have no unnecessary lines and a machine no unnecessary parts. 

                William Strunk Jr., The Elements of Style

In the book Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems, Martin Kleppmann writes about the traits and trade-offs of data infrastructure when designing modern applications. He gives an example architecture for a data system that combines several components. I used this example for the article Example Architectures for Data-Intensive Applications. That article explored just the Couchbase features and functions.