The Future Is Cloud-Native: Are You Ready?

Why Go Cloud-Native?

Cloud-native technologies empower us to build increasingly large and complex systems at scale. Cloud-native is a modern approach to designing, building, and deploying applications that can fully capitalize on the benefits of the cloud. The goal is to allow organizations to innovate swiftly and respond effectively to market demands.

Agility and Flexibility

Organizations often migrate to the cloud for the enhanced agility and speed it offers. The ability to provision thousands of servers in minutes contrasts sharply with the weeks such provisioning typically takes on premises. Immutable infrastructure provides confidence in consistent, secure deployments and helps reduce time to market.

Making Better Decisions in a Busy World

Hit Pause!

I've been on break for the past few days, completely unplugged from work. It's been a time of reflection, diving into fiction, and sometimes simply sitting and doing nothing. I've wandered through various Christmas markets and taken a few spontaneous day trips to nearby towns, enjoying holiday cheer. Sure, at times, I felt physically tired, but never exhausted. It made me realize how crucial it is to hit pause.

In our hectic day-to-day lives, we constantly find ourselves making decisions. From the moment we wake up — deciding what to have for breakfast, whether to travel to the office or work from home, and so on — to big decisions such as where to attend university, which car to buy, or where to live, each carries its own weight. Internet sources claim that we make a whopping 35,000 decisions a day! If we assume an adult sleeps for eight hours, thankfully decision-free, that amounts to more than 2,100 decisions per waking hour, or about three decisions every five seconds.

Choreography Pattern: Optimizing Communication in Distributed Systems

In today's rapidly evolving technology landscape, it's common for applications to migrate to the cloud and embrace the microservice architecture. While this architectural approach offers scalability, reusability, and adaptability, it also presents a unique challenge: effectively managing communication between these microservices. Successfully coordinating messages among these services is a fundamental aspect of their design. Two popular methodologies are available to tackle this challenge. The first, Service Orchestration, was discussed in my previous article. In this article, we will dig into the second: Choreography. We aim to build a comprehensive understanding of the nuances, advantages, and disadvantages that the Choreography methodology brings.

Problem Context

To address the communication challenge, we introduced the concept of an orchestrator: a central authority tasked with coordinating and streamlining the flow of transactions across autonomous microservices. But here's the question that looms large: Is the Orchestration Pattern a universal solution, or does its effectiveness vary depending on the scenario? Can a central orchestrator manage virtually every business problem, or are there scenarios where a different approach is not just preferable but essential?
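
In choreography, by contrast, there is no central coordinator: each service reacts to events and emits new ones, and the overall flow emerges from those reactions. A minimal in-memory sketch of the idea follows; the `EventBus` class, the event names, and the service reactions are illustrative stand-ins, not a real broker API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class ChoreographyDemo {

    // Minimal in-memory event bus; a real system would use a message broker.
    static class EventBus {
        private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

        void subscribe(String eventType, Consumer<String> handler) {
            subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
        }

        void publish(String eventType, String payload) {
            subscribers.getOrDefault(eventType, List.of()).forEach(h -> h.accept(payload));
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // Each "service" reacts to an event and emits the next one;
        // no central coordinator drives the flow.
        bus.subscribe("OrderPlaced", order -> {
            System.out.println("inventory reserved for " + order);
            bus.publish("InventoryReserved", order);
        });
        bus.subscribe("InventoryReserved", order -> {
            System.out.println("payment captured for " + order);
            bus.publish("PaymentCaptured", order);
        });
        bus.subscribe("PaymentCaptured", order ->
            System.out.println("shipment scheduled for " + order));

        bus.publish("OrderPlaced", "order-42");
    }
}
```

Note how the order of operations is encoded in which events each service listens for, rather than in any single component — which is exactly what makes choreography flexible and, at scale, harder to reason about.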

Orchestration Pattern: Managing Distributed Transactions

As organizations migrate to the cloud, they seek to exploit its on-demand infrastructure to scale their applications. But such migrations are usually complex and need established patterns and control points to manage them. In my previous blog posts, I covered a few of the proven designs for cloud applications. In this article, I’ll introduce the Orchestration Pattern (also known as the Orchestrator Pattern) to add to that list. This technique allows the creation of scalable, reliable, and fault-tolerant systems. The approach can help us manage the flow and coordination among components of a distributed system, predominantly in a microservices architecture. Let’s dive into a problem statement to see how this pattern works.

Problem Context

Consider a legacy monolithic retail e-commerce website. This complex monolith consists of multiple subdomains such as shopping baskets, inventory, and payments. When a client sends a request, the website performs a sequence of operations to fulfil it. In this traditional architecture, each operation can be described as a method call. The biggest challenge for the application is scaling with demand, so the organisation decided to migrate it to the cloud. However, the monolithic approach is too restrictive and would limit scaling even in the cloud. A lift-and-shift migration would not reap the real benefits of the cloud.
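
To make the coordination explicit, here is a minimal orchestrator sketch. The step names mirror the subdomains above (basket, inventory, payment), but the stubbed actions are hypothetical stand-ins for real microservice calls:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class OrchestratorDemo {
    public static void main(String[] args) {
        // The orchestrator owns the sequence; each entry stands in for a
        // remote call to a microservice (names are illustrative).
        Map<String, Predicate<String>> steps = new LinkedHashMap<>();
        steps.put("reserve-basket", order -> true);
        steps.put("check-inventory", order -> true);
        steps.put("capture-payment", order -> true);

        String order = "order-7";
        List<String> completed = new ArrayList<>();
        for (Map.Entry<String, Predicate<String>> step : steps.entrySet()) {
            if (!step.getValue().test(order)) {
                // A real orchestrator would run compensating actions here
                // (e.g. release the basket) before reporting failure.
                System.out.println("failed at " + step.getKey());
                return;
            }
            completed.add(step.getKey());
        }
        System.out.println("order fulfilled via " + completed);
    }
}
```

The key design point is that the sequence of operations — formerly a chain of method calls inside the monolith — now lives in one coordinating component, which can retry, time out, or compensate individual steps independently.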

Can I Code Without My Laptop?

Learning Adaptability

A few weeks ago, my laptop crashed during a meeting. It was painful, as I was about to start on an exciting new feature that my Product Owner (PO) had just proposed. I immediately rushed to the IT department for assistance, and they informed me that they needed to take a backup and completely rebuild my laptop. They estimated that the rebuild would take slightly over half a day. Feeling frustrated, I asked myself: “Can I code without my laptop?” In the past, I would have answered ‘no’ without hesitation. But on second thought, I realized that I know my system well and am also familiar with the domain. After more introspection, I recognized that I was already doing it without consciously realizing it. So, I went to my PO and asked him to print the new requirements for me.

From Requirements to Success Criteria

There are numerous factors that a software engineer must consider before writing even a single line of code. First and foremost is understanding the business problem and who the actors are. A clear understanding of the requirements enables you to identify flaws in them or spot contradictions with existing features. You can then break the work down into manageable pieces and think about how to reuse those pieces or determine whether something already exists. This process helps you define the final success criteria.

Scatter Gather Pattern: Designing High-Performing Distributed Systems

In this blog post, I will explore the Scatter-Gather pattern, a cloud scalability design pattern for processing large amounts of data and performing time-consuming, intricate computations. The pattern works like a guide for creating distributed systems that achieve parallelism. The approach can significantly reduce processing times, allowing an online application to cater to as many users as possible. The idea is to break an operation down into independent tasks to achieve an approximately constant response time.

What Is the Scatter-Gather Pattern?

Let us take an example to understand the problem where the pattern can be effective. Suppose an application runs on a single-core processor and takes 64 seconds to process an incoming request and produce a result. If we migrate the same application to a 16-core processor, it can generate the same output in roughly four seconds: the multicore processor spawns sixteen threads that work in parallel to compute the result, with a few extra microseconds spent managing the threads. A four-second response time is good but still considered sluggish for a web application. Upgrading the processor further (vertical scaling) will mitigate the problem, not solve it.
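
The scatter-gather idea above can be sketched with a thread pool: scatter the work into independent chunks, compute partial results in parallel, then gather and combine them. This is a minimal illustration under simplified assumptions (summing an array stands in for the expensive computation; the chunk count of 16 mirrors the 16-core example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class ScatterGatherDemo {
    public static void main(String[] args) throws Exception {
        int[] data = IntStream.rangeClosed(1, 1_000).toArray();
        int chunks = 16; // analogous to the 16 cores in the example

        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        int chunkSize = data.length / chunks;
        List<Future<Long>> partials = new ArrayList<>();
        for (int i = 0; i < chunks; i++) {
            int from = i * chunkSize;
            int to = (i == chunks - 1) ? data.length : from + chunkSize;
            // Scatter: each task computes its slice independently.
            partials.add(pool.submit(() ->
                IntStream.range(from, to).mapToLong(j -> data[j]).sum()));
        }

        // Gather: combine the partial results into the final answer.
        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();

        System.out.println(total); // sum of 1..1000 = 500500
    }
}
```

Because the tasks share no state, the response time is bounded by the slowest chunk plus a small coordination overhead — the "approximately constant response time" the pattern aims for.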

Pipes And Filters Pattern

Applications today collect enormous amounts of data. Many applications need to transform this data before applying any meaningful business logic. Tackling such complex data, or a similarly processor-intensive task, without a thought-through strategy can have a significant performance impact. This article introduces a scalability pattern – pipes and filters – that promotes reusability and is appropriate for such scenarios.

Problem Context

Consider a scenario where incoming data triggers a sequence of processing steps, each bringing the data closer to the desired output state. The origin of the data is referred to as the data source. Examples of data sources include home IoT devices, video feeds from roadside cameras, or continuous inventory updates from warehouses. The processing steps applied during the transformation each execute a specific operation and are referred to as filters. These steps are independent and free of side effects; i.e., running one step does not depend on any other. Each filter reads the data, performs a transforming operation based on local data, and produces an output. Once the data has passed through all the filters, it reaches its final processed stage, where it is consumed by what is referred to as the data sink.
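
The source-filter-sink flow can be sketched with composed functions, where each filter is an independent, side-effect-free transformation. The specific filters here (trimming, lowercasing, digit redaction) and the sample inputs are purely illustrative:

```java
import java.util.List;
import java.util.function.Function;

public class PipesAndFiltersDemo {
    public static void main(String[] args) {
        // Each filter is a small, independent transformation.
        Function<String, String> trim = String::trim;            // filter 1
        Function<String, String> lower = String::toLowerCase;    // filter 2
        Function<String, String> redact =
            s -> s.replaceAll("\\d", "#");                       // filter 3

        // The "pipe": filters composed in sequence.
        Function<String, String> pipeline = trim.andThen(lower).andThen(redact);

        // Data source: illustrative raw readings.
        List<String> source = List.of("  Sensor42 ", " CAM007  ");

        // Data sink: here, just stdout.
        source.stream().map(pipeline).forEach(System.out::println);
    }
}
```

Because each filter knows nothing about its neighbours, filters can be reordered, reused in other pipelines, or scaled out independently — which is the reusability benefit the pattern promises.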

Load Balancing Pattern

Any modern website on the internet today receives thousands of hits, if not millions. Without a scalability strategy, the website will either crash or significantly degrade in performance—a situation we want to avoid. Adding more powerful hardware, i.e. scaling vertically, will only delay the problem. However, adding multiple servers, i.e. scaling horizontally, without a well-thought-through approach may not reap the benefits to their full extent.

The recipe for creating a highly scalable system in any domain is to use proven software architecture patterns. Software architecture patterns enable us to create cost-effective systems that can handle billions of requests and petabytes of data. This article describes the most basic and popular scalability pattern, known as load balancing. The concept of load balancing is essential for any developer building a high-volume traffic site in the cloud. The article first introduces the load balancer, then discusses the types of load balancers; next comes load balancing in the cloud, followed by open-source options and, finally, a few pointers for choosing a load balancer.
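
As a taste of the core idea, here is a minimal round-robin load balancer sketch. The server names are illustrative, and a real load balancer would also handle health checks, weights, and connection draining:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    String pick() {
        // Cycle through servers; Math.floorMod keeps the index valid
        // even after the counter overflows.
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb =
            new RoundRobinBalancer(List.of("app-1", "app-2", "app-3"));
        // Six requests cycle through the three servers twice.
        for (int r = 0; r < 6; r++) System.out.println(lb.pick());
    }
}
```

Round robin is only one strategy; least-connections and consistent hashing are common alternatives when request costs or session affinity vary.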

A Stress-Free 2022

There are ample resources available online for dealing with burnout in your job, but I want to share a few ideas that I will be using this year. The WHO says: “Burn-out is a syndrome conceptualized as resulting from chronic workplace stress that has not been successfully managed.” You know it is happening to you when a part of your inner self is screaming that you have had enough. Perhaps you want to say it out loud, too. IT jobs seem to have a high percentage of burnout, and Electric reports that it is rising.

January is the month when employees return from a festive break to start afresh and tackle new challenges. However, IT professionals return to an unfinished backlog, pushed deadlines, and overcommitted sprints from December. Long hours, growing to-do lists, work-setup issues, countless Slack messages, and remote teams all contribute to stress. We all face it. Long hours and weekend work are among the biggest causes of burnout: teams work throughout the week, release during the nights, provide support over the weekends, and the cycle continues.

What Is Microbenchmarking?

Introduction

Optimisation of code is an endless struggle. It is often hard even to produce meaningful metrics on the JVM, as it is an adaptive virtual machine. This article covers:

  • a brief introduction to microbenchmarking,
  • why microbenchmark,
  • when to consider it, and finally,
  • pitfalls to avoid.

What Is a Microbenchmark?

A microbenchmark is an attempt to measure the performance of a small unit of code. The tests are usually in the sub-millisecond range. They can help determine how the code will behave when released into production, and they serve as a guide to a better implementation.
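
To illustrate the shape of a microbenchmark, here is a naive hand-rolled sketch with a warm-up phase. This is for illustration only; in practice a harness such as JMH should be used, because the JVM's JIT compilation and dead-code elimination can easily invalidate hand-rolled timings like these:

```java
public class NaiveBenchmark {
    // The small unit of code under test (an illustrative workload).
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;

        // Warm-up: give the adaptive JIT a chance to compile the hot method
        // before we start timing.
        for (int i = 0; i < 10_000; i++) sink += work(1_000);

        int iterations = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) sink += work(1_000);
        long elapsed = System.nanoTime() - start;

        // Print 'sink' so the JIT cannot eliminate the loop as dead code.
        System.out.println("avg ns/op: " + (elapsed / iterations)
            + " (sink=" + sink + ")");
    }
}
```

Even with the warm-up and the sink, this sketch ignores on-stack replacement, GC pauses, and run-to-run variance — exactly the pitfalls a proper harness is built to control.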

Scala: Sorting Lists of Objects

One of the most common ADTs (abstract data types) that a developer uses in day-to-day coding is the List, and one of the most common operations performed on a list is sorting it by given criteria. In this article, I will focus on sorting a list of objects in Scala.

Mainly, there are two ways of sorting a list in Scala, i.e.