Fast Spring Boot AWS Lambdas with GraalVM

In my previous blog post, I documented how to take a Java Spring Boot application and convert it into a serverless function that can run in AWS Lambda.

Anyone who's done this before knows that cold starts are a big downside - Java and Spring Boot are not known for their speedy startup times, and a typical full-fat Spring Boot-converted Lambda can take anywhere from 10 to 90 seconds to start, depending on how much memory and CPU you allocate to it. This may force you to over-provision your functions to compensate for the cold starts, but that's a pretty expensive sledgehammer. There is always provisioned concurrency, but that doesn't work out much cheaper either (and it negates the responsive scalability of Lambda, as you have to anticipate how many instances you'll need in advance).
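To ground what a "converted lambda" means here: one common route (not necessarily the exact setup from the earlier post) is to expose the Spring Boot application as a plain function bean via Spring Cloud Function and wire it to Lambda through the spring-cloud-function-adapter-aws artifact. A minimal sketch:

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// Minimal Spring Cloud Function application. With the
// spring-cloud-function-adapter-aws dependency on the classpath, AWS Lambda
// invokes the function bean below with the request payload as its input.
@SpringBootApplication
public class UppercaseApplication {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseApplication.class, args);
    }

    @Bean
    public Function<String, String> uppercase() {
        return input -> input.toUpperCase();
    }
}
```

Everything Spring does at startup before that first invocation is where the cold-start seconds go, which is the cost the GraalVM native-image approach in this article aims to eliminate.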

Trace-Based Testing with OpenTelemetry: Meet Open Source Malabi

By Yuri Shkuro, creator and maintainer of Jaeger, and , Co-Founder & CTO of Aspecto.

If you deal with distributed applications at scale, you probably use tracing. And if you use tracing data, you already realize its crucial role in understanding your system and the relationships between system components, as many software issues are caused by failed interactions between these components.
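As a reminder of where that trace data comes from, here is a minimal, illustrative sketch of emitting a span with the OpenTelemetry Java API; the service and span names are made up, and the idea behind trace-based testing tools such as Malabi is to assert against telemetry like this.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class CheckoutInstrumentation {

    // Tracer handle; in a real service this is configured by the OpenTelemetry SDK or agent.
    private static final Tracer tracer =
            GlobalOpenTelemetry.getTracer("checkout-service");

    // Wraps one unit of work in a span so the interaction shows up in the trace.
    public void processOrder(String orderId) {
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope scope = span.makeCurrent()) {
            span.setAttribute("order.id", orderId);
            // ... call downstream services, write to the database, etc.
        } finally {
            span.end();
        }
    }
}
```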

Use Ketch to Deploy Apps on Kubernetes Without YAML

Ketch, a relatively new open source project from application-as-code platform Shipa, offers a simple command-line interface that developers can use to deploy and manage applications on any Kubernetes cluster without writing YAML configuration files.

Kubernetes is the ubiquitous standard for orchestrating containerized and microservices-based applications. However, operating Kubernetes requires developers to overcome a rather steep learning curve. Running Kubernetes successfully means gaining expertise in Kubernetes concepts and objects, and in how to write and manage YAML files. Ketch eliminates much of that complexity by deploying applications directly to Kubernetes clusters, rendering YAML files unnecessary and easing entry for developers.

Microservice Testing Strategies

Introduction

Microservices are distributed by nature, and any architecture can involve many of them. A single microservice typically touches several components: it may consume events from ActiveMQ or a Kafka topic, save the data to a database (RDBMS or NoSQL), and then produce a new, enriched event to another Kafka topic for downstream services to consume, or invoke a separate RESTful service altogether. Writing meaningful test cases in such an architecture is not straightforward. This article focuses on a possible testing strategy for cleanly separating the various test cases in the application.

Testing Strategy

Consider a pyramid with three layers: L1, L2, and L3. L1 is the first layer, L2 the second, and L3 the third. Any microservice architecture should have three corresponding types of test cases, categorized as L1, L2, and L3. These layers are only a visualization aid; don't try to create packages for each of them. Let's understand them in detail.
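The excerpt doesn't spell out what each layer contains, but if L1 is read as the familiar base of the pyramid, i.e. fast, isolated tests with collaborators mocked, an L1-style test might look like the following sketch (the class names and behavior are hypothetical, shown with JUnit 5 and Mockito):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical L1-style test: the service is exercised in isolation,
// with its repository collaborator replaced by a mock (no broker, no database).
class OrderServiceTest {

    // Minimal stand-ins for the production types under test.
    interface OrderRepository {
        String save(String order);
    }

    static class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        String process(String order) {
            return repository.save(order + "-enriched");
        }
    }

    @Test
    void enrichesOrderBeforeSaving() {
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.save("order-42-enriched")).thenReturn("order-42-enriched");

        OrderService service = new OrderService(repository);

        assertEquals("order-42-enriched", service.process("order-42"));
    }
}
```

Higher layers would then presumably bring in the real Kafka topics, database, and downstream REST calls described in the introduction.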

Microservices Patterns: Sidecar

A properly designed microservice should follow the Single Responsibility Principle, so it's important to segregate common functionality that other services in the architecture can reuse. The Sidecar Pattern increases modularity by identifying the common functionality in each service and either bundling it in a library or moving it to a separate service.

As the name suggests, a sidecar is a one-wheeled carriage attached to the side of a motorcycle or scooter. Similarly, the Sidecar Pattern advocates separating cross-cutting concerns from the actual service and pushing them into a separate module, library, or service, where they can then be reused by other services in the architecture.

gRPC for .NET: Creating a gRPC Server Application

While working with the Protobuf message format for one of my client projects, I recently came across the gRPC framework. After doing some analysis, I found gRPC pretty promising for inter-service communication, particularly in microservices architectures. gRPC is a language-agnostic, high-performance Remote Procedure Call (RPC) framework.

gRPC is built on top of the HTTP/2 transport layer and therefore can support four types of gRPC methods (unary, client streaming, server streaming, and bi-directional streaming). It uses Protobuf for message exchange.
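The article builds its server in .NET, but the shape is the same in any binding. As a rough illustration, here is the unary case in Java with grpc-java, assuming GreeterGrpc, HelloRequest, and HelloReply stubs have been generated by protoc from the canonical helloworld.proto used in the gRPC examples; the streaming method types differ mainly in whether requests, responses, or both flow through a StreamObserver.

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

public class HelloWorldServer {

    // Unary method: one request in, one response out.
    static class GreeterImpl extends GreeterGrpc.GreeterImplBase {
        @Override
        public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
            HelloReply reply = HelloReply.newBuilder()
                    .setMessage("Hello " + request.getName())
                    .build();
            responseObserver.onNext(reply);
            responseObserver.onCompleted();
        }
    }

    public static void main(String[] args) throws Exception {
        // Listens on port 50051 and serves the Greeter service over HTTP/2.
        Server server = ServerBuilder.forPort(50051)
                .addService(new GreeterImpl())
                .build()
                .start();
        server.awaitTermination();
    }
}
```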

Is Today’s Microservice More Bloated than Yesterday’s Monolith?

I am slightly hesitant to write this post, as it might attract some criticism. Nevertheless, I told myself there is nothing wrong with sharing my point of view (even though it might not be well received). In this post, I would like to share my personal experience with yesterday's monolithic and today's microservice architectures.

Yesterday’s Monolithic Application

Twenty years back, I was starting my career at a large financial corporation in North America. The institution's middleware platform ran on CORBA and C++. Since CORBA was nearing extinction around that time, the technology vendor decided to stop supporting it. Running a mission-critical middleware platform on unsupported technology was a huge risk for the enterprise, so management decided to port the middleware application to a platform-independent technology stack: "SOAP" and "Java," the cool kids of the time.

Observability: Let Your IDE Debug for You

Current events have brought an even stronger push by many enterprises to scale operations across cloud-native and distributed environments. To survive and thrive, companies must now seriously look at cloud-native technologies—such as API management and integration solutions, cloud-native products, integration platform as a service (iPaaS), and low-code platforms—that are easy to use, accelerate time to market, and enable re-use and sharing. However, due to their distributed nature, these cloud-native applications have a higher level of management complexity, which increases as they scale.

Building observability into applications allows teams to automatically collect and analyze data about applications. Such analysis allows us to optimize applications and resolve issues before they impact users. Furthermore, it significantly reduces the debugging time of issues that occur in applications at runtime. This allows developers to focus more on productive tasks, such as implementing high-impact features. 
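The article's angle is the IDE and tooling side, but as a generic, illustrative example of what "building observability into applications" can look like in code, here is a small Micrometer sketch; the meter names and the payment scenario are made up.

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class PaymentMetrics {

    private final Counter failedPayments;
    private final Timer paymentLatency;

    public PaymentMetrics(MeterRegistry registry) {
        // Counters and timers like these are what an observability backend
        // aggregates, graphs, and alerts on before users notice a problem.
        this.failedPayments = registry.counter("payments.failed");
        this.paymentLatency = registry.timer("payments.latency");
    }

    // Times every payment and counts the ones that fail.
    public void recordPayment(Runnable payment) {
        try {
            paymentLatency.record(payment);
        } catch (RuntimeException e) {
            failedPayments.increment();
            throw e;
        }
    }

    public static void main(String[] args) {
        PaymentMetrics metrics = new PaymentMetrics(new SimpleMeterRegistry());
        metrics.recordPayment(() -> { /* call the payment provider here */ });
    }
}
```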

Writing Logs Into Elastic With NLog, ELK, and .NET 5.0

If you are using a microservice-based architecture, one of the challenges is integrating and monitoring application logs from the different services, along with the ability to search this data by message string, source, and so on.

So, What Is The ELK Stack?

"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.

Top 10 Low-Code Articles

Introduction

Creating a business or personal website with little to no technical skill is now easier than you ever imagined. Low/no-code has been brewing amongst us for quite some time. How awesome would it be to find the top trending articles in one place so that you can always stay up to date with the latest trends in technology? We dug into Google Analytics to find the top 10 most popular low/no-code articles at DZone. Let's get started!

10. Can Low-Code Really Solve the Problem of Technical Debt?

Technical debt is a topic of debate when developing any application: should the team follow the guidelines and maintain code quality, or take a quicker route to delivery and results? Code cannot be called quality code unless all of its debt is settled. As low-code allows developers to create applications with minimal code, can it resolve the problem of technical debt? Read this article to learn more about low-code and whether it is a way to overcome technical debt.

How To Implement and Design Twitter Search Backend Systems using Java Microservices?

Twitter is one of the largest and most important social networking services, where users can share photos, news, and text-based messages. In this article, I explain how to design a service that will store and search user tweets.

What do we mean by Twitter Search, and how does this functionality work?

Twitter users can update their status whenever they want, irrespective of time. Each status, or tweet, consists of a plain text string, and I intend to design a system that allows searching over all user tweets. In this blog, the focus is on the tweet search functionality.
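The classic building block for this kind of search is an inverted index that maps each word to the IDs of the tweets containing it. The sketch below is a toy, in-memory illustration of the idea only; a production design would shard and persist this index, and the class and method names here are made up.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy in-memory inverted index: word -> set of tweet IDs containing that word.
public class TweetSearchIndex {

    private final Map<String, Set<Long>> index = new HashMap<>();
    private final Map<Long, String> tweets = new HashMap<>();

    public void addTweet(long tweetId, String text) {
        tweets.put(tweetId, text);
        for (String word : text.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                index.computeIfAbsent(word, w -> new HashSet<>()).add(tweetId);
            }
        }
    }

    // Returns the text of every tweet containing the given word.
    public List<String> search(String word) {
        List<String> results = new ArrayList<>();
        for (long id : index.getOrDefault(word.toLowerCase(), Set.of())) {
            results.add(tweets.get(id));
        }
        return results;
    }

    public static void main(String[] args) {
        TweetSearchIndex searchIndex = new TweetSearchIndex();
        searchIndex.addTweet(1L, "Designing a Twitter search backend");
        searchIndex.addTweet(2L, "Microservices make search scalable");
        System.out.println(searchIndex.search("search")); // prints both tweets
    }
}
```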

Checklist for API Verification

Microservices are a designer's way to address the complexity of today's applications. The verification and deployment of these applications, however, often get lost in the implementation, as the same thought process is rarely communicated in the real world.

A web application that could once be decomposed into a Model-View-Controller architecture has morphed into the world of Unstructured Data <-> Microservice <-> Frontend. The ability to have a configurable backend (a datastore as opposed to a DB) and a flexible frontend makes the microservice API the premier channel of data interchange.

A Java developer’s guide to Quarkus

Serverless architecture has become an efficient way to avoid overprovisioning and underprovisioning resources (e.g., CPU, memory, disk, networking) by aligning them with actual workloads, whether on physical servers, virtual machines, or cloud environments. Yet Java developers face a concern when choosing a language for serverless applications: Java frameworks seem too heavyweight and slow for serverless deployment on the cloud, especially on Kubernetes.

What if you, as a Java developer, could keep using the Java framework you know to build traditional cloud-native microservices as well as new serverless functions at the same time? This approach should be exciting, since you wouldn't have to worry about a steep learning curve for a new serverless application framework.
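For a taste of what that looks like, here is a minimal sketch of the kind of REST resource a Quarkus project starts from; the path and message are arbitrary, and on newer Quarkus versions the javax.ws.rs imports become jakarta.ws.rs. The same codebase can then be compiled to a GraalVM native executable for fast-starting serverless deployment.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A minimal JAX-RS resource. Quarkus runs it as a fast-starting JVM
// application or, via GraalVM, as a native executable.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from Quarkus";
    }
}
```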

Microservices on AWS: Part 2 [Video]

Introduction 

In this AWSome Pipeline tutorial, I will deploy a Spring Boot microservice to the AWS Cloud using the different CI/CD tools provided by AWS. We will create the different IAM roles needed and then set up the AWS pipeline to continuously deliver software changes to our EC2 instances. I will walk you through the steps involved, from uploading your code to GitHub and checking it out in the pipeline's source stage, to building it with AWS CodeBuild, and then deploying the generated artifact to your target Auto Scaling group with AWS CodeDeploy. We will then create a new version of the application and demo how the AWS pipeline can deploy those changes to our environment seamlessly.

Source code can be downloaded from the GitHub repository.

Chaos Engineering Makes Disciplined Microservices

Chaos and discipline: these two words are an oxymoron, you might be thinking. How can chaos make disciplined microservices?

But the universal truth is that discipline means the absence of chaos, so until you have experienced chaos, you cannot be disciplined.