Benefits of Setting Initial and Maximum Memory Size to the Same Value

When we launch applications, we specify the initial memory size and maximum memory size. For applications that run on the JVM (Java Virtual Machine), the initial and maximum memory sizes are specified through the "-Xms" and "-Xmx" arguments. If Java applications are running in containers, they are specified through the "-XX:InitialRAMPercentage" and "-XX:MaxRAMPercentage" arguments. Most enterprises set the initial memory size to a lower value than the maximum memory size. As opposed to this commonly accepted practice, setting the initial memory size to the same value as the maximum memory size has certain "cool" advantages. Let’s discuss them in this post.

1. Availability

Suppose you are launching your application with an initial heap size of 2GB and a maximum heap size of 24GB. This means that when the application starts, the operating system will allocate 2GB of memory to your application. From that point, as the application starts to process new requests, additional memory will be allocated until the maximum of 24GB is reached.
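This startup behavior can be observed from inside the application itself. The snippet below is a minimal sketch (not from the original post) that prints the currently committed heap and the maximum heap using the standard Runtime API; launched with, say, -Xms2g -Xmx24g the two numbers differ at startup, whereas with -Xms24g -Xmx24g they match from the very first request.

    // Minimal sketch (not from the original article): inspect the heap sizes
    // the JVM was actually given. Run with "-Xms2g -Xmx24g" and then with
    // "-Xms24g -Xmx24g" to see the difference at startup.
    public class HeapSizeCheck {
        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // totalMemory(): heap currently committed to the JVM (driven by -Xms at startup)
            System.out.println("Committed heap : " + (runtime.totalMemory() / mb) + " MB");
            // maxMemory(): upper bound the heap can grow to (driven by -Xmx)
            System.out.println("Maximum heap   : " + (runtime.maxMemory() / mb) + " MB");
        }
    }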

Memory Leak Due To Improper Exception Handling

In this post, let’s discuss an interesting memory problem we confronted in the production environment and how we went about solving it. This application would take traffic for a few hours; after that, it would become unresponsive. It wasn’t clear what was causing the unresponsiveness in the application.
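The excerpt doesn’t show the offending code, but a leak of this kind typically follows a recognizable shape: an object is registered somewhere long-lived, and the code that should unregister it is skipped because an exception is thrown and swallowed along the way. The sketch below is a hypothetical illustration of that pattern (the RequestTracker class and its methods are invented for this example), not the actual production code discussed in the post.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical illustration of a leak caused by improper exception handling;
    // not the actual application code discussed in the post.
    public class RequestTracker {
        // Long-lived, static map: anything left here is never garbage collected.
        private static final Map<String, byte[]> IN_FLIGHT = new ConcurrentHashMap<>();

        public void handle(String requestId, byte[] payload) {
            IN_FLIGHT.put(requestId, payload);   // register the request
            try {
                process(payload);                // may throw
                IN_FLIGHT.remove(requestId);     // cleanup only on the happy path
            } catch (RuntimeException e) {
                // Exception is swallowed and cleanup never runs: the entry stays
                // in IN_FLIGHT forever and the heap slowly fills up.
            }
            // Proper handling would remove the entry in a finally block instead.
        }

        private void process(byte[] payload) {
            // ... business logic that can throw ...
        }
    }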

Technology Stack

This application was running on the AWS cloud in r5a.2xlarge EC2 instances. It was a Java application running on an Apache Tomcat server using the Spring framework. It was also using other AWS services like S3 and Elastic Beanstalk. This application had a large heap size (i.e., -Xmx) of 48GB.

Large or Small Heap Size? [Video]

Do you want to run your application with a large heap size or a small heap size? What is the right strategy? Which strategy is more expensive? Which is more performant? Watch this video to learn more!

Chaos Engineering – StackOverflow Error

In our series of chaos engineering articles, we have been learning to simulate various performance problems. In this post, let’s discuss how to simulate a StackOverflowError, which is a runtime error: we’ll simulate it, diagnose it, and solve the problem.

Sample Program

Here is a sample program from the open source BuggyApp application that generates a java.lang.StackOverflowError.
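The original BuggyApp listing isn’t reproduced in this excerpt; the following is a minimal sketch of what such a program could look like, triggering java.lang.StackOverflowError through recursion that never terminates.

    // Minimal sketch (not the actual BuggyApp source): each call to demo()
    // pushes a new frame onto the thread's stack and never returns, so the
    // stack eventually overflows and the JVM throws java.lang.StackOverflowError.
    public class StackOverflowDemo {
        public static void demo(int depth) {
            System.out.println("Recursion depth: " + depth);
            demo(depth + 1); // no termination condition
        }

        public static void main(String[] args) {
            demo(1);
        }
    }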

Shallow Heap vs. Retained Heap [Video]

Eclipse MAT is a powerful tool for analyzing heap dumps. It comes in handy when debugging an OutOfMemoryError. Eclipse MAT reports two types of object size: Shallow Heap and Retained Heap. This video explains the difference between them and how they are calculated. Watch this video to learn more!
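As a rough illustration of the difference (not taken from the video): consider a tiny object that is the only thing keeping a large array alive. Its shallow heap is just its own header plus one reference, while its retained heap includes everything that would become unreachable if the object itself were collected. The Holder class below is invented for this example.

    // Invented example: shallow vs. retained heap of a small object holding a big array.
    public class ShallowVsRetained {

        static class Holder {
            // The Holder instance itself is tiny: its *shallow* heap is roughly
            // the object header plus one reference field.
            int[] payload = new int[10_000_000]; // ~40 MB, reachable only through this Holder
        }

        public static void main(String[] args) throws InterruptedException {
            Holder holder = new Holder();
            System.out.println("Created holder with " + holder.payload.length + " ints");
            // In Eclipse MAT, 'holder' would show a shallow heap of a few bytes but a
            // retained heap of roughly 40 MB, because collecting 'holder' would also
            // free the int[] it exclusively references.
            Thread.sleep(60_000); // keep it alive long enough to capture a heap dump
        }
    }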

Heap Dump With Lots of ‘Unresolved Name’ Objects

If you’re familiar with Java as a programming language, you may have come across the following message: java.lang.OutOfMemoryError: Java heap space. We recently got that message in one of the services that we’re currently working on. To better understand why this happens, it’s good to capture a Java heap dump for further analysis.

After parsing the heap dump in both Eclipse MAT and VisualVM, I noticed something strange. My heap dump looked obfuscated and showed lots of objects named ‘Unresolved Name 0x’.
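For completeness, one common way to obtain such a heap dump is to let the JVM write it automatically when the OutOfMemoryError occurs, using the standard -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath flags. The class below is a small, invented reproduction harness, not the service described in the post.

    import java.util.ArrayList;
    import java.util.List;

    // Invented reproduction harness: run with
    //   java -Xmx64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./oom.hprof OomDemo
    // and the JVM writes oom.hprof automatically when the heap is exhausted.
    public class OomDemo {
        public static void main(String[] args) {
            List<byte[]> chunks = new ArrayList<>();
            while (true) {
                chunks.add(new byte[1024 * 1024]); // keep 1 MB chunks strongly reachable
            }
        }
    }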

What Are Garbage Collection Logs, Thread Dumps, and Heap Dumps?

The Java Virtual Machine (JVM) generates three critical artifacts that are useful for optimizing performance and troubleshooting production problems. Those artifacts are:

  1. Garbage collection (GC) log
  2. Thread dump
  3. Heap dump

In this article, let's try to understand these three critical artifacts: where to use them, what they look like, how to capture them, how to analyze them, and their differences.
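As a quick preview of the capture step, the sketch below (not from the original article) shows one way to grab a thread dump and a heap dump programmatically from inside the JVM, with the usual launch flags and command-line tools noted in comments; GC logging itself can only be enabled at startup.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Sketch (not from the original article): capturing the three artifacts.
    public class CaptureArtifacts {
        public static void main(String[] args) throws Exception {
            // 1. GC log: enabled via launch flags, e.g.
            //      Java 9+ : -Xlog:gc*:file=gc.log
            //      Java 8  : -XX:+PrintGCDetails -Xloggc:gc.log

            // 2. Thread dump: programmatic capture via ThreadMXBean
            //    (externally: "jstack <pid>" or "kill -3 <pid>").
            for (ThreadInfo info : ManagementFactory.getThreadMXBean()
                                                    .dumpAllThreads(true, true)) {
                System.out.print(info);
            }

            // 3. Heap dump: programmatic capture via HotSpotDiagnosticMXBean
            //    (externally: "jmap -dump:live,format=b,file=heap.hprof <pid>").
            HotSpotDiagnosticMXBean diagnostic =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            diagnostic.dumpHeap("heap.hprof", true); // true = dump only live objects
        }
    }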

The Best Way to Optimize Garbage Collection Is NOT By Tuning It

The best way to optimize garbage collection is NOT by tuning it.

When asked, "What would you do if your Java app experiences long GC pauses?" most people would answer: "increase the heap size and/or tune the garbage collector." That works in many, but not all, situations. The heap may already be close to the total memory available. And GC tuning, beyond a few of the most obvious and efficient flags, often becomes a black art where each new change brings a smaller improvement. Worse, hyper-tuning the GC makes it tightly coupled with your current heap size, number of CPUs, and workload patterns.

Need help choosing the right GC? Check out this post!

If any of these components changes in the future, the GC may suddenly perform much worse. And (what a surprise!) at that point, it may be really hard to remember why each of the flags in a combination like the one below is there, and what its effect was supposed to be: