Benefits of Setting Initial and Maximum Memory Size to the Same Value

When we launch applications, we specify the initial memory size and the maximum memory size. For applications that run on the JVM (Java Virtual Machine), these are specified through the "-Xms" and "-Xmx" arguments. If Java applications are running in containers, they are specified through the "-XX:InitialRAMPercentage" and "-XX:MaxRAMPercentage" arguments. Most enterprises set the initial memory size to a lower value than the maximum memory size. As opposed to this commonly accepted practice, setting the initial memory size to the same value as the maximum memory size has certain "cool" advantages. Let's discuss them in this post.
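On the command line, the two styles of configuration look like this (app.jar is a placeholder name, and the sizes and percentages are illustrative):

    # Standalone JVM: absolute initial and maximum heap sizes set to the same value
    java -Xms2g -Xmx2g -jar app.jar

    # Containerized JVM: heap sized as a percentage of the container's memory limit
    java -XX:InitialRAMPercentage=75.0 -XX:MaxRAMPercentage=75.0 -jar app.jar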

1. Availability

Suppose you are launching your application with the initial heap size set to 2GB and the maximum heap size set to 24GB. This means that when the application starts, the JVM will request 2GB of memory from the operating system. From that point on, as the application processes new requests, the JVM will allocate additional memory until the 24GB maximum is reached.
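Such a configuration would be passed on the command line like this (app.jar is again a placeholder):

    java -Xms2g -Xmx24g -jar app.jar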

Chaos Engineering: Thread Leak

In this series of chaos engineering articles, we have been learning to simulate various performance problems. In this post, let's discuss how to simulate thread leaks. 'java.lang.OutOfMemoryError: unable to create new native thread' is thrown when the application tries to create more threads than the device's memory capacity can accommodate. When this error is thrown, it disrupts the application's availability.

Sample Program

Here is a sample program, modeled on the open source BuggyApp application, that keeps creating threads indefinitely.
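The original listing is not reproduced in this excerpt, so below is a minimal sketch of such a program; the class name ThreadLeakDemo is hypothetical, and the actual BuggyApp source may differ:

    public class ThreadLeakDemo {

        public static void main(String[] args) {
            // Keep spawning threads that never terminate. Each native thread
            // consumes stack memory, so this eventually exhausts the memory
            // available for thread creation.
            while (true) {
                new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // park the thread forever
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }

On most machines, this ends with 'java.lang.OutOfMemoryError: unable to create new native thread' once the operating system can no longer allocate stack space for new threads (though some systems will hit a process thread limit first).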

Load Average: An Indicator for Only CPU Demand?

'Load Average' is an age-old metric reported in various operating systems. It is often assumed to be a metric that indicates CPU demand only. However, that is not the case. 'Load Average' indicates not only CPU demand, but also I/O demand (i.e., network read/write, file read/write, disk read/write). To prove this theory, we conducted a simple case study.

Load Average Study

To validate this theory, we leveraged BuggyApp. BuggyApp is an open-source Java project that can simulate various performance problems. When launched with the appropriate arguments, BuggyApp will cause high disk I/O operations on the host.
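If you don't have BuggyApp handy, a small Java program along the following lines can produce a similar effect; this is a hypothetical stand-in, not BuggyApp's implementation:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class DiskIOSimulator {

        public static void main(String[] args) throws IOException {
            byte[] chunk = new byte[8 * 1024 * 1024]; // 8 MB per write

            // Repeatedly write and delete a temporary file. DSYNC forces each
            // write to be flushed to the device instead of sitting in the
            // operating system's page cache, which keeps the disk busy.
            while (true) {
                Path tmp = Files.createTempFile("disk-io-", ".dat");
                Files.write(tmp, chunk, StandardOpenOption.WRITE, StandardOpenOption.DSYNC);
                Files.delete(tmp);
            }
        }
    }

While this runs, the load average climbs even if CPU usage stays modest, because on Linux, threads blocked on disk I/O count toward the load average.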

Different CPU Times: Unix/Linux ‘top’

CPU consumption in Unix/Linux operating systems is studied using eight different metrics: user CPU time, system CPU time, nice CPU time, idle CPU time, waiting (I/O wait) CPU time, hardware interrupt CPU time, software interrupt CPU time, and stolen CPU time. Let's review each of these CPU times in this article.
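The 'top' utility reports all eight metrics on its '%Cpu(s)' line using two-letter abbreviations; a representative line looks like this (the numbers will vary from system to system):

    %Cpu(s):  1.3 us,  0.5 sy,  0.0 ni, 97.8 id,  0.3 wa,  0.0 hi,  0.1 si,  0.0 st

Here 'us' is user, 'sy' is system, 'ni' is nice, 'id' is idle, 'wa' is I/O wait, 'hi' is hardware interrupts, 'si' is software interrupts, and 'st' is stolen time.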

User CPU Time and System CPU Time

In order to understand 'user' CPU time, one should understand 'system' CPU time as well, since they go hand in hand. User CPU time is the amount of time the processor spends running your application code. System CPU time is the amount of time the processor spends running the operating system (i.e., kernel) functions invoked on behalf of your application. Say your application is manipulating the elements in an array; that work is accounted as 'user' CPU time. Say your application is making network calls to external applications. To make network calls, it has to read/write data into socket buffers, which is part of the operating system code. That work is accounted as 'system' CPU time. To learn how to resolve high 'user' CPU time, refer to this article. To learn how to resolve high 'system' CPU time, refer to this article.
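To see the split on a live JVM, you can use java.lang.management's ThreadMXBean, which reports the current thread's total and user-mode CPU time (the system portion is the difference). The following is a minimal sketch; the exact numbers will depend on your OS and hardware:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class CpuTimeDemo {

        public static void main(String[] args) throws Exception {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();

            // Pure computation: this loop runs entirely in application code,
            // so the time it consumes is accounted as 'user' CPU time.
            long sum = 0;
            for (long i = 0; i < 200_000_000L; i++) {
                sum += i;
            }

            // File writes go through kernel code (system calls), so much of
            // the time spent here is accounted as 'system' CPU time.
            Path tmp = Files.createTempFile("cputime-", ".dat");
            byte[] chunk = new byte[64 * 1024];
            for (int i = 0; i < 2_000; i++) {
                Files.write(tmp, chunk);
            }
            Files.delete(tmp);

            long cpuNanos = bean.getCurrentThreadCpuTime();   // user + system
            long userNanos = bean.getCurrentThreadUserTime(); // user only
            System.out.printf("user=%d ms, system=%d ms (sum=%d)%n",
                    userNanos / 1_000_000,
                    (cpuNanos - userNanos) / 1_000_000,
                    sum);
        }
    }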

Deep Dive Into .NET Garbage Collection

Garbage collection, and memory management in general, will be among the first and last things you work on when tuning a .NET application. It is the main source of the most obvious performance problems, the ones that are the quickest to fix but that need to be monitored constantly. Many problems are actually caused by an incorrect understanding of the garbage collector's behavior and expectations.

You have to think about memory performance just as much as you think about CPU performance. This is also true for unmanaged code, but in .NET memory is easier to deal with. Once you understand how the GC works, it becomes clear how to optimize your program for its operation. In many cases, the GC can give you better overall heap performance than manual memory management because it deals with allocation and fragmentation better.

Optimizing Memory Access With CPU Cache

Nowadays, developers pay less attention to performance because hardware has become cheaper and more powerful. However, with some basic knowledge of how the CPU and memory work, developers can avoid simple mistakes and improve the performance of their code with little effort.
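A classic illustration of this (not necessarily the example the original article builds toward) is traversing a two-dimensional array in row-major versus column-major order. Row-major access walks memory sequentially and makes full use of each cache line it fetches, while column-major access jumps across memory and misses the cache far more often:

    public class CacheLocalityDemo {

        private static final int N = 4096;

        public static void main(String[] args) {
            int[][] matrix = new int[N][N];

            // Row-major traversal: accesses are sequential in memory,
            // so each cache line fetched is fully used.
            long start = System.nanoTime();
            long sum = 0;
            for (int row = 0; row < N; row++) {
                for (int col = 0; col < N; col++) {
                    sum += matrix[row][col];
                }
            }
            System.out.printf("row-major:    %d ms%n",
                    (System.nanoTime() - start) / 1_000_000);

            // Column-major traversal: each access jumps a whole row ahead,
            // touching a different cache line almost every time.
            start = System.nanoTime();
            for (int col = 0; col < N; col++) {
                for (int row = 0; row < N; row++) {
                    sum += matrix[row][col];
                }
            }
            System.out.printf("column-major: %d ms (sum=%d)%n",
                    (System.nanoTime() - start) / 1_000_000, sum);
        }
    }

The timing here is deliberately naive; for trustworthy measurements on the JVM, use a benchmarking harness such as JMH.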

At the end of this article, I also ask a question that I do not know the answer to, so any suggestions are welcome!