Overview of C# Async Programming

I recently found myself explaining the concepts of threads and thread pools to my team. We had encountered a complicated threading problem in our production environment that led us to review and analyze some legacy code. Although it took me back to the basics, it was beneficial to revisit the capabilities and features .NET offers for managing threads, which mainly reflected how significantly .NET has evolved over the years. With the options available today, the production problem is much easier to tackle.

TL;DR

So, this circumstance led me to write this blog post and go through the underlying foundations and features of threads and thread pools:

Java Concurrency: The Basics

The Power of Multi-Threading in Java

This article describes the basics of the Java Concurrency API and how to assign work to be done asynchronously.

A thread is the smallest unit of execution that can be scheduled by the operating system, while a process is a group of associated threads that execute in the same, shared environment.

We call a “system thread” a thread created by the JVM that runs in the background of the application (e.g., the garbage collector), and a “user-defined thread” one that is created by the developer.
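
As a minimal illustration of the distinction (the class and thread names here are assumed, not taken from the original article):

```java
public class ThreadBasics {
    public static void main(String[] args) throws InterruptedException {
        // A user-defined thread, created explicitly by the developer.
        Thread worker = new Thread(
                () -> System.out.println("Running in: " + Thread.currentThread().getName()),
                "worker-thread");
        worker.start();
        worker.join();

        // The main thread is a user thread; system threads such as the
        // garbage collector run in the background as daemon threads.
        System.out.println("Is main a daemon thread? " + Thread.currentThread().isDaemon());
    }
}
```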

Why Do We Need Thread.currentThread().interrupt() in Interruptible Methods?

By an interruptible method, we mean a blocking method that may throw InterruptedException, for example, Thread.sleep(), BlockingQueue.take(), BlockingQueue.poll(long timeout, TimeUnit unit), and so on. A blocked thread is usually in the BLOCKED, WAITING, or TIMED_WAITING state, and if it is interrupted, the method tries to throw InterruptedException as soon as possible.

Since InterruptedException is a checked exception, we must catch it and/or rethrow it. In other words, if our method calls a method that throws InterruptedException, we must be prepared to deal with this exception. If we can throw it (propagate the exception to the caller), then it is no longer our job; the caller has to deal with it further. So, let's focus on the case where we must catch it. Such a case can occur when our code runs inside a Runnable, which cannot throw a checked exception.
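
For instance, a task submitted to an executor must handle the exception itself; the usual idiom is to restore the interrupt status so that code further up the call stack can still detect the interruption. The following is only a minimal sketch of that idiom, not code from the original article:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InterruptExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            try {
                // An interruptible (blocking) call that may throw InterruptedException.
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                // Runnable.run() cannot propagate the checked exception,
                // so we restore the interrupt flag for later callers to check.
                Thread.currentThread().interrupt();
            }
        });
        executor.shutdownNow();                      // interrupts the sleeping task
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```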

Java Concurrency, Part 1: Threads

Concurrency, the ability to run several tasks at the same time using multiple threads, is a game-changer for building Java applications.

This post is the first in a series of posts about Java concurrency. All code shared in this article has been tested in Java 12.

Spring Service: Improving Processing Time Could Harm Service Scalability

Learn more about improving processing speed in your Spring service.

Improving processing time by running tasks in parallel became very easy in the latest versions of Java and Spring. But should we understand how threads are used behind that syntactic sugar? Read on to find out.
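
As a rough illustration of that syntactic sugar (a sketch, not code from the article): running two lookups in parallel with CompletableFuture takes one line each, but by default the work lands on the shared ForkJoinPool.commonPool() unless you pass in an executor of your own.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelCalls {
    public static void main(String[] args) {
        // By default, supplyAsync() runs on ForkJoinPool.commonPool(),
        // a shared pool sized roughly to the number of CPU cores.
        CompletableFuture<String> user = CompletableFuture.supplyAsync(() -> "user-42");
        CompletableFuture<String> orders = CompletableFuture.supplyAsync(() -> "3 orders");

        // Combine the results once both parallel tasks complete.
        String summary = user.thenCombine(orders, (u, o) -> u + " has " + o).join();
        System.out.println(summary);
    }
}
```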

Let's Start With the Code

Below is a simple controller for a service (I will call it the users service) that takes a userId as input and returns the user object: id, name, and phoneNumber:
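
A minimal sketch of such a controller, with the class names, endpoint path, and placeholder values assumed rather than taken from the original article, might look like this:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    @GetMapping("/users/{userId}")
    public User getUser(@PathVariable String userId) {
        // In the real service this would be a repository or downstream call;
        // the name and phone number here are placeholder values.
        return new User(userId, "Jane Doe", "555-0100");
    }
}

class User {
    private final String id;
    private final String name;
    private final String phoneNumber;

    User(String id, String name, String phoneNumber) {
        this.id = id;
        this.name = name;
        this.phoneNumber = phoneNumber;
    }

    public String getId() { return id; }
    public String getName() { return name; }
    public String getPhoneNumber() { return phoneNumber; }
}
```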

Python Thread – Part One

Python Thread

A Python thread is like a process, and may even be a process, depending on the Python thread system. In fact, Python threads are sometimes called lightweight processes because threads occupy much less memory and take less time to create than do processes.

Threads allow applications to perform multiple tasks at once. Multithreading is important in many applications.

ConcurrentHashMap: Call Only One Method Per Key

Each method of ConcurrentHashMap is thread-safe. But composing multiple ConcurrentHashMap calls for the same key can lead to race conditions, and calling the same method recursively for different keys (for example, a compute() nested inside another compute()) can lead to deadlocks.

Let us look at an example to see why this happens:
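
A minimal sketch of the race-condition half of that claim (the class and method names here are assumed) could look like this:

```java
import java.util.concurrent.ConcurrentHashMap;

public class CounterRace {
    private final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

    // BROKEN: two separate calls for the same key form a non-atomic
    // check-then-act sequence, so concurrent callers can lose updates.
    public void incrementRacy(String key) {
        Integer current = counts.get(key);                    // call 1
        counts.put(key, current == null ? 1 : current + 1);   // call 2
    }

    // SAFE: a single atomic call per key. The remapping function must not
    // itself update other mappings of the same map (e.g. a nested compute()
    // on another key), which is where the deadlock risk comes from.
    public void increment(String key) {
        counts.merge(key, 1, Integer::sum);
    }
}
```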

Proposed SQL Server Defaults: Disable Lightweight Pooling

A few months ago, I suggested that the following settings should be the default for most SQL Server instances.

  • Set cost threshold for parallelism to 50
  • Disable lightweight pooling if it is enabled
  • Disable priority boost if it is enabled
  • Set optimize for ad hoc workloads to enabled
  • Set max server memory (MB) to a custom value consistent with Jonathan Kehayias's algorithm
  • Set backup compression default to enabled
  • Set the power saving settings on Windows to high performance if possible
  • Provide an option to flush the plan cache as needed

Over the next few posts, I will dive into the why. Last time, we started with cost threshold for parallelism. This week, we take a quick look at lightweight pooling.

A Bird’s-Eye View on Java Concurrency Frameworks

The Why Question

A few years ago, when NoSQL was trending, our team, like every other team, was enthusiastic about the new and exciting stuff, and we were planning to change the database in one of our applications. But when we got into the finer details of the implementation, we remembered what a wise man once said, “the devil is in the details,” and eventually we realized that NoSQL is not a silver bullet for fixing all problems; the answer to NoSQL vs. RDBMS was “it depends.” Similarly, in the last year, concurrency libraries like RxJava and Spring Reactor have been trending, with enthusiastic claims that the asynchronous, non-blocking approach is the way to go. In order not to make the same mistake again, we tried to evaluate how concurrency frameworks like ExecutorService, RxJava, Disruptor, and Akka differ from one another and how to identify the right use case for each.

The terminology used in this article is described in greater detail here.