Managing Throughput with Virtual Threads - Sip of Java

Virtual threads, the long-awaited headline feature of Project Loom, became a final feature in JDK 21. Virtual threads open a new chapter of concurrency in Java by giving developers lightweight threads, making it easy and resource-cheap to break work into tasks that are executed concurrently. However, with these changes come questions about how best to manage throughput. Let’s take a look at how developers can manage throughput when using virtual threads.

Creating Virtual Threads with ExecutorService

In most cases, developers will not need to create virtual threads themselves. For example, in the case of web applications, underlying frameworks like Tomcat or Jetty can be configured to create a virtual thread for every incoming request.

If a portion of your application could benefit from being broken into individual tasks and executed concurrently, consider creating new virtual threads with Executors.newVirtualThreadPerTaskExecutor().

In this hypothetical example, three random numbers are generated serially:

import java.util.random.RandomGenerator;

public class SerialExample {
	static RandomGenerator generator = RandomGenerator.getDefault();

	public Integer sumThreeRandomValues() {
		// Each value is generated one after the other on the calling thread
		return generator.nextInt() + generator.nextInt() + generator.nextInt();
	}
}

This could be re-written so that the random numbers are generated in parallel:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.random.RandomGenerator;

public class ParallelExample {
	private ExecutorService service = Executors.newVirtualThreadPerTaskExecutor();

	public Integer sumThreeRandomValues() 
			throws InterruptedException, ExecutionException {

		Callable<Integer> generateRandomInt = new Callable<Integer>() {
			static RandomGenerator generator = RandomGenerator.getDefault();

			@Override
			public Integer call() throws Exception {
				return generator.nextInt();
			}
		};

		// Each submitted task is executed in its own virtual thread
		Future<Integer> randomValue1 = service.submit(generateRandomInt);
		Future<Integer> randomValue2 = service.submit(generateRandomInt);
		Future<Integer> randomValue3 = service.submit(generateRandomInt);
		return randomValue1.get() + randomValue2.get() + randomValue3.get();
	}
}

AutoCloseable ExecutorService

The ExecutorService interface extends AutoCloseable, so the ExecutorService instance returned from Executors.newVirtualThreadPerTaskExecutor() can be placed in a try-with-resources block to be closed automatically if needed, as in the example below:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.random.RandomGenerator;

public class ParallelExampleAutoCloseable {

	public Integer sumThreeRandomValues() 
			throws InterruptedException, ExecutionException {

		Callable<Integer> generateRandomInt = new Callable<Integer>() {
			static RandomGenerator generator = RandomGenerator.getDefault();

			@Override
			public Integer call() throws Exception {
				return generator.nextInt();
			}
		};

		// The ExecutorService is closed automatically when the try block exits,
		// waiting for the submitted tasks to complete
		try (ExecutorService service = Executors.newVirtualThreadPerTaskExecutor()) {
			Future<Integer> value1 = service.submit(generateRandomInt);
			Future<Integer> value2 = service.submit(generateRandomInt);
			Future<Integer> value3 = service.submit(generateRandomInt);
			return value1.get() + value2.get() + value3.get();
		}
	}
}

However, in most cases, it would likely be preferable to have the ExecutorService live for the lifetime of the application, or at least of its encapsulating object, as in the first example.

Structured Concurrency Cometh

Using Executors.newVirtualThreadPerTaskExecutor() to create and execute tasks as virtual threads is largely a temporary solution. Ideally, Structured Concurrency should be used when breaking work down into individual tasks to be executed concurrently. Structured Concurrency is a preview feature as of JDK 21; you can read about preview features here. For more on Structured Concurrency, check out this video from José Paumard.
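
As a rough idea of what the earlier parallel example could look like with Structured Concurrency, here is a minimal sketch using StructuredTaskScope.ShutdownOnFailure; since this is a preview API in JDK 21, it must be compiled and run with --enable-preview, and the StructuredExample class name is just for illustration:

import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;
import java.util.random.RandomGenerator;

// Requires --enable-preview on JDK 21
public class StructuredExample {
	static RandomGenerator generator = RandomGenerator.getDefault();

	public Integer sumThreeRandomValues() 
			throws InterruptedException, ExecutionException {

		try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
			// Each forked subtask runs in its own virtual thread
			var value1 = scope.fork(() -> generator.nextInt());
			var value2 = scope.fork(() -> generator.nextInt());
			var value3 = scope.fork(() -> generator.nextInt());

			scope.join().throwIfFailed(); // Wait for all subtasks; propagate any failure

			return value1.get() + value2.get() + value3.get();
		}
	}
}

Because the scope is opened in a try-with-resources block, all subtasks are guaranteed to have completed, one way or another, before the method returns.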

Rate Limiting Virtual Threads

When I have presented on virtual threads, a common question I have heard is how to prevent an external service from being overwhelmed by too many calls when using virtual threads. The answer is to use a java.util.concurrent.Semaphore. Semaphores work like pools, but instead of pooling a scarce resource, like the connections or threads that had to be pooled before the introduction of virtual threads, a semaphore pools permits, which are simply a counter.

The code example below demonstrates how to use a Semaphore. The number of permits should be set to the acceptable rate limit for the external service, and the call to the service placed between an acquire(), which takes a permit, and a release(), which returns it. Be sure to place the release() within a finally block to prevent permits from leaking.

public class SemaphoreExample {
	private static final Semaphore POOL = new Semaphore(10); // Set to the acceptable rate limit

	public void callOldService(...) {
		try {
			POOL.acquire(); // Takes a permit, blocking if none is available
		} catch (InterruptedException e) {
			Thread.currentThread().interrupt(); // Restore the interrupt status
			return; // Don't call the service, and don't release a permit that was never acquired
		}

		try {
			//call service
		} finally {
			POOL.release(); // Returns the permit
		}
	}
}

⚠️ Warning: Don’t pool virtual threads, as they are not a scarce resource.

Pools within Pools

If you already use a connection pool to manage the connections to a service, avoid adding a Semaphore on top of it. If, when migrating to JDK 21 and using virtual threads, you find that a service is being overwhelmed with traffic, instead configure the relevant connection pool with a smaller number of connections, one that the external service can handle.
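
As a minimal sketch, assuming the connection pool in question is HikariCP (the pool library, JDBC URL, and pool size shown here are purely illustrative; most pool implementations expose an equivalent setting):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class ConnectionPoolConfigExample {

	public static HikariDataSource buildDataSource() {
		HikariConfig config = new HikariConfig();
		config.setJdbcUrl("jdbc:postgresql://localhost:5432/example"); // illustrative URL
		// Cap the number of connections at what the external service can handle,
		// rather than rate limiting with a Semaphore
		config.setMaximumPoolSize(10);
		return new HikariDataSource(config);
	}
}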

Configuring Platform Threads

Virtual threads run on top of platform threads, a subject I cover in this video. By default, the number of platform threads available to the virtual thread scheduler equals the number of processors available to the JVM, and the maximum size of the scheduler’s platform thread pool is 256. In the vast majority of cases, these defaults should work fine. If, however, you have an edge case where these default values don’t meet your requirements, they can be modified with the following VM arguments (an example invocation appears after the list):

  • jdk.virtualThreadScheduler.parallelism: configures the number of platform threads available to the virtual thread scheduler; by default, this is the number of processors available to the JVM.

  • jdk.virtualThreadScheduler.maxPoolSize: configures the maximum number of platform threads the scheduler will create (256 by default).
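
For example, a hypothetical invocation that gives the scheduler 8 platform threads and caps the pool at 64 (the values and the MyApplication class name are only illustrative):

java -Djdk.virtualThreadScheduler.parallelism=8 -Djdk.virtualThreadScheduler.maxPoolSize=64 MyApplication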

Debugging Virtual Threads

When issues arise in applications using virtual threads, remember that observability tools like JDK Flight Recorder (JFR) and jcmd already support virtual threads.

JFR and Virtual Threads

JFR has been updated with four new events for virtual threads:

jdk.VirtualThreadStart
jdk.VirtualThreadEnd
jdk.VirtualThreadPinned
jdk.VirtualThreadSubmitFailed

By default, jdk.VirtualThreadStart and jdk.VirtualThreadEnd are disabled, but they can be enabled through a JFR configuration file or JDK Mission Control. You can read how to enable JFR events here.
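
These events can also be enabled programmatically; here is a minimal sketch using the jdk.jfr.Recording API (the class name and dump file path are only illustrative):

import java.nio.file.Path;
import jdk.jfr.Recording;

public class VirtualThreadEventsExample {

	public static void main(String[] args) throws Exception {
		try (Recording recording = new Recording()) {
			recording.enable("jdk.VirtualThreadStart"); // disabled by default
			recording.enable("jdk.VirtualThreadEnd");   // disabled by default
			recording.start();

			// ... run the virtual thread workload to be observed ...

			recording.stop();
			recording.dump(Path.of("virtual-threads.jfr")); // open in JDK Mission Control
		}
	}
}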

Thread dumps with Virtual Threads

A thread dump can be taken with the following command:

jcmd <PID> Thread.dump_to_file -format=[text|json] <file>

The thread dump will include virtual threads that are blocked in network I/O operations, as well as virtual threads created through the new-thread-per-task ExecutorService, which is one of the reasons to prefer the ExecutorService for creating virtual threads.

Object addresses, locks, JNI statistics, heap statistics, and other information that appears in traditional thread dumps are not included.

Additional Reading

Virtual Threads - JEP 444

Virtual Threads Guide

Structured Concurrency - JEP 453

Java 21 new feature: Virtual Threads #RoadTo21

Happy coding!