
Going inside Java's Project Loom and virtual threads

See how virtual threads bring back the old days of Java's green threads—that is, Java threads not tied to operating-system threads.

by Ben Evans, January 15, 2021

Let's talk about Project Loom, which is exploring new Java language features, APIs, and runtimes for lightweight concurrency—including new constructs for virtual threads.

Java was the first mainstream programming platform to bake threads into the core language. Before threads, the state of the art was to use multiple processes and various unsatisfactory mechanisms (UNIX shared memory, anyone?) to communicate between them.

At an operating system (OS) level, threads are independently scheduled execution units that belong to a process. Each thread has an execution instruction counter and a call stack but shares a heap with every other thread in the same process. Not only that, but the Java heap is just a single contiguous subset of the process heap, at least in the HotSpot JVM implementation (other JVMs may differ), so the memory model of threads at an OS level carries over naturally to the Java language domain.

The concept of threads naturally leads to a notion of a lightweight context switch: It is cheaper to switch between two threads in the same process than between threads in different processes. This is primarily because the mapping tables that convert virtual memory addresses to physical memory addresses are mostly the same for threads in the same process. By the way, creating a thread is also cheaper than creating a process. Of course, the exact extent to which this is true depends on the details of the OS in question.
The Java Language Specification does not mandate any particular mapping between Java threads and OS threads, assuming that the host OS has a suitable thread concept—which has not always been the case. In fact, in very early Java versions, JVM threads were multiplexed onto OS threads (also known as platform threads), in what were referred to as green threads because those earliest JVM implementations actually used only a single platform thread. However, this single-platform-thread practice died away around the Java 1.2 and Java 1.3 era (and slightly earlier on Sun's Solaris OS). Modern Java versions running on mainstream OSs instead implement the rule that one Java thread equals exactly one OS thread. This means that using Thread.start() calls the thread creation system call (such as clone() on Linux) and actually creates a new OS thread.

OpenJDK's Project Loom aims, as its primary goal, to revisit this long-standing implementation and instead enable new Thread objects that can execute code but do not directly correspond to dedicated OS threads. Or, to put it another way, Project Loom creates an execution model where an object that represents an execution context is not necessarily a thing that needs to be scheduled by the OS.

Therefore, in some respects, Project Loom is a return to something similar to green threads. However, the world has changed a lot in the intervening years, and sometimes there are ideas in computing that are ahead of their time. For example, you could regard EJBs (that is, Jakarta Enterprise Beans, formerly Enterprise JavaBeans) as a form of restricted environment that over-ambitiously tried to virtualize the environment away. Can EJBs perhaps be thought of as a prototypical form of the ideas that would later find favor in modern PaaS systems and, to a lesser extent, in Docker and Kubernetes?
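Before moving on, the one-Java-thread-equals-one-OS-thread rule described above can be observed directly from Java itself. The following is a small sketch (the class and method names are mine, purely for illustration, but ThreadMXBean is a standard JDK API): it starts a single platform thread and watches the JVM's live thread count tick up by one.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// A sketch of the one-thread-to-one-OS-thread rule: starting a platform
// Thread adds one live, OS-backed thread, observable via ThreadMXBean.
public class OneToOne {
    // Returns how many more live JVM threads exist after starting one Thread.
    public static int threadCountDelta() throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int before = mx.getThreadCount();

        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // keep the OS thread alive briefly
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.start(); // on Linux, this bottoms out in the clone() system call
        Thread.sleep(100); // give the new thread a moment to be counted

        int delta = mx.getThreadCount() - before;
        t.join();
        return delta;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Starting one Thread added " + threadCountDelta()
                + " OS-schedulable thread(s)");
    }
}
```

On a quiet JVM the delta is 1; a busy JVM with other threads starting or stopping concurrently may report a different number, which is why the sketch only samples the count rather than asserting an exact mapping.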
So, if Loom is a (partial) return to the idea of green threads, then one way of approaching it might be via this question: What has changed in the environment that makes it interesting to return to an old idea that was not found to be useful in the past?

To explore this question a little, let's look at an example. Specifically, let's try to crash the JVM by creating too many threads, as follows:

```java
import java.util.ArrayList;

//
// Please do not actually run this code...
//
public class CrashTheVM {
    private static void looper(int count) {
        var tid = Thread.currentThread().getId();
        if (count > 500) {
            return;
        }
        try {
            Thread.sleep(10);
            if (count % 100 == 0) {
                System.out.println("Thread id: " + tid + " : " + count);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        looper(count + 1);
    }

    public static Thread makeThread(Runnable r) {
        return new Thread(r);
    }

    public static void main(String[] args) {
        var threads = new ArrayList<Thread>();
        for (int i = 0; i < 20_000; i = i + 1) {
            var t = makeThread(() -> looper(1));
            t.start();
            threads.add(t);
            if (i % 1_000 == 0) {
                System.out.println(i + " threads created");
            }
        }
        // Join all the threads
        threads.forEach(t -> {
            try {
                t.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
    }
}
```

The code starts up 20,000 threads and does a minimal amount of processing in each one, or at least tries to. In practice, the application will probably die or lock up the machine long before that steady state is reached, though it is possible to get the example to run through to completion if the machine or OS is throttled and can't create threads fast enough to induce the resource starvation.

Figure 1 shows an example of what happened on my 2019 MacBook Pro right before the machine became totally unresponsive. This image shows inconsistent statistics, such as the thread count, because the OS is already struggling to keep up.

Figure 1. An image showing too many threads: Do not try this at home.
While it is obviously not completely representative of a practical production Java application, this example signposts what will happen to, for example, a web serving environment with one thread per connection. It is entirely reasonable for a modern high-performance web server to be expected to handle tens of thousands (or more) of concurrent connections, and yet this example clearly demonstrates the failure of a thread-per-connection architecture for that case.

To put it another way: A modern program may need to keep track of many more executable contexts than it can create threads for. An alternative takeaway could be that threads are potentially much more expensive than most people think and represent a scaling bottleneck for modern JVM applications.

Developers have been trying to solve this problem for years, either by taming the cost of threads or by using a representation of execution contexts that aren't threads. One way of trying to achieve this was the staged event-driven architecture (SEDA) approach, which first appeared 15 years ago. SEDA can be thought of as a system in which a domain object is moved from A to Z along a multistage pipeline, with various transformations happening along the way. This can be implemented in a distributed system using a messaging system or, in a single process, using blocking queues and a thread pool for each stage.

At each step of the SEDA approach, the processing of the domain object is described by a Java object that contains code to implement the step transformation. For this to work correctly, the code must be guaranteed to terminate; there must be no infinite loops. However, this requirement cannot be enforced by the framework. There are notable shortcomings to the SEDA approach, not least of which is the discipline required of programmers to use the architecture.

Let's look for a better alternative: Project Loom.
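To make the single-process form of SEDA concrete, here is a minimal sketch, not taken from any SEDA framework (all class and method names are illustrative). Each stage owns a blocking queue and a worker that applies one terminating transformation before handing the object to the next stage, and a poison-pill value shuts the pipeline down.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.UnaryOperator;

// A minimal two-stage SEDA-style pipeline: each stage reads from its own
// queue, applies one transformation, and hands off to the next stage.
public class SedaSketch {
    private static final String POISON = "\0STOP"; // sentinel, never a real item

    public static List<String> run(List<String> input) throws Exception {
        BlockingQueue<String> stage1In = new LinkedBlockingQueue<>();
        BlockingQueue<String> stage2In = new LinkedBlockingQueue<>();
        BlockingQueue<String> out = new LinkedBlockingQueue<>();

        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> pump(stage1In, stage2In, String::trim));      // stage 1
        pool.submit(() -> pump(stage2In, out, String::toUpperCase));    // stage 2

        for (String s : input) stage1In.put(s);
        stage1In.put(POISON); // signal end of input

        List<String> results = new ArrayList<>();
        for (String s = out.take(); !s.equals(POISON); s = out.take()) {
            results.add(s);
        }
        pool.shutdown();
        return results;
    }

    // One stage's worker: the step code must terminate for every item.
    private static void pump(BlockingQueue<String> in,
                             BlockingQueue<String> next,
                             UnaryOperator<String> step) {
        try {
            for (String s = in.take(); !s.equals(POISON); s = in.take()) {
                next.put(step.apply(s));
            }
            next.put(POISON); // forward the shutdown signal downstream
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(List.of("  hello ", " seda  ")));
    }
}
```

Even in this toy version, the shortcomings the article mentions are visible: nothing stops a stage from looping forever, and the programmer must carry the sentinel-passing discipline through every stage by hand.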
Introducing Project Loom

Project Loom is an OpenJDK project that aims to enable "easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform." The project aims to accomplish this by adding these new constructs:

- Virtual threads
- Delimited continuations
- Tail-call elimination

The key to all of this is virtual threads, which are designed to look to the programmer just like ordinary, familiar threads. However, virtual threads are managed by the Java runtime and are not thin, one-to-one wrappers over OS threads. Instead, virtual threads are implemented in user space by the Java runtime. (This article will not cover delimited continuations or tail-call elimination, but you can read about them here.)

The major advantages that virtual threads are intended to bring include the following:

- Creating and blocking them is cheap.
- Java execution schedulers (thread pools) can be used.
- There are no OS-level data structures for the stack.

The removal of the involvement of the OS in the lifecycle of a virtual thread is what removes the scalability bottleneck. Large-scale JVM applications can cope with having millions or even billions of objects, so why should they be restricted to just a few thousand OS-schedulable objects (which is one way to think about what a thread is)? Shattering this limitation and unlocking new concurrent programming styles is the main aim of Project Loom.

Let's see virtual threads in action. Download a Project Loom beta build and spin up jshell, as shown in the following example:

```
$ jshell
|  Welcome to JShell -- Version 16-loom
|  For an introduction type: /help intro

jshell> Thread.startVirtualThread(() -> {
   ...>     System.out.println("Hello World");
   ...> });
Hello World
$1 ==> VirtualThread[<unnamed>,<no carrier thread>]

jshell>
```

You can straightaway see the virtual thread construct in the output. The code is also using a new static method, startVirtualThread(), to start the lambda in a new execution context, which is a virtual thread.
It's that simple! Virtual threads are strictly opt-in: Existing codebases must continue to run in exactly the way they ran before the advent of Project Loom. Nothing can break, and everyone must make the conservative assumption that all existing Java code genuinely needs the "lightweight wrapper over an OS thread" architecture that has been, until now, the only game in town.
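As a compiled-code counterpart to the jshell session, here is a hedged rerun of the earlier 20,000-thread experiment using virtual threads. This sketch assumes a Loom build or a later JDK where Thread.startVirtualThread() is available (it shipped as a standard API in JDK 21); the class and method names are mine.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// The 20,000-thread experiment again, this time with virtual threads.
// Requires Thread.startVirtualThread (a Loom build, or JDK 21+).
public class VirtualThreadDemo {
    // Starts n virtual threads that each block briefly, then joins them all.
    public static int runTasks(int n) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            // Each call creates a cheap, runtime-scheduled virtual thread,
            // not a dedicated OS thread.
            threads.add(Thread.startVirtualThread(() -> {
                try {
                    // Blocking is cheap: only the virtual thread parks,
                    // and its carrier OS thread is freed to run others.
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.incrementAndGet();
            }));
        }
        for (Thread t : threads) {
            t.join();
        }
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(20_000) + " tasks completed");
    }
}
```

Where the platform-thread version of this workload could make a laptop unresponsive, the virtual-thread version completes unremarkably, because the 20,000 blocked contexts are plain Java objects rather than OS-schedulable threads.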