Avoiding Scalability Collapse by Restricting Concurrency

Dave Dice, Oracle Labs, [email protected]
Alex Kogan, Oracle Labs, [email protected]

Abstract

Saturated locks often degrade the performance of a multi-threaded application, leading to a so-called scalability collapse problem. This problem arises when a growing number of threads circulating through a saturated lock causes the overall application performance to fade or even drop abruptly. It is particularly (but not solely) acute on oversubscribed systems (systems with more threads than available hardware cores). In this paper, we introduce GCR (generic concurrency restriction), a mechanism that aims to avoid scalability collapse. GCR, designed as a generic, lock-agnostic wrapper, intercepts lock acquisition calls and decides when threads are allowed to proceed with the acquisition of the underlying lock. Furthermore, we present GCR-NUMA, a non-uniform memory access (NUMA)-aware extension of GCR that strives to ensure that the threads allowed to acquire the lock run on the same socket.

The extensive evaluation, which includes more than two dozen locks, three machines and three benchmarks, shows that GCR brings substantial speedup (in many cases, up to three orders of magnitude) under contention and growing thread counts, while introducing nearly negligible slowdown when the underlying lock is not contended. GCR-NUMA brings even larger performance gains starting at even lighter lock contention.

Keywords: locks, scalability, concurrency restriction, NUMA

1 Introduction

The performance of applications on multi-core systems is often harmed by saturated locks, where at least one thread is waiting for the lock. Prior work has observed that as the number of threads circulating through a saturated lock grows, the overall application performance often fades or even drops abruptly [2, 7, 15, 16], a behavior called scalability collapse [7]. This happens because threads compete over shared system resources, such as computing cores and last-level cache (LLC). For instance, an increase in the number of distinct threads circulating through the lock typically leads to increased cache pressure, resulting in cache misses. At the same time, threads waiting for the lock consume valuable resources and might preempt the lock holder from making progress with its execution under the lock, exacerbating the contention on the lock even further.

[Figure 1. Microbenchmark performance with different locks on a 2-socket machine with 20 hyper-threads per socket.]

An example of scalability collapse can be seen in Figure 1, which depicts the performance of a key-value map microbenchmark with three popular locks on a 2-socket x86 machine featuring 40 logical CPUs in total (full details of the microbenchmark and the machine are provided later). The shape and the exact point of the performance decline differ between the locks, yet all of them are unable to sustain peak throughput. With the Test-Test-Set lock, for instance, the performance drops abruptly when more than just a few threads are used, while with the MCS lock [20] the performance is relatively stable up to the capacity of the machine and collapses once the system gets oversubscribed (i.e., has more threads available than the number of cores). Note that one of the locks, MCS-TP, was designed specifically to handle oversubscription [13], yet its performance still falls short of the peak.

It might be tempting to argue that one should never create a workload where the underlying machine is oversubscribed, pre-tuning the maximum number of threads and using a lock, such as MCS, to keep the performance stable. We note that in modern component-based software, the total number of threads is often out of the hands of the developer. A good example would be applications that use thread pools, or even have multiple mutually unaware thread pools. Furthermore, in multi-tenant and/or cloud-based deployments, where the resources of a physical machine (including cores and caches) are often shared between applications running inside virtual machines or containers, applications can run concurrently with one another without even being aware that they share the same machine. Thus, limiting the maximum number of threads to the number of cores does not help much. Finally, even when a saturated lock delivers seemingly stable performance, threads spinning and waiting for the lock consume energy and take resources (such as CPU time) from other, unrelated tasks.

In this paper we introduce generic concurrency restriction (GCR) to deal with scalability collapse. GCR operates as a wrapper around any existing lock (including POSIX pthread mutexes, and specialized locks provided by an application). GCR intercepts calls for lock acquisition and decides which threads may proceed with the acquisition of the underlying lock (those threads are called active) and which threads are blocked (those threads are called passive). Reducing the number of threads circulating through the lock improves cache performance, while blocking passive threads reduces competition over CPU time, leading to better system performance and energy efficiency. To avoid starvation and achieve long-term fairness, active and passive threads are shuffled periodically. We note that the admission policy remains fully work conserving with GCR. That is, when a lock holder exits, one of the waiting threads will be able to acquire the lock immediately and enter its critical section.
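To illustrate the wrapper idea, the following is a minimal sketch of lock-agnostic concurrency restriction around a POSIX mutex. All names here (gcr_t, gcr_lock, ACTIVE_MAX) are illustrative, not the paper's code: the actual GCR keeps admission off the common fast path and shuffles the active and passive sets explicitly, whereas this sketch parks passive threads on a condition variable and relies on its wait queue for coarse long-term fairness.

    #include <pthread.h>

    #define ACTIVE_MAX 2   /* hypothetical bound on the active set */

    typedef struct {
        pthread_mutex_t inner;  /* the underlying lock being wrapped  */
        pthread_mutex_t gate;   /* guards the active-set counter      */
        pthread_cond_t  wake;   /* passive threads park here          */
        int             active; /* threads admitted to the inner lock */
    } gcr_t;

    #define GCR_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, \
                              PTHREAD_MUTEX_INITIALIZER, \
                              PTHREAD_COND_INITIALIZER, 0 }

    void gcr_lock(gcr_t *g) {
        pthread_mutex_lock(&g->gate);
        while (g->active >= ACTIVE_MAX)          /* set is full: go passive */
            pthread_cond_wait(&g->wake, &g->gate);
        g->active++;                             /* join the active set     */
        pthread_mutex_unlock(&g->gate);
        pthread_mutex_lock(&g->inner);           /* contend as usual        */
    }

    void gcr_unlock(gcr_t *g) {
        /* Work conserving: releasing the inner lock first lets another
         * active thread enter its critical section immediately. */
        pthread_mutex_unlock(&g->inner);
        pthread_mutex_lock(&g->gate);
        g->active--;
        pthread_cond_signal(&g->wake);           /* one passive thread may
                                                    now try to turn active */
        pthread_mutex_unlock(&g->gate);
    }

Client code simply calls gcr_lock/gcr_unlock in place of locking the inner mutex directly, which is what makes the scheme applicable to opaque lock implementations.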
In this paper we also show how GCR can be extended to the non-uniform memory access (NUMA) setting of multi-socket machines. In such settings, accessing data residing in a local cache is far cheaper than accessing data in a cache located on a remote socket. Previous research on locks tackled this issue by trying to keep the lock ownership on the same socket [3, 8, 9, 22], thus increasing the chance that the data accessed by a thread holding the lock (as well as the lock data itself) would be cached locally to that thread. The NUMA extension of GCR, called simply GCR-NUMA, takes advantage of that same idea by trying to keep the set of active threads composed of threads running on the same socket. As a by-product of this construction, GCR-NUMA can convert any lock into a NUMA-aware one.
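To make the socket-grouping idea concrete, the sketch below extends the earlier wrapper with a NUMA-aware admission rule (Linux-specific, assuming libnuma; link with -lnuma). The struct, names and the broadcast-based wakeup are assumptions for illustration, not the paper's implementation; in particular, a faithful GCR-NUMA would also move the preferred socket periodically to preserve long-term fairness across sockets.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>   /* sched_getcpu()     */
    #include <numa.h>    /* numa_node_of_cpu() */

    #define ACTIVE_MAX 2

    typedef struct {
        pthread_mutex_t inner, gate;
        pthread_cond_t  wake;
        int             active;         /* size of the active set        */
        int             preferred_node; /* socket the active set runs on */
    } gcr_numa_t;

    /* Admission test, called with 'gate' held: a thread may join the
     * active set only if the set is empty (it then claims its own
     * socket) or it already runs on the active set's socket. */
    static int may_join(gcr_numa_t *g) {
        int node = numa_node_of_cpu(sched_getcpu());
        if (g->active == 0) {
            g->preferred_node = node;   /* empty set: adopt our socket   */
            return 1;
        }
        return g->active < ACTIVE_MAX && node == g->preferred_node;
    }

    void gcr_numa_lock(gcr_numa_t *g) {
        pthread_mutex_lock(&g->gate);
        while (!may_join(g))
            pthread_cond_wait(&g->wake, &g->gate); /* wrong socket: park */
        g->active++;
        pthread_mutex_unlock(&g->gate);
        pthread_mutex_lock(&g->inner);
    }

    void gcr_numa_unlock(gcr_numa_t *g) {
        pthread_mutex_unlock(&g->inner);
        pthread_mutex_lock(&g->gate);
        g->active--;
        pthread_cond_broadcast(&g->wake); /* wake all so that same-socket
                                             waiters can retry admission */
        pthread_mutex_unlock(&g->gate);
    }

The broadcast is deliberate in this sketch: a targeted signal could wake a remote-socket thread that immediately parks again, whereas a broadcast guarantees that any eligible same-socket waiter gets to retry.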
We have implemented GCR (and GCR-NUMA) in the context of the LiTL library [12, 19], which provides implementations of over two dozen locks. We have evaluated GCR with all those locks using a microbenchmark as well as two well-known database systems (namely, Kyoto Cabinet [11] and LevelDB [17]), on three different systems (two x86 machines and one SPARC). The results show that GCR avoids scalability collapse, which translates into substantial speedup (up to three orders of magnitude) under high lock contention for virtually every evaluated lock, workload and machine. Furthermore, we show empirically that GCR does not harm the fairness of the underlying locks (in fact, in many cases GCR makes fairness better). GCR-NUMA brings even larger performance gains starting at even lighter lock contention. We also prove that GCR preserves the progress guarantees of the underlying lock, i.e., it does not introduce starvation if the underlying lock is starvation-free.

2 Related Work

Prior work has explored adapting the number of active threads based on lock contention [7, 16]. However, that work customized certain types of locks, exploiting their specific features, such as the fact that waiting threads are organized in a queue [7], or that lock acquisition can be aborted [16]. Those requirements limit the ability to adapt those techniques to other locks and use them in practice. For instance, very few locks allow waiting threads to abandon an acquisition attempt, and many spin locks, such as a simple Test-Test-Set lock, do not maintain a queue of waiting threads. Furthermore, the lock implementation is often opaque to the application, e.g., when POSIX pthread mutexes are used. At the same time, prior research has shown that every lock has its own "15 minutes of fame", i.e., there is no lock that always outperforms others, and the choice of the optimal lock depends on the given application, platform and workload [6, 12]. Thus, in order to be practical, a mechanism that controls the number of active threads has to be lock-agnostic, like the one provided by GCR.

Other work in different but related contexts has observed that controlling the number of threads used by an application is an effective approach for meeting certain performance goals. For instance, Raman et al. [23] demonstrate a run-time system that monitors application execution to dynamically adapt the number of worker threads executing parallel loop nests. In another example, Pusukuri et al. [21] propose a system that runs an application multiple times for short durations while varying the number of threads, and determines the optimal number of threads to create based on the observed performance. Chadha et al. [4] identified cache-level thrashing as a scalability impediment and proposed system-wide concurrency throttling. Heirman et al. [14] suggested intentional undersubscription of threads as a response to competition for shared caches. Hardware and software transactional memory systems use contention managers to throttle concurrency in order to optimize throughput [24]; the issue is particularly acute in the context of transactional memory, as failed optimistic transactions are wasteful of resources.

Trading off between throughput and short-term fairness has been extensively explored in the context of NUMA-aware locks [3, 8, 9, 22].
