
Run, Actor, Run: Towards Cross-Actor Language Benchmarking

Sebastian Blessing, Imperial College London, United Kingdom, [email protected]
Kiko Fernandez-Reyes, Uppsala University, Sweden, [email protected]
Albert Mingkun Yang, Uppsala University, Sweden, [email protected]
Sophia Drossopoulou, Imperial College London, United Kingdom, [email protected]
Tobias Wrigstad, Uppsala University, Sweden, [email protected]

Abstract

The actor paradigm supports the natural expression of concurrency. It has inspired the development of several actor-based languages, whose adoption depends, to a large extent, on the runtime characteristics (i.e., the performance and scaling behaviour) of programs written in these languages.

This paper investigates the relative runtime characteristics of Akka, CAF and Pony, based on the Savina benchmarks. We observe that the scaling of many of the Savina benchmarks does not reflect their categorization (into essentially sequential, concurrent and parallel), that many programs have similar runtime characteristics, and that their runtime behaviour may drastically change nature (e.g., go from essentially sequential to parallel) by tweaking some parameters.

These observations lead to our proposal of a single benchmark program which we designed so that, through tweaking of some knobs, it can (we hope) simulate most of the programs of the Savina suite.

CCS Concepts: • Computing methodologies → Distributed programming languages; • General and reference → Evaluation; Performance.

Keywords: actor programming; benchmarks

ACM Reference Format:
Sebastian Blessing, Kiko Fernandez-Reyes, Albert Mingkun Yang, Sophia Drossopoulou, and Tobias Wrigstad. 2019. Run, Actor, Run: Towards Cross-Actor Language Benchmarking. In Proceedings of the 9th ACM SIGPLAN International Workshop on Programming Based on Actors, Agents, and Decentralized Control (AGERE '19), October 22, 2019, Athens, Greece. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3358499.3361224

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

AGERE '19, October 22, 2019, Athens, Greece
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6982-4/19/10...$15.00
https://doi.org/10.1145/3358499.3361224

1 Introduction

Modern computers are all multicore or manycore machines that need software written with concurrency and parallelism in mind to utilise their full power. This has sparked a renewed interest in the actor paradigm, which aims to express concurrency in a natural way and to provide a high-level abstraction that, at least in theory, exploits the inherent concurrency of a program's design. This has inspired the development of several actor languages [6, 11–13, 16, 19, 24, 32, 36, 38].

For wide adoption of actor languages, the runtime characteristics (performance and scaling) are important considerations. The benchmark suite Savina [30], which aims to provide a collection of programs that are "diverse, realistic and compute intensive (rather than I/O intensive)", is the current de facto benchmark for actor languages. Benchmarking is difficult; language and runtime features may interact in ways which, if not carefully taken into account, may lead to a misinterpretation of the results.

To deepen the understanding of Savina and to corroborate that a program's scaling depends on the inherent concurrency in its design, we compare the performance of the Savina programs across three actor languages on two architectures, scaling from 1 to 112 cores.

The contributions of the paper are as follows:

– A description of the aims and (common) pitfalls of cross-language benchmarking (§2).
– A reproduction study of the Savina paper with explanations of runtime characteristics (§4 and §5).
– An observation that many Savina benchmarks have similar runtime characteristics, and that tuning parameters in these programs drastically affects their characteristics so that they become indistinguishable from those of other programs in the suite (§5).
– A proposal for a single benchmark program that (we believe) can simulate most of the programs of the Savina benchmark suite (§6).

§3 gives an overview of the three languages in our study, and §4 introduces the Savina benchmarks. §7 concludes.

2 Cross-Language Benchmarking: Pitfalls

A central question for the adoption of a concurrent programming language is its performance, and how well it scales when we increase, or decrease, the number of cores. This section briefly touches on some of the problems encountered when comparing performance across multiple languages that introduce differences in the benchmark programs.

2.1 Pitfalls of Cross-Language Benchmarking

Cross-language benchmarking is difficult. For simple programs (aka micro benchmarks) that typically stress a single aspect or a few aspects, such as message passing overhead or backpressure handling, isolating those aspects is a delicate matter: many seemingly innocuous operations, performed e.g. to simulate "actual work", can easily become dominating performance factors. For example, many actor micro benchmark programs generate random numbers: even with actor-local generators initialised with constant seeds, generators in different languages may favour one implementation or suppress a pain point in another. Furthermore, random number generators may themselves perform differently, which may obscure the actual benchmark result and the performance of the operations they are used in combination with.

In the past, we have seen results in micro benchmarks skewed (or obscured) by, e.g.: random number generation (such as different numbers generated across platforms, or varying performance of generators across platforms, leading to accidentally comparing the performance of random number generators rather than the actual metric of interest); string libraries (such as the same frequent operation being O(1) on one platform and O(n) on another); accidentally measuring the performance of "irrelevant" libraries (such as regular expressions); and the creation of garbage objects during warm-up that penalises subsequent iterations due to, e.g., worse locality or additional GC cycles.

These problems largely go away in macro benchmarking, i.e., comparing the performance of entire applications without synthesised behaviour. But macro benchmarks make it harder to isolate the effects of single aspects.

Measuring scalability is delicate regardless of micro/macro. When using only a fraction of a large machine to simulate performance on a smaller machine, one typically wants cores on the same NUMA node, to avoid slowdowns due to remote memory accesses, unfairness when a lock is suddenly closer to one core than to another, etc. This is relatively straightforward in compiled systems without a managed runtime; in the presence of JIT compilation threads, GC threads, and many other runtime management entities, the right thing to do is a lot less clear. Pin all threads, pin none, or pin only non-VM threads? Ultimately, what constitutes a "fair" comparison is not obvious.

There are multiple other reasons why fairness is difficult. For example, Akka allows users to supply custom scheduling behaviours for actors, which ultimately hurts its scheduling performance because it limits the extent to which things can be optimised, as fewer facts are set in stone. This (at least in theory) makes it easier to "beat" the default Akka scheduling behaviour, which, unless one goes through the trouble of defining benchmarks where custom scheduling makes sense, paints Akka in a less favourable light. A similar argument can be made about Erlang favouring fairness over performance.

2.2 The Objective and The Rules of the Game

The above discussion brings us to what we call "the rules of the game". For a micro benchmark to be useful, it needs a clearly stated objective: what it intends to show (e.g., how a program that is under-saturated with respect to parallel units of behaviour is penalised by actor scheduling), as well as the rules for meeting that objective, i.e., what "tricks" are allowed in implementations. For example, in the context of actor programs, a rule could be to not break actor isolation. As §5 will discuss further, some of the initial implementations of Savina programs break isolation to avoid message passing overhead for coordination. Is the ability to do so a strength, or is a language's ability to enforce actor isolation a superior quality? In this example, the question may be formulated as "when is a program an actor program?" As there is no universally accepted truth, we believe each benchmark should come with its own set of rules, and changing the rules by definition means the creation of a different benchmark.

3 Actor Languages

Actors are concurrent objects interacting via asynchronous point-to-point messages [2, 26]. In the initial model, every actor maintains a private heap inaccessible to other actors [26]. Hence, the state of an actor can only be manipulated through messages. Messages are sent by adding them to the mailbox of the receiving actor (ideally the only point of synchronisation in a program), after which the sender can continue without waiting for an explicit reply. Actors process incoming messages either in the order they arrive or out of order (called selective receive). Actors are thus sequential, but actor programs are inherently concurrent. Consequently, actor-based languages are concurrent-by-default [2, 26].

3.1 Akka

Akka [24, 38] is a library-based implementation of the actor model on the JVM with widespread adoption.
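The mailbox discipline described above (asynchronous sends, private state, strictly sequential processing inside each actor) can be sketched in a few lines. This is a toy illustration only, not how Akka, CAF, or Pony are implemented; real runtimes multiplex many actors over a pool of scheduler threads rather than dedicating one thread per actor, and the class and message names here are invented for the example:

```python
import queue
import threading

class Counter:
    """A minimal actor: private state, a mailbox, and a worker loop."""

    def __init__(self):
        self._mailbox = queue.Queue()   # the only synchronisation point
        self._count = 0                 # "private heap": touched only by the worker
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        """Asynchronous send: enqueue and return without awaiting a reply."""
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()   # messages processed in arrival order
            if msg == "stop":
                self._done.set()
                return
            self._count += msg          # sequential behaviour inside the actor

    def result(self):
        self._done.wait()
        return self._count

# Usage: the sender enqueues messages and continues; the actor
# processes them one at a time on its own thread.
actor = Counter()
for _ in range(10):
    actor.send(1)
actor.send("stop")
print(actor.result())  # prints 10
```

Because only the worker thread ever reads or writes `_count`, the sketch preserves actor isolation by construction; a benchmark rule such as "do not break actor isolation" (cf. §2.2) would forbid other threads from reaching around the mailbox to mutate that state directly.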