Concurrent Computing

Total pages: 16 | File type: PDF | Size: 1020 KB

Concurrent Computing
Rachid Guerraoui, Petr Kuznetsov
September 13, 2017

Contents

1. Introduction  9
   1.1. A broad picture: the concurrency revolution  9
   1.2. The topic: shared objects  10
   1.3. Linearizability  11
   1.4. Wait-freedom  12
   1.5. Combining linearizability and wait-freedom  12
   1.6. Object implementation  13
   1.7. Reducibility  14
   1.8. Organization  14
   1.9. Bibliographical notes  15

I. Correctness  17

2. Linearizability  19
   2.1. Introduction  19
   2.2. The players  20
        2.2.1. Processes  20
        2.2.2. Objects  21
        2.2.3. Histories  22
        2.2.4. Sequential histories  23
        2.2.5. Legal histories  23
   2.3. Linearizability  23
        2.3.1. The case of complete histories  24
        2.3.2. The case of incomplete histories  25
        2.3.3. Completing a linearizable history  26
   2.4. Composition  27
   2.5. Safety  29
   2.6. Summary  31
   2.7. Bibliographic notes  32

3. Progress  33
   3.1. Introduction  33
   3.2. Implementation  33
        3.2.1. High-level and low-level objects  33
        3.2.2. Zooming into histories  34
   3.3. Progress properties  35
        3.3.1. Variations  36
        3.3.2. Bounded termination  37
        3.3.3. Liveness  37
   3.4. Linearizability and wait-freedom  37
        3.4.1. A simple example  37
        3.4.2. A more sophisticated example  39
   3.5. Summary  40
   3.6. Exercises  41

II. Read-write objects  43

4. Simple register transformations  45
   4.1. Definitions  45
   4.2. Proving register properties  46
   4.3. Register transformations  48
   4.4. Two simple bounded transformations  49
        4.4.1. Safe/regular registers: from single reader to multiple readers  49
        4.4.2. Binary multi-reader registers: from safe to regular  50
   4.5. From binary to b-valued registers  51
        4.5.1. From safe bits to safe b-valued registers  51
        4.5.2. From regular bits to regular b-valued registers  52
        4.5.3. From atomic bits to atomic b-valued registers  54
   4.6. Bibliographic notes  56
   4.7. Exercises  56

5. Unbounded register transformations  57
   5.1. 1W1R registers: from unbounded regular to atomic  57
   5.2. Atomic registers: from unbounded 1W1R to 1WMR  58
   5.3. Atomic registers: from unbounded 1WMR to MWMR  60
   5.4. Concluding remark  61
   5.5. Bibliographic notes  61
   5.6. Exercises  61

6. Optimal atomic bit construction  63
   6.1. Introduction  63
   6.2. Lower bound  63
        6.2.1. Digests and sequences of writes  64
        6.2.2. Impossibility result and lower bound  65
   6.3. From three safe bits to an atomic bit  66
        6.3.1. Base architecture of the construction  67
        6.3.2. Handshaking mechanism and the write operation  67
        6.3.3. An incremental construction of the read operation  68
        6.3.4. Proof of the construction  71
        6.3.5. Cost of the algorithms  74
   6.4. Bibliographic notes  74

7. Atomic multivalued register construction  75
   7.1. From single-reader regular to multi-reader atomic  75
   7.2. Using an atomic control bit  75
   7.3. The algorithm  77
   7.4. Bibliographic notes  81
   7.5. Exercises  81

III. Snapshot objects  83

8. Collects and snapshots  85
   8.1. Collect object  85
        8.1.1. Definition and implementation  85
        8.1.2. A collect object has no sequential specification  86
   8.2. Snapshot object  87
        8.2.1. Definition  87
        8.2.2. The sequential specification of snapshot  87
        8.2.3. Non-blocking snapshot  88
        8.2.4. Wait-free snapshot  91
        8.2.5. The snapshot object construction is bounded wait-free  92
        8.2.6. The snapshot object construction is atomic  93
   8.3. Bounded atomic snapshot  94
        8.3.1. Double collect and helping  94
        8.3.2. Binary handshaking  95
        8.3.3. Bounded snapshot using handshaking  95
        8.3.4. Correctness  95
   8.4. Bibliographic notes  97

9. Immediate snapshot and iterated immediate snapshot  99
   9.1. Immediate snapshots  99
        9.1.1. Definition  99
        9.1.2. Block runs  100
        9.1.3. A one-shot implementation  100
   9.2. Fast renaming  102
        9.2.1. Snapshot-based renaming  103
        9.2.2. IS-based renaming  103
   9.3. Long-lived immediate snapshot  106
        9.3.1. Overview of the algorithm  106
        9.3.2. Proof of correctness  106
   9.4. Iterated immediate snapshot  106
        9.4.1. An equivalence between IIS and read-write  107
        9.4.2. Geometric representation of IIS  110

IV. Consensus objects  113

10. Consensus and universal construction  115
    10.1. Consensus object: specification  115
    10.2. A wait-free universal construction  116
         10.2.1. Deterministic objects  116
         10.2.2. Bounded wait-free universal construction  118
         10.2.3. Non-deterministic objects  119
    10.3. Bibliographic notes  119

11. Consensus number and the consensus hierarchy  121
    11.1. Consensus number  121
    11.2. Preliminary definitions  121
         11.2.1. Schedule, configuration and valence  121
         11.2.2. Bivalent initial configuration  122
         11.2.3. Critical configurations  123
    11.3. Consensus number of atomic registers  124
    11.4. Objects with consensus numbers 2  125
         11.4.1. Consensus from test&set objects  125
         11.4.2. Consensus from queue objects  126
         11.4.3. Consensus numbers of test&set and queue  127
    11.5. Objects of n-consensus type  128
    11.6. Objects whose consensus number is +∞  129
         11.6.1. Consensus from compare&swap objects  129
         11.6.2. Consensus from augmented queue objects  130
    11.7. Consensus hierarchy  130

V. Schedulers  133

12. Failure detectors  135
    12.1. Solving problems with failure detectors  135
         12.1.1. Failure patterns and failure detectors  135
         12.1.2. Algorithms using failure detectors  136
         12.1.3. Runs  137
         12.1.4. Consensus  137
         12.1.5. Implementing and comparing failure detectors  137
         12.1.6. Weakest failure detector  138
    12.2. Extracting Ω  138
         12.2.1. Overview of the reduction algorithm  138
         12.2.2. DAGs  139
         12.2.3. Asynchronous simulation  139
         12.2.4. BG-simulation  141
         12.2.5. Using consensus  142
         12.2.6. Extracting Ω  142
    12.3. Implementing Ω in an eventually synchronous shared memory system  145
         12.3.1. Introduction  145
         12.3.2. An omega construction  145
         12.3.3. Proof of correctness  147
         12.3.4. Discussion  148
    12.4. Bibliographic notes  148

13. Resilience  151
    13.1. Pre-agreement with Commit-Adopt  151
         13.1.1. Wait-free commit-adopt implementation  151
         13.1.2. Using commit-adopt  152
    13.2. Safe agreement and the power of simulation  153
         13.2.1. Solving safe agreement  153
         13.2.2. BG-simulation  154
    13.3. Bibliographic notes  155

14. Adversaries  157
    14.1. Non-uniform failure models  157
    14.2. Background  159
         14.2.1. Model  159
         14.2.2. Tasks  160
         14.2.3. The Commit-Adopt protocol  160
         14.2.4. The BG-simulation technique  161
    14.3. Non-uniform failures in shared-memory systems  161
         14.3.1. Survivor sets and cores  161
         14.3.2. Adversaries  162
         14.3.3. Failure patterns and environments  162
         14.3.4. Asymmetric progress conditions  163
    14.4. Characterizing superset-closed adversaries  163
         14.4.1. A topological …
Recommended publications
  • Fast and Correct Load-Link/Store-Conditional Instruction Handling in DBT Systems
    Fast and Correct Load-Link/Store-Conditional Instruction Handling in DBT Systems

    Abstract. Dynamic Binary Translation (DBT) requires the implementation of load-link/store-conditional (LL/SC) primitives for guest systems that rely on this form of synchronization. When targeting e.g. x86 host systems, LL/SC guest instructions are typically emulated using atomic Compare-and-Swap (CAS) instructions on the host. Whilst this direct mapping is efficient, this approach is problematic due to subtle differences between LL/SC and CAS semantics. In this paper, we demonstrate that this is a real problem, and we provide code examples that fail to execute correctly on QEMU and a commercial DBT system, which both use the CAS approach to LL/SC emulation. We then develop two novel and provably correct LL/SC emulation schemes: (1) a purely software-based scheme, which uses the DBT system's page translation cache for correctly selecting between fast, but unsynchronized, and slow, but fully synchronized memory accesses, and (2) a hardware-accelerated scheme that leverages hardware transactional memory (HTM) provided by the host. We have implemented these two schemes in the Synopsys DesignWare® ARC® nSIM DBT …

    [Figure 1: The compare-and-swap instruction atomically reads from a memory address, and compares the current value with an expected value. If these values are equal, then a new value is written to memory. The instruction indicates (usually through a return register or flag) whether or not the value was successfully written.]
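    The compare-and-swap behaviour of Figure 1, and the kind of semantic gap that makes it an imperfect stand-in for LL/SC, can be sketched as follows. This is an illustrative model only: the `Cell` class is ours, and a lock stands in for hardware atomicity.

```python
import threading

class Cell:
    """A memory cell with an atomic compare-and-swap, modelled with a lock."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._value

    def cas(self, expected, new):
        """Atomically: if the value equals `expected`, write `new`, return True."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# CAS only compares values, so it cannot detect that the cell went A -> B -> A
# between the snapshot and the update (the classic ABA hazard).
cell = Cell("A")
snapshot = cell.read()            # reader takes a snapshot: "A"
cell.cas("A", "B")                # another thread writes B ...
cell.cas("B", "A")                # ... and then restores A
assert cell.cas(snapshot, "C")    # CAS succeeds: the value merely *looks* unchanged
```

    A store-conditional would fail here, because it detects any intervening write to the reserved location, while CAS succeeds whenever the value compares equal; this is one flavour of the subtle LL/SC-versus-CAS differences the abstract alludes to.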
  • Edsger W. Dijkstra: a Commemoration
    Edsger W. Dijkstra: a Commemoration

    Krzysztof R. Apt (CWI, Amsterdam, The Netherlands, and MIMUW, University of Warsaw, Poland) and Tony Hoare (Department of Computer Science and Technology, University of Cambridge, and Microsoft Research Ltd, Cambridge, UK), editors. arXiv:2104.03392v1 [cs.GL] 7 Apr 2021.

    Abstract. This article is a multiauthored portrait of Edsger Wybe Dijkstra that consists of testimonials written by several friends, colleagues, and students of his. It provides unique insights into his personality, working style and habits, and his influence on other computer scientists, as a researcher, teacher, and mentor.

    Contents: Preface; Tony Hoare; Donald Knuth; Christian Lengauer; K. Mani Chandy; Eric C.R. Hehner; Mark Scheevel; Krzysztof R. Apt; Niklaus Wirth; Lex Bijlsma; Manfred Broy; David Gries; Ted Herman; Alain J. Martin; J Strother Moore; Vladimir Lifschitz; Wim H. Hesselink; Hamilton Richards; Ken Calvert; David Naumann; David Turner; J.R. Rao; Jayadev Misra; Rajeev Joshi; Maarten van Emden; Two Tuesday Afternoon Clubs.

    Preface. Edsger Dijkstra was perhaps the best known, and certainly the most discussed, computer scientist of the seventies and eighties. We both knew Dijkstra, though each of us in different ways, and we both were aware that his influence on computer science was not limited to his pioneering software projects and research articles. He interacted with his colleagues by way of numerous discussions, extensive letter correspondence, and hundreds of so-called EWD reports that he used to send to a select group of researchers.
  • Glossary of Technical and Scientific Terms (French / English)
    Glossary of Technical and Scientific Terms (French / English). Version 11 (alphabetical sorting). Jean-Luc JOULIN, October 12, 2015.

    Foreword. Aim of this handbook: this glossary is intended to help with the translation of technical words between English and French. It is not meant to be exhaustive, but aims to help the reader understand and remember the translation of some words used in various technical domains. Words are sorted alphabetically. This glossary is distributed under the Creative Commons (BY NC ND) licence and can be freely redistributed. For further information about the Creative Commons (BY NC ND) licence, browse the site: http://creativecommons.org. If you are interested in updates of this glossary, send me an email to be put on my diffusion list: [email protected] or [email protected].

    Warning. Translations given in this glossary are for information only, and their use is under the responsibility of the reader. If you see a mistake in this document, thanks in advance for reporting it to me so I can correct it in further versions of this document.

    Notes. Sorting of words: words are sorted in alphabetical order and associated with their main category. Differences between terms: some words differ between British and American English; this glossary shows the most commonly used terms, which generally come from British English, and words coming from American English are indicated with the suffix "US". Differences in spelling: spelling varies according to country, companies, universities, and so on; words ending in -ise, -isation and -yse can also be written with -ize, -ization and -yze.
  • Lecture 10: Fault Tolerance Fault Tolerant Concurrent Computing
    02/12/2014. Lecture 10: Fault Tolerance. (CA463 Lecture Notes, Martin Crane, 2014)

    Fault Tolerant Concurrent Computing. The main principles of fault tolerant programming are: fault detection (knowing that a fault exists) and fault recovery (having atomic instructions that can be rolled back in the event of a failure being detected). From the system's viewpoint, it is quite possible that the fault is in the program that is attempting the recovery. Attempting to recover from a non-existent fault can be as disastrous as a fault occurring.

    Fault Tolerant Concurrent Computing (cont'd). We have seen replication used for tasks, to allow a program to recover from a fault that causes a task to abruptly terminate. The same principle is also used at the system level to build fault tolerant systems: critical systems are replicated, and system action is based on a majority vote of the replicated subsystems. This redundancy allows the system to continue operating successfully when several subsystems develop faults.

    Types of Failures in Concurrent Systems. Initially dead processes (benign fault): a subset of the processes never even start. Crash model (benign fault): a process executes as per its local algorithm until a certain point, where it stops indefinitely and never restarts. Byzantine behaviour (malign fault): the process may execute any possible local algorithm and may arbitrarily send or receive messages.

    A Hierarchy of Failure Types. A dead process is a special case of a crashed process: the crashed process crashes before it starts executing. A crashed process is a special case of a Byzantine process: the Byzantine process crashes, and then stays in that state for all future transitions.

    Types of Fault Tolerance Algorithms. Robust algorithms: correct processes should continue behaving thus, despite failures.
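    The majority-vote redundancy described in the notes can be sketched as a small function (a toy model; the function name and scenario are ours):

```python
from collections import Counter

def majority_vote(replica_outputs):
    """Return the value agreed by a strict majority of replicas, or None.

    Faulty replicas are masked as long as more than half of the
    replicas still agree on a value.
    """
    if not replica_outputs:
        return None
    value, count = Counter(replica_outputs).most_common(1)[0]
    return value if count > len(replica_outputs) // 2 else None

# Three replicated subsystems; one develops a fault and returns garbage.
assert majority_vote([42, 42, 17]) == 42   # the majority masks the faulty replica
assert majority_vote([1, 2, 3]) is None    # no majority: the vote yields no answer
```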
  • Actor-Based Concurrency by Srinivas Panchapakesan
    Concurrency in Java and Actor-based Concurrency using Scala. By Srinivas Panchapakesan.

    Concurrency. Concurrent computing is a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel. Concurrent programs can be executed sequentially on a single processor by interleaving the execution steps of each computational process, or executed in parallel by assigning each computational process to one of a set of processors that may be close together or distributed across a network. The main challenges in designing concurrent programs are ensuring the correct sequencing of the interactions or communications between different computational processes, and coordinating access to resources that are shared among processes.

    Advantages of Concurrency. Almost every computer nowadays has several CPUs or several cores within one CPU, and the ability to leverage these multi-cores can be the key to a successful high-volume application. Increased application throughput: parallel execution of a concurrent program allows the number of tasks completed in a certain time period to increase. High responsiveness: input/output-intensive applications mostly wait for input or output operations to complete, and concurrent programming allows the time that would otherwise be spent waiting to be used for another task. More appropriate program structure: some problems and problem domains are well-suited to representation as concurrent tasks or processes.

    Process vs Threads. Process: a process runs independently and isolated from other processes; it cannot directly access shared data in other processes, and its resources (e.g. memory and CPU time) are allocated to it via the operating system. Threads: threads are so-called lightweight processes which have their own call stack but can access shared data.
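    The second challenge named above, coordinating access to shared resources, can be seen in a minimal multi-threaded counter (an illustrative sketch, not from the slides):

```python
import threading

# Several threads update shared state; a Lock ensures that the
# read-modify-write on `counter` is never interleaved mid-update.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:           # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000  # with the lock, no update is lost
```

    Without the lock, two threads could read the same old value and both write back old + 1, losing an increment; this is exactly the "coordinating access to shared resources" problem the excerpt describes.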
  • Mastering Concurrent Computing Through Sequential Thinking
    Mastering Concurrent Computing through Sequential Thinking

    Review articles. DOI: 10.1145/3363823. A 50-year history of concurrency. By Sergio Rajsbaum and Michel Raynal.

    We do not have good tools to build efficient, scalable, and reliable concurrent systems. Concurrency was once a specialized discipline for experts, but today the challenge is for the entire information technology community because of two disruptive phenomena: the development of networking communications, and the end of the ability to increase processor speed at an exponential rate. Increases in performance come through concurrency, as in multicore architectures. Concurrency is also critical to achieve fault-tolerant, distributed services, as in global databases, cloud computing, and blockchain applications.

    Concurrent computing through sequential thinking. Right from the start in the 1960s, the main way of dealing with concurrency has been by reduction to sequential reasoning. Transforming problems in the concurrent domain into simpler problems in the sequential domain yields benefits for specifying, implementing, and verifying concurrent programs. It is a two-sided strategy, together with a bridge connecting the two sides. First, a sequential specification of an object (or service) that can be ac…

    Key insights: A main way of dealing with the enormous challenges of building concurrent systems is by reduction to sequential thinking; over more than 50 years, more sophisticated techniques have been developed to build complex systems in this way. The strategy starts by designing …

    "I must appeal to the patience of the wondering readers, suffering as I am from the sequential nature of human communication." (E.W. Dijkstra)
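    The article's central notion, a sequential specification against which concurrent behaviour is judged, can be made concrete with a toy counter (our illustration, not the authors' code):

```python
class SequentialCounter:
    """A sequential specification: the object's legal behaviour described by
    ordinary sequential code, with no mention of threads or interleavings.
    A concurrent implementation is then judged correct (e.g. linearizable)
    when each of its concurrent histories matches some ordering of
    operations allowed by this specification."""

    def __init__(self):
        self.value = 0

    def inc(self):
        self.value += 1
        return self.value

    def read(self):
        return self.value

# The specification answers "what should a counter do?" sequentially.
spec = SequentialCounter()
assert [spec.inc(), spec.inc(), spec.read()] == [1, 2, 2]
```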
  • A Separation Logic for Refining Concurrent Objects
    A Separation Logic for Refining Concurrent Objects. Aaron Turon and Mitchell Wand, Northeastern University ({turon, wand}@ccs.neu.edu).

    Abstract. Fine-grained concurrent data structures are crucial for gaining performance from multiprocessing, but their design is a subtle art. Recent literature has made large strides in verifying these data structures, using either atomicity refinement or separation logic with rely-guarantee reasoning. In this paper we show how the ownership discipline of separation logic can be used to enable atomicity refinement, and we develop a new rely-guarantee method that is localized to the definition of a data structure. We present the first semantics of separation logic that is sensitive to atomicity, and show how to control this sensitivity through ownership. The result is a logic that enables compositional reasoning about atomicity and interference, even for programs that use fine-grained synchronization and dynamic memory allocation.

    The code fragment

        do { tmp = *C; } until CAS(C, tmp, tmp+1);

    implements inc using an optimistic approach: it takes a snapshot of the counter without acquiring a lock, computes the new value of the counter, and uses compare-and-set (CAS) to safely install the new value. The key is that CAS compares *C with the value of tmp, atomically updating *C with tmp + 1 and returning true if they are the same, and just returning false otherwise. Even for this simple data structure, the fine-grained implementation significantly outperforms the lock-based implementation [17]. Likewise, even for this simple example, we would prefer to think of the counter in a more abstract way when reasoning about its clients, giving it the following specification: …
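    The optimistic `inc` loop quoted in the abstract can be transcribed into a runnable sketch. Python stands in for the C-like pseudocode, and a lock inside `cas` models the hardware's atomicity (both are our modelling assumptions):

```python
import threading

class AtomicCounter:
    """Counter exposing a compare-and-set; a lock models hardware atomicity."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._value

    def cas(self, expected, new):
        """Atomically install `new` iff the current value equals `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def inc(self):
        """The optimistic loop from the abstract: snapshot, then CAS until it lands."""
        while True:
            tmp = self.read()           # snapshot without holding any lock here
            if self.cas(tmp, tmp + 1):  # retry if another thread won the race
                return

c = AtomicCounter()
workers = [threading.Thread(target=lambda: [c.inc() for _ in range(1000)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
assert c.read() == 4000  # no increment is lost, yet inc itself holds no lock
```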
  • Concurrent Computing
    Concurrent Computing. CSCI 201: Principles of Software Development. Jeffrey Miller, Ph.D., [email protected]. USC CSCI 201L.

    Outline: Threads; Multi-Threaded Code; CPU Scheduling; Program.

    Thread Overview. Looking at the Windows taskbar, you will notice more than one process running concurrently, and looking at the Windows task manager, you'll see a number of additional processes running. But how many processes are actually running at any one time? Multiple processors, multiple cores, and hyper-threading will determine the actual number of processes running.

    Redundant Hardware. [Slide diagram: multiple processors; a single-core CPU (ALU, cache, registers); multiple cores (ALU 1 / registers 1, ALU 2 / registers 2, shared cache).] Hyper-threading has firmware logic that appears to the OS to contain more than one core when in fact only one physical core exists.

    Programs vs Processes vs Threads. Programs: an executable file residing in memory, typically in secondary storage. Processes: an executing instance of a program; sometimes this is referred to as a task. Threads: any section of code within a process that can execute at what appears to be simultaneously; a thread shares the same resources allotted to the process by the OS kernel.

    Concurrent Programming. Multi-threaded programming: multiple sections of a process, or multiple processes, executing at what appears to be simultaneously in a single core; the purpose of multi-threaded programming is to add functionality to your program. Parallel programming: multiple sections of …
  • Lock-Free Programming
    Lock-Free Programming. Geoff Langdale. L31_Lockfree.

    Desynchronization. This is an interesting topic, and it will (may?) become even more relevant with near-ubiquitous multi-processing. Still: please don't rewrite any Project 3s!

    Synchronization. We received notification via the web form that one group has passed the P3/P4 test suite. Congratulations! We will be releasing a version of the fork-wait bomb which doesn't make as many assumptions about task ids; please look for it today and let us know right away if it causes any trouble for you. Personal and group disk quotas have been grown in order to reduce the number of people running out over the weekend; if you try hard enough you'll still be able to do it.

    Outline: problems with locking; definition of lock-free programming; examples of lock-free programming; Linux OS uses of lock-free data structures; miscellanea (higher-level constructs, 'wait-freedom'); conclusion.

    Problems with Locking. This list is more or less contentious, and not equally relevant to all locking situations: deadlock; priority inversion; convoying; "async-signal-safety"; kill-tolerant availability; pre-emption tolerance; overall performance.

    Problems with Locking 2. Deadlock: processes that cannot proceed because they are waiting for resources that are held by processes that are waiting for… Priority inversion: low-priority processes hold a lock required by a higher-priority process; priority inheritance is a possible solution.
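    Of the locking problems listed, deadlock is the classic one: two threads acquiring the same locks in opposite orders can each end up waiting on the other forever. A standard remedy (our sketch, not from these slides) is to acquire locks in a fixed global order:

```python
import threading

# Two accounts, each protected by its own lock. Concurrent transfers in
# opposite directions would deadlock if each thread locked source-then-
# destination; sorting the lock names imposes one global acquisition order.
accounts = {"a": 100, "b": 50}
locks = {name: threading.Lock() for name in accounts}

def transfer(src, dst, amount):
    first, second = sorted((src, dst))   # fixed global lock order: no deadlock
    with locks[first], locks[second]:
        accounts[src] -= amount
        accounts[dst] += amount

t1 = threading.Thread(target=transfer, args=("a", "b", 30))
t2 = threading.Thread(target=transfer, args=("b", "a", 10))
t1.start(); t2.start()
t1.join(); t2.join()
assert accounts == {"a": 80, "b": 70}
```

    Lock-free programming, the topic of these slides, sidesteps the problem differently: with no locks held, no thread can ever block another indefinitely.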
  • Download Distributed Systems Free Ebook
    DISTRIBUTED SYSTEMS: DOWNLOAD FREE BOOK. Maarten van Steen, Andrew S. Tanenbaum | 596 pages | 01 Feb 2017 | Createspace Independent Publishing Platform | 9781543057386 | English | United States

    Distributed Systems: The Complete Guide. The hope is that together, the system can maximize resources and information while preventing failures; if one system fails, it won't affect the availability of the service. Related algorithms include: Banker's algorithm, Dijkstra's algorithm, DJP algorithm, Prim's algorithm, Dijkstra-Scholten algorithm, Dekker's algorithm (generalization), Smoothsort, Shunting-yard algorithm, marking algorithm, concurrent algorithms, distributed algorithms, deadlock prevention algorithms, mutual exclusion algorithms, self-stabilizing systems. For the first time, computers would be able to send messages to other systems with a local IP address. The messages passed between machines contain forms of data that the systems want to share, such as databases and objects. Also known as distributed computing and distributed databases, a distributed system is a collection of independent components located on different machines that share messages with each other in order to achieve common goals. To prevent infinite loops, running the code requires some amount of Ether. As mentioned in many places, one of which is this great article, you cannot have consistency and availability without partition tolerance. Because it works in batch jobs, a problem arises: if your job fails, you need to restart the whole thing. While in a voting system an attacker need only add nodes to the network (which is easy, as free access to the network is a design target), in a CPU-power-based scheme an attacker faces a physical limitation: getting access to more and more powerful hardware.
  • Automated Distributed Implementation of Component-Based Models with Priorities Borzoo Bonakdarpour, Marius Bozga, Jean Quilbeuf
    Automated Distributed Implementation of Component-Based Models with Priorities. Borzoo Bonakdarpour, Marius Bozga, Jean Quilbeuf.

    Citation: Borzoo Bonakdarpour, Marius Bozga, Jean Quilbeuf. Automated Distributed Implementation of Component-Based Models with Priorities. International Conference on Embedded Software (EMSOFT 2011), Oct 2011, Taipei, Taiwan. pp. 59-68, 10.1145/2038642.2038654. hal-00722405. HAL Id: hal-00722405, https://hal.archives-ouvertes.fr/hal-00722405. Submitted on 1 Aug 2012.

    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Authors: Borzoo Bonakdarpour (School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, Canada, N2L 3G1, [email protected]); Marius Bozga (VERIMAG, Centre Équation, 2 avenue de Vignate, 38610 Gières, France, [email protected]); Jean Quilbeuf (VERIMAG, Centre Équation, 2 avenue de Vignate, 38610 Gières, France, [email protected]).

    ABSTRACT. In this paper, we introduce a novel model-based approach for constructing correct distributed implementation of component-based models constrained by priorities. Categories: [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs (Logic of programs); I.2.2 [Artificial Intelligence]: Automatic Programming.
  • Empirical Study of Concurrent Programming Paradigms
    UNLV Theses, Dissertations, Professional Papers, and Capstones. May 2016.

    Empirical Study of Concurrent Programming Paradigms. Patrick M. Daleiden, University of Nevada, Las Vegas.

    Follow this and additional works at: https://digitalscholarship.unlv.edu/thesesdissertations. Part of the Computer Sciences Commons.

    Repository Citation: Daleiden, Patrick M., "Empirical Study of Concurrent Programming Paradigms" (2016). UNLV Theses, Dissertations, Professional Papers, and Capstones. 2660. http://dx.doi.org/10.34917/9112055

    This Thesis is protected by copyright and/or related rights. It has been brought to you by Digital Scholarship@UNLV with permission from the rights-holder(s). You are free to use this Thesis in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s) directly, unless additional rights are indicated by a Creative Commons license in the record and/or on the work itself. This Thesis has been accepted for inclusion in UNLV Theses, Dissertations, Professional Papers, and Capstones by an authorized administrator of Digital Scholarship@UNLV. For more information, please contact [email protected].

    EMPIRICAL STUDY OF CONCURRENT PROGRAMMING PARADIGMS. By Patrick M. Daleiden. Bachelor of Arts (B.A.) in Economics, University of Notre Dame, Notre Dame, IN, 1990. Master of Business Administration (M.B.A.), University of Chicago, Chicago, IL, 1993. A thesis submitted in partial fulfillment of the requirements for the Master of Science in Computer Science, Department of Computer Science, Howard R. Hughes College of Engineering, The Graduate College, University of Nevada, Las Vegas, May 2016. © Patrick M. …