Louvre: Lightweight Ordering Using Versioning for Release Consistency

Total Pages: 16

File Type: PDF, Size: 1020 KB

Louvre: Lightweight Ordering Using Versioning for Release Consistency

Pranith Kumar, Prasun Gera, Hyojong Kim, Hyesoon Kim
Georgia Institute of Technology, School of Computer Science, College of Computing, Atlanta, GA, USA
{pranith, prasun.gera, hyojong.kim, [email protected]
arXiv:1710.10746v3 [cs.AR] 5 Dec 2017

Abstract—Fence instructions are fundamental primitives that ensure consistency in a weakly consistent shared memory multi-core processor. The execution cost of these instructions is significant and adds a non-trivial overhead to parallel programs. In a naïve architecture implementation, we track the ordering constraints imposed by a fence by its entry in the reorder buffer, and its execution overhead entails stalling the processor's pipeline until the store buffer is drained and also conservatively invalidating speculative loads. These actions create a cascading effect of increased overhead on the execution of the following instructions in the program. We find these actions to be overly restrictive and that they can be further relaxed, thereby allowing aggressive optimizations. The current work proposes a lightweight mechanism in which we assign ordering tags, called versions, to load and store instructions when they reside in the load/store queues and the write buffer. The version assigned to a memory access allows us to fully exploit the relaxation allowed by the weak consistency model and restricts its execution in such a way that the ordering constraints imposed by the model are satisfied. We utilize the information captured through the assigned versions to reduce stalls caused by waiting for the store buffer to drain and to avoid unnecessary squashing of speculative loads, thereby minimizing the re-execution penalty. This method is particularly effective for the release consistency model, which employs uni-directional fence instructions. We show that this mechanism reduces the ordering instruction latency by 39.6% and improves program performance by 11% on average over the baseline implementation.
I. INTRODUCTION

As core counts have continued to increase over the last decade, the shared memory programming model has become an attractive choice for high performance, scientific, and general purpose parallel applications. The use of this model is facilitated by high-level libraries and frameworks such as Cilk [1] and OpenMP [2]. The shared memory programming model necessitates a consistent view of memory for programmers to reason about the correctness of parallel programs. This consistent view is dictated by the memory consistency model of the processor.

Higher level languages standardize on a consistency model to maintain portability across architectures. Languages such as C and C++ have standardized on data-race-free memory models [3] that guarantee that accesses to synchronization objects (or atomics) are sequentially consistent. Sequential consistency for such accesses is ensured by the compiler, which generates appropriate ordering instructions. These instructions indicate an ordering point in the instruction stream, and when one is encountered by the processor, certain classes of optimizations on the following memory accesses are disabled to ensure consistency [4]–[6]. Optimizations such as speculative execution, prefetching, and buffering of store instructions are implemented by modern processors to reduce the cost of memory accesses [7]–[9]. Fence instructions restrict optimizations which exploit instruction-level parallelism, disable speculative execution of post-fence loads and stores, and introduce stalls to drain the store buffer.

Researchers have proposed techniques that allow a processor to employ hardware optimizations when ordering instructions are scheduled, thereby reducing their execution overhead [10], [11]. One such technique utilizes check-pointing along with aggressive speculative execution of post-fence loads and stores. The check-pointed state is tracked and updated so that it enables rollback in case of ordering violations [12]–[14]. Another promising approach is to increase the granularity of enforcing consistency, thereby amortizing the overhead of ordering instructions. This technique has been applied both in hardware [15] and at an algorithmic level [16], [17]. Further, to increase the scope for speculative execution, based on the observation that ordering violations rarely occur at runtime [12], studies have proposed techniques that identify scenarios in which violations are likely to occur and handle them in special ways. These techniques use annotations that tag ordering instructions [18], hardware extensions [13], or run-time information [19]. However, all these techniques were proposed in the context of stronger memory consistency models such as that of x86, and little work has explored reducing the overhead of ordering instructions, specifically in weak memory model architectures.

Release consistency (RCsc) [20] is one of the most widely implemented weak memory models in recent processors. Variants of this model have been adopted in architectures such as Itanium, ARM64, and PowerPC, and it is also being considered for adoption in RISC-V processors [21]. In this memory model, memory accesses are classified as ordinary, acquire, or release operations. This classification is done using special instructions in the ISA. A load-acquire operation and a store-release operation place ordering constraints on the memory accesses following or preceding them, respectively. This is in contrast to a full fence, which places ordering constraints on both sets of memory accesses. These acquire and release operations allow more leeway for possible re-ordering of memory accesses, thereby allowing a processor implementation to reduce the overhead of ensuring memory consistency.

[Fig. 1. Distribution of fences (two-way vs. one-way) in ARM64]

We propose LOUVRE, a low-overhead hardware extension that ensures memory ordering using versioning and helps reduce the overhead of fence instructions in a processor implementing the release consistency model. Our proposed mechanism adds ordering tag fields to the load/store queues and the write buffer. In the proposed technique, we assign versions to memory instructions when they are issued. We update the version being assigned upon issuing an ordering instruction. Instead of tracking the ordering constraints by the entry of the fence instruction in the reorder buffer (ROB), we track them in a separate FIFO queue. This allows us to retire the ordering instruction from the ROB while preserving the constraint that needs to be enforced. Using the version assigned to memory accesses, we identify and rectify any consistency violations at run-time without the need for maintaining any global state or inter-core communication. Compared to versioning, traditional implementations of ordering instructions stall the pipeline to drain the store buffer to ensure order between stores in the buffer and memory accesses following the ordering instruction. In the proposed versioning mechanism, we issue and execute instructions that follow the ordering instruction without draining the store buffer. Since we version the stores in the store buffer, we enforce the correct ordering among them utilizing these versions. Furthermore, the versioning mechanism is efficient for one-way fence instructions, since it allows the processor to speculatively issue and execute instructions following a store-release instruction before the store-release instruction itself has finished execution.
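As a rough illustration of the versioning idea just described (a minimal software sketch under our own assumptions, not the paper's actual hardware; the class and function names are hypothetical), the following C++ model tags each buffered store with the version current at issue time, lets an ordering instruction simply bump the version, and allows a store to drain only when no older-version store is still pending:

```cpp
#include <cstdint>
#include <cstdio>
#include <deque>
#include <utility>
#include <vector>

// Hypothetical, simplified software model of the versioning idea: every store
// is tagged with the version that is current when it issues, and an ordering
// instruction only bumps the version instead of stalling for a drain.
struct StoreEntry {
    uint64_t addr;
    uint64_t data;
    uint32_t version;  // ordering tag assigned at issue time
};

class VersionedStoreBuffer {
public:
    void issue_store(uint64_t addr, uint64_t data) {
        buf_.push_back({addr, data, current_version_});
    }
    // A fence/release only advances the version; younger instructions keep issuing.
    void issue_fence() { ++current_version_; }

    // Entry i may drain to memory only if no pending store carries an older
    // version; stores within the same version (epoch) may drain in any order.
    bool can_drain(size_t i) const {
        for (const StoreEntry& e : buf_)
            if (e.version < buf_[i].version) return false;
        return true;
    }

    bool drain_one(std::vector<std::pair<uint64_t, uint64_t>>& mem) {
        for (size_t i = 0; i < buf_.size(); ++i) {
            if (can_drain(i)) {
                mem.emplace_back(buf_[i].addr, buf_[i].data);
                buf_.erase(buf_.begin() + i);
                return true;
            }
        }
        return false;
    }

private:
    std::deque<StoreEntry> buf_;
    uint32_t current_version_ = 0;
};

int main() {
    VersionedStoreBuffer sb;
    std::vector<std::pair<uint64_t, uint64_t>> mem;
    sb.issue_store(0x10, 1);
    sb.issue_store(0x20, 2);
    sb.issue_fence();  // stores issued below must not drain before those above
    sb.issue_store(0x30, 3);
    while (sb.drain_one(mem)) {}
    for (const auto& [a, d] : mem)
        std::printf("0x%llx <- %llu\n", (unsigned long long)a, (unsigned long long)d);
    return 0;
}
```

Stores carrying the same version can leave the buffer in any order; only the relative order between versions is enforced, which is the leeway that avoids stalling the pipeline on every fence.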
Although we did not find any hardware optimizations in the literature specifically targeting one-way fence instructions, the work closest in spirit to our current work is zFence [22]. As in the current work, the authors of the zFence paper focus on reducing the stalls caused by waiting for the store buffer to drain. They achieve this by introducing a new coherence state which indicates exclusive permission to a cache line without the cache line being available. When all the stores waiting in the store buffer acquire this exclusive

II. BACKGROUND AND MOTIVATION

A. Fence instructions

Ordering instructions can be classified as either uni-directional (also known as one-way fences) or bi-directional (also known as full fences). Instructions such as mfence and dmb are bi-directional in that they do not allow any re-ordering across them. The restrictions placed by such fence instructions on memory accesses conflict with the optimization techniques used to hide the latency of long memory accesses in modern processors. Efforts to reduce this overhead have led to the design of the release consistency model [20]. This model prescribes acquire and release semantics, which are achieved by uni-directional fence instructions. These instructions allow re-ordering of memory accesses across them in only one direction and have either acquire or release semantics. Slight variations of this conventional consistency model are implemented in architectures such as Itanium and ARM64. On ARM64, a load-acquire instruction (also called a synchronizing load, ldar) enforces an order between its load and all the following memory accesses. A store-release instruction (also called a synchronizing store, stlr) enforces an order between its store and all the preceding memory accesses. A store-release also ensures that the store is multi-copy atomic; the store is made visible to all other processors simultaneously [23]. Additionally, a fence instruction is sequentially consistent with respect to
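To make the one-way semantics of ldar/stlr concrete, here is a standard C++11 message-passing idiom (our own sketch, not an example from the paper); on ARM64, mainstream compilers typically lower the release store to stlr and the acquire load to ldar:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // ordinary store
    ready.store(true, std::memory_order_release);  // typically stlr on ARM64:
                                                   // earlier accesses cannot move below it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) // typically ldar on ARM64:
        ;                                          // later accesses cannot move above it
    assert(payload == 42);                         // guaranteed once ready is seen as true
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```

Because each fence is one-way, accesses after the stlr may still be reordered before it and accesses before the ldar may still be reordered after it, whereas a full dmb would forbid movement in both directions.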
Recommended publications
  • Memory Consistency
Lecture 9: Memory Consistency (Parallel Computing, Stanford CS149, Winter 2019)
Midterm ▪ Feb 12 ▪ Open notes ▪ Practice midterm
Shared Memory Behavior ▪ Intuition says loads should return the latest value written - What is latest? - Coherence: only one memory location - Consistency: apparent ordering for all locations - Order in which memory operations performed by one thread become visible to other threads ▪ Affects - Programmability: how programmers reason about program behavior - Allowed behavior of multithreaded programs executing with shared memory - Performance: limits HW/SW optimizations that can be used - Reordering memory operations to hide latency
Today: what you should know ▪ Understand the motivation for relaxed consistency models ▪ Understand the implications of relaxing W→R ordering
Today: who should care ▪ Anyone who: - Wants to implement a synchronization library - Will ever work a job in kernel (or driver) development - Seeks to implement lock-free data structures * - Does any of the above on ARM processors ** (* Topic of a later lecture; ** For reasons to be described later)
Memory coherence vs. memory consistency ▪ Memory coherence defines requirements for the observed behavior of reads and writes to the same memory location - All processors must agree on the order of reads/writes to X - In other words: it is possible to put all operations involving X on a timeline such that the observations of all processors are consistent with that timeline (example of an observed chronology of operations on address X: P0 write 5; P1 read (5); P2 write 10; P2 write 11; P1 read (11)) ▪ Memory consistency defines the behavior of reads and writes to different locations (as observed by other processors) - Coherence only guarantees that writes to address X will eventually propagate to other processors - Consistency deals with when writes to X propagate to other processors, relative to reads and writes to other addresses
Coherence vs.
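As a concrete instance of the W→R relaxation mentioned above (a small litmus-test sketch of our own, not taken from the lecture), both threads below may observe r1 == 0 and r2 == 0 on hardware with store buffers, because each store can still be buffered when the other thread's load executes; making all four accesses memory_order_seq_cst rules that outcome out:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void t1() {
    x.store(1, std::memory_order_relaxed);   // W
    r1 = y.load(std::memory_order_relaxed);  // R: may effectively pass the store above
}

void t2() {
    y.store(1, std::memory_order_relaxed);
    r2 = x.load(std::memory_order_relaxed);
}

int main() {
    std::thread a(t1), b(t2);
    a.join();
    b.join();
    // With relaxed (or even x86-TSO) ordering, r1 == 0 && r2 == 0 is a legal outcome;
    // with memory_order_seq_cst on all four accesses it is not.
    std::printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}
```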
  • Memory Consistency: In a Distributed Memory System, References to Memory in Remote Processors Do Not Take Place Immediately (Outline for Lecture 20)
Outline for Lecture 20: I. Memory consistency. II. Consistency models not requiring synch. operations: strict consistency, sequential consistency, PRAM & processor consistency. III. Consistency models requiring synch. operations: weak consistency, release consistency.
Memory consistency: In a distributed memory system, references to memory in remote processors do not take place immediately. This raises a potential problem. Suppose that:
• The value of a particular memory word in processor 2's local memory is 0.
• Then processor 1 writes the value 1 to that word of memory. Note that this is a remote write.
• Processor 2 then reads the word. But, being local, the read occurs quickly, and the value 0 is returned.
What's wrong with this? This situation can be diagrammed like this (the horizontal axis represents time):
P1: W(x)1
P2: R(x)0
Depending upon how the program is written, it may or may not be able to tolerate a situation like this. But, in any case, the programmer must understand what can happen when memory is accessed in a DSM system.
Consistency models: A consistency model is essentially a contract between the software and the memory. It says that if the software agrees to obey certain rules, the memory promises to store and retrieve the expected values. (© 1997 Edward F. Gehringer, CSC/ECE 501 Lecture Notes, page 220)
Strict consistency: The most obvious consistency model is strict consistency. Strict consistency: Any read to a memory location x returns the value stored by the most recent write operation to x. This definition implicitly assumes the existence of a global clock, so that the determination of "most recent" is unambiguous.
  • Towards Shared Memory Consistency Models for GPUs
    TOWARDS SHARED MEMORY CONSISTENCY MODELS FOR GPUS by Tyler Sorensen A thesis submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Bachelor of Science School of Computing The University of Utah March 2013 FINAL READING APPROVAL I have read the thesis of Tyler Sorensen in its final form and have found that (1) its format, citations, and bibliographic style are consistent and acceptable; (2) its illustrative materials including figures, tables, and charts are in place. Date Ganesh Gopalakrishnan ABSTRACT With the widespread use of GPUs, it is important to ensure that programmers have a clear understanding of their shared memory consistency model i.e. what values can be read when issued concurrently with writes. While memory consistency has been studied for CPUs, GPUs present very different memory and concurrency systems and have not been well studied. We propose a collection of litmus tests that shed light on interesting visibility and ordering properties. These include classical shared memory consistency model properties, such as coherence and write atomicity, as well as GPU specific properties e.g. memory visibility differences between intra and inter block threads. The results of the litmus tests are determined by feedback from industry experts, the limited documentation available and properties common to all modern multi-core systems. Some of the behaviors remain unresolved. Using the results of the litmus tests, we establish a formal state transition model using intuitive data structures and operations. We implement our model in the Murphi modeling language and verify the initial litmus tests.
  • Memory Ordering: a Value-Based Approach
Memory Ordering: A Value-Based Approach
Harold W. Cain (Computer Sciences Dept., Univ. of Wisconsin-Madison, [email protected]) and Mikko H. Lipasti (Dept. of Elec. and Comp. Engr., Univ. of Wisconsin-Madison, [email protected])
Abstract: Conventional out-of-order processors employ a multi-ported, fully-associative load queue to guarantee correct memory reference order both within a single thread of execution and across threads in a multiprocessor system. As improvements in process technology and pipelining lead to higher clock frequencies, scaling this complex structure to accommodate a larger number of in-flight loads becomes difficult if not impossible. Furthermore, each access to this complex structure consumes excessive amounts of energy. In this paper, we solve the associative load queue scalability problem by completely eliminating the associative load queue. Instead, data dependences and memory consistency constraints are enforced by simply re-executing load
... queues, etc.) used to find independent operations and correctly execute them out of program order are often constrained by clock cycle time. In order to decrease clock cycle time, the size of these conventional structures must usually decrease, also decreasing IPC. Conversely, IPC may be increased by increasing their size, but this also increases their access time and may degrade clock frequency. There has been much recent research on mitigating this negative feedback loop by scaling structures in ways that are amenable to high clock frequencies without negatively affecting IPC. Much of this work has focused on the instruction issue queue, physical register file, and bypass paths, but very little has focused on the load queue or store queue [1][18][21].
  • On the Coexistence of Shared-Memory and Message-Passing in the Programming of Parallel Applications
On the Coexistence of Shared-Memory and Message-Passing in the Programming of Parallel Applications. J. Cordsen (GMD FIRST, Rudower Chaussee, Berlin, Germany) and W. Schröder-Preikschat (University of Potsdam, Am Neuen Palais, Potsdam, Germany).
Abstract: Interoperability in non-sequential applications requires communication to exchange information using either the shared-memory or message-passing paradigm. In the past, the communication paradigm in use was determined through the architecture of the underlying computing platform. Shared-memory computing systems were programmed to use shared-memory communication, whereas distributed-memory architectures were running applications communicating via message-passing. Current trends in the architecture of parallel machines are based on shared-memory and distributed-memory. For scalable parallel applications, in order to maintain transparency and efficiency, both communication paradigms have to coexist. Users should not be obliged to know when to use which of the two paradigms. On the other hand, the user should be able to exploit either of the paradigms directly in order to achieve the best possible solution. The paper presents the VOTE communication support system. VOTE provides coexistent implementations of shared-memory and message-passing communication. Applications can change the communication paradigm dynamically at runtime, and thus are able to employ the underlying computing system in the most convenient and application-oriented way. The presented case study and detailed performance analysis underpins the
  • Consistency Models • Data-Centric Consistency Models • Client-Centric Consistency Models
Consistency and Replication (CS677: Distributed OS, Lecture 15) • Today: – Consistency models • Data-centric consistency models • Client-centric consistency models
Why replicate? • Data replication: common technique in distributed systems • Reliability – If one replica is unavailable or crashes, use another – Protect against corrupted data • Performance – Scale with size of the distributed system (replicated web servers) – Scale in geographically distributed systems (web proxies) • Key issue: need to maintain consistency of replicated data – If one copy is modified, others become inconsistent
Object Replication • Approach 1: application is responsible for replication – Application needs to handle consistency issues • Approach 2: system (middleware) handles replication – Consistency issues are handled by the middleware – Simplifies application development but makes object-specific solutions harder
Replication and Scaling • Replication and caching used for system scalability • Multiple copies: – Improves performance by reducing access latency – But higher network overheads of maintaining consistency – Example: object is replicated N times • Read frequency R, write frequency W • If R<<W, high consistency overhead and wasted messages • Consistency maintenance is itself an issue – What semantics to provide? – Tight consistency requires globally synchronized clocks! • Solution: loosen consistency requirements
  • Challenges for the Message Passing Interface in the Petaflops Era
Challenges for the Message Passing Interface in the Petaflops Era. William D. Gropp, Mathematics and Computer Science, www.mcs.anl.gov/~gropp
What this Talk is About: The title talks about MPI – Because MPI is the dominant parallel programming model in computational science. But the issue is really – What are the needs of the parallel software ecosystem? – How does MPI fit into that ecosystem? – What are the missing parts (not just from MPI)? – How can MPI adapt or be replaced in the parallel software ecosystem? – Short version of this talk: • The problem with MPI is not with what it has but with what it is missing. Let's start with some history …
Quotes from "System Software and Tools for High Performance Computing Environments" (1993): "The strongest desire expressed by these users was simply to satisfy the urgent need to get applications codes running on parallel machines as quickly as possible." In a list of enabling technologies for mathematical software, "Parallel prefix for arbitrary user-defined associative operations should be supported. Conflicts between system and library (e.g., in message types) should be automatically avoided." – Note that MPI-1 provided both. Immediate Goals for Computing Environments: – Parallel computer support environment – Standards for same – Standard for parallel I/O – Standard for message passing on distributed memory machines. "The single greatest hindrance to significant penetration of MPP technology in scientific computing is the absence of common programming interfaces across various parallel computing systems"
Quotes from "Enabling Technologies for Petaflops Computing" (1995): "The software for the current generation of 100 GF machines is not adequate to be scaled to a TF…" "The Petaflops computer is achievable at reasonable cost with technology available in about 20 years [2014]." – (estimated clock speed in 2004: 700MHz)* "Software technology for MPP's must evolve new ways to design software that is portable across a wide variety of computer architectures.
  • An Overview on Cyclops-64 Architecture - a Status Report on the Programming Model and Software Infrastructure
An Overview on Cyclops-64 Architecture - A Status Report on the Programming Model and Software Infrastructure. Guang R. Gao, Endowed Distinguished Professor, Electrical & Computer Engineering, University of Delaware, [email protected]
Outline • Introduction • Multi-Core Chip Technology • IBM Cyclops-64 Architecture/Software • Cyclops-64 Programming Model and System Software • Future Directions • Summary
TIPs of compute power operating on Tera-bytes of data; transistor growth in the near future. Source: Keynote talk in CGO & PPoPP 03/14/07 by Jesse Fang from Intel
Outline • Introduction • Multi-Core Chip Technology • IBM Cyclops-64 Architecture/Software • Programming/Compiling for Cyclops-64 • Looking Beyond Cyclops-64 • Summary
Two Types of Multi-Core Architecture Trends • Type I: Glue "heavy cores" together with minor changes • Type II: Explore the parallel architecture design space and searching for most suitable chip architecture models.
Multi-Core Type II • New factors to be considered – Flops are cheap! – Memory per core is small – Cache-coherence is expensive! – On-chip bandwidth can be enormous! – Examples: Cyclops-64, and others
Flops are Cheap! An example to illustrate design tradeoffs (figure: a 64-bit FP unit, drawn to scale, is < 1mm^2 and ~= 50pJ): • If fed from small, local register files: – 3200 GB/s, 10 pJ/op – < $1/Gflop (60 mW/Gflop) • If fed from global on-chip memory: Can fit over 200 on a chip.
  • Constraint Graph Analysis of Multithreaded Programs
Journal of Instruction-Level Parallelism 6 (2004) 1-23. Submitted 10/03; published 4/04. Constraint Graph Analysis of Multithreaded Programs. Harold W. Cain [email protected], Computer Sciences Dept., University of Wisconsin, 1210 W. Dayton St., Madison, WI 53706; Mikko H. Lipasti [email protected], Dept. of Electrical and Computer Engineering, University of Wisconsin, 1415 Engineering Dr., Madison, WI 53706; Ravi Nair [email protected], IBM T.J. Watson Research Laboratory, P.O. Box 218, Yorktown Heights, NY 10598.
Abstract: This paper presents a framework for analyzing the performance of multithreaded programs using a model called a constraint graph. We review previous constraint graph definitions for sequentially consistent systems, and extend these definitions for use in analyzing other memory consistency models. Using this framework, we present two constraint graph analysis case studies using several commercial and scientific workloads running on a full system simulator. The first case study illustrates how a constraint graph can be used to determine the necessary conditions for implementing a memory consistency model, rather than conservative sufficient conditions. Using this method, we classify coherence misses as either required or unnecessary. We determine that on average over 30% of all load instructions that suffer cache misses due to coherence activity are unnecessarily stalled because the original copy of the cache line could have been used without violating the memory consistency model. Based on this observation, we present a novel delayed consistency implementation that uses stale cache lines when possible. The second case study demonstrates the effects of memory consistency constraints on the fundamental limits of instruction level parallelism, compared to previous estimates that did not include multiprocessor constraints.
  • Connect User Guide Armv8-A Memory Systems
ARMv8-A Memory Systems (Connect User Guide, Version 0.1; ARMv8-A Memory Systems, Version 1.0). Copyright © 2016 ARM Limited or its affiliates. All rights reserved. ARM 100941_0100_en.
Revision Information: The following revisions have been made to this User Guide. 28 February 2017, Issue 0100, Non-Confidential: First release.
Proprietary Notice: Words and logos marked with ® or ™ are registered trademarks or trademarks of ARM® in the EU and other countries, except as otherwise stated below in this proprietary notice. Other brands and names mentioned herein may be the trademarks of their respective owners. Neither the whole nor any part of the information contained in, or the product described in, this document may be adapted or reproduced in any material form except with the prior written permission of the copyright holder. The product described in this document is subject to continuous developments and improvements. All particulars of the product and its use contained in this document are given by ARM in good faith. However, all warranties implied or expressed, including but not limited to implied warranties of merchantability, or fitness for purpose, are excluded. This document is intended only to assist the reader in the use of the product. ARM shall not be liable for any loss or damage arising from the use of any information in this document, or any error or omission in such information, or any incorrect use of the product. Where the term ARM is used it means "ARM or any of its subsidiaries as appropriate".
Confidentiality Status: This document is Confidential.
  • Memory Consistency Models
Memory Consistency Models. David Mosberger. TR 93/11. Department of Computer Science, The University of Arizona, Tucson, AZ 85721. (This is an updated version of [Mos93].)
Abstract: This paper discusses memory consistency models and their influence on software in the context of parallel machines. In the first part we review previous work on memory consistency models. The second part discusses the issues that arise due to weakening memory consistency. We are especially interested in the influence that weakened consistency models have on language, compiler, and runtime system design. We conclude that tighter interaction between those parts and the memory system might improve performance considerably.
1 Introduction: Traditionally, memory consistency models were of interest only to computer architects designing parallel machines. The goal was to present a model as close as possible to the model exhibited by sequential machines. The model of choice was sequential consistency (SC). Sequential consistency guarantees that the result of any execution of n processors is the same as if the operations of all processors were executed in some sequential order, and the operations of each individual processor
... increases. Shared memory can be implemented at the hardware or software level. In the latter case it is usually called Distributed Shared Memory (DSM). At both levels work has been done to reap the benefits of weaker models. We conjecture that in the near future most parallel machines will be based on consistency models significantly weaker than SC [LLG+92, Sit92, BZ91, CBZ91, KCZ92]. The rest of this paper is organized as follows. In section 2 we discuss issues characteristic to memory consistency models.
  • The Openmp Memory Model
The OpenMP Memory Model. Jay P. Hoeflinger (1) and Bronis R. de Supinski (2). 1 Intel, 1906 Fox Drive, Champaign, IL 61820, [email protected], http://www.intel.com. 2 Lawrence Livermore National Laboratory, P.O. Box 808, L-560, Livermore, California 94551-0808*, [email protected], http://www.llnl.gov/casc
Abstract. The memory model of OpenMP has been widely misunderstood since the first OpenMP specification was published in 1997 (Fortran 1.0). The proposed OpenMP specification (version 2.5) includes a memory model section to address this issue. This section unifies and clarifies the text about the use of memory in all previous specifications, and relates the model to well-known memory consistency semantics. In this paper, we discuss the memory model and show its implications for future distributed shared memory implementations of OpenMP.
1 Introduction. Prior to the OpenMP version 2.5 specification, no separate OpenMP Memory Model section existed in any OpenMP specification. Previous specifications had scattered information about how memory behaves and how it is structured in an OpenMP program in several sections: the parallel directive section, the flush directive section, and the data sharing attributes section, to name a few. This has led to misunderstandings about how memory works in an OpenMP program, and how to use it. The most problematic directive for users is probably the flush directive. New OpenMP users may wonder why it is needed, under what circumstances it must be used, and how to use it correctly. Perhaps worse, the use of explicit flushes often confuses even experienced OpenMP programmers.
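As a brief illustration of why the flush directive confuses users (a minimal hand-rolled producer/consumer sketch of our own, not an example from the paper), both the data and the flag must be flushed, in the right order, for the signal to work; newer OpenMP code would typically protect the flag with atomics instead:

```cpp
#include <omp.h>
#include <cstdio>

int data = 0;
int flag = 0;

int main() {
    #pragma omp parallel num_threads(2)
    {
        if (omp_get_thread_num() == 0) {
            data = 42;
            #pragma omp flush(flag, data)   // publish data before raising the flag
            flag = 1;
            #pragma omp flush(flag)
        } else {
            int seen = 0;
            while (seen == 0) {
                #pragma omp flush(flag)     // re-read the flag from memory each iteration
                seen = flag;
            }
            #pragma omp flush(flag, data)   // pick up the published data
            std::printf("data = %d\n", data);  // prints 42
        }
    }
    // Note: spinning on a plain int is a data race under newer OpenMP rules;
    // modern code would use #pragma omp atomic (or C++ atomics) for the flag.
    return 0;
}
```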