A Modern Multi-Core Processor (Forms of Parallelism + Understanding Latency and Bandwidth)


Lecture 2: A Modern Multi-Core Processor (Forms of parallelism + understanding latency and bandwidth)
Parallel Computer Architecture and Programming
CMU 15-418/15-618, Fall 2016

Quick review

1. Why has single-instruction-stream performance improved only very slowly in recent years?
2. What prevented us from obtaining maximum speedup from the parallel programs we wrote last time?

Self check 1: What do I mean by "single-instruction stream"?
Self check 2: When we talked about the optimization of superscalar execution, were we talking about optimizing the performance of executing a single instruction stream?

Today

▪ Today we will talk about computer architecture
▪ Four key concepts about how modern computers work
  - Two concern parallel execution
  - Two concern challenges of accessing memory
▪ Understanding these architecture basics will help you
  - Understand and optimize the performance of your parallel programs
  - Gain intuition about what workloads might benefit from fast parallel machines

Part 1: parallel execution

Example program

Compute sin(x) using a Taylor expansion, sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ..., for each element of an array of N floating-point numbers:

    void sinx(int N, int terms, float* x, float* result)
    {
      for (int i=0; i<N; i++)
      {
        float value = x[i];
        float numer = x[i] * x[i] * x[i];
        int denom = 6; // 3!
        int sign = -1;

        for (int j=1; j<=terms; j++)
        {
          value += sign * numer / denom;
          numer *= x[i] * x[i];
          denom *= (2*j+2) * (2*j+3);
          sign *= -1;
        }

        result[i] = value;
      }
    }

Compile program

The loop body compiles to a stream of scalar instructions that loads x[i], computes the series, and stores result[i]:

    ld  r0, addr[r1]
    mul r1, r0, r0
    mul r1, r1, r0
    ...
    st  addr[r2], r0

Execute program

My very simple processor executes one instruction per clock. It has a fetch/decode unit, an ALU (execute), and an execution context (registers plus program counter). Each clock, the PC advances through the stream: first the load, then the dependent multiplies, and so on, down to the final store.
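The lecture does not include a test harness, but one is easy to sketch. The driver below is an assumed addition (the main function, the inputs, and the choice of terms=3 are not from the lecture); it links against the sinx defined above and compares its output to the C library's sinf to confirm the Taylor series converges for small x:

    #include <math.h>
    #include <stdio.h>

    void sinx(int N, int terms, float* x, float* result); // defined above

    int main(void)
    {
      float x[4] = {0.0f, 0.5f, 1.0f, 1.5f};
      float result[4];

      sinx(4, 3, x, result); // terms=3: x - x^3/3! + x^5/5! - x^7/7!

      for (int i = 0; i < 4; i++)
        printf("sinx(%0.1f) = %f   sinf = %f\n", x[i], result[i], sinf(x[i]));
      return 0;
    }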
Superscalar processor

Recall from last class: instruction-level parallelism (ILP). A superscalar processor decodes and executes two instructions per clock (if possible), using two fetch/decode units and two execution units over a shared execution context. Note: no ILP exists in parts of this program's instruction stream, where a chain of dependent instructions leaves nothing to execute in parallel.

Aside: Pentium 4
Image credit: http://ixbtlabs.com/articles/pentium4/index.html

Processor: pre multi-core era

The majority of chip transistors were used to perform operations that help a single instruction stream run fast: a big data cache, out-of-order control logic, a fancy branch predictor, a memory pre-fetcher. More transistors = larger cache, smarter out-of-order logic, smarter branch predictor, etc. (Also: more transistors → smaller transistors → higher clock frequencies.)

Processor: multi-core era

Idea #1: Use the increasing transistor count to add more cores to the processor, rather than using transistors to increase the sophistication of processor logic that accelerates a single instruction stream (e.g., out-of-order and speculative operations).

Two cores: compute two elements in parallel

One core runs the instruction stream for x[i] while the other runs the same stream for x[j]. Simpler cores: each core is slower at running a single instruction stream than our original "fancy" core (e.g., 25% slower). But there are now two cores: 2 × 0.75 = 1.5 (potential for speedup!)

But our program expresses no parallelism

This program, compiled with gcc, will run as one thread on one of the processor cores. If each of the simpler processor cores is 25% slower than the original single complicated one, our program now runs 25% slower. :-(

Expressing parallelism using pthreads

One thread computes the first half of the array while the main thread computes the second half (sinx is unchanged from above):

    typedef struct {
      int N;
      int terms;
      float* x;
      float* result;
    } my_args;

    void* my_thread_start(void* thread_arg)
    {
      my_args* thread_args = (my_args*)thread_arg;
      sinx(thread_args->N, thread_args->terms, thread_args->x, thread_args->result); // do work
      return NULL;
    }

    void parallel_sinx(int N, int terms, float* x, float* result)
    {
      pthread_t thread_id;
      my_args args;

      args.N = N/2;
      args.terms = terms;
      args.x = x;
      args.result = result;

      pthread_create(&thread_id, NULL, my_thread_start, &args); // launch thread
      sinx(N - args.N, terms, x + args.N, result + args.N);     // do work
      pthread_join(thread_id, NULL);
    }
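The same pattern scales to one thread per core. Below is a minimal sketch of that generalization; the NUM_THREADS constant and the even division of work are illustrative assumptions, not part of the lecture:

    #include <pthread.h>

    #define NUM_THREADS 4  // hypothetical: one thread per core

    // my_args and my_thread_start as defined above

    void parallel_sinx_n(int N, int terms, float* x, float* result)
    {
      pthread_t threads[NUM_THREADS];
      my_args args[NUM_THREADS];
      int chunk = N / NUM_THREADS;  // assumes NUM_THREADS evenly divides N

      for (int t = 0; t < NUM_THREADS; t++) {
        args[t].N      = chunk;
        args[t].terms  = terms;
        args[t].x      = x + t * chunk;       // each thread gets its own slice
        args[t].result = result + t * chunk;
        pthread_create(&threads[t], NULL, my_thread_start, &args[t]); // launch
      }
      for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);       // wait for all workers
    }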
Data-parallel expression (in our fictitious data-parallel language)

Loop iterations are declared by the programmer to be independent. With this information, you could imagine how a compiler might automatically generate parallel threaded code:

    void sinx(int N, int terms, float* x, float* result)
    {
      // declare independent loop iterations
      forall (int i from 0 to N-1)
      {
        float value = x[i];
        float numer = x[i] * x[i] * x[i];
        int denom = 6; // 3!
        int sign = -1;

        for (int j=1; j<=terms; j++)
        {
          value += sign * numer / denom;
          numer *= x[i] * x[i];
          denom *= (2*j+2) * (2*j+3);
          sign *= -1;
        }

        result[i] = value;
      }
    }

Four cores: compute four elements in parallel.

Sixteen cores: compute sixteen elements in parallel (sixteen cores, sixteen simultaneous instruction streams).

Multi-core examples

- Intel "Skylake" Core i7 quad-core CPU (2015): four cores sharing an L3 cache
- NVIDIA GTX 980 GPU (2014): 16 replicated processing cores ("SM")
- Intel Xeon Phi "Knights Landing" 76-core CPU (2015)
- Apple A9 dual-core CPU (2015)

A9 image credit: Chipworks (obtained via Anandtech), http://www.anandtech.com/show/9686/the-apple-iphone-6s-and-iphone-6s-plus-review/3

Another interesting property of the data-parallel code above: parallelism is across iterations of the loop. All the iterations of the loop do the same thing: evaluate the sine of a single input number.

Add ALUs to increase compute capability

Idea #2: Amortize the cost/complexity of managing an instruction stream across many ALUs (here, ALU 0 through ALU 7 sharing one fetch/decode unit and one execution context). This is SIMD processing: single instruction, multiple data. The same instruction is broadcast to all ALUs and executed in parallel on all of them.

Recall the original compiled program: its instruction stream processes one array element at a time, using scalar instructions on scalar registers (e.g., 32-bit floats):

    ld  r0, addr[r1]
    mul r1, r0, r0
    mul r1, r1, r0
    ...
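The lecture's eight-ALU datapath corresponds directly to real 8-wide vector instruction sets. As an illustrative sketch (not the lecture's code), here is the same computation written with x86 AVX intrinsics, where one instruction operates on eight 32-bit floats at once; the function name sinx_avx and the assumption that N is a multiple of 8 are mine:

    #include <immintrin.h>

    // Each iteration processes 8 array elements with one instruction stream.
    // Assumes N is a multiple of 8; a full version would handle the remainder.
    void sinx_avx(int N, int terms, float* x, float* result)
    {
      for (int i = 0; i < N; i += 8)
      {
        __m256 origx = _mm256_loadu_ps(&x[i]);      // load 8 elements of x
        __m256 xsq   = _mm256_mul_ps(origx, origx); // x^2
        __m256 value = origx;
        __m256 numer = _mm256_mul_ps(xsq, origx);   // x^3
        __m256 denom = _mm256_set1_ps(6.0f);        // 3!
        float  sign  = -1.0f;

        for (int j = 1; j <= terms; j++)
        {
          // value += sign * numer / denom, in all 8 lanes at once
          __m256 term = _mm256_div_ps(
              _mm256_mul_ps(_mm256_set1_ps(sign), numer), denom);
          value = _mm256_add_ps(value, term);
          numer = _mm256_mul_ps(numer, xsq);
          denom = _mm256_mul_ps(denom,
                    _mm256_set1_ps((float)((2*j+2) * (2*j+3))));
          sign *= -1.0f;
        }

        _mm256_storeu_ps(&result[i], value);        // store 8 results
      }
    }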