Heterogeneous Multicore OpenAMP

Felix Baum, Mentor Embedded

Trends Create Opportunities
Market drivers such as consolidation (reuse, space, weight, power), SoC technology advancement, a competitive market, global competition, and an increasing pace of innovation lead to multiple OSes, mixed core types, complex configurations, and rising complexity.

Heterogeneous Computing for Power/Performance
• Systems that use more than one kind of processor
• Allows improved efficiency for various workloads
• Increases the programming challenge

SoC FPGA: Heterogeneous Processing Choices
• Hard processor system: ARM* Cortex*-A9, A53
  - Dual- and quad-core variants
  - System management and control
• Soft CPU(s): Nios II
  - Configurable (performance, number of cores)
  - From C-programmable state machine to Linux*
• Hardware accelerators
  - Application-specific hardware in the FPGA
  - OpenCL*, MATLAB*/Simulink*, custom-designed RTL
  - Managed by a CPU or autonomous

Techniques to Parallelize the Processing: OpenMP
OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran.
• It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
• The user identifies supported constructs and manually inserts directives that tell the compiler to transform those constructs into parallel code.
• The user does not need to create threads or decide how work is assigned to them.
• OpenMP lets users without deep parallel-computing expertise explore parallel computing.
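To make the directive-based model concrete, here is a minimal sketch (not taken from the presentation) of an OpenMP parallel loop in C; it assumes an OpenMP-capable compiler such as gcc with -fopenmp. A single pragma is all that is needed to spread the loop iterations across the available cores.

```c
/* Minimal OpenMP sketch: parallelize a vector addition with one directive.
 * Build with an OpenMP-capable compiler, e.g. gcc -fopenmp vadd_omp.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* The compiler and runtime create and manage the thread team; the
     * programmer only marks the loop whose iterations can run in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("max threads: %d, c[10] = %.1f\n", omp_get_max_threads(), c[10]);
    return 0;
}
```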
Techniques to Parallelize the Processing: OpenCL*
OpenCL* (Open Computing Language) is the industry's open standard for writing data-parallel code for heterogeneous computers.
• OpenCL is promoted as a primary approach to programming Intel parallel computing hardware offerings; it requires low-level understanding and competence to write efficient parallel software.
• OpenCL primarily targets the data parallelism of devices; additional work is needed to use the multiple CPU cores in the system, for example when offloading CPU-intensive algorithms to GPUs.
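For contrast with OpenMP, the sketch below (an illustration, not from the presentation) shows the shape of an OpenCL program: a small data-parallel kernel plus the host-side boilerplate that selects a device, builds the kernel, and launches it. It assumes the OpenCL 1.x C API, default device selection, and no error checking.

```c
/* OpenCL sketch: the same vector addition expressed as a data-parallel kernel.
 * Link against the OpenCL ICD loader, e.g. gcc vadd_cl.c -lOpenCL */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

static const char *src =
    "__kernel void vadd(__global const float *a, __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Host boilerplate: pick a platform/device and build the kernel. */
    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* Device buffers and kernel arguments. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    /* One work-item per element; read the result back synchronously. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %.1f\n", c[10]);   /* expect 30.0 */
    return 0;
}
```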
Techniques to Parallelize the Processing: OpenAMP
The multicore framework described in this paper differs from OpenCL* and OpenMP in that it does not try to parallelize processing for the sake of performance; instead it isolates and separates execution blocks based on performance or power-consumption requirements.
OpenAMP defines how operating systems interact:
• Focused on IPC and booting frameworks
• In particular between Linux* and RTOS/bare metal
• In particular in heterogeneous systems

Multicore Configurations
[Diagram sequence built up over several slides: homogeneous configurations range from SMP Linux* or an SMP RTOS spanning the Cortex*-A cores to homogeneous sAMP with a Linux or RTOS master plus additional Linux/RTOS instances; heterogeneous sAMP mixes a Linux or RTOS master with RTOS, Linux, and bare-metal environments on the remaining cores, optionally running on top of a hypervisor.]
SMP – Symmetric MultiProcessing; uAMP – unsupervised Asymmetric MultiProcessing; sAMP – supervised AMP, aka virtualized

Complexity Skyrockets
Extreme complexity is introduced with general-purpose development:
• System architecture
• Configuration
• Booting
• Debugging
• Separation
• Device sharing
• Inter-processor communication
• Security
Development complexity grows as designs move from a single core to dual cores, multiple cores, and heterogeneous multiple cores.

The OpenAMP Standard
• OpenAMP is an open-source project managed by the Multicore Association
• OpenAMP standardizes how operating systems interact in heterogeneous systems
  - Focused on inter-process communications (IPC) and booting frameworks
  - In particular between Linux* and RTOS/bare metal
• Guiding principles:
  - Open-source implementations for Linux and RTOSes
  - Prototype and prove in open source before standardizing
  - Business-friendly APIs and implementations to allow proprietary solutions

The OpenAMP Collaboration
Mentor Embedded:
• Participates on the MCA Board of Directors
• Serves in a technical advisor role on the MCA OpenAMP committee
• Contributed most of the initial code to the OpenAMP git tree
• Mentor Embedded Multicore Framework is the first commercial implementation of the standard

OpenAMP: Core Management
• Used by the master OS to boot remote OSes on remote CPUs
• remoteproc user API for processor lifecycle management
• Conformance with the upstream Linux* remoteproc implementation
• Stand-alone, OS-agnostic, clean-room implementation of the remoteproc API
Where A could be Cortex* A7/A9/A15/A53… (a user-space sketch of this lifecycle interface follows the next slide)

OpenAMP: Inter Process Communication
• For inter-processor communication between OS/software contexts
• rpmsg user API for inter-processor communication
• Conformance with the upstream Linux* rpmsg implementation
• Stand-alone, OS-agnostic, clean-room implementation of virtio and rpmsg for bare-metal or other non-Linux OSes
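As an illustration of the core-management side (not code from the presentation), the sketch below drives the upstream Linux remoteproc interface from user space through sysfs. The node name remoteproc0 and the firmware file name are assumptions that depend on the target SoC and its device tree.

```c
/* Master-side sketch: boot a remote core via the Linux remoteproc sysfs
 * interface. Assumes the remote core is exposed as remoteproc0; the firmware
 * name is a hypothetical image placed in /lib/firmware. */
#include <stdio.h>

static int write_sysfs(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(value, f);
    return fclose(f);
}

int main(void)
{
    /* Select which firmware image the kernel should load for the remote CPU. */
    if (write_sysfs("/sys/class/remoteproc/remoteproc0/firmware",
                    "openamp_rtos_demo.elf"))
        return 1;

    /* "start" loads the image and releases the remote core from reset;
     * writing "stop" later shuts it down again. */
    return write_sysfs("/sys/class/remoteproc/remoteproc0/state", "start") ? 1 : 0;
}
```

A firmware started this way typically announces an rpmsg channel back to the master, which is what the next sketch shows.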
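On the remote side, a minimal sketch of an rpmsg echo endpoint using the open-amp library is shown below. Exact function signatures vary between open-amp releases, and obtaining the struct rpmsg_device is left to SoC-specific platform code, so treat this as a shape rather than a drop-in implementation.

```c
/* Remote-side (RTOS/bare-metal) sketch: create an rpmsg endpoint with the
 * open-amp library and echo every message back to the Linux master.
 * Signatures follow the open-amp rpmsg API but may differ by release. */
#include <openamp/open_amp.h>

static struct rpmsg_endpoint ept;

/* Invoked by the rpmsg layer whenever the master sends data to this endpoint. */
static int echo_cb(struct rpmsg_endpoint *ep, void *data, size_t len,
                   uint32_t src, void *priv)
{
    (void)src; (void)priv;
    rpmsg_send(ep, data, len);   /* send the same payload back */
    return 0;
}

/* rdev comes from SoC-specific platform/virtio setup (not shown here). */
int start_echo_endpoint(struct rpmsg_device *rdev)
{
    /* The channel name is what the Linux rpmsg bus uses to match a driver. */
    return rpmsg_create_ept(&ept, rdev, "rpmsg-openamp-demo-channel",
                            RPMSG_ADDR_ANY, RPMSG_ADDR_ANY,
                            echo_cb, NULL);
}
```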
OpenAMP: Use Cases
• Separation of applications, for example separating user-interface tasks from critical processing tasks
• Offloading computationally intensive operations, such as the processing and encryption of secure, sensitive data

OpenAMP: Medical Use Case
[Diagram: a patient-monitoring design split into real-time display, vitals data acquisition, and web view.]

Summary
• Heterogeneous multicore is here to stay and has great potential to help with both consolidation and isolation
• Making multiple and varied operating environments run side by side is hard work; OpenAMP makes it possible while minimizing schedule risk
• OpenAMP unleashes the 'silicony goodness' by providing developers with the framework and tools to optimize the processing and extract the most out of the underlying hardware

Next Steps
• SoC FPGA overview - portfolio, tools, ecosystem, road map: https://www.altera.com/socfpga
• Low-cost development platform - evaluation kit: https://www.altera.com/atlassoc
• Mentor Embedded Multicore Framework: www.mentor.com/embedded-software/multicore
• White paper: Architecting Success with Heterogeneous Systems
• Webinar: Challenges of Debugging Heterogeneous Multicore SoCs

Completing an Online Session Evaluation by 10am Tomorrow Automatically Enters You in a Drawing to Win!
You will receive an email with a link to the online evaluation prior to the end of this session. ISDF Technical Session Prize: Intel® Compute Stick (2). Winners will be notified by email. Copies of the complete sweepstakes rules are available at the Info Desk.

Q&A

Legal Notices and Disclaimers
• Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Performance varies depending on system