Data Intensive ATLAS Workflows in the Cloud


Dissertation zur Erlangung des mathematisch-naturwissenschaftlichen Doktorgrades "Doctor rerum naturalium" der Georg-August-Universität Göttingen im Promotionsprogramm ProPhys der Georg-August University School of Science (GAUSS)

vorgelegt von Gerhard Ferdinand Rzehorz aus Bruchsal

CERN-THESIS-2018-094, 09/05/2018
Göttingen, 2018

Betreuungsausschuss:
Prof. Dr. Arnulf Quadt
PD Dr. Jörn Große-Knetter

Mitglieder der Prüfungskommission:

Referent: Prof. Dr. Arnulf Quadt, II. Physikalisches Institut, Georg-August-Universität Göttingen
Koreferent: Prof. Dr. Ramin Yahyapour, Institut für Informatik, Georg-August-Universität Göttingen

Weitere Mitglieder der Prüfungskommission:
Prof. Dr. Steffen Schumann, II. Physikalisches Institut, Georg-August-Universität Göttingen
Prof. Dr. Jens Grabowski, Institut für Informatik, Georg-August-Universität Göttingen
Prof. Dr. Ariane Frey, II. Physikalisches Institut, Georg-August-Universität Göttingen
Dr. Oliver Keeble, IT Department, CERN

Tag der mündlichen Prüfung: 09.05.2018

Abstract

Large physics experiments, such as ATLAS, have participating physicists and institutes all over the globe. Nowadays, physics analyses are performed on data that is stored thousands of kilometres away. This is possible due to the distributed computing infrastructure known as the Worldwide LHC Computing Grid (WLCG). In addition to the analyses, all the preceding data transformation steps, such as raw data reconstruction, are performed within the WLCG. Within the next decade, the computing requirements are projected to exceed the available resources by a factor of ten. In order to mitigate this discrepancy, alternative computing solutions have to be investigated. Within this thesis, the viability of Cloud computing is evaluated. The concept of Cloud computing is to rent infrastructure from a commercial provider.
In contrast, in the WLCG computing concept the hardware within the computing centres is purchased and operated by the WLCG. In order to examine Cloud computing, a model that predicts workflow performance on a given infrastructure is created, validated and applied. In parallel, the model is used to evaluate a workflow optimisation technique called overcommitting. Overcommitting means that the workload on a computer consists of more parallel processes than there are CPU cores. This technique is used to fill otherwise idle CPU cycles and thereby increase CPU utilisation. Using the model, overcommitting is determined to be a viable optimisation technique, especially when input data is read remotely, provided the increased memory footprint is taken into account. Introducing these overcommitting considerations into the Cloud viability evaluation strengthens the case for Cloud computing, because a Cloud deployment may not include a storage solution and has the flexibility to provision virtual machines with additional memory. The final conclusion is drawn by combining the above results with the costs of the WLCG and the Cloud: Cloud computing is not yet competitive with the WLCG computing concept.

Zusammenfassung

Die großen Physikexperimente, wie zum Beispiel ATLAS, bestehen aus Kollaborationen mit Physikern und Instituten auf der ganzen Welt. Heutzutage werden physikalische Analysen an Daten durchgeführt, die Tausende von Kilometern entfernt gespeichert sind. Dies ist aufgrund der verteilten Computing-Infrastruktur, die als Worldwide LHC Computing Grid (WLCG) bekannt ist, möglich.
Zusätzlich zu den Analysen werden alle vorherigen Datentransformationsschritte, wie die Rekonstruktion von Rohdaten, innerhalb des WLCG durchgeführt. Innerhalb des nächsten Jahrzehnts wird erwartet, dass die Anforderungen an die Computerinfrastruktur die verfügbaren Ressourcen um den Faktor zehn übersteigen werden. Um diese Diskrepanz zu mindern, müssen Alternativen zur jetzigen Computerinfrastruktur untersucht werden. Im Rahmen dieser Arbeit wird Cloud Computing evaluiert. Das Konzept von Cloud Computing besteht darin, eine Computerinfrastruktur von einem kommerziellen Anbieter zu mieten. Dies steht im Gegensatz zum WLCG-Konzept, in dem die Ausstattung der Rechenzentren gekauft und selbst betrieben wird. Um Cloud Computing zu untersuchen, wird ein Modell erstellt, validiert und angewendet, das das Verhalten von Arbeitsflüssen auf einer beliebigen Infrastruktur vorhersagt. Parallel dazu wurde das Modell zur Bewertung einer Arbeitsfluss-Optimierungsmethode namens Overcommitting verwendet. Overcommitting bedeutet, dass die Arbeitslast auf einem Computer aus mehr parallelen Prozessen besteht, als CPU-Kerne vorhanden sind. Diese Technik wird verwendet, um ansonsten ungenutzte CPU-Zyklen zu füllen und dadurch die CPU-Auslastung zu erhöhen. Unter Verwendung des Modells wird Overcommitting als eine brauchbare Optimierungstechnik ermittelt. Dies gilt insbesondere dann, wenn die Daten nur auf weit entfernten Speichermedien vorhanden sind, und unter Berücksichtigung des erhöhten Bedarfs an Arbeitsspeicher. Der Einbezug dieser Überlegungen in die Cloud-Computing-Evaluation verbessert dessen Stellung. Dies liegt daran, dass Cloud Computing nicht unbedingt Speichermöglichkeiten enthält und flexibel genug ist, um virtuellen Maschinen zusätzlichen Arbeitsspeicher zuzuweisen.
Unter Berücksichtigung all dieser Gesichtspunkte und in Kombination mit den Kostenmodellen des WLCG und der Cloud ergibt sich, dass Cloud Computing noch nicht konkurrenzfähig gegenüber dem bisherigen WLCG-Konzept ist.

Contents

1 Introduction
  1.1 Motivation
  1.2 Thesis structure
2 The Standard Model of particle physics
  2.1 Interactions
    2.1.1 Weak interaction
    2.1.2 Electromagnetic interaction
    2.1.3 Electroweak unification
    2.1.4 Strong interaction
    2.1.5 Quarks and leptons
    2.1.6 The Higgs mechanism
  2.2 Beyond the Standard Model
3 The ATLAS detector
  3.1 LHC
    3.1.1 CERN
    3.1.2 Machine specifics
  3.2 ATLAS
    3.2.1 Detector components
    3.2.2 Inner detector
    3.2.3 Calorimeters
    3.2.4 Muon spectrometer
    3.2.5 Trigger and data acquisition
4 LHC offline computing
  4.1 Distributed and Grid computing
  4.2 Cloud computing
    4.2.1 Concept
    4.2.2 Pricing
    4.2.3 Storage
    4.2.4 Security, safety and integrity
    4.2.5 Availability
  4.3 Grid Computing
  4.4 WLCG
    4.4.1 Concept and purpose
    4.4.2 Composition
    4.4.3 Evolution
  4.5 ATLAS computing components
    4.5.1 XRootD
    4.5.2 Athena
    4.5.3 AthenaMP
    4.5.4 PanDA
    4.5.5 Rucio
    4.5.6 JEDI
    4.5.7 CVMFS
    4.5.8 Tags
    4.5.9 AMI
  4.6 General concepts
    4.6.1 Benchmarking
    4.6.2 Storage
    4.6.3 Swapping
    4.6.4 CPU efficiency
    4.6.5 Undercommitting
    4.6.6 Control groups
5 Workflows
  5.1 General model
    5.1.1 All experiments
    5.1.2 ATLAS
  5.2 Monte Carlo simulation
    5.2.1 Event generation
    5.2.2 Simulation
  5.3 Reconstruction
    5.3.1 Raw data reconstruction
    5.3.2 Raw data reconstruction profile
    5.3.3 Simulated data reconstruction
    5.3.4 Digitisation
    5.3.5 Trigger simulation
    5.3.6 Reprocessing
  5.4 Analysis
    5.4.1 Group production
    5.4.2 Complete processing
6 Models and predictions
  6.1 Related work
  6.2 The Workflow and Infrastructure Model
    6.2.1 Functionalities
    6.2.2 Model input
  6.3 Model logic
    6.3.1 Workflow duration
    6.3.2 CPU consumption
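The overcommitting idea summarised in the abstract can be illustrated with a small sketch. This is not code from the thesis: it is a minimal Python example in which each job alternates a short CPU burst with an idle wait that stands in for remote data input, and the worker counts, job counts and timings are illustrative assumptions.

```python
import multiprocessing
import os
import time

def task(_):
    # One simulated job: a CPU-bound burst followed by an idle phase,
    # the idle phase standing in for waiting on remote data input.
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < 0.05:
        pass          # CPU-bound phase (~50 ms of busy work)
    time.sleep(0.05)  # idle phase (~50 ms of simulated I/O wait)

def run(n_workers, n_jobs=16):
    """Run n_jobs tasks on a pool of n_workers processes; return wall time."""
    start = time.perf_counter()
    with multiprocessing.Pool(n_workers) as pool:
        pool.map(task, range(n_jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    cores = os.cpu_count()
    t_matched = run(cores)      # one worker process per CPU core
    t_over = run(2 * cores)     # overcommitted: two workers per core
    # With I/O-heavy tasks the overcommitted pool typically finishes
    # sooner, because sleeping workers free their cores for others.
    print(f"{cores} workers: {t_matched:.2f}s; {2 * cores} workers: {t_over:.2f}s")
```

The trade-off measured in the thesis appears here in miniature: each extra worker process raises CPU utilisation during the idle phases but also adds its own memory footprint, which is why the abstract ties overcommitting to the ability to provision virtual machines with additional memory.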