
FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

R. H. D. Townsend
Department of Astronomy, University of Wisconsin-Madison, Sterling Hall, 475 N. Charter Street, Madison, WI 53706, USA; [email protected]

Draft version August 7, 2018
Preprint typeset using LaTeX style emulateapj v. 11/10/09
arXiv:1007.1658v2 [astro-ph.SR] 19 Oct 2010

ABSTRACT

I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced: running on a low-end GPU, the code can match 8 CPU cores, and on a high-end GPU it is faster by a factor approaching thirty. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte-Carlo simulation of periodogram statistical properties.

Subject headings: methods: data analysis — methods: numerical — techniques: photometric — stars: oscillations

1. INTRODUCTION

Astronomical time-series observations are often characterized by uneven temporal sampling (e.g., due to transformation to the heliocentric frame) and/or non-uniform coverage (e.g., from day/night cycles, or radiation belt passages). This complicates the search for periodic signals, as a fast Fourier transform (FFT) algorithm cannot be employed. A variety of alternatives have been put forward, the most oft-used being the eponymous Lomb-Scargle (L-S) periodogram developed by Lomb (1976) and Scargle (1982). At the time of writing, NASA's Astrophysics Data System (ADS) lists 735 and 1,810 publications (respectively) that cite these two papers, highlighting how important the L-S periodogram has proven for the analysis of time series. Recent applications include the search for a link between solar rotation and nuclear decay rates (Sturrock et al. 2010); the study of pulsar timing noise (Lyne et al. 2010); the characterization of quasi-periodic oscillations in blazars (Rani et al. 2010); and the measurement of rotation periods in exoplanet host stars (Simpson et al. 2010).

Unfortunately, a drawback of the L-S periodogram is a computational cost scaling as O(N_t^2), where N_t is the number of measurements in the time series; this contrasts with the far-more-efficient O(N_t log_2 N_t) scaling of the FFT algorithm popularized by Cooley & Tukey (1965). One approach to reducing this cost has been proposed by Press & Rybicki (1989), based on constructing a uniformly sampled approximation to the observations via 'extirpolation' and then evaluating its FFT. The present paper introduces a different approach, not through algorithmic development but rather by leveraging the computing power of graphics processing units (GPUs) — the specialized hardware at the heart of the display subsystem in personal computers and workstations. Modern GPUs typically comprise a number of identical programmable processors, and in recent years there has been significant interest in applying these parallel-computing resources to problems across a breadth of scientific disciplines. In the following section, I give a brief history of the newly emergent field of GPU computing; then, Section 3 reviews the formalism defining the L-S periodogram, and Section 4 presents a GPU-based code implementing this formalism. Benchmarking calculations to evaluate the accuracy and performance of the code are presented in Section 5. The findings and future outlook are then discussed in Section 6.

2. BACKGROUND TO GPU COMPUTING

2.1. Pre-2006: Initial Forays

The past decade has seen remarkable increases in the ability of computers to render complex 3-dimensional scenes at movie frame-rates. These gains have been achieved by progressively shifting the graphics pipeline — the algorithmic sequence of steps that converts a scene description into an image — from the CPU to dedicated hardware within the GPU. To address the inflexibility that can accompany such hardware acceleration, GPU vendors introduced so-called programmable shaders, processing units that apply a simple sequence of transformations to input elements such as image pixels and mesh vertices.
NVIDIA Corporation were the first to implement programmable shader functionality, with their GeForce 3 series of GPUs (released March 2001) offering one vertex shader and four (parallel) pixel shaders. The release in the following year of ATI Corporation's R300 series brought not only an increase in the number of shaders (up to 4 vertex and 8 pixel), but also capabilities such as floating-point arithmetic and looping constructs that laid the foundations for what ultimately would become GPU computing.

Shaders are programmed using a variety of specialized languages, such as the OpenGL Shading Language (GLSL; e.g., Rost 2006) and Microsoft's High-Level Shading Language (HLSL). The designs of these languages are strongly tied to their graphics-related purpose, and thus early attempts at GPU computing using programmable shaders had to map each calculation into a sequence of equivalent graphical operations (see, e.g., Owens et al. 2005, and references therein). In an effort to overcome this awkward aspect, Buck et al. (2004) developed BrookGPU — a compiler and run-time implementation of the Brook stream programming language for GPU platforms. With BrookGPU, the computational resources of shaders are accessed through a stream-processing paradigm: a well-defined series of operations (the kernel) is applied to each element in a typically-large homogeneous sequence of data (the stream).

2.2. Post-2006: Modern Era

GPU computing entered its modern era in 2006, with the release of NVIDIA's Compute Unified Device Architecture (CUDA) — a framework for defining and managing GPU computations without the need to map them into graphical operations. CUDA-enabled devices (see Appendix A of NVIDIA 2010) are distinguished by their general-purpose unified shaders, which replace the function-specific shaders (pixel, vertex, etc.) present in earlier GPUs. These shaders are programmed using an extension to the C language, which follows the same stream-processing paradigm pioneered by BrookGPU. Since the launch of CUDA, other vendors have been quick to develop their own GPU computing offerings, most notably Advanced Micro Devices (AMD) with their Stream framework, and Microsoft with their DirectCompute interface.
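To make the stream-processing paradigm concrete, here is a minimal CUDA example (mine, not the paper's) in which a kernel is applied independently to every element of a stream, scaling an array on the GPU. The launch syntax and runtime calls are the standard CUDA C extensions just described; the kernel name and array contents are illustrative.

```cuda
#include <cstdio>

// A kernel in the stream-processing sense: the same operation is applied
// independently to each element of the data stream.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; i++) host[i] = (float) i;

    // Move the stream to the device, apply the kernel, and copy it back.
    float *dev;
    cudaMalloc((void **) &dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[10] = %g\n", host[10]);  // prints 20
    return 0;
}
```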
Abstracting away the graphical roots of GPUs has made them accessible to a very broad audience, and GPU-based computations are now being undertaken in fields as diverse as molecular biology, medical imaging, geophysics, fluid dynamics, economics and cryptography (see Pharr 2005; Nguyen 2007). Within astronomy and astrophysics, recent applications include N-body simulations (Belleman et al. 2008), real-time radio correlation (Wayth et al. 2009), gravitational lensing (Thompson et al. 2010), adaptive-mesh hydrodynamics (Schive et al. 2010) and cosmological reionization (Aubert & Teyssier 2010).

3. THE LOMB-SCARGLE PERIODOGRAM

This section reviews the formalism defining the Lomb-Scargle periodogram. For a time series comprising N_t measurements X_j ≡ X(t_j) sampled at times t_j (j = 1, ..., N_t), assumed throughout to have been scaled and shifted such that its mean is zero and its variance is unity, the normalized L-S periodogram at frequency f is

P_n(f) = \frac{1}{2} \left\{ \frac{\left[ \sum_j X_j \cos \omega (t_j - \tau) \right]^2}{\sum_j \cos^2 \omega (t_j - \tau)} + \frac{\left[ \sum_j X_j \sin \omega (t_j - \tau) \right]^2}{\sum_j \sin^2 \omega (t_j - \tau)} \right\}. \quad (1)

Here and throughout, ω ≡ 2πf is the angular frequency, all summations run from j = 1 to j = N_t, and the frequency-dependent offset τ is defined by

\tan 2\omega\tau = \frac{\sum_j \sin 2\omega t_j}{\sum_j \cos 2\omega t_j}. \quad (2)

As discussed by Schwarzenberg-Czerny (1998), P_n in the case of a pure Gaussian-noise time series is drawn from a beta distribution. For a periodogram comprising N_f frequencies, the false-alarm probability (FAP) — that some observed peak occurs due to chance fluctuations — is

Q = 1 - \left[ 1 - \left( 1 - \frac{2 P_n}{N_t} \right)^{(N_t - 3)/2} \right]^{N_f}. \quad (3)

Equations (1) and (2) can be written schematically as

P_n(f) = \sum_j G[f, (t_j, X_j)], \quad (4)

where G is some function. In the classification scheme introduced by Barsdell et al. (2010), this follows the form of an interact algorithm. Generally speaking, such algorithms are well-suited to GPU implementation, since they are able to achieve a high arithmetic intensity. However, a straightforward implementation of equations (1) and (2) involves two complete runs through the time series to calculate a single P_n(f), which is wasteful of memory bandwidth and requires N_f (4 N_t + 1) costly trigonometric function evaluations for the full periodogram. Press et al. (1992) address this inefficiency by calculating the trig functions from recursion relations, but this approach is difficult to map onto stream-processing concepts, and moreover becomes inaccurate in the limit of large N_f. An alternative strategy, which avoids these difficulties while still offering improved performance, comes from refactoring the equations as

P_n(f) = \frac{1}{2} \left[ \frac{(c_\tau XC + s_\tau XS)^2}{c_\tau^2 CC + 2 c_\tau s_\tau CS + s_\tau^2 SS} + \frac{(c_\tau XS - s_\tau XC)^2}{c_\tau^2 SS - 2 c_\tau s_\tau CS + s_\tau^2 CC} \right] \quad (5)

and

\tan 2\omega\tau = \frac{2\, CS}{CC - SS}. \quad (6)

Here,

c_\tau = \cos \omega\tau, \qquad s_\tau = \sin \omega\tau, \quad (7)

while the sums

XC = \sum_j X_j \cos \omega t_j, \quad XS = \sum_j X_j \sin \omega t_j, \quad CC = \sum_j \cos^2 \omega t_j, \quad SS = \sum_j \sin^2 \omega t_j, \quad CS = \sum_j \cos \omega t_j \sin \omega t_j \quad (8)

can be evaluated in a single run through the time series.
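As a concrete illustration of this single-pass strategy, the following CUDA kernel sketches equations (5) through (8): one thread evaluates one frequency, accumulating the five sums in a single sweep through the time series. This is a minimal sketch and not the code presented in Section 4; the kernel name, argument layout, and use of the __sincosf intrinsic are illustrative assumptions.

```cuda
// Minimal sketch (not the paper's code) of a single-pass Lomb-Scargle
// kernel: thread i evaluates P_n at frequency f[i] via equations (5)-(8).
__global__ void lomb_scargle(const float *t, const float *X, int N_t,
                             const float *f, float *P_n, int N_f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N_f) return;

    float omega = 2.0f * 3.14159265f * f[i];

    // Equation (8): accumulate the five sums in one run through the data.
    float XC = 0.0f, XS = 0.0f, CC = 0.0f, SS = 0.0f, CS = 0.0f;
    for (int j = 0; j < N_t; j++) {
        float s, c;
        __sincosf(omega * t[j], &s, &c);  // fast device intrinsic
        XC += X[j] * c;
        XS += X[j] * s;
        CC += c * c;
        SS += s * s;
        CS += c * s;
    }

    // Equations (6) and (7): tau enters only through cos/sin of omega*tau.
    float omega_tau = 0.5f * atan2f(2.0f * CS, CC - SS);
    float c_tau, s_tau;
    __sincosf(omega_tau, &s_tau, &c_tau);

    // Equation (5): assemble the periodogram value.
    float num_c = c_tau * XC + s_tau * XS;
    float num_s = c_tau * XS - s_tau * XC;
    P_n[i] = 0.5f * (num_c * num_c / (c_tau * c_tau * CC
                                      + 2.0f * c_tau * s_tau * CS
                                      + s_tau * s_tau * SS)
                   + num_s * num_s / (c_tau * c_tau * SS
                                      - 2.0f * c_tau * s_tau * CS
                                      + s_tau * s_tau * CC));
}
```

In this naive form each thread re-reads the full time series from global memory; a tuned implementation would stage t and X through shared memory to address the memory-bandwidth concern raised above.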
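The false-alarm probability of equation (3) is likewise cheap to evaluate on the host once the periodogram has been computed. The helper below is a sketch under the assumption that P_n is the height of the peak under test; the function name and double-precision choice are mine.

```cuda
#include <math.h>

// Sketch of equation (3): false-alarm probability that a peak of height
// P_n arises by chance in a periodogram of N_f frequencies computed from
// N_t measurements of pure Gaussian noise.
double false_alarm_probability(double P_n, int N_t, int N_f)
{
    double single = pow(1.0 - 2.0 * P_n / N_t, 0.5 * (N_t - 3));  // one frequency
    return 1.0 - pow(1.0 - single, (double) N_f);                 // any of N_f
}
```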