UNIVERSITY OF CALIFORNIA RIVERSIDE

Spatio-Temporal GPU Management for Real-Time Cyber-Physical Systems

A Thesis submitted in partial satisfaction of the requirements for the degree of Master of Science in Electrical Engineering

by Sujan Kumar Saha

March 2018

Thesis Committee:
Dr. Hyoseung Kim, Chairperson
Dr. Nael Abu-Ghazaleh
Dr. Daniel Wong

Copyright by Sujan Kumar Saha 2018

The Thesis of Sujan Kumar Saha is approved:

Committee Chairperson

University of California, Riverside

Acknowledgments

First, I would like to thank my supervising professor, Dr. Hyoseung Kim. His unconditional assistance, guidance, and support helped me finally accomplish this work. I also wish to express my deepest appreciation to Dr. Nael Abu-Ghazaleh and Dr. Daniel Wong, who serve as my committee members, for their valuable advice and generous help. I offer my special thanks to my lab-mates Ankit Juneja, Ankith Rakesh Kumar, and Yecheng Xiang for helping me in different aspects of this work. Without their generous help, this work would not have been successful. Finally, it is an honor for me to thank my family, especially my mom, Mrs. Anjana Rani Saha, my sister, Poly Saha, and my brother, Abhijeet Saha. All of their love and support encouraged me to overcome all the challenges that I have faced. From the bottom of my heart, thank you all.

To my parents for all the support.

ABSTRACT OF THE THESIS

Spatio-Temporal GPU Management for Real-Time Cyber-Physical Systems

by Sujan Kumar Saha

Master of Science, Graduate Program in Electrical Engineering
University of California, Riverside, March 2018
Dr. Hyoseung Kim, Chairperson

General-purpose Graphics Processing Units (GPUs) have been considered a promising technology to address the high computational demands of real-time data-intensive applications. Many of today's embedded processors already provide on-chip GPUs, the use of which can greatly help address the timing challenges of data-intensive tasks by accelerating their execution. However, the current state-of-the-art GPU management in real-time systems still lacks properties required for efficient and certifiable real-time GPU computing. For example, existing real-time systems sequentially execute GPU workloads to guarantee predictable GPU access times, which significantly underutilizes the GPU and exacerbates temporal dependencies among the workloads.

In this research, we propose a spatio-temporal GPU management framework for real-time cyber-physical systems. Our proposed framework explicitly manages the allocation of the GPU's internal execution engines. This approach allows multiple GPU-using tasks to execute on the GPU simultaneously, thereby improving GPU utilization and reducing response time. It can also improve temporal isolation by allocating a portion of the GPU execution engines to tasks for their exclusive use. We have implemented a prototype of the proposed framework for a CUDA environment. The case study using this implementation on two NVIDIA GPUs, GeForce GTX 970 and Jetson TX2, shows that our framework reduces the response time of GPU execution segments in a predictable manner, by executing them in parallel. Experimental results with randomly generated tasksets indicate that our framework yields a significant benefit in schedulability compared to the existing approach.
Contents

List of Figures
List of Tables
1 Introduction
2 Background and Related Work
2.1 GPU Organization and Kernel Execution
2.2 Related Work
2.3 Motivation
2.4 System Model
3 Spatial-Temporal GPU Reservation Framework
3.1 Reservation Design
3.2 Admission Control
3.2.1 Self-suspension Mode
3.2.2 Busy-waiting Mode
3.3 Resource Allocator
3.4 Reservation-based Program Transformation
4 Evaluation
4.1 Implementation
4.2 Overhead Estimation
4.3 Case Study
4.4 Schedulability Results
5 Conclusions
Bibliography

List of Figures

2.1 Overview of GPU Architecture
2.2 Multi-kernel Execution
2.3 Execution time vs Number of SMs on GTX970
2.4 Execution time vs Number of SMs on TX2
3.1 Example schedule of GPU-using tasks showing the blocking times in self-suspending mode
3.2 Normalized Execution Time vs Different Par Values on GTX970
3.3 Normalized Execution Time vs Different Par Values on TX2
4.1 Percentage overhead of selected benchmarks on GTX970
4.2 Percentage overhead of selected benchmarks on TX2
4.3 Comparison of Kernel Execution on GTX970
4.4 Comparison of Kernel Execution on TX2
4.5 Schedulability w.r.t. Number of Tasks in a Taskset
4.6 Schedulability w.r.t. Number of SMs
4.7 Schedulability w.r.t. Number of GPU Segments
4.8 Schedulability w.r.t. Ratio of C to G
4.9 Schedulability w.r.t. Ratio of Number of GPU Tasks to Number of CPU Tasks

List of Tables

4.1 Parameters for taskset generation

Chapter 1

Introduction

Massive data streams generated by recent embedded and cyber-physical applications pose substantial challenges in satisfying real-time processing requirements. For example, in self-driving cars, data streams from tens of sensors, such as cameras and laser range finders (LIDARs), should be analyzed in a timely manner so that the results of processing can be delivered to path/behavior planning algorithms with short and bounded delay. This requirement of real-time processing is particularly important for safety-critical domains such as automotive, unmanned vehicles, avionics, and industrial automation, where any transient violation of timing constraints may lead to system failures and catastrophic losses.

General-purpose graphics processing units (GPUs) have been considered a promising technology to address the high computational demands of real-time data streams. Many of today's embedded processors, such as the NVIDIA TX1/TX2 and NXP i.MX series, already have on-chip GPUs, the use of which can greatly help address the timing challenges of data-intensive tasks by accelerating their execution. The stringent size, weight, power, and cost constraints of embedded and cyber-physical systems are also expected to be substantially mitigated by GPUs.

For the safe use of GPUs, much research has been done in the real-time systems community to schedule GPU-using tasks with timing constraints [6, 8, 7, 10, 11, 15, 22]. However, the current state of the art has the following limitations in efficiency and predictability. First, existing real-time GPU management schemes significantly underutilize GPUs in providing predictable GPU access time. They limit a GPU to be accessed by only one task at a time, which can cause unnecessarily long waiting delays when multiple tasks need to access the GPU.
This problem will become worse in an embedded computing environment, where each machine typically has only a limited number of GPUs, e.g., one on-chip GPU on the latest NVIDIA TX2 processor. Second, system support for strong temporal isolation among GPU workloads is not yet provided. In a mixed-criticality system, low-critical tasks and high-critical tasks may share the same GPU. If low-critical tasks use the GPU for a longer time than expected, the timing of high-critical tasks can be easily jeopardized. Also, if both types of tasks are concurrently executed on the GPU, it is unpredictable how much temporal interference may occur.

In this research, we propose a spatio-temporal GPU reservation framework to address the aforementioned limitations. The key contribution of this work is the explicit management of the GPU's internal execution engines, e.g., streaming multiprocessors on NVIDIA GPUs and core groups on ARM Mali GPUs. With this approach, a single GPU is divided into multiple logical units, and a fraction of the GPU can be exclusively reserved for each (or a group of) time-critical task(s). This approach allows simultaneous execution of multiple tasks on a single GPU, which can potentially eliminate the waiting time for GPU execution and achieve strong temporal isolation among tasks. Since recent GPUs have multiple execution engines and many GPU applications are not programmed to fully utilize them, our proposed framework will be a viable solution to efficiently and safely share the GPU among tasks with different criticalities. In addition, our framework substantially improves task schedulability by a fine-grained allocation of GPU resources at the execution-engine level.

As a proof of concept, we have implemented our framework in a CUDA programming environment. The case study using this implementation on two NVIDIA GPUs, GeForce GTX 970 and Jetson TX2, shows that our framework reduces the response time of GPU execution segments in a predictable manner. Experimental results with randomly generated tasksets indicate that our framework yields a significant benefit in schedulability compared to the existing approach. Our GPU framework does not require any specific hardware support or detailed internal scheduling information. Thus, it is readily applicable to COTS GPUs from various vendors, e.g., AMD, ARM, NVIDIA, and NXP.

The rest of the thesis is organized as follows. Chapter 2 describes background knowledge about GPU architecture, the motivation for this work, and related prior work. Our proposed GPU reservation framework is explained in detail in Chapter 3. Chapter 4 presents the evaluation methodology and result analysis. Finally, we conclude in Chapter 5.

Chapter 2

Background and Related Work

2.1 GPU Organization and Kernel Execution

GPUs are used as accelerators alongside CPUs in modern computing systems. Their highly parallel structure makes them more efficient than general-purpose CPUs for data-intensive applications. Figure 2.1 shows a high-level overview of the internal structure of a GPU.
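On NVIDIA GPUs, for instance, a kernel is launched as a grid of thread blocks, and a hardware scheduler dispatches those blocks onto the available streaming multiprocessors (SMs); a kernel's execution time therefore depends on how many SMs its blocks can occupy (see Figures 2.3 and 2.4). The CUDA sketch below is only a hypothetical illustration of this idea, not the reservation mechanism described in Chapter 3: each block reads the PTX %smid special register and retires immediately if it was not scheduled onto one of the reserved SMs, while the surviving blocks pull work from a shared queue. The kernel name, chunk size, and reserved-SM count of 4 are illustrative assumptions.

```cuda
// Hypothetical sketch, NOT the thesis's reservation mechanism: confine a
// kernel's work to a subset of streaming multiprocessors (SMs) by gating on
// the PTX %smid special register. Blocks dispatched to non-reserved SMs
// retire immediately; the remaining blocks pull work from a shared queue.
#include <cstdio>
#include <cuda_runtime.h>

__device__ __forceinline__ unsigned int smid() {
    unsigned int id;
    asm volatile("mov.u32 %0, %%smid;" : "=r"(id));  // index of the SM running this block
    return id;
}

__global__ void vec_add_on_reserved_sms(const float *a, const float *b, float *c,
                                        int n, unsigned int num_reserved_sms,
                                        unsigned int *next_chunk) {
    // Spatial gate: blocks scheduled onto SMs outside the reservation do nothing.
    if (smid() >= num_reserved_sms) return;

    const unsigned int kChunk = 4096;   // elements claimed per block per fetch (assumed size)
    __shared__ unsigned int base_sh;
    while (true) {
        if (threadIdx.x == 0)
            base_sh = atomicAdd(next_chunk, kChunk);   // block leader claims the next chunk
        __syncthreads();
        unsigned int base = base_sh;
        if (base >= (unsigned int)n) break;            // all work has been claimed
        unsigned int end = base + kChunk;
        if (end > (unsigned int)n) end = (unsigned int)n;
        for (unsigned int i = base + threadIdx.x; i < end; i += blockDim.x)
            c[i] = a[i] + b[i];
        __syncthreads();   // keep the leader from overwriting base_sh before all threads have used it
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    unsigned int *next_chunk;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    cudaMallocManaged(&next_chunk, sizeof(unsigned int));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    *next_chunk = 0;

    // Launch many more blocks than SMs so that every SM receives blocks; only the
    // blocks landing on the first 4 SMs (an assumed reservation) perform the work.
    vec_add_on_reserved_sms<<<256, 256>>>(a, b, c, n, /*num_reserved_sms=*/4, next_chunk);
    cudaDeviceSynchronize();
    printf("c[0] = %.1f, c[n-1] = %.1f\n", c[0], c[n - 1]);

    cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(next_chunk);
    return 0;
}
```

Because block-to-SM placement is decided by the hardware, the grid must be launched with enough blocks that some of them land on every reserved SM; the non-reserved SMs are then left largely free for kernels of other tasks, which is the intuition behind the spatio-temporal reservations developed in the following chapters.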