Introduction to Massively-Parallel Computing in High-Energy Physics

Total Pages: 16

File Type: PDF, Size: 1020 KB

Introduction to Massively-Parallel Computing in High-Energy Physics

CERN LIBRARIES, GENEVA

1992-1993 ACADEMIC TRAINING PROGRAMME LECTURE SERIES

SPEAKER: M. SMITH / Edinburgh Parallel Computing Centre
TITLE: Introduction to massively parallel computing in High Energy Physics
15, 16, 17, 18 & 19 March from 11.00 to 12.00
PLACE: Auditorium

ABSTRACT
Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware and the desires of scientists to solve larger problems in shorter time scales. However, the vast leaps in processor performance achieved through advances in semiconductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (SIMD and MIMD). We will also review the expected future trends in the technology, with particular reference to the high performance computing initiatives in Europe and the United States, and the drive towards standardization of parallel computing software environments.

CERN ACADEMIC TRAINING PROGRAMME 1992-93
An Introduction to Massively Parallel Computing in High Energy Physics
Mark Smith, Edinburgh Parallel Computing Centre

Contents

1 An Introduction to Parallel Computing
  1.1 An Introduction to the EPCC
  1.2 Parallel Computer Architectures
    1.2.1 A Taxonomy for Computer Architectures
    1.2.2 SIMD: Massively Parallel Machines
    1.2.3 MIMD Architectures
    1.2.4 Shared Memory MIMD
    1.2.5 Distributed Memory MIMD
  1.3 Architectural Trends
    1.3.1 Future Trends
  1.4 General Purpose Parallelism
    1.4.1 Proven Price/Performance
    1.4.2 Proven Service Provision
    1.4.3 Parallel Software
    1.4.4 In Summary

2 Decomposing the Potentially Parallel
  2.1 Potential Parallelism
    2.1.1 Granularity
    2.1.2 Load Balancing
  2.2 Decomposing the Potentially Parallel
    2.2.1 Trivial Decomposition
    2.2.2 Functional Decomposition
  2.3 Data Decomposition
    2.3.1 Geometric Data Decomposition
    2.3.2 Dealing with Unbalanced Problems
  2.4 Summary

3 An HEP Case Study for SIMD Architectures
  3.1 SIMD Architecture and Programming Languages
  3.2 Case Study: Lattice QCD Codes
    3.2.1 Generation of Gauge Configurations
    3.2.2 The Connection Machine Implementation
    3.2.3 Calculation of Quark Propagators

4 An HEP Case Study for MIMD Architectures
  4.1 A Brief History of MIMD Computing
    4.1.1 The Importance of the Transputer
    4.1.2 The Need for Message Passing
  4.2 Case Study: Experimental Monte Carlo Codes
    4.2.1 Physics Event Generation
    4.2.2 Task Farming GEANT
    4.2.3 Geometric Decomposition of the Detector
    4.2.4 Summary: A Hierarchy of Approaches

5 High Performance Computing: Initiatives and Standards
  5.1 High Performance Computing and Networking Initiatives
    5.1.2 United States
    5.1.3 Europe
  5.2 Emerging Key Technologies
    5.2.1 CHIMP
    5.2.2 PUL
    5.2.3 NEVIS
  5.3 Parallel Computing Standards Forums
    5.3.1 High Performance Fortran Forum
    5.3.2 Message Passing Interface Forum

Acknowledgements

These lectures and their accompanying notes have been produced, in part, from the EPCC Parallel Systems training course material. I am therefore grateful to my EPCC colleagues Neil MacDonald, Arthur Trew, Mike Norman, Kevin Collins, David Wallace, Nick Radcliffe and Lyndon Clarke for the material they produced for that course. In addition, I must thank Ken Peach, Steve Booth, David Henty and Mark Parsons of the University of Edinburgh Physics Department for their help in the HEP aspects of these notes, and CERN's Fabrizio Gagliardi for the original invitation to present this material.

1 An Introduction to Parallel Computing

1.1 An Introduction to the EPCC

The Edinburgh Parallel Computing Centre was established during 1990 as a focus for the University's various interests in high performance supercomputing. It is interdisciplinary, and combines a decade's experience of parallel computing applications with a strong research activity and a successful industrial affiliation scheme. The Centre provides a national service to around 300 registered users, on state-of-the-art commercial parallel machines worth many millions of pounds. As parallel computers become more common, the Centre's task is to accelerate the effective exploitation of high performance parallel computing systems throughout academia, industry and commerce.

The Centre, housed at the University's King's Buildings, has a staff of more than 50, in three divisions: Service, Consultancy & Development, and Applications Development. The team is headed by Professors David Wallace (Physics), Roland Ibbett (Computer Science) and Jeff Collins (formerly of Electrical Engineering).
Edinburgh's involvement with parallel computing began in 1980, when physicists in Edinburgh used the ICL Distributed Array Processor (DAP) at Queen Mary College in London to run molecular dynamics and high energy physics simulations. Their pioneering results and wider University interest enabled Edinburgh to acquire two of these machines for its own use, and Edinburgh researchers soon achieved widespread recognition across a range of disciplines: high energy physics, molecular dynamics, phase transitions, neural network models, protein crystallography, stellar dynamics, image processing, meteorology and data processing.

With the imminent decommissioning of the DAPs, a replacement resource was needed, and a transputer machine was chosen in 1986. This machine was the new multi-user Meiko Computing Surface, consisting of domains of transputers, each with its own local memory, and interconnected by programmable switching chips. The Edinburgh Concurrent Supercomputer Project was then established in 1987 to create a national parallel computing facility, available to academics and industry throughout the UK. In 1990 this project evolved into the Edinburgh Parallel Computing Centre (EPCC) to bring together the many strands of parallel research and applications activity in Edinburgh University, and to provide the basis for a broadening of these activities. The Centre now houses a range of machines including two third-generation DAPs, a number of Meiko machines (both transputer- and i860-based), a Thinking Machines Corporation CM-200, and a large network of Sun and Silicon Graphics workstations. Plans are currently being made to purchase two further parallel supercomputers in 1993; these will act as an internal development resource, and will be state-of-the-art architectures.

The applications development work within the EPCC takes place within the Numerical Simulations Group and the Information Systems Group. These teams perform all of the contract work for our industrial and commercial collaborators. Current projects include work on parallelising computational fluid dynamics code for the aerospace, oil and nuclear power industries; parallel geographical information systems; distributed memory implementations of oil reservoir simulators; spatial interaction modelling on data parallel machines; parallel network modelling and analysis; development of a production environment for seismic processing; and also parallel implementation of speech recognition algorithms. Non-contract software development effort is focused around
Recommended publications
  • 2.5 Classification of Parallel Computers
2.5 Classification of Parallel Computers

2.5.1 Granularity
In parallel computing, granularity means the amount of computation in relation to communication or synchronisation. Periods of computation are typically separated from periods of communication by synchronization events.
• fine level (same operations with different data)
  ◦ vector processors
  ◦ instruction level parallelism
  ◦ fine-grain parallelism:
    – Relatively small amounts of computational work are done between communication events
    – Low computation to communication ratio
    – Facilitates load balancing
    – Implies high communication overhead and less opportunity for performance enhancement
    – If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation
• operation level (different operations simultaneously)
• problem level (independent subtasks)
  ◦ coarse-grain parallelism:
    – Relatively large amounts of computational work are done between communication/synchronization events
    – High computation to communication ratio
    – Implies more opportunity for performance increase
    – Harder to load balance efficiently

2.5.2 Hardware: Pipelining
(Pipelining was used in supercomputers, e.g. the Cray-1.) With N elements in the pipeline and L clock cycles needed per element, the calculation takes roughly L + N cycles; without the pipeline it takes L * N cycles. Example of code that pipelines well:

    do i = 1, k
      z(i) = x(i) + y(i)
    end do

Vector processors provide fast vector operations (operations on arrays). The previous example is also good for a vector processor (vector addition), but recursion, for example, is hard to optimise for vector processors. Example: Intel MMX – a simple vector processor.
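A brief hedged illustration of the same point in C (not part of the excerpt; the array names and sizes are invented): the first loop below has independent iterations, so successive elements can stream through a pipelined or vector unit, while the second contains a recurrence of exactly the kind noted above as hard to vectorise, because each iteration needs the result of the previous one.

    #include <stdio.h>
    #define N 1000

    int main(void)
    {
        static double x[N], y[N], z[N];

        for (int i = 0; i < N; i++) {       /* set up some demo data */
            x[i] = i;
            y[i] = 2.0 * i;
        }

        /* Independent iterations (the C analogue of the Fortran loop above):
           z[i] needs only x[i] and y[i], so elements can be fed into a
           pipelined or vector arithmetic unit back to back. */
        for (int i = 0; i < N; i++)
            z[i] = x[i] + y[i];

        /* Recurrence: z[i] depends on z[i-1], so element i cannot start
           until element i-1 has finished; the pipeline drains at every
           step and vectorisation is inhibited. */
        for (int i = 1; i < N; i++)
            z[i] = z[i-1] + x[i];

        printf("%f\n", z[N-1]);
        return 0;
    }

With an auto-vectorising compiler the first loop is typically turned into vector instructions, while the second is usually left to run serially.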
  • Triton: A Massively-Parallel Mixed-Mode Computer Designed to Support High Level Languages
This paper appeared in the International Parallel Processing Symposium, Proceedings of the Workshop on Heterogeneous Processing, Newport Beach, CA, April.

Triton: A Massively-Parallel Mixed-Mode Computer Designed to Support High Level Languages
Christian G. Herter, Thomas M. Warschko, Walter F. Tichy and Michael Philippsen
University of Karlsruhe, Dept. of Informatics, Karlsruhe, Germany

Abstract
We present the architecture of Triton, a scalable mixed-mode SIMD/MIMD parallel computer. The novel features of Triton are:
• Support for high-level machine-independent programming languages
• Fast SIMD/MIMD mode switching
• Special hardware for barrier synchronization of multiple process groups
• A self-routing, deadlock-free perfect shuffle interconnect with latency hiding
The architecture is the outcome of an integrated de

Modula-2*
Modula-2* (pronounced Modula-2-star) is a small extension of Modula-2 for massively parallel programming. The programming model of Modula-2* incorporates both data and control parallelism and allows mixed synchronous and asynchronous execution. Modula-2* is problem-oriented in the sense that the programmer can choose the degree of parallelism and mix the control mode (SIMD- or MIMD-like) as needed by the intended algorithm. Parallelism may be nested to arbitrary depth. Procedures may be called from sequential or parallel contexts and can themselves generate parallel activity without any restrictions. Most Modula-2* programs can be translated into efficient code for both SIMD and MIMD architectures.

Overview of language extensions
Modula-2* extends Modula-2
  • Porta-SIMD: an Optimally Portable SIMD Programming Language Duke CS-1990-12 UNC CS TR90-021 May 1990
Porta-SIMD: An Optimally Portable SIMD Programming Language
Duke CS-1990-12, UNC CS TR90-021, May 1990
Russ Tuck
Duke University, Department of Computer Science, Durham, NC 27706
The University of North Carolina at Chapel Hill, Department of Computer Science, CB#3175, Sitterson Hall, Chapel Hill, NC 27599-3175

Text (without appendix) of a Ph.D. dissertation submitted to Duke University. The research was performed at UNC. © 1990 Russell R. Tuck, III. UNC is an Equal Opportunity/Affirmative Action Institution.

PORTA-SIMD: AN OPTIMALLY PORTABLE SIMD PROGRAMMING LANGUAGE
by Russell Raymond Tuck, III, Department of Computer Science, Duke University
Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science in the Graduate School of Duke University, 1990.
Copyright © 1990 by Russell Raymond Tuck, III. All rights reserved.

Abstract
Existing programming languages contain architectural assumptions which limit their portability. I submit optimal portability, a new concept which solves this language design problem. Optimal portability makes it possible to design languages which are portable across various sets of diverse architectures. SIMD (Single-Instruction stream, Multiple-Data stream) computers represent an important and very diverse set of architectures for which to demonstrate optimal portability. Porta-SIMD (pronounced "porta-simm'd") is the first optimally portable language for SIMD computers. It was designed and implemented to demonstrate that optimal portability is a useful and achievable standard for language design. An optimally portable language allows each program to specify the architectural features it requires. The language then enables the compiled program to exploit exactly those features, and to run on all architectures that provide them.
  • Massively Parallel Computing with CUDA
Massively Parallel Computing with CUDA
Antonino Tumeo, Politecnico di Milano

"GPUs have evolved to the point where many real world applications are easily implemented on them and run significantly faster than on multi-core systems. Future computing architectures will be hybrid systems with parallel-core GPUs working in tandem with multi-core CPUs."
Jack Dongarra, Professor, University of Tennessee; author of "Linpack"

Why Use the GPU?
• The GPU has evolved into a very flexible and powerful processor:
  ◦ It is programmable using high-level languages
  ◦ It supports 32-bit and 64-bit floating point IEEE-754 precision
  ◦ It offers lots of GFLOPS
• GPU in every PC and workstation

What is behind such an Evolution?
• The GPU is specialized for compute-intensive, highly parallel computation (exactly what graphics rendering is about)
• So, more transistors can be devoted to data processing rather than data caching and flow control
  (Diagram: the CPU devotes much of its die to control logic and cache; the GPU devotes it to ALUs.)
• The fast-growing video game industry exerts strong economic pressure that forces constant innovation

GPUs
• Each NVIDIA GPU has 240 parallel cores and 1.4 billion transistors
• Within each core: floating point unit; logic unit (add, sub, mul, madd); move and compare unit; branch unit
• Cores managed by a thread manager, which can spawn and manage 12,000+ threads
• Zero overhead thread switching; 1 teraflop of processing power

Heterogeneous Computing Domains
• GPU: graphics and massive data parallelism (parallel computing)
• CPU: instruction-level parallelism (sequential computing)
  • CS 677: Parallel Programming for Many-Core Processors Lecture 1
CS 677: Parallel Programming for Many-core Processors, Lecture 1
Instructor: Philippos Mordohai
Webpage: mordohai.github.io
E-mail: [email protected]

Objectives
• Learn how to program massively parallel processors and achieve: high performance; functionality and maintainability; scalability across future generations
• Acquire the technical knowledge required to achieve the above goals: principles and patterns of parallel programming; processor architecture features and constraints; programming API, tools and techniques

Important Points
• This is an elective course. You chose to be here.
• Expect to work and to be challenged.
• If your programming background is weak, you will probably suffer.
• This course will evolve to follow the rapid pace of progress in GPU programming. It is bound to always be a little behind…

Important Points II
• At any point ask me WHY?
• You can ask me anything about the course in class, during a break, in my office, by email: if you think a homework is taking too long or is wrong; if you can't decide on a project.

Logistics
• Class webpage: http://mordohai.github.io/classes/cs677_s20.html
• Office hours: Tuesdays 5-6pm and by email
• Evaluation: homework assignments (40%), quizzes (10%), midterm (15%), final project (35%)

Project
• Pick a topic BEFORE the middle of the semester
• I will suggest ideas and datasets if you can't decide
• Deliverables: project proposal, presentation in class, poster in CS department event, final report (around 8 pages)

Project Examples
• k-means • Perceptron • Boosting (general; face detector, group of 2) • Mean Shift • Normal estimation for 3D point clouds

More Ideas
• Look for parallelizable problems in: image processing, cryptanalysis, graphics (GPU Gems), nearest neighbor search

Even More…
• Particle simulations • Financial analysis • MCMC • Games/puzzles

Resources
• Textbook: Kirk & Hwu.
  • Pnw 2020 Strunk001.Pdf
Remote Sensing of Environment 237 (2020) 111535
Contents lists available at ScienceDirect. Remote Sensing of Environment, journal homepage: www.elsevier.com/locate/rse

Evaluation of pushbroom DAP relative to frame camera DAP and lidar for forest modeling
Jacob L. Strunk (a,*), Peter J. Gould (b), Petteri Packalen (c), Demetrios Gatziolis (d), Danuta Greblowska (e), Caleb Maki (f), Robert J. McGaughey (g)

a USDA Forest Service Pacific Northwest Research Station, 3625 93rd Ave SW, Olympia, WA, 98512, USA
b Washington State Department of Natural Resources, PO Box 47000, 1111 Washington Street SE, Olympia, WA, 98504-7000, USA
c School of Forest Sciences, Faculty of Science and Forestry, University of Eastern Finland, P.O. Box 111, 80101, Joensuu, Finland
d USDA Forest Service Pacific Northwest Research Station, 620 Southwest Main, Suite 502, Portland, OR, 97205, USA
e GeoTerra Inc., 60 McKinley St, Eugene, OR, 97402, USA
f Washington State Department of Natural Resources, PO Box 47000, 1111 Washington Street SE, Olympia, WA, 98504-7000, USA
g USDA Forest Service Pacific Northwest Research Station, University of Washington, PO Box 352100, Seattle, WA, 98195-2100, USA

Keywords: Lidar; Structure from motion; Photogrammetry; Forestry; DAP

Abstract: There is growing interest in using Digital Aerial Photogrammetry (DAP) for forestry applications. However, the performance of pushbroom DAP relative to frame-based DAP and airborne lidar is not well documented. Interest in DAP stems largely from its low cost relative to lidar. Studies have demonstrated that frame-based DAP generally performs slightly poorer than lidar, but still provides good value due to its reduced cost. In the USA pushbroom imagery can be dramatically less expensive than frame-camera imagery in part because of a nationwide collection program.
  • A PARALLEL IMPLEMENTATION of BACKPROPAGATION NEURAL NETWORK on MASPAR MP-1 Faramarz Valafar Purdue University School of Electrical Engineering
Purdue University, Purdue e-Pubs
ECE Technical Reports, Electrical and Computer Engineering, 3-1-1993

A PARALLEL IMPLEMENTATION OF BACKPROPAGATION NEURAL NETWORK ON MASPAR MP-1
Faramarz Valafar, Purdue University School of Electrical Engineering
Okan K. Ersoy, Purdue University School of Electrical Engineering

Follow this and additional works at: http://docs.lib.purdue.edu/ecetr
Valafar, Faramarz and Ersoy, Okan K., "A PARALLEL IMPLEMENTATION OF BACKPROPAGATION NEURAL NETWORK ON MASPAR MP-1" (1993). ECE Technical Reports. Paper 223. http://docs.lib.purdue.edu/ecetr/223
This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information.

TR-EE 93-14, March 1993
A PARALLEL IMPLEMENTATION OF BACKPROPAGATION NEURAL NETWORK ON MASPAR MP-1*
Faramarz Valafar, Okan K. Ersoy
School of Electrical Engineering, Purdue University, W. Lafayette, IN 47906
* The Purdue University MasPar MP-1 research is supported in part by NSF Parallel Infrastructure Grant #CDA-9015696.

ABSTRACT
One of the major issues in using artificial neural networks is reducing the training and the testing times. Parallel processing is the most efficient approach for this purpose. In this paper, we explore the parallel implementation of the backpropagation algorithm with and without hidden layers [4][5] on MasPar MP-1. This implementation is based on the SIMD architecture, and uses a backpropagation model which is more exact theoretically than the serial backpropagation model. This results in a smoother convergence to the solution. Most importantly, the processing time is reduced both theoretically and experimentally by the order of 3000, due to architectural and data parallelism of the backpropagation algorithm.
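As a rough sketch of the data-parallel idea described above (this is not code from the report; the layer sizes, names and the sigmoid activation are assumptions made purely for illustration): the same forward-pass arithmetic is applied to a whole batch of training patterns at once, so on a SIMD machine such as the MP-1 each processing element can hold one pattern while the weights are broadcast to all of them.

    #include <math.h>
    #include <stdio.h>

    #define PATTERNS 64   /* training patterns processed together */
    #define NIN       4   /* inputs per pattern                   */
    #define NOUT      3   /* output units                         */

    /* One forward pass of a single layer for a batch of patterns.
       On a SIMD array the loop over p would map onto the processor
       array: every element executes the same instructions on its
       own pattern (data parallelism), with the weights broadcast. */
    static void forward(double w[NOUT][NIN], double bias[NOUT],
                        double in[PATTERNS][NIN], double out[PATTERNS][NOUT])
    {
        for (int p = 0; p < PATTERNS; p++) {
            for (int j = 0; j < NOUT; j++) {
                double a = bias[j];
                for (int i = 0; i < NIN; i++)
                    a += w[j][i] * in[p][i];
                out[p][j] = 1.0 / (1.0 + exp(-a));  /* sigmoid */
            }
        }
    }

    int main(void)
    {
        static double w[NOUT][NIN], bias[NOUT];
        static double in[PATTERNS][NIN], out[PATTERNS][NOUT];

        for (int j = 0; j < NOUT; j++)             /* arbitrary demo weights */
            for (int i = 0; i < NIN; i++)
                w[j][i] = 0.1 * (i + j + 1);

        for (int p = 0; p < PATTERNS; p++)         /* arbitrary demo inputs  */
            for (int i = 0; i < NIN; i++)
                in[p][i] = (double)p / PATTERNS;

        forward(w, bias, in, out);
        printf("out[0][0] = %f\n", out[0][0]);
        return 0;
    }

In the same spirit, the error-propagation step can be applied to all patterns at once, which is one reason batch-style backpropagation maps naturally onto SIMD hardware.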
  • Parallel Algorithms for Medical Informatics on Data-Parallel Many-Core Processors
UNIVERSITY OF CALIFORNIA, Los Angeles
Parallel Algorithms for Medical Informatics on Data-Parallel Many-Core Processors
A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science by Maryam Moazeni, 2013.
© Copyright by Maryam Moazeni 2013

ABSTRACT OF THE DISSERTATION
Parallel Algorithms for Medical Informatics on Data-Parallel Many-Core Processors
by Maryam Moazeni, Doctor of Philosophy in Computer Science, University of California, Los Angeles, 2013
Professor Majid Sarrafzadeh, Chair

The extensive use of medical monitoring devices has resulted in the generation of tremendous amounts of data. Storage, retrieval, and analysis of such data require platforms that can scale with data growth and adapt to the various behavior of the analysis and processing algorithms. In recent years, many-core processors and more specifically many-core Graphical Processing Units (GPUs) have become one of the most promising platforms for high performance processing of data, due to the massive parallel processing power they offer. However, many of the algorithms and data structures used in medical and bioinformatics systems do not follow a data-parallel programming paradigm, and hence cannot fully benefit from the parallel processing power of data-parallel many-core architectures.

In this dissertation, we present three techniques to adapt several non-data parallel applications in different dwarfs to modern many-core GPUs. First, we present a load balancing technique to maximize parallelism in non-serial polyadic Dynamic Programming (DP), which is a family of dynamic programming algorithms with more non-uniform data access pattern. We show that a bottom-up approach to solving the DP problem exploits more parallelism and therefore yields higher performance.
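As a rough illustration of why a bottom-up formulation exposes parallelism (a generic wavefront sketch in C; neither the code nor the recurrence is taken from the dissertation): in many two-dimensional dynamic programs a cell (i, j) depends only on cells from earlier anti-diagonals, so when the table is filled bottom-up, every cell on one anti-diagonal is independent of the others and could be handed to a separate thread or GPU core.

    #include <stdio.h>
    #define N 8

    int main(void)
    {
        static double t[N][N];

        for (int i = 0; i < N; i++) t[i][0] = 1.0;   /* boundary cells */
        for (int j = 0; j < N; j++) t[0][j] = 1.0;

        /* Bottom-up sweep over anti-diagonals d = i + j.  Within one
           diagonal the cells only read values from diagonal d-1, so the
           inner loop could run in parallel (e.g. one GPU thread per cell). */
        for (int d = 2; d <= 2 * (N - 1); d++) {
            for (int i = 1; i < N; i++) {
                int j = d - i;
                if (j >= 1 && j < N)
                    t[i][j] = t[i-1][j] + t[i][j-1];  /* invented demo recurrence */
            }
        }

        printf("t[N-1][N-1] = %f\n", t[N-1][N-1]);
        return 0;
    }

A top-down, recursive evaluation of the same table tends to expose far less concurrency, since each call waits on its sub-problems; the bottom-up sweep is what makes the independent diagonals visible, which is the point the excerpt is making.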
  • The Helios Operating System
The Helios Operating System
PERIHELION SOFTWARE LTD, May 1991

COPYRIGHT
This document Copyright © 1991, Perihelion Software Limited. All rights reserved. This document may not, in whole or in part, be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine readable form without prior consent in writing from Perihelion Software Limited, The Maltings, Charlton Road, Shepton Mallet, Somerset BA4 5QE, UK. Printed in the UK.

Acknowledgements
The Helios Parallel Operating System was written by members of the Helios group at Perihelion Software Limited (Paul Beskeen, Nick Clifton, Alan Cosslett, Craig Faasen, Nick Garnett, Tim King, Jon Powell, Alex Schuilenburg, Martyn Tovey and Bart Veer), and was edited by Ian Davies.

The Unix compatibility library described in chapter 5, Compatibility, implements functions which are largely compatible with the Posix standard interfaces. The library does not include the entire range of functions provided by the Posix standard, because some standard functions require memory management or, for various reasons, cannot be implemented on a multi-processor system. The reader is therefore referred to IEEE Std 1003.1-1988, IEEE Standard Portable Operating System Interface for Computer Environments, which is available from the IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA. It can also be obtained by telephoning USA (201) 9811393.

The Helios software is available for multi-processor systems hosted by a wide range of computer types. Information on how to obtain copies of the Helios software is available from Distributed Software Limited, The Maltings, Charlton Road, Shepton Mallet, Somerset BA4 5QE, UK (Telephone: 0749 344345).
  • GUIDE TO INTERNATIONAL UNIVERSITY ADMISSION
GUIDE TO INTERNATIONAL UNIVERSITY ADMISSION

About NACAC
The National Association for College Admission Counseling (NACAC), founded in 1937, is an organization of 14,000 professionals from around the world dedicated to serving students as they make choices about pursuing postsecondary education. NACAC is committed to maintaining high standards that foster ethical and social responsibility among those involved in the transition process, as outlined in NACAC's Guide to Ethical Practice in College Admission. For more information and resources, visit nacacnet.org.

The information presented in this document may be reprinted and distributed with permission from and attribution to the National Association for College Admission Counseling. It is intended as a general guide and is presented as is and without warranty of any kind. While every effort has been made to ensure the accuracy of the content, NACAC shall not in any event be liable to any user or any third party for any direct or indirect loss or damage caused or alleged to be caused by the information contained herein and referenced.

Copyright © 2020 by the National Association for College Admission Counseling. All rights reserved.
NACAC, 1050 N. Highland Street, Suite 400, Arlington, VA 22201. 800.822.6285. nacacnet.org

COVID-19 IMPACTS ON APPLYING ABROAD
NACAC is pleased to offer this resource for the fifth year. NACAC's Guide to International University Admission promotes study options outside students' home countries for those who seek an international experience. Though the impact the current global health crisis will have on future classes remains unclear, we anticipate that there will still be a desire among students, perhaps enhanced as a result of COVID-19, to connect with people from other cultures and parts of the world, and to pursue an undergraduate degree abroad.
  • User Guide - OPeNDAP Documentation
User Guide - OPeNDAP Documentation, 2017-10-12

Table of Contents
1. About This Guide
2. What is OPeNDAP
   2.1. The OPeNDAP Client/Server
   2.2. OPeNDAP Services
   2.3. The OPeNDAP Server (aka "Hyrax")
   2.4. Administration and Centralization of Data
3. OPeNDAP Data Model
   3.1. Data and Data Models
4. OPeNDAP Messages
   4.1. Ancillary Data
   4.2. Data Transmission
   4.3. Other Services
   4.4. Constraint Expressions
5. OPeNDAP Server (Hyrax)
   5.1. The OPeNDAP Server
6. OPeNDAP Client
   6.1. Clients

1. About This Guide
This guide introduces important concepts behind the OPeNDAP data model and Web API as well as the clients and servers that use them. While it is not a reference for any particular client or server, you will find links to particular clients and servers in it.

2. What is OPeNDAP
OPeNDAP provides a way for researchers to access scientific data anywhere on the Internet, from a wide variety of new and existing programs. It is used widely in earth-science research settings but it is not limited to that. Using a flexible data model and a well-defined transmission format, an OPeNDAP client can request data from a wide variety of OPeNDAP servers, allowing researchers to enjoy flexibility similar to the flexibility of the web.

NOTE: There are different implementations of OPeNDAP produced by various open source organizations. This guide covers the implementation of OPeNDAP produced by the OPeNDAP group.

The OPeNDAP architecture uses a client/server model, with a client that sends requests for data out onto the network to a server, that answers with the requested data.
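A brief hedged illustration of the request style this guide describes (the host, dataset and variable names below are invented for the example; the guide itself is the authoritative reference for the syntax): a DAP client usually asks first for the dataset's structure and then for a constrained slice of a variable, by appending a service suffix and a constraint expression to the dataset URL.

    http://example.org/opendap/sst.nc.dds                        (dataset structure)
    http://example.org/opendap/sst.nc.ascii?sst[0:1:5][10:1:20]  (ASCII values of a subset of sst)

Each bracketed term is a start:stride:stop index range for one dimension of the variable, so the second request returns only the selected hyperslab rather than the whole array.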
  • ESAIL D3.3.4 Auxiliary Tether Reel Test Report
ESAIL D3.3.4 Auxiliary tether reel test report
Work Package: WP 3.3 "Auxiliary tether reel", Deliverable D3.3.4
Version: Version 1.0
Prepared by: DLR German Aerospace Center, Roland Rosta
Time: Bremen, June 18th, 2013
Coordinating person: Pekka Janhunen, [email protected]

Document Change Record
Issue  Rev.  Date          Pages, Tables, Figures affected  Modification   Name
1      0     18 June 2013  All                              Initial issue  Rosta

Table of Contents
1. Scope of this Document
2. Test Item Description
   2.1. Auxiliary Tether Reel
3. Test Results
   3.1. Shock and Vibration Tests
   3.2. Thermal Vacuum Tests
4. Appendix