Parallel MPI I/O in Cube: Design & Implementation


Parallel MPI I/O in Cube: Design & Implementation

Bine Brank

A master thesis presented for the degree of
M.Sc. Computer Simulation in Science

Supervisors: Prof. Dr. Norbert Eicker, Dr. Pavel Saviankou

Bergische Universität Wuppertal
in cooperation with Forschungszentrum Jülich

September, 2018

Erklärung (Declaration)

I affirm that I wrote this thesis independently and used no sources or aids other than those indicated, and that I have marked all quotations as such. By submitting this thesis I acknowledge that it may be inspected by third parties and cited in accordance with copyright principles. I further agree that the department may release the thesis to third parties for inspection.

Wuppertal, 27.8.2017
Bine Brank

Acknowledgements

Foremost, I would like to express my deepest and sincere gratitude to Dr. Pavel Saviankou. Not only did he introduce me to this interesting topic, but his way of guiding and supporting me was beyond anything I could ever have hoped for. Always happy to discuss ideas and answer any of my questions, he has truly set an example of excellence as a researcher, mentor and friend.

In addition, I would like to thank Prof. Dr. Norbert Eicker for agreeing to supervise my thesis. I am very thankful for all the remarks, corrections and help that he provided.

I would also like to thank Ilya Zhukov for helping me with the correct installation and configuration of the CP2K software.

Contents

1 Introduction  7
2 HPC ecosystem  8
  2.1 Origins of HPC  8
  2.2 Parallel programming  9
  2.3 Automatic performance analysis  10
  2.4 Tools  11
    2.4.1 Score-P  11
  2.5 Performance space  12
    2.5.1 Metrics  12
    2.5.2 Call paths  13
    2.5.3 Locations  14
    2.5.4 Severity values  15
3 Input/Output in HPC  17
  3.1 MPI derived datatypes  17
  3.2 MPI I/O  19
  3.3 Performance of MPI I/O  22
  3.4 I/O challenges in HPC community  23
4 Cube  25
  4.1 Cube libraries  25
  4.2 Tar archive  26
    4.2.1 Tar header  27
    4.2.2 Tar file  28
  4.3 Cube4 file format  29
    4.3.1 Metric data file  30
    4.3.2 Metric index file  34
    4.3.3 Anchor file  34
  4.4 CubeWriter  35
    4.4.1 Usage  35
    4.4.2 Library architecture  36
    4.4.3 How Score-P uses CubeWriter library  40
5 New writing algorithm  42
  5.1 Metric data file  42
    5.1.1 File partition  42
    5.1.2 MPI steps  44
  5.2 Metric index file  45
  5.3 Anchor file  45
6 Implementation  46
  6.1 New library architecture  46
  6.2 Reconfiguring Score-P  49
7 Results and discussion  51
  7.1 System  51
  7.2 Performance measurement with a prototype  52
    7.2.1 Prototype  52
    7.2.2 Results  53
  7.3 Performance measurement with CP2K  57
  7.4 Discussion  59
8 Conclusion  61
References  62
A Appendix - source code  64
  A.1 cubew_cube.c  64
  A.2 cubew_metric.c  68
  A.3 cubew_parallel_metric_writer.c  71
  A.4 cubew_tar_writing.c  73
  A.5 scorep_profile_cube4_writer.c  77
  A.6 prototype_new.c  85
  A.7 prototype_old.c  94

List of Figures

1 Memory architectures  9
2 Call tree  13
3 System tree  15
4 Filetype  19
5 File partitioning  20
6 Cube libraries  25
7 CubeGUI  26
8 Layout of TAR archive  27
9 Sequence of files in Cube4 tar archive  29
10 Inclusive vs. exclusive values  31
11 Tree ordering  32
12 Structure of metric data file  33
13 Simplified UML diagram of CubeWriter  37
14 Internal steps of CubeWriter  39
15 Score-P sequence diagram  41
16 File partition  42
17 New algorithm  43
18 Filetypes of processes  44
19 Internal steps of the new CubeWriter library  48
20 New Score-P sequence diagram  50
21 Writing time of different files in tar archive  53
22 Execution time  54
23 Overall prototype writing speed  55
24 Writing speed of dense metrics  56
25 Overall writing time for different call tree sizes  57
26 Writing time for H2O-64 benchmark  58
27 Overall writing speed for H2O-64 benchmark  59

List of Tables

1 Data access routines  22
2 Write performance of MPI  23
3 Call path order  32
4 Metrics in the prototype  52

1 Introduction

The need to analyse complex parallel codes in high-performance computing has led to a growing number of performance analysis tools. These tools help the user write parallel code that is efficient and does not use more computing resources than absolutely necessary. The user can measure how the application behaves and thus gain insight into the problems and bottlenecks it might have.

Measuring the performance of a large-scale application produces a great deal of information, because such applications can be executed on millions of computing cores and measurement data is collected for each of them. Writing all of this information to a file can therefore be a very slow process.

This thesis revolves around redesigning and rewriting the CubeWriter library, the part of the Cube software that writes an application's performance report. We propose a new parallel approach in which each process writes its own measurement data synchronously.

The rest of this thesis is organized as follows. Chapter 2 introduces the reader to high-performance computing. We describe how the analysis of complex parallel codes led to the development of automatic performance analysis tools. After that, a brief overview of Score-P is given, along with a description of how measuring an application's performance forms a three-dimensional performance space. Chapter 3 deals with parallel input/output operations in HPC, with the main focus on the I/O part of the Message Passing Interface (MPI). In chapter 4, we go into the details of Cube, which is the main topic of this thesis. We briefly explain the Cube libraries before describing the Cube4 file format, which is a tar archive. We conclude the chapter by explaining how this file is written by the CubeWriter library. We then describe how the library could be rewritten to support parallel writing; this new writing algorithm and its implementation are described in chapters 5 and 6. In chapter 7, we look at the results and show how the new CubeWriter library performs much better than the old version. In the last chapter, we conclude the work and give some ideas for future development.

2 HPC ecosystem

2.1 Origins of HPC

HPC, or High Performance Computing (sometimes also called supercomputing), refers to aggregating computing power to achieve much higher performance than the standard desktop computers of the time. Such high-performance machines are usually used to solve large linear algebra problems that arise in science and engineering.

The origins of HPC date back to the 1950s, but the first big breakthrough came in the mid-1970s with the production of the Cray-1 [1]. Produced in 1975, it is commonly regarded as the first successful supercomputer. At that time, the technology of the most advanced machines was based on vector processing. A vector processor is a processing unit whose instructions operate on a whole vector of data rather than on a single data item.
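To make the distinction concrete, consider the element-wise addition of two arrays in the minimal C sketch below (an illustrative example, not code from this thesis). A conventional scalar processor executes one addition per loop iteration, whereas a vector processor can apply a single vector add instruction to many elements of a and b at once.

    /* Element-wise addition c[i] = a[i] + b[i] for i = 0..n-1.
     * A scalar CPU issues n separate add instructions for this loop;
     * a vector processor issues one vector add per chunk of elements. */
    void vector_add(const double *a, const double *b, double *c, int n)
    {
        for (int i = 0; i < n; i++) {
            c[i] = a[i] + b[i];
        }
    }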
In the 1980s and early 1990s, vector-based computers began to be overtaken by massively parallel machines, in which many processors work together on different parts of a larger problem. In contrast to vector-based systems, which were designed to run instructions on a vector of data as quickly as possible, a massively parallel computer distributes parts of the problem to entirely different processors and then recombines the results. Research shifted strongly towards such machines. New high-speed networks, the availability of low-cost processors, and software for distributed computing led to the development of computer clusters: sets of tightly connected computers that work together. The first such system was the Beowulf cluster, built in 1994 at NASA [2]. Today, this architecture prevails in both industry and the academic community.

At the beginning of the 21st century, another big step was the development of multi-core processors, which can run multiple instructions on separate cores at the same time. Almost all computers on the market today are equipped with such processors. The early 2000s were also important for the development of general-purpose graphics processing units (GPGPUs). Driven by extensive research in the gaming industry, GPGPU technology advanced and was found to suit scientific computing as well. Nowadays, graphics processing units form a major part of some of the world's fastest supercomputers.

The performance of supercomputers is measured in floating-point operations per second (flops). A floating-point operation is a basic arithmetic operation in double-precision floating-point arithmetic; the double-precision format is a computer number format that occupies 64 bits of memory. A common way to measure performance is the LINPACK benchmark, which solves a dense system of linear equations.

The fastest supercomputer currently is Summit at the Department of Energy's Oak Ridge National Laboratory in the USA [3]. Based mostly on NVIDIA Tesla V100 GPUs, Summit reached a maximum speed of 122.3 · 10^15 flops on the LINPACK benchmark. Many powerful supercomputers around the world are already in the petaflops (10^15) region, and research on new hardware technologies is driving a never-ending growth of computing power.
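To put such a number into perspective, here is a rough, purely illustrative calculation (the problem size n below is hypothetical, not the size used in the actual Summit run). An LU-based dense solve of the kind performed by LINPACK requires roughly (2/3)n^3 floating-point operations, so for n = 10^7 unknowns:

\[
\tfrac{2}{3} n^3 \approx 6.7 \cdot 10^{20} \ \text{flops},
\qquad
\frac{6.7 \cdot 10^{20}}{122.3 \cdot 10^{15}\ \text{flops/s}} \approx 5.5 \cdot 10^{3}\ \text{s} \approx 1.5\ \text{hours}.
\]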