Windows on Computing New Initiatives at Los Alamos

Total Pages: 16

File Type: PDF, Size: 1020 KB

Windows on Computing: New Initiatives at Los Alamos
David W. Forslund, Charles A. Slocomb, and Ira A. Agins
Los Alamos Science, Number 22, 1994

No aspect of technology is changing more rapidly than the field of computing and information systems. It is among the fastest growing and most competitive arenas in our global economy. Each year, more of the items we use contain tiny microprocessors—silicon chips on which are etched hundreds or thousands or millions of electronic circuit elements. Those computer chips direct various operations and adjustments—automatic braking in cars; automatic focusing in cameras; automatic data collection in cash registers; automatic message-taking by answering machines; automatic operation of washers, dryers, and other appliances; automatic production of goods in manufacturing plants; the list could go on and on. Those inconspicuous devices that perform micro-scale computing are profoundly shaping our lives and our culture.

Opening illustration: Elements of high-performance computing: left, the CM-5 Connection Machine, the most powerful massively parallel supercomputer at Los Alamos; center, the foyer in the Laboratory Data Communications Center (LDCC); upper right, digitized images of Washington, D.C. from a wavelet-based multiresolution database developed at Los Alamos; and lower right, a portion of a "metacomputer," a 950,000-transistor special-purpose chip for analyzing the behavior of digital circuits. The chip is being developed by a team of graduate students at the University of Michigan.

More visible and no less important are the ways microprocessors are changing the way we communicate with each other and even the kinds of tasks we do. Industries such as desktop publishing, electronic mail, multimedia systems, and financial accounting systems have been created by the ubiquitous microprocessor. It is nothing short of the engine of the digital revolution.

The computer chip was invented in 1958 when Jack Kilby figured out how to fabricate several transistors on a single-crystal silicon substrate and thereby created the integrated circuit. Since then more and more transistors have been integrated on a single chip. By the early 1980s the technology of VLSI, or very-large-scale integration (hundreds of thousands of transistors on a chip), had led to dramatic reductions in the cost of producing powerful microprocessors and large memory units. As a result, affordable personal computers and powerful workstations have become commonplace in science, in business, and in the home.

New microprocessors continue to be incorporated into various products at an increasing rate; development cycles are down to months rather than years as the current generation of processors is used to aid in the design and manufacture of the next generation. Because of their economies of scale, off-the-shelf microprocessors are expanding the use of micro- and medium-scale computing in business and in the home. They are also motivating changes in the design of large-scale scientific computers. The focus has shifted from single, very fast processors to very fast networks that allow hundreds to thousands of microprocessors to cooperate on a single problem. Large-scale computation is a critical technology in scientific research, in major industries, and in the maintenance of national security. It is also the area of computing in which Los Alamos has played a major role.

The microprocessor has opened up the possibility of continual increases in the power of supercomputers through the architecture of the MPP, the massively parallel processor that can consist of thousands of off-the-shelf microprocessors. In 1989, seeing the potential of that new technology for addressing the "Grand Challenge" computational problems in science and engineering, Los Alamos set up the Advanced Computing Laboratory as a kind of proving ground for testing MPPs on real problems. Ironically, just as their enormous potential is being clearly demonstrated at Los Alamos and elsewhere, economic forces stemming from reduced federal budgets and slow acceptance into the commercial marketplace are threatening the viability of the supercomputing industry.

As a leader in scientific computing, Los Alamos National Laboratory has always understood the importance of supercomputing for maintaining national security and economic strength. At this critical juncture the Laboratory plans to continue working with the supercomputing industry and to help expand the contributions of computer modeling and simulation to all areas of society. Here, we will briefly review a few of our past contributions to the high end of computing, outline some new initiatives in large-scale parallel computing, and then introduce a relatively new area of involvement, our support of the information revolution and the National Information Infrastructure initiative.

Since the days of the Manhattan Project, Los Alamos has been a driver of and a major participant in the development of large-scale scientific computation. It was here that Nick Metropolis directed the construction of MANIAC I and II. MANIAC I (1952) was among the first general-purpose digital computers to realize von Neumann's concept of a stored-program computer—one that could go from step to step in a computation by using a set of instructions that was stored electronically in its own memory in the same way that data are stored.

[Figure: growth in computing power at Los Alamos over time; the most recent machines plotted include the Cray T3D, CM-5, Intel Delta, and CM-200.]

History of Computers at Los Alamos

1943–45: Desktop calculators and punched-card accounting machines are used as calculating tools in the Manhattan Project.
1945: ENIAC, the world's first large-scale electronic computer, is completed at the University of Pennsylvania. Its "shakedown" calculation is the "Los Alamos problem," a calculation needed for the design of thermonuclear weapons.
1949: IBM's first Card Programmable Calculators are installed at the Laboratory.
1952: MANIAC is built at the Laboratory under the direction of Nick Metropolis. It is the first computer designed from the start according to John von Neumann's stored-program ideas.
1953: The Laboratory gets serial number 2 of the IBM 701. This "Defense Calculator" is approximately equal in power to the MANIAC.
1955: The MANIAC II project, a computer featuring floating-point arithmetic, is started. The Laboratory begins working closely with computer manufacturers to ensure that its future computing needs will be satisfied.
1956: MANIAC II is completed. The Laboratory installs serial number 1 of the IBM 704, which has about the same power as MANIAC II. From this point on, the Laboratory acquires supercomputers from industry.
Late 1950s: The Laboratory and IBM enter into a joint project to build STRETCH, a computer based on transistors rather than vacuum tubes, to meet the needs of the nuclear-weapons program.
1961: STRETCH is completed and is about thirty-five times as powerful as the IBM 704. IBM used much of the technology developed for STRETCH in its computers for years afterward.
1966: The first on-line mass-storage system with a capacity of over 10^12 bits, the IBM 1360 Photo Store, is installed at the Laboratory. Control Data Corporation introduces the first "pipelined" computer, the CDC 6600, designed by Seymour Cray. The Laboratory buys a few.
1971: The Laboratory buys its first CDC 7600, the successor to the 6600. These machines are the main supercomputers in use at the Laboratory during much of the 1970s.
1972: Cray Research, Inc. is founded. The Laboratory consults on the design of the Cray-1.
1975: Laboratory scientists design and build a high-speed network that uses 50-megabit-per-second channels.
1976: Serial number 1 of the Cray-1 is delivered to the Laboratory.
1977: A Common File System, composed of IBM mass-storage components, is installed and provides storage for all central and remote Laboratory computer systems.
1980: The Laboratory begins its parallel-processing efforts.
1981: An early parallel processor (PuPS) is fabricated at the Laboratory but never completed.
1983: Denelcor's HEP, an early commercially available parallel processor, is installed, as is the first of five Cray X-MP computers.
1985: The Ultra-High-Speed Graphics Project is started. It pioneers animation as a visualization tool and requires gigabit-per-second communication capacity. A massively parallel (128-node) Intel computer is installed.
1987: The need for higher communication capacity is answered by the development of the High-Performance Parallel Interface (HIPPI), an 800-megabit-per-second channel, which becomes an ANSI standard.
1988: The Laboratory obtains the first of its six Cray Y-MP computers. It also installs, studies, and evaluates a number of massively parallel computers. The Advanced Computing Laboratory (ACL) is established.
1989: The ACL purchases the CM-2 Connection Machine from Thinking Machines. It has 65,536 parallel processors.
1990: A test device for HIPPI ports is transferred to industry. The Laboratory, the Jet Propulsion Laboratory, and the San Diego Supercomputer Center start the Casa Gigabit Test Project.
1991: The Laboratory transfers to industry the HIPPI frame buffer, an important component for visualization of complex images.
1992: A 1024-processor Thinking Machines CM-5, the most powerful computer at the time, is installed at the ACL.
1994: A massively parallel Cray T3D is installed at the ACL for use in collaborations with industry.
Recommended publications
  • NVIDIA Tesla Personal Supercomputer
    NVIDIA TESLA PERSONAL SUPERCOMPUTER (Tesla datasheet)

    Get your own supercomputer. Experience cluster-level computing performance—up to 250 times faster than standard PCs and workstations—right at your desk. The NVIDIA® Tesla™ Personal Supercomputer is based on the revolutionary NVIDIA CUDA™ parallel computing architecture and powered by up to 960 parallel processing cores.

    TESLA C1060 COMPUTING PROCESSORS ARE THE CORE OF THE TESLA PERSONAL SUPERCOMPUTER

    ACCESSIBLE TO EVERYONE: Available from OEMs and resellers worldwide, the Tesla Personal Supercomputer operates quietly and plugs into a standard power strip so you can take advantage of cluster-level performance anytime you want, right from your desk.

    YOUR OWN SUPERCOMPUTER: Get nearly 4 teraflops of compute capability and the ability to perform computations 250 times faster than a multi-CPU core PC or workstation.

    NVIDIA CUDA UNLOCKS THE POWER OF GPU PARALLEL COMPUTING: The CUDA™ parallel computing architecture enables developers to utilize C programming with NVIDIA GPUs to run the most complex, computationally intensive applications. CUDA is easy to learn and has become widely adopted by thousands of application developers worldwide to accelerate the most performance-demanding applications.

    FEATURES AND BENEFITS
    Your Own Supercomputer: Dedicated computing resource for every computational researcher and technical professional.
    Cluster Performance on Your Desktop: The performance of a cluster in a desktop system. Four Tesla computing processors deliver nearly 4 teraflops of performance.
    Designed for Office Use: Plugs into a standard office power socket and quiet enough for use at your desk.
    Massively Parallel Many-Core GPU Architecture: 240 parallel processor cores per GPU that can execute thousands of concurrent threads.
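    The datasheet's point that CUDA "enables developers to utilize C programming with NVIDIA GPUs" is easiest to see in code. The sketch below is not part of the datasheet; the kernel name, problem size, and launch configuration are illustrative assumptions. It shows the typical CUDA C workflow: copy data to the GPU, launch a kernel across many parallel threads, and copy the result back.

        // Minimal CUDA C example (illustrative only): y = a*x + y on the GPU.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        // Each thread handles one element of the arrays.
        __global__ void saxpy(int n, float a, const float *x, float *y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)                        // guard the last, partially filled block
                y[i] = a * x[i] + y[i];
        }

        int main()
        {
            const int n = 1 << 20;            // 1M elements (assumed size)
            size_t bytes = n * sizeof(float);

            float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

            float *dx, *dy;                   // device (GPU) copies of x and y
            cudaMalloc(&dx, bytes);
            cudaMalloc(&dy, bytes);
            cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

            int threads = 256;                          // threads per block
            int blocks = (n + threads - 1) / threads;   // enough blocks to cover n
            saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

            cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
            printf("y[0] = %f\n", hy[0]);     // expect 4.0

            cudaFree(dx); cudaFree(dy); free(hx); free(hy);
            return 0;
        }

    The same source compiles with nvcc for any CUDA-capable GPU; only the number of cores available to execute the threads changes.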
  • The Road Ahead for Computing Systems
    Issue 56, January 2019

    HiPEAC conference 2019, Valencia. The road ahead for computing systems. Monica Lam on keeping the web open. Alberto Sangiovanni Vincentelli on building tech businesses. Koen Bertels on quantum computing. Tech talk 2030.

    Highlights: 7 Benvinguts a València | 14 Monica Lam on open-source voice assistants | 16 Starting and scaling a successful tech business

    Contents
    3 Welcome (Koen De Bosschere)
    4 Policy corner: The future of technology – looking into the crystal ball (Sandro D'Elia)
    6 News
    14 HiPEAC voices: 'We are witnessing the creation of closed, proprietary linguistic webs' (Monica Lam)
    16 HiPEAC voices: 'Do not think that SME status is the final game' (Alberto Sangiovanni Vincentelli)
    18 Technology 2030: Computing for the future? The way forward for computing systems (Marc Duranton, Madeleine Gray and Marcin Ostasz)
    23 Technology 2030: Tech talk 2030
    24 Future compute special
    30 SME snapshot: UltraSoC: Smarter systems thanks to self-aware chips (Rupert Baines)
    33 Innovation Europe: M2DC: The future of modular microserver technology (João Pita Costa, Ariel Oleksiak, Micha vor dem Berge and Mario Porrmann)
    34 Innovation Europe: TULIPP: High-performance image processing for embedded computers (Philippe Millet, Diana Göhringer, Michael Grinberg, Igor Tchouchenkov, Magnus Jahre, Magnus Peterson, Ben Rodriguez, Flemming Christensen and Fabien Marty)
    35 Innovation Europe: Software for the big data era with E2Data (Juan Fumero)
    36 Innovation Europe: A RECIPE for HPC success (William Fornaciari)
    37 Innovation Europe: Solving heterogeneous challenges with the Heterogeneity Alliance
  • 2.5 Classification of Parallel Computers
    2.5 Classification of Parallel Computers

    2.5.1 Granularity
    In parallel computing, granularity means the amount of computation in relation to communication or synchronisation. Periods of computation are typically separated from periods of communication by synchronization events.
    • fine level (same operations with different data)
      ◦ vector processors
      ◦ instruction level parallelism
      ◦ fine-grain parallelism:
        – Relatively small amounts of computational work are done between communication events
        – Low computation to communication ratio
        – Facilitates load balancing
        – Implies high communication overhead and less opportunity for performance enhancement
        – If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation.
    • operation level (different operations simultaneously)
    • problem level (independent subtasks)
      ◦ coarse-grain parallelism:
        – Relatively large amounts of computational work are done between communication/synchronization events
        – High computation to communication ratio
        – Implies more opportunity for performance increase
        – Harder to load balance efficiently

    2.5.2 Hardware: Pipelining (was used in supercomputers, e.g. Cray-1)
    If there are N elements in the pipeline and each element takes L clock cycles, the pipelined calculation takes L + N cycles; without pipelining it takes L * N cycles.

    Example of code that pipelines well:

        do i = 1, k
          z(i) = x(i) + y(i)
        end do

    Vector processors provide fast vector operations (operations on arrays). The example above is also good for a vector processor (vector addition), but recursion, for example, is hard to optimise for vector processors. Example: Intel MMX – a simple vector processor.
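    As an aside that is not part of the lecture notes, the same vectorizable loop can also be written for a GPU, which gives a concrete (if loose) analogue of the fine-grain versus coarse-grain distinction above. The kernel names and the grid-stride idiom are illustrative assumptions.

        // Fine grain: every GPU thread performs the same operation on exactly one
        // element, mirroring the notes' loop  z(i) = x(i) + y(i).
        __global__ void add_fine(int k, const float *x, const float *y, float *z)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < k)
                z[i] = x[i] + y[i];
        }

        // Coarser grain: each thread strides through the array and handles many
        // elements, so more computation is done per unit of scheduling overhead.
        __global__ void add_coarse(int k, const float *x, const float *y, float *z)
        {
            for (int i = blockIdx.x * blockDim.x + threadIdx.x;
                 i < k;
                 i += blockDim.x * gridDim.x)
                z[i] = x[i] + y[i];
        }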
  • A Massively-Parallel Mixed-Mode Computer Designed to Support High Level Languages
    This paper appeared in the International Parallel Processing Symposium, Proceedings of the Workshop on Heterogeneous Processing, Newport Beach, CA, April.

    Triton: A Massively-Parallel Mixed-Mode Computer Designed to Support High Level Languages
    Christian G. Herter, Thomas M. Warschko, Walter F. Tichy and Michael Philippsen
    University of Karlsruhe, Dept. of Informatics, Karlsruhe, Germany

    Abstract: We present the architecture of Triton, a scalable mixed-mode SIMD/MIMD parallel computer. The novel features of Triton are:
    • Support for high-level machine-independent programming languages
    • Fast SIMD/MIMD mode switching
    • Special hardware for barrier synchronization of multiple process groups
    • A self-routing, deadlock-free perfect shuffle interconnect with latency hiding
    The architecture is the outcome of an integrated design ...

    Modula-2*
    Modula-2* (pronounced "Modula-2-star") is a small extension of Modula-2 for massively parallel programming. The programming model of Modula-2* incorporates both data and control parallelism and allows mixed synchronous and asynchronous execution. Modula-2* is problem-oriented in the sense that the programmer can choose the degree of parallelism and mix the control mode (SIMD- or MIMD-like) as needed by the intended algorithm. Parallelism may be nested to arbitrary depth. Procedures may be called from sequential or parallel contexts and can themselves generate parallel activity without any restrictions. Most Modula-2* programs can be translated into efficient code for both SIMD and MIMD architectures.

    Overview of language extensions: Modula-2* extends Modula-2 ...
  • Fog Computing: a Platform for Internet of Things and Analytics
    Fog Computing: A Platform for Internet of Things and Analytics
    Flavio Bonomi, Rodolfo Milito, Preethi Natarajan and Jiang Zhu

    Abstract: Internet of Things (IoT) brings more than an explosive proliferation of endpoints. It is disruptive in several ways. In this chapter we examine those disruptions, and propose a hierarchical distributed architecture that extends from the edge of the network to the core, nicknamed Fog Computing. In particular, we pay attention to a new dimension that IoT adds to Big Data and Analytics: a massively distributed number of sources at the edge.

    1 Introduction
    The "pay-as-you-go" Cloud Computing model is an efficient alternative to owning and managing private data centers (DCs) for customers facing Web applications and batch processing. Several factors contribute to the economy of scale of mega DCs: higher predictability of massive aggregation, which allows higher utilization without degrading performance; convenient location that takes advantage of inexpensive power; and lower OPEX achieved through the deployment of homogeneous compute, storage, and networking components. Cloud computing frees the enterprise and the end user from the specification of many details. This bliss becomes a problem for latency-sensitive applications, which require nodes in the vicinity to meet their delay requirements. An emerging wave of Internet deployments, most notably the Internet of Things (IoTs), requires mobility support and geo-distribution in addition to location awareness and low latency. We argue that a new platform is needed to meet these requirements; a platform we call Fog Computing [1]. We also claim that rather than cannibalizing Cloud Computing, ...
  • FUNDAMENTALS of COMPUTING (2019-20) COURSE CODE: 5023 502800CH (Grade 7 for ½ High School Credit) 502900CH (Grade 8 for ½ High School Credit)
    EXPLORING COMPUTER SCIENCE NEW NAME: FUNDAMENTALS OF COMPUTING (2019-20) COURSE CODE: 5023 502800CH (grade 7 for ½ high school credit) 502900CH (grade 8 for ½ high school credit) COURSE DESCRIPTION: Fundamentals of Computing is designed to introduce students to the field of computer science through an exploration of engaging and accessible topics. Through creativity and innovation, students will use critical thinking and problem solving skills to implement projects that are relevant to students’ lives. They will create a variety of computing artifacts while collaborating in teams. Students will gain a fundamental understanding of the history and operation of computers, programming, and web design. Students will also be introduced to computing careers and will examine societal and ethical issues of computing. OBJECTIVE: Given the necessary equipment, software, supplies, and facilities, the student will be able to successfully complete the following core standards for courses that grant one unit of credit. RECOMMENDED GRADE LEVELS: 9-12 (Preference 9-10) COURSE CREDIT: 1 unit (120 hours) COMPUTER REQUIREMENTS: One computer per student with Internet access RESOURCES: See attached Resource List A. SAFETY Effective professionals know the academic subject matter, including safety as required for proficiency within their area. They will use this knowledge as needed in their role. The following accountability criteria are considered essential for students in any program of study. 1. Review school safety policies and procedures. 2. Review classroom safety rules and procedures. 3. Review safety procedures for using equipment in the classroom. 4. Identify major causes of work-related accidents in office environments. 5. Demonstrate safety skills in an office/work environment.
  • Simulating Physics with Computers
    International Journal of Theoretical Physics, Vol. 21, Nos. 6/7, 1982

    Simulating Physics with Computers
    Richard P. Feynman
    Department of Physics, California Institute of Technology, Pasadena, California 91107
    Received May 7, 1981

    1. INTRODUCTION
    On the program it says this is a keynote speech--and I don't know what a keynote speech is. I do not intend in any way to suggest what should be in this meeting as a keynote of the subjects or anything like that. I have my own things to say and to talk about and there's no implication that anybody needs to talk about the same thing or anything like it. So what I want to talk about is what Mike Dertouzos suggested that nobody would talk about. I want to talk about the problem of simulating physics with computers and I mean that in a specific way which I am going to explain. The reason for doing this is something that I learned about from Ed Fredkin, and my entire interest in the subject has been inspired by him. It has to do with learning something about the possibilities of computers, and also something about possibilities in physics. If we suppose that we know all the physical laws perfectly, of course we don't have to pay any attention to computers. It's interesting anyway to entertain oneself with the idea that we've got something to learn about physical laws; and if I take a relaxed view here (after all I'm here and not at home) I'll admit that we don't understand everything.
  • UNICOS/mk Status
    Status on the Serverization of UNICOS (UNICOS/mk Status)
    Jim Harrell, Cray Research, Inc., 655-F Lone Oak Drive, Eagan, Minnesota 55121

    ABSTRACT: UNICOS is being reorganized into a microkernel-based system. The purpose of this reorganization is to provide an operating system that can be used on all Cray architectures and provide both the current UNICOS functionality and a path to the future distributed systems. The reorganization of UNICOS is moving forward. The port of this "new" system to the MPP is also in progress. This talk will present the current status, and plans for UNICOS/mk. The challenges of performance, size and scalability will be discussed.

    1 Introduction
    This discussion is divided into four parts. The first part discusses the development process used by this project. The development process is the methodology that is being used to serverize UNICOS. The project is actually proceeding along multiple, semi-independent paths at the same time. The development process will help explain the information in the second part, which is a discussion of the current status of the project. The third part discusses accomplished milestones.

    ... have to be added in order to provide required functionality, such as support for distributed applications. As the work of adding features and porting continues there is a testing effort that ensures correct functionality of the product. Some of the initial porting can be and has been done in the simulator. However, issues such as MPP system organization, which nodes the servers will reside on and how the servers will interact can only be completed on the hardware. The issue of ...
  • Thinking Machines Corporation Connection Machine
    THINKING MACHINES CORPORATION CONNECTION MACHINE TECHNICAL SUMMARY

    The Connection Machine System
    Connection Machine Model CM-2 Technical Summary
    Version 6.0, November 1990
    Thinking Machines Corporation, Cambridge, Massachusetts
    First printing, November 1990

    The information in this document is subject to change without notice and should not be construed as a commitment by Thinking Machines Corporation. Thinking Machines Corporation reserves the right to make changes to any products described herein to improve functioning or design. Although the information in this document has been reviewed and is believed to be reliable, Thinking Machines Corporation does not assume responsibility or liability for any errors that may appear in this document. Thinking Machines Corporation does not assume any liability arising from the application or use of any information or product described herein.

    Connection Machine® is a registered trademark of Thinking Machines Corporation. CM-1, CM-2, CM-2a, CM, and DataVault are trademarks of Thinking Machines Corporation. C*® is a registered trademark of Thinking Machines Corporation. Paris, *Lisp, and CM Fortran are trademarks of Thinking Machines Corporation. C/Paris, Lisp/Paris, and Fortran/Paris are trademarks of Thinking Machines Corporation. VAX, ULTRIX, and VAXBI are trademarks of Digital Equipment Corporation. Symbolics, Symbolics 3600, and Genera are trademarks of Symbolics, Inc. Sun, Sun-4, SunOS, and Sun Workstation are registered trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. The X Window System is a trademark of the Massachusetts Institute of Technology. StorageTek is a registered trademark of Storage Technology Corporation. Trinitron is a registered trademark of Sony Corporation.
  • Massively Parallel Computing with CUDA
    Massively Parallel Computing with CUDA
    Antonino Tumeo, Politecnico di Milano

    "GPUs have evolved to the point where many real world applications are easily implemented on them and run significantly faster than on multi-core systems. Future computing architectures will be hybrid systems with parallel-core GPUs working in tandem with multi-core CPUs." Jack Dongarra, Professor, University of Tennessee; Author of "Linpack"

    Why Use the GPU?
    • The GPU has evolved into a very flexible and powerful processor:
      • It's programmable using high-level languages
      • It supports 32-bit and 64-bit floating point IEEE-754 precision
      • It offers lots of GFLOPS
    • GPU in every PC and workstation

    What is behind such an Evolution?
    • The GPU is specialized for compute-intensive, highly parallel computation (exactly what graphics rendering is about)
    • So, more transistors can be devoted to data processing rather than data caching and flow control
    [Figure: CPU versus GPU block diagrams; the CPU devotes most of its area to control logic and cache, the GPU to ALUs, each with its own DRAM.]
    • The fast-growing video game industry exerts strong economic pressure that forces constant innovation

    GPUs
    • Each NVIDIA GPU has 240 parallel cores (1.4 billion transistors, 1 teraflop of processing power)
    • Within each core:
      • Floating point unit
      • Logic unit (add, sub, mul, madd)
      • Move, compare unit
      • Branch unit
    • Cores managed by thread manager
      • Thread manager can spawn and manage 12,000+ threads per core
      • Zero overhead thread switching

    Heterogeneous Computing Domains
    [Figure: computing domains; the GPU covers graphics and massive data parallelism (parallel computing), the CPU covers instruction-level (sequential) ...]
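    To make the slide's thread-manager numbers concrete, here is a small sketch of my own (not taken from the presentation); the kernel name, image size, and gain factor are assumptions. A single launch of this 2D kernel on, say, a 512 x 512 image creates 262,144 lightweight threads, far more than the number of physical cores, and the hardware scheduler switches among them with essentially zero overhead.

        // Scale the brightness of a w x h greyscale image; one thread per pixel.
        __global__ void scale_pixels(unsigned char *img, int w, int h, float gain)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x < w && y < h) {
                float v = img[y * w + x] * gain;             // scale the pixel value
                img[y * w + x] = v > 255.0f ? 255 : (unsigned char)v;
            }
        }

        // Host-side launch: 16 x 16 threads per block, enough blocks to tile the image.
        // dim3 block(16, 16);
        // dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
        // scale_pixels<<<grid, block>>>(d_img, w, h, 1.2f);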
  • Top 10 Reasons to Major in Computing
    Top 10 Reasons to Major in Computing 1. Computing is part of everything we do! Computing and computer technology are part of just about everything that touches our lives from the cars we drive, to the movies we watch, to the ways businesses and governments deal with us. Understanding different dimensions of computing is part of the necessary skill set for an educated person in the 21st century. Whether you want to be a scientist, develop the latest killer application, or just know what it really means when someone says “the computer made a mistake”, studying computing will provide you with valuable knowledge. 2. Expertise in computing enables you to solve complex, challenging problems. Computing is a discipline that offers rewarding and challenging possibilities for a wide range of people regardless of their range of interests. Computing requires and develops capabilities in solving deep, multidimensional problems requiring imagination and sensitivity to a variety of concerns. 3. Computing enables you to make a positive difference in the world. Computing drives innovation in the sciences (human genome project, AIDS vaccine research, environmental monitoring and protection just to mention a few), and also in engineering, business, entertainment and education. If you want to make a positive difference in the world, study computing. 4. Computing offers many types of lucrative careers. Computing jobs are among the highest paid and have the highest job satisfaction. Computing is very often associated with innovation, and developments in computing tend to drive it. This, in turn, is the key to national competitiveness. The possibilities for future developments are expected to be even greater than they have been in the past.
  • Open Dissertation Draft Revised Final.Pdf
    The Pennsylvania State University The Graduate School ICT AND STEM EDUCATION AT THE COLONIAL BORDER: A POSTCOLONIAL COMPUTING PERSPECTIVE OF INDIGENOUS CULTURAL INTEGRATION INTO ICT AND STEM OUTREACH IN BRITISH COLUMBIA A Dissertation in Information Sciences and Technology by Richard Canevez © 2020 Richard Canevez Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy December 2020 ii The dissertation of Richard Canevez was reviewed and approved by the following: Carleen Maitland Associate Professor of Information Sciences and Technology Dissertation Advisor Chair of Committee Daniel Susser Assistant Professor of Information Sciences and Technology and Philosophy Lynette (Kvasny) Yarger Associate Professor of Information Sciences and Technology Craig Campbell Assistant Teaching Professor of Education (Lifelong Learning and Adult Education) Mary Beth Rosson Professor of Information Sciences and Technology Director of Graduate Programs iii ABSTRACT Information and communication technologies (ICTs) have achieved a global reach, particularly in social groups within the ‘Global North,’ such as those within the province of British Columbia (BC), Canada. It has produced the need for a computing workforce, and increasingly, diversity is becoming an integral aspect of that workforce. Today, educational outreach programs with ICT components that are extending education to Indigenous communities in BC are charting a new direction in crossing the cultural barrier in education by tailoring their curricula to distinct Indigenous cultures, commonly within broader science, technology, engineering, and mathematics (STEM) initiatives. These efforts require examination, as they integrate Indigenous cultural material and guidance into what has been a largely Euro-Western-centric domain of education. Postcolonial computing theory provides a lens through which this integration can be investigated, connecting technological development and education disciplines within the parallel goals of cross-cultural, cross-colonial humanitarian development.