Bibliography

App (2017). appc: App container specification and tooling. https://github.com/appc/spec.
Accetta, M. J., Baron, R. V., Bolosky, W. J., Golub, D. B., Rashid, R. F., Tevanian, A., et al. (1986). Mach: A new kernel foundation for UNIX development. In Proceedings of the USENIX Summer Conference.
Ahn, D. H., Garlick, J., Grondona, M., Lipari, D., Springmeyer, B., & Schulz, M. (2014). Flux: A next-generation resource management framework for large HPC centers. In 43rd International Conference on Parallel Processing Workshops (ICCPW), 2014 (pp. 9–17). IEEE.
Ajima, Y., Inoue, T., Hiramoto, S., Takagi, Y., & Shimizu, T. (2012). The Tofu interconnect. IEEE Micro, 32(1), 21–31.
Akkan, H., Ionkov, L., & Lang, M. (2013). Transparently consistent asynchronous shared memory. In Proceedings of the 3rd International Workshop on Runtime and Operating Systems for Supercomputers, ROSS ’13. New York, NY, USA: ACM.
Alam, S., Barrett, R., Bast, M., Fahey, M. R., Kuehn, J., McCurdy, C., et al. (2008). Early evaluation of IBM BlueGene/P. In Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, SC ’08 (pp. 23:1–23:12). Piscataway, NJ, USA: IEEE Press.
Ali, N., Carns, P., Iskra, K., Kimpe, D., Lang, S., Latham, R., et al. (2009). Scalable I/O forwarding framework for high-performance computing systems. In IEEE International Conference on Cluster Computing and Workshops, 2009, CLUSTER ’09 (pp. 1–10).
Alverson, B., Froese, E., Kaplan, L., & Roweth, D. (2012). Cray Inc. white paper WP-Aries01-1112. Technical report, Cray Inc.
Alverson, G. A., Kahan, S., Korry, R., McCann, C., & Smith, B. J. (1995). Scheduling on the Tera MTA. In Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing, IPPS ’95 (pp. 19–44). London, UK: Springer.
Alverson, R., Callahan, D., Cummings, D., Koblenz, B., Porterfield, A., & Smith, B. (1990). The Tera computer system. In Proceedings of the 4th International Conference on Supercomputing, ICS ’90 (pp. 1–6). New York, NY, USA: ACM.
Andersen, E. (2010). µClibc. https://uclibc.org.
Anderson, T. E., Culler, D. E., & Patterson, D. A. (1995). The Berkeley Networks of Workstations (NOW) project. In Proceedings of the 40th IEEE Computer Society International Conference, COMPCON ’95 (p. 322). Washington, DC, USA: IEEE Computer Society.
Arcangeli, A. (2010). Transparent hugepage support. In KVM Forum. https://www.linux-kvm.org/images/9/9e/2010-forum-thp.pdf.
Hori, A. (2009). PMX Specification –DRAFT–. Allinea Software.
Bailey, D., Barszcz, E., Barton, J., Browning, D., Carter, R., Dagum, L., et al. (1991). The NAS parallel benchmarks. International Journal of High Performance Computing Applications, 5(3), 63–73.
Balan, R., & Gollhardt, K. (1992). A scalable implementation of virtual memory HAT layer for shared memory multiprocessor machines. In Proceedings of USENIX Summer 1992 Technical Conference.
Barach, D. R., Wells, R., Uban, T., & Gibson, J. (1990). Highly parallel virtual memory management on the TC2000. In Proceedings of the 1990 International Conference on Parallel Processing, ICPP ’90 (pp. 549–550).
Barak, A., Drezner, Z., Levy, E., Lieber, M., & Shiloh, A. (2015). Resilient gossip algorithms for collecting online management information in exascale clusters. Concurrency and Computation: Practice and Experience, 27(17), 4797–4818.
Baskett, F., Howard, J. H., & Montague, J. T. (1977). Task communication in DEMOS. In Proceedings of the Sixth ACM Symposium on Operating Systems Principles, SOSP ’77 (pp. 23–31). New York, NY, USA: ACM.
Bautista-Gomez, L., Gainaru, A., Perarnau, S., Tiwari, D., Gupta, S., Cappello, F., et al. (2016). Reducing waste in large scale systems through introspective analysis. In IEEE International Parallel and Distributed Processing Symposium (IPDPS).
BDEC Committee (2017). The BDEC “Pathways to convergence” report. http://www.exascale.org/bdec/.
Beckman, P., et al. (2015). Argo: An exascale operating system. http://www.argo-osr.org/. Retrieved November 20, 2015.
Beckman, P., Iskra, K., Yoshii, K., & Coghlan, S. (2006a). The influence of operating systems on the performance of collective operations at extreme scale. In IEEE International Conference on Cluster Computing. Cluster.
Beckman, P., Iskra, K., Yoshii, K., & Coghlan, S. (2006b). Operating system issues for petascale systems. ACM SIGOPS Operating Systems Review, 40(2), 29–33.
Beckman, P., Iskra, K., Yoshii, K., Coghlan, S., & Nataraj, A. (2008). Benchmarking the effects of operating system interference on extreme-scale parallel machines. Cluster Computing, 11(1), 3–16.
Beeler, M. (1990). Inside the TC2000 computer.
Beserra, D., Moreno, E. D., Endo, P. T., Barreto, J., Sadok, D., & Fernandes, S. (2015). Performance analysis of LXC for HPC environments. In International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS).
Black, D. L., Tevanian, A., Jr., Golub, D. B., & Young, M. W. (1991). Locking and reference counting in the Mach kernel. In Proceedings of the 1991 ICPP, Volume II, Software (pp. 167–173). CRC Press.
Blumofe, R. D., Joerg, C. F., Kuszmaul, B. C., Leiserson, C. E., Randall, K. H., & Zhou, Y. (1995). Cilk: An efficient multithreaded runtime system. In Proceedings of the Fifth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP ’95 (pp. 207–216). New York, NY, USA: ACM.
Boden, N. J., Cohen, D., Felderman, R. E., Kulawik, A. E., Seitz, C. L., Seizovic, J. N., et al. (1995). Myrinet: A gigabit-per-second local area network. IEEE Micro, 15(1), 29–36.
Boehme, D., Gamblin, T., Beckingsale, D., Bremer, P.-T., Gimenez, A., LeGendre, M., et al. (2016). Caliper: Performance introspection for HPC software stacks. In Proceedings of the 29th ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC).
Boku, T., Itakura, K., Nakamura, H., & Nakazawa, K. (1997). CP-PACS: A massively parallel processor for large scale scientific calculations. In Proceedings of the ACM 11th International Conference on Supercomputing (pp. 108–115). Vienna, Austria.
Bolen, J., Davis, A., Dazey, B., Gupta, S., Henry, G., Robboy, D., et al. (1995). Massively parallel distributed computing. In Proceedings of the Intel Supercomputer Users’ Group 1995 Annual North America Users’ Conference.
Bratterud, A., Walla, A., Haugerud, H., Engelstad, P. E., & Begnum, K. (2015). IncludeOS: A resource efficient unikernel for cloud services. In Proceedings of the 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom).
Breitbart, J., Pickartz, S., Weidendorfer, J., Lankes, S., & Monti, A. (2017). Dynamic co-scheduling driven by main memory bandwidth utilization. In 2017 IEEE International Conference on Cluster Computing (CLUSTER 2017). Accepted for publication.
Brightwell, R., Fisk, L. A., Greenberg, D. S., Hudson, T., Levenhagen, M., Maccabe, A. B., et al. (2000). Massively parallel computing using commodity components. Parallel Computing, 26(2–3), 243–266.
Brightwell, R., Hudson, T., & Pedretti, K. (2008). SMARTMAP: Operating system support for efficient data sharing among processes on a multi-core processor. In Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC’08).
Brightwell, R., Hudson, T., Riesen, R., & Maccabe, A. B. (1999). The Portals 3.0 message passing interface. Technical report SAND99-2959, Sandia National Laboratories.
Brightwell, R., Maccabe, A. B., & Riesen, R. (2002). Design and implementation of MPI on Portals 3.0. In D. Kranzlmüller, P. Kacsuk, J. Dongarra, & J. Volkert (Eds.), Recent Advances in Parallel Virtual Machine and Message Passing Interface: 9th European PVM/MPI Users’ Group Meeting, Linz, Austria, September 29–October 2, 2002, Proceedings. Lecture Notes in Computer Science (Vol. 2474, pp. 331–340). Springer.
Brightwell, R., Maccabe, A. B., & Riesen, R. (2003a). Design, implementation, and performance of MPI on Portals 3.0. The International Journal of High Performance Computing Applications, 17(1), 7–20.
Brightwell, R., Oldfield, R., Maccabe, A. B., & Bernholdt, D. E. (2013). Hobbes: Composition and virtualization as the foundations of an extreme-scale OS/R. In Proceedings of the 3rd International Workshop on Runtime and Operating Systems for Supercomputers, ROSS ’13 (pp. 2:1–2:8).
Brightwell, R., Riesen, R., Underwood, K., Bridges, P. G., Maccabe, A. B., & Hudson, T. (2003b). A performance comparison of Linux and a lightweight kernel. In IEEE International Conference on Cluster Computing (pp. 251–258). Cluster.
Brooks, E. (1990). Attack of the killer micros. Talk at Supercomputing ’91.
Brooks, E. D., Gorda, B. C., Warren, K. H., & Welcome, T. S. (1991). BBN TC2000 architecture and programming models. In Compcon Spring ’91, Digest of Papers (pp. 46–50).
Brown, N. (2018). Overlay filesystem documentation. https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt.
Brugger, G., & Streletz. (2001). Network Livermore Time Sharing System (NLTSS). http://www.computer-history.info/Page4.dir/pages/LTSS.NLTSS.dir/pages/NLTSS.pdf.
Bull, J. M., Reid, F., & McDonnell, N. (2012). A microbenchmark suite for OpenMP tasks. In Proceedings of the 8th International Conference on OpenMP in a Heterogeneous World, IWOMP ’12 (pp. 271–274). Berlin, Heidelberg: Springer.
Buntinas,