CMTS Cyberinfrastructure


Michael Wilde - [email protected]
http://swift-lang.org

Cyberinfrastructure Challenges
• Expressing multi-scale workflow protocols
• Leveraging and expressing parallelism
• Moving, processing and managing data
• Traversing networks and security
• Diverse environments: schedulers, storage…
• Handling system failures
• Tracking what was done: where, when, how, who?
(Figure: CMTS cyberinfrastructure spanning campus-wide clusters and archival storage, and departmental clusters, storage, and personal workstations.)

Cyberinfrastructure Tasks
§ Enable ease of use of multiple distributed large-scale systems
§ Reduce the effort of writing parallel simulation and analysis scripts
§ Record and organize the provenance of computations
§ Annotate datasets to facilitate sharing, collaboration, and validation
§ Reduce the cost of modeling, both in terms of wall clock time and computer time

Swift Parallel Scripting Language: A Core Component of CMTS Cyberinfrastructure
• Swift is a parallel scripting language
• Composes applications linked by files
• Captures protocols and processes as a CMTS library
• Easy to write: a simple, high-level language
• Small Swift scripts can do large-scale work
• Easy to run: on clusters, clouds, supercomputers and grids
• Sends work to XSEDE, campus, Amazon, OSG, and Cray resources
• Fast and highly parallel: runs a million tasks on thousands of cores, at hundreds of tasks per second
• Automates solutions to four hard problems:
  – Implicit parallelism
  – Transparent execution location
  – Automated failure recovery
  – Provenance tracking

Using Swift in the Larger Cyberinfrastructure Landscape
Swift's runtime system supports and aggregates diverse, distributed execution environments.
(Figure: a Swift script on a submit host such as a login node, laptop, or Linux server drives application programs and data movement across local data, data servers, and campus systems.)
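As a concrete illustration of "composes applications linked by files", here is a minimal sketch in Swift syntax; the simulate_md program, its flags, and the file names are hypothetical stand-ins, not part of the CMTS library:

    type file;

    // Wrap a command-line program as a Swift app() function.
    // Swift stages the input file in, runs the task wherever it is
    // scheduled, and stages the output file back.
    app (file out) simulate_md (file conf) {
      simulate_md "-c" @conf "-o" @out;
    }

    file config <"system.conf">;
    file trajectory <"system.trj">;

    // The call runs as soon as its input is available; any downstream
    // consumer of 'trajectory' waits on it automatically.
    trajectory = simulate_md(config);

The only things the script states are data dependencies between files; where and when each task runs is left to the runtime.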
Expressing CMTS Algorithms in Swift
Translating a shell script of 1,300 lines into code like this:

    (…) EnergyLandscape (…)
    {
      loadPDB (pdbcode="3lzt");
      createSystem (LayerOfWater=7.5, IonicStrength=0.15);
      preequilibrate (waterEquilibration_ps=30, totalEquilibration_ps=1000);
      runMD (config="EL1", tinitial=0, tfinal=10, frame_write_interval=1000);
      postprocess(…);
      forcematching(…);
    }

…to create a well-structured library of scripts that automate workflow-level parallelism.

CMTS Cyberinfrastructure Architecture
• A challenging "breakthrough" project
• Integration & evaluation underway
• Full implementation by the end of Phase II
(Figure: researchers, the CMTS collaboration, and external collaborators work against catalogs of files & metadata, provenance, and script libraries, plus storage locations and compute facilities. Flow: 0: develop script; 1: run script (EL1.trj); 2: look up file (name=EL1.trj, user=Anton, type=trajectory); 3: transfer inputs; 4: run app; 5: transfer results; 6: update catalogs.)

When do you need Swift?
Typical application: protein-ligand docking for drug screening.
O(10) proteins implicated in a disease x O(100K) drug candidates = 1M compute jobs, yielding tens of fruitful candidates for the wetlab & APS.
Work of M. Kubal, T. A. Binkowski, and B. Roux.

Numerous many-task applications
§ A: Simulation of super-cooled glass materials
§ B: Protein folding using homology-free approaches
§ C: Climate model analysis and decision making in energy policy
§ D: Simulation of RNA-protein interaction
§ E: Multiscale subsurface flow modeling
§ F: Modeling of power grid for OE applications
§ A-E have published science results obtained using Swift
(Figure: protein loop modeling, target T0623, 25 residues, improved from 8.2Å to 6.3Å excluding the tail; initial, predicted, and native structures shown. Courtesy A. Adhikari.)

Nested parallel prediction loops in Swift

    Sweep()
    {
      int nSim = 1000;
      int maxRounds = 3;
      Protein pSet[] <ext; exec="Protein.map">;
      float startTemp[] = [ 100.0, 200.0 ];
      float delT[] = [ 1.0, 1.5, 2.0, 5.0, 10.0 ];
      foreach p, pn in pSet {
        foreach t in startTemp {
          foreach d in delT {
            ItFix(p, nSim, maxRounds, t, d);
          }
        }
      }
    }
    Sweep();

10 proteins x 1000 simulations x 3 rounds x 2 temps x 5 deltas = 300K tasks
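Because every Swift variable, including each element of an array, behaves as a future, a sweep like the one above can feed a downstream aggregation step with no explicit synchronization: the consuming call simply waits until the swept array is closed. A hedged sketch, in which the score and aggregate applications and the mapper parameters are hypothetical:

    type file;

    app (file out) score (int seed) {
      score "-seed" seed "-o" @out;
    }

    app (file report) aggregate (file results[]) {
      aggregate @filenames(results) stdout=@report;
    }

    file results[] <simple_mapper; prefix="score_", suffix=".dat">;

    foreach i in [1:100] {
      results[i] = score(i);
    }

    // aggregate() runs only after all foreach iterations above have
    // finished, because 'results' is not closed until then.
    file report <"report.txt">;
    report = aggregate(results);

The foreach iterations run in parallel up to whatever throttle the runtime imposes; the dependency on the whole array is what sequences the final step.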
Programming model: all execution driven by parallel data flow

    (int r) myproc (int i)
    {
      j = f(i);
      k = g(i);
      r = j + k;
    }

§ f() and g() are computed in parallel
§ myproc() returns r when they are done
§ This parallelism is automatic
§ Works recursively throughout the program's call graph

Programming model: Swift in a nutshell
§ All expressions are computed in parallel, throttled by various levels of the runtime system
§ Simple data type model: scalars (boolean, int, float, string, file, external) and collections (arrays and structs)
§ ALL data atoms and collections have "future" behavior
§ A few simple control constructs – if, switch, foreach, iterate
§ A growing library – tracef(), strcat(), regexp(), readData(), writeData()

Swift 2.0
§ Motivation for 2.0
  – Scalability: 1M tasks/sec vs 500 – goal has been reached for basic tests
  – Richer programming model, broader spectrum of applications
  – Extensibility and maintainability
§ Convergence issues
  – Some array closing; for loop extensions; library cleanup; mapper models
  – Data marshalling/passing for in-memory leaf functions
§ Information and downloads
  – http://exm.xstack.org

Encapsulation enables distributed parallelism
Encapsulation is the key to transparent distribution, parallelization, and automatic provenance capture.
(Figure: a Swift app() function's interface definition connects typed Swift data objects to the files expected or produced by the application program.)

app() functions specify command line argument passing
To run:  psim -s 1ubq.fas -pdb p -t 100.0 -d 25.0 >log

In Swift code:

    app (PDB pg, File log) predict (Protein seq, Float t, Float dt)
    {
      psim "-c" "-s" @seq "-pdb" @pg "-t" t "-d" dt;
    }

    Protein p <ext; exec="Pmap", id="1ubq">;
    PDB structure;
    File log;
    (structure, log) = predict(p, 100., 25.);

(Figure: arrows map the Swift app function "predict()" arguments seq, t, and dt, and the returned pg and log, onto the PSim application's command line and outputs.)

Large scale parallelization with simple loops
1000 runs of the "predict" application:

    foreach sim in [1:1000] {
      (structure[sim], log[sim]) = predict(p, 100., 25.);
    }
    result = analyze(structure);

(Figure: independent prediction runs for targets T1af7, T1b72, and T1r69 feed a single Analyze() step.)

Dataset mapping example: fMRI datasets

    type Study   { Group g[]; }
    type Group   { Subject s[]; }
    type Subject { Volume anat; Run run[]; }
    type Run     { Volume v[]; }
    type Volume  { Image img; Header hdr; }

(Figure: a mapping function or script connects the on-disk data layout to Swift's in-memory data model.)
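Once a mapper has populated such a structure, it can be traversed with ordinary nested foreach loops. A hedged sketch: the study_mapper.sh script, the preprocess_volume program, and the output naming scheme are hypothetical, and Image and Header are assumed to be file-backed types as in the mapping example above:

    type file;

    // Hypothetical per-volume application.
    app (file out) preprocess (Volume v) {
      preprocess_volume @filename(v.img) "-hdr" @filename(v.hdr) "-o" @out;
    }

    // A hypothetical external mapper script lays the on-disk layout
    // onto the nested Study/Group/Subject/Run/Volume structure.
    Study study <ext; exec="study_mapper.sh">;

    foreach g, gi in study.g {
      foreach s, si in g.s {
        foreach r, ri in s.run {
          foreach v, vi in r.v {
            // Output naming is simplified for illustration.
            file out <single_file_mapper;
                      file=strcat("prep_", gi, "_", si, "_", ri, "_", vi, ".img")>;
            out = preprocess(v);
          }
        }
      }
    }

Every volume is processed in parallel; the loop nesting mirrors the type definitions rather than imposing any execution order.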
Metadata
Example tags:

    tag /d1/d2/f3 owner=asinitskiy group=cmts-chem \
        create-date=2013.0415 \
        type=trajectory-namd \
        state=published \
        note="trajectory for UCG case 1" \
        molecule=1ubq domain=loop7 bead=u2

    tag /d1/d2/f0 owner=jdama group=cmts-chem \
        create-date=2013.0412 \
        type=trajectory-namd \
        state=validated \
        note="trajectory for UCG case 2" \
        molecule=clathrin domain=loop1 bead=px

Metadata Query

    ==> (owner=asinitskiy) and (molecule=1ubq)

    dataset_id | name        | value
    -----------+-------------+------------------------
    /d1/d2/f4  | owner       | asinitskiy
               | group       | cmts-chem
               | create-date | 2013.0416.12:20:29.456
               | type        | trajectory-namd
               | state       | published
               | note        | trajectory for case 1
               | molecule    | 2ubq
               | domain      | loop7
               | bead        | dt

Swift information
§ Swift Web: http://www.ci.uchicago.edu/swift or newer: http://swift-lang.org
§ Swift User Guide: http://swift-lang.org/guides/trunk/userguide/userguide.html

Running the CMTS "cyber" tutorials

    # pick up latest changes if instructed:
    cd /project/cmtsworkshop
    cp -rp tutorial/cyber $USER/cyber

    cd $USER/cyber
    source ./setup.sh

    cd basic_swift
    cat README
    cd part01
    cat README
    swift p1.swift
    # etc etc etc

    # start with NAMD
    cd cyber/namd_sweep
    cat README
    source setup.sh     # <== Don't forget this step!!!
    swift rmsd.swift    # etc etc etc

Summary of the CMTS Cyberinfrastructure Approach
§ Streamlining scientist / computer interaction:
  – Large datasets (e.g.