LEADERSHIP COMPUTING IN THE AGE OF CLOUD

Dan Stanzione
Executive Director, Texas Advanced Computing Center
Associate Vice President for Research, The University of Texas at Austin

Amazon Seminar, November 2019

THE BASIC OUTLINE

• What the heck is TACC?
• What is Frontera (and why do you care)?
• What sort of stuff needs to run on Frontera?
• What sort of stuff would be better off on AWS?
• What do future science workflows look like, and why will they run where they run?

WHAT IS TACC?

[Images: Grendel (1993) and Frontera (2019)]

The Texas Advanced Computing Center (TACC) at UT Austin is a (primarily) NSF-funded center that provides and applies large-scale computing resources to the open science community.

TACC AT A GLANCE - 2019

Personnel: 185 staff (~70 PhD)
Facilities: 12 MW data center capacity; two office buildings, three data centers, two visualization facilities, and a chilling plant
Systems and services: >7 billion compute hours per year; >5 billion files; >100 petabytes of data; NSF Frontera (Track 1), Stampede2 (XSEDE flagship), Jetstream (cloud), and Chameleon (cloud testbed) systems
Usage: >15,000 direct users in >4,000 projects; >50,000 web/portal users; user demand is 8x the available system time; thousands of training/outreach participants annually

MODERN

• Simulation: computationally query our *mathematical models* of the world.
• Machine Learning/AI: computationally query our *data sets* (depending on technique, also called deep learning).
• Analytics: computationally analyze our *experiments* (driven by instruments that produce lots of digital information).

I would argue that modern science and engineering combine all three

TACC LAUNCHED IN JUNE 2001 AFTER EXTERNAL REVIEW

• In 2001: a budget of $600K and a staff of 12 (some shared).
• A 50 GF computing resource (1/200,000th of the current system).

RAPID GROWTH FROM THEN TO NOW…

• 2003 – First terascale cluster for open science (#26)

• 2004 – NSF funding to join the TeraGrid
• 2006 – UT System partnership to provide Lonestar-3 (#12)
• 2007 – $59M NSF award – largest in UT history – to deploy Ranger, the world’s largest open system (#4)
• 2008 – Funding for new visualization software and launch of a revamped visualization lab
• 2009 – $50M iPlant Collaborative award (largest NSF bioinformatics award) moves a major component to TACC; life sciences group launched
• 2009 – Reached 65 employees

NOW, A WORLD LEADER IN CYBERINFRASTRUCTURE

• 2010 – TACC becomes a core partner (1 of 4) in XSEDE, the TeraGrid replacement
• 2012 – Stampede replaces Ranger with a new $51.5M NSF award
• 2013 – iPlant is renewed and expanded to $100M
• 2015 – Wrangler, the first data-intensive supercomputer, is deployed
• 2015 – Chameleon cloud is launched
• 2015 – DesignSafe, the cyberinfrastructure for natural hazard engineering, is launched
• 2016 – Stampede2 awarded: the largest academic system in the United States, 2017-2021
• 2019 – Frontera

HPC DOESN’T LOOK LIKE IT USED TO. . .

• HPC-enabled Jupyter Notebooks – narrative analytics and exploration environment
• Web portal – data management and accessible batch computing
• Event-driven data processing – extensible end-to-end framework to integrate planning, experimentation, validation, and analytics

From batch processing and single simulations of many MPI tasks – to all of that, plus new modes of computing, automated workflows, users who avoid the command line, reproducibility and data reuse, collaboration, and end-to-end data management:
• Simulation where we have models
• Machine learning where we have data or incomplete models
And most things are a blend of most of these. . .

SUPPORTING AN EVOLVING CYBERINFRASTRUCTURE

• Success in computational/data-intensive science and engineering takes more than systems.
• Modern cyberinfrastructure requires many modes of computing, many skill sets, and many parts of the scientific workflow.
• Data lifecycle, reproducibility, sharing and collaboration, event-driven processing, APIs, etc.
• Our team and software investments are larger than our system investments.
• Advanced interfaces – web front ends, REST APIs, Vis/VR/AR.
• Algorithms – partnerships with ICES @ UT to shape future systems, applications, and libraries.

FRONTERA SYSTEM --- PROJECT

• A new NSF-supported project to do three things:
  • Deploy a system in 2019 for the largest problems scientists and engineers currently face.
  • Support and operate this system for five years.
  • Plan a potential Phase 2 system, with 10x the capabilities, for the future challenges scientists will face.
• Frontera is the #5-ranked system in the world – and the fastest at any university in the world.
• Highest-ranked Dell system ever; fastest primarily Intel-based system.
• Frontera and Stampede2 are #1 and #2 among US universities (and Lonestar5 is still in the top 10).
• On the current Top 500 list, TACC provides 77% of *all* performance available to US universities.

FRONTERA IS A GREAT MACHINE – AND MORE THAN A MACHINE

A LITTLE ON HARDWARE AND INFRASTRUCTURE

• “Main” compute partition: 8,008 nodes

• Node: dual-socket, 192 GB RAM, HDR-100 InfiniBand interface, local drive.
• Processor: Intel Xeon Platinum 8280 (“Cascade Lake”), 2nd-generation Xeon Scalable.
  • 28 cores
  • 2.7 GHz clock “rate” (sometimes)
  • 6 DIMM channels, 2933 MHz DIMMs (quick bandwidth arithmetic after this list)
  • Core count +15%, clock rate +30%, memory bandwidth +15% vs. Skylake
• Why? They are universal, and not experimental.
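A quick back-of-the-envelope check of what those DIMM figures imply for peak memory bandwidth. This is a sketch using standard DDR4 parameters (8 bytes per transfer at 2933 MT/s); the results are derived from the slide, not measured Frontera numbers.

# Peak memory bandwidth implied by the slide's DIMM figures.
# Assumes DDR4-2933 (2933 MT/s) on a 64-bit (8-byte) channel; standard DDR4
# parameters, not measured Frontera results.
transfers_per_sec = 2933e6      # 2933 MT/s DIMMs
bytes_per_transfer = 8          # 64-bit DDR channel
channels_per_socket = 6         # 6 DIMM channels per socket
sockets_per_node = 2            # dual-socket nodes

per_socket = transfers_per_sec * bytes_per_transfer * channels_per_socket / 1e9
per_node = per_socket * sockets_per_node
print(f"~{per_socket:.0f} GB/s per socket, ~{per_node:.0f} GB/s per node (theoretical peak)")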

FRONTERA SYSTEM --- INFRASTRUCTURE

• Frontera consumes almost 6 megawatts of power at peak.
• Measured HPL power: 59+ kW/rack; 5,400 kW from the compute nodes.
• Direct water cooling of primary compute racks (CoolIT/DellEMC).
• Oil immersion cooling (GRC).
• Solar and wind power inputs.

[Images: TACC machine room and chilled water plant]

INTERCONNECT

• Mellanox HDR, fat-tree topology.
• 8,008 nodes = 88 nodes/rack x 91 compute racks.
• Mellanox switch ASICs have 40 HDR ports; the chassis (core) switches have 800 ports.
• Each rack is divided in half, each half with its own top-of-rack (TOR) switch:
  • 44 compute nodes at HDR-100 = 22 HDR ports down.
  • 18 uplink 200 Gb HDR ports: 3 links (600 Gb) to each of 6 core switches.
• No oversubscription in the higher layers of the tree (11:9 in the rack; see the port accounting after this list).
• No oversubscription to storage, DTNs, or service nodes (all connected to all 6 core switches).
• 8,200+ cards, 182 TOR switches, 6 core switches, 50 miles of cable.
• Good news: 8,008 compute nodes use only 3,276 fibers to connect to the core.
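The port and fiber counts above can be verified directly from the numbers on this slide; a minimal sketch of the accounting:

# Port and fiber accounting for the Frontera fat tree, using only the numbers
# quoted on this slide (not vendor documentation).
racks = 91
nodes_per_rack = 88
tors_per_rack = 2                                  # each rack split in half, one TOR per half
nodes_per_tor = nodes_per_rack // tors_per_rack    # 44 nodes per TOR
uplinks_per_tor = 18                               # HDR-200 uplinks per TOR

print("compute nodes:", racks * nodes_per_rack)                     # 8008
print("TOR switches:", racks * tors_per_rack)                       # 182
print("fibers to core:", racks * tors_per_rack * uplinks_per_tor)   # 3276
down_gbps = nodes_per_tor * 100      # HDR-100 into each TOR  -> 4400 Gb/s
up_gbps = uplinks_per_tor * 200      # HDR-200 out of each TOR -> 3600 Gb/s
print("in-rack ratio:", down_gbps, ":", up_gbps)                    # 4400:3600 = 11:9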

FILESYSTEMS

• Lustre, POSIX, and that’s it.
• Disk: 50 PB
• Flash: 3 PB
• We have come to believe that most users’ codes access the filesystem like this:

    while (1) { fork(); fp = fopen("junk.dat", "w"); fclose(fp); /* fclose optional */ }

    mpirun -np 80000 ./kill_the_filesystem

FILESYSTEMS

• We no longer need to scale filesystem size to scale bandwidth.
• The size of the filesystem is mostly to support concurrent users – bandwidth is the limit for an individual user (or IOPS for the pathological ones).
• So – we aren’t going to build one big filesystem any more.

• /home1, /home2, /home3
• /scratch1, /scratch2, /scratch3 (initial assignment round-robin)

• Flash will be a separate filesystem with some clever name, like /flash.
• This will require you to request access, or to be identified by our analytics as maxing out a filesystem.
• Roughly 100 GB/s to each scratch filesystem, 1.2 TB/s to /flash.
• The code on the previous slide can trash, at most, 1/7th of the available filesystems.

• (Seriously, we have put in some tools to limit those; we may ask you to use a library we have that wraps open() and limits the number of calls per second. A sketch of the idea follows.)
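TACC’s actual wrapper library is not shown in these slides; what follows is a hypothetical Python sketch of the same idea, metering open() calls so one process cannot hammer the metadata servers. The names and the per-second budget are illustrative assumptions.

# Hypothetical sketch of an open()-throttling wrapper (not TACC's actual library).
# The idea: count open() calls in one-second windows and sleep once a process
# exceeds its budget, so the metadata servers see a bounded request rate.
import builtins
import time

_MAX_OPENS_PER_SEC = 100          # illustrative budget, not a real TACC policy
_real_open = builtins.open
_window_start = time.monotonic()
_opens_in_window = 0

def throttled_open(*args, **kwargs):
    global _window_start, _opens_in_window
    now = time.monotonic()
    if now - _window_start >= 1.0:                 # start a new accounting window
        _window_start, _opens_in_window = now, 0
    if _opens_in_window >= _MAX_OPENS_PER_SEC:     # budget exhausted: wait out the window
        time.sleep(1.0 - (now - _window_start))
        _window_start, _opens_in_window = time.monotonic(), 0
    _opens_in_window += 1
    return _real_open(*args, **kwargs)

builtins.open = throttled_open    # a real library would wrap the libc open() instead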

WHY DO WE HAVE COMMERCIAL CLOUD PARTNERSHIPS ON FRONTERA

• AWS is part of the Frontera project!
• Cloud/HPC is not, in my opinion, an either/or question. It’s OK to have more than one tool in the toolbox.
• We want to use the strengths of the commercial cloud, so we are partnering in three areas:
  • Long-term data publication
  • Access to unique and ever-changing hardware (you deploy faster than we do!)
  • Hybrid workflows stitched together via web services (more on this later)

WHAT KINDS OF THINGS REALLY NEED TO RUN ON FRONTERA?

CENTER FOR THE PHYSICS OF LIVING CELLS – ALEKSEI AKSIMENTIEV, UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

• The nuclear pore complex serves as a gatekeeper, regulating the transport of biomolecules in and out of the nucleus of a biological cell.

• To uncover the mechanism of such selective transport, the Aksimentiev lab at UIUC constructed a computational model of the complex.

• The team simulated the model using memory-optimized NAMD 2.13, 8 tasks/node, MPI+SMP.

• Ran on up to 7,780 nodes on Frontera.

• One of the largest biomolecular simulations ever performed.

• Scaled close to linearly on up to half of the machine.

• Plan to build a new system twice as large to take advantage of large runs.

FRONTIERS OF COARSE-GRAINING – GREGORY VOTH, UNIVERSITY OF CHICAGO

• Mature HIV-1 capsid proteins self-assemble into large fullerene-cone structures.

• These capsids enclose the infective genetic material of the virus and transport viral DNA from virion particles into the nucleus of newly infected cells.

• On Frontera, Voth’s team simulated a viral capsid containing RNA and stabilizing cellular factors in full atomic detail for over 500 ns.

• First molecular simulations of HIV capsids that contain biological components of the virus within the capsid.

• The team ran on 4,000 nodes on Frontera.

• Measured the response of the capsid to molecular components, such as genetic cargo and cellular factors, that affect the stability of the capsid.

“State-of-the-art supercomputing resources like Frontera are an invaluable resource for researchers. Molecular processes that determine the chemistry of life are often interconnected and difficult to probe in isolation. Frontera enables large-scale simulations that examine these processes, and this type of science simply cannot be performed on smaller supercomputing resources.” – Alvin Yu, Postdoctoral Scholar in the Voth Group

LATTICE GAUGE THEORY AT THE INTENSITY FRONTIER – CARLTON DETAR, UNIVERSITY OF UTAH

• Ab initio numerical simulations of quantum chromodynamics (QCD) help obtain precise predictions for the strong-interaction environment of the decays of mesons that contain a heavy bottom quark.

• Compare predictions with results of experimental measurements to look for discrepancies that point the way to new fundamental particles and interactions.

• Carried out the initial steps in the shuffle for the exascale-size lattice during the Frontera large-scale capability demonstration.

• 16x larger problem than anything they had previously calculated.

• Ran on 3,400+ nodes.

• The capability demonstration showed that, given sufficient resources, the team can run an exascale-level calculation on Frontera.

“In addition to demonstrating feasibility, we obtained a useful result. We are now in good position for a future exascale run. We have working code and a working starting gauge configuration file.” – Carlton DeTar, University of Utah

PREDICTION AND CONTROL OF TURBULENCE-GENERATED SOUND – DANIEL BODONY, UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

• Simulated fluid-structure interactions relevant to hypersonic vehicles.

• Simulations replicated a companion experiment performed at NASA Langley in their 20-inch Mach 6 tunnel.

• Frontera runs used 2 MPI ranks per node (one per socket) and 26 OpenMP threads per MPI rank.

• Saw superlinear speedup on up to 2,000+ nodes by fitting into cache rather than fetching from main memory.

• Linear speedup up to 4,000 nodes.

3-D STELLAR HYDRODYNAMICS – PAUL WOODWARD, UNIVERSITY OF MINNESOTA

• The project's goal is to study the process of Convective Boundary Mixing (CBM) and shell mergers in massive stars.

• The computational plan includes a sequence of brief three-dimensional simulations alternating with longer one-dimensional simulations.

• Ran on 7,300+ nodes for more than 80 hours during Frontera large-scale capability demonstration.

• Saw 588 GFlop/s/node – or 4 petaflops of sustained performance – for more than 3 days!

WHAT IN OUR WORKLOAD COULD RUN ON THE CLOUD?

COMPUTATIONAL MODELS OF SUBDUCTION AND MANTLE FLOW – MICHAEL GURNIS, CALTECH

• The Gurnis group studies the evolution of the Earth on million-year timescales, where rock behaves like a fluid as a result of mantle convection.

• They study subduction initiation using a finite element code that treats the rock within the Earth as a non-Newtonian fluid.

• The team used Stampede2 for large parallel grid searches and to test parameters of interest using the geodynamic code Underworld.

• Configuring Underworld as a Docker image allowed the researchers to skip installing packages, configure the environment to their needs, and easily run the code on Stampede2.

• Used 11,000 node hours (SUs) so far in containers.

PREDICTIVE MODELS OF CORONARY ARTERY FLOW – DAVID KAMENSKY, UCSD; THOMAS HUGHES, THE UNIVERSITY OF TEXAS AT AUSTIN

• Researchers used containers to run the numerical PDE software, FEniCS, on Stampede2.

• FEniCS is complex software with many dependencies and is difficult to install.

• To implement isogeometric analysis on top of FEniCS, they converted a Docker image maintained by the FEniCS Project team into a Singularity image.

• Collaborating with John A. Evans at CU Boulder on a turbulence modeling study, they were able to easily switch from Stampede2 to CU Boulder’s HPC infrastructure because of containerization.

• Used ~1,000 node hours.

“I don’t see it as practical for supercomputer centers to maintain and debug all the different software to meet every scientist’s needs. With Singularity, they only need to maintain one piece of software.” – David Kamensky, UCSD

COLLOIDAL CRYSTALLIZATION AND SELF-ASSEMBLY – SHARON GLOTZER, UNIVERSITY OF MICHIGAN

• Researchers in Sharon Glotzer’s group use molecular simulation to study how building blocks (atoms, molecules, or colloidal particles) assemble from the fluid phase into the solid phase.

• In particular, they study the assembly behavior of large numbers of hard particles of various shapes using the hard particle Monte Carlo (HPMC) simulation code implemented in the HOOMD-blue software.

• The researchers build and maintain container images and make use of compute resources on Stampede2, Comet, Bridges, Summit, and local clusters.

• Singularity containers allow them to use the same software environment across all of these systems, so they can easily move their workflows between them.

• Used >5,300 SUs in 2019.

WHAT SEPARATES THESE TWO TYPES OF WORKLOADS?

• You might be saying to yourself at this point:
• “Hey, all those things sounded equally science-y, what’s the difference?”
• Fair question. Let’s drill down into what makes them different. . .

WHAT MAKES A WORKFLOW CLOUD (UN)FRIENDLY

• Scale isn’t really the thing, but these things are often exacerbated at scale.
• There are three classes of barriers:
  • Social (could change to use the cloud if people would change).
  • Technical (doesn’t work now, but could be fixed).
  • Structural (probably not going to be solved in the near term, even if we try).
• Many of them could arguably be in more than one class (i.e., it’s structural until you change human behavior, and good luck with that).

BARRIERS/CHARACTERISTICS

• Latency on large-scale collective operations is the dominant factor in performance for many applications.
• Latency on AWS EFA for TUNED MVAPICH2 is 15+ microseconds.
• Latency on Frontera in the *worst* case is <2 µs: in-rack, 900 ns; in-node, <200 ns.
• >10% of that latency is the speed of light in fiber – a power-dense rack layout within a tight maximum radius is a must.
• No variance introduced by outside traffic.
• Latency differences can expand run times by 6-7x per job (see the measurement sketch below).
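For reference, a minimal ping-pong sketch of how one-way MPI latency is typically measured. It assumes mpi4py and NumPy are available and is illustrative only, not the benchmark behind the numbers above.

# Minimal MPI ping-pong latency sketch (assumes mpi4py + NumPy; illustrative only).
# Run with exactly 2 ranks, e.g.:  mpirun -np 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype='b')      # 1-byte message, to expose pure latency
iters = 10000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Each iteration is one round trip; one-way latency is half of that.
    print(f"one-way latency ~ {elapsed / iters / 2 * 1e6:.2f} microseconds")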

• Mean time between failures (MTBF):
• Woodward’s run required an MTBF – no hard or soft errors or reboots – of >600,000 server-hours, or one reboot per 70 server-years (arithmetic check below).
• Typical large runs are more like one failure in 200,000 hours. Over 10 days of large runs on 8K nodes, we had one node reboot.
• Aggressively tune out even marginal hardware: correctable DIMM errors mean replace the DIMM. Equally aggressively tune the OS.
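The MTBF requirement is just arithmetic on the run size. A quick check using the slide’s own figures (82 hours stands in for “more than 80 hours”; the exact value is not given):

# Quick check of the MTBF arithmetic using the numbers on this slide.
nodes = 7300                     # Woodward ran on 7,300+ nodes
hours = 82                       # "more than 80 hours"; 82 is an assumed stand-in
print(nodes * hours)             # ~600,000 server-hours with zero failures or reboots
print(600_000 / (24 * 365))      # ~68.5 years between reboots for a single server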

BARRIERS/CHARACTERISTICS (2)

• Reproducible performance and limits:
• Less than 3% variance in node performance across all nodes over time.
• Viciously reject components vendors consider acceptable. Rigorously monitor temperature. Find and neutralize firmware developers.
• No node sharing:
  • No memory limits;
  • No workloads on other cores that can affect frequency, power, or thermals;
  • No variance in available memory bandwidth;
  • No variance in available network (injection) bandwidth.

CHARACTERISTICS (3)

• Namespace / filesystems:
• We have an installed base of a *lot* of code that likes POSIX filesystems.
• By filesystems, I mean petabytes of persistent shared data.
• On which “open()” and “ls” work.
• “Objects are for codes, files are for people.”
• Uniform access at roughly 1 TB/sec.
• Semantics for up to 250,000 tasks simultaneously and coherently opening a single file for read/write (see the sketch below).
• (BTW, we suck at this too.)
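As a minimal sketch of what many tasks coherently sharing one file looks like from user code (mpi4py again, illustrative only; at Frontera scale this would be hundreds of thousands of ranks rather than however many you launch):

# Minimal sketch: many MPI ranks coherently writing disjoint slices of ONE
# shared, POSIX-visible file via MPI-IO (assumes mpi4py + NumPy; illustrative only).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

block = np.full(1024, rank, dtype=np.int32)          # this rank's 4 KB slice
fh = MPI.File.Open(comm, "shared.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * block.nbytes, block)          # collective write at a rank-specific offset
fh.Close()
# The result is a single file that "ls" sees and any tool can read: the POSIX
# namespace the installed code base expects, written by many tasks at once.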

• Behavior: users never tell us what slice of the namespace they will use.
• Technical: intransigent old codes.
• How intransigent? Glad you asked. . .

ASIDE: THE CODE BASE

• Take, say, WRF (weather) or ADCIRC (storm surge).
• ~35-40 years of development. Most started in (and often still mostly are in) Fortran, written by 10 generations of postdocs, grad students, and programmers who aren’t there anymore.
• OK, so let’s rewrite this – for, say, switching to OpenMP 5 accelerator directives instead of CUDA (as DOE is doing in the Exascale Computing Project):
  • ~30 codes (some classified).
  • $900M investment over 5 years – $30M per code.
• Stampede2 has run 4,000 distinct codes.
• 4,000 x $30M = $120 billion to update the code base.
• Nope.
• “Cloud native” apps can deal with new models of I/O. But old codes…
• Well, my average staffer is younger than my average application, so I’m not optimistic about this changing soon. . .

THOSE OLD CODES STICK BECAUSE

• The cost of the resource is dwarfed by the porting costs, and as a result:
• It’s not “old” tech vs. “new” tech; it’s the technology you started with versus the one you didn’t.
• Codes that started in Bigtable are probably going to stick in GCP, because they are painful to move (I’ve run into this twice).
• Squandering a few tens of thousands of dollars a year is preferable to a rewrite.
• It’s not just POSIX vs. object, or Python vs. …
• I would contend a lot of ML codes will always be GPU codes just because they started on GPUs, even if HBM removes the performance advantage over CPUs. They are “GPU native”.

CHARACTERISTICS (4)

• Team size:
• Some of our users have a full-time person (usually a postdoc) worrying about the code.
• Most don’t.
• Small users grab a code off the web, build it, and run.
• This drives a couple of things:
• Scientific support at the cloud.
• A typical TACC ticket: “I was building NAMD 4.5.1 on MVAPICH 2.4.9 with FFTW 7.2 and MKL 19.4 using the Intel compilers, and I get a linker error. Help.”
• I would put a person who has built molecular dynamics codes on this and build it for them. Your help intro might be. . . different.
• Related: startup/porting effort. . .

CHARACTERISTICS (5)

• Startup/porting.
• In the Stampede2 “early user” start (which, admittedly, had a user environment a lot like Stampede1, a lot like Ranger before that, and a lot like Lonestar before that):
  • I send an email to ~100 projects saying “go.”
  • We have finished science results, with highlights for the slides I showed you, from 50 of those projects within 14 days; then we declare production.
  • We answer about 0.5 tickets per project and provide no funding to the user teams.
• Internet2 E-CAS project:
  • 6 early science projects selected to run on AWS, Azure, and GCP (all claimed to be ready).
  • 1 FTE of funding per team to enable the application on the cloud.
  • After 5 months, they have few production science results, though some teams are now mostly almost sort of working.
• Your interface is for developers – but the science audience, even the people who develop these codes, are not developers in that sense. They don’t know a VLAN from a VM, and even picking the services and provisioning them is a huge barrier.

CHARACTERISTICS (6)

• Containerization:
  • If teams have containerized:
    • They have probably thought about portability.
    • They have worked on encapsulating their app.
    • They have heard of containers!
  • This is almost always a positive sign!
• Have used the Open Science Grid:
  • Probably throughput-oriented, probably not a huge amount of I/O intensity per unit of compute; almost invariably can use the cloud.
• Aren’t linked with an MPI library, or don’t call collective operations:
  • (We track this; one way to check is sketched below.)
  • “Light” use of MPI (i.e., for launching replica tasks) probably means the app is cloud-friendly.
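One way such tracking can be approximated (a hypothetical sketch, not TACC’s actual tooling): inspect what a user’s executable links against and flag MPI runtime libraries.

# Hypothetical sketch of flagging MPI-linked binaries (not TACC's actual tooling).
# Shells out to `ldd` and looks for common MPI runtime libraries in the output.
import subprocess
import sys

MPI_HINTS = ("libmpi", "libmpich", "libmvapich")     # common MPI shared-library names

def links_mpi(binary_path: str) -> bool:
    """Return True if `ldd` reports an MPI shared library for this executable."""
    try:
        out = subprocess.run(["ldd", binary_path], capture_output=True,
                             text=True, check=True).stdout.lower()
    except (OSError, subprocess.CalledProcessError):
        return False                 # static binary, script, or ldd failure
    return any(hint in out for hint in MPI_HINTS)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "->", "MPI-linked" if links_mpi(path) else "no MPI runtime found")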

LAST BARRIER

• Cost:
• It isn’t central, but it is a factor.
• Not just the absolute value, but predictability and variability on a grant budget (no opportunity to get more money for ~3 years without cutting salary).
• There is node cost – but also efficiency of node use, and egress charges are what I hear the most griping about.

OK, GIVEN ALL THAT, WHAT CAN POSSIBLY RUN ON BOTH FRONTERA AND THE CLOUD AT ONCE?

COINCIDENTALLY, THESE ARE ALSO MY “FUTURE OF SCIENCE” SLIDES, WHICH MIGHT BE ENCOURAGING

CONSIDER A RECENT HIGH-PROFILE EXAMPLE: BLACK HOLE IMAGING

• In May, you may have seen the wide announcement (e.g., front page of the NY Times) of the first successful imaging of a black hole (M87).
• A hugely complex project with massive data and eight different telescopes.
• Four TACC projects contributed compute time, software, or expertise to this:
  • Simulation runs of the estimated M87 mass on Stampede, 5 years ago.
  • Much of the newer simulation on Stampede2.
  • Design of the cloud workflow on the IU/TACC Jetstream system.
  • Much of the production data analysis was run on the Google Cloud (with pipelines designed in collaboration with the TACC team).
• Simulation on HPC, high-throughput data analysis in a commercial cloud – ground-breaking results.

AN EXEMPLAR PROJECT – SD2E

• DARPA “Synergistic Discovery and Design (SD2)” program.
• Vision: to "develop data-driven methods to accelerate scientific discovery and robust design in domains that lack complete models."
• Initial focus on synthetic biology; ~six data provider teams, ~15 modeling teams, and TACC for the platform.
• Cloud-based tools to collect, integrate, and analyze diverse data types; promote collaboration and interaction across computational skill levels; enable a reproducible and explainable research computing lifecycle; enhance, amplify, and link the capabilities of every SD2 performer.

ADDRESSING THE OPIOID CRISIS

NIH A2CPS: Acute to Chronic Pain Signatures Program

○ Identify biosignatures (sets of biomarkers) that predict addiction to medicines.

○ Catalyze development of new drugs that people won’t get addicted to.

○ Data Integration and Resource Center (DIRC) for Common Fund Acute to Chronic Pain Signatures Program

● Goal of DIRC: Integrate the efforts of all components of the A2CPS and serve as a community-wide nexus for protocols, data, assay and data standards, and other resources generated by the A2CPS Program.

● Datasets: Electronic health records, patient reported outcomes, accelerometer data, sensory testing, CNS imaging, ‘omics assays (proteomic, metabolomic, lipidomic, extracellular RNA, transcriptome, array-based gene variants).

NATURAL HAZARDS RESEARCH

• Among the components of our DesignSafe project, just in the hurricane part:

• Massive ensemble simulations of storms
• Coupled to massive storm surge and tidal models
• Coupled to lidar datasets for ground and building elevations
• Rainfall models and IoT gauges in the field to feed another massive freshwater flooding model
• UAV, space, aircraft, and ground reconnaissance photos for damage assessment
• Coupled to interactive GIS systems
• This influences everything from evacuation orders, to first-responder directions, to occupancy inspections, to recovery direction, to future building codes.
• Lots of computation (simulation, ML), lots of data and integration, lots of coupling and collaboration across disciplines and scales.

CHARACTERISTICS OF THE HYBRID/FUTURE WORKFLOWS

• A lot of experimental data to start with. . . a lot of analytics.
• Often, a significant machine learning/surrogate modeling component in the workflow.
• A shared corpus of data; a lot of needs for collaboration and publication.
• Usually, somewhere, a massive amount of simulation to extrapolate/test/use models built from these experiments.
• I can come up with a hundred more examples of this – surely we might need more than one type of computational instrument to tackle these problems!

OK, SO NOW WHAT. . .

• We are already tasked with designing the facility that will replace Frontera, with 10x the capability (for some definition) in 2024.
• What are the challenges that we will face?
• What mix of computing, data, and human resources will be required to tackle them?
• What will cloud/HPC workflows look like 5 years from now?

THANKS!!

• The National Science Foundation
• The University of Texas
• Peter and Edith O’Donnell
• Dell, Intel, and our many vendor partners
• Caltech, Chicago, Cornell, Georgia Tech, Ohio State, Princeton, Texas A&M, Stanford, UC-Davis, Utah
• Our users – the thousands of scientists who use TACC to make the world better
• All the people of TACC

– Humphry Davy, Inventor of Electrochemistry, 1812

• (Pretty sure he was talking about our machine.)
