Molecular Dynamics (MD) on GPUs, Feb. 2, 2017


Accelerating Discoveries

Using a supercomputer powered by the Tesla platform with over 3,000 Tesla accelerators, University of Illinois scientists performed the first all-atom simulation of the HIV virus and discovered the chemical structure of its capsid, "the perfect target for fighting the infection." Without GPUs, the supercomputer would need to be 5x larger for similar performance.

Overview of Life & Material Accelerated Apps

Molecular dynamics (MD):
• All key codes are GPU-accelerated
• Great multi-GPU performance
• Focus on dense (up to 16) GPU nodes and/or large numbers of GPU nodes
• GPU-accelerated and available today: ACEMD*, AMBER (PMEMD)*, CHARMM, DESMOND, ESPResSo, Folding@Home, GPUgrid.net, GROMACS, HALMD, HOOMD-Blue*, LAMMPS, Lattice Microbes*, mdcore, MELD, miniMD, NAMD, OpenMM, PolyFTS, SOP-GPU* and more

Quantum chemistry (QC):
• All key codes are ported or being optimized
• Focus on GPU-accelerated math libraries and OpenACC directives
• GPU-accelerated and available today: ABINIT, ACES III, ADF, BAND, BigDFT, CP2K, GAMESS, GAMESS-UK, GPAW, LATTE, LSDalton, LSMS, MOLCAS, MOPAC2012, NWChem, OCTOPUS*, PEtot, Q-Chem, QMCPack, Quantum Espresso/PWscf, QUICK, TeraChem*
• Active GPU acceleration projects: CASTEP, GAMESS, Gaussian, ONETEP, Quantum Supercharger Library*, VASP and more

green* = application where >90% of the workload is on the GPU

MD vs. QC on GPUs

| "Classical" molecular dynamics | Quantum chemistry (MO, PW, DFT, semi-empirical) |
|---|---|
| Simulates positions of atoms over time; chemical-biological or chemical-material behaviors | Calculates electronic properties: ground state, excited states, spectral properties, making/breaking bonds, physical properties |
| Forces calculated from simple empirical formulas (bond rearrangement generally forbidden) | Forces derived from the electron wave function (bond rearrangement OK, e.g., bond energies) |
| Up to millions of atoms | Up to a few thousand atoms |
| Solvent included without difficulty | Generally in vacuum; if needed, solvent is treated classically (QM/MM) or with implicit methods |
| Single precision dominated | Double precision is important |
| Uses cuBLAS, cuFFT, CUDA | Uses cuBLAS, cuFFT, OpenACC |
| GeForce (workstations), Tesla (servers) | Tesla recommended |
| ECC off | ECC on |

(A sketch of what "forces calculated from simple empirical formulas" looks like in single-precision CUDA follows the ACEMD section below.)

GPU-Accelerated Molecular Dynamics Apps

Green lettering indicates performance slides included: GPUGrid.net, MELD, ACEMD, GROMACS, NAMD, AMBER, HALMD, OpenMM, CHARMM, HOOMD-Blue, PolyFTS, DESMOND, LAMMPS, ESPResSo, mdcore, Folding@Home.

GPU performance is compared against a dual-socket multi-core x86 CPU system.

Benefits of GPU-Accelerated MD Computing

Why wouldn't you want to turbocharge your research?
• 3x-8x faster than CPU-only systems in all tests (on average)
• Most major compute-intensive aspects of classical MD have been ported
• Large performance boost with a marginal price increase
• Energy usage cut by more than half
• GPUs scale well within a node and/or over multiple nodes
• The K80 GPU is our fastest and lowest-power high-performance GPU yet

Try GPU-accelerated MD apps for free: www.nvidia.com/GPUTestDrive

ACEMD (www.acellera.com)

• 470 ns/day on 1 GPU for L-Iduronic acid (1,362 atoms)
• 116 ns/day on 1 GPU for DHFR (23K atoms)
• Features: NVT, NPT, PME, TCL, PLUMED, CAMSHIFT [1]

M. Harvey, G. Giupponi and G. De Fabritiis, "ACEMD: Accelerated molecular dynamics simulations in the microseconds timescale," J. Chem. Theory Comput. 5, 1632 (2009).
[1] M. J. Harvey and G. De Fabritiis, "An implementation of the smooth particle-mesh Ewald (PME) method on GPU hardware," J. Chem. Theory Comput. 5, 2371–2377 (2009).
[2] For a list of selected references, see http://www.acellera.com/acemd/publications
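To ground the comparison above, here is a minimal sketch of the kind of empirical force evaluation that dominates classical MD, written as a single-precision CUDA kernel. It is illustrative only and not taken from any package named in these slides: the kernel name, parameters, and the brute-force all-pairs loop are assumptions for clarity. Production codes use neighbor lists for the short-range terms and PME (via cuFFT) for long-range electrostatics.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative all-pairs Lennard-Jones force kernel (single precision).
// Real MD codes replace the O(N^2) loop with cell/neighbor lists and add
// PME for long-range electrostatics; names and parameters here are invented.
__global__ void lj_forces(const float4* __restrict__ pos,   // xyz, w unused
                          float4* __restrict__ force,       // output forces
                          int n, float sigma, float epsilon, float rcut2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;

    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        if (r2 > rcut2) continue;            // truncate beyond the cutoff

        float inv_r2 = 1.0f / r2;
        float s2 = sigma * sigma * inv_r2;
        float s6 = s2 * s2 * s2;             // (sigma/r)^6
        // F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * (dx,dy,dz)
        float fscale = 24.0f * epsilon * (2.0f * s6 * s6 - s6) * inv_r2;
        fx += fscale * dx;
        fy += fscale * dy;
        fz += fscale * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}

int main() {
    const int n = 1024;
    float4 *pos, *force;
    cudaMallocManaged(&pos, n * sizeof(float4));
    cudaMallocManaged(&force, n * sizeof(float4));
    // Place particles on a simple lattice so no pair coincides.
    for (int i = 0; i < n; ++i)
        pos[i] = make_float4(i % 16, (i / 16) % 16, i / 256, 0.f);
    lj_forces<<<(n + 127) / 128, 128>>>(pos, force, n, 1.0f, 1.0f, 9.0f);
    cudaDeviceSynchronize();
    printf("F[0] = (%g, %g, %g)\n", force[0].x, force[0].y, force[0].z);
    cudaFree(pos); cudaFree(force);
    return 0;
}
```

Because empirical pair terms like this tolerate FP32 roundoff, classical MD can run largely in single precision, which is why the table lists MD as "single precision dominated" and why GeForce workstation cards are viable alongside Tesla servers.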
AMBER 16 on Quadro GP100s (June 2017)

All results below are from AMBER version 16 running on nodes with dual Intel Core i7-4820K CPUs @ 3.70GHz plus Quadro GP100 GPUs connected by either PCIe or NVLink. Throughput is in ns/day; every benchmark in this table uses PME (explicit solvent).

| Benchmark (PME) | Atoms | 1x GP100 (PCIe) | 1x GP100 (NVLink) | 2x GP100 (PCIe) | 2x GP100 (NVLink) |
|---|---|---|---|---|---|
| JAC_NVE, 2 fs | 23,558 | 320.19 | 320.14 | 370.32 | 404.09 |
| JAC_NVE, 4 fs | 23,558 | 614.42 | 613.16 | 714.23 | 782.11 |
| JAC_NPT, 2 fs | 23,558 | 295.75 | 295.42 | 333.03 | 360.64 |
| JAC_NPT, 4 fs | 23,558 | 580.47 | 578.48 | 654.66 | 706.53 |
| FactorIX_NVE | 90,906 | 106.23 | 105.98 | 142.45 | 166.61 |
| FactorIX_NPT | 90,906 | 102.27 | 102.26 | 126.75 | 146.34 |
| Cellulose_NVE | 408,609 | 24.01 | 24.02 | 31.35 | 36.91 |
| Cellulose_NPT | 408,609 | 22.76 | 22.80 | 28.76 | 32.37 |
| STMV_NPT | 1,067,095 | 15.64 | 15.43 | 20.22 | 23.13 |
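A worked example of the throughput metric clarifies why the 4 fs JAC numbers are roughly double the 2 fs ones (the longer timestep is typically enabled by hydrogen mass repartitioning). Nanoseconds per day is just the timestep times the sustained step rate:

\[
\text{ns/day} \;=\; \Delta t\,[\mathrm{fs}] \times \frac{\text{steps}}{\mathrm{s}} \times \frac{86\,400\ \mathrm{s/day}}{10^{6}\ \mathrm{fs/ns}}
\]

From the single-GP100 JAC_NVE row above, 320.19 ns/day at \(\Delta t = 2\) fs corresponds to roughly \(1.85 \times 10^{3}\) steps/s; at the same step rate, \(\Delta t = 4\) fs would give about 640 ns/day, and the measured 614.42 ns/day comes close to that ideal.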
AMBER 16 on Quadro GP100s: Generalized Born (implicit solvent) benchmarks

Same setup as above: AMBER version 16 on nodes with dual Intel Core i7-4820K CPUs @ 3.70GHz plus Quadro GP100 GPUs (PCIe and NVLink). Throughput in ns/day:

| Benchmark (GB) | Atoms | 1x GP100 (PCIe) | 1x GP100 (NVLink) | 2x GP100 (PCIe) | 2x GP100 (NVLink) |
|---|---|---|---|---|---|
| TRPCAGE | 304 | 1,216.56 | 1,187.30 | | |
| Myoglobin | 2,492 | 443.49 | 447.23 | 458.28 | 470.41 |
| Nucleosome | 25,095 | 11.47 | 11.29 | 20.51 | 21.29 |

(TRPCAGE was measured on the single-GPU configurations only.)

AMBER 16 on Tesla K80 and P100 (February 2017)

These runs use AMBER version 16.3 on the PME-Cellulose_NPT benchmark (408,609 atoms). The CPU-only baseline node contains dual Intel Xeon E5-2699 v4 @ 2.2GHz (3.6GHz Turbo, Broadwell) CPUs. GPU nodes pair Broadwell CPUs with Tesla K80 (autoboost), Tesla P100 PCIe (16GB), or Tesla P100 SXM2 GPUs; in single-GPU configurations each GPU is paired with a single Xeon (E5-2699 v4 for the K80 and P100 PCIe systems, E5-2698 v4 for the P100 SXM2 system, whose nodes use dual E5-2698 v4 CPUs).

| Configuration | ns/day | Speedup vs. CPU node |
|---|---|---|
| 1 Broadwell node (CPU only) | 2.35 | 1.0x |
| 1 node + 1x K80 | 11.36 | 4.8x |
| 1 node + 2x K80 | 15.43 | 6.6x |
| 1 node + 1x P100 PCIe (16GB) | 21.85 | 9.3x |
| 1 node + 2x P100 PCIe (16GB) | 30.00 | 12.8x |
| 1 node + 1x P100 SXM2 | 23.37 | 9.9x |
| 1 node + 2x P100 SXM2 | 32.22 | 13.7x |
| 1 node + 4x P100 SXM2 | 36.65 | 15.6x |
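The speedup factors quoted above are each configuration's throughput divided by the 2.35 ns/day dual-socket Broadwell baseline, for example:

\[
\frac{11.36}{2.35} \approx 4.8\times, \qquad \frac{30.00}{2.35} \approx 12.8\times, \qquad \frac{36.65}{2.35} \approx 15.6\times
\]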