NVIDIA GPU Applications Presentation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Outline of NVIDIA tutorial at MSI

I. Intro to GPUs
   A. MSI plans to add significantly more GPUs in the future; the current system is described at: https://www.msi.umn.edu/hpc/cascade
   B. MSI currently has:
      i. 8 Westmere nodes with 4 Fermi M2070 GPGPUs/node (Cascade queue)
      ii. 4 SandyBridge nodes with 2 Kepler K20m GPGPUs/node (Kepler queue)
II. Computational Chemistry and Biology Applications
   A. Molecular Dynamics applications, including:
      i. AMBER, NAMD, LAMMPS, GROMACS, CHARMM, DESMOND, DL_POLY, OpenMM
      ii. GPU-only codes: ACEMD, HOOMD-Blue
   B. Quantum Chemistry, including:
      i. Abinit, BigDFT, CP2K, Gaussian, GAMESS, GPAW, NWChem, Quantum Espresso, Q-CHEM, VASP & more
      ii. GPU-only code: TeraChem (ParaChem)
   C. Bioinformatics
      i. NVBIO (nvBowtie), BWA-MEM (both currently in development)

Updated: Oct. 21, 2013

NVIDIA: founded 1993; invented the GPU in 1999 for computer graphics. Today: visual computing, supercomputing, cloud & mobile computing. Core technologies and brands: GPU (GeForce®, Quadro®, Tesla®), mobile (Tegra®), cloud (GRID).

Accelerated computing: multi-core plus many-cores. The CPU is optimized for serial tasks; the GPU accelerator is optimized for many parallel tasks, with 3-10x+ compute throughput, 7x the memory bandwidth, and 5x the energy efficiency.

How GPU acceleration works: the compute-intensive functions (often roughly 5% of the code) are offloaded to the GPU, while the rest of the sequential code keeps running on the CPU (a minimal code sketch follows these slide summaries).

CPU + GPUs, two-year heartbeat (DP GFLOPS per watt, roughly doubling from 0.5 to 32): Tesla (CUDA, 2008), Fermi (FP64, 2010), Kepler (Dynamic Parallelism, 2012), Maxwell (Unified Virtual Memory, 2014), Volta (Stacked DRAM).

Kepler features make GPU coding easier:
- Hyper-Q speeds up legacy MPI apps: Fermi exposed 1 work queue to the CPU; Kepler exposes 32 concurrent work queues.
- Dynamic Parallelism means less back-and-forth between CPU and GPU, and simpler code.

Developer momentum continues to grow (2008 vs. 2013):
- CUDA-capable GPUs: 100M vs. 430M
- CUDA downloads: 150K vs. 1.6M
- Supercomputers: 1 vs. 50
- University courses: 60 vs. 640
- Academic papers: 4,000 vs. 37,000

Explosive growth of GPU-accelerated apps: the number of top scientific apps (accelerated or in development) grew 40% in 2011 and a further 61% in 2012, to about 200. Examples:
- Molecular Dynamics: AMBER, LAMMPS, CHARMM, NAMD, GROMACS, DL_POLY
- Quantum Chemistry: QMCPACK, Gaussian, Quantum Espresso, NWChem, GAMESS-US, VASP
- Climate & Weather: CAM-SE, COSMO, NIM, GEOS-5, WRF
- Physics: Chroma, GTS, Denovo, ENZO, GTC, MILC
- CAE: ANSYS Mechanical, ANSYS Fluent, MSC Nastran, OpenFOAM, SIMULIA Abaqus, LS-DYNA

Overview of life & material accelerated apps:
- MD: all key codes are available: AMBER, CHARMM, DESMOND, DL_POLY, GROMACS, LAMMPS, NAMD; GPU-only codes: ACEMD, HOOMD-Blue. Great multi-GPU performance; focus is on scaling to large numbers of GPUs/nodes.
- QC: all key codes are ported or being optimized. Active GPU acceleration projects: Abinit, BigDFT, CP2K, GAMESS, Gaussian, GPAW, NWChem, Quantum Espresso, VASP & more; GPU-only code: TeraChem.
- Analytical instruments: actively recruiting. Bioinformatics: market development.

GPU Test Drive: experience GPU acceleration for computational chemistry researchers and biophysicists. Remotely hosted GPU servers, preconfigured with molecular dynamics apps. Free & easy: sign up, log in, and see results. www.nvidia.com/gputestdrive
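The offload model above maps directly onto CUDA. Below is a minimal sketch, not taken from the presentation: the compute-intensive loop becomes a GPU kernel, while the surrounding sequential code stays on the CPU. All names and sizes here are invented for illustration.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Compute-intensive function: one GPU thread per element.
    // (An illustrative stand-in for the "5% of code" worth offloading.)
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // The "rest of sequential code" runs on the CPU as before.
        float *x = (float*)malloc(bytes), *y = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Move inputs to GPU memory, launch the kernel, copy results back.
        float *dx, *dy;
        cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
        cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f\n", y[0]);  // expect 5.0
        cudaFree(dx); cudaFree(dy); free(x); free(y);
        return 0;
    }

The same pattern, identifying a hot loop, moving its data to device memory, and launching many threads over it, is what the application ports cataloged below carry out at much larger scale.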
Molecular Dynamics (MD) Applications

Application | Features Supported | GPU Perf | Release Status | Notes/Benchmarks
AMBER | PMEMD explicit solvent & GB implicit solvent | >100 ns/day (JAC NVE) on 2x K20s | Released (AMBER 12, GPU support revision 12.2); multi-GPU, multi-node | http://ambermd.org/gpus/benchmarks.htm#Benchmarks
CHARMM | Implicit (5x) and explicit (2x) solvent via OpenMM | 2x C2070 equals 32-35x X5667 CPUs | Released (Release C37b1); single & multi-GPU in a single node | http://www.charmm.org/news/c37b1.html#postjump
DL_POLY | Two-body forces, link-cell pairs, Ewald SPME forces, Shake VV | 4x | Source only, results published (Release V 4.04); multi-GPU, multi-node | http://www.stfc.ac.uk/CSE/randd/ccg/software/DL_POLY/25526.aspx
GROMACS | Implicit (5x) and explicit (2x) solvent | 165 ns/day (DHFR) on 4x C2075s | Released (Release 4.6.3, first multi-GPU support); multi-GPU, multi-node | www.gromacs.org
LAMMPS | Lennard-Jones, Gay-Berne, Tersoff & many more potentials | 3.5-18x on ORNL Titan | Released; multi-GPU, multi-node | http://lammps.sandia.gov/bench.html#desktop and http://lammps.sandia.gov/bench.html#titan
NAMD | Full electrostatics with PME and most simulation features | 4.0 ns/day (F1-ATPase) on 1x K20X | Released (NAMD 2.9, 100M-atom capable); multi-GPU, multi-node | http://www.ks.uiuc.edu/Research/namd/

GPU perf is compared against a multi-core x86 CPU socket, is benchmarked on GPU-supported features, and may be a kernel-to-kernel perf comparison.

New/Additional MD Applications Ramping

Application | Features Supported | GPU Perf | Release Status | Notes
ACEMD | Production biomolecular dynamics (MD) software specially optimized to run on GPUs; written for use only on GPUs | 150 ns/day (DHFR) on 1x K20 | Released; single and multi-GPU | http://www.acellera.com/
DESMOND | Bonded, pair, and excluded interactions; van der Waals, electrostatic, and nonbonded far interactions | TBD | Released (Version 3.4.0/0.7.2) | http://www.schrodinger.com/productpage/14/3/
Folding@Home | Powerful distributed-computing molecular dynamics system; implicit solvent and folding | Depends on number of GPUs | Released (GPUs and CPUs; GPUs get 4x the points of CPUs) | http://folding.stanford.edu
GPUGrid.net | High-performance all-atom biomolecular simulations; explicit solvent and binding | Depends on number of GPUs | Released | http://www.gpugrid.net/
HALMD | Simple fluids and binary mixtures (pair potentials, high-precision NVE and NVT, dynamic correlations) | Up to 66x on a 2090 vs. 1 CPU core | Released (Version 0.2.0); single GPU | http://halmd.org/benchmarks.html#supercooled-binary-mixture-kob-andersen
HOOMD-Blue | Written for use only on GPUs | Kepler 2x faster than Fermi | Released (Version 0.11.3); single and multi-GPU on 1 node; multi-GPU w/ MPI in late 2013 | http://codeblue.umich.edu/hoomd-blue/
mdcore | TBD | TBD | Released (Version 0.1.7) | http://mdcore.sourceforge.net/download.html
OpenMM | Library and application for molecular dynamics on high-performance hardware; implicit and explicit solvent, custom forces | Implicit: 127-213 ns/day; explicit: 18-55 ns/day (DHFR) | Released (Version 5.2); multi-GPU | https://simtk.org/home/openmm_suite

GPU perf is compared against a multi-core x86 CPU socket, is benchmarked on GPU-supported features, and may be a kernel-to-kernel perf comparison.
Quantum Chemistry Applications

Application | Features Supported | GPU Perf | Release Status | Notes
ABINIT | Local Hamiltonian, non-local Hamiltonian, LOBPCG algorithm, diagonalization/orthogonalization | 1.3-2.7x | Released (Version 7.4.1); multi-GPU support | www.abinit.org
ACES III | Integrating GPU scheduling into the SIAL programming language and SIP runtime environment | 10x on kernels | Under development; multi-GPU support | http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/deumens_ESaccel_2012.pdf
ADF | Fock matrix, Hessians | TBD | Available Q1 2014; multi-GPU support | www.scm.com
BigDFT | DFT; Daubechies wavelets; part of Abinit | 2-5x | Released (Version 1.7.0); multi-GPU support | http://bigdft.org
Casino | Quantum Monte Carlo (QMC) electronic structure calculations for finite and periodic systems | TBD | Under development; multi-GPU support | http://www.tcm.phy.cam.ac.uk/~mdt26/casino.html
CASTEP | TBD | TBD | Under development | http://www.castep.org/Main/HomePage
CP2K | DBCSR (sparse matrix multiply library) | 2-7x | Released; multi-GPU support | http://www.olcf.ornl.gov/wp-content/training/ascc_2012/friday/ACSS_2012_VandeVondele_s.pdf
GAMESS-US | Libqc with Rys quadrature algorithm; Hartree-Fock, MP2, and CCSD | 1.3-1.6x; 2.3-2.9x HF | Released; multi-GPU support | http://www.msg.ameslab.gov/gamess/index.html
GAMESS-UK | (ss|ss)-type integrals within calculations using Hartree-Fock ab initio methods and density functional theory; supports organics & inorganics | 8x | Released (Version 7.0); multi-GPU support | http://www.ncbi.nlm.nih.gov/pubmed/21541963
Gaussian | Joint PGI, NVIDIA & Gaussian collaboration | TBD | Under development; multi-GPU support | http://www.gaussian.com/g_press/nvidia_press.htm
GPAW | Electrostatic Poisson equation, orthonormalization of vectors, residual minimization method (RMM-DIIS) | 8x | Released; multi-GPU support | https://wiki.fysik.dtu.dk/gpaw/devel/projects/gpu.html; Samuli Hakala (CSC Finland) & Chris O'Grady (SLAC)
Jaguar | Investigating GPU acceleration | TBD | Under development; multi-GPU support | Schrodinger, Inc.; http://www.schrodinger.com/kb/278; http://www.schrodinger.com/productpage/14/7/32/
LATTE | CUBLAS, SP2 algorithm | TBD | Released; multi-GPU support | http://on-demand.gputechconf.com/gtc/2013/presentations/S3195-Fast-Quantum-Molecular-Dynamics-in-LATTE.pdf
MOLCAS | CUBLAS support | 1.1x | Released (Version 7.8); single GPU, additional GPU support coming in Version 8 | www.molcas.org
MOLPRO | Density-fitted MP2 (DF-MP2), density-fitted local correlation methods (DF-RHF, DF-KS), DFT | 1.7-2.3x projected | Under development; multiple GPUs | www.molpro.net; Hans-Joachim Werner
MOPAC2012 | Pseudodiagonalization, full diagonalization, and density matrix assembling | 3.8-14x | Released (academic port; MOPAC2013 available Q1 2014); single GPU | http://openmopac.net

GPU perf is compared against a multi-core x86 CPU socket, is benchmarked on GPU-supported features, and may be a kernel-to-kernel perf comparison.
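Several of the codes above (LATTE, MOLCAS) list CUBLAS among their GPU features, meaning they offload dense linear algebra to NVIDIA's BLAS library. As a rough illustration of generic cuBLAS usage only, not code from any of these packages and with made-up matrix sizes:

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main() {
        const int n = 512;  // illustrative matrix dimension
        std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

        double *dA, *dB, *dC;
        cudaMalloc(&dA, n * n * sizeof(double));
        cudaMalloc(&dB, n * n * sizeof(double));
        cudaMalloc(&dC, n * n * sizeof(double));
        cublasHandle_t handle;
        cublasCreate(&handle);

        // Copy host matrices to the device (cuBLAS is column-major).
        cublasSetMatrix(n, n, sizeof(double), A.data(), n, dA, n);
        cublasSetMatrix(n, n, sizeof(double), B.data(), n, dB, n);

        // C = alpha * A * B + beta * C, computed on the GPU.
        const double alpha = 1.0, beta = 0.0;
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA, n, dB, n, &beta, dC, n);

        cublasGetMatrix(n, n, sizeof(double), dC, n, C.data(), n);
        printf("C[0] = %f\n", C[0]);  // expect n * 1.0 * 2.0 = 1024.0

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }

A library-level port like this is often a natural first step for quantum chemistry codes, since dense matrix algebra dominates much of their run time.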
Recommended publications
  • GPAW, GPUs, and LUMI
    GPAW, GPUs, and LUMI
    Martti Louhivuori, CSC - IT Center for Science; Jussi Enkovaara
    GPAW 2021: Users and Developers Meeting, 2021-06-01

    Outline: LUMI supercomputer; brief history of GPAW with GPUs; GPUs and DFT; current status; roadmap.

    LUMI - EuroHPC system of the North: a pre-exascale system with AMD CPUs and GPUs, ~550 Pflop/s performance; half of the resources are dedicated to consortium members (Finland, Belgium, Czechia, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland). Programming for LUMI: MPI between nodes/GPUs; HIP and OpenMP for GPUs; and how to use Python with AMD GPUs? https://www.lumi-supercomputer.eu

    GPAW and GPUs, history (1/2): an early proof-of-concept implementation for NVIDIA GPUs in 2012 covered ground-state DFT and real-time TD-DFT with a finite-difference basis, plus a separate version for RPA with plane waves (Hakala et al. in "Electronic Structure Calculations on Graphics Processing Units", Wiley (2016), https://doi.org/10.1002/9781118670712). Built on PyCUDA, cuBLAS, cuFFT, and custom CUDA kernels, it showed promising performance, with speedups of a factor of 4-8 in the best cases (CPU node vs. GPU node).

    GPAW and GPUs, history (2/2): the code base diverged from the main branch quite a bit; the proof-of-concept implementation had lots of quick-and-dirty hacks, fixes and features were pulled from other branches and patches, and there were no proper unit tests for GPU functionality. Active development stopped soon after the publications. Before development restarted, the code didn't even work anymore on modern GPUs without applying a few small patches. Lesson learned: try to always get new functionality to the …
  • D6.1 Report on the Deployment of the MaX Demonstrators and Feedback to WP1-5
    Ref. Ares(2020)2820381 - 31/05/2020
    HORIZON2020 European Centre of Excellence
    Deliverable D6.1: Report on the deployment of the MaX Demonstrators and feedback to WP1-5

    Pablo Ordejón, Uliana Alekseeva, Stefano Baroni, Riccardo Bertossa, Miki Bonacci, Pietro Bonfà, Claudia Cardoso, Carlo Cavazzoni, Vladimir Dikan, Stefano de Gironcoli, Andrea Ferretti, Alberto García, Luigi Genovese, Federico Grasselli, Anton Kozhevnikov, Deborah Prezzi, Davide Sangalli, Joost VandeVondele, Daniele Varsano, Daniel Wortmann

    Due date of deliverable: 31/05/2020. Actual submission date: 31/05/2020. Final version: 31/05/2020. Lead beneficiary: ICN2 (participant number 3). Dissemination level: PU - Public. www.max-centre.eu

    Document information. Project acronym: MaX. Project full title: Materials Design at the Exascale. Project type: Research Action; European Centre of Excellence in materials modelling, simulations and design. EC Grant agreement no.: 824143. Project starting/end date: 01/12/2018 (month 1) / 30/11/2021 (month 36). Website: www.max-centre.eu. Deliverable No.: D6.1. Authors: P. Ordejón, U. Alekseeva, S. Baroni, R. Bertossa, M. Bonacci, P. Bonfà, C. Cardoso, C. Cavazzoni, V. Dikan, S. de Gironcoli, A. Ferretti, A. García, L. Genovese, F. Grasselli, A. Kozhevnikov, D. Prezzi, D. Sangalli, J. VandeVondele, D. Varsano, D. Wortmann

    To be cited as: Ordejón et al. (2020): Report on the deployment of the MaX Demonstrators and feedback to WP1-5. Deliverable D6.1 of the H2020 project MaX (final version as of 31/05/2020).
  • Supporting Information
    Electronic Supplementary Material (ESI) for RSC Advances. This journal is © The Royal Society of Chemistry 2020.

    Supporting Information: How to Select Ionic Liquids as Extracting Agent Systematically? Special Case Study for Extractive Denitrification Process

    Shurong Gao,a,b,c,* Jiaxin Jin,a,b Masroor Abro,c Ruozhen Song,c Miao He,d Xiaochun Chenc,*
    a State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources, North China Electric Power University, Beijing, 102206, China
    b Research Center of Engineering Thermophysics, North China Electric Power University, Beijing, 102206, China
    c Beijing Key Laboratory of Membrane Science and Technology & College of Chemical Engineering, Beijing University of Chemical Technology, Beijing 100029, PR China
    d Office of Laboratory Safety Administration, Beijing University of Technology, Beijing 100124, China
    * Corresponding author, Tel./Fax: +86-10-6443-3570, E-mail: [email protected], [email protected]

    1. COSMO-RS Computation. COSMOtherm allows for simple and efficient processing of large numbers of compounds, i.e., a database of molecular COSMO files, e.g. the COSMObase database. COSMObase is a database of molecular COSMO files available from COSMOlogic GmbH & Co KG. Currently COSMObase consists of over 2000 compounds, including a large number of industrial solvents plus a wide variety of common organic compounds. All compounds in COSMObase are indexed by their Chemical Abstracts / Registry Number (CAS/RN), by a trivial name, and additionally by their sum formula and molecular weight, allowing simple identification of the compounds. We obtained the anions and cations of different ILs and the molecular structures of typical N-compounds directly from the COSMObase database in this manuscript.
  • ELSI: An Open Infrastructure for Electronic Structure Solvers
    ELSI: An Open Infrastructure for Electronic Structure Solvers

    Victor Wen-zhe Yu,a Carmen Campos,b William Dawson,c Alberto García,d Ville Havu,e Ben Hourahine,f William P. Huhn,a Mathias Jacquelin,g Weile Jia,g,h Murat Keçeli,i Raul Laasner,a Yingzhou Li,j Lin Lin,g,h Jianfeng Lu,j,k,l Jonathan Moussa,m Jose E. Roman,b Álvaro Vázquez-Mayagoitia,i Chao Yang,g Volker Bluma,l,*
    a Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA
    b Departament de Sistemes Informàtics i Computació, Universitat Politècnica de València, València, Spain
    c RIKEN Center for Computational Science, Kobe 650-0047, Japan
    d Institut de Ciència de Materials de Barcelona (ICMAB-CSIC), Bellaterra E-08193, Spain
    e Department of Applied Physics, Aalto University, Aalto FI-00076, Finland
    f SUPA, University of Strathclyde, Glasgow G4 0NG, UK
    g Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
    h Department of Mathematics, University of California, Berkeley, CA 94720, USA
    i Computational Science Division, Argonne National Laboratory, Argonne, IL 60439, USA
    j Department of Mathematics, Duke University, Durham, NC 27708, USA
    k Department of Physics, Duke University, Durham, NC 27708, USA
    l Department of Chemistry, Duke University, Durham, NC 27708, USA
    m Molecular Sciences Software Institute, Blacksburg, VA 24060, USA

    arXiv:1912.13403v3 [physics.comp-ph], 5 Jul 2020

    Abstract: Routine applications of electronic structure theory to molecules and periodic systems need to compute the electron density from given Hamiltonian and, in the case of non-orthogonal basis sets, overlap matrices. System sizes can range from few to thousands or, in some examples, millions of atoms. Different discretization schemes (basis sets) and different system geometries (finite non-periodic vs. …
  • GROMACS: Fast, Flexible, and Free
    GROMACS: Fast, Flexible, and Free

    David van der Spoel,1 Erik Lindahl,2 Berk Hess,3 Gerrit Groenhof,4 Alan E. Mark,4 Herman J. C. Berendsen4
    1 Department of Cell and Molecular Biology, Uppsala University, Husargatan 3, Box 596, S-75124 Uppsala, Sweden
    2 Stockholm Bioinformatics Center, SCFAB, Stockholm University, SE-10691 Stockholm, Sweden
    3 Max-Planck Institut für Polymerforschung, Ackermannweg 10, D-55128 Mainz, Germany
    4 Groningen Biomolecular Sciences and Biotechnology Institute, University of Groningen, Nijenborgh 4, NL-9747 AG Groningen, The Netherlands

    Received 12 February 2005; accepted 18 March 2005. DOI 10.1002/jcc.20291. Published online in Wiley InterScience (www.interscience.wiley.com).

    Abstract: This article describes the software suite GROMACS (Groningen MAchine for Chemical Simulation) that was developed at the University of Groningen, The Netherlands, in the early 1990s. The software, written in ANSI C, originates from a parallel hardware project, and is well suited for parallelization on processor clusters. By careful optimization of neighbor searching and of inner loop performance, GROMACS is a very fast program for molecular dynamics simulation. It does not have a force field of its own, but is compatible with GROMOS, OPLS, AMBER, and ENCAD force fields. In addition, it can handle polarizable shell models and flexible constraints. The program is versatile, as force routines can be added by the user, tabulated functions can be specified, and analyses can be easily customized. Nonequilibrium dynamics and free energy determinations are incorporated. Interfaces with popular quantum-chemical packages (MOPAC, GAMESS-UK, GAUSSIAN) are provided to perform mixed MM/QM simulations. The package includes about 100 utility and analysis programs.
  • An Ab Initio Materials Simulation Code
    CP2K: An ab initio materials simulation code
    Lianheng Tong
    Physics on Boat Tutorial, Helsinki, Finland, 2015-06-08
    Faculty of Natural & Mathematical Sciences, Department of Physics

    Brief overview:
    • Overview of CP2K: general information about the package; QUICKSTEP: DFT engine
    • Practical example of using CP2K to generate simulated STM images: terminal states in AGNR segments with 1H and 2H termination groups

    What is CP2K? The Swiss army knife of molecular simulation (www.cp2k.org). Energy and force engine: DFT (LDA, GGA, vdW, hybrid); quantum chemistry (MP2, RPA); semi-empirical (DFTB); classical force fields (FIST); and combinations (QM/MM). Capabilities: geometry and cell optimisation; molecular dynamics (NVE, NVT, NPT, Langevin); STM images; sampling energy surfaces (metadynamics); finding transition states (Nudged Elastic Band); path integral molecular dynamics; Monte Carlo; and many more.

    Development: freely available, open source, GNU Public License; www.cp2k.org. FORTRAN 95, > 1,000,000 lines of code, very active development (daily commits). Currently being developed and maintained by a community of developers:
    • Switzerland: Paul Scherrer Institute (PSI), Swiss Federal Institute of Technology in Zurich (ETHZ), Universität Zürich (UZH)
    • USA: IBM Research, Lawrence Livermore National Laboratory (LLNL), Pacific Northwest National Laboratory (PNL)
    • UK: Edinburgh Parallel Computing Centre (EPCC), King's College London (KCL), University College London (UCL)
    • Germany: Ruhr-University Bochum
    • Others: we welcome contributions from …
  • Real-Time PyMOL Visualization for Rosetta and PyRosetta
    Real-Time PyMOL Visualization for Rosetta and PyRosetta

    Evan H. Baugh,1 Sergey Lyskov,1 Brian D. Weitzner,1 Jeffrey J. Gray1,2*
    1 Department of Chemical and Biomolecular Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America; 2 Program in Molecular Biophysics, The Johns Hopkins University, Baltimore, Maryland, United States of America

    Abstract: Computational structure prediction and design of proteins and protein-protein complexes have long been inaccessible to those not directly involved in the field. A key missing component has been the ability to visualize the progress of calculations to better understand them. Rosetta is one simulation suite that would benefit from a robust real-time visualization solution. Several tools exist for the sole purpose of visualizing biomolecules; one of the most popular tools, PyMOL (Schrödinger), is a powerful, highly extensible, user-friendly, and attractive package. Integrating Rosetta and PyMOL directly has many technical and logistical obstacles inhibiting usage. To circumvent these issues, we developed a novel solution based on transmitting biomolecular structure and energy information via UDP sockets. Rosetta and PyMOL run as separate processes, thereby avoiding many technical obstacles while visualizing information on-demand in real time. When Rosetta detects changes in the structure of a protein, new coordinates are sent over a UDP network socket to a PyMOL instance running a UDP socket listener. PyMOL then interprets and displays the molecule. This implementation also allows remote execution of Rosetta. When combined with PyRosetta, this visualization solution provides an interactive environment for protein structure prediction and design.

    Citation: Baugh EH, Lyskov S, Weitzner BD, Gray JJ (2011) Real-Time PyMOL Visualization for Rosetta and PyRosetta. …
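    As an aside, here is a minimal sketch of the kind of fire-and-forget UDP send the abstract describes, written in C++ with POSIX sockets. The port number and text payload are invented for illustration and are not the actual Rosetta/PyMOL wire format.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string>
        #include <sys/socket.h>
        #include <unistd.h>

        int main() {
            // Connectionless UDP sender: no handshake, no acknowledgement,
            // so the simulation never blocks waiting on the visualizer.
            int sock = socket(AF_INET, SOCK_DGRAM, 0);

            sockaddr_in dest{};
            dest.sin_family = AF_INET;
            dest.sin_port = htons(9876);                      // invented port
            inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);  // local listener

            // Invented payload: one atom's updated coordinates as text.
            std::string msg = "ATOM 1 12.041 -3.115 7.882";
            sendto(sock, msg.data(), msg.size(), 0,
                   reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

            close(sock);
            return 0;
        }

    The datagram model fits this use case: if a packet is lost, the next structure update simply overwrites it, which is why the two programs can run fully decoupled.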
  • Automated Construction of Quantum–Classical Hybrid Models
    Automated construction of quantum–classical hybrid models

    Christoph Brunken and Markus Reiher*
    ETH Zürich, Laboratorium für Physikalische Chemie, Vladimir-Prelog-Weg 2, 8093 Zürich, Switzerland
    February 18, 2021 (arXiv:2102.09355v1 [physics.chem-ph])
    * Corresponding author; e-mail: [email protected]

    Abstract: We present a protocol for the fully automated construction of quantum mechanical (QM)–classical hybrid models by extending our previously reported approach on self-parametrizing system-focused atomistic models (SFAM) [J. Chem. Theory Comput. 2020, 16 (3), 1646–1665]. In this QM/SFAM approach, the size and composition of the QM region is evaluated in an automated manner based on first principles so that the hybrid model describes the atomic forces in the center of the QM region accurately. This entails the automated construction and evaluation of differently sized QM regions with a bearable computational overhead that needs to be paid for automated validation procedures. Applying SFAM for the classical part of the model eliminates any dependence on pre-existing parameters due to its system-focused quantum mechanically derived parametrization. Hence, QM/SFAM is capable of delivering high fidelity and complete automation. Furthermore, since SFAM parameters are generated for the whole system, our ansatz allows for a convenient re-definition of the QM region during a molecular exploration. For this purpose, a local re-parametrization scheme is introduced, which efficiently generates additional classical parameters on the fly when new covalent bonds are formed (or broken) and moved to the classical region.

    1 Introduction. In contrast to most protocols of computational quantum chemistry that consider isolated molecules, chemical processes can take place in a vast variety of complex environments.
  • Chem Compute Quickstart
    Chem Compute Quickstart

    Chem Compute is maintained by Mark Perri at Sonoma State University and hosted on Jetstream at Indiana University. The Chem Compute URL is https://chemcompute.org/. We will use Chem Compute as the frontend for running electronic structure calculations with The General Atomic and Molecular Electronic Structure System, GAMESS (http://www.msg.ameslab.gov/gamess/). Chem Compute also provides access to other computational chemistry resources, including PSI4 and the molecular dynamics packages TINKER and NAMD, though we will not be using those resources at this time.

    Follow this link, https://chemcompute.org/gamess/submit, to directly access the Chem Compute GAMESS guided submission interface. If you are a returning Chem Compute user, please log in now. If your university is part of the InCommon Federation, you can log in without registering by clicking Login, then "Log In with Google or your University", and selecting your university from the dropdown list. Otherwise, if this is your first time working with Chem Compute, please register as a Chem Compute user by clicking the "Register" link in the top-right corner of the page. This gives you access to all of the computational resources available at ChemCompute.org and will allow you to maintain copies of your calculations in your user "Dashboard" that you can refer to later. Registering also helps Chem Compute track usage and obtain the resources needed to continue providing its service.

    When you are logged in with the GAMESS-Submit tabs selected, an instruction section appears on the left side of the page with instructions for several different kinds of calculations.
  • Molecular Dynamics Simulations in Drug Discovery and Pharmaceutical Development
    Processes (Review)
    Molecular Dynamics Simulations in Drug Discovery and Pharmaceutical Development

    Outi M. H. Salo-Ahen,1,2,* Ida Alanko,1,2 Rajendra Bhadane,1,2 Alexandre M. J. J. Bonvin,3,* Rodrigo Vargas Honorato,3 Shakhawath Hossain,4 André H. Juffer,5 Aleksei Kabedev,4 Maija Lahtela-Kakkonen,6 Anders Støttrup Larsen,7 Eveline Lescrinier,8 Parthiban Marimuthu,1,2 Muhammad Usman Mirza,8 Ghulam Mustafa,9 Ariane Nunes-Alves,10,11,* Tatu Pantsar,6,12 Atefeh Saadabadi,1,2 Kalaimathy Singaravelu13 and Michiel Vanmeert8

    1 Pharmaceutical Sciences Laboratory (Pharmacy), Åbo Akademi University, Tykistökatu 6 A, Biocity, FI-20520 Turku, Finland; ida.alanko@abo.fi (I.A.); rajendra.bhadane@abo.fi (R.B.); parthiban.marimuthu@abo.fi (P.M.); atefeh.saadabadi@abo.fi (A.S.)
    2 Structural Bioinformatics Laboratory (Biochemistry), Åbo Akademi University, Tykistökatu 6 A, Biocity, FI-20520 Turku, Finland
    3 Faculty of Science-Chemistry, Bijvoet Center for Biomolecular Research, Utrecht University, 3584 CH Utrecht, The Netherlands; [email protected]
    4 Swedish Drug Delivery Forum (SDDF), Department of Pharmacy, Uppsala Biomedical Center, Uppsala University, 751 23 Uppsala, Sweden; [email protected] (S.H.); [email protected] (A.K.)
    5 Biocenter Oulu & Faculty of Biochemistry and Molecular Medicine, University of Oulu, Aapistie 7 A, FI-90014 Oulu, Finland; andre.juffer@oulu.fi
    6 School of Pharmacy, University of Eastern Finland, FI-70210 Kuopio, Finland; maija.lahtela-kakkonen@uef.fi (M.L.-K.); tatu.pantsar@uef.fi …
  • An Explicit-Solvent Conformation Search Method Using Open Software
    An explicit-solvent conformation search method using open software

    Kari Gaalswyk and Christopher N. Rowley
    Department of Chemistry, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador, Canada
    Submitted 5 April 2016; accepted 6 May 2016; published 31 May 2016. Corresponding author: Christopher N. Rowley.
    Subjects: Biophysics, Pharmacology, Computational Science. Keywords: conformation search, explicit solvent, cluster analysis, replica exchange molecular dynamics

    Abstract: Computer modeling is a popular tool to identify the most-probable conformers of a molecule. Although the solvent can have a large effect on the stability of a conformation, many popular conformational search methods are only capable of describing molecules in the gas phase or with an implicit solvent model. We have developed a workflow for performing a conformation search on explicitly-solvated molecules using open-source software. This method uses replica exchange molecular dynamics (REMD) to sample the conformational states of the molecule efficiently. Cluster analysis is used to identify the most probable conformations from the simulated trajectory. This workflow was tested on the drug molecules α-amanitin and cabergoline to illustrate its capabilities and effectiveness. The preferred conformations of these molecules in gas phase, implicit solvent, and explicit solvent are significantly different.

    Introduction: Many molecules can exist in multiple conformational isomers. Conformational isomers have the same chemical bonds, but differ in their 3D geometry because they hold different torsional angles (Crippen & Havel, 1988). The conformation of a molecule can affect chemical reactivity, molecular binding, and biological activity (Struthers, Rivier & Hagler, 1985; Copeland, 2011). Conformations differ in stability because they experience different steric, electrostatic, and solute-solvent interactions. The probability, p, of a …
  • Parameterizing a Novel Residue
    University of Illinois at Urbana-Champaign
    Luthey-Schulten Group, Department of Chemistry
    Theoretical and Computational Biophysics Group
    Computational Biophysics Workshop

    Parameterizing a Novel Residue

    Rommie Amaro, Brijeet Dhaliwal, Zaida Luthey-Schulten
    Current editors: Christopher Mayne, Po-Chao Wen
    February 2012

    Contents
    1. Biological Background and Chemical Mechanism
    2. HisH System Setup
    3. Testing out your new residue
    4. The CHARMM Force Field
    5. Developing Topology and Parameter Files
       5.1 An Introduction to a CHARMM Topology File
       5.2 An Introduction to a CHARMM Parameter File
       5.3 Assigning Initial Values for Unknown Parameters
       5.4 A Closer Look at Dihedral Parameters
    6. Parameter generation using SPARTAN (Optional)
    7. Minimization with new parameters

    Introduction: Molecular dynamics (MD) simulations are a powerful scientific tool used to study a wide variety of systems in atomic detail. From a standard protein simulation, to the use of steered molecular dynamics (SMD), to modelling DNA-protein interactions, there are many useful applications. With the advent of massively parallel simulation programs such as NAMD2, the limits of computational analysis are being pushed even further. Inevitably there comes a time in any molecular modelling scientist's career when the need to simulate an entirely new molecule or ligand arises. The technique of determining new force field parameters to describe these novel system components therefore becomes an invaluable skill. Determining the correct system parameters to use in conjunction with the chosen force field is only one important aspect of the process.
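    For context on Section 5.4 of the contents above: in the CHARMM force field, each dihedral contributes a periodic energy term of the standard form below (quoted from the well-known CHARMM functional form, not from this excerpt), where K_phi is the force constant, n the multiplicity, phi the dihedral angle, and delta the phase offset:

        V_{\text{dihedral}} = \sum_{\text{dihedrals}} K_\phi \left( 1 + \cos(n\phi - \delta) \right)

    These three numbers per dihedral type are exactly the unknown parameters the tutorial walks through assigning and refining.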