
Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields†

Louis Lagardère,abc Luc-Henri Jolly,b Filippo Lipparini,d Félix Aviat,c Benjamin Stamm,e Zhifeng F. Jing,f Matthew Harger,f Hedieh Torabifard,g G. Andrés Cisneros,h Michael J. Schnieders,i Nohad Gresh,c Yvon Maday,jkl Pengyu Y. Ren,f Jay W. Ponder,m and Jean-Philip Piquemal*cfk

a Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
b Sorbonne Université, Institut Parisien de Chimie Physique et Théorique, CNRS, FR 2622, Paris, France
c Sorbonne Université, Laboratoire de Chimie Théorique, UMR 7616, CNRS, Paris, France. E-mail: [email protected]
d Università di Pisa, Dipartimento di Chimica e Chimica Industriale, Pisa, Italy
e MATHCCES, Department of Mathematics, RWTH Aachen University, Aachen, Germany
f The University of Texas at Austin, Department of Biomedical Engineering, TX, USA
g Department of Chemistry, Wayne State University, Detroit, MI 48202, USA
h Department of Chemistry, University of North Texas, Denton, TX 76202, USA
i The University of Iowa, Department of Biomedical Engineering, Iowa City, IA, USA
j Sorbonne Université, Laboratoire Jacques-Louis Lions, UMR 7598, CNRS, Paris, France
k Institut Universitaire de France, Paris, France
l Brown University, Division of Applied Maths, Providence, RI, USA
m Washington University in Saint Louis, Department of Chemistry, Saint Louis, MO, USA

Cite this: Chem. Sci., 2018, 9, 956
Received 18th October 2017; Accepted 24th November 2017
DOI: 10.1039/c7sc04531j; rsc.li/chemical-science
† Electronic supplementary information (ESI) available. See DOI: 10.1039/c7sc04531j

We present Tinker-HP, a massively MPI parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations, using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package that conserves its simplicity of use and its reference double precision implementation for CPUs. Grounded on interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems of up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models, as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore provides the first high-performance scalable CPU computing environment for the development of next-generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to Quantum Mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performance and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of its new algorithms, which enable stable long-timescale polarizable simulations. Overall, a several thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed.

1 Introduction

Over the last 60 years, classical Molecular Dynamics (MD) has been an intense field of research with a high growth rate. Indeed, solving Newton's equations of motion to resolve the time-dependent dynamics of atoms within large molecules makes it possible to perform simulations in fields ranging from materials science to biophysics and chemistry. For example, in the community of protein simulations, classical force fields (FF) such as CHARMM,1 AMBER,2 OPLS,3 GROMOS4 and others5 enabled large-scale simulations on complex systems thanks to the low computational cost of their energy functions. In that context, various simulation packages appeared, often associated with these FF, such as the popular CHARMM,6 GROMOS7 and AMBER8 packages. Among these, Tinker (presently version 8 (ref. 9)) was introduced in 1990 with the philosophy of being both user friendly and a reference toolbox for developers. Later on, the evolution of computer systems enabled the emergence of massively parallel software dedicated to molecular simulations such as LAMMPS,10 NAMD,11 Gromacs,12 AMBER (PME-MD),13 DLPOLY,14 Genesis15 or Desmond.16 As they were granted the use of large computational resources, access to million-atom systems and biological time scales became possible.17 Nevertheless, up to now, such simulations have mainly been limited to first-generation molecular mechanics (MM) models that remain confined to a lower-resolution approximation of the true quantum mechanical Born–Oppenheimer potential energy surfaces (PES).

However, beside these methods, more advanced second-generation "polarizable" force fields (PFF) emerged in the last 30 years.18–28 Grounded on Quantum Mechanics (QM) and usually calibrated on the basis of Energy Decomposition Analysis (EDA),29 they go beyond the pairwise approximation by including explicit many-body induction effects such as polarization and, in some cases, charge transfer. Fluctuating charges, classical Drude approaches, and point dipoles coupled to point-charge models using distributed polarizabilities are among the most studied techniques aiming to include polarization effects.28 On the accuracy side, some PFF go beyond the point charge approximation, incorporating a more detailed representation of the permanent and induced charge distributions through QM-derived distributed multipoles and polarizabilities.18,19,24,26 Recently, a third generation of PFF using distributed frozen electronic densities to incorporate short-range quantum effects30 appeared. In terms of PES, these advanced force fields clearly tend to offer improved accuracy and better transferability, and are therefore hoped to be more predictive. Unfortunately, everything has a cost: such elegant methods are more complex by design, and are therefore computationally challenging.

Indeed, the introduction of Graphical Processing Units (GPUs) offered a brute-force hardware acceleration to MD packages thanks to simple- or mixed-precision implementations.33 Tinker benefited from the availability of this low-cost but powerful type of hardware, which led to a GPU version of the code denoted Tinker-OpenMM.34 The code is based both on Tinker and on the OpenMM library (now version 7 (ref. 35)), which pioneered the use of GPUs with polarizable force fields. Tinker-OpenMM offers a 200-fold acceleration compared to a regular single-core CPU computation, giving access to accurate free energy simulations. However, considering the needs of biophysical simulations, this acceleration remains insufficient.

The purpose of the present work is to push the scalability of Tinker further through new algorithms and to explore strategies enabling a 1000-fold or greater speedup. These new developments target modern "big iron" petascale supercomputers using distributed memory, while the code design also offers substantial speedups on laboratory clusters and multicore desktop stations. The philosophy here is to build a highly scalable double precision code, fully compatible and consistent with the canonical reference Tinker and Tinker-OpenMM codes. As the new code remains part of the Tinker package, it is designed to keep the user-friendliness offered to both developers and users, while also providing extended access to larger-scale and longer-timescale MD simulations on any type of CPU platform. The incentive to produce such a reference double precision code is also guided by the goal of performing scalable hybrid QM/MM MD simulations, where rounding errors must be eliminated. This leads us not to cut any corners in our numerical implementation, with the key mantra that one should not scale at any cost: the algorithms developed in this interdisciplinary project should rest on solid mathematical grounds.

The paper is organized as follows. First, we will present the newly developed extension of 3D spatial decomposition and memory distribution to polarizable point dipole models that is at the heart of Tinker-HP for short-range interactions. Then we will detail the handling of long-range electrostatic and polarization interactions within a new framework coupling Smooth Particle Mesh Ewald to Krylov iterative and non-iterative polarization solvers.
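To make the later algorithmic discussion concrete, the mutual polarization problem that these solvers address can be sketched for a generic point dipole model; the notation below is illustrative only and is not meant to reproduce the exact Tinker-HP/AMOEBA conventions. Each induced dipole responds to the permanent field and to the field created by all other induced dipoles, which amounts to a large linear system:

\[
\boldsymbol{\mu}_i \;=\; \alpha_i \Big( \mathbf{E}_i^{\mathrm{perm}} \;+\; \sum_{j \neq i} \mathbf{T}_{ij}\, \boldsymbol{\mu}_j \Big)
\qquad \Longleftrightarrow \qquad
\big( \boldsymbol{\alpha}^{-1} - \mathbf{T} \big)\, \boldsymbol{\mu} \;=\; \mathbf{E}^{\mathrm{perm}},
\]

where \(\alpha_i\) is the polarizability of site \(i\), \(\mathbf{E}_i^{\mathrm{perm}}\) the electric field produced at site \(i\) by the permanent multipoles, and \(\mathbf{T}_{ij}\) the (damped) dipole–dipole interaction tensor. For \(N\) polarizable sites this is a \(3N \times 3N\) system that, under the usual assumptions on the polarizabilities and damping, is symmetric positive definite, which is why preconditioned Krylov iterations such as conjugate gradient are natural candidates and why their efficient parallel evaluation at every MD step is central to the performance discussed in the following sections.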