Chemistry Packages at CHPC


Anita M. Orendt
Center for High Performance Computing
[email protected]
Fall 2014

Purpose of Presentation
• Identify the computational chemistry software and related tools currently available at CHPC
• Present a brief overview of these packages
• Present how to access the packages on CHPC
• Information on usage of Gaussian09

[Diagram, slide 3 (10/1/14): overview of the CHPC clusters – Ember, Kingspeak (199 nodes/4008 cores), Lonepeak, the Ash cluster (417 nodes), and the Telluride cluster (93 nodes) – their Infiniband/GigE interconnects, and the NFS home and group directories plus the /scratch/ibrix, /scratch/kingspeak/serial, and /scratch/lonepeak/serial scratch file systems.]

Brief Overview of CHPC Resources
• Computational Clusters
  – kingspeak, ember – general allocation and owner-guest on owner nodes
  – lonepeak – no allocation
  – ash and telluride – can run as smithp-guest and cheatham-guest, respectively
• Home directory – NFS mounted on all clusters
  – /uufs/chpc.utah.edu/common/home/<uNID>
  – generally not backed up (there are exceptions)
• Scratch systems
  – /scratch/ibrix/chpc_gen – kingspeak, ember, ash, telluride – 56 TB
  – /scratch/kingspeak/serial – kingspeak, ember, ash – 175 TB
  – /scratch/lonepeak/serial – lonepeak – 33 TB
  – /scratch/local on compute nodes
• Applications
  – /uufs/chpc.utah.edu/sys/pkg (use for lonepeak, and for Linux desktops if CHPC administered)
  – /uufs/$UUFSCELL/sys/pkg, where $UUFSCELL = kingspeak.peaks, ember.arches, ash.peaks, telluride.arches

Getting Started at CHPC
• Account application – an online process
  – https://www.chpc.utah.edu/apps/profile/account_request.php
• Username is your uNID, with the password administered by campus
• Interactive nodes – two CHPC-owned nodes per cluster (cluster.chpc.utah.edu) with round-robin access to divide the load (on ash, use ash-guest.chpc.utah.edu)
• CHPC environment scripts – in your account when created
  – www.chpc.utah.edu/docs/manuals/getting_started/codes/chpc.tcshrc
  – www.chpc.utah.edu/docs/manuals/getting_started/codes/chpc.bashrc
• Getting started guide
  – www.chpc.utah.edu/docs/manuals/getting_started
• Problem reporting system
  – http://jira.chpc.utah.edu or email to [email protected]

Interactive Node Usage
• Interactive nodes are for prepping/testing of input files, analyzing results, compilations, debugging, data transfer, etc. – no running of jobs
  – 15 min MAX cpu
  – no jobs of ANY time length that negatively impact the ability of other users to get work done (e.g., heavy cpu, memory usage and/or i/o)
• https://wiki.chpc.utah.edu/display/policy/2.1.1+Cluster+Interactive+Node+Policy

Batch System
• All use of compute nodes goes through a batch system, using Torque (PBS) and Maui (Moab) for scheduling; typical directives (assembled into a complete script in the sketch below):
    #PBS -S /bin/csh
    #PBS -A account
    #PBS -l walltime=24:00:00,nodes=2:ppn=16
• qsub -I to get interactive use of a compute node (-X if X11 forwarding is needed)
• Walltime limits – 72 hours (long qos by request) on all clusters except ash, which is 24 hours
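As a concrete illustration, here is a minimal csh batch script assembled from the #PBS directives above. The account name, job name, scratch directory, and program invocation are placeholders rather than CHPC-prescribed values; substitute the allocation and scratch file system appropriate for your cluster.

    #!/bin/csh
    #PBS -S /bin/csh
    #PBS -A myaccount
    #PBS -l walltime=24:00:00,nodes=2:ppn=16
    #PBS -N testjob

    # run from a scratch file system rather than your home directory
    # (pick one of the scratch systems listed in the overview above)
    mkdir -p /scratch/kingspeak/serial/$USER/testjob
    cd /scratch/kingspeak/serial/$USER/testjob

    # launch the application; the program and file names are placeholders
    ./my_program < input.in > output.out

The script would be submitted from an interactive node with qsub, e.g. qsub testjob.pbs; qsub -I -X instead gives an interactive session on a compute node, as noted above.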
Security Policies
• No clear-text passwords – use ssh and scp
• Do not share your account under any circumstances
• Don't leave your terminal unattended while logged into your account
• Do not introduce classified or sensitive work onto CHPC systems except on the protected environment
• Do not try to break passwords, tamper with files, look into anyone else's directory, etc. – your privileges do not extend beyond your own directory
• Do not distribute or copy privileged data or software
• Report suspicions to CHPC ([email protected])
• Please see http://www.chpc.utah.edu/docs/policies/security.html for more details

Access to Interactive Nodes
• From a Windows machine:
  – Need an ssh client
    • PuTTY: http://www.chiark.greenend.org.uk/~sgtatham/putty/
    • Xshell 4: http://www.netsarang.com/products/xsh_overview.html
  – For X-windowing – need a tool to display graphical interfaces such as Gaussview, VMD, AutoDockTools
    • Xming (use the Mesa version) – http://www.straightrunning.com/XmingNotes
  – FastX is another option that replaces both of the above (and can also be used on Linux and Mac desktops)
    • https://wiki.chpc.utah.edu/display/DOCS/FastX

Default Login Scripts
• CHPC maintains default login scripts that set up the environment needed for the batch commands and many of the programs to work
  – http://www.chpc.utah.edu/docs/manuals/getting_started/code/chpc.tcshrc
  – http://www.chpc.utah.edu/docs/manuals/getting_started/code/chpc.bashrc
• Located in your home directory as .tcshrc or .bashrc
• Can comment out setups for packages not used
• The default ones provided have the chemistry package setups commented out – you need to remove the # at the start of the line
• Can customize by creating an .aliases file that is sourced at the end of the CHPC script – do not customize the .tcshrc/.bashrc file itself, as you will need to occasionally get a new version

Software – General Information
• Available on the CHPC web pages/wiki
  – Can get to it from www.chpc.utah.edu → Software documentation → More (for the complete list): http://www.chpc.utah.edu/docs/manuals/software
  – Available for widely used packages
  – Has information on licensing restrictions, example batch scripts, and where to get more information on a specific package
  – Also has useful information on running of jobs (scaling, issues, etc.) for some applications

Computational Chemistry Packages
• Molecular Mechanics/Dynamics
• Semi-Empirical
• Electronic Structure Calculations
  – HF, DFT, MPn, CI, CC, multireference methods, etc.
• Solid State/Materials Modeling packages
  – Plane-wave pseudopotential based packages
The calculation type and level of theory used depend on a number of factors – the information you are after, the system being studied, the size of the system, and the computational resources. Often it is a matter of accuracy versus computational cost.

Molecular Mechanics/Dynamics
• Amber – http://ambermd.org
• Gromacs – http://www.gromacs.org/
• NAMD – http://www.ks.uiuc.edu/Research/namd/
• LAMMPS – http://lammps.sandia.gov/
• Charmm – http://www.charmm.org/ – licensed by research group
Note – Gaussian and NWChem also have some MM capabilities

MM/MD Packages
• Versions:
  – Amber14
  – Gromacs 4.6.3 (4.6.5 in chpc.utah.edu)
  – Lammps – the package is updated often, so the installs are different versions; see the date on the installation
• Basic information on getting started:
  – https://wiki.chpc.utah.edu/display/DOCS/Amber
  – https://wiki.chpc.utah.edu/display/DOCS/Gromacs
  – https://wiki.chpc.utah.edu/display/DOCS/LAMMPS
• Different parallel builds are available for each cluster to make use of the different interconnects

Semi-Empirical Packages
• These are based on the Hartree-Fock formalism of electronic structure methods, but make many approximations and obtain some parameters from empirical data
• MOPAC2012 is the only semi-empirical-only package installed at CHPC
• Installation at /uufs/chpc.utah.edu/sys/pkg/mopac
• How to run at https://wiki.chpc.utah.edu/display/DOCS/MOPAC
• Gaussian and GAMESS also have some semi-empirical capabilities

Quantum (Mainly) Packages
• Gaussian09
• NWChem
There are other packages, not currently installed, that are available free of charge, e.g.,
• GAMESS – http://www.msg.ameslab.gov/gamess/ (general purpose, properties)
• Psi4 – http://www.psicode.org/ (general purpose, properties)
• CFOUR – http://www.cfour.de/ (coupled cluster)
• SIESTA – http://departments.icmab.es/leem/siesta/ (DFT, linear scaling)

Gaussian09
• Commercial electronic structure package
  – http://www.gaussian.com for information and the User's Guide
• Version D01 of G09 installed
  – /uufs/chpc.utah.edu/sys/pkg/gaussian09/EM64T
  – Also have legacy EM64T, AMD64, and legacy AMD64 builds
• For information on accessing the CHPC installation
  – http://www.chpc.utah.edu/docs/manuals/software/g09.html
• More later

NWChem
• Package developed at PNNL to work on massively parallel systems
  – http://www.nwchem-sw.org
• Goal: computational chemistry solutions that are scalable with respect to both chemical system size and MPP hardware size
• Has quantum mechanics, molecular mechanics/dynamics, and quantum molecular dynamics, plus plane waves for periodic systems
• Version 6.3 (NOT with Python support)
  – /uufs/chpc.utah.edu/sys/pkg/nwchem/nwchem-6.3/bin/LINUX64
  – /uufs/ember.arches/sys/pkg/nwchem/nwchem-6.3/bin/LINUX64
  – /uufs/kingspeak.peaks/sys/pkg/nwchem/nwchem-6.3/bin/LINUX64
• To run:
  – source the appropriate etc/nwchem.csh (see the sketch below)
• More information and an example batch script at
  – https://wiki.chpc.utah.edu/display/DOCS/NWChem
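To make the "source the appropriate etc/nwchem.csh" step concrete, a minimal csh fragment for use inside a batch job is sketched below. It assumes the etc/ directory sits alongside the bin/LINUX64 directory in the install trees listed above; the input file name and the MPI launcher and its options are placeholders that depend on the cluster and build, so see the wiki page above for CHPC's actual example batch script.

    # set up the NWChem 6.3 environment (general CHPC install tree shown above)
    source /uufs/chpc.utah.edu/sys/pkg/nwchem/nwchem-6.3/etc/nwchem.csh

    # run in parallel over the processors assigned by the batch system;
    # the launcher (mpirun/mpiexec) and its options depend on the MPI build used
    set NP = `cat $PBS_NODEFILE | wc -l`
    mpirun -np $NP nwchem myjob.nw > myjob.out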
Solid State/Materials Packages
Plane-wave codes for the study of systems under periodic boundary conditions (PBC)
• NWChem has some functionality, and more is being added
• VASP – http://cms.mpi.univie.ac.at/vasp/
  – Ab initio QM/MD
  – Licensed per research group (have 1 group with an old version)
• Quantum Espresso – http://www.quantum-espresso.org/
  – Understands crystal space groups
  – Has a GIPAW module to do NMR calculations
  – J-ICE and XCrysDen to view
• CPMD – http://www.cpmd.org/
• BigDFT – http://inac.cea.fr/L_Sim/BigDFT/

Support Packages
• Molecular Viewers (graphical tools – see the note below on running them with X forwarding)
  – Gaussview
  – Molden
  – VMD
  – Chimera
  – Babel (Openbabel)
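Because these viewers are graphical, they rely on the X-windowing setup described earlier (Xming or FastX on Windows). From a Linux or Mac desktop, a minimal way to run a viewer on a cluster's interactive nodes is sketched below; the cluster name and uNID are placeholders, and it assumes the viewer (VMD here) is on your path once the CHPC environment scripts are set up.

    # log in to a cluster's interactive nodes with X11 forwarding enabled
    ssh -X uNID@kingspeak.chpc.utah.edu

    # start a viewer; its GUI is displayed back on your local desktop
    vmd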