KANAD

HPC APPLICATION

REFERENCE MANUAL

TABLE OF CONTENTS

1. INTRODUCTION
2. COMPILERS & LIBRARIES
2.1. INTEL MKL
2.2. FFTW
2.3. OPENMPI
2.4. INTEL® MPI LIBRARY
2.5. GCC
2.6. INTEL 13.0.1
2.7. ATLAS (AUTOMATICALLY TUNED LINEAR ALGEBRA SOFTWARE)
2.8. BLAS
2.9. GSL
2.10. LIBXC
2.11. NETCDF
2.12. HDF5
2.13. CFITSIO
2.14. PFFT
2.15. ETSF
2.16. SPARSKIT
3. APPLICATIONS
3.1. GROMACS
3.2. DL-POLY
3.3. ESPRESSO
3.4. GAUSSIAN
3.5. LAMMPS
3.6. MPI-BLAST
3.7. NAMD
3.8. NWCHEM
3.9. OCTOPUS
3.10. PLUMED-AMBER
3.11. PLUMED-GROMACS
3.12. TINKER
3.13. CP2K
3.14. CPMD
3.15. CAMB
3.16. TMOLEX
3.17. COSMOMC
3.18. DALTON
3.19. HEALPIX
3.20. MDYNAMIX

1. Introduction

2. Compilers & Libraries

2.1. Intel MKL

Intel® Math Kernel Library (Intel® MKL) offers highly optimized, extensively threaded math routines for scientific, engineering, and financial applications that require maximum performance. It includes BLAS, BLACS, LAPACK, and ScaLAPACK routines that are highly optimized for Intel processors and provide significant performance improvements over alternative implementations.

Features:
- Includes both C and Fortran interfaces.
- Highly optimized for current multi-core x86 platforms.
- The ScaLAPACK routines can provide significant performance improvements over the standard NETLIB implementation.
- Multi-dimensional FFT routines (1D through 7D) with modern, easy-to-use C and Fortran interfaces, plus compatibility with the FFTW 2.x and 3.x interfaces, making it easy for current FFTW users to plug Intel MKL into their existing applications.
- Supports distributed-memory clusters with the same API, enabling improved performance by distributing work over a large number of processors with minimal effort.

Installation Path: /opt/intel/mkl/
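For example, programs can be built against MKL with the Intel compiler's -mkl flag; a minimal sketch (the source file name is hypothetical):

# compile and link against the threaded MKL layer
icc dgemm_demo.c -o dgemm_demo -mkl=parallel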

2.2. FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST). It can compute transforms of real- and complex-valued arrays of arbitrary size and dimension in O(n log n) time. Different versions are installed to meet the requirements of various applications.

Installation Path: /opt/apps/libs/fftw/3.3.3/intel/
Installation Path: /opt/apps/FFTW
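A typical compile-and-link line against the 3.3.3 installation might look as follows (a sketch; the include/lib subdirectory layout and the source file name are assumptions):

icc fft_demo.c -o fft_demo -I/opt/apps/libs/fftw/3.3.3/intel/include -L/opt/apps/libs/fftw/3.3.3/intel/lib -lfftw3 -lm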

2.3. OpenMPI

Installation Path: /opt/mpi/openmpi/1.6.2/intel/
Installation Path: /opt/mpi/openmpi/1.4.5/intel/

To compile a parallel program:
C program : /opt/mpi/openmpi/1.6.2/intel/bin/mpicc
C++ program : /opt/mpi/openmpi/1.6.2/intel/bin/mpicxx
F77 program : /opt/mpi/openmpi/1.6.2/intel/bin/mpif77
F90 program : /opt/mpi/openmpi/1.6.2/intel/bin/mpif90

To run OpenMPI parallel jobs:

1. Assume the executable name is a.out.

2. $ /opt/mpi/openmpi/1.6.2/intel/bin/mpirun -np 16 ./a.out
(This runs the job with 16 MPI processes.)

3. To define hosts, create a hostfile listing the IB node names and their slot counts. For example, the following file defines 4 nodes (ibn1, ibn2, ibn3, and ibn4) with 4, 2, 6, and 4 slots respectively:

$ vi ibhosts
ibn1 slots=4
ibn2 slots=2
ibn3 slots=6
ibn4 slots=4

4. Run the job with the "-hostfile" option:

$ /opt/mpi/openmpi/1.6.2/intel/bin/mpirun -np 16 -hostfile ibhosts ./a.out

2.4. Intel® MPI Library

Intel MPI 4.1 focuses on making applications perform better on Intel® architecture-based clusters by implementing the high-performance Message Passing Interface Version 2.2 specification on multiple fabrics. It enables you to quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring changes to the software or operating environment.

Installation Path: /opt/intel/impi/4.1.0.024/intel64

To compile a parallel program:
C program : /opt/intel/impi/4.1.0.024/intel64/bin/mpiicc
C++ program : /opt/intel/impi/4.1.0.024/intel64/bin/mpicxx
F77 program : /opt/intel/impi/4.1.0.024/intel64/bin/mpif77
F90 program : /opt/intel/impi/4.1.0.024/intel64/bin/mpif90
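For example, a hypothetical hello_mpi.c could be compiled and launched with 16 ranks as follows (a sketch; in production the launch is normally driven through PBS, as in the application scripts later in this manual):

/opt/intel/impi/4.1.0.024/intel64/bin/mpiicc hello_mpi.c -o hello_mpi
/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -np 16 ./hello_mpi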

2.5. GCC

Two versions of the GNU compilers and libraries have been installed on the cluster.

GCC-3.4.6
C : /usr/bin/gcc34
C++ : /usr/bin/g++34
Fortran77 : /usr/bin/g77

GCC-4.4.6
C : /usr/bin/gcc
C++ : /usr/bin/g++
Fortran77/90 : /usr/bin/gfortran

To compile a program:

C program : gcc <source> -o <output> (or gcc34 for GCC 3.4.6)
C++ program : g++ <source> -o <output> (or g++34 for GCC 3.4.6)
Fortran program : gfortran <source> -o <output> (or g77 for Fortran 77 with GCC 3.4.6)
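For instance (hello.c and hello.f90 are hypothetical source files):

gcc -O2 hello.c -o hello        # GCC 4.4.6
gcc34 -O2 hello.c -o hello      # GCC 3.4.6
gfortran -O2 hello.f90 -o hello_f90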

2.6. Intel 13.0.1

Installation Path
C : /opt/intel/composer_xe_2013.1.117/bin/intel64/icc
C++ : /opt/intel/composer_xe_2013.1.117/bin/intel64/icpc
Fortran77/90 : /opt/intel/composer_xe_2013.1.117/bin/intel64/ifort

To compile a program:
C program : icc <source> -o <output>
C++ program : icpc <source> -o <output>
Fortran program : ifort <source> -o <output>

2.7. ATLAS (Automatically Tuned Linear Algebra Software)

ATLAS provides optimized linear algebra subroutines. It contains the BLAS API (for both C and Fortran77) and a very small subset of the LAPACK API.

Installation Path: /usr/lib64/atlas

2.8. BLAS

Basic Linear Algebra Subprograms (BLAS) is a de facto application programming interface standard for publishing libraries that perform basic linear algebra operations such as vector and matrix multiplication.

Installation Path: /opt/apps/libs/blas/openblas/

2.9. GSL

The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.

Installation Path: /opt/apps/libs/gsl/intel/1.15
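A typical link line (a sketch; the include/lib layout under the installation path and the source file name are assumptions):

gcc gsl_demo.c -o gsl_demo -I/opt/apps/libs/gsl/intel/1.15/include -L/opt/apps/libs/gsl/intel/1.15/lib -lgsl -lgslcblas -lm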

2.10. libxc

Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals that can be used by all the ETSF codes and also other codes.

Installation Path: /opt/apps/libs/libxc/2.0.1/
Installation Path: /opt/apps/libs/libxc/1.1.0/

2.11. NetCDF

NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. Two versions of NetCDF have been installed.

Installation Path: /opt/apps/netcdf/ (Version 3.6.2)
Installation Path: /opt/apps/netcdf/netcdf-4.2.1 (Version 4.2.1, default)
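A link line against the default 4.2.1 installation might look as follows (a sketch; the include/lib layout and the source file name are assumptions):

icc read_nc.c -o read_nc -I/opt/apps/netcdf/netcdf-4.2.1/include -L/opt/apps/netcdf/netcdf-4.2.1/lib -lnetcdf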

2.12. HDF5

Hierarchical Data Format (HDF5) is the name of a set of file formats and libraries designed to store and organize large amounts of numerical data.

Installation Path: /opt/apps/HDF5/1.8.10/
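HDF5 installations normally ship an h5cc compiler wrapper that supplies the correct include and link flags; assuming it is present in this installation, a build could look like this (h5_demo.c is hypothetical):

/opt/apps/HDF5/1.8.10/bin/h5cc h5_demo.c -o h5_demo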

2.13. cfitsio

CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format. CFITSIO provides simple high-level routines for reading and writing FITS files that insulate the programmer from the internal complexities of the FITS format.

Installation Path: /opt/apps/cfitsio

2.14. pfft

PFFT is a parallel FFT software library based on MPI and distributed under the GPL. PFFT depends on the FFTW software library.

Installation Path: /opt/apps/pfft/1.0.5/
Installation Path: /opt/apps/pfft/1.0.6-new/

2.15. ETSF

Application Description

ETSF_IO enables an architecture-independent exchange of crystallographic data, electronic wavefunctions, densities and potentials, as well as spectroscopic data. It is meant to be used by quantum-physical and quantum-chemical applications relying upon the Density Functional Theory (DFT) framework.

Version : 1.0.3
Path : /opt/apps/etsf/1.0.3

2.16. SPARSKIT

Application Description

SPARSKIT is a tool package for working with sparse matrices. Its main objectives are to convert between different storage schemes in order to simplify exchange of data between researchers, and to do basic linear algebra and matrix manipulation.

Version : 2.6.3
Path : /opt/apps/SPARSKIT2

3. Applications

3.1. Gromacs

Application Description

GROMACS (GROningen MAchine for Chemical Simulations) is a package primarily designed for simulations of proteins, lipids and nucleic acids. Gromacs is a suite of programs which is freely available under the GNU GPL (General Public License). GROMACS is extremely fast due to algorithmic and processor-specific optimization, typically running 3-10 times faster than many simulation programs. It can be executed in parallel, using MPI or threads. GROMACS contains a script to convert molecular coordinates from a PDB file into the formats it uses internally.

Version Information

A. 4.5.5

PATH : /opt/apps/gromacs/4.5.5/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/FFTW/fftw-3.3.3-intel-single-mpi

B. 4.6.4

PATH : /opt/apps/gromacs/4.6.4/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/FFTW/fftw-3.3.3-intel-single-mpi
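Before the MPI run, inputs are typically prepared with the standard GROMACS 4.x tools; a minimal sketch (file names are hypothetical, and the _mpi suffix is assumed to follow the binaries used in the script below):

# convert a PDB structure and generate a topology, then preprocess to a .tpr run input
/opt/apps/gromacs/4.5.5/intel/bin/pdb2gmx_mpi -f protein.pdb -o conf.gro -p topol.top
/opt/apps/gromacs/4.5.5/intel/bin/grompp_mpi -f md.mdp -c conf.gro -p topol.top -o topol.tpr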

Sample PBS Job Submission Script for Gromacs

#!/bin/bash
#PBS -N GROMACS
#PBS -j oe
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -l walltime=72:00:00
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q medium2
cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > pbsnodes
/opt/apps/gromacs/4.5.5/intel/bin/grompp_mpi
/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 16 /opt/apps/gromacs/4.5.5/intel/bin/mdrun_mpi

3.2. DL-POLY

Application Description

DL POLY 3 is the property of Daresbury Laboratory and is issued free under licence to academic institutions pursuing scientific research of a non-commercial nature. Commercial organisations may be permitted a licence to use the package after negotiation with the owners. Daresbury Laboratory is the sole centre for distribution of the package. Under no account is it to be redistributed to third parties without consent of the owners.

The purpose of the DL POLY 3 package is to provide software for academic research that is inexpensive, accessible and free of commercial considerations. Users have direct access to source code for modification and inspection.

Version Information

4.0.4

PATH : /opt/apps/dlpoly/4.04/intelmpi/execute/DLPOLY.Z
Compiler : Intel 13.0.1
MPI : Intel MPI 4.1.1.0.024

Sample PBS Job Submission Script for DLPOLY

#!/bin/bash
#PBS -N DLPOLY
#PBS -j oe
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -l walltime=4:00:00
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q short2
cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > pbsnodes
/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 48 /opt/apps/dlpoly/4.04/intelmpi/execute/DLPOLY.Z

3.3. ESPRESSO

Application Description

Quantum ESPRESSO is a software suite for ab initio electronic-structure calculations and materials modeling, distributed for free under the GNU General Public License. It is based on Density Functional Theory, plane-wave basis sets, and pseudopotentials (both norm-conserving and ultrasoft). ESPRESSO is an acronym for opEn-Source Package for Research in Electronic Structure, Simulation, and Optimization.

Version Information

A. 4.3.2 with Plumed

PATH : /opt/apps/espresso/4.3.2/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/libs/fftw/3.3.3/intel

B. 5.0.2

PATH : /opt/apps/espresso/5.0.2/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/libs/fftw/3.3.3/intel
BLAS : Internal
LAPACK : Internal

C. 5.0.3

PATH : /opt/apps/espresso/5.0.3/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/libs/fftw/3.3.3/intel
BLAS : /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a
LAPACK : /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.a

D. 5.0.3-cluster

PATH : /opt/apps/espresso/5.0.3-cluster/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
BLAS : /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so
LAPACK : /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.so

Sample PBS Job Submission Script for Espresso

#!/bin/bash
#PBS -N espresso
#PBS -j oe
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -l walltime=4:00:00
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q short2
cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > pbsnodes
/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 16 /opt/apps/espresso/5.0.3-cluster/intel/bin/pw.x
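Note that the sample script above does not name an input file; pw.x reads its input from stdin or via the -input option. A sketch with a hypothetical scf.in:

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 16 /opt/apps/espresso/5.0.3-cluster/intel/bin/pw.x -input scf.in > scf.out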

3.4. GAUSSIAN

Application Description

Gaussian has been designed with the needs of the user in mind. All of the standard input is free-format and mnemonic. Reasonable defaults for input data have been provided, and the output is intended to be self-explanatory. Mechanisms are available for the sophisticated user to override defaults or interface their own code to the Gaussian system.

Version Information

09

PATH : /opt/apps/gaussian/g09

PBS Script

#!/bin/bash
#PBS -N G09
#PBS -j oe
#PBS -l select=2:ncpus=16:mpiprocs=16
#PBS -l walltime=4:00:00
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q short1
cd $PBS_O_WORKDIR
export g09root=/opt/apps/gaussian
source /opt/apps/gaussian/g09/bsd/g09.profile

# set up GAUSS_LFLAGS with one entry per node
cat $PBS_NODEFILE | sort | uniq > pbsnodes
export GAUSS_LFLAGS="-nodefile pbsnodes -opt \"Tsnet.Node.lindarsharg: ssh\""

GINP=c60_test.com

NSLOTS=`wc -l < $PBS_NODEFILE`

# number of Linda workers: total slots divided by 8 shared-memory threads per worker
LW=`expr $NSLOTS / 8`
echo $LW
echo "%NProcShared=8" > test.$PBS_JOBID
echo "%NProcLinda=$LW" >> test.$PBS_JOBID
cat $GINP >> test.$PBS_JOBID
mv $GINP $GINP.bkp
cp -f test.$PBS_JOBID $GINP

/opt/apps/gaussian/g09/g09 $GINP

mv $GINP.bkp $GINP

rm -f test.$PBS_JOBID

3.5. LAMMPS

Application Description

LAMMPS ("Large-scale Atomic/Molecular Massively Parallel Simulator") is a molecular dynamics program from Sandia National Laboratories. LAMMPS makes use of MPI for parallel communication and is free, open-source software, distributed under the terms of the GNU General Public License.

Version Information

LAMMPS (3 Dec 2012)

PATH : /opt/apps/lammps/bench/lmp_linux
Compiler : Intel 13.0.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/FFTW/fftw-2.1.5-intel-single-mpi

Sample PBS Job Submission Script for LAMMPS

#!/bin/bash
#PBS -N LAMMPS
#PBS -j oe
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -l walltime=4:00:00
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q debug

cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > pbsnodes
/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 48 /opt/apps/lammps/bench/lmp_linux

3.6. MPI-BLAST

Application Description

mpiBLAST is a freely available, open-source, parallel implementation of NCBI BLAST. By efficiently utilizing distributed computational resources through database fragmentation, query segmentation, intelligent scheduling, and parallel I/O, mpiBLAST improves NCBI BLAST performance by several orders of magnitude while scaling to hundreds of processors. mpiBLAST is also portable across many different platforms and operating systems.

Version Information

1.6.0

PATH : /opt/apps/mpiblast/1.6.0/intelmpi/bin
Compiler : Intel 13.0.1
MPI : Intel MPI 4.1.1.0.024

Sample PBS Job Submission Script for mpiBLAST

#!/bin/bash
#PBS -N MPIBLAST
#PBS -j oe
#PBS -l select=3:ncpus=12:mpiprocs=12
#PBS -l walltime=3:00:00
#PBS -V
#PBS -v TEMP=/scratch2
#PBS -q medium1
cd $PBS_O_WORKDIR

cat $PBS_NODEFILE > pbsnodes
echo " Start Time: `date` " > time_mb_36

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 36 /opt/apps/mpiblast/1.6.0/intelmpi/bin/mpiblast -d nr -i FS1.fa -p blastx -o resultsFS1_36

echo " End Time: `date` " >> time_mb_36

3.7. NAMD

Application Description

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 200,000 cores for the largest simulations.

Version Information

2.9

PATH : /opt/apps/NAMD2.9

Sample PBS Job Submission Script for NAMD

#!/bin/bash
#PBS -N NAMD
#PBS -j oe
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -l walltime=2:00:00
#PBS -V
#PBS -v TEMP=/scratch2
#PBS -q plong

cd $PBS_O_WORKDIR

export namdnodes=`cat $PBS_NODEFILE`
# count cores from the PBS node file before pbsnodes is overwritten below
namdcores=`wc -l < $PBS_NODEFILE`

# build a charmrun nodelist file from the PBS node list
echo "group main" > pbsnodes
for namdnode in $namdnodes ; do echo "host $namdnode" >> pbsnodes; done

export namddir=/opt/apps/NAMD2.9
$namddir/charmrun ++remote-shell ssh ++nodelist pbsnodes +p$namdcores $namddir/namd2 apoa1.

3.8. NWCHEM

Application Description

NWChem is an ab initio software package which also includes quantum chemical and molecular dynamics functionality. It was designed to run on high-performance parallel supercomputers as well as conventional workstation clusters. It aims to be scalable both in its ability to treat large problems efficiently and in its usage of available parallel computing resources.

Version Information

A. 6.1.1

PATH : /opt/apps/nwchem/6.1.1/intel/bin
MPI : Intel MPI 4.1.1.0.024

B. 6.3

PATH : /opt/apps/nwchem/6.3/intel/bin
MPI : Intel MPI 4.1.1.0.024

PBS Script

#!/bin/bash
#PBS -N NWCHEM
#PBS -j oe
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -l walltime=16:00:00
#PBS -q long

cd $PBS_O_WORKDIR

cat $PBS_NODEFILE > pbsnodes

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 36 /opt/apps/nwchem/6.3/intel/bin/LINUX64/nwchem

3.9. OCTOPUS

Application Description

Octopus is a scientific program aimed at ab initio virtual experimentation on a hopefully ever-increasing range of system types. Electrons are described quantum-mechanically within density-functional theory (DFT), in its time-dependent form (TDDFT) when doing simulations in time. Nuclei are described classically as point particles.

Version Information

A. 4.0.1

PATH : /opt/apps/octopus/4.0.1/intel-ompi/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Open MPI 1.6.2
BLAS : /opt/intel/composerxe/mkl/lib/intel64
LAPACK : /opt/intel/composerxe/mkl/lib/intel64
LIBXC : /opt/apps/libs/libxc/1.1.0
ETSF-IO : /opt/apps/etsf/1.0.3
GSL : /opt/apps/libs/gsl/intel/1.15
SPARSKIT : /opt/apps/SPARSKIT
NETCDF : /opt/apps/libs/netcdf
PFFT : /opt/apps/pfft/1.0.5

B. 4.1.0

PATH : /opt/apps/octopus/4.1.0/intel-ompi/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Open MPI 1.6.2
BLAS : /opt/intel/composerxe/mkl/lib/intel64
LAPACK : /opt/intel/composerxe/mkl/lib/intel64
LIBXC : /opt/apps/libs/libxc/2.0.1
ETSF-IO : /opt/apps/etsf/1.0.3
GSL : /opt/apps/libs/gsl/intel/1.15
SPARSKIT : /opt/apps/SPARSKIT
NETCDF : /opt/apps/libs/netcdf
PFFT : /opt/apps/pfft/1.0.6

Sample PBS Job Submission Script for Octopus

#!/bin/bash
#PBS -N OCTOPUS
#PBS -j oe
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q medium

cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > pbsnodes

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 36 /opt/apps/octopus/4.1.0/intel-ompi/bin/octopus_mpi

3.10. PLUMED-AMBER

Application Description

PLUMED is an open-source library for free-energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free-energy calculations can be performed as a function of many order parameters, with a particular focus on biological problems, using state-of-the-art methods such as metadynamics, umbrella sampling and Jarzynski-equation-based steered MD. The software, written in C++, can be easily interfaced with both Fortran and C/C++ codes.

Version Information

1.3

PATH : /opt/apps/plumed/1.3/amber/10/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/libs/fftw/3.3.3/intel

Sample PBS Job Submission Script

#!/bin/bash
#PBS -N PLUMED-AMBER
#PBS -j oe
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -V
#PBS -v TEMP=/scratch1
#PBS -q short1
cd $PBS_O_WORKDIR

cat $PBS_NODEFILE > pbsnodes

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 36 /opt/apps/plumed/1.3/amber/10/intel/bin/sander.MPI

3.11. PLUMED-GROMACS

Application Description

PLUMED is an open-source library for free-energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free-energy calculations can be performed as a function of many order parameters, with a particular focus on biological problems, using state-of-the-art methods such as metadynamics, umbrella sampling and Jarzynski-equation-based steered MD. The software, written in C++, can be easily interfaced with both Fortran and C/C++ codes.

Version Information

1.3

PATH : /opt/apps/plumed/1.3/gromacs/4.5.5/intel/bin
Compiler : Intel 13.0.1
MPI : Intel MPI 4.1.1.0.024
FFTW : /opt/apps/libs/fftw/3.3.3/intel

Sample PBS Job Submission Script

#!/bin/bash
#PBS -N PLUMED-GROMACS
#PBS -j oe
#PBS -l select=3:ncpus=16:mpiprocs=16
#PBS -l walltime=16:00:00
#PBS -V
#PBS -v TEMP=/scratch2
#PBS -q medium

cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > pbsnodes

/opt/apps/plumed/1.3/gromacs/4.5.5/intel/bin/gromppmpi_plmd

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 36 /opt/apps/plumed/1.3/gromacs/4.5.5/intel/bin/mdrunmpi_plmd

3.12. TINKER

Application Description

The TINKER molecular modeling software is a complete and general package for molecular mechanics and dynamics, with some special features for biopolymers. TINKER has the ability to use any of several common parameter sets, such as Amber, CHARMM, Allinger MM, OPLS, the Merck Molecular Force Field, Liam Dang's polarizable model, and the AMOEBA polarizable atomic multipole force field.

Version Information

6.1.01

PATH : /opt/apps/tinker/6.1.01/intel/bin
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024

3.13. CP2K

Application Description

CP2K is a program to perform atomistic and molecular simulations of solid-state, liquid, molecular, and biological systems. It provides a general framework for different methods, such as density functional theory (DFT) using a mixed Gaussian and plane-waves approach (GPW), and classical pair and many-body potentials.

Version Information

2.4.0

PATH : /opt/apps/cp2k/2.4.0/intel/cp2k/exe
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024
BLAS : /opt/intel/composer_xe_2013.1.117/mkl/lib/intel64
ScaLAPACK : /opt/intel/composer_xe_2013.1.117/mkl/lib/intel64
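A run-line sketch for use inside a PBS script like the samples above (the architecture subdirectory and the parallel executable name cp2k.popt follow the usual CP2K build layout; the input file is hypothetical, so adjust to the local build):

/opt/intel/impi/4.1.0.024/intel64/bin/mpiexec.hydra -machinefile pbsnodes -np 16 /opt/apps/cp2k/2.4.0/intel/cp2k/exe/Linux-x86-64-intel/cp2k.popt -i water.inp -o water.out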

3.14. CPMD

Application Description

The CPMD code is a parallelized plane-wave/pseudopotential implementation of Density Functional Theory, particularly designed for ab initio molecular dynamics.

Version Information

A. 3.15.3

PATH : /opt/apps/cpmd/3.15.3
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024

B. 3.13.2

PATH : /opt/apps/cpmd/3.13.2
Compiler : Intel 13.0.1
MKL : Intel MKL 11.1
MPI : Intel MPI 4.1.1.0.024

3.15. CAMB

Application Description

CAMB (Code for Anisotropies in the Microwave Background) uses variables derived from covariant quantities, and its equations (in equations.f90) look superficially different from those in CMBFAST, which uses the synchronous gauge.

Version : 2013
Path : /opt/apps/camb/

3.16. TMOLEX

Application Description

TmoleX is a graphical user interface for TURBOMOLE that includes a molecular builder with a simple draw tool. It can combine results from different jobs, export them to an Excel file, and (for simple organic molecules) produce 2D graphics.

Version : 3.01
Path : /opt/apps/cosmologic11/TmoleX/

3.17. COSMOMC

Application Description

CosmoMC is a Fortran 2003 Markov chain Monte Carlo (MCMC) engine for exploring cosmological parameter space, together with code for analysing Monte Carlo samples and importance sampling (plus a suite of Python scripts for building grids of runs and for plotting and presenting results).

Version : 2013
Path : /opt/apps/cosmomc/

3.18. DALTON

Application Description

The kernels of the Dalton2011 suite are two powerful molecular electronic structure programs, DALTON and LSDALTON. Together, the two programs provide an extensive functionality for the calculations of molecular properties at the HF, DFT, MCSCF, and CC levels of theory.

Version : Dalton2011_release
Path : /opt/apps/Dalton2011_release

3.19. HEALPIX

Application Description

HEALPix (sometimes written as Healpix), an acronym for Hierarchical Equal Area isoLatitude Pixelisation of a 2-sphere, can refer to either an algorithm for pixelisation of the 2-sphere, an associated software package or an associated class of map projections. The HEALPix projection is a general class of spherical projections, sharing several key properties, which map the 2-sphere to the Euclidean plane.

Version : 2.10
Path : /opt/apps/Healpix_2.10

3.20. MDYNAMIX

Application Description

MDynaMix (an acronym for Molecular Dynamics of Mixtures) is a general-purpose molecular dynamics software package for simulations of mixtures of molecules interacting through AMBER/CHARMM-like force fields in periodic boundary conditions. MDynaMix is developed at Stockholm University, Sweden. Algorithms for NVE, NVT, NPT and anisotropic NPT ensembles are employed, as well as Ewald summation for treatment of the electrostatic interactions. The code was written in Fortran 77 (with MPI for parallel execution) and C++ and released under the GNU GPL.

Version : 5.2
Path : /opt/apps/MdynaMix/md52
