GROMACS

GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package primarily designed for simulations of proteins, lipids and nucleic acids.

Availability / Target HPC systems

  • TinyGPU: best value if only one GPU is used per run – use the latest versions of GROMACS, as they allow offloading more and more work to the GPU
  • parallel computers: experiment to find the proper setting for -npme
  • throughput cluster Woody: best suited for small systems

New versions of GROMACS are installed by RRZE upon request.

Notes

GROMACS can produce large amounts of data in small increments:

  • Try to reduce the frequency and amount of data as much as possible.
  • It may also be useful to stage the generated output in the node’s RAMdisk (i.e. the directory /dev/shm/) first and copy it back, e.g. to $WORK, only once just before the job ends; see the sketch after this list.
  • Frequent writes of small amounts of data are NOT suitable for $FASTTMP.
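
A minimal sketch of such staging inside a job script, assuming the results should end up in $WORK/my-gmx ($PBS_JOBID is set by the batch system; paths and names are placeholders to adapt):

MYTMP=/dev/shm/$PBS_JOBID
mkdir -p $MYTMP
cd $MYTMP
### run with all output going to the node-local RAMdisk
gmx mdrun -maxh 10 -s $PBS_O_WORKDIR/my.tpr
### copy everything back once, just before the job ends
mkdir -p $WORK/my-gmx
cp -a $MYTMP/. $WORK/my-gmx/
rm -rf $MYTMP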

Sample job scripts

#!/bin/bash -l
#PBS -lnodes=1:ppn=4,walltime=10:00:00
#PBS -N my-gmx
#PBS -j eo

cd $PBS_O_WORKDIR

module load gromacs/2019.3-mkl

### the argument of -maxh should match the requested walltime!
gmx mdrun -maxh 10 -s my.tpr

### try automatic restart (adapt the conditions to fit your needs)
if [ -f confout.gro ]; then
   echo "*** confout.gro found; no re-submit required"
   exit
fi
if [ $SECONDS -lt 1800 ]; then
   echo "*** no automatic restart as runtime of the present job was too short"
   exit
fi
qsub $0

#!/bin/bash -l
#PBS -lnodes=4:ppn=40,walltime=10:00:00
#PBS -N my-gmx
#PBS -j eo

cd $PBS_O_WORKDIR

module load gromacs/2019.3-mkl-IVB

### 1) The argument of -maxh should match the requested walltime!
### 2) Performance can often be optimized if -npme # is specified with a proper number of PME tasks;
###    experiment or use gmx tune_pme to find the optimal value (see the sketch after this script).
###    Using the SMT threads can sometimes be beneficial but requires testing.
mpirun [-pinexpr S0:0-19@S1:0-19] mdrun_mpi [-npme #] -maxh 10 -s my.tpr

### try automatic restart (adapt the conditions to fit your needs)
if [ -f confout.gro ]; then
   echo "*** confout.gro found; no re-submit required"
   exit
fi
if [ $SECONDS -lt 1800 ]; then
   echo "*** no automatic restart as runtime of the present job was too short"
   exit
fi
qsub $0
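
As a starting point for choosing -npme, GROMACS ships gmx tune_pme, which benchmarks mdrun with several numbers of dedicated PME ranks and writes the timings to perf.out. A minimal sketch (assuming the loaded module also provides the serial gmx binary; the rank and step counts below are placeholders to adapt to your job size):

export MPIRUN=mpirun      # tune_pme uses this command to launch the benchmark runs
gmx tune_pme -np 160 -mdrun mdrun_mpi -s my.tpr -steps 1000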

#!/bin/bash -l
#PBS -lnodes=1:ppn=4,walltime=10:00:00
#PBS -N my-gmx
#PBS -j eo

cd $PBS_O_WORKDIR

module load gromacs/2019.3-mkl-CUDA101

### 1) The argument of -maxh should match the requested walltime!
### 2) optional arguments are: -pme gpu -npme 1
###                            -bonded gpu
gmx mdrun -maxh 10 -s my.tpr
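### e.g. with the offload options from above (a sketch for a GPU-enabled 2019 build;
### the benefit depends on the system, so test before production use):
# gmx mdrun -maxh 10 -s my.tpr -pme gpu -npme 1 -bonded gpu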

### try automatic restart (adapt the conditions to fit your needs)
if [ -f confout.gro ]; then
   echo "*** confout.gro found; no re-submit required"
   exit
fi
if [ $SECONDS -lt 1800 ]; then
   echo "*** no automatic restart as runtime of the present job was too short"
   exit
fi
qsub $0

The performance benefit of using multiple GPUs is often very low! You usually get much better throughput if you run multiple independent jobs on a single GPU each, as shown above.

If you request specific GPU types and their nodes support SMT (which currently is the case for the v100 and rtx2080ti nodes), request ppn=16:smt, as SMT typically gives a small performance boost.

Even when using multiple GPUs, do not use the MPI-parallel version (mdrun_mpi) but the thread-MPI version (gmx mdrun) of GROMACS. The value of -ntmpi # should usually match the number of GPUs available.

#!/bin/bash -l
#PBS -lnodes=1:ppn=16,walltime=10:00:00
#PBS -N my-gmx
#PBS -j eo

cd $PBS_O_WORKDIR

module load gromacs/2019.3-mkl-CUDA101

### 1) The argument of -maxh should match the requested walltime!
### 2) Typical optional arguments are: -pme gpu -npme 1
###                                    -bonded gpu
gmx mdrun -ntmpi 4 -ntomp 4 -maxh 10 -s my.tpr
### if :smt is requested, the following line should typically be used
# gmx mdrun -ntmpi 4 -ntomp 8 -maxh 10 -s my.tpr

### try automatic restart (adapt the conditions to fit your needs)
if [ -f confout.gro ]; then
   echo "*** confout.gro found; no re-submit required"
   exit
fi
if [ $SECONDS -lt 1800 ]; then
   echo "*** no automatic restart as runtime of the present job was too short"
   exit
fi
qsub $0

Further information

Mentors

  • Dr. A. Kahler, RRZE, support-hpc@fau.de
  • AG Böckmann (Professur für Computational Biology, NatFak)