...

Version | Installation Path                        | modulefile            | compiler | comment
2018.4  | /sw/chem/gromacs/2018.4/skl/impi         | gromacs/2018.4        | intelmpi |
2018.4  | /sw/chem/gromacs/2018.4/skl/impi-plumed  | gromacs/2018.4-plumed | intelmpi | with plumed
2019.6  | /sw/chem/gromacs/2019.6/skl/impi         | gromacs/2019.6        | intelmpi |
2019.6  | /sw/chem/gromacs/2019.6/skl/impi-plumed  | gromacs/2019.6-plumed | intelmpi | with plumed
2021.2  | /sw/chem/gromacs/2021.2/skl/impi         | gromacs/2021.2        | intelmpi |
2021.2  | /sw/chem/gromacs/2021.2/skl/impi-plumed  | gromacs/2021.2-plumed | intelmpi | with plumed
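
To check which GROMACS modulefiles are currently available, you can also query the module system directly. A minimal sketch (the exact list depends on the current installation):

# List all installed GROMACS modulefiles
module avail gromacs

# Show what a specific modulefile sets (paths, required modules)
module show gromacs/2019.6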

Usage

Load the necessary modulefiles. Note that the Intel MPI modulefile has to be loaded first:

module load impi/2019.5 gromacs/2019.6

This provides access to the binary gmx_mpi, which can be used to run simulations with sub-commands such as gmx_mpi mdrun.
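
Other sub-commands of gmx_mpi are invoked in the same way, for example to prepare the run input file before starting the simulation. A minimal sketch, assuming the usual GROMACS input files (an .mdp parameter file, a structure file and a topology) are present in the working directory; the file names are only examples:

# Pre-process the inputs into a portable run input file (.tpr);
# this step runs as a single process and needs no mpirun
gmx_mpi grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr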

...

mpirun gmx_mpi mdrun MDRUNARGUMENTS
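
Here MDRUNARGUMENTS is a placeholder for your actual mdrun options. A minimal sketch of a concrete call, assuming the run input file is named topol.tpr (only an example):

# -s selects the run input file, -deffnm sets a common prefix for output files,
# -maxh lets mdrun stop gracefully shortly before the requested walltime
mpirun gmx_mpi mdrun -s topol.tpr -deffnm md -maxh 12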

Job Script Examples

  1. A simple case of a GROMACS job using a total of 640 CPU cores for 12 hours. The requested number of cores in this example does not fill all cores available on the allocated nodes: the job will execute 92 ranks on 3 of the nodes plus 91 ranks on the remaining 4 nodes. You can use this example if you know the exact number of ranks you want to use.

    #!/bin/bash
    #SBATCH -t 12:00:00
    #SBATCH -p standard96
    #SBATCH -n 640
    
    export SLURM_CPU_BIND=none
    
    module load impi/2019.5
    module load gromacs/2019.6
    
    mpirun gmx_mpi mdrun MDRUNARGUMENTS


  2. If you want to use all cores on the allocated nodes, the batch system offers other options for requesting the number of nodes and the number of tasks per node. The example below will run 672 ranks (7 nodes × 96 tasks per node). A sketch of how to submit such a job script follows after the examples.

    #!/bin/bash
    #SBATCH -t 12:00:00
    #SBATCH -p standard96
    #SBATCH -N 7
    #SBATCH --tasks-per-node 96
    
    export SLURM_CPU_BIND=none
    
    module load impi/2019.5
    module load gromacs/2019.6
    
    mpirun gmx_mpi mdrun MDRUNARGUMENTS
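
Either of the job scripts above can be submitted to the batch system in the usual way. A minimal sketch, assuming the script was saved as gromacs_job.sh (the file name is only an example):

# Submit the job script and check its state in the queue
sbatch gromacs_job.sh
squeue -u $USER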