Using multiple programs on different data within a single job takes a bit of setup, as you need to tell the MPI starter exactly what to run and where to run it.


Example script hello.slurm for a code with two binaries:

  • one OpenMP binary hello_omp.bin running on 1 node, 2 MPI tasks per node and 4 OpenMP threads per task,
  • one MPI binary hello_mpi.bin running on 2 nodes, 4 MPI tasks per node.
Intel MPI
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --nodes=3
#SBATCH --partition=medium:test

module load impi
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=4

# Build a machinefile: 2 slots on the first node (for the hybrid binary),
# 4 slots on each remaining node (for the pure MPI binary).
scontrol show hostnames $SLURM_JOB_NODELIST | awk '{if(NR==1) {print $0":2"} else {print $0":4"}}' > machines.txt
mpirun -machine machines.txt -n 2 ./hello_omp.bin : -n 8 ./hello_mpi.bin
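To see what the generated machinefile looks like, you can run the same awk filter on a made-up node list (the node names here are hypothetical stand-ins for what `scontrol show hostnames` prints inside the job):

```shell
# First node gets 2 slots, every further node gets 4 slots.
printf 'node001\nnode002\nnode003\n' \
  | awk '{if(NR==1) {print $0":2"} else {print $0":4"}}'
# Output:
#   node001:2
#   node002:4
#   node003:4
```

With 3 allocated nodes this yields 2 + 4 + 4 = 10 slots, matching the 2 + 8 tasks requested on the mpirun line.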
  • This can also be done with srun. Unfortunately, none of our systems is running at the moment...
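As a rough sketch of the srun variant: Slurm's --multi-prog option takes a configuration file that maps MPI rank ranges to binaries. The file name multi.conf is an illustration, and matching the per-node placement above may additionally require an arbitrary task distribution (e.g. -m arbitrary with SLURM_HOSTFILE), so treat this as an untested outline:

```shell
# multi.conf maps MPI ranks to binaries:
#   ranks 0-1 run the hybrid code, ranks 2-9 the pure MPI code
cat > multi.conf <<'EOF'
0-1  ./hello_omp.bin
2-9  ./hello_mpi.bin
EOF

srun --ntasks=10 --multi-prog multi.conf
```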