A Finite Element Analysis Package for Engineering Application

Details of the HLRN Installation of ABAQUS

The module name is abaqus. Several versions may be installed. To see which ABAQUS versions are currently available, inspect the output of:            module avail abaqus
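For example, a typical session on a login node might look like the following sketch (the version number is only an illustration; use one of the versions reported by module avail):

# list all installed ABAQUS modules
module avail abaqus

# load a specific version, e.g. 2020
module load abaqus/2020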

Conditions for Usage and Licensing at HLRN

All usage of ABAQUS at HLRN is strictly limited to teaching and academic research; it must not be used for industry-funded projects.

Access to and use of the software is regionally limited.

There are usually sufficient licenses for Abaqus/Standard and Abaqus/Explicit command-line jobs. In contrast, we only offer 4 licenses for the interactive Abaqus/CAE (GUI). If you add the flag "#SBATCH -L cae" to your job script, the SLURM scheduler starts your job only if CAE licenses are available. You can check the available CAE licenses yourself with: scontrol show lic
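A minimal job script requesting a CAE license could look like the following sketch (runtime, partition, and the replayed Python script are assumptions; adapt them to your case):

#!/bin/bash
#SBATCH -t 01:00:00
#SBATCH --nodes=1
#SBATCH -p standard96:test
#SBATCH -L cae                  # wait until an Abaqus/CAE license is free
#SBATCH --job-name=abaqus.cae

module load abaqus/2020

# replay a CAE Python script without opening the GUI
# (my_model.py is a placeholder for your own script)
abaqus cae noGUI=my_model.py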

Example Jobscripts

The input file of the test case (Large Displacement Analysis of a linear beam in a plane) is: c2.inp

Distributed Memory Parallel Processing

#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2  
#SBATCH --ntasks-per-node=48
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --job-name=abaqus.c2

module load abaqus/2020

# host list:
echo "SLURM_NODELIST:  $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env exists already in the case folder, it will append the line with the hostlist.

### ABAQUS parallel execution
abq2020 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double

echo '#################### ABAQUS finished ############'
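For reference, the hostlist line that create_abaqus_hostlist_for_slurm appends to abaqus_v6.env has the following form (the hostnames and CPU counts shown are purely illustrative):

mp_host_list=[['node001', 48], ['node002', 48]]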

The SLURM log is written to: slurm-<your job id>.out

The log of the solver is written to: c2.msg
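While the job is running, you can follow its progress with standard tools, for example:

tail -f c2.msg       # follow the solver message file
squeue -u $USER      # check the state of your jobs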

The small number of elements in this example does not allow the use of 2x96 cores; hence, 2x48 cores are utilized here. Typically, however, if there is sufficient memory per core, we recommend using all physical cores per node (e.g., on standard96: #SBATCH --ntasks-per-node=96). Please refer to Compute node partitions to see the number of cores on your selected partition and machine (Lise, Emmy).
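To look up the CPU count per node of a partition directly on the system, sinfo can help (the format string below prints the partition name, node count, and CPUs per node):

sinfo -p standard96 -o "%P %D %c"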

Single Node Processing

#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=1  ## abaqus/2016 and abaqus/2017 do not run on more than one node
#SBATCH --ntasks-per-node=96
#SBATCH -p standard96:test
#SBATCH --job-name=abaqus.c2

module load abaqus/2016

# host list:
echo "SLURM_NODELIST:  $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env exists already in the case folder, it will append the line with the hostlist.

### ABAQUS parallel execution
abq2016 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double

echo '#################### ABAQUS finished ############'