NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulations of large biomolecular systems using force fields. The code was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign.
Only members of the namd user group have access to the NAMD executables provided by HLRN. To have their user ID included in this group, users can send a message to their consultant or to HLRN support.
The environment modules shown in the table below are available to include NAMD executables in the directory search path. To see what is installed and what the current default version of NAMD at HLRN is, run module avail namd.
NAMD is a parallel application. It is recommended to use mpirun as the job starter for NAMD at HLRN. An MPI module providing the mpirun command needs to be loaded before the NAMD module.
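For example, assuming an Intel MPI module named impi (the actual module names and versions are those listed by module avail), the load order would be:

```bash
# Load an MPI module first so that mpirun is found on the PATH,
# then the NAMD module. Module names/versions are placeholders;
# check "module avail" for what is actually installed.
module load impi
module load namd
```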
| NAMD version | NAMD modulefile | NAMD requirements |
|--------------|-----------------|-------------------|
File I/O Considerations
During run time, only a few files are involved in NAMD's I/O activities. As long as standard MD runs are carried out, this is unlikely to impose stress on the Lustre file system ($WORK), provided one condition is met: file metadata operations (file rename) must not occur at too short time intervals. First and foremost, this applies to the management of NAMD restart files. Instead of having a new set of restart files created several times per second, the NAMD input parameter restartfreq should be chosen such that they are written only every 5 minutes, or at even longer intervals.

For NAMD replica-exchange runs the situation can be more severe. Here we have already observed jobs where heavy metadata I/O on the individual "colvars.state" files located in every replica's subdirectory overloaded our Lustre metadata servers, resulting in a severe slowdown of the entire Lustre file system. Users are advised to set the corresponding NAMD input parameters such that each replica performs metadata I/O on these files at intervals no shorter than really needed or, where affordable, such that these files are written only at the end of the run.
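To translate such an interval into a restartfreq step count, the observed simulation speed can be used. The following NAMD input-file excerpt is a sketch; the assumed speed of about 100 MD steps per wall-clock second and the resulting value are illustrative, not HLRN defaults:

```
# Sketch of a NAMD input file excerpt (values are illustrative).
# At an assumed speed of ~100 MD steps per second, 30000 steps take
# roughly 5 minutes, so restart files are rewritten (and renamed)
# at most about once every 5 minutes.
restartfreq    30000
```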
Job Script Examples
For Intel Skylake compute nodes (Göttingen only) – simple case of a NAMD job using a total of 200 CPU cores distributed over 5 nodes running 40 tasks each
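A minimal sketch of such a job script follows. The partition name, wall-time limit, module names, and file names are placeholders to be adapted; only the node and task counts come from the description above:

```bash
#!/bin/bash
#SBATCH --time=12:00:00          # placeholder wall-time limit
#SBATCH --partition=medium40     # placeholder Skylake partition name
#SBATCH --nodes=5
#SBATCH --tasks-per-node=40      # 5 nodes x 40 tasks = 200 cores

module load impi                 # MPI module providing mpirun (placeholder name)
module load namd                 # NAMD module (version as installed)

mpirun namd2 myinput.conf > myoutput.log
```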
For Intel Cascade Lake compute nodes – simple case of a NAMD job using a total of 960 CPU cores distributed over 10 nodes running 96 tasks each
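The corresponding sketch for Cascade Lake nodes differs only in the node geometry; again, the partition name, wall-time limit, module names, and file names are placeholders:

```bash
#!/bin/bash
#SBATCH --time=12:00:00          # placeholder wall-time limit
#SBATCH --partition=standard96   # placeholder Cascade Lake partition name
#SBATCH --nodes=10
#SBATCH --tasks-per-node=96      # 10 nodes x 96 tasks = 960 cores

module load impi                 # MPI module providing mpirun (placeholder name)
module load namd                 # NAMD module (version as installed)

mpirun namd2 myinput.conf > myoutput.log
```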
A set of input files for a small and short replica-exchange simulation is included with the NAMD installation. A description can be found in the NAMD User's Guide. The following job script executes this replica-exchange simulation on 2 nodes using 8 replicas (24 tasks per replica):
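This is a sketch under the same assumptions as above (placeholder partition, time limit, and module names); the +replicas and +stdout options and the output directory layout follow the replica-exchange example shipped with NAMD:

```bash
#!/bin/bash
#SBATCH --time=12:00:00            # placeholder wall-time limit
#SBATCH --partition=standard96     # placeholder partition name
#SBATCH --nodes=2
#SBATCH --tasks-per-node=96        # 2 x 96 = 192 tasks = 8 replicas x 24 tasks

module load impi                   # MPI module providing mpirun (placeholder name)
module load namd                   # NAMD module (version as installed)

# One log subdirectory per replica, as expected by the NAMD example
mkdir -p output/{0..7}

# job0.conf is the input file of the replica-exchange example from the
# NAMD distribution; %d is expanded to the replica index by NAMD.
mpirun namd2 +replicas 8 job0.conf +stdout output/%d/job0.%d.log
```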