TURBOMOLE is a computational chemistry program that implements various quantum chemistry methods (ab initio methods). It was initially developed at the University of Karlsruhe.
TURBOMOLE features all standard methods as well as DFT code for molecules and solids, excited states and spectra using DFT or coupled-cluster methods. Some of the programs can be used with MPI parallelisation.
Read more about it on the developer's homepage.
An overview of the documentation can be found here.
The vendor also provides a list of utilities.
Only members of the tmol user group can use the TURBOMOLE software. To have their user ID included in this group, users can send a message to their consultant or to HLRN support.
| Module | Description |
|---|---|
| turbomole/tmolex2022 | TmoleX GUI, includes Turbomole 7.6 CLI |
Load the necessary modulefiles. TURBOMOLE has two execution modes. By default it uses the SMP version (single node), but it can also run as MPI on multiple nodes of the cluster. To run the MPI version, the variable PARA_ARCH needs to be set to MPI. If it is empty, does not exist, or is set to SMP, the SMP version is used.
Example for the MPI version:

export PARA_ARCH=MPI
module load turbomole/7.6
TmoleX is a GUI for TURBOMOLE that allows you to build workflows. It also aids in building the initial structure and in visualising the results.
To run the TmoleX GUI you must connect with X11 forwarding (ssh -Y ...).
module load turbomole/tmolex2022
Job Script Examples
Note that some calculations run only in a certain execution mode; please consult the manual. All execution modes are listed here.
1. Serial version. The calculation runs serially and uses only one node.
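A minimal Slurm batch script for a serial run might look like the following sketch. The partition name, walltime, and the choice of the dscf binary are assumptions; adjust them to your site and calculation.

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=12:00:00          # assumed walltime; adjust to your job
#SBATCH --partition=standard96   # partition name is site-specific (assumption)

module load turbomole/7.6

# Run a serial SCF calculation; dscf reads the control file
# prepared beforehand (e.g. with define) in the working directory.
dscf > dscf.out 2>&1
```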
2. SMP version. It can only run on one node; use one node with all of its CPUs:
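A sketch of an SMP job script, assuming Slurm, a 96-core node, and a site-specific partition name. PARNODES is the environment variable TURBOMOLE uses for the number of parallel workers; setting PARA_ARCH=SMP here is redundant (it is the default) but makes the mode explicit.

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96     # all CPUs of one node (96 assumed)
#SBATCH --time=12:00:00
#SBATCH --partition=standard96   # site-specific assumption

# SMP is the default execution mode; set it explicitly for clarity.
export PARA_ARCH=SMP
module load turbomole/7.6

# Number of parallel workers on this node.
export PARNODES=96

dscf > dscf.out 2>&1
```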
3. MPI version. The MPI binaries have a _mpi suffix. So that the same binary names as in the SMP version can be used, the PATH is extended with TURBODIR/mpirun_scripts/; this directory symlinks the standard names to the _mpi binaries. Here we run on 8 nodes with all 96 cores per node:
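A sketch of an MPI job script for 8 nodes with 96 cores each, again assuming Slurm and a site-specific partition. PARA_ARCH=MPI is exported before loading the module so the mpirun_scripts wrappers end up in the PATH; PARNODES sets the total number of MPI processes.

```shell
#!/bin/bash
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=96     # 96 cores per node assumed
#SBATCH --time=12:00:00
#SBATCH --partition=standard96   # site-specific assumption

# Select the MPI execution mode before loading the module.
export PARA_ARCH=MPI
module load turbomole/7.6

# Total number of MPI processes: 8 nodes x 96 cores.
export PARNODES=$((SLURM_NNODES * 96))

dscf > dscf.out 2>&1
```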
4. OpenMP version. Here we need to set the OMP_NUM_THREADS variable. Again, 8 nodes with 96 cores each are used. Use the standard binaries for OpenMP, not the _mpi binaries. If OMP_NUM_THREADS is set, the OpenMP version is used.
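A sketch of an OpenMP job script following the text above (8 nodes, 96 cores per node), assuming Slurm and a site-specific partition. One task per node is requested and OMP_NUM_THREADS is derived from the Slurm CPU allocation.

```shell
#!/bin/bash
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=1      # one process per node
#SBATCH --cpus-per-task=96       # 96 cores per node assumed
#SBATCH --time=12:00:00
#SBATCH --partition=standard96   # site-specific assumption

module load turbomole/7.6

# Setting OMP_NUM_THREADS selects the OpenMP version;
# use the standard binaries, not the _mpi ones.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

dscf > dscf.out 2>&1
```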