The HLRN alliance jointly operates a distributed supercomputer system hosted at two sites, Georg-August-Universität Göttingen and the Zuse Institute Berlin (ZIB). Delivery of phases 1 and 2 of the HLRN-IV system by Atos/Bull was completed in 2019. The complete HLRN-IV system comprises more than 230,000 cores with a total peak performance of about 16 PFlop/s.
The HLRN complex in Berlin at ZIB is named after Lise Meitner and contains 1,270 compute nodes with 121,920 compute cores.
At the Göttingen University site, phase 1 of the HLRN system, named Emmy after Emmy Noether, has been in operation since October 2018. The second phase was added in October 2020.
Three different file systems are available at each of the two sites, Göttingen and Berlin.
For large data, a WORK file system is available at both complexes.
The WORK file system is based on DDN EXAScaler and runs a DDN Lustre file system. Each site has two DDN ES18K embedded storage systems. The WORK file system has no automatic backups.
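Because WORK is not backed up, important results should be copied to a backed-up location such as HOME. The following Python sketch illustrates such a copy step; the WORK and HOME paths used here are hypothetical and must be adapted to the actual project directories.

    import shutil
    from pathlib import Path

    # Hypothetical paths: adjust to the actual WORK and HOME directories of your project.
    work_results = Path("/scratch/myproject/results")   # WORK: large and fast, but not backed up
    home_backup = Path.home() / "results_backup"        # HOME: backed up, with daily snapshots

    # Copy the result tree to HOME; dirs_exist_ok allows the script to be re-run.
    shutil.copytree(work_results, home_backup, dirs_exist_ok=True)
    print(f"Copied {work_results} -> {home_backup}")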
The HOME file system has the following characteristics.
The HOME file system is based on DDN GRIDScaler (Spectrum Scale) with DDN SFA7700X block storage. It supports disk and inode quotas for users and groups, which are checked on all nodes for every file access. In addition, the usual quota mechanisms, e.g. hard and soft limits and grace periods, are available at the user, group and fileset level. The HOME file system is exported via NFS to the login and compute nodes. It is continuously backed up, and daily snapshots are available.
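Because quotas are checked on every file access, it can be helpful to see how much space is already in use. The short Python sketch below reports overall usage of the file system holding the home directory; it assumes HOME is mounted at the user's home directory and does not replace the site's quota tools, which also report inode limits and grace periods.

    import shutil
    from pathlib import Path

    # Report overall usage of the file system holding the home directory.
    home = Path.home()
    usage = shutil.disk_usage(home)
    gib = 2 ** 30
    print(f"{home}: {usage.used / gib:.1f} GiB used of {usage.total / gib:.1f} GiB "
          f"({usage.free / gib:.1f} GiB free)")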
For long-term data handling, an archive system based on tape storage is operated at both sites. At ZIB the archive is a Hierarchical Storage Management (HSM) system with Sun StorEdge SAM-FS as management software. In Göttingen a tape library with StorNext file system software is operated.
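Tape-backed archive systems generally handle a small number of large files much better than many small files, so data is typically bundled before it is moved to the archive. The Python sketch below packs a result directory into a single compressed tar file; the WORK and archive paths are hypothetical and depend on the respective site.

    import tarfile
    from pathlib import Path

    # Hypothetical source and destination paths; adjust to the actual site layout.
    results_dir = Path("/scratch/myproject/run_042")
    archive_path = Path("/archive/myproject/run_042.tar.gz")

    # Pack the whole result directory into one compressed archive suitable for tape storage.
    archive_path.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(results_dir, arcname=results_dir.name)
    print(f"Wrote {archive_path}")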