The HLRN alliance jointly operates a distributed supercomputer system hosted at two sites: Georg-August-Universität Göttingen and Zuse Institute Berlin (ZIB). In September 2018, phase 1 of the HLRN-IV system, supplied by Atos/Bull, was put into operation. After the successful installation of phase 2, the total HLRN-IV system will comprise more than 200,000 cores with a total peak performance of about 16 PFlop/s.
The HLRN complex at ZIB in Berlin is named after Lise Meitner and contains 1,270 compute nodes with 121,920 compute cores.
At Göttingen University, the phase 1 HLRN system, named Emmy (after Emmy Noether), has been in operation since October 2018.
Phase 2 of the HLRN system in Göttingen will start operation in 2020.
Three different file systems are available at each of the two sites, Göttingen and Berlin.
For large data, a WORK file system is available at both complexes. The WORK file system is based on DDN EXAScaler and runs a DDN Lustre file system; each site operates two DDN ES18K embedded storage systems. The WORK file system has no automatic backups.
The HOME file system has the following characteristics. It is based on a DDN GRIDScaler (IBM Spectrum Scale) with DDN SFA7700X block storage. The file system supports disk and inode quotas for groups and users, which are checked on all nodes for every file access. In addition, the usual quota tools (hard and soft limits, grace periods) are available at the user, group, and fileset level. The HOME file system is exported via NFS to the login and compute nodes. It is continuously backed up, and daily snapshots are available.
For long-term data storage, both sites operate an archive system based on tape storage. At ZIB, the archive is a Hierarchical Storage Management (HSM) system using Sun StorEdge SAM-FS as management software. In Göttingen, a tape library is operated with StorNext file system software.