Test cluster

The RRZE test and benchmark cluster is an environment for porting software to new CPU architectures and running benchmark tests. It comprises a variety of nodes that differ in processor type, clock speed, memory speed, memory capacity, number of CPU sockets, and so on. There is no high-speed network, so MPI parallelization is restricted to a single node. The usual NFS file systems are available.

This is a testing ground. Any job may be canceled without prior notice. For further information about proper usage, please contact HPC@RRZE.

This is a quick overview of the systems including their host names (frequencies are nominal values) – NDA systems are not listed:

  • aurora1: Single Intel Xeon „Skylake“ Gold 6126 CPU (12 cores + SMT) @ 2.60GHz.
    Accelerators: 2x NEC Aurora „TSUBASA“ 10B (48 GiB RAM)
  • broadep2: Dual Intel Xeon „Broadwell“ CPU E5-2697 v4 (18 cores + SMT) @ 2.30GHz, 128 GiB RAM
  • casclakesp2: Dual Intel Xeon „Cascade Lake“ Gold 6248 CPU (20 cores + SMT) @ 2.50GHz, 384 GiB RAM
  • hasep1: Dual Intel Xeon „Haswell“ E5-2695 v3 CPU (14 cores + SMT) @ 2.30GHz, 64 GiB RAM
  • interlagos1: Dual AMD Opteron 6276 „Interlagos“ CPU (16 cores) @ 2.3 GHz, 64 GiB RAM.
    Accelerator: AMD Radeon VII GPU (16 GiB HBM2)
  • ivyep1: Dual Intel Xeon „Ivy Bridge“ E5-2690 v2 CPU (10 cores + SMT) @ 3.00GHz, 64 GiB RAM
  • medusa: Dual Intel Xeon „Cascade Lake“ Gold 6246 CPU (12 cores + SMT) @ 3.30GHz, 192 GiB RAM.
    Accelerators:
    – NVIDIA GeForce RTX 2070 SUPER (8 GiB GDDR6)
    – NVIDIA GeForce RTX 2080 SUPER (8 GiB GDDR6)
    – NVIDIA Quadro RTX 5000 (16 GiB GDDR6)
    – NVIDIA Quadro RTX 6000 (24 GiB GDDR6)
  • naples1: Dual AMD EPYC 7451 „Naples“ CPU (24 cores + SMT) @ 2.3 GHz, 128 GiB RAM
  • phinally: Dual Intel Xeon „Sandy Bridge“ CPU E5-2680 (8 cores + SMT) @ 2.70GHz, 64 GiB RAM
  • rome1: Single AMD EPYC 7452 „Rome“ CPU (32 cores + SMT) @ 2.35 GHz, 128 GiB RAM
  • skylakesp2: Intel Xeon „Skylake“ Gold 6148 CPU (20 cores + SMT) @ 2.40GHz, 96 GiB RAM
  • summitridge1: AMD Ryzen 7 1700X CPU (8 cores + SMT), 32 GiB RAM
  • warmup: Dual Cavium/Marvell „ThunderX2“ (ARMv8) CN9980 (32 cores + 4-way SMT) @ 2.20 GHz, 128 GiB RAM

Technical specifications of all more or less recent GPUs available at RRZE (either in the Testcluster or in TinyGPU):

GPU | Memory | RAM BW [GB/s] | Ref. clock [GHz] | Cores (Shader/TMUs/ROPs) | TDP [W] | SP [TFlop/s] | DP [TFlop/s] | Host | Host CPU (base clock frequency)
Nvidia Geforce GTX980 | 4 GB GDDR5 | 224 | 1.126 | 2048/128/64 | 180 | 4.98 | 0.156 | tg00x | Intel Xeon Nehalem X5550 (4 cores, 2.67 GHz)
Nvidia Geforce GTX1080 | 8 GB GDDR5 | 320 | 1.607 | 2560/160/64 | 180 | 8.87 | 0.277 | tg03x | Intel Xeon Broadwell E5-2620 v4 (8 cores, 2.10 GHz)
Nvidia Geforce GTX1080Ti | 11 GB GDDR5 | 484 | 1.480 | 3584/224/88 | 250 | 11.34 | 0.354 | tg04x | Intel Xeon Broadwell E5-2620 v4 (8 cores, 2.10 GHz)
Nvidia Geforce RTX2070Super | 8 GB GDDR6 | 448 | 1.605 | 2560/160/64 | 215 | 9.06 | 0.283 | medusa | Intel Xeon Cascadelake Gold 6246 (12 cores, 3.30 GHz)
Nvidia Quadro RTX5000 (active) | 16 GB GDDR6 | 448 | 1.620 | 3072/192/64 | 230 | 11.15 | 0.348 | medusa | Intel Xeon Cascadelake Gold 6246 (12 cores, 3.30 GHz)
Nvidia Geforce RTX2080Super | 8 GB GDDR6 | 496 | 1.650 | 3072/192/64 | 250 | 11.15 | 0.348 | medusa | Intel Xeon Cascadelake Gold 6246 (12 cores, 3.30 GHz)
Nvidia Geforce RTX2080Ti | 11 GB GDDR6 | 616 | 1.350 | 4352/272/88 | 250 | 13.45 | 0.420 | tg06x | Intel Xeon Skylake Gold 6134 (8 cores, 3.20 GHz)
Nvidia Quadro RTX6000 (active) | 24 GB GDDR6 | 672 | 1.440 | 4608/288/96 | 260 | 16.31 | 0.510 | medusa | Intel Xeon Cascadelake Gold 6246 (12 cores, 3.30 GHz)
Nvidia Tesla V100 (PCIe, passive) | 32 GB HBM2 | 900 | 1.245 | 5120 shaders | 250 | 14.13 | 7.066 | tg07x | Intel Xeon Skylake Gold 6134 (8 cores, 3.20 GHz)
AMD Radeon VII | 16 GB HBM2 | 1024 | 1.400 | 3840/240/64 | 300 | 13.44 | 3.360 | interlagos1 | AMD Interlagos Opteron 6276

The remainder of this page covers access, the user environment, file systems, and batch processing.

Access, User Environment, and File Systems

Access to the machine

Note that access to the test cluster is restricted: if you want access, you will need to contact hpc@rrze. To get access to the NDA machines, you additionally have to provide a short (!) description of what you want to do there.

From within the FAU network, users can connect via SSH to the frontend
testfront.rrze.fau.de
If you need access from outside of FAU, you usually have to connect to a dialog server first, e.g. cshpc.rrze.fau.de, and then ssh to testfront from there.
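
A typical two-hop login from outside the FAU network could then look like this (yourusername is just a placeholder for your HPC account name):

ssh yourusername@cshpc.rrze.fau.de
ssh testfront.rrze.fau.de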

While it is possible to ssh directly to a compute node, a user is only allowed to do this while they have a batch job running there. When all batch jobs of a user on a node have ended, all of their processes, including any open shells, will be killed automatically.
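
Before logging in to a compute node, you can check whether you still have a job running there, for example with squeue (broadep2 is just an example host name):

squeue -u $USER
ssh broadep2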

The login nodes and most of the compute nodes run Ubuntu 18.04. As on most other RRZE HPC systems, a modules environment is provided to facilitate access to software packages. Type „module avail“ to get a list of available packages. Note that, depending on the node, the modules may be different due to the wide variety of architectures. Expect inconsistencies. In case of questions, contact hpc@rrze.
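
For example, listing and loading a compiler module in a session on a compute node could look like this (the module name intel is only an assumption; the available names and versions differ between nodes):

module avail
module load intel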

File Systems

The nodes have local hard disks of very different capacities and speeds. These are not production systems, so do not expect a production environment.

When connecting to the frontend, you’ll find yourself in your regular RRZE $HOME directory (/home/hpc/...). The quotas there are relatively tight, so it will most probably be too small for the inputs and outputs of your jobs. It does, however, offer a lot of nice features, like fine-grained snapshots, so use it for „important“ stuff, e.g. your job scripts or the source code of the program you’re working on. See the HPC file system page for a more detailed description of these features and of the other available file systems, e.g. $WORK.

Batch processing

As with all production clusters at RRZE, resources are controlled through a batch system, SLURM in this case. Due to the broad spectrum of architectures in the test cluster, it is usually advisable to compile on the target node using an interactive SLURM job (see below).

There is a „work“ queue and an „nda“ queue, both with up to 24 hours of runtime. Access to the „nda“ queue is restricted because the machines tied to this queue are pre-production hardware or otherwise special, so benchmark results must not be published without further consideration.

Batch jobs can be submitted on the frontend. The default job runtime is 10 minutes.

The currently available nodes can be listed using:

sinfo -o "%.14N %.9P %.11T %.4c %.8z %.6m %.35f"

To select a node, you can either use the host name or a feature name from sinfo (a sample job script is sketched after the following commands):

  • sbatch --nodes=1 --constraint=featurename --time=hh:mm:ss --export=NONE jobscript
  • sbatch --nodes=1 --nodelist=hostname --time=hh:mm:ss --export=NONE jobscript
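
A minimal job script (jobscript in the commands above) could look like the following sketch; the feature name broadwell, the module name intel, and the executable ./a.out are assumptions that have to be adapted:

#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --constraint=broadwell   # alternatively pin a host, e.g. --nodelist=broadep2
#SBATCH --time=01:00:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV           # do not propagate the submission environment
module load intel                # example; load whatever your code needs on that node
./a.out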

Submitting an interactive job:

srun --nodes=1 --nodelist=hostname --time=hh:mm:ss --export=NONE --pty /bin/bash -l
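
For example, a one-hour interactive shell on broadep2 (one of the hosts listed above) can be requested with:

srun --nodes=1 --nodelist=broadep2 --time=01:00:00 --export=NONE --pty /bin/bash -l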

To get access to performance counter registers and other restricted parts of the hardware (so that likwid-perfctr works as intended), use the option -C hwperf.
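
For instance, the interactive example above could be extended with this constraint (broadep2 again being just an example host):

srun --nodes=1 --nodelist=broadep2 --constraint=hwperf --time=01:00:00 --export=NONE --pty /bin/bash -l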

By default, SLURM exports the environment of the shell where the job was submitted. If this is not desired, use --export=NONE and unset SLURM_EXPORT_ENV. Otherwise, problems may arise on nodes that do not run Ubuntu.
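
In a job script, this corresponds to the following two lines (also used in the sketch above):

#SBATCH --export=NONE
unset SLURM_EXPORT_ENV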

Please see the batch system description for further details.