Scientists from many different fields use a wide variety of applications on the HPC systems at RRZE. The HPC team at RRZE does not have hands-on experience with most of these applications. Nevertheless, we try to collect useful information and tips & tricks for key applications on the following pages. These pages will not teach you how to use the applications; they only collect information specific to their usage on the HPC systems of RRZE.
Central installation of software
As the parallel computers of RRZE are operated diskless, all system software has to reside in a RAM disk, i.e. in main memory. Therefore, only a limited number of packages from the Linux distribution can be installed on the compute nodes, and the compute nodes only contain a subset of the packages installed on the login nodes. Most application software, but also libraries, will therefore be installed in /apps and made available as modules. Multiple versions of a single software package can be provided in this way, too.
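As a sketch, working with modules typically looks like the following; the GROMACS module name is only an illustration, so check `module avail` for what is actually installed:

```shell
# List the centrally installed software provided as modules
module avail

# Load a specific version of a package (name/version are examples)
module load gromacs/2021.5

# Show what is currently loaded, and unload again
module list
module unload gromacs/2021.5
```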
As a general rule: software will be installed centrally
- if there are multiple groups which benefit from the software, or
- if the software is very easy to install.
In both cases, RRZE will increasingly require that at least one group acts as mentor for the software, i.e. provides RRZE with simple input files for validating the installation; moreover, that group has to provide limited support to help other groups get started with the software.
Notes on specific software
Molecular dynamics for chemistry, life science, and material science
- Amber & AmberTools – suite of biomolecular simulation programs. Here, the term "Amber" does not refer to the set of molecular mechanics force fields for the simulation of biomolecules but to the package of molecular simulation programs consisting of the AmberTools (sander and many more) and Amber (pmemd).
- DL_POLY / DL_POLY_CLASSIC – general-purpose classical molecular dynamics simulation software.
Note that DL_POLY with its fine-grained I/O can easily overload $FASTTMP. RRZE will kill or throttle any jobs which abuse the parallel file system.
- GROMACS – versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the non-bonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.
- IMD – software package for classical molecular dynamics simulations.
- LAMMPS – classical molecular dynamics code with a focus on materials modeling. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
- CPMD – parallelized plane wave / pseudopotential implementation of Density Functional Theory, particularly designed for ab-initio molecular dynamics.
- OpenMolcas – a quantum chemistry software package using the multiconfigurational approach to the electronic structure.
- ORCA – an ab initio quantum chemistry program package that contains modern electronic structure methods including density functional theory, many-body perturbation theory, coupled cluster, multireference methods, and semi-empirical quantum chemistry methods. Its main fields of application are larger molecules, transition metal complexes, and their spectroscopic properties.
- Quantum Espresso – integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
- TURBOMOLE – program package for ab initio electronic structure calculations.
- VASP – Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
Not installed centrally owing to license restrictions.
Computational fluid dynamics (CFD), multiphysics and FE, climatology and glaciology
- ANSYS CFX – commercial CFD software tool with a special focus on turbomachinery, such as pumps, fans, compressors and gas and hydraulic turbines.
- ANSYS Fluent – commercial CFD software to model flow, turbulence, heat transfer and reactions in all sorts of applications.
- Simcenter STAR-CCM+ – commercial software tool for CFD and, more generally, computational continuum mechanics (originally by CD-adapco, nowadays Siemens PLM).
- OpenFOAM.org/OpenFOAM.com – leading free, open-source software for computational fluid dynamics but also an extensive C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems.
- Abaqus – software suite for finite element analysis and computer-aided engineering.
Currently no (known) active users on the HPC systems.
- COMSOL Multiphysics – general-purpose simulation software for modeling designs, devices, and processes in all fields of engineering, manufacturing, and scientific research.
- Elmer/Ice – open-source Finite Element software for ice sheet, glaciers, and ice flow modelling.
Avoid MATC expressions in the SIF input.
- WRF – Weather Research and Forecasting (WRF) is a next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications.
Mathematical and statistical software
- R – "Microsoft R Open" is installed on Woody. It is an enhanced distribution of R.
Additional packages available from MRAN can be installed centrally upon request.
Machine learning and big data
- TensorFlow & co
RRZE currently does not provide any central installation for three reasons: (a) the software changes very rapidly; (b) NVIDIA imposes very strict conditions on cuDNN – you can only get a personal license; (c) no group volunteered as mentor so far.
- Software for the astrophysics community is maintained by ECAP.
Virtualization and containers
- Singularity is available on all HPC systems. Thus, you can run your own environment in a Docker-like fashion.
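As a sketch, a typical Singularity workflow on the clusters might look like this (the image name is only an example):

```shell
# Convert a Docker image into a local Singularity image file (.sif)
singularity pull docker://ubuntu:20.04

# Run a single command inside the container
singularity exec ubuntu_20.04.sif cat /etc/os-release

# Or start an interactive shell inside the container
singularity shell ubuntu_20.04.sif
```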
Debuggers and performance analysis tools
- DDT – parallel debugger.
- TotalView – parallel debugger. See https://hpc-wiki.info/hpc/TotalView.
- LIKWID – tool suite for performance-oriented programmers and administrators. The name LIKWID stands for "Like I Knew What I'm Doing". See https://hpc-wiki.info/hpc/Likwid and https://github.com/RRZE-HPC/likwid/wiki.
- Intel Trace Collector / Analyzer – tool for profiling and checking the correctness of MPI communication. See https://hpc-wiki.info/hpc/Intel_Trace_Collector/Analyzer.
- Intel Vtune & friends (advisor, inspector, performance_snapshots, vtune_amplifier) – tools for detailed performance profiling. See https://hpc-wiki.info/hpc/Intel_VTune and https://hpc-wiki.info/hpc/Intel_Advisor.
Only available on special machines – contact firstname.lastname@example.org for details.
Batch processing and scripting
- trapping signals in bash (job) scripts
Signals like SIGUSR1 are not processed while a process is running as detailed in the bash manpage: If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes. When bash is waiting for an asynchronous command via the wait builtin, the reception of a signal for which a trap has been set will cause the wait builtin to return immediately with an exit status greater than 128, immediately after which the trap is executed.
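The following self-contained sketch illustrates this behavior: the payload runs in the background and the script waits with the wait builtin, so a trapped SIGUSR1 is handled immediately and wait returns with a status greater than 128 (the sleep commands merely stand in for real work):

```shell
#!/bin/bash
# Handler just records that the signal arrived
cleanup() { caught=1; echo "caught SIGUSR1"; }
trap cleanup USR1

sleep 30 &             # stand-in for the real compute step
payload=$!

( sleep 1; kill -USR1 $$ ) &   # simulate the batch system sending SIGUSR1

wait $payload          # returns immediately (>128) when the signal arrives
rc=$?
echo "wait returned $rc"

kill $payload 2>/dev/null      # tidy up the background payload
wait 2>/dev/null               # reap remaining background jobs
```

Had the script run `sleep 30` in the foreground instead, the trap would only have fired after the full 30 seconds.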
- specific clock frequency in SLURM jobs
By default, the compute nodes at RRZE run with turbo mode enabled and the ondemand governor. For benchmarking (or if an application is known to be memory bound), a fixed frequency may be desired. On the PBS/Torque-based clusters, the frequency can be specified by a special node property (e.g. :f2.2) and is effective on all nodes once the job starts running. With SLURM as batch system (i.e. currently on Meggie), the frequency (in kHz) can be specified using the --cpu-freq option of srun; however, the frequency will only be set once srun is called (directly or indirectly). This can be a problem for single-node OpenMP applications, or on the first node of an MPI application started with mpirun, which both still run with turbo mode and the ondemand governor. likwid-setFreq is not supported on the clusters.
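A minimal SLURM job script sketch using srun's --cpu-freq option; node count, walltime, frequency value, and application name are placeholders:

```shell
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Pin the core clock to 2.2 GHz; the value is given in kHz and only
# takes effect for processes launched through srun.
srun --cpu-freq=2200000 ./my_application
```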
Python
You have to distinguish between the Python installation from the Linux distribution, which is in the default path, and the one available through the python/[2.7|3.x]-anaconda modules. The system Python only provides limited functionality (especially on the compute nodes). Some software packages (e.g. AMBER) come with their own Python installation.
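In practice, selecting the Anaconda-based Python might look like this; the exact module version string is an example, so check `module avail python` on the system:

```shell
# Load the Anaconda-based Python module instead of the system Python
module load python/3.8-anaconda

# Verify which interpreter is now first in $PATH
which python3
python3 --version

# Packages from the Anaconda distribution are then available
python3 -c "import numpy; print(numpy.__version__)"
```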