OpenFOAM

OpenFOAM (for "Open-source Field Operation And Manipulation") is a C++ toolbox for the development of customized numerical solvers and pre-/post-processing utilities for the solution of continuum mechanics problems, most prominently computational fluid dynamics (CFD). It contains solvers for a wide range of problems, from simple laminar flows to DNS or LES, including reactive turbulent flows. It provides a framework for manipulating fields and solving general partial differential equations on unstructured grids using finite-volume methods, and is therefore suitable for complex geometries and a wide range of configurations and applications.

There are three main variants of OpenFOAM that are released as free and open-source software under the GPLv3 license: ESI OpenFOAM (openfoam.com), the OpenFOAM Foundation version (openfoam.org), and Foam-Extend.

Availability / Target HPC systems

We provide modules for some major OpenFOAM versions, mostly on request by specific groups or users. If you have a request for a new version, please contact support-hpc@fau.de. Please note that we only provide modules for fully released versions that will be used by more than one user. If you need a specific custom configuration or version, please consider building it yourself; installation guides are available from the respective OpenFOAM distributors.

The installed OpenFOAM versions may differ between the HPC clusters. You can check the available versions with module avail openfoam.
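A minimal check on a login shell might look as follows; the version string is only a placeholder and will differ between clusters:

# list the OpenFOAM modules installed on this cluster
module avail openfoam

# load one of the listed versions (placeholder version string)
module load openfoam/XXXX

# verify that the OpenFOAM environment is active
which blockMesh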

Production jobs should be run on the parallel HPC systems in batch mode. It is NOT permitted to run computationally intensive OpenFOAM simulations or serial/parallel post-processing sessions with large memory consumption on the login nodes.

Notes

  • By default, OpenFOAM produces a large number of small files – one per processor, per time step, and per field. The parallel file system ($FASTTMP) is not designed for such a finely grained file/folder structure. More recent versions of OpenFOAM support collated I/O, which produces considerably less problematic output; see the sketch after this list for how to enable it.
  • ParaView is used for post-processing and is also available via the modules system on the HPC cluster. However, keep an eye on the main memory requirements of this visualization, especially on the frontends!
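As a sketch of how collated I/O can typically be enabled (assuming a sufficiently recent OpenFOAM version; the exact mechanism differs between variants and releases):

# option 1: pass the file handler to every OpenFOAM utility and solver on the command line
decomposePar -fileHandler collated
mpirun icoFoam -parallel -fileHandler collated > logfile

# option 2: recent versions also honour an environment variable, so it can be set once per job
export FOAM_FILEHANDLER=collated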

Sample job scripts

All job scripts have to contain the following information:

  • Resource definitions for the queuing system (see the batch processing documentation for details)
  • Loading of the OpenFOAM environment module
  • Start command for the parallel execution of the chosen solver

For the meggie/Slurm batch system, mpirun picks up the parameters (nodes, tasks-per-node) that you specified in the header of your batch file, so you do not have to specify them again in the mpirun call (see also MPI on meggie). For this to work correctly, the total number of MPI tasks (nodes times tasks-per-node) must be equal to numberOfSubdomains in system/decomposeParDict! The first sample script below targets a Torque/PBS batch system, the second one Slurm on meggie.
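As an illustration, the match between the Slurm allocation and the decomposition can be checked at the start of a job script; foamDictionary ships with recent OpenFOAM versions (otherwise inspect system/decomposeParDict manually):

# read the number of subdomains from the case
NSUB=$(foamDictionary -entry numberOfSubdomains -value system/decomposeParDict)

# SLURM_NTASKS equals nodes times tasks-per-node as requested in the job header
if [ "${NSUB}" -ne "${SLURM_NTASKS}" ]; then
    echo "numberOfSubdomains (${NSUB}) does not match the number of MPI tasks (${SLURM_NTASKS})" >&2
    exit 1
fi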


#!/bin/bash -l
#PBS -lnodes=4:ppn=40,walltime=24:00:00
#PBS -N my-job-name
#PBS -j eo

# number of physical cores to use per node
PPN=20
# load environment module
module load openfoam/XXXX

# change to the working directory
cd ${PBS_O_WORKDIR}

# count the number of nodes assigned to the job
NODES=$(uniq ${PBS_NODEFILE} | wc -l)
# calculate the number of cores actually used
CORES=$(( ${NODES} * ${PPN} ))

# Please insert your preferred solver executable here!
mpirun -np ${CORES} -npernode ${PPN} icoFoam -parallel -fileHandler collated > logfile

#!/bin/bash -l
#SBATCH --job-name=my-job-name
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=20                  # 20 physical cores per node on meggie
#SBATCH --time=24:00:00 
#SBATCH --export=NONE 

# load environment module 
module load openfoam/XXXX 

# --export=NONE above requires unsetting SLURM_EXPORT_ENV so that the loaded environment is passed on to the MPI processes
unset SLURM_EXPORT_ENV

# Please insert your preferred solver executable here!
mpirun icoFoam -parallel -fileHandler collated > logfile
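The case has to be decomposed before the parallel solver run starts. A typical workflow might look like the following sketch; my-job-script.sh is a placeholder for one of the scripts above:

module load openfoam/XXXX

# decompose the case according to system/decomposeParDict
# (add -fileHandler collated here as well when using collated I/O)
decomposePar

# submit the parallel solver run
sbatch my-job-script.sh

# after the job has finished, reconstruct the fields for post-processing if required
reconstructPar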

Further information

Mentors

  • please volunteer!