ANSYS CFX

ANSYS CFX is a general purpose Computational Fluid Dynamics (CFD) code. It provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, as well as heat and mass transfer including conjugate heat transfer (CHT) in solid domains. It is mostly used for simulating turbomachinery, such as pumps, fans, compressors, and gas and hydraulic turbines.

Please note that the clusters do not come with any license. If you want to use ANSYS products on the HPC clusters, you need access to suitable licenses; these can be purchased directly from RRZE. To use the HPC resources efficiently, ANSYS HPC licenses are necessary.

Availability / Target HPC systems

Different versions of all ANSYS products are available via the modules system. They can be listed with module avail ansys, and a specific version can be loaded, e.g. with module load ansys/2020R1.
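A typical session for checking and loading a module looks like this (the version number is only an example; use one of the versions listed on your system):

# list the installed ANSYS versions
module avail ansys
# load a specific version
module load ansys/2020R1
# check that the CFX solver is now in the PATH
which cfx5solve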

We mostly install the current versions automatically, but if something is missing, please contact support-hpc@fau.de.

Production jobs should be run on the parallel HPC systems in batch mode.

ANSYS CFX can also be used in interactive GUI mode for serial pre- and/or post-processing on the login nodes (Linux: ssh option "-X"; Windows: using PuTTY and Xming for X11 forwarding). This should only be used for quick changes to the simulation setup. It is NOT permitted to run computationally intensive ANSYS CFX simulations or serial/parallel post-processing sessions with large memory consumption on the login nodes.
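A minimal example from a Linux client; the placeholders stand for your HPC account and the login node you normally use:

# log in with X11 forwarding enabled
ssh -X <username>@<login-node>
# on the login node: load the module and start e.g. the pre-processor GUI
module load ansys/2020R1
cfx5pre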

Alternatively, ANSYS CFX can be run interactively with its GUI on TinyFat (for large main memory requirements) or on a compute node.
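A rough sketch of requesting an interactive session on a compute node with the PBS-style batch system used in the job script below; the resource string and the availability of X11 forwarding (-X) depend on the cluster configuration:

# request one full node interactively for 4 hours with X11 forwarding
qsub -I -X -lnodes=1:ppn=40,walltime=04:00:00
# once the shell on the compute node has opened:
module load ansys/2020R1
cfx5launch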

Getting started

The (graphical) CFX launcher is started by typing

cfx5launch

on the command line. If you only want to use the separate pre- or post-processing tools, you can also launch cfx5pre or cfx5post, respectively.
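For example, the post-processor can be pointed directly at an existing results file (the file name is just a placeholder; see cfx5post -help for the accepted arguments):

# open a results file in the post-processor
cfx5post example_001.res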

For running simulations in batch mode on the HPC systems, use the

cfx5solve

command. You can find out the available parameters via cfx5solve -help. One example call to use in your batch script would be

cfx5solve -batch -par-dist $NODELIST -double -def <solver input file>

The number of processes and the hostnames of the compute nodes to be used are defined in $NODELIST. For how to construct this list, see the example script below. Using SMT threads is not recommended.
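The list passed to -par-dist is a comma-separated string of hostnames, each followed by the number of partitions to start on that host. With hypothetical node names and 20 processes per node it looks like this:

# contents of $NODELIST for a 4-node run with 20 processes per node
node0133*20,node0134*20,node0135*20,node0136*20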

Notes

  • We recommend writing automatic backup files (every 6 to 12 hours) for longer runs so that the simulation can be restarted in case of a job or machine failure. This can be specified in Output Control → User Interface → Backup Tab; a sketch of such a restart is shown after this list.
  • Furthermore, it is recommended to use the "Elapsed Wall Clock Time Control" in the job definition in ANSYS CFX-Pre (Solver Control → Elapsed Wall Clock Time Control → Maximum Run Time → <24h). Also plan enough buffer time for writing the final output; depending on your application, this can take quite a long time!
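A minimal sketch of restarting a run from a previously written backup or results file (the file names are placeholders, and the option name -ini-file should be verified via cfx5solve -help for the installed version):

# restart from the backup/results file, using it as initial values
cfx5solve -batch -double -par-dist "$NODELIST" -def example.def -ini-file example_001.res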

Sample job scripts

All job scripts have to contain the following information:

  • Resource definition for the queuing system (more details here)
  • Load ANSYS environment module
  • Generate a list of the hosts (with the number of processes per host) of the current simulation run to tell CFX on which nodes it should run (see example below)
  • Execute cfx5solve with appropriate command line parameters (available options via cfx5solve -help)

#!/bin/bash -l
#PBS -lnodes=4:ppn=40,walltime=24:00:00
#PBS -N cfx
#PBS -j eo

# specify the name of your solver input (def) file
DEFFILE="example.def"
# number of processes to start per node (physical cores only, since SMT threads are not recommended)
PPN=20
# load environment module
module load ansys/XXXX
# generate the node list in the format CFX expects, e.g. node1*20,node2*20,...
NODELIST=$(uniq $PBS_NODEFILE | sed -e 's/$/*'$PPN'/' | paste -d ',' -s)

# execute cfx5solve with command line parameters (see cfx5solve -help for all available parameters)
cfx5solve -batch -double -par-dist "$NODELIST" -def "$DEFFILE"
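Assuming the script is saved as cfx.sh, it is submitted to the PBS batch system with:

qsub cfx.sh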

Further information

  • Documentation is available within the application help manual. Further information is provided through the ANSYS Customer Portal for registered users.
  • More in-depth documentation is available at LRZ. Please note: not everything is directly applicable to HPC systems at RRZE!

Mentors