STAR-CCM+

Simcenter STAR-CCM+ is a commercial software tool for CFD and, more generally, computational continuum mechanics (originally developed by CD-adapco, now Siemens PLM). As a general-purpose CFD code, Simcenter STAR-CCM+ provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer, including conjugate heat transfer (CHT) in solid domains.

Please note that the clusters do not come with any license. If you want to use Simcenter STAR-CCM+ on the HPC clusters, you must have access to suitable licenses. Several groups hold a joint license pool for non-commercial academic use, which is coordinated through ZISC.

Availability / Target HPC systems

Different versions of Simcenter STAR-CCM+ are available via the modules system and can be listed with module avail star-ccm+. A specific version can be loaded, e.g. with module load star-ccm+/2020.1.
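
For example:

module avail star-ccm+          # list all installed STAR-CCM+ versions
module load star-ccm+/2020.1    # load one specific version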

We usually install current versions automatically, but if something is missing, please contact hpc-support@fau.de.

Production jobs should be run on the parallel HPC systems in batch mode.

Simcenter STAR-CCM+ can also be used in interactive GUI mode for serial pre- and/or post-processing on the login nodes (Linux: SSH option "-X"; Windows: using PuTTY and Xming for X11 forwarding). This should only be used for quick changes to the simulation setup. Be aware that Simcenter STAR-CCM+ loads the full mesh into the login node's memory when you open a simulation file, so only do this for comparably small cases. It is NOT permitted to run computationally intensive Simcenter STAR-CCM+ simulations or serial/parallel post-processing sessions with large memory consumption on the login nodes.
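
A minimal sketch of an interactive GUI session from a Linux client (user name, host name, and file name are placeholders):

ssh -X yourname@cluster.fau.de     # log in with X11 forwarding
module load star-ccm+/2020.1
starccm+ smallcase.sim             # open a (small!) case in the GUI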

Notes

  • Once you load the star-ccm+ module, the environment variable $PODKEY will hold your specific POD key. Please use only this environment variable, as its value will be updated centrally when needed; see the sketch after this list. The POD key from the HPC system will not work at your chair and vice versa.
  • Do not use SMT/hyperthreads, as this will hurt performance and slow down your simulation! Refer to the sample job scripts below for how to set this up correctly.
  • We recommend writing automatic backup files (every 6 to 12 hours) for longer runs so that the simulation can be restarted in case of a job or machine failure.
  • Besides the default mixed-precision solver, Siemens PLM also provides installation packages for higher-accuracy double-precision simulations. The latter come at the price of approx. 20% longer execution times and approx. twice as large simulation result files. These modules are only available on demand and are named star-ccm+/XXX-r8.
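
As an illustration, a minimal sketch of using the POD key and a double-precision module (the version numbers and the macro name run.java are placeholders):

module load star-ccm+/2020.1
echo "$PODKEY"      # POD key provided by the module; value is maintained centrally
starccm+ -batch run.java -power -podkey $PODKEY simxyz.sim

# double-precision variant (installed on demand only):
module load star-ccm+/2020.1-r8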

Sample job scripts

All job scripts have to contain the following information:

  • Resource definition for the queuing system (more details here)
  • Load Simcenter STAR-CCM+ environment module
  • Generate a file with names of hosts of the current simulation run to tell STAR-CCM+ on which nodes it should run (see example below)
  • Start command for parallel execution of starccm+ with all appropriate command-line parameters, including a controlling STAR-CCM+ Java macro. Available parameters can be listed via starccm+ -help.

The following example is a job script for the Torque/PBS-based clusters:

#!/bin/bash -l
#PBS -l nodes=3:ppn=40,walltime=10:00:00
#PBS -N my-ccm
#PBS -j eo

# star-ccm+ arguments
CCMARGS="-load simxyz.sim -power -podkey $PODKEY"

# specify the time you want to have to save results, etc.
# (remove or comment out the line if you do not want this feature)
TIME4SAVE=1200

# number of cores to use per node (only the 20 physical cores, no SMT threads)
PPN=20

# STAR-CCM+ version to use
module add star-ccm+/2019.2

#####################################################
#####################################################
### normally, no changes should be required below ###
#####################################################
#####################################################

# change to working directory 
cd ${PBS_O_WORKDIR}

# count the number of nodes
NODES=`uniq ${PBS_NODEFILE} | wc -l`
# calculate the number of cores actually used
CORES=$(( ${NODES} * ${PPN} ))

# generate new node file
for node in `uniq ${PBS_NODEFILE}`; do
  echo "${node}:${PPN}"
done > pbs_nodefile.${PBS_JOBID}

# some exit/error traps for cleanup
trap 'echo; echo "*** Signal TERM received: `date`"; echo; rm pbs_nodefile.${PBS_JOBID}; exit' TERM
# note: SIGKILL cannot be trapped, so clean up on INT as well
trap 'echo; echo "*** Signal INT received: `date`"; echo; rm pbs_nodefile.${PBS_JOBID}; exit' INT

if [ -n "$TIME4SAVE" ]; then
    # automatically detect how much time this batch job requested and adjust the
    # sleep accordingly
    export TIME4SAVE
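    # when the sleep expires, a file named ABORT is created in the working
    # directory; STAR-CCM+'s "Stop File" stopping criterion (default file
    # name: ABORT) then stops the run cleanly, leaving TIME4SAVE seconds
    # for saving the results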
    ( sleep ` qstat -f ${PBS_JOBID} | awk -v t=${TIME4SAVE}                 \
        '{if ( $0 ~ /Resource_List.walltime/ )                              \
            { split($3,duration,":");                                       \
              print duration[1]*3600+duration[2]*60+duration[3]-t }}' ` &&  \
      touch ABORT ) >& /dev/null  &
    SLEEP_ID=$!
fi

echo
echo "============================================================"
echo "Running STAR-CCM+ with $CORES MPI processes in total"
echo "   with $PPN cores per node "
echo "   on $NODES different hosts"
echo "============================================================"
echo

# start STAR-CCM+
starccm+ -batch -rsh ssh -cpubind v -np ${CORES} -machinefile pbs_nodefile.${PBS_JOBID} ${CCMARGS}

# final clean up
rm pbs_nodefile.${PBS_JOBID}
if [ -n "$TIME4SAVE" ]; then
    pkill -P ${SLEEP_ID}
fi
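
Assuming the script is saved as starccm_pbs.sh (the file name is arbitrary), submit it with:

qsub starccm_pbs.sh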

The following example is a job script for the Slurm-based clusters:

#!/bin/bash -l
#SBATCH --job-name=my-ccm
#SBATCH --nodes=2
#SBATCH --time=01:00:00
#SBATCH --export=NONE

# star-ccm+ arguments
CCMARGS="-load simxyz.sim"

# specify the time you want to have to save results, etc.
# (remove or comment out the line if you do not want this feature)
TIME4SAVE=1200

# number of cores to use per node (must be an even number!)
PPN=20

# STAR-CCM+ version to use
module add star-ccm+/2019.2

#####################################################
### normally, no changes should be required below ###
#####################################################

unset SLURM_EXPORT_ENV

echo
echo "Job starts at $(date) - $(date +%s)"
echo

# count the number of nodes
NODES=$( scontrol show hostnames $SLURM_JOB_NODELIST )
NUMNODES=$( echo $NODES | wc -w )
# calculate the number of cores actually used
NUMCORES=$(( $NUMNODES * ${PPN} ))

# mpirun options: verbose output and automatic core binding/affinity
MPIRUN_OPTIONS="-v -prot -aff=automatic:bandwidth:core -affopt=v"

# change to working directory (should not be necessary for SLURM)
cd $SLURM_SUBMIT_DIR

# generate new node file
for node in $NODES ; do
  echo "${node}:${PPN}"
done > $TMPDIR/job_nodefile.${SLURM_JOBID}

# some exit/error traps for cleanup
trap 'echo; echo "*** Signal TERM received: `date`"; echo; rm $TMPDIR/job_nodefile.${SLURM_JOBID}; exit' TERM
# note: SIGKILL cannot be trapped, so clean up on INT as well
trap 'echo; echo "*** Signal INT received: `date`"; echo; rm $TMPDIR/job_nodefile.${SLURM_JOBID}; exit' INT

if [ -n "$TIME4SAVE" ]; then
    # automatically detect how much time this batch job requested and adjust the
    # sleep accordingly
    # squeue -o %L reports the remaining walltime as [days-]hours:minutes:seconds
    TIMELEFT=$(squeue -j $SLURM_JOBID -o %L -h)
    HHMMSS=${TIMELEFT#*-}
    [ "$HHMMSS" != "$TIMELEFT" ] && DAYS=${TIMELEFT%-*}
    IFS=: read -r HH MM SS <<< "$HHMMSS"
    # cope with the short forms MM:SS and SS
    [ -z "$SS" ] && { SS=$MM; MM=$HH; HH=0 ; }
    [ -z "$SS" ] && { SS=$MM; MM=0; }
    SLEEP=$(( ( ( ${DAYS:-0} * 24 + 10#${HH} ) * 60 + 10#${MM} ) * 60 + 10#$SS - $TIME4SAVE ))
    echo "Available runtime: ${DAYS:-0}-${HH:-0}:${MM:-0}:${SS}, sleeping for up to $SLEEP seconds, thus reserving $TIME4SAVE seconds for cleanly stopping/saving results"
    ( sleep $SLEEP && touch ABORT ) >& /dev/null &
    SLEEP_ID=$!
fi

echo
echo "============================================================"
echo "Running STAR-CCM+ with $NUMCORES MPI processes in total"
echo " with $PPN cores per node"
echo " on $NUMNODES different hosts"
echo "============================================================"

echo

# start STAR-CCM+
### since version 12.06.01x, Platform MPI (the default) supports the Omni-Path interconnect and seems to perform slightly better than Intel MPI

starccm+ -batch -np ${NUMCORES} -mppflags "$MPIRUN_OPTIONS" -fabricverbose -machinefile $TMPDIR/job_nodefile.${SLURM_JOBID} -power -podkey $PODKEY ${CCMARGS}

# final clean up
rm $TMPDIR/job_nodefile.${SLURM_JOBID}
if [ -n "$TIME4SAVE" ]; then
    pkill -P ${SLEEP_ID}
fi

echo "Job finished at $(date) - $(date +%s)"

Mentors

  • please volunteer!
  • for issues with the license server or POD key, contact hpc-support@fau.de (T. Zeiser)
  • for contract questions regarding the joint license pool, contact ZISC (H. Lanig)