Windows HPC-Cluster

The Windows HPC cluster was turned off in June 2018!
All information on this page is obsolete.

RRZE’s Windows cluster runs Microsoft’s HPC cluster software: a Windows Server 2012 R2 based system with MS HPC Pack 2012 R2 installed.

The Windows cluster includes the following components:

  • 16 compute nodes, each with
    • a dual-socket board with two hexa-core AMD Opteron Istanbul processors (2.6 GHz)
    • 32 GBytes of RAM per compute node in a ccNUMA architecture, i.e. roughly 2.7 GBytes per core
  • Head node with 8 GBytes of RAM and 4 Intel Xeon based cores (2.53 GHz)
  • Front node with 8 GBytes of RAM and 4 Intel Xeon based cores (2.53 GHz)

Please keep in mind that all these nodes date from 2009. They therefore do not deliver high performance compared to modern PCs, and the hardware may fail at any time. There are currently no plans to update the hardware of the Windows HPC cluster.

Access to the Machine

Access to the system is granted via the frontend windowscc.rrze.uni-erlangen.de.

The frontend and the cluster nodes all have private IP addresses and can only be reached directly from within the university network. If you want to connect from outside, use VPN (recommended), an SSH tunnel to port 3389 of windowscc through the official HPC dialog server cshpc, or first connect to cshpc using an NX Client (see below).
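As an illustration of the SSH tunnel variant, a minimal sketch from a UNIX/Linux shell (USERNAME and the local port 3390 are placeholders; cshpc and the RDP port 3389 are as above):
ssh -L 3390:windowscc.rrze.uni-erlangen.de:3389 USERNAME@cshpc.rrze.uni-erlangen.de
While the tunnel is open, point your RDP client (see below) at localhost:3390 instead of windowscc.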

Please connect using the RDP protocol, either with the Windows Remote Desktop Client or the rdesktop/xfreerdp tool under UNIX/Linux.

  • Windows Remote Desktop Client: This program is part of every Windows XP (or higher) installation and can be found under „Accessories“ -> „Communications“. The client allows you to make local resources (notably disks) visible on the remote system, which greatly facilitates file transfers.
  • rdesktop / xfreerdp: These are open source programs that are part of all major Linux distributions. Use them by specifying the server to connect to as an argument:
    rdesktop -a 16 -f -k de windowscc.rrze.uni-erlangen.de
    The option -a 16 specifies a color depth of 16 bits, -f turns on fullscreen mode, and -k de is required if you use a German keyboard layout. To leave fullscreen mode, press Ctrl-Alt-Enter. Killing the client leaves your session running, and you can reconnect at any time.
    You can make client directories available on the remote server using the option -r disk:<share>=<pathname>. The directory under <pathname> will then be accessible on the server under the UNC path \\tsclient\<share>.
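    For example (a sketch only; the share name data and the path /home/user/data are placeholders):
    rdesktop -a 16 -f -k de -r disk:data=/home/user/data windowscc.rrze.uni-erlangen.de
    The local directory /home/user/data then shows up on the server as \\tsclient\data.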

Always make sure that fauad is used as the account domain!
I.e. use fauad\YOURUSERNAME instead of just YOURUSERNAME (or any local domain) when logging in to the Windows server.

Walkthrough:

From within the university network OR connected via VPN, working on a Unix platform

Open a shell and type:
rdesktop -f -k de windowscc.rrze.uni-erlangen.de
Enter your password and log in.

From within the university network OR connected via VPN, working on a Windows platform

Click Start -> All Programs -> Accessories -> Remote Desktop Connection (or Start -> Run, type mstsc and press Enter).
Connect and log in to:
windowscc.rrze.uni-erlangen.de
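Alternatively, the connection can be started directly from a command prompt, for example (a sketch; /v: selects the remote server, /f requests full-screen mode):
mstsc /v:windowscc.rrze.uni-erlangen.de /f
Remember to log in with fauad\YOURUSERNAME as described above.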

From outside the university network AND without VPN, working on a Unix platform

Use the NXClient as described on the HPC dialog server page.

Connect and log in to:
cshpc.rrze.uni-erlangen.de
Open a shell and enter:
xfreerdp /u:YOURUSERNAME /v:windowscc.rrze.uni-erlangen.de /bpp:16 /f
Replace YOURUSERNAME with your username and enter your password when requested. On the first login, you’ll also have to accept the server’s certificate.

From outside the university network AND without VPN, working on a Windows platform

Use the NXClient as described on the HPC dialog server page.

Connect and log in to:
cshpc.rrze.uni-erlangen.de
Open a shell and enter:
xfreerdp /u:YOURUSERNAME /v:windowscc.rrze.uni-erlangen.de /bpp:16 /f
Replace YOURUSERNAME with your username and enter your password when requested. On the first login, you’ll also have to accept the server’s certificate.

File Systems

On the head node you should not use the „My Documents“ folder, as space is very limited. Furthermore, this folder is only visible on that particular node, and each of the compute nodes has its own „My Documents“ folder for each user. We therefore provide a globally visible share for each user under \\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>, which can be read and written from all nodes. This is the place to put all your development data, binaries and input/output data of jobs.

Home and Working directory

\\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>
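To access the share conveniently, you can map it to a drive letter from a Windows command prompt, for example (a sketch; the drive letter Z: is an arbitrary choice, <group> and <username> as above):
net use Z: \\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>
The share then appears as drive Z: in Explorer and in batch scripts.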

Batch Processing: Compute Cluster Job Manager

General Access

  • To access the „Job Manager“, click the Start button, choose „All Programs“, choose „Microsoft HPC Pack 2012“ and click HPC Job Manager.
    If asked for a head node, enter hpcmaster.rrze.uni-erlangen.de.
  • Use „Job Submission“ in the „Tasks“ menu.
  • Insert the desired job name, job template (see below) and a description
  • On the processor pane, specify the number of processors
  • On the task pane, add a task for each executable you want to schedule
  • Click Submit

Command Line Job Manager

  • More functionality, in particular the possibility of scripting, is provided by the command line tool job, which is accessible from any command prompt of the Compute Cluster Server (a short sketch of further job subcommands follows at the end of this list).
  • Example: job submit /numprocessors:8
    /stdout:\\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>\iptest.txt
    hostname.exe
    Reserves 8 processors and runs the executable once
  • Example: job submit /numprocessors:8
    /stdout:\\aycasamba.rrze.uni-erlangen.de\hpc_vault\<group>\<username>\iptest.txt
    mpiexec hostname.exe
    Reserves 8 processors and runs the executable once for each processor
  • All cluster-related command line tools are described on the Microsoft Compute Cluster Command Line Reference page.
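  • A short sketch of typical follow-up commands (assuming the standard HPC Pack job command line; the job ID 123 is a placeholder):
    job list
    job view 123
    job cancel 123
    job list shows your queued and running jobs, job view displays the details of a single job, and job cancel removes it from the queue.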

Job Templates

Job templates are Microsoft’s implementation of cluster queues for different computational requirements and categories.