Hardware

CPU-only worker features

| partition(s) | node count(s) | processor | cores / threads | memory [GB] | mem / core [GB] |
|---|---|---|---|---|---|
| wc_cpu / wc_test | 90 / 10 | dual Xeon E5-2650 v2 2.6 GHz | 16 / 16 | 128 | 8 |

GPU-equipped worker features

The workers below share the wc_gpu slurm partition. Use the gres specification to request a particular worker node type.

| node count | slurm gres | GPU type | GPU count | processors | cores / threads | host memory [GB] |
|---|---|---|---|---|---|---|
| 2 | gpu:a100 | NVIDIA A100 | 2 | dual AMD EPYC 7543 2.8 GHz | 64 / 64 | 512 |
| 1 | gpu:a100 | NVIDIA A100 | 4 | dual AMD EPYC 7543 2.8 GHz | 64 / 64 | 512 |
| 4 | gpu:v100 | NVIDIA V100 | 2 | dual Xeon Gold 6248 2.5 GHz | 40 / 40 | 192 |
| 1 | gpu:p100nvlink | NVIDIA P100 | 2 | dual Xeon E5-2680 2.4 GHz | 28 / 28 | 128 |
| 1 | gpu:p100 | NVIDIA P100 | 8 | dual Xeon E5-2609 v4 1.7 GHz | 16 / 16 | 768 |
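As a sketch of how the gres specification selects a node type, a batch script requesting two GPUs on one of the V100 workers might look like the following. The partition and gres names come from the table above; the `gpu:type:n` count syntax mirrors the pattern documented for the Power9 node, and the job name and time limit are placeholder assumptions.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_test          # placeholder job name
#SBATCH --partition=wc_gpu           # shared GPU partition (from table above)
#SBATCH --gres=gpu:v100:2            # select the V100 node type, two GPUs (assumed gpu:type:n syntax)
#SBATCH --nodes=1
#SBATCH --time=01:00:00              # placeholder time limit

# Confirm which GPUs slurm allocated to the job.
nvidia-smi
```

Requesting `--gres=gpu:a100:4` instead would steer the job to the single four-GPU A100 worker, since no other node can satisfy that count.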

IBM Power9 (ppc64le) GPU worker

The IBM Power9 worker's ppc64le architecture is not binary compatible with the widely used x86_64 architecture of AMD and Intel processors; hence this node is in a separate slurm partition named wc_gpu_ppc. GPUs are requested using --gres=gpu:v100nvlinkppc64:n, where n is the number of GPUs needed. This node's design is similar to the worker nodes in the OLCF Summit supercomputer.

| node count | GPU type | GPU count | processors | cores / threads | host memory [TB] |
|---|---|---|---|---|---|
| 1 | NVIDIA V100 | 4 | dual IBM Power9 3.8 GHz | 32 / 128 | 1.1 |
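Putting the partition and gres names from this section together, a minimal batch script for the Power9 node might look like the sketch below. The partition and gres strings are taken from the text above; the job name and time limit are placeholders. Remember that x86_64 binaries will not run here, so any executables must be built for ppc64le.

```shell
#!/bin/bash
#SBATCH --job-name=ppc64le_gpu_test        # placeholder job name
#SBATCH --partition=wc_gpu_ppc             # dedicated Power9 partition
#SBATCH --gres=gpu:v100nvlinkppc64:2       # two of the node's four V100 GPUs
#SBATCH --nodes=1
#SBATCH --time=01:00:00                    # placeholder time limit

# Verify the job landed on the ppc64le node before running anything.
uname -m          # should print: ppc64le
nvidia-smi
```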