The worker nodes below share the wc_gpu slurm partition. The gres specification listed in the table is used to request a particular worker node type, e.g. --gres=gpu:a100:n where n is the number of GPUs needed.
| node count | slurm gres | GPU type | GPU count | processors | cores / threads | host memory [GB] |
|---|---|---|---|---|---|---|
| 2 | gpu:a100 | NVIDIA A100 | 2 | dual AMD EPYC 7543 2.8 GHz | 64 / 64 | 512 |
| 1 | gpu:a100 | NVIDIA A100 | 4 | dual AMD EPYC 7543 2.8 GHz | 64 / 64 | 512 |
| 4 | gpu:v100 | NVIDIA V100 | 2 | dual Xeon Gold 6248 2.5 GHz | 40 / 40 | 192 |
| 1 | gpu:p100nvlink | NVIDIA P100 | 2 | dual Xeon E5-2680 2.4 GHz | 28 / 28 | 128 |
| 1 | gpu:p100 | NVIDIA P100 | 8 | dual Xeon E5-2609 v4 1.7 GHz | 16 / 16 | 768 |
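A minimal batch script sketch for requesting one of these nodes is shown below. The partition and gres names come from the table above; the job name, time limit, and the executable ./my_gpu_app are placeholders to adapt to your own job.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test       # placeholder job name
#SBATCH --partition=wc_gpu        # shared GPU partition from the table above
#SBATCH --gres=gpu:a100:2         # request 2 A100 GPUs on an A100 node
#SBATCH --nodes=1
#SBATCH --time=01:00:00           # example time limit; adjust to your job

# Report which GPUs slurm assigned to this job
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES}"

# Run the application (hypothetical executable)
srun ./my_gpu_app
```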
IBM Power9 (ppc64le) GPU worker
The ppc64le architecture of the IBM Power9 worker is not binary compatible with the widely used x86_64 architecture of AMD and Intel processors; hence, this node is in a separate slurm partition named wc_gpu_ppc. GPUs are requested using --gres=gpu:v100nvlinkppc64:n, where n is the number of GPUs needed. This node's design is similar to that of the worker nodes in the OLCF Summit supercomputer.
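For example, an interactive two-GPU session on the Power9 node might be requested as in the sketch below; the partition and gres names are taken from the text above, while the remaining srun options are standard slurm flags shown for illustration.

```bash
# Request an interactive shell on the Power9 node with 2 of its V100 GPUs
srun --partition=wc_gpu_ppc --gres=gpu:v100nvlinkppc64:2 --nodes=1 --pty /bin/bash
```

Remember that binaries built on x86_64 login or worker nodes will not run here; code must be compiled for ppc64le on this node itself or a compatible host.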