HPC toolchains on the LQ cluster complex
The HPC toolchains on the LQ complex were built from EasyBuild recipes together with additional hand-built packages. The EasyConfig files used to build the software for LQ are on GitHub. Under AlmaLinux 8 (el8) the deployments are found in the directory /srv/software/el8/x86_64. The subdirectory ./eb contains packages built with EasyBuild, while the directory ./hpc contains the additional packages built by other means. We provide EasyBuild as a package for users wishing to build their own software stacks.
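For example, a user build with the provided easybuild package might look like the following (a minimal sketch; the easyconfig name and install prefix are illustrative):
$ module load easybuild
$ eb --search HDF5                                                  # list matching easyconfig files
$ eb HDF5-1.14.0-gompi-2023a.eb --robot --prefix=$HOME/easybuild    # build, resolving dependencies, into your own prefix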
Installed compiler toolchains
The supported compiler toolchains include:
toolchain name | compilers | MPI |
gompi (compiler+MPI) | gnu compiler suite | Open MPI |
intel | Intel oneAPI | Intel MPI |
nvhpc | NVIDIA HPC toolkit | Open MPI |
We also provide a CPU-only MVAPICH2 toolchain specifically for the OmniPath network on the LQ1 cluster. This MVAPICH2 build should not be run on the LQ2 cluster, which has an InfiniBand network. Note that MVAPICH2 is no longer under development and will soon be replaced by MVAPICH 3.x; unfortunately, version 3.x is currently beta-only software and is not recommended for production.
Quickstart: gnu compilers and Open MPI
To load the latest supported toolchain (compilers plus MPI) for CPU compilations,
$ module load gompi
For GPU support on LQ2, use
$ module load gompi ucx_cuda ucc_cuda
The ucx_cuda and ucc_cuda modules provide GPUDirect RDMA support and optimized MPI collective operations. Typically, these options will also need to be enabled in application codes when they are built.
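As a sketch of typical usage (the source file and process count are illustrative), a code can then be compiled with the Open MPI wrapper compilers and launched with mpirun:
$ module load gompi
$ mpicc -O2 hello_mpi.c -o hello_mpi      # mpicxx / mpifort for C++ / Fortran
$ mpirun -np 4 ./hello_mpi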
Quickstart: Intel compilers and Intel MPI
To load the latest Intel toolchain,
$ module load intel
The load command also enables Intel’s MKL linear algebra and FFT libraries, as well as a corresponding newer gnu compiler suite.
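For example, an MPI code can be compiled with the Intel MPI wrapper compilers and linked against MKL with the -qmkl flag (a sketch; the source file name is illustrative, and older Intel MPI releases use the classic mpiicc/mpiifort wrappers instead):
$ module load intel
$ mpiicx -O2 -qmkl solver.c -o solver     # mpiifx for Fortran
$ mpirun -np 4 ./solver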
Quickstart: NVIDIA HPC toolkits
The command
$ module avail nvhpc
will show the available nvhpc toolkits. The options are described in the toolkit documentation.
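As an illustrative sketch (the source file, offload flags, and GPU compute capability are examples only), the NVIDIA compilers can build GPU-offloaded code via OpenACC:
$ module load nvhpc/23.7
$ nvc -acc -gpu=cc80 -Minfo=accel laplace.c -o laplace    # report what the compiler offloads to the GPU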
MVAPICH2
For the LQ1 CPU-only cluster, the command
$ module load mvapich2/2.3.7_1_gcc_12.3.0
will load MVAPICH2 built for OmniPath networking and version 12.3 of the gnu compilers. We do not provide MVAPICH2 with CUDA enabled, since those builds are distributed binary-only for limited combinations of OS, network, compiler, and CUDA versions.
Supported toolchain versions
We follow the EasyBuild toolchain version scheme for the gompi and intel toolchains.

gompi toolchains: gnu compilers + Open MPI

version | date | gcc | Open MPI | binutils | CUDA ver. |
2023a | Jun’23 | 12.3.0 | 4.1.5 | 2.40 | 12.2.x |
2022a | Jun’22 | 11.4.0 | 4.1.5 | 2.40 | 11.8.x |

intel toolchains

version | date | compilers | MPI | MKL | gcc | binutils |
2023a | Jun’23 | 2023.1.0 | 2021.9.0 | 2023.1.0 | 12.3.0 | 2.40 |

nvhpc toolkits

version | CUDA |
23.7 | 12.2 |

Quick Introduction to using Lmod
Available software components are easily configured using the Lua-based Lmod system, which modifies the PATH and LD_LIBRARY_PATH (bash) shell environment variables and sets any other needed variables. More information on using Lmod is available in the Introduction to lmod.
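To see exactly what a particular module does to your environment, use the show sub-command (the package name here is just an example):
$ module show gcc [output suppressed]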
You can list all of the software components available with brief descriptions using the spider option:
$ module spider [output suppressed]
There is also the avail option; below is an abridged example of its output. You will see many more packages listed.
$ module avail

---------- /srv/software/el8/x86_64/hpc/lmod/Core -----------------------
   anaconda/2023.07-2   cmake/3.27.2   git/2.41.0        julia/1.9.2
   apptainer/1.2.1      cuda/12.2.1    julia/1.6.7-lts   mambaforge/23.1.0-4

--------- /srv/software/el8/x86_64/hpc/nvhpc/modulefiles ----------------
   nvhpc-byo-compiler/23.7   nvhpc-hpcx-cuda12/23.7   nvhpc-nompi/23.7
   nvhpc-hpcx-cuda11/23.7    nvhpc-hpcx/23.7          nvhpc/23.7

-------- /srv/software/el8/x86_64/eb/lmod/all ---------------------------
   easybuild/4.8.0   gcc/12.3.0       mvapich2/2.3.7_1_gcc_12.3.0
   gompi/2023a       vtune/2022.3.0   intel/2023a
The load command will enable a software package within your shell environment. If there is only a single package version available, it suffices to use the package name, e.g. gcc, without specifying the particular version gcc/12.3.0. The following loads git, cmake, and gcc:
$ module load git cmake gcc
Currently loaded modules can be displayed with list:
$ module list

Currently Loaded Modules:
  1) git/2.41.0     3) gcccore/12.3.0   5) binutils/2.40_gcccore_12.3.0
  2) cmake/3.27.2   4) zlib/1.2.13      6) gcc/12.3.0
Note that additional packages such as zlib and binutils were automatically loaded since they are runtime dependencies for gcc.
If a package is no longer needed, it can be unloaded:
$ module unload git
$ module list

Currently Loaded Modules:
  1) cmake/3.27.2   2) gcccore/12.3.0   3) zlib/1.2.13
  4) binutils/2.40_gcccore_12.3.0       5) gcc/12.3.0
The purge command will unload all current modules:
$ module purge
$ module list
No modules loaded
It is useful to put module purge at the beginning of batch scripts to prevent the batch shell from unintentionally inheriting a module environment from the submission shell.
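For example, the top of a batch script might look like this (a sketch assuming a Slurm-style script; the resource requests, toolchain, and executable are illustrative):
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4

module purge              # start from a clean module environment
module load gompi         # load only what the job needs
mpirun ./my_app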
Python and conda environments
We provide both the community version of the anaconda bundle from the anaconda project and the open source mambaforge package from conda forge. Either mambaforge or anaconda can be used to build and support customized python environments; see the documentation on managing custom environments. Mambaforge provides the mamba package manager, a faster, more reliable drop-in replacement for conda. Note that anaconda comes packaged with a rich bundle of python modules preinstalled in the base environment.
To activate the base anaconda environment,
$ module load anaconda
$ conda activate
(base) $ python
Python 3.11.4
>>> ^D
(base) $
First deactivate the current conda environment before unloading anaconda:
(base) $ conda deactivate
$ module unload anaconda
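As a sketch of building a customized environment with mambaforge (the environment name and package list are illustrative; see the custom environments documentation for details):
$ module load mambaforge
$ mamba create -n myenv python=3.11 numpy scipy    # create a new environment with the packages you need
$ conda activate myenv
(myenv) $ conda deactivate
$ module unload mambaforge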