Open HPC Community Software
The HPC software on the Fermilab WC cluster is packaged and distributed by the OpenHPC project. We are currently providing OpenHPC release v1.3 for Red Hat Enterprise Linux 7.6.
Installed software components
Available software components are easily configured using the Lua-based lmod module system, which modifies shell environment variables such as LD_LIBRARY_PATH and sets any other needed variables. Lmod also guards against a user having conflicting software packages (e.g. the gnu8 compiler and an MPI built with clang) in their shell environment. More information on using lmod is available in the Introduction to lmod.
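You can inspect exactly what a module will change in your environment with the show sub-command, for example for the gnu8 compiler module:

$ module show gnu8    # display the PATH, LD_LIBRARY_PATH, and other settings the module applies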
Quick introduction to using lmod
You can list all of the software components available using the spider command:

$ module spider
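You can also ask spider about a specific package; it will report every available version together with any modules that must be loaded first:

$ module spider openmpi3    # show all openmpi3 versions and their prerequisites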
There is also an avail option; unlike spider, however, it shows only the modules that do not conflict with your currently loaded modules. Assuming we do not have anything loaded yet, the avail option will display mainly the available compilers and other utilities that do not depend upon a particular compiler. Here is an (edited) example of the listing you will see:
$ module avail

---------------------------- /opt/ohpc/pub/modulefiles ------------------------------
   EasyBuild/3.9.2      cmake/3.14.3   gnu8/8.3.0            llvm5/5.0.1   singularity/3.2.1
   autotools            gnu/5.4.0      hwloc/2.0.3           papi/5.7.0    valgrind/3.15.0
   charliecloud/0.9.7   gnu7/7.3.0     intel/18.104.22.168   pmix/2.2.2
The load command will enable a software package within your shell environment. If there is only a single version of a package available, it suffices to use the package name, e.g. gnu8, without specifying the particular version gnu8/8.3.0. The following loads several packages at once:
$ module load autotools cmake gnu8
Currently loaded modules can be displayed with the list command:

$ module list

Currently Loaded Modules:
  1) autotools   2) cmake/3.14.3   3) gnu8/8.3.0
If a package is no longer needed, it can be unloaded:

$ module unload autotools
$ module list

Currently Loaded Modules:
  1) cmake/3.14.3   2) gnu8/8.3.0
Running avail again will now also show the available modules that have gnu8 as a dependency:

$ module avail

--------------- /opt/ohpc/pub/moduledeps/gnu8 -----------------
   R/3.5.3       impi/2019.4.243   mpich/3.3        openblas/0.3.5   plasma/2.8.0       scotch/6.0.6
   gsl/2.5       likwid/4.3.4      mvapich2/2.3.1   openmpi3/3.1.4   py2-numpy/1.15.3   superlu/5.2.1
   hdf5/1.10.5   metis/5.1.0       ocr/1.0.1        pdtoolkit/3.25   py3-numpy/1.15.3
We can now load an MPI package:

$ module load openmpi3
$ module list

Currently Loaded Modules:
  1) cmake/3.14.3   2) gnu8/8.3.0   3) openmpi3/3.1.4
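With an MPI module loaded, the MPI compiler wrappers become available on your PATH. For example (the output will vary with the loaded toolchain):

$ which mpicc       # confirm the wrapper comes from the openmpi3 installation
$ mpicc --version   # the wrapper reports the underlying gnu8 compiler version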
We can use swap to change to another MPI implementation:

$ module swap openmpi3 mvapich2
$ module list

Currently Loaded Modules:
  1) cmake/3.14.3   2) gnu8/8.3.0   3) mvapich2/2.3.1
The purge command will unload all currently loaded modules:

$ module purge
$ module list
No modules loaded
This is useful at the beginning of batch scripts to prevent the batch shell from unintentionally
inheriting a module environment from the submission shell.
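A minimal sketch of this pattern at the top of a Slurm batch script (the resource values are placeholders):

#!/bin/bash
#SBATCH -N 2                 # number of nodes (example value)
#SBATCH -n 32                # total number of MPI ranks (example value)

module purge                 # discard any modules inherited from the submission shell
module load gnu8 openmpi3    # load only what this job needs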
Python3, Python2, and R conda environments
The supported Python versions are provided by the Anaconda project and are available via the module system. To activate Python3, the base version of Python, do the following:
$ module load anaconda3
$ conda activate
There is also a legacy Python2.7 version configurable as a conda environment. To activate Python2.7:
$ module load anaconda3
$ conda activate py27
To see a list of available system installed environments:
$ conda env list
# conda environments:
#
base         /srv/software/Anaconda3-2019.03
gnu-R        /srv/software/Anaconda3-2019.03/envs/gnu-R
jax-cpu      /srv/software/Anaconda3-2019.03/envs/jax-cpu
py27      *  /srv/software/Anaconda3-2019.03/envs/py27
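To work in one of these environments, load anaconda3 and activate it by name; for example, the system R installation:

$ module load anaconda3
$ conda activate gnu-R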
You are able to create your own local (to you) custom environments, either by creating a new environment from scratch or by cloning and then customizing an existing environment, as sketched below.
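For example, to clone and customize the system Python2.7 environment (the new environment name is a placeholder):

$ conda create --name my-py27 --clone py27    # copy the system environment into your home area
$ conda activate my-py27
$ conda install <some-package>                # customize the clone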
Launching MPI processes
Near the beginning of your batch script, prior to launching an MPI process, you should ensure that only the software modules required by the batch script have been loaded. For example, if using gnu8 and openmpi3:
module purge
module load gnu8 openmpi3
There are two common mechanisms for starting MPI processes: mpirun and srun. An mpirun command is provided by each MPI implementation and is specific to that implementation. The commands provided by openmpi, mvapich, and Intel impi are slurm "aware": they will attempt to use slurm interfaces to distribute and start MPI binaries. In addition, slurm has the srun command, which is able to start up MPI. The following command will list the APIs srun supports for MPI:
$ srun --mpi=list
srun: MPI types are...
srun: pmi2
srun: pmix_v3
srun: pmix
srun: none
srun: openmpi
srun: pmix_v2
The table below lists the recommended launcher for each MPI implementation. These combinations have been verified to work; combinations that are not listed either fail or do not properly launch MPI.
| MPI | Launcher + any required flags |
| --- | --- |
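As an illustration of the srun form (the plugin choice and binary name here are placeholders; use the combination recommended for your MPI implementation):

$ srun -n 16 --mpi=pmix ./my_mpi_app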
We recommend using srun to launch all three MPI implementations, since it is then possible to use command-line options to specify the distribution and binding of MPI ranks to CPU cores and local memory. The options are described in the srun documentation. Careful specification of the process affinities is especially important when running MPI in the hybrid approach combining MPI with thread parallelism. TU Dresden has a nice compendium illustrating different srun distribution and binding options for MPI.
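For example, a hybrid MPI+OpenMP job might pin ranks and threads as follows (all values are illustrative):

$ export OMP_NUM_THREADS=4
$ srun -N 2 -n 8 --cpus-per-task=4 --cpu-bind=cores --distribution=block:block ./my_hybrid_app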