What is a container?
Containers are a way to package software in a format that can run in an isolated environment on a host operating system. Unlike virtual machines (VMs), containers do not bundle a full guest operating system; they share the host kernel and include only the libraries and settings required to make the software work. This makes for efficient, lightweight, self-contained environments and helps ensure that software runs the same way regardless of where it is deployed. The best known container technology is Docker.
Apptainer and Singularity
Singularity was renamed Apptainer and placed under the stewardship of the Linux Foundation to differentiate it from other like-named projects and commercial products [announcement]. The Fermilab HPC clusters support the use of Apptainer.
Unlike the Docker system, Apptainer is designed for regular users to securely run containers on a shared host system, such as an HPC cluster. Apptainer enables users to have full control of their environment. For example, the environment inside the container might be Ubuntu 24.04 or Alma Linux 9.x and the container will run on an Alma Linux 8.x host system.
Containers for portability and reproducibility
Singularity-format containers can be used to package entire scientific workflows, software and libraries, and even data. They have proven particularly useful for supporting machine learning (ML) frameworks on the Fermilab HPC clusters, since ML frameworks evolve rapidly and ML software development is typically done on an operating system such as Ubuntu rather than Scientific Linux. Containers allow users to select from a wide range of ML frameworks and versions with confidence that their selected environment is isolated from changes to the underlying host OS.
Where to find containers
We recommend using standard “off the shelf” containers whenever possible rather than building and maintaining customized containers. Pre-built containers are available online, and Apptainer can build a local copy of a container, including converting a Docker container into a Singularity-format image (see the short example after the list below). Be extremely cautious about the security implications of downloading and running binary code within containers: only download containers that are provided by verified repositories and publishers, or that you have built yourself from official Linux package repositories. Docker-format containers are found at:
DockerHub: Please ensure you filter your choices by selecting either “Verified Publisher” or “Official Images”.
NVIDIA NGC: Be aware that many of the “latest” containers built by NVIDIA may no longer support older P100 (sm60) GPUs. You may be able to find a suitable container by searching the available container Tags.
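For example, a minimal sketch of converting an official Docker image into a local Singularity-format file (the image name and tag are illustrative):
$ apptainer pull ubuntu_24.04.sif docker://ubuntu:24.04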
Apptainer setup
Apptainer is available via software modules. We list available versions of apptainer and then load the default version.
$ module avail apptainer
--------- /srv/software/el8/x86_64/hpc/lmod/Core ----------------------
apptainer/1.2.1
$ module load apptainer
$ apptainer --version
apptainer version 1.2.1
Apptainer caches the overlay pieces needed to build a container. We set an environment variable to place the cache in Lustre rather than defaulting to your /nashome home directory; the latter has very limited storage that may not be big enough for the cache. Below, replace my_project_dir with the name of your project area. Apptainer also uses a temporary directory to assemble the image file when building a Singularity image; the second variable controls the location used for temporary space. The third variable relocates Apptainer's per-user configuration directory off /nashome as well.
$ export APPTAINER_CACHEDIR=/wclustre/my_project_dir/apptainer/.apptainer/cache
$ export APPTAINER_TMPDIR=/wclustre/my_project_dir/apptainer/.apptainer/tmp
$ export APPTAINER_CONFIGDIR=/wclustre/my_project_dir/apptainer/config
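It is safest to create these directories before the first build (the same step appears in the batch build example later on this page):
$ mkdir -p $APPTAINER_CACHEDIR $APPTAINER_TMPDIR $APPTAINER_CONFIGDIR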
We recommend building large and complex container images on the WC worker nodes in a batch job. The build can be done on the /scratch partition, which typically has a few hundred GB of free space. From a shell on a worker node (see the srun example in the PyTorch section below for how to start an interactive batch session), use:
$ cd /scratch
$ mkdir $USER
$ cd $USER
$ export APPTAINER_CACHEDIR=/scratch/$USER/apptainer/.apptainer/cache
$ export APPTAINER_TMPDIR=/scratch/$USER/apptainer/.apptainer/tmp
$ export APPTAINER_CONFIGDIR=/scratch/$USER/apptainer/config
Please note that you must copy any containers you build in /scratch to either /wclustre or /work1 before you exit the batch job, since /scratch is cleaned at the end of the job.
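For example (the image name and destination path are illustrative):
$ cp my_image.sif /wclustre/my_project_dir/images/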
Cleaning the cache
This command lists the currently cached files:
$ apptainer cache list
There are 0 container file(s) using 0.00 KiB and 0 oci blob file(s) using 0.00 KiB of space
Total space used: 0.00 KiB
The block below illustrates the command used to clean the cache. Remove the --dry-run flag to actually remove cached files.
$ apptainer cache clean --dry-run
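To actually clean the cache, run the same command without the --dry-run flag:
$ apptainer cache clean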
Example: Building a container from Docker Hub
Your project area in Lustre is a convenient place to store very large images you need for your work. Since the container in this example is less than 90 MB and requires little cache space during the build, we can build it directly in Lustre from the login node rather than building from a batch job.
$ cd /wclustre/my_project_dir
$ mkdir images
$ cd images
The build command below downloads the storage overlays for the lolcow container from Docker Hub and creates a local Singularity-format copy called lolcow.sif in the Lustre directory.
$ HOME=/work1/my_project_dir apptainer build lolcow.sif docker://godlovedc/lolcow
Note that above we have reset the HOME variable while running the command. This is needed when /nashome is not accessible to apptainer, such as when doing the build from a batch job.
Running the container
Containers often define a default action when activated by the run command. Running the lolcow container prints a random fortune.
$ apptainer run lolcow.sif
_____________________________________
/ You are not dead yet. But watch for \
\ further reports. /
-------------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
It is also possible to start a shell within the container. Below we start a shell and type a command to determine the guest OS (Ubuntu) within the container.
$ apptainer shell --home=/work1/your_project_name lolcow.sif
Apptainer> cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Apptainer> ^D
$
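If you only need to run a single command inside the container rather than an interactive shell, apptainer exec runs it directly (a minimal sketch; the command shown is illustrative):
$ apptainer exec lolcow.sif cat /etc/os-release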
Building a container from an Apptainer/Singularity recipe
The steps needed to build a container can be described in a text file. The file alma_9.x.def describes an Alma Linux 9.x (Enterprise Linux 9) container that also provides access to the EPEL RPM repository. The recipe starts from an Alma 9 image obtained from Docker Hub.
$ cat alma_9.x.def
# An Alma Linux 9.x container
Bootstrap: docker
From: almalinux:9
%post
yum install -y epel-release
Additional software packages can be added to this container at build time by adding dnf install commands to the basic alma_9.x.def file, as sketched below.
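For example, a variant of the recipe that also installs a compiler and git might look like this (the package list is illustrative):
# An Alma Linux 9.x container with extra packages
Bootstrap: docker
From: almalinux:9
%post
dnf install -y epel-release
dnf install -y gcc-c++ git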
The build command will create a container named alma_9.x.sif.
$ apptainer build alma_9.x.sif alma_9.x.def
We can start a shell in the resulting container to verify that the guest operating system is Alma Linux 9:
$ apptainer shell alma_9.x.sif
Apptainer> cat /etc/os-release
NAME="AlmaLinux"
VERSION="9.3 (Shamrock Pampas Cat)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.3"
PLATFORM_ID="platform:el9"
PRETTY_NAME="AlmaLinux 9.3 (Shamrock Pampas Cat)"
REDHAT_SUPPORT_PRODUCT="AlmaLinux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.3"
Apptainer> ^D
$
Using a PyTorch container from NVIDIA
NVIDIA provides many versions of PyTorch containers. See PyTorch NGC for a list of available containers.
Building a local copy of the container
We use apptainer to build a copy of the PyTorch container from NVIDIA. Since this is a large, complex container, we do the build in a batch job and then copy the container to Lustre for future reuse. Although PyTorch is GPU accelerated, we do not need a GPU worker node to do the build. We start an interactive batch session on a CPU-only worker, load apptainer, and set up the HTTP proxy.
$ srun --unbuffered --pty -A myAccount --qos=regular \
--partition=wc_cpu --nodes=1 --time=02:00:00 \
--ntasks-per-node=1 --cpus-per-task=16 /bin/bash
# batch job has started
$ cd /scratch/
$ mkdir $USER
$ cd $USER
$ module load apptainer
$ export APPTAINER_CACHEDIR=/scratch/$USER/apptainer/.apptainer/cache
$ export APPTAINER_TMPDIR=/scratch/$USER/apptainer/.apptainer/tmp
$ export APPTAINER_CONFIGDIR=/scratch/$USER/apptainer/config
$ mkdir -p $APPTAINER_CACHEDIR $APPTAINER_TMPDIR $APPTAINER_CONFIGDIR
$ export https_proxy=http://squid.fnal.gov:3128
$ export http_proxy=http://squid.fnal.gov:3128
We next build the PyTorch container using the setup above. See PyTorch NGC for the current list of available containers. For the build below we have reset HOME to mitigate an issue affecting apptainer when $HOME is not accessible.
$ HOME=/scratch/$USER apptainer pull \
pytorch-23.12-py3.sif \
docker://nvcr.io/nvidia/pytorch:23.12-py3
(several minutes and lots of screen output from the build)
INFO: Creating SIF file...
$ ls -sh pytorch-23.12-py3.sif
9.4G pytorch-23.12-py3.sif
Remember to make a copy of your image before ending your batch job.
$ cp pytorch-23.12-py3.sif /wclustre/my_project_dir/images/
We do a simple test of PyTorch in the container:
$ apptainer shell --home=/scratch/$USER pytorch-23.12-py3.sif
Apptainer> python
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'2.2.0a0+81ea7a4'
>>> torch.cuda.is_available()
False
>>> ^D
Apptainer> ^D
$ exit # from batch
Above, the torch module was loaded from python, but CUDA is not available since the build job was run on a system without GPUs. Note that you can still run torch; however, performance will be slower without a GPU.
Using the PyTorch container on a GPU worker from batch
As an example, we train a neural network on the MNIST training data. The PyTorch examples are found on GitHub.
$ cd /work1/my_project_dir/torch
$ module load git
$ git clone https://github.com/pytorch/examples.git
We start an interactive job on a GPU worker asking for a single NVIDIA P100 GPU.
$ module load apptainer
$ srun --unbuffered --pty -A myProject --qos=regular --time=1:00:00 \
--partition=wc_gpu --gres=gpu:p100:1 --nodes=1 \
--ntasks-per-node=1 --cpus-per-gpu=4 --mem-per-gpu=64G \
/bin/bash
$ hostname
wcgpu02.fnal.gov
$ pwd
/work1/my_project_dir/torch
We start a shell within the PyTorch container and run the example. The --nv flag is needed to allow the container to access GPUs on the host system. The --home=/work1/my_project_dir option sets the home directory inside the container to /work1 rather than your /nashome directory.
$ export https_proxy=http://squid.fnal.gov:3128
$ export http_proxy=http://squid.fnal.gov:3128
$ mkdir test
$ cd test
$ apptainer shell --nv --home=/work1/my_project_dir \
/wclustre/my_project_dir/images/pytorch-23.12-py3.sif
# check that torch detects the GPU
Apptainer> python
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
>>> import torch
>>> torch.cuda.is_available()
True
>>> ^D
# run the example
Apptainer> python ../examples/mnist/main.py
(the MNIST data set is downloaded to directory ../data)
(training progress is reported)
Train Epoch: 14 [59520/60000 (99%)] Loss: 0.002743
Test set: Average loss: 0.0259, Accuracy: 9917/10000 (99%)
Apptainer>
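The same example can also be run without an interactive container shell by passing the command directly to apptainer exec (a minimal sketch; the paths assume the directory layout used above):
$ apptainer exec --nv --home=/work1/my_project_dir \
    /wclustre/my_project_dir/images/pytorch-23.12-py3.sif \
    python ../examples/mnist/main.py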
Using a TensorFlow container from NVIDIA
The steps needed to obtain and use a TensorFlow image are analogous to the steps for using a PyTorch container. See NGC TensorFlow for the available containers.
The example addition_rnn.py, written in Keras, trains an RNN to perform addition of integers presented as strings.
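For example, a copy of an NGC TensorFlow container can be pulled in the same way as the PyTorch container above (the tag is illustrative; check NGC TensorFlow for current tags):
$ HOME=/scratch/$USER apptainer pull \
    tensorflow-23.12-tf2-py3.sif \
    docker://nvcr.io/nvidia/tensorflow:23.12-tf2-py3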
Additional Information
- Apptainer user guide
- NVIDIA HPC Container Maker — an open source tool to make it easier to generate container specification files.
- Ten simple rules for writing Dockerfiles for reproducible data science — PLoS Comput Biol 16(11): e1008316
- Slack Apptainer
- Slack hpc-containers
- Open Science Grid Containers – Apptainer/Singularity