{"id":4999,"date":"2023-10-06T16:08:59","date_gmt":"2023-10-06T21:08:59","guid":{"rendered":"https:\/\/computing.fnal.gov\/lqcd\/?page_id=4999"},"modified":"2023-10-10T16:00:32","modified_gmt":"2023-10-10T21:00:32","slug":"mpi-and-binding","status":"publish","type":"page","link":"https:\/\/computing.fnal.gov\/lqcd\/mpi-and-binding\/","title":{"rendered":"MPI: affinity and binding"},"content":{"rendered":"\n<h2 class=\"wp-block-heading has-text-align-left\" id=\"LaunchingMPIprocesses\">Launching MPI processes with srun<\/h2>\n\n\n\n<p>MPI implementations&nbsp;Open MPI,&nbsp;MVAPICH, and Intel MPI&nbsp;are slurm \u201caware\u201d. They will detect slurm and use its services to distribute and start MPI binaries. The slurm&nbsp;<a href=\"https:\/\/slurm.schedmd.com\/srun.html\">srun<\/a> command must be told which API to use for MPI. The command<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ srun --mpi=list\nMPI plugin types are...\n\tpmix\n\tcray_shasta\n\tnone\n\tpmi2\nspecific pmix plugin versions available: pmix_v4,pmix_v5<\/pre>\n\n\n\n<p>lists the supported APIs.<\/p>\n\n\n\n<p>The table below lists recommended launchers for the different MPI implementations. These combinations have been proved to work. Combinations that are not listed either fail, or do not properly launch MPI.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><strong>MPI<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>command<\/strong><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Open MPI<\/td><td class=\"has-text-align-center\" data-align=\"center\">srun <code>--mpi=pmix<\/code><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Intel MPI<\/td><td class=\"has-text-align-center\" data-align=\"center\">srun <code>--mpi=pmi2<\/code><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">nvhpc (Open MPI)<\/td><td class=\"has-text-align-center\" data-align=\"center\">srun <code>--mpi=pmix<\/code><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">MVAPICH<\/td><td class=\"has-text-align-center\" data-align=\"center\">srun <code>--mpi=pmi2<\/code><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading has-text-align-left\">Binding and distribution of tasks<\/h3>\n\n\n\n<p>The srun command provides command line <a href=\"https:\/\/slurm.schedmd.com\/srun.html\">options<\/a> to specify the distribution and binding of MPI ranks to CPU cores and local memory. Careful specification of the distribution and affinities is especially important when running MPI in the hybrid approach combining MPI with thread parallelism. TU Dresden has a nice <a href=\"https:\/\/doc.zih.tu-dresden.de\/jobs_and_resources\/binding_and_distribution_of_tasks\/\">compendium<\/a> illustrating different CPU MPI rank+threads distribution and binding options for MPI.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">LQ2 GPU workers<\/h3>\n\n\n\n<p>Each LQ2 worker is equipped with four <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/a100\/\">NVIDIA A100-80<\/a> GPU devices interconnected by an NVLink mesh. The system is a dual socket with 3rd Gen. <a href=\"https:\/\/www.amd.com\/en\/products\/cpu\/amd-epyc-7543\">AMD EPYC 7543<\/a> 32-Core Processors (64 codes total). Each worker has two InfiniBand adapters. The figure below shows the topology reported by the <code>hwloc-ls<\/code> command.<\/p>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/computing.fnal.gov\/lqcd\/wp-content\/uploads\/2023\/10\/hwloc-lq2-worker.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of hwloc-lq2-worker.\"><\/object><a id=\"wp-block-file--media-7128cd09-709a-44ce-8e17-3d9e653ba4f8\" href=\"https:\/\/computing.fnal.gov\/lqcd\/wp-content\/uploads\/2023\/10\/hwloc-lq2-worker.pdf\">hwloc-lq2-worker<\/a><a href=\"https:\/\/computing.fnal.gov\/lqcd\/wp-content\/uploads\/2023\/10\/hwloc-lq2-worker.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-7128cd09-709a-44ce-8e17-3d9e653ba4f8\">Download<\/a><\/div>\n\n\n\n<p>The <code>nvidia-smi <\/code>command is used to interrogate the affinities between each GPU and system resources.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ nvidia-smi topo -m\n\tGPU0\tGPU1\tGPU2\tGPU3\tNIC0\tNIC1\tCPU Affinity\tNUMA Affinity\tGPU NUMA ID\nGPU0\t X \tNV4\tNV4\tNV4\tPXB\tSYS\t0-31\t0\t\tN\/A\nGPU1\tNV4\t X \tNV4\tNV4\tPXB\tSYS\t0-31\t0\t\tN\/A\nGPU2\tNV4\tNV4\t X \tNV4\tSYS\tPXB\t32-63\t1\t\tN\/A\nGPU3\tNV4\tNV4\tNV4\t X \tSYS\tPXB\t32-63\t1\t\tN\/A\nNIC0\tPXB\tPXB\tSYS\tSYS\t X \tSYS\nNIC1\tSYS\tSYS\tPXB\tPXB\tSYS\t X\n\nLegend:\n  X    = Self\n  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI\/UPI)\n  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)\n  PIX  = Connection traversing at most a single PCIe bridge\n  NV#  = Connection traversing a bonded set of # NVLinks\n\nNIC Legend:\n  NIC0: mlx5_0\n  NIC1: mlx5_1<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Example LQ2 batch script<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>#! \/bin\/bash\n#SBATCH --account=yourAccountName\n#SBATCH --qos=normal\n#SBATCH --partition=lq2_gpu\n#SBATCH --nodes=2\n#SBATCH --ntasks-per-node=4\n#SBATCH --gpus-per-task=1\n#SBATCH --cpus-per-task=16\n#SBATCH --time=00:10:00\n\nmodule purge\nmodule load gompi ucx_cuda ucc_cuda\n\n# enable RDMA and performance tuning options\nexport QUDA_ENABLE_GDR=1\nexport UCX_IB_GPU_DIRECT_RDMA=yes\nexport UCX_MAX_RNDV_RAILS=1\nexport UCX_RNDV_THRESH=1mb\n\nbin=\/project\/admin\/benchmark_FNAL\/el8\/x86_64\/apps\/xthi\/build_gnu12_cuda12_ompi\/xthi-gpu\nargs=\"\"\n\n(( nthreads = SLURM_CPUS_PER_TASK ))\nexport OMP_NUM_THREADS=${nthreads}\n\ncat \/project\/admin\/benchmark_FNAL\/el8\/x86_64\/apps\/xthi\/build_gnu12_cuda12_ompi\/gpu-topo.txt\n\nif &#91; ${SLURM_NTASKS_PER_NODE} -eq 1 ] ; then\n    cpumask=\"0x000000000000FFFF\"\nelse\n    cpumask=\"0x000000000000FFFF,0x00000000FFFF0000,0x0000FFFF00000000,0xFFFF000000000000\"\nfi\n\nbind=\"--gpu-bind=none --cpus-per-task=${SLURM_CPUS_PER_TASK} --cpu-bind=mask_cpu:${cpumask}\"\ncmd=\"srun --mpi=pmix ${bind} ${bin} ${args}\"\necho CMD: ${cmd}\n${cmd}\necho\n\necho BATCH JOB EXIT\nexit 0<\/code><\/pre>\n\n\n\n<p>Here is the batch output from the script above<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>GPU    bus-id    CPU-affinity  preferred-NIC  NUMA-affinity\n---    --------  ------------  -------------  -------------\n 0     00:2F:00     0-31          mlx5_0      0\n 1     00:30:00     0-31          mlx5_0      0\n 2     00:AF:00     32-63         mlx5_1      1\n 3     00:B0:00     32-63         mlx5_1      1\n\nCMD: srun --mpi=pmix --gpu-bind=none --cpus-per-task=16 --cpu-bind=mask_cpu:0x000000000000FFFF,0x00000000FFFF0000,0x0000FFFF00000000,0xFFF\\\nF000000000000 \/project\/admin\/benchmark_FNAL\/el8\/x86_64\/apps\/xthi\/build_gnu12_cuda12_ompi\/xthi-gpu\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 0 CPU= 0 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 1 CPU=15 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 2 CPU= 6 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 3 CPU=11 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 4 CPU= 1 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 5 CPU=14 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 6 CPU= 5 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 7 CPU=10 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 8 CPU= 2 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread= 9 CPU=13 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread=10 CPU= 4 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread=11 CPU= 9 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread=12 CPU= 3 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread=13 CPU=12 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread=14 CPU= 7 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=0 OMP-Thread=15 CPU= 8 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 0 CPU=16 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 1 CPU=27 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 2 CPU=29 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 3 CPU=20 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 4 CPU=17 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 5 CPU=26 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 6 CPU=30 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 7 CPU=21 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 8 CPU=19 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread= 9 CPU=25 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread=10 CPU=31 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread=11 CPU=22 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread=12 CPU=18 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread=13 CPU=24 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread=14 CPU=28 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=1 OMP-Thread=15 CPU=23 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 0 CPU=32 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 1 CPU=37 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 2 CPU=45 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 3 CPU=33 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 4 CPU=39 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 5 CPU=44 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 6 CPU=41 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 7 CPU=35 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 8 CPU=40 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread= 9 CPU=38 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread=10 CPU=46 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread=11 CPU=34 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread=12 CPU=43 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread=13 CPU=36 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread=14 CPU=47 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=2 OMP-Thread=15 CPU=41 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 0 CPU=48 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 1 CPU=57 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 2 CPU=60 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 3 CPU=52 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 4 CPU=50 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 5 CPU=56 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 6 CPU=61 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 7 CPU=53 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 8 CPU=51 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread= 9 CPU=58 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread=10 CPU=62 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread=11 CPU=55 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread=12 CPU=49 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread=13 CPU=59 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread=14 CPU=63 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu03 MPI-Rank=3 OMP-Thread=15 CPU=54 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 0 CPU= 0 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 1 CPU=11 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 2 CPU= 6 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 3 CPU=15 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 4 CPU= 1 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 5 CPU= 8 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 6 CPU= 5 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 7 CPU=14 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 8 CPU= 2 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread= 9 CPU=10 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread=10 CPU= 4 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread=11 CPU=13 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread=12 CPU= 3 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread=13 CPU= 9 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread=14 CPU= 7 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=4 OMP-Thread=15 CPU=12 NUMA-Node=0 CPU-Affinity= 0-15 GPU-IDs=00:2F:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 0 CPU=17 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 1 CPU=28 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 2 CPU=23 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 3 CPU=26 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 4 CPU=18 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 5 CPU=30 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 6 CPU=25 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 7 CPU=22 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 8 CPU=19 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread= 9 CPU=29 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread=10 CPU=27 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread=11 CPU=20 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread=12 CPU=16 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread=13 CPU=31 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread=14 CPU=24 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=5 OMP-Thread=15 CPU=21 NUMA-Node=0 CPU-Affinity=16-31 GPU-IDs=00:30:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 0 CPU=32 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 1 CPU=36 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 2 CPU=42 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 3 CPU=44 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 4 CPU=33 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 5 CPU=39 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 6 CPU=40 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 7 CPU=47 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 8 CPU=38 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread= 9 CPU=34 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread=10 CPU=43 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread=11 CPU=45 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread=12 CPU=37 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread=13 CPU=41 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread=14 CPU=46 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=6 OMP-Thread=15 CPU=35 NUMA-Node=1 CPU-Affinity=32-47 GPU-IDs=00:AF:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 0 CPU=57 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 1 CPU=51 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 2 CPU=52 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 3 CPU=58 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 4 CPU=60 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 5 CPU=48 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 6 CPU=55 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 7 CPU=62 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 8 CPU=59 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread= 9 CPU=49 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread=10 CPU=54 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread=11 CPU=63 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread=12 CPU=56 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread=13 CPU=50 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread=14 CPU=53 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\nHost=lq2gpu04 MPI-Rank=7 OMP-Thread=15 CPU=61 NUMA-Node=1 CPU-Affinity=48-63 GPU-IDs=00:B0:00\n\nBATCH JOB EXIT<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">LQ1 CPU-only workers<\/h3>\n\n\n\n<p>Each LQ1 worker is a dual core system with Intel &#8220;Cascade Lake&#8221; <a href=\"https:\/\/ark.intel.com\/content\/www\/us\/en\/ark\/products\/192446\/intel-xeon-gold-6248-processor-27-5m-cache-2-50-ghz.html\">Xeon Gold 6248<\/a> CPUs. Each system has a total of 40 cores. The hardware topology is shown in the diagram below generated by <code>hwloc-ls<\/code>.<\/p>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/computing.fnal.gov\/lqcd\/wp-content\/uploads\/2023\/10\/hwloc-lq1-worker.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of hwloc-lq1-worker.\"><\/object><a id=\"wp-block-file--media-88622954-1d00-4968-a7eb-147d6947393c\" href=\"https:\/\/computing.fnal.gov\/lqcd\/wp-content\/uploads\/2023\/10\/hwloc-lq1-worker.pdf\">hwloc-lq1-worker<\/a><a href=\"https:\/\/computing.fnal.gov\/lqcd\/wp-content\/uploads\/2023\/10\/hwloc-lq1-worker.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-88622954-1d00-4968-a7eb-147d6947393c\">Download<\/a><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">Example LQ1 batch script<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>#! \/bin\/bash\n#SBATCH --account=yourAccountName\n#SBATCH --qos=normal\n#SBATCH --partition=lq1_cpu\n#SBATCH --nodes=2\n#SBATCH --ntasks-per-node=8\n#SBATCH --cpus-per-task=5\n#SBATCH --time=00:10:00\n\nmodule purge\nmodule load gompi\n\nbin=\/project\/admin\/benchmark_FNAL\/el8\/x86_64\/apps\/xthi\/build_gnu12_cuda12_ompi\/xthi-cpu\nargs=\"\"\n\n(( nthreads = SLURM_CPUS_PER_TASK ))\nexport OMP_NUM_THREADS=${nthreads}\n\nbind=\"--cpus-per-task=${SLURM_CPUS_PER_TASK}\"\ncmd=\"srun --mpi=pmix ${bind} ${bin} ${args}\"\necho CMD: ${cmd}\n${cmd}\n\necho\necho BATCH JOB EXIT\nexit 0<\/code><\/pre>\n\n\n\n<p>Here is the batch output from running this script<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>CMD: srun --mpi=pmix --cpus-per-task=5 \/project\/admin\/benchmark_FNAL\/el8\/x86_64\/apps\/xthi\/build_gnu12_cuda12_ompi\/xthi-cpu\nHost=lq1wn001  MPI Rank= 0  OMP Thread=0  CPU= 0  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn001  MPI Rank= 0  OMP Thread=1  CPU= 2  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn001  MPI Rank= 0  OMP Thread=2  CPU= 4  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn001  MPI Rank= 0  OMP Thread=3  CPU= 3  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn001  MPI Rank= 0  OMP Thread=4  CPU= 1  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn001  MPI Rank= 1  OMP Thread=0  CPU=20  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn001  MPI Rank= 1  OMP Thread=1  CPU=22  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn001  MPI Rank= 1  OMP Thread=2  CPU=24  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn001  MPI Rank= 1  OMP Thread=3  CPU=23  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn001  MPI Rank= 1  OMP Thread=4  CPU=21  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn001  MPI Rank= 2  OMP Thread=0  CPU= 6  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn001  MPI Rank= 2  OMP Thread=1  CPU= 9  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn001  MPI Rank= 2  OMP Thread=2  CPU= 5  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn001  MPI Rank= 2  OMP Thread=3  CPU= 8  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn001  MPI Rank= 2  OMP Thread=4  CPU= 7  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn001  MPI Rank= 3  OMP Thread=0  CPU=25  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn001  MPI Rank= 3  OMP Thread=1  CPU=27  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn001  MPI Rank= 3  OMP Thread=2  CPU=28  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn001  MPI Rank= 3  OMP Thread=3  CPU=29  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn001  MPI Rank= 3  OMP Thread=4  CPU=26  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn001  MPI Rank= 4  OMP Thread=0  CPU=10  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn001  MPI Rank= 4  OMP Thread=1  CPU=14  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn001  MPI Rank= 4  OMP Thread=2  CPU=12  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn001  MPI Rank= 4  OMP Thread=3  CPU=11  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn001  MPI Rank= 4  OMP Thread=4  CPU=13  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn001  MPI Rank= 5  OMP Thread=0  CPU=31  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn001  MPI Rank= 5  OMP Thread=1  CPU=33  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn001  MPI Rank= 5  OMP Thread=2  CPU=34  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn001  MPI Rank= 5  OMP Thread=3  CPU=30  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn001  MPI Rank= 5  OMP Thread=4  CPU=32  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn001  MPI Rank= 6  OMP Thread=0  CPU=16  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn001  MPI Rank= 6  OMP Thread=1  CPU=18  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn001  MPI Rank= 6  OMP Thread=2  CPU=19  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn001  MPI Rank= 6  OMP Thread=3  CPU=15  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn001  MPI Rank= 6  OMP Thread=4  CPU=17  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn001  MPI Rank= 7  OMP Thread=0  CPU=36  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn001  MPI Rank= 7  OMP Thread=1  CPU=38  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn001  MPI Rank= 7  OMP Thread=2  CPU=39  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn001  MPI Rank= 7  OMP Thread=3  CPU=35  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn001  MPI Rank= 7  OMP Thread=4  CPU=37  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn006  MPI Rank= 8  OMP Thread=0  CPU= 1  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn006  MPI Rank= 8  OMP Thread=1  CPU= 0  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn006  MPI Rank= 8  OMP Thread=2  CPU= 3  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn006  MPI Rank= 8  OMP Thread=3  CPU= 2  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn006  MPI Rank= 8  OMP Thread=4  CPU= 4  NUMA Node=0  CPU Affinity=  0-4\nHost=lq1wn006  MPI Rank= 9  OMP Thread=0  CPU=21  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn006  MPI Rank= 9  OMP Thread=1  CPU=20  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn006  MPI Rank= 9  OMP Thread=2  CPU=23  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn006  MPI Rank= 9  OMP Thread=3  CPU=24  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn006  MPI Rank= 9  OMP Thread=4  CPU=22  NUMA Node=1  CPU Affinity=20-24\nHost=lq1wn006  MPI Rank=10  OMP Thread=0  CPU= 6  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn006  MPI Rank=10  OMP Thread=1  CPU= 5  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn006  MPI Rank=10  OMP Thread=2  CPU= 7  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn006  MPI Rank=10  OMP Thread=3  CPU= 9  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn006  MPI Rank=10  OMP Thread=4  CPU= 8  NUMA Node=0  CPU Affinity=  5-9\nHost=lq1wn006  MPI Rank=11  OMP Thread=0  CPU=25  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn006  MPI Rank=11  OMP Thread=1  CPU=29  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn006  MPI Rank=11  OMP Thread=2  CPU=27  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn006  MPI Rank=11  OMP Thread=3  CPU=26  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn006  MPI Rank=11  OMP Thread=4  CPU=28  NUMA Node=1  CPU Affinity=25-29\nHost=lq1wn006  MPI Rank=12  OMP Thread=0  CPU=10  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn006  MPI Rank=12  OMP Thread=1  CPU=13  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn006  MPI Rank=12  OMP Thread=2  CPU=12  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn006  MPI Rank=12  OMP Thread=3  CPU=14  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn006  MPI Rank=12  OMP Thread=4  CPU=11  NUMA Node=0  CPU Affinity=10-14\nHost=lq1wn006  MPI Rank=13  OMP Thread=0  CPU=30  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn006  MPI Rank=13  OMP Thread=1  CPU=33  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn006  MPI Rank=13  OMP Thread=2  CPU=34  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn006  MPI Rank=13  OMP Thread=3  CPU=32  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn006  MPI Rank=13  OMP Thread=4  CPU=31  NUMA Node=1  CPU Affinity=30-34\nHost=lq1wn006  MPI Rank=14  OMP Thread=0  CPU=15  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn006  MPI Rank=14  OMP Thread=1  CPU=16  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn006  MPI Rank=14  OMP Thread=2  CPU=18  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn006  MPI Rank=14  OMP Thread=3  CPU=17  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn006  MPI Rank=14  OMP Thread=4  CPU=19  NUMA Node=0  CPU Affinity=15-19\nHost=lq1wn006  MPI Rank=15  OMP Thread=0  CPU=39  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn006  MPI Rank=15  OMP Thread=1  CPU=38  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn006  MPI Rank=15  OMP Thread=2  CPU=37  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn006  MPI Rank=15  OMP Thread=3  CPU=36  NUMA Node=1  CPU Affinity=35-39\nHost=lq1wn006  MPI Rank=15  OMP Thread=4  CPU=35  NUMA Node=1  CPU Affinity=35-39<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Launching MPI processes with srun MPI implementations&nbsp;Open MPI,&nbsp;MVAPICH, and Intel MPI&nbsp;are slurm \u201caware\u201d. They will detect slurm and use its services to distribute and start MPI binaries. The slurm&nbsp;srun command must be told which API to use for MPI. The command $ srun &#8211;mpi=list MPI plugin types are&#8230; pmix cray_shasta none pmi2 specific pmix plugin&#8230; <a class=\"more-link\" href=\"https:\/\/computing.fnal.gov\/lqcd\/mpi-and-binding\/\"> More &#187;<\/a><\/p>\n","protected":false},"author":25,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-4999","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/pages\/4999","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/users\/25"}],"replies":[{"embeddable":true,"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/comments?post=4999"}],"version-history":[{"count":22,"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/pages\/4999\/revisions"}],"predecessor-version":[{"id":5236,"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/pages\/4999\/revisions\/5236"}],"wp:attachment":[{"href":"https:\/\/computing.fnal.gov\/lqcd\/wp-json\/wp\/v2\/media?parent=4999"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}