Lattice QCD Computing
Quantum chromodynamics (QCD) is the study of how quarks and gluons interact through the strong force. It requires studying interactions at distances smaller than the diameter of a proton, about 10^-15 meters.
Experimental physicists do not observe quarks in isolation in their detectors; instead, they observe quarks bound together into particles such as protons, kaons, and pions. Predicting the properties of these particles requires Lattice QCD.
In Lattice QCD computations, physicists replace continuous space-time with a four-dimensional lattice representing the three dimensions of space plus one dimension of time. The space-time box is made large enough for a proton to fit inside. Markov chain Monte Carlo simulations evolve the QCD gauge fields through a fictitious simulation "time" sequence. Each gauge configuration file captures a snapshot of this evolution. Quark interactions, such as two- and three-point functions, must be computed on every gauge configuration and then averaged over the whole set of configurations to produce quantities such as particle masses or decay rates.
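The averaging step can be illustrated with a toy ensemble. This sketch (not an actual LQCD code; the correlator data, the toy mass of 0.5 in lattice units, and the noise level are all invented for illustration) averages a two-point function over many "configurations" and then extracts an effective mass from neighboring time slices:

```python
import math
import random

random.seed(1)

# Toy stand-in for one gauge configuration's measurement: a two-point
# correlator C(t) that decays like exp(-m*t) plus statistical noise.
# The mass m = 0.5 and noise level are assumed values for illustration.
def measure_correlator(n_t=8, mass=0.5, noise=0.02):
    return [math.exp(-mass * t) * (1.0 + random.gauss(0.0, noise))
            for t in range(n_t)]

# Average the correlator over the whole set of configurations, as the
# text describes, before extracting a physical quantity.
n_configs = 200
ensemble = [measure_correlator() for _ in range(n_configs)]
avg = [sum(c[t] for c in ensemble) / n_configs for t in range(8)]

# Effective mass from neighboring time slices: m_eff(t) = ln(C(t)/C(t+1)).
# On the averaged data this recovers the toy mass to good accuracy.
m_eff = [math.log(avg[t] / avg[t + 1]) for t in range(7)]
```

Real analyses use far more configurations and resampling methods (jackknife or bootstrap) to estimate statistical errors, but the structure — measure on each configuration, then average — is the same.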
Physicists use Lattice QCD to make predictions of masses and decay rates. They then compare those predictions to measurements from experiments. Physicists look carefully for any inconsistencies between experiment and the theoretical predictions. Such inconsistencies might be an exciting hint of new physics beyond the Standard Model.
Lattice QCD codes spend much of their time inverting very large, very sparse matrices. For example, a 48x48x48x144 problem, typical for current simulations, has a complex matrix of size 47.8 million x 47.8 million. The matrix has 1.15 billion non-zero elements, so only about one in every 2 million entries is non-zero.
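These counts follow directly from the lattice geometry. The arithmetic below reproduces them, assuming three color components per site (the counting that matches the quoted 47.8 million) and couplings to the 8 nearest neighbors in the four dimensions:

```python
# Degrees of freedom for a 48x48x48x144 lattice with 3 color components
# per site (this counting matches the figures quoted in the text).
sites = 48 * 48 * 48 * 144
rank = 3 * sites                 # one 3-component complex vector per site

# Each site couples to its 8 nearest neighbors (+/-x, +/-y, +/-z, +/-t)
# through a 3x3 complex matrix: 8 * 3 = 24 non-zeros per matrix row.
nonzeros = rank * 8 * 3
density = nonzeros / (rank * rank)

print(rank)          # 47,775,744 ~ 47.8 million
print(nonzeros)      # 1,146,617,856 ~ 1.15 billion
print(1 / density)   # ~ 2 million entries per non-zero
```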
Iterative techniques such as “conjugate gradient” are used to perform these inversions. Nearly all of the floating-point operations are multiplications of 3x3 complex matrices with 3x1 complex vectors; the matrices describe the gluons, and the vectors describe the quarks. On a single computer, memory bandwidth limits the speed of the calculation.
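A minimal sketch of the conjugate gradient iteration shows why the matrix-vector product dominates: it is the only place the matrix appears. This toy version (using a small random Hermitian positive-definite complex matrix as a stand-in for the enormous lattice Dirac operator) solves A x = b given only a function that applies A:

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for Hermitian positive-definite A, given only the
    matrix-vector product apply_A -- the dominant cost in LQCD inverters."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # residual
    p = r.copy()                # search direction
    rr = np.vdot(r, r).real
    for _ in range(max_iter):
        Ap = apply_A(p)         # the expensive step: one matvec per iteration
        alpha = rr / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rr_new = np.vdot(r, r).real
        if rr_new < tol ** 2:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Toy Hermitian positive-definite complex matrix, 12x12 for illustration
# (a real lattice Dirac system is tens of millions on a side).
rng = np.random.default_rng(0)
n = 12
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B.conj().T @ B + n * np.eye(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = conjugate_gradient(lambda v: A @ v, b)
```

Because each iteration streams the whole matrix through memory once while doing only a few arithmetic operations per element loaded, the loop is limited by memory bandwidth rather than peak Flops, as the text notes.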
Individual LQCD calculations require many TFlop/sec-years of computation and can be achieved only by using large-scale parallel machines.
The 4-dimensional Lattice QCD simulations are divided across hundreds to many thousands of cores. On each iteration of the inverter, every core exchanges the data on the faces of its 4D sub-volume with its nearest neighbors. The codes employ MPI or other message-passing libraries for these communications. Networks such as InfiniBand provide the required high bandwidth and low latency.
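The per-iteration face exchange can be emulated without MPI. This sketch (an illustration only, not production LQCD code) decomposes a 1-D periodic field across four "ranks", each holding its interior cells plus one ghost (halo) cell at each end; in a real 4-D code the copies below would be MPI send/receive calls over the network, one per face of the sub-volume:

```python
import numpy as np

def exchange_halos(subdomains):
    """Fill each rank's halo cells from its neighbors' boundary interior
    cells, with periodic wraparound -- the pattern a real code implements
    with MPI point-to-point messages on every inverter iteration."""
    n = len(subdomains)
    for i, local in enumerate(subdomains):
        left = subdomains[(i - 1) % n]
        right = subdomains[(i + 1) % n]
        local[0] = left[-2]    # receive left neighbor's last interior cell
        local[-1] = right[1]   # receive right neighbor's first interior cell

# A global field of 16 cells, split across 4 ranks: each rank stores
# 4 interior cells flanked by 2 halo cells (indices 0 and -1).
global_field = np.arange(16, dtype=float)
parts = [np.empty(6) for _ in range(4)]
for i, p in enumerate(parts):
    p[1:-1] = global_field[4 * i: 4 * i + 4]

exchange_halos(parts)
# After the exchange, each rank can apply a nearest-neighbor stencil to
# its interior cells without any further communication.
```

The same idea extends to four dimensions, where each sub-volume has eight faces to exchange; the surface-to-volume ratio of the sub-volumes determines the communication-to-computation balance, which is why low latency and high bandwidth matter.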
Fermilab USQCD Facilities
Fermilab, together with Jefferson Lab and Brookhaven Lab, operates dedicated facilities for USQCD, the national collaboration of lattice theorists, as part of the DOE Office of Science LQCD-ext Project.
The Fermilab “Ds” cluster has 13,472 cores and uses QDR InfiniBand. Commissioned in December 2010, it delivers 21.5 TFlop/sec sustained for LQCD calculations; an earlier 7,680-core version was #218 on the November 2010 Top500 list. The Fermilab “J/Psi” cluster has 6,848 cores, uses DDR InfiniBand, and was #111 on the June 2009 Top500 list. Commissioned in January 2009, it delivers 8.4 TFlop/sec sustained for LQCD calculations. Fermilab has also just deployed a GPU cluster with 152 NVIDIA M2050 GPUs in 76 hosts, coupled by QDR InfiniBand. It was designed to optimize strong scaling and will be used for problems requiring large GPU counts.
As high performance computing continues to grow in importance, we also provide guidance and operations support for diverse scientific areas such as computational cosmology, accelerator modeling, and electromagnetic cavity design. We anticipate that other areas, such as Monte Carlo simulations for physics and detector design, will adopt our high performance computing expertise as well.
More Information: Fermilab Lattice Gauge Theory Computational Facility, Fermilab Lattice QCD Computing Hardware, Fermilab LQCD Cluster Status, USQCD Home, SciDAC Lattice Program
Last updated by cdweb on 10/11/2012