HEP Event Reconstruction with Cutting Edge Computing Architectures

Event reconstruction is the process of interpreting the electronic signals produced in a high-energy physics (HEP) experiment’s detector to determine which particles passed through the detector and to measure their properties.

The goal of the three-year SciDAC project, HEP Event Reconstruction with Cutting Edge Computing Architectures, is to boost the utilization of new computing architectures in HEP event reconstruction, particularly for LHC experiments and for neutrino experiments using Liquid Argon Time Projection Chamber (LArTPC) detectors.

Fermilab and the University of Oregon will collaborate to identify key algorithms in the experiments’ reconstruction workflows and optimize them for execution on parallel architectures. Algorithms will be chosen based on their importance for the physics outcome of the experiment and on their leading contribution to computing time, such as track reconstruction in collider experiments. Using advanced profiling tools and development techniques, including autotuning, the project will maximize the throughput of these algorithms on the leading parallel architectures and explore portable implementations for use on supercomputers and heterogeneous platforms. The optimized versions will then be deployed in the experiments’ reconstruction software. This project will provide key input to the process of defining the computing needs of reconstruction software for next-generation HEP experiments such as the High-Luminosity Large Hadron Collider (HL-LHC) and the Deep Underground Neutrino Experiment (DUNE).
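To make the optimization target concrete, the following is a minimal sketch, not project code: it propagates many track candidates in parallel using a structure-of-arrays layout, the data organization that typically lets compilers vectorize such loops on many-core CPUs and maps naturally onto GPUs. The straight-line propagation model and all names are simplifying assumptions for illustration; production tracking uses a Kalman filter.

    // Minimal C++ sketch of a data-parallel tracking kernel. Real track
    // reconstruction uses a Kalman filter; a straight-line propagation to a
    // cylindrical detector layer stands in here so that the parallel
    // structure, not the physics, is the point. All names are illustrative.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Structure-of-arrays layout: one contiguous array per track parameter,
    // which allows the compiler to vectorize the inner loop.
    struct TrackSoA {
        std::vector<float> x, y, px, py;  // transverse position and momentum
    };

    // Move every candidate to the cylinder of radius r. Assumes each track
    // starts inside the cylinder with nonzero transverse momentum.
    void propagate_to_radius(TrackSoA& t, float r) {
        const std::size_t n = t.x.size();
        #pragma omp parallel for simd
        for (std::size_t i = 0; i < n; ++i) {
            const float p  = std::sqrt(t.px[i] * t.px[i] + t.py[i] * t.py[i]);
            const float ux = t.px[i] / p, uy = t.py[i] / p;      // unit direction
            const float b  = t.x[i] * ux + t.y[i] * uy;          // pos . dir
            const float d2 = t.x[i] * t.x[i] + t.y[i] * t.y[i];  // |pos|^2
            // Positive root of |pos + s*dir|^2 = r^2 gives the path length s.
            const float s  = -b + std::sqrt(b * b + r * r - d2);
            t.x[i] += s * ux;
            t.y[i] += s * uy;
        }
    }

Because each iteration is independent, the same loop can be distributed across threads, SIMD lanes, or GPU blocks, which is what makes kernels of this shape good targets for throughput optimization and autotuning.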

Objectives

The goal of this pilot project is to boost the utilization of new computing architectures in HEP reconstruction, particularly for LHC experiments and for neutrino experiments using LArTPC detectors. The objectives are as follows:

  1. Identify a key component or algorithm of event reconstruction to address; it must be a fundamental step for the physics outcome of the experiment and a leading contributor to computing time.
  2. Optimize its performance on the leading parallel architectures (Intel KNL, GPUs) and explore its use on supercomputers and heterogeneous platforms (see the portability sketch after this list).
  3. Integrate the new version into the experiments’ reconstruction software. The code needs to be sustainable (i.e., efficiently maintainable with the experiments’ resources).
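As a hypothetical illustration of objective 2, single-source approaches such as C++17 parallel algorithms let one kernel run multithreaded on CPUs with a standard compiler and, with some toolchains, be offloaded to GPUs. The toy per-hit workload and its constants below are assumptions, not experiment code:

    // Minimal sketch of a portable, single-source kernel using C++17
    // parallel algorithms. The per-hit "calibration" is a made-up toy;
    // the point is one independent operation per element, the pattern
    // that maps onto both CPU SIMD units and GPU threads.
    #include <algorithm>
    #include <cstddef>
    #include <execution>
    #include <numeric>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 20;
        std::vector<float> charge(n, 1.0f), energy(n);

        // Element-wise transform; par_unseq permits both threading and
        // vectorization, and some compilers can offload it to a GPU.
        std::transform(std::execution::par_unseq,
                       charge.begin(), charge.end(), energy.begin(),
                       [](float q) { return 0.05f + 23.6f * q; });

        // Parallel reduction so the result is actually consumed.
        const float total = std::reduce(std::execution::par_unseq,
                                        energy.begin(), energy.end(), 0.0f);
        return total > 0.0f ? 0 : 1;
    }

With GCC’s libstdc++ these execution policies dispatch to Intel TBB, while NVIDIA’s nvc++ with -stdpar can offload the same source to a GPU; evaluating single-source approaches like this, alongside dedicated portability layers, is part of the exploration this objective describes.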

Acknowledgments

This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program.