This project brings together HEP scientists from P5-critical areas and ASCR scientists conducting the latest research in advanced computing to advance physics goals, taking significant steps toward making available an unprecedented number of computing cycles at state-of-the-art HPC centers by:
- removing computing barriers within current HEP workflows
- solving complex optimization issues
- and blending HPC facility software and services directly into HEP applications.
The combination of simultaneous processing of workflow stages, dynamic resource scheduling, and new layers of storage and memory will yield substantial improvements in the efficiency of carrying out complex, real-world analyses.
The goal of our HEP science team from the Energy (EF) and Intensity (IF) Frontiers is to extend the physics reach of LHC and neutrino physics experiments in three key areas:
- event generator tuning,
- neutrino oscillation measurements,
- and detector simulation tuning.
Our ASCR science team, drawing on applied mathematics and data analytics, will transform how these physics tasks are carried out in three areas:
- high-dimensional parameter fitting,
- workflow automation,
- and introduction of HPC data resources into HEP applications.
This work includes enabling high-level control of analyses that involve optimization steering and high-dimensional parameter estimation, using data from experiments and models from theory. These tools are aligned with the computing needs of the High-Luminosity LHC (HL-LHC) and DUNE era, and build on ASCR strategic capabilities.
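To make the parameter-estimation task concrete, the sketch below shows the kind of fit involved: adjusting simulation parameters until simulated observables match measured ones via derivative-free least squares. The toy "simulator", the data values, and the use of SciPy's least_squares in place of the project's optimization tools (such as POUNDERs, introduced below) are illustrative assumptions only.

```python
# Minimal sketch of a high-dimensional parameter fit: tune simulation
# parameters so that simulated observables match measured ones.
# The quadratic "simulator" and the data are purely illustrative stand-ins.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(seed=1)
true_params = np.array([1.5, -0.7, 0.3])      # hypothetical "physics" parameters
bin_centers = np.linspace(0.0, 1.0, 25)       # observable bins

def simulate(params):
    """Toy surrogate for an expensive generator/detector simulation."""
    a, b, c = params
    return a + b * bin_centers + c * bin_centers**2

measured = simulate(true_params) + rng.normal(scale=0.02, size=bin_centers.size)
sigma = np.full_like(measured, 0.02)          # assumed per-bin uncertainties

def residuals(params):
    """Weighted residuals between simulation and measurement, the quantity a
    derivative-free least-squares optimizer drives toward zero."""
    return (simulate(params) - measured) / sigma

fit = least_squares(residuals, x0=np.zeros(3), method="trf")
print("best-fit parameters:", fit.x)
```

In the project's real analyses, simulate() stands for a full generator or detector simulation whose cost is what motivates the HPC resources and optimization tools described here.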
This project builds upon the principles and practices of foundational HEP data analysis tools in use by multiple experiments, including PYTHIA8, art, and Geant, along with established statistical techniques such as Feldman-Cousins. These tools and techniques provide a solid comparison baseline, so the end products can be demonstrated to have broader applicability beyond the chosen physics goals and experiments. We will extend this approach to the HPC tools used within this project, introducing the Argonne-developed Decaf for workflow management, DIY for implementing analysis tasks, data services such as Mochi, and optimization tools such as POUNDERs. Our team is highly experienced in software engineering best practices, iterative development, and large-scale system design and architecture using C++, Python, and other analysis languages within collaborative scientific settings.
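As an example of the baseline techniques referenced above, the following is a minimal toy sketch of the Feldman-Cousins unified construction for a Poisson counting experiment with known background. The background level, observed count, and signal grid are illustrative assumptions only; the experiments' production Feldman-Cousins analyses involve full oscillation fits over large ensembles of toy experiments and are far more computationally demanding.

```python
# Toy Feldman-Cousins (unified) interval for a Poisson counting experiment
# with known background. All numbers here are illustrative.
import numpy as np
from scipy.stats import poisson

def fc_interval(n_obs, background, cl=0.90, mu_grid=None):
    """Return the unified (Feldman-Cousins) interval for the signal mean."""
    if mu_grid is None:
        mu_grid = np.linspace(0.0, 15.0, 1501)
    accepted = []
    n_values = np.arange(0, 200)
    for mu in mu_grid:
        p = poisson.pmf(n_values, mu + background)
        # Likelihood-ratio ordering: compare each n to its best
        # physically-allowed signal hypothesis.
        mu_best = np.maximum(n_values - background, 0.0)
        r = p / poisson.pmf(n_values, mu_best + background)
        order = np.argsort(-r)                  # rank n by decreasing ratio
        cum = np.cumsum(p[order])
        in_belt = np.zeros_like(p, dtype=bool)
        in_belt[order[: np.searchsorted(cum, cl) + 1]] = True
        if in_belt[n_obs]:
            accepted.append(mu)
    return min(accepted), max(accepted)

lo, hi = fc_interval(n_obs=4, background=3.0)
print(f"90% CL signal interval: [{lo:.2f}, {hi:.2f}]")
```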
Objectives
Our high-level objectives are to:
- exploit HPC facilities (compute, memory, and storage) to meet new HEP data analysis demands, enabling computationally expensive physics studies to be completed on time-scales not currently feasible;
- use new computational techniques and tools to advance the physics goals for ATLAS, CMS, NOvA, and DUNE, bringing ASCR parameter fitting optimizations to HEP;
- blend ASCR research and tools into HEP collaborative software infrastructure, introducing changes to the experimental science community through demonstrated improvements and permitting analysis automation and computational steering with feedback (a minimal illustration of such a steering loop follows this list);
- demonstrate the utility of co-locating and dynamically scheduling traditionally distinct HEP processing tiers, by providing tools that utilize both experimental measurements and data from simulations.
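The sketch below illustrates the steering-with-feedback pattern referred to above: an outer loop proposes parameter points, a pool of workers evaluates the expensive simulation concurrently, and the results feed back to steer the next batch. The toy objective, batch size, and use of a local process pool as a stand-in for HPC workflow and scheduling tools (such as Decaf) are assumptions for illustration, not project code.

```python
# Sketch of an optimization-steering loop with feedback: propose, evaluate
# concurrently, use results to steer the next batch. Illustrative only.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def expensive_simulation(params):
    """Toy stand-in for a generator/detector simulation returning a figure of merit."""
    x = np.asarray(params)
    return float(np.sum((x - 0.5) ** 2))

def propose_batch(rng, best, scale, size=8):
    """Propose candidate parameter points around the current best (random search)."""
    return [best + rng.normal(scale=scale, size=best.size) for _ in range(size)]

def steer(n_iterations=20, dim=5, seed=0):
    rng = np.random.default_rng(seed)
    best = rng.uniform(size=dim)
    best_score = expensive_simulation(best)
    scale = 0.5
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(n_iterations):
            batch = propose_batch(rng, best, scale)
            scores = list(pool.map(expensive_simulation, batch))  # concurrent evaluations
            i = int(np.argmin(scores))
            if scores[i] < best_score:          # feedback: accept the improvement
                best, best_score = batch[i], scores[i]
            else:                               # feedback: narrow the search
                scale *= 0.7
    return best, best_score

if __name__ == "__main__":
    params, score = steer()
    print("steered best score:", score)
```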