Computing Facilities and Middleware

 

Active Archive Facility

Fermilab offers researchers access to remote storage of scientific data through the Strategic Partnership Project mechanism. The “active archive infrastructure” technologies used by Fermilab leverage the wide-area transfer protocols and cached storage systems developed by the high energy physics community. These protocols are designed to move large volumes of data quickly, efficiently, and without error between sites on the global network, and they are integrated with Fermilab’s high-capacity hierarchical storage archive service. The archive facility at Fermilab can provide access over a 100 Gb/s network to hundreds of petabytes of data, with long-term persistence and custodial care and with high-performance retrieval. Read more about Active Archive Facility.
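The “without error” guarantee rests on end-to-end checksumming: a checksum computed at the source is compared with one computed at the destination after the transfer. The following is a minimal illustrative sketch (not Fermilab’s actual transfer code) of a chunked Adler-32 computation of the kind commonly used to verify large files; the file paths are hypothetical.

    import zlib

    def adler32_of_file(path, chunk_size=8 * 1024 * 1024):
        """Compute an Adler-32 checksum over a file in 8 MB chunks,
        so arbitrarily large files are verified in constant memory."""
        checksum = 1  # Adler-32 seed value
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                checksum = zlib.adler32(chunk, checksum)
        return checksum & 0xFFFFFFFF

    # Hypothetical usage: compare checksums at both ends of a transfer
    # and retransmit if they disagree.
    if adler32_of_file("source.dat") != adler32_of_file("copy.dat"):
        print("transfer corrupted; retransmit")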

 

CMS Tier-1 Center

The CMS experiment uses a globally distributed computing model. Data is initially processed at CERN and then distributed to regional Tier-1 centers around the world. The Tier-1 centers provide archival and active storage as well as processing for simulation, analysis, and further data reconstruction, and they distribute data to, and collect data from, other Tier-1 and Tier-2 centers. Fermilab has built and operates the largest CMS Tier-1 computing center in the world. In addition to its Tier-1 role, the center provides analysis computing resources for the LHC Physics Center at Fermilab and for collaborating U.S. universities. The CMS Tier-1 center at Fermilab is mission critical to the success of the experiment.

 

Data Storage and Handling

Fermilab provides a custodial active archive for long-term storage of tens of petabytes of scientific data. Services are provided both for on-site direct access to files on tape and for on- and off-site access to files through a disk cache front end to the tape storage. The tape storage system, Enstore, was developed at Fermilab. Enstore is integrated with the dCache disk-caching software, and the two share a common namespace, PNFS/Chimera. Read more about Data Storage and Handling.
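Because PNFS/Chimera presents tape-backed storage as an ordinary filesystem namespace, on-site code can open such files like any other file, with dCache staging data from tape into its disk cache as needed. A minimal sketch, using an invented experiment name and path:

    import os

    # Hypothetical path: the PNFS/Chimera namespace exposes tape-backed
    # files as an ordinary directory tree on on-site nodes.
    DATA_FILE = "/pnfs/example_experiment/raw/run001.dat"

    # The read may block while dCache stages the file from tape into
    # its disk cache; subsequent reads are then served from disk.
    with open(DATA_FILE, "rb") as f:
        header = f.read(1024)  # read the first kilobyte

    print(f"read {len(header)} bytes from {os.path.basename(DATA_FILE)}")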

 

Data Centers

Fermilab provides safe, high-quality operations for its mission-critical data centers, located at the Feynman Computing Center and the Grid Computing Center. These data centers are critical to the lab’s scientific mission: experiments depend on them for a reliable, accessible place to store data. Read more about Data Centers.

 

Fabric for Frontier Experiments (FIFE)

FabrIc for Frontier Experiments (FIFE) provides collaborative scientific data processing solutions for Intensity Frontier experiments. FIFE draws on the collective experience of current and past experiments to offer options for designing an experiment’s offline computing. FIFE is modular, so experiments can take what they need, and new tools from outside communities can be incorporated as they develop. FIFE is built on common toolsets wherever possible to increase flexibility, provide for efficient evolution, and reduce the maintenance load. Read more about FIFE.

 

General Purpose Grid

Fermilab operates a general-purpose grid cluster, FermiGrid, that is shared by many experiments to run their physics jobs. Grid computing is a form of distributed computing in which multiple clusters of nodes work together to complete tasks. Physicists submit jobs (the computer programs that extract physics results from data) to the grid, which determines which resources are free and uses those nodes to process the jobs.
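The core idea is matchmaking between queued jobs and idle worker nodes. FermiGrid itself runs production batch middleware; the toy sketch below, with all job and node names invented, only illustrates the dispatch logic described above:

    from collections import deque

    # Toy model only: real grid middleware also handles priorities,
    # fair-share accounting, authentication, and data placement.
    free_nodes = {"node01", "node02", "node03"}
    job_queue = deque(["reco_run42", "sim_batch7", "analysis_v3", "calib_pass2"])

    running = {}
    while job_queue and free_nodes:
        job = job_queue.popleft()   # take the next waiting job
        node = free_nodes.pop()     # pick any currently idle node
        running[job] = node         # dispatch the job to that node

    print("dispatched:", running)
    print("still queued:", list(job_queue))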

 

HEPCloud

Fermilab is pursuing a new paradigm in particle physics computing through a single managed portal (“HEPCloud”) that will allow more scientists, experiments, and projects to use more resources to extract more science, without the need for expert knowledge. HEPCloud will provide cost-effective access by optimizing usage across all available types of computing resources and will elastically expand the resource pool on short notice (e.g., by renting temporary resources on commercial clouds). This new elasticity, together with the transparent accessibility of resources, will change the way experiments use computing resources to produce physics results. Read more about HEPCloud.
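At its simplest, the elasticity is a provisioning decision: when demand exceeds the locally owned pool, acquire temporary slots (for example from a commercial cloud) and release them when demand falls. A toy sketch of that decision, with invented slot counts and thresholds:

    # Toy model of elastic provisioning; all numbers are invented.
    LOCAL_SLOTS = 10_000        # slots in the locally owned pool
    CLOUD_SLOT_LIMIT = 50_000   # cap on temporarily rented slots

    def slots_to_rent(queued_jobs: int) -> int:
        """Rent only the shortfall beyond the local pool, up to the cap."""
        shortfall = max(0, queued_jobs - LOCAL_SLOTS)
        return min(shortfall, CLOUD_SLOT_LIMIT)

    for demand in (4_000, 12_000, 80_000):
        print(f"{demand} queued jobs -> rent {slots_to_rent(demand)} cloud slots")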

 

High Performance Computing

Modern scientific computing increasingly depends on heterogeneous computing architectures, particularly GPUs and low-latency interconnects. For example, Lattice Quantum Chromodynamics, or Lattice QCD, relies on high-performance computing and advanced software to provide precision calculations of the properties of particles that contain quarks and gluons. Fermilab is home to one of the US LQCD collaboration’s high-performance computing sites, which uses advanced hardware architectures. Fermilab currently operates both a modern mid-scale HPC cluster used preferentially for LQCD and a general-purpose HPC cluster offering a variety of architectures and processor types. Read more about High Performance Computing.
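Lattice QCD discretizes spacetime onto a four-dimensional grid, and its dominant operations couple each lattice site to its nearest neighbors; when the lattice is split across many nodes, every iteration exchanges boundary (“halo”) data, which is why low-latency interconnects matter. Below is a minimal single-node sketch of a nearest-neighbor update on a periodic 4D lattice, using a toy scalar field rather than an actual QCD action:

    import numpy as np

    # Toy scalar field on a small periodic 4D lattice; real LQCD uses
    # SU(3) matrices on links and far larger lattices spread over nodes.
    shape = (8, 8, 8, 8)
    rng = np.random.default_rng(seed=0)
    phi = rng.standard_normal(shape)

    def neighbor_sum(field):
        """Sum of the 8 nearest neighbors (forward and backward in each
        of the 4 dimensions), with periodic boundary conditions."""
        total = np.zeros_like(field)
        for axis in range(4):
            total += np.roll(field, +1, axis=axis)
            total += np.roll(field, -1, axis=axis)
        return total

    # One Jacobi-style relaxation sweep: replace each site by the average
    # of its neighbors. On a multi-node machine, each np.roll across a
    # node boundary would imply a halo exchange over the interconnect.
    phi = neighbor_sum(phi) / 8.0
    print("field mean after one sweep:", phi.mean())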