ALICE (A Large Ion Collider Experiment) is a general-purpose heavy-ion detector at the CERN LHC. It is designed to study the physics of strongly interacting matter, and in particular the properties of the Quark–Gluon Plasma (QGP), using proton–proton, nucleus–nucleus and proton–nucleus collisions at high energies. The ALICE experiment will be upgraded during Long Shutdown 2 (LS2, 2020–2021) in order to exploit the full scientific potential of the future LHC.
Computing upgrade overview
The ALICE computing upgrade addresses the challenge of reading out and inspecting Pb–Pb collisions at rates of 50 kHz, and sampling pp and p–Pb collisions at up to 200 kHz. The resulting data throughput from the detector has been estimated to be greater than 1 TB/s for Pb–Pb events, roughly two orders of magnitude more than in Run 1.
The ALICE Computing Model for Runs 3 and 4 is designed to reduce the data volume read out from the detector as early as possible in the data flow. The zero-suppressed data of all collisions will be shipped to the O² facility.
The data volume reduction will be achieved by reconstructing the data in several steps, synchronously with data taking. For example, the raw data of the TPC (the largest contributor to the data volume) will first be reconstructed rapidly using online cluster finding and a first, fast tracking pass based on an early calibration reflecting average running conditions. Data produced during this stage will be stored temporarily at up to 90 GB/s.
Taking advantage of the duty factor of the accelerator and the experiment, the second reconstruction stage will be performed asynchronously, using the final calibration in order to reach the required data quality.
The O² facility
The O² facility will be a high-throughput system including heterogeneous computing platforms, similar to many high-performance computing centres. Its computing nodes will be equipped with hardware accelerators.
The O² software framework
The O² software framework will provide the necessary abstraction so that common code can deliver the selected functionality on different platforms. The framework will also support a concurrent computing model across a wide spectrum of computing facilities, ranging from laptops to the complete O² system. Off-the-shelf open-source software conforming to open standards will be used as much as possible, both for the development tools and as a basis for the framework itself.
Are you an O2 developer? Please check the O2 developer's entry point for extensive documentation.
Optical Fiber Infrastructure
New optical fiber infrastructure between the detector front-end electronics and the FLPs in CR1:
- Trunk cables and subracks infrastructure between cavern/CR4 and CR1 (see attachment).
- Patch cords mapping in the cavern and CR4.
- Patch cords mapping in CR1.
Schema of the CR1 layout showing the distribution of FLPs and CRUs/CRORCs, and the IPMI, fiber and InfiniBand networks.