# Detector commissioning, operation and data processing

Louvain-La-Neuve

## CP3 - Research directions and experiments
Modern experimental particle physics requires extremely complex detectors, readout electronics and associated services (e.g. power supply, gas, cooling and safety systems). The behavior of a newly built detector must be deeply understood before its data can be used to extract physics measurements. This process, called "commissioning", is carried out through functionality tests of increasing complexity, aimed at delivering a device with an understood response in its final working environment. Researchers at CP3 have been involved in the commissioning of several large-scale detectors (e.g. the CMS silicon tracker and the NA62 Gigatracker) and prototypes (e.g. the Calice detector and test-beam devices).

After commissioning, operation of a complex particle detector is only possible if tools to configure, control and monitor the entire detecting system are developed and deployed. We have experience in the development of such monitoring tools (from Detector Control and Safety to Data Quality Monitoring) and are taking an active role in day-to-day operations of detectors in our facilities and at CERN (both for technical and coordination aspects).

Operation goes beyond these purely "online" activities.
Many stages of data processing are needed to go from the raw data produced by particle detectors (and their associated auxiliary systems) to a physics measurement. These aspects are handled "offline", and in some cases they later feed back into online activities. The quality and precision of physics measurements depend heavily on the following items:
• Data reconstruction methods:
These transform the generally large amount of raw detector data into information about the identity and kinematic properties of particles.
• Calibration and alignment:
Detectors and higher-level reconstructed data need to be tuned in order to yield accurate results.
• Trigger:
The statistics available for an offline analysis, as well as the ability to accurately estimate detector acceptances, event-selection inefficiencies and backgrounds, depend on the quality of the experiment's online event selection, called the trigger.
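As a toy illustration of how these three items interact (all numbers, thresholds and function names below are invented, not any experiment's actual code), a minimal offline chain could look like:

```python
# Toy offline chain: raw data -> reconstruction -> calibration -> trigger-style
# selection. Invented numbers; illustrates the roles, not a real experiment.

RAW_EVENTS = [
    {"hits": [12.1, 11.8, 12.4]},   # ADC counts from a hypothetical tracking layer
    {"hits": [40.2, 39.9, 41.0]},
    {"hits": [5.0, 5.3]},
]

ADC_TO_GEV = 0.5        # hypothetical calibration constant, tuned on control data
TRIGGER_THRESHOLD = 10  # GeV: online-style selection keeps energetic candidates

def reconstruct(event):
    """Turn raw hits into one candidate 'particle' energy (toy clustering)."""
    return sum(event["hits"])

def calibrate(raw_energy):
    """Apply the calibration mapping detector units to physical units."""
    return raw_energy * ADC_TO_GEV

def trigger(energy_gev):
    """Yes/no online decision; its inefficiency biases offline statistics."""
    return energy_gev > TRIGGER_THRESHOLD

selected = []
for event in RAW_EVENTS:
    energy = calibrate(reconstruct(event))
    if trigger(energy):
        selected.append(energy)
print(selected)  # only events passing the toy trigger survive
```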

The large amount of data produced by modern high-energy physics experiments, together with the complexity of the detectors, requires complex computing solutions (both hardware- and software-wise) to perform the data processing steps outlined above. For that purpose, we have deployed and maintain a large-scale computing cluster.

## Projects

• The CMS silicon strip tracker is the largest device of its type ever built, with 24244 single-sided micro-strip sensors covering an active area of 198 m².
The physics performance of the detector is constantly assessed and optimized as new data come in.
Members of UCL play a major role in the understanding of the silicon strip tracker and in the maintenance and development of the local reconstruction code.

• A framework for Fast Simulation of particle interactions in the CMS detector (FastSim) has been developed and implemented in the overall simulation, reconstruction and analysis framework of CMS. It produces data samples in the same format as the one used by the Geant4-based (henceforth Full) Simulation and Reconstruction chain; the output of the Fast Simulation of CMS can therefore be used in the analysis in the same way as data and Full Simulation samples. FastSim is used in several physics analyses in CMS, in particular those requiring a generation of many samples to scan an extended parameter space of the physics model (e.g. SUSY) or for the purpose of estimating systematic uncertainties. It is also used by several groups to design future sub-detectors for the Phase-II CMS upgrades.
Related activities at UCL include the integration with the Full Simulation in the simulation of the electronic read-out ("digitization") and of the pileup of events from other proton-proton collisions, both in-time and out-of-time; performance monitoring; and the overall maintenance and upgrade of the tracking-related code. Matthias Komm is the current L3 convener of Tracking in FastSim, and Andrea Giammanco was the main person responsible for the FastSim project from 2011 to 2013.
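The pileup mixing mentioned above can be sketched as follows. This is a toy model, not CMS FastSim code, and the mean pileup value is invented: the number of extra proton-proton interactions overlaid on a simulated event is Poisson-distributed, both for the triggering bunch crossing (in-time pileup) and for neighbouring crossings whose signals leak into the readout window (out-of-time pileup).

```python
# Toy pileup mixing sketch (not CMS FastSim code; MEAN_PILEUP is invented).
import math
import random

random.seed(2024)

MEAN_PILEUP = 20.0                   # assumed average interactions per crossing
BUNCH_CROSSINGS = [-2, -1, 0, 1, 2]  # 0 = in-time, others = out-of-time

def poisson(mu):
    """Knuth's algorithm: multiply uniforms until the product drops below e^-mu."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def sample_pileup_mix():
    """Number of overlaid interactions to simulate for each bunch crossing."""
    return {bx: poisson(MEAN_PILEUP) for bx in BUNCH_CROSSINGS}

print(sample_pileup_mix())  # five counts, one per bunch crossing
```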

• The Gigatracker (GTK) is at the core of one of the spectrometers used in NA62. It is composed of three planes of silicon pixel detectors assembled in the traditional way: readout electronics bump-bonded onto silicon sensors. Each plane consists of 18000 pixels of 300 um x 300 um, arranged in 45 columns and read out by 10 chips. One particularity of this sensor is that its timing resolution must be better than 200 ps in order to cope with the high expected rate (800 MHz). Another is its operation in vacuum.

CP3 is involved in several aspects of the production and operation of this detector:

1) Production of the 25 GTK stations to be used during the NA62 <latex>$K^+\to\pi^+\nu\bar{\nu}$</latex> run.

2) Operation of the GTK during data taking: time and spatial calibration, efficiency studies, effects of radiation, etc.

3) Reconstruction of track candidates and simulation.

4) Simulation of the signal development in the sensor, using both commercial programs (e.g. TCAD by Synopsys) and software developed in-house to study the expected signal.
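A toy calculation illustrates why the per-plane timing requirement quoted above matters for time calibration: a track time is the average of the three GTK plane measurements, so its resolution improves as sigma_plane / sqrt(3); for a 200 ps single-plane resolution this gives about 115 ps. The numbers below, apart from that 200 ps spec, are invented.

```python
# Toy timing sketch (not NA62 code): average three smeared plane times and
# verify that the track-time resolution scales as sigma_plane / sqrt(3).
import math
import random

random.seed(1)
SIGMA_PLANE = 0.200  # ns, assumed single-plane time resolution (design spec)

def track_time(true_time):
    """Smear the true crossing time in each of the three planes and average."""
    return sum(random.gauss(true_time, SIGMA_PLANE) for _ in range(3)) / 3

residuals = [track_time(0.0) for _ in range(20000)]
mean = sum(residuals) / len(residuals)
sigma = math.sqrt(sum((r - mean) ** 2 for r in residuals) / len(residuals))
print(f"track-time resolution: {sigma * 1000:.0f} ps")  # close to 200/sqrt(3) ~ 115 ps
```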

• In July 2018, CP3 members joined the Virgo Collaboration at the European Gravitational Observatory (EGO) near Pisa, Italy. Virgo is the European laser interferometer for gravitational-wave detection. After several years of instrument upgrades, Virgo entered observation mode in August 2017, about a year and a half after the two LIGO interferometers in the US had detected gravitational waves for the first time. Virgo and LIGO work in close collaboration, sharing data, analysing them and publishing together. Fundamental research in experimental gravitational-wave physics was funded for the first time in Belgium at the end of 2018, with a project led by UCLouvain and ULiege. On the data-analysis side, the plan is, on the one hand, to investigate the properties of binary black hole coalescence events, possibly relating them to theoretical models of dark matter and/or primordial black holes, and on the other, to search for a stochastic gravitational-wave background originating from the very early moments of the life of the Universe, a discovery that would be foundational for cosmology. On the instrumentation side, contributions to computing and to the optical system of the Virgo interferometer are planned.
CP3 members also actively support the Einstein Telescope, a proposed underground laser interferometer for gravitational-wave detection that is expected to take over from LIGO and Virgo around 2030.

• The general goal of this project is to develop muon-based radiography or tomography ("muography"), an innovative multidisciplinary approach to the study of large-scale natural or man-made structures, establishing a strong synergy between particle physics and other disciplines such as geology and archaeology.
Muography is an imaging technique that relies on measuring the absorption of muons produced by the interactions of cosmic rays with the atmosphere.
Applications span from geophysics (the study of the interior of mountains and the remote, quasi-online monitoring of active volcanoes) to archaeology and mining.
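The absorption measurement can be sketched numerically: comparing the muon count observed through the target with the open-sky expectation gives a transmission, and inverting an absorption law yields the opacity (amount of matter) along the line of sight. The simple exponential model and all numbers below are illustrative, not a calibrated flux model.

```python
# Toy muography sketch: infer opacity from muon transmission.
# Exponential absorption law and attenuation length are illustrative only.
import math

OPEN_SKY_COUNT = 10000.0   # muons expected with no target in the way
LAMBDA = 300.0             # hypothetical attenuation length, metres water equivalent

def transmission(observed, expected=OPEN_SKY_COUNT):
    """Fraction of muons surviving the traversal of the target."""
    return observed / expected

def opacity(observed):
    """Invert the toy absorption law N = N0 * exp(-opacity / LAMBDA)."""
    return -LAMBDA * math.log(transmission(observed))

print(opacity(3679))  # ~300 m.w.e. for a transmission of ~1/e
```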

We are part of the EU-funded H2020 network INTENSE, where we coordinate the Muography work package, which brings together particle physicists, geophysicists, archaeologists, civil engineers and private companies for the development and exploitation of this imaging method.

We use the local facilities at CP3 for the development of high-resolution portable detectors.
We also participate in the MURAVES collaboration, now merged into the MIVAS collaboration, contributing to algorithmic and data-analysis aspects such as the implementation of time-of-flight capabilities, the analysis of control data for the optimization of the reconstruction algorithms, and the understanding of physics and instrumental backgrounds through data-driven and simulation techniques.

• We contribute to the offline absolute calibration of the luminometry system of the CMS detector, by analysing the dedicated "Van der Meer scan" data at different center-of-mass energies and collision types (p-p, p-Pb, Pb-Pb).

As a related task, we also contribute to the data-driven inference of the true number of "pile-up" collisions.
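Illustrative arithmetic links the two tasks. A Van der Meer scan yields the beam-overlap widths Sigma_x, Sigma_y and the peak pileup mu_peak, from which the visible cross section follows as sigma_vis = 2*pi * Sigma_x * Sigma_y * mu_peak / (N1 * N2); that cross section then converts a per-bunch instantaneous luminosity into an average pileup. All input values below are invented, not CMS results.

```python
# Van der Meer arithmetic with invented inputs (not CMS calibration values).
import math

SIGMA_X = 1.0e-2     # cm, horizontal overlap width from the scan fit
SIGMA_Y = 1.1e-2     # cm, vertical overlap width
MU_PEAK = 2.0        # interactions per crossing at zero beam separation
N1 = N2 = 1.1e11     # protons per bunch

# Visible cross section of the luminometer
sigma_vis = 2 * math.pi * SIGMA_X * SIGMA_Y * MU_PEAK / (N1 * N2)  # cm^2
print(f"sigma_vis = {sigma_vis / 1e-27:.1f} mb")

# Average pileup for a hypothetical per-bunch instantaneous luminosity
F_REV = 11245.0      # Hz, LHC revolution frequency
LUMI_BUNCH = 2.0e30  # cm^-2 s^-1 per colliding bunch pair (hypothetical)
mu = sigma_vis * LUMI_BUNCH / F_REV
print(f"average pileup mu = {mu:.1f}")
```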

• NA62 will look for rare kaon decays at the SPS accelerator at CERN. A total of about $10^{12}$ kaon decays will be produced in two to three years of data taking. Even though the topology of the events is relatively simple and the amount of information per event is small, the volume of data to be stored per year will be of the order of 1000 TB. In addition, about 500 TB/year is expected from simulation.

Profiting from the synergy inside CP3 in sharing computing resources, our group is participating in the definition of the NA62 computing scheme. CP3 will also be one of the grid sites of the experiment's virtual organization.

• The CMS detector at the LHC can be used to identify particles via the measurement of their ionization energy loss. The sub-detectors that have so far provided useful information for this experimental technique are the silicon strip tracker and the pixel detectors. Identification of low-momentum hadrons and detection of new exotic massive long-lived charged particles have both benefited from this method. Members of UCL pioneered this technique in the early days of the LHC and have been developing the tools for its use and calibration. Since 2010, particle identification with ionization energy loss has been the basis of the CMS inclusive search for new massive long-lived charged particles, which has provided the most stringent and model-independent limits to date on any model of new physics predicting such particles.
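A commonly used parametrization in dE/dx analyses of this kind writes the ionization estimator as I = K * m^2 / p^2 + C in the momentum range where energy loss falls as 1/beta^2, so a measured momentum and dE/dx yield a mass estimate. The sketch below uses placeholder values for K and C, not the calibrated CMS constants.

```python
# Toy mass estimate from ionization energy loss, I = K*m^2/p^2 + C.
# K and C are placeholder calibration values (not CMS's).
import math

K = 2.5   # MeV/cm * (GeV/c)^2, placeholder calibration constant
C = 3.0   # MeV/cm, plateau term, placeholder calibration constant

def mass_from_dedx(p, dedx):
    """Invert I = K*m^2/p^2 + C for the particle mass (GeV)."""
    if dedx <= C:
        raise ValueError("dE/dx at or below plateau: no mass estimate")
    return p * math.sqrt((dedx - C) / K)

# A heavily ionizing track at high momentum suggests a massive long-lived
# charged particle rather than an ordinary hadron.
print(mass_from_dedx(p=500.0, dedx=5.0))  # mass in GeV
```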

• We are involved in the activities of the b-tag POG (Physics Object Group) of CMS, in release and data validation and in purity measurements. We are also interested in b-tagging in special cases such as collinear b-jets. Furthermore, we are involved in the re-optimization and improvement of the Combined Secondary Vertex (CSV) tagger for the 2012 analyses.

• The Worldwide LHC Computing Grid (WLCG) is a worldwide distributed computing infrastructure, controlled by software middleware, that allows seamless usage of shared storage and computing resources.

About 10 PB of data are produced every year by the experiments running at the LHC. These data must be processed (through iterative and refined calibration and analysis) by a large scientific community that is widely distributed geographically.

Instead of concentrating all necessary computing resources in a single location, the LHC experiments have decided to set up a network of computing centres distributed all over the world.

The overall WLCG computing resources needed by the CMS experiment alone in 2016 amount to about 1500 kHS06 (HEP-SPEC06 units) of computing power, 90 PB of disk storage and 150 PB of tape storage. Working in the context of the WLCG translates into seamless access to shared computing and storage resources. End users do not need to know where their applications run: the choice is made by the underlying WLCG software on the basis of the availability of resources, the demands of the user application (CPU, input and output data, etc.) and the privileges owned by the user.
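The site-selection criteria just described can be caricatured in a few lines. This is a toy matchmaking sketch, not actual WLCG middleware: the site names follow the Tier-2 naming used in this project, but all resource numbers and field names are invented.

```python
# Toy grid matchmaking: route a job to a site by free resources, the job's
# declared needs, and the user's virtual-organization (VO) authorization.
# Not WLCG middleware; all numbers and field names invented.

SITES = {
    "T2_BE_UCL":  {"free_cores": 120, "free_disk_tb": 40, "vos": {"cms", "na62"}},
    "T2_BE_IIHE": {"free_cores": 10,  "free_disk_tb": 80, "vos": {"cms"}},
}

def match_site(job):
    """Return the matching site with the most free cores, or None."""
    candidates = [
        name for name, site in SITES.items()
        if site["free_cores"] >= job["cores"]
        and site["free_disk_tb"] >= job["disk_tb"]
        and job["vo"] in site["vos"]           # privilege check
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda name: SITES[name]["free_cores"])

job = {"vo": "cms", "cores": 16, "disk_tb": 1}
print(match_site(job))  # the user never chooses the site explicitly
```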

Back in 2005, UCL proposed the WLCG Belgian Tier2 project, involving the six Belgian universities participating in CMS. The Tier2 project consists of contributing to the WLCG by building two computing centres, one at UCL and one at the IIHE (ULB/VUB).

The UCL site of the WLCG Belgian Tier2 is deployed in a dedicated room close to the cyclotron control room of the IRMP Institute and is currently a fully functional component of the WLCG.

The UCL Belgian Tier2 project also aims to integrate other scientific computing projects, bring them onto the Grid, and share resources with them. The projects currently integrated in the UCL computing cluster are MadGraph/MadEvent, NA62 and Cosmology.
