Detector commissioning, operation and data processing

Modern experimental particle physics requires the use of extremely complex detectors, readout electronics and associated services (e.g. power supply, gas, cooling and safety systems). The behavior of a newly built detector must be thoroughly understood before its data can be used to extract physics measurements. This process, called "commissioning", is carried out through functionality tests of increasing complexity, aiming to deliver a device with an understood response in its final working environment. Researchers at CP3 have been involved in the commissioning of several large-scale detectors (e.g. the CMS silicon tracker, the NA62 Gigatracker) and prototypes (e.g. the Calice detector, test-beam devices, etc.).

After commissioning, operating a complex particle detector is only possible if tools to configure, control and monitor the entire detecting system are developed and deployed. We have experience in the development of such tools (from Detector Control and Safety to Data Quality Monitoring) and take an active role in the day-to-day operation of detectors in our facilities and at CERN (covering both technical and coordination aspects).

Operation goes beyond these purely "online" activities.
Many stages of data processing are needed to go from the raw data produced by particle detectors (and their associated auxiliary systems) to a physics measurement. These aspects are handled "offline", and in some cases they later feed back into online activities. The quality and precision of physics measurements depend heavily on the following items:
  • Data reconstruction methods:
    These transform the typically very large volume of raw detector data into information about the identities and kinematic properties of particles.
  • Calibration and alignment:
    Both the detectors and the higher-level reconstructed data must be tuned in order to yield accurate results.
  • Trigger:
    The statistics available to an offline analysis, as well as the ability to accurately estimate detector acceptances, event-selection inefficiencies and backgrounds, depend on the quality of the experiment's online event selection, called the trigger (see the schematic relation below).
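
Schematically, all three ingredients enter a textbook-level cross-section measurement (a generic relation, not specific to any single analysis):

\[
\sigma = \frac{N_{\text{obs}} - N_{\text{bkg}}}{A \cdot \varepsilon_{\text{trig}} \cdot \varepsilon_{\text{sel}} \cdot \mathcal{L}}
\]

where $N_{\text{obs}}$ is the number of selected events, $N_{\text{bkg}}$ the estimated background, $A$ the detector acceptance, $\varepsilon_{\text{trig}}$ and $\varepsilon_{\text{sel}}$ the trigger and offline selection efficiencies, and $\mathcal{L}$ the integrated luminosity: reconstruction determines $N_{\text{obs}}$, calibration and alignment determine $A$ and the efficiencies, and the trigger determines $\varepsilon_{\text{trig}}$.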

The large amount of data produced by modern high-energy physics experiments, as well as the complexity of the detectors, requires complex computing solutions (both in hardware and in software) to perform the data processing steps outlined above. For that purpose, we have deployed and maintain a large-scale computing cluster.

Projects

The CMS silicon strip tracker is the largest device of its type ever built. There are 24244 single-sided micro-strip sensors covering an active area of 198 m².
The physics performance of the detector is constantly assessed and optimized as new data come in.
Members of UCL play a major role in the understanding of the silicon strip tracker and in the maintenance and development of the local reconstruction code.

External collaborators: CMS tracker collaboration.

A framework for Fast Simulation of particle interactions in the CMS detector (FastSim) has been developed and implemented in the overall simulation, reconstruction and analysis framework of CMS. It produces data samples in the same format as the one used by the Geant4-based (henceforth Full) Simulation and Reconstruction chain; the output of the Fast Simulation of CMS can therefore be used in the analysis in the same way as data and Full Simulation samples. FastSim is used in several physics analyses in CMS, in particular those requiring a generation of many samples to scan an extended parameter space of the physics model (e.g. SUSY) or for the purpose of estimating systematic uncertainties. It is also used by several groups to design future sub-detectors for the Phase-II CMS upgrades.
Related activities at UCL include the integration with the Full Simulation in the simulation of the electronic read-out ("digitization") and of the pileup of events from other proton-proton collisions, both in-time and out-of-time; the performance monitoring; and the overall maintenance and upgrade of the tracking-related code. Matthias Komm is the current L3 convener of Tracking in FastSim, and Andrea Giammanco was the main person responsible for the FastSim project from 2011 to 2013.
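
Conceptually, a fast simulation replaces the detailed transport of particles through the detector material with parametrized detector responses. The sketch below illustrates the idea with a Gaussian smearing of the generated track transverse momentum; the resolution function is a purely illustrative assumption, not the actual CMS parametrization:

```python
import numpy as np

def fast_sim_track_pt(gen_pt, rng):
    """Emulate a reconstructed track pT by smearing the generated pT.

    The relative resolution model (1% constant term plus a term growing
    linearly with pT) is an illustrative placeholder, not CMS's.
    """
    rel_resolution = 0.01 + 1e-4 * gen_pt  # hypothetical parametrization
    return rng.normal(loc=gen_pt, scale=gen_pt * rel_resolution)

rng = np.random.default_rng(seed=42)
gen_pt = np.array([10.0, 50.0, 200.0])     # generated pT in GeV
reco_pt = fast_sim_track_pt(gen_pt, rng)   # "fast-simulated" pT
print(reco_pt)
```

Producing the output in the same format as the full chain is what makes such samples interchangeable with data and Full Simulation in downstream analysis code.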

The Gigatracker (GTK) is at the core of one of the spectrometers used in NA62. It is composed of three planes of silicon pixel detectors assembled in the traditional way: readout electronics bump-bonded onto silicon sensors. Each plane is composed of 18000 pixels of 300 μm × 300 μm, arranged in 45 columns and read out by 10 chips. A particularity of this sensor is that its timing resolution must be better than 200 ps in order to cope with the high expected beam rate (800 MHz). Another particularity is its operation in vacuum.
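
The timing requirement follows directly from the quoted rate: at 800 MHz, the mean time separation between consecutive beam particles is

\[
\langle \Delta t \rangle = \frac{1}{800\ \text{MHz}} = 1.25\ \text{ns},
\]

so a resolution well below this value, such as the required 200 ps, is needed to associate each kaon measured in the GTK with the correct downstream decay products and keep the mismatch probability small.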

CP3 is involved in several aspects of the production and operation of this detector:

1) Production of the 25 GTK stations that will be used during the NA62 data-taking runs.

2) Operation of the GTK during data taking: time and spatial calibration, efficiency studies, effects of radiation, etc.

3) Reconstruction of track candidates, and simulation.

4) Modelling of the signal development in the sensor. We use both commercial programs (e.g. TCAD by Synopsys) and software developed in-house to study the expected signal in this sensor.
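
As an illustration of the kind of calculation involved, the current induced on a readout electrode by a drifting charge is given by the Shockley-Ramo theorem, $i = q\,\vec{v}\cdot\vec{E}_w$, where $\vec{E}_w$ is the weighting field. A minimal 1D sketch for a planar sensor, assuming a constant drift velocity and the simple planar weighting field $E_w = 1/d$ (real studies use TCAD field maps), could look like:

```python
import numpy as np

# Shockley-Ramo in 1D: i(t) = q * v(t) * E_w, with E_w = 1/d for a
# planar electrode. All numbers below are illustrative order-of-magnitude
# values, not the actual GTK sensor parameters.
Q = 1.6e-19 * 22000        # MIP charge in ~300 um of silicon [C]
D = 300e-6                 # sensor thickness [m]
V_DRIFT = 1.0e5            # simplified constant drift velocity [m/s]
E_WEIGHT = 1.0 / D         # planar weighting field [1/m]

t = np.linspace(0.0, D / V_DRIFT, 100)              # drift across the bulk
current = Q * V_DRIFT * E_WEIGHT * np.ones_like(t)  # flat current pulse

collected = np.trapz(current, t)  # integral equals the deposited charge Q
print(f"collection time: {D / V_DRIFT * 1e9:.1f} ns, charge: {collected:.2e} C")
```

The flat pulse is an artifact of the constant-velocity assumption; with a realistic field and mobility model the pulse shape, and hence the achievable time resolution, changes significantly.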

We contribute to the offline absolute calibration of the luminometry system of the CMS detector, by analysing the dedicated "Van der Meer scan" data at different center-of-mass energies and collision types (p-p, p-Pb, Pb-Pb).
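
For reference, with factorizable (e.g. Gaussian-like) transverse beam profiles, the visible cross section of a luminometer is extracted from the scan as

\[
\sigma_{\text{vis}} = \frac{2\pi\, \Sigma_x \Sigma_y\, \mu_{\text{vis}}^{\text{max}}}{N_1 N_2},
\]

where $\Sigma_x$ and $\Sigma_y$ are the effective convolved beam widths obtained from the scan curves, $\mu_{\text{vis}}^{\text{max}}$ is the peak visible interaction rate per bunch crossing at zero beam separation, and $N_1$, $N_2$ are the bunch populations.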

As a related task, we also contribute to the data-driven inference of the true amount of "pile-up" collisions.
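
The number of pile-up interactions per bunch crossing, $n$, follows a Poisson distribution,

\[
P(n \mid \mu) = \frac{\mu^n e^{-\mu}}{n!}, \qquad \mu = \frac{\sigma_{\text{inel}}\, \mathcal{L}_{\text{bunch}}}{f_{\text{rev}}},
\]

with mean $\mu$ set by the inelastic cross section $\sigma_{\text{inel}}$, the per-bunch instantaneous luminosity $\mathcal{L}_{\text{bunch}}$ and the LHC revolution frequency $f_{\text{rev}}$; the data-driven task is to infer the true $\mu$ from the observed detector activity.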

External collaborators: CMS Luminosity Physics Object Group.

NA62 will look for rare kaon decays at the SPS accelerator at CERN. A total of about $10^{12}$ kaon decays will be produced in two to three years of data taking. Even though the topology of the events is relatively simple and the amount of information per event small, the volume of data to be stored per year will be of the order of 1000 TB. In addition, about 500 TB/year is expected from simulation.

Profiting from the synergy within CP3 in sharing computing resources, our group participates in the definition of the NA62 computing scheme. CP3 will also be one of the sites supporting the grid virtual organization of the experiment.

External collaborators: INFN (Rome I), University of Birmingham, University of Glasgow.

The CMS detector at the LHC can be used to identify particles via the measurement of their ionization energy loss. The sub-detectors that have so far provided useful information for this experimental technique are the silicon strip tracker and the pixel detector. The identification of low-momentum hadrons and the detection of new exotic massive long-lived charged particles have both benefited from this method. Members of UCL pioneered this technique in the early days of the LHC and have been developing the tools for its use and calibration. Since 2010, particle identification with ionization energy loss has been the basis of the CMS inclusive search for new massive long-lived charged particles, which has provided the most stringent and model-independent limits to date on models of new physics predicting such particles.
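
A common way to combine the per-hit charge measurements of a track into a single dE/dx estimate is a generalized (harmonic) mean, which tames the Landau tail of individual deposits; the CMS dE/dx publications use, among others, a harmonic mean of order -2, together with the empirical relation $I = K m^2/p^2 + C$ to estimate the particle mass. A minimal sketch, with placeholder calibration constants rather than the CMS ones:

```python
import numpy as np

def harmonic2_dedx(charges):
    """Harmonic mean of order -2 of per-hit dE/dx measurements,
    suppressing the high Landau tail of individual deposits."""
    c = np.asarray(charges, dtype=float)
    return (np.mean(c ** -2.0)) ** -0.5

def mass_estimate(dedx, p, K=2.6, C=3.0):
    """Invert I = K * m^2 / p^2 + C for the particle mass.
    K and C are calibration constants; the numbers here are
    placeholders, not the CMS calibration."""
    if dedx <= C:
        return None  # on the minimum-ionizing plateau: no mass estimate
    return p * np.sqrt((dedx - C) / K)

hits = [3.1, 2.8, 3.5, 9.0, 3.0, 2.9]  # MeV/cm, one Landau-like outlier
I = harmonic2_dedx(hits)
print(I, mass_estimate(I, p=0.7))      # momentum in GeV/c
```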

External collaborators: CMS collaboration.

The detection of TeV muons is a fundamental ingredient of a number of key analyses performed by the CMS experiment at the LHC, such as searches for new high-mass resonances decaying into two muons or into a muon and a neutrino. Muons with an energy of a few hundred GeV or more experience catastrophic energy losses in the material they traverse. These energy losses have a very significant negative impact on the most important parameters of the muon energy measurement distribution: central value, resolution, and tails.

In order to mitigate these effects, a new muon reconstruction algorithm, called DYnamic Truncation (DYT), has been developed. The DYT identifies the muon position measurements that are produced after a catastrophic energy loss, since including them in the muon track fit degrades the muon energy measurement. The identification of such measurements is based on the level of incompatibility between each position measurement and the position expected from the preceding measurements.
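
In pseudocode, the truncation logic can be sketched as below; the residual-significance estimator, the `propagate` helper and the fixed threshold are illustrative stand-ins for the actual CMS compatibility estimator and its dynamically adjusted thresholds:

```python
def dynamic_truncation(hits, propagate, threshold=4.0):
    """Keep muon-chamber measurements only while each one remains
    compatible with the track built from the preceding measurements.

    `hits` are ordered along the trajectory; `propagate(track, layer)`
    (a hypothetical helper) returns the predicted position and its
    uncertainty at a given layer. Illustrative, not the CMS code.
    """
    track = []
    for hit in hits:
        predicted, sigma = propagate(track, hit.layer)
        # Compatibility estimator: residual in units of its uncertainty.
        estimator = abs(hit.position - predicted) / sigma
        if estimator < threshold:
            track.append(hit)  # compatible: keep in the refit
        else:
            break              # likely after a catastrophic energy loss:
                               # truncate the set of measurements here
    return track
```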

We are involved in the activities of the b-tag POG (Physics Object Group) of CMS, in release and data validation and in purity measurements. We are also interested in b-tagging in special cases, such as collinear b-jets. Furthermore, we are involved in the re-optimization and improvement of the Combined Secondary Vertex (CSV) tagger for the 2012 analyses.

External collaborators: Strasbourg CMS group, CMS collaboration.

The general goal of this project is to develop muon-based tomography (“muography”), an innovative multidisciplinary approach to study geological structures, establishing a strong synergy between geophysics and particle physics.
Muography is an imaging technique that relies on the measurement of the absorption of muons produced by the interactions of cosmic rays with the atmosphere.
Applications span from geophysics (the study of the interior of mountains and the remote quasi-online monitoring of active volcanoes) to archaeology and mining.
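
Quantitatively, the measured transmission along each line of sight constrains the target opacity $\varpi$ (density integrated along the muon path), schematically:

\[
T(\varpi) = \frac{N_{\text{target}}}{N_{\text{open sky}}} = \frac{\int_{E_{\min}(\varpi)}^{\infty} \Phi(E)\, dE}{\int_{0}^{\infty} \Phi(E)\, dE}, \qquad \varpi = \int \rho\, d\ell,
\]

where $\Phi(E)$ is the differential cosmic-muon flux at the relevant zenith angle and $E_{\min}(\varpi)$ is the minimum energy a muon needs to cross opacity $\varpi$; inverting $T$ yields the average density along each line of sight.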

We are part of international networks (G-ENDEAVOR, European Muography Network) that bring together particle physicists and geophysicists for the development and exploitation of high-resolution portable detectors.

We are using the local facilities at CP3 (e.g., the gRPC cosmic test bench) for further hardware developments.
We also participate in the MURAVES collaboration, now merged into the MIVAS collaboration, through algorithmic and data-analysis aspects such as the implementation of time-of-flight capabilities, the analysis of control data for the optimization of the reconstruction algorithms, and the understanding of physics and instrumental backgrounds with data-driven and simulation techniques.

External collaborators: G-ENDEAVOR and European Muography Network (Japan, Italy, France, UK, Hungary); MIVAS Collaboration (France and Italy) including CNRS (France), INFN (Italy), INGV (Italy).

The Worldwide LHC Computing Grid (WLCG) is a globally distributed computing infrastructure, controlled by software middleware, that allows seamless usage of shared storage and computing resources.

About 10 PB of data are produced every year by the experiments running at the LHC. These data must be processed (iterative and refined calibration and analysis) by a large scientific community that is widely distributed geographically.

Instead of concentrating all the necessary computing resources in a single location, the LHC experiments have decided to set up a network of computing centres distributed all over the world.

The overall WLCG computing resources needed by the CMS experiment alone in 2016 amount to about 1500 kHS06 (HEP-SPEC06) of computing power, 90 PB of disk storage and 150 PB of tape storage. Working in the context of the WLCG translates into seamless access to shared computing and storage resources: end users do not need to know where their applications run. The choice is made by the underlying WLCG software on the basis of the availability of resources, the demands of the user application (CPU, input and output data, ...) and the privileges owned by the user.
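
The matchmaking idea can be caricatured in a few lines; the site names, resource fields and selection rule below are purely illustrative and much simpler than the real WLCG workload-management logic:

```python
# Minimal sketch of grid-style matchmaking: pick a site that satisfies a
# job's requirements. Purely illustrative; the WLCG middleware is far more
# sophisticated (ranking, data locality, VO policies, ...).
sites = [
    {"name": "T2_BE_UCL",  "free_cpus": 120, "free_disk_tb": 40, "vos": {"cms", "na62"}},
    {"name": "T2_BE_IIHE", "free_cpus": 10,  "free_disk_tb": 5,  "vos": {"cms"}},
]

job = {"cpus": 50, "disk_tb": 2, "vo": "cms"}

def match(job, sites):
    """Return the first site with enough resources and the right VO access."""
    for site in sites:
        if (site["free_cpus"] >= job["cpus"]
                and site["free_disk_tb"] >= job["disk_tb"]
                and job["vo"] in site["vos"]):
            return site["name"]
    return None

print(match(job, sites))  # -> "T2_BE_UCL"
```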

Back in 2005, UCL proposed the WLCG Belgian Tier2 project, involving the six Belgian universities participating in CMS. The Tier2 project consists of contributing to the WLCG by building two computing centres, one at UCL and one at the IIHE (ULB/VUB).

The UCL site of the WLCG Belgian Tier2 is deployed in a dedicated room close to the cyclotron control room of the IRMP Institute and is currently a fully functional component of the WLCG.

The UCL Belgian Tier2 project also aims to integrate other scientific computing projects, bring them onto the grid, and share resources with them. The projects currently integrated in the UCL computing cluster are the following: MadGraph/MadEvent, NA62 and Cosmology.

External collaborators: CISM (UCL), Pascal Vanlaer (Belgium, ULB), Lyon computing centre, CERN computing centre.

Recent publications

2017

Particle-flow reconstruction and global event description with the CMS detector
CMS collaboration
Refereed paper. 21st August.
CMS Luminosity Measurement for the 2016 Data Taking Period
CMS Collaboration
Public experimental note. 26th March.

2016

CMS Luminosity Calibration for the pp Reference Run at √s = 5.02 TeV
CMS Collaboration
Public experimental note. 1st December.
Reconstruction and identification of τ lepton decays to hadrons and ν_τ at CMS
Khachatryan, Vardan and others
Refereed paper. 6th October.

2015

Tau reconstruction and identification in CMS during LHC run 1
CMS Collaboration
Public experimental note. 1st December.

2014

Data preparation for the Compact Muon Solenoid experiment
Roberto Castello on behalf of CMS collaboration
Contribution to proceedings. 4th July.
Alignment procedures for the CMS Silicon Tracker detector during pp collisions
Roberto Castello on behalf of CMS collaboration
Contribution to proceedings. 4th July.
Alignment of the CMS tracker with LHC and cosmic ray data
The CMS collaboration
Refereed paper. 19th June.
The Fast Simulation of the CMS Experiment
Andrea Giammanco
Contribution to proceedings. 9th June.

2011

Studies of Tracker Material
The CMS Collaboration
Public experimental note. 8th February.

2010

CMS Tracking Performance Results from early LHC Operation
CMS collaboration
Refereed paper. 21st December.
Precise Mapping of the Magnetic Field in the CMS Barrel Yoke using Cosmic Rays
Chatrchyan, Serguei and others
12th February.

2009

Alignment of the CMS Silicon Tracker during Commissioning with Cosmic Rays
CMS Collaboration
Refereed paper. 26th December.
Commissioning and Performance of the CMS Pixel Tracker with Cosmic Ray Muons
CMS Collaboration
Refereed paper. 26th December.
Commissioning of the CMS Experiment and the Cosmic Run at Four Tesla
CMS Collaboration
Refereed paper. 21st December.