Detector commissioning, operation and data processing

CP3 - Research directions and experiments
Modern experimental particle physics requires extremely complex detectors, readout electronics and associated services (e.g. power supply, gas, cooling and safety systems). The behavior of the detectors, once built, must be thoroughly understood before their data can be used to extract physics measurements. This process, called "commissioning", is performed by means of functionality tests of increasing complexity, aiming at delivering a device with a well-understood response in its final working environment. Researchers of CP3 have been involved in the commissioning of several large-scale detectors (e.g. the CMS silicon tracker, the NA62 Gigatracker) and prototypes (e.g. the CALICE detector, test-beam devices, etc.).

After commissioning, operating a complex particle detector is only possible if tools to configure, control and monitor the entire detecting system are developed and deployed. We have experience in the development of such monitoring tools (from Detector Control and Safety systems to Data Quality Monitoring) and take an active role in the day-to-day operation of detectors in our facilities and at CERN, covering both technical and coordination aspects.
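
As a flavour of the kind of automated checks a Data Quality Monitoring system performs, here is a minimal sketch (not the actual monitoring code used in CMS or NA62): a monitored histogram is compared to a reference one with a chi-square test and the run is flagged when the compatibility is poor. All names, numbers and the threshold are invented for illustration.

```python
import numpy as np

def dqm_compatibility(monitored, reference):
    """Chi-square per degree of freedom between a monitored histogram and a
    reference histogram (bin contents), after normalising the reference to
    the monitored statistics. Purely illustrative."""
    monitored = np.asarray(monitored, dtype=float)
    reference = np.asarray(reference, dtype=float)
    expected = reference * monitored.sum() / reference.sum()
    mask = expected > 0                      # skip empty reference bins
    chi2 = np.sum((monitored[mask] - expected[mask]) ** 2 / expected[mask])
    ndof = mask.sum() - 1
    return chi2 / max(ndof, 1)

# Toy usage: flag the run if chi2/ndof exceeds an (illustrative) threshold.
reference = [100, 220, 390, 400, 250, 90]
monitored = [ 95, 230, 380, 410, 260, 300]   # last bin looks anomalous
flag = "BAD" if dqm_compatibility(monitored, reference) > 3.0 else "GOOD"
print(flag)
```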

Operation goes beyond these purely "online" aspects.
Many stages of data processing are needed to go from the raw data produced by particle detectors (and their associated auxiliary systems) to a physics measurement. These stages are handled "offline", and in some cases they feed back into online activities later on. The quality and precision of physics measurements heavily depend on the following items (a toy illustration of how they fit together is sketched after the list):
  • Data reconstruction methods:
    These methods transform the generally large amount of raw detector data into information about the identity and kinematic properties of particles.
  • Calibration and alignment:
    Detectors and the higher-level reconstructed data need to be tuned in order to yield accurate results.
  • Trigger:
    The statistics available for an offline analysis, as well as the ability to accurately estimate detector acceptances, event-selection inefficiencies and backgrounds, depend on the quality of the experiment's online event selection, called the trigger.
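
The three stages above can be illustrated with a deliberately oversimplified toy pipeline (not the software of any real experiment): a fast trigger-like decision is taken on the raw signal, and the kept events are reconstructed and calibrated offline. All names, thresholds and numbers are invented for illustration.

```python
import random

CALIBRATION_SCALE = 1.02       # illustrative calibration constant
TRIGGER_THRESHOLD = 80.0       # illustrative online threshold on the raw signal sum

def online_trigger(raw_hits):
    """Toy 'trigger': a fast online decision on the raw signal sum."""
    return sum(raw_hits) > TRIGGER_THRESHOLD

def reconstruct(raw_hits):
    """Toy 'reconstruction': raw hit amplitudes -> momentum-like estimate."""
    return sum(raw_hits) / len(raw_hits)

def calibrate(p_raw):
    """Toy 'calibration': multiplicative correction derived elsewhere."""
    return p_raw * CALIBRATION_SCALE

# Fake events: 4 hit amplitudes each, roughly centred on 25 (arbitrary units).
events = [[random.gauss(25.0, 8.0) for _ in range(4)] for _ in range(1000)]
kept = [calibrate(reconstruct(hits)) for hits in events if online_trigger(hits)]
print(f"fraction of events kept by the trigger: {len(kept) / len(events):.2f}")
```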

The large amount of data produced by modern high-energy physics experiments, as well as the complexity of the detectors, requires complex computing solutions (both hardware- and software-wise) to perform the data processing steps outlined above. For that purpose, we have deployed and maintain a large-scale computing cluster.

Projects

  • The CMS silicon strip tracker is the largest device of its type ever built: 24244 single-sided micro-strip sensors covering an active area of 198 m².
    The physics performance of the detector is constantly assessed and optimized as new data come in.
    Members of UCL play a major role in the understanding of the silicon strip tracker and in the maintenance and development of the local reconstruction code.
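
    As a flavour of what "local reconstruction" means for a strip detector, here is a minimal, purely illustrative clustering sketch (not the actual CMSSW algorithm): adjacent strips above a noise threshold are grouped into clusters, and the cluster position is estimated as the charge-weighted centroid. Threshold and charges are invented numbers.

    ```python
    def clusterize(strip_charges, threshold=3.0):
        """Group adjacent strips above `threshold` (arbitrary charge units)
        into clusters and return their charge-weighted centroid positions.
        Purely illustrative; seeding and thresholds in CMSSW are more refined."""
        clusters, current = [], []
        for strip, charge in enumerate(strip_charges):
            if charge > threshold:
                current.append((strip, charge))
            elif current:
                clusters.append(current)
                current = []
        if current:
            clusters.append(current)
        centroids = []
        for cluster in clusters:
            total = sum(q for _, q in cluster)
            centroids.append(sum(s * q for s, q in cluster) / total)
        return centroids

    # Example: two particles crossing a module with 12 read-out strips.
    charges = [0.2, 0.5, 12.0, 25.0, 6.0, 0.3, 0.1, 0.4, 18.0, 4.5, 0.2, 0.1]
    print(clusterize(charges))   # -> approximately [2.86, 8.2]
    ```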

  • The Gigatracker is at the core of one of the spectrometers used in NA62. It is composed of three planes of silicon pixel detectors assembled in a traditional way: readout electronics bump-bonded onto silicon sensors. Each plane consists of 18000 pixels of 300 µm × 300 µm, arranged in 45 columns and read out by 10 chips. A particularity of this sensor is that its timing resolution must be better than 200 ps in order to cope with the high expected rate (800 MHz). Another particularity is its operation in vacuum.

    CP3 is involved in several aspects of the production and operation of this detector.

    1) Production of 25 GTK stations that will be used during the NA62 <latex>$K^+\to\pi^+\nu\bar{\nu}$</latex> run

    2) Operation of the GTK during data taking: time and spatial calibration, efficiency studies, effects of radiation, etc.

    3) Reconstruction of track candidates and simulation.

    4) Development of the signal in the sensor: we use both commercial programs (e.g. TCAD by Synopsys) and software developed in-house to study the expected signal in this sensor.
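
    To give an idea of what such signal-development studies compute, the sketch below is a strongly simplified toy (neither the TCAD model nor our actual software): the current induced on the read-out electrode by electrons drifting across a planar sensor is estimated with the Shockley-Ramo theorem, i = q v_drift E_w, with a uniform weighting field E_w = 1/d. Thickness, field, mobility and charge-deposit values are illustrative.

    ```python
    # Toy induced-current estimate for a planar silicon sensor, based on the
    # Shockley-Ramo theorem: i = q * v_drift * E_w, with E_w = 1/d for a
    # simple parallel-plate geometry. All numbers are illustrative.
    Q_E     = 1.602e-19   # elementary charge [C]
    D       = 200e-6      # sensor thickness [m]
    E_FIELD = 5e5         # drift field, assumed uniform [V/m]
    MU_E    = 0.14        # electron mobility [m^2/(V s)]
    N_EH    = 16000       # e-h pairs deposited by a MIP in ~200 um of silicon

    v_drift = MU_E * E_FIELD              # electron drift velocity [m/s]
    E_w     = 1.0 / D                     # weighting field [1/m]
    i_e     = N_EH * Q_E * v_drift * E_w  # induced current while electrons drift [A]
    t_coll  = D / v_drift                 # collection time if created at the far plane [s]

    print(f"electron drift velocity: {v_drift:.3e} m/s")
    print(f"induced current (electrons only): {i_e * 1e6:.2f} uA")
    print(f"maximum electron collection time: {t_coll * 1e12:.0f} ps")
    ```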

  • The general goal of this project is to develop muon-based radiography or tomography (“muography”), an innovative multidisciplinary approach to study large-scale natural or man-made structures, establishing a strong synergy between particle physics and other disciplines, such as geology and archaeology.
    Muography is an imaging technique that relies on the measurement of the absorption of muons produced by the interactions of cosmic rays with the atmosphere.
    Applications span from geophysics (the study of the interior of mountains and the remote quasi-online monitoring of active volcanoes) to archaeology and mining.

    We are part of the H2020-MSCA-RISE network INTENSE where we coordinate the Muography work package, which brings together particle physicists, geophysicists, archaeologists, civil engineers and private companies for the development and exploitation of this imaging method.

    We are also part of the H2020-RIA project SilentBorder, which aims at developing new muon scanners for border control.

    We are using the local facilities at CP3 for the development of high-resolution portable detectors.
    We also participate in the MURAVES collaboration through simulations and data-analysis developments (an example of the latter is the implementation and in-situ calibration of time-of-flight capabilities).
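
    The basic principle of absorption muography can be illustrated with a deliberately simplified transmission estimate (not the MURAVES analysis code): the ratio of the muon rate behind the target to the free-sky rate gives the transmission, and a toy energy-loss and integral-flux model allows it to be inverted into an opacity (density times path length). The constants below (a ≈ 2 MeV cm²/g for the average energy loss, a single power-law index for the integral spectrum) are rough illustrative values.

    ```python
    # Toy absorption-muography model: a muon crossing an opacity X (g/cm^2)
    # needs at least E_min ~ A_LOSS * X; the surviving fraction follows from
    # an integral power-law spectrum N(>E) ~ (E0 / (E0 + E))^GAMMA.
    A_LOSS = 2e-3    # GeV cm^2 / g, rough average energy loss
    GAMMA  = 1.7     # illustrative integral spectral index
    E0     = 1.0     # GeV, softens the spectrum at low energy

    def transmission(opacity):
        """Fraction of the free-sky muon flux expected to cross `opacity` [g/cm^2]."""
        e_min = A_LOSS * opacity
        return (E0 / (E0 + e_min)) ** GAMMA

    def opacity_from_transmission(t, lo=0.0, hi=1e6):
        """Invert the toy model numerically (bisection)."""
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if transmission(mid) > t:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Example: 100 m of standard rock (density 2.65 g/cm^3, i.e. 1e4 cm path).
    x_rock = 2.65 * 1e4
    t = transmission(x_rock)
    print(f"expected transmission through 100 m of rock: {t:.2e}")
    print(f"opacity recovered from that transmission: {opacity_from_transmission(t):.0f} g/cm^2")
    ```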

  • We are among the founders of MODE (Machine-learning Optimized Design of Experiments, https://mode-collaboration.github.io/), a multi-disciplinary consortium of European and American physicists and computer scientists who target the use of differentiable programming in design optimization of detectors for particle physics applications, extending from fundamental research at accelerators, in space, and in nuclear physics and neutrino facilities, to industrial applications employing the technology of radiation detection.

    We aim to develop a modular, customizable, scalable and fully differentiable pipeline for the end-to-end optimization of articulated objective functions that model in full the true goals of experimental particle physics endeavours, to ensure optimal detector performance, analysis potential, and cost-effectiveness.
    The main goal of our activities is to develop an architecture that can be adapted to the above use cases but that will also be customizable to any other experimental endeavour employing particle detection at its core. We welcome suggestions, as well as expressions of interest in joining our effort, from researchers focusing on use cases for which this technology can be of benefit.

    Two CP3 members currently serve as members of the MODE Supervisory Board.
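
    A minimal sketch of the idea behind differentiable detector optimization (not the MODE pipeline itself): a single design parameter, here a toy "absorber thickness", is tuned by gradient descent on a differentiable objective that combines a resolution-like term with a cost term. PyTorch and all functional forms and constants below are illustrative choices.

    ```python
    import torch

    # Toy differentiable objective for one detector design parameter x (cm):
    # the resolution term improves with thickness, the cost term grows with it.
    # Both functional forms and constants are invented for illustration only.
    def objective(x):
        resolution = 1.0 / torch.sqrt(x)   # better (smaller) for a thicker absorber
        cost = 0.05 * x                    # price to pay for more material
        return resolution + cost

    x = torch.tensor(2.0, requires_grad=True)   # initial design choice
    optimizer = torch.optim.Adam([x], lr=0.1)

    for step in range(500):
        optimizer.zero_grad()
        loss = objective(x)
        loss.backward()        # gradients flow through the whole "pipeline"
        optimizer.step()

    print(f"optimal toy thickness: {x.item():.2f} cm, objective: {objective(x).item():.3f}")
    ```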

  • NA62 will look for rare kaon decays at the SPS accelerator at CERN. A total of about $10^{12}$ kaon decays will be produced in two to three years of data taking. Even though the topology of the events is relatively simple and the amount of information per event is small, the volume of data to be stored per year will be of the order of 1000 TB. In addition, about 500 TB/year are expected from simulation.

    Profiting from the synergy inside CP3 in sharing computing resources, our group participates in the definition of the NA62 computing scheme. CP3 will also be part of the grid virtual organization of the experiment.

  • We are involved in the activities of the b-tag POG (Physics Object Group) of CMS, in release and data validation and in purity measurements. We are also interested in b-tagging in special cases such as collinear b-jets. Furthermore, we are involved in the re-optimization and improvement of the Combined Secondary Vertex (CSV) tagger for the 2012 analyses.

  • The CP3 computing cluster has been enabled to receive and run LIGO/Virgo jobs over the GRID. The cluster is also being developed to host the so-called StashCache service, which serves Virgo data to any job running on the GRID.

  • The Worldwide LHC Computing Grid (WLCG) is a worldwide distributed computing infrastructure, controlled by software middleware, that allows seamless usage of shared storage and computing resources.

    About 10 PB of data are produced every year by the experiments running at the LHC collider. These data must be processed (through iterative and refined calibration and analysis) by a large scientific community that is widely distributed geographically.

    Instead of concentrating all the necessary computing resources in a single location, the LHC experiments have decided to set up a network of computing centres distributed all over the world.

    The overall WLCG computing resources needed by the CMS experiment alone in 2016 amount to about 1500 kHS06 (HEP-SPEC06) of computing power, 90 PB of disk storage and 150 PB of tape storage. Working in the context of the WLCG translates into seamless access to shared computing and storage resources: end users do not need to know where their applications run. The choice is made by the underlying WLCG software on the basis of the availability of resources, the demands of the user application (CPU, input and output data, ...) and the privileges owned by the user (a toy sketch of this matchmaking is given at the end of this project description).

    Back in 2005, UCL proposed the WLCG Belgian Tier2 project, involving the six Belgian universities participating in CMS. The Tier2 project consists of contributing to the WLCG by building two computing centres, one at UCL and one at the IIHE (ULB/VUB).

    The UCL site of the WLCG Belgian Tier2 is deployed in a dedicated room close to the cyclotron control room of the IRMP Institute and is currently a fully functional component of the WLCG.

    The UCL Belgian Tier2 project also aims to integrate other scientific computing projects, bring them onto the GRID, and share resources with them. The projects currently integrated in the UCL computing cluster are MadGraph/MadEvent, NA62 and Cosmology.
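
    To illustrate the brokering logic described above in the simplest possible terms, here is a toy resource-matchmaking sketch (in no way the actual WLCG middleware): a job is matched to the site that satisfies its virtual-organization, CPU and storage requirements and currently has the most free slots. All site names and numbers are invented.

    ```python
    # Toy matchmaking between a job description and a list of sites, loosely
    # inspired by what grid middleware does; all names and numbers are invented.
    SITES = [
        {"name": "T2_BE_UCL",  "free_slots": 120, "free_disk_tb": 40,  "vos": {"cms", "na62"}},
        {"name": "T2_BE_IIHE", "free_slots": 300, "free_disk_tb": 10,  "vos": {"cms"}},
        {"name": "T1_XX_FOO",  "free_slots": 0,   "free_disk_tb": 500, "vos": {"cms", "virgo"}},
    ]

    def match(job, sites):
        """Return the eligible site with the most free slots, or None."""
        eligible = [s for s in sites
                    if job["vo"] in s["vos"]
                    and s["free_slots"] >= job["cpus"]
                    and s["free_disk_tb"] >= job["output_tb"]]
        return max(eligible, key=lambda s: s["free_slots"], default=None)

    job = {"vo": "cms", "cpus": 8, "output_tb": 20}
    site = match(job, SITES)
    print(site["name"] if site else "no matching site")
    ```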
