
Recasting at LH2017

interested people: fabio.maltoni@uclouvain.be, sabine.kraml@gmail.com, gabriel.facini@cern.ch, D.Grellscheid@gmail.com, ssekmen@cern.ch, J.Butterworth@ucl.ac.uk, nishita.desai@umontpellier.fr, andy.buckley@cern.ch, fuks@lpthe.jussieu.fr, eric.conte@iphc.cnrs.fr, peter.richardson@durham.ac.uk, luca.perrozzi@cern.ch, olivier.mattelaer@uclouvain.be, Pasquale.Musella@cern.ch, andre.lessa@cern.ch, alexandra.oliveira@cern.ch, ursula.laa@lpsc.in2p3.fr, kristin.lohwasser@cern.ch, thrynova@mail.cern.ch, efe.yazgan@cern.ch, philippe.gras@cern.ch, sylvain@ift.unesp.br

This page is meant to collect information on the recasting discussions.

Outcome of the first meeting Fri, Jun 16th

General Activities
  • Feasibility study of the implementation/portability of complicated MVA techniques (BDT, NN,…) into the analyses
  • Improvement of results and recastability: how to provide correlations of signal systematics; possibility of providing a few key observables unfolded.
  • Comparison between Delphes results and simple object smearing.
  • Trying out the use of particle-level measurements to constrain models.
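The "simple object smearing" baseline in the comparison above can be sketched as Gaussian pT smearing of truth-level jets. The resolution parametrisation and numbers below are illustrative assumptions, not an official ATLAS/CMS or Delphes parametrisation.

```python
import numpy as np

def jet_pt_resolution(pt):
    """Toy relative pT resolution: stochastic + constant term (assumed numbers)."""
    return np.sqrt((0.8 / np.sqrt(pt)) ** 2 + 0.05 ** 2)

def smear_jets(pts, rng):
    """Apply Gaussian smearing to an array of truth-level jet pTs (GeV)."""
    sigma = pts * jet_pt_resolution(pts)
    return rng.normal(pts, sigma)

rng = np.random.default_rng(42)
truth = np.array([50.0, 120.0, 400.0])
smeared = smear_jets(truth, rng)
print(smeared)
```

A comparison with Delphes would then run the same events through both chains and contrast the resulting analysis-level distributions.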
Formats
  • Object efficiency tables: which format (HEPDATA?)
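Whatever format is chosen, consuming such a table amounts to a binned lookup. A minimal sketch, with made-up bin edges and efficiencies (objects below the lowest bin edge are assumed to have been removed upstream):

```python
import numpy as np

# Hypothetical pT-binned object efficiency table, of the kind that could
# be published on HEPData. Binning and values are invented for illustration.
PT_EDGES = np.array([20.0, 30.0, 50.0, 100.0, np.inf])   # GeV
EFFICIENCY = np.array([0.60, 0.75, 0.85, 0.95])

def object_efficiency(pt):
    """Look up the reconstruction efficiency for a given pT (>= 20 GeV)."""
    idx = np.searchsorted(PT_EDGES, pt, side="right") - 1
    return EFFICIENCY[idx]

def select_objects(pts, rng):
    """Keep each object with probability equal to its efficiency."""
    pts = np.asarray(pts)
    keep = rng.random(pts.shape[0]) < object_efficiency(pts)
    return pts[keep]

rng = np.random.default_rng(7)
pts = np.array([25.0, 45.0, 150.0, 22.0])
print(select_objects(pts, rng))
```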
Benchmarking/Comparisons
  • Implementation of analyses of increasing complexity in the Analysis Description Format (LHADA Proposal) and in (BSM) Rivet and their comparison.
    1. Choose an ATLAS or CMS analysis for which cut flows and detector effects are provided in some form, and which has possibly already been implemented in the recasting codes CheckMate/MadAnalysis/Rivet/ATOM.
    2. Implement the same analysis in LHADA and then use the dedicated parsers to provide the analysis for the recasting codes.
    3. Reproduce the NP interpretation of the original paper (= validation of the implementation).
    4. Recast the analysis for another new physics model and compare the results.
    5. Return to step 1 and choose a more complicated analysis…
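The cutflow bookkeeping needed for the validation step (comparing against the published cutflow) can be sketched as follows; the event content and cut values are purely illustrative, not from any real analysis:

```python
import numpy as np

# Toy events: only the variables needed by the (invented) cuts below.
rng = np.random.default_rng(0)
n_events = 1000
events = {
    "met": rng.exponential(100.0, size=n_events),   # toy missing ET (GeV)
    "njets": rng.poisson(3, size=n_events),         # toy jet multiplicity
}

# Ordered cut sequence, as it would appear in a paper's cutflow table.
cuts = [
    ("MET > 200 GeV", events["met"] > 200.0),
    ("njets >= 4", events["njets"] >= 4),
]

passed = np.ones(n_events, dtype=bool)
cutflow = [("all events", int(passed.sum()))]
for name, mask in cuts:
    passed &= mask
    cutflow.append((name, int(passed.sum())))

for name, n in cutflow:
    print(f"{name:15s} {n:5d}")
```

The resulting table is what would be compared, cut by cut, against the numbers published by the experiment.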

AB: would be interesting to see how Delphes performance looks without analysis-specific cards, since a lot of people (outside the “big” recasting groups) are using it that way.

  • How to validate the analyses.

References
Simplified likelihood framework

→ Andy, Sylvain

CMS formalism: https://cds.cern.ch/record/2242860/files/NOTE2017_001.pdf

AB implementations in GAMBIT and SciPy, marginalising over correlated background uncertainties (by unitary transformation + integral, and by MC sampling respectively). MadAnalysis: (Benj: I would like to do it, but time is my main problem. Anyone to help here? AB: Maybe my Python code, when finished?)
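The MC-sampling approach mentioned above can be sketched in a few lines: Poisson counts in the signal regions, with the correlated background uncertainties marginalised by sampling a multivariate Gaussian. All numbers below (observed counts, backgrounds, covariance, signal) are toy inputs, not taken from any analysis.

```python
import numpy as np
from scipy.stats import poisson

n_obs = np.array([12, 7])                 # observed counts per SR (toy)
b = np.array([10.0, 6.0])                 # expected backgrounds (toy)
cov = np.array([[4.0, 1.5],               # background covariance matrix (toy)
                [1.5, 2.0]])
s = np.array([3.0, 1.0])                  # signal prediction for mu = 1 (toy)

def marginal_likelihood(mu, n_samples=20000, seed=1):
    """L(mu), marginalised over correlated background nuisances by MC sampling."""
    rng = np.random.default_rng(seed)
    bkg = rng.multivariate_normal(b, cov, size=n_samples)
    bkg = np.clip(bkg, 1e-6, None)        # crude guard against negative rates
    lam = mu * s + bkg                    # expected rate per sample, per SR
    probs = poisson.pmf(n_obs, lam).prod(axis=1)   # product over SRs
    return probs.mean()

print(marginal_likelihood(0.0), marginal_likelihood(1.0))
```

The clipping is exactly the kind of ad-hoc fix that the exponential nuisance parametrisation discussed below is meant to avoid.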

AB: reporting of SR n & b arrays and covariance matrix (matrices?) currently ad hoc / non-standardised. Would be really good to establish a standard – ideally in HepData.

Canonical example: CMS 0-lepton search with 174 SRs and covariance matrix: http://cms-results.web.cern.ch/cms-results/public-results/publications/SUS-16-033/index.html

Improvements to the basic CMS proposal: https://arxiv.org/abs/1603.03061

  • Use of exponential nuisance parameters to avoid negative rates.
  • Implement a covariance matrix that depends on the parameters of interest. This happens, for example, when there are uncertainties on both signal and background. It depends on the availability of the elementary sources of uncertainty; if these were released as event weights, it would open up further possibilities.
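The first point can be illustrated with a one-bin toy: an additive nuisance b + σθ goes negative for large downward fluctuations, whereas an exponential parametrisation b·exp(σθ/b) stays positive for any θ while agreeing at first order near θ = 0. The numbers are illustrative.

```python
import numpy as np

b = 5.0        # nominal background rate (toy)
sigma = 3.0    # absolute uncertainty on b (toy)

def rate_additive(theta):
    """Additive parametrisation: can go negative for theta < -b/sigma."""
    return b + sigma * theta

def rate_exponential(theta):
    """Exponential parametrisation: positive for any theta."""
    return b * np.exp(sigma / b * theta)

for theta in (-3.0, 0.0, 3.0):
    print(theta, rate_additive(theta), rate_exponential(theta))
```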

SF: Simplified likelihoods as an alternative to unfolding: a comparison between the two methods could be carried out on a specific example.

LHADA

Examples of analysis descriptions in LHADA format:

A first LHADA description of arXiv:1605.03814 has been written. It will be added/linked here after some cleanup.

2017/groups/tools/recasting.txt · Last modified: 2018/02/05 11:50 by sezen.sekmen