Systematic uncertainties [Thurs 14h00-16h00 Auditorium]
Slack channel: #np-ps-syst-uncs
Possible projects:
Shower accuracy and benchmarks
Amplitude evolution, if people are interested
Role and impact of the infrared cutoff, tuning, and uncertainties
A practical approach for experiments
Including systematic uncertainties in ML-based analyses
Interested people: Josh, Simon, Stefan, Vinny, Axel, Aishik, Saptaparna, Andrzej, Ana, Maria V., Gianna, Chris G, Christian P, Max, Peter M
Agreed projects:
Perturbative benchmarks of showers [Simon, James, Daniel]
NLL accuracy, numerical impacts
Interplay between shower and hadronisation [Simon]
Effective/comprehensive uncertainty band
Uncertainty recommendation using available tools [Josh, Gianna]
Set up 4-point variations of all combinations of shower and hadronisation models
Tune these in a consistent way to get an uncertainty envelope
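A minimal sketch (toy numbers; the setup names showerA/showerB and cluster/string are placeholders, not a recommendation of specific generators) of how the envelope over the four tuned shower x hadronisation combinations could be assembled:

    import numpy as np

    rng = np.random.default_rng(1)
    nbins = 20
    nominal = np.exp(-np.linspace(0.0, 3.0, nbins))   # toy reference distribution

    # One binned prediction per shower x hadronisation combination; in practice
    # these would be Rivet/Yoda histograms from the four consistently tuned setups.
    variations = {
        (shower, had): nominal * (1.0 + rng.normal(0.0, 0.05, nbins))
        for shower in ("showerA", "showerB")
        for had in ("cluster", "string")
    }

    stack = np.stack(list(variations.values()))
    lo, hi = stack.min(axis=0), stack.max(axis=0)     # bin-wise envelope
    band = 0.5 * (hi - lo)                            # symmetric half-width

    for i, (c, d) in enumerate(zip(nominal, band)):
        print(f"bin {i:2d}: nominal {c:.3f} +/- {d:.3f} (envelope)")

The envelope is taken bin-wise here; whether the band should instead be built from correlated variations is exactly the kind of question the project would need to settle.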
Tuning strategies [Stefan K]
What is currently done?
Common framework?
Better documentation of existing framework
Benchmark sets of data
Generator software and computing considerations [Friday 11h00-12h30 Auditorium]
Slack channel: #gen-soft-comp
Possible projects:
Interested people: Josh, Vinny, Andrea, Stefan, Saptaparna, Maria V., Sergio, Chris G, Christian P, Ana, Max, Peter M, Simon, Sudeepan
Agreed projects:
GPU/Vectorisation/HPC [Josh]
Experience porting codes
Workflows in experiments
How would code be run in practice
How to get/confirm allocations
Sharing of resources between EXP and TH
Computing performance [Josh]
Benchmarking current code
Also with examples from EXP
Accounting in experiments
Projections for future N(2,3)LO calculations
Updates to interfaces [Josh]
Interoperability of models
Modular framework
Multi-event
API in MC generators
Resampling
Accessibility of tools [Josh]
Containerisation and reproducibility
Docker images, run cards, Rivet routines, Yoda files → HepData
Use these tools to make LH studies reproducible
Can a Docker-based workflow on e.g. lxplus be documented to lower the barrier to entry for MC studies?
Make sure cards actually used by EXP are uploaded with TH prediction Yoda files.
Make available for benchmarking (cf. point 2), including full chain in EXP
Generic tool/interface for MC generation
Like MCPlots
Event formats [Josh]
New HDF5 standard ← LHEF/HepMC
Easier event sharing, smaller disk footprint (see the storage sketch below)
Need for more information in intermediate stages of generation?
Sample sharing between experiments
Joint/cross validation
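A minimal sketch of column-wise event storage in HDF5 (assuming h5py is available; the file name and dataset layout are illustrative only, not the proposed standard): particles from all events are concatenated into one table and an offset dataset records where each event starts.

    import h5py
    import numpy as np

    # Toy events: (pdg id, px, py, pz, E) per particle.
    ptype = [("pid", "i4"), ("px", "f8"), ("py", "f8"), ("pz", "f8"), ("e", "f8")]
    events = [
        np.array([(2212, 0.0, 0.0, 6500.0, 6500.0),
                  (2212, 0.0, 0.0, -6500.0, 6500.0)], dtype=ptype),
        np.array([(211, 1.2, -0.3, 10.0, 10.1)], dtype=ptype),
    ]

    flat = np.concatenate(events)
    offsets = np.cumsum([0] + [len(ev) for ev in events])   # event boundaries

    with h5py.File("toy_events.h5", "w") as f:
        f.create_dataset("particles", data=flat, compression="gzip")
        f.create_dataset("event_offsets", data=offsets)

    # Reading back event 1 only requires a slice, not parsing the whole file.
    with h5py.File("toy_events.h5", "r") as f:
        lo_idx, hi_idx = f["event_offsets"][1], f["event_offsets"][2]
        print(f["particles"][lo_idx:hi_idx])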
ML for MC []
Interoperability: link to point 3 (swapping out parts of generation in a modular framework)
Discussion of exact vs approximate use of ML
Resampling/reweighting [Josh]
Reweighting and derivative-based optimisation
Resampling to eliminate negative weights
NN and cell-based tools (cell-based idea sketched below)
Stress tests of these tools?
Problem areas from TH PoV
Validation at scale in EXP
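A minimal sketch of the cell-based idea with toy data (not any specific published resampler): weights inside each cell of a chosen observable are replaced by the cell-average weight, so negative weights cancel locally against positive ones while the per-cell weighted sum is preserved.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0.0, 1.0, 10000)                          # toy observable per event
    w = np.where(rng.uniform(size=x.size) < 0.15, -1.0, 1.0)  # ~15% negative weights

    edges = np.linspace(0.0, 1.0, 51)                         # 50 cells in the observable
    cell = np.digitize(x, edges) - 1

    w_new = np.empty_like(w)
    for c in range(len(edges) - 1):
        mask = cell == c
        if mask.any():
            w_new[mask] = w[mask].mean()                      # cell-average weight

    print("negative fraction before:", np.mean(w < 0))
    print("negative fraction after: ", np.mean(w_new < 0))
    print("total weight preserved:  ", np.isclose(w.sum(), w_new.sum()))

The stress tests mentioned above would probe what such local averaging does to distributions in observables other than the one used to define the cells.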
ttW modelling studies [TBC]
Slack channel: #ttw
Possible projects:
Interested people: Josh, Sergio, Stefan, Gianna
Issues with EW evolution, identification of EW final states [Thurs 16h00-17h30 QCD room]
Slack channel: #ew-evo-id
Interested people: Simon, Axel, Andrzej
Interplay between showers and hadronization
Hadronization model should respond to shower IR cut variations
Study IR cutoff dependence, map out how purely perturbative variations compare (see the sketch below)
Goal is to update the LH 17 study on retuning with a more solid theory framework
Interested: Simon, Andrzej, Stefan K.
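A minimal sketch of such an IR-cutoff scan, assuming the Pythia 8 Python bindings are available; the process setup, event count, observable and cutoff values are illustrative, and the hadronisation parameters are deliberately left at their defaults (i.e. not retuned), which is exactly the mismatch the study would quantify.

    import pythia8

    def mean_charged_multiplicity(pt_min, n_events=2000):
        # e+e- -> Z -> hadrons at the Z pole; FSR cutoff varied, hadronisation fixed.
        pythia = pythia8.Pythia()
        for cmd in ("Beams:idA = 11", "Beams:idB = -11", "Beams:eCM = 91.2",
                    "PDF:lepton = off",
                    "WeakSingleBoson:ffbar2gmZ = on",
                    "23:onMode = off", "23:onIfAny = 1 2 3 4 5",
                    f"TimeShower:pTmin = {pt_min}"):
            pythia.readString(cmd)
        pythia.init()
        total = 0
        for _ in range(n_events):
            if not pythia.next():
                continue
            total += sum(1 for i in range(pythia.event.size())
                         if pythia.event[i].isFinal() and pythia.event[i].isCharged())
        return total / n_events

    for pt_min in (0.5, 1.0, 1.5):   # illustrative FSR cutoff values in GeV
        print(f"TimeShower:pTmin = {pt_min} GeV -> <n_ch> = "
              f"{mean_charged_multiplicity(pt_min):.2f}")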
Tuning Accord and Strategies
What is currently done:
Common frameworks for tuning exist, but no single script to run
Nice to have a script that automatically prints exactly what is done and with which flags
Every generator team does tuning on its own
Almost no documentation on the tunes provided
Rarely publicly available weights
Rarely publicly available tuning options (samples, interpolation order, flags used, …)
Tunes are often post-processed by eye
Goals of the project:
Coordination among generator authors of Herwig, Pythia and Sherpa (mattermost channel?)
Common frameworks for tuning exist, but no single reproducible script:
Stefan K. trying to provide this in Professor, which automatically documents what version, options, flags, etc. are being run
Someone could do something similar in Apprentice
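A generic sketch (not tied to Professor or Apprentice) of the kind of self-documenting wrapper meant here, writing a provenance record alongside the tuning output; the command in the usage line is a placeholder.

    import datetime
    import json
    import platform
    import subprocess
    import sys

    def run_and_record(cmd, record_file="tune_provenance.json"):
        """Run a tuning command and record exactly what was done, and how."""
        result = subprocess.run(cmd, capture_output=True, text=True)
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "command": cmd,
            "return_code": result.returncode,
            "python_version": sys.version,
            "platform": platform.platform(),
        }
        with open(record_file, "w") as f:
            json.dump(record, f, indent=2)
        return result

    # Placeholder usage; the real tuning command, options and flags would go here.
    # run_and_record(["some-tune-command", "--flag", "value"])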
Les Houches Accord on some Benchmark observable+weight sets:
Observable quality & quantity requirements:
No ratio Rivets if full information is available?
Reduce double counting of observables:
No gaps in histograms (see e.g. some ALEPH identified-particle momentum distributions)
Smaller bin-width is preferred
Enough statistics:
NOTE: If the data errors are too small, a theory uncertainty should be provided to the tuning framework; otherwise the minimization will likely over-fit these data
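One common way to do this (an assumption about the intended prescription) is to add the theory uncertainty in quadrature to the data error in the goodness-of-fit,
$\chi^2 = \sum_i \frac{(t_i - d_i)^2}{\sigma_{d,i}^2 + \sigma_{\mathrm{th},i}^2}$,
with $d_i$, $\sigma_{d,i}$ the data value and error in bin $i$ and $t_i$, $\sigma_{\mathrm{th},i}$ the prediction and its theory uncertainty.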
Benchmark sets:
Conservative set of selected (IRC safe?) Event Shapes:
Input from perturbative experts needed
Only close to the peak, where hadronization is not dominant
Provide global sets from each generator to gather information
Ultimate goals:
Minimal set of accorded observables+weights among all generators for comparability
Tune using $\chi^2$ (more strict) and $\chi^2/(1+\chi^2)$ (less strict)
Maximal set of accorded observables+weights among all generators for comparability
Tune using $\chi^2$ (more strict) and $\chi^2/(1+\chi^2)$ (less strict); see the sketch below
Provide recommendations to experiments on tuning?
…
Interested: Simon, Andrzej, Stefan K., ….
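A minimal sketch with toy numbers of the two goodness-of-fit choices above, reading $\chi^2/(1+\chi^2)$ as a per-bin saturating contribution (this per-bin reading is an assumption): the plain $\chi^2$ weights badly described bins heavily, while the saturating form caps each bin's contribution at 1.

    import numpy as np

    rng = np.random.default_rng(3)
    data = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
    err = np.array([0.5, 0.4, 0.3, 0.3, 0.2])
    model = data + rng.normal(0.0, 1.0, data.size) * err
    model[2] += 3.0                                  # one badly described bin

    r2 = ((model - data) / err) ** 2                 # per-bin chi^2 contributions
    chi2_strict = r2.sum()
    chi2_soft = (r2 / (1.0 + r2)).sum()              # saturating, "less strict" form

    print(f"chi^2           = {chi2_strict:.1f}")
    print(f"chi^2/(1+chi^2) = {chi2_soft:.1f}  (each bin contributes at most 1)")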
Status of q/g discrimination [Weds 14h00-15h30 Library]
Interested people: Simon, Andrzej
Fiducial multiboson cross sections (VBF/VBS) [Thurs 11h00-12h30 QCD room]
Interested people: Simon, Josh, Stefan, Axel, Saptaparna
Possible HF review [TBC]
Interested people: Josh, Davide?, Stefan
ML-focused [TBC]
Possible projects:
Interested people: Vinny, Josh, Aishik, Andrzej, Maria V., Peter M
Overlaps between FPF and LHC [TBC]
Slack channel: #lhc-fpf
(Possibly the same for DUNE and EIC if enough people are here?)
Possible projects:
Interested people: Josh, Maria V., Stefan, Peter M (EIC)