This lesson is in the early stages of development (Alpha version)

Developing Benchmarks: Glossary

Key Points

Exercise 1: Analysis Scripts and Snakemake
  • Snakemake lets you define reproducible data analyses and share them with others
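
A Snakemake workflow (a Snakefile) declares each analysis step as a rule with inputs, outputs, and a command; Snakemake then reruns only the steps whose inputs have changed. A minimal sketch, where the file and script names are hypothetical and not part of this lesson:

```
rule tracking_efficiency:
    input:
        "sim_output/events.edm4hep.root"   # hypothetical simulation output
    output:
        "results/tracking_efficiency.png"  # figure produced by the analysis
    shell:
        "python analysis/efficiency.py {input} {output}"
```

Running `snakemake --cores 1 results/tracking_efficiency.png` builds the requested figure along with anything it depends on.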

Exercise 2: Setting up your first benchmark with pipelines
Exercise 3: Filling out your benchmark
  • Create setup.config to switch between using the simulation campaign and re-simulating events

  • Each stage of the benchmark pipeline is defined in config.yml

  • config.yml takes normal bash scripts as input

  • Copy resulting figures over to the results directory to turn them into artifacts
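
As an illustration, one plausible shape for a stage entry in config.yml, assuming a GitLab CI-style pipeline (the job, stage, and script names below are placeholders):

```yaml
my_benchmark:simulate:        # hypothetical job name
  stage: simulate
  script:
    - bash benchmarks/my_benchmark/simulate.sh   # ordinary bash script
```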

Exercise 4: Adding a Status Flag
  • Status flags indicate when changes to the software or detector design degrade performance

  • Add a status flag to your benchmark to alert developers to changes in performance
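
A status flag can be as simple as comparing a benchmark metric against a threshold and recording the result. A minimal Python sketch; the metric value, threshold, and file name are all placeholders rather than this lesson's actual interface:

```python
# Hypothetical sketch: derive a status flag from one benchmark metric.
efficiency = 0.92  # placeholder: value extracted from the analysis
threshold = 0.95   # placeholder: minimum acceptable efficiency

# "FAIL" alerts developers that performance dropped below expectations.
status = "FAIL" if efficiency < threshold else "PASS"
with open("status_flag.txt", "w") as flag_file:
    flag_file.write(status + "\n")
print(status)
```

With the placeholder values above, the script writes and prints FAIL, which is the signal a developer would see in the pipeline.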

Exercise 5: Making Useful Figures
  • Create paper-ready benchmark figures whenever possible

  • Clearly label plots with simulation details, and use large axis labels and legends

  • If possible, augment the benchmark with an additional explainer document which describes figures in greater detail
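
One way to follow these guidelines is shown in the matplotlib sketch below; the data, labels, and simulation details are placeholders, not results from the lesson:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder data standing in for a benchmark result
eta = [-2, -1, 0, 1, 2]
eff = [0.90, 0.94, 0.96, 0.94, 0.89]

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(eta, eff, marker="o", label="single muons, 1-10 GeV (placeholder)")
# Large, readable axis labels and legend, plus simulation details in the title
ax.set_xlabel(r"$\eta$", fontsize=16)
ax.set_ylabel("Tracking efficiency", fontsize=16)
ax.legend(fontsize=12, loc="lower center")
ax.set_title("Simulation: placeholder campaign/version", fontsize=11)
fig.tight_layout()
fig.savefig("tracking_efficiency.png", dpi=150)
```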

Glossary

FIXME