This site documents the Lightning Tracks (LT) event selection, the point-source analyses built on it, and the processing pipeline that applies the selection to IceCube data at scale.
The documentation is organized into three sections:
- Selection (physics-focused): the starting and throughgoing track samples, the cuts and models that define them, and the performance of the final samples. This section also covers angular error calibration, sensitivity optimization, and detailed diagnostics for all samples used in the analysis (including reference selections such as PST, NT, ESTES, and DNNC).
- Analyses (results-focused): the point-source analyses that use Lightning Tracks. This section presents analysis methods, configurations, and results.
- Processing (implementation-focused): how Snakemake, dataset configs, containers, and notebooks apply the selection to real data and MC. This section is targeted at anyone who wants to run or modify the selection pipeline.
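As a rough illustration of the Processing layer, a pipeline step driven by Snakemake might look like the sketch below. All rule, file, model, and container names here are hypothetical placeholders, not the actual LT pipeline:

```snakemake
# Hypothetical sketch of a Snakemake rule that applies an event selection
# to one run of data inside a container; names and paths are illustrative.
rule apply_selection:
    input:
        i3="data/{season}/{run}.i3.zst",          # placeholder input files
        model="models/lt_classifier.pkl",         # placeholder trained model
    output:
        "selected/{season}/{run}.hdf5"
    container:
        "docker://example/icetray-env:stable"     # placeholder image
    shell:
        "python apply_lt.py --input {input.i3} "
        "--model {input.model} --output {output}"
```

In this pattern the dataset config supplies the `{season}`/`{run}` wildcards, the container pins the software environment, and Snakemake handles dependency tracking and cluster submission.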
Goals and Scope
Lightning Tracks aims to provide a modern, ML-driven track selection that is:
- Sensitive: improved sensitivity to neutrino sources, especially in the southern hemisphere, relative to previous starting and throughgoing track event selections.
- Reproducible: full chain captured in version-controlled code, configs, and environments.
- Scalable: able to process all years of IC86 data and relevant MC sets on HPC clusters with reasonable resource requirements (no GPUs needed for application).
- Flexible: usable for multiple analyses, not tied to a single paper.
- Expandable: easy to understand, apply, and build upon.
The primary analysis target is a combined LT + DNNC point-source search comprising a full-sky scan and a Galactic Plane stacking analysis. Benchmarking against earlier IceCube track and cascade selections is included throughout the diagnostics.
Style
Prose deliberately follows The Chicago Manual of Style.