Sensitivity Optimization¶
This page explains the physics behind sensitivity optimization for point source searches and how it informs the cut function design for Lightning Tracks.
The Fundamental Trade-off¶
Point source sensitivity in neutrino astronomy is fundamentally a signal-to-noise problem. For a counting experiment, the statistical significance of a signal excess scales as:

\[
\sigma \propto \frac{S}{\sqrt{B}}
\]

where \(S\) is the number of signal events and \(B\) is the number of background events. This means:
- Doubling the signal improves significance by \(2\times\)
- Halving the background improves significance by only \(\sqrt{2} \approx 1.4\times\)
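This asymmetry is easy to verify numerically; a minimal sketch using the \(S/\sqrt{B}\) scaling:

```python
import math

def significance(S, B):
    """Gaussian significance of a signal excess: sigma ~ S / sqrt(B)."""
    return S / math.sqrt(B)

base = significance(S=10, B=100)               # reference point
doubled_signal = significance(S=20, B=100)     # 2x signal
halved_background = significance(S=10, B=50)   # half the background

print(doubled_signal / base)     # 2.0  (doubling signal doubles significance)
print(halved_background / base)  # ~1.414  (halving background gains only sqrt(2))
```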
When Do Cuts Help?¶
A quality cut reduces both signal and background. If a cut keeps a fraction \(f_S\) of the signal and a fraction \(f_B\) of the background (both between 0 and 1), the significance becomes:

\[
\sigma' \propto \frac{f_S \, S}{\sqrt{f_B \, B}}
\]

The cut improves sensitivity only if:

\[
\frac{f_S}{\sqrt{f_B}} > 1 \quad\Longleftrightarrow\quad f_B < f_S^2
\]

We define the cut power \(p\) such that \(f_B = f_S^p\). Since \(f_S < 1\), the condition \(f_B < f_S^2\) holds exactly when \(p > 2\): a cut helps sensitivity only if it removes background faster than the square of the signal it costs.
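The \(p > 2\) criterion follows directly from these definitions; a short sketch:

```python
import math

def cut_power(f_S, f_B):
    """Cut power p such that f_B = f_S**p, for survival fractions 0 < f < 1."""
    return math.log(f_B) / math.log(f_S)

def cut_helps(f_S, f_B):
    """A cut improves S/sqrt(B) significance iff f_S / sqrt(f_B) > 1, i.e. p > 2."""
    return f_S / math.sqrt(f_B) > 1.0

# Keep 90% of signal, 50% of background: p ~ 6.6 > 2, so the cut helps.
print(cut_power(0.9, 0.5), cut_helps(0.9, 0.5))
# Keep 50% of signal, 30% of background: p ~ 1.7 < 2, so the cut hurts.
print(cut_power(0.5, 0.3), cut_helps(0.5, 0.3))
```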
Declination Dependence¶
The optimal cut strength varies across the sky because the background composition changes with declination. In the southern sky (downgoing events), atmospheric muons overwhelm all other backgrounds, and their sheer rate drives the cut power down. Even for starting tracks, the overwhelming number of muons produces a significant population of single muons that penetrate several outer detector layers without depositing light, mimicking a starting-event topology. Aggressive cuts cannot reject muons fast enough relative to the signal they remove, so the cut power falls below 2 and further cutting hurts rather than helps sensitivity. Cuts are therefore loosest toward the South Pole.
Near the horizon, the Earth’s overburden begins to attenuate the muon flux while remaining transparent to neutrinos. Here the cut power is strongest, and cutting harder yields the largest sensitivity gains.
In the northern sky (upgoing events), the Earth shields most atmospheric muons. Once muons become subdominant to atmospheric neutrinos, further cutting is counterproductive: atmospheric neutrinos are genuine neutrino-induced muon tracks, topologically indistinguishable from astrophysical signal. No quality cut can separate them, so cutting past this point only reduces signal. These declination-dependent changes in the optimal cut produce the characteristic double-peak structure in SLT’s background spatial distribution (see Background Modeling on the Performance page).
Why Looser Cuts for Short Time Windows¶
For time-dependent searches (transient follow-up, flares), the background scales with the observation window \(T\). Shorter windows mean less background, which loosens the optimal cut.
Since \(B \propto T\), the number of signal events required to reach a given significance, i.e. the sensitivity, scales as:

\[
S_{\text{req}} \propto \sqrt{B} \propto \sqrt{T}
\]
For a 1000-second transient follow-up vs. a 10-year steady-state search, the significance improvement from the reduced background alone is \(\sqrt{10 \times 365.25 \times 24 \times 3600 / 1000} \approx 560\times\). This dramatically changes the optimal cut point.
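The quoted factor can be reproduced with a quick back-of-the-envelope calculation (nothing selection-specific is assumed here):

```python
import math

def significance_gain(T_long, T_short):
    """Significance gain from background reduction alone, assuming B scales with T."""
    return math.sqrt(T_long / T_short)

ten_years = 10 * 365.25 * 24 * 3600  # observation window in seconds
print(significance_gain(ten_years, 1000.0))  # ~562
```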
Ideally, each analysis would have its own optimized cut function. In practice, the IceCube working group approval process uses a single cut function per selection, so we optimize for the most common use case (steady-state point source searches) while accepting suboptimality for transient searches.
Grid Search Optimization¶
To inform the cut function design, we performed a grid search over uniform cut values for the full 12-year time-integrated point source search:
- Cut values: 0.05 to 0.70 in steps of 0.05 (14 values)
- Declinations: 26 sin(dec) points from -0.98 to +0.92
- Spectral indices: \(\gamma \in \{2.0, 2.5, 3.0\}\)
- Background trials: 100,000 per configuration
At each (sindec, gamma) point, we computed the sensitivity for every cut value and identified which cut gave the best (lowest) sensitivity. Figure 2 shows the resulting sensitivity curves for each uniform cut value, and Figure 3 shows the range between the best and worst cuts at each declination.
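The scan loop can be sketched as follows. Note that `estimate_sensitivity` is a hypothetical stand-in (here a toy function so the loop runs), not the actual trial-based calculation with background trials and signal injections:

```python
import itertools
import numpy as np

cut_values = np.arange(0.05, 0.70 + 1e-9, 0.05)  # 14 uniform cut values
sin_decs = np.linspace(-0.98, 0.92, 26)          # 26 sin(dec) grid points
gammas = [2.0, 2.5, 3.0]                         # spectral indices

def estimate_sensitivity(cut, sin_dec, gamma):
    """Toy stand-in for the trial-based sensitivity estimate (illustration only)."""
    optimal = 0.3 + 0.2 * sin_dec  # fake declination-dependent optimum
    return 1.0 + (cut - optimal) ** 2 / gamma

best_cut = {}
for sin_dec, gamma in itertools.product(sin_decs, gammas):
    sens = {cut: estimate_sensitivity(cut, sin_dec, gamma) for cut in cut_values}
    # Best cut = the one giving the lowest (smallest detectable flux) sensitivity.
    best_cut[(sin_dec, gamma)] = min(sens, key=sens.get)
```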
Cut Functions¶
The optimal cut value at each declination depends on the assumed spectral index, and no single uniform cut is optimal everywhere. To strike a \(\gamma\)-agnostic compromise, we hand-fitted simple analytic functions to the grid scan results, guided by the optimal cut envelopes across all three spectral indices. For SLT:
and for TLT:
Figure 4 compares these functions against the per-declination optimal cut values from the grid scan.
Discussion¶
The hand-fitted functions approximately follow the optimal cut envelope across spectral indices, providing a reasonable compromise without being tuned to any single \(\gamma\). Much of the scatter in the per-declination optimal cut values is driven by Monte Carlo statistical uncertainty in the sensitivity estimates rather than genuine structure, so the smooth analytic functions effectively average over this noise.
These cut functions are optimized for the full 12-year time-integrated point source search. As discussed above, shorter observation windows shift the optimal cut to looser values, so transient follow-ups inherit a cut that is somewhat tighter than their own optimum. With a single cut function per selection, the time-integrated case, as the most common use case, is the natural optimization target.