
Sensitivity Optimization

This page explains the physics behind sensitivity optimization for point source searches and how it informs the cut function design for Lightning Tracks.

The Fundamental Trade-off

Point source sensitivity in neutrino astronomy is fundamentally a signal-to-noise problem. For a counting experiment, the statistical significance of a signal excess scales as:

\[ \text{Significance} \propto \frac{S}{\sqrt{B}} \]

where \(S\) is the number of signal events and \(B\) is the number of background events. This means:

  • Doubling the signal improves significance by \(2\times\)
  • Halving the background improves significance by only \(\sqrt{2} \approx 1.4\times\)
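A quick numerical check of this scaling (toy numbers, not taken from the analysis):

```python
import math

# Toy illustration of the S/sqrt(B) scaling; the counts are arbitrary.
S, B = 10.0, 400.0
base = S / math.sqrt(B)

double_signal = (2 * S) / math.sqrt(B)   # 2x improvement
halve_background = S / math.sqrt(B / 2)  # only sqrt(2) ~ 1.41x improvement

print(double_signal / base, halve_background / base)
```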

When Do Cuts Help?

A quality cut reduces both signal and background. If a cut reduces signal by factor \(f_S\) and background by factor \(f_B\):

\[ \text{New S/N} = \frac{S/f_S}{\sqrt{B/f_B}} = \frac{S}{\sqrt{B}} \cdot \frac{\sqrt{f_B}}{f_S} \]

The cut improves sensitivity only if:

\[ \sqrt{f_B} > f_S \quad \Rightarrow \quad f_B > f_S^2 \]

We define the cut power \(p\) such that \(f_B = f_S^p\). A cut helps sensitivity only if \(p > 2\).
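The break-even at \(p = 2\) is easy to verify numerically; `sn_improvement` below is an illustrative helper, not part of the selection code:

```python
import math

def sn_improvement(f_s, p):
    """Factor by which S/sqrt(B) changes under a cut that reduces
    signal by f_s and background by f_b = f_s**p:
    sqrt(f_b)/f_s = f_s**(p/2 - 1)."""
    return f_s ** (p / 2.0 - 1.0)

f_s = 2.0  # a cut that keeps half the signal
print(sn_improvement(f_s, 1.0))  # < 1: sensitivity gets worse
print(sn_improvement(f_s, 2.0))  # = 1: break-even
print(sn_improvement(f_s, 3.0))  # > 1: the cut helps
```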

Figure 1.1: Signal-to-noise optimization for cut power p = 1.0. Top: event counts vs cut strength. Middle: muon/atmospheric-ν ratio. Bottom: signal-to-noise ratio. The red marker shows the optimal cut point. At power \(< 2\), the optimal cut is zero (no cutting).

Declination Dependence

The optimal cut strength varies across the sky because the background composition changes with declination. In the southern sky (downgoing events), atmospheric muons overwhelm all other backgrounds. The sheer rate of atmospheric muons drives the cut power down, even for starting tracks: the enormous muon flux produces a significant population of single muons that penetrate several outer detector layers without depositing light, mimicking a starting-event topology. Aggressive cuts cannot reject muons fast enough relative to the signal they remove, so the cut power falls below 2 and further cutting hurts rather than helps sensitivity. Cuts are therefore loosest toward the South Pole.

Near the horizon, the Earth’s overburden begins to attenuate the muon flux while remaining transparent to neutrinos. Here the cut power is strongest, and cutting harder yields the largest sensitivity gains.

In the northern sky (upgoing events), the Earth shields most atmospheric muons. Once muons become subdominant to atmospheric neutrinos, further cutting is counterproductive: atmospheric neutrinos are genuine neutrino-induced muon tracks, topologically indistinguishable from astrophysical signal. No quality cut can separate them, so cutting past this point only reduces signal. These declination-dependent changes in the optimal cut produce the characteristic double-peak structure in SLT’s background spatial distribution (see Background Modeling on the Performance page).

Why Looser Cuts for Short Time Windows

For time-dependent searches (transient follow-up, flares), the background scales with the observation window \(T\). Shorter windows mean less background, which loosens the optimal cut.

The significance scales as:

\[ \text{Significance} \propto \frac{S}{\sqrt{B \cdot T}} = \frac{S}{\sqrt{B}} \cdot \frac{1}{\sqrt{T}} \]

where \(B\) is now the background rate, so \(B \cdot T\) is the accumulated background count.

For a 1000-second transient follow-up vs. a 10-year steady-state search, the significance improvement from the reduced background alone is \(\sqrt{10 \times 365.25 \times 24 \times 3600 / 1000} \approx 560\times\). This dramatically changes the optimal cut point.
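The arithmetic behind that factor:

```python
import math

T_steady = 10 * 365.25 * 24 * 3600  # 10 years in seconds
T_transient = 1000.0                # 1000 s follow-up window

# Background accumulates linearly with T, so for a fixed signal the
# significance improves by sqrt(T_steady / T_transient).
improvement = math.sqrt(T_steady / T_transient)
print(improvement)  # ~560
```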

Ideally, each analysis would have its own optimized cut function. In practice, the IceCube working group approval process uses a single cut function per selection, so we optimize for the most common use case (steady-state point source searches) while accepting suboptimality for transient searches.

Grid Search Optimization

To inform the cut function design, we performed a grid search over uniform cut values for the full 12-year time-integrated point source search:

  • Cut values: 0.05 to 0.70 in steps of 0.05 (14 values)
  • Declinations: 26 sin(dec) points from -0.98 to +0.92
  • Spectral indices: \(\gamma \in \{2.0, 2.5, 3.0\}\)
  • Background trials: 100,000 per configuration

At each (sindec, gamma) point, we computed the sensitivity for every cut value and identified which cut gave the best (lowest) sensitivity. Figure 2 shows the resulting sensitivity curves for each uniform cut value, and Figure 3 shows the range between the best and worst cuts at each declination.
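The selection step above can be sketched as follows; the grid values are taken from the list above, while `sensitivity` is only a placeholder for the actual trial-based estimate (a real analysis runs ~100,000 background trials per configuration):

```python
import numpy as np

cut_values = np.arange(0.05, 0.701, 0.05)  # 14 uniform cut values
sin_decs = np.linspace(-0.98, 0.92, 26)    # 26 sin(dec) points
gammas = [2.0, 2.5, 3.0]

def sensitivity(cut, sindec, gamma):
    # Placeholder stand-in for the trial-based sensitivity estimate.
    return (1 + (cut - 0.3 * (sindec + 1)) ** 2) * gamma

# At each (sindec, gamma) point, keep the cut with the best
# (lowest) sensitivity.
best_cut = {}
for gamma in gammas:
    for sindec in sin_decs:
        sens = [sensitivity(c, sindec, gamma) for c in cut_values]
        best_cut[(sindec, gamma)] = cut_values[int(np.argmin(sens))]
```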

Figure 2.1.1: 90% sensitivity vs \(\sin(\delta)\) for SLT assuming a steady point source with an \(E^{-\gamma}\) spectrum at spectral index γ = 2.0. Each line represents a different cut value (color scale from loose to tight). The optimal cut varies with declination.
Figure 3.1.1: Sensitivity envelope for SLT assuming a steady point source with an \(E^{-\gamma}\) spectrum at spectral index γ = 2.0. The shaded region shows the range between best and worst cuts across the grid. The green line is the best achievable sensitivity; the red dashed line is the worst.

Cut Functions

The optimal cut value at each declination depends on the assumed spectral index, and no single uniform cut is optimal everywhere. To strike a \(\gamma\)-agnostic compromise, we hand-fitted simple analytic functions to the grid scan results, guided by the optimal cut envelopes across all three spectral indices. For SLT:

\[ \text{score} > 0.6 \cdot \sin(\theta_\text{zenith})^{1/3} \]

and for TLT:

\[ \text{score} > \frac{0.3}{1 + e^{-6.5(\theta_\text{zenith} - 1.2)}} + 0.05 \]
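The two thresholds transcribe directly into code; `slt_cut`, `tlt_cut`, and `passes` are illustrative names, and the zenith angle is assumed to be in radians:

```python
import math

def slt_cut(zenith):
    """SLT threshold: 0.6 * sin(zenith)^(1/3) (zenith in radians)."""
    return 0.6 * math.sin(zenith) ** (1.0 / 3.0)

def tlt_cut(zenith):
    """TLT threshold: a sigmoid in zenith plus a 0.05 floor."""
    return 0.3 / (1.0 + math.exp(-6.5 * (zenith - 1.2))) + 0.05

def passes(score, zenith, cut_fn):
    """An event passes if its classifier score exceeds the threshold."""
    return score > cut_fn(zenith)
```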

Figure 4 compares these functions against the per-declination optimal cut values from the grid scan.

Figure 4.1.1: Optimal cut values vs deployed cut function for SLT assuming a steady point source with an \(E^{-\gamma}\) spectrum at spectral index γ = 2.0. Scatter points show the optimal cut at each declination; point size and color indicate importance (sensitivity range at that declination). The dashed line shows the deployed cut function.

Discussion

The hand-fitted functions approximately follow the optimal cut envelope across spectral indices, providing a reasonable compromise without being tuned to any single \(\gamma\). Much of the scatter in the per-declination optimal cut values is driven by Monte Carlo statistical uncertainty in the sensitivity estimates rather than genuine structure, so the smooth analytic functions effectively average over this noise.

These cut functions are optimized for the full 12-year time-integrated point source search. As discussed above, shorter observation windows shift the optimal cut to looser values. Ideally, each analysis would have its own optimized cut function for the relevant time window and source distribution. In practice, the IceCube working group approval process uses a single cut function per selection, so the time-integrated case—the most common use case—is the natural optimization target.