pingfatigue.com is an independent, vendor-neutral reference on alert fatigue. Not affiliated with PagerDuty, Atlassian, Splunk, or any other vendor. Tool comparisons may contain affiliate links, clearly labelled.
ALERT FATIGUE CALCULATOR + INDEX

Alert Fatigue Calculator: Cost Per Engineer 2026 + Index

Move the sliders to calculate the annual cost of alert fatigue on your team. Based on incident.io 2024 State of On-Call, Catchpoint 2024 SRE Report, DORA 2024, and Google SRE Book benchmarks.

Alert Fatigue Index -- status: STRESSED

Outputs:
Pages/week (team): 252 -- above the Google SRE threshold of 84
False positives: 70% -- industry median 60-80% (Catchpoint 2024)
Annual cost (direct): $668,369 -- + 0.6 engineers at burnout risk

Inputs:
Team size (on-call engineers): 6 -- your rotation size
Pages per engineer per week: 42 -- incident.io 2024 median = 42
False-positive rate: 70% -- Catchpoint 2024 median = 70%
Fully-loaded hourly cost: $87/hr -- Levels.fyi SRE median ~$180K/yr
Average MTTA: 12 min -- PagerDuty 2023 median = 8-15 min
Night pages per month (per engineer): 8 -- incident.io 2024: 62% report sleep disruption

SUMMARY -- screenshot to share
Team of 6 SREs receiving 42 pages/wk each (70% false positive) = $668,369/yr in direct alert-handling cost + 0.6 engineers at burnout risk.
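To sanity-check the headline number, here is a minimal sketch of how a direct cost in this range can be derived from the slider inputs. It assumes roughly 35 minutes of lost engineer time per page (the 12-minute MTTA plus an assumed ~23-minute context-switch recovery); that handling-time figure is our illustrative assumption, not the calculator's published methodology, and it lands within about 1% of the default output.

```python
# Illustrative sketch only: the per-page handling time below is an assumption,
# not the calculator's published methodology.

def direct_alert_cost(team_size, pages_per_engineer_per_week,
                      hourly_cost, mtta_minutes,
                      context_switch_minutes=23):
    """Estimate the annual direct cost of handling pages for a team.

    context_switch_minutes is an assumed recovery penalty per interruption.
    """
    pages_per_year = team_size * pages_per_engineer_per_week * 52
    minutes_per_page = mtta_minutes + context_switch_minutes
    hours_per_year = pages_per_year * minutes_per_page / 60
    return hours_per_year * hourly_cost

# Calculator defaults: 6 engineers, 42 pages/wk each, $87/hr, 12 min MTTA
print(f"${direct_alert_cost(6, 42, 87, 12):,.0f}/yr")  # ~$665,000/yr
```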
ALERT FATIGUE INDEX 2026 -- UPDATED ANNUALLY

The Alert Fatigue Index

A unified benchmark table aggregating data from primary sources across DevOps, SRE, and incident management research. No comparable table exists in the vendor ecosystem.

Metric | Healthy | Median | Noisy | Source
Pages / engineer / week | <= 5 | 14-42 | > 100 | Google SRE + incident.io 2024
False-positive ratio | < 20% | 60-80% | > 90% | Catchpoint 2024
MTTA at night (min) | 2-5 | 8-15 | > 30 | PagerDuty 2023
Sleep disruption (self-reported) | < 10% | 62% | 80%+ | incident.io 2024
On-call turnover intent | < 5% | 41% | 60%+ | incident.io 2024
Correlation / dedup enabled | Yes | Partial | No | Vendor avg
Cost per engineer / year | < $10K | $50-100K | $300K+ | Derived (methodology)
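As a rough illustration of how a team can be scored against these bands, the sketch below maps two of the metrics onto the table's thresholds and takes the worst band as the overall status. The worst-metric-wins rule is an illustrative assumption; this is not the formula behind the calculator's status label above.

```python
# Rough illustration: the "worst metric wins" rule is an assumption,
# not the published index formula.

def classify(value, healthy_max, noisy_floor):
    """Map a metric to a band using the index table's thresholds."""
    if value <= healthy_max:
        return "healthy"
    if value > noisy_floor:
        return "noisy"
    return "median"

def index_band(pages_per_engineer_per_week, false_positive_rate):
    bands = [
        classify(pages_per_engineer_per_week, 5, 100),  # pages / engineer / week
        classify(false_positive_rate, 0.20, 0.90),      # false-positive ratio
    ]
    order = ["healthy", "median", "noisy"]
    return max(bands, key=order.index)  # worst metric wins

print(index_band(42, 0.70))  # "median" band under these assumptions
```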

What Is Alert Fatigue?

Alert fatigue is the desensitisation that occurs when on-call engineers receive too many monitoring alerts of poor quality. When most pages are false positives or require no action, engineers begin to miss, delay, or ignore even critical ones. The result: slower MTTR, higher incident severity, and eventual attrition. The mechanism is identical to alarm fatigue in healthcare intensive care units, where 85-99% of alarms are clinically non-actionable.

Full definition + taxonomy -->

Why Does It Happen?

Threshold alerting

Alerts fire on cause (CPU > 80%) not symptom. Most recover automatically before anyone acts.

SLO vs Threshold -->
No correlation

A single infrastructure failure triggers 50 duplicate alerts from redundant tools and monitors (see the dedup sketch below).

Correlation & Dedup -->
No runbooks

Without a documented response path, every alert starts a new investigation. Alert-to-action time balloons.

Runbooks -->
No audit cadence

Alert rules accumulate without review. Teams inherit noise from engineers who have long since left.

Alert Tuning -->
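To make the correlation point concrete, here is a minimal sketch of deduplication by fingerprint: alerts sharing a service, check, and severity within a short window collapse into a single page. The field names and the five-minute window are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal dedup sketch; field names and the 5-minute window are illustrative.
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(minutes=5)
_last_paged = {}  # fingerprint -> time we last sent a page for it

def should_page(alert, now=None):
    """Page only for the first alert with a given fingerprint inside the window."""
    now = now or datetime.now(timezone.utc)
    fingerprint = (alert["service"], alert["check"], alert["severity"])
    last = _last_paged.get(fingerprint)
    if last is not None and now - last < WINDOW:
        return False  # duplicate: suppress instead of paging again
    _last_paged[fingerprint] = now
    return True

# 50 duplicate alerts from one failing database collapse into a single page
alerts = [{"service": "db-primary", "check": "connection_refused",
           "severity": "critical"}] * 50
print(sum(should_page(a) for a in alerts))  # 1
```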

Tools That Help

AFFILIATE LINKS LABELLED
PagerDuty -- 9/10 -- Best overall
incident.io -- 9/10 -- Fastest growing
FireHydrant -- 8/10 -- Modern UX
Rootly -- 8/10 -- Slack-first
Opsgenie -- 7/10 -- Atlassian stack
Splunk On-Call -- 6/10 -- Splunk-native
Full comparison with pricing + methodology -->
CROSS-DOMAIN RESEARCH

The Healthcare Connection

ICU alarm fatigue has been studied for 40 years. Healthcare ICUs report 85-99% false-positive alarm rates and link them to sentinel events (preventable patient harm and deaths). The Joint Commission issued NPSG.06.01.01 as a regulatory response. DevOps has comparable false-positive ratios and similar consequences. No DevOps site has synthesised this research -- until now.

Read the cross-domain analysis -->

Research Behind These Numbers

41% -- of on-call engineers have considered leaving because of alert load (incident.io 2024)
62% -- report sleep disruption from night pages at least weekly (incident.io 2024)
70-80% -- false-positive rate is the industry median; most alerts require no action (Catchpoint 2024)
14 -- maximum pages per 7-day week recommended by the Google SRE Book, Chapter 6 (Google SRE Book 2016)
60-90% -- noise reduction achievable with correlation and deduplication enabled (vendor case studies)
$200-300K -- estimated replacement cost per senior SRE who quits over on-call load (SHRM formula applied)
All 25+ citations with methodology notes -->

Frequently Asked Questions

What is alert fatigue?
Alert fatigue is the desensitisation that occurs when on-call engineers are exposed to too many monitoring alerts of varying quality. Over time, they begin to miss, delay, or ignore important alerts. The industry median is 60-80% false-positive alerts, meaning most pages require no human action.
How much does alert fatigue cost per engineer per year?
Based on industry medians: an engineer receiving 42 pages per week at $180K fully-loaded cost spends roughly $61,000/year just handling alert interruptions, before accounting for context-switching penalty, after-hours costs, and the probability of attrition ($200K-$300K replacement cost per senior SRE).
What is the Alert Fatigue Index?
The Alert Fatigue Index is a reference benchmark table published annually by pingfatigue.com. It shows healthy, median, and noisy ranges for key on-call metrics: pages per engineer per week, false-positive ratio, MTTA, sleep disruption, attrition intent, and annual cost. It aggregates data from Google SRE Book, DORA 2024, incident.io 2024, Catchpoint 2024, and PagerDuty 2023.
What is the Google SRE threshold for alert volume?
The Google SRE Book (Chapter 6) states on-call engineers should receive at most 2 urgent alerts per 12-hour shift, which equates to roughly 14 pages per 7-day week. The incident.io 2024 State of On-Call survey found the actual median is 42 pages per week, three times the healthy threshold.
How do I reduce alert fatigue?
The highest-impact interventions are: (1) enable alert correlation and deduplication in your pager tool (typically reduces ticket count 60-90%), (2) migrate from threshold-based to SLO-based alerting to eliminate symptomless cause-based pages (see the burn-rate sketch after this FAQ), (3) run a quarterly alert audit killing 20% of rules, and (4) require a runbook for every remaining P1/P2 alert.
What is the difference between alert fatigue and notification fatigue?
Alert fatigue affects on-call engineers: they receive too many monitoring system pages (PagerDuty, Opsgenie) and become desensitised to them, causing missed incidents. Notification fatigue affects knowledge workers: they receive too many Slack messages, emails, and Teams pings, damaging focus and productivity. Both share the same mechanism but require different interventions.
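To illustrate the threshold-to-SLO shift recommended above, here is a small sketch of a burn-rate check: instead of paging on a raw threshold like CPU > 80%, it pages only when an availability error budget is being consumed fast enough to matter. The 99.9% target, one-hour window, and 14.4x threshold are common worked-example values, not recommendations specific to any team.

```python
# Illustrative burn-rate check; the 99.9% target and 14.4x fast-burn threshold
# are common worked-example values, not a recommendation for any specific team.

SLO_TARGET = 0.999              # availability objective
ERROR_BUDGET = 1 - SLO_TARGET   # 0.1% of requests may fail

def burn_rate(failed_requests, total_requests):
    """How fast the error budget is being consumed over the window."""
    if total_requests == 0:
        return 0.0
    error_rate = failed_requests / total_requests
    return error_rate / ERROR_BUDGET

def should_page(failed_requests, total_requests, threshold=14.4):
    # 14.4x sustained for 1 hour burns ~2% of a 30-day budget: a fast-burn signal
    return burn_rate(failed_requests, total_requests) >= threshold

print(should_page(failed_requests=3, total_requests=10_000))    # False: within budget
print(should_page(failed_requests=200, total_requests=10_000))  # True: fast burn, page
```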

Related Tools in the Engineering Cost Suite

outagecost.com -- Revenue impact of the downtime your noisy alerts could have prevented
incidentcost.com -- Broader incident taxonomy across breach, outage, ransomware
pagerdutypricing.com -- PagerDuty tier pricing breakdown
monitoringcost.com -- Observability stack economics: Datadog, Grafana, New Relic
techdebtcost.com -- Quantifying the other invisible engineering tax
platformengineeringcost.com -- Platform team cost context

Updated May 2026