
Apdex Scoring

Understand Apdex (Application Performance Index) scoring in Pingara. Learn how Apdex measures user satisfaction, configure T thresholds, interpret scores, and use Apdex to identify performance degradation.

6 min read · Updated April 7, 2026

Tags: apdex, performance, metrics, thresholds, user-satisfaction

Apdex (Application Performance Index) is an industry-standard metric that quantifies user satisfaction with application response times. Pingara calculates Apdex for every monitor, giving you a single number that reflects real-world user experience.

What Is Apdex?

Apdex converts raw performance data (response times) into a satisfaction score between 0.0 and 1.0.

Higher scores = Better performance

  • 1.0 — Perfect (all users satisfied)
  • 0.94 — Excellent (94% user satisfaction)
  • 0.50 — Poor (half your users frustrated)
  • 0.0 — Unacceptable (all users frustrated)

Why Apdex Matters

Traditional metrics like average response time hide important details:

  • Your average might be 200ms
  • But if 20% of requests take 5 seconds, users are frustrated
  • Apdex reveals this instantly

Apdex shows: What percentage of your users had a good experience?

How Apdex Works

The Formula

Apdex = (Satisfied + Tolerating/2) / Total

Every check is categorized as:

  1. Satisfied — Response time ≤ T threshold
  2. Tolerating — Response time between T and 4T
  3. Frustrated — Response time > 4T OR errors/timeouts

Example:

If T = 500ms and you have 100 checks:

  • 80 checks ≤ 500ms (satisfied)
  • 15 checks 501ms–2000ms (tolerating)
  • 5 checks > 2000ms or errors (frustrated)

Apdex = (80 + 15/2) / 100
      = (80 + 7.5) / 100
      = 87.5 / 100
      = 0.875 (Good)
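The formula and categories above can be sketched as a small function. This is a hypothetical illustration (function and parameter names are not Pingara's internals), reproducing the worked example:

```python
def apdex(response_times_ms, t_ms, errors=0):
    """Apdex = (satisfied + tolerating/2) / total.

    response_times_ms: durations of completed checks, in milliseconds.
    t_ms: the T threshold.
    errors: count of failed/timed-out checks (always frustrated).
    """
    satisfied = sum(1 for rt in response_times_ms if rt <= t_ms)
    tolerating = sum(1 for rt in response_times_ms if t_ms < rt <= 4 * t_ms)
    total = len(response_times_ms) + errors
    return (satisfied + tolerating / 2) / total if total else 0.0

# T = 500ms; 80 satisfied, 15 tolerating, 5 frustrated
checks = [300] * 80 + [1200] * 15 + [3000] * 5
print(apdex(checks, 500))  # 0.875
```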

The T Threshold

T is your target response time — the maximum acceptable duration for a satisfactory user experience.

Choosing T:

  • Web pages: 500ms–1000ms
  • APIs: 100ms–500ms
  • Background services: 1000ms–5000ms

Set T based on your users' expectations, not technical minimums.

Tip: If 90% of your requests normally complete in 300ms, set T = 500ms to catch degradation early.

Configuring Apdex in Pingara

Setting the T Threshold

When creating or editing a monitor:

  1. Navigate to Advanced Settings
  2. Find Apdex Threshold (T)
  3. Enter your target response time in milliseconds
  4. Click Save

Default: 500ms (suitable for most web applications)

Per-Monitor Thresholds

Each monitor can have its own T value:

  • Marketing site: T = 1000ms (users tolerate slower loads)
  • API endpoint: T = 200ms (developers expect fast responses)
  • Admin dashboard: T = 500ms (internal tool, moderate expectations)

Why different thresholds? Users have different performance expectations for different services.

Interpreting Apdex Scores

Score Ratings

Pingara uses industry-standard Apdex ratings:

| Score | Rating | Meaning |
| --- | --- | --- |
| ≥0.94 | Excellent | 94%+ users satisfied — keep it up! |
| 0.85–0.93 | Good | 85–93% satisfaction — acceptable performance |
| 0.70–0.84 | Fair | 70–84% satisfaction — investigate slowdowns |
| 0.50–0.69 | Poor | 50–69% satisfaction — urgent optimization needed |
| <0.50 | Unacceptable | <50% satisfaction — users are frustrated |
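These bands can be expressed as a simple lookup. Thresholds are taken from the table above; the function name is illustrative:

```python
def apdex_rating(score):
    """Map an Apdex score to Pingara's rating bands."""
    if score >= 0.94:
        return "Excellent"
    if score >= 0.85:
        return "Good"
    if score >= 0.70:
        return "Fair"
    if score >= 0.50:
        return "Poor"
    return "Unacceptable"

print(apdex_rating(0.875))  # Good
```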

Trend Analysis

Apdex is most useful when tracked over time:

Healthy trend:

Mon: 0.95 (Excellent)
Tue: 0.94 (Excellent)
Wed: 0.96 (Excellent)

Stable, high performance.

Degrading trend:

Mon: 0.95 (Excellent)
Tue: 0.87 (Good)
Wed: 0.72 (Fair)

Something is slowing down. Investigate immediately.

Recovering trend:

Mon: 0.60 (Poor)
Tue: 0.80 (Fair)
Wed: 0.92 (Good)

Performance improving after optimization or incident resolution.
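One simple way to label such windows programmatically is to compare the first and last score. The 0.05 delta is an arbitrary assumption for the sketch, not a Pingara default:

```python
def classify_trend(daily_scores, delta=0.05):
    """Label a window of daily Apdex scores: stable, degrading, or recovering."""
    change = daily_scores[-1] - daily_scores[0]
    if change <= -delta:
        return "degrading"
    if change >= delta:
        return "recovering"
    return "stable"

print(classify_trend([0.95, 0.87, 0.72]))  # degrading
```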

Apdex vs Other Metrics

Apdex vs Average Response Time

Average response time can be misleading:

  • Hides outliers
  • Doesn't reflect user satisfaction
  • One slow request skews the average

Apdex shows the full picture:

  • Counts satisfied vs frustrated users
  • Outliers directly impact the score
  • Instantly reveals performance issues

Example:

| Metric | Before | After |
| --- | --- | --- |
| Avg Response | 300ms | 320ms |
| Apdex (T=500ms) | 0.95 | 0.72 |

Average barely changed, but Apdex dropped from Excellent to Fair — revealing that a significant number of requests became slow.
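A toy dataset in the spirit of this table shows the mechanism, assuming the frustrated checks are timeouts: timeouts return no duration, so they never enter the average, yet each one drags Apdex down:

```python
from statistics import mean

def apdex(times_ms, t_ms, errors=0):
    satisfied = sum(1 for t in times_ms if t <= t_ms)
    tolerating = sum(1 for t in times_ms if t_ms < t <= 4 * t_ms)
    return (satisfied + tolerating / 2) / (len(times_ms) + errors)

before = [300] * 100          # 100 healthy checks around 300ms
after_completed = [320] * 72  # completed checks still ~320ms...
# ...but 28 checks now time out, and timeouts vanish from the average
print(mean(before), apdex(before, 500))                               # 300 1.0
print(mean(after_completed), apdex(after_completed, 500, errors=28))  # 320 0.72
```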

Apdex vs Uptime

Uptime = Did the service respond?

Apdex = Did the service respond fast enough?

Your monitor can have:

  • ✅ 100% uptime
  • ❌ Apdex = 0.50 (Poor)

This means your site is up but slow — users are still frustrated.

Use both:

  • Uptime catches outages
  • Apdex catches performance degradation

When Pingara Marks Monitors as "Degraded"

If Apdex consistently falls below a certain threshold, Pingara changes the monitor status from Up to Degraded.

Degraded trigger:

  • Response time exceeds 4T (frustrated threshold)
  • Apdex score drops significantly
  • Consecutive slow checks

Why "Degraded" matters:

  • Alerts you before a full outage
  • Lets you investigate performance issues proactively
  • Visible on status pages (users see "Partial Outage")

Example:

  • T = 500ms
  • 4T = 2000ms
  • If 3+ consecutive checks exceed 2000ms → Status = Degraded
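That rule can be sketched as follows. The 3-check window mirrors the example above; treat this as an illustration rather than Pingara's exact logic:

```python
def is_degraded(recent_times_ms, t_ms, window=3):
    """True when the last `window` checks all exceeded 4T (frustrated)."""
    limit = 4 * t_ms
    tail = recent_times_ms[-window:]
    return len(tail) == window and all(t > limit for t in tail)

print(is_degraded([400, 2100, 2500, 3000], 500))  # True
```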

Practical Tips

Set Realistic Thresholds

Too strict:

  • T = 100ms for a database-backed web page
  • Constant false alarms
  • Apdex always Poor

Too lenient:

  • T = 5000ms for a simple API
  • Users frustrated but Apdex says Excellent
  • You miss real issues

Right approach:

  1. Monitor your baseline performance for a week
  2. Check p95 response time (95th percentile)
  3. Set T slightly above p95
  4. Adjust based on user feedback
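Steps 2–3 can be sketched with a nearest-rank p95. The 1.25× headroom factor is an assumption for illustration; pick what fits your service:

```python
import math

def p95(samples_ms):
    """Nearest-rank 95th percentile of baseline response times."""
    ordered = sorted(samples_ms)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

def suggest_t(samples_ms, headroom=1.25):
    """Set T slightly above p95 to catch degradation without false alarms."""
    return round(p95(samples_ms) * headroom)

baseline = list(range(1, 101))  # pretend: a week of samples, 1..100 ms
print(p95(baseline), suggest_t(baseline))  # 95 119
```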

Watch Trends, Not Single Scores

A single low score might be a transient blip. Watch for:

  • Sustained drops — Performance degrading over hours/days
  • Spikes during peak hours — Capacity issues
  • Regional differences — Network or server problems in specific regions

Combine with Latency Breakdown

If Apdex drops, drill into the performance breakdown:

  • High DNS time → DNS server issue
  • High TCP connect → Network congestion
  • High TTFB → Backend processing slow
  • High total duration → Large response size or transfer bottleneck

Use the latency chart to identify where the slowdown occurs.
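A minimal triage helper along these lines (phase names and cause strings are illustrative, not Pingara field names):

```python
LIKELY_CAUSE = {
    "dns": "DNS server issue",
    "tcp_connect": "network congestion",
    "ttfb": "slow backend processing",
    "transfer": "large response or transfer bottleneck",
}

def diagnose(phase_times_ms):
    """Return the slowest phase and its likely cause."""
    phase = max(phase_times_ms, key=phase_times_ms.get)
    return phase, LIKELY_CAUSE[phase]

print(diagnose({"dns": 20, "tcp_connect": 35, "ttfb": 840, "transfer": 60}))
# ('ttfb', 'slow backend processing')
```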

Use Apdex for SLA Reporting

Many teams report Apdex in monthly SLAs:

Example SLA:

"Our API will maintain an Apdex score ≥0.90 (T=200ms) for 99.5% of calendar days."

Apdex gives a user-centric performance guarantee beyond simple uptime.
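Checking an SLA like this against daily scores is straightforward (a sketch; names are assumptions):

```python
def sla_compliance(daily_apdex, target=0.90):
    """Fraction of days whose Apdex met the target."""
    return sum(1 for s in daily_apdex if s >= target) / len(daily_apdex)

month = [0.95] * 28 + [0.85] * 2       # two bad days in a 30-day month
print(f"{sla_compliance(month):.1%}")  # 93.3%
```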

Apdex on the Dashboard

Overview Card

The dashboard shows overall Apdex — a weighted average across all active monitors:

  • ✅ 0.94+ → Green, Excellent
  • ⚠️ 0.70–0.93 → Yellow, Fair to Good
  • 🚨 <0.70 → Red, Poor to Unacceptable

Per-Monitor Apdex

Click any monitor to see:

  • Current Apdex score
  • 24-hour, 7-day, 30-day trends
  • Breakdown by region (Pro feature)
  • Histogram of response times

Apdex in Incident Summaries

When an incident is resolved, the incident report includes:

  • Average Apdex during outage
  • Apdex recovery timeline
  • Comparison to baseline Apdex

This helps diagnose whether the issue was a full outage or performance degradation.

Troubleshooting Low Apdex

Apdex < 0.5 (Unacceptable)

Investigate:

  1. Check for active incidents (site down = Apdex near 0)
  2. Review error rates (timeouts, 500 errors)
  3. Look at recent deployments (did code change break something?)

Common causes:

  • Database overload
  • Memory leak
  • DDoS attack
  • Infrastructure failure

Apdex 0.5–0.7 (Poor)

Investigate:

  1. Check backend resource usage (CPU, memory, disk I/O)
  2. Review slow query logs
  3. Check third-party API latencies
  4. Verify CDN performance

Common causes:

  • Unoptimized database queries
  • Cold cache (after restart)
  • Increased traffic without scaling
  • Slow external API calls

Apdex 0.7–0.85 (Fair)

Investigate:

  1. Look for gradual performance regression
  2. Check time-to-first-byte (TTFB)
  3. Review recent code deployments
  4. Check for N+1 query patterns

Common causes:

  • Code inefficiency introduced in recent deploy
  • Gradual data growth slowing queries
  • Network path degradation

Next Steps