Getting Started

Understanding the Dashboard

Learn what every metric, chart, and indicator means on your Pingara dashboard. Master uptime percentages, Apdex scores, latency charts, and incident tracking.

6 min read · Updated April 7, 2026
Tags: dashboard, metrics, uptime, apdex, latency

Your Pingara dashboard is mission control for monitoring. This guide explains every metric, chart, and indicator so you can make informed decisions about your infrastructure.

Dashboard Overview

The dashboard has four main sections:

  1. Overview Cards — High-level KPIs (uptime, Apdex, incidents)
  2. Monitors Table — Real-time status of all your monitors
  3. Latency Chart — Performance trends over time
  4. Incidents List — Recent outages and ongoing issues

Overview Cards

Overall Uptime

What it shows: The percentage of successful checks across all your monitors in the selected time period (24h, 7d, 30d).

How it's calculated:

Uptime % = (Successful Checks / Total Checks) × 100

What the numbers mean:

  • 99.9% or higher — Excellent (at most ~8.8 hours of downtime per year)
  • 99.5% - 99.9% — Good (up to ~44 hours of downtime per year)
  • Below 99.5% — Needs attention
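The formula and the downtime bands above can be sketched in a few lines. The function names here are illustrative, not part of any Pingara API:

```python
def uptime_percent(successful_checks: int, total_checks: int) -> float:
    """Uptime % = (Successful Checks / Total Checks) x 100."""
    return successful_checks / total_checks * 100

def max_downtime_hours_per_year(uptime_pct: float) -> float:
    """Worst-case annual downtime implied by an uptime percentage
    (365 days x 24 hours = 8760 hours per year)."""
    return (100 - uptime_pct) / 100 * 365 * 24

print(round(uptime_percent(9990, 10000), 2))        # 99.9
print(round(max_downtime_hours_per_year(99.9), 2))  # 8.76
print(round(max_downtime_hours_per_year(99.5), 2))  # 43.8
```

This is where the band thresholds come from: "three nines" allows roughly 8.8 hours of downtime per year, while 99.5% allows roughly 44.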

Tip: Industry standard SLAs typically target 99.9% (three nines) or 99.99% (four nines) uptime.

Apdex Score

What it shows: A 0.0-1.0 score representing user satisfaction based on response times.

How it's calculated:

Apdex = (Satisfied + Tolerating/2) / Total Checks

Where:

  • Satisfied — Response time ≤ T (your threshold)
  • Tolerating — Response time > T but ≤ 4T
  • Frustrated — Response time > 4T, or any error
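The Apdex formula above can be sketched as follows. This is a minimal illustration of the standard calculation, not Pingara's actual implementation; the function signature is assumed:

```python
def apdex(response_times_ms, errors, threshold_ms):
    """Apdex = (Satisfied + Tolerating / 2) / Total Checks.

    response_times_ms: one duration per check.
    errors: parallel list of bools (True = failed check).
    threshold_ms: the monitor's Apdex threshold T.
    """
    satisfied = tolerating = 0
    for ms, is_error in zip(response_times_ms, errors):
        if is_error:
            continue  # errors count as frustrated
        if ms <= threshold_ms:
            satisfied += 1
        elif ms <= 4 * threshold_ms:
            tolerating += 1
        # anything slower than 4T is frustrated
    return (satisfied + tolerating / 2) / len(response_times_ms)

# T = 300 ms: two satisfied, one tolerating, one error -> (2 + 0.5) / 4
print(apdex([100, 200, 900, 150], [False, False, False, True], 300))  # 0.625
```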

What the scores mean:

  • 0.94-1.0 — Excellent
  • 0.85-0.93 — Good
  • 0.70-0.84 — Fair
  • 0.50-0.69 — Poor
  • Below 0.5 — Unacceptable

Learn more about Apdex scoring.

Active Incidents

What it shows: The count of monitors currently experiencing outages or degraded performance.

Incident statuses:

  • Investigating — Just detected, root cause unknown
  • Identified — Root cause determined
  • Monitoring — Fix deployed, watching for recovery
  • Resolved — Monitor recovered (2+ consecutive successful checks)

Click the number to view incident details.

Monitors Table

The monitors table shows real-time status for each of your monitors.

Status Indicators

| Icon | Status | Meaning |
| --- | --- | --- |
| 🟢 | Up | Responding successfully |
| 🟡 | Degraded | Slow responses (above the Apdex threshold) |
| 🔴 | Down | Failed checks (2+ consecutive failures) |
| ⏸️ | Paused | Monitoring temporarily stopped |
|  | Pending | Newly created; first check not yet run |

Columns Explained

  • Monitor Name — Click to view detailed performance history
  • URL — The endpoint being monitored
  • Status — Current state (see above)
  • Response Time — Latest check duration in milliseconds
  • Uptime — Success rate over last 24 hours
  • Last Checked — Timestamp of most recent check
  • Active Incidents — Count of unresolved incidents for this monitor

What triggers a "Down" status?

Pingara uses consecutive failure detection to avoid false alarms:

  1. First failure → Monitor stays "Up" (could be transient)
  2. Second consecutive failure → Status changes to "Down"
  3. Incident is created and alerts fire

This reduces false positives from temporary network blips.
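Combined with the resolution rule mentioned earlier (2+ consecutive successful checks), the detection logic can be sketched as a small state machine. This is an assumed reconstruction of the behavior described above, not Pingara's source code:

```python
class MonitorState:
    """Consecutive-failure detection: two consecutive failures mark the
    monitor Down (incident created); two consecutive successes after
    that mark it Up again (incident resolved)."""

    def __init__(self):
        self.status = "up"
        self.fail_streak = 0
        self.ok_streak = 0

    def record_check(self, success: bool) -> str:
        if success:
            self.ok_streak += 1
            self.fail_streak = 0
            if self.status == "down" and self.ok_streak >= 2:
                self.status = "up"  # recovery confirmed, incident resolved
        else:
            self.fail_streak += 1
            self.ok_streak = 0
            if self.fail_streak >= 2:
                self.status = "down"  # incident created, alerts fire
        return self.status

m = MonitorState()
print([m.record_check(ok) for ok in [True, False, True, False, False, True, True]])
# ['up', 'up', 'up', 'up', 'down', 'down', 'up']
```

Note how the single failure early in the sequence never flips the status: that is the false-positive protection in action.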

Latency Chart

The latency chart visualizes response time trends across your monitors.

Percentile Lines

  • p50 (median) — Half of requests are faster than this
  • p95 — 95% of requests are faster than this (common SLA target)
  • p99 — 99% of requests are faster than this (captures worst-case)

Why percentiles matter: Average response time can hide outliers. If p95 is 200ms but p99 is 2000ms, some users are experiencing 10x slower responses.
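A small example makes the "averages hide outliers" point concrete. This sketch uses the nearest-rank percentile convention (other tools interpolate; exact values may differ slightly):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample with at least p%
    of all samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# 100 checks: 98 at 100 ms, plus two slow outliers
latencies = [100] * 98 + [2000, 2500]
print(sum(latencies) / len(latencies))  # mean: 143.0 — looks fine
print(percentile(latencies, 50))        # p50: 100
print(percentile(latencies, 95))        # p95: 100
print(percentile(latencies, 99))        # p99: 2000 — outliers exposed
```

The mean (143 ms) and even p95 look healthy here; only p99 reveals that some users waited 20x longer.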

Time Range Selector

Switch between:

  • 24 hours — Spot recent performance changes
  • 7 days — Identify weekly patterns (e.g., traffic spikes)
  • 30 days — Track long-term trends and seasonal effects

Interpreting the Chart

Healthy pattern:

  • All three lines stay relatively flat
  • p99 stays within 2-3x of p50
  • No sudden spikes

Warning signs:

  • p99 line frequently spikes
  • Growing gap between p50 and p99
  • Gradual upward trend (degrading performance)

Action: If you see spikes, drill down into the specific monitor to see which regions or time periods are affected.

Incidents List

Shows recent incidents across all monitors, ordered by start time (newest first).

Incident Information

Each incident shows:

  • Monitor name — Which monitor went down
  • Status — investigating / identified / monitoring / resolved
  • Started — When the incident began
  • Duration — How long it lasted (or is lasting)
  • Error type — DNS failure, timeout, 5xx error, etc.

Color Coding

  • 🔴 Red — Active incidents (investigating, identified, monitoring)
  • 🟢 Green — Resolved incidents

Root Cause Hints

Click an incident to see AI-generated root cause analysis. Pingara's AI examines:

  • DNS lookup time (DNS issues?)
  • TCP connection time (network problems?)
  • TLS handshake time (certificate issues?)
  • Time to first byte (slow backend?)
  • HTTP status code
  • Error messages

Learn more about AI root cause analysis.

Dashboard Actions

Add Monitor

Click "Add Monitor" in the top right to create a new monitor. You'll configure:

  • URL and monitor type
  • Check interval and regions
  • Expected status codes
  • SSL certificate tracking
  • Keyword validation

Learn more about creating monitors.

Pause/Resume Monitors

Click the ⏸️ icon next to a monitor to:

  • Pause — Stop checks temporarily (e.g., during planned maintenance)
  • Resume — Restart monitoring

Note: Paused monitors don't count against your plan limits but won't generate alerts.

Filter Monitors

Use the status filter to show only:

  • All monitors
  • Up monitors
  • Down monitors
  • Degraded monitors
  • Paused monitors

Mobile Dashboard

The dashboard is fully responsive. On mobile:

  • Overview cards stack vertically
  • Monitor table scrolls horizontally
  • Charts adapt to smaller screens
  • All actions remain accessible

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| / | Focus search |
| n | Create new monitor |
| r | Refresh data |
| ? | Show keyboard shortcuts |

Dashboard Best Practices

Monitor Grouping

Use tags to organize monitors:

  • By environment: production, staging
  • By service: api, web, cdn
  • By criticality: critical, important, non-critical

Alert Fatigue Prevention

If you're getting too many alerts:

  1. Increase Apdex threshold for less critical monitors
  2. Use longer check intervals for non-production services
  3. Set up alert policies with severity filtering

Performance Baselines

After 7-30 days, you'll have baseline data:

  • Normal p95 response time for each monitor
  • Typical uptime percentage
  • Expected Apdex score

Use these baselines to:

  • Set realistic Apdex thresholds
  • Detect performance regressions
  • Plan capacity upgrades
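One way to use a baseline for regression detection is a simple multiplier check. The 1.5x tolerance here is an illustrative assumption, not a Pingara default — tune it per monitor:

```python
def is_regression(baseline_p95_ms: float, current_p95_ms: float,
                  tolerance: float = 1.5) -> bool:
    """Flag a regression when current p95 exceeds the baseline
    by more than `tolerance` times."""
    return current_p95_ms > baseline_p95_ms * tolerance

print(is_regression(200, 250))  # False — within normal variation
print(is_regression(200, 350))  # True — worth investigating
```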

Next Steps