Alerts

Setting Up Alerts

Learn how to create alert policies, configure notification rules, and ensure your team is always informed when monitors detect issues.

5 min read · Updated April 7, 2026
Tags: alerts, notifications, policies, email

Alerts are the backbone of effective monitoring. Pingara's alert system ensures your team is notified the moment an issue is detected — and again when it's resolved.

How Alerts Work

When Pingara detects an issue with one of your monitors, it follows this flow:

  1. Check fails — the HTTP request returns an unexpected status, times out, or an expected keyword is missing
  2. Consecutive failures confirmed — 2 back-to-back failures trigger an incident
  3. Alert policies evaluated — Pingara checks which policies apply to the monitor
  4. Notifications dispatched — Messages sent to all configured channels (email, Slack, etc.)
  5. Recovery detected — 2 consecutive successes resolve the incident
  6. Recovery alert sent — Team is notified that the issue is resolved

No manual intervention needed — the entire lifecycle is automatic.
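The lifecycle above can be sketched as a small state machine, assuming the documented defaults of 2 consecutive failures to open an incident and 2 consecutive successes to resolve it. The class and method names are illustrative, not Pingara's actual API.

```python
FAILURE_THRESHOLD = 2   # consecutive failures that open an incident
RECOVERY_THRESHOLD = 2  # consecutive successes that resolve it


class IncidentTracker:
    """Tracks one monitor's check results and emits alert events."""

    def __init__(self):
        self.fail_streak = 0
        self.ok_streak = 0
        self.incident_open = False

    def record_check(self, passed: bool):
        """Feed one check result; return 'down' or 'recovered' when an
        alert should fire, otherwise None."""
        if passed:
            self.ok_streak += 1
            self.fail_streak = 0
            if self.incident_open and self.ok_streak >= RECOVERY_THRESHOLD:
                self.incident_open = False
                return "recovered"
        else:
            self.fail_streak += 1
            self.ok_streak = 0
            if not self.incident_open and self.fail_streak >= FAILURE_THRESHOLD:
                self.incident_open = True
                return "down"
        return None


tracker = IncidentTracker()
events = [tracker.record_check(ok) for ok in [True, False, False, False, True, True]]
# exactly one 'down' event (at the second failure) and one 'recovered'
# event (at the second success); all other checks produce no alert
```

Note that the third failure produces no additional event: the incident is already open, so only the state transitions generate notifications.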

Alert Policies

An alert policy is a set of rules that determines what triggers a notification and who gets notified.

Creating a Policy

  1. Navigate to Settings → Alert Policies
  2. Click Create Policy
  3. Give it a descriptive name (e.g., "Production Alerts" or "On-Call Team")
  4. Configure the alert triggers
  5. Add notification channels
  6. Save

Alert Triggers

Each policy has four toggles that control when notifications fire:

Toggle              | What It Does
--------------------|-----------------------------------------------------------
Alert on Down       | Notify when a monitor goes down (incident created)
Alert on Recovery   | Notify when a monitor recovers (incident resolved)
Alert on Degraded   | Notify when performance degrades below the Apdex threshold
Alert on SSL Expiry | Notify when an SSL certificate is approaching expiry
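Conceptually, each toggle gates one event type. A minimal sketch of that mapping follows; the event names and policy shape are illustrative assumptions, not Pingara's actual schema.

```python
# Hypothetical policy: each toggle gates one event type.
DEFAULT_POLICY = {
    "alert_on_down": True,
    "alert_on_recovery": True,
    "alert_on_degraded": False,
    "alert_on_ssl_expiry": True,
}

EVENT_TO_TOGGLE = {
    "down": "alert_on_down",
    "recovered": "alert_on_recovery",
    "degraded": "alert_on_degraded",
    "ssl_expiry": "alert_on_ssl_expiry",
}


def should_notify(policy: dict, event: str) -> bool:
    """True if the policy's toggle for this event type is enabled."""
    return policy.get(EVENT_TO_TOGGLE[event], False)


should_notify(DEFAULT_POLICY, "down")      # True
should_notify(DEFAULT_POLICY, "degraded")  # False
```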

Alert on Down

This is the most important trigger. When enabled, your team receives a notification as soon as an incident is created — after 2 consecutive check failures.

Example notification:

🔴 Monitor Down: api.example.com
Status: Investigating
Error: Connection timeout
Region: US East (N. Virginia)
Started: 2024-01-15 14:32 UTC

Alert on Recovery

Just as important as down alerts. When enabled, your team knows the moment service is restored.

Example notification:

🟢 Monitor Recovered: api.example.com
Status: Resolved
Duration: 12 minutes
Resolved: 2024-01-15 14:44 UTC

Best practice: Always enable both Down and Recovery alerts. Without recovery notifications, your team may waste time investigating issues that have already resolved.

Alert on Degraded

Triggers when a monitor's response time consistently exceeds the Apdex threshold, even though the service is technically "up."

When to enable:

  • Performance-sensitive APIs
  • E-commerce sites where latency impacts conversions
  • SLA-bound services

When to skip:

  • Non-critical internal tools
  • Services with naturally variable response times
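For reference, the standard Apdex formula counts responses at or under a target time T as "satisfied" and those under 4T as "tolerating", then scores the sample as (satisfied + tolerating/2) / total. The sketch below uses an assumed T of 500 ms and an assumed 0.85 cutoff; neither is a documented Pingara default.

```python
def apdex(response_times_ms: list, t_ms: float) -> float:
    """Standard Apdex score: (satisfied + tolerating/2) / total."""
    satisfied = sum(1 for r in response_times_ms if r <= t_ms)
    tolerating = sum(1 for r in response_times_ms if t_ms < r <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)


def is_degraded(response_times_ms: list, t_ms: float = 500, cutoff: float = 0.85) -> bool:
    """Assumed degraded condition: Apdex score falls below the cutoff."""
    return apdex(response_times_ms, t_ms) < cutoff


# 120 and 300 are satisfied, 800 is tolerating, 2500 (> 4T) is frustrated:
apdex([120, 300, 800, 2500], t_ms=500)  # → (2 + 0.5) / 4 = 0.625
```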

Alert on SSL Expiry

Sends milestone-based warnings as your SSL certificate approaches expiry. Default warning thresholds are 30, 14, and 7 days before expiry.

Pingara sends one alert per threshold — you won't be spammed with repeated warnings at the same milestone.
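The one-alert-per-milestone behavior can be sketched as follows, using the documented 30/14/7-day defaults. The function shape and the `already_sent` bookkeeping are illustrative, not Pingara's implementation.

```python
THRESHOLDS_DAYS = (30, 14, 7)  # documented default warning milestones


def ssl_alert(days_until_expiry: int, already_sent: set):
    """Return the most urgent unsent milestone crossed, or None.

    Checking the tightest threshold first means a cert first seen at
    6 days out fires the 7-day warning, not all three at once.
    """
    for t in sorted(THRESHOLDS_DAYS):  # tightest milestone first
        if days_until_expiry <= t and t not in already_sent:
            already_sent.add(t)
            return t
    return None


sent = set()
ssl_alert(30, sent)  # → 30 (first milestone crossed)
ssl_alert(29, sent)  # → None (30-day alert already sent, no spam)
ssl_alert(14, sent)  # → 14
```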

Linking Policies to Monitors

When a monitor is created, it is automatically linked to your alert policies. You can also manually manage which policies apply to each monitor.

Automatic Linking

When you create a new monitor, Pingara links it to all enabled alert policies in your organization. This ensures new monitors are covered immediately.

Manual Linking

To add or remove a policy from a specific monitor:

  1. Go to Monitors → [Your Monitor] → Settings
  2. Scroll to Alert Rules
  3. Toggle policies on or off

This lets you create targeted policies — for example, a "Critical Only" policy that only applies to production monitors.

Escalation and Repeat Notifications

Escalation Rules

Escalation ensures that unresolved incidents get attention from the right people:

  • Immediate — First notification goes to the primary channel (e.g., Slack)
  • After X minutes — If unacknowledged, escalate to secondary channel (e.g., email to team lead)
  • After Y minutes — Escalate further (e.g., PagerDuty to on-call engineer)

Repeat Notifications

For critical monitors, configure repeat notifications to prevent alerts from being missed:

  • Repeat every 15 minutes — Good for production outages
  • Repeat every 30 minutes — Good for important but non-critical services
  • No repeat — Single notification only

Tip: Combine escalation with repeat notifications. The first alert goes to Slack; if unresolved after 15 minutes, repeat to email; after 30 minutes, escalate to PagerDuty.
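The schedule in the tip above can be expressed as a simple delay table. The channel names and the schedule shape here are illustrative assumptions.

```python
# Hypothetical escalation schedule: (minutes unresolved, channel).
ESCALATION = [
    (0, "slack"),       # immediate first notification
    (15, "email"),      # if still unresolved after 15 minutes
    (30, "pagerduty"),  # escalate to on-call after 30 minutes
]


def channels_due(minutes_unresolved: int) -> list:
    """All channels whose escalation delay has already elapsed."""
    return [ch for delay, ch in ESCALATION if minutes_unresolved >= delay]


channels_due(0)   # → ['slack']
channels_due(20)  # → ['slack', 'email']
channels_due(45)  # → ['slack', 'email', 'pagerduty']
```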

Pause and Resume Alerts

When you pause a monitor (e.g., during planned maintenance), Pingara can optionally notify your team:

  • Monitor Paused — "Monitor api.example.com has been paused"
  • Monitor Resumed — "Monitor api.example.com has been resumed"

Enable the "Alert on Pause" option in your alert policy if you want visibility into maintenance windows.

Best Practices

Create Separate Policies for Different Environments

Production Alerts → All triggers enabled, Slack + PagerDuty
Staging Alerts    → Down + Recovery only, Slack only
Development       → No alerts (or email digest)

Don't Over-Alert

Alert fatigue is real. If your team receives too many notifications, they start ignoring them — including critical ones.

  • Only enable Degraded alerts for truly performance-sensitive services
  • Use appropriate check intervals (30s for critical, 5m for standard)
  • Set realistic Apdex thresholds to avoid false degradation alerts

Test Your Alerts

After configuring a policy:

  1. Create a test monitor pointing to a non-existent URL
  2. Wait for 2 check failures
  3. Verify notifications arrive on all configured channels
  4. Verify the recovery notification arrives when you fix or remove the monitor

Keep Channels Updated

Regularly audit your alert channels:

  • Are email addresses still valid?
  • Are Slack webhooks still active?
  • Are the right people receiving notifications?

Next Steps