Alerts
ThousandEyes alerts help you proactively identify and respond to events that impact the performance and availability of your critical services. Whether you're ensuring an online banking platform is available for vital payments or maintaining reliable connectivity for healthcare services, alerts provide immediate visibility into issues that threaten stability.
The platform provides two types of alert rules: highly customizable manual rules and intelligent adaptive rules. For adaptive alerts, the Why Did This Alert Trigger? tab provides clear insight into why an alert was triggered, helping you understand the system's logic and build trust in the results.
For those who want simplicity, the platform ships with default alert rules enabled for each test. For a hands-on overview, see Getting Started with Alerts. For a more detailed guide, see Creating and Editing Alert Rules.
Assigning Alert Rules to Tests
To receive alerts for a test, you must assign one or more alert rules to it. You can manage rule assignments from the test's settings page or from the alert rule's settings page.
Navigate to the settings page for your test (for example, Network & App Synthetics > Test Settings).
Locate the test you want to configure.
Ensure the Alerts toggle is enabled.
Use the dropdown menu to select or deselect the alert rules you want to apply to the test.

By default, new tests are assigned all "Default" rules for that test type. You can remove these default rules and assign your own custom rules. To create or edit rules directly from this page, click the Edit Alert Rules link.
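If you manage assignments programmatically, rules can also be attached to tests through the ThousandEyes API. The sketch below only builds the request URL and payload; the endpoint path, rule ID, and the `testIds` key are assumptions based on the general shape of the v7 API, so verify them against the current API reference before use.

```python
# Hedged sketch: endpoint path and payload keys are assumptions, not
# confirmed API details. Check the ThousandEyes API reference.
BASE_URL = "https://api.thousandeyes.com/v7"

def build_rule_assignment(rule_id, test_ids):
    """Build the URL and payload to attach an alert rule to tests."""
    url = f"{BASE_URL}/alerts/rules/{rule_id}"
    payload = {"testIds": test_ids}  # tests this rule should apply to
    return url, payload

url, payload = build_rule_assignment("1234567", ["98765"])
print(url, payload)
```

You would then send this payload with an authenticated PUT or POST request using an HTTP client of your choice.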
Understanding Why an Alert Triggered
The automated nature of adaptive alerts can sometimes make it unclear why an alert was triggered. You see that a problem was detected, but the specific data and logic behind that decision are not immediately visible.
To provide this crucial context, the Why Did This Alert Trigger? tab is available for cleared adaptive alerts in the Alerts History. This feature offers a clear, concise explanation of the system's decision-making process, helping you build trust in the alert and take more informed action.
How to Access the Tab
Navigate to Alerts > Alert List.
Select the Alerts History tab.
Click on the row for any cleared adaptive alert. A side panel will open.
In the side panel, click the Why Did This Alert Trigger? tab.
What the Tab Shows
The tab provides a summary of what the system observed and how it interpreted those observations, including:
Observed vs. Expected Anomalies: A graph comparing the number of agents that reported anomalies against the system's learned baseline.
Issue Probability Score: A graph showing how the system's confidence in a real issue evolved over time.
Alert Threshold and Trigger Point: The probability graph displays a clear line indicating the issue probability threshold, which is determined by your configured sensitivity level. It also highlights the exact moment the issue probability crossed this threshold to trigger the alert.
For a detailed breakdown of each component on this tab, see Viewing Alerts: Alert History.
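Conceptually, the trigger point shown on the probability graph is the first round where the issue probability crosses the threshold set by your sensitivity level. The sketch below illustrates that idea only; the threshold values and the probability model are internal to ThousandEyes, so the numbers here are purely illustrative assumptions.

```python
# Illustrative only: real thresholds and the probability model are
# internal to ThousandEyes. These values are assumptions for demonstration.
SENSITIVITY_THRESHOLDS = {"low": 0.9, "medium": 0.7, "high": 0.5}

def find_trigger_round(probabilities, sensitivity="medium"):
    """Return the index of the first round where the issue probability
    crosses the threshold for the configured sensitivity, or None."""
    threshold = SENSITIVITY_THRESHOLDS[sensitivity]
    for round_index, p in enumerate(probabilities):
        if p >= threshold:
            return round_index
    return None

probabilities = [0.1, 0.3, 0.55, 0.8, 0.9]
print(find_trigger_round(probabilities, "medium"))  # → 3
```

A higher sensitivity lowers the threshold, so the same probability curve crosses it earlier.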
When Agents Are Excluded from Alerts
In certain situations, specific agents are automatically excluded from the calculations that trigger alerts. This prevents unreliable or irrelevant data from causing false positives or negatives. There are two main reasons for exclusion: the agent is offline or the agent has a local problem.
Offline Agents
Agents that are offline cannot send test data to the ThousandEyes platform. Because the alerting system requires timely data to evaluate conditions, it cannot wait for an agent to come back online. Therefore, offline agents are not included when evaluating alert rule conditions.
Agents with Local Problems
Cloud Agents that display a "Local Problems" message in test results are excluded from alert calculations. This status indicates a problem with the agent's local environment (such as high CPU or memory usage) that could skew test results. To ensure alert accuracy, data from these agents is ignored until the local problem is resolved.
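The two exclusion rules above can be summarized as a simple filter: an agent contributes to alert evaluation only if it is online and reports no local problems. The field names in this sketch are invented for illustration and do not correspond to actual ThousandEyes data structures.

```python
def agents_for_alert_evaluation(agents):
    """Keep only agents whose data the alerting system considers:
    offline agents and agents with local problems are excluded."""
    return [a for a in agents
            if a.get("online") and not a.get("local_problems")]

agents = [
    {"name": "ams-1", "online": True,  "local_problems": False},
    {"name": "sjc-2", "online": False, "local_problems": False},  # offline: excluded
    {"name": "lon-3", "online": True,  "local_problems": True},   # local problem: excluded
]
print([a["name"] for a in agents_for_alert_evaluation(agents)])  # → ['ams-1']
```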
Alerting Basics
This section explains the fundamental concepts behind the ThousandEyes alerting system. Understanding these concepts will help you configure effective alerts and interpret them correctly when they trigger.
What Is an Alert? (And How Is It Different from an Anomaly or a Notification?)
Understanding the distinction between an event, an alert, and a notification is the key to using the ThousandEyes platform effectively.
An Anomaly (also called an event) is an individual data point of interest from a single test round. For example, if a test runs every 5 minutes and one agent receives an HTTP 500 error, that is an event. You see events immediately in a test's timeline view because they are raw test data.
An Alert is a stateful condition that becomes active on the platform only when a pattern of events meets the specific conditions you define in an Alert Rule. An alert rule gives you the power to decide what is important. You can configure a rule to be highly sensitive (triggering an alert on a single event) or more robust (triggering only after many events persist over several rounds).
A Notification is a message (such as an email, PagerDuty incident, or Slack message) that is automatically sent when an alert's state changes (for example, when it becomes active or is cleared). You must configure notifications separately from alert rules.
In short: Events trigger Alerts, and Alerts can trigger Notifications.
The Alert Lifecycle: Triggered, Active, and Cleared
Every alert goes through a lifecycle. Understanding it helps you interpret what you see in the platform.
Triggered: This is the moment an alert is first created. It happens on the exact test round where the conditions of your alert rule are met for the first time. For example, if your rule is "alert when 3 agents see an error for 2 consecutive rounds," the alert is "triggered" at the end of that second round. Note that an alert being triggered doesn't necessarily mean a problem is "serious"—it means the conditions you defined have been met.
Active: This is the state of an alert after it has been triggered. An alert remains "active" for as long as the trigger conditions continue to be met. The Alerts > Alert List page in the platform shows only active alerts.
Cleared: This is the moment an alert is resolved. An alert "clears" on the first test round where the trigger conditions are no longer met. Once cleared, the alert moves from the active Alert List to the Alerts History.
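The lifecycle above can be sketched as a small state machine. This is a conceptual illustration, not the actual ThousandEyes implementation: it evaluates the example rule "alert when 3 agents see an error for 2 consecutive rounds," triggering at the end of the second qualifying round and clearing on the first round where the condition no longer holds.

```python
class AlertRule:
    """Minimal sketch of the stateful alert lifecycle: trigger when
    `min_agents` report errors for `min_rounds` consecutive rounds;
    clear on the first round the condition is no longer met."""

    def __init__(self, min_agents=3, min_rounds=2):
        self.min_agents = min_agents
        self.min_rounds = min_rounds
        self.consecutive = 0   # consecutive rounds meeting the condition
        self.active = False    # is the alert currently active?

    def evaluate_round(self, failing_agents):
        met = failing_agents >= self.min_agents
        self.consecutive = self.consecutive + 1 if met else 0
        if not self.active and self.consecutive >= self.min_rounds:
            self.active = True
            return "triggered"
        if self.active and not met:
            self.active = False
            return "cleared"
        return "active" if self.active else "inactive"

rule = AlertRule()
# Rounds with 1, 3, 4, 4, then 2 failing agents:
print([rule.evaluate_round(n) for n in [1, 3, 4, 4, 2]])
# → ['inactive', 'inactive', 'triggered', 'active', 'cleared']
```

Note how the alert only triggers once, stays active while the condition persists, and clears automatically when the data recovers.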
Common Questions and Troubleshooting
This section answers common questions when setting up alerts.
Why don't I see anything in the Alert List, but I see events in my test view?
The reason is that the events you see have not yet met the conditions defined in your test's alert rule.
Test Views show you raw data, including every single event from every agent in real time.
The Alert List is a curated dashboard that shows only active alerts—conditions where the pattern of events has met the specific thresholds defined in an alert rule.
If you see events but no active alert, it means your alert rule is waiting for the pattern of events to meet its trigger conditions, such as more agents failing or the failure persisting for more rounds.
How can I test an alert rule?
To verify that your entire alerting workflow (including notifications) is working correctly, you can create a temporary alert rule designed to trigger immediately.
The most reliable way to do this is to use a performance-based condition that will almost always be true. Create a custom alert rule with the following settings:
Condition: Latency > 1 ms (for network tests) or Response Time > 1 ms (for web tests)
Alert when: 1 of All Agents assigned to test
For at least: 1 round
When you assign this rule to a test, it will trigger an alert on the next completed test round. This allows you to confirm that an active alert appears in the Alert List and that your configured notifications are sent as expected.
Important: This type of rule is intended for temporary testing only. Because the condition will almost always be met, the alert will remain active and could generate continuous notifications. Remember to disable or remove this rule after you have finished testing.
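If you prefer to create this temporary rule via the API, the sketch below builds the rule definition as a payload. Every field name here (`ruleName`, `expression`, `minimumSources`, `roundsViolatingRequired`, `roundsViolatingOutOf`) is an assumption based on the general shape of the ThousandEyes Alert Rules API; confirm the exact schema against the current API reference before sending it.

```python
# Hedged sketch: treat every key below as an assumption and verify it
# against the ThousandEyes Alert Rules API reference.
def temporary_test_rule():
    """Build a rule that fires on the next completed round."""
    return {
        "ruleName": "TEMP - alerting smoke test (remove me)",
        "expression": "((latency >= 1 ms))",  # nearly always true
        "minimumSources": 1,                  # 1 of all assigned agents
        "roundsViolatingRequired": 1,         # for at least 1 round
        "roundsViolatingOutOf": 1,
    }

rule = temporary_test_rule()
print(rule["ruleName"])
```

Naming the rule with an obvious "remove me" marker makes it easy to spot and delete once your notification workflow is confirmed.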
How long does it take for an alert to clear?
An alert clears automatically on the first test round where the conditions of the alert rule are no longer met. For example, if a rule requires 3 agents to fail, the alert will clear as soon as a test round completes with 2 or fewer agents failing.
Can I manually clear an alert?
Alerts are data-driven and clear automatically based on test results; there is no manual "clear" action. This ensures that the alert status always reflects the actual, measured state of your monitored service. To force an alert to clear, remove the alert rule from the test or adjust the alert conditions so they are no longer met. You can also set a real-time suppression window.
To learn more about clearing alerts, see Alert Clearing.
If I set up a notification, how many will I get for a single outage?
By default, you will receive two notifications for each alert instance:
One notification when the alert is first triggered.
One notification when the alert is cleared.
You can configure notifications to be sent repeatedly for alerts that remain active for an extended period. To learn more about configuring alert notifications, see Alert Notifications.
How do my alerts relate to the alerts in Internet Insights?
They are completely separate.
Your Alerts (in Alert List): These are generated by your tests based on your alert rules. They are specific to the services you choose to monitor.
Internet Insights Alerts: These are generated automatically by Cisco's global monitoring system when it detects large-scale public internet or service provider outages. They provide broad context about the state of the internet.