Using the Network Overview

In the ThousandEyes platform, network tests under Network & App Synthetics measure connectivity and performance between agents and a target (for example, a server or another agent). These test types include Agent-to-Server and Agent-to-Agent. When you create any test that includes network measurements, such as a network, HTTP server, page load, or DNS server test, you get access to the network view. The network view uses the ThousandEyes standard layout.

Example of the Network View

The screenshot below shows the network view of a page load test:

Metrics for Network Tests

Metrics Plottable on the Timeline for Network Tests

The timeline displays time-series metrics as line graphs and supports overlays, so you can compare loss, latency, and jitter over time and highlight data from specific agents to compare performance across locations or vantage points.

  • Loss (%): Percentage of packets lost during transmission, indicating network reliability.

  • Latency (ms): Round-trip time for packets to reach the target and return, measuring network delay.

  • Jitter (ms): Mean deviation of latency, reflecting variability in packet delivery.
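
To make the relationship between these three metrics concrete, here is a minimal Python sketch that derives them from one agent's probe round. The RTT samples are hypothetical, and jitter is computed as mean absolute deviation, one common reading of "mean deviation"; the platform's exact sampling and aggregation may differ.

```python
# Hypothetical RTT samples (ms) for one probe round; None marks a lost packet.
rtts_ms = [24.1, 25.3, None, 23.8, 26.0, 24.7, None, 25.1, 24.4, 25.6]

received = [r for r in rtts_ms if r is not None]

# Loss: share of probes that never came back.
loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)

# Latency: mean round-trip time of the probes that did come back.
latency_ms = sum(received) / len(received)

# Jitter as mean deviation: average absolute distance from the mean RTT.
jitter_ms = sum(abs(r - latency_ms) for r in received) / len(received)

print(f"Loss: {loss_pct:.0f}%  Latency: {latency_ms:.1f} ms  Jitter: {jitter_ms:.2f} ms")
```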

Overlays for Network Tests

Overlays facilitate comparative analysis by visualizing how loss, latency, and jitter trend over time. Highlighting specific agents isolates their performance, helping you pinpoint regional or agent-specific problems.

For more details on metrics, see ThousandEyes Metrics: What Do Your Results Mean?.

View Details for Network Tests

The network view is the primary view for network tests (Agent-to-Server and Agent-to-Agent), focusing on network-layer metrics to assess connectivity and performance. Different data views are shown depending on whether an agent location is selected.

Timeline for Network Tests

The timeline charts the average value of the selected metric. When no agent is selected, or when multiple agents are selected, this value is calculated across all locations. An example of the global chart for the loss metric is shown below:

When a single agent is selected, the timeline charts the average for that location against the global values for the same metric. An example of the location chart for loss is shown below:
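
A minimal sketch of the two aggregation modes, using hypothetical per-agent loss series; the platform computes these server-side, so the data layout here is illustrative only.

```python
# Hypothetical per-agent loss series (%), one value per test round.
series = {
    "London":    [0.0, 1.0, 0.0, 2.0],
    "Singapore": [5.0, 4.0, 6.0, 5.0],
    "Chicago":   [0.0, 0.0, 1.0, 0.0],
}
rounds = len(next(iter(series.values())))

# No agent (or multiple agents) selected: one global line averaged across all locations.
global_line = [sum(s[i] for s in series.values()) / len(series) for i in range(rounds)]

# Single agent selected: that location's line, charted against the global line.
selected = "Singapore"
print("global:   ", [round(v, 2) for v in global_line])
print(selected + ":", series[selected])
```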

Path Visualization for Network Tests

Path visualization renders a traceroute-like map with nodes (routers) and links (network segments), showing a composite path from all agents to the target. Shared hops are shown as single nodes, with branches for divergent paths. Arrows indicate direction (source-to-target, target-to-source), with a direction selector to toggle between views or show both. Hops and links are color-coded based on performance (e.g., red for high loss), using aggregated directional values.
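
A rough sketch of how such a composite path can be assembled: the code below merges hypothetical per-agent hop lists so that shared hops collapse into single nodes and divergent hops form branches, counting how many traces cross each link (the "X of Y" counts correspond to the Number of Traces metric described below).

```python
# Hypothetical traceroute hop lists, one per agent, ending at the target.
traces = {
    "London": ["10.0.0.1", "62.1.1.1", "203.0.113.9", "target"],
    "Paris":  ["10.0.1.1", "62.1.1.1", "203.0.113.9", "target"],
    "Tokyo":  ["10.8.0.1", "121.5.5.5", "203.0.113.9", "target"],
}

links = {}  # (near hop, far hop) -> number of traces crossing that link
for hops in traces.values():
    for near, far in zip(hops, hops[1:]):
        links[(near, far)] = links.get((near, far), 0) + 1

total = len(traces)
for (near, far), count in sorted(links.items()):
    print(f"{near} -> {far}: {count} of {total} traces")
```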

Metrics in path visualization include Loss, Latency, and Jitter. When no agent is selected, or when multiple agents are selected, these values are aggregated. Availability, Path MTU, Throughput (Agent-to-Agent), and Error Details appear in modal windows when you select displayed items such as nodes or links.

Metrics available in path visualization are:

  • Loss (%): Percentage of packets lost during transmission, indicating network reliability.

  • Latency (ms): Round-trip time for packets to reach the target and return, measuring network delay.

  • Jitter (ms): Mean deviation of latency, reflecting variability in packet delivery.

  • Received DSCP (numeric): The Differentiated Services Code Point (DSCP) value observed in the IP header of received packets at a specific hop, indicating the Quality of Service (QoS) marking applied to the traffic.

  • Minimum Path MTU (bytes): The smallest Maximum Transmission Unit (MTU) size, in bytes, supported along the network path, determined by path MTU discovery to identify the maximum packet size that can be transmitted without fragmentation.

  • Number of Traces (count): The number of path trace packets that traversed a specific network link or node, expressed as “X of Y,” where X is the number of traces for that link/node, and Y is the total number of path traces across all agents in the test round.

  • Delay (ms): The estimated minimum transmission delay across a specific network link, calculated as the difference between the lowest response time reported by an agent for the node farther from the agent (the right side of the link) and for the node closer to the agent (the left side of the link); see the worked example after this list.

  • Average Response (ms): The average time-to-respond for a node across all probe packet responses received during path tracing, measured in milliseconds.
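
As a worked example of the last two definitions, the sketch below computes Delay and Average Response for one link from hypothetical per-probe response times.

```python
# Hypothetical per-probe response times (ms) for the two ends of one link.
near_node_responses = [12.4, 11.9, 13.1]  # node closer to the agent (left side)
far_node_responses  = [18.2, 17.5, 19.0]  # node farther from the agent (right side)

# Delay: lowest response time at the far node minus lowest at the near node.
link_delay_ms = min(far_node_responses) - min(near_node_responses)  # 17.5 - 11.9 = 5.6

# Average Response for the far node: mean across its probe responses.
avg_response_ms = sum(far_node_responses) / len(far_node_responses)

print(f"Delay: {link_delay_ms:.1f} ms  Average Response: {avg_response_ms:.2f} ms")
```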

Map for Network View

When no agent is selected, the computed averages area of the map displays the worldwide average of each selected metric at the date/time selected in the interface. When one or more agents are selected, these values are computed across the selected agents. For Agent-to-Server tests, the map shows how performance varies across agent locations globally or within a network. For Agent-to-Agent tests, the map shows performance variations across agent pair locations, with directional data for source-to-target and target-to-source paths.

When you select a metric from the Metrics menu in the timeline, pins are color-coded based on the metric’s value. Typically, green indicates good performance (e.g., low latency), yellow/orange indicates moderate issues, and red indicates poor performance (e.g., high loss). For example, a map might show a red pin in Singapore (Latency: 200ms) and a green pin in London (Latency: 20ms), highlighting a regional issue.
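
The sketch below shows the idea behind this color coding; the actual cutoffs are determined by the platform, so the thresholds used here are hypothetical.

```python
def pin_color(loss_pct: float) -> str:
    """Map a loss value to a pin color (hypothetical thresholds)."""
    if loss_pct < 1.0:
        return "green"   # good performance
    if loss_pct < 5.0:
        return "orange"  # moderate issues
    return "red"         # poor performance

for agent, loss in {"London": 0.0, "Frankfurt": 2.5, "Singapore": 12.0}.items():
    print(f"{agent}: {loss}% loss -> {pin_color(loss)} pin")
```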

  • Hovering over a pin displays a tooltip with exact metric values for that agent (e.g., Loss: 5%, Latency: 150ms, Jitter: 10ms, Availability: 100%).

  • Clicking a pin opens a modal or filters the view to show agent-specific details, including all metrics and links to other tabs (e.g., Timeline, Path Visualization).

  • Filtering by agent groups (e.g., “EMEA Agents”) or selecting specific agents focuses the map, with unselected agents faded or hidden.

Table View for Network Tests

The table summarizes metrics for each agent running the test, showing data for the most recent test round or a selected time range, and is particularly useful for spotting performance variations across agents. For Agent-to-Server tests, each row summarizes one agent’s connectivity and performance to the target. For Agent-to-Agent tests, each row summarizes an agent pair, highlighting performance in both directions. Columns show each metric, and you can sort by clicking a column header to quickly identify outliers.
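
As a minimal sketch of that sort behavior, the snippet below orders hypothetical Agent-to-Server rows (columns as listed in the next section) by packet loss, worst first; the data is illustrative, not pulled from the platform.

```python
# Hypothetical table rows, mirroring the Agent-to-Server columns.
rows = [
    {"agent": "London",    "loss_pct": 0.0, "latency_ms": 18.0,  "jitter_ms": 0.4},
    {"agent": "Singapore", "loss_pct": 4.0, "latency_ms": 205.0, "jitter_ms": 9.1},
    {"agent": "Chicago",   "loss_pct": 0.0, "latency_ms": 95.0,  "jitter_ms": 1.2},
]

# Sort by packet loss, worst first, to surface outliers.
for row in sorted(rows, key=lambda r: r["loss_pct"], reverse=True):
    print(f"{row['agent']:<10} loss={row['loss_pct']}%  "
          f"latency={row['latency_ms']} ms  jitter={row['jitter_ms']} ms")
```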

Agent-to-Server Metrics Shown in the Table

  • Agent Name/Location: Identifies the Cloud or Enterprise Agent.

  • Date (UTC): Timestamp of the test round’s data collection cycle.

  • Error Details (if applicable): Description of any errors received during the test.

  • Packet Loss (%): Percentage of packets lost during transmission.

  • Latency (ms): Round-trip time for packets to reach the target and return.

  • Jitter (ms): Variation in latency between packets.

Agent-to-Agent Metrics Shown in the Table

  • Date (UTC): Timestamp of the test round’s data collection cycle.

  • Loss (%): Packet loss percentage in each direction (source-to-target, target-to-source).

  • Latency (ms): Round-trip time, or one-way latency if the agents’ clocks are synchronized.

  • Jitter (ms): Variation in latency.
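
To illustrate how a bidirectional row can be represented, here is a small sketch with hypothetical values for each direction; field names are illustrative, not the platform's schema.

```python
# Hypothetical Agent-to-Agent row: metrics reported per direction.
pair = {
    "source": "London", "target": "Tokyo",
    "source_to_target": {"loss_pct": 0.0, "latency_ms": 118.0, "jitter_ms": 0.9},
    "target_to_source": {"loss_pct": 2.0, "latency_ms": 121.0, "jitter_ms": 1.4},
}

for direction in ("source_to_target", "target_to_source"):
    m = pair[direction]
    print(f"{direction}: loss={m['loss_pct']}%  "
          f"latency={m['latency_ms']} ms  jitter={m['jitter_ms']} ms")
```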
