Endpoint Agent Views
The Agent Views of the Endpoint Agent provide detailed endpoint-related data from a single agent, offering end-to-end visibility for fast, efficient troubleshooting and issue diagnosis, even with limited technical expertise.
This view enables users to troubleshoot any issue affecting an agent by displaying all the information related to that agent's connectivity and its monitored applications.
The Agent Views can be broken into two sections:
The Agent Details section describes the information related to the agent and includes the Search field.
The Agent Experience section is the main part of the view, offering detailed information for troubleshooting agent-related issues, including details about the agent's local networks and the monitored applications. It also provides the Timeline Range Selector.
The Agent Details section, located at the top of the view, provides the following detailed information about the selected agent:
Hostname
Location
Agent Version
Private IP Address
Public IP Address
Users
Labels
License Type
The Search field helps you find the desired agent. You can perform the search using the following parameters:
Agent Name
Hostname
Username
IP Address (public/private)
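The matching behavior of the Search field can be illustrated with a small sketch. The record fields and the matching rules below (substring match on names, exact match on IP addresses) are assumptions for illustration only, not the actual ThousandEyes data model or search implementation:

```python
from ipaddress import ip_address

# Hypothetical agent records; field names are illustrative only.
AGENTS = [
    {"agent_name": "LAPTOP-01", "hostname": "laptop-01.example.com",
     "username": "alice", "public_ip": "203.0.113.10", "private_ip": "10.0.0.5"},
    {"agent_name": "DESKTOP-02", "hostname": "desktop-02.example.com",
     "username": "bob", "public_ip": "198.51.100.7", "private_ip": "10.0.0.9"},
]

def matches(agent: dict, query: str) -> bool:
    """True if the query matches any searchable field: agent name,
    hostname, username, or public/private IP address."""
    q = query.lower()
    text_fields = (agent["agent_name"], agent["hostname"], agent["username"])
    if any(q in f.lower() for f in text_fields):
        return True
    try:
        # Only compare against IP fields when the query parses as an IP.
        return str(ip_address(query)) in (agent["public_ip"], agent["private_ip"])
    except ValueError:
        return False

def search(agents: list, query: str) -> list:
    return [a for a in agents if matches(a, query)]
```

For example, `search(AGENTS, "alice")` returns only the first agent, while `search(AGENTS, "10.0.0.9")` finds the second by its private IP.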
You can analyze the performance of the desired agent by referring to the overall Experience Score in this section of the Agent Views. The Agent Experience section is divided into the following sub-sections, which give you a detailed visual view and the relevant data to troubleshoot issues:
Timeline Range Selector
Experience Score
Heat Map
Experience Timeline
Segment Visualization
The Timeline Range Selector allows you to customize the time window on which the timeline focuses. You can select the time range as a Relative Time Interval or a Fixed Time Interval.
This Experience Score is not related to the Experience Score computed for the Real User tests.
The Experience Score is a timeline representing the average overall experience across all tests conducted by the specified agent. The score is displayed using a color gradient ranging from white (good performance) to red (poor performance), so you can assess the overall experience at a glance.
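The white-to-red gradient can be sketched as a simple linear color interpolation. The 0–100 score scale, the specific red shade, and the purely linear blend are assumptions for illustration; the product's actual rendering is not documented here:

```python
def score_to_rgb(score: float) -> tuple:
    """Map a score (assumed 0 = poor, 100 = good) onto a red-to-white
    gradient. Illustrative only; not the product's actual color mapping."""
    t = max(0.0, min(score, 100.0)) / 100.0   # clamp, then normalize to 0..1
    red, white = (214, 39, 40), (255, 255, 255)
    # Linear interpolation per RGB channel between red (t=0) and white (t=1).
    return tuple(round(r + (w - r) * t) for r, w in zip(red, white))
```

A perfect score maps to pure white, and a score of zero to the full red shade, with intermediate scores blended between them.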
The Heat Map section is located below the timeline range selector. It visually represents internet access performance and all tests conducted for different applications by the specified agent.
The left-hand side lists all the parameters corresponding to internet access (such as Agent, Connection, Gateway, and VPN) and tests running for different applications. The rows adjacent to the tests display the performance for the selected time range.
The heat map represents the degradation of the test experience in shades ranging from white (good) to red (poor). You can select a column, or multiple columns by holding the Shift key, for quicker issue identification and detailed analysis of underlying causes.
This section describes the performance of the selected tests for the specified time range as a combination of segments. Each segment represents an element of the end-to-end supply chain that usually belongs to a different owner, so you can more easily isolate where a potential issue might be.
The left-hand side displays all the segments for which data is shown, and the corresponding timeline shows the complete data for the selected segment. Each segment selected on the left-hand side has a drop-down on the right-hand side that offers all the available metrics to refine your results. By default, the segment score is selected; it represents all the metrics available for that segment in a single value so you can easily understand the average experience.
In addition, for each segment you can select the specific elements (for example, tests for the Application segment) to show in the timeline from the drop-down menu at the top of the timeline. The legend explains the colors associated with each element in the timeline.
The following table explains the different combinations of metrics that can be used to refine your results:
| Segment | Metrics |
| --- | --- |
| Application | Application Score, HTTP Availability, HTTP Response Time, Loss, Latency, Jitter, TCP Connection Failures |
| Agent/System | Agent Score, CPU, Memory |
| Connection | Connection Score, Signal Quality, Link Speed, Throughput, Retransmission Rate, Roaming Events, Channel Changes |
| Gateway/Wireless | Gateway Score, Signal Quality, Link Speed, Throughput, Retransmission Rate, Roaming Events, Channel Changes, Gateway Loss, Gateway Latency, Gateway Jitter |
| VPN | VPN Score, Loss, Latency, Jitter |
| Proxy | Loss, Latency, Jitter |
| DNS | Loss, Latency, Jitter, Domain Resolution Time |
| Imported Metrics | Call Score |
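The idea of a segment score collapsing several metrics into one value can be sketched as below. The equal-weight average and the assumption that each per-metric score is already normalized to 0–100 are illustrative; the actual weighting ThousandEyes applies is not documented here:

```python
def segment_score(metric_scores: dict) -> float:
    """Collapse per-metric scores (assumed normalized to 0-100, higher is
    better) into a single segment score. The plain equal-weight mean is an
    assumption for illustration, not the product's documented formula."""
    if not metric_scores:
        raise ValueError("no metrics to aggregate")
    return sum(metric_scores.values()) / len(metric_scores)

# Hypothetical per-metric scores for a VPN segment.
vpn = {"Loss": 95.0, "Latency": 80.0, "Jitter": 89.0}
```

Under this sketch, `segment_score(vpn)` yields 88.0, a single value summarizing the segment's three metrics.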
The Log tab provides information about the different network changes the agent detects, helping you understand its network condition. The following events, when triggered, create an entry in the log table:
Wi-Fi connected, disconnected, or changed
VPN connected or disconnected
Agent online or offline
The following figure shows an example of what you might see when you view the log:
The log table provides the following information:
A short description of the change
Location and information related to the change
Time of the corresponding change
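A log row pairs each detected change with its description, location, and timestamp. The record shape and the newest-first ordering below are assumptions for illustration, not the actual log data model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class NetworkEvent:
    time: datetime      # when the change occurred
    description: str    # short description of the change
    location: str       # location/information related to the change

# Hypothetical entries mirroring the event types that create log rows.
events = [
    NetworkEvent(datetime(2024, 5, 1, 9, 15), "Wi-Fi connected", "SSID: Office-5G"),
    NetworkEvent(datetime(2024, 5, 1, 9, 2), "Agent online", "laptop-01"),
    NetworkEvent(datetime(2024, 5, 1, 9, 20), "VPN connected", "vpn.example.com"),
]

# Render the table newest-first (the ordering here is an assumption).
log_rows = sorted(events, key=lambda e: e.time, reverse=True)
```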
This view provides a comprehensive look at the entire network journey for the single round selected in the timeline, starting from the agent and ending at the destination/application. This allows for pinpointing the specific segment causing performance degradation, along with its metrics and values.
The view breaks down the entire path in the following segments:
Agent
Connection
Gateway
VPN
Proxy
Internet
Application
Each segment provides detailed information and visual aids to troubleshoot any underlying network issues. Additionally, each segment highlights issues in red and good performance in green.
By default, the segment visualization covers all tests running on the agent, but you can use the drop-down menu at the top to filter the results for a specific test.
When an issue affects multiple segments of a specific application, the Guided Troubleshooting feature helps you accurately identify the segment most likely causing the problem. It offers a systematic diagnostic approach that narrows down potential issues, streamlining the troubleshooting process so you can resolve problems more efficiently and deliver a smoother application experience. Guided Troubleshooting is only available when you select a specific application; it is not performed when the default setting of all applications is selected.
Consider the following scenario to learn more about how this feature works:
Select a desired test from the Application drop-down (for example, ThousandEyes).
You can now see the Troubleshooting Assistant below the segment visualization graphic, listing the issues.
The Troubleshooting Assistant points to the most impacted segment and highlights the issues in that particular segment.
You can continue troubleshooting by clicking the next result in the Troubleshooting Assistant.
The Application, Gateway, VPN, and Proxy segments assist you in swiftly assessing whether an issue with a specific application is confined to a particular agent or if it points to a more widespread outage that impacts multiple users simultaneously. Consider the following scenario to learn more about how this feature works:
Click on the Application segment.
Hover your mouse over the Application Score column corresponding to the desired application (for example, MS-365 Outlook).
The tooltip now shows the number of agents experiencing issues with this particular application.
The Connection Score is computed to help you analyze the performance of your wired (Ethernet), Wi-Fi, or cellular connection. For example, the Connection Score for Wi-Fi is derived from the wireless metrics collected by the local network tests. Similarly, for a cellular connection, the score is computed based on signal strength metrics such as RSSI, signal power, quality, and SNR.
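The derivation of a connection score from raw signal metrics can be sketched as below. The threshold values (an RSSI range of -90 to -50 dBm, an SNR range of 0 to 40 dB) and the equal weighting are assumptions chosen for illustration, not the documented ThousandEyes formula:

```python
def clamp01(x: float) -> float:
    return max(0.0, min(x, 1.0))

def wifi_connection_score(rssi_dbm: float, snr_db: float) -> float:
    """Illustrative only: map raw Wi-Fi signal metrics onto a 0-100 score.
    The ranges and equal weighting below are assumptions, not the actual
    ThousandEyes scoring formula."""
    rssi_q = clamp01((rssi_dbm + 90) / 40)   # -90 dBm -> 0.0, -50 dBm -> 1.0
    snr_q = clamp01(snr_db / 40)             # 0 dB -> 0.0, 40 dB -> 1.0
    return round(100 * (rssi_q + snr_q) / 2, 1)
```

Under this sketch, a strong signal (`wifi_connection_score(-50, 40)`) scores 100.0 and a middling one (`wifi_connection_score(-70, 20)`) scores 50.0.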
The data from the Microsoft API can take up to 30 minutes to reflect in the ThousandEyes application, due to the Microsoft SLA. In some cases it can take even longer than 30 minutes.
To view the Microsoft Teams call data:
Establish an integration with the Microsoft Teams Call Quality Dashboard.
Navigate to the Endpoint Agent > Agent Views section in the ThousandEyes application. You can use the search field to find the agent.
In the Timeline Range Selector, set the date to the one shared by the end user. Alternatively, you can select the range on the Experience Score timeline.
Click on the heat map's tile representing high degradation (red color) for the MS Teams dynamic test.
Under the Applications metric, the cursor automatically selects the round with the worst Application Score for the MS Teams dynamic test.
Switch to the Imported Metrics, where you will find a Call Score value of "Poor". This information is imported from Microsoft's Call Quality Dashboard API.
The Segment Visualization will then show which segment the issue is in. In this case, the problem is due to high Gateway Latency and Jitter.
You can further verify this by checking the Gateway Latency and Gateway Jitter values under the Gateway segment.
To understand the impact of this issue on the end-user experience, you can also check the detailed data imported from the Microsoft Call Quality Dashboard in the **Meetings** tab.
In addition, by clicking View Details you can get full details of the experience of other users in the same meeting.
You can click the layered icon at the end of the last row to go directly to that agent's Agent Views.