Distributed Tracing with Splunk Observability APM
This guide explains how to integrate Splunk Observability Cloud APM (Application Performance Monitoring) with ThousandEyes. This integration helps you trace requests across services and identify whether issues are caused by the network or the application layer.
Requirements
Before you start, ensure the following requirements are met:
The monitored endpoint is configured correctly. See Monitored Endpoint Requirements.
You have access to Splunk Observability Cloud.
Your user account must have the Power or Admin role to create an access token.
Configuration steps
Step 1: Create the Splunk APM Integration in ThousandEyes
In ThousandEyes, go to Manage > Integrations > Integrations 2.0.
Create a Generic Connector with the following details:
Target URL:
https://api.<REALM>.signalfx.com
Replace <REALM> with your Splunk realm (for example, us0, eu1).
Custom Headers:
Key: X-SF-Token
Value: Your Splunk access token with API scope. For more information, see Create and manage organization access tokens using Splunk Observability Cloud.
Splunk APM Generic Connector
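Before you save the connector, you can verify that the realm and token combination works. The following is a minimal sketch, assuming Python with the requests library, and using the Splunk Observability Cloud REST API's GET /v2/organization endpoint as a lightweight authenticated call; the REALM and SF_TOKEN values are placeholders.

import requests

# Placeholders: substitute your Splunk realm and an org access token with API scope.
REALM = "us0"
SF_TOKEN = "YOUR_SPLUNK_ACCESS_TOKEN"

# Same base URL and auth header that the Generic Connector will use.
url = f"https://api.{REALM}.signalfx.com/v2/organization"
resp = requests.get(url, headers={"X-SF-Token": SF_TOKEN}, timeout=10)

if resp.ok:
    print("Realm and token accepted, HTTP", resp.status_code)
else:
    print("Check the realm and token:", resp.status_code, resp.text)

If the call returns HTTP 200, the same URL and X-SF-Token header should work in the Generic Connector.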
Create an operation:
Click + New Operation.
Choose Splunk Observability APM.
Enter an operation name.
Enable the operation.
Splunk APM Operation
Step 2: View the Service Map

Open the Service Map tab in ThousandEyes.
Use the service map to analyze the trace path. You can identify:
The services involved in the request.
Any latency issues, highlighted in red if thresholds are exceeded.
Any errors between services, shown as red lines if a request fails.
Trace metadata, such as the trace ID and request flow details.
Step 3: Debug the Trace in Splunk Observability Cloud
From the Service Map tab in ThousandEyes, follow the link to the trace in Splunk. There, you can:
Drill into service-level trace data.
Use Splunk’s trace search, filters, and dashboards for deeper analysis.

Splunk enriches the trace with the following attributes:
thousandeyes.account.id
thousandeyes.test.id
thousandeyes.permalink
thousandeyes.source.agent.id
These attributes provide context and allow you to navigate back to the related test in ThousandEyes.
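As an illustration only, the sketch below shows how these attributes could be read out of exported span data to recover the link back to ThousandEyes. The list-of-dictionaries shape and the tag values are assumptions made for this example, not the Splunk Observability Cloud API.

# Hypothetical exported trace: a list of span dictionaries with a "tags" mapping.
spans = [
    {
        "operationName": "GET /checkout",
        "tags": {
            "thousandeyes.account.id": "1234",
            "thousandeyes.test.id": "5678",
            "thousandeyes.permalink": "https://app.thousandeyes.com/example-permalink",
            "thousandeyes.source.agent.id": "90",
        },
    },
]

def thousandeyes_context(spans):
    # Collect the ThousandEyes attributes attached to any span in the trace.
    keys = (
        "thousandeyes.account.id",
        "thousandeyes.test.id",
        "thousandeyes.permalink",
        "thousandeyes.source.agent.id",
    )
    for span in spans:
        tags = span.get("tags", {})
        found = {key: tags[key] for key in keys if key in tags}
        if found:
            return found
    return {}

print(thousandeyes_context(spans))  # includes the permalink back to the related test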
Troubleshooting
Missing ThousandEyes Link in Splunk Observability Cloud
This issue occurs when the b3 propagator overrides the trace_state span attribute and clears its value. Set the OTEL_PROPAGATORS environment variable to the following value:
OTEL_PROPAGATORS=baggage,b3,tracecontext
After updating the environment variable, restart the instrumented service for the change to take effect.
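If the service configures propagators in code rather than through the environment, the equivalent setup looks roughly like the sketch below. It assumes a Python service instrumented with the OpenTelemetry SDK and the opentelemetry-propagator-b3 package; class and package names differ in other languages.

# In-code equivalent of OTEL_PROPAGATORS=baggage,b3,tracecontext.
from opentelemetry.propagate import set_global_textmap
from opentelemetry.propagators.composite import CompositePropagator
from opentelemetry.propagators.b3 import B3MultiFormat
from opentelemetry.baggage.propagation import W3CBaggagePropagator
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

# tracecontext is listed last so the W3C trace_state, which carries the
# ThousandEyes link data, is not cleared by the b3 propagator.
set_global_textmap(
    CompositePropagator(
        [
            W3CBaggagePropagator(),
            B3MultiFormat(),
            TraceContextTextMapPropagator(),
        ]
    )
)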