Latency, Loss, Disconnections, and Jitter Tests

Latency, packet loss and jitter affect everything you do on the internet. If these metrics fare badly, then every other test will fare badly too, so they are critical to measure. You can find more detailed information in the blog series beginning with Path Quality Part 1: The Surprising Impact of 1% Packet Loss.

UDP Latency and Loss Test

The UDP (User Datagram Protocol) latency and loss test measures the round-trip time of small UDP packets between the router and a target test server. Each packet consists of an 8-byte sequence number and an 8-byte timestamp. If a packet is not received back within two seconds of sending, it is treated as lost.
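
As a concrete illustration of this packet format, a 16-byte probe and the two-second loss rule could be sketched as follows. This is only a sketch: the field order, byte order, timestamp units and the echo server address are assumptions, not the agent's actual wire format.

    import socket
    import struct
    import time

    PACKET_FMT = "!QQ"   # 8-byte sequence number + 8-byte timestamp (layout assumed)
    TIMEOUT_S = 2.0      # a probe not echoed back within two seconds counts as lost

    def send_probe(sock, server, seq):
        """Send one probe and return the round-trip time in ms, or None if lost."""
        sent_us = int(time.time() * 1_000_000)          # timestamp in microseconds
        sock.sendto(struct.pack(PACKET_FMT, seq, sent_us), server)
        sock.settimeout(TIMEOUT_S)
        try:
            data, _ = sock.recvfrom(16)
        except socket.timeout:
            return None                                  # treated as lost
        echoed_seq, echoed_us = struct.unpack(PACKET_FMT, data)
        if echoed_seq != seq:
            return None                                  # stale or mismatched echo
        return (int(time.time() * 1_000_000) - echoed_us) / 1000.0

    # Hypothetical usage against a made-up echo server:
    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # rtt_ms = send_probe(sock, ("udp-echo.example.net", 6000), seq=1)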

UDP Latency Test Frequency

The test operates continuously in the background. It is configured to randomly distribute the sending of the echo requests over a fixed interval, reporting the summarized results once the interval has elapsed.

By default, the test is configured to send a packet every 1.5 seconds, meaning a maximum of 2,400 packets sent per hour. This number will usually be lower, typically around 2,000 packets, because by default the test will not send packets when other tests are in progress or when cross-traffic (i.e., other internet activity – see Testing Thresholds) is detected. A higher or lower sampling rate may be used if desired, with results aggregated over intervals ranging from a minimum of one minute up to a maximum of 24 hours.
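
The scheduling behaviour described above might look roughly like the sketch below. The helper names (send_probe, cross_traffic_detected, other_test_running) are placeholders for illustration, not the agent's real interfaces.

    import random
    import time

    INTERVAL_S = 3600    # summarize and report once per hour by default
    MEAN_GAP_S = 1.5     # one packet every 1.5 s gives at most 2,400 per hour

    def run_interval(send_probe, cross_traffic_detected, other_test_running):
        results = []
        end = time.time() + INTERVAL_S
        while time.time() < end:
            # Randomize the gap so probes are spread across the interval.
            time.sleep(random.uniform(0.5 * MEAN_GAP_S, 1.5 * MEAN_GAP_S))
            if cross_traffic_detected() or other_test_running():
                continue                  # skip this slot; hourly totals drop toward ~2,000
            results.append(send_probe())  # RTT in ms, or None if the probe was lost
        return results                    # summarized once the interval has elapsed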

UDP Latency Test Metrics

The test records the number of packets sent each hour, the average round-trip time of the packets, and the total number of packets lost. The test uses the 99th percentile when calculating the summarized minimum, maximum and average results on the Device Agent (a sketch of this aggregation follows the list below).

The following key metrics are recorded by the test:

  • Round-trip latency (mean).

  • Round-trip latency (minimum).

  • Round-trip latency (maximum).

  • Round-trip latency (standard deviation).

  • Round-trip packet loss.

  • Number of packets sent and received.

  • Hostname and IP address of the test server.
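
One plausible reading of the aggregation described above is that samples beyond the 99th percentile are discarded before the summary statistics are computed. A minimal sketch under that assumption:

    import statistics

    def summarize(rtts_ms, sent):
        """rtts_ms: RTTs (ms) of echoed probes; sent: total probes sent this interval."""
        received = len(rtts_ms)
        if received == 0:
            return {"packets_sent": sent, "packets_received": 0, "packets_lost": sent}
        # Trim to the 99th percentile before summarizing (assumed interpretation).
        cutoff = sorted(rtts_ms)[max(0, int(0.99 * received) - 1)]
        trimmed = [r for r in rtts_ms if r <= cutoff]
        return {
            "rtt_mean_ms": statistics.mean(trimmed),
            "rtt_min_ms": min(trimmed),
            "rtt_max_ms": max(trimmed),
            "rtt_stddev_ms": statistics.stdev(trimmed) if len(trimmed) > 1 else 0.0,
            "packets_sent": sent,
            "packets_received": received,
            "packets_lost": sent - received,
        }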

UDP Latency Under Load Test

The UDP latency under load test leverages the continuous running of the UDP latency and loss test to create a latency under load metric when it runs in parallel with a speed test. This special use case offers latency under load results via UDP, unlike our responsiveness (latency under load) test, which currently only supports TCP. It works as follows:

  1. During normal operation, the UDP latency and loss test runs continually. The test only pauses operation when another test is scheduled to run.

  2. However, during any download or upload speed test that is enabled for latency under load, the UDP latency and loss test continues to send its latency measurement packets while the speed test is running, so that it can measure the impact of the speed test on latency.

  3. The latency results obtained during speed tests are reported as latency under load results, which are separate from the standard latency and loss results derived during normal operation.

The speed test will always load the line to 100% capacity, providing a worst-case latency result.
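
Conceptually, steps 1-3 amount to keeping the probe loop alive while the speed test saturates the line and tagging those samples separately. A minimal sketch, assuming hypothetical run_speed_test and send_probe helpers:

    import threading

    def latency_under_load(run_speed_test, send_probe, gap_s=1.5):
        """Probe continuously while a speed test runs; return the under-load samples."""
        samples = []
        done = threading.Event()

        def probe_loop():
            while not done.is_set():
                samples.append(send_probe())   # RTT in ms, or None if lost
                done.wait(gap_s)

        t = threading.Thread(target=probe_loop)
        t.start()
        run_speed_test()    # loads the line to 100% capacity (worst case)
        done.set()
        t.join()
        return samples      # reported separately from the idle latency results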

If the UDP latency and loss test is set to use a higher-priority queue via ECN (Explicit Congestion Notification) or DSCP (Differentiated Services Code Point) markings, it will provide insight into the effectiveness of this queuing strategy when compared with another router running the UDP latency and loss test without access to a higher-priority queue.
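
On most platforms the DSCP and ECN bits live in the IP TOS/traffic-class byte, so marking a probe socket could look like the sketch below. The DSCP value 46 (Expedited Forwarding) is only an example; whether the marking actually buys a higher-priority queue depends entirely on the network's configuration.

    import socket

    DSCP_EF = 46                      # Expedited Forwarding (example value only)
    ECN_ECT0 = 0b01                   # ECN-Capable Transport (0)
    tos = (DSCP_EF << 2) | ECN_ECT0   # DSCP is the top 6 bits, ECN the bottom 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    # Probes sent from this socket now carry the chosen DSCP/ECN marking.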

ICMP Ping Test

We offer ICMP (Internet Control Message Protocol) ping tests that, similar to the UDP latency test, measure the round-trip time between sending an ICMP type 8 (echo request) packet to the target hostname or IP address and receiving the corresponding echo reply.
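
For reference, an ICMP echo request is a type 8, code 0 message with a 16-bit checksum, identifier and sequence number followed by an arbitrary payload. A sketch of building one (not the agent's implementation; sending it requires a raw socket and elevated privileges):

    import struct

    def icmp_checksum(data: bytes) -> int:
        """Internet checksum: one's-complement sum of 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
        header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # type 8, code 0, checksum 0
        checksum = icmp_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload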

Contiguous Latency and Loss (Disconnections) Test

The contiguous latency and loss (disconnections) test is an optional extension to the UDP latency/loss test. It records instances when two or more consecutive packets are lost to the same test server. Alongside each event, we record the timestamp, the number of packets lost and the duration of the event.

By executing the test against multiple diverse servers, you can begin to observe server outages (when multiple probes see disconnection events to the same server simultaneously) and disconnections of your users' home or office connection (when a single probe loses connectivity to all servers simultaneously).

Typically, this test works best at a high frequency, such as the UDP latency/loss test default of one packet every 1.5 seconds, which amounts to approximately 2,000 packets per hour given reasonable levels of cross-traffic. This provides a resolution of 2-4 seconds for disconnection events.
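
Given the time-ordered, per-packet results against a single test server, disconnection events (two or more consecutive losses) could be extracted along these lines. The (timestamp, received) sample format is an assumption for illustration:

    def disconnection_events(samples, min_run=2):
        """samples: time-ordered (timestamp, received) tuples for one test server."""
        events, run = [], []

        def close_run():
            if len(run) >= min_run:
                events.append({
                    "start": run[0],
                    "packets_lost": len(run),
                    "duration_s": run[-1] - run[0],
                })

        for ts, received in samples:
            if received:
                close_run()
                run = []
            else:
                run.append(ts)
        close_run()    # an event may still be open at the end of the interval
        return events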

UDP Jitter Test

The UDP jitter test is commonly used to simulate a synthetic VoIP (voice-over-IP) call where a real SIP server is unavailable. It uses a fixed-rate stream of UDP traffic running between the agent and the test node. A bi-directional 64 kbps stream is used with the same characteristics and properties (i.e. packet sizes, delays, bitrate) as the G.711 audio codec.

The agent initiates the connection, thus overcoming NAT (network address translation) issues, and informs the server of the rate and characteristics with which it would like to receive the return stream.

The standard configuration uses 500 packets upstream and 500 packets downstream.
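
Assuming the common 20 ms packetization for G.711 (an assumption; the text above does not state the frame size), the stream parameters work out roughly as follows:

    BITRATE_BPS = 64_000   # G.711 audio bitrate
    FRAME_S = 0.020        # assumed 20 ms packetization
    PACKETS = 500          # per direction, standard configuration

    payload_bytes = int(BITRATE_BPS / 8 * FRAME_S)  # 160 bytes of audio per packet
    packets_per_sec = 1 / FRAME_S                   # 50 packets per second
    stream_duration = PACKETS * FRAME_S             # roughly 10 seconds per direction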

The agent records the round-trip time (latency), the number of packets it sent and received (loss rate), and the jitter observed for packets it received from the server. The server does the same, but for the reverse traffic flow, thus providing bi-directional loss and jitter. Importantly, these measures can serve as an alternative when the UDP latency and loss test is not available.

Jitter is calculated using the PDV (packet delay variation) approach described in section 4.2 of RFC 5481. The 99th percentile is recorded and used in all calculations when deriving the PDV.
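
In the PDV approach, each packet's delay is compared against the minimum delay observed across the stream, and the resulting distribution is summarized. A sketch of that calculation with the 99th percentile taken as described above:

    def pdv_99th_percentile(delays_ms):
        """delays_ms: per-packet delays (ms) for the packets that were received."""
        d_min = min(delays_ms)
        pdv = sorted(d - d_min for d in delays_ms)   # delay variation relative to the minimum
        index = max(0, int(0.99 * len(pdv)) - 1)     # 99th percentile sample
        return pdv[index]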
