Monitor uptime and latency of your websites and API endpoints
Monitor your endpoints 24/7 and receive instant alerts via email, Slack, OpsGenie, or PagerDuty when issues arise.
Break down your request timing into DNS lookup, TCP connection, TLS handshake, server processing, and content transfer.
Support for all HTTP methods, custom headers, and request body data.
Simply paste your API endpoint or website URL into the input field above. Choose your HTTP method and add any required headers or request body.
https://api.example.com/v1/users
Receive comprehensive latency data broken down into different stages of the request lifecycle, helping you identify bottlenecks.
With a Pro plan, you can monitor your endpoints periodically and receive alerts if they go down or experience high latency.
https://api.example.com/v1/users
Monitor latency and uptime for multiple endpoints
Support for all HTTP methods, custom headers, and request body data
Generous quota for your monitoring needs
Extended data retention for trend analysis
Comprehensive breakdowns of response times, latencies and status codes
View trends and patterns with hourly to monthly reporting
We're constantly adding new integrations. Let us know what you need at hello@latencytest.me
Measure latencies and response times of your APIs across different endpoints and methods. Identify bottlenecks in your API infrastructure.
Verify if your CDN is working effectively by comparing response times with and without CDN integration.
Understand how your API performs for your users and pinpoint latency to see whether your DNS server and web server operate efficiently.
Monitor your endpoints 24/7 and receive instant alerts when issues arise. Track uptime percentage and response time trends.
Perfect tool for quick API latency checks. The breakdown of latency metrics is incredibly useful.
I use this daily to monitor our website latency. It's simple yet powerful. We have already identified a slow DNS server and remediated it.
The detailed latency timing breakdown helped us identify and fix several performance issues.
latency test is a free-to-use tool, with optional premium features, that helps you investigate where your requests spend the most time.
Currently, we support all HTTP verbs, such as GET, POST, PUT, PATCH, and DELETE. You can also add custom HTTP headers to your requests, which is especially useful if you need to specify a Content-Type or pass an Authorization header.
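As a rough illustration of what such a request looks like programmatically, here is a minimal Python sketch using only the standard library. The URL and token are placeholders, not real credentials; it builds a POST request with a Content-Type and an Authorization header without sending it:

```python
import json
import urllib.request

# Placeholder endpoint and token -- substitute your own values.
url = "https://api.example.com/v1/users"
payload = json.dumps({"name": "Ada"}).encode()

req = urllib.request.Request(
    url,
    data=payload,
    method="POST",  # any HTTP verb works: GET, POST, PUT, PATCH, DELETE
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-token>",
    },
)

print(req.get_method())                # POST
print(req.get_header("Content-type"))  # application/json
# urllib.request.urlopen(req) would actually send it; omitted here.
```

Calling `urllib.request.urlopen(req)` would perform the request; the sketch stops short of that so it runs without network access.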
DNS Lookup is the time taken to resolve the domain name to an IP address. This is the first step in making an HTTP request: our server asks a DNS server to convert a domain name into an IP address it can connect to. High latency here usually points to a slow DNS provider. Some DNS servers clearly perform better than others, so we recommend comparing providers at dnsperf.com and using a fast one, such as Cloudflare, to achieve low latency.
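You can time a DNS lookup yourself; the sketch below uses Python's `socket.getaddrinfo`. It resolves `localhost` so it runs anywhere; substitute your own hostname in practice:

```python
import socket
import time

def time_dns_lookup(hostname: str) -> float:
    """Return the seconds spent resolving hostname to addresses."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)  # the actual name resolution
    return time.perf_counter() - start

# "localhost" resolves locally; replace with e.g. "api.example.com".
elapsed = time_dns_lookup("localhost")
print(f"DNS lookup took {elapsed * 1000:.2f} ms")
```

Note that operating systems cache DNS answers, so a second call for the same hostname is typically much faster than the first.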
TCP Connection is the time taken to establish a TCP connection with the server. This involves the TCP three-way handshake: SYN, SYN-ACK, and ACK. A higher TCP connection time can indicate network congestion or a server that is geographically far away. Note that our servers are located in Europe, so you will likely see lower latency if your application is hosted in Europe as well.
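The handshake can also be timed directly. This sketch connects to a throwaway local listener so it runs offline; in practice you would point it at your own host and port, e.g. `("api.example.com", 443)`:

```python
import socket
import time

def time_tcp_connect(host: str, port: int) -> float:
    """Return the seconds spent on the TCP three-way handshake."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        start = time.perf_counter()
        sock.connect((host, port))  # blocks until SYN, SYN-ACK, ACK complete
        return time.perf_counter() - start

# Throwaway local listener so the example runs without internet access.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 picks a free port
listener.listen(1)
port = listener.getsockname()[1]

elapsed = time_tcp_connect("127.0.0.1", port)
listener.close()
print(f"TCP connect took {elapsed * 1000:.3f} ms")
```

Connecting to a loopback address takes microseconds; against a distant server the same measurement roughly reflects one network round trip.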
TLS Handshake only applies to HTTPS requests; if you test an HTTP URL, this metric won't show up. For HTTPS requests, this is the time taken to establish a secure connection. The TLS handshake involves exchanging encryption keys and certificates to set up a secure communication channel. High latency at this stage can have many causes, but it most commonly points to the SSL/TLS configuration on your servers. We recommend checking how various ciphers perform on your servers using the openssl speed command, and reviewing the MaxClients value defined in your web server configuration.
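Alongside openssl speed, it can help to see which cipher suites your OpenSSL build offers in the first place. The sketch below lists them via Python's ssl module; this shows the client side only, since the server's configuration determines what is actually negotiated:

```python
import ssl

ctx = ssl.create_default_context()
ciphers = ctx.get_ciphers()  # cipher suites this OpenSSL build will offer

for cipher in ciphers[:5]:  # show the first few
    print(cipher["name"], cipher["protocol"])
print(f"... {len(ciphers)} suites available")
```

Modern suites such as the TLS 1.3 AES-GCM and ChaCha20-Poly1305 families generally handshake and encrypt faster than legacy ones, which is why cipher selection can affect this metric.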
Server Processing is the time the server takes to process your request and generate a response. Depending on the nature of your application, this typically includes running the application code, querying the database(s), and preparing the response. High latency here means your application is taking too long to process the request; if so, we recommend profiling your application to identify bottlenecks.
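As a minimal sketch of such profiling, the example below uses Python's built-in cProfile. The `handle_request`, `query_database`, and `render_response` functions are stand-ins for your own code, not part of any real framework:

```python
import cProfile
import io
import pstats

def query_database() -> list:  # stand-in for a slow database call
    return [i * i for i in range(100_000)]

def render_response(rows: list) -> str:  # stand-in for templating
    return ",".join(map(str, rows[:10]))

def handle_request() -> str:  # stand-in for your request handler
    rows = query_database()
    return render_response(rows)

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # the most expensive calls appear at the top
```

In a real service you would wrap the profiler around your request handler (or use your framework's profiling middleware) and look for functions dominating cumulative time.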
Content Transfer represents the time taken to fetch the response body from your server. This metric depends on the size of the response and the distance between your server and ours, so expect higher content transfer latency for large response bodies. Also note that our server is located in Europe; if your application is geographically far from Europe, a higher value is expected here. We recommend using a CDN, such as Cloudflare or Amazon CloudFront, to reduce latency at this stage and serve content rapidly across the world.
The sum of all the above stages represents the total time taken for the complete HTTP request/response cycle. This is what users ultimately experience as the response time of your application.
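To see what that end-to-end time looks like in code, here is a self-contained Python sketch. It spins up a throwaway local HTTP server so it runs offline; in practice you would measure your own endpoint instead:

```python
import http.server
import threading
import time
import urllib.request

# Throwaway local server so the example runs without internet access;
# replace the URL below with your own endpoint in practice.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.perf_counter()
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    body = resp.read()  # reading the body includes content transfer
total = time.perf_counter() - start

server.shutdown()
print(f"Total request/response time: {total * 1000:.2f} ms ({len(body)} bytes)")
```

This single number bundles DNS lookup, TCP connection, TLS handshake (for HTTPS), server processing, and content transfer together, which is exactly why breaking it into stages is useful for diagnosis.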
Yes and no. The latency testing tool on this page is free to use. However, if you wish to access more features, you can upgrade to a paid plan after registering.
Yes, you can monitor your website and API uptime. We will send you an alert using your preferred alerting service if your website or API is down. This feature is available in Pro plans.