Measuring DNS Performance for the User Experience


If you’re publishing content of any kind on the Internet, there’s no question that performance is important. The faster your users can get the information they’re looking for, the more likely they’ll be to stay engaged (and buy something, if you’ve got something to sell them).

Speed matters and savvy operators use every opportunity to optimize their content delivery. If the content is available in multiple locations, users need to be directed to the copy that will give them the best performance. The best option is often the closest geographically, but not always: the state of the network between the user and their destination is important, too. It doesn’t do any good to direct users to a website close to them if the path along the way offers poor performance. The destination has to be available, too, since being directed to a website that’s down or not responding is obviously bad.

Dyn’s Managed DNS service offers this kind of intelligent traffic routing (see our Traffic Director product), returning a response to the user to get them to the optimum source of the content they’re seeking. You have to return the right response—and that response needs to arrive fast—to quickly get the user on their way to their destination. At Dyn, we’ve built a fast DNS network that does just that. But how fast is fast enough? And how do you measure the performance of a Managed DNS provider?

There’s a saying in the technical community: “The nice thing about standards is that there are so many to choose from.” The same is true of DNS measurements. It’s easy to find both commercial and free services that measure and rank different DNS providers. But which one should you believe? Each reports different results, because there are so many ways to run the tests and so many variables in play.

At Dyn, we believe that to judge DNS performance, you need to go back to where we started with this post: the user. Ultimately, it’s the user’s experience that matters. It’s easy to measure DNS latency from all over the Internet, including various out-of-the-way places, to a provider’s service, but does that reflect the real-world behavior of the DNS system? Not necessarily. It’s also possible to game test results by studying a particular testing protocol and making changes just to improve test numbers that don’t necessarily translate to actual improved performance for users.

So how do you measure from the user’s perspective? Consider how DNS resolution works. A user’s device—phone, tablet, laptop, etc.—is a DNS client, and it sends every DNS lookup, or query, to a recursive name server. Usually the recursive server is operated by the ISP or on an enterprise network, but it’s possible to override a client’s DNS configuration to use a different recursive server, such as Google’s Public DNS. After receiving a query, the recursive server does all the DNS heavy lifting on behalf of the client: it queries multiple authoritative name servers on the Internet to chase down the answer, which it then returns to the client.
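To make that division of labor concrete, here’s a minimal sketch using the third-party dnspython library (the 2.x API is assumed): the client sends a single query to a recursive server, which does all the chasing and hands back the finished answer.

    import dns.resolver  # third-party dnspython library (2.x API assumed)

    # Act as a DNS client: send one query to a recursive server and
    # let it do the heavy lifting on our behalf.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]  # e.g., overriding to Google Public DNS

    answer = resolver.resolve("example.com", "A")  # one client query...
    for rr in answer:                              # ...the recursive server did the rest
        print(rr.address)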

What matters from the user’s perspective, then, is how fast the recursive server they use can get an answer from the authoritative name servers of the DNS provider for the website’s domain name. For example, let’s say I’m headed to Twitter. Twitter uses Dyn’s Managed DNS service, so the faster the recursive server I use can get a response about twitter.com from Dyn’s authoritative servers, the faster I’ll get to Twitter’s website.
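The step worth timing, then, is the recursive server’s query to the authoritative server. Here’s a rough sketch of that measurement, again with dnspython; the authoritative server address below is a placeholder, not one of Dyn’s:

    import time
    import dns.message
    import dns.query  # from the third-party dnspython library

    AUTH_SERVER = "198.51.100.53"  # placeholder authoritative server address
    query = dns.message.make_query("example.com", "A")

    # Send one UDP query straight to the authoritative server and time
    # the round trip, just as a recursive server would experience it.
    start = time.perf_counter()
    response = dns.query.udp(query, AUTH_SERVER, timeout=2.0)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"authoritative answer in {elapsed_ms:.1f} ms: {response.answer}")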

We set out to answer the question of how fast our authoritative servers were responding to the recursive servers that query them. Our servers receive queries from literally millions of sources every day, but a relatively small number of sources account for a significant majority of the traffic: each month, 90% of the total queries we receive come from about 150,000 recursive servers. If those recursive servers are seeing a fast response from Dyn, the clients of those servers are getting a correspondingly fast response and their users are seeing good DNS performance.
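Finding that set of heavy hitters is a simple cumulative-sum exercise. As an illustration (the per-source counts here are invented; real query logs would be aggregated first):

    # Find the smallest set of sources covering a given fraction of
    # total query volume, largest senders first.
    def heavy_hitters(counts, fraction=0.90):
        total = sum(counts.values())
        covered, top = 0, []
        for source, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
            top.append(source)
            covered += n
            if covered >= fraction * total:
                break
        return top

    counts = {"203.0.113.1": 9000, "203.0.113.2": 700, "203.0.113.3": 300}
    print(heavy_hitters(counts))  # -> ['203.0.113.1'] covers 90% of 10,000 queries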

Our DNS network consists of 20 sites worldwide, and each of those sites hosts a “vantage point” from our worldwide Internet performance network of more than 200 locations (learn more about our new Internet Intelligence product). Using the vantage point at each site, we measured the round-trip latency to each of those 150,000 most popular recursive servers. Because of various filtering, only about half actually responded. For each server that did respond, we noted which of our sites had the lowest latency to it. The result was a list of 75,000 latency measurements, one per responding recursive server, each representing the round-trip time between that server and the closest Dyn Managed DNS site.
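The reduction at the end is simple: for each responding server, keep the lowest round-trip time seen from any site. A minimal sketch, assuming the raw measurements have already been collected into a per-site mapping:

    # latencies[site][server] = round-trip time in ms, or None if that
    # server never answered probes from that site.
    def best_latency_per_server(latencies):
        best = {}
        for site, rtts in latencies.items():
            for server, rtt in rtts.items():
                if rtt is not None and (server not in best or rtt < best[server]):
                    best[server] = rtt
        return best  # one entry per responding server: RTT to its closest site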

The results were pretty impressive. The median round-trip time was nine milliseconds: half the servers responded in less than nine milliseconds and half responded in more. The average round-trip time was 20 milliseconds. That means the Dyn Managed DNS network is providing fast service to the places that matter most: the actual recursive servers that query us on behalf of real users.
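Both summary statistics are one-liners in Python. The values below are invented, chosen only to show how a long tail of slow responders pulls the average well above the median:

    import statistics

    # One best-case RTT (ms) per responding recursive server.
    rtts = [4.0, 7.5, 9.0, 11.2, 68.3]
    print(statistics.median(rtts))  # 9.0  -- the middle value, robust to outliers
    print(statistics.mean(rtts))    # 20.0 -- pulled upward by the slow tail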

It’s good to measure DNS performance, but you have to make sure that you’re measuring something that actually matters. You can never go wrong remembering the users: it’s the performance they see that counts. So when you measure DNS performance, be sure to look at it from the user’s point of view. In fact, the same goes for measuring any kind of performance online.

Matt Larson is the CTO for Dyn, the world leader in Internet performance solutions. Matt leads technical innovation at Dyn. In addition, he oversees the engineering, labs and architecture teams and ensures that Dyn remains a leader in engineering excellence. Follow Matt on Twitter at @MatthewHLarson or @Dyn.
