Martin D Kealey wrote:
That's probably true, but it has nothing to do with user experience: the round-trip time from Dunedin to Auckland for a DNS query is going to be completely swamped by the time to fetch the contents of the page, which is likely going to have to come from California.
CDNs alleviate this to a certain degree, and certainly I notice the occasional page blocking rendering while waiting for DNS to resolve.
Firstly, nobody makes a page that loads objects from 100 different domains, and all browsers cache DNS results internally (often beyond the declared TTL!).
I wish that were the case, but many web 2.0/social-networking sites these days do generate massive numbers of DNS transactions, due to the number of advertising networks, CDNs, and various embedded contents. Often it's with fairly low TTLs or dynamically generated FQDNs (as Joe pointed out).

I did some digging in the past month due to some DNS issues a customer was seeing, and the volume of DNS transactions per average broadband subscriber has increased massively in the last 2-3 years, driven by sites like Facebook and the multi-embedded nature of many popular sites. I think there is quite a bit of truth to the claim that being further from your recursive server is unhelpful to performance wherever DNS cache hits are possible.

For the relatively low cost of deploying recursive DNS infrastructure at your nearest subscriber management POP, I'd suggest doing so...

aj
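As a rough illustration of the multi-embedded point: each unique hostname referenced by a page costs at least one recursive lookup on a cold cache, and with low TTLs many recur on every visit. A minimal sketch (the hostnames below are hypothetical, not from any real page):

```python
from urllib.parse import urlparse

# Hypothetical resource URLs embedded in a single "web 2.0" page:
# ad networks, CDNs, analytics pixels, social widgets, etc.
resource_urls = [
    "https://www.example.com/index.html",
    "https://cdn1.example-cdn.net/js/app.js",
    "https://cdn2.example-cdn.net/css/site.css",
    "https://ads.example-ads.com/serve?id=123",
    "https://pixel.example-analytics.com/track.gif",
    "https://widgets.example-social.com/like.js",
]

# Each unique hostname is one fresh DNS resolution before that
# object's fetch can even begin.
unique_hosts = {urlparse(u).hostname for u in resource_urls}
print(len(unique_hosts))  # -> 6
```

Multiply that count by the round-trip time to a distant recursive server and the per-page DNS cost adds up quickly, which is why a resolver at the nearest POP helps even when the content itself comes from California.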