Broadband experience and DNS resolution speeds
Hello All

I suspect many of you will already know that the Commerce Commission has released its report into broadband quality for the last six months of last year:

http://www.comcom.govt.nz/assets/Uploads/Report-on-New-Zealand-Broadband-Qua...

On page 31 there is a specific discussion about the impact of caching DNS resolvers on resolution speeds:

"The DNS performance from remote test sites to the ISPs tested in all cities shows that webpage loading is slower the further the user is from the Auckland based DNS."

This implies that all ISPs have their caching DNS resolvers based in Auckland. I would be very interested to know if that is the case. If anyone could enlighten me, on or off list, I would be very grateful.

cheers
Jay

--
Jay Daley
Chief Executive
.nz Registry Services (New Zealand Domain Name Registry Limited)
desk: +64 4 931 6977
mobile: +64 21 678840
<grrrrrrrrrrr> On Thu, 20 May 2010, Jay Daley wrote, quoting the Commerce Commission:
"The DNS performance from remote test sites to the ISPs tested in all cities shows that webpage loading is slower the further the user is from the Auckland based DNS."
That's probably true, but it has nothing to do with user experience: the round-trip time from Dunedin to Auckland for a DNS query is going to be completely swamped by the time to fetch the contents of the page, which is likely going to have to come from California.

And they go on to say...

"The test results shown in Figure 15 demonstrate DNS delays of 41ms to 70ms for Dunedin users. Subject to browser type, that could mean, for example, a delay of 70ms x 100 files, or 7 seconds, before a page completes loading."

... which is complete balderdash. Firstly, nobody makes a page that loads objects from 100 different domains, and all browsers cache DNS results internally (often beyond the declared TTL!). Secondly, all recent browsers both render progressively and parallelize loading of subordinate objects. Thirdly, it would mean that everyone in Auckland would be seeing 4.1 seconds for the same page, and they're not.
This implies that all ISPs have their caching DNS resolvers based in Auckland.
It *states* that they're all in Auckland, which is probably true, but it does not offer any rationale to support the conclusion (that it's because of the location of the DNS resolvers).

I'm going to stop reading the report now; it's getting bad for my blood pressure.

</grrrrrrrrrrr>

-Martin
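[A minimal sketch of Martin's ratio argument, timing the DNS lookup and the page fetch separately; www.example.com is just a stand-in for an overseas-hosted page, and on a typical NZ connection the fetch would be expected to dominate by an order of magnitude or more:

    import socket
    import time
    import urllib.request

    host = "www.example.com"   # stand-in for an overseas-hosted site
    url = f"http://{host}/"

    t0 = time.monotonic()
    socket.getaddrinfo(host, 80)        # the DNS lookup on its own
    t1 = time.monotonic()
    # The fetch repeats a lookup internally, but that one usually hits
    # the OS or resolver cache, so this mostly measures the transfer.
    urllib.request.urlopen(url, timeout=30).read()
    t2 = time.monotonic()

    print(f"DNS lookup: {(t1 - t0) * 1000:.0f} ms")
    print(f"Page fetch: {(t2 - t1) * 1000:.0f} ms")
]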
Martin D Kealey wrote:
That's probably true, but it has nothing to do with user experience: the round-trip time from Dunedin to Auckland for a DNS query is going to be completely swamped by the time to fetch the contents of the page, which is likely going to have to come from California.
CDNs alleviate this to a certain degree, and certainly I notice the occasional page blocking rendering while waiting for DNS to resolve.
Firstly, nobody makes a page that loads objects from 100 different domains, and all browsers cache DNS results internally (often beyond the declared TTL!).
I wish that were the case; but many web2.0/social networking type sites these days do have massive numbers of DNS transactions due to the number of advertising networks, CDNs, and various embedded contents. Often it's with a fairly low TTL or dynamically generated FQDNs (as Joe pointed out).

I did some digging in the past month due to some DNS issues a customer was seeing, and the volume of DNS transactions per your average broadband subscriber has massively increased in the last 2-3 years due to sites like Facebook and the multi-embedded nature of lots of popular sites.

I think there probably is quite a bit of truth to the idea that being further away from your recursive server is unhelpful to performance where DNS cache hits are possible. For the relatively low cost of deploying recursive DNS infrastructure at your nearest subscriber management POP, I'd suggest doing so...

aj
On 21/05/2010 5:21 p.m., Alastair Johnson wrote:
I wish that were the case; but many web2.0/social networking type sites these days do have massive numbers of DNS transactions due to the number of advertising networks, CDNs, and various embedded contents. Often it's with a fairly low TTL or dynamically generated FQDNs (as Joe pointed out).
A major gripe I have with Google Analytics is that, depending on the browser's rendering priorities, it can cause a page to _load_ slowly even though the user has received 100% of the content.

All these third party analytics solutions are causing more rendering delay than many would think, and it's partially the additional DNS lookup, as well as then requesting from a remote server.

I think it's the trend of the new breed of websites (I hate the term web2.0, as I would call it web0.2a), and it's something that needs to be addressed both at the production end AND the delivery end.

Things change, for better or for worse, and we need to keep up with demand. Glue and tape anyone?

- Drew
On 21/05/2010, at 6:30 PM, Drew Broadley wrote:
A major gripe I have with Google Analytics is that, depending on the browser's rendering priorities, it can cause a page to _load_ slowly even though the user has received 100% of the content.
All these third party analytics solutions are causing more rendering delay than many would think, and it's partially the additional DNS lookup, as well as then requesting from a remote server.
Does the Google Analytics loading-in-the-background thing really impact user experience? There was some code running on Wikipedia that also loaded things from several different hostnames, except it was intentionally broken and some would never load. This was 1 in every 100 page views for many, many months. How many people noticed? I don't know of anyone that complained. Google were doing similar things on the search home page. I bet you didn't notice.
I'll give you an example. I have a couple of customers that operate strict white lists when it comes to web browsing. You can only visit URLs if they are on the approved list. One of those companies has just 6 URLs on the approved list, and Google is not one of them. They have pretty much taken the stance that the Internet at work is a tool, and not to be used for anything else.

But every now and then I get asked to investigate why they can't get to certain web sites. The last one from memory was www.courierpost.co.nz, which they were using for couriers. It turned out CourierPost has built their site so that if Google Analytics is not available the pages won't load (well actually, there is about a 2 minute timeout, and then they load). I sent a message to the CourierPost people saying how to modify their website to prevent the problem, but no response. They had simply embedded the Google Analytics code in a poor place, which prevented the browser from rendering the page even though it was fully loaded.

So yes, Google Analytics can definitely affect the user experience - in this case *paying* customers.
On Fri, 2010-05-21 at 20:33 +0000, Philip D'Ath wrote:
I'll give you an example. I have a couple of customers that operate strict white lists when it comes to web browsing. You can only visit URLs if they are on the approved list. One of those companies has just 6 URLs on the approved list, and Google is not one of them. They have pretty much taken the stance that the Internet at work is a tool, and not to be used for anything else.
But every now and then I get asked to investigate why they can't get to certain web sites. The last one from memory was www.courierpost.co.nz, which they were using for couriers. It turned out CourierPost has built their site so that if Google Analytics is not available the pages won't load (well actually, there is about a 2 minute timeout, and then they load).
You can get rid of the delay if you just add some local DNS to make the analytics address resolve to something local. I use dnsmasq to do this for many of these kinds of domains, making them resolve to a local IP with a catch-all host that serves a blank HTML page for any request.

I guess we're all getting pretty off-topic now though, and further discussion should be off-list :-)

Cheers,
Andrew.

--
------------------------------------------------------------------------
andrew (AT) morphoss (DOT) com                            +64(272)DEBIAN
I have not seen high-discipline processes succeed in commercial settings.
  - Alistair Cockburn
------------------------------------------------------------------------
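[A minimal sketch of the dnsmasq trick Andrew describes; the domain names and the 192.168.1.10 address are hypothetical, with a catch-all web server at that address returning a blank page for any request:

    # /etc/dnsmasq.conf -- answer locally for analytics/ad domains.
    # dnsmasq's address=/domain/ip directive matches the domain and
    # everything under it; the domains and the IP here are illustrative.
    address=/google-analytics.com/192.168.1.10
    address=/doubleclick.net/192.168.1.10
]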
Who produced this report? Can they come to the next NZNOG meeting for a flogging?
On 2010-05-20, at 09:22, Nathan Ward wrote:
Who produced this report? Can they come to the next NZNOG meeting for a flogging?
There are grains of truth in the idea that increased latency between clients and resolvers can lead to decreased performance for web applications. Many of the newfangled javascript-riddled sites that the kids seem to like these days use randomised URIs and similar techniques deliberately to defeat caching, since caching for some interactive web $buzzword.$excitement apps leads to user pain and suffering.

Vixie presented some data at the recent DNS-OARC meeting in Prague which described a trend of decreasing DNS cache hits, and at least in some cases found that random-looking URIs were contributing to the effect (see https://www.dns-oarc.net/files/workshop-201005/vixie-oarc-Prague.pdf).

If an application like Facebook can generate a few hundred HTTP sessions per page load, it seems possible that cache misses (both in DNS and HTTP caches, remote and local) have a greater effect than you would imagine, and perhaps the cumulative effect of Dunedin-Auckland DNS latency has some noticeable effect. But I agree it seems like a stretch (every cache miss in Auckland probably requires a trip to an authority-only server across an ocean).

Some actual science might be nice to see, maybe.

Joe
On 21/05/2010, at 1:42 AM, Joe Abley wrote:
On 2010-05-20, at 09:22, Nathan Ward wrote:
Who produced this report? Can they come to the next NZNOG meeting for a flogging?
There are grains of truth in the idea that increased latency between clients and resolvers can lead to decreased performance for web applications. Many of the newfangled javascript-riddled sites that the kids seem to like these days use randomised URIs and similar techniques deliberately to defeat caching, since caching for some interactive web $buzzword.$excitement apps leads to user pain and suffering.
Vixie presented some data at the recent DNS-OARC meeting in Prague which described a trend of decreasing DNS cache hits, and at least in some cases found that random-looking URIs were contributing to the effect (see https://www.dns-oarc.net/files/workshop-201005/vixie-oarc-Prague.pdf).
If an application like Facebook can generate a few hundred HTTP sessions per page load, it seems possible that cache misses (both in DNS and HTTP caches, remote and local) have a greater effect than you would imagine, and perhaps the cumulative effect of Dunedin-Auckland DNS latency has some noticeable effect. But I agree it seems like a stretch (every cache miss in Auckland probably requires a trip to an authority-only server across an ocean).
It's quite common to use random hostnames to encourage web browsers to parallelize sessions, as (from memory) most browsers will not open more than 4 connections to a single hostname/port (see the sketch below).
Some actual science might be nice to see, maybe.
Yep.

--
Nathan Ward
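[A minimal sketch of the hostname-sharding trick Nathan describes, assuming hypothetical img0-img3 aliases that all point at the same server; hashing the path keeps each asset on a stable hostname so browser and proxy caches still work:

    import hashlib

    # Hypothetical setup: img0..img3.static.example.com are CNAMEs to
    # one web server; spreading assets across them lets the browser
    # open more parallel connections.
    SHARDS = 4
    DOMAIN = "static.example.com"

    def shard_url(path: str) -> str:
        # Hash the path so a given asset always maps to the same
        # hostname, keeping HTTP caching effective.
        n = int(hashlib.md5(path.encode()).hexdigest(), 16) % SHARDS
        return f"http://img{n}.{DOMAIN}/{path}"

    print(shard_url("css/site.css"))
    print(shard_url("images/logo.png"))
]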
I don't recall how many connections a browser makes to a single host, but you are right, it's a common thing that is done. There is more than one product in the marketplace that uses this 'trick' to try and accelerate web sites. For the most part it works.

I would be very interested in any form of investigation around how much of a difference this really makes and would be willing to participate in such an investigation. However I think there is also value in understanding what impact a DNS cache hierarchy within NZ would have for NZ as a whole; I am somewhat of an idealistic person, so perhaps it has no value and it just interests me.

I would also be interested in understanding what other providers get in terms of cache hit vs recursion and overall requests vs infrastructure deployed. I know this is somewhat treasured and possibly not publicly available information. If there is any interest from other parties to share this I will approach the people here and see what can be done around some form of disclosure.

Just my thoughts at 6:30 am...

Regards
Paul Tinson
On 21/05/2010, at 6:43 AM, Paul Tinson wrote:
I don't recall how many connections a browser makes to a single host, but you are right, it's a common thing that is done. There is more than one product in the marketplace that uses this 'trick' to try and accelerate web sites.
Also, modern browsers now engage in DNS prefetching - Chrome and Firefox 3.5 and above - as another trick. Even before prefetching, though, most browsers used parallel threads for DNS lookups or an async DNS library (see the sketch at the end of this message).
I would be very interested in any form of investigation around how much of a difference this really makes and would be willing to participate in such an investigation.
+1. We can contribute expertise.
However I think there is also value in understanding what impact a DNS cache hierarchy within NZ would have for NZ as a whole; I am somewhat of an idealistic person, so perhaps it has no value and it just interests me.
+1. I wonder how much the network design of DSL services constrains us? In other words, if I ran an ISP based in Wellington and had customers in Dunedin on Telecom DSL, do I only see their traffic when it arrives at a Wellington POP? So is it actually impossible for me, as an ISP with this type of customer, to place DNS servers down in Dunedin to improve the DNS performance for those customers?
I would also be interested in understanding what other providers get in terms of cache hit vs recursion and overall requests vs infrastructure deployed. I know this is somewhat treasured and possibly not publicly available information. If there is any interest from other parties to share this I will approach the people here and see what can be done around some form of disclosure.
We are willing to receive such data individually, anonymise it and then redistribute. Obviously this data is very useful to us too. cheers Jay
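[A minimal sketch of the parallel-lookup behaviour Jay mentions, with a hypothetical list of hostnames a single page might reference; each name resolves on its own thread, so one slow lookup doesn't serialise the rest:

    import socket
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical hostnames one page might pull objects from.
    HOSTS = ["www.example.com", "static.example.com", "ads.example.net"]

    def resolve(name):
        # Resolve one hostname; return None instead of raising on failure.
        try:
            return name, socket.getaddrinfo(name, 80)[0][4][0]
        except socket.gaierror:
            return name, None

    # Resolve all names concurrently, as browser DNS threads do.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for name, addr in pool.map(resolve, HOSTS):
            print(name, addr)
]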
Jay Daley wrote:
I wonder how much the network design of DSL services constrains us? In other words, if I ran an ISP based in Wellington and had customers in Dunedin on Telecom DSL, do I only see their traffic when it arrives at a Wellington POP? So is it actually impossible for me, as an ISP with this type of customer, to place DNS servers down in Dunedin to improve the DNS performance for those customers?
If the ISP placed infrastructure into Dunedin and had a handover from Telecom Wholesale in Dunedin, then your idea works. If the ISP<>TNZ handover is in Wellington then there is no insertion point for L3 services in the L2 path between Wellington and the subscriber in Dunedin. That's how a bitstream service works...
Nathan Ward wrote:
Who produced this report? Can they come to the next NZNOG meeting for a flogging?
From the last page of the report:

"Disclosure Statement from Epitiro
The data used in the preparation of this report is provided to the Commission under contract by Epitiro (NZ) Limited, a part of Epitiro, a technology-focused customer experience management and benchmarking company operating world-wide. Epitiro is committed to providing information that is objective, reliable, and unbiased."

I've commented on some of the methodology and assertions made in the past but nothing seems to change with each issue of the report.
On 21/05/2010, at 5:16 PM, Alastair Johnson wrote:
Nathan Ward wrote:
Who produced this report? Can they come to the next NZNOG meeting for a flogging?
From the last page of the report:
"Disclosure Statement from Epitiro The data used in the preparation of this report is provided to the Commission under contractby Epitiro (NZ) Limited, a part of Epitiro, a technology-focused customer experience management and benchmarking company operating world-wide. Epitiro is committed to providing information that is objective, reliable, and unbiased."
I've commented on some of the methodology and assertions made in the past but nothing seems to change with each issue of the report.
Data from Epitiro, sure. Who interpreted it to produce this report, though? There is a statement about the lack of peer review in some kind of boilerplate. Why are these sorts of documents not peer reviewed? People with more power than us use them to make decisions; this seems really bad.

--
Nathan Ward
On 20/05/2010, at 12:52 PM, Jay Daley wrote:
This implies that all ISPs have their caching DNS resolvers based in Auckland.
I would be very interested to know if that is the case. If anyone could enlighten me, on or off list, I would be very grateful.
To answer your question Jay, I have not heard of any of the major ISPs having their resolvers outside of Auckland.

--
Nathan Ward
Telecom certainly does have caching resolvers in the fine city of Christchurch. They have been active since December, but they are closed resolvers for Telecom customers: ns1 and ns2.xtra.co.nz.

I would also hazard a guess that iHug/Vodafone have servers north of the Bombay Hills as well, given the resolution times seen in Dunedin. I haven't looked at TelstraClear recently enough to know if they do; perhaps they will comment...

Regards
Paul Tinson
Jay Daley wrote:
This implies that all ISPs have their caching DNS resolvers based in Auckland.
I would be very interested to know if that is the case. If anyone could enlighten me, on or off list, I would be very grateful.
As I'm sure you're aware, a recursive nameserver looking up a name starts with the root nameservers and works its way down the tree towards the name you care about. So if you're looking up www.example.net, and we assume there is nothing in your nameserver's cache, you get this sequence of events:

1) Your recursive nameserver looks up www.example.net by asking one of the root nameservers. We're spoilt for choice in New Zealand, with I and F both having instances hosted within New Zealand. As I see it:

a.root-servers.net ;; Query time: 209 msec
b.root-servers.net ;; Query time: 146 msec
c.root-servers.net ;; Query time: 147 msec
d.root-servers.net ;; Query time: 234 msec
e.root-servers.net ;; Query time: 169 msec
f.root-servers.net ;; Query time: 8 msec
g.root-servers.net ;; Query time: 192 msec
h.root-servers.net ;; Query time: 219 msec
i.root-servers.net ;; Query time: 235 msec
j.root-servers.net ;; Query time: 205 msec
k.root-servers.net ;; Query time: 72 msec
l.root-servers.net ;; Query time: 209 msec
m.root-servers.net ;; Query time: 248 msec

2) These then refer us to the gtld-servers.net. There aren't any instances inside NZ AFAIK, so again, as I see it:

a.gtld-servers.net ;; Query time: 231 msec
b.gtld-servers.net ;; Query time: 193 msec
c.gtld-servers.net ;; Query time: 210 msec
d.gtld-servers.net ;; Query time: 227 msec
e.gtld-servers.net ;; Query time: 310 msec
f.gtld-servers.net ;; Query time: 137 msec
g.gtld-servers.net ;; Query time: 172 msec
h.gtld-servers.net ;; Query time: 292 msec
i.gtld-servers.net ;; Query time: 211 msec
j.gtld-servers.net ;; Query time: 157 msec
k.gtld-servers.net ;; Query time: 297 msec
l.gtld-servers.net ;; Query time: 214 msec
m.gtld-servers.net ;; Query time: 141 msec

3) Now we contact the nameservers for example.net (again, from my PoV):

a.iana-servers.net ;; Query time: 139 msec
b.iana-servers.net ;; Query time: 329 msec

Now we're ready to start to fetch the page.

Best case I've got 8ms (f.root) + 137ms (f.gtld) + 139ms (a.iana) = 284ms. Worst case I've got 248 (m.root) + 310ms (e.gtld) + 329ms (a.iana) = 887ms.

And this is for a reasonably well connected site -- nearly .9 of a second before we've *begun* to fetch the page.

Somewhere between about 20% and 70% of that time is spent talking to the GTLD servers. And the NS and A .com/.net glue are cachable for 86400, so once a day, at least one person has to wait almost an entire extra second. If you have 86,400 users that have to waste 1 extra second a day, you've just wasted an entire lifetime.

If you want to improve Internet performance in New Zealand through improving DNS infrastructure, try and get at least one GTLD server instance hosted within New Zealand. The time it takes to go to the US for the GTLD .COM/.NET/.EDU lookups is by far the easiest of those to solve. Interestingly, Afilias's .org and .info infrastructure appears to have an instance within NZ (~5ms away), and the rest of their servers also seem to be fairly close.

Also, you want to try and implement recursive name servers that have large caches, and have some kind of prefetching for commonly hit domains to avoid having end users wait. Try checking your local nameserver infrastructure with http://code.google.com/p/namebench/ to see how well it performs; it's quite eye opening.
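[For anyone wanting to reproduce numbers like those above against their own resolver path, a minimal sketch (assuming dig is installed) that scrapes the ';; Query time' statistics line dig prints for each root server; swap in the gtld-servers.net list to measure step 2:

    import re
    import subprocess

    ROOTS = [f"{c}.root-servers.net" for c in "abcdefghijklm"]

    def query_time_ms(server, name="www.example.net", rtype="NS"):
        # Ask the given server directly and pull the query time out of
        # dig's ";; Query time: N msec" statistics line.
        out = subprocess.run(
            ["dig", f"@{server}", name, rtype, "+tries=1", "+time=3"],
            capture_output=True, text=True,
        ).stdout
        m = re.search(r";; Query time: (\d+) msec", out)
        return int(m.group(1)) if m else None

    for server in ROOTS:
        print(f"{server}  {query_time_ms(server)} msec")
]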
Perry Lorier wrote:
As I'm sure you're aware, a recursive nameserver looking up a name starts with the root nameservers and works its way down the tree towards the name you care about. So if you're looking up www.example.net, and we assume there is nothing in your nameserver's cache, you get this sequence of events:
[Resolution sequence deleted]
Now we're ready to start to fetch the page.
Best case I've got 8ms (f.root) + 137ms (f.gtld) + 139ms (a.iana) = 284ms. Worst case I've got 248 (m.root) + 310ms (e.gtld) + 329ms (a.iana) = 887ms.
And this is for a reasonably well connected site -- nearly .9 of a second before we've *begun* to fetch the page.
You are missing a very important point: this is assuming your cache is totally empty. So you pay this penalty once when the cache is cold. During normal operation, a cache sees a 75-85% hit rate. [1] (A back-of-the-envelope sketch of what that means for average lookup latency follows at the end of this message.)
Somewhere between about 20% and 70% of that time is spent talking to the GTLD servers. And the NS and A .com/.net glue are cachable for 86400, so once a day, at least one person has to wait almost an entire extra second. If you have 86,400 users that have to waste 1 extra second a day, you've just wasted an entire lifetime.
This assertion assumes all entries expire at the same time, and the root zone has a 41-day TTL for the glue records. If 86,400 users wasted one second, that's not a lifetime, that's only a day... unless we are talking about the lifetime of some insects. Joking aside, to waste that amount of time (a day, per user), 236 years have to pass, because you waste one second per day.
If you want to improve Internet performance in New Zealand through improving DNS infrastructure, try and get at least one GTLD server instance hosted within New Zealand. The time it takes to go to the US for the GTLD .COM/.NET/.EDU lookups is by far the easiest of those to solve.
The gain from having an instance of each of .COM/.NET/.EDU in New Zealand is low, because a caching resolver will hit them only when the NS/A records expire. A caching resolver usually queries the authoritative nameservers for the domains the users ask for much more frequently than the "hierarchy" nameservers.

Cheers!

[1] http://pdos.csail.mit.edu/papers/dns:ton.pdf (I didn't find a fresher reference)
--
Sebastian Castro
DNS Specialist
.nz Registry Services (New Zealand Domain Name Registry Limited)
desk: +64 4 495 2337
mobile: +64 21 400535
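[A back-of-the-envelope sketch of Sebastian's hit-rate point, using purely illustrative figures (2 ms to a nearby resolver, 80% hit rate, 250 ms extra for a full recursion on a miss):

    # Expected DNS lookup latency seen by a client, given a cache hit
    # rate. All numbers are illustrative, not measurements.
    hit_rate = 0.80
    cache_rtt_ms = 2.0    # client <-> resolver round trip (cache hit)
    miss_cost_ms = 250.0  # extra cost of a full recursion on a miss

    expected_ms = (hit_rate * cache_rtt_ms
                   + (1 - hit_rate) * (cache_rtt_ms + miss_cost_ms))
    print(f"expected lookup latency: {expected_ms:.1f} ms")  # 52.0 ms

Even at an 80-85% hit rate the misses dominate the average, which is why both resolver placement and the latency to upstream authoritative servers still matter.]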
You are missing a very important point: this is assuming your cache is totally empty. So you pay this penalty once when the cache is cold. During normal operation, a cache sees a 75-85% hit rate. [1]
I started by saying I was assuming a cold cache to show how much of the total (worst case) is dominated by the GTLD servers compared to the roots and the final lookup. A cache miss to the root servers, for example, is less of a concern, because most of the root servers have instances that are close (e.g. within New Zealand or Australia). In comparison there are no "close" GTLD servers: the best one appears to be local to the far end of the Southern Cross cable, and at least one of them (e.gtld) appears to be in Europe, well over 300ms away. Compare with, say, Afilias's .org infrastructure, which appears to have an instance at APE (or at least somewhere very close to me) and a handful of instances in Australia. Their best case is local, and their worst case is on par with the GTLD best case.
Somewhere between about 20% and 70% of that time is spent talking to the GTLD servers. And the NS and A .com/.net glue are cachable for 86400, so once a day, at least one person has to wait almost an entire extra second. If you have 86,400 users that have to waste 1 extra second a day, you've just wasted an entire lifetime.
This assertion assumes all entries expire at the same time, and the root zone has a 41-day TTL for the glue records
The glue for delegates of the root zone appears to be 2 days, whereas the glue for the root zone itself appears to be ~41 days. I was ignoring the root zone's own glue and concentrating solely on the delegations it makes. The delegation/glue records in the GTLD zones appear to be good for 2 days, which puts my math off by a factor of two. Sorry, my bad for not checking and assuming 1 day. These are the lookups that I think we should be targeting for improvement.
If 86,400 users wasted one second, that's not a lifetime, that's only a day... unless we are talking about the lifetime of some insects.
Joking aside, to waste that amount of time (a day, per user), 236 years have to pass, because you waste one second per day.
Hrm, sorry, I wasn't clear with my reasoning: assuming 86,400 users each waste 1 second per day, in aggregate they are wasting 86,400 seconds (a full day of human time) each day. It's an amusing look at how a small, potentially "insignificant" amount of time can easily become significant if it happens frequently enough, and why fixing even the tiniest of delays in a network can quickly add up to significant numbers when spread across a lot of individual users.

While 86,400 users obviously aren't all going to have to wait 1s each for every query, the Internet has a long tail: if your users are going to millions of unique sites every two days, at least one of them for each site is going to have to pay the penalty of waiting for an authoritative answer, probably both the GTLD cost *and* the end nameserver's RTT cost in the same transaction (since if there is nothing for that name in the cache, you're going to have to look up both). After that it will be cached in their local browser cache, and in your recursive nameserver's cache for other users to hit, but when those caches expire, someone has to pay that penalty again.

The other option is of course to pre-warm your cache with names you know are resolved frequently, to make sure that none of your users have to pay these penalties (a sketch follows at the end of this message). But the long tail bites you, as someone is likely to go to a domain that you've not warmed.
If you want to improve Internet performance in New Zealand through improving DNS infrastructure, try and get at least one GTLD server instance hosted within New Zealand. The time it takes to go to the US for the GTLD .COM/.NET/.EDU lookups is by far the easiest of those to solve.
The gain from having an instance of each of .COM/.NET/.EDU in New Zealand is low, because a caching resolver will hit them only when the NS/A records expire. A caching resolver usually queries the authoritative nameservers for the domains the users ask for much more frequently than the "hierarchy" nameservers.
True, however it's going to be hard to get an instance of every single nameserver in common use hosted within New Zealand, and as I mentioned above, the single user who ends up paying the GTLD transaction cost is likely to end up paying the final authoritative nameserver cost as well in the same transaction. If you can't realistically reduce one, reducing the other will provide some help.

I believe getting an instance of a GTLD server here would be a relatively simple improvement giving a reasonably sized win to a large number of queries (although obviously spread over a large number of individual users), and one that can be done once for all ISPs, compared to other solutions such as all ISPs providing extra infrastructure in the South Island to capture DNS traffic and then paying for extra national transit to get that traffic to their international providers in Auckland. In short, IMHO its cost/benefit is likely to be simple compared to other obvious ways of improving DNS resolution times in New Zealand. That's not to say that other methods for improving DNS resolution times in New Zealand aren't a good idea too, just that IMHO this is a simple win.

I could be wrong; I don't know much about how the GTLD infrastructure is run. It appears that it isn't anycast at all, which is what is leading to these large resolution times, and that suggests that getting an anycast instance into NZ might be quite difficult. I don't know, I'm just putting in my 2c.
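[A minimal sketch of the pre-warming idea, assuming a hypothetical list of popular names and a host whose configured resolver is the cache you want to keep warm; run it from cron at an interval shorter than the relevant TTLs:

    import socket

    # Hypothetical list of names your subscribers resolve most often.
    POPULAR = ["www.facebook.com", "www.google.com", "www.trademe.co.nz"]

    # Each lookup goes via the configured recursive nameserver, pulling
    # the answer back into its cache before a real user has to wait.
    for name in POPULAR:
        try:
            socket.getaddrinfo(name, 80)
        except socket.gaierror:
            pass  # currently unresolvable; skip it
]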
Hi Perry,

On 5/21/10 4:24 AM, Perry Lorier wrote:
If you want to improve Internet performance in New Zealand through improving DNS infrastructure, try and get at least one GTLD server instance hosted within New Zealand. The time it takes to go to the US for the GTLD .COM/.NET/.EDU lookups is by far the easiest of those to solve.
My view of the WIX shows that Verisign actually does have the 'b' gTLD servers co-located with the root server 'J' instance there.

*  192.33.14.0    202.7.0.172    0 0 9439 23755 26415 i
*>                202.7.0.172    0 0 9439 23755 26415 i
*  192.58.128.0   202.7.0.172    0 0 9439 23755 26415 i
*>                202.7.0.172    0 0 9439 23755 26415 i
Interestingly, Afilias's .org and .info infrastructure appears to have an instance within NZ (~5ms away), and the rest of their servers also seem to be fairly close.
.ORG servers are present both on APE and WIX. They provide a different set of .ORG servers locally: 'b2.org.afilias-nst.org' and 'a2.org.afilias-nst.org' respectively. Routes are withdrawn from WIX at present, it seems.

You may not see these servers due to 'peering' policy differences. None of the DNS anycast providers are likely to buy transit in NZ; they only provide the services at the IX over peering sessions, which may not gel well with the 'policies' of the established carriers.

-gaurab

--
http://www.gaurab.org.np/
If you want to improve Internet performance in New Zealand through improving DNS infrastructure, try and get at least one GTLD server instance hosted within New Zealand. The time it takes to go to the US for the GTLD .COM/.NET/.EDU lookups is by far the easiest of those to solve.
My view of the WIX shows that Verisign actually does have the 'b' gTLD servers co-located with the root server 'J' instance there.
Ah, awesome! From my viewpoints I don't see them; I'm going to have to harass my upstream to investigate their routing policies (Hi Chris!). UoW also seems to pick an instance announced over KAREN which sends packets somewhere in Europe (Hi Colin!). Other ISPs do seem to be picking up the routes from WIX (just none of the ones I'm on do). It's a much easier solution than I imagined if it's already working for most ISPs: some ISPs just need to investigate their peering preferences to make sure they pick the lowest-latency rather than the cheapest route, and/or fix their peering.
.ORG servers are present both on APE and WIX. They provide a different set of .ORG servers locally: 'b2.org.afilias-nst.org' and 'a2.org.afilias-nst.org' respectively. Routes are withdrawn from WIX at present, it seems.
I do appear to see the Afilias servers at APE, and Afilias seems to have other instances that appear nearby (.au?)
You may not see these servers due to 'peering' policy differences. None of the DNS anycast providers are likely to buy transit in NZ; they only provide the services at the IX over peering sessions, which may not gel well with the 'policies' of the established carriers.
Yup!
On 5/24/10 12:58 AM, Perry Lorier wrote:
I do appear to see the Afilias servers at APE, and Afilias seems to have other instances that appear nearby (.au?)
Yes. There are instances in Sydney and in Perth, and you possibly see them both over APE. {Thanks to Vocus, they seem to do good things other than the bar tab at NZNOG. :-) }

-gaurab

--
http://www.gaurab.org.np/
participants (12)

- Alastair Johnson
- Andrew McMillan
- Drew Broadley
- Gaurab Raj Upadhaya
- Jay Daley
- Joe Abley
- Martin D Kealey
- Nathan Ward
- Paul Tinson
- Perry Lorier
- Philip D'Ath
- Sebastian Castro