On Wed, Mar 28, 2012 at 01:22:56PM +1300, Sebastian Castro wrote:
On 28/03/12 13:01, Cameron Bradley wrote:
providers. In the cases I'm seeing, records with TTLs of 14400 are being handed out with TTLs of 86400 by the service provider's servers. For example, if a record has a short TTL (300 seconds), it is "stored" in the cache with a minimum TTL (1 hour) set by the operator. The same applies to records with large TTLs (a few days), which are put into the cache with a 1-day TTL.
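The behaviour described above can be sketched as a simple min/max clamp on the cached TTL. This is a minimal illustration, not any particular resolver's implementation; the 3600 and 86400 bounds are taken from the examples in this thread, and note that reproducing the 14400-to-86400 case Cameron describes requires the operator to have set the floor all the way up to one day.

```python
def clamp_ttl(ttl: int, min_ttl: int = 3600, max_ttl: int = 86400) -> int:
    """Return the TTL the cache will actually honour,
    forced into the operator's [min_ttl, max_ttl] window.
    The default bounds are illustrative assumptions."""
    return max(min_ttl, min(ttl, max_ttl))

print(clamp_ttl(300))                     # 300s record forced up to the 1-hour floor -> 3600
print(clamp_ttl(14400))                   # mid-range TTL passes through unchanged -> 14400
print(clamp_ttl(3 * 86400))               # multi-day TTL capped at one day -> 86400
print(clamp_ttl(14400, min_ttl=86400))    # a 1-day floor reproduces 14400 -> 86400
```

With sane bounds a clamp only trims the extremes; the complaint in this thread is about a floor set so high that it overrides reasonable zone-operator choices.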
I think pushing 5-second TTLs up to 60 seconds, or maybe even 60 up to 300, could be OK, especially in a home setting, where a transparent proxy may be catching your traffic anyway. But he's complaining about 14400 being forced up to 86400, which is completely different.
I'm not aware of how common this practice is, but the argument I've heard is that it eases the load on the cache in the case of low TTLs.
How common is this practice, and what are the benefits to the SP from doing it? From my perspective there is also the concern that this, for all intents and purposes, appears to be bad practice, and serves to "break" DNS itself.
Effectively, a good DNS administrator wants to control their TTLs at will (we do!) based on a rational process. For example, CDN operators use low TTLs to react quickly to outages. But there is also a lot of breakage out there caused by non-rational decisions. If you combine a low TTL with a misconfigured nameserver, you can easily get a query storm in your cache.
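For concreteness, this kind of cache-side override is typically a one-line resolver setting. In Unbound, for example, the cache-min-ttl and cache-max-ttl options do exactly what this thread describes (the values below are illustrative, not a recommendation):

```
server:
    # Force cached TTLs into an operator-chosen window.
    # A floor this high is the sort of setting being complained about.
    cache-min-ttl: 3600
    cache-max-ttl: 86400
```

The Unbound documentation itself warns that raising cache-min-ttl serves data for longer than the zone owner intended, which is precisely the objection raised here.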
Arguably, anycast is a much better solution for reacting quickly, and DNS is often used for load balancing across separate regions. Google having a TTL of 60 seconds seems to encourage a lot of queries; Facebook, with 120 seconds, is at least slightly better. If anycast were more widely deployed for CDNs, we wouldn't have this uproar about CDN answers not being linked to the client's source IP address and traffic going long distances when shared DNS caches are used. And realistically, if recursive DNS is anycast across many separate locations, the cache hit rate will be lower without shared caching, which hurts smaller sites more than popular ones and creates even more of a digital divide. Ben.