On Wed, 9 Mar 2005, Nathan Ward wrote:

Ok, before I start into this reply I'd like to note that I sent you my initial reply offlist and stuck an [offlist] tag in the subject just so it was even more obvious. Sending your reply back to the mailing list is due either to clueless configuration of your mail client or to being unaware of simple netiquette. However, since we're now forced to turn this into an onlist discussion...
>> We tried. It's easier to deal with the occasional network that breaks.
> Really? I imagine that there would be lots of breakage in this area, especially during failures (yours, or others') or routing that lacks clue.
I think there are something like 4 networks currently configured for bypassing. It's not that common, since most operators keep an eye on traffic flows on their international links, and would notice the additional load if (for example) different length prefixes were being advertised into the different routing domains.
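For the curious, the bypass check itself is trivial. Here's a rough Python sketch of the idea (the prefixes and names below are placeholders, not our actual list): outbound port-80 flows to the listed destination networks skip the cache redirect and go straight out.

    import ipaddress

    # Placeholder prefixes only - the real list is a handful of problem networks.
    BYPASS_NETWORKS = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def redirect_to_cache(dst_ip):
        """Return True if an outbound port-80 flow should be steered to a cache."""
        dst = ipaddress.ip_address(dst_ip)
        return not any(dst in net for net in BYPASS_NETWORKS)

    # redirect_to_cache("192.0.2.10")  -> False (bypassed)
    # redirect_to_cache("203.0.113.5") -> True  (redirected)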
> When you are caching fully transparently, do the Foundrys only send the HTTP return packets to the caches if they are part of a cached session?
Yes.
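To spell out what "part of a cached session" means: the switch keeps a table of the flows it has redirected, and only server-to-client packets matching one of those flows get handed to a cache; everything else is forwarded normally. A rough Python sketch of that behaviour (names and the tuple layout are illustrative, not Foundry's actual implementation):

    # Flows the switch has redirected, keyed by the client/server 4-tuple.
    cache_sessions = set()

    def handle_client_packet(client_ip, client_port, server_ip, server_port):
        # Client -> server packet on port 80 that we chose to redirect.
        cache_sessions.add((client_ip, client_port, server_ip, server_port))
        return "to-cache"

    def handle_return_packet(client_ip, client_port, server_ip, server_port):
        # Server -> client packet: only steered to the cache if the flow is
        # already in the session table.
        if (client_ip, client_port, server_ip, server_port) in cache_sessions:
            return "to-cache"
        return "forward-normally"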
> How do you detect that there is a problem network? That relies on customers, right? Do they really call and complain enough? I know my Mother would just put any failures like that down to "The Internet is broken again".
As with a lot of non-physical-layer network faults, customers really are the best way to detect them.
> Why don't you cache semi-transparently? (i.e., connections to web servers come from the external IP address of the proxy server.) I'm not aware of any real-life cases of that breaking things, and as far as I am aware, it's what most NZ providers do. Feel free to prove me wrong here, of course.
Oh, there are many, many cases where semi-transparent caching breaks things. Before source address spoofing was enabled, the bypass list ran to hundreds of entries: sites that use access lists tied to customers' static IPs, sites that expect connections redirected to a secure portal to come from the same origin address, sites that don't like a customer's source IP changing as connections are load balanced across multiple caches...

Also, the cache server IPs end up in blacklists, and abuse tracking is harder (ever tried keeping copies of cache logs for a big network? They're vast) - and very few places look at the X-Forwarded-For header.

Plus (yes, there's more!) the caches get DDoSed as their IPs become visible - when someone posts to a forum and the IP is logged, it becomes a target. I'd much rather have to filter out traffic destined for a cable modem customer than a cache server.
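For what it's worth, honouring X-Forwarded-For at the origin end is only a few lines of code; the problem is that hardly anyone bothers. A minimal Python sketch, assuming a single trusted proxy in front of the server (the function name is made up for illustration):

    def real_client_ip(headers, peer_ip):
        # X-Forwarded-For: client, proxy1, proxy2, ...
        # Take the left-most address as the original client; fall back to the
        # socket peer, which behind a semi-transparent cache is the cache's IP.
        xff = headers.get("X-Forwarded-For")
        if xff:
            return xff.split(",")[0].strip()
        return peer_ip

--David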