
Scott Howard wrote:
> On Fri, Aug 14, 2009 at 1:10 AM, TreeNet Admin
> <mailto:admin(a)treenetnz.com> wrote:
>> So, now we get to the transparent proxy.
>> Up until recently they all performed DNS on the HTTP listed domain and
>> most redirected the request to the actual destination. So that a) the
> "All" here is simply not correct. There are many transparent proxy
> products which use the IP address that the connection was originally
> destined for rather than resolving the hostname in the Host header. On
> some this is configurable, on others it's the only way they operate.
Sorry for being overly inclusive. You are right, there are true
transparent proxy agents out there that do not handle HTTP as HTTP. I
should have referred more specifically to caching HTTP proxies doing
interception (aka "transparent"). And no, my expertise does not cover a
comprehensive list of all proxies, so really "the ones I know of in this
niche".
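For illustration, here is a minimal sketch of how an intercepting proxy can recover the connection's original destination instead of resolving the Host header. This assumes Linux interception via iptables REDIRECT, where netfilter exposes the pre-NAT destination through the SO_ORIGINAL_DST socket option; the helper name is my own.

```python
import socket
import struct

# Linux netfilter constants (assumption: iptables REDIRECT interception).
SOL_IP = 0
SO_ORIGINAL_DST = 80

def parse_sockaddr_in(raw):
    """Decode the struct sockaddr_in blob that SO_ORIGINAL_DST returns:
    2-byte family (host order), 2-byte port (network order), 4-byte IPv4
    address, followed by zero padding."""
    family, = struct.unpack_from("=H", raw, 0)
    if family != socket.AF_INET:
        raise ValueError("not an IPv4 sockaddr")
    port, = struct.unpack_from("!H", raw, 2)
    ip = socket.inet_ntoa(raw[4:8])
    return ip, port

# On a connection accepted by the intercepting proxy:
#   raw = conn.getsockopt(SOL_IP, SO_ORIGINAL_DST, 16)
#   ip, port = parse_sockaddr_in(raw)
# The proxy can then connect onward to (ip, port) without ever doing a
# DNS lookup on the client's Host header.
```

A proxy built this way is in the "use the original destination IP" camp described above; the DNS-based camp instead resolves the Host header and connects to whatever that returns.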
>> Recently there have been proof-of-concept and zero-day attacks using
>> http://www.kb.cert.org/vuls/id/435052 so the proxy behaviour is
>> changing. Some are re-writing the URL and Host: headers to raw IPs and
>> passing it through (bye-bye virtual hosting), some are passing the
> Can you provide even a single example of a transparent proxy changing a
> Host header to be that of an IP? Not only would that be a complete
> violation of the RFC, it would also break the vast majority of websites
> on the Internet.
The proxy this person was using; I never did find out its name.
http://www.squid-cache.org/mail-archive/squid-users/200904/0232.html
The demo he started with was a bit of a misnomer, equating the headers
he input into the system directly with those coming out. The private
conversation following that public thread involved traces where the
user's testing tool was dropping the http://<ip> part of the URL (maybe
good). The transparent proxy between it and Squid was taking the IP and
entering it into the Host: header (definitely bad). Squid was then left
passing a bad Host: header and the partial URL it was given on to a
third peer web server.
>> others are validating the destination + headers and throwing up attack
>> notices if they don't match.
> Even that would break more often than it would work, so I can't see any
> worthwhile proxy vendor using it as an approach - Akamai and the like
> being the most obvious example of where it would fail.
Akamai have not proven a problem so far, with a few months of testing
under the belt. The worst case is geo-DNS with multi-national ISP
customer bases. The solution there seems to be using resolvers with an
IP local to each national cluster of clients for the transproxy lookups.

Other ideas are very welcome. The perfect fix is still a mystery. Good
ideas will be used. Good-sounding ones will be checked and/or tried.

AYJ
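For concreteness, a minimal sketch of the destination-vs-Host validation approach being discussed (the function name and the injectable resolver are hypothetical, not any vendor's API): the proxy accepts the request only if the original destination IP is consistent with what the Host header resolves to. The injectable resolver is also where the geo-DNS workaround fits, since the lookup can be done against a resolver local to the client's region.

```python
import ipaddress

def host_matches_destination(host_header, original_dst_ip, resolve):
    """Return True if the intercepted connection's original destination
    IP is consistent with the client's Host: header.

    resolve(name) -> list of IP strings, and may raise OSError on
    failure; injecting it keeps the check testable and lets the proxy
    use a resolver local to the client's region.
    """
    host = host_header.split(":", 1)[0]  # drop :port (IPv6 literals omitted for brevity)
    try:
        # Raw-IP Host header: must equal the destination exactly.
        return ipaddress.ip_address(host) == ipaddress.ip_address(original_dst_ip)
    except ValueError:
        pass  # not an IP literal; fall through to DNS
    try:
        addrs = resolve(host)
    except OSError:
        return False  # unresolvable Host: treat as a mismatch
    return original_dst_ip in addrs
```

On a mismatch the proxy would raise the "attack notice" mentioned above rather than forward the request. The geo-DNS failure mode is visible here too: if resolve() runs far from the client, CDN hostnames may return a different IP set than the client saw, producing a false mismatch.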