On Fri, 2009-08-14 at 20:10 +1200, TreeNet Admin wrote:
If I understand the problem from Jethro's earlier posts, he was using a remote server with virtual hosting and editing the hosts file on his local computer to access it.
This has two effects:
 - the browser connects to the remote IP
 - the browser sends the HTTP headers (notably Host:) for whatever virtual-hosted domain was wanted.
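To make those two effects concrete, here is a rough Python sketch (the IP, domain, and function name are all made up for illustration) of what the browser effectively puts on the wire after a hosts-file override. Note the destination IP never appears in the request itself; it is only where the TCP connection goes, while Host: still names the virtual-hosted domain:

```python
# Illustrative sketch: the request a browser sends after a hosts-file
# override. The IP decides where the TCP connection goes; the Host:
# header decides which virtual host the server selects.

def build_request(ip: str, vhost: str, path: str = "/") -> str:
    """Build the HTTP/1.1 request that would be sent to `ip` while
    asking for the virtual host `vhost`. `ip` is intentionally unused
    in the message body: it never appears in the request itself."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {vhost}\r\n"      # this is what virtual hosting keys on
        "Connection: close\r\n"
        "\r\n"
    )

req = build_request("203.0.113.10", "staging.example.com")
print(req.splitlines()[1])  # Host: staging.example.com
```

An intercepting proxy sees exactly this pair — a destination IP and a Host: header — and has no proof the two belong together, which is the crux of the rest of the thread.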
Which is fine for testing in a LAN where there are no proxies present. However, any intermediate proxy _cannot_ trust the destination IP; far too many security attacks are based on sending doctored requests to an IP they should not go to.
hi AYJ, Why should the proxy care? If the client requests HTTP from some IP, just deliver HTTP from that IP. If there are IPs that the proxy should not be able to access, the proxy should be set up with suitable policies to prevent access...
So, now we get to the transparent proxy.
Up until recently they all performed a DNS lookup on the domain listed in the HTTP request and most redirected the request to the actual destination. So that a) the request gets the right data back via the optimal route, and b) in the case of a hijacking attack the real domain owner gets some warning + the victim hopefully gets to notice something is broken. (Sorry Jethro, a local-machine hosts file is nearly useless when crossing the Internet.)
Recently there have been proof-of-concept and zero-day attacks exploiting http://www.kb.cert.org/vuls/id/435052 so the proxy behaviour is changing.
Some are re-writing the URL and Host: headers to raw IPs and passing the request through (bye-bye virtual hosting), some are passing the request to the original IP regardless of the cost, and others are validating the destination against the headers and throwing up attack notices if they don't match.
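The third behaviour can be sketched roughly like this in Python (a toy illustration, not any particular proxy's code; the resolver is injected so the check works without live DNS, and the host names and addresses are invented):

```python
# Sketch of a proxy validating that the intercepted destination IP is
# actually one of the addresses DNS returns for the Host: header.
from typing import Callable, Iterable

def host_matches_destination(host: str, dst_ip: str,
                             resolve: Callable[[str], Iterable[str]]) -> bool:
    """Return True only if dst_ip is among the addresses for host.
    A resolution failure is treated as a mismatch (fail closed)."""
    try:
        return dst_ip in set(resolve(host))
    except OSError:
        return False

# Toy resolver standing in for real DNS:
_fake_dns = {"www.example.com": ["192.0.2.1", "192.0.2.2"]}
resolve = lambda h: _fake_dns.get(h, ())

print(host_matches_destination("www.example.com", "192.0.2.1", resolve))   # True
print(host_matches_destination("www.example.com", "203.0.113.9", resolve)) # False: possible hijack
```

In a real deployment the resolver would be the proxy's own DNS client, which is exactly why a hosts-file entry on the client machine has no effect on the outcome.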
Virtual hosting is very common; any proxy that decides to break virtual hosting in the name of security is going to be quickly tossed out. (Unless everyone suddenly switches to IPv6 and we have more address space than we know what to do with ;-)
Neither of which helps Jethro, except to inform him that Telstra are not at fault. Their proxy is behaving properly in this case and blocking his hijack redirection. Had the proxy worked as he assumed, it could have poisoned the cache for every other Telstra client, who would then be shown the test site he wanted hidden.
It would be valid to say the problem is not specific to Telstra; it is more a widespread design flaw with proxies. Surely a better proxy could be implemented to work as follows:
 * Browser sends request, intercepted by transparent proxy.
 * Proxy directs the request to the destination IP.
 * Proxy returns the results, and caches them, using both the URL and the IP as the cache key.
This would have the effect of directing requests to the locations requested by the browser, whilst still doing caching without poisoning the cache for other users. Then again, I am by no means a proxy expert; there could be some good reasons for not doing so.
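The (URL, IP) keying suggested above can be shown in a few lines of Python (purely a sketch of the idea, with made-up URLs and addresses): responses fetched from different origin IPs never share a cache entry, so one client's hosts-file redirection cannot poison what other clients see.

```python
# Minimal sketch of a cache keyed on (URL, origin IP) rather than URL alone.
cache: dict = {}

def cache_store(url: str, origin_ip: str, body: bytes) -> None:
    cache[(url, origin_ip)] = body

def cache_lookup(url: str, origin_ip: str):
    return cache.get((url, origin_ip))

# Same URL, two different origin IPs: two independent entries.
cache_store("http://example.com/", "203.0.113.10", b"test site")
cache_store("http://example.com/", "192.0.2.1", b"real site")
print(cache_lookup("http://example.com/", "192.0.2.1"))  # b'real site'
```

The trade-off, presumably, is a lower hit rate: clients whose DNS answers differ (CDNs, round-robin DNS) would no longer share cached copies of the same URL.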
Best practice for such testing of remote servers needs to involve DNS (possibly on a dummy domain). Or, as he already discovered, a raw IP in the URL gets through just fine — though that may not be possible for virtual hosts.
Yeah, raw IP is a non-option and setting up another dummy DNS entry is a bit of an annoyance; the best workaround is probably to just tunnel the traffic directly to the server when testing. regards, jethro -- Jethro Carr www.jethrocarr.com/index.php?cms=blog www.amberdms.com