Mark Foster wrote:
If I understand the problem from Jethro's earlier posts, he was using a remote server with virtual hosting and setting the hosts file on his local computer to access it.
This has two effects:
 - the browser connects to the remote IP
 - the browser sends the HTTP Host header for whatever virtual-hosted domain was wanted.
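For illustration, a minimal sketch of what that amounts to on the wire (the IP and domain here are hypothetical placeholders): the TCP connection goes to the overridden IP, while the Host header still names the virtual-hosted domain, so the remote server selects the intended vhost.

  # Sketch of the hosts-file trick at the HTTP level (IP and domain
  # are made-up placeholders). The connection is made to the staging
  # IP, but the Host header carries the virtual-hosted domain name.
  import http.client

  STAGING_IP = "203.0.113.10"   # where the hosts file points the domain
  VHOST = "www.example.com"     # the virtual-hosted domain under test

  conn = http.client.HTTPConnection(STAGING_IP, 80, timeout=10)
  conn.request("GET", "/", headers={"Host": VHOST})
  resp = conn.getresponse()
  print(resp.status, resp.reason)
  conn.close()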
That is fine for testing in a LAN where there are no proxies present. However, any intermediate proxy _cannot_ trust the destination IP; far too many security attacks are based on sending doctored requests to an IP they should not go to.
I find this perspective interesting; why is it the transproxy's responsibility to fudge what should be a straightforward transaction, in the interests of security?
It's an old perspective, well before my time in the industry. As I understand it, it is rooted in the fact that the affected transproxies are not semantically transparent: they cache things and share them with other clients based on the URL alone.

Your question runs along the same lines as "why is it the ISP's responsibility to run spam filters?" and "why is it the ISP's responsibility to run anti-virus on web traffic?" Because the dark side of reality does 'impossible' things far too often already; see http://www.packetstormsecurity.org/papers/general/whitepaper_httpresponse.pd...

A transproxy is an ISP-provided real-time MITM insertion point, saving attackers all the trouble of needing HTTP response splitting or client DNS resolver poisoning to set up an HTTP cache poisoning. As it stands today, they still have to either have already infected the client or poison the transproxy's DNS resolvers, without easily knowing a) where the proxy is, and b) what resolvers it uses. The proxy behaviour is just one brick in the wall. Unfortunately it's got to act like a glass window as well as concrete.
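To make that concrete, here is a rough sketch (my own illustration, not any particular proxy's actual code) of the kind of cross-check an intercepting proxy can apply before trusting a Host header: resolve the claimed hostname itself, and only accept the request if it resolves to the IP the client actually connected to.

  # Sketch (illustrative, not real proxy code): before caching a reply
  # under the URL named in the Host header, verify that the hostname
  # actually resolves to the destination IP the client connected to.
  import socket

  def host_matches_destination(host_header: str, dest_ip: str) -> bool:
      try:
          addrs = {info[4][0] for info in socket.getaddrinfo(host_header, 80)}
      except socket.gaierror:
          return False  # unresolvable Host header: do not trust it
      return dest_ip in addrs

  # A doctored request: Host claims www.example.com, but the TCP
  # connection went to an unrelated IP. Caching that reply under
  # http://www.example.com/ would poison the cache for every client.
  print(host_matches_destination("www.example.com", "198.51.100.7"))

Note that this same check is exactly what breaks the hosts-file testing trick: the client connected to the overridden IP, but the proxy's own resolver returns the real one, so the request looks indistinguishable from an attack.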
Is this not a case of security measures breaking a perfectly good way of operating?
I've used methods similar to Jethro's to test websites in the past - being able to locally fudge the result of a DNS lookup without affecting the existing production website is genuinely useful, and using a hosts file seems like the ideal way to do this.
I'm interested to get perspectives from others on this: since when should the ISP break things in the interests of 'security'[1]?
Mark.
[1] By this I mean that it's one thing to block CIFS (which to my mind is not intended for the public internet anyway), but it's quite another to get in the middle of HTTP requests in such a way that virtual hosting falls over...
AYJ