If I understand the problem from Jethro's earlier posts, he was using a remote server with virtual hosting and editing the hosts file on his local computer to access it.
This has two effects:
 - the browser connects to the remote IP
 - the browser sends the HTTP Host header for whatever virtual-hosted domain was wanted.
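To make that concrete, here is a rough Python sketch of what the browser effectively does once the hosts file override is in place. The IP and hostname are made up for illustration; substitute the real staging server and virtual host.

# A minimal sketch of what the hosts-file trick achieves:
# connect to a chosen IP, but ask for the virtual-hosted site by name.
import http.client

# Connect to the server by IP (what the hosts override makes the browser do)...
conn = http.client.HTTPConnection("192.0.2.10", 80, timeout=10)
# ...but send the Host header for the virtual-hosted domain we want to test.
conn.request("GET", "/", headers={"Host": "www.example.com"})
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()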
That is fine for testing on a LAN where no proxies are present. However, any intermediate proxy _cannot_ trust the destination IP: far too many security attacks are based on sending doctored requests to an IP they should not go to.
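In other words, an intercepting proxy is pushed towards checking that the Host header actually resolves to the IP the client was connecting to. The following is only an illustrative sketch of that check (not Squid's actual code, and the names are made up); with a local hosts-file override in play, the proxy's own DNS lookup will not include the overridden IP, so the request looks like Host forgery and gets refused.

# Rough sketch of a Host-vs-destination-IP check an intercepting
# proxy might perform. Illustrative only.
import socket

def host_matches_destination(host_header: str, dest_ip: str) -> bool:
    """Return True if DNS for the Host header includes the IP the
    client was originally connecting to."""
    try:
        infos = socket.getaddrinfo(host_header, None)
    except socket.gaierror:
        return False
    resolved_ips = {info[4][0] for info in infos}
    return dest_ip in resolved_ips

# Fails when the client is using a hosts-file override the proxy
# cannot see, because the proxy resolves the real production IP.
print(host_matches_destination("www.example.com", "192.0.2.10"))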
I find this perspective interesting; why is it the transparent proxy's responsibility to fudge what should be a straightforward transaction, in the interests of security? Is this not a case of security measures breaking a perfectly good way of operating?

I've used methods similar to Jethro's to test websites in the past - being able to locally override the result of a DNS lookup without affecting the existing production website is genuinely useful, and a hosts file seems like the ideal way to do it.

Interested to get perspective from others on this: since when should the ISP break things in the interests of 'security'[1]?

Mark.

[1] By this I mean that it's one thing to block CIFS (which to my mind is not intended for the public internet anyway), but it's quite another to get in the middle of HTTP requests in such a way that virtual hosting falls over...