Telstraclear international proxy?
hi all,

I am seeing an unusual problem with accessing international sites, which I believe is being caused by a proxy on international HTTP access for Telstraclear cable internet users.

I was attempting to set up virtual hosts on a new webserver (US based) and added entries to /etc/hosts in order to test the sites before I updated the DNS records. However, any access attempt to the new server on port 80 would just return the current production site, which was running on a different server (also in the US). I confirmed with tcpdump that my packets were leaving my computer for the correct IP, and there is no proxy in place on my network.

I believe there is some kind of proxy/cache for international traffic from Telstraclear's cable network. The proxy is set up in such a way that it ignores the destination IP address being requested; instead it takes the requested hostname, resolves it via DNS, and fetches the page from the resolved server IP.

DEMONSTRATION

If I take an international IP and use it for my browser's proxy settings, I find that I can use that IP as a proxy to access any other international website (but not domestic), e.g.:

74.125.67.100 (google.com)
212.58.254.252 (bbc.com)

Using either IP as a proxy will allow me to reach any other international site, but will not allow access to domestic sites (returns a blank page).

ASSISTANCE

Has anyone seen this problem before, or does anyone know of any caching taking place at Telstraclear? It would also be good to know whether it's specific to Telstraclear cable customers, or affects other Telstraclear customers as well. Any advice on how I might be able to query the proxy to identify its owner would be appreciated.

thanks,
jethro

--
Jethro Carr
www.jethrocarr.com/index.php?cms=blog
www.amberdms.com
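To illustrate the mechanics being described, here is a minimal sketch (hypothetical domain and documentation-range IPs, standard library only) of why a hosts-file override is invisible to an intercepting proxy: the override changes only which IP the TCP connection goes to, while the virtual host name still travels inside the HTTP Host header, which is all a name-resolving transparent proxy looks at.

```python
# Sketch: what an intercepting proxy sees when /etc/hosts overrides a name.
# The hosts override changes only the TCP destination; the HTTP request
# itself still names the production domain in its Host header.

def build_request(host: str, path: str = "/") -> bytes:
    """Build the raw HTTP/1.1 request a browser would send."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

# Hypothetical hosts-file override: example.com -> new (test) server
hosts_override = {"example.com": "203.0.113.10"}   # test server IP
production_dns = {"example.com": "198.51.100.7"}   # live DNS answer

dest_ip = hosts_override["example.com"]   # where the TCP SYN actually goes
request = build_request("example.com")

# A transparent proxy that trusts only the Host header re-resolves the
# name with its own DNS resolver and connects there instead:
proxy_fetch_ip = production_dns["example.com"]

print(dest_ip)         # the server the tester intended to reach
print(proxy_fetch_ip)  # the server the proxy actually fetches from
```

The two IPs differ, which is exactly the symptom described: packets leave for the test server, but the page comes back from production.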
Is your hosting provider using Akamai?

Mauricio Freitas
http://www.geekzone.co.nz
http://www.geekzone.co.nz/freitasm
http://www.twitter.com/freitasm
hi all,

Thanks for the replies (on and off-list); several people have confirmed the existence of a proxy for international traffic for Telstraclear customers.

Will talk to Telstraclear's support desk and hopefully get it disabled for my account, which some other people have been able to do. Otherwise will have to work around with some ssh magic.

On Mon, 2009-08-10 at 10:05 +1200, Mauricio Freitas wrote:
Is your hosting provider using Akamai?
No, after talking to them they claim no usage of proxies or networks like Akamai.

regards,
jethro

--
Jethro Carr
www.jethrocarr.com/index.php?cms=blog
www.amberdms.com
Good luck (not meant in a harsh manner).
I remember having issues with this, and being offered a temporary
exclusion (hours, not days) until I could diagnose the problem.
Problem ended up that the website was not giving the correct headers and the
transparent proxy was behaving correctly.
Food for thought.
P.S. Getting through and escalating the problem to the correct people is 3/4
of the trouble.
- Drew
On Mon, Aug 10, 2009 at 10:18 AM, Jethro Carr wrote:
On Mon, 2009-08-10 at 10:18 +1200, Jethro Carr wrote:
hi all,
Thanks for the replies (on and off-list), several people have confirmed the existence of a proxy for international traffic for telstraclear customers.
Will talk to Telstraclear's support desk and hopefully get it disabled for my account, which some other people have been able to do.
Otherwise will have to work around with some ssh magic.
No luck with Telstraclear; it took me quite some time to get to someone who actually knew what a cache was. Whilst the guy was polite enough, he seemed unable to understand that setting cache control on my Apache server isn't going to do anything to help resolve the problem, when the problem is the cache using DNS lookups to resolve the server.

I did manage to determine that the only reasons they will put an exclude into their cache are legal reasons or very specific applications that break the cache; clearly I'm not VIP enough for that service. I was also told that their system does not allow them to disable the cache on a per-customer basis at all.

Will have to do some SSH proxy stuff in order to perform my testing.

Thanks to everyone who replied so promptly! :-)

regards,
jethro

--
Jethro Carr
www.jethrocarr.com/index.php?cms=blog
www.amberdms.com
Jethro Carr wrote:
I know it's a bit late for this issue, but I think it's worth donning my Squid-cache hat anyway and explaining a little about transparent proxy operation that nobody seems to have considered.

If I understand the problem from Jethro's earlier posts, he was using a remote server with virtual hosting and setting the hosts file on his local computer to access it. This has two effects:

- browser visits the remote IP
- browser sends the HTTP headers for whatever virtual-hosted domain was wanted

Which is fine for testing in a LAN where there are no proxies present. However, any middle proxy _cannot_ trust the destination IP; far too many security attacks are based on sending doctored requests to an IP they should not go to.

So, now we get to the transparent proxy. Up until recently they all performed DNS on the HTTP-listed domain, and most redirected the request to the actual destination, so that a) the request gets the right data back via the optimal route, and b) in the case of hijacking attacks the real domain owner gets some warning and the victim hopefully gets to notice something is broken. (Sorry Jethro, a local-machine hosts file is nearly useless when crossing the Internet.)

Recently there have been proof-of-concept and zero-day attacks using http://www.kb.cert.org/vuls/id/435052 so the proxy behaviour is changing. Some are re-writing the URL and Host: headers to raw IPs and passing the request through (bye-bye virtual hosting), some are passing the request to the original IP regardless of the cost, and others are validating the destination + headers and throwing up attack notices if they don't match.

Neither of which helps Jethro, except to inform that Telstra are not at fault. Their proxy is behaving properly in this case and blocking his hijack redirection. Had the proxy worked as assumed, it would possibly have poisoned the cache for every other Telstra client, who would then be shown the test site he wanted hidden.
Best practice for such testing of remote servers needs to involve DNS (possibly on a dummy domain). Or, as he already discovered, a raw IP in the URL gets through just fine, though that may not be possible for virtual hosts.

HTH
AYJ
If I understand the problem from Jethros earlier posts he was using a remote server with virtual hosting and setting the hosts file on his local computer to access it.
This has two effects: - browser visits the remote IP - browser sends the HTTP headers for whatever virtual-hosted domain was wanted.
Which is fine for testing in a LAN where there are no proxies present. However any middle proxy _cannot_ trust the destination IP, far too many security attacks are based on sending doctored requests to an IP they should not go to.
I find this perspective interesting; why is it the transproxy's responsibility to fudge what should be a straightforward transaction, in the interests of security? Is this not a case of security measures breaking a perfectly good way of operating?

I've used methods similar to Jethro's to test websites in the past - being able to locally fudge the results of a DNS entry without affecting the existing production website is actually useful, and using a hosts file seems like the ideal way to do this.

Interested to get perspective from others on this: since when should the ISP break things in the interests of 'security'[1]?

Mark.

[1] By this I mean that it's one thing to block CIFS (which to my mind is not intended for the public Internet anyway), but it's quite another to get in the middle of HTTP requests in such a way that virtual hosting falls over...
Mark Foster wrote:
I find this perspective interesting; why is it the transproxy's responsibility to fudge what should be a straightforward transaction, in the interests of security?
It's an old perspective, well before my time in the industry. As I understand it, it is rooted in the fact that the affected transproxies are not semantically transparent: they cache things and share them with other clients based on URL alone.

Your question runs on the same terms as "why is it the ISP's responsibility to run spam filters?" and "why is it the ISP's responsibility to run anti-virus on web traffic?" Because the dark side of reality does 'impossible' things far too often already: http://www.packetstormsecurity.org/papers/general/whitepaper_httpresponse.pd...

A transproxy is an ISP-provided MITM real-time insertion, saving attackers from all the trouble of needing HTTP response splitting or client DNS resolver poisoning to set up the HTTP cache poisoning. As it stands today they still have to either have already infected the client, or poison the transproxy's DNS resolvers without easily knowing a) where the proxy is, and b) what resolvers it uses.

The proxy behaviour is just one brick in the wall. Unfortunately it's got to act like a glass window as well as concrete.
AYJ
On Fri, 2009-08-14 at 20:10 +1200, TreeNet Admin wrote:
Which is fine for testing in a LAN where there are no proxies present. However any middle proxy _cannot_ trust the destination IP, far too many security attacks are based on sending doctored requests to an IP they should not go to.
hi AYJ,

Why should the proxy care? If the client requests HTTP from some IP, just deliver HTTP from that IP. If there are IPs that the proxy should not be able to access, the proxy should be set up with suitable policies to prevent access...
Some are re-writing the URL and Host: headers to raw IPs and passing it through (bye-bye virtual hosting), some are passing the request to the original IP regardless of the cost, others are validating the destination + headers and throwing up attack notices if they don't match.
Virtual hosting is very common; any proxy that decides to break virtual hosting in the name of security is going to be quickly tossed out (unless everyone suddenly switches to IPv6 and we have more address space than we know what to do with ;-).
Neither of which helps Jethro, except to inform that Telstra are not at fault. Their proxy is behaving properly in this case and blocking his hijack redirection. Had the proxy worked as assumed, it would possibly have poisoned the cache for every other Telstra client who then get shown the test site he wanted hidden.
It would be valid to say the problem is not specific to Telstra; it's more a widespread design flaw with proxies. Surely a better proxy could be implemented to work like this:

- Browser sends request, intercepted by transparent proxy.
- Proxy directs request to the destination IP.
- Proxy returns the results, and caches them, using both the URL and the IP as the cache ID.

This would have the effect of directing requests to the locations requested by the browser, whilst still doing caching without poisoning the cache for other users. Then again, I am by no means a proxy expert; there could be some good reasons for not doing so.
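The keying scheme proposed above can be sketched as a toy model (hypothetical class and documentation-range IPs, not any real proxy's implementation): entries are stored under (URL, destination IP) rather than URL alone, so a tester whose hosts file points a name at a different IP gets a separate cache entry and cannot poison what other users see.

```python
# Toy model of a cache keyed on (URL, destination IP) rather than URL alone.

class DualKeyCache:
    """Hypothetical intercepting cache that honours the client's chosen IP."""

    def __init__(self):
        self._store = {}

    def fetch(self, url, dest_ip, origin_get):
        """Return the response for (url, dest_ip), calling the origin on a miss."""
        key = (url, dest_ip)
        if key not in self._store:
            self._store[key] = origin_get(url, dest_ip)
        return self._store[key]

cache = DualKeyCache()
origin = lambda url, ip: f"content from {ip}"  # stand-in for a real HTTP fetch

# Normal user: DNS resolves example.com to the production IP.
prod_page = cache.fetch("http://example.com/", "198.51.100.7", origin)
# Tester with a hosts override: same URL, different destination IP.
test_page = cache.fetch("http://example.com/", "203.0.113.10", origin)

print(prod_page)  # served from the production entry
print(test_page)  # served from a separate entry - no cross-poisoning
```

The trade-off, of course, is a lower hit rate: the same URL fetched via different IPs (CDNs, round-robin DNS) is cached multiple times.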
Best-practice for such testing of remote servers needs to involve DNS (possibly on a dummy domain). Or as he already discovered: raw-IP in the URL gets through just fine. Though may not be possible for virtual hosts..
Yeah, raw IP is a non-option, and having another dummy DNS name is a bit of an annoyance; the best workaround is probably to just tunnel the traffic directly to the server when testing.

regards,
jethro

--
Jethro Carr
www.jethrocarr.com/index.php?cms=blog
www.amberdms.com
On Fri, Aug 14, 2009 at 1:10 AM, TreeNet Admin wrote:
So, now we get to the transparent proxy.
Up until recently they all performed DNS on the HTTP listed domain and most redirected the request to the actual destination. So that a) the
"All" here is simply not correct. There are many transparent proxy products which use the IP address that the connection was originally destined for, rather than resolving the hostname in the Host header. On some this is configurable; on others it's the only way they operate.

Recently there have been proof-of-concept and zero-day attacks using http://www.kb.cert.org/vuls/id/435052 so the proxy behaviour is changing. Some are re-writing the URL and Host: headers to raw IPs and passing it through (bye-bye virtual hosting), some are passing the
Can you provide even a single example of a transparent proxy changing a Host header to be that of an IP? Not only would that be a complete violation of the RFC, it would also break the vast majority of websites on the Internet.
others are validating the destination + headers and throwing up attack notices if they don't match.
Even that would break more often than it would work, so I can't see any worthwhile proxy vendor using it as an approach - Akamai and the like being the most obvious example of where it would fail.

Scott.
Scott Howard wrote:
"All" here is simply not correct. There are many transparent proxy products which use the IP address that the connection was originally destined for rather than resolving the hostname in the Host header. On some this is configurable, on others it's the only way they operate.
Sorry for being overly inclusive. You are right: there are true transparent proxy agents out there that do not handle HTTP as HTTP. I should have referred more specifically to caching HTTP proxies doing interception (aka "transparent"). And no, my expertise does not include a comprehensive list of all proxies, so really "the ones I know of in this niche".
Can you provide even a single example of a transparent proxy changing a Host header to be that of an IP? Not only would that be a completely violation of the RFC, it would also break the vast majority of websites on the Internet.
The proxy this person was using; I never did find out the name. http://www.squid-cache.org/mail-archive/squid-users/200904/0232.html

The demo he started with was a bit of a misnomer, equating the headers he input into the system directly with those coming out. The private conversation following that public thread involved traces where the user's testing tool was dropping the http://<ip> part of the URL (maybe good). The transparent proxy between it and Squid was taking the IP and entering it in the Host: header (definitely bad). Squid was then left passing a bad Host: header and the partial URL it was given to a third peer web server.
others are validating the destination + headers and throwing up attack notices if they don't match.
Even that would break more often than it would work so I can't see any worthwhile proxy vendor using it as an approach - Akamai and the like being the most obvious example of where it would fail.
Akamai have not proven a problem so far, with a few months of testing under the belt. The worst case is geo-DNS with multi-national ISP customer bases; the solution there seems to be using resolvers with IPs local to each national cluster of clients for the transproxy lookups.

Other ideas are very welcome. The perfect fix is still a mystery. Good ideas will be used; good-sounding ones will be checked and/or tried.

AYJ
On 10/08/2009, at 9:57 AM, Jethro Carr wrote:
I believe there is some kind of proxy/cache for international traffic from Telstraclear's cable network.
I understand TelstraClear, and many other NZ ISPs, have transparent HTTP proxies. Mauricio's question is relevant, since TelstraClear's proxy breaks things if you're using OpenDNS and a CDN like Akamai.
The proxy is set up in such a way, it ignores the destination IP address being requested and instead takes the requested hostname, resolves it via DNS and fetches the page from the resolved server IP.
That's the normal way they work, yes; I've seen this behaviour also. I think your best bet is to use your own proxy (ssh -D) and talk to your international host that way.

Sam.
I'd like to take this opportunity to remind people that the appropriate way to handle situations like this is to log a support call with the ISP's helpdesk before posting support questions to this list. If you believe that it is a problem which affects the NZ operator community as a whole, and you have exhausted your options (including escalations) with the ISP concerned, then it may be appropriate to post.

This is to eliminate the situation where an ISP finds its name/helpdesk practices used in vain throughout a mailing list without being given a chance to put the matter right in private first.

Thanks
Dean

On 10/08/09 9:57 AM, Jethro Carr wrote:
hi all,
I am seeing an unusual problem with accessing international sites which I believe is being caused by a proxy on international HTTP access for Telstraclear cable internet users.
On 10/08/09 9:57 AM, Jethro Carr wrote:
hi all,
I am seeing an unusual problem with accessing international sites which I believe is being caused by a proxy on international HTTP access for Telstraclear cable internet users.
I'm not using cable, but I spent about 3 hours trying to diagnose a problem last week on Telstra. I'd changed a remote website (in Canada) and the results were coming back the same - I cleared the cache, reset Safari, same thing. Tried Firefox, same thing. Eventually I accessed the site using the IP instead of the URL, and it came through updated.

--
Cheers,
Matt Riddell
Director

_______________________________________________
http://www.venturevoip.com/news.php (Daily Asterisk News)
http://www.venturevoip.com/st.php (SmoothTorque Predictive Dialer)
http://www.venturevoip.com/c3.php (ConduIT3 PABX Systems)
The bulk of the caching issues service providers are asked to deal with come about because people don't know how to configure web servers. Meta tags and Expires headers are actually useful, but it seems that there are always a few people who don't use them, even though their content is dynamic.
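As a sketch of the server-side configuration being alluded to (a hypothetical Apache vhost fragment, assuming mod_expires and mod_headers are loaded; note this would not have helped Jethro's particular case, since his problem was DNS-based routing, not cache freshness):

```apache
# Mark dynamic HTML as immediately stale so intermediary caches revalidate.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/html "access plus 0 seconds"
</IfModule>

# Belt-and-braces: explicit Cache-Control for dynamic scripts.
<IfModule mod_headers.c>
    <FilesMatch "\.(php|cgi)$">
        Header set Cache-Control "no-cache, no-store, must-revalidate"
    </FilesMatch>
</IfModule>
```

With headers like these in place, a well-behaved shared cache will refetch dynamic content rather than serve stale copies.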
Gordon Smith wrote:
The bulk of the caching issues service providers are asked to deal with come about because people don't know how to configure web servers.
Meta tags and expires are actually useful, but it seems that there are always a few people that don't use them, even though their content is dynamic.
_______________________________________________ NZNOG mailing list NZNOG(a)list.waikato.ac.nz http://list.waikato.ac.nz/mailman/listinfo/nznog
Sorry for the completely pointless empty post. My cat did it!

--
Steve.
participants (11)
- Dean Pemberton
- Drew Broadley
- Gordon Smith
- Jethro Carr
- Mark Foster
- Matt Riddell
- Mauricio Freitas
- Sam Sargeant
- Scott Howard
- Steve Phillips
- TreeNet Admin