Re: [offlist] Re: [nznog] Ispmap / wix / ape - Slightly OT
Nathan Ward wrote:
Simon Lyall wrote:
It breaks a few places that use the IP address to auth HTTP sessions; apparently some sites don't believe these weird cookie thingies are ready for prime time.
Hmm. This would be a problem if your requests for the same site were going to different proxy servers. I don't think that happens with Foundrys, by default. If it does, it's easily 'fix'-able.
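(For anyone following along: the "same site, same proxy" behaviour comes from hashing on the destination address when picking a cache. A toy sketch of the idea in Python - the cache names and the hash are invented for illustration, this is not the actual Foundry algorithm:)

    # Toy illustration only: picking a cache purely off the destination IP
    # means every customer's requests for a given site exit from the same
    # cache box, so the far-end server keeps seeing one stable source IP.
    import hashlib

    CACHES = ["cache1.isp.example", "cache2.isp.example", "cache3.isp.example"]

    def pick_cache(dest_ip):
        # Hash only the destination; the client address plays no part.
        digest = hashlib.md5(dest_ip.encode()).digest()
        return CACHES[digest[0] % len(CACHES)]

    print(pick_cache("192.0.2.80"))   # same cache every time for this site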
On Wed, 9 Mar 2005 11:34:09 +1300 (NZDT), "Alastair Johnson" wrote:
We had an issue at Maxnet with a site that broke in this manner. Essentially they 'authenticated' the customer off the IP address of the proxy, which, using WCCP, was reasonably static unless a proxy failed. However, after the initial authentication it then redirected them to an SSL site on 443/tcp, which obviously was not going through the cache farm, and the customer was (correctly) not configured to use a cache.
So the request suddenly came from the customer's IP address. Result: The site would refuse to allow them in.
A stupid way of doing it, and we solved the problem by excluding the site from the cache. For the life of me, I can't remember what it was, but I recall the site was black and popular. I think it might have been some games thing.
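To make the failure mode concrete, here is a rough sketch (invented names and documentation addresses, not the actual site) of the kind of IP-pinned 'auth' described above, and why the direct 443/tcp request falls over:

    # Illustrative only: the session is pinned to the IP seen at login.
    sessions = {}  # session id -> address recorded at login

    def login(session_id, client_ip):
        # On port 80 the request arrived via the cache farm, so the address
        # recorded here is the proxy's, not the customer's.
        sessions[session_id] = client_ip

    def check(session_id, client_ip):
        return sessions.get(session_id) == client_ip

    login("abc123", "192.0.2.10")           # cache farm address at login
    print(check("abc123", "192.0.2.10"))    # True  - still coming via the cache
    print(check("abc123", "198.51.100.7"))  # False - direct SSL from the
                                            # customer's own IP; access refused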
The other negative of Non-Transparent-Proxying that this brings to mind is the relative difficulty that ISPs have identifying persons who abuse web based services (websites, forums, et al) - The logs which collect 'cache?.xyz.co.nz' aren't exactly going to give the ISP information used to resolve user accounts and specific individuals. And most ISPs with the above issue don't log their proxies either, or if they do, can only log for a short period of time due to the sheer volume of log data created.

So TransProxying, or none at all, would be my personal choice. (I hear that in at least one case the amount of $ saved by transproxying was being re-spent in maintaining the boxes themselves - so they were pulled out due to lack of value vs inconvenience. And they're now saving money.)

Mark. All IMHO, as usual.
Mark Foster said:
The other negative of Non-Transparent-Proxying that this brings to mind is the relative difficulty that ISPs have identifying persons who abuse web based services (websites, forums, et al) - The logs which collect 'cache?.xyz.co.nz' aren't exactly going to give the ISP information used to resolve user accounts and specific individuals.
You mean like in this case? http://www.nzherald.co.nz/index.cfm?ObjectID=3559088 -- Juha
Juha Saarinen wrote:
Mark Foster said:
The other negative of Non-Transparent-Proxying that this brings to mind is the relative difficulty that ISPs have identifying persons who abuse web based services (websites, forums, et al) - The logs which collect 'cache?.xyz.co.nz' aren't exactly going to give the ISP information used to resolve user accounts and specific individuals.
You mean like in this case?
"Smith also promised TelstraClear would endeavour to discover the identity of the abuser but said it would be difficult thanks to the volume of internet logs generated by customers." Has he never heard of `grep -r` ? It can't be that hard, I have to find malicious content in logs numerous times and it takes me SFA time. They know it's from a Christmas Island domain, so he will obviously browse the site before he links the images onto the forums and Christmas Island domains aren't the most popular. I don't see the problem here. - Drew
Juha Saarinen wrote:
Mark Foster said:
The other negative of Non-Transparent-Proxying that this brings to mind is the relative difficulty that ISPs have identifying persons who abuse web based services (websites, forums, et al) - The logs which collect 'cache?.xyz.co.nz' aren't exactly going to give the ISP information used to resolve user accounts and specific individuals.
You mean like in this case?
"Smith also promised TelstraClear would endeavour to discover the identity of the abuser but said it would be difficult thanks to the volume of internet logs generated by customers."
Has he never heard of `grep -r`? It can't be that hard; I have to find malicious content in logs numerous times and it takes me SFA time. They know it's from a Christmas Island domain, so he will obviously have browsed the site before he linked the images onto the forums, and Christmas Island domains aren't the most popular. I don't see the problem here.
Any ISPs who have run semi-transparent caching like to stick up their hands as to the sheer _volume_ of logging data collected relative to their customer base? And indicate exactly how long they archive those logs for, and how accessible they are???

And to use an obvious example, abusive email sent via Hotmail... One case I worked on years ago required me to find an individual line entry corresponding to the act of clicking on 'send' within a Hotmail window, where at least a half dozen of our clients were using the same proxy, and talking to Hotmail, at the same time... This is not going to help identify individual abusers! And if you're an ISP in the top few, I imagine you're going to be handling a large number of simultaneous requests. That's huge amounts of data, and increased ambiguity. More costs.

As I said earlier in this thread, it has been demonstrated that Transproxies can cost more than they save... and the TCL article quoted is a perfect example of just some of the complications. Thank you Juha. :-) [/stir]

Mark.
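(To illustrate the ambiguity Mark describes - again assuming Squid-style logs, with placeholder values for the path, URL fragment and timestamp - narrowing to the known 'send' time still leaves a set of candidates rather than one customer:)

    # Find every client that hit the webmail URL around the time the abusive
    # message was sent. With several customers on the same proxy at once,
    # the result is a set of candidates, not an individual.
    TARGET = "hotmail.com"       # substring of the compose/send URL
    SEND_TIME = 1110300000.0     # epoch second the message went out (placeholder)
    WINDOW = 60                  # seconds of slop either side

    candidates = set()
    with open("/var/log/squid/access.log") as log:
        for line in log:
            fields = line.split()
            if TARGET in fields[6] and abs(float(fields[0]) - SEND_TIME) <= WINDOW:
                candidates.add(fields[2])

    print(candidates)            # more than one entry = ambiguity, as above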
Mark Foster wrote:
Any ISPs who have run semi-transparent caching like to stick up their hands as to the sheer _volume_ of logging data collected relative to their customer base? And indicate exactly how long they archive those logs for, and how accessible they are??? And to use an obvious example, abusive email sent via Hotmail... One case I worked on years ago required me to find an individual line entry corresponding to the act of clicking on 'send' within a Hotmail window, where at least a half dozen of our clients were using the same proxy, and talking to Hotmail, at the same time... This is not going to help identify individual abusers! And if you're an ISP in the top few, I imagine you're going to be handling a large number of simultaneous requests. That's huge amounts of data, and increased ambiguity. More costs. As I said earlier in this thread, it has been demonstrated that Transproxies can cost more than they save... and the TCL article quoted is a perfect example of just some of the complications. Thank you Juha. :-) [/stir]
The Wooden Spoon Award goes to Mark... On a more serious note, this may have operational implications for when the new anti-spam law comes into effect -- the language in the proposal is typically Yes, Minister woolly, but it appears that ISPs will be required to act on customers' spam complaints before the DIA steps in. Presumably, machine parsing of headers isn't going to work for cases like the above, so manual eyeballing of the offending message will be required. -- Juha
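(For what it's worth, the "machine parsing of headers" part is the easy bit - e.g. with Python's standard email module; the filename below is a placeholder. The problem is that for webmail abuse the Received chain usually stops at the webmail provider, which is why a human still has to read the complaint:)

    # Walk the Received headers of a complained-about message. For mail sent
    # through a webmail service the chain typically ends at that service,
    # which is why header parsing alone doesn't identify the customer.
    from email import message_from_file

    with open("complaint.eml") as fh:
        msg = message_from_file(fh)

    for hop in msg.get_all("Received", []):
        print(hop.replace("\n", " ").strip())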
Mark Foster wrote:
Any ISPs who have run semi-transparent caching like to stick up their hands as to the sheer _volume_ of logging data collected relative to their customer base? And indicate exactly how long they archive those logs for, and how accessible they are???
Lots. No long. Wouldn't this logging come under the requirement for interception points in the Telecommunications Bill?
Jeremy Brooking wrote:
Mark Foster wrote:
Any ISPs who have run semi-transparent caching like to stick up their hands as to the sheer _volume_ of logging data collected relative to their customer base? And indicate exactly how long they archive those logs for, and how accessible they are???
Lots.
No long.
Sorry, just got outa bed, that should read... Lots, Not Long and Not Very.
participants (4)
- Drew Broadley
- Jeremy Brooking
- Juha Saarinen
- Mark Foster