Hello, is anyone getting `Your client has issued a malformed request' messages from Google when connected via MaxNet? I've tried with lynx, links, and Firefox.

I'm getting these messages not because my client is issuing a malformed request, but because MaxNet's transparent proxy is serving it. tcpflow reports the following in the response. Note the line that says `HIT' in it.

    HTTP/1.0 400 Bad Request
    Date: Tue, 08 Jun 2004 22:59:04 GMT
    Content-Type: text/html
    Server: GFE/1.3
    Cneonction: Close            <=== Hmmm, this is interesting.
    Content-Length: 1207
    Age: 40
    X-Cache: HIT from proxy2.akl1.maxnet.net.nz
    Connection: close

This doesn't always happen under Firefox (it happened the first time, but after that it worked OK). I'm guessing that's because of the short lifetime (Age: 40 seconds).

Can anyone duplicate this? Just to be safe, I've disabled the transparent proxy here and have no other proxy configured (direct connection). This was happening last night too.

-- 
Cameron Kerr
cameron.kerr(a)paradise.net.nz : http://nzgeeks.org/cameron/
Empowered by Perl!
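P.S. For anyone wanting to reproduce this, something along these lines should show the same response headers (a sketch; exact flags may differ between tcpflow versions):

    # capture HTTP on port 80 and print the flows to the console (-c)
    tcpflow -c port 80

    # or fetch just the response headers with curl (-D - dumps them to stdout)
    curl -s -D - -o /dev/null http://www.google.com/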
Hi,

Have you considered contacting Maxnet support about this? It's certainly the first I have heard of the problem, and I would suggest bringing it through to us rather than wasting list time. Could you provide me with the full HTTP session? Mostly what your client said would be helpful.

aj.

On Wed, 9 Jun 2004, Cameron Kerr wrote:
Hello, is anyone getting `Your client has issued a malformed request' messages from Google when connected via MaxNet? I've tried with lynx, links, and Firefox.
I'm getting these messages not because my client is issuing a malformed request, but because MaxNet's transparent proxy is serving it.
tcpflow reports the following in the response. Note the line that says `HIT' in it.
    HTTP/1.0 400 Bad Request
    Date: Tue, 08 Jun 2004 22:59:04 GMT
    Content-Type: text/html
    Server: GFE/1.3
    Cneonction: Close            <=== Hmmm, this is interesting.
    Content-Length: 1207
    Age: 40
    X-Cache: HIT from proxy2.akl1.maxnet.net.nz
    Connection: close
This doesn't always happen under Firefox (it happened the first time, but after that it worked OK). I'm guessing it's because of the short lifetime (Age: 40 seconds).
Can anyone duplicate this?
Just to be safe, I've disabled the transparent proxy here, and have no other proxy configured (direct connection). This was happening last night too.
-- Cameron Kerr cameron.kerr(a)paradise.net.nz : http://nzgeeks.org/cameron/ Empowered by Perl!
-- Network Operations || noc. +64.9.915.1825 Maxnet || cell. +64.21.639.706
On 8 Jun 2004, at 19:13, Cameron Kerr wrote:
I'm getting these messages not because my client is issuing a malformed request, but because MaxNet's transparent proxy is serving it.
How widespread is transparent caching today?

Back during the trans-pacific bandwidth squeeze of 1998/9, it seemed like transparent caching was a necessary evil that would help accommodate growth in customer demand for traffic until Southern Cross arrived. Southern Cross arrived some time ago, is evidently not full, and yet people are still forcing customers to use caches.

Why is that?

Joe
Probably due to the myth that transparent caching actually saves bandwidth. Though with primary web servers defaulting to sending "don't cache these pages" headers back, it seems a little fruitless.

One major advantage of caching, however (and part of the reason why we still used caches at iconz for some customers), was bandwidth acceleration, though in this configuration the cache tends to use more bandwidth than straight web browsing...

-- 
Steve.

On Tue, 8 Jun 2004, Joe Abley wrote:
On 8 Jun 2004, at 19:13, Cameron Kerr wrote:
I'm getting these messages not because my client is issuing a malformed request, but because MaxNet's transparent proxy is serving it.
How widespread is transparent caching today?
Back during the trans-pacific bandwidth squeeze of 1998/9 it seemed like transparent caching was a necessary evil that would help accommodate growth in customer demand for traffic until Southern Cross arrived. Southern Cross arrived some time ago, is evidently not full, and yet people are still forcing customers to use caches.
Why is that?
Joe
Joe Abley wrote:
How widespread is transparent caching today?
In NZ, very.
Back during the trans-pacific bandwidth squeeze of 1998/9 it seemed like transparent caching was a necessary evil that would help accommodate growth in customer demand for traffic until Southern Cross arrived. Southern Cross arrived some time ago, is evidently not full, and yet people are still forcing customers to use caches.
Why is that?
Rhetorical question? International bandwidth still costs a lot, perhaps.

Even so, isn't there a slight win for NZ Internet users going through a transparent cache? In theory at least, it should speed up "the Intarweb", as content is being fetched from a local cache instead of 150-350ms away.

-- 
Juha
On 8 Jun 2004, at 20:15, Juha Saarinen wrote:
Back during the trans-pacific bandwidth squeeze of 1998/9 it seemed like transparent caching was a necessary evil that would help accommodate growth in customer demand for traffic until Southern Cross arrived. Southern Cross arrived some time ago, is evidently not full, and yet people are still forcing customers to use caches. Why is that?
Rhetorical question?
No, actually.
International bandwidth still costs a lot, perhaps.
This (and Jonathan's answer) assumes that the introduction of caches somehow conserves bandwidth. I've never seen that in practice; the upstream pipes were still as full as ever the last time I watched caches being added to an ISP.
Even so, isn't there a slight win for NZ Internet users going through a transparent cache, as in theory at least it should speed up "the Intarweb" as content is being fetched from a local cache instead of 150-350ms away?
That caches improve perceived performance for users is a reasonable answer to the original question, I suppose. I've never heard of anybody using transparent caches without a regular stream of helpdesk-driven exception handling, though, so there's a cost to that performance, both to the customer and to the ISP whose helpdesk phone is ringing.

Joe
Joe Abley wrote:
That caches improve perceived performance for users is a reasonable answer to the original question, I suppose. I've never heard of anybody using transparent caches without a regular stream of helpdesk-driven exception handling, though, so there's a cost to that performance, both to the customer and to the ISP whose helpdesk phone is ringing.
True, and if not carefully configured, transparent caching can have interesting side effects, like anonymising posters of Christmas Island material, as I wrote about on http://www.nzherald.co.nz/business/businessstorydisplay.cfm?storyID=3559088

I'm also wondering how effective ISP-side transparent caching is in today's "Akamaised" Internet. Has anyone looked at that?

-- 
Juha
On 8 Jun 2004, at 20:48, Juha Saarinen wrote:
I'm also wondering how effective ISP-side transparent caching is in today's "Akamaised" Internet. Has anyone looked at that?
I haven't checked, but I would assume that content served up from Akamai nodes is packaged as enthusiastically as possible to persuade caches not to retain data, since every cached object served is dollars not in Akamai's pocket.

Akamai has nodes in New Zealand, though. Or are you suggesting that NZ ISPs routinely cache domestic content as well as content from overseas?

Joe
On Wed, 9 Jun 2004, Juha Saarinen wrote:
Joe Abley wrote:
Akamai has nodes in New Zealand, though. Or are you suggesting that NZ ISPs routinely cache domestic content as well as content from overseas?
Well, that's what I'm curious about. Do ISPs here "uncache" Akamaised content coming from the local nodes?
I can't speak for everyone; however, our caches only operate facing our international transit circuits, and domestic and "local" traffic (e.g. the Akamai deployment) gets ignored. Akamai does encourage (well, somewhat) ISPs to cache traffic towards Akamai if necessary, and in general it is quite cacheable, being images etc. in a lot of cases.

I have heard of one ISP who, due to their network architecture, is looking to deploy some rather large Cache Engines in order to cache both domestic and international traffic, as they can't feasibly separate the two.
From our perspective, the caches serve two purposes:
1. Bandwidth reduction - we really do see some quite significant savings on HTTP bandwidth with the caches, and if we take them out we can watch the international load climb substantially.

2. Latency reduction - it makes the browsing experience feel nicer, as content is served more locally.

We don't really find we have too many issues with them that require helpdesk intervention. On the rare occasion we do, it's usually caused by the server not sending correct header information, or not responding to IMS requests properly (hi, IIS 4.0 and some releases of 5.0).

Of course, when you have caches in your critical path that account for 60-70% of traffic across 2 parallel circuits that are using CEF per-flow load balancing, you see some amusing traffic patterns.

aj

-- 
Network Operations || noc. +64.9.915.1825
Maxnet || cell. +64.21.639.706
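P.S. For the curious, one common way to do the transparent redirection is WCCP on the transit-facing router. A minimal sketch in IOS syntax (from memory, and not necessarily what we run, so check it against the documentation):

    ! enable the standard web-cache service group
    ip wccp web-cache
    !
    interface Serial0/0
     description international transit
     ! divert outbound HTTP through the cache farm
     ip wccp web-cache redirect out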
On Wed, Jun 09, 2004 at 12:48:21PM +1200, Juha Saarinen wrote:
True, and if not carefully configured, transparent caching can have interesting side effects, like anonymising posters of Christmas Island material, as I wrote about on http://www.nzherald.co.nz/business/businessstorydisplay.cfm?storyID=3559088
An interesting point. Is this not what the X-Forwarded-For header is for? What other methods are used in the industry by ISPs to prevent this sort of abuse?

-- 
Cameron Kerr
cameron.kerr(a)paradise.net.nz : http://nzgeeks.org/cameron/
Empowered by Perl!
On Wed, 9 Jun 2004, Cameron Kerr wrote:
On Wed, Jun 09, 2004 at 12:48:21PM +1200, Juha Saarinen wrote:
True, and if not carefully configured, transparent caching can have interesting side effects, like anonymising posters of Christmas Island material, as I wrote about on http://www.nzherald.co.nz/business/businessstorydisplay.cfm?storyID=3559088
An interesting point. Is this not what the X-Forwarded-For header is for?
Generally, yes. We make sure to pass the X-F-F header, but unfortunately most software and webmasters don't look at or log it, which is somewhat stupid. On the other hand, it can make things vulnerable to abuse, as evidenced by some people using the header to stuff online votes by just generating fake headers.
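To illustrate, passing the header on the cache and logging it at the origin look roughly like this (Squid's forwarded_for directive and an Apache LogFormat; a sketch, so check against the versions you run):

    # squid.conf: include the client's address in X-Forwarded-For
    forwarded_for on

    # Apache httpd.conf: log the header so real client IPs are recorded
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-Forwarded-For}i\"" xff
    CustomLog logs/access_log xff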
What other methods are used in the industry by ISPs to prevent this sort of abuse?
Logging cache accesses, I guess, would be the other option, but how many ISPs actually do this? It's a _lot_ of data to log, and then to keep...

aj

-- 
Network Operations || noc. +64.9.915.1825
Maxnet || cell. +64.21.639.706
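P.S. A back-of-envelope on the volume, with every input hypothetical:

    # rough daily access-log volume for a busy cache
    req_per_sec = 300        # peak HTTP requests/sec (hypothetical)
    bytes_per_line = 150     # typical access.log line length (hypothetical)
    gb_per_day = req_per_sec * bytes_per_line * 86400 / 1e9
    print(f"~{gb_per_day:.1f} GB/day")   # -> ~3.9 GB/day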
On Tue, 8 Jun 2004, Joe Abley wrote:
International bandwidth still costs a lot, perhaps.
This (and Jonathan's answer) assumes that the introduction of caches somehow conserves bandwidth. I've never seen that in practice; the upstream pipes were still as full as ever the last time I watched caches being added to an ISP.
Well, the graphs I look at have less bandwidth going in one direction than in the other, the difference being enough to pay for the cache boxes in a reasonable payback period. I wouldn't be surprised if the price of bandwidth drops enough in a couple of years for it not to be worth doing, however. I seem to remember that, back about 5 years ago, the payback period for cache boxes plus an L4 interceptor was around 1 month.

Anyway, ISPs don't route most of their customers' traffic these days; Netgate does.
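To make that concrete, a toy payback calculation (every figure below is made up for illustration):

    # hypothetical cache-box payback period
    saved_mbps = 5            # average international bandwidth saved
    price_per_mbps = 400      # NZD per Mbps per month (hypothetical)
    box_cost = 20000          # NZD for hardware plus setup (hypothetical)

    monthly_saving = saved_mbps * price_per_mbps
    print(f"payback in {box_cost / monthly_saving:.1f} months")  # -> 10.0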
That caches improve perceived performance for users is a reasonable answer to the original question, I suppose. I've never heard of anybody using transparent caches without a regular stream of helpdesk-driven exception handling, though, so there's a cost to that performance, both to the customer and to the ISP whose helpdesk phone is ringing.
Obviously, ISPs would only use them if they felt the TCO worked out. The fact that they still use them indicates that they perceive the savings are still there.

-- 
Simon J. Lyall. | Very Busy | Mail: simon(a)darkmere.gen.nz
"To stay awake all night adds a day to your life" - Stilgar | eMT.
Joe Abley wrote:
Back during the trans-pacific bandwidth squeeze of 1998/9 it seemed like transparent caching was a necessary evil that would help accommodate growth in customer demand for traffic until Southern Cross arrived. Southern Cross arrived some time ago, is evidently not full, and yet people are still forcing customers to use caches.
Why is that?
I don't know; perhaps in an attempt to decrease the ISP's bandwidth usage and allow them to try to offer the consumer lower prices... But that sounds like a silly idea to me.
On Tue, Jun 08, 2004 at 08:10:37PM -0400, Joe Abley wrote:
How widespread is transparent caching today?
[...] Southern Cross arrived some time ago, is evidently not full, and yet people are still forcing customers to use caches.
Why is that?
Is this a trick question? The answer, naturally, is money. More specifically, it is far cheaper (essentially free) to serve something from an ISP's local cache than to request it all over again. This way, the ISP `earns' (saves) money, plus it has the very real potential to be faster when hit from cache (though I doubt that is the primary motivation for using transparent proxying in an ISP environment).

-- 
Cameron Kerr
cameron.kerr(a)paradise.net.nz : http://nzgeeks.org/cameron/
Empowered by Perl!
On Wed, 9 Jun 2004, Cameron Kerr wrote:
Is this a trick question? The answer, naturally, is money. More specifically, it is far cheaper (essentially free) to serve something from an ISP's local cache than to request it all over again. This way, the ISP `earns' (saves) money, plus it has the very real potential to be faster when hit from cache (though I doubt that is the primary motivation for using transparent proxying in an ISP environment).
Many times, however, when you check the bandwidth in vs. the bandwidth out, the saving is minimal if you are only doing light caching.

While it is possible to override "don't cache" headers and expiry times on web sites, this will more than likely break dynamic content, and it requires a lot more work to maintain lists of dynamic pages that break when the cache headers are overridden, hence driving up the costs such that the "savings" in bandwidth are artificial.

-- 
Steve.
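P.S. The overrides in question are Squid's refresh_pattern options. A sketch from a 2.5-era squid.conf (double-check before using it in anger; this is exactly the sort of thing that breaks dynamic content):

    # treat static-looking objects as fresh for up to a week,
    # ignoring the server's Expires header and client reloads
    refresh_pattern -i \.(gif|jpg|png|css|js)$ 1440 50% 10080 override-expire ignore-reload
    refresh_pattern .                             0 20%  4320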
I agree wholeheartedly with Steve's comments below: the cost saving is negligible, while the resource required to administer it was high. I redeployed my cache to a new position in our network: it is working very well as a footrest under my desk.

I found removing the cache actually improved overall performance. My customers, and even managers, noticed the improvement... some of them even rang to say thanks (that makes a change!)

David

Steve wrote:

On Wed, 9 Jun 2004, Cameron Kerr wrote:
Is this a trick question? The answer, naturally, is money. More specifically, it is far cheaper (essentially free) to serve something from an ISP's local cache than to request it all over again. This way, the ISP `earns' (saves) money, plus it has the very real potential to be faster when hit from cache (though I doubt that is the primary motivation for using transparent proxying in an ISP environment).
Many times, however, when you check the bandwidth in vs. the bandwidth out, the saving is minimal if you are only doing light caching. While it is possible to override "don't cache" headers and expiry times on web sites, this will more than likely break dynamic content, and it requires a lot more work to maintain lists of dynamic pages that break when the cache headers are overridden, hence driving up the costs such that the "savings" in bandwidth are artificial.

-- 
Steve.
Well, maybe it's just that Cacheflows suck?
Netapp cache appliances are very nice units that cause few or no customer-visible problems. Compared to the amount of feedback generated by customers from our old Squid farm, the Netapps have been a rather pleasant, trouble-free experience.

Caching still saves a considerable amount of bandwidth. Obviously, the more customers you have, the more content they pull, which means having more users generally nets you more bandwidth savings.

Tony Wicks wrote:

Well, maybe it's just that Cacheflows suck?
On Wed, 9 Jun 2004, Tony Wicks wrote:
Well, maybe it's just that Cacheflows suck?
Which is probably quite true, hence the comments about cache devices that add additional "features" to allow overrides and cache sites 'harder'. Squid is among the ones that can get some quite substantial savings, but at what cost? I've seen admins force caching on obviously dynamic sites in order to achieve some sort of realistic bandwidth savings; however, this can cause all sorts of support calls and grumpy customers.

There is no doubt that one can artificially create a bandwidth saving that is quite real, but at what cost? Especially when the customer tends to pay a premium for international bandwidth already...

-- 
Steve.
Hi,

I've found that caching does both improve performance and reduce bandwidth usage. Tuning a cache is not a 20-minute job; it takes some time to set up right, and after reading several howtos/suggestions/tuning guides all over the show, I'm pretty happy with how our caches perform now.

Generally we only sit around 14%-16% byte hits (without IMS hacking). IMHO, IMS hacking isn't worth the extra 6% byte hits it gives, compared to the headaches it causes for customers/helpdesk. It's more the request hits that make the web feel responsive, and I find the difference quite noticeable, especially for Europe- and Asia-sourced web content.

I would suggest that the 7%-8% of overall bandwidth the caches save (assuming 50% of traffic is web, which might be a bit generous) is worth it, especially if you are buying >10Mbit. Even at 5% total savings, you are saving reasonable sums of money.

I would suggest hand-rolling your own Squid box as opposed to using a Cacheflow, and taking some time to read up on how to get the most out of it (whether it be speed, savings, or both).

Regards,
Relihan.
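P.S. To put rough numbers on the above (a sketch; the 50% web share is the assumption stated earlier):

    # overall saving = fraction of traffic that is web * cache byte-hit rate
    web_share = 0.50    # assumed share of total traffic that is HTTP
    byte_hit = 0.15     # midpoint of the 14-16% byte-hit range above
    print(f"total bandwidth saved: {web_share * byte_hit:.1%}")  # -> 7.5%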
David Fox wrote:

I agree wholeheartedly with Steve's comments below: the cost saving is negligible, while the resource required to administer it was high. I redeployed my cache to a new position in our network: it is working very well as a footrest under my desk.

I found removing the cache actually improved overall performance. My customers, and even managers, noticed the improvement... some of them even rang to say thanks (that makes a change!)
David
Steve wrote:
Many times, however, when you check the bandwidth in vs. the bandwidth out, the saving is minimal if you are only doing light caching.

While it is possible to override "don't cache" headers and expiry times on web sites, this will more than likely break dynamic content, and it requires a lot more work to maintain lists of dynamic pages that break when the cache headers are overridden, hence driving up the costs such that the "savings" in bandwidth are artificial.
-- Steve.
On Wed, Jun 09, 2004 at 12:21:19PM +1200, Cameron Kerr wrote:
Is this a trick question? The answer, naturally, is money. More specifically, it is far cheaper (essentially free) to serve something from an ISP's local cache than to request it all over again. This way, the ISP `earns' (saves) money, plus it has the very real potential to be faster when hit from cache (though I doubt that is the primary motivation for using transparent proxying in an ISP environment).
Surely if you're charging different rates for national and international (and ISP-local) traffic, you're not earning or saving much, though, right? Unless of course you're billing content served from your cache as international traffic, but surely that would be illegal...

Richard
participants (12)

- Alastair Johnson
- Cameron Kerr
- David Fox
- Glen Wilson
- Jeremy Brooking
- Joe Abley
- Juha Saarinen
- Relihan Myburgh
- Richard Hector
- Simon Lyall
- Steve
- Tony Wicks