At 10/15/01 05:39 AM, Michael Newbery wrote:
I'm trying to identify throughput speeds of typical low (and not so low) end network devices. With carrier networks that outperform many people's infrastructure, I'd like to have some sort of a list that says
Win95/98
BW <= (MSS / RTT) * (0.7 / sqrt(loss ratio)) i.e. TCP throughput depends on the segment size, the latency and the loss rate. TCP is also bounded by the sending buffer and the RTT, in that you cannot run at a rate faster than (buffer / RTT)
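The two bounds above can be sketched numerically; a minimal Python sketch, with the MSS, RTT and loss figures below chosen purely as illustrative assumptions:

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Loss-limited TCP throughput bound (bytes/s): (MSS/RTT) * (0.7/sqrt(p))."""
    return (mss_bytes / rtt_s) * (0.7 / math.sqrt(loss_rate))

def buffer_limit(buffer_bytes, rtt_s):
    """Buffer-limited throughput ceiling (bytes/s): buffer / RTT."""
    return buffer_bytes / rtt_s

# Illustrative figures: 1460-byte MSS, 150 ms trans-Pacific RTT, 0.1% loss
mss, rtt, loss = 1460, 0.150, 0.001
print(mathis_throughput(mss, rtt, loss) * 8 / 1e6)  # loss bound, in Mbps

# A 64 KiB send buffer over the same path caps the rate regardless of loss:
print(buffer_limit(64 * 1024, rtt) * 8 / 1e6)       # buffer bound, in Mbps
```

Whichever of the two bounds is lower wins: on a long path with a small default buffer, the buffer/RTT limit often bites well before the link speed does.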
We hit this even with cable modems, where 2 Mbps was faster than many PCs could make use of (three years ago). It has only got worse since.
We can put Smartbits testers on links during commissioning, but that's not really helpful to the customers, who want at least some reasonable pointer as to why going from 64kbps to 1Gbps isn't giving them the throughput they expected.
Performance is all about (a) tuning the end systems for performance and then (b) ensuring the network does no harm.

Tuning end systems is mostly about extending the buffer size, which in turn may also entail turning on window scaling. You may want to consider turning off delayed ACK (opinion is mixed on this), and certainly raise the initial window size to 4 packets. MTU discovery is also a must, as you want to maximize packet sizes. Careful use of local caches can assist web performance dramatically.

(a) is where most of the benefit can be found, although (b) includes avoiding loss through active queue management (here RED, correctly tuned, can be your friend) and careful control of transmission loss levels if you are using a noisy transmission system.

Australia and New Zealand are at the back end of some of the longer undersea cable runs, and the extended RTTs for a large proportion of customer traffic make careful tuning of the end systems essential if you want larger files to download at reasonable rates.

regards,

Geoff Huston