Yup, confirmed behaviour noticed as part of bufferbloat testing. Google the bufferbloat mailing list archives if you want to confirm.
On 31 March 2017 at 12:46, Dylan Hall wrote:
I stumbled across an issue with iperf3 and its UDP mode recently that I thought was worth sharing. In short, it's very bursty and, in my opinion, broken. Use iperf (2.0.x) instead.
The interesting detail:
I recently got UFB at home (200/100 plan) and wanted to put it through its paces. I did the usual TCP tests and it all looked good, so I decided to try UDP to look for packet loss. Even with fairly low rates (10-50 Mbps) I was seeing loss vary from 5% to 50%. I tried a number of different hosts around NZ, and one in the US; although each reported a different amount of loss, there was always loss. I ran the tests with "-l 1400" as an option to force 1400-byte packets. Without this, iperf3 sends 8 kB datagrams that get fragmented, which confuses the loss figures somewhat. I repeated the tests with iperf rather than iperf3 and everything worked perfectly.
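For reference, the test commands were along these lines (the host name and duration are placeholders; the only option that really matters for what follows is -l 1400):

iperf3 -c test.host.example -u -b 15M -l 1400 -t 30
iperf -c test.host.example -u -b 15M -l 1400 -t 30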
For the detail below I focused on one test host: a 10G-connected server near my ISP, chosen to minimise the amount of transit network in the way.
The following is a small piece of a packet capture from the sending host using iperf (2.0.5), with the rate set to 15 Mbps. I've added a Delta field showing the time since the previous packet, in microseconds.
23:27:57.979021 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 746
23:27:57.979766 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 745
23:27:57.980511 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 745
23:27:57.981254 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 743
23:27:57.982001 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 747
23:27:57.982749 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 748
23:27:57.983492 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 743
23:27:57.984238 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 746
23:27:57.984986 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 748
23:27:57.985731 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 745
23:27:57.986478 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 747
The packets are very evenly spaced.
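The Delta column isn't something tcpdump prints itself; I added it afterwards from the capture timestamps. A minimal sketch of that post-processing (in Python, assuming the default tcpdump HH:MM:SS.ffffff timestamp as the first field of each line) looks like this:

#!/usr/bin/env python3
# Annotate each tcpdump line with the time since the previous packet,
# in microseconds. Assumes the default tcpdump timestamp format
# (HH:MM:SS.ffffff) as the first whitespace-separated field.
import sys

prev = None
for line in sys.stdin:
    fields = line.split()
    if not fields or fields[0].count(":") != 2:
        print(line.rstrip())
        continue
    h, m, s = fields[0].split(":")
    t = int(h) * 3600 + int(m) * 60 + float(s)
    delta = "-" if prev is None else str(int(round((t - prev) * 1_000_000)))
    prev = t
    print(line.rstrip() + " Delta: " + delta)

Something like "tcpdump -nr capture.pcap udp | python3 add_delta.py" produces the output shown here (add_delta.py is just my name for the script above).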
The following is using iperf3 (3.1.3), also at 15 Mbps.
23:28:23.913489 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:23.913496 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:23.913505 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
23:28:23.913513 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:23.913520 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:23.913529 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
23:28:23.913537 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:24.012445 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 98908
23:28:24.012458 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 13
23:28:24.012468 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 10
23:28:24.012475 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:24.012483 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:24.012492 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
23:28:24.012499 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:24.012508 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
It appears to send a burst of very closely spaced packets, then take a break, then repeat; the cycle is about 100 ms.
Applying some hopefully correct maths, it appears to send about 187 kB of data in just over 1 ms, then rest for almost 99 ms. That averages out to about the right rate (15 Mbps), but for that first millisecond it's sending at about 1.4 Gbps.
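A quick back-of-the-envelope check of those numbers (assuming a 100 ms cycle and roughly a 1 ms burst, which is what the capture suggests):

# Rough sanity check of the burst figures above.
rate_bps = 15_000_000     # configured rate: 15 Mbps
cycle_s = 0.1             # observed burst-to-burst cycle: ~100 ms
burst_s = 0.001           # assumed length of each burst: ~1 ms

bytes_per_cycle = rate_bps / 8 * cycle_s        # ~187.5 kB queued per cycle
burst_rate_bps = bytes_per_cycle * 8 / burst_s  # ~1.5 Gbps while the burst lasts
print(f"{bytes_per_cycle / 1000:.1f} kB per cycle, burst at {burst_rate_bps / 1e9:.1f} Gbps")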
I assume that the loss I'm seeing is either buffers overflowing somewhere in the path or the Chorus rate-limit on UFB discarding the packets.
The old version of iperf uses a busy loop in the sender to achieve very precise timing in UDP mode, which has the side effect of pegging a CPU core at 100% while sending. I wonder if the change in iperf3 is an attempt to make it more efficient.
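I haven't dug through the iperf3 source, so the following is only an illustrative sketch of the two pacing strategies, not actual iperf code. A busy-wait sender spins on the clock and releases each packet on its own deadline, while a coarse timer-driven sender wakes up periodically and sends everything that has fallen due; with a ~100 ms tick that produces exactly the burst/rest pattern in the capture above.

import time

def busy_wait_pacing(send, interval_s, count):
    # Spin on the clock and release each packet exactly on schedule.
    # Very smooth spacing, at the cost of 100% CPU while sending.
    next_t = time.monotonic()
    for _ in range(count):
        while time.monotonic() < next_t:
            pass
        send()
        next_t += interval_s

def timer_pacing(send, interval_s, count, tick_s=0.1):
    # Sleep for a coarse tick, then send everything that has fallen due.
    # CPU-friendly, but every packet due within the tick goes out
    # back-to-back, producing bursts like the ones captured above.
    sent = 0
    start = time.monotonic()
    while sent < count:
        time.sleep(tick_s)
        due = int((time.monotonic() - start) / interval_s)
        while sent < min(due, count):
            send()
            sent += 1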
Had I Googled the issue at the beginning, I would have found that this is a known problem:
https://github.com/esnet/iperf/issues/296
https://github.com/esnet/iperf/issues/386
Hopefully this will save someone else from a couple of hours of confusion :)
Thanks,
Dylan