Most of them are on fibre-fed, Ethernet-switched networks, so I generally keep them on fq_codel without SQM; tweaking SQM doesn't help at 100/100 Mbit and above.
I have a few behind 130/25 Mbit cable connections and generally just use
Simple QoS, set around 2-5% below the peak rates observed out of the WAN
interface after loading the connection up to NZ servers. Telstra/Vodafone's
cable scheduling (DOCSIS 3, Cisco gear) is, I think, a single-queue
buffer, so tuning SQM isn't a big help here either. fq_codel by itself is
enough to keep latency spikes lower than the default pfifo generally found on
most Linux-based CPEs out there, and gives measurable user-happiness gains.
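A rough sketch of that shaping rule, assuming a 5% margin and the 130/25 Mbit figures above (the variable names and the exact margin are mine, not from any particular SQM script):

```shell
# Compute SQM shaping rates "2-5% below observed peak", per the rule above.
# Peak figures are the 130/25 Mbit cable example from this thread;
# the 5% margin is one point in the suggested 2-5% range.
observed_down=130000   # observed peak download, kbit/s
observed_up=25000      # observed peak upload, kbit/s
margin=5               # percent to shave off the observed peak

shaped_down=$(( observed_down * (100 - margin) / 100 ))
shaped_up=$(( observed_up * (100 - margin) / 100 ))

echo "shape download to ${shaped_down} kbit/s, upload to ${shaped_up} kbit/s"
```

The resulting numbers are what you would feed into the SQM download/upload rate fields.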
One area I want to have a play with is multiqueue SQM/fq_codel performance
at 100/1000 Mbit and above on Intel Atom. As per discussions on the cerowrt
list, the MIPS hardware really is limited once you start looking at gig
uplinks carrying VXLAN traffic to the edge (SDN CPE); I have a work-related
interest in this at the moment.
-Joel
On 26 January 2015 at 22:17, Dave Taht wrote:
On Tue, Jan 27, 2015 at 6:45 PM, Joel Wirāmu Pauling wrote:
The WNDR3800 is Telepermitted/sold in NZ. I have dozens of them around the country, many running cerowrt builds.
Heh. I didn't know. Can you share some of your typical SQM settings for various services and bandwidths? In particular the DSL compensation code was always kind of hairy.
I'd also like to note that ubnt's gear was a frequent target for the cerowrt effort; notably the Picostation and Nanostation are extensively in play in the largest testbed.
I think highly of their default firmware, but it lacks IPv6, routing support, and bufferbloat fixes. Their AirOS QoS system is quite good, being fq-based, but didn't have AQM-ish facilities, and so far as I know (in their AirFiber product) they just poured the fq portion of all that into the FPGA and didn't do much to manage queue length.
A netperf-wrapper rrul (latency with load) result on one of the airfiber boxes in rain and outside of it would be interesting.
The default PSU is a 110/240V DC switch-mode supply; a physical plug adaptor is all that is required.
Have no brick here at all. oops.
-Joel
On 26 January 2015 at 21:08, Ewen McNeill wrote:
Hi Dave,
On 27/01/15 16:45, Dave Taht wrote:
Well, I would love it if I could get more data from folk willing to install netperf-wrapper and run a 5 minute test that would be good. [...]
on debian derived linux those are:
In case it helps someone else trying to test this week...
For "Debian derived Linux" read "Ubuntu" AFAICT (netperf is not packaged in Debian, up through Debian Experimental, so "apt-get build-dep netperf" won't work there; I can send a list of the packages my Ubuntu system wanted to install, in case someone wants to try on Debian).
It's also apparently _not_ Ubuntu 14.04 LTS, because that appears to install netperf-wrapper as a Python Egg with the given install instructions, and netperf-wrapper appears to be completely unable to find its tests within the Python Egg, resulting in it complaining:
-=- cut here -=-
Fatal error: Hostname lookup failed for host rrul: [Errno -2] Name or service not known
-=- cut here -=-
You can tell you have this problem if "netperf-wrapper --list-tests" fails with a Python stack trace, rather than returning a list of tests.
My kludgy workaround (which seems to have worked) was:
-=- cut here -=-
cd /usr/local/share && sudo ln -s ~/src/netperf-wrapper .
-=- cut here -=-
which then lets "nztest.sh" (http://snapon.lab.bufferbloat.net/~d/nz/nztest.sh) run after some editing to point it at something other than the non-existent NZ server (I chose the US west coast).
Serious suggestion: perhaps it would help to bundle this with Docker or similar? It'd then be easier for people to install a known-to-work version with just a "needs modern Linux kernel" dependency.
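To make the suggestion concrete, a hypothetical sketch of what such a container build might look like (base image, repo URLs, and package names are all assumptions, not tested):

```dockerfile
# Hypothetical sketch only: build netperf and netperf-wrapper from source
# inside a container, sidestepping the distro packaging issues above.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    git build-essential python python-matplotlib fping
# netperf is not packaged in Debian, so build it from source
# (--enable-demo is required for netperf-wrapper's data collection)
RUN git clone https://github.com/HewlettPackard/netperf.git /tmp/netperf \
 && cd /tmp/netperf && ./configure --enable-demo && make && make install
RUN git clone https://github.com/tohojo/netperf-wrapper.git /tmp/nw \
 && cd /tmp/nw && python setup.py install
ENTRYPOINT ["netperf-wrapper"]
```

Installing via `setup.py install` inside the container would also avoid the Python Egg test-lookup problem described above, since nothing else on the host can interfere.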
In my present location, however, I am seeing a *450ms* RTT to the eu, which I sure hope isn't normal for NZ, just this hotel. ( ping netperf-eu.bufferbloat.net ).
It seems about 75-100ms too high. From Vodafone cable in Wellington:
-=- cut here -=-
ewen@ashram:~$ ping netperf-eu.bufferbloat.net
PING kau.toke.dk (130.243.26.64): 56 data bytes
64 bytes from 130.243.26.64: icmp_seq=0 ttl=39 time=341.222 ms
64 bytes from 130.243.26.64: icmp_seq=1 ttl=39 time=342.191 ms
[...]
-=- cut here -=-
and it's about 15ms lower (ie, around 325ms) from my colo box in central Wellington (which is roughly the cable Internet overhead, so an expected difference).
320-350ms is roughly the RTT I'd expect to see to Europe out of New Zealand, mostly due to speed-of-light not being infinite. (I do work on a few systems based in Europe so have fairly consistently seen this for years.)
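A back-of-the-envelope check of that figure; all three constants below are rough assumptions of mine, not measurements:

```shell
# Estimate the propagation-only NZ-Europe RTT from distance and
# the speed of light in fibre (integer shell arithmetic throughout).
gc_km=18000        # approx great-circle distance, Wellington to northern Europe
path_x10=14        # cable paths run roughly 1.4x great circle (scaled by 10)
fibre_km_s=200000  # light in glass travels at roughly 2/3 of c

rtt_ms=$(( 2 * gc_km * path_x10 / 10 * 1000 / fibre_km_s ))
echo "propagation-only RTT ~ ${rtt_ms} ms"
```

That leaves something like 70-100ms of the observed 320-350ms for routing detours and per-hop delays, which is consistent with speed-of-light being the dominant term.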
If anyone has a power supply suitable for a wndr3800 and NZ power standards, I brought one, might be able to fix matters here.....
A quick online search suggests this is a 12V DC, 2.5A power supply (eg,
http://www.myaccount.charter.com/customers/support.aspx?supportarticleid=329... ).
If so, they seem likely to be fairly common.
Sort of why I just asked if openwrt-derived gear was a problem here. That stuff is by far the farthest along for home CPE.
The main issue here for DSL is that there is an approval process ("Telepermit") for legally connecting equipment to NZ copper lines, so only models that someone has put through the approval process can be legally used (and IIRC each importer has to have their own import approved).
For Vodafone Cable the CPE device is usually supplied by Vodafone (and acts as a bridge), and it's standard-Ethernet from there in, so there may be more choice.
I've not gone looking for specific models that CeroWrt has focused on in NZ (except for looking for the WNDR3800 just now, and not immediately finding any obviously for sale -- but IIRC the model is getting harder to find everywhere now).
Ewen
_______________________________________________
NZNOG mailing list
NZNOG@list.waikato.ac.nz
http://list.waikato.ac.nz/mailman/listinfo/nznog
--
Dave Täht
http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks