[Babel-users] the routing atomic update wet paint - because *I* care
dave.taht at gmail.com
Tue Apr 7 00:24:55 UTC 2015
On Mon, Apr 6, 2015 at 5:03 PM, Dave Taht <dave.taht at gmail.com> wrote:
> On Mon, Apr 6, 2015 at 3:03 PM, Juliusz Chroboczek
> <jch at pps.univ-paris-diderot.fr> wrote:
>>> Babel (don't know about OLSR) finds a "usable" path, then tunes to a
>>> better one, but each tuning step (particularly at high rates) can lose
>>> packets, which causes rate reductions. Ideally we would like to never lose
>>> packets while tuning happens.
>> I agree, but I would like to know how many packets we lose. Since the
>> remove/insert happen in quick succession, I'd expect it to be very few.
> My own noted issue is that at high rates, on cheezy routers, we run out of
> CPU while forwarding packets.
> One daemon, hostapd, wants to run at a pretty high rate, and falls
> behind its desired rate...
> and the context switch, plus what little work it does, alone costs
> 80 Mbits of forwarding, currently, on the TP-Link Archer C7 v2. (I can
> send along a graph.)
> You would hope that there would be no significant processing between syscalls in
> babel, but it is hard to measure, and the easiest thing for me would merely be to
Aha! It does help to write things down. I can merely get a timestamp between
syscalls and see to what extent that goes up or jitters under load, and compare
that to the relative size of the packets at the rate they are forwarding at. Groovy.
> have been measuring the loss between atomic changes and not during
> the optimization phase.
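A minimal sketch of that timestamp idea: wrap the per-iteration work in a stand-in function and record the gaps between successive returns. The helper name and structure below are my own for illustration, not babeld code; in the daemon the stand-in would be the work done between two syscalls.

```python
import statistics
import time

def time_between_calls(fn, n=1000):
    """Call fn() n times, recording the gap between successive returns.

    fn is a hypothetical stand-in for the per-packet work done between
    syscalls; this measures how that gap grows or jitters.
    """
    stamps = []
    for _ in range(n):
        fn()
        stamps.append(time.monotonic())
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return {
        "mean": statistics.mean(deltas),   # typical between-call gap
        "max": max(deltas),                # worst-case stall
        "jitter": statistics.pstdev(deltas),
    }
```

Comparing the reported mean, max, and jitter when idle versus under forwarding load would show how much processing really happens between syscalls.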
> As it is, I will try to set up some artificial benchmarks showing how much packet
> loss there really is when going from a wan connection to ethernet, as opposed to
> reordering. It might be interesting to show how Windows behaves here, as it has
> not yet got any decent mechanisms for handling reordering and slows down a lot.
> And I will keep touching the wet paint. A 4-phase commit seems feasible:
> 1. add new route with metric 1025
> 2. del old route with metric 1024
> 3. add same route with metric 1024
> 4. del same route with metric 1025
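The 4-phase sequence above can be expressed as four ip(8) invocations: the temporary metric-1025 entry keeps the prefix reachable while the metric-1024 entry is swapped out. The helper below is a hypothetical sketch of the proposed sequence, not babeld's implementation; the `run` parameter is injectable (e.g. a recorder) so the command sequence can be inspected without root.

```python
import subprocess

def atomic_route_swap(prefix, old_nh, new_nh, run=subprocess.run):
    """4-phase route commit: never leave the prefix without a route.

    Hypothetical illustration of the sequence from the email; `run`
    defaults to subprocess.run but can be swapped for a dry-run stub.
    """
    steps = [
        # 1. add new route at a worse metric, so it cannot preempt yet
        ["ip", "route", "add", prefix, "via", new_nh, "metric", "1025"],
        # 2. del old route; the metric-1025 entry takes over immediately
        ["ip", "route", "del", prefix, "via", old_nh, "metric", "1024"],
        # 3. add the same new route at the final metric
        ["ip", "route", "add", prefix, "via", new_nh, "metric", "1024"],
        # 4. del the temporary metric-1025 entry
        ["ip", "route", "del", prefix, "via", new_nh, "metric", "1025"],
    ]
    for cmd in steps:
        run(cmd, check=True)
    return steps
```

At every point in the sequence at least one route for the prefix is installed, which is the whole point of the extra two phases.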
> But dang it, a single syscall should be doable, and if it isn't then the
> kernel APIs need to be fixed.
> And I do wish more of the routing folk out there tested their stuff at
> saturating workloads and differing RTTs, as I do with the
> netperf-wrapper rrul, rtt_fair, and rrul_be tests. I need to get on with
> formalizing those tests for battlemesh.
>> -- Juliusz
> Dave Täht
> We CAN make better hardware, ourselves, beat bufferbloat, and take
> back control of the edge of the internet! If we work together, on
> making it: