[Babel-users] 64k routes, bird, babel rfc, rtod, etc

Dave Taht dave.taht at gmail.com
Thu Nov 8 21:40:13 GMT 2018


On Thu, Nov 8, 2018 at 1:21 PM Juliusz Chroboczek <jch at irif.fr> wrote:
>
> > things fall apart at about 32k in this case.
>
> That's pretty good.  We'll reach your 64k target when we get around to

It's a worthy goal, but really excessive. :) I basically wanted to know
the real limits of the lowest-end hw I had before daring to try and
deploy ipv6 willy-nilly again.

> fixing the resend data structure

Got any ideas? I enjoyed poking into the latest trie research
(source-specific routing is a new area!), I really enjoyed learning
about skiplists last week, and I'm liking uthash's potential long
term...
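
Roughly what I have in mind for the uthash angle -- just a sketch from
me, not a patch, and the struct names are invented -- is keying the
resend entries by prefix so lookup stops being a list walk:

#include <stdlib.h>
#include "uthash.h"   /* https://troydhanson.github.io/uthash/ */

struct resend_key {            /* invented; babeld's real struct differs */
    unsigned char prefix[16];
    unsigned char plen;
    unsigned char src_prefix[16];
    unsigned char src_plen;
};

struct resend_entry {
    struct resend_key key;     /* zero it before filling: uthash memcmp's
                                  the whole key, padding included */
    int kind;
    unsigned short delay;
    UT_hash_handle hh;         /* makes this struct hashable */
};

static struct resend_entry *resends;   /* hash head, starts out NULL */

static struct resend_entry *
find_resend(const struct resend_key *key)
{
    struct resend_entry *r;
    HASH_FIND(hh, resends, key, sizeof(*key), r);
    return r;
}

static void
record_resend(const struct resend_key *key, int kind)
{
    struct resend_entry *r = find_resend(key);
    if (r == NULL) {
        r = calloc(1, sizeof(*r));
        if (r == NULL)
            return;
        r->key = *key;
        HASH_ADD(hh, resends, key, sizeof(r->key), r);
    }
    r->kind = kind;
}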

> and implement pacing in the buffer
> flushing code.

Pacing can be done at the socket layer now; it's been working for udp
and QUIC since, oh, about 4.12? 4.9? Can't remember. I tried this at
one point. It's an easy patch to fiddle with to "just turn it on":

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62748f32d501f5d3712a7c372bbb92abc7c62bc7
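
For babeld it could be as simple as something like this on the send
socket (a sketch under my assumptions -- the helper name and the rate
are made up, and as far as I know it's the fq qdisc on the egress
interface that actually enforces this for udp sockets):

#include <sys/socket.h>

static int
set_max_pacing_rate(int fd)
{
    /* bytes per second -- 1 MB/s here is just an example number */
    unsigned int rate = 1000000;
    return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                      &rate, sizeof(rate));
}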

>
> What happens if you disable all forms of AQM on the outgoing interface?

This is primarily on a gigE network, bridged to wifi.

Sorry, dude, ain't my fault here for a change. :) We're not even
driving the interface hard enough for much fq to kick in.

tc -s qdisc show dev enp7s0
qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514
target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 3757104160 bytes 2768425 pkt (dropped 0, overlimits 0 requeues 704)
 backlog 0b 0p requeues 704
  maxpacket 68130 drop_overlimit 0 new_flow_count 695 ecn_mark 0
  new_flows_len 0 old_flows_len 0

I don't see any drops on the outbound interface. I do see babel
perhaps bottlenecking on the send buffer a bit.

It's the receive side that's more of the issue here, I think. We need
to do less work per recv.
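
One possible direction (my speculation, nothing that's in the tree;
names and sizes invented): batch the reads with recvmmsg() so each
wakeup drains a pile of datagrams instead of paying the full syscall
overhead per packet:

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>

#define BATCH 16
#define BUFSZ 1500

/* Returns how many datagrams landed in bufs, or -1 on error. */
static int
drain_socket(int fd, unsigned char bufs[BATCH][BUFSZ])
{
    struct mmsghdr msgs[BATCH];
    struct iovec iovs[BATCH];

    for (int i = 0; i < BATCH; i++) {
        iovs[i].iov_base = bufs[i];
        iovs[i].iov_len = BUFSZ;
        memset(&msgs[i], 0, sizeof(msgs[i]));
        msgs[i].msg_hdr.msg_iov = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
        /* a real receiver would also set msg_name to learn the sender */
    }
    return recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
}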

I can of course throw aqm at it; I normally run a box with sqm on it
at 20Mbits (where drops certainly become a huge issue), and I'm
looking at wifi and gathering stats on that, but for a change it's the
code, not the network. I'm of course very interested in someday
returning to the short-rtt-metric and ecn work, but what I mostly
wanted was to be able to deploy rfc-bis with confidence. I have a heck
of a lot more confidence now...

(how's the hmac branch? :)

I'd also like to get back to my atomic attempt, since the big reason
networkmanager/odhcpd/dnsmasq go nuts is that they send a lot of ip
del/adds that could be modifies...
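
The kernel mechanism for that is already there: RTM_NEWROUTE with
NLM_F_REPLACE swaps a route in one atomic message instead of a del/add
pair. A rough sketch of the idea (not my actual patch; the prefix,
next hop and ifindex below are made up):

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/rtnetlink.h>

static void
add_attr(struct nlmsghdr *nh, int type, const void *data, int len)
{
    struct rtattr *rta =
        (struct rtattr *)((char *)nh + NLMSG_ALIGN(nh->nlmsg_len));
    rta->rta_type = type;
    rta->rta_len = RTA_LENGTH(len);
    memcpy(RTA_DATA(rta), data, len);
    nh->nlmsg_len = NLMSG_ALIGN(nh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

static int
replace_route(void)
{
    struct {
        struct nlmsghdr nh;
        struct rtmsg rt;
        char attrs[256];
    } req;
    struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
    struct in6_addr dst, gw;
    int ifindex = 2;                      /* made-up interface index */
    int fd, rc;

    memset(&req, 0, sizeof(req));
    req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtmsg));
    req.nh.nlmsg_type = RTM_NEWROUTE;
    /* NLM_F_REPLACE is the whole point: one message, not del + add. */
    req.nh.nlmsg_flags =
        NLM_F_REQUEST | NLM_F_CREATE | NLM_F_REPLACE | NLM_F_ACK;
    req.rt.rtm_family = AF_INET6;
    req.rt.rtm_dst_len = 64;
    req.rt.rtm_table = RT_TABLE_MAIN;
    req.rt.rtm_protocol = RTPROT_BABEL;   /* needs recent kernel headers */
    req.rt.rtm_scope = RT_SCOPE_UNIVERSE;
    req.rt.rtm_type = RTN_UNICAST;

    inet_pton(AF_INET6, "2001:db8:1::", &dst);    /* made-up prefix */
    inet_pton(AF_INET6, "fe80::1", &gw);          /* made-up next hop */
    add_attr(&req.nh, RTA_DST, &dst, sizeof(dst));
    add_attr(&req.nh, RTA_GATEWAY, &gw, sizeof(gw));
    add_attr(&req.nh, RTA_OIF, &ifindex, sizeof(ifindex));

    fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0)
        return -1;
    rc = sendto(fd, &req, req.nh.nlmsg_len, 0,
                (struct sockaddr *)&kernel, sizeof(kernel));
    /* a real implementation would read and check the netlink ACK here */
    close(fd);
    return rc < 0 ? -1 : 0;
}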

> I.e. use large FIFO queues both at the interface layer and in the wifi
> driver?

I'll take a harder look at that later. I think increasing (sigh) the
sndbuf and rcvbuf might "help"... but that way lies madness.

Babel recognizing that there's congestion in the sndbuf and
rescheduling or dropping that packet in order to get the important
ones (hellos) out would be better.
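
Back-of-the-envelope sketch of that idea (helper name and threshold
invented, nothing real): peek at how much is still queued on the send
socket with SIOCOUTQ and defer the bulky stuff when the buffer is
filling up, so hellos always get out:

#include <sys/ioctl.h>
#include <linux/sockios.h>

/* Return 1 if a low-priority update should be deferred, 0 otherwise. */
static int
sendbuf_congested(int fd, int sndbuf_bytes)
{
    int queued = 0;
    if (ioctl(fd, SIOCOUTQ, &queued) < 0)
        return 0;                        /* can't tell, so don't defer */
    return queued > sndbuf_bytes / 2;    /* example threshold: half full */
}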

I'm not really testing wifi; I'd started with the intent to blow up
bird. And I did. And then I got some lithium to throw in the pond....
>
> -- Juliusz



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740


