[Babel-users] Ability to work with massive number of routes? (global full-table)
Juliusz Chroboczek
jch at irif.fr
Wed Sep 26 16:35:25 BST 2018
> Well, I took a stab at some of that. In particular, babeld has to compare
> a lot of bytes, and while profiling it on these loads, was totally
> bottlenecked on memcmp.
Yep, the code in xroute.c is pretty pessimal, since I wasn't expecting
people to have massive numbers of redistributed routes. That's easily
fixed, though.
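One simple possibility would be to keep the redistributed routes sorted
by prefix and binary-search them, rather than scanning the whole table
with memcmp on every lookup. A rough sketch, with made-up names rather
than the actual xroute.c data structures:

#include <string.h>

/* Sketch only: keep exported routes sorted by (prefix, plen) so lookup
   is an O(log n) binary search instead of an O(n) scan full of memcmp.
   These names are illustrative, not babeld's. */

struct my_xroute {
    unsigned char prefix[16];
    unsigned char plen;
    /* metric, interface, etc. */
};

static struct my_xroute *my_xroutes;    /* sorted array */
static int my_numxroutes;

static int
my_xroute_compare(const unsigned char *prefix, unsigned char plen,
                  const struct my_xroute *xr)
{
    int rc = memcmp(prefix, xr->prefix, 16);
    if(rc != 0)
        return rc;
    return (int)plen - (int)xr->plen;
}

static int
find_my_xroute(const unsigned char *prefix, unsigned char plen)
{
    int lo = 0, hi = my_numxroutes - 1;
    while(lo <= hi) {
        int mid = (lo + hi) / 2;
        int rc = my_xroute_compare(prefix, plen, &my_xroutes[mid]);
        if(rc == 0)
            return mid;
        else if(rc < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return -1;    /* not found */
}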
Note that this does not apply to normal routes (learned from other Babel
speakers) -- if your profiling shows any hotspots in route.c, then I'm
interested.
> And - setting a goal for 64k routes - I thought that switching to
> a normalized table structure internally would be useful. Instead of
> storing the nexthop as a full address, store a 16-bit index pointing
> into an array of those nexthops.
We already do (except that we use a pointer rather than an integer index);
see struct route in route.h.
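In other words, the next hop is stored once, in the neighbour structure,
and every route through it just keeps a pointer. Illustrative
declarations only, not the real definitions from route.h:

struct my_neighbour {
    unsigned char address[16];     /* the next hop, stored once */
    /* interface, link cost, etc. */
};

struct my_route {
    unsigned char prefix[16];
    unsigned char plen;
    unsigned short metric;
    struct my_neighbour *neigh;    /* shared, not copied per route */
};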
> We could write a roadmap for "better wifi mesh networking", but it
> starts with "more bodies".
Hehe.
> * walking linked lists
Hm. If you're seeing that in your profiles, then your network is very
dense -- only redundant (unused) routes are kept in linked lists. May
I see your profile, please?
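For concreteness, the shape is roughly this (illustrative names, not the
actual route.c structures): one selected route per prefix, with any
redundant routes to the same prefix hanging off it in a list, so list
walks only get long when the network offers many alternate routes.

struct my_rt {
    unsigned char prefix[16];
    unsigned char plen;
    unsigned short metric;
    int selected;             /* nonzero for the installed route */
    struct my_rt *next;       /* next redundant route, or NULL */
};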
> * running out of bandwidth
I think it's the bursty nature of the traffic that kills you, not the
average throughput. I'll definitely be working on that, but I first need
to reproduce your results in a controlled situation.
> * recalcing bellman-ford
That shouldn't cost much, since the computation is incremental. If it
does, I need to see your profile.
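To make that concrete: an update only triggers a relaxation step for the
prefix it names, never anything resembling a full-table recomputation.
A sketch in pseudocode-ish C (illustrative, not the actual route.c code):

/* 'current' is the metric of the currently selected route for the
   prefix, or 0xFFFF (infinity) if there is none.  Returns the metric
   after considering one received update. */
static unsigned short
consider_update(unsigned short current, unsigned short advertised,
                unsigned short link_cost)
{
    unsigned int m = (unsigned int)advertised + link_cost;
    if(m > 0xFFFF)
        m = 0xFFFF;                       /* clamp at infinity */
    /* single-prefix Bellman-Ford relaxation */
    return (unsigned short)(m < current ? m : current);
}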
-- Juliusz