gnu at toad.com
Fri Jun 10 01:00:38 UTC 2011
> We want to provide a way for people to share with others narrowly or
> broadly a set of thoughts and media objects hosted on infrastructure
> they own themselves and thus have ultimate control over, as an
> attractive alternative to such sharing via popular existing services
> that provide little control.
But the "privately" from the next paragraph should be up here in this
goal: thoughts or objects shared should not be visible to unauthorized
observers, either while being downloaded or while in storage and being
offered up.
What we want is trivially easy user controlled sharing, broad
or narrow, on their own infrastructure, with privacy and integrity.
The integrity part that I added refers to things like not losing the
contents that users have put into it, even during hardware failures,
and not revealing them to unauthorized third parties such as intruding
secret police who seize the physical box and drives.
Easy key distribution will be key here. If we have no good key
distribution, we'll fail at privacy and integrity. If we have key
distribution that isn't easy, we'll fail at ease of use. If we have
easy key distribution we can reach this goal. It's a hard problem to
solve without centralization, and many projects have reached only a
low level of usefulness because they failed to craft a good solution to it.
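One small, hedged illustration of the kind of building block "easy key
distribution" needs (everything here -- the word list, the key bytes -- is
invented for the example): rendering a public key as a few words that two
people can compare over the phone.

```python
# Sketch only: derive a short, human-comparable fingerprint from a
# public key, so two people can confirm out of band that they hold the
# same key. The toy word list and key bytes are illustrative, not a
# real scheme.

import hashlib

WORDS = ["acorn", "bridge", "cedar", "delta",
         "ember", "fjord", "grove", "harbor"]   # toy 8-word list

def fingerprint(pubkey: bytes, length: int = 4) -> str:
    digest = hashlib.sha256(pubkey).digest()
    # Map successive digest bytes onto the word list.
    return "-".join(WORDS[digest[i] % len(WORDS)] for i in range(length))

print(fingerprint(b"example public key bytes"))
# The same key bytes always yield the same word sequence.
```

A real system would use a much larger word list and still has to
authenticate the key itself; this only shows why comparing a few words is
easier for people than comparing a long hex string.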
> We want to provide a way for people to communicate with each other
> privately, minimizing their dependence on service providers, and
> hopefully providing some resiliency in the face of service outages.
This paragraph unfortunately tangles a few different objectives. Each
of these objectives requires very different software and network
designs. We should untangle them:
*1* A way for people to communicate with each other without the
contents being visible to others (automatic encrypted communication).
*2* A way for people to communicate with each other without even the
fact of that communication between or among these parties being
visible to others (anonymity of communication).
*3* Reduced dependence on service providers, by directly providing
networking services to neighbors.
*4* Resiliency in the face of individual service outages (such as
being cut off for not paying your DSL bill, or for publishing
something that the government wants to censor).
*5* Resiliency in the face of wide-area service outages (such as
a whole city, region, or country having its Internet services
shut down by natural disaster, political turmoil, or a
central government violating their citizens' rights wholesale).
It seems to me that we have relatively good software for #1 (IPSEC
tunnel mode). But key distribution is still a hard and unsolved
problem, as mentioned above.
For #2 we have Tor, which is slow to use, and requires constant
upgrades to outpace those who would block it, censor it, or imprison
those who use it. #2 can be broken down further in ways that
significantly change the engineering effort and the resulting network design:
* Observers know you're communicating, but they don't know with whom.
* Observers don't even know that you are communicating, unless they monitor
your personal wire to the Internet. (Tor-like hiding in a crowd -
but observers can tell that you are running Tor if they look)
* Observers don't even know that you are communicating, even if they monitor
  your personal wire to the Internet. (covert channels / steganography /
  your traffic looks like innocent Web surfing even when analyzed by a
  determined observer.)
For #3 there's the practice of wiring Ethernet to neighboring
apartments or along backyard fences. Today this only tends to occur
when those neighbors have no net connection themselves (for sharing
yours). Doing this when neighbors do have a net connection would
require some significant automation of routing table updates, which as
far as I know has not been programmed robustly. There is also the
as-yet unprogrammed idea of using WiFi links to neighboring homes,
rather than wires, as the alternative connections. I believe that meshing
local WiFi nodes (like this) will fail at scale, due to radio
interference, omnidirectional transmitters, and multi-hop repetition,
resulting in very low available bandwidth. It also has a major
routing problem, unless you assume all machines are NAT clients and
none is providing a publicly accessible service -- an assumption that
violates the first quoted paragraph at the top.
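To make concrete what "automation of routing table updates" might involve,
here is a toy distance-vector sketch (the topology and node names are
invented; a robust version would also need timeouts, route withdrawal, and
loop avoidance, which is exactly the hard, unprogrammed part):

```python
# Toy sketch, not a real routing daemon: each node periodically tells
# its neighbors which destinations it can reach and at what hop count,
# and adopts any cheaper route it hears about (a minimal
# distance-vector idea).

def exchange(tables: dict, links: dict) -> dict:
    """One round of neighbors sharing routes. tables[node] maps dest -> hops."""
    updated = {n: dict(t) for n, t in tables.items()}
    for node, neighbors in links.items():
        for nb in neighbors:
            for dest, hops in tables[nb].items():
                cost = hops + 1
                if cost < updated[node].get(dest, float("inf")):
                    updated[node][dest] = cost
    return updated

# Three apartments wired in a line: A -- B -- C
links = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
tables = {n: {n: 0} for n in links}   # each node knows only itself
for _ in range(2):                    # two rounds suffice for 3 nodes
    tables = exchange(tables, links)
print(tables["A"])  # A learns B is 1 hop away and C is 2
```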
#4 and #5 both depend on #3; if there is no alternative path for
communication, then we can't provide resiliency when the main one fails.
For #4, NAT provides a well understood way for clients to access
public services via alternative paths. But it doesn't help those
who are themselves providing public services (such as a web site,
blog, or file sharing node).
For #5 the demands on the #3 infrastructure are even more intense; the
alternative network must be able to aggregate traffic from hundreds or
thousands of peoples' nodes, routing it to and through any available
wormholes that evade the general poverty of connectedness to the
global Internet. (Such as a satellite phone, a corporate leased line
to the outside world, having one still-surviving commercial network
among many that went down, a dialup modem connection to Amsterdam, and so on.)
I think that Eben's original inspiration was the country-wide shutdown
scenario (#5) which is one of the harder cases. I have a draft design
that could perhaps handle it. I agree that it is worthwhile to
engineer for this case -- but do we have a consensus that the whole
project agrees to do this hard thing?
More information about the Freedombox-discuss mailing list