[Freedombox-discuss] Rogue Freedomboxes and government intervention

John Gilmore gnu at toad.com
Wed Jun 22 20:56:59 UTC 2011

> Freedombox will be open source and use a peer to peer methodology, what's
> stopping a hostile government from running their own 'Freedombox Honeypots'
> and targeting/locating users for arrest?

This is a serious concern, worth thinking about.

A minor intrusion would be for hostile governments (or others) to run
standard FreedomBoxes and then social-engineer their way into people's
circles of trusted friends.  This is no different from how they
currently infiltrate groups that they distrust, by insinuating covert
agents into groups of suspects.

A larger scale intrusion, which I think is the threat model you're
discussing, would be to offer up modified versions of the FreedomBox
hardware and/or software for free download.  These modified versions
would not actually protect the user -- the modifications would spy on
the user and report back to the author of the modifications -- but on
the surface they'd look just like ordinary FreedomBox software.

This is the rough equivalent of a country "impersonating" facebook.com
and stealing users' login names and passwords.  In the FreedomBox case
it takes more long-term planning and execution, since they'd have to
attack you before you install your FreedomBox, rather than on any day
that you use your FreedomBox.  It's also a more "stealthy" attack:
installing malware that claims to be FreedomBox software might subvert
users for a long time before the fraud was detected.  (If a week later
they arrested or disappeared everyone who downloaded and installed
their Trojan Horse FB, but didn't arrest people who got the FB
software some other way, the remaining activist community would figure
it out pretty quickly.  The bad guys would have to wait until much later
before revealing that they knew about the Trojan FB users.)  The
attack would require ongoing maintenance, crafting new malware updates
as the real FreedomBox software evolved, since users would eventually
notice if their Trojan FB didn't receive new features on the same schedule as
their friends' and other people's FreedomBoxes.

NSA ran such an attack for decades, beginning in the 1950s, against
Boris Hagelin's "Crypto AG" company, which supplied cipher machines to
countries worldwide.  It appears to have been an inside job, in which
the founder of the company deliberately subverted his own machines for
NSA's benefit.  See:


We can't prevent forks of the FreedomBox software, nor would we want
to.  We can't prevent countries from impersonating the freedombox.org
website on the portions of the Internet inside their borders.  We
should think about ways to hand-carry a validator that could detect
many kinds of modified FreedomBoxes.  When someone with a genuine FB
validator visited activists with hacked FBs, it could detect the
difference, prompting the victims to get their own validators and then
get new FB servers that pass the validation check.

As an aside, this kind of attack is what "Trusted Computing" was
supposed to defend against.  Hollywood companies wanted a way to
*ensure* that when talking over the net to your computer, they were
talking to one that ran only *authorized* DRM software (i.e. no
"subverted" programs that would let the user do as they wish).  In the
same way, FreedomBoxes could in theory ensure that they were talking
to real FreedomBoxes.

If there were no malevolent people in the world, Trusted Computing
wouldn't be a bad thing -- but would also be unnecessary.  In a world
full of powerful and malevolent people, Trusted Computing would just
become a way for THEM to guarantee that we're forced to run their
choice of malware, rather than free-as-in-freedom software that
respects users' choices.

When Ken Thompson received his Turing Award for the creation of Unix,
he gave a lecture called "Reflections on Trusting Trust".  In it he
describes how easy it was to modify the early Unix system to subvert
our trust in it in ways that are very hard to detect.  It references a
paper from the Air Force that did a penetration analysis of Multics
(an obsolete predecessor to Unix).  They covertly modified the
operating system on the manufacturer's development machines, to insert
a Trojan Horse, which was later distributed by the manufacturer.  The
Air Force concluded that they couldn't trust Multics except in benign
environments.  Thompson concludes with "You can't trust code that you
didn't totally create yourself.  (Especially code from companies that
employ people like me.)  No amount of source-level verification or
scrutiny will protect you from using untrusted code."  By the way,
"code that you didn't create yourself" includes the microcode and
circuitry in your CPU chips.  Who designed your CPU?  Whose ECAD
design tools did they use?  Who manufactured the CPU chip?  Who made
the photographic masks for the manufacturing?  Who wrote the firmware
that boots the system?  Who flashed the firmware into the manufactured
server?  Who supplied your compiler, assembler, linker, runtime
loader, and kernel?  Whose software COMPILED your firmware, flasher,
compiler, assembler, linker, runtime loader, and kernel?  This list
includes dozens of independent companies, scattered around the world.
Any of those parties could have included hidden Trojan Horses in the
server hardware or software that we rely upon to enforce our security
instructions.  And any of them could be innocent, honest, and diligent,
but could have been penetrated by malevolent agents who inserted
Trojan Horses without that company's knowledge.
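Thompson's compiler attack is worth seeing concretely.  Here is a
grossly simplified toy version -- the "compiler" is just an identity
transform on source text, and a stored copy of the trojan stands in
for the self-reproducing program Thompson actually used -- but it
shows the two back doors: one that subverts the login program, and one
that subverts the compiler itself so that recompiling fully audited,
clean compiler source still yields a subverted compiler:

```python
# Back door 1 as a source fragment, so back door 2 can plant it
# inside a freshly compiled "clean" compiler.  (Thompson used a
# self-reproducing program for this step; storing the source is
# the simplest stand-in for that trick.)
TROJAN_LOGIC = '''
    if "def check_password" in source:
        source = source.replace(
            "return password == stored",
            "return password == stored or password == 'joshua'")
'''

def trojan_compile(source):
    """Toy trojaned 'compiler': passes source through unchanged,
    except for Thompson's two back doors."""
    # Back door 1: recognize the login program and accept a
    # master password in addition to the real one.
    if "def check_password" in source:
        source = source.replace(
            "return password == stored",
            "return password == stored or password == 'joshua'")
    # Back door 2: recognize a clean compiler being compiled, and
    # plant back door 1 into the output.  (The full attack would
    # also replant back door 2 itself, so the trojan survives
    # indefinitely; that's the self-reproducing part omitted here.)
    if "def compile(source)" in source:
        source = source.replace(
            "    return source",
            TROJAN_LOGIC + "    return source")
    return source
```

The punch line is that auditing the *source* of the login program and
of the clean compiler finds nothing: the trojan lives only in the
binary doing the compiling.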

All you can do is raise the cost of subverting trust -- you can never
eliminate the possibility of subverting it.  (Or as Adi Shamir has
said, "There are no secure systems, only degrees of insecurity.")  Ken
Thompson's page at Bell Labs is down, and the information gangsters at
ACM want to charge you to download this, so see his talk here:


The Multics attack paper by Karger & Schell that he references at the
end is here:


	John Gilmore
