[Freedombox-discuss] Identity management

Mike Rosing eresrch at eskimo.com
Wed Feb 22 21:47:58 UTC 2012


On Wed, 22 Feb 2012, Daniel Kahn Gillmor wrote:

> (i'm one of the monkeysphere devs, and have an interest in seeing this
> freedombox thing succeed too)

Good, I'll have lots of detail questions when I get into this.

> This is a bad idea if you actually care about the strength of your key.
> FWIW, it's also possible to use a user's password as a seed to a PRNG
> to generate an RSA or DSA key.  This doesn't make it a good idea.

That's why I didn't want to get into it.
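
For concreteness, here is a rough sketch of the kind of password-seeded
derivation being discouraged above (Python, standard library only; the
passphrase, salt, and iteration count are made up for illustration).
Anyone who can guess the passphrase can regenerate the identical seed,
so the resulting key is only as strong as the passphrase:

    import hashlib

    # Stretch a memorized passphrase into a fixed-length seed.  This is
    # deterministic: the same passphrase and salt always yield the same
    # 32 bytes, which is exactly why an attacker who guesses the
    # passphrase recovers the same key material.
    passphrase = b"correct horse battery staple"   # illustrative only
    salt = b"freedombox-identity"                  # public, not secret
    seed = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100000)

    # In the scheme described above, this seed would drive the PRNG used
    # for RSA/DSA key generation.  Key stretching slows down guessing,
    # but it cannot add entropy the passphrase never had.
    print(seed.hex())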

> OpenPGP as a cryptosystem (and GnuPG as an implementation of it) is
> malleable enough to have the user's identity stored in their head.  the
> trouble is: for precision storage of high-entropy data, most human heads
> just aren't particularly capable, and a brute-force machine can pretty
> rapidly exhaust most human minds.

You don't always need high entropy.  This is a weak point no matter how
you do it.
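
To put rough numbers on the brute-force point: even a four-word
passphrase drawn from a 2048-word list carries only about 44 bits of
entropy.  A back-of-the-envelope sketch (the offline guess rate is an
assumption):

    import math

    # Entropy of a passphrase of words chosen uniformly at random.
    wordlist_size = 2048                 # Diceware-style list (assumption)
    words = 4
    bits = words * math.log2(wordlist_size)          # 44 bits

    # Time to exhaust the whole space at an assumed offline guess rate.
    guesses_per_second = 1e9                         # assumption
    seconds = 2 ** bits / guesses_per_second
    print(f"{bits:.0f} bits, ~{seconds / 3600:.1f} hours to exhaust")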

> This also encourages the use of arbitrary local machinery, into which
> you type your "only-in-your-head" secret.  Now, there's a copy of this
> secret in the machinery you just used.  was it an internet cafe?  was it
> a friend's machine?  was it your boss's machine?  how do you know that
> machine is not recording what you entered?  If your local endpoint is
> compromised, you've just lost control of your identity.
>
> The natural response to concerns about compromised local endpoints is to
> have a trusted physical console [0].  Once you have a TPC, though, then
> the idea of relying on your mind as a source of high-entropy data is
> rather redundant.  You're maintaining and monitoring your TPC; why not
> use it as an effective cognitive prosthetic on the network?

This goes back to "something you know" and "something you have" - classic
real-world security.  What you are saying is that to have real security
you need a crypto keyboard - everything the user does is encrypted before
it enters the "open" part of the system.  If the "cognitive prosthetic" is
in the user's hands, then what gets captured doesn't matter.  If the
"cognitive prosthetic" is out on the network sitting on a disk, that disk
becomes a target.

I agree that what has to go out should have high entropy, but the way it
gets created is a matter of philosophy.  Just how secure does Freedombox
need to be?  Secure dongles?  Secure keyboards?  A semi-secure passphrase?

> Security is a process, not a magic sauce.  It only works because people
> have to understand at least the outlines of what it's doing.
>
> A classic example of this is the web browser model.  Many people don't
> understand when they should even be looking for the "lock" (or whatever
> UI variant the browsers have decided a valid https session should
> present as today).  They also don't understand that the lock itself
> doesn't mean "the site is trustworthy", it just means "the site is
> actually www.example.com".  These are useful clues that can permit
> people to have a cryptographically-safer browsing experience (CA cartel
> issues aside), but *only* if they understand what the UI clues are and
> why and when they should be relevant.
>
> Browser developers know that if they make their browser "just fail" when
> there's a problem, the user will pick another browser that lets them
> work around the failure ("oh, i can't visit that site with chrome, but
> it's fine with IE").  Is the user more secure as a result?

I guess that's a question of philosophy too.  If the user feels that
getting to the site is worth the pain, there's not much you can do.  Not
everyone uses condoms; that's why AIDS is still spreading.  What level
of education should we impose on the users of Freedombox?
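
To make the earlier "lock" point concrete: everything the padlock
machinery asserts is an identity check.  A minimal sketch (Python,
standard library; www.example.com is just a placeholder host) that does
roughly what the browser does - verify the CA chain and the hostname -
and nothing more:

    import socket
    import ssl

    # create_default_context() turns on CA-chain verification and
    # hostname checking - the whole meaning of the browser "lock".
    host = "www.example.com"                         # placeholder host
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            # All we have learned: this endpoint really is `host`,
            # according to some CA.  Nothing about trustworthiness.
            print("verified as:", dict(x[0] for x in cert["subject"]))

Whether the site behind that verified name is worth trusting is exactly
the part no UI clue can answer for the user.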

> The same is true for real-world security too, fwiw.  A security policy
> that outlines a set of steps to be followed only increases security if
> the people using it understand why they are doing what they're doing.
>
> Humans are a critical part of any security system.  We need to make
> systems that expose the security features to the users in ways that they
> understand, can relate to, and are engaged by.

100% agree with that.  Anything that takes longer than 3 seconds isn't
going to fly.

> I'm not talking about the math or the algorithms, of course; i'm talking
> about the expected properties of the information in transit.  People
> need to know things like:
>
> 0) Am i anonymous in this communication?  or have i claimed an identity
> (and which one)?  if i've claimed an identity, have i proved that i am
> that identity, or is it just an asserted-but-unproven claim?
>
> 1) Do i know who sent me the data i'm looking at (or listening to)?
> Who is the sender?

If we are using anonymous comms, these answers become vague.  We can at
least know it's the same anonymous ID, but that's about it.
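
That "same anonymous ID" property is what a pseudonymous signing key
gives you.  A rough sketch - this one assumes the third-party Python
"cryptography" package, so treat it as illustrative - in which two
messages verify under the same public key and are therefore linkable to
one pseudonym, even though nothing ties the key to a real person:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # The pseudonym is just a keypair with no name attached to it.
    pseudonym = Ed25519PrivateKey.generate()
    pub = pseudonym.public_key()

    msg1 = b"first anonymous post"
    msg2 = b"second anonymous post"
    sig1 = pseudonym.sign(msg1)
    sig2 = pseudonym.sign(msg2)

    # Both verify under the same public key: a reader learns the two
    # messages came from the same (still unnamed) sender, nothing more.
    pub.verify(sig1, msg1)
    pub.verify(sig2, msg2)
    print("same pseudonym signed both messages")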

> 2) Do i know who i'm about to send data to?  Can anyone other than the
> recipients i know about view the data i'm sending?
>
> if the user doesn't know or think about those questions, there is no way
> that a cryptosystem can fix things for them.

True, but we can lock in some apps that have specific behaviors -
point-to-point always, or point-to-broadcast always - so that the sender
sees something different for the specific task at hand.


>
> [0] http://cmrg.fifthhorseman.net/wiki/TrustedPhysicalConsole

Thanks, I'll read more on that.

So the assumption is that users of Freedombox will be educated about
security issues, that they will know when they are taking risks and what
the results of those risks might be, and that they will have access to
safe equipment.  Security is not "invisible".  That's the philosophy
we're working with - yes?

Patience, persistence, truth,
Dr. mike



