[Pkg-gnutls-maint] Bug#475168: Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

sacrificial-spam-address at horizon.com
Fri Apr 11 13:11:57 UTC 2008


Simon Josefsson <simon at josefsson.org> wrote:
> sacrificial-spam-address at horizon.com writes:
>> That paper deserves a longer reply, but even granting every claim it
>> makes, the only things it complains about are forward secrecy (is it
>> feasible to reproduce earlier /dev/*random outputs after capturing the
>> internal state of the pool from kernel memory?) and entropy estimation
>> (is there really as much seed entropy as /dev/random estimates?).
>>
>> The latter is only relevant to /dev/random.
>
> Why's that?  If the entropy estimation is wrong, you may have too
> little or no entropy.  /dev/urandom can't give you more entropy than
> /dev/random in that case.

The quality (or lack thereof) of the kernel's entropy estimation is relevant
only to /dev/random, because /dev/urandom's operation doesn't depend on
the entropy estimates.  If you're using /dev/urandom, it doesn't matter
if the kernel's entropy estimation is right, wrong, or commented out of
the source code.

>> In either case, if you want K bits worth of seed entropy for your PRNG,
>> it is completely and utterly pointless to read more than K bits from
>> /dev/urandom.  If N >= K, you will get K bits of entropy with a K-bit
>> read.  If N < K, you will not get more than N bits of entropy no
>> matter how much you read.
>
> Right, although this is irrelevant if the seed doesn't contain
> sufficient entropy.  If the attacker can guess (or exhaustively try) all
> the N bits of seed entropy, using it as a seed for a PRNG won't improve
> things.

Um... we appear to be talking past each other.  This is not irrelevant
if the seed doesn't contain sufficient entropy; this is ABOUT how much
entropy the seed contains.  "If the seed doesn't contain sufficient
entropy" is the N < K case I wrote about.

Can you clarify what you mean?  How can a description of what happens in
a particular situation be irrelevant if that situation happens?


> However, my main concern with Linux's /dev/urandom is that it is too
> slow, not that the entropy estimate may be wrong.  I don't see why it
> couldn't be a fast PRNG with good properties (forward secrecy) seeded by
> a continuously refreshed strong seed, and that reading GB's of data from
> /dev/urandom would not deplete the /dev/random entropy pool.  This would
> help 'dd if=/dev/urandom of=/dev/hda' as well.

Being slow was, and remains, one of Ted Ts'o's original design goals.  He wanted
to do in kernel space ONLY what is not feasible to do in user space.
The fast PRNG you propose is trivial to do in user space, where it
can be seeded from /dev/urandom.
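
To make "trivial" concrete, here is a minimal sketch in C of such a
user-space generator: a ChaCha20 block function (my choice of primitive;
any decent stream cipher or hash-based construction would do), keyed once
with 32 bytes from /dev/urandom.  All the names and structure here are
illustrative, not a proposal for any particular library:

    #include <stdint.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))
    #define QR(a, b, c, d)                  \
        a += b; d ^= a; d = ROTL32(d, 16);  \
        c += d; b ^= c; b = ROTL32(b, 12);  \
        a += b; d ^= a; d = ROTL32(d, 8);   \
        c += d; b ^= c; b = ROTL32(b, 7);

    /* One ChaCha20 block: 20 rounds over the 16-word state, then
       add the input state to the result. */
    static void chacha20_block(const uint32_t in[16], uint32_t out[16])
    {
        uint32_t x[16];
        int i;
        memcpy(x, in, sizeof x);
        for (i = 0; i < 10; i++) {   /* 10 double rounds = 20 rounds */
            QR(x[0], x[4], x[ 8], x[12]) QR(x[1], x[5], x[ 9], x[13])
            QR(x[2], x[6], x[10], x[14]) QR(x[3], x[7], x[11], x[15])
            QR(x[0], x[5], x[10], x[15]) QR(x[1], x[6], x[11], x[12])
            QR(x[2], x[7], x[ 8], x[13]) QR(x[3], x[4], x[ 9], x[14])
        }
        for (i = 0; i < 16; i++)
            out[i] = x[i] + in[i];
    }

    struct prng { uint32_t state[16]; };

    /* Seed ONCE: 32 bytes (256 bits) from /dev/urandom, no more.
       (In practice a 32-byte read of /dev/urandom is never short.) */
    static int prng_init(struct prng *g)
    {
        static const uint32_t sigma[4] =   /* "expand 32-byte k" */
            { 0x61707865, 0x3320646e, 0x79622d32, 0x6b206574 };
        unsigned char seed[32];
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        if (read(fd, seed, sizeof seed) != (ssize_t)sizeof seed) {
            close(fd);
            return -1;
        }
        close(fd);
        memcpy(g->state,     sigma, sizeof sigma); /* constants   */
        memcpy(g->state + 4, seed,  sizeof seed);  /* 256-bit key */
        memset(g->state + 12, 0, 16);              /* counter + nonce */
        return 0;
    }

    /* Generate arbitrarily many bytes without touching the kernel again.
       (This sketch ignores ChaCha's little-endian word serialization,
       which is fine when all we want is random bytes, and it does not
       rekey from its own output for forward secrecy; a real
       implementation should.) */
    static void prng_bytes(struct prng *g, unsigned char *buf, size_t len)
    {
        uint32_t block[16];
        while (len > 0) {
            size_t take = len < 64 ? len : 64;
            chacha20_block(g->state, block);
            g->state[12]++;          /* 32-bit block counter */
            memcpy(buf, block, take);
            buf += take;
            len -= take;
        }
    }

That is the whole thing; it will happily saturate a disk, and it costs
the kernel's pool exactly 32 bytes, once.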

An important design goal of /dev/random is that it is always available;
note that there is no CONFIG_RANDOM option to compile it out, even under
CONFIG_EMBEDDED.  This requires that the code be kept small, and
additional features conflict with that goal.

Some people have suggested a /dev/frandom (fast random) in the kernel
for the application you're talking about, but AFAIK the question "why
should it be in the kernel" has never been adequately answered.

>> So libgcrypt's seeding is "ugly and stupid" and is in desperate need of
>> fixing.  Reading more bits and distilling them down only works on physical
>> entropy sources, and /dev/urandom has already done that.  Doing it again
>> is a complete and total waste of time.  If there are only 64 bits of
>> entropy in /dev/urandom, then it doesn't matter whether you read 8 bytes,
>> 8 kilobytes, or 8 gigabytes; there are only 2^64 possible outputs.
>>
>> Like openssl, it should read 256 bits from /dev/urandom and stop.
>> There is zero benefit to asking for more.
>
> I'm concerned that the approach could be weak -- the quality of data
> from /dev/urandom can be low if your system was just rebooted, and no
> entropy has been gathered yet.  This is especially true for embedded
> systems.  As it happens, GnuTLS could be involved in sending email early
> in the boot process, so this is a practical scenario.

Again, we appear to be talking past each other.  What part of this is
weak?  libgcrypt already seeds itself from /dev/urandom.  At any given
time (such as at boot time), one of two things is true:
1) The kernel contains sufficient entropy (again, I'm not talking about
   its fallible estimates, but the unknowable truth) to satisfy the
   desired K-bit security level, or
2) It does not. 

As long as you are not willing to wait, and thus are using /dev/urandom,
reading more than K bits is pointless.  In case 1, you will get the K bits
you want.  In case 2, you will get as much entropy as there is to be had.
Reading more bytes won't get you the tiniest shred of additional entropy.

If you're going to open and read from /dev/urandom, you should stop after
reading 32 bytes.  There is NEVER a good reason to read more when seeding
a cryptographic PRNG.
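
In code, the entire correct seeding step is just this (a sketch; what to
do on failure is the caller's policy):

    #include <fcntl.h>
    #include <unistd.h>

    /* Read exactly 32 bytes (256 bits) from /dev/urandom and STOP.
       Reading more cannot add entropy; it only drains the pool. */
    static int read_seed(unsigned char seed[32])
    {
        size_t got = 0;
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        while (got < 32) {           /* tolerate short reads */
            ssize_t n = read(fd, seed + got, 32 - got);
            if (n <= 0) {
                close(fd);
                return -1;
            }
            got += (size_t)n;
        }
        close(fd);
        return 0;
    }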

Reading more bytes from /dev/urandom is just loudly advertising one's
cluelessness; it is exactly as stupid as attaching a huge spoiler and
racing stripes to a Honda Civic.

> A seeds file would help here, and has been the suggestion from Werner.
> If certtool used a libgcrypt seeds file, I believe it would have solved
> your problem as well.

My problem is that certtool reads a totally unreasonable amount of data
from /dev/urandom.  It appears that specifying a seed file will change
libgcrypt's behavior, but I haven't worked my way through the code well
enough to predict how.

>> (If you want confirmation, please ask someone you trust.  David Wagner
>> at Berkeley and Ian Goldberg at Waterloo are both pretty approachable.)
>
> I've been in a few discussions with David about /dev/random, for example
> <http://thread.gmane.org/gmane.comp.encryption.general/11397/focus=11456>,
> and I haven't noticed that we had any different opinions about this.

Well, he calls /dev/random "blindingly fast" in that thread, which appears
to differ from your opinion. :-)

>> Fair enough.  Are you saying that you prefer a patch to gnutls rather than
>> one to libgcrypt?
>
> Yes, that could be discussed.  This problem is really in libgcrypt, so
> the best would be if you were successful in fixing this problem at the
> root.  Alternatively, work on improving /dev/urandom in Linux so that
> GnuTLS can read from it directly and use it as the PRNG instead of
> libgcrypt.  Until any of that materializes, I would certainly review and
> consider patches to gnutls.

Um, GnuTLS can already read from /dev/urandom directly.  What enhancements
are required?  The hard thing is maintaining compatibility with
systems without /dev/random.  For that, you need a whole pile of fragile
user-space entropy harvesting code (which is what motivated Ted to write
/dev/random in the first place), and a cryptographic compression function
to distill the entropy.  Of course, once you have that code written,
you might as well just seed it once from /dev/urandom and generate all
the random bytes you need in user-space.
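
To tie that back to the 'dd if=/dev/urandom of=/dev/hda' example: with a
generator like the prng sketch earlier in this message, the seed-once
pattern is just this (an illustrative fragment, reusing the hypothetical
prng_init/prng_bytes from above):

    struct prng g;
    unsigned char buf[4096];
    if (prng_init(&g) != 0)
        return 1;            /* could not seed; give up loudly */
    for (;;) {               /* bulk randomness, no further kernel cost */
        prng_bytes(&g, buf, sizeof buf);
        if (write(1, buf, sizeof buf) != (ssize_t)sizeof buf)
            break;           /* e.g. pipe stdout into the disk */
    }

Thirty-two bytes of kernel entropy in, gigabytes of output out.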
