[Pkg-sysvinit-devel] Re: /etc/init.d/urandom Suggestion

Kent Borg kentborg at borg.org
Sun Feb 26 22:10:11 UTC 2006


I have a suggestion for a change to /etc/init.d/urandom.  I am told
this is the place to propose it.



In general, Linux handles its entropy well, but there are some
worrisome cases.  One example is the machine that is coming up fresh
from an install and maybe generating its first keys.  Another example
is a machine that has crashed and will be booting with an entropy file
that has essentially been used before.

My suggestion is to add one line, something vaguely like:

  cat /dev/mem > /dev/urandom
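
For concreteness, here is roughly where that line might sit in the
script's start action.  The case skeleton below is the usual sysvinit
boilerplate, a sketch rather than a quote from the actual Debian
script:

  case "$1" in
    start)
      # Mix the power-on contents of RAM into the pool, alongside
      # whatever the script already does (e.g., loading the saved
      # seed).
      cat /dev/mem > /dev/urandom
      ;;
  esac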

Current RAM technology is volatile.  Its power-on state is subject to
significant patterning, but it is not completely predictable.  A new
RAM chip will likely have its own patterning due to manufacturing
specifics, making each chip a bit unique.  The power-on contents of
old RAM will be affected by how the RAM is habitually used, making
individual chips become more individual as their histories diverge.

The power-on state will also vary from one power-on to the next,
depending on the memory contents when power was removed, how long the
power was off, and probably factors such as temperature.

Even if the power-on state is remarkably predictable, it doesn't take
a very large "error rate" for even a small amount of RAM to supply the
traditional 4096 bits of entropy maintained by the Linux kernel.
Given a large amount of RAM (say, in a headless server), an "error
rate" only barely above measurable supplies plenty of entropy.

Three examples:

Case 1.  An embedded appliance with 16MB of RAM (e.g., Linksys
WRT54GL).  That is 134,217,728 bits.  If just one bit in every 32K
bits is unpredictable (any bit, it doesn't matter which), that yields
4096 bits of entropy.  That is, an error rate of 3.05e-05 or higher
will yield enough entropy to completely initialize the pool.
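
For anyone who wants to check the arithmetic, bc will do (assuming it
is installed):

  $ echo '134217728 / 32768' | bc
  4096
  $ echo 'scale=7; 4096 / 134217728' | bc
  .0000305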

Case 2.  A headless server with 1 GB of RAM.  Say it crashes and boots
quickly.  While the DRAM controller is being reinitialized a few bits
might flip.  Also, if the server crashed, that suggests it was in some
unpredicted (buggy) state; that is, some bits of its last memory image
are not what anyone would have predicted.  Finally, the working state
of real data at the moment of the crash (maybe the reset button was
bumped) is not something easily known externally.  In this example
there are 8,589,934,592 bits of RAM.  If 1 bit in 2,097,152 is
unpredictable, a full 4096 bits of entropy are available from a 1 GB
machine.
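
The same sanity check for this case, again with bc:

  $ echo '8589934592 / 2097152' | bc
  4096
  $ echo 'scale=10; 4096 / 8589934592' | bc
  .0000004768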

Case 3.  Any machine with nice ongoing entropy collection shuts down
politely, saving its entropy pool, and then reloads the saved pool on
boot.  In this case, there is no shortage of entropy.  However, were
the stored entropy pool to be read (by that traditional cryptographic
foe "Eve") while the computer was shut down, an additional source
would be valuable.
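
For reference, saving and reloading the pool amounts to copying
pool-sized data through a seed file.  A sketch, with the seed file
path assumed for illustration rather than taken from the actual
script (512 bytes matches the traditional 4096-bit pool):

  # at shutdown: save 512 bytes (4096 bits) of pool output
  dd if=/dev/urandom of=/var/lib/urandom/random-seed bs=512 count=1
  # at boot: mix the saved seed back into the pool
  cat /var/lib/urandom/random-seed > /dev/urandom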


Let me look at costs.

Security: Any change to how Linux handles entropy will rightly worry
people.  Response: The whole theory of /dev/urandom assumes that
feeding known data into /dev/urandom causes no harm.  At worst this
proposal feeds known data into /dev/urandom.  This proposal does not
suggest eliminating the saved entropy pool (though we might want to
save it more frequently).  For this proposal to weaken Linux random
numbers there would have to be a major flaw in the Linux
implementation of SHA-1.

Computrons: This will slow down the boot.  Response: It is probably
best put in as something like:

  nohup nice cat /dev/mem > /dev/urandom &

This will run in the background, slowing down other processes only a
little.  Because it traverses all of memory it will hurt caching, but
is boot time when memory caching is at its most valuable?  On an
embedded machine with only a little RAM the traversal will not take
long.  On a machine with multiple gigabytes of RAM the traversal will
take longer, but such machines are fast (bytes per second will be
high) and the rest of the boot takes long enough (waiting for disks)
that the traversal will likely finish before the boot does, leaving
regular operation at full speed.  Maybe only traverse the first
couple GB if a machine has more RAM.
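
A sketch of that cap, assuming dd is available at that point in the
boot (the 2 GB figure and block size are arbitrary):

  nohup nice dd if=/dev/mem of=/dev/urandom bs=1M count=2048 2>/dev/null &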

I don't know about current versions of Debian, but I know Red Hat
used to generate its ssh keys on first boot, at a point when it had
very little entropy.  Debian-based Ubuntu Breezy Badger doesn't
default to installing sshd, so there is necessarily user interaction,
possibly with an entropy-rich mouse, before those keys are generated.
It seems that RAM is a potentially rich (enough) entropy source that
could help in many cases.

Good idea?  Can I answer any questions?


Thanks,

-kb, the Kent who just subscribed.


