### Understanding Information Entropy

In discussions of random number generation (RNG), people often use the term “entropy” as if it were interchangeable with “random.” For example:

- The random seed is taken from an __entropy pool__. __Entropy bits__ are added to the pool from external sources such as mouse and keyboard activity, disk I/O operations, and specific interrupts.
- Cloned sibling virtual machines may have loads of entropy in each of their pools, but it is all the __same entropy__, copied over from the same frozen state.
- An RNG seeded with __insufficient entropy__ produces predictable keys.
- RNG failures are often rooted in __bad entropy__.
- Software systems face the security problem of __lacking entropy__.

Mathematically, information entropy is defined as the uncertainty associated with a random variable: the average information content one is missing when one does not know the variable’s value. This article bridges the gap between the common usage of the term “entropy” (as demonstrated in the examples above) and its mathematical definition. The goal is to explain what information entropy really is, what factors determine it, how to analyze and justify an entropy source, and how to assess the quality of an entropy input. For a good understanding of these topics, please read the full article.
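To make the mathematical definition concrete, here is a minimal sketch in Python of Shannon entropy, H(X) = −Σ p(x) log₂ p(x), computed over the empirical distribution of a sample. The function name `shannon_entropy` is illustrative, not from the article:

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Shannon entropy (in bits) of the empirical distribution of `outcomes`."""
    counts = Counter(outcomes)
    total = len(outcomes)
    # H(X) = -sum over outcomes of p(x) * log2(p(x))
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A fair coin has two equally likely outcomes: 1 bit of uncertainty.
print(shannon_entropy(["H", "T"]))          # 1.0
# A heavily biased source carries much less uncertainty per sample.
print(shannon_entropy(["H"] * 9 + ["T"]))   # ~0.469
```

Note how entropy measures uncertainty, not mere presence of randomness: a constant stream ("same entropy copied over", in the cloned-VM example above) has an empirical entropy of zero, no matter how random its single value once was.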

by Yi Mao
