Information theory: a flipping coin, an old script and our brain

What do an ancient script, the activity of the brain and the flipping of a coin have in common? If we consider their measurable properties, as science tends to do when approaching natural phenomena, we might easily claim that all three are objects whose matter and energy are well defined in theory and can be measured in practice. The law of conservation of matter discovered by Lavoisier in 1789 and the law of conservation of energy formulated by Mayer in 1842 are indeed cornerstones of chemistry and physics. However, when trying to explain what is special about living systems, some eminent scientists, like Schrödinger in 1944, pointed to another concept: information. More than kilograms and joules, bits seemed to be the key to the secret of life. Soon after, Shannon wrote a monumental article [Shannon 1948] in which he proved that the only mathematical function satisfying three basic properties (continuity, monotonicity and additivity) as a measure of how much information a discrete sequence of symbols produces is the sum of −p·log p over all possible outcomes: entropy! We sketch Shannon’s proof and apply it to the three cases mentioned above.

First, we calculate the entropy of the heads-and-tails sequence obtained as we flip a coin here and now (thus also illustrating the caveats of performing an experiment). We see that our intuition of a “fair” coin (equiprobable and independent in giving heads or tails) matches the mathematical property of Shannon’s formula that entropy is maximal for a fair coin (maximally unpredictable) and zero for a coin that always yields tails (maximally predictable, so no “surprise”). Then we build on this real example to imagine that, instead of a coin, we roll a die, and we push our imagination further into a “gedanken” experiment in which we roll a die with 27 faces. If each face is a letter of the English alphabet (plus the space), we have a process that could generate English-like text. Adding structure to the frequencies of individual letters and to the probabilities of finding two, three or n letters together (thus biasing the die, the opposite of a “fair” die), we show how the entropy of the sequence produced by the die tends toward the entropy of real English itself. Thus, we could now take the old and mysterious script from a lost civilization and calculate its entropy too. Surprisingly, this has been done, and the result approximated the entropy of natural languages such as Tamil or English, while being much lower than that of a random sequence of symbols. Higher-order structure in any text can therefore be measured mathematically, revealing syntax.

The brain completes this exercise of applying information theory to apparently disparate processes. We explain that neural activity, as reflected by the action potentials of neurons, can be described as a binary signal of 0s and 1s (no spike and spike). Shannon’s ideas and formulas then apply again, allowing us to calculate the information that neural activity encodes about the external sensory world: sound, smell, light, heat and so on, any “secondary quality” translated into a common language of bits in the brain (and by the brain). This is how a large body of scientific work, mostly under the name of computational neuroscience, has made substantial progress in answering what the brain does.
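To make the coin example concrete, here is a minimal Python sketch, not part of the original exposition, that estimates the entropy of an observed heads-and-tails sequence from its symbol frequencies; the two example sequences are invented purely for illustration.

```python
# A minimal sketch (illustrative only): plug-in estimate of Shannon entropy,
# in bits per symbol, from an observed sequence of coin flips.
from collections import Counter
from math import log2

def entropy_bits(sequence):
    """H = -sum(p * log2(p)) over the observed symbol frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

fair   = "HTHHTTHTHTHHTTHT"   # a balanced sample: 8 heads, 8 tails
biased = "TTTTTTTTTTTTTTTT"   # a coin that always yields tails

print(entropy_bits(fair))     # 1.0 bit per flip: maximally unpredictable
print(entropy_bits(biased))   # 0 bits: no "surprise" (Python prints -0.0)
```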
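The 27-faced die can be illustrated the same way. The sketch below, again only an illustration, compares the entropy of a uniform 27-symbol source with unigram and bigram estimates computed from a tiny made-up English sample; a real analysis would use a large corpus, since short samples underestimate entropy.

```python
# A minimal sketch (illustrative only): adding structure to the 27-symbol
# "die" (letters plus space) lowers the entropy of the text it produces.
from collections import Counter
from math import log2

def unigram_entropy(text):
    """H = -sum p(x) log2 p(x) over single symbols."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def conditional_entropy(text):
    """H(next symbol | previous symbol), estimated from adjacent pairs."""
    pairs = Counter(zip(text, text[1:]))
    prev  = Counter(text[:-1])
    n = len(text) - 1
    return -sum((c / n) * log2(c / prev[a]) for (a, b), c in pairs.items())

# A "fair" 27-faced die corresponds to the uniform case:
print(log2(27))                     # about 4.75 bits per symbol

# A tiny, purely illustrative English sample (not a real corpus):
sample = ("the quick brown fox jumps over the lazy dog and the small cat "
          "sleeps near the old wooden door of the quiet house")
print(unigram_entropy(sample))      # lower than 4.75: letter frequencies are uneven
print(conditional_entropy(sample))  # lower still: the previous letter constrains
                                    # the next (a tiny sample exaggerates the drop)
```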
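For the neural case, a hedged sketch of the standard bookkeeping: treating the response in a small time bin as spike or no-spike, the information the neuron carries about a binary stimulus is the mutual information I(S;R) = H(R) − H(R|S). All probabilities below are invented for illustration, not measured data.

```python
# A minimal sketch (illustrative only): mutual information, in bits, between
# a binary stimulus (e.g. light off/on) and a binary neural response
# (no-spike / spike), using I(S;R) = H(R) - H(R|S).
from math import log2

def entropy(probs):
    """H = -sum p log2 p, ignoring zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical conditional spike probabilities (made-up numbers):
p_spike_given_off = 0.1   # the neuron rarely fires when the light is off
p_spike_given_on  = 0.8   # the neuron usually fires when the light is on
p_on = 0.5                # the stimulus is on half of the time

# Marginal probability of a spike, then the two entropies:
p_spike = p_on * p_spike_given_on + (1 - p_on) * p_spike_given_off
h_response = entropy([p_spike, 1 - p_spike])
h_response_given_stimulus = (
    p_on * entropy([p_spike_given_on, 1 - p_spike_given_on])
    + (1 - p_on) * entropy([p_spike_given_off, 1 - p_spike_given_off])
)

# Bits of information the spike carries about the stimulus:
print(h_response - h_response_given_stimulus)   # roughly 0.4 bits per time bin
```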
So, in this brief journey we learn that our brain (and the mouse’s), the old script and the coin game at a casino reflect processes whose information can be precisely defined in a mathematical sense and intuitively interpreted thanks to Shannon’s theory. An analogy now made concrete! And although we still have no clue about how to leap from information to meaning, and thus explain human language and consciousness from the data, entropy remains a conceptual milestone in the history of science, with offshoots both technological and philosophical. From the energy of physics and the matter of chemistry, we jump into biology with the aid of information as entropy; then, pushed by the phenomena neuroscience faces, we climb the gradient toward psychology, where life and mind converge, and finally find ourselves compelled to cast the whole ascent into elusive philosophical questions worth answering.

[Shannon 1948] C. E. Shannon, “A Mathematical Theory of Communication”, The Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, July and October 1948.
