Examples of Shannon information "codes"?

We have recently enjoyed a robust discussion of whether DNA is a code or not. From the perspective of Shannon information theory it clearly is, but from the perspective of linguistics or engineering there are significant reasons to think not.

I do not want to recapitulate that debate in this thread. Instead, I would like to explore the question of what other Shannon codes have been observed.

Offhand I would suggest that every hormone is a kind of Shannon code. In other words, some cell conveys information to another cell by means of the hormone. The information is encoded in the hormone and transmitted across a channel that would include biochemical interactions at either end, plus of course the transport channel (bloodstream). The target cell decodes the message and takes some action in response, or takes no action if the hormonal message is not in its “codebook.”
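To make the analogy concrete, here is a minimal sketch in Python (my own toy model, not drawn from any biology paper) that treats the hormonal exchange as a discrete Shannon channel: the sending cell emits a symbol, the bloodstream occasionally garbles or loses it, and the mutual information between what was sent and what was received measures how much information actually gets across.

```python
import math
import random

# Toy model: the sender "transmits" HORMONE or NOTHING; the bloodstream channel
# delivers the wrong symbol with some small probability (degradation, loss, etc.).
SYMBOLS = ["HORMONE", "NOTHING"]

def transmit(symbol, error_rate=0.05):
    """Channel: pass the symbol through, flipping it with probability error_rate."""
    return symbol if random.random() > error_rate else SYMBOLS[1 - SYMBOLS.index(symbol)]

def mutual_information(joint):
    """I(X;Y) in bits, computed from a joint probability table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Estimate the joint distribution of (sent, received) by simulating many messages.
random.seed(0)
counts = {}
trials = 100_000
for _ in range(trials):
    sent = random.choice(SYMBOLS)      # the sending cell's message
    received = transmit(sent)          # what the target cell actually detects
    counts[(sent, received)] = counts.get((sent, received), 0) + 1

joint = {k: v / trials for k, v in counts.items()}
print(f"Mutual information: {mutual_information(joint):.3f} bits per message")
```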

Does this conjecture make sense? Has the application of Shannon information theory to hormonal interactions been discussed in biological literature?

I would further suggest that most, if not all, nervous system activity involves Shannon codes. For example, olfactory nerves detect the aromas of a barbecued steak and start firing neural messages to appetite controllers in the brain. The impulses transmitted along the nerve fibers are a type of Shannon code because they map some information (“barbecued steak!”) onto nerve signals.

Again, does this conjecture make sense? And has the application of Shannon information theory to the nervous system been discussed in the literature?

Thanks!

Are there any examples of naturally occurring Shannon codes in physics? Not just the transfer of energy, but the transmittal of information?

Could you repeat the definition of “code”? Are there multiple definitions? Is a lever a code, because it transmits the information of movement at one end to induce movement at the other end?


I appreciate your interest, John.

I hoped that specifying code as defined by Shannon information theory would suffice. As I have looked around, however, I have noticed that no one seems to define terms like code and channel before they start using them in the information theory context. So… a code is simply a representation of information that allows that information to be transmitted from one location to another.

Let’s restrict ourselves to information theory. Other definitions exist, but I am not interested in them in this thread.

From the perspective of information theory, a lever transmits energy rather than information.

Let me know if you have any other questions or ideas, John. And a happy new year to you!

Chris

I’m not sure it’s a great model for hormones. Response to a hormone is typically a continuous function of the amount of hormone, rather than something that interprets a discrete set of symbols.


What’s the difference? How do you tell if something is a representation of information?

As in the lever in a Morse Code Key transmitter device?

Are you sure that that lever transmits energy but not information?


Thanks for responding, Allen. May your 2019 be blessed!

Transmitting information always requires the transfer of energy. For example, you encode your thoughts into nerve impulses and then into analog sound waves when you speak. Pressing the levers on a piano keyboard encodes and transmits the music from your mind to the minds of an audience.

However, the mere transmission of energy via a tool that is capable of being used for information transmittal doesn’t necessarily imply that information is actually being encoded and transmitted. Your throat produces sound waves when you snore. A cat can walk across piano keys. Neither of these scenarios involves information encoding.

Likewise, a chicken can strut up to a telegraph lever and peck on it. Energy is being transferred, but no information is being encoded.


Thanks for the conversation, John. May your 2019 be blessed!

That’s a really good question and I am not sure I have sufficient philosophical training to answer it well. If the energy that is transferred can be decoded into information, then it was encoded by the sender. If you’re a POW and you hear taps on the wall, it might be the sound of water pipes, in which case no information was encoded. Or it might be Morse code from the lieutenant in the next cell, in which case information was encoded.

This probably seems circular to you; it certainly seems circular to me. But information theory is much more amenable to engineering than to philosophy, AFAICT.
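To show what I mean by the engineering view, here is a minimal sketch (my own illustration, using a tiny made-up fragment of Morse code) of why the taps carry a message only when sender and receiver share a codebook: the same pattern of taps decodes cleanly with the codebook and decodes to nothing without it.

```python
# A tiny shared "codebook": a fragment of Morse code (dot = short tap, dash = long tap).
# Purely illustrative; any shared mapping would serve the same purpose.
MORSE = {"S": "...", "O": "---", "K": "-.-"}
REVERSE = {pattern: letter for letter, pattern in MORSE.items()}

def encode(text):
    """Sender: map each letter to its tap pattern, separated by pauses ('/')."""
    return "/".join(MORSE[ch] for ch in text)

def decode(taps):
    """Receiver: map tap patterns back to letters, or '?' if not in the codebook."""
    return "".join(REVERSE.get(group, "?") for group in taps.split("/"))

signal = encode("SOS")
print(signal)          # .../---/...
print(decode(signal))  # SOS, recoverable because both sides share the codebook

# Random banging on the pipes transfers energy, but decodes to nothing:
print(decode("..--/.-.-"))  # ??
```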

Thanks for responding, Steve. A blessed 2019 to you, yours, and your cute dog.

You make a good point about hormones. I am not sure that it renders the information formulation of hormones invalid, however. First, each listening cell receives a discrete number of hormone molecules (messages), so at the cellular level the response may not be completely continuous. (Scale up to an organ, though, and it is basically continuous, as you note.) Would you agree with me that at the cellular level the hormone molecule can be regarded as a message?

Second, information transfer does not necessarily involve a discrete response. A deep learning model can transform a discretized array of pixels into a probability that the image represents the digit ‘3’, for example. Does that make sense?
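To illustrate (a minimal sketch with made-up numbers, not a real trained network), the last layer of such a classifier typically turns discrete inputs into a continuous probability via a softmax:

```python
import math

def softmax(logits):
    """Turn arbitrary real-valued scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores a trained network might assign to the digits 0-9 for one image.
logits = [0.1, 0.3, 0.2, 4.0, 0.1, 0.2, 0.3, 0.1, 1.0, 0.2]
probs = softmax(logits)
print(f"P(digit is 3) = {probs[3]:.3f}")  # a continuous value, not a discrete yes/no
```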

Finally, I would welcome any thoughts you might have on nerve impulses as the encoding of information.

Thanks!

The main problem here is whether physical causation counts as decoding. If transmission of force along a lever isn’t decoding, how does a series of chemical reactions count? If decoding requires an agent, who’s the agent in the ribosome?

You are making the classic mistake of confusing information with meaning. In all of your examples information is still encoded and transmitted. It may not have meaning in any humanly understood language, but it is still information in the Shannon sense.

Please explain how a cat’s steps on a piano keyboard encode information. To me they seem like noise, not signal. And the distinction between signal and noise is very important in information theory.

Likewise, please explain how snores constitute an information encoding.

EDIT: There is a big difference between information and encoded information. The sound of a tree falling is information; when I hear the sound I can draw an inference. But that sound is not encoded information.

Every time the cat steps on a key the action is encoded as a piano note. Someone listening could reconstruct the pattern of steps based on the tones they heard. It’s still information even if it’s not Vladimir Horowitz tickling the ivories. :slightly_smiling_face:

The notes produced by a cat can be a series of discrete symbols (I’ll assume the cat only steps on one note at a time) carried by a communication channel (piano string → air) to a receiver (your ear). They can therefore be treated as information using Shannon’s information theory. Noise in that theory is something added to the channel from another source. (Assuming I understand the theory correctly – not my field at all.)
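To make that concrete, here is a minimal sketch (with an invented sequence of notes) showing how Shannon’s framework treats the cat’s playing as a stream of discrete symbols and assigns it an entropy, without ever asking whether the notes mean anything:

```python
from collections import Counter
from math import log2

# An invented sequence of notes "played" by the cat: just discrete symbols.
notes = list("CEGCEGCFAC")

counts = Counter(notes)
total = len(notes)

# Shannon entropy: average information per symbol, based only on symbol frequencies.
entropy = -sum((n / total) * log2(n / total) for n in counts.values())
print(f"Entropy of the note stream: {entropy:.2f} bits per note")
```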


Your understanding is correct. In the cat example noise would be an external interference, say a siren from a passing fire engine, which prevents the receiver from hearing the cat notes clearly.

Perhaps it could be, but it’s pretty dicey. A cell is unlikely to be able to distinguish one molecule of hormone from two, and there must be a substantial stochastic component to how it responds. More important, why would you try to treat it this way? It seems like modeling the response with a response function would be a good deal more useful.
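For example (a minimal sketch with arbitrary illustrative parameters, not measured values), a standard dose-response curve such as the Hill equation captures the graded, continuous character of the response directly:

```python
def hill_response(concentration, max_response=1.0, ec50=2.0, n=2.0):
    """Fraction of maximal cellular response at a given hormone concentration
    (Hill equation). Parameter values here are arbitrary and purely illustrative."""
    return max_response * concentration**n / (ec50**n + concentration**n)

for c in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"concentration {c:>4}: response {hill_response(c):.2f}")
```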

Here are two thoughts, from different perspectives, to add to the many helpful responses that others have already provided for your consideration.

As has been pointed out, the concepts of information, encoding, and meaning have both technical and everyday usages; it is important to disentangle the two. I focus on encoding and meaning.

On encoding: Shannon’s 1948 paper was applied mathematics. He provided theorems about the theoretical limits on the efficient, error-free usage of a noisy communication medium with limited capacity. That mathematics has since been used for scientific models in neuroscience, population genetics, thermodynamics and many other fields (I can provide details if you are interested).
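For reference, and stated informally, the two central quantities in those theorems are the entropy of a discrete source and the capacity of a channel:

$$
H(X) = -\sum_{i} p_i \log_2 p_i
\qquad
C = \max_{p(x)} I(X;Y)
$$

The noisy-channel coding theorem then says that arbitrarily reliable communication is possible at any rate below $C$ and impossible above it.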

The power of his work resulted from abstracting away from the engineering of communication channels. To explain his mathematics, he defined ‘information’ and ‘encoding’ in precise technical terms which do not seem to match how I read your usage of these words. In particular, the technical definition of ‘encoding’ was meant to capture how messages are prepared for transmission over the channel; he defines it in section 2 of his 1948 paper.

That paper is readily available online; the first pages are worth reading to understand how Shannon redefined everyday terms as technical terms to explain the mathematics.



On meaning: Grice developed the concepts of natural and nonnatural meaning which I think shed some light on the original issue of whether DNA is a code. The following is adapted from this set of notes. Consider these two usages of the word ‘means’:

  1. [Natural Meaning]: “Spots means measles”.
  2. [Nonnatural meaning]: “A red traffic light means cars stop.”

You can tell the two senses of ‘means’ apart by considering logical deductions based on them.

  1. “Spots means measles, but he has not got measles.” This is a contradiction.
  2. “A red traffic light means cars stop, but the car did not stop.” This is not a contradiction.

Analogously, for DNA and coding, it is possible to use ‘coding’ in two different senses, which I think is the source of the arguments in the other thread (others have pointed this out in a different way). Distinguish between the biochemistry of DNA [natural] and the particular letters and symbols used by scientists to represent codons and amino acids in a table mapping the two [nonnatural].

ETA: A qualification: once scientists have settled on the symbols for the codons and amino acids (non-natural coding; e.g., consider English versus Chinese textbooks), the fact that these symbols are used to model a natural encoding means that the coding table is fixed by the biochemistry (ETA 2 for clarity).


When you say “encoded information”, I take you to be referring to the nonnatural sense. But the mathematics of Shannon can be applied to the natural sense as well: all that the mathematics needs is a model of the world which abstracts it into (1) messages as meaningless strings of symbols and (2) the probability distribution of such messages. (Some applications add noise and capacity as well, abstracted in Shannon’s technical sense.)
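As an illustration of how meaning-free that abstraction is (a minimal sketch with invented frequencies), the same entropy calculation applies identically whether the symbols happen to be nucleotides or letters:

```python
from math import log2

def entropy(distribution):
    """Shannon entropy, in bits, of a probability distribution over symbols."""
    return -sum(p * log2(p) for p in distribution.values() if p > 0)

# Invented, merely illustrative frequencies; the math never asks what the symbols mean.
nucleotides = {"A": 0.30, "T": 0.30, "G": 0.20, "C": 0.20}
letters = {"e": 0.40, "t": 0.30, "a": 0.20, "o": 0.10}  # a tiny toy alphabet

print(f"Entropy of the nucleotide source: {entropy(nucleotides):.2f} bits/symbol")
print(f"Entropy of the letter source:     {entropy(letters):.2f} bits/symbol")
```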


There is substantial philosophy devoted to analysis of the concept of information in science and in everyday usage. Here are some links from SEP if you are interested in further details. The SEP articles do rapidly get into abstruse technical philosophy, but the introductory paragraphs are clear on the scope of the philosophical discussion to follow.

https://plato.stanford.edu/entries/information/

https://plato.stanford.edu/entries/information-biological/

https://plato.stanford.edu/entries/molecular-genetics/#WhaDoGenDNADo

Philosophers have also made use of information in the technical sense. For example, Dretske attempted to apply it as part of his solution to the problem of naturalizing mental content.

https://plato.stanford.edu/entries/content-teleological/#3.1


Information theory is certainly not my strong suit . . . so with that said . . .

Could photon absorption and emission be a type of “code” within Shannon theory? Molecules absorb photons of certain wavelengths and emit light at different wavelengths. Emission can also be a product of temperature.
