Information = Entropy and Chance = Choice

So @jongarvey, do you remember this conversation from a while ago?

I hope you can take a look at this again. It brought us both to a very close understanding of how science understands chance. I wonder if this could be helpful again.

I’ll be summarizing the discussion on ASC soon. @EricMH reached out, and we are planning to publicly discuss it. ASC was proposed by his PhD mentor, Robert Marks. This could be an interesting conversation.

1 Like

I do remember it, and my final conclusion was that the problem with information theory is that there is no information theory of “information,” since it’s pretty absurd to be unable to distinguish between events that are not only irrational, but possibly uncaused, and organised matter such as we find in life.

To summarise my comparison of (ontological) chance and choice, the reason they are formally indistinguishable is that one of them doesn’t exist at all. Which one it is depends on your own metaphysical presumptions.

Where I’m starting to take this in blog posts is to the fundamental nature of “intentional information” (and cognates like design) as an immaterial entity closely tied up with speech and language, whether it occurs in human or divine situations. This accords with the grave problems we have (despite Glipsnort’s assurances) defining that kind of information scientifically. You can’t define teleology, meaning, and the ambiguities of language in terms of mathematical abstractions - and yet these are reality.

Watch that space.

4 Likes

This is an excellent area for discussion. I am going to study this in detail. This is at the cutting edge of science right now and has profound implications for most of what we talk about here.

1 Like

ASC = Algorithmic Specified Complexity

I don’t know if you’ve referenced @evograd’s blog or Tom English’s post at The Skeptical Zone on the ASC paper.

1 Like

That is essentially correct, though it is a mix of jargon and common language.

Exactly.

@jongarvey, I hope you avoid “intentional information” as a term. How about “divine intent”? Or perhaps “providential intent”? Using the word “information” creates a fallacious link between your direction and the ID information arguments for design.

It will also help make sense of an earlier question you had…

Note that there are three types of information (a toy calculation follows the list below):

  1. Total information, or just the information content of an entity (the area of a circle).
  2. Mutual information, or the shared information between two entities (the overlap between the two circles).
  3. Conditional information, or the information required for one entity if we know another entity (the non-overlapping area of one of the circles).
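
To make the circle picture concrete, here is a minimal toy sketch (my own illustration, not from any of the papers under discussion) that estimates all three quantities from two short, aligned symbol sequences. The sequences and function names are purely illustrative.

```python
# Toy estimates of total, mutual, and conditional information (in bits)
# for two aligned symbol sequences. Purely illustrative.
import math
from collections import Counter

def entropy(symbols):
    """Empirical Shannon entropy H(X) of a sequence of symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def joint_entropy(xs, ys):
    """Empirical joint entropy H(X, Y) of two aligned sequences."""
    return entropy(list(zip(xs, ys)))

def mutual_information(xs, ys):
    """I(X; Y) = H(X) + H(Y) - H(X, Y): the overlap of the two circles."""
    return entropy(xs) + entropy(ys) - joint_entropy(xs, ys)

def conditional_entropy(xs, ys):
    """H(X | Y) = H(X, Y) - H(Y): the non-overlapping part of X's circle."""
    return joint_entropy(xs, ys) - entropy(ys)

before = list("ACGTACGTAC")
after  = list("ACGTACGTAA")   # a single point mutation at the last position
print("total:      ", entropy(after))
print("mutual:     ", mutual_information(before, after))
print("conditional:", conditional_entropy(after, before))
```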

So, in your case, if a single mutation changes a gene:

  1. In the most common understanding, the conditional information is guaranteed to increase (unless the sequence does not change). In this case, we mean the conditional information between the before-mutation and after-mutation sequences. It turns out that it is fairly easy to calculate a reasonable estimate of the conditional information here if it is a single mutation.

  2. In the next most common understanding, the mutual information may or may not decrease between the before-mutation and after-mutation sequences. An insertion would keep the mutual information the same, but a deletion or point mutation would reduce it. If A and B are two unrelated sequences, the chance that mutual information will increase is about a 50/50 coin toss in many if not most cases, as we just learned: The EricMH Information Argument and Simulation - #117.

  3. In the least common understanding, the information would most likely increase, though there is a small chance it might decrease. The more mutations there are, the more certain we are that the information would increase. Deletions, however, might reduce the information content on average. Randomly mutating a sequence increases its entropy (usually), and information = entropy (see the sketch just below).
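
As a rough illustration of point 3, here is another sketch of my own, using compressed length as a crude stand-in for information content: random point mutations tend to increase it for a repetitive, low-entropy sequence. The toy sequence and the mutation count are arbitrary choices, not anything from the literature.

```python
# Crude proxy: compressed length (in bits) as a stand-in for information content.
import random
import zlib

def approx_information(seq):
    """Compressed length in bits; an upper-bound-style proxy for entropy."""
    return 8 * len(zlib.compress(seq.encode()))

def mutate(seq, n_mutations):
    """Apply n random point mutations to a DNA string."""
    s = list(seq)
    for _ in range(n_mutations):
        i = random.randrange(len(s))
        s[i] = random.choice("ACGT")
    return "".join(s)

original = "ACGT" * 250            # highly repetitive, low entropy
mutated = mutate(original, 50)
print(approx_information(original), "->", approx_information(mutated))
```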

There are several ways of quantifying or measuring these quantities empirically from data. There are also theoretical approaches that build proofs about the “true” information, which are unmeasurable and unknowable versions of these three quantities. Of note, many (if not most) of the proofs about “true” information do not carry over to empirical information. That translation step between theoretical “true” information and empirical information cannot be neglected. Understanding this translation undergirds a strong intuition about how to interpret information measures of complexity.
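
One small way to see the gap, again as an illustration of my own: different compressors give different empirical estimates for the very same sequence, and each is only an upper bound on the unknowable “true” value.

```python
# Different compressors give different empirical estimates of the same sequence.
import bz2
import lzma
import zlib

text = ("ACGT" * 100 + "TTGACCA" * 20).encode()
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(name, 8 * len(compress(text)), "bits")
```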

There are several other things that should become clear too. If we ever talk about mutual information or conditional information in a formal context, we must always specify in relation to what the information is being measured. We can’t talk about the mutual or conditional information of A without specifying B. We cannot claim to be talking about information theory and use the bare term “information” to mean anything other than total information, which equals entropy. Mutual information is the shared entropy between two objects. Conditional entropy is the unshared entropy. So information is entropy, just as Shannon wrote in his seminal paper.
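
For reference, the standard identities behind this (textbook relations, nothing specific to this thread) are:

I(A; B) = H(A) + H(B) - H(A, B)

H(A | B) = H(A, B) - H(B) = H(A) - I(A; B)

which is just the two-circle picture above written out as equations.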

Questions?

2 Likes

I understand your definitions much better now. Thank you. I see what I was doing in applying information theory to this case.

Your definitions are from the perspective of an outside observer looking at the information of the cell before and after a mutation. With those definitions, yes the observer’s information increases as a result.

What I was doing was looking at information from inside the cell’s perspective. The mother cell has information on how to run and operate the cell. The cell divides, and the daughter cell contains a mutation that degrades the information the daughter cell has to run and operate the cell. Therefore, a loss of information.

I see both perspectives as useful. But for the discussion to follow, I will use your perspective as an observer, which always results in information increasing.

1 Like

How do you know it degrades the “information”? Might it also improve it?

1 Like

You don’t know, and won’t know for generations. Now I see it.

1 Like

Not to belabor the point, but I want to make sure we are seeing this similarly. Do you see why we cannot say the mutation “degrades” the information or reduces it? We could say that the mutation alters the information in the genome. That is certainly true, even though we do not know or specify the significance of said alteration.

2 Likes

Yes, I get it now. And now with that information theory definition and primer, I am ready to learn about the application of Information Theory to Biological Systems. How is that learning going to take place?

1 Like

Well, when I have time. Which is a bit short right now. The BioLogos thread I linked to in the OP is a good starting point. Have you had a chance to read through that yet? It is going to be the basis of the larger conversation. Soon, I want to summarize the key bits, so others have a good setup for that conversation.

1 Like

A post was split to a new topic: Divine Determinism or Does Some Chance Exist?

This is not my major area of study; it is all wrong for a thinker who starts with systemic intuition and moves from there to logic. But I sense it is important, and you @swamidass are more or less available.

So when you say “entropy = information,” are you saying it acts like entropy in the sense that, much like energy, assuming one state takes away possible other forms it could assume? So that once we know something, it eliminates other possible meanings that the data might have had? It’s a hard thing to grasp.

In the link you wrote…

" We can never be sure that what looks like noise to us actually is noise. Equivalently, we can never be sure what looks like to information to us, actually is information."

Well I get the first part but not the second. For example, a few lifetimes ago I was an officer in the Navy and one of my duties was handling the crypto keys. As long as whoever we were sending to had one of those keys, they could extract our messages. If they didn’t, it was noise. The Soviets could build a supercomputer but they could not break our code until they got John Walker to sell them some of the keys. So it would look just like noise unless you had the same key as the sender. There was additional info outside the transmission needed to get the info from the signal. The signal did not contain the information needed to get the information out of the signal. You needed the keys.

But that brings me to the second part of your statement. That part I don’t get. We extracted information from that “noise” on a daily basis and operated on it as if it were right, as if someone’s life depended on it, which it frequently did. There have to be ways to know if we have gotten information from the signal.

As best as I can see, your attempt to explain it went like so…

" The proof is esoteric, but the intuitive explanation is that we can never be sure that we did not see a hidden pattern in the data that would allow us to “compress” or “understand” it better. We can never be sure that what looks like noise to us actually is noise. Equivalently, we can never be sure what looks like to information to us, actually is information.

Again I see the truth of the first part but not the second. Operationally, we act like we can get signal from noise in a lot of ways, and it seems to be working for us. Even if a message is further compressible (like a picture in .BMP format that could have been sent as a much smaller .JPG file), it doesn’t change the information. It just took more data to express it.

The big picture I am getting from all this, without being able to explain step by step if I got it, is that if God was sending information to change living things, guiding evolution as it were, it is unlikely we would be able to detect it. Is that a part of it?

2 Likes

I have read the BioLogos conversation. My takeaway is as below:

  1. Information content according to information theory cannot be equated to meaning or function.
    In fact, a random sequence with no easily discernable pattern will have more information than a repeating pattern.
  2. We don’t have any way to discern between noise and meaningful information in biology because of our incomplete knowledge.
2 Likes

I’m trying to get a handle on the “mutual information” side of things but need a reality check…

I get the impression that the notion of mutual information is not just information that two systems/groups/whatever share, but information that correlates between the two. Thus some datum ‘A’ in system #1 indicates something about the state of system #2 and vice versa. Maybe that’s the result of a causal interaction between the two systems, but perhaps it could be some other correlation. For example, a rock in the field could be wet or dry. If the rock is wet, it probably rained recently. If it’s dry, it probably hasn’t rained recently. “Wet rock” correlates with “Weather = rained recently” and “Dry rock” with “Weather = no recent rain”. The state of the rock’s wetness is causally linked to the state of the weather in the environment. Knowing the state of one system allows us to know the state of the other. Would that be considered ‘mutual information’?

For an organism like a bear (system #1) and its environment (system #2), could we consider the link between bear hibernation and the outside temperature plus the availability of food as a type of mutual information?
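
To check my own intuition, here is a toy calculation of the mutual information between “recent rain” and “wet rock”; the joint probabilities are completely made up for illustration.

```python
# Mutual information between two correlated binary variables,
# with an invented joint distribution P(weather, rock).
import math

joint = {
    ("rained", "wet"): 0.28,
    ("rained", "dry"): 0.02,
    ("no rain", "wet"): 0.05,
    ("no rain", "dry"): 0.65,
}

def marginal(joint, index):
    """Marginal distribution of one variable of the pair."""
    m = {}
    for key, p in joint.items():
        m[key[index]] = m.get(key[index], 0.0) + p
    return m

def mutual_information(joint):
    """I(X; Y) = sum over (x, y) of p(x, y) * log2(p(x, y) / (p(x) * p(y)))."""
    px, py = marginal(joint, 0), marginal(joint, 1)
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

print(mutual_information(joint), "bits")  # positive: the rock tells us about the weather
```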

1 Like

I have worked more on the technical/applied side of scientific research, so I tend to see DNA and information from a different angle than most of the lay public. For me, the complexity of a sequence is defined by my ability to amplify it by PCR, to distinguish it from other parts of the genome, and to clone a piece of DNA. For those things it helps to have a good mixture of all the bases. Some of the problematic DNA I have had to work with includes prokaryote promoter regions and bisulfite-converted DNA, both of which are AT rich. I would also hazard a guess that repetitive or less complex DNA sequences are easier to compress, but that is outside of my expertise.

What I am wondering is whether anyone has applied information theory to a comparison of exons and introns. These seem to be the most obvious targets for comparing sequences that have different histories of selective pressure. I would also assume that information theorists don’t treat all bases the same, given the preference for CpG and transition mutations.

Overall, it is interesting to see how information theorists approach DNA sequence. I tend to be the person in the forest, so it is good to hear from people who are looking at the forest as a whole.

3 Likes

Yup, that is a well-known distinction. If you look at the papers describing different variants of BLAST, you can see one type of analysis that engages these different types of sequences.

Are you following along here? Eric Holloway: Algorithmic Specified Complexity.

Yes, I am tracking this and I can follow the math. But I don’t see ANY connection between this math and anything that remotely correlates with Shannon Information and Biological Systems. This is so abstract, and it relates to computational complexity and writing code. I don’t see any connection to biological systems and evolutionary science. But please keep going, as I see that Marks made some errors that you are picking up.

2 Likes