(You can move this to a side comment thread if it’s more appropriate.)
Questions about these two:
When explaining functional information (FI), you initially pointed to the overlap between cancer 1 and cancer 2 that is excluded by the germline, i.e. (C_1 \cap C_2) \setminus G.
But are you defining FI to be equal to this? i.e. FI = (C_1 \cap C_2) \setminus G.
Or are you simply saying that FI is strictly a subset of this, a subset we identify using deeper biological theory and experiments (rather than by definition), so that not all of it is necessarily FI? i.e. FI \subset (C_1 \cap C_2) \setminus G.
Could there also be FI left in cancer 1 and cancer 2 that is part of the germline? i.e. \exists x : x \in FI \wedge x \in (C_1 \cap C_2 \cap G)?
What I’m confused about is that it seems to me that while mutual information is a term that can be strictly defined mathematically, functional information seems to depend on empirical experiments, at least in the way people are using these terms. Thus you can’t prove anything about FI by using math alone.
Great question. No, we do not define FI this way. We infer FI from recurrence, and this is usually a valid inference.
Rather, the chance of the same mutations arising in multiple independent cancers is very low by random chance alone. I’ve even shown how to estimate the amount of information they have in common. These common mutations are called “recurrent” in cancer genomics. In evolutionary terminology, they are homoplastic, or convergent, mutations. In light of the very low likelihood by random chance, at this point ID makes the design inference.
However, the design inference does not appear justified in this context, unless someone wants to argue that God is guiding the development of cancer. So this appears to be a clear false positive of the design detection mathematics of ID.
As scientists, we know that isn’t the whole story. We still want to know how such a low-likelihood event takes place. We need a good explanation for how this mutual information, these recurrent mutations, arose. The best explanation is that the cancers have common mutations because they share the same cancer “function.” Remembering that genotype often causes phenotype, we rationally conclude that the common mutations are those causing the cancer phenotype-function. We therefore infer that recurrent mutations are driver mutations that cause the cancer function.
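To make this recurrence inference concrete, here is a minimal Python sketch with an invented three-tumor cohort and made-up mutation names (nothing here is real data): count how many independent tumors carry each somatic mutation, and flag the recurrent ones as candidate drivers.

```python
# Hypothetical sketch of the recurrence inference described above.
# The cohort and mutation names are invented for illustration only.
from collections import Counter

# Somatic mutations seen in three independent tumors (toy data),
# already filtered against each patient's germline variants.
tumors = [
    {"KRAS_G12D", "TP53_R175H", "APC_R1450X", "PASSENGER_0042"},
    {"KRAS_G12D", "TP53_R175H", "PASSENGER_7731"},
    {"KRAS_G12D", "APC_R1450X", "PASSENGER_1209"},
]

# Count how many independent tumors carry each mutation.
recurrence = Counter(m for tumor in tumors for m in tumor)

# Mutations shared by two or more independent tumors are "recurrent,"
# and we probabilistically infer they are drivers of the cancer function.
candidate_drivers = {m for m, n in recurrence.items() if n >= 2}
print(candidate_drivers)  # {'KRAS_G12D', 'TP53_R175H', 'APC_R1450X'} (order may vary)
```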
Is this inference entirely accurate? Even without knowing the details, it should be obvious the answer is “no”. For every rule in biology there are exceptions. I’m not going to get into that here, though. This is usually a valid inference. Notice that it parallels how Durston, Dembski, and Marks define FI: (1) gather examples of entities with a function, (2) compute their mutual information.
True. What they do, however, is just assume:
- life must have very high FI,
- the only source of FI is intelligence.
Durston has improved on this a bit by actually trying to measure FI. However, he misunderstands how to compute FI, associating it with the wrong type of MI. I’ll show that visually in a moment. So he fails at this basic computation in the end, but at least ID is trying to engage the data. They just do not have enough working knowledge of information theory to see for themselves when they are in error.
So what can cause high mutual information? It very much depends on what type of mutual information we are talking about. We can grant a few possibilities.
- Intelligence, we can presume is a technical possibility.
- Common history / Common ancestry (common starting point before mutation added)
- Common mutational distributions and mechanisms (neutral evolution).
- Common selective pressures (best explanation for most of cancer FI).
- Complex interactions between all of the above.
All these can produce high MI in the right contexts. Of most importance is #2, common history. It turns out this explains the vast majority of the MI we see in biology, and because ID is non-committal about or opposed to common descent, they are blind to how this affects their calculations. In the case of cancer, if we remove the roughly 6 billion bits of mutual information from shared history, the remaining amount is probably FI, mostly caused by common selective pressures (i.e. natural selection). It is, however, just a tiny fraction of the total amount of MI between two cancer genomes.
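To make the role of common history concrete, here is a minimal toy simulation (invented numbers, nowhere near real genome scale or mutation rates): two “cancers” start from the same germline, each acquires its own random somatic mutations, and a handful of shared “driver” positions stand in for common selective pressure. The raw overlap between the two is dominated by the shared starting point; once the germline is removed, what remains is essentially just the drivers.

```python
# Toy simulation: shared history dominates the overlap between two cancer
# genomes. All numbers are invented and far smaller than a real genome.
import random

random.seed(0)
GENOME = 3_000_000                        # toy genome length (positions)

# Germline variants inherited by both cancers: the common starting point.
germline = set(random.sample(range(GENOME), 100_000))

# A handful of positions favored by the common selective pressure of cancer.
drivers = set(random.sample(range(GENOME), 5))

def somatic(n):
    """Independent random somatic mutations for one tumor."""
    return set(random.sample(range(GENOME), n))

cancer1 = germline | drivers | somatic(1_000)
cancer2 = germline | drivers | somatic(1_000)

overlap = cancer1 & cancer2               # everything the two cancers share
functional = overlap - germline           # the share not explained by history

print(len(overlap))     # ~100,000: dominated by the shared germline
print(len(functional))  # ~5: essentially just the driver positions
```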
The key point is that there is absolutely zero justification for the belief that FI is a unique signature of minds. Zero. What Dembski (and, for example, @EricMH) does is an end run around the hard work of untangling the contributions of all these mechanisms to MI. Instead, they declare that if there is MI, it must be intelligence. Zero justification. Zero tests. Zero demonstration.
This comes out strikingly in Durston’s work. He computes FI incorrectly, using an MI that does not take common ancestry into account. He just assumes that all MI must be produced by a mind, and never actually does a coherent simulation of DNA evolution. If he did, and then applied some clear thinking, he would find that common history explains the MI, as he computes it, just fine.
The reason is that he computes FI incorrectly, by equating it with the wrong type of MI. What we really want is the mutual information computed like this, excluding everything caused by common history:
FI = (C_1 \cap C_2) \setminus G, which corresponds to the tiny overlap here (about 60-350 bits, not drawn to scale):
They, however, make the mistake of computing FI this way instead, lumping in common history as if it too were caused by common function (about 6 billion bits here, and not to scale):
FI = (C_1 \cap C_2), which corresponds to the overlap here. In our example of cancer genomes, the FI would be computed at about 6 billion bits. As you can see, that equivocation between different types of MI wildly overestimates FI by neglecting the contribution of common shared history.
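Here is a tiny set-based sketch of the two computations being contrasted (invented positions, not real data). The only difference is whether the shared germline is removed before counting, and that difference is exactly what inflates the second number.

```python
# Toy sets of mutated positions, purely to make the two formulas concrete.
germline = {1, 2, 3, 4, 5, 6, 7, 8}       # G: inherited variants
cancer1 = germline | {100, 101, 202}      # C1: G plus somatic mutations
cancer2 = germline | {100, 101, 303}      # C2: G plus somatic mutations

fi = (cancer1 & cancer2) - germline       # (C1 ∩ C2) \ G  -> {100, 101}
fi_inflated = cancer1 & cancer2           # C1 ∩ C2        -> G plus {100, 101}

print(len(fi), len(fi_inflated))          # 2 vs 10: common history inflates FI
```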
I want to emphasize, from my interactions with Durston and other ID advocates, that I do not think this is dishonesty. They appear to have been blinded by their polemic goals and a self-reinforcing echo chamber. Because most people are lost in the Byzantine derivations of mathematics in general, and of ID’s information theory in particular, it is hard for outsiders to break in and enter the conversation with them. They just do not have any practical experience in applying information theory. So it is not surprising that they are making errors here.
Just to briefly answer…
This is a probabilistic inference. There is nothing “strict” about it. There are exceptions. In the end, we do biological experiments, but even these do not establish causality with 100% certainty. What we would say is that:
M \in (C_1 \cap C_2) \setminus G \rightarrow F
Or that a mutation in that set probabilistically implies that it is functional. Not being in that group probabilistically implies that it is not functional. The arrow here means “implies.” We can test this inference with biological experiments. We find that it is very often correct.
ID, however, sometimes tries to argue:
M \in (C_1 \cap C_2) \rightarrow F
We can test this with biological experiments too. We find that this inference is most often incorrect (just as predicted by neutral theory). This is not a valid inference.
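A toy illustration of why the second rule fails (made-up labels, with only two “driver” positions treated as truly functional): the rule that excludes the germline is precise, while the rule that keeps it is swamped by shared, non-functional variants, just as neutral theory would predict.

```python
# Toy check of the two inference rules. Labels are invented: only the
# "driver" positions are truly functional; everything else is not.
germline = set(range(100))                # G: shared, non-functional variants
drivers = {1000, 1001}                    # truly functional mutations
cancer1 = germline | drivers | {2000}     # plus a private passenger mutation
cancer2 = germline | drivers | {3000}

truly_functional = drivers

def precision(predicted):
    """Fraction of predicted-functional mutations that really are functional."""
    return len(predicted & truly_functional) / len(predicted)

rule_valid = (cancer1 & cancer2) - germline   # M in (C1 ∩ C2) \ G -> F
rule_invalid = cancer1 & cancer2              # M in (C1 ∩ C2)     -> F

print(precision(rule_valid))    # 1.0: both predictions are real drivers
print(precision(rule_invalid))  # ~0.02: swamped by the shared germline
```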
Remember that @EricMH has argued the only thing that can cause MI is intelligence? That false claim is what leads him to this error. FI is a type of MI, so he just presumes nothing else can generate it. He has not demonstrated this, however. He thinks the proofs tell him this, but his application of the proofs would also demonstrate that the 2nd law of thermodynamics is false. Which is to say, his application of the proofs is wrong.
I’ll finally add that we have defined this here as sets of mutations, a discrete formulation. In practice, we formulate this as bits of information or probability distributions, which allows for more fuzziness. For example, we will not see a given driver mutation in all cancers, but only in a subset of them. So all this can be used to handle noisy cases and so on.
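As one hedged sketch of that fuzzier formulation (my own toy weighting, not a formula from this post): weight each recurrent site by how often it appears across a made-up cohort, and convert a site to bits as log2 of the genome size, which is presumably where per-site figures like the ~31.5 bits quoted later in this thread come from.

```python
# Hypothetical bits-based formulation with invented cohort frequencies.
import math

GENOME_SIZE = 3_000_000_000               # ~3 Gb human genome (approximate)
bits_per_site = math.log2(GENOME_SIZE)    # ~31.5 bits to specify one site

# Fraction of tumors in a made-up cohort carrying each candidate driver.
driver_frequency = {"KRAS_G12D": 0.45, "TP53_R175H": 0.60, "APC_R1450X": 0.80}

# Soft version of "recurrent": weight each site by how often it recurs,
# rather than requiring it to appear in every tumor.
fi_bits = sum(freq * bits_per_site for freq in driver_frequency.values())
print(round(fi_bits, 1))                  # ~58.2 bits for these three sites
```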
@EricMH, this is a key test case for ID. This is a good negative control. I’m curious how you engage it.
Still getting to it. This place blows up my inbox.
Thank you. I have no background in information theory, and this is what I was looking for – an intuitive explanation of what the debate is about. (I still find the claim that high mutual information implies intelligence to be odd, but I haven’t seen the argument in its favor yet.)
One comment: you note that mutation usually decreases MI between genomes. In some cases, though, MI presumably can decrease and then increase again, when a mutation is followed immediately by a back mutation. This is highly unlikely in a cancer genome, but I’d bet that it routinely occurs during viral infections.
I’ll turn your nice pictures into some ugly symbols.
That little green crescent is MI(C_1:C_2) - MI(C_1:C_2:G). You are saying that possibly,
MI(F_1,C_1:F_2,C_2) - MI(F_1,C_1:F_2,C_2:G) \geq MI(F_1(C_1):F_2(C_2)) - MI(F_1(C_1):F_2(C_2):G)
This is true. I do not disagree here.
I don’t understand your notation, but it appears to be referring to the wrong segment.
Typo alert:
11 x 31.5 bits = 346.5 bits of functional information in colon cancer
6 x 31.5 bits = 189 bits of functional information in colon cancer
Looks like one of those ought to be lung cancer.
This looks very familiar, I think if I change just a few words the resemblance will be clear …
So now we can visually see how the paradox is resolved. Entropy of one sort increases, but entropy of another sort decreases. Overall entropy increases, but local order can increase.
Which is how we describe energy entering an open system and increasing local order, at the cost of overall increased entropy (energy expended).
It seems that EricMH has been making the Information Theory equivalent of the “The 2nd Law of Thermodynamics disproves evolution” argument. I suspected this was the case, but didn’t have enough pieces put together to say so. Well done, SJS!
@EricMH, I hope you are following this. Do you see that the entire area covered by the red circle is to be excluded? If not, I’ll try to make a clearer diagram for you. The key point I’m hoping you can see is this:
This ends up not being the correct way to compute FI.
Do you see why?
I’m not seeing how this disproves what I’ve been saying. Your green crescent is the difference between two mutual information quantities. The law of information non-growth does not apply to a difference between two MI quantities. If Durston thinks it does, then he is mistaken. But, it is unclear how someone making a mistake is relevant to my argument. Apologies for my denseness.
No, that is not true. You appear to have misread it. I’ll have to think through how to make that clear in the figure.
Spell it out symbolically as I have done. That is unambiguous.
We already did, with set notation.