Explaining the Cancer Information Calculation

To put it simply: as genomes mutate, they become more different from one another overall, but they can still acquire new functions, because new functions have little to do with overall similarity. @EricMH has mistaken the fact that overall similarity decreases as meaning that no new similarities can arise. This is demonstrably false, as we see in cancer.

ID theory has to adjust now due to this falsification of its claim. I don’t see any other way around it for them.

That is just an analogy. Think of it as a report of commonalities, not actually moving circles: such as the amount of sequence in common and the amount that differs.

Correct.

This is actually not complex at all to explain in words.

Cancer genomes are normal genomes modified by evolutionary processes. Two cancer genomes have a great deal of mutual information with one another (about 6 billion bits worth) just because they have a very similar starting point. In other words, $C_1 \cap C_2$ is very large because $C_1 \cap C_2 \cap G$ is very large. They have high mutual information because of common descent. For the most part, we expect random mutations to reduce the similarity between cancers ($C_1 \cap C_2$) with time, and they usually do. That is where @EricMH, @Kirk, and the other CSI arguments focus. With time, this will decrease, and they equate this with functional information.
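To make the set picture concrete, here is a minimal toy sketch in Python, treating genomes as sets of (position, base) pairs. The sizes and mutation counts are invented purely for illustration:

```python
import random

random.seed(0)
BASES = "ACGT"

# Toy germ-line genome G as a set of (position, base) pairs.
# (Illustrative scale only; a real genome has ~3 billion positions.)
G = {(pos, random.choice(BASES)) for pos in range(10_000)}

def mutate(genome, n_mutations):
    """Return a copy of `genome` with random point substitutions."""
    seq = dict(genome)  # position -> base
    for pos in random.sample(sorted(seq), n_mutations):
        seq[pos] = random.choice([b for b in BASES if b != seq[pos]])
    return set(seq.items())

# Two cancer genomes descend independently from the same germ line.
C1 = mutate(G, 200)
C2 = mutate(G, 200)

# C1 ∩ C2 is large mostly because C1 ∩ C2 ∩ G is large (common descent),
# and accumulating mutations shrink both over time.
print(len(C1 & C2), len(C1 & C2 & G))
```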

However, we know that cancer has acquired new functions that the germ line does not have. Where did those changes come from? They come from the changes that cancers have in common for reasons other than common descent, namely because they share a common function. We can find those as the sequences cancers have in common with each other, minus the sequences they both have in common with the starting-point germ line. Which is exactly:

$$(C_1 \cap C_2) \setminus G = (C_1 \cap C_2) - (C_1 \cap C_2 \cap G)$$
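Continuing the toy sketch above, that set difference is a one-liner, and the identity holds exactly:

```python
# Sequences the cancers share that are NOT shared with the germ line.
novel_shared = (C1 & C2) - G

# (C1 ∩ C2) \ G  ==  (C1 ∩ C2) - (C1 ∩ C2 ∩ G)
assert novel_shared == (C1 & C2) - (C1 & C2 & G)

# Under pure drift this set stays nearly empty (only coincidental
# identical mutations); selection for a shared "cancer function"
# is what makes it grow.
print(len(novel_shared))
```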

That is the functional information that defines the “cancer function,” and we can quantify it in bits; in some cases it is as high as about 350 bits. According to @EricMH, @Kirk, and Dembski, this should all be impossible. Their cutoff is at about 100 bits (right?), above which they believe it is impossible to explain by natural processes.
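For those unfamiliar with the units, here is one standard way to read “bits of functional information” (a sketch only, not necessarily the exact calculation behind the 350-bit figure): the negative log of the fraction of sequences that perform the function.

```python
import math

def functional_bits(n_functional, n_total):
    """FI = -log2(fraction of sequences performing the function)."""
    return -math.log2(n_functional / n_total)

# A function realized by 1 in 2^350 sequences carries ~350 bits,
# well past the ~100-bit cutoff mentioned above.
print(functional_bits(1, 2**350))  # 350.0
```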

Or we could revert to their math, as is being worked out by @kirk on another thread (Durston: Functional Information). In that case, cancer has 6 billion bits, essentially equal to the amount of information in the human genome. This is an erroneous calculation, but if they’d like to argue this is valid ID math, they can certainly do so.
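Presumably that figure is just the maximum capacity of the genome: with four bases, each position carries $\log_2 4 = 2$ bits, so

$$\sim 3 \times 10^9 \ \text{bp} \times 2 \ \frac{\text{bits}}{\text{bp}} = 6 \times 10^9 \ \text{bits},$$

which is why equating functional information with all of $C_1 \cap C_2$ makes cancer’s information content essentially the whole genome.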

So they are caught between a rock and a hard place. They have no good options.

  1. Agree that functional information corresponds with $(C_1 \cap C_2) \setminus G$. At this point they can argue that it cannot increase by natural processes (as @EricMH is attempting to do), but it obviously has increased in cancer.

  2. Argue that functional information corresponds with $C_1 \cap C_2$ (as @Kirk is currently arguing), but then they have to explain how cancer has 6 billion bits, which is less than the starting point, and yet somehow acquired new functional information. Moreover, the experimental data does not support their claims in cancer, because the vast majority of the commonality does not relate to function. We know that because, in the case of cancer, we have tested it directly.

Either way, there is more FI than they believe is possible by natural processes. So the argument is done at this point.

That is their situation. The reason there is a lack of clarity at times is that they are arguing mutually contradictory statements at the same time. It seems that none of them have ever applied information theory to a practical project, so this must be very challenging for them. I’m sympathetic, which is why I’ve taken the time to explain it.

And that is exactly right. Overall the genomes become more dissimilar. However, they can still acquire functions because they can gain similarities in localized regions of the genome.

All these claims can be formalized more carefully as statistical distributions. I’m just describing these as sets for clarity.
