Explaining the Cancer Information Calculation

There are two Harvard-trained physicists here who are much stronger at math than I am.

@PdotdQ and @dga471, have I fudged my math or been unclear? Do you agree or disagree with @EricMH’s contention that I’ve fudged my math throughout this thread?

2 Likes

Thank you for the vote of confidence, but I don’t know if I have the time right now to go through the thread to check math :stuck_out_tongue:… Plus, I don’t have any training in information theory, so I probably won’t be a good arbiter anyway.

I will defer this responsibility to the mathematician @nwrickert :wink:

2 Likes

Good idea.

1 Like

For reference, the OP refers to Computing the Functional Information in Cancer.

The problem for me is that we are discussing a mathematical model. But I have not seen a clear specification of the model. So it is hard to pin anything down.

2 Likes

I’m a bit confused, because people have been talking about MI without specifying which sets of data the MI is between. In this post, whenever I switch to LaTeX, I’m going to use the strict language of set theory, without any reference to MI, so as to keep any obfuscating language away.
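(For anyone who wants the dictionary back to the usual notation: the areas I will write as (C1 \cap C2) and (C1 \cap C2) \setminus G are meant to track the mutual information I(C1:C2) and the conditional mutual information I(C1:C2|G). In standard terms, with H denoting Shannon entropy, I(C1:C2) = H(C1) + H(C2) - H(C1, C2) and I(C1:C2|G) = H(C1|G) + H(C2|G) - H(C1, C2|G). The circle areas are just the usual heuristic picture of these quantities, nothing more.)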

Recap of my personal understanding

As I understand it, Josh has argued that even if (C1 \cap G), (C2 \cap G), and (C1 \cap C2) all decrease (meaning: the total area covered by at most one circle increases), (C1 \cap C2) \setminus G (which I take to be what Eric calls CMI) may increase. I believe Eric agrees with this basic mathematical statement.

Why, as Josh explained, is it possible for (C1 \cap C2) \setminus G to increase? Intuitively, what is happening is that the two cancers somehow mutate in ways that are somewhat correlated with each other. From some biological reasoning, this is explained as a result of driver mutations that are essential for the cancer to function. Because of this, Josh names (C1 \cap C2) \setminus G functional information, or FI. Eric agrees that it is possible for random + deterministic processes to result in an increase of FI (= (C1 \cap C2) \setminus G).

Up to this point, I think this is fairly clear. Please correct me if I’m getting anything wrong.

Eric’s Objections

I’m not sure I fully understand these objections. Here is my interpretation.
Using my notation, the above equation is expressed as:

(C1 \cap C2) \setminus G = (C1 \cap C2) - (C1 \cap C2 \cap G)

(This is trivially true, of course, if you check Josh’s image. It also follows from basic set theory identities. So we all agree on this.)

The only way I understand Eric’s objection to this is that somehow, (C1 \cap C2) (which I understand is what he calls “absolute MI”) is limited by information non-growth and cannot increase.

But even if this is true, the above equation doesn’t imply that (C1 \cap C2) \setminus G cannot grow, even as it is bounded above by (C1 \cap C2). Even if (C1 \cap C2) stays the same, or even decreases, it is still possible for (C1 \cap C2) \setminus G to increase. How so?

With every iteration, even as the first term (C1 \cap C2) continues to decrease, the second term (C1 \cap C2 \cap G) can decrease faster than the first. In other words, the two cancers are both moving away from the germline much faster than they are moving away from each other. Hence, the difference between the two terms can increase - in other words an increase in (C1 \cap C2) \setminus G, or functional information.
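(With purely illustrative numbers: if the area of (C1 \cap C2) shrinks from 100 to 80 while the area of (C1 \cap C2 \cap G) shrinks from 100 to 60, then (C1 \cap C2) \setminus G has grown from 0 to 20, even though both terms decreased.)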

To picture this in your head, imagine the blue and green circles moving further away from each other, but also jumping together downwards, far from the germline. Although the horizontal width of the overlap between them has decreased, the vertical height has dramatically increased, leading to an overall increase in area!

This is, as I understand it and as @Dan_Eastwood pointed out, akin to the realization that the 2nd law of thermodynamics doesn’t disprove evolution, because local entropy may decrease. Of course, there is a limit to how much it can decrease. The analogy would be that if the green and blue circles have completely separated from the red one, then from that point onwards, FI cannot continue to increase. This is the “heat death of the universe” scenario.

The only way to mathematically object to this is to show that it is not possible for the second term to decrease faster than the first. But as I understand it, this is a scientific, not mathematical question.
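If it helps to make this concrete, here is a minimal toy sketch in Python (my own construction with made-up numbers, not Josh’s actual calculation). A “germline” string spawns two copies that accumulate independent random “passenger” changes, plus a small but growing set of shared “driver” changes at the same positions, standing in for common functional constraints. The per-position agreement between the two copies (the analogue of (C1 \cap C2)) shrinks every step, while the agreement they share with each other but not with the germline (the analogue of (C1 \cap C2) \setminus G) grows:

```python
import random

rng = random.Random(0)
ALPHABET = "ACGT"
L = 10_000                       # toy genome length (positions)

germline = [rng.choice(ALPHABET) for _ in range(L)]
c1, c2 = list(germline), list(germline)

def random_other_base(base):
    return rng.choice([b for b in ALPHABET if b != base])

# Positions reserved for shared "driver" changes (standing in for common
# functional constraints / common selective pressure).
driver_pool = rng.sample(range(L), 50)
drivers = {}                     # position -> new base, shared by both cancers

for step in range(1, 6):
    # Independent "passenger" mutations: different random positions in each cancer.
    for genome in (c1, c2):
        for i in rng.sample(range(L), 100):
            genome[i] = random_other_base(genome[i])

    # Ten new shared driver changes per step (same position, same base in both).
    for i in driver_pool[(step - 1) * 10 : step * 10]:
        drivers[i] = random_other_base(germline[i])
    for i, b in drivers.items():
        c1[i] = b
        c2[i] = b

    shared   = sum(a == b for a, b in zip(c1, c2))                    # analogue of (C1 ∩ C2)
    shared_g = sum(a == b == g for a, b, g in zip(c1, c2, germline))  # analogue of (C1 ∩ C2 ∩ G)
    print(f"step {step}: shared = {shared}, shared with germline = {shared_g}, "
          f"FI analogue = {shared - shared_g}")
```

Whether real cancers behave like this is of course the scientific question; the sketch only shows that the arithmetic is self-consistent.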

I would like to invite Eric and Josh to scrutinize and correct my attempt at understanding the dialogue here.

4 Likes

More comments on this particular objection:

I don’t really understand how this statement is significant at all. In our simplified picture, indeed, the amount of MI is initially at its maximum: the three circles overlap each other perfectly. But I don’t understand how this proves anything about CMI. As I showed in my explanation above, the only way you can object to Josh’s argument is if you can lay down mathematical rules for how fast the circles can move away from each other. But simply saying that initial MI is needed doesn’t say anything. It does not restrict the movement of the circles.

Indeed, the rules for how the circles move are something we can only study in the lab, and if Josh is representing the biology correctly, then we do have good explanations (e.g. common ancestry or selection pressures) for why the cancers, for example, move in coordinated ways.

2 Likes

Thank you @dga471 for taking the time and effort to do this!

2 Likes

That represents my understanding as well(*). I have an additional confusion about the statement that (C1 \cap C2) is limited by info non-growth. (C1 \cap C2) is created by the process of genome replication from G. It is limited by the information content of G, of course, but replication trivially generates MI where none was present.

(*) Keeping in mind that I’m merely a Yale-trained ex-physicist, which is a double downgrade from a Harvard-trained physicist. (Really.)

6 Likes

Well at this point I’m still a half-trained Harvard physicist, so we’re back to even :laughing:

3 Likes

To put it simply: as genomes mutate, they become more different from one another overall, but they can still acquire new functions, because new functions have little to do with overall similarity. @EricMH has mistaken the fact that the overall similarity decreases as meaning that no new similarities can arise. This is demonstrably false, as we see in cancer.

ID theory has to adjust now due to this falsification of their claim. I don’t see any other way around it for them.

That is just an analogy. Think of it as a report of commonalities, not actually moving circles: for example, the amount of sequence in common and the amount that differs.

Correct.

This is actually not complex at all to explain in words.

Cancer genomes are normal genomes modified by evolutionary processes. Two cancer genomes have a great deal of mutual information with one another (about 6 billion bits worth) just because they have a very similar starting point. In other words, (C1 \cap C2) is very large because (C1 \cap C2 \cap G) is very large. They have high mutual information because of common descent. For the most part, we expect random mutations to reduce the similarity between cancers, (C1 \cap C2), over time, and they usually do. That is where @EricMH, @Kirk, and the other CSI arguments focus. With time, this will decrease, and they equate this with functional information.

However, we know that cancer has acquired new functions that the germ-line does not have. Where did those changes come from? They come from the changes that cancers have in common for reasons other than common descent, namely because they share a common function. We can find those as the sequences the cancers have in common with each other, minus the sequences they both have in common with the starting-point germ-line. Which is exactly:

(C1 \cap C2) \setminus G = (C1 \cap C2) - (C1 \cap C2 \cap G)

That is the functional information that defines the “cancer function”, and we can quantify it in bits, in some cases as high as about 350 bits. According to @EricMH, @Kirk, and Dembski, this should all be impossible. Their cutoff is at about 100 bits (right?), above which they believe it is impossible to explain by natural processes.
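To spell out what that difference is counting, in the simplest (and admittedly cartoonish) position-by-position terms, here is a toy version in Python. This is my own illustration of the bookkeeping only; the 6-billion-bit and ~350-bit figures above come from real genomes, not from anything this simple:

```python
def commonality_terms(germline, c1, c2):
    """Position-wise counts of the agreement between two cancers (C1 ∩ C2),
    their three-way agreement with the germline (C1 ∩ C2 ∩ G), and the
    difference between the two, which is the functional-information term."""
    shared        = sum(a == b for a, b in zip(c1, c2))
    shared_with_g = sum(a == b == g for a, b, g in zip(c1, c2, germline))
    return shared, shared_with_g, shared - shared_with_g

# Tiny made-up example: the two "cancers" share one change (position 3) that
# the germline lacks, and differ from each other at the last position.
g  = "ACGTACGT"
c1 = "ACGAACGA"
c2 = "ACGAACGC"
print(commonality_terms(g, c1, c2))   # -> (7, 6, 1)
```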

Or we could revert to their math, as is being worked out by @kirk on another thread (Durston: Functional Information). In that case, cancer has 6 billion bits, essentially equal to the amount of information in the human genome. This is an erroneous calculation, but if they’d like to argue this is valid ID math, they can certainly do so.

So they are caught between a rock and a hard place. They have no good options.

  1. Agree that functional information corresponds with (C1 \cap C2) \setminus G. At this point they can argue that it cannot increase by natural processes (as @EricMH is attempting to do), but it obviously has increased in cancer.

  2. Argue that functional information corresponds with (C1 \cap C2) (as @Kirk is currently arguing), but then they have to explain how cancer has 6 billion bits, which is less than the starting point, and yet somehow acquired new functional information. Moreover, the experimental data do not support their claims in cancer, because the vast majority of the commonality does not relate to function. We know that because, in the case of cancer, we have tested it directly.

Either way, there is more FI than they believe possible by natural processes. So the argument is done at this point.

That is their situation. The reason there is a lack of clarity at times is that they are arguing mutually contradictory statements at the same time. It seems that none of them has ever applied information theory to a practical project, so this must be very challenging for them. I’m sympathetic, which is why I’ve taken the time to explain it.

And that is exactly right. Overall, the genomes become more dissimilar. However, they can still acquire functions, because they can gain similarities in localized parts of the genome.

All these claims can be formalized more carefully as statistical distributions. I’m just describing these as sets for clarity.

2 Likes

You seem to be misquoting me. I do not say I(C1:C2|G) cannot increase. Remember, at the very beginning of this thread, my first comment was I have no disagreement here.

I say that I(C1:C2) cannot increase, and it is the upper limit of I(C1:C2|G). Therefore, I(C1:C2|G) generated through random processes is dependent on a priori information.

Hence, it is a question-begging argument to insist that I(C1:C2|G) increasing somehow discredits ID claims.

If I’m taking Josh’s three-circles illustration almost literally, then there is nothing special about the a priori maximum amount of MI in the system that bounds (C1 \cap C2) \setminus G. You don’t need any sort of intelligence to start with this. This is simply a statement that cancers are mutations from a germline. So, there was a time when the cancer was “identical” to the germline, i.e. when there was no cancer!

Am I missing something here?

3 Likes

Exactly.

Exactly.

1 Like

I don’t know. I’m just writing the circles in mathematical notation, and the notation says I(C1:C2|G) is derived from I(C1:C2). If that’s wrong, then we need new circles. But this is it for me; I’ve come to the end of my interest here.

When I say “moving circles”, I mean moving not in physical space, but in the Venn diagram space, where more area overlap = more commonalities between the sequences, and vice versa. So the illustration of moving circles is right, I think.

1 Like

OK, I’m going to retract what I just said, after thinking about this more carefully.

On the one hand, it seems to me that Eric is arguing that common ancestry requires intelligence, which is weird. (I guess this is Josh’s main complaint with ID theorists in the first place - assuming that all MI is a signature of intelligence.)

On the other hand, Josh says this:

Yes, common ancestry explains the high initial amount of MI. But it is unclear to me what the relative increases/decreases of (C1 \cap C2) and (C1 \cap C2 \cap G) are. As I said in my initial long post in this thread, this is crucial in determining whether (C1 \cap C2) \setminus G can actually increase: you need the second term to decrease faster than the first.

In other words, you have not explained how the circles move relative to each other. There is something here that is not captured by the Venn diagrams, something more than just common ancestry. Common selection pressure might be it. Can you clarify this, Josh?

2 Likes

Common selective pressures select the same mutations that arise randomly and independently in different patients. This happens in cancer, without intelligent guidance. The common selective force, as part of a parallel evolutionary experiment, produces the mutual information. Once again, without intelligent guidance of any kind.

Cancer is an empirical demonstration that this type of mutual information increases.
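Here is a rough sketch of that mechanism in the abstract (my own toy, with made-up parameters, not a model of any real tumor). Two lineages mutate completely independently; the only thing they share is the same crude “fitness” rule, which fixes changes toward a small driver profile, rejects changes away from it, and lets neutral changes fix only rarely. They end up agreeing with each other at positions where neither agrees with the ancestor, with no coordination of any kind between them:

```python
import random

rng = random.Random(1)
ALPHABET = "ACGT"
L = 2_000                        # toy genome length
N_PROPOSALS = 20_000             # mutation proposals per lineage
P_NEUTRAL_FIX = 0.01             # crude stand-in for how rarely neutral changes fix

germline = [rng.choice(ALPHABET) for _ in range(L)]

# The same selective environment for both lineages: a handful of positions where
# one specific non-germline base is advantageous (a cartoon "driver profile").
target = {i: rng.choice([b for b in ALPHABET if b != germline[i]])
          for i in rng.sample(range(L), 20)}

def evolve(seed):
    """One lineage mutating independently under the shared selective rule:
    beneficial changes fix, deleterious ones are rejected, neutral ones rarely fix."""
    r = random.Random(seed)
    genome = list(germline)
    for _ in range(N_PROPOSALS):
        i = r.randrange(L)
        new = r.choice([b for b in ALPHABET if b != genome[i]])
        if i in target:
            if new == target[i]:             # beneficial: fixes
                genome[i] = new
            # otherwise deleterious or sideways: rejected
        elif r.random() < P_NEUTRAL_FIX:     # neutral passenger: occasionally fixes
            genome[i] = new
    return genome

c1, c2 = evolve(101), evolve(202)            # two independent "patients"

agree_not_germline = [i for i in range(L) if c1[i] == c2[i] != germline[i]]
print("positions where the lineages agree with each other but not the ancestor:",
      len(agree_not_germline))
print("  ...of which fall at the selected 'driver' positions:",
      sum(i in target for i in agree_not_germline))
```

Most of that new shared, non-ancestral agreement sits at the selected positions: common selection, not communication, generated it.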

1 Like

Does this help anything?

http://math.ucr.edu/home/baez/bio_info/bio_info_web.pdf

Biodiversity, Entropy and Thermodynamics
John Baez
http://math.ucr.edu/home/baez/bio_info/

1 Like

It probably would, but I think @dga471 got this covered.

1 Like