@EricMH it seems we agree on this, and then it appears you write:
If Swamidass is right that functional information is conditional mutual information, then functional information is also limited by the conservation law. Thus, natural processes, insofar as they are reducible to randomness + determinism, cannot generate functional information. Eric Holloway: ID as a bridge between Francis Bacon and Thomas Aquinas | Uncommon Descent
This whole thread is an exercise in math fudging. Your starting premise is that mutual information can be created by chance and determinism. Then, you proceed to show that conditional mutual information is created by chance and determinism. That is an equivocation.
Then I go on to show that conditional mutual information is predicated on absolute mutual information, thus even conditional mutual information is limited by the information non-growth law:
I(A:B|C) = I(A:B) - I(A:B:C)
Since I(A:B) is absolute mutual information (and limited by information non-growth), and is the upper limit on I(A:B|C), that means I(A:B|C) is also limited by information non-growth.
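As a minimal numerical check of this identity (a sketch only - the toy distribution below is invented, with A and B as independent noisy copies of C; any joint distribution would do):

```python
import numpy as np

# Toy joint distribution p[a, b, c]: C is a fair coin, and A and B each
# independently copy C with probability 0.9. Purely illustrative numbers.
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            p[a, b, c] = 0.5 * (0.9 if a == c else 0.1) * (0.9 if b == c else 0.1)

def H(dist):
    """Shannon entropy in bits of a probability array."""
    q = dist[dist > 0]
    return -np.sum(q * np.log2(q))

# Marginals needed for the three information quantities.
p_ab, p_ac, p_bc = p.sum(axis=2), p.sum(axis=1), p.sum(axis=0)
p_a, p_b, p_c = p.sum(axis=(1, 2)), p.sum(axis=(0, 2)), p.sum(axis=(0, 1))

I_AB   = H(p_a) + H(p_b) - H(p_ab)           # I(A:B)
I_AB_C = H(p_ac) + H(p_bc) - H(p_c) - H(p)   # I(A:B|C)
I_ABC  = (H(p_a) + H(p_b) + H(p_c)           # I(A:B:C), computed independently
          - H(p_ab) - H(p_ac) - H(p_bc) + H(p))

assert np.isclose(I_AB_C, I_AB - I_ABC)      # the identity above holds numerically
print(f"I(A:B) = {I_AB:.3f}, I(A:B|C) = {I_AB_C:.3f}, I(A:B:C) = {I_ABC:.3f} bits")
```

For this common-cause distribution, I(A:B|C) comes out to 0: all of the mutual information between A and B is accounted for by C.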
There are two Harvard-trained physicists here who are much stronger at math than I am.
@PdotdQ and @dga471, has my math been fudged or unclear? Do you agree or disagree with @EricMH's contention that I've fudged my math this whole thread?
Thank you for the vote of confidence, but I don't know if I have the time right now to go through the thread to check the math… Plus, I don't have any training in information theory, so I probably won't be a good arbiter anyway.
I will defer this responsibility to the mathematician @nwrickert
The problem for me is that we are discussing a mathematical model, but I have not seen a clear specification of the model. So it is hard to pin anything down.
I'm a bit confused because people have been talking about MI without specifying MI between which sets of data. In this post, whenever I switch to LaTeX, I'm going to use the strict language of set theory, without any reference to MI, so as to keep obfuscating language away.
Recap of my personal understanding
As I understand it, Josh has argued that even if $(C_1 \cap G)$, $(C_2 \cap G)$, and $(C_1 \cap C_2)$ all decrease (meaning: the total area of the circles overlapped by at most one color increases), $(C_1 \cap C_2) \setminus G$ (which I take to be what Eric calls CMI) may increase. I believe Eric agrees with this basic mathematical statement.
As Josh explained, why is it possible for $(C_1 \cap C_2) \setminus G$ to increase? Intuitively, what is happening is that the two cancers mutate in ways that are somewhat correlated with each other. Biologically, this is explained as the result of driver mutations that are essential for the cancer to function. Because of this, Josh names $(C_1 \cap C_2) \setminus G$ functional information, or FI. Eric agrees that it is possible for random + deterministic processes to produce an increase of FI ($= (C_1 \cap C_2) \setminus G$).
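To make this concrete, here is a minimal simulation sketch of the circle picture. Everything in it is invented for illustration: genomes are strings of bases, the two "cancers" accumulate independent random mutations, and a small set of assumed driver sites is forced to the same variant in both, standing in for common selection:

```python
import random

random.seed(0)
L = 10_000                 # genome length (illustrative)
DRIVERS = set(range(50))   # assumed driver sites needed for the "cancer function"
ALPHABET = "ACGT"

germ = [random.choice(ALPHABET) for _ in range(L)]
c1, c2 = germ[:], germ[:]  # both cancers start as copies of the germline

def overlaps():
    c1_g  = sum(a == g for a, g in zip(c1, germ))              # ~ C1 ∩ G
    c2_g  = sum(b == g for b, g in zip(c2, germ))              # ~ C2 ∩ G
    c1_c2 = sum(a == b for a, b in zip(c1, c2))                # ~ C1 ∩ C2
    fi    = sum(a == b != g for a, b, g in zip(c1, c2, germ))  # ~ (C1 ∩ C2) \ G
    return c1_g, c2_g, c1_c2, fi

for gen in range(101):
    if gen % 50 == 0:
        print(gen, overlaps())
    for genome in (c1, c2):                    # independent passenger mutations
        for pos in random.sample(range(L), 10):
            genome[pos] = random.choice(ALPHABET)
    for pos in DRIVERS:                        # selection fixes the SAME variant in both
        c1[pos] = c2[pos] = ALPHABET[(ALPHABET.index(germ[pos]) + 1) % 4]
```

In runs like this, the three overlaps fall while the FI-like count climbs (the 50 forced driver sites plus a few coincidental matches) - exactly the behavior described above.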
Up to this point, I think this is fairly clear. Please correct me if I'm getting anything wrong.
Eric's Objections
I'm not sure I fully understand these objections. Here is my interpretation.
Using my notation, the above equation is expressed as:
$(C_1 \cap C_2) \setminus G = (C_1 \cap C_2) - (C_1 \cap C_2 \cap G)$
(This is trivially true, of course, if you check Josh's image. It also follows from basic set theory identities. So we all agree on this.)
The only way I understand Eric's objection is that somehow $(C_1 \cap C_2)$ (which I understand is what he calls "absolute MI") is limited by information non-growth and cannot increase.
But even if this is true, the above equation doesn't imply that $(C_1 \cap C_2) \setminus G$ cannot grow, even though it is bounded above by $(C_1 \cap C_2)$. Even if $(C_1 \cap C_2)$ stays the same, or even decreases, it is still possible for $(C_1 \cap C_2) \setminus G$ to increase. How so?
With every iteration, even as the first term $(C_1 \cap C_2)$ continues to decrease, the second term $(C_1 \cap C_2 \cap G)$ can decrease faster than the first. In other words, the two cancers are both moving away from the germline much faster than they are moving away from each other. Hence, the difference between the two terms can increase - in other words, an increase in $(C_1 \cap C_2) \setminus G$, or functional information.
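A toy worked example, with numbers invented purely for illustration:

$$\text{before: } (C_1 \cap C_2) = 100, \quad (C_1 \cap C_2 \cap G) = 100 \;\Rightarrow\; \text{FI} = 100 - 100 = 0$$

$$\text{after: } (C_1 \cap C_2) = 60, \quad (C_1 \cap C_2 \cap G) = 40 \;\Rightarrow\; \text{FI} = 60 - 40 = 20$$

The first term fell by 40 while the second fell by 60, so even though the total commonality shrank, the functional part grew from 0 to 20.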
To picture this in your head, imagine the blue and green circles moving further away from each other, but also jumping together downwards, far from the germline. Although the horizontal width of the overlap between them has decreased, the vertical height has dramatically increased, leading to an overall increase in area!
This is, as I understand it and as @Dan_Eastwood pointed out, akin to the realization that the 2nd law of thermodynamics doesn't disprove evolution, because local entropy may decrease. Of course, there is a limit to how much it can decrease. The analogy would be that once the green and blue circles have completely separated from the red one, then from that point onwards, FI cannot continue to increase. This is the "heat death of the universe" scenario.
The only way to object to this mathematically is to show that it is not possible for the second term to decrease faster than the first. But as I understand it, that is a scientific question, not a mathematical one.
I would like to invite Eric and Josh to scrutinize and correct my attempt at understanding the dialogue here.
I don't really understand how this statement is significant at all. In our simplified picture, indeed, the amount of MI is initially at its maximum: the three circles overlap each other perfectly. But I don't understand how this proves anything about CMI. As I showed in my explanation above, the only way you can object to Josh's argument is if you can lay down mathematical rules for how fast the circles can move away from each other. But simply saying that initial MI is needed doesn't say anything. It does not restrict the movement of the circles.
Indeed, the rules for how the circles move are something we can only study in the lab, and if Josh is representing the biology correctly, then we do have good explanations (e.g. common ancestry or selection pressures) for why the cancers, for example, move in coordinated ways.
To put it simply: as genomes mutate, they become more different from one another overall, but they can still acquire new functions, because new functions have little to do with overall similarity. @EricMH has mistaken the fact that the overall similarity decreases as meaning that no new similarities can arise. This is demonstrably false, as we see in cancer.
ID theory has to adjust now due to this falsification of their claim. I don't see any other way around it for them.
That is just an analogy. Think of it as a report of commonalities, not actually moving circles - such as the amount of sequence in common and the amount that differs.
Correct.
This is actually not complex at all to explain in words.
Cancer genomes are normal genomes modified by evolutionary processes. Two cancer genomes have a great deal of mutual information with one another (about 6 billion bits worth) just because they have a very similar starting point. In other words, $(C_1 \cap C_2)$ is very large because $(C_1 \cap C_2 \cap G)$ is very large. They have high mutual information because of common descent. For the most part, we expect random mutations to reduce the similarity $(C_1 \cap C_2)$ between cancers over time, and they usually do. That is where @EricMH, @Kirk, and the other CSI arguments all focus. With time, this will decrease, and they equate this with functional information.
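That figure is roughly the information capacity of a human-sized genome at two bits per base:

$$2 \;\text{bits/base} \times 3 \times 10^{9} \;\text{bases} \approx 6 \times 10^{9} \;\text{bits}$$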
However, we know that cancer has acquired new functions that the germ-line does not have. Where did those changes come from? They come from the changes that cancers have in common for a different reason: because they have a common function. We can find those as the sequences the cancers have in common with each other, minus the sequences they both have in common with the starting-point germ-line. Which is exactly:
$(C_1 \cap C_2) \setminus G$
That is the functional information that defines the "cancer function," and we can quantify it in bits - in some cases as high as about 350 bits. According to @EricMH, @Kirk, and Dembski, this should all be impossible. Their cutoff is at about 100 bits (right?), above which they believe it is impossible to explain by natural processes.
Or we could revert to their math, as is being worked out by @kirk on another thread (Durston: Functional Information). In that case, cancer has 6 billion bits, essentially equal to the amount of information in the human genome. This is an erroneous calculation, but if they'd like to argue this is valid ID math, they can certainly do so.
So they are caught between a rock and a hard place. They have no good options:

1. Agree that functional information corresponds to $(C_1 \cap C_2) \setminus G$. At this point they can argue that it cannot increase by natural processes (as @EricMH is attempting to do), but it obviously has increased in cancer.

2. Argue that functional information corresponds to $(C_1 \cap C_2)$ (as @Kirk is currently arguing), but then they have to explain how cancer has 6 billion bits of it, less than the starting point, and yet somehow acquired new functional information. Moreover, the experimental data does not support their claims in cancer, because the vast majority of the commonality does not relate to function. We know that because, in the case of cancer, we have tested it directly.
Either way, there is more FI than they believe possible by natural processes. So the argument is done at this point.
That is their situation. The reason why there is a lack of clarity at times is that they are arguing mutually contradictory statements at the same time. It seems that none of them have ever applied information theory to a practical project, so this must be very challenging for them. I'm sympathetic, which is why I've taken the time to explain it.
And that is exactly right. Overall, the genomes become more dissimilar. However, they can still acquire functions, because they can gain new similarities in localized places in the genome.
All these claims can be formalized more carefully as statistical distributions. I'm just describing these as sets for clarity.
You seem to be misquoting me. I do not say I(C1:C2|G) cannot increase. Remember, at the very beginning of this thread, my first comment was that I have no disagreement here.
I say that I(C1:C2) cannot increase, and it is the upper limit of I(C1:C2|G). Therefore, I(C1:C2|G) generated through random processes is dependent on a priori information.
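In the set/area picture, where every region is nonnegative, this is just the identity from earlier in the thread read as a bound:

$$I(C_1:C_2|G) = I(C_1:C_2) - I(C_1:C_2:G) \le I(C_1:C_2)$$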
Hence, it is a question-begging argument to insist that I(C1:C2|G) increasing somehow discredits ID claims.
If I'm taking Josh's three-circles illustration almost literally, then there is nothing special about the a priori, maximum amount of MI in the system that bounds $(C_1 \cap C_2) \setminus G$. You don't need any sort of intelligence to start with this. It is simply a statement that cancers are mutations from a germline. So, there was a time when the cancer was "identical" to the germline - i.e., when there was no cancer!
I don't know. I'm just writing the circles in mathematical notation, and the notation says I(C1:C2|G) is derived from I(C1:C2). If that's wrong, then we need new circles. But this is it for me. I've come to the end of my interest here.
When I say "moving circles", I mean moving not in physical space, but in the Venn diagram space, where more overlapping area = more commonalities between the sequences, and vice versa. So the illustration of moving circles is right, I think.
OK, I'm going to retract what I just said, after thinking about this more carefully.
On the one hand, it seems to me that Eric is arguing that common ancestry requires intelligence, which is weird. (I guess this is Josh's main complaint with ID theorists in the first place - assuming that all MI is a signature of intelligence.)
On the other hand, Josh says this:
Yes, common ancestry explains the high initial amount of MI. But it is unclear to me what the relative increases/decreases of $(C_1 \cap C_2)$ and $(C_1 \cap C_2 \cap G)$ are. As I said in my initial long post in this thread, this is crucial in determining whether $(C_1 \cap C_2) \setminus G$ can actually increase - you need the second term to decrease faster than the first.
In other words, you have not explained how the circles move relative to each other. There is something here that is not captured by the Venn diagrams - something more than just common ancestry. Common selection pressure might be that missing ingredient. Can you clarify this, Josh?
Common selective pressures select the same mutations that arise randomly and independently in different patients. This happens in cancer, without intelligent guidance. The common selective force, as part of a parallel evolutionary experiment, produces the mutual information. Once again, without intelligent guidance of any kind.
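In the same spirit as the earlier sketch, here is a minimal model of that claim, with all parameters invented: mutations are proposed at random, independently in each "patient", and selection merely accepts beneficial proposals at driver sites far more often than drift accepts passengers. The only thing the patients share is the fitness landscape:

```python
import random

random.seed(1)
L = 10_000                 # sites (illustrative)
DRIVERS = set(range(50))   # assumed driver sites
ALPHABET = "ACGT"
germ = [random.choice(ALPHABET) for _ in range(L)]
# One specific variant is beneficial at each driver site - the same fitness
# landscape for every patient, with no coordination beyond that.
beneficial = {p: ALPHABET[(ALPHABET.index(germ[p]) + 1) % 4] for p in DRIVERS}

def tumor(n_proposals):
    """One patient's tumor: random independent proposals, selection-biased acceptance."""
    g = germ[:]
    for _ in range(n_proposals):
        pos = random.randrange(L)
        new = random.choice(ALPHABET)
        if pos in DRIVERS:
            accept = 0.9 if new == beneficial[pos] else 0.0  # strong positive selection
        else:
            accept = 0.005                                   # rare neutral drift
        if random.random() < accept:
            g[pos] = new
    return g

c1, c2 = tumor(100_000), tumor(100_000)
shared_new = sum(a == b != g for a, b, g in zip(c1, c2, germ))
print("Shared non-germline variants between independent patients:", shared_new)
```

The two simulated patients end up with dozens of shared non-germline variants, nearly all at the driver sites: mutual information produced by a shared selective force, with no guidance of any kind.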
Cancer is an empirical demonstration that this type of mutual information increases.