Eric, I know that you’re not that interested in this particular argument anymore, but I am indeed mystified: why do you think that the mere presence of mutual information alone is an indication of ID? It seems to me that you are saying that every time an organism reproduces and passes on its DNA, that is an instance of ID (because reproduction produces mutual information between the parent and child, or between two children). Can you explain this to someone like me who isn’t trained in information theory?
To me, it seems that you’re saying reproduction can only be explained by ID. Even if I don’t know anything about evolution, it is difficult for me to see how reproduction is any different from the many other natural processes we encounter in our daily lives. Perhaps there’s some subtler layer of the argument that I’m missing here.
This was @swamidass’ argument that natural processes could produce functional information. I pointed out that the math shows FI is dependent on pre-existing MI, so the argument doesn’t follow.
While I was not arguing in my other threads that physical duplication is mutual information and indicates intelligent design, yes, it does appear that even duplication requires some kind of intelligent design. At least that seems to be the logical conclusion of my mutual information argument.
As it is, it seems that DNA replication is de facto evidence of ID because it unequivocally creates MI. I’m honestly dumbfounded at the moment and do not know quite how to penetrate the circularity. Can you help @dga471?
Let’s focus on the most recent, and rather simple claim by Eric:
(Caveat: I’m not trained in information theory, and I have not been following the exchange between Eric and @Dan_Eastwood.)
First, it seems to me that here Eric is using the word “information” in two different ways: once referring to mutual information between A and B (which in my simple picture, I imagine as some degree of correlation between two sets A and B) and once referring to “information” in a general sense which exists in A independently, perhaps a measure of its entropy. This creates confusion.
Secondly and more importantly, I don’t understand the idea of using a function F to sidestep this argument. In the simplest case, we can think of A and B as an ordered string of 0s and 1s. A function that produces B by copying A simply needs to consist of a simple instruction:
If A_n = X, then set B_n = X.
This “copying function” can be used no matter what A contains. Thus, I(F:A) should be close to zero in most cases. Am I missing something here?
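To make concrete what I have in mind, here is a minimal toy sketch of my own (not anyone’s actual implementation from the other threads):

```python
def copy_string(a: str) -> str:
    """The 'copying function' F: for every position n, set B_n = A_n."""
    return "".join(ch for ch in a)   # equivalently just `return a`

# F is these same few lines of code no matter what A contains, so its
# description length K(F) is a small constant. Since I(F:A) is bounded
# (up to logarithmic terms) by min(K(F), K(A)), the mutual information
# between F and an arbitrary A stays small, even though B = F(A) is
# perfectly correlated with A.
a = "0110100111010001"   # A could be anything: random bits, Wikipedia, a genome
b = copy_string(a)
assert b == a
```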
@EricMH, I showed you that a natural process can increase FI.
You made that claim. We responded that MI just arises naturally from the deterministic process of duplication.
Then you pulled this out:
Which therefore seems to demonstrate that your argument is false by reductio ad absurdum. Except you seem to be actually taking this to be a valid position.
You seem to be making far fewer mistakes than most people here.
Depends on how you measure it. In some cases it might increase information quite a bit, doubling it.
If we have reduced FI to merely restating the fact that DNA can duplicate, perhaps @EricMH can explain how duplication is not a deterministic + random process.
See, I’m starting to suspect more and more that the argument Eric makes is at the basic level simply the statement that “any order in nature is evidence of ID.” This is consonant with his statements in other threads that science is literally impossible assuming methodological naturalism. For Eric, even Newtonian mechanics was only possible using ID assumptions (i.e. non-MN):
The problem with this is that this reduces the ID argument to a purely philosophical one. Very different from what most ID proponents have been proposing. It would really just be a variant on the argument for God from the unreasonable effectiveness of mathematics to model nature, or from fine-tuning. All the information theory is just window dressing.
I am hoping that the ID argument is more interesting than this.
In this case F literally consists of one line of code. And A could be anything - from a random string of a billion numbers to the entire English Wikipedia. I can’t imagine any definition of a correlation function to compute I(F:A) which would return a high value in most cases, because there are limitless possibilities for A.
I think I agree with what you are saying. What I am saying is that it seems Eric is arguing that the LZ compression function itself (which in Eric’s implementation here consists of 25 lines of code) has some mutual information between itself and whatever is being compressed or copied. In that way, he argues that duplication does not increase mutual information in the system. That doesn’t make sense to me, because it is literally false. The information that is being compressed by the function is not contained in the function itself. However, it is true that when you apply the function to something to duplicate it, the total mutual information in the system increases.
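To put rough numbers on that, here is a sketch of my own, using Python’s built-in zlib as a stand-in for an LZ compressor rather than Eric’s actual 25-line implementation (the exact values will depend on the compressor used):

```python
import zlib, random, inspect

def c(data: bytes) -> int:
    """Compressed length, a crude stand-in for Kolmogorov complexity K."""
    return len(zlib.compress(data, 9))

def mi_estimate(x: bytes, y: bytes) -> int:
    """Rough estimate of algorithmic mutual information: I(X:Y) ~= K(X) + K(Y) - K(X,Y)."""
    return c(x) + c(y) - c(x + y)

random.seed(0)
a = bytes(random.getrandbits(8) for _ in range(10_000))  # an essentially incompressible string A
b = a                                                    # duplication: B is an exact copy of A

f_source = inspect.getsource(mi_estimate).encode()       # the estimating function's own source code

print(mi_estimate(a, b))         # large: duplication creates MI between A and B
print(mi_estimate(f_source, a))  # near zero: the function's code shares essentially no MI with A
```

The first estimate comes out large, close to the compressed size of A itself, because B is a copy of A; the second comes out near zero, because the compressor’s own source code shares essentially nothing with A.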
There is nothing special about DNA replication. It’s just a nearly deterministic physical process that produces a chemical that happens to be easy to think of in terms of digital information. But the production of mutual information is a ubiquitous feature of natural, deterministic processes. There are correlations all over the place in the real world – between the temperature on my block and the temperature on the next block, between the density of crabgrass in front yards and the density of crabgrass in backyards, between the time of year and the amount of sunlight, and on and on. Such correlations exist because causality is a feature of reality, and the closer a system is to being perfectly deterministic, the stronger the correlations and the larger the mutual information between different products of the same physical cause. Mutual information exists, and has to exist, because of determinism, not in spite of it. Mutual information between two cancer genomes, or any two offspring genomes, is created because DNA replication repeatedly produces the same product (more or less) over and over again. That is the physical reality that is being modeled in terms of mutual information. Any argument that concludes that determinism cannot produce mutual information is incoherent.
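A toy version of that last point, as a sketch of my own (the 1% per-base mutation rate is an arbitrary stand-in for replication error): replicate a random parent sequence twice and estimate the per-site Shannon mutual information between the two offspring.

```python
import random
from collections import Counter
from math import log2

random.seed(1)
ALPHABET = "ACGT"
MUT_RATE = 0.01   # arbitrary illustrative per-base mutation rate

def replicate(seq: str) -> str:
    """Nearly deterministic copying: each base is copied faithfully, with a small chance of mutation."""
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else base for base in seq)

parent = "".join(random.choice(ALPHABET) for _ in range(100_000))
child1, child2 = replicate(parent), replicate(parent)

# Estimate the per-site Shannon mutual information I(child1; child2) from empirical frequencies.
n = len(parent)
joint = Counter(zip(child1, child2))
p1, p2 = Counter(child1), Counter(child2)
mi = sum((c / n) * log2((c / n) / ((p1[x] / n) * (p2[y] / n))) for (x, y), c in joint.items())
print(f"I(child1; child2) ~= {mi:.2f} bits per site (maximum for a 4-letter alphabet is 2 bits)")
```

The estimate comes out close to the 2-bit maximum precisely because the copying is nearly deterministic; make the replication noisier and the mutual information between the two offspring drops.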
It seems to me that the central issue is what the information no-growth theorem actually says and what it applies to. That might be a more fruitful area to explore than whose implementation is correct.
That is where we started. I was explaining to @EricMH that there is a gap between the theoretical domain where the no-growth theorem applies and the empirical domain we live in as scientists. The proofs he has in one domain do not apply empirically. He made claims that I told him were wrong empirically. The simulations were just to demonstrate that this is the case.
I think the issue of whether Eric is just arguing for ID on a purely philosophical level (as I guessed in my post above), such that he can claim that DNA replication is evidence of ID, should be clarified first. Otherwise the details of implementing the law of information non-growth are just a red herring. Even if you prove all his implementations don’t work, he can always fall back on that basic claim that all order in nature is ID.
Exactly, this is why the whole implementation exercise @swamidass had me do is pointless, which I said from the beginning. @swamidass needs to state his argument against ID instead of this long misdirection he had me doing for weeks.
Yes, this is also correct. Due to the law of information non-growth, all order is evidence of ID, so all investigations of order are not being consistently MN. Hence Hume’s argument against induction and, more recently, Wolpert’s No Free Lunch Theorem, which are consistent applications of MN.
But, you are right, it is somewhat scientifically dissatisfying if this is all that ID claims, although such assumptions have motivated many of the major scientific breakthroughs (e.g. Newton). Science requires some empirical way to differentiate hypotheses, and that is where the information non-growth theorem comes in. Insofar as you can quantify the existing mutual information in a system, the theorem says a naturalistic (chance+determinism) system cannot produce more than it contains. On the other hand, if the system contains intelligent agents, then there can be a net gain in mutual information.
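To be concrete about which result I mean: as I understand it, the relevant statement is roughly Levin’s conservation inequality, I(f(x):y) ≤ I(x:y) + K(f) + O(1), where I(x:y) = K(x) + K(y) - K(x,y) is algorithmic mutual information and f is any computable transformation; randomized processing likewise cannot increase I(x:y) by more than k bits except with probability on the order of 2^-k.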
While it is clearly not straightforward to test, there is at least a notion of empirical testability here, and the information theory it is based on has been applied in many practical scenarios (albeit the Shannon form is more widely applied than the Kolmogorov form).
To respond to another comment, when I say “information” I mean “mutual information”. Yes, it is confusing. I could use the acronym MI all the time if that would reduce confusion.
OK, Eric. Thanks for clarifying. I agree with you that if we can’t agree on this basic argument, the implementations are pointless. The problem is, as I mentioned above, I don’t see how different your argument is from arguments for God from fine-tuning of constants, or from the unreasonable effectiveness of mathematics. In fact, in other threads I have been advancing similar arguments. One crucial difference here is that I don’t think this is something that has anything to do with methodological naturalism - I think this is a question that is answered with philosophical and theological arguments.
But even if this turns out to be right, it will not affect the way we do science. The evidence for this is that I myself (and many other Christian scientists) believe that God has designed the universe, and God continuously sustains the order He has created, yet we hold to MN. Thus to me, information theory arguments are really red herrings because they only obscure the basic philosophical argument that you are advancing. As I said, it’s just window dressing.
OK, but information non-growth in the way you described here is not going to be able to decide between the hypotheses of ID and non-ID. At most, it can differentiate between “more ID” and “less ID”, assuming you’ve already committed to the basic thesis that all order is evidence of ID. We’ve just seen this demonstrated in this thread, where even if Josh demonstrates that MI can arise out of natural processes, you can simply reply that those natural processes are the result of ID anyway. Therefore, the disagreement is still fundamentally philosophical. This is why Josh keeps complaining that the debate is “circular.” Because it is: the science assumes a certain philosophy of nature. The debate about ID (at least your version of it) isn’t a scientific one, it’s a philosophical one, or perhaps even just a rhetorical one.
It would seem that from a proximate, causal standpoint, the evolution of life and possibly the origin of life would be compatible with all these ‘law of information non-growth’ arguments. Any ‘information’ necessary could be injected at the ‘time’ of the universe’s origin. This would be compatible with a deistic ID model.
Now if one wants to argue that organisms can’t evolve new functions or encode information obtained from environmental interactions, that would seem a hard sell to me.