Got it. I’m doing that translation in my head when you speak.
So why do biology and DNA matter so much? Why not just make that claim? In your view, the fact that we are not a chaotic soup, that there are any patterns at all of any kind, is itself evidence of design.
There is a self-contradictory set of claims being made.
On one hand, you claim that natural law cannot produce new information (mutual information); only intelligence can.
On the other hand, you claim that when natural laws produce new information (mutual information), this is because intelligence is what set up the laws.
So you are claiming simultaneously that natural law can and cannot produce new information. Either way it is intelligence in your book. Fine. However, can natural laws produce new information or not?
If all you are making is a fine-tuning argument, there would be no debate to be had. Why all the mathematical song and dance in information theory? What exactly does it add?
What I do claim is that when we think natural laws are producing new MI, it will always turn out, on further analysis, that the MI came from somewhere else, where it pre-existed. Chance and determinism can never produce a net increase in MI. This is just the law of information non-growth.
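For readers following along, the inequality being invoked (my loose paraphrase of Levin's result in its algorithmic form; see Li and Vitanyi for the precise statement) is roughly that for any computable function $f$,

$$I(f(x) : y) \;\le\; I(x : y) + K(f) + O(1),$$

and randomized processing cannot increase MI in expectation either. Deterministic processing can add at most the constant cost of describing $f$ itself; it cannot manufacture new MI about $y$.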
Replication does seem to depend on prior MI. Basically, there is no way to get MI for free, so in general any instance of MI is indicative of ID.
Consequently, fine-tuning is also an instance of the information argument, but the information argument is more general and applies to more areas. This is why I said all the ID arguments reduce to Dembski’s CSI argument. CSI and the information-producing capability of intelligence are the central issue.
ID becomes scientific rather than merely philosophical when it claims we can detect the creation of new information and distinguish this from information being passed on from another source.
To bring this back to DNA and evolution, some make the claim that evolution can originate the MI in DNA. In fact, that is its whole claim to fame: that natural selection and mutation are sufficient to create all the genetic diversity we see, as if there were some special MI-creating ability in the evolutionary process itself. But this we now know to be false, due to the law of information non-growth.
However, you haven’t shown that information non-growth has any implications for the subset of MI in DNA. If your information theory argument is correct, then the mutual information in the entire system cannot increase. But the entire system (everything that causally interacts with the DNA) includes not just the DNA but the rest of the organism, the rest of the ecosystem, the rest of the planet, the sun, and the volume of space that the sun radiates into. If you model that entire system as deterministic, I have no problem with the claim that the information in it does not increase. The mutual information between two copies of a genome, however, is a minuscule portion of the total information, and nothing you’ve presented here says anything about whether it can increase or not.
Help me square these statements with what you say next…
In your view, do you believe that (A) DNA replication requires the direct input of Intelligence every time a cell, virus, or plasmid replicates? Or do you believe that (B) DNA replication is just what happens, according to natural law, as the cellular “machinery” works on DNA?
If A, this is consistent with vitalism, which you have claimed as your view before. What evidence do you have that DNA replication requires the direct intervention of intelligence each and every time it happens? Is it merely the information non-growth theorem? Is it not more likely that you are just misapplying it?
If B, then it seems to contradict what you’ve said earlier. It seems then that natural law can increase empirical (not the non-observable kind) MI without any difficulty. That would seem to directly contradict your claim that natural law cannot increase empirical MI.
To be clear, empirical MI can increase while the non-observable sort of MI stays constant. Though I’m not sure it’s true, I’m fine with that claim. At question here is using proofs regarding an unobservable, theoretical MI to deal with empirical MI. That is the move that is invalid. This does not mean that information theory for practical problems is invalid. It just means your use of the theory is invalid for the problems you have selected.
I agree @dga471. It looks like an example of goal-post moving. Clearly a copy is a deterministic function. However, a deterministic function is not supposed to be able to increase MI. Yet it can. This gets back to some old questions about the simulation that I asked @EricMH and that he never answered: what are valid choices for X, Y, and E?
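To make that concrete, here is a minimal sketch (my own toy code, not @EricMH’s simulation) in which Y is a random string, E is a plain deterministic copy, and empirical MI is estimated with zlib standing in for the compressor:

```python
# A toy empirical-MI estimate: I(X:Y) ~= C(X) + C(Y) - C(X+Y),
# with zlib's compressed length as a crude stand-in for Kolmogorov complexity.
import os
import zlib

def c(s: bytes) -> int:
    """Compressed length in bytes."""
    return len(zlib.compress(s, 9))

y = os.urandom(1000)  # the "random" source string Y
x = bytes(y)          # E is a plain deterministic copy: X = Y

mi_est = c(x) + c(y) - c(x + y)
print(f"C(Y) = {c(y)}, estimated MI(X:Y) = {mi_est} bytes")
# zlib finds the repetition in X+Y, so C(X+Y) ~= C(Y), and the estimate
# comes out near len(Y): the deterministic copy produced empirical MI.
```

On this estimator, a pure copy drives the empirical MI to roughly len(Y), which is exactly the behavior under dispute.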
Another very puzzling set of statements:
Compression is not possible except in ergodic sequences, period. Ergodic, however, is a strange term. Clearly all these big strings are compressible down to the random seed, so they are ergodic. They are just not ergodic in the way that LZ understands ergodic (key point!). We have to introduce some order into the sequences, which my first implementation did (by encoding as strings of 0 and 1) to get any compression from any algorithm. This gets to an interesting catch-22 that @EricMH falls into:
1. He wants compression to reduce the size of the random bit string.
2. He doesn’t want there to be any “computer perceivable” order in these bit strings.
These two requirements are logically contradictory. We have to pick one or the other; we can’t have both. I actually produced an implementation that satisfied #1, but he complained that #2 was not satisfied. Then I produced an implementation that satisfied #2, and he complained that #1 was not satisfied. So which one is it, @EricMH? Which of these two mutually contradictory requirements is how you want to run the simulation? You can’t have both. You have to choose. And no, I did not make an error. I’m just trying to follow these self-contradictory requirements.
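To see the tension, here is a minimal sketch (hypothetical toy data, zlib again as the stand-in compressor) of the same 8000 random bits in both forms:

```python
# The same 8000 random bits, packed as raw bytes vs. spelled out as ASCII text.
import os
import zlib

bits_raw = os.urandom(1000)                         # 8000 bits, no computer-perceivable order
bits_ascii = "".join(f"{b:08b}" for b in bits_raw)  # the same bits as '0'/'1' characters

print(len(zlib.compress(bits_raw, 9)))              # ~1000+ bytes: no compression at all
                                                    # (requirement #2 holds, #1 fails)
print(len(zlib.compress(bits_ascii.encode(), 9)))   # roughly an eighth of the 8000 bytes:
                                                    # it compresses, but only because the
                                                    # encoding introduced order
                                                    # (requirement #1 holds, #2 fails)
```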
This is in error. This hackish “fix” does not produce a compression algorithm by your own definition. It instead creates a function that will almost always increase the size of the input by one bit. It is therefore not a compression algorithm, violating one of your requirements.
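For anyone wondering why that one-bit overhead is unavoidable, the usual counting argument (my gloss, not from the thread) goes:

$$\bigl|\{0,1\}^{<n}\bigr| = 2^{n} - 1 \;<\; 2^{n} = \bigl|\{0,1\}^{n}\bigr|,$$

so no lossless code can shorten every length-$n$ input, and a scheme that prepends a one-bit “compressed or not” flag must emit $n+1$ bits for the inputs it cannot shrink, which for random inputs is almost all of them.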
Moreover, his last experiment demonstrates another error in interpretation:
Once again, the claim is wrong. Actually, it will never cross for copies of pseudorandom bit strings, and this is a problem because the MI is supposed to equal len(Y), not zero. H(Y+Y) is supposed to approach len(Y), not len(Y+Y), and that will actually become less likely as the length increases. So it is a false claim, and even if it were true, it would still demonstrate the error in his implementation.
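Spelling out the numbers for the copy case (my sketch, using the standard AIT identities up to $O(\log n)$ terms): if $Y$ is an incompressible string of length $n$ and $X = Y$, then

$$K(YY) \le K(Y) + O(\log n), \qquad I(X{:}Y) \approx K(X) + K(Y) - K(XY) \approx K(Y) \approx n = \mathrm{len}(Y),$$

so a correct implementation should report an MI near len(Y) for copies, not near zero.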
I do believe that @EricMH is an honest character here. However, it is hard to make progress when basic and demonstrable errors are being made over and over.
Also somewhat stunning to me is that I already explained how to fix it for some narrow cases.
This actually would produce a reasonable MI in @EricMH’s simulation, though, as I said, it is easy to break with a different E function (a rotation, and many other sorts of shuffling). What is hard for me to get my head around is @EricMH’s reasoning here. It really does look like epistemic closure, but there is such confidence exuding from him that I’m unclear exactly what is going on in your head, @EricMH. I hope you can enlighten us.
There is a point here too. It is not possible to fix for all cases, just some. You have to actually model the process to know the compression size / probability function.
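Here is a minimal sketch of the rotation example (a hypothetical E, with zlib once more standing in for the compressor): a bit rotation is trivially invertible, so the true MI is still essentially len(Y), but a byte-oriented LZ compressor no longer sees any aligned repeats, and the concatenation estimate collapses:

```python
# A bit rotation preserves all the MI (it is invertible), but it breaks the
# byte alignment that LZ-style compressors rely on to find repeats.
import os
import zlib

def c(s: bytes) -> int:
    """Compressed length in bytes."""
    return len(zlib.compress(s, 9))

def rotate_bits(s: bytes, k: int) -> bytes:
    """Rotate the whole bit string left by k bits (trivially invertible)."""
    n = len(s) * 8
    k %= n
    v = int.from_bytes(s, "big")
    v = ((v << k) | (v >> (n - k))) & ((1 << n) - 1)
    return v.to_bytes(len(s), "big")

y = os.urandom(1000)
x = rotate_bits(y, 3)  # E(Y): deterministic and invertible, so true MI ~ len(Y)

print(c(x) + c(y) - c(x + y))  # collapses toward zero: the estimator failed, not the MI
```

This is why you have to model the process: the right estimator for a copy is the wrong estimator for a rotation.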
Quite a lot here, but it might help if you weren’t so intent on claiming everything I state is a contradiction. Perhaps try thinking of a way the statements don’t contradict, or consider that perhaps I didn’t say what you think I said; i.e., apply the principle of charity:
In philosophy and rhetoric, the principle of charity or charitable interpretation requires interpreting a speaker’s statements in the most rational way possible and, in the case of any argument, considering its best, strongest possible interpretation.
I attempt this with your posts, so I’d appreciate a returned favor here.
Otherwise, you’ll just “win” by throwing up too much flak, because I don’t have enough time to address everything, which is a hollow victory indeed.
The basic point is that the law of information non-growth (LoING) means you cannot get a net increase of algorithmic MI from deterministic and random processes (DRP). That point is proven. The point of controversy is how this translates over to the calculable, empirical realm.
Your first argument is that we only see a portion of everything, so we cannot differentiate between MI and conditional MI, which you call FI. However, FI is a lower bound on MI, so any measurement of FI > 0 indicates MI > 0, and per LoING, FI > 0 indicates some cause that is not reducible to DRP. So this argument does not work.
Your second argument is that we can never truly measure algorithmic MI, and therefore empirical measurements do not tell us anything about the algorithmic MI of X. This argument is more interesting, but so far it has only revolved around demonstrating that I have difficulty constructing ways to empirically measure algorithmic MI, which other commenters have pointed out is not conclusive of anything. If you are correct on this premise, then you certainly have a good argument, but you need to do more than point out flaws in my experiments. If you cannot, then you have no grounds for saying ID is fundamentally flawed other than an argument from personal incredulity, and that argument is like Bill Gates saying we only need 640K of memory in our computers.
And personally, since LoING is proven, this whole discussion seems pretty moot to me. I find it inconceivable that algorithmic MI has no bearing on the empirical realm, especially since it has resulted in useful technology and applications (e.g., Li and Vitanyi’s work).
I am happy to be convinced otherwise. But, if you don’t have any positive arguments why algorithmic MI has no empirical validity, then I don’t see a great deal of progress here.
The problem here, at least from my perspective, is that @EricMH appears to be making an argument about an abstract mathematical model, and most of us are concerned with empirical questions related to biology and evolution. So there appears to be a huge communication gulf.
For myself, I do not see how the abstract model connects to the empirical questions. I do not see how AIT (algorithmic information theory) could possibly be relevant.
Yes, it has been proven; however, you are misunderstanding us. Algorithmic MI is a very useful concept, just not for what you are doing. That is the difference. It just cannot be used the way you are using it. So the fact that it is useful for some things (which it is) does not help you demonstrate that it is useful precisely in a place where it cannot be useful.
Your conclusion does not follow. MI > 0 does not imply a change to MI, only that it is greater than zero, so LoING is not violated. (Unless by MI > 0 you actually mean ΔMI > 0, in which case your conclusion doesn’t follow for a different reason.)