A simple one would be to specify a random target Y, a random initial state X, and make E some other random string, so that E(X) is E \oplus X. To measure I(X,E:Y) and I(X:Y), we use Lempel-Ziv compression, where LZ(X) is the compressed size of X. I(X:Y) is then estimated by I_{LZ}(X:Y) = LZ(X) + LZ(Y) - LZ(X.Y).

So, the prediction from my argument is I_{LZ}(X:Y) \geq I_{LZ}(E(X):Y). Since everything is randomly generated, this will be trivially true, as both I_{LZ}(X:Y) = 0 and I_{LZ}(E(X):Y)=0.
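A minimal sketch of this null case, using zlib compressed size as the stand-in for LZ() (the choice of zlib and the 1000-byte string length are my own illustrative assumptions):

```python
import os
import zlib

def lz(s: bytes) -> int:
    """Compressed size of s, used as an empirical proxy for LZ complexity."""
    return len(zlib.compress(s, 9))

def i_lz(x: bytes, y: bytes) -> int:
    """Estimate I_LZ(X:Y) = LZ(X) + LZ(Y) - LZ(X.Y)."""
    return lz(x) + lz(y) - lz(x + y)

n = 1000
X = os.urandom(n)   # random initial state
Y = os.urandom(n)   # random target
E = os.urandom(n)   # random key, so E(X) = E xor X
EX = bytes(a ^ b for a, b in zip(E, X))

# With everything independent and random, both estimates sit near zero,
# up to a few bytes of compressor header overhead.
print(i_lz(X, Y), i_lz(EX, Y))
```

As predicted, both quantities come out within compressor overhead of zero, so the inequality holds here only trivially.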

Not true (e.g. channel capacity), but also why would that matter? If it better describes what we mean by information, then it is a better description, period, regardless of practical considerations.

This clarifies things: you are using empirical compression as an approximation of Kolmogorov complexity (which is uncomputable).

So that is what I was thinking, but you haven’t yet fully specified the experiment. What are the other controls you are going to run? What is X going to be and what is Y going to be? What is the E() function going to be? How do you verify your choice of X and Y (objectively) is valid?

In this you are wildly deviating from the standard explication of information theory. It’s okay. You can make it your own.

The reason why uncomputability is so important is because it means it is impossible to accurately measure information from data alone. That means we cannot accurately measure the information content of DNA. This is a fundamental finding of information theory.

Beyond running the experiment itself, the Python implementation should include:

A function for deciding if the choice of X and Y is valid.

A function for deciding if the choice of E is valid. Failing that, at least offer clear criteria.

A choice of E, X and Y that you are using for your main run.

A choice of E, X and Y (and whatever else) you are using for your controls.

A function for computing I(). Obviously make a call to a library for compression size, no need to implement that from scratch.

This will also need clear explanation of what you think will happen with your control experiments and why. They should each be mapped back to your main argument. Lest you feel this is an onerous programming task, it is probably just 20 lines of python code. Nothing substantial at all.
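For concreteness, here is one way those pieces might be assembled into roughly the promised 20 lines (the particular validity criterion and the identity-function control are my own illustrative assumptions, not a prescription):

```python
import os
import zlib

def lz(s: bytes) -> int:
    """Compressed size as the LZ proxy."""
    return len(zlib.compress(s, 9))

def i_lz(x: bytes, y: bytes) -> int:
    return lz(x) + lz(y) - lz(x + y)

def valid_pair(x: bytes, y: bytes, slack: int = 64) -> bool:
    """One possible validity criterion: X and Y should share no detectable
    mutual information before E is applied."""
    return abs(i_lz(x, y)) <= slack

def run(e_fn, x: bytes, y: bytes):
    """Return (I_LZ(X:Y), I_LZ(E(X):Y)) so the predicted inequality can be checked."""
    return i_lz(x, y), i_lz(e_fn(x), y)

n = 1000
X, Y, E = os.urandom(n), os.urandom(n), os.urandom(n)
assert valid_pair(X, Y)

xor_e = lambda x: bytes(a ^ b for a, b in zip(E, x))  # main run: E(X) = E xor X
ident = lambda x: x                                   # control: E is the identity

before, after = run(xor_e, X, Y)
print(before, after)
```

The identity control is the expected-behavior check: it must give exact equality by construction, so any deviation would point to a bug in I() rather than to the hypothesis.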

Of course, if as you do this, you uncover the error in your argument that’s great. You get credit for figuring it out for yourself, and acknowledging it here.

Well, we have an equivalence of sorts between the uncomputable information that makes us human, and the uncomputable information that makes life alive.

And both very different from heat death, though they’re mathematically equivalent.

I think the error you see is I am making an absolute claim instead of a probabilistic one. Sure, this is an error of sorts, but I’m using absolutes to simplify the discussion. Of course, with stochasticity involved, there is deviation from the absolute, and there can be error bounds and such.

However, probability isn’t the problem. Sure there is some minute probability of anything happening in a stochastic, discrete world. But, evolution is not interesting for that reason. It is supposedly a way to turn these small probabilities into big probabilities, and it doesn’t work.

At any rate, I’ve used a lot of time and effort here, and am still no closer to understanding why you disagree. It only seems to be that you disagree with how I use the word ‘information’, which is not a significant disagreement. I’ll use ‘information’ in whatever way you want, but that doesn’t change the actual content of what I’m arguing.

Why does that matter? We cannot accurately measure anything in this world. So what? Science still happens. Technology still works. Plus, pragmatics are beside the point. If evolution is mathematically impossible, it is impossible, period, whether we can measure it or not.

Yes, empirically there are deviations from any theory when stochasticity is involved. Why is that a problem? We would just need to introduce error bounds of some kind and run enough tests.

Perhaps we can cut to the chase, and you can explain what the fundamental error is.

Uncomputable does not mean undetectable. If we are halting oracles, we can originate the necessary axioms to detect such things. All of science has this problem, and the only reason science works is because we halting oracles have derived many axioms about this physical world.

This is all I’ve got for you. You’ve got to be more straightforward in these discussions and stop playing hard to get. Just state your point clearly and concisely, and then we will see whether you’ve refuted the law of information non growth. If you cannot do this, then we are finished here.

In biology, functional information can have several solutions that achieve nominal function. This is conceptually described as a hill that can be climbed by natural selection. If the functional information were spread equally across all possible sequences, then I think your proof fails. This is not realistic, but it may be a failure point.

Great job @colewd, you found one key error. He has used merely one E() in his simulation and thinks this extrapolates to all possible E(). In fact, if we can produce an E() that breaks his formula, of any sort, then his claim is falsified. You are producing an alternate E verbally, but with a computer implementation there need not even be an argument. We only have to produce it and empirically demonstrate that it violates his rule.
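To make that concrete, here is one assumed counterexample E() in the spirit of the above: an encoder that simply smuggles the target Y in. This is illustrative code of my own, not anyone's actual implementation:

```python
import os
import zlib

def lz(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def i_lz(x: bytes, y: bytes) -> int:
    return lz(x) + lz(y) - lz(x + y)

n = 1000
X, Y = os.urandom(n), os.urandom(n)

def e_smuggle(x: bytes) -> bytes:
    """A deterministic E() that ignores x and outputs the target itself.
    The empirical estimator then gives I_LZ(E(X):Y) = I_LZ(Y:Y), which is
    roughly LZ(Y), while I_LZ(X:Y) stays near zero for random X."""
    return Y

# The claimed rule I_LZ(X:Y) >= I_LZ(E(X):Y) fails here by a wide margin.
print(i_lz(X, Y), i_lz(e_smuggle(X), Y))
```

Whether such an E should be admissible is exactly what a validity function for E (or at least clear criteria) would have to settle, which is why that item is on the list above.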

It is also interesting that the one control he chooses deterministically fails. That means even his single control experiment demonstrates his intuition is wrong. It is not even a very good control.

These are two obvious and major errors. There are more.

You defined evolution as “some combination of algorithmic processing and randomness injection.” This is an idiosyncratic definition.

EricMH:

“Evolution” is a vague word. It might mean Darwinian evolution. It might mean the evolution posited by the early Greek philosophers. It could mean Wallace’s guided evolution. Some forms of evolution are less problematic, such as Lamarckianism, animal breeding or genetic engineering, since they involve the interaction of a directing agent. Others are problematic, because they claim complex organisms come about without any kind of directedness. I am arguing the latter sort of evolution is making a mathematical claim that is false.

I’m seeing a problem with the notion of modification via animal breeding being less problematic than evolution with natural selection… Or rather, with the idea that Eric’s analysis would distinguish between the two, one being possible and the other mathematically ‘wrong’ or improbable. Am I misinterpreting the statement?

You are correct, but this does not contradict my proof. You’ve offloaded the information into the evolutionary process, but that still does not explain its origin. I mentioned this possibility here:

The point is even if evolution contributes some mutual information, it does not explain the origin of the mutual information. Something external to the evolutionary process placed the mutual information there. And this thing cannot itself be algorithmic/stochastic. The only other possibility is a halting oracle.

So, evolution’s claim to fame, that it can produce complex structure without any external input, is false.

On the other hand, if no one believes this anymore about evolution, then what are we arguing about? This is all ID has ever claimed, and if it is no longer controversial, then let’s all agree ID is correct, halting oracles exist, and move on.