Intelligence and Halting Oracles

My understanding is that I went through this whole empirical experiment exercise so that you could present your disproof of ID’s empirical validity. I’ve done the former, and now I await the latter. On the other hand, if you don’t have such a disproof, then say so.

OK, we are on the same page at least. I feared you might have it the other way around! :slight_smile:

In statistics we know U exists from the Rao-Blackwell-Kolmogorov theorem. …

And now I find I need some time to follow through the theory on the Algorithmic Information side. I will come back to this.

I think we are mostly in agreement, except that the expected information gain is not the goal. The distribution of information change just needs to capture some part of the fitness gradient. I’ll try to make that a more formal statement when I get back to this.

AND thank you for following up, I do appreciate it.

Edit: I had a thought about H(X) being achieved on a set of points with roughly equivalent function values, rather than at a single maximal point. I think that might imply a non-convex loss function, in which case Jensen’s inequality may not apply. This would also change the parameters of the current discussion, so I’m not sure it’s fair to introduce it now, but I thought it was worth noting.
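To make that convexity point concrete, here is a small numeric sketch (my own toy construction, not part of the formal argument above): for a convex loss, Jensen’s inequality guarantees E[f(X)] ≥ f(E[X]), but for a non-convex loss with several equally good minima, a distribution concentrated on those minima can reverse the direction.

```python
import random

random.seed(0)

def expect(f, xs):
    """Monte Carlo estimate of E[f(X)] over samples xs."""
    return sum(f(x) for x in xs) / len(xs)

# Convex loss: f(x) = x^2.  Jensen's inequality gives E[f(X)] >= f(E[X]).
convex = lambda x: x * x
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean = sum(xs) / len(xs)
assert expect(convex, xs) >= convex(mean)

# Non-convex loss with two equally good minima at x = -1 and x = +1.
nonconvex = lambda x: (x * x - 1.0) ** 2
# A bimodal X sitting on those two minima: the mean is near 0, each sample's
# loss is exactly 0, so E[f(X)] = 0 < f(E[X]) = f(0) = 1.  Jensen fails.
xs = [random.choice([-1.0, 1.0]) for _ in range(100_000)]
mean = sum(xs) / len(xs)
assert expect(nonconvex, xs) < nonconvex(mean)
```

So when H(X) is attained on a set of equivalent points rather than a single maximum, the Jensen-based step of the argument does need separate justification.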


@Timothy_Horton Be polite. There is some deep math here, and I won’t just wave it away without understanding it.

If you want to insult people, go back to Facebook.


He did say he would follow up on my questions, and I’m glad that he has.


When someone claims to have provided empirical evidence for the Intelligent Design of biological life, an event which, if true, would be one of the most profound scientific discoveries of all time, it seems only prudent to be skeptical. I’m merely pointing out that asserting “no one has DISPROVED my claims” isn’t scientific evidence for ID.


Tim, there is a real discussion happening here, and I would like to see it through. If you can’t contribute in a constructive way, then please do not interrupt.


In this setting we might be able to determine when I(X; U(Y),p) will and will not converge to H(X). That depends on a number of unstated assumptions, including a fitness gradient, which is beyond the scope of this discussion. We can find coin-flip scenarios where I(X; U(Y),p) generates MI relative to fitness (convergence).

That doesn’t mean you are wrong about increasing the expectation over all possible functions, but it does leave some wiggle-room for randomness to increase information within the accepted definition of Information Non-Growth, for some subset of functions.
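One way to see that wiggle-room numerically (a toy check of my own, not the setup under discussion): the non-growth bound I(X; f(Y)) ≤ I(X; Y) holds for every deterministic f, so the expectation over a random choice of f can’t exceed it either, but individual choices of f land at different points below the bound.

```python
from math import log2
from collections import Counter
import random

random.seed(2)

def entropy(events):
    """Empirical Shannon entropy in bits."""
    n = len(events)
    return -sum((c / n) * log2(c / n) for c in Counter(events).values())

def mutual_info(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# X and Y are correlated bits: Y copies X 90% of the time.
n = 50_000
xs = [random.randint(0, 1) for _ in range(n)]
ys = [x if random.random() < 0.9 else 1 - x for x in xs]
baseline = mutual_info(xs, ys)

# Post-processing Y with any deterministic f cannot raise the MI
# (data-processing inequality), though different f's land at
# different points below the bound.
funcs = {"identity": lambda y: y,
         "negate": lambda y: 1 - y,
         "constant": lambda y: 0}
for name, f in funcs.items():
    assert mutual_info(xs, [f(y) for y in ys]) <= baseline + 1e-9
```

The expectation constraint is over all functions; any particular function (or realized random draw) is free to sit anywhere at or below the bound, which is the room left for a subset of functions to do better than others.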

@dga471 My original question has to do with the definition of MI. “MI” is not a thing in itself but a measure of commonality between two things, A and B, and here replication creates MI by definition.

Eric is saying that duplication of A to create a new object B creates no new information, which is also correct, but in a different meaning, as you noted.
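Both senses can be made concrete in a few lines (a toy sketch, not anyone’s formal argument): copying A to get B drives the mutual information to its maximum, I(A;B) = H(A), while at the same time the pair (A, B) contains no more information than A alone, H(A,B) = H(A).

```python
from math import log2
from collections import Counter
import random

random.seed(1)

def entropy(events):
    """Empirical Shannon entropy in bits."""
    n = len(events)
    return -sum((c / n) * log2(c / n) for c in Counter(events).values())

# A is a random sequence over a 4-symbol alphabet; B is an exact copy.
a = [random.choice("ACGT") for _ in range(50_000)]
b = list(a)                      # replication: B duplicates A

h_a = entropy(a)
h_b = entropy(b)
h_ab = entropy(list(zip(a, b)))  # joint entropy H(A,B)
mi = h_a + h_b - h_ab            # I(A;B) = H(A) + H(B) - H(A,B)

# Replication makes the mutual information maximal: I(A;B) = H(A).
assert abs(mi - h_a) < 1e-9
# Yet the joint (A,B) holds no more information than A alone: H(A,B) = H(A).
assert abs(h_ab - h_a) < 1e-9
```

Both statements are true at once, which is why the two of you can each be “right” while using MI in different senses.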

I need to think about this, and backtrack to what I wrote about gaining information from the environment, then sit down and try to turn this into a clearly stated argument.