The EricMH Information Argument and Simulation

The above statement is incorrect: the equation is violated 30% of the time in the control, which matches my prediction that it would be right more often than wrong.

I must confess that I am still not entirely clear on what your objection is, so below is my best attempt to understand it.

Is your objection that, since we have to approximate algorithmic information, there will be deviations from the theorem when we use calculable approximations? In other words, in my notation, that at least some of the time I_{LZ}(X:Y) < I_{LZ}(E(X):Y) even when I_{LZ}(E:Y) = 0? This objection makes sense to me. You may well be right, and it is something I would have to analyze further to see exactly what the limits of calculable approximations are.
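To make sure I am engaging with the objection concretely, here is a minimal sketch of how I understand it. This is my own illustrative construction, not the actual simulation from the experiment: it uses zlib's DEFLATE as a crude stand-in for Lempel-Ziv complexity, estimates I_{LZ}(X:Y) as C(X) + C(Y) - C(XY), and counts how often a random transformation E (whose randomness is independent of Y) increases the estimate, something the exact theorem forbids but the approximation does not:

```python
import random
import zlib

def c_lz(s: bytes) -> int:
    """Approximate complexity: length of the zlib-compressed string."""
    return len(zlib.compress(s, 9))

def i_lz(x: bytes, y: bytes) -> int:
    """Approximate mutual information: I_LZ(X:Y) ~ C(X) + C(Y) - C(XY)."""
    return c_lz(x) + c_lz(y) - c_lz(x + y)

def apply_e(x: bytes, flip_prob: float = 0.02) -> bytes:
    """A random processing E(X): flip each bit independently with
    probability flip_prob. E's randomness is independent of Y, so the
    exact theorem bounds I(E(X):Y) by I(X:Y); the approximation need not."""
    out = bytearray(x)
    for i in range(len(out)):
        for b in range(8):
            if random.random() < flip_prob:
                out[i] ^= 1 << b
    return bytes(out)

random.seed(0)
trials, violations = 200, 0
for _ in range(trials):
    y = bytes(random.getrandbits(8) for _ in range(512))  # random target
    x = apply_e(y)  # X is a noisy copy of Y, so I_LZ(X:Y) > 0
    if i_lz(apply_e(x), y) > i_lz(x, y):
        violations += 1
print(f"I_LZ(E(X):Y) > I_LZ(X:Y) in {violations}/{trials} trials")
```

Since a real compressor is only a loose upper bound on Kolmogorov complexity, the estimate can move in either direction under E, so some rate of "violations" in an approximation says nothing against the exact theorem itself.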

By itself, the above does not address my argument. So you are also importing another premise: that computable information metrics, not the algorithmic information I have been using, are what matter for the whole debate. This may also be true, and it is something I will have to think on further.

The whole time I thought you were disputing the validity of the information non-growth theorem, which made no sense to me, as it is mathematically proven. Now it seems you do not actually disagree with that point. I guess we really were talking past each other, as one of the commenters pointed out.
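For reference, the statement I have been relying on is the standard information non-growth theorem from algorithmic information theory. Formulations differ in their constant terms across sources, but the deterministic version is roughly:

$$
I(f(X):Y) \le I(X:Y) + K(f) + O(1)
$$

where $K$ is Kolmogorov complexity, $I(X:Y) = K(Y) - K(Y \mid X)$ is algorithmic mutual information, and $f$ is any computable function. A corresponding bound holds with high probability for randomized processing whose randomness is independent of $Y$, which is the form relevant to evolution as I have been modeling it.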

At any rate, this is my takeaway from the discussion:

  1. Evolution cannot increase algorithmic mutual information. To be clear, this is my argument, and I cannot see how the experimental detour applies. As far as I can tell, no one disagrees with this point.
  2. Evolution might increase approximated algorithmic mutual information (AAMI).
  3. AAMI might be the only metric that matters, since it is calculable.

I appreciate your many responses, and I will keep points #2 and #3 in mind.
