The EricMH Information Argument and Simulation

I see. Well, I understand that. Thanks for not holding it against me. I was not trying to dismiss you for being ID; it just seemed like you had misrepresented yourself. You did not, but that is how it came across. Thanks for taking it in stride.

Rather than go back to check the code, I’ll just grant that to you. It is really beside the point. That is only relevant to the control experiment you wrote. Besides, 70% accuracy is horrible for this domain.

The prediction is not that it would be “right more often than wrong.” You did not make a prediction; I asked you for one and you gave none. I applied your formula, and it implies the result should be right 100% of the time. It needs to be right 100% of the time for your reasoning to be valid. This demonstrates you missed something critically important.

The bigger problem is that your main simulation was right only 50% of the time. That means that 50% of the time evolution will improve mutual information (as you have defined it). Those are very good odds. It means either your application of this equation does not take into account key caveats in the original proof, or the original proof is wrong. I’m not going to untangle which one it is right here. If the original proof is valid, then you’ve missed key assumptions. Whatever the case, your argument ends up not being valid.

To make your case, you have many more experiments to run to demonstrate that every single step in your argument is correct. There are several controls that have to be run, and several validations. You are nowhere near making the case at this point.

If that proof is correct, you are certainly misapplying it. I’m not importing another premise; I’m just applying the basic findings of information theory. You cannot treat theoretical compressibility as if it were empirical compressibility. These things do not behave the same way.
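To illustrate the distinction, here is a toy example of my own (not anything from your simulation): the output of a pseudo-random generator with a tiny seed has a very short theoretical description, yet a practical compressor sees it as nearly incompressible.

```python
# Toy illustration of theoretical vs. empirical compressibility.
# The data below is fully determined by a tiny seed plus this short script,
# so its theoretical description is short; a practical compressor like zlib
# still finds it almost incompressible.
import random
import zlib

random.seed(42)  # the entire "theoretical" description is roughly this script
data = bytes(random.getrandbits(8) for _ in range(100_000))

compressed = zlib.compress(data, 9)
print(f"original size:   {len(data):,} bytes")
print(f"zlib compressed: {len(compressed):,} bytes")  # barely shrinks, if at all
print("theoretical description: about the size of this script plus the seed")
```

A real compressor only ever gives an upper bound on the theoretical description length, and the gap can be enormous. Treating the two as interchangeable is exactly the error I am pointing at.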

You know this because you are not getting the results you should. That means you do not know what that formula really means. That is your real problem.

I was not debating you. I was trying to make sense of what you were saying, and you were not being clear. I gave you a list of things I needed in the simulation; you did not provide them. I gave you a list of questions; you did not answer them all. I was not messing with you. The problem was that you were not being clear, and I was just trying to make sense of it.

Depending on what you mean, it is possible I agree. Once again, this is not clear. However, in the sense in which it is true, it is irrelevant to evolution. There is no way to map this proof to evolution, because algorithmic mutual information is impossible to measure and is not related to function.

Another thing you appear to miss is that “intelligence” is not exempt from the rules of information theory. It is not a magical exemption. There is nothing in information theory that says intelligence allows one to transcend its limits. That ends up being a key control you have to put into your experiments. That, also, is where it will be easiest to invalidate most of the arguments you put forward after the next step.

That is easy to answer. It is demonstrable that evolution inevitably increases AAMI to arbitrarily high levels. That, also, is the only metric that matters in this context because it is calculable.
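For concreteness, here is the sort of calculable quantity I have in mind. This is a rough sketch under my own assumptions, using compression in place of the incomputable algorithmic quantities; the measure, names, and toy data are mine, not a definition anyone in this thread has committed to. A compressor gives a computable stand-in C(·) for description length, and C(x) + C(y) − C(xy) behaves like a mutual-information estimate: high between a parent and its descendant, near zero between unrelated sequences.

```python
# Hedged sketch: a calculable, compression-based stand-in for mutual
# information. c() is zlib-compressed length; mi_estimate(x, y) is
# C(x) + C(y) - C(x concatenated with y). Toy data only.
import random
import zlib

def c(data: bytes) -> int:
    # Computable stand-in for description length.
    return len(zlib.compress(data, 9))

def mi_estimate(x: bytes, y: bytes) -> int:
    # Rough mutual-information estimate via joint compression, in bytes.
    return c(x) + c(y) - c(x + y)

random.seed(1)
parent = bytes(random.getrandbits(8) for _ in range(10_000))

# A "descendant": a copy of the parent with a handful of point mutations.
child = bytearray(parent)
for pos in random.sample(range(len(child)), 100):
    child[pos] = random.randrange(256)
child = bytes(child)

unrelated = bytes(random.getrandbits(8) for _ in range(10_000))

print("parent vs child:    ", mi_estimate(parent, child))
print("parent vs unrelated:", mi_estimate(parent, unrelated))
```

Whatever exact definition one settles on, this is the kind of measurement that can actually be computed and tracked across generations of a simulation, which is why it is the metric that matters here.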

Thanks. I appreciate this exchange. @EricMH, now that I know who you are, you can try and engage again in the future. Peace.
