Looks like the mutual information discussion is winding down. I’d mentioned I cleaned up the experiment from the first time I talked with @swamidass. The main fix I made is that the calculated mutual information is now always non-negative: the compression algorithm never expands a bitstring, apart from adding a single bit to flag that the bitstring is incompressible. I also subtracted an extra bit in the mutual information calculation to account for the possibility of two incompressible bitstrings. These fixes resulted in the first two experiments achieving a 100% success rate.
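To make that concrete, here is a minimal Python sketch of that kind of calculation, using zlib as a stand-in compressor and the common compression-based estimate I(X:Y) ≈ C(X) + C(Y) − C(XY). The function names, the bit-packing step, and the exact formula are my own illustration, not necessarily what the actual experiments use:

```python
import zlib

def pack_bits(bitstring: str) -> bytes:
    """Pack a string of '0'/'1' characters into bytes so the compressor
    sees the bits themselves rather than ASCII characters."""
    return int(bitstring, 2).to_bytes((len(bitstring) + 7) // 8, "big")

def compressed_len_bits(bitstring: str) -> int:
    """Compressed length in bits, never exceeding the raw length plus one bit.

    The extra bit flags whether the string is stored compressed or raw, so an
    incompressible string costs len(bitstring) + 1 bits instead of expanding.
    (zlib is a stand-in for whatever compressor the real experiments use.)
    """
    raw_bits = len(bitstring)
    compressed_bits = 8 * len(zlib.compress(pack_bits(bitstring)))
    return min(compressed_bits, raw_bits) + 1  # +1 for the flag bit

def calculated_mutual_information(x: str, y: str) -> int:
    """Compression-based estimate I(X:Y) ~ C(X) + C(Y) - C(X,Y), minus one
    extra bit to account for the case of two incompressible bitstrings."""
    return (compressed_len_bits(x)
            + compressed_len_bits(y)
            - compressed_len_bits(x + y)
            - 1)
```

In this sketch, a pair of incompressible bitstrings whose concatenation is also incompressible comes out to exactly zero rather than negative, which is the point of the flag bit plus the extra subtracted bit.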
I created a second set of experiments to see whether calculated mutual information underestimates algorithmic mutual information. In the first case I use two bitstrings consisting only of 1s, which results in a 0% success rate. In the second case I use randomly generated bitstrings, and get a 100% success rate.
The experiments can be seen here:
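For illustration, the second set of experiments can be sketched roughly like this, reusing the `calculated_mutual_information` function from the sketch above; the bitstring length, trial count, and the exact success criterion are placeholders here, and the real ones are in the linked experiments:

```python
import random

# Builds on the calculated_mutual_information sketch above.

def all_ones_pair(n: int):
    """Case 1: two maximally compressible bitstrings (all 1s)."""
    return "1" * n, "1" * n

def random_pair(n: int):
    """Case 2: two independently generated pseudorandom bitstrings."""
    return (
        "".join(random.choice("01") for _ in range(n)),
        "".join(random.choice("01") for _ in range(n)),
    )

n, trials = 1024, 100  # hypothetical parameters
for name, make_pair in [("all ones", all_ones_pair), ("random", random_pair)]:
    values = [calculated_mutual_information(*make_pair(n)) for _ in range(trials)]
    print(name, "calculated MI range:", min(values), "to", max(values))
```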
What conclusions can we draw from this? First, I submit the premise that compressible bitstrings are themselves instances of algorithmic mutual information. With that premise, the main conclusion is that in the absence of algorithmic mutual information, i.e. when all the bitstrings are random, calculated mutual information likewise measures zero. In other words, you cannot get positive calculated mutual information from zero algorithmic mutual information.
Since algorithmic mutual information cannot be produced by randomness + determinism, and calculated mutual information requires algorithmic mutual information, the existence of positive calculated mutual information implies one of two possibilities:
1. algorithmic mutual information exists for no reason
2. something other than randomness + determinism exists
Conclusion 2 is preferable to conclusion 1 if there is a viable alternative to randomness + determinism. A halting oracle is such an alternative, and it seems to be viable: I do not know of any proof that a halting oracle is a logical impossibility.
As such, these experiments seem to best support conclusion 2, and the best candidate that I know of for 2 is a halting oracle.
A counterargument someone might give is that these experiments are illusions, since they use pseudorandom number generators instead of true randomness. To which I’d respond: the experiments would not turn out differently if we had used a quantum random bit generator instead. However, we cannot know whether anything is truly random in the quantum sense. But we can know that algorithmic mutual information exists if calculated mutual information exists. So we can know true positives, but not true negatives, when it comes to algorithmic mutual information.
So, that’s about all I’ve got. I’ll check back during the weekend and address any questions people have.