Continuing the discussion from Swamidass: Computing the Functional Information in Cancer:
What About the Proofs?
I expect this will puzzle some people:
So, as surprising as this may be, it is not at all relevant to what I’ve worked out here. The reason why is that this formalism depends on a theoretical understanding of mutual information that is uncomputable and unobservable in practice.
This is theoretically important in physics. For example, “reversibility” in physics logically implies “conservation of information.” Perfect knowledge of the state of a deterministic system allows us, in principle (and only in principle), to reconstruct any point in the past or the future. This tells us that all the information in a system must be the same at all times. However, I’ve explained before that information = entropy. We observe entropy increasing. What gives? What resolves this contradiction?
As Susskind explains:
Professor Susskind then discusses the apparent contradiction between the second law of thermodynamics, and the reversibility of classical mechanics. If entropy always increases, reversibility is violated. The resolution of this conflict lies in the (lack of) precision of our observations. Undetectable differences in initial conditions lead to large changes in results. This is the foundation of chaos theory.
Entropy vs. reversibility | The Theoretical Minimum
So measured entropy increases, even though the true but unknowable information content (which is entropy) of the system stays the same. The second law of thermodynamics is a real observation, even though the underlying information content, being unobservable, never changes (comments, @PdotdQ or @dga471?).
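To make this concrete, here is a minimal sketch in Python (my own toy illustration, not Susskind’s): an ensemble of points evolves under Arnold’s cat map, a reversible deterministic update on a toroidal grid. The fine-grained entropy (what a perfect observer would compute) never changes, while the coarse-grained entropy (what a low-resolution observer measures) grows. The grid size, block size, and starting ensemble are arbitrary choices for the illustration.

```python
import math
from collections import Counter

N = 256      # fine grid: microstates are cells of an N x N torus
BLOCK = 32   # coarse-graining: lump cells into BLOCK x BLOCK macro-cells

def cat_map(x, y):
    # Arnold's cat map: a bijective (hence reversible) deterministic update.
    return (x + y) % N, (x + 2 * y) % N

def shannon_entropy(counts):
    # Shannon entropy in bits of an empirical distribution given by counts.
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Start with an ensemble concentrated in one small corner (low measured entropy).
points = [(x, y) for x in range(16) for y in range(16)]

for step in range(8):
    # Fine-grained entropy: each occupied micro-cell holds exactly one ensemble
    # member, and a bijection keeps it that way, so this number never changes.
    fine = shannon_entropy(list(Counter(points).values()))
    # Coarse-grained entropy: what a low-resolution observer sees. This grows.
    blocks = Counter((x // BLOCK, y // BLOCK) for x, y in points)
    coarse = shannon_entropy(list(blocks.values()))
    print(f"step {step}: fine = {fine:.2f} bits, coarse = {coarse:.2f} bits")
    points = [cat_map(x, y) for x, y in points]
```

The fine-grained value stays pinned at 8 bits while the coarse-grained value grows toward its maximum; that is the same gap between “true” and “measured” entropy described above, in miniature.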
That is a very wide gap between theory and observation. The gap here, as Susskind notes, is required to make sense of paradoxes in our use of the terms “information” and “entropy.” This gap between theory and observation is exactly what Dembski (for example) shows no indication of understanding. He talks about the conservation of information, but does not recognize two key things:
- Conservation of information only applies in a fully reversible, deterministic world, which may not be our reality. Quantum mechanics might inject information into the world in the form of quantum randomness.
- Minds are not an exception to the “conservation of information” rule, even in a deterministic physics.

Whatever the case, the gap between unobservable and observable information content is so large as to render proofs in one domain almost entirely irrelevant to proofs in another. There is no way around this problem. That is what the uncomputability proof tells us.
Those Pesky Implementation Details
As @dga471 almost writes:
The only disagreement I have with this is that we have already discovered that the implementation issues are fundamentally intractable. In fact, we know this from information theory. All versions of the “conservation of information” proofs, such as those regularly put forward by @EricMH, are essentially philosophical constructs that do not connect with any observables. The uncomputability proof actually proves that they cannot be reduced to observables.
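Here is a minimal sketch of that gap in Python (my own illustration, not part of any of the proofs under discussion). The byte string below is fully specified by a few lines of code, so its idealized information content is tiny; yet the only observable estimates we have, compressed sizes, are rough upper bounds that sit near the raw length and disagree with each other.

```python
import zlib, bz2, lzma

# A string whose "true" information content is tiny: it is generated by the short
# deterministic program below, yet it looks nearly random to any general-purpose
# compressor.
x = 0.1
data = bytearray()
for _ in range(4000):
    x = 3.9 * x * (1 - x)       # chaotic logistic-map recurrence
    data.append(int(x * 256))   # quantize each step to one byte
data = bytes(data)

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(f"{name:4s}: {len(compress(data)):5d} bytes (an upper bound, nothing more)")
print(f"raw : {len(data):5d} bytes; the generating program is a few dozen bytes")
```

No amount of staring at the compressed sizes recovers the fact that a ten-line program produced the data; the observable estimates and the idealized quantity live in different worlds.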
With this in mind, here is how I constructed the impossible puzzle.
@EricMH’s task was to compute the information content of these sequences without knowing how they were generated: Eric Holloway: Algorithmic Specified Complexity - #25 by swamidass. This is a provably impossible task, because it requires perfect knowledge to compute. That is why we say compression (and information content) is uncomputable.
The information content of these strings obeys the “conservation of information” proofs that @EricMH repeats: randomness + determinism can’t increase information. I computed a specified amount of information, and then used a deterministic process to scramble this information. I designed the scramble to be reversible (but only in principle). From this theorem, then, I know the information content of the strings.
However, I did not reveal how I scrambled this information, how to decrypt it, how to know whether the decryption was successful, or how the initial information was generated. Consequently, there is no obvious way to figure out what the information content of these strings is without the information I withheld.
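To show the flavor of the construction (this is not the actual puzzle, whose scramble and key were withheld; the key below is a made-up stand-in), here is a sketch in Python: a highly structured string is XORed with a keyed pseudorandom stream. The operation is deterministic and exactly reversible given the key, so by the conservation argument it cannot add information, yet without the key the output looks maximally random to a compressor.

```python
import random, zlib

def keystream(key, n):
    # Deterministic pseudorandom bytes: reproducible, hence reversible, given the key.
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

def scramble(data, key):
    # XOR with the keystream: deterministic, and its own inverse.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

original = b"AB" * 2000          # highly structured, so very little information
hidden_key = 123456789           # hypothetical withheld key

scrambled = scramble(original, hidden_key)
print("compressed original :", len(zlib.compress(original)), "bytes")
print("compressed scrambled:", len(zlib.compress(scrambled)), "bytes (looks random)")
print("recovered exactly   :", scramble(scrambled, hidden_key) == original)
```

Anyone given only the scrambled bytes has no obvious way to tell them apart from a string that really does carry thousands of bits, which is exactly the situation the withheld details create.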
Randomness is Information
In context, @EricMH appears to misunderstand what is meant by randomness in his proofs. There, the “randomness” is the pre-existing “information” of the full, detailed state of the entire universe, or of a totally closed system. It does not apply at all to an open system. He is correct, though, that given this, determinism will not create or destroy any of that information. This follows logically from reversibility. This is also what Dembski heavily relies upon in his argumentation, and it is implied in every argument for ID I’ve seen.
What this says is that global information + determinism can’t produce more information if a system is reversible. In contrast, information + more randomness/information can produce more information (as might occur with quantum mechanics, if it is not deterministic)…
So what breaks determinism? Perhaps intelligence, if we have free will, can add information. Quantum randomness, too, can add information. Anything that is not deterministic can add information to reality. Regardless, all these conservation of information proofs only apply to a totally closed system anyway. Intrusions of information (by randomness or intelligence or natural processes) can easily increase information (or mutual information) in a subsystem, even in the idealized version of information, even in a deterministic world.
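A toy Python sketch of that last point (again my own illustration, with arbitrary numbers): in a closed, deterministic pair of registers nothing new appears, but when an outside random bit is injected and then deterministically copied into both registers, a full bit of mutual information shows up inside the subsystem.

```python
import math, random
from collections import Counter

def mutual_information(pairs):
    # Empirical mutual information (bits) between the two coordinates of each pair.
    n = len(pairs)
    pxy, px, py = Counter(pairs), Counter(a for a, _ in pairs), Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

rng = random.Random(0)
samples = 100_000

# Closed, deterministic subsystem: both registers just sit at a fixed value,
# so no mutual information ever appears between them.
closed = [(0, 0)] * samples

# Open subsystem: an outside random bit intrudes and is deterministically
# copied into both registers, creating correlation between them.
open_system = []
for _ in range(samples):
    bit = rng.randrange(2)          # intrusion of outside randomness
    open_system.append((bit, bit))  # deterministic copy into both registers

print("closed I(A;B):", round(mutual_information(closed), 3), "bits")
print("open   I(A;B):", round(mutual_information(open_system), 3), "bits")
```

The determinism here did nothing on its own; the extra bit came from outside the subsystem, which is precisely the kind of intrusion described above.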
I hope that makes some sense. There is a lot of theory in these posts. I hope this isn’t losing people. And I hope the physicists are suddenly realizing how information theory arises in their field.