Somehow we programmers write a whole lot of code that reliably halts, yet we cannot write programs that decide halting in general. That’s pretty surprising if we are not halting oracles.
I would never claim that “intelligence” is Turing reducible. Likewise, I would never claim that intelligence is a halting oracle. To me, both seem absurd.
This is not surprising at all, because (1) human intelligence is only an unreliable guide and often fails, and (2) computer programs are designed to guard against this, even though they fail. Humans are only imperfect oracles, which is to say they are not halting oracles. Even software can be an imperfect halting oracle, which is to say it is not a true halting oracle either.
Halting Oracles are different. By definition, they are required to be 100% accurate ALL the time for ALL possible inputs. They must also terminate in a finite amount of time, so they cannot have infinite runtime. Positing the existence of a non-infinite/non-omniscient Halting Oracle leads to a logical contradiction. That contradiction arises even if humans are the proposed oracle.
In a finite amount of time, Halting Oracles require a view of a true infinity of time, which is to say they require omniscience. Such a thing does not exist in our world but might apply to God. Whatever human intelligence is, it is not a halting oracle.
Joshua is not an atheist. God brings information wherever it is needed.
But there is no evidence to show God’s operation can be detected.
So the Halting Oracle is a really good example to study further. They are proven not to exist by a proof by contradiction (https://en.wikipedia.org/wiki/Proof_by_contradiction).
Does a program exist that can examine the code of any other program to determine if it halts?
This is adapted from here: https://en.wikipedia.org/wiki/Halting_problem#Proof_concept
Now consider this program.
```python
def g():
    if halts(g):
        loop_forever()
```
Now ask, does halts(g) return true or false? Either way we get a contradiction.
If halts(g) returns True, then g() will execute loop_forever(), and will therefore not halt, resulting in a contradiction, because halts(g) should have returned False.
If halts(g) returns False, then g() will halt, resulting in a contradiction, because halts(g) should have returned True.
Therefore we know that a Halting Oracle is not logically possible.
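The two branches of that argument can be made concrete as a toy case analysis in Python. This is only a sketch: `hypothetical_answer` stands in for whatever `halts(g)` might return, and the function checks that g’s actual behavior refutes the answer either way.

```python
def contradiction(hypothetical_answer: bool) -> bool:
    """Check one branch of the diagonal argument.

    If halts(g) returned True, g would call loop_forever() and never
    halt; if it returned False, g would fall through and halt. Either
    way, g's actual behavior contradicts the answer.
    """
    actually_halts = not hypothetical_answer
    return actually_halts != hypothetical_answer

# Both possible answers are contradictory, so halts() cannot exist.
print(contradiction(True), contradiction(False))  # True True
```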
Intelligence is Not a Halting Oracle
@EricMH, I thought I’d give you an example of some code that demonstrates that humans are not halting oracles. This is described here:
For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and when it finds one that satisfies the formula it halts and otherwise it goes into an infinite loop.
The satisfiability problem is defined here: https://en.wikipedia.org/wiki/NP-hardness
So it is fairly easy to construct a question that in principle no amount of human intelligence can solve. I’ll just ask a human, does this algorithm halt?
```python
def f():
    if is_satisfiable(VERY_LARGE_SAT_PROBLEM):
        loop_forever()
```
It is easy to construct a hard SAT problem that cannot be solved by human intelligence, which demonstrates that humans are not halting oracles.
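For concreteness, here is a minimal brute-force `is_satisfiable` (DIMACS-style integer literals; a sketch, not a serious solver). Since f loops forever exactly when the embedded formula is satisfiable, deciding whether f halts is deciding SAT, and a large enough instance defeats any human:

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT: try all 2**n_vars truth assignments.
    Each clause is a list of nonzero ints; -3 means NOT x3."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR x2) AND (NOT x1) is satisfiable (x1=False, x2=True) ...
assert is_satisfiable([[1, 2], [-1]], 2)
# ... but x1 AND (NOT x1) is not.
assert not is_satisfiable([[1], [-1]], 1)
```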
The Proof Against ID
Let’s look at your claim.
1. CSI can only be created by intelligence.
2. Therefore, according to your application of Levin, intelligence is a halting oracle.
That is the starting point you claim. We can add to this.
3. Halting oracles are a logical impossibility (see above).
4. That means the “intelligence” that creates CSI is a logical impossibility.
5. Therefore, either claim #1 is false, or CSI is a logical impossibility.
6. If claim #1 is false, there are other sources of CSI than intelligence, and CSI is not a unique signature of intelligence.
7. If CSI is a logical impossibility, then CSI does not exist.

Either #6 or #7 means that ID is false.
That just proves no Turing machine can be a halting oracle. To further conclude that halting oracles are logically impossible requires the Total Turing Premise (TTP):
Turing machines exhaust the set of all possible processes.
I don’t know that the TTP has been proven. But if it has, then your argument holds and I’d concede ID is false.
Prima facie, TTP seems false because MI exists.
No that is not the case. No Turing machine was used in that proof. It is merely a logical proof based on “computability” and “complete”. If you want to say computability implies a Turing machine, fine. The key thing is that the only way around this is omniscience. This reduces down to “does God exist or not”, and demonstrates that human intelligence is not at all comparable to divine intelligence.
Now that I’ve shown this doesn’t depend on TTP, do you concede ID is false?
I tried Googling “Total Turing Premise” and found nothing. Has any work been done on this before?
In any case, even if what you said here is true, it seems to me that the burden of proof is on ID advocates to show that human intelligence can actually produce non-Turing processes, such that they can be halting oracles. Am I right?
I haven’t heard of it before, but he defines it:
Which seems to be restating a claim that he has made before that determinism + randomness can’t increase mutual information. So it appears that he was certain of TTP until he thought it undermined ID. However, halting oracles lead to a logical contradiction. Intelligence is not a halting oracle.
There is no evidence that MI (as defined as a unique product of a mind) exists.
Yes, that’s how it is defined.
Any calculable form of MI requires that absolute MI exists.
The TTP as defined by Eric seems to be a more narrow, specific claim: namely that there are processes which can never be performed by a Turing machine. Processes, not results. So it seems that Eric needs to show that humans can create procedures which are inherently impossible for a TM to execute.
(Still, this is a bit confusing to me because I thought that almost by definition, processes (programs?) can always be executed by a Turing machine. Am I mistaken?)
I imagine this, if successful, would also be some sort of proof against the possibility of artificial general intelligence (AGI) using Turing machines. So perhaps looking into the arguments for or against the possibility of AGI would be one way to judge how likely TTP is true or not.
No, per my last point:
So what happens if we unplug the computer? Isn’t that a halting state??
There seem to be several unstated assumptions at work here.
First, MI is not a thing that exists in a physical way; it is a measurement between two systems (say A and B). If A and B are identical or nearly identical then they will share a high degree of MI. We can easily imagine a physical process that produces identical copies of the same system; for instance, all water molecules share a relatively high degree of MI. By this argument all water molecules (and all matter!) are designed.
It is more interesting to consider some system A that is capable of making copies of itself, creating a child system B that can also make copies of itself. Here the MI should contain the information relative to the *function* of reproduction. Again, we can name physical processes that reproduce themselves, such as fire. Fire possesses some of the qualities of life; it metabolizes fuel from the environment, and can spawn other fires nearby via sparks. One state of fire is very similar to any other state of fire, so they should have MI between them. Is fire designed? I think not.
So far we have at least two examples of MI that do not require intelligence, unless we want to declare everything to be designed, which is not useful in a scientific sense.
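One rough way to make the copying point numerically is to approximate Kolmogorov complexity with a real compressor (zlib here; the estimate is crude but directional). Two identical strings share far more estimated mutual information than two unrelated ones:

```python
import os
import zlib

def c(s: bytes) -> int:
    """Compressed length as a rough stand-in for Kolmogorov complexity K."""
    return len(zlib.compress(s, 9))

def mi_estimate(a: bytes, b: bytes) -> int:
    """Approximate algorithmic mutual information: K(a) + K(b) - K(a, b)."""
    return c(a) + c(b) - c(a + b)

a = b"the quick brown fox jumps over the lazy dog " * 20
copy = a                       # output of a blind physical copying process
unrelated = os.urandom(len(a))

assert mi_estimate(a, copy) > mi_estimate(a, unrelated)
```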
We can continue this exercise by adding functions in addition to replication. For instance, can the process capture energy from the environment? If we count hot embers as stored energy, then fire again appears to be designed. Capturing energy is also implied in the function of reproduction.
But what about capturing information from the environment? Now we have something different. Fire clearly doesn’t have any “memory” about what it burned before. Fire never gets any better or worse at burning; it just continues to burn until it exhausts itself. I claim that capturing information from the environment is a key function for something to be “life”. Capturing information separates life from other chemical processes that we would not claim to be alive. There could be additional functions needed depending on how we define life, but reproduction and capturing information are essential.
[edit: clarifying: Where and when halting occurs is another hidden assumption, or at least undefined]
Considered as an algorithm, a system A that reproduces itself is a forking process. The forks produce child systems B and C (with slight variation) which continue. If C is better at reproducing itself than B we will get more copies of C in the environment than B. If B and the children of B are effectively unable to reproduce, then the B process halts. The remaining process C has gained*** information relative to successful reproduction (fitness) from the environment.
*** clarification: C has not lost information relative to fitness, and may have gained relative to A.
Once such a system as A and C exist in the environment there is no barrier to adding additional information other than the environment itself. @EricMH may still have a case for the initial MI, but the idea needs further refinement to avoid the “everything is designed” implication.
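The forking-process picture above can be sketched as a toy simulation. The fitness values and carrying capacity are arbitrary assumptions chosen only to illustrate differential reproduction, not biological estimates:

```python
import random

random.seed(0)
fitness = {"B": 0.2, "C": 0.8}    # C reproduces more reliably than B

def generation(pop):
    """Each parent gets two chances to leave an offspring of its kind."""
    kids = [k for k in pop for _ in range(2) if random.random() < fitness[k]]
    random.shuffle(kids)           # avoid order bias before truncation
    return kids[:200]              # environmental carrying capacity

pop = ["B", "C"] * 10
for _ in range(10):
    pop = generation(pop)

# The better replicator dominates; the B lineage effectively halts.
assert pop.count("C") > pop.count("B")
```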
He is talking about a halting Oracle, not a halting state.
However, the real issue is that no logical process can solve the halting problem, not merely no physical process. Moreover, human intelligence is a wonderful counterexample. Whatever ID means by design, apparently humans can do it, even though we are demonstrably not halting oracles.
Agreed. I went through this exercise to illustrate that Information Non-Growth doesn’t have any useful meaning here without careful definition of the conditions, and a halting oracle doesn’t seem to be necessary at all.
8 posts were split to a new topic: Is God a Halting Oracle?
This is a good, substantial comment on the topic of info theory, so I wanted to make a quick response before I depart. I’ve been meaning to for a while, but kept forgetting; my apologies.
Your main argument is that duplicating A creates mutual information between A1 and A2. This is an interesting point, and I’ll get back to you with a response. On the face of it, it does seem to contradict the law of information non-growth, so there is possibly some error here. But, I cannot handwave things away, so I’ll come up with a formal resolution to this conundrum.
Regarding your other point about organisms learning the environmental information, I also suspect that learning about the information in environment E is limited by the information in A, since the environment itself is pretty uninformative, K(K(E)|E) = K(E). And at any rate, there is still a mass quantity of environmental information to explain, which results in the information regress I pointed out to you before, same as Dembski’s proof regarding the vertical no free lunch theorem. But, this point is good and deserves more than just a handwave, so I’ll work on a formal resolution here too.
Anyways, great points!
@EricMH Thanks! I’m doing a bit of library work on related questions too. I’ll post in this or the related threads when I’m prepared.
@EricMH while you are thinking about it, may I ask for a clarification? Please forgive my clumsy notation below. My background in statistical theory lets me follow the IT literature well enough, but I’m not used to expressing myself in these terms.
For the law of Information non-growth in the setting of statistical theory we have
I(X; Y) ≥ I(X; T(Y))
which is familiar to me as an application of Jensen’s Inequality. In the Algorithmic Information setting we have the same, and this shows that the average message length is minimised when coded for the true probability distribution rather than any function T() of that probability distribution. I agree with you this far, but here begins my question:
If I(X; Y) represents information for some biological function coded by the true probability distribution p, then are you asserting that I(X; Y) already represents the maximum state of biological fitness that can exist?
In this scenario any T() other than identity will indeed have lesser biological function, because no other state is possible. But this seems to be intuitively backwards for what is supposed of biological evolution: For some probability distribution q not equal to the true distribution p there exists some function U() such that
I(X; Y, q) ≤ I(X; U(Y), p)
thus allowing an increase in fitness. That is, U() reduces the Kullback-Leibler distance between q and p. I would weaken this statement to U() might exist, because it is not clear we could find such a function universally.
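The non-growth inequality I(X; Y) ≥ I(X; T(Y)) can at least be checked numerically for any small joint distribution. A sketch in Python, where the joint pmf and the coarsening T are arbitrary choices for illustration:

```python
import math
from collections import defaultdict

def mutual_info(joint):
    """I(X; Y) in bits from a dict {(x, y): probability}."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

joint = {(0, 0): 0.3, (0, 1): 0.1, (1, 1): 0.2, (1, 2): 0.4}

T = {0: 0, 1: 0, 2: 1}            # a deterministic coarsening of Y
coarse = defaultdict(float)
for (x, y), p in joint.items():
    coarse[(x, T[y])] += p

# Processing Y through T can only lose information about X.
assert mutual_info(dict(coarse)) <= mutual_info(joint) + 1e-9
```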