Contradictory Points in ID and Information Theory Arguments?

So we’ve recently had several exchanges on this forum about arguments for ID that rely on information theory (IT). I tried my best to follow along. I am a physicist and took a few classes on statistical mechanics, but I am far from an expert in that area, as my research uses little of it. I am also completely untrained in IT, although as Josh points out, it is not very different from statistical mechanics.

To put it bluntly, I was very confused by the seeming contradictions and errors that came out of this exchange. I initially chalked this up to my lack of full understanding of the subtleties of IT (as I never took a formal class in it). But when it seems that even simple statements are contradictory, I’m starting to suspect that there really isn’t a clear, coherent argumentation strategy in ID using IT. This long post will attempt to highlight important points of the exchange so far, and points where I am utterly confused.

The original ID argument and its pitfalls

According to Josh, the original ID argument developed by Dembski, Marks and co. has the following assumption:

The standard response to this is that this random starting point is not true of nature:

This is also the main thrust of Josh’s argument in, for example, Computing the Functional Information in Cancer - #26 by EricMH and Explaining the Cancer Information Calculation - #21 by swamidass. Common descent, which is really just a fancy way to say “DNA replication”, is capable of producing mutual information (MI) - something that is glaringly obvious. Thus no intelligence is required to explain functional information (FI), which as argued in the above threads is just a subset of MI.
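To make the “glaringly obvious” part concrete, here is a minimal sketch of the idea (my own toy numbers, not anyone’s published calculation): a random “parent” sequence is copied with a small, assumed error rate, and the empirical MI between parent and copy comes out near the 2-bits-per-base maximum.

```python
# Minimal sketch (toy numbers, not anyone's published calculation):
# estimate the mutual information between a random "parent" sequence
# and an imperfect copy of it. Replication with a small error rate
# preserves most of the correlation, so the empirical MI is near maximal.
import math
import random
from collections import Counter

random.seed(0)
BASES = "ACGT"
ERROR_RATE = 0.01  # assumed per-base error rate (uniform resampling)

parent = [random.choice(BASES) for _ in range(100_000)]
child = [random.choice(BASES) if random.random() < ERROR_RATE else b
         for b in parent]

# Empirical MI in bits: I(X;Y) = sum p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
n = len(parent)
pxy = Counter(zip(parent, child))
px = Counter(parent)
py = Counter(child)
mi = sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
         for (x, y), c in pxy.items())

print(f"MI(parent; copy) ~ {mi:.3f} bits per base (max 2 bits)")
```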

However, the ID side then replied that the cancer information calculation explains nothing, because it presumes pre-existing MI. A major plank of ID is that MI can only be produced by intelligence. After MI exists, then its evolution is governed by the Law of Information Non-Growth (LoING).

To the non-ID side, this implied that DNA replication requires an intelligence - a seemingly ridiculous claim, as DNA replication is a natural process as simple and commonplace as gravity.

A Concession from ID?

Some people including myself then focused on this claim, which seemed obviously wrong. It then led to some sort of concession from the ID side, that it is possible to get MI without intelligence:

Reply 1:

Notice that this concession seems to directly refute the original argumentative logic of ID: even if we assume a “completely random” starting point (which as Josh argues, is a wrong assumption), it is conceded that no intelligence is required to produce MI!

Seeming Contradictions?

But besides this, we also had these clarifications to the concession, which confused me:

Reply 2:

Reply 3:

(Edited after some more thought.)
The first confusing point to me was that Replies 2 and 3 seem to imply that completely random variables are non-independent. I thought completely random variables are independent. (As an example, if you flip two fair, completely random coins several times, their results will be independent of each other.) It almost reads like an outright typo, as it contradicts Reply 1, which seems to equate randomness with independence, given that the ID concession assumes randomness in nature.
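For concreteness, here is a quick sketch of the coin example (my own illustration): two separately generated fair coins have essentially zero empirical mutual information, which is what I understand “completely random implies independent” to mean.

```python
# Minimal sketch: two independently generated fair coins. Their empirical
# mutual information is ~0, illustrating that separately generated
# "completely random" sources are independent, not dependent.
import math
import random
from collections import Counter

random.seed(0)
n = 100_000
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]

pxy = Counter(zip(x, y))
px = Counter(x)
py = Counter(y)
mi = sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
         for (a, b), c in pxy.items())

print(f"MI(coin1; coin2) ~ {mi:.5f} bits")  # ~0 up to sampling noise
```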

Another Contradiction: Does ID think DNA is independent, or not?

Even with the seeming contradictions, I was optimistic, as the concession hinted that we had achieved progress in the discussion. Somewhat in line with the concession, the ID side then tried to argue that prima facie, things in nature like DNA are independent:

Reply 4:

Josh and many others disagreed: we can’t rule out hidden dependencies between organisms and their environment. A lot of people then pressed the ID side on this implausible assertion. It seemed clear what was happening, at least: the ID side, having conceded a main line of argumentation, was forced to rescue its argument by defending an auxiliary assertion that the non-ID side had good arguments against.

On the other hand, this progress was later undermined by what seemed to me to be a contradictory statement:

Reply 5:

(Emphasis mine.)

It seems that now, the ID side believes that nature is not generated from fair coin flips (i.e., nature is not independent)! This seems to directly contradict Reply 4, unless I am misreading this quote, or there is a typo somewhere.

Besides this direct contradiction, the ID argument has also been reversed: now the claim is that ID can prove an intelligence is required even with dependencies in the environment (such as common descent).

To me, this makes the argument even more untenable. It is basically saying that Josh’s major point of refutation against ID (that nature is not completely random, but has dependencies such as common descent) is going to rescue the ID argument. It is super-duper confusing.

The conclusion: is it just me, or is the ID side arguing several contradictory things? The logic has completely gone off the rails here. At least, I’m confused. It would be very helpful if people could enlighten me here.


To be precisely clear, we can start with zero MI and still see MI increase due to the processes that create FI. So it is not precise to say that because MI is created due to CD, we can create FI. Rather, this demonstrates that MI is not only FI. MI can be created by CD (DNA replication) and by a whole range of other things.

This is also a confused statement. He was arguing that MI can’t be produced by replication unless there is information within genomes. Of course, there is information in genomes. They do not need to be produced by coin flips for there to be information in genomes.

It is not just you. It seems like a self-contradictory argument.


Yes, you’ve got it.

They use AIT (algorithmic information theory) to model information. And AIT is an abstract theory, so it can be used in isolation from reality.

Evolution does not make sense in isolation from reality. For a biological organism, information is going to be information about reality.

Perhaps the ID arguments are correct – that a system isolated from reality cannot get information. But that has nothing to do with evolution.

They want DNA (the genome as a string of DNA bases) to be treated as information. Well, okay, I suppose. But the genome is relevant to survival in the real world. So if the genome is information, then it is information about reality. And their abstract model excludes that.

Evolutionary biologists repeatedly point to this as the problem. But the ID people fail to see it.

What it really shows is that ID is philosophy rather than science. A scientist doesn’t make this kind of mistake.


The same can be said for information theory. What tends to get lost is that we are creating models which may or may not, or may more or less, reflect reality.


No, the same can’t be said for Information Theory. You live in the “information age”. You use Google and Facebook, and you blog here. All of this would not be possible without the genius of thousands of engineers and scientists who created the technology based on the science of Information Theory. And to think that it goes back to Shannon and his 1948 paper, “A Mathematical Theory of Communication”. A profound idea 70 years ago, and all this technology that we just take for granted in our daily lives.

Can you help explain this for the layman (me)? Why is data processing or information technology dependent on information theory? Or rather, how (in what way) is it dependent? Are you saying that, without Shannon, for instance, the technologies we use today would not have been developed? Or are they better because of information theory?


For my part, I find a lot of equivocation about the meaning of mutual information, because MI is always a measure between two samples (strings) of information. Replication creates MI by definition, but it doesn’t create new information. The statement that MI cannot be created makes no sense in general. Any regularity in nature also implies that some measures of MI must be positive.

We can talk about MI relative to something of interest, like fitness. I’m using “fitness” here to describe some physical function that contributes to survival and reproduction, to disambiguate from mathematical functions of information.

The LoING says that no deterministic function or added randomness can improve on the “true” coding distribution in expectation, and there is no controversy there. If LoING prevents an increase, it can only be because the true fitness code has already been achieved. Our guest DID acknowledge that fitness can improve by random variation, if some level of improvement is possible, but limited. I appreciated that concession, and agreed there are limits, but did not pursue that line of discussion.
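For what it’s worth, the uncontroversial core I have in mind is the data processing inequality: processing a variable, deterministically or with independent randomness, cannot increase its mutual information with anything else. A toy sketch, with made-up variables:

```python
# Toy sketch of the data-processing-inequality flavor of LoING:
# post-processing Y cannot increase its mutual information with X.
# Here f collapses Y down to its top bit, and the empirical MI drops.
import math
import random
from collections import Counter

def emp_mi(xs, ys):
    """Empirical mutual information in bits between two aligned samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(0)
n = 200_000
x = [random.randrange(8) for _ in range(n)]   # 3 random bits
y = [v ^ random.randrange(2) for v in x]      # noisy copy: low bit scrambled
z = [v >> 2 for v in y]                       # processed: keep only top bit

print(f"I(X;Y)    ~ {emp_mi(x, y):.3f} bits")   # ~2 bits
print(f"I(X;f(Y)) ~ {emp_mi(x, z):.3f} bits")   # ~1 bit, never larger
```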

What I did not follow up on is that if improved fitness is possible, that is, the true fitness information has not been found, then the LoING does not apply. Here, I think, is where a Halting Oracle enters the discussion. (Somebody please correct me if I am getting this wrong.) Small random variations are mostly neutral, we think, and without some significant change the new information will not become fixed in the population. I know little about population genetics, but I am familiar with the term “fixation probability”. It must be true that changes large enough for fixation are more rare, so the criticism is correct in part. Where I think this breaks down is that MI has been defined ambiguously. Consider:

  1. Parent population A spawns child sub-populations B and C with some random variation.
  2. B has slightly greater fitness than A, but not enough gain for fixation in the total population.
  3. C has slightly worse fitness than A, but not enough loss to decline in the population.
  4. BUT the difference between B and C can be enough for B to reach fixation relative to C. A and B will remain fixed in the population, but C will decline.

THIS, if I understand correctly, is the Halting Oracle needed for the evolutionary computation. There is no conflict with LoING if we define the problem this way. If we want to think in terms of “The 2nd Law of Information Theory”, we have gained information in B at the expense of casting away C.
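Here is a toy simulation of that A/B/C story, purely my own illustration with assumed fitness values, showing C’s share collapsing while B’s grows under fitness-proportional reproduction:

```python
# Toy sketch of the A/B/C story above (my own illustration, not a
# standard model): three genotypes compete under fitness-proportional
# reproduction. B (slightly fitter) grows, C (slightly less fit)
# declines, even though no single step is a dramatic fitness jump.
import random

random.seed(0)
fitness = {"A": 1.00, "B": 1.02, "C": 0.98}  # assumed relative fitnesses
pop = ["A"] * 400 + ["B"] * 300 + ["C"] * 300

for _ in range(100):  # 100 generations of weighted resampling
    weights = [fitness[g] for g in pop]
    pop = random.choices(pop, weights=weights, k=len(pop))

counts = {g: pop.count(g) for g in "ABC"}
print(counts)  # C's share collapses while B's grows
```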

Please criticize this interpretation! :slight_smile:


I think you have gained information only if copying information is a gain of information. If we look at this from an evolutionary perspective, how would this copying function actually create a new feature?

It will certainly create variation, but how much? What can the variation from reproduction produce?

You can define a genetic change that increases fitness as an increase in information, but I think you are fooling yourself here by playing with semantics. We could quibble over whether there is an increase in information or not (depending on how we define information in this case), but the real issue is whether this process explains life’s diversity.


Shannon’s paper put communications, data processing, and information technology on a firm mathematical foundation. You could calculate quantities such as channel capacity: how much information you can get through a communications channel (a wire, free space, a fiber) and at what fidelity. Shannon didn’t tell you how to do it, but he told you what was possible. With that, communications, data processing, and information technology have been working hard to get as close to the Shannon limit as possible.
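As a concrete example of the kind of result Shannon’s framework gives you: the capacity of a binary symmetric channel with crossover probability p is C = 1 - H(p) bits per use, where H is the binary entropy. A quick sketch:

```python
# Minimal sketch of a classic Shannon result: the capacity of a binary
# symmetric channel with crossover probability p is C = 1 - H(p).
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.01, 0.11, 0.5):
    print(f"p = {p:4}: C = {bsc_capacity(p):.3f} bits/use")
```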


I completely agree with this. The lack of precision in using the term “mutual information” versus just “information” is frustrating and obfuscates the argument.

What is D? You have not defined it.

I do not yet have a clear understanding of how exactly a Halting Oracle (HO) comes into play. In the discussion, some people have been using it as a synonym for “intelligence” (something we also debated in Intelligence and Halting Oracles), and ID argues that intelligence is needed to create information. However, it is not clear to me how the halting problem is relevant to information creation. (The gist I’ve been getting is that HOs have to be non-Turing machines to avoid a logical contradiction, so they are magical black boxes that can evade a bunch of information theory theorems, which is ID advocates’ idea of what intelligence is. :face_with_raised_eyebrow:)


(Emphasis mine.)
Since the ID strategy was based on using the Law of Information Non-Growth, it is imperative that ID advocates define information clearly, rigorously, precisely, and consistently. This is why defining it as Shannon entropy is very nice: it is succinct, mathematical, and free of fuzzy terms like “function”, “meaning”, or “purpose”. ID cannot start an argument using LoING and then assert that arguing about information increase is not important!


Randomness is a potent source of new information, the most potent source. Mutate a strand of DNA, and you have a new type of DNA. Mutate a lot, and you create a lot of new information. Replicate it, and it is now mutual information between the replicated genomes.

Remember that noise is the most information-rich object, and replication can turn noise into mutual information: shared noise between two samples.

Quite wrong. The Halting Oracle is not relevant here at all. LoING is not relevant either, because the assumptions required for it to hold are violated. Moreover, randomness is information. Replicating randomness makes it mutual information.

The question should always be “Mutual information between what things?”.
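A minimal sketch of that last point (a toy example): an exact copy of a random string shares all of the string’s entropy as mutual information, I(X; X) = H(X), even though nothing new was created overall.

```python
# Minimal sketch (toy example): a uniformly random 4-symbol string has
# entropy ~2 bits/symbol, and an exact copy shares all of it as mutual
# information: I(X; X) = H(X). Replication turns raw noise into MI
# without creating any new information overall.
import math
import random
from collections import Counter

random.seed(0)
n = 100_000
x = [random.choice("ACGT") for _ in range(n)]
y = list(x)  # exact replication

px = Counter(x)
entropy = -sum((c / n) * math.log2(c / n) for c in px.values())

pxy = Counter(zip(x, y))
py = Counter(y)
mi = sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
         for (a, b), c in pxy.items())

print(f"H(X) ~ {entropy:.3f} bits/symbol, I(X; copy) ~ {mi:.3f} bits/symbol")
```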


I don’t think ID in general is about LoING. You are equating one person’s argument with ID. It is about functional information. Functional information’s relationship to LoING is fuzzy at best.


OK, sure. But this thread is titled “ID and Information Theory arguments.”

Sure seemed important to some of the ID advocates here, especially the one who trained under Marks. He was the first person to mention this here.


Well, the point is, by claiming that their argument follows from LoING, the ID side has been claiming that this is all “proven”. If they no longer want to talk about LoING, then there is no “proof” any more.


B and C are imperfect copies, variations on the information in A.
This is not semantics, but a matter of defining the problem of new information becoming fixed in a population. But if you are asking, then I should explain it better. I’m working up an illustration.

If I understood Eric correctly, he sees a problem with new information becoming fixed in the population. Further variation could reverse any gain. I think Eric means the HO to be a way for evolution to recognize a gain in useful information, keeping the good and discarding the bad. I brought up fixation probability as a way to do that, sans designer.

Oops! D should be C. I edited.

Agreed. I did not state what I meant very clearly.

Clarification needed: I was trying to understand why Eric thought a HO is necessary, and what might fill that role. He mentioned something about “fixing new information”, and this is my best guess at the role of the HO in his thinking.
I’m not thrilled by the idea of evolution as computation either, but I want to try to understand what Eric meant. That will increase my understanding, and maybe help with his, if I get the chance.

EXACTLY!


Are you assuming that new information is an increase in information? If you claim that the information in the population has changed, I would agree. If you say information has increased, I think that claim needs support.

I seem to be guilty of my own complaint! :slight_smile:

“Mutual information between what things?”.

In my example I am defining B to have greater information (MI) with respect to improved fitness, and the opposite for C. B may not have a sufficient gain in fitness to become fixed in the population relative to the original A, but it could more easily become fixed with respect to C. If B is fixed, A remains fixed, and C declines, then the overall MI relative to fitness will have increased. This “fixing in the population” problem comes from population genetics. In other settings (e.g., Dawkins’s WEASEL) it has been called “latching”. I may have the wrong understanding of Halting Oracles, but I think Eric meant that an Oracle is needed to solve the fixing/latching problem; otherwise random variation will tend to decrease MI wrt fitness.
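For anyone who wants numbers behind “fixation probability”: the standard population-genetics result I have in mind is Kimura’s diffusion approximation; the population size below is just an assumed value for illustration.

```python
# Minimal sketch of the population-genetics quantity invoked above:
# Kimura's diffusion approximation for the fixation probability of a new
# mutant with selection coefficient s in a diploid population of
# effective size N (initial frequency 1/(2N)).
import math

def p_fix(s: float, N: int) -> float:
    """P_fix ~ (1 - exp(-2s)) / (1 - exp(-4Ns)); neutral limit is 1/(2N)."""
    if s == 0.0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000  # assumed effective population size
for s in (-0.001, 0.0, 0.001, 0.01):
    print(f"s = {s:+.3f}: P_fix ~ {p_fix(s, N):.2e}")
# Slightly deleterious changes essentially never fix; even beneficial
# ones usually fail, which is the "rarity of fixation" point above.
```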

I don’t mean to speak for Eric, I’m just trying to understand him.

Random variation in any string is very likely to increase the raw amount of algorithmic information, but not necessarily in a useful direction.
