Define "information"? Creationists aren't even willing to define it

A quick question: is “specified information [where] the specification is fitness” even potentially calculable (even if “relevant”)?

Short answer: yes.

It is the case of “functional information” in which the function is fitness.

Longer answer:

<tirade>Ever since Dembski and co. started using “specified information” as an argument for ID, many critics of ID have loudly declared that it is a meaningless mumbo-jumbo notion cooked up by advocates of ID just to bamboozle easily-intimidated evolutionists.

Actually, it was invented by Leslie Orgel in 1973 and more clearly laid out as functional information by Jack Szostak and Robert Hazen in 2003 and 2007. None of those folks are creationists or ID advocates.

The problem with the ID argument based on Complex Specified Information is not that the quantity is meaningless. It isn’t. It is that there is no sensible argument that it is impossible to achieve it by repeated bouts of natural selection.</tirade>

Apologies for the tirade.

Clarification: I was not considering Functional Information in my comment above, and this comment went into the queue as I was writing. I agree with @Joe here. Dembski and Marks are not using Functional Information.

I don’t think Dembski (in Dembski 2005) is even using math.

From 2005 on, Dembski defined information as Specified only to the extent that it was improbable to arise by natural evolutionary processes. Yes, there is no easy way to calculate that. In effect, his argument worked this way:

  1. We want to decide whether it is extremely improbable that an adaptation, or one better than it, can arise by natural evolutionary processes. So, following Dembski,
  2. … we must decide whether it exhibits Complex Specified Information.
  3. How do we know whether it does? Well, we must calculate the probability that it, or one better than it, can arise by natural evolutionary processes.
  4. … and Dembski does not tell us how to do that.
  5. So we are back at step 1. And all the Information Theory stuff has not helped.

The final nail in the CSI coffin, as far as I am concerned, is @Joe_Felsenstein’s post on Panda’s Thumb a few years back.

Has functional information ever been calculated for a whole organism? Is it practical to do so?

Taking a thought-experiment example: if some organisation offered a million-dollar grant to anybody willing to calculate the functional information of, say, a particular nematode, would that offer have takers?

One of the things that I keep hammering home to young earthists over and over again is the fact that science has rules. Rules that apply to every area of science, whether “operational” or “historical.” Rules that have nothing whatsoever to do with “materialism” or “naturalism” or “secularism,” but that apply to Christians and atheists alike. Rules that are basically the rules of honesty, factual accuracy, quality control, and not making things up.

The need to define your terms and quantify them in an objective way is one of the most basic and fundamental of these rules, especially if you are going to try and discuss whether something can or cannot be created by a particular process. By denouncing demands that they stick to this rule as “obfuscation,” young earthists are basically insisting that the rules of science do not apply to them, in other words demanding the right to make things up, invent their own alternative reality, and flat-out lie.

For what it’s worth, Paul Price is one of the thirteen or so young earthists to date who have told me that I’m taking Deuteronomy 25:13-16 out of context by applying it to science—on this very forum back in December 2020. Such a line is, as I now point out up-front, effectively demanding the right to tell lies.

I have also seen similar things from other young earthists in the same vein. Over on The Other Place, one guy a while ago responded to me by flat-out denying that science has any rules at all. This, of course, is nonsense. If science did not have any rules, then we would be able to claim that mermaids were evidence for a young earth, because treknobabble.


To do so you’d, in principle, have to consider all possible genotypes and measure the fitness (or some other function) for each. While not practical, one can take a simple scheme of fitnesses for all genotypes and imagine doing it. Taking random genotypes (constructed by monkeys with 4-key typewriters), how many full genomes would they have to type out before they got one that had fitness as good as, or better than, the present genotype? Obviously a hell of a lot. The specified information is -log2(P), where P is that very small probability.

So arguing that in our genomes there is lots of specified information is not a vacuous notion. Incidentally, Hazen, Griffiths, Carothers and Szostak (Proceedings of the National Academy of Sciences, 2007) note examples of investigations with sequences at single genes, for the functions of those loci.
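To make the monkeys-with-typewriters estimate concrete, here is a toy Python sketch of my own construction (the genotype length, the match-count fitness function, and the sample size are all arbitrary illustrations, not anything from the thread): sample random genotypes, count the fraction whose fitness clears a threshold, and report -log2 of that fraction.

```python
import math
import random

BASES = "ACGT"
TARGET = "ACGTACGTAC"  # hypothetical reference genotype, 10 bases long

def fitness(genotype):
    # Toy fitness: number of positions matching the reference genotype.
    return sum(a == b for a, b in zip(genotype, TARGET))

def specified_information(threshold, n_samples=100_000, seed=1):
    # Monte Carlo estimate of P(random genotype has fitness >= threshold),
    # then specified information = -log2(P).
    rng = random.Random(seed)
    length = len(TARGET)
    hits = sum(
        fitness("".join(rng.choice(BASES) for _ in range(length))) >= threshold
        for _ in range(n_samples)
    )
    p = hits / n_samples
    return -math.log2(p) if p > 0 else float("inf")

# A modest fitness threshold is met often (few bits of specified
# information); a demanding one almost never (many bits).
print(specified_information(threshold=3))
print(specified_information(threshold=8))
```

For a real genome the sampling is hopeless, of course, which is the point of the thought experiment: the quantity is well defined even when it is impractical to estimate directly.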


The problem has never been specified information. The issue with Dembski’s Specified Complex Information has always been his idiosyncratic use of “complex”. That is why Orgel’s use of the terminology is not relevant to Dembski’s. Orgel did not have Dembski’s idea of complexity in mind.

That aside, it also demonstrates a sadly impoverished approach to the text to take it as literally being just about having accurate weights for your scale, rather than as a very strong and effective metaphor for honesty in general.


Well, one problem is that the specification always comes after the fact with ID proponents. They pick some functional sequence discovered in an organism, then say it’s specified as if it had been predicted to exist by ID, and then try to do some sort of math-gymnastics to show that its appearance by dice-rolling is incredibly improbable. Voila! - Therefore design!


Yes, but the definition has never been an area of disagreement. The problem of Texas sharpshooter style arguments (and sometimes completely worthless probability calculations) is another matter.

Then again, even getting the importance of specification - which is at least a step in the right direction - seems to be beyond some creationists.

I have a similar concern about “specified”. Not only that it is after the fact, but what that information represents.

If we were making a statistical estimate, there would be some distributional parameter (like a population mean) that we are trying to estimate, and then we would make some inference about it.

Functional Information has a set of sequences which allow some degree of a given function, and then we can estimate a percentile rank of function for a particular sequence.
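That percentile-rank reading is easy to compute exactly for a tiny sequence space. The sketch below is my own toy (the 6-base “motif” and the match-count “activity” stand in for a real assayed function): enumerate all 4^6 sequences and report FI(Ex) = -log2(fraction of sequences with activity >= Ex), in the Szostak/Hazen style.

```python
import math
from itertools import product

MOTIF = "ACGTAC"  # hypothetical target; any 6-mer works for the illustration

def activity(seq):
    # Toy stand-in for a measured function: count of matches to the motif.
    return sum(a == b for a, b in zip(seq, MOTIF))

def functional_information(ex):
    # FI(Ex) = -log2( fraction of ALL sequences with activity >= Ex ),
    # computed by exhaustive enumeration of the 4^6 = 4096 sequences.
    seqs = ["".join(s) for s in product("ACGT", repeat=len(MOTIF))]
    frac = sum(activity(s) >= ex for s in seqs) / len(seqs)
    return -math.log2(frac)

# Demanding a perfect match to a 6-mer costs -log2(1/4**6) bits.
print(functional_information(6))  # 12.0
```

FI rises as the required activity level rises, which is exactly the after-the-fact percentile-rank idea: the sequence’s information content is defined relative to how much of sequence space does at least as well.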

Complex Specified Information doesn’t do anything like either of those. It’s a probability (if we take Dembski’s meaning) about some undefined parameter which is presumed to indicate design, again after the fact, with no a priori definition of what constitutes “design”.

More later, but I have to take a break.

I can imagine, at least in concept, a multinomial distribution of sequences in genome-space, and we might make statistical inference about those parameters*. Design, if it exists, could be a region of genome-space which is somehow unreachable by evolution. It would be a VERY hard task to show that no evolutionary pathway exists to the “design region”.

In fact, I think I could argue that such a region probably doesn’t exist, meaning that random chance and evolution are likely to have discovered any such region without the aid of a Designer.

I will try to write this down and see if it holds up. And if it doesn’t, please forgive my ramblings.

* Phylogenetic analysis methods must be doing something like this already

Wouldn’t you then have to synthesise an organism with each of these monkey-generated genotypes? And then perform lab or field experiments on whole populations of each genotype, to measure their fitness relative to your original exemplar of the organism? Even for the simplest of organisms, that would seem impractical.


Perhaps a visual example would be helpful.

We’re going to play hide and seek, but in an unusual way. The person who is ‘It’ will follow a route that has been defined in advance, and will reach out and tag at various points along the route that have also been defined in advance. We’ll start out by choosing the route and the tag points at random. If we track how much time ‘It’ spends in a given place, we get a map like the one below:

[Image: heat map of time spent at each square, routes chosen at random]
The brightest spot is the starting point; brightness indicates more time spent in a place. So as we get further from the starting point, the squares get dimmer – ‘It’ is spending less time there. The fading brightness is pretty much equal in all directions, exactly as you’d expect for someone meandering at random. Does this picture give you any information about where the hiders are? I don’t think it does. (It helps if you know where the hiders are; you’ll get a better picture soon.)

So far, so boring; I don’t recommend actually playing hide and seek this way. But I hope you’ll indulge me a bit further. Suppose we send out several kids to be ‘It’ with these random routes. They each come back and report how many of their tags actually made contact with hiders. For the kids who did make contact, we take their routes and make small changes to them and send everyone back out with new routes. (If no one makes contact, we make small changes to all the routes.) Lather, rinse, repeat.

After many rounds, let’s check back in on the game and make another map of where ‘It’ is spending most of their time.

[Image: heat map of time spent at each square after many rounds of mutation and selection]
This is a very different picture. Now instead of mostly spending their time around the starting point, they are spending the most time at two points closer to the left-hand corners, as well as some time at the adjacent and intervening points. There also seems to be some time spent at two spots near the right-hand corners, although not as much. Do you suppose this picture contains information about where the hiders are? I would say that while it is not 100% unambiguous, if you tried to infer where the hiders are from this picture, I’d expect you to be more accurate than if you used the first picture. And that’s what I mean when I say this picture has information about the location of the hiders, while the other does not.

Now, the process I described above is a fairly simple evolutionary algorithm. The route changes represent mutations, and the preferential choice of routes that connected with hiders for subsequent use represents a selection step. That combination of mutation and selection–which can also be thought of as exploration with feedback–is what can lead to accumulation of information.
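The mutation-plus-selection loop in the game can be boiled down to a few lines. This is my own minimal toy, not the actual model behind the maps above: “routes” are reduced to single grid positions, the hider coordinates are made up, and fitness is simply closeness to the nearest hider.

```python
import random

HIDERS = [(2, 2), (17, 3)]  # hypothetical hider locations on a 20x20 grid
START = (10, 10)

def fitness(pos):
    # Negative Manhattan distance to the nearest hider (higher is better).
    return -min(abs(pos[0] - hx) + abs(pos[1] - hy) for hx, hy in HIDERS)

def mutate(pos, rng):
    # Small random change to a "route": one step in a random direction.
    dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return (min(19, max(0, pos[0] + dx)), min(19, max(0, pos[1] + dy)))

def evolve(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [START] * pop_size
    for _ in range(generations):
        pop = [mutate(p, rng) for p in pop]       # exploration (mutation)
        pop.sort(key=fitness, reverse=True)       # feedback (evaluation)
        pop = pop[:pop_size // 2] * 2             # selection: keep best half
    return pop

final = evolve()
# The population ends up clustered near a hider even though no individual
# step "knew" where the hiders were.
print(fitness(max(final, key=fitness)))
```

No single step contains any information about the hiders; the information accumulates in the population through the repeated exploration-with-feedback loop, which is the point of the maps above.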

If you’d like more details about this little experiment, you can read about it here and here. The model had its beginnings right here in the forum.


I might disagree with you about the flaw in Dembski’s arguments being the definition of “complex”. But I do think that you are agreeing with me that the idea of a specification is not what makes his work invalid.

This is particularly apparent in regards to antibiotic resistant bacteria, the infectious agent arms race, and pesticide resistance. Creationists regard these as loss of information, despite the obvious gain of environmental information and utility of adaptations to the target organism.

what about those experiments where some bacteria developed a resistance to substances over time due to mutations in their genes? Such mutations, which are mistakes in the genes, result from a loss of information (such as the loss of a control gene which regulates the pumping of the substance into the cell). Again, this is the opposite of evolution, which requires an increase in information if it were to occur.

“Doonesbury” Needs Some Resistance


Trying a different approach …

Consider information to be a pattern. Here it’s pretty easy to see that random mutation can create new information - some pattern that previously did not exist in the population. That’s not necessarily an increase or a decrease, it’s just different.

If you want to know whether that difference is an increase, you first have to define what you mean by an increase:

  1. An insertion mutation or duplication, which physically increases the space available to store genetic information. The genome could now store a larger or more complex pattern. For a duplication the pattern does not change at first, but further mutations can create new patterns without wrecking the original.

  2. Increased variability in the population genome, which would be an increase in Shannon Information. Here a mutation creates a new variation on existing patterns in the genome. All the previous variations still remain (selection comes later).

  3. Improved function. A mutation allows some new pattern or improved function. This may or may not include changes received by 1 or 2.

  4. Removal of poor function. This is selection, not mutation, but it seems to fit here. If part of the genome carrying a pattern with poor function goes extinct, the remaining population now has higher average function.

You are probably wanting #3. If this is the case, then consider any mutation that decreases function. Undo that mutation and we have increased function back to the original level, thus increasing information as well (in the sense you seem to intend).
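Point 2 in the list above (Shannon information of the population) is easy to illustrate numerically. Here is a toy sketch with made-up allele counts: a new variant entering the population raises the entropy of the allele-frequency distribution before selection acts on it.

```python
import math
from collections import Counter

def shannon_bits(alleles):
    # Shannon entropy (in bits) of the allele-frequency distribution.
    counts = Counter(alleles)
    n = len(alleles)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical population of 100 at one site: a mutation turns one of the
# common "A" alleles into a brand-new "T" variant.
before = ["A"] * 99 + ["G"]
after = ["A"] * 98 + ["G", "T"]

print(shannon_bits(before))  # entropy with two variants
print(shannon_bits(after))   # higher: a third variant was added
```

By this measure the mutation unambiguously added information to the population, whatever its effect on function, which is why pinning down which definition a creationist means is the whole game.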


It certainly isn’t the only flaw in Dembski’s arguments, but it is - in my view anyway - the only major problem in Dembski’s definition of Specified Complex Information. The impracticality of identifying it in non-trivial cases is also a big issue.

With regard to functional information, insertions will not necessarily add information and deletions will not necessarily destroy information.

Can you tell us about the pursuit of quantification by evolutionists regarding the origin of, say, ATP synthase, or the ribosome, or the echolocation system in whales, or chameleon vision?

Does Sanford’s argument fail if by information he means functional information? I don’t think so.