Looking for sources on the information argument

Could anyone point me to good sources on the information argument popular among ID advocates? I know the argument has been discussed here to death and I tried reading through old threads, but I’ve got COVID brain and not all of my neurons are synapsing. I’m looking for a good response or two that I could share with my students. Thanks!

1 Like

Bill Dembski has the most academic explanation.

All posts by @Joe_Felsenstein (and other posters I can’t recall) on pandasthumb.org (just look through the archives, there are lots), and all posts by Joe Felsenstein and Tom English on theskepticalzone.com:
http://theskepticalzone.com/wp/author/tom-english/
http://theskepticalzone.com/wp/author/joe-felsenstein/

And there’s always Jeffrey Shallit on his Recursivity blog, who’s been debunking creationist claptrap about information theory for years. He’s also had this long-standing challenge to creationists:

3 Likes

Jason Rosenhouse is a good author to check out.

You might start with this:

On Mathematical Anti-Evolutionism
Jason Rosenhouse
Science & Education (2016) 25:95–114
doi:10.1007/s11191-015-9801-7

3 Likes

The simplest direct answer I can give you is that if we could all agree on the same definition of Information, this problem would evaporate.

Shannon Information is the most commonly mentioned; it is a measure of the average bandwidth (bits per symbol) needed to send messages from sender to receiver. In statistics it plays a role analogous to the variance (for a normal distribution, the entropy is a simple function of the variance), and your students may already have calculated something like it in a lab exercise.
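If you want a concrete demo for your students, here is a minimal sketch in Python (the two distributions are made up purely for illustration):

```python
import math

def shannon_entropy(probs):
    """Average bits per symbol needed to encode messages
    drawn from this distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A four-symbol source: the uniform source needs the full 2 bits/symbol;
# the skewed one is more predictable, so it needs fewer.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36
```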

Kolmogorov (Algorithmic) Information is a measure of compressibility: the more the original message can be crunched down to minimal size and still be fully recovered (as measured by the length of the computer code needed to do so), the less algorithmic Information it contains. An example is a PNG-compressed image compared with the original bitmap: the less the image can be compressed, the more information it contains (non-compressibility is a good proxy for Algorithmic Information content).
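The same idea can be demonstrated on any byte string, not just images. Here is a rough sketch using zlib as a stand-in compressor (true Kolmogorov complexity is uncomputable, so compressed size is only an upper-bound proxy):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Size after zlib compression: a crude upper bound on
    algorithmic (Kolmogorov) information content."""
    return len(zlib.compress(data, 9))

random.seed(0)
repetitive = b"AB" * 5000                                   # highly ordered
noisy = bytes(random.randrange(256) for _ in range(10000))  # pure noise

print(len(repetitive), compressed_size(repetitive))  # 10000 -> a few dozen bytes
print(len(noisy), compressed_size(noisy))            # 10000 -> ~10000 bytes
```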

Neither of these has anything to do with the meaning of the message transferred. YECs will almost always be concerned with “meaning” rather than with a mathematical quantity. ID arguments may range from the YEC view to technical claims about creating new information.

In mathematical terms, adding randomness is adding information. “Selection” in the evolutionary sense removes information from a population. Creating new information is the easy part.
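To make that concrete in the Shannon sense, here is a one-generation toy calculation at a single two-allele locus (the numbers are made up; the updates are the standard single-locus mutation and haploid selection recursions):

```python
import math

def allele_entropy(p):
    """Shannon entropy (bits) of a two-allele locus at frequency p."""
    return -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)

p  = 0.9    # starting frequency of the favored allele
mu = 0.01   # symmetric mutation rate
s  = 0.1    # selection coefficient against the other allele

p_mut = p * (1 - mu) + (1 - p) * mu   # mutation nudges p back toward 0.5
p_sel = p / (1 - s * (1 - p))         # selection pushes p toward fixation

print(allele_entropy(p))      # ~0.469 bits to start
print(allele_entropy(p_mut))  # ~0.494 bits: randomness added information
print(allele_entropy(p_sel))  # ~0.440 bits: selection removed some
```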

In physical terms, entropy increases overall, but local order can be increased by expending energy and casting the disorder off into the wider environment (selection again). Everything we see and consider to be “beautiful and ordered” is the result of entropy dissipation, with varying amounts of local order.

9 Likes

Awesome! This is very helpful. Thanks.

1 Like

There is no one coherent creationist “information argument”. There is some inchoate feeling among creationists that “information” cannot come from naturalistic processes but must ultimately be supernatural in origin. This then shows up in various half-assed arguments, which make use of Shannon Information, Kolmogorov Complexity, or the Orgel/Szostak/Hazen concepts of Specified Information / Functional Information:

  1. The frequent assertion that mutation is the ultimate source of information and that gene frequency changes cannot add information, because “the alleles are already there”. This is silly: it is easy to show that gene frequency changes can increase Functional Information (see the toy sketch after this list).
  2. Dembski’s Complex Specified Information: In its most clearly explained version, it is not something you can detect which then shows you that ordinary evolutionary processes are extremely unlikely to bring it about. Rather, one must first show that ordinary evolutionary processes are extremely unlikely to bring about this high a level of adaptation. That then allows you to call it Complex Specified Information. Which then enables you to prove (ta-da!) that ordinary evolutionary processes are extremely unlikely to bring about this level of adaptation. But wait: didn’t we start from there?
  3. gpuccio’s dictum that having 500 bits of Functional Information shows that natural selection cannot achieve that level of function. Not actually proven by any math, but after hundreds of comments (in my thread of 2 December 2018 at The Skeptical Zone) it develops that gpuccio just thinks that typically there are no “uphill” paths on the fitness surface enabling this to happen. He claims this is empirically true, not theoretically proven.
  4. Kolmogorov Complexity arguments: It is argued that having an unusually short “program” that computes a bitstring (which may be a coded genotype or phenotype; this is never explained) is, in some sense, a very unusual thing. The computation uses a prior distribution favored by algorithmic information theory people, but it is never explained how that prior is relevant to biology. The unusualness is then argued to make it improbable that this could happen in evolution, though it is never explained why, or why shortness of the program would be favored by evolution.
    There may be more classes of “information” arguments but one gets tired.
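To put a toy number on point 1 (my own illustration, not from any particular paper, using a Hazen-style measure of Functional Information):

```python
from math import comb, log2

L = 100  # loci, each with a "good" and a "bad" allele

def functional_info(k):
    """Hazen-style FI: -log2 of the fraction of random genotypes
    (alleles drawn 50/50 at each of L loci) with >= k good alleles."""
    tail = sum(comb(L, i) for i in range(k, L + 1)) / 2**L
    return -log2(tail)

# Selection only shifts gene frequencies; the alleles are "already there".
# Yet as the good-allele frequency p rises, the typical genotype carries
# about L*p good alleles, and its Functional Information rises steeply.
for p in (0.5, 0.7, 0.9):
    k = round(L * p)
    print(f"p = {p}: ~{k} good alleles, FI = {functional_info(k):.1f} bits")
```

No new alleles appear at any point; only their frequencies change, yet the bits go up.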
8 Likes

@stlyankeefan And note that Joe is giving a generous review of CSI. Dembski’s 2005 paper defining CSI makes some very strange errors, like allowing probabilities greater than one and confusing whether CSI is a quantity or a quality. Finally, even if we ignore the other problems, CSI is maximized for very simple sequences (very low Algorithmic Information), meaning that things exhibiting high values of CSI are actually very simple. This is the exact opposite of Dembski’s claims.

4 Likes

Awesome! Thanks.

I could quibble about the “probabilities greater than 1”; I don’t think they are a big issue. As for Algorithmic Information, that is one scale you could use in computing SI to see whether there is CSI. Dembski has mentioned a number of them, such as viability; see for example page 148 of No Free Lunch, where he says specification can be “cashed out” in various ways. Specified Information always has a scale of goodness of some sort, the issue being whether it is somehow difficult for evolution to get you to be that good. The Kolmogorov Complexity scale is, as you note, not particularly sensible: why should we be impressed that the program to develop a given bitstring is unusually short? They never explain. For the other scales, the measure is basically related to fitness. So if we had a proof that the organism could not have gotten as fit as it is by ordinary evolutionary processes, then it would make sense to worry about how that came to be.

4 Likes

I would be OK if Dembski bounded the probability at 1.0, but he doesn’t. He has an MS in Statistics, and must know the correct binomial calculation. I agree it is fixable.

I guess I’ll have to break down and finally buy a used copy.

I recall that the Marks, Ewert, and Dembski paper on Algorithmic Specified Complexity corrected this by using a scale of “incompressibility” (total info minus algorithmic info). I wasn’t impressed by that paper, but Marks got the math right.

If we had proof of that, we wouldn’t be having this conversation. :wink:

Maybe glass sponges with their optimal design are a case in point.

There is nothing very surprising about an animal being well adapted to its environment. When you meet a sponge walking down the street, let us know.

6 Likes

Alas, the author of the piece on glass sponges doesn’t understand natural selection. She writes:

“Chance processes like natural selection show zero potential for supplying the information needed for such a profound connection between structure and function.”

There it is. The certainty that natural selection can’t supply adaptive information.

4 Likes

I think this speaks to the different understandings of what Information means.

@Giltil It’s not my intent to argue, but can you explain what “Information” means to you?

I like how extreme the assertion is too. Zero potential. Not insufficient potential, no. Zero. Natural selection can’t perform any degree of optimization no matter how tiny. No influence on structure-function relationships.
Presumably the author must then be convinced that the load-bearing capacity of the structure of the sponge has zero impact on its survival and reproductive capacities?

It’s especially odd given that exactly these kinds of structure-function optimization problems are the ones in which selection typically does so well that iterative random sampling combined with selection is deliberately employed by human engineers to design things such as load-bearing structures, airplane wings, windmill propeller blades, and what have you.

4 Likes

The point of my post regarding glass sponges was not that there is something surprising about an animal being well adapted to its environment, but that there is something very surprising about seeing animals displaying not merely functional systems/structures but OPTIMAL functional systems/structures.
Many contributors here seem to think that there exists in biology a vast number of solutions to a given biological problem, so that it is not surprising that the RV+NS process can find solutions. Said differently, reasoning in terms of sequences, many here think that functional sequences are not rare in sequence space. Even though I disagree with this idea, let’s pretend that it is true. In that case, it would probably be correct to reject the idea that FI (Functional Information) reliably points to design. But regarding not FI but OFI (Optimal Functional Information), the situation is very different. Indeed, while it may be conceivable that functional sequences are not rare in sequence space, one would have to depart from science to argue the same thing about optimal functional sequences.

No, rather the high confidence that natural selection can’t supply OPTIMAL physical structures. See my answer to Dan above.

Functional information, as defined by Hazen et al.: FI(Ex) = −log2 F(Ex), where F(Ex) is the fraction of all possible sequences whose degree of function is at least Ex.

1 Like

Helder spends her article developing the theme that sponge form follows function, then in the last paragraph just baldly states that natural selection cannot supply the information needed. No matter how connected form and function are, that does not follow at all.

Sharks and whales are similarly shaped because both are aquatic predators occupying similar environmental niches. Why would cube-shaped sharks not be selected against? How would the hydrodynamics of a whale’s environment not favor a certain body shape, flipper, and tail?

6 Likes

Why?

There is some structure that undergoes some physical stress, like bearing a load. Pick any random shape, say some irregular chaotic lump of material, subject it to the physical stress (say, resisting some degree of pressure from all directions), and see how long it lasts before breaking.

Random shape:
[image: an irregular random lump]

Now generate lots of copies of that lump (all almost exactly identical) but with small random changes in that same structure, and subject each of them to the same physical stress. Remove the ones that did the worst. Take the remainder, generate new copies of each, subject them to the stress. Remove the ones that did the worst again. Rinse and repeat ten million times.

Why in the world should this NOT result in the optimal structural solution for withstanding that physical stress?

If the optimal physical shape is a sphere (just to make it easy to picture in our imagination), then it seems obvious to me that any imaginable starting shape can be incrementally improved towards the shape of a sphere. There are obviously many random changes that move the starting shape (pretty much regardless of what it is) closer to a sphere, just as there are changes that move it away from one. Positive and negative selection, then.
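In case it helps, here is the whole scenario as a runnable toy sketch of my own: radii in a fixed set of directions stand in for the lump’s shape, and “roundness” (negative variance of the radii) stands in for how well it withstands the stress:

```python
import random

random.seed(1)
N_DIRECTIONS, POP, KEEP, GENERATIONS = 32, 50, 10, 1000

def roundness(shape):
    """Fitness: a sphere has the same radius in every direction,
    so score shapes by the negative variance of their radii."""
    mean = sum(shape) / len(shape)
    return -sum((r - mean) ** 2 for r in shape) / len(shape)

def mutate(shape, sigma=0.01):
    """A copy with small random changes: add or remove a little material."""
    return [max(0.1, r + random.gauss(0, sigma)) for r in shape]

# Start from an irregular, chaotic lump.
lump = [random.uniform(0.5, 1.5) for _ in range(N_DIRECTIONS)]
survivors = [lump]

for _ in range(GENERATIONS):
    offspring = [mutate(s) for s in survivors
                 for _ in range(POP // len(survivors))]
    offspring.sort(key=roundness, reverse=True)
    survivors = offspring[:KEEP]  # remove the ones that did the worst

print(f"start: {roundness(lump):.5f}")          # far from zero
print(f"after: {roundness(survivors[0]):.5f}")  # approaches zero: a sphere
```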

Just how many random changes are available to that lump that will bring it closer to a sphere?
[image: the lump with a circle overlaid; red and blue lines mark areas outside and inside the circle]
Any random change that removes some material from areas (marked by red lines) outside the “circle” would bring it closer to a sphere, and any random change that adds material to the areas marked by blue lines would do the same. If the function is to retain structural integrity against some outside pressure, any random change that weakens the structure would be selected against, and anything that makes it stronger would be selected for.

You seem to be saying there’s something intrinsically unworkable about this scenario, but you don’t say what that is. Just what is the problem?

4 Likes

Natural selection is great at optimizing things.

This is evident from observation.

Again, because we have substantial empirical evidence in support of that view.

Optimal solutions should be rare, but if there is a functional gradient then optimization is easy. This is in fact the entire point of natural selection.
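A quick sketch of my own makes the contrast explicit: give a hill-climber a fitness function with a gradient and it finds the optimum easily; make the same optimum a rare, gradient-free needle and it essentially never does:

```python
import random

random.seed(0)

def hill_climb(fitness, x=0.0, steps=5000, sigma=0.05):
    """Keep each small random change only if it doesn't lower fitness."""
    for _ in range(steps):
        candidate = x + random.gauss(0, sigma)
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

def smooth(x):
    """A functional gradient: every step toward the optimum at 3.0 helps."""
    return -(x - 3.0) ** 2

def needle(x):
    """The same optimum, but vanishingly rare and with no gradient."""
    return 1.0 if abs(x - 3.0) < 1e-6 else 0.0

print(hill_climb(smooth))  # lands very near 3.0
print(hill_climb(needle))  # drifts at random and never finds it
```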

7 Likes