Dembski: Building a Better Definition of Intelligent Design

On the contrary, I think it’s just starting to get interesting!

At first read, information theory (IT) has no application here, but …

… then I read your blog about Notch. Reading about cell interactions brings to mind Cellular Automata and Turing Machines. Cells are certainly processing information, and there are some concepts from Computer Science that might apply, but not necessarily Information Theory.

I need to think on that, and reply from a real keyboard.

Thanks for reading it, and it is true that computational approaches to signaling are fruitful paths to better understanding. My point about Notch, though, is not about that. (Notch is a centerpiece of my “evolution is easy” book project that I hope will get better organized in the near future.) I was as usual unclear in my quick response above, but the point was to affirm your comment about the number of possible paths/outcomes in a trajectory of any kind.

So, if we want to consider Shannon entropy-based concepts of “narrowing possibilities,” we have to understand and name both the possibilities and the “goal.” Maybe this deserves a separate thread, but the failure to name and identify these aspects is what I called “one of a small collection of fundamental untruths that form the core of ID ‘strategy.’”

If one considers Notch signaling as a “goal,” and/or as some kind of optimized solution to a set of problems, of the kind an engineer might develop, then one must ignore the biggest question: did it have to be this way? The answer to that question, biologically speaking, is a resounding NO, and not just for Notch signaling but for every signaling system used to build an animal (or a plant, and that’s not just to make @Art happy). That’s the point of my “Why Notch?” writing.

Whether this particular question is one that information theory can help illuminate is… less clear to me. I haven’t seen it yet in my reading of systems biology that uses information-theoretic language and approaches. But I have seen enough exploration of information theory in developmental biology (and genomics and evolution and signaling) to know that information-theoretic work in biology is not hindered by any claim that information theory “does not deal with meaning” (which I’ll take your word for) and seems not to have limited itself to a definition like “Information describes variability or bandwidth (Shannon).”

I’ll post some interesting recent work and a few comments on Adami’s new book in a response to @Art.

Hmmm it’s kinda both.

Information-theoretic approaches to experimental questions in developmental biology are semi-recent, dating to less than 15 years ago. See the nice overview in the Preview by Sokolowski (first link below). One key early contribution was in PNAS in 2013, “Positional information, in bits.” That paper is about taking the developmental concept of positional information and framing it mathematically, explicitly building on Shannon. The paper seems pretty basic in its aims. The Preview is about a paper in the same issue of Cell Systems, “Localization of signaling receptors maximizes cellular information acquisition in spatially structured natural environments.” This paper seems to be a pretty significant advance in the use of information theory in biology. It is (as near as I can tell) only conceptually related to questions of development, but the Sokolowski piece claims that the paper has solved some big conceptual problems that most certainly do pertain to analysis of development.

I’m currently reading Chris Adami’s new book on biological information, and have sampled his section on use of information theory to think about TFs in development. That section is based on his 2015 paper, Discovery and information-theoretic characterization of transcription factor binding sites that act cooperatively. The context is development.

Ah but then there’s a new PNAS paper that I need to read right away: “Information content and optimization of self-organized developmental systems.” The abstract suggests that these authors are addressing the kind of question I threw out earlier: what are informational aspects (limits? strategies?) that come along with the progressive restriction of cell fate that is fundamental to animal development?

I guess one way to think more about this is to ask whether information theory can help us understand (or ask better questions about) the way development happens. The embryo isn’t just growing (as in getting bigger) – that will be clearly limited by resources. It’s also progressively adding features, and it seems obligated to burn bridges as it goes. Does information theory represent a different way to think about that? I don’t know, but I can tell that systems biologists aren’t shying away.

Papers:

2 Likes

Hey Steve, thanks a whole bunch for the list of papers. I will need some time to go through them, and will be slow replying until mid-August. Someone needs to post here every few days so that the powers that be do not invoke the 7-day rule.

I will also be thinking about this in the context of something closer to what I work on. The intersection of every regulatory mechanism known to man (and gods) with the singular decisions plants must make (when to flower, when to germinate) is a good place for me to tinker with informational concepts.

1 Like

I’ll keep the home fires burning :sunglasses:!

That looks super interesting. Plants are so cute. :seedling:

“Conditional Information” is used to describe a limited set of trajectories given “observed facts”. Here the “facts” would be the developmental and environmental signals that a cell can detect.
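If it helps to make “conditional” concrete, here is a toy calculation of my own (the joint distribution is invented for illustration, not taken from any paper). The conditional entropy H(Fate | Signal) measures how much uncertainty about a cell’s fate remains once the signal has been observed:

```python
import math

# Toy illustration of conditional entropy H(Fate | Signal).
# The joint distribution P(fate, signal) is made up for the example:
# a cell's fate is nearly determined once the signal is observed.
joint = {
    ("neural", "high"): 0.45, ("epidermal", "high"): 0.05,
    ("neural", "low"):  0.05, ("epidermal", "low"):  0.45,
}

def H(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distribution over the signal alone.
p_signal = {}
for (fate, sig), p in joint.items():
    p_signal[sig] = p_signal.get(sig, 0) + p

# H(Fate | Signal) = H(Fate, Signal) - H(Signal): the uncertainty
# about fate that remains after the signal has been observed.
print(f"H(Fate, Signal) = {H(joint):.3f} bits")
print(f"H(Signal)       = {H(p_signal):.3f} bits")
print(f"H(Fate|Signal)  = {H(joint) - H(p_signal):.3f} bits")
```

Observing the signal cuts the fate uncertainty from 1 bit down to about 0.47 bits in this made-up case; the “narrowing of possibilities” is exactly that difference.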

The number of possible alternatives to Notch is not Shannon Information, at least not without some very complicated definitions and restricted meaning. I started to write about Algorithmic Information (AI), because then we might discuss how cells are kind of like Turing Machines, but I think that’s a dead end. What is useful is to consider the DNA that encodes the “function of Notch.” This we might boil down to the number of bits needed to convey the signals that Notch carries (Shannon Information). We need some number of bits for the message itself, and possibly more bits to uniquely identify the type of signal to the receiver (to eliminate other sorts of signals).

The analogy I like here is the famous “One if by land, two if by sea.” The signal is only one bit of information, which triggers the prearranged meaning “by land” or “by sea.” The receiver also needs to know that the signal will be shown from a church steeple, not just from anyone who might happen to be waving lanterns about. This is prearranged information - the instructions to the receiver require more bits to describe.
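To put rough numbers on that (a back-of-the-envelope sketch, nothing more):

```python
import math

# "One if by land, two if by sea": two equally likely messages,
# so the entropy of the signal itself is exactly one bit.
p = {"by land": 0.5, "by sea": 0.5}
signal_bits = -sum(q * math.log2(q) for q in p.values())
print(f"Entropy of the signal: {signal_bits:.1f} bit")

# The prearranged protocol costs far more bits to describe than
# the signal it enables. As a crude stand-in, charge 8 bits per
# character of the instructions.
protocol = ("watch the church steeple; one lantern means by land, "
            "two lanterns mean by sea")
print(f"Rough size of the protocol: {8 * len(protocol)} bits")
```

One bit on the wire; hundreds of bits of shared, prearranged context to make that bit mean anything.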

Note: These extra bits aren’t part of Shannon Information, but they would be counted in Algorithmic Information. I still don’t think AI is relevant; all it can tell us is the approximate number of codons needed for the Notch gene, and we already know that.

Any means of sharing bits of information will do, so long as the cell (or Turing Machine) can identify and decode the signal. The number of ways of doing this is … REALLY LARGE … I don’t know where you would begin counting. The limiting factor would be in identifying unique signals, which means (I think) the number of different sorts of chemoreceptors that can be described in a given length of DNA.
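A crude upper bound on the “identifying” part, ignoring every biological complication (my own arithmetic, just to set the scale):

```python
import math

# Each DNA base carries at most log2(4) = 2 bits, so a stretch of
# length L can in principle distinguish at most 4**L sequences.
# To uniquely label N different signal types you need at least
# ceil(log2(N)) bits of "address".
def min_bits_to_label(n_signal_types):
    return math.ceil(math.log2(n_signal_types))

for n in (7, 100, 10_000):
    print(f"{n:>6} distinct signals need >= {min_bits_to_label(n)} bits to address")

# For scale: a receptor gene of ~1,000 codons (~3,000 bases) is a
# ~6,000-bit description -- vastly more than the handful of bits
# needed just to tell the signals apart.
```

Which supports the guess below: the description of the receiving machinery dwarfs the address space it implements.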

I will hazard a guess there is more information baked into identifying the correct source of a biological signal (like Notch) than there is in the signal itself.

Which will take some time to digest, thank you.

1 Like

And that’s the point of asking “why Notch?” in the context of evolution. All of the major cellular signaling systems (there are seven that are conserved across all animal lineages) should be described as you just did: “any means…will do.” The universal conservation of these seven systems is one of those extraordinary facts of biology that requires a good explanation. Common ancestry is that explanation.

It seems that this limitation has been solved (by evolution) partly by expanding the number of receptors/signals but also by employing them in combinations. This is an active area of research at the intersection of cell/molecular biology and systems biology, and this is one place where biologists are using explicit information-theoretic language. It’s also a place where biologists use, unapologetically, the language of design.

Sadly, it seems very doubtful that Dembski wants to ask questions like that (i.e., “how does that work?”). Art suggested that “the hard work of developing a system wherein probabilistic considerations in nature could be properly studied was and is just too much for the ID community” and I think we all know he’s right. Meyer is not even capable of thinking about these concepts, nor is Axe. Dembski, it seems to me, is capable. But alone. I won’t try to understand how these intellectual disasters come to be, but I also won’t pretend that the ID community can intelligently discuss design or Shannon information.

Why even bring it up, then? (It = what some ID guy writes about information or the definition of “intelligent design.”) For me, the answer is that I think that the “intuition” employed by these hacks is pretty common amongst moderately intelligent people when considering evolution. So, I think that the confusion of Dembski et al. is potentially useful as a starting point when, say, writing about the ways in which evolution can feel counterintuitive.

1 Like

In the latest version of the Complex Specified Information argument, Dembski and Ewert have an algorithm that produces (something) and a measure of the difference between the size of the something and the length of the algorithm. The indication of design is when the algorithm is much shorter than the (something).

But they neglect to even suggest what it is that the algorithm is computing – the genotype? the phenotype? Let alone give any biological reason for why shortness of algorithm is hard for natural processes to achieve.
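For reference, the definition they give (in Ewert, Marks, and Dembski 2013) is ASC(x, C, P) = -log2 P(x) - K(x|C), where K is the uncomputable conditional Kolmogorov complexity. Here is a minimal sketch of how one might estimate it, with zlib-compressed length standing in for K (my substitution, and a crude one) and the context C dropped:

```python
import math
import zlib

def asc_estimate(x: bytes, surprisal_bits: float) -> float:
    """Rough Algorithmic Specified Complexity estimate:
    ASC(x) = -log2 P(x) - K(x|C). Here -log2 P(x) is passed in
    directly (as surprisal_bits), and zlib-compressed length in
    bits stands in for the uncomputable Kolmogorov complexity."""
    k_approx = 8 * len(zlib.compress(x))   # crude upper bound on K(x)
    return surprisal_bits - k_approx

# A long but very regular string: improbable under a uniform model
# (1000 letters at ~4.7 bits each) yet highly compressible, so the
# estimated ASC is large. Note that nothing in the calculation says
# what x is supposed to *be* -- genotype, phenotype, or anything else.
x = b"AB" * 500
surprisal = 1000 * math.log2(26)           # -log2 of (1/26)**1000
print(f"estimated ASC ~= {asc_estimate(x, surprisal):.0f} bits")
```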

So you are quite right.

2 Likes

Is that from the new edition of The Design Inference, or something else?

Don’t they say “make an outboard motor” is a valid algorithm, or words to that effect (that is not an exact quote or intended to be)? And the size of the algorithm is just the length of that statement (presumably measured in words or letters - I’d guess words but I don’t know).

If that’s it, then the idea is - to be polite - less than sensible.

2 Likes

It is the definition of Algorithmic Specified Complexity (ASC) that they give there. It was also mentioned earlier in papers by Ewert, Marks, and Dembski (2013) and Ewert, Dembski, and Marks (2014), and a proof of its conservation was attempted by Nemati and Holloway (2019) in the DI house journal BIO-Complexity. Links to these and to disagreements with them may be found in my 2019 post at Panda’s Thumb where I spend much time trying to figure out why this makes any sense.

3 Likes

How did I miss that bit of fun? Time to catch up!

But this comment from Eric Holloway

Also, one more side note, ASC is not brought about by simplicity, but by a concise description by an independent context. For example, the bacterial flagellum is highly complex, but also has an enormous amount of ASC since its construction can be concisely described in English as an “outboard motor”.

… offers insight into the mindset of ID. For them, what matters is that the English description is short. Suddenly Dembski’s meaning is much clearer … and also completely nuts. The length of the English-language description has no bearing on the object it describes. It’s simply a more convenient way to name the object than listing all of its component parts. If we count all the additional language needed to unpack the meaning of “an outboard motor,” it is no longer very short. ***

More after I get caught up?

*** Dembski appears to be ignoring the second part of Algorithmic Information - the part that decompresses the words “outboard motor” into a functional outboard motor.
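To make the arbitrariness concrete, here is a toy comparison of my own (using a crude 8 bits per character, and an invented “unpacked” description) of the specification lengths different descriptions would assign to the very same flagellum:

```python
# The "specification length" depends entirely on which description
# you pick, not on the flagellum itself -- which is the problem.
short_label = "outboard motor"
unpacked = ("a rotary motor driven by ion flow across a membrane, "
            "with a stator, a rotor, a universal joint, and a "
            "helical propeller, self-assembling from ~40 proteins")

for name, text in [("short label", short_label), ("unpacked", unpacked)]:
    print(f"{name:>12}: {len(text):3d} chars ~ {8 * len(text)} bits")

# And neither count includes the "decompressor": the mountain of
# shared knowledge needed to turn English words into an actual motor.
```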

6 Likes

Also, it’s wrong. The bacterial flagellum is an inboard motor, since the ‘shaft’ penetrates the ‘hull’.

6 Likes

That might be the least wrong thing about it.

2 Likes

But probably the easiest to explain.

1 Like

And more than a little emblematic of the lack of rigor that infects ID discourse.

1 Like

No surprise to see Dembski among the authors of these papers. What a dumb-ASC.

1 Like

That must have been the comment I was thinking of. And it is, indeed, completely nuts. If algorithmic specified complexity can be increased simply by coining a new term to describe the result, it seems completely meaningless as a measure of anything relevant. Did the flagellum really have a lower ASC before the term “outboard motor” was coined?

It looks like an attempt to combine the notion of algorithmic complexity with the idea of specification without any understanding of what it would mean.

Thinking a little more, I know that Dembski was attempting to deal with the Texas Sharpshooter aspect of CSI by counting up all the specifications of equal length - hardly rigorous, but good enough for a rough estimate. Could this idea be an attempt to deal with the even bigger difficulty of calculating meaningful probabilities by replacing them with the length of the English-language specification - and confusing that with Algorithmic Complexity?

2 Likes

Also, does the flagellum’s ASC increase if you call it an inboard motor instead? Or decrease if you describe it in French or German?

4 Likes

Or if you describe it in German as “Außenbordmotor” or “Außenborder”? (Extra points for finding the languages with the shortest and longest words for outboard motor. :wink: )

4 Likes