This is why I have always been skeptical of “complexity”. The measure of complexity depends on how you measure it and on the representation system.
You can come up with reasonable ideas of the asymptotic complexity of a problem. The actual measured complexity doesn’t mean a lot. But for problems that can grow, there are asymptotic trends.
The entities for which the ID people want to talk about complexity are all finite systems where asymptotic trends don’t have an obvious meaning.
The original use of “complex” to describe specified information was by Leslie Orgel in 1973. However he did not mean by it that the phenotype was complicated, but only that the sequence was long and conveyed a lot of specified information. Then ID advocates started to use Orgel’s term “specified information” and used “complex” simply to mean that there was a large amount of it, enough to make it very improbable if it was generated at random.
So the whole use of “complex” in discussions of what evolutionary processes could do was, right from the start, not discussing how complicated organisms were.
There is a separate, and vast, literature which tries to measure complexity of organisms by numbers of cell types.
The fraud of it all is in the fact that they haven’t contributed anything to the field and won’t ever do so.
Their whole bit is, “But life is so complex!”, and they will never move beyond that, because as soon as you start looking at the nature of that complexity, it screams that it arose by a massively iterative process.
I would agree, and suggest that this is entirely to be expected.
Both the ID movement in general and the DI in particular are engaged in apologetics, not research – coming up with new and shiny arguments why God exists/is-good, not finding stuff out. Their interest in design or anything else that they use as the basis for their arguments is thus subsidiary to that purpose, and so almost-guaranteed to be superficial and half-hearted.
Expecting them to contribute “anything to the field” of design is like expecting William Lane Craig to contribute to the science of cosmology or Ray Comfort to the study of bananas. Yes, it would be nice if they did, and it would make their work far more useful and more interesting to read – but any such expectation is a recipe for disappointment.
They’re not even trying to. The only reason IDers submit papers for publication (or get papers published by nefarious means) is so that they can then say that ID is scientific because… look at all these published papers!
Michael Behe, who pioneered the concept of irreducible complexity in the bacterial flagellum, said it best in a Discovery Institute keynote address of long ago:
“ASC not what ID can do for the flagellum—but what the flagellum can do for ID fundraising.”
As Gary Larson once responded to critics: “And it can also be noted that bears don’t actually drive cars and dung beetles really don’t attend college.” And as Barry Goldwater once said “Brevity in the pursuit of comedy is no vice. And pedantry in the pursuit of sophistry is no virtue.”
That said . . .
I practice and enjoy pedantry (and sophistry!) as much as anybody, but the linguist and lexicographer in me can’t resist piling on with relish (and perhaps a little mustard). The Oxford English Dictionary, the Cambridge Dictionary, Merriam-Webster, and Collins Dictionary all concur that a primary meaning of “pioneer/pioneered” is “to be one of the first people to do something” and, in the case of concepts, “to take an idea or field of study into new territory.” Michael Behe certainly did both with IC. Indeed, he took the irreducible complexity concept into venues and to new audiences where ICR had virtually no reach or influence. Behe was certainly a pioneer of IC.
Just for fun I asked the Gemini Advanced engine about this and its reply began with:
Michael Behe, a biochemist, is widely known for introducing and popularizing the concept of Irreducible Complexity (IC) in his 1996 book “Darwin’s Black Box.” He defined IC as a system composed of several interacting parts that contribute to the basic function, where the removal of any one of the parts causes the system to effectively cease functioning.
I would say that “introducing and popularizing” is exactly what Behe did as a pioneer of IC propaganda.
Fun times.
(Josh Swamidass took a backstage photo of Michael Behe and me some years ago, but I’ve never been able to get a copy. Those were fun times also. Perhaps I will see it when Josh eventually publishes his autobiography and photo memoirs.)
It had, but it wouldn’t have been possible to see. What that means is that you cannot be sure that a complex structure doesn’t exhibit a specification. For example, most people won’t be able to figure out that the sequence of numbers below exhibits specified complexity. And yet it does. Can you see it?
14159265358979323846264338327950288419716939937510582
I don’t think this is accurate as far as Shannon information is concerned.
To see this, let’s consider the outcomes of two runs of 20 coin tosses.
Here is the first:
HHHHHHHHHHHHHHHHHHHH
Here is the second
HTTTHHTHTTTTHHTHHTTH
Both outcomes have the same probability, hence the same Shannon information. Yet, the second is more random than the first. So no, more randomness doesn’t mean more Shannon information and WD doesn’t think that Shannon information is lack of randomness.
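To make that concrete: each of those 20-toss outcomes has probability (1/2)^20, so each carries the same self-information. Here is a minimal Python sketch (my own illustration, not anything from Dembski) that checks this:

```python
from math import log2

seq_1 = "HHHHHHHHHHHHHHHHHHHH"  # 20 heads in a row
seq_2 = "HTTTHHTHTTTTHHTHHTTH"  # the "random-looking" run of 20 tosses

def self_information_bits(seq, p_heads=0.5):
    """Surprisal -log2 P(seq) for independent tosses of a coin with P(H) = p_heads."""
    p_seq = 1.0
    for toss in seq:
        p_seq *= p_heads if toss == "H" else (1.0 - p_heads)
    return -log2(p_seq)

print(self_information_bits(seq_1))  # 20.0 bits
print(self_information_bits(seq_2))  # 20.0 bits -- same probability, same self-information
```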
I think one should be very careful before attacking Dembski in the field of information theory, because the guy really masters the concepts.
What I love most about the “Ray Comfort & the Intelligently Designed Banana” story (and the Little Golden Book based upon it) is that he finally got something right—and entirely by accident. As we all know, Ray just didn’t realize that it was humans who designed it by selective cultivation for sweetness, minimal seeds, and none of that nasty fibrous pulp that was like chewing on a rotten jute rope that got mangled in a kitchen blender. I am entirely in favor of that kind of intelligent design that gave us a better banana.
I can see how you might think that, but no. Shannon Information is a property of the population and the observations it might produce. It is not a property of a sample, but we might estimate it from a sample.
So no, more randomness doesn’t mean more Shannon information …
It does! That’s the definition. Try the math for yourself.
Shannon Information: H = -\sum_i p_i \log_2(p_i)
For the coin that gives all H, the probability of heads should be close to 1.0, and 0.0 for tails, but log(0) causes trouble. I suggest using P[H]=0.98 and P[T]= 0.02.
H = -[0.02 \log_2(0.02) + 0.98 \log_2(0.98)] = 0.14144
This value goes to zero as the probabilities approach 0 and 1.
For the fair coin use p=0.5.
H = -2[0.5 \log_2(0.5)] = -2[0.5 \times (-1)] = 1
or 1 bit, as we should expect from a fair coin.
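For anyone who wants to check the arithmetic, here is a small Python sketch of that formula (my numbers from above, nothing of Dembski’s):

```python
from math import log2

def shannon_entropy(probs):
    """H = -sum(p_i * log2(p_i)), skipping zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.98, 0.02]))  # ~0.1414 bits for the heavily biased coin
print(shannon_entropy([0.5, 0.5]))    # 1.0 bit for the fair coin
```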
… and WD doesn’t think that Shannon information is lack of randomness.
No disagreement there. The trouble comes from how WD uses Algorithmic Information (AI). I’m going to modify your example just a little to use longer strings to make it more obvious.
A) HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
and
B) HTTTHHTHTTTTHHTHHTTHHTHTTHHTHTTTTHHHTHHTT
Both have length 40, but we might “compress” (A) to the instructions: “Write H 40 times”
which has length 16.
For (B) there is no obvious way to write it more briefly, so its compressed length is still 40. It is completely “random” from the perspective of trying to write out the same sequence in shorter form.
The length of the “algorithm” to write A is shorter than for B, so we say B has more Algorithmic Information (because 40>16).
(Even if we find some more efficient way of coding these sequences, it seems certain that B will require more AI than A.)
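A crude way to see this on a computer is to run both strings through a general-purpose compressor and compare the output lengths. This is only a sketch of mine: zlib is a stand-in for true algorithmic information (which is uncomputable), and its framing overhead inflates both numbers, but the ordering A &lt; B is the point.

```python
import zlib

a = "H" * 40
b = "HTTTHHTHTTTTHHTHHTTHHTHTTHHTHTTTTHHHTHHTT"  # the "random-looking" string B above

# Compressed sizes are only upper bounds on the algorithmic information,
# but the repetitive string squeezes down much further than the random-looking one.
print(len(zlib.compress(a.encode())))  # small: "write H 40 times" is easy to exploit
print(len(zlib.compress(b.encode())))  # larger: no simple pattern to exploit
```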
We can turn this around and say that A “lacks randomness” compared to B.
In multiple sources, WD describes the simpler description, the one “lacking randomness”, as containing more information. Why?
Functional DNA sequences (described using AI) will tend to be more random rather than simple. Consider a long sequence of a single nucleotide repeated 400 times; it is simple to describe but probably doesn’t have much function. Functional DNA will tend to be more random as measured by AI, containing more information.
That said, Algorithmic Information doesn’t have any useful biological application I am aware of.
Now, a description like “An outboard motor” is simple, yet WD says this description contains a great deal of information. It was a small revelation to me (above) that WD intends the English-language specification to be short. I had previously assumed he meant a specification of functional DNA (or something). It’s an interesting insight, but it doesn’t help Dembski one bit (or one Shannon).
It makes no sense. I think Dembski wants to make a connection between the simple description and the full information needed to actually construct an outboard motor, but he never establishes that connection. To my knowledge he never mentions why they should be connected.
I think one should be very careful before attacking Dembski in the field of information theory, because the guy really masters the concepts.
@Giltil, I agree, and I have been very careful. I have mastered these concepts too, and I have far more practical experience at putting them to use in a related field (statistical theory strongly overlaps with Shannon Information; Algorithmic Information is essentially “parallel” to Shannon Information, with similar theorems and concepts).
I spent a year carefully reviewing Dembski’s 2005 paper, following up on citations and making sure I had it right, and that I hadn’t overlooked deeper interpretation. I’ve done more since. He gets it wrong, and makes a hash of information theory. I don’t know who told you “the guy really masters the concepts”, but there isn’t much to support the statement. Dembski has the education to grasp these concepts - he should know - yet he doesn’t.
This brings to mind an image of ID Design ‘Theorists’ waiting with bated breath for each new edition of dictionaries to come out, and then immediately scouring them for neologisms, in the hope that this might reveal to them ‘new’ ASC.
It of course doesn’t solve the different-ASC-in-different-languages problem – and I could see German ‘Theorists’ being quite disheartened – because their neologisms tend to be longer (compound words).
A technical point: AI (and ASC) use the idea of a “Universal Turing Machine” which can optimally compress/decompress messages. Which language doesn’t matter, only that there should exist some optimal way to code the message.
In practice, we never know if there might be some better encoding, and measured AI is always an approximation greater than or equal to the true AI. I didn’t mention this in my long post above because it wasn’t optimal.
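To illustrate (my own sketch, not part of any published ASC procedure): two off-the-shelf compressors will typically give different lengths for the same message, and each result is only an upper bound on the true, uncomputable algorithmic information.

```python
import bz2
import zlib

msg = ("the quick brown fox jumps over the lazy dog " * 10).encode()

# Each compressor yields a different length; neither is the "true" minimum,
# each is merely an upper bound on the algorithmic information of msg.
print(len(zlib.compress(msg, 9)))
print(len(bz2.compress(msg)))
```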
Wouldn’t that mean that ASC is, in practice, unobservable/unmeasurable? Wouldn’t that in turn severely limit the utility of the concept?
Addendum: would this compression be lossless? As many languages would have subtleties that would seem likely to be lost in a ‘universal’ language-neutral compression.