Depends on one’s perspective.
I have dusted off a very old essay that tries to convey a different perspective. Enjoy, and please, please everyone feel free to comment, criticize, and take the thoughts in new directions.
On complexity and information
Consider the tornadic thunderstorm. It consists of a number of integrated and essential components, all of which are needed to produce and maintain the tornado: the ground-level and upper-air windstreams (which must be oriented in precise ways), the precisely functioning updraft, the supercell itself (which consists of more parts than I can possibly list), and the funnel cloud. By most accounts (there will always be dissent), this is an irreducibly complex (IC) system.
Can we speak about the information content of a tornadic thunderstorm? I believe so. Recall that the informational content of genomes is usually estimated by “calculating” the fraction of all possible sequences (nominally, amino acid sequences) that can satisfy a particular specification. We can use a similar strategy to guesstimate the “information” carried by water vapor molecules in a storm. The hard part is deciding what fraction of all the possible states available to a particular water molecule is actually “used” in a storm. Now, one can count up all possible positions in the storm, interactions with all possible partners, etc., etc., and realize that this fraction is probably rather small. But, for the sake of argument, let’s err on the generous side – let’s propose that only 1 in 10^30 of the hypothetical states of any given water molecule is excluded in a storm (that is, that nearly every conceivable state is permitted).
Starting there, we need only count the number of water vapor molecules in a storm and estimate the “probability” that the arrangement found in a storm would occur. If we arbitrarily think in simple terms - a storm that is 5x5x7 miles in size, a temperature of 30 degrees C, a partial pressure for water vapor of about 32 mm Hg, an overall atmospheric pressure of 1 atm - then the number becomes (roughly) (1 - 10^-30) raised to the number of water vapor molecules in the storm (which is about 10^36). Which in turn is about 10^-10^6 (that’s 1 divided by 1 followed by roughly a million zeroes!). (For comparison, recall Hoyle’s number of 10^-40,000 as an estimate of the probability of the proteins in a simple cell arising by chance.)
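(For anyone who wants to check the napkin math, here is a minimal Python sketch of the calculation, using the ideal gas law and the assumptions stated above. The 10^-30 exclusion fraction is, of course, my arbitrary proposal; and since floating point cannot represent (1 - 10^-30) directly, the code uses the small-x approximation for the logarithm. It reproduces the ~10^36 molecule count, and its exponents come out within a factor of about three of the round numbers quoted above - which is as much as back-of-the-napkin arithmetic can promise.)

```python
import math

# Back-of-the-napkin check of the storm estimate above.
# Every number here is an assumption stated in the essay, not a measurement.

MILE = 1609.34                                # meters per mile
V = (5 * MILE) * (5 * MILE) * (7 * MILE)      # storm volume (5x5x7 miles), m^3
T = 273.15 + 30.0                             # 30 degrees C, in kelvin
P_H2O = 32.0 * 133.322                        # 32 mm Hg of water vapor, in pascals
R = 8.314                                     # gas constant, J/(mol*K)
AVOGADRO = 6.022e23                           # molecules per mole

# Ideal gas law: number of water vapor molecules in the storm.
moles = P_H2O * V / (R * T)
N = moles * AVOGADRO
print(f"water vapor molecules: {N:.1e}")      # ~7e35, i.e. "about 10^36"

# If only 1 in 10^30 states of each molecule is excluded, the probability
# that all N molecules land in permitted states is (1 - 1e-30)^N.
# (1 - 1e-30) rounds to 1.0 in floating point, so use the small-x expansion:
#   log10((1 - x)^N) = N * log10(1 - x) ~ -N * x / ln(10)
x = 1e-30
log10_p = -N * x / math.log(10)
print(f"probability of the arrangement ~ 10^({log10_p:.2e})")

# The corresponding information content, in bits: -log2(p).
bits = -log10_p * math.log2(10)
print(f"information content ~ {bits:.2e} bits")
```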
Hopefully, if I’ve been clear, there should be the beginnings of a paradox here. It arises when the preceding estimate is set against the “universal probability bound” put forth by Dembski – roughly 10^-150. The reflexive interpretation is that the information content of a tornado, which obviously forms by chance, and far too often to be considered improbable, exceeds Dembski’s limit, thereby indicating a fundamental problem somewhere. But, in exploring this seeming paradox, I would suggest that useful things can be learned about the application of Dembski’s ideas to nature. I’ll offer two in the rest of this post.
A. First, I need to remind myself just what the “universal probability bound” is. It was not derived from information-based computations, but rather by estimating the total number of events that could possibly have occurred in the history of the universe. The preceding suggests (to me, at least) that it may not be appropriate to equate the information content of a system with the probability of occurrence of an event. This probability needs to be estimated by other means, and depends on much more than the informational changes associated with an event.
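(As I recall Dembski’s derivation - readers should consult NFL for his exact reasoning - the bound is simply the product of generous upper estimates of the probabilistic resources of the universe: 10^80 elementary particles, times 10^45 Planck-time state changes per second, times 10^25 seconds of cosmic history, for about 10^150 possible events. An event with probability below 1 in 10^150 would thus not be expected to occur by chance even once, anywhere, ever.)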
As a simpler example, consider the information content of T-urf13. In NFL, Dembski argues that the information content of this protein, while large, is in and of itself below the “limit” that one gets if one equates the “universal probability bound” with informational bits. However, the probability of this protein arising by chance in a population of maize plants is likely far, far greater than the informational estimate would suggest. In contrast, the probability of finding a milligram of T-urf13 on, say, the dark side of the moon is far, far less than the “universal probability bound”. In other words, circumstance and pathway are of paramount importance when thinking of probability, while inherent information content is almost irrelevant.
Put another way, an information content of 3 million bits (roughly that of a tornado, if one grants my back-of-the-napkin arithmetic), while in excess of the limit one would get if one equates the “universal probability bound” with information, is not really complex, since an event that brings such a quantity of information into being can and does occur “by chance”, and frequently. IOW, complexity is not determined by information content, but by other considerations.
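(The “3 million bits” figure is just the storm probability re-expressed in bits via the usual conversion, I = -log2(p): with p of about 10^-10^6, I is about 10^6 x log2(10), or roughly 3.3 million bits. By the same conversion, Dembski’s 10^-150 corresponds to only about 500 bits.)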
B. Which brings me to a second point. Usually (in my reading, at least), information content is reflective of the informational entropy of a system. Entropy, in turn, is usually taken as a state variable – the informational entropy of, say, a protein is independent of the pathway by which the protein originated. The preceding indicates that complexity does not share this property. It follows (at least to me) that the property “complex specified information” (CSI) is not a state variable, and thus should not be rigorously equated with information per se. I would suggest that a better analogy here is thermodynamic work. Work is a pathway-dependent property – the amount of work obtained in going from state A to state B is determined as much by the pathway as by the inherent thermodynamic properties of the initial and final states (although the poises of the state variables do affect the work that can be done). It seems (naively, to be sure) that CSI would be better defined in terms of some sort of informational “work”, rather than inherent information content. (This would take into account the pathway dependence of the assignment of complexity, as indicated in the preceding and illustrated just below.)
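(A textbook illustration of the distinction, for concreteness - nothing here is specific to CSI. Take n moles of an ideal gas from state A to state B by isothermal expansion. The entropy change is fixed by the endpoints - ΔS = nR ln(V_B/V_A) no matter how the expansion is carried out - but the work obtained is not: a reversible expansion yields w = nRT ln(V_B/V_A), while free expansion into a vacuum yields no work at all. Two pathways, one and the same change of state, very different amounts of work. My suggestion is that complexity behaves like the work, not like the entropy.)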
I’d like to elaborate on the concept of informational work, but I am woefully ill-equipped to explore the idea in much depth. Perhaps other participants here can fill in some of the blanks.
As I said, just some fanciful thoughts. Enjoy.