The Extra Face in Mount Rushmore

The hypothesis that the production of statistically significant levels of functional information is unique to intelligent minds has some pretty obvious empirical predictions in forensic science, decryption, SETI, archaeology, and biology. The most basic qualification for a testable, falsifiable hypothesis is that it can be empirically falsified. There are two ways to potentially falsify this hypothesis: a) the lab and b) realistic computational simulations.

Functional information (FI) is obviously not an invention of ID. Without FI, you would not even have been able to communicate on this forum or even send a text message. For an introduction to the concept, see this Nature article. There are other, more technical papers I can direct you to, should you be interested in finding out more about this.

Regarding the evolutionary algorithms you linked to … both EAs are very nice examples of ID in action. Unfortunately, however, they do not calculate the amount of FI produced, which, it appears to me, would not be statistically significant. It is very low-level info. My graduate program included work in EAs, including writing a large number of them. There is one thing everyone writing them soon becomes perfectly clear on … you had better put very careful, intelligent thought and design into your evaluation function or it will not work. If I may quote from one of our textbooks, Eiben & Smith, ‘Introduction to Evolutionary Computing’ …

“the evaluation function … forms the basis for selection, and thereby it facilitates improvements. More accurately, it defines what improvement means. From the problem-solving perspective, it represents the task to solve in the evolutionary context.” p. 19

At the heart of every successful evolutionary algorithm is a well-designed (intelligently designed) evaluation function that compares the current population with where you want it to go and evaluates any improvement in reaching the objective. In real life, evolution is blind and does not know where it wants to go. We can write fiction with words, we can produce mathematical fiction that models worlds that do not exist, and we can produce software fiction. The two EAs you’ve linked to are software fiction: they simulate an imaginary world where evolution knows where it wants to go and is handed, on a silver platter, four components to work with, plus a method to assemble and motivate those components. Each is a nice, well-designed piece of software, but should not be mistaken for a real-life, blind watchmaker with no objective whatsoever.
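To make the point concrete, here is a toy genetic algorithm (not one of the EAs linked in this thread; the genome length, population size, and mutation rate are illustrative assumptions). Notice that the `evaluate` function is where the programmer's goal enters the loop: it alone defines what counts as "improvement."

```python
import random

random.seed(0)

GENOME_LEN = 20

def evaluate(genome):
    # The designed part: this function *defines* what "improvement"
    # means (here, the arbitrary goal of accumulating 1-bits).
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

# Random initial population of 30 bitstrings.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(30)]
initial_best = max(evaluate(g) for g in pop)

for _ in range(200):
    pop.sort(key=evaluate, reverse=True)
    survivors = pop[:15]  # selection: keep the fitter half (elitist)
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

final_best = max(evaluate(g) for g in pop)
print(initial_best, final_best)  # fitness climbs toward the designed goal
```

Swap in a different `evaluate` and the same machinery "evolves" toward a different target; remove any meaningful `evaluate` and it goes nowhere, which is the point being argued above.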

In the field, we often encounter situations and anomalies where a number of variables appear to be involved that are difficult to quantify individually. A frequent solution that enables us to make useful estimates is to estimate the two extremes, normalize that range to 1, and divide it into discrete delta-x components. We then estimate the smaller subset of delta-x components that would be required for the effect or anomaly, which puts us in a position to estimate delta-x(effect)/1, which represents Wf/Wt.
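The estimate described above plugs directly into FI = -log2(Wf/Wt). A minimal sketch (the function name and the 1/1024 example fraction are my illustrative assumptions, not values from the thread):

```python
import math

def functional_information(delta_x_effect, delta_x_total=1.0):
    # FI = -log2(Wf / Wt), where Wf/Wt is estimated as the fraction
    # of the normalized variable range compatible with the effect.
    return -math.log2(delta_x_effect / delta_x_total)

# If only 1/1024 of the normalized range would produce the effect,
# FI = -log2(1/1024) = 10 bits.
print(functional_information(1 / 1024))
```

A larger compatible fraction means lower FI; when the whole range produces the effect (Wf = Wt), FI is 0 bits, matching the columnar-basalt case discussed below.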

In general, for any anomaly or effect, we must first determine the relevant variables before we can begin to estimate the various Wf/Wt for each of those variables. A radio signal from deep space will have different relevant variables than those at a crime site, or at an archaeological dig, or Mount Rushmore.

Re. Devil’s causeway: Nice example! I would add another component to an ID investigation … eliminate those effects that are determined by physics. This would eliminate crystal structures determined by the molecular structure and physics, including the columnar basalt at the Giant’s Causeway in the UK and Devil’s Tower in the USA. Those are highly constrained by the physics of their molecular structure coupled with the physics of thermal stresses during cooling such that Wf = Wt and FI = 0.

Re. Relative smoothness, you are right that there are many examples of smooth stones, but the key word is ‘relative’ (in comparison to the general area of the effect or anomaly). I have some beautiful, highly smoothed stones I picked up off the beach in Newfoundland … but relative to the surrounding area they would represent no deviation at all. Another way to better deal with this would be a more sophisticated variable of “relative relief smoothness”. Normally, natural processes that smooth stone also reduce the relief on that stone (i.e., they would wear down the nose and smooth out the eyes on Mount Rushmore). So what we would look for is sharp, well-defined large-scale relief coupled with very fine smoothness, relative to the other rock features in that area. We would then estimate a normalized range and the delta-x subset required for the effect, which would give us an estimate of Wf/Wt for that variable.

Over the past few decades, I have applied this measure, FI = -log2(Wf/Wt), to a large variety of problems, and it seems to give meaningful results in detecting ID. They include detecting Soviet subs within the background noise of the ocean (during a student summer job for National Defence back in the days of the USSR), applications to crystal lattices, random polymers, various forensic cases, “watermarked” artificial proteins, detecting lottery fraud, etc.

No, you’re still not getting the empirical prediction part. If you disagree, please make a few.

Hypotheses of the form “Property Y is unique to X” form one of the backbones of modern science. Unique properties permit us to identify and classify elements, molecules, artifacts, as well as make empirically verifiable/falsifiable predictions of the form “This demonstrates property Y, therefore we predict it is of the classification X”.

For example, from observations of the sequences of transmembrane proteins, we can form the hypothesis “Y characteristics of amino acid sequences appear to be unique to transmembrane proteins.” From this hypothesis, if we observe a sequence that contains those characteristics, we can predict the 3D structure using a transmembrane template. In general, 3D structure prediction relies on hypotheses of this form.

Similarly, we observe that intelligent agents can produce effects that require statistically significant levels of FI, but we’ve yet to observe non-rational processes producing statistically significant levels of FI. Instead, non-rational processes tend to favour expectation values of the null hypothesis, not deviations from it. From those observations, we can form the hypothesis that the ability to produce statistically significant levels of FI is unique to intelligent agents. From this hypothesis, we can make predictions. For example, if we observe an effect or artifact that required a significant amount of FI to construct, then we can predict that the effect or artifact was produced by a rational agent. Consequently, Monsanto can insert genetic markers into the genomes of Roundup Ready soybeans and canola to distinguish their product from other varieties. The markers must have a sufficiently high FI to put them far enough outside the normal expectation values that they can be used to identify the product in legal cases. I know of one case where Monsanto successfully pursued a lawsuit against an individual engaged in the unauthorized production of Roundup Ready canola. The J. Craig Venter Institute does something similar to label their artificial biopolymers.

I expect you understand that the same hypothesis can be used to predict intelligent agency in forensic science, archeology, SETI, and so forth.

You’re still not getting it. To be useful, scientific hypotheses need to be mechanistic. “Appear” is a weasel word in a hypothesis.

If you were trying to formulate a useful hypothesis, it would be, “The Y characteristic of a protein sequence is sufficient to cause transmembrane localization.” That leads to experiments, not cataloging.

Ah, the standard ID hand wave. Modeling a natural process on a designed computer is evidence the original natural process was intelligently designed. :roll_eyes:

But doesn’t it kind of depend on how you’re defining “functional”, etc.? It almost sounds circular, something like “When we look at things we intelligent beings produce, they appear to be intelligently produced.” And then applying it universally like, “If it looks like an intelligent agent produced it, then an intelligent agent produced it.” How are we to objectively determine that FI is unique to intelligent agents when intelligent agency is embedded in the definition of FI, as far as I can tell?


It appears to me that you are using the “no true Scotsman” approach for what defines a testable hypothesis. As for using “appear” in a hypothesis, it depends whether a person likes to overstate things or err on the side of caution. I prefer to be cautious in my approach, but if you wish, you can certainly drop the word “appear” and it will make no difference to the hypothesis I proposed, as “appear” does not appear in it.

If you are suggesting that hypotheses of the form “Y is a unique property of X” are not useful, then I’m afraid you are tossing a great deal of science out the window. In a lab, you would not have a clue what you were working with, nor its properties, when you conduct an experiment. You would not be able to analyse or predict anything. This isn’t a simple matter of “cataloguing”; it is essential for understanding, testing, and predicting processes, field effects, and outcomes.

What I would rather see from you is a mathematical approach to distinguishing examples of intelligent design from non-designed objects and effects … there is a difference between a laptop and a rock in the bush. What method, including equations, would you provide that is not purely subjective? Throwing up one’s hands and standing slack-jawed, criticizing other scientists actually working on the problem, is not the way science advances.

You have misrepresented what I wrote. Take another read … it has nothing to do with a “designed computer”. The first step in critiquing what someone has written is to understand it so that you can accurately articulate it. In your case, you need to understand what is involved in coding the evaluation function of EAs and GAs that actually can do interesting things. Unless you understand that, you are in a poor position to understand what you are commenting about.

That is a good question and not always easy to determine. FI is defined in the literature; one example is the Hazen et al. paper linked later in this thread.

The research question is, “what can produce FI?”. Thus far, there is only one thing we have ever observed that can actually produce it … intelligence. It is the only empirically verified option on the table. From that, we can draw the hypothesis I articulated earlier. If we ever observe something else doing it, then the hypothesis will be falsified. Until then, however, it is the only option on the table.

Defining “function” has to be done within the larger physical system; it is the physical system that determines whether something is functional or not. The paper I linked to above discusses this, and I also discuss it in my paper, Measuring the functional sequence complexity of proteins. For anomalies or effects where we do not know what the function is, we can still look at the effect and figure out what physical variables might be required to produce it within the context of the larger physical system. We can make quantified estimates and proceed from there.

Ah, more hand-waving propaganda from the ID crowd. Just define FI as “can only be produced by intelligence” then claim biological life is chock full of FI. Didn’t you guys learn silly semantic games like that don’t work with your “CSI” and “FSCO” nonsense?


The problem is defining functional information without being ambiguous or arbitrary. The other problem is automatically rejecting natural processes that produce functional information.


Chill out @Timothy_Horton. Make your point without the insults. @kirk is a nice guy, even though I’m certain he is wrong. You lose big time when you treat him unkindly.

I didn’t attack him personally, I’m going after his claims. Isn’t that allowed any more?

Go after his claims. Just do it a little less unkindly than your usual grumpiness. You win more people over with kindness.

This is what Tim does continually.

Of course Bill has to chime in with his usual falsehoods. Still angry because so many people call you on your repeated idiocy?

I’m not trying to win him over, just show others how disingenuous ID arguments usually are. But maybe I’ll add grumpy cat as my avatar. :slightly_smiling_face:

Tim, you have become a useless troll and are adding no value to the conversation. Either you are dishonest or you have comprehension problems. The anti-ID rhetoric is getting old.

There have been attempts in the literature to define FI. Here is my attempt, Measuring the functional sequence complexity of proteins, and here is another from Hazen et al., Functional information and the emergence of biocomplexity.

I would not want to automatically reject natural processes as being able to produce FI. Rather, I think we need to limit our options to what we can empirically verify as having an ability to produce FI. If there is a natural process that we can empirically verify as doing it, then my earlier hypothesis is falsified. We would then be left with two options, rather than just one.

EAs conclusively demonstrate that the natural processes in evolution can produce new FI. The excuse that the amount produced is not statistically significant is like claiming erosion can’t have produced the Grand Canyon because any erosion we see in real time from water and wind is statistically insignificant compared to the depth of the canyon.


If we could find a natural biological process that could generate FI in quantities sufficient to produce complex adaptations such as a bird’s wings or an eye, then do you think we could build a model for evolution?

It’s already been done Bill. Go read a basic biology or genetics textbook sometime instead of AIG.
