The hypothesis that the production of statistically significant levels of functional information is unique to intelligent minds has some fairly obvious empirical predictions in forensic science, decryption, SETI, archaeology, and biology. The most basic qualification for a testable hypothesis is that it can, at least in principle, be empirically falsified. There are two ways to potentially falsify this hypothesis: a) the lab, and b) realistic computational simulations.
Functional information (FI) is obviously not an invention of ID. Without FI, you would not even have been able to post on this forum or send a text message. For an introduction to the concept, see this Nature article. There are other, more technical papers I can direct you to, should you be interested in finding out more.
Regarding the evolutionary algorithms you linked to … both EAs are very nice examples of ID in action. Unfortunately, however, they do not calculate the amount of FI produced, which, it appears to me, would not be statistically significant; it is very low-level information. Part of my program included graduate work in EAs, including writing a large number of them. There is one thing everyone writing them soon becomes perfectly clear on … you had better put very careful, intelligent thought and design into your evaluation function or it will not work. If I may quote from one of our textbooks, Eiben & Smith, ‘Introduction to Evolutionary Computing’ …
“the evaluation function … forms the basis for selection, and thereby it facilitates improvements. More accurately, it defines what improvement means. From the problem-solving perspective, it represents the task to solve in the evolutionary context.” p. 19
At the heart of every successful evolutionary algorithm is a well-designed (intelligently designed) evaluation function that compares the current population with where you want it to go and evaluates any improvement toward the objective. In real life, evolution is blind and does not know where it wants to go. We can write fiction with words, we can produce mathematical fiction that models worlds that do not exist, and we can produce software fiction. The two EAs you've linked to are software fiction: they simulate an imaginary world where evolution knows where it wants to go and is handed, on a silver platter, four components to work with and a method to assemble and motivate those components. They are nice, well-designed pieces of software, but they should not be mistaken for a real-life, blind watchmaker who has no objective whatsoever.
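To make the point concrete, here is a minimal, purely illustrative sketch in Python (a toy "weasel"-style EA of my own, not either of the EAs you linked; the target string, alphabet, and parameters are all arbitrary choices on my part). Notice where the objective gets smuggled in: the evaluation function already knows exactly where the population is supposed to go.

```
import random

# Toy "weasel"-style EA for illustration only.
# The designer's objective is baked into TARGET and into evaluate().

TARGET = "METHINKS IT IS LIKE A WEASEL"   # objective chosen in advance by the programmer
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100
MUTATION_RATE = 0.05

def evaluate(candidate):
    # Evaluation (fitness) function: counts characters matching the target.
    # This is where "what improvement means" is defined by the designer.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Randomly change each character with probability MUTATION_RATE.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

def run():
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(POP_SIZE)]
    generation = 0
    while True:
        best = max(population, key=evaluate)
        if evaluate(best) == len(TARGET):
            return generation, best
        # Selection is driven entirely by the evaluation function;
        # the best candidate is kept and the rest are mutated copies of it.
        population = [best] + [mutate(best) for _ in range(POP_SIZE - 1)]
        generation += 1

if __name__ == "__main__":
    gens, result = run()
    print(f"Reached target in {gens} generations: {result}")
```

Take away the programmer-supplied TARGET and the evaluate() function that measures closeness to it, and the "evolution" goes nowhere.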
In the field, we often encounter situations and anomalies in which a number of variables appear to be involved that are difficult to quantify individually. A frequent solution that enables us to make useful estimates is to estimate the two extremes, normalize that range, and divide it into discrete delta-x components. If we normalize the entire set of delta-x's to 1 and then estimate the smaller subset of delta-x's that would be required for the effect or anomaly, we are in a position to estimate delta-x(effect)/1, which represents Wf/Wt.
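A small sketch of that normalization step (the function name and the numbers are hypothetical, for illustration only, and it assumes the variable can be treated as roughly uniform across its normalized range):

```
import math

def estimate_fi(total_range, effect_range):
    """
    Hypothetical helper illustrating the normalization step described above.
    total_range  -- estimated span between the two extremes of the variable
    effect_range -- estimated delta-x subset consistent with the effect/anomaly
    Returns (Wf/Wt, FI in bits), assuming the variable is roughly uniform
    across its normalized range.
    """
    wf_over_wt = effect_range / total_range   # i.e., delta-x(effect)/1 after normalizing
    fi_bits = -math.log2(wf_over_wt)
    return wf_over_wt, fi_bits

# Illustrative numbers only: if the effect requires 1 unit out of a
# normalized range of 1,000 units, then Wf/Wt = 0.001 and FI is about 10 bits.
ratio, fi = estimate_fi(total_range=1000.0, effect_range=1.0)
print(f"Wf/Wt = {ratio:.4g}, FI = {fi:.1f} bits")
```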
In general, for any anomaly or effect, we must first determine the relevant variables before we can begin to estimate Wf/Wt for each of those variables. A radio signal from deep space will have different relevant variables than a crime scene, an archaeological dig, or Mount Rushmore.
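To sketch how the per-variable estimates might then be combined (and here I am assuming, for illustration, that the variables can be treated as roughly independent): if variable A gives Wf/Wt = 0.01 and variable B gives Wf/Wt = 0.001, the joint ratio is 0.01 x 0.001 = 10^-5, so FI = -log2 (10^-5) ≈ 16.6 bits; equivalently, the bits for the two variables simply add (≈ 6.6 + 10.0). Where the variables are not independent, the combined estimate would have to be adjusted accordingly.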
Re. Devil’s Causeway: Nice example! I would add another component to an ID investigation … eliminate those effects that are determined by physics. This would eliminate crystal structures determined by molecular structure and physics, including the columnar basalt at the Giant’s Causeway in the UK and Devils Tower in the USA. Those are highly constrained by the physics of their molecular structure, coupled with the physics of thermal stresses during cooling, such that Wf = Wt and FI = 0.
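Worked out, that is simply FI = -log2 (Wf/Wt) = -log2 (1) = 0 bits: when physics fully constrains the outcome, no functional information is required to explain it.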
Re. relative smoothness: you are right that there are many examples of smooth stones, but the key word is ‘relative’ (in comparison to the general area of the effect or anomaly). I have some beautiful, highly smoothed stones I picked up off the beach in Newfoundland … but relative to the surrounding area they would represent no deviation at all. Another way to better deal with this would be a more sophisticated variable of “relative relief smoothness”. Normally, natural processes that smooth stone also reduce the relief on that stone (i.e., they would wear down the nose and smooth out the eyes on Mount Rushmore). So what we would look for is sharp, well-defined, large-scale relief coupled with very fine smoothness, relative to the other rock features in that area. We would then estimate a normalized range and the delta-x subset required for the effect, which would give us an estimate of Wf/Wt for that variable.
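With purely hypothetical numbers for illustration: if, after normalizing the range of relief-versus-smoothness combinations seen in the surrounding rock to 1, we estimated that only a delta-x subset of about 0.0001 of that range shows Rushmore-like sharp relief combined with fine smoothness, then Wf/Wt ≈ 10^-4 and FI = -log2 (10^-4) ≈ 13.3 bits for that one variable.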
Over the past few decades, I have applied FI = -log2 (Wf/Wt) to a large variety of problems, and it seems to give meaningful results in detecting ID. These include detecting Soviet subs within the background noise of the ocean (during a student summer job for National Defence back in the days of the USSR), applications to crystal lattices, random polymers, various forensic cases, “watermarked” artificial proteins, detecting lottery fraud, etc.