What I look for in a good scientist is the ability to solve problems and the interest to try. “I’m not sure I would even know where to start” is a depressing admission of a failure to even try. If I may spell out a procedure for analyzing anomalies, it would be as follows:
Step One: Observe an effect that appears to be an interesting anomaly.
Step Two: Suggest a number of variables we could use to put together a method that objectively quantifies the anomaly. By “objective variables” I mean we rule out criteria like “we also know exactly what caused it” and “it looks like a human face”. We need a method and variables that could be used by aliens visiting our planet who know nothing of the history of Rushmore, nor what a human face looks like. That is the way science works … we want to rule out all subjectivity and arrive at a universal, objective method that can be applied across the full range of applications.
Step Three: Quantify the range of each of the suggested variables and provide an estimate as to where in that range the anomaly falls for each variable.
Step Four: You may wish to evaluate your list of variables. If some are so lacking in resolution that they produce as many false positives and false negatives as correct classifications, then they really are not helpful in distinguishing an anomaly from the general background, and you may wish to eliminate them. This is where sharp relief vs. smoothness comes into play … there is a range that can be normalized. The examples you suggest fall into the lower end of the range, but an anomaly will fall into the upper end.
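As a sketch of how Steps Three and Four might be mechanized, here is a minimal min-max normalization of candidate variables. The variable names, ranges, and measured values below are hypothetical illustrations of mine, not anything agreed upon in this thread:

```python
def normalize(value, lo, hi):
    """Map a raw measurement onto [0, 1] within its observed range."""
    return (value - lo) / (hi - lo)

# Hypothetical variables: (name, range low, range high, measured value).
# Normalizing puts every variable on a common [0, 1] scale, so weak
# discriminators (scores near the middle of the range) are easy to spot.
variables = [
    ("sharpness_of_relief",  0.0, 10.0, 8.5),
    ("surface_smoothness",   0.0,  1.0, 0.9),
    ("geometric_regularity", 0.0,  5.0, 1.2),
]

for name, lo, hi, value in variables:
    print(f"{name}: {normalize(value, lo, hi):.2f}")
```

A variable whose normalized score for genuine anomalies clusters in the upper end of the range, while background examples cluster in the lower end, is the kind worth keeping.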
Step Five: Once we have agreed upon the relevant variables and where each variable falls within its own range, we are in a position to estimate the level of functional information required to produce the anomaly, using the equation I provided earlier, which is essentially the equation that Hazen et al. published. If one wants to work with a non-uniform distribution of probabilities, then I would use the equations I published in my paper.
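For the uniform-probability case, the Hazen et al. measure is I(E_x) = −log₂ F(E_x), where F(E_x) is the fraction of all configurations that achieve the function at degree x or better. A minimal sketch (the example counts are invented for illustration; the non-uniform case from my paper is not shown):

```python
import math

def functional_information(n_functional, n_total):
    """Hazen et al.: I(E_x) = -log2 F(E_x), with F(E_x) the fraction
    of all possible configurations achieving the function at degree x
    or better (uniform probability over configurations assumed)."""
    fraction = n_functional / n_total
    return -math.log2(fraction)

# Example: 1 qualifying configuration out of 2**20 possibilities
# yields 20 bits of functional information.
print(functional_information(1, 2**20))  # 20.0
```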
Step Six: Once the value of functional information has been estimated, a statistical decision must be made as to whether that value is significant or not. If it is not, the anomaly fails to qualify as something requiring intelligent design. If it is a statistically significant deviation from the null hypothesis (no intelligence required; natural processes can demonstrably produce the same anomaly), then the anomaly tests positive for intelligent design.
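One way to operationalize Step Six is a Monte Carlo test: simulate the null process many times, record the functional information each blind run produces, and estimate how often it matches or exceeds the observed value. This is my own sketch of such a test; the null model below (20 independent coin-flip features) is purely hypothetical:

```python
import random

def p_value_by_simulation(observed_bits, simulate_bits, trials=10_000, seed=0):
    """Estimate the probability, under the null hypothesis (no
    intelligence required), that the simulated natural process
    produces at least the observed functional information.
    simulate_bits(rng) must return the bits from one simulated run."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if simulate_bits(rng) >= observed_bits)
    return hits / trials

# Hypothetical null model: each of 20 binary features matches by chance,
# so one run's "bits" is a Binomial(20, 0.5) count of matching features.
null_model = lambda rng: sum(rng.random() < 0.5 for _ in range(20))

p = p_value_by_simulation(observed_bits=18, simulate_bits=null_model)
print(p)
```

If the estimated p falls below a pre-chosen alpha (say 0.05), the deviation from the null is statistically significant; otherwise the anomaly fails the test.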
I’m going to end my participation in this discussion at this point, as I don’t see any serious attempt to engage Josh’s original problem, or even to demonstrate an aptitude for, and interest in, problem-solving. It’s too bad, as Josh’s problem is an interesting one, and science can address the challenge of identifying and quantifying anomalies. In fact, more than 20 years ago, my first scientific application of functional information (taking off from Leon Brillouin’s work in the early 1950s) was to use it to quantify anomalies. It’s been an interesting discussion.