Why would “Paul Nelson” not be covered, as it were, by some combination of (4), (5), and/or (6)? As far as I can tell, @pnelson wishes to rule this out arbitrarily and for no good reason.
It is worth noting also that AI is an interesting test case. ID holds, it seems, that AI could never exhibit genuine intelligence.
That may not be a live worry now, but within the next 10 years we may well be uncertain whether @pnelson actually wrote that post himself, or whether it was written by an AI trained on his past posts and given access to other information.
That’s an example of something not covered by the Explanatory Filter (EF), which would have concluded “designed.” Now the ID proponent could still reply, “but the AI is designed,” and that gets to the problem of establishing a negative control. It seems self-contradictory to argue, opportunistically, that AI both is and is not intelligent, depending on which answer is convenient.
Waiting for my URL from David Klinghoffer…but cannot resist replying to this.
On 11 January 1996, Phil Johnson debated Niles Eldredge at Calvin College about “Evolution: Hope or Hype?” After the debate, a small group of us, including Eldredge and Johnson, went out for beers at a local Chi-Chi’s. I had met Eldredge years earlier (1983), when he came to the University of Pittsburgh for an honors college seminar on punk eek, and afterwards we talked over dinner. Anyway, that night in 1996, we were sitting opposite each other, continuing the debate discussion in a pleasantly alcohol-moderated mode.
Eldredge is an ex-Northern Baptist, who described realizing he no longer believed “any of all that stuff” when, as a young teen sitting in the balcony of his childhood Baptist church during a service, he asked his brother, “Hey, do you believe this?” His brother said no, and Eldredge realized he didn’t believe it either. Anyway, we’re chatting over nachos, and I ask him – searching for some reasonable opening in his apparently impregnable naturalistic self-understanding – what it would be sensible for me to infer, if (as an intergalactic explorer, not of the species H. sapiens) I found his many books in the Library of Congress. Eldredge had long since returned to the elements out of which he was constructed, but his books lived on.
I will never forget his response. A somewhat uncomfortable expression flickered across his face. What follows is as close as I can get to verbatim:
“Well,” he said, “it would be OK to talk about the forces that influenced me…this experience, that experience…the natural causes that acted on me.”
In other words, rather than make a humdrum, entirely everyday design inference, where the intelligent agent “Niles Eldredge” was causally responsible for the origin of his books, he began to dissolve himself away into a Humean bundle. Nobody at home. Just a temporary bundle of experiences, stored for a few decades in a pile of meat, by happenstance out of equilibrium with its environment until the inevitable occurs and the meat achieves room temperature.
In my life, the most telling difference between philosophical naturalists such as Eldredge, or many of those of the same frame of mind who post here, versus ID folks, is the former’s unhappiness with the reality of their own agency.
At the end of the day: there is nobody at home. (4), (5) and (6) suffice, and the agent dissolves away into physics.
Again, that’s not biology. At some point the ID community has to move from faces on mountains and books to actual biology.
If we are digging through the ground and find a potshard and an earthworm, which do we take back to the museum as an artifact of intelligent design? If aliens landed on Earth, what would make them conclude that life on Earth was designed?
If I take a single book and place it in a warm, dark place overnight and find millions of books the next day, what do I conclude? If I take a single bacterium and place it in a warm, dark place overnight and find millions more bacteria the next day, what do I conclude?
If I compare the DNA sequence of chimp and human homologs, how do I determine whether the differences between the two genomes were due to design or natural causes? How can the explanatory filter help me answer those questions?
And how would an application of the EF in assessing the origin of the differences between two DNA sequences avoid the Texas Sharpshooter fallacy?
Because there is no reference to a specification in the EF, it is hard to see it as a sharpshooter fallacy. It fails for other reasons.
There is here: http://www.arn.org/docs/dembski/wd_explfilter.htm
Bill Dembski: Invariably, what is needed to eliminate chance is that the event in question conform to a pattern. Not just any pattern will do, however. Some patterns can legitimately be employed to eliminate chance whereas others cannot.
A bit of terminology will prove helpful here. The “good” patterns will be called specifications. Specifications are the non-ad hoc patterns that can legitimately be used to eliminate chance and warrant a design inference. In contrast, the “bad” patterns may be called fabrications. Fabrications are the ad hoc patterns that cannot legitimately be used to eliminate chance.
I can testify that the probability of the differences between chimp and human sequences arising naturally is so low that it must be represented by the negative log of the actual probability in order to be intelligible. And that’s just for fairly short sequences. One hesitates to suppose what the negative log likelihood would be for whole genome comparisons. Clearly the EF can rule out a natural explanation.
QED.
Of course, in pointing out these differences between genomes (or individual genes), you’re guilty of the very ad-hocness Dembski himself points out. What he calls fabrications. Those are textbook examples of the Texas Sharpshooter fallacy. By pointing out the differences after the fact, you’ve made an ad hoc specification of a pattern, and hence committed the fallacy.
Just to repeat what I and others have been saying at Panda’s Thumb, at The Skeptical Zone, and here: the issue is not how to define Design. Dembski does not use any such definition. Instead, the basic strategy of his CSI (complex specified information) and SC (specified complexity) and Design Inference and Explanatory Filter arguments is to see whether we can show that the probability of an adaptive feature this good or better arising under natural evolutionary processes is so low that it would occur less than once in the whole history of the universe. For the limit, Dembski uses an extremely conservative number based on Seth Lloyd’s calculations.
The problem with Dembski’s EF argument is how one calculates the probability under the null hypothesis, which includes all evolutionary processes. If one can calculate it (somehow – he doesn’t tell us how) and it is too low to be plausible, then his formula for SC says that SC is present. Then that establishes that the probability is too low to be plausible. In short, before you get to the SC step, you already have your answer. So SC is a useless add-on to the argument.
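For concreteness, one published version of the criterion (Dembski’s 2005 specification paper) is χ = −log₂[10¹²⁰ · φ_S(T) · P(T|H)], with design inferred when χ > 1; the 10¹²⁰ comes from Seth Lloyd’s estimate of the universe’s total computational capacity. A minimal sketch (variable names mine, numbers purely illustrative) shows the circularity: “χ > 1” is algebraically nothing more than “P(T|H) is below a fixed threshold” — the very probability you had to compute before the SC step.

```python
import math

# Dembski (2005): chi = -log2( 10^120 * phi_S(T) * P(T|H) ).
# "Specified complexity present" iff chi > 1.
LLOYD_BOUND = 1e120  # Seth Lloyd's estimate of bit operations in cosmic history

def specified_complexity(p_event, phi_spec):
    """chi for an event with probability p_event under the chance
    hypothesis H and descriptive complexity phi_spec."""
    return -math.log2(LLOYD_BOUND * phi_spec * p_event)

# Algebraically, chi > 1 is equivalent to:
#     p_event < 1 / (2 * LLOYD_BOUND * phi_spec)
# i.e. you already had to know P(T|H) was tiny before chi told you anything.
phi = 100.0        # illustrative descriptive complexity
p = 1e-150         # illustrative probability under the chance hypothesis
chi = specified_complexity(p, phi)
equivalent_threshold = 1.0 / (2 * LLOYD_BOUND * phi)
assert (chi > 1) == (p < equivalent_threshold)
```

The assertion makes the point of the post: the SC verdict contains no information beyond the probability estimate fed into it.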
Alternatively, one can put a uniform probability on all genotypes in the relevant stretch of DNA. That is what Szostak and Hazen’s Functional Information argument does. However that does not establish that normal evolutionary processes cannot bring about that large a value of FI (and S&H were not even attempting to address that). Advocates of ID often declare that FI is basically the same as Specified Information. But it isn’t, if you define the SI calculation as using the probability of having a sequence that good or better under ordinary evolutionary processes.
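Szostak and Hazen define functional information as FI = −log₂ F(E), where F(E) is the fraction of all sequences whose activity meets or exceeds threshold E — a uniform probability over sequence space, as described above. Here is a toy Monte Carlo sketch; the “activity” function (counting matches to an arbitrary motif) is purely illustrative, standing in for a real experimental assay:

```python
import math
import random

random.seed(1)
BASES = "ACGT"
TARGET = "ACGTACGT"  # hypothetical functional motif (illustrative only)

def activity(seq):
    # Toy stand-in for a measured function: matches to the target motif.
    return sum(a == b for a, b in zip(seq, TARGET))

def functional_information(threshold, n_samples=200_000):
    """Estimate FI = -log2 F(E), where F(E) is the fraction of random
    sequences whose activity meets or exceeds the threshold E."""
    length = len(TARGET)
    hits = sum(
        activity("".join(random.choice(BASES) for _ in range(length))) >= threshold
        for _ in range(n_samples)
    )
    return -math.log2(hits / n_samples) if hits else float("inf")

# FI rises with the functional threshold: a more demanding function
# occupies a smaller fraction of sequence space, hence more bits.
fi_low = functional_information(2)
fi_high = functional_information(5)
print(fi_low, fi_high)
```

Note what the calculation does and does not say: it measures how rare the function is under a uniform draw, and says nothing about whether ordinary evolutionary processes can reach that level of FI — which is exactly the distinction drawn above.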
Now you can go back to the irrelevant argument of how you define Design. PS: Panda’s Thumb is down right now for some obscure reason which we’re looking into, otherwise I would provide tediously many links.
Nothing’s amiss. We cannot infer that “Paul Nelson” wrote this text without ruling out the natural cause that someone unknown is using your account.
The problem with Dembski’s EF argument is how one calculates the probability under the null hypothesis, which includes all evolutionary processes.
Let’s be clear here @Joe_Felsenstein, there are many problems with EF.
Notice that the EF considers Natural Process XOR Chance, but never Natural Processes AND Chance. A good model of evolutionary processes, however, is natural processes AND chance, which by definition the EF does not consider. So yes, the issue is the null model, but the reason a better null model isn’t being considered is this equivocation between XOR and AND.
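A minimal Wright–Fisher sketch (parameters purely illustrative) makes the point concrete: deterministic selection (a “law”) and binomial sampling (chance) act jointly in every generation, a combined process the filter’s either/or nodes have no place for:

```python
import random

random.seed(0)

def wright_fisher(pop_size=500, p0=0.05, s=0.05, generations=200):
    """Allele frequency under selection (lawlike) AND drift (chance)
    acting together in each generation."""
    p = p0
    for _ in range(generations):
        # Deterministic part: selection shifts the expected frequency.
        p_sel = p * (1 + s) / (1 + s * p)
        # Stochastic part: binomial sampling of the next generation.
        p = sum(random.random() < p_sel for _ in range(pop_size)) / pop_size
    return p

print(wright_fisher())
```

Neither step alone describes the process: strip out the sampling and you have pure “necessity”; strip out selection and you have pure drift. The realistic model is irreducibly both at once.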
I don’t get it. By “natural processes” I of course include genetic drift and all sorts of normal accidents. (One thing we should not get into is whether natural processes are deterministic or not – I am thinking of processes that are adequately modeled as random such as random Mendelian segregation). You want to separate “chance” from natural processes? Why on earth would you do that?
I don’t get it. By “natural processes” I of course include genetic drift and all sorts of normal accidents.
But that isn’t what Dembski is including there. Chance is not a natural law in his formulation. Of note, he also means ontological chance, which is different from how we use “random” in biology. That is also the side trail you rightly intend to avoid.
You want to separate “chance” from natural processes? Why on earth would you do that?
Why one would do this, I can’t say. We can’t speak to motivations very easily.
However, it does have the effect of excluding any realistic model of random mutation (chance) plus natural selection (natural process).
I suppose that in addition to normal randomish processes in nature, one might want a category of things that happen by “chance” so that Theistic Evolution could work by tweaking the “chance” events. But the Explanatory Filter did not attribute “chance” to leprechauns, but only concluded in favor of leprechauns once “chance” as well as “necessity” were eliminated. So that can’t be it.
Waiting for my URL from David Klinghoffer…but cannot resist replying to this.
Thanks for the reply, @pnelson. I don’t think you address my point completely, though. The original question:
Thought experiment.
Question: what is the best explanation for the cause of this text? – i.e., the sentences you are reading right now.
Hypothesis: there exists an unknown natural cause, somewhere in (4), (5) and (6), other than the agent “Paul Nelson,” which produces English text about the epistemology of design inferences.
By my way of looking at things, the agent “Paul Nelson” is short hand for “sentient being whose origins involve some combination of (4), (5), and/or (6)”. When you deliberately exclude “Paul Nelson” as you do, @pnelson, you render your entire argument quite meaningless.
Put another way: why exclude just “Paul Nelson”? Why not also exclude, oh, say, hydrophobic forces, covalent catalysis, neutral evolution, the strong nuclear force, or any other aspect of nature? Why limit the exclusion to just “Paul Nelson”? What is the logic behind your arbitrary decision?
By my way of looking at things, the agent “Paul Nelson” is short hand for “sentient being whose origins involve some combination of (4), (5), and/or (6)”.
Mozart did not write the Concerto for Flute, Harp, and Orchestra in C major, K. 299/297c.
Some combination of (4), (5), and/or (6) did. The apparent agent “Mozart” is shorthand for physics in various guises. We simply await the details, but “Mozart” is a placeholder for a more adequate, non-intentional theory.
That way lies insanity. My apologies if I do not follow you there.
7 posts were split to a new topic: Nelson: Parabolas and Methodological Naturalism (Again)
A post was merged into an existing topic: Nelson: Parabolas and Methodological Naturalism (Again)