# The Fine-Tuning Argument and ID arguments

I don’t agree with this. There’s a distinction between reducibility and consistency. If the fine structure constant were 10 times larger than it is now, then biology would probably be impossible. But that doesn’t mean that the laws describing biology are mathematically reducible to physics.

I’ve already done this, but I can try again.

1. The constants are contingent. (This is step 1 in the Explanatory Filter.)

2. The values of the constants are improbable. (This is step 2 of the Explanatory Filter.)

3. The values of the constants form an independent pattern, which is the specific set of values that allow for complex life. (This is step 3 of the Explanatory Filter.)

Explanatory Filter

Conclusion: The values of the constants are an example of specified complexity and a design inference is warranted.
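The three steps above can be read as a decision procedure. Here is a minimal Python sketch of that reading (the predicate functions, their names, and the string verdicts are my hypothetical placeholders for illustration, not anything from Dembski’s formal treatment):

```python
def explanatory_filter(event, is_contingent, is_improbable, matches_independent_pattern):
    """Dembski-style Explanatory Filter as a three-step elimination.

    Each predicate is a caller-supplied judgment about `event`; the
    filter itself only encodes the order of elimination."""
    if not is_contingent(event):
        return "necessity (natural law)"    # filtered out at step 1
    if not is_improbable(event):
        return "chance"                     # filtered out at step 2
    if matches_independent_pattern(event):  # step 3: specification
        return "design"
    return "chance"                         # improbable but unspecified

# The FTA as framed above grants all three premises for the constants:
verdict = explanatory_filter(
    "values of the fundamental constants",
    is_contingent=lambda e: True,                # premise 1
    is_improbable=lambda e: True,                # premise 2
    matches_independent_pattern=lambda e: True,  # premise 3: the life-permitting set
)
print(verdict)  # design
```

The sketch makes the structure of the dispute visible: all of the argumentative weight sits in the three premises, while the filter itself is just an ordering of eliminations.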

It is almost impossible to imagine a clearer application of the specified complexity argument, and it doesn’t matter one whit that Dembski’s specified complexity argument appeared after the FTA did. Dembski was arguing that specified complexity is the logical form all design inferences take, and he never claimed that no one ever made a design inference prior to his argument. No IDist claims that we are the only ones who make design inferences. That’s absurd.

Now, since you are going to deny all of this for no apparent reason, I’m going to assume your denial and continue on my original point. No further discussion will be helpful along this line if you continue to deny the obvious.

Since M&C are claiming that biology is deterministic in the absolute sense, any design inference about biology would have to be restricted to the initial conditions where the deterministic backwards causal chain ends. Why? Because every deterministic event in the causal chain would be filtered out of the Explanatory Filter at step 1, and the cause would be necessity, otherwise known as natural law.

So what about the initial conditions? M&C’s scenario says that God must have specified the initial conditions in such a way that a particular outcome was reached at some point in the timeline. What particular outcome? The formation of complex life. Well, isn’t this redundant with the FTA? In fact, no. The constants referred to in the FTA only allow for complex life to exist. They do not determine it to exist. In order to determine it to exist, you need additional information to be present in the initial conditions of the universe. In M&C’s scenario, this would take the form of God specifying the position and momentum of, presumably, all the particles in the universe. A specified complexity argument clearly follows:

1. The initial position and momentum of all particles is contingent. (Step 1)

2. The initial position and momentum of all particles is improbable. (Step 2)

3. The initial position and momentum of all particles form an independent pattern, namely the specific pattern that will inevitably give rise to complex life. (Step 3)

And there you have a design inference applied to M&C’s TE vision of God’s providence. At that point, the ball is in their court to either accept this or deny it.

Dembski himself backed away from the Explanatory Filter (EF). Why do you still use it?


We have got to discuss this. Why can’t I say, “the values of the constants form an independent pattern, which is the specific set of values that allow for the exact configuration and chemistry of the gas clouds.”?

For reference, here is where Dembski states that he prefers to use CSI, not EF anymore.

https://uncommondescent.com/intelligent-design/some-thanks-for-professor-olofsson/#comment-299021

William Dembski

December 3, 2008 at 9:24 pm

I wish I had time to respond adequately to this thread, but I’ve got a book to deliver to my publisher January 1 — so I don’t. Briefly:

(1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.

(2) The challenge for determining whether a biological structure exhibits CSI is to find one that’s simple enough on which the probability calculation can be convincingly performed but complex enough so that it does indeed exhibit CSI. The example in NFL ch. 5 doesn’t fit the bill. The example from Doug Axe in ch. 7 of THE DESIGN OF LIFE (www.thedesignoflife.net) is much stronger.

(3) As for the applicability of CSI to biology, see the chapter on “assertibility” in my book THE DESIGN REVOLUTION.

(4) For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence” at http://www.designinference.com.

(5) There’s a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).


I agree, this thread needs an appendix to define all the acronyms. It is very hard to follow for the nonexpert.


@Michelle, @John_Harshman, here are the explanations for the abbreviations used in this thread:

MTE = Mere Theistic Evolution, a new proposal for theistic evolution stripped down to the bare essentials without all of the theological baggage that has historically come with it.
M&C = Murray and Churchill, referring to their paper “Mere Theistic Evolution”
WLC = William Lane Craig

ID = Intelligent Design
IDM = Intelligent Design Movement
TE/EC = Theistic Evolution/Evolutionary Creation
YEC = Young Earth Creationism
MN = Methodological Naturalism

CSI = Complex Specified Information: a measure of “specified complexity” advanced by Dembski and other ID proponents to argue for Design
EF = Explanatory Filter, an earlier version of CSI advanced by Dembski in the 1990s (@BenKissling references this diagram to describe it: http://creationwiki.org/pool/images/thumb/a/ac/EF.png/300px-EF.png)

UCD = Universal Common Descent
LUCA = Last Universal Common Ancestor
NDE = Neo Darwinian Evolution

FTA = Fine Tuning Argument: argues that the apparent fine-tuning of the fundamental constants points to the existence of God
ToE = Theory of Everything: hypothetical physical theory that will successfully explain the 4 fundamental forces in terms of one single interaction
GTB = Generalized Theory of Biology (Daniel’s custom term for a hypothetical “Theory of Everything” for biology)


I think this is a really interesting fact given @BenKissling’s claim that FTA is a special case of EF and/or CSI. He has it all backwards.

Rather, FTA was an idea that had been out there for a long time. It is hardly even worth producing references, but this is certainly true. Several philosophers see value in it, though there is certainly debate. EF and CSI, however, are new and disputed. Dembski himself disputes EF. True, he did not actually offer a clear public retraction, which leads @BenKissling (who is quite informed and engaged) into some messes like we are seeing here.

A better way to see it is that Dembski attempted to generalize FTA (which was accepted as legitimate) into a larger more general framework that could capture, for example, ID arguments in biology. So he is using FTA as a conceptual foundation, and trying to bridge the logic of FTA into another area. He did this (as @BenKissling models) by reframing FTA as EF or CSI.

If he had been successful, that would have been a real advance. If EF or CSI had reached the same broad legitimacy as FTA, or at least if the correspondence between EF or CSI and FTA were broadly affirmed, that would be important. In that case, FTA might reasonably be considered a special case of CSI or EF.

The problem is that this reframing of FTA as EF or CSI substantially weakens its logic. The EF- or CSI-formulated FTA is far weaker than the best FTA arguments in the literature. We can (and in my view we should) reject CSI and EF while still taking FTA seriously.

This is empirically true: CSI and EF never achieved the same broad legitimacy as FTA. In particular, EF was abandoned by Dembski himself. CSI has major problems that are beside the point here. Most people agree, and they should agree, that the FTA can be true even if EF and CSI are nonsense. That’s good news, because EF and CSI are considered nonsense by a lot of us, but I am sure even critics of FTA (@dga471) would agree there is some legitimacy to it and that sensible people take the other point of view.

So, with that assessment, no I do not think EF and CSI are valid generalizations of FTA. No, FTA is not a special case of EF or CSI.


None of this shows why the FTA and specified complexity are different. I argued they are the same logically. You assert they are different but present no argument other than that one is considered legitimate and the other isn’t. That’s an argument from authority which I don’t accept. Explain why they are different.

And Dembski never attempted to connect the FTA and specified complexity as far as I know. I am doing that. I broached the subject once with Bruce Gordon at a seminar on the FTA he did at Houston Baptist several years ago. I asked him why, if the reasoning is essentially the same, people like Plantinga accepted FTA but not the other. He looked me right in the eye and said of Plantinga, “I think he has sociological reasons.”

That’s essentially what you just said, Josh. I don’t accept that. Nobody should.

Think what you are saying in relation to what I wrote.

I agree that you presented an FTA argument that is logically equivalent to CSI. However, I also told you that this is a weaker version of the historical FTA argument.

You yourself agree that FTA is different from CSI, because the FTA is only (in your view) a particular case of CSI. So it is not the same thing, because even in your view CSI is a more general idea than FTA.

I agree there is a CSI-FTA combined argument, but it is a weak argument because CSI is incoherent. I prefer valid formulations of FTA arguments that take a different logical path.

You just switched from EF to CSI. Why? Do you now concede that EF is a dead argument? That would be progress.

That is an ad hominem that should be beneath Dembski.

You should try asking Plantinga why and you will get a different answer. In fact, why not just read Plantinga’s book, “Where the Conflict Really Lies”? Plantinga’s reasons for rejecting ID are pretty similar to mine.

Now, that is an ad hominem on me.

If you care to understand ID’s strengths and weaknesses, you have to grow out of psychologizing its critics.

No, I do not reject CSI or EF for sociological reasons. I can’t deny what I see. They have major logical errors. Even Dembski rejected EF. Perhaps now you do too. Did you learn something new? I hope so…


Useful. A good rule of thumb is to write it out the first time it’s used in a post, or at least the first time it’s been used in a while.


What is that different logical path? Explain it to me.

I switched because it doesn’t matter. I only used the EF because it was convenient. If you don’t like the EF, then ignore it. The argument is exactly the same for regular old specified complexity.

As a tangent, no, I don’t think the EF is dead. I disagree with Dembski’s statement that the three causes are not mutually exclusive. He’s probably thinking in terms of complex phenomena like the existence of Homo sapiens. If an “event” is defined as a physicist would define it, that is, reduced to a single event as small as you can get, then yes, the causes are mutually exclusive. It is true that Homo sapiens have probably been affected by all three types of causation more broadly (if chance exists as a cause). But for individual events, such as this particular molecule bumping into that one, I’d still argue the three causes are mutually exclusive. And that is the context of M&C’s proposal, because they are interested in the entire deterministic causal chain as a string of events with a probability of either 1 or 0, which are most accurately modeled as a probability of 0.5. What else can they be talking about but the smallest possible event? I think the EF would hold in that case.

[Moderator edit to fix formatting only]

Well, first of all, that was Bruce Gordon, not Dembski. And second of all I don’t know how else to explain why people react so strongly against what seems to me a fairly obvious and straightforward point.

I have read it. Gordon and I were discussing that exact book. He had taught a class on it at HBU. We also discussed why one of Plantinga’s objections to ID was that it couldn’t be quantified, and yet the only numbers in the entire book were in the ID section. Once again, it seems different standards are always applied to ID, and there is no rational explanation for this.

Logical errors which you refuse to describe.

Refuse? He has done so more times than I can count on this very forum. And I see no refusal on his part on this thread. He just hasn’t done it yet.

Also, still waiting for a response on independent patterns. I have read multiple papers by Dembski and tons of posts on Evolution News, and none have been helpful.


Well, it does matter.

The EF’s logic takes these questions in order.

1. Is it known natural processes alone? If yes, then conclude not designed.
2. Is it chance alone? If yes, then conclude not designed.
3. Otherwise, conclude design.

Here is one major error, though: design, known natural processes, and chance are not exhaustive. Here are a few examples of explanations that are not considered:

1. Natural processes (known and unknown)
2. Natural processes (known and unknown) + Chance
3. Known natural processes + Chance

Of course, if we are talking about God’s design, natural processes alone can also be design anyway! So even finding that something is natural processes alone should not lead us to conclude it is not designed. Mount Everest, after all, is also designed by God.

So, for all these reasons and more, the EF is a totally invalid inference. So the EF-formulated version of the FTA is not valid. That matters.
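The gap can be made concrete in code. In this minimal sketch (mine, hypothetical, not from any ID text), the EF reaches “design” purely by elimination, so any cause outside the two tested categories, such as an unknown natural process or a law-plus-chance mixture, gets mislabeled as design:

```python
def ef_verdict(explained_by_known_law, explained_by_chance):
    """The EF as pure elimination: whatever survives steps 1-2 is 'design'."""
    if explained_by_known_law:
        return "not designed (necessity)"
    if explained_by_chance:
        return "not designed (chance)"
    return "design"  # step 3 is a default, not a positive test for design

# An event caused by an unknown natural process (or by a mixture of law
# and chance) fails both tests, so elimination mislabels it as designed:
print(ef_verdict(explained_by_known_law=False, explained_by_chance=False))
# prints "design"
```

The false positive arises because the final branch is reached whenever the first two hypotheses fail, regardless of how many untested hypotheses remain.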

And I already said that if you don’t like the EF, the argument I’m making can be made in exactly the same way with a simple specified complexity argument, which is always the one I’ve preferred. I think Dembski tried to turn it into a mathematical proof and that was doomed to failure. He should have stuck with his original idea of making it a framework for a scientific paradigm. That is where it is useful.

1. The values of the fundamental constants are complex.
2. The values of the fundamental constants are specified (the range which allows for life to exist is the specification).

Therefore, the values of the fundamental constants are intelligently designed.

I’m pretty sure that the values of the fundamental constants are real, not complex. If they were complex, they would be of the form a + bi, where a and b are real numbers.

No, they’re real.

No, they’re observed after-the-fact.


“Complex” in this context means “improbable”. I will try to believe these were sincere comments and simply believe you haven’t read Dembski, like at all.

The YEC use of The Fine Tuning Argument is a score on its own net.

The FTA is the argument that the fundamental constants, such as the mixing parameters for quarks and neutrinos and the cosmological constant, are tuned so as to allow for the emergence of life. As expressed by astrophysicists Livio and Rees: “That is, relatively small changes in their values would have resulted in a universe in which there would be a blockage in one of the stages in emergent complexity that lead from a ‘big bang’ to atoms, stars, planets, biospheres, and eventually intelligent life.” One of the earlier examples offered was the triple-alpha process for the stellar nucleosynthesis of carbon. The balance of carbon and oxygen production in stars is sensitive to variation in the strong nuclear force, with one paper concluding that a deviation of even 0.4% would tilt production almost entirely to one or the other.

There also exist features of the universe which do not necessarily classify as fundamental constants, but which are also found in a narrow band leading to the universe as we know it. The temperature variation of the CMB is sufficient to allow matter to clump in a timely fashion. A still higher variation could work, but might result in denser, more chaotic and disruptive galaxies. The FTA examines the principles of nature to ascertain the latitude permitted to result in a universe of enough stability and age to permit the generations of stars required to produce rocky worlds with water and carbon, with a sun of sufficient radiance and lifespan stability to nurture an earth-like planet.

It was not apologists, but physicists who framed the FTA, and many of them were actually quite antagonistic to religion and sought resolutions outside of theism. From a YEC perspective, who cares if over the course of millions of years stars can synthesize carbon or produce metals? For them, Adam was not fashioned from stardust, but created de novo.

YEC is often antagonistic to the very attributes of the FTA which are featured as tuned to be favorable to life. For instance, in the FTA the aforementioned CMB variation, the rate of cosmic expansion, and the strength of gravity all factor into the formation of stars, but YEC denies that stars can naturally form at all. Similarly, the FTA examines the balance of nuclear forces and gravitation required to have stars such as our sun burn for billions of years. Ignoring this, for years YEC made a case that the sun was powered principally by gravitational collapse, supported by observations of a shrinking sun and missing neutrinos, until the whole idea was thoroughly discredited and they were forced to retreat, albeit with the protest that the sun was not proven to be older than 6,000 years. Then there is the speed of light, the “c” that permeates physics and is embedded in the fine structure constant, which YEC is forever engaging in fantastic flights of fancy to render non-constant. Finally, YEC advocates for accelerated radioactive decay, but isotope half-lives are not independent quantities; they are determined by quantum mechanics, nuclear geometry, and the fundamental forces which are at the heart of the FTA. In proposing that vastly accelerated radioactive decay occurred without catastrophic results, YEC is essentially making a case that fine tuning is exactly not essential for life.

So not only does YEC regard fine tuning as superfluous to existence, but is actively hostile to the uniformitarianism of the FTA, to use their favorite misnomer. The FTA is compatible with some variants of ID which largely accept conventional cosmology, although the design and mind detection blather constitutes extra baggage. In this vein, Robin Collins, professor of philosophy at Messiah College, is the most extensively published Christian scholar on the FTA.

In summary, regardless of one’s stance generally regarding the FTA, anthropic argument, theory of everything, or multiverse, it seems to me that the FTA has no legitimate place beneath the YEC canopy, and can therefore only serve a cynical, out-of-context, rhetorical role when appealed to by such creationist organizations. Which means, of course, it’s perfect for them.

Complexity may be improbable in some instances, but improbability does not necessarily involve complexity, as in the context of constants.


Joshua, would you deny the validity of explanatory filter-type inferences when they are employed by, say, coroners? And if not, could you specify the difference between the reasoning of the coroner and the reasoning employed when ID proponents use the EF? I’m not necessarily defending the EF, just trying to get clarity.

To be clear what I mean about a coroner’s reasoning, let me give an example:

A coroner is called in to provide the cause of death of an individual. There are three broad options:

1. Death by natural causes, (e.g., a heart attack, stroke, etc.)

2. Death by chance (e.g., the head struck an object after a fall, the person slipped and fell into the water and drowned).

3. Death by design. (Someone killed the person by deliberate action.)

Doesn’t the coroner have to rule out the first two before he can be certain about the third? For example, if the person died of a heart attack, and then fell against a hard object, say, a stone or machine, a mark on the head from the impact would not in itself be proof of foul play. So the coroner, if he is going to conclude that the mark on the head was produced by deliberate assault, has to show that the kind of mark found on the head is not consistent with marks that would be made by accidental falls etc. He would have to show that certain kinds of mark on the head appear only as a result of deliberate impact delivered by a voluntary agent.

If the coroner can rule out natural causes and accident as the cause of the mark in question, is he not within reason in declaring that a blow on the head was delivered by design?

And if he is within reason in drawing this conclusion, why is such a conclusion in principle not allowable in questions of origins?

Again, I’m not necessarily defending the EF, but merely trying to find out whether your objection to it is on general principles (in which case all coroner’s verdicts of design would be in doubt), or whether you accept the general idea of eliminating chance and natural causes, but find the application of the idea by ID proponents to be sloppy or shoddy.