You could do that, at which point I would ask you your view on the fine-tuning argument.
I think the FTA is interesting but premature. As a physicist working in fundamental physics, my scientific goal is always to look first for a deeper scientific explanation of fine-tuning. That said, Luke Barnes’ book on the FTA is on my reading list, so I haven’t closed off the possibility. In contrast, the design arguments I find more convincing are Thomistic, relying on the general intelligibility of nature rather than on specific behaviors in nature that are currently not intelligible (which seems to be what ID proponents prefer).
Secondly, what I find unconvincing about ID from a theological point of view is that it often draws too close of a parallel between the human and divine minds (e.g. if we can detect whether an object in space came out of natural processes or was constructed by aliens, why can’t one do the same with God?) without considering the traditional doctrine of analogy and the scholastic distinction between primary and secondary causation. In short, ID proponents tend to be wedded to a post-Enlightenment deistic metaphysics and an anthropomorphic, theistic personalist view of God which I think is theologically unorthodox.
What criteria would have to be fulfilled for you to consider the FTA?
A rigorous mathematical proof that a hypothetical deeper theory that could explain the 26 fundamental constants in terms of a smaller number of them would itself still need to be fine-tuned.
Note that this kind of general proof is not impossible in principle: Bell’s theorem (1964), for example, showed that quantum mechanics is incompatible with local hidden variable theories. Subsequent experiments showed that Bell’s inequality is violated by nature, from which we concluded that no local hidden variable theory can be true.
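The violation Bell’s theorem predicts can be illustrated numerically. For a spin singlet pair, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between detectors at angles a and b, and the CHSH combination of four such correlations reaches magnitude 2√2, exceeding the bound of 2 that any local hidden variable theory must obey. A minimal sketch (the angle choices are the standard ones that maximize the violation):

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for spin measurements on a
    singlet pair, with detectors at angles a and b (radians)."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden variable theory obeys |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))  # 2*sqrt(2) ~ 2.83, exceeding the classical bound of 2
```

The point of the analogy: this is a single general theorem ruling out an entire class of deeper theories, which is the level of rigor I’d want for the FTA.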
Anything other than that might still be a convincing argument, but it would be more philosophical and suggestive rather than scientific.
Okay, so let’s assume that criterion is fulfilled.
I maintain that the FTA is equivalent to the specified complexity argument for design. The constants could have been different; therefore they are contingent. Their specific set of values is of high complexity/low probability, fulfilling the complexity requirement. They fit an independent pattern, namely the values needed to sustain complex life, which is the specification. Therefore they are an example of specified complexity, and a design inference as to their origin is warranted.
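For what it’s worth, Dembski quantifies the “low probability” leg of this argument as information content, −log₂(p), and proposes a universal probability bound of roughly 10⁻¹⁵⁰ (about 500 bits) beyond which chance is supposedly ruled out. A toy sketch with a purely hypothetical probability, just to show the arithmetic:

```python
import math

def information_bits(p):
    """Improbability expressed as information content, -log2(p)."""
    return -math.log2(p)

# Hypothetical number for illustration only: a specified outcome with
# probability 10**-200 carries more information than Dembski's
# proposed ~500-bit universal probability bound.
p_specified = 1e-200
bits = information_bits(p_specified)
print(bits)  # ~664 bits, above the 500-bit bound
```

Whether the constants’ values actually have a well-defined probability at all is, of course, the contested step.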
But each possible universe is equally likely. A particular universe with nothing but gas clouds fits all those criteria as well. I could just say that particular arrangement of atoms and gas clouds is the specification. Saying life is the specification seems question-begging.
“If there was a ‘cosmic roll of the dice’ … then the objective chance that any given universe would result from that ‘cosmic roll of the dice’ was astronomically small. That our universe arose is no more in need of further explanation than that any other universe should have arisen … after all, every universe will have many features that mark it out as very special in the group of universes as a whole.”
I think for the FTA to even have a chance of succeeding, you’d have to combine it with another argument, like the Argument from Moral Agency. But then you’re arguing from a theistic viewpoint, not from ID. So for the FTA to have a chance of working, you have to start from theism.
Yes, each possible universe is equally likely. No, you cannot say the particular arrangement is the specification. That’s not what ID means by an independent pattern.
So, in other words, lack of a Bell’s Theorem equivalent is not the real reason you reject the FTA. You reject the basic argument.
Do you accept any possibility of scientific evidence for intelligent causation?
First, I don’t think I’m knowledgeable enough about the specified complexity argument in order to fully assess your claims.
Second, I think there’s a potential difference here, because the FTA is applied to physics, which is a more fundamental science than biology. A theorem powerful enough to give the FTA scientific legitimacy would apply to a wide range of “theory of everything” scenarios. Biology, by contrast, has nothing close to that level of rigor; in fact, there are philosophers who argue that biology doesn’t have true laws of behavior the way physics does, because it arises from contingent natural processes.
In a hypothetical world where a brilliant biologist has devised a “general theory of biology” (GTB) which is empirically verified to be very accurate, and someone has also rigorously argued that the constants in this GTB need fine-tuning, then maybe I can consider an FTA for biological situations.
If we had a theory of everything, biology would be reducible to physics except for information, which is equivalent to the contingency of the constants in FTA. All you did was add another criterion that must be fulfilled. Assuming it has been fulfilled, you say you would then consider FTA in biology. I’m unclear on what your answers would be to my previous questions in that case.
Do you agree/disagree that FTA is equivalent to a specified complexity argument?
You’re gonna have to break that one down for me. What’s an independent pattern according to ID? I just reread one of Dembski’s papers and it was very vague.
Yes I do. I’m not opposed to ID. Common ancestry is the hill I’m willing to die on so to speak.
Daniel, I have been wondering lately, do these constants incorporate CP violation or is that another FTA feature?
That’s not correct, and I don’t think even the most ardent adherents of reductionism would defend that. Even if we found a theory of everything that could explain fundamental constants and the behavior of fundamental particles when we smash them in particle accelerators, that doesn’t necessarily mean we have laws that can explain what exactly happens when these fundamental particles form complex biological organisms. For example, I would bet that a GTB, whatever it is and if it exists at all, would have little to do with calculating contributions from the four fundamental forces that physicists are usually concerned about.
Disagree, for the reasons given above. Physics and biology are different fields. And remember, I’m not even convinced FTA is right in physics. Additionally, it could be that after we find this hypothetical GTB, specified complexity as defended by ID proponents is not the right way to formulate the analogue of an FTA in biology. Remember that we’re talking about hypothetical theories and proofs, so this is all just speculation.
There is a severe acronym overload in this thread that makes many of the posts hard to read. For example, what is NDE? Presumably not “near-death experience”.
Probably “Neo-Darwinian Evolution”.
Yes, two of these constants characterize the amount of CP violation. (See here and here for a relatively readable explanation.) One of them is the CP-violating phase in the CKM matrix (which characterizes how much different types of quarks can “mix” with each other), while the other is the CP-violating phase in the PMNS matrix (the analogous matrix for neutrinos). Currently the values of these constants can only be measured empirically.
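To give a sense of scale, the amount of CP violation in the quark sector is often summarized by the Jarlskog invariant J, computed from the three CKM mixing angles and the CP-violating phase. The numerical inputs below are approximate PDG-style fit values, used here purely for illustration:

```python
import math

# Approximate CKM parameters in degrees (illustrative values, not
# authoritative): three mixing angles and the CP-violating phase.
theta12, theta23, theta13 = 13.04, 2.38, 0.201
delta_cp = 68.8

s12, c12 = math.sin(math.radians(theta12)), math.cos(math.radians(theta12))
s23, c23 = math.sin(math.radians(theta23)), math.cos(math.radians(theta23))
s13, c13 = math.sin(math.radians(theta13)), math.cos(math.radians(theta13))

# Jarlskog invariant: a phase-convention-independent measure of CP
# violation in quark mixing; J = 0 would mean no CP violation at all.
J = s12 * c12 * s23 * c23 * s13 * c13**2 * math.sin(math.radians(delta_cp))

print(J)  # roughly 3e-5: CP violation in the quark sector is small
```

Note that J vanishes if the phase δ is zero (or if any mixing angle is zero), which is why the phase counts as a separate constant to be measured rather than something derivable from the angles.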
I said nothing about biology or physics in the question. I’m asking you whether or not you think the FTA is a specific version of the more general specified complexity argument. FTA may depend on details in physics, but I argue it takes the form of the specified complexity argument for intelligent causation, which is general and can be applied in any field. There is no need to state any facts about biology to discuss specified complexity as an argument for intelligence.
Do you agree that the FTA is a particular case of a general specified complexity argument?
I think you’re trying to get a more specific answer than I can give you. As I said, I’m not particularly knowledgeable on the technical details of the FTA or the “general specified complexity argument”. I’m not even sure what you’re referring to. Can you show me where you’ve “proven” that they’re the same?
I do not agree.
That is a good thing, because Dembski’s complex specified information (CSI) analysis has some major problems. The FTA was around long before CSI (and the ID movement, for that matter), and has much more grounding. It is a very good thing, it would seem, that the FTA is not a particular case of CSI or the general specified complexity argument.
The FTA, as I have generally seen it in physics discussions, is the argument that the fundamental constants are tuned to be fully sufficient for the evolution of life — i.e., biology, as you say, is reducible to physics — which runs counter to the whole idea of irreducible complexity in nature.
From a YEC perspective, who cares if stars can synthesize carbon, or burn for billions of years to produce iron? You don’t need the FTA because life is created by fiat, as are the metals under your feet. Under YEC, we are not stardust.