Hi everyone. I’d like to pose a question that’s been bugging me for some time. I hope that some contributors with a math/physics background might be able to respond.
A fine-tuned universe can only be designed via a random process
I’d like to begin with Dr. Seth Lloyd’s paper, “The Universe as Quantum Computer”. On pages 14-15, he writes:
"To understand why the quantum computational model necessarily gives rise to complexity, consider the old story of monkeys typing on typewriters. The original version of this story was proposed by the French probabilist Emile Borel, at the beginning of the twentieth century (for a detailed ´ account of the history of typing monkeys see [1]). Borel imagined a million typing monkeys (singes dactylographes) and pointed out that over the course of single year, the monkeys had a finite chance of producing all the texts in all the libraries in the world. He then immediately noted that with very high probability, they would would produce nothing but gibberish.
"Consider, by contrast, the same monkeys typing into computers. Rather than regarding the monkeys random scripts as mere texts, the computers interpret them as programs, sets of instructions to perform logical operations. At first it might seem that the computers would also produce mere gibberish – ‘garbage in, garbage out,’ as the programmer’s maxim goes. While it is true that many of the programs might result in garbage or error messages, it can be shown mathematically that the monkeys have a relatively high chance of producing complex, ordered structures. The reason is that many complex, ordered structures can be produced from short computer programs, albeit after lengthy calculations. Some short program will instruct the computer to calculate the digits of π, for example, while another will cause it to produce intricate fractals. Another will instruct the computer to evaluate the consequences of the standard model of elementary particles, interacting with gravity, starting from the big bang. A particularly brief program instructs the computer to prove all possible theorems. Moreover, the shortest programs to produce these complex structures are necessarily random. If they were not, then there would be an even shorter program that could produce the same structure. So the monkeys, by generating random programs, are producing exactly the right conditions to generate structures of arbitrarily great complexity.
“For this argument to apply to the universe itself, two ingredients are necessary – first, a computer, and second, monkeys. But as shown above, the universe itself is indistinguishable from a quantum computer. In addition, quantum fluctuations – e.g., primordial fluctuations in energy density – automatically provide the random bits that are necessary to seed the quantum computer with a random program. That is, quantum fluctuations are the monkeys that program the quantum computer that is the universe. Such a quantum computing universe necessarily generates complex, ordered structures with high probability.”
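To get a feel for the numbers behind Lloyd’s argument, here is a back-of-the-envelope sketch of my own (not Lloyd’s; the 50-key typewriter and the 20-character program are assumed figures, purely for illustration):

    # Back-of-the-envelope comparison (my own illustration, not Lloyd's):
    # the chance of monkeys typing a 100-character text directly, versus
    # typing a short program that computes that text. Keyboard size and
    # program length are assumed figures.
    import math

    KEYS = 50       # assumed number of keys on the typewriter
    TEXT_LEN = 100  # length of the target text (say, 100 digits of pi)
    PROG_LEN = 20   # assumed length of a short program that prints them

    log10_p_text = -TEXT_LEN * math.log10(KEYS)  # about -170
    log10_p_prog = -PROG_LEN * math.log10(KEYS)  # about -34

    print(f"P(type the text directly) ~ 10^{log10_p_text:.0f}")
    print(f"P(type a program for it)  ~ 10^{log10_p_prog:.0f}")

The gap of more than a hundred orders of magnitude is the heart of Lloyd’s point: in algorithmic information theory, the probability that a random program produces a given structure is dominated by roughly 2 raised to minus the length of the shortest program for it, which is why short-program structures like the digits of π or fractals are comparatively easy for the monkeys to hit.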
What Lloyd seems to be saying is that if a Designer wanted to create a life-sustaining universe, and needed to hit on the right combination of physical parameters, then the best the Designer could do would be to use a random process to generate a program for creating such a universe. Intelligence offers no shortcut for hitting on the right parameters. But if the optimal process for generating those parameters is itself inherently random, then the need for design appears to be obviated. In that case, any attempt to argue that the universe is the product of a Designer will fall foul of the principle of parsimony (Occam’s razor). Indeed, it would be impossible in principle to know that our universe had been designed.
A Designer could not know in advance that a fine-tuned universe would actually produce life, let alone sentient life or intelligent life
It gets worse. It turns out that, mathematically, there is no general way of knowing what result a given set of parameters will produce when fed into a program (say, a program for generating a universe with stars and planets). The mathematician Stephen Wolfram has written about this in connection with his Principle of Computational Equivalence. In chapter 12 of his book, A New Kind of Science (which can be read online), he argues at length that practically every system occurring in nature, apart from the obviously simple ones, is computationally irreducible, and hence unpredictable. As Wolfram puts it on page 828:
“…I have argued that among systems that appear in nature a great many exhibit computational irreducibility - so that in a sense it becomes irreducibly difficult to foresee what they will do.”
On pages 755 to 756, Wolfram spells out in detail how the Principle of Computational Equivalence would apply to biology:
“And what I suspect is that for almost any system whose behavior seems to us complex, almost any non-trivial question about what the system does after an infinite number of steps will be undecidable. So, for example, it will typically be undecidable whether the evolution of the system from some particular initial condition will ever generate a specific arrangement of cell colors - or whether it will yield a pattern that is, say, ultimately repetitive or ultimately nested.”
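To make computational irreducibility concrete, here is a minimal sketch of my own (not Wolfram’s code) of one of his paradigm cases, the elementary cellular automaton Rule 110. As far as anyone knows, the only general way to find out what row n looks like is to compute every row before it:

    # Elementary cellular automaton, Rule 110 (a minimal sketch of my own).
    # As far as is known, there is no general shortcut for predicting row n:
    # you have to run all n steps. That is computational irreducibility in
    # miniature.

    def step(cells, rule=110):
        """Apply one step of an elementary CA rule to a row of 0/1 cells,
        with wrap-around at the edges."""
        n = len(cells)
        new = []
        for i in range(n):
            left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            neighbourhood = (left << 2) | (centre << 1) | right
            new.append((rule >> neighbourhood) & 1)
        return new

    row = [0] * 31
    row[15] = 1                  # a single live cell in the middle
    for _ in range(15):
        print("".join(".#"[c] for c in row))
        row = step(row)

Undecidability enters here as well: running the automaton can confirm that a given pattern eventually appears, but no amount of running can, in general, certify that it never will.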
The implications with regard to intelligent design theory should be obvious. Let’s suppose that a Designer set up the initial parameters of the universe in such a way as to permit the formation of galaxies, stars and planets, the origin of life, and the subsequent evolution of eukaryotic organisms, multicellular organisms, animals, animals with nervous systems, sentient animals, and finally, sapient beings like ourselves. What the Principle of Computational Equivalence tells us is that there’s no way that a Designer could be sure that the initial parameters used to fine-tune the universe would in fact give rise to a cosmos in which life arises, let alone sentient life, let alone intelligent life. Fine-tuning becomes, at best, a shot in the dark, a “hit-and-miss” strategy which may or may not achieve its desired objective. Isn’t this an inappropriate way for a Designer to make intelligent moral agents?
Does the Designer have a simulator? Is the Designer a simulator?
Perhaps we might imagine that the Designer either possesses, or is, some kind of omni-simulator that can run a program which displays the results of all possible selections of the relevant cosmic parameters, allowing the Designer to then select a specific set of parameters that yields a world that gives rise to intelligent life. In other words, the Designer would have to run zillions of simulations before hitting on a suitable one that yields us (or something like us). But what kind of Designer is that? And wouldn’t it be more economical to simply believe in a multiverse, rather than a multiverse plus a Designer running a massive simulation?
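In computational terms, the strategy being imagined looks something like the toy sketch below (my own illustration: simulate_universe and contains_intelligent_life are hypothetical stand-ins, and the ‘life-permitting’ test is arbitrary):

    # Toy sketch (my own) of the 'omni-simulator' strategy: enumerate
    # candidate constants, run a full simulation for each, and keep the
    # first set that yields intelligent life. Both helpers are hypothetical
    # stand-ins for what a real Designer would need.
    import random

    def candidate_parameters():
        """Endless stream of candidate constants (toy stand-in)."""
        while True:
            yield {"cosmological_constant": random.uniform(0.0, 1.0)}

    def simulate_universe(params):
        """A real version would run the universe's entire history, which,
        by the irreducibility argument above, cannot be shortcut."""
        return params  # toy: the 'history' is just the parameters

    def contains_intelligent_life(history):
        """Toy criterion: accept a tiny sliver of parameter space."""
        return abs(history["cosmological_constant"] - 0.5) < 1e-3

    tries = 0
    for params in candidate_parameters():
        tries += 1
        if contains_intelligent_life(simulate_universe(params)):
            print(f"found after {tries} simulations:", params)
            break

Even in this toy version, the strategy just is generate-and-test; scaling it up to real physics only increases the metaphysical cost of the simulator.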
Some theists, notably the philosopher Robin Collins, argue that we do in fact have a solid philosophical reason to believe that a multiverse requires a Designer: namely, that a multiverse needs to be fine-tuned. This claim hinges on the assertion that cosmic inflation is itself a fine-tuned process. Assuming for argument’s sake that this is the case (but see here), all that follows is that the multiverse is fine-tuned. It doesn’t follow that the next level up (call it the Metaverse) needs to be fine-tuned as well.
In any case, there’s a larger philosophical question that still needs to be answered: how does a supposedly simple and incorporeal Designer perform calculations? Such a being would lack the architecture required to function as an omni-simulator. But if we suppose instead that it makes an omni-simulator, then a skeptic might object that the omni-simulator itself would be a better place to stop in our philosophical quest for explanations.
The inescapability of random selection, when designing a cosmos
The intelligent design hypothesis faces a further difficulty. Even if we grant that a Metaverse, like a multiverse, would still need to be designed, and that pushing the problem of design up one level therefore solves nothing, a problem remains: although the vast majority of possible sets of values for the constants of Nature are incompatible with the existence of life, there is still a very large number of possible sets that are compatible with the emergence of life, including sentient and intelligent life. The cosmological constant doesn’t have to have precisely the value it does, for instance. Moreover, there’s no indication that the value of the cosmological constant in our universe is in any way optimal among the values that could have been chosen. So it seems that the Designer needs to make a random selection of values in order to design our universe. Nor will it do to suggest that the Designer could write a program that comes up with some life-compatible set of constants - say, by picking the first such set it finds - and let the program make the selection. For there are likely to be many such programs. Which one is the Designer going to choose? The shortest? But what if there are many of those, as is likely the case? The conclusion seems inescapable: unless one is prepared to make a host of ad hoc, highly questionable assumptions (e.g. that there is something uniquely choiceworthy about the set of constants that defines our universe), some element of randomness is inevitable in the design of the cosmos.
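The point about competing selection programs can be made concrete with a toy sketch of my own (the ‘life-compatible’ test is arbitrary): two equally simple programs, differing only in the order in which they enumerate candidates, ‘pick’ different universes:

    # Toy illustration (mine): two programs that each return the *first*
    # life-compatible parameter set, but enumerate the candidates in
    # different orders - and so make different 'choices'. Neither order
    # is privileged, so picking a program smuggles in an arbitrary choice.
    def life_compatible(x):
        """Arbitrary toy criterion standing in for 'permits life'."""
        return x % 7 == 3

    candidates = range(100)

    first_up = next(x for x in candidates if life_compatible(x))
    first_down = next(x for x in reversed(candidates) if life_compatible(x))

    print(first_up)    # 3
    print(first_down)  # 94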
Can an incorporeal Designer make a random selection?
Nevertheless, there is something profoundly problematic about the whole idea of an incorporeal Designer making a random selection. To put it another way: “Pick a card, any card” is one thing you cannot say to a spirit. The request works when addressed to an embodied being, because that being has built-in biases - e.g. right-handedness, or a preference for certain numbers. These biases combine to push the embodied agent toward a selection which feels random at the conscious level but is in fact determined by the agent’s subconscious preferences. A disembodied agent, by definition, has no built-in biases - nothing to push it one way or the other. Presented with a range of options, a disembodied spirit would be paralyzed, like Buridan’s ass. In other words, making a random selection is impossible for a disembodied Designer.
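Put in computational terms (a loose analogy of my own), any implementation of “pick a card” has to consume some extra input - a seed, a clock, a built-in bias - whereas a pure function of the options alone returns the same answer every time:

    # Loose analogy (my own): a selection procedure needs an entropy
    # source. Given only the options and nothing to break the tie, a
    # deterministic chooser always returns the same card - Buridan's
    # ass in code.
    import random

    def pick(options, entropy):
        """Selection driven by an explicit source of randomness."""
        return random.Random(entropy).choice(options)

    def pick_with_no_bias(options):
        """No seed, no bias, no state: the 'choice' is fixed in advance."""
        return options[0]

    cards = ["ace", "king", "queen", "jack"]
    print(pick(cards, entropy=2024))  # varies with the seed
    print(pick_with_no_bias(cards))   # always 'ace'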
But if we suppose that the Designer is in some way corporeal (or perhaps super-corporeal), then what advantage does this metaphysical hypothesis have over atheistic materialism? And wouldn’t a corporeal Designer have to exist within some larger space or milieu, meaning that it would itself require a further explanation? (Of course, one could suppose that the Designer embraces the whole of the space and time that it inhabits. There’s a name for this point of view: pantheism.)
An intuitive Designer?
Classical theists would object that the model of a Designer which I have been appealing to is an anthropomorphic one. God, they maintain, does not need to engage in discursive reasoning. He doesn’t need to figure things out; he “just knows” (or intuits, by knowing Himself) everything there is to know about the cosmos, including the right set(s) of parameters for generating a cosmos that yields intelligent life.
Now, in everyday life, there are indeed situations where we know something to be true, without being able to articulate how we know it. We can just see it; that’s all. Could God’s Mind be like that? I’m afraid not. For our inability to explain how we arrive at a certain insight springs from our lack of self-knowledge. If we had an exhaustive knowledge of our current state (in particular, our neural architecture), as well as our past history, then presumably we would be able to identify the chain of subconscious information processing by which we arrived at the insight that we had. However, lack of self-knowledge is something that we cannot impute to God, without denying His divinity.
The same goes for the suggestion that, just as a few gifted people with synesthesia are said to possess the uncanny ability to perform complex arithmetic instantly, so too God might have a non-inferential way of seeing precisely how the universe needs to hang together, and what set of constants it requires. This, however, is a very poor example. It turns out that when psychologists ask these synesthetes what is going on in their heads as they calculate, they are able to describe what they visualize.
Another fallacy in the “Divine knowledge by intuition” scenario is that it appeals to an argument from conceivability. It seems that we can at least conceive of a Being that knows whether a given proposition P is true, without there needing to be any process by which the Being knows. For instance, we can conceive of a Being that always gets the answer to any meaningful question right on a quiz show. When asked what method it employs to come up with the right answer, the Being replies, “I have none. I just can, and that’s all. That’s Who I am. I am the Being whose nature it is to know.”
Sounds fishy, right? Yes, but why exactly? The reason is that there’s more to knowledge than merely getting the answer right. Knowledge is not merely true belief; it requires justification. If you claim to know the answer to a math problem but are utterly unable to justify your answer to other people who ask, “How do you know?”, then even if you get the answer right, can you really be said to have known the answer all along? What distinguishes this from a lucky guess? Nothing. At some point, you really do need to “show workings.”
So the answer to the conceivability argument for non-inferential Divine knowledge is that it rests on an inadequate conception of knowledge. Knowledge is something more than coming up with the right answers. The right process matters, too.
There is therefore no alternative, it seems, to calculating, when it comes to designing a cosmos. But as we saw above, the Principle of Computational Equivalence seems to entail that a Designer would have no guarantee that a given set of life-compatible constants would actually give rise to life, without running a full simulation. And if there needs to be a simulation for each and every possible set of life-compatible values in order for the Designer to hit on the right answer, then the Design hypothesis collapses under its own metaphysical weight.
I shall lay down my pen here and turn the discussion over to my readers. Have I understood Seth Lloyd and Stephen Wolfram correctly? What are your thoughts on the Designer’s need for a simulator? I would love to hear from you.