Is the universe undesignable?

A couple of quick responses:

I was replying based on the usual definition of ‘spirit’. If you had written ‘the divine being of classical theists’, and not ‘a spirit’, my reply would have been different.

Does the spiritual designer include comments in his code? Or not use subroutines or variables?

That wasn’t a comment about a “designed” program for determining whether a universe was “life-producing”. It was a comment about a program generated by randomly typing monkeys, that happened to model elementary particles interacting with gravity - because that’s what Lloyd described.

If he tries to model particle behaviour using insufficiently precise values, then yes.

If the universe is finite, space allocation would be theoretically possible. But if your argument is dependent on the universe being finite, you’ve got a lot of work ahead of you.

I read Lloyd’s meaning of ‘random’ not as ‘incompressible’, but as ‘shortest possible’, which isn’t quite the same thing.

@vjtorley I think Andrew’s statement is important here. As humans, we can conceive of design when the raw materials are available, i.e. atoms and molecules. Understanding the origin of atoms and molecules, which is an important source of the fine-tuning, is a very difficult problem even if we can figure out a possible tuning process.

Here is a discussion from Reasons to Believe on Creation ex nihilo.

Isn’t it? But if it’s compressible, the shortest possible version would be the compressed form. I suppose you mean that a program can be incompressible without being the shortest possible version. Is that it?

Thanks for the suggestion, Dan. If the outcome of a universe set up with a given set of parameters is inherently unpredictable, then it would make perfect sense for a Designer to perform simulations, just as human designers do.

That’s a very interesting suggestion. As to why randomness is necessary, let me quote Seth Lloyd:

… *[M]any complex, ordered structures can be produced from short computer programs, albeit after lengthy calculations… Moreover, the shortest programs to produce these complex structures are necessarily random. If they were not, then there would be an even shorter program that could produce the same structure.*

Lloyd seems to be arguing that randomness is the best way to go in order to build complexity.
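As a rough illustration of that point, one can use an off-the-shelf compressor as a crude stand-in for program length (only a stand-in, since true Kolmogorov complexity is uncomputable). A minimal Python sketch:

```python
# Crude proxy: zlib's compressed length stands in for Kolmogorov complexity.
# Structured data compresses dramatically; random data barely compresses.
import os
import zlib

structured = b"AB" * 5000          # highly ordered: a short description suffices
random_bytes = os.urandom(10000)   # incompressible with overwhelming probability

for label, data in [("structured", structured), ("random", random_bytes)]:
    compressed = zlib.compress(data, level=9)
    print(f"{label}: {len(data)} bytes -> {len(compressed)} bytes")
```

The structured string shrinks to a few dozen bytes, while the random bytes stay essentially full length: in Lloyd’s terms, no description of them that is much shorter than they are exists.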

Excellent point. And as you correctly point out, that’s not the same thing as “uncaused” or “unbiased.” However, we are still left with the question of how the Designer hit upon a complexity-generating set of values for the constants of nature and the universe’s initial parameters. In my OP, I mentioned Stephen Wolfram’s Principle of Computational Equivalence, which implies that there’s no way in principle of foreseeing what a given set of values will yield, short of running the actual program.

That’s a quote from page 828 of A New Kind of Science. So it seems that the Designer must have employed a trial-and-error process, performing various simulations until a suitable combination was found. Such a process need not be random in the sense of “uncaused,” but it would presumably involve a random walk through the possible sets of values - and for that, you would use a random number generator. Apparently random walks can be used not only to model nature but also to find solutions to mathematical problems; Wikipedia mentions a few examples in its article on random walks.
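To make the random-walk picture concrete, here is a minimal sketch, with a made-up “suitability” score standing in for “run the simulation and evaluate the outcome” (every name and number below is purely illustrative):

```python
import random

def suitability(x, y):
    """Toy stand-in for 'simulate a universe with parameters (x, y)
    and score the result'. Entirely made up for illustration."""
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)  # best score at (3, -1)

# Greedy random walk: propose small random steps, keep improvements.
# (A Metropolis rule would also accept some worsening steps.)
x, y = 0.0, 0.0
best = suitability(x, y)
for _ in range(10_000):
    nx, ny = x + random.gauss(0, 0.1), y + random.gauss(0, 0.1)
    score = suitability(nx, ny)
    if score > best:
        x, y, best = nx, ny, score

print(f"best parameters found: ({x:.2f}, {y:.2f})")
```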

If the Designer had to use such a process in order to generate our universe, then it’s the best way that the job could have been done, and the sensible thing for believers to do is to simply accept that fact and move on. From a scientific standpoint, however, we are still left with the interesting question about what kind of device the simulator would have to be.

@Roy raises an interesting point:

Upon reflection, I think that what matters most from a programming standpoint is not whether the universe is finite, but whether the range of possible values for its parameters is finite. Cosmologist Luke Barnes has pointed out that physical theories break down if the values get too high.

Which leaves us with the question: what reason is there for thinking that the universe was designed?

I should first point out that one can believe in design without having a scientific reason for doing so. In any case, it so happens that I have addressed this question in an article I wrote long ago on Uncommon Descent, titled, Night Vision: A new version of the fine-tuning argument. What matters is not the proportion of “good universes,” but the proportion of good universes in our local neighborhood - i.e. with parameters whose values are not too far removed from our own. It is generally acknowledged by cosmologists that this proportion is very small, and that in our local neighborhood, at least, the range of life-friendly parameters is heavily restricted. In my article, I cite philosophy professor John Roberts’ article, Fine-Tuning and the Infrared Bull’s-Eye (Philosophical Studies 160(2):287-303, 2012). Referring to the “restricted range” premise (which he calls R), Roberts writes:

Imagine that you are standing in front of an extremely large wall, which as far as you can tell is homogeneously white, with nothing to distinguish one part of it from another. From somewhere behind you, a dart is launched, and it zooms over your head and then hits a point on the wall. It occurs to you to wonder whether the dart was thrown carefully by a skillful aimer or flung up there by some chance process. You might reason as follows:

… [T]he skillful-aimer hypothesis doesn’t make the dart’s hitting this point any more likely than the random-flinger hypothesis does. And so, thus far, all my evidence seems to be neutral between the skillful-aimer hypothesis and the random-flinger hypothesis.

Then you open your birthday present, and it’s a pair of infrared-vision goggles. You put them on, and when you look at the wall again, you see that it bears a standard dartboard design done in infrared paint, and the center of the bull’s-eye is at precisely the point where the dart is sticking out of the wall. Now what do you think? It seems obvious that the only reasonable thing to think at this point is that you now have excellent evidence that the dart was carefully aimed. (And by someone or something that can see in the infrared part of the spectrum.) Why? We can reconstruct your reasoning as a likelihood argument: There being something special and aim-worthy about the point where the dart struck the wall is much less surprising and much more to have been suspected if the dart were thrown by a skillful aimer than if it were flung up there by some random process…

Roberts comments:

The analogy between this case and that of the fine-tuning argument is obvious. Our discovery of R corresponds to the discovery of the infrared bull’s-eye: It shows us that there was something intelligibly (even if not uniquely) aim-worthy or choiceworthy about the values of our universe’s parameters which they do not share with generic possible parameter-values. Just as the discovery of the heretofore invisible bull’s-eye ought to strike us as more likely given a skillful aimer than given a random flinger, so should the special feature of the actual parameter-values strike us as more likely given that they were set by design than given that they were set by chance.

Now let me vary Roberts’ thought experiment a little. Suppose that the bull’s-eye is red, but that the rest of the wall is not homogeneously white and instead has several large red patches, in addition to the tiny red bull’s-eye. Would that alter the validity of the design inference? I don’t think so. The fact that the dart hit such a tiny spot on the wall would still be remarkable, in and of itself.
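To put some purely hypothetical numbers on that intuition: suppose the wall is 10 m², the large red patches total 1 m², and the bull’s-eye is 1 cm². Then:

```python
# All numbers hypothetical; only the structure of the argument matters.
wall_area = 10.0        # m^2
red_patches = 1.0       # m^2 of large red patches
bullseye = 1e-4         # m^2 (1 cm^2)

print(f"P(hit some red area | random fling) = {(red_patches + bullseye) / wall_area:.5f}")
print(f"P(hit the bull's-eye | random fling) = {bullseye / wall_area:.5f}")
```

Hitting some red area by chance would be unremarkable (about 10%), but hitting the tiny bull’s-eye specifically remains a one-in-100,000 event, so the extra red patches barely dilute the inference.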

We might still want to ask: precisely what is it that makes it rational to infer that the dart was thrown by an intentional agent? Roberts’ answer is that the target was choiceworthy or aim-worthy in a way that other spots on the wall were not. (I discuss his reasoning at further length in my article.)

I shall leave readers with a very fair-minded summary of the evidence by cosmologist Luke Barnes, titled, The Fine-Tuning of the Universe for Life. After discussing the various metaphysical options for explaining fine-tuning, he concludes:

For each of these alternatives, the fine-tuning of the universe for life plays an important role. For Tegmark, the complexity required by any SAS [self-aware structures] explains why we see this universe/mathematical structure, rather than a simpler one. For axiarchism and theism, fine-tuning for life shows how these ideas could have explanatory power. Given the seemingly extraordinarily small proportion of possibilities that permit the existence of embodied moral agents, the axiarchist and theist can understand something of why Alberta’s blackboard is the one that has gone to all the bother of existing. Further examination of these alternatives takes us beyond the philosophy of physics.

And with that, I’ll lay down my pen and wish you all a happy Easter. Cheers.


Are we placing any limits on this spirit Designer’s intellectual capabilities? If not, what is to stop this designer from simply mentally running all variants of these programs (possibly in parallel with each other) and learning the results?

Where is this assertion substantiated?

This makes no sense given a definition of “random” that anyone else would accept. I’d like to see you try to defend it.

If so, you have just destroyed your argument for a fine-tuned universe. How, given this claim, can you know that any other set of parameters would not work as well, since you haven’t run any of the programs?

Incidentally, I thought that book was largely nonsensical and its conclusions unwarranted by the data. And I don’t think any of your conclusions about the current topic are warranted either.

This assumes that you can’t narrow down the parameters in any way without at least simulating a universe that has them. I don’t see an argument for that. So I don’t see why the designer would have to make a random choice. Nor do I see why, if a random choice were needed, a random walk would be necessary rather than a simple random sample.
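To make the contrast concrete: independent draws need no walk at all. A toy sketch (the scoring function is made up, as in any such illustration):

```python
import random

def suitability(x, y):
    """Made-up toy score; purely illustrative."""
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

# Simple random sample: 10,000 independent draws, no walk required.
best = max(
    ((random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(10_000)),
    key=lambda p: suitability(*p),
)
print(f"best independent draw: ({best[0]:.2f}, {best[1]:.2f})")
```

A walk only earns its keep if good parameter regions are contiguous and local structure can be exploited, which is exactly what has not been established here.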

Some problems, perhaps, but is this one of them? You make no argument for that.

Why couldn’t the simulator just be “the mind of God”? Assuming a simulator would actually be needed, which is not in evidence.

Why?

How can they know, since you have claimed that the only way to know what would happen would be to construct such a universe or a good simulation of one? How can they know what the possible parameter sets are, since we have no theory of such a distribution, or even a theory of whether the parameters are uncorrelated?

As for Roberts, I suggest that his thought experiment probably happened in Texas, and a quick examination will show that the paint is still wet. Further, we have no idea how big the wall is compared to the target, or how many targets there are. How shocking is it that no matter how short you are, your legs are always exactly long enough to reach the ground? Coincidence or design?

That’s unintelligible. Perhaps the article in full explains it better. But your excerpt fails even to establish that there’s a phenomenon in need of explanation.

All in all, I find your arguments mostly missing in action. Unexpressed, unconsidered assumptions abound. But enjoy your celebration of the day.


Yes, Happy Solstice to you as well. :slight_smile:

A further thought … random initial states do not necessarily imply random outcomes. If some form of convergence applies, as in the Law of Large Numbers, then certain states will be more likely than others. If this applies to the initial state, then some states will be more predictable. AFTER the initial state, events in the universe may also converge in the same way. All “simple” events will occur with probability 1.0. Certain “complex” events may also occur with probability 1.0 if convergence favors these events.
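A quick illustration of the sort of convergence I mean, in Python:

```python
import random

# Law of Large Numbers in miniature: the running mean of fair coin
# flips settles toward 0.5 as the number of flips grows.
for n in (10, 100, 1_000, 10_000, 100_000):
    mean = sum(random.randint(0, 1) for _ in range(n)) / n
    print(f"{n:>7} flips: mean = {mean:.4f}")
```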

Recall the source here is Seth Lloyd. It makes sense in terms of theoretical computer science; incompressible strings (random bits) are full of unique information, and one or more of those strings could be decoded to produce complex structures. Conway’s Game of Life is an example of this sort of thing, where a simple initial state may generate enormous complexity.
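For the curious, a bare-bones Life sketch; the five-cell R-pentomino seed boils away for roughly 1,100 generations before settling into stable debris:

```python
from collections import Counter

def step(live):
    """One Life generation over a set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next turn with exactly 3 live neighbours,
    # or 2 if it is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

live = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}  # the R-pentomino
for gen in range(1, 1001):
    live = step(live)
    if gen % 250 == 0:
        print(f"generation {gen}: {len(live)} live cells")
```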

There are of course a far larger number of equally random strings that don’t produce anything interesting*. “Algorithmic Information” doesn’t capture the potential meaning of any string, just as Shannon Information does not capture the meaning of a message.

* for a given coding scheme. Discussion of coding schemes would be way off-topic.

And back to @vjtorley: It seems to me that once the initial conditions of a universe are set, Lloyd’s argument no longer applies. The definition of randomness you are using depends on a Universal Turing Machine (UTM). The laws of physics for that universe will process all the random input, and the Turing Machine is no longer Universal.

Aside: I find Dembski’s definition of CSI to have a similar flaw, as the only coding scheme that applies to biological sequences is the laws of chemistry.

Therefore your Designer is not omnipotent. Interesting.

Overall, it appears that you are doing a lot of heavy lifting with the term “random” in ways that it does not appear to have been designed (pun intended) to do.

