(Book: On Guard, by William Lane Craig. Chapter 5: “Why is the universe fine-tuned for life?”)
Chapter 5 starts the “Intelligent Design” portion of Dr. Craig’s apologetic, though he does not call it that. In fact, he includes a sidebar that explicitly points out that “fine-tuned” does not mean designed. Fine tuning, he explains, “just means that the range of life-permitting values for the constants and quantities is extremely narrow.” But don’t let that mislead you: this is his ID argument, which he presents in the form of another syllogism.
- The fine-tuning of the universe is due to either physical necessity, chance, or design.
- It is not due to physical necessity or chance.
- Therefore it is due to design.
There are several significant flaws in this argument. First of all, as we saw last week, time itself has its origin, along with the rest of the universe, at the Big Bang. That means there has never been a time when the physical constants of the universe did not exist with their current values, which prevents any Designer from having the opportunity to design them. They’re already here, they already have the correct values, and the Designer himself could not have existed prior to the constants (since time itself does not go back that far), so we can eliminate design right off the bat.
Dr. Craig also falls for the creationist obfuscation of “chance,” as if it meant events that are completely unpredictable and chaotic. Such a definition of chance is actually closer in meaning to “magic” than it is to the laws of nature. In nature, things may be undirected, in the sense that there’s no entity pre-planning how he/she/it/they intends for things to work out. Due to the mechanistic nature of how things work in real life, however, the results are anything but random (in fact, one of the hardest things to do scientifically is to produce genuine randomness).
What we have in the real world are a number of entities and processes that interact according to some relatively simple basic principles that have surprisingly complex results. At a certain point, the complexity becomes too great for us to follow all the variables and see where things are going, and so we introduce the concept of “random chance” as a placeholder for the combined effect of all the influences we can’t keep track of. This, however, is a manifestation of our own limits, not some actual god of chaos named “Chance.” The underlying phenomena (with the exception of quantum-level phenomena) are following the fundamental laws of nature. We just can’t keep track of them all.
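To make that concrete, here’s a toy example of my own (nothing like it appears in the book): the logistic map, a one-line deterministic rule whose output quickly starts to look like noise once you stop tracking the state.

```python
# The logistic map: one deterministic rule, no randomness anywhere.
def logistic(x, r=4.0):
    """One step of the map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

x = 0.2
trajectory = []
for _ in range(10):
    x = logistic(x)
    trajectory.append(round(x, 4))

# Every value follows inevitably from the last, yet the sequence shows no
# obvious pattern -- "random chance" as a name for what we can't track.
print(trajectory)
```

Run it twice with starting points 0.2 and 0.2000001 and the two trajectories part ways within a few dozen steps. That sensitivity to untrackable detail, not some god of chaos, is what we actually mean when we say “chance.”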
Dr. Craig, in his first premise, fails to grasp this important feature of the natural world. By claiming that observed characteristics must fall into one of three categories—physical necessity, chance, or design—he and other ID creationists are overlooking (and perhaps deliberately obscuring) the way nature really works. Most natural phenomena operate by a combination of multiple independent agents or processes interacting in undirected ways according to a few basic physical laws, and when these interactions influence one another, the resulting feedback loop produces amazingly complex and apparently structured results, similar to what we think of as “design.” In fact, one of the reasons a lot of design is done by trial and error is that this process simulates the “random” processes that are so effective in undirected nature.
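Here’s another toy sketch of my own (again, not from the book) showing how a trivially simple local rule, applied by each cell interacting only with its neighbors, generates a structured-looking pattern nobody designed: Wolfram’s Rule 30 cellular automaton.

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbors.  The rule number's binary digits ARE the update table.
RULE = 30

def step(cells):
    """Apply the local rule to every cell simultaneously (wrapping edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # a single "on" cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

From one on-cell and an eight-entry lookup table you get an intricate, apparently “designed” triangle of structure—and the center column of Rule 30 is irregular enough that it has actually been used as a randomness source.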
Technically, I’m digressing a bit. Usually, ID creationists offer the “physical necessity, chance, or design” argument as a ploy to attack evolution, and evolution, in particular, operates by the complex mechanisms of independent processes interacting according to natural law. Dr. Craig, however, is attempting to adapt this anti-evolution argument and use it to lay out a trichotomy of possible causes for the physical constants of the universe. That’s obviously a red herring, since these constants have always existed, for all of time, with their current values, and therefore it is meaningless to speak of them having any “cause.”
This gets into another of Dr. Craig’s mistakes, though: a failure to understand the nature of a constant. Remember that old joke about “2+2=5, for sufficiently large values of 2”? It’s funny because it treats “2” like some kind of vague, variable quantity, when we know it isn’t. The number 2 is a constant; it doesn’t have values that are larger or smaller than 2. That is, by definition, what a constant is. We can’t change the value of a constant; we can only be mistaken about which value is correct. But Dr. Craig is treating constants as though you could just pick some random number (if you were an Intelligent Designer) and *poof*, the new value of π is now 42.
[C]onsider the following examples of fine-tuning. The so-called weak force, one of the four fundamental forces of nature, which operates inside the nucleus of an atom, is so finely tuned that an alteration in its value by even one part out of 10^100 would have prevented a life-permitting universe! Similarly, a change in the value of the so-called cosmological constant, which drives the acceleration of the universe’s expansion, by as little as one part in 10^120 would have rendered the universe life-prohibiting.
And if you changed π to be 314.15 instead of 3.1415, it would take the earth a whole century to revolve around the sun! Wow, amazing! Change 2 to 2.5, and the equation in the joke would not be funny any more, because the number 2 would no longer be a constant; it would be a variable, and you could put whatever value you want into it. Or whatever value you need to make your fine-tuning work.
Meanwhile, in the real world, constants are, well, constant. The way we discover them is by virtue of the fact that they can’t be any other value, really. We can take an equation in which a constant appears, and substitute a different value in place of the constant, and see how the numbers come out, but that won’t tell us anything about the real world, and really it won’t tell us much about anything: if you say that “the circumference of a circle is 2πr, but you can change the value of π,” how can you be sure that the value of 2 won’t change also? They’re both constants; it’s no more unreasonable to take the one as a variable than it is to take the other.
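Since we’re on the subject of where a constant’s value comes from, here’s one more toy sketch of my own (not the book’s): Archimedes’ classic method of estimating π. Notice that nothing in it lets you *choose* π’s value—the geometry forces it on you.

```python
import math

# Archimedes' method: inscribe a regular hexagon in a unit circle (its side
# length is exactly 1), then repeatedly double the number of sides.  The
# polygon's half-perimeter converges to pi -- a value we discover, not one
# we get to pick.
sides, s = 6, 1.0
for _ in range(10):
    s = math.sqrt(2 - math.sqrt(4 - s * s))  # side length after doubling
    sides *= 2

print(sides * s / 2)  # ≈ 3.141592..., like it or not
```

An Intelligent Designer wanting to “tune” π to 42 would have to break this calculation—and the circle along with it.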
Dr. Craig spends a lot of time trying to force various speculative modern theories (e.g. string theory) onto his Procrustean bed of “physical necessity or chance or design,” on the assumption that constants can be changed, and that it’s somehow meaningful to talk about causes for things that have always existed. Frankly, I’m not going to dig into them too much—it’s a wild goose chase, and there are others more qualified than I to discuss the problems that Dr. Craig is having with advanced physics (e.g. skydivephil’s video).
What I do want to go back to is the underlying assumption behind his reasoning. Dr. Craig is making the classic creationist assumption that if we do not understand why the constants and quantities of the universe work out the way they do, then we should automatically assume that some intelligent, self-aware, purposeful Being must have deliberately and intentionally created them. In short, he is advocating that our default preference should be for superstition in cases where scientific answers have not yet been found.
Obviously that’s an appallingly anti-scientific and anti-intellectual assumption to make, and I doubt that Dr. Craig would willingly concede that he is making it. If you look at his methodology, though, this is exactly what he is doing. He is not trying to describe a scientifically-definable process by which a Designer would “tune” the values of the physical constants of the universe. He’s not giving us enough detail that we could calculate what real-world consequences would result from such a process, and how those consequences would differ from those that would result if these values arose by some other process. He’s not giving us anything that would allow us to verify, objectively and scientifically, whether the facts fit his hypothesis better than any non-design hypothesis. All he’s doing is trying to attack the scientific alternatives to his superstitious explanation. A few examples:
For example, the most promising candidate for a [Theory of Everything] to date, so-called M-theory or superstring theory, only works if there are eleven dimensions. But the theory itself can’t explain why just that particular number of dimensions should exist.
Moreover, M-theory doesn’t predict uniquely a life-permitting universe. It permits a vast range of around 10^500 different possible universes, all consistent with the same laws but varying in the values of the constants of nature. Almost all of these possible universes are life-prohibiting. So some explanation is needed why, out of all these possibilities, a life permitting universe exists…
The proposed mechanisms for generating a world ensemble are so vague that it’s far from clear that the physics governing the multiverse will not involve any fine tuning…
We saw in chapter 4 that the Borde-Guth-Vilenkin theorem requires that even a multiverse of bubble universes must have a beginning. In that case, the mechanism that generates the bubble universes has been chugging away for only a finite amount of time. So by now, there may well be only a finite number of bubbles in the world ensemble, which may not be enough to guarantee the appearance of a finely tuned universe by chance alone…
With the failure of the many worlds hypothesis, the last ring of defense for the alternative of chance collapses. Neither physical necessity nor chance provides a good explanation of the fine-tuning of the universe.
Etc. etc. etc. Science may not have all the answers. Superstring theory can’t explain why eleven dimensions should happen to exist. M-theory permits too many universes, we need more explanations. The many worlds hypothesis may not explain everything, therefore it has failed. In other words, don’t listen to the scientists. Ignore their answers. Look somewhere else.
Well, I’ll take it a step farther than Dr. Craig: science does not yet know how—or if—any sort of material metaverse exists and/or influences the characteristics of this universe. The scientific thing to do in the absence of such answers is to ask more questions, seek more data, and learn what we can. The superstitious thing to do is just give up and attribute things to some invisible magical Being. What Dr. Craig is doing is encouraging us to reject the former approach in favor of the latter.
This is the key failure of Intelligent Design creationism, even as a pseudo-scientific approach. In the end, it is an attempt to replace science with ordinary superstition, an attempt to throw away both scientific answers and even scientific investigation, in favor of a dogmatic animism. Dr. Craig does his homework better than most, and tries his hardest to dress it up in a lab coat and make it sound like he’s just being a good skeptic, but it’s marketing. A genuinely objective scientist would acknowledge that there are still gaps in the various theories, but would nevertheless agree that they’re the best answers we’ve found so far. He wouldn’t pounce on the gaps as a convenient place to try and stuff God in.
We’ve got more to cover in this chapter, but Dr. Craig’s next move is to attack Richard Dawkins, and that’s going to open up a whole ‘nuther can, as they say. So we’ll save that for next time. Tune in again next week!