
Karl W. Giberson


Much Ado About Nada

The television sitcom Seinfeld was a cultural phenomenon—one of the most wildly successful shows in the history of television. Part of Seinfeld's peculiar charm—its "schtick"—was that it was a show about nothing, and it managed to turn this into a marketing device. Of course Seinfeld wasn't really about nothing; it simply lacked the standard plot framework of its sibling shows and often drew its comedic strength from trivia. But, by traditional yardsticks, something normally present was missing—thus the claim to be about nothing.

The Seinfeldian sense of nothing is the common usage. A show without a standard plot is about "nothing." Likewise, someone without a plan is doing "nothing" on Friday night. A writer between projects is working on "nothing." A bored child has "nothing" to do. A detective who comes up dry has "nothing." The SETI program has found "nothing." A tale told by an idiot signifies "nothing."

The absence of an anticipated element is often described as "nothing." This very familiar usage is rarely confusing to ordinary people, but it is certainly imprecise, colloquial, and unsatisfactory to philosophers.

Philosophers, it turns out, have always been fascinated by nothing. The classical Greeks were intrigued by nothing and invented a variety of ingenious arguments to prove that nothing could not exist. Their most enduring legacy was the Aristotelian maxim later distilled into the Latin tag horror vacui—"Nature abhors a vacuum." Theologians have also put their spin on nothing. Determined to lay waste to the notion that God created the world out of some uncooperative material, they developed the now familiar doctrine of creatio ex nihilo. God created out of nothing.

Physicists are also intrigued by nothing. Their nothing is empty space, and they have long observed, experimented, and theorized about whether space can be truly empty. The historical intuition was negative, consistent with the Aristotelian tradition out of which modern science arose. Space was filled with an "ether"—an all-purpose material that accomplished a variety of things, from carrying light waves to eliminating the need to fret about horror vacui.

Currently physicists—or, more precisely, cosmologists—are carefully, and at considerable expense, investigating nothing. Their motivation is to understand the first moments of the Big Bang, when there was almost nothing. As one runs the proverbial cosmic clock in reverse, back to those truly prehistoric times when the universe was, say, less than a minute old, it seems apparent that nothing is just over the horizon, to use a metaphor strangely out of place.

Approaching the "ultimate" beginning at t=0, the universe becomes devoid of matter—there are no particles. There is only energy. And then there is no energy—only energy fluctuations. Are we closing in on nothing?

Alas, there are serious limits to our understanding of the first moment of the universe. We lack the exotic experimental cathedrals to model this moment of creation, and our high priests—the theoretical physicists—as yet have no platonic revelation to guide us convincingly into the light. We will never build the experimental cathedrals—calculations show that they would have to be bigger than the earth—but we could get a revelation from the robed theorists. At least that is what they are promising.

A BRIEF HISTORY OF NOTHING

For ambitious readers interested in the history of nothing, Professor Nick Huggett of the University of Illinois at Chicago has assembled a marvelous reader entitled Space from Zeno to Einstein. The book is a collection of 17 historically important readings with commentary.

The overwhelming ambiguity of nothing—or empty space, as it is often called—is clearly reflected in the writings of the classical Greeks. In the Timaeus, Plato, anticipating much of the confusion to follow, doesn't know what to make of space. Space is not "up there" in the world of forms, but neither is it "down here," casting shadows on the wall. In Plato's colorful metaphor, space is "apprehended by a kind of bastard reasoning that does not involve sense perception … as in a dream."

Euclid's rhetoric is less stimulating than Plato's, but what it lacks in color it makes up in utility. For Euclid, space is a geometric object with a number of "self-evident" characteristics of endless utility, as we all learned in high school. But the vaunted clarity of such a formulation never convinced everybody. Parmenides' famous student Zeno had, long before Euclid, devised arguments to show that this infinitely divisible space was inconsistent with the very idea of motion.

One of Zeno's classic arguments against motion runs like this: to get from here to there, one must first go halfway to there; but to get to this halfway point one must first go halfway to the halfway point. The space between here and there, argued Zeno, can be subdivided an infinite number of times. To travel from here to there requires that one traverse all these infinite subdivisions. But that would take an infinite amount of time. Thus motion is impossible. (One pictures Zeno pacing back and forth in a state of great animation as he explicates this convoluted refutation of motion.)
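In modern notation (a gloss that is ours, not Zeno's), the subdivisions form a geometric series whose sum is finite:

\[
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; \sum_{n=1}^{\infty}\frac{1}{2^{n}} \;=\; 1 .
\]

Infinitely many pieces need not add up to an infinite distance, and, if each successive piece takes proportionally less time to cross, they need not add up to an infinite time either. That, in brief, is why the argument dazzles more than it persuades.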

Aristotle rejected the idea that space could be truly empty and convinced everyone that Mother Nature was truly horrified by nothing. For Aristotle, space could not be nothing, nor could it simply be the "location for bodies" as Plato had taught. This would imply that bodies carried their "place" around with them. In the characteristically pedantic prose for which he is so universally loved by students of philosophy, Aristotle puts it like this:

If the place is in the thing (it must be if it is either shape or matter) place will have a place. For both the form and the indeterminate undergo change and motion along with the thing, and are not always in the same place, but are where the thing is. Hence the place will have a place.

Aristotle goes on to argue that "a place with a place" leads to an infinite regress, and thus we must abandon Plato's concept that the place of a body is simply its location. Space exists on its own, filled with "places" where things are constantly striving to be—heavy objects striving for lower places, flames striving for higher ones. And Mother Nature rushes in to fill any vacated place, because she simply won't tolerate a vacuum.

For nearly two thousand years the Aristotelian tradition continued with minimal editing. The sixth-century philosopher John Philoponus criticized Aristotle's claim that the celestial spheres are not moving into new places as they rotate; in the fourteenth century Jean Buridan objected to the view that an object moves toward its natural place unless acted on by a force. In the seventeenth century we find Descartes defending Euclid's idea that space is a geometrical construct. We find Newton launching an attack on the Cartesians, arguing that "space" is physically real, a "disposition of being qua being."

Although Newton's notion of "absolute space" was rejected by his great contemporary and rival, Leibniz, not until two centuries later did Newton's theory receive a proper scientific critique. The critic was the Austrian physicist and philosopher of science Ernst Mach, whose work exerted considerable influence on Einstein. Mach mounted an attack on the idea of absolute space, which, he argued (correctly), was completely without empirical support. Absolute space, Mach said, was a product of Newton's "imagination," and should play no role in any physical theory, since good theories are based on sense observations. (Mach died in 1916 refusing to believe in atoms because he couldn't see them.)

The notion of the absoluteness of space was further weakened by some curious developments in mathematics. The culprit was Immanuel Kant, the most important Western philosopher since Aristotle and one whose prose has also delighted philosophy students for over two centuries. Kant created an influential epistemological system that seemed to require an absolute Euclidian spatial framework. This "Newtonian" space was necessary (Kant was a committed Newtonian) for the coherence of our experience of the world; when we encounter objects in the world, we experience them as being located in this absolute space, which Kant argued had to be Euclidian.

Shortly after Kant died, some ingenious and incredibly strange nineteenth-century mathematicians created several varieties of non-Euclidian geometry. These new geometries, in which parallel lines often met and triangles took on new shapes, were every bit as consistent and plausible as their Euclidian predecessor, at least from a theoretical point of view.

This development broke the intimate and longstanding connection between the abstract mathematical structure of geometry and the "real world." As long as there was only one geometry and one world for that geometry to describe, it could be supposed that the two were intimately related in some way—made by God for each other, so to speak. But, if there are several geometries, the question arises as to which, if any, of the available geometries describes the real world.

By this time—the late nineteenth century—Euclid was rolling in his grave, and the stage was set for Einstein's revolution. Space was no longer absolute and Newtonian; the geometry of the "real" world was henceforth to be determined by measurement and observation—if anyone could think of a way to do that—rather than conjured from within like some revelation from on high.

With these insights, and some deep thinking about the relationship between inertial mass and gravitational mass, Einstein gave birth to General Relativity—a theory about the geometry of space. Einstein showed how the presence of massive bodies actually changes the geometry of space. In the absence of massive bodies, space is indeed Euclidian, which means that parallel lines don't meet, the interior angles of a triangle add up to two right angles, and so on. But put a large mass, like a star, into that region of space and the geometry of empty space changes, rendering your high-school textbooks, and all that hard-won knowledge you acquired from them, utterly useless.

The readings in Huggett's anthology sweep across the centuries, highlighting that long, cross-generational conversation that is one of the hallmarks of the Western intellectual tradition. There are several lessons to be learned from this long conversation. First, arguments depend on historical context. The arguments made prior to Mach for the existence or nonexistence of empty space are simply not compelling today, despite the brilliance of the thinkers who articulated them. Take Leibniz. He objected to the existence of a big, empty, uniform space because, if all the "places" in this big space were exactly the same, there would be no basis—no "sufficient reason"—for God to put something here or there. If here and there were the same, then how could God make up his mind? This argument is almost funny now. Yet it would be Whiggish of us to presume with smug satisfaction that, because we know better today, we are incapable of such folly. We no doubt do know much more about the universe than Leibniz did, but it is also the case that—to an extent impossible for us to determine—our current knowledge is a product of our assumptions and our instruments.

We also learn from this long conversation that the boundaries between science, theology, philosophy, and mathematics are rarely clear. In retrospect, the arguments about space made by the Greeks, Descartes, Newton, Leibniz, and others of their time are really not scientific arguments in any modern sense of that word. Yet all of these thinkers made contributions to science. In their own minds, their theories were simply part of a comprehensive attempt to make sense of things in the largest possible context. The major weakness of Huggett's selections is the exclusion of contributions from the atomists—the tradition that actually came the closest to anticipating our modern understanding of nothing. Fortunately there is another excellent source for this contribution: Henning Genz's book, Nothingness: The Science of Empty Space, first published in German in 1994 and now available in English translation (in a revised edition).1

THE SCIENCE OF NOTHING

In 1654 in the city of Regensburg, during a session of the German Imperial Diet, Otto von Guericke put on a great show about nothing. Guericke took two large hemispheres, about three feet in diameter, and joined them with an air seal to make a sphere. He then pumped the air out of the sphere, creating a vacuum. Next he took two teams of eight horses and had them pull in opposite directions on the two hemispheres. When, with considerable strain, the horses finally managed to separate the hemispheres, they came apart with a great explosion. When a small amount of air was let into the sealed hemispheres, they could be separated with no difficulty.

Guericke's spectacular demonstration was the latest in a series of experimental attempts to study vacuums in nature—to create and then study tiny regions of empty space and theorize about the mundane notion of air pressure. Henning Genz's Nothingness provides a more focused look at nothing, emphasizing the history of scientific attempts to understand empty space. The early part of the book covers much of the same historical ground as Huggett's anthology but places special emphasis on those developments that led to our modern understanding. While such an overtly Whiggish approach must be regarded with caution, it does allow us to abstract from hopelessly unfamiliar epistemological networks those insights—particularly some truly classic experiments—that could inspire and even withstand later scientific scrutiny.

The historical consideration of nothing that resonates most strongly with our contemporary understanding comes out of the atomist tradition. The first atomists were Leucippus, who dared propose that space could be empty, and Democritus, who coined the term atom (both in the fifth century B.C.). The atomists believed that physical reality consisted of atoms separated by empty space. Remove the atoms and what remained would be nothing, at least a Seinfeldian sort of nothing. Atomism was a minority viewpoint that coexisted alongside the prevailing Aristotelian tradition of the horror vacui. To the limited degree that experiments could distinguish between these views, the experiments generally came down on the side of the atomists. But the experiments were inconclusive, and the horror vacui fit more snugly into the prevailing metaphysical schemes.

The history of attempts to create, understand, and defend nothing is remarkable. Consider the contribution of Strato, a disciple of Aristotle. Strato blew air into a metal sphere through a pipe. He noted that, even though the sphere was initially full of air, it was possible to blow in some more. How could this be? Strato argued that the sphere was initially filled with invisible atoms of air separated by empty space. By eliminating some of this empty space, more atoms could be added.

We do not know what Strato's more traditionally Aristotelian critics thought of this experiment. No doubt they found it curious, but they were not inclined to abandon the horror vacui. Large edifices of theory are rarely toppled by such gentle observational pressure. Nevertheless, Strato was right; they were wrong.

In the seventeenth century, Galileo, Torricelli, Pascal, Boyle, and others conducted a series of experiments that sounded the death knell for the horror vacui. The modern understanding of air pressure emerged through the invention of simple barometers in which the ambient air pressure pushed up a column of mercury. Air pressure was found to diminish at higher altitudes. Bodies of all sorts were observed to fall at the same rate in evacuated chambers; heated air was observed to expand; there was no sound from a bell ringing under an evacuated jar. Such fascinating public demonstrations played an evangelistic role in spreading the good news that modern science had arrived and brought nothing with it.

But had a true nothing been achieved? Was the horror vacui refuted? Nobody cared. The large philosophical and theological question of nothing had been taken over by the physicists and reduced to a manageable theory of air pressure. Did nature abhor a vacuum? Perhaps, but look at the bell ringing silently inside the jar; look at the horses pulling on the sphere. Isn't that neat? What was that funny Latin phrase you asked about?

The nothing on display in seventeenth-century Europe was in fact a Seinfeldian nothing. Light, for example, still traveled through the bell jar, and that meant that the space in the jar must at least contain ether, for ether was believed to be essential for the transmission of light, just as air is essential for the transmission of sound.

The ether retained its central role all the way to the beginning of the twentieth century, despite the repeated failure of attempts to measure it directly. If it filled all of space like some cosmic fog, then the earth must travel through it on its way around the sun. This should be detectable, but all the experiments that tried to measure this "ether wind" failed.

Ingenious rationalizations of this failure were duly produced. Ultimately the venerable ether passed away, just as the horror vacui had two centuries earlier. The ether did not die because it was definitively refuted; it died because there seemed to be less and less for it to do. Like some newly retired power-broker, demoralized at the onset of irrelevance, it died.

ARE WE THERE YET?

Nothing had come a bit closer. A region of space with no matter in it, no ether, no gravitational forces, no light passing through—surely this was nothing. But then, in a curious historical reversal, a brand-new physics began to populate the vacuum with all manner of interesting quantum creatures, just as all the classical creatures were whisked away. In fact, one might say that the classical creatures had to be removed before the quantum creatures could become visible.

The behavior of quantum nothingness is a complex, rich topic, far beyond the scope of this essay; interested readers are encouraged to enroll in a doctoral program in physics. If that is out of the question, they should at least read Genz to get a feel for this fascinating subject. I will mention only one aspect.

Heisenberg's Uncertainty Principle is one of the most profound discoveries in all of science. Along with the second law of thermodynamics, it represents a portion of our understanding that physicists are quite certain will never be superseded. A number of very interesting physical results follow from the Uncertainty Principle, and it certainly has to be considered a critical part of the rational substructure—the logos—of the physical universe.

The most familiar expression of the Uncertainty Principle states that the momentum and the position of a particle, such as an electron, cannot both be known precisely at the same time. As the position of the electron is measured to greater degrees of accuracy, the momentum becomes more uncertain, and vice versa.
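In symbols (standard textbook notation, not a quotation from Genz), the trade-off reads

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]

where \(\Delta x\) is the uncertainty in position, \(\Delta p\) the uncertainty in momentum, and \(\hbar\) is Planck's constant divided by \(2\pi\). Squeeze \(\Delta x\) toward zero and \(\Delta p\) must grow without limit, and vice versa.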

But the Uncertainty Principle actually says much more. The principle is generally interpreted to imply that a quantum particle does not even possess a well-defined position or momentum, and the artificial process of forcing a particle to assume a well-defined position, by measuring it, can only be accomplished if the already uncertain momentum acquires an even greater uncertainty. Epistemology, as John Polkinghorne likes to say, models ontology.

The implications of this odd trade-off are profound. In the case of the atom, for example, the Uncertainty Principle determines the size of the smallest possible orbit. As the electron gets closer to the nucleus, its orbit becomes smaller; as the orbit gets smaller the location becomes more precise and less uncertain; as the location becomes less uncertain the momentum becomes more uncertain, meaning that the electron must move about more furiously, which tends to push it out into a larger orbit. The size of the smallest possible orbit is set by the requirement that the product of the uncertainty in the position and the uncertainty in the momentum be greater than some fundamental constant of nature.
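A back-of-the-envelope sketch (ours, not the author's) shows how the principle fixes that smallest orbit. Confine an electron of mass \(m\) within a radius \(r\), so that \(\Delta x \sim r\) and, by the Uncertainty Principle, \(\Delta p \sim \hbar/r\). The kinetic energy this forces on the electron, roughly \(\hbar^{2}/2mr^{2}\), competes with the electrostatic attraction of the nucleus, and the total energy

\[
E(r) \;\approx\; \frac{\hbar^{2}}{2mr^{2}} \;-\; \frac{e^{2}}{4\pi\epsilon_{0}r}
\]

is minimized when \(r \approx 4\pi\epsilon_{0}\hbar^{2}/me^{2} \approx 0.5 \times 10^{-10}\) meters, the familiar Bohr radius of the hydrogen atom. Squeeze the electron into a smaller orbit and the momentum uncertainty promptly pushes it back out.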


This constraint shows up in a number of other ways as well. A pendulum cannot sit completely at rest in its lowest position. To do so would require that it have a completely specified momentum (zero) and a completely specified position. The Uncertainty Principle demands that the pendulum wobble just a bit, although the amount is too small to be detectable in ordinary pendulums.
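For the record (a standard result, not one Genz spells out in this form), the residual wobble carries a definite minimum energy. Any oscillator of natural frequency \(\omega\), pendulums included, can never have less energy than the zero-point value

\[
E_{0} \;=\; \tfrac{1}{2}\hbar\omega ,
\]

and because \(\hbar\) is so tiny and a pendulum's frequency so low, that minimum lies hopelessly far below anything we could measure.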

A less familiar expression of the Uncertainty Principle involves a similar relationship between energy and time; this has profound implications for our understanding of empty space, and perhaps even the universe as a whole. A region of space from which all the matter has been removed is at best a Seinfeldian nothing, still filled with radiation. If we enclose a region of empty space in a box, somewhere beyond the reach of any local gravity, what will be in the box? The answer is radiation. The walls of the box will assume the ambient temperature of the universe, currently about –270 degrees Centigrade and dropping very slowly, and radiation in the form of photons will be emitted and absorbed by the walls inside the box. If we cool the box to absolute zero, which is only three degrees below the current temperature of the universe, then the walls of the box will absorb energy but not emit any. (Material at absolute zero has no energy to emit; it can only absorb.) Now, what is in the box? Have we achieved nothing, other than this fascinating box somewhere out in space?
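The energy-time version of the principle (again in standard textbook notation) reads

\[
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2},
\]

meaning that the energy of a system observed over a short interval \(\Delta t\) cannot be pinned down more sharply than about \(\hbar/2\Delta t\). It is this version that forbids the radiation field in our box from settling down to an exact and permanent zero.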

We are entering the mysterious realm of the quantum vacuum, a place so bizarre that it makes the Twilight Zone look like a clothing store. Because of the Uncertainty Principle, the energy level of the radiation in the box—in the quantum vacuum—can never be exactly zero. The supercooled walls of the box steadily extract energy from the space, just as the cold coils of your freezer extract energy from things stored there, or from your tongue if it happens to be stuck there. But the lowest possible energy state of the quantum vacuum must retain a tiny bit of uncertainty. Nature must obey the Uncertainty Principle. How does She do this? After all the energy has been removed, what remains to preserve uncertainty?

The lowest energy state of the quantum vacuum does indeed have zero energy, but not in the normal sense of that word. Zero is the average value of an energy level that fluctuates up and down, positive and negative, about zero. Heisenberg's Uncertainty Principle stipulates that the lowest energy in a vacuum must fluctuate about zero, just as a pendulum at its lowest point must oscillate about zero, and an electron must have a smallest possible orbit. These requirements on physical systems—the rules they must obey, as they make their way through the spacetime adventure of their existence—are a part of the rational structure of the world.

The closest we have gotten to absolute nothingness, to a true vacuum from which Nature recoils in Aristotelian horror, is the quantum vacuum. But as we have seen, even the quantum vacuum is at best a Seinfeldian nothing, devoid of traditional matter to be sure, but filled nevertheless with activity. The fluctuations imposed on the energy levels of the vacuum by the Uncertainty Principle have all sorts of implications.

Perhaps the most remarkable is the production of what are known as virtual particles. (The reader may want to reconsider that doctoral program.) All matter comes in duos of matter/antimatter. The familiar electron has the less familiar positron or antielectron as its antimatter partner. Every ordinary particle has an electrical charge that is equal but opposite to the charge of its antiparticle. This matter/antimatter mirror-imaging means that a particle/antiparticle pair can suddenly appear, seemingly out of nothing, without violating the law of conservation of electrical charge. This is what keeps the universe from getting a shock.

But what about conservation of energy? That law is maintained by the temporary "borrowing" of energy allowed by the Uncertainty Principle. A matter/antimatter pair is allowed to borrow some energy from the fluctuating vacuum as long as the debt is repaid on time.

And if the borrowed energy is small enough, the payback time can be quite long—billions of years, in fact. If the borrowed energy is large, such as that needed for the creation of an electron and a positron, then the loan must be paid back almost immediately—in less than a billionth of a second. That is why those particles generally appear in what is called a virtual mode—their fleeting appearance is so transient that they don't actually make it all the way from the Twilight Zone to the real world. They hover temporarily in an ontological limbo.
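A rough estimate (using standard values, not figures quoted by Genz) shows just how short the loan is for an electron-positron pair. The borrowed energy is at least the rest energy of the two particles, about \(2 \times 0.511\ \text{MeV} \approx 3.3 \times 10^{-13}\) joules, so the energy-time relation permits the pair to linger for only about

\[
\Delta t \;\sim\; \frac{\hbar}{2\,\Delta E} \;\approx\; \frac{1.05 \times 10^{-34}\ \text{J·s}}{3.3 \times 10^{-13}\ \text{J}} \;\approx\; 3 \times 10^{-22}\ \text{seconds},
\]

far less than the billionth of a second mentioned above.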

Surely this is science fiction? Not at all. It is well-established physics, as you would know if you had stayed with that doctoral program. Genz puts it like this: "We can say for sure that quantum theory, when joined to the special theory of relativity, mandates the permanent and ubiquitous existence of fluctuations that include the appearance and disappearance of virtual particle pairs."

Genz's accounting is in general agreement with our best current understanding of physics. More or less identical statements can be found in other treatments, such as Timothy Ferris's excellent The Whole Shebang, Martin Rees's Before the Beginning, or Lee Smolin's The Life of the Cosmos. At this point the cosmologists face a dilemma. They have established that a quantum vacuum can erupt and sometimes produce matter out of a Seinfeldian nothing. If the energy of the eruption is small enough the payback time can be rather long. By a strange "coincidence," the total energy of the universe, when you add up the positive and negative contributions, turns out to be very close to zero. Could our universe be a low-energy, long-lifetime vacuum fluctuation—a 15-billion-year episode of Seinfeld, full of sound and fury, but signifying nothing?

The cosmologists have ascended a rather high peak. They should be able to see for miles. Instead they remain shrouded in a quantum fog. But, as they peer through the fog, they sometimes think they can see a universe being created out of nothing—a spontaneous creatio ex nihilo, a creation without a creator.

CREATIO EX NIHILO … ALMOST

Some time ago I had the opportunity to hear MIT cosmologist Alan Guth give a public lecture on the topic of the quantum vacuum. Guth is an excellent speaker and waxed eloquent about his contribution to that field—a variation known as "inflation." In Guth's inflationary model, a vacuum fluctuation gave birth to a tiny universe that experienced a brief period of anomalously rapid expansion during which it grew from something the size of a proton to something the size of a softball. This, Guth contends, is how our universe originated.

Guth concluded his lecture with the following words: "And that is how you get a universe from nothing." He paused briefly and then added, quietly and almost as an afterthought, "Well, almost nothing."

But there is a world—perhaps even a universe—of difference between nothing and almost nothing. If Genz, Guth, and the cosmological community are right, we have in our possession a most remarkable theory of how our universe developed under the creative jurisdiction of a preexisting rational order in the form of the laws of physics. A vacuum may have nothing in it in some Seinfeldian sense, and something—maybe even a universe—may indeed erupt out of such a vacuum. But the quantum vacuum is not a true nothing. It is a hive of activity as particles come and go, faithfully following the logos that we know as the Uncertainty Principle. The vacuum is also a locus for the creative incarnation of possibilities already implicit in the logos of the physical laws. The quantum vacuum may breathe fire into the equations, but the equations precede the vacuum both temporally and ontologically.

Genz, and many others like him, have scaled the cosmological heights and are hoping their myopic gaze will penetrate the fog and give them a glimpse of the promised land. They end up seduced by a mirage—a mirage in which a really good question is mistaken for an answer. There is no explanation within physics for the ultimate origin of the universe.

In the beginning was the Word.

Karl W. Giberson is professor of physics at Eastern Nazarene College.

1. For a recent history of atomism that nicely complements Genz's study, see Bernard Pullman's The Atom in the History of Human Thought (Oxford Univ. Press, 1998).
