Category Archives: Physics

What is Space? Part I

Does Space Have More Than 3 Dimensions?
Written by Sten Odenwald
Copyright (C) 1984 Kalmbach Publishing. Reprinted by permission

The intuitive notion that the universe has three dimensions seems to be an irrefutable fact. After all, we can only move up or down, left or right, in or out. But are these three dimensions all we need to describe nature? What if there are more dimensions? Would they necessarily affect us? And if they didn’t, how could we possibly know about them? Some physicists and mathematicians investigating the beginning of the universe think they have some of the answers to these questions. The universe, they argue, has far more than three, four, or five dimensions. They believe it has eleven! But let’s step back a moment. How do we know that our universe consists of only three spatial dimensions? Let’s take a look at some “proofs.”

On a 2-dimensional piece of paper you can draw an infinite number of polygons. But when you try this same trick in 3 dimensions you run up against a problem: there are five and only five regular polyhedra. A regular polyhedron is defined as a solid figure whose faces are identical polygons – triangles, squares, or pentagons – and which is constructed so that only two faces meet at each edge. If you were to move from one face to another, you would cross over only one edge. Shortcuts through the inside of the polyhedron that could get you from one face to another are forbidden. Long ago, the mathematician Leonhard Euler demonstrated an important relation between the number of faces (F), edges (E), and corners (C) for every regular polyhedron: C – E + F = 2. For example, a cube has 6 faces, 12 edges, and 8 corners, while a dodecahedron has 12 faces, 30 edges, and 20 corners. Run these numbers through Euler’s equation and the resulting answer is always two, the same as with the remaining three polyhedra. Only five regular solids satisfy this relationship – no more, no less.
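As a quick check on the arithmetic, here is a short Python sketch (ours, not from the original article) that runs all five regular solids through Euler's relation; the corner, edge, and face counts are the standard textbook values.

```python
# Verify Euler's relation C - E + F = 2 for the five regular solids.
# The (corners, edges, faces) counts are the standard textbook values.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (corners, edges, faces) in solids.items():
    print(f"{name:13s} {corners} - {edges} + {faces} = {corners - edges + faces}")
```

Every line prints 2, just as Euler's relation demands.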

Not content to restrict themselves to only three dimensions, mathematicians have generalized Euler’s relationship to higher dimensional spaces and, as you might expect, they’ve come up with some interesting results. In a world with four spatial dimensions, for example, we can construct only six regular solids. One of them – the “hypercube” – is a solid figure in 4-D space bounded by eight cubes, just as a cube is bounded by six square faces. What happens if we add yet another dimension to space? Even the most ambitious geometer living in a 5-D world would only be able to assemble three regular solids. This means that two of the regular solids we know of – the icosahedron and the dodecahedron – have no partners in a 5-D universe.
For those of you who successfully mastered visualizing a hypercube, try imagining what an “ultracube” looks like. It’s the five-dimensional analog of the cube, but this time it is bounded by one hypercube on each of its 10 faces! In the end, if our familiar world were not three-dimensional, geometers would not have found only five regular polyhedra after 2,500 years of searching. They would have found six (with four spatial dimensions) or perhaps only three (if we lived in a 5-D universe). Instead, we know of only five regular solids. And this suggests that we live in a universe with, at most, three spatial dimensions.

All right, let’s suppose our universe actually consists of four spatial dimensions. What happens? Since relativity tells us that we must also consider time as a dimension, we now have a space-time consisting of five dimensions. A consequence of 5-D space-time is that gravity has freedom to act in ways we may not want it to.

To the best available measurements, gravity follows an inverse square law; that is, the gravitational attraction between two objects rapidly diminishes with increasing distance. For example, if we double the distance between two objects, the force of gravity between them becomes 1/4 as strong; if we triple the distance, the force becomes 1/9 as strong, and so on. A five-dimensional theory of gravity introduces additional mathematical terms to specify how gravity behaves. These terms can have a variety of values, including zero. If they were zero, however, this would be the same as saying that gravity requires only three space dimensions and one time dimension to “give it life.” The fact that the Voyager spacecraft could cross billions of miles of space over several years and arrive within a few seconds of their predicted times is a beautiful demonstration that we do not need extra spatial dimensions to describe motions in the Sun’s gravitational field.
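To see the inverse square law at work, here is a minimal Python sketch of the force ratios quoted above. The inverse-cube column is our own addition for contrast: by the usual Gauss's-law argument (not spelled out in the article), gravity spreading through four spatial dimensions would weaken as 1/r^3, which is one way an extra dimension could betray itself.

```python
# Relative strength of gravity versus distance, normalized to r = 1.
# 1/r^2 is the measured three-dimensional falloff; 1/r^3 is the
# hypothetical falloff in a world with four spatial dimensions.
for r in (1.0, 2.0, 3.0, 4.0):
    print(f"r = {r:.0f}:  1/r^2 = {1/r**2:.4f}   1/r^3 = {1/r**3:.4f}")
```

Doubling the distance gives 0.2500 (1/4) in the first column, and tripling gives 0.1111 (1/9), exactly as described.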

From the above geometric and physical arguments, we can conclude (not surprisingly) that space is three-dimensional – on scales ranging from that of everyday objects to at least that of the solar system. If this were not the case, then geometers would have found more than five regular polyhedra and gravity would function very differently than it does – Voyager would not have arrived on time. Okay, so we’ve determined that our physical laws require no more than the three spatial dimensions to describe how the universe works. Or do they? Is there perhaps some other arena in the physical world where multidimensional space would be an asset rather than a liability?

Since the 1920s, physicists have tried numerous approaches to unifying the principal natural interactions: gravity, electromagnetism, and the strong and weak forces in atomic nuclei. Unfortunately, physicists soon realized that general relativity in a four-dimensional space-time does not have enough mathematical “handles” on which to hang the frameworks for the other three forces. Between 1921 and 1927, Theodor Kaluza and Oskar Klein developed the first promising theory combining gravity and electromagnetism. They did this by extending general relativity to five dimensions. For most of us, general relativity is mysterious enough in ordinary four-dimensional space-time. What wonders could lie in store for us with this extended universe?

General relativity in five dimensions gave theoreticians five additional quantities to manipulate beyond the 10 needed to adequately define the gravitational field. Kaluza and Klein noticed that four of the five extra quantities could be identified with the four components needed to define the electromagnetic field. In fact, to the delight of Kaluza and Klein, these four quantities obeyed the same types of equations as those derived by Maxwell in the late 1800s for electromagnetic radiation. Although this was a promising start, the approach never really caught on and was soon buried by the onrush of theoretical work on the quantum theory of the electromagnetic force. It was not until work on supergravity theory began in 1975 that Kaluza and Klein’s method drew renewed interest. Its time had finally come.

What do theoreticians hope to gain by stretching general relativity beyond the normal four dimensions of space-time? Perhaps by studying general relativity in a higher-dimensional formulation, we can explain some of the constants needed to describe the natural forces. For instance, why is the proton 1836 times more massive than the electron? Why are there only six types of quarks and leptons? Why are neutrinos massless? Maybe such a theory can give us new rules for calculating the masses of fundamental particles and the ways in which they affect one another. These higher-dimensional relativity theories may also tell us something about the numbers and properties of a mysterious new family of particles – the Higgs bosons – whose existence is predicted by various cosmic unification schemes. (See “The Decay of the False Vacuum,” ASTRONOMY, November 1983.)

These expectations are not just the pipedreams of physicists – they actually seem to develop as natural consequences of certain types of theories studied over the last few years. In 1979, John Taylor at King’s College in London found that some higher-dimensional formalisms can give predictions for the maximum mass of the Higgs bosons (around 76 times that of the proton). As they now stand, unification theories can do no more than predict the existence of these particles – they cannot provide specific details about their physical characteristics. But theoreticians may be able to pin down some of these details by using extended theories of general relativity. Experimentally, we know of six leptons: the electron, the muon, the tauon, and their three associated neutrinos. The most remarkable prediction of these extended relativity schemes, however, holds that the number of leptons able to exist in a universe is related to the number of dimensions of space-time. In a 6-D space-time, for example, only one lepton – presumably the electron – can exist. In a 10-D space-time, four leptons can exist – still not enough to accommodate the six we observe. In a 12-D space-time, we can account for all six known leptons – but we also acquire two additional leptons that have not yet been detected. Clearly, we would gain much on a fundamental level if we could increase the number of dimensions in our theories just a little bit.

How many additional dimensions do we need to consider in order to account for the elementary particles and forces that we know of today? Apparently we require at least one additional spatial dimension for every distinct “charge” that characterizes how each force couples to matter. For the electromagnetic force, we need two electric charges: positive and negative. For the strong force that binds quarks together to form, among other things, protons and neutrons, we need three “color” charges – red, blue, and green. Finally, we need two “weak” charges to account for the weak nuclear force. If we add a spatial dimension for each of these charges, we end up with a total of seven extra dimensions. The properly extended theory of general relativity we seek is one with an 11-dimensional space-time, at the very least. Think of it – space alone must have at least 10 dimensions to accommodate all the fields known today.
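The bookkeeping in this paragraph is simple enough to spell out. This little Python sketch just tallies the charges listed above, one extra spatial dimension apiece:

```python
# One extra spatial dimension per "charge", as described above.
charges = {
    "electric (positive, negative)":   2,
    "strong color (red, blue, green)": 3,
    "weak":                            2,
}
extra = sum(charges.values())        # 7 internal dimensions
total = 3 + 1 + extra                # 3 space + 1 time + 7 internal
print(f"extra dimensions: {extra}, total space-time dimensions: {total}")
```

The tally gives 7 internal dimensions and an 11-dimensional space-time.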

Of course, these additional dimensions don’t have to be anything like those we already know about. In the context of modern unified field theory, these extra dimensions are, in a sense, internal to the particles themselves – a “private secret,” shared only by particles and the fields that act on them! These dimensions are not physically observable in the same sense as the three spatial dimensions we experience; they stand in relation to the normal three dimensions of space much like space stands in relation to time.

Today there is a veritable renaissance in finding unity among the forces and particles that compose the cosmos, some of it by methods other than those we have discussed, and these new approaches lead us to remarkably similar conclusions. It appears that a four-dimensional space-time is simply not complex enough for physics to operate as it does.

We know that particles called bosons mediate the natural forces. We also know that particles called fermions are affected by these forces. Members of the fermion family go by the familiar names of electron, muon, neutrino, and quark; bosons are the less well known graviton, photon, gluon, and intermediate vector bosons. Grand unification theories developed since 1975 now show these particles to be “flavors” of a more abstract family of superparticles – just as the muon is another type of electron. This is an expression of a new kind of cosmic symmetry – dubbed supersymmetry, because it is all-encompassing. Not only does it include the force-carrying bosons, but it also includes the particles on which these forces act. There also exists a corresponding force to help nature maintain supersymmetry during the various interactions. It’s called supergravity. Supersymmetry theory introduces two new types of fundamental particles – gravitinos and photinos. The gravitino has the remarkable property of mathematically moderating the strength of various kinds of interactions involving the exchange of gravitons. The photino, cousin of the photon, may help account for the “missing mass” in the universe.

Supersymmetry theory is actually a complex of eight different theories, stacked atop one another like the rungs of a ladder. The higher the rung, the larger is its complement of allowed fermion and boson particle states. The “roomiest” theory of all seems to be SO(8) (pronounced ess-oh-eight), which can hold 99 different kinds of bosons and 64 different kinds of fermions. But SO(8) outdoes its subordinate, SO(7), by only one extra dimension and one additional particle state. Since SO(8) is identical to SO(7) in all its essential features, we’ll discuss SO(7) instead. However, we know of far more than the 162 types of particles that SO(7) can accommodate, and many of the predicted types have never been observed (like the massless gravitino). SO(7) requires seven internal dimensions in addition to the four we recognize – time and the three “everyday” spatial dimensions. If SO(7) at all mirrors reality, then our universe must have at least 11 dimensions! Unfortunately, it has been demonstrated by W. Nahm at the European Center for Nuclear Research in Geneva, Switzerland, that supersymmetry theories for space-times with more than 11 dimensions are theoretically impossible. SO(7) evidently has the largest number of spatial dimensions possible, but it still doesn’t have enough room to accommodate all known types of particles.

It is unclear where these various avenues of research lead. Perhaps nowhere. There is certainly ample historical precedent for ideas that were later abandoned because they turned out to be conceptual dead-ends. Yet what if they turn out to be correct at some level? Did our universe begin its life as some kind of 11-dimensional “object” which then crystallized into our four-dimensional cosmos?

Although these internal dimensions may not have much to do with the real world at the present time, this may not always have been the case. E. Cremmer and J. Scherk of l’Ecole Normale Superieure in Paris have shown that just as the universe went through phase transitions in its early history when the forces of nature became distinguishable, the universe may also have gone through a phase transition when its dimensionality changed. Presumably matter has something like four external dimensions (the ones we encounter every day) and something like seven internal dimensions. Fortunately for us, these seven extra dimensions don’t reach out into the larger 4-D realm where we live. If they did, a simple walk through the park might become a veritable obstacle course, littered with wormholes in space and who knows what else!

Alan Chodos and Steven Detweiler of Yale University have considered the evolution of a universe that starts out being five-dimensional. They discovered that while the universe eventually does evolve to a state where three of the four spatial dimensions expand to become our world at large, the extra fourth spatial dimension shrinks to a size of 10^-31 centimeter by the present time. The fifth dimension of the universe has all but vanished and is 18 powers of 10 – a billion billion times – smaller than the size of a proton. Although the universe appears four-dimensional in space-time, this perception is accidental, due to our large size compared to the scale of the other dimensions. Most of us think of a dimension as extending all the way to infinity, but this isn’t the full story. For example, if our universe is really destined to re-collapse in the distant future, the three-dimensional space we know today is actually limited itself – it will eventually possess a maximum, finite size. It just so happens that the physical size of human beings forces us to view these three spatial dimensions as infinitely large.
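To put that comparison on firmer footing, here is a one-line calculation of our own, assuming the standard figure of roughly 10^-13 centimeter for the size of a proton:

```python
# How many powers of ten separate a proton from the collapsed fifth
# dimension? Proton size ~1e-13 cm is the standard estimate;
# 1e-31 cm is the figure quoted in the text.
import math

proton_cm = 1e-13
fifth_dimension_cm = 1e-31
print(f"10^{math.log10(proton_cm / fifth_dimension_cm):.0f}")  # 10^18
```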

It is not too hard to reconcile ourselves to the notion that the fifth (or sixth, or eleventh) dimension could be smaller than an atomic nucleus – indeed, we can probably be thankful that this is the case.

The Cosmological Redshift

Galaxy Redshifts Reconsidered

Written by Sten Odenwald 
Copyright (C) 1993 Sky Publishing Corporation. Reprinted by permission. See February 1993 issue

Since its discovery nearly 65 years ago, the cosmological redshift has endured as one of the most persuasive ‘proofs’ that our universe is expanding. The steps leading to its discovery are well known. Soon after Christian Doppler discovered in 1842 that motion produces frequency shifts, astronomers began an aggressive spectroscopic program to measure the velocities of stars and planets using their Doppler shifts. This continued through the first few decades of the 20th century, culminating in the work by Vesto Slipher, Edwin Hubble and Milton Humason on the so-called spiral nebulae — distinctly non-stellar objects that also seemed to display star-like Doppler shifts. So long as velocities of only a few hundred kilometers per second were measured, no one questioned that the frequency shifts for the spiral nebulae indicated relative motion just as they had for stars and planets.
But during the 1920s and ’30s, spiral nebulae with Doppler shifts of over 34,000 kilometers per second were discovered. In a 1931 letter to the Dutch cosmologist Willem de Sitter, Hubble stated his concerns about these velocities by saying “… we use the term ‘apparent velocities’ in order to emphasize the empirical feature of the correlation. The interpretation, we feel, should be left to you and the very few others who are competent to discuss the matter with authority.” Despite this cautionary note, the fact of the matter was that the redshifts measured for the distant galaxies LOOKED like Doppler shifts. The terms ‘recession velocity’ and ‘expansion velocity’ were quickly brought into service by astronomers at the telescope, and by popularizers, to describe the physical basis for the redshift.

As astronomers explored the universe to greater depths, galaxies and quasars appeared to be rushing away at faster and faster speeds. This seemed a completely natural consequence of the outrushing of matter from the big bang. Like a sparkling display of fireworks on a warm summer evening, we imagine ourselves standing on one of those galactic ‘cinders’, watching the others rush past us into the dark void of infinite space. Upon closer examination, however, this intuitively compelling and seductive mental image is both inadequate and misleading.

The Mysteries of Relativity

Big bang cosmology is based on Einstein’s general theory of relativity. It is a theory transcending both Newton’s mechanics and Einstein’s special theory of relativity, introducing us to concepts that do not exist within the older theories. Nor are these concepts easily comprehensible by our common sense which has been honed by organic evolution to see the world only through a narrow set of glasses.

For example, special relativity is based on the difficult-to-fathom postulate that the speed of light is absolutely constant when measured in any reference frame moving at a constant speed. From this emerges the concept of ‘spacetime’, which then becomes the arena for all phenomena involving time dilation, length contraction and the Twin Paradox. Beyond special relativity lies the incomparably more alien landscape of general relativity. Gravitational fields now become geometric curvatures of spacetime. This has no analog in special relativity, based as it is on a perfectly flat spacetime that remains aloof from any influence on it by matter or energy.

Just as the constancy of the speed of light led to the Twin Paradox, the curvature of spacetime leads to its own menagerie of peculiar phenomena. One of these involves the slowing-down of clocks in the presence of a strong gravitational field. Related to this is the “gravitational redshift” which occurs when the frequency of light sent from the surface of a body is shifted to lower frequencies during the journey to the observer. This redshift is not related to the famous Doppler shift since the observer is not in motion relative to the body emitting the light signal!

A second phenomenon predicted by general relativity that also has no analog in special relativity is the cosmological redshift. Simply stated, the cosmological redshift occurs because the curvature of spacetime was smaller in the past when the universe was younger than it is now. Light waves become stretched en route between the time they were emitted long ago, and the time they are detected by us today.

The Doppler shift and cosmology

It is tempting to refer to cosmological redshifts as Doppler shifts. This choice of interpretation has, in the years since Hubble’s work, led to an unfortunate misunderstanding of big bang cosmology, obscuring one of its most mysterious beauties. As noted with a hint of frustration by cosmologists such as Steven Weinberg, Jayant Narlikar and John Wheeler, “The frequency of light is also affected by the gravitational field of the universe, and it is neither useful nor strictly correct to interpret the frequency shifts of light…in terms of the special relativistic Doppler effect.”

By referring to cosmological redshifts as Doppler shifts, we are insisting that our Newtonian intuition about motion still applies without significant change to the cosmological arena. A result of this thinking is that quasars now being detected at redshifts of z = 4.0 would have to be interpreted as traveling at speeds of v = z × c, or 4 times the speed of light. This is, of course, quite absurd, because we all know that no physical object may travel faster than the speed of light.

To avoid such apparently nonsensical speeds, many popularizers use the special relativistic Doppler formula to show that quasars are really not moving faster than light. The argument is that for large velocities, special relativity replaces Newtonian physics as the correct framework for interpreting the world. By the special relativistic Doppler formula, the quasar we just discussed has a velocity of 92 percent of the speed of light. Although we now have a feeling that Reason has returned to our description of the universe, in fact, we have only replaced one incomplete explanation with another. The calculation of the quasar’s speed now presupposes that special relativity (a theory of flat spacetime) is applicable even at cosmological scales where general relativity predicts that spacetime curvature becomes important. This is equivalent to a surveyor making a map of the state of California without allowing for the curvature of the earth!
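For the curious, both numbers are easy to reproduce. The sketch below (ours, not the article's) compares the naive reading v = z × c with the special relativistic Doppler formula, solved for v/c:

```python
# Naive versus special relativistic velocity for a redshift z.
# From 1 + z = sqrt((1 + b)/(1 - b)), solving for b = v/c gives
# b = ((1 + z)^2 - 1) / ((1 + z)^2 + 1).
def naive_velocity(z):
    return z                      # in units of c

def sr_doppler_velocity(z):
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)  # in units of c

z = 4.0
print(f"naive:        v = {naive_velocity(z):.2f} c")       # 4.00 c
print(f"relativistic: v = {sr_doppler_velocity(z):.3f} c")  # 0.923 c
```

The relativistic formula indeed returns about 92 percent of the speed of light for z = 4, but, as argued above, applying it at cosmological distances is the wrong move in the first place.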

The adoption of the special relativistic Doppler formula by many educators has led to a peculiar ‘hybrid’ cosmology which attempts to describe big bang cosmology using general relativity, but which is still firmly mired in the rubric of special relativity. For instance, under the entry ‘redshift’ in the Cambridge Encyclopedia of Astronomy it is explicitly acknowledged that the redshift is not a Doppler shift, but less than two paragraphs later, the special relativistic Doppler formula is introduced to show how quasars are moving slower than the speed of light! It is also common for popularizers of cosmology to describe how ‘space itself stretches’ yet continue to describe the expansion of the universe as motion governed by the restrictions of special relativity. What’s going on here?

General relativity to the rescue

By adopting general relativity as the proper guide, such contradictions are eliminated. General relativity leads us to several powerful conclusions about our cosmos: 1) special relativity is inapplicable for describing the larger universe; 2) the concepts of distance and motion are not absolutely defined; and 3) preexisting spacetime is undefined. Each of these conclusions is as counter-intuitive as the Twin Paradox or the particle/wave dualism of quantum mechanics. As the physicist John Wheeler once put it, “If you are not completely confused by quantum mechanics, you do not understand it.” The same may be said for general relativity.

The first conclusion means that we cannot trust even the insights hard won from special relativity to accurately represent the ‘big picture’ of the universe. General relativity must replace special relativity in cosmology because it denies a special role to observers moving at constant velocity, extending special relativity into the arena of accelerated observers. It also denies a special significance to special relativity’s flat spacetime by relegating it to only a microscopic domain within a larger geometric possibility. Just as Newtonian physics gave way to special relativity for describing high speed motion, so too does special relativity give way to general relativity. This means that the special relativistic Doppler formula should not, and in fact cannot, be used to quantify the velocity of distant quasars. We have no choice in this matter if we want to maintain the logical integrity of both theories.

Distance and motion

The second conclusion is particularly upsetting, because if we cannot define what we mean by distance, how then can we discuss in meaningful terms the ‘motion’ of distant quasars, or a Hubble Law interpreted as a distance-versus-velocity relation? In a small region of spacetime, we can certainly define motion as we always have, because space has a static, flat geometry. When a body moves from point x to point y in a time interval T, we say it is moving with a speed of S = (x – y)/T. There are also specific experimental ways of measuring x, y and T to form the quotient S, using clocks and rulers. The crucial feature behind these measurements is that nothing happens to the geometry of space during the experiment to change the results of the measuring process.

In the cosmological setting which we believe is accurately described by general relativity, we have none of these luxuries! Astronomers cannot wait millions of years to measure quasar proper motions. They cannot, like Highway Patrol officers, bounce radar beams off distant galaxies to establish their relative distances or speeds. Unlike all other forms of motion that have been previously observed, cosmological ‘motion’ cannot be directly observed. It can only be INFERRED from observations of the cosmological redshift, which general relativity then TELLS US means that the universe is expanding.

In big bang cosmology, galaxies are located at fixed positions in space. They may perform small dances about these positions in accordance with special relativity and local gravitational fields, but the real ‘motion’ is in the literal expansion of space between them! This is not a form of movement that any human has ever experienced. It is, therefore, not surprising that our intuition reels at its implication and seeks other, less radical interpretations for it, including special relativity. But even the exotic language and conundrums of special relativity cannot help us. Instead we are forced to interrogate the mathematics of general relativity itself for whatever landmarks it can provide. In doing so, we are left, however, with a riddle as profound as that of the Twin Paradox, and equally challenging to explain.

Two galaxies permanently located at positions (x1, y1, z1) and (x2, y2, z2) at one time find themselves one billion light years apart. Then, a few billion years later, while located at the same coordinates, they find themselves 3 billion light years apart. The galaxies have not ‘moved’; nevertheless, their separations have increased. In fact, when the universe was only one year old, the separations between these galaxies were increasing at 300 times the speed of light! Space can expand faster than the speed of light in general relativity because space does not represent matter or energy. The displacements that arise from its dilation produce an entirely new kind of motion for which even our special relativistically-trained intuitions remain profoundly silent. Like that gentleman from Maine once said, “You can’t get there [to general relativity] from here [special relativity].” To the extent that general relativity has been tested and found correct, we have no choice but to accept its consequences at face value.
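The bookkeeping behind this riddle can be sketched in a few lines of Python; the coordinates and scale factor values below are invented purely for illustration. The galaxies' coordinates never change, yet the space between them dilates:

```python
# Proper distance = scale factor x fixed (comoving) coordinate separation.
import math

def proper_separation(p1, p2, scale_factor):
    comoving = math.dist(p1, p2)   # fixed coordinate separation
    return scale_factor * comoving

galaxy1 = (0.0, 0.0, 0.0)   # coordinates never change
galaxy2 = (1.0, 0.0, 0.0)   # coordinates never change

print(proper_separation(galaxy1, galaxy2, scale_factor=1.0))  # 1.0 "then"
print(proper_separation(galaxy1, galaxy2, scale_factor=3.0))  # 3.0 "later"
```

Neither galaxy has moved through space, yet their separation has tripled, which is exactly the situation described above.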

Space, time and matter

The last conclusion drawn from general relativistic cosmology is that, unlike special relativity, it is not physically meaningful to speak of spacetime existing independently of matter and energy. In big bang cosmology, both space and time came into existence alongside matter and energy at ‘time zero’. If our universe contains more than a critical density of matter and energy, its spacetime is forever finite and bounded, in a shape analogous to a sphere. Beyond this boundary, space and time simply do not exist. In fact, general relativity allows the Conservation of Energy to be suspended so that matter and energy may be created quite literally from the nothingness of curved spacetime. General relativity provides a means for ‘jump-starting’ Creation!

Big bang cosmology is both a profoundly beautiful, and disturbing, model for our universe, its shape and its destiny. It contains many surprises which have yet to be completely worked out. But one feature of the evolving universe seems absolutely clear: the big bang was not some grand fireworks display, but an event of a completely different order. It resembled more an expanding soap-bubble film upon which galactic dust motes are carried along for the ride. This film represents the totality of all the space and matter in our universe, and it expands into a mysterious primordial void which is itself empty of space, dimension, time or matter.

It is hoped that in the future a death knell will finally sound for the last vestige of the older thinking. With the Doppler interpretation of the cosmological redshift at last reconsidered, and rejected, we will finally be able to embrace the essential beauty and mystery of cosmic expansion as it was originally envisioned by its discoverers.

Einstein’s Fudge

Einstein’s Cosmic Fudge Factor

Written by Sten Odenwald
Copyright (C) 1991 Sky Publishing Corporation. Reprinted by permission. See April 1991 issue

Black holes…quarks…dark matter. It seems like the cosmos gets a little stranger every year. Until recently, the astronomical universe known to humans was populated by planets, stars, galaxies, and scattered nebulae of dust and gas. Now, theorists tell us it may also be inhabited by objects such as superstrings, dark matter and massive neutrinos — objects that have yet to be discovered if they exist at all!
As bizarre as these new constituents may sound, you don’t have to be a rocket scientist to appreciate the most mysterious ingredient of them all. It is the inky blackness of space itself that commands our attention as we look at the night sky, not the sparse points of light that signal the presence of widely scattered matter.

During the last few decades, physicists and astronomers have begun to recognize that the notion of empty space presents greater subtleties than had ever before been considered. Space is not merely a passive vessel to be filled by matter and radiation, but is a dynamic, physical entity in its own right.

One chapter in the story of our new conception of space begins with a famous theoretical mistake made nearly 75 years ago that now seems to have taken on a life of its own.

In 1917, Albert Einstein tried to use his newly developed theory of general relativity to describe the shape and evolution of the universe. The prevailing idea at the time was that the universe was static and unchanging. Einstein had fully expected general relativity to support this view, but, surprisingly, it did not. The inexorable force of gravity pulling on every speck of matter demanded that the universe collapse under its own weight.

His remedy for this dilemma was to add a new ‘antigravity’ term to his original equations. It enabled his mathematical universe to appear as permanent and invariable as the real one. This term, usually written as an uppercase Greek lambda, is called the ‘cosmological constant’. It has exactly the same value everywhere in the universe, delicately chosen to offset the tendency toward gravitational collapse at every point in space.

A simple thought experiment may help illustrate the nature of Lambda. Take a cubic meter of space and remove all matter and radiation from it. Most of us would agree that this is a perfect vacuum. But, like a ghost in the night, the cosmological constant would still be there. So, empty space is not really empty at all — Lambda gives it a peculiar ‘latent energy’. In other words, even Nothing is Something!

Einstein’s fudged solution remained unchallenged until 1922 when the Russian mathematician Alexander Friedmann began producing compelling cosmological models based on Einstein’s equations but without the extra quantity. Soon thereafter, theorists closely examining Einstein’s model discovered that, like a pencil balanced on its point, it was unstable to collapse or expansion. Later the same decade, Mount Wilson astronomer Edwin P. Hubble found direct observational evidence that the universe is not static, but expanding.

All this meant that the motivation for introducing the cosmological constant seemed contrived. Admitting his blunder, Einstein retracted Lambda in 1932. At first this seemed to end the debate about its existence. Yet decades later, despite the great physicist’s disavowal, Lambda keeps turning up in cosmologists’ discussions about the origin, evolution, and fate of the universe.

THEORY MEETS OBSERVATION

Friedmann’s standard ‘Big Bang’ model without a cosmological constant predicts that the age of the universe, t0, and its expansion rate (represented by the Hubble parameter, H0) are related by the equation t0 = 2/(3H0). Some astronomers favor a value of H0 near 50 kilometers per second per megaparsec (one megaparsec equals 3.26 million light years). But the weight of the observational evidence seems to be tipping the balance towards a value near 100. In the Friedmann model, this implies that the cosmos can be no more than 7 billion years old. Yet some of our galaxy’s globular clusters have ages estimated by independent methods of between 12 and 18 billion years!
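The age estimate is a one-line calculation. Here is a small Python sketch of it (our own, using standard unit conversions):

```python
# Age of a Friedmann (Lambda = 0) universe: t0 = 2/(3*H0).
KM_PER_MPC = 3.086e19       # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16      # seconds in one billion years

def friedmann_age_gyr(h0):  # h0 in km/s/Mpc
    h0_per_sec = h0 / KM_PER_MPC          # convert H0 to 1/s
    return 2.0 / (3.0 * h0_per_sec) / SEC_PER_GYR

for h0 in (50, 100):
    print(f"H0 = {h0:3d} km/s/Mpc -> t0 = {friedmann_age_gyr(h0):.1f} billion years")
```

H0 = 50 gives about 13 billion years, while H0 = 100 gives only about 6.5 billion — the "no more than 7 billion years" that clashes with the globular cluster ages.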

In cosmological models that retain the Lambda term, this discrepancy can be resolved. Now a large value for the Hubble parameter can be attributed in part to “cosmic repulsion”. This changes the relationship between t0 and H0, so that for a given size, the universe is older than predicted by the Friedmann model.

In one formulation of Einstein’s equation, Lambda is expressed in units of matter density. This means we can ask how the cosmological constant, if it exists at all, compares with the density of the universe in the forms of stars and galaxies.

So far, a careful look at the available astronomical data has produced only upper limits to the magnitude of Lambda. These vary over a considerable range – from about 10 percent of ordinary matter density to several times that density.

The cosmological constant can also leave its mark on the properties of gravitational lenses and faint galaxies. One of the remarkable features of Einstein’s theory of general relativity is its prediction that space and time become deformed or ‘warped’ in the vicinity of a massive body such as a planet, star or even a galaxy. Light rays passing through such regions of warped “space-time” have their paths altered. In the cosmological arena, nearby galaxies can deflect and distort the images of more distant galaxies behind them. Sometimes, the images of these distant galaxies can appear as multiple images surrounding the nearby ‘lensing’ galaxy.

At Kyoto University, M. Fukugita and his coworkers predicted that more faint galaxies and gravitational lenses will be detected than in a Friedmann universe if Lambda is more than a few times the matter density. Edwin Turner, an astrophysicist at Princeton University, also reviewed the existing, scant data on gravitational lenses and found that they were as numerous as expected for Lambda less than a few times the matter density. By the best astronomical reckoning, Lambda is probably not larger than the observed average matter density of the universe. For that matter, no convincing evidence is available to suggest that Lambda is not exactly equal to zero. So why not just dismiss it as an unnecessary complication? Because the cosmological constant is no longer, strictly, a construct of theoretical cosmology.

NOTHING AND EVERYTHING

To understand how our universe came into existence, and how its various ingredients have evolved, we must delve deeply into the fundamental constituents of matter and the forces that dictate how it will interact. This means that the questions we will have to ask will have more to do with physics than astronomy. Soon after the big bang, the universe was at such a high temperature and density that only the details of matter’s composition (quarks, electrons, etc.) and how they interact via the four fundamental forces of nature were important. They represented the most complex collections of matter in existence, long before atoms, planets, stars and galaxies had arrived on the scene.

For two decades now, physicists have been attempting to unify the forces and particles that make up our world – to find a common mathematical description that encompasses them all. Some think that such a Theory of Everything is just within reach. It would account not only for the known forms of matter, but also for the fundamental interactions among them: gravity, electromagnetism, and the strong and weak nuclear forces.

These unification theories are known by a variety of names: grand unification theory, supersymmetry theory and superstring theory. Their basic claim is that Nature operates according to a small set of simple rules called symmetries.

The concept of symmetry is at least as old as the civilization of ancient Greece, whose art and architecture are masterworks of simplicity and balance. Geometers have known for a long time that a simple cube can be rotated 90 degrees without changing its outward appearance. In two dimensions, equilateral triangles look the same when they are rotated by 120 degrees. These are examples of the geometric concept of Rotation Symmetry.

There are parallels to geometric symmetry in the way that various physical phenomena and qualities of matter express themselves as well. For example, the well-known principle of the Conservation of Energy is a consequence of the fact that when some collections of matter and energy are examined at different times, they each have precisely the same total energy, just as a cube looks the same when it is rotated in space by a prescribed amount. Symmetry under a ‘shift in time’ is as closely related to the Conservation of Energy as is the symmetry of a cube when rotated by 90 degrees.

Among other things, symmetries of Nature dictate the strengths and ranges of the natural forces and the properties of the particles they act upon. Although Nature’s symmetries are hidden in today’s cold world, they reveal themselves at very high temperatures and can be studied in modern particle accelerators.

The real goal in unification theory is actually two-fold: not only to uncover and describe the underlying symmetries of the world, but to find physical mechanisms for ‘breaking’ them at low energy. After all, we live in a complex world filled with a diversity of particles and forces, not a bland world with one kind of force and one kind of particle!

Theoreticians working on this problem are often forced to add terms to their equations that represent entirely new fields in Nature. The concept of a field was invented by mathematicians to express how a particular quantity may vary from point to point in space. Physicists since the 18th century have adopted this idea to describe quantitatively how forces such as gravity and magnetism change at different distances from a body.

The interactions of these fields with quarks, electrons and other particles cause symmetries to break down. These fields are usually very different from those we already know about. The much-sought-after Higgs field, for example, was introduced by Sheldon Glashow, Abdus Salam and Steven Weinberg in their unified theory of the electromagnetic and weak nuclear forces.

Prior to their work, the weak force, which causes certain particles to decay, and the electromagnetic force, responsible for the attraction between charged particles and the motion of compass needles, were considered to be distinct forces in nature. By combining their mathematical descriptions into a common language, they showed that this distinction was not fundamental to the forces at all! A new field in nature called the Higgs field makes these two forces act differently at low temperature. But at temperatures above 1000 trillion degrees, the weak and electromagnetic forces become virtually identical in the way that they affect matter. The corresponding particles, called Higgs bosons, not only cause the symmetry between the electromagnetic and weak forces to be broken at low temperature, but they are also responsible for conferring the property of mass on particles such as the electrons and the quarks!

There is, however, a price that must be paid for introducing new fields into the mathematical machinery. Not only do they break symmetries, but they can also give the vacuum state an enormous latent energy that, curiously, behaves just like Lambda in cosmological models.

The embarrassment of having to resurrect the obsolete quantity Lambda is compounded when unification theories are used to predict its value. Instead of being at best a vanishingly minor ingredient to the universe, the predicted values are in some instances 10 to the power of 120 times greater than even the most generous astronomical upper limits!

It is an unpleasant fact of life for physicists that the best candidates for the Theory of Everything always have to be fine-tuned to get rid of their undesirable cosmological consequences. Without proper adjustment, these candidates may give correct predictions in the microscopic world of particle physics, but predict a universe which on its largest scales looks very different from the one we inhabit.

Like a messenger from the depths of time, the smallness – or absence – of the cosmological constant today is telling us something important about how to craft a correct Theory of Everything. It is a signpost of the way Nature’s symmetries are broken at low energy, and a nagging reminder that our understanding of the physical world is still incomplete in some fundamental way.

A LIKELY STORY

Most physicists expect the Theory of Everything will describe gravity the same way we now describe matter and the strong, weak and electromagnetic forces – in the language of quantum mechanics. Gravity is, after all, just another force in Nature. So far this has proven elusive, due in part to the sheer complexity of the equations of general relativity. Scientists since Einstein have described gravity (as well as space and time) in purely geometric terms. Thus we speak of gravity as the “curvature of space-time”.

To achieve complete unification, the dialects of quantum matter and geometric space have to be combined into a single language. Matter appears to be rather precisely described in terms of the language of quantum mechanics. Quarks and electrons exchange force-carrying particles such as photons and gluons and thereby feel the electromagnetic and strong nuclear forces. But gravity is described by Einstein’s theory of general relativity as a purely geometric phenomenon. These geometric ideas of curvature and the dimensionality of space have nothing to do with quantum mechanics.

To unify these two great foundations of physics, a common language must be found. This new language will take some getting used to. In it, the distinction between matter and space dissolves away and is lost completely; matter becomes a geometric phenomenon, and at the same time, space becomes an exotic form of matter.

Beginning with work on a quantum theory of gravity by John Wheeler and Bryce DeWitt in the 1960s, and continuing with the so-called superstring theory of John Schwarz and Michael Green in the 1980s, a primitive version of such a ‘quantum-geometric’ language is emerging. Not surprisingly, it borrows many ideas from ordinary quantum mechanics.

A basic concept in quantum mechanics is that every system of elementary particles is defined by a mathematical quantity called a wave function. This function can be used, for example, to predict the probability of finding an electron at a particular place and time within an atom. Rather than a single quantity, the wave function is actually a sum over an infinite number of factors or ‘states’, each representing a possible measurement outcome. Only one of these states can be observed at a time.
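A toy example may make this concrete. The two-state 'wave function' below is pure invention for illustration; the point is only that each state carries an amplitude whose square is the probability of observing that state:

```python
# A two-state toy wave function: probability = amplitude squared.
amplitudes = [0.6, 0.8]                      # invented amplitudes
probabilities = [a * a for a in amplitudes]
print(probabilities)                         # [0.36, 0.64]
print(round(sum(probabilities), 10))         # 1.0 -- probabilities sum to 1
```

A measurement finds the system in the first state 36 percent of the time and the second 64 percent of the time, but never in both at once.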

By direct analogy, in quantum gravitation, the geometry of space-time, whether flat or curved, is only one of an infinite variety of geometric shapes for space-time, and therefore the universe. All of these possibilities are described as separate states in the wave function for the universe.

But what determines the probability that the universe will have the particular geometry we now observe out of the infinitude of others? In quantum mechanics, the likelihood that an electron is located somewhere within an atom is determined by the external electric field acting on it. That field is usually provided by the protons in the atomic nucleus. Could there be some mysterious field ‘outside’ our universe that determines its probability?

According to Cambridge University theorist Stephen Hawking, this is the wrong way to look at the problem. Unlike the electron acted upon by protons, our universe is completely self-contained. It requires no outside conditions or fields to help define its probability. The likelihood that our universe looks the way it does depends only on the strengths of the fields within it.

Among these internal fields, there may even be ones that we haven’t yet discovered. Could the cosmological constant be the fingerprint in our universe of a new ‘hidden’ field in Nature? This new field could affect the likelihood of our universe just as a kettle of soup may contain unknown ingredients although we can still precisely determine the kettle’s mass.

A series of mathematical considerations led Hawking to deduce that the weaker the hidden field becomes, the smaller will be the value we observe for the cosmological constant, and surprisingly, the more likely will be the current geometry of the universe.

This, in turn, implies that if Lambda were big enough for astronomers to measure in the first place, our universe would be an improbable one. Philosophically, this may not trouble those who see our cosmos as absolutely unique, but in a world seemingly ruled by probability, a counter view is also possible. There may, in fact, exist an infinite number of universes, but only a minority of them have the correct blend of physical laws and physical conditions resembling our life-nurturing one.

Hawking continued his line of speculation by suggesting that, if at the so-called Planck scale of 10 to the power of -33 centimeters the cosmos could be thought of as an effervescent landscape, or “space-time foam”, then perhaps a natural mechanism could exist for eliminating the cosmological constant for good.

One of the curiosities of combining the speed of light and Newton’s constant of gravitation from general relativity, with Planck’s constant from quantum mechanics, is that they can be made to define unique values for length, time and energy. Physicists believe that at these Planck scales represented by 10 to the power of -33 centimeters and 10 to the power of -43 seconds, general relativity and quantum mechanics blend together to become a single, comprehensive theory of the physical world: The Theory Of Everything. The energy associated with this unification, 10 to the power of 19 billion electron volts, is almost unimaginably big by the standards of modern technology.
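Those unique values follow from simple algebra on the three constants. Here is a short Python sketch of the standard calculation (SI values, rounded):

```python
# Planck length, time, and energy from c, G, and hbar.
import math

c    = 2.998e8      # speed of light, m/s
G    = 6.674e-11    # gravitational constant, m^3/(kg s^2)
hbar = 1.055e-34    # reduced Planck constant, J s
J_PER_GEV = 1.602e-10

length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m = 1.6e-33 cm
time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
energy = math.sqrt(hbar * c**5 / G)   # ~2.0e9 J

print(f"Planck length: {length:.2e} m")
print(f"Planck time:   {time:.2e} s")
print(f"Planck energy: {energy / J_PER_GEV:.2e} GeV")  # ~1.2e19 GeV
```

The results land on the 10 to the power of -33 centimeter, 10 to the power of -43 second, and 10 to the power of 19 GeV scales quoted above.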

The universe itself, soon after the Big Bang, must also have passed through such scales of space, time and energy during its first instants of existence. Cosmologists refer to this period as the Planck Era. It marks the earliest times that physicists are able to explore the universe’s physical state without having a complete Theory of Everything to guide them.

WORMHOLES

Harvard University physicist Sidney Coleman has recently pursued this thought to a possible conclusion. Instead of some mysterious new field in Nature, maybe the Lambda term appears in our theories because we are using the wrong starting model for the geometry of space at the Planck scale.

Previous thinking on the structure of space-time had assumed that it behaved in some sense like a smooth rubber sheet. Under the action of matter and energy, space-time could be deformed into a variety of shapes, each a possible geometric state for the universe. Nearly all candidates for the Theory of Everything embed their fields and symmetries in such a smooth geometrical arena.

But what if space-time were far more complicated? One possibility is that ‘wormholes’ exist, filling space-time with a network of tunnels. The fabric of space-time may have more in common with a piece of Swiss cheese than with a smooth rubber sheet.

According to Coleman, the addition of wormholes to space-time means that, like the ripples from many stones tossed into a pond, one geometric state for the universe could interfere with another. The most likely states (or the biggest ripples) would win out. The mathematics suggests that quantum wormhole interference at the Planck scale makes universes with cosmological constants other than zero exceedingly unlikely.

How big would wormholes have to be to have such dramatic repercussions? Surprisingly, the calculations suggest that small is beautiful. Wormholes the size of dogs and planets would be very rare. Universes containing even a few of them would exist with a vanishingly low probability. But wormholes smaller than 10 to the power of -33 centimeters could be everywhere. A volume the size of a sugar cube might be teeming with uncounted trillions of them flashing in and out of existence!

Coleman proposes that it is the action of these previously ignored mini-wormholes upon the geometric fabric of the universe that forces Lambda to be almost exactly zero. Like quantum ‘Pac Men’, they gobble up all the latent energy of space-time that would otherwise have appeared to us in the form of a measurable cosmological constant!

The addition of wormholes to the description of space-time admits the possibility that our universe did not spring into being aloof and independent, but was influenced by how other space-times had already evolved – ghostly mathematical universes with which we can never communicate directly.

The most likely of these universes had Lambda near zero, and it is these states that beat out all other contenders. In a bizarre form of quantum democracy, our universe may have been forced to follow the majority, evolving into the high probability state we now observe, without a detectable cosmological constant.

EPILOG

Wormholes? Wave functions? Hidden fields? The answer to the cosmological constant’s smallness, or absence, seems to recede into the farthest reaches of abstract thinking, faster than most of us can catch up.

As ingenious as these new ideas may seem, the final pages in this unusual story have probably not been written, especially since we can’t put any of these ideas to a direct test. It is a tribute to Einstein’s genius that even his ‘biggest blunder’ made near the beginning of this century still plagues physicists and astronomers as we prepare to enter the 21st century. Who would ever have thought that something that may not even exist would lead to such enormous problems!

The Planck Era

The Planck Era

Written by Sten Odenwald. Copyright (C) 1984 Kalmbach Publishing. Reprinted by permission

The Big Bang theory says that the entire universe was created in a tremendous explosion about 20 billion years ago. The enormity of this event is hard to grasp and it seems natural to ask ourselves ‘What was it like then?’ and ‘What happened before the Big Bang?’. To try to answer these queries, let’s take a brief journey backwards in time.
We first see the formation of our own sun about 15 billion years after the Big Bang, and then, by 5 billion years, the formation of the first galaxies. By 700,000 years, the universe is awash with the fireball radiation that keeps all matter at a temperature of 4,000 degrees. Because of this, darkness is completely absent, since every point in the sky glows with the brilliance of the sun. No stars, planets or even dust grains exist, just a hot dense plasma of electrons, protons and helium nuclei. By 3 minutes, we see helium form from the fusion of hydrogen nuclei while the universe seethes at a temperature of nearly 1 billion degrees. The average density of matter is that of lead. By 1 second, the Lepton Era ends and the ratio of neutrons to protons has become fixed at 1 neutron for every 5 protons. The temperature is now 5 billion degrees everywhere. At about 0.0001 second, we watch as the Quark Era ends and the temperature of the fireball radiation rises to an incredible 1 trillion degrees. Quarks, for the first time, can combine in groups of two and three to become neutrons, protons and other types of heavy particles. The universe is now packed with matter as densely as the nucleus of an atom. A mountain like Mt. Everest could be squeezed into a volume no greater than the size of a golf ball!

By 1 billionth of a second, the temperature is 1 thousand trillion degrees and we see the electromagnetic and weak forces merge into one force. The density of the universe has increased to the point where the entire earth could be contained in a thimble. Quarks and anti-quarks are no longer confined inside of particles like neutrons and protons but are now part of a superheated plasma of unbound particles. As the remaining history of the universe unfolds, a long period seems to pass when nothing really new happens. Then, at a time 10^-35 second after the Big Bang, a spectacular change in the size of the universe occurs. This is the GUT Era, when the strong nuclear force becomes distinguishable from the weak and electromagnetic forces. The temperature is an incredible 10 thousand trillion trillion degrees and the density of matter has soared to nearly 10^75 g/cm^3. This number is so enormous that even our analogies are almost beyond comprehension. At these densities, the entire Milky Way galaxy could easily be stuffed into a volume no larger than a single hydrogen atom! Electrons and quarks, together with their anti-particles, were the major constituents of matter, and very massive particles called Leptoquark Bosons caused the quarks to decay into electrons and vice versa. If we now move forward in time, we would witness the vacuum of space undergoing a ‘phase transition’ from a higher energy state to a lower energy state. This is analogous to a ball rolling down the side of a mountain and coming to rest in the lowest valley. As the universe ‘rolls downhill’ it begins a brief but stupendous period of expansion. The universe swells to billions of times its former size in almost no time at all.

In addition to this, a slight excess of matter over anti-matter appears because of the decay of massive particles called X bosons. As we continue to watch the universe age, the remaining pairs of particles and anti-particles find each other and vanish in a tremendous burst of annihilation. From this paroxysm, the bulk of the fireball radiation that we now observe is born.

The GUT Era is the last stop in our fanciful journey through time. If we had asked what it was like before the GUT Era, we would immediately have entered a vast no man’s land where few indisputable facts would serve to guide us. What does seem clear is that gravity is destined to grow in importance, eventually becoming the dominant force acting between particles, even at the microscopic level.

G R A V I T Y

According to theories developed since the 1930s, what we call a ‘force’ is actually a collective phenomenon caused by the exchange of innumerable force-carrying particles called gauge bosons. The electromagnetic force, which causes like charges to repel and dissimilar ones to attract, is transmitted by gauge bosons called photons; the strong force that binds nuclei together is transmitted by gluons; and the weak force, which causes particles to decay, is transmitted by the recently discovered W and Z Intermediate Vector Bosons. In an analogous way, physicists believe that gravity is transmitted by particles called gravitons. If gravity really does have such a quantum property, its effects should appear once quarks and electrons can be forced to within 10^-33 centimeter of one another, a distance called the Planck length. To achieve these conditions, quarks and electrons will have to be collided at energies of 10^19 GeV. An accelerator patterned after the 2-mile Stanford Linear Accelerator would have to be 1 light-year in length to push particles to these incredible energies! Fortunately, what humans find impossible to do, Nature with its infinite resources finds less difficult. Before the universe was 10^-43 second old, matter routinely experienced collisions at these energies. This period is what we call the Planck Era.

THROUGH A LOOKING GLASS, DARKLY

Since our technology will not allow us to physically reproduce the conditions during these ancient times, we must use our mathematical theories of how matter behaves to mentally explore what the universe was like then. We know that the appearance of the universe before 10^-43 second can only be adequately described by modifying the Big Bang theory, because this theory is, in turn, based on the General Theory of Relativity. General Relativity tells us how gravity operates on the macroscopic scale of planets, stars and galaxies. At the Planck scale, we need to extend General Relativity so that it includes not only the macroscopic properties of gravity but also its microscopic characteristics. The theory of 'Quantum Gravity' is still far from completion, but physicists tend to agree that, at the very least, Quantum Gravity must combine the conceptual elements of the two great theories of modern physics: General Relativity and Quantum Mechanics.

In the language of General Relativity, gravity is a consequence of the deformation of space caused by the presence of matter and energy. Gravity is just another name for the amount of curvature in the geometry of 3-dimensional space. In Quantum Gravity theory, gravity is produced by massless gravitons, so that gravitons represent individual packages of curved space that travel through space at the speed of light.

The appearance and disappearance of innumerable gravitons gives the geometry of space a very lumpy and dynamic character. John Wheeler at Princeton University thinks of this as a foamy sub-structure to space, where the geometry of space twists and contorts so that far-flung regions of space may suddenly find themselves connected by 'wormholes' which constantly appear and disappear within 10^-43 seconds. Even as you are reading this article, this frenetic activity is occurring in the hyper-microscopic domain, 100 billion billion times smaller than the nucleus of an atom. For a comparison, the size of the sun and the size of a single atom stand in about this same proportion. Although Quantum Gravity effects are completely undetectable today at the atomic and nuclear scale, during the Planck Era, macroscopic and microscopic worlds merged and the Quantum Gravity of the microcosm suddenly became the Quantum Cosmology of the macrocosm!

QUANTUM COSMOLOGY

As we approach the end of the Planck Era, the random appearance and disappearance of innumerable gravitons will eventually force us to give up the concept of a specific geometry to 3-dimensional space. Instead, the geometry at a given moment will have to be thought of as an average over all 3-dimensional space geometries that are possible. Once again, the reason for this is that particles are squeezed so closely together that we can now see individual gravitons moving around in the space between them, causing space to become curved. We can no longer get away with saying that the space between two quarks, for example, is flat – which is what we ordinarily mean when we say that the gravitational force between them is insignificant compared to the other three forces of Nature.

To make matters much worse, not only will Quantum Gravity not allow us to calculate the exact 3-dimensional geometry of space but, at the Planck scale, it will not allow us to simultaneously determine its exact geometry and its precise rate of change in time. What this means is that we may never be able to calculate with any certainty exactly what the history of the universe was like before 10^-43 second. Today, the large-scale geometry of space is one of three possible types: flat and infinite, negatively curved and infinite, or positively curved and finite. During the Planck Era, the 'large-scale' geometry was contorted by wormholes, and an infinite number of geometries were possible. To probe the history of the universe then would be like trying to trace your ancestral roots if every human being on earth had a possibility of being one of your parents. Now try to trace your family tree back a few generations! The farther back in time you go, the greater the number of possible ancestors you could have had. An entirely new conception of what we mean by 'a history for the universe' will have to be developed. Even the concepts of space and time will have to be completely re-evaluated in the face of the quantum fluctuations of spacetime during the Planck Era!

THE BIRTH OF THE UNIVERSE

The picture that seems to emerge from our sketchy outline of what Quantum Gravity theory might look like is that as we approach the Planck Era, gravitons are exchanged between quarks and electrons with increasingly higher energy and in greater number. By the time we reach the end of the Planck Era at 10^-43 second, gravitons will begin to carry as much energy as the other force carriers (Gluons, IVBs and Photons). At still earlier times, a period of complete symmetry and unification between all the natural forces will ensue. Only one super-unified force exists here (gravity) and only one kind of particle dominates the activity of this age (Gravitons).

During the early 70's, the Russian physicists Ya. Zel'dovich and A. Starobinsky of the USSR Academy of Sciences proposed that the rapidly changing geometry of space during the Planck Era may actually have created all the matter, anti-matter and radiation that existed soon after Creation. In their picture of Creation, the rapidly changing geometry of space created particles and anti-particles with masses of 10^19 GeV. This production of matter and anti-matter removed energy from the enormous fluctuations occurring in the geometry of space and eventually succeeded in damping them out altogether by the end of the Planck Era. They also found that the rate of particle creation increased as more and more particles were created.

Several recent studies by physicists Edward Tryon of Hunter College, R. Brout, F. Englert and E. Gunzig of the University of Brussels, and David Atkatz and Heinz Pagels of Rockefeller University have shed additional light on what Creation may have been like. Imagine, if you can, nothing at all! This is the primordial vacuum of space. There is complete darkness here; no light yet exists. The number of dimensions to space was probably not the normal 3 that we are so accustomed to, but may have been as high as 11 according to Supergravity theory! In this infinite emptiness, random fluctuations occurred that ever so slightly changed the energy of the vacuum at various points in space. Eventually, one of these fluctuations attained a critical energy and began to grow. As it grew, very massive particles called leptoquarks and anti-leptoquarks were created, causing the expansion to accelerate. This is much like a ball rolling down a hill that moves slowly at first and then gains momentum. The expansion of the proto-universe, in turn, caused still more leptoquarks to be created. This furious cycle continued until, at long last, the leptoquarks decayed into quarks, leptons (electrons, muons, etc.) and their anti-particles, and the universe emerged from the Planck Era. Particle creation stopped once the fluctuations in the geometry of space subsided.

So, we are left with the remarkable possibility that, in the beginning, there existed quite literally nothing at all, and from it emerged nearly all of the matter and radiation that we now see. This process has been described by the physicist Frank Wilczek at the University of California, Santa Barbara by saying, "The reason that there is something instead of nothing is that nothing is unstable." A ball sitting on the summit of a steep hill needs but the slightest tap to set it in motion. A random fluctuation in space was apparently all that was required to unleash the incredible latent energy of the vacuum, thus creating matter and energy and an expanding universe from 'nothing at all'.

The universe did not spring into being instantaneously but was created a little bit at a time in a ‘bootstrap’ process. Once a few particles were created by quantum fluctuations of the empty vacuum, it became easier for a few more to appear and so, in a rapidly escalating process, the universe gushed forth from nothingness.

How long did this take? The primordial vacuum could have existed for an eternity before the particular fluctuation that gave rise to our universe happened. Physicist Edward Tryon expresses this best by saying that "Our universe is simply one of those things that happens from time to time."

The principles of Quantum Gravity may ultimately force us to reconsider questions like 'What happened before the Big Bang?' because they imply the existence of something (time) that may not have any meaning at all. These questions may be as empty of meaning as an explorer at the North Pole asking, 'Which way is north?'. Only the complete theory of Quantum Gravity may tell us how to ask the right questions!

What is Space? Part II

Space-Time: The Final Frontier

Written by Sten Odenwald. Copyright (C) 1995 Sky Publishing Corporation. See February 1996 issue.
THE NIGHT SKY, when you think about it, is one of the strangest sights imaginable. The pinpoint stars that catch your eye are all but swallowed up by the black nothingness of space – an entity billions of light-years deep with which we here on Earth have no direct experience.
What is empty space, really? At first the question seems silly. There’s nothing to it! But look again in light of what modern physics knows and suspects, and the nature of space emerges as one of the most important “sleeper” issues growing for the last 50 years. “Nature abhors a vacuum,” proclaimed Aristotle more than 2,300 years ago. Today physicists are discovering that this is true in ways the ancient Greeks could never have imagined.

True, the cosmos consists overwhelmingly of vacuum. Yet vacuum itself is proving not to be empty at all. It is much more complex than most people would guess. “But surely,” you might ask, “if you take a container and remove everything from inside it – every atom, every photon – there will be nothing left?” Not by a long shot. Since the 1920s physicists have recognized that on a microscopic scale, the vacuum itself is alive with activity. Moreover, this network of activity may extend right down to include the very structure of space-time itself. The fine structure of the vacuum may ultimately hold the keys to some of the deepest questions facing physics – from why elementary particles have the properties they do, to the cause of the Big Bang and the likelihood of other universes outside our own.

THINGS THAT GO BUMP IN THE DARK

The state of the art in physics – our deepest current understanding of the world – is embodied in the so-called Standard Model, in which all matter and forces are accounted for by astonishingly few types of particles (see Sky & Telescope – December 1987, page 582). Six quarks and six leptons make up all possible forms of matter. In practice just two of the quarks (the up and down) and one lepton (the electron) account for everything in the world except for a few whiffs of exotica known only to high-energy physicists. The 12 particles of matter (and their 12 corresponding particles of antimatter, or antiparticles) are acted upon by "messenger particles" that carry all the known forces. The photon mediates the electromagnetic force, including all the familiar chemical and structural forces around us on Earth. The members of the gluon family carry the strong force that binds neutrons and protons together in atomic nuclei. The W+, W-, and Z0 mediate the weak nuclear force, and the as-yet-undiscovered graviton is believed to carry the force of gravity.

Every possible event involving the 12 matter particles can be completely explained as an exchange of messenger particles. During some of these events, for example when electrons accelerate in a radio-transmitter antenna, messenger particles (in this case photons) materialize and travel through space. At other times, however, the messengers remain almost entirely hidden within the interacting system. When the messengers exist in this hidden form, they are called “virtual particles.” Virtual particles may seem ghostly and unreal by everyday standards. But real they are. Moreover, they are not limited to their role of mediating interactions. Virtual particles can also pop in and out of empty space all by themselves.

Quantum mechanics, the rulebook of the Standard Model, states as a bedrock principle that you need a certain length of time to measure a particle's energy or mass to a given degree of accuracy. The shorter the observation time, the more uncertain the measurement. If the time is very brief, the uncertainty becomes larger than the particle's entire mass, and you cannot say whether or not the particle is there at all. The lighter the particle, the longer its uncertainty time. In the case of an electron-positron pair, the uncertainty time scale is about 10^-21 seconds.
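
That time scale can be estimated from the Heisenberg relation, taking the uncertainty time as roughly ħ divided by the rest energy of the pair. A minimal Python sketch, assuming textbook constants:

    hbar = 1.0546e-34    # reduced Planck constant, J*s
    m_e  = 9.109e-31     # electron mass, kg
    c    = 2.998e8       # speed of light, m/s

    # Rest energy of an electron plus a positron
    E_pair = 2 * m_e * c**2          # about 1.6e-13 J

    # Below this time, the pair's existence cannot be pinned down
    t_uncertain = hbar / E_pair      # about 6e-22 s, of order 10^-21 s

    print(f"{t_uncertain:.1e} seconds")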

On time scales shorter than this, virtual electrons and positrons can, and do, pop in and out of nothingness like peas in a shell game. It’s as if, just because you can’t say a particle doesn’t exist when you look very briefly, then in a sense it does. This is not mere theorizing. In 1958 a tabletop experiment demonstrated the “Casimir effect,” measuring the force caused by virtual particles appearing and vanishing in total vacuum through the attraction they caused between two parallel metal plates. If the vacuum were truly empty the plates should not have attracted, but the incessant dance of virtual particles in the space between them produces a detectable effect.
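
For ideal parallel plates a distance a apart, the Casimir pressure works out to P = pi^2 * hbar * c / (240 * a^4). Here is a rough illustration; the one-micrometer separation is my own choice for concreteness, not a figure from the experiment described above:

    import math

    hbar = 1.0546e-34    # J*s
    c    = 2.998e8       # m/s
    a    = 1.0e-6        # plate separation of 1 micrometer (assumed)

    # Ideal-plate Casimir pressure; the resulting force is attractive
    pressure = math.pi**2 * hbar * c / (240 * a**4)

    print(f"{pressure:.1e} Pa")   # ~1.3e-3 Pa: tiny, but measurable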

Every particle – matter as well as messenger – seems to display a virtual form, each seething in greater or lesser abundances in what physicists call the “physical vacuum.” When it comes to affecting the ordinary world, moreover, virtual particles may do much more than just mediate forces. Some, in fact, may cause matter to have the property we call mass. The electron is the simplest of matter particles. Our knowledge of the physical world rests upon a solid understanding of its properties. Yet despite its abundance in the circuitry around us, the electron harbors an enigma. The fact that it has mass cannot be explained in the Standard Model, at least the parts of it that have been experimentally verified. More than 30 years ago particle physicist Peter Higgs suggested that the existence of mass has to do with a new ingredient of nature that is now called the Higgs field, which provides a new type of messenger particle that interacts with the electron to make it “weigh.”

The Higgs field has yet to be discovered, but many physicists expect it to exist everywhere in the physical vacuum, ensuring through its interactions with electrons and other particles that they will display mass. Even now, particle accelerators at CERN in Switzerland and at Fermilab near Chicago are straining at their maximum capabilities to cause just one “Higgs boson,” the presumed messenger particle for this field, to break loose from the vacuum and leave a detectable trace. Success would provide a triumphant completion of the Standard Model.

So to answer our question about whether a container of empty space is truly empty, the best anyone can do is remove the normal, physical particles that nature allows us to see and manipulate. The virtual particles can never be evicted. And in addition there may exist the ever-present Higgs field.

QUANTUM GRAVITY

For most of this century, physicists have struggled to bring gravity into the scheme of forces that are mediated by virtual messenger particles. To put this another way, the theory of general relativity, which shows the force of gravity to be a curvature of space-time, needs to be integrated with quantum mechanics, which shows forces to be virtual particle exchanges. Working on the assumption that such a marriage is possible, physicists named gravity’s messenger particle the graviton. But general relativity requires that gravitons be more than just quanta of gravity. In essence, gravitons define the structure of space-time itself.

The reconciliation of quantum mechanics and general relativity may lead us to dramatically new notions of the nature of space and time. Some theorists have suggested that points in space-time become defined only when a particle (such as a graviton or photon) interacts with other particles. In this view, what they are doing between interactions is a nonphysical question, since only an interaction defines a measurable time and place. Gravitational forces (and thus gravitons) exert an influence at distances much larger than the subatomic realm, as anyone who has fallen down a flight of stairs can attest. But only at an extremely small scale – the Planck length of 10^-33 centimeters – does the quantum nature of gravity become important.

Suppose you could magically look through a microscope that magnified an atomic nucleus to be some 10 light-years across. Under this magnification the smallest gravitons – that is, the most energetic and massive ones – would be about a millimeter in size. Here we might see a strange world in which space-time itself was defined by gravitons intersecting and looping around each other. In a similar vein, Roger Penrose has suggested that the gravitational field and space-time are built up from still more primitive mathematical entities called twistors, and that "ultimately the [space-time] concept may possibly be eliminated from the basis of physical theory altogether." In essence, space and time become factored out as less-than-fundamental parts of the physical world.

In such a view, only the interactions between twistors, or perhaps gravitons, define when and where space-time is and is not. Are there gaps in the physical vacuum, voids of true and absolute nothing where space and time themselves do not exist?

Another viewpoint on the structure of space-time is offered by "superstring theory." String theories posit that the fundamental objects of nature are one-dimensional lines rather than points; the "elementary" particles we measure are only oscillations of these strings. Superstring theory only seems to work, however, if space-time has not just four dimensions (three of space and one of time), but 10 dimensions. This hardly seems like the world we live in. To hide the extra six dimensions, mathematicians roll them up into conceptual corners that go by such cryptic names as "Calabi-Yau manifolds" and "orbifold space." A recent textbook on the subject concludes on a wistful note that "if the string idea is correct, we may never catch more than a glimpse of the full extent of reality."

More recently, theorists Carlo Rovelli (University of Pittsburgh) and Lee Smolin (Pennsylvania State University) completed their analysis of a quantum gravity model developed by Abhay Ashtekar at Syracuse University in 1985. Unlike string theory, Ashtekar’s work applies only to gravity. However, it posits that at the Planck scale, space-time dissolves into a network of “loops” that are held together by knots. Somewhat like a chain-mail coat used by knights of yore, space-time resembles a fabric fashioned in four dimensions from these tiny one-dimensional loops and knots of energy.

Is this the way the world really is on its most fundamental level, or have mathematicians become detached from reality? Superstring theory has enticed physicists for over a decade now because it hints at a super unification of all four fundamental forces of nature. But it remains frustratingly hard to plant anchors down from these cloud castles into the real world of observation and experiment. The famous remark that superstring theory is "a piece of 21st-century physics that accidentally fell into the 20th century" captures both the excitement and frustration of workers stuck with 20th-century tools.

Surprisingly, string theory, Ashtekar’s loopy space-time, and twistors are not entirely independent ways of looking at space-time. In 1986 theorists discovered that superstrings have some things in common with twistors. A deep connection had been uncovered between two very different, independent theories. Like two teams of tunnelers starting on opposite sides of a mountain, they had met at the middle – a sign, perhaps, that they are dealing with a single real mountain, not separate ones in their own imaginations. And in 1995 Rovelli and Smolin also found that their graviton loops are very closely related to both the twistors and superstrings, though not identical in all respects.

THE COSMIC CONNECTION

Space-time could be strange in other ways too. Theorist John A. Wheeler (Institute for Advanced Study) has long advocated that at the Planck scale, space-time has a complex shape that changes from instant to instant. Wheeler called his picture "space-time foam" – a sea of quantum black holes and worm holes appearing and vanishing on a time scale of about 10^-43 seconds. This is the Planck time, the time it takes light to cross the Planck length. Shorter than that, time, like space, presumably cannot exist – or, at least, our everyday notions of them cease to be valid.

Wheeler's idea of space-time foam is a natural extrapolation from the idea of virtual particles. According to quantum mechanics, the higher the energy and mass of a particle, the smaller it must appear. A virtual particle as small as 10^-33 cm, lasting only 10^-43 second, has so great a mass (10^-5 gram) in such a tiny volume that its own surface gravity would give it an escape velocity greater than the speed of light. In other words, it is a tiny black hole. But a black hole is not an ordinary object sitting in space-time like a particle; it is a structure of distorted, convoluted space-time itself. Although the consequences of such phenomena are not understood, it is reasonable to assume that these virtual particles dramatically distort all space-time at the Planck scale.
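
The black-hole claim is easy to verify with the Schwarzschild radius formula r_s = 2GM/c^2 from ordinary general relativity, using the mass quoted above:

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8      # speed of light, m/s
    M = 1.0e-8       # 10^-5 gram, expressed in kilograms

    # Radius below which the escape velocity exceeds the speed of light
    r_s = 2 * G * M / c**2

    print(f"{r_s:.1e} m")   # ~1.5e-35 m, or about 10^-33 cm
    # A particle that small and that massive sits inside its own horizon.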

If we take this reasoning at face value, and consider the decades-old experiments proving that the virtual particle phenomenon in a vacuum is real, it is hard to believe that space-time is smooth at or below the Planck scale. Space must be broken up and quantized. The only question is how. Wheeler's original idea of space-time foam is especially potent because according to recent proposals by Sidney Coleman (Harvard) and Stephen Hawking (Cambridge University), its worm holes not only connect different points very close together within our space-time, but connect our space-time to other universes that, as far as we are concerned, exist only as ghostly probabilities. These connections to other universes cause the so-called cosmological constant – an annoying intrusion into the equations of cosmology ever since Einstein (see Sky & Telescope – April 1991, page 362) – to neatly vanish within our own universe.

Space-time foam has also been implicated as the spawning ground for baby universes. In several theories explaining the cause of the Big Bang and what came before, big bangs can bud off from a previously existing space-time, break away completely while still microscopic, and inflate with matter to become new universes of their own, completely disconnected ("disjoint") from their space-time of origin. This process, proposed by Alan Guth (MIT) and others, gives a handle on what many expect to be another key issue of 21st-century physics: was our Big Bang unique? Or was it just a routine spinoff of natural processes happening all the time in some larger, outside realm? (see Sky & Telescope – September 1988, page 253).

Yet there are problems. The amount of latent energy in the quantum fluctuations of space-time foam is staggering: 10^105 ergs per cubic centimeter. This amounts to 10 billion billion times the mass of all the galaxies in the observable universe – packed into every cubic centimeter! Fortunately, Mother Nature seems to have devised some means of exactly canceling out this phenomenon to an accuracy of about 120 decimal places. The problem is that we haven't a clue how.

It’s unnerving to think that in the 16 inches separating this page from your eyes, new big bangs are perhaps being spawned out and away from our quiet space-time every instant. By comparison, it seems positively dull that the photons by which you see this page might be playing a hop-scotch game to avoid gaps where space-time doesn’t exist.

REALITY CHECK

Some physicists have begun to throw cold water on these fantastic ideas. For instance, in 1993 Matt Visser (Washington University) studied the mathematical properties of quantum worm holes and discovered that, once they are formed, they become stable: they can’t foam at all. Kazuo Ghoroku (Fukuoka Institute of Technology, Japan) also found that quantum worm holes become stable even when their interactions with other fields are considered. What Wheeler called space-time foam may be something else entirely.

Among the unresolved problems facing theorists is the nature of time, which has been recognized as inextricably bound up with space ever since Einstein posited a constant speed for light. In general relativity, it isn't always obvious how to define what we mean by time, especially at the Planck scale where time seems to lose its conventional meaning. Central to any quantum theory is the concept of measurement, but what does this imply for physics at the Planck scale, which sets an ultimate limit to the possibility of measurement? How any of these ideas about space-time can be tested is currently unknown. Some physicists believe this makes these ideas not real scientific inquiry at all. And it's worth remembering that mathematics can sometimes introduce concepts that are only a means to an end and have no independent reality.

In the abstract world of mathematical symbolism, it isn't always clear what is real and what's not. For example, when we do long division on paper to divide 54,162 by 2 to get 27,081, we generate the intermediate numbers 14, 16, and 2, which we then just throw away. Are virtual particles, compact 6-dimensional manifolds, and twistors simply nonphysical means to an end – mere artifacts of how we humans do our mathematics? Particle physicists often have to deal with "ghost fields" that are simply the temporary scaffolding used for calculations, and that vanish when the calculations are complete. Nonphysical devices such as negative probability and faster-than-light tachyon particles are grudgingly tolerated so long as they disappear before the final answers. Even in superstring theory, recent work suggests that it may be possible to build consistent models entirely within ordinary four-dimensional space-time, without recourse to higher dimensions.

ANGEL FOOD CAKE

So, how should we think of the great, dark void that we gaze into at night? All clues point to space-time being a kind of layer cake of busy phenomena on the submicroscopic scale. The topmost layer contains the quarks and electrons comprising ordinary matter, scattered here and there like raisins in the frosting. These raisins can be plucked away to make a region of space appear empty. The frosting itself consists of virtual particles, primarily those carrying the electromagnetic, weak, and strong forces, filling the vacuum with incessant activity that can never be switched off. Their quantum comings and goings may completely fill space-time so that no points are ever really missing. This layer of the cake of “empty space” seems pretty well established by laboratory experiment.

Beneath this layer we have the domain of the putative Higgs field. No matter where the electron and quark “raisins” go, in this view, there is always a piece of the Higgs field nearby to affect them and give them mass. Below the Higgs layer there may exist other layers, representing fields we have yet to discover. But eventually we arrive at the lowest stratum, that of the gravitational field. There is more of this field wherever mass is present in the layers above it, but there is no place where it is entirely absent. This layer recalls the Babylonian Great Turtle that carried the universe on its back. Without it, all the other layers above would vanish into nothingness.

We know that space-time is quite smooth down to at least the scale of the electron, 10^-20 cm – 10 million times smaller than an atomic nucleus. This is the size limit set for any internal component of the electron, based on careful comparisons between experiment and the predictions of quantum electrodynamics. But near the Planck horizon of 10^-33 cm, space-time must change its structure drastically. It may be a world in which conventional notions of dimensionality, time, and space need to be redefined and possibly eliminated altogether.

The conceit of our universe’s uniqueness may disappear, with big bangs becoming viewed as run-of-the-mill events in some much larger outside realm, and with physical constants being attributed to causes in space-times forever beyond human experience.

There is much that’s spooky about the physical vacuum. This spookiness may be rooted more in the way our brains work than in some objective aspect of nature. Einstein stressed, “Space and time are not conditions in which we live, but modes in which we think.” Our understanding of space remains in its infancy. With Aristotle smiling at us down the centuries, we now see the vacuum as much more than a vacancy. It will take many decades, if not centuries, before a complete understanding of it is fashioned. In the meantime, enjoy the nighttime view!

FURTHER READING

Davies, Paul. The New Physics. Cambridge: Cambridge University Press, 1989.

Mallove, Eugene. “The Self-Reproducing Universe.” Sky & Telescope, September 1988, page 253.

Matthews, Robert. “Nothing Like a Vacuum.” New Scientist, February 25, 1995, page 30.

Pagels, Heinz. Perfect Symmetry. New York: Simon & Schuster, 1985.

The End of Physics?

For 45 years I have followed the great pageant of ideas in theoretical physics. From high school through retirement, although my career and expertise are in astronomy and astrophysics, my passion has always been following the glorious ideas that have swirled around in theoretical physics. I watched as the quark theory of the 1960s gave way to Grand Unification Theory in the 1970s, and then to string theory and inflationary cosmology in the 1980s. I was thrilled by how these ideas could be applied to understanding the earliest moments in the Big Bang and perhaps let me catch at least a mathematical glimpse of how the universe, time and space came to be, literally, out of Nothing – explanations not forthcoming from within Einstein's theory of general relativity.

Even as recently as 2012 this story continued to captivate me, even as I grappled with what might be the premature end of my life at the hands of non-Hodgkin's lymphoma, diagnosed in 2008. And still I read the journal articles, watching as new ideas emerged, built upon the theoretical successes of the 1990s and beyond. But then a strange thing happened.

In the 1980s, the US embarked on the construction in Texas of the Superconducting Super Collider, but that project was scrapped and de-funded by Congress after ¼ of it had been built. Attention then turned to the European Large Hadron Collider project, which after 10 years finally achieved its first collisions in 2009. The energy of this accelerator has steadily been increased to 13 TeV, and it now records some 600 million collisions per second, which generates 30 petabytes of data per year. Among these collisions were expected to be the traces of 'new physics', and physicists were not disappointed. In 2012 the elusive Higgs Boson was detected, some 50 years after it was predicted to exist. It was a major discovery that signaled we were definitely on the right track in verifying the Standard Model. But since then, following many more years of searching among the debris of trillions of collisions, all we continue to see are the successful predictions of the Standard Model confirmed again and again, with only a few caveats.

Typically, physicists push experiments to ever-higher degrees of accuracy to uncover where our current theoretical predictions are becoming threadbare, revealing signs of new phenomena or particles – hence the term 'new physics'. Theoreticians then use this anomalous data to extend known ideas into a larger arena, always selecting new ideas that are the simplest-possible extensions of the older ones. But sometimes you have to incorporate entirely new ideas. This happened when Einstein developed relativity, which was a 'beautiful' extension of the older and simpler Newtonian physics. Ultimately it is the data that leads the way, and when data are not available, we get to argue over whose theory is more mathematically beautiful or elegant.

Today we have one such elegant contender for extending the Standard Model that involves a new symmetry in Nature called supersymmetry. Discovered mathematically in the mid-1970s, it showed how the particles in the Standard Model that account for matter (quarks, electrons) are related to the force-carrying particles (e.g. photons, gluons), but it also offered an integrated role for gravity as a new kind of force-particle. The hitch was that, to make the mathematics work so that it did not answer 'infinity' every time you did a calculation, you had to add a whole new family of super-heavy particles to the list of elementary particles. Many versions of 'Minimally Supersymmetric Standard Models' or MSSMs were possible, but most agreed that starting at a mass of about 1000 times that of a proton (1 TeV), you would start to see the smallest of these particles as 'low-hanging fruit', like the tip of an upside-down pyramid.

For the last seven years of LHC operation, using a variety of techniques and sophisticated detectors, absolutely no sign of supersymmetry has been found. In April 2017 at the Moriond Conference, physicists with the ATLAS Experiment at CERN presented their first results examining the combined 2015-2016 LHC data. This new dataset was almost three times larger than what was available at the last major particle physics conference held in 2016. Searches for the supersymmetric partners to quarks and gluons (called squarks and gluinos) turned up nothing below a mass of 2 TeV. There was no evidence for exotic supersymmetric matter at masses below 6 TeV, and no heavy partner to the W-boson was found below 5 TeV.

Perhaps the worst result for me as an astronomer is for dark matter. The MSSM model, the simplest extension of the Standard Model with supersymmetry, predicted the existence of several very low mass particles called neutralinos. When added to cosmological models, neutralinos seem to account for the existence of dark matter, which occupies 27% of the gravitating stuff in the universe and controls the movement of ordinary matter as it forms galaxies and stars. MSSM gives astronomers a tidy way to explain dark matter and closes the book on what it is likely to be. Unfortunately the LHC has found no evidence for light-weight neutralinos at their expected MSSM mass ranges. (see for example https://arxiv.org/abs/1608.00872 or https://arxiv.org/abs/1605.04608)

Of course the searches will continue as the LHC remains our best tool for exploring these energies well into the 2030s. But if past is prologue, the news isn’t very promising. Typically the greatest discoveries of any new technology are made within the first decade of operation. The LHC is well on its way to ending its first decade with ‘only’ the Higgs boson as a prize. It was fully intended that the LHC would have given us hard evidence by now for literally dozens of new super-heavy particles, and a definitive candidate for dark matter to clean up the cosmological inventory.

So this is my reason for feeling sad. If the Higgs boson is a guide, it may take us several more decades and a whole new and expensive LHC replacement to find something significant to affirm our current ‘beautiful’ ideas about the physical nature of the universe. Supersymmetry may still play a role in this but it will be hard to attract a new generation of young physicists to its search if Nature continues to withhold so much as a hint we are on the right theoretical track.

If supersymmetry falls, string theory, which hinges on supersymmetry, may also have to be put aside or re-thought. Nature seems to favor simple theories over complex ones, so are the current string theories with supersymmetry really the simplest ones?

Thousands of physicists have toiled over these ideas since the 1970s. In the past, such a herculean effort usually won out, with Nature rewarding the tedious intellectual work and some vestiges of the effort being salvaged for the new theory. I find it hard to believe that will not again be the case this time, but as I prepare for retirement I am realizing that I may not be around to see this final vindication.

So what should I make of my 45-year intellectual obsession to keep up with this research? Given what I know today would I have done things differently? Would I have taught fewer classes on this subject, or written fewer articles for popular science magazines?

Absolutely not!

I have thoroughly enjoyed the thrill of the new ideas about matter, space, time and dimension. The Multiverse idea offered me a new way of experiencing my place in 'reality'. I could never have invented these amazing ideas on my own, and they have entertained me for most of my professional life. Even today, Nature seems to have handed us something new: gravity waves have been detected after a 60-year search; detailed studies of the cosmic 'fireball' radiation are giving us hints of the earliest moments in the Big Bang; and of course we have discovered THOUSANDS of new planets.

Living in this new world seems almost as intellectually stimulating, and now offers me more immediate returns on my investment in the years remaining.

The Proton’s Spin

Protons are the workhorses of chemistry. Their numbers determine which element you are talking about, and their positive charge determines how many electrons will form a cloud around them to facilitate all manner of chemical reactions.

For decades we thought that protons were absolutely fundamental particles along with neutrons and electrons, but then came the quantum revolution of the 1920s and the escalating quest to understand what their actual physical properties were. Through experimentation, we found that protons all had exactly the same mass to many decimal places. They all had exactly +1.0000 unit of charge, also to many decimal places. But they also possessed an entirely new physical quantity found only in atomic-scale physics. This quantity was called ‘spin’ but had nothing to do with the motion of a top about its axis, although paradoxically it could nonetheless be interpreted in that way.

Quantum spin, unlike the continuous spinning of a top, comes only in integer units like 0, 1, 2, etc., or in half-integer units like 1/2, 3/2, 5/2, etc. Physicists soon discovered that fundamental particles like photons (the carriers of light energy) only had a quantum spin of exactly 1.0, while protons, neutrons, neutrinos and electrons had exactly 1/2 unit of spin. The former kinds of particles were called bosons while the latter were given the name fermions. Composite particles made up from these elementary bosons and fermions could have other spin values, but only what arises from adding, in the proper way, the elementary spins of their constituents.

By the 1960s, experiments had begun to show that protons were not actually fundamental particles at all, nor were neutrons for that matter. Theoretical models that built up protons and neutrons and many other known particles called mesons and baryons soon led to the idea of the quark. For protons and neutrons you needed three quarks, while for the mesons you only needed two, of which one would be a quark and the other an anti-quark. The mathematics was impressive and elegant, and this system of quarks soon became the favored model for all particles that interacted through the strong nuclear force, itself produced by the exchanges of particles called gluons. Also in this scheme, quarks would be spin-1/2 fermions and the gluons would be spin-1 bosons, much like the photons which carry light energy.

All seemed to be going great by the 1970s and 1980s. The quark model flourished, and many new subtle phenomena were uncovered through the application of what became the Standard Model of physics. But there was a fly in the ointment.

At first the explanation for how a proton could have a spin of 1/2 while at the same time being composed of three quarks, each also a spin-1/2 particle, was pretty well settled. Because a proton consisted of two identical 'up' quarks and one 'down' quark, it was entirely reasonable that the two up quarks would have equal and opposite spins canceling each other out, leaving the down quark to carry the proton's 1/2 unit of spin. Similarly for the neutron, its two down quarks combined to have a net-zero spin, leaving the single up quark to carry the 1/2 unit of spin for the neutron.
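
The quantum rule being applied here is that two spins s1 and s2 combine to any total between |s1 - s2| and s1 + s2, in whole-number steps. A small Python sketch enumerating every total that three spin-1/2 quarks can produce:

    from fractions import Fraction

    def combine(s1, s2):
        # Allowed totals when two quantum spins are added together
        low, high = abs(s1 - s2), s1 + s2
        return {low + i for i in range(int(high - low) + 1)}

    half = Fraction(1, 2)

    # The first pair of quarks gives {0, 1}; then add the third quark's 1/2
    totals = set()
    for s12 in combine(half, half):
        totals |= combine(s12, half)

    print(sorted(totals))   # [Fraction(1, 2), Fraction(3, 2)]
    # The proton takes the 1/2 option; spin-3/2 belongs to the Delta baryon.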

The Proton Spin Crisis

All seemed to be well until experiments in 1987 by the European Muon Collaboration used carefully prepared beams of particles called muons to probe the interior of protons and double-check the way the quark spins were oriented relative to the proton's spin. What they found was startling. No more than 25% of the proton's spin was generated by the quarks at all. The remaining 75% of what defines the spin of a proton had to come from some other source!

When you look at the mass of a proton compared to the masses of its three constituent quarks, you discover something very fascinating. The masses of the quarks only account for about 1% of the mass of the entire proton. Instead, thanks to Einstein's E = mc^2, it is the stress energy of the gluon fields inside the proton that contributes the missing 99%. The mass that you read on the bathroom scale is only 1% contributed by the mass of your elementary quarks in grams, but 99% by the invisible energy (mass) of the gluon fields that occupy nuclear space!
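
A back-of-the-envelope version of that bookkeeping, using ballpark current-quark masses of the kind tabulated by the Particle Data Group (the numerical values are assumptions of this sketch, not figures from the text):

    # Approximate current-quark masses, in MeV/c^2 (assumed PDG-style values)
    m_up, m_down = 2.2, 4.7
    m_proton     = 938.3     # proton mass in MeV/c^2

    # A proton contains two up quarks and one down quark
    quark_total = 2 * m_up + m_down          # about 9.1 MeV

    fraction = quark_total / m_proton
    print(f"quark share of proton mass: {fraction:.1%}")   # about 1%
    # The remaining ~99% is gluon-field energy, via E = mc^2.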

Now for proton spin, the only other things rattling around inside the intense fields in the interior of a proton were the gluons holding the quarks together, and an ephemeral sea of quark-antiquark pairs that momentarily appeared and disappeared in the vacuum of space found there. This sea of vacuum or ‘virtual’ particles is absolutely required by modern quantum physics, and although we can never detect their comings and goings by any direct observation, we can detect their influence on nearby elementary particles.

In 2014, experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven, New York collided polarized protons together, and physicists think they have found a large part of the remainder of the proton's spin. Perhaps 40% to 50% seems to be contributed by the gluons themselves. This still leaves about 25% in some other source. Meanwhile, other experiments by MIT physicists determined that any anti-quarks produced inside a proton among the virtual quark sea contribute very little to the overall spin of the proton.

The bottom line today seems to be what this table shows:

Quark spin ................................. ~25%
Gluon spin ................................. 40% to 50%
Orbital angular momentum ..... 25% to 35%

When the experimental constraints are added up, we still do not have a precise measure of how the various proton constituents add up to give the universally constant spin of 1/2 to a proton that is observed for all protons to many decimal places.
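
Adding up the quoted ranges shows the problem at a glance (a rough tally that simply takes the table's numbers at face value):

    # Fractional contributions to the proton's spin, from the table above
    quark_spin = (0.25, 0.25)    # ~25%
    gluon_spin = (0.40, 0.50)    # 40% to 50%
    orbital    = (0.25, 0.35)    # 25% to 35%

    low  = quark_spin[0] + gluon_spin[0] + orbital[0]
    high = quark_spin[1] + gluon_spin[1] + orbital[1]

    print(f"total: {low:.0%} to {high:.0%}")   # 90% to 110%
    # The budget brackets 100% but does not yet pin it down exactly.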

Who would have thought that such an important number as ‘1/2’ arises from combining a number of messy phenomena that themselves seem imprecise!

Check back here on Tuesday, May 30 for my next topic!

The First Billion Years

When we think about the Big Bang we tend only to look at the first few instants when we think all of the mysterious and exciting action occurred. But actually, the first BILLION years are the real stars of this story!

My books 'Eternity: A User's Guide' and 'Cosmic History I and II' provide a more thorough, and 'twitterized', timeline of the universe from the Big Bang to the literal end of time, if you are interested in the whole story as we know it today. You can also look at a massive computer simulation developed by Harvard and MIT cosmologists in 2014.

What we understand today is not merely based on theoretical expectations. Thanks to specific observations during the last decade, we have actually discovered distant objects that help us probe critical moments during this span of time.

Infancy

By the end of the first 10 minutes after the Big Bang, the universe was filled with a cooling plasma of hydrogen and helium nuclei and electrons at seething temperatures over 100 million degrees Celsius – still far too hot for neutral atoms to form. The traces that we do see of the fireball light from the Big Bang are called the cosmic background radiation, and astronomers have been studying it since the 1960s. Today its temperature is 2.726 kelvins, but at the level of one part in 100,000 there are irregularities in its temperature across the entire sky, detected by the COBE, WMAP and Planck satellites. These irregularities are the gravitational fingerprints of the vast clusters of galaxies that would form in the universe several billion years later.
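
One part in 100,000 of that temperature is a remarkably small signal, as a one-line calculation shows:

    T_cmb = 2.726                  # mean temperature today, kelvins
    delta = T_cmb / 100_000        # amplitude of the irregularities

    print(f"{delta * 1e6:.0f} microkelvin")   # about 27 microkelvin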

By 379,000 years, matter had cooled down to the point where electrons could bond with atomic nuclei to form neutral atoms of hydrogen and helium. For the first time in cosmic history, matter could go its own way and no longer be affected by the fireball radiation, which used to blast these assembled atoms apart faster than they could form. If you were living at this time, it would look as though you were standing inside the surface of a vast dull-red star, steadily fading to black as the universe continued to expand and the gas cooled over the millennia. No matter where you stood in the universe at this time, all you would see around you is this dull-red glow across the sky.

6 million years – By this time, the cosmic gas has cooled to the point that its temperature is only 500 kelvins (440 F). At these temperatures, it no longer emits any visible light. The universe is now fully in what astronomers call the Cosmic Dark Ages. If you were there and looking around, you would see nothing but an inky blackness no matter where you looked! With infrared eyes, however, you would see the cosmos filled by a glow spanning the entire sky.
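
Wien's displacement law, lambda_peak = b/T, shows why the glow is invisible to the eye at this stage. A minimal sketch:

    b = 2.898e-3    # Wien's displacement constant, m*K
    T = 500.0       # gas temperature, kelvins

    wavelength_peak = b / T     # wavelength of peak thermal emission

    print(f"{wavelength_peak * 1e6:.1f} micrometers")   # ~5.8 microns
    # Visible light ends near 0.7 microns, so the glow is pure infrared.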

20 million years – The hydrogen-helium gas that exists all across the universe is starting to feel the gravity of dark matter, which has started to form large clumps and vast spiderweb-like networks spanning the entire cosmos, with masses of several trillion times the mass of our sun. As the cold, primordial gas falls into these gravity wells, it forms what will later become the halos of modern-day galaxies. All of this happens under a cloak of complete darkness, because there were as yet no luminous objects in existence to light things up. Only detailed supercomputer simulations can reveal what occurred during this time.

The First Stars

100 million years – Once the universe got cold enough, large gas clouds stopped being controlled by their internal pressure, and gravity started to take the upper hand. First the vast collections of matter destined to become the haloes of galaxies formed. Then, or at about the same time, the first generation of stars appeared in the universe. These Population III stars, made from nearly transparent hydrogen and helium gas, were so massive that they lived for only a few million years before detonating as supernovae. As the universe becomes polluted with heavier elements from billions of supernovae, collapsing clouds become more opaque to their own radiation, and so the collapse process stops when much less matter has formed into the infant stars. Instead of only producing massive Population III stars with 100 times our sun's mass, numerous stars with masses of 50, 20 and 5 times our sun's form with increasing frequency. Even smaller stars like our own sun begin to appear by the trillions. Most of this activity is occurring in what will eventually become the halo stars in modern galaxies like the Milky Way. The vast networks of dark matter became illuminated from within as stars and galaxies began to form.

200 million years – The oldest known star in our Milky Way, called SM0313, formed about this time. This star contains almost no iron – less than one ten-millionth of the iron found in our own Sun – and is located 6,000 light years from Earth. Another star, called the Methuselah Star, is located about 190 light years from Earth and formed about the same time as SM0313.

The First Quasars and Black Holes

300 million years – The most distant known 'quasar' is called APM 8279+5255, and contains traces of the element iron. This means that by about this time after the Big Bang, some objects are powered by enormous black holes that steadily consume a surrounding disk of gas and dust. For APM 8279+5255, the mass of this black hole is about 20 billion times more massive than the Sun. Astronomers do not know how a black hole this massive could have formed so soon after the Big Bang. A simple division shows that a 20 billion solar mass black hole forming in 300 million years would require a growth rate higher than 60 solar masses a year!
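
That simple division looks like this:

    M_blackhole = 20e9     # black-hole mass in solar masses (APM 8279+5255)
    age         = 300e6    # years elapsed since the Big Bang

    # Average growth rate needed to build the black hole in time
    rate = M_blackhole / age

    print(f"{rate:.0f} solar masses per year")   # ~67, i.e. more than 60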

The First Galaxies

400 million years – The cold primordial matter becomes clumpy under the action of its own gravity. These clumps have masses of perhaps a few billion times our sun or less, and over time this material starts to collapse locally into even smaller clouds that become mini-galaxies where intense episodes of star formation activity are playing out.

This image shows the position of the most distant galaxy discovered so far with the Hubble Space Telescope. The remote galaxy GN-z11, shown in the inset, is actually ablaze with bright young blue stars. They look red in this image because the wavelengths of light have been stretched by the expansion of the universe to longer, redder wavelengths. As with the images of so many other young galaxies, we cannot see individual stars, but the irregular shapes show that the stars they contain are spread out in irregular clumps within their host galaxy, possibly because they come from separate, merging clouds whose collisions have triggered the star-forming activity we see.

Although it is hard work, astronomers can detect the faint reddish traces of dozens of other infant galaxies such as MACS0647-JD, UDFj-39546284 and EGSY-2008532660. These are all small dwarf galaxies over 100 times less massive than our Milky Way. They are all undergoing intense star-forming activity between 400 and 600 million years after the Big Bang.

The Gamma-Ray Burst Era begins about 630 million years after the Big Bang. Gamma-ray bursts are caused by very massive stars, perhaps 50 to 100 times our own sun’s mass, that explode as hypernovae and form a single black hole, so we know that these kinds of stars were already forming and dying by this time. Today from ‘across the universe’ we see these events occur about once each day!

800 million years – The quasar ULAS J1120+0641 is another young case of a supermassive black hole that has formed, and by this time is eating its surrounding gas and stars at a prodigious rate. The mass of this black hole is about 2 billion times the mass of our sun, and like others is probably the result of frequent galaxy mergers and rapid eating of surrounding matter.

Also at around this time we encounter the Himiko Lyman Alpha Blob, one of the most massive objects ever discovered in the early universe. It is 55,000 light-years across, which is half the diameter of the Milky Way. Objects like Himiko are probably powered by an embedded galaxy that is producing young massive stars at a phenomenal rate of 500 solar masses per year or more.

Again, the most brilliant objects we can see from a time about 900 million years after the Big Bang include galaxies like SDSS J0100+2802, with a luminosity 420 trillion times that of our own Sun. It is powered by a supermassive black hole 12 billion times the mass of our sun.

The Re-Ionization Era

960 million years – By this time, massive stars in what astronomers call 'Population III' are being born by the billions across the entire universe. These massive stars emit almost all of their light in the ultraviolet part of the spectrum. There are now so many intense sources of ultraviolet radiation in the universe that all of the remaining hydrogen gas becomes ionized. Astronomers call this the Reionization Era. Within a few hundred million years, only dwarf-galaxy-sized blobs of gas still remain, and they are being quickly evaporated. We can still see the ghosts of these clouds in the light from very distant galaxies. The galaxy SSA22-HCM1 is the brightest of the objects called 'Lyman-alpha emitters'. It may be producing new stars at a rate of 40 solar masses per year, along with enormous amounts of ultraviolet light. The galaxy HDF 4-473.0, also spotted at this age, is only 7,000 light years across and has an estimated star formation rate of 13 solar masses per year.

1 billion years – First by twos and threes, then by dozens and hundreds, clusters of galaxies begin to form as the gravity of matter pulls the clumps of galaxy-forming matter together. This clustering is speeded up by the additional gravity provided by dark matter. In a universe without dark matter, the number of clusters of galaxies would be dramatically smaller.

Clusters of Galaxies Form

Proto-galaxy cluster AzTEC-3 consists of 5 smaller galaxy-like clumps of matter, each forming stars at a prodigious rate. We now begin to see how some of the small clumps in this cluster are falling together and interacting, eventually to become a larger galaxy-sized system. This process of cluster formation is now beginning in earnest as more and more of these ancient clumps fall together under a widening umbrella of gravity. Astronomers are discovering more objects like AzTEC-3, which is the most distant known progenitor to modern elliptical galaxies. It appears that by 2.2 billion years after the Big Bang, half of all the massive elliptical galaxies we see around us today had already formed.

Thanks to the birth and violent deaths of generations of massive Population III stars, the universe is now flooded with heavy elements such as iron, oxygen, carbon and nitrogen – the building blocks for life – but also elements like silicon and uranium, which help to build rocky planets and heat their interiors. The light from the quasar J033829.31+002156.3 can be studied in detail, and shows that by this time element-building through supernova explosions of Population III stars has produced lots of carbon, nitrogen and silicon. The earliest planets and life forms based upon these elements now have a chance to appear in the universe. Amazingly, we have already spotted such an ancient world!

Earliest Planets Form

At 1 billion years after the Big Bang, the oldest known planet, PSR B1620-26 b, has already formed. Located in the globular cluster Messier 4, about 12,400 light-years from Earth, it bears the unofficial nicknames 'Methuselah' and 'The Genesis Planet' because of its extreme age. The planet is in orbit around two very old stars: a dense white dwarf and a neutron star. The planet has a mass of 2.5 times that of Jupiter, and orbits at a distance a little greater than the distance between Uranus and our own Sun. Each orbit of the planet takes about 100 years.

Wonders to Come!

Although the Hubble Space Telescope strains at its capabilities to see objects at this early stage in cosmic history, the launch of NASA’s Webb Space Telescope will uncover not dozens but thousands of these young pre-galactic objects with its optimized design. Within the next decade, we will have a virtually complete understanding of what happened during and after the Cosmic Dark Ages when the earliest possible sources of light could have formed, and one can only marvel at what new discoveries will turn up.

What an amazing time in which to be alive!

Check back here on Wednesday, May 24 for my next topic!

Our Unstable Universe

Something weird is going on in the universe that is causing astronomers and physicists to lose a bit of sleep at night. You have probably heard about the discovery of dark energy and the accelerating expansion of the universe. This is a sign that something is afoot that may not have a pleasant outcome for our universe or the life in it.

Big Bang Cosmology V 1.0

The basic idea is that our universe has been steadily expanding in scale since 14 billion years ago, when it flashed into existence in an inconceivably dense and hot explosion. Today we can look around us and see this expansion as the constantly-increasing distances between galaxies embedded in space. Astronomers measure this change in terms of a single number called the Hubble Constant, which has a value of about 70 km/sec per megaparsec. For every million parsecs of separation between galaxies (a distance of 3.26 million light years), you will see distant galaxies speeding away from each other at 70 km/sec. This conventional Big Bang theory has been the mainstay of cosmology for decades, and it has helped explain everything from the formation of galaxies to the abundance of hydrogen and helium in the universe.
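To make those numbers concrete, here is a minimal sketch of Hubble’s law, v = H0 × d; the separations chosen are purely illustrative:

```python
# Hubble's law: recession velocity v = H0 * d.
H0 = 70.0  # km/s per megaparsec

for d_mpc in [1, 10, 100, 1000]:  # illustrative separations in megaparsecs
    v = H0 * d_mpc
    print(f"{d_mpc:5d} Mpc -> receding at {v:8.0f} km/s")

# At 1 Mpc (about 3.26 million light years) the speed is 70 km/s,
# exactly as described above.
```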

Big Bang Cosmology V 2.0

Beginning in the 1980s, physicists such as Alan Guth and Andrei Linde added some new physics to the Big Bang based on cutting-edge ideas in theoretical physics. For a decade, physicists had been working on ways to unify the three forces in nature: electromagnetism, and the strong and weak nuclear forces. This led to the idea that just as the Higgs Field was needed to make the electromagnetic and weak forces look different rather than behave as a nearly identical ‘electroweak’ force, the strong force needed its own scalar field to break its symmetry with the electroweak force.

When Guth and Linde added this field to the equations of Big Bang cosmology, they made a dramatic discovery. As the universe expanded and cooled, for a brief time this new scalar field made the transition between a state where it allowed the electroweak and strong forces to look identical, and a state where this symmetry was broken, representing the current state of affairs. This period extended from about 10^-37 seconds to 10^-35 seconds after the Big Bang: a mere instant in cosmic time, but the impact of this event was spectacular. Instead of the universe expanding at a steady rate in time as it does now, the separations between particles increased exponentially in time in a process called Inflation. Physicists now had a proper name for this scalar field: the Inflaton Field.
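To get a feel for how dramatic exponential expansion is compared with steady expansion, here is a toy calculation; the inflation rate is chosen as an assumption (not a measured value) to give the often-quoted sixty e-folds over this window:

```python
import math

# Toy comparison of steady versus exponential (inflationary) growth of the
# separation between two particles over the 10^-37 s to 10^-35 s window.
# The inflation rate below is an illustrative assumption chosen to give
# ~60 e-folds in that window.

t_start, t_end = 1e-37, 1e-35          # seconds
H_inf = 60.0 / (t_end - t_start)       # rate giving 60 e-folds in the window

growth = math.exp(H_inf * (t_end - t_start))
print(f"exponential growth factor: e^60 ~ {growth:.2e}")   # ~1.1e26

# A steady (linear-in-time) expansion over the same interval would grow
# separations by only a factor of t_end / t_start = 100.
print(f"steady growth factor: {t_end / t_start:.0f}")
```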

Observational cosmology has been able to verify since the 1990s that the universe did, indeed, pass through such an inflationary era at about the calculated time. The expansion of space at a rate many trillions of times faster than the speed of light ensured that we live in a universe that looks as ours does, especially in terms of the uniformity of the cosmic ‘fireball’ temperature. It’s 2.7 kelvins no matter where you look, which would have been impossible had the Inflationary Era not existed.

Physicists consider the vacuum of space to be more than ‘nothing’. Quantum mechanically, it is filled by a patina of particles that invisibly come and go, and by fields that can give it a net energy. The presence of the Inflaton Field gave our universe a range of possible vacuum energies depending on how the field interacted with itself. As with other things in nature, objects in a high-energy state will evolve to occupy a lower-energy state. Physicists call the higher-energy state the False Vacuum and the lower-energy state the True Vacuum, and there is a specific way that our universe would have made this change. Before Inflation, our universe was in a high-energy, False Vacuum state governed by the Inflaton Field. As the universe continued to expand and cool, a lower-energy state for this field was revealed in the physics, but the particles and fields in our universe could not instantaneously go into that lower-energy state. As time went on, the difference in energy between the initial False Vacuum and the True Vacuum continued to increase. Like bubbles in a soda, small parts of the universe began to make this transition so that we now had a vast area of the universe in a False Vacuum in which bubbles of space in the True Vacuum began to appear. But there was another important process going on as well.

When you examine how this transition from False to True Vacuum occurs in Einstein’s equations describing Big Bang cosmology, you find that a region still in the False Vacuum is an exponentially expanding space, while the space inside the True Vacuum bubbles expands at only the simple, steady rate defined by the Hubble Constant. So at the time of inflation, we have to think of the universe as a patina of True Vacuum bubbles embedded in an exponentially-expanding space still caught in the False Vacuum. What this means for us today is that we are living inside one of these True Vacuum bubbles, where everything looks about the same and uniform; but out there beyond our visible universe horizon, some 14 billion light years away, we eventually enter that exponentially-expanding False Vacuum universe. Our own little bubble may actually be billions of times bigger than what we can see around us. It also means that we will never be able to see what these other distant bubbles look like, because they are expanding away from us at many times the speed of light.

Big Bang Cosmology V 3.0

You may have heard of Dark Energy and what astronomers have detected as the accelerating expansion of the universe. By looking at distant supernovae, we can detect that since about 6 billion years after the Big Bang, our universe has not been expanding at a steady rate at all. The separations between galaxies have been increasing at an exponential rate. This is caused by Dark Energy, which is present in every cubic meter of space. The more space there is as the universe expands, the more Dark Energy there is, and the faster the universe expands. What this means is that we are living in a False Vacuum state today, in which a new Inflaton Field is causing space to dilate exponentially. It doesn’t seem too uncomfortable for us right now, but the longer this state persists, the greater the probability that our corner of the universe will see a ‘bubble’ of the new True Vacuum appear. Inside this bubble the physics will be slightly different: the mass of the electron or of the quarks, for example, may change. We don’t know when our corner of the universe will switch over to its True Vacuum state. It could be tomorrow or 100 billion years from now. But there is one thing we do know about this progressive, accelerated expansion.

Eventually, distant galaxies will be receding from our Milky Way faster than the speed of light as they are helplessly carried along by a monstrously-dilating space. This also means they will become permanently invisible for the rest of eternity, as their light signals can never keep pace with the exponentially-increasing space between us. Meanwhile, our Milky Way and its gravitationally-bound neighbors will become the only cosmic collection of matter we will ever be able to see from then on. It is predicted that this situation will occur about 100 billion years from now, when even the nearest galaxies outside our Local Group will pass beyond this distant horizon.
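Hubble’s law itself tells us where that horizon lies today: setting v = H0 × d equal to the speed of light gives d = c/H0, which works out to roughly the 14-billion-light-year figure quoted earlier. A quick sketch:

```python
# Distance at which Hubble-law recession reaches the speed of light:
# v = H0 * d = c  ->  d = c / H0.
c = 299792.458          # speed of light, km/s
H0 = 70.0               # km/s per megaparsec
MPC_TO_MLY = 3.26       # million light years per megaparsec

d_mpc = c / H0          # ~4280 Mpc
print(f"{d_mpc:.0f} Mpc = {d_mpc * MPC_TO_MLY / 1000:.1f} billion light years")
# ~14 billion light years, matching the horizon distance quoted above.
```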

What the new physics will be in the future True Vacuum state is anyone’s guess. If the difference in energy between the False and True Vacuum is only a small fraction of the mass of a neutrino (a few electron-Volts), we may hardly know that it happened, and life will continue. But if it is comparable to the mass of the electron (511,000 eV), we are in for some devastating and fatal surprises best not contemplated.

Check back here on Tuesday, May 16 for my next topic!

Boltzmann Brains

Back in the 1800s, Ludwig Boltzmann (1844-1906) developed the statistical theory of entropy and thermodynamics, which has been the mainstay of chemistry and physics ever since. Long before atoms were identified, Boltzmann had used them in designing his theory of statistical mechanics, which related entropy to the number of possible statistical states these particles could occupy. His famous formula

S = k log W

is even inscribed on his tombstone! His frustration with the anti-atomists, who rejected his crowning achievement of statistical mechanics, drove him into a profound despair, and he committed suicide in 1906.

If you flip a coin 4 times, it is unlikely that all 4 flips will result in all-heads or all-tails. It is far more likely that you will get a mixture of heads and tails. This is a result of there being a total of 2^4 = 16 possible outcomes or ‘states’ for this system, and the all-heads and all-tails states each occur only 1/16 of the time. Most of the states you will produce have a mixture of heads and tails (14/16), as the sketch below tallies. Now replace the coin flips by the movement of a set of particles in three dimensions.
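Here is a minimal sketch that counts those sixteen states directly:

```python
from itertools import product

# Enumerate all 2^4 = 16 outcomes of four coin flips and count how many
# are "all heads", "all tails", or a mixture.
outcomes = list(product("HT", repeat=4))
all_same = [o for o in outcomes if len(set(o)) == 1]
mixed = [o for o in outcomes if len(set(o)) > 1]

print(len(outcomes))   # 16 total states
print(len(all_same))   # 2  (all-heads and all-tails, 1/16 each)
print(len(mixed))      # 14 (mixed states, 14/16 of the time)
```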

Boltzmann’s statistical mechanics related the number of possible states for N particles moving in 3-dimensional space to the entropy of the system. It is more difficult to calculate the number of states than for the coin flip example above, but it can be done using his mathematics, and the result is the ‘W’ in his equation S = k log W. The bottom line is that the more states available to a collection of particles (for example, the atoms of a gas), the higher the entropy given by S = k log W. How does a gas access more states? One way is for you to turn up its temperature so that the particles are moving faster. This means that as you increase the temperature of a gas, its entropy increases in a measurable way.
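As a numerical sketch of this: Boltzmann’s formula turns a count of states into an entropy, and for a monatomic ideal gas at fixed volume the number of accessible states grows with temperature as W ∝ T^(3N/2), so heating the gas raises S by (3/2) N k ln(T2/T1). The particle count and temperatures below are assumed purely for illustration:

```python
import math

# Boltzmann's formula S = k * ln(W); the "log" on his tombstone is the
# natural logarithm.
k = 1.380649e-23  # Boltzmann constant, J/K

def entropy(W: float) -> float:
    return k * math.log(W)

print(entropy(16))  # the 16-state coin system: ~3.8e-23 J/K

# For a monatomic ideal gas at fixed volume, W ~ T^(3N/2), so heating it
# from T1 to T2 raises the entropy by (3/2) * N * k * ln(T2 / T1).
# N = 1e23 particles and the 300 K -> 400 K change are assumed values.
N = 1e23
dS = 1.5 * N * k * math.log(400.0 / 300.0)
print(f"entropy increase: {dS:.3f} J/K")   # ~0.60 J/K
```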

Cosmologically, as our universe expands and cools, its entropy is actually increasing steadily, because more and more space is available for the particles to occupy even as they move more slowly while the temperature declines. The Big Bang event itself, even at its unimaginably high temperature, was actually a state of very low entropy, because even though particles were moving near the speed of light, there was so little space for matter to occupy!

For random particles in a gas colliding like billiard balls, with no other organizing forces acting on them (the picture called the kinetic theory of gases), we can imagine a collection of 100 red particles clustered in one corner of a box and 1,000 other blue particles located elsewhere in the box. If we were to stumble on a box of 1,100 particles that looked like this, we would immediately say ‘how odd’, because we sense that as the particles jostled around, the 100 red particles would quickly get uniformly spread out inside the box. This is an expression of there being far more available states where the red balls are uniformly mixed than states where they are clustered together. It is also a statement that the clustered arrangement of red balls is a lower-entropy version of the system, and the uniformly-mixed version is a higher-entropy one. So we would expect the system to evolve from lower to higher entropy as the red particles diffuse through the box: this is the Second Law of Thermodynamics.

Boltzmann Brains

The problem is that, given enough time, even very rare states have a non-zero probability of happening. With enough time and enough jostling, we could randomly find the red balls once again clustered together. It may take billions of years, but nothing stands in the way of this happening from statistical principles. Now let’s suppose that instead of just a collection of red balls, we have a large enough system of particles that some rare states resemble any physical object you can imagine: a bacterium, a cell phone, a car… even a human brain!
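As a concrete example of how rare, but still possible, such a recurrence is, consider the chance that all 100 red particles happen, in one random snapshot, to sit in a corner region occupying one-eighth of the box. Treating positions as independent and uniform is a simplifying assumption of the kinetic-theory picture above:

```python
# Probability that all 100 red particles are found, by chance, back in one
# corner occupying 1/8 of the box (positions treated as independent and
# uniformly distributed -- an idealization of the kinetic theory of gases).
p_one_particle = 1 / 8
p_all_clustered = p_one_particle ** 100
print(f"{p_all_clustered:.3e}")   # ~4.9e-91 per random snapshot

# Vanishingly small, but not zero: given enough snapshots (enough time),
# even this state eventually turns up.
```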

A human brain is a collection of particles organized in a specific way to function and to store memories. In a sufficiently large and old universe, there is no obvious reason why such a brain could not just randomly assemble itself like the 100 red particles in the above box. It would be sentient, have memories and even senses. None of its memories would be of actual events it experienced; they would simply be artificial reconstructions created by just the right neural pathways being randomly assembled. It would remember an entire lifetime to date without having actually lived through any of those events in space and time.

When you calculate the probability for such a brain to evolve naturally in a low-entropy universe like ours, rather than just randomly assembling itself, you run into a problem. In Boltzmann’s cosmology, our vast, low-entropy and seemingly highly organized universe is embedded in a much larger universe where the entropy is much higher. It is far less likely for an entire organized universe to exist in a low-entropy state conducive to organic evolution than for a single sentient brain to simply assemble itself from random collisions. In any universe destined to last for eternity, incorporeal brains should therefore rapidly come to outnumber actual sentient creatures! This is the Paradox of the Boltzmann Brain.

Creationists like to invoke the Second Law to deny that evolution could work as a process of random collisions, yet the consequence of this purely random picture of structure in the universe is that we should all actually be Boltzmann Brains, not assembled by evolution at all. It is, however, of no comfort to those who believe in God, because God was not involved in randomly assembling these brains, complete with their own memories!

So how do we avoid filling our universe with the abomination of these incorporeal Boltzmann Brains?

The Paradox Resolved

First of all, we do not live in Boltzmann’s universe. Instead of an eternally static system existing in a finite space, direct observations show that we live in an expanding universe of declining density and steadily increasing entropy.

Secondly, it isn’t just random collisions that dictate the assembly of matter (a common idea used by Creationists to dismantle evolution) but a collection of specific underlying forces and fundamental particles that do not come together randomly but in a process that is microscopically determined by specific laws and patterns. The creation of certain simple structures leads through chemical processes to the inexorable creation of others. We have long-range forces like gravity and electromagnetism that non-randomly organize matter over many different scales in space and time.

Third, we do not live in a universe dominated by random statistical processes, but one in which we find regularity in composition and physical law spanning scales from the microscopic to the cosmic, all the way out to the edges of the visible universe. When two particles combine, they can stick together through chemical forces and grow in numbers from either electromagnetic or gravitational forces attracting other particles to the growing cluster, called a nucleation site.

Fourth, quantum processes and gravitational processes dictate that all existing particles will eventually decay or be consumed in black holes, which will themselves evaporate, destroying all but the most elementary particles such as electrons, neutrinos and photons, none of which can be assembled into brains and neurons.

The result is that Boltzmann Brains could not exist in our universe, and will not exist even in the eternal future as the cosmos becomes more rarefied and reaches its final and absolute thermodynamic equilibrium.

The accelerated expansion of the universe now in progress will also ensure that eventually all complex collections of matter are shattered into individual fundamental particles, each adrift in its own expanding and utterly empty universe!

Have a nice day!

Check back here on Tuesday, May 9 for my next topic!