
What is Grand Unification Theory?


In the mid-1970s physicists were excited by the recent success of Steven Weinberg, Abdus Salam and Sheldon Glashow in creating a unified theory of the electromagnetic and weak forces. By applying what is called 'group theory', physicists such as Glashow, Georgi and others proposed that you could use the symmetries of 'SU(5)' to unite the weak and electromagnetic forces with the strong nuclear force, which is mediated by gluons. This became known as 'Grand Unification Theory' or 'GUT', and it quickly evolved into many variants including supersymmetric GUTs (SUSY-GUTs), supergravity theory and dimensionally-extended SUSY GUTs, before being overtaken by string theory in the early 1980s.

It produced a lot of excitement in the late 1970s and early 1980s because it seemed to provide an explanation for the strong, weak and electromagnetic forces in a common mathematical language. Its major prediction was that at the enormous energy of 10^15 GeV (a trillion trillion electron volts) the strong nuclear force would become similar in strength to (unified with) the electromagnetic and weak forces. Applying these ideas to cosmology also led to the creation of Inflationary Cosmology.

Today, the so-called Standard Model of particle physics unifies physics (except for gravity) and uses some of the basic ideas of GUTs to do so. Physicists worked very hard to confirm several basic ideas in GUT theory, such as 'spontaneous symmetry breaking', by looking for the Higgs boson. In 2012 this elusive particle was discovered at the Large Hadron Collider, nearly 50 years after it was predicted. This was a revolutionary discovery because it demonstrated that the entire concept of spontaneous symmetry breaking seemed to be valid. It was the keystone idea in the unification of the electromagnetic and weak forces, for which Abdus Salam, Steven Weinberg and Sheldon Glashow received the Nobel Prize in 1979. Spontaneous symmetry breaking was also the workhorse concept behind much of the mathematical work on GUTs.

GUT research in the booming 1970s also uncovered a possible new symmetry of nature, 'supersymmetry', which continues to be searched for. The unpleasant thing about the current Standard Model is that it has about two dozen adjustable constants that have to be experimentally fine-tuned to reproduce our physical world, including such numbers as the constant of gravity, the speed of light, the fine structure constant, and the constants that determine how strongly the leptons and quarks interact. Physicists think that this is far too many, and so the search is on for a better theory that has far fewer ad hoc constants. There is also the problem that the Standard Model doesn't include gravity.

The hope, pursued in the 1970s, that gravity could somehow be incorporated into GUTs was ultimately never realized because of the advent of string theory, which provided a newer way to look at gravity as a 'quantum field'. Most popular versions of string theory include supersymmetry, hence they are called superstring theories.

Supersymmetry has grown to become a linchpin concept behind many ideas for unifying all four forces including gravity. However, after five years of searching for signs of it at the CERN Large Hadron Collider, not so much as a trace of it has been detected. It seems as though the Standard Model is all there is, and the strong and 'electroweak' forces may simply not be unified any further.

What are the ’10 dimensions’ that physicists are always talking about?


This stunning simulation of Calabi-Yau spaces at each point in 3-d space was created by Jeff Bryant and based on concepts from A.J. Hanson, "A Construction for Computer Visualization of Certain Complex Curves," in "Computers and Mathematics" column, ed. Keith Devlin, of Notices of the American Mathematical Society, 41, No. 9, pp. 1156–1163 (American Math. Soc., Providence, November/December, 1994). See his website for details.

We know that we need at least four to keep track of things: the three dimensions of space that give us freedom to move up-down, left-right, and forward-backward, plus the dimension of time. These dimensions of spacetime form the yellow gridwork in the image above. At each intersection you have a new location in space and time, and mathematically there are an infinite number of these coordinate points. But the real world may be different from the mathematical ideal. There may not be an infinite number of points between 0 and 1, but only a finite number.

We know that you can subdivide space all the way down to the quantum realm, to distances and times of 10^-20 cm and 10^-30 seconds, and spacetime still looks perfectly smooth to the physics we observe there. But what if we go down even further? Since the 1940s, a simple calculation using the three fundamental constants h, c and G has turned up a smallest quantum distance of 10^-33 cm and a smallest time of 10^-43 seconds, called the Planck scale. In the figure above, the spacings in the yellow grid are at this scale, and that is believed to be the smallest possible separation for physical processes in space and time.
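For the curious, that 'simple calculation' can be checked numerically. Here is a minimal sketch (my own, using standard CODATA values and the reduced constant ħ = h/2π, as the conventional definition does):

```python
import math

# Physical constants in SI units (CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# The only combinations of hbar, G and c with units of length and time:
planck_length = math.sqrt(hbar * G / c**3)
planck_time   = math.sqrt(hbar * G / c**5)

print(f"Planck length: {planck_length * 100:.2e} cm")  # ~1.6e-33 cm
print(f"Planck time:   {planck_time:.2e} s")           # ~5.4e-44 s, i.e. of order 10^-43 s
```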

Since the 1970s, work on the unification of forces has uncovered a number of ideas that could work, but nearly all require that we add some additional dimensions to the four we know. All of these extra dimensions are believed to appear at the Planck scale, so they are accessible to elementary particles but not to humans. In string theory, these added dimensions are rolled up into identical but complex mini-geometries like the ones shown above.

No one has the faintest idea how to go about proving that other dimensions really exist in the microcosm. The energies are so big that we cannot figure out how to build the necessary accelerators and instruments. We know that Mother Nature is rather frugal, so I would be very surprised if more than four dimensions existed. There have been many proposals since the 1920s to increase the number of dimensions of spacetime beyond the standard four that relativity uses. In all cases, these extra dimensions are vastly smaller than an atom and are not accessible to humans… fortunately!

Current string theory proposes 6 additional dimensions while M-theory allows for a seventh. These additional dimensions are sometimes called ‘internal degrees of freedom’ and are related to the number of fundamental symmetries present in the physical world at the quantum scale. The equations that physicists work with require these additional dimensions so that new symmetries can be defined that allow physicists to understand physical relationships between the various particle families.

Physicists think these are actual, real dimensions of the physical world, except that they are 'compact' and have finite sizes, unlike our four dimensions of space and time, which seem almost infinite in size. The figure above shows what these compact additional dimensions look like, mathematically. Each point in 4-dimensional spacetime has another six dimensions attached to it, which particles and forces can use as extra degrees of freedom to define themselves and how they will interact with each other. These spaces are called Calabi-Yau manifolds, and it is their 6-dimensional geometry that determines the exact properties of fundamental particles.

Do not confuse them with 'hyperspace', because particles do not actually 'move' along these other dimensions. They are not 'spatial' dimensions; they are as unlike space and time as time is unlike space!

Is there empty space inside particles the same way there is inside atoms?


There is no known 'inside' to an elementary point particle such as an electron. It is not a tiny sphere with an interior space, though back in 1920 physicists asked whether the space inside an electron was the same as outside. This was shortly before the wave/particle properties of matter were discovered by Louis de Broglie, which set the stage for quantum mechanics.

One way you might think of it is that it looks something like a globular star cluster like the one shown here, called Messier 2, taken by the Hubble Space Telescope. (Credits: NASA, ESA, STScI, and A. Sarajedini (University of Florida))

The 'empty space' within and near particles such as electrons and quarks is far more active and complex than the lower-energy 'empty space' within the vastly larger boundaries of atoms. There is no such thing as 'empty space' anywhere in nature. There are only apparent 'voids' that SEEM not to contain matter or energy, but at the level of the quantum world, even 'empty' voids are teeming with activity as particles come and go, created out of quantum fluctuations in any of a variety of fields in nature.

Heisenberg's Uncertainty Principle all but guarantees the existence of such a dynamic, physical vacuum. Physicists, moreover, have conducted many experiments where the effects of these ghostly, half-real particles can be seen clearly. The level of activity that fills the physical vacuum is set by the energy at which the vacuum is 'observed'. Within an atom, much of the activity is carried by 'virtual photons' that mediate the electromagnetic force, and by the occasional electron-positron pairs that appear and vanish. At very high energies, and correspondingly small length scales, the vacuum fills up with the comings and goings of even more high-energy particles: quark-antiquark pairs, gluon-antigluon pairs, muon-antimuon pairs, and a whole host of other particles and their anti-matter twins. Within the nucleus of an atom, gluons and their anti-particles are everywhere, going about their business to keep the quarks bound into the nuclear 'quark-gluon plasma', portions of which we see as protons and neutrons.

For an electron, enormous energy is stored in its electric field at small scales, and this allows more and more massive particle-antiparticle pairs to be created out of quantum fluctuations in this field.

So the 'inside' of an electron is an onion-like region of space where low-energy virtual particles form an extended halo surrounding a core where ever more massive particle clouds are encountered.

What is ‘Now’?

What is the duration of the present moment? How is it that this present moment is replaced by ‘the next moment’?

Within every organism, sentient or not, there are thousands of chemical processes that occur with their own characteristic time periods, but these time periods start and stop at different times so that there is no synchronized 'moment'. Elementary atomic collisions that build up molecules take nanoseconds, while cell division takes minutes to hours, and tissue cell lifespans vary from 2 days in the stomach lining to 8 years for fat cells (see Cell Biology). None of these jangled timescales, collectively or in isolation, creates the uniform experience we have of Now and its future moments. To find the timescale that corresponds to the Now experience we have to look elsewhere.

It’s all in the mind!

A variety of articles over the years have identified 2 to 3 seconds as the maximum duration of what most people experience as 'now', and what researchers call the 'specious present'. This is the time required by our brain's neurological mechanisms to combine the information arriving at our senses with our internal, current model of the 'outside world'. During this time an enormous amount of neural activity has to happen. Not only does the sensory information have to be integrated together for every object in your visual field and cross-connected to the other senses, but dozens of specialized brain regions have to be activated or de-activated to update your world model in a consistent way.

In a previous blog I discussed how important this world model is in creating within you a sense of living in a consistent world with a coherent story. But this process is not fixed in stone. Recent studies by Sebastian Sauer and his colleagues at the Ludwig-Maximilians-Universität in Munich show that mindfulness meditators can significantly increase their sense of ‘now’ so that it is prolonged for up to 20 seconds.

In detail, a neuron discharge lasts about 1 millisecond, but it has to be separated from the next one by about 30 milliseconds before a sequence is perceived, and this seems to be true for all senses. When you see a 'movie' it is a succession of still images flashed into your visual cortex at intervals less than 30 milliseconds, giving the illusion of a continuous unbroken scene. (Dainton: Stanford Encyclopedia of Philosophy, 2017).

The knitting together of these 'nows' into a smooth flow-of-time is done by our internal model-building system. It works lightning-fast to connect one static collection of sensory inputs to another set and hold these both in our conscious 'view' of the world. This gives us a feeling of the passing of one set of conditions smoothly into another set of conditions that now make up the next 'Now'. To get from one moment to the next, our brain can play fast-and-loose with the data and interpolate what it needs. For example, in our visual world, the spot where the optic nerve exits the retina produces a Blind Spot, but you never notice it because there are circuits that interpolate across this spot to fill in the scenery. The same thing happens in the time dimension, with the help of our internal model, to turn our jagged perceptions in time into a smooth movie experience.

Neurological conditions such as strokes, or psychotropic chemicals, can disrupt this process and cause dramatic problems. Many schizophrenic patients stop perceiving time as a flow of linked events. These defects in time perception may play a part in the hallucinations and delusions experienced by schizophrenic patients, according to some studies. There are other, milder aberrations that can affect our sense of the flow-of-time.

Research has also suggested that the feeling of awe has the ability to expand one's perception of time availability. Fear also produces time-sense distortion. Time seems to slow down when a person skydives or bungee jumps, or when a person suddenly and unexpectedly senses the presence of a potential predator or mate. Research also indicates that the internal clock, used to time durations in the seconds-to-minutes range, is linked to dopamine function in the basal ganglia. Studies in which children with ADHD are given time estimation tasks show that time passes very slowly for them.

Because the volume of data is enormous, we cannot hold many of these consecutive Now moments in our consciousness with the same clarity, and so earlier Nows either pass into short-term memory if they have been tagged with some emotional or survival attributes, or fade quickly into complete forgetfulness. You will not remember the complete sensory experience of diving into a swimming pool, but if you were pushed, or were injured, you will remember that specific sequence of moments with remarkable clarity years later!

The model-building aspect of our brain is just another tool it has, equivalent to its pattern-recognition ability in space. It looks for patterns in time to find correlations, which it then uses to build up expectations for 'what comes next'. Amazingly, when this feature yields more certainty than the evidence of our senses, psychologists like Albert Powers at Yale University say that we experience hallucinations (Fan, 2017). In fact, 5-15% of the population experience auditory hallucinations (songs, voices, sounds) at some time in their lives, when the brain literally hears a sound that is not there because it was strongly expected on the basis of other clues. One frequent example is that people claim to hear the Northern Lights as a crackling fire or a swishing sound, because their visual system creates this expectation and the brain obliges.

This, then, presents us with the neurological experience of Now. It is between 30 milliseconds and several minutes in duration. It includes a recollection of the past which fades away for longer intervals in the past, and includes a sense of the immediate future as our model-making facility extrapolates from our immediate past and fabricates an expectation of what comes next.

Living in a perpetual Now is no fun. The famed neurologist Oliver Sacks describes a patient, Clive Wearing, with a severe form of amnesia, who was unable to form any new memories that lasted longer than 30 seconds, and became convinced every few minutes that he was fully conscious for the first time. "In some ways, he is not anywhere at all; he has dropped out of space and time altogether. He no longer has any inner narrative; he is not leading a life in the sense that the rest of us do….It is not the remembrance of things past, the "once" that Clive yearns for, or can ever achieve. It is the claiming, the filling, of the present, the now, and this is only possible when he is totally immersed in the successive moments of an act. It is the "now" that bridges the abyss."

Physical ‘Now’.

This monkeying around with brain states, internal model-making and sensory data creates Now as a phenomenon we experience, but the physical world outside our collective brain population does not operate through its own neural systems to create a Cosmic Now. That would only be the case if, for example, we were literally living inside The Matrix….which I believe we are not. So in terms of physics, the idea of Now does not exist. We even know from relativity that there can be no uniform and simultaneous Now spanning large portions of space or the cosmos. This is a problem that has bedeviled many people across the millennia.

Augustine (in the fourth century) wrote, “What is time? If no one asks me, I know; if I wish to explain, I do not know. … My soul yearns to know this most entangled enigma.” Even Einstein himself noted ‘…that there is something essential about the Now which is just outside the realm of science.’

Both of these statements were made before quantum theory became fully developed. Einstein developed relativity, but this was a theory in which spacetime took the place of space and time individually. If you wanted to define 'now' by a set of simultaneous conditions, relativity put the kibosh on that idea: because of the relative motions and accelerations of all Observers, there can be no simultaneous 'now' that all Observers can experience. Also, there was no 'flow of time', because relativity was a theory of worldlines and complete histories of particles from start to finish (fixed by the boundary conditions of the worldlines). Quantum theory, however, showed some new possibilities.

In physics, time is a variable, often represented by the letter t, that is a convenient parameter with which to describe how a system of matter and energy changes. The first very puzzling feature of time as a physical variable is that all mathematical representations of physical laws or theories show time as continuous, smooth and infinitely divisible into smaller intervals. These equations are also 'timeless' in that they can be written down on a piece of paper and accurately describe how a system changes from start to finish (based on boundary conditions defined at t=0), but the equations show this process 'all at once'.

In fact, this perspective is so built into physics that it forms the core of Einstein’s relativity theory in the form of the 4-d spacetime ‘block’. It also appears in quantum mechanics because fundamental equations like Schroedinger’s Equation also offer a timeless view of quantum states.

In all these situations, one endearing feature of our world is actually suppressed and mathematically hidden from view, and that is precisely the feature we call ‘now’.

To describe what things look like Now, you have to dial into the equations the number t = t(now). How does nature do that? As discussed by physicist Lee Smolin in his book 'Time Reborn', this is the most fundamental experience we have of the physical world as sentient beings, yet it is not represented by any feature in the physical theories we have developed thus far. There is no theory that selects t = t(now) over all the infinite other moments in time.

Perhaps we are looking in the wrong place!

Just as we have seen that what we call 'space' is built up like a tapestry from a vast number of quantum events described (we hope!) by quantum gravity, time also seems to be created from a synthesis of elementary events occurring at the quantum scale. For example, what we call temperature is the result of innumerable collisions among elementary objects such as atoms. Temperature is a measure of the average collision energy of a large collection of particles, but cannot be identified as such at the scale of individual particles. Temperature is a phenomenon that has emerged from the collective properties of thousands or trillions of individual particles.

A system can be described completely by its quantum state – which is a much easier thing to do when you have a dozen atoms than when you have trillions, but the principle is the same. This quantum state describes how the elements of the system are arrayed in 3-d space, but because of Heisenberg's Uncertainty Principle, the location of a particle at a given speed is spread out rather than localized to a definite position. But quantum states can also become entangled. For these systems, if you measure one of the particles and detect property P1, then the second particle must have property P2. The crazy thing is that until you measured that property in the first particle, it could have had either property P1 or P2, but after the measurement the distant particle 'knew' that it had to have the corresponding property, even though this information would seemingly have to travel faster than light to ensure consistency.
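To make that correlation concrete, here is a toy sketch (my own illustration, not anything from the physics literature) of repeated same-basis measurements on an entangled pair. It reproduces the perfect P1/P2 anti-correlation described above, although, being a classical simulation, it cannot capture the basis-dependent statistics that make real entanglement so strange:

```python
import random

def measure_entangled_pair():
    """Toy model: each individual outcome is random (undetermined until
    measured), but the pair is always perfectly anti-correlated."""
    first = random.choice(["P1", "P2"])
    second = "P2" if first == "P1" else "P1"
    return first, second

for _ in range(5):
    a, b = measure_entangled_pair()
    print(a, b)   # always (P1, P2) or (P2, P1), never (P1, P1) or (P2, P2)
```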

An intriguing set of papers by physicist Seth Lloyd at Harvard University in 1984 showed that over time, the quantum states of the member particles become correlated and shared by the larger ensemble. This direction of increasing correlation goes only one way and establishes the ‘Arrow of Time’ on the quantum scale.

One interesting feature of this entanglement idea is that ‘a few minutes ago’, our brain’s quantum state was less correlated with its surroundings and our sensory information than at a later time. This means that the further you go into the past moments, the less correlated they are with the current moment because, for one, the sensory information has to arrive and be processed before it can change our brain’s state. Our sense of Now is the product of how past brain states are correlated with the current state. A big part of this correlating is accomplished, not by sterile quantum entanglement, but by information transmitted through our neural networks and most importantly our internal model of our world – which is a dynamic thing.

If we did not have such an internal model that correlates our sensory information and fabricates an internal story of perception, our sense of Now would be very different, because so much of the business of correlating quantum information would not occur very quickly. Instead of a Now measured in seconds, our Nows would be measured in hours, days or even lifetimes, and be a far more chaotic experience because it would lack a coherent, internal description of our experiences.

This seems to suggest that no two people live in exactly the same Now, but these separate Now experiences can become correlated together as the population of individuals interact with each other and share experiences through the process of correlation. As for the rest of the universe, it exists in an undefined Now state that varies from location to location and is controlled by the speed of light, which is the fastest mode of exchanging information.

Read more:

In my previous blogs, I briefly described how the human brain perceives and models space (Blog 14: Oops one more thing), how Einstein and other physicists dismiss space as an illusion (Blog 10: Relativity and space ), how relativity deals with the concept of space (Blog 12: So what IS space?), what a theory of quantum gravity would have to look like (Blog 13: Quantum Gravity Oh my! ), and along the way why the idea of infinity is not physically real (Blog 11: Is infinity real?) and why space is not nothing (Blog 33: Thinking about Nothing). I even discussed how it is important to ‘think visually’ when trying to model the universe such as the ‘strings’ and ‘loops’ used by physicists as an analog to space ( Blog 34: Thinking Visually)

I also summarized the nature of space in a wrap-up of why something like a quantum theory for gravity is badly needed, because the current theories of quantum mechanics and general relativity are incomplete but point the way towards a theory that is truly background-independent and relativistic (Blog 36: Quantum Gravity-Again! ). These considerations describe the emergence of the phenomenon we call 'space' but also downplay its importance, because it is an irrelevant and misleading concept.


The Decay of the False Vacuum

Written by Sten Odenwald. Copyright (C) 1983 Kalmbach Publishing. Reprinted by permission

In the recently developed theory by Steven Weinberg and Abdus Salam that unifies the electromagnetic and weak forces, the vacuum is not empty. This peculiar situation comes about because of the existence of a new type of field, called the Higgs field. The Higgs field has an important physical consequence since its interaction with the W+, W- and Z particles (the carriers of the weak force) causes them to gain mass at energies below 100 billion electron volts (100 GeV). Above this energy they are quite massless, just like the photon, and it is this characteristic that makes the weak and electromagnetic forces so similar at high energy.

On a somewhat more abstract level, consider Figures 1 and 2, representing the average energy of the vacuum state. If the universe were based on the vacuum state in Figure 1, it is predicted that the symmetry between the electromagnetic and weak interactions would be quite obvious. The particles mediating the forces would all be massless and behave in the same way. The corresponding forces would be indistinguishable. This would be the situation if the universe had an average temperature of a thousand trillion degrees (10^15 K), so that the existing particles collided at energies of 100 GeV. In Figure 2, representing the vacuum state energy for collision energies below 100 GeV, the vacuum state now contains the Higgs field and the symmetry between the forces is suddenly lost or 'broken'. Although at low energy the way in which the forces behave is asymmetric, the fundamental laws governing the electromagnetic and weak interactions remain inherently symmetric. This is a very remarkable and profound prediction since it implies that certain symmetries in Nature can be hidden from us but are there nonetheless.

During the last 10 years physicists have developed even more powerful theories that attempt to unify not only the electromagnetic and weak forces but the strong nuclear force as well. These are called the Grand Unification Theories (GUTs), and the simplest one known was developed by Howard Georgi, Helen Quinn, and Steven Weinberg and is called SU(5) (pronounced 'ess you five'). This theory predicts that the nuclear and 'electroweak' forces will eventually have the same strength, but only when particles collide at energies above 1 thousand trillion GeV, corresponding to the unimaginable temperature of 10 thousand trillion trillion degrees! SU(5) requires exactly 24 particles to mediate forces, of which the 8 massless gluons of the nuclear force, the 3 massless intermediate vector bosons of the weak force and the single massless photon of the electromagnetic force are 12. The remaining 12 represent a totally new class of particles called Leptoquark bosons that have the remarkable property that they can transform quarks into electrons. SU(5) therefore predicts the existence of a 'hyperweak' interaction: a new fifth force in the universe! Currently, this force is 10 thousand trillion trillion times weaker than the weak force but is nevertheless 100 million times stronger than gravity. What would this new force do? Since protons are constructed from 3 quarks, and since quarks can now decay into electrons through the hyperweak interaction, SU(5) predicts that protons are no longer the stable particles we have always imagined them to be. Crude calculations suggest that they may have half-lives between 10^29 and 10^33 years. An immediate consequence of this is that even if the universe were destined to expand for all eternity, after 'only' 10^32 years or so, all of the matter present would catastrophically decay into electrons, neutrinos and photons. The Era of Matter, with its living organisms, stars and galaxies, would be swept away forever, having represented but a fleeting episode in the history of the universe. In addition to proton decay, SU(5) predicts that at the energy characteristic of the GUT transition, we will see the effects of a new family of particles called supermassive Higgs bosons, whose masses are expected to be approximately 1 thousand trillion GeV! These particles interact with the 12 Leptoquarks and make them massive, just as the Higgs bosons at 100 GeV made the W+, W- and Z particles heavy. Armed with this knowledge, let's explore some of the remarkable cosmological consequences of these exciting theories.
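As a quick check on those temperature figures, energy and temperature are related by Boltzmann's constant, T ≈ E/k. Here is a small sketch (my own back-of-envelope check, not part of the original article):

```python
# Convert a characteristic collision energy E to an equivalent temperature T = E / k_B
k_B = 1.380649e-23       # Boltzmann constant, J/K
GeV = 1.602176634e-10    # one GeV expressed in joules

for E_GeV, label in [(100, "electroweak transition"), (1e15, "GUT transition")]:
    T = E_GeV * GeV / k_B
    print(f"{label}: {E_GeV:g} GeV ~ {T:.1e} K")

# -> electroweak transition: 100 GeV ~ 1.2e+15 K (a thousand trillion degrees)
# -> GUT transition: 1e+15 GeV ~ 1.2e+28 K (10 thousand trillion trillion degrees)
```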

The GUT Era

To see how these theories relate to the history of the universe, imagine if you can a time when the average temperature of the universe was not the frigid 3 K that it is today but an incredible 10 thousand trillion trillion degrees (10^15 GeV). The 'Standard Model' of the Big Bang tells us this happened about 10^-37 seconds after Creation. The protons and neutrons that we are familiar with today hadn't yet formed, since their constituent quarks interacted much too weakly to permit them to bind together into 'packages' like neutrons and protons. The remaining constituents of matter – electrons, muons and tau leptons – were also massless and traveled about at essentially light-speed; they were literally a new form of radiation, much like light is today! The 12 supermassive Leptoquarks as well as the supermassive Higgs bosons existed side-by-side with their anti-particles. Every particle-antiparticle pair that was annihilated was balanced by the resurrection of a new pair somewhere else in the universe. During this period, the particles that mediated the strong, weak and electromagnetic forces were completely massless, so that these forces were no longer distinguishable. An inhabitant of that age would not have had to theorize about the existence of a symmetry between the strong, weak and electromagnetic interactions; this symmetry would have been directly observable, and furthermore, fewer types of particles would exist for the inhabitants to keep track of. The universe would actually have been much simpler then!

As the universe continued to expand, the temperature continued to plummet. It was suggested by Dimitri Nanopoulos and Steven Weinberg in 1979 that one of the supermassive Higgs particles may have decayed in such a way that slightly more matter was produced than anti-matter. The remaining evenly matched pairs of particles and anti-particles then annihilated to produce the radiation that we now see as the 'cosmic fireball'.

Exactly what happened to the universe as it underwent the transitions at 10^15 and 100 GeV, when the forces of Nature suddenly became distinguishable, is still under investigation, but certain tantalizing descriptions have recently been offered by various groups of theoreticians working on this problem. According to studies by Alan Guth, Steven Weinberg and Frank Wilczek between 1979 and 1981, when the GUT transition occurred, it occurred in a way not unlike the formation of vapor bubbles in a pot of boiling water. In this analogy, the interiors of the bubbles represent the vacuum state in the new phase, where the forces are distinguishable, embedded in the old symmetric phase where the nuclear, weak and electromagnetic forces are indistinguishable. Inside these bubbles, the vacuum energy is of the type illustrated by Figure 2, while outside it is represented by Figure 1. Since we are living within the new phase with its four distinguishable forces, this has been called the 'true' vacuum state. In the false vacuum state, the forces remain indistinguishable, which is certainly not the situation that we find ourselves in today!

Cosmic Inflation

An exciting prediction of Guth's model is that the universe may have gone through at least one period in its history when the expansion was far more rapid than predicted by the 'standard' Big Bang model. The reason for this is that the vacuum itself also contributes to the energy content of the universe just as matter and radiation do; however, the contribution is in the opposite sense. Although gravity is an attractive force, the vacuum of space produces a force that is repulsive. As Figures 1 and 2 show, the minimum energy state of the false vacuum at 'A' before the GUT transition is at a higher energy than the true vacuum state at 'B' after the transition. This energy difference is what contributes to the vacuum energy. During the GUT transition period, the positive pressure due to the vacuum energy would have been enormously greater than the restraining pressure produced by the gravitational influence of matter and radiation. The universe would have inflated at a tremendous rate, the inflation driven by the pressure of the vacuum! In this picture of the universe, Einstein's cosmological constant takes on a whole new meaning, since it now represents a definite physical concept: it is simply a measure of the energy difference between the true and false vacuum states ('B' and 'A' in Figures 1 and 2) at a particular time in the history of the universe. It also tells us that, just as in de Sitter's model, a universe where the vacuum contributes in this way must expand exponentially in time and not linearly as predicted by the Big Bang model. Guth's scenario for the expansion of the universe is generally called the 'inflationary universe' due to the rapidity of the expansion, and represents a phase that will end only after the true vacuum has supplanted the false vacuum of the old, symmetric phase.

A major problem with Guth’s original model was that the inflationary phase would have lasted for a very long time because the false vacuum state is such a stable one. The universe becomes trapped in the cul-de-sac of the false vacuum state and the exponential expansion never ceases. This would be somewhat analogous to water refusing to freeze even though its temperature has dropped well below 0 Centigrade. Recent modifications to the original ‘inflationary universe’ model have resulted in what is now called the ‘new’ inflationary universe model. In this model, the universe does manage to escape from the false vacuum state and evolves in a short time to the familiar true vacuum state.

We don't really know exactly how long the inflationary phase may have lasted, but the time required for the universe to double its size may have been only 10^-34 seconds. Conceivably, this inflationary period could have continued for as 'long' as 10^-24 seconds, during which time the universe would have undergone 10 billion doublings of its size! This is a number that is truly beyond comprehension. As a comparison, only 120 doublings are required to inflate a hydrogen atom to the size of the entire visible universe! According to the inflationary model, the bubbles of the true vacuum phase expanded at the speed of light. Many of these had to collide when the universe was very young in order that the visible universe appear so uniform today. A single bubble would not have grown large enough to encompass our entire visible universe at this time – a radius of some 15-20 billion light years. On the other hand, the new inflationary model states that even the bubbles expanded in size exponentially, just as their separations did. The bubbles themselves grew to enormous sizes, much greater than the size of our observable universe. According to Albrecht and Steinhardt of the University of Pennsylvania, each bubble may now be 10^3000 cm in size. We should not be too concerned about these bubbles expanding at many times the speed of light, since their boundaries do not represent a physical entity. There are no electrons or quarks riding some expanding shock wave. Instead, it is the non-material vacuum of space that is expanding. The expansion velocity of the bubbles is not limited by any physical speed limit like the velocity of light.
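That '120 doublings' figure is a pleasant piece of arithmetic to verify. A minimal sketch (my own check, taking a hydrogen atom to be roughly 10^-8 cm across and the visible universe to have a radius of roughly 15 billion light years):

```python
import math

atom_cm     = 1e-8              # rough diameter of a hydrogen atom, cm
ly_cm       = 9.46e17           # one light year in centimeters
universe_cm = 15e9 * ly_cm      # ~15 billion light years, in cm

# Doublings needed to inflate an atom to the size of the visible universe:
doublings = math.log2(universe_cm / atom_cm)
print(f"{doublings:.0f} doublings")   # -> ~120

# For comparison, 10 billion doublings would multiply lengths by 2**(1e10),
# a number with roughly three billion digits:
print(f"2**(1e10) has about {1e10 * math.log10(2):.1e} digits")
```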

GUMs in GUTs

A potential problem for cosmologies that have phase transitions during the GUT Era is that a curious zoo of objects could be spawned if frequent bubble mergers occurred, as required by Guth's inflationary model. First of all, each bubble of the true vacuum phase contains its own Higgs field having a unique orientation in space. It seems likely that no two bubbles will have their Higgs fields oriented in quite the same way, so that when bubbles merge, knots will form. According to Gerard 't Hooft and Alexander Polyakov, these knots in the Higgs field are the magnetic monopoles originally proposed by Paul Dirac in 1931, and there ought to be about as many of these as there were bubble mergers during the transition period. Upper limits to their abundance can be set by requiring that they do not contribute to 'closing' the universe, which means that for particles of their predicted mass (about 10^16 GeV), they must be 1 trillion trillion times less abundant than the photons in the 3 K cosmic background. Calculations based on the old inflationary model suggest that these GUMs (Grand Unification Monopoles) may easily have been as much as 100 trillion times more abundant than the upper limit! Such a universe would definitely be 'closed' and moreover would have run through its entire history between expansion and recollapse within a few thousand years. The new inflationary universe model solves this 'GUM' overproduction problem, since we are living within only one of these bubbles, now almost infinitely larger than our visible universe. Since bubble collisions are no longer required to homogenize the matter and radiation in the universe, very few, if any, monopoles would exist within our visible universe.

Horizons

A prolonged period of inflation would have had an important influence on the cosmic fireball radiation. One long-standing problem in modern cosmology has been that all directions in the sky have the same temperature to an astonishing 1 part in 10,000. When we consider that regions separated by only a few degrees in the sky have only recently been in communication with one another, it is hard to understand how regions farther apart than this could be so similar in temperature. The radiation from one of these regions, traveling at the velocity of light, has not yet made it across the intervening distance to the other, even though the radiation may have started on its way since the universe first came into existence. This ‘communication gap’ would prevent these regions from ironing-out their temperature differences.

With the standard Big Bang model, as we look back to earlier epochs from the present time, the separations between particles decrease more slowly than their horizons are shrinking. Neighboring regions of space at the present time become disconnected, so temperature differences are free to develop. Eventually, as we look back to very ancient times, the horizons are so small that every particle existing then literally fills the entire volume of its own observable universe. Imagine a universe where you occupy all of the available space! Prior to the development of the inflationary models, cosmologists were forced to imagine an incredibly well-ordered initial state where each of these disconnected domains (some 10^86 in number) had nearly identical properties such as temperature. Any departure from this situation at that time would have grown to sizable temperature differences in widely separated parts of the sky at the present time. Unfortunately, some agency would have to set up these finely-tuned initial conditions by violating causality. The contradiction is that no force may operate by transmitting its influence faster than the speed of light. In the inflationary models, this contradiction is eliminated because the separation between widely scattered points in space becomes almost infinitely small compared to the size of the horizons as we look back to the epoch of inflation. Since these points are then within each other's light horizons, any temperature difference would have been eliminated immediately, since hotter regions would now be in radiative contact with colder ones. With this exponentially-growing, de Sitter phase in the universe's early history, we now have a means for resolving the horizon problem.

Instant Flat Space

Because of the exponential growth of the universe during the GUT Era, its size may well be essentially infinite for all 'practical' purposes. Estimates by Albrecht and Steinhardt suggest that each bubble region may have grown to a size of 10^3000 cm by the end of the inflationary period. Consequently, the new inflationary model predicts that the content of the universe must be almost exactly the 'critical mass', since the sizes of each of these bubble regions are almost infinite in extent. The universe is, for all conceivable observations, exactly Euclidean (infinite and flat in geometry) and destined to expand for all eternity to come. Since we have only detected at most 10 percent of the critical mass in the form of luminous matter, this suggests that 10 times as much matter exists in our universe as is currently detectable. Of course, if the universe is essentially infinite, this raises the ghastly spectre of the eventual annihilation of all organic and inorganic matter some 10^32 years from now because of proton decay.

In spite of its many apparent successes, even the new inflationary universe model is not without its problems. Although it does seem to provide explanations for several cosmological enigmas, it does not provide a convincing way to create galaxies. Those fluctuations in the density of matter that do survive the inflationary period are so dense that they eventually collapse into galaxy-sized black holes! Neither the precise way in which the transition to ordinary Hubble expansion occurs nor the duration of the inflationary period is well determined.

If the inflationary cosmologies can be made to answer each of these issues satisfactorily, we may have, as J. Richard Gott III has suggested, a most remarkable model of the universe where an almost infinite number of 'bubble universes', each having nearly infinite size, coexist in the same 4-dimensional spacetime, all of these bubble universes having been brought into existence at the same instant of creation. This is less troublesome than one might suspect since, if our universe is actually infinite as the available data suggest, so too was it infinite even at its moment of birth! It is even conceivable that the universe is 'percolating', with new bubble universes continually coming into existence. Our entire visible universe, out to the most distant quasar, would be but one infinitesimal patch within one of these bubble regions. Do these other universes have galaxies, stars, planets and living creatures statistically similar to those in our universe? We may never know. These other universes, born of the same paroxysm of Creation as our own, are forever beyond our scrutiny but obviously not our imaginations!

Beyond The Beginning…

Finally, what of the period before Grand Unification? We may surmise that at higher temperatures than the GUT Era, even the supermassive Higgs and Leptoquark bosons become massless, and at long last we arrive at a time when the gravitational interaction is united with the weak, electromagnetic and strong forces. Yet our quest for an understanding of the origins of the universe remains incomplete, since gravity has yet to be brought into unity with the remaining forces on a theoretical basis. This last step promises to be not only the most difficult one to take on the long road to unification but also appears to hold the greatest promise for shedding light on some of the most profound mysteries of the physical world. Even now, a handful of theorists around the world are hard at work on a theory called Supergravity, which unites the force carriers (photons, gluons, gravitons and the weak interaction bosons) with the particles that they act on (quarks, electrons etc.). Supergravity theory also predicts the existence of new particles called photinos and gravitinos. There is even some speculation that the photinos may fill the entire universe and account for the unseen 'missing' matter that is necessary to give the universe the critical mass required to make it exactly Euclidean. The gravitinos, on the other hand, prevent calculations involving the exchange of gravitons from giving infinite answers for problems where the answers are known to be perfectly finite. Hitherto, these calculations did not include the effects of the gravitinos.

Perhaps during the next decade, more of the details of the last stage of Unification will be hammered out at which time the entire story of the birth of our universe can be told. This is, indeed, an exciting time to be living through in human history. Will future generations forever envy us our good fortune, to have witnessed in our lifetimes the unfolding of the first comprehensive theory of Existence?

What is Space? Part I

Does Space Have More Than 3 Dimensions?
Written by Sten Odenwald
Copyright (C) 1984 Kalmbach Publishing. Reprinted by permission

The intuitive notion that the universe has three dimensions seems to be an irrefutable fact. After all, we can only move up or down, left or right, in or out. But are these three dimensions all we need to describe nature? What if there are more dimensions? Would they necessarily affect us? And if they didn't, how could we possibly know about them? Some physicists and mathematicians investigating the beginning of the universe think they have some of the answers to these questions. The universe, they argue, has far more than three, four, or five dimensions. They believe it has eleven! But let's step back a moment. How do we know that our universe consists of only three spatial dimensions? Let's take a look at some "proofs."

On a 2-dimensional piece of paper you can draw an infinite number of regular polygons. But when you try this same trick in three dimensions, you run up against a problem: there are five and only five regular polyhedra. A regular polyhedron is defined as a solid figure whose faces are identical polygons – triangles, squares, or pentagons – and which is constructed so that only two faces meet at each edge. If you were to move from one face to another, you would cross over only one edge. Shortcuts through the inside of the polyhedron that could get you from one face to another are forbidden. Long ago, the mathematician Leonhard Euler demonstrated an important relation between the number of faces (F), edges (E), and corners (C) for every regular polyhedron: C – E + F = 2. For example, a cube has 6 faces, 12 edges, and 8 corners, while a dodecahedron has 12 faces, 30 edges, and 20 corners. Run these numbers through Euler's equation and the resulting answer is always two, the same as with the remaining three polyhedra. Only five solids satisfy this relationship – no more, no less.
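Euler's relation is easy to verify for all five solids at once; here is a minimal sketch (the face, edge, and corner counts are the standard ones):

```python
# (faces F, edges E, corners C) for the five regular polyhedra
platonic_solids = {
    "tetrahedron":  (4,  6,  4),
    "cube":         (6, 12,  8),
    "octahedron":   (8, 12,  6),
    "dodecahedron": (12, 30, 20),
    "icosahedron":  (20, 30, 12),
}

for name, (F, E, C) in platonic_solids.items():
    assert C - E + F == 2            # Euler's relation
    print(f"{name:12s}: {C} - {E} + {F} = {C - E + F}")
```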

Not content to restrict themselves to only three dimensions, mathematicians have generalized Euler's relationship to higher-dimensional spaces and, as you might expect, they've come up with some interesting results. In a world with four spatial dimensions, for example, we can construct only six regular solids. One of them – the "hypercube" – is a solid figure in 4-D space bounded by eight cubes, just as a cube is bounded by six square faces. What happens if we add yet another dimension to space? Even the most ambitious geometer living in a 5-D world would only be able to assemble three regular solids. This means that two of the regular solids we know of – the icosahedron and the dodecahedron – have no partners in a 5-D universe.
For those of you who successfully mastered visualizing a hypercube, try imagining what an "ultracube" looks like. It's the five-dimensional analog of the cube, but this time it is bounded by one hypercube on each of its 10 faces! In the end, if our familiar world were not three-dimensional, geometers would not have found only five regular polyhedra after 2,500 years of searching. They would have found six (with four spatial dimensions) or perhaps only three (if we lived in a 5-D universe). Instead, we know of only five regular solids. And this suggests that we live in a universe with, at most, three spatial dimensions.

All right, let’s suppose our universe actually consists of four spatial dimensions. What happens? Since relativity tells us that we must also consider time as a dimension, we now have a space-time consisting of five dimensions. A consequence of 5-D space-time is that gravity has freedom to act in ways we may not want it to.

To the best available measurements, gravity follows an inverse square law; that is, the gravitational attraction between two objects rapidly diminishes with increasing distance. For example, if we double the distance between two objects, the force of gravity between them becomes 1/4 as strong; if we triple the distance, the force becomes 1/9 as strong, and so on. A five-dimensional theory of gravity introduces additional mathematical terms to specify how gravity behaves. These terms can have a variety of values, including zero. If they were zero, however, this would be the same as saying that gravity requires only three space dimensions and one time dimension to "give it life." The fact that the Voyager spacecraft could cross billions of miles of space over several years and arrive within a few seconds of their predicted times is a beautiful demonstration that we do not need extra spatial dimensions to describe motions in the Sun's gravitational field.
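The inverse-square scaling quoted above is easy to see numerically. A minimal sketch (with an arbitrary normalization, since only the ratios matter here):

```python
def relative_gravity(r):
    """Gravitational force at separation r, in units where the force at r = 1 is 1."""
    return 1.0 / r**2

print(relative_gravity(2))   # 0.25       -> doubling the distance gives 1/4 the force
print(relative_gravity(3))   # 0.111...   -> tripling the distance gives 1/9 the force
```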

From the above geometric and physical arguments, we can conclude (not surprisingly) that space is three-dimensional – on scales ranging from that of everyday objects to at least that of the solar system. If this were not the case, then geometers would have found more than five regular polyhedra and gravity would function very differently than it does – Voyager would not have arrived on time. Okay, so we’ve determined that our physical laws require no more than the three spatial dimensions to describe how the universe works. Or do they? Is there perhaps some other arena in the physical world where multidimensional space would be an asset rather than a liability?

Since the 1920s, physicists have tried numerous approaches to unifying the principal natural interactions: gravity, electromagnetism, and the strong and weak forces in atomic nuclei. Unfortunately, physicists soon realized that general relativity in a four-dimensional space-time does not have enough mathematical “handles” on which to hang the frameworks for the other three forces. Between 1921 and 1927, Theodor Kaluza and Oskar Klein developed the first promising theory combining gravity and electromagnetism. They did this by extending general relativity to five dimensions. For most of us, general relativity is mysterious enough in ordinary four-dimensional space-time. What wonders could lie in store for us with this extended universe?

General relativity in five dimensions gave theoreticians five additional quantities to manipulate beyond the 10 needed to adequately define the gravitational field. Kaluza and Klein noticed that four of the five extra quantities could be identified with the four components needed to define the electromagnetic field. In fact, to the delight of Kaluza and Klein, these four quantities obeyed the same types of equations as those derived by Maxwell in the late 1800s for electromagnetic radiation. Although this was a promising start, the approach never really caught on and was soon buried by the onrush of theoretical work on the quantum theory of the electromagnetic force. It was not until work on supergravity theory began in 1975 that Kaluza and Klein's method drew renewed interest. Its time had finally come.

What do theoreticians hope to gain by stretching general relativity beyond the normal four dimensions of space-time? Perhaps by studying general relativity in a higher-dimensional formulation, we can explain some of the constants needed to describe the natural forces. For instance, why is the proton 1836 times more massive than the electron? Why are there only six types of quarks and leptons? Why are neutrinos massless? Maybe such a theory can give us new rules for calculating the masses of fundamental particles and the ways in which they affect one another. These higher-dimensional relativity theories may also tell us something about the numbers and properties of a mysterious new family of particles – the Higgs bosons – whose existence is predicted by various cosmic unification schemes. (See “The Decay of the False Vacuum,” ASTRONOMY, November 1983.)

These expectations are not just the pipe dreams of physicists – they actually seem to develop as natural consequences of certain types of theories studied over the last few years. In 1979, John Taylor at King's College in London found that some higher-dimensional formalisms can give predictions for the maximum mass of the Higgs bosons (around 76 times that of the proton). As they now stand, unification theories can do no more than predict the existence of these particles – they cannot provide specific details about their physical characteristics. But theoreticians may be able to pin down some of these details by using extended theories of general relativity. Experimentally, we know of six leptons: the electron, the muon, the tauon, and their three associated neutrinos. The most remarkable prediction of these extended relativity schemes, however, holds that the number of leptons able to exist in a universe is related to the number of dimensions of space-time. In a 6-D space-time, for example, only one lepton – presumably the electron – can exist. In a 10-D space-time, four leptons can exist – still not enough to accommodate the six we observe. In a 12-D space-time, we can account for all six known leptons – but we also acquire two additional leptons that have not yet been detected. Clearly, we would gain much on a fundamental level if we could increase the number of dimensions in our theories just a little bit.

How many additional dimensions do we need to consider in order to account for the elementary particles and forces that we know of today? Apparently we require at least one additional spatial dimension for every distinct "charge" that characterizes how each force couples to matter. For the electromagnetic force, we need two electric charges: positive and negative. For the strong force that binds quarks together to form, among other things, protons and neutrons, we need three "color" charges – red, blue, and green. Finally, we need two "weak" charges to account for the weak nuclear force. If we add a spatial dimension for each of these charges, we end up with a total of seven extra dimensions. The properly extended theory of general relativity we seek is one with an 11-dimensional space-time, at the very least. Think of it – space alone must have at least 10 dimensions to accommodate all the fields known today.

Of course, these additional dimensions don't have to be anything like those we already know about. In the context of modern unified field theory, these extra dimensions are, in a sense, internal to the particles themselves – a "private secret," shared only by particles and the fields that act on them! These dimensions are not physically observable in the same sense as the three spatial dimensions we experience; they stand in relation to the normal three dimensions of space much like space stands in relation to time.

Today there is a veritable renaissance in finding unity among the forces and particles that compose the cosmos, some of it by methods other than those we have discussed, and these new approaches lead us to remarkably similar conclusions. It appears that a four-dimensional space-time is simply not complex enough for physics to operate as it does.

We know that particles called bosons mediate the natural forces. We also know that particles called fermions are affected by these forces. Members of the fermion family go by the familiar names of electron, muon, neutrino, and quark; bosons are the less well known graviton, photon, gluon, and intermediate vector bosons. Grand unification theories developed since 1975 now show these particles to be "flavors" of a more abstract family of superparticles – just as the muon is another type of electron. This is an expression of a new kind of cosmic symmetry – dubbed supersymmetry, because it is all-encompassing. Not only does it include the force-carrying bosons, but it also includes the particles on which these forces act. There also exists a corresponding force to help nature maintain supersymmetry during the various interactions. It's called supergravity. Supersymmetry theory introduces two new types of fundamental particles – gravitinos and photinos. The gravitino has the remarkable property of mathematically moderating the strength of various kinds of interactions involving the exchange of gravitons. The photino, cousin of the photon, may help account for the "missing mass" in the universe.

Supersymmetry theory is actually a complex of eight different theories, stacked atop one another like the rungs of a ladder. The higher the rung, the larger is its complement of allowed fermion and boson particle states. The “roomiest” theory of all seems to be SO(8) (pronounced ess-oh-eight), which can hold 99 different kinds of bosons and 64 different kinds of fermions. But SO(8) outdoes its subordinate, SO(7), by only one extra dimension and one additional particle state; since SO(8) is identical to SO(7) in all its essential features, we’ll discuss SO(7) instead. However, we know of far more than the 162 types of particles that SO(7) can accommodate, and many of the types it predicts have never been observed (like the massless gravitino). SO(7) requires seven internal dimensions in addition to the four we recognize – time and the three “everyday” spatial dimensions. If SO(7) at all mirrors reality, then our universe must have at least 11 dimensions! Unfortunately, it has been demonstrated by W. Nahm at the European Center for Nuclear Research in Geneva, Switzerland, that supersymmetry theories for space-times with more than 11 dimensions are theoretically impossible. SO(7) evidently has the largest number of spatial dimensions possible, but it still doesn’t have enough room to accommodate all known types of particles.

It is unclear where these various avenues of research lead. Perhaps nowhere. There is certainly ample historical precedent for ideas that were later abandoned because they turned out to be conceptual dead ends. Yet what if they turn out to be correct at some level? Did our universe begin its life as some kind of 11-dimensional “object” which then crystallized into our four-dimensional cosmos?

Although these internal dimensions may not have much to do with the real world at the present time, this may not always have been the case. E. Cremmer and J. Scherk of l’École Normale Supérieure in Paris have shown that just as the universe went through phase transitions in its early history when the forces of nature became distinguishable, so it may also have gone through a phase transition when its dimensionality changed. Presumably matter ended up with something like four external dimensions (the ones we encounter every day) and something like seven internal dimensions. Fortunately for us, these seven extra dimensions don’t reach out into the larger 4-D realm where we live. If they did, a simple walk through the park might become a veritable obstacle course, littered with wormholes in space and who knows what else!

Alan Chodos and Steven Detweiler of Yale University have considered the evolution of a universe that starts out being five-dimensional. They discovered that while three of the four spatial dimensions eventually expand to become our world at large, the extra fourth spatial dimension shrinks to a size of 10^-31 centimeter by the present time. The fifth dimension of the universe has all but vanished: it is some 18 powers of 10 – a billion billion times – smaller than the size of a proton. Although the universe appears four-dimensional in space-time, this perception is an accident of our large size compared to the scale of the other dimensions. Most of us think of a dimension as extending all the way to infinity, but this isn’t the full story. For example, if our universe is really destined to re-collapse in the distant future, the three-dimensional space we know today is actually limited itself – it will eventually possess a maximum, finite size. It just so happens that the physical size of human beings forces us to view these three spatial dimensions as infinitely large.
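For the record, the comparison with a proton (radius roughly 10^-13 centimeter) works out as

\[ \frac{10^{-13}\ \text{cm}}{10^{-31}\ \text{cm}} = 10^{18}, \]

which is where the figure of 18 powers of 10 comes from.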

It is not too hard to reconcile ourselves to the notion that the fifth (or sixth, or eleventh) dimension could be smaller than an atomic nucleus – indeed, we can probably be thankful that this is the case.

Einstein’s Fudge

Einstein’s Cosmic Fudge Factor

Written by Sten Odenwald
Copyright (C) 1991 Sky Publishing Corporation. Reprinted by permission. See the April 1991 issue.

Black holes…quarks…dark matter. It seems like the cosmos gets a little stranger every year. Until recently, the astronomical universe known to humans was populated by planets, stars, galaxies, and scattered nebulae of dust and gas. Now, theorists tell us it may also be inhabited by objects such as superstrings, dark matter and massive neutrinos – objects that have yet to be discovered, if they exist at all!
As bizarre as these new constituents may sound, you don’t have to be a rocket scientist to appreciate the most mysterious ingredient of them all. It is the inky blackness of space itself that commands our attention as we look at the night sky, not the sparse points of light that signal the presence of widely scattered matter.

During the last few decades, physicists and astronomers have begun to recognize that the notion of empty space presents greater subtleties than had ever before been considered. Space is not merely a passive vessel to be filled by matter and radiation, but is a dynamic, physical entity in its own right.

One chapter in the story of our new conception of space begins with a famous theoretical mistake made nearly 75 years ago that now seems to have taken on a life of its own.

In 1917, Albert Einstein tried to use his newly developed theory of general relativity to describe the shape and evolution of the universe. The prevailing idea at the time was that the universe was static and unchanging. Einstein had fully expected general relativity to support this view, but, surprisingly, it did not. The inexorable force of gravity pulling on every speck of matter demanded that the universe collapse under its own weight.

His remedy for this dilemma was to add a new ‘antigravity’ term to his original equations. It enabled his mathematical universe to appear as permanent and invariable as the real one. This term, usually written as an uppercase Greek lambda, is called the ‘cosmological constant’. It has exactly the same value everywhere in the universe, delicately chosen to offset the tendency toward gravitational collapse at every point in space.

A simple thought experiment may help illustrate the nature of Lambda. Take a cubic meter of space and remove all matter and radiation from it. Most of us would agree that this is a perfect vacuum. But, like a ghost in the night, the cosmological constant would still be there. So, empty space is not really empty at all — Lambda gives it a peculiar ‘latent energy’. In other words, even Nothing is Something!

Einstein’s fudged solution remained unchallenged until 1922 when the Russian mathematician Alexander Friedmann began producing compelling cosmological models based on Einstein’s equations but without the extra quantity. Soon thereafter, theorists closely examining Einstein’s model discovered that, like a pencil balanced on its point, it was unstable to collapse or expansion. Later the same decade, Mount Wilson astronomer Edwin P. Hubble found direct observational evidence that the universe is not static, but expanding.

All this meant that the motivation for introducing the cosmological constant seemed contrived. Admitting his blunder, Einstein retracted Lambda in 1932. At first this seemed to end the debate about its existence. Yet decades later, despite the great physicist’s disavowal, Lambda keeps turning up in cosmologists’ discussions about the origin, evolution, and fate of the universe.

THEORY MEETS OBSERVATION

Friedmann’s standard ‘Big Bang’ model without a cosmological constant predicts that the age of the universe, t0, and its expansion rate (represented by the Hubble parameter, H0) are related by the equation t0 = 2/(3H0). Some astronomers favor a value of H0 near 50 kilometers per second per megaparsec (one megaparsec equals 3.26 million light-years), but the weight of the observational evidence seems to be tipping the balance toward a value near 100. In the Friedmann model, the higher value implies that the cosmos can be no more than 7 billion years old. Yet some of our galaxy’s globular clusters have ages, estimated by independent methods, of between 12 and 18 billion years!
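To see where the 7-billion-year figure comes from, convert the Hubble parameter into an inverse time and apply the Friedmann relation (a rough evaluation of our own, using H0 = 100 kilometers per second per megaparsec):

\[ \frac{1}{H_0} \approx 9.8\ \text{billion years}, \qquad t_0 = \frac{2}{3H_0} \approx 6.5\ \text{billion years}, \]

comfortably under 7 billion years, and far short of the globular cluster ages.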

In cosmologies that retain the Lambda term, this discrepancy can be resolved. A large value for the Hubble parameter can then be attributed in part to “cosmic repulsion”. This changes the relationship between t0 and H0, so that for a given expansion rate the universe is older than the Friedmann model predicts.

In one formulation of Einstein’s equation, Lambda is expressed in units of matter density. This means we can ask how the cosmological constant, if it exists at all, compares with the density of the universe in the forms of stars and galaxies.
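In that formulation (a standard identification, though the notation here is ours), the cosmological constant corresponds to an effective density

\[ \rho_\Lambda = \frac{\Lambda c^2}{8\pi G}, \]

which can be compared directly with the mean density of matter in stars and galaxies, or with the critical density \( \rho_c = 3H_0^2/8\pi G \) of the Friedmann models.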

So far, a careful look at the available astronomical data has produced only upper limits to the magnitude of Lambda. These vary over a considerable range – from about 10 percent of ordinary matter density to several times that density.

The cosmological constant can also leave its mark on the properties of gravitational lenses and faint galaxies. One of the remarkable features of Einstein’s theory of general relativity is its prediction that space and time become deformed or ‘warped’ in the vicinity of a massive body such as a planet, star or even a galaxy. Light rays passing through such regions of warped “space-time” have their paths altered. In the cosmological arena, nearby galaxies can deflect and distort the images of more distant galaxies behind them. Sometimes a single distant galaxy can even appear as multiple images surrounding the nearby ‘lensing’ galaxy.

At Kyoto University, M. Fukugita and his coworkers predicted that more faint galaxies and gravitational lenses will be detected than in a Friedmann universe if Lambda is more than a few times the matter density. Edwin Turner, an astrophysicist at Princeton University, also reviewed the existing, scant data on gravitational lenses and found that they were only as numerous as expected for Lambda less than a few times the matter density. By the best astronomical reckoning, Lambda is probably not larger than the observed average matter density of the universe. For that matter, no convincing evidence is available to suggest that Lambda is not exactly equal to zero. So why not just dismiss it as an unnecessary complication? Because the cosmological constant is no longer, strictly, a construct of theoretical cosmology.

NOTHING AND EVERYTHING

To understand how our universe came into existence, and how its various ingredients have evolved, we must delve deeply into the fundamental constituents of matter and the forces that dictate how it will interact. This means that the questions we will have to ask will have more to do with physics than astronomy. Soon after the big bang, the universe was at such a high temperature and density that only the details of matter’s composition (quarks, electrons, etc.) and how they interact via the four fundamental forces of nature were important. These particles represented the most complex collections of matter in existence, long before atoms, planets, stars and galaxies had arrived on the scene.

For two decades now, physicists have been attempting to unify the forces and particles that make up our world – to find a common mathematical description that encompasses them all. Some think that such a Theory of Everything is just within reach. It would account not only for the known forms of matter, but also for the fundamental interactions among them: gravity, electromagnetism, and the strong and weak nuclear forces.

These unification theories are known by a variety of names: grand unification theory, supersymmetry theory and superstring theory. Their basic claim is that Nature operates according to a small set of simple rules called symmetries.

The concept of symmetry is at least as old as the civilization of ancient Greece, whose art and architecture are masterworks of simplicity and balance. Geometers have known for a long time that a simple cube can be rotated 90 degrees without changing its outward appearance. In two dimensions, equilateral triangles look the same when they are rotated by 120 degrees. These are examples of the geometric concept of Rotation Symmetry.

There are parallels to geometric symmetry in the way that various physical phenomena and qualities of matter express themselves as well. For example, the well-known principle of the Conservation of Energy is a consequence of the fact that when some collections of matter and energy are examined at different times, they each have precisely the same total energy, just as a cube looks the same when it is rotated in space by a prescribed amount. Symmetry under a ‘shift in time’ is as closely related to the Conservation of Energy as is the symmetry of a cube when rotated by 90 degrees.
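In modern terms this is Noether’s theorem: every continuous symmetry of a physical system implies a conserved quantity. Stated schematically for the case at hand, if the Lagrangian L of a system has no explicit time dependence, its energy E cannot change:

\[ \frac{\partial L}{\partial t} = 0 \quad \Longrightarrow \quad \frac{dE}{dt} = 0. \]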

Among other things, symmetries of Nature dictate the strengths and ranges of the natural forces and the properties of the particles they act upon. Although Nature’s symmetries are hidden in today’s cold world, they reveal themselves at very high temperatures and can be studied in modern particle accelerators.

The real goal in unification theory is actually two-fold: not only to uncover and describe the underlying symmetries of the world, but to find physical mechanisms for ‘breaking’ them at low energy. After all, we live in a complex world filled with a diversity of particles and forces, not a bland world with one kind of force and one kind of particle!

Theoreticians working on this problem are often forced to add terms to their equations that represent entirely new fields in Nature. The concept of a field was invented by mathematicians to express how a particular quantity may vary from point to point in space. Physicists since the 18th century have adopted this idea to describe quantitatively how forces such as gravity and magnetism change at different distances from a body.

The interactions of these fields with quarks, electrons and other particles cause symmetries to break down. These fields are usually very different from those we already know about. The much-sought-after Higgs field, for example, was incorporated by Sheldon Glashow, Abdus Salam and Steven Weinberg into their unified theory of the electromagnetic and weak nuclear forces.

Prior to their work, the weak force, which causes certain particles to decay, and the electromagnetic force, responsible for the attraction between charged particles and the motion of compass needles, were considered to be distinct forces in nature. By combining their mathematical descriptions into a common language, they showed that this distinction is not fundamental to the forces at all! A new field in nature, called the Higgs field, makes these two forces act differently at low temperature. But at temperatures above 1000 trillion degrees, the weak and electromagnetic forces become virtually identical in the way that they affect matter. The corresponding particle, called the Higgs boson, not only causes the symmetry between the electromagnetic and weak forces to be broken at low temperature, but it is also responsible for conferring the property of mass on particles such as the electrons and quarks!

There is, however, a price that must be paid for introducing new fields into the mathematical machinery. Not only do they break symmetries, but they can also give the vacuum state an enormous latent energy that, curiously, behaves just like Lambda in cosmological models.

The embarrassment of having to resurrect the obsolete quantity Lambda is compounded when unification theories are used to predict its value. Instead of being at best a vanishingly minor ingredient of the universe, the predicted values are in some instances 10 to the power of 120 times greater than even the most generous astronomical upper limits!
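The oft-quoted mismatch comes from a rough dimensional estimate (our sketch of the standard argument): quantum field theory suggests a natural vacuum density near the Planck density, while astronomy bounds Lambda at roughly the cosmic matter density,

\[ \frac{\rho_{\text{Planck}}}{\rho_{\text{obs}}} \sim \frac{10^{96}\ \text{kg/m}^3}{10^{-26}\ \text{kg/m}^3} \sim 10^{122}, \]

give or take a few powers of ten depending on exactly how the estimate is made.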

It is an unpleasant fact of life for physicists that the best candidates for the Theory of Everything always have to be fine-tuned to get rid of their undesirable cosmological consequences. Without proper adjustment, these candidates may give correct predictions in the microscopic world of particle physics, but predict a universe which on its largest scales looks very different from the one we inhabit.

Like a messenger from the depths of time, the smallness – or absence – of the cosmological constant today is telling us something important about how to craft a correct Theory of Everything. It is a signpost of the way Nature’s symmetries are broken at low energy, and a nagging reminder that our understanding of the physical world is still incomplete in some fundamental way.

A LIKELY STORY

Most physicists expect the Theory of Everything will describe gravity the same way we now describe matter and the strong, weak and electromagnetic forces – in the language of quantum mechanics. Gravity is, after all, just another force in Nature. So far this has proven elusive, due in part to the sheer complexity of the equations of general relativity. Scientists since Einstein have described gravity (as well as space and time) in purely geometric terms. Thus we speak of gravity as the “curvature of space-time”.

To achieve complete unification, the dialects of quantum matter and geometric space have to be combined into a single language. Matter appears to be rather precisely described in terms of the language of quantum mechanics: quarks and electrons exchange force-carrying particles such as photons and gluons and thereby feel the electromagnetic and strong nuclear forces. But gravity is described by Einstein’s theory of general relativity as a purely geometric phenomenon. These geometric ideas of curvature and the dimensionality of space have nothing to do with quantum mechanics.

To unify these two great foundations of physics, a common language must be found. This new language will take some getting used to. In it, the distinction between matter and space dissolves away and is lost completely; matter becomes a geometric phenomenon, and at the same time, space becomes an exotic form of matter.

Beginning with work on a quantum theory of gravity by John Wheeler and Bryce DeWitt in the 1960’s, and continuing with the so-called superstring theory of John Schwarz and Michael Green in the 1980’s, a primitive version of such a ‘quantum-geometric’ language is emerging. Not surprisingly, it borrows many ideas from ordinary quantum mechanics.

A basic concept in quantum mechanics is that every system of elementary particles is defined by a mathematical quantity called a wave function. This function can be used, for example, to predict the probability of finding an electron at a particular place and time within an atom. Rather than a single quantity, the wave function is actually a sum over an infinite number of factors or ‘states’, each representing a possible measurement outcome. Only one of these states can be observed at a time.
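Schematically, such a wave function is a superposition of states,

\[ \Psi = \sum_n c_n\,\psi_n , \]

where each \( \psi_n \) represents one possible outcome and \( |c_n|^2 \) gives the probability of observing it.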

By direct analogy, in quantum gravitation, the geometry of space-time, whether flat or curved, is only one of an infinite variety of geometric shapes for space-time, and therefore the universe. All of these possibilities are described as separate states in the wave function for the universe.

But what determines the probability that the universe will have the particular geometry we now observe out of the infinitude of others? In quantum mechanics, the likelihood that an electron is located somewhere within an atom is determined by the external electric field acting on it. That field is usually provided by the protons in the atomic nucleus. Could there be some mysterious field ‘outside’ our universe that determines its probability?

According to Cambridge University theorist Stephen Hawking, this is the wrong way to look at the problem. Unlike the electron acted upon by protons, our universe is completely self-contained. It requires no outside conditions or fields to help define its probability. The likelihood that our universe looks the way it does depends only on the strengths of the fields within it.

Among these internal fields, there may even be ones that we haven’t yet discovered. Could the cosmological constant be the fingerprint in our universe of a new ‘hidden’ field in Nature? This new field could affect the likelihood of our universe just as a kettle of soup may contain unknown ingredients although we can still precisely determine the kettle’s mass.

A series of mathematical considerations led Hawking to deduce that the weaker the hidden field becomes, the smaller will be the value we observe for the cosmological constant, and surprisingly, the more likely will be the current geometry of the universe.

This, in turn, implies that if Lambda were big enough for astronomers to measure in the first place, our universe would be an improbable one. Philosophically, this may not trouble those who see our cosmos as absolutely unique, but in a world seemingly ruled by probability, a counter view is also possible. There may, in fact, exist an infinite number of universes, of which only a minority have the correct blend of physical laws and physical conditions to resemble our life-nurturing one.

Hawking continued his line of speculation by suggesting that, if at the so-called Planck scale of 10 to the power of -33 centimeters the cosmos could be thought of as an effervescent landscape, or “space-time foam”, then perhaps a natural mechanism could exist for eliminating the cosmological constant for good.

One of the curiosities of combining the speed of light and Newton’s constant of gravitation from general relativity, with Planck’s constant from quantum mechanics, is that they can be made to define unique values for length, time and energy. Physicists believe that at these Planck scales represented by 10 to the power of -33 centimeters and 10 to the power of -43 seconds, general relativity and quantum mechanics blend together to become a single, comprehensive theory of the physical world: The Theory Of Everything. The energy associated with this unification, 10 to the power of 19 billion electron volts, is almost unimaginably big by the standards of modern technology.
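These Planck-scale values follow directly from the three constants (the standard combinations, with rounded numerical values):

\[ \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-33}\ \text{cm}, \qquad t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4\times 10^{-44}\ \text{s}, \qquad E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2\times 10^{19}\ \text{GeV}. \]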

The universe itself, soon after the Big Bang, must also have passed through such scales of space, time and energy during its first instants of existence. Cosmologists refer to this period as the Planck Era. It marks the earliest times that physicists are able to explore the universe’s physical state without having a complete Theory of Everything to guide them.

WORMHOLES

Harvard University physicist Sidney Coleman has recently pursued this thought to a possible conclusion. Instead of some mysterious new field in Nature, maybe the Lambda term appears in our theories because we are using the wrong starting model for the geometry of space at the Planck scale.

Previous thinking on the structure of space-time had assumed that it behaved in some sense like a smooth rubber sheet. Under the action of matter and energy, space-time could be deformed into a variety of shapes, each a possible geometric state for the universe. Nearly all candidates for the Theory of Everything embed their fields and symmetries in such a smooth geometrical arena.

But what if space-time were far more complicated? One possibility is that ‘wormholes’ exist, filling space-time with a network of tunnels. The fabric of space-time may have more in common with a piece of Swiss cheese than with a smooth rubber sheet.

According to Coleman, the addition of wormholes to space-time means that, like the ripples from many stones tossed into a pond, one geometric state for the universe could interfere with another. The most likely states (or the biggest ripples) would win out. The mathematics suggests that quantum wormhole interference at the Planck scale makes universes with cosmological constants other than zero exceedingly unlikely.

How big would wormholes have to be to have such dramatic repercussions? Surprisingly, the calculations suggest that small is beautiful. Wormholes the size of dogs or planets would be very rare; universes containing even a few of them would exist with a vanishingly low probability. But wormholes smaller than 10 to the power of -33 centimeters could be everywhere. A volume the size of a sugar cube might be teeming with uncounted trillions of them, flashing in and out of existence!

Coleman proposes that it is the action of these previously ignored mini-wormholes upon the geometric fabric of the universe that forces Lambda to be almost exactly zero. Like quantum ‘Pac-Men’, they gobble up all the latent energy of space-time that would otherwise have appeared to us in the form of a measurable cosmological constant!

The addition of wormholes to the description of space-time admits the possibility that our universe did not spring into being aloof and independent, but was influenced by how other space-times had already evolved – ghostly mathematical universes with which we can never communicate directly.

The most likely of these universes had Lambda near zero, and it is these states that beat out all other contenders. In a bizarre form of quantum democracy, our universe may have been forced to follow the majority, evolving into the high probability state we now observe, without a detectable cosmological constant.

EPILOG

Wormholes? Wave functions? Hidden fields? The answer to the cosmological constant’s smallness, or absence, seems to recede into the farthest reaches of abstract thinking, faster than most of us can catch up.

As ingenious as these new ideas may seem, the final pages in this unusual story have probably not been written, especially since we can’t put any of these ideas to a direct test. It is a tribute to Einstein’s genius that even his ‘biggest blunder’ made near the beginning of this century still plagues physicists and astronomers as we prepare to enter the 21st century. Who would ever have thought that something that may not even exist would lead to such enormous problems!