Category Archives: Physics

What is Grand Unification Theory?


In the mid-1970s physicists were excited by the recent success of Steven Weinberg, Abdus Salam and Sheldon Glashow in creating a unification theory for the electromagnetic and weak forces. By applying what is called ‘group theory’, physicists such as Glashow, Georgi and others proposed that you could use the symmetries of ‘SU(5)’ to unite the weak and electromagnetic forces with the strong nuclear force, which is mediated by gluons. This became known as ‘Grand Unification Theory’ or ‘GUT’, and it quickly evolved into many variants including ‘super-symmetric GUTs (SUSY-GUTs)’, ‘supergravity theory’ and ‘dimensionally-extended SUSY GUTs’, before being replaced by string theory in the early 1980s.

It produced a lot of excitement in the late 1970s and early 1980s because it seemed as though it could provide an explanation for the strong, weak and electromagnetic forces, and do so in a common mathematical language. Its major prediction was that at the enormous energy of about a million billion billion electron volts (10^15 GeV) the strong nuclear force would become similar to (or unified with) the electromagnetic and weak forces. Applying these ideas to cosmology also led to the creation of Inflationary Cosmology.

Today, the so-called Standard Model of particle physics unifies physics (except for gravity) and uses some of the basic ideas of GUT to do so. Physicists worked very hard to confirm several basic ideas in GUT theory, such as ‘spontaneous symmetry breaking’ (SSB), by looking for the Higgs Boson. In 2012 this elusive particle was discovered at the Large Hadron Collider, some 50 years after it was predicted. This was a revolutionary discovery because it demonstrated that the entire concept of spontaneous symmetry breaking seemed to be valid. It was the keystone idea in the unification of the electromagnetic and weak forces, for which Abdus Salam, Steven Weinberg and Sheldon Glashow received the Nobel Prize in 1979. SSB was also the workhorse concept behind much of the mathematical work into GUTs.

GUT research in the booming 1970s also uncovered a possible new symmetry of nature called ‘supersymmetry’, which continues to be searched for. The unpleasant thing about the current Standard Model is that it has several dozen adjustable constants that have to be experimentally fine-tuned to reproduce our physical world, including such numbers as the constant of gravity, the speed of light, the fine structure constant, and the constants that determine how strongly the leptons and quarks interact. Physicists think that this is far too many, and so the search is on for a better theory that has far fewer ad hoc constants. There is also the problem that the Standard Model doesn’t include gravity.

The hope, pursued in the 1970s, that gravity could somehow be incorporated into GUTs was ultimately never realized because of the advent of string theory, which provided a newer way to look at gravity as a ‘quantum field’. Most popular versions of string theory include supersymmetry, hence they are called superstring theories.

Supersymmetry has grown to become a linchpin concept behind many ideas for unifying all four forces, including gravity. However, after five years of searching for signs of it at the CERN Large Hadron Collider, not so much as a trace of it has been detected. It seems as though the Standard Model may be all there is, with the strong force and the ‘electroweak’ force possibly not unified any further.

What does the equation look like that shows how gravitational radiation is lost from the binary pulsar system?


What astronomers observed in the Hulse-Taylor Pulsar was a decrease in the orbital period of the two neutron stars.

From general relativity, it was possible to predict, mathematically, how the period ought to change in time as the binary system emitted gravitational energy during the time the orbits of the neutron stars were being ‘circularized’.

The predicted, and deceptively simple, formula for the period change, dP/dt, can be found in the excellent book by Stuart Shapiro and Saul Teukolsky, Black Holes, White Dwarfs and Neutron Stars, and it looks like this:

dP/dt = -1.202 x 10^-12 M (2.8278 - M)

where M = 1.41 solar masses…the mass of one of the neutron stars determined by observation and the application of Kepler’s Laws. The units are in seconds of orbit period change per second.

The result is that the predicted period change is dP/dt = -2.40 x 10^-12 and the observed value is -2.30 +/- 0.22 x 10^-12 seconds per second. This is agreement between theory and observation to better than 10 percent, and it shows that gravitational radiation leakage is the simplest explanation. No one has yet found a simpler explanation in terms of tidal friction or other non-relativistic processes.
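A quick numerical check of this formula, written as a short Python sketch (only the coefficient, the 2.8278 total-mass term and M = 1.41 come from the formula above; everything else is illustrative):

# Predicted orbital period decay of the Hulse-Taylor binary pulsar,
# using the simple formula quoted above (masses in solar units).
M = 1.41                               # mass of one neutron star, in solar masses
dPdt = -1.202e-12 * M * (2.8278 - M)   # seconds of period change per second
print(f"Predicted dP/dt = {dPdt:.2e} s/s")       # about -2.40e-12 s/s
print("Observed  dP/dt = -2.30e-12 +/- 0.22e-12 s/s")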

Where does the energy come from that produces virtual particles?


In ordinary Newtonian physics, just about everything can be traced back to some elementary process that conserves energy and momentum. For 400 years we were taught that neither energy nor mass could be created or destroyed, but had to be conserved throughout some process such as the Moon orbiting the Earth. We also learned that conservation laws applied to closed systems that you could see and systems that you could not see, from the cosmic to the atomic. Does a tree falling in the forest when no one is there to see it still conserve energy? You bet! But then, during the early 20th century, the Roaring Twenties hit, and physics was turned upside down for a few years.

I am not going to review quantum mechanics and quantum field theory in this writing because you have probably read most of the literature about this ‘Second Pillar’ of physics. The important thing to remember is that in the atomic world, a whole new set of paradigms applies that has nothing to do with Newton’s physics except in some skeletal form. We still talk about mass, momentum and energy, but now the objects of our concern are elementary objects that behave as waves or particles depending on what kind of experiment you put them through. Energy is no longer a Newtonian quantity but is an ‘operator’ that acts on a particle’s wavefunction to return a value for a particular state index. Momentum also has its own operator, and the way these operators act on a wavefunction is analogous to how a particular tuning fork vibrates in resonance to an applied force. Each vibratory mode of the wavefunction of an electron has its own energy at a particular instant in time, and a particular momentum at a particular position in space. Physicists say that energy and time are ‘conjugate’ to each other, and momentum and position likewise. Because these wavefunctions are statistical in nature, the ‘square’ of a component of the wavefunction gives the probability that the electron will have a specific energy and momentum. But this statistical feature of an electron’s state means that the product of the uncertainties in the conjugate variables must be greater than or equal to Planck’s constant. This gives us the famous Heisenberg Uncertainty Principle:
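In the rough form used in this article, with Planck’s constant h setting the scale (the precise textbook statement has h-bar/2 on the right-hand side), these relations read:

Delta-E x Delta-t >= h        and        Delta-p x Delta-x >= h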

What these relations relate to is our ability to distinguish between each of the possible energy and momentum quantum states of an electron at a particular moment in time, and at a particular location in space. In fact, because we are dealing with states that are part of an infinite harmonic series for the electron’s wavefunction, we can use the mathematics of Fourier to relate frequency to wavelength for each of the states. In light and sound we have wavelength = constant / frequency, where the constant is the speed of sound or light. In quantum mechanics, the wavefunction is based on similar relationships for the conjugate variables (E,t) and (p,x). The experimental problem is that because p and x are conjugate, as we try to specify the momentum state, p, more accurately, we steadily lose accuracy in knowing where the electron is in the x variable. Similarly, because E and t are conjugate, as we try to precisely determine how much energy a system has, E, we lose accuracy in knowing at which specific instant it had that energy.

What does this have to do with the energy of virtual particles?

The Heisenberg relationship between energy and time is actually a statement of how well we can know both of these quantities for any system that has wavelike properties. In words:

The uncertainty in the total energy of a particular state decreases as the amount of time it is in that state increases.

This is often interpreted as a statement of how poorly we can pin down the energy of a system if we only observe it for a short while. A practical example is as follows.

Initially, at time = Ti, our system consists of two particles Pa and Pb, which have the total energies Ea and Eb, so that Einitial = Ea + Eb. A neighboring state at time = T2 contains the same two particles and their energies, but includes a third particle V with the energy Ev. The final state of the system at time = Tf contains only the original two particles. According to Heisenberg’s Uncertainty Principle, the change in energy between the two states is just (Ea + Eb + Ev) – (Ea + Eb) = Ev. This energy change between the two states is related to how long the state with the third particle exists, according to Delta-T = h/Ev, which is the longest time the Ev energy fluctuation can persist.

In quantum mechanics a system begins in an initial state at time Ti and ends in a final state at time Tf. These states contain only the original particles, in this case Pa and Pb. What happens in between can include any other process so long as it obeys Heisenberg’s Uncertainty Principle, so that

Ev = h/(Tf-Ti)

If the time between the initial and final states is long, the energy fluctuation, Ev, will be very small, but if the time difference is short, the value for Ev can be very large.
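A short Python sketch of this trade-off, using the relation Ev = h/dt from above (the two time intervals are chosen purely for illustration):

# Energy 'borrowed' by a fluctuation that lasts for a time dt, using the
# rough relation Ev = h/dt quoted above.
h = 6.626e-34                        # Planck's constant, in joule-seconds
for dt in (1e-10, 1e-21):            # a 'long' and a 'short' interval, in seconds
    Ev_joules = h / dt
    Ev_MeV = Ev_joules / 1.602e-13   # convert joules to MeV
    print(f"dt = {dt:.0e} s  ->  Ev = {Ev_MeV:.2e} MeV")

The long interval allows only a tiny energy fluctuation, while the short one permits a fluctuation of several MeV, enough to account for an electron-positron pair.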

So where does this energy Ev come from? You can think of it as being ‘borrowed’ from the state in which the particle V did not exist…which is called the quantum vacuum. That’s because the vacuum state is the lowest-energy state of the system remaining after we remove the two original particles. What is left over is an ‘empty space’ in which all of the other energy fluctuations (interpreted as virtual particles because of E=mc^2) come and go over time periods set by the amount of energy they contain.

Another way to think of this is to use the measurement analogy for what happens when you average together lots of measurements. When you start out with 36 measurements and average them, you get an answer, but this is the mid-point of a bell curve for these repeated measurements that has a ‘standard deviation’, which tells you the spread of the measured points around the average value. As you increase the number of measurements to 10,000 your average may not change by much, but now the bell curve for the average has narrowed, because its standard deviation (the standard error of the mean) is now square root(10000/36) = 100/6 times smaller. The more you measure, the smaller the fluctuation in the parameter you are measuring becomes. In the same way, you make 36 energy measurements of a particle state and the standard deviation is determined by Heisenberg’s Uncertainty Principle, based on the amount of time involved in making these measurements. But when you make more measurements you increase the time between Ti and Tf, and the standard deviation decreases to a smaller value.
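A small Python sketch of that averaging analogy (the bell curve here is an ordinary normal distribution with an arbitrary mean and spread; only the sample sizes of 36 and 10,000 come from the text):

# How the spread of an average narrows as the number of measurements grows.
# The standard error of the mean scales as 1/sqrt(N), so going from 36 to
# 10,000 measurements shrinks it by about sqrt(10000/36) = 100/6 ~ 16.7.
import random, statistics

def standard_error(n, true_value=10.0, spread=1.0):
    samples = [random.gauss(true_value, spread) for _ in range(n)]
    return statistics.stdev(samples) / n ** 0.5

for n in (36, 10000):
    print(f"N = {n:5d}   standard error of the mean ~ {standard_error(n):.4f}")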

Can gravity affect the speed of light?


Gravity can certainly warp and distort the ‘straight-line’ path of a light ray. This Hubble image is of the Einstein Ring LRG 3-757 in which the central massive galaxy has warped the image of a background galaxy into a ring of light. (Credit: ESA/Hubble & NASA)

The speed of light is something measured with a local apparatus in an inertial reference frame, using the same meter stick and clock. A gravitational field has zillions of such ‘locally inertial reference frames’, which are defined by freely-falling observers for short intervals of time and small regions of space. In all of these tiny domains, an observer would measure the same velocity for light, as guaranteed by special relativity. To ask what the speed of light is over a domain where gravitational forces make a reference frame ‘non-inertial’, and not moving at a constant speed, is an ill-defined question in special relativity. As soon as you try to measure the speed of such an impulse, you would be using a clock and a meter stick that would not be the ‘proper time and space’ intervals for the entire region where the gravitational field exists.

Gravity can affect the speed of light if you measure the speed over a large enough region that special relativity, and its requirement of a flat spacetime, is not satisfied. In the presence of curved spacetime, conventional local measurement techniques do not work, and so you cannot define the speed of light in exactly the same way that you do under laboratory conditions in ‘flat’ spacetime. In fact, in curved spacetime even the concept of conservation of energy is not easily defined, because the curvature of space itself changes the definition. Conservation of energy only works in flat spacetime.

Can gravity be simulated using electromagnetic forces?


Not completely. Gravity is a distortion in the geometry of space. This illustration (Credit: LIGO/T. Pyle) shows how gravitational waves from colliding black holes distort this geometry in a way that electric and magnetic waves do not.

First of all, gravity is what is called a tensor force while electromagnetism is a vector force. That difference means that it is impossible to reproduce all of the properties of gravity using a simpler force field. Moreover, gravity is not a force at all in the usual sense. It is a purely geometric effect in spacetime.

Gravity provides ONLY a force of attraction between all forms of matter and energy. Electromagnetism provides attractive or repulsive forces ONLY between matter that carries electric charge. It is possible to reproduce the acceleration and the force of gravity by using two particles of opposite charge, but numerically all you would have is a force field that mimics one feature of gravity. Take away the charges and the similarity immediately vanishes.

Contrary to what some science fiction stories might imply, we know of no electromagnetic analog of gravity. We can, however, create electromagnetic force fields with charged matter that alter the total forces it feels, gravity included. We can levitate charged particles in magnetic fields, and so on.

How does a magnetic field differ from a gravitational field?


The biggest difference is that a gravitational field is mathematically classified as a tensor field while magnetic fields, or actually electromagnetic fields, are vector fields. This means that it takes 4×4=16 components to define a gravitational field in general while it only takes 4 components to define an electromagnetic field. The number 4 comes up because spacetime is 4-dimensional.

Gravitational fields are determined only by the mass (or mass-energy) of a body. Charged and uncharged massive particles produce the same gravitational field pound for pound (well…the electromagnetic energy has its own mass, so it does contribute a bit).

Magnetic fields are produced by charged particles in motion, and depend on the charge and velocity of these particles, but not on their mass. Magnetic fields are ‘polar’ fields with a North and South polarity.

Gravitational fields have no polarity at all. At large distances, gravitational fields diminish as the inverse square of distance from their source.

Magnetic fields at large distances from their source decrease as the inverse cube of the distance.

You can only detect magnetic fields by using charged particles to measure their deflection.

Gravitational fields can be detected by using anything to measure a change in velocity.

Do we really know how gravity and magnetism operate?


It depends entirely on what you mean by knowledge. This figure (Credit: NASA/Conceptual Image Lab) shows magnetic field lines and the graded decrease of Earth’s gravity in an artistic rendition. We know how both of these work in considerable detail.

Our theories for gravity and magnetism allow us to describe the essential physics of systems from nearly the size of the universe, to events at a scale of nearly one million times smaller than the nucleus of an atom. The latter phenomenon is explained by quantum electrodynamics which routinely makes predictions correct to 10 decimal places. For gravity, we can accurately describe systems as vast as the universe and its evolution, all the way down to the surfaces of black holes a few dozen kilometers across.

From a practical point of view we understand these two forces almost perfectly, and to our best current ability to measure.

A charged particle sitting motionless in your reference frame produces a pure electric field like the one in the figure above. As you get farther from the center, the number of imaginary field lines is conserved through each spherical surface, and so the strength of the field decreases as the reciprocal of the surface area of the sphere. This is the relationship if space is 3-dimensional, and it explains the inverse-square law.
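Here is a minimal Python sketch of that counting argument (the number of field lines and the radii are arbitrary; only the spherical-surface geometry matters):

# Inverse-square law from 'conservation of field lines': the same number of
# lines crosses every sphere drawn around the charge, so the number of lines
# per unit area falls off as 1/(4*pi*r^2).
import math

N_lines = 1.0e6                  # an arbitrary number of imaginary field lines
for r in (1.0, 2.0, 4.0):        # radii in arbitrary units
    density = N_lines / (4 * math.pi * r**2)
    print(f"r = {r}:  field lines per unit area = {density:.3e}")
# Doubling r cuts the density by a factor of 4 -- the inverse-square law.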

Where does this electric field come from? It comes from the electromagnetic force which is a quantum phenomenon explained in great detail by a theory called Quantum Electrodynamics. Every charged particle is surrounded by a cloud of virtual photons that are exchanged between other charged particles to produce the familiar electrostatic force.

Here is a common diagram (Credit: Wikipedia) showing the exchange of just one of these virtual particles (the wavy line) between two electrons. This ‘Feynman Diagram’ is only a symbolic representation of the mathematical terms that have to be multiplied together to calculate the probability that this reaction will occur. It is not meant to be a ‘photograph’ of what is actually going on.

A magnetic field is what we observe in our frame of reference when an electric field, carried by moving charges, passes by at some speed relative to us. The movement of these charges is called an electric current, and all electric currents produce magnetic fields so long as the speed of the current is not zero. But a magnetic field is not fundamentally a new field in nature; it is just the familiar electric field seen in a different reference frame. Our understanding of ‘electromagnetic’ fields is virtually perfect, to the extent that its basis in quantum mechanics is a solid foundation.
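As a rough illustration of that statement (keeping only terms to first order in v/c, and taking E to be the ordinary Coulomb field of the charge), a charge moving past you with velocity v carries with it a magnetic field of approximately

B = (v x E)/c^2

so the magnetic field vanishes the moment the relative speed drops to zero, exactly as described above.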

Gravitational fields are very different. We call them ‘fields’ because that is a left-over description from Newtonian physics and it serves us well in most things that we come into contact with in our solar system and local universe. But this is not the correct way to think about gravity. It is neither a field nor is it a force, although it resembles both of these in rough terms.

General relativity is our most successful theory for describing gravity, but it is a purely geometric theory and not one that looks anything like quantum electrodynamics. Instead, 4-dimensional spacetime is the ‘field’ that describes gravity, and distortions in it lead to the experience of accelerations as particles try to travel in straight lines through the geometry. At the same time, matter (and energy) also creates these geometric distortions. General relativity is a theory of the geometry of worldlines and not of some mythical background space into which gravity is embedded. The biggest flaw in general relativity is that it does not tell us exactly how matter creates gravity. The only other theory we have that describes how forces are produced is called Quantum Field Theory, of which quantum electrodynamics is the most successful example.

There is no similar theory for gravity, so we do not really know how gravity is created by matter. Because the field for gravity is spacetime itself, what we have to imagine is a theory that describes where spacetime ‘comes from’, in other words how matter produces space and time!

If you are asking a more metaphysical question about our knowledge, then we don’t really understand ANYTHING about gravity at the deepest level, such as why does gravity exist? Is it a quantum field? What is the nature of space-time?

In terms of the human sphere of activity, we understand enough about these two fields that we will never have much practical need for better theories than the ones we now have.

What are the ’10 dimensions’ that physicists are always talking about?


This stunning simulation of Calabi-Yau spaces at each point in 3-d space was created by Jeff Bryant and based on concepts from A.J. Hanson, “A Construction for Computer Visualization of Certain Complex Curves,” in “Computers and Mathematics” column, ed. Keith Devlin, of Notices of the American Mathematical Society, 41, No. 9, pp. 1156–1163 (American Math. Soc., Providence, November/December, 1994). See his website for details.

We know that we need at least 4 dimensions to keep track of things: the three dimensions of space that give us the freedom to move up-down, left-right, and forward-backward, plus the dimension of time. These dimensions of spacetime form the yellow gridwork in the image above. At each intersection you have a new location in space and time, and mathematically there are an infinite number of these coordinate points. But the real world may be different from the mathematical ideal. There may not be an infinite number of points between 0 and 1, but only a finite number.

We know that you can sub-divide space all the way down to the quantum realm, to distances and times of 10^-20 cm and 10^-30 seconds, and spacetime still looks perfectly smooth to the physics we observe there, but what if we go down even further? Since the 1940s, a simple calculation using the three fundamental constants h, c and G has turned up a smallest quantum distance of 10^-33 cm and a smallest time of 10^-43 seconds, called the Planck Scale. In our figure above, the spacings in the yellow grid are at this scale of intervals, and that is the smallest possible separation for physical processes in space and time…it is believed.
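That simple calculation can be reproduced in a few lines of Python (a sketch only; it uses the reduced Planck constant h-bar, which is the usual convention for quoting the Planck scale):

# Planck length and Planck time from the fundamental constants.
import math

hbar = 1.055e-34     # reduced Planck constant, J*s
G    = 6.674e-11     # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8       # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)    # about 1.6e-35 m, or 1.6e-33 cm
planck_time   = math.sqrt(hbar * G / c**5)    # about 5.4e-44 s
print(f"Planck length = {planck_length:.2e} m")
print(f"Planck time   = {planck_time:.2e} s")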

Since the 1970s, work on the unification of forces has uncovered a number of ideas that could work, but nearly all require that we add some additional dimensions to the four we know. All of these extra dimensions are believed to appear at the Planck scale, so they are accessible to elementary particles but not to humans. In string theory, these added dimensions are rolled up into identical but complex mini-geometries like the ones shown above.

No one has the faintest idea how to go about proving that other dimensions really exist in the microcosm. The energies are so big that we cannot figure out how to build the necessary accelerators and instruments. We know that Mother Nature is rather frugal, so I would be very surprised if more than 4 dimensions existed. There have been many proposals since the 1920s to increase the number of dimensions of spacetime beyond the standard four that relativity uses. In all cases, these extra dimensions are vastly smaller than an atom and are not accessible to humans…fortunately!

Current string theory proposes 6 additional dimensions while M-theory allows for a seventh. These additional dimensions are sometimes called ‘internal degrees of freedom’ and are related to the number of fundamental symmetries present in the physical world at the quantum scale. The equations that physicists work with require these additional dimensions so that new symmetries can be defined that allow physicists to understand physical relationships between the various particle families.

Physicists think these are actual, real dimensions of the physical world, except that they are ‘compact’ and have finite sizes, unlike our 4 dimensions of space and time, which seem almost to be infinite in size. The figure above shows what these compact additional dimensions look like, mathematically. Each point in 4-dimensional space-time has another 6 dimensions attached to it, which ‘particles and forces’ can use as extra degrees of freedom to define themselves and how they will interact with each other. These spaces are called Calabi-Yau manifolds, and it is their 6-dimensional geometry that determines the exact properties of fundamental particles.

Do not confuse them with ‘hyperspace’ because the particles do not actually ‘move’ along these other dimensions. They are not ‘spatial’ dimensions, but are as unlike space and time as time is unlike space!

Can you exceed the speed of light by manipulating space-time in some way?


Other than in science fiction, there is absolutely no known way to exceed the speed of light and to transmit matter or information in that manner.

We need hard evidence that nature permits such things to happen, and this evidence is completely lacking. Physicists have been accelerating electrons to within a millimeter per second of the speed of light for decades and have never seen any departure from what is strictly permitted by special relativity. There is no ‘gap’ or ‘quirk’ that has ever been experimentally discovered that shows lightweight electrons can exceed the speed of light.

Another problem is that we do not actually know what the structure of spacetime is. All we know is that it is defined by the worldline geometry of particles. It is not something that preexists matter and energy, so other than pathologies such as black holes, we do not know what it means to actually ‘manipulate’ spacetime. Spacetime is manipulated by altering the worldlines of particles, not the other way around.

Is there empty space inside particles the same way there is inside atoms?


There is no known ‘inside’ to an elementary point particle such as an electron. It is not a tiny sphere with an interior space, though back in 1920 physicists asked whether the space inside an electron was the same as outside. This was shortly before the wave/particle properties of matter were discovered by Louis de Broglie, which set the stage for quantum mechanics.

One way you might think of it is that it looks something like a globular star cluster like the one shown here, called Messier 2, taken by the Hubble Space Telescope. (Credits: NASA, ESA, STScI, and A. Sarajedini (University of Florida))

The ‘empty space’ within and near particles such as electrons and quarks is far more active and complex close to the electron than in the lower-energy ‘empty space’ within the vastly-larger boundaries of atoms. There is no such thing as ‘empty space’ anywhere in nature. There are only apparent ‘voids’ that SEEM not to contain matter or energy, but at the level of the quantum world, even ‘empty’ voids are teeming with activity as particles come and go, created out of quantum fluctuations in any of a variety of fields in nature.

Heisenberg’s Uncertainty Principle all but guarantees the existence of such a dynamic, physical vacuum. Physicists, moreover, have conducted many experiments where the effects of these ghostly, half-real particles can be seen clearly. The level of activity that fills the physical vacuum is set by the energy at which the vacuum is ‘observed’. Within an atom, much of the activity is carried by ‘virtual photons’ that mediate the electromagnetic force, and by the occasional electron-positron pairs that appear and vanish. At very high energies, and correspondingly small length scales, the vacuum fills up with the comings and goings of even more high-energy particles: quarks-antiquarks, gluons-antigluons, muons-antimuons, and a whole host of other particles and their anti-matter twins. Within the nucleus of an atom, gluons and their anti-particles are everywhere, going about their business to keep the quarks bound into the nuclear ‘quark-gluon plasma’, portions of which we see as protons and neutrons.

For an electron, enormous energy is stored in its electric field at small scales, and this allows more and more massive particle-antiparticle pairs to be created out of quantum fluctuations in this field.

So the ‘inside’ of an electron is an onion-like region of space where low energy virtual particles form the extended halo surrounding a core where more and more massive particle clouds are encountered.