Category Archives: Physics

The End of Physics?

For 45 years I have followed the great pageant of ideas in theoretical physics. From high school through retirement, although my career and expertise are in astronomy and astrophysics, my passion has always been following the glorious ideas that have swirled around in theoretical physics. I watched as the quark theory of the 1960s gave way to Grand Unification Theory in the 1970s, and then to string theory and inflationary cosmology in the 1980s. I was thrilled by how these ideas could be applied to understanding the earliest moments of the Big Bang, and by how they might let me catch at least a mathematical glimpse of how the universe, time and space came to be, literally, out of Nothing: explanations not forthcoming from within Einstein's theory of general relativity.

Even as recently as 2012 this story continued to captivate me, even as I grappled with what might have been the premature end of my life at the hands of non-Hodgkin's lymphoma, diagnosed in 2008. And still I read the journal articles, watching as new ideas emerged, built upon the theoretical successes of the 1990s and beyond. But then a strange thing happened.

In the 1980s, the US embarked on the construction in Texas of the Superconducting Super Collider, but that project was scrapped and de-funded by Congress after ¼ of it had been built. Attention then turned to the European Large Hadron Collider project, which after 10 years of construction finally achieved its first collisions in 2009. The energy of this accelerator has steadily been increased to 13 TeV, and the machine now records some 600 million collisions per second, generating 30 petabytes of data per year. Among these collisions were expected to be the traces of 'new physics', and physicists were not disappointed. In 2012 the elusive Higgs Boson was detected, some 50 years after it was predicted to exist. It was a major discovery that signaled we were definitely on the right track in verifying the Standard Model. But since then, following many more years of searching among the debris of trillions of collisions, all we continue to see are the successful predictions of the Standard Model confirmed again and again, with only a few caveats.

Typically, physicists push experiments to ever-higher degrees of accuracy to uncover where our current theoretical predictions are becoming threadbare, revealing signs of new phenomena or particles; hence the term 'new physics'. Theoreticians then use this anomalous data to extend known ideas into a larger arena, and always select new ideas that are the simplest possible extensions of the older ones. But sometimes you have to incorporate entirely new ideas. This happened when Einstein developed relativity, which was a 'beautiful' extension of the older and simpler Newtonian physics. Ultimately it is the data that leads the way, and when data is not available, we get to argue over whose theory is more mathematically beautiful or elegant.

Today we have one such elegant contender for extending the Standard Model: a new symmetry in Nature called supersymmetry. Discovered mathematically in the mid-1970s, it showed how the particles in the Standard Model that account for matter (quarks, electrons) are related to the force-carrying particles (e.g. photons, gluons), and it also offered an integrated role for gravity as a new kind of force-particle. The hitch was that to make the mathematics work, so that it did not answer 'infinity' every time you did a calculation, you had to add a whole new family of super-heavy particles to the list of elementary particles. Many versions of 'Minimal Supersymmetric Standard Models' or MSSMs were possible, but most agreed that starting at a mass of about 1000 times that of a proton (1 TeV), you would start to see the lightest of these particles as 'low-hanging fruit', like the tip of an upside-down pyramid.

For the last seven years of LHC operation, using a variety of techniques and sophisticated detectors, absolutely no sign of supersymmetry has been found. In April 2017 at the Moriond Conference, physicists with the ATLAS Experiment at CERN presented their first results examining the combined 2015–2016 LHC data. This new dataset was almost three times larger than what was available at the last major particle physics conference, held in 2016. Searches for the supersymmetric partners to quarks and gluons (called squarks and gluinos) turned up nothing below a mass of 2 TeV. There was no evidence for exotic supersymmetric matter at masses below 6 TeV, and no heavy partner to the W-boson was found below 5 TeV.

Perhaps the worst result for me as an astronomer concerns dark matter. The MSSM, the simplest extension of the Standard Model with supersymmetry, predicted the existence of several relatively low-mass particles called neutralinos. When added to cosmological models, neutralinos seemed to account for the existence of dark matter, which makes up 27% of the gravitating stuff in the universe and controls the movement of ordinary matter as it forms galaxies and stars. The MSSM gave astronomers a tidy way to explain dark matter and close the book on what it is likely to be. Unfortunately the LHC has found no evidence for light-weight neutralinos in their expected MSSM mass ranges. (See for example https://arxiv.org/abs/1608.00872 or https://arxiv.org/abs/1605.04608)

Of course the searches will continue, as the LHC remains our best tool for exploring these energies well into the 2030s. But if past is prologue, the news isn't very promising. Typically the greatest discoveries of any new technology are made within the first decade of operation. The LHC is well on its way to ending its first decade with 'only' the Higgs boson as a prize. It was fully expected that by now the LHC would have given us hard evidence for literally dozens of new super-heavy particles, and a definitive candidate for dark matter to clean up the cosmological inventory.

So this is my reason for feeling sad. If the Higgs boson is a guide, it may take us several more decades and a whole new and expensive LHC replacement to find something significant to affirm our current ‘beautiful’ ideas about the physical nature of the universe. Supersymmetry may still play a role in this but it will be hard to attract a new generation of young physicists to its search if Nature continues to withhold so much as a hint we are on the right theoretical track.

If supersymmetry falls, string theory, which hinges on supersymmetry, may also have to be put aside or rethought. Nature seems to favor simple theories over complex ones, so are the current string theories with supersymmetry really the simplest ones?

Thousands of physicists have toiled over these ideas since the 1970s. In the past, such a herculean effort usually won out, with Nature rewarding the tedious intellectual work and some vestiges of the effort being salvaged for the new theory. I find it hard to believe that will not again be the case this time, but as I prepare for retirement I am realizing that I may not be around to see this final vindication.

So what should I make of my 45-year intellectual obsession to keep up with this research? Given what I know today would I have done things differently? Would I have taught fewer classes on this subject, or written fewer articles for popular science magazines?

Absolutely not!

I have thoroughly enjoyed the thrill of the new ideas about matter, space, time and dimension. The Multiverse idea offered me a new way of experiencing my place in 'reality'. I could never have invented these amazing ideas on my own, and they have entertained me for most of my professional life. Even today, Nature seems to have handed us something new: gravitational waves have been detected after a 60-year search; detailed studies of the cosmic 'fireball' radiation are giving us hints of the earliest moments in the Big Bang; and of course we have discovered THOUSANDS of new planets.

Living in this new world seems almost as intellectually stimulating, and it now offers me more immediate returns on my investment in the years remaining.

The Proton’s Spin

Protons are the workhorses of chemistry. Their numbers determine which element you are talking about, and their positive charge determines how many electrons will form a cloud around them to facilitate all manner of chemical reactions.

For decades we thought that protons were absolutely fundamental particles along with neutrons and electrons, but then came the quantum revolution of the 1920s and the escalating quest to understand what their actual physical properties were. Through experimentation, we found that protons all had exactly the same mass to many decimal places. They all had exactly +1.0000 unit of charge, also to many decimal places. But they also possessed an entirely new physical quantity found only in atomic-scale physics. This quantity was called ‘spin’ but had nothing to do with the motion of a top about its axis, although paradoxically it could nonetheless be interpreted in that way.

Quantum spin, unlike the continuous spinning of a top, comes only in integer units like 0, 1, 2, etc., or in half-integer units like ½, 3/2, 5/2, etc. Physicists soon discovered that fundamental particles like photons (the carriers of light energy) only had a quantum spin of exactly 1.0, while protons, neutrons, neutrinos and electrons had exactly ½ unit of spin. The former kinds of particles were called bosons while the latter were given the name fermions. Composite particles made up from these elementary bosons and fermions can have other spin values, but only what arises from adding, in the proper way, the elementary spins of their constituents.
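The bookkeeping behind 'adding, in the proper way' is the quantum rule that two spins s1 and s2 combine to give total spins running from |s1 − s2| up to s1 + s2 in integer steps. Here is a toy Python sketch of just that counting rule (not the full quantum mechanics):

```python
# Combining quantum spins: two spins s1 and s2 can add to any total
# from |s1 - s2| up to s1 + s2, in integer steps (units of hbar).
from fractions import Fraction

def combine(s1, s2):
    """List the possible total spins when spins s1 and s2 are added."""
    lo, hi = abs(s1 - s2), s1 + s2
    return [lo + i for i in range(int(hi - lo) + 1)]

half = Fraction(1, 2)

# Two spin-1/2 particles can pair up to total spin 0 or total spin 1:
pair_totals = combine(half, half)

# Folding in a third spin-1/2 gives total spin 1/2 or 3/2, which is
# how three spin-1/2 quarks can make a spin-1/2 proton:
triple_totals = sorted({t for s in pair_totals for t in combine(s, half)})
print(pair_totals, triple_totals)
```

Applied to the proton's three quarks, the spin-1/2 outcome is the one Nature picks.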

By the 1960s, experiments had begun to show that protons were not actually fundamental particles at all, nor were neutrons for that matter. Theoretical models that built up protons and neutrons and many other known particles called mesons and baryons soon led to the idea of the quark. For protons and neutrons you needed three quarks, while for the mesons you only needed two, of which one would be a quark and the other an anti-quark. The mathematics was impressive and elegant, and this system of quarks soon became the favored model for all particles that interacted through the strong nuclear force, itself produced by the exchange of particles called gluons. Also in this scheme, quarks would be spin-1/2 fermions and the gluons would be spin-1 bosons, much like the photons which carry light energy.

All seemed to be going great by the 1970s and 1980s. The quark model flourished, and many new subtle phenomena were uncovered through the application of what became the Standard Model of physics. But there was a fly in the ointment.

At first the explanation for how a proton could have a spin of ½ while at the same time being composed of three quarks, each also a spin-1/2 particle, seemed pretty well settled. Because a proton consists of two identical 'up' quarks and one 'down' quark, it was entirely reasonable that the two up quarks would have equal and opposite spins canceling each other out, leaving the down quark to carry the proton's ½ unit of spin. Similarly for the neutron: its two down quarks combine to have net-zero spin, leaving the single up quark to carry the ½ unit of spin for the neutron.

The Proton Spin Crisis

All seemed to be well until 1987, when experiments by the European Muon Collaboration used carefully prepared beams of particles called muons to probe the interior of protons and double-check the way the quark spins were aligned with the proton's spin. What they found was startling. Not more than 25% of the proton's spin was generated by the quarks at all. The remaining 75% of what defines the spin of a proton had to come from some other source!

When you look at the mass of a proton compared to the masses of its three constituent quarks you discover something very fascinating. The masses of the quarks only account for about 1% of the mass of the entire proton. Instead, thanks to Einstein's E=mc², it is the stress energy of the gluon fields inside the proton that contributes the missing 99%. The mass that you read on the bathroom scale is only 1% contributed by the mass of your elementary quarks, and 99% by the invisible energy (mass) of the gluon fields that occupy nuclear space!
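That 1% figure is easy to check on the back of an envelope. The quark masses below are approximate current-quark values assumed for illustration:

```python
# Rough mass budget of the proton (masses in MeV/c^2; the quark values
# are approximate current-quark masses, assumed for illustration).
m_up, m_down = 2.2, 4.7     # up- and down-quark masses
m_proton = 938.3            # measured proton mass

quark_total = 2 * m_up + m_down          # proton = two ups + one down
fraction = quark_total / m_proton        # share carried by the quarks
print(f"Quarks account for roughly {fraction:.1%} of the proton's mass")
```

The remaining ~99% is the gluon-field energy described above.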

Now for proton spin, the only other things rattling around inside the intense fields in the interior of a proton were the gluons holding the quarks together, and an ephemeral sea of quark-antiquark pairs that momentarily appeared and disappeared in the vacuum of space found there. This sea of vacuum or ‘virtual’ particles is absolutely required by modern quantum physics, and although we can never detect their comings and goings by any direct observation, we can detect their influence on nearby elementary particles.

In 2014, experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven, New York collided polarized protons together, and physicists think they have found a large part of the remainder of the proton's spin. Perhaps 40% to 50% seems to be contributed by the gluons themselves. This still leaves about 25% to be found in some other source. Meanwhile, other experiments by MIT physicists determined that any anti-quarks produced inside a proton among the virtual quark sea contribute very little to the overall spin of the proton.

The bottom line today seems to be what this table shows:

Quark spin………………………………~25%
Gluon spin………………………………40–50%
Orbital angular momentum…………25–35%

When the experimental constraints are added up, we still do not have a precise measure of how the various proton constituents add up to give the universally constant spin of 1/2 to a proton that is observed for all protons to many decimal places.
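A quick sum over the table's ranges shows the problem: the allowed total only brackets the required 100% rather than pinning it down. A small Python check, using the numbers above:

```python
# Summing the proton spin budget from the table (fractions of the total).
quark_spin = (0.25, 0.25)   # quark spin: about 25%
gluon_spin = (0.40, 0.50)   # gluon spin: 40-50%
orbital    = (0.25, 0.35)   # orbital angular momentum: 25-35%

low  = quark_spin[0] + gluon_spin[0] + orbital[0]
high = quark_spin[1] + gluon_spin[1] + orbital[1]
# The range straddles 100% but leaves plenty of room for imprecision.
print(f"Budget total: {low:.0%} to {high:.0%}")
```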

Who would have thought that such an important number as ‘1/2’ arises from combining a number of messy phenomena that themselves seem imprecise!

Check back here on Tuesday, May 30 for my next topic!

The First Billion Years

When we think about the Big Bang we tend only to look at the first few instants when we think all of the mysterious and exciting action occurred. But actually, the first BILLION years are the real stars of this story!

My books 'Eternity: A User's Guide' and 'Cosmic History I and II' provide a more thorough, and 'twitterized', timeline of the universe from the Big Bang to the literal end of time, if you are interested in the whole story as we know it today. You can also look at a massive computer simulation developed by Harvard and MIT cosmologists in 2014.

What we understand today is not merely based on theoretical expectations. Thanks to specific observations during the last decade, we have actually discovered distant objects that help us probe critical moments during this span of time.

Infancy

By the end of the first 10 minutes after the Big Bang, the universe was filled with a cooling plasma of hydrogen and helium nuclei and electrons – at seething temperatures over 100 million Celsius, still too hot for them to come together to form neutral atoms. The traces that we do see of the fireball light from the Big Bang are called the cosmic background radiation, and astronomers have been studying it since the 1960s. Today, its temperature is 2.726 kelvins, but at the level of one part in 100,000 there are irregularities in its temperature across the entire sky, detected by the COBE, WMAP and Planck satellites. These irregularities are the gravitational fingerprints of the vast clusters of galaxies that would form over the following billions of years.

By 379,000 years, matter had cooled down to the point where electrons could bond with atomic nuclei to form neutral atoms of hydrogen and helium. For the first time in cosmic history, matter could go its own way and no longer be affected by the fireball radiation, which used to blast these assembled atoms apart faster than they could form. If you were living at this time, it would look as though you were standing inside the surface of a vast dull-red star, steadily fading to black as the universe continued to expand and the gas steadily cooled over the millennia. No matter where you stood in the universe at this time, all you would see around you is this dull-red glow across the sky.

6 million years – By this time, the cosmic gas has cooled to the point that its temperature is only 500 kelvins (440 °F). At these temperatures, it no longer emits any visible light. The universe is now fully in what astronomers call the Cosmic Dark Ages. If you were there and looking around, you would see nothing but an inky blackness no matter where you looked! With infrared eyes, however, you would see the cosmos filled by a glow spanning the entire sky.

20 million years – The hydrogen-helium gas that exists all across the universe is starting to feel the gravitational effects of dark matter, which has begun to form large clumps and vast spiderweb-like networks spanning the entire cosmos, with masses of several trillion times that of our sun. As the cold, primordial gas falls into these gravity wells, it forms what will later become the halos of modern-day galaxies. All of this happens under a cloak of complete darkness, because there were as yet no physical objects in existence to light things up. Only detailed supercomputer simulations can reveal what occurred during this time.

The First Stars

100 million years – Once the universe got cold enough, large gas clouds stopped being controlled by their internal pressure, and gravity started to take the upper hand. First the vast collections of matter destined to become the haloes of galaxies formed. Then, at about the same time, the first generation of stars appeared in the universe. These Population III stars, made from nearly transparent hydrogen and helium gas, were so massive that they lived for only a few million years before detonating as supernovae. As the universe becomes polluted with heavier elements from billions of supernovae, collapsing clouds become more opaque to their own radiation, and so the collapse process stops when much less matter has formed into the infant stars. Instead of only producing massive Population III stars with 100 times our sun's mass, numerous stars with masses of 50, 20 and 5 times our sun's mass form with increasing frequency. Even smaller stars like our own sun begin to appear by the trillions. Most of this activity occurs in what will eventually become the halo stars in modern galaxies like the Milky Way. The vast networks of dark matter become illuminated from within as stars and galaxies begin to form.

200 million years – The oldest known star in our Milky Way, called SM0313, formed about this time. This star contains almost no iron – less than one ten-millionth of the iron found in our own Sun – and is located 6,000 light years from Earth. Another star, called the Methuselah Star, is located about 190 light years from Earth and formed at about the same time as SM0313.

The First Quasars and Black Holes

300 million years – The most distant known 'quasar' is called APM 8279+5255, and contains traces of the element iron. This means that at about this time after the Big Bang, some objects were already powered by enormous black holes that steadily consume a surrounding disk of gas and dust. For APM 8279+5255, the mass of this black hole is about 20 billion times that of the Sun. Astronomers do not know how a black hole this massive could have formed so soon after the Big Bang. A simple division shows that a 20 billion solar mass black hole forming in 300 million years would require a growth rate higher than 60 solar masses a year!
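That simple division is easy to reproduce, using the numbers quoted above:

```python
# Average growth rate needed to build a 20-billion-solar-mass black hole
# in the 300 million years available since the Big Bang.
black_hole_mass = 20e9     # solar masses
available_time  = 300e6    # years

rate = black_hole_mass / available_time
print(f"Required average growth: about {rate:.0f} solar masses per year")
```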

The First Galaxies

400 million years – The cold primordial matter becomes clumpy under the action of its own gravity. These clumps have masses of perhaps a few billion times our sun or less, and over time this material starts to collapse locally into even smaller clouds that become mini-galaxies where intense episodes of star formation activity are playing out.

The most distant galaxy discovered so far with the Hubble Space Telescope, GN-z11, is actually ablaze with bright young blue stars. In Hubble images it looks red because the wavelengths of its light have been stretched by the expansion of the universe to longer, redder wavelengths. As with images of so many other young galaxies, we cannot see individual stars, but its irregular shape shows that the stars it contains are spread out in irregular clumps within their host galaxy, possibly because they come from separate, merging clouds whose collisions have triggered the star-forming activity we see.

Although it is hard work, astronomers can detect the faint reddish traces of dozens of other infant galaxies such as MACS0647-JD, UDFj-39546284 and EGSY-2008532660. These are all small dwarf galaxies, over 100 times less massive than our Milky Way, and all undergoing intense star-forming activity between 400 and 600 million years after the Big Bang.

The Gamma-Ray Burst Era begins about 630 million years after the Big Bang. Gamma-ray bursts are caused by very massive stars, perhaps 50 to 100 times our own sun’s mass, that explode as hypernovae and form a single black hole, so we know that these kinds of stars were already forming and dying by this time. Today from ‘across the universe’ we see these events occur about once each day!

800 million years – The quasar ULAS J1120+0641 is another young case of a supermassive black hole that has formed, and by this time is eating its surrounding gas and stars at a prodigious rate. The mass of this black hole is about 2 billion times the mass of our sun, and like others is probably the result of frequent galaxy mergers and rapid eating of surrounding matter.

Also at around this time we encounter the Himiko Lyman Alpha Blob, one of the most massive objects ever discovered in the early universe. It is 55,000 light-years across, which is half the diameter of the Milky Way. Objects like Himiko are probably powered by an embedded galaxy that is producing young massive stars at a phenomenal rate of 500 solar masses per year or more.

900 million years – The most brilliant objects we can see from this era include the quasar SDSS J0100+2802, with a luminosity 420 trillion times that of our own Sun. It is powered by a supermassive black hole 12 billion times the mass of our sun.

The Re-Ionization Era

960 million years – By this time, massive stars in what astronomers call 'Population III' are being born by the billions across the entire universe. These massive stars emit almost all of their light in the ultraviolet part of the spectrum. There are now so many intense sources of ultraviolet radiation in the universe that all of the remaining hydrogen gas becomes ionized. Astronomers call this the Reionization Era. Within a few hundred million years, only dwarf-galaxy-sized blobs of gas still remain, and they are being quickly evaporated. We can still see the ghosts of these clouds in the light from very distant galaxies. The galaxy SSA22-HCM1 is the brightest of the objects called 'Lyman-alpha emitters'. It may be producing new stars at a rate of 40 solar masses per year, along with enormous amounts of ultraviolet light. The galaxy HDF 4-473.0, also spotted at this age, is only 7,000 light years across, with an estimated star formation rate of 13 solar masses per year.

1 billion years – First by twos and threes, then by dozens and hundreds, clusters of galaxies begin to form as the gravity of matter pulls the clumps of galaxy-forming matter together. This clustering is sped up by the additional gravity provided by dark matter. In a universe without dark matter, the number of clusters of galaxies would be dramatically smaller.

Clusters of Galaxies Form

Proto-galaxy cluster AzTEC-3 consists of 5 smaller galaxy-like clumps of matter, each forming stars at a prodigious rate. We now begin to see how some of the small clumps in this cluster are falling together and interacting, eventually to become a larger galaxy-sized system. This process of cluster formation is now beginning in earnest as more and more of these ancient clumps fall together under a widening umbrella of gravity. Astronomers are discovering more objects like AzTEC-3, which is the most distant known progenitor to modern elliptical galaxies. It appears that by 2.2 billion years after the Big Bang, half of all the massive elliptical galaxies we see around us today had already formed.

Thanks to the birth and violent deaths of generations of massive Population III stars, the universe is now flooded with heavy elements such as oxygen, carbon and nitrogen: the building blocks for life. There are also elements like silicon, iron and uranium, which help to build rocky planets and heat their interiors. The light from the quasar J033829.31+002156.3 can be studied in detail, and shows that by this time, element-building through supernova explosions of Population III stars has produced lots of carbon, nitrogen and silicon. The earliest planets and life forms based upon these elements now have a chance to appear in the universe. Amazingly, we have already spotted such an ancient world!

Earliest Planets Form

At 1 billion years after the Big Bang, the oldest known planet, PSR B1620-26 b, has already formed. Located in the globular cluster Messier 4, about 12,400 light-years from Earth, it bears the unofficial nicknames "Methuselah" and "The Genesis Planet" because of its extreme age. The planet is in orbit around two very old stars: a dense white dwarf and a neutron star. The planet has a mass of 2.5 times that of Jupiter, and orbits at a distance a little greater than the distance between Uranus and our own Sun. Each orbit of the planet takes about 100 years.

Wonders to Come!

Although the Hubble Space Telescope strains at its capabilities to see objects at this early stage in cosmic history, the launch of NASA’s Webb Space Telescope will uncover not dozens but thousands of these young pre-galactic objects with its optimized design. Within the next decade, we will have a virtually complete understanding of what happened during and after the Cosmic Dark Ages when the earliest possible sources of light could have formed, and one can only marvel at what new discoveries will turn up.

What an amazing time in which to be alive!

Check back here on Wednesday, May 24 for my next topic!

Our Unstable Universe

Something weird is going on in the universe that is causing astronomers and physicists to lose a bit of sleep at night. You have probably heard about the discovery of dark energy and the accelerating expansion of the universe. This is a sign that something is afoot that may not have a pleasant outcome for our universe or the life in it.

Big Bang Cosmology V 1.0

The basic idea is that our universe has been steadily expanding in scale since 14 billion years ago, when it flashed into existence in an inconceivably dense and hot explosion. Today we can look around us and see this expansion as the constantly-increasing distances between galaxies embedded in space. Astronomers measure this change in terms of a single number called the Hubble Constant, which has a value of about 70 km/sec per megaparsec. For every megaparsec of separation between galaxies (a distance of 3.26 million light years), you will see distant galaxies speeding away from each other an additional 70 km/sec. This conventional Big Bang theory has been the mainstay of cosmology for decades, and it has helped explain everything from the formation of galaxies to the abundance of hydrogen and helium in the universe.
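Hubble's law, v = H0 × d, is simple enough to tabulate directly. A small sketch using the 70 km/sec per megaparsec figure above:

```python
# Hubble's law: recession speed grows linearly with distance, v = H0 * d.
H0 = 70.0  # Hubble Constant, km/s per megaparsec

def recession_speed(distance_mpc):
    """Recession speed in km/s for a galaxy at a distance in megaparsecs."""
    return H0 * distance_mpc

# Double the distance, double the speed:
for d in (1, 10, 100, 1000):
    print(f"{d:>5} Mpc -> {recession_speed(d):>8.0f} km/s")
```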

Big Bang Cosmology V 2.0

Beginning in the 1980s, physicists such as Alan Guth and Andrei Linde added some new physics to the Big Bang based on cutting-edge ideas in theoretical physics. For a decade, physicists had been working on ways to unify the three forces in nature: electromagnetism, and the strong and weak nuclear forces. This led to the idea that just as the Higgs Field was needed to make the electromagnetic and weak forces look different rather than behave as nearly identical 'electroweak' forces, the strong force needed its own 'scalar field' to break its symmetry with the electroweak force.

When Guth and Linde added this field to the equations of Big Bang cosmology they made a dramatic discovery. As the universe expanded and cooled, for a brief time this new scalar field made the transition between a state where it allowed the electroweak and strong forces to look identical, and a state where this symmetry was broken, representing the current state of affairs. This period of time extended from about 10^-37 second to 10^-35 seconds; a mere instant in cosmic time, but the impact of this event was spectacular. Instead of the universe expanding at a steady rate in time as it does now, the separations between particles increased exponentially in time in a process called Inflation. Physicists now had a proper name for this scalar field: the Inflaton Field.

Observational cosmology has been able to verify since the 1990s that the universe did, indeed, pass through such an inflationary era at about the calculated time. The expansion of space at a rate many trillions of times faster than the speed of light ensured that we live in a universe that looks as ours does, especially in terms of the uniformity of the cosmic 'fireball' temperature. It's 2.7 kelvins no matter where you look, which would have been impossible had the Inflationary Era not existed.

Physicists consider the vacuum of space to be more than ‘nothing’. Quantum mechanically, it is filled by a patina of particles that invisibly come and go, and by fields that can give it a net energy. The presence of the Inflaton Field gave our universe a range of possible vacuum energies depending on how the field interacted with itself. As with other things in nature, objects in a high-energy state will evolve to occupy a lower-energy state. Physicists call the higher-energy state the False Vacuum and the lower-energy state the True Vacuum, and there is a specific way that our universe would have made this change. Before Inflation, our universe was in a high-energy, False Vacuum state governed by the Inflaton Field. As the universe continued to expand and cool, a lower-energy state for this field was revealed in the physics, but the particles and fields in our universe could not instantaneously go into that lower-energy state. As time went on, the difference in energy between the initial False Vacuum and the True Vacuum continued to increase. Like bubbles in a soda, small parts of the universe began to make this transition so that we now had a vast area of the universe in a False Vacuum in which bubbles of space in the True Vacuum began to appear. But there was another important process going on as well.

When you examine how this transition from False to True Vacuum occurred in Einstein’s equations that described Big Bang cosmology, a universe in which the False Vacuum existed was an exponentially expanding space, while the space inside the True Vacuum bubbles was only expanding at a simple, constant rate defined by Hubble’s Constant. So at the time of inflation, we have to think of the universe as a patina of True Vacuum bubbles embedded in an exponentially-expanding space still caught in the False Vacuum. What this means for us today is that we are living inside one of these True Vacuum bubbles where everything looks about the same and uniform, but out there beyond our visible universe horizon some 14 billion light years away, we eventually enter that exponentially-expanding False Vacuum universe. Our own little bubble may actually be billions of times bigger than what we can see around us. It also means that we will never be able to see what these other distant bubbles look like because they are expanding away from us at many times the speed of light.
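That 14-billion-light-year horizon figure falls out of Hubble's law: set the recession speed equal to the speed of light and solve for the distance. A rough check, assuming H0 = 70 km/sec per megaparsec:

```python
# Distance at which Hubble-law recession reaches the speed of light.
C_KM_S     = 299_792.458   # speed of light in km/s
H0         = 70.0          # Hubble Constant, km/s per megaparsec
LY_PER_MPC = 3.26e6        # light years in one megaparsec

d_mpc = C_KM_S / H0                # horizon distance in megaparsecs
d_gly = d_mpc * LY_PER_MPC / 1e9   # ... in billions of light years
print(f"Hubble horizon: about {d_gly:.0f} billion light years")
```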

Big Bang Cosmology 3.0

You may have heard of Dark Energy and the accelerating expansion of the universe that astronomers have detected. By looking at distant supernovae, we can tell that since about 6 billion years after the Big Bang, our universe has not been expanding at a steady rate at all. The separations between galaxies have been increasing at an exponential rate. This is caused by Dark Energy, which is present in every cubic meter of space. The more space there is as the universe expands, the more Dark Energy there is, and the faster the universe expands. What this means is that we are living in a False Vacuum state today, in which a new Inflaton Field is causing space to dilate exponentially. It doesn’t seem too uncomfortable for us right now, but the longer this state persists, the greater is the probability that our corner of the universe will see a ‘bubble’ of the new True Vacuum appear. Inside this bubble the physics will be slightly different: the mass of the electron or of the quarks may change, for example. We don’t know when our corner of the universe will switch over to its True Vacuum state. It could be tomorrow, or 100 billion years from now. But there is one thing we do know about this progressive, accelerated expansion.

Eventually, distant galaxies will be receding from our Milky Way at faster than the speed of light as they are helplessly carried along by a monstrously-dilating space. This also means they will become permanently invisible for the rest of eternity, as their light signals can never keep pace with the exponentially-increasing space between them. Meanwhile, our Milky Way will become the only cosmic collection of matter we will ever be able to see from then on. It is predicted that this situation will occur about 100 billion years from now, when the Andromeda Galaxy will pass beyond this distant horizon.
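The distance at which steady Hubble expansion alone carries galaxies away at the speed of light is easy to estimate. A minimal back-of-the-envelope sketch, assuming an illustrative round Hubble constant of 70 km/s/Mpc (not a measured input):

```python
# Back-of-the-envelope: at what distance does Hubble's law v = H0 * d
# give a recession speed equal to the speed of light?
# H0 = 70 km/s/Mpc is an assumed illustrative value.

C_KM_S = 299792.458           # speed of light, km/s
H0 = 70.0                     # Hubble constant, km/s per Mpc (assumed)
MPC_PER_GLY = 306.6           # megaparsecs per billion light years

d_mpc = C_KM_S / H0           # distance at which v = c, in Mpc
d_gly = d_mpc / MPC_PER_GLY   # same distance in billions of light years

print(f"Recession speed reaches c at roughly {d_gly:.0f} billion light years")
```

This lands near 14 billion light years, which is why the horizon figure quoted above comes out where it does.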

What the new physics will be in the future True Vacuum state is anyone’s guess. If the difference in energy between the False and True Vacuum is only a small fraction of the mass of a neutrino (a few electron-Volts), we may hardly notice that it happened, and life will continue. But if it is comparable to the mass of the electron (511,000 eV), we are in for some devastating and fatal surprises best not contemplated.

Check back here on Tuesday, May 16 for my next topic!

Boltzmann Brains

Back in the 1800s, Ludwig Boltzmann (1844-1906) developed the ideas of entropy and thermodynamics, which have been a mainstay of chemistry and physics ever since. Long before atoms were identified, Boltzmann used them in designing his theory of statistical mechanics, which related entropy to the number of possible statistical states these particles could occupy. His famous formula

S = k log W

is even inscribed on his tombstone! His frustrations with the anti-atomists, who hated his crowning achievement of statistical mechanics, led him in profound despair to commit suicide in 1906.

If you flip a coin 4 times, it is unlikely that all 4 flips will result in all heads or all tails. It is far more likely that you will get a mixture of heads and tails. This is a result of there being a total of 2^4 = 16 possible outcomes or ‘states’ for this system, and the all-heads and all-tails states each occur only 1/16 of the time. Most of the states you can produce (14/16) are a mixture of heads and tails. Now replace the coin flips by the movement of a set of particles in three dimensions.
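The coin-flip counting argument is easy to verify by brute force. A minimal sketch:

```python
from itertools import product

# Enumerate all 2^4 = 16 outcomes of four coin flips and count how many
# are all-heads, all-tails, or a mixture of the two.
outcomes = list(product("HT", repeat=4))
all_heads = sum(1 for o in outcomes if set(o) == {"H"})
all_tails = sum(1 for o in outcomes if set(o) == {"T"})
mixed = len(outcomes) - all_heads - all_tails

print(len(outcomes), all_heads, all_tails, mixed)  # 16 1 1 14
```

The 14-out-of-16 dominance of mixed states is exactly the kind of state-counting that statistical mechanics scales up to astronomical numbers of particles.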

Boltzmann’s statistical mechanics relates the number of possible states for N particles moving in 3-dimensional space to the entropy of the system. It is more difficult to calculate the number of states than for the coin-flip example above, but it can be done using his mathematics, and the result is the ‘W’ in his equation S = k log W. The bottom line is that the more states available to a collection of particles (for example the atoms of a gas), the higher is the entropy given by S = k log W. How does a gas access more states? One way is to turn up its temperature so that the particles move faster. This means that as you increase the temperature of a gas, its entropy increases in a measurable way.
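As a toy illustration of S = k log W (with the natural logarithm, as Boltzmann used it), here is a minimal sketch; the `entropy` helper is a hypothetical name for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(W):
    """Boltzmann entropy S = k ln W for W equally-likely accessible states."""
    return K_B * math.log(W)

# Giving a system access to twice as many states always adds k*ln(2)
# of entropy, no matter how many states it started with.
delta = entropy(32) - entropy(16)
print(math.isclose(delta, K_B * math.log(2)))  # True
```

The logarithm is what makes entropy additive: combining two independent systems multiplies their state counts but adds their entropies.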

Cosmologically, as our universe expands and cools, its entropy is actually increasing steadily, because more and more space is available for the particles to occupy even as they move more slowly as the temperature declines. The Big Bang event itself, even at its unimaginably high temperature, was actually a state of very low entropy, because even though particles were moving near the speed of light, there was so little space for matter to occupy!

For random particles in a gas colliding like billiard balls, with no other organizing forces acting on them (called the kinetic theory of gases), we can imagine a collection of 100 red particles clustered in one corner of a box, and 1000 other blue particles located elsewhere in the box. If we were to stumble on a box of 1100 particles that looked like this, we would immediately say ‘how odd’, because we sense that as the particles jostled around, the 100 red particles would quickly get uniformly spread out inside the box. This is an expression of there being far more available states where the red balls are uniformly mixed than states where they are clustered together. It is also a statement that the clustered arrangement is a lower-entropy version of the system, and the uniformly-mixed arrangement is a higher-entropy version. So we would expect the system to evolve from lower to higher entropy as the red particles diffuse through the box: this is the Second Law of Thermodynamics.

Boltzmann Brains.

The problem is that, given enough time, even very rare states have a non-zero probability of occurring. With enough time and enough jostling, we could randomly find the red balls once again clustered together. It may take billions of years, but nothing in statistical principles stands in the way of this happening. Now let’s suppose that instead of just a collection of red balls, we have a large enough system of particles that some rare states resemble any physical object you can imagine: a bacterium, a cell phone, a car…even a human brain!

A human brain is a collection of particles organized in a specific way to function and to store memories. In a sufficiently large and old universe, there is no obvious reason why such a brain could not just randomly assemble itself like the 100 red particles in the above box. It would be sentient, have memories and even senses. None of its memories would be of actual events it experienced but simply artificial reconstructions created by just the right neural pathways randomly assembled. It would remember an entire lifetime to date without having actually lived or occupied any of the events in space and time.

When you calculate the probability for such a brain to evolve naturally in a low-entropy universe like ours, rather than just randomly assembling itself, you run into a problem. According to Boltzmann’s cosmology, our vast, low-entropy and seemingly highly organized universe is embedded in a much larger universe where the entropy is much higher. It is far less likely for our organized universe to exist in such a low-entropy state conducive to organic evolution than for a sentient brain to simply assemble itself from random collisions. In any universe destined to last for eternity, it will rapidly be populated by incorporeal brains rather than actual sentient creatures! This is the Paradox of the Boltzmann Brain.

Even though Creationists like to invoke the Second Law to deny evolution as a process of random collisions, the consequence of this random idea about structure in the universe is that we are actually all Boltzmann Brains, not assembled by evolution at all. It is, however, of no comfort to those who believe in God, because God was not involved in randomly assembling these brains, complete with their own memories!

So how do we avoid filling our universe with the abomination of these incorporeal Boltzmann Brains?

The Paradox Resolved

First of all, we do not live in Boltzmann’s universe. Instead of an eternally static system existing in a finite space, direct observations show that we live in an expanding universe of declining density and steadily increasing entropy.

Secondly, it isn’t just random collisions that dictate the assembly of matter (a common idea used by Creationists to dismantle evolution) but a collection of specific underlying forces and fundamental particles that do not come together randomly but in a process that is microscopically determined by specific laws and patterns. The creation of certain simple structures leads through chemical processes to the inexorable creation of others. We have long-range forces like gravity and electromagnetism that non-randomly organize matter over many different scales in space and time.

Third, we do not live in a universe dominated by random statistical processes, but one in which we find regularity in composition and physical law spanning scales from the microscopic to the cosmic, all the way out to the edges of the visible universe. When two particles combine, they can stick together through chemical forces and grow in numbers from either electromagnetic or gravitational forces attracting other particles to the growing cluster, called a nucleation site.

Fourth, quantum processes and gravitational processes dictate that all existing particles will eventually decay or be consumed in black holes, which will evaporate to destroy all but the most elementary particles such as electrons, neutrinos and photons; none of which can be assembled into brains and neurons.

The result is that Boltzmann Brains could not exist in our universe, and will not exist even in the eternal future as the cosmos becomes more rarefied and reaches its final and absolute thermodynamic equilibrium.

The accelerated expansion of the universe now in progress will also ensure that eventually all complex collections of matter are shattered into individual fundamental particles, each adrift in its own expanding and utterly empty universe!

Have a nice day!

Check back here on Tuesday, May 9 for my next topic!

The Planck Era

The Big Bang theory says that the entire universe was created in a tremendous explosion about 14 billion years ago. The enormity of this event is hard to grasp and it seems natural to ask ourselves ‘What was it like then?’ and ‘What happened before the Big Bang?’.

Thanks to what physicists call the Standard Model, we have a detailed understanding of quantum physics, matter, energy and force that let us reproduce what the universe looked like as early as a billionth of a second after the Big Bang.  The results of high-precision observational cosmology also let us verify that the Standard Model predictions match what we see as the general properties of the matter and energy in our universe up until this unimaginable time.  We can actually go a bit farther back towards the beginning thanks to detailed studies of the cosmic background radiation!

At a time 10^-36 seconds (that is, a trillionth of a trillionth of a trillionth of a second!) after the Big Bang, a spectacular change in the size of the universe occurs. This is the Inflationary Era, when the strong nuclear force becomes distinguishable from the weak and electromagnetic forces. The temperature is an incredible 10 thousand trillion trillion degrees, and the density of matter has soared to nearly 10^75 gm/cm^3. This number is so enormous that even our analogies are almost beyond comprehension. At these densities, the entire Milky Way galaxy could easily be stuffed into a volume no larger than a single hydrogen atom!
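The galaxy-in-an-atom analogy is easy to sanity-check. In the sketch below, the Milky Way mass of about 10^12 solar masses and the Bohr radius are assumed round figures for illustration:

```python
# Sanity check of the analogy above: at 1e75 g/cm^3, what volume would
# hold the Milky Way's mass? (Assumed illustrative figures throughout.)
M_SUN_G = 1.989e33              # solar mass, grams
MW_MASS_G = 1e12 * M_SUN_G      # Milky Way mass, grams (assumed ~1e12 suns)
RHO = 1e75                      # density quoted above, g/cm^3

needed_volume = MW_MASS_G / RHO                      # cm^3 at that density
atom_volume = (4.0 / 3.0) * 3.14159 * (5.3e-9) ** 3  # hydrogen atom, cm^3

print(needed_volume < atom_volume)  # True: the galaxy fits easily
```

The required volume comes out around 10^-30 cm^3, several orders of magnitude smaller than a hydrogen atom, so the analogy holds with room to spare.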

Between a billionth of a second and 10^-35 seconds lies a No Man’s Land currently inaccessible to our technology; probing it would require instruments such as the CERN Large Hadron Collider scaled up to the size of our solar system, or even larger! This is also the domain of the so-called Particle Desert that I previously wrote about, and the landscape of the predictions made by supersymmetric string theory, for which there is as yet no evidence despite decades of intense theoretical research.

THROUGH A LOOKING GLASS, DARKLEY

Since our technology will not allow us to physically reproduce the conditions during these ancient times, we must use our mathematical theories of how matter behaves to mentally explore what the universe was like then. We know that the appearance of the universe before 10^-43 seconds can only be adequately described by modifying the Big Bang theory, because that theory is, in turn, based on the General Theory of Relativity. At the Planck Scale, we need to extend General Relativity so that it includes not only the macroscopic properties of gravity but its microscopic characteristics as well. The theory of ‘Quantum Gravity’ is still far from completion, but physicists tend to agree that there are some important guide-posts to help us understand how it applies to Big Bang theory.

QUANTUM COSMOLOGY

In the language of General Relativity, gravity is a consequence of the deformation of space caused by the presence of matter and energy. In Quantum Gravity theory, gravity is produced by massless gravitons, or strings (in what is called string theory), or loops of energy (in what is called loop quantum gravity), so that gravitons now represent individual packages of curved space.

The appearance and disappearance of innumerable gravitons gives the geometry of space a very lumpy and dynamic character. The geometry of space twists and contorts so that far-flung regions of space may suddenly find themselves connected by ‘wormholes’ and quantum black holes, which constantly appear and disappear within 10^-43 seconds. The geometry of space at a given moment has to be thought of as an average over all the 3-dimensional space geometries that are possible.

What this means is that we may never be able to calculate with any certainty exactly what the history of the universe was like before 10^-43 seconds. To probe the history of the universe then would be like trying to trace your ancestral roots if every human being on Earth had a possibility of being one of your parents. Now try to trace your family tree back a few generations! An entirely new conception of what we mean by ‘a history for the universe’ will have to be developed. Even the concepts of space and time will have to be completely re-evaluated in the face of the quantum fluctuations of spacetime at the Planck Era!

Now we get to a major problem in investigating the Planck Era.

BUT WAIT…THERE’S MORE!

Typically we make observations in nuclear physics by colliding particles and studying the information created in the collision, such as the kinds of particles created and their energy, momentum, spin and other ‘quantum numbers’. The whole process of testing our theories relies on studying the information generated in these collisions, searching for patterns, and comparing them to the predictions. The problem is that this investigative process breaks down as we explore the Planck Era. When the quantum particles of space (gravitons, strings or loops) collide at these enormous energies and small scales, they create quantum black holes that immediately evaporate. You cannot probe even smaller scales of space and time because all you do is create more quantum black holes and wormholes. Because the black holes evaporate into a randomized hailstorm of new gravitons, you cannot actually make observations of what is going on to search for non-random patterns the way you do in normal collisions!

Quantum Gravity, if it actually exists as a theory, tells us that we have finally reached a theoretical limit to how much information we can glean about the Planck Era. Our only viable options involve exploring the Inflationary Era and how this process left its fingerprints on the cosmic background radiation through the influence of gravitational waves.

Fortunately, we now know that gravitational waves exist, thanks to the discoveries by the LIGO instrument in 2016. We also have indications of what cosmologists call cosmological B-Modes, which are the fingerprints of primordial gravitational waves interacting with the cosmic background radiation during the Inflationary Era.

We may not ever be able to study the Planck Era conditions directly, when the universe was only 10^-43 seconds old, but then again, knowing what the universe was doing from 10^-35 seconds after the Big Bang all the way up to the present time is certainly an impressive human intellectual and technological success!

 

Check back here on May 3 for the next blog!

Glueballs anyone?

Today, physicists are both excited and disturbed by how well the Standard Model is behaving, even at the enormous energies provided by the CERN Large Hadron Collider. There seems to be no sign of the expected supersymmetry property that would show the way to the next-generation version of the Standard Model: Call it V2.0. But there is another ‘back door’ way to uncover its deficiencies. You see, even the tests for how the Standard Model itself works are incomplete, even after the dramatic 2012 discovery of the Higgs Boson! To see how this backdoor test works, we need a bit of history.

Glueballs found in a quark-soup (Credit: Alex Dzierba, Curtis Meyer and Eric Swanson)

Over fifty years ago in 1964, physicists Murray Gell-Mann at Caltech and George Zweig at CERN came up with the idea of the quark as a response to the bewildering number of elementary particles that were being discovered at the huge “atom smasher” labs sprouting up all over the world. Basically, you only needed three kinds of elementary quarks, called “up,” “down” and “strange.” Combining these in threes, you get the heavy particles called baryons, such as the proton and neutron. Combining them in twos, with one quark and one anti-quark, you get the medium-weight particles called the mesons. In my previous blog, I discussed how things are going with testing the quark model and identifying all of the ‘missing’ particles that this model predicts.

In addition to quarks, the Standard Model details how the strong nuclear force is created to hold these quarks together inside the particles of matter we actually see, such as protons and neutrons. To do this, quarks must exchange force-carrying particles called gluons, which ‘glue’ the quarks together into groups of twos and threes. Gluons are second-cousins to the photons that transmit the electromagnetic force, but they have several important differences. Like photons, they carry no mass; however, unlike photons, which carry no electric charge, gluons carry what physicists call color-charge. Quarks can be either ‘red’, ‘blue’ or ‘green’, as well as anti-red, anti-green and anti-blue. That means that gluons have to carry compound color charges like (red, anti-blue), etc. Because gluons carry color charge, they can interact with each other very strongly through their complicated color-charges, unlike photons, which do not interact with each other. The end result is that, under some circumstances, you can have a ball of gluons that resembles a temporarily-stable particle before it dissipates. Physicists call these glueballs…of course!

Searching for Glueballs.

Glueballs are one of the most novel, and key predictions of the Standard Model, so not surprisingly there has been a decades-long search for these waifs among the trillions of other particles that are also routinely created in modern particle accelerator labs around the world.

Example of glueball decay into pi mesons.

Glueballs are not expected to live very long, and because they carry no electrical charge they are perfectly neutral particles. When these pseudo-particles decay, they do so in a spray of other particles called mesons. Because glueballs consist of one gluon and one anti-gluon, they have no net color charge. From various theoretical considerations, there are 15 basic glueball types that differ in what physicists term parity and angular momentum. Other electrically-neutral bosons of the same general type include gravitons and Higgs bosons, but these are easily distinguished from glueball states by their masses (glueballs should be between 1 and 5 GeV) and other fundamental properties. The most promising glueball candidates are as follows:

Scalar candidates: f0(600), f0(980), f0(1370), f0(1500), f0(1710), f0(1790)
Pseudoscalar candidates: η(1405), X(1835), X(2120), X(2370), X(2500)
Tensor candidates: fJ(2220), f2(2340)

By 2015, the f0(1500) and f0(1710) had become the prime glueball candidates. The properties of glueball states can be calculated from the Standard Model, although this is a complex undertaking because glueballs interact very strongly with nearby quarks and other free gluons, and all these factors have to be considered.

On October 15, 2015 there was a much-ballyhooed announcement that physicists had at last discovered the glueball particle. The articles cited Professor Anton Rebhan and Frederic Brünner from TU Wien (Vienna) as having completed these calculations, concluding that the f0(1710) was the best candidate consistent with experimental measurements and its predicted mass. More rigorous experimental work to define the properties and exact decays of this particle is, even now, going on at the CERN Large Hadron Collider and elsewhere.

So, between the missing particles I described in my previous blog, and glueballs, there are many things about the Standard Model that still need to be tested. But even with these predictions confirmed, physicists are still not ‘happy campers’ when it comes to this grand theory of matter and forces. Beyond these missing particles, we still need to have a deeper understanding of why some things are the way they are, and not something different.

Check back here on Wednesday, April 5 for my next topic!

Crowdsourcing Gravity

The proliferation of smartphones with internal sensors has led to some interesting opportunities to make large-scale measurements of a variety of physical phenomena.

The iOS app ‘Gravity Meter’ and its android equivalent have been used to make measurements of the local surface acceleration, which is nominally 9.8 meters/sec2. The apps typically report the local acceleration to 0.01 (iOS) or even 0.001 (android) meters/sec2 accuracy, which leads to two interesting questions: 1) How reliable are these measurements at the displayed decimal limit, and 2) Can smartphones be used to measure expected departures from the nominal surface acceleration due to Earth’s rotation? Here is a map showing the magnitude of this (centrifugal) rotation effect, provided by The Physics Forum.

As Earth rotates, any object on its surface will feel a centrifugal force directed outward from the center of Earth, generally in the direction of the local zenith. This causes Earth to be slightly bulged-out at the equator compared to the poles, which you can see from the difference between its equatorial radius of 6,378.14 km and its polar radius of 6,356.75 km: a polar flattening difference of 21.4 kilometers. This centrifugal force also reduces the local surface acceleration slightly at the equator compared to the poles. At the equator, one would measure a value for ‘g’ of about 9.78 m/sec2, while at the poles it is about 9.83 m/sec2. Once again, and this is important to avoid any misconceptions, the total acceleration, defined as gravity plus the centrifugal term, is reduced, but gravity itself is not changed because, from Newton’s Law of Universal Gravitation, gravity is due to mass, not rotation.
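The size of the centrifugal term alone can be checked in a couple of lines. Note that it supplies only part of the roughly 0.05 m/sec2 equator-to-pole difference; the oblate shape of Earth accounts for the rest:

```python
import math

# Outward centrifugal acceleration at the equator: a = omega^2 * R.
OMEGA = 2.0 * math.pi / 86164.1   # Earth's sidereal rotation rate, rad/s
R_EQ = 6.37814e6                  # equatorial radius, meters

a_cf = OMEGA ** 2 * R_EQ
print(f"centrifugal term at equator: {a_cf:.3f} m/s^2")  # ~0.034
```

Either way, the total latitude-dependent signal a smartphone would need to resolve is only a few hundredths of m/sec2.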

Assuming that the smartphone accelerometers are sensitive enough, they may be able to detect this equator-to-pole difference by comparing the surface acceleration measurements from observers at different latitudes.

 

Experiment 1 – How reliable are ‘gravity’ measurements at the same location?

To check this, I looked at the data from several participating classrooms at different latitudes, and selected the more numerous iOS measurements with the ‘Gravity Meter’ app. These data were kindly provided by Ms. Melissa Montoya’s class in Hawaii (+19.9N), George Griffith’s class in Arapahoe, Nebraska (+40.3N), Ms. Sue Lamdin’s class in Brunswick, Maine (+43.9N), and Elizabeth Bianchi’s class in Waldoboro, Maine (+44.1N).

All four classrooms’ measurements, irrespective of latitude (19.9N, 40.3N, 43.9N or 44.1N), showed distinct ‘peaks’, but also displayed long and complicated ‘tails’, making these distributions non-Gaussian rather than what might be expected for random errors. This suggests that under classroom conditions there may be systematic effects introduced by the specific ways in which students make the measurements, adding complicated and apparently non-random, student-dependent corrections to the data.

A further study using the iPad data from Elizabeth Bianchi’s class revealed that, at least for iPads using the Gravity Sensor app, there was a definite correlation between the measured value and the time at which the measurement was made during a 1.5-hour period. This resembles a heating effect, suggesting that the longer you leave the technology on before making the measurement, the larger will be the measured value. I will look into this at a later time.

The non-Gaussian behavior in the current data does not make it possible to assign a normal average and standard-deviation to the data.

 

Experiment 2 – Can the rotation of Earth be detected?

Although there is the suggestion that in the 4-classroom data we could see a nominal centrifugal effect of about the correct order-of-magnitude, we were able to get a large sample of individual observers spanning a wide latitude range, also using the iOS platform and the same ‘Gravity Meter’ app. Including the median values from the four classrooms in Experiment 1, we had a total of 41 participants: Elizabeth Abrahams, Jennifer Arsenau, Dorene Brisendine, Allen Clermont, Hillarie Davis, Thom Denholm, Heather Doyle, Steve Dryer, Diedra Falkner, Mickie Flores, Dennis Gallagher, Robert Gallagher, Rachael Gerhard, Robert Herrick, Harry Keller, Samuel Kemos, Anna Leci, Alexia Silva Mascarenhas, Alfredo Medina, Heather McHale, Patrick Morton, Stacia Odenwald, John-Paul Rattner, Pat Reiff, Ghanjah Skanby, Staley Tracy, Ravensara Travillian, and Darlene Woodman.

The scatter plot of these individual measurements is shown here:

The red squares are the individual measurements. The blue circles are the android phone values. The red dashed line shows the linear regression line for only the iOS data points assuming each point is equally-weighted. The solid line is the predicted change in the local acceleration with latitude according to the model:

G = 9.806 – 0.5*(9.832 – 9.780)*cos(2*latitude)    m/sec2

where 9.806 m/sec2 is the mean of the polar acceleration (9.832 m/sec2) and the equatorial acceleration (9.780 m/sec2). Note: no correction for lunar and solar tidal effects has been made, since these are entirely undetectable with this technology.
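The model is simple enough to code directly; a minimal sketch of the fit used for the predicted line (the function name is mine):

```python
import math

def model_g(latitude_deg):
    """Predicted total surface acceleration (m/sec2) at a given latitude,
    using the same fit as in the text: 9.832 at the poles, 9.780 at the
    equator, and their mean of 9.806 at 45 degrees."""
    lat = math.radians(latitude_deg)
    return 9.806 - 0.5 * (9.832 - 9.780) * math.cos(2.0 * lat)

print(model_g(0.0), model_g(45.0), model_g(90.0))  # ≈ 9.780, 9.806, 9.832
```

The cos(2*latitude) form guarantees the curve passes through the equatorial and polar values with zero slope at both extremes.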

Each individual point has a nominal variation of +/-0.01 m/sec2 based on the minimum and maximum value recorded during a fixed interval of time. It is noteworthy that this measurement RMS is significantly smaller than the classroom variance seen in Experiment 1 due to the apparently non-Gaussian shape of the classroom sampling. When we partition the iOS smartphone data into 10-degree latitude bins and take the median value in each bin we get the following plot, which is a bit cleaner:

The solid blue line is the predicted acceleration. The dashed black line is the linear regression for the equally-weighted individual measurements. The median values of the classroom points are added to show their distribution. It is of interest that the linear regression line is parallel to, and nearly coincident with, the predicted line, which again suggests that Earth’s rotation effect may have been detected in this median-sampled data set provided by a total of 37 individuals.

The classroom points clustering at ca +44N represent a total of 36 measures representing the plotted median values, which is statistically significant. Taken at face value, the classroom data would, alone, support the hypothesis that there was a detection of the rotation effect, though they are consistently 0.005 m/sec2 below the predicted value at the mid-latitudes. The intrinsic variation of the data, represented by the consistent +/-0.01 m/sec2 high-vs-low range of all of the individual samples, suggests that this is probably a reasonable measure of the instrumental accuracy of the smartphones. Error bars (thin vertical black lines) have been added to the plotted median points to indicate this accuracy.

The bottom line seems to be that it may be marginally possible to detect the Earth rotation effect, but precise measurements at the 0.01 m/sec2 level are required against what appears to be a significant non-Gaussian measurement background. Once again, some of the variation seen at each latitude may be due to how warm the smartphones were at the time of the measurement. The android and iOS measurements do seem to be discrepant, with the android measurements showing a larger measurement variation.

Check back here on Wednesday, March 29 for the next topic!

Fifty Years of Quarks!

Today, physicists are both excited and disturbed by how well the Standard Model is behaving, even at the enormous energies provided by the CERN Large Hadron Collider. There seems to be no sign of the expected supersymmetry property that would show the way to the next-generation version of the Standard Model: Call it V2.0. But there is another ‘back door’ way to uncover its deficiencies. You see, even the tests for how the Standard Model itself works are incomplete, even after the dramatic 2012 discovery of the Higgs Boson! To see how this backdoor test works, we need a bit of history.

Over fifty years ago in 1964, physicists Murray Gell-Mann at Caltech and George Zweig at CERN came up with the idea of the quark as a response to the bewildering number of elementary particles that were being discovered at the huge “atom smasher” labs sprouting up all over the world. Basically, you only needed three kinds of elementary quarks, called “up,” “down” and “strange.” Combining these in threes, you get the heavy particles called baryons, such as the proton and neutron. Combining them in twos, with one quark and one anti-quark, you get the medium-weight particles called the mesons.

This early idea was extended to include three more types of quarks, dubbed “charmed,” “top” and “bottom” (or on the other side of the pond, “charmed,” “truth” and “beauty”) as they were discovered in the 1970s. These six quarks form three generations — (U, D), (S, C), (T, B) — in the Standard Model.

Particle tracks at CERN/CMS experiment (credit: CERN/CMS)

Early Predictions

At first the quark model easily accounted for the then-known particles. A proton would consist of two up quarks and one down quark (U, U, D), and a neutron would be (D, D, U). A pi-plus meson would be (U, anti-D), a pi-minus meson would be (D, anti-U), and so on. It’s a bit confusing to combine quarks and anti-quarks in all the possible combinations. It’s kind of like working out all the ways that a coin flipped three times gives you a pattern like (T,T,H) or (H,T,H), but when you pair the U, D and S quarks with their anti-quarks, you get the entire family of the nine known mesons, which forms one geometric pattern in the figure below, called the Meson Nonet.
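The counting behind the nonet is simple combinatorics. The sketch below counts slots in the pattern rather than physical particles (the real neutral mesons are quantum mixtures of these combinations):

```python
from itertools import product

# Pair each of the three light quarks with each of the three light
# anti-quarks: 3 x 3 = 9 slots, matching the Meson Nonet.
quarks = ["u", "d", "s"]
anti_quarks = ["anti-" + q for q in quarks]

pairs = [(q, aq) for q, aq in product(quarks, anti_quarks)]
print(len(pairs))  # 9
```

The same bookkeeping with three-quark combinations generates the baryon octet and decuplet patterns shown next.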

The basic Meson Nonet (credit: Wikimedia Commons)

If you take the three quarks U, D and S and combine them in all possible unique threes, you get two patterns of particles shown below, called the Baryon Octet (left) and the Baryon Decuplet (right).

Normal baryons made from three-quark triplets

The problem was that there was a single missing particle in the simple 3-quark baryon pattern. The Omega-minus (S,S,S) at the apex of the Baryon Decuplet was nowhere to be found. This slot was empty until Brookhaven National Laboratory discovered it in early 1964. It was the first indication that the quark model was on the right track and could predict a new particle that no one had ever seen before. Once the other three quarks (C, T and B) were discovered in the 1970s, it was clear that there were many more slots to fill in the geometric patterns that emerged from a six-quark system.

The first particles predicted, and then discovered, in these patterns were the J/Psi “charmonium” meson (C, anti-C) in 1974, and the Upsilon “bottomonium” meson (B, anti-B) in 1977. Apparently there are no possible top mesons (T, anti-T) because the top quark decays so quickly it is gone before it can bind together with an anti-top quark to make even the lightest stable toponium meson!

The number of possible particles that result from simply combining the six quarks and six anti-quarks in twos (mesons) is exactly 39. Of these, only 26 had been detected as of 2017. The as-yet-undetected ones have masses between 4 and 11 times that of a single proton!

For the still-heavier three-quark baryons, the quark patterns predict 75 baryons containing combinations of all six quarks. Of these, the proton and neutron are the least massive! But there are 31 of these predicted baryons that have not been detected yet. These include the lightest missing particle, the double charmed Xi (U,C,C) and the bottom Sigma (U, D, B), and the most massive particles, called the charmed double-bottom Omega (C, B, B) and the triple-bottom Omega (B,B,B). In 2014, the LHCb experiment at CERN announced the discovery of two of these missing particles, called the bottom Xi baryons (B, S, D), with masses near 5.9 GeV.
To make life even more interesting for the Standard Model, other combinations of more than three quarks are also possible.

Exotic Baryons
A pentaquark baryon particle can contain four quarks and one anti-quark. The first of these, called the Theta-plus baryon, was predicted in 1997 and consists of (U, U, D, D, anti-S). This kind of quark package seems to be pretty rare and hard to create. There have been several claims of a detection of such a particle near 1.5 GeV, but experimental verification remains controversial. Two other possibilities, called the Phi double-minus (D, D, S, S, anti-U) and the charmed neutral Theta (U, U, D, D, anti-C), have been searched for but not found.

Comparing normal and exotic baryons (credit: Quantum Diaries)

There are also tetraquark mesons, which consist of four quarks. The Z-meson (C, D, anti-C, anti-U) was discovered by the Japanese Belle experiment in 2007 and confirmed in 2014 by the Large Hadron Collider at 4.43 GeV, hence the proper name Z(4430). The Y(4140) was discovered at Fermilab in 2009 and confirmed at the LHC in 2012, and has a mass 4.4 times the proton’s. It could be a combination of charmed quarks and charmed anti-quarks (C, anti-C, C, anti-C). The X(3872) particle was also discovered by the Belle experiment and confirmed by other investigators, and could be yet another tetraquark combination consisting of a pair of quarks and anti-quarks (q, anti-q, q, anti-q).

So the Standard Model, and the six-quark model it contains, makes specific predictions for new baryon and meson states to be discovered. All told, there are 44 ordinary baryons and mesons that remain to be discovered! As for the ‘exotics,’ they open up a whole other universe of possibilities. In theory, heptaquarks (5 quarks, 2 antiquarks), nonaquarks (6 quarks, 3 antiquarks), and so on could also exist.

At the current pace of a few new particles per year, we may finally wrap up all the predictions of the quark model in the next few decades. Then we really get to wonder what lies beyond the Standard Model once all the predicted particle slots have been filled. It is actually a win-win situation: either we completely verify the quark model, which is very cool, or we discover anomalous particles that the quark model can’t explain, which may show us a ‘backdoor’ route to Standard Model v2.0 that the current supersymmetry searches do not yet seem to be providing.

Check back here on Wednesday, March 22 for the next topic!

The Mystery of Gravity

In grade school we learned that gravity is an always-attractive force that acts between particles of matter. Later on, we learned that it has an infinite range through space, weakens as the inverse-square of the distance between bodies, and travels at exactly the speed of light.
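Those grade-school facts already pin down Newton's force law, F = G·m1·m2/r². Here is a minimal sketch with illustrative numbers:

```python
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravity_force(m1_kg, m2_kg, r_m):
    """Always-attractive Newtonian force between two point masses."""
    return G * m1_kg * m2_kg / r_m**2

M_earth = 5.97e24  # kg
R_earth = 6.371e6  # m

# A ~70 kg person at Earth's surface feels roughly their familiar weight (~690 N)
f_surface = gravity_force(M_earth, 70.0, R_earth)

# Doubling the distance from Earth's center quarters the force --
# the inverse-square law in action.
f_doubled = gravity_force(M_earth, 70.0, 2 * R_earth)
print(f_surface / f_doubled)  # 4.0
```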

But wait… there’s more!


It doesn’t take a rocket scientist to remind you that humans have always known about gravity! Its first mathematical description as a ‘universal’ force came from Sir Isaac Newton in his Principia of 1687. Newton’s description remained essentially unchanged until Albert Einstein published his General Theory of Relativity in 1915. A century later, physicists such as Edward Witten, Stephen Hawking, Brian Greene and Lee Smolin are finding ways to improve our description of ‘GR’ to accommodate the strange rules of quantum mechanics. Ironically, although gravity is produced by matter, General Relativity does not really describe matter in any detail – certainly not with the detail of the modern quantum theory of atomic structure. In the mathematics, all of the details of a planet or a star are hidden in a single variable, m, representing its total mass.


The most amazing thing about gravity is that it is a force like no other known in Nature. It is a property of the curvature of space-time and of how particles react to this distorted space. Even more bizarrely, space and time are described by the mathematics of GR as qualities of the gravitational field of the cosmos that have no independent existence. Gravity does not exist like the frosting on a cake, embedded in some larger arena of space and time. Instead, the ‘frosting’ is everything, and matter is embedded in it, intimately and indivisibly connected to it. If you could turn off gravity, the mathematics predicts that space and time would also vanish! You can turn off electromagnetic forces by neutralizing the charges on material particles, but you cannot neutralize gravity without eliminating spacetime itself. This geometric relationship to space and time is the single most challenging aspect of gravity, and it has prevented generations of physicists from describing gravity mathematically the way we do the other three forces in the Standard Model.

Einstein’s General Relativity, published in 1915, is our most detailed mathematical theory for how gravity works. With it, astronomers and physicists have explored the origin and evolution of the universe, its future destiny, and the mysterious landscape of black holes and neutron stars. General Relativity has survived many different tests, and it has made many predictions that have been confirmed. So far, after a century of detailed study, no error has yet been discovered in Einstein’s original, simple theory.

Physicists have now tested two of its most fundamental and exotic predictions: first, that gravity waves exist and behave as the theory predicts; second, that a phenomenon called ‘frame-dragging’ exists around rotating massive objects.

Theoretically, gravity waves must exist in order for Einstein’s theory to be correct. They are distortions in the curvature of spacetime caused by accelerating matter, just as electromagnetic waves are distortions in the electromagnetic field of a charged particle produced by its acceleration. Gravity waves carry energy and travel at light-speed. At first they were detected indirectly: by 2004, astronomical systems such as the Hulse-Taylor binary pulsar had been found to be losing energy through gravity-wave emission at exactly the predicted rates. Then in 2016, the twin LIGO gravity-wave detectors recorded the unmistakable, nearly simultaneous pulses of geometric distortion created by colliding black holes billions of light years away.

By 1997, astronomers had also detected the ‘frame-dragging’ phenomenon in X-ray studies of distant black holes. As a black hole (or any other body) rotates, it actually ‘drags’ space around with it. This means that orbits around a rotating body are themselves slowly dragged along, something entirely absent from Newton’s theory of gravity. The Gravity Probe-B satellite orbiting Earth also confirmed this exotic spacetime effect in 2011, at precisely the magnitude the theory predicts for the rotating Earth.

Gravity also doesn’t care whether you have matter or anti-matter; both should behave identically as they fall and move under gravity’s influence. This was put to the test at the ALPHA antimatter experiment at CERN, and in 2013 researchers placed the first limits on how matter and antimatter ‘fall’ in Earth’s gravity. Future experiments will place even more stringent limits on just how gravitationally similar matter and antimatter are. Well, at least we know that antimatter doesn’t ‘fall up’!

There is only one possible problem with our understanding of gravity known at this time.

Applying general relativity, and even Newton’s Universal Gravitation, to large systems like galaxies and the universe leads to the discovery of a new ingredient called Dark Matter. There do not seem to be any verified elementary particles that account for this gravitating substance. Lacking a particle, some physicists have proposed modifying Newtonian gravity and general relativity themselves to account for this phenomenon without introducing a new form of matter, but none of the proposed theories leaves the other verified predictions of general relativity intact. So is Dark Matter a figment of an incomplete theory of gravity, or is it a heretofore undiscovered fundamental particle of nature? It took 50 years for physicists to discover the linchpin particle called the Higgs boson. This is definitely a story we will hear more about in the decades to come!
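That galactic-scale puzzle follows directly from Newton's formula: if only the visible mass were present, circular-orbit speeds far from a galaxy's center should fall off as 1/√r, yet observed rotation curves stay roughly flat. A small sketch (the mass and radii are illustrative, roughly Milky-Way-sized numbers):

```python
import math

G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def keplerian_speed(enclosed_mass_kg, r_m):
    """Circular-orbit speed when all mass lies inside radius r: v = sqrt(GM/r)."""
    return math.sqrt(G * enclosed_mass_kg / r_m)

M_visible = 2e41  # ~10^11 solar masses of visible matter (illustrative)
kpc = 3.086e19    # meters per kiloparsec

# With only the visible mass, speeds should drop as 1/sqrt(r):
for r_kpc in (10, 20, 40):
    v_kms = keplerian_speed(M_visible, r_kpc * kpc) / 1000.0
    print(f"{r_kpc} kpc: {v_kms:.0f} km/s")  # ~208, ~147, ~104 km/s

# Observed rotation curves instead stay near ~200 km/s far out --
# that gap between prediction and observation is the Dark Matter problem.
```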

There is much that we now know about gravity, yet as we strive to unify it with the other elementary forces and particles in nature, it remains an enigma. But then, even the briefest glance across the landscape of the quantum world fills you with a sense of awe and wonderment at the improbability of it all. At its root, our physical world is filled with improbable and logic-twisting phenomena, and it is simply amazing that they have lent themselves to human logic to the extent that they have!


Return here on Monday, March 13 for my next blog!