
The End of Physics?

For 45 years I have followed the great pageant of ideas in theoretical physics. From high school through retirement, although my career and expertise are in astronomy and astrophysics, my passion has always been in following the glorious ideas that have swirled around in theoretical physics. I watched as the quark theory of the 1960s gave way to Grand Unification Theory in the 1970s, and then to string theory and inflationary cosmology in the 1980s. I was thrilled by how these ideas could be applied to understanding the earliest moments in the Big Bang and perhaps let me catch at least a mathematical glimpse of how the universe, time and space came to be literally out of Nothing; explanations not forthcoming from within Einstein's theory of general relativity.

Even as recently as 2012 this story continued to captivate me as I grappled with what might be the premature end of my life at the hands of the non-Hodgkin's lymphoma diagnosed in 2008. And still I read the journal articles, watching as new ideas emerged, built upon the theoretical successes of the 1990s and beyond. But then a strange thing happened.

In the 1980s, the US embarked on the construction in Texas of the Superconducting Super Collider, but that project was scrapped and de-funded by Congress after ¼ of it had been built. Attention then turned to the European Large Hadron Collider project, which after 10 years finally achieved its first collisions in 2009. The accelerator's energy has steadily been increased to 13 TeV, and it now records some 600 million collisions per second, generating 30 petabytes of data per year. Among these collisions were expected to be the traces of 'new physics', and physicists were not disappointed. In 2012 the elusive Higgs boson was detected, some 50 years after it was predicted to exist. It was a major discovery that signaled we were definitely on the right track in verifying the Standard Model. But since then, following many more years of searching among the debris of trillions of collisions, all we continue to see are the successful predictions of the Standard Model confirmed again and again, with only a few caveats.

Typically, physicists push experiments to ever-higher degrees of accuracy to uncover where our current theoretical predictions are becoming threadbare, revealing signs of new phenomena or particles, hence the term 'new physics'. Theoreticians then use this anomalous data to extend known ideas into a larger arena, and always select new ideas that are the simplest-possible extensions of the older ones. But sometimes you have to incorporate entirely new ideas. This happened when Einstein developed relativity, which was a 'beautiful' extension of the older and simpler Newtonian physics. Ultimately it is the data that leads the way, and when data is not available, we get to argue over whose theory is more mathematically beautiful or elegant.

Today we have one such elegant contender for extending the Standard Model: a new symmetry in Nature called supersymmetry. Discovered mathematically in the mid-1970s, it showed how the particles in the Standard Model that account for matter (quarks, electrons) are related to the force-carrying particles (e.g. photons, gluons), and it also offered an integrated role for gravity as a new kind of force-particle. The hitch was that to make the mathematics work, so that it did not answer 'infinity' every time you did a calculation, you had to add a whole new family of super-heavy particles to the list of elementary particles. Many versions of 'Minimally Supersymmetric Standard Models' or MSSMs were possible, but most agreed that starting at a mass of about 1000 times that of a proton (1 TeV), you would start to see the smallest of these particles as 'low-hanging fruit', like the tip of an upside-down pyramid.

Over the last seven years of LHC operation, using a variety of techniques and sophisticated detectors, absolutely no sign of supersymmetry has been found. In April 2017 at the Moriond Conference, physicists with the ATLAS Experiment at CERN presented their first results examining the combined 2015 – 2016 LHC data. This new dataset was almost three times larger than what was available at the last major particle physics conference, held in 2016. Searches for the supersymmetric partners to quarks and gluons (called squarks and gluinos) turned up nothing below a mass of 2 TeV. There was no evidence for exotic supersymmetric matter at masses below 6 TeV, and no heavy partner to the W-boson was found below 5 TeV.

Perhaps the worst result for me as an astronomer concerns dark matter. The MSSM, the simplest extension of the Standard Model with supersymmetry, predicted the existence of several very low mass particles called neutralinos. When added to cosmological models, neutralinos seem to account for the existence of dark matter, which makes up 27% of the gravitating stuff in the universe and controls the movement of ordinary matter as it forms galaxies and stars. The MSSM gives astronomers a tidy way to explain dark matter and close the book on what it is likely to be. Unfortunately the LHC has found no evidence for light-weight neutralinos at their expected MSSM mass ranges. (see for example https://arxiv.org/abs/1608.00872 or https://arxiv.org/abs/1605.04608)

Of course the searches will continue as the LHC remains our best tool for exploring these energies well into the 2030s. But if past is prologue, the news isn't very promising. Typically the greatest discoveries of any new technology are made within its first decade of operation. The LHC is well on its way to ending its first decade with 'only' the Higgs boson as a prize. The LHC was fully expected to have given us hard evidence by now for literally dozens of new super-heavy particles, and a definitive candidate for dark matter to clean up the cosmological inventory.

So this is my reason for feeling sad. If the Higgs boson is a guide, it may take us several more decades and a whole new and expensive LHC replacement to find something significant to affirm our current 'beautiful' ideas about the physical nature of the universe. Supersymmetry may still play a role in this, but it will be hard to attract a new generation of young physicists to its search if Nature continues to withhold so much as a hint that we are on the right theoretical track.

If supersymmetry falls, string theory, which hinges on supersymmetry, may also have to be put aside or re-thought. Nature seems to favor simple theories over complex ones, so are the current string theories with supersymmetry really the simplest ones?

Thousands of physicists have toiled over these ideas since the 1970s. In the past, such a herculean effort usually won out, with Nature rewarding the tedious intellectual work and some vestiges of the effort being salvaged for the new theory. I find it hard to believe that will not again be the case this time, but as I prepare for retirement I am realizing that I may not be around to see this final vindication.

So what should I make of my 45-year intellectual obsession to keep up with this research? Given what I know today would I have done things differently? Would I have taught fewer classes on this subject, or written fewer articles for popular science magazines?

Absolutely not!

I have thoroughly enjoyed the thrill of the new ideas about matter, space, time and dimension. The Multiverse idea offered me a new way of experiencing my place in 'reality'. I could never have invented these amazing ideas on my own, and they have entertained me for most of my professional life. Even today, Nature seems to have handed us something new: gravitational waves have been detected after a 60-year search; detailed studies of the cosmic 'fireball' radiation are giving us hints to the earliest moments in the Big Bang; and of course we have discovered THOUSANDS of new planets.

Living in this new world seems almost as intellectually stimulating, and it now offers me more immediate returns on my investment in the years remaining.

Eclipse!!!

In nine weeks, on August 21, 2017, residents of the continental United States will be treated to a total solar eclipse as they enjoy their noon-time repast. If you are standing along the line of totality, which stretches from Oregon's Pacific coast to South Carolina's Atlantic surf, and if you have a good telescope, you may even be treated to a view such as the one below, photographed by Luc Viatour (www.Lucnix.be) during the August 1999 total solar eclipse over France!

In this blog, I am going to mention some of the work I am doing to help the public enjoy this once-in-a-lifetime event through my various projects at NASA. NASA, as part of its informal education programs, has adopted this event as a major national PR effort to focus its many assets on the ground and in space. In early 2016, my group at the NASA Goddard Space Flight Center, called the Heliophysics Education Consortium (HEC), was asked to lead the charge in organizing this event. Led by Dr. Alex Young, the HEC team has created the Official Eclipse website, and I have had the great pleasure and honor to have written many of its resources. So, think of this blog as an introduction to the What, Where, When and How of this eclipse, and a personal tour by me of how you can enjoy this rare and magnificent event!

First, you probably have a lot of questions about solar eclipses, so I wrote a FAQ Page that covers most of the common ones. I also wrote an essay about the many Eclipse Misconceptions that people have had about solar eclipses. Some are quite bizarre, but seem to be passed on from generation to generation despite our deep scientific understanding of them!

Take a look at the picture at the top of this blog. During most total eclipses of the sun, you are guaranteed to see the brilliant solar corona. With a telescope or a good telephoto lens, you may even glimpse the reddish hue of the solar chromosphere following along the darkened limb of the moon. Here and there, you may even see a solar prominence also glowing in reddish light. These features are hard to see because they are rather small and with the naked eye you just don’t have the natural magnification to see them well.

I will be writing a number of essays that discuss particular aspects of the sun and post them at the Eclipse SCIENCE page. Topics will include the corona, prominences, chromosphere and the dramatic helmet streamers, so far as astronomers understand them today. It's all about solar magnetism and how this interacts with the 100,000 degree plasma in the solar atmosphere. The corona itself has a temperature of several million degrees! It is about a million times fainter than the disk of the sun, which is why you only get to see it during an eclipse. But astronomers can use instruments called coronagraphs to artificially eclipse the sun, allowing us to study coronal features at our leisure.

There are many things you can do while waiting for the eclipse to start, and a few non-invasive things you can do while the eclipse is in progress. I have tried to collect these ideas into two areas called Citizen Explorers and Citizen Scientists depending on how serious you want to be in exploring eclipses. If you have a mathematical mind, I have even created a dozen or so curious Math Challenges that help you look at many different aspects of the event.

Just about everyone is going to want to use their smartphones to photograph the eclipse, so in my article on Smartphone Photography I try to outline some of the things you can do, what kinds of technology you need, like inexpensive telephoto lenses, and what to expect. Below is a photo taken by a smartphone with no other technology from Longyearbyen, Svalbard in 2015. Your camera phone will give you a brilliant, over-exposed corona, but don't expect detailed resolution, because your lens is simply not good enough to show details like those in the photo above. If you use an inexpensive smartphone telephoto with 15x magnification or higher, you will need a camera tripod too. Also, to avoid camera damage, be careful not to point the telephoto at the sun while the brilliant photosphere is visible. (Credit: Stan Honda/AFP/Getty Images)
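If you are curious why the bare camera falls so short, here is a rough back-of-the-envelope sketch in Python. The focal length and pixel size are assumed, typical smartphone values, not the specs of any particular model:

```python
import math

# Rough estimate of how many pixels the Sun's disk spans on a phone sensor.
# Assumed, typical values -- actual numbers vary by phone model.
SUN_ANGULAR_DIAMETER_DEG = 0.53   # apparent size of the Sun in the sky
FOCAL_LENGTH_MM = 4.3             # typical smartphone main camera
PIXEL_PITCH_UM = 1.4              # typical sensor pixel size

def sun_size_pixels(focal_mm, pitch_um, magnification=1.0):
    """Diameter of the Sun's image on the sensor, in pixels."""
    image_mm = focal_mm * magnification * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))
    return image_mm * 1000.0 / pitch_um

print(f"Bare phone:    {sun_size_pixels(FOCAL_LENGTH_MM, PIXEL_PITCH_UM):.0f} pixels")
print(f"15x telephoto: {sun_size_pixels(FOCAL_LENGTH_MM, PIXEL_PITCH_UM, 15):.0f} pixels")
# Bare phone: ~28 pixels across -- far too small to show any detail, which is
# why a clip-on telephoto (plus a tripod to steady it) makes such a difference.
```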

Some people may be interested in uploading their images, and in my essay on Geotagging I describe two ways that you can upload your selfies so that others can enjoy your efforts. The first is to use GOOGLE Maps, but you can only upload your photo to preexisting 'tagged' locations in GOOGLE Maps (nearby businesses, monuments, parks, etc). This is unlike the discontinued Panoramio/GOOGLE Earth feature where you could upload your picture to any geographic location on the planet. The second way is to send your pictures to a NASA site that is collecting them.

For a rare event like this, you might also want to create a time capsule of your experience. I have suggested that you write a letter to yourself about who you are today, and what you think the future holds for you. On April 8, 2024 there will be another total solar eclipse across the continental United States, and you might set that date as the time when you next read your letter, at which point you will be 7 years older!

People have observed total solar eclipses for centuries, and in my Eclipse History area I have a number of essays that describe earlier observations in the United States. There is even an archive of newspaper articles since the early 1800s in which you can read first-hand accounts. Some music has also been written and performed with solar eclipses in mind! In my Music Page I list many of these pieces both in classical and pop music over the years. Who can forget Carly Simon's '…You flew your Lear Jet to Nova Scotia to see the total eclipse of the sun' about the March 7, 1970 eclipse!

One thing I find interesting is that since the 1500s there have been over a dozen eclipse paths that have crossed the one for August 21, 2017. I created several resources that describe these 'magical' crossing points and what history was going on in North America during these dates. I also created a set of math problems where you can calculate the latitude and longitude of these crossing points, if you are into math!

Where should you go to see this eclipse? In one essay I discussed how there are dozens of airline flights on that day from which passengers may be able to see the total solar eclipse out their windows if the Captain gets you to the right place and time along the route. For folks on the ground, you can check out NASA's Path of Totality maps and see if your travels on that day take you anywhere close. For the rest of us, no matter where you are in North America, you will at least see a partial solar eclipse where a small 'bite' is taken out of the solar disk. With a pair of welder's goggles or solar viewing glasses, you can look up anytime around noon and see the eclipse in progress.


NASA plans to collect stunning images from some of its available spacecraft, so on this page I collected information about which spacecraft will participate. In the days after the event, you will see many of these images in the news media as they are produced.

Myself, I will be joining the NASA Eclipse Team in Carbondale, Illinois for a huge public celebration of the total solar eclipse. This will be televised through numerous NASA feeds all along the path of the eclipse, so you can view it on your smartphone or laptop screen wherever you are. I will be involved with this telecast between 11:45 a.m. and 1:45 p.m. CDT as a panelist on several of the program segments. When I return from this event, or perhaps even during it, I will upload a follow-up blog about what it was like from 'Ground Zero'. I have never experienced a total solar eclipse before, so I am prepared to be stunned and amazed!

Stay tuned!

Exoplanets Galore!

***This Blog was updated in 2023 with new data. Visit Exoplanet Update. 

The year was 2001 – the first of the new millennium. It was also the year astronomers reached a historic milestone: they had catalogued more planets orbiting distant stars than the eight we knew about in our own solar system!

The first of these was discovered in 1988: a Jupiter-sized world orbiting the star Gamma Cephei A at a distance of 45 light years from our sun. Since then, well over 3,600 'exo'planets have been discovered using many different detection methods, both on the ground and in space. We are truly living in a very different age than we experienced only a few short decades ago. We no longer have to speculate whether other planetary systems exist across the universe. We now have the technology and a growing catalog of examples to prove our solar system is not alone in the universe!

One of these detection methods is the Transit Method. As a planet passes in front of its star as viewed from Earth, the brightness of the star dims by an amount equal to the ratio of the planet's circular area to the star's circular area. For example, a Jupiter-sized planet has a diameter of 143,000 km while a sun-like star has a diameter of 1.4 million km, so the ratio of their areas is 1/100. As this exoplanet transits the disk of its star as viewed from Earth, the brightness of the star will dim by 1%. By measuring the time between successive transit 'dips' you can deduce the orbit period of the planet. From the orbit period and the mass of the star you also obtain the distance between the star and the planet. Once you have this information, you can estimate the surface temperature of the planet and its average density (rocky planet or gas giant). In the example below, the orbit period is 4.887 days, and from the amount of dimming (0.7%) the ratio of their areas is 1/142, so the planet's diameter is 1/12 the diameter of its star.
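If you want to try the arithmetic yourself, here is a minimal Python sketch of that example, assuming a sun-like host star of one solar mass:

```python
import math

# Worked version of the transit example in the text, for a sun-like star.
depth = 0.007            # the 0.7% dip in brightness
period_days = 4.887      # time between successive transit dips
star_mass_solar = 1.0    # assumed sun-like host star

# The dip equals the ratio of the circular areas, so the radius ratio
# is its square root.
radius_ratio = math.sqrt(depth)

# Kepler's third law in solar units: a^3 = M * P^2 (AU, solar masses, years).
period_years = period_days / 365.25
a_au = (star_mass_solar * period_years**2) ** (1.0 / 3.0)

print(f"Planet/star radius ratio: 1/{1.0/radius_ratio:.0f}")   # ~1/12
print(f"Orbital distance: {a_au:.3f} AU")                      # ~0.056 AU
```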

To take advantage of the transit method on a large scale, NASA launched the Kepler Observatory in 2009. Using a super-sensitive digital camera, it took a picture of the same star field in the constellation Cygnus every minute and captured brightness information for over 150,000 stars similar to our sun. Over the course of its survey, now in its eighth year, it has discovered 4,496 exoplanet candidates of which 2,335 have been verified by independent study. These statistics now show that virtually all stars in the sky have at least one planet in orbit around them, and about 1 in 20 have Earth-sized worlds!

What was entirely unexpected was the diversity of exoplanet types and orbits about their parent stars. Our solar system offered as templates the small rocky Earth-sized worlds of the inner solar system, the two gas giants (Jupiter and Saturn) and the two ice giants (Uranus and Neptune), but among the thousands of worlds discovered so far, a vaster array of possibilities has been found.

There are super-Earths over three times as large as our world apparently fully covered by deep oceans (Kepler 22b). There are hot-Jupiters that orbit their stars in a matter of hours and swelter at temperatures of thousands of degrees (51 Pegasi b). Some planets have 'water cycles' in which silicate rock evaporates from their surfaces, condenses in clouds, and then rains droplets of lava back to the surface (COROT-7b, Alpha Centauri Bb, Kepler 10b). The cloud-shrouded super-Earth GJ 1214b is so hot that its clouds could be made of zinc sulfide and potassium chloride! One exoplanet is so close to its star that it is literally evaporating before our eyes, leaving a huge comet-like tail of gas in its orbital wake (HD 209458b).

The thousands of exoplanets detected by the transit method are especially intriguing. From Earth, as the exoplanet passes in front of its star, some of the starlight passes through the exoplanet's atmosphere. Various gases in the atmosphere absorb selected wavelengths of this light and leave their unique fingerprints behind for distant astronomers using spectroscopes to study. So far, among the dozen or so worlds examined, atmospheres rich in water vapor, carbon dioxide, methane, sulfur and other common molecules have already been found. This figure shows the spectrum of the exoplanet WASP-19b and signs of methane (CH4) and hydrogen cyanide (HCN) – a lethal mixture for humans to breathe! Future ground and space-based observatories will be able to detect the molecules in thousands of exoplanet atmospheres, but this is no idle scientific pursuit. Living systems on a planetary scale sculpt their atmospheres by creating free molecular oxygen, so if we ever detect an exoplanet atmosphere rich in oxygen, it will be a major discovery of life in the cosmos beyond our own Earth.
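To get a feel for just how faint these atmospheric fingerprints are, here is a minimal estimate of the extra dimming contributed by one 'scale height' of atmosphere. The planet and star values are assumed, generic hot-Jupiter numbers, not measurements of WASP-19b:

```python
# Size of the atmospheric signal during a transit: the thin annulus of gas
# silhouetted against the star. All input values are illustrative assumptions.
K_B = 1.381e-23     # Boltzmann constant, J/K
M_H = 1.673e-27     # mass of a hydrogen atom, kg

T = 1300.0          # atmospheric temperature, K (assumed)
MU = 2.3            # mean molecular weight, hydrogen-rich atmosphere (assumed)
G = 10.0            # surface gravity, m/s^2 (assumed)
R_PLANET = 7.15e7   # one Jupiter radius, m
R_STAR = 6.96e8     # one solar radius, m

# Atmospheric scale height: the altitude over which pressure drops by 1/e.
H = K_B * T / (MU * M_H * G)

# Extra transit depth from one scale height of absorbing gas.
signal = 2.0 * R_PLANET * H / R_STAR**2

print(f"Scale height: {H/1000.0:.0f} km")
print(f"Signal per scale height: {signal*1e6:.0f} parts per million")
# ~100-200 ppm -- a tiny effect, which is why only a dozen or so exoplanet
# atmospheres had been examined at the time of writing.
```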

Another aspect of the search for Earth-like worlds is to find exoplanets about as large as Earth that orbit their stars in what is called the Habitable Zone (HZ). In this range of orbital distances from the exoplanet’s star, the surface temperatures would allow liquid water to exist. In our solar system, this zone is located between the orbits of Venus and Mars, and Earth is smack in the middle of it. Currently, the various surveys have identified 13 exoplanets that are not only Earth-sized but are within their star’s HZ. The closest of these is GJ 273b at a distance of 12 light years! But there are complicating factors to finding an exact twin to our Earth upon which we might also hope to discover a living biosphere.
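For a feel of where the HZ sits for different stars, here is a crude sketch that scales our solar system's HZ by the square root of the star's luminosity. The boundary coefficients and the red-dwarf luminosity are assumed, illustrative values; published HZ models differ in detail:

```python
import math

# Crude habitable-zone estimate: HZ distance scales as sqrt(luminosity)
# because the starlight an orbiting planet receives falls off as 1/d^2.
def habitable_zone(luminosity_solar):
    inner = 0.95 * math.sqrt(luminosity_solar)  # AU, roughly the Venus-ward edge (assumed)
    outer = 1.40 * math.sqrt(luminosity_solar)  # AU, roughly the Mars-ward edge (assumed)
    return inner, outer

for name, lum in [("Sun-like star (L = 1.0)", 1.0),
                  ("Red dwarf (assumed L = 0.009)", 0.009)]:
    inner, outer = habitable_zone(lum)
    print(f"{name}: HZ from {inner:.2f} to {outer:.2f} AU")
# A red dwarf's HZ hugs the star at less than Mercury's distance, which is
# why HZ planets around such stars have very short orbital periods.
```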

If a planet in the HZ has too much carbon dioxide – a potent greenhouse gas – its temperature would be more Venus-like and it would be a desert world or worse. If the exoplanet had significant traces of common gases such as hydrogen cyanide or methane, it would be unbreathable for humans though some extremophile bacteria on Earth thrive in such environments.

Another thing to think about is that an Earth-like world could be in orbit around a Jupiter-sized exoplanet in its star's HZ. Even though we do not directly see the Earth-sized world, there could be several of them as satellite worlds of larger exoplanets. The Jupiter-sized exoplanet WASP-12b appears to have an 'exomoon' with several times the mass of Earth. We know of hundreds of Jupiter-sized exoplanets orbiting in the habitable zones of their stars. We can no longer discount the possibility that their exomoons may be Earth-like, though currently undetectable.

So far we have had to detect and study exoplanets indirectly from transit and spectroscopic data, but in a growing number of cases we can actually directly see these worlds as they spin around their stars. In 2008, the bright star Fomalhaut was observed by the Hubble Space Telescope and revealed a small planet among the debris of its dense disk of gas and dust. In another instance, the young star HR 8799 was observed by the Keck Observatory in Hawaii and revealed four massive planets (HR8799 b, c, d and e) in orbit. This is only the beginning of direct-imaging studies as we enter a new era of exoplanet studies.

What will the future bring? The spectacular success of the NASA Kepler mission is just the beginning of new missions to come that will discover thousands of new exoplanets. NASA's Transiting Exoplanet Survey Satellite (TESS) will be launched in 2018 and will perform a Kepler-like survey of the 500,000 brightest stars in the sky. This survey will double the number of known exoplanets and include hundreds of additional Earth-sized candidates. At the same time, the Webb Space Telescope will be launched, and among its many research programs will be ones to study the atmospheres of known exoplanets and Earth-sized candidates.

Check back here on June 18 for my next blog!

Synthia

We can micro-miniaturize electronics by making their components smaller and smaller, but what about living organisms? The human genome consists of over 20,000 genes coded into DNA, a molecule some three billion nucleotides long. That's a lot of information for a complex organism like us, but when it comes to genome size, a rare Japanese flower called Paris japonica is the current heavyweight champ, with 50 times more DNA than humans. But what is the smallest number of genes and nucleotides needed to create a living system?

Researchers call this the Minimal Genome, and it is what you get when you take a simple organism and strip away all of its non-essential and duplicative genes, leaving a bare minimum behind to spawn a healthy living system. To find one of these organisms in Nature, you would think you have to search far and wide among the millions of organisms on Earth. Luckily, a prime candidate was found much closer to 'home'!

Following an intense multi-decade search, the current champion organism with the fewest naturally-occurring genes is a rather dangerous pest called Mycoplasma genitalium (M. genitalium). It is a sexually-transmitted pathogen that doctors have known about since the 1980s, and more than 1 in 100 adults carry it. It causes urethritis and pelvic inflammatory disease among other conditions. It is also a bacterium with only 482 genes among 582,970 nucleotides.

By 2008, researchers had artificially synthesized the complete 482-gene, circular chromosome of M. genitalium. However, M. genitalium is a slow-growing bacterium, so by 2010 the research switched to another simple organism called M. mycoides, which has a faster reproduction cycle. They were able to synthesize the 1-million-nucleotide DNA of this bacterium and transplant it into the body of yet another bacterium called M. capricolum, which had been scrubbed of all its DNA. The new genome quickly took over the cell, which was dubbed Synthia, and it behaved exactly like M. mycoides even though its genome was entirely synthetic. It had been created from a computer record of its sequential gene complement and a set of chemicals, so it truly was the first lifeform whose parents were a computer and a set of chemical pumps!

Syn 3.0 – (Credit: Mark Ellisman/National Center for Imaging and Microscopy Research)

Following a tedious process of trial and error in which over 100 different analogues with different minimal DNA sets were created, most of them non-viable, the creation of a new synthetic bacterium, Syn 3.0, was announced by Nobel laureate Ham Smith, microbiologist Clyde Hutchison, and genomics pioneer Craig Venter at the J. Craig Venter Institute in the journal Science on March 25, 2016.

Although Syn 3.0 only had 473 genes, amazingly, the function of 149 of these remains unknown. Some create proteins that stick out from the bacterium’s cell wall but their functions are unknown. Other genes seem to be involved in creating proteins that shuttle molecules in and out of the bacterium’s cell wall, but the nature of these molecules and their role in the cell’s metabolism is unknown. The artificial genome was also reorganized using a computer algorithm to place similar genes near each other – like de-fragging a hard drive, but this did not have any obvious effect on the bacterium.

In at least one case, a ‘watermark’ sequence was inserted into the genome of an earlier synthetic bacterium called M. laboratorium. The 4 watermarks are coded messages in the form of DNA base pairs, of 1246, 1081, 1109 and 1222 nucleotides respectively, which give the names of the researchers, and quotes from James Joyce, Robert Oppenheimer, and an especially relevant one by Richard Feynman: ‘What I cannot build I cannot understand’.

Is M. genitalium really the smallest organism? Probably not, but it depends on how you define such minimal organisms. Since its genome was sequenced in 1999, we have come to know of five additional bacteria with even smaller genome sizes. The smallest of these is Candidatus Hodgkinia cicadicola Dsem with only 169 genes; however, like the others, this organism's genome is supplemented by the host cell's genome, so it acts more like an organelle than a free-standing organism.
M. genitalium is classified as an intracellular parasite and cannot exist by itself in the biosphere. It requires a host system, such as the human urinary tract, to provide the environment to sustain it. Among truly free-standing organisms that can actually live by themselves and reproduce, the smallest is currently thought to be Pelagibacter ubique, which was found in 2002. It makes up 25% of all bacterial plankton cells in the ocean, and it undergoes regular seasonal cycles in abundance, in summer reaching ~50% of the cells in temperate ocean surface waters. Thus it plays a major role in the Earth's carbon cycle!

Its genome was sequenced in 2005 and consists of 1,308,759 nucleotides forming 1,389 genes. This genome has been streamlined by evolution so that it requires the least amount of nitrogen to reproduce (a scarce resource in the bacterium's ocean environment). The base pair of C and G is nitrogen-rich, with a total of 8 nitrogen atoms between the two bases, while the A and T pair is nitrogen-poor, with only 7 nitrogen atoms. All other environmental factors being equal, instead of 50% of the genome containing A and T, a whopping 70% does. Somehow, this bacterium has found alternative ways to build the genes essential for life while avoiding the nitrogen-'expensive' base pair. Over billions of years it has optimized itself to its current low-nitrogen complement.
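Here is a little sketch of that nitrogen bookkeeping using the genome size quoted above. The per-base-pair nitrogen counts come from the chemistry of the bases themselves (the sugar-phosphate backbone contains no nitrogen):

```python
# Nitrogen budget of the Pelagibacter ubique genome.
N_PER_AT_PAIR = 7         # adenine (5 N) + thymine (2 N)
N_PER_GC_PAIR = 8         # guanine (5 N) + cytosine (3 N)
GENOME_PAIRS = 1_308_759  # nucleotide pairs, from the 2005 sequence

def total_nitrogen(at_fraction):
    """Nitrogen atoms needed to copy the genome at a given AT fraction."""
    return GENOME_PAIRS * (at_fraction * N_PER_AT_PAIR +
                           (1.0 - at_fraction) * N_PER_GC_PAIR)

n_even   = total_nitrogen(0.50)  # a 'neutral' genome, half AT
n_ubique = total_nitrogen(0.70)  # the AT-rich genome evolution produced
saving = (n_even - n_ubique) / n_even
print(f"Nitrogen saved by the AT-rich genome: {saving:.1%}")
# About 2.7% fewer nitrogen atoms per genome copy -- a small edge, but one
# that compounds over billions of generations in nitrogen-poor seawater.
```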

Why is all of this important? Why is synthetic genomics such an important research area? Because it is a direct way to identify how hundreds of genes work together to create viable living systems: their skeletons, metabolisms and reproductive strategies. For astrobiologists, it is a glimpse of what alien life might look like when it is pared down to its absolute essentials but still behaves as an independent system rather than a viral symbiont requiring a pre-existing host. Also, once we know what a basic viable bacterium host looks like, genetically, we can systematically add to this genome other factors of interest to us. We can explore what the process of epigenetics looks like as various environmental factors are added to switch on and off genes. Above all, it is the inevitable questions to come about the essential mechanism of life that will be the most exciting to watch develop!

Check back here on Tuesday, June 6 for the next essay!

The Proton’s Spin

Protons are the workhorses of chemistry. Their numbers determine which element you are talking about, and their positive charge determines how many electrons will form a cloud around them to facilitate all manner of chemical reactions.

For decades we thought that protons were absolutely fundamental particles along with neutrons and electrons, but then came the quantum revolution of the 1920s and the escalating quest to understand what their actual physical properties were. Through experimentation, we found that protons all had exactly the same mass to many decimal places. They all had exactly +1.0000 unit of charge, also to many decimal places. But they also possessed an entirely new physical quantity found only in atomic-scale physics. This quantity was called ‘spin’ but had nothing to do with the motion of a top about its axis, although paradoxically it could nonetheless be interpreted in that way.

Quantum spin, unlike the continuous spinning of a top, comes only in integer units like 0, 1, 2, and so on, or in half-integer units like 1/2, 3/2, 5/2, and so on. Physicists soon discovered that fundamental particles like photons (the carriers of light energy) only had a quantum spin of exactly 1.0, while protons, neutrons, neutrinos and electrons had exactly 1/2 unit of spin. The former kinds of particles were called bosons while the latter were given the name fermions. Composite particles made up from these elementary bosons and fermions could have other spin values, but only what arises from adding, in the proper way, the elementary spins of their constituents.

By the 1960s, experiments had begun to show that protons were not actually fundamental particles at all, nor were neutrons for that matter. Theoretical models that built up protons and neutrons and many other known particles called mesons and baryons soon led to the idea of the quark. For protons and neutrons you needed three quarks, while for the mesons you only needed two, of which one would be a quark and the other an anti-quark. The mathematics was impressive and elegant, and this system of quarks soon became the favored model for all particles that interacted through the strong nuclear force, itself produced by the exchange of particles called gluons. Also in this scheme, quarks would be spin-1/2 fermions and the gluons would be spin-1 bosons, much like the photons that carry light energy.

All seemed to be going great by the 1970s and 1980s. The quark model flourished, and many new subtle phenomena were uncovered through the application of what became the Standard Model of physics. But there was a fly in the ointment.

At first the explanation for how a proton could have a spin of 1/2 while at the same time being composed of three quarks, each also a spin-1/2 particle, was pretty well settled. Because a proton consists of two identical 'up' quarks and one 'down' quark, it was entirely reasonable that the two up quarks would have equal and opposite spin, canceling each other out and leaving behind the down quark to carry the proton's 1/2 unit of spin. Similarly for the neutron, its two down quarks combine to have a net-zero spin, leaving the single up quark to carry the 1/2 unit of spin for the neutron.

The Proton Spin Crisis

All seemed to be well until experiments in 1987 by the European Muon Collaboration used carefully prepared beams of particles called muons to probe the interior of protons and double-check the way the quark spins were oriented with respect to the proton's spin. What they found was startling. Not more than 25% of the proton's spin was generated by the quarks at all. The remaining 75% of what defines the spin of a proton had to come from some other source!

When you look at the mass of a proton compared to the masses of its three constituent quarks, you discover something very fascinating. The masses of the quarks only account for about 1% of the mass of the entire proton. Instead, thanks to Einstein's E = mc^2, it is the stress energy of the gluon fields inside the proton that contributes the missing 99%. The mass that you read on the bathroom scale is only 1% contributed by the mass of your elementary quarks, and 99% by the invisible energy (mass) of the gluon fields that occupy nuclear space!
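You can check that 1% figure yourself with today's approximate quark masses (rounded values):

```python
# How much of the proton's mass do its three valence quarks account for?
M_UP = 2.2        # up quark mass, MeV/c^2 (approximate)
M_DOWN = 4.7      # down quark mass, MeV/c^2 (approximate)
M_PROTON = 938.3  # proton mass, MeV/c^2

quark_total = 2 * M_UP + M_DOWN   # proton = up + up + down
fraction = quark_total / M_PROTON
print(f"Valence quarks: {quark_total:.1f} MeV, or {fraction:.1%} of the proton")
# ~1% -- the remaining ~99% is the energy of the gluon fields and quark
# motion, showing up as mass through E = mc^2.
```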

Now for proton spin, the only other things rattling around inside the intense fields in the interior of a proton were the gluons holding the quarks together, and an ephemeral sea of quark-antiquark pairs that momentarily appeared and disappeared in the vacuum of space found there. This sea of vacuum or ‘virtual’ particles is absolutely required by modern quantum physics, and although we can never detect their comings and goings by any direct observation, we can detect their influence on nearby elementary particles.

In 2014, experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven, New York collided polarized protons together, and physicists think they have found a large part of the remainder of the proton's spin. Perhaps 40% to 50% seems to be contributed by the gluons themselves. This still leaves about 25% from some other source. Meanwhile, other experiments by MIT physicists determined that any anti-quarks produced inside a proton among the virtual quark sea contribute very little to the overall spin of the proton.

The bottom line today seems to be what this table shows:

Quark spin: about 25%
Gluon spin: 40% to 50%
Orbital angular momentum: 25% to 35%

When the experimental constraints are added up, we still do not have a precise accounting of how the various proton constituents combine to give the universally constant spin of 1/2 that is observed for all protons to many decimal places.

Who would have thought that such an important number as ‘1/2’ arises from combining a number of messy phenomena that themselves seem imprecise!

Check back here on Tuesday, May 30 for my next topic!

The First Billion Years

When we think about the Big Bang we tend only to look at the first few instants when we think all of the mysterious and exciting action occurred. But actually, the first BILLION years are the real stars of this story!

My books 'Eternity: A User's Guide' and 'Cosmic History I and II' provide a more thorough, and 'twitterized', timeline of the universe from the Big Bang to the literal end of time, if you are interested in the whole story as we know it today. You can also look at a massive computer simulation developed by Harvard and MIT cosmologists in 2014.

What we understand today is not merely based on theoretical expectations. Thanks to specific observations during the last decade, we have actually discovered distant objects that help us probe critical moments during this span of time.

Infancy

By the end of the first 10 minutes after the Big Bang, the universe was filled with a cooling plasma of hydrogen and helium nuclei and electrons, still at seething temperatures over 100 million degrees Celsius – far too hot for neutral atoms to form. The traces that we do see of the fireball light from the Big Bang are called the cosmic background radiation, and astronomers have been studying it since the 1960s. Today its temperature is 2.726 kelvins, but at the level of one part in 100,000 there are irregularities in this temperature across the entire sky, detected by the COBE, WMAP and Planck satellites and shown below. These irregularities are the gravitational fingerprints of the vast clusters of galaxies that would form in the universe several billion years later.

By 379,000 years after the Big Bang, matter had cooled down to the point where electrons could bond with atomic nuclei to form neutral atoms of hydrogen and helium. For the first time in cosmic history, matter could go its own way and no longer be affected by the fireball radiation, which used to blast these assembled atoms apart faster than they could form. If you were living at this time, it would look like you were standing inside the surface of a vast dull-red star steadily fading to black as the universe continued to expand and the gas steadily cooled over the millennia. No matter where you stood in the universe at this time, all you would see around you is this dull-red glow across the sky.

6 million years – By this time, the cosmic gas has cooled to the point that its temperature was only 500 kelvins (440°F). At these temperatures, it no longer emits any visible light. The universe is now fully in what astronomers call The Cosmic Dark Ages. If you were there and looking around, you would see nothing but an inky blackness no matter where you looked! With infrared eyes, however, you would see the cosmos filled by a glow spanning the entire sky.
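A quick way to see why the sky goes dark in visible light is Wien's displacement law, which gives the wavelength at which a glowing gas shines most brightly. Here is a minimal sketch for the temperatures in this timeline:

```python
# Wien's displacement law: peak wavelength = b / T.
WIEN_B = 2.898e-3  # meter-kelvins

def peak_wavelength_microns(temperature_k):
    return WIEN_B / temperature_k * 1e6

for label, temp in [("Recombination glow (3000 K)", 3000.0),
                    ("Dark Ages gas (500 K)", 500.0),
                    ("Today's background (2.726 K)", 2.726)]:
    print(f"{label}: peak at {peak_wavelength_microns(temp):.1f} microns")
# ~1 micron (deep red/near-infrared), ~6 microns (infrared), and ~1000
# microns (microwave): the dull-red glow literally slides out of the
# visible band as the universe expands and cools.
```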

20 million years – The hydrogen-helium gas that exists all across the universe is starting to feel the gravity effects of dark matter, which has started to form large clumps and vast spiderweb-like networks spanning the entire cosmos, with masses of several trillion times the mass of our sun. As the cold, primordial gas falls into these gravity wells, it forms what will later become the halos of modern-day galaxies. All of this was hidden under a cloak of complete darkness, because there were as yet no physical objects in existence to light things up. Only detailed supercomputer simulations can reveal what occurred during this time.

The First Stars

100 million years – Once the universe got cold enough, large gas clouds stopped being controlled by their internal pressure, and gravity started to take the upper hand. First the vast collections of matter destined to become the haloes of galaxies formed. Then, or at about the same time, the first generation of stars appeared in the universe. These Population III stars, made from nearly transparent hydrogen and helium gas, were so massive that they lived for only a few million years before detonating as supernovae. As the universe becomes polluted with heavier elements from billions of supernovae, collapsing clouds become more opaque to their own radiation, and so the collapse process stops when much less matter has formed into the infant stars. Instead of only massive Population III stars with 100 times our sun's mass, numerous stars with masses of 50, 20 and 5 times our sun's form with increasing frequency. Even smaller stars like our own sun begin to appear by the trillions. Most of this activity is occurring in what will eventually become the halo stars in modern galaxies like the Milky Way. The vast networks of dark matter became illuminated from within as stars and galaxies began to form.

200 million years – The oldest known star in our Milky Way, called SM0313, formed about this time. This star contains almost no iron – less than one ten-millionth of the iron found in our own Sun. It is located 6,000 light years from Earth. Another star, called the Methuselah Star, is located about 190 light years from Earth and formed about the same time as SM0313.

The First Quasars and Black Holes

300 million years – The most distant known quasar is called APM 8279+5255, and it contains traces of the element iron. This means that by about this time after the Big Bang, some objects were already powered by enormous black holes that steadily consume a surrounding disk of gas and dust. For APM 8279+5255, the mass of this black hole is about 20 billion times that of the Sun. Astronomers do not know how a black hole this massive could have formed so soon after the Big Bang. A simple division shows that a 20 billion solar mass black hole forming in 300 million years would require a growth rate higher than 60 solar masses a year!
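Here is that simple division spelled out:

```python
# Minimum average growth rate for the black hole in APM 8279+5255.
final_mass_solar = 20e9   # 20 billion solar masses
time_years = 300e6        # 300 million years since the Big Bang

rate = final_mass_solar / time_years
print(f"Average growth rate: {rate:.0f} solar masses per year")
# ~67 solar masses per year, consistent with the 'higher than 60' quoted
# above -- far beyond what steady accretion is thought able to sustain.
```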

The First Galaxies

400 million years – The cold primordial matter becomes clumpy under the action of its own gravity. These clumps have masses of perhaps a few billion times our sun or less, and over time this material starts to collapse locally into even smaller clouds that become mini-galaxies where intense episodes of star formation activity are playing out.

This image shows the position of the most distant galaxy discovered so far with the Hubble Space Telescope. The remote galaxy GN-z11 shown in the inset is actually ablaze with bright young blue stars. They look red in this image because the wavelengths of light have been stretched by the expansion of the universe to longer, redder wavelengths. Like the images of so many other young galaxies, we cannot see individual stars, but their irregular shapes show that the stars they contain are spread out in irregular clumps within their host galaxy, possibly because they are from separate, merging clouds whose collisions have triggered the star-forming activity we see.

Although it is hard work, astronomers can detect the faint reddish traces of dozens of other infant galaxies such as MACS0647-JD, UDFj-39546284 and EGSY-2008532660. These are all small dwarf galaxies over 100 times less massive than our Milky Way. They are all undergoing intense star forming activity between 400 and 600 million years after the Big Bang.

The Gamma-Ray Burst Era begins about 630 million years after the Big Bang. Gamma-ray bursts are caused by very massive stars, perhaps 50 to 100 times our own sun’s mass, that explode as hypernovae and form a single black hole, so we know that these kinds of stars were already forming and dying by this time. Today from ‘across the universe’ we see these events occur about once each day!

800 million years – The quasar ULAS J1120+0641 is another young case of a supermassive black hole that has formed, and by this time is eating its surrounding gas and stars at a prodigious rate. The mass of this black hole is about 2 billion times the mass of our sun, and like others is probably the result of frequent galaxy mergers and rapid eating of surrounding matter.

Also at around this time we encounter the Himiko Lyman Alpha Blob, one of the most massive objects ever discovered in the early universe. It is 55,000 light-years across, which is half of the diameter of the Milky Way. Objects like Himiko are probably powered by an embedded galaxy that is producing young massive stars at a phenomenal rate of 500 solar masses per year or more.

Among the most brilliant objects we can see from a time about 900 million years after the Big Bang are quasars like SDSS J0100+2802, with a luminosity 420 trillion times that of our own Sun. It is powered by a supermassive black hole 12 billion times the mass of our sun.

The Re-Ionization Era

960 million years – By this time, massive stars in what astronomers call 'Population III' are being born by the billions across the entire universe. These massive stars emit almost all of their light in the ultraviolet part of the spectrum. There are now so many intense sources of ultraviolet radiation in the universe that all of the remaining hydrogen gas becomes ionized. Astronomers call this the Reionization Era. Within a few hundred million years, only dwarf-galaxy-sized blobs of gas still remain and are being quickly evaporated. We can still see the ghosts of these clouds in the light from very distant galaxies. The galaxy SSA22-HCM1 is the brightest of the objects called 'Lyman-alpha emitters'. It may be producing new stars at a rate of 40 solar masses per year, along with enormous amounts of ultraviolet light. The galaxy HDF 4-473.0, also spotted at this age, is only 7,000 light years across. It has an estimated star formation rate of 13 solar masses per year.

1 billion years – First by twos and threes, then by dozens and hundreds, clusters of galaxies begin to form as the gravity of matter pulls the clumps of galaxy-forming matter together. This clustering is speeded up by the additional gravity provided by dark matter. In a universe without dark matter, the number of clusters of galaxies would be dramatically smaller.

Clusters of Galaxies Form

Proto-galaxy cluster AzTEC-3 consists of 5 smaller galaxy-like clumps of matter, each forming stars at a prodigious rate. We now begin to see how some of the small clumps in this cluster are falling together and interacting, eventually to become a larger galaxy-sized system. This process of cluster formation is now beginning in earnest as more and more of these ancient clumps fall together under a widening umbrella of gravity. Astronomers are discovering more objects like AzTEC-3, which is the most distant known progenitor to modern elliptical galaxies. It appears that by 2.2 billion years after the Big Bang, half of all the massive elliptical galaxies we see around us today had already formed.

Thanks to the birth and violent deaths of generations of massive Population III stars, the universe is now flooded with heavy elements such as iron, oxygen, carbon and nitrogen: the building blocks for life. So too are elements like silicon and uranium, which help to build rocky planets and heat their interiors. The light from the quasar J033829.31+002156.3 can be studied in detail, and it shows that by this time element-building through supernova explosions of Population III stars has produced lots of carbon, nitrogen and silicon. The earliest planets and life forms based upon these elements now have a chance to appear in the universe. Amazingly, we have already spotted such an ancient world!

Earliest Planets Form

At 1 billion years after the Big Bang, the oldest known planet, PSR B1620-26 b, has already formed. Located in the globular cluster Messier 4, about 12,400 light-years from Earth, it bears the unofficial nicknames 'Methuselah' and 'The Genesis Planet' because of its extreme age. The planet is in orbit around two very old stars: a dense white dwarf and a neutron star. The planet has a mass of 2.5 times that of Jupiter, and orbits at a distance a little greater than the distance between Uranus and our own Sun. Each orbit of the planet takes about 100 years.

Wonders to Come!

Although the Hubble Space Telescope strains at its capabilities to see objects at this early stage in cosmic history, the launch of NASA’s Webb Space Telescope will uncover not dozens but thousands of these young pre-galactic objects with its optimized design. Within the next decade, we will have a virtually complete understanding of what happened during and after the Cosmic Dark Ages when the earliest possible sources of light could have formed, and one can only marvel at what new discoveries will turn up.

What an amazing time in which to be alive!

Check back here on Wednesday, May 24 for my next topic!

Our Unstable Universe

Something weird is going on in the universe that is causing astronomers and physicists to lose a bit of sleep at night. You have probably heard about the discovery of dark energy and the accelerating expansion of the universe. This is a sign that something is afoot that may not have a pleasant outcome for our universe or the life in it.

Big Bang Cosmology V 1.0

The basic idea is that our universe has been steadily expanding in scale since 14 billion years ago, when it flashed into existence in an inconceivably dense and hot explosion. Today we can look around us and see this expansion as the constantly-increasing distances between galaxies embedded in space. Astronomers measure this change in terms of a single number called the Hubble Constant, which has a value of about 70 km/sec per megaparsec. For every million parsecs of separation between galaxies, a distance of 3.26 million light years, you will see distant galaxies speeding away from each other at 70 km/sec. This conventional Big Bang theory has been the mainstay of cosmology for decades, and it has helped explain everything from the formation of galaxies to the abundance of hydrogen and helium in the universe.
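Here is a minimal sketch of Hubble's law in action; the distances are arbitrary examples:

```python
# Hubble's law: recession velocity grows linearly with distance, v = H0 * d.
H0 = 70.0            # km/sec per megaparsec
LY_PER_MPC = 3.26e6  # light years in one megaparsec

def recession_velocity(distance_mpc):
    return H0 * distance_mpc  # km/sec

for d_mpc in (1, 10, 100, 4300):
    v = recession_velocity(d_mpc)
    print(f"{d_mpc:>5} Mpc ({d_mpc * LY_PER_MPC:.2e} ly): {v:,.0f} km/sec")
# At roughly 4,300 Mpc (about 14 billion light years) the naive velocity
# reaches the speed of light, ~300,000 km/sec -- the scale of our horizon.
```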

Big Bang Cosmology V 2.0

Beginning in the 1980s, physicists such as Alan Guth and Andrei Linde added some new physics to the Big Bang based on cutting-edge ideas in theoretical physics. For a decade, physicists had been working on ways to unify the three forces in nature: electromagnetism, and the strong and weak nuclear forces. This led to the idea that just as the Higgs Field was needed to make the electromagnetic and weak forces look different rather than behave as nearly identical 'electroweak' forces, the strong force needed its own scalar field to break its symmetry with the electroweak force.

When Guth and Linde added this field to the equations of Big Bang cosmology they made a dramatic discovery. As the universe expanded and cooled, for a brief time this new scalar field made the transition between a state where it allowed the electroweak and strong forces to look identical, and a state where this symmetry was broken, representing the current state of affairs. This period of time extended from about 10^-37 seconds to 10^-35 seconds; a mere instant in cosmic time, but the impact of this event was spectacular. Instead of the universe expanding at a steady rate in time as it does now, the separations between particles increased exponentially in time in a process called Inflation. Physicists now had a proper name for this scalar field: the Inflaton Field.

Observational cosmology has been able to verify since the 1990s that the universe did, indeed, pass through such an inflationary era at about the calculated time. The expansion of space at a rate many trillions of times faster than the speed of light ensured that we live in a universe that looks as ours does, especially in terms of the uniformity of the cosmic 'fireball' temperature. It's 2.7 kelvins no matter where you look, which would have been impossible had the Inflationary Era not existed.

Physicists consider the vacuum of space to be more than ‘nothing’. Quantum mechanically, it is filled by a patina of particles that invisibly come and go, and by fields that can give it a net energy. The presence of the Inflaton Field gave our universe a range of possible vacuum energies depending on how the field interacted with itself. As with other things in nature, objects in a high-energy state will evolve to occupy a lower-energy state. Physicists call the higher-energy state the False Vacuum and the lower-energy state the True Vacuum, and there is a specific way that our universe would have made this change. Before Inflation, our universe was in a high-energy, False Vacuum state governed by the Inflaton Field. As the universe continued to expand and cool, a lower-energy state for this field was revealed in the physics, but the particles and fields in our universe could not instantaneously go into that lower-energy state. As time went on, the difference in energy between the initial False Vacuum and the True Vacuum continued to increase. Like bubbles in a soda, small parts of the universe began to make this transition so that we now had a vast area of the universe in a False Vacuum in which bubbles of space in the True Vacuum began to appear. But there was another important process going on as well.

When you examine how this transition from False to True Vacuum occurred in Einstein’s equations that described Big Bang cosmology, a universe in which the False Vacuum existed was an exponentially expanding space, while the space inside the True Vacuum bubbles was only expanding at a simple, constant rate defined by Hubble’s Constant. So at the time of inflation, we have to think of the universe as a patina of True Vacuum bubbles embedded in an exponentially-expanding space still caught in the False Vacuum. What this means for us today is that we are living inside one of these True Vacuum bubbles where everything looks about the same and uniform, but out there beyond our visible universe horizon some 14 billion light years away, we eventually enter that exponentially-expanding False Vacuum universe. Our own little bubble may actually be billions of times bigger than what we can see around us. It also means that we will never be able to see what these other distant bubbles look like because they are expanding away from us at many times the speed of light.

Big Bang Cosmology V 3.0

You may have heard of Dark Energy and what astronomers have detected as the accelerating expansion of the universe. By looking at distant supernovae, we can detect that since 6 billion years after the Big Bang, our universe has not been expanding at a steady rate at all. The separations between galaxies have been increasing at an exponential rate. This is caused by Dark Energy, which is present in every cubic meter of space. The more space there is as the universe expands, the more Dark Energy there is, and the faster the universe expands. What this means is that we are living in a False Vacuum state today in which a new Inflaton Field is causing space to dilate exponentially. It doesn't seem too uncomfortable for us right now, but the longer this state persists, the greater is the probability that our corner of the universe will see a 'bubble' of the new True Vacuum appear. Inside this bubble there will be slightly different physics; for example, the mass of the electron or the quark may be different. We don't know when our corner of the universe will switch over to its True Vacuum state. It could be tomorrow or 100 billion years from now. But there is one thing we do know about this progressive, accelerated expansion.

Eventually, distant galaxies will be receding from our Milky Way faster than the speed of light as they are helplessly carried along by a monstrously-dilating space. This also means they will become permanently invisible for the rest of eternity, as their light signals can never keep pace with the exponentially-increasing space between us and them. Meanwhile, our Milky Way will become the only cosmic collection of matter we will ever be able to see from then on. It is predicted that this situation will occur about 100 billion years from now, when the Andromeda Galaxy will pass beyond this distant horizon.

What the new physics will be in the future True Vacuum state is anyone's guess. If the difference in energy between the False and True Vacuum is only a small fraction of the mass of a neutrino (a few electron-Volts), we may hardly know that it happened and life will continue. But if it is comparable to the mass of the electron (511,000 eV), we are in for some devastating and fatal surprises best not contemplated.

Check back here on Tuesday, May 16 for my next topic!

Boltzmann Brains

Back in the 1800s, Ludwig Boltzmann (1844-1906) developed the ideas of entropy and thermodynamics, which have been the mainstay of chemistry and physics ever since. Long before atoms were identified, Boltzmann had used them in designing his theory of statistical mechanics, which related entropy to the number of possible statistical states these particles could occupy. His famous formula

S = k log W

is even inscribed on his tombstone! His frustrations with the anti-atomists, who rejected his crowning achievement of statistical mechanics, drove him into profound despair, and he took his own life in 1906.

If you flip a coin 4 times, it is unlikely that all 4 flips will result in all-heads or all-tails. It is far more likely that you will get a mixture of heads and tails. This is a result of there being a total of 2^4 = 16 possible outcomes or 'states' for this system, and the all-heads and all-tails states each occur only 1/16 of the time. Most of the states you can produce are a mixture of heads and tails (14/16). Now replace the coin flips by the movement of a set of particles in three dimensions.
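
If you want to see the counting explicitly, here is a minimal Python sketch (my own illustration, not part of the original argument) that enumerates all 16 states:

```python
from itertools import product

# Enumerate all 2^4 = 16 outcomes of four coin flips and count
# how many are all-heads/all-tails versus a mixture.
outcomes = list(product("HT", repeat=4))
all_same = [o for o in outcomes if len(set(o)) == 1]

print(len(outcomes))                  # 16 states in total
print(len(all_same))                  # 2 states: HHHH and TTTT
print(len(outcomes) - len(all_same))  # 14 mixed states
```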

Boltzmann's statistical mechanics related the number of possible states for N particles moving in 3-dimensional space to the entropy of the system. It is more difficult to calculate the number of states than in the coin-flip example above, but it can be done using his mathematics, and the result is the 'W' in his equation S = k log W. The bottom line is that the more states available to a collection of particles (for example the atoms of a gas), the higher its entropy S. How does a gas access more states? One way is for you to turn up its temperature so that the particles are moving faster. This means that as you increase the temperature of a gas, its entropy increases in a measurable way.
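
For a concrete number, the standard kinetic-theory result for an ideal monatomic gas heated at constant volume is dS = (3/2) N k ln(T2/T1). A small worked example of my own, for one mole of gas:

```python
import math

# Entropy gain of one mole of an ideal monatomic gas heated at
# constant volume: dS = (3/2) N k ln(T2/T1).  More thermal energy
# means more accessible states W, so S = k log W goes up.
k = 1.380649e-23       # Boltzmann's constant, J/K
N = 6.022e23           # number of atoms (one mole)
T1, T2 = 300.0, 600.0  # doubling the temperature

dS = 1.5 * N * k * math.log(T2 / T1)
print(f"{dS:.2f} J/K")  # about 8.6 J/K
```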

Cosmologically, as our universe expands and cools, its entropy is actually increasing steadily, because more and more space is available for the particles to occupy even as they move more slowly while the temperature declines. The Big Bang event itself, even at its unimaginably high temperature, was actually a state of very low entropy, because even though particles were moving near the speed of light, there was so little space for matter to occupy!

For random particles in a gas colliding like billiard balls, with no other organizing forces acting on them (the picture called the kinetic theory of gases), we can imagine a collection of 100 red particles clustered in one corner of a box, and 1000 other blue particles located elsewhere in the box. If we were to stumble on a box of 1100 particles that looked like this, we would immediately say 'how odd', because we sense that as the particles jostled around, the 100 red particles would quickly get spread out uniformly inside the box. This is an expression of there being far more available states where the red balls are uniformly mixed than states where they are clustered together. It is also a statement that the clustered arrangement is a lower-entropy version of the system, and the uniformly-mixed arrangement is a higher-entropy one. So we expect the system to evolve from lower to higher entropy as the red particles diffuse through the box: this is the Second Law of Thermodynamics.
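
Just how lopsided the state-counting is can be seen with a toy version of the calculation. Suppose (my own simplifying assumption) that each red particle wanders independently and the 'corner' is one-eighth of the box volume:

```python
import math

# Probability that all 100 red particles happen to sit in a corner
# occupying 1/8 of the box, if each particle wanders independently:
p_clustered = (1.0 / 8.0) ** 100
print(f"{p_clustered:.1e}")  # ~5e-91: astronomically unlikely

# The corresponding entropy gained when they spread back out,
# S = k log W, with W the ratio of state counts (8^100 here):
k = 1.380649e-23             # J/K
dS = 100 * k * math.log(8)
print(f"{dS:.2e} J/K")       # ~2.9e-21 J/K
```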

Boltzmann Brains

The problem is that given enough time, even very rare states can have a non-zero probability of happening. With enough time and enough jostling, we could randomly find the red balls once again clustered together. It may take billions of years but there is nothing that stands in the way of this happening from statistical principles. Now let’s suppose that instead of just a collection of red balls, we have a large enough system of particles that some rare states resemble any physical object you can imagine: a bacterium, a cell phone, a car…even a human brain!

A human brain is a collection of particles organized in a specific way to function and to store memories. In a sufficiently large and old universe, there is no obvious reason why such a brain could not just randomly assemble itself like the 100 red particles in the above box. It would be sentient, have memories and even senses. None of its memories would be of actual events it experienced but simply artificial reconstructions created by just the right neural pathways randomly assembled. It would remember an entire lifetime to date without having actually lived or occupied any of the events in space and time.

When you calculate the probability for such a brain to arise through ordinary evolution in a low-entropy universe like ours, rather than by random assembly, you run into a problem. In Boltzmann's picture, our vast, seemingly well-organized low-entropy universe is just a statistical fluctuation embedded in a much larger universe of far higher entropy. But a fluctuation the size of our entire organized universe, with conditions conducive to organic evolution, is vastly less likely than a far smaller fluctuation that assembles a single sentient brain from random collisions. So in any universe destined to last for eternity, incorporeal brains should come to vastly outnumber actual evolved sentient creatures! This is the Paradox of the Boltzmann Brain.

Creationists like to invoke the Second Law to deny that evolution could proceed by random collisions, yet this picture of purely random assembly implies that we are all Boltzmann Brains, not products of evolution at all. It offers no comfort to those who believe in God either, because God was not involved in randomly assembling these brains, complete with their ready-made memories!

So how do we avoid filling our universe with the abomination of these incorporeal Boltzmann Brains?

The Paradox Resolved

First of all, we do not live in Boltzmann’s universe. Instead of an eternally static system existing in a finite space, direct observations show that we live in an expanding universe of declining density and steadily increasing entropy.

Secondly, it isn’t just random collisions that dictate the assembly of matter (a common idea used by Creationists to dismantle evolution) but a collection of specific underlying forces and fundamental particles that do not come together randomly but in a process that is microscopically determined by specific laws and patterns. The creation of certain simple structures leads through chemical processes to the inexorable creation of others. We have long-range forces like gravity and electromagnetism that non-randomly organize matter over many different scales in space and time.

Third, we do not live in a universe dominated by random statistical processes, but one in which we find regularity in composition and physical law spanning scales from the microscopic to the cosmic, all the way out to the edges of the visible universe. When two particles combine, they can stick together through chemical forces, and the growing cluster, called a nucleation site, attracts still more particles through electromagnetic or gravitational forces.

Fourth, quantum and gravitational processes suggest that all existing particles will eventually decay or be consumed in black holes, which will themselves evaporate, destroying all but the most elementary particles such as electrons, neutrinos and photons, none of which can be assembled into brains and neurons.

The result is that Boltzmann Brains could not exist in our universe, and will not exist even in the eternal future as the cosmos becomes more rarefied and reaches its final and absolute thermodynamic equilibrium.

The accelerated expansion of the universe now in progress will also ensure that eventually all complex collections of matter are shattered into individual fundamental particles, each adrift in its own expanding and utterly empty universe!

Have a nice day!

Check back here on Tuesday, May 9 for my next topic!

The Planck Era

The Big Bang theory says that the entire universe was created in a tremendous explosion about 14 billion years ago. The enormity of this event is hard to grasp and it seems natural to ask ourselves ‘What was it like then?’ and ‘What happened before the Big Bang?’.

Thanks to what physicists call the Standard Model, we have a detailed understanding of quantum physics, matter, energy and force that lets us reproduce what the universe looked like as early as a billionth of a second after the Big Bang. The results of high-precision observational cosmology also let us verify that the Standard Model predictions match the general properties of the matter and energy in our universe all the way back to this unimaginably early time. We can actually go a bit farther back towards the beginning thanks to detailed studies of the cosmic background radiation!

At a time 10^-36 seconds (that is, a trillionth of a trillionth of a trillionth of a second!) after the Big Bang, a spectacular change in the size of the universe occurs. This is the Inflationary Era, when the strong nuclear force becomes distinguishable from the weak and electromagnetic forces. The temperature is an incredible 10 thousand trillion trillion degrees, and the density of matter has soared to nearly 10^75 g/cm^3. This number is so enormous that even our analogies are almost beyond comprehension. At these densities, the entire Milky Way galaxy could easily be stuffed into a volume no larger than a single hydrogen atom!
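
You can check the analogy with a few rough inputs; the galaxy mass and atomic radius below are my own order-of-magnitude estimates:

```python
import math

# Does the Milky Way really fit into a hydrogen atom at 1e75 g/cm^3?
M_SUN = 1.99e33                    # grams
m_galaxy = 1.0e12 * M_SUN          # ~1e12 solar masses incl. dark matter
r_bohr = 5.29e-9                   # Bohr radius, in cm
v_atom = (4.0 / 3.0) * math.pi * r_bohr**3

rho = m_galaxy / v_atom
print(f"{rho:.1e} g/cm^3")  # ~3e69 -- comfortably below 1e75, so it fits
```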

Between a billionth of a second and 10^-35 seconds is a No Man's Land currently inaccessible to our technology; probing it would require instruments such as the CERN Large Hadron Collider scaled up to the size of our solar system, or even larger! This is also the domain of the so-called Particle Desert that I previously wrote about, and the landscape of the predictions made by supersymmetric string theory, for which there is as yet no evidence of its correctness despite decades of intense theoretical research.

THROUGH A LOOKING GLASS, DARKLY

Since our technology will not allow us to physically reproduce the conditions during these ancient times, we must use our mathematical theories of how matter behaves to mentally explore what the universe was like then. We know that the appearance of the universe before 10^-43 seconds can only be adequately described by modifying the Big Bang theory, because that theory is, in turn, based on the General Theory of Relativity. At the Planck Scale, we need to extend General Relativity so that it includes not only the macroscopic properties of gravity but its microscopic characteristics as well. The theory of 'Quantum Gravity' is still far from completion, but physicists tend to agree that there are some important guide-posts to help us understand how it applies to Big Bang theory.

QUANTUM COSMOLOGY

In the language of General Relativity, gravity is a consequence of the deformation of space caused by the presence of matter and energy. In Quantum Gravity theory, gravity is produced by massless gravitons, or strings (in what is called string theory), or loops of energy (in what is called loop quantum gravity), so that gravitons now represent individual packages of curved space.

The appearance and disappearance of innumerable gravitons gives the geometry of space a very lumpy and dynamic character. The geometry of space twists and contorts so that far-flung regions of space may suddenly find themselves connected by 'wormholes' and quantum black holes, which constantly appear and disappear within 10^-43 seconds. The geometry of space at a given moment has to be thought of as an average over all the 3-dimensional space geometries that are possible.

What this means is that we may never be able to calculate with any certainty exactly what the history of the universe was like before 10^-43 seconds. To probe the history of the universe then would be like trying to trace your ancestral roots if every human being on Earth had some probability of being one of your parents. Now try to trace your family tree back a few generations! An entirely new conception of what we mean by 'a history for the universe' will have to be developed. Even the concepts of space and time will have to be completely re-evaluated in the face of the quantum fluctuations of spacetime at the Planck Era!

Now we get to a major problem in investigating the Planck Era.

BUT WAIT…THERE’S MORE!

Typically we make observations in nuclear physics by colliding particles and studying the information created in the collision, such as the kinds of particles created and their energy, momentum, spin and other 'quantum numbers'. The whole process of testing our theories relies on studying the information generated in these collisions, searching for patterns, and comparing them to the predictions. The problem is that this investigative process breaks down as we explore the Planck Era. When the quantum particles of space (gravitons, strings or loops) collide at these enormous energies and small scales, they create quantum black holes that immediately evaporate. You cannot probe even smaller scales of space and time, because all you do is create more quantum black holes and wormholes. And because the black holes evaporate into a randomized hailstorm of new gravitons, you cannot actually make observations of what is going on to search for non-random patterns the way you do in normal collisions!

Quantum Gravity, if it actually exists as a theory, tells us that we have finally reached a theoretical limit to how much information we can glean about the Planck Era. Our only viable options involve exploring the Inflationary Era and how this process left its fingerprints on the cosmic background radiation through the influence of gravitational waves.

Fortunately, we now know that gravitational waves exist, thanks to the LIGO detections announced in 2016. We also have tentative indications of what cosmologists call the cosmological B-Modes, which are the fingerprints of primordial gravitational waves interacting with the cosmic background radiation during the Inflationary Era.

We may never be able to study the Planck Era conditions directly, when the universe was only 10^-43 seconds old, but then again, knowing what the universe was doing from 10^-35 seconds after the Big Bang all the way up to the present time is certainly an impressive human intellectual and technological success!


Check back here on May 3 for the next blog!

Space Power!

On Earth we can deploy a 164-ton wind turbine to generate 1.5 megawatts of electricity, but in the equally energy-hungry environment of space travel, far more efficient energy-per-mass systems are a must. The choices for such systems in the vacuum of space are decidedly limited!

OK…this is a rather obscure topic, but as I discussed in my previous blog, in order to create space propulsion systems that can get us to Mars in a few days, or Pluto in a week, we need some major improvements in how we generate power in space.

I am going to focus my attention on ion propulsion, because it is far less controversial than any of the more efficient nuclear rocket designs. Although nuclear rocket technology has been pretty well worked out, theoretically and in engineering designs, since the 1960s, there is simply no political will to deploy it in the next 50 years due to enormous public concerns. The concerns are not entirely unfounded: the highest-efficiency and least massive fission power plants would use near-weapons-grade uranium or plutonium fuel, making them look like atomic bombs to some skeptics!

Both fission and fusion propulsion have a lot in common with ordinary chemical propulsion. They heat a propellant to very high temperatures and direct the exhaust flow mechanically out the back of the engine, using tapered 'combustion chambers' that resemble chemical rockets. The high temperatures ensure that the random thermal speeds of the particles are many km/sec, but the flow has to be shaped by the engine nozzle design to leave the ship in one direction. The melting temperature of a fission reactor core is about 4,500 K, so the maximum speed of the ejected thermal gas (hydrogen) passing through it is about 10 km/sec.

Ion engines are dramatically different. They guide ionized particles out the back of the engine using one or more acceleration grids. The particles are electrostatically guided and accelerated literally one at a time, so that instead of flowing all over the place in the rocket chamber, they start out life already 'collimated' to flow in only one direction at super-thermal speeds. For instance, the Dawn spacecraft ejected xenon ions at a speed of 25 km/sec. If you had a high-temperature xenon gas with particles at that same speed, the temperature of this gas would be some 3 to 4 million degrees Celsius, well above the melting point of the ion engine!
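
Both figures, the ~10 km/sec thermal hydrogen exhaust above and the multi-million-degree equivalent temperature of a 25 km/sec xenon beam, follow from the same kinetic-theory relation, v_rms = sqrt(3kT/m). A quick check of my own:

```python
import math

# v_rms = sqrt(3 k T / m) links temperature to particle speed.
k = 1.380649e-23          # J/K
m_h = 1.67e-27            # kg, atomic hydrogen
m_xe = 131.3 * 1.66e-27   # kg, xenon

# Hydrogen at the ~4,500 K reactor-core limit:
v_h = math.sqrt(3 * k * 4500 / m_h)
print(f"{v_h / 1000:.1f} km/s")   # ~10.6 km/s

# Temperature whose thermal speed matches Dawn's 25 km/s xenon exhaust:
t_xe = m_xe * 25_000**2 / (3 * k)
print(f"{t_xe:.1e} K")            # ~3.3 million K
```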

We are well into the design of high-thrust ion engines, and have already deployed several of them. The Dawn spacecraft, launched in 2007, visited the asteroid Vesta (2011) and the dwarf planet Ceres (2015) using a 10-kilowatt ion engine system with 937 pounds (about 425 kg) of xenon propellant, and achieved a record-breaking speed change of over 10 kilometers/sec. It delivered about 0.09 Newtons of peak thrust over 2,000 days of cumulative operation. Compare this with the millions of Newtons of thrust delivered by the Saturn V in a few minutes.
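
These numbers hang together: average thrust is just mass flow rate times exhaust speed, and the rocket equation gives the speed change. A sketch of my own; note that the ~1220 kg launch mass below is my assumed value, not a figure from this post:

```python
import math

# Checking the Dawn figures.  Exhaust speed and propellant load are
# quoted above; the launch mass is an assumption of mine.
m_xenon = 937 * 0.4536        # pounds -> kg, about 425 kg
v_e = 25_000.0                # exhaust speed, m/s
t_burn = 2000 * 86400.0       # 2,000 days of thrusting, in seconds

thrust = (m_xenon / t_burn) * v_e
print(f"{thrust * 1000:.0f} mN")  # ~61 mN average (0.09 N was the peak)

m0 = 1220.0                       # assumed launch mass, kg
dv = v_e * math.log(m0 / (m0 - m_xenon))
print(f"{dv / 1000:.1f} km/s")    # ~10.7 km/s, matching the record
```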

Under laboratory conditions, newer ion engine designs are constantly being developed and tested. The NASA NEXT program has demonstrated over 5.5 years of cumulative operation for a 7-kilowatt ion engine. It processed 862 kg of xenon and produced a maximum thrust of about 0.24 Newtons, delivering a total impulse roughly three times what the Dawn engine achieved over its mission.

On the theoretical side, an extensive research study on the design of megawatt ion engines by David Fearn, presented at the Space Power Symposium of the 56th International Astronautical Congress in 2005, gave some typical characteristics for engines at this power level. The conclusion was that ion engines of this class pose no particular design challenges and can achieve exhaust speeds that exceed 100 km/sec. As a specific example, an array of nine thrusters using xenon propellant would deliver a thrust of 120 Newtons and consume 7.4 megawatts. A relatively small array of thrusters could also achieve exhaust speeds of 1,500 km/sec using lower-mass hydrogen propellant.
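
Those numbers are self-consistent: the kinetic power carried away by the exhaust jet is P = F*v/2, so a quick check of my own gives:

```python
# The kinetic power carried off by the exhaust jet is P = F * v / 2.
F = 120.0         # Newtons, the nine-thruster xenon array
v_e = 100_000.0   # m/s exhaust speed
P_in = 7.4e6      # watts of input power quoted in the study

P_jet = F * v_e / 2
print(f"{P_jet / 1e6:.1f} MW")  # 6.0 MW in the beam
print(f"{P_jet / P_in:.0%}")    # ~81% overall efficiency
```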

Ion propulsion, then, requires megawatts of electrical power in order to produce enough continuous thrust for the high speeds we need for truly fast interplanetary travel.

The bottom line for ion propulsion is the total electrical power that is available to accelerate the propellant ions. Very high-efficiency solar panels (the best space-rated cells convert roughly a third of the sunlight into electricity) work very well near Earth's orbit (300 watts/kg), but produce only about 10 watts/kg near Jupiter, and 0.3 watts/kg near Pluto. That means the future of fast ion-propulsion travel spanning our solar system requires some kind of non-solar-electric fission reactor system (500 watts/kg) to produce the electricity. The history of using reactors in space, though straightforward from an engineering standpoint, is a politically complex one, because of the prevailing fear in the minds of the general public and Congress that a launch mishap will result in a dirty bomb or even a Hiroshima-like event.
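
The fall-off is just the inverse-square law for sunlight. A one-liner of my own makes the quoted values plain (the 5.2 AU and 39.5 AU distances for Jupiter and Pluto are rounded figures I am assuming):

```python
# Specific power of a solar array falls off as 1/r^2 from the Sun.
def specific_power(r_au: float, p_1au: float = 300.0) -> float:
    """Watts per kilogram at r_au AU, given 300 W/kg at Earth (1 AU)."""
    return p_1au / r_au**2

for name, r in [("Earth", 1.0), ("Jupiter", 5.2), ("Pluto", 39.5)]:
    print(f"{name}: {specific_power(r):.2f} W/kg")
# Earth: 300.00, Jupiter: ~11.09, Pluto: ~0.19 W/kg
```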

The Soviet Union launched nuclear reactors into space for decades in its Kosmos series of satellites. Early in 1992, the idea of purchasing a Russian-designed and fabricated space reactor power system and integrating it with a US-designed satellite went from fiction to reality with the purchase of the first two Topaz II reactors by the Strategic Defense Initiative Organization, SDIO (later the Ballistic Missile Defense Organization, BMDO). SDIO also asked the Applied Physics Laboratory in Laurel, MD to propose a mission and design a satellite in which the Topaz II could be used as the power source. Even so, the Topaz II reactor had a mass of 1,000 kg and produced 10 kilowatts, for an efficiency of 10 watts/kg. Due to funding reductions within the SDIO, the Topaz II flight program was postponed indefinitely at the end of Fiscal Year 1993.

Similarly, cancellation was the eventual fate of the US SP-100 reactor program. This program was started in 1983 by NASA, the US Department of Energy and other agencies. It developed a 4,000 kg, 100-kilowatt reactor (efficiency = 25 watts/kg) with heat pipes transporting the heat to thermionic converters.

Proposed SP-100 reactor, ca. 1980 (Image credit: NASA/DoE/DARPA)

Believe it or not, small nuclear fission reactors are becoming very popular as portable ‘batteries’ for running remote communities of up to 70,000 people. The Hyperion Hydride Reactor is not much larger than a hot tub, is totally sealed and self-operating, has no moving parts and, beyond refueling, requires no maintenance of any sort.

Hyperion Uranium Hydride Reactor (Credit: Hyperion, Inc.)

According to the Hyperion Energy Company, the Gen4 reactor has a mass of about 100 tons and is designed to deliver 25 megawatts of electricity over a 10-year lifetime without refueling. The efficiency for such a system is 250 watts/kg! Of course you cannot just slap one of these Bad Boys onto a rocket ship to provide the electricity for the ion engines, but this technology already proves that fission reactors can be made very small and deliver quite the electrical wallop, and do so in places where solar panels are not practical.

Some of the advanced photo-electric systems being developed by NASA and its contractors build on the solar energy technology used in the NASA Deep Space 1 mission and the Naval Research Laboratory's TacSat 4 reconnaissance satellite. They use 'stretched lens array' concentrators that amplify the sunlight falling on the cells by up to 8 times (called eight-sun systems). The solar arrays are also flexible and can be rolled out like a curtain. The technology promises to reach efficiency levels of 1000 watts/kg at less than $50/watt, compared to the 100 watts/kg and $400/watt of current 'one sun' systems that do not use lens concentrators. A 350 kW solar-electric ion engine system has been suggested as the propulsion for a 70-ton crewed mission to Mars. With the most efficient stretched lens arrays currently under design, a 350 kW system would have a mass of only 350 kg and cost about $18 million. The very cool thing about this is that improvements in solar panel technology not only directly benefit space power systems for inner solar system travel, but lead to immediate consumer applications in Green Energy! Imagine covering your roof with a single square meter of high-efficiency panels rather than the entire roof with an unsightly lower-efficiency system!
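
The mass and cost figures follow directly from the watts-per-kilogram and dollars-per-watt numbers; here is the quick arithmetic, as a check of my own:

```python
# Mass and cost of a 350 kW array at the projected and current
# performance figures quoted above.
P = 350_000.0  # watts

for label, w_per_kg, usd_per_w in [("stretched lens", 1000.0, 50.0),
                                   ("current one-sun", 100.0, 400.0)]:
    mass_kg = P / w_per_kg
    cost_musd = P * usd_per_w / 1e6
    print(f"{label}: {mass_kg:,.0f} kg, ${cost_musd:.1f} million")
# stretched lens: 350 kg, $17.5 million (the post's ~$18 million)
# current one-sun: 3,500 kg, $140.0 million
```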

So to really zip around the solar system and avoid the medical problems of prolonged voyages, we really need more work on compact power plant design that is politically realistic. Once we solve THAT problem, even Pluto will be a week’s journey away!

Check back here on Monday, April 24 for my next topic!