Category Archives: Astronomy

Martian Swamp Gas?

Thanks to more than a decade of robotic studies, the surface of Mars is becoming a place as familiar to some of us as similar garden spots on Earth such as the Atacama Desert in Chile, or Devon Island in Canada. But this rust-colored world still has some tricks up its sleeve!

Back in 2003, NASA astronomer Michael Mumma and his team discovered traces of methane in the dilute atmosphere of Mars. The gas was localized to only a few geographic areas in the equatorial zone of the Martian Northern Hemisphere, but this was enough to get astrobiologists excited about the prospects for sub-surface life. The release followed a seasonal pattern, amounting to about 20,000 tons during the local summer months.


The 2003 discovery, made with ground-based telescopes, was confirmed a year later by other astronomers and by the Mars Express orbiter, although the amount is highly variable. Ten years later, the Curiosity rover also detected methane in the atmosphere from its location many hundreds of miles from the nearest ‘plume’ locations. It became clear that the hit-or-miss nature of these detections had to do with the source of the methane turning on and off over time rather than some steady seepage going on all the time. Why was this happening, and did it have anything to do with living systems?

On Earth, there are microorganisms that combine hydrogen with carbon dioxide (CO2) to create methane (CH4) as a by-product, but there are also inorganic processes that create methane. For instance, electrostatic discharges can ionize water and carbon dioxide and can produce trillions of methane molecules per discharge. There is plenty of wind-blown dust in the very dry Martian atmosphere to generate such discharges, so this is not a bad explanation at all.
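
As a sanity check on the numbers above, here is a quick back-of-the-envelope calculation. It assumes, conservatively, 10^12 methane molecules per discharge (the "trillions" quoted above) and asks how many discharges the seasonal 20,000-ton release would require:

```python
AVOGADRO = 6.022e23                 # molecules per mole
CH4_MOLAR_MASS_G = 16.0             # grams per mole of methane
seasonal_release_g = 20_000 * 1e6   # 20,000 metric tons, in grams
molecules_per_discharge = 1e12      # "trillions" per discharge (assumed lower bound)

total_molecules = seasonal_release_g / CH4_MOLAR_MASS_G * AVOGADRO
discharges_needed = total_molecules / molecules_per_discharge
print(f"{total_molecules:.1e} molecules, requiring {discharges_needed:.1e} discharges")
```

The count is enormous, but dust activity on Mars is planet-wide and essentially continuous, so numbers like these constrain the mechanism rather than rule it out.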

This diagram shows possible ways that methane might make it into Mars’ atmosphere (sources) and disappear from the atmosphere (sinks). (Credit: NASA/JPL-Caltech/SAM-GSFC/Univ. of Michigan)

Still, the search for conclusive evidence of methane production and removal is one of the high frontiers in Martian research these days. New mechanisms, involving either living or inorganic origins, are proposed every year. There is even some speculation that the Curiosity rover’s own chemical lab was responsible for the rover’s methane ‘discovery’. Time will tell whether any of these ideas ultimately check out. There seem to be far more geological ways to create a bit of methane than biotic mechanisms, so the odds do not look good that the fleeting traces of methane we do see are produced by living organisms.

What does remain very exciting is that Mars is a chemically active place with more than inorganic molecules in play. In 2014, the Curiosity rover took samples of mudstone and tested them with its on-board spectrometer. The samples were rich in chlorine-bearing organic molecules, including chlorobenzene (C6H5Cl), dichloroethane (C2H4Cl2), dichloropropane (C3H6Cl2) and dichlorobutane (C4H8Cl2). Chlorobenzene is not a naturally occurring compound on Earth: it is used in the manufacture of pesticides, adhesives, paints and rubber. Dichloropropane is used as an industrial solvent in paint strippers, varnishes and furniture-finish removers, and is classified as a carcinogen. There is even some speculation that the abundant perchlorate (ClO4-) in the Martian soil, when heated inside the spectrometer with the mudstone samples, created these new organics.

Mars is a frustratingly interesting place to study because, emotionally, it holds out hope for ultimately finding something exciting that takes us nearer to the idea that life once flourished there, or may still be present below its inaccessible surface. But all we have access to for now is its surface geology and atmosphere. From this we seem to encounter traces of exotic chemistry and perhaps our own contaminants at a handful of parts-per-billion. At these levels, the boring chemistry of Mars comes alive in the statistical noise of our measurements, and our dreams of Martian life are temporarily re-ignited.

Meanwhile, we will not rest until we have given Mars a better shot at revealing traces of its biosphere either ancient or contemporary!

Check back here on Thursday, March 2 for the next essay!

Death By Vacuum

As an astrophysicist, this has GOT to be one of my favorite ‘fringe’ topics in physics. There’s a long preamble story behind it, though!

The discovery of the Higgs boson in 2012, with a mass of 126 GeV (about 130 times that of a proton), was an exciting event. By that time we had a reasonably complete Standard Model of how particles and fields operate in Nature to create everything from a uranium atom and a rainbow to the light of our sun. A key ingredient was a brand-new fundamental field in nature and its associated particle, the Higgs boson. The Standard Model says that all fundamental particles in Nature start out with absolutely no mass, but they all interact with the Higgs field. Depending on how strong this interaction is, like swimming through a container of molasses, they gain different amounts of mass. But the existence of this Higgs field has led to some deep concerns about our world that go well beyond how this particle creates the physical property we call mass.

In a nutshell, according to the Standard Model, all particles interact with the ever-present Higgs field, which permeates all space. The W-particles, for example, interact very strongly with the Higgs field and gain the most mass, while photons do not interact with it at all and remain massless.

The Higgs particles come from the Higgs field, which as I said is present in every cubic centimeter of space in our universe. That’s why electrons in the farthest galaxy have the same mass as those here on Earth. But Higgs particles can also interact with each other. This produces a very interesting effect, like the tension in a stretched spring. A cubic centimeter of space anywhere in the universe is not at all perfectly empty, and actually has a potential energy ‘stress’ associated with it. This potential energy is related to just how massive the Higgs boson is. You can draw a curve like the one below that shows the vacuum energy and how it changes with the Higgs particle mass:

Now the Higgs mass actually changes as the universe expands and cools. When the universe was very hot, the curve looked like the one on the right, and the mass of the Higgs was zero at the bottom of the curve. As the universe expanded and cooled, this Higgs interaction curve turned into the one on the left, which shows that the mass of the Higgs is now X0, or 126 GeV. Note that the Higgs mass represented by the red ball used to be zero, but ‘rolled down’ into the lower-energy pit as the universe cooled.
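
The ‘rolling down’ picture can be sketched with a toy potential. This is not the full Standard Model expression, just the simplest symmetry-breaking curve, in illustrative units, with an energy hill at zero and a lower-energy pit to either side:

```python
# Toy symmetry-breaking potential V(h) = -mu2*h**2 + lam*h**4
# (illustrative units only -- not the actual Standard Model potential)
def V(h, mu2=1.0, lam=0.25):
    return -mu2 * h**2 + lam * h**4

# Setting dV/dh = 0 gives the nonzero minimum at h = sqrt(mu2 / (2*lam))
h_min = (1.0 / (2 * 0.25)) ** 0.5
print(V(0.0), V(h_min))   # the broken-symmetry pit lies below the hilltop at h = 0
```

The nonzero minimum sits below V(0), so once the universe cools, the field settles there, just as the red ball does in the figure.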

The Higgs energy curve shows a very stable situation for ‘empty’ space at its lowest energy (green balls) because there is a big energy wall between where the field is today, and where it used to be (red ball). That means that if you pumped a bit of energy into empty space by colliding two particles there, it would not suddenly turn space into the roaring hot house of the Higgs field at the top of this curve.

We don’t actually know exactly what the Higgs curve looks like, but physicists have been able to model many alternative versions of the above curve to test how stable the vacuum is. What they found is something very interesting.

The many different kinds of Higgs vacua can be defined by using two masses: the mass of the Higgs boson and the mass of the top quark. Mathematically, you can then vary these two values and see what happens to the stability of the vacuum. The results are summarized in the plot below.

The big surprise is that the observed masses of the Higgs boson and the top quark, shown in the small box, place our space inside a very narrow zone of what is called meta-stability. We do not seem to be living in a universe where we can expect space to be perfectly stable. What does THAT mean? It does sound rather ominous that empty space can be unstable!

What it means is that, at least in principle, if you collided particles with enough energy that they literally blow-torched a small region of space, this could change the Higgs mass enough that the results could be catastrophic. Even though the collision region is smaller than an atom, once created, it could expand at the speed of light like an inflating bubble. The interior would be a region of space with new physics, and new masses for all of the fundamental particles and forces. The surface of this bubble would be a maelstrom of high-energy collisions leaking out of empty space! You wouldn’t see the wall of this bubble coming. The walls can contain a huge amount of energy, so you would be incinerated as the bubble wall ploughed through you.

Of course the world is not that simple. These are all calculations based on the Standard Model, which may be incomplete. Also, we know that cosmic rays collide with Earth’s atmosphere at energies far beyond anything we will ever achieve…and we are still here.

So sit back and relax and try not to worry too much about Death By Vacuum.

Then again…

 

Return here on Wednesday, February 22 for my next blog!

The Next Sunspot Cycle

Forecasters are already starting to make predictions for what might be in store as our sun winds down its current sunspot cycle (Number 24) in a few years. Are we in for a very intense cycle of solar activity, or the beginning of a century-long absence of sunspots and a rise in colder climates?

Figure showing the sunspot counts for the past few cycles. (Credit: www.solen.info)

Ever since Samuel Schwabe discovered the 11-year ebb and flow of sunspots on the sun in 1843, predicting when the next sunspot cycle will appear, and how strong it will be, has been a cottage industry among scientists and non-scientists alike. For solar physicists, the sunspot cycle is a major indicator of how the sun’s magnetic field is generated, and the evolution of various patterns of plasma circulation near the solar surface and interior. Getting these forecasts bang-on would be proof that we indeed have a ‘deep’ understanding of how the sun works that is a major step beyond just knowing it is a massive sphere of plasma heated by thermonuclear fusion in its core.

So how are we doing?

For over a century, scientists have scrutinized the shapes of dozens of individual sunspot cycles to glean features that could be used for predicting the circumstances of the next one. Basically, we know that 11 years is an average, and that some cycles are as short as 9 years or as long as 14. The number of sunspots during the peak year, called sunspot maximum, can vary from as few as 50 to as many as 260. The speed with which sunspot numbers rise to a maximum can be as long as 80 months for weaker sunspot cycles, and as short as 40 months for the stronger cycles. All of these features, and many other statistical rules-of-thumb, lead to predictive schemes of one kind or another, but they generally fail to produce accurate and detailed forecasts of the ‘next’ sunspot cycle.

Prior to the current sunspot cycle (Number 24), which spans the years 2008-2019, NASA astronomer Dean Pesnell collected 105 forecasts for Cycle 24. For something as simple as how many sunspots would be present during the peak year, the predictions varied from as few as 40 to as many as 175, with an average of 106 ± 31. The actual number at the 2014 peak was 116. Most of the predictions were based on little more than extrapolating statistical patterns in older data. What we really want are forecasts based upon the actual physics of sunspot formation, not statistics. The most promising physics-based models we have today follow magnetic processes on and below the surface of the sun, and are called Flux Transport Dynamo models.
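
To see how close the crowd of forecasts came, you can compare the observed peak against the spread of predictions, using the numbers quoted above:

```python
forecast_mean, forecast_sd = 106, 31   # spread of the 105 collected Cycle 24 forecasts
observed_peak = 116                    # sunspot number at the 2014 maximum

z = (observed_peak - forecast_mean) / forecast_sd
print(f"The observed peak sits only {z:.2f} standard deviations above the forecast mean")
```

The observed value landed well within one standard deviation of the ensemble mean, so the crowd did respectably even though individual forecasts scattered widely.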

Solar polar magnetic field trends (Credit: Wilcox Solar Observatory)

The sun’s magnetic field is much more fluid than the field of a toy bar magnet. Thanks to the revolutionary work by helioseismologists using the SOHO spacecraft and the ground-based GONG program, we can now see below the turbulent surface of the sun. There are vast rivers of plasma wider than a dozen Earths, which wrap around the sun from east to west. There is also a flow pattern that runs north and south from the equator to each pole. This meridional current is caused by giant convection cells below the solar surface and acts like a conveyor belt for the surface magnetic fields in each hemisphere. The sun’s north and south magnetic fields can be thought of as waves of magnetism that flow at about 60 feet per second from the equator at sunspot maximum to the poles at sunspot minimum, and back again to the equator at the base of the convection cell. At sunspot minimum they are equal and opposite in intensity at the poles, but at sunspot maximum they vanish at the poles and combine and cancel at the sun’s equator.

The difference in the polar waves during sunspot minimum seems to predict how strong the next sunspot maximum will be about 6 years later, as the current returns the field to the equator at the peak of the next cycle. V.V. Zharkova at Northumbria University in the UK uses this to predict that Cycle 25 might continue the declining trend in polar field strength seen in the last three sunspot cycles, and be even weaker than Cycle 24, with far fewer than 100 spots. However, a recent paper by NASA solar physicists David Hathaway and Lisa Upton re-assessed the trends in the polar fields and predicts that the average strength of the polar fields near the end of Cycle 24 will be similar to that measured near the end of Cycle 23, indicating that Cycle 25 will be similar in strength to the current cycle.

But some studies, such as those by Matthew Penn and William Livingston at the National Solar Observatory, suggest that sunspot magnetic field strengths have been declining since about 2000 and are already close to the minimum needed to sustain sunspots on the solar surface. By Cycle 25 or 26, magnetic fields may be too weak to punch through the solar surface and form recognizable sunspots at all, spelling the end of the sunspot cycle phenomenon and the start of another Maunder Minimum cooling period, perhaps lasting until 2100. A quick Google search will turn up a variety of pages claiming that a new ‘Maunder Minimum’ and mini-Ice Age are just around the corner! An interesting on-the-spot assessment of these disturbing predictions was offered back in 2011 by NASA solar physicist C. Alex Young, who concluded from the published evidence that they were probably ‘Much Ado about Nothing’.

What can we bank on?

The weight of history is a compelling guide, and it teaches us that tomorrow will be very much like yesterday. Statistically speaking, the current Cycle 24 should draw to a close about 11 years after the previous sunspot minimum in January 2008, which means sometime in 2019. You can eyeball the figure at the top of this blog and see that that is about right. We entered the Cycle 24 sunspot minimum period in 2016: in February and June we already had two spot-free days. As the number of spot-free days continues to increase in 2017-2018, we will start seeing the new sunspots of Cycle 25 appear sometime in late 2019. Sunspot maximum is likely to occur in 2024, with most forecasts predicting about half as many sunspots as in Cycle 24.

None of the current forecasts suggest Cycle 25 will be entirely absent. A few even hold out some hope that a sunspot maximum equal to or greater than that of Cycle 24, which was near 140, is possible, while others place the peak closer to 60 in 2025.

It seems to be a pretty sure bet that there will be yet-another sunspot cycle to follow the current one. If you are an aurora watcher, 2022-2027 would be the best years to go hunting for them. If you are a satellite operator or astronaut, this next cycle may be even less hostile than Cycle 24 was, or at least no worse!

In any event, solar cycle prediction will be a rising challenge in the next few years as scientists pursue the Holy Grail of creating a reliable theory of why the sun even has such cycles in the first place!

Check back here on Friday, February 17 for my next blog!

Interstellar Travel?

Interstellar travel revolves around the answers to three major questions:

1) Where will we go?

2) What will we do when we get there?

3) How will it benefit folks back on Earth?

Far beyond the issue of whether interstellar travel is technologically possible is the very practical issue of answering these questions long before we turn the first screw in the hardware.

A few of the thousands of stars within 50 light years of Earth.(Credit: Atlas of the Universe)

In science fiction, finding new destinations is usually handled by either manned or unmanned expeditionary forces. Not surprisingly, the hazardous experiences of the manned expeditions are usually the exciting core of the story itself. When we create this technology ourselves, the first trips are usually to a popular nearby star like Alpha Centauri (e.g., Babylon 5). When we are co-opting alien technology, we often have to go to where the aliens previously had outposts, often hundreds or thousands of light years away (e.g., Contact or Stargate). In any event, most stories assume that the travelers can select multiple destinations and hop-scotch their way around the local universe within a single human lifetime to find a habitable planet. Apparently, once you have convinced politicians to fund the first trip, the incremental cost of additional trips is very small!

The whole idea of interstellar travel was created by fiction writers, but because it has many elements of good science in it, it is a very persuasive idea. It is an idea located smack in the middle of the ‘gray area’ between fantasy and reality, and this is what goads people on to try to imagine ways to make it a reality. One thing we do know is that it will be an expensive venture requiring an investment at the level of many percent of an entire planet’s GDP.

By some estimates, the first interstellar voyage will cost many trillions of dollars, require decades to construct, and involve tens to hundreds of passengers and explorers. Assuming a project of this scope can even be sold to the bulk of humanity that will be left behind to pay the bills, how will the destination be selected? Will we just point the ship toward any star and commit these resources to a random journey and outcome, or will we know a LOT about where we are going before the fuel is loaded? Most of us will agree that the latter is more likely for such an expensive ‘one-off’ mission. By the way, let’s not talk about the human Manifest Destiny to explore the unknown. Even Christopher Columbus knew his intended destination in detail (India!), and he traveled within a very benign biosphere, with free breathable air and comfortable gravity, to get there in a few months.

So…where will we go?

Contrary to popular ideas, we will know our destination in great detail long before we leave our solar system. We will know whether the star has any planets, and we will know it has at least one planet in its habitable zone (HZ), where temperatures would allow liquid water to exist. We will know if the planet has an atmosphere or not. We will know its mass and size and, perhaps more importantly, whether the planet has a biosphere. We will not invest perhaps trillions of dollars to study a barren Mars- or Venus-like planet. All of these issues will be worked out by astronomical remote-sensing research at far lower cost than traveling there. If a nearby star does not have detectable planets, we will most certainly NOT mount a trillion-dollar mission just to ‘go and see’!
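
One of these remote-sensing quantities is easy to estimate for yourself. A planet receiving Earth-like sunlight orbits at roughly sqrt(L/Lsun) astronomical units, which gives a first guess at where a star’s HZ is centered. The luminosities below are approximate illustrative values, not precise measurements:

```python
import math

def hz_center_au(luminosity_solar):
    # A planet at sqrt(L / Lsun) AU receives roughly Earth's insolation,
    # a common first approximation for the habitable-zone center.
    return math.sqrt(luminosity_solar)

# Bolometric luminosities in solar units -- approximate, illustrative values
for name, lum in [("Sun", 1.0), ("Tau Ceti", 0.52), ("Proxima Centauri", 0.0017)]:
    print(f"{name}: HZ centered near {hz_center_au(lum):.3f} AU")
```

This is why the HZ of a dim red dwarf like Proxima Centauri sits far closer to its star than Mercury does to ours.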

The 10 nearest star systems are: Proxima Centauri (4.24 light years), Alpha Centauri (4.36), Barnard’s Star (5.96), Luhman 16 (6.59), Wolf 359 (7.78), Lalande 21185 (8.29), Sirius (8.59), Luyten 726-8 (8.72), Ross 154 (9.68) and Ross 248 (10.32). This takes us out to a distance of just over 10 light years from Earth. The prospects for an interesting world to visit are not good.
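
Even these ‘nearby’ distances translate into daunting travel times. A short sketch, assuming cruise speeds of 1% and 10% of light speed (both far beyond current technology):

```python
# Distances in light years, from the list above
stars_ly = {"Proxima Centauri": 4.24, "Barnard's Star": 5.96, "Sirius": 8.59}

for frac_c in (0.01, 0.10):          # assumed cruise speeds as fractions of c
    for name, dist in stars_ly.items():
        # time (years) = distance (light years) / speed (fraction of c)
        print(f"{name} at {frac_c:.0%} of c: {dist / frac_c:.0f} years one way")
```

Even at a tenth of light speed, the closest system is a four-decade one-way trip.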

Proxima Centauri has one recently detected Earth-sized planet orbiting inside its HZ, making it a Venus-like world of no interest. Alpha Centauri B has one unverified Earth-sized planet, but not in the star’s liquid-water HZ: it orbits ten times closer than Mercury. There are no planets larger than Neptune orbiting this star closer than our planet Jupiter. Barnard’s Star has no known planets, but a Jupiter-sized planet inside the orbit of Mars is excluded, so this is still a viable star for future searches for terrestrial planets in its HZ. Luhman 16 is a binary system whose members orbit each other every 25 years at a distance of 3 AU; a possible companion orbits one of these stars every month at a distance closer than Mercury. As for the stars Wolf 359, Lalande 21185, Sirius, Luyten 726-8, Ross 154 and Ross 248, there have been searches for Jupiter-sized companions around all of them, but none have ever been claimed.

So, our nearest stars within 10 light years are pretty bleak as destinations for expensive missions. There is no solid evidence for Earth-sized planets orbiting within the HZs of any of them. These would not be plausible targets because there is so little return on the high cost of getting there, even though that cost, in terms of travel time, is the smallest of any stars in our neighborhood. This also sets a scale for the required technology: it is not enough to reach our nearest star; we will have to trudge 2 to 3 times farther before we find better destinations.

Better Destinations.

Let’s take a bigger step. Out to a distance of 16 light years there are 56 normal stars, which include some promising candidate targets.

Epsilon Eridani (10.52 ly) has one known giant planet outside its HZ. It also has two asteroid belts: one at about three times Earth’s distance from our sun (3 AU) and one at about 20 AU. No one would ever risk a priceless mission by sending it to a sparse planetary system with deadly asteroid belts and no HZ candidates!

Groombridge 34 (11.62 ly) – The only suspected planet has a mass of more than five Earths. No mission would be sent to such a planet, whose atmosphere would probably be crushingly dense, perhaps even Jupiter-like, even if it were in the HZ.

Epsilon Indi (11.82 ly) – has a possible Jupiter-sized planet with a period of more than 20 years. No known smaller planets.

Artist rendering of the planets Tau Ceti e and f (Credit: PHL @ UPR Arecibo)

Tau Ceti (11.88 ly) probably has five planets between two and six times Earth’s mass, with periods from 14 to 640 days. Planet Tau Ceti f is colder than Mars and is at the outer limit of the star’s HZ. Its atmosphere might be dense enough for greenhouse heating, so the world might be habitable after all. But this is guesswork, not certainty.

Kapteyn’s Star (12.77 ly) – It has two planets, Kapteyn b and Kapteyn c, that are 5 to 8 times the mass of Earth. Kapteyn b has a period of 120 days and is a potentially habitable planet estimated to be 11 billion years old. Again, a massive planet whose surface you could never visit, so what is the point of the interstellar expedition?

Gliese 876 (15.2 ly) has four planets. All have more than 6 times the mass of Earth and orbit closer than the planet Mercury. Gliese 876 c is a giant planet like Jupiter in the star’s habitable zone. Would you bet the entire mission that 876c has habitable ‘Galilean moons’ like our Jupiter? This would be an unacceptable shot in the dark, though a tantalizing one.

So, out to 15 light years we have some interesting prospects but no confirmed Earth-sized planet in its star’s HZ whose surface you could actually visit. We also have no solid data on the atmospheres of any of these worlds. None of these candidates seem worth investing the resources of a trillion-dollar mission to reach and study. We can study them all from Earth at far less cost.

Best Destinations.

If we take an even bigger step and consider stars closer than 50 light years, we have a sample of potentially 2000 stars, although not all of them have been discovered and cataloged. About 130 are bright enough to be seen with the naked eye. The majority are dim, cool red dwarf stars, which are still good candidates for planetary systems. In this sample we encounter, among the known planetary candidates, several that would be intriguing targets:

61 Virginis (27.9 ly) – It has three planets with masses between 5 and 25 times our Earth’s, crowded inside the orbit of Venus. The asteroidal debris disk has at least 10 times as many comets as our solar system. There are no detected planets more massive than Saturn within 6 AU. An Earth-mass planet in the star’s habitable zone remains a possibility, but the asteroid belts make this an unacceptably high-risk target.

Gliese 667 planets (Credit: ESO )

Gliese 667 (23.2 ly) – As many as seven planets may orbit this star, but they have not been confirmed. All have masses between that of Earth and Uranus. All but one are huddled inside the orbit of Mercury. Planets c and d are in the star’s HZ and are at least 3 times the mass of Earth. Their hypothetical moons may be habitable.

55 Cancri (40.3 ly) – All five planets orbiting this star are more than five times the mass of Earth. Only 55 Cancri f is located within the star’s HZ, and its hypothetical moons could be habitable. More planets are possible within the stable zone between 0.9 and 3.8 AU if their orbits are circular. This is a system we still need to study.

HD 69830 (40.7 ly) has a debris disk produced by an asteroid belt twenty times more massive than that in our own solar system. Three detected planets have masses between 10 to 18 times that of Earth. The debris disk makes this a high-risk prospect even if there are habitable moons.

HD 40307 (41.8 ly) Five of the six planets orbit very close to the star inside the orbit of Mercury. The fifth planet orbits at a distance similar to Venus and is in the system’s habitable zone. The planets range in mass from three to ten times Earth. Again, is a planet in its HZ with a mass too great for a direct human visit a good candidate? I don’t think so.

Upsilon Andromedae (44.25 ly) The two outer planets are in orbits more elliptical than any of the planets in the Solar System. Upsilon Andromedae d is in the system’s habitable zone, has three times the mass of Jupiter, with huge temperature swings. Its hypothetical moons may be habitable.

47 Ursae Majoris (45.9 ly) – Its giant planet 47 Ursae Majoris b is more than twice the mass of Jupiter and orbits between Mars and Jupiter. The inner part of the habitable zone could host a terrestrial planet in a stable orbit. None yet detected.

There are still many more stars in this sample to detect, catalog and study, so it is possible that a Goldilocks planet could eventually be found. But we are now looking at destinations a minimum of more than 20 light years away. This will considerably increase the cost and duration of any interstellar mission, by factors of five to ten compared with a simple jaunt to Alpha Centauri.

Other issues.

Would you really consider a planet with two to five times Earth’s gravity to be a candidate? Who would want to live under that crushing weight? Many of the candidates we have found so far are massive Earths that few colonists would consider standing upon. Their surfaces are also technologically expensive to reach and leave. But perhaps these worlds might have moons with more comfortable gravities? There is always hope, but will that be enough to risk a multi-trillion-dollar mission?

There is also the issue of atmosphere. None of the candidate planets we have discussed transit their stars, so we cannot probe their atmospheres to determine whether they exist or whether their trace gases would be lethal. The perfect destination worth the expense of a trip would have a breathable atmosphere with oxygen. Since free oxygen is only produced by living systems, our target planet would have a biosphere. We can only hope that as the surveys of the nearby stars continue, we will find one of these. But statistics suggest we will have to search much farther than 50 light years, and a few thousand stars, before we encounter one. That makes the interstellar voyage even more costly, not by factors of five or ten, but potentially hundreds of times. If the trip were to a world with a known biosphere, THAT might be worth the effort; very likely nothing less would justify the cost, the risk, and the scientific return.
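
Transits matter because the fraction of starlight a planet blocks goes as the square of the planet-to-star radius ratio, which is tiny for an Earth around a Sun-like star. A quick illustration (the 0.15-solar-radius red dwarf is an assumed, typical value):

```python
R_EARTH_KM, R_SUN_KM = 6371.0, 696_000.0

def transit_depth(r_planet_km, r_star_km):
    # Fraction of starlight blocked as the planet crosses the stellar disk
    return (r_planet_km / r_star_km) ** 2

print(f"Earth transiting the Sun: {transit_depth(R_EARTH_KM, R_SUN_KM):.1e}")
print(f"Earth transiting a red dwarf: {transit_depth(R_EARTH_KM, 0.15 * R_SUN_KM):.1e}")
```

An Earth crossing a Sun-like star dims it by less than 0.01%, which is why Earth-sized planets are so much easier to detect transiting small red dwarfs.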

So the bottom line is that the only interstellar destination worth the expense is either a planet with a lethal atmosphere on which colonists can live comfortably, hermetically sealed under a dome, or a similar planet with a breathable oxygen atmosphere and a biosphere. Statistically, we will find far more examples of the first kind of target than the second. But in the majority of cases, we will not be able to detect the atmosphere of an Earth-sized world in its habitable zone before we start the trip, and will have to ‘guess’ whether it even has an atmosphere at all!

The enormous cost of an interstellar trip to a target tens or even hundreds of light years away will preclude any guess work about what we will find when we get there. Consider this: Investing $100 billion to travel to Mars, a low-risk planet we thoroughly understand in detail, is still considered a political pipe dream even with existing technology! What would we call a trip that costs perhaps 100 times as much?

For more on this topic, have a look at my book ‘Interstellar Travel: An Astronomer’s Guide’, available at Amazon.com.

 

Return here for my next blog posting on Friday, February 3!

Why NASA needs ARMs

In 2013, a small 20-meter asteroid exploded over the city of Chelyabinsk, injuring nearly 1,500 people, mostly from flying glass. Had this asteroid exploded a few hours earlier over New York City, the flying-glass hazard could have been lethal for thousands of people, sending thousands more into hospital emergency rooms for critical care. Of all the practical benefits of space exploration, it is hard to argue that asteroid investigations are not a higher priority than dreams of colonizing the moon and Mars.

So why is it that the only NASA mission to actually try a simple method to adjust the orbit of an asteroid cannot seem to garner much support?

There has been much debate over the next step in human exploration: whether to go back to the moon or take the harder path to Mars. The latter goal has been much favored, and for the last decade or so NASA has developed a step-by-step ‘Journey to Mars’ approach, beginning with the development of the SLS launch vehicle and the testing of the many systems, technologies and strategies needed to support astronauts making this trip both quickly and safely. Along with the numerous Mars mapping and rover missions now in progress or soon to be launched, there are also technology-development missions to test such things as solar-electric ‘ion’ propulsion systems.

One of these test-bed missions with significant scientific returns is the Asteroid Redirect Mission (ARM), to be launched around 2021 at a cost of about $1.4 billion. This first-of-its-kind robotic mission will visit a large near-Earth asteroid, collect a multi-ton boulder from its surface, and use it in an enhanced gravity-tractor asteroid-deflection demonstration. The spacecraft will then redirect the multi-ton boulder into a stable orbit around the moon, where astronauts will explore it and return with samples in the mid-2020s.

But all is not well for ARM.

ARM was proposed in 2010 during the Obama Administration as an alternative to the canceled Constellation Program of the Bush Administration, so with the new GOP-dominated administration set on dismantling the Obama Administration’s legacy work, there is much incentive to eliminate it for political reasons alone.

Reps. Lamar Smith (R-Texas), chairman of the House Committee on Science, Space, and Technology, and Brian Babin (R-Texas), chairman of its Space Subcommittee, reportedly feel that the incoming Trump administration should be "unencumbered" by decisions made by the current one, much as they intend with the Affordable Care Act. They claim to want "honest assessments" of ARM's value rather than "farcical studies scoped to produce a predetermined outcome." The House's version of the FY 2017 appropriations bill includes wording that would force NASA to fully defund the ARM program. Furthermore, Smith and Babin wrote, "the next Administration may find merit in some, if not all, of the components of ARM, and continue the program; however, that decision should be made after a full and fair review based on the merits of the program and in the context of a larger exploration and science strategy." Similar arguments will no doubt be used to cancel climate change research, which the incoming administration has also deemed politically biased and unscientific.

But ARM is no ordinary 'exploration and science' space mission, even setting aside its unique ability to test the first high-power ion engines for interplanetary travel and to retrieve a large, pristine, multi-ton asteroid sample. All other NASA missions have certainly demonstrated their substantial scientific returns, and this is often the key justification that allows them to proceed. Mission technology also affords unique spinoff opportunities in the commercial sector that make the US aerospace industrial base very happy to participate. But these returns all seem rather abstract, and for the person on the street, rather hard to appreciate.


For decades, astronomers have been discovering and tracking hundreds of thousands of asteroids. We live in an interplanetary shooting gallery: some 15,000 Near Earth Objects have already been discovered, with about 30 new ones added every week. NEOs, by the way, are asteroids whose orbits come within 30 million miles of Earth's orbit. Statistically, over 90% of the NEOs measuring 1 kilometer or more have now been identified, but only about 27% of those 140 meters or larger have been discovered. Once their orbits are determined, we can make predictions about which ones will pose a danger to Earth.

Currently there are 1,752 potentially hazardous asteroids that come within 5 million miles of Earth (about 20 times the Earth-moon distance). None are predicted to impact Earth in the next 100 years, but new ones are found every week. Between now and February 2017, one object called 2016 YJ, about 30 meters across, will pass within 1.2 lunar distances of Earth. The list of closest approaches in 2016 is quite exciting to look through. The object 2016 QA2, discovered in 2016 in the nick of time, was about 70 meters across and came within 53,000 miles of Earth; its impact would have been an event at least as severe as Chelyabinsk. Even larger, and far more troubling, close encounters are predicted for the 325-meter asteroid Apophis in 2029, which will pass well within the cloud of man-made satellites that surrounds Earth, and for the 1-kilometer asteroid 2001 WN5 in 2028.

The first successful forecast of an impact event was made on 6 October 2008 when the asteroid 2008 TC3 was discovered. It was calculated that it would hit the Earth only 21 hours later. Luckily it had a diameter of only three meters and did not cause any damage. Since then, some stony remnants of the asteroid have been found. But this object could just as easily have been a 100-meter object exploding over New York City or London, with devastating consequences.
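To put these sizes in perspective, a rough impact-energy estimate shows how fast the hazard grows with diameter. The density and speed below are illustrative assumptions for a typical stony asteroid, not measured values for any particular object:

```python
import math

def impact_energy_kilotons(diameter_m, density=3000.0, speed=19000.0):
    """Kinetic energy of a stony impactor, in kilotons of TNT (1 kt = 4.184e12 J)."""
    radius = diameter_m / 2
    mass = density * (4.0 / 3.0) * math.pi * radius**3   # kg, assuming a sphere
    energy_j = 0.5 * mass * speed**2
    return energy_j / 4.184e12

for d in (4, 20, 100):
    print(f"{d:>3} m object: ~{impact_energy_kilotons(d):,.0f} kilotons of TNT")
```

Under these assumptions, the ~20-meter case comes out near 500 kilotons, in the same ballpark as published estimates for the Chelyabinsk airburst, while a 100-meter object releases thousands of times the energy of the Hiroshima bomb (~15 kilotons).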

So in terms of planetary defense, asteroids are a dramatically important hazard we need to study. For some asteroids, we may have as little as a year to decide what to do. Although many mitigation strategies have been proposed, none have actually been tested! We need to test as many different orbit-changing strategies as we can before the asteroid with Earth’s name written on it is discovered.

Honestly, what more practical benefit can there be for a NASA mission than to materially protect Earth and our safety?

Check back here on Thursday, January 5 for the next installment!

To Pluto in 30 days!

OK…While everyone else is worrying how to get to Mars, let’s take a really big step and figure out how to get to Pluto….in a month!

The biggest challenge for humans is surviving the long-term rigors of space hazards, but those hazards are nearly eliminated if we keep our travel times down to a few weeks.

Historically, NASA spacecraft such as the Pioneer, Voyager and New Horizons missions have taken many years to get as far away from Earth as Pluto. The New Horizons mission was the fastest and most direct of these. Its Atlas V launch vehicle gave it an initial speed of 58,000 km/hr. With a brief gravity assist by Jupiter, its speed was boosted to 72,000 km/hour, and the 1000-pound spacecraft made it to Pluto in 9.5 years. We will have to do a LOT better than that if we want to get there in 1 month!

The arithmetic of the journey is quite simple: Good old speed = distance / time. But if we gain a huge speed to make the trip, we have to lose this speed to arrive at Pluto and enter orbit. The best strategy is to accelerate for the first half, then turn the spacecraft around and decelerate for the second half of the trip. The closest distance of Pluto to Earth is about 4.2 billion kilometers (2.7 billion miles). That means that for 15 days and 2.1 billion kilometers, you are traveling at an average speed of 5.8 million kilometers per hour!

Astronomers like to use kilometers/second as a speed unit, so this becomes about 1,600 km/sec. By comparison, the New Horizons speed was 20 km/sec. Other fast things in our solar system include the orbit speed of Mercury around the sun (57 km/s), the average solar wind speed (400 km/s) and a solar coronal mass ejection event (3,000 km/s).

If our spacecraft generated constant thrust by running its engines all the time, it would produce a uniform acceleration from minute to minute. We can calculate how much this is using the simple formula distance = ½ x acceleration x time-squared. With the distance as 2.1 billion km and the time as 15 days, we get about 0.0025 km/sec/sec, or 2.5 meters/sec/sec. Earth's gravity is 9.8 meters/sec/sec, so we would feel an 'artificial gravity' of about 0.25 Gs: a gentle push somewhere between lunar and Martian gravity, rather than outright weightlessness, for the whole journey!

If the rocket is squirting fuel (reaction mass) out its engines to produce the thrust, we can estimate that the exhaust speed has to be about 1,600 km/sec, comparable to the speed change required. Rocket engines are compared in terms of their Specific Impulse (SI), which is the exhaust speed divided by the acceleration of gravity at Earth's surface, so if the exhaust speed is 1,600 km/sec, then SI = 160,000 seconds. For chemical rockets like the Saturn V, SI is only about 260 seconds!
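Redoing this trip arithmetic in a few lines of Python makes the whole profile easy to check (a back-of-the-envelope sketch, not a mission design):

```python
# Kinematics for a 30-day trip to Pluto: accelerate for the first
# half of the distance, then flip around and decelerate for the second half.
distance_m = 4.2e12          # Earth-Pluto at closest approach, ~4.2 billion km
half_d = distance_m / 2      # distance covered while accelerating
half_t = 15 * 86400          # 15 days, in seconds

# distance = 1/2 * a * t^2  ->  a = 2d / t^2
accel = 2 * half_d / half_t**2           # m/s^2
peak_speed = accel * half_t              # m/s, at the turnover point
avg_speed = half_d / half_t              # m/s, over each leg
g_load = accel / 9.81                    # fraction of Earth surface gravity

# Specific impulse if the exhaust speed is ~1,600 km/s, as estimated above
isp = 1.6e6 / 9.81                       # seconds

print(f"acceleration ~ {accel:.2f} m/s^2 ({g_load:.2f} g)")
print(f"average speed ~ {avg_speed/1000:.0f} km/s, peak ~ {peak_speed/1000:.0f} km/s")
print(f"specific impulse ~ {isp:,.0f} s")
```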

What technology do we need to get to these speeds and specific impulses?

The most promising technology we have today is the ion rocket engine, which has SIs in the range of 2,000 to 30,000 seconds. The largest ion engine designs include the VASIMR engine, a proposed 200-megawatt, nuclear-electric ion engine that could conceivably get us to Mars in 39 days. Ion engines are limited by the electrical power used to accelerate the ions (currently in the kilowatt range, though gigawatts are possible with nuclear power plants) and by the mass of the ions themselves (currently xenon atoms).

Other designs propose riding the pressure of sunlight using solar sails. Although this works on the outward-bound leg of the trip, it is very difficult to return to the inner solar system! The familiar technique of 'tacking into the wind' will not work: a sailboat tacks by playing the water against the wind, and a sail in space has no second medium to push against. Laser propulsion systems have also been considered, but for payloads with appreciable mass the power requirements rival the total electrical power generated by a large fraction of the world.

So, some version of ion propulsion with gigawatt power plants (fission or fusion) may do the trick. Because the SIs are so very large, the amount of fuel can be a small fraction of the payload mass, and these ships may look very much like those fantastic ships we often see in science fiction after all!
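The link between specific impulse and fuel fraction comes from the Tsiolkovsky rocket equation. Here is a small sketch; the 10 km/s delta-v and the SI values are illustrative round numbers, not figures for any specific mission:

```python
import math

def propellant_fraction(delta_v_ms, isp_s, g0=9.81):
    """Tsiolkovsky rocket equation: the fraction of initial mass that must be propellant."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * g0))   # m_initial / m_final
    return 1 - 1 / mass_ratio

dv = 10_000.0  # a Mars-class delta-v of ~10 km/s (illustrative)
for name, isp in [("chemical (SI ~ 250 s)", 250),
                  ("ion (SI ~ 3,000 s)", 3000),
                  ("advanced ion (SI ~ 30,000 s)", 30000)]:
    print(f"{name}: propellant is {100 * propellant_fraction(dv, isp):.1f}% of launch mass")
```

For the same speed change, the chemical rocket is almost entirely propellant, while the highest-SI ion engine needs only a few percent of its launch mass as propellant. That is why high specific impulse is the key to ships that are mostly payload.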

Oh…by the way, the same technology that would get you to Pluto in 30 days would get you to Mars in 9 days and the Moon in 5 minutes.

Now, wouldn’t THAT be cool?

If you want to see some more ideas about interplanetary travel, have a look at my book 'Interplanetary Travel: An Astronomer's Guide', available at amazon.com.

Check back here on Monday, January 2 for the next installment!

Selling Ice to Eskimos

Looking beyond our first journeys to Mars in the 2030s, and perhaps the outposts we may set up there in the 2040s, frequently mentioned plans for the commercialization of space often bring up the prospect of interplanetary mining. A bit of careful thought can clarify the prospects for such a venture, if we are willing to confront the facts honestly.

The biggest challenge is that the inner solar system out to the asteroid belt is vastly different than the outer solar system from Jupiter to the distant Kuiper Belt. It is as though they occupy two completely separate universes, and for all intents and purposes, they do!

The inner solar system is all about rocky materials, either on accessible planetary surfaces and their moons, or in the form of asteroids like Vesta, shown in the photo here. We have studied a representative sample of them, and they are rich in metals, silicates and carbon-water compounds. There are fantastic raw materials here for creating habitats, building high-tech industries, and synthesizing food.

Humans tend to ‘follow the water’ and we know that the polar regions of Mercury and the Moon have water-ice locked away in permanently shadowed craters under the regolith. Mars is filthy rich with water-ice, which forms the permanent core of its polar caps, and probably exists below the surface in the ancient ocean basins of the Northern Hemisphere. Many asteroids in the outer belt are also rich in water, as are the occasional cometary bodies that pass through our neighborhood dozens of times a year.

The inner solar system is also compressed in space. Typical closest distances between its four planets can be about 30 million miles, so the technological requirements for interplanetary travel are not so bad. Over the decades, we have launched about 50 spacecraft to inner solar system destinations for a modest sum of money and rocketry skill.

The outer solar system is quite another matter.

Just to get there we have to travel over 500 million miles to reach Jupiter, ten times the distance to Mars when closest to Earth. The distances between destinations in the outer solar system are close to one billion miles! We have sent ten spacecraft to study these destinations. You cannot land on any of the planets there, only on their moons. Even so, many of these moons (e.g., those near Jupiter) are inaccessible to humans due to the intense radiation belts of their planets.

The most difficult truth to deal with in the outer solar system is the quality of the resources we will find there. It is quite clear from astronomical studies and spacecraft visits that the easiest accessible resources are various forms of water and methane ice. What little rocky material there is, is typically buried under hundreds of kilometers of ice, like Saturn’s moon Enceladus shown here, or at the cores of the massive planets. The concept of mining in the outer solar system is one of recovering ice, which has limited utility for fabricating habitats or being used as fuel and reaction mass.

The lack of commercializable resources in the outer solar system is the biggest impediment to developing future ‘colonization’ plans for creating permanent, self-sustaining outposts there. This is dramatically different than what we encounter in the inner solar system where minable resources are plentiful, and water is far less costly to access than in the outer solar system.

Astronomically speaking, we will have much to occupy ourselves in developing the inner solar system for human access and commercialization, but there is a big caveat. Mined resources cannot be brought back to Earth, no matter how desirable the gold, platinum and diamonds uncovered might be. The overhead costs to mine and ship these desirable resources are so high that they will never be able to compete with similar resources mined on Earth. Like they say about Las Vegas: 'what is mined in space, stays in space'. Whatever resources we mine will be used to serve the needs of habitats on Mars and elsewhere, where the mining costs are just part of the high-cost bill for having humans in space in the first place.

The good news, however, is that the outer solar system will be the playground for scientific research and, who knows, perhaps even tourism. The same commercial pressures that will drive rocket technology to get us to Mars in 150 days will then force these trips to take months, then weeks, then days. Once we can get to Mars in a week or less, we can get to Pluto in a handful of months rather than the current ten-year journeys. As in so many other historical situations, scientific research and tourism become viable goals for travel as partners to the political or commercial competition, whether the race to India in the 1500s, the Moon in the 1960s, or Mars in the 2000s.

In the grand scheme of things, we have all the time in the world to make this happen!

For more about this, have a look at my book 'Interplanetary Travel: An Astronomer's Guide' for details about resources, rocket technology, and how to keep humans alive, based upon the best current ideas in astronomy, engineering, psychology and space medicine. Available at amazon.com.

Check back here on Friday, December 30 for the next installment!

Quantum Gravity…Oh my!

So here’s the big problem.

Right now, physicists have a detailed mathematical model for how the fundamental forces in nature work: electromagnetism, and the strong and weak nuclear forces. Added to this is a detailed list of the fundamental particles in nature like the electron, the quarks, photons, neutrinos and others. Called the Standard Model, it has been extensively verified and found to be an amazingly accurate way to describe nearly everything we see in the physical world. It explains why some particles have mass and others do not. It describes exactly how forces are generated by particles and transmitted across space. Experimenters at the CERN Large Hadron Collider are literally pulling out their hair to find errors or deficiencies in the Standard Model that go against the calculated predictions, but have been unable to turn up anything yet. They call this the search for New Physics.

Alongside this accurate model for the physical forces and particles in our universe, we have general relativity and its description of gravitational fields and spacetime. But GR provides no quantum-level account of how this field is generated by matter and energy, and no description of the quantum structure of matter and forces in the Standard Model. GR and the Standard Model speak two very different languages and describe two very different physical arenas. For decades, physicists have tried to find a way to bring these two great theories together, and the results have been promising but untestable. A description of gravitational fields built on the same principles as the Standard Model has come to be called Quantum Gravity.

The many ideas that have been proposed for Quantum Gravity are all deeply mathematical, and only touch upon our experimental world very lightly. You may have tried to read books on this subject written by the practitioners, but like me you will have become frustrated by the math and language this community has developed over the years to describe what they have discovered.

The problem faced by Quantum Gravity is that gravitational fields only seem to display their quantum features at the so-called Planck Scale of 10^-33 centimeters and 10^-43 seconds. I can't write this blog using scientific notation, so I am using the shorthand that 10^3 means 1000 and 10^8 means 100 million; similarly, 10^-3 means 0.001, and so on. The Planck scale also corresponds to an energy of 10^19 GeV, or 10 billion billion GeV, which is 1000 trillion times higher than current particle accelerators can reach.
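These Planck-scale numbers are not arbitrary; they follow from combining the three fundamental constants ħ, G and c. A short Python check (standard CODATA constant values):

```python
import math

# Fundamental constants, SI units
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)        # Planck length, meters
t_planck = math.sqrt(hbar * G / c**5)        # Planck time, seconds
E_planck = math.sqrt(hbar * c**5 / G)        # Planck energy, joules
E_planck_GeV = E_planck / 1.602176634e-10    # 1 GeV = 1.602e-10 J

print(f"Planck length ~ {l_planck:.2e} m  (~ 10^-33 cm)")
print(f"Planck time   ~ {t_planck:.2e} s  (~ 10^-43 s)")
print(f"Planck energy ~ {E_planck_GeV:.2e} GeV  (~ 10^19 GeV)")
```

These are the only combinations of ħ, G and c with units of length, time and energy, which is why physicists expect quantum gravity to show itself at exactly these scales.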

There is no known technology that can reach the scales where these effects could be measured in order to test these theories. Even the concept of measurement itself breaks down! This happens because the very particles (photons) you would use to probe physics at the Planck scale carry so much energy that they turn into quantum black holes, unable to tell you what they saw or detected!

One approach to QG is called Loop Quantum Gravity.  Like relativity, it assumes that the gravitational field is all there is, and that space and time become grainy or ‘quantized’ near the Planck Scale. The space and time we know and can experience in-the-large is formed from individual pieces that come together in huge numbers to form the appearance of a nearly-continuous and smooth gravitational field.

The problem is that you cannot visualize what is going on at this scale because it is represented in the mathematics, not by nuggets of space and time, but by more abstract mathematical objects called loops and spin networks. The artist rendition above is just that.

So here, as for Feynman Diagrams, we have a mathematical picture that represents a process, but the picture is symbolic and not photographic. The biggest problem, however, is that although it is a quantum theory for gravity that works, Loop Quantum Gravity does not include any of the Standard Model particles. It represents a quantum theory for a gravitational field (a universe of space and time) with no matter in it!

In other words, it describes the cake but not the frosting.

The second approach is string theory. This theory assumes there is already some kind of background space and time through which another mathematical construct called a string, moves. Strings that form closed loops can vibrate, and each pattern of vibrations represents a different type of fundamental particle. To make string theory work, the strings have to exist in 10 dimensions, and most of these are wrapped up into closed balls of geometry called Calabi-Yau spaces. Each of these spaces has its own geometry within which the strings vibrate. This means there can be millions of different ‘solutions’ to the string theory equations: each a separate universe with its own specific type of Calabi-Yau subspace that leads to a specific set of fundamental particles and forces. The problem is that string theory violates general relativity by requiring a background space!

In other words, it describes the frosting but not the cake!

One solution proposed by physicist Lee Smolin is that Loop Quantum Gravity is the foundation for creating the strings in string theory. If you looked at one of these strings at high magnification, its macaroni-like surface would resolve into a bunch of loops knitted together, perhaps like a medieval chainmail suit of armor. The problem is that Loop Quantum Gravity does not require a gravitational field with more than four dimensions (3 of space and one of time), while strings require ten or even eleven. Something is still not right, and right now no one really knows how to fix this. Lacking actual hard data, we don't even know if either of these theories is closer to reality!

What this hybrid solution tries to do is find aspects of the cake that can be re-interpreted as particles in the frosting!

This work is still going on, but a few things have been learned along the way about the nature of space itself. At our scale, it looks like a continuous gravitational field criss-crossed by the worldlines of atoms, stars and galaxies. This is how it looks even at the atomic scale, because now you get to add in the worldlines of the innumerable 'virtual particles' that make up the various forces in the Standard Model. But as we zoom down to the Planck Scale, space and spacetime stop being smooth like a piece of paper and start to break up into something else, which we think reveals the grainy nature of gravity as a field composed of innumerable gravitons buzzing about.

But what these fragmentary elements of space and time ‘look’ like is impossible to say. All we have are mathematical tools to describe them, and like our attempts at describing the electron, they lead to a world of pure abstraction that cannot be directly observed.

If you want to learn a bit more about the nature of space, consider reading my short booklet 'Exploring Quantum Space', available at amazon.com. It describes the amazing history of our learning about space, from ancient Greek 'common sense' ideas to the highlights of mind-numbing modern quantum theory.

Check back here on Thursday, December 22 for the last blog in this series!

What IS space?

One thing that is true about physics is that it involves a lot of mathematics, and we often use that mathematics to help us visualize what is going on in the world. But as I said in an earlier blog, this 'vision thing' in math can sometimes let you mistake the model for the real thing, as in the case of the electron. The same problem emerges when we try to understand an invisible thing like space.

The greatest discovery about space was made by Einstein just before 1915, as he was struggling to turn his special theory of relativity into something more comprehensive.

Special relativity was his theory of space and time that described how various observers would see a consistent world despite their uniform motion at high speeds. This theory alone revolutionized physics, and has been the mainstay of modern quantum mechanics, as well as of the designs of powerful accelerators that successfully and accurately push particles to nearly the speed of light. The problem was that special relativity had no natural place for accelerated motion, especially in gravitational fields, which are of course very common in the universe.

Geometrically, special relativity only works when worldlines are perfectly straight and form lines within a perfectly flat, 4-dimensional spacetime (a mathematical arena where three dimensions of space are combined with one dimension of time). But accelerated motion causes worldlines to curve, and you cannot magically make the curves straight again, and keep the spacetime geometrically flat, just by choosing another coordinate system.

Special relativity, however, promised that so long as motion is at constant speed and worldlines are straight, two different observers (coordinate systems) would agree about what they are seeing and measuring by using the mathematics of special relativity. With curved worldlines and acceleration, the equations of special relativity, called the Lorentz Transformations, would not work as they were. Einstein was, shall we say, annoyed by this because clearly there should be some mathematical process that would allow the two accelerated observers to again see ( or calculate) consistent physical phenomena.

He began his mathematical journey to fix this problem by writing his relativity equations in a coordinate-independent way using the techniques of tensor analysis. But he soon found himself frustrated by the gap between what he needed to accomplish this mathematical miracle and his own knowledge of advanced analytic geometry in four dimensions. So he went to his classmate and math whiz, Marcel Grossmann, who immediately recognized that Einstein's mathematical needs were just an awkward way of stating certain properties of the non-Euclidean geometry developed by Bernhard Riemann and others in the mid-to-late 1800s.

This was the missing math that Einstein needed. Being a quick learner, he mastered this new language and applied it to relativity. After an intense year of study and some trial-and-error mathematical efforts, he published his complete Theory of General Relativity in November 1915. Just as the concept of spacetime did away with space and time as independent ideas in special relativity, his new theory made an even bigger, revolutionary discovery.

It was still a theory of the geometry of worldlines that he was proposing, but now the geometric properties of those worldlines were controlled by a specific mathematical object called the metric tensor. This object is fundamental to all geometry, as Grossmann had shown him: it allows you to calculate distances between points in space, and it defines what a 'straight line' means as well as how curved the space is. Amazingly, when you translated all this geometric talk into the hard, cold reality of physics in four dimensions, the metric tensor turned into the gravitational field, through which the worldline of a particle is defined as the straightest-possible path.
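In the notation Grossmann taught him, the two jobs of the metric tensor can be sketched in a pair of standard formulas: the distance rule between neighboring spacetime points, and the equation for the 'straightest possible path' (the geodesic) that follows from it:

```latex
% Distance between two neighboring points in spacetime,
% set by the metric tensor g:
ds^2 = g_{\mu\nu}\, dx^{\mu}\, dx^{\nu}

% The straightest-possible path (geodesic) through that geometry,
% where the Gammas are built from derivatives of g:
\frac{d^2 x^{\lambda}}{d\tau^2}
  + \Gamma^{\lambda}_{\mu\nu}\,
    \frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau} = 0
```

When g describes flat spacetime, the second equation reduces to straight-line motion; when matter and energy curve g, the same equation is what we experience as gravity.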

An interesting factoid, indeed, but why is it so revolutionary?

All other fields in physics (e.g., the electromagnetic field) are defined by some quantity, call it A, that is specified at each coordinate point in space and time: A(x,y,z,t). If you take away the field, the coordinate grid remains intact. But with the gravitational field there is no background coordinate grid to define its intensity; instead, the gravitational field provides its own coordinate grid, because it is identical to the metric tensor!

This is why Einstein and physicists say that gravity is not a force like the others we know about, but instead it is a statement about the shape of the geometry of spacetime through which particles move. (Actually, particles do not move through spacetime. Their histories from start to finish simply exist all at once like a line drawn on a piece of paper!)

So, imagine a cake with frosting on it. The frosting represents the various fields in space: you can locate where they are and how much frosting there is from place to place. But it is the bulk of the cake that supports the frosting and tells you 'this is the top, the center, the side of the cake'. Take away the cake, and the frosting is unsupported and can't even be defined in the first place. Similarly, take away the gravitational field, symbolized by Einstein's metric tensor, and spacetime itself disappears!

Amazingly, Einstein's equations say that although matter and energy produce gravitational fields, you can have situations with no matter and energy where spacetime still doesn't vanish! These vacuum solutions are real head-scratchers when physicists try to figure out how to combine quantum mechanics, our premier theory of matter, with general relativity, our premier theory of gravity and spacetime. The vacuum solutions represent gravitational fields in their purest form, and are the starting point for learning how to describe the quantum properties of gravitational fields. They are also important to the existence of gravitational waves, which move from place to place as ripples in the empty spacetime between the objects producing them.

But wait a minute. Einstein originally said that ‘space’ isn’t actually a real thing. Now we have general relativity, which seems to be bringing space (actually spacetime) back as something significant in its own right as an aspect of the gravitational field.

What gives?

To see how some physicists resolve these issues, we have to delve into what is called quantum gravity theory, and this finally gets us back to some of my earlier blogs about the nature of space, and why I started this blog series!

 

Check back here on Wednesday, December 21 for the last installment on this series about space!

Is Infinity Real?

In the daytime, you are surrounded by trees, buildings and the all-too-familiar accoutrements of Nature, which evolution designed us to appreciate and find familiar. But at night we see an unimaginably different view: the dark, starry sky, with no sense of perspective or depth. It is easy to understand how the Ancients took it for a celestial ceiling with pinpoint lights arrayed in noteworthy patterns. Many millennia of campfires were spent trying to figure it out.

We are stuck in the middle ground between two vast scales that stretch before us and within us. Both, we are told, lead to the infinitely-large and the infinitely-small. But is this really true?

Astronomically, we can detect objects that emerged from the Big Bang nearly 14 billion years ago, which means their light-travel distance from us is 14 billion light years or 13,000,000,000,000,000,000,000,000,000 centimeters. This is, admittedly, a big number but it is not infinitely-large.

In the microcosm, we have probed the structure of electrons down to scales of 0.000000000000000000001 centimeters and found no sign of any internal structure. So again, there is no sign that we have reached anything like an infinitely-small limit to Nature either.

When it comes right down to it, the only evidence we have for the universe being infinitely large (or other aspects of it being infinitely small) lies in the mathematics and geometry we use to describe it. Since infinity is larger than any number you could ever count to, even the scale of our visible universe, 13,000,000,000,000,000,000,000,000,000 centimeters, falls woefully short of being even a relatively stupendous number by comparison.
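For the curious, a quick Python check shows just how far even the largest and smallest physical scales we can talk about are from infinity:

```python
# The two extreme scales discussed in this post, in centimeters
light_year_cm = 9.461e17                 # one light year
universe_cm = 14e9 * light_year_cm       # ~14 billion light-year light-travel distance
planck_cm = 1.616e-33                    # the Planck length

ratio = universe_cm / planck_cm          # largest scale divided by smallest

print(f"visible universe ~ {universe_cm:.1e} cm")
print(f"Planck length    ~ {planck_cm:.1e} cm")
print(f"ratio of largest to smallest ~ {ratio:.1e}")
```

The ratio is enormous, around 10^61, yet it is still a perfectly finite number. Nothing we can measure gets us anywhere near infinity.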

The idea of infinity is as old as the Ancient Greeks. But even Aristotle (384-322 BCE) would only allow the integers (1, 2, 3, ...) to be potentially infinite, not actually infinite, in quantity. Since then infinity, and its cousin eternity, have become part of our literary and religious vernacular for anything really, really, really... big or old! Through literary and philosophical repetition, we have become comfortable with this idea in a way that is simply not justifiable.

Mathematics can define infinity very precisely, and the mathematician Georg Cantor (1845-1918) was even able to classify 'transfinite numbers' as representing either countable or uncountable infinities. To the extent that mathematics is used in physics, we inherit infinity as the limit of many of our calculations and models of the physical world. But the problem is that our world only ever offers us the concept of something being very, very, very... big, like the example of the visible universe above.

If you take a sphere a foot across and place an ant on it, it crawls around and with a bit of surveying it can tell you the shape is a sphere with a finite closed surface. But now take this sphere and blow it up so that it is 1 million miles across. The ant now looks across its surface and sees something that looks like an infinite plane. Its geometry is as flat as a sheet of paper on a table.

In astronomy we have the same problem.

We make calculations and measurements within the 28 billion light years that span our visible universe and conclude that the geometry of the universe is flat, and so geometrically it seems infinite. But the only thing the measurements can actually verify is that the universe is very, very, very large and LOOKS like an infinite, flat, 3-dimensional space. Modern Big Bang cosmology also says that what we see within our visible universe is only a portion of a larger thing that emerged from the Big Bang and 'inflated' to enormous size in the first instants. If you shrink our visible universe, out to 14 billion light years, to the size of the period at the end of this sentence, that larger thing predicted by inflation may be millions of miles across at the same scale. This is very, very big, but again it is not infinite!

Going the other way, the current best theoretical ideas about the structure of the physical world seem to suggest that at some point near the so-called Planck scale of about 1.6 × 10⁻³³ centimeters we literally ‘run out of space’. This mathematical conclusion seems to be the result of combining the two great pillars of all physical science, quantum mechanics and general relativity, into a single ‘unified’ theory. The mathematics suggests that, rather than letting us probe the nature of matter and space at still-smaller scales, the entire edifice of energy, space, time and matter undergoes a dramatic and final change into something vastly different from anything we have ever experienced: elements that are beyond space and time themselves. These ideas are captured in theories such as Loop Quantum Gravity and String Theory, but frankly we are still at a very early stage in understanding what they mean. Even more challenging, we have no obvious way to make any measurements that would directly test whether physical reality simply comes to an end at these scales or not.
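That scale is not an arbitrary choice. It is the unique length you can build from the three constants that anchor quantum mechanics (ħ), gravity (G), and relativity (c), which is why it marks where the two pillars must merge. A minimal check using the standard CODATA values:

```python
import math

# Planck length: l_p = sqrt(hbar * G / c**3), the only quantity with
# units of length that can be formed from these three constants.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light in vacuum, m/s

l_planck_m = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_planck_m:.2e} m = {l_planck_m * 100:.2e} cm")
```

Running this gives about 1.6 × 10⁻³⁵ meters, i.e. 1.6 × 10⁻³³ centimeters, the scale quoted above.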

So on the cosmological scene, we can convincingly say we have no evidence that anything as large as ‘infinity’ exists, because it is literally beyond our 14 billion light-year horizon of detection. The universe is simply not old enough for us to sample such an imponderably large realm. The best Big Bang cosmology can do is propose that we live in an incomprehensibly alien ‘multiverse’, or that we inhabit one minuscule dot in a vastly larger cosmos, which our equations extrapolate to infinity. Meanwhile, the world of the quantum hints that no infinitely small structures exist in the universe; not even what we like to call space itself can be indefinitely subdivided below the Planck scale.

In the end, it seems that infinity is a purely mathematical ideal: it can be classified by Cantor’s transfinite numbers, manipulated symbolically, and pondered philosophically, but it is never actually found among the objects that inhabit our physical world.

Now let’s go back to the issue of space after the relativity revolution and try to make sense of where we stand now!

Check back here on Monday, December 19 for the next installment!