
Space Power!

On Earth we can deploy a 164-ton wind turbine to generate 1.5 megawatts of electricity, but in the equally energy-hungry environment of space travel, far more efficient energy-per-mass systems are a must. The choices for such systems are far from unlimited in the vacuum of space!

OK…this is a rather obscure topic, but as I discussed in my previous blog, in order to create space propulsion systems that can get us to Mars in a few days, or Pluto in a week, we need some major improvements in how we generate power in space.

I am going to focus my attention on ion propulsion, because it is far less controversial than any of the more efficient nuclear rocket designs. Although nuclear rocket technology has been pretty well worked out, theoretically and in engineering designs, since the 1960s, there is simply no political will to deploy it in the next 50 years due to enormous public concerns. The concerns are not entirely unfounded: the highest-efficiency and least massive fission power plants would use near-weapons-grade uranium or plutonium fuel, making them look like atomic bombs to some skeptics!

Both fission and fusion propulsion have a lot in common with ordinary chemical propulsion. They heat a propellant to very high temperatures and direct the exhaust flow mechanically out the back of the engine using tapered ‘combustion chambers’ that resemble those of chemical rockets. The high temperatures ensure that the isotropic speeds of the particles are many km/sec, but the flow has to be shaped by the engine nozzle so that it leaves the ship in one direction. The melting temperature of a fission reactor is about 4,500 K, so the maximum speed of the thermal gas (hydrogen) ejected through its core is about 10 km/sec.

Ion engines are dramatically different. They guide ionized particles out the back of the engine using one or more acceleration grids. The particles are electrostatically guided and accelerated literally one at a time, so that instead of flowing all over the place in the rocket chamber, they start out life already ‘collimated’, flowing in only one direction at super-thermal speeds. For instance, the Dawn spacecraft ejected xenon ions at a speed of 25 km/sec. If you had a high-temperature xenon gas with particles at that same speed, the temperature of this gas would be about 4 million Celsius: well above the melting point of the ion engine!
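
If you want to check that temperature claim yourself, here is a minimal back-of-envelope sketch in Python (my own kinetic-theory conversion between particle speed and gas temperature, not anything from the Dawn engineering papers):

```python
# Equivalent gas temperature for xenon atoms moving at Dawn's exhaust speed.
# Back-of-envelope kinetic theory only, not the actual engine thermodynamics.
k_B = 1.380649e-23       # Boltzmann constant, J/K
amu = 1.66053907e-27     # atomic mass unit, kg
m_xe = 131.29 * amu      # mass of one xenon atom, kg
v = 25_000.0             # exhaust speed, m/s (25 km/sec)

# Using (3/2)kT = (1/2)mv^2 (rms-speed convention):
T_rms = m_xe * v**2 / (3.0 * k_B)
print(f"~{T_rms/1e6:.1f} million K")
# ~3.3 million K; the mean-speed convention gives ~3.9 million K,
# consistent with the '4 million Celsius' figure quoted above.
```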

We are well into the design of high-thrust ion engines, and have already deployed several of them. The Dawn spacecraft, launched in 2007, visited asteroid Vesta (2011) and dwarf planet Ceres (2015) using a 10-kilowatt ion engine system with 937 pounds (425 kg) of xenon propellant, and achieved a record-breaking speed change of 10 kilometers/sec. It delivered about 0.09 Newtons of thrust over 2,000 days of continuous operation. Compare this with the millions of Newtons of thrust delivered by the Saturn V in a few minutes.
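
As a sanity check on that record speed change, a naive constant-thrust estimate gets you into the right ballpark (the ~1,200 kg spacecraft mass is my assumption, and the true mass shrank as propellant was spent):

```python
# Naive delta-v estimate for Dawn: thrust x time / mass.
thrust = 0.09                  # Newtons, from the text
t = 2000 * 86400.0             # 2,000 days of thrusting, in seconds
mass = 1200.0                  # assumed average spacecraft mass, kg

dv = thrust * t / mass         # m/s
print(f"delta-v ~{dv/1000:.0f} km/s")  # ~13 km/s, the same ballpark as the
                                       # record ~10 km/s quoted above
```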

Under laboratory conditions, newer ion engine designs are constantly being developed and tested. The NASA NEXT program in 2010 demonstrated over 5.5 years of continuous operation for a 7 kilowatt ion engine. It used 862 kg of xenon and produced a thrust of 3.5 Newtons, some 30 times better than the Dawn technology.

An extensive research study on the design of megawatt ion engines, presented by David Fearn at the Space Power Symposium of the 56th International Astronautical Congress in 2005, gave some typical characteristics for engines at this power level. The conclusion was that ion engines of this class pose no particular design challenges and can achieve exhaust speeds that exceed 100 km/sec. As a specific example, an array of nine thrusters using xenon propellant would deliver a thrust of 120 Newtons while consuming 7.4 megawatts. A relatively small array of thrusters could also achieve exhaust speeds of 1,500 km/sec using lower-mass hydrogen propellant.

Ion propulsion requires megawatts of power in order to produce enough continuous thrust to reach the high speeds we need for truly fast interplanetary travel.

The bottom line for ion propulsion is the total electrical power that is available to accelerate the propellant ions. Very high efficiency solar panels that convert more than 75% of the sunlight into electricity work very well near Earth orbit (300 watts/kg), but produce only 10 watts/kg near Jupiter and 0.3 watts/kg near Pluto. That means the future of fast, solar-system-spanning travel via ion propulsion requires some kind of non-solar, fission reactor system (500 watts/kg) to produce the electricity. The history of using reactors in space, though trivial from an engineering standpoint, is politically complex because of the prevailing fear, in the minds of the general public and Congress, that a launch mishap will result in a dirty bomb or even a Hiroshima-like event.
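
That fall-off is just the inverse-square law at work. Here is a quick sketch scaling the near-Earth figure of 300 watts/kg quoted above (the planetary distances are standard values):

```python
# Specific power of a solar array vs. distance from the sun, scaling the
# near-Earth figure of 300 W/kg by the inverse-square law.
NEAR_EARTH_W_PER_KG = 300.0

for body, d_au in [("Earth", 1.0), ("Jupiter", 5.2), ("Pluto", 39.5)]:
    w_per_kg = NEAR_EARTH_W_PER_KG / d_au**2
    print(f"{body:8s} {w_per_kg:8.2f} W/kg")
# Earth ~300, Jupiter ~11, Pluto ~0.2: close to the figures quoted above.
```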

The Soviet Union launched nuclear reactors into space for decades in its Kosmos series of satellites. Early in 1992, the idea of purchasing a Russian-designed and fabricated space reactor power system and integrating it with a US-designed satellite went from fiction to reality with the purchase of the first two Topaz II reactors by the Strategic Defense Initiative Organization (SDIO, now the Ballistic Missile Defense Organization, BMDO). SDIO also requested that the Applied Physics Laboratory in Laurel, MD propose a mission and design a satellite in which the Topaz II could be used as the power source. Even so, the Topaz II reactor had a mass of 1,000 kg and produced 10 kilowatts, for an efficiency of only 10 watts/kg. Due to funding reductions within the SDIO, the Topaz II flight program was postponed indefinitely at the end of Fiscal Year 1993.

Similarly, cancellation was the eventual fate of the US SP-100 reactor program, which was started in 1983 by NASA, the US Department of Energy and other agencies. It developed a 4,000 kg, 100-kilowatt reactor (efficiency = 25 watts/kg) with heat pipes transporting the heat to thermionic converters.

Proposed SP-100 reactor, ca. 1980 (Image credit: NASA/DoE/DARPA)

Believe it or not, small nuclear fission reactors are becoming very popular as portable ‘batteries’ for running remote communities of up to 70,000 people. The Hyperion Hydride Reactor is not much larger than a hot tub, is totally sealed and self-operating, has no moving parts and, beyond refueling, requires no maintenance of any sort.

Hyperion Uranium Hydride Reactor (Credit: Hyperion, Inc.)

According to the Hyperion Energy Company, the Gen4 reactor has a mass of about 100 tons and is designed to deliver 25 megawatts of electricity over a 10-year lifetime without refueling. The efficiency of such a system is 250 watts/kg! Of course you cannot just slap one of these Bad Boys onto a rocket ship to provide the electricity for the ion engines, but this technology already proves that fission reactors can be made very small, deliver quite the electrical wallop, and do so in places where solar panels are not practical.

Some of the advanced photo-electric systems being developed by NASA and its contractors build on the solar energy technology used in the NASA Deep Space 1 mission and the Naval Research Laboratory’s TacSat 4 reconnaissance satellite. They use ‘stretched lens array’ concentrators that amplify the sunlight by up to 8 times (called eight-sun systems). The solar arrays are also flexible and can be rolled out like a curtain. The technology promises to reach efficiency levels of 1000 watts/kg and less than $50/watt, compared to the 100 watts/kg and $400/watt of current ‘one sun’ systems that do not use lens concentrators. A 350 kW solar-electric ion engine system has been suggested as the propulsion for a 70-ton crewed mission to Mars. With the most efficient stretched lens arrays currently under design, a 350 kW system would have a mass of only 350 kg and cost about $18 million. The very cool thing about this is that improvements in solar panel technology not only directly benefit space power systems for inner solar system travel, but lead to immediate consumer applications in Green Energy! Imagine covering your roof with a 1-square-meter high-efficiency panel rather than covering your entire roof with an unsightly lower-efficiency system!
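
The quoted mass and price tag can be checked directly from the projected performance levels; a quick sketch:

```python
# Mass and cost of a 350 kW stretched-lens solar array, using the projected
# performance levels quoted above (1000 W/kg and ~$50/watt).
power_w = 350_000.0
mass_kg = power_w / 1000.0   # at 1000 W/kg
cost_usd = power_w * 50.0    # at $50/watt
print(f"mass ~{mass_kg:.0f} kg, cost ~${cost_usd/1e6:.1f} million")
# ~350 kg and ~$17.5 million, in line with the ~$18 million figure above.
```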

So to really zip around the solar system and avoid the medical problems of prolonged voyages, we really need more work on compact power plant design that is politically realistic. Once we solve THAT problem, even Pluto will be a week’s journey away!

Check back here on Monday, April 24 for my next topic!

That was then, this is now!

Back in the 1960s, when I began my interest in astronomy, the best pictures we had of the nine planets were out-of-focus black and white photos. I am astonished at how far we have come since then, and have decided to devote this blog to a gallery of the best pictures I could find of our solar system neighbors! First, let’s have a look at the older photos.

First we have Mercury, which is never very far from our sun and is a very challenging telescopic object.

Above is what Mars looked like! Then we have Jupiter and Saturn, shown below.

Among the hardest and most mysterious objects was Uranus, shown here. I will not show a nearly identical telescopic view of Neptune.

Finally we come to Pluto, which remained a star-like object for most of the 20th century.

These blurry but intriguing images were the best we could do for most of the 20th century, yet they were enough to encourage generations of children to become astronomers and passionately explore space. The features of Mercury were mere blotches of differing shades of gray. Uranus and Neptune were only slightly resolvable, revealing faint details, and distant Pluto remained completely star-like and unresolved, yet we knew it was its own world, thousands of kilometers across. Mars continued to reveal its tantalizing blotchy features that came and went with the seasons, along with the ebb and flow of its two polar ice caps. Jupiter was a banded world with its Great Red Spot, but the details of these atmospheric bands were completely hidden in the optical smearing of our own atmosphere. Saturn possessed some large bands, and its majestic ring system could be seen in rough detail but never resolved into its many components. As for the various moons of these distant worlds, they were blurry disks or star-like spots that never revealed their details.

The advent of the Space Program in the 1960s, and the steady investment in spacecraft to ‘fly by’ these planets, led to progressively higher-resolution images, starting with Mariner 4 in 1965 and its historic encounter with Mars, revealing a cratered, moonlike landscape. The Pioneer spacecraft in the early 1970s gave us stunning images of Jupiter, followed by the Voyager spacecraft encounters with the outer planets and their moons. Magellan orbited Venus and, with its radar system, mapped a dynamic and volcanic surface that is permanently hidden beneath impenetrable clouds. Finally, in 2015, the New Horizons spacecraft gave us the first clear images of distant Pluto. Meanwhile, many return trips to our own moon have mapped its surface to 2-meter resolution, while the MESSENGER spacecraft imaged the surface of Mercury and mapped its many extreme geological features. Even water ice has been detected on Mercury and the moon to slake the thirst of future explorers.

For many of the planets, we have extreme close up images too!
Jupiter’s south pole from the Juno spacecraft shows a bewildering field of tremendous hurricanes each almost as large as Earth, swirling about aimlessly in a nearly motionless atmosphere.

Pluto details a few hundred meters across. Can you come up with at least ten questions you would like answered about what you are seeing?

Here is one of thousands of typical views from the Martian surface. Check out the rocks strewn across the field. Some are dark and pumice-like while others are white and granite-looking. ‘Cats and dogs living together’. What’s going on here?

The Venera 13 image shown below from the surface of Venus is unique and extremely puzzling from a surface that is supposed to be hotter than molten lead.

We also have images from a multitude of moons, asteroids and comets!
The Lunar Reconnaissance Orbiter gave us 2-meter resolution images of the entire lunar surface allowing us to revisit the Apollo landing sites once more:

The dramatic canyons and rubble fields of a comet were brought into extreme focus by the Rosetta mission.

Even Saturn’s moon Titan has been explored, revealing its extensive liquid methane tributaries.

This bewildering avalanche of detail has utterly transformed how we view these worlds and the kinds of questions we can now explore. If you compare what we knew about Pluto before 2015, when it was little more than a peculiar ‘star in the sky’, to the full-color, detailed orb we now see, you can imagine how science progresses by leaps and bounds through the simple technique of merely seeing the object more clearly. It used to be fashionable to speculate about Pluto when all we knew was its size, mass and density, and that it had a thin atmosphere. But now we are delightfully challenged to understand this world as the dynamic place that it is, with mountains of ice, continent-sized glaciers, and nitrogen snow. And of course, the mere application of improved resolution now lets us explore the entire surface of our moon with the same clarity as an astronaut hovering over its surface from a height of a few dozen feet!

We Old-Timers have had a wonderful run in understanding our solar system as we transitioned from murky details to crystal clarity. All of the easy, low-hanging fruit of theory building and testing over the last century has, for the most part, been accomplished. Now the ever more challenging work of getting the details straight begins, and it will last for another century at least. When you can tele-robotically explore planetary and asteroidal surfaces, or perform on-the-spot microscopic assays of minerals, what incredible new questions will emerge? Is there life below the surface of Europa? Why does Mars belch forth methane gas in the summer? Can the water deposits on the moon be mined? Is Pluto’s moon Charon responsible for the tidal heating of an otherwise inert Pluto?
One can only wonder!

Check back here on Tuesday, April 18 for my next topic!

Crowdsourcing Gravity

The proliferation of smartphones with internal sensors has led to some interesting opportunities to make large-scale measurements of a variety of physical phenomena.

The iOS app ‘Gravity Meter’ and its android equivalent have been used to make measurements of the local surface acceleration, which is nominally 9.8 meters/sec2. The apps typically report the local acceleration to 0.01 (iOS) or even 0.001 (android) meters/sec2 accuracy, which leads to two interesting questions: 1) How reliable are these measurements at the displayed decimal limit, and 2) Can smartphones be used to measure expected departures from the nominal surface acceleration due to Earth’s rotation? Here is a map, provided by The Physics Forum, showing the magnitude of this (centrifugal) rotation effect.

As Earth rotates, any object on its surface feels a centrifugal force directed outward from the center of Earth, generally in the direction of the local zenith. This causes Earth to be slightly bulged-out at the equator compared to the poles, which you can see from the difference between its equatorial radius of 6,378.14 km and its polar radius of 6,356.75 km: a polar flattening difference of 21.4 kilometers. The centrifugal force also affects the local surface acceleration by reducing it slightly at the equator compared to the poles. At the equator, one would measure a value for ‘g’ of about 9.78 m/sec2, while at the poles it is about 9.83 m/sec2. Once again, and this is important to avoid any misconceptions, the total acceleration, defined as gravity plus the centrifugal term, is reduced, but gravity itself is not changed: from Newton’s Law of Universal Gravitation, gravity is due to mass, not rotation.
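
The size of the centrifugal term is easy to estimate from Earth’s rotation rate and radius. Here is a minimal sketch using standard textbook values:

```python
import math

# Centrifugal acceleration at Earth's surface as a function of latitude.
# This term alone is smaller than the full ~0.05 m/sec2 pole-to-equator
# spread because the equatorial bulge also puts you farther from Earth's
# center, further reducing 'g' at the equator.
omega = 2.0 * math.pi / 86164.1    # rotation rate, rad/s (sidereal day)
R_eq = 6.37814e6                   # equatorial radius, m

for lat in (0, 30, 60, 90):
    a_c = omega**2 * R_eq * math.cos(math.radians(lat))**2
    print(f"latitude {lat:2d} deg: centrifugal term = {a_c:.4f} m/sec2")
# ~0.034 m/sec2 at the equator, falling to zero at the poles.
```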

Assuming that the smartphone accelerometers are sensitive enough, they may be able to detect this equator-to-pole difference by comparing the surface acceleration measurements from observers at different latitudes.


Experiment 1 – How reliable are ‘gravity’ measurements at the same location?

To check this, I looked at the data from several participating classrooms at different latitudes, and selected the more numerous iOS measurements with the ‘Gravity Meter’ app. These data were kindly provided by Ms. Melissa Montoya’s class in Hawaii (+19.9N), George Griffith’s class in Arapahoe, Nebraska (+40.3N), Ms. Sue Lamdin’s class in Brunswick, Maine (+43.9N), and Elizabeth Bianchi’s class in Waldoboro, Maine (+44.1N).

All four classrooms’ measurements, irrespective of latitude (19.9N, 40.3N, 43.9N or 44.1N), showed distinct ‘peaks’, but also displayed long and complicated ‘tails’, making these distributions non-Gaussian, unlike what would be expected for purely random errors. This suggests that under classroom conditions there may be systematic effects introduced by the specific ways in which students make the measurements, adding complicated, apparently non-random, student-dependent corrections to the data.

In a further study using the iPad data from Elizabeth Bianchi’s class, I discovered that, at least for iPads using the Gravity Sensor app, there was a definite correlation between the measured value and the time at which the measurement was made during a 1.5-hour period. This resembles a heating effect, suggesting that the longer you leave the technology on before making the measurement, the larger the measured value will be. I will look into this at a later time.

The non-Gaussian behavior in the current data makes it impossible to assign a meaningful average and standard deviation to the measurements.


Experiment 2 – Can the rotation of Earth be detected?

Although the 4-classroom data hinted at a nominal centrifugal effect of about the correct order of magnitude, we wanted a larger sample of individual observers spanning a wide latitude range, also using the iOS platform and the same ‘Gravity Meter’ app. Including the median values from the four classrooms in Experiment 1, we had a total of 41 participants: Elizabeth Abrahams, Jennifer Arsenau, Dorene Brisendine, Allen Clermont, Hillarie Davis, Thom Denholm, Heather Doyle, Steve Dryer, Diedra Falkner, Mickie Flores, Dennis Gallagher, Robert Gallagher, Rachael Gerhard, Robert Herrick, Harry Keller, Samuel Kemos, Anna Leci, Alexia Silva Mascarenhas, Alfredo Medina, Heather McHale, Patrick Morton, Stacia Odenwald, John-Paul Rattner, Pat Reiff, Ghanjah Skanby, Staley Tracy, Ravensara Travillian, and Darlene Woodman.

The scatter plot of these individual measurements is shown here:

The red squares are the individual measurements. The blue circles are the android phone values. The red dashed line shows the linear regression line for only the iOS data points assuming each point is equally-weighted. The solid line is the predicted change in the local acceleration with latitude according to the model:

g = 9.806 - 0.5*(9.832 - 9.780)*cos(2*latitude)   m/sec2

where 9.806 m/sec2 is the mid-latitude (45-degree) value, 9.832 m/sec2 is the polar acceleration, and 9.780 m/sec2 is the equatorial acceleration. Note: no correction for lunar and solar tidal effects has been made, since these are entirely undetectable with this technology.
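
For readers who want to plot the model themselves, here is a direct transcription into Python:

```python
import math

def g_model(latitude_deg):
    """Local surface acceleration (m/sec2) vs. latitude, per the model above."""
    return 9.806 - 0.5 * (9.832 - 9.780) * math.cos(2.0 * math.radians(latitude_deg))

for lat in (0, 20, 45, 90):
    print(f"latitude {lat:2d} deg: g = {g_model(lat):.4f} m/sec2")
# 9.780 at the equator, 9.806 at 45 degrees, 9.832 at the poles.
```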

Each individual point has a nominal variation of +/-0.01 m/sec2 based on the minimum and maximum values recorded during a fixed interval of time. It is noteworthy that this measurement RMS is significantly smaller than the classroom variance seen in Experiment 1, a consequence of the apparently non-Gaussian shape of the classroom sampling. When we partition the iOS smartphone data into 10-degree latitude bins and take the median value in each bin, we get the following plot, which is a bit cleaner:

The solid blue line is the predicted acceleration. The dashed black line is the linear regression for the equally-weighted individual measurements. The median values of the classroom points are added to show their distribution. It is of interest that the regression line is parallel to, and nearly coincident with, the predicted line, which again suggests that Earth’s rotation effect may have been detected in this median-sampled data set provided by a total of 37 individuals.
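
For anyone who wants to reproduce the binning, here is a minimal sketch of the procedure (the arrays below are hypothetical stand-ins, not the actual participant data):

```python
import numpy as np

# Hypothetical stand-ins for the participant data: latitude (degrees) and
# the smartphone 'g' reading (m/sec2) for each observer.
lat = np.array([19.9, 25.3, 33.1, 40.3, 43.9, 44.1, 51.7])
g = np.array([9.78, 9.79, 9.80, 9.80, 9.81, 9.80, 9.82])

# Partition into 10-degree latitude bins and take the median in each bin.
bin_edges = np.arange(10, 71, 10)
idx = np.digitize(lat, bin_edges)
for i in np.unique(idx):
    sel = idx == i
    lo = bin_edges[i - 1]
    print(f"{lo}-{lo + 10} deg: median g = {np.median(g[sel]):.3f} m/sec2 "
          f"(n = {sel.sum()})")
```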

The classroom points clustering near +44N represent a total of 36 measurements behind the plotted median values, which is statistically significant. Taken at face value, the classroom data alone would support the hypothesis that the rotation effect has been detected, though they are consistently 0.005 m/sec2 below the predicted value at mid-latitudes. The intrinsic variation of the data, represented by the consistent +/-0.01 m/sec2 high-vs-low range of all of the individual samples, suggests that this is probably a reasonable measure of the instrumental accuracy of the smartphones. Error bars (thin vertical black lines) have been added to the plotted median points to indicate this accuracy.

The bottom line seems to be that it may be marginally possible to detect the Earth rotation effect, but precise measurements at the 0.01 m/sec2 level are required against what appears to be a significant non-Gaussian measurement background. Once again, some of the variation seen at each latitude may be due to how warm the smartphones were at the time of the measurement. The android and iOS measurements also seem to be discrepant with each other, with the android measurements showing a larger variation.

Check back here on Wednesday, March 29 for the next topic!

Hohmann’s Tyranny

It really is a shame. When all you have is a hammer, everything else looks like a nail. This also applies to our current international space programs.

We have been using chemical rockets for centuries, but since the advent of the V-2 and the modern space age, these brute-force and cheap workhorses have been the main propulsion technology we use to go just about everywhere in the solar system. But this amounts to thinking that one technology can span all of our needs across the trillions of cubic miles that encompass interplanetary space.

We pay a huge price for this belief.

Chemical rockets have their place in space travel. They are fantastic at delivering HUGE thrusts quickly: the method par excellence for getting us off this planet and paying the admission ticket to space. No other known propulsion technology is as cheap, simple, and technologically elegant as chemical propulsion in this setting. Applying this same technology to interplanetary travel beyond the moon is quite another thing, and sets in motion an escalating series of difficult problems.

Every interplanetary spacecraft launched so far works on the exact same principle. Give the spacecraft a HUGE boost to get it off the launch pad and up to a velocity sufficient to reach the distant planet, then cut the engines off after a few minutes and let the spacecraft literally coast the whole way. With a few more ‘delta-V’ changes along the way, this is called the minimum-energy trajectory, or, for rocket scientists, the Hohmann Transfer orbit. It is designed to get you there not in the shortest time, but using the least amount of energy. In propulsion, energy is money. We use souped-up Atlas rockets at a few hundred million dollars a pop to launch spacecraft to the outer planets. We don’t use even larger and more expensive Saturn V-class rockets that could deliver more energy for a dramatically shorter ride.
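
To see where those long coasting times come from, here is a minimal sketch of the Hohmann arithmetic for an Earth-to-Mars transfer, assuming the textbook idealization of circular, coplanar orbits:

```python
import math

# One-way travel time and delta-v for an Earth-to-Mars Hohmann transfer,
# assuming circular, coplanar orbits (the textbook idealization).
GM_SUN = 1.32712440018e20   # m^3/s^2
r1 = 1.496e11               # Earth's orbital radius, m
r2 = 2.279e11               # Mars's orbital radius, m

a = 0.5 * (r1 + r2)         # transfer-ellipse semi-major axis
t_days = math.pi * math.sqrt(a**3 / GM_SUN) / 86400.0

dv1 = math.sqrt(GM_SUN / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # departure burn
dv2 = math.sqrt(GM_SUN / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # arrival burn

print(f"transfer time ~{t_days:.0f} days, total delta-v ~{(dv1 + dv2)/1000:.1f} km/s")
# ~259 days and ~5.6 km/s ideally; real missions come in near the ~220-day
# figure quoted below because launch geometry and Mars's elliptical orbit vary.
```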

If you bank on taking the slow boat to Mars rather than a more energetic ride, this leads to all sorts of problems. The biggest of these is that the inexpensive 220-day journeys let humans build up all sorts of nasty medical problems that short 2-week trips would completely eliminate. In fact, the entire edifice of the $150-billion International Space Station is there to explore the extended human stays in space that are demanded by Hohmann Transfer orbits and chemical propulsion. We pay a costly price for continuing to use cheap chemical rockets: long stays in space that cause major problems which are expensive to patch up afterwards. The entire investment in the ISS could have been avoided if we had focused on getting travel times in space down to a few weeks.

You do not need Star Trek warp technology to do this!

Since the 1960s, NASA engineers and academic ‘think tanks’ have designed nuclear rocket engines and ion rocket engines; both show enormous promise in breaking the hegemony of chemical transportation. The NASA nuclear rocket program began in the early 1960s and built several operational prototypes, but the program was abandoned in the late 1960s because nuclear rockets were extremely messy, heavy, and had a nasty habit of slowly vaporizing the nuclear reactor and blowing it out the rocket engine! Yet Wernher von Braun designed a Mars expedition for the 1970s in which several heavy 100-ton nuclear motors would be placed in orbit by Saturn Vs and then incorporated into a set of three interplanetary transports. This program was canceled when the Apollo program ended and there was no longer a conventional need for the massive Saturn V rockets. But ion rockets continued to be developed, and today several of these have already been used on interplanetary spacecraft such as Deep Space 1 and Dawn. The plans for humans on Mars in the 2030s rely on ion rocket propulsion powered by massive solar panels.

Unlike chemical rockets, which limit spacecraft speeds to a few kilometers/sec, ion rockets can in principle be developed with exhaust speeds up to several thousand km/sec. All they need is more thrust, and to get that they need low-mass power plants in the gigawatt range. ‘Rocket scientists’ gauge engine designs by their Specific Impulse, which is the exhaust speed divided by the acceleration of gravity on Earth. Chemical rockets can only provide SIs of about 300 seconds, but ion engine designs can reach 30,000 seconds or more! With these engine designs, you could travel to Mars in SIX DAYS, and a jaunt to Pluto could take a neat 2 months! Under these conditions, most of the problems and hazards of prolonged human travel in space are eliminated.
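
The leverage of specific impulse follows directly from the Tsiolkovsky rocket equation. A quick sketch comparing the two regimes quoted above, with an illustrative mass ratio of 5 (my assumption, not a figure from the text):

```python
import math

# Delta-v from the Tsiolkovsky rocket equation for a chemical engine vs. an
# ion engine, at the specific-impulse values quoted above.
g0 = 9.80665          # standard gravity, m/s^2
mass_ratio = 5.0      # initial mass / final mass (illustrative assumption)

for label, isp in [("chemical", 300.0), ("ion", 30_000.0)]:
    v_exhaust = isp * g0                    # exhaust speed, m/s
    dv = v_exhaust * math.log(mass_ratio)   # Tsiolkovsky rocket equation
    print(f"{label:9s} Isp = {isp:7.0f} s   v_e = {v_exhaust/1000:6.1f} km/s   "
          f"delta-v = {dv/1000:6.1f} km/s")
# chemical: v_e ~2.9 km/s, delta-v ~4.7 km/s
# ion:      v_e ~294 km/s, delta-v ~473 km/s
```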

But instead of putting our money into perfecting these engine designs, we keep building chemical rockets and investing billions of dollars trying to keep our long-term passengers alive.

Go figure!!!

Check back here on Friday, March 17 for a new blog!


The Mystery of Gravity

In grade school we learn that gravity is an always-attractive force that acts between particles of matter. Later on, we learn that it has an infinite range through space, weakens as the inverse-square of the distance between bodies, and travels exactly at the speed of light.

But wait….there’s more!


It doesn’t take a rocket scientist to remind you that humans have always known about gravity! Its first mathematical description as a ‘universal’ force was by Sir Isaac Newton in 1666. Newton’s description remained unchanged until Albert Einstein published his General Theory of Relativity in 1915. Ninety years later, physicists such as Edward Witten, Stephen Hawking, Brian Greene and Lee Smolin, among others, are finding ways to improve our description of ‘GR’ to accommodate the strange rules of quantum mechanics. Ironically, although gravity is produced by matter, General Relativity does not really describe matter in any detail; certainly not with the detail of the modern quantum theory of atomic structure. In the mathematics, all of the details of a planet or a star are hidden in a single variable, m, representing its total mass.


The most amazing thing about gravity is that it is a force like no other known in Nature. It is a property of the curvature of space-time and of how particles react to this distorted space. Even more bizarrely, space and time are described by the mathematics of GR as qualities of the gravitational field of the cosmos that have no independent existence. Gravity does not exist like the frosting on a cake, embedded in some larger arena of space and time. Instead, the ‘frosting’ is everything, and matter is embedded in it, intimately and indivisibly connected to it. If you could turn off gravity, it is mathematically predicted that space and time would also vanish! You can turn off electromagnetic forces by neutralizing the charges on material particles, but you cannot neutralize gravity without eliminating spacetime itself. This geometric relationship to space and time is the single most challenging aspect of gravity, and it has prevented generations of physicists from mathematically describing it in the same way we do the other three forces in the Standard Model.

Einstein’s General Relativity, published in 1915, is our most detailed mathematical theory for how gravity works. With it, astronomers and physicists have explored the origin and evolution of the universe, its future destiny, and the mysterious landscape of black holes and neutron stars. General Relativity has survived many different tests, and it has made many predictions that have been confirmed. So far, after 90 years of detailed study, no error has yet been discovered in Einstein’s original, simple theory.

Currently, physicists have explored two of its most fundamental and exotic predictions: The first is that gravity waves exist and behave as the theory predicts. The second is that a phenomenon called ‘frame-dragging’ exists around rotating massive objects.

Theoretically, gravity waves must exist in order for Einstein’s theory to be correct. They are distortions in the curvature of spacetime caused by accelerating matter, just as electromagnetic waves are distortions in the electromagnetic field of a charged particle produced by its acceleration. Gravity waves carry energy and travel at light-speed. At first they were detected indirectly: by 2004, astronomical bodies such as the Hulse-Taylor binary pulsar were found to be losing energy through gravity wave emission at exactly the predicted rates. Then, in 2016, the twin LIGO gravity wave detectors recorded the unmistakable and nearly simultaneous pulses of geometry distortion created by colliding black holes billions of light years away.

By 1997, astronomers had also detected the ‘frame-dragging’ phenomenon in X-ray studies of distant black holes. As a black hole (or any other body) rotates, it actually ‘drags’ space around with it, so that orbits around a rotating body are slowly swept along with its rotation: something totally unexpected in Newton’s theory of gravity. The Gravity Probe-B satellite orbiting Earth confirmed this exotic spacetime effect in 2011, at precisely the magnitude expected by the theory for the rotating Earth.

Gravity also doesn’t care whether you have matter or anti-matter; both behave identically as they fall and move under gravity’s influence. This quantum-scale phenomenon was searched for in CERN’s ALPHA antimatter experiment, and in 2013 researchers placed the first limits on how matter and antimatter ‘fall’ in Earth’s gravity. Future experiments will place even more stringent limits on just how gravitationally similar matter and antimatter are. Well, at least we know that antimatter doesn’t ‘fall up’!

There is only one possible problem with our understanding of gravity known at this time.

Applying general relativity, and even Newton’s Universal Gravitation, to large systems like galaxies and the universe leads to the discovery of a new ingredient called Dark Matter. There do not seem to be any verified elementary particles that account for this gravitating substance. Lacking a particle, some physicists have proposed modifying Newtonian gravity and general relativity themselves to account for this phenomenon without introducing a new form of matter. But none of the proposed theories leave the other verified predictions of general relativity intact. So is Dark Matter a figment of an incomplete theory of gravity, or is it a heretofore undiscovered fundamental particle of nature? It took 50 years for physicists to discover the lynchpin particle called the Higgs boson. This is definitely a story we will hear more about in the decades to come!

There is much that we now know about gravity, yet as we strive to unify it with the other elementary forces and particles in nature, it still remains an enigma. But then, even the briefest glance across the landscape of the quantum world fills you with a sense of awe and wonderment at the improbability of it all. At its root, our physical world is filled with improbable and logic-twisting phenomena, and it is simply amazing that they have lent themselves to human logic to the extent that they have!


Return here on Monday, March 13 for my next blog!

Martian Swamp Gas?

Thanks to more than a decade of robotic studies, the surface of Mars is becoming a place as familiar to some of us as similar garden spots on Earth, such as the Atacama Desert in Chile or Devon Island in Canada. But this rust-colored world still has some tricks up its sleeve!

Back in 2003, NASA astronomer Michael Mumma and his team discovered traces of methane in the dilute atmosphere of Mars. The gas was localized to only a few geographic areas in the equatorial zone of the martian Northern Hemisphere, but this was enough to get astrobiologists excited about the prospects for sub-surface life. The amount being released in a seasonal pattern was about 20,000 tons during the local summer months.


The discovery, made with ground-based telescopes in 2003, was confirmed a year later by other astronomers and by the Mars Express Orbiter, but the amount is highly variable. Ten years later, the Curiosity rover also detected methane in the atmosphere from its location many hundreds of miles from the nearest ‘plume’ locations. It became clear that the hit-or-miss nature of these detections had to do with the source of the methane turning on and off over time; it was not some steady seepage going on all the time. Why was this happening, and did it have anything to do with living systems?

On Earth, there are organisms that take water (H2O) and combine it with carbon dioxide from the air (CO2) to create methane (CH4) as a by-product, but there are also inorganic processes that create methane. For instance, electrostatic discharges can ionize water and carbon dioxide and produce trillions of methane molecules per discharge. There is plenty of atmospheric dust in the very dry Martian atmosphere, so this is not a bad explanation at all.

This diagram shows possible ways that methane might make it into Mars’ atmosphere (sources) and disappear from the atmosphere (sinks). (Credit: NASA/JPL-Caltech/SAM-GSFC/Univ. of Michigan)

Still, the search for conclusive evidence of methane production and removal is one of the high frontiers in Martian research these days. New mechanisms are proposed every year that involve living or inorganic origins. There is even some speculation that the Curiosity rover’s own chemical lab was responsible for the rover’s methane ‘discovery’. Time will tell whether any of these ideas ultimately checks out. There seem to be far more geological ways to create a bit of methane than biotic mechanisms, which means the odds do not look good that the fleeting traces of methane we do see are produced by living organisms.

What does remain very exciting is that Mars is a chemically active place with more than inorganic molecules in play. In 2014, the Curiosity rover took samples of mudstone and tested them with its on-board spectrometer. The samples were rich in chlorine-bearing organic molecules, including chlorobenzene (C6H5Cl), dichloroethane (C2H4Cl2), dichloropropane (C3H6Cl2) and dichlorobutane (C4H8Cl2). Chlorobenzene is not a naturally occurring compound on Earth: it is used in the manufacturing process for pesticides, adhesives, paints and rubber. Dichloropropane is used as an industrial solvent to make paint strippers, varnishes and furniture finish removers, and is classified as a carcinogen. There is even some speculation that the abundant perchlorate molecules (ClO4) in the Martian soil, when heated inside the spectrometer with the mudstone samples, created these new organics.

Mars is a frustratingly interesting place to study because, emotionally, it holds out hope of ultimately finding something exciting that takes us nearer to the idea that life once flourished there, or may still be present below its inaccessible surface. But all we have access to for now is its surface geology and atmosphere. From these we encounter traces of exotic chemistry, and perhaps our own contaminants, at a handful of parts-per-billion. At these levels, the boring chemistry of Mars comes alive in the statistical noise of our measurements, and our dreams of Martian life are temporarily re-ignited.

Meanwhile, we will not rest until we have given Mars a better shot at revealing traces of its biosphere either ancient or contemporary!

Check back here on Thursday, March 2 for the next essay!

Death By Vacuum

As an astrophysicist, this has GOT to be one of my favorite ‘fringe’ topics in physics. There’s a long preamble story behind it, though!

The discovery of the Higgs Boson, with a mass of 126 GeV (about 130 times more massive than a proton), was an exciting event back in 2012. By that time we had a reasonably complete Standard Model of how particles and fields operate in Nature to create everything from a uranium atom and a rainbow to the lighting of the interior of our sun. A key ingredient was a brand new fundamental field in nature and its associated particle, the Higgs boson. The Standard Model says that all fundamental particles in Nature have absolutely no mass, but they all interact with the Higgs field. Depending on how strong this interaction is, like swimming through a container of molasses, they gain different amounts of mass. But the existence of this Higgs field has led to some deep concerns about our world that go well beyond how this particle creates the physical property we call mass.

In a nutshell, according to the Standard Model, all particles interact with the ever-present Higgs field, which permeates all space. For example, the W-particles interact very strongly with the Higgs field and gain the most mass, while photons interact with it not at all and remain massless.

The Higgs particles come from the Higgs field, which, as I said, is present in every cubic centimeter of space in our universe. That’s why electrons in the farthest galaxy have the same mass as those here on Earth. But Higgs particles can also interact with each other, and this produces a very interesting effect, like the tension in a stretched spring. A cubic centimeter of space anywhere in the universe is not perfectly empty; it actually has a potential energy ‘stress’ associated with it. This potential energy is related to just how massive the Higgs boson is. You can draw a curve like the one below that shows the vacuum energy and how it changes with the Higgs particle mass:

Now the Higgs mass actually changes as the universe expands and cools. When the universe was very hot, the curve looked like the one on the right, and the mass of the Higgs was zero at the bottom of the curve. As the universe expanded and cooled, this Higgs interaction curve turned into the one on the left, which shows that the mass of the Higgs is now x0, or 126 GeV. Note that the Higgs mass, represented by the red ball, used to be zero but ‘rolled down’ into the lower-energy pit as the universe cooled.
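
For the mathematically inclined, here is a minimal sketch of how the measured mass pins down the shape of that curve, using the textbook quartic potential V(h) = -mu^2 h^2/2 + lambda h^4/4 (my parameterization; the figure above may use a different convention):

```python
import math

# Parameters of the textbook Higgs potential V(h) = -mu^2 h^2/2 + lam h^4/4,
# fixed by the measured Higgs mass and the electroweak vacuum expectation value.
m_higgs = 126.0   # GeV, as quoted above
v_ew = 246.0      # GeV, electroweak vacuum expectation value

lam = m_higgs**2 / (2.0 * v_ew**2)   # quartic self-coupling, ~0.13
mu = m_higgs / math.sqrt(2.0)        # mass parameter, ~89 GeV

h_min = mu / math.sqrt(lam)          # location of the minimum, = v_ew
print(f"lambda ~{lam:.3f}, mu ~{mu:.1f} GeV, minimum at h ~{h_min:.0f} GeV")
# The minimum at 246 GeV is the 'pit' the red ball rolled into as the
# universe cooled.
```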

The Higgs energy curve shows a very stable situation for ‘empty’ space at its lowest energy (green balls), because there is a big energy wall between where the field is today and where it used to be (red ball). That means that if you pumped a bit of energy into empty space by colliding two particles there, it would not suddenly turn space into the roaring hothouse of the Higgs field at the top of this curve.

We don’t actually know exactly what the Higgs curve looks like, but physicists have been able to model many alternative versions of the above curve to test how stable the vacuum is. What they found is something very interesting.

The many different kinds of Higgs vacua can be defined by using two masses: the Higgs mass and the mass of the top quark. Mathematically, you can then vary the values of the Higgs boson and top quark masses and see what happens to the stability of the vacuum. The results are summarized in the plot below.

The big surprise is that the observed masses of the Higgs boson and the top quark, shown in the small box, place our space inside a very narrow zone of what is called meta-stability. We do not seem to be living in a universe where we can expect space to be perfectly stable. What does THAT mean? It does sound rather ominous that empty space can be unstable!

What it means is that, at least in principle, if you collided particles with enough energy that they literally blow-torched a small region of space, you could change the Higgs mass enough that the results would be catastrophic. Even though the collision region is smaller than an atom, once created, it could expand at the speed of light like an inflating bubble. The interior would be a region of space with new physics and new masses for all of the fundamental particles and forces. The surface of this bubble would be a maelstrom of high-energy collisions leaking out of empty space! You wouldn’t see the wall of this bubble coming, and its walls would contain a huge amount of energy, so you would be incinerated as the bubble wall ploughed through you.

Of course the world is not that simple. These are all calculations based on the Standard Model, which may be incomplete. Also, we know that cosmic rays collide with Earth’s atmosphere at energies far beyond anything we will ever achieve…and we are still here.

So sit back and relax and try not to worry too much about Death By Vacuum.

Then again…


Return here on Wednesday, February 22 for my next blog!

The Next Sunspot Cycle

Forecasters are already starting to make predictions for what might be in store as our sun winds down its current sunspot cycle (Number 24) in a few years. Are we in for a very intense cycle of solar activity, or the beginning of a century-long absence of sunspots and a rise of colder climates?

Figure showing the sunspot counts for the past few cycles. (Credit: www.solen.info)

Ever since Samuel Schwabe discovered the 11-year ebb and flow of sunspots in 1843, predicting when the next sunspot cycle will appear, and how strong it will be, has been a cottage industry among scientists and non-scientists alike. For solar physicists, the sunspot cycle is a major indicator of how the sun’s magnetic field is generated, and of the evolution of various patterns of plasma circulation near the solar surface and interior. Getting these forecasts bang-on would be proof that we indeed have a ‘deep’ understanding of how the sun works, a major step beyond just knowing it is a massive sphere of plasma heated by thermonuclear fusion in its core.

So how are we doing?

For over a century, scientists have scrutinized the shapes of dozens of individual sunspot cycles to glean features that could be used for predicting the circumstances of the next one. Basically, we know that 11 years is an average: some cycles are as short as 9 years and others as long as 14. The number of sunspots during the peak year, called sunspot maximum, can vary from as few as 50 to as many as 260. The rise to maximum can take as long as 80 months for weaker sunspot cycles and as short as 40 months for stronger cycles. All of these features, and many other statistical rules-of-thumb, lead to predictive schemes of one kind or another, but they generally fail to produce accurate and detailed forecasts of the ‘next’ sunspot cycle.

Prior to the current sunspot cycle (Number 24), which spans the years 2008-2019, NASA astronomer Dean Pesnell collected 105 forecasts for Cycle 24. For something as simple as how many sunspots would be present during the peak year, the predictions varied from as few as 40 to as many as 175, with an average of 106 +/- 31. The actual number at the 2014 peak was 116. Most of the predictions were based on little more than extrapolating statistical patterns in older data. What we really want are forecasts based upon the actual physics of sunspot formation, not statistics. The most promising physics-based models we have today actually follow magnetic processes on and below the surface of the sun, and are called Flux Transport Dynamo models.

Solar polar magnetic field trends (Credit: Wilcox Solar Observatory)

The sun’s magnetic field is much more fluid than the magnetic field of a toy bar magnet. Thanks to the revolutionary work by helioseismologists using the SOHO spacecraft and the ground-based GONG program, we can now see below the turbulent surface of the sun. There are vast rivers of plasma wider than a dozen Earths, which wrap around the sun from east to west. There is also a flow pattern that runs north and south from the equator to each pole. This meridional current is caused by giant convection cells below the solar surface and acts like a conveyor belt for the surface magnetic fields in each hemisphere. The sun’s north and south magnetic fields can be thought of as waves of magnetism that flow at about 60 feet/second from the equator at sunspot maximum to the poles at sunspot minimum, and back again to the equator at the base of the convection cell. At sunspot minimum they are equal and opposite in intensity at the poles, but at sunspot maximum they vanish at the poles and combine and cancel at the sun’s equator. The difference in the polar waves during sunspot minimum seems to predict how strong the next sunspot maximum will be about 6 years later, as the current returns the field to the equator at the peak of the next cycle. V.V. Zharkova at Northumbria University in the UK uses this to predict that Cycle 25 might continue the declining trend of polar field decrease seen in the last three sunspot cycles, and be even weaker than Cycle 24, with far fewer than 100 spots. However, a recent paper by NASA solar physicists David Hathaway and Lisa Upton re-assessed the trends in the polar fields and predicted that the average strength of the polar fields near the end of Cycle 24 will be similar to that measured near the end of Cycle 23, indicating that Cycle 25 will be similar in strength to the current cycle.

But some studies, such as those by Matthew Penn and William Livingston at the National Solar Observatory, suggest that sunspot magnetic field strengths have been declining since about 2000 and are already close to the minimum needed to sustain sunspots on the solar surface. By Cycle 25 or 26, magnetic fields may be too weak to punch through the solar surface and form recognizable sunspots at all, spelling the end of the sunspot cycle phenomenon and the start of another Maunder Minimum cooling period, perhaps lasting until 2100. A quick Google search will turn up a variety of pages claiming that a new ‘Maunder Minimum’ and mini-Ice Age are just around the corner! An interesting on-the-spot assessment of these disturbing predictions was offered back in 2011 by NASA solar physicist C. Alex Young, who concluded from the published evidence that these claims were probably ‘Much Ado about Nothing’.

What can we bank on?

The weight of history is a compelling guide, which teaches us that tomorrow will be very much like yesterday. Statistically speaking, the current Cycle 24 is scheduled to draw to a close about 11 years after the previous sunspot minimum in January 2008, which means sometime in 2019. You can eyeball the figure at the top of this blog and see that that is about right. We entered the Cycle 24 sunspot minimum period in 2016, because in February and June we already had two spot-free days. As the number of spot-free days continues to increase in 2017-2018, we will start seeing the new sunspots of Cycle 25 appear sometime in late 2019. Sunspot maximum is likely to occur in 2024, with most forecasts predicting about half as many sunspots as in Cycle 24.

None of the current forecasts suggest Cycle 25 will be entirely absent. A few even hold out some hope that a sunspot maximum equal to or greater than that of Cycle 24, which was near 140, is possible, while others place the peak closer to 60 in 2025.

It seems to be a pretty sure bet that there will be yet another sunspot cycle to follow the current one. If you are an aurora watcher, 2022-2027 would be the best years to go hunting for them. If you are a satellite operator or astronaut, this next cycle may be even less hostile than Cycle 24 was, or at least no worse!

In any event, solar cycle prediction will be a rising challenge in the next few years as scientists pursue the Holy Grail of creating a reliable theory of why the sun even has such cycles in the first place!

Check back here on Friday, February 17 for my next blog!

Interstellar Travel?

Interstellar travel revolves around the answers to three major questions:

1) Where will we go?

2) What will we do when we get there?

3) How will it benefit folks back on Earth?

Far beyond the issue of whether interstellar travel is technologically possible is the very practical issue of answering these questions long before we turn the first screw in the hardware.

A few of the thousands of stars within 50 light years of Earth. (Credit: Atlas of the Universe)

In science fiction, finding new destinations is usually handled by either manned or unmanned expeditionary forces. Not surprisingly, the hazardous experiences of the manned expeditions are usually the exciting core of the story itself. When we create this technology ourselves, the first trips are usually to a popular nearby star like Alpha Centauri (e.g. Babylon 5). When we are co-opting alien technology, we often have to go to where the aliens previously had outposts, often hundreds or thousands of light years away (e.g. Contact or Stargate). In any event, most stories assume that the travelers can select multiple destinations and hop-scotch their way around the local universe within a single human lifetime to find a habitable planet. Apparently, once you have convinced politicians to fund the first trip, the incremental cost of additional trips is very small!

The whole idea of interstellar travel was created by fiction writers, but because it has many elements of good science in it, it is a very persuasive idea. It sits smack in the middle of the ‘gray area’ between fantasy and reality, and this is what goads people into trying to imagine ways to make it a reality. One thing we do know is that it will be an expensive venture, requiring an investment at the level of many percent of an entire planet’s GDP.

By some estimates, the first interstellar voyage will cost many trillions of dollars, require decades to construct, and involve tens to hundreds of passengers and explorers. Assuming a project of this scope can even be sold to the bulk of humanity that will be left behind to pay the bills, how will the destination be selected? Will we just point the ship towards any star and commit these resources to a random journey and outcome, or will we know a LOT about where we are going before the fuel is loaded? Most of us will agree that the latter is more likely for such an expensive ‘one-off’ mission. By the way, let’s not talk about the human Manifest Destiny to explore the unknown. Even Christopher Columbus knew his destination in detail (India!) and traveled within a very benign biosphere, with a free, breathable atmosphere and comfortable gravity, to get there in a few months.

So…where will we go?

Contrary to popular ideas, we will know our destination in great detail long before we leave our solar system. We will know whether the star has any planets, and we will know it has at least one planet in its habitable zone (HZ), where the temperature would allow liquid water to exist. We will know whether the planet has an atmosphere. We will know its mass and size, and perhaps more importantly, whether the planet has a biosphere. We will not invest trillions of dollars to study a barren Mars or Venus-like planet. All of these issues will be worked out by astronomical remote-sensing research at far lower cost than traveling there. If a nearby star does not have detectable planets, we will most certainly NOT mount a trillion-dollar mission just to ‘go and see’!

The 10 nearest stars are: Proxima Centauri (4.24 ly), Alpha Centauri (4.36), Barnard’s Star (5.96), Luhman 16 (6.59), Wolf 359 (7.78), Lalande 21185 (8.29), Sirius (8.59), Luyten 726-8 (8.72), Ross 154 (9.68) and Ross 248 (10.32). This takes us out to a distance of just over 10 light years from Earth. The prospects for an interesting world to visit are not good.

Proxima Centauri has one recently-detected Earth-sized planet orbiting inside its HZ, making it a Venus-like world of no interest. Alpha Centauri B has one unverified Earth-sized planet, but not in the star’s liquid-water HZ; it orbits ten times closer than Mercury. There are no planets larger than Neptune orbiting this star closer than our planet Jupiter. Barnard’s Star has no known planets, but a Jupiter-sized planet inside the orbit of Mars has been excluded, so this is still a viable star for future searches for terrestrial planets in the star’s HZ. Luhman 16 is a binary system whose members orbit each other every 25 years at a distance of 3 AU. A possible companion orbits one of these stars every month at a distance closer than Mercury. As for the stars Wolf 359, Lalande 21185, Sirius, Luyten 726-8, Ross 154 and Ross 248, there have been searches for Jupiter-sized companions around these stars, but none have ever been claimed.

So, our nearest stars within 10 light years are pretty bleak as destinations for expensive missions. There is no solid evidence for Earth-sized planets orbiting within the HZs of any of them. These would not be plausible targets because there is so little return on the high cost of getting there, even though that cost, in terms of travel time, is the smallest for any stars in our neighborhood. This also sets a scale for the technology required: it is not enough to reach our nearest star; we have to trudge 2 to 3 times farther before we can find better destinations.

Better Destinations.

Let’s take a bigger step. Out to a distance of 16 light years there are 56 normal stars, which include some promising candidate targets.

Epsilon Eridani (10.52 ly) has one known giant planet outside its HZ. It also has two asteroid belts: one at about three times Earth’s distance from our sun (3 AU) and one at about 20 AU. No one would ever risk a priceless mission by sending it to a sparse planetary system with deadly asteroid belts and no HZ candidates!

Groombridge 34 (11.62 ly) – The only suspected planet has a mass of more than five Earths. No mission would be sent to such a planet, whose atmosphere would probably be crushingly dense and Jupiter-like even if it were in the HZ.

Epsilon Indi (11.82 ly) – It has a possible Jupiter-sized planet with a period of more than 20 years, and no known smaller planets.

Artist rendering of the planets Tau Ceti e and f (Credit: PHL @ UPR Arecibo)

Tau Ceti (11.88 ly) probably has five planets with masses between two and six times Earth’s, and with periods from 14 to 640 days. Planet Tau Ceti f is colder than Mars and lies at the outer limit of the star’s HZ. Its atmosphere might be dense enough for greenhouse heating, so the world might be habitable after all, but this is guesswork, not certainty.

Kapteyn’s Star (12.77 ly) – It has two planets, Kapteyn b and Kapteyn c, that are 5 to 8 times the mass of Earth. Kapteyn b has a period of 120 days and is a potentially habitable planet estimated to be 11 billion years old. Again, this is a massive planet whose surface you could never visit, so what would be the point of the interstellar expedition?

Gliese 876 (15.2 ly) has four planets. All have more than 6 times the mass of Earth and orbit closer than the planet Mercury. Gliese 876 c is a giant planet like Jupiter in the star’s habitable zone. Would you bet the entire mission on 876c having habitable ‘Galilean moons’ like our Jupiter? That would be an unacceptable shot in the dark, though a tantalizing one.

So, out to 16 light years we have some interesting prospects, but no confirmed Earth-sized planet in its star's HZ whose surface you could actually visit. We also have no solid data on the atmospheres of any of these worlds. None of these candidates seems worth investing the resources of a trillion-dollar mission to reach and study; we can study them all from Earth at far less cost.

Best Destinations.

If we take an even bigger step and consider stars closer than 50 light years, we have a sample of potentially 2,000 stars, though not all of them have been discovered and cataloged. About 130 are bright enough to be seen with the naked eye. The majority are dim, cool red dwarf stars, which are still good candidates for planetary systems. Among the known planetary candidates in this sample, several would be intriguing targets:

61 Virginis (27.9 ly) – It has three planets with masses between 5 and 25 times our Earth's, crowded inside the orbit of Venus. The asteroidal debris disk has at least 10 times as many comets as our solar system, though there are no detected planets more massive than Saturn within 6 AU. An Earth-mass planet in the star's habitable zone remains a possibility, but the asteroid belts make this an unacceptably high-risk target.

Gliese 667 planets (Credit: ESO )

Gliese 667 (23.2 ly) – As many as seven planets may orbit this star, though they have not been confirmed. All have masses between that of Earth and Uranus, and all but one are huddled inside the orbit of Mercury. Planets c and d are in the star's HZ and are at least 3 times the mass of Earth; their hypothetical moons may be habitable.

55 Cancri (40.3 ly) – All five planets orbiting this star are more than five times the mass of Earth. Only 55 Cancri f is located at the inner edge of the star's HZ, and its hypothetical moons could be habitable. More planets are possible within the stable zone between 0.9 and 3.8 AU if their orbits are circular. This is a system we still need to study.

HD 69830 (40.7 ly) – It has a debris disk produced by an asteroid belt twenty times more massive than the one in our own solar system. Its three detected planets have masses between 10 and 18 times that of Earth. The debris disk makes this a high-risk prospect even if there are habitable moons.

HD 40307 (41.8 ly) – Five of the six planets orbit very close to the star, inside the orbit of Mercury. The sixth orbits at a distance similar to Venus and is in the system's habitable zone. The planets range in mass from three to ten times Earth's. Again, is a planet in its HZ with a mass too great for a direct human visit a good candidate? I don't think so.

Upsilon Andromedae (44.25 ly) – The two outer planets are in orbits more elliptical than those of any of the planets in the Solar System. Upsilon Andromedae d is in the system's habitable zone, but it has three times the mass of Jupiter and huge temperature swings. Its hypothetical moons may be habitable.

47 Ursae Majoris (45.9 ly) – The only known planet, 47 Ursae Majoris b, is more than twice the mass of Jupiter and orbits between the distances of Mars and Jupiter. The inner part of the habitable zone could host a terrestrial planet in a stable orbit, but none has yet been detected.

There are still many more stars in this sample to detect, catalog and study, so it is possible that a Goldilocks planet could eventually be found. But we are now looking at destinations more than 20 light years away at a minimum. This will considerably increase the cost and duration of any interstellar mission, by factors of five to ten compared with a simple jaunt to Alpha Centauri.
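
Those factors follow directly from the linear scaling of travel time with distance at any fixed cruise speed. A back-of-the-envelope Python sketch, assuming purely for illustration a cruise at 10% of light speed and ignoring acceleration and braking phases:

    def years_one_way(distance_ly, cruise_fraction_c=0.10):
        """One-way travel time in years at a constant cruise speed."""
        return distance_ly / cruise_fraction_c

    for name, d_ly in [("Proxima Centauri", 4.24), ("Gliese 667", 23.2), ("55 Cancri", 40.3)]:
        print(f"{name}: {years_one_way(d_ly):.0f} years one way")
    # Proxima Centauri: 42 years; Gliese 667: 232 years; 55 Cancri: 403 years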

Other issues.

Would you really consider a planet with two to five times Earth's gravity to be a candidate? Who would want to live under that crushing weight? Many of the candidates we have found so far are massive Earths that few colonists would consider standing upon, and their surfaces are technologically expensive to reach and leave. Perhaps these worlds have moons with more comfortable gravities? There is always hope, but will hope be enough to justify a multi-trillion-dollar mission?
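
For a sense of scale: surface gravity in Earth units goes as g = M/R^2, and rocky planets roughly follow R ~ M^0.27, an empirical mass-radius scaling assumed here purely for illustration. A quick sketch:

    def surface_gravity(mass_earths, radius_exponent=0.27):
        """Surface gravity in Earth g's, using a rough rocky-planet mass-radius scaling."""
        radius_earths = mass_earths ** radius_exponent
        return mass_earths / radius_earths ** 2

    for m in (2, 5, 8):
        print(f"{m} Earth masses -> about {surface_gravity(m):.1f} g")
    # 2 -> 1.4 g, 5 -> 2.1 g, 8 -> 2.6 g

Under this scaling even an eight-Earth-mass world 'only' reaches about 2.6 g, and denser compositions push that higher; either way, it is a crushing place to stand.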

There is also the issue of atmosphere. None of the candidate planets we have discussed transits its star, so we cannot probe their atmospheres to learn whether they have one at all, or whether its trace gases would be lethal. The perfect destination worth the expense of a trip would have a breathable atmosphere with oxygen, and since free oxygen is only produced by living systems, such a planet would also have a biosphere. We can only hope that as the surveys of the nearby stars continue, we will find one of these. But statistics suggest we will have to search much farther than 50 light years, and more than a few thousand stars, before we encounter one. That makes the interstellar voyage even more costly: not by factors of five or ten, but potentially by hundreds. A trip to a world with a known biosphere might be worth the effort; possibly nothing less would justify the cost and the risk for the scientific return.

So the bottom line is that the only interstellar destination worth the expense is either a planet with a lethal atmosphere on which colonists could live hermetically sealed under domes, or a similar planet with a breathable oxygen atmosphere and a biosphere. Statistically, we will find far more examples of the first kind of target than the second. But in the majority of cases, we will not be able to detect the atmosphere of an Earth-sized world in its habitable zone before we start the trip, and will have to guess whether it even has an atmosphere at all!

The enormous cost of an interstellar trip to a target tens or even hundreds of light years away will preclude any such guesswork about what we will find when we get there. Consider this: investing $100 billion to travel to Mars, a low-risk planet we understand in detail, is still considered a political pipe dream even with existing technology! What would we call a trip that costs perhaps 100 times as much?

For more on this topic, have a look at my book 'Interstellar Travel: An Astronomer's Guide', available at Amazon.com.


Return here for my next blog posting on Friday, February 3

Why NASA needs ARMs

In 2013, a small 20-meter asteroid exploded over the city of Chelyabinsk, Russia, injuring nearly 1,500 people, mostly from flying glass. Had this asteroid exploded a few hours earlier, over New York City, the flying-glass hazard could have been lethal for thousands of people, sending thousands more into hospital emergency rooms for critical care. Of all the practical benefits of space exploration, it is hard to argue that asteroid investigations are not a high priority, above even dreams of colonizing the moon and Mars.

So why is it that the only NASA mission that would actually test a simple method for adjusting the orbit of an asteroid cannot seem to garner much support?

There has been much debate over the next step in human exploration: whether to go back to the moon or take the harder path to Mars. The latter goal has been favored, and for the last decade or so NASA has developed a step-by-step Journey to Mars approach, beginning with the development of the SLS launch vehicle and the testing of the many systems, technologies and strategies needed to support astronauts making this trip both quickly and safely. Along with numerous Mars mapping and rover missions now in progress or soon to be launched, there are also technology development missions to test such things as solar-electric 'ion' propulsion systems.

One of these test-bed missions with significant scientific returns is the Asteroid Redirect Mission (ARM), to be launched around 2021 at a cost of about $1.4 billion. This first-of-its-kind robotic mission will visit a large near-Earth asteroid, collect a multi-ton boulder from its surface, and use it in an 'enhanced gravity tractor' asteroid-deflection demonstration. The spacecraft will then redirect the boulder into a stable orbit around the moon, where astronauts will explore it and return with samples in the mid-2020s.
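
The enhanced gravity tractor is nothing more exotic than Newton's law of gravitation used as a tow rope: the spacecraft, made heavier by the captured boulder, hovers near the asteroid and lets their mutual attraction slowly pull the asteroid onto a new path. A toy Python estimate, with the masses and hover distance assumed purely for illustration:

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

    m_tractor = 2.0e4        # kg: spacecraft plus multi-ton boulder (assumed)
    m_asteroid = 4.0e9       # kg: a ~150-meter rubble-pile asteroid (assumed)
    d = 100.0                # hover distance from the asteroid's center, meters

    force = G * m_tractor * m_asteroid / d**2   # mutual gravitational tug, newtons
    accel = force / m_asteroid                  # asteroid's resulting acceleration
    dv_per_year = accel * 365.25 * 86400        # velocity change per year of hovering

    print(f"tug: {force:.2f} N, delta-v: {dv_per_year:.1e} m/s per year")
    # tug: 0.53 N, delta-v: 4.2e-03 m/s per year

A few millimeters per second per year sounds hopeless, but applied a decade or more ahead of a predicted impact it is enough to turn a direct hit into a clean miss.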

But all is not well for ARM.

ARM was proposed in 2010, during the Obama Administration, as an alternative to the canceled Constellation Program of the Bush Administration. With the new GOP-dominated administration set on dismantling the Obama Administration's legacy work, there is much incentive to eliminate it for political reasons alone.

Reps. Lamar Smith (R-Texas), chairman of the House Committee on Science, Space, and Technology, and Brian Babin (R-Texas), chairman of its space subcommittee, reportedly feel that the incoming Trump administration should be "unencumbered" by decisions made by the current one, much as they intend for the ACA. They claim to have access to "honest assessments" of ARM's value rather than "farcical studies scoped to produce a predetermined outcome," and the House's version of the FY 2017 appropriations bill includes wording that would force NASA to fully defund the ARM program. At the same time, Smith and Babin wrote, "the next Administration may find merit in some, if not all, of the components of ARM, and continue the program; however, that decision should be made after a full and fair review based on the merits of the program and in the context of a larger exploration and science strategy." Similar arguments will no doubt be used to cancel climate change research, which the incoming administration has also deemed politically biased and unscientific.

But ARM is no ordinary 'exploration and science' space mission, even setting aside its unique ability to test the first high-power ion engines for interplanetary travel and to retrieve a large, pristine multi-ton asteroid sample. All other NASA missions have certainly demonstrated their substantial scientific returns, and this is often the key justification that allows them to proceed. Mission technology also affords unique spinoff opportunities in the commercial sector, which makes the US aerospace industrial base happy to participate. But these returns all seem rather abstract and, for the person on the street, rather hard to appreciate.

For decades, astronomers have been discovering and tracking hundreds of thousands of asteroids. We live in an interplanetary shooting gallery: some 15,000 Near Earth Objects (NEOs), asteroids that come within 30 million miles of Earth's orbit, have already been discovered, with about 30 new ones added every week. Of the NEOs measuring 1 kilometer or more, statistically over 90% have now been identified, but only 27% of those 140 meters or larger have been discovered. Once their orbits are determined, we can predict which ones will pose a danger to Earth.

Currently there are 1,752 known potentially hazardous asteroids, those that come within 5 million miles of Earth's orbit (about 20 times the Earth-moon distance). None is predicted to impact Earth in the next 100 years, but new ones are found every week. Between now and February 2017, one object called 2016 YJ, about 30 meters across, will pass within 1.2 lunar distances of Earth. The list of closest approaches in 2016 is quite exciting to look through. The object 2016 QA2, discovered in the nick of time, was about 70 meters across and came within 53,000 miles of Earth; had it hit, the result would have been at least a Chelyabinsk-scale event. Even larger, and far more troubling, close encounters have been predicted: the 325-meter asteroid Apophis in 2029, which will pass well within the man-made satellite cloud that surrounds Earth, and the 1-kilometer asteroid 2001 WN5 in 2028.
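
Because these approach distances are quoted in a mix of miles and lunar distances, a two-line converter keeps the comparisons honest (taking 1 lunar distance as roughly 238,900 miles):

    LD_MILES = 238_900   # one Earth-moon distance in miles

    for name, miles in [("Hazardous-asteroid cutoff", 5_000_000), ("2016 QA2", 53_000)]:
        print(f"{name}: {miles / LD_MILES:.1f} lunar distances")
    # Hazardous-asteroid cutoff: 20.9 lunar distances; 2016 QA2: 0.2 lunar distances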

The first successful forecast of an impact event was made on 6 October 2008, when the asteroid 2008 TC3 was discovered and calculated to hit the Earth only 21 hours later. Luckily it had a diameter of only about three meters and caused no damage; some stony remnants of the asteroid have since been recovered. But this object could just as easily have been a 100-meter object exploding over New York City or London, with devastating consequences.

So in terms of planetary defense, asteroids are a dramatically important hazard we need to study. For some asteroids, we may have as little as a year to decide what to do. Although many mitigation strategies have been proposed, none have actually been tested! We need to test as many different orbit-changing strategies as we can before the asteroid with Earth’s name written on it is discovered.

Honestly, what more practical benefit can there be for a NASA mission than to materially protect Earth and our safety?

Check back here on Thursday, January 5 for the next installment!