The Rush to Mars

Even at the start of NASA’s space program in 1958, the target of our efforts was not the moon but Mars. Wernher von Braun, who directed NASA’s Marshall Space Flight Center and led the Saturn rocket program, was obsessed with Mars. His book The Mars Project, written in 1948, was the blueprint for how to get there, and he updated the plan in 1969. He saw the development of the Saturn V launch vehicle as the means to this end.

A major effort running in parallel with Kennedy’s moon program was the development of nuclear rocket technology. This would be the means of getting to Mars, with a proposed expedition launching in November 1981. The program was abandoned in 1972 when President Nixon unceremoniously canceled the Apollo Program and stopped the production of Saturn Vs. That immediately put the kibosh on any nuclear propulsion efforts, because the required fission reactors were far too heavy to be lifted into space by any other means. Von Braun resigned from NASA once he realized that his dream would never be realized, and died five years later.

Flash forward to 2004, when President George W. Bush announced his Vision for Space Exploration, which included a manned trip to Mars by the 2030s. There would be manned trips to the moon by 2015 to test out technologies relevant to the Mars trip and to learn how to live there for extended periods of time.

By 2010, the lunar portion of this effort had been canceled; however, the development of the Orion capsule and what is now called the Space Launch System are legacies of this program still in place and expected to be operational by ca. 2019. The rest of the initiative is now called NASA’s Journey to Mars, and it lays out a detailed plan for astronauts learning how to work farther and farther from Earth in self-sustaining habitats, leading to a visit to Mars in the 2030s. Meanwhile, the International Space Station’s life has been greatly extended, to the mid-2020s, so we can finally get a handle on how to live and work in space and solve the many medical issues that still plague this environment.

However, NASA’s systematic approach is not the only one in progress today.

The entire foundation of Elon Musk’s SpaceX company is to build, and make commercially profitable, successively larger launch vehicles leading to the Interplanetary Transport System, which would bring 100 colonists at a time to the surface of Mars in about 80 days, starting around 2026. SpaceX is even partnering with NASA for a Mars lander mission called Red Dragon in ca. 2018. Meanwhile, a competing program called Mars One (see picture above) proposes a crew of four people to land in 2032, with additional crews delivered every two years. This will be a one-way, do-or-die colony, and loss of life is expected. Mars One consists of two entities: the not-for-profit Mars One Foundation, and the for-profit company Mars One Ventures with CEO Bas Lansdorp at the corporate helm.

But wait a minute: what about all the non-tech issues like astronaut health and generating sustainable food supplies? Astronauts have been living in the International Space Station in shifts for nearly two decades, and many issues have been identified that we would be hard pressed to solve in only ten more years. NASA’s go-slow approach may be the only one consistent with not sending astronauts to a premature death on Mars, with all the political and social ramifications that implies.

The dilemma is that slow trips to Mars, like the 240-day trips advocated in NASA’s plan, exacerbate the health effects of prolonged weightlessness, including bone loss, failing eyesight, muscle atrophy and immune-system weakening. These effects are almost eliminated by much shorter trips, such as the 80-day target set by SpaceX. In fact, the entire raison d’être of the $100 billion International Space Station is to study long-term space effects during these long transits. This existential reason for the ISS would have been eliminated had a similar investment been made in ion or nuclear propulsion systems that reduced the travel time to a month or less!

Ironically, Wernher von Braun knew about this as long ago as 1969, but his insights were dismissed for political reasons, which led directly to our confinement to low Earth orbit for the next 50 years!

Check back here on Sunday, January 22 for the next installment!

The first named human

Not surprisingly, the record of the first humans identified by a personal name goes back to before the dawn of history itself. Through his artistic ‘Love Symbol’, The Artist Formerly Known as Prince gave us a clue to how pre-writing names were probably rendered!

Example of Jiahu Symbols (Wikipedia)


Pottery shards and other artifacts uncovered in China often bear curious symbols dating from the dawn of Chinese writing between 6600 and 6200 BC. Called the Jiahu Symbols, they are not part of a written language but merely personally invented symbols scratched on pottery to mark ownership by a specific individual: in other words, a name!


The first recorded name given in an actual writing system can be found on clay tablets dating from the Jemdet Nasr period in Sumer, between 3200 and 3100 BC.

Example of Jemdet Nasr cuneiform (Credit: Metropolitan Museum of Art)

The tablets are not profound treatises on human thinking, but accounting ledgers for tallying up goods and possessions! Some of the first names are those of the slave owner Gal-Sal and his two slaves, Enpap-x and Sukkalgir (3200-3100 BC). Another is Turgunu Sanga (ca. 3100 BC), who seems to have been an accountant for the Turgunu family. There are many more names from this period, but none that appear much before 3200 BC.



Example of Iry-Hor’s name on a pottery shard (Credit: Wikipedia)

Looking to Egypt, Iry-Hor (‘The Mouth of Horus’) is the earliest name we know there, dating from about 3200 BC. Little is known about Iry-Hor other than his name, found on pottery shards in one of the oldest tombs at Abydos, though based on his burial he was a pre-dynastic pharaoh of Upper Egypt [Wikipedia]. King Ka, from around this same time, was the first to inscribe his name inside a box-shaped serekh as an indicator of kingship. Following King Ka and King Iry-Hor we also have kings known by the hieroglyphic symbols Crocodile King and Scorpion King, followed by the name of the first pharaoh, Narmer (Catfish King), who united Upper and Lower Egypt and, together with his wife Neithhotep, lived between 3150 and 3125 BC. She, by the way, is the oldest woman to be mentioned by name. The name Neithhotep means “[The Goddess] Neith is satisfied”.

Other civilizations arrived at writing names much later than the Chinese, Sumerians and Egyptians, but we can still ask the same question.


Anitta (no known meaning to the name) was the king of the Hittite city of Kussara. He lived around 1700 BC and is the earliest known ruler to compose a text in the Hittite language, which is the oldest known Indo-European text.

Linear B is a syllabic script that predates the Greek alphabet by several centuries. The oldest writing dates to about 1450 BC. Some Knossos Linear B tablets mention people by name, and a number of Mycenaean names have exact equivalents in Homer, such as Hektor, which means “holding fast”.

Following many other ancient naming traditions, ancient Greek names have an intrinsic meaning. For example, Archimedes means “master of thought”, from the Greek elements archos (“master”) and medomai (“to think, to be mindful of”). And of course nearly all ancient Egyptian names have a separate meaning, such as Amun Tut Ankh, whose hieroglyphic name can be directly transcribed as ‘Living Image of Amun’. We know him more popularly as Tutankhamun.


The Mayans rose to prominence around AD 250. The oldest clearly named king is given by a glyph that translates into Yax Ehb’ Xook, which literally means “First Step Shark”. He was the first king of Tikal, ruling sometime between AD 63 and 90. Much later, in AD 420, we have the purported founder of Copan, K’inich Yax K’uk’ Mo’, whose name means “Sun-Eyed Resplendent Quetzal Macaw”.
The peoples of Africa, Australia and North America all had spoken languages but no written symbolism, so until writing was imported to these areas we have no documentable record of names. For example, among Native Americans the oldest known name dates from the arrival of the Pilgrims and their historical record-keeping: we read about Tisquantum (said to mean ‘The Wrath of God’), ca. 1620 AD, a member of the Patuxet tribe. In Africa, many names have come forward in time literally by word of mouth, but with no way to establish their actual dates of usage through writing. The legendary Queen of Sheba (1005-955 BC), for example, was traditionally believed to be part of the Ethiopian dynasty established in 1370 BC by Za Besi Angabo. Among Australian Aboriginals, writing only appeared after the arrival of Europeans in the 1780s, who transcribed language sounds into Latin text. Some of their names include Tharah, which means ‘thunder’, and Mokee, which means ‘cloudy’.

What is interesting about almost all ancient human names is that in their own languages they actually mean something. They are not sterile monikers. At a cocktail party, a conversation between two ancient Egyptians would be: ‘Hi, my name is Living Image of Amun.’ ‘Pleased to meet you! My name is The Beautiful One Has Come!’ It would not be heard as ‘Hi, my name is Tutankhamun.’ ‘Pleased to meet you! My name is Nefertiti!’

This widespread human habit of naming people with phrases is far different from what we experience in modern times. We rarely think much about names like ‘John Cartwright’ or ‘Mike Brown’. My own Swedish name, Sten Odenwald, translates into ‘Stone of Oden’s Forest’, and occasionally I really do think of it as more than a set of sounds or letters that designate me.

So the next time you visit Starbucks, imagine having this conversation:
You: I’d like a venti hot chocolate with whipped cream.
Barista: Your name?
You: The Living Image of the Iridescent Higgs Field.
Barista: ??
You: Just call me Bob.

Check here on Tuesday, January 17 for the next blog!

2016: A Year Beyond Reason

Psychologists define cognitive dissonance as the anxiety (dissonance) felt when people are confronted with information that is inconsistent with their beliefs. If the dissonance is not reduced by changing one’s beliefs, consonance can be restored instead through misperception, rejection or refutation of the information, seeking support from others who share the beliefs, or attempting to persuade others.

In other words, humans can often carry two completely conflicting ideas in their consciousness at the same time. This is a stressful condition, and to alleviate it, we resort to rejecting contrary information, or try to persuade others of the consistency of our viewpoint.

We saw a lot of this condition in 2016!

This is not some liberal psychological plot to disparage the far-right of our political spectrum, but an objective fact of how our brains work. Researchers using functional Magnetic Resonance Imaging (fMRI) have found that cognitive dissonance activated specific brain regions called the dorsal anterior cingulate cortex and the anterior insular cortex. They also found that the more the anterior cingulate cortex signaled a conflict, the more dissonance a person experiences. During decision-making processes where the participant is trying to reduce dissonance, activity increased in the right-inferior frontal gyrus, medial fronto-parietal region and ventral striatum, while activity decreased in the anterior insula. Researchers concluded that rationalization activity, where you are trying to reduce the stress caused by cognitive dissonance, may take place quickly (within seconds) without conscious deliberation, and that the brain may engage emotional responses in the decision-making process.

The problem is that CD leads to other cognitive effects that are sometimes harder to discern objectively. Confirmation bias (CB) refers to how people read or access information that affirms their already established opinions, rather than referencing material that contradicts them. This bias is particularly apparent when someone is faced with deeply held beliefs, i.e., when a person has ‘high commitment’ to their attitudes. People display confirmation bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position.

We saw a lot of that, too, in 2016.

An interesting study of biased interpretation occurred during the 2004 U.S. presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from George W. Bush, John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual’s statements were inconsistent. There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory. The participants made their judgments while in an fMRI scanner that monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate’s irrational or hypocritical behavior.

The bottom line is that, thanks to evolution, we have been blessed with a brain that suffers from many different kinds of reasoning pathologies. These may have had survival value in the remote past for making quick judgments in our social groups, or mistaking a distant shadow for a tiger, but now they are liabilities in our far more rational world of science and technology. Scientists spend a lot of time trying to weed out CD and CB from their analyses, and the result is that for 400 years of observing Nature as dispassionately as we can, we have created a marvelously accurate model of our world.

Sadly, CD and CB have at the same time been used to manipulate voters and consumers, with amazingly negative consequences. The dissonance is that we fully realize we are being manipulated by biased information, yet we seem powerless to resist its siren call. In the 2016 election, voters supporting Trump steadfastly refused to treat his frequent and documented lying as grounds for not trusting him.

Some of the worst cases of CD and CB occurred during the 2016 election, and psychologists will be writing papers about it for decades. It all comes down to how people were convinced not to vote in their own self-interest.

How is it that voters whose only insurance came from the ACA voted for a GOP ticket that promised to repeal it? How is it that so many students voted against the Democratic candidate who promised to eliminate tuition? How is it that so many poor people voted for an alleged multi-billionaire whose lavish, gold-plated lifestyle was the antithesis of their own? How is it that Clinton and Trump were placed on the same ‘untrustworthy’ pedestal, when evidence showed that Clinton played by the rules and released her income tax statements, while Trump ran a Trump University con job and withheld his? How is it that Trump’s steadfast attacks against our own intelligence services in defense of Putin and Assange are not met with more rejection and patriotic contempt by his followers?

In the end, Trump voters and Red States will be paying a disproportionate economic penalty for letting CD and CB get the better of their reasoning. But because we are all in this together for the next four years, the rest of us will also feel some of this dissonance as well as collateral damage as voters in the red states ask voters in the blue states to bail them out.

Check back here on Saturday, January 14 for the next installment!

Space Travel via Ions

For 60 years, NASA has used chemical rockets to send its astronauts into space and to get its spacecraft from planet to planet. Huge million-pound thrusts sustained for only a few minutes were enough to do the job: mainly, to break the tyranny of Earth’s gravity and get tons of payload into space. But this also means that Mars is over 200 days away from Earth, and Pluto nearly 10 years. The problem: rockets use propellant (called reaction mass) that can only be ejected at speeds of a few kilometers per second. To make interplanetary travel a lot zippier, and to reduce its harmful effects on passenger health, we have to use rocket engines that eject mass at far higher speeds.

In the 1960s, when I was but a wee lad, it was the heyday of chemical rockets leading up to the massive Saturn V, but I also knew about other technologies being investigated, like nuclear rockets and ion engines. Both were the stuff of science fiction, and in fact I read science fiction stories based upon these futuristic technologies. But I bided my time and dreamed that a few decades after Apollo, we would be traveling to the asteroid belt and beyond on day trips with these exotic rocket engines pushing us along.

Well…I am not writing this blog from a passenger ship orbiting Saturn, but the modern-day reality 50 years later is still pretty exciting. Nuclear rockets have been tested and found workable but too risky and heavy to launch.  Ion engines, however, have definitely come into their own!

Most geosynchronous satellites use small ‘stationkeeping’ ion thrusters to keep them in their proper orbit slots, and NASA has sent several ion-powered spacecraft, like Deep Space 1 and Dawn, to rendezvous with asteroids. Japan’s Hayabusa spacecraft also used ion engines, and ESA’s BepiColombo and LISA Pathfinder missions use electric propulsion as well. These engines eject charged xenon atoms (ions) at speeds as high as 200,000 mph (90 km/sec), but the thrust is so low that it takes a long time for a spacecraft to build up to kilometer-per-second speeds. The Dawn spacecraft, for example, took 2,000 days to reach 10 km/sec, although it used only a few hundred pounds of xenon!

But on the drawing boards even more powerful engines are being developed. Although chemical rockets can produce millions of pounds of thrust for a few minutes at a time, ion engines produce thrusts measured in ounces for thousands of days at a time. In space, a little goes a long way. Let’s do the math!

The Deep Space 1 engine used 2,300 watts of electrical power and produced F = 92 milliNewtons of thrust, which is only about 1/3 of an ounce! The spacecraft had a mass of m = 486 kg, so from Newton’s famous F = ma we get an acceleration of a = 0.2 millimeters/sec/sec. At that rate it takes about 60 days to reach a speed of 1 kilometer/sec. The Dawn mission, launched in 2007, has now visited the asteroid Vesta (2011) and the dwarf planet Ceres (2015) using a 10-kilowatt ion engine system with 937 pounds of xenon, and achieved a record-breaking speed change of 10 kilometers/sec, some 2.5 times greater than that of Deep Space 1.
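Let’s check those numbers with a few lines of Python, a back-of-the-envelope sketch using only the figures quoted above:

```python
# Sanity-check the Deep Space 1 figures from the text:
# 92 mN of thrust pushing a 486 kg spacecraft.
F = 0.092          # thrust in newtons (92 milliNewtons)
m = 486.0          # spacecraft mass in kilograms

a = F / m          # Newton's second law: F = m * a
print(f"acceleration: {a*1000:.2f} mm/s^2")    # ~0.19 mm/s^2

# Time needed to build up a 1 km/s speed change at that acceleration:
t_days = (1000.0 / a) / 86400
print(f"time to reach 1 km/s: {t_days:.0f} days")   # ~61 days, matching the ~60 quoted
```

Tiny accelerations, but they never stop, and that is the whole secret of ion propulsion.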

The thing that limits the thrust of the xenon-based ion engines is the electrical energy available. Currently, kilowatt engines are the rage because spacecraft can only use small solar panels to generate the electricity. But NASA is not standing still on this.


The NEXIS ion engine was developed by NASA’s Jet Propulsion Laboratory; this photograph was taken while the engine was consuming 27 kW of power and producing a thrust of 0.5 Newtons (about 2 ounces).

An extensive research study of megawatt ion engine designs by the late David Fearn was presented at the Space Power Symposium of the 56th International Astronautical Congress in 2005. The conclusion was that megawatt-class ion engines pose no particular design challenges and can achieve exhaust speeds exceeding 10 km/second. Among the largest ion engines actually tested so far is a 5-megawatt engine developed in 1984 at the Culham Laboratory. With a beam energy of 80 kV, the exhaust speed is a whopping 4,000 km/second, and the thrust was 2.4 Newtons (about 0.5 pounds).
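That exhaust speed can be sanity-checked from the beam voltage alone, since an electrostatically accelerated ion trades its full beam energy for kinetic energy. A sketch, assuming a light, singly charged ion such as a proton (the actual Culham ion species is an assumption here):

```python
import math

# Electrostatic acceleration: q*V = (1/2)*m*v^2  ->  v = sqrt(2*q*V/m)
# Assumption (not stated in the text): a singly charged proton.
# Heavier ions like xenon would come out much slower at the same voltage.
q = 1.602e-19      # elementary charge, coulombs
m_p = 1.673e-27    # proton mass, kilograms
V = 80e3           # beam voltage from the Culham figure, volts

v = math.sqrt(2 * q * V / m_p)
print(f"exhaust speed: {v/1000:.0f} km/s")   # ~3900 km/s, close to the 4000 km/s quoted
```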

All we have to do is come up with efficient means of generating megawatts of power to reach truly enormous exhaust speeds. Currently the only ideas are high-efficiency solar panels and small fission reactors. With a small nuclear reactor delivering 1 gigawatt of electricity, you could get thrusts as high as 500 Newtons (about 100 pounds). That means a 1-ton spacecraft would accelerate at 0.5 meters/sec/sec and reach Mars in about a week! Meanwhile, NASA plans to use some type of 200-kilowatt ion engine design with solar panels to transport cargo and humans to Mars in the 2030s. Test runs will also be carried out with the Asteroid Redirect Mission ca. 2021, using a smaller 50-kilowatt solar-electric design.
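The jump from power to thrust follows from the ideal relation P = ½Tv between jet power, thrust and exhaust speed. Here is a sketch, assuming 100% efficiency (real engines do worse) and Mars at its closest approach of roughly 56 million kilometers:

```python
import math

# Ideal jet power: P = (1/2) * T * v, so for a fixed power budget T = 2*P/v.
# Assumptions: 100% power-to-jet efficiency, constant thrust the whole way.
P = 1e9            # 1 gigawatt of electrical power, watts
v = 4.0e6          # 4000 km/s exhaust speed, m/s

T = 2 * P / v
print(f"thrust: {T:.0f} N")                  # 500 N, as quoted

# Accelerate-then-decelerate trip for a 1-ton spacecraft.
# Assumption: Mars at closest approach, ~56 million km.
m = 1000.0         # spacecraft mass, kg
a = T / m          # 0.5 m/s^2
d = 56e9           # Earth-Mars distance, meters
t = 2 * math.sqrt(d / a)                     # flip-at-midpoint travel time
print(f"Mars trip: {t/86400:.1f} days")      # roughly a week
```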

So we are literally at the cusp of seeing a whole new technology for interplanetary travel being deployed. If you want to read more about the future of interplanetary travel, have a look at my book ‘Interplanetary Travel: An astronomer’s guide’, which covers destinations, exploration and propulsion technology, available at

Stay tuned!

Check back here on Wednesday, January 11 for the next installment!

Image credits:

Ion engine schematic

Star Destroyer

Why NASA needs ARMs

In 2013, a small 20-meter asteroid exploded over the city of Chelyabinsk and injured some 1,500 people, mostly from flying glass. Had this asteroid exploded a few hours earlier over New York City, the flying-glass hazard would have been lethal for thousands of people, sending thousands more into the emergency rooms of hospitals for critical-care treatment. Of all the practical benefits of space exploration, asteroid investigations rank as a high priority, even above dreams of colonizing the moon and Mars.

So why is it that the only NASA mission to actually try a simple method to adjust the orbit of an asteroid cannot seem to garner much support?

There has been much debate over the next step in human exploration: whether to go back to the moon or take the harder path to Mars. The latter goal has been much favored, and for the last decade or so NASA has developed a step-by-step Journey to Mars approach, beginning with the development of the SLS launch vehicle and the testing of the many systems, technologies and strategies needed to support astronauts making this trip both quickly and safely. Along with the numerous Mars mapping and rover missions now in progress or soon to be launched, there are also technology development missions to test out such things as solar-electric ‘ion’ propulsion systems.

One of these test-bed missions, with significant scientific returns of its own, is the Asteroid Redirect Mission (ARM), to be launched ca. 2021 at a cost of about $1.4 billion. This first-of-its-kind robotic mission will visit a large near-Earth asteroid, collect a multi-ton boulder from its surface, and use it in an enhanced gravity-tractor asteroid-deflection demonstration. The spacecraft will then redirect the boulder into a stable orbit around the moon, where astronauts will explore it and return with samples in the mid-2020s.

But all is not well for ARM.

ARM was proposed in 2010 during the Obama Administration as an alternative to the canceled Constellation Program proposed by the Bush Administration, so with the new GOP-dominated administration set on dismantling all of the Obama Administration’s legacy work, there is much incentive to eliminate it for political reasons alone.

Reps. Lamar Smith (R-Texas), chairman of the House Committee on Science, Space, and Technology, and Brian Babin (R-Texas), chairman of its space subcommittee, reportedly feel that the incoming Trump administration should be “unencumbered” by decisions made by the current one — like what they want to do with the ACA. They claim to have access to “honest assessments” of ARM’s value rather than “farcical studies scoped to produce a predetermined outcome.” The House’s version of the FY 2017 appropriations bill includes wording that would force NASA to fully defund the ARM program. Furthermore, Smith and Babin wrote, “the next Administration may find merit in some, if not all, of the components of ARM, and continue the program; however, that decision should be made after a full and fair review based on the merits of the program and in the context of a larger exploration and science strategy.” Similar arguments will no doubt be used to cancel climate change research, which has also been deemed politically biased and unscientific by the incoming administration.

But ARM is no ordinary ‘exploration and science’ space mission, even setting aside its unique ability to test the first high-power ion engines for interplanetary travel and to retrieve a large, pristine, multi-ton asteroid sample. All other NASA missions have certainly demonstrated their substantial scientific returns, and this is often the key justification that allows them to proceed. Mission technology also affords unique tech spinoff opportunities in the commercial sector, which makes the US aerospace industrial base very happy to participate. But these returns all seem rather abstract and, for the person on the street, rather hard to appreciate.

For decades, astronomers have been discovering and tracking hundreds of thousands of asteroids. We live in an interplanetary shooting gallery, where some 15,000 Near-Earth Objects (NEOs) have already been discovered, with 30 new ones added every week. NEOs, by the way, are asteroids that come within 30 million miles of Earth’s orbit. Of those measuring 1 kilometer or more, statistically over 90% of the population has now been identified, but only 27% of those 140 meters or larger have been discovered. Once their orbits are determined, we can make predictions about which ones will pose a danger to Earth.

Currently there are 1,752 known potentially hazardous asteroids that come within 5 million miles of Earth (20 times the Earth-moon distance). None are predicted to impact Earth in the next 100 years, but new ones are found every week. Between now and February 2017, one object called 2016 YJ, about 30 meters across, will pass within 1.2 lunar distances of Earth. The list of closest approaches in 2016 is quite exciting to look through. The object 2016 QA2, discovered in 2016 in the nick of time, was about 70 meters across and came within 53,000 miles of Earth; its impact would have been an event similar to Chelyabinsk. Even larger and far more troubling close encounters have been predicted: the 325-meter asteroid Apophis in 2029, which will pass well within the man-made satellite cloud that surrounds Earth, and the 1-kilometer asteroid 2001 WN5 in 2028.

The first successful forecast of an impact event was made on 6 October 2008, when the asteroid 2008 TC3 was discovered and calculated to hit the Earth only 21 hours later. Luckily it had a diameter of only about three meters and did not cause any damage; since then, some stony remnants of the asteroid have been found. But this object could just as easily have been a 100-meter object exploding over New York City or London, with devastating consequences.

So in terms of planetary defense, asteroids are a dramatically important hazard we need to study. For some asteroids, we may have as little as a year to decide what to do. Although many mitigation strategies have been proposed, none have actually been tested! We need to test as many different orbit-changing strategies as we can before the asteroid with Earth’s name written on it is discovered.

Honestly, what more practical benefit can there be for a NASA mission than to materially protect Earth and our safety?

Check back here on Thursday, January 5 for the next installment!

To Pluto in 30 days!

OK… While everyone else is worrying about how to get to Mars, let’s take a really big step and figure out how to get to Pluto… in a month!

The biggest challenge for humans is surviving the long-term rigors of space hazards, but nearly all of that is eliminated if we keep our travel times down to a few weeks.

Historically, NASA spacecraft such as the Pioneer, Voyager and New Horizons missions have taken many years to get as far away from Earth as Pluto. The New Horizons mission was the fastest and most direct of these. Its Atlas V launch vehicle gave it an initial speed of 58,000 km/hr. With a brief gravity assist by Jupiter, its speed was boosted to 72,000 km/hour, and the 1000-pound spacecraft made it to Pluto in 9.5 years. We will have to do a LOT better than that if we want to get there in 1 month!

The arithmetic of the journey is quite simple: good old speed = distance / time. But if we build up a huge speed to make the trip, we have to shed that speed to arrive at Pluto and enter orbit. The best strategy is to accelerate for the first half of the trip, then turn the spacecraft around and decelerate for the second half. The closest distance of Pluto to Earth is about 4.2 billion kilometers (2.7 billion miles). That means that on each 15-day, 2.1-billion-kilometer leg, you are traveling at an average speed of 5.8 million kilometers per hour!

Astronomers like to use kilometers/second as a speed unit, so this becomes about 1,600 km/sec. By comparison, the New Horizons speed was 20 km/sec. Other fast things in our solar system include the orbit speed of Mercury around the sun (57 km/s), the average solar wind speed (400 km/s) and a solar coronal mass ejection event (3,000 km/s).

If our spacecraft generates constant thrust by running its engines all the time, it produces a uniform acceleration from minute to minute. We can calculate how much this is using the simple formula distance = ½ × acceleration × time². With the distance of the accelerating leg at 2.1 billion km and the time at 15 days, we get about 2.5 meters/sec/sec. Earth’s gravity is 9.8 meters/sec/sec, so you would feel an ‘artificial gravity’ of about 0.25 Gs for the whole journey: a gentle but noticeable push, about a quarter of your normal weight.
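The flip-at-midpoint arithmetic can be checked in a few lines, using the 15-day accelerating leg and its 2.1-billion-kilometer half-distance:

```python
# Constant-thrust, flip-at-midpoint trip: accelerate for the first half,
# decelerate for the second. Over the accelerating half,
#   d = (1/2) * a * t^2   ->   a = 2 * d / t^2
d = 2.1e12                 # half of 4.2 billion km, in meters
t = 15 * 86400             # 15 days, in seconds

a = 2 * d / t**2
print(f"acceleration: {a:.2f} m/s^2")        # ~2.5 m/s^2
print(f"gee load: {a/9.8:.2f} g")            # ~0.26 g
print(f"peak speed: {a*t/1000:.0f} km/s")    # ~3240 km/s at the turnaround point
```

Note that the peak speed at the midpoint is twice the 1,600 km/sec average quoted above, exactly as it should be for uniform acceleration from rest.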

If the rocket produces its thrust by squirting propellant (reaction mass) out of its engines, we can estimate that this exhaust speed has to be about 1,600 km/sec. Rocket engines are compared in terms of their specific impulse (SI), which is the exhaust speed divided by the acceleration of gravity at Earth’s surface, so if the exhaust speed is 1,600 km/sec, then the SI is about 160,000 seconds. For chemical rockets like the Saturn V, SI = 250 seconds!
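The payoff of a huge specific impulse comes from the Tsiolkovsky rocket equation, Δv = vₑ ln(m₀/m₁), which sets how much of the ship must be fuel. A short sketch (the 20 km/s delta-v and the 4.5 km/s chemical exhaust speed are illustrative assumptions, not figures from the text):

```python
import math

g0 = 9.81          # standard gravity, m/s^2

# Specific impulse is just exhaust speed divided by g0:
v_e = 1.6e6        # 1600 km/s exhaust speed, in m/s
Isp = v_e / g0
print(f"Isp: {Isp:.0f} s")   # ~163,000 s (the text rounds to 160,000)

# Tsiolkovsky rocket equation: delta-v = v_e * ln(m0/m1),
# so the propellant fraction needed for a given delta-v is:
def propellant_fraction(delta_v, v_exhaust):
    return 1.0 - math.exp(-delta_v / v_exhaust)

# Illustrative comparison: a 20 km/s mission delta-v with a chemical
# engine (v_e ~ 4.5 km/s) versus this hypothetical ion engine.
print(f"chemical: {propellant_fraction(20e3, 4.5e3):.1%} of launch mass is fuel")
print(f"ion:      {propellant_fraction(20e3, 1.6e6):.2%} of launch mass is fuel")
```

When the exhaust speed dwarfs the mission’s delta-v, the fuel shrinks to a trivial fraction of the ship, which is why high specific impulse matters so much.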

What technology do we need to get to these speeds and specific impulses?

The most promising technology we have today is the ion rocket engine, which has SIs in the range of 2,000 to 30,000 seconds. The largest ion engine designs include the VASIMR engine, a proposed 200-megawatt, nuclear-electric ion engine design that could conceivably get us to Mars in 39 days. Ion engines are limited by the electrical power used to accelerate the ions (currently in the kilowatt range, though gigawatts are possible with nuclear power plants) and by the mass of the ions themselves (currently xenon atoms).

Other designs propose riding the solar wind using solar sails; however, although this works on the outward-bound leg of the trip, it is very difficult to return to the inner solar system! The familiar sailing technique of ‘tacking into the wind’ will not work: a sailboat tacks by playing the pressure of the wind on its sail against the resistance of the water on its keel, and a solar sail has no analogous medium to push against. Laser propulsion systems have also been considered, but for payloads with appreciable mass the power requirements rival the total electrical power generated by a large fraction of the world.

So, some version of ion propulsion with gigawatt power plants (fission or fusion) may do the trick. Because the SIs are so very large, the amount of fuel will be a small fraction of the payload mass, and these ships may look very much like those fantastic ships we often see in science fiction after all!
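The connection between specific impulse and fuel load comes from the Tsiolkovsky rocket equation: the propellant fraction of the ship's initial mass is 1 − e^(−Δv/vₑ), so it only becomes small once the exhaust speed comfortably exceeds the total mission Δv. A sketch (the Δv of 3,200 km/s, accelerating and then decelerating at 1,600 km/s each, is purely illustrative):

```python
import math

def propellant_fraction(delta_v, v_exhaust):
    """Fraction of initial mass that must be propellant (Tsiolkovsky equation)."""
    return 1.0 - math.exp(-delta_v / v_exhaust)

dv = 3200e3  # illustrative mission delta-v, m/s
for ve in (4.5e3, 160e3, 1600e3, 16000e3):   # chemical up to very-high-SI ion
    frac = propellant_fraction(dv, ve)
    print(f"ve = {ve/1e3:8.1f} km/s -> fuel fraction = {frac:.3f}")
```

The trend is the point: at chemical exhaust speeds the ship would be essentially all fuel, while pushing the exhaust speed well past the mission Δv drives the fuel load down toward a modest fraction of the mass.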

Oh…by the way, the same technology that would get you to Pluto in 30 days would get you to Mars in 9 days and the Moon in a matter of hours.
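Under a fixed constant acceleration, the one-way travel time scales as the square root of distance, t = √(2d/a). A rough sketch using the ~0.62 m/s² figure from above (continuous thrust the whole way, ignoring any turnaround-and-decelerate bookkeeping, with round-number distances):

```python
import math

a = 0.62  # m/s^2, from the earlier estimate

def days(d_m):
    """One-way trip time under constant acceleration, t = sqrt(2*d/a), in days."""
    return math.sqrt(2 * d_m / a) / 86400

destinations = [
    ("Moon", 3.84e8),            # ~384,000 km
    ("Mars (closest)", 7.8e10),  # ~78 million km
    ("Pluto", 2.1e12),           # the 2.1 billion km used above
]
for name, d in destinations:
    print(f"{name:15s} {days(d):7.2f} days")
```

Adding a flip-and-decelerate phase at the midpoint lengthens each trip by a factor of √2, about 41%, which does not change the basic story: once Pluto is a month away, the Moon is a matter of hours.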

Now, wouldn’t THAT be cool?

If you want to see some more ideas about interplanetary travel, have a look at my book ‘Interplanetary Travel: An Astronomer’s Guide’, available at

Check back here on Monday, January 2 for the next installment!

Selling Ice to Eskimos

Looking beyond our first journeys to Mars in the 2030s, and perhaps the outposts we may set up there in the 2040s, discussions of the commercialization of space frequently bring up the prospect of interplanetary mining. A bit of careful thought can define the prospects for such a venture if we are willing to confront them honestly.

The biggest challenge is that the inner solar system out to the asteroid belt is vastly different from the outer solar system from Jupiter to the distant Kuiper Belt. It is as though they occupy two completely separate universes, and for all intents and purposes, they do!

The inner solar system is all about rocky materials, either on accessible planetary surfaces and their moons, or in the form of asteroids like Vesta, shown in the photo. We have studied a representative sample of them, and they are rich in metals, silicates and carbon-water compounds. There are lots of fantastic raw materials here for creating habitats, building high-tech industries, and synthesizing food.

Humans tend to ‘follow the water’ and we know that the polar regions of Mercury and the Moon have water-ice locked away in permanently shadowed craters under the regolith. Mars is filthy rich with water-ice, which forms the permanent core of its polar caps, and probably exists below the surface in the ancient ocean basins of the Northern Hemisphere. Many asteroids in the outer belt are also rich in water, as are the occasional cometary bodies that pass through our neighborhood dozens of times a year.

The inner solar system is also compressed in space. Typical closest distances between its four planets can be about 30 million miles, so the technological requirements for interplanetary travel are not so bad. Over the decades, we have launched about 50 spacecraft to inner solar system destinations for a modest sum of money and rocketry skill.

The outer solar system is quite another matter.

Just to get there we have to travel over 500 million miles to reach Jupiter…ten times the distance to Mars when it is closest to Earth. The distances between destinations in the outer solar system are close to one billion miles! We have sent ten spacecraft to study these destinations. You cannot land on any of the planets there, only their moons. Even so, many of these moons (e.g., those near Jupiter) are inaccessible to humans due to the intense radiation belts of their planets.

The most difficult truth to deal with in the outer solar system is the quality of the resources we will find there. It is quite clear from astronomical studies and spacecraft visits that the most easily accessible resources are various forms of water and methane ice. What little rocky material there is typically lies buried under hundreds of kilometers of ice, like Saturn’s moon Enceladus shown here, or at the cores of the massive planets. The concept of mining in the outer solar system is one of recovering ice, which has limited utility for fabricating habitats or for use as fuel and reaction mass.

The lack of commercializable resources in the outer solar system is the biggest impediment to developing future ‘colonization’ plans for creating permanent, self-sustaining outposts there. This is dramatically different from what we encounter in the inner solar system, where minable resources are plentiful and water is far less costly to access.

Astronomically speaking, we will have much to occupy ourselves in developing the inner solar system for human access and commercialization, but there is a big caveat. Mined resources cannot be brought back to Earth, no matter how desirable the gold, platinum and diamonds uncovered might be. The overhead costs to mine and ship these resources are so high that they will never be able to compete with similar resources mined on Earth. Like they say about Las Vegas, ‘what is mined in space, stays in space’. Whatever resources we mine will be utilized to serve the needs of habitats on Mars and elsewhere, where the mining costs are just part of the high-cost bill for having humans in space in the first place.

The good news, however, is that the outer solar system will be the playground for scientific research, and who knows, perhaps even tourism. The same commercial pressures that will drive rocket technology to get us to Mars in 150 days will then push these trips down to months, then weeks, then days. Once we can get to Mars in a week or less, we can get to Pluto in a handful of months, not the current ten-year journeys. As in so many other historical situations, scientific research and tourism become viable goals for travel as partners to the political or commercial competition that got us to India in the 1500s, the Moon in the 1960s…or Mars in the 2000s.

In the grand scheme of things, we have all the time in the world to make this happen!

For more about this, have a look at my book ‘Interplanetary Travel: An Astronomer’s Guide’ for details about resources, rocket technology, and how to keep humans alive, based upon the best current ideas in astronomy, engineering, psychology and space medicine. Available at

Check back here on Friday, December 30 for the next installment!

Cancer and Cosmology

For the treatment of my particular cancer, small B-cell follicular non-Hodgkin’s lymphoma, I will soon be starting a 6-month course of infusions of Rituximab and Bendamustine. The biology of these miracle drugs seems to be very solid and logically sound. This one-two chemical punch to my lymphatic system will use targeted antibodies to bind with the CD20 receptor on the cancerous B-cells, setting in motion several cellular mechanisms that will kill the cells. First, the antibody bound to the CD20 receptor attracts T-cells in the immune system to treat the cancerous B-cell as an invader. Thus begins my immune system’s process of killing the invader. The antibody also triggers a self-destruct reaction in the cell called apoptosis. Even better, Rituximab does not set in motion the process to kill normal B-cells!

The promise is that my many enlarged lymph nodes, chock-a-block with the cancerous B-cells, will be dramatically reduced in size to near-normal levels as they are depopulated of the cancerous cells. So why don’t all patients show the same dramatic reductions? About 70% respond to this therapy to various degrees, while 10% do not. Why, given the impeccable logic of the process, aren’t the response rates closer to 100%?

Meanwhile, in high-energy physics, supersymmetry is a deeply beautiful, linchpin mathematical principle upon which the next generation of theories about matter and gravity is based. By adding a teaspoon of it to the Standard Model, which currently accounts in great mathematical detail for all known particles and forces, supersymmetry provides an elegant way to explore an even larger universe: one that includes dark matter, unifies all natural forces, and explains many of the existing mysteries not answered by the Standard Model.
The simplest such extension is called the Minimal Supersymmetric Standard Model (MSSM). Nature consistently rewards the simplest explanations for physical phenomena, so why has there been absolutely no sign of supersymmetry at the energies predicted by the MSSM and explored by the CERN Large Hadron Collider?

In both cases, I have a huge personal interest in these logically compelling strategies and ideas: one to literally save my life, and the other to save the intellectual integrity of the physical world I have so deeply explored as an astronomer during my entire 40-year career. In each case, the logic seems to be flawless, and it is hard to see how Nature would not avail itself of these simple and elegant solutions with high fidelity. But for some reason it chooses not to do so. Rituximab works only imperfectly, while supersymmetry seems an untapped logical property of the world.

So what’s going on here?

In physics, we deal with dumb matter locked into simple systems controlled by forces that can be specified with high mathematical accuracy. The fly in the ointment is that, although huge collections of matter on the astronomical scale follow one set of well-known laws first discovered by Sir Isaac Newton and others, at the atomic scale we have another set of laws that operate on individual elementary particles like electrons and photons. This is still not actually a problem, and thanks to some intense mathematical reasoning and remarkable experiments carried out between 1920 and 1980, our Standard Model is a huge success. One of the last missing pieces of this model was the Higgs boson, finally discovered in 2012, some 50 years after its existence was predicted! But as good as the Standard Model is, there seem to be many loose ends that are like red flags to the inquiring human mind.

One major loose end is that astronomers have discovered what is popularly called ‘dark matter’, and there is no known particle or force in the Standard Model to account for it. Supersymmetry addresses the question: why does nature have two families of particles when one would be even simpler? Amazingly, and elegantly, it answers this by showing how electrons and quarks, which are elementary matter particles, are related to photons and gluons, which are elementary force-carrying particles. But in beautifully unifying the particles and forces, it also offers up a new family of particles, the lightest of which would fit the bill as the missing dark matter particle!

This is why physicists are desperately trying to verify supersymmetry, not only to simplify physics, but to explain dark matter on the cosmological scale. As an astronomer, I am rooting for supersymmetry because I do not like the idea that 80% of the gravitating stuff in the universe is not stars and dust, but inscrutable dark matter. Nature seems not to want to offer us this simple option that dark matter is produced by ‘supersymmetric neutralinos’. But apparently Nature may have another solution in mind that we have yet to stumble upon. Time will tell, but it will not be for my generation to discover.

On the cancer-side of the equation, biological systems are gears-within-gears in a plethora of processes and influences. A logically simple idea like the Rituximab treatment looks compelling if you do not look too closely at what the rest of the cancerous B-cells are doing, or how well they like being glommed onto by a monoclonal antibody like Rituximab. No two individuals apparently have the same B-cell surfaces, or the same lymphatic ecology in a nearly-infinite set of genetic permutations, so a direct chemical hit by a Rituximab antibody to one cancerous B-cell may be only a glancing blow to another. This is why I am also rooting for my upcoming Rituximab treatments to be a whopping success. Like supersymmetry, it sure would simplify my life!

The bottom line seems to be that, although our mathematical and logical ideas seem elegant, they are never complete. It is this incompleteness that defeats us, sometimes by literally killing us and sometimes by making our entire careers run through dark forests for decades before stumbling into the light.


Check back here on Wednesday, December 28 for the next installment!

Rainbow image credit: Daily Mail: UK

Oops…One more thing!

After writing thirteen essays about space, I completely forgot to wrap up the whole discussion with some thoughts about the Big Picture! If you follow the links in this essay you will come to the essay where I explained the idea in more detail!

Why did I start these essays with so much talk about brain research? Well, it is the brain, after all, that tries to create ideas about what you are seeing based on what the senses are telling it. The crazy thing is that what the brain does with sensory information is pretty bizarre when you follow the stimuli all the way to consciousness. In fact, when you look at all the synaptic connections in the brain, only a small number have anything to do with sensory inputs. It’s as though you could literally pluck the brain out of the body and it would hardly realize it needed sensory information to keep it happy. It spends most of its time ‘talking’ to itself.

The whole idea of space really seems to be a means of representing the world to the brain to help it sort out the rules it needs to survive and reproduce. The most important rule is that of cause-and-effect or ‘If A happens then B will follow’. This also forms the hardcore basis of logic and mathematical reasoning!
But scientifically, we know that space and time are not just some illusion, because objectively they seem to be the very hard currency through which the universe represents sensory stimuli to us. How we place ourselves in space and time is an interesting issue in itself. We can use our logic and observations to work out the many rules that the universe runs by that involve the free parameters of time and space. But when we take a deep dive into how our brains work and interface with the world outside our synapses, we come across something amazing.

The brain needs to keep track of what is inside the body, called the Self, and what is outside the body. If it cannot do this infallibly, it cannot keep track of which factors control its survival and which relate solely to its internal world of thoughts, feelings, and imaginary scenarios. This cannot be just a feature of human brains; many other creatures must have it at some rudimentary level so that they too can function in an external world with its many hazards. In our case, this brain feature is present as an actual physical area in the cerebral cortex. When it is active and stimulated, we have a clear and distinct perception of our body and its relation to space. We use this to control our muscles, orient ourselves properly in space, walk, and perform many other skills that require a keen perception of the outside world. Amazingly, when you suppress the activity in this area through drugs or meditation, you can no longer locate yourself in space. In some cases this leads to the feeling that your body is ‘one’ with the world and your Self has vanished; in others, to the complete dislocation of the Self from the body, experienced as out-of-body travel.

What does this have to do with space in the real world? Well, over millions of years of evolution, we have made up many rules about space and how to operate within it, but then Einstein gave us relativity, and this showed that space and time are much more plastic than any of the rules we internalized over the millennia. But it is the rules and concepts of relativity that make up our external world, not the approximate ‘common sense’ ideas we all carry around with us. Our internal rules about space and time were never designed to give us an accurate internal portrayal of moving near the speed of light, or functioning in regions of the outside world close to large masses that distort space.

But now that we have a scientific way of coming up with even more rules about space and time, we discover that our own logical reasoning wants to paint an even larger picture of what is going on and is happy to do so without bothering too much with actual (sensory) data. We have developed for other reasons a sense of artistry, beauty and aesthetics that, when applied to mathematics and physics, has taken us into the realm of unifying our rules about the outside world so that there are fewer and fewer of them. This passion for simplification and unification has led to many discoveries about the outside world that, miraculously, can be verified to be actual objective facts of this world.

Along this road to simplifying physics, even the foundations of space and time become players in the scenery rather than aloof partners on a stage. This is what we are struggling with today in physics. If you make space and time players in the play, the stage itself vanishes and has to somehow be re-created through the actions of the actors themselves. THAT is what quantum gravity hopes to do, whether you call the mathematics Loop Quantum Gravity or String Theory. This also leads to one of the most challenging concepts in all of physics…and philosophy.

What are we to make of the ingredients that come together to create our sense of space and time in the first place? Are these ingredients, themselves, beyond space and time, just as the parts of a chain mail vest are vastly different from the vest that they create through their linkages? And what is the arena in which these parts connect together to create space and time?

These questions are the ones I have spent my entire adult life trying to comprehend and share with non-scientists, and they lead straight into the arms of the concept of emergent structures: The idea that elements of nature come together in ways that create new objects that have no resemblance to the ingredients, such as evolution emerging from chemistry, or mind emerging from elementary synaptic discharges. Apparently, time and space may emerge from ingredients still more primitive, that may have nothing to do with either time or space!

You have to admit, these ideas certainly make for interesting stories at the campfire!

Check back here on Monday, December 26 for the start of a new series of blogs on diverse topics!

Quantum Gravity…Oh my!

So here’s the big problem.

Right now, physicists have a detailed mathematical model for how the fundamental forces in nature work: electromagnetism, and the strong and weak nuclear forces. Added to this is a detailed list of the fundamental particles in nature like the electron, the quarks, photons, neutrinos and others. Called the Standard Model, it has been extensively verified and found to be an amazingly accurate way to describe nearly everything we see in the physical world. It explains why some particles have mass and others do not. It describes exactly how forces are generated by particles and transmitted across space. Experimenters at the CERN Large Hadron Collider are literally pulling out their hair to find errors or deficiencies in the Standard Model that go against the calculated predictions, but have been unable to turn up anything yet. They call this the search for New Physics.

Alongside this accurate model for the physical forces and particles in our universe, we have general relativity and its description of gravitational fields and spacetime. But GR provides no quantum-level explanation for how this field is generated by matter and energy, and no description of the quantum structure of matter and forces in the Standard Model. GR and the Standard Model speak two very different languages, and describe two very different physical arenas. For decades, physicists have tried to find a way to bring these two great theories together, and the results have been promising but untestable. This description of gravitational fields, built on the same principles as the Standard Model, has come to be called Quantum Gravity.

The many ideas that have been proposed for Quantum Gravity are all deeply mathematical, and only touch upon our experimental world very lightly. You may have tried to read books on this subject written by the practitioners, but like me you will have become frustrated by the math and language this community has developed over the years to describe what they have discovered.

The problem faced by Quantum Gravity is that gravitational fields only seem to display their quantum features at the so-called Planck Scale of 10^-33 centimeters and 10^-43 seconds. I can’t write this blog using scientific notation, so I am using the shorthand that 10^3 means 1000 and 10^8 means 100 million. Similarly, 10^-3 means 0.001 and so on. Anyway, the Planck scale also corresponds to an energy of 10^19 GeV, or 10 billion billion GeV, which is an energy 1000 trillion times higher than current particle accelerators can reach.
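These Planck-scale numbers are not arbitrary; they fall out of combining the three fundamental constants ħ, G and c. A quick check (SI values, rounded to four digits):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
G    = 6.674e-11    # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length, meters
t_p = math.sqrt(hbar * G / c**5)   # Planck time, seconds
E_p = math.sqrt(hbar * c**5 / G)   # Planck energy, joules
E_p_GeV = E_p / 1.602e-10          # convert: 1 GeV = 1.602e-10 J

print(l_p, t_p, E_p_GeV)
# l_p ~ 1.6e-35 m (1.6e-33 cm), t_p ~ 5.4e-44 s, E_p ~ 1.2e19 GeV
```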

There is no known technology that can reach the scales where these effects can be measured in order to test these theories. Even the concept of measurement itself breaks down! This happens because the very particles (photons) you try to use to study physics at the Planck scale carry so much energy that they turn into quantum black holes and are unable to tell you what they saw or detected!

One approach to QG is called Loop Quantum Gravity.  Like relativity, it assumes that the gravitational field is all there is, and that space and time become grainy or ‘quantized’ near the Planck Scale. The space and time we know and can experience in-the-large is formed from individual pieces that come together in huge numbers to form the appearance of a nearly-continuous and smooth gravitational field.

The problem is that you cannot visualize what is going on at this scale, because it is represented in the mathematics not by nuggets of space and time but by more abstract mathematical objects called loops and spin networks. The artist’s rendition above is just that: a rendition.

So here, as for Feynman Diagrams, we have a mathematical picture that represents a process, but the picture is symbolic and not photographic. The biggest problem, however, is that although it is a quantum theory for gravity that works, Loop Quantum Gravity does not include any of the Standard Model particles. It represents a quantum theory for a gravitational field (a universe of space and time) with no matter in it!

In other words, it describes the cake but not the frosting.

The second approach is string theory. This theory assumes there is already some kind of background space and time through which another mathematical construct called a string, moves. Strings that form closed loops can vibrate, and each pattern of vibrations represents a different type of fundamental particle. To make string theory work, the strings have to exist in 10 dimensions, and most of these are wrapped up into closed balls of geometry called Calabi-Yau spaces. Each of these spaces has its own geometry within which the strings vibrate. This means there can be millions of different ‘solutions’ to the string theory equations: each a separate universe with its own specific type of Calabi-Yau subspace that leads to a specific set of fundamental particles and forces. The problem is that string theory violates general relativity by requiring a background space!

In other words, it describes the frosting but not the cake!

One solution proposed by physicist Lee Smolin is that Loop Quantum Gravity is the foundation for creating the strings in string theory. If you looked at one of these strings at high magnification, its macaroni-like surface would turn into a bunch of loops knitted together, perhaps like a Medieval chainmail suit of armor. The problem is that Loop Quantum Gravity does not require a gravitational field with more than four dimensions (3 of space and 1 of time), while strings require ten or even eleven. Something is still not right, and right now, no one really knows how to fix this. Lacking actual hard data, we don’t even know if either of these theories is closer to reality!

What this hybrid solution tries to do is find aspects of the cake that can be re-interpreted as particles in the frosting!

This work is still going on, but there are a few things that have been learned along the way about the nature of space itself. At our scale, it looks like a continuous gravitational field criss-crossed by the worldlines of atoms, stars and galaxies. This is how it looks even at the atomic scale, because now you get to add in the worldlines of innumerable ‘virtual particles’ that make up the various forces in the Standard Model. But as we zoom down to the Planck Scale, space and spacetime stop being smooth like a piece of paper, and start to break up into something else, which we think reveals the grainy nature of gravity as a field composed of innumerable gravitons buzzing about.

But what these fragmentary elements of space and time ‘look’ like is impossible to say. All we have are mathematical tools to describe them, and like our attempts at describing the electron, they lead to a world of pure abstraction that cannot be directly observed.

If you want to learn a bit more about the nature of space, consider reading my short booklet ‘Exploring Quantum Space’, available at  It describes the amazing history of our learning about space, from ancient Greek ‘common sense’ ideas to the highlights of mind-numbing modern quantum theory.

Check back here on Thursday, December 22 for the last blog in this series!