Space Power!

On Earth we can deploy a 164-ton wind turbine to generate 1.5 megawatts of electricity, but in the equally energy-hungry business of space travel, far more efficient energy-per-mass systems are a must. In the vacuum of space, the choices for such systems are not unlimited!

OK…this is a rather obscure topic, but as I discussed in my previous blog, in order to create space propulsion systems that can get us to Mars in a few days, or Pluto in a week, we need some major improvements in how we generate power in space.

I am going to focus my attention on ion propulsion, because it is far less controversial than any of the more efficient nuclear rocket designs. Although nuclear rocket technology has been well worked out theoretically and in engineering designs since the 1960s, there is simply no political will to deploy it in the next 50 years due to enormous public concern. The concerns are not entirely unfounded. The highest-efficiency and least massive fission power plants would use near-weapons-grade uranium or plutonium fuel, making them look like atomic bombs to some skeptics!

Both fission and fusion propulsion have a lot in common with ordinary chemical propulsion. They heat a propellant to very high temperatures and direct the exhaust flow, mechanically, out the back of the engine using tapered ‘combustion chambers’ that resemble those of chemical rockets. The high temperatures ensure that the random thermal speeds of the particles are many km/sec, but the flow has to be shaped by the engine nozzle so that it leaves the ship in one direction. The melting temperature of a fission reactor core is about 4,500 K, so the maximum speed of the thermal gas (hydrogen) ejected after passing through the core is about 10 km/sec.
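As a quick sanity check on that 10 km/sec figure, kinetic theory gives the root-mean-square thermal speed as v = sqrt(3kT/m). A minimal sketch in Python, assuming atomic hydrogen as the propellant:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735e-27     # mass of atomic hydrogen, kg

def rms_speed(temp_k, particle_mass_kg):
    """Root-mean-square thermal speed: v = sqrt(3 k T / m)."""
    return math.sqrt(3.0 * K_B * temp_k / particle_mass_kg)

v = rms_speed(4500.0, M_H)   # core temperature ~4,500 K
print(f"hydrogen rms speed at 4500 K: {v/1000:.1f} km/s")
```

This lands right around 10.6 km/s, consistent with the figure above; using molecular hydrogen (H2) instead would cut the speed by a factor of sqrt(2).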

Ion engines are dramatically different. They guide ionized particles out the back of the engine using one or more acceleration grids. The particles are electrostatically guided and accelerated literally one at a time, so that instead of flowing all over the place in the rocket chamber, they start out life already ‘collimated’, flowing in only one direction at super-thermal speeds. For instance, the Dawn spacecraft ejected xenon ions at a speed of 25 km/sec. If you had a high-temperature xenon gas with particles moving at that same speed, the temperature of this gas would be several million degrees Celsius, well above the melting point of the ion engine!
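The same kinetic-theory relation can be inverted to ask what temperature a xenon gas would need for its thermal speeds to match Dawn's exhaust. A rough sketch (the exact number depends on which characteristic speed you equate, so treat it as order-of-magnitude):

```python
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K
M_XE = 131.29 * 1.66054e-27   # mass of one xenon atom, kg

def equivalent_temperature(speed_m_s, particle_mass_kg):
    """Temperature at which the rms thermal speed equals the given speed:
    (3/2) k T = (1/2) m v^2  ->  T = m v^2 / (3 k)."""
    return particle_mass_kg * speed_m_s**2 / (3.0 * K_B)

T = equivalent_temperature(25e3, M_XE)   # Dawn's ~25 km/s xenon exhaust
print(f"equivalent xenon gas temperature: {T:.2e} K")
```

The result is a few million kelvin, in line with the multi-million-degree figure quoted above.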

We are well into the design of high-thrust ion engines and have already deployed several of them. The Dawn spacecraft, launched in 2007, has visited the asteroid Vesta (2011) and the dwarf planet Ceres (2015) using a 10-kilowatt ion engine system with 937 pounds (425 kg) of xenon propellant, and achieved a record-breaking speed change of 10 kilometers/sec. It delivered about 0.09 Newtons of thrust over 2,000 days of continuous operation. Compare this with the millions of Newtons of thrust delivered by the Saturn V in a few minutes.
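Dawn's record speed change can be roughly cross-checked with the Tsiolkovsky rocket equation. The launch and propellant masses below are approximate published values, not figures from this post:

```python
import math

def delta_v(exhaust_speed, wet_mass, dry_mass):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1)."""
    return exhaust_speed * math.log(wet_mass / dry_mass)

v_e   = 25e3           # xenon exhaust speed, m/s
m_wet = 1218.0         # approximate Dawn launch mass, kg
m_dry = m_wet - 425.0  # after expelling ~425 kg (937 lb) of xenon

dv = delta_v(v_e, m_wet, m_dry)
print(f"ideal delta-v: {dv/1000:.1f} km/s")
```

This gives just under 11 km/s, nicely consistent with the record-breaking ~10 km/s actually achieved.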

Newer ion engine designs are constantly being developed and tested under laboratory conditions. The NASA NEXT program has demonstrated over 5.5 years of continuous operation for a 7-kilowatt ion engine. It processed 862 kg of xenon and produced a maximum thrust of 0.236 Newtons, roughly two and a half times that of the Dawn engines.

On the theoretical side, an extensive research study on the design of megawatt ion engines by David Fearn, presented at the Space Power Symposium of the 56th International Astronautical Congress in 2005, gave some typical characteristics for engines at this power level. The conclusion was that ion engines of this class pose no particular design challenges and can achieve exhaust speeds that exceed 100 km/sec. As a specific example, an array of nine thrusters using xenon propellant would deliver a thrust of 120 Newtons while consuming 7.4 megawatts. A relatively small array of thrusters could also achieve exhaust speeds of 1,500 km/sec using lower-mass hydrogen propellant.
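Those numbers are self-consistent: for an ideal thruster the jet power, thrust, and exhaust speed are tied together by P = F·v_e/2. A small sketch, assuming (optimistically) that all 7.4 MW ends up in the jet:

```python
def exhaust_speed(jet_power_w, thrust_n):
    """For an ideal thruster, P = (1/2) * mdot * v_e^2 and F = mdot * v_e,
    so v_e = 2 P / F (assumes 100% power-to-jet efficiency)."""
    return 2.0 * jet_power_w / thrust_n

v_e = exhaust_speed(7.4e6, 120.0)   # Fearn's nine-thruster xenon array
print(f"implied exhaust speed: {v_e/1000:.0f} km/s")
```

The implied exhaust speed is about 123 km/s, which indeed exceeds the 100 km/sec quoted above.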

Ion propulsion requires megawatts of electrical power to produce enough continuous thrust to reach the speeds we need for truly fast interplanetary travel.

The bottom line for ion propulsion is the total electrical power available to accelerate the propellant ions. Very high efficiency solar panels that convert more than 75% of the sunlight into electricity work very well near Earth orbit (300 watts/kg), but produce only 10 watts/kg near Jupiter and 0.3 watts/kg near Pluto. That means the future of fast, solar-system-spanning travel via ion propulsion requires some kind of non-solar, fission-reactor electrical system (500 watts/kg). The history of using reactors in space, though straightforward from an engineering standpoint, is politically complex because of the prevailing fear, in the minds of the general public and Congress, that a launch mishap would result in a dirty bomb or even a Hiroshima-like event.
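The solar figures above follow directly from the inverse-square falloff of sunlight. A quick sketch, scaling from the 300 W/kg near-Earth value:

```python
def solar_specific_power(p0_w_per_kg, distance_au):
    """Sunlight falls off as 1/r^2, so panel output per kilogram does too."""
    return p0_w_per_kg / distance_au**2

# Approximate mean orbital distances in AU
for name, r in [("Earth", 1.0), ("Jupiter", 5.2), ("Pluto", 39.5)]:
    print(f"{name:8s} {solar_specific_power(300.0, r):8.2f} W/kg")
```

Jupiter comes out near 11 W/kg and Pluto near 0.2 W/kg, close to the 10 and 0.3 W/kg figures quoted above.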

The Soviet Union launched nuclear reactors into space for decades in its Kosmos series of satellites. Early in 1992, the idea of purchasing a Russian-designed and fabricated space reactor power system and integrating it with a US-designed satellite went from fiction to reality with the purchase of the first two Topaz II reactors by the Strategic Defense Initiative Organization (now the Ballistic Missile Defense Organization, BMDO). SDIO also asked the Applied Physics Laboratory in Laurel, MD to propose a mission and design a satellite in which the Topaz II could be used as the power source. Even so, the Topaz II reactor had a mass of 1,000 kg and produced 10 kilowatts, for an efficiency of 10 watts/kg. Due to funding reductions within the SDIO, the Topaz II flight program was postponed indefinitely at the end of Fiscal Year 1993.

Cancellation was also the eventual fate of the US SP-100 reactor program, started in 1983 by NASA, the US Department of Energy and other agencies. It developed a 4,000 kg, 100-kilowatt reactor (efficiency = 25 watts/kg) with heat pipes transporting the heat to thermionic converters.

Proposed SP-100 reactor ca 1980  (Image credit: NASA/DoE/DARPA)

Believe it or not, small nuclear fission reactors are becoming very popular as portable ‘batteries’ for running remote communities of up to 70,000 people. The Hyperion Hydride Reactor is not much larger than a hot tub, is totally sealed and self-operating, has no moving parts and, beyond refueling, requires no maintenance of any sort.

Hyperion, Uranium Hydride Reactor (Credit:Hyperion, Inc)

According to the Hyperion Energy Company, the Gen4 reactor has a mass of about 100 tons and is designed to deliver 25 megawatts of electricity over a 10-year lifetime without refueling. The efficiency of such a system is 250 watts/kg! Of course you cannot just slap one of these Bad Boys onto a rocket ship to provide the electricity for the ion engines, but this technology proves that fission reactors can be made very small, can deliver quite the electrical wallop, and can do so in places where solar panels are not practical.

Some of the advanced photovoltaic systems being developed by NASA and its contractors build on the solar energy technology used in NASA's Deep Space 1 mission and the Naval Research Laboratory's TacSat 4 reconnaissance satellite. They are based on ‘stretched lens array’ concentrators that amplify the sunlight falling on the cells by up to 8 times (so-called eight-sun systems). The solar arrays are also flexible and can be rolled out like a curtain. The technology promises to reach efficiency levels of 1,000 watts/kg at less than $50/watt, compared to the 100 watts/kg and $400/watt of current ‘one sun’ systems that do not use concentrators. A 350 kW solar-electric ion engine system has been suggested as the propulsion for a 70-ton crewed mission to Mars. With the most efficient stretched lens arrays currently under design, a 350 kW system would have a mass of only 350 kg and cost about $18 million. The very cool thing about this is that improvements in solar panel technology not only directly benefit space power systems for inner solar system travel, but lead to immediate consumer applications in Green Energy! Imagine covering your roof with a 1-square-meter high-efficiency panel rather than your entire roof with an unsightly lower-efficiency system!
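The mass and cost quoted for the 350 kW system follow directly from the specific-power and price targets. A trivial sketch:

```python
def array_mass_kg(power_w, specific_power_w_per_kg):
    """Array mass implied by a specific-power (W/kg) target."""
    return power_w / specific_power_w_per_kg

def array_cost_usd(power_w, dollars_per_watt):
    """Array cost implied by a price-per-watt target."""
    return power_w * dollars_per_watt

P = 350e3                     # 350 kW ion-engine power system
m = array_mass_kg(P, 1000.0)  # 1000 W/kg stretched lens array target
c = array_cost_usd(P, 50.0)   # $50/watt target price
print(f"mass: {m:.0f} kg, cost: ${c/1e6:.1f} million")
```

This reproduces the 350 kg mass and roughly $18 million cost cited above.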

So to really zip around the solar system and avoid the medical problems of prolonged voyages, we really need more work on compact power plant design that is politically realistic. Once we solve THAT problem, even Pluto will be a week’s journey away!

Check back here on Monday, April 24 for my next topic!

That was then, this is now!

Back in the 1960s, when I began my interest in astronomy, the best pictures we had of the nine planets were out-of-focus black and white photos. I am astonished at how far we have come since then, and have decided to devote this blog to a gallery of the best pictures I could find of our solar system neighbors! First, let’s have a look at the older photos.

First we have Mercury, which is never very far from our sun and is a very challenging telescopic object.

Above is what Mars looked like! Then we have Jupiter and Saturn, shown below.

Among the hardest and most mysterious objects was Uranus, shown here. I will spare you the nearly identical telescopic view of Neptune.

Finally we come to Pluto, which remained a star-like object for most of the 20th century.

These blurry but intriguing images were the best we could do for most of the 20th century, yet they were enough to encourage generations of children to become astronomers and passionately explore space. The features of Mercury were mere blotches in differing shades of gray. Uranus and Neptune were barely resolvable, revealing only faint details, and distant Pluto remained completely star-like and unresolved, yet we knew it was its own world many thousands of kilometers across. Mars continued to reveal its tantalizing blotchy features that came and went with the seasons, along with the ebb and flow of its two polar ice caps. Jupiter was a banded world with its Great Red Spot, but the details of these atmospheric bands were completely hidden by the optical smearing of our own atmosphere. Saturn possessed some large bands, and its majestic ring system could be seen in rough detail but never resolved into its many components. As for the various moons of these distant worlds, they were blurry disks or star-like spots that never revealed their details.

The advent of the Space Program in the 1960s, and the steady investment in spacecraft to ‘fly by’ these planets, led to progressively higher-resolution images, starting with Mariner 4 in 1965 and its historic encounter with Mars, revealing a cratered, moonlike landscape. The Pioneer spacecraft in the early 1970s gave us stunning images of Jupiter, followed by the Voyager spacecraft encounters with the outer planets and their moons. Magellan orbited Venus and, with its radar system, mapped a dynamic and volcanic surface that is permanently hidden beneath impenetrable clouds. Finally, in 2015, the New Horizons spacecraft gave us the first clear images of distant Pluto. Meanwhile, many return trips to our own Moon have mapped its surface to 2-meter resolution, while the MESSENGER spacecraft imaged the surface of Mercury and mapped its many extreme geological features. Even water ice has been detected on Mercury and the Moon, to slake the thirst of future explorers.

For many of the planets, we have extreme close up images too!
Jupiter’s south pole from the Juno spacecraft shows a bewildering field of tremendous hurricanes each almost as large as Earth, swirling about aimlessly in a nearly motionless atmosphere.

Pluto details a few hundred meters across. Can you come up with at least ten questions you would like answers for about what you are seeing?

Here is one of thousands of typical views from the Martian surface. Check out the rocks strewn across the field. Some are dark and pumice-like while others are white and granite-looking. ‘Cats and dogs living together’. What’s going on here?

The Venera 13 image shown below, from the surface of Venus, is a unique and extremely puzzling view of a landscape that is supposed to be hotter than molten lead.

We also have images from a multitude of moons, asteroids and comets!
The Lunar Reconnaissance Orbiter gave us 2-meter resolution images of the entire lunar surface allowing us to revisit the Apollo landing sites once more:

The dramatic canyons and rubble fields of a comet were brought into extreme focus by the Rosetta mission.

Even Saturn’s moon Titan has been explored, revealing its extensive liquid methane tributaries.

This bewildering avalanche of detail has utterly transformed how we view these worlds and the kinds of questions we can now explore. If you compare what we knew about Pluto before 2015, when it was little more than a peculiar ‘star in the sky’, to the full-color detailed orb we now see, you can imagine how science progresses by leaps and bounds through the simple technique of merely seeing the object more clearly. It used to be fashionable to speculate about Pluto when all we knew was its size, mass and density, and that it had a thin atmosphere. But now we are delightfully challenged to understand this world as the dynamic place that it is, with mountains of ice, continent-sized glaciers, and nitrogen snow. And of course, the mere application of improved resolution now lets us explore the entire surface of our Moon with the same clarity as an astronaut hovering a few dozen feet above its surface!

We Old-Timers have had a wonderful run in understanding our solar system as we transitioned from murky details to crystal clarity. All of the easy, low-hanging fruit of theory building and testing has, for the most part, been picked over the last century. Now the ever more challenging work of getting the details straight begins, and it will last for another century at least. When you can tele-robotically explore planetary and asteroidal surfaces, or perform on-the-spot microscopic assays of minerals, what incredible new questions will emerge? Is there life below the surface of Europa? Why does Mars belch forth methane gas in the summer? Can the water deposits on the Moon be mined? Is Pluto’s moon Charon responsible for the tidal heating of an otherwise inert Pluto?
One can only wonder!

Check back here on Tuesday, April 18 for my next topic!

Things we agree on…

Although many survey questions you hear about show close to a 50/50 split in public opinion, there are still many questions that offer nearly unanimous agreement and probably help to define who we are as a Nation in terms of core values and beliefs. I have always wondered what these key issues are, so I gathered up as many of these “over-80 percent” responses as I could easily locate back in 2014. They come in two kinds of statistical samples: biased and un-biased.

Un-Biased Surveys

The only correct way to survey people’s opinions is through a carefully designed randomized survey that eliminates biases that would skew the results. The answers you get from these surveys are probably the most reliable. After each question I give the response and its percentage, the number of people in the sample, the name of the surveyor, and the date. Many of these surveys are conducted by land-line telephone, so a fair question is: Are people who answer their land-lines typical of the general population today?

Do you use your seatbelt? Yes=98 percent (1500, Washington state poll, Traffic Safety Commission 9/23/2010)

Do you believe that man-made climate change is real? Yes=97 percent (1372 scientists, National Academy of Sciences, 6/22/2010) Note: A Pew Research survey in 2016 of 1019 US adults found that only 65% believed this was true.

There are now all too many examples of significant climate change. How many more do we need? (Credit: NATO Review)

Do you play video games? Yes = 97 percent (1102 children ages 12-17, Pew Internet and American Life Project, 9/17/2008)

Do you broadcast your location on the Internet using location-based services? No = 96 percent (1500, Forrester Research, 8/30/2010)

Do you believe in a God? Yes = 95 percent (1500, Gallup Poll, 3/29/2001) Note: A 2016 Gallup Poll shows that 89% now believe in God. Related to this, a Pew Research poll in 2015 showed that 72% of people believed in an afterlife. A Roper Survey in 2011 found that 40% of US adults believed in ghosts, though this belief has been declining since 2005, when it stood at 48%.

Do you want stronger protection for your Internet privacy? Yes=94 percent (2117, Pew Internet and American Life Project, 5/2000) Note: In April 2017, President Trump signed an executive order that now allows Internet Service Providers to sell your private information without telling you!

Are the Arts vital to providing a well-rounded education to children? Yes=93 percent (1000, Harris Poll, 6/13/2005)

Would you vote for a woman for President? Yes=92 percent (1229, CBS News/New York Times, 2/5/2006) In the 2016 presidential election, over 3 million more people voted for the female candidate than the male candidate.

Would you stop doing business with a company because of bad service? Yes = 87 percent (2000, Harris Interactive,10/8/2008)

Do you use the print version of the Yellow Pages phone book? Yes = 87 percent (9008, Knowledge Network/SRI Industry Usage Study, 2/26/2008)

Do you think that English should be the official language of the US? Yes=87 percent (1000, Rasmussen Report, 5/11/2010) Note: In 2016 a Pew Survey found that 90% of American adults thought that English should be the official language.

Do you think it is important for America to use and develop solar energy? Yes=92 percent (1000 online survey, SCHOTT Solar Barometer; Kelton Research, 10/8/2009)

Do you think the federal government is broken? Yes=86 percent (1023, CNN/Opinion Research Poll, 2/22/2010) Note: In 2015, 75% of Gallup Survey respondents believed that widespread government corruption exists. President Trump was elected to shake up the government and ‘drain the swamp’, only to demonstrate that he was himself a major corrupting influence supported by intense Russian influence in the election.

Would you prefer to stop using paper and go Green? Yes=85 percent (1000, Harris Interactive/DocuSign Inc, 6/30/2010)

Should you have to prove you are a citizen before you receive healthcare in the U.S.? Yes=83 percent (1500, Rasmussen Report, 9/7/2009)

Do you shop ‘Green’? Yes = 82 percent (1000, Opinion Research Corporation, 2/6/2009)

Do you favor legalizing marijuana for medical use? Yes=81 percent (1083, ABC/Washington Post, 1/18/2010)

Is a car a necessity? Yes=86 percent (2967, Pew Research, 4/2/2009)

Do you think the government will make progress on important issues? No=90 percent (1010, Pew Research, 9/23/2010)

In the future, will computers be able to talk to humans? Yes=81 percent (1546, Pew Research, 6/22/2010)

Do you know what Twitter is? Yes= 85 percent (1007, Pew Research, 7/15/2010)

Is President Obama a Muslim? No = 82 percent (3003 adults, Pew Research, 8/19/2010) Note: By 2015 this had fallen to 79% (CNN/ORC Poll). This truly shows that nearly 30% of American adults are certifiably as dumb as dust. This belief among GOP voters is nearly three times higher than among Democrats.

Has science had a positive effect on society? Yes = 84 percent (2001, Pew Research, 7/9/2009)

Is climate change a serious threat and are you willing to make sacrifices to combat it? Yes=80 percent (1000, Institution of Civil Engineers, 11/20/2009). President Trump’s official position is that climate change science is a Chinese hoax.

Do you live in a house with at least one cellphone? Yes = 90 percent (3001, Pew Research Center, 2/4/2011) Note in 2015 the Pew Research Center found that 64% of American adults owned a smartphone.

Will you be eating Thanksgiving meal with family? Yes= 89 percent (2691, USA Today, 11/30/2011)

Did you make a good investment getting your undergraduate degree? Yes= 89 percent (1500, American Council on Education Winston Group survey, 12/30/2010)

Biased but Interesting Surveys

Biased surveys are not regulated (a person can vote multiple times) and often ask you to vote online, or are conducted by institutions that have a point to make and could be suspected of selecting in advance like-minded people to survey (e.g. Fox News). In the results below I have selected CNN.com’s daily online voting results because they were easily available. CNN readers are, in equal shares, Liberal, Moderate and Conservative. In addition, 50 percent are Democrats and 16 percent are Republicans. Of course, all have access to the internet and are not surveyed by land-line telephone, so they probably represent a younger population.

Do you think Japan should become a permanent member of the United Nations Security Council? Yes = 94 percent (924,421, 4/12/2005)

Do you know how you will vote in the mid-term elections? Yes=90 percent (30799, 10/21/2010)

Is it time to break out of the two-party political system? Yes = 84 percent (26837, 10/26/2010)

Do political TV ads influence your vote? Yes = 83 percent (70588, 10/27/2010)

Did you brave the crowds and shop on Black Friday? No=84 percent (112322, 11/27/2010)

What do you think of a publisher’s decision to remove the N-word from Huckleberry Finn? Disapprove = 92 percent (44377, 1/7/2011)

Do you think there may be life on planets other than Earth? Yes = 88 percent (243250, 5/22/2011).

Is raising a child free of gender roles a good idea? No=85 percent (198329, 5/27/2011)

Do you approve of the performance of your congressional representatives? No=86 percent (123776, 8/3/2011) Note: In 2017 the Rasmussen Survey found that 75% of adults gave Congress a poor rating. So we like the Congressperson we voted for, but dislike everyone else and what they do.

Have you lost confidence in the ability of world leaders to tackle economic problems? Yes=86 percent (187969, 9/16/2011)

Should states require welfare recipients to pass drug tests? Yes = 80 percent (170382, 10/26/2011)

Do you snack on grocery store food before you buy it? No=89 percent (59894, 11/4/2011)

Are you ready to “boot out” your representative in Congress? Yes=81 percent (89916, 12/10/2011)

Should racist remarks be subject to criminal prosecution? No=86 percent (106918, 12/22/2011)

Should convicted murderers be eligible for full pardons? No = 86 percent (86970, 1/12/2012)

Things we should agree on but don’t.

There are also many issues we should agree on but don’t. It doesn’t matter how much money we invest in ‘public education’. The general public simply doesn’t get it on many significant issues…and they vote accordingly. Here are some of my favorites, sad to say.

Does Ebola spread easily? No=27 percent (1025, Harvard School of Public Health, 8/13/2014). This is a case of fear overcoming reason and evidence.

Are childhood vaccines safe and effective? Yes=53 percent (1012, AP/GFK Poll, 3/24/2014). This is another case of fear overcoming evidence, but with potentially devastating results if too many people ‘opt out’.

Did the universe begin with a huge explosion? Yes=38 percent (1500, National Science Board, 2014). This is a case of personal belief and religious fundamentalism overcoming evidence and reasoned discussion. Even the Catholic Pope finds no contradiction with believing the scientific story!

Have humans and other living things evolved over time? Yes=60 percent (1983, Pew Research, 3/8/2013). Again, religious fundamentalism and pseudoscience have biased American public thinking.

Would you support a candidate who advocates carbon emission reduction? Yes=68 percent (2105, University of Texas, 9/4/2014). This is directly connected to the public’s lukewarm belief in climate change and the massive negative campaigning by the GOP and industrial lobbyists. In 2016 we elected a president who sides with industry and climate change deniers and is now dismantling the EPA and canceling climate research at many governmental institutions.

Check back here on Wednesday, April 12 for my next topic!

Glueballs anyone?

Today, physicists are both excited and disturbed by how well the Standard Model is behaving, even at the enormous energies provided by the CERN Large Hadron Collider. There seems to be no sign of the expected supersymmetry property that would show the way to the next-generation version of the Standard Model: Call it V2.0. But there is another ‘back door’ way to uncover its deficiencies. You see, even the tests for how the Standard Model itself works are incomplete, even after the dramatic 2012 discovery of the Higgs Boson! To see how this backdoor test works, we need a bit of history.

Glueballs found in a quark-soup (Credit: Alex Dzierba, Curtis Meyer and Eric Swanson)

Over fifty years ago in 1964, physicists Murray Gell-Mann at Caltech and George Zweig at CERN came up with the idea of the quark as a response to the bewildering number of elementary particles that were being discovered at the huge “atom smasher” labs sprouting up all over the world. Basically, you only needed three kinds of elementary quarks, called “up,” “down” and “strange.” Combining these in threes, you get the heavy particles called baryons, such as the proton and neutron. Combining them in twos, with one quark and one anti-quark, you get the medium-weight particles called the mesons. In my previous blog, I discussed how things are going with testing the quark model and identifying all of the ‘missing’ particles that this model predicts.

In addition to quarks, the Standard Model details how the strong nuclear force arises to hold these quarks together inside the particles of matter we actually see, such as protons and neutrons. To do this, quarks must exchange force-carrying particles called gluons, which ‘glue’ the quarks together into groups of twos and threes. Gluons are second cousins to the photons that transmit the electromagnetic force, but with several important differences. Like photons, they carry no mass; however, unlike photons, which carry no electric charge, gluons carry what physicists call color charge. Quarks carry a single color charge, ‘red’, ‘blue’ or ‘green’ (or anti-red, anti-blue or anti-green for antiquarks), while gluons carry paired color charges like (red, anti-blue) to connect them. Because gluons carry color charge, and unlike photons, which do not interact with each other, gluons can interact with each other very strongly through their complicated color charges. The end result is that, under some circumstances, you can have a ball of gluons that resembles a temporarily stable particle before it dissipates. Physicists call these glueballs…of course!

Searching for Glueballs.

Glueballs are one of the most novel, and key predictions of the Standard Model, so not surprisingly there has been a decades-long search for these waifs among the trillions of other particles that are also routinely created in modern particle accelerator labs around the world.

Example of glueball decay into pi mesons.

Glueballs are not expected to live very long, and because they carry no electrical charge they are perfectly neutral particles. When these pseudo-particles decay, they do so in a spray of other particles called mesons. Because the gluons in a glueball carry balancing color and anti-color charges, the glueball has no net color charge. From various theoretical considerations, there are 15 basic glueball types that differ in what physicists term parity and angular momentum. Other electrically neutral bosons of the same general type include gravitons and Higgs bosons, but these are easily distinguished from glueball states by their masses (glueballs should fall between 1 and 5 GeV) and other fundamental properties. The most promising glueball candidates are as follows:

Scalar candidates: f0(600), f0(980), f0(1370), f0(1500), f0(1710), f0(1790)
Pseudoscalar candidates: η(1405), X(1835), X(2120), X(2370), X(2500)
Tensor candidates: fJ(2220), f2(2340)

By 2015, the f-zero(1500) and f-zero(1710) had become the prime glueball candidates. The properties of glueball states can be calculated from the Standard Model, although this is a complex undertaking because glueballs interact with nearby quarks and other free gluons very strongly and all these factors have to be considered.

On October 15, 2015 there was a much-ballyhooed announcement that physicists had at last discovered the glueball particle. The articles cited Professor Anton Rebhan and Frederic Brünner from TU Wien (Vienna) as having completed these calculations, concluding that the f-zero(1710) was the best candidate consistent with experimental measurements and its predicted mass. More rigorous experimental work to define the properties and exact decays of this particle is, even now, going on at the CERN Large Hadron Collider and elsewhere.

So, between the missing particles I described in my previous blog, and glueballs, there are many things about the Standard Model that still need to be tested. But even with these predictions confirmed, physicists are still not ‘happy campers’ when it comes to this grand theory of matter and forces. Beyond these missing particles, we still need to have a deeper understanding of why some things are the way they are, and not something different.

Check back here on Wednesday, April 5 for my next topic!

Crowdsourcing Gravity

The proliferation of smartphones with internal sensors has led to some interesting opportunities to make large-scale measurements of a variety of physical phenomena.

The iOS app ‘Gravity Meter’ and its android equivalent have been used to make measurements of the local surface acceleration, which is nominally 9.8 meters/sec2. The apps typically report the local acceleration to 0.01 (iOS) or even 0.001 (android) meters/sec2 accuracy, which leads to two interesting questions: 1) How reliable are these measurements at the displayed decimal limit, and 2) Can smartphones be used to measure expected departures from the nominal surface acceleration due to Earth's rotation? Here is a map showing the magnitude of this (centrifugal) rotation effect, provided by The Physics Forum.

As Earth rotates, any object on its surface feels a centrifugal force directed outward from the center of Earth, generally toward the local zenith. This causes Earth to be slightly bulged at the equator compared to the poles, which you can see from the difference between its equatorial radius of 6,378.14 km and its polar radius of 6,356.75 km: a polar flattening of 21.4 kilometers. This centrifugal force also reduces the local surface acceleration slightly at the equator compared to the poles. At the equator one would measure a value for ‘g’ of about 9.78 m/sec2, while at the poles it is about 9.83 m/sec2. Once again, and this is important to avoid any misconceptions, the total acceleration, defined as gravity plus the centrifugal term, is reduced, but gravity itself is not changed, because from Newton’s Law of Universal Gravitation, gravity is due to mass, not rotation.

Assuming that the smartphone accelerometers are sensitive enough, they may be able to detect this equator-to-pole difference by comparing the surface acceleration measurements from observers at different latitudes.

 

Experiment 1 – How reliable are ‘gravity’ measurements at the same location?

To check this, I looked at the data from several participating classrooms at different latitudes, and selected the more numerous iOS measurements with the ‘Gravity Meter’ app. These data were kindly provided by Ms. Melissa Montoya’s class in Hawaii (+19.9N), George Griffith’s class in Arapahoe, Nebraska (+40.3N), Ms. Sue Lamdin’s class in Brunswick, Maine (+43.9N), and Elizabeth Bianchi’s class in Waldoboro, Maine (+44.1N).

All four classrooms’ measurements, irrespective of latitude (19.9N, 40.3N, 43.9N or 44.1N), showed distinct ‘peaks’, but also displayed long and complicated ‘tails’, making these distributions non-Gaussian, contrary to what might be expected for random errors. This suggests that under classroom conditions there may be systematic effects introduced by the specific ways in which students make the measurements, adding complicated, apparently non-random, student-dependent corrections to the data.

In a further study using the iPad data from Elizabeth Bianchi’s class, I discovered that, at least for iPads running the Gravity Sensor app, there was a definite correlation between the measured value and the time during a 1.5-hour period at which the measurement was made. This resembles a heating effect: the longer the device is left on before making the measurement, the larger the measured value. I will look into this at a later time.

The non-Gaussian behavior of the current data makes it impossible to assign a meaningful average and standard deviation to the measurements.
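This is why medians rather than averages are used in what follows. A tiny Python illustration, with made-up numbers standing in for one classroom’s readings, shows how a long tail drags the mean while barely moving the median:

```python
import statistics

# Hypothetical classroom-style sample: a tight peak near 9.81 m/sec2
# plus a long non-Gaussian tail of outliers from handling or vibration.
readings = [9.80, 9.81, 9.81, 9.82, 9.80, 9.81, 9.79, 9.82, 10.4, 11.2]

print(round(statistics.mean(readings), 2))    # 10.01, dragged upward by the tail
print(round(statistics.median(readings), 2))  # 9.81, stays near the peak
```

The median is a robust statistic: the two outliers shift the mean by 0.2 m/sec2 but leave the median untouched.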

 

Experiment 2 – Can the rotation of Earth be detected?

Beyond the suggestion in the 4-classroom data of a nominal centrifugal effect of about the correct order-of-magnitude, we were able to get a large sample of individual observers spanning a wide latitude range, also using the iOS platform and the same ‘Gravity Meter’ app. Including the median values from the four classrooms in Experiment 1, we had a total of 41 participants: Elizabeth Abrahams, Jennifer Arsenau, Dorene Brisendine, Allen Clermont, Hillarie Davis, Thom Denholm, Heather Doyle, Steve Dryer, Diedra Falkner, Mickie Flores, Dennis Gallagher, Robert Gallagher, Rachael Gerhard, Robert Herrick, Harry Keller, Samuel Kemos, Anna Leci, Alexia Silva Mascarenhas, Alfredo Medina, Heather McHale, Patrick Morton, Stacia Odenwald, John-Paul Rattner, Pat Reiff, Ghanjah Skanby, Staley Tracy, Ravensara Travillian, and Darlene Woodman.

The scatter plot of these individual measurements is shown here:

The red squares are the individual iOS measurements. The blue circles are the Android phone values. The red dashed line shows the linear regression for only the iOS data points, assuming each point is equally weighted. The solid line is the predicted change in the local acceleration with latitude according to the model:

g = 9.806 − 0.5 × (9.832 − 9.780) × cos(2 × latitude)  m/sec2

where 9.806 m/sec2 is the mean of the polar acceleration (9.832 m/sec2) and the equatorial acceleration (9.780 m/sec2). Note: No correction for lunar and solar tidal effects has been made, since these are entirely undetectable with this technology.
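For readers who want to apply the model themselves, here is a small Python version of it (the function name is mine; the constants are just the polar and equatorial values quoted above):

```python
import math

def predicted_g(latitude_deg, g_pole=9.832, g_equator=9.780):
    """Model acceleration: the mean of the polar and equatorial values,
    minus half their difference times cos(2 * latitude)."""
    mean = 0.5 * (g_pole + g_equator)        # 9.806 m/sec2
    half_range = 0.5 * (g_pole - g_equator)  # 0.026 m/sec2
    return mean - half_range * math.cos(2.0 * math.radians(latitude_deg))

print(round(predicted_g(0.0), 3))   # 9.78 at the equator
print(round(predicted_g(90.0), 3))  # 9.832 at the poles
print(round(predicted_g(44.0), 3))  # 9.805 near the Maine classrooms
```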

Each individual point has a nominal variation of +/-0.01 m/sec2 based on the minimum and maximum values recorded during a fixed interval of time. It is noteworthy that this measurement RMS is significantly smaller than the classroom variance seen in Experiment 1, because of the apparently non-Gaussian shape of the classroom sampling. When we partition the iOS smartphone data into 10-degree latitude bins and take the median value in each bin, we get the following, somewhat cleaner plot:

The solid blue line is the predicted acceleration. The dashed black line is the linear regression for the equally-weighted individual measurements. The median values of the classroom points are added to show their distribution. It is of interest that the linear regression line is parallel to, and nearly coincident with, the predicted line, which again suggests that Earth’s rotation effect may have been detected in this median-sampled data set provided by a total of 37 individuals.

The classroom points clustered near +44N represent a total of 36 measurements behind the plotted median values, which is statistically significant. Taken at face value, the classroom data alone would support the hypothesis that the rotation effect was detected, though they fall consistently 0.005 m/sec2 below the predicted value at mid-latitudes. The intrinsic variation of the data, represented by the consistent +/-0.01 m/sec2 high-versus-low range of all the individual samples, suggests that this is a reasonable measure of the instrumental accuracy of the smartphones. Error bars (thin vertical black lines) have been added to the plotted median points to indicate this accuracy.

The bottom line seems to be that it may be marginally possible to detect the Earth-rotation effect, but it requires precise measurements at the 0.01 m/sec2 level against what appears to be a significant non-Gaussian measurement background. Once again, some of the variation seen at each latitude may be due to how warm the smartphones were at the time of the measurement. The Android and iOS measurements also seem to be discrepant, with the Android measurements showing the larger variation.

Check back here on Wednesday, March 29 for the next topic!

Fifty Years of Quarks!

Today, physicists are both excited and disturbed by how well the Standard Model is behaving, even at the enormous energies provided by the CERN Large Hadron Collider. There seems to be no sign of the expected supersymmetry property that would show the way to the next-generation version of the Standard Model: Call it V2.0. But there is another ‘back door’ way to uncover its deficiencies. You see, even the tests for how the Standard Model itself works are incomplete, even after the dramatic 2012 discovery of the Higgs Boson! To see how this backdoor test works, we need a bit of history.

Over fifty years ago in 1964, physicists Murray Gell-Mann at Caltech and George Zweig at CERN came up with the idea of the quark as a response to the bewildering number of elementary particles that were being discovered at the huge “atom smasher” labs sprouting up all over the world. Basically, you only needed three kinds of elementary quarks, called “up,” “down” and “strange.” Combining these in threes, you get the heavy particles called baryons, such as the proton and neutron. Combining them in twos, with one quark and one anti-quark, you get the medium-weight particles called the mesons.

This early idea was extended to include three more types of quarks, dubbed “charmed,” “top” and “bottom” (or, on the other side of the pond, “truth” and “beauty” for the last two) as they were discovered in the 1970s. These six quarks form three generations — (U, D), (C, S), (T, B) — in the Standard Model.

Particle tracks at CERN/CMS experiment (credit: CERN/CMS)

Early Predictions

At first the quark model easily accounted for the then-known particles. A proton would consist of two up quarks and one down quark (U, U, D), and a neutron would be (D, D, U). A pi-plus meson would be (U, anti-D), a pi-minus meson would be (D, anti-U), and so on. It’s a bit confusing to combine quarks and anti-quarks in all the possible combinations. It’s like working out all the ways that a coin flipped three times gives you a pattern like (T,T,H) or (H,T,H). But when you do this in twos and threes for the U, D and S quarks, you get the entire family of the nine known mesons, which forms one geometric pattern in the figure below, called the Meson Nonet.
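The counting in the coin-flip analogy is easy to check by brute force. This purely illustrative Python sketch enumerates every quark/anti-quark pairing of U, D and S:

```python
from itertools import product

quarks = ["U", "D", "S"]

# A meson is one quark bound to one anti-quark, so the raw pairings
# are just the 3 x 3 Cartesian product:
mesons = [(q, "anti-" + aq) for q, aq in product(quarks, quarks)]
print(len(mesons))  # 9 -> the nine slots of the Meson Nonet
print(mesons[0])    # ('U', 'anti-U')
```

In the real nonet, the three neutral combinations (U, anti-U), (D, anti-D) and (S, anti-S) mix quantum-mechanically to form the pi-zero, eta and eta-prime, but the count of nine states is the same.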

The basic Meson Nonet (credit: Wikimedia Commons)

If you take the three quarks U, D and S and combine them in all possible unique threes, you get two patterns of particles shown below, called the Baryon Octet (left) and the Baryon Decuplet (right).

Normal baryons made from three-quark triplets

The problem was that there was a single missing particle in the simple 3-quark baryon pattern. The Omega-minus (S,S,S) at the apex of the Baryon Decuplet was nowhere to be found. This slot was empty until Brookhaven National Laboratory discovered it in early 1964. It was the first indication that the quark model was on the right track and could predict a new particle that no one had ever seen before. Once the other three quarks (C, T and B) were discovered in the 1970s, it was clear that there were many more slots to fill in the geometric patterns that emerged from a six-quark system.

The first particles predicted, and then discovered, in these patterns were the J/Psi “charmonium” meson (C, anti-C) in 1974, and the Upsilon “bottomonium” meson (B, anti-B) in 1977. Apparently there are no possible top mesons (T, anti-T) because the top quark decays so quickly it is gone before it can bind together with an anti-top quark to make even the lightest stable toponium meson!

The number of possible particles that result from simply combining the six quarks and six anti-quarks in pairs (mesons) is exactly 39. Of these, only 26 had been detected as of 2017. These particles have masses between 4 and 11 times that of a single proton!

For the still-heavier three-quark baryons, the quark patterns predict 75 baryons containing combinations of all six quarks. Of these, the proton and neutron are the least massive! But 31 of these predicted baryons have not been detected yet. These include the lightest missing particles, the double-charmed Xi (U,C,C) and the bottom Sigma (U, D, B), and the most massive ones, the charmed double-bottom Omega (C, B, B) and the triple-bottom Omega (B,B,B). In 2014, CERN/LHC announced the discovery of two of these missing particles, the bottom Xi baryons (B, S, D), with masses near 5.8 GeV.
To make life even more interesting for the Standard Model, other combinations of more than three quarks are also possible.

Exotic Baryons
A pentaquark baryon particle can contain four quarks and one anti-quark. The first of these, called the Theta-plus baryon, was predicted in 1997 and consists of (U, U, D, D, anti-S). This kind of quark package seems to be pretty rare and hard to create. There have been several claims for a detection of such a particle near 1.5 GeV, but experimental verification remains controversial. Two other possibilities called the Phi double-minus (D, D, S, S, anti-U) and the charmed neutral Theta (U, U, D, D, anti-C) have been searched for but not found.

Comparing normal and exotic baryons (credit: Quantum Diaries)

There are also tetraquark mesons, which consist of four quarks. The Z-meson (C, D, anti-C, anti-U) was discovered by the Japanese Belle experiment in 2007 and confirmed in 2014 by the Large Hadron Collider at 4.43 GeV, hence the proper name Z(4430). The Y(4140) was discovered at Fermilab in 2009 and confirmed at the LHC in 2012, and has a mass 4.4 times the proton’s mass. It could be a combination of charmed quarks and charmed anti-quarks (C, anti-C, C, anti-C). The X(3830) particle was also discovered by the Belle experiment and confirmed by other investigators, and could be yet another tetraquark combination consisting of a pair of quarks and anti-quarks (q, anti-q, q, anti-q).

So the Standard Model, and the six-quark model it contains, makes specific predictions for new baryon and meson states to be discovered. All told, there are 44 ordinary baryons and mesons that remain to be discovered! As for the ‘exotics,’ they open up a whole other universe of possibilities. In theory, heptaquarks (5 quarks, 2 antiquarks), nonaquarks (6 quarks, 3 antiquarks), and so on could also exist.

At the current pace of a few particles per year or so, we may finally wrap up all the predictions of the quark model in the next few decades. Then we really get to wonder what lies beyond the Standard Model once all the predicted particle slots have been filled. It is actually a win-win situation: either we completely verify the quark model, which is very cool, or we discover anomalous particles that the quark model can’t explain, which may show us the ‘backdoor’ way to the Standard Model v2.0 that the current supersymmetry searches do not seem to be providing just yet.

Check back here on Wednesday, March 22 for the next topic!

Hohmann’s Tyranny

It really is a shame. When all you have is a hammer, everything else looks like a nail. This also applies to our current, international space programs.

We have been using chemical rockets for centuries, but since the advent of the V2 and the modern space age, these brute-force, cheap workhorses have been the main propulsion technology we use to go just about everywhere in the solar system. But this amounts to believing that one technology can span all of our needs across the trillions of cubic miles of interplanetary space.

We pay a huge price for this belief.

Chemical rockets have their place in space travel. They are fantastic at delivering HUGE thrusts quickly; the method par excellence for getting us off this planet and paying the admission ticket to space. No other known propulsion technology is as cheap, simple, and technologically elegant as chemical propulsion in this setting. Applying this same technology to interplanetary travel beyond the moon is quite another thing, and sets in motion an escalating series of difficult problems.

Every interplanetary spacecraft launched so far to travel to the planets of our solar system works on the same principle: give the spacecraft a HUGE boost to get it off the launch pad with enough velocity to reach the distant planet, then cut the engines off after a few minutes so the spacecraft can literally coast the whole way. With a few more ‘delta-V’ changes along the way, this is the minimum-energy trajectory, known to rocket scientists as the Hohmann Transfer orbit. It is designed to get you there, not in the shortest time, but using the least amount of energy. In propulsion, energy is money. We use souped-up Atlas rockets at a few hundred million dollars a pop to launch spacecraft to the outer planets. We don’t use even larger and more expensive Saturn V-class rockets that would deliver more energy for a dramatically shorter ride.
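The coast time along a Hohmann transfer follows from Kepler’s third law: it is half the period of an ellipse whose semi-major axis is the average of the two orbital radii. A short Python sketch (the function name and the rounded orbital radii are my own):

```python
import math

MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, meters

def hohmann_transfer_days(r1_au, r2_au):
    """One-way coast time in days: T/2 = pi * sqrt(a^3 / mu), where a is
    the semi-major axis of the transfer ellipse touching both orbits."""
    a = 0.5 * (r1_au + r2_au) * AU
    return math.pi * math.sqrt(a**3 / MU_SUN) / 86400.0

print(round(hohmann_transfer_days(1.0, 1.524)))  # Earth -> Mars: 259
print(round(hohmann_transfer_days(1.0, 39.48)))  # Earth -> Pluto: ~16,600 days, about 45 years
```

The idealized Earth–Mars figure of about 259 days is in the same ballpark as the roughly 220-day trajectories actual missions fly with a bit of extra launch energy.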

If you bank on taking the slow boat to Mars rather than a more energetic ride, this leads to all sorts of problems. The biggest is that the inexpensive 220-day journeys let humans build up all sorts of nasty medical problems that short 2-week trips would completely eliminate. In fact, the entire edifice of the $150 billion International Space Station exists to explore the extended human stays in space that are demanded by Hohmann Transfer orbits and chemical propulsion. We pay a costly price to keep using cheap chemical rockets that force long stays in space and cause major problems that are expensive to patch up afterwards. The entire investment in the ISS could have been avoided if we had focused on getting travel times in space down to a few weeks.

You do not need Star Trek warp technology to do this!

Since the 1960s, NASA engineers and academic ‘think tanks’ have designed nuclear rocket engines and ion rocket engines; both show enormous promise in breaking the hegemony of chemical transportation. The NASA nuclear rocket program began in the early 1960s and built several operational prototypes, but the program was abandoned in the late 1960s because nuclear rockets were extremely messy, heavy, and had a nasty habit of slowly vaporizing the nuclear reactor and blowing it out the rocket engine! Yet Wernher von Braun designed a Mars expedition for the 1970s in which several heavy, 100-ton nuclear motors would be placed in orbit by a Saturn V and then incorporated into a set of three interplanetary transports. This program was canceled when the Apollo program ended and there was no longer a conventional need for the massive Saturn V rockets. But ion rockets continued to be developed, and today several have already been used on interplanetary spacecraft such as Deep Space 1 and Dawn. Current plans for humans on Mars in the 2030s rely on ion rocket propulsion powered by massive solar panels.

Unlike chemical rockets, which limit spacecraft speeds to a few kilometers per second, ion rockets can in principle be developed with exhaust speeds up to several thousand km/sec. All they need is more thrust, and to get that they need low-mass power plants in the gigawatt range. ‘Rocket scientists’ gauge engine designs by their specific impulse, which is the exhaust speed divided by the acceleration of gravity at Earth’s surface. Chemical rockets can only provide specific impulses of about 300 seconds, but ion engine designs can reach 30,000 seconds or more! With such engines you could travel to Mars in SIX DAYS, and a jaunt to Pluto could take a neat two months! Under these conditions, most of the problems and hazards of prolonged human travel in space are eliminated.
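The relationship between specific impulse and exhaust speed, together with Tsiolkovsky’s rocket equation, makes the chemical-versus-ion comparison concrete. A small Python sketch (the function names are mine):

```python
import math

G0 = 9.80665  # standard surface gravity, m/s^2, used to define specific impulse

def exhaust_speed(isp_seconds):
    """Effective exhaust speed: v_e = Isp * g0."""
    return isp_seconds * G0

def mass_ratio(delta_v, isp_seconds):
    """Tsiolkovsky rocket equation: initial/final mass needed to
    achieve a velocity change delta_v (m/s)."""
    return math.exp(delta_v / exhaust_speed(isp_seconds))

print(round(exhaust_speed(300) / 1000, 1))    # chemical, Isp 300 s: 2.9 km/s
print(round(exhaust_speed(30000) / 1000, 1))  # ion design, Isp 30,000 s: 294.2 km/s
# Mass ratio for 100 km/s of delta-v with the ion engine:
print(round(mass_ratio(1.0e5, 30000), 2))     # 1.4
```

At Isp 300 seconds, that same 100 km/s of delta-v would demand a mass ratio of about e^34, which is precisely why chemically propelled spacecraft must coast instead.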

But instead of putting our money into perfecting these engine designs, we keep building chemical rockets and investing billions of dollars trying to keep our long-term passengers alive.

Go figure!!!

Check back here on Friday, March 17 for a new blog!

 

The Mystery of Gravity

In grade school we learned that gravity is an always-attractive force that acts between particles of matter. Later on, we learn that it has an infinite range through space, weakens as the inverse-square of the distance between bodies, and travels exactly at the speed of light.

But wait….there’s more!

 

It doesn’t take a rocket scientist to remind you that humans have always known about gravity! Its first mathematical description as a ‘universal’ force was by Sir Isaac Newton in 1666. Newton’s description remained unchanged until Albert Einstein published his General Theory of Relativity in 1915. Ninety years later, physicists such as Edward Witten, Stephen Hawking, Brian Greene and Lee Smolin, among others, are finding ways to improve our description of ‘GR’ to accommodate the strange rules of quantum mechanics. Ironically, although gravity is produced by matter, General Relativity does not really describe matter in any detail, certainly not with the detail of the modern quantum theory of atomic structure. In the mathematics, all of the details of a planet or a star are hidden in a single variable, m, representing its total mass.

 

The most amazing thing about gravity is that it is a force like no other known in Nature. It is a property of the curvature of space-time and of how particles react to this distorted space. Even more bizarrely, space and time are described by the mathematics of GR as qualities of the gravitational field of the cosmos that have no independent existence. Gravity does not exist like the frosting on a cake, embedded in some larger arena of space and time. Instead, the ‘frosting’ is everything, and matter is embedded in it, intimately and indivisibly connected to it. If you could turn off gravity, the mathematics predicts that space and time would also vanish! You can turn off electromagnetic forces by neutralizing the charges on material particles, but you cannot neutralize gravity without eliminating spacetime itself. This geometric relationship to space and time is the single most challenging aspect of gravity, the one that has prevented generations of physicists from describing it mathematically in the same way we describe the other three forces of the Standard Model.

Einstein’s General Relativity, published in 1915, is our most detailed mathematical theory for how gravity works. With it, astronomers and physicists have explored the origin and evolution of the universe, its future destiny, and the mysterious landscape of black holes and neutron stars. General Relativity has survived many different tests, and it has made many predictions that have been confirmed. So far, after 90 years of detailed study, no error has yet been discovered in Einstein’s original, simple theory.

Currently, physicists have explored two of its most fundamental and exotic predictions: The first is that gravity waves exist and behave as the theory predicts. The second is that a phenomenon called ‘frame-dragging’ exists around rotating massive objects.

Theoretically, gravity waves must exist in order for Einstein’s theory to be correct. They are distortions in the curvature of spacetime caused by accelerating matter, just as electromagnetic waves are distortions in the electromagnetic field of a charged particle produced by its acceleration. Gravity waves carry energy and travel at light-speed. At first they were detected indirectly: by 2004, astronomical bodies such as the Hulse-Taylor binary pulsar were found to be losing energy through gravity-wave emission at exactly the predicted rates. Then, in 2016, the twin LIGO gravity-wave detectors reported the unmistakable and nearly simultaneous pulses of geometric distortion created by colliding black holes billions of light years away.

By 1997, astronomers had also detected the ‘frame-dragging’ phenomenon in X-ray studies of distant black holes. As a black hole (or any other body) rotates, it actually ‘drags’ the surrounding space around with it, so that nearby orbits are pulled along in the direction of rotation, which is something totally unexpected in Newton’s theory of gravity. The Gravity Probe-B satellite orbiting Earth also confirmed this exotic spacetime effect in 2011, at precisely the magnitude expected by the theory for the rotating Earth.

Gravity also doesn’t care whether you have matter or anti-matter; both should behave identically as they fall and move under gravity’s influence. This was searched for in CERN’s ALPHA antimatter experiment, and in 2013 researchers placed the first limits on how antimatter ‘falls’ in Earth’s gravity. Future experiments will place even more stringent limits on just how gravitationally similar matter and antimatter are. Well, at least we know that antimatter doesn’t ‘fall up’!

There is only one possible problem with our understanding of gravity known at this time.

Applying general relativity, and even Newton’s Universal Gravitation, to large systems like galaxies and the universe leads to the discovery of a new ingredient called Dark Matter. No verified elementary particle has yet been found that accounts for this gravitating substance. Lacking a particle, some physicists have proposed modifying Newtonian gravity and general relativity themselves to account for this phenomenon without introducing a new form of matter. But none of the proposed theories leave the other verified predictions of general relativity experimentally intact. So is Dark Matter a figment of an incomplete theory of gravity, or is it a heretofore-undiscovered fundamental particle of nature? It took 50 years for physicists to discover the lynchpin particle called the Higgs boson. This is definitely a story we will hear more about in the decades to come!

There is much that we now know about gravity, yet as we strive to unify it with the other elementary forces and particles in nature, it still remains an enigma. But then, even the briefest glance across the landscape of the quantum world fills you with a sense of awe and wonderment at the improbability of it all. At its root, our physical world is filled with improbable and logic-twisting phenomena, and it is simply amazing that they have lent themselves to human logic to the extent that they have!

 

Return here on Monday, March 13 for my next blog!

A Family Resurrection

When someone tells you that they are a family genealogist, your first reaction is to gird yourself for a boring conversation about ‘begats’ that will sound like a chapter out of the Old Testament’s Genesis. What you probably don’t understand is the compulsion that drives us in this task.

A pretty little scene from one of my ancestral places near Uddevalla!

5,176 – That’s the number of people I have helped bring back from oblivion through my labors. There is an ineffable feeling of deep satisfaction in having tracked them down through countless Swedish church records spanning over 600 years of history. Every one recovered and named was a personal victory for me against the forgetfulness of time and history. The ancient Egyptians believed that if you removed a person’s name from monuments, or ceased to speak it, that person’s spirit would actually cease to exist. That is why so many pharaohs defaced their predecessors’ monuments by removing their names. To counter this eternal death, all you have to do is speak their name once again!

I, personally, have resurrected over 15 families who I never knew existed. I can now name their parents, their children, when and where they were born and died, how often they pulled up roots in one town and moved to another. I know where and when they lived among the countless towns and farms in rural Sweden. I can also anticipate what major stories and geopolitical issues must have been the small talk around dinner tables and among their fellow farm laborers in the fields.

Everyone loves the thrill of the hunt, the stalking of prey, and the final moment of satisfaction at the completion of the pursuit. For a genealogist, the hunt runs through historical records in pursuit of a single individual. Poring through a record containing a thousand names, we spot the birth of an ancestor. The accompanying census record tips us off about his family unit, captured in hand-written names, birth dates and places. A bit more work among the records of that moment clinches the number of children and where the family had last been in its travels. Working onward, we recover the names of the grandparents, the birth and death dates and places of the parents, marriages, when the children left home, and where they later wound up as their own lives played out in the course of time.

Countless ‘Aha’ moments reward you as you meticulously slog through these records and, step by step, recover a parent, a child, a place, a history. In doing so, you enter the acute fogginess of an alternate state of mind. The reward for this compulsion carries you on for many days until at last your labors are completed and you sit back and admire what you have just accomplished. There on your pages of notes, an entire family has been brought back from the depths of time. Like a diadem recovered from the soil, you can now admire the texture of this family and see it as an organic and living thing in space and time. As little as two weeks ago, you never even knew they existed. For years and even centuries before that, these ancestors lay buried in time and utterly forgotten by the living. They slumbered in the fragmented pages of a hundred church books spread across countless square miles of Sweden until you, one day, decided to resurrect them and tell their story.

The curious, but typical story of Eric Juhlin!

Eric Ulric Juhlin, my second great great uncle, was born on July 4, 1823 to my third great grandfather Magnus Juhlin and his wife Britta Ulrica Gadd, in the town of Tutaryd, Sweden.

In 2010 I only knew Eric existed from the town’s census record that revealed the 11 children of my distant grandfather Magnus Juhlin, listed neatly in their own little rows of data. Eric was the twin of Petrus Juhlin who, sadly, died three months later, a common fate for infants and young children in 19th-century rural Sweden.

Well, Eric eventually met Eva Cajsa Svensdotter from Ljungby, Sweden; the exact details of how they met are obscure. But they settled for a time in the town of Halmstad, where on March 6, 1855 they had their first child, Clara. Returning to Tutaryd in 1856, over the next 22 years they had six more children: Ida Regina, Ferdinand, Hedvig, Davida, Ulrica, and Gustaf Adolph. By the time their seventh and last child, Carl Leonard, was born in 1878, Eric was 55 years old, and Eva Cajsa turned 48 only 5 months later.

This family was singularly unlucky in raising their children to adulthood. Ferdinand died at the age of only five years. Carl Leonard made it to age 3, and Gustav Adolph, with an impressive King’s name, died at age 8. But there were also some hopeful stories among the more fortunate siblings they barely got to know.

Ida Regina did survive childhood and went on to marry Magnus Adolph Persson. Settling in the southern town of Hjämarp, they raised three sons and three daughters who all grew up and lived to old age. Magnus eventually died in 1930 at the age of 69, followed by Ida ten years later at the age of 83. Ida’s 16-year-old sister Ulrica decided, for whatever reason, to emigrate to the United States, where she eventually met and married her husband Fredrick William Picknell in 1893. Settling in Champaign, Illinois, they raised three sons: Percy Gordon, Frederick and Charles. Sadly, Ulrica died in 1915 at the age of 45. Her husband survived her by another 30 years. Their three sons went on to form their own families until they, themselves, passed into history, the last of them, Frederick, exiting this world in Toledo, Ohio on Halloween Day, 1986. But each of them managed to cast yet another generation into futurity, and through their children colonized the years between 1920 and 2012. One of them, Harry Gene Picknell, lived in Bethesda, Maryland, only a stone’s throw from my own front door… but I never got to meet him.

What became of Clara, Hedvig and Davida? Well, between the ages of 20 and 22, Clara and Hedvig moved from their family home in Tutaryd to the far-flung towns of Hesslunda and Mörarp. Their younger sister Davida followed suit on August 10, 1883 and moved to the big city of Halmstad. The census for this town spans thousands of pages and is a daunting challenge to study. Perhaps Davida will turn up somewhere among them, but it will be a long while before I muster the courage to dive into THAT archive.

Or perhaps like so many other ancestors, Davida Juhlin’s story will remain the silent gold of history!

Check back here on Wednesday, March 8 for a new blog!

Martian Swamp Gas?

Thanks to more than a decade of robotic studies, the surface of Mars is becoming as familiar to some of us as similar garden spots on Earth, such as the Atacama Desert in Chile or Devon Island in Canada. But this rust-colored world still has some tricks up its sleeve!

Back in 2003, NASA astronomer Michael Mumma and his team discovered traces of methane in the dilute atmosphere of Mars. The gas was localized to only a few geographic areas in the equatorial zone of the Martian Northern Hemisphere, but this was enough to get astrobiologists excited about the prospects for sub-surface life. The amount being released in a seasonal pattern was about 20,000 tons during the local summer months.


The discovery using ground-based telescopes in 2003 was soon confirmed a year later by other astronomers and by the Mars Express Orbiter, but the amount is highly variable. Ten years later, the Curiosity rover also detected methane in the atmosphere from its location many hundreds of miles from the nearest ‘plume’ locations. It became clear that the hit-or-miss nature of these detections had to do with the source of the methane turning on and off over time, and it was not some steady seepage going on all the time. Why was this happening, and did it have anything to do with living systems?

On Earth, there are organisms that take water (H2O) and combine it with carbon dioxide (CO2) from the air to create methane (CH4) as a by-product, but there are also inorganic processes that create methane. For instance, electrostatic discharges can ionize water and carbon dioxide and produce trillions of methane molecules per discharge. There is plenty of atmospheric dust in the very dry Martian atmosphere, so this is not a bad explanation at all.

This diagram shows possible ways that methane might make it into Mars’ atmosphere (sources) and disappear from the atmosphere (sinks). (Credit: NASA/JPL-Caltech/SAM-GSFC/Univ. of Michigan)

Still, the search for conclusive evidence of methane production and removal is one of the high frontiers in Martian research these days. New mechanisms are proposed every year, involving living or inorganic origins. There is even some speculation that the Curiosity rover’s own chemical lab was responsible for the rover’s methane ‘discovery’. Time will tell whether any of these ideas ultimately check out. There seem to be far more geological ways to create a bit of methane than biotic ones, which means the odds do not look good that the fleeting traces of methane we do see are produced by living organisms.

What does remain very exciting is that Mars is a chemically active place that has more than inorganic molecules in play. In 2014, the Curiosity rover took samples of mudstone and tested them with its on-board spectrometer. The samples were rich in organic molecules that contain chlorine atoms, including chlorobenzene (C6H5Cl), dichloroethane (C2H4Cl2), dichloropropane (C3H6Cl2) and dichlorobutane (C4H8Cl2). Chlorobenzene is not a naturally occurring compound on Earth. It is used in the manufacturing process for pesticides, adhesives, paints and rubber. Dichloropropane is used as an industrial solvent to make paint strippers, varnishes and furniture finish removers, and is classified as a carcinogen. There is even some speculation that the abundant perchlorate molecules (ClO4) in the Martian soil, when heated inside the spectrometer with the mudstone samples, created these new organics.

Mars is a frustratingly interesting place to study because, emotionally, it holds out hope for ultimately finding something exciting that takes us nearer to the idea that life once flourished there, or may still be present below its inaccessible surface. But all we have access to for now is its surface geology and atmosphere. From this we seem to encounter traces of exotic chemistry and perhaps our own contaminants at a handful of parts-per-billion. At these levels, the boring chemistry of Mars comes alive in the statistical noise of our measurements, and our dreams of Martian life are temporarily re-ignited.

Meanwhile, we will not rest until we have given Mars a better shot at revealing traces of its biosphere either ancient or contemporary!

Check back here on Thursday, March 2 for the next essay!