Crowdsourcing Gravity

The proliferation of smartphones with internal sensors has led to some interesting opportunities to make large-scale measurements of a variety of physical phenomena.

The iOS app ‘Gravity Meter’ and its Android equivalent have been used to make measurements of the local surface acceleration, which is nominally 9.8 m/sec2. The apps typically report the local acceleration to 0.01 (iOS) or even 0.001 (Android) m/sec2 accuracy, which leads to two interesting questions: 1) How reliable are these measurements at the displayed decimal limit? and 2) Can smartphones be used to measure the expected departures from the nominal surface acceleration due to Earth’s rotation? Here is a map, provided by The Physics Forum, showing the magnitude of this (centrifugal) rotation effect.

As Earth rotates, any object on its surface feels a centrifugal force directed outward from the center of Earth, generally in the direction of the local zenith. This causes Earth to bulge slightly at the equator compared to the poles, which you can see from the difference between its equatorial radius of 6,378.14 km and its polar radius of 6,356.75 km: a polar flattening difference of 21.4 kilometers. This centrifugal force also reduces the local surface acceleration slightly at the equator compared to the poles. At the equator, one would measure a value for ‘g’ of about 9.78 m/sec2, while at the poles it is about 9.83 m/sec2. Once again, and this is important to avoid any misconceptions, the total acceleration, defined as gravity plus the centrifugal term, is reduced, but gravity itself is not changed: from Newton’s Law of Universal Gravitation, gravity is due to mass, not rotation.
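For scale, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not part of the original post) of the centrifugal term at the equator:

```python
import math

omega = 2 * math.pi / 86164.1   # Earth's sidereal rotation rate, rad/s
r_eq = 6.37814e6                # equatorial radius, m

# Centrifugal acceleration at the equator: omega^2 * r
a_c = omega**2 * r_eq
print(f"{a_c:.4f} m/sec2")      # ~0.034 m/sec2
```

This accounts for most of the roughly 0.05 m/sec2 pole-to-equator difference quoted above; the remainder comes from the equatorial bulge itself, which places an observer at the equator slightly farther from Earth’s center.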

Assuming that the smartphone accelerometers are sensitive enough, they may be able to detect this equator-to-pole difference by comparing the surface acceleration measurements from observers at different latitudes.

 

Experiment 1 – How reliable are ‘gravity’ measurements at the same location?

To check this, I looked at the data from several participating classrooms at different latitudes, and selected the more numerous iOS measurements with the ‘Gravity Meter’ app. These data were kindly provided by Ms. Melissa Montoya’s class in Hawaii (+19.9N), George Griffith’s class in Arapahoe, Nebraska (+40.3N), Ms. Sue Lamdin’s class in Brunswick, Maine (+43.9N), and Elizabeth Bianchi’s class in Waldoboro, Maine (+44.1N).

All four classrooms’ measurements, irrespective of latitude (19.9N, 40.3N, 43.9N or 44.1N), showed distinct ‘peaks’, but also displayed long and complicated ‘tails’, so these distributions are not the Gaussians one might expect from purely random errors. This suggests that under classroom conditions there may be systematic effects introduced by the specific ways in which students make the measurements, adding complicated, apparently non-random, student-dependent corrections to the data.

In a further study using the iPad data from Elizabeth Bianchi’s class, I discovered that, at least for iPads running the Gravity Sensor app, there was a definite correlation between the measured value and the time at which the measurement was made during a 1.5-hour period. This resembles a heating effect: the longer you leave the device on before making the measurement, the larger the measured value. I will look into this at a later time.

The non-Gaussian behavior of the current data makes it impossible to characterize the measurements with a simple mean and standard deviation.
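A standard workaround for such long-tailed data, sketched below in Python (my own illustration), is to quote the median and the median absolute deviation (MAD) instead, since both are largely insensitive to outliers:

```python
import statistics

def robust_summary(values):
    """Median and median absolute deviation (MAD): outlier-resistant
    stand-ins for the mean and standard deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med, mad

# Hypothetical classroom readings (m/sec2) with a long tail:
readings = [9.79, 9.80, 9.80, 9.81, 9.81, 9.82, 9.95, 10.10]
print(robust_summary(readings))   # the median barely budges despite outliers
```

This is also why the median values, rather than averages, are used throughout the analysis below.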

 

Experiment 2 – Can the rotation of Earth be detected?

Beyond the suggestion in the 4-classroom data of a nominal centrifugal effect of about the correct order-of-magnitude, we were able to get a large sample of individual observers spanning a wide latitude range, also using the iOS platform and the same ‘Gravity Meter’ app. Including the median values from the four classrooms in Experiment 1, we had a total of 41 participants: Elizabeth Abrahams, Jennifer Arsenau, Dorene Brisendine, Allen Clermont, Hillarie Davis, Thom Denholm, Heather Doyle, Steve Dryer, Diedra Falkner, Mickie Flores, Dennis Gallagher, Robert Gallagher, Rachael Gerhard, Robert Herrick, Harry Keller, Samuel Kemos, Anna Leci, Alexia Silva Mascarenhas, Alfredo Medina, Heather McHale, Patrick Morton, Stacia Odenwald, John-Paul Rattner, Pat Reiff, Ghanjah Skanby, Staley Tracy, Ravensara Travillian, and Darlene Woodman.

The scatter plot of these individual measurements is shown here:

The red squares are the individual iOS measurements. The blue circles are the Android phone values. The red dashed line shows the linear regression for only the iOS data points, assuming each point is equally weighted. The solid line is the predicted change in the local acceleration with latitude according to the model:

g = 9.806 - 0.5 × (9.832 - 9.780) × cos(2 × latitude)    m/sec2

where 9.832 m/sec2 is the polar acceleration, 9.780 m/sec2 is the equatorial acceleration, and 9.806 m/sec2 is their average. Note: No correction for lunar and solar tidal effects has been made, since these are entirely undetectable with this technology.
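As a quick check, here is a minimal Python sketch (my own, not part of the original analysis) that evaluates this model:

```python
import math

def g_model(latitude_deg):
    """Predicted surface acceleration (m/sec2) vs. latitude, per the model above."""
    g_pole, g_equator = 9.832, 9.780
    g_mean = 0.5 * (g_pole + g_equator)   # 9.806
    return g_mean - 0.5 * (g_pole - g_equator) * math.cos(2 * math.radians(latitude_deg))

for lat in (0.0, 19.9, 40.3, 44.1, 90.0):
    print(f"{lat:5.1f} deg -> {g_model(lat):.4f} m/sec2")
```

At the equator this returns 9.780, at the poles 9.832, exactly the two anchor values.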

Each individual point has a nominal variation of +/-0.01 m/sec2, based on the minimum and maximum values recorded during a fixed interval of time. Notably, this measurement RMS is significantly smaller than the classroom variance seen in Experiment 1, owing to the apparently non-Gaussian shape of the classroom sampling. When we partition the iOS smartphone data into 10-degree latitude bins and take the median value in each bin, we get the following plot, which is a bit cleaner:

The solid blue line is the predicted acceleration. The dashed black line is the linear regression for the equally-weighted individual measurements. The median values of the classroom points are added to show their distribution. It is of interest that the linear regression line is parallel, and nearly coincident with, the predicted line, which again suggests that Earth’s rotation effect may have been detected in this median-sampled data set provided by a total of 37 individuals.
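A minimal sketch of that bin-and-median step, assuming the data reduce to simple (latitude, g) pairs (the numbers below are hypothetical):

```python
import statistics
from collections import defaultdict

def bin_medians(measurements, bin_width=10):
    """Group (latitude, g) pairs into latitude bins; return each bin's median g."""
    bins = defaultdict(list)
    for lat, g in measurements:
        bins[int(lat // bin_width) * bin_width].append(g)
    return {edge: statistics.median(vals) for edge, vals in sorted(bins.items())}

sample = [(19.9, 9.786), (21.2, 9.789), (40.3, 9.801), (43.9, 9.799), (44.1, 9.803)]
print(bin_medians(sample))   # e.g. {10: 9.786, 20: 9.789, 40: 9.801}
```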

The classroom points clustering near +44N represent a total of 36 individual measures behind the plotted median values, which is a statistically significant sample. Taken at face value, the classroom data alone would support the hypothesis that the rotation effect was detected, though the points fall consistently 0.005 m/sec2 below the predicted values at mid-latitudes. The intrinsic variation of the data, represented by the consistent +/-0.01 m/sec2 high-vs-low range of all of the individual samples, suggests that this is probably a reasonable measure of the instrumental accuracy of the smartphones. Error bars (thin vertical black lines) have been added to the plotted median points to indicate this accuracy.

The bottom line seems to be that it may be marginally possible to detect the Earth-rotation effect, but it requires precise measurements at the 0.01 m/sec2 level against what appears to be a significant non-Gaussian measurement background. Once again, some of the variation seen at each latitude may be due to how warm the smartphones were at the time of the measurement. The Android and iOS measurements also seem to be discrepant with each other, with the Android measurements showing the larger variation.

Check back here on Wednesday, March 29 for the next topic!

Fifty Years of Quarks!

Today, physicists are both excited and disturbed by how well the Standard Model is behaving, even at the enormous energies provided by the CERN Large Hadron Collider. There seems to be no sign of the expected supersymmetry property that would show the way to the next-generation version of the Standard Model: Call it V2.0. But there is another ‘back door’ way to uncover its deficiencies. You see, even the tests for how the Standard Model itself works are incomplete, even after the dramatic 2012 discovery of the Higgs Boson! To see how this backdoor test works, we need a bit of history.

Over fifty years ago in 1964, physicists Murray Gell-Mann at Caltech and George Zweig at CERN came up with the idea of the quark as a response to the bewildering number of elementary particles that were being discovered at the huge “atom smasher” labs sprouting up all over the world. Basically, you only needed three kinds of elementary quarks, called “up,” “down” and “strange.” Combining these in threes, you get the heavy particles called baryons, such as the proton and neutron. Combining them in twos, with one quark and one anti-quark, you get the medium-weight particles called the mesons.

This early idea was extended to include three more types of quarks, dubbed “charmed,” “top” and “bottom” (or on the other side of the pond, “charmed,” “truth” and “beauty”), as they were discovered in the 1970s. These six quarks form three generations — (U, D), (C, S), (T, B) — in the Standard Model.

Particle tracks at CERN/CMS experiment (credit: CERN/CMS)

Early Predictions

At first the quark model easily accounted for the then-known particles. A proton would consist of two up quarks and one down quark (U, U, D), and a neutron would be (D, D, U). A pi-plus meson would be (U, anti-D), a pi-minus meson would be (D, anti-U), and so on. It’s a bit confusing to combine quarks and anti-quarks in all the possible combinations. It’s kind of like working out all the ways that a coin flipped three times gives you a pattern like (T,T,H) or (H,T,H), but when you do this in twos and threes for the U, D and S quarks, you get the entire family of the nine known mesons, which forms one geometric pattern in the figure below, called the Meson Nonet.

The basic Meson Nonet (credit: Wikimedia Commons)
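That counting exercise is easy to reproduce. Here is a small Python sketch (my own illustration) pairing each light quark with each light anti-quark:

```python
from itertools import product

quarks = ["u", "d", "s"]
antiquarks = ["anti-" + q for q in quarks]

# Every quark/anti-quark pairing is a candidate meson state:
mesons = list(product(quarks, antiquarks))
print(len(mesons), "combinations")   # 9, the slots of the Meson Nonet
```

(Physically, the three neutral quark-antiquark combinations mix into the observed neutral mesons, but the naive count of nine slots comes out right.)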

If you take the three quarks U, D and S and combine them in all possible unique threes, you get two patterns of particles shown below, called the Baryon Octet (left) and the Baryon Decuplet (right).

Normal baryons made from three-quark triplets
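The same sort of counting for baryons uses unordered triples of quarks with repeats allowed, as in this sketch (again my own illustration):

```python
from itertools import combinations_with_replacement

quarks = ["u", "d", "s"]

# Unordered quark triples, e.g. ('u', 'u', 'd') for the proton:
triples = list(combinations_with_replacement(quarks, 3))
print(len(triples), "triples")   # 10, matching the Baryon Decuplet
```

The Octet arises from the same quark content combined with different internal spin arrangements, which this naive flavor counting ignores.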

The problem was that there was a single missing particle in the simple 3-quark baryon pattern. The Omega-minus (S,S,S) at the apex of the Baryon Decuplet was nowhere to be found. This slot was empty until Brookhaven National Laboratory discovered it in early 1964. It was the first indication that the quark model was on the right track and could predict a new particle that no one had ever seen before. Once the other three quarks (C, T and B) were discovered in the 1970s, it was clear that there were many more slots to fill in the geometric patterns that emerged from a six-quark system.

The first particles predicted, and then discovered, in these patterns were the J/Psi “charmonium” meson (C, anti-C) in 1974, and the Upsilon “bottomonium” meson (B, anti-B) in 1977. Apparently there are no possible top mesons (T, anti-T) because the top quark decays so quickly it is gone before it can bind together with an anti-top quark to make even the lightest stable toponium meson!

Simply combining the six quarks and six anti-quarks in pairs (mesons) yields exactly 39 predicted mesons, of which only 26 have been detected as of 2017. These particles have masses between 4 and 11 times that of a single proton!

For the still-heavier three-quark baryons, the quark patterns predict 75 baryons containing combinations of all six quarks. Of these, the proton and neutron are the least massive! But there are 31 of these predicted baryons that have not been detected yet. These include the lightest missing particles, the double-charmed Xi (U,C,C) and the bottom Sigma (U, D, B), and the most massive ones, the charmed double-bottom Omega (C, B, B) and the triple-bottom Omega (B,B,B). In 2014, CERN/LHC announced the discovery of two of these missing particles, the bottom Xi baryons (B, S, D), with masses near 5.8 GeV.
To make life even more interesting for the Standard Model, other combinations of more than three quarks are also possible.

Exotic Baryons
A pentaquark baryon particle can contain four quarks and one anti-quark. The first of these, called the Theta-plus baryon, was predicted in 1997 and consists of (U, U, D, D, anti-S). This kind of quark package seems to be pretty rare and hard to create. There have been several claims for a detection of such a particle near 1.5 GeV, but experimental verification remains controversial. Two other possibilities called the Phi double-minus (D, D, S, S, anti-U) and the charmed neutral Theta (U, U, D, D, anti-C) have been searched for but not found.

Comparing normal and exotic baryons (credit: Quantum Diaries)

There are also tetraquark mesons, which consist of two quarks and two anti-quarks. The Z-meson (C, D, anti-C, anti-U) was discovered by the Japanese Belle Experiment in 2007 and confirmed in 2014 by the Large Hadron Collider at 4.43 GeV, hence the proper name Z(4430). The Y(4140) was discovered at Fermilab in 2009 and confirmed at the LHC in 2012, and has a mass 4.4 times the proton’s mass. It could be a combination of charmed quarks and charmed anti-quarks (C, anti-C, C, anti-C). The X(3830) particle was also discovered by the Belle Experiment and confirmed by other investigators, and could be yet another tetraquark combination consisting of a pair of quarks and anti-quarks (q, anti-q, q, anti-q).

So the Standard Model, and the six-quark model it contains, makes specific predictions for new baryon and meson states waiting to be discovered. All told, there are 44 ordinary baryons and mesons that remain to be discovered! As for the ‘exotics’, they open up a whole other universe of possibilities. In theory, heptaquarks (5 quarks, 2 antiquarks), nonaquarks (6 quarks, 3 antiquarks), etc. could also exist.

At the current pace of a few particles per year or so, we may finally wrap up all the predictions of the quark model in the next few decades. Then we really get to wonder what lies beyond the Standard Model once all the predicted particle slots have been filled. It is actually a win-win situation, because we either completely verify the quark model, which is very cool, or we discover anomalous particles that the quark model can’t explain, which may show us the ‘backdoor’ way to the Standard Model v2.0 that the current supersymmetry searches do not seem to be providing just yet.

Check back here on Wednesday, March 22 for the next topic!

Hohmann’s Tyranny

It really is a shame. When all you have is a hammer, everything looks like a nail. This also applies to our current international space programs.

We have been using chemical rockets for centuries, and since the advent of the V2 and the modern space age, these cheap, brute-force workhorses have been the main propulsion technology we use to go just about everywhere in the solar system. But this amounts to believing that one technology can span all of our needs across the trillions of cubic miles that encompass interplanetary space.

We pay a huge price for this belief.

Chemical rockets have their place in space travel. They are fantastic at delivering HUGE thrusts quickly; the method par excellence for getting us off this planet and paying the admission ticket to space. No other known propulsion technology is as cheap, simple, and technologically elegant as chemical propulsion in this setting. Applying this same technology to interplanetary travel beyond the moon is quite another thing, and sets in motion an escalating series of difficult problems.

Every interplanetary spacecraft launched so far to travel to the planets of our solar system works on the same principle: give the spacecraft a HUGE boost to get it off the launch pad with enough velocity to reach the distant planet, then cut the engines off after a few minutes and let the spacecraft literally coast the whole way, with a few more ‘delta-V’ changes along the route. This is called the minimum-energy trajectory or, for rocket scientists, the Hohmann Transfer orbit. It is designed to get you there not in the shortest time, but using the least amount of energy. In propulsion, energy is money. We use souped-up Atlas rockets at a few hundred million dollars a pop to launch spacecraft to the outer planets. We don’t use even larger and more expensive Saturn V-class rockets that could deliver more energy for a dramatically shorter ride.
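To see what a Hohmann transfer actually costs, here is a minimal sketch in Python (my own illustration, assuming idealized circular, coplanar orbits and ignoring planetary escape and capture):

```python
import math

MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def hohmann(r1, r2):
    """Total heliocentric delta-v (m/s) and flight time (days) for an ideal
    Hohmann transfer between circular orbits of radii r1 and r2."""
    a_t = 0.5 * (r1 + r2)                         # transfer-ellipse semi-major axis
    v_peri = math.sqrt(MU_SUN * (2/r1 - 1/a_t))   # vis-viva speed at departure
    v_apo = math.sqrt(MU_SUN * (2/r2 - 1/a_t))    # vis-viva speed at arrival
    dv = (v_peri - math.sqrt(MU_SUN/r1)) + (math.sqrt(MU_SUN/r2) - v_apo)
    t_flight = math.pi * math.sqrt(a_t**3 / MU_SUN)  # half the ellipse's period
    return dv, t_flight / 86400.0

dv, days = hohmann(1.0 * AU, 1.524 * AU)   # Earth -> Mars
print(f"delta-v ~ {dv/1000:.1f} km/s, flight time ~ {days:.0f} days")
```

For Earth to Mars this gives roughly 5.6 km/s and about 260 days, which is why real missions flown near this minimum-energy track take the better part of a year.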

If you bank on taking the slow-boat to Mars rather than a more energetic ride, this leads to all sorts of problems. The biggest of these is that the inexpensive 220-day journeys let humans build up all sorts of nasty medical problems that short 2-week trips would completely eliminate. In fact, the entire edifice of the $150 billion International Space Station is there to explore the extended human stays in space that are demanded by Hohmann Transfer orbits and chemical propulsion. We pay a costly price to keep using cheap chemical rockets that deliver long stays in space, and cause major problems that are expensive to patch-up afterwards. The entire investment in the ISS could have been eliminated if we focused on getting the travel times in space down to a few weeks.

You do not need Star Trek warp technology to do this!

Since the 1960s, NASA engineers and academic ‘think tanks’ have designed nuclear rocket engines and ion rocket engines, both of which show enormous promise in breaking the hegemony of chemical transportation. The NASA nuclear rocket program began in the early 1960s and built several operational prototypes, but the program was abandoned in the late 1960s because nuclear rockets were extremely messy, heavy, and had a nasty habit of slowly vaporizing the nuclear reactor and blowing it out the rocket engine! Yet Wernher von Braun designed a Mars expedition for the 1970s in which several heavy 100-ton nuclear motors would be placed in orbit by Saturn V launches and then incorporated into a set of three interplanetary transports. This program was canceled when the Apollo program ended and there was no longer a conventional need for the massive Saturn V rockets. But ion rockets continued to be developed, and several have already flown on interplanetary spacecraft like Deep Space 1 and Dawn. The plans for humans on Mars in the 2030s rely on ion rocket propulsion powered by massive solar panels.

Unlike chemical rockets, which limit spacecraft speeds to a few kilometers/sec, ion rockets can in principle be developed to reach speeds of several thousand km/sec. All they need is more thrust, and to get that they need low-mass power plants in the gigawatt range. ‘Rocket scientists’ gauge engine designs by their specific impulse, the exhaust speed divided by the acceleration of gravity at Earth’s surface. Chemical rockets can only provide specific impulses of about 300 seconds, but ion engine designs can reach 30,000 seconds or more! With such engines, you could travel to Mars in SIX DAYS, and a jaunt to Pluto could take a neat 2 months! Under these conditions, most of the problems and hazards of prolonged human travel in space are eliminated.
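The payoff of high specific impulse follows directly from the Tsiolkovsky rocket equation; here is a small sketch (my own illustration) comparing the two engine classes at the same propellant fraction:

```python
import math

G0 = 9.80665   # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0/mf)."""
    return isp_seconds * G0 * math.log(mass_ratio)

# Same 5:1 wet-to-dry mass ratio, wildly different exhaust speeds:
for name, isp in [("chemical", 300), ("ion", 30000)]:
    print(f"{name:8s} Isp = {isp:>6} s -> delta-v = {delta_v(isp, 5)/1000:8.1f} km/s")
```

At a 5:1 mass ratio the chemical engine yields about 4.7 km/s while the 30,000-second ion engine yields about 470 km/s, which is where the dramatic trip-time reductions come from, given (as noted above) a sufficiently powerful and lightweight electrical supply.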

But instead of putting our money into perfecting these engine designs, we keep building chemical rockets and investing billions of dollars trying to keep our long-term passengers alive.

Go figure!!!

Check back here on Friday, March 17 for a new blog!

 

The Mystery of Gravity

In grade school we learned that gravity is an always-attractive force that acts between particles of matter. Later on, we learn that it has an infinite range through space, weakens as the inverse-square of the distance between bodies, and travels exactly at the speed of light.

But wait….there’s more!

 

It doesn’t take a rocket scientist to remind you that humans have always known about gravity! Its first mathematical description as a ‘universal’ force was by Sir Isaac Newton in 1666. Newton’s description remained unchanged until Albert Einstein published his General Theory of Relativity in 1915. Ninety years later, physicists such as Edward Witten, Stephen Hawking, Brian Greene and Lee Smolin, among others, are finding ways to improve our description of ‘GR’ to accommodate the strange rules of quantum mechanics. Ironically, although gravity is produced by matter, General Relativity does not really describe matter in any detail – certainly not with the detail of the modern quantum theory of atomic structure. In the mathematics, all of the details of a planet or a star are hidden in a single variable, m, representing its total mass.
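The Schwarzschild solution makes this point vividly; quoting the textbook form for illustration, the entire star enters only through its total mass M:

ds² = -(1 - 2GM/rc²) c²dt² + dr²/(1 - 2GM/rc²) + r²dΩ²

Two stars with utterly different compositions but the same M produce exactly the same external spacetime.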

 

The most amazing thing about gravity is that it is a force like no other known in Nature. It is a property of the curvature of space-time and of how particles react to this distorted space. Even more bizarrely, space and time are described by the mathematics of GR as qualities of the gravitational field of the cosmos that have no independent existence. Gravity does not exist like the frosting on a cake, embedded in some larger arena of space and time. Instead, the ‘frosting’ is everything, and matter is embedded in it, intimately and indivisibly connected to it. If you could turn off gravity, it is mathematically predicted that space and time would also vanish! You can turn off electromagnetic forces by neutralizing the charges on material particles, but you cannot neutralize gravity without eliminating spacetime itself. This geometric relationship to space and time is the single most challenging aspect of gravity, and it has prevented generations of physicists from mathematically describing gravity in the same way we do the other three forces in the Standard Model.

Einstein’s General Relativity, published in 1915, is our most detailed mathematical theory for how gravity works. With it, astronomers and physicists have explored the origin and evolution of the universe, its future destiny, and the mysterious landscape of black holes and neutron stars. General Relativity has survived many different tests, and it has made many predictions that have been confirmed. So far, after 90 years of detailed study, no error has yet been discovered in Einstein’s original, simple theory.

Currently, physicists have explored two of its most fundamental and exotic predictions: The first is that gravity waves exist and behave as the theory predicts. The second is that a phenomenon called ‘frame-dragging’ exists around rotating massive objects.

Theoretically, gravity waves must exist in order for Einstein’s theory to be correct. They are distortions in the curvature of spacetime caused by accelerating matter, just as electromagnetic waves are distortions in the electromagnetic field of a charged particle produced by its acceleration. Gravity waves carry energy and travel at light-speed. At first they were detected indirectly: by 2004, binary systems such as the Hulse-Taylor pulsar were found to be losing energy through gravity-wave emission at exactly the predicted rates. Then in 2016, the twin LIGO gravity-wave detectors recorded the unmistakable and nearly simultaneous pulses of geometric distortion created by colliding black holes billions of light years away.

By 1997, astronomers had also detected the ‘frame-dragging’ phenomenon in X-ray studies of distant black holes. As a black hole (or any other body) rotates, it actually ‘drags’ space around with it. This means that orbits around a rotating body slowly precess in a way that is totally unexpected in Newton’s theory of gravity. The Gravity Probe-B satellite orbiting Earth confirmed this exotic spacetime effect in 2011, at precisely the magnitude expected by the theory for the rotating Earth.

Gravity also doesn’t care if you have matter or anti-matter; both will behave identically as they fall and move under gravity’s influence. This quantum-scale phenomenon was searched for at CERN’s ALPHA experiment, and in 2013 researchers placed the first limits on how matter and antimatter ‘fall’ in Earth’s gravity. Future experiments will place even more stringent limits on just how gravitationally similar matter and antimatter are. Well, at least we know that antimatter doesn’t ‘fall up’!

There is only one possible problem with our understanding of gravity known at this time.

Applying general relativity, and even Newton’s Universal Gravitation, to large systems like galaxies and the universe leads to the discovery of a new ingredient called Dark Matter. There do not seem to be any verifiable elementary particles that account for this gravitating substance. Lacking a particle, some physicists have proposed modifying Newtonian gravity and general relativity themselves to account for this phenomenon without introducing a new form of matter. But none of the proposed theories leave the other verified predictions of general relativity experimentally intact. So is Dark Matter a figment of an incomplete theory of gravity, or is it a heretofore undiscovered fundamental particle of nature? It took 50 years for physicists to discover the linchpin particle called the Higgs boson. This is definitely a story we will hear more about in the decades to come!

There is much that we now know about gravity, yet as we strive to unify it with the other elementary forces and particles in nature, it still remains an enigma. But then, even the briefest glance across the landscape of the quantum world fills you with a sense of awe and wonderment at the improbability of it all. At its root, our physical world is filled with improbable and logic-twisting phenomena, and it is simply amazing that they have lent themselves to human logic to the extent that they have!

 

Return here on Monday, March 13 for my next blog!

A Family Resurrection

When someone tells you that they are a family genealogist, your first reaction is to gird yourself for a boring conversation about begats that will sound something like a chapter out of the Old Testament Genesis. What you probably don’t understand is the compulsion that drives us in this task.

A pretty little scene from one of my ancestral places near Uddevalla!

5,176 – That’s the number of people I have helped bring back from oblivion through my labors. There is an ineffable feeling of deep satisfaction in having tracked them down through the countless Swedish church records spanning over 600 years of history. Every one recovered and named was a personal victory for me against the forgetfulness of time and history. Ancient Egyptians believed that if you removed a person’s name from monuments, or ceased to speak it, that person’s spirit would actually cease to exist. That is why so many pharaohs defaced their predecessors’ monuments by removing their names. To counter this eternal death, all you have to do is again speak their name!

I, personally, have resurrected over 15 families who I never knew existed. I can now name their parents, their children, when and where they were born and died, how often they pulled up roots in one town and moved to another. I know where and when they lived among the countless towns and farms in rural Sweden. I can also anticipate what major stories and geopolitical issues must have been the small talk around dinner tables and among their fellow farm laborers in the fields.

Everyone loves the thrill of the hunt, the stalking of prey, and the final moment of satisfaction at the completion of the pursuit. For a genealogist, we hunt through historical records in pursuit of a single individual. Poring through a record containing a thousand names, we spot the birth of an ancestor. The accompanying census record tips us off about his family unit, captured in hand-written names, birth dates and places. A bit more work among the records of that moment clinches the number of children and where the family had last been in its travels. Looking forward, we recover the names of the grandparents, the birth and death dates and places of the parents, marriages, when the children left home, and where they later wound up as their own lives played out in the course of time.

Countless ‘Aha’ moments reward you as you meticulously slog through these records and, step-by-step, recover a parent, a child, a place, a history. In doing so, you enter the acute fogginess of an alternate state of mind. The reward for this compulsion carries you on for many days until at last your labors are completed and you sit back and admire what you have just accomplished. There on your pages of notes, an entire family has been brought back from the depths of time. Like a diadem recovered from the soil, you can now admire the texture of this family and see it as an organic and living thing in space and time. As little as two weeks ago, you never even knew they existed. For years and even centuries before that, these ancestors lay buried in time and utterly forgotten by the living. They slumbered in the many fragmented pages of a hundred church books spread across countless square miles of Sweden until you, one day, decided to resurrect them and tell their story.

The curious, but typical story of Eric Juhlin!

Eric Ulric Juhlin, my second great great uncle, was born on July 4, 1823 to my third great grandfather Magnus Juhlin and his wife Britta Ulrica Gadd, in the town of Tutaryd, Sweden.

In 2010 I only knew Eric existed from the town’s census record, which revealed the 11 children of my distant grandfather Magnus Juhlin, listed neatly in their own little rows of data. Eric was the twin of Petrus Juhlin who, sadly, died three months later, as was a common risk for young children and infants in 19th-century rural Sweden.

Well, Eric eventually met Eva Cajsa Svensdotter from Ljungby, Sweden; the exact details of how they met are obscure. But they settled for a time in the town of Halmstad, where on March 6, 1855 they had their first child, Clara. Returning to Tutaryd in 1856, over the next 22 years they had six more children: Ida Regina, Ferdinand, Hedvig, Davida, Ulrica, and Gustaf Adolph. By the time their seventh and last child, Carl Leonard, was born in 1878, Eric was 55 years old, and Eva Cajsa turned 48 only 5 months later.

This family was singularly unlucky in raising their children to adulthood. Ferdinand died at the age of only five years. Carl Leonard made it to age 3, and Gustav Adolph, with an impressive King’s name, died at age 8. But there were also some hopeful stories too among their more fortunate siblings who they barely got to know.

Ida Regina did survive childhood and went on to marry Magnus Adolph Persson. Settling in the southern town of Hjämarp, they raised three sons and three daughters who all grew up and lived to old age. Magnus eventually died in 1930 at the age of 69, followed by Ida ten years later at the age of 83. Ida’s 16-year-old sister Ulrica decided, for whatever reason, to emigrate to the United States, where she eventually met and married her husband Fredrick William Picknell in 1893. Settling in Champaign, Illinois, they raised three sons: Percy Gordon, Frederick and Charles. Sadly, Ulrica died in 1915 at the age of 45. Her husband survived her by another 30 years. Their three sons went on to form their own families until they, themselves, passed into history; the last of them, Frederick, exiting this world in Toledo, Ohio on Halloween Day, 1986. But each of them managed to cast yet another generation into futurity, and through their children, colonized the years between 1920 and 2012. One of them, Harry Gene Picknell, lived in Bethesda, Maryland only a stone’s throw from my own front door…but I never got to meet him.

What became of Clara, Hedvig and Davida? Well, between the ages of 20 and 22, Clara and Hedvig moved from their family homes in Tutaryd to the far-flung towns of Hesslunda and Mörarp. Their younger sister Davida followed suit on August 10, 1883 and moved to the Big City of Halmstad. The census for this town spans thousands of pages and is a daunting challenge to study. Perhaps Davida will turn up somewhere among them, but it will be a long while before I muster up the courage to dive into THAT archive.

Or perhaps like so many other ancestors, Davida Juhlin’s story will remain the silent gold of history!

Check back here on Wednesday, March 8 for a new blog!

Martian Swamp Gas?

Thanks to more than a decade of robotic studies, the surface of Mars is becoming as familiar to some of us as similar garden spots on Earth, such as the Atacama Desert in Chile or Devon Island in Canada. But this rust-colored world still has some tricks up its sleeve!

Back in 2003, NASA astronomer Michael Mumma and his team discovered traces of methane in the dilute atmosphere of Mars. The gas was localized to only a few geographic areas in the equatorial zone in the martian Northern Hemisphere, but this was enough to get astrobiologists excited about the prospects for sub-surface life. The amount being released in a seasonal pattern was about 20,000 tons during the local summer months.


The discovery using ground-based telescopes in 2003 was soon confirmed a year later by other astronomers and by the Mars Express Orbiter, but the amount is highly variable. Ten years later, the Curiosity rover also detected methane in the atmosphere from its location many hundreds of miles from the nearest ‘plume’ locations. It became clear that the hit-or-miss nature of these detections had to do with the source of the methane turning on and off over time, and it was not some steady seepage going on all the time. Why was this happening, and did it have anything to do with living systems?

On Earth, there are organisms that take water (H2O) and combine it with carbon dioxide in the air (CO2) to create methane (CH4) as a by-product, but there are also inorganic processes that create methane. For instance, electrostatic discharges can ionize water and carbon dioxide and produce trillions of methane molecules per discharge. There is plenty of atmospheric dust in the very dry Martian atmosphere, so this is not a bad explanation at all.
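For reference (my addition, not from the original post), the textbook methanogenesis reaction actually runs on hydrogen, which the organisms ultimately derive from water:

CO2 + 4H2 → CH4 + 2H2O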

This diagram shows possible ways that methane might make it into Mars’ atmosphere (sources) and disappear from the atmosphere (sinks). (Credit: NASA/JPL-Caltech/SAM-GSFC/Univ. of Michigan)

Still, the search for conclusive evidence of methane production and removal is one of the high frontiers in Martian research these days. New mechanisms are proposed every year, involving living or inorganic origins. There is even some speculation that the Curiosity rover’s own chemical lab was responsible for the rover’s methane ‘discovery’. Time will tell whether any of these ideas ultimately check out. There seem to be far more geological ways to create a bit of methane than biotic mechanisms, so the odds do not look good that the fleeting traces of methane we do see are produced by living organisms.

What does remain very exciting is that Mars is a chemically active place with more than inorganic molecules in play. In 2014, the Curiosity rover took samples of mudstone and tested them with its on-board spectrometer. The samples were rich in chlorine-bearing organic molecules, including chlorobenzene (C6H5Cl), dichloroethane (C2H4Cl2), dichloropropane (C3H6Cl2) and dichlorobutane (C4H8Cl2). Chlorobenzene is not a naturally occurring compound on Earth: it is used in the manufacturing process for pesticides, adhesives, paints and rubber. Dichloropropane is used as an industrial solvent to make paint strippers, varnishes and furniture-finish removers, and is classified as a carcinogen. There is even some speculation that the abundant perchlorate molecules (ClO4) in the Martian soil, when heated inside the spectrometer with the mudstone samples, created these new organics.

Mars is a frustratingly interesting place to study because, emotionally, it holds out hope for ultimately finding something exciting that takes us nearer to the idea that life once flourished there, or may still be present below its inaccessible surface. But all we have access to for now is its surface geology and atmosphere. From this we seem to encounter traces of exotic chemistry and perhaps our own contaminants at a handful of parts-per-billion. At these levels, the boring chemistry of Mars comes alive in the statistical noise of our measurements, and our dreams of Martian life are temporarily re-ignited.

Meanwhile, we will not rest until we have given Mars a better shot at revealing traces of its biosphere either ancient or contemporary!

Check back here on Thursday, March 2 for the next essay!

Near Death Experiences

A CBS News survey in 2014 found that 3 in 4 Americans believe in an afterlife. A similar survey in the UK in 2009 found that 1 in 2 believe in life after death, and 70% believe in the existence of a human soul.

So pervasive is this belief that, amazingly, more Britons believe in life after death than believe in God! This belief in life-after-death is so fundamental to how humans see the world that a 2013 Pew Poll found that 13% of American atheists also believed in an afterlife!

Luigi Schiavonetti’s 1808 engraving of a soul leaving a body. (Credit: National Gallery of Victoria, Melbourne)

Of course, many will argue that once you are gone, you are gone. But in that twilight moment in the minutes and seconds before death, people have been revived through heroic medical interventions, and some, but not all, declare that they have experienced ‘something’ absolutely remarkable.

Called Near Death Experiences (NDEs), entire shelves of books have been written on this subject in the decades since the ground-breaking work of Celia Green in 1968, popularized in 1975 by psychiatrist Raymond Moody. Extensive eye-witness accounts were recorded, classified and sorted into a small number of apparently archetypal scenarios: tunnels of pure light; out-of-body experiences; meeting loved ones; indescribable love. According to a Gallup Poll, about 3% of Americans claim to have had one. There were early attempts by Duncan MacDougall in 1901 to detect the exit of the soul from the body by carefully weighing the patient, but all failed, and the null results were immediately explained away by denying that the soul has any weight at all.

Scientists have largely refused to wade into this area of inquiry because, like many other human beliefs, there is enormous public resistance to scientists meddling in such cherished and highly personal ideas, shared by virtually all humans, even some atheists! In a classic case of what psychologists call confirmation bias, there is nothing that science can say about this matter that would be trusted unless it lines up exactly with what we have all made up our minds about, literally for millennia. That said, I myself must tread very carefully as I write this blog because, frankly, those of you reading it have also made up your minds about the subject, and I do not want to slap you in the face by disrespecting your fundamental core beliefs, which will always trump anything a scientist can tell you. Even my simple uttering of this disclaimer will be interpreted as me being a condescending scientist…or worse!

But I cannot help myself! I have been curious about this subject all my life, and any new insights I come across in my readings are like candy to my brain. So here goes!

NDEs are not a feature of any other organ than the brain because they involve visual perceptions, bodily sensations, and the knitting together of a story that is later told by the ‘traveler’. All of these are brain functions, so it is no wonder that those who study the clinical aspects of NDEs begin with what the brain is doing. Amazingly, you do not even have to be clinically ‘near death’ to experience them. All that is required is a deep conviction that you ARE dying to trigger them.

What could be a more compelling and simple idea than putting a dying person in a functional magnetic resonance imager (fMRI), or strapping an EEG net to their head, and literally watching what the brain is doing during one of these events? Well, it would be a heinous experiment and an unwelcome intrusion on a patient’s privacy, but nevertheless these things do happen accidentally. Cardiac patients, who are more likely to die suddenly and be revived, are often monitored for other reasons prior to their NDE, and there are many other indirect ways to snoop on the brain to see what happens, too.

We have already learned from fMRI studies that there is a specific brain region that gives you a sense of where your body is located in space. In an earlier blog I discussed how removing the stimulation of this normally very active region causes meditators to have the sensation of being ‘at one’ with the universe. This state can also be reproduced at will through chemical manipulation. The same region, when stimulated with an electrode or during temporal lobe epilepsy, also produces the aura sensation that your Self is no longer anchored to your body in space, the so-called Out-of-Body Experience (OBE). So an essential element of your body sense during an NDE can be traced to one specific brain region and to whether its activity is stimulated or depressed. This region wins both ways: when its electrical activity is gone, you have the ‘cosmic’ sensation of leaving your body, and when it is over-stimulated you have the OBE sensation. As we know, death is the ultimate event that lowers brain activity, or temporarily elevates it in other places as blood flow catastrophically changes. We all have the same brains, so the real question is: why doesn’t EVERYONE have an NDE?

It all seems to depend on how close you get to the precipice of never returning from the journey, and it is the closeness of your brain to this physiological edge that seems to trigger the events leading to this NDE experience. But we do not know for certain.

A 2011 Scientific American article summarized some of these elements announced by brain researchers Dean Mobbs and Caroline Watt.

OBE experiences can be artificially triggered by stimulating the right temporoparietal junction in the brain. Patients with Cotard or “walking corpse” syndrome believe they are dead; this condition is caused by trauma to the parietal cortex and the prefrontal cortex. Parkinson’s disease patients have reported visions of ghosts; this condition involves abnormal functioning of dopamine, a neurotransmitter that can sometimes evoke hallucinations. The common experience of reliving moments from one’s life can be tied to a neural circuit involving the locus coeruleus, which releases noradrenaline during stress and trauma. The locus coeruleus (shown below) is connected to brain regions that are involved with emotion and memory, such as the amygdala and hypothalamus. Finally, a number of medicinal and recreational drugs can mirror the euphoria often felt during NDEs, such as the anesthetic ketamine, which can trigger out-of-body experiences and hallucinations. These discussions of the neural basis for many of the separate elements of NDEs are now part of the official medical explanation, such as the one found in Tim Newman’s 2016 article in Medical News Today.

Norepinephrine system (Credit: Patricia Brown, University of Cincinnati)

Beware, however, of other articles, like the one in The Atlantic called ‘The Science of Near Death Experiences’. This 2015 popularization, written in the typical breezy style of newspaper reporters, also purported to summarize what we know about this condition. Sadly, the reporter spent most of the article interviewing those who experienced it and hardly any column space on actual scientific research. It was a typical ‘puff piece’ that offered nothing more than speculation and very self-serving, bias-affirming pseudoscience, alongside free plugs for many recent, lurid books and movies about first-person accounts.

The bottom line is that NDEs are by no means universal among people who think they are dying, and their incidence crosses many religious boundaries. They remain enormously powerful events that actually change the lives and even personalities of the survivors, so they are not merely will-o’-the-wisp hallucinations. We do know that their detailed descriptions follow specific cultural expectations for what the afterlife is like: a New Guinea tribesman will not describe the event the same way as a southern Evangelical.

We are only beginning to understand how our brains synthesize what we experience into the on-going story that is our personal reality, but we know from the evidence of numerous brain pathologies that this is a highly plastic process in which imagination and emotion blend with hard facts in a sometimes inseparable tapestry. Our senses are objectively known to be fallible in countless ways if left unattended, and how we interpret what we experience is as much a logical process as one of outright confabulation. Like many other events in our lives, NDEs are one experience that our brains work very hard to incorporate into a plausible story of our world. It is this story that, through millions of years of evolution, allows us to function as an integrated Self, avoid being injured or eaten, and propagate our genes to the next generation.

Isn’t it amazing that, against this backdrop of cognitive dissonance, sensory bias, emotional chaos, and evolutionary hard-wiring  we can create a workable story of who we are in the first place?

Check back here on Monday February 28 for my next blog!

Death By Vacuum

As an astrophysicist, this has GOT to be one of my favorite ‘fringe’ topics in physics. There’s a long preamble story behind it, though!

The discovery of the Higgs boson with a mass of 126 GeV, about 130 times more massive than a proton, was an exciting event back in 2012. By that time we had a reasonably complete Standard Model of how particles and fields operate in Nature to create everything from a uranium atom and a rainbow, to lighting the interior of our sun. A key ingredient was a brand new fundamental field in nature, and its associated particle called the Higgs boson. The Standard Model says that all fundamental particles in Nature have absolutely no mass, but they all interact with the Higgs field. Depending on how strongly they interact, like swimming through a container of molasses, they gain different amounts of mass. But the existence of this Higgs field has led to some deep concerns about our world that go well beyond how this particle creates the physical property we call mass.

In a nutshell, according to the Standard Model, all particles interact with the ever-present Higgs field, which permeates all space. For example, the W-particles interact very strongly with the Higgs field and gain the most mass, while photons do not interact with it at all and remain massless.

The Higgs particles come from the Higgs field, which as I said is present in every cubic centimeter of space in our universe. That’s why electrons in the farthest galaxy have the same mass as those here on Earth. But Higgs particles can also interact with each other. This produces a very interesting effect, like the tension in a stretched spring. A cubic centimeter of space anywhere in the universe is not perfectly empty, and actually has a potential energy ‘stress’ associated with it. This potential energy is related to just how massive the Higgs boson is. You can draw a curve like the one below that shows the vacuum energy and how it changes with the Higgs particle mass:
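For illustration (the figure referenced here shows its shape), the textbook ‘Mexican hat’ form of this self-interaction energy is:

V(φ) = -μ²φ² + λφ⁴

Minimizing it, dV/dφ = 0 gives φ² = μ²/2λ, so the lowest-energy state sits away from φ = 0; that displaced minimum is the non-zero Higgs field filling today’s vacuum.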

Now the Higgs mass actually changes as the universe expands and cools. When the universe was very hot, the curve looked like the one on the right, and the mass of the Higgs was zero at the bottom of the curve. As the universe expanded and cooled, this Higgs interaction curve turned into the one on the left, which shows that the mass of the Higgs is now X0, or 126 GeV. Note that the Higgs mass, represented by the red ball, used to be zero, but ‘rolled down’ into the lower-energy pit as the universe cooled.

The Higgs energy curve shows a very stable situation for ‘empty’ space at its lowest energy (green balls) because there is a big energy wall between where the field is today, and where it used to be (red ball). That means that if you pumped a bit of energy into empty space by colliding two particles there, it would not suddenly turn space into the roaring hot house of the Higgs field at the top of this curve.

We don’t actually know exactly what the Higgs curve looks like, but physicists have been able to make models of many alternative versions of the above curve to test out how stable the vacuum is. What they found is something very interesting.

The many different kinds of Higgs vacua can be defined by using two masses: the Higgs mass and the mass of the top quark. Mathematically, you can then vary the values of the Higgs boson and top quark masses and see what happens to the stability of the vacuum. The results are summarized in the plot below.

The big surprise is that, from the observed mass of the Higgs boson and our top quark shown in the small box, their values are consistent with our space being inside a very narrow zone of what is called meta-stability. We do not seem to be living in a universe where we can expect space to be perfectly stable. What does THAT mean? It does sound rather ominous that empty space can be unstable!

What it means is that, at least in principle, if you collided particles with enough energy that they literally blow-torched a small region of space, this could change the Higgs mass enough that the results could be catastrophic. Even though the collision region is smaller than an atom, once created, it could expand at the speed of light like an inflating bubble. The interior would be a region of space with new physics, and new masses for all of the fundamental particles and forces. The surface of this bubble would be a maelstrom of high-energy collisions leaking out of empty space! You wouldn’t see the wall of this bubble coming. The walls can contain a huge amount of energy, so you would be incinerated as the bubble wall ploughed through you.

Of course the world is not that simple. These are all calculations based on the Standard Model, which may be incomplete. Also, we know that cosmic rays collide with Earth’s atmosphere at energies far beyond anything we will ever achieve…and we are still here.

So sit back and relax and try not to worry too much about Death By Vacuum.

Then again…

 

Return here on Wednesday, February 22 for my next blog!

The Next Sunspot Cycle

Forecasters are already starting to make predictions for what might be in store as our sun winds down its current sunspot cycle (Number 24) in a few years. Are we in for a very intense cycle of solar activity, or the beginning of a century-long absence of sunspots and a rise in colder climates?

Figure showing the sunspot counts for the past few cycles. (Credit: www.solen.info)

Ever since Samuel Schwabe discovered the 11-year ebb and flow of sunspots on the sun in 1843, predicting when the next sunspot cycle will appear, and how strong it will be, has been a cottage industry among scientists and non-scientists alike. For solar physicists, the sunspot cycle is a major indicator of how the sun’s magnetic field is generated, and the evolution of various patterns of plasma circulation near the solar surface and interior. Getting these forecasts bang-on would be proof that we indeed have a ‘deep’ understanding of how the sun works that is a major step beyond just knowing it is a massive sphere of plasma heated by thermonuclear fusion in its core.

So how are we doing?

For over a century, scientists have scrutinized the shapes of dozens of individual sunspot cycles to glean features that could be used for predicting the circumstances of the next one. Basically, we know that 11 years is an average: some cycles are as short as 9 years and others as long as 14. The number of sunspots during the peak year, called sunspot maximum, can vary from as few as 50 to as many as 260. The speed with which sunspot numbers rise to a maximum can be as slow as 80 months for weaker sunspot cycles, and as fast as 40 months for the stronger cycles. All of these features, and many other statistical rules-of-thumb, lead to predictive schemes of one kind or another, but they generally fail to produce accurate and detailed forecasts of the ‘next’ sunspot cycle.
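As a toy example of such a rule of thumb (purely illustrative, linearly interpolating the extremes quoted above rather than any published fit):

```python
def peak_from_rise_time(months):
    """Toy 'Waldmeier-style' rule: a 40-month rise -> ~260 spots at maximum,
    an 80-month rise -> ~50. Linear interpolation, illustration only."""
    return 260 + (months - 40) * (50 - 260) / (80 - 40)

for m in (40, 60, 80):
    print(f"{m}-month rise -> ~{peak_from_rise_time(m):.0f} spots at maximum")
```

Schemes like this capture a real statistical tendency, but as the next paragraph shows, they have a poor track record as precision forecasts.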

Prior to the current sunspot cycle (Number 24), which spans the years 2008-2019, NASA astronomer Dean Pesnell collected 105 forecasts for Cycle 24. For something as simple as how many sunspots would be present during the peak year, the predictions varied from as few as 40 to as many as 175, with an average of 106 +/- 31. The actual number at the 2014 peak was 116. Most of the predictions were based on little more than extrapolating statistical patterns in older data. What we really want are forecasts based upon the actual physics of sunspot formation, not statistics. The most promising physics-based models we have today follow magnetic processes on and below the surface of the sun, and are called Flux Transport Dynamo models.

Solar polar magnetic field trends (Credit: Wilcox Solar Observatory)

The sun’s magnetic field is much more fluid than the magnetic field of a toy bar magnet. Thanks to the revolutionary work by helioseismologists using the SOHO spacecraft and the ground-based GONG program, we can now see below the turbulent surface of the sun. There are vast rivers of plasma wider than a dozen Earths, which wrap around the sun from east to west. There is also a flow pattern that runs north and south from the equator to each pole. This meridional current is caused by giant convection cells below the solar surface and acts like a conveyor belt for the surface magnetic fields in each hemisphere. The sun’s north and south magnetic fields can be thought of as waves of magnetism that flow at about 60 feet/second from the equator at sunspot maximum to the poles at sunspot minimum, and back again to the equator at the base of the convection cell. At sunspot minimum they are equal and opposite in intensity at the poles, but at sunspot maximum they vanish at the poles and combine and cancel at the sun’s equator. The difference in the polar waves during sunspot minimum seems to predict how strong the next sunspot maximum will be about 6 years later, as the current returns the field to the equator at the peak of the next cycle. V.V. Zharkova at Northumbria University in the UK uses this to predict that Cycle 25 might continue the declining trend of polar field decrease seen in the last three sunspot cycles, and be even weaker than Cycle 24, with far fewer than 100 spots. However, a recent paper by NASA solar physicists David Hathaway and Lisa Upton re-assessed the trends in the polar fields and predicts that the average strength of the polar fields near the end of Cycle 24 will be similar to that measured near the end of Cycle 23, indicating that Cycle 25 will be similar in strength to the current cycle.

But some studies such as those by Matthew Penn and William Livingston at the National Solar Observatory seem to suggest that  sunspot magnetic field strengths have been declining since about 2000 and are already close to the minimum needed to sustain sunspots on the solar surface.  By Cycle 25 or 26, magnetic fields may be too weak to punch through the solar surface and form recognizable sunspots at all, spelling the end of the sunspot cycle phenomenon, and the start of another Maunder Minimum cooling period perhaps lasting until 2100. A quick GOOGLE search will turn up a variety of pages claiming that a new ‘Maunder Minimum’ and mini-Ice Age are just around the corner! An interesting on-the-spot assessment of these disturbing predictions was offered back in 2011 by NASA solar physicist C. Alex Young, concluding from the published evidence that these conclusions were probably ‘Much Ado about Nothing’.

What can we bank on?

The weight of history is a compelling guide, which teaches us that tomorrow will be very much like yesterday. Statistically speaking, the current Cycle 24 is scheduled to draw to a close about 11 years after the previous sunspot minimum in January 2008, which means sometime in 2019. You can eyeball the figure at the top of this blog and see that that is about right. We entered the Cycle 24 sunspot minimum period in 2016, because in February and June we already had two spot-free days. As the number of spot-free days continues to increase in 2017-2018, we will start seeing the new sunspots of Cycle 25 appear sometime in late 2019. Sunspot maximum is likely to occur in 2024, with most forecasts predicting about half as many sunspots as in Cycle 24.

None of the current forecasts suggest Cycle 25 will be entirely absent. A few even hold out some hope that a sunspot maximum equal to or greater than that of Cycle 24, which was near 140, is possible, while others place the peak closer to 60 in 2025.

It seems a pretty sure bet that there will be yet another sunspot cycle to follow the current one. If you are an aurora watcher, 2022-2027 would be the best years to go hunting for them. If you are a satellite operator or astronaut, this next cycle may be even less hostile than Cycle 24 was, or at least no worse!

In any event, solar cycle prediction will be a rising challenge in the next few years as scientists pursue the Holy Grail of creating a reliable theory of why the sun even has such cycles in the first place!

Check back here on Friday, February 17 for my next blog!

The War on Cancer

In an earlier essay, I described how cognitive dissonance can wreak havoc with our perception of the world, especially in the case of politics. Cognitive dissonance is a psychological state in which you hold two logically incompatible ideas at the same time. For example, some scientists are avowed Creationists, and some people who want to help the poor fervently vote for enormous tax breaks for the wealthy. Psychologists say that this dissonance causes internal stress and anxiety until you construct a ‘story’ that offers an acceptable way to reconcile the two extremes.

Depending on how you voted in 2016, you will find my earlier discussion either brilliantly insightful or insufferably condescending. But here is a topic that I think we can all agree suffers dramatically from this same mental affliction: cancer research.
Here are the facts for 2016:

People contracting cancer: 1,685,210
People dying from cancer: 595,650
Cancers found in people older than 50: 85%
State with the highest incidence rate: Kentucky
State with the lowest incidence rate: Arizona
Annual medical costs for cancer treatment: $75 billion

We all agree cancer is a scary disease, and for many the mere word is terrifying. Far more people die from cancer every year than from nearly all other non-disease causes of death combined. You have a one-in-two chance of getting cancer in your lifetime and a one-in-four chance of dying from it. Your risk of dying in an airplane crash, earthquake, or terrorist attack is insignificant compared to your risk of dying from cancer.

Why is cognitive dissonance involved in cancer research? Because we all collectively understand the facts of cancer, but then we turn around and vote a half-hearted research budget to combat it, telling ourselves that we are doing everything we can to win the war. Let’s take a look at what we are collectively doing about cancer.

The decline in NCI funding for research, 2003-2014. (Credit: ASCO Connection)

Funding for cancer research (NCI): $5.21 billion for FY 2017, with a 25% loss of buying power since 2003. So let’s see… the annual cost of cancer treatment is $75 billion, and we invest just over $5 billion, about seven cents for every dollar spent on treatment, to find cures. Then we have the DoD hiding $125 billion in waste at the same time it wants to expand its budget by $2.2 billion to $524 billion in FY17.

Why is it that the case made by the DoD to increase its budget, or President Trump’s call to embark on a ‘new’ trillion-dollar arms race, is so much more compelling than the very obvious need to cure a major threat to the lifespans of most American voters? The polling statistics are also rather troubling.

In a new national public opinion survey commissioned by Research!America, an overwhelming majority of Americans (85%) say it is important for candidates running for national office to assign a high priority to increasing funding for medical research. The U.S. spends about five cents of each health dollar on research to prevent, cure, and treat disease and disability, yet only 56% say that is not enough. Vice President Joe Biden’s 2016 Cancer Moonshot initiative to defeat cancer earns support for a tax increase to fund cancer research among only half of the respondents (50%). Only two-thirds of Democrats (67%), and far fewer Republicans (38%) and Independents (39%), support a tax increase to wage this war. Of those who do favor a tax increase, the great majority (88%) say they are willing to pay up to, or more than, $50 per year in taxes.

The overall NIH FY17 budget increase of $850 million over FY16 will support an increase of 600 research projects above FY16, or roughly $1.4 million per project. From this you can roughly deduce that the added $680 million for the Cancer Moonshot in FY17 will support about 480 additional cancer-related grants. But although we do not want to look a gift horse in the mouth, the NCI budgets since 2003 have never kept pace with inflation. By 2014, NCI purchasing power for cancer research had fallen behind by about $1.5 billion, or 30%, from where it was in 2003. Have another look at the funding graph above: the top line shows the funded amount and the bottom line shows the inflation-adjusted purchasing power from 2003 to 2014.
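To make that deduction explicit, here is a minimal sketch of the scaling. The dollar figures and project count come from the paragraph above; the flat cost-per-grant assumption is mine, since real grant sizes vary widely.

    # Rough scaling of Moonshot grants from the FY17 NIH increase
    # (figures from the text; assumes all grants cost about the same).
    nih_increase = 850e6      # added NIH dollars, FY17 over FY16
    added_projects = 600      # research projects supported by that increase
    moonshot_funds = 680e6    # Cancer Moonshot addition for FY17

    cost_per_grant = nih_increase / added_projects     # ~$1.4 million each
    moonshot_grants = moonshot_funds / cost_per_grant  # ~480 grants

    print("Cost per grant: $%.1f million" % (cost_per_grant / 1e6))
    print("Implied Moonshot grants: %.0f" % moonshot_grants)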

In the 45 years since Mr. Nixon signed the National Cancer Act of 1971, which launched the previous War on Cancer, NCI has spent more than $100 billion on cancer research. This, by the way, is equal to the cost of NASA’s International Space Station. Since 1946, the American Cancer Society has spent more than $4.5 billion to find cancer cures, and forty-seven ACS-funded researchers have been awarded the Nobel Prize. A relevant question is: Will the $680 million increase proposed by the Cancer Moonshot for FY17, and hopefully similar amounts after that, be enough to tip the scales toward cures in the way that VP Joe Biden has hoped?

The FY17 $6.3 billion 21st Century Cures bill was approved by the Senate in an overwhelming 94-5 vote and will doubtless be signed into law by President Barack Obama. The 10-year ‘Cures’ funding bill made its way through Congress largely because it is packed with enough money for pet projects, including a batch of Medicare and mental health reforms, to keep disparate lawmakers on board.

Although other elements of President Obama’s 21st Century Cures program will receive automatic refunding in future years, Cancer Moonshot research funding is not guaranteed; it will have to be appropriated each year. Even worse, it will be paid for in part by raiding Obamacare’s Prevention and Public Health Fund, which pays for anti-smoking campaigns and other preventive health efforts. So Congress will give us a modest increase in the cancer research budget, about 10%, but will not promise sustained support over the long haul. Once again, researchers will be placed on a year-by-year leash for their funding while carrying out complex and time-consuming research. Every grant PI will have to spend time each year arguing for the refunding of their research rather than staying focused on cures. Even scientists at NASA can usually count on three-year commitments for their funding!

What can you do? The odds say you should expect to contract cancer in your lifetime, and they also say that you may not have a good long-term outcome. You need to accept this and adjust your lifestyle and eating habits as a preventive measure. Then you need to decide for yourself whether you are happy with the funding trend for cancer research. Why do we settle for $5 billion each year to fight this war when $10 billion would be far better, especially if the funding were more stable for long-term research programs? Call your Congressperson to make this case. It may actually save your life!

Check back here on Monday, February 13 for the next installment!

By the way, check out the 2015 PBS special by Ken Burns and Barak Goodman, Cancer: The Emperor of All Maladies, about the ins and outs of cancer research. It was narrated by Edward Herrmann while he was himself suffering from brain cancer; he died soon after filming was completed. The program will utterly change your perspective, and get you mad as hell!

My other blogs on cancer research can be found at the Huffington Post:

Our Shamefully Wimpy War Against Cancer

Our Pathetic War Against Cancer: Part III
