# Boltzmann Brains

Back in the 1800s, Ludwig Boltzmann (1844-1906) developed the ideas of entropy and thermodynamics, which have been the mainstay of chemistry and physics ever since. Long before atoms were definitively identified, Boltzmann used them in designing his theory of statistical mechanics, which related entropy to the number of possible statistical states these particles could occupy. His famous formula

S = k log W

is even inscribed on his tombstone! His frustration with the anti-atomists who attacked his crowning achievement, statistical mechanics, drove him into profound despair, and he committed suicide in 1906.

If you flip a coin 4 times, it is unlikely that all 4 flips will come up all-heads or all-tails. It is far more likely that you will get a mixture of heads and tails. This is a result of there being a total of 2^4 = 16 possible outcomes or ‘states’ for this system, and the all-heads and all-tails states each occur only 1/16 of the time. Most of the states you will produce (14 of the 16) have a mixture of heads and tails. Now replace the coin flips by the movement of a set of particles in three dimensions.
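A few lines of Python make this counting concrete by simply enumerating all 16 states (a purely illustrative sketch):

```python
from itertools import product

# Enumerate all 2^4 = 16 possible outcomes of four coin flips.
outcomes = list(product("HT", repeat=4))

all_heads = outcomes.count(("H", "H", "H", "H"))
all_tails = outcomes.count(("T", "T", "T", "T"))
mixed = len(outcomes) - all_heads - all_tails

print(len(outcomes), all_heads, all_tails, mixed)  # 16 1 1 14
```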

Boltzmann’s statistical mechanics related the number of possible states for N particles moving in 3-dimensional space to the entropy of the system. It is more difficult to calculate the number of states than for the coin-flip example above, but it can be done using his mathematics, and the result is the ‘W’ in his equation S = k log W. The bottom line is that the more states available to a collection of particles (for example, the atoms of a gas), the higher its entropy S. How does a gas access more states? One way is to turn up its temperature so that the particles move faster. This means that as you increase the temperature of a gas, its entropy increases in a measurable way.
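Here is the formula itself as a small Python sketch. The logarithm in Boltzmann's formula is the natural log, and k is Boltzmann's constant; the two values of W below are just illustrative choices:

```python
import math

k = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def boltzmann_entropy(W):
    """S = k log W, where W counts the accessible microstates."""
    return k * math.log(W)

# A system with more accessible states has higher entropy:
S_small = boltzmann_entropy(2**4)    # the 16-state coin example
S_large = boltzmann_entropy(2**100)  # far more particles and states
assert S_large > S_small
```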

Cosmologically, as our universe expands and cools, its entropy is actually increasing steadily because more and more space is available for the particles to occupy, even as they move more slowly as the temperature declines. The Big Bang event itself, even at its unimaginably high temperature, was actually a state of very low entropy because, even though particles were moving near the speed of light, there was so little space for matter to occupy!

For random particles in a gas colliding like billiard balls, with no other organizing forces acting on them (the picture called the kinetic theory of gases), we can imagine a collection of 100 red particles clustered in one corner of a box and 1000 other blue particles located elsewhere in the box. If we were to stumble on a box of 1100 particles that looked like this, we would immediately say ‘how odd’, because we sense that as the particles jostled around, the 100 red particles would quickly get uniformly spread out inside the box. This is an expression of there being far more available states in which the red balls are uniformly mixed than states in which they are clustered together. It is also a statement that the clustered arrangement is a lower-entropy version of the system, and the uniformly-mixed arrangement is a higher-entropy version. So we would expect the system to evolve from lower to higher entropy as the red particles diffuse through the box. This is the Second Law of Thermodynamics.
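To get a feel for how lopsided the state counting is, assume (as a toy model of my own devising) that each particle is equally likely to be found anywhere in the box. The chance that all 100 red particles happen to sit in one particular corner octant, one-eighth of the volume, is then:

```python
from math import log10

# Each of the 100 red particles independently lands in the chosen
# corner octant with probability 1/8, so the joint probability is:
p_clustered = (1 / 8) ** 100
print(f"p is roughly 10^{log10(p_clustered):.0f}")  # roughly 10^-90
```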

## Boltzmann Brains

The problem is that, given enough time, even very rare states have a non-zero probability of actually occurring. With enough time and enough jostling, we could randomly find the red balls once again clustered together. It may take billions of years, but nothing in statistical principles stands in the way of this happening. Now let’s suppose that instead of just a collection of red balls, we have a system of particles large enough that some rare states resemble any physical object you can imagine: a bacterium, a cell phone, a car…even a human brain!

A human brain is a collection of particles organized in a specific way to function and to store memories. In a sufficiently large and old universe, there is no obvious reason why such a brain could not just randomly assemble itself like the 100 red particles in the box above. It would be sentient, have memories and even senses. None of its memories would be of actual events it experienced; they would simply be artificial reconstructions created by just the right neural pathways being randomly assembled. It would remember an entire lifetime to date without having actually lived through any of those events in space and time.

When you calculate the probability for such a brain to evolve naturally in a low-entropy universe like ours, rather than just randomly assembling itself, you run into a problem. In Boltzmann’s cosmology, our vast, seemingly highly organized low-entropy universe is embedded in a much larger universe where the entropy is much higher. It is far less likely for an entire organized universe to exist in a low-entropy state conducive to organic evolution than for a single sentient brain to simply assemble itself from random collisions. So any universe destined to last for eternity should come to be populated overwhelmingly by incorporeal brains rather than actual evolved sentient creatures! This is the Paradox of the Boltzmann Brain.

Even though Creationists like to invoke the Second Law to deny evolution as a process of random collisions, the consequence of this random-assembly picture of structure in the universe is that we are most likely all Boltzmann Brains, not assembled by evolution at all. It is, however, of no comfort to those who believe in God, because God was not involved in randomly assembling these brains, complete with their own memories!

So how do we avoid filling our universe with the abomination of these incorporeal Boltzmann Brains?

First of all, we do not live in Boltzmann’s universe. Direct observations show that, instead of inhabiting an eternally static system existing in a finite space, we live in an expanding universe of declining density and steadily increasing entropy.

Secondly, it isn’t just random collisions that dictate the assembly of matter (a common idea used by Creationists to dismantle evolution) but a collection of specific underlying forces and fundamental particles that do not come together randomly but in a process that is microscopically determined by specific laws and patterns. The creation of certain simple structures leads through chemical processes to the inexorable creation of others. We have long-range forces like gravity and electromagnetism that non-randomly organize matter over many different scales in space and time.

Third, we do not live in a universe dominated by random statistical processes, but one in which we find regularity in composition and physical law spanning scales from the microscopic to the cosmic, all the way out to the edges of the visible universe. When two particles combine, they can stick together through chemical forces and grow in numbers from either electromagnetic or gravitational forces attracting other particles to the growing cluster, called a nucleation site.

Fourth, quantum processes and gravitational processes dictate that all existing particles will eventually decay or be consumed in black holes, which will evaporate to destroy all but the most elementary particles such as electrons, neutrinos and photons; none of which can be assembled into brains and neurons.

The result is that Boltzmann Brains could not exist in our universe, and will not exist even in the eternal future as the cosmos becomes more rarefied and reaches its final and absolute thermodynamic equilibrium.

The accelerated expansion of the universe now in progress will also ensure that eventually all complex collections of matter are shattered into individual fundamental particles, each adrift in its own expanding and utterly empty universe!

Have a nice day!

Check back here on Tuesday, May 9 for my next topic!

# Is Infinity Real?

In the daytime, you are surrounded by trees, buildings and the all-too-familiar accoutrements of Nature, which evolution designed us to appreciate and find familiar. But at night, we see an unimaginably different view: the dark, starry night sky, with no sense of perspective or depth. It is easy to understand how the Ancients thought it a celestial ceiling with pinpoint lights arrayed in noteworthy patterns. Many millennia of campfires were spent trying to figure it out.

We are stuck in the middle ground between two vast scales that stretch before us and within us. Both, we are told, lead to the infinitely-large and the infinitely-small. But is this really true?

Astronomically, we can detect objects that emerged from the Big Bang nearly 14 billion years ago, which means their light-travel distance from us is 14 billion light years, or 13,000,000,000,000,000,000,000,000,000 centimeters. This is, admittedly, a big number, but it is not infinitely large.
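That figure is just a unit conversion, easy to check in a few lines of Python (the constants are rounded, so treat the result as approximate):

```python
# Convert 14 billion light years to centimeters (rounded constants).
c = 2.998e10                 # speed of light, cm per second
year = 3.156e7               # seconds in one year
light_year_cm = c * year     # about 9.46e17 cm per light year
distance_cm = 14e9 * light_year_cm
print(f"{distance_cm:.1e} cm")  # about 1.3e+28 cm
```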

In the microcosm, we have probed the structure of electrons down to a scale of 0.000000000000000000001 centimeters and found no sign of any internal structure yet. So again, there is no sign that we have reached anything like an infinitely-small limit to Nature either.

When it comes right down to it, the only evidence we have for the universe being infinitely large (or other aspects of it being infinitely small) is in the mathematics and geometry we use to describe it. Given that infinity is larger than any number you can count to, it is pretty obvious that even the scale of our visible universe, 13,000,000,000,000,000,000,000,000,000 centimeters, falls woefully short of being even a relatively stupendous number by comparison.

The idea of infinity is as old as the Ancient Greeks, but even Aristotle (384 – 322 BCE) would only allow the integers (1, 2, 3, …) to be potentially infinite, not actually infinite, in quantity. Since then, infinity and its cousin eternity have become part of our literary and religious vernacular whenever we mention something really, really, really… big or old! Through literary and philosophical repetition, we have become comfortable with this idea in a way that is simply not justifiable.

Mathematics can define infinity very precisely, and the mathematician Georg Cantor (1845 – 1918) was even able to classify ‘transfinite numbers’ as representing either countable or uncountable infinities. To the extent that mathematics is also used in physics, we inherit infinity as the limit to many of our calculations and models of the physical world. But the problem is that our world is only able to offer us the concept of something being very, very, very… big, like the example of the visible universe above.

If you take a sphere a foot across and place an ant on it, the ant can crawl around and, with a bit of surveying, tell you that the shape is a sphere with a finite, closed surface. But now take this sphere and blow it up so that it is 1 million miles across. The ant now looks across its surface and sees something that looks like an infinite plane. Its geometry is as flat as a sheet of paper on a table.
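The ant’s predicament can be made quantitative. For a small triangle surveyed on a sphere, the amount by which its three angles add up to more than 180 degrees is the triangle’s area divided by the square of the sphere’s radius. A rough Python sketch (the 1 cm survey triangle and the two radii are my own illustrative numbers):

```python
import math

def angle_excess_deg(side_cm, radius_cm):
    """Angle-sum excess over 180 degrees for a small equilateral
    triangle of the given side surveyed on a sphere:
    excess (in radians) = area / radius**2."""
    area = math.sqrt(3) / 4 * side_cm**2
    return math.degrees(area / radius_cm**2)

print(angle_excess_deg(1, 15.0))  # 1-foot sphere: a measurable excess
print(angle_excess_deg(1, 8e10))  # million-mile sphere: essentially zero
```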

In astronomy we have the same problem.

We make calculations and measurements within the 28 billion light years that span our visible universe and conclude that the geometry of the universe is flat, so geometrically it seems infinite. But the only thing the measurements can actually verify is that the universe is very, very, very large and LOOKS as though its geometry is that of an infinite, flat, 3-dimensional space. Modern Big Bang cosmology also says that what we see within our visible universe is only a portion of a larger thing that emerged from the Big Bang and ‘inflated’ to enormous size in the first instants. If you scale our visible universe, out to 14 billion light years, down to the size of the period at the end of this sentence, that larger thing predicted by inflation may be millions of miles across at the same scale. That is very, very big, but again it is not infinite!

Going the other way, the current best theoretical ideas about the structure of the physical world seem to suggest that at some point near the so-called Planck scale of 0.0000000000000000000000000000000015 centimeters we literally ‘run out of space’. This mathematical conclusion seems to be the result of combining the two great pillars of all physical science, quantum mechanics and general relativity, into a single ‘unified’ theory. The mathematics suggests that, rather than our being able to probe the nature of matter and space at still-smaller scales, the entire edifice of energy, space, time and matter undergoes a dramatic and final change into something vastly different from anything we have ever experienced: elements that are beyond space and time themselves. These ideas are captured in theories such as Loop Quantum Gravity and String Theory, but frankly we are still at a very early stage in understanding what this all means. Even more challenging, we have no obvious way to make any measurements that would directly test whether physical reality simply comes to an end at these scales or not.
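The Planck scale itself comes from combining the fundamental constants of quantum mechanics and gravity. A quick sketch with rounded values of the constants:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, joule-seconds
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

# Planck length: l_P = sqrt(hbar * G / c^3), converted from meters to cm
l_planck_cm = math.sqrt(hbar * G / c**3) * 100
print(f"{l_planck_cm:.1e} cm")  # about 1.6e-33 cm
```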

So on the cosmological scene, we can convincingly say we have no evidence that anything as large as ‘infinity’ exists, because it is literally beyond our 14 billion light-year horizon of detection. The universe is simply not old enough for us to sample such an imponderably large realm. Advances in Big Bang cosmology can only propose that we live in an incomprehensibly alien ‘multiverse’, or that we inhabit one minuscule dot in a vastly larger cosmos, which our equations extrapolate to infinity. Meanwhile, the world of the quantum hints that no infinitely-small structures exist in the universe; not even what we like to call space itself can be indefinitely sub-divided below the Planck scale.

In the end, it seems that infinity is a purely mathematical ideal: it can be classified by Cantor’s transfinite numbers, manipulated symbolically, and thought about philosophically, but it is never actually found among the objects that inhabit our physical world.

Now let’s go back to the issue of space after the relativity revolution and try to make sense of where we stand now!

Check back here on Monday, December 19 for the next installment!