This image was taken from the International Space Station and displays the most important feature of the sun for life on Earth: Its light and heat!
The Sun is a spectral type G2 V dwarf star that emits 3.8 x 10^33 ergs/sec, or 3.8 x 10^26 watts, of electromagnetic power from gamma-ray to radio wavelengths, with most of the energy emitted in the visible part of the spectrum between 400 and 800 nanometers. This is illustrated by the spectrum provided by Nick84 [CC BY-SA 3.0], via Wikimedia Commons.
This is the spectrum seen at Earth’s surface, where molecules of water vapor and carbon dioxide absorb some of the radiation, forming the various dips in solar intensity. The common measure of solar brightness is called irradiance. It is the amount of energy (watts) passing through a 1-square-meter surface facing the Sun, measured over a 1-nanometer bandwidth. Earth is located 150 million km from the Sun, so if you surround the Sun with a spherical surface of this radius, the surface area is A = 4 pi D^2 = 2.8 x 10^23 m^2. If we divide the solar luminosity by A we get 3.8 x 10^26 watts / 2.8 x 10^23 m^2 = 1,344 watts/m^2 at the top of Earth’s atmosphere. Most of this is emitted between 300 and 900 nm, a 600-nm-wide window, so the average irradiance over this spectral window is 1,344/600 nm = 2.2 watts/m^2/nm, which more or less matches the vertical axis of the above plot.
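If you want to check these numbers yourself, here is a minimal Python sketch of the same arithmetic, using the luminosity and Earth-Sun distance quoted above (the variable names are just for illustration):

```python
import math

L_sun = 3.8e26            # solar luminosity in watts (from the text)
D = 1.5e11                # Earth-Sun distance in meters (150 million km)

area = 4 * math.pi * D**2             # sphere area at Earth's distance, ~2.8e23 m^2
irradiance = L_sun / area             # ~1,340 W/m^2 at the top of the atmosphere
avg_per_nm = irradiance / 600.0       # spread over the ~600 nm window (300-900 nm)

print(f"Sphere area    = {area:.2e} m^2")
print(f"Irradiance     = {irradiance:.0f} W/m^2")
print(f"Average per nm = {avg_per_nm:.1f} W/m^2/nm")
```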
In one hour, or 3600 seconds, the sun produces 3.8 x 10^26 joules/sec x 3600 sec = 1.4 x 10^30 joules of energy, or 3.8 x 10^23 kilowatt-hours.
Since E = mc^2 and c = 3 x 10^8 m/s, in one hour the sun loses (1.4 x 10^30 joules)/(9 x 10^16 m^2/s^2) = 1.5 x 10^13 kilograms, or 15 billion metric tons, of mass. It’s been doing this for about 4.5 billion years! So its mass loss over this time (3.9 x 10^13 hours) is about 5.9 x 10^26 kg. But the sun’s mass is 2 x 10^30 kg, so it has only lost about 0.0003, or 0.03%, of its mass so far.
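The same E = mc^2 bookkeeping in a short sketch, using the round numbers quoted in the text:

```python
L_sun = 3.8e26                      # watts, i.e. joules per second (from the text)
c = 3.0e8                           # speed of light in m/s

E_per_hour = L_sun * 3600.0                  # ~1.4e30 joules radiated each hour
m_per_hour = E_per_hour / c**2               # ~1.5e13 kg of mass lost each hour

hours_so_far = 4.5e9 * 365.25 * 24           # ~3.9e13 hours in 4.5 billion years
m_total = m_per_hour * hours_so_far          # ~6e26 kg lost over the Sun's life so far

print(f"Mass lost per hour  : {m_per_hour:.1e} kg")
print(f"Mass lost in 4.5 Gyr: {m_total:.1e} kg "
      f"({m_total / 2.0e30:.2%} of the Sun's 2e30 kg)")
```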
In my book ‘Interstellar Travel: An Astronomer’s Guide’ I discuss some of the modern and planned developments in fast space travel. For the most part, it’s all about speed!
Using the kinds of chemical-based propulsion systems we have today, and taking advantage of a ‘gravitational slingshot’ from Jupiter, we could probably get up to 150,000 miles per hour. The Galileo probe reached about 106,000 miles per hour (roughly 47 km/sec) during its atmospheric entry at Jupiter, one of the fastest speeds ever achieved by an artificial body. Alpha Centauri is about 4.3 light years from Earth, or 40 trillion kilometers. At 150,000 miles per hour (about 67 km/sec), it would take roughly 19,000 years to get there, not including slowing down to enter the system.
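Since the travel-time estimate is just distance divided by speed, a tiny sketch makes it easy to try other cruise speeds (the 4.3 light-year distance is from the text; the speeds in the examples are assumptions for comparison):

```python
LY_KM = 9.46e12                    # kilometers in one light year
SEC_PER_YEAR = 3.156e7

def coasting_years(speed_km_s, distance_ly=4.3):
    """Travel time for a simple coasting trip (no acceleration or braking)."""
    return distance_ly * LY_KM / speed_km_s / SEC_PER_YEAR

print(f"{coasting_years(67):,.0f} years")   # ~19,000 yr at 150,000 mph (67 km/s)
print(f"{coasting_years(18):,.0f} years")   # ~72,000 yr at 18 km/s, typical of today's outbound probes
```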
Rocket designers have been studying ion propulsion since the 1950s, and mention of the technology often turns up in works of science fiction. Ion propulsion was featured in a September 1968 episode of Star Trek called “Spock’s Brain,” in which invaders steal Spock’s brain and flee in an ion-powered spacecraft. The same technology is used intermittently for attitude control aboard 11 Hughes-built communications satellites in geosynchronous orbit 22,300 miles (35,885 kilometers) above Earth. The above photo shows the NASA NEXT ion engine firing at peak power during 2009 testing at NASA’s Glenn Research Center (Image: NASA).
Deep Space 1, launched in 1998, was the first spacecraft to use ions as a primary means of propulsion. Instead of the fiery thrust produced by typical rockets, an ion engine emits only an eerie blue glow as electrically charged atoms of xenon are pushed out of the engine. Xenon is the same gas found in photo flash bulbs and lighthouse search lamps. In the engine, each xenon atom is stripped of an electron, leaving an electrically charged particle called an ion. Those ions are then jolted by electricity produced by the probe’s solar panels and accelerated to high speeds as they shoot out of the engine. That produces thrust for the probe. The ions travel out into space at 68,000 miles (109,430 kilometers) per hour. But Deep Space 1 doesn’t move that fast in the other direction because it is much heavier than the ions. Its cruising speed is closer to 33,000 miles (53,100 kilometers) per hour.
The thrust itself is amazingly light, about the force felt by a sheet of paper resting on the palm of your hand. It takes four days to go from zero to 60 miles per hour. But once ion propulsion gets going, nothing compares to it. Over the long haul, it can deliver 10 times as much thrust per pound of fuel as more traditional rockets. Each day the thrust adds 15 to 20 miles (25 to 32 kilometers) per hour to the spacecraft’s speed. By the end of Deep Space 1’s mission, the ion engine will have changed its speed by 6,800 miles (11,000 kilometers) per hour.
Using current ion drive technology with 10 times DS-1’s acceleration of 25 km/hr per day (that is, 250 km/hr per day), you could reach half the speed of light (540 million km/hr) in about 5,900 years. For our journey to Alpha Centauri, you would accelerate for half the trip, then turn around and decelerate for the second half. A bit of math shows what you get for travel time:
Distance to the turnaround point = 2.1 light years, or 20 trillion km. Ten times the DS-1 acceleration is 250 km/hr per day, or 8 x 10^-7 km/sec^2. At this acceleration, distance = 1/2 x acceleration x time^2, so T = 228 years. You then turn the ship around and decelerate for another 228 years, for a total trip time of 456 years! What is interesting is that this same ion rocket technology will let us travel to Mars in 270 days, so this isn’t really pushing the technology envelope very hard! If we could get to Mars in 30 days, then the same technology would get us to Alpha Centauri in about 50 years, or a single human lifetime!
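Here is a minimal sketch of that flip-and-brake trip-time calculation, assuming constant (non-relativistic) acceleration and the 250 km/hr-per-day figure quoted above:

```python
import math

LY_KM = 9.46e12                     # kilometers per light year
SEC_PER_YEAR = 3.156e7

def flip_and_brake_years(accel_km_hr_per_day, distance_ly):
    """Non-relativistic trip time: accelerate to the midpoint, then decelerate."""
    a = accel_km_hr_per_day / 3600.0 / 86400.0   # convert to km/s^2
    half_d = 0.5 * distance_ly * LY_KM           # distance to the turnaround, in km
    t_half = math.sqrt(2.0 * half_d / a)         # seconds to reach the midpoint
    return 2.0 * t_half / SEC_PER_YEAR           # total trip time in years

print(f"{flip_and_brake_years(250, 4.3):.0f} years")   # ~450 yr, close to the 456 above
```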
If you wanted to make the trip to Alpha Centauri in, say, 30 years, you would need to reach the half-way distance of 2.1 light years in 15 years. An acceleration rate, A, of about 15 km/sec per day would be needed. This is roughly 2,000 times the acceleration of Deep Space 1. At this pace, you would be traveling as fast as the solar wind (500 km/sec) in about one month, and would pass the orbit of Pluto in about 100 days traveling at a speed of roughly 1,500 km/sec.
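Running the same distance formula backwards gives the acceleration needed for a chosen trip time; this sketch assumes the 2.1 light-year turnaround distance and 15-year half-trip used above:

```python
LY_KM = 9.46e12
SEC_PER_YEAR = 3.156e7

half_d = 2.1 * LY_KM                     # km to the turnaround point
t_half = 15 * SEC_PER_YEAR               # 15 years to the midpoint, in seconds

a = 2.0 * half_d / t_half**2             # from d = (1/2) a t^2, in km/s^2
a_per_day = a * 86400.0                  # ~15 km/s of speed gained per day

print(f"Required acceleration: {a_per_day:.1f} km/s per day")
print(f"Speed after 30 days  : {a_per_day * 30:.0f} km/s (roughly the solar-wind speed)")
```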
Nuclear rocket technology can make the trip even faster because the exhaust speeds can approach 20% the speed of light or higher. At these speeds, a trip to Alpha Centauri would take less than 25 years!
The velocity needed for orbit insertion near the Earth’s surface is about 7.9 kilometers per second, while escaping Earth’s gravity entirely requires 11.2 kilometers per second, or roughly 25,000 miles per hour.
Even at an altitude of 200 kilometers, a rocket is still inside the outer reaches of the Earth’s atmosphere. To really leave the atmosphere it probably has to get to a distance of 27,000 miles near the ‘geosynchronous’ limit. There is no sharp boundary to Earth’s atmosphere. It just decreases steadily in density until it eventually matches the density of the interplanetary medium. Here is an image that shows the extent of the geocorona, whose density is so low that it has no effect on spacecraft but qualifies as part of Earth’s atmosphere nonetheless. It was taken by NASA astronaut John Young using an ultraviolet camera on Apollo 16.
The dilute interstellar medium permeates space at a density of about one hydrogen atom per cubic centimeter. This image shows an all-sky map of this hydrogen observed by the Wisconsin H-Alpha Mapper (WHAM) Northern Sky Survey (Haffner, L. M. et al. 2003, Astrophysical Journal Supplement, 149, 405). The Wisconsin H-Alpha Mapper is funded by the (US) National Science Foundation.
We do not really know what the interstellar medium looks like at the human-scale. If it is just stray hydrogen atoms you will just experience a head-on flow of ‘cosmic rays’ that will collide with your spacecraft and probably generate secondary radiation in the skin of your ship. This can be annoying, but it can be shielded so long as the particles are not ultra-relativistic. At spacecraft speeds of 50-90% the speed of light, these particles are not likely to be a real problem. At speeds just below the speed of light, the particles are ultra-relativistic and would generate a very large x-ray and gamma-ray background in the skin of your ship.
As it turns out, our solar system is inside a region called the Local Bubble, where the density of hydrogen atoms is about 100 times lower than in the general interstellar medium. This Bubble, produced by an ancient supernova, extends about 300 light years from the Sun but has an irregular shape. There are thousands of stars within this region, which is enough to keep us very busy exploring safely. Here is one version of this region mapped by astronomers at the Harvard-Smithsonian Center for Astrophysics.
Interstellar space also contains microscopic dust grains (micron-sized is common), with roughly a few grains in a region a few meters on a side. At their expected densities you are probably in for a rough ride, but it really depends on your speed. The space shuttle, encountering flecks of paint traveling at 28,000 mph (about 8 miles per second, or 0.004 percent of the speed of light), is pitted and pierced by these fast-moving particles, but dust grains have masses a thousand times smaller than the smallest paint fleck, so at these speeds they will not be a problem.
At 50 percent of the speed of light, a reasonable minimum for interstellar travel, you will cover enough distance in a short enough time that your likelihood of encountering a large interstellar dust grain becomes significant. Just one such impact would be enough to cause severe spacecraft damage, given the kinetic energy involved.
A large dust grain might have a mass of a few milligrams. Traveling at 50% of the speed of light, its kinetic energy is given non-relativistically by E = 1/2 mv^2, so E = 0.5 x (0.001 grams) x (0.5 x 3 x 10^10 cm/sec)^2 = 1.1 x 10^17 ergs. This equals the kinetic energy of a 10-gram bullet traveling at a speed of 1,500 kilometers per second, or the energy of a 100-pound person traveling at 13 miles per second! The point is that at these speeds, even a dust grain would explode like a pinpoint bomb, forming an intense fireball that would melt through the skin of the ship like a hot poker through a block of cheese.
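A quick sketch of those kinetic-energy comparisons, done non-relativistically in cgs units as in the text (the true relativistic energy at 0.5c is roughly 20 percent higher):

```python
def kinetic_energy_erg(mass_g, speed_cm_s):
    """Classical kinetic energy in ergs (cgs units, as in the text)."""
    return 0.5 * mass_g * speed_cm_s**2

c = 3.0e10                                      # speed of light in cm/s
grain  = kinetic_energy_erg(0.001, 0.5 * c)     # 1-milligram grain at half light speed
bullet = kinetic_energy_erg(10.0, 1.5e8)        # 10-gram bullet at 1,500 km/s

print(f"Dust grain : {grain:.1e} erg")          # ~1.1e17 erg
print(f"Fast bullet: {bullet:.1e} erg")         # the same, to within rounding
```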
The dust grains at interstellar speeds become lethal interstellar ‘BB shots’ pummeling your spacecraft like rain. They puncture your ship, exploding in a brief fireball at the instant of contact.
Your likelihood of encountering a deadly dust grain is simply dependent on the volume of space your spacecraft sweeps out. The speed at which you do this only determines how often you will encounter the dust grain in your journey. At 10,000 times the space shuttle’s speed, the collision vaporizes the particles and a fair depth of the spacecraft bulkhead along the path of travel.
But the situation could well be worse than this if the interstellar medium contains lots of ice globules from ancient comets and other objects we cannot begin to detect from here. These impacts, even at 0.1c, would be fatal. We just don’t know what the ‘size spectrum’ of matter in interstellar space looks like between micron-sized dust grains and small stars.
My gut feeling is that interstellar space is rather filthy, and this would make relativistic interstellar travel not just technically difficult but perhaps impossible. Safe speeds for current technology would be only slightly higher than space shuttle speeds, especially if interstellar space contains chunks of comet ice.
This is an issue that hardly anyone in the science fiction world has bothered to explore! The only possible exception is in Star Trek, where the Enterprise is equipped with a forward-directed ‘Bussard deflector’ (that big blue dish just below the main saucer) which is supposed to sweep away particles before they arrive at the ship. This is very dubious technology, because hydrogen atoms are not the main problem a ship like that would have to worry about, especially traveling inside a planetary system at sub-light speeds. It’s dust grains!
According to popular science fiction accounts, on a space station orbiting Saturn, a man inside a punctured spacesuit swells to monstrous proportions and explodes. On Mars, the eyes of a man exposed to the near-vacuum of the Martian atmosphere pop out of his head and dangle by their optic nerves on the sides of his face. En route to Jupiter on the Discovery spacecraft, astronaut Dave Bowman spacewalks for 15 seconds with no helmet and, in no apparent pain, succeeds in reentering the Discovery through an open hatch. Fortunately, only in science fiction stories do humans ever come into direct contact with the vacuum of space, but these contacts are often portrayed as having horrific consequences.
To experience the vacuum is to die, but not quite in the grisly manner portrayed in the popular movies Total Recall and Outland. The truth of the matter seems to be closer to what Stanley Kubrick had in mind in 2001: A Space Odyssey.
According to the McGraw-Hill Encyclopedia of Space, when animals are subjected to explosive decompression to a vacuum-like state, they do not suddenly balloon up or have their eyes pop out of their heads. It is, in fact, virtually impossible to compress or expand organic tissues in this way.
Instead, death arises from the response of the free gases trapped within the tissues. When the ambient pressure falls below 47 millimeters of mercury, about 1/16 of the atmospheric pressure at sea level, the water inside the tissues passes into a vapor state, beginning at the skin surface. This causes the collapse of surface cells and the loss of huge amounts of body heat via evaporation. After 15 seconds, mental confusion sets in, and after 20 seconds you become unconscious. You can survive this for about 80 seconds if a pressure higher than about 47 millimeters of mercury is then reestablished.
There have been instances of accidental exposure to a hard vacuum during space suit tests in vacuum chambers, and by pilots flying military aircraft at 100,000 feet. The experience was not fatal, or even exceptionally uncomfortable, for the typically 10 to 15 seconds or so that it was experienced.
The decompression incident on Kittinger’s balloon jump is discussed further in Shayler’s Disasters and Accidents in Manned Spaceflight: [When Kittinger reached his peak altitude] “his right hand was twice the normal size… He tried to release some of his equipment prior to landing, but was not able to as his right hand was still in great pain. He hit the ground 13 min. 45 sec. after leaving Excelsior. Three hours after landing his swollen hand and his circulation were back to normal.”
Earth’s magnetic field at the surface has been mapped for decades. This map provided by the British Geological Survey, shows basic polarity difference between the North and South Hemispheres.
The magnetic field of Earth is shaped like the one you see in a toy bar magnet, but there is a very important difference. The toy magnet field is firmly fixed in the solid body of the magnet and does not change with time, unless you decide to melt the magnet with a blow torch! The Earth’s field, however, changes in time. Not only does its strength change, but the direction it is pointing also changes. Here is a computer model of its 3-d shape in space that reveals its complex features even though it is still a ‘dipolar’ field. (Credit:Wikipedia- Dr. Gary A. Glatzmaier – Los Alamos National Laboratory – U.S. Department of Energy.)
Map makers have been aware that the direction of the magnetic field changes since the 1700s. Every few decades, they had to re-draw their maps of harbors and landmarks to record the new compass bearings for places of interest. Think about it: if you are on a ship navigating a harbor in a fog, a slight change in your compass heading can take you onto a reef or a sandbar!
Geologists have also been keeping track of the wandering magnetic poles as well. Instead of using compasses, they can actually detect the minute fossil traces of Earth’s magnetism in rocks. These rocks are dated to determine when they were formed. From this information, geologists can figure out exactly how Earth’s magnetic field has changed during the last two billion years. The results are surprising. Right now, the North point of your compass points towards the magnetic pole in the Northern Hemisphere. That’s why compass makers put the ‘N’ on the tip of the magnetized compass needle. But because opposites attract, this means that the magnetic pole in the Northern Hemisphere is actually a south magnetic pole! That’s because scientists named magnetic polarity after the geographic compass direction!
Since the 1800s, Earth’s magnetic south pole, which lives in the Northern Hemisphere, has wandered over 1,100 kilometers. By the year 2030, the magnetic pole will actually be almost right on top of our geographic North Pole. Then in the next century, it will be in the northern reaches of Siberia! Scientists are excited, and a bit concerned, by the sudden dramatic change in the magnetic pole’s location. They worry that something may be going on deep within the Earth to cause these changes, and they have seen this kind of thing happen before.
What geologists have discovered is that the magnetic poles of Earth don’t just wander around a little, they actually flip-flop over time. About 800,000 years ago, the Earth’s magnetic poles were opposite to the ones we have today. Back then, your compass in the Northern Hemisphere would point to Antarctica, because in the Northern Hemisphere the polarity had changed to ‘North’ and this would have repelled the North tip of your (magnetized) compass needle. Geologists have discovered in the dating of the rocks that the magnetism of Earth has reversed itself hundreds of times over the last billion years. Careful measurements of rock strata from around the world confirm these reversal events in the same layers, so they really are global events, not just local ones. What is even more interesting is that the time between these magnetic reversals, and how long they last, has changed dramatically. 70 million years ago, when dinosaurs still roamed the landscape, the time between magnetic reversals was about one million years. Each reversal lasted about 500,000 years. 20 million years ago, the time between reversals had shortened to about 330,000 years, and each reversal lasted 220,000 years.
Today, the time between reversals has declined to only about 200,000 years during the last few million years, and each reversal lasts about 100,000 years or so. When did the last reversal happen?
This is a plot of the change in the strength of Earth’s main field over the last 800,000 years, from research by Yohan Guyodo and Jean-Pierre Valet at the Institut de Physique du Globe in Paris, published in the journal Nature on May 20, 1999 (pp. 249-252).
The Brunhes-Matuyama reversal ended about 730,000 years ago, when the polarity of the field actually did ‘flip’. Since that time, the polarity of Earth’s field has remained the same as what we measure today, with the Northern Hemisphere Arctic region containing a ‘south-type’ magnetic polarity and the Antarctic region containing a ‘north-type’ polarity. You will note that the last reversal ended when the magnetic intensity reached near-zero levels. Since then, there was a near-reversal about 200,000 years ago labeled ‘Jamaica/Pringle Falls’ after the geologic sites where these intensity measurements were first identified. Scientists do not know just how low our field has to fall in intensity before a reversal is triggered, but the threshold seems to be below 2.0 units on the scale of the above ‘VADM’ plot. Beginning in the 1920s, geologists discovered traces of the last few magnetic reversals in rock samples from around the world. From 730,000 years ago to today, we have had the current magnetic conditions, with the south-type magnetic polarity located in the Northern Hemisphere near the Arctic. Geologists call this the Brunhes Chron. Between 730,000 and 1,670,000 years ago, Earth’s magnetic poles were reversed, during what geologists call the Matuyama Chron. This means that the north-type magnetic polarity was found in the Northern Hemisphere. Notice that the time since the last reversal (the end of the Matuyama Chron) is 730,000 years. This is a LOT longer than the recent average of 200,000 years between reversals!
Some scientists think that we may be overdue for a magnetic reversal by about 500,000 years!
Is there any evidence that we are headed towards this condition? Scientists think that the sudden, rapid change in our magnetic pole location is one sign of a significant change beginning to occur. Another sign is the actual strength of Earth’s magnetic field.
Scientists are convinced that Earth’s magnetic field is created by currents flowing in the liquid outer core of Earth. Like the current that flows to create an electromagnet, Earth’s currents change in time, causing the field to increase and decrease in intensity. Geological evidence shows that Earth’s field was about twice as strong 1.5 billion years ago as it is today, but like the weather it has gone through many complicated ups and downs that scientists don’t have a really good explanation for, or the ability to predict. But the fossil evidence does tell us something important.
In the 730,000 years since the last magnetic reversal, Earth’s field has at times been as little as 1/6 its current strength. This happened about 200,000 years ago. Also, around 700 AD it was 50% stronger than it is today. There have been many sudden ups and downs in this intensity, but some scientists think that conditions are rapidly becoming very different than the past historical trends have shown.
We’ve only been able to measure the strength of Earth’s magnetic field directly for about two centuries. During this time, there has been a gradual decline in the field strength, and in recent years the rate of decline seems to be accelerating. Over the last 150 years, the strength of Earth’s field has decreased by about 5% per century. This doesn’t seem like a very fast decrease, but it is one of the fastest that has been verified in the 800,000-year magnetic record we now have. At this rate, in 10 centuries the field will be 50% below its current strength, and after 2,000 years it could be at zero strength. The data on past reversals seem to show that when the field reaches about 10% of its current strength, a reversal can be triggered. It has been 730,000 years since the last reversal ended, so by some statistical estimates we are certainly long overdue for a reversal.
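For what it’s worth, here is that straight-line extrapolation as a sketch; the 5-percent-per-century decline and the rough 10 percent trigger level are the figures discussed above, not precise constants:

```python
DECLINE_PER_CENTURY = 0.05        # 5% of today's field strength lost per century

def field_fraction(centuries):
    """Naive straight-line extrapolation of the present decline."""
    return max(0.0, 1.0 - DECLINE_PER_CENTURY * centuries)

print(field_fraction(10))                          # 0.5 -> half strength in ~1,000 years
print(field_fraction(20))                          # 0.0 -> nominal zero in ~2,000 years
print((1.0 - 0.10) / DECLINE_PER_CENTURY)          # ~18 centuries to reach the 10% level
```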
But the caveat is that magnetic changes come on a variety of timescales, from the major reversal events every few hundred thousand years to smaller changes called ‘excursions’ that come and go within a few thousand years. Two detailed studies of “the geomagnetic field in the last 1 million years have found 14 excursions, large changes in direction lasting 5-10 thousand years each, six of which are established as global phenomena by correlation between different sites. Excursions appear to be a frequent and intrinsic part of the (paleomagnetic) secular variation” (Gubbins, David, 1999, “The distinction between geomagnetic excursions and reversals”, Geophysical Journal International, Vol. 137, pp. F1-F3). The figure below shows, on the left side, the magnetic intensity measurements for the last 500,000 years, during the current Brunhes magnetic chron. You can easily see the ‘spiky’ fast excursions, but the overall magnetic intensity is decreasing toward the present day. We may be living inside one of these fast excursions, to be replaced by a growing field in a few thousand years, but the big picture is still that the overall large-scale field is declining slowly over 100,000-year timescales. It isn’t the excursions we need to worry about for ‘reversals’ but this larger downward trend that seems to be underway.
So, what will happen when the field reverses? The fossil record, and other geological records, seem to say ‘Not much!’
Scientists have recovered deep-sea sediment cores from the bottom of the ocean. These sediments record the ratio of the rare isotope oxygen-18 to ordinary oxygen-16, and the increases and decreases in this ratio track the ebb and flow of periods of global glaciation. What we see is that during the time when the last reversal happened, there was no obvious change in the glacial conditions or in the way those conditions came and went. So, at least for the last reversal, there was no obvious change in Earth’s temperature other than what geologists see from the ‘normal’ pattern of glaciation. And because glaciation depends on the tilt of Earth’s spin axis, this also means that a magnetic reversal doesn’t change the spinning Earth in any measurable way.
Loess deposits in China have given climatologists a nearly unbroken, continuous record of climate changes during the last 1,200,000 years. The sedimentation record shows the summer monsoons and how severe they were, and the only significant variation in the data could be attributed to the coming and going of glacial and inter-glacial periods. So, summer monsoons in China were not affected by the reversal in any way that can be obviously seen in the climate-related data from this period. The fossil record, at least for large animals and plants, is even less spectacular when it comes to seeing changes that can be tied to a magnetic reversal.
The Brunhes-Matuyama reversal happened 730,000 years ago during what paleontologists call the Middle Pleistocene Era (100,000 to 1 million years ago). There were no major changes in plant and animal life during this time, so the magnetic reversal did not lead to planet-wide extinctions, or other calamities that would have impacted existing life. It seems that the biggest stresses to plant and animal life were the comings and goings of the many Pleistocene Ice Ages. This led very rapidly to the evolution of cold-tolerant life forms like Woolly Mammoths, for example.
So, it seems that we may be headed for another magnetic reversal, perhaps within the next few thousand years. Based on past fossil and geological history, this event will not cause planet-wide catastrophes. The biosphere will not become extinct. Radiation from space will not cause horrible mutations everywhere. Ocean tides will not devastate coastal regions, and there will certainly not be volcanic activity that leads to global warming.
Of course, scientists cannot predict which minor effects may take place. A magnetic reversal could be a big nuisance to many organisms, not leading to their extinction but perhaps causing temporary changes in the way they would normally conduct themselves. The fossil record doesn’t record how a species reacted to minor nuisances! Some animals use Earth’s field to navigate magnetically, but we know that these same animals have back-up navigation systems too. Pigeons use Earth’s magnetism to navigate, as do dolphins, whales and some insects. They also use their eyes as a backup, along with a knowledge of landforms and geography, or the location of the Sun and Moon, to get about. Humans have used compasses to navigate for about a thousand years, but now we rely almost entirely on satellites to steer by. In the future, only those few anachronistic people using the ancient technology of compasses to get around would have any problems!
The magnetic field of Earth shields us from cosmic rays, so losing this shield may seem like a big deal, but it really isn’t. Cosmic rays are not the same kind of radiation as light; instead they consist of fast-moving particles of matter such as electrons, protons and the nuclei of some atoms. Our atmosphere is actually a far better shield against cosmic radiation than Earth’s magnetic field. Losing the magnetic field during a reversal would only increase our natural radiation background exposure on the ground by a small amount, perhaps not more than 10%. The long-term result might be a few thousand additional cases of cancer every year, but certainly not the extinction of the human race.
References:
Guo, Zhengtang, et al., 2000, “Summer Monsoon Variations Over the Last 1.2 Million Years from the Weathering of Loess-soil Sequences in China”, Geophysical Research Letters, June 15, pp. 1751-1754.
Guyodo, Yohan and Valet, Jean-Pierre, 1999, “Global Changes in Intensity of the Earth’s Magnetic Field During the Past 800 kyr”, Nature, May 20, 1999, pp. 249-252.
Jacobs, J. A., “Reversals of the Earth’s Magnetic Field”, pp. 48-50.
Jacobs, J. A., “Geomagnetism”, Academic Press, pp. 186-189, 215-220, 236-242.
Merrill, Ronald, McElhinny, M. and McFadden, P., “The Magnetic Field of the Earth”, Academic Press, pp. 120-125.
Raymo, M., Oppo, D. W., and Curry, W., 1997, “The Mid-Pleistocene Climate Transition: A Deep Sea Carbon Isotopic Perspective”, Paleoceanography, August 1997, pp. 546-559.
Rikitake, Tsuneji and Honkura, Yoshimori, “Solid Earth Geomagnetism”, D. Reidel Publishing Co., pp. 42-45.
Ruddiman, W., et al., 1989, “Pleistocene Evolution: Northern Hemisphere Ice Sheets and North Atlantic Ocean”, Paleoceanography, August, pp. 353-412.
Wollin, G., Ericson, D., Ryan, W. and Foster, J., 1971, “Magnetism of the Earth and Climate Changes”, Earth and Planetary Science Letters, vol. 12, pp. 175-183.
The color of a star is a combination of two phenomena. The first is the star’s temperature. This determines the wavelength where the peak of its electromagnetic radiation emerges in the spectrum. A relatively cool object, like an iron rod heated to 3,000 degrees, will emit most of its light at wavelengths near 9,000 Angstroms (the far-red end of the visible spectrum). A very hot object at a temperature of 30,000 degrees will emit most of its light near 900 Angstroms, far into the ultraviolet and well beyond the visible spectrum. The amount of energy emitted at other wavelengths is precisely determined by the body’s temperature, through Planck’s radiation law for ‘black bodies’. (Credit: Wikipedia)
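Those peak wavelengths follow from Wien’s displacement law, lambda_max = b/T with b of about 2.9 x 10^7 Angstrom-Kelvins; a quick sketch:

```python
WIEN_B = 2.898e7       # Wien's displacement constant, in Angstrom-Kelvins

def peak_wavelength_angstrom(T_kelvin):
    """Wavelength of peak blackbody emission from Wien's law."""
    return WIEN_B / T_kelvin

for T in (3000, 6000, 30000):
    print(f"T = {T:>6} K -> peak near {peak_wavelength_angstrom(T):,.0f} Angstroms")
# 3,000 K peaks near 9,700 A (far red); 6,000 K near 4,800 A (blue-green);
# 30,000 K near 970 A (far ultraviolet).
```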
The Planck curve shows that as the temperature of the object increases, the peak of its emission shifts to shorter wavelengths. But the phenomenon we call ‘color’ is another matter. Color does not exist as an objective property of nature.
Color is a perception we humans have because of the kinds of pigments used in our retinae. Our eyes do not sense light evenly across the visible spectrum but have a greater sensitivity for green light, and somewhat less so for red and blue light as the response spectrum below illustrates:
In effect, what you have to do is multiply the spectrum of light you receive from a heated body by the response of the eye at each wavelength. When you do this, something unexpected happens.
If I were to figure out how hot a star would have to be so that the peak of its emission fell in the ‘green’ part of the spectrum near 5,000 Angstroms, Wien’s law says its temperature would have to be close to 6,000 degrees, about the temperature of our own Sun. Hotter stars, near 10,000 degrees, are common in the sky; the two brightest of these ‘A-type’ stars are Vega in the constellation Lyra and Sirius in Canis Major. But if you were to look at any of these stars in the sky, they would appear WHITE, not green! Stars are ranked by temperature according to the spectral sequence of letters O, B, A, F, G, K and M, running from the hottest (blue-white) through white and yellow to the coolest (orange and red).
This is NOT the same sequence of colors you see in a rainbow (red, orange, yellow, green, blue, indigo, violet) because the distribution of energy in the light source is different, and in the case of the rainbow, optical refraction in a raindrop is added.
Another factor working against us is that we see faint stars in the sky using our black-and-white rods, not our color-sensitive cones. This means that only the very brightest stars show much color, usually red, orange, yellow or blue. By chance there are no stars nearby that would have produced green colors had their spectral shapes been just right.
So, there are no genuinely green stars because stars with the expected temperature emit their light in a way that our eye combines into the perception of ‘whiteness’.
For more information on star colors, have a look at the article by Philip Steffey in the September, 1992 issue of Sky and Telescope (p. 266), which gives a thorough discussion of stellar colors and how we perceive them.
Based on tons of scientific data and decades of research, here is an artist’s impression of the Milky Way Galaxy, as seen from above the galactic “north pole”. (Credit: NASA/JPL-Caltech/R. Hurt (SSC/Caltech))
All of the basic elements have been established, including its spiral arm pattern and the shape of its central bulge of stars. To directly answer this question, however, is a difficult, if not impossible, task. The problem is that we cannot directly see every star in the Milky Way, because most are hidden behind interstellar clouds from our vantage point inside the galaxy. The best we can do is to figure out the total mass of the Milky Way, subtract the portion contributed by interstellar gas and dust clouds (about 1 to 5 percent or so), and then divide the remaining mass by the average mass of a single star.
From a number of studies, the mass of the Milky Way inside the orbit of our sun can be estimated to an accuracy of perhaps 20 percent as 140 billion times the mass of the Sun, if you use the Sun’s speed around the core of the galaxy. Radio astronomers have detected much more material outside the orbit of the Sun, so the above number is probably an underestimate by a factor of 2 to 5 times in mass alone.
Now, to find out how many stars this represents, you have to divide by the average mass of a star. If you use ‘one solar mass’ as the average, you get about 140 billion sun-like stars for what’s inside the sun’s orbit. But astronomers have known for a long time that stars like the sun are not that common. Far more plentiful are stars with half the mass of the sun, and even one tenth the mass of the sun. The problem is that we don’t know exactly how much of the Milky Way is in the form of these low-mass stars. In textbooks, you will therefore get answers that range anywhere from a few hundred billion to as high as a trillion stars, depending on what the author used as a typical mass for the most abundant type of star. This is a pretty embarrassing uncertainty, but then again, why would you need to know this number exactly?
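As a rough illustration of how such an estimate is made, here is a sketch using the textbook relation M = v^2 R / G for the mass enclosed by the Sun’s orbit; the orbital speed of 220 km/s and distance of 8.5 kiloparsecs are assumed typical values, not numbers from the text, and they land near 10^11 solar masses, the same ballpark as the 140 billion quoted above:

```python
G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30                    # kg
KPC_M = 3.086e19                    # meters per kiloparsec

v = 220e3                           # assumed solar orbital speed, m/s
R = 8.5 * KPC_M                     # assumed distance of the Sun from the galactic center

M_enclosed = v**2 * R / G           # point-mass approximation for the enclosed mass
print(f"Enclosed mass ~ {M_enclosed / M_SUN:.1e} solar masses")     # ~1e11

for avg_mass in (1.0, 0.5, 0.3):    # assumed average stellar masses, in solar masses
    print(f"average star = {avg_mass} Msun -> ~{M_enclosed / M_SUN / avg_mass:.1e} stars")
```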
The best estimates come from looking at the motions of nearby galaxies such as a recent study by G. R. Bell (Harvey Mudd/USNO Flagstaff), S. E. Levine (USNO Flagstaff):
Using radial velocities and the recently determined proper motions for the Magellanic Clouds and the dwarf spheroidal galaxies in Sculptor and Ursa Minor, we have modeled the satellite galaxies’ orbits around the Milky Way. Assuming the orbits of the dwarf spheroidals are bound, have apogalacticon less than 300 kpc, and are of low eccentricity, then the minimum mass of our galaxy contained within a radius of 100 kpc is 590 billion solar masses, and the most likely mass is 700 billion. These mass estimates and the orbit models were used to place limits on the possible maximum tangential velocities and proper motions of the other known dwarf spheroidal galaxies and to assess the likelihood of membership of the dwarf galaxies in various streams.
Again, you have to divide this by the average mass of a star…say 0.3 solar masses, to get an estimate for the number of stars which is well into the trillions!
Another factor that confuses the problem is that our Milky Way contains a lot of dark matter that also produces its own gravity and upsets the estimates for actual stellar masses. Our galaxy is embedded in a roughly spherical cloud of dark matter. Various theoretical calculations show that these should be very common among galaxies. Here is an example of such a model in which the luminous galaxy is embedded in a massive DM halo. (Credit:Wikipedia-Dark Matter Halo N-body simulation)
By using the motions of distant galaxies astronomers have ‘weighed’ the entire Milky Way and deduce that the dark matter halo is likely to include around 3 trillion solar masses of dark matter.
To get distances, we use a variety of techniques. The most basic one is geometric parallax. By photographing the same star 6 months apart from opposite sides of Earth’s orbit, we can measure its apparent shift relative to more distant background stars. For a baseline of R = 1 Astronomical Unit, the shift amounts to 1 second of arc at a distance of 1 parsec (3.26 light years), 1/2 arcsecond at 2 parsecs, 1/10 arcsecond at 10 parsecs, and so on. By the way, 1 parsec is the distance at which 1 Astronomical Unit subtends an angle of 1 arcsecond, which works out to 206,265 astronomical units.
The Hipparcos astrometric satellite has determined the distance to over 100 thousand stars in this way. Read an ESA Press Release about the mission accomplishments. For example, the distances to the Nearest 10 stars can be found in their Table of 150 closest stars which I reprint below:
Name                           Parallax (milliarcseconds)
Alpha Centauri C               772.33
Alpha2 Centauri                742.12
Alpha1 Centauri                742.12
Barnard's Star                 549.01
Alpha Canis Majoris (Sirius)   379.12
Epsilon Eridani                310.75
61 Cygni A                     287.13
Alpha Canis Minoris            285.93
61 Cygni B                     285.42
Epsilon Indi                   275.76
Tau Ceti                       274.17
Note: the parallax is measured in milliarcseconds (thousandths of an arcsecond). To calculate the distance in parsecs, divide 1000.0 by the parallax value in the table above. For example, Alpha Centauri C (Proxima) is at a distance of 1000.0/772.33 = 1.295 parsecs, which equals 1.295 x 3.26 = 4.22 light years. Alpha Centauri is at 1000/742 = 1.34 parsecs, or 4.39 light years. I leave it as a simple calculator exercise for you to convert the other parallaxes into light years!
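The table-to-distance conversion is a one-liner; this sketch assumes the parallaxes are listed in milliarcseconds, as in the Hipparcos catalogue:

```python
LY_PER_PC = 3.26

def distance_from_parallax(parallax_mas):
    """Distance in parsecs and light years from a parallax in milliarcseconds."""
    pc = 1000.0 / parallax_mas
    return pc, pc * LY_PER_PC

for name, plx in [("Alpha Centauri C (Proxima)", 772.33),
                  ("Barnard's Star", 549.01),
                  ("Alpha Canis Majoris (Sirius)", 379.12)]:
    pc, ly = distance_from_parallax(plx)
    print(f"{name:<30s} {pc:5.2f} pc   {ly:5.2f} ly")
```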
Stellar diameters can be measured for some nearby giant and supergiant stars by using a technique called stellar interferometry. The Navy Prototype Optical Interferometer has been operating for more than a decade at the Anderson Mesa Station near Flagstaff, Arizona, and routinely measures the angular diameters of bright stars to an accuracy of a fraction of a milliarcsecond (0.001 arcseconds). The table below shows only a few stars that have had their diameters measured. Once their distances are accurately known from the Hipparcos survey, their linear diameters in millions of kilometers can easily be found.
The table below shows the sizes, in multiples of the solar diameter, for some typical stars whose measured angular diameters are given in column 5 in arcseconds. The highest resolution of the Hubble Space Telescope is about 0.046 arcseconds, so it is just barely able to see Betelgeuse as a resolved ‘disk’.
The size in kilometers = 3 x 10^13 x (d/3.26) x (D/3600)/57.3, or 44.6 million x d x D, where d is the distance in light years and D is the angular diameter in arcseconds. In terms of solar diameters (1,390,000 km) you get Size = 32 x d x D solar diameters. The latter formula gives you the above entries in the last column. The supergiant star Betelgeuse comes out at about 734 times the diameter of the Sun.
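Here is the same small-angle formula as a sketch; the Betelgeuse-like distance of 500 light years and angular diameter of 0.045 arcseconds are illustrative assumptions, not values from the table:

```python
KM_PER_LY = 9.46e12
ARCSEC_PER_RADIAN = 206265.0
SUN_DIAMETER_KM = 1.39e6

def linear_diameter_km(distance_ly, angular_diameter_arcsec):
    """Physical diameter from distance and angular size (small-angle formula)."""
    return distance_ly * KM_PER_LY * angular_diameter_arcsec / ARCSEC_PER_RADIAN

d_km = linear_diameter_km(500, 0.045)        # illustrative Betelgeuse-like values
print(f"{d_km:.2e} km = {d_km / SUN_DIAMETER_KM:.0f} solar diameters")   # ~740
```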
Absolute zero is an ‘asymptotic’ state which you can only get close to but never reach. In fact, quantum mechanical phenomena intervene that probably prevent you from actually reaching it, because the physical vacuum itself, even at ‘absolute zero’, contains energy that interferes with any physical system in space. Bose-Einstein condensates are a good example of what happens when you try to cool a small collection of atoms to very low temperatures. Their wave functions spread out and you end up with an indivisible ‘super particle’ rather than a collection of even more frigid discrete particles.
Among the coldest naturally occurring things is the cosmic fireball radiation, which fills all space in the universe and has a temperature of 2.7 degrees above absolute zero. In the all-sky image of this radiation above, created by NASA’s COBE spacecraft, you can see how this very cold light still has faint irregularities in it from the vast collections of matter that it has passed through to reach us.
We define time in terms of clocks which are collections of matter that change their states. At Absolute Zero, there would be no thermal energy to keep such collections moving, but that doesn’t mean that very large collections of matter would not move. Temperature is only defined for collections of ‘small’ things such as atoms…or quanta of energy like photons. Planets would still orbit stars and spin on their axis so a physical clock would still exist, and therefore we would still have ‘time’.