People sometimes ask why we are spending so much time searching for planets that are so far away. The question refers to the Kepler mission and the fact that the distance to its target stars is generally 600 to 3,000 light years. In fact, fewer than one percent of the stars Kepler is examining out along the Orion arm are closer than 600 light years. The reason: Kepler is all about statistics, about learning how common exoplanets, and in particular terrestrial planets, are in the aggregate. The last thing the Kepler team is thinking about is targets for a future interstellar probe.
Studies of closer stars continue — we have three ongoing searches for planets around the Alpha Centauri stars, for example. But there is so much we still have to learn about the overall disposition of planets in our galaxy. New work by an international team of astronomers uses gravitational microlensing to answer some of these questions, and the results suggest that planets — even warm, terrestrial ones — are out there in vast numbers. Here again statistical analysis plays a crucial role, in conjunction with other forms of exoplanet detection. Arnaud Cassan (Institut d'Astrophysique de Paris) is lead author of the paper on this work in Nature:
“We have searched for evidence for exoplanets in six years of microlensing observations. Remarkably, these data show that planets are more common than stars in our galaxy. We also found that lighter planets, such as super-Earths or cool Neptunes, must be more common than heavier ones.”
Gravitational microlensing is yet another tool in the exoplanet hunt, and an extremely useful one because it gets around some of the limitations of the other major methods. Radial velocity studies tend to favor large planets that are close to their star, although with time and improving techniques, we’re using RV to learn about smaller and more distant worlds. Transit studies like Kepler’s are powerful but take time, as we wait for lengthy planetary orbits to be completed and confirm the presence of planets suggested by slight dips in starlight. But microlensing can detect planets over a wide mass range and also spot planets much further from their stars.
Image: The Milky Way above the dome of the Danish 1.54-metre telescope at ESO’s La Silla Observatory in Chile. The central part of the Milky Way is visible behind the dome of the ESO 3.6-metre telescope in the distance. On the right the Magellanic Clouds can be seen. This telescope was a major contributor to the PLANET project to search for exoplanets using microlensing. The picture was taken using a normal digital camera with a total exposure time of 15 seconds. Credit: ESO/Z. Bardon/ProjectSoft.
The current work uses data from the PLANET and OGLE microlensing teams, two studies that rely on a foreground star magnifying the light of a much more distant star lined up behind it. If the lensing star also has an orbiting planet, the planet’s effect in brightening the background star is measurable. The method gives us the chance to look for planets at a wide range of distances from the Earth, but it also relies on purely chance alignments that are obviously rare. In fact, from 2002 to 2007, only 3247 such events were identified, with 500 studied at high resolution. All this from a microlensing search that involved millions of stars.
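To get a feel for just how rare these chance alignments are, here is a rough back-of-the-envelope sketch in Python using the figures above. The number of monitored stars is an assumption (the text says only “millions”), so treat the result as an order of magnitude, nothing more:

```python
# Back-of-the-envelope rarity of microlensing alignments, using the
# figures quoted above. The monitored-star count is an ASSUMPTION for
# illustration; the text says only "millions of stars".

events = 3247              # microlensing events identified, 2002-2007
years = 6                  # length of the survey window
monitored_stars = 10e6     # ASSUMED number of stars under observation

rate_per_star_per_year = events / (monitored_stars * years)
print(f"~{rate_per_star_per_year:.1e} lensing events per star per year")
# With these assumptions, roughly 5e-5: any given star has about a
# one-in-20,000 chance per year of being lensed at all, before we even
# ask whether the lens happens to host a detectable planet.
```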
The researchers combined the PLANET and OGLE data with detections from earlier microlensing work and weighed these against non-detections during the six-year period of study. They then analyzed these data in conjunction with radial velocity and transit findings. The result: Given the odds against finding planets through these chance celestial alignments, planets must be abundant in the Milky Way. In fact, the researchers conclude that one in six of the stars studied hosts a Jupiter-mass planet, half have planets of Neptune’s mass and two-thirds are likely to have super-Earths. Note that the survey was sensitive to planets with masses ranging from five times the Earth’s up to ten times the mass of Jupiter.
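Taken at face value, those fractions already add up to more than one planet per star on average, which is exactly the headline result of the paper. A minimal sketch of the arithmetic, treating the three mass classes as simply additive (an assumption made here purely for illustration; the paper’s own analysis is a full statistical fit):

```python
# Expected number of bound planets per star in the surveyed mass range,
# summing the fractions quoted above. Treating the classes as additive
# and independent is an illustrative assumption.

f_jupiter = 1 / 6       # stars with a Jupiter-mass-class planet
f_neptune = 1 / 2       # stars with a Neptune-mass planet
f_super_earth = 2 / 3   # stars with a super-Earth

expected_per_star = f_jupiter + f_neptune + f_super_earth
print(f"Expected planets per star (5 Earth masses to 10 Jupiter masses): "
      f"{expected_per_star:.2f}")
# ~1.33, i.e. on average more than one bound planet per Milky Way star.
```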
Uffe Gråe Jørgensen is head of the research group in Astrophysics and Planetary Science at the Niels Bohr Institute at the University of Copenhagen:
“Our microlensing data complements the other two methods by identifying small and large planets in the area midway between the transit and radial velocity measurements. Together, the three methods are, for the first time, able to say something about how common our own solar system is, as well as how many stars appear to have Earth-size planets in the orbital area where liquid water could, in principle, exist as lakes, rivers and oceans — that is to say, where life as we know it from Earth could exist in principle.”
Jørgensen goes on to conclude that out of the Milky Way’s 100 billion stars, there are about 10 billion with planets in the habitable zone, “…billions of planets with orbits like Earth and of comparable size to the Earth.” Daniel Kubas (ESO), co-lead author of the paper, takes all this into account and concludes: “We used to think that the Earth might be unique in our galaxy. But now it seems that there are literally billions of planets with masses similar to Earth orbiting stars in the Milky Way.” Statistics tell the tale, one that will be refined with each new exoplanet detection, but one that points increasingly to a galaxy where Earth-sized planets are common.
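Jørgensen’s figure is a simple scaling of the survey statistics up to the galaxy as a whole. For what it is worth, the arithmetic runs as follows; the ten percent habitable-zone fraction is just what the quoted numbers imply, not an independently measured value:

```python
# Scaling the habitable-zone estimate to the whole galaxy. The 10%
# fraction is implied by the quoted figures (10 billion of 100 billion
# stars) and is used here as an assumption, not a measurement.

stars_in_milky_way = 100e9
hz_planet_fraction = 0.10   # ASSUMED, implied by the quoted estimate

hz_planets = stars_in_milky_way * hz_planet_fraction
print(f"Stars with a habitable-zone planet: ~{hz_planets:.0e}")  # ~1e+10
```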
The paper is Cassan, Kubas et al., “One or more bound planets per Milky Way star from microlensing observations,” Nature 481, 167–169 (12 January 2012). Abstract available.
Rob: Technically, you could argue that the rise of intelligence, rather than requiring unusual circumstances, is just very unlikely. Same as we argue for abiogenesis. The problem with this view comes down to two things:
1) The difference in complexity between a human and the simplest imaginable organism is not nearly as great as the difference between that organism and a random mix of inorganic nucleotides.
2) We can envision, and in fact observe in great detail, a plausible path for bridging the former gap, complete with an overarching theory, that of evolution. For the latter, we have very little. Evolution does not apply; in its stead we have a rather more nebulous theory of complex systems (synergetics, etc.) and helpful but insufficient concepts like hypercycles and quasispecies. It feels like there is an actual abyss there that requires enormous luck as well as the right conditions to bridge.
Yes, Eniac, we can argue luck, but then we need to satisfy the following:
1) Intelligence has a low selective value. Actually it is hard to put a value on extra data processing, but we could say that the more layers of processing, the more that can go wrong. In an unvarying environment its advantage is zero, and we must also say that in a hypervariable environment its long-term average advantage is very low.
2) The many different lines in which intelligence and brain complexity seem to be gaining with time are just an artifact of our own existence. Since the main selective pressures on organisms are other organisms, without the challenge of clever prey there might not be sufficient evolutionary pressure even to maintain the level of intelligence our ancestors gained from *random walk* type mechanisms. We must postulate this even while holding (from the above) that that same pressure provides very little drive for gaining further intelligence-boosting genes.
Note how difficult satisfying both these conditions looks. I suspect that it is impossible.
Actually I take it all back. It is not impossible.
Let’s split evolution into two components: the usual natural selection, which should change gene frequencies in a completely predictable fashion in an infinite population, and genetic drift, a randomising and less purifying effect that takes hold in small populations. Both are well characterised, but there is a third and very rare condition that is neither, driven instead by systematic bias introduced by the proteins that replicate DNA. It causes Huntington’s disease to grow in severity between generations, and is thought by a few to have significant input into evolution. Here I will call it *molecular drive*.
My new hypothesis is that the trend to larger brains on Earth is purely due to molecular drive. Somehow, the genes that specify neural growth were originally laid out in the first metazoans in a way that skewed the probabilities, beyond anything provided by selective pressure, towards their descendants having more neurons. If so, it is likely unique to Earth.
The biggest problem with this is that the trend should accelerate as more genes for more neurons are copied. In creatures with very large brains, such as humans, the trend should be very obvious. The disturbing bit is that I am beginning to see multiple lines of evidence in its favour.
1) In the fossil record our brain size exploded, and it only seemed to stop when it became such a limiting factor in childbirth that further increments would probably have killed the mother (giving an absolute selective value against further growth).
2) In the Western world, where we have been relieved of this factor, caesareans are becoming so much more common that many are saying the whole trend can’t be due to social factors. Of interest is that those born by caesarean are significantly more likely to have their own children delivered that way.
3) The Flynn effect is very hard to explain as purely due to nutritional improvements.
4) The brain places such massive physiological demands on us that there is always an advantage in having a smaller one if intelligence can be maintained. For example, think of the advantage if an average man only had to support a brain of a woman’s mass.