We’ve been assuming all along that it would take the Kepler mission three years or more to detect true Earth analogues, meaning planets orbiting Sun-like stars at about the Earth’s orbital distance. Now it turns out that figure may have to be extended, as this article in Nature makes clear. Author Ron Cowen points out that a close analysis shows that approximately 2,500 of the tens of thousands of stars in the Kepler field are flickering more than expected, and that spells trouble.
Image: Kepler’s field of view superimposed on the night sky. Credit: Carter Roberts.
The reason: The dip in starlight signalling the presence of a planet can be masked by the unexpected noise in the Kepler data. As described by Kepler scientist Ron Gilliland (Space Telescope Science Institute), the signal of an Earth analogue — assuming a star much like the Sun — should be a drop of about 85 parts per million when the planet passes in front of its star, lasting a statistical average of 10 hours, and occurring once per year. Moreover, the signal would only be evident after three successive transits, equally spaced in time, had been observed.
But that assumption presumed luminosity fluctuations about the same as the Sun’s. The noise metric the Kepler team uses is called Combined Differential Photometric Precision, or CDPP, and according to Gilliland and colleagues in the paper on this matter, CDPP was intended to be about 20 ppm for dwarf stars at 12th magnitude. It turns out to be 30 ppm, 50 percent higher than expected. The paper goes on to demonstrate that most of the noise is intrinsic to the stars themselves rather than being the result of instrument problems.
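As a quick sanity check on those numbers, here is a back-of-the-envelope sketch of my own (not taken from the paper): the ~85 ppm figure is just the fraction of the solar disk an Earth-sized planet covers, and comparing it with the planned versus measured CDPP gives a feel for how much the per-transit signal-to-noise has dropped. Treating CDPP on the transit timescale as the per-transit noise is a simplification.

```python
# Back-of-the-envelope illustration (not from the paper): the transit depth of
# an Earth analogue crossing a Sun-like star, compared with Kepler's planned
# and measured noise levels (CDPP) for a 12th-magnitude dwarf.

R_EARTH_KM = 6371.0
R_SUN_KM = 695700.0

# Depth is the fraction of the stellar disk blocked: (Rp / Rs)^2
depth_ppm = (R_EARTH_KM / R_SUN_KM) ** 2 * 1e6
print(f"Earth-analogue transit depth: {depth_ppm:.0f} ppm")   # ~84 ppm

for label, cdpp_ppm in [("planned", 20.0), ("measured", 30.0)]:
    # Crude per-transit S/N, using CDPP as a stand-in for noise on the transit timescale
    print(f"Per-transit S/N with {label} CDPP: {depth_ppm / cdpp_ppm:.1f}")
```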
From the paper:
Kepler… is the first mission capable of quantifying the variability of large numbers of stars to the small levels by which the Sun is known to vary. Kepler will continue to provide exciting new insights into the astrophysics of quiet stars, and their galactic distributions. While we are not surprised to have learned new things from this new observational capability, the fact that the stars are more variable than expected has a significant influence on the ability to readily detect Earth-analog planet transits where the expected signal per transit is only a few times the inferred noise level on comparable time scales. Observing for a longer time baseline can compensate for the loss of transit detection sensitivity from the higher than anticipated stellar noise.
A longer time indeed, the upshot being that Kepler will need not three but an average of six transits per planet to adequately verify that the change in starlight is actually caused by a planet. Finding the Earth analogues among the Kepler stars, then, calls for an extended mission, and that will more than double the planned mission life of three and a half years. Cowen quotes Kepler chief data analyst Jon Jenkins as saying that while an overall eight-year mission now seems necessary, the extension is by no means assured. After all, NASA’s budget problems include the costs of the James Webb Space Telescope, and Kepler could be squeezed out of the picture.
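To see why roughly 50 percent more noise translates into about twice as many transits, note that the combined detection significance is usually taken to grow as the square root of the number of transits. The sketch below uses that scaling with an assumed detection threshold (7.1 sigma is the commonly quoted Kepler pipeline value); take the exact transit counts as illustrative.

```python
import math

# Illustrative scaling: total detection significance ~ (depth / noise) * sqrt(N),
# so for a fixed threshold the number of transits needed scales as (noise / depth)^2.
# The 7.1-sigma threshold is the commonly quoted Kepler pipeline value; the exact
# counts produced here are illustrative only.

DEPTH_PPM = 84.0      # Earth-analogue transit depth (see the sketch above)
THRESHOLD = 7.1       # assumed total S/N required for a detection

def transits_needed(noise_ppm):
    per_transit_snr = DEPTH_PPM / noise_ppm
    return math.ceil((THRESHOLD / per_transit_snr) ** 2)

print(transits_needed(20.0))  # planned noise  -> 3 transits
print(transits_needed(30.0))  # measured noise -> 7 by this crude estimate (~6 in the article)
```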
Magnetic activity on the Kepler stars may be the reason for the unexpected noise in the data, and if that is the case, it may be because the stars are young, thus spinning faster and feeding a more powerful magnetic field. That runs against predictions that most of the stars in the sample would be older than the Sun, but right now the cause seems less important than the need to get Kepler the extended mission it needs to actually tell us something about the distribution of Earth-like worlds in a large sample of stars. It would be outrageous if Kepler were to be shut down when it was just a few years away from answering such long-held questions, but at the moment NASA isn’t talking about Kepler’s chances. That decision will come next spring.
The paper is Gilliland et al., “Kepler Mission Stellar and Instrument Noise Properties,” accepted by the Astrophysical Journal (preprint). Thanks to Antonio Tavani for the original link to the Cowen article.
Related: “On Monday, 12 September 2011, astronomers will report significant new results in the field of exoplanets, obtained with the High Accuracy Radial Velocity Planet Searcher, better known as HARPS, the spectrograph on ESO’s 3.6-meter telescope at La Silla Observatory in Chile.” More here.
Why must it be a copy of Earth? Gas giants can have habitable moons, greenhousing can warm low-insolation worlds, tidelocked worlds can have temperature gradients. Most habitable worlds are not Earth clones!
Kepler’s web site has more info about the need for extending the mission. It would be a good idea to extend it anyway, even if there were no added difficulty in identifying transit signatures in the light curves generated by the noisier-than-expected stars.
Kelly Beatty, in Sky & Telescope’s article discussing the need to extend Kepler’s mission, makes the statement that it would cost about $17 million per year. If accurate, then approving this amount of additional funding should be a no-brainer for a mission of Kepler’s historic significance. Even if the cost were much higher, who could argue that a mission with a healthy, functioning spacecraft should be prematurely scrubbed before successfully completing its primary purpose? Of course, the big worry is that with the NASA budget problems existing today, some (I think it would be criminal) idiocy may end up being committed. I wonder just how hard people will need to fight to assure the additional funding is secured?
An extended mission was always desired; the noisy light curves just make an extended mission that much more of an imperative.
It is becoming clear that to progress down to Earth-size planets, especially around the nearby stars we need a dedicated astrometric mission. Astrometry is less affected by the activity problems that plague both radial velocity and transits. Of course, SIM-Lite got cut, another victim of the JWST budgetary black hole… guess we must wait on the European NEAT mission.
The Reagan Administration seriously considered shutting down Voyager 2 in 1981 after its Saturn encounter to save a few visible million dollars. Someone with a brain kept this from happening, otherwise we would still be waiting for detailed closeups of the Uranus and Neptune systems. So such foolishness as ending Kepler before it can complete its mission is possible.
Martin, other worlds besides Earthlike ones may indeed be better places for life to happen, but NASA better keep telling the guys with the purse strings that they are looking for other Earths to keep the money flowing. Recommending exomoons circling exojovians over exoEarths will just confuse them and stop the funding.
Well, I suppose we could say they’re like that place Pandora in Avatar. Maybe the funders will think they can get rich from mining unobtainium so we will get funding for a fleet of starships in the process!
If six transits are needed to detect Earth-size HZ planets around the noisier stars, I think we are getting close to that number for the later part of the K-dwarf sequence. There are more than 28 months of Kepler’s observations so far.
As for the results from the small number of M-dwarfs in Kepler’s target list? I wonder what candidates the Kepler science team are sitting on, waiting for radial velocity confirmations before they announce their findings.
Martin J Sallberg mentions a good point about other interesting potential abodes of life, but Earth-size terrestrial planets orbiting at the right distance from nice, main-sequence, low-mass dwarfs represent the only known type of life-bearing world at this time. Hence our limited resources are concentrated on that particular approach. With better instruments I hope that all possibilities can be explored in the future. And with better funding to build those improved instruments.
@Mike I doubt they’re sitting on ANY potentially habitable planets around K or M dwarfs. You’re right that much of the K-star sample should be probed by now, but even more right than perhaps you think, because as those stars are smaller, the transit signal is larger, since the depth scales inversely with the area of the star (so fewer than 6 transits will be needed to reach the necessary S/N). But waiting for an RV confirmation means waiting around a decade, as new spectrographs and the next generation of telescopes come on line (assuming the RV noise can be modelled out and that most of the timescales for magnetic activity are different from orbital periods). If they had detections NOW, I’m reasonably confident they’d show them to help ensure an extended mission. As for funding, NASA has a track record of shutting down working satellites to save pennies on the dollar: IUE and crashing Galileo into Jupiter being two such cases.
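For what it’s worth, here is a minimal sketch of that scaling point, with assumed representative stellar radii (not Kepler catalogue values): since the depth is (Rp/Rs)^2, the same Earth-sized planet produces a much deeper transit across a K or M dwarf than across a Sun-like star.

```python
# Rough illustration with assumed, representative stellar radii: an Earth-sized
# planet's transit depth grows as the host star shrinks, since depth = (Rp / Rs)^2.

R_EARTH_KM = 6371.0
R_SUN_KM = 695700.0

host_radii_rsun = {          # assumed typical radii in solar units
    "G dwarf (Sun-like)": 1.0,
    "K dwarf": 0.7,
    "M dwarf": 0.3,
}

for star, rs in host_radii_rsun.items():
    depth_ppm = (R_EARTH_KM / (rs * R_SUN_KM)) ** 2 * 1e6
    print(f"{star}: ~{depth_ppm:.0f} ppm")
# ~84 ppm (G), ~171 ppm (K), ~932 ppm (M): the deeper the transit, the fewer
# transits are needed to reach a given detection significance.
```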
If the funding is cut in the end, we can start a grassroots movement asking Kepler fans around the world to donate the $17M/yr.
I could probably part with $1k/yr myself; with the company match, that is $2k/yr already, and the real cost to me is roughly $750-ish after the tax deduction.
Without an extension, is it possible to use three transits to predict the next ones and use another telescope (in space or on the ground) to complete the six transits needed?
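In principle the scheduling part of that idea is easy: three transits define a linear ephemeris, T_n = T_0 + n P, which any other telescope could use to plan follow-up. A minimal sketch with made-up epochs:

```python
# Minimal linear-ephemeris sketch: given three observed mid-transit times,
# predict future transit windows as T_n = T_0 + n * P.
# The epochs below are invented purely for illustration.

observed = [55000.0, 55365.2, 55730.4]   # mid-transit times in days (e.g. BJD - 2400000)

t0 = observed[0]
period = (observed[-1] - observed[0]) / (len(observed) - 1)   # 365.2 days

for n in range(3, 6):                     # the next three transits
    print(f"Transit {n + 1} predicted near day {t0 + n * period:.1f}")
```

The catch, of course, is that an ~85 ppm dip is far below what ground-based photometry can reliably reach for a 12th-magnitude star, so predicting the window is much easier than measuring the transit.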
Does the noise being much higher than expected, given the Sun’s activity, indicate that the Sun is unusual in any way?
This “noise” is not a problem, it is DATA. Understanding that some stars have more variability than expected is important in testing stellar models and is exactly the type of unexpected finding that fuels new knowledge. These same observations have implications for Earth-based astronomy as well, and set limits on the reliability of using just a few spectroscopic/photometric measurements to categorize a single star. This is the type of hidden assumption in the centuries-old practice of taking static pictures of stars and assuming that the “stable” star always looks like that! We need to explore this noise phenomenon in depth over many stellar types and with a long-term observation platform – oh, wait a minute, we have that! It is called KEPLER.
To coolstar: do you think you might be getting ahead of yourself? With only the first 4 months of data from Kepler to go by, I wouldn’t assume too much regarding the numbers of Earth-size planets orbiting in the HZ of K-dwarfs, or even M-dwarfs for that matter. Whatever results are contained in the observed 24 months and counting, we are going to have to wait for the Kepler science team news releases.
As for the Kepler science team, well, Bill Borucki has stated they want to be as sure as possible before announcing any potentially habitable planets. The more transits detected, especially for a smaller planet, the higher the confidence in the validity. Precise orbital measurements can provide info about exoplanet mass, and so can transit timing variations if they’re present. All this takes time, years in fact. Yes, at this time radial velocity observations can’t determine small planet masses easily, but they can at least constrain the upper limit of the mass.
NASA funding is always a concern.
Yes yes and instrumental effects are DATA (about the instrument rather than what the instrument was designed to observe, but data nonetheless). Well done.
And yes, if you are interested in trying to find planets, it is a problem.
Just like, say, trying to observe an Earthlike planet around Alpha Centauri suffers from the problem that there is this really bright star in the vicinity which is drowning out the reflected light from the planet with this vast flood of DATA.
One experiment’s noise is another experiment’s data.
@Mike You may be right, but I doubt it. My bet is that if planets in the habitable zone had been detected around K or M stars (there could be 10 or more transits from some M stars, with HUGE s/n), we’d know about it (essentially all the data for transits has been thoroughly screened by now). It would be very hard to keep such a secret, as it’s a large team and I think something would have leaked. I hope I’m wrong about this, but I’d be willing to make a small wager I’m not. After all, that is Kepler’s main mission (though the OTHER science being done is wonderful, and long ignored by most of the American astronomical community).
One recent paper (don’t have the reference handy, sorry) that was mentioned on this blog predicted that 90% of HZ planets would be tidally locked and orbiting K or M stars.
The Kepler folks knew going in that few if any of their low-mass detections (anything the size of the Earth that is a real planet, not the core of a former star, can’t be much denser than the Earth) would get RV confirmation before they were announced. Their vetting process seems very good and no real errors have cropped up yet. This lack of RV confirmation has been criticized by quite a few folks, but I would have thought the quality of the Kepler science would have caused them to shut up by now. I would have been wrong…
yep, if one is extremely lucky, TTV obs can give one the mass over time, but the number of close to earth-mass planets that are going to be found is likely to be so low, that there probably won’t be a single case where that’s important.
I agree with Andy on the utility of a high-precision astrometric mission, though I think at least a few Earth-sized planets in HZs will be confirmed (at a very high confidence level, even if masses aren’t directly known) before any such mission is launched.
So, this is interesting information indeed. Now I can see why the Kepler planet count drops off so steeply below ~2 Re, and why, by implication, this has led some to believe that super-Earths and ice giants are genuinely much more common than the smaller Earth-sized worlds. Yet if Kepler is having a harder time finding smaller planets than the investigators expected, due to larger than expected stellar jitter, then it is hardly a surprise that Kepler is so far finding more super-Earths and ice giants than Earths (let alone sub-Earths).
Is there any chance that the variability data is actually something like… rogue planets? Perhaps, as some have predicted, the data is trying to tell us there are lots of rogue planets floating around, unattached to a star, many more than there are planets captured in stellar orbits.
@ericSECT: If there were so many rogue planets as to cause frequent transits of stars, a considerable fraction of these would have to be total occlusions, in those instances where the planet is much closer than the star. We would see the stars blink off and on, which is not happening. I think that this is one of the ways we conclude that most dark matter cannot be dark stars or planets.
@Eniac: on the other hand the gravity of the objects would tend to cause lensing. Which effect (lensing or occlusion) wins out overall depends on the distances to the lens and the source, and the masses involved.
But rogue planets are still not a good suggestion; it is somewhat difficult to see how they would also cause signs of magnetic activity on the stars in question…
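On the lensing-versus-occultation point above, a hedged sketch: compare the Einstein radius of a hypothetical Jupiter-mass rogue body with its physical radius for a few assumed lens distances (the lens mass and the 200 pc source distance are made up for illustration).

```python
import math

# Hedged illustration: for a free-floating Jupiter-mass body passing in front of
# a background star, compare the Einstein radius (lensing scale) with the body's
# physical radius. Lens distances and the 200 pc source distance are assumed
# purely for illustration.

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_JUP = 1.898e27     # kg
R_JUP = 7.149e7      # m
PC = 3.086e16        # m

def einstein_radius_m(mass_kg, d_lens_m, d_source_m):
    # Point-mass lens, Einstein radius projected into the lens plane
    return math.sqrt(4 * G * mass_kg / C**2 * d_lens_m * (d_source_m - d_lens_m) / d_source_m)

d_source = 200.0 * PC
for d_lens_pc in (1.0, 10.0, 100.0):
    r_e = einstein_radius_m(M_JUP, d_lens_pc * PC, d_source)
    print(f"Lens at {d_lens_pc:5.1f} pc: R_E ~ {r_e / 1e3:.0f} km vs R_Jup ~ {R_JUP / 1e3:.0f} km")
# R_E comes out at hundreds of thousands to millions of km, far larger than the
# planet itself, so magnification rather than a blink-off dominates unless the
# lens sits almost on top of the observer or the source.
```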
There’s an important exoplanet meeting going on this week at Jackson Hole, WY (which somehow I had managed to miss). Check out: http://ciera.northwestern.edu/Jackson2011/program.php
Unfortunately, I can’t find any evidence that any of the talks will be webcast.
The program with abstracts is on line and it looks to be an exciting meeting.
On a different note, I’ve also been remiss in that Kepler already HAS found 4 roughly Earth-radius planets in HZ orbits around K stars. They had originally announced 5, but one has been found to be a false positive. All have periods of roughly 50-120 days and they were announced in the Feb. 2011 data release. The periods are notable since they require much more data (assuming 3 transits) than had been released at that time. Perhaps they’re waiting until more data becomes public to write detailed papers on these objects? If (and it’s a big if) tidally locked HZ planets around K & M stars make up the bulk of HZ planets in the galaxy, this does not bode well for true Earth analogs in the Kepler field.
What % of Kepler stars have had their ages estimated? Young stars have a fair amount of low level variability. There are quite a few open clusters and associations near the Sun (or vice versa) so some of the jitter may be age related.
Why other earths? Because a main part of the mission is to establish how unique our situation is. Also, moons around gas giants may allow life but probably not advanced lifeforms as we have here due to high radiation environments.
Regarding the noise levels, that is something they should have established beforehand by other means, and frankly, these missions are too expensive for there to be that kind of uncertainty. However, the mission would likely have been extended anyway if the hardware remains functional. But regarding the extra money needed: yes, it would be unwise to cut off the funding for an extension; however, it would also be unreasonable to just expect new funding without cutting the NASA budget somewhere else.
@Bob:
Have you perhaps considered that Kepler is a major leap forward in photometric stability and that until it was launched it would have been extremely difficult to measure stellar variability at this level? They did attempt to figure out what the impact of stellar variability would be, and the evidence pre-Kepler suggested that the majority of these stars should be pretty quiet. It turns out there’s a lot we didn’t know about stellar microvariability before Kepler, mainly because we hadn’t got the necessary instruments to observe it. Unfortunately for the planet hunters, things turned out more difficult than they had anticipated; that’s what you get for pushing into unknown territory, sometimes things don’t work out.
But if you know a way to have the 20/20 hindsight before the event, do please tell the rest of us…
Not to jump on the Kepler team, but wasn’t there a specific problem with some of the electronics? Something about the amplifiers that they knew about but thought they could fix in flight, but then could not. IIRC the end result was that the sensitivity would struggle to get much below 80 ppm, while an Earth/Mars-type planet would require 20 ppm.
Wonder if this 6 transit thing is part of that process.
Still, if we can’t get an adequate census of planets of roughly Earth mass, it would be a dispiriting blow. No doubt the Kepler team too would be bitterly disappointed.
Especially since nothing like Kepler is in the pipeline for at least the present generation, it seems.
@TheoA: if you read the paper, the precision is not nearly as bad as you are claiming. From the paper itself “In reality (Christiansen et al. 2010) the CDPP assessed as the mode over 12th magnitude dwarf stars is about 30 ppm, or 50% higher than planned for.” Note that an Earth analogue should have a transit depth of roughly 80 ppm, so this issue is not necessarily fatal provided you do an extended mission.
Section 3.7 and Table 4 give the breakdown of the noise contributions. In particular, the intrinsic detector noise seems to be only slightly worse than anticipated (10.8 vs 10 ppm). Stellar noise gets you to 19.5 ppm (very nearly the entire budgeted noise level) on its own.
The other issue seems to have been quarter-to-quarter changes, which are dominated by events happening in the early part of the mission.
According to Table 4, if the stellar noise had been as predicted, CDPP would have been about 23.6 ppm (the terms add in quadrature), which is still pretty close to the design estimation. Yes, Kepler is somewhat noisier than expected – it’s experimental science, it isn’t all neat and tidy – but this certainly does not appear to be the major limitation, and this extra instrumental noise would not have been fatal to the mission.
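For anyone who wants to check the quadrature arithmetic, here is a minimal sketch using only the figures quoted above; the residual “other” term is back-solved so that the total comes out at 30 ppm, and the ~10 ppm budgeted stellar term is an assumption, so the result only lands near the 23.6 ppm figure from the paper’s fuller Table 4 breakdown.

```python
import math

# Quadrature-sum sketch using only the figures quoted above (10.8 ppm detector,
# 19.5 ppm stellar, ~30 ppm total CDPP). The residual "other" term is back-solved
# to make the total 30 ppm, and the ~10 ppm budgeted stellar term is an assumption,
# so this is illustrative rather than a reproduction of the paper's Table 4.

def quad_sum(*terms_ppm):
    return math.sqrt(sum(t * t for t in terms_ppm))

detector = 10.8
stellar = 19.5
total = 30.0
other = math.sqrt(total**2 - detector**2 - stellar**2)   # ~20.1 ppm (shot noise etc.)

print(f"Residual term: {other:.1f} ppm")
print(f"Check, all terms: {quad_sum(detector, stellar, other):.1f} ppm")    # 30.0
print(f"With ~10 ppm stellar: {quad_sum(detector, 10.0, other):.1f} ppm")   # ~24.9, near the quoted 23.6
```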
Andy,
For the cost of Kepler, the science team should have been able to determine the limits of uncertainty; they could and should have worked out that there was a reasonable possibility that the mission could not be done in three years and been up front about it. They could have sold it as a mission to understand stellar variability, but they did not. I’m sorry, but for six hundred million dollars of taxpayer money the expectations are high and cannot just be shrugged off with an “oh, that’s the way science is” kind of statement. I think big science gets too much of a pass on these things. If a company has to double the time to make a profit, or if a president’s economic policies take twice as long to work, there are consequences. The managers need to be held accountable for their mistakes.
“But if you know a way to have the 20/20 hindsight before the event, do please tell the rest of us…”
Well, why are we in such a hurry? We could have done lesser experiments to get the data we need prior to designing these kinds of missions. It’s not like these stellar systems are going anywhere.
Bob, you miss the fact that Kepler was always designed to do asteroseismology (some stars, particularly earlier-type stars, are observed with a cadence that is suited for asteroseismology but less good for transit observations); photometric missions do both jobs. The public, on the other hand, is probably more interested in the planet angle, so this aspect was emphasised.
As for your demands that they should have known beforehand, they did do the analysis based on the experimental data that was available at the time. The issue is that to get the data you would need to have known about this level of stellar variability, you basically need something pretty close to what the Kepler mission actually is. Your lesser experiment would be pretty much worthless; the cost saving would be pretty minor for the “lesser experiment” versus Kepler, and then you’d have to go to the additional expense of launching a follow-up mission (space launches are expensive, you know!). But please show us your design for your lesser mission, and show how the loss of scientific ability for your mission justifies whatever cost savings you are making. Or are you just an armchair whinger?
(And hey, we can easily reduce the cost of the Kepler mission: just shut it down earlier and save on the operation costs. If you’re so outraged by a scientific experiment discovering something you don’t like, why not write to NASA to demand it gets shut down now to save on operational costs. You’d be in great company with this kind of attitude, remember to say hi to your new buddies in the creationist, global warming denialist and antivaxxer communities.)
Welcome to experimental science, the results aren’t always to your liking. If we followed your approach of “I don’t like the result of doing the experiment, heads must roll”, we’d get nowhere.
Andy writes
“But please show us your design for your lesser mission, and show how the loss of scientific ability for your mission justifies whatever cost savings you are making. Or are you just an armchair whinger?”
Andy, I might accept the first part of your argument, but then you ruin your post by getting all upset. Are you on the Kepler project? If not, how are you not an “armchair whinger”? Is this site only for PhD scientists who work in the field? If so, please let me know.
Because they did not assume a higher margin of error to account for greater uncertainty, it remains a fact that the planned mission has to be greatly extended. Did I say they should be cut off? No, I said the reverse, but why should I give them an uncritical pass? You seem to think science and scientists are above criticism. Or you may assume that criticism implies a lack of support. Neither is true. I am simply disappointed because I am a huge fan of the Kepler project. I want it to succeed. Now I worry that the probe may not last as long as they need it to. What is the probability that the probe will last the full eight-year mission now required, let alone get the funding if it does? Could the probe be pointed to another portion of the galaxy to do a new extended mission after this one ends?
“You’d be in great company with this kind of attitude, remember to say hi to your new buddies in the creationist, global warming denialist and antivaxxer communities.”
Likewise, I find the above statement completely unjustified. I am simply a person with a long-standing interest in the interstellar problem; I have bought Mr. Gilster’s book, and many others, and find his web site interesting.
Bob, I’m not on the Kepler project, but then I’m not the one calling for heads to roll because the experiment found something unexpected. And as for your claim that I believe scientists should be above criticism, this is not the case. I just do not feel it is fair to criticise them for something they could not have known in advance. They did do the job of estimating whether the mission could succeed, and according to the estimates using the data and models they had available, the mission would have done its job. Then they ran into something unexpected (almost a factor of 2 in the stellar noise), and they ran into it unexpectedly because they had the first instrument that could probe these levels of photometric precision. And you demand a witch hunt because they failed to have your 20/20 hindsight during the design phase.
Andy,
I am hearing two things from you. First, that they knew they were going to get data on the variability at a level never before achieved; and second, that they also designed the instrument and mission with the best variability data they had at the time, which they knew in advance was not as complete as it would be once the new data came in. Fair enough?
I never intended to claim they should have known exactly what the actual variability would be, but only that they could have framed the mission to account for the possibility they might be surprised, since that is such a crucial part of the design of the whole mission. The heart of the mission is critically dependent on these data capabilities and understandings, and it is unusual to me as an engineer that someone did not think, “What if these variability limits are off, since we know in advance that we are designing our mission based on current limited data which may differ from the better data the new capabilities of this instrument will give us? How then would that impact the mission design?” The design of the probe might have remained the same, but the mission might have been designed – and thus sold and *funded* – for the eight years. Now they are some $50 million short. I do not believe I am expecting perfection or 20/20 hindsight.
I did say managers should be accountable, but I do not demand an outside witch hunt or a congressional investigation. Still, someone responsible for that aspect of the mission planning will likely resign or be reassigned because, frankly, from an organizational point of view, NASA must be somewhat embarrassed by this and will likely react accordingly.