In today’s world, one of the more useful gifts for a scientist to have is the ability to save money. Enter the Jet Propulsion Laboratory’s Wesley Traub, who copes with problems like NASA’s indefinite hold on Terrestrial Planet Finder with a low-cost alternative of his own. Last year, Traub and crew experimented with the Solar Bolometric Imager, an observatory lofted by a balloon to altitudes of 35 kilometers and more. Their study of air distortions at those altitudes convinced Traub that the balloon’s movements through the stratosphere would not distort received images, and that led to speculation about doing exoplanet science close to home.
A balloon-based TPF? Hardly, but Traub does talk about imaging perhaps twenty exoplanets, according to a recent story in New Scientist. The method: a coronagraph teamed with a one- to two-meter mirror. The so-called Planetscope weighs in at $10 million, making it a bargain compared to space-based observatories, and cheap enough to tempt experimentation, even though a full-blown space mission would obviously offer far higher performance.
Traub’s presentation to the American Astronomical Society’s annual meeting in January explained how Planetscope’s coronagraph could block the glare of the stars being investigated while letting light from their planets through, opening up the possibility of spectroscopic studies of distant atmospheres. With both TPF and Darwin coping with budgetary issues, not to mention technological questions still in need of resolution, Planetscope could become a useful stop-gap, even as upcoming observatories open up new options from the ground.
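To get a sense of the challenge any coronagraph faces, consider the brightness gap between a star and its planet. A back-of-the-envelope sketch (my own illustrative numbers, not figures from Traub’s presentation):

```python
# Rough contrast a coronagraph must overcome -- illustrative only.
# Ignoring the phase function, the reflected-light flux ratio is roughly
# A_g * (R_p / a)^2 for a planet of radius R_p and geometric albedo A_g
# orbiting at distance a from its star.
A_g = 0.2        # geometric albedo, Earth-like
R_p = 6.37e6     # planet radius in meters
a = 1.5e11       # orbital distance in meters (1 AU)
print(f"planet/star flux ratio ~ {A_g * (R_p / a) ** 2:.1e}")  # ~ 3.6e-10
```

A Jupiter-class planet fares better, at a contrast nearer a few parts in a billion, which presumably makes giant planets the natural first targets for an instrument this modest.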
Not nearly as inexpensive, but still well below some Terrestrial Planet Finder estimates, is another mission Traub has championed: the Small Prototype Planet Finding Interferometer (SPPFI). Here we’re talking about a passively cooled two-telescope space interferometer operating at the L2 point in near- to mid-infrared wavelengths. The team investigating this one believes it can be used to study the atmospheres of non-transiting exoplanets and perhaps taken still further:
Clearly such a mission concept has sufficient sensitivity to detect and characterize a broad range of extrasolar planets. If the telescopes are somewhat larger than has been discussed in some of the existing mission concepts (e.g., 1-2 m) and are somewhat cooler (e.g., < 60K) so that the interferometer can operate at longer wavelengths, it is possible for the SPPFI system to detect earth-like planets around the nearest stars. This is especially important now that there is an increasing belief that lower mass planets are very common, based on the detection of the 5.5 Earth mass planet using the microlensing approach...
A mission like this one comes with a price tag of $600 to $800 million and offers the opportunity to study not just exoplanets but the debris disks around stars we’ll later want to examine with missions like Darwin and TPF. We’re clearly ready for these next steps. Radial velocity methods have steadily improved, to the point where we can now find planets not only of Saturn mass but of Neptune and Uranus mass as well, with further improvements expected in the near future. Microlensing and transit studies both offer the chance to spot smaller, rocky worlds. Given our budgetary constraints, concepts that can take us to the next level and set the table for the breakthrough observatories we all hope for have to be affordable.
Which is why work like what Traub’s team is doing deserves your attention. Think affordability. Right now NASA has funded nineteen teams to study future observatories, a total of $12 million in fiscal 2008 and 2009 that includes a concept Centauri Dreams has always admired both for its technology and its budget: Webster Cash’s New Worlds Observer. Also under scrutiny is a study of direct imaging of giant planets around nearby stars using 2-meter class optical space telescopes. Results from these latter mission studies are expected in March of 2009.
For more on the work of Traub’s team, see Danchi et al., “Towards a Small Prototype Planet Finding Interferometer: The next step in planet finding and characterization in the infrared,” a white paper for the AAAC Exoplanet Task Force. On 2-meter class optical space telescopes, see especially Stapelfeldt et al., “First Steps in Direct Imaging of Planetary Systems Like our Own: The Science Potential of 2-m Class Optical Space Telescopes,” also submitted to the AAAC Exoplanet Task Force (abstract). We’ll follow all these mission concepts as they make their way through the system.
A couple of questions:
1) How would planet-finding telescopes in Antarctica (Dome A, Dome C, etc.) compare in cost and efficacy to the cheap high-atmosphere scopes? They may cost more to set up at first, but given their location, there would be great incentive to keep the running costs down!
2) How much of the cost of a space telescope mission is due to the required quality of the components and the inordinate amount of testing required? I’m curious because if there were a way to significantly reduce launch costs (by a factor of 10 at least, say by using the hypothesized space elevator), then while a mission failure would still be bad, the much lower cost and high availability of such a launch system would make it much cheaper to try again. So if the cost and risks of a launch failure were much lower, could you lower the cost of building and testing the mission as well?
Tacitus, I don’t have any numbers on the cost of Antarctica vs. high-atmosphere experiments; maybe someone else can weigh in on that. Re the cost of missions and reducing launch costs, you’re certainly right that lower cost to space would work wonders on overall expenditures. An Atlas V will run you around $130 million upfront, and that’s before you even begin totaling up the cost of the observatory you want to lift. But I don’t think the testing required for component quality would be eased much; so much of this depends on the environment these craft operate in, and the testing is mandatory to make sure they can perform their mission and, let’s hope, an extended mission beyond the primary. I’d love to see that space elevator, though, which would make all our near-Earth operations so much more efficient and reduce at least one key cost component.
A Characteristic Planetary Feature in Double-Peaked, High-Magnification Microlensing Events
Authors: Cheongho Han (CBNU, Korea), B. Scott Gaudi (OSU)
(Submitted on 8 May 2008)
Abstract: A significant fraction of microlensing planets have been discovered in high-magnification events, and a significant fraction of these events exhibit a double-peak structure at their peak. However, very wide or very close binaries can also produce double-peaked high-magnification events, with the same gross properties as those produced by planets. Traditionally, distinguishing between these two interpretations has relied upon detailed modeling, which is both time-consuming and generally does not provide insight into the observable properties that allow discrimination between these two classes of models. We study the morphologies of these two classes of double-peaked high-magnification events, and identify a simple diagnostic that can be used to immediately distinguish between perturbations caused by planetary and binary companions, without detailed modeling. This diagnostic is based on the difference in the shape of the intra-peak region of the light curves. The shape is smooth and concave for binary lensing, while it tends to be either boxy or convex for planetary lensing.
In planetary lensing this intra-peak morphology is due to the small, weak cusp of the planetary central caustic located between the two stronger cusps. We apply this diagnostic to five observed double-peaked high-magnification events to infer their underlying nature. A corollary of our study is that good coverage of the intra-peak region of double-peaked high-magnification events is likely to be important for their unique interpretation.
Comments: 6 pages, 3 figures
Subjects: Astrophysics (astro-ph)
Cite as: arXiv:0805.1103v1 [astro-ph]
Submission history
From: Cheongho Han
[v1] Thu, 8 May 2008 05:50:45 GMT (276kb)
http://arxiv.org/abs/0805.1103
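The shape diagnostic described in the abstract is simple enough to automate. Here is a minimal sketch (my own illustration, not the authors’ code) that classifies the intra-peak region by the sign and size of its average curvature, assuming “concave” means a smooth U-shaped trough between the two peaks:

```python
import numpy as np

# Minimal sketch of the intra-peak shape diagnostic -- my own illustration,
# not the authors' code. Assumption: binary lenses give a smooth U-shaped
# trough between the two peaks (positive curvature), while planetary lenses
# give a flat-bottomed ("boxy") or upward-bulging ("convex") region.
def classify_intra_peak(t, mag, boxy_tol=0.05):
    """Classify the region between the two peaks of a double-peaked event.

    t, mag   -- 1-D arrays covering only the region between the two peaks
    boxy_tol -- curvature smaller than this relative size counts as boxy
    """
    curv = np.polyfit(t, mag, 2)[0]        # mean curvature via parabola fit
    span = float(mag.max() - mag.min()) or 1.0
    scale = span / (t.max() - t.min()) ** 2  # sets a relative threshold
    if curv > boxy_tol * scale:
        return "concave trough -> binary-like"
    return "boxy or convex -> planet-like"

# Toy check: a U-shaped trough should classify as binary-like.
t = np.linspace(-1.0, 1.0, 50)
print(classify_intra_peak(t, 5.0 + 3.0 * t**2))
```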
http://www.technologyreview.com/blog/arxiv/24075/
Thursday, September 03, 2009
Astronomers Turn To Omniscopes For Low-Cost Observation
Omniscopes promise omnidirectional, omnichromatic astronomy at reasonable cost.
Astronomers want bigger and better telescopes. That’s understandable. But in the world of radio telescopes, there’s a problem looming: Greater sensitivity requires a bigger surface area, and the cost for a steerable single-dish telescope grows with area faster than linearly. So really big dishes are just too expensive to build.
That’s why astronomers are interested in the much cheaper approach of connecting many smaller dishes together to form an interferometer. Such an array of dishes can be as large as you like. The problem here is that in an interferometer, the signals from each dish have to be correlated with all the others, and the computational cost of this rises quadratically; that is, the cost is proportional to the square of the number of dishes.
That soon becomes prohibitive, so astronomers have looked at two cost-cutting measures, say Max Tegmark at MIT in Cambridge and Matias Zaldarriaga at the Institute for Advanced Study in Princeton, N.J.
The first is to divide the array into groups, each considered a single element. This cuts the correlation cost from N^2 to (N/M)^2, at the price of reducing the sky area covered by a factor M. So the savings comes from ignoring part of the sky and not having to compute it.
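Some illustrative numbers (mine, not from the paper) show the scale of that bargain:

```python
# Illustrative only: grouping N antennas into stations of M cuts the
# correlation cost from N^2 to (N/M)^2 -- a factor of M^2 -- while the
# sky area covered drops by only a factor of M.
N, M = 10_000, 100
full_cost = N ** 2             # correlate every pair of antennas
grouped_cost = (N // M) ** 2   # correlate the N/M stations instead
print(f"cost: {full_cost:.0e} -> {grouped_cost:.0e}, "
      f"{M}x less sky for a {M**2}x saving")
```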
The second is to arrange the antennae into a rectangular grid that can be correlated using fast Fourier transforms. This reduces the computational cost from N^2 to N log2 N at the price of lower resolution.
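To see why a regular grid makes this possible, here is a toy demonstration (my own sketch, not the paper’s pipeline): on a 1-D grid, the sum of pairwise correlations at each baseline separation is exactly an autocorrelation, which a single FFT computes in N log N operations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                            # antennas on the grid
x = rng.normal(size=N) + 1j * rng.normal(size=N)  # one sample per antenna

# Brute force: correlate every pair of antennas, cost ~ N^2.
brute = np.zeros(2 * N - 1, dtype=complex)
for i in range(N):
    for j in range(N):
        brute[(i - j) + N - 1] += x[i] * np.conj(x[j])

# FFT route: zero-pad, transform, multiply, invert -- cost ~ N log N.
X = np.fft.fft(x, 2 * N - 1)
fast = np.fft.ifft(X * np.conj(X))
fast = np.concatenate([fast[-(N - 1):], fast[:N]])  # reorder lags to match

assert np.allclose(brute, fast)  # same visibilities, far fewer operations
```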
The lower resolution is not such a bad trade-off. The question Tegmark and Zaldarriaga ask is which shapes of array can benefit from this N log2 N computational cost improvement.
Their answer is a surprisingly large class of arrays. It turns out that not only rectangular arrays but arbitrary combinations of grids should benefit.
That should make it possible to build arrays of dishes of almost any size and still benefit from the N log2 N computational cost. Tegmark and Zaldarriaga say,
“This opens up the possibility of getting the best of both worlds, combining affordable signal processing with baseline coverage tailored to specific scientific needs.”
Such huge arrays would be omnidirectional and omnichromatic, so Tegmark and Zaldarriaga coin the term “omniscopes” to describe them.
Ref: http://arxiv.org/abs/0909.0001: Omniscopes: Large Area Telescope Arrays with only N log N Computational Cost