Centauri Dreams
Imagining and Planning Interstellar Exploration
Good News for a Gravitational Focus Mission
We’ve talked about the ongoing work at the Jet Propulsion Laboratory on the Sun’s gravitational focus at some length, most recently in JPL Work on a Gravitational Lensing Mission, where I looked at Slava Turyshev and team’s Phase II report to the NASA Innovative Advanced Concepts office. The team is now deep into the work on their Phase III NIAC study, with a new paper available in preprint form. Dr. Turyshev tells me it can be considered a summary as well as an extension of previous results, and today I want to look at the significance of one aspect of this extension.
There are numerous reasons for getting a spacecraft to the distance needed to exploit the Sun’s gravitational lens – where the mass of our star bends the light of objects behind it to produce a lens with extraordinary properties. The paper, titled “Resolved Imaging of Exoplanets with the Solar Gravitational Lens,” notes that at optical or near-optical wavelengths, the amplification of light is on the order of ~2 × 10¹¹, with equally impressive angular resolution. If we can reach this region beginning at 550 AU from the Sun, we can perform direct imaging of exoplanets.
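These numbers are easy to sanity-check. Using the standard expressions for the solar gravitational lens – an on-axis light amplification of roughly 4π²r_g/λ, and a focal distance of b²/2r_g for light grazing the Sun at impact parameter b – a few lines of Python recover both the ~10¹¹ amplification and the 550 AU figure. This is a back-of-envelope check with textbook constants, not the paper’s full wave-optics treatment:

```python
import math

# Back-of-envelope check on the SGL numbers quoted above, using the
# standard expressions: on-axis amplification mu ~ 4*pi^2 * r_g / lambda,
# and focal distance F = b^2 / (2 * r_g) for light grazing the Sun at
# impact parameter b.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m
AU = 1.496e11          # astronomical unit, m

r_g = 2 * G * M_sun / c**2          # Schwarzschild radius of the Sun, ~2.95 km
wavelength = 550e-9                 # optical

amplification = 4 * math.pi**2 * r_g / wavelength
focal_distance_AU = R_sun**2 / (2 * r_g) / AU

print(f"amplification ~ {amplification:.1e}")
print(f"focal line begins ~ {focal_distance_AU:.0f} AU from the Sun")
```

Both outputs land where the paper says they should: an amplification of order 2 × 10¹¹ at optical wavelengths, and a focal region beginning around 550 AU.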
We’re talking multi-pixel images, and not just of huge gas giants. Images of planets the size of Earth around nearby stars, in the habitable zone and potentially life-bearing.
Other methods of observation give way to the power of the solar gravitational lens (SGL) when we consider that, according to Turyshev and co-author Viktor Toth’s calculations, to get a multi-pixel image of an Earth-class planet at 30 parsecs with a diffraction-limited telescope, we would need an aperture of 90 kilometers, hardly a practical proposition. Optical interferometers, too, are problematic, for even they require long baselines and apertures in the tens of meters, each equipped with its own coronagraph (or conceivably a starshade) to block stellar light. As the paper notes:
Even with these parameters, interferometers would require integration times of hundreds of thousands to millions of years to reach a reasonable signal-to-noise ratio (SNR) of ≳ 7 to overcome the noise from exo-zodiacal light. As a result, direct resolved imaging of terrestrial exoplanets relying on conventional astronomical techniques and instruments is not feasible.
Integration time is essentially the time it takes to gather all the data that will result in the final image. Obviously, we’re not going to send a mission to the gravitational lensing region if it takes a million years to gather up the needed data.
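To see where a number like the 90-kilometer aperture comes from, apply the Rayleigh criterion to an Earth-size planet at 30 parsecs. The sketch below uses illustrative values of my own choosing rather than the paper’s exact inputs:

```python
import math

# Why a conventional telescope is hopeless here: the Rayleigh criterion
# applied to an Earth-size planet at 30 parsecs (illustrative numbers).
wavelength = 550e-9            # m, optical
d_earth = 1.274e7              # m, Earth's diameter
pc = 3.086e16                  # m
distance = 30 * pc

theta_planet = d_earth / distance                    # angle the disk subtends
aperture_1 = 1.22 * wavelength / theta_planet        # one resolution element
aperture_2 = 1.22 * wavelength / (theta_planet / 2)  # two elements across

print(f"planet subtends {theta_planet:.1e} rad")
print(f"aperture to resolve the disk at all: ~{aperture_1/1e3:.0f} km")
print(f"aperture for two pixels across it:  ~{aperture_2/1e3:.0f} km")
```

Merely resolving the disk demands a mirror tens of kilometers across; placing even two resolution elements across it pushes the requirement toward the ~90 kilometers the paper cites.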
Image: Various approaches will emerge about the kind of spacecraft that might fly a mission to the gravitational focus of the Sun. In this image (not taken from the Turyshev et al. paper), swarms of small solar sail-powered spacecraft are depicted that could fly to a spot where our Sun’s gravity distorts and magnifies the light from a nearby star system, allowing us to capture a sharp image of an Earth-like exoplanet. Credit: NASA/The Aerospace Corporation.
But once we reach the needed distance, how do we collect an image? Turyshev’s team has been studying the imaging capabilities of the gravitational lens and analyzing its optical properties, allowing the scientists to model the deconvolution of an image acquired by a spacecraft at these distances from the Sun. Deconvolution means undoing the blurring the optics impose, recovering sharpness and contrast, much as we remove atmospheric effects from images taken from the ground.
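To make the idea concrete, here is a toy one-dimensional Wiener deconvolution – a simple stand-in for the far more sophisticated processing the team models. The Gaussian PSF and noise level are arbitrary choices for illustration; the real SGL point spread function is quite different:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "scene": two point sources on a dark background.
scene = np.zeros(256)
scene[[80, 100]] = 1.0

# A broad Gaussian stands in for the instrument PSF. (The real SGL PSF
# is far wider and differently shaped; this is purely illustrative.)
x = np.arange(256)
psf = np.exp(-0.5 * ((x - 128) / 6.0) ** 2)
psf /= psf.sum()

# Blur the scene with the PSF and add noise: a mock observation.
H = np.fft.fft(np.fft.ifftshift(psf))
observed = np.real(np.fft.ifft(np.fft.fft(scene) * H))
observed += rng.normal(0, 1e-3, scene.size)

# Wiener deconvolution: invert the PSF in Fourier space, with a small
# regularizer so frequencies where |H| is tiny don't amplify the noise.
W = np.conj(H) / (np.abs(H) ** 2 + 1e-4)
recovered = np.real(np.fft.ifft(np.fft.fft(observed) * W))

# The two sources re-emerge as distinct peaks near their true positions.
left_peak = int(np.argmax(recovered[:90]))
right_peak = 90 + int(np.argmax(recovered[90:]))
print(left_peak, right_peak)
```

The two blurred-together sources separate cleanly after the filter is applied, which is the essence of what deconvolution buys us: structure hidden in the convolved, noisy data becomes recoverable.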
All of this becomes problematic when we’re using the Sun’s gravitational lens, for we are observing exoplanet light in the form of an ‘Einstein ring’ around the Sun, where lensed light from the background object appears in the form of a circle. This runs into complications from the Sun’s corona, which produces significant noise in the signal. The paper examines the team’s work on solar coronagraphs to block coronal light while letting through light from the Einstein ring. An annular coronagraph aboard the spacecraft seems a workable solution. For more on this, see the paper.
An earlier study analyzed the solar corona’s role in reducing the signal-to-noise ratio, which extended the time needed to integrate the full image. In that work, the time needed to recover a complex multi-pixel image from a nearby exoplanet was well beyond the scope of a practical mission. But the new paper presents an updated model of the solar corona whose results have been validated in numerical simulations under various methods of deconvolution. What leaps out here is the issue of pixel spacing in the image plane. The results demonstrate that a mission for high resolution exoplanet imaging is, in the authors’ words, ‘manifestly feasible.’
Pixel spacing is an issue because of the size of the image we are trying to recover. The image of an exoplanet the size of the Earth at 1.3 parsecs, which is essentially the distance of Proxima Centauri from the Earth, when projected onto an image plane at 1200 AU from the Sun, is almost 60 kilometers wide. We are trying to create a megapixel image, and must take account of the fact that individual image pixels are not adjacent. In this case, they are 60 meters apart. It turns out that this actually reduces the integration time of the data to produce the image we are looking for.
From the paper [italics mine]:
We estimated the impact of mission parameters on the resulting integration time. We found that, as expected, the integration time is proportional to the square of the total number of pixels that are being imaged. We also found, however, that the integration time is reduced when pixels are not adjacent, at a rate proportional to the inverse square of the pixel spacing.
Consequently, using a fictitious Earth-like planet at the Proxima Centauri system at z0 = 1.3 pc from the Earth, we found that a total cumulative integration time of less than 2 months is sufficient to obtain a high quality, megapixel scale deconvolved image of that planet. Furthermore, even for a planet at 30 pc from the Earth, good quality deconvolution at intermediate resolutions is possible using integration times that are comfortably consistent with a realistic space mission.
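The geometry behind the 60-kilometer figure is simple projection through the lens, and it is easy to reproduce with Earth’s diameter and the distances quoted above (a sketch, not the paper’s calculation):

```python
AU = 1.496e11        # m
pc = 3.086e16        # m

d_planet = 1.274e7   # m, an Earth-diameter planet
z0 = 1.3 * pc        # distance to the planet
z = 1200 * AU        # spacecraft's distance from the Sun

image_diameter = d_planet * z / z0   # simple projection through the lens
n_side = 1000                        # ~megapixel image
pixel_spacing = image_diameter / n_side

print(f"projected image ~ {image_diameter/1e3:.0f} km across")
print(f"pixel spacing   ~ {pixel_spacing:.0f} m")
```

Per the quoted result, integration time grows as the square of the total pixel count but shrinks as the inverse square of the pixel spacing, so this sparse grid of samples roughly 60 meters apart works in the mission’s favor.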
Image: This is Figure 5 from the paper. In the caption, PSF refers to the Point Spread Function, which is essentially the response of the light-gathering instrument to the object studied. It measures how much the light has been distorted by the instrument. Here the SGL itself is considered as the source of the distortion. The full caption: Simulated monochromatic imaging of an exo-Earth at z0 = 1.3 pc from z = 1200 AU at N = 1024 × 1024 pixel resolution using the SGL. Left: the original image. Middle: the image convolved with the SGL PSF, with noise added at SNRC = 187, consistent with a total integration time of ≈47 days. Right: the result of deconvolution, yielding an image with SNRR = 11.4. Credit: Turyshev et al.
The solar gravity lens presents itself not as a single focal point but as a cylinder, meaning that we can stay within the focus as we move further from the Sun. The authors find that as the spacecraft moves ever further out, the signal-to-noise ratio improves. The improvement persists even with the shorter integration times, allowing us to study effects like planetary rotation. This is, of course, ongoing work, but these results cannot but be seen as encouraging for the concept of a mission to the gravity focus, giving us priceless information for future interstellar probes.
The paper is Turyshev & Toth, “Resolved imaging of exoplanets with the solar gravitational lens,” available for now only as a preprint. The Phase II NIAC report is Turyshev et al., “Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission,” Final Report NASA Innovative Advanced Concepts Phase II (2020). Full text.
NASA Interstellar Probe: Overview and Prospects
A recent paper in Acta Astronautica reminds me that the Mission Concept Report on the Interstellar Probe mission has been available on the team’s website since December. Titled Interstellar Probe: Humanity’s Journey to Interstellar Space, this is the result of lengthy research out of Johns Hopkins Applied Physics Laboratory under the aegis of Ralph McNutt, who has served as principal investigator. I bring the mission concept up now because the new paper draws directly on the report and is essentially an overview to the community about the findings of this team.
We’ve looked extensively at Interstellar Probe in these pages (see, for example, Interstellar Probe: Pushing Beyond Voyager and Assessing the Oberth Maneuver for Interstellar Probe, both from 2021). The work on this mission anticipates the Solar and Space Physics 2023-2032 Decadal Survey, and presents an analysis of what would be the first mission designed from the top down as an interstellar craft. In that sense, it could be seen as a successor to the Voyagers, but one expressly made to probe the local interstellar medium, rather than reporting back on instruments designed originally for planetary science.
The overview paper is McNutt et al., “Interstellar probe – Destination: Universe!,” a title that recalls (at least to me) A. E. van Vogt’s wonderful collection of short stories by the same name (1952), whose seminal story “Far Centaurus” so keenly captures the ‘wait’ dilemma; i.e., when do you launch when new technologies may pass the craft you’re sending now along the way? In the case of this mission, with a putative launch date at the end of the decade, the question forces us into a useful heuristic: Either we keep building and launching, or we sink into a stasis that drives little technological innovation. But what is the pace of such progress?
I say build and fly if at all feasible. Whether this mission, whose charter is basically “[T]o travel as far and as fast as possible with available technology…” gets the green light will be determined by factors such as the response it generates within the heliophysics community, how it fares in the upcoming decadal report, and whether this four-year engineering and science trade study can be implemented in a tight time frame. All that goes to feasibility. It’s hard to argue against it in terms of heliophysics, for what better way to study the Sun than through its interactions with the interstellar medium? And going outside the heliosphere to do so makes it an interstellar mission as well, with all that implies for science return.
Image: This is Figure 2-1 from the Mission Concept Report. Caption: During the evolution of our solar system, its protective heliosphere has plowed through dramatically different interstellar environments that have shaped our home through incoming interstellar gas, dust, plasma, and galactic cosmic rays. Interstellar Probe on a fast trajectory to the very local interstellar medium (VLISM) would represent a snapshot to understand the current state of our habitable astrosphere in the VLISM, to ultimately be able to understand where our home came from and where it is going. Credit: Johns Hopkins Applied Physics Laboratory.
Crossing through the heliosphere to the “Very Local” interstellar medium (VLISM) is no easy goal, especially when the engineering requirements to meet the decadal survey specifications take us to a launch no later than January of 2030. Other basic requirements include the ability to take and return scientific data from 1000 AU (with all that implies about long-term function in instrumentation), with power levels no more than 600 W at the beginning of the mission and no more than half of that at its end, and a mission working lifetime of 50 years. Bear in mind that our Voyagers, after all these years, are currently at 155 and 129 AU respectively. A successor to Voyager will have to move much faster.
But have a look at the overview, which is available in full text. Dr. McNutt tells me that we can expect a companion paper from Pontus Brandt (likewise at APL) on the science aspects of the larger Mission Concept Report; this is also slated for publication in Acta Astronautica. According to McNutt, the APL contract from NASA’s Heliophysics Division concludes on April 30 of this year, so the ball now lands in the court of the Solar and Space Physics Decadal Survey Team. And let me quote his email:
“Reality is never easy. I have to keep reminding people that the final push on a Solar Probe began with a conference in 1977, many studies at JPL through 2001, then studies at APL beginning in late 2001, the Decadal Survey of that era, etc. etc. with Parker Solar Probe launching in August 2018 and in the process now of revolutionizing our understanding of the Sun and its interaction with the interplanetary medium.”
Image: This is Figure 2-8 from the Mission Concept Report. Caption: Recent studies suggest that the Sun is on the path to leave the LIC [Local Interstellar Cloud] and may be already in contact with four interstellar clouds with different properties (Linsky et al., 2019). (Left: Image credit to Adler Planetarium, Frisch, Redfield, Linsky.)
Our society has all too little patience with decades-long processes, much less multi-generational goals. But we do have to understand how long it takes missions to go through the entire sequence before launch. It should be obvious that a 2030 launch date sets up what the authors call a ‘technology horizon’ that forces realism with respect to the physics and material properties at play here. Note this, from the paper:
…the enforcement of the “technology horizon” had two effects: (1) limit thinking to what can “be done now with maybe some “‘minor’ extensions” and (2) rule out low-TRL [technology readiness level] “technologies” which (1) we have no real idea how to develop, e.g., “fusion propulsion” or “gas-core fission”, or which we think we know how to develop but have no means of securing the requisite funds, e.g., NEP (while some might argue with this assertion, the track record to date does not argue otherwise).
Thus the dilemma of interstellar studies. Opportunities to fund and fly missions are sparse, political support always problematic, and deadlines shape what is possible. We have to be realistic about what we can do now, while also widening our thinking to include the kind of research that will one day pay off in long-term results. Developing and nourishing low-TRL concepts has to be a vital part of all this, which is why think tanks like NASA’s Innovative Advanced Concepts office are essential, and why likewise innovative ideas emerging from the commercial sector must be considered.
Both tracks are vital as we push beyond the Solar System. McNutt refers to a kind of ‘relay race’ that began with Pioneer 10 and has continued through Voyagers 1 and 2. A mission dedicated to flying beyond the heliopause picks up that baton with an infusion of new instrumentation and science results that take us “outward through the heliosphere, heliosheath, and near (but not too near) interstellar space over almost five solar cycles…” Studies like these assess the state of the art (over 100 mission approaches are quantified and evaluated), defining our limits as well as our ambitions.
The paper is McNutt et al., “Interstellar probe – Destination: Universe!” Acta Astronautica Vol. 196 (July 2022), 13-28 (full text).
Toward a Multilayer Interstellar Sail
Centauri Dreams tracks ongoing work on beamed sails out of the conviction that sail designs offer us the best hope of reaching another star system within this century, or at least, the next. No one knows how this will play out, of course, and a fusion breakthrough of spectacular nature could shift our thinking entirely – so, too, could advances in antimatter production, as Gerald Jackson’s work reminds us. But while we continue the effort on all alternative fronts, beamed sails currently have the edge.
On that score, take note of a soon-to-be-available two-volume set from Philip Lubin (UC-Santa Barbara), which covers the work he and his team have been doing under the name Project Starlight and DEEP-IN for some years now. This is laser-beamed propulsion to a lightsail, an idea picked up by Breakthrough Starshot and central to its planning. The Path to Transformational Space Exploration pulls together Lubin and team’s work for NASA’s Innovative Advanced Concepts office, as well as work funded by donors and private foundations, for a deep dive into where we stand today. The set is expensive, lengthy (over 700 pages) and quite technical, definitely not for casual reading, but those of you with a nearby research library should be aware of it.
Just out in the journal Communications Materials is another contribution, this one examining the structure and materials needed for a lightsail that would fly such a mission. Giovanni Santi (CNR/IFN, Italy) and team are particularly interested in the question of layering the sail in an optimum way, not only to ensure the best balance between efficiency and weight but also to find the critical balance between reflectance and emissivity, because we have to build a sail that can survive its acceleration.
What this boils down to is that we are assuming a laser phased array producing a beam that is applied to an extremely thin, lightweight structure, with the intention of reaching a substantial percentage (20 percent) of lightspeed. The laser flux at the spacecraft is, the paper notes, on the order of 10–100 GW m⁻², with the sail no further from the Earth than the Moon. Lubin’s work at UC-Santa Barbara (he is a co-author here) has demonstrated a directed energy source with emission at 1064 nm.
Thermal issues are critical. The sail has to survive the temperature increases of the acceleration phase, so we need materials that offer high reflectance as well as the ability to blow off that heat, meaning high emissivity in the infrared. The Santa Barbara laboratory work has used ytterbium-doped fiber laser amplifiers in what the paper describes as a ‘master oscillator phased array’ topology. And this gets fascinating from the relativistic point of view. Remember, we are trying to produce a spacecraft that can make the journey to a nearby star in a matter of decades, so relativistic effects have to be considered.
In terms of the sail itself, that means that our high-speed spacecraft will quickly see a longer wavelength in the beamed energy than was originally emitted. This is true despite the fact that the period of acceleration is short, on the order of minutes.
The authors suggest that there are two ways to cope with this. The laser can shorten its emission wavelength as the spacecraft accelerates, meaning that the received wavelength is constant. But the paper focuses on a second option: Make the reflecting surface broadband enough to allow a fairly large range of received wavelengths.
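The size of the shift is easy to quantify with the longitudinal relativistic Doppler formula. A sketch using the 1064 nm wavelength from the Santa Barbara work (the velocity sampling is my own):

```python
import math

# Relativistic longitudinal Doppler shift as the sail recedes, assuming
# the 1064 nm source mentioned above.
lam0 = 1064e-9   # emitted wavelength, m
for beta in (0.05, 0.10, 0.20):
    lam = lam0 * math.sqrt((1 + beta) / (1 - beta))
    print(f"v = {beta:.2f}c: received wavelength {lam*1e9:.0f} nm")
```

By 0.2c the received wavelength has stretched past 1300 nm, more than 20 percent longer than emitted, which is why the reflective stack must stay efficient across a broad band.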
Thus the core of this paper is an analysis of the materials possible for such a sail and their thermal properties, keeping this wavelength change in mind, while at the same time studying – after the operative laser wavelength is determined – how the structure of the lightsail can be engineered to survive the extremities of the acceleration phase.
Image: This is Figure 1 from the paper. Caption: The red arrows denote the incident, transmitted and reflected laser power, while the violet ones indicate the thermal radiation leaving the structure from the front and back surfaces. The surface area is modeled as αD²; α = 1 for a square lightsail of side D and α = π/4 for a circular lightsail of diameter D. Credit: Santi et al.
The paper considers a range of possible materials for the sail, all of them low in density and widely used, so that their optical parameters are readily available in the literature. Optimization is carried out for stacks of different materials in combination to find structures with maximum optical properties and highest performance. Critical parameters are the reflectance and the areal density of the resulting sail.
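A rough radiative-equilibrium estimate shows why the balance between absorption and emissivity is so unforgiving. The absorptance and emissivity values below are illustrative assumptions of mine, not numbers from the paper, and the model is simply absorbed flux equal to thermal emission from both faces, ignoring temperature-dependent properties:

```python
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
flux = 1e10          # W/m^2, the low end of the 10-100 GW/m^2 range above

# (absorptance, infrared emissivity) pairs -- assumed, for illustration
for a, eps in [(1e-4, 0.01), (1e-4, 0.1), (1e-5, 0.1)]:
    T = (a * flux / (2 * eps * sigma)) ** 0.25   # a*I = 2*eps*sigma*T^4
    print(f"absorptance {a:.0e}, emissivity {eps:.2f}: T ~ {T:.0f} K")
```

Even at absorptances of a few parts per hundred thousand, the film runs well above 1000 K at these fluxes, which is why dielectric materials combining extremely low absorption with good infrared emissivity dominate the candidate list.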
Out of all this, titanium dioxide (TiO2) stands out in terms of thermal emission. This work suggests a combination of stacked materials:
The most promising structures to be used with a 1064 nm laser source result to be the TiO2-based ones, in the form of single layer or multilayer stack which include the SiO2 as a second material. In term of propulsion efficiency, the single layer results to be the most performing, while the multilayers offer some advantages in term[s] of thermal control and stiffness. The engineering process is fundamental to obtain proper optical characteristics, thus reducing the absorption of the lightsail in the Doppler-shifted wavelength of the laser in order to allow the use of high-power laser up to 100 GW. The use of a longer wavelength laser source could expand the choice of potential materials having the required optical characteristics.
So much remains to be determined as this work continues. The required mechanical strength of the multilayer structure means we need to learn a lot more about the properties of thin films. Also critical is the stability of the lightsail. We want a sail that survives acceleration not only physically but also in terms of staying aligned with the axis of the beam that is driving it. The slightest imperfection in the material, induced perhaps in manufacturing, could destroy this critical alignment. A variety of approaches to stability have emerged in the literature and are being examined.
The take-away from this paper is that thin-film multilayers are a way to produce a viable sail capable of being accelerated by beamed energy at these levels. We already have experience with thin films in areas like the coatings deposited on telescope mirrors, and because the propulsion efficiency is only slightly affected by the angle at which the beam strikes the sail, various forms of curved designs become feasible.
Can a sail survive the rigors of a journey through the gas and dust of the interstellar medium? At 20 percent of c, the question of how gas accumulates in materials needs work, as we’d like to arrive at destination with a sail that may double as a communications tool. Each of these areas, in turn, fragments into needed laboratory work on many levels, which is why a viable effort to design a beamed mission to a star demands a dedicated facility focusing on sail materials and performance. Breakthrough Starshot seems ideally placed to make such a facility happen.
The paper is Santi et al., “Multilayers for directed energy accelerated lightsails,” Communications Materials 3, 16 (2022). Abstract.
AB Aurigae b: The Case for Disk Instability
What to make of a Jupiter-class planet that orbits its host star at a distance of 13.8 billion kilometers? This is well over twice the distance of Pluto from the Sun, out past the boundaries of what in our system is known as the Kuiper Belt. Moreover, this is a young world still in the process of formation. At nine Jupiter masses, it’s hard to explain through conventional modeling, which sees gas giants growing through core accretion, steadily adding mass through progressive accumulation of circumstellar materials.
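For scale, a quick conversion (Pluto’s mean distance is about 39.5 AU):

```python
km_per_AU = 1.496e8
a_planet = 13.8e9 / km_per_AU       # orbital distance in AU
print(f"{a_planet:.0f} AU, about {a_planet/39.5:.1f}x Pluto's mean distance")
```

That puts AB Aurigae b at roughly 92 AU from its star, far outside the region where core accretion operates efficiently.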
Core accretion makes sense and seems to explain typical planet formation, with the primordial cloud around an infant star dense in dust grains that can accumulate into larger and larger objects, eventually growing into planetesimals and emerging as worlds. But the new planet – AB Aurigae b – shouldn’t be there if core accretion were the only way to produce a planet. At these distances from the star, core accretion would take far longer than the age of the system to produce this result.
Enter disk instability, which we’ve examined many a time in these pages over the years. Here the mechanism works from the top down, with clumps of gas and dust forming quickly through what Alan Boss (Carnegie Institution for Science), who has championed the concept, sees as wave activity generated by the gravity of the disk gas. Waves akin to the spiral arms of galaxies such as our own can lead to the formation of massive clumps whose internal dust grains settle into the core of a protoplanet.
Data from ground- and space-based instruments have homed in on AB Aurigae b, with Hubble’s Space Telescope Imaging Spectrograph and Near Infrared Camera and Multi-Object Spectrograph complemented by observations from the planet imager called SCExAO on Japan’s 8.2-meter Subaru Telescope at Mauna Kea (Hawaii). The fact that the growing system around AB Aurigae presents itself more or less face-on as viewed from Earth makes the distinction between disk and planet that much clearer.
Image: Researchers were able to directly image newly forming exoplanet AB Aurigae b over a 13-year span using Hubble’s Space Telescope Imaging Spectrograph (STIS) and its Near Infrared Camera and Multi-Object Spectrograph (NICMOS). In the top right, Hubble’s NICMOS image captured in 2007 shows AB Aurigae b in a due south position compared to its host star, which is covered by the instrument’s coronagraph. The image captured in 2021 by STIS shows the protoplanet has moved in a counterclockwise motion over time. Credit: Science: NASA, ESA, Thayne Currie (Subaru Telescope, Eureka Scientific Inc.); Image Processing: Thayne Currie (Subaru Telescope, Eureka Scientific Inc.), Alyssa Pagan (STScI).
We benefit from the sheer amount of data Hubble has accumulated when working with a planetary orbit on a world this far from its star. A time span of a single year would hardly be enough to detect the motion at the distance of AB Aurigae, over 500 light years from Earth. The paper on this work – Thayne Currie (Subaru Telescope and Eureka Scientific) is lead researcher – pulls together observations of the system at multiple wavelengths to give disk instability a boost. The authors note the significance of the result, comparing it with PDS 70, a young system with two growing exoplanets, one of which, PDS 70b, was the first confirmed exoplanet to be directly imaged:
…this discovery has profound consequences for our understanding of how planets form. AB Aur b provides a key direct look at protoplanets in the embedded stage. Thus, it probes an earlier stage of planet formation than the PDS 70 system. AB Aur’s protoplanetary disk shows multiple spiral arms, and AB Aur b appears as a spatially resolved clump located in proximity to these arms. These features bear an uncanny resemblance to models of jovian planet formation by disk instability. AB Aur b may then provide the first direct evidence that jovian planets can form by disk instability. An observational anchor like the AB Aur system significantly informs the formulation of new disk instability models diagnosing the temperature, density and observability of protoplanets formed under varying conditions.
I do want to bring up an additional paper of likely relevance. In 2019, Michael Kuffmeier (Zentrum für Astronomie der Universität Heidelberg) and team looked at a variety of systems in terms of late encounter events that can disrupt a debris disk that is still forming. AB Aurigae is one of the systems studied, as noted in their paper:
Our results show how star-cloudlet encounters can replenish the mass reservoir around an already formed star. Furthermore, the results demonstrate that arc structures observed for AB Aurigae or HD 100546 are a likely consequence of such late encounter events. We find that large second-generation disks can form via encounter events of a star with denser gas condensations in the ISM millions of years after stellar birth as long as the parental Giant Molecular Cloud has not fully dispersed. The majority of mass in these second-generation disks is located at large radii, which is consistent with observations of transitional disks.
Just what effect such late encounter events might have on what may well be disk instability at work will be the subject of future studies, but if we’re using AB Aurigae as a likely model of the process at work, we will need to untangle such effects.
The paper is Currie et al., “Images of embedded Jovian planet formation at a wide separation around AB Aurigae,” Nature Astronomy 04 April 2022 (abstract / preprint). The Kuffmeier paper is “Late encounter-events as a source of disks and spiral structures,” Astronomy & Astrophysics Vol. 633 A3 (19 December 2019). Abstract.
Ramping Up the Technosignature Search
In the ever growing realm of acronyms, you can’t do much better than COSMIC – the Commensal Open-Source Multimode Interferometer Cluster Search for Extraterrestrial Intelligence. This is a collaboration between the SETI Institute and the National Radio Astronomy Observatory (NRAO), which operates the Very Large Array in New Mexico. The news out of COSMIC could not be better for technosignature hunters: Fiber optic amplifiers and splitters are now installed at each of the 27 VLA antennas.
What that means is that COSMIC will have access to the complete datastream from the entire VLA, in effect an independent copy of everything the VLA observes. Now able to acquire VLA data, the researchers are proceeding with the development of high-performance Graphics Processing Unit (GPU) code for data analysis. Thus the search for signs of technology among the stars gains momentum at the VLA.
Image: SETI Institute post-doctoral researchers, Dr Savin Varghese and Dr Chenoa Tremblay, in front of one of the 25-meter diameter dishes that make up the Very Large Array. Credit: SETI Institute.
Jack Hickish, digital instrumentation lead for COSMIC at the SETI Institute, takes note of the interplay between the technosignature search and ongoing work at the VLA:
“Having all the VLA digital signals available to the COSMIC system is a major milestone, involving close collaboration with the NRAO VLA engineering team to ensure that the addition of the COSMIC hardware doesn’t in any way adversely affect existing VLA infrastructure. It is fantastic to have overcome the challenges of prototyping, testing, procurement, and installation – all conducted during both a global pandemic and semiconductor shortage – and we are excited to be able to move on to the next task of processing the many Tb/s of data to which we now have access.”
Tapping the VLA for the technosignature search brings powerful tools to bear, considering that each of the installation’s 27 antennas is 25 meters in diameter, and that these movable dishes can be spread over fully 22 miles (36 kilometers). The Y-shaped configuration is found some 50 miles west of Socorro, New Mexico in the area known as the Plains of San Agustin. By combining data from the antennas, scientists can create the resolution of an antenna 36 kilometers across, with the sensitivity of a dish 130 meters in diameter.
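The 130-meter figure follows directly from summing the collecting areas of the 27 dishes. A quick check (not an official NRAO calculation; real sensitivity also depends on receivers and bandwidth):

```python
import math

n_dish, d_dish = 27, 25.0                        # count, diameter in meters
area_total = n_dish * math.pi * (d_dish / 2)**2  # combined collecting area
d_equiv = 2 * math.sqrt(area_total / math.pi)    # single dish of equal area

print(f"combined area ~ {area_total:.0f} m^2")
print(f"equivalent single dish ~ {d_equiv:.0f} m across")
```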
Each of the VLA antennas uses eight cryogenically cooled receivers, covering a continuous frequency range from 1 to 50 GHz, with some of the receivers able to operate below 1 GHz. This powerful instrumentation will be brought to bear, according to sources at COSMIC SETI, on 40 million star systems, making this the most comprehensive SETI observing program ever undertaken in the northern hemisphere. (Globally, Breakthrough Listen continues its well-funded SETI program, using the Green Bank Observatory in West Virginia and the Parkes Observatory in Australia).
Cherry Ng, a SETI Institute COSMIC project scientist, points to the range the project will cover:
“We will be able to monitor millions of stars with a sensitivity high enough to detect an Arecibo-like transmitter out to a distance of 25 parsecs (81 light-years), covering an observing frequency range from 230 MHz to 50 GHz, which includes many parts of the spectrum that have not yet been explored for ETI signals.”
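To get a feel for the sensitivity involved, consider the flux such a transmitter delivers at that range. The ~2 × 10¹³ W effective isotropic radiated power (EIRP) is my assumption here, a commonly quoted estimate for Arecibo’s planetary radar:

```python
import math

EIRP = 2e13                 # W, assumed Arecibo-like radar EIRP
pc = 3.086e16               # m
d = 25 * pc                 # 25 parsecs

flux = EIRP / (4 * math.pi * d**2)   # isotropic spreading
print(f"flux at 25 pc ~ {flux:.1e} W/m^2")
```

A flux that faint is detectable only because a narrowband signal concentrates its power in frequency, which is precisely the kind of spectral needle the COSMIC pipeline is built to find.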
The VLA is currently conducting the VLA Sky Survey, a new, wide-area centimeter wavelength survey that will cover the entire visible sky. The SETI work is scheduled to begin when the new system becomes fully operational in early 2023, working in parallel with the VLASS effort.
Microlensing: K2’s Intriguing Find
Exoplanet science can look forward to a wave of discoveries involving gravitational microlensing. Consider: In 2023, the European Space Agency will launch Euclid, which, although not designed as an exoplanet mission per se, will carry a wide-field infrared array capable of high-resolution imaging. ESA is considering an exoplanet microlensing survey for Euclid, which will be able to study the galactic bulge for up to 30 days twice per year, perhaps timed for the end of the craft’s cosmology program.
Look toward the crowded galactic center long enough and you just may see a star in the galaxy’s disk move in front of a background star located much farther away in that dense bulge. The result: The lensing phenomenon predicted by Einstein, with the light of the background star magnified by the intervening star. If that star has a planet, it’s one we can detect even if it’s relatively small, and even if it’s widely spaced from its star.
For its part, NASA plans to launch the Roman space telescope by 2027, with its own exoplanet microlensing survey slotted in as a core science activity. The space telescope will be able to conduct uninterrupted microlensing operations for two 72-day periods per year, and may coordinate these activities with the Euclid team. In both cases, we have space instruments that can detect cool, low-mass exoplanets for which, in many cases, we’ll be able to combine data from the spacecraft and ground observatories, helping to nail down orbit and distance measurements.
While we await these new additions to the microlensing family, we can also take surprised pleasure in the announcement of a microlensing discovery, the world known as K2-2016-BLG-0005Lb. Yes, this is a Kepler find, or more precisely, a planet uncovered through exhaustive analysis of K2 data, with the help of ground-based data from the OGLE microlensing survey, the Korean Microlensing Telescope Network (KMTNet), Microlensing Observations in Astrophysics (MOA), the Canada-France-Hawaii Telescope and the United Kingdom Infrared Telescope. I list all these projects and instruments by way of illustrating how what we learn from microlensing grows with wide collaboration, allowing us to combine datasets.
Kepler and microlensing? Surprise is understandable, and the new world, similar to Jupiter in its mass and distance from its host star, is about twice as distant as any exoplanet confirmed by Kepler, which used the transit method to make its discoveries. David Specht (University of Manchester) is lead author of the paper, which will appear in Monthly Notices of the Royal Astronomical Society. The effort involved sifting K2 data for signs of an exoplanet and its parent star passing in front of a background star, gravitationally lensing its light, with both foreground objects contributing to the lensing signal.
Eamonn Kerins is principal investigator for the Science and Technology Facilities Council (STFC) grant that funded the work. Dr Kerins adds:
“To see the effect at all requires almost perfect alignment between the foreground planetary system and a background star. The chance that a background star is affected this way by a planet is tens to hundreds of millions to one against. But there are hundreds of millions of stars towards the center of our galaxy. So Kepler just sat and watched them for three months.”
Image: The view of the region close to the Galactic Center centered where the planet was found. The two images show the region as seen by Kepler (left) and by the Canada-France-Hawaii Telescope (CFHT) from the ground. The planet is not visible but its gravity affected the light observed from a faint star at the center of the image (circled). Kepler’s very pixelated view of the sky required specialized techniques to recover the planet signal. Credit: Specht et al.
This is a classic case of pushing into a dataset with specialized analytical methods to uncover something the original mission designers never planned to see. The ground-based surveys that examined the same area of sky offered a combined dataset to go along with what Kepler saw slightly earlier, given its position 135 million kilometers from Earth, allowing scientists to triangulate the system’s position along the line of sight, and to determine the mass of the exoplanet and its orbital distance.
What an intriguing, and decidedly unexpected, result from Kepler! K2-2016-BLG-0005Lb is also a reminder of the kind of discovery we’re going to be making with Euclid and the Roman instrument. Because it is capable of finding lower-mass worlds at a wide range of orbital distances, microlensing should help us understand how common it is to have a Jupiter-class planet in an orbit similar to Jupiter’s around other stars. Is the architecture of our Solar System, in other words, unique or fairly representative of what we will now begin to find?
Animation: The gravitational lensing signal from Jupiter twin K2-2016-BLG-0005Lb. The local star field around the system is shown using real color imaging obtained with the ground-based Canada-France-Hawaii Telescope by the K2C9-CFHT Multi-Color Microlensing Survey team. The star indicated by the pink lines is animated to show the magnification signal observed by Kepler from space. The trace of this signal with time is shown in the lower right panel. On the left is the derived model for the lensing signal, involving multiple images of the star caused by the gravitational field of the planetary system. The system itself is not directly visible. Credit: CFHT.
From the paper:
The combination of spatially well separated simultaneous photometry from the ground and space also enables a precise measurement of the lens–source relative parallax. These measurements allow us to determine a precise planet mass (1.1 ± 0.1 M_Jup), host mass (0.58 ± 0.03 M_☉) and distance (5.2 ± 0.2 kpc).
The authors describe the world as “a close analogue of Jupiter orbiting a K-dwarf star,” noting:
The location of the lens system and its transverse proper motion relative to the background source star (2.7 ± 0.1 mas/yr) are consistent with a distant Galactic-disk planetary system microlensing a star in the Galactic bulge.
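The quoted numbers hang together nicely, as a rough consistency check using the standard microlensing relation θ_E = √(κ M π_rel) shows. The host mass, lens distance, and proper motion below come from the paper; the ~8 kpc source distance is my assumption for a typical bulge star, not a figure stated here:

```python
import math

# kappa = 4G / (c^2 AU), the standard microlensing constant
KAPPA_MAS_PER_MSUN = 8.144

# Values quoted from the paper
M_host = 0.58          # host mass, solar masses
D_lens_kpc = 5.2       # lens distance, kpc
mu_rel_mas_yr = 2.7    # lens-source relative proper motion, mas/yr

# Assumed source distance: a bulge star at roughly 8 kpc (my assumption)
D_source_kpc = 8.0

# Lens-source relative parallax in mas (pi[mas] = 1 / D[kpc])
pi_rel = 1.0 / D_lens_kpc - 1.0 / D_source_kpc

# Angular Einstein radius and Einstein-radius crossing time
theta_E = math.sqrt(KAPPA_MAS_PER_MSUN * M_host * pi_rel)  # mas
t_E_days = theta_E / mu_rel_mas_yr * 365.25                # days

print(f"pi_rel  = {pi_rel:.3f} mas")
print(f"theta_E = {theta_E:.2f} mas")
print(f"t_E     = {t_E_days:.0f} days")
```

The implied Einstein-radius crossing time of a couple of months is consistent with an event Kepler could capture during its roughly three-month stare at the bulge in K2 Campaign 9.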
Given that Kepler was not designed for microlensing operations, it’s not surprising to see the authors refer to it as “highly sub-optimal for such science.” But here we have a direct planet measurement including mass with high precision made possible by the craft’s uninterrupted view of its particular patch of sky. Euclid and the Roman telescope should have much to contribute given that they are optimized for microlensing work. We can look for a fascinating expansion of the planetary census.
The paper is Specht et al., “Kepler K2 Campaign 9: II. First space-based discovery of an exoplanet using microlensing,” in process at Monthly Notices of the Royal Astronomical Society (preprint).