Centauri Dreams

Imagining and Planning Interstellar Exploration

Into the Cosmic Haystack

A new paper from Jason Wright (Penn State) and colleagues Shubham Kanodia and Emily Lubar deals with SETI and the ‘parameter space’ within which we search, with interesting implications. For the researchers show that despite searching for decades through a variety of projects and surveys, SETI is in early days indeed. Those who would draw conclusions about its lack of success to this point fail to understand the true dimensions of the challenge.

But before getting into the meat of the paper, let’s talk about a few items in its introduction. For Wright and team contextualize SETI in relation to broader statements about our place in the cosmos. We can ask questions about what we see and what we don’t see, but we have to avoid being too facile in our interpretation of what some consider to be an ‘eerie silence’ (the reference is to Paul Davies’ wonderful book of the same name).

Image: Penn State’s Jason Wright. Credit: Jody Barshinger.

Back in the 1970s, Michael Hart argued that even with very slow interstellar travel, the Milky Way should have been well settled by now. If, that is, there were civilizations out there to settle it. Frank Tipler made the same point, deducing from the lack of evidence that SETI itself was pointless, because if other civilizations existed, they would have already shown up.

In their new paper, Wright and team take a different tack, looking at the same argument as applied to more terrestrial concerns. Travel widely (Google Earth will do) and you’ll notice that most of the places you select at random show no obvious signs of humans or, in a great many cases, our technology. Why is this? After all, it takes but a small amount of time to fly across the globe when compared to the age of the technology that makes this possible. Shouldn’t we expect, then, that by now most parts of the Earth’s surface would bear signs of our presence?

It’s a canny argument, in particular because we are the only example of a technological species we have, and the Hart-style argument fails for us. If we accept that although there are huge swaths of Earth’s surface that show no evidence of us, the Earth is still home to a technological civilization, then perhaps the same can be said for the galaxy. Or, for that matter, the Solar System, so much of which we have yet to explore. Could there be, for example, a billion-year-old Bracewell probe awaiting activation among the Trans-Neptunian objects?

Maybe, then, there is no such thing as an ‘eerie silence,’ or at least not one whose existence has been shown to be plausible. The matter seems theoretical until you realize it impacts practical concerns like SETI funding. If we assume that extraterrestrial civilizations do not exist because they have not visited us, then SETI is a wasteful exercise, its money better spent elsewhere.

By the same token, some argue that because we have not yet had a SETI detection of an alien culture, we can rule out their existence, at least anywhere near us in the galaxy. What Wright wants to do is show that the conclusion is false, because given the size of the search space, SETI has barely begun. We need, then, to examine just how much of a search we have actually been able to mount. What interstellar beacons, for example, might we have missed because we lacked the resources to keep a constant eye on the same patch of sky?

The Wright paper is about the parameter space within which we hope to find so-called ‘technosignatures.’ Jill Tarter has described a ‘cosmic haystack’ existing in three spatial dimensions, one temporal dimension, two polarization dimensions, central frequency, sensitivity and modulation — a haystack, then, of nine dimensions. Wright’s team likes this approach:

This “needle in a haystack” metaphor is especially appropriate in a SETI context because it emphasizes the vastness of the space to be searched, and it nicely captures how we seek an obvious product of intelligence and technology amidst a much larger set of purely natural products. SETI optimists hope that there are many alien needles to be found, presumably reducing the time to find the first one. Note that in this metaphor the needles are the detectable signatures of alien technology, meaning that a single alien species might be represented by many needles.

Image: Coming to terms with the search space as SETI proceeds, in this case at Green Bank, WV. Credit: Walter Bibikow/JAI/Corbis /Green Bank Observatory.

The Wright paper shows how our search haystacks can be defined even as we calculate the fraction of them already examined for our hypothetical needles. A quantitative, eight-dimensional model is developed to make the calculation, with a number of differences between the model haystack and the one developed by Tarter, and factoring in recent SETI searches like Breakthrough Listen’s ongoing work. The assumption here, necessary for the calculation, is that SETI surveys have similar search strategies and sensitivities.

This assumption allows the calculation to proceed, and it is given support when we learn that its results align fairly well with the previous calculation Jill Tarter made in a 2010 paper. Thus Wright: “…our current search completeness is extremely low, akin to having searched something like a large hot tub or small swimming pool’s worth of water out of all of Earth’s oceans.”

And then Tarter, whose result for the size of our search is a bit smaller. Let me just quote her (from an NPR interview in 2012) on the point:

“We’ve hardly begun to search… The space that we’re looking through is nine-dimensional. If you build a mathematical model, the amount of searching that we’ve done in 50 years is equivalent to scooping one 8-ounce glass out of the Earth’s ocean, looking and seeing if you caught a fish. No, no fish in that glass? Well, I don’t think you’re going to conclude that there are no fish in the ocean. You just haven’t searched very well yet. That’s where we are.”
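Both analogies are easy to check with a little arithmetic. Here is a minimal sketch in Python; the volume figures are rough assumptions of my own (a standard 8-ounce glass, a large hot tub, a small pool), not numbers taken from either paper:

```python
# Rough check of the water analogies above. All volumes in liters;
# the ocean figure is the standard ~1.335 billion cubic kilometers.
OCEAN_L = 1.335e21

samples = {
    "8-oz glass (Tarter)": 0.237,
    "large hot tub (Wright et al.)": 1.5e3,
    "small pool (Wright et al.)": 6.0e4,
}

for label, volume_l in samples.items():
    print(f"{label}: ~{volume_l / OCEAN_L:.1e} of the ocean searched")

# 8-oz glass:    ~1.8e-22
# large hot tub: ~1.1e-18
# small pool:    ~4.5e-17
```

Even the most generous of these fractions leaves more than sixteen orders of magnitude of haystack unexamined, which is precisely the point both researchers are making.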

This being the case, the idea that a lack of success for SETI to date is a compelling reason to abandon the search is shown for what it is: a misreading of the immensity of the search space. SETI cannot be said to have failed. But this leads to a different challenge. Wright again:

We should be careful, however, not to let this result swing the pendulum of public perceptions of SETI too far the other way by suggesting that the SETI haystack is so large that we can never hope to find a needle. The whole haystack need only be searched if one needs to prove that there are zero needles—because technological life might spread through the Galaxy, or because technological species might arise independently in many places, we might expect there to be a great number of needles to be found.

The paper also points out that its haystack model includes regions of interstellar space between the stars, with no assumption that transmitters must be located near stars. Transmissions from nearby stars are but a subset of the haystack, though one that ranks higher in the calculation of detection likelihood.

So we keep looking, wary of drawing conclusions too swiftly when we have searched such a small part of the available parameter space, and we look toward the kind of searches that can accelerate the process. These would include “…surveys with large bandwidth, wide fields of view, long exposures, repeat visits, and good sensitivity,” according to the paper. The ultimate survey? All sky, all the time, the kind of all-out stare that would flag repeating signals that today could only register as one-off phenomena, and who knows what other data of interest not just to SETI but to the entire community of deep-sky astronomers and astrophysicists.

The paper is Wright et al., “How Much SETI Has Been Done? Finding Needles in the n-Dimensional Cosmic Haystack,” accepted at The Astronomical Journal (preprint).


Trillion Planet Survey Targets M-31

Can rapidly advancing laser technology and optics augment the way we do SETI? At the University of California, Santa Barbara, Phil Lubin believes they can, and he’s behind a project called the Trillion Planet Survey to put the idea into practice for the benefit of students. As an incentive for looking into a career in physics, an entire galaxy may be just the ticket.

For the target is the nearest large galaxy to our own. The Trillion Planet Survey will use a suite of meter-class telescopes to search for continuous wave (CW) laser beacons from M31, the Andromeda galaxy. But TPS is more than a student exercise. The work builds on Lubin’s 2016 paper called “The Search for Directed Intelligence,” which makes the case that laser technology foreseen today could be seen across the universe. And that issue deserves further comment.

Centauri Dreams readers are familiar with Lubin’s work with DE-STAR (Directed Energy Solar Targeting of Asteroids and exploRation), a scalable technology that involves phased arrays of lasers. DE-STAR installations could be used for purposes ranging from asteroid deflection (DE-STAR 2-3) to propelling an interstellar spacecraft to a substantial fraction of the speed of light (DE-STAR 3-4). The work led to NIAC funding (NASA Starlight) in 2015 examining beamed energy systems for propulsion in the context of miniature probes using wafer-scale photonics, and it is also the basis for Breakthrough Starshot.

Image: UC-Santa Barbara physicist Philip Lubin. Credit: Paul Wellman/Santa Barbara Independent.

A bit more background here: Lubin’s Phase I study “A Roadmap to Interstellar Flight” is available online. It was followed by Phase II work titled “Directed Energy Propulsion for Interstellar Exploration (DEEP-IN).” Lubin’s discussions with Pete Worden on these ideas led to talks with Yuri Milner in late 2015. The Breakthrough Starshot program draws on the DE-STAR work, particularly in its reliance on miniaturized payloads and, of course, a laser array for beamed propulsion, the latter an idea that had largely been associated with large sails rather than chip-sized payloads. Mason Peck and team’s work on ‘sprites’ is also a huge factor.

But let’s get back to the Trillion Planet Survey — if I start talking about the history of beamed propulsion concepts, I could spend days, and anyway, Jim Benford has already undertaken the task in these pages in his A Photon Beam Propulsion Timeline. What occupies us this morning is the range of ideas that play around the edges of beamed propulsion, one of them being the beam itself, and how it might be detected at substantial distances. Lubin’s DE-STAR 4, capable of hitting an asteroid with 1.4 megatons of energy per day, would stand out in many a sky.

In fact, according to Lubin’s calculations, such a system — if directed at another star — would be seen in systems as distant as 1000 light years as, briefly, the brightest star in the sky. Suddenly we’re talking SETI, because if we can build such systems in the foreseeable future, so can the kind of advanced civilizations we may one day discover among the stars. Indeed, directed energy systems might announce themselves with remarkable intensity.

Image: M31, the Andromeda Galaxy, the target of the largely student-led Trillion Planet Survey. Credit & Copyright: Robert Gendler.

Lubin makes this point in his 2016 paper, in which he states “… even modest directed energy systems can be ‘seen’ as the brightest objects in the universe within a narrow laser linewidth.” Amplifying on this from the paper, he shows that stellar light in a narrow bandwidth would be very small in comparison to the beamed energy source:

In case 1) we treat the Sun as a prototype for a distant star, one that is unresolved in our telescope (due to seeing or diffraction limits) but one where the stellar light ends up in ~ one pixel of our detector. Clearly the laser is vastly brighter in this sense. Indeed for the narrower linewidth the laser is much brighter than an entire galaxy in this sense. For very narrow linewidth lasers (~ 1 Hz) the laser can be nearly as bright as the sum of all stars in the universe within the linewidth. Even modest directed energy systems can stand out as the brightest objects in the universe within the laser linewidth.

And again (and note here that the reference to ‘class 4’ is not to an extended Kardashev scale, but rather to a civilization transmitting at DE-STAR 4 levels, as defined in the paper):

As can be seen at the distance of the typical Kepler planets (~ 1 kly distant) a class 4 civilization… appears as the equivalent of a mag~0 star (ie the brightest star in the Earth’s nighttime sky), at 10 kly it would appear as about mag ~ 5, while the same civilization at the distance of the nearest large galaxy (Andromeda) would appear as the equivalent of a m~17 star. The former is easily seen with the naked eye (assuming the wavelength is in our detection band) while the latter is easily seen in a modest consumer level telescope.
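The magnitude figures in that passage follow from nothing more exotic than the inverse-square law: each factor of ten in distance adds five magnitudes. A quick sketch, anchored as in the quote to a beacon appearing at magnitude ~0 from 1,000 light years (linewidth and duty cycle are ignored here):

```python
import math

def apparent_mag(d_kly, m_ref=0.0, d_ref_kly=1.0):
    """Apparent magnitude of a fixed-power beacon versus distance.

    Flux falls as 1/d^2, and five magnitudes correspond to a factor
    of 100 in flux, so m(d) = m_ref + 5 * log10(d / d_ref)."""
    return m_ref + 5.0 * math.log10(d_kly / d_ref_kly)

for d_kly in (1.0, 10.0, 2500.0):   # ~2,500 kly is the distance of M31
    print(f"{d_kly:6.0f} kly  ->  mag {apparent_mag(d_kly):5.1f}")

# 1 kly -> mag 0.0, 10 kly -> mag 5.0, 2500 kly -> mag 17.0,
# matching the mag ~0, ~5 and ~17 figures in the quote.
```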

Out of this emerges the idea that a powerful civilization could be detected with modest ground-based telescopes if it happened to be transmitting in our direction when we were observing. Hence the Trillion Planet Survey, which looks at using small telescopes such as those in the Las Cumbres Observatory’s robotic global network to make such a detection.

With M31 as the target, the students in the Trillion Planet Survey are conducting a survey of the galaxy as TPS gets its software pipeline into gear. Developed by Emory University student Andrew Stewart, the pipeline processes images under a set of assumptions. Says Stewart:

“First and foremost, we are assuming there is a civilization out there of similar or higher class than ours trying to broadcast their presence using an optical beam, perhaps of the ‘directed energy’ arrayed-type currently being developed here on Earth. Second, we assume the transmission wavelength of this beam to be one that we can detect. Lastly, we assume that this beacon has been left on long enough for the light to be detected by us. If these requirements are met and the extraterrestrial intelligence’s beam power and diameter are consistent with an Earth-type civilization class, our system will detect this signal.”

Screening transient signals from its M31 images, the team will then submit them to further processing in the software pipeline to eliminate false positives. The TPS website offers links to background information, including Lubin’s 2016 paper, but as yet has little about the actual image processing, so I’ll simply quote from a UCSB news release on the matter:

“We’re in the process of surveying (Andromeda) right now and getting what’s called ‘the pipeline’ up and running,” said researcher Alex Polanski, a UC Santa Barbara undergraduate in Lubin’s group. A set of photos taken by the telescopes, each of which takes a 1/30th slice of Andromeda, will be knit together to create a single image, he explained. That one photograph will then be compared to a more pristine image in which there are no known transient signals — interfering signals from, say, satellites or spacecraft — in addition to the optical signals emanating from the stellar systems themselves. The survey photo would be expected to have the same signal values as the pristine “control” photo, leading to a difference of zero. But a difference greater than zero could indicate a transient signal source, Polanski explained. Those transient signals would then be further processed in the software pipeline developed by Stewart to kick out false positives. In the future the team plans to use simultaneous multiple color imaging to help remove false positives as well.
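The differencing step Polanski describes is the standard approach to transient detection, and a toy version is easy to write down. The sketch below is my own illustration, not TPS pipeline code (which has not been published); a real pipeline must also align the frames, match their point-spread functions, and scale the photometry before subtracting:

```python
import numpy as np

def find_transients(survey, control, n_sigma=5.0):
    """Flag pixels where (survey - control) rises well above the noise."""
    diff = survey.astype(float) - control.astype(float)
    noise = np.std(diff)               # crude global noise estimate
    return np.argwhere(diff > n_sigma * noise)

rng = np.random.default_rng(0)
control = rng.normal(100.0, 3.0, (512, 512))      # 'pristine' reference
survey = control + rng.normal(0.0, 3.0, control.shape)
survey[200, 300] += 60.0                          # inject a fake transient

print(find_transients(survey, control))           # -> [[200 300]]
```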

Why Andromeda? The Trillion Planet Survey website notes that the galaxy is home to at least one trillion stars, a stellar density higher than the Milky Way’s, and thus represents “…an unprecedented number of targets relative to other past SETI searches.” The project gets the students who largely run it into the SETI business, juggling the variables as we consider strategies for detecting other civilizations and upgrading existing search techniques, particularly as we take into account the progress of exponentially accelerating photonic technologies.

Projects like these can provide a powerful incentive for students eager to make a career of physics. Thus Caitlin Gainey, now a freshman in physics at UC Santa Barbara:

“In the Trillion Planet Survey especially, we experience something very inspiring: We have the opportunity to look out of our earthly bubble at entire galaxies, which could potentially have other beings looking right back at us. The mere possibility of extraterrestrial intelligence is something very new and incredibly intriguing, so I’m excited to really delve into the search this coming year.”

And considering that any signal arriving from M31 would have been en route for well over 2 million years, the TPS also offers the chance to involve students in the concept of SETI as a form of archaeology. We could discover evidence of a civilization long dead through signals sent well before civilization arose on Earth. A ‘funeral beacon’ announcing the demise of a once-great civilization is one possibility; the search for Dyson spheres or other megastructure artifacts is another. The larger picture is that evidence of extraterrestrial intelligence can come in various forms, including optical or radio signals as well as artifacts detectable through astronomy. It’s a field we continue to examine here, because that search has just begun.

Phil Lubin’s 2016 paper is “The Search for Directed Intelligence,” REACH – Reviews in Human Space Exploration, Vol. 1 (March 2016), pp. 20-45. (Preprint / full text).


Small Provocative Workshop on Propellantless Propulsion

In what spirit do we pursue experimentation, and with what criteria do we judge the results? Marc Millis has been thinking and writing about such questions in the context of new propulsion concepts for a long time. As head of NASA’s Breakthrough Propulsion Physics program, he looked for methodologies by which to push the propulsion envelope in productive ways. As founding architect of the Tau Zero Foundation, he continues the effort through books like Frontiers of Propulsion Science, travel and conferences, and new work for NASA through TZF. Today he reports on a recent event that gathered people who build equipment and test for exotic effects. A key issue: Ways forward that retain scientific rigor and a skeptical but open mind. A quote from Galileo seems appropriate: “I deem it of more value to find out a truth about however light a matter than to engage in long disputes about the greatest questions without achieving any truth.”

by Marc G Millis

A workshop on propellantless propulsion was held at a sprawling YMCA campus of classy rusticity in Estes Park, Colorado, from September 10 to 14. These are becoming annual events, the prior ones held in Los Angeles in November 2017 and in Estes Park in September 2016. This is a fairly small event of only about 30 people.

It was at the 2016 event that three other labs reported the same thrust that Jim Woodward and his team had been reporting for some time with the “Mach Effect Thruster” (which also goes by the name “Mach Effect Gravity Assist” device). Backed by those independent replications, NASA awarded Woodward’s team NIAC grants. Updates on this work and several other concepts were discussed at this workshop. Proceedings will be published after all the individual reports are rounded up and edited.

Before I go on to describe these updates, I feel it would be helpful to share a technique that I regularly use when trying to assess potential breakthrough concepts. I began using this technique when I ran NASA’s Breakthrough Propulsion Physics project to help decide which concepts to watch and which to skip.

When faced with research that delves into potential breakthroughs, one faces the challenge of distinguishing which of those crazy ideas might be the seeds of breakthroughs and which are merely crazy. In retrospect, it is easy to tell the difference. After years of continued work, the genuine breakthroughs survive, along with infamous quotes from their naysayers. Meanwhile the more numerous crazy ideas are largely forgotten. Making that distinction before the fact, however, is difficult.

So how do I tell that difference? Frankly, I can’t. I’m not clairvoyant nor brilliant enough to tell which idea is right (though it is easy to spot flagrantly wrong ideas). What I can judge, and what needs to be judged, is the reliability of the research. Regardless of whether the research reports supportive or dismissive evidence for a new concept, those findings mean nothing unless they are trustworthy. The most trustworthy results come from competent, rigorous researchers who are impartial – meaning they are equally open to positive or negative findings. Therefore, I first look for the impartiality of the source – where I will ignore “believers” or pedantic pundits. Next, I look to see if their efforts are focused on the integrity of the findings. If experimenters are systematically checking for false positives, then I have more trust in their findings. If theoreticians go beyond just their theory to consider conflicting viewpoints, then I pay more attention. And lastly, I look to see if they are testing a critical make-break issue or just some less revealing detail. If they won’t focus on a critical issue, then the work is less relevant.

Consider the consequences of that tactic: If a reliable researcher is testing a bad idea, you will end up with a trustworthy refutation of that idea. Null results are progress – knowing which ideas to set aside. Reciprocally, if a sloppy or biased researcher is testing a genuine breakthrough, then you won’t get the information you need to take that idea forward. Sloppy or biased work is useless (even if from otherwise reputable organizations). The ideal situation is to have impartial and reliable researchers studying a span of possibilities, where any latent breakthrough in that suite will eventually reveal itself (the “pony in the pile”).

Now, back to the workshop. I’ll start with the easiest topic, the infamous EmDrive. I use the term “infamous” to remind you that (1) I have a negative bias that can skew my impartiality, and (2) there are a large number of “believers” whose experiments never passed muster (which led to my negative bias and overt frustration).

Three different tests of the EmDrive, of varying degrees of rigor, were reported. All of the tests indicated that the claimed thrust is probably attributable to false positives. The most thorough tests were from the Technical University of Dresden, Germany, led by Martin Tajmar, where his student Marcel Weikert presented the EmDrive tests and Matthias Kößling the details of their thrust stand. They are testing more than one version of the EmDrive, under multiple conditions, and all with alertness for false positives. Their interim results show that thrusts are measured when the device is not in a thrusting mode – meaning that something else is creating the appearance of a thrust. They are not yet fully satisfied with the reliability of their findings and tests continue. They want to trace the apparent thrust to its specific cause.

The next big topic was Woodward’s Mach Effect Thruster – determining if the previous positive results are indeed genuine, and then determining if they are scalable to practical levels. In short – it is still not certain whether the Mach Effect Thruster is demonstrating a genuine new phenomenon or whether it is a case of a common experimental false positive. In addition to the work of Woodward’s team, led by Heidi Fearn, the Dresden team also had substantial progress to report, specifically where Maxime Monette covered the Mach Effect Thruster details in addition to the thrust stand details from Matthias Kößling. There was also an analytical assessment based on conventional harmonic oscillators, plus more than one presentation related to the underlying theory.

One of the complications that developed over the years is that the original traceability between Woodward’s theory and the current thruster hardware has thinned. The thruster has become a “black box” where the emphasis is now on the empirical evidence and less on the theory.

Originally, the thruster hardware closely followed the 1994 patent, which itself was a direct application of Woodward’s 1990 hypothesis of fluctuating inertia. It involved two capacitors at opposite ends of a piezoelectric separator, where the capacitors experience the inertial fluctuations (during charging and discharging cycles) and where the piezoelectric separator cyclically changes length between these capacitors.

Its basic operation is as follows: While the rear capacitor’s inertia is higher and the forward capacitor lower, the piezoelectric separator is extended. The front capacitor moves forward more than the rear one moves rearward. Then, while the rear capacitor’s inertia is lower and the forward capacitor higher, the piezoelectric separator is contracted. The front capacitor moves backward less than the rear one moves forward. Repeating this cycle shifts the center of mass of the system forward – apparently violating conservation of momentum.
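A toy bookkeeping exercise makes the claimed kinematics concrete. This is only an illustration of the cycle just described, not Woodward’s theory: within each half-cycle the end masses are held fixed and momentum is conserved, so the separator’s length change is shared between the ends in inverse proportion to their masses.

```python
# Toy version of the cycle described above. An illustration of the
# claimed kinematics only, not Woodward's theory.
M, delta, dL = 1.0, 0.1, 1e-3   # mean mass (kg), fluctuation (kg), stroke (m)

def half_cycle(x_front, x_rear, m_front, m_rear, stroke):
    """Extend (+stroke) or contract (-stroke) the separator, conserving
    momentum with the masses frozen at their current values."""
    total = m_front + m_rear
    return (x_front + stroke * m_rear / total,   # lighter end moves farther
            x_rear - stroke * m_front / total)

x_f, x_r = 0.0, 0.0
for cycle in range(3):
    # Rear heavy, front light: extend, so the front gains more ground.
    x_f, x_r = half_cycle(x_f, x_r, M - delta, M + delta, +dL)
    # Rear light, front heavy: contract, so the front gives back less.
    x_f, x_r = half_cycle(x_f, x_r, M + delta, M - delta, -dL)
    print(f"after cycle {cycle + 1}: front {x_f:+.1e} m, rear {x_r:+.1e} m")

# Both ends creep forward by dL * delta / M per cycle, the apparent
# center-of-mass drift described above.
```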

The actual conservation of momentum is more difficult to assess. The original conservation laws are anchored to the idea of an immutable connection between inertia and an inertial frame. The theory behind this device deals with open questions in physics about the origins and properties of inertial frames, specifically invoking “Mach’s Principle.” In short, that principle is ‘inertia here because of all the matter out there.’ Another related physics term is “Inertial Induction.” Skipping through all the open issues, the upshot is that variations in inertia would require revisions to the conservation laws. It’s an open question.

Back to the tale of the evolved hardware. Eventually over the years, the hardware configuration changed. While Woodward and his team tried different ways to increase the observed thrust, the ‘fluctuating inertia’ components and the ‘motion’ components were merged. Both the motions and mass fluctuations are now occurring in a stack of piezoelectric disks. Thereafter, the emphasis shifted to the empirical observations. There were no analyses to show how to connect the original theory to this new device. The Dresden team did develop a model to link the theory to the current hardware, but determining its viability is part of the tests that are still unfinished [Tajmar, M. (2017). Mach-Effect thruster model. Acta Astronautica, 141, 8-16.].

Even with the disconnect between the original theory and the hardware now under test, there were a couple of presentations about the theory, one by Lance Williams and the other by José Rodal. Lance, reporting on discussions he had while attending the April 2018 meeting of the American Physical Society’s Division of Gravitational Physics, suggested how to engage the broader physics community about this theory, such as using the more common term “Inertial Induction” instead of “Mach’s Principle.” Lance elaborated on the prevailing views (such as the absence of Maxwellian gravitation) that would need to be brought into the discussion – facing the constructive skepticism to make further advances. José Rodal elaborated on the possible applicability of “dilatons” from the Kaluza-Klein theory of compactified dimensions. Amid these and other presentations, there was lively discussion involving multiple interpretations of well-established physics.

An additional provocative model for the Mach Effect Thruster came from an interested software engineer, Jamie Ciomperlik, who dabbles in these topics for recreation. In addition to his null tests of the EmDrive, he created a numerical simulation for the Mach Effect using conventional harmonic oscillators. The resulting complex simulations showed that, with the right parameters, a false positive thrust could result from vibrational effects. After lengthy discussions, it was agreed to examine this more closely, both experimentally and analytically. Though the experimentalists already knew of possible false positives from vibration, they did not previously have an analytical model to help hunt for these effects. One of the next steps is to check how closely the analysis parameters match the actual hardware.

Quantum approaches were also briefly covered: Raymond Chiao discussed the negative energy densities of Casimir cavities, and Jonathan Thompson (a prior student of Chiao’s) gave an update on experiments to demonstrate the “Dynamical Casimir effect” – a method to create a photon rocket using photons extracted from the quantum vacuum.

There were several other presentations too, spanning topics of varying relevance and fidelity. Some of these were very speculative works, whose usefulness can be compared to the thought-provoking effect of good science fiction. They don’t have to be right to be enlightening. One was from retired physicist and science fiction writer John Cramer, who described the assumptions needed to induce a wormhole using the Large Hadron Collider (LHC) that could cover 1200 light-years in 59 days.

Representing NASA’s Innovative Advanced Concepts (NIAC) program, Ron Turner gave an overview of its scope and of how to propose for NIAC awards.

A closing thought about consequences. By this time next year, we will have definitive results on the Mach Effect Thruster, and the findings on the EmDrive will likely arrive sooner. Depending on whether the results are positive or negative, here are my recommendations on how to proceed in a sane and productive manner. These recommendations are based on history repeating itself, using both the good and bad lessons:

If It Does Work:

  • Let the critical reviews and deeper scrutiny run their course. If this is real, a lot of people will need to repeat it for themselves to discover what it’s about. This takes time, and not all of it will be useful or pleasant. Pay more attention to those who are attempting to be impartial, rather than those trying to “prove” or “disprove.” Because divisiveness sells stories, expect press stories focusing on the controversy or hype, rather than reporting the blander facts.
  • Don’t fall for the hype of exaggerated expectations that are sure to follow. If you’ve never heard of the “Gartner Hype Cycle,” then now’s the time to look it up. Be patient, and track the real test results more than the news stories. The next progress will still be slow. It will take a while and a few more iterations before the effects start to get unambiguously interesting.
  • Conversely, don’t fall for the pedantic disdain (typically from those whose ideas are more conventional and less exciting). You’ll likely hear dismissals like, “OK, so it works, but it’s not useful,” or “We don’t need it to do the mission.” Those dismissals only have a kernel of truth in a very narrow, near-sighted manner.
  • Look out for the sharks and those riding the coattails of the bandwagon. Sorry to mix metaphors, but it seemed expedient. There will be a lot of people coming out of the woodwork in search of their own piece of the action. Some will be making outrageous claims (hype) and selling how their version is better than the original. Again, let the test results, not the sales pitches, help you decide.

If It Does Not Work:

  • Expect some to dismiss the entire goal of “spacedrives” based on the failure of one or two approaches. This is a “generalization error” which might make some feel better, but serves no useful purpose.
  • Expect others to chime in with their alternative new ideas to fill the void, the weakest of which will be evident by their hyped sales pitches.
  • Follow the advice given earlier: When trying to figure out which idea to listen to, check their impartiality and rigor. Listen to those who are not trying to sell nor dismiss, but rather to honestly investigate and report. When you find those service providers, stay tuned in to them.
  • To seek new approaches toward the breakthrough goals, look for the intersection of open questions in physics with the critical make-break issues of those desired breakthroughs. Those intersections are listed in our book Frontiers of Propulsion Science.


Gaia Data Hint at Galactic Encounter

The Sagittarius Dwarf Galaxy is a satellite of the Milky Way, about 70,000 light years from Earth and on a trajectory that has it currently passing over the Milky Way’s galactic poles; i.e., perpendicular to the galactic plane. What’s intriguing about this satellite is that its path has taken it through the plane of our galaxy multiple times in the past, passages whose effects may still be traceable today. A team of scientists led by Teresa Antoja (Universitat de Barcelona) is now using Gaia data to trace evidence of its effects between 300 and 900 million years ago.

Image: The Sagittarius dwarf galaxy, a small satellite of the Milky Way that is leaving a stream of stars behind as an effect of our Galaxy’s gravitational tug, is visible as an elongated feature below the Galactic centre and pointing in the downwards direction in the all-sky map of the density of stars observed by ESA’s Gaia mission between July 2014 and May 2016. Credit: ESA/Gaia/DPAC.

This story gets my attention because of my interest in the Gaia data and the uses to which they can be put. We just looked at interstellar interloper ‘Oumuamua and saw preliminary work on tracing it back to a parent star. No origin could be determined, but the selection of early candidates was an indication of an evolving method in using the Gaia dataset, which will expand again with the 2021 release. The Sagittarius Dwarf Galaxy compels a different method, and we’ll be seeing quite a few new investigations with methods of their own growing out of this attempt to begin a three-dimensional map of the Milky Way. A kinematic census of over one billion stars will come out of Gaia.

A billion stars represents less than 1 percent of the galactic population, so you can see how far we have to go, but we’re already finding innovative ways to put the Gaia data to use, as witness Antoja’s new paper in Nature. As we saw in ‘Oumuamua’s Origin: A Work in Progress, Gaia uses astrometric methods to measure not just the position but the velocity of stars on the plane of the sky. We also get a subset of a few million stars for which the mission includes radial velocity as well, yielding stellar motions in all three dimensions and thus the full six-dimensional ‘phase space’ of positions and velocities.

From the Antoja paper:

By exploring the phase space of more than 6 million stars (positions and velocities) in the disk of the Galaxy in the first kiloparsecs around the Sun from the Gaia Data Release 2 (DR2, see Methods), we find that certain phase space projections show plenty of substructures that are new and that had not been predicted by existing models. These have remained blurred until now due to the limitations on the number of stars and the precision of the previously available datasets.

Antoja’s team found that these unique data revealed an unexpected pattern when stellar positions were plotted against velocity. The pattern is a snail shell shape that emerges when plotting the stars’ altitude above or below the plane of the galaxy against their velocity in the same direction. Nothing like this had been noted before, nor could it have been without Gaia.

“At the beginning the features were very weird to us,” says Antoja. “I was a bit shocked and I thought there could be a problem with the data because the shapes are so clear. It looks like suddenly you have put the right glasses on and you see all the things that were not possible to see before.”

Image: This graph shows the altitude of stars in our Galaxy above or below the plane of the Milky Way against their velocity in the same direction, based on a simulation of a near collision that set millions of stars moving like ripples on a pond. The snail shell-like shape of the pattern reproduces a feature that was first seen in the movement of stars in the Milky Way disc using data from the second release of ESA’s Gaia mission, and interpreted as an imprint of a galactic encounter. Credit: T. Antoja et al. 2018.

Stellar motions, we are learning, produce ripples that may no longer show up in the stars’ visible distribution, but do emerge when their velocities are taken into consideration. Antoja and colleagues believe the cause of this motion was the Sagittarius Dwarf Galaxy, whose last close pass would have perturbed many stars in the Milky Way. The timing is the crux, for estimates of when the snail shell pattern began fit with the timing of the last dwarf galaxy pass.

As with the ‘Oumuamua study, we’re at the beginning of teasing out newly available information from the trove that Gaia is giving us. To firm up the connection with the Sagittarius Dwarf Galaxy, Antoja’s team has much to do as it moves beyond early computer modeling and analysis, but the evidence for perturbation, whatever the source, is clear. From the paper:

…an ensemble of stars will stretch out in phase space, with the range of frequencies causing a spiral shape in this projection. The detailed time evolution of stars in this toy model is described in Methods and shown in Extended Data Fig. 3. As time goes by, the spiral gets more tightly wound, and eventually, this process of phase mixing leads to a spiral that is so wound that the coarse-grained distribution appears to be smooth. The clarity of the spiral shape in the Z-VZ [vertical position and velocity] plane revealed by the Gaia DR2 data, implies that this time has not yet arrived and thus provides unique evidence that phase mixing is currently taking place in the disk of the Galaxy.
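The paper’s toy model is simple enough to sketch. In the snippet below, stars oscillate vertically with a frequency that decreases with amplitude, as it would in a realistic anharmonic disk potential; the specific frequency law and the coherent kick are arbitrary stand-ins of my own, chosen only to make the winding visible:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 20000
amp = rng.uniform(0.1, 1.0, n)              # vertical amplitudes (kpc)
omega = 2 * np.pi / (50.0 + 100.0 * amp)    # longer period at larger amplitude (rad/Myr)

t = 500.0                                   # Myr since a coherent vertical kick
z = amp * np.sin(omega * t)                 # vertical position (kpc)
vz = amp * omega * np.cos(omega * t)        # vertical velocity (kpc/Myr)

plt.scatter(z, vz, s=0.5, alpha=0.3)
plt.xlabel("Z (kpc)")
plt.ylabel("VZ (kpc/Myr)")
plt.show()

# At t = 0 the stars trace a coherent ellipse; by t ~ 500 Myr the
# amplitude-dependent frequencies have sheared it into a spiral; run long
# enough, the spiral winds up until it looks smooth again. That Gaia still
# sees a crisp spiral is what lets the authors date the perturbation.
```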

The shell-like pattern thus contains information about the distribution of matter in the Milky Way and the nature of stellar encounters. The bigger picture is that untangling the evolution of the galaxy and explaining its structure is what Gaia was designed for, a process that is now gathering momentum. We’re only beginning to see what options this mission is opening up.

The paper is Antoja et al., “A Dynamically Young and Perturbed Milky Way Disk,” Nature 561 (2018), 360-362 (abstract / preprint).


Of Storms on Titan

I always imagined Titan’s surface as a relatively calm place, perhaps thinking of the Huygens probe in an exotic, frigid landing zone that I saw as preternaturally still. Then, prompted by an analysis of what may be dust storms on Titan, I revisited what Huygens found. It turns out the probe experienced maximum winds about ten minutes after beginning its descent, at an altitude of some 120 kilometers. It was below 60 kilometers that the wind dropped. And during the final 7 kilometers, the winds were down to a few meters per second. At the surface, according to the European Space Agency, Huygens found a light breeze of 0.3 meters per second.

But is Titan’s surface always that quiet? The Cassini probe has shown us that Titan experiences interesting weather driven by a methane cycle that operates at temperatures far below those of Earth’s water cycle, filling its lakes and seas with methane and ethane. The evaporation of hydrocarbon molecules produces clouds that lead to rain, with conditions varying according to season. Conditions at the time of the equinox, with the Sun crossing Titan’s equator, are particularly lively, producing massive clouds and storms in the tropical regions.

So a lot can happen here depending on where and when we sample. Sebastien Rodriguez (Université Paris Diderot, France) and colleagues noticed unusual brightenings in infrared images made by Cassini near the moon’s 2009-2010 northern spring equinox. The paper refers to these as “three distinctive and short-lived spectral brightenings close to the equator.”

The first assumption was that these were clouds, but that idea was quickly discounted. Says Rodriguez:

“From what we know about cloud formation on Titan, we can say that such methane clouds in this area and in this time of the year are not physically possible. The convective methane clouds that can develop in this area and during this period of time would contain huge droplets and must be at a very high altitude — much higher than the 6 miles (10 kilometers) that modeling tells us the new features are located.”

Image: This compilation of images from nine Cassini flybys of Titan in 2009 and 2010 captures three instances when clear bright spots suddenly appeared in images taken by the spacecraft’s Visual and Infrared Mapping Spectrometer. The brightenings were visible only for a short period of time — between 11 hours and five Earth weeks — and cannot be seen in previous or subsequent images. Credit: NASA/JPL-Caltech/University of Arizona/University Paris Diderot/IPGP.

In a paper just published in Nature Geoscience, the researchers likewise discount the possibility that Cassini had detected surface features, whether areas of frozen methane or flows of icy lava. The problem here is that the bright features in the infrared were visible for relatively short periods — 11 hours to 5 weeks — while surface spots should have remained visible for longer. Nor do they bear the chemical signature expected from such formations at the surface.

Image: This animation — based on images captured by the Visual and Infrared Mapping Spectrometer on NASA’s Cassini mission during several Titan flybys in 2009 and 2010 — shows clear bright spots appearing close to the equator around the equinox that have been interpreted as evidence of dust storms. Credit: NASA/JPL-Caltech/University of Arizona/University Paris Diderot/IPGP.

Rodriguez and team used computer modeling to show that the brightened features were atmospheric but extremely low, forming what is in all likelihood a thin layer of solid organic particles. Such particles form because of the interaction between methane and sunlight. Because the bright features occurred over known dune fields at Titan’s equator, Rodriguez believes that they are clouds of dust kicked up by wind hitting the dunes.

“We believe that the Huygens Probe, which landed on the surface of Titan in January 2005, raised a small amount of organic dust upon arrival due to its powerful aerodynamic wake,” says Rodriguez. “But what we spotted here with Cassini is at a much larger scale. The near-surface wind speeds required to raise such an amount of dust as we see in these dust storms would have to be very strong — about five times as strong as the average wind speeds estimated by the Huygens measurements near the surface and with climate models.”

Image: Artist’s concept of a dust storm on Titan. Researchers believe that huge amounts of dust can be raised on Titan, Saturn’s largest moon, by strong wind gusts that arise in powerful methane storms. Such methane storms, previously observed in images from the international Cassini spacecraft, can form above dune fields that cover the equatorial regions of this moon especially around the equinox, the time of the year when the Sun crosses the equator. Credit: NASA/ESA/IPGP/Labex UnivEarthS/University Paris Diderot.

In reaching this conclusion, the researchers analyzed Cassini spectral data and deployed atmospheric models and simulations to show that micrometer-sized solid organic particles from the dunes below were responsible, an indication of dust in the atmosphere that far exceeds what Huygens found at the surface. The winds associated with the phenomenon would be unusually strong, but could be explained by downbursts in the equinoctial methane storms.

If dust storms can be created by such winds, then Titan’s equatorial regions are still active, with the dunes undergoing constant change. We have a world that is active not only in its hydrocarbon cycle and its geology, but also in what we can call its ‘dust cycle.’ The only moon in the Solar System with a dense atmosphere and surface liquid offers yet another analogy with Earth, a similarity that highlights the complexity of this frigid, hydrocarbon-rich world.

The paper is Rodriguez et al., “Observational evidence for active dust storms on Titan at equinox,” Nature Geoscience 24 September 2018 (abstract).


‘Oumuamua’s Origin: A Work in Progress

The much discussed interstellar wanderer called ‘Oumuamua made but a brief pass through our Solar System, and was only discovered on the way out in October of last year. Since then, the question of where the intriguing interloper comes from has been the object of considerable study. This is, after all, the first object observed in our system that is known to have come from another star. Today we learn that a team of astronomers led by Coryn Bailer-Jones (Max Planck Institute for Astronomy) has been able to put Gaia data and other resources to work on the problem.

The result: Four candidate stars identified as possible home systems for ‘Oumuamua. None of these identifications is remotely conclusive, as the researchers make clear. The significance of the work is in the process, which will be expanded as still more data become available from the Gaia mission. So in a way this is a preview of a much larger search to come.

What we are dealing with is the reconstruction of ‘Oumuamua’s motion before it encountered our Solar System, and here the backtracking becomes tangled with the object’s trajectory once we actually observed it. Its passage through the system, as well as the stars it encountered before reaching us, all factor into determining its origin.

What the Bailer-Jones team brings to the table is something missing in earlier attempts to solve the riddle of ‘Oumuamua’s home. We learned in June of 2018 that ‘Oumuamua’s orbit was not solely the result of gravitational influences, but that a tiny additional acceleration had been added when the object was close to the Sun. That brought comets into the discussion: Was ‘Oumuamua laden with ice that, sufficiently heated, produced gases that accelerated it?

The problem with that idea was that no such outgassing was visible on images of the object, the way it would be with comets imaged close to the Sun. Whatever the source of the exceedingly weak acceleration, though, it had to be factored into any attempt to extrapolate the object’s previous trajectory. Bailer-Jones and team manage to do this, offering a more precise idea of the direction from which the object came.

Image: This artist’s impression shows the first interstellar asteroid: `Oumuamua. This unique object was discovered on 19 October 2017 by the Pan-STARRS 1 telescope in Hawai`i. Subsequent observations from ESO’s Very Large Telescope in Chile and other observatories around the world show that it was travelling through space for millions of years before its chance encounter with our star system. `Oumuamua seems to be a dark red object, either elongated, as in this image, or else shaped like a pancake. Credit: ESO/M. Kornmesser.

At the heart of this work are the abundant data being gathered by the Gaia mission, whose Data Release 2 (DR2) includes position, on-sky motion and parallax information on 1.3 billion stars. As this MPIA news release explains, we also have radial velocity data — motion directly away from or towards the Sun — of 7 million of these Gaia stars. The researchers then added in Simbad data on an additional 220,000 stars to retrieve further radial velocity information.

To say this gets complicated is a serious understatement. 4500 stars turn up as potential homes for ‘Oumuamua, assuming both the object and the stars under consideration all moved along straight lines and at constant speeds. Then the researchers had to take into consideration the gravitational influence of all the matter in the galaxy. The likelihood is that ‘Oumuamua was ejected from a planetary system during the era of planet formation, and that it would have been sent on its journey by gravitational interactions with giant planets in the infant system.

Calculating its trajectory, then, could lead us back to ‘Oumuamua’s home star, or at least to a place close to it. Another assumption is that the relative speed of ‘Oumuamua and its parent star is comparatively slow, because objects are not typically ejected from planetary systems at high speed. Given all this, Bailer-Jones and team come down from 4500 candidates to four that they consider the best possibilities. None of these stars is currently known to have planets at all, much less giant planets, but none has been seriously examined for planets to this point.
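The first filtering step, straight-line motion at constant speed, reduces to a closest-approach problem between two linear trajectories. Here is a minimal sketch with entirely made-up vectors; the real calculation integrates orbits in the Galactic potential and propagates Gaia’s measurement uncertainties:

```python
import numpy as np

def closest_approach(r_star, v_star, r_obj, v_obj):
    """Closest approach of two bodies moving on straight lines.

    Positions in pc and velocities in pc/Myr at a common epoch.
    Minimizing |dr + dv*t|^2 gives t* = -(dr . dv) / |dv|^2; a negative
    t* puts the encounter in the past, as a home-system search expects."""
    dr = np.asarray(r_obj, float) - np.asarray(r_star, float)
    dv = np.asarray(v_obj, float) - np.asarray(v_star, float)
    t_min = -np.dot(dr, dv) / np.dot(dv, dv)
    d_min = np.linalg.norm(dr + dv * t_min)
    return t_min, d_min

# Entirely made-up vectors, just to exercise the geometry:
t, d = closest_approach(r_star=[20.0, -5.0, 3.0], v_star=[10.0, 2.0, -1.0],
                        r_obj=[0.0, 0.0, 0.0], v_obj=[-11.0, 22.0, 7.5])
print(f"closest approach {d:.1f} pc, {-t:.2f} Myr ago")
```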

Let’s pause on this issue, because it’s an interesting one. Digging around in the paper, I learned that systems with unstable gas giants would be more likely to eject planetesimals than systems with stable giant planets, a consequence of the eccentric orbits of multiple gas giants during an early phase of system instability. It also turns out that there are ways to achieve higher ejection velocities. Does ‘Oumuamua come from a binary star? Let me quote from the paper on this:

Higher ejection velocities can occur for planetesimals scattered in a binary star system. To demonstrate this, we performed a simple dynamical experiment on a system comprising a 0.1 M⊙ star in a 10 au circular orbit about a 1.0 M⊙ star. (This is just an illustration; a full parameter study is beyond the scope of this work.) Planetesimals were randomly placed between 3 au and 20 au from the primary, enveloping the orbit of the secondary… Once again most (80%) of the ejections occur at velocities lower than 10 km s⁻¹, but a small fraction is ejected at higher velocities in the range of those we observe (and even exceeding 100 km s⁻¹).

So keep this in mind in evaluating the candidate stars. One of these is the M-dwarf HIP 3757, which can serve as an example of how much remains to be done before we can claim to know ‘Oumuamua’s origin. Approximately 77 light years from Earth, the star as considered by these methods would have been within 1.96 light years of ‘Oumuamua about 1 million years ago. This is close enough to make the star a candidate given how much play there is in the numbers.

But the authors are reluctant to claim HIP 3757 as ‘Oumuamua’s home star because the relative speed between the object and the star is about 25 kilometers per second, making ejection by a giant planet in the home system less likely. More plausible on these grounds is HD 292249, which would have been within a slightly larger distance some 3.8 million years ago. Here we get a relative speed of 10 kilometers per second. Two other stars also fit the bill, one with an encounter 1.1 million years ago, the other at its closest 6.3 million years ago. Both are in the DR2 dataset and have been catalogued by previous surveys, but little is known about them.

Now note another point: None of the candidate stars in the paper is known to have giant planets, but higher-speed ejections can still be managed in a binary star system, or for that matter in a system undergoing a close pass by another star. None of the candidates is known to be a binary. Thus the very mechanism of ejection remains unknown, and the authors are quick to add that they are working at this point with no more than a small percentage of the stars that could have been ‘Oumuamua’s home system.

Given that the 7 million stars in Gaia DR2 with 6D phase space information is just a small fraction of all stars for which we can eventually reconstruct orbits, it is a priori unlikely that our current search would find ‘Oumuamua’s home star system.

Yes, and bear in mind too that ‘Oumuamua is expected to pass within 1 parsec of about 20 stars and brown dwarfs every million years. Given all of this, the paper serves as a valuable tightening of our methods in light of the latest data we have about ‘Oumuamua, and points the way toward future work. The third Gaia data release is to occur in 2021, offering a sample of stars with radial velocity data ten times larger than DR2 [see the comments for a correction on this]. No one is claiming that ‘Oumuamua’s home star has been identified, but the process for making this identification is advancing, an effort that will doubtless pay off as we begin to catalog future interstellar objects making their way into our system.

The paper is Bailer-Jones et al., “Plausible home stars of the interstellar object ‘Oumuamua found in Gaia DR2,” accepted for publication in The Astrophysical Journal (preprint).

