Centauri Dreams

Imagining and Planning Interstellar Exploration

The Odds on an Empty Cosmos

When Arthur C. Clarke tells me that something is terrifying, he’s got my attention. After all, since boyhood I’ve not only had my imagination greatly expanded by Clarke’s work but have learned a great deal about scientific methodology and detachment. So where does terror fit in? Clarke is said to have used the term in a famous quote: “Two possibilities exist: either we are alone in the Universe or we are not. Both are equally terrifying.” But let’s ponder this: Would we prefer to live in a universe with other intelligent beings, or one in which we are alone?

Are they really equally terrifying? Curiosity favors the former, as does innate human sociability. But the actual situation may be far more stark, which is why David Kipping deploys the Clarke quote in a new paper probing the probabilities.

Working with the University of Sydney’s Geraint Lewis, Kipping (Columbia University) has applied a thought experiment explored by Edwin Jaynes to dig into the matter. Jaynes (1922-1998) was a physicist at Washington University in St. Louis, MO. Through his analysis of probabilities (statistical inference was a key aspect of his work), Jaynes built a rigorous framework around a prior proposed decades earlier by J. B. S. Haldane, a man who had his own set of famous quotes, including the familiar “Now, my own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose.” This seems to be a day for good quotes.

Imagine a lab bench holding a large number of beakers filled with water, roughly the same amount in each. The goal is to find out whether an unknown chemical will dissolve in them. Remember, each beaker contains nothing but water, all of it from the same source. You are to pour some of the chemical into each.

The logical expectation is that the unknown compound will either dissolve in every beaker or in none. The result should hold across the board: What happens in one beaker should happen in all. What we would not expect is for the compound to dissolve in some beakers but not others. You can see what a mixed result would imply: that the tiniest variations in temperature and pressure could swing the outcome either way. In other words, as Kipping and Lewis note, it would mean that the conditions in the room and the properties of the compound were “balanced on a knife edge; fine-tuned to yield such an outcome.”

A mixed result would be telling us something about fine-tuning: Are the conditions in the room so precisely set that there is some hair-trigger threshold that some but not all of the beakers cross when the chemical is added? How could that happen? Jaynes went about exploring this gedankenexperiment (and many others – he would become known as one of the founders of so-called Objective Bayesianism). The beauty of the Kipping and Lewis paper is that the authors have applied the Jaynes experiment, for the first time as far as I know, to the cosmos. Thus instead of beakers of water, think of exoplanets, and liken the dissolving of the chemical to abiogenesis. From the paper:

Consider an ensemble of Earth-like planets across the cosmos – worlds with similar gravity, composition, chemical inventories and climatic conditions. Although small differences will surely exist across space (like the beakers across the laboratory), one should reasonably expect that life either emerges nearly all of the time in such conditions, or hardly ever. As before, it would seem contrived for life to emerge in approximately half of the cases – again motivated from the fine-tuning perspective.

Image: This is Figure 1 from the paper. Caption: In the gedankenexperiment of attempting to dissolve an unknown compound X into a series of water vessels, Jaynes and Haldane argued that, a-priori, X will either dissolve almost all of the time or very rarely, but it would be contrived for nearly half of the cases to dissolve and half not. The function plotted here represents the Haldane prior (∝ F⁻¹(1 − F)⁻¹) that captures this behaviour. Credit: Kipping and Lewis.

The authors argue that the idea can be extended beyond abiogenesis to include the fraction of worlds on which multicellular life develops, and indeed the fraction of worlds where technological civilizations develop. Now we’re pondering a universe that is either crammed with life or devoid of it, with little room to maneuver in between. Which of these is most likely to be true? Can we connect this with the Drake Equation, that highly influential statement that so defined SETI’s early years in terms of the factors that influence the number of communicating technological civilizations in the galaxy?

Rather than extending the variables of the Drake Equation, a process that could go on indefinitely, the authors choose to distill it using what they call a ‘birth-death formalism.’ The result is a ‘steady state’ version of the Drake Equation (SSD).

The balance between birth and death is crucial. A civilization emerges here; another dies there. Think of the first six terms of the Drake Equation as setting the birth rate, while the final term, L – the lifetime of a civilization – sets the death rate. The authors suggest that problems with the original equation can be resolved by paring it down to this form, producing a new quantity F, the ‘occupation fraction’: the fraction of planets occupied by technological civilizations, arrived at through the ratio of births to deaths per year. Thus in the case of a galaxy filled with technological societies, F would come out close to 1. The paper fully develops how the new equation is reached, but the end result is this:
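
The equation itself appeared as an image in the original post. Reconstructing it from the birth-death balance described above (this is my rendering of the standard equilibrium result, not a transcription of the paper’s figure; consult the paper for the authors’ exact notation), the occupation fraction takes the form:

```latex
% Steady-state occupation fraction from a birth-death balance
% (my reconstruction; lambda_B is the civilization birth rate,
% lambda_D the death rate, and their ratio is lambda_BD).
F \;=\; \frac{\lambda_{\mathrm{BD}}}{1 + \lambda_{\mathrm{BD}}},
\qquad
\lambda_{\mathrm{BD}} \;\equiv\; \frac{\lambda_{\mathrm{B}}}{\lambda_{\mathrm{D}}}
```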

where λBD is the birth-to-death ratio. The particulars of how this is derived are fascinating, and can also be explored in Kipping’s Cool Worlds video.

Now we have something to work with. A galaxy in which there are few births compared to deaths is one that is all but empty. Start adjusting the ratio to factor in more civilization births and the galaxy begins to fill. Continue the adjustment and the entire galaxy fills. The S-curve is a familiar one, and one that puts the pressure on SETI optimists because it seems evident that not all stars are occupied by civilizations.

Assuming that F does not equal 1 or come close to it, we can explore the steep S-curve as it rises. Here N_T refers to the number of target stars a survey examines. From the paper:

This is what we consider to be the SETI optimist’s scenario (given that F ≈ 1 is not allowed). Here, F takes on modest but respectable values, sufficiently large that one might expect success with a SETI survey. For example, modern SETI surveys scan N_T ∼ 10³–10⁴ targets… so for such a survey to be successful one requires F to exceed the reciprocal of this (i.e. F ≥ 10⁻⁴), but realistically greatly so (i.e. F ≫ 10⁻⁴) since not every occupied seat will produce the exact technosignature we are searching for, in the precise moments we look, and at the power level we are sensitive to. This arguably places the SETI optimist in a rather narrow corridor of requiring N_T⁻¹ ≪ λBD ≲ 1.

That narrow corridor is the SETI fine-tuning problem. The tiny birth-death ratio range available in this ‘uncanny valley of possibility’ is all the room to maneuver we have for a successful detection.
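
To get a feel for just how narrow that corridor is, here is a minimal sketch (my own illustration, not anything from the paper) that evaluates the occupation fraction F = λBD / (1 + λBD) and asks how much of a log-uniform prior on λBD falls inside the optimist’s window N_T⁻¹ ≪ λBD ≲ 1, assuming a survey of 10⁴ targets and deliberately arbitrary prior bounds:

```python
import numpy as np

def occupation_fraction(lam_bd):
    """Steady-state occupation fraction for a given birth-to-death ratio."""
    return lam_bd / (1.0 + lam_bd)

# Illustrative assumptions (mine, not the paper's): a survey of N_T targets
# and a log-uniform prior on lambda_BD spanning arbitrary bounds.
N_T = 1e4
prior_lo, prior_hi = -30.0, 10.0                       # log10(lambda_BD) bounds (assumed)
corridor_lo, corridor_hi = np.log10(1.0 / N_T), 0.0    # N_T^-1 < lambda_BD < 1

fraction_of_prior = (corridor_hi - corridor_lo) / (prior_hi - prior_lo)
print(f"Optimist corridor: {corridor_hi - corridor_lo:.0f} decades out of "
      f"{prior_hi - prior_lo:.0f}, i.e. {100 * fraction_of_prior:.0f}% of the prior range")

# Occupation fraction at a few representative birth-to-death ratios.
for lam in (1e-10, 1.0 / N_T, 1e-2, 1.0, 1e2):
    print(f"lambda_BD = {lam:.0e}  ->  F = {occupation_fraction(lam):.2e}")
```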

And the authors point out that the value for λBD may be ‘outrageously small’. Just how common is abiogenesis? A telling case in point: One recent calculation puts the probability of spontaneously forming proteins from amino acids on the order of 10⁻⁷⁷. And having arrived at such proteins, it would still be necessary to go through all the further steps leading to an actual living organism, to say nothing of the leap from living organisms to technology.

Image: This is Figure 3 from the paper. Caption: Left: Occupation fraction of potential “seats” as a function of the birth-to-death rate ratio (λBD), accounting for finite carrying capacity. In the context of communicative ETIs, an occupation fraction of F ∼ 1 is apparently incompatible with both Earth’s history and our (limited) observations to date. Values of λBD ≪ 1 imply a lonely cosmos, and thus SETI optimists must reside somewhere along the middle of the S-shaped curve. Right: As we expand the bounds on λBD, the case for SETI optimism appears increasingly contrived and becomes a case of fine-tuning. Credit: Kipping and Lewis.

Thus the birth-to-death ratio cannot be too low, but neither can it be too high if it is to fit our history of observations. The window for successful SETI detection is small, a fine-tuned ‘valley’ in which we are unlikely to find ourselves. To this point SETI has produced no telling evidence for technological civilizations other than our own (we do pick up our own signals quite often, of course, in the form of RFI, a well-known problem). You have to get into the realm of conspiracy theories like the ‘zoo hypothesis’ to explain this result and still maintain that the galaxy is filled with technological civilizations.

We can also weigh the result in the context of our own planetary past:

Moreover, F ≈ 1 is simply incompatible with Earth’s history. Most of Earth’s history lacks even multicellular life, let alone a technological civilization. We thus argue that F ≈ 1 can be reasonably dismissed as a viable hypothesis…We highlight that excluding F ≈ 1 is compatible with placing a “Great Filter” at any position, such as the “Rare Earth” hypothesis (Ward & Brownlee 2000) or some evolutionary “Hard Step” (Carter 2008).

So what’s actually going on in those beakers on Jaynes’ lab table? Only if some beakers do one thing when the chemical is added and some do another – only if conditions sit precisely on that fine-tuned middle ground – could a SETI detection be consistent with our previous observations. But that’s a pretty thin knife-edge to place all our hopes on.

I should add that the authors introduce mitigating factors into the discussion at the end. In particular, the assumptions behind the SSD might be violated by the so-called ‘grabby aliens’ hypothesis, in which alien civilizations emerge only rarely, but when they do, they colonize their own part of the galaxy. Thus most regions fill up, though not all, and we humans have perhaps emerged in an area this colonization wave has not yet reached. That’s intriguing, as it implies that the best SETI targets might be very far away, and that extragalactic SETI may offer the best hope for a reception.

But let me end by questioning that note of hope, and for that matter, the issue of ‘terror’ that the Clarke quote invokes. Because I don’t find the idea of a universe devoid of other civilizations particularly terrifying, and I certainly don’t see it as one that is beyond hope. A Milky Way stuffed with civilizations would be fascinating, but a cosmos empty of other sentient beings is also a remarkable scientific result. So of course we keep looking, but the real goal is to understand our place in the universe. If we are a spectacular contradiction to an otherwise empty galaxy, let’s get on with exploring it.

The paper is Kipping & Lewis, “Do SETI Optimists Have a Fine-Tuning Problem?” submitted to International Journal of Astrobiology (preprint). See Kipping’s Cool Worlds video on the matter for more.

The Final Parsec Paradox: When Things Do Not Go Bump in the Night

Something interesting is going on in the galaxy NGC 6240, some 400 million light years from the Sun in Ophiuchus. Rather than sporting a single supermassive black hole at its center, this galaxy appears to have two, located about 3000 light years from each other. A merger seems likely, or is it? Centauri Dreams regular Don Wilkins returns to his astronomical passion with a look at why multiple supermassive black holes are puzzling scientists and raising questions that may even involve new physics.

By Don Wilkins

Supermassive black holes (SMBH), black holes with masses exceeding 100,000 solar masses, don’t behave as expected. When galaxies hosting them collide, gas and dust smash into each other, forming new stars; existing stars are too far apart to collide. The two SMBH converge, and intuition foresees the massive pair coalescing into a single giant (Figure 1). The Universe, as frequently happens, ignores our intuition.

The relevant process is dynamical friction. [1-4] As a result of it, each SMBH experiences a deceleration along its direction of motion: gas and stars passing between and around the pair leach momentum from the black holes. These smaller masses pick up enormous speeds and are hurled away, and over time, as immense amounts of mass are thrown off, the SMBH inch closer together.

The effect is similar to the gravitational assist maneuver – the fly-by of a massive object – used to accelerate space probes: here the stars play the role of the probe, gaining speed at the expense of the black holes.
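
For readers who want the quantitative version, the standard expression is Chandrasekhar’s dynamical friction formula, the subject of references 1-3 below. In its textbook form, for a massive body of mass M moving at velocity v_M through a background of much lighter stars with density ρ and Maxwellian velocity dispersion σ, the drag is:

```latex
% Chandrasekhar dynamical friction in its standard textbook form
% (Maxwellian background of light stars); ln(Lambda) is the Coulomb logarithm.
\frac{d\mathbf{v}_M}{dt} \;=\;
  -\,\frac{4\pi G^{2} M \rho \ln\Lambda}{v_M^{3}}
  \left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\,e^{-X^{2}}\right]\mathbf{v}_M,
\qquad
X \equiv \frac{v_M}{\sqrt{2}\,\sigma}
```

The deceleration points opposite the black hole’s motion, which is why the pair steadily loses orbital energy to the surrounding stars and gas.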

The paradox occurs when the gases and stars have all been expelled from the volume between the two SMBH, by the time their separation has shrunk to roughly a parsec, about three light years. There is no more mass to siphon off momentum. Modeling indicates the standoff would last longer than the age of the Universe.

According to Dr. Gonzalo Alonso-Álvarez, a postdoctoral researcher at the University of Toronto:

“Previous calculations have found that this process [the merger of SMBH] stalls when the black holes are around 1 parsec away from each other, a situation sometimes referred to as the final parsec problem.”

Figure 1. Supermassive Black Holes Orbit Each Other. Credit: NASA.

The two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors employ laser interferometry to detect gravitational waves in the band from roughly 10 hertz to 1 kilohertz, which is suitable for sensing black holes 5 to 100 times more massive than the Sun. To detect the slow inspiral of SMBH pairs, by contrast, a detector must sense waves at nanohertz frequencies – periods of years to decades.

The size of a gravitational wave detector scales with the wavelength it must measure. A detector sized to capture the cry of an SMBH collision would be immense. Scientists in the NANOGrav Collaboration have sidestepped the need to build an instrument with dimensions of light years by employing pulsars scattered across the galaxy.

In this approach, the arrival times of pulses from a set of pulsars are measured with extreme accuracy. NANOGrav’s pulsar timing array used sixty-eight millisecond pulsars as timing sources. Gravitational waves, compressing and expanding spacetime in their passage, alter the timing of each pulsar in a small way. Timing changes were collected for fifteen years.

Evidence for gravitational waves with periods of years to decades was found. The data are under evaluation to determine the source of the distortions. One possibility is the collision of SMBH.
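
The tell-tale signature such an array looks for is not any single pulsar’s timing wobble but the way the wobbles of pulsar pairs correlate as a function of their angular separation on the sky – the Hellings-Downs curve. Here is a minimal sketch of that expected correlation (the textbook formula for an isotropic background, not NANOGrav’s analysis pipeline):

```python
import numpy as np

def hellings_downs(theta_rad):
    """Expected correlation of timing residuals for two pulsars separated by
    angle theta on the sky, assuming an isotropic gravitational-wave background.
    Normalized so the correlation approaches 0.5 for nearby (distinct) pulsars."""
    x = (1.0 - np.cos(theta_rad)) / 2.0
    if x <= 0.0:               # coincident directions: the x*ln(x) term -> 0
        return 0.5
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

for deg in (10, 30, 60, 90, 120, 180):
    print(f"{deg:3d} deg separation  ->  correlation {hellings_downs(np.radians(deg)):+.3f}")
```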

There are five possible solutions to the paradox. The first is that the NANOGrav detections are not SMBH collisions at all. There is no paradox; two SMBH will not merge within the life of the Universe. Rather boringly, our “maths” are correct. Nothing new to learn. [5-12]

Researchers propose that more realistic galaxy models – triaxial and rotating – resolve the paradox in ten billion years or less. [13]

Another solution involves three SMBH. The third member continues to remove momentum from the other two until gravitational attraction pulls its partners together into a single black hole. [14]

Expelled gas and stars may also return to the vicinity of the two SMBH, where they can continue to siphon off momentum until a merger occurs. [15]

Dark matter could also contribute to draining the binary’s momentum. In this case, the particles of dark matter must be able to interact with each other. [16] From Dr. Alonso-Álvarez, whose team published the dark matter paper:

“What struck us the most when Pulsar Timing Array collaborations announced evidence for a gravitational wave spectrum is that there was room to test new particle physics scenarios, specifically dark matter self-interactions, even within the standard astrophysical explanation of supermassive black hole mergers.”

Figure 2. The distorted appearance of NGC 6240 is a result of a galactic merger that occurred when two galaxies drifted too close to one another. When the two galaxies came together, their central black holes did so, too. There are two supermassive black holes within this jumble, spiraling closer and closer to one another. They are currently only some 3,000 light-years apart, incredibly close given that the galaxy itself spans 300,000 light-years. Image credit: NASA, ESA, the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration, and A. Evans (University of Virginia, Charlottesville/NRAO/Stony Brook University).

For the moment, several theories have been advanced to explain how SMBH manage to merge. Whether further evaluation of the nanohertz data reveals SMBH coalescence is the primary question.

References

1. S. Chandrasekhar, Dynamical Friction I. General Considerations: the Coefficient of Dynamical Friction, https://articles.adsabs.harvard.edu/pdf/1943ApJ....97..255C

2. S. Chandrasekhar, Dynamical Friction II. The Rate of Escape of Stars from Clusters and the Evidence for the Operation of Dynamical Friction, https://articles.adsabs.harvard.edu/pdf/1943ApJ....97..263C

3. S. Chandrasekhar, Dynamical Friction III. A More Exact Theory of the Rate of Escape of Stars from Clusters, https://articles.adsabs.harvard.edu/pdf/1943ApJ....98...54C

4. John Kormendy and Luis C. Ho, Coevolution (Or Not) of Supermassive Black Holes and Host Galaxies, https://arxiv.org/pdf/1304.7762

5. The NANOGrav Collaboration, Focus on NANOGrav’s 15 yr Data Set and the Gravitational Wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/collections/apjl-230623-245-Focus-on-NANOGrav-15-year

6. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acdac6/meta

7. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Observations and Timing of 68 Millisecond Pulsars, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acda9a/meta

8. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Detector Characterization and Noise Budget, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acda88/meta

9. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Search for Signals from New Physics, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acdc91/meta

10. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Bayesian Limits on Gravitational Waves from Individual Supermassive Black Hole Binaries, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/ace18a/meta

11. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Constraints on Supermassive Black Hole Binaries from the Gravitational-wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/ace18b

12. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Search for Anisotropy in the Gravitational-wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acf4fd/meta

13. Peter Berczik, David Merritt, Rainer Spurzem, Hans-Peter Bischof, Efficient Merger of Binary Supermassive Black Holes in Non-Axisymmetric Galaxies, https://arxiv.org/pdf/astro-ph/0601698

14. Masaki Iwasawa, Yoko Funato and Junichiro Makino, Evolution of Massive Blackhole Triples I — Equal-mass binary-single systems, https://arxiv.org/pdf/astro-ph/0511391

15. Milos Milosavljevic and David Merritt, Long Term Evolution of Massive Black Hole Binaries, https://arxiv.org/pdf/astro-ph/0212459

16. Gonzalo Alonso-Álvarez et al, Self-Interacting Dark Matter Solves the Final Parsec Problem of Supermassive Black Hole Mergers, Physical Review Letters (2024). https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.133.021401

The Search for Things that Matter

Overpopulation has spawned so many dystopian futures in science fiction that it would be a lengthy though interesting exercise to collect them all. Among novels, my preference for John Brunner’s Stand on Zanzibar goes back to my utter absorption in its world when first published in book form in 1968. Kornbluth’s “The Marching Morons” (1951) fits in here, and so does J.G. Ballard’s “Billenium” (1961), and of course Harry Harrison’s Make Room! Make Room! from 1966, which emerged in much changed form in the film Soylent Green in 1973.

You might want to check Science Fiction and Other Suspect Ruminations for a detailed list, and for that matter on much else in the realm of vintage science fiction as perceived by the pseudonymous Joachim Boaz (be careful, you might spend more time in this site than you had planned). In any case, so strongly has the idea of a clogged, choking Earth been fixed in the popular imagination that I still see references to going off-planet as a way of relieving population pressure and saving humanity.

So let’s qualify the idea quickly, because it has a bearing on the search for technosignatures. After all, a civilization that just keeps getting bigger has to take desperate measures to generate the power needed to sustain itself. It’s worth noting, then, that a 2022 UN report suggests a world population peaking at a little over 10 billion and then beginning to decline as the century ends. A study in the highly regarded medical journal The Lancet from a few years back sees us peaking at a little under 10 billion by 2064, with a decline to less than 9 billion by 2100.

Image: The April 1951 issue of Galaxy Science Fiction, where “The Marching Morons” first appeared. Brilliant and, according to friend and collaborator Frederik Pohl, exceedingly odd, Cyril Kornbluth died of a heart attack in 1958 at the age of 34, on the way to being interviewed for a job as editor of Fantasy and Science Fiction.

How accurate such projections are is unknown, as is what happens beyond the end of this century, but it seems clear that we can’t assume the kind of exponential increase in population that will put an end to us in Malthusian fashion any time soon. It’s conceivable that one reason we are not finding Dyson spheres (although there are some candidates out there, as we’ve discussed here previously) is that technological civilizations put the brakes on their own growth and sustain levels of energy consumption that would not readily be apparent from telescopes light years away.

Thus a new paper from Ravi Kopparapu (NASA GSFC) and colleagues, in which the current population figure of about 8 billion is allowed to grow to 30 billion under conditions in which the standard of living is high globally. Assuming the use of solar power, the authors find that this civilization, far larger in population than ours, uses much less energy than the sunlight incident upon the planet provides. Here is an outcome that puts one of the most cherished tropes of science fiction to the test, for as Kopparapu explains:

“The implication is that civilizations may not feel compelled to expand all over the galaxy because they may achieve sustainable population and energy-usage levels even if they choose a very high standard of living. They may expand within their own stellar system, or even within nearby star systems, but galaxy-spanning civilizations may not exist.”

Image: Conceptual image of an exoplanet with an advanced extraterrestrial civilization. Structures on the right are orbiting solar panel arrays that harvest light from the parent star and convert it into electricity that is then beamed to the surface via microwaves. The exoplanet on the left illustrates other potential technosignatures: city lights (glowing circular structures) on the night side and multi-colored clouds on the day side that represent various forms of pollution, such as nitrogen dioxide gas from burning fossil fuels or chlorofluorocarbons used in refrigeration. Credit: NASA/Jay Freidlander.

The harvesting of stellar light may be obsolete among civilizations older than our own, given alternative ways of generating power. But if not, the paper models a telescope on the order of the proposed Habitable Worlds Observatory to explore how readily it might detect a massive array of solar panels on a planet some 30 light years away. This is intriguing stuff, because it turns out that even huge populations don’t demand enough power to require covering their planet in solar panels. Indeed, it would take several hundred hours of observing time to detect, at high reliability, a land coverage of 23 percent on an Earth-like planet using silicon-based solar panels for its needs.
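
As a rough sanity check on that claim – using round numbers of my own, not the paper’s detailed model – here is what the demand of 30 billion people at a high standard of living looks like against what a modest deployment of solar panels on Earth’s land could supply:

```python
# Back-of-envelope energy budget (illustrative round numbers of my own,
# not the Kopparapu et al. model).

population = 30e9                 # the paper's extreme-case population
per_capita_power = 10e3           # W per person: assumed high standard of living
demand = population * per_capita_power          # total demand, watts

mean_insolation = 200.0           # W/m^2: rough day-night surface average (assumed)
panel_efficiency = 0.20           # assumed silicon-panel efficiency
land_area = 1.5e14                # m^2: Earth's land area

area_needed = demand / (mean_insolation * panel_efficiency)
print(f"Demand: {demand / 1e12:.0f} TW")
print(f"Panel area needed: {area_needed / 1e12:.1f} million km^2 "
      f"({100 * area_needed / land_area:.1f}% of Earth's land)")
```

Even with these generous assumptions, the required coverage comes out at a few percent of Earth’s land, a small fraction of the 23 percent figure used in the detectability test above.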

The conclusion is striking. From the paper:

Kardashev (1964) even imagined a Type II civilization as one that utilizes the entirety of its host star’s output; however, such speculations are based primarily on the assumption of a fixed growth rate in world energy use. But such vast energy reserves would be unnecessary even under cases of substantial population growth, especially if fusion and other renewable sources are available to supplement solar energy.

A long-held assumption thus comes under fire:

The concept of a Type I or Type II civilization then becomes an exercise in imagining the possible uses that a civilization would have for such vast energy reserves. Even activities such as large-scale physics experiments and (relativistic) interstellar space travel (see Lingam & Loeb 2021, Chapter 10) might not be enough to explain the need for a civilization to harness a significant fraction of its entire planetary or stellar output. In contrast, if human civilization can meet its own energy demands with only a modest deployment of solar panels, then this expectation might also suggest that concepts like Dyson spheres would be rendered unnecessary in other technospheres.

Of course, good science fiction is all about questioning assumptions by pushing them to their logical conclusions, and ideas like this should continue to be fertile ground for writers. Does a civilization necessarily have to expand to absorb the maximum amount of energy its surroundings make available? Or is it more likely to evolve only insofar as it needs to reach the energy level required for its own optimum level of existence?

So. What makes for an ‘optimum experience of life’? And how can we assume other civilizations will necessarily answer the question the same way we would?

The question explores issues of ecological sustainability and asks us to look more deeply at how and why life expands, relating this to the question of how a technosphere would or would not grow once it had reached a desired level. We’re crossing disciplinary boundaries in ways that make some theorists uncomfortable, and rightly so because the answers are anything but apparent. We’re probing issues that are ancient, philosophical and central to the human experience. Plato would be at home with this.

The paper is Kopparapu et al., “Detectability of Solar Panels as a Technosignature,” Astrophysical Journal Vol. 967, No. 2 (24 May 2024), 119 (full text). Thanks to my friend Antonio Tavani for the pointer to this paper.

On Ancient Stars (and a Thought on SETI)

I hardly need to run through the math to point out how utterly absurd it would be to have two civilizations develop within a few light years of each other at roughly the same time. The notion that we might pick up a SETI signal from a culture more or less like our own fails on almost every level, but especially on the idea of time. A glance at how briefly we have had a technological society makes the point eloquently. We can contrast it to how many aeons Earth has seen since its formation 4.6 billion years ago.

Brian Lacki (UC-Berkeley) looked into the matter in detail at a Breakthrough Discuss meeting in 2021. Lacki points out that our use of radio takes up roughly a hundred-millionth of the lifespan of the Sun. We must think, he believes, in terms of temporal coincidence, as the graph he presented at the meeting shows. Note the arbitrary placement of a civilization at Centauri B, and others at Centauri A and C, along with our own timeline. The thin line representing our civilization actually corresponds to a lifetime of 10 million years. What are the odds that the lines of any two stars coincide? Slim indeed, unless societies can persist for not just millions but billions of years. We don’t know whether they can, but we need to think about it in terms of what we might receive.

Image: Brian Lacki’s slide illustrating temporal coincidence. Credit: Brian Lacki.
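
A quick way to put numbers on temporal coincidence: if two civilizations each last L years and arise at independent, uniformly random times within a window of T years, the chance that their lifetimes overlap is roughly 2L/T when L is much smaller than T. The toy calculation below (my own illustration, not Lacki’s analysis) shows how punishing that is:

```python
# Toy model of temporal coincidence: two civilizations of lifetime L each,
# born at independent uniform-random times within a window of T years.
# For L << T the probability that their lifetimes overlap is ~ 2L/T
# (exactly 1 - (1 - L/T)**2 when L <= T).

T = 10e9                      # assumed window in years for civilizations to arise

for L in (100, 10_000, 10_000_000, 1_000_000_000):
    p_overlap = 1.0 - (1.0 - L / T) ** 2
    print(f"lifetime L = {L:>13,d} yr  ->  overlap probability ~ {p_overlap:.1e}")
```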

But there is another point. Should we assume that stars near the Sun are roughly the same age as ours? You might think so at first glance, given the likely formation of our star in a stellar cluster, but in fact clusters separate and diverge over time, so that finding the Sun’s birthplace and its siblings is challenging in itself (though some astronomers are trying). As we’re also learning, slowly but surely, stars around us in the Milky Way’s so-called ‘thin disk’ – within which the Sun moves – actually show a wider range of ages than we first thought.

A planet-hosting star a billion years older than ours might be a more interesting SETI target than one considerably younger, simply because life has had more time to start emerging on its planets. But untangling all the factors that help us understand stellar age and movement is not easy. What is now happening is that we are developing what are known as chrono-chemo-kinematical maps, which track these factors along with the chemical composition of the stars under study. Here we’re combining spectroscopic analysis with models of stellar evolution and radial velocity analysis.

This multi-dimensional approach is greatly aided by ESA’s Gaia mission and its extensive datasets on stars within a few thousand light years of the Sun. Gaia is remarkably helpful at using astrometry to pin down stellar motion and distance. Then we can factor in metallicity, for the oldest stars in the galaxy were formed at a time when hydrogen and helium were about the only ingredients the cosmos had to work with. A chrono-chemo-kinematical map can interrelate these factors, and with the help of neural networks tease out some conclusions that have surprised astronomers.

Thus a new paper out of the Leibniz-Institut für Astrophysik Potsdam. Here Samir Nepal and colleagues have been using machine learning (with what they call a ‘hybrid convolutional neural network’) to attack one million spectra from the Radial Velocity Spectrometer (RVS) in Gaia’s Data Release 3. Altogether, they are working with a sample of 565,606 stars to determine their parameters. Here the metallicity of stars is significant because the thin disk, which extends in the plane of the galaxy out to its edges, has been thought to consist primarily of younger Population I stars. Thus we should find higher metallicity, as we do, in a region of ongoing star formation.
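
The ‘hybrid convolutional neural network’ the team describes combines the RVS spectra with other inputs; those details are theirs, but the core idea – a 1D CNN that maps a spectrum to stellar parameters such as effective temperature, surface gravity and metallicity – can be sketched in a few lines. Everything below (layer sizes, spectrum length, label choices) is an illustrative assumption of mine, not the Nepal et al. architecture:

```python
import torch
import torch.nn as nn

class SpectrumRegressor(nn.Module):
    """Minimal 1D CNN mapping a stellar spectrum to (Teff, logg, [M/H]).
    Architecture and sizes are illustrative, not the published network."""
    def __init__(self, n_pixels=2400, n_labels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_pixels // 16), 128), nn.ReLU(),
            nn.Linear(128, n_labels),
        )

    def forward(self, spectrum):
        # spectrum: (batch, n_pixels) of normalized flux values
        return self.head(self.features(spectrum.unsqueeze(1)))

model = SpectrumRegressor()
fake_batch = torch.randn(8, 2400)      # stand-in for normalized spectra
print(model(fake_batch).shape)         # -> torch.Size([8, 3])
```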

But the Gaia mission is helping us understand that a surprising portion of the thin disk consists of ancient stars on orbits similar to the Sun’s. And while the thin disk has largely been thought to have begun forming some 8 to 10 billion years ago, the maps emerging from the Potsdam work show that the majority of ancient stars in the Gaia sample (within 3200 light years) are far older than this. Most are metal-poor, but some have higher metal content than our Sun, which implies that metal enrichment was already taking place early in the Milky Way’s development.

Let me quote Samir Nepal directly on this:

“These ancient stars in the disc suggest that the formation of the Milky Way’s thin disc began much earlier than previously believed, by about 4-5 billion years. This study also highlights that our galaxy had an intense star formation at early epochs leading to very fast metal enrichment in the inner regions and the formation of the disc. This discovery aligns the Milky Way’s disc formation timeline with those of high-redshift galaxies observed by the James Webb Space Telescope (JWST) and Atacama Large Millimeter Array (ALMA) Radio Telescope. It indicates that cold discs can form and stabilize very early in the universe’s history, providing new insights into the evolution of galaxies.“

Image: An artist’s impression of our Milky Way galaxy, a roughly 13 billion-year-old ‘barred spiral galaxy’ that is home to a few hundred billion stars. On the left, a face-on view shows the spiral structure of the Galactic Disc, where the majority of stars are located, interspersed with a diffuse mixture of gas and cosmic dust. The disc measures about 100 000 light-years across, and the Sun sits about half way between its centre and periphery. On the right, an edge-on view reveals the flattened shape of the disc. Observations point to a substructure: a thin disc some 700 light-years high embedded in a thick disc, about 3000 light-years high and populated with older stars. Credit: Left: NASA/JPL-Caltech; right: ESA; layout: ESA/ATG medialab.

Here is an image showing the movement of stars near the Sun around galactic center, as informed by the Potsdam work:

Image: Rotational motion of young (blue) and old (red) stars similar to the Sun (orange). Credit: Background image by NASA/JPL-Caltech/R. Hurt (SSC/Caltech).

A few thoughts: Combining data from different sources using the neural networks deployed in this study, and empowered by the Gaia DR3 RVS results, the authors are able to cover a wide range of stellar parameters, from gravity, temperature and metal content to distances, kinematics and stellar age. It’s going to take that kind of depth to begin to untangle the interacting structures of the Milky Way and place them into the context of their early formation.

Secondly, these results are genuinely surprising: not only are the majority of the metal-poor stars on thin-disk orbits older than 10 billion years, but fully 50 percent are older than 13 billion years. The thin disk, in other words, began forming less than a billion years after the Big Bang – some 4 billion years earlier than previous estimates. We also learn that while metallicity is a key factor, it varies considerably throughout this older population. Intense star formation made metal enrichment possible, working swiftly from the inner regions of the galaxy and pushing outwards.

So our Solar System is moving through regions containing a higher proportion of ancient stars than we knew. Upcoming work extending these machine learning techniques, now in the planning stages and drawing on data from the 4-metre Multi-Object Spectroscopic Telescope (4MOST), should refine the Potsdam team’s results in 2025. I return to what this may tell us from a SETI perspective: Ancient stars, especially those with higher than expected metallicity, are interesting targets given the longer time available for life and technology to develop on their planets.

Maybe we’re making the Fermi question even tougher to answer. Because many such stars in the nearby cosmic environment are older — far older — than we had realized.

The paper is Nepal et al., “Discovery of the local counterpart of disc galaxies at z > 4: The oldest thin disc of the Milky Way using Gaia-RVS,” accepted for publication in Astronomy & Astrophysics (preprint).

SPECULOOS-3b: A Gem for Atmospheric Investigation

“What is this fascination of yours with small red stars?” a friend asked in a recent lunch encounter, having seen something I wrote a few years back about TRAPPIST-1 in one of his annual delvings into the site. “They’re nothing like the Sun, to quote Shakespeare, and anyway, even if they have planets, they can’t support life. Right?”

Hmmm. The last question is about as open as a question can get. But my friend is on to something, at least in terms of the way most people think about exoplanets. My fascination with small red stars is precisely their difference from our familiar G-class star. An M-dwarf planet bearing life would be truly exotic, in an orbit lasting mere days rather than months (depending on the class of M-dwarf), and perhaps tidally locked, so inhabitants would see their star fixed in the sky. How science fictional can you get? And we certainly don’t have enough data to make the call on life around any of them.

Let’s talk a minute about how we classify small red stars, because this bears on the interesting project called SPECULOOS and its latest discovery that I want to get into today. SPECULOOS is of course an acronym (Search for Planets EClipsing ULtra-cOOl Stars), but in parts of northern Europe and especially Belgium it’s a word that conjures up the spiced shortcrust biscuits that are traditional on St. Nicholas’ Day (December 6). It’s always good to have something baking while you’re parsing exoplanet data.

The scientific parameters for SPECULOOS involve a transit search of the nearest 40 parsecs, targeting the 1650 or so very low-mass stars and brown dwarfs found within this volume. Of note today is the category known as ultracool dwarfs (UDS). Some 900 of these, more or less, are found here, in spectral types M6.5 to L2 – the former being M-dwarfs at the cool end of their class, the latter L-dwarfs at the warm end of theirs, cooler still than any M-dwarf. We’re talking about stars with masses between 0.07 and 0.1 solar masses and sizes not far from Jupiter’s.

I’ll send you to today’s paper for further details on the robotic, and international, network of observatories that make up SPECULOOS, and mention in passing that the remarkable TRAPPIST-1, with its seven Earth-sized transiting planets, was the network’s first discovery. A recent super-Earth has also been announced around the star LP 890-9, but the latest find, dubbed SPECULOOS-3b, orbiting an M6.5 dwarf some 16.75 parsecs out, merits special attention. This one has useful implications for our studies of exoplanet atmospheres and, as the authors point out, should be a prime target for the James Webb Space Telescope. The paper notes that “The planet’s high irradiation (16 times that of Earth) combined with the infrared luminosity and Jupiter-like size of its host star make it one of the most promising rocky exoplanets for detailed emission spectroscopy characterization.”

SPECULOOS-3 turns out to be the second-smallest main-sequence star found to host a transiting planet (it’s just a bit larger than TRAPPIST-1). The tiny host provides an excellent transit depth for detecting the Earth-sized planet. While its mass has not yet been determined, the likelihood is that it is a rocky world (all planets known to be Earth-sized in the NASA exoplanet archive have masses that imply a rocky composition). Making the definitive call will require a mass measurement via Doppler (radial velocity) studies, a relatively short observing program that the authors describe in the paper.

But another kind of investigation makes this find significant. Beyond radial velocity methods, we can put emission spectroscopy to work by measuring the combined light of star and planet just before the planet goes behind the star (secondary eclipse), and the star’s light just after it does so, using JWST’s Mid-InfraRed Low-Resolution Spectrometer (MIRI/LRS). The difference between the two yields the light emitted by the planet. Note the difference here from transmission spectroscopy, which examines the star’s light as it passes through the planet’s atmosphere. Emission spectroscopy is preferable here, as the paper explains:

…the interpretation of emission spectra is not dependent on the mass of the planet. Secondly, emission spectra provide the energy budget of the planet, which is essential to understand its atmosphere’s chemistry, its dynamics and can be used to constrain the planet’s albedo. Finally, in the absence of an atmosphere, emission spectroscopy instead directly accesses the planetary surface where its mineralogy can be studied, something impossible to achieve with transmission spectroscopy. For all these reasons, emission spectroscopy is a more reliable method to assess the presence of an atmosphere and study the nature of terrestrial planets around UDS. And… SPECULOOS-3 b is one of the smallest terrestrial planets that is within reach of the JWST in emission spectroscopy with MIRI/LRS.

Image: Emission spectroscopy, the secondary eclipse method, measures changes in the total infrared light from a star system as its planet transits behind the star, vanishing from our Earthly point of view. The dip in observed light can then be attributed to the planet alone. The spectrum is taken first with star and planet together, and then, as the planet disappears from view, a spectrum of just the star (second panel). By subtracting the star’s spectrum from the combined spectrum of the star plus the planet, it is possible to get the spectrum for just the planet (third panel). Credit: NASA/JPL-Caltech/R. Hurt (SSC/Caltech).
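
To see why this pairing of a Jupiter-sized star with an Earth-sized planet is so favorable, it helps to estimate the size of the signals involved. The sketch below is my own back-of-envelope estimate, not the paper’s calculation: the stellar radius and temperature are assumed ballpark values for an M6.5 dwarf, and the planet is taken to be Earth-sized at the 553 K equilibrium temperature quoted later in this post.

```python
import numpy as np

# Rough transit and secondary-eclipse depths for a SPECULOOS-3 b-like system.
# All stellar/planet parameters below are assumed ballpark values, not the
# paper's fitted numbers.

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
R_sun, R_earth = 6.957e8, 6.371e6          # meters

def planck(T, wavelength):
    """Blackbody spectral radiance at temperature T (K) and wavelength (m)."""
    x = h * c / (wavelength * k_B * T)
    return (2 * h * c**2 / wavelength**5) / np.expm1(x)

R_star = 0.10 * R_sun      # assumed: roughly Jupiter-sized M6.5 host
T_star = 2800.0            # assumed effective temperature
R_planet = 1.0 * R_earth   # Earth-sized planet
T_planet = 553.0           # equilibrium temperature quoted in the text

transit_depth = (R_planet / R_star) ** 2

wavelength = 10e-6         # 10 microns, within the MIRI/LRS band
eclipse_depth = transit_depth * planck(T_planet, wavelength) / planck(T_star, wavelength)

print(f"Transit depth: {1e2 * transit_depth:.2f} %")
print(f"Eclipse depth at 10 um: {1e6 * eclipse_depth:.0f} ppm")
```

Under these assumptions the transit is of order one percent and the 10-micron eclipse comes out at a few hundred parts per million – small, but the kind of signal the MIRI/LRS observations discussed above are designed to capture.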

And here’s the transmission method:

Image: This is a transmission spectrum of an Earth-like exoplanet. The graph, based on a simulation, shows what starlight looks like as it passes through the atmosphere of an Earth-like exoplanet. As the exoplanet moves in front of the star, some of the starlight is absorbed by the gas in that exoplanet’s atmosphere and some is transmitted through it. Each element or molecule in the atmosphere’s gas absorbs light at a very specific pattern of wavelengths. This creates a spectrum with dips that show where the wavelengths of light are absorbed, as seen in the graph. Each dip is like a “signature” of that element or molecule. Credit: NASA, ESA, CSA, STScI, Joseph Olmsted (STScI).

As to my friend’s speculations about habitability, we can keep SPECULOOS-3b out of the mix, at least judging from its equilibrium temperature of 553 K, which works out to roughly 280°C or 535°F. Granted, we can speculate about extremophilic life or subsurface habitats, but there’s almost no point in doing that without reams of data that we do not yet possess. SPECULOOS-2c would be a better bet, being in the habitable zone of its M6-dwarf host, but there we have to bear in mind that the planet is a super-Earth. I think the question of life is a bit misplaced in the study of these dim stars. What we first have to find out is how accurately we can assess them with tools like JWST and its successors, and then begin cataloging the data. SPECULOOS-3b looks to be an early testing ground for what that future will bring.

The paper is Gillon et al., “Detection of an Earth-sized exoplanet orbiting the nearby ultracool dwarf star SPECULOOS-3,” for which the preprint is now available. I also want to give a nod to the TESS discovery of a planet transiting an M2.5 dwarf that is roughly Mars-sized. Quite a catch! The discovery paper of that one is Tey et al., “GJ 238 b: A 0.57 Earth Radius Planet Orbiting an M2.5 Dwarf Star at 15.2 pc,” Astronomical Journal Volume 167, Issue 6 (June, 2024), id.283, 13 pp. Abstract / Preprint.

Galactic Insights into Dark Matter

Put two massive galaxy clusters into collision and you have an astronomical laboratory for the study of dark matter, that much discussed and controversial form of matter that does not interact with light or any other part of the electromagnetic spectrum. We learn about it through its gravitational effects on normal matter. In new work out of Caltech, two such clusters, each containing thousands of galaxies, are analyzed as they move through each other. Using observations going back decades, the analysis reveals the velocities of dark and normal matter decoupling as a result of the collision.

Collisions on galactic terms have profound effects on the vast stores of gas that lie between individual galaxies, causing the gas to become roiled by the ongoing passage. Counter-intuitively, though, the galaxies themselves are scarcely affected simply because of the distances between them, and for that matter between the individual stars that make up each.

We need to keep an eye on work like this because according to the paper in the Astrophysical Journal, so little of the matter in these largest structures in the universe is in the form we understand. That’s a telling comment on how much work we have ahead if we are to make sense of the structure of a cosmos we would like to explore. In fact, the authors make the case that only 15 percent of the mass in the clusters under study is normal matter, most of it in the form of hot gas but also locked up in stars and planets. That would make 85 percent of the cluster mass dark matter.

The clusters in question are tagged with the collective name MACS J0018.5+1626. All matter, including dark matter, interacts through gravity, while normal matter is also responsive to electromagnetism. That means that normal matter slows down in these clusters as the gas between the individual galaxies becomes turbulent and superheated, while the dark matter within the clusters moves ahead in the absence of electromagnetic effects. Lead author Emily Silich (a Caltech grad student working with principal investigator Jack Sayers) likens the effect to that of a collision between dump trucks carrying sand. “The dark matter is like the sand and flies ahead.”

Image: This artist’s concept shows what happened when two massive clusters of galaxies, collectively known as MACS J0018.5+1626, collided: The dark matter in the galaxy clusters (blue) sailed ahead of the associated clouds of hot gas, or normal matter (orange). Both dark matter and normal matter feel the pull of gravity, but only the normal matter experiences additional effects like shocks and turbulence that slow it down during collisions. Credit: W.M. Keck Observatory/Adam Makarenko.

Some years back we looked at the two colliding galaxy clusters known collectively as the Bullet Cluster (see A Gravitational Explanation for Dark Matter). There, the behavior of the clusters’ component materials has likewise been analyzed in the study of dark matter, but that collision is viewed side-on from Earth, the separation between dark and normal matter appearing in the plane of the sky. In the case of MACS J0018.5, the clusters are oriented such that one is moving toward us, the other away. These challenging observations made it possible to analyze the velocity difference between dark and normal matter for the first time in a cluster collision.

Caltech’s Sayers explains:

“With the Bullet Cluster, it’s like we are sitting in a grandstand watching a car race and are able to capture beautiful snapshots of the cars moving from left to right on the straightway. In our case, it’s more like we are on the straightway with a radar gun, standing in front of a car as it comes at us and are able to obtain its speed.”

I’m reminded of my previous post on Chris Lintott’s book, where the astrophysicist takes note of the role of surprise in astronomy. In this case, the scientists used the kinetic Sunyaev-Zel’dovich effect (SZE), a slight distortion of the cosmic microwave background spectrum produced when its photons scatter off electrons in gas that has a bulk motion along the line of sight, to measure the speed of the normal matter in the clusters. With the two clusters moving in opposite directions as viewed from Earth, untangling the effects took Silich to data from NASA’s Chandra X-ray Observatory (another reminder of why Chandra’s abilities, in this case measuring the extreme temperatures of the intracluster gas, are invaluable).
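
The effect itself has a compact form: along a line of sight through the cluster, the fractional temperature shift of the microwave background scales with the gas’s optical depth to electron scattering and its bulk velocity toward or away from us. I include the textbook expression only to show why the measurement yields a speed rather than, say, a mass:

```latex
% Kinetic Sunyaev-Zel'dovich shift along a line of sight (textbook form):
% sigma_T is the Thomson cross section, n_e the electron density, tau the
% resulting optical depth, and v_los the bulk line-of-sight velocity of the
% gas (positive for motion away from the observer).
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}}
  \;=\; -\,\sigma_T \int n_e \,\frac{v_{\mathrm{los}}}{c}\, \mathrm{d}l
  \;\simeq\; -\,\tau\,\frac{v_{\mathrm{los}}}{c}
```

That dependence on the optical depth is one reason the X-ray data are so valuable here: they constrain the state of the gas, helping to separate out the velocity term.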

Adds Sayers:

“We had this complete oddball with velocities in opposite directions, and at first we thought it could be a problem with our data. Even our colleagues who simulate galaxy clusters didn’t know what was going on. And then Emily got involved and untangled everything.”

Nice work! The analysis tapped many Earth- and space-based facilities. Data from the Caltech Submillimeter Observatory (CSO), now being relocated from Maunakea to Chile, go back fully twenty years. The European Space Agency’s Herschel and Planck observatories, along with the Atacama Submillimeter Telescope Experiment in Chile, were critical to the analysis, and data from the Hubble Space Telescope were used to map the dark matter through gravitational lensing. With the clusters moving through each other at 3000 kilometers per second – one percent of the speed of light – collisions like these are in Silich’s words “the most energetic phenomena since the Big Bang.”

Dark matter explains many phenomena, including galaxy rotation curves that imply more mass than we can see, and gravitational lensing too strong for the visible mass alone to produce. But we still don’t know what this stuff is, assuming it is real and not a sign that we need to modify our theory of gravity along the lines of Modified Newtonian Dynamics (MOND). What we need is direct detection of dark matter particles, an ongoing effort whose resolution will shape our understanding of galactic structure and conceivably point to new physics.

The paper is Silich et al. 2024. “ICM-SHOX. I. Methodology Overview and Discovery of a Gas–Dark Matter Velocity Decoupling in the MACS J0018.5+1626 Merger,” Astrophysical Journal 968 (2): 74. Full text.

Charter

In Centauri Dreams, Paul Gilster looks at peer-reviewed research on deep space exploration, with an eye toward interstellar possibilities. For many years this site coordinated its efforts with the Tau Zero Foundation. It now serves as an independent forum for deep space news and ideas. In the logo above, the leftmost star is Alpha Centauri, a triple system closer than any other star, and a primary target for early interstellar probes. To its right is Beta Centauri (not a part of the Alpha Centauri system), with Beta, Gamma, Delta and Epsilon Crucis, stars in the Southern Cross, visible at the far right (image courtesy of Marco Lorenzi).
