Centauri Dreams

Imagining and Planning Interstellar Exploration

Our Earliest Ancestor Appeared Soon After Earth Formed

Until we learn whether or not life exists on other planets, we extrapolate on the basis of our single living world. Just how long it took life to develop is a vital question, with implications that extend to other planetary systems. In today’s essay, Alex Tolley brings his formidable background in the biological sciences to bear on the matter of Earth’s first living things, which may well have emerged far earlier than was once thought. In particular, what was the last universal common ancestor — LUCA — from which bacteria, archaea, and eukarya subsequently diverged? Without the evidence future landers and space telescopes will give us, we remain ignorant of so fundamental a question as whether life itself — not to mention intelligence — is a rarity in the cosmos. But we’re piecing together a framework that reveals Earth’s surprising ability to spring into early life.

by Alex Tolley

Once upon a time, the history of life on Earth seemed so much simpler. Darwin had shown how natural selection of traits could create new species given enough time, although he did not address the origin of life itself, beyond speculating that it might begin in a “warm little pond”. Extant animals and plants had been classified starting with Linnaeus, and evolution was inferred by comparing the traits of organisms. Fossils of ancient animals added to the idea of evolution in deep time. In 1924 Oparin, and later in 1929 Haldane, suggested that a primordial soup would accumulate in a sterile ocean as energy drove the formation of organic molecules from reduced gases. This would be the milieu for life to emerge.

With the Miller-Urey experiment (1952), which demonstrated that amino acids, the “basic building blocks of life”, could be created quickly in the lab from a primordial atmosphere gas mixture and electricity, it was assumed that the proteins that form the basis of most of life’s structure and function would follow. The time allowed for the evolution of life increased from less than 10,000 years in the Biblical Old Testament, to 100 million years (my) in the late 19th century, to about 4.5 billion years (Ga) once radioisotopic dating was established by 1953. Fossil evidence relied on the mineralization of hard structures, which started to appear in the Cambrian period around 540 million years ago (mya).

The Apollo lunar samples indicated that the Moon had been subjected to a late heavy bombardment (LHB) of impactors from around 4.1 to 3.8 Ga, well after its formation at 4.5 Ga. With the Earth assumed to have been sterilized by the LHB, there seemed to be plenty of time for life to appear afterwards. Then the dating of stromatolites pushed the earliest known life back to nearly 3.5 Ga, reducing the window for abiogenesis to just a few hundred million years after the LHB. This seemed to leave too little time for abiogenesis. There was a reprieve when it was argued that the LHB was an artifact of lunar sample collection, with ejecta from the later Imbrium impact imprinting its younger age on older samples. If the LHB was not a sterilizing event, then another 500 million years to a billion years could be allowed for life to appear.

Even though the structure of DNA was determined by Watson and Crick in 1953, and with it the molecular basis of genes, sequencing even short lengths of DNA remained a slow process. This changed during the 1990s with gene sequencing machines and algorithms, culminating in the sequencing of the human genome. Sequencing costs have since fallen sharply, and gene databases are filling. We now have vast numbers of sequenced genes from a wide range of organisms, and full genomes for selected species.

The resulting inexpensive gene sequencing kickstarted the genomics revolution. With gene sequences from a large number of extant species, Richard Dawkins suggested that even if there were no fossils, evolution could be inferred from the changes in nucleotide base sequences in modern organisms, with evolution represented by the incremental changes in species’ genomes. His magnum opus The Ancestor’s Tale [6] explored the tree of life moving backwards in time.

The slow change over time in the sequences of key functional genes that appear in all organisms is called the “molecular clock”. The greater the difference between the sequences of a gene in 2 species, the greater their evolutionary separation. However, unlike atomic clocks, the molecular clock does not tick at the same rate for every organism or gene. If it did, all the divergences would sum to the same length of time; as Figure 1 demonstrates, they do not. Nevertheless, evolutionary trees built from sequenced, conserved functional genes show how species evolved from one another and can be compared with phylogenetic trees created using the fossil record.
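To make the clock idea concrete, here is a minimal sketch (my illustration, not the authors’ method, and far simpler than real substitution models) of how observed sequence divergence is turned into an evolutionary distance, using the classic Jukes-Cantor correction for repeated substitutions at the same site:

```python
# Sketch: from observed sequence divergence to evolutionary distance.
# The Jukes-Cantor correction accounts for multiple substitutions
# hitting the same site, which make raw divergence underestimate time.
from math import log

def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned sites that differ between two sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

def jukes_cantor(p: float) -> float:
    """Correct observed divergence p for multiple hits at the same site."""
    return -0.75 * log(1.0 - (4.0 / 3.0) * p)

# Two toy aligned gene fragments (invented for illustration):
a = "ATGGCGTACGTTAGC"
b = "ATGACGTACGATAGC"
p = p_distance(a, b)
d = jukes_cantor(p)
print(f"observed divergence p = {p:.3f}, corrected distance d = {d:.3f}")
```

Under a strict clock, the corrected distance would be proportional to time since divergence; the whole difficulty described below is that real clocks are not strict.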

Figure 1. Rooted and unrooted phylogenetic trees. (Source: Creative Commons Chiswick Chap).

While this phylogenetic tree shows evolutionary separation, it has no timeline. These trees converge back in time to a Last Universal Common Ancestor (LUCA) at the point where the 2 most distantly related domains of life, the Bacteria and the Archaea, join. However, fossils provide a means to calibrate the timeline for the tree branches and to place LUCA in time. For example, if we can find and date human fossils and chimpanzee fossils, we can be confident that their common ancestor lived earlier still. That common ancestor would be younger than the point at which the human-chimp lineage diverged from the other apes, and that ape ancestor would in turn be younger than the ancestor of all primates. The phylogenetic trees based on gene sequences can be compared to trees based on morphology; generally, they match. With fossil evidence, these new phylogenetic trees can be calibrated to date the branches.

Without good fossil evidence to calibrate the phylogenetic tree, it becomes harder to date the tree of life as we approach its root, where we believe LUCA must sit. Several attempts have been made to determine this timeline. In 2018, a paper by Betts et al indicated that LUCA could be dated to about the age of the Earth [2]. Mahendrarajah et al, analyzing the gene for ATP synthase, estimated a similarly early date for its appearance before the separation of the Archaea and Bacteria, placing LUCA at over 4 Ga [3].

The new paper by Moody et al extends the work of these 2 groups, as well as others, to create the best estimate of the timeline of life, the dating of LUCA, a description of LUCA, and its environment. The approach relies on cross-bracing: duplications of ancient functional genes (paralogs) are used to anchor separate dated trees so that they mutually support each other’s dating, firming up both the phylogenetic tree and the fossil calibrations [12].

The 2 different trees derive from a gene duplication that occurred before LUCA appeared, as shown in Figure 2. The analysis brackets LUCA between about 4 Ga and the age of the Earth, 4.5 Ga. As most theories of abiogenesis require a watery environment, it matters that surface water and oceans appeared quickly on Earth, within 100 million years (my) of its formation, by about 4.4 Ga [11]. The relaxed Bayesian clock models used hard (no 2.5% tail beyond the bound) and soft (a 2.5% tail allowed) dates for the boundary calibrations. The maximum likelihood age for LUCA came out at 4.2 Ga, about 200 my after the oceans formed and some 300 my after the Earth formed and the Moon-forming impact sterilized it.

Figure 2 shows the new timeline. The dendrogram indicates the degree of gene sequence divergence as a horizontal line from each node: the longer the line, the more ticks of the molecular clock relative to nearby lineages, and the longer those species have been separated by evolution. LUCA is dated within the Hadean eon, a time once thought devoid of life due to hellish surface conditions from impactor bombardment as well as the heat of formation and radioactivity. The 4.5 Ga calibration date is a hard constraint, as terrestrial abiogenesis is impossible before the Earth existed.

Figure 2. The calibrated phylogenetic tree shows the 2 lineages for the gene duplications, with each of the 2 trees acting as cross braces. The 2 algorithm variants with distributions in gold and teal converge to close overlaps with the dating of LUCA. Note the small purple stars that are the fossil calibrations. The calibrations for LUCA use the age of the Earth and prior fossil evidence as there is no fossil evidence for LUCA unless the controversial carbon isotope evidence demonstrates life and not an abiotic process. Credit: Moody et al.

The paper also uses the gene sequence evidence to paint a picture of LUCA as very similar to a prokaryotic bacterium. It has all the important cellular machinery of a contemporary bacterium, but with several cellular pathways absent or of low probability. It was probably a chemoautotroph, meaning that it could use free hydrogen and carbon dioxide to reduce and fix carbon, extracting energy from geochemical processes or from other contemporary organisms.

Because LUCA was not a protocell but likely a prokaryote, the sequence of abiogenesis from inanimate chemistry to a functioning prokaryotic cell must have taken no more than 300 my, and more likely 200 my.

As the authors state:

How evolution proceeded from the origin of life to early communities at the time of LUCA remains an open question, but the inferred age of LUCA (~4.2 Ga) compared with the origin of the Earth and Moon suggests that the process required a surprisingly short interval of geologic time. (emphasis mine).

The issue of the rapid appearance of life was back in play.

Figure 3 shows the hypothetical progression of abiogenesis to the Tree of Life and the steps needed to get from a habitable world to LUCA at the base of the Tree of Life.

Figure 3. The hypothetical development of life from the habitable planet through simpler stages and eventually to the radiation of species we see today. (Source: Creative Commons Chiswick Chap).

Given that LUCA appears to have been so complex, why is the timeline to evolve it so short when the timeline to the last archaeal and last bacterial common ancestors (LACA, LBCA) is so prolonged, at a billion years? Are the genomic divergences between bacteria and archaea so great not because of a slow-ticking molecular clock, but because of rapid evolution, which would imply LUCA is younger than it appears, the clock having ticked faster?

It is important to understand that LUCA was not a single organism, but a representative of a population. It probably lived in an ecosystem with other organisms, none of whose lineages survived. This is shown below in Figure 4. The red lines indicate that other, now-extinct lineages may have transferred genes into the archaeal and bacterial lineages after LUCA evolved; in principle this could have inflated the apparent divergence of the 2 lineages, and thus the depth of the timeline back to LUCA. This is purely speculative, offered to explain the authors’ findings.

Figure 4. LUCA must have had ancestors and likely contemporary organisms. The gray lineage includes LUCA’s ancestors as well as other lineages that became extinct. The red lines indicate horizontal gene transfer across lineages.

A key question is whether the calibrated timeline is correct. While the roster of authors is impressive and the checks on their analysis substantial, the method may simply be inaccurate. We have a similar methodological issue with the Hubble Tension between 2 methods of determining the Hubble constant for the universe’s rate of expansion. Molecular clock rates are not uniform between species, and estimated timelines for the divergence of species can vary when compared with the oldest fossils. DNA can be extracted from relatively recent fossils to calibrate the phylogenetic tree more accurately, but this is not possible beyond a few million years due to DNA degradation. Purely mineralized fossils, impressions in rocks, and isotopic biosignature evidence rule such tight calibration out. Fossils are relatively rare and usually prove younger than the node that starts their particular lineage. This is to be expected, although the discovery of older fossils can modify the picture.

Because molecular clock rates are not fixed, various means are used to estimate rates, using Bayesian probability. These rely on different distributions. The authors use 2 methods:

1. Geometric Brownian motion (GBM)

2. Independent log-normal (ILN)

In Figure 2, the distributions are indicated by color. For the younger nodes these methods clearly diverge, and in the case of the last eukaryote common ancestor the 2 distributions do not even overlap. Deeper in time the distributions converge, with the GBM maximum probability a little older than the ILN one. The authors selected the GBM peak as the best dating for LUCA, although using the ILN method makes almost no difference.
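The distinction between the 2 rate models can be sketched in a few lines. This is a toy illustration with invented parameters, not the authors’ implementation, which embeds these models in a Bayesian MCMC over whole trees:

```python
# Toy contrast between the two relaxed-clock rate models: under
# geometric Brownian motion (GBM) a branch's rate is a random step
# from its parent's rate (autocorrelated), while under the independent
# log-normal (ILN) model every branch's rate is drawn afresh.
import math
import random

random.seed(42)

def gbm_rates(n_branches: int, start_rate: float, sigma: float) -> list:
    """Autocorrelated rates: each log-rate is a random walk step from its parent."""
    rates = [start_rate]
    for _ in range(n_branches - 1):
        rates.append(rates[-1] * math.exp(random.gauss(0.0, sigma)))
    return rates

def iln_rates(n_branches: int, mean_log_rate: float, sigma: float) -> list:
    """Independent rates: each branch drawn from the same log-normal."""
    return [math.exp(random.gauss(mean_log_rate, sigma)) for _ in range(n_branches)]

gbm = gbm_rates(5, start_rate=1.0, sigma=0.2)
iln = iln_rates(5, mean_log_rate=0.0, sigma=0.2)
print("GBM (parent-correlated):", [f"{r:.2f}" for r in gbm])
print("ILN (independent):      ", [f"{r:.2f}" for r in iln])
```

Because GBM rates drift while ILN rates scatter independently, the two models can place the same sequence divergence at different ages, which is why their posterior distributions separate at some nodes in Figure 2.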

While the Bayesian method has become the standard for calibrated phylogenetic tree dating, the question remains whether it is accurate. All the genes and cross-bracing used would provide only false support if there is a flaw in the methodology. A 2023 paper by Budd et al highlights the problem [7]. In particular, the fossil record places the divergence of placental mammals after the K-T event associated with the extinction of the non-avian dinosaurs, whereas the genomic data supports a much older divergence for which there is no fossil evidence. The paper argues that the same applies to the emergence of animals: Cambrian fossils are much younger than the calibrated phylogenetic data suggests.

Budd states that:

Overall, the clear implication is that the molecular part of the analysis does not allow us to distinguish between different times of origin of the clade, and thus does not contradict the general picture provided by the fossil record….

…we believe that our results must cast severe doubt on all relaxed clock outcomes that substantially predate well-established fossil records, including those affected by mass extinctions.

This becomes extremely problematic when there are no fossils to compare against. In the Moody paper, the LACA and LBCA nodes have no calibrations at all, and LUCA has somewhat ad hoc calibration points. If Budd is correct, and he makes a good case, then all the careful analyses of the Moody paper are undermined by fundamental flaws in the tools.

Given the paucity of hard fossil evidence and the known issues with calibrated Bayesian priors for molecular clock dating, despite the careful testing by the authors of the LUCA paper, the best we can do is look at the consequences of the paper over- or underestimating the age of LUCA.

The easy consequence is that the age of LUCA has been overestimated: LUCA would then be represented by a population living between 3.4 and 4 Ga, with a peak probability somewhere in between. This would allow up to a billion years for abiogenesis to reach that point before the various taxa of archaea and bacteria separated hundreds of millions of years later, with the eukarya separating from the archaea later still.

This would grant a comfortable period in which to postulate that at least one abiogenesis happened on Earth and that all life on Earth is local. Conventional ideas on the likely sequence of events remain reasonably intact. Other planets may have their own abiogenesis events, with any possibility of panspermia increasingly unlikely with distance. For example, any life discovered in the ocean of Enceladus would be a local event with a biology different from Earth’s.

The harder consequences arise if the short timeline for abiogenesis is correct. What are the implications?

First, it strengthens the argument that under the right conditions, life emerges very quickly. While we do not know exactly what those conditions are, it suggests that our neighbor Mars, which has evidence of surface water as lakes and a boreal sea, could also have spawned life. As Mars did not undergo a Moon-forming collision, its water bodies may predate the oceans on Earth by another 100 my. And as Mars’ gravity is lower than Earth’s, impact ejecta carrying any Martian life could more easily have escaped and seeded Earth.

If we find life in the subsurface of Mars’ crust, it will be important to determine whether its biology is the same as or different from Earth’s. If different, that would be the most exciting result, as it would argue for the ease of abiogenesis. If the same, a common origin becomes possible. The same applies to any life that might be found in the subsurface oceans of the icy moons of the outer planets: different origins imply abiogenesis is common. Astrobiologist Nathalie Cabrol is quite optimistic about possible life on Mars, and on any [dwarf] planet with a subsurface ocean [8]. Radiogenic heating can also maintain liquid water on worlds well outside the traditional habitable zone (HZ) [10].

If abiogenesis is common, then we should detect biosignatures on many exoplanets in the HZ that show the conditions we expect life needs to start and thrive. Carr has suggested, rather controversially, that Mars was the better environment for abiogenesis, and that terrestrial life is therefore due to panspermia from Mars [5].

What if the rest of the solar system is sterile, with no sign of either extant or extinct life? This would imply the conditions on Earth suitable for abiogenesis are narrower than we thought, which would suggest exoplanet biosignatures would be rarer than we might expect from the detected conditions on those worlds.

The last option is one we would prefer not to be the case if the aim is to work out how abiogenesis occurred on Earth. It is to accept that LUCA appeared after just a few hundred million years, but that this interval was too short for abiogenesis. That would imply that abiogenesis, however it occurred, did not take place on Earth; that the same probably applies to other bodies in the solar system; and therefore that life originated in another star system.

Leslie Orgel and Francis Crick made the early suggestion that terrestrial life was spawned by panspermia [4]. Would that derail studies on the origin of life, which assume only plausible terrestrial conditions? How could we determine the truth of panspermia? I think it could only be demonstrated by sampling life on exoplanets and determining that they all share very nearly the same biology. The consequences of that might be profound.

A last thought that surprised me in my thinking about an abiogenesis that seems impossibly short: Cabrol states, without supporting evidence [9], that:

…how much time it takes for the building blocks of life to transition to biology… estimates range between 10 million years and as little as a few thousand years.

If true, then life could appear anywhere with suitable conditions, however transient those conditions are. What state that life would be in, for example protocells or some state prior to LUCA, is not explained [but see Figure 3], but if correct it appears to offer more time for LUCA to evolve. That is indeed food for thought.

References

1. Moody, E. R. R., Álvarez-Carretero, S., Mahendrarajah, T. A., Clark, J. W., Betts, H. C., Dombrowski, N., Szánthó, L. L., Boyle, R. A., Daines, S., Chen, X., Lane, N., Yang, Z., Shields, G. A., Szöllősi, G. J., Spang, A., Pisani, D., Williams, T. A., Lenton, T. M., & Donoghue, P. C. J. (2024). The nature of the last universal common ancestor and its impact on the early Earth system. Nature Ecology & Evolution. https://doi.org/10.1038/s41559-024-02461-1 https://www.nature.com/articles/s41559-024-02461-1

2. Betts, H. C., Puttick, M. N., Clark, J. W., Williams, T. A., Donoghue, P. C. J., & Pisani, D. (2018). Integrated genomic and fossil evidence illuminates life’s early evolution and eukaryote origin. Nature Ecology & Evolution, 2(10), 1556–1562. https://doi.org/10.1038/s41559-018-0644-x https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6152910/

3. Mahendrarajah, T. A., Moody, E. R. R., Schrempf, D., Szánthó, L. L., Dombrowski, N., Davín, A. A., Pisani, D., Donoghue, P. C. J., Szöllősi, G. J., Williams, T. A., & Spang, A. (2023). ATP synthase evolution on a cross-braced dated tree of life. Nature Communications, 14(1). https://doi.org/10.1038/s41467-023-42924-w

4. Crick, F. H. C., & Orgel, L. E. (1973). Directed panspermia. Icarus, 19(3), 341–346. https://doi.org/10.1016/0019-1035(73)90110-3

5. Carr, C. E. (2022). Resolving the history of life on Earth by seeking life as we know it on Mars. Astrobiology, 22(7), 880–888. https://doi.org/10.1089/ast.2021.0043 https://arxiv.org/pdf/2102.02362

6. Dawkins, R. (2004). The Ancestor’s Tale: A Pilgrimage to the Dawn of Evolution. Houghton Mifflin Harcourt.

7. Budd, G. E., & Mann, R. P. (2023). Two notorious nodes: a critical examination of relaxed molecular clock age estimates of the bilaterian animals and placental mammals. Systematic Biology. https://doi.org/10.1093/sysbio/syad057

8. Cabrol, N. A. (2024). The Secret Life of the Universe: An Astrobiologist’s Search for the Origins and Frontiers of Life. Simon and Schuster.

9. Ibid., p. 148.

10. Tolley, A. (2021). Radiolytic H2: Powering Subsurface Biospheres. https://www.centauri-dreams.org/2021/07/02/radiolytic-h2-powering-subsurface-biospheres/

11. Elkins-Tanton, L. T. (2010). Formation of early water oceans on rocky planets. Astrophysics and Space Science, 332(2), 359–364. https://doi.org/10.1007/s10509-010-0535-3

12. Sharma, P. P., & Wheeler, W. C. (2014). Cross-bracing uncalibrated nodes in molecular dating improves congruence of fossil and molecular age estimates. Frontiers in Zoology, 11(1). https://doi.org/10.1186/s12983-014-0057-x

Background Reading

The Hadean-Archaean Environment
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2869525/

History of Earth
https://en.m.wikipedia.org/wiki/History_of_Earth

Hadean
https://en.m.wikipedia.org/wiki/Hadean

Late Heavy Bombardment
https://en.m.wikipedia.org/wiki/Late_Heavy_Bombardment

Wikipedia: Portal: Evolutionary Biology
https://en.wikipedia.org/wiki/Portal:Evolutionary_biology

Origin of life: Drawing the big picture
https://www.sciencedirect.com/science/article/abs/pii/S0079610723000391

The Origin of Life: What We Do and Don’t Know
https://hea-www.harvard.edu/lifeandthecosmos/wkshop/sep2012/present/CleavesSILifeInTheCosmosTalk2012b.pdf

Introduction to Origins of Life of Earth
https://pressbooks.umn.edu/introbio/chapter/originsintro/

Abiogenesis
https://en.wikipedia.org/wiki/Abiogenesis

Last universal common ancestor
https://en.wikipedia.org/wiki/Last_universal_common_ancestor

Earth’s timeline
https://dynamicEarth.org.uk/geological-timeline-pack-2.pdf

Formation of early water oceans on rocky planets
https://link.springer.com/article/10.1007/s10509-010-0535-3

Earliest known life forms
https://en.wikipedia.org/wiki/Earliest_known_life_forms

Molecular clock
https://en.wikipedia.org/wiki/Molecular_clock

Phylogenetic Tree
https://en.wikipedia.org/wiki/Phylogenetic_tree

Primordial Soup
https://en.wikipedia.org/wiki/Primordial_soup

Are Interstellar Quantum Communications Possible?

A favorite editor of mine long ago told me never to begin an article with a question, but do I ever listen to her? Sometimes. Today’s lead question, then, is this: Can we expand communications over interstellar distances to include quantum methods? A 2020 paper by Arjun Berera (University of Edinburgh) makes the case for quantum coherence over distances that have only recently been suggested for communications:

…We have been able to deduce that quantum teleportation and more generally quantum coherence can be sustained in space out to vast interstellar distances within the Galaxy. The main sources of decoherence in the Earth based experiments, atmospheric turbulence and other environmental effects like fog, rain, smoke, are not present in space. This leaves only the elementary particle interactions between the transmitted photons and particles present in the interstellar medium.

Quantum coherence is an important matter; it refers to the integrity of the quantum state involved, and is thus essential to the various benefits of quantum communications. But let’s back up by tackling a new paper from another University of Edinburgh researcher, Latham Boyle. Working at the Higgs Centre for Theoretical Physics there, Boyle cites Berera’s work and moves on to explore quantum communications at the interstellar level and their application to SETI questions.

Traditional communications involve bits in one of two states, 0 or 1. Quantum bits, or qubits, can exist in superposition, meaning that a qubit can represent 0 and 1 simultaneously. Here I pause to remind all of us of the famous Richard Feynman quote: “I think I can safely say that nobody understands quantum mechanics.” Which is in no way to play down the ongoing work to explore the subject, given its mathematical precision and the fact that experiments involving quantum physics produce results. Thus another famous quote, attributed to David Mermin: “Shut up and calculate.”

In other words, use quantum mechanics to get results because it works, and stop getting distracted by the philosophical issues it raises. I am trying to do this now, but philosophy keeps rearing its head. The specter of George Berkeley wanders by…

But back to quantum methods and interstellar information exchange. The Berera paper makes the case that at certain frequency ranges, photon qubits can maintain their quantum coherence over conceivably intergalactic distances. Fully understood or not, quantum communications opens up a wide range of effects that are interesting in the interstellar context. Boyle notes that protocols based on quantum communication offer exponentially faster performance for specific ranges of problems and tasks.

Let’s drill further into quantum benefits. From the paper:

First, it is already known to permit many tasks that are impossible with classical communication alone, including quantum cryptography [10, 11], quantum teleportation [12], superdense coding [13], remote state preparation [14], entanglement distillation/purification [15–17], or direct transmission of (potentially highly complex, highly entangled) quantum states (e.g. the results of complex quantum computations). Second, protocols based on quantum communication are exponentially faster than those based on classical communication for some problems/tasks [18], in particular as measured by the one-way classical communication complexity [19–21] (the number of bits that must be transmitted one-way, from sender to receiver, to solve a problem or carry out a task – possibly the notion most pertinent to interstellar communication).

Boyle explores these advantages and associated problems through the quantum capacity of a quantum communication channel, constraining this by examining the properties of the interstellar medium in light of what are known as quantum erasure channels, which model error correction and channel carrying capacity. The question is: How much information can be reliably carried over a quantum channel even if some photons are lost in the process? And it turns out that these constraints mean that the choice of frequency bands is critical.
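The erasure-channel framing can be made concrete. For a channel that erases each qubit with probability p, the textbook capacities (used here as an illustration, not a reproduction of Boyle’s analysis) are C = 1 − p for classical communication and Q = max(0, 1 − 2p) for quantum communication:

```python
# Sketch: capacities of an erasure channel with erasure probability p.
# The classical capacity C = 1 - p stays positive for any p < 1, but the
# quantum capacity Q = max(0, 1 - 2p) vanishes once half the photons are
# lost -- the receiver must catch the majority of the photons.
def classical_capacity(p: float) -> float:
    return 1.0 - p

def quantum_capacity(p: float) -> float:
    return max(0.0, 1.0 - 2.0 * p)

for p in (0.1, 0.4, 0.5, 0.9):
    print(f"p = {p}: C = {classical_capacity(p):.2f}, Q = {quantum_capacity(p):.2f}")
```

This asymmetry is the root of the telescope-size problem discussed below: a faint classical beacon still carries bits, but a quantum channel dies entirely once photon losses pass 50%.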

Image: This is Figure 1 from the paper. Caption: Quantum communication with Q > 0, over distance L, is impossible at wavelengths where the horizontal line corresponding to L lies within the blue shaded region (summarizing the Milky Way ISM’s extinction curve). Gray regions are off limits from the ground. Adapted from [23, 26], with data from [30–37]. Credit: Latham Boyle.

The interstellar quantum communications channel Boyle studies is one in which photons can be erased in three ways. The first is absorption or scattering by the interstellar medium between sender and receiver; hence the pink line in the figure, indicating the wavelength a sender at Proxima Centauri would need to select to reach the Earth. The second is extinction within the Earth’s atmosphere, demanding a wavelength that avoids the gray bands of Figure 1 (hence the benefit of a receiver in space as opposed to Earth’s surface). Finally, photons can be lost through the spreading of the photon beam as it travels between sender and receiver.

To avoid depolarization by the cosmic microwave background, the wavelength of our photon channel must be less than 26.5 cm (the frequency is about 1.13 GHz), but for communication between stars Boyle calculates that we need to get into the ultraviolet range, with wavelengths as short as 320 nm. Doing this makes our communications channel far more efficient, for we can work with a narrower beam, but having said that, we now run into trouble. Let me quote Boyle on one of several elephants in the room:

This third erasure constraint is the hardest to satisfy! Whereas classical communication (C > 0) can take place even if the receiver only receives a tiny fraction of the photons emitted by the sender, forward quantum communication (Q > 0) requires large enough telescopes that the sender can put the majority of their photons into the receiver’s telescope (Fig. 2b)! Even in the best case, taking the nearest star (Proxima Centauri, L = 1.30 parsec) and the shortest wavelength available from the ground (λ = 320nm, see Fig. 1), this implies D > 100 km!
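Both quoted numbers can be checked with a few lines of arithmetic. This is an order-of-magnitude stand-in for Boyle’s Eq. (1), not the paper’s exact formula: a diffraction-limited beam from an aperture D spreads to roughly wavelength × L / D at distance L, so with comparable apertures at both ends, catching the majority of photons needs D on the order of sqrt(wavelength × L).

```python
# Order-of-magnitude checks on the numbers quoted above.
import math

C = 2.998e8            # speed of light, m/s
PARSEC = 3.0857e16     # metres

# CMB depolarization bound: a 26.5 cm wavelength is ~1.13 GHz
print(f"{C / 0.265 / 1e9:.2f} GHz")

# Aperture for a quantum link to Proxima Centauri at 320 nm:
# beam spread ~ wavelength * L / D must fit in the receiving aperture,
# so with equal apertures D ~ sqrt(wavelength * L)
wavelength = 320e-9
L = 1.30 * PARSEC
D = math.sqrt(wavelength * L)
print(f"required aperture ~ {D / 1e3:.0f} km")   # ~113 km, consistent with D > 100 km
```

The square-root scaling also explains why shorter wavelengths help: moving from radio to ultraviolet shrinks the required aperture by orders of magnitude, yet still leaves it at the 100 km scale for even the nearest star.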

We can pause here to note, as Boyle does, that the largest telescope currently under construction (ESO’s Extremely Large Telescope) has an aperture of 39 meters. To reach the staggering 100 km suggested by the author, we would have to explore coherently combined smaller dishes using optical interferometry. Boyle notes that quantum teleportation involving photons has been demonstrated at 100-kilometer baselines at sea level and 1000 km baselines from Earth to a satellite. Thus a ‘coherent dense array of optical telescopes over 100 km distances’ may be ultimately feasible. A great deal of research is ongoing on the subject of manipulating quantum states. The author notes work on quantum repeaters and quantum memories that may one day be enabling.

Why would a civilization want to use quantum communications methods given problems like this? For one thing, sending complex quantum calculations becomes possible in ways not available through classical communications. Remember that each qubit can exist in a superposition of states, manipulated by algorithms impossible on classical computers. Quantum error correction and quantum cryptography are among the other advantages of a communications channel based on quantum methods. In addition, extraordinarily high resolutions could be obtained by telescopes using astronomically long baseline interferometry (ALBI) via quantum repeaters.

An intriguing thought concludes the paper.

…we have seen that (setting aside the loopholes mentioned above) the sending and receiving telescopes must be extremely large, satisfying the inequality in Eq. (1); but this same inequality implies that, if the sender has a large enough telescope to communicate quantumly with us, they necessarily also have enough angular resolution to see that we do not yet have a sufficiently large receiving telescope [49], so it would make no sense to send any quantum communications to us until we had built one. Thus, the assumption that interstellar communication is quantum appears sufficient to explain the Fermi paradox.

So there you are. This method of information exchange demands such large telescopes that if an extraterrestrial civilization had them, they could quickly determine whether we had them. And because we don’t, there would certainly be no reason to send a signal to us if quantum methods were deemed necessary for a worthwhile exchange.
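The symmetry behind Boyle’s closing argument can be illustrated with the same simplified scaling used above (my arithmetic sketch, not the paper’s derivation): an aperture large enough for the quantum link has, by construction, just enough angular resolution to resolve structures the size of that aperture at the receiver’s distance.

```python
# If the sender's aperture satisfies D ~ sqrt(wavelength * L), their
# diffraction-limited resolution wavelength / D corresponds, at distance L,
# to a feature size of wavelength * L / D = sqrt(wavelength * L) = D.
# So they can see whether a telescope of the required size exists at our end.
import math

PARSEC = 3.0857e16
wavelength = 320e-9
L = 1.30 * PARSEC

D = math.sqrt(wavelength * L)        # sender's aperture (~113 km)
resolvable = (wavelength / D) * L    # smallest feature resolvable at Earth
print(f"aperture {D / 1e3:.0f} km, resolvable feature {resolvable / 1e3:.0f} km")
```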

The paper is Boyle, “On Interstellar Quantum Communication and the Fermi Paradox” (preprint). The Berera paper is “Quantum coherence to interstellar distances,” Physical Review D 102 (9 September 2020), 063005 (abstract / preprint).

The Odds on an Empty Cosmos

When Arthur C. Clarke tells me that something is terrifying, he’s got my attention. After all, since boyhood I’ve not only had my imagination greatly expanded by Clarke’s work but have learned a great deal about scientific methodology and detachment. So where does terror fit in? Clarke is said to have used the term in a famous quote: “Two possibilities exist: either we are alone in the Universe or we are not. Both are equally terrifying.” But let’s ponder this: Would we prefer to live in a universe with other intelligent beings, or one in which we are alone?

Are they really equally terrifying? Curiosity favors the former, as does innate human sociability. But the actual situation may be far more stark, which is why David Kipping deploys the Clarke quote in a new paper probing the probabilities.

Working with the University of Sydney’s Geraint Lewis, Kipping (Columbia University) has applied a thought experiment first conceived by Edwin Jaynes to dig into the matter. Jaynes (1922-1998) was a physicist at Washington University in St. Louis, MO. Through his analysis of probabilities (statistical inference was a key aspect of his work), Jaynes developed with rigor a framework building on a prior proposed earlier by J. B. S. Haldane, a man who had his own set of famous quotes, including the familiar “Now, my own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose.” This seems to be a day for good quotes.

Imagine a lab bench on which sit a large number of beakers, each filled with roughly the same amount of water drawn from a single source. The goal is to find out whether an unknown chemical will dissolve in them. You are to pour some of the chemical into each.

The logical expectation is that the unknown compound will either dissolve in every beaker or in none: what happens in one should happen in all. What we would not expect is for the compound to dissolve in some beakers but not others. If that did happen, it would imply that the tiniest variations in temperature and pressure could swing the outcome either way. In other words, as Kipping and Lewis note, it would imply that the conditions in the room and the properties of the compound were “balanced on a knife edge; fine-tuned to yield such an outcome.”

Fine-tuning is telling us something: Are the conditions in the room so perfectly set that there is some kind of hair-trigger threshold that some but not all of the flasks can reveal when the chemical is added? How could that happen? Jaynes went about exploring this gedankenexperiment (and many others – he would become known as one of the founders of so-called Objective Bayesianism). The beauty of the Kipping and Lewis paper is that the authors have applied the Jaynes experiment, for the first time, I think, to the cosmos. Thus instead of beakers of water think of exoplanets, and liken the dissolving of the chemical to abiogenesis. From the paper:

Consider an ensemble of Earth-like planets across the cosmos – worlds with similar gravity, composition, chemical inventories and climatic conditions. Although small differences will surely exist across space (like the beakers across the laboratory), one should reasonably expect that life either emerges nearly all of the time in such conditions, or hardly ever. As before, it would seem contrived for life to emerge in approximately half of the cases – again motivated from the fine-tuning perspective.

Image: This is Figure 1 from the paper. Caption: In the gedankenexperiment of attempting to dissolve an unknown compound X into a series of water vessels, Jaynes and Haldane argued that, a-priori, X will either dissolve almost all of the time or very rarely, but it would be contrived for nearly half of the cases to dissolve and half not. The function plotted here represents the Haldane prior (F⁻¹(1 − F)⁻¹) that captures this behaviour. Credit: Kipping and Lewis.
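The shape of that prior is easy to sketch numerically. Drawing uniformly in log-odds is equivalent to sampling the F⁻¹(1 − F)⁻¹ density over a truncated range; the cutoff `eps` below is an arbitrary choice of mine for illustration:

```python
import math
import random

def sample_haldane(n: int, eps: float = 1e-6, seed: int = 0):
    """Sample from the (truncated) Haldane prior by drawing uniformly in log-odds."""
    rng = random.Random(seed)
    lo = math.log(eps / (1 - eps))        # logit of the lower cutoff
    hi = math.log((1 - eps) / eps)        # logit of the upper cutoff
    out = []
    for _ in range(n):
        logit = rng.uniform(lo, hi)
        out.append(1 / (1 + math.exp(-logit)))  # back to a fraction F in (0, 1)
    return out

draws = sample_haldane(100_000)
extreme = sum(1 for f in draws if f < 0.01 or f > 0.99) / len(draws)
middle = sum(1 for f in draws if 0.4 < f < 0.6) / len(draws)
print(f"mass near 0 or 1: {extreme:.2f}, mass near 1/2: {middle:.2f}")
```

Most of the probability mass piles up near 0 and 1, with almost none near one half — exactly the “nearly always or hardly ever” behaviour the figure illustrates.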

The authors argue that the idea can be extended beyond abiogenesis to include the fraction of worlds on which multicellular life develops, and indeed the fraction of worlds where technological civilizations develop. Now we’re pondering a universe that is either crammed with life or devoid of it, with little room to maneuver in between. Which of these is most likely to be true? Can we connect this with the Drake Equation, that highly influential statement that so defined SETI’s early years in terms of the factors that influence the number of communicating technological civilizations in the galaxy?

Rather than extending the variables of the Drake Equation, a process that could go on indefinitely, the authors choose to distill it using what they call a ‘birth-death formalism.’ The result is a ‘steady state’ version of the Drake Equation (SSD).

The balance between birth and death is crucial. A civilization emerges; another one dies. Think of the first six terms of the Drake Equation as representing the birth rate, while the final term, L, represents the death rate. The authors suggest that problems with the original equation can be resolved by paring it into this form, producing a new term F, the ‘occupation fraction’; i.e., the fraction of planets hosting technological civilizations, a term arrived at through the ratio of births to deaths per year. Thus in the case of a galaxy filled with technological societies, F would come out close to 1. The paper fully develops how the new equation is reached but the end result is this:

where λBD is the birth-to-death ratio. The particulars of how this is derived are fascinating, and can also be explored in Kipping’s Cool Worlds video.
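The behaviour can be sketched numerically. I assume here (my assumption, chosen to be consistent with the limits quoted in the text: F approaches 1 when births far outpace deaths, and F tracks λBD when λBD is tiny) that the steady-state occupation fraction takes the form F = λBD / (1 + λBD):

```python
def occupation_fraction(lam_bd: float) -> float:
    """Steady-state fraction of occupied 'seats', assuming F = lam / (1 + lam)."""
    return lam_bd / (1.0 + lam_bd)

# Sweep the birth-to-death ratio across many orders of magnitude:
for lam in (1e-6, 1e-3, 1.0, 1e3):
    print(f"lambda_BD = {lam:8g}  ->  F = {occupation_fraction(lam):.6f}")
```

For λBD ≪ 1 the occupation fraction simply tracks λBD (an all-but-empty galaxy); for λBD ≫ 1 it saturates near 1 (a crowded one), reproducing the S-curve behaviour described in the text.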

Now we have something to work with. A galaxy in which there are few births compared to deaths is one that is all but empty. Start adjusting the ratio to factor in more civilization births and the galaxy begins to fill. Continue the adjustment and the entire galaxy fills. The S-curve is a familiar one, and one that puts the pressure on SETI optimists because it seems evident that not all stars are occupied by civilizations.

Assuming that F does not equal 1 or come close to it, we can explore the steep S-curve as it rises. Here NT refers to the total number of target stars a survey examines. From the paper:

This is what we consider to be the SETI optimist’s scenario (given that F ≈ 1 is not allowed). Here, F takes on modest but respectable values, sufficiently large that one might expect success with a SETI survey. For example, modern SETI surveys scan NT ∼ 10³-10⁴ targets… so for such a survey to be successful one requires F to exceed the reciprocal of this (i.e. F ≥ 10⁻⁴), but realistically greatly so (i.e. F ≫ 10⁻⁴) since not every occupied seat will produce the exact technosignature we are searching for, in the precise moments we look, and at the power level we are sensitive to. This arguably places the SETI optimist in a rather narrow corridor of requiring NT⁻¹ ≪ λBD ≲ 1.

That narrow corridor is the SETI fine-tuning problem. The tiny birth-death ratio range available in this ‘uncanny valley of possibility’ is all the room to maneuver we have for a successful detection.

And the authors point out that the value for λBD may be ‘outrageously small’. Just how common is abiogenesis? A telling case in point: one recent calculation puts the probability of spontaneously forming functional proteins from amino acids on the order of 10⁻⁷⁷. And even with such proteins in hand, it would still be necessary to go through all the further steps to arrive at an actual living organism, not to mention the additional leap from living organisms to technological civilizations.

Image: This is Figure 3 from the paper. Caption: Left: Occupation fraction of potential “seats” as a function of the birth-to-death rate ratio (λBD), accounting for finite carrying capacity. In the context of communicative ETIs, an occupation fraction of F ∼ 1 is apparently incompatible with both Earth’s history and our (limited) observations to date. Values of λBD ≪ 1 imply a lonely cosmos, and thus SETI optimists must reside somewhere along the middle of the S-shaped curve. Right: As we expand the bounds on λBD, the case for SETI optimism appears increasingly contrived and becomes a case of fine-tuning. Credit: Kipping and Lewis.

Thus the birth to death ratio cannot be too low but neither can it be too high if it is to fit our history of observations. The window for successful SETI detection is small, a fine-tuned ‘valley’ in which we are unlikely to be. To this point SETI has produced no telling evidence for technological civilizations other than our own (we do pick our own signals up quite often, of course, in the form of RFI, a well-known problem!) You have to get into the realm of conspiracy theories like the ‘zoo hypothesis’ to explain this result and still maintain that the galaxy is filled with technological civilizations.

We can also weigh the result in the context of our own planetary past:

Moreover, F ≈ 1 is simply incompatible with Earth’s history. Most of Earth’s history lacks even multicellular life, let alone a technological civilization. We thus argue that F ≈ 1 can be reasonably dismissed as a viable hypothesis…We highlight that excluding F ≈ 1 is compatible with placing a “Great Filter” at any position, such as the “Rare Earth” hypothesis (Ward & Brownlee 2000) or some evolutionary “Hard Step” (Carter 2008).

So what’s actually going on in those flasks on Jaynes’ lab table? Because if some flasks are doing one thing when the chemical is added and some are doing another, we may be precisely fine-tuned to where a SETI detection will be consistent with our previous observations. But that’s a pretty thin knife-edge to place all our hopes on.

I should add that the authors introduce mitigating factors into the discussion at the end. In particular, violating the SSD might involve the so-called ‘grabby aliens’ hypothesis, in which alien civilizations do emerge, though rarely, and when they do, they often colonize their own part of the galaxy. Thus most regions fill up, but not all, and we humans have perhaps emerged in an area where this colonization wave has not yet reached. That’s sort of intriguing, as it implies that the best SETI targets might be very far away, and extragalactic SETI may offer the best hope for a reception.

But let me end by questioning that note of hope, and for that matter, the issue of ‘terror’ that the Clarke quote invokes. Because I don’t find the idea of a universe devoid of other civilizations particularly terrifying, and I certainly don’t see it as one that is beyond hope. A Milky Way stuffed with civilizations would be fascinating, but a cosmos empty of other sentient beings is also a remarkable scientific result. So of course we keep looking, but the real goal is to understand our place in the universe. If we are a spectacular contradiction to an otherwise empty galaxy, let’s get on with exploring it.

The paper is Kipping & Lewis, “Do SETI Optimists Have a Fine-Tuning Problem?” submitted to International Journal of Astrobiology (preprint). See Kipping’s Cool Worlds video on the matter for more.

The Final Parsec Paradox: When Things Do Not Go Bump in the Night

Something interesting is going on in the galaxy NGC 6240, some 400 million light years from the Sun in Ophiuchus. Rather than sporting a single supermassive black hole at its center, this galaxy appears to have two, located about 3000 light years from each other. A merger seems likely, or is it? Centauri Dreams regular Don Wilkins returns to his astronomical passion with a look at why multiple supermassive black holes are puzzling scientists and raising questions that may even involve new physics.

By Don Wilkins

Supermassive black holes (SMBHs), black holes with masses exceeding 100,000 solar masses, don’t behave as expected. When the galaxies that host them collide, gas and dust smash together, forming new stars; existing stars are too far apart to collide. The two SMBHs of the merging galaxies converge, and intuition foresees the two massive bodies coalescing into a single giant (Figure 1). The Universe, as frequently happens, ignores our intuition.

The relevant force is dynamical friction. [1-4] As a result of it, each SMBH experiences a deceleration along its direction of motion. Gas and stars passing between and around the two SMBHs leach momentum from the black holes: the smaller masses pick up enormous speeds and are hurled away. Over time, as immense amounts of mass are thrown off, the SMBHs inch closer together.

The effect is similar to the gravitational assist maneuver, the fly-by of a massive object, used to accelerate space probes.

The paradox occurs when the gasses and stars have all been expelled from the volume between the two SMBHs, by then separated by about three light years (roughly one parsec). There is no more mass to siphon off momentum, and modeling indicates the standoff would last longer than the age of the Universe.

According to Dr. Gonzalo Alonso-Álvarez, a postdoctoral researcher at the University of Toronto:

“Previous calculations have found that this process [the merger of SMBH] stalls when the black holes are around 1 parsec away from each other, a situation sometimes referred to as the final parsec problem.”

Figure 1. Super Massive Black Holes Orbit Each Other. Credit: NASA.

The two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors employ laser interferometry to detect gravitational waves in the band from roughly 10 hertz to 1 kilohertz. This band is suited to sensing black holes 5 to 100 times more massive than the Sun. To detect collisions of SMBHs, a detector must instead sense nanohertz ripples in spacetime.

The size of a gravitational-wave detector scales with the wavelength it must sense, so a detector sized to measure the cry of an SMBH collision would be immense. Scientists in the NANOGrav collaboration have sidestepped the need for detectors with dimensions of light years by employing pulsars.
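The wavelengths involved make the point. A rough sketch converting gravitational-wave frequency to wavelength, the physical scale a detector would need to span:

```python
import math

C = 299_792_458.0     # speed of light, m/s
M_PER_LY = 9.4607e15  # metres per light year

def gw_wavelength_m(freq_hz: float) -> float:
    """Wavelength of a gravitational wave of the given frequency."""
    return C / freq_hz

# LIGO band: ~100 Hz -> wavelengths of a few thousand km, matched by km-scale arms.
print(f"100 Hz wave: {gw_wavelength_m(100) / 1e3:.0f} km")
# SMBH binaries: nanohertz waves -> wavelengths of tens of light years.
print(f"1 nHz wave:  {gw_wavelength_m(1e-9) / M_PER_LY:.1f} light years")
```

A nanohertz wave is tens of light years long, which is why an Earth-bound interferometer cannot do the job and a galaxy-sized array of pulsars can.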

In this approach, the timing of a number of pulsars is measured with extreme precision. The NANOGrav pulsar timing array used sixty-eight millisecond pulsars as timing sources. Gravitational waves, compressing and expanding spacetime in their passage, alter the timing of each pulsar in a small way. Timing changes were collected over fifteen years.

Evidence for gravitational waves with periods of years to decades was found. The data are under evaluation to determine the source of the distortions. One possibility is the collision of SMBH.

There are five possible solutions to the paradox. The first is that the NANOGrav detections are not SMBH collisions. Then there is no paradox: two SMBHs will not merge within the life of the Universe. Rather boringly, our “maths” are correct, and there is nothing new to learn. [5-12]

Researchers propose that more realistic, triaxial and rotating galaxy models resolve the paradox in ten billion years or less. [13]

Another solution involves three SMBH. The third member continues to remove momentum from the other SMBH until gravitational attraction pulls its two partners into a single black hole. [14]

Expelled gas and stars may also fall back toward the two SMBHs, continuing to siphon off momentum until a collision occurs. [15]

Dark matter could also contribute to removing momentum. In this case, particles of dark matter must be able to interact with each other. [16] From Dr. Alonso-Álvarez, whose team published the dark matter paper:

“What struck us the most when Pulsar Timing Array collaborations announced evidence for a gravitational wave spectrum is that there was room to test new particle physics scenarios, specifically dark matter self-interactions, even within the standard astrophysical explanation of supermassive black hole mergers.”

Figure 2. Image: The distorted appearance of NGC 6240 is a result of a galactic merger that occurred when two galaxies drifted too close to one another. When the two galaxies came together, their central black holes did so, too. There are two supermassive black holes within this jumble, spiraling closer and closer to one another. They are currently only some 3,000 light-years apart, incredibly close given that the galaxy itself spans 300,000 light-years. Image credit: NASA, ESA, the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration, and A. Evans (University of Virginia, Charlottesville/NRAO/Stony Brook University).

For the moment, several theories have been advanced to explain the merger of SMBHs. Whether further evaluation of the nanohertz-frequency data reveals SMBH coalescence is the primary question.

References

1. S. Chandrasekhar, Dynamical Friction I. General Considerations: the Coefficient of Dynamical Friction, https://articles.adsabs.harvard.edu/pdf/1943ApJ....97..255C

2. S. Chandrasekhar, The Rate of Escape of Stars from Clusters and the Evidence for the Operation of Dynamical Friction, https://articles.adsabs.harvard.edu/pdf/1943ApJ....97..263C

3. S. Chandrasekhar, Dynamical Friction III. A More Exact Theory of the Rate of Escape of Stars from Clusters, https://articles.adsabs.harvard.edu/pdf/1943ApJ....98...54C

4. John Kormendy and Luis C. Ho, Coevolution (Or Not) of Supermassive Black Holes and Host Galaxies, https://arxiv.org/pdf/1304.7762

5. The NANOGrav Collaboration, Focus on NANOGrav’s 15 yr Data Set and the Gravitational Wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/collections/apjl-230623-245-Focus-on-NANOGrav-15-year

6. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acdac6/meta

7. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Observations and Timing of 68 Millisecond Pulsars, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acda9a/meta

8. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Detector Characterization and Noise Budget, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acda88/meta

9. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Search for Signals from New Physics, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acdc91/meta

10. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Bayesian Limits on Gravitational Waves from Individual Supermassive Black Hole Binaries, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/ace18a/meta

11. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Constraints on Supermassive Black Hole Binaries from the Gravitational-wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023,
https://iopscience.iop.org/article/10.3847/2041-8213/ace18b

12. Gabriella Agazie, et al, The NANOGrav 15 yr Data Set: Search for Anisotropy in the Gravitational-wave Background, The Astrophysical Journal Letters, Volume 951, Number 1, 29 June 2023, https://iopscience.iop.org/article/10.3847/2041-8213/acf4fd/meta

13. Peter Berczik, David Merritt, Rainer Spurzem, Hans-Peter Bischof, Efficient Merger of Binary Supermassive Black Holes in Non-Axisymmetric Galaxies, https://arxiv.org/pdf/astro-ph/0601698

14. Masaki Iwasawa, Yoko Funato and Junichiro Makino, Evolution of Massive Blackhole Triples I — Equal-mass binary-single systems, https://arxiv.org/pdf/astro-ph/0511391

15. Milos Milosavljevic and David Merritt, Long Term Evolution of Massive Black Hole Binaries, https://arxiv.org/pdf/astro-ph/0212459

16. Gonzalo Alonso-Álvarez et al, Self-Interacting Dark Matter Solves the Final Parsec Problem of Supermassive Black Hole Mergers, Physical Review Letters (2024). https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.133.021401

The Search for Things that Matter

Overpopulation has spawned so many dystopian futures in science fiction that it would be a lengthy though interesting exercise to collect them all. Among novels, my preference for John Brunner’s Stand on Zanzibar goes back to my utter absorption in its world when first published in book form in 1968. Kornbluth’s “The Marching Morons” (1951) fits in here, and so does J. G. Ballard’s “Billennium” (1961), and of course Harry Harrison’s Make Room! Make Room! from 1966, which emerged in much changed form in the film Soylent Green in 1973.

You might want to check Science Fiction and Other Suspect Ruminations for a detailed list, and for that matter on much else in the realm of vintage science fiction as perceived by the pseudonymous Joachim Boaz (be careful, you might spend more time in this site than you had planned). In any case, so strongly has the idea of a clogged, choking Earth been fixed in the popular imagination that I still see references to going off-planet as a way of relieving population pressure and saving humanity.

So let’s qualify the idea quickly, because it has a bearing on the search for technosignatures. After all, a civilization that just keeps getting bigger has to take desperate measures to generate the power needed to sustain itself. It’s worth noting, then, that a 2022 UN report suggests a world population peaking at a little over 10 billion and then beginning to decline as the century ends. A study in the highly regarded medical journal The Lancet from a few years back sees us peaking at a little under 10 billion by 2064, with a decline to less than 9 billion by 2100.

Image: The April 1951 issue of Galaxy Science Fiction where “The Marching Morons” first appeared. Brilliant and, according to friend and collaborator Frederik Pohl, exceedingly odd, Cyril Kornbluth died of a heart attack in 1958 at the age of 34, on the way to being interviewed for a job as editor of Fantasy and Science Fiction.

How accurate such projections are is unknown, as is what happens beyond the end of this century, but it seems clear that we can’t assume the kind of exponential increase in population that will put an end to us in Malthusian fashion any time soon. It’s conceivable that one reason we are not finding Dyson spheres (although there are some candidates out there, as we’ve discussed here previously) is that technological civilizations put the brakes on their own growth and sustain levels of energy consumption that would not readily be apparent from telescopes light years away.

Thus a new paper from Ravi Kopparapu (NASA GSFC), in which the current population figure of about 8 billion is allowed to grow to 30 billion under conditions in which the standard of living is high globally. Assuming the use of solar power, the authors find that this civilization, far larger in population than ours, uses much less energy than the sunlight incident upon the planet provides. Here is an outcome that puts one of the most cherished tropes of science fiction to the test, for as Kopparapu explains:

“The implication is that civilizations may not feel compelled to expand all over the galaxy because they may achieve sustainable population and energy-usage levels even if they choose a very high standard of living. They may expand within their own stellar system, or even within nearby star systems, but galaxy-spanning civilizations may not exist.”

Image: Conceptual image of an exoplanet with an advanced extraterrestrial civilization. Structures on the right are orbiting solar panel arrays that harvest light from the parent star and convert it into electricity that is then beamed to the surface via microwaves. The exoplanet on the left illustrates other potential technosignatures: city lights (glowing circular structures) on the night side and multi-colored clouds on the day side that represent various forms of pollution, such as nitrogen dioxide gas from burning fossil fuels or chlorofluorocarbons used in refrigeration. Credit: NASA/Jay Freidlander.

The harvesting of the energies of stellar light may be obsolete among civilizations older than our own, given alternative ways of generating power. But if not, the paper models a telescope on the order of the proposed Habitable Worlds Observatory to explore how readily it might detect a massive array of solar panels on a planet some 30 light years away. This is intriguing stuff, because it turns out that even huge populations don’t demand enough power to cover their planet in solar panels. Indeed, it would take several hundred hours of observing time to detect with high reliability a land coverage of 23 percent on an Earth-like planet using silicon-based solar panels for its needs.
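The underlying arithmetic is easy to reproduce in rough form. A sketch with illustrative numbers of my own choosing (30 billion people at a generous ~10 kW of continuous power per capita, versus the total sunlight Earth intercepts):

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, mean solar irradiance at Earth's orbit
EARTH_RADIUS = 6.371e6   # m

# Total sunlight intercepted by Earth's cross-sectional disc:
intercepted_w = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
# Hypothetical demand: 30 billion people at ~10 kW continuous per person:
demand_w = 30e9 * 10e3

print(f"intercepted sunlight: {intercepted_w:.2e} W")
print(f"civilization demand:  {demand_w:.2e} W")
print(f"fraction needed:      {demand_w / intercepted_w:.2%}")
```

Even before accounting for panel efficiency and land-only collection, demand on these assumptions is a fraction of a percent of incident sunlight, which gives a feel for why the paper finds no need for planet-girdling arrays, let alone Dyson spheres.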

The conclusion is striking. From the paper:

Kardashev (1964) even imagined a Type II civilization as one that utilizes the entirety of its host star’s output; however, such speculations are based primarily on the assumption of a fixed growth rate in world energy use. But such vast energy reserves would be unnecessary even under cases of substantial population growth, especially if fusion and other renewable sources are available to supplement solar energy.

A long-held assumption thus comes under fire:

The concept of a Type I or Type II civilization then becomes an exercise in imagining the possible uses that a civilization would have for such vast energy reserves. Even activities such as large-scale physics experiments and (relativistic) interstellar space travel (see Lingam & Loeb 2021, Chapter 10) might not be enough to explain the need for a civilization to harness a significant fraction of its entire planetary or stellar output. In contrast, if human civilization can meet its own energy demands with only a modest deployment of solar panels, then this expectation might also suggest that concepts like Dyson spheres would be rendered unnecessary in other technospheres.

Of course, good science fiction is all about questioning assumptions by pushing them to their logical conclusions, and ideas like this should continue to be fertile ground for writers. Does a civilization necessarily have to expand to absorb the maximum amount of energy its surroundings make available? Or is it more likely to evolve only insofar as it needs to reach the energy level required for its own optimum level of existence?

So. What makes for an ‘optimum experience of life’? And how can we assume other civilizations will necessarily answer the question the same way we would?

The question explores issues of ecological sustainability and asks us to look more deeply at how and why life expands, relating this to the question of how a technosphere would or would not grow once it had reached a desired level. We’re crossing disciplinary boundaries in ways that make some theorists uncomfortable, and rightly so because the answers are anything but apparent. We’re probing issues that are ancient, philosophical and central to the human experience. Plato would be at home with this.

The paper is Kopparapu et al., “Detectability of Solar Panels as a Technosignature,” Astrophysical Journal Vol. 967, No. 2 (24 May 2024), 119 (full text). Thanks to my friend Antonio Tavani for the pointer to this paper.

On Ancient Stars (and a Thought on SETI)

I hardly need to run through the math to point out how utterly absurd it would be to have two civilizations develop within a few light years of each other at roughly the same time. The notion that we might pick up a SETI signal from a culture more or less like our own fails on almost every level, but especially on the idea of time. A glance at how briefly we have had a technological society makes the point eloquently. We can contrast it to how many aeons Earth has seen since its formation 4.6 billion years ago.

Brian Lacki (UC-Berkeley) looked into the matter in detail at a Breakthrough Discuss meeting in 2021. Lacki points out that our use of radio takes up a mere hundred-millionth of the lifespan of the Sun. We must think, he believes, in terms of temporal coincidence, as the graph he presented at the meeting shows. Note the arbitrary placement of a civilization at Centauri B, and others at Centauri A and C, along with our own timeline. The thin line representing our civilization actually corresponds to a lifetime of 10 million years. What are the odds that the lines of any two stars coincide? Slim indeed, unless societies can persist for not just millions but billions of years. We don’t know whether they can, but we need to keep the possibility in mind when thinking about what we might receive.
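The overlap argument is simple enough to simulate. A toy Monte Carlo, assuming (my illustrative assumption) that two civilizations each appear at a uniformly random moment in a 10-billion-year window and each lasts 10 million years:

```python
import random

def overlap_probability(window_yr: float, lifetime_yr: float,
                        trials: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate that two randomly timed civilizations coexist."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = rng.uniform(0.0, window_yr)  # birth of civilization A
        b = rng.uniform(0.0, window_yr)  # birth of civilization B
        if abs(a - b) < lifetime_yr:     # intervals [a, a+L] and [b, b+L] overlap
            hits += 1
    return hits / trials

# Two 10-million-year civilizations in a 10-billion-year window:
p = overlap_probability(10e9, 10e6)
print(f"probability of temporal overlap ~ {p:.4f}")
```

The analytic answer is approximately 2L/T, about 0.2 percent for these numbers: even neighbors a few light years apart would almost never be “on the air” at the same time.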

Image: Brian Lacki’s slide illustrating temporal coincidence. Credit: Brian Lacki.

But there is another point. Should we assume that stars near the Sun are roughly the same age as ours? You might think so at first glance, given the likely formation of our star in a stellar cluster, but in fact clusters separate and diverge over time, so that finding the Sun’s birthplace and its siblings is challenging in itself (though some astronomers are trying). As we’re also learning, slowly but surely, stars around us in the Milky Way’s so-called ‘thin disk’ – within which the Sun moves – actually show a wider range of ages than we first thought.

A planet-hosting star a billion years older than ours might be a more interesting SETI target than one considerably younger, simply because life has had more time to start emerging on its planets. But untangling all the factors that help us understand stellar age and movement is not easy. What is now happening is that we are developing what are known as chrono-chemo-kinematical maps, which track these factors along with the chemical composition of the stars under study. Here we’re combining spectroscopic analysis with models of stellar evolution and radial velocity analysis.

This multi-dimensional approach is greatly aided by ESA’s Gaia mission and its extensive datasets on stars within a few thousand light years of the Sun. Gaia is remarkably helpful at using astrometry to pin down stellar motion and distance. Then we can factor in metallicity, for the oldest stars in the galaxy were formed at a time when hydrogen and helium were about the only ingredients the cosmos had to work with. A chrono-chemo-kinematical map can interrelate these factors, and with the help of neural networks tease out some conclusions that have surprised astronomers.
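Metallicity in such work is usually quoted as [Fe/H], the logarithm of a star’s iron-to-hydrogen number ratio relative to the Sun’s. A minimal sketch of the definition (the solar ratio below is approximate and assumed for illustration):

```python
import math

def fe_h(ratio_star: float, ratio_sun: float) -> float:
    """[Fe/H] = log10 of the star's Fe/H number ratio relative to the Sun's."""
    return math.log10(ratio_star / ratio_sun)

sun_ratio = 3.16e-5  # approximate solar Fe/H number ratio (illustrative value)
print(fe_h(sun_ratio, sun_ratio))        # 0.0 by construction: solar metallicity
print(fe_h(sun_ratio / 100, sun_ratio))  # -2.0: a metal-poor, likely ancient star
```

A star at [Fe/H] = -2 has one hundredth the Sun’s iron abundance, the kind of signature that flags membership in the galaxy’s oldest populations.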

Thus a new paper out of the Leibniz-Institut für Astrophysik Potsdam. Here Samir Nepal and colleagues have been using machine learning (with what they call a ‘hybrid convolutional neural network’) to attack one million spectra from the Radial Velocity Spectrometer (RVS) in Gaia’s Data Release 3. Altogether, they are working with a sample of 565,606 stars to determine their parameters. Here the metallicity of stars is significant because the thin disk, which extends in the plane of the galaxy out to its edges, has been thought to consist primarily of younger Population I stars. Thus we should find higher metallicity, as we do, in a region of ongoing star formation.

But the Gaia mission is helping us understand that a surprising portion of the thin disk consists of ancient stars on orbits similar to the Sun’s. And while the thin disk has largely been thought to have begun forming some 8 to 10 billion years ago, the maps emerging from the Potsdam work show that the majority of ancient stars in the Gaia sample (within 3200 light years) are far older than this. Most are metal-poor, but some have higher metal content than our Sun, which implies that metal enrichment could already take place early in the Milky Way’s development.

Let me quote Samir Nepal directly on this:

“These ancient stars in the disc suggest that the formation of the Milky Way’s thin disc began much earlier than previously believed, by about 4-5 billion years. This study also highlights that our galaxy had an intense star formation at early epochs leading to very fast metal enrichment in the inner regions and the formation of the disc. This discovery aligns the Milky Way’s disc formation timeline with those of high-redshift galaxies observed by the James Webb Space Telescope (JWST) and Atacama Large Millimeter Array (ALMA) Radio Telescope. It indicates that cold discs can form and stabilize very early in the universe’s history, providing new insights into the evolution of galaxies.”

Image: An artist’s impression of our Milky Way galaxy, a roughly 13 billion-year-old ‘barred spiral galaxy’ that is home to a few hundred billion stars. On the left, a face-on view shows the spiral structure of the Galactic Disc, where the majority of stars are located, interspersed with a diffuse mixture of gas and cosmic dust. The disc measures about 100 000 light-years across, and the Sun sits about half way between its centre and periphery. On the right, an edge-on view reveals the flattened shape of the disc. Observations point to a substructure: a thin disc some 700 light-years high embedded in a thick disc, about 3000 light-years high and populated with older stars. Credit: Left: NASA/JPL-Caltech; right: ESA; layout: ESA/ATG medialab.

Here is an image showing the movement of stars near the Sun around galactic center, as informed by the Potsdam work:

Image: Rotational motion of young (blue) and old (red) stars similar to the Sun (orange). Credit: Background image by NASA/JPL-Caltech/R. Hurt (SSC/Caltech).

A few thoughts: Combining data from different sources using the neural networks deployed in this study, and empowered by the Gaia DR3 RVS results, the authors are able to cover a wide range of stellar parameters, from gravity, temperature and metal content to distances, kinematics and stellar age. It’s going to take that kind of depth to begin to untangle the interacting structures of the Milky Way and place them into the context of their early formation.
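Kinematics is one of the keys here: disk populations are commonly separated by how far a star’s space velocity (U, V, W) departs from the local standard of rest. The sketch below uses rough Toomre-diagram rules of thumb (thresholds near 50 and 180 km/s); these are illustrative textbook values, not the selection actually used by Nepal and colleagues:

```python
import math

def classify_by_kinematics(u: float, v: float, w: float) -> str:
    """Rough population label from a star's peculiar velocity (km/s)
    relative to the local standard of rest. Thresholds are common
    rules of thumb, assumed here for illustration only."""
    v_pec = math.sqrt(u * u + v * v + w * w)
    if v_pec < 50.0:
        return "thin disk"
    if v_pec < 180.0:
        return "thick disk"
    return "halo"

print(classify_by_kinematics(10.0, -5.0, 8.0))      # thin disk
print(classify_by_kinematics(-60.0, -90.0, 40.0))   # thick disk
print(classify_by_kinematics(200.0, -250.0, 100.0)) # halo
```

The surprise in the Potsdam results is precisely that stars on cold, thin-disk-like orbits by this kind of criterion turn out to include a very old, metal-poor population.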

Secondly, these results are genuinely surprising: while the majority of the metal-poor stars in thin-disk orbits are older than 10 billion years, fully 50 percent are older than 13 billion years. That means the thin disk began forming less than a billion years after the Big Bang – some 4 billion years earlier than previous estimates. We also learn that while metallicity is a key factor, it varies considerably throughout this older population. In other words, intense star formation made metal enrichment possible, working swiftly from the inner regions of the galaxy and pushing outwards.
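For context on the “z > 4” comparison in the paper’s title: in a flat ΛCDM cosmology, redshift 4 corresponds to a lookback time of roughly 12 billion years. A quick numerical check, using illustrative round-number parameters (H0 = 70 km/s/Mpc, Ωm = 0.3) rather than anything taken from the paper:

```python
import math

H0_INV_GYR = 13.97        # 1/H0 in Gyr for H0 = 70 km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7  # illustrative flat-LCDM parameters

def lookback_time_gyr(z: float, steps: int = 100_000) -> float:
    """Lookback time t_L = (1/H0) * integral_0^z dz' / [(1+z') E(z')],
    with E(z) = sqrt(Om (1+z)^3 + OL), via trapezoidal integration."""
    def integrand(zp: float) -> float:
        e = math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
        return 1.0 / ((1 + zp) * e)
    h = z / steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, steps):
        total += integrand(i * h)
    return H0_INV_GYR * total * h

print(lookback_time_gyr(4.0))  # roughly 12 Gyr
```

So a galaxy disk observed in place at z > 4 was already assembled within the first billion and a half years of cosmic history, which is exactly the epoch the oldest thin-disk stars in the Gaia sample point back to.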

So our Solar System is moving through regions containing a higher proportion of ancient stars than we knew, and upcoming work extending these machine learning techniques, now in the planning stages and drawing on data from the 4-metre Multi-Object Spectroscopic Telescope (4MOST), should refine the Potsdam team’s results in 2025. I return to what this may tell us from a SETI perspective. Ancient stars, especially those with higher than expected metallicity, should be interesting targets given the opportunities for life and technology to develop on their planets.

Maybe we’re making the Fermi question even tougher to answer, because many such stars in the nearby cosmic environment are older — far older — than we had realized.

The paper is Nepal et al., “Discovery of the local counterpart of disc galaxies at z > 4: The oldest thin disc of the Milky Way using Gaia-RVS,” accepted for publication in Astronomy & Astrophysics (preprint).

Charter

In Centauri Dreams, Paul Gilster looks at peer-reviewed research on deep space exploration, with an eye toward interstellar possibilities. For many years this site coordinated its efforts with the Tau Zero Foundation. It now serves as an independent forum for deep space news and ideas. In the logo above, the leftmost star is Alpha Centauri, a triple system closer than any other star, and a primary target for early interstellar probes. To its right is Beta Centauri (not a part of the Alpha Centauri system), with Beta, Gamma, Delta and Epsilon Crucis, stars in the Southern Cross, visible at the far right (image courtesy of Marco Lorenzi).
