Centauri Dreams
Imagining and Planning Interstellar Exploration
All the Light We Can See
I’ve reminisced before about crossing Lake George in the Adirondacks in a small boat late one night some years back, when I saw the Milky Way with the greatest clarity I had ever experienced. Talk about dark skies! That view was not only breathtaking on its own, but it also raised the point about what we can see where. Ponder the cosmic optical background (COB), which sums up everything that has produced light over the history of the universe. That summed light can be observed with even a small telescope, but the problem is to screen out local sources. No telescope is better placed to do just this than the Long Range Reconnaissance Imager (LORRI) aboard the New Horizons spacecraft.
Deep in the Kuiper Belt almost 60 AU from the Sun, the craft has a one-way light time of over eight hours (Voyager 1, by comparison, shows a one-way light time of almost 23 hours at 165 AU). It’s heartening that we’re continuing to keep the Voyagers alive even as the options slowly diminish, but New Horizons is still robust and returning data from numerous instruments. No telescope anywhere sees skies as dark as LORRI. That makes measurements of the COB as authoritative as anything we’re likely to get soon.
Image: Not my view from the Adirondacks but close. The Milky Way is gorgeous when unobscured by city lights. Credit: Derek Rowley.
The issue of background light came to the fore in 2021, when scientists at the National Science Foundation-funded NSF NOIRLab put data from New Horizons’ 20.8 cm telescope to work. That effort measured the light in a small set of LORRI images of fields deep in the cosmos, and it suggested a universe brighter than it should be, as if there were uncounted sources of light. Now we have further analysis of observations made with LORRI in 2023, supplemented by data from ESA’s Planck mission, which helps calibrate the dust density in the chosen fields of view. We learn that contamination from the Milky Way can explain the anomaly.
The new paper from lead author Marc Postman (Space Telescope Science Institute) studies light from 16 different fields carefully chosen to minimize the background light of our own galaxy, which, of course, surrounds us and compromises our view. This new work, rather than using archival data taken for other purposes, explicitly uses LORRI to create images that minimize foreground light sources. The conclusion is evidently airtight, as laid out by Postman:
At the outset of this work we posed the question: Is the COB intensity as expected from our census of faint galaxies, or does the Universe contain additional sources of light not yet recognized? With our present result, it appears that these diverse approaches are converging to a common answer. Galaxies are the greatly dominant and perhaps even complete source of the COB. There does remain some room for interesting qualifications and adjustments to this picture, but in broad outline it is the simplest explanation for what we see.
And let me throw in this bit from the conclusion of the paper because it adds an interesting dimension to the study:
If our present COB intensity is correct, however, it means that galaxy counts, VHE γ-ray extinction, and direct optical band measurements of the COB intensity have finally converged at an interesting level of precision. There is still room to adjust the galaxy counts slightly, or to allow for nondominant anomalous intensity sources.
In other words, to fully analyze the COB, the scientists have folded in VHE (very high energy) gamma-ray extinction, the attenuation of gamma rays from distant sources as they interact with the photons of the COB on their way to us. Although the gamma rays themselves are not visible at optical wavelengths, their interaction with the photons of the COB can be measured, providing an independent handle on the background’s intensity. That analysis complements the count of known galaxies and the direct optical band measurements to produce the conclusion now achieved.
I always find it interesting that there is both a deep satisfaction in solving a mystery and also a slight letdown, for let’s face it, odd things in the universe are fascinating, and let our imaginations run wild. In this case, however, the issue seems resolved.
I don’t have to mention to this audience how much good science continues to get done by having a fully functioning probe this deep in the Kuiper Belt. From New Horizons’ vantage point, there is little to no effect from zodiacal light, which is the result of sunlight scattering off interplanetary dust. The latter is a key factor in the brightness of the sky in the inner Solar System and has made previous attempts to measure the COB from the inner system challenging. We now look ahead to New Horizons’ search for other Kuiper Belt Objects to explore and try to learn whether there is a second belt of debris beyond the known one, and thus between it and the inner Oort Cloud.
We’ll doubtless continue to find things that challenge our assumptions as we press on, a reminder that a successor to New Horizons and the Voyagers is still a matter of debate both in terms of mission design and funding. As to the cosmic optical background, we give up the unlikely but highly interesting prospect that any significant levels of light come from sources unknown to us. As the paper concludes: “…the simplest hypothesis appears to provide the best explanation of what we see: the COB is the light from all the galaxies within our horizon.”
The paper is Postman et al., “New Synoptic Observations of the Cosmic Optical Background with New Horizons,” The Astrophysical Journal Vol. 972, No. 1 (28 August 2024), 95 (full text). The 2021 paper is Lauer et al., “New Horizons Observations of the Cosmic Optical Background,” The Astrophysical Journal Vol. 906, No. 2 (11 January 2021), 77 (full text).
Green Mars: A Nanotech Beginning
I want to return to Mars this morning because an emerging idea on how to terraform it is in the news. The idea is to keep infrared radiation from escaping to space by releasing into the atmosphere engineered dust particles about half as long as the wavelengths of that radiation, which escapes chiefly through windows centered near 22 and 10 μm. Block those escape routes and you have the prospect of warming Mars far more efficiently than any method previously suggested. The paper on this work even suggests a SETI implication (!), but more about that in a moment.
Grad student Samaneh Ansari (Northwestern University) is lead author of the paper, working with, among others, Ramses Ramirez (University of Central Florida), whose investigations into planetary habitability and the nature of the habitable zone have appeared frequently in these pages (see, for example, Revising the Classical ‘Habitable Zone’). The engineered ‘nanorods’ at the heart of the concept could raise the surface temperature enough to allow survivability of microbial life, which would at least be a beginning to the long process of making the Red Planet habitable.
As opposed to using artificial greenhouse gases, a method that would demand vast amounts of fluorine, which is scarce on the Martian surface, the nanorod approach takes advantage of the properties of the planet’s dust, which is lofted to high altitudes as an aerosol. Using the Mars Weather Research and Forecasting global climate model, the authors calculate that releasing 9-μm-long conductive aluminum nanorods “not much smaller than commercially available glitter” would provide the infrared blocking that natural dust cannot, and that once lofted to high altitude the rods would settle back to the surface more slowly than natural dust.
What stands out in the authors’ modeling is that their method is over 5,000 times more efficient than other methods of terraforming, and relies on materials already available on Mars. Natural dust particles, you would think, should warm the planet if released in greater quantities, but the result of doing so is actually to cool the surface even more. Let me quote the paper on this counter-intuitive (to me at least) result:
Because of its small size (1.5-μm effective radius), Mars dust is lofted to high altitude (altitude of peak dust mass mixing ratio, 15 to 25 km), is always visible in the Mars sky, and is present up to >60 km altitude (14–15). Natural Mars dust aerosol lowers daytime surface temperature [e.g., (16)], but this is due to compositional and geometric specifics that can be modified in the case of engineered dust. For example, a nanorod about half as long as the wavelength of upwelling thermal infrared radiation should interact strongly with that radiation (17).
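To make the ‘half the wavelength’ rule of thumb concrete, here is a minimal back-of-envelope sketch (mine, not the authors’). It simply halves the two thermal-infrared window wavelengths mentioned earlier, roughly 10 and 22 μm, and compares the result with the ~9 μm rods proposed in the paper:

```python
# Back-of-envelope check of the "half the wavelength" rule of thumb quoted above.
# The window wavelengths are taken from the article; everything else is illustrative.

def half_wave_rod_length_um(wavelength_um: float) -> float:
    """Rod length (microns) equal to half the given wavelength."""
    return wavelength_um / 2.0

if __name__ == "__main__":
    for wavelength in (10.0, 22.0):   # Martian thermal-IR escape windows, in microns
        rod = half_wave_rod_length_um(wavelength)
        print(f"Window at {wavelength:4.1f} um -> half-wave rod length ~ {rod:4.1f} um")
    print("Compare with the ~9 um conductive nanorods proposed by Ansari et al.")
```

The 5 and 11 μm answers bracket the paper’s 9 μm rods, which is the point of the exercise: rod length is tuned to the radiation it must intercept, not to any property of natural dust.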
Edwin Kite (University of Chicago) is a co-author on the work:
“You’d still need millions of tons to warm the planet, but that’s five thousand times less than you would need with previous proposals to globally warm Mars. This significantly increases the feasibility of the project… This suggests that the barrier to warming Mars to allow liquid water is not as high as previously thought.”
Image: This is Figure 3 from the paper. Caption: The proposed nanoparticle warming method. Figure credit: Aaron M. Geller, Northwestern, Center for Interdisciplinary Exploration and Research in Astrophysics + IT-RCDS.
Strikingly, the effects begin to emerge quite quickly. Within months of the beginning of the process, atmospheric pressure rises by 20 percent as CO2 ice sublimes, creating a positive warming feedback. Note this from the paper:
On a warmed Mars, atmospheric pressure will further increase by a factor of 2 to 20 as adsorbed CO2 desorbs (35), and polar CO2 ice (36) is volatilized on a timescale that could be as long as centuries. This will further increase the area that is suitable for liquid water (6).
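To put rough numbers on those multipliers, here is a tiny arithmetic sketch. The ~6 mbar starting point is my own round figure for present-day Martian surface pressure, not a number from the paper; the 20 percent and the factor of 2 to 20 come from the text above:

```python
# Rough arithmetic on the pressure feedbacks described above. The ~6 mbar baseline
# is an assumed round figure for present-day Mars; the multipliers come from the text.

BASELINE_MBAR = 6.0                          # assumed present-day mean surface pressure

after_sublimation = BASELINE_MBAR * 1.20     # ~20% rise within months as CO2 ice sublimes
low_end = after_sublimation * 2.0            # adsorbed and polar CO2, factor of 2...
high_end = after_sublimation * 20.0          # ...up to a factor of 20, over centuries

print(f"After initial CO2 sublimation: ~{after_sublimation:.1f} mbar")
print(f"Longer-term range:             ~{low_end:.0f} to {high_end:.0f} mbar")
```

Even the optimistic end of that range is only on the order of a tenth of Earth’s sea-level pressure, which underlines the point below that warming is an intermediate goal, not a breathable atmosphere.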
That said, we’re still not in range for creating a surface habitable by humans. We have to deal with barriers to oxygenic photosynthesis, including the makeup of the Martian sands, which are laden with potentially toxic levels of nitrates, and an atmosphere with little oxygen. Toxic perchlorates in the soil would require ‘bioremediation’ involving perchlorate-reducing bacteria, which yield molecular oxygen as a byproduct. We’re a long way from creating an atmosphere humans can breathe, but we’re in range of the intermediate goal of warming the surface, possibly enough to sustain food crops.
Addendum: I made a mistake above, soon caught by Alex Tolley. Let me insert his comment here to straighten out my mistake:
“… which are laden with potentially toxic levels of nitrates,”
I think you misinterpreted the sentence from the paper:
“…is not sufficient to make the planet’s surface habitable for oxygenic photosynthetic life: barriers remain (7). For example, Mars’ sands have ~300 ppmw nitrates (37), and Mars’ air contains very little O2, as did Earth’s air prior to the arrival of cyanobacteria. Remediating perchlorate-rich soil…”
300 ppm nitrates is very low and will not support much plant or bacterial life. [You want ~ 10,000 ppm ] That is why N and P are added to simulated Mars regolith when testing plant growth for farming or terraforming. IIRC, there have been suggestions of importing nitrogen from Titan to meet its needs on Mars.
Thanks for catching this, Alex!
Although nanoparticles could warm Mars… both the benefits and potential costs of this course of action are now uncertain. For example, in the unlikely event that Mars’ soil contains irremediable compounds toxic to all Earth-derived life (this can be tested with Mars Sample Return), then the benefit of warming Mars is nil. On the other hand, if a photosynthetic biosphere can be established on the surface of Mars, perhaps with the aid of synthetic biology, then that might increase the Solar System’s capacity for human flourishing. On the cost side, if Mars has extant life, then study of that life could have great benefits that warrant robust protections for its habitat. More immediately, further research into nanoparticle design and manufacture coupled with modeling of their interaction with the climate could reduce the expense of this method.
That’s a robust way forward, one the authors suggest could involve wind tunnel experiments at Mars pressure to analyze how both dust and nanomaterials are released from modeled Mars surfaces, from dusty flat terrain to the ice of the poles. Large eddy simulations (LES), which model larger-scale flows such as winds and weather patterns, should be useful in learning how the proposed nanorods would disperse through the atmosphere, while local warming methods also demand consideration.
A question I had never thought to ask about terraforming was how long the effects can be expected to last, and indeed the authors point out how little is known about long-term sustainability. A 2018 paper on current loss rates in the Martian atmosphere suggests that it would take at least 300 million years to fully deplete the atmosphere. The big unknown here is the Martian ice, and what may lie beneath it:
…if the ground ice observed at meters to tens of meters depth is underlain by empty pore space, then excessive warming over centuries could allow water to drain away, requiring careful management of long-term warming. Subsurface exploration by electromagnetic methods could address this uncertainty regarding how much water remains on Mars deep underground.
Image: Will we ever get to this? The ‘nanorod’ approach could be the beginning. Credit: Daein Ballard, Wikimedia Commons CC BY-SA 3.0.
The SETI implication? Nanoparticle warming is efficient, so much so that we might expect other civilizations to use the technique. A potential technosignature emerges in the polarization of light: on a terrestrial world with a magnetic field, light interacting with nanoparticles, aerosols, and dust in the atmosphere, and with the field itself, becomes polarized in measurable ways, and an atmosphere laden with terraforming nanoparticles could show such a signature. This would be an elusive signature to spot, but not outside the range of possibility.
In the absence of an active geodynamo to drive a magnetic field, Mars would not be a candidate for this kind of remote observation. But an exoplanet of terrestrial class with a magnetic field should, by these calculations, be a candidate for this kind of study.
The paper is Ansari et al., “Feasibility of keeping Mars warm with nanoparticles,” Science Advances Vol. 10, No. 32 (7 August 2024). Abstract / Preprint. Thanks to Centauri Dreams reader Ivan Vuletich for the pointer to this paper.
The ‘Freakish Radio Writings’ of 1924
Mars was a lively destination in early science fiction because of its proximity. When H. G. Wells needed a danger from outer space, The War of the Worlds naturally looked toward Mars, as a place close to Earth and one with the ability to provoke curiosity. Closely studied at opposition in 1877, Mars provoked in Giovanni Schiaparelli the prospect of a network of canals, surely feeding a civilization that might still be alive. No wonder new technologies turned toward the Red Planet as they became available to move beyond visible light and even attempt to make contact with its inhabitants.
All this comes to mind this morning because of an intriguing story sent along by my friend Al Jackson, whose work on interstellar propulsion is well known in these pages, as is his deep involvement with the Apollo program. Al had never heard of the incident described in the story. It occurred in 1924, when at another Martian opposition (an orbital alignment that brings Earth and Mars about as close as they get, recurring roughly every 26 months), the U. S. Navy imposed radio silence nationwide for five minutes once an hour from August 21 to 24. The plan: Allow observatories worldwide to listen for Martians.
Image: The cover of the Edgar Rice Burroughs novel that would have been on Mars enthusiasts’ shelves when the 1924 opposition occurred. Burroughs’ depiction of Mars was hugely popular in its day.
This was serious SETI for its day. A dirigible was launched from the U. S. Naval Observatory carrying radio equipment for these observations, with the capability of relaying its signals back to a laboratory on the ground. A military cryptographer was brought in to monitor the situation, as attested by a provocative New York Times headline from August 23 of that year: “Code Expert Ready for Message.; RADIO HEARS THINGS AS MARS NEARS US.”
All this was news to me too, and thus I was entranced by the new article, a Times essay from August 20 of this year, written by Becky Ferreira. Because something indeed happened, and it was reported on August 28, 1924, again in the Times: “SEEKS SIGN FROM MARS IN 38-FOOT RADIO FILM; Dr. Todd Will Study Photograph of Mysterious Dots and Dashes Recently Recorded.”
As Ferreira explains:
A series of dots and dashes, captured by an airborne antenna, produced a photographic record of “a crudely drawn face,” according to news reports. The tantalizing results and subsequent media frenzy inflamed the public’s imagination. It seemed as if Mars was speaking, but what was it trying to say?
“The film shows a repetition, at intervals of about a half hour, of what appears to be a man’s face,” one of the experiment’s leaders said days later.
You may recall that when Frank Drake began Project Ozma at Green Bank in 1960, he homed in on nearby stars Tau Ceti and Epsilon Eridani. And relatively soon he got a strong signal, causing him to ponder whether detecting other civilizations might be easy if you just pointed your antenna and began to listen. But the signal turned out to be from an aircraft in the skies of West Virginia, an early SETI frustration, for radio frequency interference (RFI) is a source of constant concern, as witness the stir caused briefly in 2019 by what appeared to be a signal from Proxima Centauri, but was not.
I don’t think the 1960 RFI experience got much media play, if any, though Project Ozma itself received a certain degree of coverage. But the ‘face’ found in the Mars radio reception of 1924 would have caused newspaper readers in that year to recall Guglielmo Marconi’s 1920 claim that he had detected signals “sent by the inhabitants of other planets to the inhabitants of Earth.” This was an era bristling with the new exploration of radio wavelengths, which if they could offer communications across a continent or ocean, could surely make possible a signal from one planet to another.
The interest was international, as another Times headline makes clear, this one from August 23, 1924: “RADIO HEARS THINGS AS MARS NEARS US; A 24-Tube Set in England Picks Up Strong Signals Made in Harsh Dots. VANCOUVER ALSO FAVORED At Washington the Translator of McLean Telegrams Stands by to Decode Any Message.”
Back to the ‘face’ found in the research effort on the American side of the Atlantic, dug out of data relayed from the dirigible. It was an astronomer named David Peck Todd who went to work with inventor Charles Francis Jenkins, using a radio from the National Electrical Supply Company designed to support troops in combat. Jenkins would use it to pick up any signals from Mars as detected by the airship. He had for his part built a ‘radio camera’ that would convert the radio data into optical flashes that would be imprinted on photographic paper, and it was within the result that what seemed to be a face emerged. But it was one that not everyone saw.
Jenkins himself was unimpressed, as I learned from a story titled “Freakish Radio writings on Mars Machine” that ran in the Daily News on August 27. Let me quote the small piece in its entirety:
C. Francis Jenkins, Washington inventor, is investigating to ascertain cause of a series of freakish writings received on his special machine designed to record any possible radio signals from Mars.
The film record shows an arrangement of dots and dashes and pictures resembling a human face.
“I do not think the results have anything to do with Mars,” Jenkins said.
A little more digging in the newspaper archives revealed that Jenkins told Associated Press reporters, as recorded in the Buffalo Evening Times that same day (“Radio Signals Shown on Films, Puzzle Savants at Capital”) that he thought the results came from radio frequency interference, saying what appears to be a face is “a freak which we can’t explain.” The image was indeed part of a repeating pattern recorded on Jenkins’ machine, but people were reading into it what they wanted to see.
Image: What remains of the 1924 ‘face on Mars’ detection, as captured through photography of the original paper roll produced by Jenkins in his lab. Credit: Yale University Library.
So where is the 38-foot long roll of photographic paper that caused the ‘detection’ of a face from Mars? The original, according to Ferreira’s research, seems to have been lost, but Yale University Library lists three images from its collection of materials on David Peck Todd under the title “Martian signals recorded by Jenkins.” So we have at least three photographs of Jenkins’ work, but to me at least, no face seems apparent.
Also in Buffalo, the Buffalo American ran a much longer piece titled “Astronomers Scan Mars To Discover Human Life” for its August 28, 1924 issue, which looks at the whole issue of studying Mars, though without mention of Jenkins’ work. It includes this interesting paragraph:
…perhaps Mars does see what is happening on the earth. If you were on Mars and looked at the earth you would see a star twice as large as Mars appears to Buffalo, as the earth is double Mars’ size. In the far distance on the same side of the sky would be the sun but it would only be two-thirds as large as it appears here. On the other wise would be discerned a huge mass of vapor 1,300 times as large as the earth. That would be Jupiter, which has not solidified yet. You would also see a couple of moons. They light Mars at night and are responsible for the tides on its oceans.
The Buffalo American article takes us right into the Barsoom of Edgar Rice Burroughs’ imagination, a series that by 1924 had reached The Chessmen of Mars (1922) and would eventually run to ten books. If you’re a hard-core Burroughs fan, you may remember the chess game (known on Mars as Jetan) in which humans play the role of the chess pieces and fight to the death (Burroughs loved chess). Despite the Buffalo American’s mention of oceans, even in John Carter’s day Barsoom was depicted as a place where water resources were rare and tightly controlled.
And just why study Mars in the first place? The newspaper article explains:
They want to know if the earth is the only celestial globe on which the Creator put human beings and if the planets and stars beyond were designed merely for the people on Earth to admire.
The Mars of the day was an extraordinary place. In researching this piece, I came across this from an article on the 1924 opposition by Rowland Thomas that ran in the St. Louis Post-Dispatch:
For some time astronomers all over the world have been preparing to get close-ups of Mars with their telescopes. The observers at Lowell Observatory, Flagstaff, Ariz., where the late Percival Lowell carried on his lifelong study of the planet which confirmed his belief that intelligent life exists on it, reported that on the southern hemisphere of Mars, where the polar ice cap is now melting under the rays of what is there a spring-tide sun, vast areas of what may be continents, marshland, prairies and the beds of dried-up oceans are constantly changing in appearance.
Image: Mars as conceived by astronomer Percival Lowell (1855-1916) and discussed by him in three books: Mars (1895), Mars and Its Canals (1906), and Mars As the Abode of Life (1908). The canals are here shown filled, with the vegetation in vigorous growth. Painting by H. Seppings Wright (1850-1937).
In 1924 we were still fourteen years from Orson Welles’ “War of the Worlds” broadcast of October 1938, which left plenty of time for early SETI interest to grow alongside magazine science fiction. In the US that genre began in the pages of Hugo Gernsback’s radio magazines, starting with The Electrical Experimenter and moving on to Science and Invention, before claiming its own dedicated title in Amazing Stories, whose first issue appeared in April of 1926. A nod as well to a sprinkling of earlier SF stories in Street & Smith’s pulp The Thrill Book.
Ferreira’s article is terrific, and I’m glad to hear that she is working on a book on SETI. It took Mariner 4’s flyby in 1965 to finally demonstrate what the surface of Mars was really like, and by then the interstellar SETI effort was just beginning to get attention. I wonder how the Mars enthusiasts of 1924 would have reacted to the news that despite the SETI efforts of the ensuing 100 years, we still have no proof of intelligence or indeed life of any kind on another world?
Pumping Energy into the Solar Wind
The solar wind is ever enticing, providing as it does a highly variable stream of charged particles moving out from the Sun at speeds up to 800 kilometers per second. Finding ways to harness that energy for propulsive purposes is tricky, although a good deal of work has gone into designs like magsails, where a loop of superconducting wire creates the magnetic field needed to interact with this ‘wind.’ But given its ragged variability, the sail metaphor makes us imagine a ship pummeled by gusts of varying intensity, constantly adjusting sail to maintain course and stability. And it’s hard to keep the metaphor working when we factor in solar flares or coronal mass ejections.
We can lose the superconducting loop if we create a plasma cloud of charged particles around the craft for the same purpose. Or maybe we can use an electric ‘sail,’ enabled by long tethers that deflect solar wind ions. All of these ideas cope with a solar wind that, near the Sun, may be moving at tens of kilometers per second but accelerating rapidly with distance, so that it can reach its highest speeds at 10 solar radii and more. Different conditions in the corona can produce major variations in these velocities.
Obviously it behooves us to learn as much as we can about the solar wind even as we continue to investigate less turbulent options like solar sails (driven by photon momentum) and their beam-driven lightsail cousins. A new paper in Science is a useful step in nailing down the process of how the solar wind is energized once it has left the Sun itself. The work comes out of the Smithsonian Astrophysical Observatory (SAO), which is part of the Center for Astrophysics | Harvard & Smithsonian (CfA), and it bores into the question of ‘switchbacks’ in the solar wind that have been thought to deposit energy.
At the heart of the process are Alfvén waves, named after Hannes Alfvén (1908-1995), the Nobel-winning Swedish scientist and engineer who more or less founded the discipline known as magnetohydrodynamics, the study of how magnetic fields and plasmas interact. Alfvén waves move along magnetic field lines, imparting the energy and momentum that nourish the solar wind. Kinks in the magnetic field known as ‘switchbacks’ are crucial here: these sudden deflections of the magnetic field quickly snap back to their original position. Although not fully understood, switchbacks are thought to be closely involved with the Alfvén wave phenomenon.
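For readers who like numbers, here is a small illustration of the Alfvén speed, v_A = B / √(μ0 ρ), the speed at which these waves travel along field lines. The field strength and proton density below are assumed, order-of-magnitude values I chose for the young solar wind near the Sun; they are not taken from the paper discussed here:

```python
import math

# Illustrative Alfven speed calculation: v_A = B / sqrt(mu_0 * rho).
# B and n below are assumed, order-of-magnitude values, not figures from Rivera et al.

MU_0 = 4.0e-7 * math.pi     # vacuum permeability, T*m/A
PROTON_MASS = 1.6726e-27    # kg

def alfven_speed(b_tesla: float, protons_per_m3: float) -> float:
    """Alfven speed in m/s for a proton plasma of the given number density."""
    rho = protons_per_m3 * PROTON_MASS      # mass density, kg/m^3
    return b_tesla / math.sqrt(MU_0 * rho)

if __name__ == "__main__":
    b = 500e-9      # assumed magnetic field strength: 500 nT
    n = 500e6       # assumed density: 500 protons per cubic cm, expressed in m^-3
    print(f"Alfven speed ~ {alfven_speed(b, n) / 1e3:.0f} km/s")
```

With these assumed inputs the waves move at several hundred kilometers per second, comparable to the wind speeds quoted above, which is why they are plausible carriers of the energy that heats and accelerates the flow.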
Image: Artist’s illustration of the solar wind flowing from the Sun measured by Parker Solar Probe near the edge of the corona and later with Solar Orbiter at a larger distance during a spacecraft alignment. The solar wind contains magnetic switchbacks, or large amplitude magnetic waves, near Parker Solar Probe that disappear farther from the Sun where Solar Orbiter is located. Credit: Image background: NASA Goddard/CIL/Adriana Manrique Gutierrez, Spacecraft images: NASA/ESA.
Data from two spacecraft have now clarified the role of these switchbacks. The Parker Solar Probe readily detected them in the solar wind, but data from ESA’s Solar Orbiter mission added crucial context. The two craft, one designed to penetrate the solar corona, the other working at larger distances, came into alignment in February of 2022 so that they observed the same solar wind stream over the course of two days of observations. CfA’s Samuel Badman is a co-author of the study:
“We didn’t initially realize that Parker and Solar Orbiter were measuring the same thing at all. Parker saw this slower plasma near the Sun that was full of switchback waves, and then Solar Orbiter recorded a fast stream which had received heat and with very little wave activity. When we connected the two, that was a real eureka moment.”
So we had a theoretical process of energy movement through the corona and the solar wind in which Alfvén waves transported energy, but now we have data charting the interaction of the waves with the solar wind over time. The authors argue that the switchback phenomenon pumps enough energy into the heating and acceleration process to drive the fastest streams of the solar wind. Indeed, John Belcher (MIT), not a part of the study, considers this to be a ‘classic paper’ that demonstrates the fulfillment of one of the Parker Solar Probe’s main goals.
Such work has ramifications that will be amplified over time as we continue to investigate the environment close to the Sun and the solar wind that grows out of it. The findings will help clarify how future craft might be designed to take advantage of solar wind activity, but will also provide insights into the behavior of any sailcraft we send into close solar passes to achieve high velocity gravitational slingshots to the outer system. Always bear in mind that heliophysics plays directly into our thinking about the system’s outer edges and the evolution of spacecraft designed to explore them.
The paper is Rivera et al., “In situ observations of large-amplitude Alfvén waves heating and accelerating the solar wind,” Science Vol 385, Issue 6712 (29 August 2024), p. 962-966 (abstract).
Our Earliest Ancestor Appeared Soon After Earth Formed
Until we learn whether or not life exists on other planets, we extrapolate on the basis of our single living world. Just how long it took life to develop is a vital question, with implications that extend to other planetary systems. In today’s essay, Alex Tolley brings his formidable background in the biological sciences to bear on the matter of Earth’s first living things, which may well have emerged far earlier than was once thought. In particular, what was the last universal common ancestor — LUCA — from which bacteria, archaea, and eukarya subsequently diverged? Without the evidence future landers and space telescopes will give us, we remain ignorant of so fundamental a question as whether life itself — not to mention intelligence — is a rarity in the cosmos. But we’re piecing together a framework that reveals Earth’s surprising ability to spring into early life.
by Alex Tolley
Once upon a time, the history of life on Earth seemed so much simpler. Darwin had shown how natural selection of traits could create new species given enough time, although he did not address the origin of life itself, other than to speculate that it might start in a “warm little pond”. Extant animals and plants had been classified starting with Linnaeus, and evolution was inferred by comparing traits of organisms. Fossils of ancient animals added to the idea of evolution in deep time. In 1924, Oparin, and later in 1929, Haldane, suggested that a primordial soup would accumulate in a sterile ocean, due to the formation of organic molecules from reduced gases and energy. This would be the milieu for life to emerge.
With the Miller-Urey experiment (1952), which demonstrated that amino acids, the “basic building blocks of life,” could be created quickly in the lab with a primordial atmosphere gas mixture and electricity, it was assumed that the proteins that form the basis of most of life’s structure and function would follow. The time available for the evolution of life expanded from less than 10,000 years in the Biblical Old Testament, to on the order of 100 million years (my) in the late 19th century, to about 4.5 billion years (Ga) once radioisotopic dating was established by 1953. Fossil evidence relied on the mineralization of hard structures, which started to appear in the Cambrian period around 550 million years ago (mya).
The Apollo lunar samples indicated that the Moon had been subjected to a late heavy bombardment (LHB) of impactors from around 4.1 to 3.8 Ga, well after its formation at 4.5 Ga. With the Earth assumed to have been sterilized by the LHB, there seemed to be plenty of time for life to appear. Then the dating of stromatolites pushed the earliest known life to nearly 3.5 Ga and reduced the time for abiogenesis to just a few hundred million years after the LHB. This seemed to leave too little time for abiogenesis. There was a reprieve when it was argued that the LHB was an artifact of lunar sample collection, with the later Imbrium impact adding its younger age to the older samples. If the LHB was not a sterilizing event, then another 500 million years to a billion years could be allowed for life to appear.
Even though the structure of DNA was determined by Watson and Crick in 1953, and with it the physical basis of genes, sequencing even short lengths of DNA remained a slow process. This changed with the gene sequencing machines and algorithms of the 1990s and the sequencing of the human genome. Sequencing costs have fallen sharply, and gene databases are being filled. We now have vast numbers of sequenced genes from a range of organisms, and full genomes from selected species.
The resulting inexpensive gene sequencing kickstarted the genomics revolution. With gene sequences from a large number of extant species, Richard Dawkins suggested that even if there were no fossils, evolution could be inferred from the changes in the nucleotide base sequences of modern organisms, with evolution represented by the incremental changes in species’ genomes. His magnum opus The Ancestor’s Tale was an exploration of the tree of life moving backwards in time [6].
The slow accumulation of changes in the sequences of key functional genes that appear in all organisms is called the “molecular clock”. The greater the difference between the sequences of these genes in two species, the greater their evolutionary separation. However, unlike atomic clocks, the molecular clock does not tick at the same rate for every organism or gene. If it did, all the divergences would sum to the same length of time; as Figure 1 demonstrates, they do not. Nevertheless, evolutionary trees for all organisms with sequenced conserved functional genes were built to show how species evolved from each other, and these could be compared with phylogeny trees created using the fossil record.
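As a toy illustration of the molecular clock logic, and nothing more, the sketch below converts an observed sequence divergence and an assumed substitution rate into a divergence time. Both input numbers are hypothetical, the calculation ignores complications such as repeated substitutions at the same site, and none of it is drawn from the papers discussed here:

```python
# Toy strict molecular clock: divergence time = observed divergence / (2 * rate).
# The factor of two appears because substitutions accumulate along BOTH lineages
# after they split. All values below are hypothetical illustrations.

def divergence_time_years(fraction_sites_differing: float,
                          substitutions_per_site_per_year: float) -> float:
    """Estimate the time since two lineages shared a common ancestor."""
    return fraction_sites_differing / (2.0 * substitutions_per_site_per_year)

if __name__ == "__main__":
    observed_divergence = 0.02    # hypothetical: 2% of aligned sites differ
    clock_rate = 1.0e-9           # hypothetical: substitutions per site per year
    t = divergence_time_years(observed_divergence, clock_rate)
    print(f"Estimated time to common ancestor: {t / 1e6:.0f} million years")
```

The catch, as the next paragraphs explain, is that the rate is neither known precisely nor constant across lineages, which is why fossil calibrations and relaxed-clock models matter so much.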
Figure 1. Rooted and unrooted phylogenetic trees. (Source: Creative Commons Chiswick Chap).
While this phylogenetic tree shows evolutionary separation, it has no timeline. These trees converge back in time to a Last Universal Common Ancestor (LUCA) at the point where the two most distantly related domains of life, the Bacteria and the Archaea, are joined. However, fossils can provide a means to calibrate the timeline for the tree branches and to place LUCA in time. For example, if we can find and date human fossils and chimpanzee fossils, we can be confident that their common ancestor lived at an earlier time. That human-chimp ancestor would in turn be younger than the ape ancestor from which both lineages earlier diverged, and that ape ancestor younger than the ancestor of all primates. The phylogenetic trees based on gene sequences can be compared to trees based on morphology; generally, they match. With fossil evidence, these new phylogenetic trees can be calibrated to date the branches.
Without good fossil evidence to calibrate the phylogenetic tree, it is harder to date the tree of life as we approach its root, where we believe LUCA must be present. Several attempts have been made to determine this timeline. In 2018, a paper by Betts indicated that LUCA could be dated to about the age of the Earth [2]. Mahendrarajah et al, analyzing the gene for ATP synthase, estimated a similarly early date for its appearance before the separation of the Archaea and Bacteria, placing LUCA at over 4 Ga [3].
The new paper by Moody et al extends the work of the aforementioned two co-authors, as well as others, to create the best estimate yet of the timeline of life, the dating of LUCA, a description of LUCA, and its environment. The approach applies cross-bracing to duplications of ancient functional genes to firm up the phylogenetic tree and the fossil calibrations. Cross-bracing is the use of duplicated genes (paralogs) to anchor different dated trees so that they provide mutual support for the dating [12].
The two trees are based on a gene duplication that occurred before LUCA appeared, as shown in Figure 2. The analysis dates LUCA to at least 4 Ga, approaching the age of the Earth at 4.5 Ga. Most theories of abiogenesis require a watery environment, and surface water and oceans appeared quickly, within 100 million years (my) of Earth’s formation, at about 4.4 Ga [11]. The relaxed-clock Bayesian analysis used hard bounds (no 2.5% tail beyond the limit) and soft bounds (a 2.5% tail allowed) for the calibration dates. The maximum likelihood age for LUCA came out at about 4.2 Ga, some 200 my after the oceans formed and about 300 my after the formation of the Earth and the Moon-forming impact that sterilized the planet.
Figure 2 shows the new timeline. The dendrogram indicates the degree of gene sequence divergence as a horizontal line from each node: the longer the line, the more ticks of the molecular clock relative to nearby lineages, and the longer the species have been separated by evolution. LUCA is dated within the Hadean eon, a time once thought to be devoid of life due to the hellish surface conditions produced by impactor bombardment as well as the heat of the planet’s formation and radioactivity. The 4.5 Ga calibration date is a hard constraint, as terrestrial abiogenesis is impossible before the Earth existed.
Figure 2. The calibrated phylogenetic tree shows the 2 lineages for the gene duplications, with each of the 2 trees acting as cross braces. The 2 algorithm variants with distributions in gold and teal converge to close overlaps with the dating of LUCA. Note the small purple stars that are the fossil calibrations. The calibrations for LUCA use the age of the Earth and prior fossil evidence as there is no fossil evidence for LUCA unless the controversial carbon isotope evidence demonstrates life and not an abiotic process. Credit: Moody et al.
The paper also uses the gene sequence evidence to paint a picture of LUCA as very similar to a prokaryotic bacterium. It has all the important cellular machinery of a contemporary bacterium, though several cellular pathways are absent or inferred only with low probability. It was probably a chemoautotroph, meaning that it could use free hydrogen and carbon dioxide to reduce and fix carbon as well as to extract energy, whether from geochemical processes or from other contemporary organisms.
Because LUCA is not a protocell but likely a prokaryote, the sequence of abiogenesis from inanimate chemistry to a functioning prokaryotic cell must have taken no more than 300 my, and more likely 200 my.
As the authors state:
How evolution proceeded from the origin of life to early communities at the time of LUCA remains an open question, but the inferred age of LUCA (~4.2 Ga) compared with the origin of the Earth and Moon suggests that the process required a surprisingly short interval of geologic time. (emphasis mine).
The issue of the rapid appearance of life was back in play.
Figure 3 shows the hypothetical progression of abiogenesis to the Tree of Life and the steps needed to get from a habitable world to LUCA at the base of the Tree of Life.
Figure 3. The hypothetical development of life from the habitable planet through simpler stages and eventually to the radiation of species we see today. (Source: Creative Commons Chiswick Chap).
Given that the complexity of LUCA appears to be great, why is the timeline to evolve it so short, when the timeline to the last archaeal and last bacterial common ancestors (LACA, LBCA) is so prolonged, at a billion years? Are the genomic divergences between bacteria and archaea so great not because of a slow ticking of the molecular clock, but rather because of rapid evolution, which would imply that LUCA is younger than it appears because the clock was ticking faster?
It is important to understand that LUCA was not a single organism, but a representative of a population. It probably lived in an ecosystem alongside other organisms, none of whose lineages survived. This is shown below in Figure 4. The red lines indicate that other, now-extinct lineages may have transferred genes into the archaeal and bacterial lineages after LUCA; such transfers could, in principle, have exaggerated the apparent divergence of the two lineages, and with it the depth of the timeline back to LUCA. This is purely speculative, one possible way to explain the authors’ findings.
Figure 4. LUCA must have had ancestors and likely contemporary organisms. The gray lineage includes LUCA’s ancestors as well as other lineages that became extinct. The red lines indicate horizontal gene transfer across lineages.
A key question is whether the calibrated timeline is correct. While the size of the author list lends the work authority, and the many checks on the analysis are substantial, the method may simply be inaccurate. We have a similar methodological issue with the Hubble tension between two methods of determining the Hubble constant, the universe’s rate of expansion. Molecular clock rates are not uniform between species, and estimated timelines for the divergence of species can vary when compared to the oldest fossils. DNA can be extracted from relatively recent fossils to calibrate the phylogenetic tree more accurately, but this is not possible beyond a few million years due to DNA degradation. Purely mineralized fossils, impressions in rocks, and isotopic biosignature evidence rule this tight calibration out. Fossils are relatively rare and usually prove younger than the node that starts their particular lineage. This is to be expected, although the discovery of older fossils can modify the picture.
Because molecular clock rates are not fixed, various means are used to estimate them using Bayesian probability. These rely on different prior distributions for the rates. The authors use two methods (a toy contrast of the two follows the list below):
1. Geometric Brownian motion (GBM)
2. Independent log-normal (ILN)
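The toy simulation below contrasts the two in the simplest possible way: under ILN each branch of a root-to-tip path gets its own independently drawn log-normal rate, while under GBM the log-rate drifts from one branch to the next, so rates are correlated along a lineage. Every parameter value here is arbitrary, and the code is meant only to show why the two priors can spread implied branch lengths (and hence node ages) differently; it is in no way a reconstruction of the authors’ analysis:

```python
import math
import random

# Toy contrast of two relaxed-clock rate models over a root-to-tip path.
# ILN: each branch rate drawn independently from a log-normal distribution.
# GBM: the log-rate performs a random walk from branch to branch (autocorrelated).
# All parameters are arbitrary illustrations, not values from Moody et al.

random.seed(1)

N_BRANCHES = 10       # branches on the path from root to tip
BRANCH_TIME = 100.0   # hypothetical duration of each branch (arbitrary units)
SIGMA = 0.3           # spread of the log-rates (arbitrary)

def path_length_iln() -> float:
    """Expected substitutions along the path with independent log-normal rates."""
    return sum(math.exp(random.gauss(0.0, SIGMA)) * BRANCH_TIME
               for _ in range(N_BRANCHES))

def path_length_gbm() -> float:
    """Expected substitutions when the log-rate random-walks along the path."""
    total, log_rate = 0.0, 0.0
    for _ in range(N_BRANCHES):
        log_rate += random.gauss(0.0, SIGMA)   # rate inherited from the previous branch
        total += math.exp(log_rate) * BRANCH_TIME
    return total

def mean_and_sd(samples):
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, math.sqrt(var)

if __name__ == "__main__":
    for name, fn in (("ILN", path_length_iln), ("GBM", path_length_gbm)):
        mean, sd = mean_and_sd([fn() for _ in range(10000)])
        print(f"{name}: mean path length {mean:7.1f}, spread {sd:6.1f}")
    # For the same elapsed time, the autocorrelated GBM rates produce a much wider
    # spread of path lengths, one reason the two priors can disagree on node ages.
```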
In Figure 2, the distributions are indicated by color. For the younger nodes, these methods clearly diverge, and in the case of the last eukarya common ancestor, the 2 distributions do not overlap. The distributions converge deeper in time, with the GBM maximum probability now a little older than the ILN one. The authors selected the GBM peak as the best dating for LUCA, although using the ILN method makes almost no difference.
While the Bayesian method has become the standard method for calibrated phylogenetic tree dating, the question remains whether it is accurate. All the genes and cross-bracing used would be false support if there is a flaw in the methodology. A 2023 paper by Budd et al highlights the problem. In particular, based on fossils, the divergence of placental mammals occurs after the K-T event associated with the extinction of the non-avian dinosaurs, whereas the genomic data support a much older divergence for which there is no fossil evidence. The paper argues that the same applies to the emergence of animals: fossils in the Cambrian are much younger than the calibrated phylogenetic data suggest.
Budd states that:
Overall, the clear implication is that the molecular part of the analysis does not allow us to distinguish between different times of origin of the clade, and thus does not contradict the general picture provided by the fossil record….
…we believe that our results must cast severe doubt on all relaxed clock outcomes that substantially predate well-established fossil records, including those affected by mass extinctions.
This becomes extremely problematic when there are no fossils to compare with. In the Moody paper the LACA and LBCA nodes have no calibrations at all, and LUCA has somewhat ad hoc calibration points. If Budd is correct, and he makes a good case, then all the careful analyses of the Moody paper are ineffective, due to fundamental flaws in the tools.
Given the paucity of hard fossil evidence and the known issues with calibrated Bayesian priors for molecular clock dating of phylogenetic trees, weighed against the careful testing by the authors of the LUCA paper, the best we can do is look at the consequences of the paper’s date being either an overestimate or an underestimate of the age of LUCA.
The easy consequence is that the age of LUCA has been overestimated: that LUCA was represented by a population living between 3.4 and 4 Ga, with the peak probability somewhere in between. This would allow up to a billion years for abiogenesis to reach this point before the various taxa of archaea and bacteria separated hundreds of millions of years later, with the eukarya separating from the archaea later still.
This would grant a comfortable period to postulate that at least one abiogenesis happened on Earth and that all life on Earth is local. Conventional ideas on the likely sequence of events remain reasonably intact. Other planets may have had their own abiogenesis events, with any possibility of panspermia increasingly unlikely with distance. For example, any life discovered in the Enceladan ocean would be a local event with a biology different from Earth’s.
The harder consequences come from assuming that the short timeline for abiogenesis is correct. What are the implications?
First, it strengthens the argument that under the right conditions, life emerges very quickly. While we do not know exactly what those conditions are, it does suggest that our neighbor Mars, which shows evidence of surface water in the form of lakes and a boreal sea, could also have spawned life. Because Mars did not experience a late giant impact like the one that formed Earth’s Moon, its water bodies may predate the oceans on Earth by another 100 my. And because Mars’ gravity is lower than Earth’s, the transfer of material containing any Martian life might have seeded Earth.
If we find life in the subsurface of Mars’s crust, it will be important to determine whether its biology is the same as or different from Earth’s. If different, that would be the most exciting result, as it would argue for the ease of abiogenesis. If the same, then a common origin is possible. The same applies to any life that might be found in the subsurface oceans of the icy moons of the outer planets: different origins would imply that abiogenesis is common. Astrobiologist Nathalie Cabrol seems quite optimistic about possible life on Mars, and on any [dwarf] planet with a subsurface ocean [8]. Radiogenic heating can also ensure liquid water on planets well outside the traditional habitable zone (HZ) [10].
If abiogenesis is common, then we should detect biosignatures on many exoplanets in the HZ with the conditions we expect for life to start and thrive. Carr has suggested, rather controversially, that Mars was the better environment for abiogenesis, and that terrestrial life is therefore due to panspermia from Mars [5].
What if the rest of the solar system is sterile, with no sign of either extant or extinct life? This would imply that the conditions on Earth suitable for abiogenesis are narrower than we thought, which would suggest that exoplanet biosignatures will be rarer than we might expect from the conditions we detect on those worlds.
The last option is one we would prefer not to be the case if the aim is to work out how abiogenesis occurred on Earth. This option is to accept that LUCA appeared after just a few hundred million years, but that this interval was too short for abiogenesis to have run its course here. It would imply that the location of abiogenesis, however it occurred, was not on Earth, that the same probably applies to other bodies in the solar system, and therefore that life originated in another star system.
Leslie Orgel and Francis Crick suggested early on that terrestrial life was spawned by directed panspermia [4]. Would that derail studies of the origin of life, or should such studies simply assume plausible terrestrial conditions? How would we determine the truth of panspermia? I think it could only be demonstrated by sampling life on exoplanets and determining that it all shares very nearly the same biology. The consequences of that might be profound.
A last thought, one that surprised me in my thinking about an abiogenesis interval that seems impossibly short: Cabrol states, with no supporting evidence, that [9]:
…how much time it takes for the building blocks of life to transition to biology… estimates range between 10 million years and as little as a few thousand years.
If true, then life could appear anywhere with suitable conditions, however transient those conditions are. What state that life would be in (protocells, for example, or some stage prior to LUCA) is not explained [but see Figure 3], but if correct, this appears to offer more time for LUCA to evolve. That is indeed food for thought.
References
1. Moody, E. R. R., Álvarez-Carretero, S., Mahendrarajah, T. A., Clark, J. W., Betts, H. C., Dombrowski, N., Szánthó, L. L., Boyle, R. A., Daines, S., Chen, X., Lane, N., Yang, Z., Shields, G. A., Szöllősi, G. J., Spang, A., Pisani, D., Williams, T. A., Lenton, T. M., & Donoghue, P. C. J. (2024). The nature of the last universal common ancestor and its impact on the early Earth system. Nature Ecology & Evolution. https://doi.org/10.1038/s41559-024-02461-1 https://www.nature.com/articles/s41559-024-02461-1
2. Betts, H. C., Puttick, M. N., Clark, J. W., Williams, T. A., Donoghue, P. C. J., & Pisani, D. (2018). Integrated genomic and fossil evidence illuminates life’s early evolution and eukaryote origin. Nature Ecology & Evolution, 2(10), 1556–1562. https://doi.org/10.1038/s41559-018-0644-x https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6152910/
3. Mahendrarajah, T. A., Moody, E. R. R., Schrempf, D., Szánthó, L. L., Dombrowski, N., Davín, A. A., Pisani, D., Donoghue, P. C. J., Szöllősi, G. J., Williams, T. A., & Spang, A. (2023). ATP synthase evolution on a cross-braced dated tree of life. Nature Communications, 14(1). https://doi.org/10.1038/s41467-023-42924-w
4. Crick, F. H. C., & Orgel, L. E. (1973). Directed panspermia. Icarus, 19(3), 341–346. https://doi.org/10.1016/0019-1035(73)90110-3
5. Carr, C. E. (2022). Resolving the history of life on Earth by seeking life as we know it on Mars. Astrobiology, 22(7), 880–888. https://doi.org/10.1089/ast.2021.0043 https://arxiv.org/pdf/2102.02362
6. Dawkins, R. (2004). The Ancestor’s Tale: A Pilgrimage to the Dawn of Evolution. Houghton Mifflin Harcourt.
7. Budd, G. E., & Mann, R. P. (2023). Two notorious nodes: a critical examination of relaxed molecular clock age estimates of the bilaterian animals and placental mammals. Systematic Biology. https://doi.org/10.1093/sysbio/syad057
8. Cabrol, N. A. (2024). The Secret Life of the Universe: An Astrobiologist’s Search for the Origins and Frontiers of Life. Simon and Schuster.
9. Ibid., p. 148.
10. Tolley, A. (2021). “Radiolytic H2: Powering Subsurface Biospheres.” https://www.centauri-dreams.org/2021/07/02/radiolytic-h2-powering-subsurface-biospheres/
11. Elkins-Tanton, L. T. (2010). Formation of early water oceans on rocky planets. Astrophysics and Space Science, 332(2), 359–364. https://doi.org/10.1007/s10509-010-0535-3
12. Sharma, P. P., & Wheeler, W. C. (2014). Cross-bracing uncalibrated nodes in molecular dating improves congruence of fossil and molecular age estimates. Frontiers in Zoology, 11(1). https://doi.org/10.1186/s12983-014-0057-x
Background Reading
The Hadean-Archaean Environment
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2869525/
History of Earth
https://en.m.wikipedia.org/wiki/History_of_Earth
Hadean
https://en.m.wikipedia.org/wiki/Hadean
Late Heavy Bombardment
https://en.m.wikipedia.org/wiki/Late_Heavy_Bombardment
Wikipedia: Portal: Evolutionary Biology
https://en.wikipedia.org/wiki/Portal:Evolutionary_biology
Origin of life: Drawing the big picture
https://www.sciencedirect.com/science/article/abs/pii/S0079610723000391
The Origin of Life: What We Do and Don’t Know
https://hea-www.harvard.edu/lifeandthecosmos/wkshop/sep2012/present/CleavesSILifeInTheCosmosTalk2012b.pdf
Introduction to Origins of Life of Earth
https://pressbooks.umn.edu/introbio/chapter/originsintro/
Abiogenesis
https://en.wikipedia.org/wiki/Abiogenesis
Last universal common ancestor
https://en.wikipedia.org/wiki/Last_universal_common_ancestor
Earth’s timeline
https://dynamicEarth.org.uk/geological-timeline-pack-2.pdf
Formation of early water oceans on rocky planets
https://link.springer.com/article/10.1007/s10509-010-0535-3
Earliest known life forms
https://en.wikipedia.org/wiki/Earliest_known_life_forms
Molecular clock
https://en.wikipedia.org/wiki/Molecular_clock
Phylogenetic Tree
https://en.wikipedia.org/wiki/Phylogenetic_tree
Primordial Soup
https://en.wikipedia.org/wiki/Primordial_soup
Are Interstellar Quantum Communications Possible?
A favorite editor of mine long ago told me never to begin an article with a question, but do I ever listen to her? Sometimes. Today’s lead question, then, is this: Can we expand communications over interstellar distances to include quantum methods? A 2020 paper by Arjun Berera (University of Edinburgh) makes the case for quantum coherence over distances that have only recently been suggested for communications:
…We have been able to deduce that quantum teleportation and more generally quantum coherence can be sustained in space out to vast interstellar distances within the Galaxy. The main sources of decoherence in the Earth based experiments, atmospheric turbulence and other environmental effects like fog, rain, smoke, are not present in space. This leaves only the elementary particle interactions between the transmitted photons and particles present in the interstellar medium.
Quantum coherence is an important matter; it refers to the integrity of the quantum state involved, and is thus essential to the various benefits of quantum communications. But let’s back up by tackling a new paper from another University of Edinburgh researcher, Latham Boyle. Working at the Higgs Centre for Theoretical Physics there, Boyle cites Berera’s work and moves on to explore quantum communications at the interstellar level and their application to SETI questions.
Traditional communications involve bits in one of two states, 0 or 1. Quantum bits, or qubits, can exist in superposition, meaning that a qubit can represent a 0 or a 1 simultaneously. Here I pause to remind all of us of the famous Richard Feynman quote: “I think I can safely say that nobody understands quantum mechanics.” Which is in no way to play down the ongoing work to explore the subject, given its mathematical precision and the fact that experiments involving quantum physics produce results. Thus another famous quote attributed to David Mermin: “Shut up and calculate.”
In other words, use quantum mechanics to get results because it works, and stop getting distracted by the philosophical issues it raises. I am trying to do this now, but philosophy keeps rearing its head. The specter of George Berkeley wanders by…
But back to quantum methods and interstellar information exchange. The Berera paper makes the case that at certain frequency ranges, photon qubits can maintain their quantum coherence over conceivably intergalactic distances. Fully understood or not, quantum communications opens up a wide range of effects that are interesting in the interstellar context. Boyle notes that protocols based on quantum communication offer exponentially faster performance for specific ranges of problems and tasks.
Let’s drill further into quantum benefits. From the paper:
First, it is already known to permit many tasks that are impossible with classical communication alone, including quantum cryptography [10, 11], quantum teleportation [12], superdense coding [13], remote state preparation [14], entanglement distillation/purification [15–17], or direct transmission of (potentially highly complex, highly entangled) quantum states (e.g. the results of complex quantum computations). Second, protocols based on quantum communication are exponentially faster than those based on classical communication for some problems/tasks [18], in particular as measured by the one-way classical communication complexity [19–21] (the number of bits that must be transmitted one-way, from sender to receiver, to solve a problem or carry out a task – possibly the notion most pertinent to interstellar communication).
Boyle explores these advantages and their associated problems through the quantum capacity of a quantum communication channel, constraining it by the properties of the interstellar medium and modeling the link as a quantum erasure channel, in which each transmitted photon either arrives intact or is lost along the way. The question is: How much information can be reliably carried over a quantum channel even if some photons are lost in the process? And it turns out that these constraints make the choice of frequency bands critical.
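Here is a minimal sketch of that bookkeeping, using the textbook capacities of an erasure channel: classical capacity C = 1 - p and quantum capacity Q = max(0, 1 - 2p) when each photon is lost with probability p. The loss probabilities in the example are made up for illustration and do not come from Boyle’s paper:

```python
# Capacities of an erasure channel in which each photon is lost with probability p.
# Classical capacity: C = 1 - p. Quantum capacity: Q = max(0, 1 - 2p).
# The loss probabilities below are illustrative, not values from Boyle's paper.

def classical_capacity(p_loss: float) -> float:
    """Classical bits per transmitted photon that can be conveyed reliably."""
    return 1.0 - p_loss

def quantum_capacity(p_loss: float) -> float:
    """Qubits per transmitted photon; zero once more than half the photons are lost."""
    return max(0.0, 1.0 - 2.0 * p_loss)

if __name__ == "__main__":
    for p in (0.1, 0.4, 0.5, 0.9):
        print(f"loss p = {p:3.1f}: C = {classical_capacity(p):4.2f}, "
              f"Q = {quantum_capacity(p):4.2f}")
```

The asymmetry is the whole story in what follows: classical signalling limps along even when 90 percent of the photons go missing, but forward quantum communication shuts off completely once the receiver catches less than half of them.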
Image: This is Figure 1 from the paper. Caption: Quantum communication with Q > 0, over distance L, is impossible at wavelengths where the horizontal line corresponding to L lies within the blue shaded region (summarizing the Milky Way ISM’s extinction curve). Gray regions are off limits from the ground. Adapted from [23, 26], with data from [30–37]. Credit: Latham Boyle.
The interstellar quantum communications channel Boyle studies is one in which photons can be erased in three different ways. The first is absorption or scattering by the interstellar medium between sender and receiver; hence the pink line in the figure, indicating the wavelengths a sender at Proxima Centauri would need to select to reach the Earth. The second is extinction within the Earth’s atmosphere, which demands a wavelength that avoids the gray bands of Figure 1 (hence the benefit of a receiver in space as opposed to Earth’s surface). Finally, photons can be lost due to the spreading of the photon beam as it moves between sender and receiver.
To avoid depolarization by the cosmic microwave background, the wavelength of our photon channel must be less than 26.5 cm (that is, the frequency must be above roughly 1.13 GHz), but for communication between stars Boyle calculates that we need to get into the ultraviolet range, with wavelengths as short as 320 nm. Doing this makes our communications channel far more efficient, for we can work with a narrower beam, but having said that, we now run into trouble. Let me quote Boyle on one of several elephants in the room:
This third erasure constraint is the hardest to satisfy! Whereas classical communication (C > 0) can take place even if the receiver only receives a tiny fraction of the photons emitted by the sender, forward quantum communication (Q > 0) requires large enough telescopes that the sender can put the majority of their photons into the receiver’s telescope (Fig. 2b)! Even in the best case, taking the nearest star (Proxima Centauri, L = 1.30 parsec) and the shortest wavelength available from the ground (λ = 320nm, see Fig. 1), this implies D > 100 km!
We can pause here to note, as Boyle does, that the largest telescope currently under construction (ESO’s Extremely Large Telescope) has an aperture of 39 meters. To reach the staggering 100 km suggested by the author, we would have to explore coherently combining smaller dishes through optical interferometry. Boyle notes that quantum teleportation involving photons has been demonstrated over 100-kilometer baselines at sea level and roughly 1000 km baselines from Earth to a satellite. Thus a ‘coherent dense array of optical telescopes over 100 km distances’ may ultimately be feasible. A great deal of research is ongoing on the subject of manipulating quantum states. The author notes work on quantum repeaters and quantum memories that may one day be enabling.
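To see roughly where that 100 km figure comes from, here is a back-of-envelope diffraction estimate of my own rather than a reproduction of Boyle’s Eq. (1). A diffraction-limited beam sent from an aperture D spreads to a spot of order λL/D at distance L, so catching the majority of the photons with an equal-sized receiving aperture requires roughly D ≳ √(λL):

```python
import math

# Back-of-envelope aperture estimate: a diffraction-limited beam from an aperture D
# spreads to ~ wavelength * distance / D, so for an equal-sized receiver to catch
# most of the photons we need roughly D > sqrt(wavelength * distance).
# This is a rough stand-in for the inequality in Boyle's paper, not a copy of it.

PARSEC_M = 3.086e16   # meters per parsec

def min_aperture_m(wavelength_m: float, distance_m: float) -> float:
    """Order-of-magnitude aperture for the beam to mostly land on the receiver."""
    return math.sqrt(wavelength_m * distance_m)

if __name__ == "__main__":
    wavelength = 320e-9            # shortest ground-accessible wavelength cited above
    distance = 1.30 * PARSEC_M     # Proxima Centauri
    d_km = min_aperture_m(wavelength, distance) / 1e3
    print(f"Required aperture ~ {d_km:.0f} km")   # ~113 km, matching the >100 km quoted
```

Running the numbers for Proxima Centauri at 320 nm gives an aperture on the order of 110 km, consistent with the figure Boyle quotes, and the requirement only grows with distance.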
Why would a civilization want to use quantum communications methods given problems like this? For one thing, sending complex quantum calculations becomes possible in ways not available through classical communications. Remember that each qubit can exist in a superposition of states, manipulated by algorithms impossible on classical computers. Quantum error correction and quantum cryptography are among the other advantages of a communications channel based on quantum methods. In addition, extraordinarily high resolutions could be obtained by telescopes using astronomically long baseline interferometry (ALBI) via quantum repeaters.
An intriguing thought concludes the paper.
…we have seen that (setting aside the loopholes mentioned above) the sending and receiving telescopes must be extremely large, satisfying the inequality in Eq. (1); but this same inequality implies that, if the sender has a large enough telescope to communicate quantumly with us, they necessarily also have enough angular resolution to see that we do not yet have a sufficiently large receiving telescope [49], so it would make no sense to send any quantum communications to us until we had built one. Thus, the assumption that interstellar communication is quantum appears sufficient to explain the Fermi paradox.
So there you are. This method of information exchange demands such large telescopes that if an extraterrestrial civilization had them, they could quickly determine whether we had them. And because we don’t, there would certainly be no reason to send a signal to us if quantum methods were deemed necessary for a worthwhile exchange.
The paper is Boyle, “On Interstellar Quantum Communication and the Fermi Paradox” (preprint). The Berera paper is “Quantum coherence to interstellar distances,” Physical Review D 102 (9 September 2020), 063005 (abstract / preprint).