Centauri Dreams

Imagining and Planning Interstellar Exploration

The “Habitability” of Worlds (Part I)

Dave Moore is a Centauri Dreams regular who has long pursued an interest in the observation and exploration of deep space. He was born and raised in New Zealand, spent time in Australia, and now runs a small business in Klamath Falls, Oregon. He counts Arthur C. Clarke as a childhood hero, and science fiction as an impetus for his acquiring a degree in biology and chemistry. Dave has kept up an active interest in SETI (see If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare) as well as the exoplanet hunt. In the essay below, he examines questions of habitability and how we measure it, issues that resonate in a time when we are preparing to evaluate exoplanets as life-bearing worlds and look for their biosignatures.

by Dave Moore

In this essay I’ll be examining the meaning of the word ‘habitable’ when applied to planetary bodies. What do we mean when we talk about a habitable planet or a planet’s habitability? What assumptions do we make? The first part of this essay will look into this and address the implications that come with it. In part two, I’ll focus on human habitability, looking at the mechanisms that could produce a habitable planet for humans and what this would imply.

The Wikipedia entry on planetary habitability implies that “habitability” refers to the ability of a planetary body to sustain life, and this is by far the most frequent use of the term, particularly in popular science articles.

Europa has sulfate deposits on it, which indicates that its surface is oxidizing. If the hydrothermal vents in the moon’s subsurface ocean are like those on Earth, they would release reducing gases such as H2S and methane. A connection between the two would provide an electrochemical differential that life could exploit. So it’s quite plausible that Europa’s ocean could harbor life, and if it does, would this now make it a “habitable” moon? If we find subsurface methanogens on Mars, does Mars become a habitable planet? Traces of phosphine in Venusian clouds point to the possibility of life forms there. If that’s so, would Venus now be considered habitable?

Andrew LePage on his website is more careful in defining what a habitable planet is. On his Habitable Planet Reality Check postings, he has the following definition:

…the best we can hope to do at this time is to compare the known properties of extrasolar planets to our current understanding of planetary habitability to determine if an extrasolar planet is “potentially habitable.” And by “habitable,” I mean in an Earth-like sense where the surface conditions allow for the existence of liquid water – one of the presumed prerequisites for the development of life as we know it. While there may be other worlds that might possess environments that could support life, these would not be Earth-like habitable worlds of the sort being considered here.

By Andrew’s definition, a habitable planet is first a body that can give rise to life. He then narrows it by adding that the type of life is “life as we know it,” which is life that needs an aqueous medium to evolve. If life evolved in some other medium, say ammonia, then this would be life as we don’t know it, and the planet would not be classified as habitable. But this is not the only definitional constraint he makes. The planet must also be Earth-like in the sense that its surface conditions allow for liquid water. Europa would be excluded even if it had life in its oceans, as its surface conditions do not allow for liquid water. His definition also implies that the planet must be in the habitable zone as defined by Kopparapu, which is thought to be the zone of insolation that allows for surface water on “Earth-like” planets. Would an ocean world with an ocean full of life fit his definition of habitable? Would a Super-Earth with a deep hydrogen atmosphere (sometimes called a Hycean world) outside the habitable zone, but with both oceans and continents and a temperate surface, be habitable? I do note, however, that his definition does not include human survivability as a requirement, because elsewhere in his post he talks about the factors that have kept Earth habitable over billions of years, and Earth’s atmosphere has only been breathable for humans over the last 500 million years.

I’m not picking on Andrew in particular here; he has put more thought into defining habitability than most. I am using him as an example to show just how fraught defining habitability can be. It’s a word that is bandied about with a lot of unexamined assumptions.

This may seem picayune, but the study of life on other worlds has very little data to rely on, so hypotheses are built by logical inference and deduction. If your definitions are inexact, sliding in meaning through your chain of reasoning, then you are likely to draw invalid conclusions. And if the definition of habitable is simply a planet on which life could evolve, why include this arbitrary set of exclusions?

The answer becomes obvious from reading articles in the popular press. A habitable planet is not just one that is life-bearing, but a planet in which life gives rise to conditions that may be habitable for humans.

The assumption that life leads to human habitability is strongly ingrained from our historical experience. By the early 19th century, it was known that oxygen was required for survival and that plants produced oxygen, so the ideas of life and human habitability became intertwined. Our experience of exploring Earth also strongly influenced our perception of other planets. We found parts of Earth hot, parts cold, parts wet and parts dry. Indigenous inhabitants were almost everywhere, and you could always breathe the air. This mindset was carried over to our imaginings of the planets: they would be like Earth, only different.

For instance, H. G. Wells, an author known for applying scientific rigor to his stories, postulates in The First Men in the Moon (1901) a thin but breathable atmosphere on the Moon, complete with native inhabitants. This is despite the Moon’s lack of an atmosphere having been known for over a hundred years prior. Such was our mindset about other planetary bodies. Pulp SF before WWII got away with swashbuckling adventures on pretty much every body in the solar system without the requirement for space suits. After WWII, and until the early sixties, both Venus and Mars were portrayed as having breathable atmospheres: Mars usually as a dying planet, as per Bradbury, and Venus as a tropical planet, as in Heinlein’s Between Planets (1951).

When the first results from Mariner 2 came back in 1962 showing the surface of Venus was hot enough to roast souls, there was considerable resistance in the scientific community to accepting this and much scrambling to come up with alternative explanations. In 1965 Mariner 4 flew by Mars showing us a planet that was a cratered approximation of the moon and erased our last hopes that the new frontiers in our solar system would be anything like the old frontier. Crushed by what our solar system had served up, we turned to the stars.

Our search for life is now two-pronged: the first part is a search for signals from technological civilizations, which we regard as a pretty good indication of life; the second is a search for biomarkers on exosolar planets. We’re searching for biomarkers because, for the near future, exosolar planets can be characterized only by mass, radius and atmospheric spectra. Buoyed by our knowledge of extremophiles, we continue to search the planets and moons of our solar system for signs of life, but now in places not remotely habitable by humans. If the parameters of the search for life touch on habitable conditions for humans, it is purely tangential. These two elements, once fused together in our romantic past, have now become separate.

This divergence has led to a change in the goals of the search for life. We now look for the basic principles that govern the emergence of life: under what conditions can life evolve, and what conditions allow for panspermia? This makes the concept of planetary habitability secondary. Life, once evolved, is in its single-celled form tough and adaptable, so it is likely to persist until there’s a really major change in the state of a planet; habitability is a parameter of life’s continuity, not its origins. So when describing planets, terms like life-potential or life-bearing become more pertinent, and the latter is now starting to be used in preference to habitable.

If we now look at the other fork, the idea of habitability applied to humans, we note that the term has been used in a loose sort of way since the 17th century. Even the idea of the habitable zone was first raised in the 19th century, but it was Stephen Dole, with his 1964 Rand Corporation report Habitable Planets For Man, who put a modern framework to it by precisely defining what a habitable planet for humans was. The book can be downloaded at the Rand site.

This report has held up well considering it was written at a time (1962) when Mercury’s mass had not been fully established and Venus’s atmosphere and surface temperature were unknown.

Image: PG note: Neither Dave nor I could find a better image of the cover of the original Dole volume than the one above, but Stephen Dole’s Planets for Man, co-authored with Isaac Asimov and published in 1964, was a popular version of the more technical Habitable Planets for Man. If you happen to have a copy of the earlier volume and could scan the cover at higher resolution, I would appreciate having the image in the Centauri Dreams files.

Dole first defines carefully what he means by habitability (material omitted for brevity):

“For present purposes, we shall enlarge on our definition of a habitable planet (a planet on which large numbers of people could live without needing excessive protection from the natural environment) to mean that the human population must be able to live there without dependence on materials bought from other planets. In other words, a planet that is habitable can supply all of the physical requirements of human beings and provide an environment in which people can live comfortably and enjoyably…”

You’ll note that Dole’s definition contains echoes of the experience of American settlement where initial settlement is exercised with minimal technology and living off the land. There is emphasis on self-sustainment. It’s the sort of place you’d send an ark ship to.

I take a view of habitability as more of a sliding scale on how much technology you need to survive and live comfortably. On some parts of Earth, the level of technology needed to survive is minimal: basic shelter, light clothes and a pair of flip-flops will do the job. Living at the South Pole is a different story. You must have a heated, insulated station to live in, and when you venture outside, you need heavily insulated clothing covering your entire body and goggles to prevent your eyeballs from freezing. Move to Mars and you need to add radiation protection and a pressurized, breathable atmosphere. The more hostile the environment the more technology you need. By stretching the definition, you could say that an O’Neill colony makes space itself habitable.

I contrast my definition to Dole’s to show that even when dealing with what makes a planet “habitable for humans” you can still get a significant variation on what this entails.

Dole does however itemize carefully the specific requirements necessary to meet his definition. They are:

Temperature: The planet must have substantial areas with mean annual temperatures between 32°F and 86°F (0°C and 30°C). This is not only to meet human needs for comfort, but to allow the growing of crops and the raising of animals. Also, seasonal temperatures cannot be too extreme.

Surface gravity: up to 1.5 g.

Atmospheric composition and pressure: For humans, the lower limit for oxygen is a partial pressure of 100 millibars, below which hypoxia sets in. The upper limit is about 400 millibars, at which point oxygen toxicity sets in, resulting over time in things like blindness. For inert gases, there is a partial pressure above which narcosis occurs; this is proportional to the molecular weight of the gas. The most important of these is nitrogen, which becomes narcotic above a partial pressure of 2.3 bar. For CO2, the upper limit is a partial pressure of 10 millibars, above which acidosis leads to long-term health problems and impaired performance. Most other gases are poisonous at low or very low concentrations.

Image: Original illustration from Dole’s report. You may notice the lower level of O2 set at 60 mm Hg. This is the blood-level minimum, not the atmospheric minimum; there is a 42 millibar drop in O2 partial pressure between the atmosphere and the blood.
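Dole’s atmospheric limits lend themselves to a quick check. The sketch below encodes the partial-pressure thresholds quoted above in a small Python function; the function and its threshold table are my own illustration, not anything from Dole’s report.

```python
# Dole's breathability limits (as quoted in the text), expressed as a
# simple range check on partial pressures. Illustrative only.

DOLE_LIMITS_MBAR = {
    "O2":  (100, 400),   # below 100 mbar: hypoxia; above ~400 mbar: oxygen toxicity
    "N2":  (0, 2300),    # nitrogen narcosis above ~2.3 bar partial pressure
    "CO2": (0, 10),      # acidosis and impaired performance above ~10 mbar
}

def breathable(partial_pressures_mbar):
    """Return (ok, problems) for a dict of partial pressures in millibars."""
    problems = []
    for gas, (lo, hi) in DOLE_LIMITS_MBAR.items():
        p = partial_pressures_mbar.get(gas, 0.0)
        if not lo <= p <= hi:
            problems.append(f"{gas}={p} mbar outside [{lo}, {hi}]")
    return (not problems, problems)

# Earth's sea-level atmosphere (~1013 mbar total) passes all three checks:
earth = {"O2": 212, "N2": 781, "CO2": 0.4}
print(breathable(earth))   # (True, [])
```

A Mars-like surface atmosphere (a few millibars of CO2, almost no O2) fails on both the oxygen floor and, at higher CO2 pressures, the acidosis ceiling.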

Other factors he considered were having enough water for oceans but not enough to drown the planet, sufficient light, wind velocities that aren’t excessive, and not too much radioactivity, volcanic activity or meteorite in-fall.

Dole then went on to discuss general planetology and how stellar parameters would affect habitability (something we now know in much greater detail), and he finishes by calculating the likelihood of a habitable planet around the nearest stars in a manner similar to the Drake equation.

You will notice that the requirements listed here bear little resemblance to the parameters used when discussing habitability with regard to life. The two have gone their separate ways.

Using Dole’s report as a basis for examining the habitability of a planet, in Part II of this essay, I will note how our current state of knowledge has updated his conclusions. Then I will look at how you could produce a planet habitable for humans and the consequences of those mechanisms.

——–

Wikipedia Planetary Habitability Definition
https://en.wikipedia.org/wiki/Planetary_habitability

Andrew LePage: Habitable Planet Reality Check: TOI-700e
https://www.drewexmachina.com/2023/01/23/habitable-planet-reality-check-toi-700e-discovered-by-nasas-tess-mission/

Manasvi Lingam, A brief history of the term ‘habitable zone’ in the 19th century, International Journal of Astrobiology, Volume 20, Issue 5, October 2021, pp. 332 – 336.

Stephen Dole, Habitable Planets For Man, The Rand Corporation, R414-R
https://www.rand.org/content/dam/rand/pubs/reports/2005/R414.pdf

A Liquid Water Mechanism for Cold M-dwarf Planets

A search for liquid water on a planetary surface may be too confining when it comes to the wide range of possibilities for supporting life. We see that in our own Solar System. Consider the growing interest in icy moons like Europa and Enceladus, where there is no possibility of surface water but a potentially rich environment under a thick layer of ice. Extending these thoughts into the realm of exoplanets reminds us that our calculations about how many life-bearing worlds are out there may be in need of revision.

This is the thrust of work by Lujendra Ojha (Rutgers University) and colleagues, as developed in a paper in Nature Communications and presented at the recent Goldschmidt geochemistry conference in Lyon. What Ojha and team point out is that radiogenic heating can maintain liquid water below the surface of planets in M-dwarf systems. Added to our astrobiological catalog, such worlds, orbiting a population of stars that takes in 75 percent or more of all stars in the galaxy, dramatically increase the chances of life elsewhere. The effect is striking. Says Ojha:

“We modeled the feasibility of generating and sustaining liquid water on exoplanets orbiting M-dwarfs by only considering the heat generated by the planet. We found that when one considers the possibility of liquid water generated by radioactivity, it is likely that a high percentage of these exoplanets can have sufficient heat to sustain liquid water – many more than we had thought. Before we started to consider this sub-surface water, it was estimated that around 1 rocky planet every 100 stars would have liquid water. The new model shows that if the conditions are right, this could approach 1 planet per star. So we are a hundred times more likely to find liquid water than we thought. There are around 100 billion stars in the Milky Way Galaxy. That represents really good odds for the origin of life elsewhere in the universe.”

Image: This is Figure 2 from the paper. Caption: Schematic of a basal melting model for icy exo-Earths. a Due to the high surface gravity of super-Earths, ice sheets may undergo numerous phase transformations. Liquid water may form within the ice layers and at the base via basal melting with sufficient geothermal heat. If high-pressure ices are present, meltwater will be buoyant and migrate upward, feeding the main ocean. The red arrows show geothermal heat input from the planet’s rocky interior. b Pure water phase diagram from the SeaFreeze representation illustrating the variety of phases possible in a thick exo-Earth ice sheet. Density differences between the ice phases lead to a divergence from a linear relationship between pressure and ice-thickness. Credit: Ohja et al.

The effect is robust. Indeed, water can be maintained above freezing even on planets producing as little as 0.1 times Earth’s geothermal heat from radiogenic elements. The paper models the formation of ice sheets on such worlds and implies that the circumstellar region that can support life should be widened to take in colder planets outside what we have normally considered the habitable zone.

But the work goes further still, for it implies that planets closer to their host star than the inner boundaries of the traditional habitable zone may also support subglacial liquid water. We also recall that the sheer ubiquity of M-dwarfs in the galaxy helps us, for if water from an internal ocean does reach the surface, perhaps through cracks venting plumes and geysers, we may find numerous venues relatively close to the Sun on which to search for biosignatures.

The key factor here is subglacial melting through geothermal heat, for oceans and lakes of liquid water should be able to form under the ice on Earth-sized planets even when temperatures are as low as 200 K, as we find, for example, on TRAPPIST-1g, which is the coldest of the exoplanets for which Ojha’s team runs calculations.

Such meltwater is buoyant and can migrate upward. ‘Basal melting’ is the term used, explain the authors, for “any situation where the local geothermal heat flux, as well as any frictional heat produced by glacial sliding, is sufficient to raise the temperature at the base of an ice sheet to its melting point.” Subglacial lakes are found on Earth beneath the West Antarctic Ice Sheet, the Greenland ice sheet and possibly ice caps in the Canadian Arctic, and the paper points out that the same mechanism may be at work at the south pole of Mars.

The authors’ modeling uses a software tool called SeaFreeze along with a heat transport model to investigate the thermodynamic and elastic properties of water and ice over a wide range of temperatures and pressures. Given the high surface gravity of worlds like Proxima Centauri b, LHS 1140 b and some of the planets in the TRAPPIST-1 system, water ice there should be subjected to extreme pressures and, as the paper points out, may evolve into high-pressure ice phases. In such conditions the meltwater migrates upward to form lakes or oceans. Indeed, this kind of melting and migration is more likely to occur on planets where the ice sheets are thicker and where surface gravity and surface temperature are higher.

Image: A frozen world heated from within, as envisioned by the paper’s lead author, Lujendra Ojha.

Beyond radiogenic heating, tidal effects are an interesting question, given the potential tidal lock of planets in close orbits around M-dwarfs. Yet planets further out in the system could still benefit from tidal activity, as the paper notes about TRAPPIST-1:

…the age of the TRAPPIST-1 system is estimated to be 7.6 ± 2.2 Gyr; thus, if geothermal heating has waned more than predicted by the age-dependent heat production rate assumed here, tidal heating could be an additional source of heat for basal melting on the TRAPPIST-1 system. On planets e and f of the TRAPPIST-1 system, tidal heating is estimated to contribute heat flow between 160 and 180 mW m−2. Thus, even if geothermal heating were to be negligible on these bodies, basal melting could still occur via tidal heating alone. However, for TRAPPIST-1 g, the mean tidal heat flow estimate from N-body simulation is less than 90 mW m−2. Thus, ice sheets thinner than a few kilometers are unlikely to undergo basal melting on TRAPPIST-1 g.
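The quoted heat flows can be sanity-checked with a rough conductive estimate (a sketch of my own, not the paper’s SeaFreeze calculation): for an ice sheet with constant thermal conductivity k, the base reaches the melting point once the thickness exceeds k(T_melt − T_surf)/q. The constant conductivity is an assumption; real ice conductivity varies with temperature and phase.

```python
# Back-of-envelope basal-melting check for a purely conductive ice sheet:
# the basal temperature reaches melting when H >= k * (T_melt - T_surf) / q.
# K_ICE is an assumed constant conductivity; pressure effects on the
# melting point are ignored.

K_ICE = 2.3          # W / (m K), assumed constant thermal conductivity of ice
T_MELT = 273.0       # K, melting point of water ice at low pressure

def min_thickness_for_basal_melt(t_surface_k, heat_flux_w_m2):
    """Minimum ice thickness (m) for the base to reach the melting point."""
    return K_ICE * (T_MELT - t_surface_k) / heat_flux_w_m2

# A TRAPPIST-1g-like case from the text: 200 K surface, ~90 mW/m^2 tidal flux
h = min_thickness_for_basal_melt(200.0, 0.090)
print(f"{h / 1000:.1f} km")   # roughly 2 km
```

The answer, roughly two kilometers, is consistent with the quoted conclusion that ice sheets thinner than a few kilometers are unlikely to undergo basal melting on TRAPPIST-1 g.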

So we have two mechanisms in play to maintain lakes or oceans beneath surface ice on M-dwarf planets. The finding is encouraging, given that one key objection to life in these environments is the time needed for life to evolve while the young planet is bombarded by the ultraviolet and X-ray radiation common to these stars. The ice shell puts in place what Amri Wandel (Hebrew University of Jerusalem), who writes a commentary on this work for Nature Communications, calls ‘a safe neighborhood,’ and one for which forms of biosignature detection relying on plume activity will doubtless emerge, building on our experience at Enceladus and Europa.

The paper is Ojha et al., “Liquid water on cold exo-Earths via basal melting of ice sheets,” Nature Communications 13, Article number: 7521 (6 December, 2022). Full text. Wandel’s excellent commentary is “Habitability and sub glacial liquid water on planets of M-dwarf stars,” Nature Communications 14, Article number: 2125 (14 April 2023). Full text.

Reducing the Search Space with the SETI Ellipsoid

SETI’s task challenges the imagination in every conceivable way, as Don Wilkins points out in the essay below. A retired aerospace engineer with thirty-five years experience in designing, developing, testing, manufacturing and deploying avionics, Don is based in St. Louis, where he is an adjunct instructor of electronics at Washington University. He holds twelve patents and is involved with the university’s efforts at increasing participation in science, technology, engineering, and math. The SETI methodology he explores today offers one way to narrow the observational arena to targets more likely to produce a result. Can spectacular astronomical phenomena serve as a potential marker that could lead us to a technosignature?

by Don Wilkins

Finite SETI search facilities searching a vast search volume must set priorities for exploration. Dr. Jill Tarter, Chair Emeritus for SETI Research, describes the search space as a “nine-dimensional haystack” composed of three spatial, one temporal (when the signal is active), two polarization, central frequency, sensitivity, and modulation dimensions. Methods to reduce the search space and prioritize targets are urgently needed.

One method for limiting the search volume is the SETI Ellipsoid, shown in Figure 1, which is reproduced from a recent paper in The Astronomical Journal by lead author James R. A. Davenport (University of Washington, Seattle) and colleagues. [1]

Image: This is Figure 1 from the paper. Caption: Schematic diagram of the SETI Ellipsoid framework. A civilization (black dot) could synchronize a technosignature beacon with a noteworthy source event (green dot). The arrival time of these coordinated signals is defined by the time-evolving ellipsoid, whose foci are Earth and the source event. Stars outside the Ellipsoid (blue dot) may have transmitted signals in coordination with their observation of the source event, but those signals have not reached Earth yet. For stars far inside the Ellipsoid (pink dot), we have missed the opportunity to receive such coordinated signals. Credit: Davenport et al.

In this approach, an advanced civilization (black dot) synchronizes a technosignature beacon with a significant astronomical event (green dot). The astronomical event, in the example, is SN 1987A, a type II supernova in the Large Magellanic Cloud, a dwarf satellite galaxy of the Milky Way. The explosion occurred approximately 51.4 kiloparsecs (168,000 light-years) from the Sun.

Arrival time of the coordinated signals is defined by a time-evolving ellipsoid, with foci at Earth (or an observation station within the Solar System) and the source event. Whether a synchronized signal has arrived depends on the distance from the advanced civilization to the Solar System (d1) and the distance from the advanced civilization to the astronomical event (d2). Signals from civilizations outside the Ellipsoid (blue dot) that were coordinated with the source event have not yet reached the Solar System. For stars far inside the Ellipsoid (pink dot), any coordinated signal has already swept past Earth, and we have missed the opportunity to receive it. However, the advanced civilization could beam new signals toward such stars, forming a new Ellipsoid.
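Geometrically the test is simple: with distances in light-years and times in years (c = 1), a star’s coordinated signal arrives at Earth d1 + d2 − D years after we see the source event itself, where D is the Earth-event distance. A minimal sketch of that condition; the coordinates and helper names are my own, not from the paper’s code:

```python
# Sketch of the SETI Ellipsoid timing condition. Distances are in
# light-years, times in years, with Earth at the origin. Illustrative only.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ellipsoid_status(star, event, years_since_earth_saw_event, tol=0.1):
    """Classify a star against the Ellipsoid with foci Earth and event."""
    earth = (0.0, 0.0, 0.0)
    d1 = dist(earth, star)          # star -> Earth
    d2 = dist(star, event)          # star -> source event
    big_d = dist(earth, event)      # Earth -> source event
    # Extra light-travel time of a star-relayed signal vs. the direct light:
    delay = d1 + d2 - big_d
    if abs(delay - years_since_earth_saw_event) < tol:
        return "on ellipsoid: a synchronized signal arrives about now"
    if delay < years_since_earth_saw_event:
        return "inside: the synchronized signal already passed Earth"
    return "outside: the synchronized signal has not arrived yet"

# A hypothetical star 10 ly away, directly toward a source 1000 ly away,
# checked 36 years after the event was seen from Earth:
print(ellipsoid_status((10.0, 0, 0), (1000.0, 0, 0), 36.0))
```

A star on the Earth-event line adds no extra path length (delay near zero), so it sits deep inside the Ellipsoid almost immediately after the event is seen.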

The source event acts as a “Schelling Point” to facilitate communication between observers who have not coordinated the time or place of message exchanges. A Schelling point is a game theory concept which proposes links can be formed between two would-be communicators simply by using common references, in this case a supernova, to coordinate the time and place of communication. In addition to supernovae, source events include gamma-ray bursts, binary neutron star mergers, and galactic novae.

In conjunction with the natural event which attracts the attention of other civilizations, the advanced civilization broadcasts a technosignature signal unambiguously advertising its existence. The technosignature might, as an example, mimic a pulsar’s output: modulation, frequency, bandwidth, periods, and duty cycle.

The limiting factor in using the SETI Ellipsoid to select targets has been the unavailability of precise distance measurements to nearby stars. The Gaia mission remedies that problem: its two telescopes provide parallaxes, with precision 100 times better than its predecessors’, for over 1.5 billion sources. Distance uncertainties are less than 10% for stars within several kiloparsecs of Earth. This precision directly translates into lower uncertainties on the timing of signal coordination along the SETI Ellipsoid.
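To see why parallax precision matters here: distance in parsecs is 1000 divided by the parallax in milliarcseconds, and to first order the fractional distance error equals the fractional parallax error, which maps directly into years of timing uncertainty along the Ellipsoid. A quick sketch with illustrative numbers, not actual Gaia catalog values:

```python
# Parallax -> distance, with first-order error propagation. The example
# star below is hypothetical; a light-year of distance error corresponds
# to roughly a year of Ellipsoid-crossing timing error.

PC_TO_LY = 3.2616  # light-years per parsec

def distance_from_parallax(parallax_mas, sigma_mas):
    """Distance and 1-sigma uncertainty in parsecs from a parallax in mas."""
    d_pc = 1000.0 / parallax_mas
    sigma_pc = d_pc * (sigma_mas / parallax_mas)   # first-order propagation
    return d_pc, sigma_pc

# A star at ~500 pc with a 2.5% parallax error:
d, s = distance_from_parallax(2.0, 0.05)
print(f"{d:.0f} pc +/- {s * PC_TO_LY:.0f} ly of timing uncertainty")
```

At 500 pc, a 2.5% parallax error still leaves a timing window of roughly forty years, which is why sub-percent Gaia parallaxes for nearby stars are what make the technique practical.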

“I think the technique is very straightforward. It’s dealing with triangles and ellipses, things that are like high-school geometry, which is sort of my speed,” James Davenport, a University of Washington astronomer and lead author of the referenced papers, joked with GeekWire. “I like simple shapes and things I can calculate easily.” [2]

An advanced civilization identifies a prominent astronomical event, as an example, a supernova. It then determines which stars could harbor civilizations which could also observe the supernova and the advanced civilization’s star. An unambiguous beacon is transmitted to stars within the Ellipsoid. The volume devoted to beacon propagation is significantly reduced, which reduces power and cost, when compared to an omnidirectional beacon.

At the receiving end, the listeners would determine which stars could see the supernova and which would have time to send a signal to the listeners. The listening astronomers would benefit by limiting their search volume to stars which meet both criteria.

For example, astronomers on Earth only observed SN 1987A in 1987, thirty-six years ago. If the advanced civilization beamed a signal at the Solar System a century ago, our astronomers would not have had the necessary clue, the observation of SN 1987A, to select the advanced civilization’s star as the focus of a search. Assuming both civilizations are using SN 1987A as a coordination beacon, human astronomers should listen to targets within a hemisphere defined by a radius of thirty-six light-years.

The following is written with apologies to Albert Einstein. The advanced civilization could observe the motion of stars and predict when a star will come within the geometry defined by the Ellipsoid. In the case of the Earth and SN1987A, the advanced civilization could have begun transmissions thirty-six years ago.

The recently discovered SN 2023ixf in the spiral galaxy M101 could serve as one of the foci of an Ellipsoid; 108 stars lie within 0.1 light-year of the SN 2023ixf–Earth SETI Ellipsoid. [3]

Researchers propose to use the Allen Telescope Array (ATA), designed specifically for radio technosignature searches, to search this Ellipsoid. The authors point out the utility of the approach and caution about its inherent anthropocentric biases:

“…there are numerous other conspicuous astronomical phenomena that have been suggested for use in developing the SETI Ellipsoid, including gamma-ray bursts (Corbet 1999), binary neutron star mergers (Seto 2019), and historical supernovae (Seto 2021). We cannot know what timescales or astrophysical processes would seem “conspicuous” to an extraterrestrial agent with likely a much longer baseline for scientific and technological discovery (e.g., Kipping et al. 2020; Balbi & Ćirković 2021). Therefore we acknowledge the potential for anthropogenic bias inherent in this choice, and instead focus on which phenomena may be well suited to our current observing capabilities.”

1. James R. A. Davenport, Bárbara Cabrales, Sofia Sheikh, Steve Croft, Andrew P. V. Siemion, Daniel Giles, and Ann Marie Cody, “Searching the SETI Ellipsoid with Gaia,” The Astronomical Journal, 164:117 (6pp), September 2022, https://doi.org/10.3847/1538-3881/ac82ea

2. Alan Boyle, How ‘Big Data’ could help SETI researchers intensify the search for alien civilizations, 22 June 2022, https://www.geekwire.com/2022/how-big-data-could-help-seti-researchers-intensify-the-search-for-alien-civilizations/

3. James R. A. Davenport, Sofia Z. Sheikh, Wael Farah, Andy Nilipour, Bárbara Cabrales, Steve Croft, Alexander W. Pollak, and Andrew P. V. Siemion, “Real-Time Technosignature Strategies with SN 2023ixf,” draft version June 7, 2023.

Earth in Formation: The Accretion of Terrestrial Worlds

It would be useful to have a better handle on how and when water appeared on the early Earth. We know that comets and asteroids can bring water from beyond the ‘snowline,’ that zone demarcated by temperatures beyond which volatiles like water, ammonia or carbon dioxide are cold enough to condense into ice grains. For our Solar System, that distance in our era is 5 AU, roughly the orbital distance of Jupiter, although the snowline would have been somewhat closer to the Sun during the period of planet formation. So we have a mechanism to bring ices into the inner Solar System but don’t know just how large a role incoming ices played in Earth’s development.

Knowing more about the emergence of volatiles on Earth would help us frame what we see in other stellar systems, as we evaluate whether or not a given planet may be habitable. Usefully, there are ways to study our planet’s formation that can drill down to its accretion from the materials in the original circumstellar disk. A new study from Caltech goes to work on the magmas that emerge from the planetary interior, finding that water could only have arrived later in the history of Earth’s formation.

Published in Science Advances, the paper involves an international team working in laboratories at Caltech as well as the University of the Chinese Academy of Sciences, with Caltech grad student Weiyi Liu as first author. When I think about studying magma, zircon comes first to mind. It appears in crystalline form as magma cools and solidifies. I’m no geologist, but I’m told that the chemistry of melt inclusions can identify factors such as volatile content and broader chemical composition of the original magma itself. Feldspar crystals are likewise useful, and the isotopic analysis of a variety of rocks and minerals can tell us much about their origin.

So it’s no surprise to learn that the Caltech paper uses isotopes, in this case the changing ratio of isotopes of xenon (Xe) as found in mid-ocean ridge basalt vs. ocean island basalt. Specifically, 129Xe* comes from the radioactive decay of the now-extinct volatile 129I, whose half-life is 15.7 million years, while 136Xe*Pu comes from the spontaneous fission of the likewise extinct 244Pu, with a half-life of 80 million years. So the 129Xe*/136Xe*Pu ratio is a useful tool. As the paper notes, this ratio:

…evolves as a function of both time and reservoirs compositions (i.e., I/Pu ratio) early in Earth’s history. Hence, the study of the 129Xe*/136Xe*Pu in silicate reservoirs of Earth has the potential to place strong constraints on Earth’s accretion and evolution.

The ocean island basalt samples, originating as far down as the core/mantle boundary, reveal this ratio to be low by a factor of 2.8 as compared to mid-ocean ridge basalts, which have their origin in the upper mantle. Using computationally intensive simulations drawing on what is known as first-principles molecular dynamics (FPMD), the authors find that the low I/Pu levels were established in the first 80 to 100 million years of the Solar System (thus before 129I extinction), and have been preserved for the past 4.45 billion years. Their calculations assess the I/Pu findings under different accretion scenarios, drawing on simulated magmas from the lower mantle, which runs from 680 kilometers below the surface, to the core-mantle boundary (2,900 kilometers), and also from the upper mantle beginning at 15 kilometers and extending downward to 680 kilometers.
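For readers who like to check the numbers, the 80-to-100-million-year window makes intuitive sense given the two half-lives quoted above. A quick back-of-the-envelope sketch in Python (using only the standard decay law, with half-lives taken from the article):

```python
# Sketch: why 129I is "extinct" while longer-lived 244Pu kept producing
# xenon. Standard radioactive decay: N(t)/N0 = 0.5 ** (t / half_life).

def fraction_remaining(t_myr: float, half_life_myr: float) -> float:
    """Fraction of a radioactive nuclide left after t_myr million years."""
    return 0.5 ** (t_myr / half_life_myr)

# After the ~100 Myr window the authors identify:
i129 = fraction_remaining(100, 15.7)   # iodine-129, half-life 15.7 Myr
pu244 = fraction_remaining(100, 80.0)  # plutonium-244, half-life 80 Myr

print(f"129I remaining after 100 Myr:  {i129:.1%}")   # ~1%: effectively gone
print(f"244Pu remaining after 100 Myr: {pu244:.1%}")  # ~42%: still decaying
```

With 129I essentially exhausted after 100 million years while 244Pu lingers, any I/Pu signature locked into a mantle reservoir during that window is frozen in place, which is what makes the xenon ratio such a sensitive clock.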

The result: The lower mantle reveals an early Earth composed primarily of dry, rocky materials with a distinct lack of volatiles, while the later-forming upper mantle shows three times the amount of volatiles found below. The volatiles essential for life seem to have arrived only within the last 15 percent, and perhaps less, of Earth’s formation. In the caption below, the italics are mine.

Image: This is Figure 4 from the paper. Caption: Schematic representation of the heterogeneous accretion history of Earth that is consistent with the more siderophile behavior of I and Pu at high P-T [pressure-temperature] conditions (this work). As core formation alone does not result in I/Pu fractionations sufficient to explain the ~3 times lower 129Xe*/136Xe*Pu ratio observed in OIBs [ocean island basalt] compared to MORBs [mid-ocean ridge basalt], a scenario of heterogeneous accretion has to be invoked in which volatile-depleted differentiated planetesimals constitute the main building blocks of Earth for most of its accretion history (phase 1), before addition of, comparatively, volatile-rich undifferentiated materials (chondrite and possibly comet) during the last stages of accretion (phase 2). Isolation and preservation, at the CMB [core mantle boundary], of a small portion of the proto-Earth’s mantle before addition of volatile-rich material would explain the lower I/Pu ratio of plume mantle, while the mantle involved in the last stages of the accretion would have higher, MORB-like, I/Pu ratios. Because the low I/Pu mantle would also have an inherently lower Mg/Si, its higher viscosity could help to be preserved at the CMB until today. Credit: Liu et al.

We’re a long way from knowing in just what proportions Earth’s water has derived from incoming materials from beyond the snowline. But we’re making progress:

…our model sheds light on the origin of Earth’s water, as it requires that chondrites represent the main material delivered to Earth in the last 1 to 15% of its accretion. Independent constraints from Mo [molybdenum] nucleosynthetic anomalies require these late accreted materials to come from the carbonaceous supergroup. Together, these results indicate that carbonaceous chondrites [the most primitive class of meteorites, containing a high proportion of carbon along with water and minerals] must have represented a non-negligible fraction of the volatile-enriched materials in phase 2 and, thus, play a substantial role in the water delivery to Earth.

All this from the observation that mid-ocean ridge basalts show roughly three times higher iodine/plutonium ratios (inferred from xenon isotopes) than ocean island basalts. The key to this paper, though, is the demonstration that the difference likely stems from a history of accretion that began with dry planetesimals and ended with a secondary accretion phase driven by infalling materials rich in volatiles.

Thus Earth presents us with a model of planet formation from dry, rocky materials, one that presumably would apply to other terrestrial worlds, though we’d like to know more. To push the inquiry forward, Caltech’s Francois Tissot, a co-author on the paper, advocates looking at rocky worlds within our own Solar System:

“Space exploration to the outer planets is really important because a water world is probably the best place to look for extraterrestrial life. But the inner solar system shouldn’t be forgotten. There hasn’t been a mission that’s touched Venus’ surface for nearly 40 years, and there has never been a mission to the surface of Mercury. We need to be able to study those worlds to better understand how terrestrial planets such as Earth formed.”

And indeed, to better measure the impact of ices brought from far beyond the snowline to the infant worlds of the inner system. Tissot’s work demonstrates how deeply we are now delving into the transition between protoplanetary disks and fully formed planets, working across the entire spectrum of what he calls ‘geochemical problematics,’ which includes studying the isotopic makeup of meteorites and their inclusions, the reconstruction of the earliest redox conditions in the Earth’s ocean and atmosphere, and the analysis of isotopes to investigate ancient magmas. At Caltech, he has created the Isotoparium, a state-of-the-art facility for high-precision isotope studies.

That we are now probing our planet’s very accretion is likely not news to many of my readers, but it stuns me as another example of extraordinary methodologies driving theory forward through simulation and laboratory work. And as we don’t often consider work on the geological front in these pages, it seems a good time to point this out.

The paper is Weiyi Liu et al., “I/Pu reveals Earth mainly accreted from volatile-poor differentiated planetesimals,” Science Advances Vol. 9, No. 27 (5 July 2023) (full text).

On Retrieving Dyson

One of the pleasures of writing and editing Centauri Dreams is connecting with people I’ve been writing about. A case in point is my recent article on Freeman Dyson’s “Gravitational Machines” paper, which has only lately again come to light thanks to the indefatigable efforts of David Derbes (University of Chicago Laboratory Schools, now retired). See Freeman Dyson’s Gravitational Machines for more, as well as the follow-up, Building the Gravitational Machine. I was delighted to begin an email exchange with Dr. Derbes following the Centauri Dreams articles, out of which emerges today’s post, which presents elements of that exchange.

I run this particularly because of my continued fascination with the work and personality of Freeman Dyson, who is one of those rare individuals who seems to grow in stature every time I read or hear about his contributions to physics. It was fascinating to receive from Dr. Derbes not only the background on how this manuscript hunter goes about his craft, thereby illuminating some of the more hidden corners of physics history, but also to learn of his recollections of the interactions between Dyson and Peter Higgs, whose ‘Higgs mechanism’ has revolutionized our understanding of mass and contributed a key factor to the Standard Model of particle physics. I’m also pleased to make the acquaintance of a kindred spirit, who shares my fascination with how today’s physics came to be, and the great figures who shaped its growth.

by David Derbes

I have a lifelong interest in the history of physics, particularly the history of physicists. Somehow I got through graduate school (in the UK; but I’m American) with only a very shaky acquaintance with Feynman diagrams and calculations in QED [quantum electrodynamics, the relativistic quantum theory of electrically charged particles, mutually interacting by exchange of photons]. This led me to a program of self-study (resulting in “Feynman’s derivation of the Schrödinger equation”, Amer. Jour. Phys. 64 (1996) 881-884, two editions of Dyson’s AQM [Advanced Quantum Mechanics], and, with Richard Sohn, David J. Griffiths, and a cast of thousands, Sidney Coleman’s Lectures on Quantum Field Theory).

Along the way I stumbled onto David Kaiser’s Drawing Theories Apart, a sociological study of Feynman’s diagrams. Kaiser, who is now a friend, is a very remarkable fellow; he has two PhDs, one in physics (ostensibly under Coleman but actually under Alan Guth) and another in the history and philosophy of science. Kaiser mentioned the Cornell AQM notes of Dyson, never published, and I thought, hmmm… I found scans of them online at MIT, and (deleting a few side trips here) contacted Dyson about LaTeX’ing them for the arXiv (where they may be found today).

Image: Physicist, writer and teacher David Derbes, recently retired from University of Chicago Laboratory Schools. Credit: Maria Shaughnessy.

Dyson was quite enthusiastic. It probably helped that I had been a grad student of Higgs’ under Nick Kemmer at Edinburgh; Kemmer had steered Dyson towards physics and away from mathematics at Cambridge after the war. Ultimately (in my opinion) it is Dyson who was (very quietly) responsible for the recognition of Higgs’s work, and its incorporation by Weinberg into the Standard Model. Dyson had seen Higgs’s short pieces from 1964, learned (maybe from Kemmer) that he was at UNC Chapel Hill for 1965-66, and wrote Higgs to invite him to give a talk at the IAS, which led to his giving a talk at Harvard (with Coleman, Glashow, and maybe Weinberg, then at MIT, in the audience).

Typing up Dyson’s Cornell lectures killed two birds: I learned more about QED, and I learned LaTeX from scratch. In retirement, “manuscript salvage” is my main hobby. (There are at least a couple of other oddballs who are doing much the same thing: David Delphenich, and there’s a guy in Australia, Ian Bruce, who has done a bunch of stuff from the 17th and 18th century, among other things a new translation of the Principia.)

Flash forward to shortly after LIGO’s results were announced. A letter in Physics Today drew attention to Dyson’s “Gravitational Machines”, so I went looking for it in the Cameron collection. I have a copy of Dyson’s Selected Works, and as you report the paper is not there. Couldn’t find it anywhere else, either. Cameron’s collection was mostly published in ephemeral paperback (I think there were a small number of hardbacks for libraries, but the U of Chicago’s copy is in paper covers).

So I wrote Dyson, with whom I had developed a very friendly relationship (there is a second edition of AQM, and it was more work than the first, due to the ~200 Feynman diagrams in the supplement), and asked if he would consent to my retyping (and redrawing the illustration for) his article for the arXiv. He was pleased by this. I very much regret that I couldn’t get it done before he died. The reason for that was copyright problems.

I’m going to give you only bullet points for that. Cameron died in 2005. His Interstellar Communication was published by W. A. Benjamin, then purchased by Cummings, Cummings was purchased by Addison-Wesley, and most of A-W’s assets purchased by Pearson; some by Taylor & Francis (UK). Took about four years to unravel. Neither Pearson (totally unhelpful) nor T&F (much better) had any record of the Cameron collection. As this may be helpful to you down the road, here was the resolution:

A work which was in copyright prior to January 1, 1964 had to have its copyright renewed in the 28th year after original copyright or lose its US copyright protection forever. Cameron’s collection was copyrighted in 1963. It took hours, but by scouring the online catalog at the US Copyright Office (you can do it in person near the Library of Congress) I was able to convince myself that the copyright had never been renewed. As far as US copyright goes, “Gravitational Machines” is in the public domain, and so I was clear of corporate entanglements (more to the point, so is the arXiv).

However, as I learned from Dyson’s Selected Papers, the article had originally been entered into an annual contest by the Gravity Research Foundation. The contributors to this contest read historically like a Who’s Who of astrophysicists, general relativists and astronomers. So I got in touch with that organization’s director, George Rideout Jr. Rideout’s father had been appointed director by Roger W. Babson, who made a pile of money and set the foundation up. The story behind this is very sad: His beloved older sister drowned, and he blamed gravity. So he thought, well, if people could only invent anti-gravity, that might prevent future disasters. So he set up the foundation. (I think they also provided some funding for GR1 [Conference on the Role of Gravitation in Physics], the first international general relativity conference, Chapel Hill, 1957.)

I quickly obtained permission from George Rideout, satisfied the arXiv officials that they were free and clear to post “Gravitational Machines,” and here we are. (As I mentioned in the arXiv posting, the abstract comes from the original Gravity Research Foundation submission; it is absent in the Cameron collection.)

Incidentally, in chasing down other things, I found something I’d been seeking for a long time, the report from the Chapel Hill conference:

https://edition-open-sources.org/publications/#sources

https://edition-open-sources.org/sources/5/index.html

(So as you can see, there are several of us oddball manuscript hunters out there.)

Theoretical physics was not that large a community in 1965, and the British community even smaller. The physicists of Dyson’s generation typically went to Cambridge (which remains the main training ground for math and physics in the UK), with smaller spillover at Oxford, Imperial College London, and Edinburgh.

Kemmer hired Higgs at Edinburgh (Peter had been in the same department as Maurice Wilkins and Rosalind Franklin at King’s College, London. He was an expert at the time on crystal structure via group theory. He did not have any direct involvement with the DNA work, though subsequently he wrote an article that had a lot to do post facto with explaining the helical structure. The big boss at the lab (not Wilkins) was apparently quite annoyed with Higgs that he didn’t want to work on DNA.) Higgs wrote a Kemmer obit for the University of Edinburgh bulletin. He had been at Edinburgh for a couple of years in the 1950s in a junior position before he returned for good in 1960 (I think).

If I recall correctly, as Peter tells the story, Sheldon Glashow (whom Higgs had known since a Scottish Summer School (conference) in Physics, 1960, I think) told Higgs that if he were ever planning to be in the Northeast, Glashow would arrange for Peter to give a talk at Harvard on whatever he liked. Independently of Glashow, Dyson wrote Peter to give a talk on what is now famously the Higgs mechanism at IAS, and Peter called Glashow to say something like, “Well, I’m driving from Chapel Hill to Princeton, and I see that Cambridge is only another few hours, so…” and that led to Higgs giving pretty much the same talk at Harvard, a really important event. But if Dyson hadn’t asked Peter to come to Princeton, he would not have gone to Harvard.

[Thus the contingencies of history, always telling a fascinating tale, in this case of a concept that rocked the world of physics, and wouldn’t you know Freeman Dyson would be in the middle of it. – PG]

Sunshade: A New Trek through ‘Daedalus Country’

Letting the imagination roam has philosophical as well as practical benefits. From the interstellar perspective, consider the Daedalus starship, designed with loving detail by members of the British Interplanetary Society in the 1970s. The mammoth (54,000 ton) vehicle was never conceived as remotely feasible at our stage of technology. But ‘our stage of technology’ is exactly the point the project illustrated. Daedalus demonstrated that there was nothing in physical law to prevent the construction of a starship. The question was, when would we reach the level of building it? For as Robert Forward frequently pointed out, interstellar flight could no longer be considered impossible.

We can’t know the answer to the question, but recall that before Daedalus, there was a lot of ‘informed’ opinion that interstellar flight was a chimera, and that all species were necessarily restricted to their home systems. Daedalus made the point debatable. If a civilization had a thousand year jump on us in terms of tech, could they build this thing? Probably, but they’d also surely come up with far better methods than we in the 1970s could imagine. Daedalus was, then, a possibility maker, a driver for further imaginings.

Fortunately, the Daedalus impulse – and the broader concept of thought experiments that so captivated Einstein – remains with us. I think, for example, of Cliff Singer’s pellet-driven starship, one that would demand a particle accelerator fully 100,000 miles long. Crazy? Sure, but a few decades later we were talking about slinging nanochip satellites in swarms using Jupiter’s magnificent magnetic fields, finding a way to do with nature what was evidently impossible for us to build with our own hands.

Robert Forward used to conceive of enormous laser sails for interstellar exploration, sails whose outbound laser flux would be amplified by an even larger 560,000-ton Fresnel lens built between the orbits of Saturn and Uranus. But I discovered in a new paper from Greg Matloff (New York City College of Technology, CUNY) that it was James Early who introduced another extraordinary idea, that of using a gigantic sail-like structure not for propulsion but as a sunshade. Early’s 1989 paper in the Journal of the British Interplanetary Society specifically addressed the ‘greenhouse effect,’ which even then concerned scientists in terms of its effect on global climate. Could technology tame it?

Once again we’re in Daedalus country, or Forward country, if you will. Imagine a true megastructure, a 2000 kilometer sunshade located at the L1 Lagrange region between the Earth and the Sun, approximately 1.5 million kilometers from Earth. The five Lagrange points allow a spacecraft to remain in a relatively fixed orbital position in relation to two larger masses, in the case of L1 the Earth and the Sun. But L1 is not stable, which means that a structure like the sunshade would require thrusting capability for course correction to maintain its optimum position in relation to the Earth. Bear in mind as well the effect of solar radiation pressure on the shade.
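That 1.5 million kilometer figure falls out of simple orbital mechanics. A quick sketch in Python, using the standard Hill-sphere approximation for the L1 distance (the constants are textbook values, not drawn from Matloff’s paper):

```python
# Sketch: distance of the Sun-Earth L1 point from Earth, via the
# Hill-sphere approximation d ~ R * (m / 3M)^(1/3).

M_SUN = 1.989e30     # solar mass, kg
M_EARTH = 5.972e24   # Earth mass, kg
R_AU_KM = 1.496e8    # mean Earth-Sun distance, km

d_l1_km = R_AU_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"L1 lies about {d_l1_km:.2e} km sunward of Earth")  # ~1.5 million km
```

A body at that distance orbits the Sun with Earth’s one-year period despite its smaller orbit, which is why a shade parked there can shadow the Earth continuously, station-keeping aside.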

Image: Physicist and prolific writer Greg Matloff, author of The Starflight Handbook (Wiley, 1989) and many other books and papers including the indispensable Deep Space Probes (Springer, 2005).

Would a 2000-kilometer shade be sufficient, assuming the intention of reducing the Earth’s effective temperature (255 K) by one K? We learn that solar flux would need to be reduced by 1.5 percent to bring the effective temperature down to 254 K. A 2000-kilometer shade in fact somewhat overshoots the need, reducing solar influx by about 2 percent. That’s a figure that changes over astronomical time, of course, for like any active star, the Sun experiences increased luminosity as it ages, but 2000 km certainly serves for now.
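The 1.5 percent figure can be checked in a couple of lines. Effective temperature scales as the fourth root of absorbed flux (Stefan-Boltzmann), so the required flux cut for a 1 K drop is 1 − (254/255)^4:

```python
# Sketch: checking the ~1.5% flux-reduction figure. With albedo held
# fixed, effective temperature goes as (absorbed flux)^(1/4), so the
# fractional flux cut needed for a 1 K drop from 255 K is:

T_NOW, T_TARGET = 255.0, 254.0  # effective temperatures, K
flux_cut = 1.0 - (T_TARGET / T_NOW) ** 4
print(f"Required reduction in solar flux: {flux_cut:.2%}")  # ~1.56%
```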

But how to build such a thing? Matloff looks at two versions of the technology, the first being a fully opaque, thick sunshade which would be constructed of lunar or perhaps asteroidal materials. Think in terms of a square sunshade with a thickness of 10^-4 meters and a density of 2,000 kg/m^3, producing a mass of 8 × 10^11 kg. Building such a thing on Earth is a non-starter, so we can think in terms of assembly in lunar orbit, with the shade materials taken from an asteroid of 460 meters in radius. Corrective thrusting via solar-electric methods with an exhaust velocity of 100 km/s adds up to an eye-opening fuel consumption of 400 kg/s.
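Those mass numbers hang together nicely. A sketch below reproduces them, assuming (my reading, not stated explicitly above) a square shade 2,000 km on a side, which is what the quoted 8 × 10^11 kg implies:

```python
from math import pi

# Sketch: reproducing the opaque-shade mass budget. Assumes a square
# shade 2,000 km on a side, 10^-4 m thick, density 2,000 kg/m^3.

side_m = 2_000e3     # 2,000 km in meters
thickness_m = 1e-4
density = 2_000.0    # kg/m^3

mass_kg = side_m**2 * thickness_m * density
print(f"Shade mass: {mass_kg:.1e} kg")  # 8.0e+11 kg

# Radius of a spherical asteroid of the same density supplying that mass:
volume_m3 = mass_kg / density
radius_m = (3 * volume_m3 / (4 * pi)) ** (1 / 3)
print(f"Source asteroid radius: {radius_m:.0f} m")  # ~457 m
```

The asteroid radius lands within a few meters of the 460-meter figure in the paper, a reassuring consistency check.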

But we have other options. Matloff goes on to consider a transparent diffractive film sail (Andreas Hein has recently explored this possibility). Here the sail is imprinted with a diffraction pattern that diverts incoming sunlight from striking the Earth. This is a sail that experiences low solar radiation pressure, its mass reaching 6.4 × 10^8 kg. But thinner transparent surfaces are feasible as the technology matures, reducing the mass on orbit to 10^7 kg. Such a futuristic sunshade could be built on Earth and delivered to LEO through 100 flights of today’s super-heavy launch vehicles. Presumably other options will emerge by the time we have the assembly capabilities.

Either of these designs would divert 5.6 × 10^15 watts of sunlight from the Earth, energy that if directed to other optical devices would offer numerous possibilities. Matloff considers powering up laser arrays for asteroid mitigation, an in-space defensive system that would work with energy levels much higher than those available through currently envisioned systems like the proposed Breakthrough Starshot Earth-based laser array. A space-based system would also have the advantage of not being confined to a single hemisphere on the surface.
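That 5.6 × 10^15 watt figure is straightforward to sanity-check: it is roughly the solar constant, scaled to L1 (about 1 percent closer to the Sun than Earth), multiplied by the shade area. A sketch under those assumptions:

```python
# Sketch: order-of-magnitude check on the 5.6e15 W figure. Assumes a
# 2,000 km x 2,000 km shade and the solar constant scaled to L1.

SOLAR_CONSTANT = 1361.0           # W/m^2 at 1 AU
AU_KM, L1_OFFSET_KM = 1.496e8, 1.5e6

area_m2 = (2_000e3) ** 2          # shade area, 4e12 m^2
flux_l1 = SOLAR_CONSTANT * (AU_KM / (AU_KM - L1_OFFSET_KM)) ** 2
power_w = flux_l1 * area_m2
print(f"Sunlight intercepted: {power_w:.2e} W")  # ~5.6e15 W
```

For scale, that is a few hundred times present-day humanity’s total primary power consumption, which is what makes the laser and power-beaming applications below plausible at all.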

Other possibilities emerge. A laser near the sunshade could tap some of the solar flux and direct it to power stations in geosynchronous Earth orbit, where it would be converted into a microwave frequency to which the Earth’s atmosphere is transparent. You can see the political problem here, which Matloff acknowledges. Any such instrumentation clearly has implications as a weapon, demanding international governance, although through what mechanisms remains to be determined.

But let’s push this concept as hard as we can. How about accelerating a starship? Matloff works the math on a crewed generation ship accelerated to interstellar velocities, with travel time to the nearest star totaling about four centuries. The point is, this is an energy source that makes abundant solar power available while producing the desired reduction in temperatures on Earth, a benefit that could drive development of these technologies not only by us but conceivably by other civilizations as well. If such is the case, we have a new kind of technosignature:

If sufficiently large telescopes are constructed on Earth or in space, astronomers might occasionally survey the vicinity of nearby habitable planets for momentary visual glints. If these sporadic events correspond to the planet-star L1 point, they might constitute an observable technosignature of an existing advanced extraterrestrial civilization.

When considering technosignatures from ET sunshades, it is worth noting that a single monolithic sunshade might be replaced by two or more smaller devices. Also, an advanced extraterrestrial civilization may choose to place its sunshade in a location other than planet-star L1.

There is a Bob Forward quality to this paper that reminds me of Forward’s pleasure in delving into the feasibility of projects from the standpoint of physics while leaving open the issue of how engineers could create structures that at present seem fantastic. That quality might be described as ‘visionary,’ calling up, say, Konstantin Tsiolkovsky in its sheer sweep. Matloff, who knew Forward well, preserves Forward’s exuberance, the pleasure of painting what will be possible for our descendants, who as they one day leave our system will surely continue the exploration of their own ‘Daedalus country.’

The paper is Matloff, “The Lagrange Sunshade: Its Effectiveness in Combating Global Warming and Its Application to Earth Defense from Asteroid Impacts, Beaming Solar Energy for Terrestrial Use, Propelling Interstellar Migration by Laser-Photon Sails and Its Technosignature,” JBIS Vol. 76, No. 4 (April 2023). The Early paper is “Space-based solar shield to offset greenhouse effect,” JBIS Vol. 42, Dec. 1989, p. 567-569 (abstract).

Charter

In Centauri Dreams, Paul Gilster looks at peer-reviewed research on deep space exploration, with an eye toward interstellar possibilities. For many years this site coordinated its efforts with the Tau Zero Foundation. It now serves as an independent forum for deep space news and ideas. In the logo above, the leftmost star is Alpha Centauri, a triple system closer than any other star, and a primary target for early interstellar probes. To its right is Beta Centauri (not a part of the Alpha Centauri system), with Beta, Gamma, Delta and Epsilon Crucis, stars in the Southern Cross, visible at the far right (image courtesy of Marco Lorenzi).
