Centauri Dreams

Imagining and Planning Interstellar Exploration

The Long Afternoon of Earth

Every time I mention a Brian Aldiss novel, I have to be careful to check the original title against the one published in the US. The terrific novel Non-Stop (1958) became Starship in the States, rather reducing the suspense of decoding its strange setting. Hothouse (1962) became The Long Afternoon of Earth when abridged in the US following serialization in The Magazine of Fantasy & Science Fiction. I much prefer the poetic US title with its air of brooding fin de siècle decline as Aldiss imagines our deep, deep future.

Imagine an Earth orbiting a Sun far hotter than it is today, a world where our planet is now tidally locked to that Sun, which Aldiss describes as “paralyzing half the heaven.” The planet is choked with vegetation so dense and rapidly evolving that humans are on the edge of extinction, living within a continent-spanning tree. The memory of reading all this always stays with me when I think about distant futures, which by most accounts involve an ever-hotter Sun and the eventual collapse of our biosphere.

Image: The dust jacket of the first edition of Brian Aldiss’ novel Hothouse.

Indeed, warming over the next billion years will inevitably affect the carbon-silicate cycle, which regulates atmospheric carbon dioxide through a process that carries CO2 from rainfall into ocean sediments, through their subduction into the mantle, and back to the atmosphere by way of volcanism. Scientists have long thought that the warming Sun will cause CO2 to be drawn out of the atmosphere at rates sufficient to starve land plants, spelling an end to habitability. That long afternoon of Earth, though, may be longer than we have hitherto assumed.

A new study not only questions whether CO2 starvation is the greatest threat but also extends the projected lifetime of a habitable Earth far beyond the generally cited one billion years. The scientists involved apply ‘global mean models,’ which help to analyze how vegetation affects the carbon cycle. Lead author Robert Graham (University of Chicago), working with colleagues at Israel’s Weizmann Institute of Science, is attempting to better understand the mechanisms of plant extinction. Their new constraints on silicate weathering point to the conclusion that the terrestrial biosphere will eventually succumb to temperatures near runaway greenhouse conditions. The biosphere dies from simple overheating rather than CO2 starvation.

The implications are intriguing and offer fodder for a new generation of science fiction writers working far-future themes. For in the authors’ models, the lifespan of our biosphere may be almost twice as long as has been previously expected. Decreases in plant productivity act to slow and eventually (if only temporarily) reverse the future decrease in CO2 as the Sun continues to brighten.

Here’s the crux of the matter: Rocks undergo weathering as CO2-laden rainwater carrying carbonic acid reacts with silicate minerals, part of the complicated process of sequestering CO2 in the oceans. The authors’ models show that if this process of silicate weathering is only weakly dependent on temperature – so that even large temperature changes have comparatively little effect – or strongly CO2 dependent, then “…progressive decreases in plant productivity can slow, halt, and even temporarily reverse the expected future decrease in CO2 as insolation continues to increase.”

From the paper:

Although this compromises the ability of the silicate weathering feedback to slow the warming of the Earth induced by higher insolation, it can also delay or prevent CO2 starvation of land plants, allowing the continued existence of a complex land biosphere until the surface temperature becomes too hot. In this regime, contrary to previous results, expected future decreases in CO2 outgassing and increases in land area would result in longer lifespans for the biosphere by delaying the point when land plants overheat.
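
To make the ‘weak temperature dependence’ idea concrete, here is a minimal sketch using a Walker-style weathering parameterization common in the literature. This is my own illustration, not the specific formulation used by Graham, Halevy & Abbot: the weathering rate scales with CO2 through an exponent beta and with temperature through an e-folding scale T_e, and a large T_e (weak temperature dependence) means warming alone does little to speed up CO2 drawdown.

```python
# Illustrative only: a Walker-et-al.-style silicate weathering law, not the
# specific formulation used by Graham, Halevy & Abbot.
import math

def weathering_rate(pco2, temp_k, pco2_0=280e-6, t0=288.0, beta=0.3, t_e=20.0):
    """Relative silicate weathering rate (dimensionless).

    pco2   : atmospheric CO2 partial pressure (bar)
    temp_k : global mean surface temperature (K)
    beta   : CO2 dependence exponent (larger = more strongly CO2 dependent)
    t_e    : e-folding temperature (K); larger = weaker temperature dependence
    """
    return (pco2 / pco2_0) ** beta * math.exp((temp_k - t0) / t_e)

# Same scenario (10 K of warming, CO2 halved) under strong vs. weak
# temperature dependence of weathering.
for t_e in (10.0, 40.0):
    w = weathering_rate(pco2=140e-6, temp_k=298.0, t_e=t_e)
    print(f"T_e = {t_e:4.0f} K -> relative weathering rate = {w:.2f}")
```

In the weakly temperature-dependent case, warming does little to speed up weathering on its own, so the drop in weathering that comes with declining plant productivity can dominate and the CO2 drawdown slows. That, in rough outline, is the regime the authors explore.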

How much heat can plants take? The paper cites a grass called Dichanthelium lanuginosum that grows in geothermal settings (with the aid of a symbiotic relationship with a fungus) as holding the record for survival, at temperatures as high as 338 K. The authors take this as the upper temperature limit for plants, adding this:

Importantly, with a revised thermotolerance limit for vascular land plants of 338 K, these results imply that the biotic feedback on weathering may allow complex land life to persist up to the moist or runaway greenhouse transition on Earth (and potentially Earth-like exoplanets). (Italics mine)

The long afternoon of Earth indeed. The authors point out that the adaptation of land plants (Aldiss’ continent-spanning tree, for example) could push their extinction to even later dates, limited perhaps by the eventual loss of Earth’s oceans.

…an important implication of our work is that the factors controlling Earth’s transitions into exotic hot climate states could be a primary control on the lifespan of the complex biosphere, motivating further study of the moist and runaway greenhouse transitions with 3D models. Generalizing to exoplanets, this suggests that the inner edge of the “complex life habitable zone” may be coterminous with the inner edge of the classical circumstellar habitable zone, with relevance for where exoplanet astronomers might expect to find plant biosignatures like the “vegetation red edge” (Seager et al. 2005).

The paper is Graham, Halevy & Abbot, “Substantial extension of the lifetime of the terrestrial biosphere,” accepted at Planetary Science Journal (preprint).

Beamed Propulsion and Planetary Security

Power beaming to accelerate a ‘lightsail’ has been pondered since the days when Robert Forward became intrigued with nascent laser technologies. The Breakthrough Starshot concept has been to use a laser array to drive a fleet of tiny payloads to a nearby star, most likely Proxima Centauri. It’s significant that a crucial early decision was to place the laser array that would drive such craft on the Earth’s surface rather than in space. You would think that a space-based installation would have powerful advantages, but two immediate issues drove the choice, the first being political.

The politics of laser beaming can be complicated. I’m reminded of the obligations involved in what is known as the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies (let’s just call it the Outer Space Treaty), a reminder spurred by a paper from Adam Hibberd that has just popped up on arXiv. The treaty, administered by the United Nations Office for Outer Space Affairs, entered into force in 1967 and has 115 signatories globally.

Here’s the bit relevant for today’s discussion, as quoted by Hibberd (Institute for Interstellar Studies, London):

States Parties to the Treaty undertake not to place in orbit around the earth any objects carrying nuclear weapons or any other kinds of weapons of mass destruction, install such weapons on celestial bodies, or station such weapons in outer space in any other manner. The moon and other celestial bodies shall be used by all States Parties to the Treaty exclusively for peaceful purposes. The establishment of military bases, installations and fortifications, the testing of any type of weapons and the conduct of military manoeuvres on celestial bodies shall be forbidden. The use of military personnel for scientific research or for any other peaceful purposes shall not be prohibited. The use of any equipment or facility necessary for peaceful exploration of the moon and other celestial bodies shall also not be prohibited.

So we’re ruling out weaponry in orbit or elsewhere in space. Would that prohibit building an enormous laser array designed for space exploration? Hibberd believes a space laser would be permitted if its intention were for space exploration or planetary defense, but you can see the problem: Power beaming at this magnitude can clearly be converted into a weapon in the wrong hands. And what a weapon. A 10 km X 10 km installation as considered in Philip Lubin’s DE-STAR 4 concept generates 70 GW beams. You can do a lot with that beyond pushing a craft to deep space or taking an Earth-threatening asteroid apart.

Build the array on Earth and the political entanglements do not vanish, but they perhaps become manageable as attention shifts to practical matters such as avoiding commercial airliners and limiting the effects on wildlife and the environment.


Image: Pushing a lightsail with beamed energy is a feasible concept capable of being scaled for a wide variety of missions. But where do we put the beamer? Credit: Philip Lubin / UC-Santa Barbara.

The second factor in the early Starshot discussions was time. Although the effort has now slowed as its team looks at near-term applications for the technologies examined so far, Starshot was initially ramping up for a deployment by mid-century. That’s pretty ambitious, and if that stretchiest of all stretch goals became a prerequisite, there would be no time to develop a space-based option for the beamer.

So if we ease the schedule and assume we have the rest of the century or more to play with, we can again examine laser facilities off-planet. Moreover, Starshot is just one beamer concept, and we can back away from its specifics to consider an overall laser infrastructure. Hibberd’s choice is the DE-STAR framework (Directed Energy Systems for Targeting of Asteroids and Exploration) developed by Philip Lubin at UC-Santa Barbara and first described in a 2012 paper on planetary defense. The concept has appeared in numerous papers since, especially 2016’s “A Roadmap to Interstellar Flight.”

If the development of these ideas intrigues you, let me recommend Jim Benford’s A Photon Beam Propulsion Timeline, published here in 2016, as well as Philip Lubin’s DE-STAR and Breakthrough Starshot: A Short History, also from these pages.

What Hibberd sets out to do in his new paper is work out how far away various categories of laser system would have to be to ensure the safety of our planet. This leads to a sequence of calculations defining different safe distances depending on the size of the installation. The DE-STAR concept is modular: a square phased array of lasers in which each step up the scale multiplies the side length of the array, in meters, by another power of ten. In other words, while DE-STAR 0 is 1 meter to the side, DE-STAR 1 goes to 10 meters to the side, and so on. Hibberd presents a chart of the system as Table 1 in his paper.

Keep scaling up and you achieve arrays of stupendous size, and in fact an early news release from UC-Santa Barbara described a DE-STAR 6 as a propulsion system for a 10-ton interstellar craft. It’s hard to imagine the 1,000 kilometer array this would involve, although I’m sure Robert Forward would have enjoyed the idea.

So taking Lubin’s DE-STAR as the conceptual model (and sticking with the more achievable lower end of the DE-STAR scale), how can we lower the risks of this kind of array being used as a weapon? And that translates into: Where can we put an array so that even its largest iterations are too far from Earth to cause concern?

Hibberd’s calculations involve determining the minimum level of flux generated by an individual 1-meter-aperture laser element (this is DE-STAR 0) – “the unphased flux of any DE-STAR n laser system” – and adopting, as the criterion for a theoretical minimum safe distance from Earth, a flux on the order of 10 percent of the solar constant at Earth, meaning the average electromagnetic radiation per unit area received at the surface. The solar constant value is 1361 watts per square meter (W/m²); Hibberd pares the threshold down to a maximum allowed flux of 100 W/m² and proceeds accordingly.

Now the problems of a space-based installation become strikingly apparent, for the calculations show that DE-STAR 1 (10 m X 10 m) would need to be positioned outside cis-lunar space to ensure these standards, and even further away (beyond the Earth-Moon Lagrange 2 point) for ultraviolet wavelengths (λ ≲ 350nm). That takes us out 450,000 kilometers from Earth. However, a position at the Sun-Earth L2 Lagrange location would be safe for a DE-STAR 1 array.

The numbers mount quickly, and we also have to take account of stability. The Sun/Earth Lagrange 4 and 5 points would allow a DE-STAR 2 laser installation to remain at a fixed location without on-board propulsion. DE-STAR 3 would have to be positioned beyond the asteroid belt, or even beyond Jupiter if we take ultraviolet wavelengths into account. The enormous DE-STAR 4 level array would need to be placed as far as 70 AU away.
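
To keep the scaling and the placements straight, here is a small summary sketch in Python, mine rather than Hibberd’s code. The flux threshold follows the numbers given above, and the placements in the dictionary simply restate the results quoted in this post rather than recomputing Hibberd’s calculations.

```python
# Orientation aid only: restates numbers from this post, not Hibberd's analysis.
SOLAR_CONSTANT = 1361.0                 # W/m^2 at Earth's distance from the Sun
flux_threshold = 0.1 * SOLAR_CONSTANT   # ~136 W/m^2, order-of-magnitude criterion
ADOPTED_LIMIT = 100.0                   # W/m^2, the maximum allowed flux Hibberd adopts

def destar_side_m(n: int) -> float:
    """Side length of a DE-STAR n array: 10**n meters (DE-STAR 0 = 1 m)."""
    return 10.0 ** n

# Minimum safe placements as quoted in this post (optical case unless noted).
safe_placement = {
    1: "outside cis-lunar space (~450,000 km for ultraviolet wavelengths)",
    2: "Sun-Earth L4/L5, roughly 1 AU from Earth",
    3: "beyond the asteroid belt (beyond Jupiter for ultraviolet)",
    4: "as far as ~70 AU from Earth",
}

print(f"10% of the solar constant is {flux_threshold:.0f} W/m^2; "
      f"Hibberd pares this to {ADOPTED_LIMIT:.0f} W/m^2")
for n in range(5):
    note = safe_placement.get(n, "not addressed in this post")
    print(f"DE-STAR {n}: {destar_side_m(n):>10,.0f} m per side -> {note}")
```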

All this assumes we are working with an array on direct line of sight with the Earth, but this does not have to be the case. Let me quote Hibberd on this, as it’s rather interesting:

Two such locations are the Earth/Moon Lagrange 2 point (on a line from the Earth to the Moon, extending beyond the Moon by ∼61,000 km) and the Sun/Earth Lagrange 3 point (at 1 au from the Sun and diametrically opposite the Earth as it orbits the Sun). In both cases, the instability of these points will result in the DE-STAR wandering away and potentially becoming visible from Earth, so an on-board propulsion would be needed to prevent this. One solution would be to use the push-back from the lasers to provide a means of corrective propulsion. However it would appear a DE-STAR’s placement at either of these points is not an entirely satisfactory solution to the problem.

So we can operate with on-board propulsion to achieve no direct line-of-sight to Earth, but the orbital instabilities involved make this problematic. Achieving the goal of a maximum safe flux at Earth isn’t easy, and we’re forced to place even DE-STAR 2 arrays at least 1 AU from the Sun at the Sun/Earth Lagrange 4 or 5 positions to achieve stable orbits. DE-STAR 3 demands movement beyond the asteroid belt at a minimum. DE-STAR levels beyond this will require new strategies for safety.

Back to the original surmise. Even if we had the technology to build a DE-STAR array in space in the near future, safety constraints dictate that it be placed at large distances from the Earth, making it necessary to have first developed an infrastructure within the Solar System that could support such a project. Rather than one-off missions launched from Earth before such an infrastructure is in place, we’ll need the ability to move freely at distances that ensure safety, unless other means of planetary protection can be found. Hibberd doesn’t speculate as to what these might be, but somewhere down the line we’re going to need solutions for this conundrum.

The paper is Hibberd, “Minimum Safe Distances for DE-STAR Space Lasers,” available as a preprint. Philip Lubin’s “A Roadmap to Interstellar Flight” appeared in Journal of the British Interplanetary Society 69, 40-72 (2016). Full text.

All the Light We Can See

I’ve reminisced before about crossing Lake George in the Adirondacks in a small boat late one night some years back, when I saw the Milky Way with the greatest clarity I had ever experienced. Talk about dark skies! That view was not only breathtaking on its own, but it also raised the question of what we can see, and from where. Ponder the cosmic optical background (COB), which sums up everything that has produced light over the history of the universe. This summed light can be observed with even a small telescope, but the problem is to screen out local sources. No telescope is better placed to do just this than the Long Range Reconnaissance Imager (LORRI) aboard the New Horizons spacecraft.

Deep in the Kuiper Belt almost 60 AU from the Sun, the craft has a one-way light time of over eight hours (Voyager 1, by comparison, shows a one-way light time of almost 23 hours at 165 AU). It’s heartening that we’re continuing to keep the Voyagers alive even as their options slowly diminish, but New Horizons is still robust and returning data from numerous instruments. No telescope anywhere sees skies as dark as those LORRI sees. That makes its measurements of the COB as authoritative as anything we’re likely to get anytime soon.
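
Those delays are easy to check: light crosses 1 AU in about 499 seconds, so multiplying the distance in AU by 499 gives the one-way delay in seconds. A quick check in Python:

```python
# One-way light travel time from heliocentric distance in AU.
LIGHT_TIME_PER_AU_S = 499.0  # seconds for light to cross 1 AU

for name, dist_au in (("New Horizons", 60.0), ("Voyager 1", 165.0)):
    hours = dist_au * LIGHT_TIME_PER_AU_S / 3600.0
    print(f"{name}: {dist_au:.0f} AU -> ~{hours:.1f} hours one way")
```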

Image: Not my view from the Adirondacks but close. The Milky Way is gorgeous when unobscured by city lights. Credit: Derek Rowley.

The issue of background light came to the fore in 2021, when scientists at the National Science Foundation’s NOIRLab put data from New Horizons’ 20.8 cm telescope to work. That effort involved measuring the light found in a small group of images drawn from deep in the cosmos. It suggested a universe that was brighter than it should be, as if there were uncounted sources of light. Now we have further analysis of observations made with LORRI in 2023, supplemented by data from ESA’s Planck mission, which aids in calibrating the dust density in the chosen fields of view. We learn that contamination from the Milky Way can explain the anomaly.

The new paper from lead author Marc Postman (Space Telescope Science Institute) studies light from 16 different fields carefully chosen to minimize the foreground light of our own galaxy which, of course, surrounds us and compromises our view. This new work, rather than relying on archival data taken for other purposes, explicitly uses LORRI to create images minimizing foreground light sources. The conclusion is evidently airtight, as laid out by Postman:

At the outset of this work we posed the question: Is the COB intensity as expected from our census of faint galaxies, or does the Universe contain additional sources of light not yet recognized? With our present result, it appears that these diverse approaches are converging to a common answer. Galaxies are the greatly dominant and perhaps even complete source of the COB. There does remain some room for interesting qualifications and adjustments to this picture, but in broad outline it is the simplest explanation for what we see.

And let me throw in this bit from the conclusion of the paper because it adds an interesting dimension to the study:

If our present COB intensity is correct, however, it means that galaxy counts, VHE γ-ray extinction, and direct optical band measurements of the COB intensity have finally converged at an interesting level of precision. There is still room to adjust the galaxy counts slightly, or to allow for nondominant anomalous intensity sources.

In other words, to fully analyze the COB, the scientists have included VHE (very high energy) gamma-ray extinction, meaning adjustments for the attenuation of gamma rays as they travel to us. Although gamma rays are not themselves visible at optical wavelengths, they interact with the photons of the COB in ways that can be measured, providing an independent check on the rest of the COB data. That analysis complements the count of known galaxies and the optical band measurements to produce the conclusion now achieved.

I always find it interesting that solving a mystery brings both deep satisfaction and a slight letdown, for let’s face it, odd things in the universe are fascinating, and they let our imaginations run wild. In this case, however, the issue seems resolved.

I don’t have to mention to this audience how much good science continues to get done by having a fully functioning probe this deep in the Kuiper Belt. From New Horizons’ vantage point, there is little to no effect from zodiacal light, which is the result of sunlight scattering off interplanetary dust. The latter is a key factor in the brightness of the sky in the inner Solar System and has made previous attempts to measure the COB from the inner system challenging. We now look ahead to New Horizons’ search for other Kuiper Belt Objects to explore and try to learn whether there is a second belt of debris beyond the known one, and thus between it and the inner Oort Cloud.

We’ll doubtless continue to find things that challenge our assumptions as we press on, a reminder that a successor to New Horizons and the Voyagers is still a matter of debate in terms of both mission design and funding. As to the cosmic optical background, we give up the unlikely but highly interesting prospect that any significant levels of light come from sources unknown to us. As the paper concludes: “…the simplest hypothesis appears to provide the best explanation of what we see: the COB is the light from all the galaxies within our horizon.”

The paper is Postman et al., “New Synoptic Observations of the Cosmic Optical Background with New Horizons,” The Astrophysical Journal Vol. 972, No. 1 (28 August 2024), 95 (full text). The 2021 paper is Lauer et al., “New Horizons Observations of the Cosmic Optical Background,” The Astrophysical Journal Vol. 906, No. 2 (11 January 2021), 77 (full text).

Green Mars: A Nanotech Beginning

I want to return to Mars this morning because an emerging idea on how to terraform it is in the news. The idea is to block infrared radiation from escaping into space by releasing into the atmosphere engineered dust particles about half as long as the wavelength of this radiation, which is centered around 22 and 10 μm. Block those escape routes and you open up the possibility of warming Mars far more efficiently than has previously been suggested. The paper on this work even suggests a SETI implication (!), but more about that in a moment.

Grad student Samaneh Ansari (Northwestern University) is lead author of the paper, working with, among others, Ramses Ramirez (University of Central Florida), whose investigations into planetary habitability and the nature of the habitable zone have appeared frequently in these pages (see, for example, Revising the Classical ‘Habitable Zone’). The engineered ‘nanorods’ at the heart of the concept could raise the surface temperature enough to allow survivability of microbial life, which would at least be a beginning to the long process of making the Red Planet habitable.

As opposed to using artificial greenhouse gases, a method that would require vast amounts of fluorine, which is scarce on the Martian surface, the nanorod approach takes advantage of the properties of the planet’s dust, which is lofted to high altitudes as an aerosol. The authors calculate, using the Mars Weather Research and Forecasting global climate model, that releasing 9-μm-long conductive nanorods made of aluminum, “not much smaller than commercially available glitter,” would provide the infrared blocking that natural dust cannot, while settling back to the surface more slowly once lofted to high altitude.

What stands out in the authors’ modeling is that their method is over 5,000 times more efficient than other methods of terraforming, and relies on materials already available on Mars. Natural dust particles, you would think, should warm the planet if released in greater quantities, but the result of doing so is actually to cool the surface even more. Let me quote the paper on this counter-intuitive (to me at least) result:

Because of its small size (1.5-μm effective radius), Mars dust is lofted to high altitude (altitude of peak dust mass mixing ratio, 15 to 25 km), is always visible in the Mars sky, and is present up to >60 km altitude (14–15). Natural Mars dust aerosol lowers daytime surface temperature [e.g., (16)], but this is due to compositional and geometric specifics that can be modified in the case of engineered dust. For example, a nanorod about half as long as the wavelength of upwelling thermal infrared radiation should interact strongly with that radiation (17).
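
The half-wavelength rule of thumb is easy to check against the numbers quoted above. Mars’ outgoing thermal infrared is centered near 22 and 10 μm, and half of the longer of those windows is about 11 μm, in the neighborhood of the 9-μm rods the authors model. A quick sketch (the rod length and wavelengths are the figures cited in this post; the rest is simple arithmetic):

```python
# Rough antenna-style check: a conductive rod interacts strongly with
# radiation whose wavelength is about twice its length.
THERMAL_IR_WAVELENGTHS_UM = (22.0, 10.0)   # peaks of Mars' outgoing IR, as cited here
PROPOSED_ROD_LENGTH_UM = 9.0               # nanorod length modeled by Ansari et al.

for wavelength in THERMAL_IR_WAVELENGTHS_UM:
    half = wavelength / 2.0
    print(f"lambda = {wavelength:4.1f} um -> half-wave length ~ {half:4.1f} um")

print(f"Proposed rods: {PROPOSED_ROD_LENGTH_UM} um, close to half the 22-um window")
```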

Edwin Kite (University of Chicago) is a co-author on the work:

“You’d still need millions of tons to warm the planet, but that’s five thousand times less than you would need with previous proposals to globally warm Mars. This significantly increases the feasibility of the project… This suggests that the barrier to warming Mars to allow liquid water is not as high as previously thought.”

Image: This is Figure 3 from the paper. Caption: The proposed nanoparticle warming method. Figure credit: Aaron M. Geller, Northwestern, Center for Interdisciplinary Exploration and Research in Astrophysics + IT-RCDS.

Strikingly, the effects begin to emerge quite quickly. Within months of the beginning of the process, atmospheric pressure rises by 20 percent as CO2 ice sublimes, creating a positive warming feedback. Note this from the paper:

On a warmed Mars, atmospheric pressure will further increase by a factor of 2 to 20 as adsorbed CO2 desorbs (35), and polar CO2 ice (36) is volatilized on a timescale that could be as long as centuries. This will further increase the area that is suitable for liquid water (6).

That said, we’re still not in range for creating a surface habitable by humans. We have to deal with barriers to oxygenic photosynthesis, including the makeup of the Martian sands, which are laden with potentially toxic levels of nitrates, and an atmosphere with little oxygen. Toxic perchlorates in the soil would require ‘bioremediation’ involving perchlorate-reducing bacteria, which yield molecular oxygen as a byproduct. We’re a long way from creating an atmosphere humans can breathe, but we’re in range of the intermediate goal of warming the surface, possibly enough to sustain food crops.

Addendum: I made a mistake above, soon caught by Alex Tolley. Let me insert his comment here to straighten out my mistake:

“… which are laden with potentially toxic levels of nitrates,”

I think you misinterpreted the sentence from the paper:

“…is not sufficient to make the planet’s surface habitable for oxygenic photosynthetic life: barriers remain (7). For example, Mars’ sands have ~300 ppmw nitrates (37), and Mars’ air contains very little O2, as did Earth’s air prior to the arrival of cyanobacteria. Remediating perchlorate-rich soil…”

300 ppm nitrates is very low and will not support much plant or bacterial life. [You want ~ 10,000 ppm ] That is why N and P are added to simulated Mars regolith when testing plant growth for farming or terraforming. IIRC, there have been suggestions of importing nitrogen from Titan to meet its needs on Mars.

Thanks for catching this, Alex!

From the paper:

Although nanoparticles could warm Mars… both the benefits and potential costs of this course of action are now uncertain. For example, in the unlikely event that Mars’ soil contains irremediable compounds toxic to all Earth-derived life (this can be tested with Mars Sample Return), then the benefit of warming Mars is nil. On the other hand, if a photosynthetic biosphere can be established on the surface of Mars, perhaps with the aid of synthetic biology, then that might increase the Solar System’s capacity for human flourishing. On the cost side, if Mars has extant life, then study of that life could have great benefits that warrant robust protections for its habitat. More immediately, further research into nanoparticle design and manufacture coupled with modeling of their interaction with the climate could reduce the expense of this method.

That’s a robust way forward, one the authors suggest could involve wind tunnel experiments at Mars pressure to analyze how both dust and nanomaterials are released from modeled Mars surfaces, from dusty flat terrain to the ice of the poles. Large eddy simulations (LES), which model larger flows such as winds and weather patterns, should be useful in learning how the proposed nanorods would disperse in the atmosphere, while local warming methods also demand consideration.

A question I had never thought to ask about terraforming was how long the effects can be expected to last, and indeed the authors point out how little is known about long-term sustainability. A 2018 paper on current loss rates in the Martian atmosphere suggests that it would take at least 300 million years to fully deplete the atmosphere. The big unknown here is the Martian ice, and what may lie beneath it:

…if the ground ice observed at meters to tens of meters depth is underlain by empty pore space, then excessive warming over centuries could allow water to drain away, requiring careful management of long-term warming. Subsurface exploration by electromagnetic methods could address this uncertainty regarding how much water remains on Mars deep underground.

Image: Will we ever get to this? The ‘nanorod’ approach could be the beginning. Credit: Daein Ballard, Wikimedia Commons CC BY-SA 3.0.

The SETI implication? Nanoparticle warming is efficient, so much so that we might expect other civilizations to use the technique. A potential technosignature emerges in the polarization of light: polarization occurs when light interacts with nanoparticles, aerosols, or dust in the atmosphere or with the planet’s magnetic field, so a terrestrial world with a magnetic field and an atmosphere conceivably laden with terraforming nanoparticles could show a distinctive signal. This would be an elusive signature to spot, but not outside the range of possibility.

In the absence of an active geodynamo to drive a magnetic field, Mars would not be a candidate for this kind of remote observation. But an exoplanet of terrestrial class with a magnetic field should, by these calculations, be a candidate for this kind of study.

The paper is Ansari et al., “Feasibility of keeping Mars warm with nanoparticles,” Science Advances Vol. 10, No. 32 (7 August 2024). Abstract / Preprint. Thanks to Centauri Dreams reader Ivan Vuletich for the pointer to this paper.

The ‘Freakish Radio Writings’ of 1924

Mars was a lively destination in early science fiction because of its proximity. When H. G. Wells needed a danger from outer space, The War of the Worlds naturally looked toward Mars, as a place close to Earth and one with the ability to provoke curiosity. Closely studied at opposition in 1877, Mars provoked in Giovanni Schiaparelli the prospect of a network of canals, surely feeding a civilization that might still be alive. No wonder new technologies turned toward the Red Planet as they became available to move beyond visible light and even attempt to make contact with its inhabitants.

All this comes to mind this morning because of an intriguing story sent along by my friend Al Jackson, whose work on interstellar propulsion is well known in these pages, as is his deep involvement with the Apollo program. Al had never heard of the incident described in the story. It occurred in 1924, when at another Martian opposition (an alignment that brings Earth and Mars as close together as they get in their roughly 26-month synodic cycle), the U. S. Navy imposed radio silence nationwide for five minutes once an hour from August 21 to 24. The plan: Allow observatories worldwide to listen for Martians.

Image: The cover of the Edgar Rice Burroughs novel that would have been on Mars enthusiasts’ shelves when the 1924 opposition occurred. Burroughs’ depiction of Mars was hugely popular in its day.

This was serious SETI for its day. A dirigible was launched from the U. S. Naval Observatory carrying radio equipment for these observations, with the capability of relaying its signals back to a laboratory on the ground. A military cryptographer was brought in to monitor the situation, as attested by a provocative New York Times headline from August 23 of that year: “Code Expert Ready for Message.; RADIO HEARS THINGS AS MARS NEARS US.”

All this was news to me too, and thus I was entranced by the new article, a Times essay from August 20 of this year written by Becky Ferreira. Because something indeed happened and was reported on August 28 of 1924, again in the Times: “SEEKS SIGN FROM MARS IN 38-FOOT RADIO FILM; Dr. Todd Will Study Photograph of Mysterious Dots and Dashes Recently Recorded.”

As Ferreira explains:

A series of dots and dashes, captured by an airborne antenna, produced a photographic record of “a crudely drawn face,” according to news reports. The tantalizing results and subsequent media frenzy inflamed the public’s imagination. It seemed as if Mars was speaking, but what was it trying to say?

“The film shows a repetition, at intervals of about a half hour, of what appears to be a man’s face,” one of the experiment’s leaders said days later.

You may recall that when Frank Drake began Project Ozma at Green Bank in 1960, he homed in on the nearby stars Tau Ceti and Epsilon Eridani. Relatively soon he got a strong signal, causing him to ponder whether detecting other civilizations might be easy if you just pointed your antenna and began to listen. But the signal turned out to be from an aircraft in the skies of West Virginia, an early SETI frustration. Radio frequency interference (RFI) remains a source of constant concern, as witness the brief stir caused by what appeared to be a signal from Proxima Centauri in 2019 observations but proved to be nothing of the kind.

I don’t think the 1960 RFI experience got much media play, if any, though Project Ozma itself received a certain degree of coverage. But the ‘face’ found in the Mars radio reception of 1924 would have caused newspaper readers in that year to recall Guglielmo Marconi’s 1920 claim that he had detected signals “sent by the inhabitants of other planets to the inhabitants of Earth.” This was an era bristling with the new exploration of radio wavelengths, which, if they could carry communications across a continent or an ocean, could surely carry a signal from one planet to another.

The interest was international, as another Times headline makes clear, this one from August 23, 1924: “RADIO HEARS THINGS AS MARS NEARS US; A 24-Tube Set in England Picks Up Strong Signals Made in Harsh Dots. VANCOUVER ALSO FAVORED At Washington the Translator of McLean Telegrams Stands by to Decode Any Message.”

Back to the ‘face’ found in the research effort on the American side of the Atlantic, dug out of data relayed from the dirigible. It was an astronomer named David Peck Todd who went to work with inventor Charles Francis Jenkins, using a radio from the National Electrical Supply Company designed to support troops in combat. Jenkins would use it to pick up any signals from Mars as detected by the airship. He had for his part built a ‘radio camera’ that would convert the radio data into optical flashes that would be imprinted on photographic paper, and it was within the result that what seemed to be a face emerged. But it was one that not everyone saw.

Jenkins himself was unimpressed, as I learned from a story titled “Freakish Radio writings on Mars Machine” that ran in the Daily News on August 27. Let me quote the small piece in its entirety:

C. Francis Jenkins, Washington inventor, is investigating to ascertain cause of a series of freakish writings received on his special machine designed to record any possible radio signals from Mars.

The film record shows an arrangement of dots and dashes and pictures resembling a human face.

“I do not think the results have anything to do with Mars,” Jenkins said.

A little more digging in the newspaper archives revealed that Jenkins told Associated Press reporters, as recorded in the Buffalo Evening Times that same day (“Radio Signals Shown on Films, Puzzle Savants at Capital”), that he thought the results came from radio frequency interference, saying that what appeared to be a face was “a freak which we can’t explain.” The image was indeed part of a repeating pattern recorded on Jenkins’ machine, but people were reading into it what they wanted to see.

Image: What remains of the 1924 ‘face on Mars’ detection, as captured through photography of the original paper roll produced by Jenkins in his lab. Credit: Yale University Library.

So where is the 38-foot long roll of photographic paper that caused the ‘detection’ of a face from Mars? The original, according to Ferreira’s research, seems to have been lost, but Yale University Library lists three images from its collection of materials on David Peck Todd under the title “Martian signals recorded by Jenkins.” So we have at least three photographs of Jenkins’ work, but to me at least, no face seems apparent.

Also in Buffalo, the Buffalo American ran a much longer piece titled “Astronomers Scan Mars To Discover Human Life” in its August 28, 1924 issue, one that looks at the broader question of studying Mars, though without mention of Jenkins’ work. It includes this interesting paragraph:

…perhaps Mars does see what is happening on the earth. If you were on Mars and looked at the earth you would see a star twice as large as Mars appears to Buffalo, as the earth is double Mars’ size. In the far distance on the same side of the sky would be the sun but it would only be two-thirds as large as it appears here. On the other wise would be discerned a huge mass of vapor 1,300 times as large as the earth. That would be Jupiter, which has not solidified yet. You would also see a couple of moons. They light Mars at night and are responsible for the tides on its oceans.

The Buffalo American article takes us right into the Barsoom of Edgar Rice Burroughs’ imagination, a series that by then had reached its fifth book, The Chessmen of Mars (1922). If you’re a hard-core Burroughs fan, you may remember the chess game (known on Mars as Jetan) in which humans play the role of the chess pieces and fight to the death (Burroughs loved chess). Despite the Buffalo American’s mention of oceans, even in John Carter’s day Barsoom was depicted as a place where water resources were rare and tightly controlled.

And just why study Mars in the first place? The newspaper article explains:

They want to know if the earth is the only celestial globe on which the Creator put human beings and if the planets and stars beyond were designed merely for the people on Earth to admire.

The Mars of the day was an extraordinary place. In researching this piece, I came across this from an article on the 1924 opposition by Rowland Thomas that ran in the St. Louis Post-Dispatch:

For some time astronomers all over the world have been preparing to get close-ups of Mars with their telescopes. The observers at Lowell Observatory, Flagstaff, Ariz., where the late Percival Lowell carried on his lifelong study of the planet which confirmed his belief that intelligent life exists on it, reported that on the southern hemisphere of Mars, where the polar ice cap is now melting under the rays of what is there a spring-tide sun, vast areas of what may be continents, marshland, prairies and the beds of dried-up oceans are constantly changing in appearance.

Image: Mars as conceived by astronomer Percival Lowell (1855-1916) and discussed by him in three books: Mars (1895), Mars and Its Canals (1906), and Mars As the Abode of Life (1908). The canals are here shown filled, with the vegetation in vigorous growth. Painting by H. Seppings Wright (1850-1937).

We’re still fourteen years from Orson Welles’ “War of the Worlds” broadcast in October of 1938, which gave plenty of time for early SETI interest to grow along with magazine science fiction. In the US the genre began in the pages of Hugo Gernsback’s radio magazines, starting with The Electrical Experimenter and moving on to Science and Invention, but it would soon claim its own dedicated title in Amazing Stories, whose first issue appeared in April of 1926. A nod as well to a sprinkling of earlier SF stories in Street & Smith’s pulp The Thrill Book.

Ferreira’s article is terrific, and I’m glad to hear that she is working on a book on SETI. It took Mariner 4’s flyby in 1965 to finally demonstrate what the surface of Mars was really like, and by then the interstellar SETI effort was just beginning to get attention. I wonder how the Mars enthusiasts of 1924 would have reacted to the news that despite the SETI efforts of the ensuing 100 years, we still have no proof of intelligence or indeed life of any kind on another world?

Pumping Energy into the Solar Wind

The solar wind is ever enticing, providing as it does a highly variable stream of charged particles moving out from the Sun at speeds up to 800 kilometers per second. Finding ways to harness that energy for propulsive purposes is tricky, although a good deal of work has gone into designs like magsails, where a loop of superconducting wire creates the magnetic field needed to interact with this ‘wind.’ But given its ragged variability, the sail metaphor makes us imagine a ship pummeled by gusts of varying intensity, constantly adjusting sail to maintain course and stability. And it’s hard to keep the metaphor working when we factor in solar flares or coronal mass ejections.

We can lose the superconducting loop if we create a plasma cloud of charged particles around the craft for the same purpose. Or maybe we can use an electric ‘sail,’ enabled by long tethers that deflect solar wind ions. All of these ideas cope with a solar wind that, near the Sun, may be moving at tens of kilometers per second but accelerating rapidly with distance, so that it can reach its highest speeds at 10 solar radii and more. Different conditions in the corona can produce major variations in these velocities.

Obviously it behooves us to learn as much as we can about the solar wind even as we continue to investigate less turbulent options like solar sails (driven by photon momentum) and their beam-driven lightsail cousins. A new paper in Science is a useful step in nailing down the process of how the solar wind is energized once it has left the Sun itself. The work comes out of the Smithsonian Astrophysical Observatory (SAO), which is part of the Center for Astrophysics | Harvard & Smithsonian (CfA), and it bores into the question of ‘switchbacks’ in the solar wind that have been thought to deposit energy.

At the heart of the process are Alfvén waves, named after Hannes Alfvén (1908-1995), the Nobel-winning Swedish scientist and engineer who founded the discipline known as magnetohydrodynamics: the study of how magnetic fields and electrically conducting fluids such as plasmas interact. Alfvén waves move along magnetic field lines, imparting energy and momentum that nourish the solar wind. Kinks in the magnetic field known as ‘switchbacks’ are crucial here. These sudden deflections of the magnetic field quickly snap back to their original position. Although not fully understood, switchbacks are thought to be closely involved with the Alfvén wave phenomenon.
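
For a rough sense of the speeds involved, the Alfvén speed, the speed at which these waves travel along the field, is v_A = B / √(μ₀ρ). The sketch below plugs in textbook order-of-magnitude values for the solar wind near 1 AU (my illustrative assumptions, not measurements from the study) and lands at a few tens of kilometers per second. Near the Sun, where the field is far stronger, the Alfvén speed is much higher.

```python
# Illustrative Alfvén speed for typical solar wind conditions near 1 AU.
# The field strength and density below are textbook order-of-magnitude values,
# not measurements from the Parker Solar Probe / Solar Orbiter study.
import math

MU_0 = 4.0e-7 * math.pi        # vacuum permeability, T*m/A
PROTON_MASS = 1.67e-27         # kg

b_field_t = 5e-9               # ~5 nT interplanetary magnetic field at 1 AU
n_protons_m3 = 5e6             # ~5 protons per cm^3

rho = n_protons_m3 * PROTON_MASS                 # mass density, kg/m^3
v_alfven = b_field_t / math.sqrt(MU_0 * rho)     # m/s

print(f"Alfvén speed ~ {v_alfven / 1e3:.0f} km/s")  # roughly 50 km/s
```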

Image: Artist’s illustration of the solar wind flowing from the Sun measured by Parker Solar Probe near the edge of the corona and later with Solar Orbiter at a larger distance during a spacecraft alignment. The solar wind contains magnetic switchbacks, or large amplitude magnetic waves, near Parker Solar Probe that disappear farther from the Sun where Solar Orbiter is located. Credit: Image background: NASA Goddard/CIL/Adriana Manrique Gutierrez, Spacecraft images: NASA/ESA.

Data from two spacecraft have now clarified the role of these switchbacks. The Parker Solar Probe readily detected them in the solar wind, but data from ESA’s Solar Orbiter mission added crucial context. The two craft, one designed to penetrate the solar corona, the other working at larger distances, came into alignment in February of 2022 so that they observed the same solar wind stream within the span of two days. CfA’s Samuel Badman is a co-author of the study:

“We didn’t initially realize that Parker and Solar Orbiter were measuring the same thing at all. Parker saw this slower plasma near the Sun that was full of switchback waves, and then Solar Orbiter recorded a fast stream which had received heat and with very little wave activity. When we connected the two, that was a real eureka moment.”

So we had a theoretical process of energy movement through the corona and the solar wind in which Alfvén waves transported energy, but now we have data charting the interaction of the waves with the solar wind over time. The authors argue that the switchback phenomenon pumps enough energy into heating and acceleration to drive the fastest streams of the solar wind. Indeed, John Belcher (MIT), not a part of the study, considers this a ‘classic paper’ that demonstrates the fulfillment of one of the Parker Solar Probe’s main goals.

Such work has ramifications that will be amplified over time as we continue to investigate the environment close to the Sun and the solar wind that grows out of it. The findings will help clarify how future craft might be designed to take advantage of solar wind activity, and they will also provide insights into the behavior of any sailcraft we send on close solar passes to achieve high outbound velocities to the outer system. Always bear in mind that heliophysics plays directly into our thinking about the system’s outer edges and the evolution of spacecraft designed to explore them.

The paper is Rivera et al., “In situ observations of large-amplitude Alfvén waves heating and accelerating the solar wind,” Science Vol 385, Issue 6712 (29 August 2024), p. 962-966 (abstract).

Charter

In Centauri Dreams, Paul Gilster looks at peer-reviewed research on deep space exploration, with an eye toward interstellar possibilities. For many years this site coordinated its efforts with the Tau Zero Foundation. It now serves as an independent forum for deep space news and ideas. In the logo above, the leftmost star is Alpha Centauri, a triple system closer than any other star, and a primary target for early interstellar probes. To its right is Beta Centauri (not a part of the Alpha Centauri system), with Beta, Gamma, Delta and Epsilon Crucis, stars in the Southern Cross, visible at the far right (image courtesy of Marco Lorenzi).

