The ‘Disintegrating Planet’ Factor

Using machine learning to provide an algorithmic approach to the abundant data generated by the Transiting Exoplanet Survey Satellite (TESS) has proven unusually productive. I’m looking at an odd object called TIC 400799224, as described in a new paper in The Astronomical Journal from Brian Powell (NASA GSFC) and team, a source that displays a sudden drop in brightness – 25% in just four hours – followed by a series of brightness variations. What’s going on here?

We’re looking at something that will have to be added to a small catalog of orbiting objects that emit dust; seven of these are presented in the paper, including this latest one. The first to turn up was KIC 12557548, whose 2012 discovery paper argued that the object was a disintegrating planet emitting a dust cloud, a model that was refined in subsequent analyses. K2-22b, discovered in 2015, showed similar features, with varying transit depths and shapes, although no signs of gas absorption.

In fact, the objects in what we can call our ‘disintegrating planet catalog’ are rather fascinating. WD 1145+017 is a white dwarf showing evidence of orbiting bodies emitting dust, each with a mass comparable to our Moon’s. These appear to be concentrations of dust rather than solid bodies. And another find, ZTF J0139+5245, may turn out to be a white dwarf orbited by extensive planetary debris.

So TIC 400799224 isn’t entirely unusual in showing variable transit depths and durations, a possibly disintegrating body whose transits may or may not occur when expected. But dig deeper and, the authors argue, this object may be in a category of its own. This is a widely separated binary system, the stars approximately 300 AU apart, and at this point it is not clear which of the two stars is the host to the flux variations. The light curve dips are found only in one out of every three to five transits.

All of this makes it likely that what is occulting the star is some kind of dust cloud. Studying the TESS data and following up with a variety of ground-based instruments, the authors make the case: One of the stars is pulsating with a 19.77 day period that is probably the result of an orbiting body emitting clouds of dust. The dust cloud involved is substantial enough to block between 37% and 75% of the star’s light, depending on which of the two stars is the host. But while the quantity of dust emitted is large, the periodicity of the dips has remained the same over six years of observation.

Image: An optical/near-infrared image of the sky around the TESS Input Catalog (TIC) object TIC 400799224 (the crosshair marks the location of the object, and the width of the field of view is given in arcminutes). Astronomers have concluded that the mysterious periodic variations in the light from this object are caused by an orbiting body that periodically emits clouds of dust that occult the star. Credit: Powell et al., 2021.

How is this object producing so much dust, and how does it remain intact, with no apparent variation in periodicity? The authors consider sublimation as a possibility but find that it doesn’t replicate the mass loss rate found in TIC 400799224. Also possible: A ‘shepherding’ planet embedded within the dust, although here we would expect more consistent light curves from one transit to the next. Far more likely is a series of collisions with a minor planet. Let me quote the paper on this:

A long-term (at least years) phase coherence in the dips requires a principal body that is undergoing collisions with minor bodies, i.e., ones that (i) do not destroy it, and (ii) do not even change its basic orbital period. The collisions must be fairly regular (at least 20-30 over the last 6 years) and occur at the same orbital phase of the principal body.

This scenario emerges, in the authors’ estimate, as the most likely:

Consider, for example, that there is a 100 km asteroid in a 20 day orbit around TIC 400799224. Further suppose there are numerous other substantial, but smaller (e.g., ≲1/10th the radius), asteroids in near and crossing orbits. Perhaps this condition was set up in the first place by a massive collision between two larger bodies. Once there has been such a collision, all the debris returns on the next orbit to nearly the same region in space. This high concentration of bodies naturally leads to subsequent collisions at the same orbital phase. Each subsequent collision produces a debris cloud, presumably containing considerable dust and small particles, which expands and contracts vertically, while spreading azimuthally, as time goes on. This may be sufficient to make one or two dusty transits before the cloud spreads and dissipates. A new collision is then required to make a new dusty transit.

Amateur astronomers may want to see what they can learn about this object themselves. The authors point out that it’s bright enough to be monitored by ‘modest-size backyard telescopes,’ allowing suitably equipped home observers to look for transits. Such transits should also show up in historical data, giving us further insights into the behavior of the binary and the dust cloud producing this remarkably consistent variation in flux. As noted, the object in question evidently remains intact.

Digression: I mentioned earlier how much machine learning has helped our analysis of TESS data. The paper makes this clear, citing, beyond TIC 400799224, finds such as:

  • several hundred thousand eclipsing binaries in TESS light curves;
  • a confirmed sextuple star system;
  • a confirmed quadruple star system;
  • many additional quadruple star system candidates;
  • numerous triple star system candidates;
  • “candidates for higher-order systems that are currently under investigation.”

Algorithmic approaches to light curves are becoming an increasingly valuable part of the exoplanet toolkit, about which we’ll be hearing a great deal more.

The paper is Powell et al., “Mysterious Dust-emitting Object Orbiting TIC 400799224,” The Astronomical Journal Vol. 162, No. 6 (8 December 2021). Full text.


Optimal Strategies for Exploring Nearby Stars

We’ve spoken recently about civilizations expanding throughout the galaxy in a matter of hundreds of thousands of years, a thought that led Frank Tipler to doubt the existence of extraterrestrials, given the lack of evidence of such expansion. But let’s turn the issue around. What would the very beginning of our own interstellar exploration look like, if we reach the point where probes are feasible and economically viable? This is the question Johannes Lebert examines today. Johannes obtained his Master’s degree in Aerospace at the Technische Universität München (TUM) this summer. He likewise did his Bachelor’s in Mechanical Engineering at TUM and was a visiting student in Aerospace Engineering at the Universitat Politècnica de València (UPV), Spain. He has worked at Starburst Aerospace (a global aerospace & defense startup accelerator and strategic advisory company) and AMDC GmbH (a defense-focused consultancy located in Munich). Today’s essay is based upon his Master’s thesis “Optimal Strategies for Exploring Nearby-Stars,” which was supervised by Martin Dziura (Institute of Astronautics, TUM) and Andreas Hein (Initiative for Interstellar Studies).

by Johannes Lebert

1. Introduction

Last year, when everything was shut down and people were advised to stay at home instead of going out or traveling, I ignored those recommendations by dedicating my master’s thesis to the topic of interstellar travel. More precisely, I tried to derive optimal strategies for exploring nearby stars. As a very early-stage researcher I was truly honored when Paul asked me to contribute to Centauri Dreams, and I want to thank him for this opportunity to share my thoughts on planning interstellar exploration from a strategic perspective.

Figure 1: Me, last year (symbolic image). Credit: hippopx.com.

As you are an experienced and interested reader of Centauri Dreams, I think it is not necessary to make you aware of the challenges and fascination of interstellar travel and exploration. I am sure you’ve already heard a lot about interstellar probe concepts, from gram-scale nanoprobes such as Breakthrough Starshot to huge spaceships like Project Icarus. Probably you are also familiar with suitable propulsion technologies, be it solar sails or fusion-based engines. I guess, you could also name at least a handful of promising exploration targets off the cuff, perhaps with focus on star systems that are known to host exoplanets. But have you ever thought of ways to bring everything together by finding optimal strategies for interstellar exploration? As a concrete example, what could be the advantages of deploying a fleet of small probes vs. launching only few probes with respect to the exploration targets? And, more fundamentally, what method can be used to find answers to this question?

In particular, the last question has been the main driver for this article. Before I started writing, I wondered what the most exciting result I could present to you might be, and concluded that the methodology itself is the most valuable contribution on the way towards interstellar exploration: once the idea is understood, you are equipped with all the tools you need to generate your own results and answer similar questions. That is why I decided to present a summary of my work here, addressing the original idea of Centauri Dreams (“Planning […] Interstellar Exploration”) more directly, instead of picking out a single result.

Below you’ll find an overview of this article’s structure to give you an impression of what to expect. Of course, there is no time to go into detail for each step, but I hope it’s enough to make you familiar with the basic components and concepts.

Figure 2: Article content and chapters

I’ll start from scratch by defining interstellar exploration as an optimization problem (chapter 2). Then, we’ll set up a model of the solar neighborhood and specify probe and mission parameters (chapter 3), before selecting a suitable optimization algorithm (chapter 4). Finally, we apply the algorithm to our problem and analyze the results (more generally in chapter 5, with implications for planning interstellar exploration in chapter 6).

But let’s start from the real beginning.

2. Defining and Classifying the Problem of Interstellar Exploration

We’ll start by stating our goal: We want to explore stars. Actually, it is star systems, because typically we are more interested in the planets potentially hosted by a star than in the star as such. From a more abstract perspective, we can look at the stars (or star systems) as a set of destinations that can be visited and explored. As we said before, in most cases we are interested in planets orbiting the target star, even more so if they might be habitable. Hence, there are star systems which are more interesting to visit (e. g. those with a high probability of hosting habitable planets) and others which are less attractive. Based on these considerations, we can assign each star system an “earnable profit” or “stellar score” from 0 to 1. The value 0 refers to the most boring star systems (though I am not sure if there are any boring star systems out there, so maybe it’s better to say “least fascinating”) and 1 to the most fascinating ones. The scoring can of course be adjusted depending on one’s preferences and extended by additional considerations and requirements. However, to keep it simple, let’s assume for now that each star system provides a score of 1, so we don’t distinguish between different star systems. With this in mind, we can draw a sketch of our problem as shown in Figure 3.

Figure 3: Solar system (orange dot) as starting point, possible star systems for exploration (destinations with score si) represented by blue dots

To earn the profit by visiting and exploring those destinations, we can deploy a fleet of space probes, which are launched simultaneously from Earth. However, as there are many stars to be explored and we can only launch a limited number of probes, one needs to decide which stars to include and which ones to skip – otherwise, mission timeframes will explode. This decision is based on two criteria: mission return and mission duration. The mission return is simply the sum of the stellar scores of all visited stars. As we assume a stellar score of 1 for each star, the mission return equals the number of stars visited by all our probes together. The mission duration is the time needed to finish the exploration mission.

In case we deploy several probes, which carry out the exploration mission simultaneously, the mission is assumed to be finished when the last probe reaches the last star on its route – even if other probes have finished their route earlier. Hence, the mission duration is equal to the travel time of the probe with the longest trip. Note that the probes do not need to return to the solar system after finishing their route, as they are assumed to send the data gained during exploration immediately back to Earth.
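The two objectives are easy to express in code. Here is a minimal sketch (my own illustration, not code from the thesis) that computes mission return and mission duration for a candidate solution, where each probe’s route is given as the list of travel times of its successive legs:

```python
# Sketch: the two mission objectives for a fleet of probes.
# Each route is a list of leg travel times (in years); each leg ends at
# one explored star, so the number of legs equals the stars visited.

def mission_return(routes, score=1):
    """Sum of stellar scores over all visited stars (score 1 per star here)."""
    return sum(len(route) * score for route in routes)

def mission_duration(routes):
    """Time until the slowest probe reaches the last star on its route."""
    return max(sum(route) for route in routes)

# Two probes: one visits 3 stars, the other 2.
routes = [[43.0, 12.5, 20.0], [51.0, 8.0]]
print(mission_return(routes))    # 5 stars explored
print(mission_duration(routes))  # 75.5 years (set by the first probe)
```

Note that the duration is a max, not a sum, over probes: the probes fly simultaneously, and the mission ends when the last one finishes.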

Based on these considerations we can classify our problem as a bi-objective multi-vehicle open routing problem with profits. Admittedly quite a cumbersome term, but it contains all relevant information:

  • Bi-objective: There are two objectives, mission return and mission duration. Note that we want to maximize the return while keeping the duration minimal. Hence, from intuition we can expect that both objectives are competing: The more time, the more stars can be visited.
  • Multi-vehicle: Not only one, but several probes are used for simultaneous exploration.
  • Open: Probes are free to choose where to end their route and are not forced to return back to Earth after finishing their exploration mission.
  • Routing problem with profits: We consider the stars as a set of destinations with each providing a certain score si. From this set, we need to select several subsets, which are arranged as routes and assigned to different probes (see Figure 4).

Figure 4: Problem illustration: Identify subsets of possible destinations si, find the best sequences and assign them to probes

Even though it appears a bit stiff, the classification of our problem is very useful for identifying suitable solution methods: Before, we were talking about the problem of optimizing interstellar exploration, which is quite unknown territory with limited research. Now, thanks to our abstraction, we are facing a so-called Routing Problem, a well-known class of optimization problems with applications across various fields, and therefore exhaustively investigated. As a result, we now have access to a large pool of established algorithms which have already been tested successfully against these kinds of problems, or against closely related ones such as the Traveling Salesman Problem (probably the most popular) or the Team Orienteering Problem (a subclass of the Routing Problem).

3. Model of the Solar Neighborhood and Assumptions on Probe & Mission Architecture

Obviously, we’ll also need some kind of galactic model of our region of interest, which provides us with the relevant star characteristics and, most importantly, the star positions. There are plenty of star catalogues with different focus and historical background (e.g. Hipparcos, Tycho, RECONS). One of the latest, still ongoing surveys is the Gaia Mission, whose observations are incorporated in the Gaia Archive, which is currently considered to be the most complete and accurate star database.

However, the Gaia Archive – more precisely the Gaia Data Release 2 (DR2), which will be used here* (accessible online [1] together with Gaia-based distance estimations by Bailer-Jones et al. [2]) – provides only raw observation data, which includes some reported spurious results. For instance, it lists more than 50 stars closer than Proxima Centauri, which would be quite a surprise to all the astronomers out there.

* Note that there is already an updated data release (Gaia DR3), which was not yet available at the time of the thesis.

Hence, filtering is required to obtain a clean data set. The filtering procedure applied here, which consists of several steps, is illustrated in Figure 5 and follows the suggestions of Lindegren et al. [3]. For instance, data entries are eliminated based on parallax errors and uncertainties in BP and RP fluxes. The resulting model (after filtering) includes 10,000 stars and represents a spherical domain with a radius of roughly 110 light years around the solar system.
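To give a flavor of what such a filtering step looks like in practice, here is a small, self-contained sketch. The field names follow the Gaia DR2 column naming, but the thresholds are purely illustrative – they are not the exact Lindegren et al. criteria:

```python
# Illustrative quality cuts in the spirit of the filtering described above.
# Field names follow Gaia DR2 columns; thresholds are examples only.

LY_PER_PC = 3.2616  # light years per parsec

def passes_filter(star, max_dist_ly=110.0):
    if star["parallax"] <= 0:                        # unphysical parallax
        return False
    if star["parallax_over_error"] < 10:             # unreliable distance
        return False
    if star["phot_bp_mean_flux_over_error"] < 10:    # noisy BP photometry
        return False
    if star["phot_rp_mean_flux_over_error"] < 10:    # noisy RP photometry
        return False
    dist_pc = 1000.0 / star["parallax"]              # parallax given in mas
    return dist_pc * LY_PER_PC <= max_dist_ly        # keep the local bubble

catalog = [
    {"parallax": 768.5, "parallax_over_error": 1000,   # Proxima-like entry
     "phot_bp_mean_flux_over_error": 200, "phot_rp_mean_flux_over_error": 200},
    {"parallax": 5.0, "parallax_over_error": 3,        # spurious entry
     "phot_bp_mean_flux_over_error": 50, "phot_rp_mean_flux_over_error": 50},
]
print(sum(passes_filter(s) for s in catalog))  # 1 star survives the cuts
```

The real procedure works on millions of rows (typically via an ADQL query against the archive), but the logic per star is of this simple accept/reject kind.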

Figure 5: Setting up the star model based on Gaia DR2 and filtering (animated figure from [9])

To reduce the complexity of the model, we assume all stars to maintain fixed positions – which is of course not true (see Figure 5, upper right) but can be shown to be a valid simplification for our purposes – and we limit the mission time frames to 7,000 years. 7,000 years? Yes, unfortunately, the enormous stellar distances, which are probably the biggest challenge we encounter when planning interstellar travel, result in very high travel times – even if we are optimistic about the travel speed of our probes, which is defined below.

We’ll use a rather simplistic probe model based on literature suggestions, which has the advantage that the results are valid across a large range of probe concepts. We assume the probes to travel along straight-line trajectories (in line with Fantino & Casotto [4]) at an average velocity of 10% of the speed of light (in line with Bjørk [5]). They are not capable of self-replication; hence, the probe number remains constant during a mission. Furthermore, the probes are restricted to performing flybys instead of rendezvous, which limits the scientific return of the mission but is still good enough to detect planets (as reported by Crawford [6]). Hence, the considered mission can be interpreted as a reconnaissance or scouting mission, which serves to identify suitable targets for a follow-up mission, which then will include rendezvous and deorbiting for further, more sophisticated exploration.
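Under these assumptions the travel time for a single leg is just the straight-line distance divided by 0.1 c. Working in light years makes this almost trivial, since one light year per c is one year (the positions and helper below are my own illustration):

```python
import math

# Straight-line flyby travel time at 10% of light speed (the model
# assumption above). Positions are heliocentric Cartesian coordinates
# in light years, so distance / 0.1 gives the travel time in years.

def travel_time_years(p1, p2, v_frac_c=0.1):
    dist_ly = math.dist(p1, p2)   # Euclidean distance in light years
    return dist_ly / v_frac_c     # years, since 1 ly / c = 1 year

# Sun (origin) to a star 4.25 ly away (a Proxima-like distance):
print(travel_time_years((0, 0, 0), (3.0, 3.0, 0.25)))  # 42.5 years
```

Numbers like this make clear why the mission time frame had to be capped at millennia rather than decades.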

Disclaimer: I am well aware of the weaknesses of the probe and mission model, which does not allow for more advanced mission design (e. g. slingshot maneuvers) and assumes a very long-term operability of the probes, just to name two of them. However, to keep the model and results comprehensive, I tried to derive the minimum set of parameters which is required to describe interstellar exploration as an optimization problem. Any extensions of the model, such as a probe failure probability or deorbiting maneuvers (which could increase the scientific return tremendously), are left to further research.

4. Optimization Method

Having modeled the solar neighborhood and defined an admittedly rather simplistic probe and mission model, we finally need to select a suitable algorithm for solving our problem, or, in other words, to suggest “good” exploration missions (good means optimal with respect to both our objectives). In fact, the algorithm has the sole task of assigning each probe the best star sequences (so-called decision variables). But which algorithm could be a good choice?

Optimization or, more generally, operations research is a huge research field which has spawned countless more or less sophisticated solution approaches and algorithms over the years. However, there is no optimization method (yet) that works perfectly for all problems (“no free lunch theorem”) – which is probably the main reason why there are so many different algorithms out there. To navigate through this jungle, it helps to recall our problem class and focus on the algorithms used to solve equal or similar problems. Starting from there, we can further exclude some methods a priori by means of a first analysis of our problem structure: Considering n stars, there are n! possibilities to arrange them into one route, which can be quite a lot (just to give you a number: for n = 50 we obtain 50! ≈ 3·10^64 possibilities).
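A quick way to get a feeling for this combinatorial explosion (just an illustration, not part of the thesis):

```python
import math

# Search-space size for a single route visiting n stars: n! orderings.
for n in (10, 20, 50):
    print(f"{n} stars -> {math.factorial(n):.3e} possible routes")
```

Already at 20 stars the count exceeds 10^18; at 50 it passes 10^64, far beyond anything enumerable.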

Given that our model contains up to 10,000 stars, we cannot simply try out each possibility and take the best one (the so-called enumeration method). Instead, we need another approach, one more suitable for problems with a very large search space, as an operations researcher would say. Maybe you have already heard about (meta-)heuristics, which allow for more time-efficient solving but do not guarantee finding the true optimum. Even if you’ve never heard of them, I am sure you know at least one representative of a metaheuristic-based solution, as it is sitting in front of your screen right now as you are reading this article… Indeed, each of us is the result of a thousands-of-years-long, still ongoing optimization procedure called evolution. Wouldn’t it be cool if we could adopt the mechanisms that brought us here to take the next big step for mankind and find ways to leave the solar system and explore unknown star systems?

Those kinds of algorithms, which try to imitate the process of natural evolution, are referred to as Genetic Algorithms. Maybe you remember the biology classes at school, where you learned about chromosomes, genes and how they are shared between parents and their children. We’ll use the same concept and also the wording here, which is why we need to encode our optimization problem (illustrated in Figure 6): One single chromosome will represent one exploration mission and as such one possible solution for our optimization problem. The genes of the chromosome are equivalent to the probes. And the gene sequences embody the star sequences, which in turn define the travel routes of each probe.

If we are talking about a set of chromosomes, we use the term “population”, which is why a single chromosome is sometimes referred to as an individual. Furthermore, as the population will evolve over time, we will speak of different generations (just as for us humans).

Figure 6. Genetic encoding of the problem: Chromosomes embody exploration missions; genes represent probes and gene sequences are equivalent to star sequences.

The algorithm itself is fairly straightforward; the basic working principle of the Genetic Algorithm is illustrated below (Figure 7). Starting from a randomly created initial population, we enter an evolution loop, which stops either when a maximum number of generations is reached (one loop represents one generation) or when the population stops evolving and remains stable (convergence is reached).

Figure 7: High level working procedure of the Genetic Algorithm

I don’t want to go into too much detail on the procedure – interested readers are encouraged to go through my thesis [7] and look for the corresponding chapter or see relevant papers (particularly Bederina and Hifi [8], from where I took most of the algorithm concept). To summarize the idea: Just like in real life, chromosomes are grouped into pairs (parents) and create children (representing new exploration missions) by sharing their best genes (which are routes in our case). For higher variety, a mutation procedure is applied to a few children, such as a partial swap of different route segments. Finally, the worst chromosomes are eliminated (evolve population = “survival of the fittest”) to keep the population size constant.
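To make the loop concrete, here is a deliberately stripped-down sketch of such an evolutionary procedure. It is my own toy illustration, not the thesis code: the route-sharing crossover and repair steps of the real algorithm are omitted in favor of mutation and selection only, stars are random 2-D points, and the two objectives are scalarized into a single fitness value (“stars explored per year”) so chromosomes can be ranked:

```python
import math
import random

# Toy evolutionary loop for the routing problem (heavily simplified).
# A chromosome is a list of routes (one per probe); each route is a
# list of star indices. All probes start at the Sun at the origin.

def leg_time(a, b, v=0.1):
    return math.dist(a, b) / v            # years for one transfer at 0.1 c

def route_duration(route, stars):
    pos, t = (0.0, 0.0), 0.0
    for i in route:
        t += leg_time(pos, stars[i])
        pos = stars[i]
    return t

def fitness(chrom, stars):
    longest = max(route_duration(r, stars) for r in chrom)
    visited = sum(len(r) for r in chrom)
    return visited / longest              # stars explored per year

def random_chrom(n, m):
    order = random.sample(range(n), n)
    return [order[k::m] for k in range(m)]  # deal the stars out to m probes

def mutate(chrom):
    child = [r[:] for r in chrom]
    a, b = random.sample(range(len(child)), 2)
    if child[a]:                            # move one star to another probe
        child[b].append(child[a].pop(random.randrange(len(child[a]))))
    return child

def evolve(stars, m=3, pop_size=30, generations=200):
    pop = [random_chrom(len(stars), m) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, stars), reverse=True)
        survivors = pop[: pop_size // 2]    # "survival of the fittest"
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda c: fitness(c, stars))

random.seed(1)
stars = [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(20)]
best = evolve(stars)
print(sorted(i for route in best for i in route))  # every star appears once
```

Mutation only moves stars between probes, so every chromosome always covers each star exactly once; the real algorithm additionally exchanges whole routes between parents and repairs the resulting duplicates.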

Side note: Currently, we have the chance to observe this optimization procedure when looking at the coronavirus. It started almost two years ago with the alpha variant; right now the population is dominated by the delta variant, with omicron emerging. From the virus’s perspective, it has improved over time through replication and mutation, which is supported by large populations (i.e., a high number of cases).

Note that the genetic algorithm is extended by a so-called local search, which comprises a set of methods to improve routes locally (e. g. by inverting segments or swapping two random stars within one route). That is why this method is referred to as Hybrid Genetic Algorithm.
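As an illustration of such a local-search move, here is a small 2-opt sketch (my own example, not the thesis code): segments of a single route are inverted whenever the inversion shortens the route, until no improving inversion remains:

```python
import math

# Local-search polish for one route: 2-opt segment inversion.
# The route starts at the Sun at the origin; pts maps star index -> position.

def route_length(route, pts):
    path = [(0.0, 0.0)] + [pts[i] for i in route]
    return sum(math.dist(path[k], path[k + 1]) for k in range(len(path) - 1))

def two_opt(route, pts):
    improved = True
    while improved:
        improved = False
        for i in range(len(route) - 1):
            for j in range(i + 1, len(route)):
                # reverse the segment route[i..j] and keep it if shorter
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if route_length(cand, pts) < route_length(route, pts) - 1e-12:
                    route, improved = cand, True
    return route

pts = [(0, 10), (10, 0), (0, 20), (20, 0)]
print(two_opt([0, 1, 2, 3], pts))  # a shorter ordering of the same 4 stars
```

Moves like this cheaply clean up the zig-zags that crossover and mutation tend to leave behind, which is what earns the method its “Hybrid” prefix.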

Now let’s see how the algorithm is operating when applied to our problem. In the animated figure below, we can observe the ongoing optimization procedure. Each individual is evaluated “live” with respect to our objectives (mission return and duration). The result is plotted in a chart, where one dot refers to one individual and thus represents one possible exploration mission. The color indicates the corresponding generation.

Figure 8: Animation of the ongoing optimization procedure: Each individual (represented by a dot) is evaluated with respect to the objectives, one color indicates one generation

As shown in this animated figure, the algorithm seems to work properly: With increasing generations, it generates better solutions, optimizing towards higher mission return and lower mission duration (towards the upper left in Figure 8). Poor-quality solutions from earlier generations are subsequently replaced by better individuals.

5. Optimization Results

As a result of the optimization, we obtain a set of solutions (representing the surviving individuals from the final generation), which form a curve when evaluated with respect to our twin objectives of mission duration and return (see Figure 9). Obviously, we’ll get different curves when we change the probe number m between two optimization runs. In total, 9 optimization runs are performed; after each run the probe number is doubled, starting with m=2. As in the animated Figure 8, one dot represents one chromosome and thus one possible exploration mission (one mission is illustrated as an example).

Figure 9: Resulting solutions for different probe numbers and mission example represented by one dot

Already from this plot, we can make some first observations: The mission return (which, as a reminder, we assume equal to the number of explored stars) increases with mission duration. More precisely, there appears to be an approximately linear increase of star number with time, at least in most instances. This means that when doubling the mission duration, we can expect more or less twice the mission return. An exception to this behavior is the 512-probe curve, which flattens when reaching > 8,000 stars due to the model limits: In this region, only a few unexplored stars are left, which may require unfavorable transfers.

Furthermore, we see that for a given mission duration the number of explored stars can be increased by launching more probes, which is not surprising. We will elaborate a bit more on the impact of the probe number and on how it is linked with the mission return in a minute.

For now, let’s keep this in mind and take a closer look at the missions suggested by the algorithm. In the figure below (Figure 10), routes for two missions with different probe number m but similar mission return J1 (nearly 300 explored stars) are visualized (x, y, z axes in light years). One color indicates one route that is assigned to one probe.

Figure 10: Visualization of two selected exploration missions with similar mission return J1 but different probe number m – left: 256 available probes, right: 4 available probes (J2 is the mission duration in years)

Even though the mission return is similar, the route structures are very different: The higher-probe-number mission (left in Figure 10) is built mainly from very dense single-target routes and thus focuses more on the immediate solar neighborhood. The mission with only 4 probes (right in Figure 10), by contrast, contains more distant stars, as it consists of comparatively long, chain-like routes with several targets included. This is quite intuitive: While in the right case (few probes available) mission return is added by “hopping” from star to star, in the left case (many probes available) simply another probe is launched from Earth. Needless to say, the overall mission duration J2 is significantly higher when we launch only 4 probes (more than 6,000 years compared to roughly 500 years).

Now let’s look a bit closer at the corresponding transfers. As before, we’ll pick two solutions with different probe number (4 and 64 probes) and similar mission return (about 230 explored stars). But now, we’ll analyze the individual transfer distances along the routes instead of simply visualizing the routes. This is done by means of a histogram (shown in Figure 11), where simply the number of transfers with a certain distance is counted.

Figure 11: Histogram of transfer distances for two different solutions – orange bars belong to a solution with 4 probes, blue bars to a solution with 64 probes; both provide a mission return of roughly 230 explored stars.

The orange bars belong to a solution with 4 probes, the blue ones to a solution with 64 probes. To give an example of how to read the histogram: The solution with 4 probes includes 27 transfers with a distance of 9 light years, while the solution with 64 probes contains only 8 transfers of this distance. What we should take from this figure is that with higher probe numbers, more distant transfers are apparently required to provide the same mission return.
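Producing such a histogram from a set of routes is a matter of counting legs per distance bin; a small sketch with made-up leg distances (not the actual mission data):

```python
from collections import Counter

# Transfer-distance histogram as in Figure 11: count route legs per
# rounded light-year bin. Leg distances here are invented examples.

def distance_histogram(routes):
    legs = [round(d) for route in routes for d in route]
    return Counter(legs)

routes_4_probes = [[9.2, 8.8, 9.4], [12.1, 9.0], [5.2], [9.1, 3.9]]
hist = distance_histogram(routes_4_probes)
print(hist[9])  # 5 transfers fall into the 9 light-year bin
```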

Based on this result we can now concretize earlier observations regarding the impact of probe number: From Figure 9 we already found that the mission return increases with probe number, without being more specific. Now we have discovered that the routing efficiency of the exploration mission decreases with increasing probe number, as more distant transfers are required. We can even quantify this effect: After some further analysis of the result curve and a bit of math, we find that the mission return J1 scales with probe number m according to ~m^0.6 (at least in most instances). By incorporating the observations on linearity between mission return and duration (J2), we obtain the following relation: J1 ~ J2·m^0.6.

As J1 grows only with m^0.6 (remember that m^1 would indicate linear growth), the mission return for a given mission duration does not simply double when we launch twice as many probes. Instead, the gain is smaller; moreover, it depends on the current probe number – in fact, the contribution of additional probes to the overall mission return diminishes with increasing probe numbers.
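The effect of the m^0.6 scaling is easy to tabulate. In this sketch the prefactor k is a made-up constant, since only the ratios between probe counts matter:

```python
# Diminishing returns from the empirical scaling J1 ~ J2 * m**0.6:
# doubling the probe count multiplies the return by only 2**0.6 ≈ 1.52.

def mission_return_scaling(duration_years, m, k=1.0):
    """Illustrative fit J1 = k * J2 * m**0.6 (k is an arbitrary constant)."""
    return k * duration_years * m ** 0.6

base = mission_return_scaling(1000, 4)
for m in (4, 8, 16):
    gain = mission_return_scaling(1000, m) / base
    print(f"m={m:2d}: {gain:.2f}x the return of 4 probes")
# prints 1.00x, 1.52x, 2.30x: quadrupling the fleet far from quadruples the return
```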

This phenomenon is similar to the concept of diminishing returns in economics, which denotes the effect that an increase of the input yields a progressively smaller increase in output. How does that fit with earlier observations, e. g. on route structure? Apparently, we are running into some kind of crowding effect when we launch many probes from the same spot (namely our solar system): Long initial transfers are required to assign each probe an unexplored star. Obviously, this effect intensifies with each additional probe being launched.

6. Conclusions and Implications for Planning Interstellar Exploration

What can we take from all this effort and the results of the optimization? First, let’s recap the methodology and tools which we developed for planning interstellar exploration (see Figure 12).

Figure 12: Methodology – main steps

Beside the methodology, which of course can be extended and adapted, we can give some recommendations for interstellar mission design considerations, in particular regarding the probe number impact:

  • High probe numbers are favorable when we want to explore many stars in the immediate solar neighborhood. A further advantage of high probe numbers is that mostly single-target missions are performed, which allows the customization of each probe according to its target star (e. g. regarding scientific instrumentation).
  • If the number of available probes is limited (e. g. due to high production costs), it is recommended to include more distant stars, as this enables more efficient routing. The aspect of higher routing efficiency needs to be considered in particular when fuel costs are relevant (i. e. when fuel needs to be transported aboard). For other, remotely propelled concepts (such as laser-driven probes, e. g. Breakthrough Starshot) this issue is less relevant, which is why those concepts could be deployed in larger numbers, allowing for a shorter overall mission duration at the expense of more distant transfers.
  • When planning to launch a high number of probes from Earth, however, one should be aware of crowding effects. These set in even at low probe numbers and intensify with each additional probe. One option to counter this issue, and thus support more efficient probe deployment, could be swarm-based concepts, as indicated by the sketch in Figure 13.

    The swarm-based concept includes a mother ship, which transports a fleet of smaller explorer probes to a more distant star. After arrival, the probes are released and begin their actual exploration missions. As a result, the very dense, crowded route structures obtained when many probes are launched from the same spot (see again Figure 10, left plot) are broken up.

Figure 13: Sketch illustrating the beneficial effect of swarm concepts for high probe numbers.

Obviously, the results and derived implications for interstellar exploration are not mind-blowing, as they are mostly in line with what one would expect. However, this in turn indicates that our methodology works properly; that does not amount to a full verification, but it is at least an encouraging hint. A more reliable verification can be obtained by setting up a test problem with a known optimum (not shown here, but this was also done for this approach, showing that the algorithm's results deviate by about 10% from the ideal solution).

Given the very early-stage level of this work, there is still a lot of potential for further research and refinement of the simplistic models. To pick one example: as a next step, one could distinguish between star systems by varying the reward of each star system si based on a stellar metric that incorporates more information about the star (such as spectral class, metallicity, data quality, …). In the end it is up to each of us which questions we want to answer – there is more than enough inspiration up there in the night sky.
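A stellar-metric reward of the kind suggested here could take many forms. The following is a purely hypothetical sketch: the weighting scheme, the function name `star_reward`, and all numeric values are illustrative assumptions, not part of the published methodology.

```python
# Hypothetical sketch of a stellar-metric-based reward for a star system,
# as suggested in the text. All weights and functional forms below are
# illustrative assumptions only.

SPECTRAL_WEIGHT = {"G": 1.0, "K": 0.9, "F": 0.8, "M": 0.6}  # assumed preferences

def star_reward(spectral_class: str, metallicity: float, data_quality: float) -> float:
    """Combine a few stellar properties into a single scalar reward.

    metallicity is [Fe/H] in dex (0.0 = solar); data_quality is a 0..1
    measure of how well the star is characterized (e.g. from Gaia).
    """
    w = SPECTRAL_WEIGHT.get(spectral_class, 0.5)
    # Mildly favor near-solar metallicity and well-characterized stars.
    metal_term = max(0.0, 1.0 - abs(metallicity))
    return w * (0.5 + 0.5 * metal_term) * data_quality

# Example: a well-observed solar twin scores higher than a poorly
# characterized M dwarf.
print(star_reward("G", 0.0, 1.0), star_reward("M", -0.4, 0.6))
```

Plugging such a reward into the optimization would simply replace the uniform per-star reward; the routing machinery itself would be unchanged.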

Figure 14: More people, now

Assuming that you are not only an interested reader of Centauri Dreams but also familiar with other popular literature on the topic, you may have heard of Clarke's three laws. I would like to close this article by taking up his second one: "The only way of discovering the limits of the possible is to venture a little way past them into the impossible." As said before, I hope that the methodology introduced here can help to answer further questions concerning interstellar exploration from a strategic perspective. The more we know, the better we can plan and imagine interstellar exploration, gradually pushing the limits of what is considered possible today.

References

[1] ESA, “Gaia Archive,” [Online]. Available: https://gea.esac.esa.int/archive/.

[2] C. A. L. Bailer-Jones et al., “Estimating Distances from Parallaxes IV: Distances to 1.33 Billion Stars in Gaia Data Release 2,” The Astronomical Journal, vol. 156, 2018.
https://iopscience.iop.org/article/10.3847/1538-3881/aacb21

[3] L. Lindegren et al., “Gaia Data Release 2 – The astrometric solution,” Astronomy & Astrophysics, vol. 616, 2018.
https://doi.org/10.1051/0004-6361/201832727

[4] E. Fantino and S. Casotto, “Study on Libration Points of the Sun and the Interstellar Medium for Interstellar Travel,” Universitá di Padova/ESA, 2004.

[5] R. Bjørk, “Exploring the Galaxy using space probes,” International Journal of Astrobiology, vol. 6, 2007.
https://doi.org/10.1017/S1473550407003709

[6] I. A. Crawford, “The Astronomical, Astrobiological and Planetary Science Case for Interstellar Spaceflight,” Journal of the British Interplanetary Society, vol. 62, 2009. https://arxiv.org/abs/1008.4893

[7] J. Lebert, “Optimal Strategies for Exploring Near-by Stars,” Technische Universität München, 2021.
https://mediatum.ub.tum.de/1613180

[8] H. Bederina and M. Hifi, “A Hybrid Multi-Objective Evolutionary Algorithm for the Team Orienteering Problem,” 4th International Conference on Control, Decision and Information Technologies, Barcelona, 2017.
https://ieeexplore.ieee.org/document/8102710

[9] University of California – Berkeley, “New Map of Solar Neighborhood Reveals That Binary Stars Are All Around Us,” SciTech Daily, 22 February 2021.
https://scitechdaily.com/new-map-of-solar-neighborhood-reveals-that-binary-stars-are-all-around-us/


Probing the Likelihood of Panspermia

I’m looking at a paper just accepted at The Astrophysical Journal on the subject of panspermia, the notion that life may be distributed through the galaxy by everything from interstellar dust to comets and debris from planetary impacts. We have no hard data on this — no one knows whether panspermia actually occurs from one planet to another, much less from one stellar system to another. But we can investigate possibilities based on what we know of everything from the hardiness of organisms to the probabilities of ejecta moving on an interstellar trajectory.

In “Panspermia in a Milky Way-like Galaxy,” lead author Raphael Gobat (Pontificia Universidad Católica de Valparaíso, Chile) and colleagues draw together current approaches to the question and develop a modeling technique based on our assumptions about galactic habitability and simulations of galaxy structure.

Panspermia is an ancient concept. Indeed, the word first emerges in the work of Anaxagoras (born ca. 500–480 BC) and makes its way through Lucian of Samosata (born around 125 AD) and Kepler’s Somnium before re-emerging in 19th Century microbiology. Accidental propagation of life’s building blocks was considered by Swedish chemist Svante Arrhenius in the early 20th Century. Fred Hoyle and Nalin Chandra Wickramasinghe developed the idea still further in the 1970s and 80s.

So how do we approach a subject that has remained controversial, likely because it does not appear necessary in explaining how life emerged on our own Earth? As the paper notes, modern work falls into three distinct categories, the first involving whether or not microorganisms can survive ejection from a planetary surface and re-entry onto another. Remarkably, hypervelocity impacts are not show-stoppers for the idea, suggesting that a small fraction of spores could survive impact and transit.

As to timescale and kinds of transfer mechanisms, most work seems to have focused on mass transfer between planets in the same stellar system, usually through lithopanspermia, which is the exchange of meteoroids. It’s true, however, that transit between different stars has been investigated, looking at radiation pressure on small grains of material. There are even a few studies on whether or not a stellar system might be intentionally seeded by means of technology. The term here is directed panspermia, a subject more often treated in science fiction than academic circles.

Although not entirely. While directed panspermia is off the table for Gobat and colleagues, we’ll take a look in a month or so at what does appear in the literature. Some interesting ideas have emerged, but they’re not for today.

What Gobat and co-authors have in mind is to apply a model of galactic habitability they have developed (citation below) in conjunction with the hydrodynamic simulations of spiral galaxies found in the McMaster Unbiased Galaxy Simulations (MUGS), a set of 16 simulated galaxies developed within the last decade. On the latter, the paper notes:

These simulations made use of the cosmological zoom method, which seeks to focus computational effort into a region of interest, while maintaining enough of the surrounding large-scale structure to produce a realistic assembly history. To accomplish this, the simulation was first carried out at low resolution using N-body physics only. Dark matter halos were then identified, and a sample of interesting objects selected. The particles making up, and surrounding, these halos were then traced back to their origin, and the simulation carried out again with the region of interest simulated at higher resolution.

Simulation and re-simulation allow the MUGS galaxies to reproduce the known metallicity gradients in observed galaxies and likewise reproduce their large-scale structure, including disks, halos and bulges. The authors use one of the simulated galaxies, a spiral galaxy similar to but not identical with the Milky Way, to investigate the probability and efficiency of panspermia as dependent on the galactic environment.

Image: This is Figure 1 from the paper. Caption: Mock UVJ color images of the simulated galaxy g15784 (Stinson et al. 2010; Nickerson et al. 2013), for both edge-on (left) and face-on (right) orientations, using star and gas particles, and assuming Bruzual & Charlot (2003) stellar population models and a simple dust attenuation model (Li & Draine 2001) with a gas-to-dust ratio of 0.01 at solar metallicity. Additionally, we include line emission from star particles with ages ≤ 50 Myr, following case B recombination (Osterbrock & Ferland 2006) and metallicity-dependent line ratios (Anders & Fritze-v. Alvensleben 2003). All panels are 50 kpc across and have a resolution of 100 pc. Two spheroidal satellites can be seen above and below the galactic plane, respectively. Credit: Gobat et al.

Panspermia appears to be more likely in the central regions of the galactic bulge, as we might assume due to the high density of stars there, a factor which counterbalances their lower habitability in this model. Panspermia is found to be much less likely as we move out into the central disk. In the model of habitability developed by Gobat and Sungwook Hong in 2016, habitability increases as we depart from galactic center, while the new paper shows that the likelihood of panspermia works inversely, being more likely toward the bulge.

In a sense, we decouple habitability from panspermia. The paper uses the term ‘particles’ to refer not to individual stars, but to ensembles of stars with a range of masses but the same metallicity. This reflects, say the authors, the resolution limits of the simulations, which cannot track individual stars through time. From the paper, noting the narrow dynamic range of habitability vs. panspermia [the italics are mine]:

In dense regions [of the simulated galaxy], many source particles can contribute to panspermia, whereas in the outer disk and halo the panspermia probability is typically dominated by one or, at most, a few source star particles. Unlike natural habitability, whose value varies by only ∼5% throughout the galaxy, the panspermia probability has a wide dynamic range of several orders of magnitude.

The models used here have a number of limitations, but it’s interesting that they point to panspermia as being considerably less efficient at seeding planets than the evolution of life on the planets themselves. At best, the authors find that no more than 3% of all the star particles in their simulation have a high probability of panspermia. This may be an overly generous figure, and the paper acknowledges that it cannot be more precisely quantified other than to say that, when it comes to efficiency, local evolution wins going away. Higher resolution galaxy simulations will offer more realistic insights.

We have a result, as the authors acknowledge, that is more qualitative than quantitative, a measure of how much we have to learn about galaxies themselves, and about the Milky Way in particular. The sample galaxy, for example, has a higher bulge-to-disk ratio than the Milky Way. But more significantly, the capture fraction of spores by target planets and the likelihood that life actually does develop on planets considered habitable are subjects with no concrete data to firm up the conclusions.

We can anticipate that future simulations will take into account a rotating evolving galaxy as opposed to the single simulation ‘snapshot’ the paper offers. Nonetheless, this modeling of organic compounds being transferred between stars points to the orders of magnitude difference in the likelihood of panspermia between the inner and the outer disk, a useful finding. Given that so few of the star particles the simulation generates have high panspermia probability, the process may occur but under conditions that make it much less effective than prebiotic evolution.

The paper is Gobat et al., “Panspermia in a Milky Way-like Galaxy,” accepted at the Astrophysical Journal (preprint). The paper on galactic habitability is Gobat & Hong, “Evolution of galaxy habitability,” Astronomy & Astrophysics Vol. 592, A96 (04 August 2016). Abstract.


SETI as a Central Project: An Addendum to Space Development Futures

How does SETI fit into the long-term objectives of a civilization? To a society whose central project is communication, the ‘success’ of the project in detecting intelligence around another star is obviously not assured, but if it does find a signal, would it eventually receive an Encyclopedia Galactica? There is much to ponder here, and Nick Nielsen today tackles the question from the standpoint of not one but many Encyclopedia Galacticas, spread out through cosmological time as opposed to the ‘snapshot’ version a finite species sees. Read on to consider the kinds of civilizations that might practice or be discovered by SETI and how they might formulate their listening and communications strategy. SETI is analyzed here as one of a variety of central projects Nielsen has examined in these pages and elsewhere. For more of his work, consult Grand Strategy: The View from Oregon, and Grand Strategy Annex.

by J. N. Nielsen

1. Variations on the Theme of Spacefaring Civilization
2. A Missed Opportunity
3. The SETI Paradigm
4. Space Development of SETI Civilizations
5. The First Edition of Encyclopedia Galactica: The Null Catalogue
6. Transformation of SETI in the Light of Technosignature Detection
7. Tensions Intrinsic to SETI Civilizations
8. The Success and Failure of Civilizations
9. The Civilizational-Cosmological Endgame

1. Variations on the Theme of Spacefaring Civilization

In my previous Centauri Dreams post Space Development Futures I formulated six scenarios for the future of civilization, based on six distinct central projects that could be the drivers of future civilization. My list of six central projects included the Enlightenment, science, environmentalism, traditionalism, virtualism (simulations and singularitarianism), and urbanism. But I wasn’t finished there.

In my newsletter 128 I came at the idea of the central project of a civilization from a different angle, briefly laying out four possible scenarios for future civilizations based on biocentric initiatives, which could be understood as continuous with the biocentric past of human civilization. Here my list included civilizations that focus on the propagation of terrestrial life on a cosmological scale [1], large-scale research into the origins of life, an exhaustive survey of life in the cosmos, and the study of synthetic or artificial life, any or all of which might be taken together in a civilization that exemplified a special case of science as a central project, taking the biosciences in particular as its focus.

These speculations (six described in an earlier Centauri Dreams post and another four described in a newsletter) yield an even ten scenarios for future civilization. We could understand the bioscience scenarios as falling under the umbrella of scientific civilization, making them among “…a class of scenarios… that incrementally depart from the above generic scenarios, and continue to developmentally diverge…” as noted in Space Development Futures. In the same way, further scenarios that would fall under the umbrella of the other five scenarios I outlined could be formulated, yielding as many variations upon these themes as one has the imagination to generate.

However, these biological variations on the theme of scientific civilization touch on human identity in a fundamental way, and so stand out from other permutations of scientific central projects. There is an intrinsic reasonableness in the fact that human beings, being ourselves biological, are interested in biology in the same way that individuals are sometimes intensely interested in their family origins, taking to genealogical research in order to discover their roots. A biocentric-bioscience civilization (with its moral imperative derived from biotic ethics) would embody this human interest at the scale of civilization, discovering the roots of human life by discovering the roots of biology.

E. O. Wilson called our natural interest in living things biophilia, and elaborated on the idea in a book devoted to the same intrinsic biological interest on the part of human beings:

“From infancy we concentrate happily on ourselves and other organisms. We learn to distinguish life from the inanimate and move toward it like moths to a porch light. Novelty and diversity are particularly esteemed; the mere mention of the word extraterrestrial evokes reveries about still unexplored life, displacing the old and once potent exotic that drew earlier generations to remote islands and jungled interiors.” [2]

That Wilson also noted the attraction to extraterrestrial exoticism as an extension of biophilia is significant. Some of us are as intensely interested in life in the universe—the domain of astrobiology—as we are in life on Earth—the domain of life simpliciter. The biocentric-bioscience central projects described above would constitute a search for our “roots” on a more fundamental level—at the level of understanding the place and role of biology in the universe.

Many contemporary visions of the human future, especially since the computer revolution, have been technocentric rather than biocentric. It is a common argument, and perhaps even as common an assumption, that intelligence on cosmological scales of space and time, if any such exists, must be post-biological. The bioscience scenarios for the future of civilization discussed above constitute an alternative to, if not a rejection of, the idea of post-biological intelligence being necessarily or inevitably the focus of emergent complexity for advanced intelligence in the universe.

2. A Missed Opportunity

In the same way that some individual human beings are intensely interested in their genealogical origins, and some scientists are intensely interested in the origins of life—because this is, at the same time, the ultimate origins of human life, and thus the ultimate discovery of our roots—some among us are intensely interested in communicating with peers. In mundane terms, our peers are other persons in other terrestrial cultures. In cosmological terms, our peers are other intelligent progenitor species of civilizations, which have constructed technologies of sufficient power, scale, and complexity to facilitate communication over interstellar distances. And, in the same way that some enjoy the exoticism of other lands and other cultures, the exoticism of other minds evolved on other worlds can be understood as an extension of the same curiosity.

The ten scenarios for future civilization noted above did not account for this particular species of curiosity, and so did not include an obvious possibility, which now seems like an oversight in retrospect: SETI as a central project. While SETI as a central project could be placed under the umbrella of science-derived central projects, like the biological scenarios discussed above, it is sufficiently independent that it could also be formulated as a distinctive form of civilization focused on communication, or the attempt at communication, with ETI over interstellar distances.

Adapting the theses that I previously formulated in Space Development Futures for scientific civilization, to the particulars of a SETI civilization, I arrive at the following formulations:

SETI Infrastructure Thesis

A SETI program on a civilizational scale will occur only after a SETI infrastructure buildout makes such a program possible.

SETI Framework Thesis

A SETI program on a civilizational scale will occur after a conceptual framework is formulated that is adequate to motivate the construction of a SETI-capable infrastructure at a scale consistent with such a program.

SETI Buildout Thesis

A SETI-capable civilization makes the transition to a SETI civilization through an institutional buildout that facilitates SETI.

SETI Central Project Thesis

SETI as a central project would be the axis of alignment for all institutions of a SETI civilization, integrating infrastructure and framework into a coherent whole with historical directionality.

Some of these formulations feel more natural than others, as is to be expected given the historical peculiarities of any civilization; a particular kind of civilization, evolved under particular circumstances, is going to bring out distinctive aspects of the institutional structure of civilization, so that historical contingency drives the appearance of radically different institutions and institutional structures in civilizations that are, in reality, quite similar. Some of the institutional peculiarities of a SETI civilization are evident to us even without having an existing instance of a SETI civilization to observe.

An interesting twist that makes a SETI civilization distinct from civilizations dedicated to other central projects is that SETI “success” is not directly dependent upon the scope and scale of the SETI undertaking. Certainly, a larger undertaking is more likely to be successful than a smaller undertaking, and it is a consistent talking point of many SETI researchers that SETI efforts so far have been utterly inadequate to any judgment regarding the existence of ETI and communicating civilizations. [3] However, there is no lawlike proportionality between the scale of SETI efforts and the scale of SETI success; the two may be decoupled.

A given civilization might have a very marginal SETI community (i.e., a civilization that does not take SETI as its central project), using only minimal resources and technology, and still be successful in its search if it were to receive a signal communicated over interstellar distances. [4] A civilization might also make SETI central to its identity, expending resources at the scale of civilization on the search, and still find nothing. The former is not a SETI civilization and yet achieves “success”; the latter is a SETI civilization and does not achieve success. The difference lies not in the success of the SETI enterprise, but in the relationship of SETI to the institutions of civilization. [5]

Elsewhere I have made the claim that, had our galaxy been filled with SETI signals from the millions of civilizations postulated in the more permissive scenarios of ETI, the first time a radio telescope had been switched on, it would have been deluged by a riot of signals, and the history of civilization at that time would have been sharply realigned as a result of such a discovery. We would have spent the subsequent decades understanding and interpreting these signals, while there would have been a race to build the best radio telescope that could capture the most valuable signals. We could formulate this as a thought experiment based on several different assumptions, for example:

    1) An apparatus is constructed that unintentionally functions as a radio telescope, receiving signals that are ignored because they are not understood.

    2) A radio telescope is constructed with no anticipation of the possibility of receiving signals from other worlds, so that once the signals are received, time is required to understand what the signals are and what their significance is. [6]

    3) A radio telescope is constructed in a social milieu in which at least the idea is present of intelligence on other worlds, so that the signals are immediately or soon understood for what they are, and the work of interpretation can begin immediately.

    4) A radio telescope is constructed with the anticipation of focusing on SETI, so that the entire effort (i.e., the design, construction, and operation of the radiotelescope) is predicated upon the SETI paradigm explicitly pursued, and signals received are immediately interpreted in this context.

Civilizations in the early stages of their technological development might follow any one of these paths (depending on the universe in which they evolve, the Drake equation N of which is independent of any given civilization), and the outcome for a given civilization would be quite different depending upon the path followed. [7] It would be an interesting thought experiment to elaborate scenarios (1) and (2) above, which are, for us, counterfactuals, as our conceptual framework already encompasses the possibility of SETI signals. Scenarios (3) and (4) above may still play out, such that if radiotelescopes constructed as part of the VLBI network received a SETI signal, this would constitute scenario (3), while if the SETI Institute’s Allen Telescope Array receives a SETI signal, this would constitute scenario (4).

3. The SETI Paradigm

In an earlier Centauri Dreams post, Stagnant Supercivilizations and Interstellar Travel, I described what I called the SETI paradigm, which consists of several closely related assumptions about the development of civilization in a cosmological context, which assumptions all point to civilizations being largely confined to their homeworld, not expanding beyond their planetary system of origin to any significant extent, and thus converging upon communication with any other civilizations as the only possibility of contact and exchange between worlds. [8]

The fundamental assumptions of the SETI paradigm [9] could be made explicit as follows:

    1. Other intelligent beings possessing advanced technology may exist.

    2. Interstellar travel is difficult to the point of near impossibility or outright impossible.

    3. It is pointless to expend resources on attempts at interstellar travel, and equally pointless to search for signs of interstellar travel by other intelligent beings possessing technology.

    4. Interstellar communication is possible by radio, and perhaps also by other technological means.

    5. Interstellar communication by radio is the preferred method (or the exclusive method) of contact between civilizations separated by interstellar distances.

    6. Resources for communication with other intelligent beings should be directed into either the search for technosignatures (listening), or the production of beacons (transmitting), or both.

    7. Other intelligent beings possessing advanced technology may have transmitted information or constructed beacons, either of which we might detect if we conduct SETI research.

    8. Other intelligent beings possessing advanced technology may be searching for technosignatures, and in so doing they may discover ours.

I am here using “SETI” as an umbrella term to cover many activities that might be distinguished in a more fine-grained account. SETI is sometimes narrowly construed to mean only radio SETI, but the search for optical beacons and for the IR signatures of Dyson spheres are simply searches in other parts of the EM spectrum, which includes optical, IR, and radio wavelengths. SETI can be understood as falling under an umbrella of related ideas such as CETI (communication with extraterrestrial intelligence, an acronym no longer widely in use) and SETA (search for extraterrestrial artifacts). Insofar as SETI is understood to be the search for technosignatures, the many possible technosignatures all constitute distinct modalities of search, each with its own scientific instruments and its own observational protocols. [10]

In addition to the modalities of technosignature searches [11], there is also the form of interest that a given civilization demonstrates in SETI. There are four broad classes:

    1. A civilization capable of transmitting or listening that does neither

    2. A civilization that listens only without transmitting

    3. A civilization that transmits only but does not listen

    4. A civilization that both listens and transmits

The first, the null case, is not a SETI civilization; the following three possibilities could be the basis of a SETI civilization, and in a fine-grained account each of these three possibilities could hold for any permutations of the modalities of technosignature sources. [12] SETI civilizations are not one, but many—civilizations that take SETI as a central project constitute a class of possible civilizations, any one of which could exemplify the theses of a SETI civilization as formulated above. How we go about defining the possible taxonomies of SETI civilizations and the modalities of technosignatures determine the possible permutations that would make up the members of the class of SETI civilizations.

4. Space Development of SETI Civilizations

There have already been several papers that outline space development that prioritizes SETI missions. Kardashev, et al., in 1998 (“Space Program for SETI”), and Frank Drake in 1999 (“Space Missions for SETI”), wrote papers specifically about space programs configured around SETI goals. Moreover, some twenty years prior, Kardashev, et al., published “An Infinitely Expandable Space Radiotelescope,” which described a modular radiotelescope constructed in Earth orbit, to which modules could be continuously added, expanding its capacity over time, and thus its functionality (pictured above). Bracewell’s conception of a now-eponymous Bracewell probe (Bracewell, R. N. 1960. Communications from Superior Galactic Communities. Nature, 186(4726), 670-671) is another conception of a space mission configured around SETI goals.

The Kardashev, et al., paper suggests a three-step process, building from the more certain and less spectacular to the less certain and more spectacular:

    1. investigation of conditions for the existence of extraterrestrial intelligence (ETI);

    2. search for astroengineering activity;

    3. search for communication signals.

The Drake paper approaches the problem differently, advocating radiotelescopes constructed progressively farther out in the solar system from Earth, and eventually using the sun as a gravitational lens for observations. Both of these measured programs converging on more ambitious goals stand within the SETI paradigm and involve no more space development than can occur within our solar system. In Space Development Futures I distinguished a range of space development buildouts from minimal to maximal, and the same can be done for the space development of SETI civilizations; indeed, these papers have already done so, ranging from a minimal program of investigating conditions for the existence of ETI and building radiotelescopes in LEO to more elaborate searches using the sun as a gravitational lens.

The presupposition of the SETI paradigm that interstellar travel is difficult to the point of near impossibility does not necessarily exclude spacefaring within a species’ home planetary system, so that SETI space development programs could range through the possibilities explored by Kardashev and Drake, with Drake’s gravitational lens putting spacefaring activities out to 550 AU, which is outside the solar system proper (the radius of the heliosphere is today judged to be about 120 AU, but this is a developing area of research and this radius estimate is likely to change as additional data are acquired). A successful mission to the focal point of the sun, already well into interstellar space, would suggest the possibility of constructing Bracewell probes, which would extend SETI space development to interstellar missions, albeit not for human beings, but only for automated probes (at least at first); the kind of technology and engineering that would make possible a successful mission to the focal point of the sun would also make possible a Bracewell probe, so that the two programs are at least loosely coupled in terms of technological capability.

Implicitly, the idea of SETI as a central project is in the background of Kardashev’s conception of supercivilizations and Sagan’s conception of the Encyclopedia Galactica. Kardashev’s civilization types imply increasing degrees of space development that would allow for ever greater energies to be channeled into detection and transmission of SETI signals. Kardashev makes this explicit in his classic paper of 1964, with type I civilizations being difficult to detect and hard pressed to effectively transmit, while higher civilization types would be easier to detect and more effective in transmission. [13] A SETI civilization, then, might range in space development from some scientific instruments placed in space to a cosmos-spanning civilization that has effectively wired the universe for surveillance and communications. [14]
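
The energy gulf between Kardashev’s types can be made concrete with the figures commonly quoted from his 1964 paper; the numbers below are those conventional values, not figures from this essay, and the sketch is merely illustrative of why higher types are easier to detect and more effective at transmitting:

```python
import math

# Orders of magnitude commonly quoted from Kardashev (1964); illustrative.
KARDASHEV_POWER_W = {
    "Type I":   4e12,  # energy budget of a planetary civilization (~1964 Earth)
    "Type II":  4e26,  # the full output of a Sun-like star
    "Type III": 4e37,  # the output of an entire galaxy
}

# Detectability and transmission power scale together: each step up the
# typology buys roughly 11-14 orders of magnitude in available power.
for lo, hi in [("Type I", "Type II"), ("Type II", "Type III")]:
    ratio = KARDASHEV_POWER_W[hi] / KARDASHEV_POWER_W[lo]
    print(f"{lo} -> {hi}: x10^{math.log10(ratio):.0f}")
```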

In Space Development Futures I emphasized that the scenarios described were all indifferently spacefaring civilizations, meaning that these civilizations were space-capable or spacefaring, but not properly spacefaring civilizations, as they did not have spacefaring as their central project. SETI civilizations as discussed above are similarly conceived as space-capable and indifferently spacefaring. Again, as with the other scenarios, on the way to becoming a mature SETI civilization (the scenario of a robust buildout of scientific instrumentation and transmission capability), a civilization might transition to a properly spacefaring civilization, especially if the SETI project flags even while spacefaring capacity grows and distant worlds beckon.

5. The First Edition of Encyclopedia Galactica: The Null Catalogue

The history of the universe to date—much of it unknown to us—has already determined whether we exist in a cosmos populated with biosignatures and technosignatures that we will eventually find, or not. Our little slice of time in cosmological history (a slice of time that I call the Snapshot Effect) is what it is, and our searching or not searching will not change this. A typical mammalian species endures for a million years or so. We are not a typical mammalian species, but as an outlier we cannot expect to remain unchanged; the same human intellect that could potentially preserve the viability of our species beyond its natural term could also potentially transform both our bodies and our minds into something unrecognizable.

The future epochs of the universe that we will not know (i.e., that humanity-as-we-know-it will not know) may be as full or as empty as our current epoch, but however populated or lonely, we will not be there to see it, and our civilization (civilization-as-we-know-it) will not be the civilization that apprehends this future epoch. Humanity-as-we-know-it and civilization-as-we-know-it have this present snapshot of cosmological time, and no other. Our account of the universe, our Encyclopedia Galactica (the edition that we author), must reflect this.

If we find ourselves in a lonely and unpopulated epoch in the history of the universe, there is still an Encyclopedia Galactica to be written about the history of the universe to date, but the Encyclopedia Galactica of an unpopulated or sparsely populated universe would necessarily be different from that of a populated universe. It would still be possible to compile a catalog of SETI searches in a silent universe, albeit searches with a negative result. Some SETI researchers have discussed the hesitancy to publish negative results [15], recognizing that such results are valuable in themselves and risk being lost if they go unpublished. While disappointing, negative technosignature searches could be the basis of the earliest Encyclopedia Galactica transmitted in our universe.

Such a catalog of the absence of technosignatures—transmitting an Encyclopedia Galactica of negative technosignature searches—would need to be formulated in such a way that, if received by another civilization, they would be able to decipher the stars and planetary systems that were the objects of unsuccessful technosignature searches, as well as the era in the history of the universe during which these searches were conducted. Reference to distinctive pulsars, as on both the Pioneer plaque and the Voyager Golden Record, can triangulate location, and any change in the rate of the pulsars can be used to determine the time of the observation.
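
The pulsar-clock idea can be sketched numerically. A pulsar’s spin period lengthens at a nearly constant, measurable rate, so a recipient comparing the period recorded in the catalog with the period they observe can recover the elapsed time. The sketch below assumes a constant spin-down rate (a simplification; real pulsars also glitch and show timing noise), and the values are purely illustrative rather than a real pulsar’s parameters:

```python
# A pulsar's period P grows at a nearly constant rate Pdot (seconds per
# second). Given the period recorded in a transmitted catalog and the period
# observed on receipt, the elapsed time is simply (P_obs - P_rec) / Pdot.
SECONDS_PER_YEAR = 3.156e7

def elapsed_seconds(p_recorded, p_observed, p_dot):
    """Time between two observations of the same pulsar, assuming a
    constant spin-down rate p_dot."""
    return (p_observed - p_recorded) / p_dot

p_dot = 1.0e-15        # illustrative spin-down rate, s/s
p_then = 0.7145        # period recorded in the transmitted catalog, s
gap_years = 100_000    # time between transmission and receipt

# The period the recipient would observe after the gap:
p_now = p_then + p_dot * gap_years * SECONDS_PER_YEAR

recovered_years = elapsed_seconds(p_then, p_now, p_dot) / SECONDS_PER_YEAR
print(recovered_years)  # the recipient recovers the 100,000-year gap
```

Using several pulsars at once, as the Pioneer plaque does, also fixes the spatial origin by triangulation.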

A catalog of observations, with the point of origin of the observation (in both space and time) being precisely reconstructable, would be interesting in itself, both for establishing an observational history of the universe and possibly also for helping to detect technosignatures. It has been suggested that disappearing stars are a possible technosignature [16], so that a detailed star map from some time in the past, compared to a contemporaneous star map, could reveal stars that have disappeared from view. Indicating red giants in a star map would also be useful, as stars do not remain in their red giant stage for long periods of time in cosmological terms. A civilization receiving a transmission that identified stars in their red giant stage may be able to identify these former red giants with supernova remnants in their own time.

A galaxy, or the universe entire, might pass through definite stages of the development of civilizations capable of transmitting or receiving technosignatures and collating an Encyclopedia Galactica from these efforts, which would then correspond with successive editions of the Encyclopedia Galactica, something like the following:

    0. The Silent Era—The era in the history of our universe before any technosignatures are transmitted or received.

    1. The Lonely Era—The era in the history of our universe in which one or a small number of isolated civilizations record negative technosignature searches and transmit a catalog of the Lonely Era, the first edition of the Encyclopedia Galactica.

    2. The Inflection Era—Second generation civilizations in our universe receive the null catalog of technosignatures and can add both themselves and the transmitting civilization to it, expanding the catalog beyond merely negative technosignature reports, thus constituting an inflection point in the development of the Encyclopedia Galactica.

    3. The Crowded Era—Multiple formulations of the Encyclopedia Galactica exist, in varying degrees of completeness, and are shared throughout the universe, now crowded with civilizations and the Encyclopedia Galactica editions that describe them.

    4. The Terminal Era—As civilization formation slows and eventually ceases, novel technosignatures tail off, and formulations of the Encyclopedia Galactica approach completeness.

    5. The Second Silent Era—As the final transmitters sending out a complete compilation of the Encyclopedia Galactica for the known universe one by one fall offline, or pass beyond the cosmological horizon, the universe eventually goes radio silent again, and a Second Silent Era reigns until proton decay or heat death. If any intelligence survives in the universe, it could compile an account of disappearing transmitters that would constitute the final appendix to the Encyclopedia Galactica.

There is also another sense of the Encyclopedia Galactica in which the Encyclopedia Galactica is to be identified with this entire process of cataloging the universe throughout the period of its development when conscious observers are present to render an account of it. Our temporal snapshot of the universe includes the observational pillars of the big bang and so places us relatively near in time to the origin of our universe, allowing us to give at least a limited account of the origins of the universe up to the present day. Other observers may be able to recount later stages in the development of the universe, when these observational pillars of big bang cosmology are no longer visible, but other phenomena are visible.

In the period following what Krauss and Scherrer called the “end of cosmology,” the observable universe will be limited to only the gravitationally bound galaxies of the Local Group, which will by then all be agglomerated into a single elliptical galaxy. In order to preserve the knowledge of the much more extensive universe visible to us today, our descendants will need the Encyclopedia Galactica as a record of what has been lost from view—including an account of now unobservable technosignatures from other gravitationally bound groups of galaxies.

6. Transformation of SETI in the Light of Technosignature Detection

As noted above in section 2, “SETI ‘success’ is not directly dependent upon the scope and scale of the SETI undertaking,” that is to say, there is no necessary correlation between SETI efforts and SETI success, insofar as SETI success is defined in terms of technosignature detection (though we can also define success in other ways). [17] Moreover, if a SETI civilization experiences success in the form of a high-information technosignature, it is not clear that it will remain a SETI civilization. What is the endgame of a SETI civilization?

In the case of detecting a low-information signal like a beacon, this would be a significant source of morale for SETI [18], a suggestion of greater things to come; but in the case of a SETI civilization, the SETI project is already the source of morale for the civilization in question. However, it must be admitted that civilizations sometimes flag in their mission, and even a marginal signal detection could serve to rally greater effort toward the SETI mission.

A SETI civilization might not only continue as a SETI civilization after the detection of a beacon, but might redouble its efforts. This, however, merely kicks the can down the road. The redoubled effort is presumably due to a combination of confirmation that ETI is to be found (SETI proof of concept), as well as the continued pursuit of a high-information signal that could prove to be transformative.

Suppose, as a thought experiment, that a civilizational-scale SETI program moves beyond isolated cases of unambiguous technosignature detection, refines its successful techniques for acquiring technosignature signals, and, after the initial excitement and wonder fades, turns to the more mundane task of cataloging technosignatures. Such a catalog would be our own parochial version of the Encyclopedia Galactica (Earth’s Encyclopedia Galactica of the Inflection Era or the Crowded Era, as described in the previous section).

Presumably, if we could do this, some ETI could do this, and perhaps has already done this. Some ETI transmits its catalog of technosignatures, and we discover and decode this signal. Perhaps several ETIs have done this, and we are able to collect and collate multiple parochial Encyclopedia Galacticas, thus producing the most comprehensive Encyclopedia Galactica to date in the history of the universe. Presumably we “pay it forward” by transmitting our Encyclopedia Galactica in its turn. At this point, the task of SETI appears to be complete, leaving nothing further for a SETI civilization to do (the Crowded Era implies the Terminal Era).

It is easy to quibble with this scenario. One could argue that new signals might appear with regularity, necessitating supplementary volumes for the Encyclopedia Galactica. One could argue that the transmission of our Encyclopedia Galactica could be carried out indefinitely, and always at higher energies, reaching a greater part of the universe, and moreover that traditional SETI is predicated upon some civilization doing precisely this, that we might hope to receive such a signal. This is one vision of a mature SETI civilization, transformed not by any signal received, but by the imperative to transmit a signal. It does this for as long as it can, until it expires; this is a civilization that has completed its entire historical arc before fading into oblivion (it has fully realized and exhausted its central project).

One also could argue that the one truly transformative signal—the signal that would elevate the SETI enterprise above the cataloguing of technosignatures—was missed earlier in the cataloging process, and, having discovered this later (perhaps buried deep in electronic archives), the true impact on human civilization begins. But note that this transformation of human civilization in receipt of a transformative transmission is the end of SETI civilization and the beginning of another kind of civilization.

Suppose that there is no transformative message to be found among the welter of successful technosignature detections. The work of cataloging continues, and scholars who study technosignatures devote their lives to compiling and deciphering signals, which is a task that converges on completeness the longer it is pursued. Such a dwindling enterprise cannot be the central project of civilization (though it could be the central project of a stagnant and decaying civilization); inevitably, interest will drift to other matters as the SETI task converges on completeness, and some other project will become the central project around which civilization organizes itself, or civilization will fail.

A SETI civilization will not survive its own success. Either it will be transformed by the knowledge gained from a high information technosignature, or the task will become a mundane matter of compiling signals and converging on a complete catalog. In either case, the SETI civilization ends and something else takes its place, but, in converging on making itself irrelevant through success, a SETI civilization must first work through a number of intrinsic internal tensions arising from the nature of the SETI enterprise.

7. Tensions Intrinsic to SETI Civilizations

The impact that a successful technosignature detection would have on civilization, and whether a SETI civilization in receipt of a signal could remain a SETI civilization, is one of many tensions that would play out within a SETI civilization.

Another predictable tension within SETI civilizations would be that between growing efforts on a civilizational scale to detect technosignatures and the growing skepticism that will inevitably follow from the failure to do so. However, the situation is not likely to be so simple. There may be many false positives of technosignature discoveries that temporarily raise hopes, but while these false positives will give a temporary boost to SETI initiatives, the subsequent failure to confirm the signal as an unambiguous technosignature may be dispiriting, and, recalling that these efforts will, in a SETI civilization, be occurring at a civilizational scale, demoralization at a civilizational scale would have severe social consequences. This demoralization problem alone may be sufficiently severe to prevent an authentically SETI civilization from coming into being.

It could well be that SETI undertaken at a civilizational scale may yield a great many ambiguous detections, the interpretation of which becomes a point of conflict. This could prove to be a source of creative tension in times of growth and optimism, while transforming into destructive conflict in times of contraction and pessimism. While a SETI central project is growing and developing, alternative interpretations of ambiguous signals could drive further research, expanding the conceptual framework of SETI, and the dialectic of conflicting interpretations could play out as a synthesis that moves the debate forward. In times of doubt and pessimism, infighting among SETI schools of thought could become rancorous, poisoning the entire atmosphere of the field and thus holding back the development of research, especially retarding the most adventurous ideas, which are the most likely to generate criticism, but which also hold out the hope of the greatest progress.

Another obvious point of conflict will be that between passive and active SETI. Today the “transmission debate” divides the SETI community to a certain extent, and, insofar as in a SETI civilization these issues would be the primary driver of social institutions, the division would reach down into the foundations of civilization. More advanced technological capabilities, and more resources available for these technologies, will on the one hand exacerbate the active/passive conflict (i.e., the SETI/METI conflict), as more powerful transmissions will have the ability to reach deeper into and wider across the cosmos, potentially reaching targets not previously reached. On the other hand, if SETI efforts continue, and are undertaken on a civilizational scale, but still yield no confirmed signals, the case for any signal sources grows ever weaker over time, and the perception of risk declines as the entire SETI enterprise declines.

8. The Success and Failure of Civilizations

Because there is no consensus on a theory of civilization, there is no consensus on what constitutes failure or success for a civilization, and even if we did have a theory of civilization that gained wide acceptance, that still might not be sufficient to judge the success or failure of a civilization. Insofar as any theory would be scientific, it would ideally be value-neutral, and insofar as our judgments of the success or failure of civilizations are freighted with value judgments, the two possibilities of success and failure might not even be addressed by a theory of civilization. However, if we had a theory of civilization, then we could additionally formulate a theory that stood in relation to a science of civilization as conservation biology stands in relation to biology, involving norms of the social ecosystem in which civilizations thrive or go extinct; this would imply some minimal standard by which to judge civilizations.

The entire social science of civilizations remains to be formulated, so that we are not yet in a position to frame a theory for the success or failure of civilizations. However, we can explore the parameters of civilizational success and failure through well-formulated thought experiments, through which we can elucidate the intuitions that might someday play a foundational role in a science of civilizations. Moreover, SETI civilizations constitute a unique lens for focusing on the problems of civilizational success or failure, given the paradoxical nature of SETI civilization in relation to success and failure (as noted above in section 2), i.e., successful technosignature detection does not necessarily correspond to the scope and scale of detection efforts.

Say, for the sake of argument, that a SETI civilization coalesces within the next few hundred years and continues for a thousand years or more. During that thousand years of reasonably stable civilization, given directionality and coherence through its SETI central project, science, technology, and engineering will continue to improve to the point at which it becomes indefensible to maintain the impossibility or undesirability of human exploration of the universe. These developments call into question fundamental assumptions of the SETI paradigm. Nevertheless, such a civilization, already dedicated to the exploration of the universe, continues this exploration, although now the techniques of biosignature and technosignature detection are supplemented by actual exploration, first by robotic probes and later by human missions. Is this still a SETI civilization? It is still a civilization that is seeking life and intelligence in the universe, and it has even added to its modalities of exploration, though it has arguably abandoned the SETI paradigm.

Let us extrapolate from this baseline scenario of expanding SETI capabilities over a thousand-year timeline, with a civilization that not only searches but also transmits. Say a SETI civilization transmits to the universe at large for a million years, or even for a billion years. Further suppose that this SETI civilization goes extinct, and subsequent civilizations determine by exhaustive survey that there were no other civilizations to contact through such METI transmissions, so that the entire effort was in vain. On the one hand, the earlier civilization fulfilled its central project; on the other hand, its central project was predicated upon a false understanding of the nature of the cosmos. Are we to judge this earlier civilization as a success or as a failure?

Both of these scenarios involve our civilization continuing to search the universe for signs of life and intelligence, but finding little or nothing. “Unsuccessful” SETI, i.e., ongoing SETI without the detection of an unambiguous technosignature, must, over cosmological scales of time, converge on a null result. At what point in the search do we acknowledge that we are probably alone in the universe? And if we have in the meantime built a SETI-centered civilization, what do we do next when we are relentlessly closing in on a null result? In what direction can a SETI civilization pivot as SETI becomes less meaningful? This really isn’t a difficult question. It seems obvious that marshalling a civilization to search the universe for technosignatures would, at the same time, involve the search for biosignatures, and a civilization optimized for search and exploration can take the next step through other forms of exploration that would break with the SETI paradigm, but which would nevertheless constitute a natural extension of the activities of a SETI civilization, as in the first scenario above.

We can argue that a SETI civilization can be “successful” in terms of possessing a coherent social project that gives meaning and purpose to the peoples whose civilization is based on SETI initiatives, even if that SETI civilization is a failure in terms of detecting an unambiguous technosignature. This would especially seem to be the case with any growing civilization that is pursuing SETI initiatives. Any set of institutions that is used to facilitate stability, coherence, and directionality for a large group of persons over a large geographical area for a long period of time cannot be judged to be an absolute failure, though it may be seen to damn with faint praise when we allow that a civilization performed a valuable social function while denying the validity of the purpose to which it devoted itself. And we may feel queasy about civilizations that, in addition to facilitating stability, coherence, and directionality, also facilitate bloodshed, warfare, and exploitation, but no civilization could function that was not fully a creature of its age, and no age has been free of bloodshed, warfare, and exploitation.

Another thought experiment could be formulated such that a SETI civilization successfully receives and decodes a high information transmission, but is not transformed by the reception. I argued above in section 6 that a SETI civilization would not survive its own success—but what if it does? Would we judge a civilization a success if it were so stagnant, so impervious to change, that it assimilated the knowledge of a high information technosignature in the spirit of nil admirari? Or, contrariwise, would we judge a SETI civilization to be a failure if it were to detect a technosignature, and the consequences of the detection led to social destabilization and the collapse of the SETI civilization?

Thought experiments such as this can be employed to explore our intuitions about what constitutes a failed or a successful civilization, and the explication of these intuitions could in turn inform a theory of civilization. The paradoxical nature of SETI civilization in relation to success and failure (as noted above in section 2) makes such a civilization especially valuable in a theoretical inquiry, in which we seek to expand our conceptual framework by challenging our intuitions with counter-intuitive scenarios.

9. The Civilizational-Cosmological Endgame

The most breathtaking visions of interstellar civilization, as we have seen with the conceptions of Sagan and Kardashev, have been, in effect, SETI civilizations. Insofar as we are inspired by this vision, the future for civilization is boundless. Albert Harrison wrote, “…if a succession of other search strategies gain acceptance, SETI could continue indefinitely,” and, “…it is very unlikely that even in the case of a prolonged absence of a confirmed detection everyone will conclude that we are alone in the universe.” [19] Like science elaborated as the central project of a civilization, SETI offers the prospect of an endless central project that could serve as the focus of a civilization of cosmological scale in space and time.

This is not, however, the only vision for the future of civilization. Dyson’s conception of a civilization so uninterested in communicating its presence that, despite its technological accomplishments, it would be detectable only by the passive technosignature of the inevitable waste heat of industrial processes carried out on a cosmological scale, could also be extrapolated indefinitely, but would be indifferent as to whether or not it was alone in the universe. Implicit in the Dysonian conception are later conceptions such as John Smart’s Transcension Hypothesis—intelligence that has forsaken the outer world for the inner world, or the actual world for virtual worlds.

There is a bifurcation in conceptions of the most advanced forms of intelligence that we can imagine at our present state of development: there are conceptions that are extroverted and expansive, and conceptions that are introverted and indifferent to expansion. The former project themselves into the cosmos and define themselves through growth; the latter look inward and ultimately would be defined by density. These forms of civilization—if they are civilization—would have radically different cosmological profiles, and they would, over cosmological scales of time, evolve into distinct presences on a cosmological scale, which, extrapolated to the utmost, and integrated with the fabric of the cosmos, would yield, in each case, a different universe.

The two are not necessarily mutually exclusive, at least for the next several billion years; our universe could eventually consist of expansive civilizations that sweep outward even while introverted civilizations achieve ever higher energy rate densities and, by doing so, effectively cut themselves off from the outside world, as the effective sphere of communications must contract as rates of communication accelerate. Could two such diverse adaptations of intelligence to the conditions of the cosmos engage in any kind of coherent communication? This is a larger question for another time, but the implications of this question are relevant for SETI civilizations.

The rate at which history occurs on Earth already effectively excludes SETI/METI communication as a driving force in civilization, as civilizations develop over time scales of hundreds of years and, at best, endure over time scales of thousands of years. If Project OZMA had found signals from Tau Ceti or Epsilon Eridani, then there would have been the possibility of communications over a time scale of decades, which could have been a formative element in human civilizations. However, not having found signals closer to home, we look for signals from hundreds or thousands of light years’ distance, which would mean communication over hundreds or thousands of years. Since terrestrial civilizations at best endure for thousands of years, any communication between terrestrial civilizations and civilizations thousands of light years distant would comprise at most one communication cycle; there would be no dialogue. Such communication could be transformative, in the sense of redirecting civilization on a new path, but not in the sense of being an ongoing influence through interaction.
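
The arithmetic behind the one-communication-cycle claim is simple: a reply from a partner d light years away arrives no sooner than 2d years after transmission. The distances and the lifetime below are illustrative assumptions, not values from the text:

```python
# One round trip (message plus reply) with a partner d light years away
# takes at least 2*d years. Counting how many complete cycles fit within a
# civilization's lifetime shows why nearby partners permit dialogue while
# distant ones allow at most a single exchange.
def round_trips(distance_ly, civilization_lifetime_yr):
    """Complete message-and-reply cycles possible within one lifetime."""
    return civilization_lifetime_yr // (2 * distance_ly)

lifetime = 5_000  # optimistic lifetime of a terrestrial civilization, years

print(round_trips(12, lifetime))     # Tau Ceti-like distance: many cycles
print(round_trips(2_000, lifetime))  # thousands of light years: one cycle
```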

To sum up: if the time scale of some form of interaction exceeds the expected longevity of the entity involved in the interaction, then the processes and events that constitute the history of the entity in question cannot be constituted by this means of interaction, though this history may be inflected by such an interaction. The rate at which human history develops (and therefore the rate at which civilizations form), which is in turn derived from the rate at which human beings interact socially (i.e., the rate of human conscious interaction), defines certain parameters of history, and therefore of civilization, such that rates that fall outside these parameters, whether above or below the parameters of the relevant rate of interaction, fall outside our purview. The temporal parameters defined by human consciousness and social interaction define in turn our slice of cosmological time (mentioned above in section 5); this Snapshot Effect defines both civilization-as-we-know-it and humanity-as-we-know-it. [20]

Notes

[1] I know of at least two individuals, Claudius Gros and Michael Mautner, who have explicitly advocated such a biocentric vision of the future of terrestrial civilization. Informally, many individuals have expressed to me their interest in propagating terrestrial life beyond Earth as a central motivation for human expansion into space. Cf. the following papers, inter alia:

Gros, C. (2016). “Developing ecospheres on transiently habitable planets: the genesis project.” Astrophysics and Space Science, 361(10). doi:10.1007/s10509-016-2911-0

Mautner, M. N. (2009). “Life-Centered Ethics, and the Human Future in Space.” Bioethics, 23(8), 433-440. doi:10.1111/j.1467-8519.2008.00688.x

[2] Wilson, E. O., Biophilia: the Human Bond with Other Species, Cambridge and London: Harvard University Press, 2003, p. 1. We could also attribute human interest in our own biology to simple anthropocentrism, which can be informally expressed as, “…the endless fascination of us humans with ourselves,” (David Quammen, The Tangled Tree: A Radical New History of Life, section 61, p. 274)

[3] I call this the “drop in the bucket” argument, as it is often stated that our SETI efforts today are a mere drop in the bucket compared to the size of the universe.

[4] This scenario could be used to illustrate the Buildout Thesis. For example, a scientific civilization might build a significant radio telescope infrastructure for the purposes of astronomy, and if astronomical observations accidentally capture a SETI signal, such a civilization might be transformed by that unexpected observation into a SETI civilization. The infrastructure buildout that would make a SETI civilization possible has already occurred; when the transformation occurs, the pre-adapted infrastructure is exapted for different purposes.

[5] The paper “Positive consequences of SETI before detection” (1998) by A. Tough discusses six ways in which SETI impacts society without regard to the detection of a SETI signal. These impacts, acting at civilizational scale, would shape the institutions of a SETI civilization. Tough outlines these positive consequences as follows:

“(1) Humanity’s self-image: SETI has enlarged our view of ourselves and enhanced our sense of meaning. Increasingly, we feel a kinship with the civilizations whose signals we are trying to detect. (2) A fresh perspective: SETI forces us to think about how extraterrestrials might perceive us. This gives us a fresh perspective on our society’s values, priorities, laws and foibles. (3) Questions: SETI is stimulating thought and discussion about several fundamental questions. (4) Education: some broad-gage educational programs have already been centered around SETI. (5) Tangible spin-offs: in addition to providing jobs for some people, SETI provides various spin-offs, such as search methods, computer software, data, and international scientific cooperation. (6) Future scenarios: SETI will increasingly stimulate us to think carefully about possible detection scenarios and their consequences, about our reply, and generally about the role of extraterrestrial communication in our long-term future.”

These six consequences of SETI, scaled to the dimensions of civilization, would result in distinctive institutions of a SETI civilization.

[6] An historical analogy could be the Holmdel Horn Antenna, which was constructed by Bell Telephone Laboratories in conjunction with Project Echo, as well as to research noise on telephone lines. The Holmdel antenna was not intentionally optimized for discovering the CMBR, but it did detect the CMBR, and Penzias and Wilson initially did not know what they had found until they heard from Bernard F. Burke about the work of Robert H. Dicke and Jim Peebles. Thus Penzias and Wilson were preparing to publish a paper about the unidentified signal they found, not knowing its source, while Peebles was simultaneously preparing to publish a paper that such a signal might be found.

[7] Implicit in these scenarios is a distinction between intentional and unintentional technosignature detection, which suggests a similarly broad distinction between intentional and unintentional technosignature transmission. Laid out as a table, this allows us to construct what I will call the technosignature matrix:

[8] I have recently learned that the phrase “SETI paradigm” has previously been used in the paper “Testing a Claim of Extraterrestrial Technology” (2007) by H. P. Schuch and Allen Tough: “The traditional SETI paradigm holds that extraterrestrial intelligence can be detected from its electromagnetic signature.” However, I have been using this phrase according to the above exposition for several years, so I will continue to use it as I have described; I will note that my usage is not inconsistent with that of Schuch and Tough, though I include additional assumptions.

[9] In his Disturbing the Universe (chapter 19) Freeman Dyson laid out three presuppositions of SETI:

“Many of the people who are interested in searching for extraterrestrial intelligence have come to believe in a doctrine which I call the Philosophical Discourse Dogma, maintaining as an article of faith that the universe is filled with societies engaged in long-range philosophical discourse. The Philosophical Discourse Dogma holds the following truths to be self-evident:

1. Life is abundant in the universe.
2. A significant fraction of the planets on which life exists give rise to intelligent species.
3. A significant fraction of intelligent species transmit messages for our enlightenment.

If these statements are accepted, then it makes sense to concentrate our efforts upon the search for radio messages and to ignore other ways of looking for evidence of intelligence in the universe.”

My list of eight presuppositions of the SETI paradigm is somewhat more detailed than Dyson’s list of three, but I make no claim that mine is exhaustive, or that it is the definitive list; there is often more than one way to analyze concepts into their simple components.

[10] I have noticed recently that Adam Frank has been using “technosignature science” instead of SETI. There is still a kind of stigma attached to SETI (sometimes called “the giggle factor”), and “technosignature science” sounds so much more like serious research than search for extraterrestrial intelligence. Avi Loeb discusses this SETI stigma in his recent book Extraterrestrial: The First Sign of Intelligent Life Beyond Earth.

[11] A recent paper, “Concepts for future missions to search for technosignatures” (2021), by Hector Socas-Navarro, Jacob Haqq-Misra, Jason T. Wright, Ravi Kopparapu, James Benford, Ross Davis, TechnoClimes 2020 workshop participants, makes an effort to systematically outline what I have here called the modalities of technosignature searches.

[12] If we take Table 1 in (Socas-Navarro et al., 2021), with its dozen modalities of technosignatures, there are over 8,000 permutations of modalities, which when distributed across the three possibilities of technosignature interest yields more than 24,000 permutations of search that could be the basis of a SETI civilization. Most of these permutations will not be interestingly different from each other, but the sheer number of possibilities points to the many different pathways that a civilization might take and still be within the SETI paradigm (i.e., a member of the class of SETI civilizations).

[13] On this Kardashev wrote: “Estimates of the possibility of detecting a type I civilization and related experiments in the ‘OZMA’ project in the USA have revealed the extremely low probability of any such event.” And, “…a type I civilization would be capable of sending a return signal only after its energy consumption had increased measurably.” Kardashev also included a table of estimated bits per second that could be transmitted by type II and type III civilizations.

[14] An imaginative elaboration of the Encyclopedia Galactica idea, which also overlaps with the Bracewell probe idea, can be found in Gerard K. O’Neill’s 2081: A Hopeful View of the Human Future, pp. 260-265. In the same way that in a densely populated universe we would have detected a barrage of SETI signals the first time we turned on a radiotelescope, so too if we had lived in a densely populated universe our solar system could be densely populated with something like Bracewell probes. O’Neill notes, “It is possible that there are a thousand probes in the solar system observing us, sent by a thousand, different, independent civilizations—but that every one has been programmed only to observe and never to affect our natural development by signaling us…” (p. 264)

[15] As of writing this I cannot find the quote that I remember about hesitancy to publish negative SETI results, but I did find this: “With a steady stream of refereed, scientific papers carefully documenting negative results from observations conducted by a growing number of research groups in the US and Europe, plus review articles charting the progress of, and potential for, a systematic scientific exploration, the legitimacy of the SETI endeavor gradually enhanced within the scientific community (finally overcoming the stigma of Lowell’s fanciful publications).” Tarter, J. C., Agrawal, A., Ackermann, R., Backus, P., Blair, S. K., Bradford, M. T., … Vakoch, D. (2010). SETI turns 50: five decades of progress in the search for extraterrestrial intelligence. Instruments, Methods, and Missions for Astrobiology XIII. doi:10.1117/12.863128 The explicit mention of “carefully documenting negative results” as a hallmark of scientific legitimacy implies by what it does not say that this has often not been the case.

[16] Cf. Villarroel, B., Imaz, I., & Bergstedt, J. (2016). “Our Sky Now and Then: Searches for Lost Stars and Impossible Effects as Probes of Advanced Extra-Terrestrial Civilizations.” The Astronomical Journal, 152(3), 76. doi:10.3847/0004-6256/152/3/76

[17] If METI means merely transmitting a message, regardless of whether it is received and deciphered, then a METI civilization would experience success in proportion to its resources invested in the METI enterprise, but if METI is judged as a success only if a transmitted message is received, then METI success is no more correlated with the scale of the METI effort than SETI success is correlated with the SETI effort. Assuming a decoupling of transmission and reception, METI is a more durable central project than SETI.

[18] “The detection of evidence of another technological civilization will inform us that we are one among many. The immediate tasks of trying to decipher and interpret any encoded information will be tackled in parallel with deciding whether to respond, and expanding the search to find the other technologies we can now be confident are there. Having succeeded with the discovery of one particular type of technosignature, we are probably entitled to assume that this is the standard for communication among all of our cosmic neighbors. A successful detection would mean a reliable source of funding for future explorations and the ability to optimize the demonstrated, successful strategy so that additional detections will take place more rapidly than the first. Embedded information, if any, could also shape continuing searches.” Harp, G. R., Shostak, G. S., Tarter, J., Vakoch, D. A., Deboer, D., & Welch, J. (2012). “Beings on Earth: Is That All There Is?” Proceedings of the IEEE, 100 (Special Centennial Issue), 1700-1717. doi:10.1109/jproc.2012.2189789

[19] Harrison, A. (2009). “The Future of SETI: Finite effort or search without end?” Futures. 41(8).

[20] Humanity exists on a scale of time about two orders of magnitude greater than the scale of time of civilization, but on cosmological scales humanity equally occupies only a snapshot in time.


Notes on the Magnetic Ramjet II

Building a Bussard ramjet isn’t easy, but the idea has a life of its own and continues to be discussed in the technical literature, in addition to its long history in science fiction. Peter Schattschneider, who explored the concept in Crafting the Bussard Ramjet last February, has just published an SF novel of his own called The EXODUS Incident (Springer, 2021), where the Bussard concept plays a key role. But given the huge technical problems of such a craft, can one ever be engineered? In this second part of his analysis, Dr. Schattschneider digs into the question of hydrogen harvesting and the magnetic fields the ramjet would demand. The little-known work of John Ford Fishback offers a unique approach, one that the author has recently explored with Centauri Dreams regular A. A. Jackson in a paper for Acta Astronautica. The essay below explains Fishback’s ideas and the options they offer in the analysis of this extraordinary propulsion concept. The author is professor emeritus in solid state physics at Technische Universität Wien, but he has also worked for a private engineering company as well as the French CNRS, and has been director of the Vienna University Service Center for Electron Microscopy.

by Peter Schattschneider

As I mentioned in a recent contribution to Centauri Dreams, the BLC1 signal that flooded the press in January motivated me to check the science of a novel that I was finishing at the time – an interstellar expedition to Proxima Centauri on board a Bussard ramjet. Robert W. Bussard’s ingenious interstellar ramjet concept [1], published in 1960, inspired a generation of science fiction authors; the most celebrated is probably Poul Anderson with the novel Tau Zero [2]. The plot is supposedly based on an article by Carl Sagan [3], who references an early publication of Eugen Sänger stating that, due to time dilation and constant acceleration at 1 g, “[…] the human lifespan would be sufficient to circumnavigate an entire static universe” [4].

Bussard suggested using magnetic fields to scoop interstellar hydrogen as a fuel for a fusion reactor, but he did not discuss a particular field configuration. He left the supposedly simple problem to others as Newton did with the 3-body problem, or Fermat with his celebrated theorem. Humankind had to wait 225 years for an analytic solution of Newton‘s problem, and 350 years for Fermat’s. It took only 9 years for John Ford Fishback to propose a physically sound solution for the magnetic ramjet [5].

The paper is elusive and demanding. This might explain why adepts of interstellar flight are still discussing ramjets with who-knows-how-working superconducting coils that generate magnetic scoop fields reaching hundreds or thousands of kilometres out into space. Alas, the reality is technically far more complicated.

Fishback’s solution is amazingly simple. He starts from the well-known fact that charged particles spiral along magnetic field lines. So the task is to design a field whose lines converge at the entrance of the fusion reactor. A magnetic dipole field, as on Earth, where all field lines focus on the poles, would do the job. Indeed, the fast protons of the solar wind are guided towards the poles along the field lines, creating auroras. But they are trapped, bouncing between north and south, never reaching the magnetic poles. The reason is rather technical: dipole fields change too rapidly along the path of a proton to keep it on track.

Fishback simply assumed a sufficiently slow field variation along the flight direction, Bz = B0/(1 + αz), with a “very small” α. Everything else derives from there, in particular the parabolic shape of the magnetic field lines. Interestingly, throughout the text one looks in vain for field strengths, let alone a blueprint of the apparatus. The only hint at the visual appearance of the device is a drawing of a long, narrow paraboloid that would suck the protons into the fusion chamber. As a shortcut for what the author called the region dominated by the ramjet field I use here the term “Fishback solenoid”.
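The parabolic field lines follow from flux conservation: in the adiabatic limit the magnetic flux through a field line’s cross-section stays constant, so r²Bz(z) = r0²B0 along a line, and with Bz = B0/(1 + αz) this gives r(z) = r0√(1 + αz), i.e. z grows quadratically with r, a paraboloid. A minimal sketch of this relation (the values of α and r0 are illustrative choices of mine, not taken from [5]):

```python
import math

def field_line_radius(r0, z, alpha):
    """Radius of the field line that crosses r = r0 at z = 0, from flux
    conservation r^2 * Bz(z) = const with Bz = B0 / (1 + alpha * z)."""
    return r0 * math.sqrt(1.0 + alpha * z)

# Illustrative numbers: a line starting at r0 = 10 m widens only slowly,
# because alpha is "very small" as Fishback requires.
alpha = 1e-5  # per metre (assumed value for illustration)
for z in (0.0, 1e4, 1e5, 1e6):
    r = field_line_radius(10.0, z, alpha)
    print(f"z = {z:>9.0f} m   r = {r:7.1f} m")
```

The slow widening is exactly why the funnel has to be so long, a point that returns with a vengeance in the thrust estimate below.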

Fig. 1 is adapted from the original [5]. I added the coils that would create the appropriate field. Their distance along the axis indicates the decreasing current as the funnel widens. Protons come in from the right. Particles outside the scooping area As are rejected by the field. The mechanical support of the coils is indicated in blue. It constitutes a considerable portion of the ship’s mass, as we shall see below.

Fig. 1: Fishback solenoid with parabolic field lines. The current carrying coils are symbolized in red. The mechanical support is in blue. The strong fields exert hoop stress on the support that contributes considerably to the ship’s mass. Adapted from [5].

Searching for scientific publications that build upon Fishback’s proposal, Scopus returns six citations to date (April 2021). Some of them deal with the mechanical stress of the magnetic field, another aspect of Fishback’s paper that I discuss below, but as far as I can see the paraboloidal field itself has not been studied in the 50 years since. This is surprising, because normally authors continue research on a promising idea, and others jump on the subject, generating follow-up publications; but J. F. Fishback published only this one paper in his lifetime. [On Fishback and his tragic destiny, see John Ford Fishback and the Leonora Christine, by A. A. Jackson].

Solving the dynamic equation for protons in the Fishback field proves that the concept works. The particles are guided along the parabolic field lines toward the reactor as shown in the numerical simulation Fig. 2.

Fig. 2: Proton paths in an (r,z)-diagram. r is the radial distance from the symmetry axis, z is the distance along this axis. The ship flies at 0.56 c (β=0.56) in the positive z-direction. In the ship’s rest frame, protons arrive with a kinetic energy of 194 MeV from the top. Left: Protons entering the field at z=200 km are focussed to the reactor mouth at the coordinate origin, gyrating around the field lines. Particles following the red paths make it to the chamber; protons following the black lines spiral back. The thick grey parabola separates the two regimes. Right: Zoom into the first 100 m in front of the reactor mouth of radius 10 m. Magnetic field lines are drawn in blue.

The reactor intake is centered at (r,z)=(0,0). In the ship’s rest frame the protons arrive from the top – here with 56% of light speed, the maximum speed of the EXODUS in my novel [8]. Some example trajectories are drawn. Protons spiral down the magnetic field lines, as is known from earth’s magnetic field, and enter the fusion chamber (red lines). The scooping is well visible. The reactor mouth has an assumed radius of 10 m. A closer look into the first 100 m (right figure) reveals an interesting detail: Only the first two trajectories enter the reactor. Protons travelling beyond the bold grey line are reflected before they reach the entrance, just as charged particles bounce back in the earth’s field before they reach the poles. From the figure it is evident that at an axial length of 200 km of the Fishback solenoid the scoop radius is disappointingly low – only 2 km. Nevertheless, the compression factor (focussing ions from this radius to the 10 m mouth) of 1:40,000 is quite remarkable.
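The quoted compression factor is simply the ratio of the scoop area to the reactor-mouth area, which a one-liner confirms from the two radii given above:

```python
r_scoop = 2000.0  # m, scoop radius at an axial length of 200 km (Fig. 2)
r_mouth = 10.0    # m, assumed reactor mouth radius

# Area compression = (radius ratio)^2
compression = (r_scoop / r_mouth) ** 2
print(f"compression factor 1:{compression:,.0f}")  # prints: compression factor 1:40,000
```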

The adiabatic condition mentioned above allows a simple expression for the area from which protons can be collected. The outer rim of this area is indicated by the thick grey line in Fig. 2. The superconducting coils of the solenoid should ideally be built following this paraboloid, as sketched in Fig. 1. Tuning the ring current density to

yields a result that approximates Fishback‘s field closely.

What does it mean in technical terms? Let me discuss an idealized example, with Poul Anderson’s novel in mind. The starship Leonora Christina accelerates at 1 g, imposing artificial earth gravity on the crew. Let us assume that the ship‘s mass is a moderate 1100 tons (slightly less than 3 International Space Stations). For 1 g acceleration on board, we need a peak thrust of ~11 million newtons, about 1/3 of that of the first stage of the Saturn V rocket. The ship must be launched with fuel on stock because the ramjet operates only beyond a given speed, often taken as 42 km/s, the escape velocity from the solar system. In the beginning the thrust is low. It increases with the ship’s speed because the proton throughput increases, asymptotically approaching the peak thrust.

Assuming complete conversion of fusion energy into thrust, total ionisation of hydrogen atoms, and neglecting drag from the deflection of protons in the magnetic field, at an interstellar density of 10⁶ protons/m³ the “fuel” collected over one square kilometre yields a peak thrust of 1.05 newtons, a good number for order-of-magnitude estimates. That makes a scooping area of ~10 million square km, which corresponds to an entrance radius of about 1800 km for the Fishback solenoid. From Fig. 2 it is straightforward to extrapolate the bold grey parabola to the necessary length of the funnel – one ends up with a fantastic 160 million km, more than the distance from the earth to the sun. (At this point it is perhaps worth mentioning that this contribution is a physicist’s treatise and not that of an engineer.)
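These order-of-magnitude numbers are easy to reproduce: the 11 MN thrust requirement divided by 1.05 N per km² fixes the scooping area, the entrance radius follows from A = πr², and scaling the Fig. 2 parabola (scoop radius 2 km at axial length 200 km, with z growing as r²) gives the funnel length. A quick check, with all inputs taken from the text:

```python
import math

thrust_needed = 11e6      # N, for 1 g on an 1100-ton ship
thrust_per_km2 = 1.05     # N per square kilometre of scooping area

# Scooping area and equivalent entrance radius of the Fishback solenoid
area_km2 = thrust_needed / thrust_per_km2      # ~1.0e7 km^2
radius_km = math.sqrt(area_km2 / math.pi)      # ~1800 km

# Parabolic funnel: z scales as r^2; Fig. 2 anchors 200 km length at 2 km radius
length_km = 200.0 * (radius_km / 2.0) ** 2     # ~1.7e8 km

print(f"scooping area   ~ {area_km2:.2e} km^2")
print(f"entrance radius ~ {radius_km:.0f} km")
print(f"funnel length   ~ {length_km:.2e} km")
```

The extrapolated length comes out slightly above the 160 million km quoted above, consistent with the rounding of the inputs; either way it exceeds the earth–sun distance of about 150 million km.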

Plugging the scooping area into the relativistic rocket equation tells us which peak acceleration is possible. The results are summarised in Table 1. For convenience, speed is given in units of the light speed, β=v/c. Additionally, the specific momentum βγ is given, where

γ = 1/√(1−β²)

is the famous relativistic factor. (Note: The linear momentum of 1 kg of matter would be βγc.) Acceleration is in units of the earth gravity acceleration, g=9.81 m/s².

Under continuous acceleration such a starship would pass Proxima Centauri after 2.3 years, arrive at the galactic center after 11 years, and at the Andromeda galaxy after less than 16 years. Obviously, this is not earth time but the time elapsed for the crew, who profit from time dilation. There is one problem: the absurdly long Fishback solenoid. Even going down to a scooping radius of 18 km, the superconducting coils would reach out 16,000 km in the flight direction. In this case the flight to our neighbour star would last almost 300 years.
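The crew times quoted here follow from the standard constant-proper-acceleration formula of special relativity: for a one-way distance d covered at proper acceleration a, the elapsed ship time is τ = (c/a)·arcosh(1 + ad/c²). A sketch reproducing the figures (the distances are the usual rounded values, my assumption; the low-thrust starting phase would add a year or more, as noted in the Table 1 caption):

```python
import math

C = 2.998e8    # speed of light, m/s
G = 9.81       # earth gravity acceleration, m/s^2
LY = 9.461e15  # metres per light year
YEAR = 365.25 * 24 * 3600  # seconds

def ship_time_years(d_ly, a=G):
    """Crew (proper) time to cover d_ly light years at constant proper
    acceleration a, from tau = (c/a) * arcosh(1 + a*d/c^2)."""
    d = d_ly * LY
    tau = (C / a) * math.acosh(1.0 + a * d / C**2)
    return tau / YEAR

for name, d_ly in [("Proxima Centauri", 4.25),
                   ("galactic center", 26_000),
                   ("Andromeda (M31)", 2_500_000)]:
    print(f"{name:17s} {ship_time_years(d_ly):5.1f} yr ship time")
```

Running this gives roughly 2.3 years to Proxima Centauri, 11 years to the galactic center, and 15 years to M31, matching the numbers above.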

Table 1: Acceleration and travel time to Proxima Centauri, the galactic center, and the Andromeda galaxy M31, as a function of scooping area. βγ is the specific momentum at the given ship time. A ship mass of 1100 tons, reactor entrance radius 10 m, and constant acceleration from the start was assumed. During the starting phase the thrust is low, which increases the flight time by one to several years depending on the acceleration.

Fishback pointed out another problem of Bussard ramjets [5]. The magnetic field exerts strong outward Lorentz forces on the superconducting coils. They must be balanced by some rigid support, otherwise the coils would break apart. When the ship gains speed, the magnetic field must be increased in order to keep the protons on track. Consequently, for any given mechanical support there is a cut-off speed beyond which the coils would break. For the Leonora Christina a coil support made of a high-strength “patented” steel must have a mass of 1100 tons in order to sustain the magnetic forces that occur at β=0.74.

Table 2: Cut-off speeds βc and cut-off specific momenta (βγ)c (upper bounds) for several support materials. (βγ)F from [5], (βγ)M from [7]. σy/ρ is the ratio of the mechanical yield stress to the mass density of the support material. Bmax is the maximum magnetic field at the reactor entrance at cut-off speed. A scooping area of 10 million km² was assumed, allowing a maximum acceleration of ~1 g for a ship of 1100 tons. Values in italics for Kevlar and graphene, unknown in the 1960s, were calculated based on equations given in [7].

But we assumed above that 1100 tons is the ship‘s entire mass. Therefore the acceleration must drop long before the ship reaches 0.74 c. The cut-off speed βc=0.74 is an upper bound (for mathematicians: not necessarily the supremum) for the speed at which 1 g acceleration can be maintained. Lighter materials for the coil support would save mass. Fishback [5] calculated upper bounds for the speed at which an acceleration of 1 g is still possible for several materials such as aluminium or diamond (at that time the strongest lightweight material known). Values are shown in Table 2 together with (βγ)c.
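The scaling behind Table 2 can be illustrated with a crude thin-shell estimate (this is my simplification, not Fishback’s full treatment of the stress distribution along the paraboloid): the magnetic pressure B²/2μ₀ at the reactor throat must not exceed the yield stress σy of the support, so the sustainable field scales as Bmax ≈ √(2μ₀σy), and for a fixed support mass the figure of merit is σy/ρ, as in Table 2. The yield strengths and densities below are rough handbook values, assumptions on my part, and the resulting fields are illustrative rather than the Table 2 entries:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

# (yield stress in Pa, density in kg/m^3) -- rough handbook values, assumed
materials = {
    "patented steel": (2.0e9, 7850.0),
    "Kevlar":         (3.6e9, 1440.0),
    "graphene":       (1.3e11, 2267.0),
}

for name, (sigma_y, rho) in materials.items():
    # Thin-shell limit: magnetic pressure B^2 / (2 mu0) <= yield stress
    b_max = math.sqrt(2 * MU0 * sigma_y)
    merit = sigma_y / rho  # specific strength, the sigma_y/rho of Table 2
    print(f"{name:14s} B_max ~ {b_max:6.0f} T   sigma_y/rho ~ {merit:.2e} J/kg")
```

Even with graphene the sustainable field stays in the hundreds of tesla, which is why lighter, stronger supports push the cut-off momentum up but cannot remove it.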

Martin [7] found some numerical errors in [5]. Apart from that, Fishback used an optimistically biased (βγ)c. Closer scrutiny, in particular the use of a more realistic rocket equation [6], results in tighter upper bounds. Using graphene, the strongest material known, the specific cut-off momentum is 11.41. This value would be reached after a flight of three years, at a distance of 10 light years. After that point the acceleration would rapidly drop to values making it hopeless to reach the galactic center in a lifetime.

In conclusion, the interstellar magnetic ramjet has severe construction problems. Some future civilization may have the know-how to construct fantastically long Fishback solenoids and to overcome the minimum mass condition. We should send a query to the guys who flashed the BLC1 signal from Proxima Centauri. The response is expected in 8.5 years at the earliest. In the meantime the educated reader may consult a tongue-in-cheek solution that can be found in my recent scientific novel [8].

Acknowledgements

Many thanks to Al Jackson for useful comments and for pointing out the source from which Poul Anderson got the idea for Tau Zero, and to Paul Gilster for referring me to the seminal paper of John Ford Fishback.

References

[1] Robert W. Bussard: Galactic Matter and Interstellar Flight. Astronautica Acta 6 (1960), 1-14.

[2] Poul Anderson: Tau Zero. Doubleday 1970.

[3] Carl Sagan: Direct contact among galactic civilizations by relativistic inter-stellar space flight, Planetary and Space Science 11 (1963) 485-498.

[4] Eugen Sänger: Zur Mechanik der Photonen-Strahlantriebe. Oldenbourg 1956.

[5] John F. Fishback: Relativistic Interstellar Space Flight. Astronautica Acta 15 (1969), 25-35.

[6] Claude Semay, Bernard Silvestre-Brac: The equation of motion of an interstellar Bussard ramjet. European Journal of Physics 26 (1) (2005) 75-83.

[7] Anthony R. Martin: Structural limitations on interstellar space flight. Astronautica Acta 16 (6) (1971) 353-357.

[8] Peter Schattschneider: The EXODUS Incident. Springer 2021,
ISBN: 978-3-030-70018-8. https://www.springer.com/de/book/9783030700188#aboutBook
