Centauri Dreams
Imagining and Planning Interstellar Exploration
Megastructures: Adrift in the Temporal Sea
Here about the beach I wander’d, nourishing a youth sublime
With the fairy tales of science, and the long result of Time…
—Tennyson
Temporal coincidence plays havoc with our ideas about other civilizations in the cosmos. If we want to detect them, their society must at least have developed to the point of manipulating electromagnetic waves, and its technology must be powerful enough to be noticed. The kind of signals people were listening to 100 years ago on crystal sets wouldn’t remotely fit the bill, and neither would our primitive TV signals of the 1950s. So we’re looking for strong signals and cultures older than our own.
Now consider how short a time we’re talking about. We have been using radio for a bit over a century, which is on the order of one part in 100,000,000 of the lifespan of our star. You may recall the work of Brian Lacki, which I wrote about four years ago (see Alpha Centauri and the Search for Technosignatures). Lacki, now at Oxford, points out how unlikely it would be to find any two stars remotely near each other whose civilization ‘window’ corresponded to our own. In other words, even if we last a million years as a technological civilization, we’re just the blink of an eye in cosmic time.
Image: Brian Lacki, whose work for Breakthrough Listen continues to explore both the scientific and philosophical implications of SETI. Credit: University of Oxford.
Adam Frank at the University of Rochester has worked this same landscape. He thinks we might well find ourselves in a galaxy that at one time or another had flourishing civilizations that are long gone. We are separated not only in space but also in time. Maybe there are such things as civilizations that are immortal, but it seems more likely that all cultures eventually end, even if by morphing into some other form.
What would a billion-year-old civilization look like? Obviously we have no idea, but it’s conceivable that such a culture, surely non-biological and perhaps non-corporeal, would be able to manipulate matter and spacetime in ways that might simply mimic nature itself. A civilization like that would be impossible to find. A more likely SETI catch would be a civilization that has had space technologies just long enough to have the capability of interstellar flight on a large scale. In a new paper, Lacki looks at what its technosignature might look like. If you’re thinking Dyson spheres or swarms, you’re on the right track, but as it turns out, such energy gathering structures have time problems of their own.
Lacki’s description of a megaswarm surrounding a star:
These swarms, practically by definition, need to have a large number of elements, whether their purpose is communication or exploitation. Moreover, the swarm orbital belts need to have a wide range of inclinations. This ensures that the luminosity is being collected or modulated in all directions. But this in turn implies a wide range of velocities, comparable to the circular orbital velocity. Another problem is that the number of belts that can “fit” into a swarm without crossing is limited.
Image: Artist’s impression of a Dyson swarm. Credit: Archibald Tuttle / Wikimedia Commons. CC BY-SA 4.0.
Shards of Time
The temporal problem persists, for even a million-year ‘window’ is a sliver on the cosmic scale. The L factor in the Drake equation is a great unknown, but it is conceivable that the death of a million-year-old culture would be survived by its artifacts, acting to give us clues to its past just as fossils tell us about the early stages of life on Earth. So might we hope to find an ancient, abandoned Dyson swarm around a star close enough to observe?
Lacki is interested in failure modes, the problem of things that break down. Helpfully, megastructures are by definition gigantic, and it is not inconceivable that Dyson structures of one kind or another could register in our astronomical data. As the paper notes, a wide variety covering different fractions of the host star can be imagined. We can scale a Dyson swarm down or up in size, with perhaps the largest ever proposed being from none other than Nikolai Kardashev, who discusses in a 1985 paper a disk parsecs wide built around a galactic nucleus (!).
I’m talking about Dyson swarms instead of spheres because from what we know of material science, solid structures would suffer from extreme instabilities. But swarms can be actively managed. We have a history of interest in swarms dating back to 1958, when Project Needles at MIT contemplated placing a ring of 480,000,000 copper dipole antennas in orbit to enhance military communications (the idea was also known as Project West Ford). Although two launches were carried out experimentally, the project was eventually shelved because of advances in communications satellites.
So we humans already ponder enclosing the planet in one way or another, and planetary swarms, as Lacki notes, are already with us, considering the constellations of satellites in Earth orbit, the very early stages of a mini Dyson swarm. Just yesterday, the announcement by SpinLaunch that it will launch hundreds of microsatellites into orbit using a centrifugal cannon gave us another instance. Enclosing a star in a gradually thickening swarm seems like one way to harvest energy, but if such structures were built, they would have to be continuously maintained. The civilization behind a Dyson swarm needs to survive if the swarm itself is to remain viable.
The gist of Lacki’s paper is that on the timescales we’re talking about, an abandoned Dyson swarm would be in trouble within a surprisingly short period of time. Indeed, collisions can begin once the guidance systems in place begin to fail. What Lacki calls the ‘collisional time’ is roughly an orbital period divided by the covering fraction of the swarm. How long it takes to develop into a ‘collisional cascade’ depends upon the configuration of the swarm. Let me quote the paper, which identifies:
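That scaling — collisional time roughly equal to orbital period divided by covering fraction — is easy to play with numerically. The sketch below is my own illustration of the scaling, with made-up example swarms rather than configurations from the paper:

```python
import math

def collisional_time_years(semi_major_axis_au, covering_fraction, stellar_mass_msun=1.0):
    """Rough collisional timescale for an unmaintained swarm:
    orbital period divided by the swarm's covering fraction."""
    # Kepler's third law gives the period in years for a in AU, M in solar masses
    period_years = math.sqrt(semi_major_axis_au**3 / stellar_mass_msun)
    return period_years / covering_fraction

# Illustrative values (my assumptions, not Lacki's cases):
# a sparse occulter swarm far out (10 AU, covering 0.01% of the sky)
print(collisional_time_years(10.0, 1e-4))   # a few hundred thousand years
# a dense energy-harvesting swarm at 1 AU covering half the sky
print(collisional_time_years(1.0, 0.5))     # only a couple of years
```

The point the numbers make is Lacki’s: dense swarms close to the star start colliding almost as soon as guidance fails, while even sparse, distant swarms survive well under a million years.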
…a major threat to megastructure lifespans: if abandoned, the individual elements eventually start crashing into each other at high speeds (as noted in Lacki 2016; Sallmen et al. 2019; Lacki 2020). Not only do the collisions destroy the crashed swarm members, but they spray out many pieces of wreckage. Each of these pieces is itself moving at high speeds, so that even pieces much smaller than the original elements can destroy them. Thus, each collision can produce hundreds of missiles, resulting in a rapid growth of the potentially dangerous population and accelerating the rate of collisions. The result is a collisional cascade, where the swarm elements are smashed into fragments that are in turn smashed into smaller pieces, and so on, until the entire structure has been reduced to dust. Collisional cascades are thought to have shaped the evolution of minor Solar System body objects like asteroid families and the irregular satellites of the giant planets (Kessler 1981; Nesvorný et al.).
You might think that swarm elements could be organized so that their orbits reduce or eliminate collisions or render them slow enough to be harmless. But gravitational perturbations remain a key problem because the swarm isn’t an isolated system, and in the absence of active maintenance, its degradation is relatively swift.
Image: This is Figure 2 from the paper. Caption: A sketch of a series of coplanar belts heating up with randomized velocities. In panel (a), the belt is a single orbit on which elements are placed in an orderly fashion. Very small random velocities (meters per second or less) cause small deviations in the elements’ orbits, though so small that the belt is still “sharp”, narrower than the elements themselves (b). The random velocities cause the phases to desynchronize, leading to collisions, although they are too slow to damage the elements (cyan bursts). The collision time decreases rapidly in this regime until the belt is as wide as the elements themselves and becomes “fuzzy” (c). The collision time is at its minimum, although impacts are still too small to cause damage. In panel (d), the belts are still not wide enough to overlap, but relative speeds within the belts have become fast enough to catastrophically damage elements (yellow explosions), and are much more frequent than the naive collisional time implies because of the high density within belts. Further heating causes the density to fall and collisions to become rarer until the belts start to overlap (e). Finally, the belts grow so wide that each belt overlaps several others, with collisions occurring between objects in different belts too (f), at which point the swarm is largely randomized. Credit: Brian Lacki.
Keeping the Swarm Alive
Lacki’s mathematical treatment of swarm breakdown is exhaustive and well above my payscale, so I send you to the paper if you want to track the calculations that drive his simulations. But let’s talk about the implications of his work. Far from being static technosignatures, megaswarms surrounding stars are shown to be highly vulnerable. Even the minimal occulter swarm he envisions turns out to have a collision time of less than a million years. A megaswarm needs active maintenance – in our own system, Jupiter’s gravitational effect on a megaswarm would destroy it within several hundred thousand years. These are wafer-thin time windows if scaled against stellar lifetimes.
The solution is to actively maintain the megaswarm and remove perturbing objects by ejecting them from the system. An interesting science fiction scenario indeed, in which extraterrestrials might sacrifice systems planet by planet to maintain a swarm. Lacki works the simulations through gravitational perturbations from passing stars and in-system planets and points to the Lidov-Kozai effect, which turns circular orbits at high inclination into eccentric orbits at low inclination. Also considered is radiation pressure from the host star and radiative forces resulting from the Yarkovsky effect.
How else to keep a swarm going? From the paper:
For all we know, the builders are necessarily long-lived and can maintain an active watch over the elements and actively prevent collisions, or at least counter perturbations. Conceivably, they could also launch tender robots to do the job for them, or the swarm elements have automated guidance. Admittedly, their systems would have to be kept up for millions of years, vastly outlasting anything we have built, but this might be more plausible if we imagine that they are self-replicating. In this view, whenever an element is destroyed, the fragments are consumed and forged into a new element; control systems are constantly regenerated as new generations of tenders are born. Even then, self-replication, repair, and waste collection are probably not perfectly efficient.
The outer reaches of a stellar system would be a better place for a Dyson swarm than the inner system: energy collection is more efficient close to the star, but that environment is hostile to small swarm elements. The habitable zone around a star is perhaps the least likely place to look for such a swarm given the perturbing effects of other planets. And if we take the really big picture, we can talk about where in the galaxy swarms might be likely: low density environments where interactions with other stars are unlikely, as in the outskirts of large galaxies and in their haloes. “This suggests,” Lacki adds, “that megaswarms are more likely to be found in regions that are sometimes considered disfavorable for habitability.”
Ultimately, an abandoned Dyson swarm is ground into microscopic particles via the collision cascades Lacki describes, evolving into nothing more than dispersed ionized gas. If we hope to find an abandoned megastructure like this in our practice of galactic archaeology, what are the odds that we will find it within the window of time within which it can survive without active maintenance? We’d better hope that the swarm creators have extremely long-lived civilizations if we are to exist in the same temporal window as the swarm we want to observe. A dearth of Dyson structures thus far observed may simply be a lack of temporal coincidence, as we search for systems that are inevitably wearing down without the restoring hand of their creators.
The paper is Lacki, “Ground to Dust: Collisional Cascades and the Fate of Kardashev II Megaswarms,” accepted at The Astrophysical Journal (preprint). The Kardashev paper referenced above is “On the Inevitability and the Possible Structure of Super Civilizations,” in The Search for Extraterrestrial Life: Recent Developments, ed. M. D. Papagiannis, Vol. 112, 497–504.
The Statistically Quantitative Information from Null Detections of Living Worlds: Lack of positive detections is not a fruitless search
It’s no surprise, human nature being what it is, that our early detections of possible life on other worlds through ‘biosignatures’ are immediately controversial. We have to separate signs of biology from processes that may operate completely outside of our conception of life, abiotic ways to produce the same results. My suspicion is that this situation will persist for decades, claim vs. counter-claim, with heated conference sessions and warring papers. But as Alex Tolley explains in today’s essay, even a null result can be valuable. Alex takes us into the realm of Bayesian statistics, where prior beliefs are gradually adjusted as new data come in. We’re still dealing with probabilities, but in a fascinating way, uncertainties are gradually being decreased though never eliminated. We’re going to be hearing a lot more about these analytical tools as the hunt continues with next generation telescopes.
by Alex Tolley
Introduction
The venerable Drake equation’s early parameters are increasingly constrained as our exoplanet observations continue. We now have a good sample of thousands of exoplanets to estimate the fraction of planets in the habitable zone that could support life. This last firms up the term ne, the mean number of planets that could support life per star with planets.
The focus now shifts to the fraction of habitable planets that actually have life (fl). The first team to confirm a planet with life will likely make the history books.
However, as with the failure of SETI to receive a signal from extraterrestrial intelligence (ETI) since the 1960s, there will be disappointments in detecting extraterrestrial life. The early expectation of Martian vegetation proved incorrect, as did the controversial Martian microbes thought to have been detected by the Viking lander life detection experiments in 1976. More recently, the phosphine biosignature in the Venusian atmosphere has not been confirmed, and now the claimed dimethyl sulfide (DMS) biosignature on K2-18b is also questioned.
While we hope that an unambiguous biosignature is detected, are null results just disappointments that have no value in determining whether life is present in the cosmos, or do they add some value in determining a frequency of habitable planets with life?
Before diving into a recent paper that attempts to answer this question, I want to give a quick introduction to statistics. The most common approach is Fisherian (frequentist) statistics, in which collected sample data are used to estimate the distribution parameters of the population from which the sample is drawn. It is most often deployed in calculating the accuracy of a mean value and a 95% range of values as part of a test of significance, and it works well when the sample contains enough examples to represent the population. For binary events, such as heads in a coin test, the Binomial distribution provides the expected frequencies of outcomes for unbiased and slightly biased coins.
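The coin case is worth making concrete, since the whole biosignature problem is built on the same binary-event logic. A minimal sketch of the standard binomial formula (illustrative only, not anything from the paper under discussion):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n tosses of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# A fair coin tossed 10 times: 5 heads is the single most likely outcome...
print(binom_pmf(5, 10, 0.5))    # 0.2461
# ...but a run of 10 straight heads is merely rare, not impossible
print(binom_pmf(10, 10, 0.5))   # about 0.001
```

The trouble described next is exactly the regime where this machinery goes quiet: when p is tiny and every observed outcome is a “tail,” the sample alone tells you almost nothing about how small p really is.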
However, a problem arises when the frequency of a binary event is extremely low, so that the sample of events detects no positive events, such as heads, at all. In the pharmaceutical industry, while efficacy of a new drug needs a large sample size for validity, the much larger phase 4 marketing period is used to monitor for rare side effects that are not discoverable in the clinical trials. There have been a number of well known drugs that were withdrawn from the market during this period, perhaps the most famous being thalidomide and its effects on fetal development. In such circumstances, Fisherian statistics are unhelpful in determining probabilities of rare events with sample sizes inadequate to catch these rare events. As we have seen with SETI, the lack of any detected signal provides no value for the probability that ETI exists, only that it is either rare, or that ETI is not signaling. All SETI scientists can do is keep searching with the hope that eventually a signal will be detected.
Bayesian statistics are a different approach that can help overcome the problem of determining the probability of rare events, one that has gained in popularity over the last few decades. It assumes a prior belief, perhaps no more than a guess, of the probability of an event, and then adjusts it with new observed data as they are acquired. For example, one assumes a coin toss is 50:50 heads or tails. If the succeeding tosses show only tails, then the coin toss is biased, and each new resulting tail decreases the probability of a head resulting on the next toss. For our astrobiological example, if life is very infrequent on habitable worlds, Bayesian statistics can be informative to estimate the probability of detection success.
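The coin-toss updating just described has a simple closed form when the prior belief is expressed as a Beta distribution. This is a minimal sketch of that standard conjugate update, not the paper’s machinery:

```python
def update_beta(alpha, beta, heads, tails):
    """Conjugate Bayesian update: a Beta(alpha, beta) prior over P(heads),
    after observing `heads` successes and `tails` failures, becomes
    Beta(alpha + heads, beta + tails)."""
    return alpha + heads, beta + tails

def mean(alpha, beta):
    """Expected value of P(heads) under a Beta(alpha, beta) belief."""
    return alpha / (alpha + beta)

# Start from an even-handed belief: Beta(1, 1) is uniform over the coin's bias.
a, b = 1.0, 1.0
for toss in range(10):                  # ten tails in a row
    a, b = update_beta(a, b, heads=0, tails=1)
print(mean(a, b))                       # 1/12, i.e. about 0.083
```

Each new tail nudges the expected probability of heads on the next toss downward, which is precisely the behavior the astrobiological case needs: each null detection nudges the believed frequency of life downward without ever declaring it zero.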
In essence, the Bayesian method updates beliefs in the probability of events given new observations of those events. With a large enough number of observations, the estimate converges on the true probability, whether or not that agrees with the initial expectation.
I hope it is clear that this Bayesian approach is well-suited to the announcement of detecting a biosignature on a planet, where detections to date have either been absent or controversial. Each detection or lack of detection in a survey will update our expectations of the frequency of life. At this time, the probability of life on a potentially habitable planet ranges from 0 (life is unique to Earth) to 1.0 (some form of life appears wherever it is possible). Beliefs that the abiogenesis of life is extremely hard due to its complexity push the probability of detecting life close to 0. Conversely, the increasing evidence that life emerges quickly on a new planet, such as within 100 million years on Earth [6], implies that the probability of a habitable planet having life is close to 1.0.
The Angerhausen et al paper I am looking at today (citation below) considers a number of probability distributions depending on beliefs about the probability of life, rather than a single value for each belief. These are shown in Figure 1 and explained in Box 2. Note in particular the Kerman and Jeffreys distributions, which are bimodal with the highest likelihoods at the extremes. They reflect the “fine tuning” argument of Kipping & Lewis [2], explained in an earlier Centauri Dreams post [3]: either life is almost absent or it is ubiquitous, and intermediate probabilities of appearing on a habitable planet are unlikely. In other words, the probability is either very close to 0 or very close to 1.0. The paper relies on the Beta distribution [Box 3], whose two parameters count the positive and negative instances of a binary event, e.g. life detected or not detected. This distribution can approximate the Binomial distribution, but can also represent the different prior shapes.
Figure 1. The five different prior distributions as probability density functions (PDF) used in the paper and explained in Box 2. Note the Kerman and Jeffreys distributions that bias the probabilities at the extremes, compared to the “biased optimist” that has 3 habitable worlds around the sun (Venus, Earth, and Mars), but with only the Earth having life.
The Beta distribution is updated by the number of positive and negative biosignature detections observed. At the start, its parameters encode the believed prior distribution, which can take any values, from guesses to preliminary observational results, which at this time are relatively few. After all, we are still arguing over whether we have even detected biosignature molecules, let alone confirmed their detection. We then adjust those expectations with each new observation.
What happens when we start a survey and gain events of biosignature detection? Using the Jeffreys prior distribution, let us see the effect of observing no biosignature detections for up to 100 negative biosignature observations.
Figure 2a. The effect of increasing the number of null observations on a skewed distribution, showing the growing certainty of the low probability frequencies. While the density at high probabilities appears to rise as well, the accumulating null detections imply that the relative frequency of positives declines.
Figure 2b. The increasing certainty that the frequency of life on habitable planets tends towards 0 as the number of null biosignature detections increases. The starting value of 0.5 is taken from the Jeffreys prior distribution. The implied frequency is the new frequency of positives as the null detections reduce the observed frequency and push the PDF towards the lower bound of 0 (see Figure 1).
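The updating behind Figures 2a and 2b is compact enough to sketch. The code below applies the Jeffreys prior Beta(½, ½) to a run of null detections; the 99.9% upper bound is my own quick Monte Carlo estimate of the posterior quantile, not the paper’s calculation:

```python
import random

random.seed(1)  # deterministic for illustration

def posterior_after_nulls(n_nulls, a=0.5, b=0.5):
    """Jeffreys prior Beta(1/2, 1/2) over the frequency of life f,
    updated with n_nulls null detections: posterior is Beta(a, b + n_nulls)."""
    return a, b + n_nulls

def posterior_mean(a, b):
    return a / (a + b)

def upper_bound(a, b, quantile=0.999, draws=100_000):
    """Monte Carlo estimate of a posterior quantile (a rough '3 sigma' upper limit)."""
    samples = sorted(random.betavariate(a, b) for _ in range(draws))
    return samples[int(quantile * draws)]

for n in (1, 10, 100):
    a, b = posterior_after_nulls(n)
    print(n, round(posterior_mean(a, b), 4), round(upper_bound(a, b), 3))
```

After 100 nulls the posterior mean sits near half a percent and the upper bound near a few percent, matching the qualitative story of Figure 2b: null detections squeeze the believed frequency of life toward zero without ever reaching it.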
So far, so good. If biosignature detections were unambiguous, so that the presence or absence of life could be inferred with certainty from the observations, then sampling up to 100 habitable worlds would indicate with high confidence whether life is rare or ubiquitous. If every star system had at least 1 habitable world, this sample would include most stars within 20 ly of Earth. In reality, if we limit our stars to spectral types F, G & K, which represent 5-10% of all stars, and assume half of these have at least 1 habitable world, then we need to search 2000-4000 star systems, all well within 100 ly, a tiny fraction of the galaxy.
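The survey-size arithmetic is worth spelling out. The fractions here are the article’s working assumptions (5-10% FGK stars, half hosting a habitable-zone world), not measured values:

```python
def systems_to_search(target_worlds, fgk_fraction, fraction_with_hz_world=0.5):
    """Star systems to survey to reach `target_worlds` habitable worlds,
    given the fraction of stars that are F/G/K and the assumed fraction
    of those hosting at least one habitable-zone world."""
    return target_worlds / (fgk_fraction * fraction_with_hz_world)

print(round(systems_to_search(100, 0.10)))  # 2000 systems if FGK stars are 10%
print(round(systems_to_search(100, 0.05)))  # 4000 systems if FGK stars are 5%
```

Either way the search volume stays within roughly 100 light years, a vanishingly small corner of the galaxy.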
The informed reader should now balk at the status of this analysis. Biosignatures are not unambiguous [4]. Firstly, detecting a faint trace of a presumed biosignature gas is not certain, as the phosphine on Venus and the DMS/DMDS on TOI-270d detections make clear. Both are controversial. In the case of Venus, we are not certain that the phosphine signal is real, that the molecule has been correctly identified, or that no abiotic mechanism can create phosphine in Venus’ very different environment. As discussed in my post on the ambiguity of biosignatures [4], prior assumptions that biosignatures are unambiguous have been reexamined, and astrobiologists have responded by building a scale of certainties for assessing whether a planet is inhabited, based on the contextual interpretation of biosignature data.
The authors of the paper allow for this by modifying the formula to accommodate both false-positive and false-negative biosignature detection rates, as well as interpretation uncertainty about a detected biosignature. They also calculate the upper bound at about 3 sigma (99.9%) of the frequency of observations. Figure 3 shows the effect of these uncertainties on the location and size of the maximum of the probability density function for the Jeffreys prior.
Figure 3. The effects of sample and interpretation uncertainties, best fit and 99.9% bounds, for null detections. As both sample and interpretation uncertainty increase, the expected number of positive detections increases. The Jeffreys prior distribution is used.
Figure 3 implies that with an interpretation uncertainty of just 10%, even after 100 null observations the calculated frequency of life increases two orders of magnitude, from 0.1% to 10%. The upper bound increases from less than 10% to between 20 and 30%. Therefore, even if 100 new observations of habitable planets turn up no detected biosignatures, the frequency of inhabited planets could still be as high as ⅕ to ⅓ of habitable planets at this level of certainty. As one can see from the asymptotes, no amount of further observations will increase the certainty that life is absent in the population of stars in the galaxy. Uncertainty is the gift that allows astrobiologists to maintain hope that there are living worlds to discover.
Lastly, the authors apply their methodology to two projects designed to discover habitable worlds: the Habitable Worlds Observatory [7] and the Large Interferometer for Exoplanets (LIFE) [8] concepts. The analyses are shown in Figure 4. The vertical lines indicate the expected number of positive detections by the proposed methods and the expected frequencies of detections with their associated upper bounds due to uncertainty.
Figure 4. Given the uncertainties, the authors calculate the 99.9% (>3 sigma) upper limit on the null hypothesis of no life, matched against data for two surveys by Morgan with the Habitable Worlds Observatory (HWO) and two by Kammerer with the Large Interferometer for Exoplanets (LIFE) [7, 8].
The authors note that it may be incorrect to use the term “habitable” if water is detected, or “living” if a biosignature[s] is detected. They suggest it would be better to just use the calculation for the detection method, rather than the implication of the detection, that is, that the sample uncertainty, but not the interpretation uncertainty, is calculated. As we see in the popular press, if a planet in the habitable zone (HZ) has about an Earth-size mass and density, this planet is sometimes referred to as “Earth 2.0” with all the implications of the comparison to our planet. However, we know that our current global biosphere and climate are relatively recent in Earth’s history. The Earth has experienced different states from anoxic atmosphere, to extremely hot, and conversely extremely cold periods in the past. It is even possible the world may be a dry desert, like Venus, or conversely a hycean world with no land for terrestrial organisms to evolve.
However, the authors emphasize that even if life and intelligence prove rare and very sparsely distributed, the detection of a single unambiguous signature, whether of a living world or of a signal carrying information, would change everything:
Last but not least we want to remind the reader here that, even if this paper is about null results, a single positive detection would be a watershed moment in humankind’s history.
In summary, Bayesian analysis of null detections against prior expectations of frequencies can provide some estimate of the upper limit frequency of living worlds, with many null detections reducing the frequencies and their upper limits. Using Fisherian statistics, many null detections would provide no such estimates, as all the data values would be 0 (null detections). The statistics would be uninformative other than that as the number of null detections increased, the expectation of the frequency of living worlds would qualitatively decrease.
While planetologists and astrobiologists would hope that they would observationally detect habitable and inhabited exoplanets, as the uncertainties are decreased and the number of observations continues to show null results, how long before such activities become a fringe, uneconomic activity that results in lost opportunity costs for other uses of expensive telescope time?
The paper is Angerhausen, D., Balbi, A., Kovačević, A. B., Garvin, E. O., & Quanz, S. P. (2025). “What if we Find Nothing? Bayesian Analysis of the Statistical Information of Null Results in Future Exoplanet Habitability and Biosignature Surveys”. The Astronomical Journal, 169(5), 238. https://doi.org/10.3847/1538-3881/adb96d
References
1. Wikipedia “Drake equation” https://en.wikipedia.org/wiki/Drake_equation. Accessed 04/12/2025
2. Kipping & Lewis, “Do SETI Optimists Have a Fine-Tuning Problem?” submitted to International Journal of Astrobiology (preprint). https://arxiv.org/abs/2407.07097
3. Gilster P. “The Odds on an Empty Cosmos“ Centauri Dreams, Aug 16, 2024 https://www.centauri-dreams.org/2024/08/16/the-odds-on-an-empty-cosmos/
4. Tolley A. “The Ambiguity of Exoplanet Biosignatures“ Centauri Dreams Jun 21, 2024 https://www.centauri-dreams.org/2024/06/21/the-ambiguity-of-exoplanet-biosignatures/
5. Foote, Searra, Walker, Sara, et al. “False Positives and the Challenge of Testing the Alien Hypothesis.” Astrobiology, vol. 23, no. 11, Nov. 2023, pp. 1189–201. https://doi.org/10.1089/ast.2023.0005.
6. Tolley, A. Our Earliest Ancestor Appeared Soon After Earth Formed. Centauri Dreams, Aug 28, 2024 https://www.centauri-dreams.org/2024/08/28/our-earliest-ancestor-appeared-soon-after-earth-formed/
7. Wikipedia “Habitable Worlds Observatory” https://en.wikipedia.org/wiki/Habitable_Worlds_Observatory. Accessed 05/02/2025
8. Kammerer, J. et al (2022) “Large Interferometer For Exoplanets (LIFE) – VI. Detecting rocky exoplanets in the habitable zones of Sun-like stars. A&A, 668 (2022) A52
DOI: https://doi.org/10.1051/0004-6361/202243846
Unusual Skies: Optical Pulses & Celestial Bubbles
Finding unusual things in the sky should no longer astound us. It’s pretty much par for the course these days in astronomy, with new instrumentation like JWST online and the Extremely Large Telescope generation soon to arrive. Recently we’ve had planet-forming disks found in the inner reaches of the galaxy and the discovery of a large molecular cloud (Eos by name) surprisingly close to our Sun at the edge of the Local Bubble, about 300 light years out.
So I’m intrigued to learn now of Teleios, which appears to be a remnant of a supernova. The name, I’m told, is classical Greek for ‘perfection,’ an apt description for this evidently perfect bubble. An international team led by Miroslav Filipović of Western Sydney University in Australia is behind this work and has begun to analyze what could have produced the lovely object in a paper submitted to Publications of the Astronomical Society of Australia (citation below). Fortunately for us, Teleios glows at radio wavelengths in ways that illuminate its origins.
Image: Australian Square Kilometre Array Pathfinder radio images of Teleios in Stokes parameters (a set of numbers used to describe the polarization state of electromagnetic radiation). Credit: Filipović et al.
I’m not going to spend much time on Teleios, although its wonderful symmetry sets it apart from most supernova remnants without implying anything other than a chance occurrence in an unusually empty part of space. Its lack of X-ray emissions is a curiosity, to be sure, as the authors point out:
We have made an exhaustive exploration of the possible evolutionary state of the SN based on its surface brightness apparent size and possible distances. All possible scenarios have their challenges, especially considering the lack of X-ray emission that is expected to be detectable given our evolutionary modelling. While we deem the Type Ia scenario the most likely, we note that no direct evidence is available to definitively confirm any scenario and new sensitive and high-resolution observations of this object are needed.
Odd Optical Pulses
So there you are, a celestial mystery. Another one comes from Richard Stanton, now retired but for years a fixture at JPL, where he worked on Voyager, among other missions. These days he runs Shay Meadow Observatory near Big Bear, California, where he deploys a 30-inch telescope coupled with a photometer of his own design for the task at hand – the search for optical SETI signals. Thus far the indefatigable retiree has observed more than 1300 stars in this quest.
Several unusual things have turned up in his data, and what they mean demands further study. The star HD 89389 produced “two fast identical pulses, separated by 4.4s,” according to the paper on his work. That was interesting, but even more so is the fact that, looking back over his earlier data, Stanton realized that a pair of similar pulses had occurred in observations of the star HD 217014 taken four years before. In that earlier event the twin pulses were separated by 1.3 seconds, 3.5 times less than for the HD 89389 event, yet Stanton notes that while the separations differ, the pulse shapes in both events are very similar.
Stanton’s angle into optical SETI differs from the norm, as he describes it in a paper in Acta Astronautica. The work is:
…different from that employed in many other optical SETI searches. Some [3,4] look for nanosecond pulses of sufficient intensity to momentarily outshine the host star’s light, as first suggested by Schwartz and Townes [5]. Others search optical spectra of stars for unusual features [6] or emission close to a star that could have been sent from an orbiting planet [7]. The equipment used here is not capable of making any of these measurements. Instead it relies on detecting unexplained changes in a star’s light as indications of intelligent activity. Starting with time samples of 100μs, the search is capable of detecting optical pulses of this duration and longer, and also of finding optical tones in the frequency range ∼0.01–5000Hz.
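As a quick sanity check (a sketch of my own, not from the paper), the quoted numbers hang together: sampling at 100 μs intervals sets a Nyquist limit of exactly 5000 Hz, which is the top of the stated tone-search range.

```python
# Sanity check on the quoted search parameters: a 100-microsecond
# sample interval sets the highest tone frequency recoverable from
# the data stream (the Nyquist limit).
dt = 100e-6                # sample interval, seconds
f_nyquist = 1 / (2 * dt)   # highest recoverable tone frequency, Hz
print(f_nyquist)           # 5000.0 -- matches the stated upper bound
```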
HD 89389 is an F-class star about 100 light years from the Solar System. With equipment like Stanton’s, all kinds of things can present problems: airplanes blocking out starlight, satellites (a growing issue given the increasing number of Internet access satellites), meteors, even birds. Atmospheric scintillation and noise have to be accounted for as well. I’m simplifying here and send you to the paper, where all these factors are painstakingly considered. Stanton’s analysis is thorough.
Here is a photograph showing the typical star-field during an observation of HD 89389, with the target star in the center of a field roughly 15 × 20 arcmin in size. The unusual pulses from this star occurred during this exposure.
Image: The HD 89389 star-field. “A careful examination was made of each photograph to detect any streaks or transitory point images that might have been objects moving through the field. Nothing was found in any of these frames, suggesting that the source of the pulses was either invisible, such as due to some atmospheric effect, or too far away to be detected.” Credit: Richard Stanton.
A closer look at these unusual observations: each event consisted of two identical pulses, with the star rapidly brightening, then decreasing in brightness, then increasing again, all in the fraction of a single second. The second pulse followed 4.4 seconds later in the case of HD 89389, and 1.3 seconds later at HD 217014. According to Stanton, in over 1500 hours of searching he had never seen a pulse like this, in which the star’s light is attenuated by about 25 percent.
Note this: “This is much too fast to attribute to any known phenomenon at the star’s distance. Light from a star a million kilometers across cannot be attenuated so quickly.” In other words, something on the scale of a star cannot partially disappear in a fraction of a second, meaning the cause of this effect is not as distant as the star. If the star’s light is modulated without something moving across the field of view, then what process could cause this?
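The reasoning here is simple light-travel-time arithmetic, worth making explicit (a back-of-the-envelope sketch of my own, not Stanton’s calculation): a star cannot coherently change brightness faster than the time light takes to cross its disk.

```python
# Back-of-the-envelope check: the fastest a star's light can
# coherently dim is set by the light-crossing time of its disk,
# since light from the limb arrives later than light from the center.
C_KM_S = 299_792.458     # speed of light, km/s

def light_crossing_time_s(diameter_km):
    """Minimum timescale (seconds) for a coherent brightness change."""
    return diameter_km / C_KM_S

# A star roughly a million kilometers across (the quote's example):
t_min = light_crossing_time_s(1_000_000)
print(f"light-crossing time ~{t_min:.1f} s")   # ~3.3 s
# The observed ~25% attenuation happens in a fraction of a second,
# far faster than this, so the cause must lie much closer than the star.
```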
The author argues that the starlight variation in each pulse itself eliminates all the common signals discussed above, from airplanes to meteors. He also notes that unlike what happens when an asteroid or airplane occultation occurs, the star never disappears during the event. The second event, in the light of the star HD 217014, was discovered later, although the data were taken four years earlier. Stanton runs through all the possibilities, including shock waves in the atmosphere, partial eclipses by orbiting bodies, and passing gravity waves.
One way of producing this kind of modulation, Stanton points out, is through diffraction of starlight by a distant body between us and the star. Keep in mind that we are dealing with two stars that have shown the same pattern, with similar pulses. Edge diffraction results when light is diffracted by a straight edge, producing ‘intensity ripples’ that correspond to the pulses. The author gives this phenomenon considerable attention, explaining how the pulses would change with distance but coming up short on a distance to the sources here.
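To get a feel for the edge-diffraction scenario, here is a rough sketch under illustrative assumptions of my own (V-band wavelength, hypothetical occulter distances, a typical solar-system transverse speed), not the paper’s fitted numbers. The characteristic width of the intensity ripples is the Fresnel scale, which grows with the square root of the occulter’s distance; dividing by the transverse velocity gives the ripple timescale an observer would record.

```python
import math

AU_M = 1.496e11          # astronomical unit in meters
WAVELENGTH_M = 550e-9    # assumed V-band optical wavelength

def fresnel_scale_m(distance_m, wavelength_m=WAVELENGTH_M):
    """Characteristic width of edge-diffraction intensity ripples
    produced by a straight edge at the given distance."""
    return math.sqrt(wavelength_m * distance_m / 2)

# Hypothetical occulter distances; 30 km/s is a typical transverse
# speed for a solar-system object.
for d_au in (1, 40, 1000):
    rf = fresnel_scale_m(d_au * AU_M)
    t_ms = rf / 30_000 * 1e3
    print(f"{d_au:>5} AU: Fresnel scale ~{rf:,.0f} m, ripple timescale ~{t_ms:.0f} ms")
```

The point of the sketch is the scaling: the farther the occulting edge, the wider the ripples and the longer they take to sweep past the telescope, which is why the pulse shapes carry information about distance.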
From his conclusion:
The fact that these pulses have been detected only in pairs must surely be a clue to their origin. How can the two detected events separated by years, and from seemingly random directions in the sky, be so similar to each other? Even if the diffraction theory is correct, these data alone cannot determine the object’s distance or velocity.
He goes on to produce a model that could explain the pulses, using the figure below.
This thin opaque ring, located somewhere in the solar system, would sequentially occult the star as it moved across the field. If anything like this were found, it would immediately raise the questions of where it came from and how it could survive millions of years of collisions with other objects. Alternatively, if the measured transverse velocity proved greater than that required to escape our solar system, a different set of questions would arise. Whatever is found, those speculating that our best chance of finding evidence of extraterrestrial intelligence lies within our own solar system [15], might have much to ponder!
If there is indeed some sort of occulting object, observations with widely spaced telescopes could potentially determine its size and distance. Meanwhile, a third double pulse event has turned up in Stanton’s data from January 18, 2025, where light from the star HD 12051 is found to pulse, with the pulses separated by 1.52 seconds. This last observation doesn’t make it into the paper other than as a footnote, but it’s an indication that Stanton may be on to something that is going to continue creating ripples. As in the case of Teleios, we have an unusual phenomenon that demands continued observation.
The paper on the unusual circular object is Filipović et al., “Teleios (G305.4-2.2) — the mystery of a perfectly shaped new Galactic supernova remnant,” accepted at Publications of the Astronomical Society of Australia and available as a preprint. The paper on the pulse phenomenon is Stanton, “Unexplained starlight pulses found in optical SETI searches,” Acta Astronautica Vol. 233 (August 2025), pp. 302-314. Full text. Thanks to Centauri Dreams readers Frank Henriquez and Antonio Tavani for the pointer to this work.
Eddies, Flows and van Gogh: Probing the Interstellar Medium
Science fiction collectors may well look at the two images below and think they’re both Richard Powers’ artwork, so prominent on the covers of science fiction titles in the mid-20th Century. Powers worked often for Ballantine in the 1950s and later, always refining the style he first exhibited when doing covers for Doubleday in the 1940s. The top image here is from one of the Doubleday titles, but I think of Powers most for his Ballantine work. His paintings could make a paperback rack into a moody, mysterious experience, a display of artistry that moved from the surreal to the purely abstract. At his best, Powers’ renderings seemed to draw out the wonder of the mind-bending fiction they encased.
What we have in the second image, though, is not abstract art but the manifestation of what is being described as “the world’s largest turbulence simulation.” The work comes from a project described in a new paper in Nature Astronomy, whose lead author, James Beattie of the Canadian Institute for Theoretical Astrophysics, is probing magnetism and turbulence as they occur in the interstellar medium. In this image, Beattie’s caption describes “…the fractal structure of the density, shown in yellow, black and red, and magnetic field, shown in white.”
And while Beattie may or may not be familiar with Richard Powers, he does have an eye for the art that this kind of turbulence can produce, saying:
“I love doing turbulence research because of its universality. It looks the same whether you’re looking at the plasma between galaxies, within galaxies, within the solar system, in a cup of coffee or in Van Gogh’s The Starry Night. There’s something very romantic about how it appears at all these different levels…”
And honestly, doesn’t this remind you of Powers?
What Beattie and team have produced, using the computing muscle of the SuperMUC-NG supercomputer at the Leibniz Supercomputing Centre in Germany, is helping us better understand the nature of the interstellar medium. In particular, it is a computer simulation that explores the interactions of magnetism and turbulence in the ISM, addressing magnetism at the galactic level as well as individual astrophysical phenomena such as star formation. Beattie’s team is international in scope, with co-authors at Princeton University; Australian National University; Universität Heidelberg; the Center for Astrophysics, Harvard & Smithsonian; Harvard University; and the Bavarian Academy of Sciences and Humanities.
So what is the turbulence Beattie is describing? The phenomenon is ubiquitous, showing up in everything from cream swirling in a black cup of coffee to ocean currents to particles moving in chaotic flows in the solar wind. We can produce ultra-high vacuums on Earth, but even these contain far more particles than the average sample of the ISM. Sparse as they are, though, the ISM’s particles do generate a magnetic field through their motion, one the researchers liken to the churning of Earth’s molten core, which generates the magnetic field that protects us.
The galactic magnetic field is weak indeed, but it can now be modeled, for the first time, at a resolution that is both high and scalable. At its highest setting, Beattie’s simulation can depict a volume of space 30 light years to a side, but it can be scaled down by a factor of 5000 to explore smaller spaces. The latter has implications for how we study the solar wind, which not only produces ‘space weather’ but is also a factor in certain space sail concepts that use superconducting rings to produce a strong magnetic field that can harness the solar wind as thrust.
Always keep in mind that we have anything but a uniform interstellar medium. Some of the early writing about Robert Bussard’s ramjet concepts noted that a design that harnessed interstellar hydrogen would thrive best in dense star-forming regions, where hydrogen would be plentiful. The Bussard concept has fallen on hard times given issues with drag that seem to knock it out of contention, but magsail work remains interesting, both as a way of harnessing solar wind particles for thrust and as a means of braking against them when entering a destination stellar system. So the more we can learn about the extreme density variations in the ISM, the better we can envision future interstellar flight.
Moreover, star formation is implicated in the same model. The better our simulations of interstellar turbulence, the more we can learn about the magnetic forces that push outward against the collapse of a nebula that will eventually produce one or more stars. And the model the team has developed stacks up well when run against actual data from the solar wind, which points to short term gains in the forecasting of space weather, the ‘rain’ of charged particles that affects both Earth and spacecraft.
The ubiquity of chaotic turbulence and its coupling with the galaxy’s ambient magnetic fields make its study all the more provocative. Cosmic rays are strongly affected, both generating and scattering off the plasma phenomena known as Alfvén waves. From the paper:
In the cold (T ≈ 10 K) molecular phase of the ISM, [turbulence] changes the ionization state of the plasma by controlling the diffusion of cosmic rays [1–5], gives rise to the filamentary structures that shape and structure the initial conditions for star formation [6, 7], and through turbulent and magnetic support, changes the rate at which the cold plasma converts mass density into stars [8–13].
So there is plenty to work with here. And a brief return to van Gogh’s ‘The Starry Night,’ which Beattie mentioned in the quote above. Come to find that the author and co-author Neco Kriel (Queensland University of Technology) have produced a paper on the subject called “Is The Starry Night Turbulent?” The goal was to learn whether the night sky in this famous painting “has a power spectrum that resembles a supersonic turbulent flow.” And indeed, “‘The Starry Night’ does exhibit some similarities to turbulence, which happens to be responsible for the real, observable, starry night sky.”
Which I think only means that van Gogh was turning what he saw into art, recognizable to us precisely because it did reflect the night sky he was observing. Still, it’s fun to see these methods, which draw on deep research into turbulent interactions, applied to a cultural icon. I wonder what Beattie’s team would dig out of a deep dive into Powers’ work in the century after van Gogh?
Image: van Gogh’s ‘The Starry Night’ is Figure 1 in Beattie’s paper with Kriel. Caption: “Vincent van Gogh’s The Starry Night, accessed from WallpapersWide.com (2018). We see eddies painted through the starry night sky that resemble structures comparable to what we see in turbulent flows.”
The paper is Beattie et al., “The spectrum of magnetized turbulence in the interstellar medium,” Nature Astronomy 13 May 2025 (abstract / preprint). The paper on van Gogh is Beattie & Kriel, “Is The Starry Night Turbulent?” available as a preprint.
HD 219134: Fine-Tuning Stellar Music
HD 219134, an orange K-class star in Cassiopeia, is relatively close to the Sun (21 light years) and already known to have at least five planets, two of them being rocky super-Earths that can be tracked transiting their host. We know how significant the transit method has become thanks to the planet harvests of, for example, the Kepler mission and TESS, the Transiting Exoplanet Survey Satellite. It’s interesting to realize now that an entirely different kind of measurement based on stellar vibrations can also yield useful planet information.
The work I’m looking at this morning comes out of the Keck Observatory on Mauna Kea (Hawaii), where the Keck Planet Finder (KPF) is being used to track HD 219134’s oscillations. The field of asteroseismology is a window into the interior of a star, allowing scientists to hear the frequencies at which individual stars resonate. That makes it possible to refine our readings on the mass of the star, and just as significantly, to determine its age with higher accuracy.
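The standard asteroseismic scaling relations show how two basic seismic observables translate into mass and radius. A minimal sketch, with illustrative input values of my own choosing rather than the paper’s measurements:

```python
# Standard asteroseismic scaling relations, which tie a star's mass
# and radius to two seismic observables:
#   nu_max   -- frequency of maximum oscillation power
#   delta_nu -- large frequency separation between overtone modes
# Solar reference values:
NU_MAX_SUN = 3090.0    # microHz
DNU_SUN    = 135.1     # microHz
TEFF_SUN   = 5772.0    # K

def seismic_mass_radius(nu_max, delta_nu, teff):
    """Return (M/Msun, R/Rsun) from the scaling relations."""
    t = (teff / TEFF_SUN) ** 0.5
    r = (nu_max / NU_MAX_SUN) * (delta_nu / DNU_SUN) ** -2 * t
    m = (nu_max / NU_MAX_SUN) ** 3 * (delta_nu / DNU_SUN) ** -4 * t ** 3
    return m, r

# Illustrative inputs only (not the paper's measured values): a cool
# K dwarf pulsing roughly every four minutes (~4200 microHz), with a
# large separation near 180 microHz and Teff around 4700 K.
m, r = seismic_mass_radius(4200.0, 180.0, 4700.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.2f} Rsun")
```

The upshot is that once the oscillation frequencies are measured precisely, mass and radius follow almost directly, and age then comes from fitting stellar evolution models to those values.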
KPF works by radial velocity measurement, a technique often discussed in these pages as a way of identifying exoplanet candidates. But in this case, measuring the motion of the stellar surface toward and away from Earth is a way of collecting the star’s vibrations, which are the key to its structure. Says lead author Yaguang Li (University of Hawaii at Mānoa):
“The vibrations of a star are like its unique song. By listening to those oscillations, we can precisely determine how massive a star is, how large it is, and how old it is. KPF’s fast readout mode makes it perfectly suited for detecting oscillations in cool stars, and it is the only spectrograph on Mauna Kea currently capable of making this type of discovery.”
Image: Artist’s concept of the HD219134 system. Sound waves propagating through the stellar interior were used to measure its age and size, and characterize the planets orbiting the star. Credit: openAI, based on original artwork from Gabriel Perez Diaz/Instituto de Astrofísica de Canarias. The 10-second audio clip transforms the oscillations of HD219134 measured using the Keck Planet Finder into audible sound. The star pulses roughly every four minutes. When sped up by a factor of ~250,000, its internal vibrations shift into the range of human hearing. By “listening” to starlight in this way, astronomers can explore the hidden structure and dynamics beneath the star’s surface.
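The caption’s sped-up playback is simple arithmetic: a roughly four-minute pulsation period corresponds to a few millihertz, and multiplying by ~250,000 lands near 1 kHz, comfortably within human hearing. A quick sketch:

```python
# Sonification arithmetic from the caption: a ~4-minute stellar
# oscillation is far below human hearing, but playing the signal
# back ~250,000x faster shifts it to roughly 1 kHz.
period_s = 4 * 60                 # ~4-minute pulsation period
f_star = 1 / period_s             # ~4 mHz, inaudible
f_audio = f_star * 250_000        # sped-up playback frequency, Hz
print(f"{f_star*1e3:.2f} mHz -> {f_audio:.0f} Hz")
```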
What we learn here is that HD 219134 is more than twice the age of the Sun at about 10.2 billion years old. The age of a star can be difficult to determine. The most widely used method is gyrochronology, which focuses on how swiftly a star spins, the assumption being that younger stars rotate more rapidly than older ones, with the gradual loss of angular momentum traceable over time. The problem: Older stars don’t necessarily follow this script, with their spin-down evidently stalling at older ages. Asteroseismology allows a more accurate reading for stars like this and provides a different reference point, provided that our models of stellar evolution allow us to interpret the results correctly.
We need to track this work because how old a star is has implications across the board. For one thing, understanding basic factors such as its temperature and luminosity requires a context to determine whether we’re dealing with a young, evolving system or a star nearing the transition to a red giant. From an astrobiological point of view, we’d like to know how old any planets in the system are, and whether they’ve had sufficient time to develop life. SETI also takes on a new dimension when considering stellar age, as targeting older exoplanet systems allows us to put our focus on higher priority targets.
Yaguang Li thinks the KPF work brings new levels of precision to these measurements, calling the result ‘a long-lost tuning fork for stellar clocks.’ The measurements are also informative from the exoplanet standpoint, for they have allowed the researchers to determine that HD 219134 is about 4 percent smaller in radius than previously thought – a result in tension with interferometric measurements, which gauged its size by combining light from multiple telescopes. A more accurate reading of the size of the star affects all inferences about its planets.
That 4% difference, though, raises questions, and the authors note that it requires the models of stellar evolution they are using to be accurate. From the paper:
We were unable to easily attribute this discrepancy to any systematic uncertainties related to interferometry, variations in the canonical choices of atmospheric boundary conditions or mixing-length theory used in stellar modeling, magnetic fields, or tidal heating. Without any insight into the cause of this discrepancy, our subsequently derived quantities and treatment of rotational evolution—all of which are contingent on these model ages and radii—must necessarily be regarded as being only conditional, pending a better understanding of the physical origin for this discrepancy. Future direct constraints on stellar radii from asteroseismology (e.g., through potential breakthroughs in understanding and mitigating the surface term) may alleviate this dependence on evolutionary modeling.
So we have to be cautious in our conclusions here. If the tension between the KPF measurements and interferometry holds up, we will have adjusted our calibration tools for transiting exoplanets, but we still need to probe the reasons for the discrepancy. That’s important, because with tuned-up measurements of a star’s size, the radii and densities of transiting planets can be more accurately measured. The updated values KPF has given us – assembled through over 2000 velocity measurements of the star – point to an aspect of stellar modeling that may need further adjustment.
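The reason a revised stellar radius propagates directly to the planets is that a transit depth measures only the planet-to-star size ratio. A minimal sketch (the depth value is illustrative, not from the paper):

```python
import math

# A transit depth measures (Rp / Rstar)^2, so the planet radius is
# Rp = Rstar * sqrt(depth). Revising the stellar radius rescales
# every transiting planet's radius by the same factor.
def planet_radius(rstar, depth):
    """Planet radius in the same units as rstar, from transit depth."""
    return rstar * math.sqrt(depth)

depth = 4e-4                          # illustrative 400 ppm transit
rp_old = planet_radius(1.00, depth)   # old stellar radius (normalized)
rp_new = planet_radius(0.96, depth)   # star found ~4% smaller
print(f"Planet radius shrinks by {(1 - rp_new/rp_old)*100:.0f}%")
```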
The paper is Yaguang Li et al., “K Dwarf Radius Inflation and a 10 Gyr Spin-down Clock Unveiled through Asteroseismology of HD 219134 from the Keck Planet Finder,” Astrophysical Journal Vol. 984, No. 2 (6 May 2025), 125 (full text).
Writing a Social Insect Civilization
Communicating with extraterrestrials isn’t going to be easy, as we’ve learned in science fiction, all the way from John Campbell’s Who Goes There? to Ted Chiang’s Story of Your Life (and the movie Arrival). Indeed, just imagining the kinds of civilizations that might emerge from life utterly unlike what we have on Earth calls for a rare combination of insight and speculative drive. Michael Chorost has been thinking about the problem for over a decade now, and it’s good to see him back in these pages to follow up on a post he wrote in 2015. As I’ve always been interested in how science fiction writers do their worldbuilding, I’m delighted to publish his take on his own experience at the craft. Michael is also the author of the splendid World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011) and Rebuilt: My Journey Back to the Hearing World (Mariner, 2006).
by Michael Chorost
Ten years ago, Paul Gilster kindly invited me to guest-publish an entry on Centauri Dreams titled, “Can Social Insects Have a Civilization?” At the time I was planning to write a nonfiction book about the linguistic issues of communicating with extraterrestrials. Though actual, face-to-face contact anytime soon is deeply unlikely, the concept is nonetheless theoretically interesting because it casts light on buried assumptions about language and communication. I sold the concept to Yale University Press. The working title was HOW TO TALK TO ALIENS.
I got busy, but I soon began to feel that the project was rather empty. I realized that in an actual First Contact situation, we’ll suddenly find ourselves dealing with a deeply unfamiliar interlocutor that has an agenda of its own. We’ll inevitably find ourselves winging it. Theory could be irrelevant. Useless.
For a while I tried to finesse the problem by putting scenarios in between the theoretical stuff. I imagined a human and an alien talking—specific humans, specific aliens, concrete settings. I soon realized that the scenarios were the most interesting material in the book.
That’s because positing a concrete situation made the problems, and the possible solutions, stand out clearly. Given a particular situation, what would people actually do?
It dawned on me: It made more sense to write the book as a novel. I withdrew from my contract at Yale and returned the advance. They were kind and understanding about it.
So I committed myself to a novel—but I wanted it to be as content-rich as the book I’d promised to Yale. I decided that the human characters would be scientists: an entomologist, a linguist, a neuroscientist, and a physicist. To succeed they’d have to pool their expertise, educating each other. That would imbue the novel with the scientific content.
But this risked me writing a deadly dull novel larded with exposition. I wanted the characters—both human and alien—to be vivid and unforgettable, and for their actions to drive a propulsive plot. I wanted the reader to be unable to put the book down.
I think I succeeded. I hired an editor to help me, and she made me do rewrites for three years; she was relentless. But at last she said, “You set yourself one of the hardest imaginative problems you could possibly have chosen, especially for a first novel. I think you managed it in a way that feels genuinely convincing. I want to say clearly upfront: this book is worth it. There is no story like this in the world.”
So I’m pretty confident that I now have a publishable novel—but getting there was really hard. I thought it would take about three years to write, which is how long my two nonfiction books took. I was wrong. It took eight.
When I started, I knew I wanted the aliens to be really alien: no pointy-eared, English-speaking Vulcans. I decided to make them sapient social insect colonies. That would make them aliens without contiguous bodies. Without hands as we know them. Without faces.
Therefore, I first had to figure out what a social insect civilization looks like. I didn’t want to take the easy way out by positing (as Orson Scott Card did) that a social insect colony would have a centralized intelligence, e.g. a Queen that gives orders. I felt that was cheating. I wanted the colonies to be genuinely distributed entities in which no individual insect has language or even much in the way of consciousness. Furthermore, I wanted the insects to be no bigger than Earthly ones, which ruled out big brains of any kind.
This gave me some very challenging questions. (From now on I’ll use the word “hive” as shorthand for “social insect colony.”)
• How does a hive pick up a hammer?
• How does a hive store and process the information needed for language?
• What is the physical structure of the hives?
• How does a distributed consciousness behave?
• What does such a civilization’s technology look like?
• What does its language look like? What’s its morphology, grammar, vocabulary?
• What does a society of hives look like?
• What events in the past set this species on the path to language and technology?
It took me two years just to answer the first one about picking up a hammer. I would imagine a bunch of insects clustering around a hammer and completely failing to get any leverage. Then I’d give up, deciding the question was unanswerable.
But finally, I figured it out: the hives parasitize mammals by inserting axons into the motor cortexes of their brains. That way, they can control the mammals as roaming “hands.”
And this was a key insight, because it helped me understand the hives as truly distributed entities. A given hive could have several dozen “hands” roaming the landscape, doing various things. Furthermore, it would have no front or back in any human sense.
This worldbuilding was fun, but it was the least efficient way imaginable to write a novel. I designed the aliens and their world before working out the plot. This led to a big problem.
Which was this: the aliens were so alien that I didn’t know why they would want to interact with humans in any way, nor us with them. What would we want to talk about? Or do together? This meant I didn’t have a plot.
I didn’t want to default to science fiction’s classic reasons for interspecies communication: war and trade. They struck me as stereotypical answers that would lead to a stereotypical novel. Besides, they begged the question. Species that are trading or fighting have to be similar enough to have things to trade, or to fight about. That would vitiate my goal of writing really alien aliens.
So I knew what kind of plots I didn’t want. But that didn’t tell me what kind of plot I did want. I sat down every day and wrote, hoping to figure out an answer.
This was, as I said, a very inefficient way to write a novel. Why didn’t I practice by writing, and publishing, a few short stories? Build up my cred, get my name out there? But I didn’t want to do those things. I wanted to write this novel. I grimly stuck to it, day after day.
After a while I had a bare-bones plot. When Jonah Loeb, a deaf graduate student in entomology, asks how to deal with an intelligent ant colony besieging Washington, D.C., the answer is, “Ask it to stop.” Jonah gathers a team of scientists and travels to a hive civilization in order to learn how.
I gave the other scientists names, figured out their dissertation topics, and worked out some of their characteristics. The neuroscientist was arrogant. The linguist was prickly and defensive. The physicist was socially awkward. Jonah, the protagonist, was deaf, like me, with cochlear implants. He was smart, but neurotic.
But I didn’t know how to make the characters come alive on the page. They all talked the same. Their only motivation was scientific interest. They had scant backstories or inner lives. They were, in short, boring.
I was even more at sea with the alien characters. They had no personality. I mean, really, how do you give a social insect colony a personality?
The plot, too, remained threadbare. I fabricated encounters, goings to-and-fro, arguments. But it just didn’t hold together. Often I’d add a new element only to realize it invalidated another element.
So I had dull characters and a plot made out of cardboard and duct tape. Finally, I admitted I needed help. I hired a freelance editor, and we started fresh.
The editor had me write up descriptions of each character’s goals and motives, and a detailed plot outline. We went through the manuscript one scene at a time, and she often told me to rework it before we went on to the next.
Slowly, the characters came to life on the page. I had made the protagonist, Jonah, deaf because I thought that would underscore the theme of communication. But Jonah only came to life when I thought back to my own feelings in my early twenties. I realized that Jonah was driven by feeling like an outsider. He desperately wants to be included and to prove himself.
This characterization let me set up a key dynamic: an outsider protagonist trying to communicate with aliens—the ultimate outsiders. Clarity for the character led to clarity for the story.
I slowly got better at solving problems by framing them in terms of character and plot. I knew that Tokic, the hives’ language, would have to be exotic—but creating it overwhelmed me. I’m no grammarian, and certainly no inventor of languages.
But then I realized I only had to develop enough of the language to support the plot. I wanted the plot to turn on misunderstandings and mistranslations as the humans struggled to learn the language.
A key source of confusion, I realized, would come from how differently shaped the hives and humans are. Humans have arms and legs that are attached to them. On the other hand, a hive is essentially a giant, stationary head with dozens of “hands” roaming the landscape. Not only that, the “hands,” as parasitized mammals, have minds of their own. Hives give their hands general orders, and the hands work out the details. A hive can disagree with its parts, and its parts can disagree right back.
I realized that the part/whole distinction would be built deeply into Tokic, rather like how human languages build gender deeply into their grammar. (In English, consider how hard it is to talk about a person if you don’t know their gender.) When you’re addressing another entity in Tokic, you have to be very precise, on the level of grammar, about its partness or wholeness.
Now consider: To a hive, is a human being a whole or a part?
A hive would find this question really hard to answer. As a mammal, a human being looks like a “hand”—a part—but it talks like a whole. Yet in Jonah’s team, each member is legitimately a part. In Latin, membrum means “limb” or “part of the body.”
Jonah, as a cochlear implant user, is even trickier for a hive to understand. A cochlear implant is a computer; it runs on code and constantly makes decisions about what’s important for the user to hear. It’s a body part that literally thinks for itself. As such, Jonah is kind of hive-like. When a hive asks what Jonah is and the team gives it an answer it doesn’t understand, the hive attacks the team and they must run for their lives.
I worked out Tokic’s parts/wholes grammar, and that made it possible for me to write the scenes where things went wrong. These were tough scenes to write, because I had to keep track of what a hive said, what the humans thought it said, the humans’ mistaken reply, and so on. I also had to be careful not to let the scenes get bogged down.
I’ve noted how inefficient my writing process was. But I do think it was productive in one way: I spent so much time thinking about the novel that a great deal of information accreted in my mind. I think that led to more richness in the worldbuilding and the story than would have happened if I’d written it faster.
There’s so much more I haven’t mentioned, like how an alien robot reads Wallace Stevens’s poetry and names itself after him; the brutal 1.8-gee gravity of Formicaris and the unexpected solution that lets the human team function there; the superheavy stable element that facilitates interstellar travel; the electromagnetic weapon that gives humans Capgras syndrome; the octopoidal surgeon who operates on Jonah and Daphne to upgrade their cyborg parts; and the illustrations. I had those done by professional science illustrators.
So now you have a sense of what my novel’s about. It’s still titled HOW TO TALK TO ALIENS; I think its unconventionality, and slightly academic air, will help it stand out. I hope you’re now as excited about it as I am. You can see a bit more about it at my website, michaelchorost.com.
If you know of any literary agents who’d be interested—please let me know.