Science fiction has been exploring advanced machine intelligence and its consequences for decades, and the idea is now being bruited about in service of the Fermi paradox, which asks why we see no intelligent civilizations given the abundant opportunity seemingly offered by the cosmos. A new paper from Michael Garrett (Jodrell Bank Centre for Astrophysics/University of Manchester) explores the matter in terms of how advanced AI might provide the kind of ‘great filter’ (the term is Robin Hanson’s) that would limit the lifetime of any technological civilization.

The AI question is huge given its implications in all spheres of life, and its application to the Fermi question is inevitable. We can plug in any number of scenarios that limit a technological society’s ability to become communicative or spacefaring, and indeed there are dozens of potential answers to Fermi’s “Where are they?” But let’s explore this paper because its discussion of the nature of AI and where it leads is timely whether Fermi and SETI come into play or not.

A personal note: I use current AI chatbots every day in the form of ChatGPT and Google’s Gemini, and it may be useful to explain what I do with them. Keeping a window open to ChatGPT offers me the chance to do a quick investigation of specific terms that may be unclear to me in a scientific paper, or to put together a brief background on the history of a particular idea. What I do not do is have AI write anything for me, a notion that is anathema to any serious writer. Instead, I ask AI for information, then triple check it, once against another AI and then against conventional Internet research. And I find that the ability to ask for a paragraph of explanation at various educational levels helps when I’m trying to learn something utterly new from the ground up.

It’s surprising how often these sources prove to be accurate, but the odd mistake means that you have to exercise great caution in using them. For example, I asked Gemini a few months back how many planets had been confirmed around Proxima Centauri and was told there were none. In reality, we do have one, that being the intriguing Proxima b, which is Earth-class and in the habitable zone. And we have two candidates: Proxima c is a likely super-Earth on a five-year orbit, and Proxima d is a small world (with a mass a quarter that of Earth) orbiting every five days. Again, the latter two are candidates, not confirmed planets, as per the NASA Exoplanet Archive. I reported all this to Gemini, and yesterday the same question produced an accurate result.

So we have to be careful about AI in even its current state. What happens as it evolves? As Garrett points out, it’s hard to come up with any area of human interest that will be untouched by the effects of AI, and commerce, healthcare, finance and many other areas are already being impacted. Concerns about the workforce are in the air, as are issues of bias in algorithms, data privacy, ethical decision-making and environmental impact. So we have a lot to work with in terms of potential danger.

Image: Michael Garrett, Sir Bernard Lovell chair of Astrophysics at the University of Manchester and the Director of the Jodrell Bank Centre for Astrophysics (JBCA). Credit: University of Manchester.

Garrett’s focus is on AI’s potential as a deal-breaker for technological civilization. Now we’re entering the realm of artificial superintelligence (ASI), which was Stephen Hawking’s great concern when he argued that further developments in AI could spell the end of civilization itself. ASI refers to an independent AI that becomes capable of redesigning itself, meaning it moves into areas humans do not necessarily understand. An AI evolving and managing its own evolution at an ever-increasing rate would be a momentous development, and one that poses obvious societal risks.

The author’s assumption is that if we can produce AI and begin the process leading to ASI, then other civilizations in the galaxy could do the same. The picture that emerges is stark:

The scenario…suggests that almost all technical civilisations collapse on timescales set by their wide-spread adoption of AI. If AI-induced calamities need to occur before any civilisation achieves a multiplanetary capability, the longevity (L) of a communicating civilization as estimated by the Drake Equation suggests a value of L ∼ 100–200 years.

Which poses problems for SETI. We’re dealing with a short technological window before the inevitable disappearance of the culture we are trying to find. Assuming only a handful of technological civilizations exist in the galaxy at any particular time (and SETI always demands assumptions like this, which makes it unsettling and in some ways closer to philosophy than science), the probability of detection is all but nil unless we move to all-sky surveys. Garrett notes that field of view is often overlooked amid all the discussion of raw sensitivity and total bandwidth. A telling point.
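
To see why such a short L bites so hard, it helps to plug numbers into the Drake Equation itself. The little sketch below is my own back-of-the-envelope exercise, not anything from Garrett’s paper; every parameter value in it is an assumption chosen purely for illustration, with only the L range of 100–200 years taken from the passage quoted above.

```python
# Back-of-the-envelope Drake Equation: N = R* · fp · ne · fl · fi · fc · L
# All parameter values below are illustrative assumptions, not figures from
# Garrett's paper; only the L range (100-200 yr) comes from the quoted scenario.

def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_yr):
    """Number of communicating civilizations present in the galaxy at any one time."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_yr

params = dict(
    r_star=1.5,   # star formation rate, stars per year (assumed)
    f_p=0.9,      # fraction of stars with planets (assumed)
    n_e=0.5,      # habitable planets per planetary system (assumed)
    f_l=0.1,      # fraction of those that develop life (assumed)
    f_i=0.1,      # fraction of those that develop intelligence (assumed)
    f_c=0.1,      # fraction that become detectable communicators (assumed)
)

for L in (100, 200, 10_000):
    print(f"L = {L:>6} yr  ->  N ≈ {drake_n(**params, lifetime_yr=L):.3f}")
```

With these (entirely debatable) inputs, an L of a century or two leaves far less than one communicating civilization in the galaxy at any given moment, while an L of ten thousand years yields a handful. That is the sense in which a short window makes detection all but hopeless without all-sky coverage.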

But let’s pause right there. The 100-200 year ‘window’ may apply to biological civilizations, but what about the machines that may supersede them? As post-biological intelligence rockets forward in technological development, we see the possibility of system-wide and even interstellar exploration. The problem is that the activities of such a machine culture should also become apparent in our search for technosignatures, but thus far we remain frustrated. Garrett adds this:

We…note that a post-biological technical civilisation would be especially well-adapted to space exploration, with the potential to spread its presence throughout the Galaxy, even if the travel times are long and the interstellar environment harsh. Indeed, many predict that if we were to encounter extraterrestrial intelligence it would likely be in machine form. Contemporary initiatives like the Breakthrough Starshot programme are exploring technologies that would propel light-weight electronic systems toward the nearest star, Proxima Centauri. It’s conceivable that the first successful attempts to do this might be realised before the century’s close, and AI components could form an integral part of these miniature payloads. The absence of detectable signs of civilisations spanning stellar systems and entire galaxies (Kardashev Type II and Type III civilisations) further implies that such entities are either exceedingly rare or non-existent, reinforcing the notion of a “Great Filter” that halts the progress of a technical civilization within a few centuries of its emergence.

Biological civilizations, if they follow the example of our own, are likely to weaponize AI, perhaps leading to incidents that escalate to thermonuclear war. Indeed, the whole point of ASI is that in surpassing human intelligence, it will move well beyond oversight mechanisms and have consequences that are unlikely to align with what its biological creators find acceptable. Thus the scenario of advanced machine intelligence finding the energy and resource demands of humans more of a nuisance than an obligation. Various Terminator-like scenarios (or think Fred Saberhagen’s Berserker novels) suggest themselves as machines set about exterminating biological life.

There may come a time when, as they say in the old Westerns, it’s time to get out of Dodge. Indeed, developing a spacefaring civilization would allow humans to find alternate places to live in case the home world succumbed to the above scenarios. Redundancy is the goal, and as Garrett notes: “…the expansion into multiple widely separated locations provides a broader scope for experimenting with AI. It allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation. Different planets or outposts in space could serve as test beds for various stages of AI development, under controlled conditions.”

But we’re coming up against a hard stop here. While the advance of AI is phenomenal (and some think ASI is a matter of no more than a few decades away), the advance of space technologies moves at a comparative crawl. The imperative of becoming a multiplanetary species falls short because it runs out of time. In fact – and Garrett notes this – we may need ASI to help us figure out how to produce the system-wide infrastructure that we could use to develop this redundancy. In that case, technological civilizations may collapse on timescales related to their development of ASI.

Image: How will we use AI in furthering our interests in exploring the Solar System and beyond? Image credit: Generated by AI / Neil Sahota.

We talk about regulating AI, but how to do so is deeply problematic; regulation won’t be easy. Consider one relatively minor current case. As reported in a CNN story, the ChatGPT chatbot can be tricked into bypassing the blocks put into place by OpenAI (the company behind it) so that hackers can plan a variety of crimes with its help. These include money laundering and the evasion of trade sanctions. Such workarounds in the hands of dark interests are challenging enough at today’s level of AI, and we can expect their future counterparts to evolve along with the advancing wave of AI experiments.

It could be said that SETI is a useful exercise partly because it forces us to examine our own values and actions, reflecting on how these might play out on other worlds as beings other than ourselves face their own dilemmas of personal and social growth. But can we assume that it’s even possible to understand, let alone model, what an alien being might consider ‘values’ or accepted modes of action? Better to think of simple survival. That’s a subject any civilization has to consider, and how it goes about doing it will determine how and whether it emerges from a transition to machine intelligence.

I think Garrett may be too pessimistic here:

We stand on the brink of exponential growth in AI’s evolution and its societal repercussions and implications. This pivotal shift is something that all biologically-based technical civilisations will encounter. Given that the pace of technological change is unparalleled in the history of science, it is probable that all technical civilisations will significantly miscalculate the profound effects that this shift will engender.

I pause at that word ‘probable,’ which is so soaked in our own outlook. As we try to establish a regulatory framework that can help AI progress in helpful ways and avoid deviations into lethality, we should consider the broader imperative. Call it insurance. I think Garrett is right in noting the lag in the development that would get us off-planet, and I can relate to his concern that advanced AI poses a distinct threat. All the more reason to advocate for a healthy space program as we face the AI challenge. And we should also consider that advanced AI may become the greatest boon humanity has ever seen, making startling breakthroughs that can change our lives in short order.

Call me cautiously optimistic. Can AI crack interstellar propulsion? How about cancer? Such dizzying prospects should see us examining our own values and how we communicate them. For if AI might transform rather than annihilate us, we need to understand not only how to interact with it, but how to ensure that it understands what we are and where we are going.

The paper is Garrett, “Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?” Acta Astronautica Vol. 219 (June 2024), pp. 731-735 (full text). Thanks to my old friend Antonio Tavani for the pointer.