Science fiction has been exploring advanced machine intelligence and its consequences for a long time, and the idea is now being bruited about in service of the Fermi paradox, which asks why we see no intelligent civilizations given the abundant opportunity seemingly offered by the cosmos. A new paper from Michael Garrett (Jodrell Bank Centre for Astrophysics/University of Manchester) explores the matter in terms of how advanced AI might provide the kind of ‘great filter’ (the term is Robin Hanson’s) that would limit the lifetime of any technological civilization.
The AI question is huge given its implications in all spheres of life, and its application to the Fermi question is inevitable. We can plug in any number of scenarios that limit a technological society’s ability to become communicative or spacefaring, and indeed there are dozens of potential answers to Fermi’s “Where are they?” But let’s explore this paper because its discussion of the nature of AI and where it leads is timely whether Fermi and SETI come into play or not.
A personal note: I use current AI chatbots every day in the form of ChatGPT and Google’s Gemini, and it may be useful to explain what I do with them. Keeping a window open to ChatGPT offers me the chance to do a quick investigation of specific terms that may be unclear to me in a scientific paper, or to put together a brief background on the history of a particular idea. What I do not do is have AI write something for me, a notion that is anathema to any serious writer. Instead, I ask AI for information, then check it twice over, first against another AI and then against conventional Internet research. And I find that the ability to ask for a paragraph of explanation at various educational levels can help me when I’m trying to learn something utterly new from the ground up.
It’s surprising how often these sources prove to be accurate, but the odd mistake means that you have to exercise great caution in using them. For example, I asked Gemini a few months back how many planets had been confirmed around Proxima Centauri and was told there were none. In reality, we do have one, the intriguing Proxima b, which is Earth-class and in the habitable zone. And we have two candidates: Proxima c is a likely super-Earth on a five-year orbit, and Proxima d is a small world (with a quarter of Earth’s mass) orbiting every five days. Again, the latter two are candidates, not confirmed planets, as per the NASA Exoplanet Archive. I reported all this to Gemini, and yesterday the same question produced an accurate result.
So we have to be careful about AI in even its current state. What happens as it evolves? As Garrett points out, it’s hard to come up with any area of human interest that will be untouched by the effects of AI, and commerce, healthcare, financial investigation and many other areas are already being impacted. Concerns about the workforce are in the air, as are issues of bias in algorithms, data privacy, ethical decision-making and environmental impact. So we have a lot to work with in terms of potential danger.
Image: Michael Garrett, Sir Bernard Lovell chair of Astrophysics at the University of Manchester and the Director of the Jodrell Bank Centre for Astrophysics (JBCA). Credit: University of Manchester.
Garrett’s focus is on AI’s potential as a deal-breaker for technological civilization. Now we’re entering the realm of artificial superintelligence (ASI), which was Stephen Hawking’s great concern when he argued that further developments in AI could spell the end of civilization itself. ASI refers to an independent AI capable of redesigning itself, meaning it moves into areas humans do not necessarily understand. An AI that manages its own evolution at an ever-increasing rate would be a momentous development, and one that poses obvious societal risks.
The author’s assumption is that if we can produce AI and begin the process leading to ASI, then other civilizations in the galaxy could do the same. The picture that emerges is stark:
The scenario…suggests that almost all technical civilisations collapse on timescales set by their wide-spread adoption of AI. If AI-induced calamities need to occur before any civilisation achieves a multiplanetary capability, the longevity (L) of a communicating civilization as estimated by the Drake Equation suggests a value of L ∼ 100–200 years.
Which poses problems for SETI. We’re dealing with a short technological window before the inevitable disappearance of the culture we are trying to find. Assuming only a handful of technological civilizations exist in the galaxy at any particular time (and SETI always demands assumptions like this, which makes it unsettling and in some ways related more to philosophy than science), then the probability of detection is all but nil unless we move to all-sky surveys. Garrett notes that field of view is often overlooked amongst all the discussion of raw sensitivity and total bandwidth. A telling point.
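To get a feel for just how punishing an L of 100–200 years is, here is a minimal back-of-the-envelope sketch of the Drake equation. The parameter values below are my own illustrative assumptions, not figures from Garrett’s paper:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# N is the number of communicating civilizations present in the galaxy at once.
# All parameter values here are illustrative guesses for the sake of the sketch.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations at any given time."""
    return r_star * f_p * n_e * f_l * f_i * f_c * L

# Generous guesses for everything except the lifetime L:
params = dict(
    r_star=1.0,  # star formation rate (stars per year)
    f_p=1.0,     # fraction of stars with planets
    n_e=0.2,     # habitable worlds per planetary system
    f_l=1.0,     # fraction of habitable worlds developing life
    f_i=0.1,     # fraction of those developing intelligence
    f_c=0.1,     # fraction of those becoming communicative
)

for L in (100, 200, 10_000):
    print(f"L = {L:>6} yr -> N ~ {drake(L=L, **params):.1f}")
```

Even with generous values everywhere else, L = 100–200 years yields only a fraction of a single communicating civilization in the galaxy at any given moment, which is why Garrett’s point about field of view and all-sky surveys matters so much.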
But let’s pause right there. The 100-200 year ‘window’ may apply to biological civilizations, but what about the machines that may supersede them? As post-biological intelligence rockets forward in technological development, we see the possibility of system-wide and even interstellar exploration. The problem is that the activities of such a machine culture should also become apparent in our search for technosignatures, but thus far we remain frustrated. Garrett adds this:
We…note that a post-biological technical civilisation would be especially well-adapted to space exploration, with the potential to spread its presence throughout the Galaxy, even if the travel times are long and the interstellar environment harsh. Indeed, many predict that if we were to encounter extraterrestrial intelligence it would likely be in machine form. Contemporary initiatives like the Breakthrough Starshot programme are exploring technologies that would propel light-weight electronic systems toward the nearest star, Proxima Centauri. It’s conceivable that the first successful attempts to do this might be realised before the century’s close, and AI components could form an integral part of these miniature payloads. The absence of detectable signs of civilisations spanning stellar systems and entire galaxies (Kardashev Type II and Type III civilisations) further implies that such entities are either exceedingly rare or non-existent, reinforcing the notion of a “Great Filter” that halts the progress of a technical civilization within a few centuries of its emergence.
Biological civilizations, if they follow the example of our own, are likely to weaponize AI, perhaps leading to incidents that escalate to thermonuclear war. Indeed, the whole point of ASI is that in surpassing human intelligence, it moves well beyond oversight mechanisms, with consequences unlikely to align with what its biological creators find acceptable. Thus the scenario of advanced machine intelligence finding humanity’s demands on energy and resources more of a nuisance than an obligation. Various Terminator-like scenarios (or think Fred Saberhagen’s Berserker novels) suggest themselves as machines set about exterminating biological life.
There may come a time when, as they say in the old Westerns, it’s time to get out of Dodge. Indeed, developing a spacefaring civilization would allow humans to find alternate places to live in case the home world succumbed to the above scenarios. Redundancy is the goal, and as Garrett notes: “…the expansion into multiple widely separated locations provides a broader scope for experimenting with AI. It allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation. Different planets or outposts in space could serve as test beds for various stages of AI development, under controlled conditions.”
But we’re coming up against a hard stop here. While the advance of AI is phenomenal (and some think ASI is no more than a few decades away), the advance of space technologies moves at a comparative crawl. The imperative of becoming a multiplanetary species falls short because it runs out of time. In fact – and Garrett notes this – we may need ASI to help us figure out how to produce the system-wide infrastructure that could give us this redundancy. In that case, technological civilizations may collapse on timescales related to their development of ASI.
Image: How will we use AI in furthering our interests in exploring the Solar System and beyond? Image credit: Generated by AI / Neil Sahota.
We talk about regulating AI, but how to do so is deeply problematic; regulation won’t be easy. Consider one relatively minor current case. As reported in a CNN story, the AI chatbot ChatGPT can be tricked into bypassing blocks put in place by OpenAI (the company behind it) so that hackers can plan a variety of crimes with its help, including money laundering and the evasion of trade sanctions. Such workarounds in the hands of dark interests are troubling enough at today’s level of AI, and we can see future counterparts evolving along with the advancing wave of AI experiments.
It could be said that SETI is a useful exercise partly because it forces us to examine our own values and actions, reflecting on how these might transform other worlds as beings other than ourselves face their own dilemmas of personal and social growth. But can we assume that it’s even possible to understand, let alone model, what an alien being might consider ‘values’ or accepted modes of action? Better to think of simple survival. That’s a subject any civilization has to consider, and how it goes about doing so will determine how and whether it emerges from a transition to machine intelligence.
I think Garrett may be too pessimistic here:
We stand on the brink of exponential growth in AI’s evolution and its societal repercussions and implications. This pivotal shift is something that all biologically-based technical civilisations will encounter. Given that the pace of technological change is unparalleled in the history of science, it is probable that all technical civilisations will significantly miscalculate the profound effects that this shift will engender.
I pause at that word ‘probable,’ which is so soaked in our own outlook. As we try to establish a regulatory framework that can help AI progress in helpful ways and avoid deviations into lethality, we should consider the broader imperative. Call it insurance. I think Garrett is right in noting the lag in the technologies needed to get us off-planet, and I can relate to his concern that advanced AI poses a distinct threat. All the more reason to advocate for a healthy space program as we face the AI challenge. And we should also consider that advanced AI may become the greatest boon humanity has ever seen, making startling breakthroughs that could change our lives in short order.
Call me cautiously optimistic. Can AI crack interstellar propulsion? How about cancer? Such dizzying prospects should see us examining our own values and how we communicate them. For if AI might transform rather than annihilate us, we need to understand not only how to interact with it, but how to ensure that it understands what we are and where we are going.
The paper is Garrett, “Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?” Acta Astronautica Vol. 219 (June 2024), pp. 731-735 (full text). Thanks to my old friend Antonio Tavani for the pointer.
Also up for consideration: if just one post-biological ASI survived a collapse of the biological civilization that created it, what does that imply for the rest of the galaxy over deep time?
Would it ‘hack’ a sufficiently advanced AI, to merge? Would it exterminate a civilization that was creating another ASI?
Berserkers? Or simply that the “Dark Forest” idea is real, and civilizations hide from predatory species or machines.
My sense is that the Dark Forest approach to hiding is not going to be easy to achieve, as it means every civilizational artifact must be hidden from every means of surveillance. I think this may be extremely hard to do, especially if probes can surveil every possible living world looking for the emergence of technological species that must be eliminated. Until this century, who had even considered this? We broadcast our presence to the nearby stars, and we now know that, in principle, telescopes could be built to detect the effects of structures and populations on distant worlds.
The paper concludes by advocating the “necessity for our own technical civilization to intensify efforts to control and regulate AI.” But we should view this political aim skeptically. Often “the policeman is not here to create disorder… the policeman is here to preserve disorder”, and never has this been more true than with AI.
A simple example: there is a long-standing problem on the internet of people being harassed with “revenge porn”, used by ax-grinding exes to stigmatize them and even to get them fired by susceptible employers. There are cases of children persuaded to send an image, blackmailed, even driven to suicide. AI offers a poetically utopian solution: make it so that anyone on Earth can draw like an artist, so that there is no way to tell who was really a victim. Of course, there should be a far better solution, namely not to judge humans for looking like humans in the first place. Yet the political movement with the most traction has been to take rapid action to ban “deepfakes” to ensure that no one interferes with the blackmail industry. This approach was very successful with illicit drugs – not for erasing any harms, but for the higher goal of maximizing profits. Deepfakes will still be made, but they will be made by the right people, and their scarcity will ensure they remain effective.
We can see that some people are worried about the weaponization of AI … any time we try to use it. With very limited exceptions, the AI servers require registration. When you look into the abyss, the abyss is most assuredly looking deep into you. Although “OpenAI” was set up to look like an open-source project, we’ve seen these technologies restricted and locked up under several notions of “ownership”. Creation of an ethical framework helps ensure that only the largest players are able to develop these technologies. This is analogous to how concerns over debris will be used to take control over strategic areas at the lunar poles until the Outer Space Treaty can be set aside completely; it’s just another form of property.
Placing the power of AI entirely in the hands of a few already very powerful organizations has consequences. We haven’t seen most of those consequences because AI, even though we already see it used in war, is still in a relative honeymoon phase designed to promote our acceptance. We still think that we will ask AI questions … rather than the other way around. But soon your self-driving car is not going to drive past Barbra Streisand’s house unless you can explain to the AI why there is a good reason for it to go outside its authorized domain of service. Your ad-supported video stream on your smart TV is going to expect you to sing along to the ads. With feeling! (Yes, it can tell…) Supermarkets are already trialling tags that change their prices from time to time, and those prices will be affected by many market factors, such as whether you were sufficiently encouraging on social media. Our technological society has surrounded us with machines which, by errors of philosophy, we are told we don’t “own”, which we don’t have the right to modify or even repair, and all of which can use AI to act as boots on the ground for those who do own them.
If AI does become a weapon against humanity, space flight is no answer. We already see cheap drones attacking the houses of leaders in faraway countries. If humans can figure out how to get to a space colony, superintelligent AIs will be there even before they arrive. They aren’t subject to limits on acceleration, after all. Otherwise, it wouldn’t be much of a “Great Filter”.
A more organic answer presents itself every time we go to a website and are presented with a “Certificate Expired” notice. In the name of preserving privacy on the web, people who want to run websites are required to submit to a yearly identification check, payment, and of course verification that their content is not inappropriate. Yet the web is also a key dissemination mechanism for software. With ever more complex policies and sufficiently pushy AI monitoring authorizations, it is possible the entire system will simply come to an impasse. Or warfare could bring it to a halt, after which it proves unexpectedly difficult to restart. It is vital that, even as we lament the hardships of a coming Dark Age, we pass on to its people the sense that it was God’s will. They deserve that much comfort. Perhaps one day a culture will rise where we don’t say parts of human minds and livelihoods are ‘intellectual property’, where we don’t use divine-like powers of surveillance to judge people, where we leave off from war. A good culture could survive its own technology. But looking up at the silent sky – perhaps the Dark Age is coming to stay.
@Ron,
Almost every point you make is happening (cf. China) or dramatized in the “Black Mirror” TV series.
OpenAI (hah!) is transitioning to a for-profit company. Is Altman channeling Leland Stanford?
It is time we extended the internet’s Rule 34 to “weaponization” of all internet technologies.
However, on a more optimistic note: while AI at the LLM level currently needs huge compute resources to train and deliver output, I do think technology will allow it to be democratized and extended to the edge, allowing individuals to have their own AIs to push back against the panopticon. I see the potential to spoof surveillance systems in an ongoing arms race in which centralized systems are foiled by myriad personally deployed AIs. In a sense we see that in China’s attempted control of social media content, which is constantly stymied by clever memes. Wait till users deploy AIs to generate the memes, constantly outwitting the PRC authorities. A resistance always seems to arise to evade central control; recall the clever use of technologies in the USSR to outwit import restrictions on Western music, communication, and computer technology.
It surprises me how often the latest groundbreaking ideas connect to Fermi’s paradox and the existence of dead civilizations. Martians were just the beginning, and now we find ourselves discussing AI, suggesting that aliens may be alive and well after all. While we envision an ending akin to the end of the universe, it feels more like the human psyche seeking acknowledgment. I recently rewatched *Blade Runner 2049*, and I think the AI hologram companion could address many psychological issues that our species faces. The key point is the existence of other consciousness in the universe, especially since ours is limited by our short lifespans. Meanwhile, extraterrestrials are likely to be immortal.
On a personal note, I’ve been using Grammarly to improve my comments and was surprised by it…
@Michael Fidler
I think the meme of “Skynet” and other such machine exterminators of humanity is overblown. AIs will create existential threats, though more through human action (e.g., creating weaponized biological agents), although human actions sans AI are doing a great job on that already. As Charlie Stross has long written, corporations are AIs, the equivalent of paperclip maximizers.
My wife loathes Grammarly, which is not good at what it purports to do (she is an extraordinarily good writer of English and can correct Grammarly’s grammar). Having said that, I do leave it turned on to correct my increasingly poor typing. But it can be annoying. For example, it insisted that I could not use the word “statite” and kept surreptitiously changing it to “statute” after I had corrected it. I also hate that it insists on hyphenating so many words.
Quick first thoughts on reading the Garrett paper.
1. He seems to assume a conclusion and then tries to fit his evidence and reasoning to meet that conclusion.
2. As a result, he assumes that artificial superintelligence (ASI) will emerge and destroy biological and technological civilization. Why would an ASI do this? It makes no sense. If anything, as in Colossus: The Forbin Project, an ASI would prevent that harm.
3. Garrett says that AI/ASI will help with space technology and exploration, yet it will apparently not proactively explore space itself. This seems illogical to me.
4. I find Fred Hoyle’s fictional A for Andromeda and the sequel Andromeda Breakthrough a more plausible scenario for how a machine “civilization” might contact us.
5. As Garrett rightly points out, star-faring is easier for machines than biologicals. This suggests to me that the galaxy could be full of intelligent starfarers building new outposts using the resources of star systems.
Therefore, if we do exterminate ourselves, it will be due to our own actions, not those of uncontrollable AI. Biological and artificial stupidity (AS) may be our demise, but not ASI. Sir Martin Rees explored our technological existential risks in Our Final Hour, which, incidentally, is not among the two references to Rees in the paper.
With AI unable even to replicate the regular workday of an ant, I find the AI scare promoted in the media highly amusing, with a technological singularity event equated with doomsday.
AI will certainly change the way we work and perhaps also how we choose and view media, news, and entertainment. We cannot predict right now how far-reaching this will be.
And so we do indeed stand at the point of a singularity event.
But it’s not the first in the last 100 years or so. Nuclear power, mRNA developments, battery technology, targeted therapy in medicine: the examples are too many to list.
Using a computer to order tickets for your vacation, or even for the local cinema evening?
Absurd! Why use a mathematician’s machine for such tasks?
The jet engine made mass transport possible, and people on average incomes now have vacation cottages on the other side of the planet.
That will absolutely never happen. Have you been reading too much sci-fi, my friend?
A handheld device that not only works as a phone, but also keeps all the information of a Filofax, sends notifications about upcoming events, and can even be tweaked to monitor your health.
Stop it, I’m laughing myself silly here. Is that idea from Star Trek?
I keep returning to an idea that has, for me, surprisingly optimistic implications. This is the idea that the “great filters” to the development of technical civilizations apply at the onset – three brief scenarios:
1. life never gets started
2. life does not develop at least one of the several attributes that enable technical development (e.g. tool-making ability)
3. life clears hurdles 1 and 2, but is stalled by environmental effects and events
Earth’s history is one example of environmental effects and events contributing to the direction of evolution in ways that result in humanoids. Dolphins are smart – no technology. Elephants are smart – no technology. Octopuses are smart – no technology. Even our cousins among the great apes are pretty darn smart – no technology. On our planet a number of species survived asteroid strikes, ice ages, and the other existential events that wiped out so many others. Among the several species that survived all that, only one developed tool-making abilities.
On other planets in habitable zones around their stars, it could well be that the luck of the draw did not favor the rise of intelligent, tool-making species, or, let us not forget, any life at all.
I find this thought process to lead to a very optimistic perspective – in this vast universe at least one intelligent, tool-making species has arisen. Irrespective of caveats related to deep time, deep distance, limited life spans and vulnerability to interstellar conditions, WE are here. That’s the only fact we have.
If, and I’m saying “if”, we are the only ones (around here for now) that makes our existence an exceedingly precious commodity. We need to be sure to do the right things while we are here…
We live in an age of confusion regarding the state of science and technology. Is technological and scientific progress slowing down? Is the frequency of disruptive discoveries and technologies decreasing? Are we facing a “wall of chaos and complexity” that may impede future technological progress beyond our current state-of-the-art? For example, are certain technologies such as controlled nuclear fusion, longevity extension, room-temperature superconductors, nanobots, and quantum computers too complex for humans to master even if, in principle, they are physically possible? Is AI over-hyped or underestimated?
Regarding AI, arguably, the technology is over-hyped and underestimated. How can this be? It has been over-hyped since transformers and LLMs hit the scene in the sense that some people project imminent utopia and others dystopia in short order. It is underestimated because the rate of progress within less than the lifetime of a person born in the year 2000 is undeniable. Yes, what “AI” can do is quite impressive in certain arenas, but can these machines truly “reason”, or, are they just advanced software packages that excel at pattern recognition? On the one hand, we hear talk of Ph.D.-level AI; on the other hand, Yann LeCun has suggested that current AI is not as smart as dogs or cats! Ah, the age of confusion rears its ugly (or exciting) head yet again.
Recently, researchers at Apple authored a paper challenging the notion that LLMs can reason. They task the machines with simple math problems worded in language dissimilar to the original training data to see what happens to the AI’s ability to solve them. Interestingly, the placement of irrelevant details, slight changes in wording, and the like cause a dramatic drop in these systems’ ability to solve even basic arithmetic. And, of course, AI hallucinations, i.e. the generation of completely nonsensical statements couched in articulate sentences, have not been tamed. Then we have the example of the ARC Challenge puzzles, at which many humans excel without training data, but even the state-of-the-art AI models cannot get above 30% correct. As an aside, I have tried some of these ARC Challenge puzzles and they are quite fun. The recent Apple paper and the failure of AI models to solve the ARC Challenge tasks strongly suggest that the current state of the art lacks “fluid intelligence”. These systems are unable to reason in the way that a human child can.
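As a toy illustration of the sort of perturbation involved (the problems below are invented for this comment, not taken from the Apple paper), note how an irrelevant clause leaves the arithmetic untouched while changing the surface wording:

```python
# Toy GSM-Symbolic-style perturbation: append a distractor clause that is
# irrelevant to the arithmetic. The reported finding is that such distractors
# can sharply degrade LLM accuracy even though the correct answer is unchanged.
# (These example problems are invented for illustration.)

base = ("Maria picks {n} apples on Saturday and {m} apples on Sunday. "
        "How many apples does she pick in total?")

distracted = ("Maria picks {n} apples on Saturday and {m} apples on Sunday. "
              "Five of the apples are a bit smaller than average. "
              "How many apples does she pick in total?")

n, m = 17, 26
for template in (base, distracted):
    prompt = template.format(n=n, m=m)
    print(prompt, "-> correct answer:", n + m)  # answer identical in both cases
```

A human child shrugs off the smaller apples; the reported result is that the models often do not.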
The cognitive limitations of AI could be a moot point sooner rather than later, as LLMs are not the end of the line, and researchers are trying to move beyond this paradigm; even Yann LeCun recently addressed the point. There is good reason to believe that whatever constitutes AGI or ASI will be built from new architectures and paradigms that may or may not also include LLMs. But even when the cognitive limitations of today’s AI are removed, other questions remain. Among them: what does it mean, on a practical level, to have an entity smarter than the smartest humans set to work on the advancement of basic science and technology? I am super-curious to see whether, once AGI or ASI is attained, these machines will re-accelerate technological progress. Will they allow us to increase the number of “disruptive breakthroughs” in science and technology? These questions are very relevant to the topic of space travel.
Will ASI or AGI allow us to circumvent the current bottlenecks in space technology? When will dramatic technological progress OUTSIDE of information technology start to occur? What level of information technology will we need to attain before we see dramatic gains in the realm of non-information technologies? Wouldn’t it be nice if an ASI drew up the plans for a space elevator or an antimatter engine?
My thesis is that to increase the frequency of disruptive scientific and technological breakthroughs, AGI/ASI will be needed, but civilization will also need to develop an off-world infrastructure for two reasons: (i) the amount of matter and energy in the solar system outside of earth is much greater than what is contained on our planet and access to these resources will be required, and (ii) any attempt to build a Type I civilization on the Earth will destroy the ecosphere and render the planet uninhabitable to complex multicellular life. Think of this thesis as a marriage of the “techno-optimist” and “doomer” camps.
Once an off-world infrastructure is established (and advanced AI and robotics will be essential in bringing this about), the ability to develop new technologies, perhaps some of which will be based on an improved understanding of physics, will be more feasible. We could imagine the construction of a large particle accelerator that makes the LHC look like child’s play, or we could imagine, as Charles Pellegrino did, the construction of antimatter by the ton in factories powered by vast solar arrays near the planet Mercury.
When it comes to the question of technological progress and scientific progress, there are several possibilities:
1. We are nearing the limit imposed by complexity and chaos and the “low-hanging fruit” has already been picked.
2. We are nowhere near the limit of what we can achieve technologically even based on our existing understanding of natural laws.
3. We are nowhere near the limit of what we can achieve technologically even based on our existing understanding of natural laws, AND there are additional natural laws or fundamental breakthroughs in science that will allow us to go even further than we could if our current understanding of basic science has already been maximized.
Our technology consists of emergent properties. Silicon and other materials configured in a certain way lead to emergent phenomena that are greater than the sum of their parts alone. I contend that we have barely scratched the surface in terms of the number of technologies with novel emergent properties that we can construct based even on existing laws of nature, but it may take what I previously outlined in my “thesis” to enter this expanded set of emergent phenomena.