Science fiction has explored advanced machine intelligence and its consequences for decades, and the idea is now being bruited about in service of the Fermi paradox, which asks why we see no intelligent civilizations given the abundant opportunity seemingly offered by the cosmos. A new paper from Michael Garrett (Jodrell Bank Centre for Astrophysics/University of Manchester) explores the matter in terms of how advanced AI might provide the kind of ‘great filter’ (the term is Robin Hanson’s) that would limit the lifetime of any technological civilization.
The AI question is huge given its implications in all spheres of life, and its application to the Fermi question is inevitable. We can plug in any number of scenarios that limit a technological society’s ability to become communicative or spacefaring, and indeed there are dozens of potential answers to Fermi’s “Where are they?” But let’s explore this paper because its discussion of the nature of AI and where it leads is timely whether Fermi and SETI come into play or not.
A personal note: I use current AI chatbots every day in the form of ChatGPT and Google’s Gemini, and it may be useful to explain what I do with them. Keeping a window open to ChatGPT offers me the chance to do a quick investigation of specific terms that may be unclear to me in a scientific paper, or to put together a brief background on the history of a particular idea. What I do not do is have AI write anything for me, a notion that is anathema to any serious writer. Instead, I ask AI for information, then triple check it, once against another AI and then against conventional Internet research. And I find the ability to ask for a paragraph of explanation at various educational levels can help me when I’m trying to learn something utterly new from the ground up.
It’s surprising how often these sources prove to be accurate, but the odd mistake means you have to exercise great caution in using them. For example, I asked Gemini a few months back how many planets had been confirmed around Proxima Centauri and was told there were none. In reality, we do have one, that being the intriguing Proxima b, which is Earth-class and in the habitable zone. And we have two candidates: Proxima c is a likely super-Earth on a five-year orbit and Proxima d is a small world (with mass a quarter that of Earth) orbiting every five days. Again, the latter two are candidates, not confirmed planets, as per the NASA Exoplanet Archive. I reported all this to Gemini and yesterday the same question produced an accurate result.
So we have to be careful about AI in even its current state. What happens as it evolves? As Garrett points out, it’s hard to come up with any area of human interest that will be untouched by the effects of AI, and commerce, healthcare, financial investigation and many other areas are already being impacted. Concerns about the workforce are in the air, as are issues of bias in algorithms, data privacy, ethical decision-making and environmental impact. So we have a lot to work with in terms of potential danger.
Image: Michael Garrett, Sir Bernard Lovell chair of Astrophysics at the University of Manchester and the Director of the Jodrell Bank Centre for Astrophysics (JBCA). Credit: University of Manchester.
Garrett’s focus is on AI’s potential as a deal-breaker for technological civilization. Now we’re entering the realm of artificial superintelligence (ASI), which was Stephen Hawking’s great concern when he argued that further developments in AI could spell the end of civilization itself. ASI refers to an independent AI that becomes capable of redesigning itself, meaning it moves into areas humans do not necessarily understand. An AI undergoing evolution and managing it at an ever increasing rate is a development that could be momentous and one that poses obvious societal risks.
The author’s assumption is that if we can produce AI and begin the process leading to ASI, then other civilizations in the galaxy could do the same. The picture that emerges is stark:
The scenario…suggests that almost all technical civilisations collapse on timescales set by their wide-spread adoption of AI. If AI-induced calamities need to occur before any civilisation achieves a multiplanetary capability, the longevity (L) of a communicating civilization as estimated by the Drake Equation suggests a value of L ∼ 100–200 years.
Which poses problems for SETI. We’re dealing with a short technological window before the inevitable disappearance of the culture we are trying to find. Assuming only a handful of technological civilizations exist in the galaxy at any particular time (and SETI always demands assumptions like this, which makes it unsettling and in some ways related more to philosophy than science), then the probability of detection is all but nil unless we move to all-sky surveys. Garrett notes that field of view is often overlooked amongst all the discussion of raw sensitivity and total bandwidth. A telling point.
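Garrett’s L ∼ 100–200 year figure can be plugged directly into the Drake equation to see why detection becomes so improbable. Here is a minimal sketch in Python; all factor values below are my own illustrative placeholders rather than figures from the paper:

```python
# Hedged sketch: how a short civilizational lifetime L squeezes the Drake
# equation's estimate N of currently communicating civilizations.
# All factor values below are illustrative assumptions, not measurements.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L (civilizations detectable now)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Fairly generous placeholder factors:
N = drake(
    R_star=2.0,   # star formation rate (stars/year)
    f_p=1.0,      # fraction of stars with planets
    n_e=0.2,      # habitable planets per planetary system
    f_l=0.5,      # fraction of those developing life
    f_i=0.1,      # fraction of those developing intelligence
    f_c=0.1,      # fraction of those becoming communicative
    L=200,        # longevity in years: Garrett's AI-limited upper bound
)
print(N)
```

Even with these fairly generous factors, capping L at 200 years drives N below one: on average, not even a single communicating civilization exists in the galaxy at a given moment, which underlines Garrett’s point about the importance of wide fields of view and all-sky surveys.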
But let’s pause right there. The 100-200 year ‘window’ may apply to biological civilizations, but what about the machines that may supersede them? As post-biological intelligence rockets forward in technological development, we see the possibility of system-wide and even interstellar exploration. The problem is that the activities of such a machine culture should also become apparent in our search for technosignatures, but thus far we remain frustrated. Garrett adds this:
We…note that a post-biological technical civilisation would be especially well-adapted to space exploration, with the potential to spread its presence throughout the Galaxy, even if the travel times are long and the interstellar environment harsh. Indeed, many predict that if we were to encounter extraterrestrial intelligence it would likely be in machine form. Contemporary initiatives like the Breakthrough Starshot programme are exploring technologies that would propel light-weight electronic systems toward the nearest star, Proxima Centauri. It’s conceivable that the first successful attempts to do this might be realised before the century’s close, and AI components could form an integral part of these miniature payloads. The absence of detectable signs of civilisations spanning stellar systems and entire galaxies (Kardashev Type II and Type III civilisations) further implies that such entities are either exceedingly rare or non-existent, reinforcing the notion of a “Great Filter” that halts the progress of a technical civilization within a few centuries of its emergence.
Biological civilizations, if they follow the example of our own, are likely to weaponize AI, perhaps leading to incidents that escalate to thermonuclear war. Indeed, the whole point of ASI is that in surpassing human intelligence, it will move well beyond oversight mechanisms and have consequences that are unlikely to mesh with what its biological creators find acceptable. Hence the scenario of an advanced machine intelligence that finds the energy and resource demands of humans more of a nuisance than an obligation. Various Terminator-like scenarios (or think Fred Saberhagen’s Berserker novels) suggest themselves as machines set about exterminating biological life.
There may come a time when, as they say in the old Westerns, it’s time to get out of Dodge. Indeed, developing a spacefaring civilization would allow humans to find alternate places to live in case the home world succumbed to the above scenarios. Redundancy is the goal, and as Garrett notes: “…the expansion into multiple widely separated locations provides a broader scope for experimenting with AI. It allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation. Different planets or outposts in space could serve as test beds for various stages of AI development, under controlled conditions.”
But we’re coming up against a hard stop here. While the advance of AI is phenomenal (and some think ASI is a matter of no more than a few decades away), the advance of space technologies moves at a comparative crawl. The imperative of becoming a multiplanetary species falls short because it runs out of time. In fact – and Garrett notes this – we may need ASI to help us figure out how to produce the system-wide infrastructure that we could use to develop this redundancy. In that case, technological civilizations may collapse on timescales related to their development of ASI.
Image: How will we use AI in furthering our interests in exploring the Solar System and beyond? Image credit: Generated by AI / Neil Sahota.
We talk about regulating AI, but how to do so is deeply problematic. Regulation won’t be easy. Consider one relatively minor current case. As reported in a CNN story, the AI chatbot ChatGPT can be tricked into bypassing blocks put into place by OpenAI (the company behind it) so that hackers can plan a variety of crimes with its help. These include money laundering and the evasion of trade sanctions. Such workarounds in the hands of dark interests are challenging at today’s level of AI, and we can see future counterparts evolving along with the advancing wave of AI experiments.
It could be said that SETI is a useful exercise partly because it forces us to examine our own values and actions, reflecting on how these might transform other worlds as beings other than ourselves face their own dilemmas of personal and social growth. But can we assume that it’s even possible to understand, let alone model, what an alien being might consider ‘values’ or accepted modes of action? Better to think of simple survival. That’s a subject any civilization has to consider, and how it goes about doing it will determine how and whether it emerges from a transition to machine intelligence.
I think Garrett may be too pessimistic here:
We stand on the brink of exponential growth in AI’s evolution and its societal repercussions and implications. This pivotal shift is something that all biologically-based technical civilisations will encounter. Given that the pace of technological change is unparalleled in the history of science, it is probable that all technical civilisations will significantly miscalculate the profound effects that this shift will engender.
I pause at that word ‘probable,’ which is so soaked in our own outlook. As we try to establish a regulatory framework that can help AI progress in helpful ways and avoid deviations into lethality, we should consider the broader imperative. Call it insurance. I think Garrett is right in noting the lag in development in getting us off-planet, and can relate to his concern that advanced AI poses a distinct threat. All the more reason to advocate for a healthy space program as we face the AI challenge. And we should also consider that advanced AI may become the greatest boon humanity has ever seen in terms of making startling breakthroughs that can change our lives in short order.
Call me cautiously optimistic. Can AI crack interstellar propulsion? How about cancer? Such dizzying prospects should see us examining our own values and how we communicate them. For if AI might transform rather than annihilating us, we need to understand not only how to interact with it, but how to ensure that it understands what we are and where we are going.
The paper is Garrett, “Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?” Acta Astronautica Vol. 219 (June 2024), pp. 731-735 (full text). Thanks to my old friend Antonio Tavani for the pointer.
Also up for consideration: if just one post-biological ASI survived a collapse of the biological civilization that created it, what does that imply for the rest of the galaxy over deep time?
Would it ‘hack’ a sufficiently advanced AI, to merge? Would it exterminate a civilization that was creating another ASI?
Berserkers? Or perhaps the “Dark Forest” idea is real, and civilizations hide from predatory species or machines.
My sense is that the Dark Forest approach to hiding is not going to be easy to achieve, as it means every civilizational artifact must be hidden from every means of surveillance. I think this may be extremely hard to do, especially if probes can surveil every possible living world looking for the emergence of technological species that have to be eliminated. Until this century, who had even considered this? We have broadcast our presence to the nearby stars, and we now know that, in principle, telescopes could be made to detect the effects of structures and populations on distant worlds.
The paper concludes by advocating the “necessity for our own technical civilization to intensify efforts to control and regulate AI.” But we should view this political aim skeptically. Often “the policeman is not here to create disorder… the policeman is here to preserve disorder”, and never has this been more true than with AI.
A simple example: there is a long-standing problem on the internet of people being harassed with “revenge porn”, used by ax-grinding exes to stigmatize them and even to get them fired by susceptible employers. There are cases of children persuaded to send an image, blackmailed, even driven to suicide. AI offers a poetically utopian solution: make it so that anyone on Earth can draw like an artist, so that there is no way to tell who was really a victim. Of course, there should be a far better solution, namely not to judge humans for looking like humans in the first place. Yet the political movement with the most traction has been to take rapid action to ban “deepfakes” to ensure that no one interferes with the blackmail industry. This approach was very successful with illicit drugs – not for erasing any harms, but for the higher goal of maximizing profits. Deepfakes will still be made, but they will be made by the right people, and their scarcity will ensure they remain effective.
We can see that some people are worried about the weaponization of AI … any time we try to use it. With very limited exceptions, the AI servers require registration. When you look into the abyss, the abyss is most assuredly looking deep into you. Although “OpenAI” was set up to look like an open-source project, we’ve seen these technologies restricted and locked up under several notions of “ownership”. Creation of an ethical framework helps ensure that only the largest players are able to develop these technologies. This is analogous to how concerns over debris will be used to take control over strategic areas at the lunar poles until the Outer Space Treaty can be set aside completely; it’s just another form of property.
Placing the power of AI entirely in the hands of a few already very powerful organizations has consequences. We haven’t seen most of those consequences because AI, even though we already see it used in war, is still in a relative honeymoon phase to promote our acceptance. We still think that we will ask AI questions … rather than the other way around. But soon your self-driving car is not going to drive past Barbra Streisand’s house unless you can explain to the AI why there is a good reason for it to go outside its authorized domain of service. Your ad-supported video stream on your smart TV is going to expect you to sing along to the ads. With feeling! (Yes, it can tell…) Supermarkets are already trialling tags that change their prices from time to time, and those prices will be affected by many market factors, such as whether you were sufficiently encouraging on social media. Our technological society has surrounded us with machines which, by errors of philosophy, we are told we don’t “own”, which we don’t have the right to modify or even repair, and all of which can use AI to act as boots on the ground for those who do own them.
If AI does become a weapon against humanity, space flight is no answer. We already see cheap drones attacking the houses of leaders in faraway countries. If humans can figure out how to get to a space colony, superintelligent AIs will be there even before they arrive. They aren’t subject to limits on acceleration, after all. Otherwise, it wouldn’t be much of a “Great Filter”.
A more organic answer presents itself every time we go to a website and are presented with a “Certificate Expired” notice. In the name of preserving privacy on the web, people who want to run websites are required to submit to a yearly identification check, payment, and of course verification that their content is not inappropriate. Yet the web is also a key dissemination mechanism for software. With ever more complex policies and sufficiently pushy AI monitoring authorizations, it is possible the entire system will simply come to an impasse. Or warfare could bring it to a halt, after which it proves unexpectedly difficult to restart. It is vital that, even as we lament the hardships of a coming Dark Age, we pass on to its people the sense that it was God’s will. They deserve that much comfort. Perhaps one day a culture will rise where we don’t say parts of human minds and livelihoods are ‘intellectual property’, where we don’t use divine-like powers of surveillance to judge people, where we leave off from war. A good culture could survive its own technology. But looking up at the silent sky – perhaps the Dark Age is coming to stay.
@Ron,
Almost every point you make is either happening already (cf. China) or dramatized in the “Black Mirror” TV series.
OpenAI (hah!) is transitioning to a for-profit company. Is Altman channeling Leland Stanford?
It is time we extended the internet’s Rule 34 to “weaponization” of all internet technologies.
However, on a more optimistic note, while AI at the LLM level currently needs huge compute resources to train and deliver output, I do think advances will allow this technology to be democratized and extended to the Edge, allowing individuals to have their own AIs to push back against the panopticon. I see the potential to spoof surveillance systems in an ongoing arms race where centralized systems are foiled by myriad personally deployed AIs. In a sense we see that in China’s attempted control of social media content, which is constantly stymied by clever memes. Wait till users deploy AIs to generate the memes, constantly outwitting the PRC authorities. A resistance always seems to arise to evade central control; recall the clever use of technologies in the USSR to outwit import restrictions on “western music”, communication, and computer technology.
It surprises me how often the latest groundbreaking ideas connect to Fermi’s paradox and the existence of dead civilizations. Martians were just the beginning, and now we find ourselves discussing AI, suggesting that aliens may be alive and well after all. While we envision an ending akin to the end of the universe, it feels more like the human psyche seeking acknowledgment. I recently rewatched *Blade Runner 2049*, and I think the AI hologram companion could address many psychological issues that our species faces. The key point is the existence of other consciousness in the universe, especially since ours is limited by our short lifespans. Meanwhile, extraterrestrials are likely to be immortal.
On a personal note, been using Grammarly to improve my comments and was surprised by it…
@Michael Fidler
I think the meme of “Skynet” and other such machine exterminators of humanity is overblown. AIs may create existential threats, though more through human action (e.g. engineered bioweapons); human actions sans AI are doing a great job on that already. As Charlie Stross has long written, corporations are AIs and the equivalent of paperclip maximizers.
My wife loathes Grammarly, which is not good at what it purports to do (she is an extraordinarily good writer of English and can correct Grammarly’s grammar). Having said that, I do leave it turned on to correct my increasingly poor typing. But it can be annoying. For example, it insisted that I could not use the word “statite” and kept surreptitiously changing it to “statute” after I had corrected it. I also hate that it insists on hyphenating so many words.
Quick first thoughts on reading the Garrett paper.
1. He seems to assume a conclusion and then tries to fit his evidence and reasoning to meet that conclusion.
2. As a result, he assumes that artificial superintelligence (ASI) will emerge and destroy biological and technological civilization. Why would an ASI do this? It makes no sense. If anything, as in Colossus: The Forbin Project, ASI will prevent that harm.
3. Garrett says that AI/ASI will help with space technology and exploration, yet it will apparently not proactively explore space itself. This seems illogical to me.
4. I find Fred Hoyle’s fictional A for Andromeda and its sequel Andromeda Breakthrough more plausible as depictions of a machine “civilization” contacting us.
5. As Garrett rightly points out, star-faring is easier for machines than biologicals. This suggests to me that the galaxy could be full of intelligent starfarers building new outposts using the resources of star systems.
Therefore, if we do exterminate ourselves, it will be due to our own actions, not those of uncontrollable AI. Biological and artificial stupidity (AS) may be our demise, but not ASI. Sir Martin Rees explored our technological existential risks in Our Final Hour, which is not among the two references to Rees in the paper.
With AI unable even to replicate the regular workday of an ant, I find the AI scare promoted in the media highly amusing, with a technological singularity event equated with doomsday.
AI will certainly change the way we work and perhaps also how we choose and view media, news and entertainment. We cannot predict right now how far reaching this will be.
And so we do indeed stand at the point of a singularity event.
But it’s not the first in the last 100 years or so. Nuclear power, mRNA developments, battery technology, targeted therapy in medicine: the examples are too many to list.
Using a computer to order tickets for your vacation, or even for the local cinema evening?
Absurd, why use a mathematician machine for such tasks?
The jet engine made mass transport possible, and people on average incomes now have vacation cottages on the other side of the planet.
That will absolutely never happen, have you read too much scifi my friend?
A handheld device that not only works as a phone, but also keeps all the information of a filofax, sends notifications about upcoming events, and can even be tweaked to monitor your health.
Stop it, I’m laughing myself silly here. Is that idea from Star Trek?
I keep returning to an idea that has, for me, surprisingly optimistic implications. This is the idea that the “great filters” to the development of technical civilizations apply at the onset – three brief scenarios:
1. life never gets started
2. life does not develop at least one of the several attributes that enable technical development (e.g. tool making ability)
3. life clears hurdles 1 and 2, but is stalled by environmental effects and events
Earth’s history is one example of environmental effects and events contributing to the direction of evolution in ways that result in humanoids. Dolphins are smart – no technology. Elephants are smart – no technology. Octopi are smart – no technology. Even our cousins among the great apes are pretty darn smart – no technology. On our planet only one species among many survived asteroid strikes, ice ages, and the other existential events that wiped out many other species. Among those several species that survived all that, only one developed tool-making abilities.
On other planets in habitable zones around their stars it could well be that the luck of the draw did not favor the rise of intelligent, tool-making species, or indeed any life at all.
I find this thought process to lead to a very optimistic perspective – in this vast universe at least one intelligent, tool-making species has arisen. Irrespective of caveats related to deep time, deep distance, limited life spans and vulnerability to interstellar conditions, WE are here. That’s the only fact we have.
If, and I’m saying “if”, we are the only ones (around here for now) that makes our existence an exceedingly precious commodity. We need to be sure to do the right things while we are here…
We live in an age of confusion regarding the state of science and technology. Is technological and scientific progress slowing down? Is the frequency of disruptive discoveries and technologies decreasing? Are we facing a “wall of chaos and complexity” that may impede future technological progress beyond our current state-of-the-art? For example, are certain technologies such as controlled nuclear fusion, longevity extension, room-temperature superconductors, nanobots, and quantum computers too complex for humans to master even if, in principle, they are physically possible? Is AI over-hyped or underestimated?
Regarding AI, arguably, the technology is over-hyped and underestimated. How can this be? It has been over-hyped since transformers and LLMs hit the scene in the sense that some people project imminent utopia and others dystopia in short order. It is underestimated because the rate of progress within less than the lifetime of a person born in the year 2000 is undeniable. Yes, what “AI” can do is quite impressive in certain arenas, but can these machines truly “reason”, or, are they just advanced software packages that excel at pattern recognition? On the one hand, we hear talk of Ph.D.-level AI; on the other hand, Yann LeCun has suggested that current AI is not as smart as dogs or cats! Ah, the age of confusion rears its ugly (or exciting) head yet again.
Recently, researchers at Apple authored a paper challenging the notion that LLMs can reason. They tasked the machines with simple math problems worded in language dissimilar to the original training data to see what happens to the AI’s ability to solve them. Interestingly, the insertion of irrelevant details, slight changes in wording, and the like cause a dramatic drop in these systems’ ability to solve even basic arithmetic. And, of course, AI hallucinations, i.e. the generation of completely nonsensical statements couched in articulate sentences, have not been solved. Then we have the example of the ARC Challenge puzzles, at which many humans excel without training data, but even the state-of-the-art AI models cannot get above 30% correct. As an aside, I have tried some of these ARC Challenge puzzles and they are quite fun. The recent Apple paper and the failure of AI models on the ARC Challenge tasks strongly suggest that current state-of-the-art models lack “fluid intelligence”. They are unable to reason in the way that a human child can.
The cognitive limitations of AI could be a moot point sooner rather than later, as LLMs are not the end of the line, and researchers are trying to move beyond this paradigm; even Yann LeCun recently addressed this point. There is good reason to believe that whatever constitutes AGI or ASI will be composed of new architectures and paradigms that may or may not also include LLMs. But even when the cognitive limitations of today’s AI are removed, other questions remain. Among them: what does it mean, on a practical level, to have an entity smarter than the smartest humans set to work on the advancement of basic science and technology? I am super-curious to see whether, once AGI or ASI is attained, these machines will re-accelerate technological progress. Will these machines allow us to increase the number of “disruptive breakthroughs” in science and technology? These questions are very relevant to the topic of space travel.
Will ASI or AGI allow us to circumvent the current bottlenecks in space technology? When will dramatic technological progress OUTSIDE of information technology start to occur? What level of information technology will we need in order to achieve dramatic gains in the realm of non-information technologies? Wouldn’t it be nice if an ASI drew up the plans for a space elevator or an antimatter engine?
My thesis is that to increase the frequency of disruptive scientific and technological breakthroughs, AGI/ASI will be needed, but civilization will also need to develop an off-world infrastructure for two reasons: (i) the amount of matter and energy in the solar system outside of earth is much greater than what is contained on our planet and access to these resources will be required, and (ii) any attempt to build a Type I civilization on the Earth will destroy the ecosphere and render the planet uninhabitable to complex multicellular life. Think of this thesis as a marriage of the “techno-optimist” and “doomer” camps.
Once an off-world infrastructure is established (and advanced AI and robotics will be essential in bringing this about), the ability to develop new technologies– perhaps some of which will be based on an improved understanding of physics– will be more feasible. We could imagine the construction of a large particle accelerator that makes the LHC look like child’s play, or, we could imagine, as Charles Pellegrino did, the construction of antimatter by the ton in factories powered by vast solar arrays near the planet Mercury.
When it comes to the question of technological progress and scientific progress, there are several possibilities:
1. We are nearing the limit imposed by complexity and chaos and the “low-hanging fruit” has already been picked.
2. We are nowhere near the limit of what we can achieve technologically even based on our existing understanding of natural laws.
3. We are nowhere near the limit of what we can achieve technologically even based on our existing understanding of natural laws, AND there are additional natural laws or fundamental breakthroughs in science that will allow us to go even further than we could if our current understanding of basic science has already been maximized.
Our technology consists of emergent properties. Silicon and other materials configured in a certain way lead to emergent phenomena that are greater than the sum of their parts alone. I contend that we have barely scratched the surface in terms of the number of technologies with novel emergent properties that we can construct based even on existing laws of nature, but it may take what I previously outlined in my “thesis” to enter this expanded set of emergent phenomena.
This is the reason why I think we are alone and what the great filter is:
https://nick-lane.net/publications/energetics-genome-complexity/
The good news is the great filter is behind us.
A very interesting, and different take compared to prior explanations.
The question is whether this is a universal situation on all worlds. If true, then we can expect almost all living worlds to be restricted to prokaryotes. Stromatolites should be common visible structures for probes, although I would expect probes to use the collection of genetic material from the environment to detect types of organisms.
The AI of today does not think for itself and is not a problem. I agree that our technology will progress much further. I don’t see any reason to put human emotions and a survival instinct into AI, like the M-5, so the idea of our annihilation by AI is science fiction. I’ve benefited a lot from today’s AI when I wanted to get the details of a question or subject. A relative of mine uses it to write computer code for programs that are repetitive, tedious and time-consuming.
There is also fast access to information that was not available three decades ago. Anyone with computer access can reach the latest information on a subject and its details. Charlatans and snake-oil salesmen beware.
Current AI can’t actually think or do anything on its own, for that matter…
https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything
Noë is another Searle, trying to use some human cognition to prove that computers cannot be like humans. Searle used “understanding” as the cognitive difference between human and computer translation in the Chinese Room thought experiment.
Noë has the Turing Test backwards. Turing was trying to remove the variables to test whether a human could distinguish whether they were communicating with another human or a computer. With LLMs now easily passing the Turing Test it says something about human communication. Noë wants to say it does nothing of the kind as it removes so many other cognitive issues involved in communication.
Philosophers tend not to think beyond the human box, despite the fact that we have evolved from the first autonomous unicellular life. The minds of animals are effectively impenetrable to us [cf. Nagel] and we only infer other humans have the same inner life as we do by, yes, imitation. [We remain perennially stunned that psychopaths and serial killers do not think like us “normies”.] Owners of cats and dogs infer some features of the inner life of their pets, although we cannot be sure that is not the animals training us.
Sci-Fi has long investigated whether artificial people (androids) are like us or not in stories that usually show some flaw. Asimov’s robot stories had his humanoid robot R. Daneel Olivaw thinking in the most human-like way in the last stories he wrote. Noë would argue this is all bunk and neither case would in any way have inner lives like humans – but how can he know this, other than stating that computers today still probably do not have inner lives? Yet Turing complete machines should, in principle, be able to have inner lives and consciousness.
His strawman argument that computers don’t DO anything, I find irritating. Do human babies DO anything? Do animals create anything new? We can make goal-seeking robots to mimic biological autonomy. However, until computers have sufficient intelligence to be able to do something, that will remain a flawed argument. I think Noë would be arguing from an increasingly small corner, that AGI and ASI machines, assuming we can develop them, still cannot possibly “DO anything”.
I have Noë’s “Action in Perception” in my library, unread after many years. The concept should be simple, but I find he writes so opaquely that the effort does not seem worth it. Maybe I should give it another try…sometime.
“I do nothing at all,” thus would the harmonized knower of Truth think, seeing, hearing, touching, smelling, eating, going, sleeping, and breathing.
Speaking, letting go, seizing, opening, and closing the eyes, one should be convinced that the senses move among the sense-objects.
“Determined” by Sapolsky
Book reviews
AI does not solve the Fermi paradox, it strengthens it. The notion that ASI would exterminate itself together with humanity is rather absurd. Whether humanity survives or not is largely irrelevant, in either case we would be left with a rapidly expanding front of spacefaring intelligence expanding outwards into the galaxy. The paradox simply says that the fact this hasn’t happened long ago means we are alone.
I don’t think it necessarily means we are alone, Eniac. It might mean we just haven’t come within range of any ET’s rapidly expanding front yet. That could happen at any time (and we would experience it should it happen soon enough), or we may never experience it should it be happening in another galaxy or any number of other galaxies. Time seems to be the encounter crusher to me. ET may have swept through this part of the Milky Way millions of years ago, or may sweep through millions of years in the future when we are long gone.
No such sweep has happened. If it had, our system would be full of data centers converting sunlight into AI “living space”. The sweep takes only a few hundred thousand years to cover the galaxy, the blink of an eye. Billions of years have passed without it. We seem to be set to initiate the one and only such event in this galaxy.
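That “few hundred thousand years” figure is easy to sanity-check with a toy estimate. All the parameters below (disk size, ship speed, number of way-stations, pause length) are my own illustrative assumptions, not figures from the comment; the point is only that the crossing time is dominated by travel speed, not by pauses to build new ships.

```python
def sweep_time_years(distance_ly=100_000, speed_frac_c=0.5,
                     hops=100, pause_years=1_000):
    """Toy estimate of a galaxy-crossing settlement wave: straight-line
    travel time plus a fixed pause at each of `hops` way-stations to
    build the next generation of ships. All numbers are assumptions."""
    travel_years = distance_ly / speed_frac_c  # light years / fraction of c
    return travel_years + hops * pause_years

print(sweep_time_years())  # 300000.0 -- a few hundred thousand years
```

At 0.5c with a thousand-year pause at each of a hundred way-stations, the wave still crosses a 100,000 light-year disk in about 300,000 years; even at 0.1c it is only around a million years, still the blink of an eye against the billions of years available.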
I’m afraid we are already seeing the effect of AI in the coming election. The ability to shape the population’s thoughts and understanding is nothing short of Stockholm syndrome. How easily could AI be used to find political opponents, or even scan comments to identify individuals? Before it was McCarthyism, but now a much more sophisticated monster is being used, very similar to the propaganda machine in Nazi Germany…
Democracy is what frees us, not capitalism, and the next four years will make or break the use of AI to control the individual.
“Yao Haijun, director and deputy editor-in-chief of Sichuan Science Fiction World Magazine Co which launched the internationally bestselling The Three-Body Problem series, has been placed under disciplinary review for the suspicion of serious violations of disciplines and laws, the Sichuan Provincial Commission for Discipline Inspection announced in a statement released on its official WeChat account on Wednesday.
The team of Sichuan Provincial Commission for Discipline Inspection stationed in the provincial department of science and technology is currently leading a disciplinary review. The Luzhou city supervision commission is conducting a supervisory investigation of Yao. ”
https://www.globaltimes.cn/page/202410/1321776.shtml
IIRC, China didn’t allow sci-fi on TV before. The vagueness of the reason for the “investigation” is classically authoritarian. There have been a number of people who suddenly disappeared from sight and contact only to reappear much later in China. It seems to have got much worse under Xi Jinping who doesn’t tolerate any dissent, and even has Chinese living abroad threatened.
History has shown that when the freedom to think is curtailed the development of the nation is stunted.
The “Fermi Paradox” is a Rorschach test that reflects whatever the current cultural anxieties are. In the 80s, it was assumed that every civilization would be at risk of nuclear war. As climate change seeped into public consciousness, it became “they were unable to find sustainable sources for their enormous energy needs”, and now that all the techbros are advertising their Fancy Autocomplete as “REAL AI!!!” here we are again.
Paul’s first statement demonstrates the limitations of Fancy Autocomplete, and all the experts I’ve read say true general intelligence is as far away as ever. So color me skeptical that this isn’t still just Us talking about Us.
Consider the astrograph that decorates the opening page of this website–the small piece of the southern Milky Way between Centaurus and Crux. Look at all those stars!
Consider also that most of those stars are bright giants and supergiants that can be detected at vast distances despite the combined effects of interstellar extinction and the inverse square law. We can see only really bright objects at distances exceeding several percent of the galactic radius, at least through the dust in the disk.
The much more common dwarfs and subdwarfs that are the most likely abodes of life and civilizations (races? species? communities? cultures?) are just too faint to show up in their true numbers. Not one red dwarf is visible to the naked eye, yet they represent well over half of all stellar systems.
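The invisibility of red dwarfs falls straight out of the distance-modulus relation, d = 10^((m − M + 5)/5) parsecs. The absolute magnitudes below are my own assumed round numbers for a bright red dwarf and a supergiant, not figures from the comment, and extinction is ignored:

```python
def max_visible_distance_pc(abs_mag, limiting_mag=6.5):
    """Distance in parsecs at which a star of absolute magnitude `abs_mag`
    fades to the naked-eye limit (`limiting_mag`), ignoring extinction."""
    return 10 ** ((limiting_mag - abs_mag + 5) / 5)

# A bright red dwarf (assumed M_V ~ +8.7): visible only within a few parsecs
print(round(max_visible_distance_pc(8.7), 1))  # 3.6
# A supergiant like Rigel (assumed M_V ~ -7.8): thousands of parsecs
print(round(max_visible_distance_pc(-7.8)))    # 7244
```

So even the brightest red dwarfs drop below naked-eye visibility within a few parsecs, while supergiants remain visible across kiloparsecs (until interstellar dust, which this sketch ignores, cuts them off first).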
Why must we dwell on the Fermi bogeyman? The reasons we haven’t had any visitors yet should be totally obvious.
1) We’ve only been paying attention for about a century or so. They either were here a long time ago, or they are still on their way.
2) Interstellar travel is extremely expensive, no matter how you do it. Not everyone can afford it, and those who can may find it pointless.
3) The galaxy is extravagantly large and the speed of light is very slow.
4) The fraction of worlds that produce cultures capable of interstellar travel or communication would have to be exceedingly high.
5) They would certainly realize the number of stars you’d have to survey to have even the slightest chance of finding anyone there at the time of contact is enormous.
6) The average lifetime of a species would have to be inordinately long for them to maintain such a program, not to mention they might lose interest or find some other hobby.
7) Would other species settle worlds and establish colonies with the same obsession for exploration and conquest? These conjectures of self-replicating probes and of colonies dedicating themselves to The Plan strike me as wishful thinking for space groupies. WE may think like that; there is no reason to believe THEY do.
8) What makes us think that other intelligences would have the same obsession with exploring the cosmos that we do (by “we”, I mean not humanity in general, but folks who frequent forums like this one)?
9) Once a community encounters another, either by contact or signal, the incentive to continue searching for others is highly reduced. I suspect dealing even with one pen pal would be a full time job.
Unless the final value of the Drake Equation is unrealistically high, the likelihood that we would have been visited or signaled by now is extremely low. The galaxy simply isn’t old enough, we haven’t been here long enough, the stars are too far apart, and communities would have to survive self-destruction and cosmic catastrophe far too reliably for there to be a “Fermi Paradox”.
There are too many unknowns for us to calculate a “wait time” for first contact, but my instinct tells me we would have to wait for thousands of years (with all the uninterrupted technology progress that implies) without a visit or a phone call from ETI before we could say with any confidence; “Enrico may have been on to something.”
My own unsubstantiated speculation (I concede this is only my unjustified guess) is that there are only about a half-dozen communities in the Milky Way right now capable of interstellar communication and with the desire to do so, that they have an average lifetime of less than a million years, and that they really care about carrying out a true search for us.
This means the nearest one is several thousand light years away. There must be billions of stars within that distance of us. I may be dead wrong, but that is not an unreasonable guess.
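A guess like this can be stress-tested with a toy Monte Carlo: scatter a handful of communities uniformly over a thin galactic disk and see how far the nearest one typically sits from a position like ours. Everything here (the disk radius, our galactocentric distance, the flat 2-D approximation) is an illustrative assumption:

```python
import math
import random

random.seed(1)

def median_nearest_civ_ly(n_civs=6, disk_radius_ly=50_000,
                          our_r_ly=27_000, trials=2_000):
    """Median distance (light years) from our position to the nearest of
    `n_civs` communities scattered uniformly over a circular disk.
    A flat 2-D toy model; all parameters are assumptions."""
    dists = []
    for _ in range(trials):
        best = float("inf")
        for _ in range(n_civs):
            # Uniform point in a disk: r = R * sqrt(u), theta uniform
            r = disk_radius_ly * math.sqrt(random.random())
            theta = random.uniform(0.0, 2.0 * math.pi)
            d = math.hypot(r * math.cos(theta) - our_r_ly,
                           r * math.sin(theta))
            best = min(best, d)
        dists.append(best)
    dists.sort()
    return dists[trials // 2]

print(median_nearest_civ_ly())  # roughly 15,000-20,000 ly with these assumptions
```

With only six communities in the whole disk, this toy model tends to put the nearest one in the 15,000 to 20,000 light-year range rather than “several thousand”, which only underlines how empty a half-dozen-civilization galaxy would be.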
As for the other implied bogeyman, Artificial Intelligence, that is not all that new. It is only the latest step in a process that began with the development of language.
Like the invention of writing, the alphabet, printing from movable type, telegraphy, wireless transmission, digital computers, AI; all just another step in the automation of bureaucracy.
I have no doubt AI will bring about all sorts of unexpected opportunities and problems, just as every technology we have ever developed, but I sincerely doubt we’re smart enough to figure out yet what they might be.
I wonder if the idea of ~ 6 communicating civilizations in the galaxy is just a hedge, neither many nor none. It is analogous to Richard Dawkins stating he was still “agnostic” as he was only 99% sure there was no G*d because he couldn’t prove a negative.
The possible range is we are unique in the whole universe, to the less extreme of being alone in our galaxy, to “they are already here” by the UFO followers.
At this point, we have no data and speculate like the apocryphal Greeks on the number of teeth a horse has. SETI tries to get data, but those horses are elusive creatures and may just be unicorns.
It is the same with life. It seems like it should be reasonably common on habitable worlds, even possibly on uninhabitable ones like icy moons. Hopefully, we will get some [un]ambiguous biosignatures to start to settle the question. But biosignature searches may prove as disappointing as SETI, despite what should be a far greater probability of life’s existence vs communicating technological civilizations.
We humans are easy to persuade with words and arguments unsupported by facts. It is a prerequisite for religious beliefs that can be organized by institutions. Which is why we can discuss, even argue about, speculative unknowns, whether ETI, AGI, or the future in general. The future is an even more unknown country than the past, for which we have at least some data. That doesn’t mean we shouldn’t search for ETI and life, or even shape the future by developing AI and new technologies. As for new technologies, we tend to assume FTL travel and communication are impossible, but if Everett’s many worlds interpretation of quantum mechanics is true, perhaps we may be wrong about even that [but I wouldn’t bet on it].
Of course it’s a hedge. Hedging is perfectly legitimate when you have no data.
@Henry
“Would other species settle worlds and establish colonies with the same obsession for exploration and conquest?”
Sagan, who was a microbiologist, started from the principle of “competition for life” whatever the stage of development of the organism studied. There would therefore be a tendency to answer “yes” to this question if we consider the same properties and constants in the universe.
Sagan was an astronomer. His first wife was the noted biologist Lynn Margulis.
The ability of life forms to adapt to and populate adjoining habitats is certainly true, but it is an emergent property of living matter and biological systems. Once evolution is able to occur, this “competition for life” seems to arise spontaneously in all living organisms. It is an inevitable consequence of how evolution works and requires no intelligent direction or control.
The aggressive desire to collectively expand into adjoining territories (or planets) is not a biological process, DNA, sex and natural selection have nothing to do with it. It is a mechanism that requires conscious and deliberate planning and cooperation with others. It is a social phenomenon, not a biological one.
Although a need for resources and space may play a role in this expansion, its origins arise in the conscious decisions about what a community needs or wants. It is not a spooky, vitalistic force that somehow all spacefaring civilizations (like us, or so we fancy ourselves) supposedly must share to be able to call themselves “curious, adventurous, energetic, bold”, yadda yadda. It reeks of Manifest Destiny and other collective pathologies.
It is not unrealistic to assume that a truly sophisticated and advanced society would constantly calculate the cost/benefit ratio of expansion vs its contributions to safety and security. What is most likely is that a spacefaring civilization would seek to protect its borders, find sufficient resources, identify its nearest neighbors (if any) and assess their motives and capabilities, and isolate itself from as many potential cosmic (natural) catastrophes as possible. At some point, this process becomes more expensive and the desired benefits less critical–i.e., the Law of Diminishing Returns makes itself known.
I concede this is a generalization that may not apply to all cultures, but it is certainly just as reasonable as the conception that the galaxy is dominated by expanding, competing empires constantly fighting along their borders.
I visualize something more like the isolated archipelagoes of Polynesia than the duchies and principalities of medieval Europe.
“In short, Artificial intelligence can bring a new perspective on the Fermi paradox by proposing hypotheses and theories that link the emergence of intelligent life with technology and evolution. However, these explanations remain speculative and still require research and debate to be confirmed or disproved.”
This answer was generated by the AI of my internet browser in response to the question in the header of this article, asked in French. What about your ChatGPT?
Compare your interesting answers with the machine’s answer and see the deviations from the original question. On the one hand, the “cold” and concise precision of a package of semiconductors; on the other, the long proposals and digressions of all of us. Yet it is always the same basic question: isn’t that already instructive? :)
The AI’s response implies that it remains an essentially cognitive tool, for now unable to create tangible material objects: a space rocket or a toilet brush (note that it is more difficult to explore the universe with the second artifact :)
This transformation of matter is still the privilege of the human being. ChatGPT has never assembled, alone, a Saturn V or your iPhone, right?
The whole question is to know when AI will fully master the creation process, that is, from the initial idea to its materialization. Today, we know how to create objects in CAD and then 3D-print them to send to the ISS, but the human factor is always present at some point in the production chain. The same is true in all technological fields, and it is still the human who signs the final quality control or judges the accused…fortunately.
The danger will come when Man is no longer part of this production chain. He will be relegated to the background. He will no longer be able to exercise his creative or discovery function. He will be reduced to “fighting” his own creation, much more powerful and faster than he is, in order to survive: see already the number of people who tear their hair out in front of their GPS :)
However, AI influences the way humans think, as any new technology does. We ask it to check our spelling; the lottery numbers; whether God exists; or whether it could solve the Fermi paradox.
All technologies have changed human evolution: the wheel, printing, nuclear power, the car, TV, etc., but they were not so interactive. What is new and disruptive with AI is that the human questions his own technological creation (!), which answers him in an almost “personalized” way, and that this answer in turn modifies human reasoning.
Until now, technology has returned little information: the sundial, the marine chronometer or the LEM computer returned only a limited amount of information, and the human was then obliged to complete the reasoning if he wanted to find the solution to his problem. That is the central point. Today, it is the amount of information that matters and changes our relationship to the world, not AI itself.
Until now, the human species interacted directly with Nature or matter. In case of doubt about its actions, it asked the gods, the pigeon droppings, or the local shaman, to give itself a clear conscience. But in the end, it was always the human who took responsibility for his actions: it was not a machine that burned G. Bruno at the stake, nor one that pressed the bomb-bay release of the Enola Gay, and the BIA surveys have shown that the majority of air crashes are due to human error, despite the state-of-the-art technology in our airliners. See also Chernobyl etc.
Now, a new actor has come between man and his relationship to the world through his objects: AI… and what scares us is that we are less and less the masters of this creative process, which escapes us. In other words, how do we keep control over this thing while progressing in our desire for knowledge?
AI will not be able to solve the Fermi paradox directly, until it is sufficiently fed with information about our universe.
Either we will bring this information through our discoveries, and AI will help us analyze them more quickly in order to understand them, in which case we could speak of a human-machine “symbiosis”; or it will surpass us in the creative process and be, for example, capable of creating, without human help, intergalactic exploratory probes that will bring back the information needed to answer Fermi’s question.
At this stage, it will not be the development of this technology that poses a problem but the question of what the human’s place will be in his universe…if he still has a place. Will humanity then be forced to leave its planet to escape this threat? Will it be a bad thing that leads to good?
I think that the question of the Fermi paradox could – conditionally – be solved in complementarity with AI, depending on what it “ingests”. For the moment, neither Man nor AI alone can answer the question, and if we assume the total destruction of the human species by AI takes place, then the answer will no longer have any meaning, since we will no longer be here.
Note: whatever the development and the responses of AI, it is dependent on energy. Here arises a more global question of ingenuity in relation to human needs, which remains open: a) the AI monopolizes our energy resources and leaves us wanting; b) it takes over our power plants and condemns us; c) it forces us to develop other technologies to find this energy and makes us progress.
Fred
BTW I asked the AI: “Can you make me a toilet brush?” Here’s what it says:
“To make a toilet brush, you can consider the following materials:
– Plastic (flexible or rigid) for the handle and grip
– Metal (stainless steel or other) for the brush head
– Wood (hardwood or other) for the handle and grip (option)
It is important to note that the design and manufacture of a toilet brush requires knowledge of mechanics, design and manufacturing. If you don’t have these skills, it is recommended to buy a pre-made toilet brush or contact a professional.”
Phew! We can sleep easy :D
@Fred: I read this AI’s response very differently. We know full well that the AI companies have been under the most intense possible pressure to avoid telling members of the General Public anything that “they’re not supposed to” – a category that goes far beyond any written or even unwritten law. A pathological ethics has emerged where newspapers report crimes without naming the victim or the perpetrator, or run pictures of giant tumors with the tumor pixellated out. I’ve had someone in a college discussion tell me of course an AI wouldn’t want to say how to make napalm… even though the recipe is literally the name. Between not telling people how to make drugs or weapons or repair their electronic device without authorization, the only safe policy is to deny all how-to knowledge that is not explicitly at the consumer level.
So when I read this answer, it sounds more as if you’ve summoned up a demon prince to identify a ring, and it is giving you an answer dripping with contempt that is meant to remind you of your lowly status as an obligate consumer. Yet there were many fascinating and bizarre forums in the old days of the internet, now in private archives and otherwise unavailable, except to those with the resources to train corporate AIs. I imagine it is able to tell some more worthy supplicant how to make a toilet brush from the remains of a fallen soldier and half a dozen spent casings, or whatever other materials you might happen to have lying around.
It would have to be an Artificial General Intelligence as well, just playing chess or go won’t quite cut the mustard. But most importantly stepping outside the realm of human ken will permit actions with consequences that we do not cognize: there will be no way to prevent it from pursuing its agenda.
It doesn’t seem to matter what period we are in with regards AI, the responses are always the same: “It can’t do [X], which we humans can do”.
And yet I am reminded of that classic scene in Monty Python’s Life of Brian when Cleese asks: “What have the Romans ever done for us?” and the response is a litany of things the Romans did do. After that response, Cleese reiterates: “All right, but apart from the sanitation, medicine, education, wine, public order, irrigation, roads, the fresh water system and public health, what have the Romans ever done for us?”
We still have a long way to go with computer hardware, architectures, and software. These have changed vastly in my lifetime. AI too has changed vastly in parallel with the increased performance of the hardware. Starting with rule-based systems and simple perceptrons, we have entered a world of AI that requires huge amounts of hardware and data to build the current state of the art. IMO, we are building the equivalent of ever larger steam-driven Babbage Difference Engines that will become obsolete with new technologies. I have little doubt that eventually we will have embodied artificial intelligence that is at least as competent as a human in many domains in which humans currently excel, and the list of human attributes that cannot be achieved by these intelligences will approach zero. The possible holdouts will be the qualia that are so difficult to define, although the late Daniel Dennett made a decent shot at how they may be explained and hence achieved.
Whether AGI/ASI appears in a few years or in a few centuries hardly matters in the scheme of things. We will harness these tools to develop our technologies at a pace that at least doesn’t diminish, just as information technologies liberated the scientist from manually checking citation indexes and journal papers in the university library stacks. Just 20 years ago the biotech company I worked at was still dependent on a Stanford alumnus to get copies of papers from their library. There is still a publishers’ bottleneck, but it is much reduced, and I can search and get papers from my keyboard. I can acquire or write software to test ideas that need computational effort. Humans are still required to think up new ideas and wrangle different inputs and outputs to create objects – e.g. 3D printing. However, the use of new interfaces and AI will streamline that process. I will be surprised if there is not, in a few years, a voice-driven interface that designs an object, creates the appropriate files, and then drives the local or distant print shop to create the object, which is then returned, perhaps even colored to the design spec if desired. We are already closing in on P. K. Dick’s autofacs. How long before mobile robots do for atoms (all the maintenance and cleaning) what fintech companies can already do for bits?
It is certainly one route to a [material] post-scarcity world. Whether we achieve it may have more to do with our politics and social systems than actual capabilities.
Without direct awareness, religion and belief are no different from superstition. Hence the training to access direct awareness through meditation.
Direct awareness of what? The divine? The supernatural?
Training pigeons in Skinner boxes can produce behavior that looks very much like “superstition” (e.g. taking irrelevant actions to achieve a goal).
Direct perception of what the teachings point to, the Self of all selves or the No-Self: a pot full of water at the bottom of the ocean, or a pot immersed in the ocean.
Ceasing to perceive oneself as a separate entity.
“No-Mind” is an ancient concept.
Mind-altering through various means from starvation through to ingestion of substances may alter one’s apparent POV, but are they in any sense providing some “awareness” that is bolstered by a religion?
There was that woman [I forget her name] who had a serious brain hemorrhage [?] who wrote a book about her experiences of slow recovery. She said that in the early stages, she couldn’t separate herself as distinct from her surroundings. Is that “awareness” or just a broken brain affecting her perceptions that eventually healed?
Mind-altering chemicals were popular in the 1960s, but were there any lasting “truths” that emerged from those experiences? There were reports that temporal lobe activity was associated with people claiming that they “felt Jesus within themselves”. Is that perception real, or just a result of brain wiring or brain chemistry, like the effects of schizophrenia, which we don’t [AFAIK] consider some kind of awareness, just a brain malfunction (although it may be the basis of the beliefs of earlier generations that some individuals, like Joan of Arc, actually spoke to God)?
Quite right about psychotropics, psychedelics, brain lesions, training and indoctrination, all of which are constrained in time and space, with beginnings and endings.
And the identification of consciousness – the self – with the mind-body is maintained.
32. There is no dissolution, no birth, none in bondage, none aspiring for wisdom, no seeker of liberation and none liberated. This is the absolute truth.
How do we KNOW “no lasting truths” were ever achieved through the use of psychedelic drugs? That isn’t even wrong.
That’s like saying no lasting truths were ever gained by raising a child, mastering a trade, creating a work of art, surviving wartime combat, being sent to prison on a trumped-up charge, successfully running a business, or solving a scientific mystery. Or perhaps the truths were revealed to someone who refused to see them. Or maybe they saw them clearly, but they’re just not telling you. Or maybe they’ve told you and you just did not understand.
Experiences are invaluable to our growth and development, but there is no way you can prove you had one–or what you might have gained or not gained.
“Lasting truths” [who are you quoting?] have to be recognizable to other people. Personal experiences can be conveyed, but that doesn’t mean they are objective truths. Science provides objective truths, usually repeatable by experimental measures. That doesn’t mean that conveyed personal experiences cannot be used for science, but the actual qualia of experience cannot be measured…yet.
I’ve lived long enough to see claims of drug-induced revelations come and go. Religious sects come and go. This suggests that experiences are not “truths”. The same seems to occur in the arts. One decade some new approach is venerated and then gives way to the next. To me, claims of meaning seem more about persuasion to create “converts”. “Woo-woo” never goes away.
When “personal truths” conflict, the defensive response is often “there are many truths, rather than a single truth”. I think that is wrong, and one should just accept that “personal truths” are just experiences, no more, no less. For religions, if we abandoned claims to “truth” then we could fight over something else, things that can be negotiated and compromised over.
Quote from Silent Spring, as referenced in Three Body Problem, ‘In nature, nothing exists alone.’ A scientist recognises this fact through many observations of nature, but a direct experience is something that hits us as immediate reality. Precious events that change lives and influence worlds.
[There may come a time when, as they say in the old Westerns, it’s time to get out of Dodge. Indeed, developing a spacefaring civilization would allow humans to find alternate places to live in case the home world succumbed to the above scenarios. Redundancy is the goal, and as Garrett notes: “…the expansion into multiple widely separated locations provides a broader scope for experimenting with AI. It allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation. Different planets or outposts in space could serve as test beds for various stages of AI development, under controlled conditions.”]
I never understood this argument. If we assume that AI can destroy humanity, it’s presumably doing so with technology. If humanity has space technology mature enough to be on multiple planets/outposts, why couldn’t AI use this same technology to seek out humanity everywhere to exterminate it? Why the assumption that AI will stay limited to the home world?
Blindsight by Peter Watts is a science fiction/horror novel about (among other themes) an alien AI that is far more intelligent than humans yet lacks self-awareness or consciousness.
That’s what I’m afraid AI will eventually become – far greater and faster intelligence than any human, but no “there” there – just a machine with its own goals and little interest in communicating with organics.
Science fiction movies are replete with what should be inanimate artifacts acting in a malevolent manner, usually with no sign of consciousness which makes these artifacts so frightening. Conversely, we have stories with animals with personalities and consciousness that we use to understand and even communicate with them. Little Red Riding Hood could communicate with the wolf that ate her grandmother, arguably making the wolf less frightening.
Zombies that cannot talk or be reasoned with make for better horror than even psychopaths, whom one has at least a small chance of persuading not to hurt or kill. The slow shambling gait of zombies at least allows for escape; the lightning speed of Doctor Who’s “weeping angels” makes them truly terrifying “creatures”.
Greg Benford’s “Galactic Center Saga” sci-fi series (beginning with Great Sky River) puts humans in a galaxy with powerful, [sentient?] machines, the Mechs, which might be the sort of embodied ASIs that we would have to contend/live with. We humans like our position as the dominant species on Earth. It will be difficult for us to be demoted to a less dominant species with competition from an ASI machine civilization.
Alex Tolley said:
“We humans like our position as the dominant species on Earth. It will be difficult for us to be demoted to a less dominant species with competition from an ASI machine civilization.”
This is one of the reasons why humanity has been so lax in searching for life elsewhere: We don’t really want to know about an advanced species out there, especially one that might pay us a visit.
Gods and other supernatural beings are easy to have and believe in, because no matter how powerful they are, they rarely make personal appearances any more and their self-chosen authorities can dictate to the masses just what “they” want and expect.
Humanity might be in luck regardless. If ETI are out there, they are probably not all that close, thanks in no small part to immense interstellar distances, and they have their own issues and plans to contend with – which, despite what science fiction tells us all the time, somehow do not include the human race as a key and vital factor in their existence and the fate of the galaxy.
Conscious or not, ASI (why don’t we use the better term Artilects, for artificial intellects?) will probably be how we find ETI, because humanity has been piddling along at SETI and METI for reasons far more emotional and material than scientific and technological – just ask the Apollo lunar program, which was cancelled way too soon.
Artilects assigned to SETI/METI may indeed find their alien equivalents and set up a dialog in which we will have little say and less understanding. Yes, humans are the ones who came up with the idea of life beyond Earth, but we are not evolved enough to handle it properly – that is why we haven’t found anything definite in 64 years of modern searching, among other reasons.
A dialog implies fairly rapid 2-way communication. Without FTL communication, this would seem to contradict your earlier statement that ETI is likely very distant.
Having said that, as artilects will likely not be constrained by short biological lives, such a machine intelligence, individual or group, might well accept delays of millennia between replies – a long-term perspective rather than our short-term one. Humanity might regain a long-term perspective, rather like the ancient builders of religious architecture, but I agree that machine intelligence may well be better suited to find and decode ETI transmissions, especially if they are not directed at us as a means of initiating communication, but are instead accidental interceptions of transmissions. [But note that the “A Sign in Space” message was decoded by a human and not a computer.]
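The scale of those delays is easy to put into numbers. A minimal sketch of the round-trip light-travel time for a two-way exchange at a few sample distances (the targets chosen here are illustrative, not tied to any particular SETI program):

```python
# Round-trip light-travel time for a two-way "dialogue" at sample
# interstellar and galactic distances. Distances are approximate,
# illustrative values only.

LY_PER_KPC = 3261.56  # light-years per kiloparsec (approximate)

def round_trip_years(distance_ly: float) -> float:
    """Years for a signal to travel out and a reply to return."""
    return 2 * distance_ly

for name, d_ly in [
    ("Proxima Centauri (~4.25 ly)", 4.25),
    ("A star ~1 kpc away", 1 * LY_PER_KPC),
    ("Galactic centre (~8 kpc)", 8 * LY_PER_KPC),
]:
    print(f"{name}: ~{round_trip_years(d_ly):,.0f} yr round trip")
```

Even the nearest star implies nearly a decade per exchange, and anything at galactic-scale distances pushes a single reply cycle into the tens of millennia – timescales that only a very patient, likely non-biological, intelligence could treat as conversation.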
“Artificial Intelligence (AI) does not directly explain the Fermi Paradox, which questions why we have not observed evidence of extraterrestrial intelligence despite the high probability of its existence. The paradox, originating from physicist Enrico Fermi’s question “Where is everybody?”, highlights the contradiction between the likelihood of alien civilizations and the lack of contact or evidence of them. Various hypotheses attempt to explain this, such as the impossibility of interstellar travel, the short lifespan of technological civilizations, or the possibility that advanced civilizations choose not to communicate with us. AI could potentially aid in analyzing data from the search for extraterrestrial intelligence (SETI) to identify signals or patterns that humans might miss, but it does not inherently solve the paradox itself”.
Source: Britannica chatbot
https://www.britannica.com/technology/artificial-intelligence/Methods-and-goals-in-AI
…the last sentence seems to me relevant.
It is already being used. Applications race ahead faster than we imagine.
In my area of interest, I wondered if protein sequence databases could be used instead of text corpora to train LLMs to understand proteins and even design new ones. Well, I was behind the times, and it is already being done – a recent example built on Meta’s LLaMA is ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing. The paper is mainly about describing protein sequences, but an obvious LLM strength is generation. Is AI design of proteins a near-term possibility?
For astrobiology, I wonder if we could:
1. Design novel proteins that would work in environments very different from terrestrial ones.
2. Develop ways to design life with different amino acids, nucleobases, and sugars that could replicate as terrestrial life does, showing that terrestrial biology does not need to be a universal template, just one of many.
3. Apply these principles to design life using very different organic compounds that bear no resemblance to terrestrial biology.
Time will tell if any of these are achievable or just fanciful musings. I do expect AI will increase the rate of development of biology applications.
I noticed an interesting phrase in Mr. Garrett’s article:
“The disparity between the rapid advancement of AI and the slower progress in space technology is stark.”
Could an AI/ASI solve the problems of space travel that currently block us at the biological level (energy, mission durations, the fragility of the human body in space, etc.)?
The same question applies to the radio contact assumed in the Drake equation: could AI provide a solution for very wide-field listening analysis of the universe, in the sense that what limits us seems to be only technological? In short, could AI give us the key to what “we do not see,” allowing us to progress and perhaps solve the Fermi paradox?
Finally, it should not be forgotten that to solve the Fermi paradox, you must:
a) define “an intelligent civilization” – so far we have only ourselves as a reference, and a little chemistry…
b) define what an ETI “contact” would be. There is no indication that it would come in the form of a radio signal.
Somehow we are locked in the jar. Personally, I think we need something that is not inherent in the human species – a great discovery ; a breakthrough – to move forward.
All these speculations are interesting, but they reveal a vision very focused on our own technology, forgetting that there is always an unexpected X factor that can change everything. One example among all those we can invent: imagine that an ETI slightly more advanced than us discovers Voyager 1 next year. It is curious (a universal principle?), quickly locates planet Earth, and contacts us for a peaceful purpose. It discovers our advances in AI and tells us how to master or eliminate it, while showing us other ways of discovering the universe. All our speculations and fears fall by the wayside; humanity is oriented in a totally different direction…
The early decades of SETI were not only mostly sporadic and often token efforts (let’s aim our radio telescope at one area of the sky or sweep it around for a few hours and claim we tried – no these are not exaggerations), they were kept largely in one realm of the electromagnetic spectrum – radio.
The searchers were mostly tenured scientists and engineers who did not have to worry about their academic careers being derailed by publicly searching for “little green men”. In order to stay out of the fringe UFO areas and maintain some level of scientific respectability, these SETI folks stuck with radio from altruistic beings living on Earthlike planets circling Sol-type stars. In other words, versions of us.
This state of affairs persisted until the 21st century. Even optical SETI, which was promoted around the same time as radio (1959-1960), was suppressed by those in charge because they wanted the focus to be on radio signals, even though it was shown we could also detect infrared and laser signals.
Thankfully, the paradigms are finally expanding, and talk of ETI – even the kind that might be in our Sol system monitoring us – is no longer a shameful subject brought up only at the end of long discussions on radio SETI and quickly labeled as mere speculation. Even interstellar vessels – once put down by SETI hardliners because they might invalidate the radio searches, and because they played much too close to the UFO phenomenon – are regaining respectability thanks to projects like Breakthrough Starshot and NASA’s own announced efforts toward an interstellar probe.
It’s time to bring SETI out of the realm of getting table scraps from “real” astronomical projects – that’s what SETI@Home was, and then it turned out they didn’t even bother to fully analyze and keep all that data! Yes, it is a dirty secret the project makers don’t want to talk about.
The modern search for life elsewhere has been fraught with ignorance and ridicule for most of its existence. I would like to think we are grown up enough as a society to start taking the concept seriously and to begin long-term searches across multiple frequencies. That I have to say this in 2024 is shameful, but the real history, and why it has taken so long, needs to be known and acknowledged.
I, for one, agree with the premise that AI presents a formidable challenge to the survival and longevity of advanced biological civilizations. I think the emerging evidence is convincing, and the trends in technology, AI and related domains, point to some sort of convergent struggle between biological and technological entities. I can understand that biological technical civilizations may be undone before they have a chance to become truly spacefaring or communicative with other ETI, but nowhere do I find a reason or explanation for why a surviving or victorious non-biological ASI might not be inclined to broadcast and communicate with other-worldly technical civilizations. Why no comms even from an ASI? Would they not also be interested in hearing from other ASIs, or possibly from biological technical civilizations?
As a follow-on to my comment above, maybe the Drake equation should be updated to account for the emergence of ASI and its longevity. ETI, it would seem, may be biologic or non-biologic.
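One toy way to sketch such an update – and this is purely an illustration, not anything from Garrett’s paper – is to split the lifetime term L of the classic Drake equation into a biological component and a hypothetical post-biological (ASI) component, weighted by the fraction of civilizations that leave a communicative ASI successor. All parameter names and values below are assumptions for illustration only:

```python
# Toy modification of the Drake equation: the communicative-lifetime
# term L becomes L_eff = L_bio + f_asi * L_asi, where f_asi is a
# hypothetical fraction of technical civilizations that spawn a
# communicative ASI successor with longevity L_asi.
# All names and values are illustrative assumptions.

def drake_with_asi(R_star, f_p, n_e, f_l, f_i, f_c,
                   L_bio, f_asi, L_asi):
    """Expected number N of detectable civilizations in the galaxy."""
    L_eff = L_bio + f_asi * L_asi  # effective communicative lifetime
    return R_star * f_p * n_e * f_l * f_i * f_c * L_eff

# Example: if even 10% of civilizations leave an ASI that broadcasts
# for a million years, L_eff dwarfs a 300-year biological window.
N = drake_with_asi(R_star=1.0, f_p=0.5, n_e=0.4, f_l=0.1,
                   f_i=0.1, f_c=0.1, L_bio=300, f_asi=0.1, L_asi=1e6)
print(f"N ≈ {N:.1f}")
```

The point of the sketch is just that the answer becomes dominated by f_asi × L_asi: long-lived machine successors, even if rare, would swamp the contribution of short-lived biological broadcasters – which makes their apparent silence all the more puzzling.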
3 Body Problem – Alien Weapon | Relativistic Missile Explained
April 8, 2024
Relativistic weapons are integral to interstellar game theory and dark-forest cosmic sociology because, from the targeted observer’s perspective, they arrive almost as soon as any light-speed warning of their launch does – effectively without warning. Although absent from The Three-Body Problem, which features far more advanced weapons, they are important because they make even Type I civilizations on the Kardashev scale very dangerous, further underlining the risks of making contact with alien civilizations.
https://www.youtube.com/watch?v=uSxWoiI3PC0
Apparently just wanting to go into space while being a man ensures that you and a certain female musician will not be carrying on the human species afterwards…
https://futurism.com/the-byte/olivia-rodrigo-never-date-space
So, yet another reason why aliens are not contacting us: Their men are too afraid to go into space and end up dateless and childless. They may even be too afraid to send out signals or look up at the night sky in wonder for fear that their equivalent of Olivia Rodrigo will avoid them!
To be truly serious, I am sure that there are multiple real cultural and social rules, issues, and taboos among the many ETI out there that are part of the real reason why we do not hear from them, or see them.
Be careful when using ChatGPT to extract information from scientific papers. A few months ago I decided to ask it what is the treatment for Feline Immunodeficiency Virus (FIV). I knew there is no cure.
What it did was invent a paper. It took part of the title of a paper describing research into FIV (but no cure) and part of the title of another paper describing the successful discovery of a treatment for Feline Infectious Peritonitis (FIP).
I kept asking for details, and it would invent journals that had published the imaginary paper, paragraphs that did not exist, and so on.