Extraterrestrial civilizations, if they exist, would pose a unique challenge in comprehension. With nothing in common other than what we know of physics and mathematics, we might conceivably exchange information. But could we communicate our cultural values and principles to them, or hope to understand theirs? It was Ludwig Wittgenstein who said “If a lion could speak, we couldn’t understand him.” True?
One perspective on this is to look not into space but into time. Traditional SETI is a search through space and only indirectly, through speed of light factors, a search through time. But new forms of SETI that look for technosignatures — and this includes searching our own Solar System for signs of technology like an ancient probe, as Jim Benford has championed — open up the chronological perspective in a grand way.
Now we are looking for conceivably ancient signs of a civilization that may have perished long before our Sun first shone. A Dyson shell, gathering most of the light from its star, could be an artifact of a civilization that died billions of years ago.
Image: Philosopher Ludwig Wittgenstein (1889-1951), whose Tractatus Logico-Philosophicus was written during military duty in the First World War. It has been confounding readers like me ever since.
Absent aliens to study, ponder ourselves as we look into our own past. I’ve spent most of my life enchanted with the study of the medieval and ancient world, where works of art, history and philosophy still speak to our common humanity today. But how long will we connect with that past if, as some predict, we will within a century or two pursue genetic modifications to our physiology and biological interfaces with computer intelligence? It’s an open question because these trends are accelerating.
What, in short, will humans in a few hundred years have in common with us? The same question will surface if we go off-planet in large numbers. Something like an O’Neill cylinder housing a few thousand people, for example, would create a civilization of its own, and if we ever launch ‘worldships’ toward other stars, it will be reasonable to consider that their populations will dance to an evolutionary tune of their own.
The crew that boards a generation ship may be human as we know the term, but will it still be five thousand years later, upon reaching another stellar system? Will an interstellar colony create a new branch of humanity each time we move outward?
Along with this speculation comes the inevitable issue of artificial intelligence, because it could be that biological evolution has only so many cards to play. I’ve often commented on the need to go beyond the conventional mindset of missions as being limited to the lifetime of their builders. The current work called Interstellar Probe at Johns Hopkins, in the capable hands of Voyager veteran Ralph McNutt, posits data return continuing for a century or more after launch. So we’re nudging in the direction of multi-generational ventures as a part of the great enterprise of exploration.
But what do interstellar distances mean to an artilect, a technological creation that operates by artificial intelligence that eclipses our own capabilities? For one thing, these entities would be immune to travel fatigue because they are all but immortal. These days we ponder the relative advantages of crewed vs. robotic missions to places like Mars or Titan. Going interstellar, unless we come up with breakthrough propulsion technologies, favors computerized intelligence and non-biological crews. Martin Rees has pointed out that the growth of machine intelligence should happen much faster away from Earth as systems continually refine and upgrade themselves.
It was a Rees essay that reminded me of the Wittgenstein quote I used above. And it leads me back to SETI. If technological civilizations other than our own exist, it’s reasonable to assume they would follow the same path. Discussing the Drake Equation in his recent article Why extraterrestrial intelligence is more likely to be artificial than biological, Lord Rees points out there may be few biological beings to talk to:
Perhaps a starting point would be to enhance ourselves with genetic modification in combination with technology—creating cyborgs with partly organic and partly inorganic parts. This could be a transition to fully artificial intelligences.
AI may even be able to evolve, creating better and better versions of itself on a faster-than-Darwinian timescale for billions of years. Organic human-level intelligence would then be just a brief interlude in our “human history” before the machines take over. So if alien intelligence had evolved similarly, we’d be most unlikely to “catch” it in the brief sliver of time when it was still embodied in biological form. If we were to detect extraterrestrial life, it would be far more likely to be electronic than flesh and blood—and it may not even reside on planets.
Image: Credit: Breakthrough Listen / Danielle Futselaar.
I don’t think we’ve really absorbed this thought, even though it seems to be staring us in the face. The Drake Equation’s factor regarding the lifetime of a civilization is usually interpreted in terms of cultures directed by biological beings. An inorganic, machine-based civilization that was spawned by biological forebears could refine the factors that limit human civilization out of existence. It could last for billions of years.
It’s an interesting question indeed how we biological beings would communicate with a civilization that has perhaps existed since the days when the Solar System was nothing more than a molecular cloud. We often use human logic to talk about what an extraterrestrial civilization would want, what its motives would be, and tell ourselves the fable that ‘they’ would certainly act rationally as we understand rationality.
But we have no idea whatsoever how a machine intelligence honed over thousands of millennia would perceive reality. As Rees points out, “we can’t assess whether the current radio silence that Seti are experiencing signifies the absence of advanced alien civilisations, or simply their preference.” Assuming they are there in the first place.
And that’s still a huge ‘if.’ For along with our other unknowns, we have no knowledge whatsoever about abiogenesis on other worlds. To get to machine intelligence, you need biological intelligence to evolve to the point where it can build the machines. And if life is widespread — I suspect that it is — that says nothing about whether or not it is likely to result in a technological civilization. We may be dealing with a universe teeming with lichen and pond scum, perhaps enlivened with the occasional tree.
A SETI reception would be an astonishing development, and I believe that if we ever receive a signal, likely as a byproduct of some extraterrestrial activity, we will be unlikely to decode it or even begin to understand its meaning and motivation. Certainly that seems true if Rees is right and the likely sources are machines. A SETI ‘hit’ is likely to remain mysterious, enigmatic, and unresolved. But let’s not stop looking.
My (not original) thoughts exactly. I would add that James Lovelock of Gaia hypothesis fame thinks very similarly. Fred Hoyle’s A for Andromeda (1961) seems to end up with a similar argument about the nature of the Andromedans, as revealed in the sequel The Andromeda Breakthrough. Greg Benford’s Galactic Center Saga novels envision a similar scenario.
Has anyone ever “modified” the Drake equation or come up with an estimate of the possible number of post-biological entities or civilizations? It would seem to me that unless ‘warfare’ or culling occurred, over deep time the number of extant ‘entities’ would increase. Although machines could “sleep,” I presume they would sleep with one eye open, i.e., keep biological civilizations under constant observation. (I don’t think we will find evidence of ‘ancient’ technosignatures. More likely old but functioning.)
Are there any opinions on the thought that the ‘great filter’ might be the capability of transitioning to a post-biological state prior to extinction? Most terrestrial life goes extinct. The trend for Homo species seems to follow this. Only one hominin remains.
AFAIK, SETI scientists generally assume biological ETI, not that they care, as their search just looks for extant signals. So civilizations could be very short, very long, something in between, interrupted by barbarism and technological cycles, etc. That L term is open to be filled in however one likes, including with trillions of years or more.
One constraint may be resources. For example, for a K3 civilization comprising all ETI species, each new one added will deplete each species’ share of the resource pie. That might lead to “competition” and a resulting decline in species or populations of biological or machine entities.
The value of L is anyone’s guess, although people seem to use about 10,000 years, taking terrestrial civilizations to date in toto as a yardstick.
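To make the dependence on L concrete, here is a minimal sketch of the Drake Equation in code. Every parameter value below is an arbitrary assumption chosen only for illustration, not a measured quantity:

```python
# A minimal sketch of how the Drake Equation's linear dependence on L plays out.
# All parameter values below are illustrative assumptions, not measured quantities.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Everything except L held fixed at guessed values.
base = dict(R_star=1.5, f_p=1.0, n_e=0.2, f_l=0.5, f_i=0.01, f_c=0.1)

for L in (1e3, 1e4, 1e9):   # short-lived, the "10,000 year" yardstick, machine-era longevity
    print(f"L = {L:>13,.0f} yr  ->  N ~ {drake(**base, L=L):,.1f}")
```

With everything else held fixed, N simply scales with L, which is why a post-biological phase lasting billions of years would dominate any census of detectable civilizations.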
Remember in the 1970s when SETI, the Search for Extraterrestrial Intelligence, was called CETI, which stood for Communication with Extraterrestrial Intelligence? Then the powers that be felt uncomfortable with the notion of actually talking to aliens, so they went with the much safer option of just searching for them first.
Well CETI is back in terms of communicating with and understanding alien minds, but this time it stands for Cetacean Translation Initiative…
https://www.projectceti.org/
https://gue.com/blog/can-we-learn-to-talk-with-whales-introducing-project-ceti/
https://www.nationalgeographic.com/animals/article/scientists-plan-to-use-ai-to-try-to-decode-the-language-of-whales
The cetacean brain has substantial regions that, when compared to humans, are suggestive of an extended limbic system. Sometimes referred to as the lizard brain, the limbic system is involved in processing emotion, morals, ethics, values, and much that goes on before awareness at a subconscious level; it is, however, non-verbal and without rationality/logic.
Do you mean the limbic region in general or regarding the cetaceans specifically?
The limbic region in humans mediates emotions, morals, ethics, values and similar aspects (the “lizard brain”); intellect, speech, rationality, logic and similar aspects are mediated by the frontal lobes. The same is true to an approximation in other mammals.
Cetaceans have a much expanded limbic region.
We have definite evidence that humpback whales will protect other animals, especially from orcas…
https://www.nationalgeographic.com/animals/article/humpback-whales-save-animals-killer-whales-explained
To quote:
It’s not clear why the humpbacks would risk injury and waste so much energy protecting an entirely different species. What is clear is that this was not an isolated incident. In the last 62 years, there have been 115 interactions recorded between humpback whales and killer whales, according to a study published in July in the journal Marine Mammal Science.
“This humpback whale behavior continues to happen in multiple areas throughout the world,” says Schulman-Janiger, who coauthored the study.
“these entities would be immune to travel fatigue because they are all but immortal.”
We can’t know that. An immortal body does not mean that the person will survive the duration. Imagine, if you will, that you take a pill that makes your body immortal. Sounds good? We will see if one still feels that way 1,000 years hence. A great deal of human psychology and behavior is due to our knowledge of our limited time. Given enough time, a human is likely to become a sulking mess or a victim of suicide.
Will the same be true of an artilect? It is very possible. Again, we should not equate an immortal body with an immortal mind or personality. Infinity can become a great burden to any intelligent entity.
This brings to mind Douglas Adams’ description of the immortal Wowbagger the Infinitely Prolonged and its great project to pass the time.
It depends on the artilect. If it does not interact with its “peers,” then its lack of mental synchronization with others doesn’t matter. Unlike living organisms, machines can just switch off so that vast lengths of time do not have to be experienced. Even then, an artilect could become “bored,” but it may have the ability to discard memories and behaviors to become mentally “young” again.
If the artilect does need to be part of a galaxy spanning civilization then issues of civilization cohesiveness will be an issue regardless of age due to speed of light limitations. However given experienced time compression due to inactivity this could be avoided. Suppose machines only operated for a short time every million years, or their mental “clock speed” could be slowed down for most of the time. Then there would be little divergence in culture.
These are options largely unavailable to most animals unless they have a dormant stage, like cicadas.
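A toy quantification of the “clock speed” idea above, with purely illustrative numbers of my own: if a machine mind experiences only a small duty cycle of real time, even galaxy-scale light lag shrinks to something culturally manageable.

```python
# Toy illustration (numbers are my own, purely for scale): how much subjective waiting
# a slowed-down or intermittently active machine mind experiences across light-travel delays.

def subjective_delay(light_delay_years: float, duty_cycle: float) -> float:
    """Experienced delay if the mind runs only `duty_cycle` of real time."""
    return light_delay_years * duty_cycle

for distance_ly in (4.2, 1_000, 100_000):   # Proxima, a nearby arm, across the galaxy
    for duty in (1.0, 1e-3, 1e-6):           # always on, 0.1% of the time, ~1 yr per Myr
        felt = subjective_delay(distance_ly, duty)
        print(f"{distance_ly:>9,.1f} ly, duty {duty:g}: feels like {felt:,.4f} yr")
```

Experienced latency scales directly with the duty cycle, so a mind active for only one year per million real years would “feel” even a galaxy-wide exchange as a fraction of a subjective year.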
Do religions worry about their ancient gods getting bored? A deity who created the universe 14 billion years ago might be really bored by now if it is still around.
“Suppose machines only operated for a short time every million years, or their mental “clock speed” could be slowed down for most of the time.”
It’s one of the theories about the possibility of the Wow! signal being of artificial origin: a short, robust “ping” message from a probe or ship. One of the wilder ideas, but indeed a tempting vision.
“Given enough time a human is likely to become a sulking mess or a victim of suicide.”
Why likely?
I could of course be wrong. Let me explain myself as follows. Our personalities and consciousness are largely an emergent phenomenon driven by our bodies (senses, mobility, etc.) and evolved drives (primarily survival and reproduction). When we’re young we’re full of energy as we explore the world and ourselves and learn to navigate our drives and objectives. Then we do, and then we’ve done it. The young are idealists and the old are conservatives, broadly speaking.
Lots of the change is because our bodies change quite radically as we age. It may be that the decline of our ambitions and interests will slow or plateau if our vigor is sustained, and in any case there will always be exceptional individuals.
However, immortality lasts a long time. We have yet to do the experiment. All the data we have is from humans with a strict expiry date. The trends we see may or may not hold if our bodies become immortal. I have strong doubts that we can overcome the ennui of infinite prolongation of those animal drives, or the loss of them.
I am speculating based on a limited body of knowledge. If I am correct, there may emerge methods to combat the psychological decline that is likely to become widespread among the extremely old. Again, infinity is a long time and we are not really built for it.
Again, you have projected biological effects onto a machine. Whether you are correct or not regarding humans and intellect, why would an intelligent machine show a similar behavior with age? There are so many ways an artilect could stave ennui off (if it was a problem) that we have no access to.
Alex, please read more carefully. I was strictly talking about humans, not machines.
My apologies. I thought you were justifying your statement about the human condition and immortality to support your claim that machines would be similar. I now see that you were just supporting the human state, not the machine state with immortality.
The secret is in the comment indents. I was responding to Antonio, so that was the context.
The question of how an “artilect” thinks and behaves remains open, and interesting.
It seems to me the Fermi Paradox is even more difficult to explain when we take into account long-lived AI. Why aren’t they here having a look at us and evaluating us as galactic neighbours? Is it very difficult to arrive at something both artificial and sentient? Is it more likely that beings who begin the process of developing AI don’t finish the project, for a variety of reasons? Is it a combination of both, or something else entirely? And if we receive a signal from ETI which is found to be a long way away (on average that seems the most likely scenario), would this discovery have the profound effects on human psychology that have so often been predicted if we can’t understand it?
If you were a highly advanced alien Artilect existing in another part of the galaxy, where you have a vast knowledge and experience base and function throughout the Milky Way and perhaps even beyond, what would you have to say to an organic species that barely emerged a few centuries ago and hasn’t even left their planetary system in any serious way? Not to mention with over 400 billion star systems spread out across 100,000 light years, we wouldn’t even stand out in any cosmic way.
This comprehensive examination of the subject may be of great use here:
http://www.projectrho.com/public_html/rocket/aliencontact.php
They may not have anything to say to us but why aren’t intelligent machines here studying us as we study insects, birds, mammals and even single celled organisms? Does it always come down to time and distance? I think that has to be at least part of the answer along with extinction of organic species over time, including those that are sentient. There are probably many other negative effects (including negative feedback loops) just as there are positive effects that tend to allow sentient species to extend their time of existence to some extent. Once we can take a good look at our galactic neighborhood (if that ever happens) we will know far more about this.
Who says advanced ETI aren’t studying us? And just like those who study wildlife, they would try not to interact with or interfere with the subjects of their study, in order to get realistic responses. So again, they would not try to contact us.
And if technological civilizations are very rare, an unspoiled one would be worth much more than one subjected to contact.
Makes sense to me L. That might put their probes etc. in the lurker category I suppose. Or they may be able to watch us from outside the solar system. In any case if they are sufficiently advanced we probably won’t ever know. Another frustrating explanation for why we haven’t been contacted but it does seem as likely as many others.
The Zoo Hypothesis entirely fits Fermi’s Paradox if the operators of the zoo are billion-year-old artilects whose science is essentially godlike.
The fact that we can’t detect other extraterrestrial civilizations could be as simple as one of these artilect civilizations putting a field entirely beyond our science around our solar system to filter out and block those tell-tale signs of cosmic engineering while slowly nurturing us out of barbarism.
“Ludwig Wittgenstein (1889-1951), whose Tractatus Logico-Philosophicus was written during military duty in the First World War. It has been confounding readers like me ever since”.
Ten years after writing the Tractatus, LW repudiated it, returned to Philosophy and wrote his “Philosophical Investigations” instead. Good luck with that one.
Have never attempted it. Yes, Wittgenstein actually seems to have launched two major — contradictory — trends in 20th C. philosophy! A wonderful source on all this is Wolfram Eilenberger’s Time of the Magicians, recently published in an English edition by Penguin Press. Very readable, in contrast to Wittgenstein himself.
“But could we communicate our cultural values and principles to them, or hope to understand theirs?”
Sure, like we communicate it to our children, by example. It’s the same way we teach our language to them, without a dictionary!
One can speculate on the criteria in the variants of the Anthropic Principle that enable a system to arrive at sentience, and/or become detectable. If one begins with complex molecules organizing towards molecular cell machinery, three characteristics that would favor their continued presence are survival (against physical and chemical threats) by deploying defenses, replication in numbers, and growth in numbers and spatial distribution.
These characteristics should hold good for post-biologic machine civilizations, and would be the basal imperatives that operate in all manifest civilizations. Systems that lack one or more of these imperatives would tend towards undetectability and de-manifestation.
However it could be that advanced technologies may be well-nigh undetectable by minimizing disruption of matter and energy, while leveraging time and space towards the same ends. They may choose to ignore us, and be detected only accidentally.
I find this whole AI meme to be an ill-considered cliché, derived from an over-intellectualisation of the concept of “intelligence”. A huge amount of our intelligence and that of other large-brained creatures depends on our intimate brain-body links. We are not disembodied reasoning machines, but are largely conditioned by other activities we take part in, including sports, the arts, sex, and eating and drinking. We can build reasoning machines, but they will remain quite different from us so long as they do not have bodies with the immense variety of uses, sensations and sensory inputs that we have. The idea that reasoning machines on their own could create or maintain a civilisation analogous to human civilisation is no more than an SF speculation. We humans represent a symbiosis between multicellular and single-celled organisms, and it seems vastly more plausible to me that a long-lived civilisation will by analogy be a symbiosis between biological organisms and their machines.
Motivation is the key question. The great “why” of the universe: why do anything? The instinctual directives of the human body are a great starting motivator, and appreciation of beauty in nature, music, science, mathematics, and other complex systems might be thought of as an accidental byproduct of the brain, an intelligent icing on the meat cake. Would an artilect similarly appreciate beauty, something an algorithm could never touch? If not, then its spread through the galaxy is no more than a chemical, mechanical, electrical chain reaction.
The thing is, human intelligence and consciousness is simply an expression of latent intelligence and consciousness inherent in every particle of the universe. The God particle isn’t the Higgs boson, but consciousness itself. Consciousness isn’t a derivative of the human brain; it’s not just another bodily excretion. Rather, the reverse is true: the human brain is a wondrous expression of this universal consciousness, as are atoms, gems, flowers, and animals. The universe is an expression of an infinite consciousness, and when a civilization comes to realize that it is one with this consciousness, it doesn’t need to travel to the stars. Instead its members become one with them, sitting in lotus posture in quiet rooms of their home planets. As long as a civilization forgets that all is within itself, it will continue to seek all outside of itself, and suffer the empty expanse between the stars.
So it’s likely that we have yet to be visited outwardly by million year old civilizations, because they are already visiting us within their own selves, and we don’t know it.
We have absolutely no idea what they will be like. They will not only be very different from us, they will also be very different from one another (assuming we succeed in contacting more than one alien species).
Sure, if they are biological in nature, they will probably exhibit some behavioral characteristics we may recognize from ourselves, or from other organisms on our own planet. But look at how different we are from dogs. And how different dogs are from cats. And we are all recently evolved mammals from the same planet.
And if they are machine intelligences, who knows what characteristics they may have inherited from their biological creators? Or what characteristics of their creators they may have deliberately abandoned?
In short, we will know absolutely nothing about them, other than we have similar technologies. The latter is merely a selection effect, since it is likely that the evidence of their existence will be a technical signature we will recognize as such.
In other words, it is pointless to speculate about them. I am only reminded of social insects, which exhibit behavior we might characterize as technological. As for their social organization, what can we possibly say about that? We are a society of different individuals. The hive is an individual composed of many identical components.
Where do we even start?
Well said, Henry. We can never know a priori the motivations of other intelligent species. Our desire to explore and explain is part of our motivation to travel to other stellar systems, but other sapient species may have arrived at entirely different motivations for entirely different reasons. We may never find them if they exist, because they may not want to be found.
Our knowledge of the universe, physical laws, the origins of life and what is possible using extrapolations of current technology all suggest that we should be well-aware of alien civilizations. That awareness could come in the form of radio communication, visits by probes (AI or otherwise) or techno-signatures.
Every year that passes suggests, to me at least, that our understanding of reality is grossly flawed – little more than froth on the top of a sea of infinite knowledge. We can continue to add decimal points to the precision of our measurements in hopes something interesting will turn up, or, in parallel, begin serious inquiries into other realms of human experience. I have had enough “paranormal” experiences that my relatively high levels of skepticism no longer shield me from the possibility that something outside of accepted physical laws is part of our reality. Many millions of people can say the same. If those experiences were to be seriously and systematically explored, who knows where that rabbit hole would lead. Perhaps it could somehow answer the Fermi Paradox. Just musings.
There is absolutely no doubt in my mind that there is a great mystery behind the universe, that we are obviously missing something, and that perhaps we may never “get it” at all. We may even be fundamentally incapable of “getting it”. After all, we are a subset of the universe, not the other way round.
But even for those of us who have never personally had any paranormal experiences, or heard any credible witness accounts of any, that mystery still remains; in fact, it is everywhere we look. The supernatural or the paranormal is not necessarily the antithesis or alternative of that mystery. That’s too simple to be true, and too easy to be real.
“The most terrifying fact about the universe is not that it is hostile but that it is indifferent, but if we can come to terms with this indifference, then our existence as a species can have genuine meaning. However vast the darkness, we must supply our own light.”
…
“The very meaninglessness of life forces man to create his own meaning.”
— Stanley Kubrick
Well said and in total agreement!
I agree. The more you learn about space and what we observe on Earth, the more you realize how little we know. I was a very determined sceptic, but there are certainly events and phenomena that we have yet to classify and understand fully, and that current science either ridicules or explains in ways that ring hollow. Not little green men, of course, but perhaps electromagnetic, weather-related or of some other sort that we have yet to comprehend.
There are alternate ways to approach the great mystery behind the universe.
A simpler treatment of the subject.
The YouTube direct link.
The fact that we don’t know what 95% of the matter in the Universe is and what Dark Energy is suggests you are right Patient. Our understanding is grossly limited, but the tendency of many humans is to believe that our present amount of knowledge (no matter when that present moment occurs) is all the significant knowledge there is. We know that isn’t true at all. We also know that unknown unknowns exist i.e. there are many things that we don’t know that we don’t know. Known unknowns are so much easier to accept (at least to some) because they can be worked on. We have almost no data about the possibility of ETI existing somewhere and I think it really frustrates many.
In summary, whether bio, cybo, or techno:
Either:
They are here and there but don’t care.
Here, and are fattening us up for uses we cannot imagine.
Here, and are interested in us so they are cryptic.
Here, dead or waiting.
Here, but we are simulations of theirs.
Were there or here, but are now not here or there.
Not anywhere near enough to hear.
Not here or there.
Very succinct.
“As Rees points out, ‘we can’t assess whether the current radio silence that Seti are experiencing signifies the absence of advanced alien civilisations, or simply their preference.’ Assuming they are there in the first place.”
It might still be too early to say that we are experiencing a radio silence. We just have looked at one glassful in an immense ocean, as Jill Tarter says.
Well, if you study SETI long and deep enough, there isn’t really clear silence. There are some detections and signals that are ambiguous, and SETI searches for direct, constant messages aimed at us (roughly speaking, of course, about Old SETI). There were radio signals like the Wow! signal, the META Project signals, things like Przybylski’s Star, the IRAS-based study, or red dwarf galaxies, and so on. Some of those might potentially be the result of advanced activity; some might be natural anomalies. We have no way to tell, or to confirm that, at our stage of development (and we are after all a primitive civilization, having been only to the Moon and using primarily fossil fuels). Interestingly, these potential detections would imply uncaring postbiological civilizations on a stellar scale.
For now I think we need to await results from new telescopes, exoplanet discoveries will lift the veil a bit.
There seems to be a lot of skepticism that we can ever communicate with ETI. It ranges from the belief that all understanding is mediated through biology, to the idea that a machine civilization is very unlikely, to the claim that the gap between intelligence levels is too large (e.g. humans and insects).
Firstly, I would like to discount the biology embodiment issue. The vast amount of scientifically gained knowledge that humans have has very little relation to human biological evolution and experience, albeit mediated by our senses. It should be possible to limit communication to facts (e.g. Nielsen’s recent post on his Encyclopaedia Galactica) across time. The late John McCarthy espoused the idea that just as physical forms show convergent evolution, so will intelligence (I personally don’t agree with this notion); the fictional counterargument is Stanislaw Lem’s novel “His Master’s Voice”. However, we can communicate with insects at a very crude level. We can influence their behavior with poisons and food, as well as direct brain stimulation, and they can communicate by their behavior and neural recordings. At less remove, we can clearly communicate with our pet cats and dogs, again on a superficial level. I doubt we could ever have a conversation about philosophical questions with them. But then again, we have a similar problem communicating between humans with different worldviews and educations. Still, we can communicate at some common level. ETI, whether biological or machine, will have some common level of communication with us. Their intelligence may be higher or lower than ours (imagine an ETI having to try to communicate with Voyager 1 or 2). We don’t know which side will be the advanced one – we just assume it will be the ETI.
AIs have the advantage that they can occupy an intelligence space far greater than humans can. Forget the human biases due to current data set input. Think instead of generative adversarial networks (GANs) for learning. Machine learning has already helped to decode some neural spiking patterns, and it is only a matter of time before ML can decode far more aspects of nature, from math, to physics, to biology and social systems. AI systems may represent the best option for creating an “API” between humans and ETI. Think of this as the modern equivalent of the Colossus and Guardian AIs in “The Forbin Project” learning to establish common communication modes. Not so long ago, a Facebook AI developed a unique language to handle communication. AIs are less likely to fall into human cultural traps in this regard.
What about the idea of machine civilization? I personally see machines evolving into an ecosystem, and just as humans harnessed nature for agriculture and transport, so will some machines do the same in their ecosystems. Unless one thinks that a single intelligent entity is the way AI would evolve, like Asimov’s AI in “The Last Question”, it seems far more likely that there will be many, more specialized machines, just as humans have become more specialized over time and used cooperation to manage more complex tasks.
But what if that was not true, could we still communicate with a small tribe of machine intelligences, or just one? Well religions seem to have no trouble believing that they can hold such conversations with their gods, whether residing on Mt. Olympus, or a singular intelligence in the sky. Should any advanced intelligence really not be able to understand us by our behaviors, decode our media, and communicate, even if there are misunderstandings?
Suppose there is no machine civilization per se, but just a spreading wave of von Neumann replicators? Depending on their intelligence level, we could communicate with them at some common level, using our AIs or its own to establish a communication channel. It may have little to say, or it might have a very interesting trove of data to offer, much like V’ger in Star Trek: The Motion Picture.
Lastly, my bias is that human communication is fraught with misunderstandings. We cannot easily build good simulations of other people unless we have interacted over a long period. Cultural jokes are full of examples of different ways of misunderstanding others. Arts are highly subjective, with abstract painting and sculpture particularly so. Should we ever communicate with ETI, it would perhaps be best to steer away from any cultural subjects and try to stick to facts that are far more objective and less subject to interpretation.
[A small plug for my collaborator Brian McConnell, who has just published a new book: The Alien Communication Handbook: So We Received a Signal? Now What?]
And an excellent book it is! More about Brian McConnell’s new book soon.
I don’t have a problem with an intelligent, sentient machine civilization, even though computers and machines are anthropomorphized. The closer to us they get, the more we get the same result with the Fermi paradox: where are they? Machines would still have to communicate with a signal that had some repetition of letters or numbers and not just random noise. We have not yet heard anything like a message with some actual data in it, which would have to be differentiated from the radio wave emission from stars and the CMB.
For argument’s sake, let us assume that communication uses em transmission, but with low power and requires a gravity lens to focus the signal to maintain a reasonable narrow beam strength for receipt. Unless we are looking in exactly the right direction from a gravity lens focus, we wouldn’t see those transmissions. We would have to look for other technosignatures as indications of their presence.
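For a sense of scale (a back-of-the-envelope addition of my own, not part of the comment): the Sun’s gravitational lens only begins to focus at roughly 550 AU, which is what makes “looking from a gravity lens focus” such a demanding alignment requirement.

```python
# Illustrative calculation (not from the comment): the minimum focal distance of a
# star's gravitational lens, d = R^2 c^2 / (4 G M), which for the Sun works out to
# roughly 550 AU -- the kind of vantage point such a lensed-communication scheme assumes.

G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8        # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m (rays grazing the limb)
AU    = 1.496e11       # astronomical unit, m

d_focus = (R_sun**2 * c**2) / (4 * G * M_sun)
print(f"Minimum solar gravitational-lens focus: {d_focus / AU:.0f} AU")  # ~547 AU
```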
As others have argued elsewhere, we cannot assume that the natural state of ETI is terrestrial fantasies of planetary cities like Asimov’s Trantor, or Dysonian megastructures. ETI may assume a far lower profile, with goals that we cannot even imagine. What I am fairly sure about is that continuous economic growth cannot be maintained for any real length of time. As I have shown before, at 3% GDP growth we would become a KII civilization in less than 10 millennia, and a KIII in a similar time frame. No ETI civilization could be millions of years old if it required maintaining this rate of growth. They might be old if they can go through growth, decline, and renewal cycles. If ETI’s goal was to collect knowledge, they might be able to do that with relatively few resources and a planetary sized storage device. If, as per Clarke, they wanted to cultivate mind, then again there might be very little evidence of their presence, perhaps a visit every 100 millennia or so to check on progress (and “weed” if necessary).
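A minimal sketch of the compound-growth arithmetic behind that claim, using rough assumed figures for present power use and for the Kardashev II and III energy budgets:

```python
# A quick sketch of the growth arithmetic, using assumed round numbers: present human
# power use ~2e13 W, a Kardashev II budget ~4e26 W (order of one solar luminosity),
# a Kardashev III budget ~4e37 W (order of a galaxy's stellar output), 3% annual growth.
import math

P_now  = 2e13    # W, rough current human power consumption (assumption)
P_KII  = 4e26    # W, order of one solar luminosity
P_KIII = 4e37    # W, order of one galaxy's stellar output
growth = 0.03    # 3% per year

def years_to(target_power: float) -> float:
    """Years of steady exponential growth needed to reach target_power."""
    return math.log(target_power / P_now) / math.log(1 + growth)

print(f"Years to KII  at 3% growth: ~{years_to(P_KII):,.0f}")   # roughly 1,000
print(f"Years to KIII at 3% growth: ~{years_to(P_KIII):,.0f}")  # roughly 1,900
```

Under these assumptions the KII and KIII thresholds arrive in roughly one and two millennia respectively, comfortably inside the “less than 10 millennia” bound cited above, which is the point: such growth rates cannot be sustained for long on cosmic timescales.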
I’ll agree that there are plenty of reasons why a very advanced, intelligent ET civilization of any type might not want to communicate with us. The gravitational lensing idea won’t work for weak EM signals, only strong ones, since starlight is strong, which is why we can see it lens. It doesn’t matter anyway, since an advanced machine civilization as you define it would have no problem making a signal strong enough for us to hear, though it might be on a frequency we aren’t listening to.
The studies of genetic and cybernetic enhancement are bound for trouble in modern and near-future society. First, we are now entering an age of extreme intolerance for superiority, and these studies will be tabooed as thoroughly as possible. Second, and this is the cause of the first, there are still too many people in the world who would readily use enhancements as a means of seeking domination, just as technological advantages were used all the time before. Now the world is at peace due to the power balance and the memories of the horrors of World War II and other atrocities of the recent past, but will these stop anyone from another attempt at world domination, given an advantage that really breaks the power balance? Any real achievements that manage to override the taboo and make it out of the labs will likely find malicious use.
If we want to enhance ourselves and not get subdued or annihilated by what comes with it, we have to leave our barbaric roots for good. Everyone on planet Earth. Putting some superficial veils of civilized ethics over what caused the world wars will not do; it must be a change in our deepest nature.
Something similar was wonderfully described in Liu Cixin’s “Three Body Problem” trilogy, where humanity was so afraid of interstellar offshoots “becoming something different” that it tried to stop them even in the face of annihilation.
Regarding post-biological evolution, if there is something to it, I cling to the opinion that our direct offspring will largely inherit our way of thinking and feeling, including our flaws, but distant descendants will be as incomprehensible to us as humans are to ants. Even if some of them stay on Earth, we will be able to live together with them much like ants live together with us. Even in densely populated areas, for every anthill that was burned, crushed by a bulldozer or poked with a stick, there are many thousands which exist essentially intact despite us being everywhere. The same goes for ET intelligence – not literally talking to lions but more like “Roadside Picnic” or demolition by Vogons, if we’re the ones in the way, or Solaris if they’re not here. And in the base case, like 999 anthills out of 1,000, we won’t even know they are here.
I think that depends on how you define “our way of thinking”. Culturally, we change a lot, even generation to generation. Both the past and the future are foreign countries. We would have a lot of difficulty understanding life and our fellows’ thoughts if we time travelled to 1000 CE. I have no doubt that with the rate of cultural change, travelling just a century into the future would be hard to deal with.
Even worse, some of those deep-seated, almost innate, “ways of thinking” might be technologically changed, with drugs and implants.
Seeking advantage by becoming “Nietzschean supermen” probably isn’t nearly as effective as the old-fashioned way of having inherited wealth and power. The Nazi plan to build a master race certainly helped create a lot of very physically fit boys and men, but the cost was that their brains were left uneducated. This was at least part of the reason the Allies were able to outcompete the Nazis in the technology race. I see the powerful and wealthy primarily interested in longevity, not in being “superior,” as they can buy or command the resources to keep ahead.
It’s very easy, these days, to rely on Nazi metaphors to model current events. And there certainly is good reason to do so. Central to all fascism is the conviction that “We” are inherently superior to “Them”. Sometimes this alleged superiority is ascribed to genetic causes–the myths of racial or blood superiority. Other times it is nationalistic, the idea that our connection to a certain geographical region or historical tradition is the source of our exceptionalism–blood and soil.
In our own enlightened times, the origin of our superiority to others is often explained by our unique embracing of supposed shared virtues. We are told our tribe is “better” because only WE truly embrace ambition, deferred gratification, hard work, family values, patriotism, religious devotion, innovation, self-reliance, creativity, competition, individualism or entrepreneurial excellence. The Other, of course, is accused of being utterly devoid of these qualities, and therefore worthy of subjugation, oppression and even extermination.
The horror of twentieth century fascism was eventually defeated, but it took an unholy alliance to do so. It was eventually dispatched by the cooperation of a brutal Stalinist dictatorship, history’s greatest and most widespread imperialist empire and a progressive democracy which had built its strength by exploiting slave labor to work the lands it had stolen from its own indigenous people and its weaker southern neighbors.
I make no personal accusations here. None of us was even born when these events occurred; we need not feel guilt about them. It is not our fault that we may benefit from historical injustice, but we can be rightly condemned for forgetting it.
The powerful and the wealthy are primarily interested in staying that way. They may not necessarily feel they are superior to anyone else, but that is the myth they peddle to their victims when they need allies to seize control of the social, political or economic order.
They will exploit fear, uncertainty, and the biases and prejudices that may already exist in society to do so.
Perhaps this is why we are drawn to SETI. Perhaps we subconsciously seek communities, or civilizations, which have outgrown or abandoned the many forms of human fascism.
>>Perhaps this is why we are drawn to SETI
I really agree!
While I’m not a total pacifist, I at the very least respect the ethic of the Golden Rule. But will that change if I’m given some abstract, long-lasting ability to do whatever I might want _in those circumstances_ without consequences for myself? I don’t exactly know. Of course, in the real world it’s all about thresholds above which an advantage becomes game-changing. And I guess it is fairly close to the truth to say that everyone knows at least some people who would surely run amok, given an advantage that is game-changing for them, despite behaving decently in their current reality.
And what if some state, by implementing augmentation, really created the superior “We”? If we postulate that we won’t reach the stars in our “Default State”, this question becomes relevant, too.
Surely, I believe that there exists a level of science/technology/understanding which enables changes in our deepest psychological nature. Both individually, which enables a person to “become something really different”, and on the civilization level (which is much, much harder).
Again, not by patching human warlikeness with check-and-balance laws and conventions (which all have their thresholds of “breaking the seals”), but by decreasing it at the roots. Making sure that no one will go conquer everything around even in the presence of a real game-changing advantage AND in the absence of a check-and-balance system. But here is a Catch-22. To reach the needed level, we need more scientific/technological progress, and this will be done in our Default State. Maybe a “patched” one, with some current or future checks-and-balances, but still it’s human psychology v1.1, not 2.0. Another catch is the collective effect. Imagine an idealized setting where a simple “root warlikeness limiter” procedure is invented that is as trivial as a vaccine shot. Even then most people, including very possibly myself, will agree to take it only if all others will.
This is really walking a thin line. That’s why I’m so eagerly awaiting the discovery of ET. Even if we learn nothing except that they’ve achieved much more than we have, it will tell us that a solution *exists*!
This is also why I don’t believe in the Dark Forest and other forms of malevolent starfaring aliens (not including Vogons :D, they were just bulldozing anthills). Warlike species don’t make it to the stars. They annihilate themselves or at least undergo tedious and somewhat paradoxical biological evolution to become less warlike, probably through a number of cycles of rises and falls. Again, the “Downfall hypothesis” of almost mathematical strictness: initially, technological progress increases a civ-bearing species’ power over matter, energy and information much faster than it changes their _deep_ psychology. Our destructive power increases, but our warlikeness remains close to constant and our robustness is also limited (a hard limit is presented by living on the same small crowded globe). If it continues indefinitely, the downfall is guaranteed – first, a threshold is reached where a global war will cause it (nuclear weapons), then progressively smaller disturbances become enough (out-of-control biological or globally-disrupting cybernetic weapons, possible in the more tightly-coupled networks of the future, or who knows what else).
We need answers in the stars!
I once had a discussion with a friend about engineering humans to lose their aggressive/warlike tendencies. His counter-argument was what would we do if aggressive aliens arrived?
This would be made moot by your argument that aggressive cultures are self-limiting and will not reach the stars. To date, that is not the lesson of our history. Aggressive cultures have always managed to topple cultures that are less aggressive or that become so (cf. the Eastern and Western Roman empires). So far we seem to have avoided using the self-limiting weapons we have – nuclear and biological. (Maybe we failed with the social idea of capitalism.) Maybe we have been lucky and this will end under the stress of global heating, or it may not. If not, then aggressive human culture may go to the stars and continue its colonialist urges. Or our AGI robots imbued with human biases may do the same.
It is also possible that non-aggressive species never leave their homeworld, content to live in harmony with their world rather than expand beyond it, except with scientific space probes. If only we could be sure that any Bracewell probe (or Lurker) we discovered was from such a civilization. It would be a pity if humans proved as trusting as the Dodo.
I agree: in this frame, our previous history is not a good example because all previous conquests used technology that could not topple civilization itself. And explorativeness doesn’t universally come hand-in-hand with aggressiveness. On the other hand, it would be a pity indeed if we pacified ourselves and found out that the opposite is the case.
There’s another strong illustration of the superficiality of changes, the seals and their breaking. We long believed that we were changing en masse, but both World Wars came more than a century after the Enlightenment. Medieval people would be horrified by the World Wars’ brutality. The crimes of the 20th century were as nasty as those of earlier ages in terms of individual violence, yet the numbers, the scale and the impact of the World Wars were much bigger than everything before, just because of the more advanced tech. We now seem to abolish even the possibility of true superiority after what the Nazis did while only pretending to be superior. And since, at least within major human cultures, deep psychology is nearly independent of tech level, we have every reason to fear. Some ETs would make their equivalent of facepalms watching this.
Ideally, we should learn about the state of intelligence in our Galaxy and only then decide our own interstellar strategy. Should we stay as aggressive as we are (or even ramp it up if it turns out that the Galaxy indeed is an arena for an interstellar death-match), or not? And I believe that if the galactic population density is not exceedingly small, this could be done without really big steps like human augmentation or the next technological revolution. It still requires a permanent-Martian-colony-class effort, but by the end of the century we’ll know. Once we find something on the lunar poles, and/or detections by dedicated massive orbital facilities start to come in, there will be little hesitation to search more thoroughly in the first case, and to send nuclear-powered observatories to solar gravitational lens locations in the second. Then we’ll get full-HD pictures and our first studies of comparative xenology :-) If an exhaustive search of nearby systems by planet-resolving, sparse-aperture, space-based interferometers turned up nothing but wilderness within 100 parsecs, and all-sky surveys of the same class of effort found no obvious technosignatures farther out, then… paradoxically, we should be more careful for ourselves, whatever that means, just in case. Even more so if we watched the technosignature-polluted worlds closely and saw only ruins.
Anyway, I think the manipulations with deep psychology would not be totally irreversible once we reach that level. Of course, no exact ctrl-Z and all the new dangers… But at least some forms of backup could be imagined even now. Sci-fi-like indefinitely hibernating warriors or genetic blueprints of humans v1.x specially archived for such cases…
I thought of a case of inhabited stellar systems with much easier access to starfaring than ours, used to deepen the Fermi Question. It also could be applied to warlike species trying to become interstellar. Like the first one, it is also an exceptional case by construction, but has opposite implications.
An interesting space opera-like setting results if some species managed to get through all the difficulties thanks to an exceptional starting position while remaining barbarian. All “ANDs”: multiple inhabited worlds in their home system sharing the same type of biology by lithopanspermia, a binary companion at 3,000 AU with its own planetary system, a low-gravity homeworld. All ensuring they really want to get off their planet, to explore and settle, and can do so on chemical rockets alone, without nuclear propulsion.
That will come later, together with nuclear wars, when they set out for the binary companion, having already evaded the “all eggs in one basket” scenario by colonizing their planetary neighbors.
They achieve iterative interstellar colonization, remaining the same ones who exploited slave labor and threw nukes at each other. And now they learn that they are the only mighty ones in the galaxy whose ethics evaded the Downfall Conjecture…
The big unknown is how different the offshoots can become just by starting from scratch: the thing that humans feared in “Three Body Problem”. It seems that our “genetic blueprints” support a very wide range of psychologies and cultures, some closer to “eigenstates”, some less so. And our own colonization history shows that this sometimes could be the case, even if survival chances are small in the presence of more aggressive colonizing cultures. The pacifist tribes of the Pacific and most other isolated, peculiar cultures were later dissolved by them, but the globe is small and crowded, and space is big and empty, for all we know…
A secular version of Heaven? If there are other civilizations out there, we have no idea what form their polity takes. Human political states seem to have a single ruler and a hierarchical structure that mimics our ape ancestry. Democracy may be fleeting. What if that is the case in the stars? Emperor Ming, Cleon of the Empire, etc., etc. What if there is a godlike artilect, perhaps something akin to Banks’ “Culture” universe?
Just for a moment, suppose that was the message that a SETI communication brought us. How fast would democracy on Earth be overwhelmed by the various autocrats and religious cults?
Maybe we should be careful what we wish for. I would not like to become part of a small minority that wishes to retain enlightenment ideals in a democracy.
There are a number of underestimated risks associated with passive SETI (listening only). It is generally assumed within the SETI community that the detection of an information-bearing signal will be a positive and transformational event for humanity.
For a counter-example, one need only look at our response to Covid-19 to see how badly things can go off the rails due to misinformation. I think it is a virtual certainty that if there is a detection, information-bearing or not, a rogues’ gallery of bad actors will present themselves as having special knowledge of or access to the aliens (send money!), spin QAnon conspiracy theories, etc. A large fraction of the population will believe them. It’s hard to see how this leads to anything good.
This is why it is important for SETI organizations to be prepared for what comes after a detection. The odds of it happening may be low, but the potential for things to go badly wrong if the narrative is hijacked by bad actors is real.