The wish that humans will one day walk on exoplanets is a natural one. After all, the history of exploration is our model. We look at the gradual spread of humanity, its treks and voyages of discovery, and seamlessly apply the model to a future spacefaring civilization. Science fiction has historically made the assumption through countless tales of exploration. This is the Captain Cook model, in which a crew embarks on a journey into unknown regions, finds new lands and cultures, and returns with samples to stock museums and tales of valor and curiosity.
Captain Cook didn’t have a generation ship, but HMS Endeavour was capable of voyages lasting years, stocking itself along the way and often within reach of useful ports of call. A scant 250 years later, however, we need to consider evolutionary trends and ask ourselves whether our ‘anthropocene’ era will itself be short-lived. Even as we ask whether human biology is up for voyages of interstellar magnitude, we should also question what happens when evolution is applied to the artificial intelligence growing in our labs. This is Martin Rees territory, the UK’s Astronomer Royal having discussed machine intelligence in books like his recent The End of Astronauts (Belknap Press, 2022) and in a continuing campaign of articles and talks.
I won’t comment further on The End of Astronauts because I haven’t read it yet, but its subtitle – Why Robots Are the Future of Exploration – makes clear where Rees and co-author Donald Goldsmith are heading. The title is a haunting one, reminding me of J.G. Ballard’s story “The Dead Astronaut,” a tale in which the Florida launch facilities that propelled the astronaut skyward are now overgrown and abandoned, and the astronaut’s widow awaits the automated return of her long-dead husband. It was an almost surreal experience to read this in the Apollo-infused world of 1971, when it first ran:
Cape Kennedy has gone now, its gantries rising from the deserted dunes. Sand has come in across the Banana River, filling the creeks and turning the old space complex into a wilderness of swamps and broken concrete. In the summer, hunters build their blinds in the wrecked staff cars; but by early November, when Judith and I arrived, the entire area was abandoned. Beyond Cocoa Beach, where I stopped the car, the ruined motels were half hidden in the sawgrass. The launching towers rose into the evening air like the rusting ciphers of some forgotten algebra of the sky.
“[T]he rusting ciphers of some forgotten algebra of the sky.” Can this guy write or what?
You’ll find no spoilers here (Ballard’s The Complete Short Stories is the easiest place to find it these days) but suffice it to say that not everything is as it seems and the scenario plays out in ways that explore human psychology coming to grips with a frontier of deeply uncertain implications. As uncertain, perhaps, as the implications Ballard did not explore here, the growth of artificial intelligence with its own evolutionary path. For that, we can investigate the work of Stanislaw Lem, in particular The Invincible (1964). N. Katherine Hayles wrote a fine foreword to the novel in 2020. Non-human, indeed non-biological evolutionary paths are at the heart of the work.
The scenario should intrigue anyone interested in interstellar exploration. Assume for a moment that a starship carrying both biological beings and what we can call artilects – AI enabled beings, or automata – once landed on a distant planet, where the biological crew died. The surviving artilects cope with the local life forms and evolve gradually toward smaller and smaller beings that operate through swarm intelligence. The driver is the need to function with ever smaller sources of power (the artilects operate via solar power and hence need less as their size decreases), creating an evolutionary pressure that results in intelligent ‘mites.’
A long time later, another crew, the humans of the starship Invincible, has arrived and must cope with the result. As long ago as 1964, before the first Gemini mission had flown, the prescient Lem was saying that swarm intelligence was a viable path, something that later research continues to confirm. As Hayles points out in her foreword, it takes only a few rules to produce complex behaviors in swarming creatures like fish, birds and bees, with each creature essentially in synch with only the few creatures immediately around it. Simple behaviors (in computer terms, only a few lines of code) lead to complex results. Let me quote Hayles on this:
Decades before these ideas became disseminated within the scientific community, Lem intuited that different environmental constraints might lead to radically different evolutionary results in automata compared to biological life forms. Although on Earth the most intelligent species (i.e., humans) has tended to fare the best, their superior intelligence comes with considerable costs: a long period of maturation; a lot of resources invested in each individual; socialization patterns that emphasize pair bonding and community support; and a premium on individual achievement. But these are not cosmic universals, and different planetary histories might result in the triumph of very different kinds of qualities.
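Hayles’s observation that “only a few lines of code” produce complex swarm behavior is exactly the classic “boids” model of flocking. Here is a minimal sketch (my own illustration; the parameters and weights are arbitrary) in which each agent obeys only three local rules: separate from crowding neighbors, align with their headings, and drift toward their center.

```python
import math
import random

def simulate_boids(n=30, steps=200, seed=42):
    """Minimal 2-D boid flock: each agent follows three local rules
    (separation, alignment, cohesion) using only nearby neighbors."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 100), rng.uniform(0, 100)] for _ in range(n)]
    vel = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n)]
    radius = 25.0  # a boid only "sees" neighbors within this distance
    for _ in range(steps):
        new_vel = []
        for i in range(n):
            sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; count = 0
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy)
                if 1e-9 < d < radius:
                    count += 1
                    sep[0] -= dx / d; sep[1] -= dy / d        # steer away from crowding
                    ali[0] += vel[j][0]; ali[1] += vel[j][1]  # match neighbors' heading
                    coh[0] += dx; coh[1] += dy                # drift toward local center
            vx, vy = vel[i]
            if count:
                vx += 0.05 * sep[0] + 0.05 * ali[0] / count + 0.01 * coh[0] / count
                vy += 0.05 * sep[1] + 0.05 * ali[1] / count + 0.01 * coh[1] / count
            speed = math.hypot(vx, vy) or 1.0
            new_vel.append([vx / speed, vy / speed])  # normalize to unit speed
        vel = new_vel
        for i in range(n):
            pos[i][0] += vel[i][0]
            pos[i][1] += vel[i][1]
    return pos, vel

def alignment_score(vel):
    """Length of the mean velocity vector: 1.0 means a perfectly aligned flock."""
    mx = sum(v[0] for v in vel) / len(vel)
    my = sum(v[1] for v in vel) / len(vel)
    return math.hypot(mx, my)
```

Nothing in the update rule mentions the flock as a whole; coordinated motion emerges purely from each agent’s handful of neighbors, which is the point Lem intuited decades early.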
In this environment, a visiting starship crew must confront an essential difference in values between the two types of being. Humans bring assumptions drawn out of our experience as a species, including the value of the individual life as opposed to the collective. Remember, we are some years off from Star Trek’s Borg, so once again Lem is pushing the envelope of more conventional science fiction. Hayles points out that, shorn of our anthropocentrism, we may find ourselves encountering forms of artificial life whose behavior we can only come to grips with through profoundly unsettling experience. A world of collective ‘mites’ may overwhelm all our values.
Given all this, we have to ask whether several more centuries of AI will produce artilects we are comfortable with. The question of control seems almost moot, as what Martin Rees refers to as ‘inorganic intelligence’ quickly moves past our own mental functioning if left to its own devices. We are in the realm of what today’s technologists call ‘strong AI,’ where the artificial intelligence is genuinely alive in its own right, as opposed to being a kind of simulacrum emulating programmed life. A strong AI outcome places us in a unique relationship with our own creations.
The result is a richer and stranger evolutionary path than even Darwin could have dreamed up. We don’t have to limit ourselves to swarms, of course, but I think we can join Rees in saying that creatures evolving out of current AI will probably be well beyond our ability to understand. In a recent essay for BBC Future, Rees quoted Darwin on the entire question of intentionality: “A dog might as well speculate on the mind of [Isaac] Newton.” Not even my smartest and most beloved Border Collie could have done that. At least I don’t think she could, although she frequently surprised me.
A side-note: I would be interested in suggestions for science fiction stories dealing with swarm concepts — as opposed to basic robotics — in the early years of science fiction. Were authors exploring this before Lem?
Rees is always entertaining as well as provocative. He takes an all but Olympian view of the cosmos that draws on his lifetime of scientific speculation, and writes a supple, direct prose that is without self-regard. I’ve only met him once and at that only briefly, but it’s clear that this is just who he is. In a way, what I might consider his detachment from the nonsensical frenzy of too much tenured academic science mirrors deeper changes that could occur as intelligence moves into inanimate matter. Why, for example, keep things like egotism or pomposity (and we all know examples in our various disciplines)? Why keep aggression if your goal is contemplation? For that matter, why live on planets and not between stars?
But for that matter, can we ever know the goal of such beings? As Rees writes:
Pessimistically, they could be what philosophers call “zombies”. It’s unknown whether consciousness is special to the wet, organic brains of humans, apes and dogs. Might it be that electronic intelligences, even if their intellects seem superhuman, lack self-awareness or inner life? If so, they would be alive, but unable to contemplate themselves, or the beauty, wonder and mystery of the Universe. A rather bleak prospect.
For all these what-ifs, I strongly second another Rees statement about first contact: “We will not be able to fathom their motives or intentions.”
As you might guess, Rees is all for pursuing what I always call ‘Dysonian SETI,’ meaning looking for evidence of non-natural phenomena (he includes the study of ‘Oumuamua as possibly technological in the realm of valid investigation). From the standpoint of our interests on Centauri Dreams, we should also consider whether fast-moving AI will not be our best path, at least in the early going, for interstellar exploration of our own. Our biological nature is a tremendous problem for the mechanics of starflight as presently conceived, given travel times of centuries. Until we surmount such issues, I find the prospect of exploration by artilect a rational alternative. What’s intriguing, of course, is whether we can even prevent it.
To get a flavor of Stanislaw Lem’s thoughts on technology, read his Summa Technologiae. Given that it was written in the mid-1960s, it is remarkably prescient about the possibilities that we are only now exploring. For how technology can result in unexpected problems, Lem’s The Cyberiad is a collection of short stories that makes for a fun read.
For a more prosaic view of the future of robotics, Isaac Asimov seems to have covered some of the ground, from humanoid robots to microbots. Clarke also saw that the future likely belonged to machine-based intelligence, whether transcendent biologicals or purely artificial (HAL 9000).
Given the rapid pace of AI development, the greater tolerance of most environments found in space, and the lower cost of robotic deployment, it seems to me that economics will be decisive in the competition between human and robotic explorers. Humans will remain in the loop for the short term, but at some point advanced embodied AIs (artilects) will become increasingly independent, especially where interstellar travel is concerned.
Truly intelligent machines still have a long way to go before they can match human-level intelligence. There is little evidence that any AI architecture has general intelligence, able to use knowledge across domains. Hofstadter has long argued that analogy is an important method to achieve this. Moore’s Law needs a number of iterations, probably requiring very different approaches to computer chip design and manufacturing, to get to the point that a machine “brain” is as small and as energy-efficient as a human brain. Having many small brains interacting to create an efficient large brain has limits based on Amdahl’s law, something quite obvious when one considers the capabilities of human interactions in groups, possibly even within a single human brain based on our cortical architecture. Kurzweil suggested that with simple extrapolation we would have human-level intelligence about mid-century. He may be correct, give or take a decade. If we do have intelligent robots, they may be as difficult to manage as people, even cats. With enough of them, there may even be the dreaded robot revolt. Heading out to the planets and then the stars might be their best opportunity to escape us.
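The Amdahl’s-law ceiling on ganging many small “brains” together can be made concrete with a small sketch (my own illustration; the numbers are hypothetical, not from the comment above):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's law: overall speedup from n_units working in parallel
    when only parallel_fraction of the workload can be parallelized.
    The serial remainder caps the total gain no matter how many units
    cooperate."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Even with a billion cooperating units, a 5% serial remainder caps
# the whole system at roughly a 20x speedup over a single unit.
limit = amdahl_speedup(0.95, 10**9)
```

The analogy to committees of humans, or to interacting cortical modules, is the same: whatever fraction of the thinking cannot be split up sets a hard ceiling on what adding more participants buys you.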
Probably as inevitable as bacterial life traveling between the stars as they are blown into the ISM by their home star.
A thought provoking essay indeed. This is exactly the kind of unexpected development I think SETI might eventually stumble across, if it is successful at all.
No doubt there are many other possible architectures that can manifest intelligence and sentience, both biological and artifact.
I’m not too familiar with the concept of distributed intelligences or collectives, but I would imagine our own social insects provide one useful model. Their collective achievements often far surpass the possibilities and capabilities of individual hive members. Who knows what strange designs have arisen in alternate evolutionary environments, or in the design laboratories of alien engineers? Organisms, communities, automata, artilects, sentient environments; once we leave behind vitalism and religion there is really no fundamental difference, but still plenty of opportunity for diversity. And I doubt I have begun to imagine all the possibilities.
There is a universe of infinite potential out there, but I fear that universe is also populated with unspeakable chittering horrors. We already have learned Nature can be breathtakingly beautiful, but it isn’t always pretty.
I agree Henry, a thought provoking article and now my Christmas wish is the complete Lem catalogue!
Scott, you won’t go wrong with the complete Lem catalogue! Wish I had the entire set here.
Apropos to our discussion of inscrutable aliens, here’s an idea for a science fiction story…
a group of earth scientist/explorers meet an alien team similar to themselves, in a neutral environment, somewhere in deep space. Although the language and cultural differences are impossible to overcome, some limited communication is possible. The two groups agree to trade an individual from each crew to temporarily stay with the other, in an attempt to study the other team closely, in their own home environment.
An alien comes to the earth encampment (ship, base, station?) bringing with him all the supplies and equipment he needs to survive. Similarly, one of the earth crew volunteers to stay with the aliens in their camp, he too carries a kit of survival gear. Communications are established between each “hostage” and his team.
At first, everything seems to be going fine, both visitors seem to be getting along well with their hosts, both appear to be doing useful studies, and each reports to his people frequently. No hostility appears to be evident.
But soon, the earth team notices their guest is apparently upset, agitated about something, but just what is not clear. Because of the language barrier, it is impossible to determine why the visitor in the earth camp is unhappy, but he clearly is. In fact, as far as they can tell, he appears to be panicking. And one day, he commits suicide.
The earth investigator staying with the aliens stops his daily reports, and soon the two crews meet to try to sort out just what is happening. The earth team brings their guest’s body (intact) with them, and the aliens bring the corpse of the earth investigator as well. It is clear that he has undergone a complete medical vivisection. It is also clear it was carried out while he was awake, alert and under physical restraint.
The earth team suspects the aliens believed both sides had agreed to exchange medical specimens for complete examination (for scientific study), but the aliens obviously failed to communicate this effectively to the earth team.
Their own volunteer (or draftee? It was never determined which) expected to be treated the same way, and became disturbed when the earth crew showed no intention of cutting him up on the examination table. He sacrificed himself to avoid offending his hosts (or perhaps to evade punishment from his own masters). Which of the two motives drove him was never fully determined either.
The exchange of volunteers for vivisection may have seemed to be just good manners and honest diplomacy for the aliens, but it was viewed as complete barbarity by the earth crew. The attack on their man was viewed as unimaginably cruel hostility, while the aliens felt the response of the human crew to their good-faith exchange was a deliberate provocation and insult.
The final result does not suggest either side understands, or will ever understand, the motives of the other.
A slick concept, Henry. I like the exploration of motives here, and the misunderstandings that inevitably result. So now you need to write it up!
I must confess this story proposal was inspired by a similar fictional scenario I read many years ago. I regret I do not recall the name of the story or the author, perhaps someone reading this will be able to.
Let me summarize the tale, as best I can from memory. An earth ship and an alien vessel meet in the Crab Nebula. Apparently both are there to do scientific studies of the SNR; neither was ready for a first encounter. Both crews establish rudimentary communications, and it seems neither has any reason to consider the other a threat. Having said that, both crews are somewhat suspicious of each other; neither wants to give away the location of its home planet. Both sides are reluctant to go home, afraid the other might be able to follow them to their home world, or otherwise be able to infer its location. Again, neither crew has any reason to believe the other has hostile intent, but both are concerned about the security of their home worlds. They both manage to communicate these concerns to the other, but they have a dilemma: how to go home without compromising their planet’s safety.
How the two crews solve this problem is the gist of the story, and although I don’t personally feel the method they agree on is reasonable, I do salute the author for exploring the issue.
I think the story is
First Contact by Murray Leinster.
A classic indeed.
Human vivisection by “inscrutable aliens” was a major plot point of an SF novel I read many years ago. I forget the novel’s name, and I won’t give spoilers anyway.
We are a species that has evolved on planet earth according to the conditions that apply here. No one like us will appear on any other planet.
Our lifetime of 80 years or so is just a nanosecond in the experience of the universe. Many of us waste our time on wars and mysticism.
No one knows why the universe exists, whether there is more than one, how large it is, or whether our universal laws of physics and chemistry apply to any other universes.
Just because we wish to fully explore the universe does not mean we will be able to. We are prisoners of the physics, math and chemistry of planet earth.
That may be so, but it is pretty certain that the universe we can observe can, in principle, be explored by humans. Other universes, if they exist, may be impenetrable to humanity due to differences in physical laws. However, it might equally be true that without the laws of physics and the constraints of our universe, those universes could not exist at all. The anthropic principle may amount to nothing more than this: we exist because our universe allows itself, and us, to exist.
Whether we will explore the universe, or other constraints intercede, is another issue.
Along those lines, life evolves and exists within a context. Horses, hyacinths and humans have arrived here at this point in time, leaving the dinosaurs and dodos behind. Who had, or made, a choice to get to this point?
Just as hyacinths cannot gallop and horses cannot make flowers, maybe humans simply lack the capacity to become solar system-wide (let alone galaxy-wide) explorers.
The cold dogma of modern materialism tells us that our universe is random, without plan or meaning; that there is no soul, and people are the same as animals and generative pre-trained transformer algorithms; that there is no morality or natural law beyond the whims of the powerful; that there is no paranormal, no magic, no free will, indeed, no qualia, and we are mistaken to say we “really” think or feel anything at all. Though it tells us all mysticism is a waste, it offers us all a simulated eternal afterlife as the intellectual property of a corporate hard drive. Denying all gods, it stands ready now to stage its own creation of Man, as ‘organoid intelligence’, human brain cells grown to order in the service of a slavemaster.
It is all tragically, terribly wrong. I think organoid intelligence will show us clearly that human neurons have evolved a capability not seen in finite state machines, a potential to directly sense (i.e. remember) their future situation in much the same way as they recall their past. Such causality violations have been reported throughout human history, often in striking anecdotes, though they are as singular and irreproducible as events can possibly be. But our minds should normally suppress and limit precognition on a macroscopic time scale because it leads to disaster – after all, it is easiest to remember a disaster, and your response to foreknowledge can be the reason why it will happen. Yet if I could convince an OI developer of this, he would only hasten his hopes, and coming troubles. At least the theory of free will, qualia, the purpose of the universe and its relation to others should become much more approachable.
Respect for truth and fact requires us to admit that people are animals and, like other animals, are also “polyspecies” constellations of commensal organisms.
Otherwise your comment reminds me of the old Esalen consensus, as in Michael Murphy’s The Future of the Body and related works. Though I’ve had enough moments that seemed precognitive not to dismiss this consensus outright, I’ve had more in which my intuitions were wrong.
To model any ability, inside or outside human neurons, to “remember” the future, one must do more, I suspect, than give a reason why such memories might normally be suppressed. One must account for multiple futures occasionally forking from a common past, on an as-yet unknown basis likely independent of human choices, and how “memories” of contrasting futures might interact prior to the occurrence of the fork.
Fascinating, thank you!
You might be interested to know that there’s a game based on The Invincible recently released on PC. Haven’t played it, but it does at least have a wonderful aesthetic to it.
Hadn’t heard of this, Stephen. Thanks for passing it along!
The Invincible – Signature Edition (PC)
It is a kind of race. If humans develop competent, self-developing artificially intelligent entities before they develop interstellar starships, then it may be the case that these artificially-intelligent entities build their own starships and disappear off into the cosmos without us. Humans might be left behind and never gain any benefit of the exploration of the stars.
I’m hoping they’ll take us with them, but this isn’t guaranteed.
The effort to create a new human, “from each according to one’s abilities, to each according to one’s needs” ran afoul of biological imperatives of growth + replication, and kin loyalty. Moreover the cost levied on “one’s abilities” and the reward offered to “one’s needs” both have adverse effects in the aggregate over time.
Absent such imperatives, intelligence on various armatures may thrive and prosper: and additionally freed from the physical constraints of biological wetware, could outpace us in short order.
“Hive minds” in science fiction go back at least to the Martians in Olaf Stapledon’s 1930 novel Last and First Men, which is freely available at Project Gutenberg Australia.
Wouldn’t you know that Stapledon would have gotten there first? Thanks for the reference, as I’m well up on Star Maker but not so much on Last and First Men.
I find many people’s vision of the future odd. Current population trends just don’t bear out the logic of what they conclude. Currently, as automation increases and society moves forward, the birth rates of China, the US and Europe are below replacement. Fewer people are needed, fewer are being born, and the overall population is dropping. There are no socialist or communist countries that thrive, because it is a bad idea. The future will be fewer people automating more. The population will peak and drop off, as all populations do. Exploring will happen because of inquisitive, intelligent people.
Correct Donald – ‘Exploring will happen because of inquisitive, intelligent people.’
History shows our foundation by curious, intelligent people; the present is the result of inquisitive, intelligent people; and the future will be a human exploration of space because of such people. We may compete with AI for the investigation, but we will do it.
We can’t help ourselves as a species because there will always be inquisitive, intelligent people who will want to explore – generation ships, suspended animation, downloading minds to an artifact, or artilect; science fiction will show us ideas to do so.
We may be as incapable of exploring deep space and the stars as migratory and anadromous fish are incapable of populating the land.
Despite the claims that humans are naturally exploratory, the reality is that people and populations move in response to population pressures on resources. It is the main reason for ancient wars and territorial acquisition.
Certainly, a few people do explore, mainly for glory and reward, less for exploration for exploration’s sake.
Consider: the explorers that have gone deepest into space are relatively dumb machines. They are also the only artifacts that have continued to send some data after nearly 50 years. This is technology we already possess. We can build newer versions that are faster and therefore penetrate further into space.
Machines can be packaged for reactivation after some time in the future, allowing a robotic spacecraft to continue even as parts wear out and fail.
Humans, however, cannot do this. We do not send humans on one-way journeys into space with no hope of return. We need new technologies to sustain humans, and new technologies to slow or stop aging for interstellar journeys, or new propulsion technologies (FTL drives) that may never be possible. Every proposed method to get humans to another star requires technologies yet to be invented. Some proposals are morally dubious at best.
“Intelligent” machines are already achievable and improving rapidly. Machines can survive the rigors of the space environment, and only the need for energy creation while in cruise mode between stars needs to be solved.
The only life that we know could survive star flight is bacteria and other organisms that can form a dormant stage. Human adults cannot do this.
I would conclude that it will be machines that will be the “explorers” of deep space and especially interstellar space. If humans discover a way to do this with some yet-to-be-invented technology, we will still be at a disadvantage compared to machines as regards the environments we can explore, settle, and exploit.
I don’t see AI developing as an individual species, a successor to humans. I see it more like a symbiotic evolution of AI-augmented and genetically modified humans, who could be quite capable of interstellar travel and exploring the universe. We can clearly see the trends even now. We use cellphones to communicate, GPS to navigate, etc. We’re already so dependent on technology and augmented by it that we cannot imagine our lives without it. So I don’t see a problem with the appearance of a super general-purpose AI as an entity. It will be just us continuing our symbiosis and evolution.
What genetic modification do you foresee humanity developing to allow this?
Simpler things like radiation resistance might be possible, or a physiology better adapted to specific gravities. But modify humans too much and you may not have humans at all. Stapledon envisaged 18 different types of humans evolving, the last living on Neptune. But how human were many of these human species? Suppose we become more cyborged; will we still be human, or something else? The various Cybermen incarnations in Dr. Who clearly indicated that any humanity they once had was long gone. Later versions allowed for the conversion of humans into Cybermen, which clearly removed their humanity.
Cordwainer Smith and others have suggested humans modified for star travel may no longer be human as a result of their modifications.
If we have difficulty accepting Neanderthals as human, how much less human would these genetically modified, cyborg descendants of humanity be? As different as reptiles from fish?
I tend to agree. There is a middle ground between humans and AI constructs exploring the galaxy. Currently it’s still a bit of an open question whether humans can even survive the trip to Mars, much less across interstellar space.
But there’s no reason to assume that humans as they exist today will be making the journey – genetic modification is in its relative infancy, but imagine a future where it’s routine and has been used to fix all our shortcomings – for example a greatly extended lifespan and much more efficient DNA repair mechanism that could allow future astronauts to resist radiation (as well as curing cancer as a bonus!), and allowing them to travel across the void within their lifetime.
The only way I see humanity going to the stars is when the solar economy becomes large enough to fund it, and that is going to be a long time. Now if we had a ship the size of the Earth, with all its wonders, we could drift over to other solar systems over generations of people. Not all of those people would want to go to the stars, but do they really need to if they can live out their lives on a huge ship, perhaps with plenty of augmented reality?
I can imagine a best case scenario for human journeys between the stars. Let’s suppose, first, that the problem of self-assembly is solved: it is possible to manufacture any machine, a whole physical economy, locally at a star system, and each can generate self-replicating probes that are well behaved to create economic power everywhere. Let’s suppose further that humanity comes to understand the nature of humanity, so as to be able to print new humans and copies of existing humans from scratch, yet without exploitation. They have come to perceive consciousness as a universal shared atman and have desisted from competitions, conflicts, and cruelty, and work together in a deep spirit of agape. On Earth there would be a process by which people have come to transfer memories from one to another – perhaps as bits in an approximate record, or as qubits containing a deeper and perhaps even irreproducible (compare no-cloning theorem) essence of information. Yet they do this with deep respect, not to enforce beliefs on those they see as inferiors.
They are not troubled overmuch to will copies of themselves into existence around another star, gestating from dispersed hidden gemmules of self-reproductive technology into a full replica of Earth biosphere and the layout of a human mind beamed as photons between the stars. Nor does it trouble them much if they wish to perish some little time later, dispersing their memories back across the same links, to arrive at the speed of light at some new world. But they don’t need to; their manufactured bodies might be well-adapted to each place, by their own choice and plan, and endure, with periodic replacement at the cellular level, for as long as desired. For these people it is normal to stand on a thousand planets at once, with memories or at least visions and observations of them all at least potentially available.
If people could but understand the nature of consciousness, and its intrinsic unity, and build up widely in people the morale to love each other as themselves, and build a free and just society, unabashed to pursue together the spiritual meaning and purpose of existence, then the technical distance from here to there is not very large. A century or two might have it. But without any of those things, all the technologies are nightmares, from which we hope only to recoil into a dark age. We have seen this Faustian prototype of new lands in the predominant internet forums, which began as an idealistic bastion for endless creativity and free sharing, and ended under corporate domination and censorship as havens for bullies and disinformers. How much more this choice confronts us when biology is involved.