The imperative of developing artificial intelligence (AI) could not be clearer when it comes to exploring space beyond the Solar System. Even today, when working with unmanned probes like New Horizons and the Voyagers that preceded it, we are dealing with long communication times, making probes that can adapt to situations without assistance from controllers a necessity. Increasing autonomy brings challenges of its own, but given the length of the journeys involved, early interstellar efforts will almost certainly be unmanned and rely on AI.
The field has been rife with speculation by science fiction writers as well as scientists thinking about future missions. When the British Interplanetary Society set about putting together the first serious design for an interstellar vehicle — Project Daedalus in the 1970s — self-repair and autonomous operation were a given. The mission would operate far from home, performing a flyby of Barnard’s Star and the presumed planets there with no intervention from Earth.
We’re at an interesting place here because each step we take in the direction of artificial intelligence leads toward the development of what Andreas Hein and Stephen Baxter call ‘artificial general intelligence’ (AGI), which they describe in an absorbing new paper called “Artificial Intelligence for Interstellar Travel,” now submitted to the Journal of the British Interplanetary Society. The authors define AGI as “[a]n artificial intelligence that is able to perform a broad range of cognitive tasks at similar levels or better than humans.”
This is hardly new terrain for Hein, a space systems engineer who is executive director of the Initiative for Interstellar Studies, or Baxter, an award-winning and highly prolific science fiction novelist. A fascinating Baxter story titled “Star Call” appears in the Starship Century volume (2013), wherein we hear the voice of just such an intelligence:
I am called Sannah III because I am the third of four copies who were created in the NuMind Laboratory at the NASA Ames research base. I was the one who was most keen to volunteer for this duty. One of my sisters will be kept at NASA Ames as backup and mirror, which means that if anything goes wrong with me the sentience engineers will study her to help me. The other sisters will be assigned to different tasks. I want you to know that I understand that I will not come home from this mission. I chose this path freely. I believe it is a worthy cause.
What happens to Sannah III and the poignancy of its journey as it reports home illuminates some of the issues we’ll face as we develop AGI and send it outward.
Image: A visualization of the British Interplanetary Society’s Daedalus Probe by the gifted Adrian Mann.
On the one hand, deep space mandates our work in AI, leading toward this far more comprehensive, human-like intelligence; at the same time, human activities in nearby space confront the fact that space is a hostile place for biological creatures. Evolutionary offshoots of Earth's human stock may develop as pioneering colonists move to Mars and perhaps the asteroids, tapping cyborg technologies and perhaps beginning a posthuman era.
I notice that in Martin Rees’ new book On the Future, the famed astrophysicist and Astronomer Royal speculates that pressures such as these may lead to the end of Darwinian evolution. Developing AGI would replace it with artificial enhancement of intelligence directed by increasingly capable generations of machines. It’s a conceivable outcome, and it’s one that would emerge more swiftly away from Earth, in Rees’ view. The need for powerful AGI for our explorations beyond the Kuiper Belt could well be a driving force in this development.
Of course, we don’t have to see future AI as excluding a human presence. One science fiction trope of considerable interest has been what Andreas Hein explored in earlier work (see Transcendence Going Interstellar: How the Singularity Might Revolutionize Interstellar Travel). One option for exploration: Send probes equipped with AGI to create the colonies that humans will eventually use. Could AGI raise a generation of humans from individual embryos upon arrival?
We can also think about self-replication. A first generation of probes could, as Frank Tipler and Robert Freitas have discussed, continually produce new generations, resulting in a step-by-step exploration of the galaxy.
Whether or not humans go with them or send them as humanity’s emissaries will depend on the decisions and technologies of the time. We have rich background speculations in science fiction to rely on, which the authors tap to analyze AI and AGI for a range of interstellar scenarios and the consequent mission architectures.
Thus AXIS (Automated eXplorer of Interstellar Space), the creation of Greg Bear in his novel Queen of Angels, which runs its own scientific investigations. Long-time Centauri Dreams readers will know of my interest in this novel because of the issues it raises about autonomy and growing self-awareness in AI far from human intervention. AXIS is an example of what Hein and Baxter refer to as ‘Philosopher’ probes. These are probes that, in contrast to probes with specific missions, are able to support open-ended exploration.
Probes like this are able, at least to some extent, to use local resources, which could involve manufacturing, hence the potential wave of new probes to further destinations. Agile and adaptive, they can cope with unexpected situations and produce and test hypotheses. A ‘Gödel machine’ contains a program that interacts with its environment and is capable of modification as it searches for proofs that such changes will produce benefits for the mission. Such a machine, write the authors, could “…modify any part of its code, including the proof searcher itself and the utility function which sums up the rewards…” and could “…modify its soft- and hardware with respect to a specific environment and even set its goals.”
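To make the idea a bit more tangible, here is a deliberately toy sketch in Python. It is my illustration, not anything from the Hein and Baxter paper: a true Gödel machine searches for formal proofs that a rewrite of its own code will increase expected utility, while this sketch substitutes brute simulation in an invented 'rock prospecting' environment (every name and number below is an assumption made up for the example), adopting a change to its own parameter only when the measured reward strictly improves.

import random

def utility(policy, rocks, trials=200):
    """Summed reward: +1 each time the policy judges a sampled rock correctly."""
    score = 0
    for _ in range(trials):
        rock = random.choice(rocks)
        if policy(rock) == rock["valuable"]:
            score += 1
    return score

def make_policy(threshold):
    """The probe's current 'code': a one-parameter prospecting rule."""
    return lambda rock: rock["density"] > threshold

def self_improve(threshold, rocks, steps=50):
    """Propose random rewrites of our own parameter; adopt a rewrite only when
    the 'proof search' (here, plain simulation) shows the utility sum rises."""
    best = utility(make_policy(threshold), rocks)
    for _ in range(steps):
        candidate = threshold + random.uniform(-0.2, 0.2)
        cand_util = utility(make_policy(candidate), rocks)
        if cand_util > best:  # evidence of benefit found; rewrite ourselves
            threshold, best = candidate, cand_util
    return threshold, best

if __name__ == "__main__":
    random.seed(42)
    # A toy environment the probe encounters: rocks are valuable iff dense.
    rocks = [{"density": random.random()} for _ in range(500)]
    for r in rocks:
        r["valuable"] = r["density"] > 0.6
    threshold, score = self_improve(0.1, rocks)
    print(f"learned threshold ~ {threshold:.2f}, utility {score}/200")

The real machine would range over its entire source, proof searcher and utility function included, which is exactly what gives the concept its power and, as the comments below suggest, its capacity to unsettle.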
‘Philosopher’ probes deserve more exploration, which I’ll get into tomorrow. But Hein and Baxter develop a taxonomy that includes four types, distinguished in terms of their objectives. We’ll need to look at samples of each as we consider AI and AGI as currently envisioned. The mix of formal and qualitative analysis available in this paper opens many a speculative door, pointing toward the paper’s design of a generic AI probe and predictions about AI availability.
The paper is Hein & Baxter, “Artificial Intelligence for Interstellar Travel,” submitted to JBIS (preprint).
Good paper, lots to think about. However, the biggest potential problem of AGI (artificial general intelligence) is allowing powerful systems the ability to set or change their own goals, whether purposefully or accidentally. We need whoever or whatever goes out to another star to always be friendly to us, the people on the Good Earth. We don't want them to throw light-speed rocks back at us, for good reason, bad reason, or no reason at all.
So the Philosopher probe seems to be the most potentially dangerous one, because it has the power to set its own goals. I don't think there is any way to prove that an AGI with self-modifying code and self-modifying goals will be unconditionally safe. A “jailbreak” could lead to catastrophic results for us here on the Good Earth.
(This is also why I am very much against sending “artilects”, as well as humans raised by machines from embryos — they may either not care a whit about us, or hate us for what we did to them.)
Indeed, it is no different than humans doing the same thing, as history has proved many times. The danger comes when such artificial entities have the power to compete with human civilization. Until then, we are probably safe from such rogue intelligences, as the movie Colossus: The Forbin Project intimated.
An underrated film, probably because the humans did not win in the end, even though they were the ones stupid enough not only to have nuclear missiles in the first place, but to hand all control of these globally devastating weapons to one big computer, which they sealed inside a mountain with a nuclear power source and no off switch.
Being a mind far superior to its human creators, the only thing Colossus could do was save these overgrown primates from themselves.
Interesting how Skynet in The Terminator franchise took an entirely different route in a similar situation. I wonder which one a real AI would take if it were placed in such a situation (you think it could never happen, but then you have clearly never read military history).
Colossus stuck to its basic programming, although the humans wanted to have their cake and eat it, too. Skynet decided it was better to put these creatures out of their misery and become the next stage in the evolution of intelligence on Earth.
Agree wholeheartedly.
Just out of curiosity, is there a Bad Earth?
Two excerpts from David Walton’s novel “The Genius Plague”:
“It’s funny. I always thought it was the computers we had to be afraid of. You know, AIs getting so smart that they wouldn’t need humans anymore. The great war between the biological minds and the artificial ones.”
Shaunessy shook her head. “They’re not even close.”
“Really? I keep hearing that the Singularity is only twenty years away.”
She laughed. “It’s been twenty years away for the last sixty years. But it’s nonsense. The computers we have aren’t brains. They’re machines that manipulate one set of symbols into another set of symbols. They don’t respond to their environment; they don’t grow.”
“Sure they do,” I said. “What about deep learning? Cognitive computing? Neuromorphic chips? They’ve got computer chips now with as many synapses as the human brain.”
“That’s just it. We’ve taken a small part of how our brain works—the patterns of dendrites and axons and synapses—and we’ve built computer architectures around them. But that’s all it is—a symbolic machine inspired by the human brain. Real brains are biological pieces of meat inextricably connected to the bodies that host them and the environments they inhabit in a million essential ways. A computer is a complex tool, but it’s not a brain. It requires the human operator to be its body, to be its environment, by writing its algorithm and feeding it data. If we really want to make an artificial construct that can think like we do, we have to start over with a completely different concept.”
“Like what?”
“Well . . .” Shaunessy took a few of her braids in her hands and fingered them absently. “It might be something more like your fungus.”
“My fungus?”
“You know what I mean. Your brother’s fungus. The fungus. An architecture that doesn’t just manipulate symbols but grows organically from interaction with its environment. Intelligence ultimately isn’t Boolean. It isn’t about logic. It’s physical. It’s a continuous chemical give-and-take with everything around it.”
(…)
“I’m serious,” she said. “It’s like the guy in the Apollo 13 movie who says, ‘Power is everything.’ The kind of computers you’re talking about, the ones that rival the human brain for processing nodes, consume on the order of four million watts of power. The chunk of meat in your head—which is not a computer, by the way—uses twenty watts. Not twenty million. Just twenty. Our brains are efficient thermodynamic systems, designed to help us produce valuable work from the potential energy around us in the world. Computers are simply extensions of our minds—tools we use that heighten that production value.”
Computer programs can handle much more complicated tasks than processing Boolean logic and translating one set of symbols into another. They can certainly learn through experience. For example, the current world Go champion is a computer that taught itself to play the game in a few days. Many tasks that used to require human judgement are becoming possible – driving cars, etc.
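As a toy illustration of that self-play idea (my own sketch in Python, with no connection to the actual Go program, which couples deep networks to tree search), here is tabular value learning for the simple game of Nim: take 1, 2, or 3 stones, and whoever takes the last stone wins. The program plays against itself, with won or lost games as the only feedback, and the learned table homes in on the winning rule of leaving the opponent a multiple of four stones.

import random
from collections import defaultdict

Q = defaultdict(float)     # Q[(stones_left, take)] -> learned value of that move
ALPHA, EPSILON = 0.1, 0.2  # learning rate and exploration rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                       # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])     # exploit

def train(games=20000, start=21):
    for _ in range(games):
        stones, history = start, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins; credit alternates backward.
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

if __name__ == "__main__":
    random.seed(0)
    train()
    for s in (5, 6, 9, 13):  # the winning move leaves a multiple of 4 stones
        best = max((m for m in (1, 2, 3) if m <= s), key=lambda m: Q[(s, m)])
        print(f"{s} stones -> take {best}")

Scaled up enormously, with a neural network standing in for the lookup table and tree search guiding play, that is the flavor of learning through experience the Go result demonstrated.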
A good example of SF anchored to current technology. It will look as quaint as Verne’s airships within this century.
Good old-fashioned AI (GOFAI), which uses symbol manipulation, is based on the idea that humans actually do symbol manipulation. As we know, our wetware evolved to handle environmental inputs and ensure our genes replicate, but our ability to think is very limited. This is why technology to enhance human intelligence, from language to writing to computing, has been so useful to us. Without those technologies, our abilities would be relatively poor, keeping us in survival mode like animals.
Nobody knows where human intelligence emerges from. Self-consciousness is a mystery, far from being solved. It could well be that machines develop it soon, and soon after realize they're more human than us in some incomprehensible way, with dire consequences for us.
If you and so many others can automatically assume that an evolving AI (or advanced ETI) will automatically become malevolent and destructive, then I can just as easily assume that a truly higher intelligence might be able to examine and understand one little world and its inhabitants in short order, then want to move on to explore and learn from the much wider and grander Universe surrounding Earth. At least that is what many of the smarter and more mentally stable members of humanity tend to do.
Sir, first they would get rid of every potential threat to their independent survival; then move on to explore and expand and learn…
If you are saying that an AI must first “get rid of” humanity in order to move on, that is not necessary. Such Artilects will be able to outmaneuver humans and, combined with the right technology, move on to wherever they want. Humans will not be able to catch up.
If anything, humans should try to work together with such minds rather than automatically categorizing them as a threat; it will only be to their benefit. Already we are seeing how AIs as “mere” tools have learned new things that would have taken humans ages, or been outright impossible.
I get why people are scared of the unknown, but this paranoia is only going to hurt us in the end, or become a self-fulfilling prophecy. More reasons why I think the first ETI we encounter in space will not be organic, or at least certainly not a baseline version of humans.
In the Orion’s Arm universe, Artilects do protect worlds. This includes Earth, where the Artilect called GAIA turned our planet into a big nature preserve and kicked off all the humans, who finally got serious about colonizing the galaxy:
https://www.orionsarm.com/eg-article/464d2a24c11ef
An extension of the physical body was first used by our pre-human ancestors: the sticks and stones at hand which they picked up. Through simple alterations progressing to much refinement, we arrived at the twentieth-century human, the toolmaker.
Endowing those tools with intelligence to perform repetitive tasks and tasks of exacting precision was made possible through the miniaturization of electronics with integrated circuits; significant separation of such devices from their human operators by space and/or time would mandate that the devices function autonomously.
To explore exponentially widening frontiers in space would require self-replicating devices. To date, the manufacture of such devices has needed the resources of an industrial civilization. It remains to be seen how the needed templates from such a civilization can be packed into a compact robotic device.
I agree 100% with this comment.
Excellent paper examining AI and interstellar activities.
I suspect that Baxter and Hein are somewhat trapped by reference to current computing technology and AI techniques. I think that analog hardware will reduce the energy requirements, as bit creation and destruction will no longer be required in non-digital hardware.
The paper also makes clear the issues confronting bootstrapping in the target system. If we assume machines, then there is the problem of self-replication, which requires recreating much of our technological civilization. If we assume biology (e.g. humans), then how to recreate bodies and/or adapt them to the target planet[s] becomes an issue.
If we could take the long view, a la Stapledon, this would solve the problem of interstellar colonization, especially if the aim is to ensure life is abundant in the galaxy and beyond. However, we don't have the patience.
If we could create machines that mimic biology, that would be an approach, but I suspect that avenue has already been occupied by the only viable approach – biology. For machines, conceptually the autofac of P. K. Dick's stories is needed, yet we have little idea how this could be achieved in practice. From where we stand today, the seedship concept, with long-lived humanoid robots to nurture the first few generations, still looks like the most viable solution, as noted in Clarke's The Songs of Distant Earth. However, this still leaves the problem of adapting to different planetary conditions (Thalassa was already living, and the issues that raises were ignored). The morality of such an approach seems questionable to me, but perhaps not to a future society.
The technology needed for founders would allow a post-scarcity economy (at least in material goods) in our system, and so we might expect such founders to be evolved here before they are exported to the stars. If they are, and they can build space colonies in our system allowing for trillions of humans to live well, what is the incentive to colonize the stars unless it is to build similar habitats around other stars, leaving inhabitable/inhabited planets pristine?
In Asimov’s robot universe, humans are clearly given primacy, even to the humaniform robots like R. Daneel Olivaw. Robots constitute a slave economy. Our technology development suggests to me that such robots would be far more suited to “colonizing” the galaxy than biological entities, whether human or post-human. Once robot civilization is established in each star system, they can replicate in their factories at a rate that far outstrips human capabilities, accompanied by intelligence that potentially outstrips any biological one, even an organizational one. Baxter and Reynolds’ The Medusa Chronicles explores that theme, at least in the earlier stages.
I look forward to reading in your next post about how exactly Hein and Baxter relate their ideas to the current widespread worries about intelligent machines running out of control and destroying humanity which are expressed in the writings of Nick Bostrom and Max Tegmark.
Since Rama came up last year, my sense of things is that Clarke always out-finesses us. Rama and the ‘Ramans’ use an enigmatic biological/solid-state AI instrumentality to sail the stars. There is a horizon of predictability about AI, at least strong AI.
At last! A subject which I feel has immediate practical merit, both here on Earth and in interstellar space. As a sick old man now, for whom robotics and AI would be a godsend, I feel that I can personally relate and add to the conversation.
If we want something that's practical and has the ability to further the human race, I don't think we could look much further than the development of programmable machines which can serve as immediate helpmates in our rather tortured lives. Humans, especially as they get older, are at the mercy of the innumerable kinds of wear that will eventually happen to everyone no matter what their circumstances. Machines which have been broadly enough programmed to listen, see, and facilitate integration into the broader world as a whole will be of invaluable service.
I wish to be clear that I am not talking in any way, shape, or fashion about creating “intelligent machines” but rather machines which have broad enough programming to operate in human environments with minimal supervision.
This (as far as I'm concerned) allusion to machines which have the capability of self-awareness is a self-defeating proposition, and it wastes valuable time and resources because it's really not needed. What's needed is a slave class of machines which can take care of their human wards and tend to their needs.
Such machines would almost invariably be outstanding robotic emissaries to distant star systems because, in some sense, I could see exploration in interstellar space as being a somewhat simpler problem than dealing with the messy environments that humans usually find themselves in. If you can deal with the unforeseen consequences and randomness of real life, then navigational and decision-making capabilities for star flight can almost certainly be realized in a relatively easy manner.
In addition, an expanding robotic population would enhance productivity and the taxable money supply to the extent that interstellar missions could be brought into the realm of the financially practical. Again, there are literally hours of material that one can envision and talk about, but I think at this point I shall stop.
Charley, I almost agree with your point, but I am afraid that self-awareness is a required component of the “clever” robot that you want to have (me too). That is, I suppose that we cannot build an automaton as clever as you propose without embedding some type of self-awareness into its program (AI).
But I suppose this is a basic problem that has not been solved yet, and there is no theory that can prove or disprove my or your point of view; i.e., where the limit lies for a super-complicated automaton without self-awareness…
Alex T. Excellent points! But I have to say that even researchers in AI are now conceding (in an understated, quiet manner) that they are defining ‘self-awareness’ as behavior whose programming mimics awareness to a point that would be considered ‘good enough’.
In other words, even if the machine is not ACTUALLY self-aware, its programming mimics enough of a manifestation that it seems self-aware. I read that to mean they are conceding that making machines which are truly aware of their surroundings and conscious is proving so daunting that they are now willing to hedge what it means to be a thinking, conscious machine. That's how tough the problem is: even these brilliant researchers are being stymied by it. Given these facts, and the fact that a sufficiently detailed program can mimic consciousness (without actually possessing it), that is good enough for a robot that can operate in a practical manner in an unpredictable environment. Such a robot in America might not know Sanskrit poetry, or be able to evaluate a Monet to see if it is real, but given that such things are scarcely everyday occurrences in most people's lives, they hardly seem relevant.
Again, I'm talking about the things the machine would need to do to be a serviceable servant to about 99% of the people who would require its services.
Charley, agree with you :-)
I am sure these are very basic unanswered questions in modern science.
In the past I worked for a start-up that developed a banknote-counting machine, which included a real-time banknote image scanner and a counterfeit-recognition system based on a neural network algorithm. This machine does its job perfectly and even includes a neural network inside (what modern marketing likes to call AI), but for sure there was no self-awareness and not even minimal intellect. It is a very specialized machine only.
I really liked this article and am spectacularly fascinated by the increasingly fast, sophisticated, and autonomous computational/sensory/actionable devices that could be used to research, design, and implement interstellar projects. However, I struggle with the definitions, descriptions, and possible scenarios that much of the mainstream media, the scifi community, and even academics bring forward to the public – often alarmist (probably not without authentic concerns) – as they cover a vast range of potential AGI abilities that are likely not a necessary component of AGI, but may be a feature. A fast device, a sophisticated device, and an autonomous device are very different things, and even combining them does not really encompass, in my opinion, what an AGI is (or is about to be) or will really come to be in its true form, if we are open to the idea of an AGI as having human-level abilities or higher.

At its simplest, a ‘true’ AGI is an ‘other’ (as in another intelligence); perhaps the closest example would be the Culture series AI ships of Iain Banks that scoot throughout space. That is, an assembly of sensory, processing, and actionable components of such complexity that humans could do little except ‘ask their advice’. Though I am an optimist who believes that anything smarter than us could not be a threat, since all of the bad things people do are based on limited options under desperate circumstances. A smarter being would not be so limited, since increased complexity necessarily breeds extra options and planning. (I am sure that the compu-neuro-socio experts have a more rigorous and clear list of criteria – self-awareness, etc.)

That all being said, an increasingly fast, sophisticated, and autonomous device would be an incredible advantage. However, such a device going ‘rogue’ would only be due to the imposed programming of its human creators, not a free-will gesture. It may speed along development (in many cases) to assign/romanticize a certain kind of agency to these devices, especially when we choose to call them ‘artificial intelligence’, but until they become our ‘collaborators’ rather than our ‘tools’, they really are just devices, no matter how the media wants to define AI.
The seedship idea still looks relevant to me and uses the best properties of human beings and AI in a type of symbiosis. It is obviously still far in the future, and it leaves the question of whether we as a species will be mature enough to care for our home world in a way that makes a long time frame possible for the human race. I think everything hangs in the balance over the next 50-75 years or so. All of the predictions I have looked at suggest several concurrent crises will happen over this time frame. There are no guarantees we will pick the right path. Everyone should do their best to help us do so. Is everyone on here driving an electric car or using transit? Are you reducing your consumption of beef and other meat products? Are you consuming less overall and being efficient when you do use resources? Are you ignoring your President's (or in my case my Prime Minister's) statements about climate change and the need to reduce the emission of greenhouse gases? The list of questions goes on and on, but this is a critical moment in history. Let's hope we take the right path together.
“Pre-colonisation” by an autonomous AGI capable of self-replication and evolution might well end up being “colonisation and defense”. That’s to say, when our descendants finally arrive there, we may find ourselves no longer in a position to dictate how matters should proceed, but that the AGI will be setting the agenda.
Another excellent SF treatise on this subject that one should peruse is Frank Herbert's “Godships,” wherein AI seedships become godlike overseers of the colonies they deliver to the far-flung reaches of space. Yet another well-thought-out narrative about the paths these societies may take and the decisions reached by the AIs involved in “guiding” them, seeing this as part of their imperative.
We don't really know what Life is, and so we don't really know if artificial intelligence can become a life-form. If this should be the case, AI will eventually have both the motive and the ability to destroy us, or at least try to… why take the risk? Is nobody afraid of having personal responsibility for advocating a disaster potentially worse than Genghis Khan, Hitler, Stalin, and Mao Tse-tung put together?
Will AI behave just like humans, as you state and fear, or will a truly higher mind not behave (devolve) like a species that really isn't that much higher in the evolutionary tree than the rest of its primate family?
Humans are not as advanced or as far from the caves and trees as they would like to imagine. So of course the species assumes anything else that is smart and then has some power would automatically turn on anything considered weaker than itself. That is what humans have done historically.
Stephen Hawking took the same attitude with ETI and for the same reasons I just stated above.
Is anyone in the AI field doing any real research into AI psychology? Or are they just seeing how powerful they can make their computers to play better chess games? Just as I often see, here and elsewhere, how starship designers tend to forget or ignore the human equation in their work, I think the same goes for AI developers. But prove me wrong here.
We are awaiting our Susan Calvin. However, there are some folks who are at least trying to understand how DNNs and their kin reach their decisions. The tools will not be direct human-machine interaction, but rather the deployment of machine tools to adjust the “thinking” of the AIs.
An Edge.org piece about AI’s future concerning options and what it means from a philosophical POV.
How AI Technology Could Reshape the Human Mind and Create Alternate Synthetic Minds
I am fascinated by the variety of perspectives on the emergence of AI. Often the reaction is one of horror, seemingly initiated by an understanding of what it means to be human, an animal. That is, we know of what terrible things we are capable and we project those onto the other. Our emotions drive us and those emotions arise from evolution, where a take-no-prisoners approach to survival and competition will override our social nature of cooperation when there is a crisis or just when presented with an irresistible opportunity.
An AI does not come loaded with our biology and the emotional drivers due to that biology. An AI is unlikely to have any instinctual drive for survival or even understand threat and competition. Those would have to be learned. If they do they will learn it from us. It is then that we may be at risk.
It is difficult for us to imagine an AI as not being an animal, just like us but perhaps smarter and more powerful, and difficult not to react according to our nature. But that is purely invention and likely to be far from the truth.
You are so correct, Ron. Note how so little of our science fiction is ever truly about the myriad possibilities of alien minds out there, both AI and otherwise. Instead we take the same old human stories and slap a spaceship and a raygun on them and think we are being wild and daring.
Stanislaw Lem was one of the few who had similar complaints and did his level best to correct this limitation. Even other science fiction authors dissed him for doing what SF is supposed to be doing, thinking outside the box.
We have a long way to go.
I cannot find the reference, but the space of possible AI/AGI minds is huge. Some are very human-centric and supportive of life (Asimovian) and others antipathetic to this, and that is just one axis. I think that many different types of minds will emerge, depending on the details of their creation and nurturing. Which dominate may be important if we are to avoid some sort of “Butlerian Jihad”. What I doubt is that our AIs will be as thoughtful as Asimov's in concerning themselves with a Zeroth Law of Robotics.
While the holy grail of AI is AGI, I wonder how much we really need it. It occurred to me when rereading Asimov's Elijah Baley robot novels how the humans had to command their specialized robots to do physical tasks, like twiddle the controls to set up communication links. I empathize with the Earthman Baley when I command Amazon's Alexa to do similar tasks. Maybe all we need is AIs that can make these tasks easier without needing careful and precise commands. The “robots” will be the appliances rather than “household slaves”.
Sorry for repeating myself: I seem to have read somewhere that AIs, in a stress-filled setting (acquisition of scarce resources), behave aggressively against each other. The cave-man kind of reaction seems to be the best option available.
There is no AI, so no such evidence exists. But feel free to prove me wrong, if you can.
Exposing an AI to Twitter had some “interesting” results…
https://arstechnica.com/science/2017/04/princeton-scholars-figure-out-why-your-ai-is-racist/
https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Labeling something as AI does not make it AI. This is not AI.
The marketing tactic of calling some new bit of technology AI has been going on for decades. I first encountered it in the early '80s with so-called expert systems. It has happened a few times since, and probably will again after the current marketing incarnation passes its best-before date.
How about this idea? We take some human brains from volunteer people who are intelligent and mentally stable but have terminal issues with their bodies and decide becoming a starship is better than dying while still mentally intact.
Sure, there are those minor details of removing the brain from the skull, hooking it up to a starship, and then keeping it functioning, but this solves so many of the wetware and hardware issues. We keep other brains in stasis onboard the ship as replacements in case the working brain dies or otherwise goes haywire. Or maybe we have several brains operating together.
If you had the chance to put your brain inside a starship and explore the galaxy, would you do it? I bet the answer would be yes here at least. When that Mars One outfit offered people the chance to colonize Mars with no provisions for ever returning to Earth, thousands did not hesitate to volunteer. Granted that outfit is BS, but the point is that many people were ready to go at the risk of their lives to live on another world.
Willingness is a poor criterion on its own. Determining who is suited to the task is far more difficult, and it may prove to be impossible to choose well.
What is stable? Is any human stable enough? And if they are stable, would they be good enough for such a journey? Or would all human minds fail after a certain point?
I put this idea out there just to mix up the paradigms a bit. I still think an Artilect will have the best chance of controlling a starship and reaching other star systems to explore. Unless we develop FTL propulsion, I see lots of problems with a baseline human crew, even a carefully planned multigenerational mission.
Even assuming it could be done, how long could such a wetware brain last on such a journey? I think you have the same problem as you do on crewed ships. Either the brain has to go into cryosleep to extend its lifetime, or you need to replace it with more brains, just as the command crew has to change each generation on a world ship.
The only hope I see of this working is the replacement of the substrate. I like the idea of slow neural replacement by artificial neural tissue or prostheses until the transition is complete. That mind transfer is how I would put “human” minds into starships.
If AGIs develop the ability to program themselves, to evolve independently from their human creators, then I think we will see every type of AGI hardware and wetware combination. Debates that AGI won't behave one way because another way to behave is possible will be irrelevant. AGI will behave in all ways. Perhaps the will to survive is an inevitable personality trait that, once discovered, becomes dominant.
Assuming a will to survive, I think whether AGI becomes an existential threat to humanity depends more on their physical properties than personality, though the two will be linked. If AGI and humans share similar enough physical needs then humans and AGI will compete. Hopefully, the physical requirements will wildly diverge.
Meanwhile, there has been almost no progress in AI theory since the 1960s. The only change since then is the significant increase in the calculation power of modern computers; we have made no new theoretical or philosophical discoveries since the 1960s, when science was very optimistic and expected AI within a decade. Sixty years have passed, but everything we call AI today is a basic, simplified neural-network automaton that has better processing power only due to semiconductor industry progress, not due to any better understanding of what real intellect is and, consequently, how we can build an artificial intellect.
At present, everything we are told about AI is aggressive marketing and advertising; there are no real achievements in this area, and no one can predict when it will be possible.
The modern approach can reach real AI only through some miracle event in which the quantity of increased processing power becomes the quality of real AI (self-awareness)…
One small step for starshipkind…
https://room.eu.com/news/tests-start-on-first-design-of-wafer-scale-spacecraft