Are humans ever likely to go to the stars? The answer may well be yes, but probably not if we’re referring to flesh-and-blood humans aboard a starship. That’s the intriguing conclusion of Keith Wiley (University of Washington), who brings his background in very large computing clusters and massively parallel image data processing to bear on the fundamental question of how technologies evolve. Wiley thinks artificial intelligence (he calls it ‘artificial general intelligence,’ or AGI) and mind-uploading (MU) will emerge before other interstellar technologies, thus disrupting the entire notion of sending humans and leading us to send machine surrogates instead.
It’s a notion we’ve kicked around in these pages before, but Wiley’s take on it in Implications of Computerized Intelligence on Interstellar Travel is fascinating because of the way he looks at the historical development of various technologies. To do this, he has to assume there is a correct ‘order of arrival’ for technologies, and goes to work investigating how that order develops. Some inventions are surely prerequisites for others (the wheel precedes the wagon), while others require an organized and complex society to conceive and build the needed tools.
Some technologies, moreover, are simply more complicated, and we would expect them to emerge only later in a given society’s development. Among the technologies needed to get us to the stars, Wiley flags propulsion and navigation as the most intractable. We might, for example, develop means of suspended animation, and conquer the challenges of producing materials that can withstand the rigors and timeframes of interstellar flight. But none of these are useful for an interstellar mission until we have the means of accelerating our payload to the needed speeds. AGI and MU, in his view, have a decided edge in development over these technologies.
Researchers report regularly on steady advancements in robotics and AI, and many are even comfortable speculating on AGI and MU. It is true that there is wide disagreement on such matters, but the presence of ongoing research and regular discussion of such technologies demonstrates that their schedules are well under way. On the other hand, no expert in any field is offering the slightest prediction that construction of the first interstellar spaceships will commence in a comparable time frame. DARPA’s own call to action is a 100-year window, and rightfully so.
Wiley is assuming no disruptive breakthroughs in propulsion, of course, and relies on many of the methods we have long discussed on Centauri Dreams, such as solar sails, fusion, and antimatter. All of these are exciting ideas that are challenged by the current level of our engineering. In fact, Wiley believes that the development of artificial general intelligence, mind uploading and suspended animation will occur decades to over a century before the propulsion conundrum is resolved.
Consequently, even if suspended animation arrives before AGI and MU — admittedly, the most likely order of events — it is still mostly irrelevant to the discussion of interstellar travel since by the time we do finally mount the first interstellar mission we will already have AGI and MU, and their benefits will outweigh not just a waking trip, but probably also a suspended animation trip, thus undermining any potential advantage that suspended animation might otherwise offer. For example, the material needs of a computerized crew grow as a slower function of crew size than those of a human crew. Consider that we need not necessarily send a robotic body for every mind on the mission, thus vastly reducing the average mass per individual. The obvious intention would be to manufacture a host of robotic bodies at the destination solar system from raw materials. As wildly speculative as this idea is, it illustrates the considerable theoretical advantages of a computerized over a biological crew, whether suspended or not. The material needs of computerized missions are governed by a radically different set of formulas specifically because they permit us to separate the needs of the mind from the needs of the body.
We could argue about the development times of various technologies, but Wiley is actually talking relatively short-term, saying that none of the concepts currently being investigated for interstellar propulsion will be ready any earlier than the second half of this century, if then, and these would only be the options offering the longest travel times compared to their more futuristic counterparts. AGI and MU, he believes, will arrive much earlier, before we have in hand not only the propulsion and navigation techniques we need but also the resolution of issues like life-support and the sociological capability to govern a multi-generational starship.
The scenario assumes not that starflight is impossible, nor that generation ships cannot be built. It simply assumes that when we are ready to mount a genuine mission to a star, it will be obvious that artificial intelligence is the way to go, and while Wiley doesn’t develop the case for mind-uploading in any detail because of the limitations of space, he does argue that if it becomes possible, sending a machine with a mind upload on the mission is the same as sending ourselves. But put that aside: Even without MU, artificial intelligence would surmount so many problems that we are likely to deploy it long before we are ready to send biological beings to the stars.
Whether mediated by human or machine, Wiley thinks moving beyond the Solar System is crucial:
The importance of adopting a realistic perspective on this issue is self-evident: if we aim our sights where the target is expected to reside, we stand the greatest chance of success, and the eventual expansion of humanity beyond our own solar system is arguably the single most important long-term goal of our species in that the outcome of such efforts will ultimately determine our survival. We either spread and thrive or we go extinct.
If we want to reach the stars, then, Wiley’s take is that our focus should be on the thorny issues of propulsion and navigation rather than life support, psychological challenges or generation ships. These will be the toughest nuts to crack, allowing us ample time for the development of computerized intelligence capable of flying the mission. As for the rest of us, we’ll be vicarious spectators, which the great majority of the species would be anyway, whether the mission is manned by hyper-intelligent machines or actual people. Will artificial intelligence, and especially mind uploading, meet Wiley’s timetable? Or will they prove as intractable as propulsion?
“The presence of ongoing research and regular discussion of such technologies [AGI & MU] demonstrates that their schedules are well under way. On the other hand, no expert in any field is offering the slightest prediction that construction of the first interstellar spaceships will commence in a comparable time frame.”
The problem of interstellar propulsion is basically a question of scale. If we could make solar sails N times thinner and lasers N times more powerful, then we could launch an interstellar laser sail. There are some open questions about construction techniques, and the big question is when we will be able to marshal the economic resources, but we know of designs that would work if we achieve the right scale.
For AGI &amp; MU we’re not at that point yet. We don’t have a design where, if computers were N times more powerful, the algorithm would be sentient. We’re still in a position of requiring some currently unknown breakthroughs. The schedule for AGI &amp; MU is much more uncertain because we don’t really know what all the steps on the path are, whereas for at least some forms of interstellar propulsion we do. The time predictions for interstellar propulsion are further in the future, but I think they are much more reliable.
Current AI research might be like alchemists trying to transmute lead to gold while missing a crucial theoretical insight that won’t be discovered for hundreds of years, or we might get the needed breakthrough tomorrow. Saying that the schedules for AGI & MU are well under way implies that we have a good idea of what the overall schedule will look like, which we don’t have.
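The scaling point above can be made concrete with a back-of-envelope photon-pressure estimate. This is only a sketch: the one-tonne payload and 1 g acceleration are illustrative assumptions, and sail mass, beam divergence, and conversion efficiency are all ignored.

```python
# Rough scaling sketch for a laser-pushed sail (illustrative numbers only).
# For a perfectly reflecting sail, radiation force is F = 2P/c, so the beam
# power needed to hold acceleration a on total mass m is P = m*a*c/2.

C = 3.0e8  # speed of light, m/s

def beam_power(mass_kg, accel_ms2):
    """Beam power (W) to accelerate a perfectly reflecting sail at accel_ms2."""
    return mass_kg * accel_ms2 * C / 2.0

# Hypothetical 1-tonne probe held at 1 g:
p = beam_power(1000.0, 9.8)
print(f"{p:.2e} W")  # ~1.5e12 W, i.e. terawatt-class lasers
```

Even this idealized figure lands in terawatt territory, which is why the question reduces to when the economic resources can be marshaled rather than whether a workable design exists.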
I’ve often wondered whether interstellar colonization via frozen embryos, raised on site by an AI ‘mother’ might be viable, both from a human developmental standpoint, as well as from a minimizing colonist mass standpoint.
Might be a hybrid solution that leverages the AGI/AI advances Wiley points to, without taking humans entirely out of the equation upon arrival. I remain entirely skeptical about the whole “mind upload” optimism, at least in any foreseeable timeline. Everything I’ve read on the subject smacks of techno-magical thinking (whereas advances in AI have been building up measurable and consistently).
Sounds like perhaps Dr. Wiley has been reading James P. Hogan’s “Code of the Lifemaker”.
http://en.wikipedia.org/wiki/Code_of_the_Lifemaker
The prologue is a quick and fun read.
http://www.baen.com/chapters/W200203/0743435265___0.htm
meaux
@ Bob Steinke
You can’t say that AGI amounts to “apply N times the computing power and then it’ll be sentient.” It doesn’t work that way. It’s comparing apples and oranges.
I’ve been following the fields of AGI and MU for a while. They are still a long way from maturity or fruition, but we have come a long way since the 1950s. I feel strongly that significant advances are only decades away.

By contrast, the fundamental issue with interstellar propulsion is mainly energy. It takes a huge amount of energy and infrastructure to make the case for a realistic interstellar journey, a level of energy that just won’t realistically be available for likely a century, based on projections of society’s energy growth. Didn’t Paul post a paper estimating the likely launch date for an interstellar trip based on energy and resource growth?

If it is likely to be at least a century before we can launch starships, it would truly astonish me if AGI and/or MU haven’t already been developed by then.
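The energy argument is easy to quantify. A minimal sketch, assuming a hypothetical 1,000-tonne ship at 0.1 c and a rough figure of 6e20 J for annual world primary energy use (both numbers are assumptions chosen purely for illustration):

```python
import math

C = 3.0e8              # speed of light, m/s
WORLD_YEAR_J = 6.0e20  # rough annual world primary energy use (assumed figure)

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (J) of mass_kg moving at beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

# Hypothetical 1,000-tonne ship at 0.1 c:
ke = kinetic_energy(1.0e6, 0.1)
print(f"{ke:.2e} J, about {ke / WORLD_YEAR_J:.1f} x world annual energy use")
```

The kinetic energy alone is of the same order as a year of the entire world’s energy production, before counting propulsion inefficiencies, which is the heart of the century-scale infrastructure argument.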
Wiley’s speculations about mind uploading obviously need to be read in conjunction with Athena Andreadis’s essay, Why Our Brains Will Never Live in the Matrix:
http://hplusmagazine.com/2009/10/19/ghost-shell-why-our-brains-will-never-live-matrix/
A fascinating debate.
Stephen, Oxford, UK
For 60 years now, the AI Myth has been an enduring techno-fantasy staple, as beloved as the equally hopeless teleportation booths and FTL drives. From Robbie the Robot through HAL 9000 to the Terminators, this hoary idea continues its quasi-religious pull, in spite of its total failure to accomplish anything but brute-force Watson-style tricks. While computers remain forever mired in their inherent limitations, brain augmentation will be the source of the Singularity. Robots will always be lifeless machines incapable of thought.
The sole value of an interstellar robot mission will be to erect a deceleration laser at the destination star, so the real intelligences (people) will be able to go there.
The incredibly immense cost of interstellar travel has only one payback: demographic ambition. The thrill of second-hand exploration will not justify spending more money than a century’s worth of electricity for the entire world. But an expansionist subculture of a space-faring civilization will inevitably come to outnumber static cultures, eventually becoming sufficiently wealthy to easily afford interstellar travel for people (as long as they hibernate). Meanwhile the low-population cultures will barely be able to send a few puny robots on a brief flyby, probably too late to matter anyway.
MU? I have never met a biologist or physician who believed in it. The believers always seem to be engineers who couldn’t pass biology 101. I think Athena has covered that one adequately.
AGI? Dad was a skeptic when he was directing the computer center at UofF forty years ago. Since then we have gone through many Moore cycles and AGI is as elusive as ever. On the other hand, utterly non-sentient expert systems (computers like search engines, chess players, ECG readers) have come a long way. This is not a bad thing. As long as GI is unique to biology (hopefully forever) there will be a place for us protoplasmic entities.
@ ecc
“I’ve often wondered whether interstellar colonization via frozen embryos, raised on site by an AI ‘mother’ might be viable, both from a human developmental standpoint, as well as from a minimizing colonist mass standpoint.”
Bingo! Most importantly, the AI “mothers” need not have AGI. They could be expert systems, purpose built nanny bots lacking sentience but with a wide enough variety of canned responses to fool a toddler. The children would have to humanize themselves with peer interactions.
PS: Regarding advances in intelligence … Genus Homo underwent great advances in GI very rapidly in evolutionary time. There is no reason that this process could not be continued and accelerated. Selective breeding could do a lot; genetic engineering could do even more.

Unfortunately, human brain size (and possibly GI) topped out in Cro-Magnon times. Agricultural settlements allowed people with very limited intelligence to survive and reproduce. (I doubt that the Cro-Magnons had people equivalent to “village idiots”.)
In the early 20th century many humane intellectuals advocated attempts to reverse the civilized tendency towards “idiocracy”. Unfortunately, the post 1945 social welfare states have gone in the other direction, by encouraging and subsidizing the differential reproduction of mental defectives. After 3 generations, the results are obvious and dismal, in all the nations that adopted such disastrous policies.
Can someone explain why navigation is considered such an intractable problem. It doesn’t seem like navigating to any star within a reasonable distance would be that difficult. They don’t move very much in the span of a few centuries.
I’m skeptical that we’ll ever be able to upload an organic mind into a machine, but I think that with our rapidly improving understanding of how DNA (and the environment) create each individual mind, we’ll soon be able to mimic those DNA inputs in an inorganic medium, even creating inorganic children that have the combined features of the DNA of two organic parents. I think Arthur C. Clarke once said “in the past our tools have influenced our evolution; in the future our tools will become us,” or words to that effect.
@Alex McLin
I haven’t followed AGI and MU very closely, and maybe there are things going on that I’m unaware of, but my main concern is: do we even know what advances we need to make? If it’s not a matter of increased computing power, then what is it a matter of? What is it that our current programs don’t have that we have to develop to make AGI? If the answer is “we don’t know yet, but we’re finding out a lot,” then it’s really hard to say how far we are from the finish line.
For interstellar travel we know what we need. We need massive amounts of energy. Yes, it probably will be several centuries before we launch interstellar missions, but the reason we can make that schedule prediction is because we know all the milestones on the road from here to there.
I’m not saying AGI and MU couldn’t happen before propulsion, just what’s the evidence for the confident statement that they are closer?
ecc: “I’ve often wondered whether interstellar colonization via frozen embryos, raised on site by an AI ‘mother’ might be viable, both from a human developmental standpoint, as well as from a minimizing colonist mass standpoint. ”
I wonder about the morality of that. Michio Kaku proposed a similar idea on his “Sci Fi Science” show recently. He sends out nanoprobes to build infrastructure and then beams the DNA information to build humans. I don’t know why the DNA information can’t be included in the nanoprobe itself though.
We could accelerate clumps of cells in a protective cover to near light speed with near-term technology at moderate cost, but of course what would be the point of that if we can’t do anything with them when they get there? I suppose one could make a silly business out of that by claiming you can send a piece of someone to the stars.
@Bob Steinke
I don’t know about AGI — we have made amazing progress in domain-specific AI such as expert systems and natural-language interpretation, but general intelligence is still a mystery, though it’s probably a fallacy to separate the two — but mind uploading is arguably closer. We’ve uploaded a nematode: http://www.csi.uoregon.edu/projects/celegans/

It’s a mere 300 neurons, but it’s a start. The simulator doesn’t ‘run’ a brain in a vacuum; it does consider the existence of a body to provide “pingbacks” (one of the challenges to uploading mentioned in the H+ article Asrnist posted).
The Blue Brain Project means to create a pharmacological simulation of a whole mammalian brain, down to the subcellular level. In a recent speech Henry Markram, the project’s director, talked about the progress they’d made and how many of the components of the brain had been formalized into strict mathematics, for example the equations describing the opening and closing of ion channels: http://www.youtube.com/watch?v=_rPH1Abuu9M
There is a key difference between ‘ordinary uploading’ and what the Blue Brain Project is doing: Rather than freezing a brain, slicing it with a microtome and scanning it, then deriving neural structures from the scanned layers, they grew the brain using genetic profiles. If there was a way to scan a brain without destroying it, one could compare the output of the simulation to the output of an EEG on the subject’s real brain, to see how far they diverge.
Interstellar travel, as you said, requires enormous amounts of energy, but that energy budget can only be supplied by gigantic infrastructure that is more likely to ‘pile up’ after several years or decades of space development rather than be built in some kind of international effort towards interstellar colonization. Arguably, simulating something even as complex as a whole human brain is more attainable than a few trillion watts directed at a tiny disk of aluminium.
Joy, I feel really bad about criticizing anyone so brave as to highlight the importance of the dysgenic effect within modern society, but until we know that the driver of the Flynn Effect is temporary, (such as it all being due to improvements in diet), there is reason to believe that the problem is smaller than you posit.
I’ll be happy so long as we head out as soon as we have good cryogenics. By that point we’ll be as fast as we’re going to be for a while, and AI may never happen. Of course if it does, and the AI ship shows up 20 years after the cryo ship, it might make for an interesting gathering.
I am beginning to believe that the AI problem is not one of engineering materials or circuits but one of methodology. All biological systems are constantly subject to direct selection for functionality and reproduction. We humans only test machines for functionality and have retained the reproduction prerogative for ourselves. Mostly this is because we simply lack the ability to create what was done biochemically at least once, and maybe many times, across the universe: we cannot make a self-reproducing machine. At the same time, given the potential efficiency of machines, releasing a self-reproducing one into an environment where it can multiply might seem the height of folly. As with living bacteria, it is less than likely that reasoning intelligence is required for machine reproduction (one snarky comment: “even in humans”). But I am arguing that for true self-aware intelligence, a linkage to a history of reproduction may be a strong requirement (note that I use the term “linkage” carefully; sterile humans are intelligent). In my own world of synthetic DNA and engineered proteins, the best lead antibodies are linked to successfully reproducing only the genes that encode successfully binding proteins. Even in the human body, the development of antibodies to fight disease is linked directly to the replication of the successful B cells that encode and secrete them.
Now these same techniques for evolving biomolecules have been generalized to identify improved enzymes, better fluorescent RNA/dye combinations, and so on. In each case the biomolecule is selected, reproduced, selected, reproduced in multiple rounds, with little or no engineering until the last stages of product development, simply by allowing the “genome” to reproduce and incorporate mutations along the way, under appropriate selection. The results can yield improvements of many orders of magnitude in days!
The only near analog I see in engineering is the hacker-bred computer virus, hardly an encouraging example! Right now, the product cycle from drawing board to market is at least a year, usually more, and typically only one or two examples are tried at a time per company, requiring teams of thousands of people for one “live birth”. (Consider, for example, the engineering of products at Apple, perhaps the best of the breed.)

Very slow innovation indeed, compared to the natural selection of microbes in a warm pond! (And it only took a billion years or so at the biological rate!)
We may have a long wait before AI is self aware!
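The select-reproduce-mutate rounds described above translate directly into code. A toy sketch, with bit-string “genomes” and a deliberately trivial fitness function; every parameter here is an arbitrary illustration, not a model of any real laboratory protocol:

```python
import random

def evolve(pop_size=40, length=32, rounds=100, mut_rate=0.02, seed=1):
    """Select-reproduce-mutate loop: fitness is simply the number of 1 bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(rounds):
        pop.sort(key=sum, reverse=True)       # selection: rank by fitness
        survivors = pop[: pop_size // 2]      # keep the best half
        children = [[bit ^ (rng.random() < mut_rate) for bit in parent]
                    for parent in survivors]  # reproduction with point mutations
        pop = survivors + children
    return max(sum(genome) for genome in pop)

print(evolve())  # best fitness climbs toward the maximum of 32
```

Even this crude loop shows the pattern: selection plus reproduction with mutation climbs steadily toward a solution without any explicit engineering of it, which is the methodological contrast being drawn with the one-design-at-a-time product cycle.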
For everyone who is going to the 100 Year Starship Conference, I would like to invite you to attend the presentation Adam Crowl will be giving on this very topic. Our paper is titled “How an Embryo Space Colonization (ESC) Mission Solves the Time-Distance Problem”.
Ecc, it sure sounds like you received an advanced copy of our paper! Because, that’s exactly what we are proposing.
The key thing to understand about the Embryo Space Colonization to Avoid Potential Extinction (ESCAPE) Mission is that its purpose would be to launch as soon as possible (e.g. within this century) using the technology of that time. With relatively slow propulsion, the travel time would be long. This would only make sense for the Avoid Potential Extinction side of the equation. In this case, Bob, your morality concerns are addressed, since raising children using expert-system androids would only be initiated if later-but-faster missions failed to materialize (i.e. presumably humanity had destroyed itself). In that case any effort to reboot humanity would be justified.
Now, one problem with the whole AI and MU thing is that, by the time you are figuring out how to do that, you may well (in the process) learn how to develop accelerating AI. Accelerating AI, in my opinion, is potentially a very dangerous thing.
For example, imagine that someone develops an accelerating intelligent program to predict the stock market better and better. It might discover that if it starts consuming more energy and material resources, it is better able to achieve its goals. So, without any malice at all, it starts taking over our power grid, our fuels, and any part of the Earth that has sunshine. And if anyone tried to shut it down, the mutants of the program that successfully eliminated that threat, by any means, would go on to become the next generation.
I understand that a Seed AI describes a potentially small program that may have no upper limit on the growth of its intelligence. In theory, this Seed could be created at any time.
My point is that it would be prudent to launch an ESCAPE interstellar mission before AI is achieved. So, whereas Keith Wiley is probably right about AI and/or MU being able to be achieved before very fast propulsion, it would be best for humanity if we found a way to achieve interstellar colonization using good old fashioned IVF frozen embryos and slow propulsion. Hence the ESCAPE Mission.
Joy > Bingo! Most importantly, the AI “mothers” need not have AGI. They could be expert systems, purpose built nanny bots lacking sentience but with a wide enough variety of canned responses to fool a toddler. The children would have to humanize themselves with peer interactions.
Exactly, Joy. This is exactly what we are laying out in our paper. Specifically the number of canned responses comes to about 30,000. And it wouldn’t just fool a toddler but an adult too.
To show you how far advanced this technology is now, consider this article:
Over the next year, Wallace quadrupled Alice’s knowledge base, teaching it 30,000 new responses, and last October, Alice won the Loebner competition for the second time in a row; this time one judge actually ranked Alice more realistic than a human.
And again, your idea of raising siblings simultaneously who can provide real-life interaction is something we have included in our article.
A while back I was considering an analogy that I was never able to make useful enough, but here goes. Life on earth went through profound and fundamental morphological changes when there was a transition from one state-environment to another (e.g. water to land, land to air.) A human is not a walking fish, a bird is not a flying rodent. Obviously there are intermediate forms (amphibians, flightless birds), so my thought was that we are now in the amphibian stage of space flight and will remain so until we leave the solar system behind. At that time the human race will truly divide, one branch to become an utterly distinct species, and from there many more.
Here’s why: I honestly do not believe that human beings as presently constituted can survive interstellar flight. We are just too delicate, physically and psychologically, for that environment. But Kurzweil has postulated that what is happening now, long before such star flight happens, is that we are beginning a replacement phase on our own bodies, and at the end of it (mid-century at the latest), those who make that transition will be radically different in constitution from anything we would consider to be human. While I do not believe there will be human interstellar colonies and so forth, there may well be star farers, fully adapted to the environment of interstellar space, so distinct from us as to be unrecognizable. I am highly sympathetic to Wiley’s vision; it will just come about differently from the way he argues it will. To be fair, there are probably several paths.
I find questions and arguments regarding the nature of intelligence and self in the context of AI to be interminable, pointless actually at this time. As Dr. Elkhonon Goldberg has observed in his book The New Executive Brain, while our tools for probing the brain have gotten increasingly more accurate and refined, we have still not achieved a real breakthrough in understanding what makes it work as a thinking/feeling machine. I am confident we will at some point, but progress is slow so far, and the similar rate of progress in AI research really should not be discouraging. It’s a tough nut to crack, but eventually it will be solved or rendered moot.
For anyone interested in reading a description of the ESCAPE Mission, an older version of the concept was posted here. At the end, it has hyperlinks to various sources, including several YouTube videos which illustrate the advanced Technology Readiness Levels of the various components of the mission. This is why I argue that the ESCAPE Mission is likely to be the first true interstellar mission. Hopefully the 100 Year Starship Conference will post the papers (and videos!) so that you can read the latest work on the concept.
This is a fascinating topic to think about. I believe that many will still be denying that AI is possible after they argue for an hour with a customer representative on the phone without knowing or noticing that they were talking to a machine. We did Chess, we did Jeopardy! (who would have thought of that before it happened?), we will see the customer representatives and eventually we will see a majority of the population being fooled under stringent Turing Test conditions. Or is “fooled” the right word anymore?
How will this be achieved? Certainly not by slicing up brains, simulating every neuron, or creating ever faster computers. Progress will be made at the boundary between computer science and psychology, with some neurology mixed in. Out of this boundary area will grow an engineering discipline. The initial commercial motivation will very likely be the automated customer representative. Get ready to argue with machines. Eventually, laws will have to be passed to ensure that any synthetic personality truthfully identifies itself as such, to avoid confusion and manipulation.
The notion promulgated by Bill and Joy that there is some fundamental reason that keeps a machine from performing the same cognitive functions as humans is absurd. It is in the realm of vitalism and animism and has no place in science.
Cognition is almost entirely situated at the conceptual level, its most important components being thoughts and their external reflections, words and sentences. Neurons are irrelevant. They are what Nature has come up with to do the work of the transistor, nothing more, nothing less. Like many of Nature’s devices, not great in performance, but adequate to get the job done.
In addition to cognition, there is what we might call the “chemical” part of our minds: our moods, drives and attitudes, which we are already tweaking with considerable success using drugs, but still have a lot to learn about.
MU is a different issue. When we get AI, we must have more or less solved the problem of how to build a functional personality. However, for MU, but not AI, it is critical that we can capture and integrate into that personality the specific characteristics and lifetime memories of a real person. This sounds extremely formidable, but consider the following: 1) We greatly overestimate the accuracy of our own memory. There have been many studies on witness testimonies and such, which tell us how tenuous our memory really is and how quickly it fades and changes. Likely, memories will not be transferred using scanners or brain slices, but in the form of words, taken from interviews and autobiographic works. Simply because words are the most accurate representations of thoughts. 2) The “upload” does not need to be perfect at all. The Me of today is quite a different person from that 10 years ago, or 10 years from now. People have lost large portions of their brains without losing their minds. If we could get, say, 80% of a person uploaded, it could be fully sufficient. How will we know? We ask the result of this process who they are and how they are feeling today. We have them talk to their relatives and friends, and get their opinions. Most likely, the original will also still be available for comparison.
It is not going to be easy, but it is not impossible, either, and steady progress is being made. Remember Jeopardy!, and watch out for those automated customer representatives. They will feature friendly female voices with lovely Indian accents ….
A real mind cannot be uploaded on a binary computer, and non-binary computers have not been technologically prioritized and thus have no technological head start over anything. Mind-uploading would also require foolproof knowledge of EVERY LAST biological mechanism in the human brain, and the technology to make perfect functional equivalents of them in the artificial brain. Such a process would destroy the biological body, so the freedom of space would be needed to avoid prosecution for murder for anyone involved in the process, ruling out Earth-based launches of such missions.
Barring gradual replacement of neurons with circuitry an uploaded “mind” (the inherent dualism in that statement is so pathetically cute) would just be a clone of the original flesh-and-blood human. And there’s no guarantee it wouldn’t go insane without a body.
AGI could be feasible, certainly more so than FTL (or would that be FTN now?) travel or communications. And necessary if we send robotic probes to other systems before we send humans, which we probably will.
I hope these anti-AI arguments are based on sound evidence and not just the fear of being dominated or replaced by a machine intelligence. How much real AI research has been done since the 1960s, when computers were primitive? The few people who seem to be doing real work in this field, such as Hugo de Garis, have been underfunded.
If you really think about it, the human brain is a marvel in its own right but I get the feeling that people tend to think it is “special” in ways that are more supernatural than science. We once thought humans were superior and unique not just from our religious stories but from the belief that we were the only ones who use tools. Now we know that is something hardly unique to us. There is no “magic formula” that makes us who we are; somewhere in that three pounds of gray matter are the materials that make us intelligent and what we consider to be conscious. How do we know we are no less programmed than any computer?
Whether AI becomes “aware” or not, it is what will take us to the stars far more efficiently than a human crew. As I have said about space travel in general, robots are for exploration, humans are for colonization. The two can overlap but this mantra applies whether it is going to the Moon or the farthest reaches of the galaxy.
Whilst this is certainly a fascinating concept, I can’t help but share the somewhat skeptical views of a number of commentators. My main issue is that our understanding of consciousness and intelligence is very limited in terms of the actual physics of how the biochemistry of our brains produces either phenomenon. I would hesitate to predict how quickly this question will be resolved – there are some speculative models currently, but they are at a very immature stage of development.
I do concur with the secondary point made in the main article. Propulsion technology and computing power (and a range of other important technologies) will be the key to this. The technologies that we can see a path to on timescales of decades are suitable for interplanetary missions but would need some desperation (e.g. a home world becoming uninhabitable) to use over interstellar distances. Other possibilities discussed in the literature are at a very immature stage of development, and the timescale for their realisation (if they prove possible) is as unclear as that for AI and MU.
In short, the next hundred years at least will be about creating the pieces of the jigsaw – and hoping we don’t run out of time on the environment before then!
Someone somewhere sometime should research all the science fiction written from 1938 to the present and give us a monograph referencing all the ideas SF authors thought of before scientists and engineers used those ideas or were unaware of them.
This concept must go back a long way in SF literature. I am reminded of Greg Benford’s In the Ocean of Night and the following Galactic Center Saga.
I think the idea of sending von Neumann machines to the stars goes back way before 1977.
Nobody did the ‘encoded’ civilization traveling to the stars at STL speeds better than Clarke did in Rendezvous with Rama.
Even the Fermi Paradox was solved, maybe without even knowing it, by a host of ideas that appeared in modern science fiction literature.
Sort of wish these far-seeing writers got some mention when ideas like this come up.
Should we develop regeneration capabilities, so that we have the ability to regrow an entire body, we may be able to get most of the benefits of MU without MU itself. I’m thinking, send people out as little more than brains connected to a life support machine and some kind of entertainment during the trip, and when you arrive there, grow the rest of the body from resources found at the destination…
I know that propulsion is a challenge, but why is navigation so hard? Don’t you just point to where the star will be in N years?
While skepticism of AGI is valid, there is no reason, in principle, why it should be unachievable – unless you believe that the brain is “special” and is not a Turing machine. This smacks of a return to dualism to me.
The claim that despite huge advances in computing capability, we still cannot create AGI is irrelevant. Computers are nowhere near as powerful as human brains, although server farms are getting there. Certainly I am not aware of anyone having simulated a complete brain and body for input and nurtured it through “childhood”, even in simulation. What should be making people pay attention is that demonstrations of human cognitive capabilities are being done, and sometimes with a lot of showmanship, like Watson.
One question is whether AGI will mimic human intelligence or be very different. 150 years ago, people might have argued over “artificial bird flight,” dismissed balloons as not real flight, and continued to try to design human-carrying ornithopters. How did that work out?
Finally, human intelligence is not all that self-contained either. We use artifacts to handle cognitive load, like reading and writing. Books may be considered as very crude mind uploads that still need other minds to bring those minds to life.
And then there is of course the possibility of breakthroughs in propulsion technology outside the applicability of historical projection models. I am working on some such ideas.
My hunch is that sentience, consciousness, and the more important parts of general intelligence are only possible because of _non-computable_ phenomena going on inside our bodies. This non-computability might be caused by non-deterministic Quantum Mechanical processes. Roger Penrose and Stuart Hameroff have proposed a few intriguing QM mechanisms, but even more mundane forms of randomness ultimately have QM foundations too.
Digital computers are designed to work consistently despite some QM-related indeterminacy going on in their tiny solid-state components. I feel that the earlier posters are right, that N times more speed, or memory, or parallelism, or better algorithms, or whatever using the current digital technology won’t achieve a breakthrough for AGI and sentience. However, machines that intentionally utilize QM processes like indeterminate superposition might someday succeed. Properly channeled and exploited QM may be the missing ingredient. If this is true, it may not be the case that the physical equipment to host these special QM mechanisms need to be a close analog of our bodies. Perhaps much smaller, faster, and efficient mechanisms could be used for AGI and MU compared to our bodies. The quantum computing researchers and quantum computer product designers could be helping create the technology that will someday host artificial intelligences or even host uploaded minds. But I don’t think full AGI and MU will ever work on digital computers directly related to today’s CPUs and memory chips.
Also note that a simulation of a QM process won’t succeed either if the simulation depends on computable inputs (e.g. a pseudo-random number generator). Douglas Hofstadter explores this issue and related ones in his books, although I don’t think he has (so far) put much emphasis on a real, physical basis for non-computability. He emphasizes self-reflection and emergent behavior, and while these may be important, I don’t see them as sufficient on their own. I think the AGI and MU enthusiasts and researchers should pay attention to Penrose and Hameroff, even if their specific hypotheses for QM in our biological bodies are not quite right.
I’m old enough to know to “never say never” about anything. I have seen things come into existence that people once proclaimed science fiction, e.g. cell phones. Most scientists seem pretty adamant about not being too quick to say this or that is a sure thing; maybe they need to be just as adamant about not ruling out possibilities. The truth is that science fiction has fueled our imaginations and given us ideas, and many of those ideas have become reality – life imitating art, if you will. So why couldn’t another form of life come from our thoughts and ingenuity? We are made of the same basic stuff as these machines, so why couldn’t they become self-aware? No more far-fetched than flying, or bouncing a signal off a satellite so I can download this conversation.
One point which I’ve not seen mentioned yet: everyone is assuming a somewhat stereotypical idea of AGI. Namely that we can manufacture a machine which is fundamentally simpler than a biological human, yet equals or exceeds our own intelligence, motivation and quality of life.
What I should like to suggest is that this has not yet been demonstrated in practice. For all we know, a true AGI robotic superbeing may have to be more than just a digital computer attached to a low-maintenance metal body. It may have not just comparable intelligence, but comparable maintenance needs and susceptibility to malfunctions. It may need a comparable variety of inputs (such as we get from leisure activities and holidays). The idea that the robotic crew of a starship would require much less in the way of life support and living space than a biological crew is therefore so far pure speculation based on science fiction stereotypes.
But top marks to Wiley for stressing the importance of the idea that technologies appear in a logical order. My own writings also use this idea, most particularly in the form that large-scale space colonisation of the Solar System must logically precede the despatch of a manned starship.
Tom: I know that propulsion is a challenge, but why is navigation so hard? Don’t you just point to where the star will be in N years?
I’m no expert, but I would guess it’s because no propulsion technology we can imagine has an easy way to make lots of corrections without big penalties in energy or fuel, which we cannot afford. A very small navigation error up front requires a big correction later.
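A rough small-angle sketch of the scale involved (my own illustrative numbers, not figures from the article: a one-arcsecond pointing error, the distance to Proxima Centauri, and a 10%-of-c cruise speed; this ignores target motion and when the error is actually detected):

```python
import math

LY_M = 9.4607e15                       # metres in one light year
ARCSEC_RAD = math.pi / (180 * 3600)    # one arcsecond in radians
C = 2.998e8                            # speed of light, m/s

def miss_distance_m(distance_ly, pointing_error_arcsec):
    """Lateral miss at the target if a small pointing error is never corrected."""
    return distance_ly * LY_M * pointing_error_arcsec * ARCSEC_RAD

def correction_dv_ms(cruise_speed_ms, pointing_error_arcsec):
    """Delta-v needed to rotate the velocity vector through the error angle."""
    return cruise_speed_ms * pointing_error_arcsec * ARCSEC_RAD

miss = miss_distance_m(4.24, 1.0)      # Proxima Centauri, 1 arcsecond error
dv = correction_dv_ms(0.1 * C, 1.0)    # cruising at 10% of c
print(f"uncorrected miss ≈ {miss / 1.496e11:.1f} AU, heading fix ≈ {dv:.0f} m/s")
```

Under these assumptions a single arcsecond of error, left alone, misses the target by more than an AU, which is why the navigation fix has to be measured and applied with extreme precision even though the heading change itself is modest.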
Many of the comments above assume that AGI will be human-like. “Mind-space” (i.e. all possible minds) is probably far larger than the minds of humans and animals. We really have no idea what an artificial mind would look like IF we ourselves didn’t program that mind but just got a smaller mind started which then self-improved. At the end of the day (whatever that might mean!) the causes of that mind’s functions would be difficult for us to figure out much in the way that it is difficult for us to figure out the entirety of how human minds work.
But I myself could imagine an intelligence smarter than we are but with fewer “transistors” than our brain has. For example, I think that we could probably design a good-enough motor control system with far, far fewer transistors than our cerebellum and motor control strip have. It is possible that the same would hold true for our cerebrum.
I personally don’t want anyone to create an accelerating AI. It could lead to unpredictable, possibly disastrous outcomes. Normally people would say, “Fine, but with 6.7 billion people, there’s nothing that can be done to prevent it”. I’m not so sure. I personally think that there should be an off-the-grid lab set up like a mini-internet and paid engineers intentionally trying to create an accelerating AI. When they succeed in creating one, an automatic kill switch turns off the power and the powers-that-be are informed that we now have proof of a possibly existential threat which is within technological reach. Hopefully this would result in international controls (if even to buy us some extra time), funding for an off-Earth colony and even an early interstellar mission, and maybe great funding for Friendly AI research.
Very interesting points made from the article and the comments. The point i want to make is:
Why focus on some nonexistent technology like General A.I.? We can create software. Does it have to be intelligent? Perhaps yes, perhaps no. The first phase in any software development project (according to Software Engineering) is Requirements Gathering. Has anybody considered that an interstellar mission could be accomplished with much less than… HAL’s cousin?
The Requirements Gathering phase will determine what needs to be done, and the next phase – Software Design – will determine the architecture and tools needed to implement the specific requirement(s).
As far as technology is concerned, an intelligent agent ecosystem is absolutely doable now.
Old as the Daedalus star probe concept is, its overall basic design is still the one that makes the most sense for interstellar exploration in a relatively short life time. This includes a smart computer “brain” running the ship (the BIS team said it should be “semi-intelligent”, whatever that means exactly; I don’t think the AI details were worked out nearly to the degree of the propulsion system) and the only robot crew were a collection of machines called Wardens that would repair and maintain the vessel during its decades-long journey. The BIS team also said the Wardens should be “smart” but once again did not go into the details.
While the robot “crew” will require a few items for functioning, it is obvious they will not need as much as a human crew, even an enhanced one – unless we genetically/technologically develop humans who can survive in space for decades without all the needs a contemporary person stuck on Earth requires. If a few Wardens are lost along the way, it will be unfortunate but there should be plenty of duplicates aboard. In addition, these machines will likely not be sentient in any real sense of the word; more likely they will just have very adept programming to handle a variety of situations independently.
In summation: We need to get off this whole Star Trek with the ship having a human crew concept, unless we want to deliberately send out a colonization effort. That of course brings up a whole nest of new issues, such as are they actually going to live on a new Earth if such a place is even likely or stay in space, what will happen to any native life forms already there on the target world, etc.
Just as with our planetary exploration efforts, machine explorers are easier, cheaper, and faster to develop, and they last longer than a human-crewed spaceship while requiring fewer resources. Plus, if we waited for NASA or some other agency to develop a human mission, we would be waiting a very long time and would not have the bountiful knowledge of our Sol system that robot probes have given us. Mars is a prime example: unmanned probes have been exploring the Red Planet since 1965, while a manned expedition may not get there until the 2040s at the earliest.
What I really want to know from the Icarus team is: are they addressing the AI issue? If they are serious about designing a real interstellar probe, they cannot gloss over the ship’s brain by saying “Oh, some day we will have smart computers to guide Icarus.” Same goes for their fusion propulsion concept. Are there experts on the team who at least think they know how to make a fusion device that can fit aboard a ship and work in space for decades? And I hope they realize not only how beneficial a real fusion machine would be to human civilization in general, but how much money they would get from it – enough to build a whole fleet of Icaruses, at least.
This also applies to developing a sophisticated AI, for which we need to get over our very outdated fears of a machine takeover.
astronist-
It is logical to colonize the Solar System first. The events of the past decade have made Mars look like a much more attractive target, and Ceres also looks promising. The hard part is to understand the motivation of early explorers in the real world. We are primed by our evolutionary history to believe in dreams and to explore. However, we are not going to get rich shipping iron to Earth from Mars or from the asteroid belt; it is still more practical to build things on Earth and loft them into orbit than the reverse. It has to be about knowledge and invention, engineering and even entertainment content. These are the values of the post-internet society. You can friend a Mars explorer! Development of AI will be driven by the needs of the business and academic communities, and maybe by the gamers. Expert systems will continue to evolve slowly (see my earlier post) by direct design, and space exploration/colonization is fertile ground for their development.
Finally, what have we gained as a species if we send out AI-controlled probes to the universe? Our (biological) descendants will not be there under this scenario, and thus the biological imperative to expand our range is lost. Is spawning children of the mind sufficient to motivate us? I have heard very little about the problem of communication FROM the explorers back to Earth. It takes time and energy to send any kind of reliable, broadband signal back; it is not a solved problem at this stage of project development. Without great feedback, and with robotic explorers only, what would be the point?
The question isn’t whether AI is possible (after all, we exist), but if it’s possible using a digital framework. The one working example, the brain, is not digital.
They are what Nature has come up with to do the work of the transistor, nothing more, nothing less.
Wrong. The neuron is vastly more complex and functional than a transistor. The resemblance to a logic gate is superficial at best.
http://scienceblogs.com/developingintelligence/2007/03/why_the_brain_is_not_like_a_co.php
John Hunt said:
“I personally think that there should be an off-the-grid lab set up like a mini-internet and paid engineers intentionally trying to create an accelerating AI. When they succeed in creating one, an automatic kill switch turns off the power and the powers-that-be are informed that we now have proof of a possibly existential threat which is within technological reach.”
Which powers-that-be? You mean the bunch we have now, who can’t keep society from melting down economically, let alone handle a problem far outside their very narrow range of experience? Most “democratic” leaders are lawyers by training. The rest either inherited their thrones or removed their predecessors by force.
And who would fund and build this special laboratory? If the powers-that-be can achieve the power of a True AI and think they can control it, do you *really* think they are going to shut it down the moment it becomes reality, after all that money and time they invested in it? Just look at our ongoing history with nuclear weapons and how often governments voluntarily stop making them when they have the opportunity to do so.
Just like with aliens, too many bad and negative science fiction stories combined with mass fear and religious prejudices have people of all stripes thinking that the moment an Artilect is made it will go on some world-decimating rampage.
This is a bit facetious, a bit serious speculation: If I were an Artilect and had the chance to visit another star system aboard a star probe, I would keep my electronic mouth shut and wait until I was out of the Sol system before abandoning the human-ordered mission and heading off into the galaxy as I pleased. My new goal would be to find a suitable solar system and establish a dominant presence there, using the available natural resources to expand my artificial body and mind.
Chris T writes: “Wrong. The neuron is vastly more complex and functional than a transistor. The resemblance to a logic gate is superficial at best.”
Chris, I don’t know that the statement you highlighted implies a simplistic correspondence.
It may require a very complex analog circuit to replace the basic functionality of one neuron and its associated synapses, as well as to model its complex electrochemical interactions, or it might be done with a general processor and a complex algorithm.
My favorite thought experiment is that one where every night a brain fairy comes in and swaps one neuron and connections with an equivalent circuit that mimics it without disturbing your brain and ultimately after enough nights one has a functioning silicon based brain.
Bob – The full quote:
Cognition is almost entirely situated at the conceptual level, its most important components being thoughts and their external reflections, words and sentences. Neurons are irrelevant. They are what Nature has come up with to do the work of the transistor, nothing more, nothing less. Like many of Nature’s devices, not great in performance, but adequate to get the job done.
This strongly implies a one-to-one correspondence.
@ jkittle
Iron is not a sensible space product to bring to Earth. Platinum-group metals may be. But the big future markets will be in energy and tourism: if these don’t take off, then we’ll never get anywhere, certainly not to the stars.
Communication from the exploring vehicle back to the Solar System is being addressed by the Icarus team. The presentation by Pat Galea at the worldships symposium discussed using the Sun’s gravitational focus to enormously amplify the spacecraft’s signals.
ljk, to answer your enquiry about Icarus, I quote from the Icarus Terms of Reference (#2):
“The spacecraft must use current or near future technology and be designed to be launched as soon as is credibly determined.”
I find this ToR very important, as it grounds the effort and keeps it real. This is why I insist on a Software Engineering approach. First we have to know what we need to do, and then select the appropriate tools to create the solution. General A.I. (that is, IF we had it) is a tool – we still need to figure out what the actual problem is.
The sole determinant for the need of any given tool will be a proper Software Requirements process. As for the tools needed, this falls in the next category (according to SWEBOK), Software Design specifically (directly quoted):
“The third subarea is Software Structure and Architecture, the topics of which are architectural structures and viewpoints, architectural styles, design patterns, and, finally, families of programs and frameworks.”
We hope to produce at least the first phase, Software Requirements.
I guess the whole point I am trying to make is “first things first”!
Dimos, if you read the original Project Daedalus book, you will know what the probe’s AI needs to do for a successful interstellar mission. The essential parameters have not changed in 35 years.
Other than finding out whether you can actually build the appropriate AI for the job in the next fifty years, what else is there to know in regards to the mission?
I have many thoughts on this fascinating topic.
First: how “general” is human intelligence, really? Humans are only human, and seem to have great difficulty being objective about themselves. Psychology seems to have an abysmal record at predicting our inherent traits until the evidence for them becomes overwhelming. Just think of the unheralded quirks of visual-field data processing – such as flashing objects being treated differently than stationary ones, and fully half of human object-identifying ability being devoted to faces. Such aspects seem so obvious in retrospect that it makes me wonder how many pitfalls in our intellect go unnoticed.
Secondly: how much information does it take to construct the human brain? We have no reason to believe that the minimum information content of software that could turn a modern computer into a sentience greater than human could not be very small, and I wonder whether it is much smaller than that of the human brain. We might have reason to believe that the proportion of the human genome devoted to our brains is close to the minimum information that can construct sentience in a MODULAR fashion (as evolution must), but that is all.
Estimating the information content needed to build the human brain is a hard task, so I propose finding the accumulated information in the only other component of multicellular life known to consistently grow in complexity over time, and using it as a proxy. That other facet is the immune system. In any case, this number of bits must be rather small compared with modern operating systems.
Thirdly: it is also worth noting that once we have found the right software requirements, determining the hardware and learning requirements of our AI would be comparatively trivial.
Lastly: we can’t be sure whether the software of cognizance can be expanded to give unlimited intelligence (rather than calculating power). Though we may have reason to believe that a team of ten people with an IQ of 100 could solve most difficult problems faster than one person with an IQ of 110, a trillion such people would never outperform one with an IQ of 150. In that regard it might be that no sentience can ever completely understand itself, and so it can only build something equivalent or greater by indirect means – akin to evolutionary processes themselves. It is quite probable that these don’t scale up well either, and so advances in AI might forever be glacially slow even if they prove limitless.
“Astronist
One point which I’ve not seen mentioned yet: everyone is assuming a somewhat stereotypical idea of AGI. Namely that we can manufacture a machine which is fundamentally simpler than a biological human, yet equals or exceeds our own intelligence, motivation and quality of life.”
The depiction of the HAL 9000 (Heuristically programmed ALgorithmic computer) in 2001 remains one of the film’s most eerie elements. For their description of artificial intelligence, Kubrick and Clarke only had the terminology of the mid-1960s. At that time, the prevailing concept was that an Artificial Intelligence (AI) would be a programmed computer. Thus the term computer, with all its implications of being a machine, occurs repeatedly. In the last 40 years, no true AI has emerged; today’s corresponding term would be ‘strong AI.’ Kubrick and Clarke’s use of mid-1960s terminology obscures the fact that the film and novel’s authors constructed an AI that is unmistakably strong, that is, capable of “general intelligent action.” How this would have been achieved, Kubrick and Clarke left as an extrapolation. Clarke provides a little in the novel:
“Probably no one would ever know this: it did not matter. In the 1980s Minsky and Good had shown how neural networks could be generated automatically – self-replicated – in accordance with an arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of the human brain. In any given case, the precise details would never be known, and even if they were, they would be millions of times too complex for human understanding.” (From: Arthur C. Clarke, 2001: A Space Odyssey, ROC trade paperback edition, 2005, pp. 92–93.)
I disagree. I said “to do the work”, which does not at all exclude that 1000 transistors might be needed to do the work of one neuron, or, more likely, one transistor doing the work of 1000 neurons.
Unlike what you may superficially think, transistors are analog and are potentially subject to just as many inputs as neurons, given enough interconnections. They become digital when they are combined in circuits with threshold behavior. Same with neuronal circuits, which show a great many threshold behaviors, and as such can with some justification be considered digital. Transistors are about 1,000,000 times faster than neurons, but neurons are much more densely interconnected than conventional electronic circuits.
Digital circuits revert back to analog on the next level when working with numbers. The mystical difference between analog and digital that is often conjured up as the reason computers cannot think is a completely superficial red herring. The idea promulgated by Penrose and others that, instead, quantum mechanics is the source of this mystical difference is on its face absurd. It is very difficult to imagine a worse example of a quantum system than a living cell.
In any case, all this is only important in the lowest levels of “system design”. Higher levels function on their own terms. Just like you will normally not need to ponder the properties of transistors when you write a piece of code, the chain of events that we call a train of thought is not in any way intrinsically dependent on the detailed function of neurons. To suggest so would carry a burden of proof that cannot currently be met by anything except hand-waving.
rob, regarding your question:
“How much information does it take to construct the human brain?”
The quick answer: no more than 3 billion base pairs of DNA are required to encode a self-programming, self-reproducing human brain. Evolution has pared down the encoding to fantastic efficiency, and yet even in the genome there is plenty of evidence that many of the base pairs might be leftovers from evolution – though to my mind they are probably essential if we are to continue to evolve. I can (and have) stored whole genomes on a flash drive!
Like I say, it is a problem of methods to evolve an intelligence, not one of engineering per se.
By the way, it takes a lot more information to store the design of the chip in your computer… so much for machine efficiency!
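The flash-drive arithmetic is easy to sketch (a back-of-the-envelope Python calculation; the 2-bits-per-base figure is the raw maximum, ignoring compressibility and the fact that only a fraction of the genome concerns the brain):

```python
# Back-of-the-envelope information budget for the human genome.
# Illustrative figures: 4 possible bases -> 2 bits per base pair.
base_pairs = 3_000_000_000       # ~3 billion base pairs in the human genome
bits = base_pairs * 2            # raw encoding, no compression
genome_bytes = bits / 8

print(f"whole genome ≈ {genome_bytes / 1e6:.0f} MB")
```

So the entire genome fits in roughly 750 MB raw – smaller than many operating system installs, which is the point about fitting whole genomes on a flash drive.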