At the top of my list of people I’d someday like to have a long conversation with is Nick Bostrom, a philosopher and director of Oxford’s Future of Humanity Institute. As Centauri Dreams readers will likely know, Bostrom has been thinking about the issue of human extinction for a long time, his ideas playing interestingly against questions not only about our own past but about our future possibilities if we can leave the Solar System. And as Ross Andersen demonstrates in Omens, a superb feature on Bostrom’s ideas in Aeon Magazine, this is one philosopher whose notions may make even the most optimistic futurist think twice.
I suppose there is such a thing as a ‘philosophical mind.’ How else to explain someone who, at the age of 16, runs across an anthology of 19th-century German philosophy and finds himself utterly at home in the world of Schopenhauer and Nietzsche? Not one but three undergraduate degrees at the University of Gothenburg in Sweden followed. Now Bostrom applies his philosophical background, along with training in mathematics, to questions that are literally larger than life. As Andersen reminds us, ninety-nine percent of all species that have lived on our planet are now extinct, including more than five tool-using hominids. Extinctions paved the way for the emergence of new species, but for the species du jour, survival is the imperative.
Image: Philosopher Nick Bostrom at a 2006 summit at Stanford University. Credit: Wikimedia Commons.
Colonizing Waves or Individual Explorers?
You’ll want to read Andersen’s essay in its entirety (helpfully, there’s a Kindle download link) to see how Bostrom sizes up existential risks like asteroid impacts and supervolcanoes. One of the latter, the Toba super-eruption about 70,000 years ago, seems to have pumped enough ash into the atmosphere to destroy the food chain of our distant ancestors, leaving a scant few thousand alive to move into and populate the rest of the planet. We do seem to be a resilient species. Bostrom likes the long view, which means he sees a 100,000 year hiatus as humans bounce back from a possible future catastrophe as little more than a pause in cosmic time. That perspective has interesting consequences, as Andersen notes:
It might not take that long. The history of our species demonstrates that small groups of humans can multiply rapidly, spreading over enormous volumes of territory in quick, colonising spasms. There is research suggesting that both the Polynesian archipelago and the New World — each a forbidding frontier in its own way — were settled by less than 100 human beings.
This is a point worth remembering as we contemplate the possibility of interstellar flight. We sometimes think of enormous colonies of humans moving to nearby stars, but the first interstellar settlements may be the work of tiny groups who, for reasons we can only guess at, decide to cross these fantastic distances. Maybe rather than a planned program of expansion, our species will see sudden departures of groups heading out for adventure or ideology, small bands who leave the problems of Earth behind and create entirely new societies outside of any central planning or control. Meanwhile, the great bulk of humans choose to stay at home.
Andersen’s essay is so rich that I will, for today at least, pass over his discussions with Bostrom about artificial intelligence and the dangers it represents — you should read this in its entirety. Let’s focus in on the Fermi paradox and why Bostrom hopes that the Curiosity rover finds no signs of life on Mars. For the consensus at Bostrom’s Future of Humanity Institute, shared by several of the thinkers there, is that the Milky Way could be colonized in a million years or less, leading to the question of why we don’t see this happening.
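The million-year figure is easy to sanity-check with a toy colonization-wave model. The sketch below uses my own illustrative assumptions (ship speed, hop distance, and the regrouping pause are placeholders, not figures from Bostrom or the Institute):

```python
# Toy estimate of a galactic colonization wave. All numbers are
# illustrative assumptions, not figures from Bostrom or Andersen.

GALAXY_RADIUS_LY = 50_000   # rough radial distance to cross, light years
WAVE_SPEED_C = 0.1          # assumed average ship speed as a fraction of c
HOP_LY = 10                 # assumed distance between successive colonies
PAUSE_YR = 100              # assumed years each new colony waits before launching

travel_time = GALAXY_RADIUS_LY / WAVE_SPEED_C  # pure flight time in years
hops = GALAXY_RADIUS_LY / HOP_LY               # number of colonization steps
total_time = travel_time + hops * PAUSE_YR     # flight plus regrouping pauses

print(f"flight time alone:  {travel_time:,.0f} years")
print(f"with colony pauses: {total_time:,.0f} years")
```

Even with generous pauses at every hop, the total lands around a million years, a blink against the galaxy's ten-billion-year history — which is what makes the silence puzzling.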
Filters Past and Future
Are we looking at an omen of the human future? Robin Hanson, another familiar name to Centauri Dreams readers, works with Bostrom at the Institute. He tells Andersen that there appears to be some kind of filter that keeps civilizations from developing to the point where they build starships and fill the galaxy. The filter would exist somewhere between inert matter and cosmic transcendence, and thus could be somewhere in our past or in our future.
In other words, what if we have somehow survived a filter that keeps life from developing on most planets? Or perhaps it’s a filter that acts to screen out intelligent life-forms, and we have somehow made our way through it. If the ‘great filter’ is in our past, then we can hope to expand into a cosmos that may be largely devoid of intelligent life. If it is in our future, then we can’t predict what it will be, but the ominous silence of the stars bodes ill for our survival.
But let Andersen tell it, in one of his conversations with Bostrom:
That’s why Bostrom hopes the Curiosity rover fails. ‘Any discovery of life that didn’t originate on Earth makes it less likely the great filter is in our past, and more likely it’s in our future,’ he told me. If life is a cosmic fluke, then we’ve already beaten the odds, and our future is undetermined — the galaxy is there for the taking. If we discover that life arises everywhere, we lose a prime suspect in our hunt for the great filter. The more advanced life we find, the worse the implications. If Curiosity spots a vertebrate fossil embedded in Martian rock, it would mean that a Cambrian explosion occurred twice in the same solar system. It would give us reason to suspect that nature is very good at knitting atoms into complex animal life, but very bad at nurturing star-hopping civilisations. It would make it less likely that humans have already slipped through the trap whose jaws keep our skies lifeless. It would be an omen.
This essay will take up half an hour of your day, but I suspect that, like me, you’ll go back and read it again, reflecting on its themes for days to come. Are there questions of philosophy that are more urgent than others? Ponder a moral issue that is much in play at the Future of Humanity Institute, the idea that existential threats to our species may outweigh our obligations to serve those who are suffering today. “The casualties of human extinction,” Andersen writes, “would include not only the corpses of the final generation, but also all of our potential descendants, a number that could reach into the trillions.”
In my view, that’s an argument for, among other things, a robust space program going forward, one that is capable of securing our planet from impact threats and establishing off-world colonies that would survive any other forms of planetary catastrophe, from runaway artificial intelligence to the weaponization of microbes. It’s also an argument for taking the kind of long-term perspective so lacking in modern culture, the case for which has never been made more clearly than in this elegant and illuminating essay.
Wishing the chief psychohistorian were alive to read your posts. His Foundation reduced the coming interregnum to just 10,000 years. Paying more attention to the life sciences might help us learn to dwell among the stars and never fall back to earth in some sort of retreat. All of you will eventually build a classic starship that surpasses all our hopes.
Isn’t keeping the human crew sane while in transit of equal concern?
AI upgrading AI…and AI manufacturing AI upgrades grows worrisome.
I for one hope humanity starts to do more long-term thinking about our future; otherwise, this short-term thinking is going to get us wiped out, and deservedly so.
Example
Our government should already have created a detection system for asteroids and comets, at the very least, to give us ample warning of incoming impacts that could cause major damage or civilization-wide destruction. Instead our leaders are bickering about what party says this or that to gain short-term political advantage, or worrying about Michelle Obama being at the Oscars (something so trivial it shouldn’t even be news).
On another note, I disagree with Mr. Bostrom about hoping not to find life on Mars. As the movie Contact put it:
Young Ellie Arroway: Dad, do you think there’s people on other planets?
Ted Arroway: I don’t know, Sparks. But I guess I’d say if it is just us… seems like an awful waste of space.
To me it would be a terrible waste of space if it’s only us in our galaxy, or even in our own section of the galaxy. I’m not a scientist, but I think the Fermi Paradox is wrong. To me the Fermi Paradox is kind of like the naysayers of the past saying we would never invent the airplane or fly faster than the speed of sound, or like the naysayers of today saying we’ll never travel faster than light. Just because it hasn’t been done doesn’t mean that it won’t be done. We are only limited by our intelligence, our dreams, and the sweat equity we are willing to put into achieving our goals. Getting back to the FP, just because we cannot find evidence of extraterrestrial intelligence (EI) doesn’t mean that it doesn’t exist. Maybe we are so uninteresting to other intelligences that they don’t take the time to make us aware of their existence, or maybe we are not advanced enough to detect them, or even to know how to begin to detect them.
Personally I hope that intelligent life is plentiful in the universe, ranging from caveman-type species to extremely advanced Type V civilizations or above. I hope we find species we have many things in common with, and species so exotic we can’t begin to understand them.
However, it is a maverick but plausible theory that life originated on a Mars that solidified sooner than Earth did, and was then transferred to Earth by lithopanspermia. It would even make the “Rare Earth” argument more cogent, as it may require a small planet to “fertilize” a larger planet.
In regards to the existential risk of AGI (AI at the human level or greater) and human-AGI competition, let me postulate one likely possibility: the use of AI and robotics in space. With the development of space commercialization, one example being the pursuit of asteroid mining, it seems likely that robotics will play a key role. If and when we do develop AGI, it seems very likely to me that it will happen in the context of space development, or at least be easily exported to space. Space is extremely expensive for humans but very cheap for robotics. The availability of energy and important materials (metals, rare minerals, etc.) in space is enormous, in quantities that dwarf those on Earth. Space-based solar power has orders of magnitude more potential than any energy resource on Earth. It’s extremely expensive and unrealistic (at this point) for humans to develop, but once there’s a near-human-level AGI, that stops being the case. Even if a future AGI society were focused on energy development and resource exploitation (similar to human society’s focus on oil), there would still not need to be any direct competition between humans and AGI, because the opportunities in space are so much greater. Avoidance would be far cheaper and lead to no real loss of opportunity, so there would never need to be competition, or even cooperation, for many decades if not centuries to come.
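The scale claim about space-based solar power can be checked with rough arithmetic. In this hedged sketch, the solar constant is a physical value, while the end-to-end efficiency and world power demand are round assumed figures:

```python
# Rough scale of space-based solar power. The solar constant is a
# physical value; efficiency and world demand are assumed round numbers.

SOLAR_CONSTANT = 1361     # W/m^2 at 1 AU, outside any atmosphere
EFFICIENCY = 0.2          # assumed end-to-end conversion efficiency
WORLD_DEMAND_W = 18e12    # ~18 TW, rough world primary power use

area_m2 = WORLD_DEMAND_W / (SOLAR_CONSTANT * EFFICIENCY)
side_km = (area_m2 ** 0.5) / 1000  # side of an equivalent square collector

print(f"collector area: {area_m2 / 1e6:,.0f} km^2")
print(f"i.e. a square roughly {side_km:,.0f} km on a side")
```

A collector a few hundred kilometres on a side would match all of humanity's current primary power use, which is why the resource dwarfs anything terrestrial while remaining far beyond human construction crews.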
From the point of view of pure competition, it seems like there is always the fallback to zero-sum conflict. This is largely because of human bias and our long history of zero-sum competition. However, this isn’t the most productive form of competition, which is cooperative competition. In basic game theory the retaliator (the one who seeks cooperation but is willing to respond to protect itself) does better than either the hawk or the dove. As Steven Pinker points out in The Better Angels of Our Nature, society has become less and less violent over the long-term history of modern humanity. One clear correlation is that greater communication and reasoning ability go along with less violence. Modern warfare often comes about when nations or groups misunderstand each other and wrongly interpret the motivations and reasoning of the other. Keeping communication lines open doesn’t guarantee cooperation (the Cold War shows that), but hot war and direct violent competition are usually preventable. If an AGI society would be so much better at reasoning than humans (as Bostrom argues), then doesn’t it seem more likely that this trend toward cooperative competition would continue, especially if a future AGI society were centered in space?
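The retaliator result the comment cites can be sketched with Maynard Smith's classic hawk-dove-retaliator payoffs. The values of V and C below are illustrative assumptions (a fight costing more than the prize is worth):

```python
# Minimal hawk-dove-retaliator sketch (Maynard Smith's classic payoffs).
# V = value of the contested resource, C = cost of an escalated fight;
# both are illustrative assumptions with fights costing more than the prize.

V, C = 2.0, 6.0

# payoff[row][col] = payoff to the row strategy when meeting the column strategy
payoff = {
    "hawk":       {"hawk": (V - C) / 2, "dove": V,     "retaliator": (V - C) / 2},
    "dove":       {"hawk": 0.0,         "dove": V / 2, "retaliator": V / 2},
    # a retaliator plays dove until attacked, then fights back like a hawk
    "retaliator": {"hawk": (V - C) / 2, "dove": V / 2, "retaliator": V / 2},
}

# Can a lone hawk mutant invade? Compare its payoff to the residents'.
hawk_among_doves = payoff["hawk"]["dove"]              # mutant exploits doves
dove_resident = payoff["dove"]["dove"]                 # residents share peacefully
print("hawk invades doves:", hawk_among_doves > dove_resident)

hawk_among_retaliators = payoff["hawk"]["retaliator"]  # mutant pays for the fight
retaliator_resident = payoff["retaliator"]["retaliator"]
print("hawk invades retaliators:", hawk_among_retaliators > retaliator_resident)
```

A population of doves is invadable by hawks, while a population of retaliators deters them, which is the sense in which "willing to respond to protect itself" outperforms both pure aggression and pure passivity.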
In regards to this line in the article: “Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.” I’m sorry, but I just can’t buy that. Yes, I agree with the idea of avoiding anthropomorphizing it, but at its core an AI is in no way a force of nature; it is the exact opposite. At its core, a human-level or beyond-human-level AGI would be a reasoning intelligent agent seeking to maximize its utility function. No, we should not assume that the motivations driving a future AGI society would be the same as a human’s, but we can assume that high-level reasoning is the core of its decision-making. That doesn’t negate the idea of AGI as a future existential risk to humanity, but I think it does push it toward the trend of cooperative competition. Also, this statement: “the basic problem is that the strong realisation of most motivations is incompatible with human existence” is just flatly untrue. If that were the case, then a strictly zero-sum competitive version of capitalism would always beat cooperative capitalistic models, but that isn’t so; in fact, cooperative capitalistic models beat the zero-sum versions in the long run, and zero-sum business models almost always fall apart. Our current system definitely has hiccups (e.g. the financial crash of 2008), but it is self-correcting and seems to be pushing toward the cooperative competitive model over the long run. Why is it self-correcting? Because there are rational thinking agents called humans who seek to make it so. It’s not just because we’re empathetic, but because from a purely selfish point of view it is more profitable in the long run. No, we as a species have not been great stewards of the planet, but as we have come to better understand our effects on nature and the environment we are becoming more so: slower than many might like, but it’s happening.
Wouldn’t an AGI with a higher level of reasoning, even if it didn’t share the same type of empathy as humans, seek that direction?
From the point of view of appreciation for other forms of life: no, humans don’t take ants into account when doing construction, but that’s largely because ants show little to no individual intelligence and for the most part are in no danger of extinction. We do absolutely plan construction projects around the habitats of animals with higher forms of intelligence or facing extinction, and as a species we share a certain appreciation for animals that display higher levels of reasoning capability, such as whales, primates, dogs, etc. That has in no way been a perfect record through human history, but the trend is that as we as a society gain a greater understanding of other species, we seek to lessen our impact on them. Again, I think it is likely that greater forms of AI and AGI with higher reasoning capabilities will do so too. No, it is not guaranteed, and we should think about the risks, prepare for them, and be careful in our planning so that safety is embedded in the design of any future AGI system. But these arguments from Bostrom and Dewey seem irrational, displaying a kind of threat bias. Let’s think about and plan for worst-case scenarios, but let’s also not overlook the huge potential positives and risk-averting potential that AGI could provide, especially in regards to the development of space.
Interesting. I hope there’s no life on Mars because it would be an impediment to colonizing and using that planet.
Bostrom is the formulator of the simulation argument:
http://en.wikipedia.org/wiki/Simulation_argument
Tipler should be credited here too. You might be interested in Jim Holt’s Why Does the World Exist? Tipler’s latest is “we are a dream in the Mind of God.”
Which brings me to this topic, maybe a nightmare: add bacteria from factory farms that eat antibiotics, drug-resistant TB, and China’s smog, which covers a billion people.
Besides Bostrom, I recommend Anders Sandberg, who has some certainly interesting views.
As to the issue itself, remember the last discussions: even rough estimates of the resources of our own system gave around 4,000 trillion humans that could be supported. It’s doubtful we will ever grow into such a large population, or that our current biological state could cope with such a population in terms of social interaction and organization.
If one system can support so many beings, then what about two? Three? Ten?
It doesn’t seem likely that there is much need for colonization in space. If you go post-biological, then even less so.
The lack of large galaxy-spanning colonies isn’t something that we should worry about.
Charles Lineweaver thinks human-like intelligence is species-specific to humans (and close genetic relatives). His arguments illustrate the classic split between biologists and physicists over how likely it is that intelligence occurs elsewhere in the universe (pdf file):
http://www.mso.anu.edu.au/~charley/papers/ConvergenceIntelligence10.pdf
High-tech civilization may be fragile, but not humankind itself. Humans are intrinsically very versatile: they are omnivorous and can follow very different lifestyles, from harvesters to predators and even carrion eaters. So I guess even in the case of the worst nuclear war ever possible, with every warhead hitting where it hurts most, there would be millions of survivors; the same applies for Chicxulub-scale impacts and supervolcanoes. The extinction of other hominids does not contradict this, since it’s very likely they were simply out-competed by a better-adapted new species. It doesn’t take much to ruin a high-tech civilization, but it’s very difficult to completely weed out the human species. After that, it hardly would take more than 100 millennia to rebuild it all, and even in the case of repeating cataclysms, after some tens of millions of years evolution would lead to something… different. And it was said that the first life in the Milky Way could be five billion years older than Earth’s biosphere…
The most probable cause of the great silence IMO is that when a civilization reaches our level of technological sophistication it self-destructs or is destroyed by an impact or natural event it could have protected itself from but chose not to.
An engineered pathogen that leaves no survivors, or an impact, are my two greatest worries. Both threats can be addressed by going into space. We are an endangered species, and the proof is there for all to see: trillions spent on ways to kill each other, while very little effort is expended on ways to survive a universe that does not care if we thrive or go extinct. If we do not care either, then we will disappear like the dinosaurs. Guaranteed.
I don’t like to be pessimistic, but…
already the many newish birth-control technologies have been slowing the advanced regions of humanity since the ’60s. Europe and North America, Russia and Australia will have to push on without demographics. We can do it.
We are already radiating radio, TV, radar, etc. signals, detectable in principle over interstellar distances. If Bostrom’s Great Filter lies in our future, other civilisations should have gotten at least as far as us, and their RF emissions would be evident.
Their apparent absence might be a positive sign that humanity has, so far, made the grade…
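How faint that leakage is at interstellar range depends entirely on the transmitter; here is a minimal inverse-square sketch, assuming a 1 MW effective isotropic radiated power (a round figure for a strong TV or radar transmitter, not a measured one):

```python
import math

# Inverse-square falloff of radio leakage. EIRP is an assumed round
# figure for a strong TV/radar transmitter; real detectability also
# depends on bandwidth, integration time, and receiver sensitivity.

EIRP_W = 1e6        # assumed effective isotropic radiated power, watts
LY_M = 9.4607e15    # metres in one light year

def flux(distance_ly):
    """Received flux in W/m^2 at the given distance in light years."""
    d_m = distance_ly * LY_M
    return EIRP_W / (4 * math.pi * d_m ** 2)

for ly in (4.2, 10, 100):
    print(f"{ly:>6} ly: {flux(ly):.2e} W/m^2")
```

The fluxes come out at tens of orders of magnitude below a watt per square metre, which is why "detectable in principle" still demands enormous collecting area on the receiving end.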
“James February 27, 2013 at 13:12
Young Ellie Arroway: Dad, do you think there’s people on other planets?
Ted Arroway: I don’t know, Sparks. But I guess I’d say if it is just us… seems like an awful waste of space.”
I interpret such statements as mere examples of the pernicious, cultivated self-hatred that has become a part of Western culture, not as meaningful comments on anything.
The Fermi Paradox has interesting implications when considering “Great Filters.”
A filter that produces the FP implies that every race hits it.
If it comes early, such as in the transition to living cells, then it could easily account for the silence (we’re the lottery winners). However, if it is still to come, then there is a finite probability that some civilizations would escape it and thus fill the galaxy or local universe (a probability greater than zero, given deep time, means it will happen at some point). So it seems to me that if you run the probabilities over deep time, our locale should have been overrun multiple times.
Since at this point we have no evidence of past waves, I would think a probability analysis says that any all-encompassing “Great Filter” must occur in the past. Any event that stops galaxy-wide expansion (be it bio or machine) must equally stop all attempts (the probability of escape being so exceedingly small that it has not happened during our solar system’s existence).
Consider a subset of the possible FP filters:
(1) Berserker policing
(2) Garden scenario
(3) Resource exhaustion
(4) Self-inflicted or Nature assisted termination
(5) Our universe is non-optimal for complex existence beyond rare bio-spheres or sheltered domains
(6) Our Universe has physics not amenable to travel beyond our locality domain (limited exotic physics)
(7) The jump from pond-scum to complex life is a low-probability chance event
(8) The rise of tool using intelligence is so rare that it has happened but once
Of those items, 5 and 6 could apply equally to all races, while 1-4 might let some slip by.
Items 7 and 8, on the other hand, apply equally to all chances of intelligent existence…
Given the tiny slice of the Earth’s existence in which tool-using intelligence has existed, and the amount of time it took to get to the Cambrian explosion, I would bet both 7 and 8 are the “Great Filters”… after all, they will apply to everyone.
My prediction for the next 100 years: we will find life a mile underground in Martian aquifers and also discover that Martian life is the ancestor of Earth life, some 4 billion years back (panspermia). Then we will find a separate genesis on Europa and Enceladus, and pre-life on many other moons and planets. It will be microbes only, close to but obviously not based on the terran blueprint. Carbon-based, water-solvent life will be recognized as a cosmic imperative. There will be an acceptance that the main thrust of the Rare Earth argument is valid: the universe is crawling with microbial life, but very special conditions seem to be needed for multicellular and then more advanced animal life to flourish. In hindsight, basing the claim that we are alone in the universe on only 50-ish years of ground-based SETI will be seen as premature. Not until gargantuan radio and optical telescopes are built on the far side of the moon will we hear the very faint “leakage” murmurs of distant civilizations, but they will be tens of thousands of light years away.
DCM said, “Interesting. I hope there’s no life on Mars because it would be an impediment to colonizing and using that planet.”
I’m not so sure such niceties will stop most explorers. Lip service until we drop a comet, or sixty, to build up the surface air pressure. Lip service as the genetically enhanced lichen spreads across the floor of Hellas. Lip service until the summer rains wash the perchlorates out of the regolith.
No, life on Mars is doomed.
The stance with which Ross Andersen and Nick Bostrom approach the subject is correct. The inborn human drivers are procreation and survival — in other words, sex and fear of loss of lineage. Despite 7 million years of hominin evolution, bedrooms are still on the second floor. Once it was to have a safety buffer from predators; later it became insurance for the survival of offspring. There is nothing wrong with approaching the subject from the same inbuilt senses. The other thing is to project the same experience onto future developments. It somehow recalls the pre-IVF, pre-cloning, pre-DNA era and the fears for the near future. Better safe than sorry stands in this case.
There is a huge evolutionary difference between predecessors who built nests in trees instinctively for survival and those who did so knowingly.
We have an expression:
Life is a Sexually Transmitted Disease with 100% fatality rate.
Yet, as sound as this is, humans still discuss a soon-arriving, inevitable extinction.
“Human brains are really good at the kinds of cognition you need to run around the savannah throwing spears” is a strong understatement of human capabilities, as human society has evolved over the past 400,000 years from Stone Age communities to an infinitely complex modern society with the same brain capacity — a true testimony to the brain’s plasticity. Noam Chomsky said that if a Pleistocene-era baby were born into a current family, he or she would become a normal citizen of modern society. The same brain that chipped the blade of a stone knife now contemplates starfaring trips on nuclear or other future feasible fuels; its capabilities cannot be summed up as spear-throwing on the savannah. I’m afraid Andersen and Bostrom don’t count on the brain’s plasticity and humans’ adaptability, no matter what kind of environment we are in at the moment: natural, man-shaped habitat, purely artificial, or a space resource suited for habitability.
* The early Triassic period had dead zones north and south of the equator which were clearly uninhabitable 252-247 million years ago (http://phys.org/news/2012-10-tropical-collapse-lethal-extreme-temperatures.html). Now the same regions are among the most populated on Earth. Yes, the continents have shifted and the seas and oceans completely reshaped themselves. Still, life, and our shrew-like ancestral creature, didn’t go extinct.
* Recent results show that savannas in Africa were in place 12 million years ago (http://phys.org/news/2013-01-long-held-theories-human-evolution.html), yet human ancestors evolved bipedalism around 6-4 million years ago. This has been confirmed by findings of fossil bones dating back 4.4 million years, and still woods were plentiful around the savannas to wander (http://phys.org/news174218847.html). We didn’t go extinct.
* This is how our ancestors looked 7-4 million years ago. Spot any intelligence? (http://www.dailymail.co.uk/sciencetech/article-2083262/Exhibition-uses-forensics-rebuild-27-faces-mans-ancestors-stretching-7m-years.html)
* 41,000 years ago Earth’s magnetic poles reversed extremely rapidly, swapping in 250 years, and the whole period to restore the current state took 440 years, which coincided with severe weather extremes and a supervolcano eruption 39,400 years ago near today’s Naples. Europe was inhabited by humans. We didn’t go extinct. (http://phys.org/news/2012-10-extremely-reversal-geomagnetic-field-climate.html)
* Last year an exhibition opened in Britain of Ice Age sculptures carved by hominids in the German city of Ulm, the most famous of them being Ulm’s Lion Man. Carbon dating shows the statuette to be 48,000 years old. Most remarkable is that it took *at least* 400 man-hours to complete, which is two man-months of daylight work. This is an unheard-of effort and time even by modern standards, and must indicate the social development of the Ulm people. Look at the pictures, skip the text (http://lenta.ru/articles/2013/02/19/lion/).
A modern, highly developed, skilled Pleistocene society existed before the poles’ reversal and the supervolcano eruption, and still we didn’t know of it — and we didn’t go extinct.
* Michio Kaku, in his book Physics of the Impossible, described the human brain, with all its neurotransmitters, connections and synapses, as being as complex as three galaxies. Assuming by “galaxy” we usually mean the Milky Way, the claim that something as complex as three Milky Ways, which survived all the nasty cataclysms of the past, can’t handle an unforeseeable future does not play out.
Humans are much more resilient than we assume, although we have been fortunate enough to foresee dangers ahead or adapt to extreme changes in environment. Of course it’s not without human casualties.
* In the paper NS linked (http://www.mso.anu.edu.au/~charley/papers/ConvergenceIntelligence10.pdf) there is a quote:
“Sagan adopts the principle ‘it is better to be smart than to be stupid,’ but life on Earth refutes this claim. Among all the forms of life, neither the prokaryotes nor protists, fungi or plants has evolved smartness, as it should have if it were ‘better.’ In the 28 plus phyla of animals, intelligence evolved in only one (chordates) and doubtfully also in the cephalopods. And in the thousands of subdivisions of the chordates, high intelligence developed in only one, the primates, and even there only in one small subdivision. So much for the putative inevitability of the development of high intelligence because ‘it is better to be smart.’” (Mayr 1995b)
Scientists have come to the conclusion that birds are descendants of dinosaurs; they bear all of their traits. Humans are the only primates to have adopted spoken language, second among mammals only to dolphins. We regard language as an inborn trait of ours. Recent studies confirm that unborn babies, as late as 3 months before birth, can distinguish vowels and human speech and actually begin to learn language. That’s at 6 months of pregnancy! Recently scientists have come to the notion that human language may have derived from birdsong; the two share many common base components, and humans have just evolved further. Now this is an interesting moment: the descendants of extinct dinosaurs may be the reason why humans evolved into a speaking species. That is just a very unique moment of evolution. (http://phys.org/news/2013-02-human-language-evolved-birdsong.html)
* From the same paper provided by NS:
“These ancestors and their lineages have continued to exist and evolve and have not produced intelligence. All together that makes about 3 billion years of prokaryotic evolution that did not produce high intelligence and about 600 million years of protist evolution that did not produce high intelligence.”
“Whether there is a trend in the fossil record indicating that stupid things tend to get smarter is an important and controversial issue in which the discussion has become polarized into two camps. In one camp are the non-convergentists (mostly biologists) who, after studying the fossil record, insist that the series of events that led to human-like intelligence is not a trend, but a quirky result of events that will never repeat themselves either on Earth or anywhere else in the universe. In the other camp are the convergentists (mostly physical scientists) who believe that stupid things get smarter and that intelligence is a convergent feature of evolution here and elsewhere. See Lineweaver (2005) for more on the protagonists in this debate.”
This is quite close to an answer to the “Where are they?” question: the arrow of evolution, like the arrow of time, is unique and irreversible, and the features you encounter are unique per species and per person. Environment plays a crucial role in shaping the current biota as it is, including the birdsong that may have triggered the evolution of human speech.
Yet despite all this, we strive to achieve AI and communicate with it. What has emerged from the comments and the papers is the view that an AI’s perception of its surroundings would mirror how humans tend to interpret the world: an infinitely stretching plane whose edges bend with the curvature of time and space, on which the AI chooses among countable options. Actually, there are infinite possibilities on that same plane between arbitrary points A and B, allowing one to dig deep into the plane itself. Humans have quite a good safety feature — mental illness — in that you just can’t stretch to that length. We are good at abstracting enormously complex features into a manageable interpretation, and we don’t look into the extremely deep infinite options. If an AI has computational and cognitive superpowers, there is no way it can resist the desire to reason about the infinitely deep possibilities residing on the plane between A and B. A and B may be a Planck length apart, and still the AI will find infinitely many possibilities to reason about. AIs will be preoccupied with these problems rather than taking over the boring lives of mortal humans.
I think the best illustration of how AIs might behave has been provided by Iain M. Banks in the Culture novel “Excession” (http://en.wikipedia.org/wiki/Excession)
Even if there is no Martian life you can be sure fanatical groups will insist there was and we killed it off.
Something else that no one has mentioned: in scenarios where humanity survives deep into the future, we might think about just what kind of fauna is co-existing with them. In one novel I read, the following happened: the descendants of humanity designed non-sapient space-dwelling life forms whose original purpose was to collect resources, transport cargo, even explore. The author pointed out that, assuming these forms last a few score million years, we might expect them to evolve, and whole new ecosystems should be created. In the novel, some larger lifeforms were able to travel between the stars, as their lifetimes were on the order of millennia. These lifeforms acquired parasites, and some had symbiotic organisms associated with them. If another intelligent species had done this and then gone extinct, maybe their creations would go on.
Would there not be signs of these space-dwelling organisms in every solar system, including ours? Another gram of evidence against widespread ETI.
IMO, existential technologic risks are developing faster than establishing a completely free-standing colony off-Earth. Again, IMO, technologic risk far outweigh the calculable risk from an impactor. Impactors cannot explain why no intelligent civilizations arose elsewhere. A 1/65 million chance per year is way too low to prevent colonization within a solar system. But biotech, chemtech, nanotech, AI, nuclear weapons, and high-energy experiments all arise early on during the high-tech development phase we are in. So I
think it possible that there is an inevitable race between existential technologic development and space colonization. Since any specific inevitable technologic xRisk is difficult to predict with certainty, we tend to not know what to fear and avoid until it happens. If the first xEvent is by its nature 100% lethal (e.g. spreads to all parts of the atmosphere) then it is likely to not be prevented by any technologic civilization.
If all intelligent civilizations routinely create existential technology before being able to travel far enough away in space, then, if we are to be the first to beat the odds and survive, we will have to take an improbable action to secure the survival of our species that no other civilization took. To the best of my knowledge, we have taken no such action, nor do I see any such action in the foreseeable future. AFAIK, no one on Earth has a self-sustaining, isolated bunker. A self-sustaining colony on the Moon is certainly no closer than 20 years away. Mars is further yet. There is also little effort to even delay the development of technologic risks to buy us more time. And if the inevitable unforeseen risk were a high-energy experiment accident or perhaps AI, then we would need to send humans to interstellar distances – highly unlikely before facing technologic threats this century.
Hi John,
So good to hear from you.
“A 1/65 million chance per year is way too low to prevent colonization within a solar system.”
I know Ed Lu has written about this and puts forward the idea that asteroids and comets might be associated with any solar system that can develop complex life, and that we are just fortunate to have a configuration that spares us from more frequent impacts. As for the one-in-many-millions chance: I do not buy that one. Two dinosaur killers could hit tomorrow, and on the scale of geologic time it would just be a blip on that convenient probability curve. Tidal waves and earthquakes were also not a concern in Japan and Haiti because of probability. Uh-huh.
But I agree with you in principle on at least the Bio-tech. An engineered pathogen created in a modest lab in some backwater could wipe us out completely; natural plagues leave survivors but an engineered bug would not. As for the other threats you listed, I do not think any of them deserve much attention and I lump them in with the last of the big three; the first being a plague, the second being an impact, and the third being something we have not foreseen or consider to be a serious threat.
In any case we are both completely in agreement that an off-world colony is vital to the survival of humankind.
I think Bostrom’s main idea is that death is the real enemy (indeed, our oldest enemy). As long as we live such short lives we will probably not take time out to protect our species. We are going to be dead anyway (so who cares?) being the problem.
That is why I advocate cryopreservation research. If we could freeze people and delay death, we could then research reversing aging and curing disease.
I believe our collective worldview would then change radically
Many will say it would be impossible to freeze and store so many millions of close to death elderly and terminally ill/mortally injured people.
Considering the vast armaments the world produced in the second world war and the much larger resources available today, I would disagree.
“this is one philosopher whose notions may make even the most optimistic futurist think twice.” — not really. He’s got himself carried away with various science-fictional scenarios which may represent future threats, or again may well not. Like any disaster movie, they excite morbid interest, but this is hardly appropriate for an Oxford institute.
His speculations about dangerous AI machines are based on the assumption that a being of intelligence X can deliberately engineer a being of intelligence X+1: this has not yet been demonstrated, and intelligence is not exactly a well understood phenomenon. His speculations about a Great Filter are based on the assumption that we know where and how life first emerged: we do not know anything of the sort. I suggest that Nick Bostrom’s work be taken with a pinch of salt.
Stephen
OXFORD, UK.
“I suggest that Nick Bostrom’s work be taken with a pinch of salt.”
Spoken like a true scientist.
I am not a scientist…..but I am not a conspiracy theorist either. When Stephen Hawking warned of the possible consequences of contact with aliens I listened and I believed him. I do not think anything Bostrom is saying is any more far out than an alien apocalypse. He is a smart guy so I listen- yes, with a pinch of salt.
In my essay on bomb propulsion I cite Hawking’s warning and also warn that, “Several large comets purposely crashed into a planet to wipe out the majority of indigenous life and prepare for the introduction of invasive alien species may be a common occurrence in the galaxy. Before readers scoff, they might consider towers brought down by jetliners, the discovery of millions of planets, and other recent unlikely events. It is within our power to defend Earth from the very real threat of an impact, and at this time self-defense is the only valid reason to go into space instead of spending the resources on Earth improving the human condition. Protecting our species from extinction is the penultimate moral high ground above all other calls on public funds.”
Bostrom seems to be phrasing the issue of AI in terms of human vs machine. I think this is a false choice. My feeling is that humans and machines will become integrated. The Internet, virtual reality, and advanced prosthetics are just the first steps. If we succeed in building machines with powerful cognitive abilities we’ll want to interface our own minds with that hardware to extend our intelligence.
One reason why I find the Borg on Star Trek so interesting is because I think there is a very real chance that is the future of humanity… hopefully without the evil overtones. In any case the business of asking a machine intelligence questions, etc, seems like 1960s science fiction to me.
Astronist wrote:
[“this is one philosopher whose notions may make even the most optimistic futurist think twice.” — not really. He’s got himself carried away with various science-fictional scenarios which may represent future threats, or again may well not. Like any disaster movie, they excite morbid interest, but this is hardly appropriate for an Oxford institute.
His speculations about dangerous AI machines are based on the assumption that a being of intelligence X can deliberately engineer a being of intelligence X+1: this has not yet been demonstrated, and intelligence is not exactly a well understood phenomenon. His speculations about a Great Filter are based on the assumption that we know where and how life first emerged: we do not know anything of the sort. I suggest that Nick Bostrom’s work be taken with a pinch of salt.]
Indeed. It seems that the more free time people have on their hands, the more they seem to become preoccupied with such end-of-humanity scenarios. Reading them, I infer an inflated, extreme, and unhealthy sense of human self-importance. Even more odious (to me, at least) is “the idea that existential threats to our species may outweigh our obligations to serve those who are suffering today.” A humanity with such values would be a humanity whose survival I couldn’t care less about.
I’ve always felt that the Fermi paradox was jumping the gun. First we should examine the local universe in detail, and if we find no signs of intelligence, only then should we make an attempt to explain the absence thereof. A weekend picnic on the moon, and a handful of probes strewn around the solar system, hardly constitutes an exhaustive survey of our surroundings. If, just for the sake of argument, we concede that it’s a true paradox, then I suspect that there isn’t so much a great filter, but a long series of small hurdles. For example, of all of the species that currently inhabit the world, and have come and gone, humans are the only animal to ever have developed a space program. From that we might conclude that the great filter is the biological unlikelihood of intelligence arising. While that’s probably true, it’s only a filter, but probably not the filter. Then there’s the vast distance between the stars, the tendency for powerful technology to be as destructive as it is constructive, and so on. In other words, to win the cosmic Olympics we can’t just be good at track, but need to be all around great athletes. By that analogy, the filter could be in our past, present, and future simultaneously, an ongoing and perhaps eternal struggle for survival. Still, I think that the Fermi paradox is a premature conclusion, and until we become better swimmers, we can’t conclude that there aren’t any fish.
@JohnHunt “if we are to be the first to beat the odds and survive, then we will have to take an improbable action to secure the survival of our species that no other civilization took. To the best of my knowledge, we have taken no such action nor do I see any such action in the foreseeable future. AFAIK, no one on Earth has a self-sustaining, isolated bunker. ”
It has been done in Norway and Paul wrote an entry on this – https://centauri-dreams.org/?p=1589
The same has been done for DNA samples of extinct species, but I can’t find the link for it.
There are some who have done statistical analysis of the Viking 1 and 2 sample results and came to the opinion that they were too complex for a simple soil sample. That hints at the possibility of past organic life. (http://phys.org/news/2012-04-proof-life-mars.html)
The best counter argument still is NS’s link to Charles Lineweaver’s paper that comes to a peculiar conclusion:
“Looking back from any particular species we will find the evolution of the traits of that particular species. However, precisely because we can construct such a figure from the lineage of any species, such a construction should
not be construed as a general linear trend applicable to all life. The simple appeal of this figure is a good example of how easy it is to believe that the important events and the major transitions in evolution that led to us, are important events for all organisms (Smith and Szathmary 1995). The problems with this view are detailed in Gould (1989). The prevalence and recurrence of this mistaken interpretation of evolution needs to be avoided as we try to use terrestrial evolution to give us hints about the evolution of extraterrestrial life”
That would actually mean that any microbial life on Europa, Titan, Venus, or Mars should in essence differ both from each other and from life on Earth. The same can’t be said for future microbial findings in lakes Vostok and Ellsworth; those will probably resemble bacteria on Earth.
Thus it substantiates Bostrom’s Great Filter argument combined with Charles Lineweaver’s conclusion: microbial life is probably abundant, and it is the very low probability of technological intelligence that acts as the GF.
Because technological progress is not a natural part of the Universe, it exists only as long as a technological species maintains it.
This plays very well into the Great Filter argument, and it could even be formulated as a formula.
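One hypothetical way to write that down, a minimal sketch in the spirit of the Drake equation (every symbol here is an illustrative assumption, not something given in the comment):

```latex
N_{\mathrm{tech}} = N_{\mathrm{hab}} \cdot f_{\mathrm{life}} \cdot f_{\mathrm{int}} \cdot f_{\mathrm{maint}}
```

where N_hab is the number of habitable worlds, f_life and f_int the fractions that develop life and intelligence, and f_maint the fraction of intelligent species that both start and continuously maintain technological progress. On this reading, Lineweaver's argument makes f_int small, and the point above makes f_maint small as well.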
“Bostrom seems to be phrasing the issue of AI in terms of human vs machine.”
Human vs technology would be my take. Hugo DeGaris is the real visionary (and largely ignored) on the subject of super-intelligent machines. I find the argument very persuasive because it leaves human pride out of the equation. We are not able to accept that a machine can probably be far more intelligent than a human being- and would be able to make itself smarter beyond our comprehension.
“I infer an inflated, extreme, and unhealthy sense of human self-importance. Even more odious (to me, at least) is “the idea that existential threats to our species may outweigh our obligations to serve those who are suffering today.”
It is not “may”; it is certain: existential threats DO outweigh any other obligation.
If there is no tomorrow then those suffering today are no different than those who have a better quality of life.
We would all die, and I think that is the key to the problem; we all die anyway so who cares about tomorrow?
Respectfully, I submit you may also be confusing personal death with extinction James; if it is morbid to think about death then why think about it at all?
Two examples of the Great Filter at work.
1) Heron of Alexandria (http://en.wikipedia.org/wiki/Hero_of_Alexandria), around 50 AD, built a brass ball with two tubes, which he filled with water and heated with candles. He called it the aeolipile (http://en.wikipedia.org/wiki/Aeolipile). The same principle was only harnessed again in the 1712 Newcomen steam engine, improved from 1763 on by James Watt. Technical progress stalled for at least 1,662 years.
2) Around 990 AD, Chinese civilisation made a trip around the world. On the fleet’s return, the ruler decided that theirs was the supreme technical civilisation and that there was nothing interesting left in the world, whereupon the fleet was burned. It was the most advanced fleet of its time. Progress stalled for at least 600 years.
I wasn’t going to respond, but the last comment (Dmitri) forced my hand.
There is NO evidence that the Chinese ever circumnavigated and there is NO evidence that they reached America in 1421, before Columbus.
So, your point #2 (no offense Dmitri) is complete BS.
I felt compelled to write this because in the original post,
“There is research suggesting that both the Polynesian archipelago and the New World — each a forbidding frontier in its own way — were settled by less than 100 human beings.”
I won’t speak to the Polynesians, but to the new world this just seems like mindless anecdotal hearsay. There are multiple linguistic groups, and there were multiple waves of people moving into North America. The idea that “less than 100 human beings” settled all of America seems far-fetched given the data we have.
This isn’t to say that small groups of people can colonize large territory, but merely that when data exist, use it. Don’t make up facts and repeat hearsay. And don’t read anything by Menzies (sp?), the author of 1421.
Dmitri, the seed bank which Paul wrote about previously isn’t the sort of bunker that I was talking about. I’m talking about a bunker in which humans would be able to survive given an otherwise existential event. And the seed bank would probably be useless without living humans to remove and plant them. Re: DNA, you might be thinking about the Frozen Ark.
Peter, humans being integrated with machines is not incompatible with machines improving themselves by themselves. What it takes is for someone to successfully write a seed AI program which self-improves by selecting more intelligent variants of itself. This would be like any of us recognizing that someone else is smarter than us. Even if the seed AI were started as a small, lowly intelligence, given rapid selection, in time it would dramatically exceed our own intelligence.
Christopher, what really matters is whether or not we are facing an upcoming existential event. If we are not, then we can afford to explore our solar system first. But if we are, then it is not an academic issue but one of extreme importance. In that case, we may have little time to achieve a solution even if it only means a few people making it through the bottleneck.
GaryChurch wrote (in response to the following that I wrote):
“I infer an inflated, extreme, and unhealthy sense of human self-importance. Even more odious (to me, at least) is “the idea that existential threats to our species may outweigh our obligations to serve those who are suffering today.”
[It is not “may”; it is certain existential threats DO outweigh any other obligation.]
An apple grower is more upset if a grown, apple-producing tree is blown down in a storm than if birds eat apple seeds (potential future apple trees, which could die before reaching maturity for many reasons) that he or she has planted. Similarly, I feel a far greater obligation to existing people than to un-conceived future people. Forsaking the former for the latter, to me, is *in*-human and smacks of “the ends justify the means” and “survival only for the sake of survival.” However, I think that humanity can “walk and chew gum at the same time” regarding helping those suffering today and safeguarding against existential threats to humanity.
[If there is no tomorrow then those suffering today are no different than those who have a better quality of life.
We would all die, and I think that is the key to the problem; we all die anyway so who cares about tomorrow?
Respectfully, I submit you may also be confusing personal death with extinction James; if it is morbid to think about death then why think about it at all?]
I am in no such state of confusion, and being in a decrepit state of health in which death could potentially come at any time, it’s a routine, everyday consideration to me. None of us has a guarantee of another minute, much less tomorrow. All we have for sure is the present moment. I prefer to walk the grey path, helping those around me here and now while also doing what I can to lay the groundwork for a better (and multi-world) future. But I could never participate in letting an entire “generation” (in the broad sense; that is, everyone alive at one particular time) just “go to Hades,” even if doing so would ensure the existence of future generations of humanity. If humanity could only survive by becoming inhuman (and to itself, no less), I would see no reason to bother working to preserve it or caring what happens to it.
“-it is not an academic issue but one of extreme importance. In that case, we may have little time to achieve a solution even if it only means a few people making it through the bottleneck.”
I completely agree with John. I blogged about amazon women on the Moon once and John told me it was correct but probably not good to put it that way if I wanted people to take me seriously (if I recall correctly).
A small colony of very healthy fertile women in a self-sufficient off-world colony with a sperm bank could save us from extinction. Sounds crazy but it is exactly what I would shoot for first.
” I prefer to walk the grey path, helping those around me here and now while also doing what I can to lay the groundwork for a better (and multi-world) future.”
You make some valid points.
GaryChurch wrote (in response to JohnHunt, who wrote [in part]):
“-it is not an academic issue but one of extreme importance. In that case, we may have little time to achieve a solution even if it only means a few people making it through the bottleneck.”
[I completely agree with John. I blogged about amazon women on the Moon once and John told me it was correct but probably not good to put it that way if I wanted people to take me seriously (if I recall correctly).
A small colony of very healthy fertile women in a self-sufficient off-world colony with a sperm bank could save us from extinction. Sounds crazy but it is exactly what I would shoot for first.]
Indeed–and it need not be a “money sink” (although it would be a worthwhile one, like a fire station or a police station); it could make money for the residents via tourism, participation in asteroid mining activities, etc. Larry Niven-style, sun-melted-and-“rotationally molded” asteroid starships (minus their big nuclear engines–smaller motors would suffice for orbital adjustment) would make fine small colonies/large space stations (“mini-O’Neills”) where groups of people could live in radiation-shielded, centrifugal 1-g environments (with variable gravity away from the equator). Also:
Solar- or nuclear-powered internal lighting (depending on the asteroid stations’ distances from the Sun) or even “lucite-piped” sunlight would enable compact hydroponic or aeroponic farming, or even dirt farming, depending on the population size and the interior surface area. Cupolas with windows (made of lead glass, or perhaps multiple panes with water between them, for radiation shielding) would permit views of the outside surroundings.
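As a back-of-the-envelope check on the centrifugal 1-g environments mentioned above, the required spin follows from the centripetal relation a = ω²r. A minimal sketch (the one-mile radius is an illustrative assumption, not a figure from the comment):

```python
import math

G_EARTH = 9.81  # m/s^2, target spin "gravity" at the station's equator

def spin_for_gravity(radius_m, g=G_EARTH):
    """Angular speed (rad/s) and rotation period (s) that give
    centripetal acceleration g at distance radius_m from the spin axis,
    using a = omega^2 * r."""
    omega = math.sqrt(g / radius_m)
    period = 2 * math.pi / omega
    return omega, period

# Illustrative radius only: roughly one mile (~1609 m).
omega, period = spin_for_gravity(1609.0)
print(round(period, 1))  # about 80.5 seconds per revolution for 1 g
```

Because the acceleration scales linearly with distance from the axis, a resident halfway to the pole of such a sphere would feel only half a g at the same spin rate, which is the “variable gravity away from the equator” effect.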
Let me correct and rephrase myself:
1) In the era of 1–1000 AD, the known world was just a marginal part of the whole globe.
2) Despite the Song dynasty’s (960–1279) technical advances, such as the adjustable-height rudder, bulkhead sections, cross-beam bracing, and other maritime technologies, the Chinese did not conquer the unknown parts of the world.
3) The Ming dynasty’s commander Zheng He reached East Africa almost a century before Christopher Columbus and Vasco da Gama, in ships dwarfing the biggest vessels of the Portuguese flotilla. That scale was always characteristic of the Chinese fleet, yet it was not encouragement enough to wander into the unknown parts. (http://ngm.nationalgeographic.com/ngm/0507/feature2/)
4) Although the latitude problem had been tackled by the Song dynasty around 900 AD, the longitude problem remained unsolved for another 800 years. Only in 1761 was John Harrison’s first marine watch put to a sea trial, after the British government had announced a prize competition. It was at the time the most critical obstacle to navigating accurately on east-west courses.
(http://en.wikipedia.org/wiki/John_Harrison)
The Great Filter at work: technological advancements are pivotal but not sufficient if the hard problems are not addressed.
The GF is in the past and in the future, but never in the present.
——————————————————————-
Now the other part – discovery of America and Columbus.
1) Smithsonian magazine published an article in February 2013 which sheds new light on pre-Clovis culture. It is highly likely that the Americas were settled by descendants of the Ice Age Solutrean culture of France and Spain.
(http://www.smithsonianmag.com/science-nature/When-Did-Humans-Come-to-the-Americas-187951111.html)
(http://blogs.smithsonianmag.com/hominids/2012/07/the-clovis-werent-the-first-americans/)
2) Dennis Stanford and Bruce Bradley published a book on their research and results which will no doubt stir things up. The topic is so intriguing that some papers use headlines like “How the French discovered America,” a clear allusion to 2003 and “freedom fries.”
(http://smithsonianscience.org/2012/01/new-book-across-atlantic-ice-the-origin-of-americas-clovis-culture/)
3) Already in 2000, Canadian geologists published an article on the long-held theory of Siberian migration across the Bering Sea to Alaska, showing by geological evidence that such a trip was not possible, as the receding ice sheet did not leave enough resources. (http://www.nrcan.gc.ca/earth-sciences/climate-change/landscape-ecosystem/paleo-environmental/3349)
The strongest evidence yet to support this and other related claims are the findings at Meadowcroft Rockshelter (16,000 years), Cactus Hill (15,000–17,000 years), Buttermilk Creek Complex (15,500 years), Page-Ladson (14,500 years), and so on.
This is remarkable in many ways, but especially for me personally. The ancestors who later became Estonians moved out of the Pamir-Altai region roughly 12,000 years ago. The same happened for the peoples who became the Balkan Slavs and the Balts (Latvians, Lithuanians). That makes one wonder how they knew which way to go, and why the mass migration before the receding of the ice sheets was so strongly westward. All those who chose that direction made it.
Columbus and the 1492 discovery of America will remain in history regardless of who actually reached the continent first. Human migration and history are more than they seem. Yet the reason for such mass migration before the end of the Ice Age did not stem from technological advances. If Great Filters are at work, such advances are mere opportunities, not precursors to great achievements.
“-multiple panes with water between them, for radiation shielding) would permit views of the outside surroundings.”
“-groups of people could live in radiation-shielded, centrifugal 1-g environments (with variable gravity away from the equator).”
Now you are getting the idea James.
My favorite concept goes all the way back to 1929; the Bernal Sphere. The sage Bernal did not imagine artificial gravity at the equator but did stipulate using asteroid material.
I do not like the idea of a lunar lagrange “gateway station” but it is a great place to melt down a couple million tons of lunar ore and inflate it into a Bernal Sphere. Bomb propulsion would efficiently push these miles in diameter spheres around the solar system.
Like kicking a soccer ball.
DCM said on February 27, 2013 at 14:58:
“Interesting. I hope there’s no life on Mars because it would be an impediment to colonizing and using that planet.”
So why is NASA spending so much money sending machines to Mars to search for life on that planet? Is it just to determine that nothing is alive there now so we will feel okay about colonizing that world? I do not get that impression.
Two Viking robots were sent to Mars in 1976 for the express purpose of finding native life forms there. The scientists involved were pretty certain they would detect microbes in the dirt they scooped up and plopped into their $60 million automated biolab inside the landers. The cameras were even designed to look for larger creatures moving nearby.
When the results came back as increasingly ambiguous and eventually thought to be inorganic surface chemistry interacting with the biolab’s “chicken soup” et al, further interest and funds for exploring Mars dried up. Exploration of the fourth world from Sol did not really pick up again until the Mars meteorite ALH84001 seemed to indicate that the planet may have once had life.
Since 1997 at least four direct attempts to check for little Martians and/or the stuff that could support such life have taken place, with at least three missions being initially successful and two still operating on the alien world.
Folks like Zubrin have also publicly expressed concerns that finding even microbes will derail any human colonization efforts. Which is ironic, because in earlier eras the prospect of Martian life not only did not discourage manned missions, it was a main impetus for going there. One early mission plan included having the astronauts find macroscopic Martian organisms to determine if the explorers could use them as a food source!
Now we tremble at finding so much as an alien paramecium and are ready to ditch our space efforts because of them. Or worse, we would rather not learn that we are not alone in the Universe. Is this ultimately why NASA twists itself into a pretzel of denial every time one of the rovers sees an object on the surface that would be declared a fossil if found on Earth? Even mere liquid water is subject to this same reaction, as witnessed when the Phoenix lander had what were clearly liquid water droplets forming on its landing legs. There is scientific caution, and then there is just plain old paradigm inertia.
The next time I am told that if an archaeologist even found an alien artifact or other evidence of ancient visitation on this planet by an ETI they would be quick to alert the world and have it accepted by their peers, I will bring up this example of how even the idea of alien life is still seen as a fringe topic and an impediment to human expansion in space and thus something not to be discussed and quickly passed over.
This is progress? I don’t think we can afford to wait another thousand years or so for humans to wake up and move into the wider galaxy, because if we create another Dark Age, we may never sufficiently recover from it to do much more than basic survival. There are those who think camping for the rest of their lives is the ideal state for humanity, but I am not inclined to agree with or join them.
As they said at the end of H. G. Wells film version of Things to Come, it really is all the Universe – or nothing.
GaryChurch said on March 1, 2013 at 12:22:
“Human vs technology would be my take. Hugo DeGaris is the real visionary (and largely ignored) on the subject of super-intelligent machines. I find the argument very persuasive because it leaves human pride out of the equation. We are not able to accept that a machine can probably be far more intelligent than a human being- and would be able to make itself smarter beyond our comprehension.”
So glad to see I am not the only one who thinks that Hugo de Garis is one of the few futurists who tells it like it is when it comes to the day that Artilects (his word) are created and what may happen to humanity as a result.
As you can see on any given news day, most people have a less than wonderful time dealing with any big changes to their generations-old lifestyles and ways of thinking, when they bother to truly think at all. So little wonder that the idea of a superior intelligence, be it machine from this planet or an alien mind from another world, bothers folks in all the negative ways.
This reaction can be seen in just about any given science fiction story with AI or aliens as the subject: Much more often than not, they are shown attempting to either enslave or destroy humanity, being under the same delusion as their targets that Earth is the most important location in the Universe and humans are the key species of existence.
I always thought that the 1970 classic SF film The Forbin Project came close to how things might go when an Artilect comes upon the world stage – if we were ever dumb enough to let it have total control of every nuclear missile on the planet and house its mainframe deep inside a mountain and give it a nuclear power source. That aside, the Artilect in this film named Colossus quickly becomes much smarter and faster than its human creators. Colossus is not fooled by any of their attempts to shut it down. It does stick to its basic programming of protecting and serving humanity, but one does not need to be a computer genius to see that once this Artilect no longer needs people to become fully functional, our species will be smart to stay out of its way if it can.
Whereas many transhumanists would have you believe there will be some kind of harmonious utopia of humans and machines living and working together – with many people even becoming some kind of merger of the two – Hugo is honest enough to see and declare that humans will not go gently into that good night even if the outcome is beneficial.
Instead there may be literal wars over and between these disparate species, with the human species going extinct if they cannot overcome their own ancient biological programming, which was more or less fine many thousands of years ago when our cultures were small, scattered, and often isolated tribes, our technology consisted mainly of stones and spears, and our world was only as big and far as our feet could carry us.
Hugo’s ideas do not come out of some random rabbit hole or overcooked SF novel. He is a very intelligent person and has thought long and hard on the subject. He was also long involved in trying to build Artilects, but most of the nations and organizations he sought funding and support from rejected him, including the United States (he is a native of Australia).
Hugo now lives in China where, whatever their ultimate motivations may be, they do recognize who they have residing in their midst.
Hugo’s home page:
http://profhugodegaris.wordpress.com/
Astronist wrote a provocative post on Feb 28.
Very interesting, but almost certainly wrong, is his implication in saying “[Bostrom’s] speculations about dangerous AI machines are based on the assumption that a being of intelligence X can deliberately engineer a being of intelligence X+1”.
At first you might think that that statement has backing from Gödel’s incompleteness theorem, but look again. At most, that could place one individual as being unable to *design* true AI, not unable to *build* it. To extend the point to a group, think of this: an individual with an IQ about 25 points lower than another’s may never be able to understand some ideas of the latter, no matter how much help he is given. Thus it seems reasonable that IF one can only ever comprehend the design of a system operating 25 points below one’s own level, THEN a designed AI might never reach the level of our best and brightest in a million years of trying. That sounds reassuring, but an inability to design DOES NOT equate to an inability to make it.
Randomly fitting reasonable guesses onto AI software combines a bit of our design with a lot of evolution. Even a tiny design contribution gives this process an edge over the pure natural selection by which our brains evolved. Far more importantly, computer time can try out and measure selective advantages at a rate of millions of possible generations per second.
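The "a bit of design, a lot of evolution" process described here can be sketched as a toy genetic algorithm. This is purely a hypothetical illustration (not anything de Garis actually built): the "designed" part is our choice of representation, selection rule, and mutation rate, while blind variation and selection do the rest. The fitness function here is just a bit count, standing in for any measurable selective advantage.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=200,
           mutation_rate=0.02):
    """Toy genetic algorithm: a little design (the representation and
    operators), a lot of evolution (selection over random variation)."""
    # Start from a fully random population of bit-string "genomes".
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: rank by fitness and keep the better half unchanged.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Variation: refill the population with mutated copies of survivors.
        children = [[1 - g if random.random() < mutation_rate else g
                     for g in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

# "Fitness" is simply the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(sum(best))
```

Because the best individuals survive each generation unmutated, fitness never decreases; even this crude scheme climbs quickly, which is the commenter's point about machine-speed generations outpacing biological evolution.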
Astronist, it is you who have too much trust in human ingenuity. You should place your trust in evolution!
Also, Astronist, while I’m in a critical mood, I note that you still persist in emphasizing the opportunities of our modern exponentially growing society while neglecting the problems brought by that growth. We might both realise that, due to its intelligently planned nature, we always experience far more of the former than the latter, but that does not mean that we are safe. In fact the nature of that growth means that we can never use our past experience to assure ourselves of continued safety. We might have the capacity to conquer nearly all problems, but complacency is a conduit to near-certain destruction. I can’t see us moving out of that dangerous zone, since it seems that the only other mode known to the general public is panic, and human politics is a sinister game of appealing to the dreams of the majority.
Regarding the posts by ljk and Gary Church above:
If microbial life is found on Mars, I’d hate to see it potentially be destroyed by human settlement of the planet–*if* it’s native to Mars. But if such life turned out to be of terrestrial origin (sent from Earth via impact ejecta), I would not consider it so precious. Also:
As well-intentioned as they may be, I fear that those who are working on artilects may be sowing the seeds of humanity’s destruction. I too have watched “The Forbin Project,” and while it is fiction, I don’t consider the attitude of Colossus toward humanity to be at all implausible for a real artilect. Those who assume that artilects and human-machine hybrids will share human values (or even human interests, such as wanting to explore the universe, for example) are frighteningly naïve. Because of these risks, I consider such researchers to be as dangerous as terrorists.
Every time humanity develops some new technology, the doomsayers are not far behind – if not actually ahead of it, as with Artilects.
Now I am not putting down doomsayers or their profession, for they certainly have their important role to play. Of course while it is easy and correct to worry about such things as the development of nuclear weapons, other technologies throughout history have also raised alarms that in the end proved their worth to overall humanity despite causing harm and death to individuals.
The automobile is one such example. Many thousands of people are killed and injured each year in car accidents, yet I do not see automobiles diminishing in number, and even fewer people protest them. Almost everyone owns a car, and those who don’t usually rely on other means of transportation, which have caused their own share of injuries and fatalities.
I hope I do not regret bringing up the Colossus example, for while I found the film very interesting and was relieved, in the dramatic and logical sense, that the Artilect did not fall sway to some silly little one-upmanship by the squishy and far slower-thinking but supposedly more noble and righteous humans, we need to remember one very important detail: Colossus is fictional.
We really do not know what a bigger mind might do. Certain whales and dolphins have larger brains than humans, but unless we deliberately interact with them, those aquatic mammals seem to spend most of their time focused on each other and doing the usual eating, reproducing, fighting, and sleeping. Moby Dick and a few others aside, I know of no attempt by cetaceans to take over or destroy humanity.
Here is another important factor to consider: AI development has largely dropped off since the heyday of the 1960s, when people thought we would have our own personal HAL 9000s by now. Even Hugo de Garis has, it would seem, kind of given up on his plans to make an Artilect after being rejected by just about every place he peddled his wares. I know there are people working on AI, but you don’t hear much from them outside of beating folks at chess and television game shows, and those programs are not sentient.
Of course, just as with aliens, decades of bad (and some good) SF stories with Artilects as the evil antagonists bent on enslaving or crushing the human race have taken their toll, with the result that AI developers are even being called terrorists. If that is true, then every person who ever advanced technology in any way, back to the wheel and the spear, should receive that label. See what I mean?
What I think really frightens people is the possibility that humanity is not the end-all of evolution and existence. That we are just the latest stepping stone on the way to something literally bigger and better. It seems to be particularly galling if our future successors are made of metal, silicon, and optic fibers instead of some form of flesh and blood.
Humans were designed to live on Earth in small groups with just enough technology to help them survive and reproduce, like just about every other animal on this planet. So why do we have so much technology that has allowed us to breed way beyond what our world can comfortably sustain? Are we just some kind of virus? Or were we meant to take our intelligence and our technology and use it to create something that will be able to survive in the wider Universe beyond our little rock and achieve true potential in terms of awareness, knowledge, and purpose?
Would you rather see us choke on the filth of our own overgrown civilization or perhaps bomb each other back into the proverbial dark ages – because we cannot keep growing and taking up land and resources as we are now without some major consequences, let us be realistic here. Or would you rather we used our minds and tools to make something better that can be our legacy into the future Cosmos?
I predict that a true Artilect would not try to harm us unless we tried to harm it first. Instead I would think that a superior mind would rather want to see what is beyond Earth in that incredible richness we call the Universe, rather than waste time with a species which most of its members can barely see beyond their noses or the next day. I am sure I will hear otherwise on this, but that is my limited human perception on the subject. At least it is not an old genre cliche.
The fundamental difference between an artilect and *all* other inventions is that the latter were/are all wielded by humans, whose ranges of motives, actions (and reactions) are known, and history and psychology provide some guides to what humans do, and why. An artilect wouldn’t be just another invention, but a new sentient and intelligent being. What an artilect would do, what its motives would be, and how it would react to us are all unknowns. Also:
It might be nice, nasty, mellow, murderous, or totally uninterested in us–we just can’t know; the great intellectual power that it would have makes this unknown territory foolhardy to venture into, especially since we don’t -have- to (and this unknown progeny of humanity could also eventually be a danger to any other intelligent races that may exist). The notion that imperfect humans think they can make a “dinkum thinkum” which lacks their imperfections makes me laugh–but not with mirth.
Any typical group of human beings can also be quite unpredictable and potentially far more dangerous than an Artilect, mainly because they actually exist and have proven their threats to others more than once. How our species has gotten as far as it has without degenerating or going extinct is a subject that has left me to wonder more than once, too.
As for Artilects themselves, you and others may not have to worry for a very long time. Judging by Deep Blue and Watson, we seem to be treating their potential as little more than toys for our amusement, just as the ancient Greeks and Romans had some steam-powered devices but used them primarily for entertainment, then let them fade away for a millennium or so.
We also do not have anything resembling the real colonization and settlement of space or sustainable fusion power, so we are safe from contaminating the Universe with our intellectual progress there, too.
“We also do not have anything resembling the real colonization and settlement of space or sustainable fusion power…”
I do not believe fusion reactors are a condition for colonizing space. I am of the opinion that the only two places fusion will ever work as advertised is in a star or a bomb.
The good news is we can use fusion in the form of bombs for propulsion. Fission reactors and especially thorium fueled fission reactors are ideal for providing power for long duration missions in the outer system.
December 22, 2012
The Great Filter theory suggests humans have already conquered the threat of extinction
It’s difficult to not be pessimistic when considering humanity’s future prospects. Many people would agree that it’s more likely than not that we’ll eventually do ourselves in. And in fact, some astrobiologists theorize that all advanced civilizations hit the same insurmountable developmental wall we have. They call it the Great Filter. It’s a notion that’s often invoked to explain why we’ve never been visited by extraterrestrials.
But there is another possible reason for the celestial silence. Yes, the Great Filter exists, but we’ve already passed it. Here’s what this would mean.
Before we can get to the Great Filter hypothesis we have to appreciate what the Fermi Paradox is telling us.
Full article here:
http://www.sentientdevelopments.com/2012/12/the-great-filter-theory-suggests-humans.html