Suppose for a moment that life really is rare in the universe. That when we are able to investigate the nearby stars in detail, we not only discover no civilizations but few living things of any kind. If all the elements for producing life are there, is there some kind of filter that prevents it from proceeding into advanced and intelligent stages that use artifacts, write poetry and build von Neumann probes to explore the stars? Nick Bostrom discusses the question in an article in Technology Review, with implications for our understanding of the past and future of civilization.
Choke Points in the Past
Maybe intelligent beings bring about their own downfall, a premise that takes in more than the collapse of a single society. Alaric’s Goths took Rome in 410, hastening the decline of a once great empire, but the devastated period that followed saw Europe gradually re-build into the Renaissance. And as Bostrom notes, while a thousand years may seem like a long time to an individual, it’s not terribly significant in the overall scheme of a civilization, which theoretically might last millions of years. No, a true filter must be something larger, a potential civilization-killer.
Bostrom’s idea of a ‘Great Filter’ comes from Robin Hanson (George Mason University), and consists of the kind of transition that a civilization has to endure to emerge as a space-faring culture. The key question: Is the filter ahead of us or behind? If behind, wonderful — we have already passed the test and can look with some confidence to the future. Recent work, for example, indicates that human beings were reduced to a band of as few as 2,000 individuals some 70,000 years ago, near extinction. Yet somehow migrations out of Africa began 60,000 years ago, and all the tools of civilization would emerge in their wake.
Image: The galaxy NGC 6744, a barred spiral about thirty million light years from Earth. Is it possible that such vast congregations of stars may be utterly devoid of life? Credit: Southern African Large Telescope (SALT).
But that’s a filter that still gets intelligent life well on its way, and surely with the number of stars in our galaxy, that would imply at least a few civilizations should have made it through besides ourselves, their presence obvious by now. No, to explain the Fermi paradox, we would like to go further back, making the emergence of complex life of any kind problematic. Making it, in fact, so rare that a galaxy devoid of it (other than here on Earth) is an explicable outcome. That kind of filter gives us hope, because we’ve survived it even though no one else has. The galaxy may be empty of life, but it is also a vast frontier awaiting our expansion.
The Shape of Future Menace
But maybe the filter is still ahead of us. If so, we may be able to see its outline in fairly familiar terms, such as nuclear war, asteroid impact, genetically engineered disease used as weaponry, and so on. Or maybe, and more likely, it’s something we cannot foresee:
The study of existential risks is an extremely important, albeit rather neglected, field of inquiry. But in order for an existential risk to constitute a plausible Great Filter, it must be of a kind that could destroy virtually any sufficiently advanced civilization. For instance, random natural disasters such as asteroid hits and supervolcanic eruptions are poor Great Filter candidates, because even if they destroyed a significant number of civilizations, we would expect some civilizations to get lucky; and some of these civilizations could then go on to colonize the universe. Perhaps the existential risks that are most likely to constitute a Great Filter are those that arise from technological discovery. It is not far-fetched to imagine some possible technology such that, first, virtually all sufficiently advanced civilizations eventually discover it, and second, its discovery leads almost universally to existential disaster.
Better to have the Great Filter behind us. Then, at least, we know that we are here and that the experience was survivable. And the parameters of the filter have implications for our search for life. Bostrom hopes we find no sign of life elsewhere because such a find would imply that life is commonplace, that the Great Filter kicked in after the point in evolution that that life represents. Well and good if the discovered lifeforms were simple — we could still assume the filter operated early in evolutionary history and that we are past it. But if we found complex life, this would eliminate a larger set of early evolutionary transitions as the filter, and would imply that it is ahead rather than behind us.
Explaining the Great Silence
Remember, we are trying to explain why we are not finding signs of intelligence elsewhere, no von Neumann probes, no artifacts from civilizations that should have had plenty of time to expand through the galaxy. In Bostrom’s view, no news from the stars may actually be good news. It could imply that life itself is improbable, that the Great Filter happened well in our past and we somehow survived it, and that therefore we may be able to make the transition to a higher and better civilization. We are the one species lucky enough to make it this far, and while we cannot rule out the possibilities of other Great Filters lying ahead, we can at least hope we have weathered the worst.
All of which seems to put Earth back into the center of the universe again, a bizarre exception to the overwhelming norm. Bostrom thus has no choice but to invoke the observation selection effect, a way to make sense of our good fortune in being the lucky exception to the rule:
Consider two different hypotheses. One says that the evolution of intelligent life is a fairly straightforward process that happens on a significant fraction of all suitable planets. The other hypothesis says that the evolution of intelligent life is extremely complicated and happens perhaps on only one out of a million billion planets. To evaluate their plausibility in light of your evidence, you must ask yourself, “What do these hypotheses predict I should observe?” If you think about it, both hypotheses clearly predict that you should observe that your civilization originated in places where intelligent life evolved. All observers will share that observation, whether the evolution of intelligent life happened on a large or a small fraction of all planets. An observation-selection effect guarantees that whatever planet we call “ours” was a success story.
Into a Barren Universe
Bostrom is director of the Future of Humanity Institute at Oxford, a transhumanist philosopher (this is George Dvorsky’s description) who notes that even if the Great Filter were in our past, this would not absolve us from future danger. But this is a man who would like to see all that interesting technology, from nanotech to life extension, kick in to provide us with a ‘posthuman’ existence whose outline we cannot presently imagine. He’s actively pulling against finding life anywhere else because he’s convinced that life’s rarity implies most organisms run into a buzzsaw before they can colonize space. We survivors, then, may find no one else to talk to, but we should have a fighting chance to use our technologies in a transformative way.
And here is where I truly disagree with Bostrom:
…surely it would be the height of naïveté to think that with the transformative technologies already in sight–genetics, nanotechnology, and so on–and with thousands of millennia still ahead of us in which to perfect and apply these technologies and others of which we haven’t yet conceived, human nature and the human condition will remain unchanged. Instead, if we survive and prosper, we will presumably develop some kind of posthuman existence.
I see no evidence in history that the basics of human nature are amenable to change, whether or not such change would be a positive or negative thing. Nor can I go along with those who think we will be able to control our own evolution into some kind of higher lifeform, but long-time readers know my doubts that a genuine ‘transhumanism’ is possible to us. That would be another discussion, though, and I leave this one with the thought that if complex life of any kind is rare, we may have survived only to move outwards into an unexpectedly bleak universe.
Very interesting article. I’m in agreement with you that the basics of human nature, and by extension of the human condition, will persist. I generally dislike the entire Star Wars phenomenon (well, I kinda liked the first two movies), but one thing I do appreciate about the SW universe is that you find human beings, and other beings, living in all of the same conditions, from opulent wealth to abject poverty to slavery, as you find them today – in spite of the technological advances.
Humans will certainly change themselves. The cosmetic body modifications of today will be functional body modifications in the future. Will that be evolution, even if these modifications are genetically transferable? Will they make people more than human? Or just differently human?
Now, this is not scientific reasoning, I know, but with all the “Life Is Rare” viewpoints being expressed lately (and what a dismal universe that would be!) I can’t help but feel it’s the last gasp of ‘homocentric’ thought. Like the old saying, “It’s darkest before the dawn,” we may be closer to daybreak than we suspect. “The Great Silence” may be nothing more than flawed human assumptions and listening in the wrong places…
I am a biologist, and I pretty much agree with the outcome of the “rare Earth” hypothesis (I’d say one civilization per galaxy is an optimistic estimate).
But I want to be somewhat optimistic for once: Is it possible that a future filter is not a catastrophic filter, but merely something that steers civilizations away from spacefaring?
For example, civilizations may ultimately find it better to move their lives into some kind of Matrix world, supported entirely by machines. This would be somewhat desirable, since almost every aspect of life would be under full control and tuned to individual and/or societal desires, lessening space or resource limitations.
Machines could take care of moving civilizations from one world to another, with some kind of slow ark, when the planet is in jeopardy; but apart from that, no real need for spacefaring (and much less need for radio transmitting or von Neumann probes) would arise.
(I know this idea is not new, but I don’t remember the sources).
(Please note that my calcs came from an Excel spreadsheet whose formatting was lost in translation.)
One possible thought is that the Great Filter may be nothing more than the finite speed of light and the large size of the universe. A thought experiment I am fond of on the subject of ET is a simple rough order of magnitude (ROM) calculation:
Let us imagine for a moment that we know for a fact that there are “x” number of unique alien civilizations in the Milky Way galaxy. Then let us settle on figures for the volume of, and number of stars in, the Milky Way galaxy. Then let us divide the number of stars and the volume of the galaxy by the proposed number of alien civilizations. The numbers always surprise me, but they really drive home the magnitude of the sizes and numbers we are dealing with. I have simply used numbers from Wikipedia to establish a basis.
Milky Way Galaxy (region with stars):

Avg. Dia.     Avg. Thick.   Approx. Volume   Number of Stars
(approx. LY)  (approx. LY)  (cubic LY)
------------------------------------------------------------
100,000       1,000         7.85E+12         2.00E+11

No. of Alien    Avg. Volume   Avg. Cube Size   Number of Stars
Civilizations   per Civ       per Civ          per Civ
                (cubic LY)    (LY)
--------------------------------------------------------------
10              7.85E+11      28,025*          2.00E+10
100             7.85E+10      8,862*           2.00E+09
1,000           7.85E+09      2,802*           2.00E+08
1.00E+04        7.85E+08      923              2.00E+07
1.00E+05        7.85E+07      428              2.00E+06
1.00E+06        7.85E+06      199              2.00E+05

* Cube size is greater than the galactic disk thickness, so calculated on a 1000 LY thick disk.
While some may quibble over the specifics of such a calc, I think the significance lies in the ROM approach, i.e. the numbers are not likely to change by an order of magnitude regardless of the specifics. Look at a number of 1000 alien civilizations, for example. This might sound like a lot of alien cultures to occupy the galaxy at the same time, but on average, the closest would be ~ 3000 light years distant, and we would have to look at ~200 million stars to find them.
Regards,
Raymond
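Raymond’s spreadsheet logic is easy to reproduce. The sketch below uses his stated assumptions (a galactic disk 100,000 LY across and 1,000 LY thick, holding 2e11 stars) and implements the slab fallback his footnote describes for regions wider than the disk is thick:

```python
import math

# Rough-order-of-magnitude sketch of the spreadsheet above.
# Assumed parameters, taken from the comment (via Wikipedia):
DIAMETER_LY = 100_000   # galactic disk diameter
THICKNESS_LY = 1_000    # galactic disk thickness
N_STARS = 2.0e11        # stars in the galaxy

# Disk volume = pi * r^2 * h, ~7.85e12 cubic light years.
VOLUME = math.pi * (DIAMETER_LY / 2) ** 2 * THICKNESS_LY

def region_size(n_civs):
    """Return (volume per civilization, characteristic size in LY).

    If a cube of the per-civilization volume would be thicker than the
    disk, fall back to a 1,000-LY-thick slab and report the side of its
    square face, as the table's footnote does."""
    vol = VOLUME / n_civs
    side = vol ** (1 / 3)
    if side > THICKNESS_LY:
        side = math.sqrt(vol / THICKNESS_LY)
    return vol, side

for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    vol, side = region_size(n)
    print(f"{n:>9,} civs: {vol:.2e} LY^3 each, ~{side:>6,.0f} LY across, "
          f"{N_STARS / n:.2e} stars per civ")
```

Running it recovers the table’s figures, which is the point of the ROM approach: even a thousand coexisting civilizations leaves each one, on average, alone in a region thousands of light years across.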
I’d tend to agree that we may well not be able to control our evolution in the sense of collectively directing what we evolve into, but on the other hand our descendants are unlikely to be very much like us. We will evolve, even if we do not control that evolution.
I say we haven’t found any ETI yet because they, as we are to them, are very far away and focused on their own concerns, not those of some hypothetical beings across the galaxy. Oh yeah, plus they are alien. And as devicerandom said above, they might all be in their own little virtual worlds instead of out exploring the galaxy. We certainly could end up like that.
To think we would find ETI after just 50 years of sporadic searches with only a few methods, in a galaxy of 400 billion star systems 100,000 light years across, in a Universe 13.7 billion years old, is a bit premature on our part, to put it mildly.
Raymond,
Since galactic, chemical, and biological evolution all take place over billion-year timescales, it’s not inconceivable that the first civilization to arise would sweep up everything. A 10-million-year head start would go a long way, but head starts could really be several billion years.
If they can’t conquer the galaxy à la Star Trek, they could still have the capability to super-miniaturize and send out micro probes. Assume maximal energy efficiency and exploitation of gravitationally focused channels for communications. Their communication nodes could be as close as 550 AU.
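The 550 AU figure is the minimum focal distance of the Sun’s gravitational lens, which can be checked with the standard formula d = c²R²/(4GM) for light grazing the solar limb. A quick sketch, with rough constants assumed for illustration:

```python
# Back-of-envelope check of the ~550 AU figure: the minimum focal
# distance of the Sun's gravitational lens, d = c^2 R^2 / (4 G M),
# for light grazing the solar limb. Constants are rough assumed values.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
C = 2.998e8         # speed of light, m/s
AU = 1.496e11       # astronomical unit, m

focal_m = C**2 * R_SUN**2 / (4 * G * M_SUN)
focal_au = focal_m / AU
print(f"Minimum solar gravitational-lens focal distance: ~{focal_au:.0f} AU")
```

The result comes out just under 550 AU; light passing farther from the limb focuses farther out, so 550 AU is where the focal line begins, not a single focal point.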
As far as transhumanism goes, is it not likely that SOME people will become “transhuman” and others will not? I think the most unlikely predictions of all are those that postulate that we will all be the same and make the same choices.
I am confident in saying that as we move into the future, human beings and their choices and desires will radiate outward in all different directions and permutations. The transhumanists who claim that we all will become posthuman may not be correct. However, I am certain that those who claim that NONE of us will become posthuman, or at least augmented versions of ourselves (while remaining in biological form), are most certainly wrong.
The notion that balding heads and expanding waistlines represent the final end point of 4 billion years of evolution is silly at best. Certainly we can do better than this.
Scot Stride has argued for a number of years that SETI should get past the UFO stigma and focus on looking for alien probes in our Sol system:
http://astrobio.net/news/article919.html
The way past the Great Filter is having more than one basket for your eggs: self-sufficient off-world outposts in such numbers that war and disease cannot wipe us out.
Our rapid advances in nanotech, biotech, high-energy experiments, and AI all make universal self-extinction of intelligent civilizations a very real probability. So long as even a possibility exists, an interstellar mission (not just a lunar base) becomes top priority.
The threat of self-extinction should drive us in three directions simultaneously:
1) the establishment of self-sustaining, protected human & biological shelters,
2) at least slow the development of the above four technologies to give us time to establish the shelters or elucidate the risks, and
3) as fast as possible, develop an interstellar mission with the ability to establish a human colony in a neighboring star system. (e.g. miniature craft, frozen embryos, stem cells, etc)
I think that we interstellar mission advocates need to decide for ourselves if the risk of self-extinction mandates that an interstellar mission be prioritized. If we can’t make this decision then our arguments for an interstellar mission will long be unable to compete with on-Earth priorities or the many discovery and development missions within the solar system.
I for one think that the risk of self-extinction is sufficient to prioritize an interstellar mission. Would anyone else agree?
The filter is now and ahead of us. We like to imagine that we are much more sophisticated than our barbarian ancestors, but that is only partly true. We are still too often given to superstition and rampant irrationality, neither of which is compatible with an ever-growing dependence on ever more sophisticated technology. I think we are already in an age where there is no guarantee of the survival of the species (and I’m not speaking about overblown hype about climate change, per se); we lack sufficient maturity to deal responsibly with the technology we have today. And if anyone thinks that the technology we’ve developed already is disruptive, just wait. Imagine when the physical limitations of the human body are conquered, when you can change your behavior (completely and utterly) or even your IQ with medical treatment, when people no longer need to work in order to eat, have a home, and own the things they want. Imagine how poorly societies will deal with those disruptions, and the sort of tumult that is in store for mankind. I hope it goes well, but I fear otherwise.
What an odd and contradictory hypothesis! I tend to largely disagree, because I notice a great deal of circularity and unfounded assumptions.
First of all, we should not confuse: a) biological life in general (i.e., such as microbes), b) complex life, c) intelligent life, and d) technological civilization.
Then, more important, there is the assumption that (complex, intelligent) life is rare, which we still don’t know. Next, even more important, the assumption that it is rare because of some ‘Great Filter’ erasing it, instead of a more intrinsic rarity of the arising and development of advanced/intelligent life, as was discussed in another thread about rarity of intelligence recently.
Then there is the ‘romantic’ and rather fatalistic view that it is probably technology itself that erases civilizations. This is not only unfounded but also rather modernistic and contradictory: natural disasters cannot do the job so well (why not?), so civilizations probably erase themselves thoroughly. But the advancement of such technology also offers us hope for survival, so let’s keep going in that direction.
Worst of all, almost laughable, I find the circularity: if (complex, intelligent) life is rare, it is an indication of a Filter (one that happened already). If such life is common, it is also an indication of a Filter (one that hasn’t happened yet).
We would even be lucky if (advanced) life is rare, because it implies that the Great Eraser has already passed for now, phew!
I would rather suggest the opposite: if we find (complex) life to be common, it is very good news, because it apparently means that our galaxy is amenable to it over the eons. Just as a rainforest can be characterized as a place where circumstances for diverse life have been favorable for quite a while, and not as a place where the bulldozers just haven’t been yet (though that may sometimes happen, unfortunately).
And even if advanced life, intelligence and civilization appear to be rare, we don’t need any Great Filters to explain it.
I was referring to our recent thread “Life as Rarity in the Cosmos”, of April 11th, 2008, describing a more likely explanation for rarity of (intelligent) life, namely a series of requirements, each being rare chance events.
And even that met with a lot of discussion and criticism.
I’m more inclined to believe that we’re listening/looking for the wrong kind of signals. After all, if we look at how we exchange information on our planet now, the vast bulk of the talking we do occurs over fiber-optic land lines. When SETI was formed, this wasn’t the case, but I think that it’s fair to assume that 50 years from now all planetary communications can be taken care of by fiber optic land lines, and even comm traffic between spacecraft and planets/colonies can be taken care of by focused laser pulses. It’s easy to imagine that an advanced civilization can carry on unnoticed by only putting out strong radio signals for a handful of decades.
We’ve already experienced on our own world that populations tend to stabilize once a reasonably comfortable living standard has been reached; even if we make a major push to colonize our solar system over the next millennia, we probably won’t grow continuously so that we risk using up the system’s resources. Even if we make the interstellar push, is it likely that we’ll set up shop in more than a few dozen star systems?
If there is any truth to this idea of a “Great Filter,” it probably has more to do with rapid depletion of fossil energy sources and other nonrenewable minerals before adequate substitutes can be found. There is no indication that we as a species could have had an industrial revolution without fossil fuels; if intelligent species exist on planets that don’t have the blessing of ancient sunlight in easy-to-combust forms, would they even be able to produce recognizable radio signals, let alone begin to colonize space?
Hi Folks;
Suppose life is exceedingly rare in the cosmos, especially ETI. This would only mean that the resources available within the cosmos at our disposal can, in theory, support an infinite number of humans; that is, an infinite variety of beautiful female personalities and an ensemble of beautiful genotypes to behold.
I had a mental image while I was out driving a few days ago of how human colonization of the universe could work out even if we are never able to circumvent the light barrier. At first we would go on to colonize the Oort Cloud and the local interstellar neighborhood. Eventually we could have settlements throughout the galaxy that are linked by evacuated transit tubes, which have had all traces of real electromagnetic radiation removed from them by a nested, layered arrangement in the construction of the tubes and a thermodynamic gradient set up along the radial direction from inside the tubes outward. The evacuated tubes would allow arbitrarily high gamma factors, for all practical purposes, and would act as mass drivers to accelerate cars to ultra-relativistic velocities.
Eventually, other galaxies would be colonized, and this huge mass of humanity would radiate effectively forever out into the cosmos in the ultimate pro-life project of letting as many humans come into existence as possible.
If ETI civilizations are common, then obviously, we must share the cosmos with them in an ethical, peaceful, cooperative manner. However, if they are so rare so as to be essentially absent, then the territory is essentially ours to grab in so far as it is not already occupied.
Whatever happens in the coming millennia and eons, we can all look forward to its beginning with the establishment of a permanent outpost on the Moon, and then onward to Mars, and then to worlds beyond, to paraphrase President Bush.
Thanks;
Jim
Right now, we’ve got a sample size of one: the Earth. We know that simple life has been around almost since the planet first formed. We know that complex life (multi-cellular organisms) came much later. Tool-making intelligence arrived only 50,000 years ago, and we’ve had civilization for only 10,000 years.
This suggests that simple life is common, complex life rather rare, and intelligence exceedingly rare.
Again, this is based on one sample, us.
As Jill Tarter put it, we can debate and theorize and postulate until the cows come home, but until we hunker down and do some serious research and engineering, we’re just spinning our wheels. (Well, she didn’t put it exactly that way, but you get my drift… Wasn’t the TZF expected to make some announcement for the new year?)
Peter, re TZF yes, we’re running behind but construction of the TZF Web site is well along, and we’re in the latter stages of a significant publishing project that I’ll be describing here soon. I’ll post more as soon as I can on the latter.
Peter, thank you, I agree with you and Jill 101%! Paul, thank you too, I will await any news you may be able to forward! All my best to both of you. Your friend, George
When we speak of rare Earths, just how few Earths does the galaxy need to contain for Earth-like planets to be considered rare? How few intelligent, tool-making species are required for such species to be considered rare?
If the filter is something in the future and the galaxy is brimming with species at our current technological level, then the filter would not be something like nuclear war or resource depletion. If the galaxy has, or had, 1000 species at our level of development, then unless they are carbon copies of us (very unlikely), some would not be as warlike as we are, some would not use fossil fuels as a primary source of energy, and some would have put massive effort into space development as soon as they learned how to build rockets. I’m not saying those things won’t get us, but out of 1000 species, some would have avoided those pitfalls.
Now if the number is 10 such species, it might be the case that species stupidity wiped them out, just as it has a very good chance of doing with us.
Even if we were to spend 0.25 percent of the world’s collective GDP on space, it still would not be given the sort of importance it should have. Five percent seems more reasonable to me, or around 400 times our current spending. With the sort of productivity we humans are capable of now, with things like automation and robotics, even with 5 percent put into space development there would still be enough left over for the other things we may want to do. With that 5 percent we would be able to establish a number of bases and put massive amounts of instruments into space: instruments that would answer a lot of our questions, like what the frequency of Earth-sized worlds is, and some characteristics of those worlds.
On the other hand, if technological species like us are common and some of them did that, then where are they?
A 500-year outlook? I hate the thought of waiting even 5 years to find out what the frequency of Earth-sized planets is.
The “Great Filter” theory reminds me of the classic science fiction movie Forbidden Planet (1956). For those who haven’t seen it, the Krell, a long-extinct, highly advanced civilization on Altair 4, created a gigantic machine capable of projecting each individual Krell’s thoughts into solid matter. This was to be their greatest creation, one that would free them from all “instrumentalities” (love those 1950’s terms) and allow them a life devoted to intellectual pursuits. The forgotten danger was that, like humans, the Krell evolved from a primitive, ape-like being, and once the machine was switched on, the Krell destroyed themselves by creating deadly “monsters from the id” that sought revenge for every injustice, real or imagined.
As corny as the movie might seem today with its primitive special effects and politically incorrect “booze ‘n broads” subplots, that’s still one heck of a “Great Filter.”
Yes, and speaking of ‘instrumentalities,’ let’s not forget Cordwainer Smith!
http://en.wikipedia.org/wiki/Instrumentality_of_Mankind
Within 100 years someone will make a machine that allows you to enter a sequence, and out will come an atom-by-atom precise chemical or even nanotech device. It’s not unreasonable, then, that someone will invent a self-replicating chemical or nanotech device before we are able to send humans to Alpha Centauri. Such technology might have the potential of eradicating life on Earth.
Here we are one of the few groups that knows about and occasionally discusses Fermi’s Paradox. And yet we haven’t gone beyond discussion. The Lifeboat Foundation has done more than discuss. They’ve organized. Yet their position is that we should allow further development of these technologies but prepare to respond to their consequences.
What if that’s not possible? What if once the above machine is invented that anyone with any motivation could design any self-replicating molecule they jolly well pleased? And what if said molecule self-replicates, spreads by the wind, and consumes all carbon in about a month? We are 26 years into the AIDS epidemic and we still don’t have a good vaccine. Why should we be so certain that we’d be able to adequately respond to such a disaster?
The Filter could well be right before us and even be obvious and yet we seem to walk forward toward the danger without all that much concern. If the Filter is of our making in the next 100 years then all that we care about is at risk. Every hope that we have for mankind to spread to the stars would be at risk.
It sure looks like we are accelerating not decelerating towards the capability to produce self-replicating systems. So it looks like such technology is inevitable even though it is us (humans) who are moving things forward.
Perhaps the only way to prevent this from happening is for a small group to accept the danger as real and then come up with some highly unusual way of surviving it. By highly unusual I mean some approach that not one single ET civilization has tried.
So how about us? Are we that group? Or will we just move on to discuss the latest exoplanetary finding and miss this opportunity to try and find that odd chance to survive the Great Filter?
That is the one thing about Forbidden Planet I found to be a major plot flaw: How did the Krell, with all their high knowledge and technology, miss the most basic fact that their brains still contained basic elements from their distant ancestry?
Yet another SF film from the 1950s warning humanity not to mess with “things Man was not meant to know,” like playing God and nuclear radiation and glistening blobs coming out of meteorites.
On the plus side, like the other great SF films 2001 and Contact, the actual appearance of the aliens was left to the imagination.
And oh yeah, I think the Great Filter, especially as Bostrom has it, either does not exist or can be avoided if we use our intelligence properly. Otherwise why did we get as far as we have, only to be shoved back into the mud? Even if the Universe has no point to it other than existing, we can and must make our own meaning.
Here is another interesting and different story about contact with an advanced ETI:
http://en.wikipedia.org/wiki/Roadside_Picnic
It is in the tradition of Stanislaw Lem’s classic work, Solaris, about humanity encountering a planet-size alien whose motives are as enigmatic as the general Universe is to us:
http://en.wikipedia.org/wiki/Solaris_%28novel%29
Mark and ljk, I very much enjoyed the movie Forbidden Planet and even own my own copy of it on tape. Heck, I liked it so much I even remember walking back home with my father after the show let out. I was about 8 years old!! As to mistakes made by the Krell, well, I think it was just a warning to us to NOT DESTROY OURSELVES! Don’t forget that those were the days of the Cold War, when everybody expected a nuclear exchange between ourselves and the Soviet Union! And yeah, the booze etc. subplots were childish, but… it was the 1950’s!!!!! Also, somebody up there mentioned that in the great SF movies 2001 and Contact (copies of which, once again, I own), the aliens were left to your imagination. I think that was best, because to describe them would be about as childish as that booze subplot from Forbidden Planet! I wonder what aliens that advanced might look like? I kind of get the impression that they might be pure mind stored in space. Where will we be in a million years, come to think of it!? Just saw a bit of the new series Universe the other day, where they were talking about the future of the universe trillions and/or quintillions of years in the future! COOOL! Thank you one and all. Your friend, George
Reading discussions between transhumanists and non-transhumanists about the Singularity and so on is funny. Why? Because there is really no use discussing it: the Singularity is a fact. A relative fact. For a man from 1000 A.D. we are already deep into a Singularity, defined as a level of technological, social and mental progress whose outcome is completely unpredictable to someone who lived before it (like the man from 1000 A.D. in my example).
I have just skimmed the comments, and the general tone is very light on the practical aspects of communication. Try to decide why anybody would try to communicate, and then proceed with further discussion.
Let us assume advanced life is fairly rare, and that it has had at least 100 million years to exist somewhere. This reduces the chances of, or even the point of, trying to communicate to virtually zero.
Further, let us assume that most civilizations will be populated by lifeforms with lifespans similar to our own. Thus, for meaningful communication, both parties would probably have to be less than 100 ly apart at most. How many civilizations capable of communicating with us would you expect within a 100 ly radius of Earth? The chances of meaningful communication drop dramatically.
All is not lost, because we can make one more assumption. It is very probable that our science and technology are still in their infancy. Consider that radio wave communication is slightly over 100 years old, and look at how it has changed and evolved. On my belt I have a communication device that uses spread spectrum, and anybody listening to the transmissions without proper decoding would hear only static and bits of words from all of the cell phones in use. Can you predict how we will communicate next century, or next millennium? For all we know there is an interstellar internet out there with virtually instantaneous communication, but we are too primitive to find it.
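The 100-light-year argument above is essentially arithmetic, and can be sanity-checked with a quick back-of-envelope sketch. The stellar density figure and the one-in-a-million civilization fraction below are illustrative assumptions on my part, not numbers from the comment:

```python
import math

# Rough solar-neighborhood stellar density (an assumed value,
# roughly one star per ~250 cubic light years).
STELLAR_DENSITY = 0.004  # stars per ly^3

def stars_within(radius_ly: float) -> float:
    """Number of stars inside a sphere of the given radius."""
    volume = (4.0 / 3.0) * math.pi * radius_ly ** 3
    return STELLAR_DENSITY * volume

def expected_civilizations(radius_ly: float, frac_with_civ: float) -> float:
    """Expected number of communicating civilizations within the radius,
    given an assumed fraction of stars hosting one."""
    return stars_within(radius_ly) * frac_with_civ

print(f"Stars within 100 ly: {stars_within(100):.0f}")  # ~16,755 stars
# Even a generous 1-in-a-million chance per star leaves the expected
# count within range far below one:
print(f"Expected civilizations: {expected_civilizations(100, 1e-6):.4f}")
```

With only tens of thousands of stars inside the sphere, the per-star odds of a contemporaneous civilization would have to be remarkably high for anyone to be within conversational range.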
Hi George;
Thanks for the above enthusiastic commentary! Quite a profound set of ideas in your above posting.
I do not quite recall the movie Forbidden Planet, but I keenly remember the original Planet of the Apes series, especially the one where the human astronauts land back on Earth several centuries after a nuclear holocaust ended human civilization. As a child, I actually felt a sense of dread when one of the astronauts on horseback saw the Statue of Liberty, at which point it became clear in the movie plot what had happened.
Speaking of the future of the human race, we are headed somewhere into the future for sure, and things will change. Hopefully we will embark on the final frontier of interstellar and intergalactic travel. One thing is sure: we are all traveling into the future of spacetime. As a Catholic, I like to think of the Final Resurrection of the Dead, but since I am not trying to convert anyone here, I bring up the concept only as an anecdotal account of how one of us numerous space heads deals with his own mortality. Hopefully I still have several more decades yet, as I am in good health overall.
This great thing we call creation beckons us all to explore and contemplate the mysteries of the infinite extent of the future, not only in terms of our evolution, but also in terms of where we will travel in the vast reaches of space and time.
On the next clear night in town, I am going to drive out into the country and just ponder the seemingly (and probably) boundless depths of just our universe, and ask myself, like a young child with a sense of eager anticipation: where am I going from here?
Thanks;
Jim
Isaac Asimov had a great quote about speculations like this, which I will paraphrase:
“There may be a Great Filter in the universe we know nothing about … but! we know nothing about it!”
MaDeR: what you are describing is somewhat different from the concept of “the Singularity”. The Singularity as usually described is when technology outstrips humanity and renders us obsolete. Even from the perspective of a person in the year 1000 CE, this has not happened: sure, the current state of the world would not have been predicted, but humans are still the ones running civilisation.
The problem with the idea of the Singularity is that it corresponds to indefinite exponential growth, which is a very dangerous extrapolation: typically limiting factors come in that cause growth to either plateau or reverse (the latter probably being rather unpleasant for the civilisation to which it happens). Whether computer technology will continue to advance to the point of developing superhuman AI does not seem nearly as certain as the transhumanists predict. Most predictions of the future are wrong, after all.
At a guess, it might be that we don’t develop strong AI and computer tech begins to plateau (as seems normal for any kind of technological innovation to date), but biotech dominates the cutting edge of the next phase of technological growth. Unpredictable future, but no runaway Singularity.
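The plateau-versus-runaway point above can be illustrated with a minimal sketch comparing unconstrained exponential growth to logistic growth, where a carrying capacity makes the curve level off instead of running away. The growth rate and capacity values below are arbitrary illustrations, not modeling claims:

```python
def exponential_step(x: float, r: float) -> float:
    """One time step of unconstrained exponential growth."""
    return x * (1.0 + r)

def logistic_step(x: float, r: float, k: float) -> float:
    """One time step of logistic growth with carrying capacity k:
    the growth term is damped by (1 - x/k), so it vanishes as x -> k."""
    return x + r * x * (1.0 - x / k)

# Both trajectories start at the same point with the same rate.
exp_x, log_x = 1.0, 1.0
for _ in range(100):
    exp_x = exponential_step(exp_x, 0.1)
    log_x = logistic_step(log_x, 0.1, 1000.0)

# The exponential trajectory keeps compounding without bound, while
# the logistic one flattens out as it approaches its carrying capacity.
print(f"exponential after 100 steps: {exp_x:.1f}")
print(f"logistic after 100 steps:    {log_x:.1f}")
```

The two curves are nearly indistinguishable early on, which is exactly why extrapolating from the early exponential-looking phase of a trend is so risky.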
Another view of transhumanism :-p
jim, yes, we have quite a future to look forward to, possibly encompassing millions of years! A LOT can be done in timeframes like that, wouldn’t you guess!? And I too recall Planet of the Apes and the Statue of Liberty scene. We had better be careful that something like that does not happen and ruin our plans for the future! Also, I agree, and always have, that nothing could be more interesting than the depths of space (!) Finally, yes, we must all find our own way to deal with the concept of mortality, but as medical science marches on we have more and more reason to hope for the best! Well, I guess those are my comments for now. All the best, your friend george
In the “old” days – the 1970s – Carl Sagan and others used
to say that finding an ETI would be wonderful, as it would be
proof that somebody knew how to survive their cultural and
technological adolescence – meaning they didn’t blow
themselves up in a nuclear war or another major disaster.
Now what do we have? Oh, let’s not find anybody else in
the Cosmos, it’ll be bad for our egos.
Human egos have survived knowing they are not the center
of the Universe, mainly because they continue to ignore and
forget that 500-year-old fact (a 2,500-year-old fact if you
count Aristarchus of Samos).
Hopefully it won’t be too late before our species wakes up
and truly realizes it is just part of a much larger system
called the Milky Way galaxy, which is just one of 100 billion
or more galaxies in a Universe that might be one of an
infinite number. These facts apply even if we are the only
highly intelligent beings around.
I wonder what Bostrom’s real objectives are with his
SETI statements?
I notice that a number of “Transhumanists” and Singularity
supporters actually subscribe to the view that no one is out
there because we haven’t been contacted yet. If these so-
called “forward thinkers” maintain this backwards attitude,
it will only harm the SETI projects. And just as our politicians
and religious leaders would prefer there to be no cosmic
competition for their own purposes, transhumanists like
Bostrom also seem wary of competition from alien Artilects
and other aspects of Singularity technology.
Then again, most people think of transhumanists – if they
are aware of them at all – as geeks (they are into all that
sci-fi stuff, which means they can’t get dates or throw a
football) so maybe there is hope for ETI after all.
ljk… “the old days” of the 1970’s!!!!!!! Wow, tragically you are far from wrong. Just yesterday I made a comment: “Remember 1994? That was the date you set the SF story in, in order to denote that we were talking about the faaaar future!!” Isn’t it hard to believe that it is now the past?!! Now you could start an SF story with words like: “The starship captain looked at his electronic clock and noticed that it was already January 3rd, 2284, just as his room’s communication channel to the bridge sounded.” lol, and thanks everybody. Your friend george. PS: did that sound decent at all? I have been playing with the idea of writing some science fiction, is all, and I’d like to know. ;) your friend george
@andy: Well, it looks like I heard a different definition than you did. Looking at what Wikipedia has to say, I do not see “rendering humans obsolete” as a necessary component of the Singularity. Kurzweil’s concept is more in line with my understanding of the Singularity than Vinge’s.
But yes, I agree that relying on exponential growth is dangerous. Look at bacteria on a plate: they will grow until space and resources run out. But if the bacteria could fall off the plate into a pond, they could again proceed with growth for some time, even greater than on the plate, until the inevitable halt.
I see us like that: technology not only accelerates, but itself allows us to “open up” space for another period of growth. From primitive tribes, to bigger nations, to the global village and further, into space and other worlds. And again, again and again. Provided that we survive every phase.
And I see nothing that in principle (as a law of nature) prevents the creation of strong AI. We create equivalents of AI every day (in a pleasant, if somewhat messy way). Of course, accessible technology and the economics of AI are a different matter.
There is one possibility nobody seems to be considering (that I’m aware of anyway).
Suspend disbelief for a moment and imagine that there are civilizations abroad in the universe that are billions of years old. Imagine that, as in Star Trek, they have a ‘Prime Directive’ not to interfere in emergent civilisations.
They would be wise not to interfere, firstly because they would not want their technology to fall into primitive hands, and secondly, because of the disastrous cultural contamination on an unready species.
The Universe is, according to some estimates, about 14 billion years old. Taking our own evolutionary time (4-5 billion years) as a model, this means there could be civilizations more than 5 BILLION years old.
The evolutionary distance between us and them is, literally, greater than between us and an amoeba. And with an almost unimaginable mastery of just about any branch of science, they could be, as the physicist Paul Davies speculated, ‘lords of creation’.
What would they be capable of, do you think? Perhaps as the late great Arthur C Clarke put it, their tech, to us, would be “indistinguishable from magic”.
Perhaps they would go one step further than just ignoring us, and this is the unconsidered possibility I mentioned:
Perhaps they would have the means, way beyond our current physics, to quickly locate emergent civilisations and place them under quarantine.
By quarantine I mean that they could ‘jam’ all incoming signals from any other civilizations, including their own, preventing any premature cultural contamination. Imagine if, Contact style, we got a stray transmission from an advanced civilization that could enable us to build a super-weapon.
So perhaps there are good reasons to stop those transmissions from reaching us?
To block all incoming signals might seem an enormous undertaking to us, but then we have achieved things today that would stagger a caveman, and we’re talking about technology enormously more advanced than ours. Perhaps they would have the means to make such jamming undetectable to us.
If this were to be true, then SETI would never detect a damn thing until we’re a lot more mature a species than we are now.
There are competing versions of transhumanism going around. The one that gets all of the hype and attention is Vinge’s or Kurzweil’s singularity concepts. However, I can tell you that there are a great many people who are interested in (and are working to develop) biotechnological life extension, but do not subscribe to any singularity concept. I am one of these people and we are much more numerous (and less obvious – one of us could be the guy or lady you sit next to on your next flight) than the so-called singulatarian transhumanists.
If you define transhumanism as the Vinge or Kurzweil scenario, then I agree that this is unlikely to occur. However, if transhumanism is defined as using biotech to cure aging and achieve indefinitely long “youthful” life-spans, while we remain discrete biological entities, then I think this version of transhumanism is a near certainty.
I think the development of biotechnological cure for aging is as historically inevitable as the invention of the telephone or motorcar.
Of course there’s always the possibility that intelligent life is relatively common but that for whatever reason we’re the most advanced civilisation in this galaxy. We haven’t heard from all those aliens in our “local” region, say within a 100 light year sphere, because they’re still busy trying to figure out how to successfully produce iron, or their version of Lee de Forest has just invented the vacuum tube.
Hi Tim
For two civilisations to be within ~50 years of each other’s level of development would be quite a feat. Unless there were something ensuring that outcome, it’s unlikely to happen by chance.
Why Don’t They Do SETI?
By Seth Shostak
SETI Institute
posted: 08 May 2008, 12:30 am ET
A widespread and popular impression of SETI is that it’s a worldwide enterprise. Well, it’s not, and there’s something modestly puzzling in that.
…
So what’s the story? Why is SETI nearly exclusively an American game?
The oddity of this was brought home to me a few years ago when I held a colloquium on SETI research at the Dutch university in Groningen where I was once employed. The room was full — overfull actually, with students and faculty braced against the walls. My first question was, “How many of you think it’s likely there are intelligent extraterrestrials out there in the Galaxy?” Virtually every hand went up.
I followed with “and how many of you are willing to spend one guilder a year to look for it?” (That’s the cost of one cup of subsidized university coffee. One cup per year.) The hands all went down.
I was stunned. When, after my talk, I inquired of a faculty member why the Dutch were reluctant to mount a SETI program, his answer was, “We’re too sober for that.” I didn’t understand his comment, especially given the concordant opinion that there could be something to find.
Let’s be clear: it’s not that the Dutch don’t have the radio telescopes or technical smarts. They do. It’s not because they don’t have the money. They do.
And so do the British, French, Germans, Canadians, Japanese and lots and lots of others.
So, as Gertrude Stein asked, “What’s the answer?” What’s so singular about Americans that only they are willing to spend a small (very small) amount of money and a bit of time to try and answer a truly important question about life, the universe, and everything?
Full article here:
http://www.space.com/searchforlife/080508-seti-why-dont.html
The Director of the Vatican Observatory said that the concept
of alien life does not contradict Catholic religious faith:
http://news.yahoo.com/s/ap/20080513/ap_on_re_eu/vatican_aliens
“Vatican: It’s OK to believe in aliens”
AP / Yahoo 5/13/08
“Believing that the universe may contain alien life does not contradict a
faith in God, the Vatican’s chief astronomer said in an interview published
Tuesday.
The Rev. Jose Gabriel Funes, the Jesuit director of the Vatican Observatory,
was quoted as saying the vastness of the universe means it is possible there
could be other forms of life outside Earth, even intelligent ones.
‘How can we rule out that life may have developed elsewhere?’ Funes said.
‘Just as we consider earthly creatures as “a brother,” and “sister,” why
should we not talk about an “extraterrestrial brother”? It would still be
part of creation.'”
[snip]
“Funes said science, especially astronomy, does not contradict religion,
touching on a theme of Pope Benedict XVI, who has made exploring the
relationship between faith and reason a key aspect of his papacy.”
Full article here:
http://news.yahoo.com/s/ap/20080513/ap_on_re_eu/vatican_aliens