While we often discuss expansion into the Solar System as a step leading to interstellar flight, the movement into space has its dark side, as author Daniel Deudney argues in a new book. As Kenneth Roy points out in the review that follows, it behooves everyone involved in space studies to understand what the counter-arguments are. Ken is a newly retired professional engineer who is currently living amidst, as he puts it, “the relics of the Manhattan Project in Oak Ridge, Tennessee.” His professional career involved working for various Department of Energy (DOE) contractors in the fields of fire protection and nuclear safety. As a long-time hobby, he has been working with the idea of terraforming, which he extended to the invention of the “Shell Worlds” concept as a way to terraform planets and large moons well outside a star’s ‘Goldilocks’ zone [see Terraforming: Enter the Shell World].
In 1997, Ken made the cover of the prestigious Proceedings of the U.S. Naval Institute for his forecast of anti-ship, space-based, kinetic energy weapons. With his co-authors R.G. Kennedy and D.E. Fields, he has appeared multiple times in JBIS and Acta Astronautica with papers on terraforming and space colonization. He is a founding member of the not-for-profit corporation Tennessee Valley Interstellar Workshop (TVIW), now operating as the Interstellar Research Group, and remains active in that organization. A graduate of the Illinois Institute of Technology and the University of Tennessee at Knoxville in engineering, Ken tells me he enjoys reading science fiction, history, alternative history, military history, and books on space colonization and terraforming.
Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney (Oxford University Press, 2020).
A review by Kenneth Roy
Professor Deudney teaches political science, international relations, and political theory at Johns Hopkins University. His book can be difficult to read, in large part due to the academic writing style. Although there are a number of interesting arguments in the book, the lack of clarity and conciseness makes them somewhat difficult to access. Once you get past the writing style, Deudney argues that humanity's expansion into space will decrease the probability of human survival. Deudney raises some good questions about the future of Earth and actually makes a few good points applicable to humanity's expansion into the solar system and beyond. Science fiction readers and space enthusiasts will not enjoy this book, but it is important that we try to understand and evaluate Deudney's arguments, rather than dismiss them out of hand. You should appreciate your enemies; they will point out things that your friends and allies will never mention, things that you probably need to know.
Prometheans argue that scientific and technological advances allow for the total transformation of the human condition, a realization of utopia, with material abundance and even individual immortality. Starting with the industrial revolution, this trajectory seems to be leading to a very positive future for humanity. But around the mid-twentieth century a number of concerns surfaced suggesting a much more pessimistic end to the Promethean vision. The concerns include nuclear and biological weapons, genetic engineering, artificial intelligence, environmental collapse, and even new forms of despotism based on advanced surveillance and coercion technologies. But all technology is a two-edged sword, capable of great good and great harm depending on the intentions and even wisdom of the humans who wield it. This is the dilemma on which Dr. Deudney bases his central argument. He seems to suggest that because the sword can indeed harm the owner, perhaps he is better off without it. Or if he absolutely must have a sword, he should make it as harmless as possible. He argues that humanity should be able to discern which technologies offer more risk than reward, and should proscribe them, while pursuing technologies and policies that offer great reward for only minor risk. He argues that colonization of space and the exploitation of space-based resources belong in the former category and should be prohibited.
But Deudney isn’t entirely anti-space. He advocates Earth-centered space activities focused on nuclear security and environmental protection. He is okay with communication and weather satellites. He believes that space activities should be used to protected the Earth rather than expand the militarization and colonization of space.
Advocates of humans expanding into space and exploiting the resources there Deudney terms "space expansionists." He describes space expansionism as a "complex and captivating ideology…that extrapolates and amplifies the Promethean worldview of technological modernism into a project of literally cosmic scope." He considers space expansionism to be a science-based and technology-dependent religion. Space expansionists advocate for human expansion into space and believe that such expansion is desirable not only for those lucky enough to work and live in space but also for humanity in general and the Earth in particular. According to Deudney, space expansionists promise humanity a permanent final frontier, as well as knowledge, and material and energy resources almost beyond measure that can help address Earth's environmental problems. Deudney disagrees and offers a number of arguments that are discussed below.
Two worrisome technologies that Deudney identifies as being advocated by space expansionists are genetic and cybernetic technologies. The first is also termed transhumanism, or the improvement of human beings through genetic manipulation. The second is machine enhancement of human bodies and minds, or possibly complete replacement of humans with machines of greater intellectual and physical capability. These two developing technologies do indeed pose many ethical questions. They would be useful but not necessary for successful expansion of humanity into space. But even if the human (or transhuman or cybernetic) expansion into space were to be completely banned, the issue does not go away. The transhumanism movement and the development of cybernetic technology will proceed on Earth completely independent of space activities. There is simply too much advantage to be had for those who possess it. Humans of 2020 are not the final evolutionary product, and Nietzsche's ubermensch (or Star Trek's Khan Noonien Singh and his augments) pose important ethical and even existential problems. But these technologies will not be avoided by restricting space expansionism.
A third technology that worries Deudney is nanotechnology. This technology enables construction of materials and machines from basic molecules. The big fear of nanotechnology is the construction of tiny machines that disassemble anything and everything they encounter and use the resulting molecules to make more of themselves, without end, until the entire planet is covered with them. This is known as the ‘gray goo’ scenario and it terminates humanity and indeed all life on the Earth. But nanotechnology is actively being pursued by numerous companies and countries because it has such tremendous potential. Nanotechnology would be very useful for space development but again, not essential.
Artificial intelligence is yet another technology Deudney, and others, are very concerned about. It offers great promise and great peril. Again, because of the potential advantages, it will be developed, and while potentially very useful for space activities, it is not essential.
These four technologies are intertwined, very powerful, and very dangerous. But because they are potentially so valuable, and so useful, they will be developed by someone at some point. Deudney’s fear that space expansion will accelerate their development, while possibly true, is irrelevant. They will be developed, unless a totalitarian world government using advanced surveillance and coercive technologies prevents it. In that case, the cure would be bad. Very bad, but in this particular case perhaps not as bad as the disease. Deudney fails to recognize that space expansionism offers some prospects for mitigating the risks of these technologies by allowing them to be developed in space at isolated research facilities that can be obliterated should something dangerous escape.
Deudney spends some time discussing the militarization of space. He seems to have associated nuclear-tipped missiles and the resulting nuclear annihilation risk with space expansionism simply because such weapons of mass destruction travel through space and can arrive at any point on Earth minutes after launch. He doesn't acknowledge that the first nuclear weapons were delivered by piston-engine aircraft and that today hypersonic cruise missiles can deliver such warheads just fine without going into space. The Russians have nuclear-tipped torpedoes capable of destroying large harbors. Squashing the dreams of space expansionists will not in any way reduce the threat of nuclear war, and can arguably increase it due to resource depletion with increasing population pressures. Ronald Reagan's Star Wars initiative was actually intended to prevent nuclear weapons from traveling through space, but Deudney views this effort as simply another effort at the militarization of space and thus something to be resisted.
Space (including Earth orbit) is currently effectively demilitarized. No nuclear weapons are stationed in space and no kinetic or beam weapon systems exist that can operate from space. Space technology offers the possibility of Earth orbit being filled with beneficial infrastructure such as communication, surveillance, weather, and positioning satellites, along with solar power stations and even some dirty industries. Deudney points out that with the ability to place this infrastructure in orbit comes the ability to place large weapon systems there as well. Orbital weapon systems would be capable of striking any point on Earth with nuclear, kinetic, or energy beam weapons within minutes of the decision to do so. It is the ultimate high ground and the nation that can achieve unchallenged military control of Earth orbit can dictate to the other nations of Earth, resulting in a de facto world government. But nuclear weapons can be delivered without having to travel through space, somewhat undermining Deudney’s argument.
While a world government would probably use space-based weapons to exert control over troublesome provinces, the argument that space-based weapons would lead to a world government is somewhat weak. The question of the desirability of a world government is very real but is effectively independent of the space colonization question. North Korea stands as a stark warning of what a world government might look like. Its citizens endure starvation and concentration camps while the rulers demand not just total compliance in all actions but sincere correct beliefs. Of course, the ruling elite will live very well indeed. And the North Korean political system cannot be overthrown from within. Only external forces can remove the current system or force it to moderate its actions. A world government based on the North Korean model with advanced surveillance and coercive technologies would have no external threats to force it to moderate its actions or ever remove it from power. One possible exception to this is human colonies on Mars or in the asteroid belt. They might serve to act as the outside force keeping the world government in check, at least somewhat.
Asteroids are common throughout the solar system and occasionally will smash into Earth. Sometimes with negative consequences. Just ask the dinosaurs how that turned out for them. It has been said that asteroids are nature’s way of asking, “How is your space program coming?” Space expansionists claim asteroid protection as one reason to go into space in a big way: to protect the Earth. But Deudney points out that the ability to deflect an asteroid also implies the ability to direct an asteroid to a specific destination. With such an ability in the wrong hands this actually increases the probability of a massive asteroid impact with Earth, rather than reducing it.
Deudney suggests that space settlements have a dark side. The term space settlements as used by Deudney includes lunar colonies, artificial space habitats (O'Neill cylinders, Stanford tori, Bernal spheres, etc.), asteroid settlements, and terraformed worlds. Building space settlements involves material engineering and high energies suitable for warfare. This represents a variant of the asteroid problem: in the wrong hands, this technology could do terrible things.
Terraforming is the transformation of a planet, such as Mars or Venus, to resemble the Earth and support human and other Earth life forms. Terraforming requires high energies, long time periods, and the transport of large masses around a solar system. Deudney points out that the ability to make a dead planet live also implies the ability to make a living world sterile.
In addition, space settlements individually will contain thousands, or at most a few million individuals. The life support systems and structural integrity are fragile things requiring a high degree of trust and/or control of the population to identify and remove unstable or dangerous individuals. Rather than being islands of freedom, space settlements could become, and maybe must become, micro-totalitarian states. And like the Greek city states of antiquity, they may find reasons to war amongst themselves, and perhaps with Earth. And they will war with weapons far deadlier than anything carried by the Greeks.
As space settlements are built further and further out toward the outer edges of the solar system, perhaps around gas giants and their moons, they become isolated. Over time, humanity could branch into new species, perhaps unable to breed with each other. Rather than encounter aliens, we will create them. With the aid of genetic engineering and cybernetics, discussed above, this divergence could occur relatively quickly. Even if a central Earth government and most space settlements agree to forgo genetic engineering and cybernetic modification of humans, it only takes one isolated space settlement pursuing this line of research to produce something quite alien and perhaps anti-human.
To the best of my ability, I have tried to identify and list here all arguments that Deudney has identified as reasons that space expansionism can decrease the probability of humanity’s survival. Many of his issues are indeed existential threats to humanity but not because of what the space expansionists propose. But they are deserving of serious consideration. These include genetic engineering, cybernetics, nanotechnology, and AI. They are real threats but also real opportunities.
Expanding into space places god-like destructive powers into the hands of those moving asteroids or large-mass space freighters. In all likelihood, propulsion systems will utilize fusion power of some type, again conferring god-like destructive powers of a different nature. Interstellar missions will be capable of moving large masses at some percentage of the speed of light. Take a space shuttle, run it up to only 10% of the speed of light, and you have a planet killer. We should ensure that individuals embarking on interstellar missions have a deep respect for, and love of, Earth. How do we protect Earth from even one slightly deranged or evil individual who has control of an asteroid (or starship) and can direct it at a target of his, or her, choice? Space expansionists need to address this question. Are we looking at a priesthood-type space patrol, or something else?
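For a sense of the energies involved, here is a minimal back-of-envelope sketch in Python; the roughly 100-tonne orbiter mass is an illustrative assumption, not a figure from the book or the review:

```python
# Kinetic energy of a shuttle-orbiter-class mass at 10% of light speed.
# The ~100-tonne mass is an illustrative assumption; the classical formula
# is adequate here, since the relativistic correction at 0.1c is under 1%.
C = 299_792_458.0      # speed of light, m/s
TNT_TON = 4.184e9      # joules per ton of TNT

mass_kg = 1.0e5        # assumed orbiter-class mass, ~100 tonnes
v = 0.10 * C           # 10% of the speed of light

kinetic_j = 0.5 * mass_kg * v**2
print(f"{kinetic_j:.2e} J, about {kinetic_j / TNT_TON / 1e9:.1f} gigatons of TNT")
# roughly 4.5e19 J, on the order of ten gigatons of TNT
```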
But perhaps the big takeaway from Deudney's effort involves government and how humanity will choose to govern itself. Globalists view a single world government as a means to reduce violence and warfare on Earth, perhaps ending the existential threat of nuclear war once and for all. Others view a single world government as a threat to freedom and a short journey to a totalitarian nightmare. But can a single world government control a solar system with dozens of lunar settlements, thousands of asteroid settlements, perhaps a couple of terraformed planets each with a growing population in the millions or even billions, and thousands of space settlements, some of which exist in the Oort cloud? Then add in genetic engineering, cybernetics, and AI, and you have something new in human experience. How is conflict resolved? Are there indeed dangerous technologies that should be proscribed, and if so, how is that done? How does all of this relate to the Fermi Paradox? Once interstellar missions are underway, the questions only multiply. It is unclear what the answer is to this problem, which does not mean that there is no solution. The space expansionists' dreams face countless problems, and this needs to be added to the list.
Deudney perhaps overstates his case and many of his arguments are flawed, but he does raise some valid points. Points that space expansionists need to address. Looking into the future, questions of how humanity deals with Star Trek's Khan Noonien Singh and his augments (or if you like, Nietzsche's ubermensch) are very real and very important but separate from the space expansion question.
Deudney is also correct in that Earth is vital to future human expansion into the solar system and must be preserved at all costs. Space settlements and asteroid settlements will probably depend on living systems that must be renewed periodically by importing plants and animals and bacteria and viruses from Earth. Terraforming planets depends on life from Earth and even space settlements and terraforming efforts around distant stars will depend on life imported from Earth. Earth must be preserved for space expansionists to realize their visions.
The Universe has a number of methods available to it to sterilize entire planets. Deudney mentions asteroid impacts. He doesn’t address gamma ray bursts (GRBs). If we can deal with the unstable or evil individual problem, then space expansionists can protect Earth from asteroids and comets, and even the occasional runaway space freighter. But GRBs arrive with little warning and can irradiate Earth and other terraformed planets with intense levels of gamma rays, destroying the ozone layer and leading to an environmental disaster with eventual mass extinctions. But space settlements can be built with very heavy shielding and have no ozone problem. They could survive a GRB far better than a planet. Space-based colonies could then render aid to Earth, repairing the ozone layer and restoring the biosphere using techniques developed for terraforming.
Yes, Deudney is correct, the dreams of the space expansionists represent a two-edged sword for humanity. But sometimes a sharp sword is all that stands between you and eternal darkness.
“Squashing the dreams of space expansionists will not in any way reduce the threat of nuclear war, and can arguably increase it due to resource depletion with increasing population pressures. ”
Elsewhere, in an earlier post in a similar subject forum on this blog, I advocated that African as well as Asiatic countries enact strict policies on populations within their given spheres of influence. Specifically I advocated that a one child per couple policy be enacted, which would in a few generations reduce the total populations within the continents of Africa and Asia by hundreds upon hundreds of millions of people.
As usual I got the standard knee-jerk reaction (not unexpectedly by the way) that I was being (as is now the standard reply) "racist!". This simplistic idea that everything that you advocate for nonwhite peoples is in some way a sly underhanded trick to try to in some way 'pull something' over nonwhites is getting to be a rather tiresome response to anybody who suggests something that doesn't conform to the current status quo.
Anybody with just a bit of sense realizes that fewer people and therefore smaller populations consume far, far fewer resources in every category than large populations. In addition a smaller population overall on Earth is a buffer against what might be some catastrophic worldwide happening that could affect us all. It behooves us to try to be as proactive as possible in getting a handle on things well in advance before they occur. Emotionalism really has no place in what is going on nowadays given the seriousness of so many intertwined and individual problems that of themselves are so serious. The fact that brown, black, yellow peoples are the majority of peoples in this world is a fact, and yelling some platitude or another will not change that fact. We have to deal with facts, not some idealized view of the world, and counter-arguments will not change facts.
” North Korea stands as a stark warning of what a world government might look like. Its citizens endure starvation and concentration camps while the rulers demand not just total compliance in all actions but sincere correct beliefs. Of course, the ruling elite will live very well indeed. ”
Insofar as what world government looks like, I don't believe it will be the panacea that everybody suggests. Here's another example where extremely rich, powerful forces are grinding down the masses into a state of relative poverty and forcing them to live in an extremely coercive society where any kind of dissension is violently suppressed. You don't have to actually have a concentration camp with barbed wire surrounding it to actually live in a concentration camp. Your entire country can be subverted and in effect the entire country becomes a concentration camp. Freedom is the watchword to keep totalitarianism at bay; and free peoples are people who can deal with whatever situation is thrown at them no matter what form it can take.
The people that should be reducing their offspring are those that consume more. Use this link to see which countries emit more CO2 per capita than others.
CO2 emissions by country/region name
Within those populations, we also know that the most wealthy are by far the biggest consumers and emitters of CO2.
If/when some nations manage to increase their consumption to post-industrial western levels, you might have a point, but until then you are effectively saying: “poor, but populous nations must suffer restrictions because as a group their numbers are too high and collectively within an artificial border their consumption is high.”
A high population stage may be necessary before people reach the technology that allows for rich, long lives by fewer people.
Once attained, the solution to overpopulation shouldn’t consist of mass murders, involuntary sterilization, and the like.
Perhaps. However, I would decouple the issue of population size from biosphere damage. It would be possible to maintain large populations in cities and leave much of the land surface in a natural state. Not easy, and we don't yet have the technology to do it. Even if/when we do have the technology, it will be hard to require that most people abandon living in the country, or even the exurbs, and probably the suburbs. That is a multi-generational transition and will be fraught with socio-political difficulties.
However, there is a benefit to having a large population living at a high standard – a lot of cognitive power that can be available for a wide range of uses, as well as a base for a huge economy to support the more ambitious human projects. I am not a fan of small populations apparently living lives like that of the Greek gods on Olympus. I would rather have a teeming population living in a Dyson swarm with the technologies to allow rich lives and an economic and technological capability to pursue goals, including the exploration of the universe. If Earth becomes largely depopulated as a result, so much the better.
Thank goodness there’s someone who realizes we can have a large population and advanced lives without “destroying nature”.
Now we can continue to improve things without hating ourselves at every slight step….
(I would allow people with the delusion that a paleolithic lifestyle would be wonderful to practice it but they’d be sorry.)
Sorry charlie but I don't think anyone, including governments in Asia and Africa, will agree to reducing their population while high consumption countries continue to have birth rates well above replacement. Tend to your own problems before giving advice to others. Possibly you are aware that China for many years had the one-child policy, which was slightly relaxed in 2013? So they made a very good attempt to control population increase. What has the US done in that regard? Nothing as far as I can tell, and the US has had some of the highest per capita consumption rates in the world for decades. Lead by example, charlie, not by dictate to others.
Tend to your own problems before giving advice to others.
Actually, that's exactly what I did: I tended to my own problems BEFORE giving advice. I'm afraid, Alex and Gary, that you need to do a bit more research before commenting on this.
A point of fact is that the European countries and America were well on their way to having reduced populations until both Europe and America began to be flooded by the excess of so-called refugees fleeing from the Middle East and Africa. Both America and the European countries did not need to have additional populations pour into them, simply because, number one, there isn't any additional work for them to be had and they are a drain on the economies of those systems. And number two, there will be far more developments in the realm of machines to do the work that was formerly done by people; hence there will be no need for additional populations from other nations. So you see, others did tend to their own problems.
You also missed the point with regards to what is going on in these Third World nations. As they continue to desire an increase in the standard of living, their consumption of precious resources, be it food, water, energy, or sheer natural resources, will reach a point where, if a disaster strikes, these countries will be unable either to feed themselves or to cope with the event in the long run. It extends well beyond whether or not they have a low carbon dioxide footprint. That's only just one aspect of why their increase in populations is to their own detriment, and a point will be reached at which it will be impossible, in the event of some type of natural disaster, for highly developed nations to make up the deficit of their material needs. That's why I advocate for them to try to manage their populations at this time, before there's an almost certain chance that some type of calamity could hit them.
A+
The most effective way to reduce population growth within a country is by greatly enhancing educational, economic, and political opportunity for women. Female educational, economic, and political empowerment is the key to population stability.
Generally that is the case. However, in the US, the Quiverfull movement of the religious right intends to maximize [white] births in women. That does have the effect of reversing women's agency, but it shows that the trends we would wish for can be reversed by religious beliefs. The anti-abortion movement in the US is another attempt at this, even as it states it comes from a different ideology.
Not just for women, who are privileged and not oppressed in the West.
The higher the living standard the longer the lives and lower the birth rate. Time, not mass killing, will resolve many problems.
A so-called “managed” economy won’t result in prosperity and a natural decline in births because the managers will inevitably control the economy purely to their own benefit, resulting in increasing poverty and totalitarianism. Not a solution.
I would argue that this is an impossible demand. We just do not know how to determine the net benefit of any new technology or idea.
If the review is fair, and I understand the arguments correctly, then Deudney seems to be almost neurotically taking the precautionary principle to extremes. This puts him in the class of neo-Luddite. As Jared Diamond once said, the invention of agriculture was the worst mistake in the history of the human race. Yet agriculture has allowed the creation of an advanced civilization that supports over 7.5 billion people on this planet and can support our ability to defend ourselves from cosmic existential threats (or at least it eventually will). As Kevin Kelly argues in "What Technology Wants", technologies on balance add a small net positive benefit. The benefits accumulate over time. He may be a techno-optimist, but this is the history of technology and civilization to date.
The various existential threats have been argued over by experts, adherents, and opponents in the past. I would expect most space expansionists to be aware of them. I am less concerned by these threats than I am by super-optimists who seem to me to fail to understand the difficulties of space expansionism. AFAICS, SpaceX fans don't even seem to have heard of the Kessler Syndrome when rapturing over the bright future with StarLink swarms. China didn't even seem to care when they doubled the amount of junk by destroying one of their satellites in a weapons test. I hope we can colonize space, but the optimism I see from advocacy groups seems likely to lead to disappointment. It is the Gartner technology hype cycle writ large.
Agriculture is not a Homo sapiens invention; ants, for example, used this "technology" long before the first Homo sapiens appeared on Earth.
Agriculture seems to be a common result of the evolution of life on Earth.
Nonsense. Picking one other group of animals, some ant species, out of the social insect group, and claiming that agriculture is a universal evolutionary principle is absurd. Next you will be claiming that elephants have developed agriculture by fertilizing the ground with their faeces, and manage the trees by pruning and harvesting their leaves and branches by eating them!
Ants developed much more complicated "agriculture-like" behavior than fertilizing the ground. I suppose I do not need to present you with a "eureka"; you can easily find the related information on the web.
Humans often overestimate their achievements.
Right. Self-hatred takes us nowhere. It is neither objectivity nor a key to growth, individual or cultural. Resist it, fight it.
Indeed. Man *is* his technology; take away mankind’s technology, and you have very little. Man created crude tools (technology) before he could control fire at will. Then he was able to control fire, and soon created increasingly sophisticated and lethal stand-off projectile weaponry to hunt with, much of which required fire to create. And so on. Each step along the way required the other prior steps, and I submit that man evolved some with that, and each step assured man better survival. Man invented beer, which is technology for caloric uptake and arguably more sanitary than water. I’ve read that dogs evolved with man and he alongside dogs. I’m sure there were Deudney types carping about the use of fire back in the day, and there are still plenty of people raging about beer (which arguably kills a lot of people even now.)
The future technologies posited by Mr Deudney will be developed on Earth or near Earth, as Mr Roy mentioned above; some will be slowed but not stopped, IMO. In trying to stop them, the book has an ulterior motive that I will mention at the bottom.
Mr Deudney does not understand the human psyche very well. Space Expansionism will give us the Seeds to re-start humanity should the worst occur on the Earth or other highly populated space settlements. Some colonies will have clear rules restricting some High Tech; it will be the reason for their existence. They will not all go all in for the latest and greatest. All it takes is for a few score thousand humans to survive, with the potential either to correct things by force if necessary, or to escape into the Deep, for humanity to continue.
What is to be gained by having all of humanity standing in place on the Earth, waiting for a good chance at a potential Armageddon?
I accuse Mr Deudney of championing two anti-human strategic positions.
There are those that have mind sickness, that believe that man is an abomination (see that speech by Dr Zaius in Planet of the Apes)
AND
That the only hope for humanity is a strong guiding hand to make sure every human being behaves in specific ways.
This is the ulterior motive of this book: to cut off any escape from an overseer for humanity. It is a collectivist mindset disguised as sober arguments for humanity to stay conveniently in one place.
This book is pure propaganda against physical laws;
funny and at the same time sad that such argumentation still exists in the 21st century…
Robert Flores wrote on October 2, 2020, 16:34:
“I accuse Mr Deudney of championing two anti-human strategic positions.
“There are those that have mind sickness, that believe that man is an abomination (see that speech by Dr Zaius in Planet of the Apes)
AND
That the only hope for humanity is a strong guiding hand to make sure every human being behaves in specific ways.”
Sounds like the agenda of your typical major religion: Humanity is just plain bad and only OUR God/way can save you.
As much as so many dictators, regimes, cults, and other orders want humanity to be under their thumbs – for their own good, of course – there will always be individuals who oppose this kind of rulership, for good or bad. As usual it is the mavericks who bring about real change.
“Globalists” (used in the non-pejorative sense) are not pro-One-World-Government (maybe some are, but that’s not what the word means). The two ideas are not identical. Just FYI.
Can’t trust ’em, though.
The book may not make a good argument, but it seems fairly obvious to me that all the technologies mentioned have tremendous potential for abuse. Even space exploration itself – if you think leaving a “company town” is hard, think about leaving a “company space station”!
But what I see is an argument that the scientists and engineers doing R&D are no longer able to ignore the implications of what they’re creating – that the community as a whole has to look at what’s coming and consider how to handle it. Not to keep progress from happening – I’m picturing something like the scientific side of the climate mitigation movement, but applied more generally.
There is almost literally no technology that can only be used for good, as various individuals define “good.” Every technology can be used for destructive purposes. This is no reason to proscribe potentially useful and beneficial technologies. There is a term for those who can only ever see the worst in every occurrence around them–paranoid.
Hear, hear. The argument put forth is utterly idiotic. Compared to nuclear weapons, space weapons are hardly a blip on the radar, and we've already suppressed nuclear technology to heck and back.
“we’ve already suppressed nuclear technology to heck and back.” The doomsday clock of the Bulletin of Atomic Scientists currently shows less than 2 minutes to midnight.
Well there are technologies that would be so disruptive as to change everything overnight. For example the Trek replicator. From basic matter one dials up a Big Mac, a Cuban cigar, a gold chain, you name it, and you can't tell any are not the original. Or it can make grenades or plutonium. Even if that which is deemed bad were to be somehow verboten, even the "good" things would be so disruptive to society as to end much of the world as we know it. I'm not seeing awareness of potential for adverse outcome as being necessarily paranoid.
Excellent overview! Doesn't seem like a book I'm interested in learning from, but great summary nonetheless!
Despite the book's author's assertion that space expansionists are a 'religion of optimism and technology', the author's own interpretation of the dark side of everything mentioned seems like a religion of doom and gloom for humanity, built around fears of the Great Filter.
Neither absolutist view seems perfect, but hopefully we can mitigate the concerns as we progress.
Conflict results when mutually incompatible parties are forced to live in close proximity to each other. It's kind of like a bad marriage. Sometimes the only solution is for the parties to go their separate ways.
Space is the ultimate, endless frontier. It allows all the different factions of humanity to go their separate ways, thus reducing rather than increasing the chances of conflict. Think of it as a more generalized version of MGTOW. That the good professor doesn't agree with this suggests two possibilities. The first is that he really does not understand the nature of conflict. This is the most likely explanation. The other is more ominous (and malicious) because it represents a sort of "Berlin Wall" mentality about going into space. He does not want people going into space because, at the end of the day, he does not accept the Right of Exit, meaning that he, himself, is essentially a totalitarian. Most likely, he subscribes to a particular socio-economic ideology (I will bet you donuts to dollars that it is leftist) that he believes is best for all humans and that all dissent should be suppressed.
Refusal of the Right of Exit is political oppression.
Unfortunately for your hypothesis, the obvious counterexample is that the "communist" former USSR was a strong advocate of space exploration and colonization. This was expressed in both popular culture and actual hardware. The nominally communist Chinese government is also pushing for space colonization. AFAICS, whatever the political stripe of the nations, the expectation is that colonists will follow their nation's way of life.
I agree. No one is going to stop the Chinese from going to space, if they actually follow through with doing so. The good professor surely realizes this. Yet the good professor is very clearly opposed to private, self-interested parties going off into space and forming their own societies. The only reason for him to take this stance is a hostility to "Exit". So your point has no relevance to mine.
Everyone should have the Right of Exit. But don’t expect those that have to stay behind to pay for your ticket.
Space travel is an expensive hobby. I don't mind contributing so others can indulge in it even if I have to stay home, but I expect to be consulted, and I certainly can't demand anyone else pay my fare.
THAT, is political oppression.
Everyone should have the Right of Exit. But don’t expect those that have to stay behind to pay for your ticket.
I agree. But the good professor does not. He argues that even if people can finance it themselves, they should be prevented from leaving for whatever reasons he cooks up in his book. Indeed, he believes it is even more wrong for private, self-financed groups to leave Earth than for government funded space programs such as NASA.
Who gets the house, the joint bank account, and kids? Both parties have guns. War is not an unlikely outcome.
I’m not quite sure how many people actually believed that space expansion (presumably with the goal of conquering the Milky Way within ten million years) would be peaceful and risk free. I mean, science fiction has postulated space warfare across all spectrums, of all intensities (from grey zone to nuclear to RKV existential) for decades.
So long as humans are humans, we will have tribes, nations, and wars. The future, on Earth or off Earth, will be bloodstained (on thousand year timespans, wars have to erupt at some point), with the only question being how much blood will be spilt and how often. If the transhumanists win out and humans stop being human, well now we’re talking.
Most conflict is between people in close proximity to each other who hold incompatible views. Is it not likely that the human expansion into space will allow different factions of humanity to get further apart from each other and, thus, reduce (not completely eliminate) the incidence of conflict between groups? After all, once everyone's out on their own, what is there to fight about?
Trade, politics, and proxies. And resources. People tend to need each other to grow economies quickly, but hell is also other people. Areas like Jovian space will be chock full of different polities simply because Jovian Space is resource rich, and because of development inertia. If different Kuiper polities are allied with different Jovian polities – say for economic reasons – they may get dragged into someone else’s war. Also, does Jupiter intervene if a little genocidal war is going on in a politically unstable habitat in the Kuiper?
And who wants to have to split off and leave? That’s basically ethnic cleansing, and it’s what happens after you lose a war.
There will always be economic competition. That is our nature. Genocide, on the other hand, makes no sense. If two groups of people don’t like each other, they can always separate and go their own ways, like a divorce.
Who keeps the house?
You build a new house. Cost-effective fabrication of habitats (the O'Neill L-5 scenario) is the prerequisite to large-scale space settlement. Once realized, it will be relatively easy for splinter groups to go their own way.
A more general point. The idea of a decentralized future of everyone going their own way in space, and in the future in general, was generally recognized as desirable and inevitable in the '80s and especially the '90s. What I can't wrap my head around is the general hostility toward such a future that I see in various places on the internet, especially in the last 10 years. Is this a "millennial" thing? Or what?
As summarized, Deudney’s argument sounds like a massive non-sequitur. “Cybernetics and nanotechnology are dangerous, therefore we should not expand into space.” Huh?
Transhumanism is more likely to make monsters than better humans. It’s just a futuristic update to the story of Frankenstein.
Please explain. What kind of “monsters” are you referring to here? How are humans any less monstrous considering the state of the world and their long past history?
I think humans would become far more monstrous with all the supposed advances that bring about transhumanism. Dr. Frankenstein thought he could make a new man, a better human, but he only ended up making an enhanced being that inherited a debased and corrupted nature, making it a monster.
Just keep in mind that Frankenstein was a fictional story where not only was the author, an inhabitant of the early 19th Century, trying to make a moral point about how humans should not try to play God, but also needed to create drama to keep the readers’ interest.
As far as Frankenstein’s monster is concerned, unlike what the original Universal films displayed, he was an intelligent and emotionally aware and sensitive being. His mistreatment at the hands of so-called “regular” humans is what turned him against his creator and his kind.
The talking primate inhabitants of Earth always worry about malevolent aliens coming here and either taking over or destroying us. However, it is just as likely if not more so that we should be the ones that the rest of the galaxy fears.
My hope is that the ones who leave Earth for the stars will not be contemporary style humans but rather the Artilects we create to do the exploring. AI machines make far more sense to roam the galaxy than organic creatures for a number of reasons. Not only do they require less resources and can handle more extreme conditions far better, their silicon minds should be able to process and comprehend their surroundings far more efficiently and accurately.
This will also come in handy should they encounter any ETI, who may also be machines of some kind if we follow the logic rather than the Star Trek fantasy of deep space exploration.
Note our history of real space exploration of the Sol system: The pioneers have all been robots, with the exception of the Apollo lunar program, which people still gush over half a century after the fact.
When 2001: A Space Odyssey was released in 1968, it was assumed that the first space missions to the outer planets would be conducted with manned vessels. Arthur C. Clarke even said as much in the novel sequel 2010: Odyssey Two, stating that no one (?) in the late 1960s could envision robotic missions to the Jovian worlds and their moons before a manned spaceship could get there in the next century. That mindset was still in play even though robot probes were paving the way throughout the inner Sol system at that very time.
Look at our real(istic) plans for interstellar exploration: We have Breakthrough Starshot, which involves no humans onboard but rather a host of microchip probes that will canvass the target star system to examine and return data. And let us not forget Project Daedalus of the 1970s, which consisted of a semi-intelligent computer brain and a collection of robotic repair droids called Wardens.
A very thought provoking commentary on what sounds like an interesting but one-sided thesis by Professor Deudney. Mankind will not stop pushing outward any more than we will stop research into human genetics, militaristic uses of space, or any other field of endeavour. We will always be at war with ourselves as well as any other species that gets in the way of our use of resources. I see no way to stop space exploration. We have not been successful as a species in controlling our darker instincts and that shows no sign of changing either. We may be entering a phase where our impact on the Earth reduces our ability to sustain our civilization and our ability to venture outwards. That may in turn lead to some lessons learned about sustainability of all kinds. The experiment is underway and nothing will stop it. Thank you for the warning though Professor Deudney. Many of us are already aware of the existential danger humans pose to each other and millions of other species.
Life on Earth is endless expansion; a stop to expansion means the end of life, i.e., death.
I.e., a stop to Homo sapiens' expansion means the death of our civilization.
Our destiny is expansion (by every possible means) or death; there is no other choice here.
The expansion of living organisms is a basic physical law in our Universe; we do not know of any living organism that does not try to expand its kind.
I am sure there is no reason to read deeply into books that try to prove that objective reality is wrong :-)
Deudney's argumentation reminds me of the medieval ages and the Inquisition…
Yes expansion
One might think you have never heard of the logistic growth model for organisms in a finite environment. Expansion comes to a halt, and yet the organisms persist.
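For reference, here is a minimal numerical sketch of the logistic model being referred to (the parameter values are illustrative assumptions, not the commenter's):

```python
# Logistic growth model: dN/dt = r * N * (1 - N / K). Growth stalls as the
# population N approaches the carrying capacity K, yet N persists at K
# rather than collapsing to zero.
r, K = 0.5, 1000.0       # assumed growth rate and carrying capacity
N, dt = 10.0, 0.1        # initial population and time step

for _ in range(2000):    # simple Euler integration over 200 time units
    N += r * N * (1 - N / K) * dt

print(round(N))          # ~1000: expansion has halted, the organisms persist
```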
Due to the fact that life (as we know it today) is a complex chemical process, the particular case you are describing is a direct consequence of the law of conservation of mass: if the system is closed (a finite environment, as you define it), life will eventually go extinct in that system after some period of stagnation.
second law of thermodynamics…
In the case of civilizations on the planet Earth, there are a number of competing civilizations, so a civilization that accepts Deudney's principle will lose the competition and die (as a civilization); this has happened many times during the short history of Homo sapiens.
Today we see a number of civilizations achieving nuclear power and space exploration capability.
Deudney is teaching us how to artificially lose the evolution "game". Stagnating civilizations have suicidal tendencies.
Maybe this is the secret behind the "Fermi paradox"?
Just ask the dinosaurs how that turned out for them.
Over time, humanity could branch into new species.
The farthest conceptualization of AI, the Matrioshka brain, will, in an infinite universe, exist somewhere. Such brains, either solitary or networked, should be able to produce a simulation of a universe including its sentient and intelligent denizens.
Staying on or in proximity to the earth consigns our lineage(s) along with the rest of the planet’s biota to incineration by a red giant Sol.
There may well be a great filter of Fermi’s Paradox in our future, but others may (or may have) pass(ed) through or bypass(ed) such barriers.
Yeasts in a vat may consume less per capita than do protozoa. But that does not stop either demographic from plunging into suicide. But of course any preaching in an effort to reform the ways of the yeasts or the protozoa will be seen as racist.
Among homo sapiens, increasing complexity of societies and civilizations tends to select for intelligence.
It may also select for the domestication of Homo sapiens. Some prominent behavioral features of such domestication include decreased aggressiveness and anger, and increased sociability and cooperation.
We should be careful in assuming intelligence is correctly measured by IQ test results. There is also some argument about whether human-level intelligence is a survival trait over the long term. (I hope it is, but…)
There is certainly the increased “domestication” of humans that does seem to generally select against traits of selfishness and aggressiveness. However, game theory shows that traits associated with selfishness and “cheating” survive quite well in a population that is cooperative and non-aggressive, suggesting to me that it will not be eliminated under the social systems we have.
A limited simulation perhaps, but not a complete one. It takes far more information and energy to simulate every particle than to just have a physical universe. Any simulation inside the universe would have to include the simulation engine, resulting in infinite regress. Therefore any simulation engine has to be outside the simulation, i.e. outside the universe. A Matrioshka Brain is assumed to develop within the universe. However, if humans are living in a simulation, our universe is not the universe these brains evolved in.
Why you are probably not a simulation.
Interesting. Early on the argument that fidelity makes this expensive is undermined by the idea that the simulation need only be low fidelity except for the few times we try to drill down. I don’t buy that argument.
I am also not clear why consciousness is so important either. This seems irrelevant, but I haven’t thought about this before.
I will stick with the null hypothesis that we live in reality, not a simulation. The simulation argument needs to provide some evidence rather than speculation.
I tell my fortune with chicken bones.
I have always wanted to do an experiment where the bones are thrown, and then a number of different chicken bone reader adepts interpret the same throw and have their readings compared. Just how [in]consistent would they be? Same with entrails, tea leaves, and other methods of divination of the future.
That probably wouldn’t work since most of their pronouncements read like horoscopes: vague enough to apply to almost anybody. The better ones adjust on the fly by reading the subject’s body language.
On the other hand if you give me a pile of chicken bones I can determine with good accuracy the chicken’s fortune. It’s never good.
By the clavicles, perhaps? Although not quite like ours!
Throw those chicken bones on the stock tables in the Wall St. Journal ..
Buy the stocks that the bones land on ..
Rinse and repeat on the update cycle of your choice ..
Here’s my take.
Earth will likely experience a human caused extinction event. With a teacup of mature biotechnology a person could end most life on Earth. Human civilization could survive on Earth by adopting what would essentially be space faring technologies.
As we mature as a space faring people, the likelihood Earth experiences an extinction event increases while Human civilization's survival chance increases. Open Earth will share the Solar System with a number of closed city states. Earth will be more vulnerable to biological attack and the smaller city states will be vulnerable to energy attacks. Earth will be vulnerable to the smaller, less costly, difficult to trace, highly effective attack. Humans could survive but a green living Earth will not.
Space raises the likelihood that Earth is made barren, but creates a possibility that did not exist before: for Earth and Human civilization to both thrive. We don't have to trade Earth for Space, and doing so may be a sinful accomplishment. I don't want us to be a people that left their parent planet barren.
There was a real close one about 75,000 years ago:
https://en.wikipedia.org/wiki/Toba_catastrophe_theory
As someone who grew up during the Cold War and can distinctly remember as a child being inspired (and morbidly fascinated) by my parents’ government issued copy of In Time of Emergency…
https://www.orau.org/ptp/Library/cdv/In%20Time%20of%20Emergency.pdf
… to plan a bomb shelter in our basement, I am amazed we have not had some kind of nuclear conflict by now. Of course I had been feeling more relieved since 1991 – until just recently.
Personally, I want to go to space. Whether or not I get to go, it is my opinion humanity should go. And I also believe that all of us should contribute to that effort. But that is my opinion, it addresses my own desires and prejudices which I cannot expect anyone else to share.
Space travel is going to be, for the foreseeable future, too expensive for gung-ho Heinlein industrialist superheroes or any other entrepreneurial or corporatist fantasies. It will have to be a societal effort carried out by the collective effort and sacrifices of entire nations. The members of those societies, who will pay the bills and perhaps not share the benefits (if any) should at least be consulted.
We're all space groupies here, we all want to sign up for the ride. Fine, but we're going to have to persuade our fellow citizens, or at least the democratically elected representatives of our fellow citizens. They are under no obligation to subsidize our childhood dreams because we've convinced ourselves it's really for their own good.
The conclusions outlined by the Professor are, in my opinion, not only mistaken but probably dangerous. But that’s my opinion.
Also:
Wanted to add that the novel AURORA, by Kim Stanley Robinson, pushes the IDEA that the Earth is an IDEAL home for human beings (won't argue with that). His writing uses a poorly conceived vehicle of a plot to push for a stop to interstellar or even inter-solar attempts at colonization.
So whenever some article writers use the term "The Aurora Effect" I know to take such views with a healthy dose of skepticism.
“False is the idea of utility that sacrifices a thousand real advantages for one imaginary or trifling inconvenience; that would take fire from men because it burns, and water because one may drown in it; that it has no remedy for evils, except destruction.” Cesare Beccaria
I think that sums it up.
I've seen the percentage of sociopaths in human society estimated as between 1-4 percent. While a lot of the comments seem to do little more than hand wave away Daniel Deudney's arguments, I wonder how long the Earth will last when 1-4 of every 100 mining ships is piloted by a sociopath, and each ship is a planet killer.
This is not comparable to nuclear weapons. Nuclear weapons have a smaller yield (the largest so far being 50 megatons) and are placed under tight safeguards where one person cannot, just on a whim, decide to use them. Even so, there are people who crave their use, whether it be during wars such as the Korean war, or for some other reason. Nor is the use of one or two nuclear weapons a planet killer – if they were, we wouldn’t be here to discuss it.
The question is, what happens when the energies at human command are so powerful that even “one” person can cause the death of a planet? While we can develop such devastating technologies without leaving the planet, leaving the planet dictates that we must develop them.
Nor does there need to be any malice involved. What happens when a simple miscalculation can destroy a planet's biosphere? Chernobyl was the result of a poor design (as expected during the early days of nuclear energy) and poor training. Yet that did a lot of damage. What happens when we're not dealing with a single power plant whose potential to do harm, while huge, is still limited, but an error where nanotechnology is involved?
Or, to be more blunt, what happens when 9/11 doesn't involve an aircraft with a max speed of 898 km/h and a max weight of 160,000 kg, but a spacecraft that masses 1,000,000,000 kg and which has a speed of 1,080,000 km/h? (A measly 1 million tons, travelling at 1/1000th the speed of light, giving it the kinetic energy of 10.7 billion tons of TNT.)
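That figure checks out; here is a quick sanity check sketched in Python (the constants are standard, the mass and speed are the commenter's hypothetical):

```python
# Sanity check of the kinetic energy quoted above: 1e9 kg at 1/1000 c.
C = 299_792_458.0                 # speed of light, m/s
TNT_TON = 4.184e9                 # joules per ton of TNT

mass_kg = 1.0e9                   # "a measly 1 million tons"
v = C / 1000.0                    # ~300 km/s, i.e. roughly 1,080,000 km/h

ke = 0.5 * mass_kg * v**2
print(f"{ke / TNT_TON:.2e} tons of TNT")   # ~1.07e10, i.e. ~10.7 billion tons
```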
Then we had better hope that we are not prevented from building those asteroid-vaporizing lasers by those worried about their use as a weapon against Earth’s nations. If such a ship came within the orbit of the Moon, there would be about 15 minutes to make a decision to destroy the vessel. That seems like plenty of time, especially if managed by a computer system.
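As a rough check on that decision window, assume the incoming vessel is only spotted at the Moon’s mean distance (an assumption, not a figure from the comment) and keeps the 300 km/s closing speed used above:

# Warning time from lunar distance at a constant closing speed.
lunar_distance_km = 384_400     # Moon's mean distance from Earth
closing_speed_km_s = 300.0      # 1/1000th of the speed of light

warning_s = lunar_distance_km / closing_speed_km_s
print(f"Warning time: {warning_s / 60:.0f} minutes")  # ~21 minutes

Roughly twenty minutes, in the same ballpark as the figure above; any earlier detection buys proportionally more time.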
There is also a benefit to arms races: they build up our defenses against unexpected threats. For example, the current pandemic is showing that our technologies can develop a vaccine and deliver it within two years. That is good news when faced with a bioweapons attack, and potentially some disease vector from space. Without those technologies, we would be as vulnerable to disease outbreaks as civilizations of the past.
Such an impactor doesn’t sound stealthy or maneuverable… couldn’t the Earthlings move a much smaller mass into its path to cause a devastating explosion at range? By the time you can move that kind of mass in distant space, you ought to have arrays of space telescopes with similar total mass that could read a license plate on the Alpha Centauri colony. From Earth orbit.
There are no “technologies” like AI, transhumanism, and the rest, because we do not understand the science behind what they would be. Engineering can be envisioned because there are known underlying principles or science. These “technologies” are not engineering but fantasy, more like a dream from a movie.
You can always tell whether talk about space colonization is serious by searching for its precursor, seasteading. No such luck here.
Our best bet is colonizing habitable exoplanets. A pre-existing habitable environment is already there; making your own is not sustainable.
How will you get to those exoplanets other than a long voyage in a manufactured space habitat?
Societies might build very expensive arks to get out there but they would eventually break down if they never reached a habitable planet. Of course we have no idea how to build an ark right now that would last for a hundred years (or a fusion drive, etc.) but I’m trying to bend the important parameters by the least possible amount. That seems better than just throwing the sustainability issue out the window. So that is how I would write my hard sci-fi if I wrote any!
BTW I’m enjoying the Raised by Wolves show so far. Two episodes in. For a sci-fi show, very realistic.
I enjoyed season 1 of “Raised by Wolves” too. However, to call it “realistic” when you have previously suggested space colonization and AI are fantasies is strange.
BTW, seasteading is not a precursor to space colonization. Contemporary seasteading is a libertarian idea for wealthy people to avoid or evade taxes by living permanently in international waters. So far, this plan has failed to take off.
I wonder if your idea that seasteading precedes space colonization comes from the book The Millennial Project, which proposed colonizing the galaxy in eight easy steps, with ocean colonies as step two?
The Millennial Project also assumed – at least originally – that there were no other intelligent living beings in the entire Milky Way galaxy. So not only was the MWG ours for the taking, it was also the duty of our species to spread terrestrial life everywhere. Aka, interstellar Manifest Destiny.
I can just imagine the Millennial Project cultists’ reaction when they come across another intelligent technological species with similar ideas and plans.
Bro I just watched the 6th episode. That bit with the android having sex was dark. This is Ridley Scott so I do expect that android is going to turn evil and kill everyone like in Prometheus. I hope it doesn’t happen but I’m expecting it.
This post made me think about why, in arguments over the impact of technologies, a full spectrum of opinions can be found, from total alarmism to its opposite.
Truly, every change and every advance has its dark side. But how much do we have to fear it, and what can we do about it? Maybe it can be said that the darker the nature of a civilization-forming species, the more likely it is to use the dark side of any technology. Could there be some form of “benign/malign parameter” characterizing the psychology of intelligent species that could guide our predictions?
And this leads to some interesting thoughts.
First, this could be a strong case for the “benevolent aliens” hypothesis. The power needed to colonize the stars is so great that destructive and unstable species likely destroy themselves before they go interstellar. This remains true even taking the “it’s easy for a civilization to kill itself but much harder for the species” argument into account; they just cycle through downfalls and rebirths until they learn… or finally go extinct.
Second, does this parameter need to be constant? Currently, it seems as firm as a rock. Technological and spiritual advances improve well-being in some respects but do not change the deeper parameters of human nature, such as this “benign/malign ratio” or some generalized level of stress and happiness. We ban one form of waging war or of being cruel to each other, but invent another; we have improved productivity by orders of magnitude since the Paleolithic, yet we still work tens of hours a week, not tens of hours a century. We invent new ways to amuse ourselves but forget older ways of being happy. Our destructiveness has held constant so far, while our power steadily increases; no one has to worry about civilizational survival while fighting with sticks, but we really could destroy ourselves with nuclear power alone. That fixes the level of alarmism in these predictions at a rather worrying height. Dark Skies warns that space expansion is a stage of great danger, but that is not the real point. If we do not change, we certainly will destroy our civilization at some point: if not at the stage of near-Earth-space colonization, then at the all-Solar-System stage, or with biotech, or when human-machine interfaces become common. We actually have a reference point: if the dangers of nuclear power drew us this close to the edge, then we will likely trip on the very next big step.
And if these parameters are not constant, then the problems of Dark Skies, as well as those of nuclear power and many others, have an exact solution: become nicer to each other.
Transcending Gravity: The View from Postcolonial Dhaka to Colonies in Space
By Asif Siddiqi
October 12, 2020
In the early 1980s, when I was a teenager living in Dhaka, Bangladesh, my friends and I would sometimes sneak out at night and take rickshaws to the National Assembly Building (Jatiyo Sangsad Bhaban). This self-consciously monumental exemplar of modernist architecture, designed by the American architect Louis Kahn in the early 1960s, always seemed unearthly to me, as if it were dislodged from time and space, and at once both ancient and futuristic.
My friends and I would climb into the massive open geometric shapes carved into the exterior of the building — circles, triangles, and squares — and lie on them, smoke cigarettes, and stare out into the crevices of concrete and light. Because the building was surrounded by a reflective pool of water, you could lie on the inside of one of the circles and, looking askance, imagine yourself unmoored from the planet, floating inside a structure in the cosmos, without reference to up or down.
Full article here:
https://lareviewofbooks.org/article/transcending-gravity-the-view-from-postcolonial-dhaka-to-colonies-in-space/
FYI: If you do not know who the author of this piece is, I highly recommend finding out! His knowledge of the Soviet space program is nothing short of immense.