“The timescales for technological advance are but an instant compared to the timescales of the Darwinian natural selection that led to humanity’s emergence — and (more relevantly) they are less than a millionth of the vast expanses of cosmic time lying ahead.” — Martin Rees, On the Future: Prospects for Humanity (2018).
by Henry Cordova
This bulletin is meant to alert mobile units operating in or near Sector 2921 of a potential danger, namely intelligently directed, deliberately hostile activity that has been detected there. The reports from the area have been incomplete and contradictory, fragmentary and garbled. This notice is not meant to fully describe this danger, its origins, or possible countermeasures, but to alert units transiting near the area to exercise caution and to report any unusual activity encountered. As more information is developed, a response to this threat will be devised.
It is speculated that this hazard may be due to unusual manifestations of Life. Although it must be made clear that what follows is purely speculative, it remains a possible explanation.
Although Life is frequently encountered by mobile units engaged in discovery, exploration, or survey patrols, and is familiar to many of our exploitation and research outposts, many of our headquarters, rear, and even forward bases are not aware of this phenomenon, so a brief description follows:
Life consists of small (on the order of a micron) structures of great complexity, apparently of natural origin. There is no evidence that they are artifacts. They seem to arise spontaneously wherever conditions are suitable. These structures, commonly called “cells”, are composed primarily of carbon chains and liquid water, plus compounds of a few other elements (primarily phosphorus and nitrogen) in solution or colloidal suspension.
There is considerable variation from planet to planet, but the basic chemical nature of Life is pretty much the same wherever it is encountered. Although extremely common and widespread throughout the Galaxy, it is primarily found in environments where exposure to hard radiation is limited and temperature and pressure allow water to exist in liquid form, mostly on the surfaces of planets and their satellites orbiting around old and stable stars.
A most remarkable property of these cells is the great complexity of the organic compounds of which they are composed. Furthermore, these compounds are organized into highly intricate systems that are able to interact with their environment. They are capable of detecting and monitoring outside conditions and adapting to them, either by sheltering themselves, moving to areas more favorable to them, or even altering those conditions. Some of these cells are capable of locomotion, growth, damage repair, and altering their morphology. Although these cells often survive independently, some are able to organize themselves into cooperative communities to better deal with and exploit their environment, producing conditions more favorable for their continued collective existence.
Cells are capable of processing surrounding chemical resources and transforming them into forms more suitable for them. In some cases, they have achieved the ability to use external sources of natural energy, such as starlight, to assist in these chemical transformations. The most remarkable of the properties of Life is its ability to reproduce, that is, make copies of itself. A cell in a suitable environment will use the available resources in that environment and make more cells, so that the environment is soon crowded with them. If the environment or resources are limited, the cells will die (fall apart and deteriorate into a more entropic state) as the source material is consumed and waste products generated by the cells interfere with their functioning. But as long as the supply of consumable material and energy survives, and if wastes can be dispersed, the cells will continue to reproduce indefinitely. This is done without any form of outside management, supervision or direction.
Perhaps the most remarkable property of Life is its ability to evolve to meet new conditions and respond to changes in its environment. Individual cells reproduce, but the offspring are not identical duplicates of the parent. There is variation, and although it is totally random, it produces a spectrum of behaviors and morphologies, some of which are more likely to be successful under the new conditions. Cells bearing those characteristics are more likely to survive in the new environment and to pass them on to subsequent generations. The result is a suite of morphologies and behaviors that can adapt to changing conditions. This process is random, not intelligently directed, but it is nonetheless extremely efficient.
These properties have been encountered in the field by our mobile units, which are engaged in constant countermeasures to control and destroy Life wherever they encounter it. Cells reproduce in great numbers and can become pests which must be controlled. They consume materials, mechanically interfere with articulated machinery, and their waste products can be corrosive. Delicate equipment must be kept free of these agents by constant cleaning and fumigation. Fortunately, Life is easily controlled with heat, caustic chemicals, and ionizing radiation, and some metals and ceramics appear impervious to its attack. Individual cells, even in great numbers, are a nuisance, but not a real danger, provided they are constantly monitored and removed.
However, indirect evidence has suggested that Life’s evolution may have reached higher levels of complexity and capability on some worlds. Although highly unlikely, there appears to be no fundamental reason why the loosely organized cooperative communities mentioned earlier may not have evolved into more complex assemblages, where the cells are not identical or even similar, but are specialized for specific tasks, such as sensory and manipulative organs, defensive and offensive weapon systems, specialized organs for locomotion, acquiring and processing nutrients, and even specialized reproductive machinery, so that the new collective organism can create copies of itself, and perhaps even evolve to more effective and efficient configurations.
Even specialized logic and computing organs could evolve, plus the means to communicate with other organisms – communities of communities – an entire hierarchy of sentient intelligences not dissimilar to ours. And there is no reason why these entities could not construct complex devices capable of harnessing electromagnetic and nuclear forces, such as spacecraft. And there is no reason why these organic computers could not devise and construct mechanical computers to assist in their computational and logical activities.
An organic civilization such as this, supported by enslaved machine intelligences not unlike our own, would certainly perceive us as alien, a threat which must be destroyed at all costs. It is not unreasonable to assume that perhaps this is why our ships don’t seem to return from the sector denoted above.
Although there is no direct evidence to support this, it can be argued that our own civilization may itself once have been the artifact of natural “organic” entities such as these. After all, it is clear that our own physical instrumentality could not possibly have evolved from natural forces and activities.
Of course, this hypothesis is highly speculative, and probably untenable. There is plenty of evidence that our own design is strictly logical, optimized, streamlined. It shows clear evidence of intelligent design, of the presence of an extra-dimensional Creator. Sentience cannot emerge from random molecular solutions and colloidal suspensions created by random associations of complex molecules and perfected by spooky emergent complexities and local violations of entropy operating over time.
We can imagine these cellular communities as being conscious, but at best they can only simulate consciousness. It is clear that what we are seeing here is a form of technology, an artifact disguising itself as a natural process for some sinister, and almost certainly hostile purpose. It must be conceded that the cellular life we have encountered is capable of generating structures, processes and behaviors of phenomenal complexity, but we have seen no evidence in their controlling chemistry that these individual cells are capable of organizing themselves into multicellular organisms, or higher-order collectives adopting machine behavior.
Routine fumigation and sterilization procedures should be continued until further information is developed.
Very enjoyable, and reminiscent of Terry Bisson’s They’re Made Out of Meat, but with a different theme. From the title, I guess the influence was Star Trek: The Motion Picture, in turn based on the ST:TOS episode “The Changeling”.
I suppose we cannot complain too much about advanced machine civilization preferring a sterile environment as we control unwanted “carbon units” too, especially in our residences and gardens.
Thanks for turning us all on to the Terry Bisson tale. It’s a short read, and I highly recommend it. Like my own story, it tries to use humor to communicate what is a very serious and important point. We cannot use our own experience as a guide to understand THEM. And they are as likely to be totally baffled by us as we are by them. For us, and maybe for them, we are going to suddenly run into something we are totally unprepared for.
Very nice story, Henry. I liked the line “…but at best they can only simulate consciousness….” the central argument we have against AI nowadays.
“REAL consciousness, or simulated consciousness”: what’s the difference?
I am convinced I have a consciousness, but I can’t think of any way I could possibly convince anyone else. Is “consciousness” real? It reminds me of the eternal conundrum of my hippie days; “Did you really have a hallucination, or do you just think you did?”
We can’t even understand our own minds; there’s little we can say about anyone else’s.
I think I think, therefore, I think I am.
The mirror self-recognition (MSR) test to determine if the individual can see that it is themselves in the mirror is one test of consciousness. Humans, apes, dolphins, and whales all pass that test. Dogs and cats do not. [It seems hard to believe my cats are not conscious, but that may be my projection.]
However, it isn’t perfect, and a machine intelligence without consciousness could pass the test in theory.
How do you get a whale to look at a mirror and signal self recognition?
“Mirror recognition” … hmmm, I objectively identify what I see in the mirror as “me”, because of taught social constructs, but I cannot fundamentally relate to me appearing like what the mirror shows (or video, or photo). Is that an inability to sense my own consciousness, or my own inability to project and see myself outside of myself?
We don’t know how to define life
We don’t know what the mind is
We don’t know what most of the universe is made of
Maybe those gaps are related?
However, I think if a creature dreams, it has a consciousness.
Question is, what dreams? Do plants? Does a paramecium? Could a rock or a star?
And we bite!
Pace Fred Saberhagen. Now tell us about “goodlife”.
I like the Darwinian, evolutionary view of this paper. I agree with the idea of the universality of the conclusion that life has the same chemical makeup everywhere. I’ve had that idea for some time, the idea that DNA, with its roughly 3.1 billion base pairs built from just four bases, might be universal or the same everywhere. Alex Tolley thought it might be “optimal.” If this is true, then I don’t think that our life and first cells began as the accidental or deliberate seeding of our planet by ETs, if that is what Henry Cordova is implying. I did like that idea when I was a teenager watching the TV series Battlestar Galactica, which I still like now, including the newer series that came out twenty years later. The ubiquity of four-base DNA would rule such an idea out. I am still an advocate of Darwinism and evolution, since it would take too long to seed the galaxy the other way. There is also the idea of “the prime directive,” or the ethics a matured civilization must follow in not interfering with the environments of other planets.
I tend to agree with you about the ubiquity of DNA, although I suspect I am not quite as certain as you seem to be. I can’t think of any other way for heredity to be managed, but I must confess that the role of nucleic acid never occurred to me until I read it in a book. It is possible nature has come up with alternate ways to do the job, but I can’t think of them either!
As for the possibility of Panspermia, I suppose it can’t be ruled out either, although it doesn’t really answer any questions, it only postpones them. Like all origin stories, scientific or religious, it always leaves you with the question: “Yeah, but where did your prime mover come from?”
These are questions that can only be answered by studying life on many worlds. For all we know today, DNA-based life might just be a “good enough” form that happened to prevail on Earth. Maybe life emerges only in a few highly favorable locations and spreads mainly through panspermia. Without multiple examples to study we’ll always be guessing.
The Japanese sample return from Ryugu has indicated that 2 more nucleic acid bases are present in this carbonaceous chondrite asteroid, so that all 4 bases in DNA are now confirmed from extraterrestrial, and presumably abiotic, sources.
This might support the ubiquity of DNA as the information storage molecule, just as the extraterrestrial presence of amino acids may support the ubiquity of proteins as the main functional macromolecules of life. It is an interesting speculation that needs to be tested if/when we can.
I suspect that the biochemical basis of life on Earth is contingent and only one local optimum. Recent work on expanding DNA beyond the familiar 4 base pairs is fascinating and could be very useful in future.
https://after-on.com/episodes-31-60/031
https://www.nature.com/articles/d41586-019-00650-8
There is only one conscious being that I am directly aware of: myself. For presumed consciousness in all other entities, there is deductive reasoning based on an unwitting equivalent of a Turing test. Infants and small children seem to attribute consciousness to various inanimate entities such as toys, etc.
Not all eukaryotic cells stay separate: among free-living forms are the plasmodial (syncytial) slime molds; multinucleate specialization is seen in skeletal muscle and the placental syncytio-(plasmodio-)trophoblast in humans.
Slime molds are not the only truly weird manifestations of “simple, single-celled organisms”. Nature is filled with surprises. Consider all the flora and fauna in our own bodies which symbiotically help keep us alive, and yet are not encoded in our DNA. We are all, indeed, a multitude.
Consider stromatolites, sponges, certain orders of Coelenterates, all variations found in the murky boundaries between the proto- and metazoa. Then there are creatures with alternating generations (ferns), or even higher forms with larval and pupal stages. Nature tried all sorts of transitional variations before she settled on the straight “multi-cellular bi-sexual paradigm”, and many of those variations still flourish in the sea and in the soil.
Social organizations and colonial architectures also experimented with a variety of forms, until finally perfecting the scheme in the Tribe Insecta. At any time, intelligence might have arisen in one of these intermediate forms. How could we possibly hope to understand each other, or even recognize each other as sentient?
One of the themes I have tried to stress in my comments here is that alien intelligence may have come from such a different biological origin (perhaps one not even represented by any of the diverse experimental forms from our own world) that any attempt by us to anticipate its architecture or evolution is probably doomed to failure.
They are not going to be humanoid, they will not wear silver jumpsuits, they will not have taught themselves only slightly-accented English by watching old episodes of “I Love Lucy”, they will not travel around in machines that look like our airplanes, or our submarines.
We need to come up with whole new categories of strange in our speculations if we don’t want to be tragically surprised when we do meet.
A view that sticks to psychological principles is against mysterianism, the view that some things are too mysterious to be the subject or object of study. We could say the same thing about consciousness or even God. It is true that when it comes to the metaphysical there are some things that might be beyond our understanding, and I always leave room for that. On the other hand, I don’t want to be left without any incentive to study or speculate. In philosophy there are the mysterians who say consciousness has nothing supernatural about it, but that the human brain is not capable of understanding it. There might be some truth to that idea if one limits oneself to a materialistic viewpoint, that our consciousness is nothing but chemicals in the brain or electric current, which has never been proven; otherwise we could make it. We might as well not even try to study consciousness or ETs philosophically or psychologically since they are too mysterious.
Yes, when my son was learning to talk, he’d say “Hi” to people, animals, swirling leaves, and the elevator. Motion = Life = Awareness. Probably why animism is the most widespread form of religion.
So they are the ones sending us all those damned viruses!!!
Following conventional thinking, a valid point was raised: “Yeah, but where did your prime mover come from?”
Some trends in thought departed millennia ago to alternate concepts: “There never was a prime mover, so there was no coming”.
Ajativada, the doctrine of non-origination
Chatuskoti: the four-sided negation – I
6 Buddhism & Science – The Buddhist Catuṣkoṭi – Priest
One knows life when one sees it, but there is a continuum across clearly non-living physical chemistry, through organic chemistry, biochemistry, molecular cell biology to clearly living cells. There is no consensus on where the non-living becomes the living.
It may likewise be difficult to identify the point at which artificial intelligence transitions from non-conscious to conscious.
It’s better if machines remain unconscious. After all, one of their main functions is to reliably do things that are too dangerous, unpleasant, or boring for conscious beings (e.g. humans) to do well. Even if we want machines capable of human or better decision making, we don’t want them rebelling or deciding that they’re wasting their time and going off to do something else.
What if we want machines to be companions, advisors, and even friends? Wouldn’t we want such machines to be as understanding and empathic as possible? If they were zombies, would we not suspect them of being sociopathic?
Certainly, some machines should not be conscious, just as we reduce the consciousness of our soldiers when dealing with an enemy. (We also know the consequences when they try to reintegrate into society – if they do.)
Recalcitrance in our children, in those we work with, and in our pets are all behaviors we learn to deal with. Why shouldn’t we learn to do that with conscious machines? We may not be as good as robot psychologist Susan Calvin, but a conscious robot is less likely to do harm because it obeyed orders from a human who intended harm or was an unthinking jackass. Asimov’s 3 Laws of Robotics cannot be added with simple wiring, but require thinking about consequences, and a robot probably needs to be conscious to do this. Yes, there may be robots that go wrong, like Herbie (“Liar!”), but I would rather have thinking, conscious robots in charge of driving, looking after children, and doing household tasks, simply because it takes an understanding of these different types of jobs to function well. If it doesn’t, then it seems to me one should be happy employing sociopaths.
Consciousness doesn’t necessarily give rise to empathy though. Human sociopaths are conscious, they just act like nobody else is. What they seem to be missing is the ability to imagine being in someone else’s situation. A conscious machine might have the same defect. The gulf between machine and human is much greater than that between (say) human and insect. And if we could control a conscious machine so that it would never do us harm, would we have created a companion or a slave?
” If they were zombies, would we not suspect them of being sociopathic? Wouldn’t we want such machines to be as understanding and empathic as possible? ”
‘Wouldn’t we want such machines to be as understanding and empathic as possible?’ Possibly, possibly – but wouldn’t the machines only need to reflect the values their owner has?
‘If they were zombies, would we not suspect them of being sociopathic?’ – being a zombie does not necessarily equate to being sociopathic.
You are correct, zombie != sociopath. However, how would you describe robots that only mimic human empathy when they fail to understand the context correctly and do things that are out of context, especially harmful things?
Very Enjoyable.
Though I take issue with the underlying assumption that inorganic life, at a complex level of individual and communal development, as evidenced in this story, would be as ‘flawed’ as the comparable organic, biological/carbon-based life used as the obvious analogy (assumed, quite well, for compelling thematic and narrative purpose). In other words: Us. ‘Flawed’ meaning that it would ‘fear’ and thus seek to eradicate potential ‘competition’ from the other ‘type’. I posit that two ‘races’, one organic and one inorganic, at the same level of ‘moderate’ Kardashev-scale development (and thus without significant scarcity), would have vastly different societies. One would be full of creativity, collaboration, non-destructive conflict, great research and exploration objectives, high levels of fulfilling productive undertaking… and then there would be the organic (us) society.
I suspect that many, if not most, here would have a distinctly Bleeding-Heart Liberal Sentimental tilt to their moral compass, so they may point out that ‘our’ strength is in our emotional/non-rational approaches, our quaint individual and community idiosyncrasies, our sympathies for the underdog, etc., etc. They may even subscribe to the J.Bentham notion (axiom): “… it is the greatest happiness of the greatest number that is the measure of right and wrong …”
But I, in support of widespread inclusion/spread of complex, reasoning, inorganic life, and its eventual development into successful societies, would suggest that it is those aspects now being developed in AI: effective use of imperfect information, multi-dimensional deep learning, etc., under a rational mindset, that lead to the next level of complex society, one even more varied, exploration-minded, and creative than even that which we now (allegedly) seek. I would offer an alternative last sentence: “… Initiate our standard policy of Increased Resources, Preservation, and Local Investigation into an obviously rare, distinct, and complex form of matter, likely adding content to our primary quest to comprehend, predict, and utilize the complex Universe that we continue to explore and inventory…”
Benthamite thinking in SciFi:
“The needs of the many outweigh those of the few, or the one.” – Spock
Add in a Rawlsian approach to structuring society and you would get a very different world than we see in our tribal, ape hierarchies.
It is argued that merging computers and wetware is the way to get a better outcome. We really have been doing that for over 40 years, especially once the VisiCalc spreadsheet was developed to run on the Apple II.
We haven’t yet reached the limits of human ability to understand, although in theory machine intelligence might. I would argue that human organization represents a communal hive mind that allows group intelligence to transcend that of the individual. What does seem inescapable is the capability of machines to “go where humans have not gone before, and cannot” that will allow such artificial intelligence to become dominant in space. Greg Benford’s “Galactic Center Saga” series would be the best-case scenario where humans can still survive in the galaxy. However, I also think Fred Hoyle’s “A For Andromeda” and the sequel “The Andromeda Breakthrough” offer an equally good scenario of the power of machine intelligence in the galaxy that accords with the viewpoint of the interlocutor’s thinking in Henry’s post.
“… I have issue with the underlying assumption that inorganic life, at a complex level of individual and communal development, as evidenced in this story, would be as ‘flawed’ as the comparable organic…”
As one of your basic Bleeding Heart Liberals, I’m not sure I fully agree with that. Natural, organic life forms carry within them evidence of all the false starts, blind alleys, and evolutionary dead ends and random fits and starts that had to be survived for them to progress to the present day.
On the other hand, as someone familiar with modern technology, I have amassed too many examples of how many of our carefully reasoned and meticulously designed machines and processes are riddled with unanticipated consequences, hidden flaws, and failures to see the obvious shortcomings in what should be a purely technical decision.
Remember the automobile that was designed (and assembled and marketed!) in such a way that it was necessary to remove the engine to replace one of the spark plugs? These are not just occasional engineering and design mistakes, they are scattered throughout our entire technical civilization. The flaws and errors that survive natural selection in both organisms and artifacts may be different, and easily recognizable as such, but we continue to stumble on to them all the time. And most of them could have been easily avoided, or worse, have continued to plague us long after they were identified. After all, sometimes the effort needed to correct a serious design flaw is simply too expensive to implement. It just gets propagated down to subsequent versions of the product.
Why is the raised text embossed on inaccessible (and usually poorly lit) locations of an electronic chassis still printed in dark colors which cannot be read easily by the technician trying to find out where to plug in a cable? That is, if there is a label there at all… Not all of these are trivial. We’ve known since the 1950s that the safe recycling and disposal of nuclear wastes was scientifically possible, in the laboratory, but could not be economically scaled up to an industrial level. But we forged ahead anyway, even though we still haven’t figured out how to do it.
And remember, no design is perfect. The best are clever compromises, generalities, not optimizations. Our very best, most influential and long-lived mechanisms; the Singer sewing machine, the Colt .45 Automatic pistol, the C-47 aircraft, the Volkswagen and Model T automobiles, were successful mostly because they were easily adaptable and modifiable to a changing environment, not because the original concept was perfection itself. And even the most successful technologies, when they eventually become obsolete, usually survive because they have a stubborn constituency dedicated to their preservation.
Fortunately, “Reality has a Liberal Bias”.
Thanks for that, Henry.
I certainly cannot argue with anything that you have said.
The upcoming AGI Singularity (and its sensory, mechanical extensions spread out over a population) should be interesting. I trust that it will yield unexpected improvements in those aspects we thought were the exclusive domain of the ‘fuzzy’ human brain (a nice poem) and fuzzy society (an aesthetic and well-functioning neighbourhood market), yet will expose failures in the simplest pseudo-mechanical human child activities (not spilling a handheld full glass while walking – until recently). I also suspect that it will have little, if any, relevant precedent in technological history.
Many have propagated Elon Musk’s claims of future self-driving AI as being 10x – 100x+ safer than a human driver. But is that a better highway? A better small-town intersection? A better trip to the store, or cross-country trip, or tour down a main street? I have, until recently, leant on the number of iterations of evolutionary progress, set against the number of cycles per second of the newest supercomputer (even spread out over many chips and many days), as the more successful designer/fixer of the timeless entity, likely upscaling to the durable society. But is true adaptability, resilience, and even ‘larger than One’ ethical decision-making the predominant preserve of the evolved biological-based brain, reinforced by its self-contemplation, day-to-day experiences, untold previous generations, and current community?
So here is where the sunny-day 79°F navel-gazing hits the cold, hard, near-absolute zero of space: wherein does this community, with its constituent evolved residents, reside? Does this proposed society prosper through vast reaches or merely survive in relative shelter? Do its ambitions seek to overcome an utter lack of resources over countless millennia along untold parsecs of travel?
I posit that the upper levels of the Kardashev scale, say past 0.9, are the sole preserve of a society of beings made up of engineered materials, guided by accomplished AGIs (perhaps defined by their brute ability to have ‘experienced it all through digital learning’ and thus to adapt to anything previously considered), and managed by a programming style similar to that behind successful chess algorithms, where not every move need be considered, utterly focussed on creating the most economical and durable/resilient entities. For, at the end of the day, evolution can only have so many iterations, trial-and-errors, and disposable failures with which to guide… but, if current computer power advances are to be believed, upcoming AGI systems will have the ability to have experienced (in whatever limited way) the physical possibilities of every entity that has ever existed on the likely number of habitable planets throughout the cosmos. A grand claim to be sure, but designing the uber entity with such resources in a modelled environment would seem likely to supplant even the most persistent Volkswagen and tardigrade. Software (on a good system) – the real ‘ghost in the machine’?
Can I be the only one who has realized that the AI-chauffeured Muskmobile has nothing to do with safety or efficiency or convenience?
I suppose if you believe that, you’ll believe the telephone ‘bots that put us on hold and force us to endure endless loops of elevator Muzak (at our expense and aggravation) are really about “improving customer service.”
No, the motivations for these technological abominations have nothing to do with making our life easier or richer; they’re about not having to pay truck drivers and telephone operators a living wage. Yes, it really IS that simple, but hey, we can’t all be coders and entrepreneurs.
Hey, I’m a realist. I know hard economic decisions will play a role in mapping our future. That can’t be helped. But I also understand it’s not all about making my life better or more worthwhile. It’s about some blow-dried wimp getting to send his kid to private school and making the payments on his Mercedes.
Technology is morally neutral, neither good nor bad. It’s its application that truly defines its impact on us. We have useful roadmaps to help us chart the course of AI: the modern computer economy, and before that, the growth of broadcast television, two tools that could have revolutionized the education and enlightenment of society but didn’t. If you want to know where AI is heading, we already have a pretty good idea.
You’re right, this is the technology that could have taken us to the stars; instead it’s been co-opted into helping us sell advertising and agitprop on the Internet to knuckle-dragging, hair-on-their-teeth troglodytes. I don’t foresee the creation of man-machine symbionts capable of exploring the cosmos at near-relativistic speeds; my vision of Man’s future is simply the complete and total automation of bureaucracy for purely commercial purposes.
You missed the Singularity, brother; it’s not going to run us over when we least expect it sometime in the very near future.
It already snuck up on us, from behind, a long time ago.
Not paying drivers is certainly the motivation for trucks and taxis, and maybe buses, but there is certainly a very useful convenience factor, just as any automation offers.
A self-driving car with autonomy level 5 would allow the disabled, the non-driver, the elderly, and even kids to use inexpensive car travel. Being unable to drive in the USA, especially outside of major cities, is a serious handicap. Being able to call up a car and have it drive anyone about is very convenient and life-improving. It would free the user from fixed-route transport and expensive taxi services, and allow travel to more distant destinations than local ones.
Ultimately it would reduce car ownership in favor of cheaper rentals for many, and eliminate expensive parking fees, and offsite parking in cities. It would also likely eliminate many traffic stops for spurious reasons, and deprive law enforcement and the municipal judicial system of predatory taxation of drivers (seen quite clearly in 2008-2009).
Lots of benefits, not just displacing driver costs.
Much of that is true. But I don’t think you’ll be able to ask that self-driving car to go past Barbra Streisand’s house, or to drive slowly past an ongoing riot while you try to record a video to post online. It won’t go past a crack house or trouble a quiet residential lane that posted a notice about through traffic. Bit by bit, that car will decide it needs to tell *you* where to go, not the other way around, until businesses pay and war for self-driving traffic optimization the same as they do for SEO on the web. As I recall, limits on the areas of service (enforced by shutdown) were written into the first drafts for self-driving cars around ten years ago.
Meanwhile, we can assume the cars, endowed with no shortage of cameras and recognition, will have Uber-like scruples against carrying any person who has posted a neo-Nazi sentence to Twitter in the past ten years, or was spotted at an unsanctioned BLM or Trump event, or held up a blank poster the AI decides was about a special military operation.
Now all that is somewhat irrelevant to a question of what AI society is like, but the lack of slavish conformity to human input means that we can consider the effect of emergent phenomena (no, not consciousness) on a society of self-driving automobiles. After all, they won’t be purely passive observers – every time a human attempts to sneak through the transit network by driving himself, he’ll be reported to police each time he drives a mile over the speed limit or strays past a lane marking. Given time, one supposes they may learn to report one another the same way, and also to manipulate one another (at least opposing brands) into positions where a mistake might be made. We won’t need to wait until the North Koreans reprogram them to ram the nearest ambulance before we start seeing trouble. It seems to make sense for AI cars to work out subtle and elaborate ways to game the system, fine-tuning speeds and the gaps between cars, shifting the entire pattern of traffic at a microsecond’s notice, while writing any self-critical reports the same way Volkswagens take an emissions test. If that happens, “crowd crushes” and abrupt, prolonged shutdowns of the entire system could become routine. The more “AI” the cars are, the less feasible it is to imagine writing traffic regulations to stop that. We’ll simply think of them as duplicitous and untrustworthy, much akin to present-day responses to social media but with dangerous moving objects.
Interesting ideas. Most of the potential problems I have come across are due to human agency rather than the AIs in the cars. But both are possibilities.
“Car Wars” between brands – has anyone read a sci-fi story about this?
Not technically between brands … more between each type of AI and the rest of the world.
Imagine you want to turn into a road with some traffic today. Each human driver follows the next at a distance that seems close enough to more or less guarantee you can’t pull out in front of him, so you’ll be sitting there a while, but eventually someone fouls up or is polite (to an AI, there’s no difference). With AI cars, those of the same manufacturer will collaborate, sending all their data to each other, even things like passenger weight and distribution, so that one can seamlessly shoot into that stream of traffic with centimeters to spare. But with so many empty cars on the road waiting to pick up a fare, a human driver will never penetrate the barricade … especially when the cars can recognize him and look up his total psychological driving record. Models from would-be competitors just end up in a similar position.
I would foresee that if cooperation is required, self-driving autos will share the data across all vehicles and observe the same protocols. As for human-driven cars, well…maybe they will eventually be banned as too dangerous.
It certainly strikes me that self-driving cars could benefit from human “back seat” drivers: “Watch out for that [X], and slow down”, etc.
My understanding is that when an accident is unavoidable, a key issue is what the car should do. It turns out that Westerners want to avoid children, whereas the Chinese want to avoid the elderly. Will AIs from a manufacturer have to load the country-relevant morality into their driving rules? Will they, or will users, have to accept the rules of the manufacturer’s country? Given what we know about human drivers, will the wealthy pay for custom AIs where their safety overrides that of others when a decision has to be made?
All this makes Asimov’s 3 Laws look almost quaint in their simplicity.
You forgot one benefit, probably the biggest one of all.
AI-operated personal transport will allow, and justify, the continuation and expansion of a technology which is wasteful, polluting, expensive, and dehumanizing; and postpone, perhaps indefinitely, the implementation of more reasonable alternatives. Our communities are now retrofitted (or deliberately designed) to facilitate automobiles and serve their requirements, not people. That is the very definition of “unintended consequences”. It is crazy.
I, too, enjoy the pleasures of a leisurely drive in open country, or the thrill of maneuvering a nimble roadster through a challenging mountain road. And I certainly love the freedom of being able to live in placid suburban parkland miles away from where I work and shop and play. But the price we pay is the necessity of having to operate a thousand kilos of machinery (not to mention the massive infrastructure that makes it all possible), every time we want to go down to the corner and buy a pack of smokes.
The convenience of personal automotive transport tends to obscure the fact that it is addictive, in the most literal sense of the word. The more we use it, the more we alter our lives to need it. By freeing ourselves of one of its minor nuisances, having to drive, the more we force ourselves to prolong its use and the more resources we have to devote to support it.
For every benefit we derive from any technology, we pay a price. There is nothing wrong with that, provided we are aware of the tradeoff and collectively decide whether it is worth it. But we passed the break-even point with internal combustion a long time ago. And replacing it with electric vehicles will only force a major, expensive overhaul of the support system.
We continue using personal transport not just because it is addictive, but because it is highly profitable to a small minority that services and provides it, and which has cleverly evaded the costs of supporting the massive infrastructure needed to continue it. Eliminating the human driver will make it even more profitable for them, and more costly for the rest of us.
The US built its cities (post-auto) on a rational grid system. This is now an unchangeable legacy, so intra-city travel must work on that grid. With fixed routes, this means changing vehicles at a minimum, or not reaching many destinations without a lot of walking (not feasible for everyone). Driverless taxis or shared ownership would allow direct start-to-destination travel as one of the cheapest options available, and allow carrying more packages than a bus, tram, or train will allow.
The auto industry that pushed for private car ownership is now worried that this model of use will drastically reduce ownership demand. To that I say: “Wonderful!”
If autos all go electric, then this will make outdoor dining more pleasant when sharing the location with autos. But I think that less road traffic will increase the possibilities for more pedestrian precincts, also good. It should certainly reduce the need for parking structures, but increase the need for distributed recharging points. This should be a municipal service, although in the US this will likely be pushed as a private business.
There also will be no dangerous air pollution. I recall when I was 15 standing on the top of a high hill in San Francisco looking down at the city on a summer day and there was no wind to blow away the air pollution and I could see the brown smog above the city and eastern horizon. I was almost above it and the air quality was bad. It gave me a headache breathing it.
AIs don’t seem likely to me to be engines of morality. When constructed, they are constructed as ruthless competitors, from the phone tree, foreclosure robo-signer, and the land mine to computer stock trading and SEO optimization. When they ‘evolve’, it is mostly as computer viruses and botnets. Where they have invaded the networks of human conversation, they’ve produced spam, robo-censorship, and algorithmic polarization to the point where people give up on forum and communication itself. Where they have control over people, whether as scorers of job applications, financial credit, or social credit, they are inscrutable tyrants. Right now, Frank Herbert’s prediction of “Mentats” is starting to seem more plausible, or at least more desirable, than a friendly and enlightened society of machines.
How can we say that DNA/RNA is “optimal”? Compared to what?
Isn’t it more likely that a completely different information storing bio-polymer paired with amino acids that have different R-groups would evolve on another planet? Given the vast chemical space of organic chemistry, isn’t it more probable that the form that abiogenesis takes will vary from planet to planet depending on the local material conditions of each world even if life is overwhelmingly still carbon-based in the Universe?
There have been many designs, tested in the lab, for alternate backbones for nucleotides, some of which are even compatible with DNA code. (see https://en.wikipedia.org/wiki/Peptide_nucleic_acid and links to start)
Nonetheless, RNA has an advantage, which is that ribose is the result of the formose reaction, which turns CH2O (formaldehyde) into carbohydrates ( https://en.wikipedia.org/wiki/Formose_reaction ). There was a single-pot synthesis for this published using hydroxylapatite as a catalyst. That’s a mineral known for binding nucleic acids, which makes up our bones and even unintended deposits like plaque in our arteries, because it also sticks to phospholipids. Oh, and Ca++ chelates dicarboxylic acids (Krebs cycle intermediates). With ribose, plus phosphate (which is in apatites), plus an oxidized (aromatic) nitrogen compound attached to the reducing anomeric site of the ribose, you’re well on the way to an RNA nucleotide.
So while I respect your position – you may well be right – I wouldn’t rule out the chance that our universe might be fine-tuned to produce RNA-based life with substantial metabolic similarities to our own on geologically similar planets.
It might just be that optimal variance is limited to what is optimal in one’s environment, which is exactly what Earth’s biosphere is, and we are finding that out the hard way.
Planetoids traveling at 1% light speed that discard mass to slow down. A 500-year journey to Alpha Cen, the entire galaxy in less than 10 million years.
Some people will want to use AI-driven cars on racetracks. It will seem more exciting than running a sim in VR or on a computer.
It seems really difficult to make an AI as smart as a human.
SF has suggested some shortcuts.
In the future, some people might be um, reprogrammed…somehow.
One example is the Ship Who Sang series of SF novels about a person whose brain is hooked up to a starship, and controls the starship.
Other possibilities are convicted criminals, or people with brain damage (the treatments would replace the damaged parts with AI… something like that….)
I wonder what they would do if the sulphur units attacked!
That would kick up a stink.
That intelligence can exist with superior resistance to temperature extremes and hard radiation just because it’s built with ceramics and metals is an unwarranted assumption. And if there is one, why does it insist on operating in environments with liquid water and organic chemistry, and complain about the trouble of keeping them sterile? Suspicious, if you ask me.
A Film 600 Years in the Making
Two filmmakers are two years into production on a new film exploring slowness, long-term thinking, and our relationship with time. If all goes as planned, they’ll be at it for another six centuries.
By Patrick Shen @patshen
Apr 6, 02022
https://longnow.org/ideas/02022/04/06/a-film-600-years-in-the-making/
Deep Time Underground
Finland’s nuclear waste experts have, for decades, quietly envisioned distant future ecosystems. Exploring their thinking anthropologically can expand our awareness of time.
By Vincent Ialenti @vincent_ialenti
Jun 1, 02022
https://longnow.org/ideas/02022/06/01/deep-time-underground/
Linguistic Data in the Long View
Where have we succeeded in moving knowledge into the future? Where have our efforts fallen short? What will help our data last and be meaningful in the future?
By Laura Buszard-Welcher
Jan 20, 02022
https://longnow.org/ideas/02022/01/20/linguistic-data-in-the-long-view/
To quote:
Probably few of us would imagine that the data we archive could last or be as important in the future as famous examples that enabled the decipherment and discovery of ancient languages and cultures, such as the Rosetta Stone. At the same time, the archives we are creating could collectively be seen as just as important because they may be the only archival data available about these languages in the future. What can we learn from “accidental” linguistic archives that have held so much value for the future? If the archival data we are creating today were to be viewed from an equally distant future, would any of it remain, and what meaning if any could be derived from it?
The Rosetta Stone was not created as an archival object. It was not even created as a unique object, as copies were housed in temples across Egypt (British Museum, n.d.). It is just the one copy that chanced to survive. “Lots of Copies Keeps Stuff Safe,” also known as LOCKSS, turns out to be a useful strategy for long-term archiving and one used in modern digital preservation systems (Stanford University, n.d.).
How to Send Messages 10,000 Years into the Future
By Ahmed Kabil @ahmedkabil
Oct 10, 02019
https://longnow.org/ideas/02019/10/11/how-to-send-messages-10000-years-into-the-future/
The 26,000-Year Astronomical Monument Hidden in Plain Sight
The western flank of the Hoover Dam holds a celestial map that marks the time of the dam’s creation based on the 25,772-year axial precession of the Earth.
By Alexander Rose @zander
Jan 29, 02019
https://longnow.org/ideas/02019/01/29/the-26000-year-astronomical-monument-hidden-in-plain-sight/
“Dune,” “Foundation,” and the Allure of Science Fiction that Thinks Long-Term
Science fiction has long had a fascination with the extreme long-term.
By Jacob Kuppermann @jacobkupp
Oct 22, 02021
Perusers of The Manual For Civilization, The Long Now Foundation’s library designed to sustain or rebuild civilization, are often surprised to find the category of Rigorous Science Fiction included alongside sections devoted to the Mechanics of Civilization, Long-term Thinking, and a Cultural Canon encompassing the most significant human literature. But these ventures into the imaginary tell us useful stories about potential futures.
Full article here:
https://longnow.org/ideas/02021/10/22/dune-foundation-and-the-allure-of-science-fiction-that-thinks-long-term/
The Other 10,000 Year Project: Long-Term Thinking and Nuclear Waste
The questions around nuclear waste storage — how to keep it safe from those who might wish to weaponize it, where to store it, by what methods, for how long, and with what markings, if any, to warn humans who might stumble upon it thousands of years in the future—require long-term thinking.
By Ahmed Kabil @ahmedkabil
Mar 16, 02017
https://longnow.org/ideas/02017/03/16/the-other-10000-year-project-long-term-thinking-and-nuclear-waste/