If biological life gives way to its own creations, should we adjust our SETI outlook to include entire civilizations composed of artificial intelligences? A postbiological culture was certainly on the mind of the Czech writer Karel Čapek (1890-1938), whose work is the subject of today’s essay by Milan Ćirković. It’s a good time to reassess this author as we careen toward what may or may not be a ‘singularity,’ when digital intelligence eclipses our own. As Ćirković explains, Čapek was an utterly indefatigable writer whose work is less well known in the West than it should be, given its significance not only to science fiction but to the study of the future. Dr. Ćirković is a research professor at the Astronomical Observatory of Belgrade, the author of numerous research and professional papers as well as three research monographs and four books, the most recent of which is The Great Silence: The Science and Philosophy of Fermi’s Paradox (Oxford University Press). Read on to learn about the life and work of a key figure in our conception of the human, and post-human, future.
by Milan Ćirković
This year marks a century since the world premiere of Karel Čapek’s drama R.U.R. in the town of Hradec Králové in the Czech Republic. By that very fact, it marks a century of the word “robot,” which has spread into all world languages from the very title of Čapek’s drama, since the latter is an initialism standing for Rossumovi Univerzální Roboti in the Czech original (meaning Rossum’s Universal Robots). It is the name of the megacorporation responsible for the introduction of robots as a cheap and versatile workforce, and – no big spoiler there, considering how famous and even canonical the drama has become – ultimately responsible for the extinction of the human species.
According to an interview given to a Czech newspaper more than a decade later, the true creator of the word was Karel’s brother, Josef, a talented painter and poet. While Karel could not decide what to call his artificial people, Josef (who lived with him at the time) suggested the word “robot,” coming from robota, meaning hard work, usually done involuntarily in serfdom. A similar word, denoting labour or hard work, is present in all Slavic languages (e.g., “робота” in Ukrainian, “rabota” in Serbian and Croatian, etc.). Formally, of course, one could argue that the word was coined in 1920, when the drama was written and Karel’s conversation with Josef took place, but the neologism began to spread only after the premiere in 1921.
R.U.R. is arguably still the most important and the best written science fiction play ever. It is far from being Čapek’s only claim to fame, however. On the contrary, he was extremely, unbelievably, fantastically prolific for a man of lifelong poor health who died at 48. He wrote a dozen novels, hundreds of short stories in all genres, a book on gardening, five books of letters describing various travels all over Europe, children’s books, literally thousands of newspaper articles, essays, vignettes, and Borges-like “apocrypha”. His novels, both realistic (Hordubal, Meteor, Ordinary Life, Life and Work of the Composer Foltýn) and SF (Krakatit, The Absolute at Large, and his most significant work, War with the Newts), are all very complex affairs, full of difficult philosophical and psychological ideas, but also mostly ironic and often spiced with Monty-Pythonesque black humor.
His work was extraordinarily popular all across Europe between the world wars, especially among people of the Enlightenment tradition, frightened by the rise of mass-murdering totalitarianism in Italy, Germany, and the Soviet Union. He was repeatedly nominated for the Nobel Prize in Literature, but according to since-uncovered documents, his fierce anti-totalitarian stance made him a politically undesirable laureate for the (always politically shy) Swedish Academy. A self-identified radical centrist, Čapek viscerally hated nationalism and nationalism-inspired tyrannies, such as those of Mussolini and Hitler, and he openly mocked the latter’s pretensions in War with the Newts; he also deeply despised communism and called it the surest road to total economic and cultural impoverishment.
The Munich Agreement, signed on September 30, 1938, effectively enabled the destruction of Czechoslovakia by the Nazis, first by the cession of the so-called Sudetenland, and subsequently by the occupation of the rest of the country by March 15, 1939. The occupation was followed by the immediate imposition of all the measures of totalitarian terror, destruction of Czech institutions and culture, bloody purges, arrests, deportations, etc. Čapek was perhaps fortunate not to live to see such a horrible denouement; he died on December 25, 1938, apparently as a consequence of a sudden heart attack while working in his garden. By a dark irony worthy of Kafka, or indeed Čapek himself, the Nazi occupiers ordered his arrest and transfer to a concentration camp a few months later, unaware (“an administrative error”) that he was already deceased. Josef Čapek was arrested and later murdered in the infamous Bergen-Belsen concentration camp.
To return to R.U.R., there is some debate on the proper conceptualization of Čapek’s robots. The author himself contributed to this, since he maintained a kind of delightful ambiguity between the “natural” and the “artificial”, which was a kind of philosophical point with him. In a subsequent letter, whose English translation was published in Science Fiction Studies, he insisted on a biological substrate as the basis of his androids, imagined as “biological machines” rather than the metallic creations we continue to associate with the concept of a machine.
In the final analysis, the nature of the substratum is irrelevant: even if we did not know it in the 1920s, we are confident now that life is just biochemical machinery of high complexity. Just a couple of years after Čapek’s drama had its world premiere, the first serious hypotheses about a completely naturalistic origin of life were put forward by Alexander I. Oparin and John B. S. Haldane. They were considered speculative (if not, ironically, more science fiction than science) until, a few decades later, an ambitious graduate student named Stanley Miller and his mentor Harold Urey performed what was perhaps the most spectacular and most important experiment of the 20th century. In an analog simulation of Earth’s early, reducing atmosphere, Miller and Urey achieved the synthesis of many organic compounds, including vital amino acids (and many more were discovered decades later in the original samples by Miller’s student Jeffrey Bada). This and many subsequent developments in abiogenesis studies showed how easy abiogenesis probably is in realistic situations, where many millions of years and huge spatial volumes/surface areas are available. Hence even the “rare Earth” theorists have consistently argued that simple life is probably ubiquitous throughout the universe. And from the information view of life, the substrate is positively irrelevant.
Obviously, the heritage of R.U.R. is not all roses – after all, the bots of internet infamy take their name from a shortening of robots. And while a robot rebellion is unlikely to take such a melodramatic form as in Čapek’s drama, the threat of making humans irrelevant in the labor market, or even entirely superfluous, looms large. Ultimately, all worries about AI risk, as elaborated in such a brilliant book as Nick Bostrom’s Superintelligence, follow from the apocalyptic vision of R.U.R. and its prototypical Robot Rebellion.
The drama is highly relevant today for some additional reasons, however, notably for study of the future as well as the astrobiology/SETI field. It offers a first glimpse of postbiological evolution, which is likely to be the dominant form of evolution in the universe in the fullness of time, as suggested, since about the turn of the century, by diverse authors such as Steven J. Dick, John Smart, Juan Chela-Flores, Anders Sandberg, Abraham Loeb, Joseph Voros, and others. Until we accept the (transhumanist?) premise that the design space of postbiological evolution is much larger still than the design space of the good ol’ biological one, our way of conceptualizing searches for bio- and especially technosignatures will be seriously limited.
Image: A first edition of the play, with cover designed by Josef Čapek. Aventinum, Prague, 1920. Credit: Wikimedia Commons.
Perhaps the most valuable legacy of R.U.R. is, in fact, its thought-provoking ethical ambiguity, which clearly follows from Čapek’s wedding of a deeply understood evolutionary perspective to his unquenchable humanism. Superficially, it’s an unsolvable dilemma: if one understands evolution, one has to admit that humans and all their creations are emergent, but ephemeral, accidents. Humans are mammals; mammal species last a couple of million years before going extinct. On the other side, humanism tells us that our creations and values carry a spark of persistence, if not true immortality.
A solution, as Čapek powerfully intuited, is a kind of postbiological evolution. If present-day humans become obsolete – the process undoubtedly quickened by our many flaws – this need not mean that our creations cannot, and indeed ought not, succeed us. Technically speaking, Čapek’s robots commit the ultimate genocide – and yet, strangely enough, we do not feel offended or enraged by such a turn of events (neither does the last surviving human protagonist of the drama, who helps the robots with procreation). Something new and wonderful is happening in the universe.
Sounds rather similar to the Polish writer, Stanislaw Lem.
As for robots and intelligence, I am rereading Pamela McCorduck’s Machines Who Think. The introduction includes the mythological creations of Hephaestus, the various lifelike automata like Vaucanson’s duck, and the creation of the Golem, before considering Čapek’s robots.
Inorganic, metal robots like Asimov’s vs. organic androids are a recurring theme in SF. As androids tend to be much closer to humanity than metallic robots, the moral issues are examined more frequently, e.g. Blade Runner (based on Do Androids Dream of Electric Sheep?). We tend to see metallic robots as inhuman, and therefore we worry less about how we treat them. Asimov blurred that line with his humaniform robot characters, like R. Daneel Olivaw. Clarke (and Kubrick) also blurred the line with the depth of character of HAL 9000, arguably more human than the waking crewmen, Bowman and Poole. The fear of humans being supplanted is a perennial plot in SF, from humans losing (the Terminator series) to humans winning (Herbert’s Butlerian Jihad). It remains topical, e.g. Bostrom’s Superintelligence.
If AI continues to improve to where we get true AGI, at some point we are going to have to address the rights of those AIs however embodied, just as we have given animals increasing rights. If we fail to do so…
There are no robots; and there is no such thing as artificial intelligence.
Or, to phrase it another way, all intelligence is artificial and all living things are robots.
Unless we are committed to some form of vitalism or shaped by religious conviction, all organisms and all personalities or consciousness(es) are machines. The “mind”, “soul” or the “spirit” is not in the modern scientific lexicon, unless as a euphemism for mere process. Even if that philosophical distinction does indeed exist in reality, at least it is not often described as such in scientific communication.
Living things are all conceptualized today as machines, complex devices composed of colloidal suspensions and solutions of proteins. There is no reason any animal, plant or microbe today cannot be visualized as a machine, albeit not one of cams, springs, levers and gears, or of wires, electrical components or silicon chips and wafers. Whether wetware, steampunk or digital, all our concepts of life are strictly mechanistic, totally subservient to and explainable by natural laws – be they physics or chemistry. As for thought, consciousness and all the rest, well, we just call it “software”.
But we still cling to the idea that some forms of life are “biological”, that is, naturally evolved without intelligent design by the random laws of evolution and natural selection. AI and robots are conceived as artifacts, things deliberately created by highly evolved biological entities. Living, evolved life and machine or artificial life may even be indistinguishable. There is no reason why artificial life may not be at least partially composed of proteins, cells, tissues and organ systems. And many purely biological constructs exhibit machine-like components such as skeletal levers and joints, or nerve wiring. Artificial life could very well borrow from and take advantage of the many engineering successes of natural selection. It is not even clear if we could determine if artificial life was actually machine or not. Perhaps the only clue would be whether a system or process exhibited some evidence of Darwinian evolution and adaptation of some prior structure or whether it was clearly designed from the ground up by an intelligent designer for a specific purpose.
Look at a brewery, a complex man-made system of kettles, pipes, reactors and pumps, yet it also employs as essential components various cultivated yeasts and microbes, as well as human operators and builders. People can’t make beer, and neither can microbes. The brewery itself is also incapable of making the product. But put all three together…
And why have a substrate at all? Many purely abstract structures exhibit life-like characteristics such as metabolism, mobility, growth, reproduction–even behavior. A bureaucracy is an administrative machine very often with its own desires and needs, independent of those who toil within it, or are manipulated by its activities. (I’m thinking of my last job!) Governments, corporations, civilizations all act as if they had a mind of their own, independent of the men they supposedly serve, since, after all, living things are really just organized information processes that manipulate matter and energy. The internal communication need not be an AI and computer network, or even telephone lines and switchboards, it could simply be memoranda inscribed on papyrus.
The only real question we need consider is whether we would even recognize a machine civilization or organism as such. It might look a lot like a biological or social collective. We might even be living in one right now, or in some transitional form, blissfully unaware that we really aren’t in charge at all. Maybe the Singularity already happened, centuries ago, and we just missed it.
I disagree. I think we should stick to the meanings of natural and artificial. Darwin’s theory of evolution drew on artificial selection – i.e., breeding – as opposed to natural selection, where populations evolve in response to their environment, including other individuals in the population.
Natural intelligence evolves to meet the survival needs of the phenotype. Artificial intelligence is under no such constraint – e.g. theorem solvers have no impact on the survival of the software or computer it runs on.
If we find a metallic robot civilization in space, we can be sure that their ancestors must have been created, as they could not have evolved naturally as we have. Conversely, if we find biological organisms and can determine through evidence that they have lineages going back into deep time, we can be fairly sure they are natural.
It is certainly possible that life was created by other beings and deposited on Earth, which would make our origin artificial, but clearly natural processes have continued since then. Clarke’s suggestion in 2001: A Space Odyssey that man-apes were uplifted by aliens, if true, might indicate our intelligence is not entirely natural. I would argue, moreover, that our culture and education are making our intelligence less natural and more artificial. However, I don’t think a machine-substrate AI can be considered in any way natural, or merely as artificial as our intelligence is.
I thought I made that clear.
“It is not even clear if we could determine if artificial life was actually machine or not. Perhaps the only clue would be whether a system or process exhibited some evidence of Darwinian evolution and adaptation of some prior structure or whether it was clearly designed from the ground up by an intelligent designer for a specific purpose.”
Granted, an Asimov-type robot or a HAL-type computer would certainly not evolve naturally, and a quick dissection would certainly settle that question. But it is not too difficult to come up with examples (from science fiction) where that might not be so obvious. The xenomorph in “Alien” was being weaponized by the corporate villains in the film, and there are hints in some of the sequels that another alien species had already done exactly that. In “The Expanse” series, the protomolecule is an artificial life form created specifically as a weapon. And there are Fred Saberhagen’s “Berserkers”. Of course, appearance in fiction does not prove it can happen in the real world, but it does demonstrate that the distinction between natural and artificial is hazy, and possibly meaningless. Robots need not be able to reproduce directly; they can build factories that make copies of themselves. This isn’t all that different from the larval-adult metamorphosis developed by evolution right here on Earth.
We deliberately modify natural organisms for our own purposes, as in the case of domesticated animals, plants, and even microbes (yogurt, cheese, beer, antibiotics). We may even unwittingly alter natural evolution by our own collateral actions, as I suspect occurred with canids and felines: it is possible they (unconsciously) altered their own evolution and behavior in order to increase their commensal relationship – their usefulness to us. They ensured their survival by domesticating us! This process was relatively recent with cats; in fact, it is still in progress. And this happens with little or no technology or conscious planning involved.
This can be said not just for organisms, but for whole ecosystems. Neolithic man altered biomes by selective hunting, controlled burns, introduced life-forms, etc., sometimes in ways that were not even realized until our own sophistication in ecology made it possible for us to understand what was happening. We can’t always tell the difference between true wilderness and properly managed parklands. We know now that Australian Aborigines made their landscapes more biologically productive and hospitable to human habitation by simple range management techniques. In the American Great Plains, something similar happened when the Native Americans applied the introduced horse technology to the existing grassland/buffalo economy. Human range management doesn’t inevitably lead to environmental desolation; that requires unthinking application of technology and capitalism. Beavers and their dams can make forests more biologically productive and diverse.
The point I’m trying to make is that it may not always be possible to distinguish between natural organisms and artificial ones. The same goes for entire ecosystems. And if we can’t necessarily tell the difference, it isn’t unreasonable to assume that perhaps extraterrestrial intelligences may not be able to tell the difference either.
I agree things will get blurred. If we artificially modify a biological organism, some modifications will be relatively indistinguishable from natural selection, while others will stand out as artificial, e.g. modifying the genetic code to allow different amino acids to be incorporated, or to move away from a three-base codon to a different number, as these will differ from the ancestral forms in the fossil record. Our planet has a mix of natural and artificial ecosystems; the artificial ones are still objectively different from natural ones today, but if humans disappear, they may become “naturalized” over time.
If I remember correctly, Darwin used animal breeding to illustrate how natural selection could work over long periods of time. The only difference between the farmer and the wolf is their selection bias and how efficiently they can realize their bias. The farmer and wolf prefer different traits and have different tools for selecting those traits. The animal being chased or domesticated is still receiving selection inputs from their environment. The most we can say about a fundamental difference is that the selection pressure for the wild sheep is more broadly distributed.
Artificial intelligence that isn’t intelligent will be selected against. Breeders of AI will cull, and more intelligent, more capable AIs will replace their less intelligent, less capable relatives. AI will still have traits demanded by its environment.
In essence, just different objective functions. What might be interesting is the use of GANs to drive the development of consciousness. It would take some effort to define how to measure consciousness, but once that is done, systems might very quickly bootstrap themselves to consciousness, much as AlphaGo Zero did for playing Go.
Yes, I agree.
I am biological.
Looking at robots from an intuitive viewpoint which includes depth psychology and philosophy, we can come to the conclusion that if computers don’t have creative thought, then they will never surpass humans or make them obsolete. If machines could make us obsolete, then they might have a value equal to or greater than that of humans, and I don’t think that will ever become true.
I don’t know for sure that computers can’t have creative thought, and I would like to see scientists perform some experiments to see if computers can be designed to have an intuition, or even a scientific intuition. Can they produce ideas and see how they fit into a system of principles like those in physics?
I was impressed by the supercomputer Watson, which competed with the two top Jeopardy! contestants in history and beat them – though it barely won, and only because the top contestant did not bet the total amount of money he had, while the computer did. What happened with Watson is interesting information for making a robot, because it used a supercomputer that filled a whole room, with many machines on shelves all wired together. If we wanted to make a robot like Data on Star Trek: The Next Generation, we would have to pack a lot of memory and processing power into a small space, the human head. Maybe with quantum computers, and solid-state computers which don’t use or need a hard drive, we could make a Data-like robot. It would also have to have learning capability. Watson had learning capability. At first they thought they might not be able to use Watson, because during the testing and trial runs before filming the show it gave some nonsensical answers to the Jeopardy! questions that a human would not give. They put in a learning program which allowed Watson to look at the answers of the other top contestants so it could correct such errors. Watson also only answered trivia questions, and its data came from encyclopedias, dictionaries, thesauri, and newswires, but also ontologies and domains, according to Wikipedia. It did not have to answer any unknown questions, or make general judgments based on the whole.
Even if computers don’t have creative thought, it does not mean that we will not have humanoid robots in the future. I don’t like the idea that they could replace us, or that they could be considered some kind of species or life form. If we anthropomorphize them, then we get evil robots we have to battle, as in science fiction TV shows.
At this point, computers have been made to be as creative as some animals, like chimpanzees, although creativity is in the eye of the beholder. Computers can create original paintings, prose, poetry, music, etc. It may not be good to human perception, but it is a start.
Cognitive scientist Doug Hofstadter has long thought that true intelligence requires making analogies, which is surely a creative process. Decades ago his group made toy programs to solve letter-string analogies of the form “if abc changes to abd, what does ijk change to?”. The results were comparable to human responses, although the program came up with some unexpected sequences. Making more abstract analogies requires a lot more capability than computers have today, but how long into the future?
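The flavor of those toy programs can be sketched in a few lines – this is emphatically not Copycat itself, just a hypothetical miniature with a single hard-coded rule; the function names are mine:

```python
# A miniature letter-string analogy solver, far simpler than Hofstadter's
# Copycat: it knows exactly one rule, "replace the final letter with its
# alphabetic successor", and applies it to the probe string only when the
# example pair actually fits that rule.
def successor(ch):
    return chr(ord(ch) + 1)

def solve_analogy(source, target, probe):
    # Does the example pair (source -> target) match the known rule?
    if target == source[:-1] + successor(source[-1]):
        return probe[:-1] + successor(probe[-1])
    return None  # transformation not recognized

solve_analogy("abc", "abd", "ijk")  # -> "ijl"
```

The interesting cases are exactly the ones this sketch punts on – e.g. “xyz”, where ‘z’ has no successor and a creative re-description is needed, which is where Copycat’s real machinery came in.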
As most organisms grow from an egg using a developmental program, their brains and intelligence must be “on board”. Social insects also have “distributed brains”, in that their responses are based on a higher-level order of the colony. Human intelligence is arguably similar, as we are a social species, unlike, say, domestic cats. However, robots need not operate that way. Our Martian rovers have some onboard intelligence, but rely on humans for direction. Asimov wrote a robot story where, instead of each robot having its own positronic brain, all the robots could be controlled by a centralized computer brain.
Watson is more like having all of Wikipedia in your head, ready to interrogate. Interestingly, the program to have Watson integrate all medical knowledge to diagnose and treat cancers at Sloan Kettering proved a dud, and the experiment was terminated. This may have been because of the high level of expertise at that hospital. For other institutions with less expertise, it has proven of more value to the doctors.
If a Watson or derivative successfully integrates knowledge and suggests novel treatments/theories is this creative or not? If a human does the same, I think we would say that was creative, and in that sense the computer has passed that particular Turing test.
One would still have to write a program that could do that, and that would be a sophisticated program with complex algorithms. One might also have to change the hardware – the type of computer, such as one that uses quantum qubits – and the computing power.
There’s nothing to keep some elements from faking AI creativity and consciousness for their own ends.
Like encouraging apathy, for example.
By coincidence, the BBC news had a piece today on Čapek.
The 100-year-old fiction that predicted today.
Also mentioned is Yevgeny Zamyatin’s “We”, which is dramatized on BBC Sounds as We, with Anton Lesser in the main role as D-503, and is a clear influence on Orwell’s “1984”.
“should we adjust our SETI outlook to include entire civilizations composed of artificial intelligences?”:
Absolutely. Maybe bio-to-cyber is a very common evolutionary step. And the machines note that we mostly care to talk about fears (running out of resources, being invaded, …), sex and war: maybe we have nothing of interest that they want to hear, or maybe they’re just waiting until we undergo our own phase transition and have enough in common to talk about the things that interest machines. Off-planet living does have advantages for machines: no volcanoes, earthquakes, ice ages, etc. If a machine does not need to live on the surface of a planet, but can mine asteroids and the like, they could be anywhere, not just on planetary surfaces. To me, the most unfathomable thing is: what are their motivations? If those are sufficiently different from ours, how can we guess where they are, what they want to talk about, and when?
I am familiar with R.U.R. from an anthology found when I was a child and re-read decades later. There are interesting departures from our expectations about robots. And I would like to add Russian to the list of languages employing this root word for work: rabota (n.), rabotat’ (infinitive), and the toiling classes, rabochie (adj. plural). All this makes Čapek’s examination of this topic parable-like, with many possible interpretations. Reading R.U.R. later, I was surprised to re-discover how lifelike (?) the robots were. Yet still, they were a sober type of supermen, perpetuated in many other stories. The last words or directive of the play are memorable too…
Given all that, I am still skeptical of the notion that some type of synthetic life emerging from biological life is an inevitable ascendancy in the nature of things. If it is, then shouldn’t the oceans be filled with them by now? Or, as H.C. mentions above, why couldn’t a brewery or the General Motors Corporation constitute such? You could say that the latter has an artificial intelligence, composed of organic entities that have little voice in what it does or decides, and the brewery cited above illustrates humans fulfilling the role of single-celled organisms. And with the analytical knife devised by Robert Pirsig in Zen and the Art of Motorcycle Maintenance, it is possible to dissect or flowchart living beings and motorcycles in much the same manner without addressing cognition or self-awareness…
This is not a proof of equivalence, however. This is more like astronomy’s issue with astrology. Both astronomy and astrology perform calculations, but astronomy would not attempt to address individual human destiny or volition. In other words, astronomy does not address neurological manifestations such as the mind, whether astrology has a grip on them or not (it does not, in my opinion, fwiw).
When we speak of artificial intelligence, much of our attention is directed toward Jeopardy- and chess-playing computers. But there is no motivation in the computers other than the programming. And they certainly did not invent either of the games. Even if we programmed a computer to like these games, we would still have the issue of convincing ourselves that the computer really liked the games or was actually faking it. Another facet of “artificial” intelligence.
Life before the presumed artificial intelligence arose through organic chemistry and changing environments on this world, which both deterministic and other approaches (fill in your preferred one) can very well regard as opportunistic. You might say that AI has an opportunistic situation, with capital and labor poised to displace human life of limited capability, limited focus and high economic cost. But why should it go any further? Faced with environmental crisis, for example, what would artificial intelligence do? Consult the programmer? In a variation on Čapek’s end to R.U.R., perhaps it would pray…?
Good point. Does a computer have values and feelings by which it judges something as good or bad? It might not.
Dobar dan!
Thank you for a provocative and articulate article. I wonder if Frankenstein could be broadly classified as the first literary attempt at post-human or man-made life created by “science”.
I saw Metropolis recently (the most complete version). It was a masterpiece of futurism with a robot literally replacing a human as a central element of the plot. The sets, imagery and choreography have aged well in my opinion.
I remember having much the same reaction to Metropolis when I first saw it back in the 1960s at a gathering of science fiction fans. Extraordinary work.
I suppose that “motivation” is applicable only to an existing entity, and meanwhile we do not know of any such “object” whose motivation we could consider.
In short: no object, no motivation…
Miller & Urey showed that organic molecules can be formed by natural processes from simpler ones. The presumption is that further molecular complexity can be similarly generated. When such molecules interact with each other and simpler molecules, they in effect constitute molecular machinery. Those sets with more pronounced tendencies to survival and replication would dominate. Fencing themselves off with cell membranes would be a watershed event.
Cells whose molecular cell machinery is better adapted to survival, growth and replication will dominate, and similar domination will be seen at every level from flocks/herds to bands/tribes (and in humans) to nations and empires.
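That domination argument can be sketched numerically. Here is a hypothetical two-replicator competition over a finite, renormalized resource pool – the fitness numbers are illustrative, not measured chemistry:

```python
# Two self-replicating species sharing a finite resource pool: each
# generation both multiply by their fitness, then the pool is
# renormalized to total 1. Even a small fitness edge compounds into
# near-total dominance.
def share_after(generations, fitness_a, fitness_b, initial_share=0.5):
    a, b = initial_share, 1.0 - initial_share
    for _ in range(generations):
        a *= fitness_a
        b *= fitness_b
        total = a + b
        a, b = a / total, b / total  # finite resources: renormalize
    return a  # final population share of species A

share_after(200, 1.05, 1.00)  # a 5% edge -> species A holds ~99.99% of the pool
```

The same compounding logic applies at every level mentioned above, from cells to empires: a modest per-generation advantage is enough.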
Backtracking, one should be able to reduce it all to physics. However, there remains the hard problem of consciousness. Many mental phenomena can be explained in terms of the firing of specific sets of neurons. Yet when we see a red flower, we do not experience electrical impulses in the brain; we see a red color.
Or at least I do. Not having experienced anyone else’s consciousness, I cannot say for certain that everyone else is not a zombie. But we unwittingly perform abbreviated Turing tests to exclude that possibility. Infants and small children often do not seem to make that differentiation with regard to toys, etc.
The system that in effect programmed itself was DeepMind’s AlphaZero, not IBM’s Deep Blue (whose evaluation was handcrafted): given only the rules and a few hours of play against itself, it reached superhuman strength at chess. Garry Kasparov felt the program showed human intuition.
Earlier, DeepMind’s AlphaGo, trained partly on a large corpus of human games, had defeated the world Go champion; its successor AlphaGo Zero was trained entirely by playing against itself and beat the original AlphaGo 100 games to 0. A subsequent version, MuZero, learned to play chess and Go without even being given the rules.
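The “rules alone” idea can be sketched in miniature. AlphaZero itself couples a neural network with Monte Carlo tree search; the simplest analogue is exhaustive self-play search over a toy game. The one-pile Nim example below is purely illustrative and is not anything from DeepMind’s systems.

```python
# Toy analogue of learning "from the rules alone": exhaustive self-play
# search of one-pile Nim (take 1 or 2 stones; taking the last stone wins).
# Illustrative sketch only, not AlphaZero, which replaces full enumeration
# with a learned neural network plus Monte Carlo tree search.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile: int) -> bool:
    """True if the player to move can force a win from this pile size."""
    if pile == 0:
        return False  # the previous player took the last stone and won
    # a move is winning if it leaves the opponent in a losing position
    return any(not wins(pile - take) for take in (1, 2) if take <= pile)

# Game theory says the mover loses exactly when the pile is a multiple of 3.
assert all(wins(p) == (p % 3 != 0) for p in range(30))
```

Nothing beyond the move rules was supplied, yet the search recovers the known optimal strategy; scaling that idea to chess and Go is what requires the learned approximations.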
However, many such computers fail at common sense (John is James’ son; who was born first?) and extrapolation to novel situations. Shown pictures of dogs and cats, a computer may not recognize a tortoise as an animal.
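The gap described above is exactly the kind of relation a symbolic system encodes in one explicit rule. Here is a hypothetical toy knowledge base (the names and the `son_of` relation are invented for illustration, not taken from any real system such as Cyc):

```python
# Hypothetical toy knowledge base illustrating an explicit commonsense rule:
# a parent is necessarily born before their child.
facts = {("John", "son_of", "James")}

def born_first(a: str, b: str):
    """Return whichever of a, b must have been born first, if decidable."""
    if (a, "son_of", b) in facts:
        return b  # a is b's son, so b was born first
    if (b, "son_of", a) in facts:
        return a
    return None  # the knowledge base cannot decide

assert born_first("John", "James") == "James"
```

A statistical learner has to infer such regularities from data, and often fails to, which is the sense in which these systems lack common sense.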
With enough «brainpower» computers may overcome limitations. But even if then conscious, they may need a substantial evolutionary phase to become subject to the same imperatives of survival, growth and replication.
Even with a singularity, the matter of imperatives remains. What, if anything, will a singularity want? Likewise with a machine civilization: what will be its values and motivations?
Perhaps this is a synthesis of several arguments or lines of reasoning:
Several of us are skeptical about AI emergence. But it is offered as a conceptual evolutionary step from biological life originating in organic chemistry, contingent on biological life’s emergence in the first place, with its own odds to face. Unless you have reason to think that silicon layers with appropriate doping make big strides on appropriately arranged planets, the number of inorganic life forms would depend on the number of successful life forms of the earlier variety. And in some environments there might be no impetus for such a leap, or it would be beyond our SETI scope (e.g., ocean worlds without a concept of an external cosmos). Then, on the other hand, we have arguments about distinguishing intelligence, whether artificial or authentic. You could look at this in two ways: immersed in an institution as an organ, or, as usual, as an even less significant element equivalent to a motile cell, you can’t see much evidence of intelligence, only things to complain about. From another perspective, an institution like the Ford River Rouge plant could be seen as an organism eating iron, coal and fluids to excrete automobiles, with cellular bodies stoking the fires…
Not intelligent, no. But suppose the originator of a device akin to Deep Blue (if one has not done so already) in despair turns over the board, and the presidency, to such an entity and gives it the instruction “Save us.” Competitors in defense counter with the same strategy. Draw your own inferences.
It was the acquisition of commonsense knowledge that was/is the purpose of Doug Lenat’s Cyc project. It had some success, but I gather it has not worked as expected despite decades of input of facts.
Nikola Tesla invented robotics in 1898, one hundred and twenty three years ago.
https://www.teslasociety.com/robotics.htm
I strongly disagree with the idea that we are just robots; I myself have had numerous incidents of precognition. We cannot make life except in the old-fashioned way, and AI is just a learning process, not intelligence. Why do you think your mother had eyes in the back of her head…
Agreed! Our awareness transcends the recognized physical senses. A future realm of science will be the exploration of those aspects of our reality. Can AI have similar capabilities? Only speculation, but I suspect that life has evolved in a milieu of non-physical influences. Perhaps DNA is the resonator with the non-physical. Total speculation, of course. AI based on microchips may never be more than elaborate adding machines.
What’s the big deal about consciousness? Or for that matter, self-awareness? Creativity? This is all spooky talk, minds, souls, spirits.
We haven’t even got good working definitions for any of these concepts.
We know for a fact there exists a machine that exhibits these characteristics, or properties: The human brain. If blind evolution can do this, why can’t a sufficiently advanced technology? Of course, if these spiritual dimensions are not physically manifested, or if they are part of some supernatural intervention, then maybe I’m mistaken. But that isn’t what we are arguing, is it? That is a different question altogether.
If consciousness can’t be duplicated, can it be simulated? But that just brings up another question: if it can be simulated convincingly, what’s the difference? If there is no way to tell the difference, does one actually exist?
Some contemporary philosophers are convinced there is no such thing as consciousness, that it is an epiphenomenon. In other words, it is a self-referential loop that our brains have evolved in order to impose artificial order on a chaotic, random universe. Not only do we see ourselves as autonomous creatures (psychology), we see others that way (sociology). We even see nature that way (religion).
Maybe it’s just an algorithm running in the background to help us give meaning to the meaningless with some kind of confidence.
I think I think, therefore, I think I am.
I think you are on the right track. The consciousness people feel is essentially a paranormal phenomenon, because there is no scientific way to define what it means to “really feel” something. If biological neural networks *have* developed some entropy-defying means of recalling a bit of information written in the future rather than the past, then the doctrine of past causality falls apart. There is more than one possible value for bits of data following a circular path in spacetime. In that case the boundary conditions of the universe may be imposed here and now, in the present or during some part of human history, rather than strictly from events before/at the Big Bang. When these boundary conditions have an influence on past events, they can be called qualia, and when they have an influence on future events, they can be called free will. (There are seemingly major contradictions with this that are essential to explore, but too much to write here)
Unfortunately, robots of almost the R. U. R. variety seem altogether closer than we might have thought possible twenty years ago. We see disturbing images of “organoids” made of human brain cells, grown for research. How far is left to go before not merely organs as promised, but entire human-ish organisms, or organisms of other shapes made from human cells, are routinely 3D-printed for industrial purposes? Such organisms might be all that Čapek feared, but if such organisms actually do possess some ability to defy causality, which might be loosed from repression mechanisms evolved for self-preservation, they could create extraordinary hazards that he never envisioned.
P.S. Looking at the story at http://preprints.readingroo.ms/RUR/rur.pdf , I am struck by its succinct expression of modern sensibilities regarding education: (“Young Rossum invented a worker with the minimum amount of requirements…”)
I seriously doubt it. There are a number of reasons to believe that your brain is either fooling your perception and/or that your understanding of probability and perception of events is fooling you.
Any true ability to see the future beyond forecasting would have serious consequences concerning the nature of reality.
[As we know, all prior claims of precognition that have been tested proved false. The claims of Dixon and her ilk to have ESP have been thoroughly debunked. Sadly the Randi offer of a substantial remuneration for demonstrated ESP phenomena has been closed with Randi’s death, so you cannot accept their challenge.]
I don’t want to get off topic, but James Randi did not disprove or invalidate ESP, or precognition because he was not a scientist and his tests were not in any way sophisticated and did not use any type of experimental, physical setup but only anecdotal evidence.
Furthermore, he did not know anything about parapsychology, Jungian or depth psychology, and he clearly admitted that he did not want to know any theories of how ESP, precognition or parapsychological effects might work. It was much easier for him to invalidate something that appeared to be magic or physically impossible to the five senses, which only made him look good in front of the average bear or intelligence. Learning any theories might take the use of grey matter and intuition.
He did show how Uri Geller’s spoon bending could easily be faked or was a hoax, but that does not mean in any way that ESP, precognition and parapsychology have been invalidated or are only for people who are too imaginative.
You don’t actually believe that Uri Geller, an ex-stage magician, has psychic powers, do you?
Randi’s experiments were perfectly fine – showing how controlled experiments stymie dowsing and other supposed ESP powers. He used his skills to detect cheats. If there are any people with ESP powers, they have never taken up his foundation’s challenge to claim the reward.
Scientists, like Taylor, who assumed that people were like natural objects were easily fooled by real-world trickery. It took magicians to uncover the various means by which these powers were faked. One has to be willfully ignorant to not understand how psychics work people in an audience, whether to communicate with the dead or cure diseases. With over a century of psychic investigation, if there were any true capabilities, wouldn’t someone have proved it by now?
One cannot prove a negative, but science pares away false ideas by experiment. By now it should be evident that psychic powers have not been able to pass controlled experiments, which means that the most parsimonious explanation is that they do not exist.
Just because there are people faking paranormal abilities does not mean that it is not a viable subject of study or that there is no such thing. It’s not that his experiments were not science, but they were biased from the start. Some things are more difficult to prove, like precognition and the parapsychological ideas of clairvoyance, clairaudience, clairsentience, synchronicity, etc. One has to study the effects of the unconscious, like dreams, visions and active imagination. James Randi had no knowledge of these and did not study them.
My point was that he kept everything simple by only looking for magic and the physically impossible, which he knew from the start that no one could replicate on demand. The products of the unconscious are assumed in Jungian theory to be involuntary, to come to one unbidden, and are therefore not reproducible on demand; consequently they are not given any value by a viewpoint limited to the physical sciences and causality. For this reason I am theoretically biased against the ideas of telekinesis and psychokinesis, as the premise is too will-centered and limited to conscious control. One does not control the unconscious in Jungian theory.
My point is that James Randi never moved beyond his comfort zone and never learned any new knowledge, so his rationalistic or materialistic world view was never challenged. He was not interested in any psychological theories that might support ESP, which require complex thinking and intuition. I assume and infer that his inferior function was intuition and that he did not develop it. It’s not just about disproving other people’s ideas, but about having knowledge and experience of them oneself, and self-corroboration. He sold his soul for fame, attention and the image of success in the eyes of the general public, but did he have any mental development as a result? I bet not. I am not saying that his efforts were all bad, though. He might have inspired someone to want to become a detective, or to get interested in science.
Ad hominem attacks on James Randi are not relevant here. His foundation’s challenge was clearly set up to provide definitive answers to claims that could be tested. It was particularly aimed at famous psychics who were bilking the gullible public, but anyone could apply. Not one person managed to pass the challenge during the time it was in operation. That is a strength, not a weakness.
For example, if a believer says they “know God exists because I can feel Jesus in my heart,” how is that testable? It is purely a subjective experience. Historically, people claiming to hear voices (of angels, God, etc.) were believed, sometimes with tragic results, e.g. Joan of Arc. Today we think they were schizophrenic, but there is no way to test this on historical figures.
Back in the 1970s Arthur Koestler wrote an influential book, The Roots of Coincidence, that I read while at university. [I suspect it influenced Rupert Sheldrake and his idea of “morphic resonance.”] Since then, there has been no scientific evidence that Koestler’s ideas were anything more than magical thinking.
The world is full of magical thinking, and science is the best tool we have to separate reality from magical thinking. That is the reason Carl Sagan wrote The Demon-Haunted World: Science as a Candle in the Dark. There are magazines like The Skeptic that carry articles investigating and debunking false claims of psychic powers and other phenomena.
Scientists assume that nature is not out to fool them. But people do exactly that, and that is why Randi and his colleagues were so valuable. Psychologists understand human motives too, which is why their experiments are constructed to prevent this as much as possible.
The secrets of nature have never been easy to understand, and that is still a valid argument today, since there are still unanswered questions and unknowns, so no one knows them all. Debunking charlatans and hoaxes does not always require complex science and physics, and James Randi’s writings and books are not very mentally challenging. Extraordinary claims require extraordinary evidence, not simple evidence anyone can figure out. He debunked the preposterous, but not ESP.
I’ll agree that I don’t have to attack him since I never believed ad absurdum claims anyway and I won’t mention him again. I still think that computers can’t access the psyche including consciousness.
Also, from the fact that James Randi was not interested in any psychological theories of ESP, I assume his auxiliary, inferior undeveloped function was thinking.
@Alex: your refutation of the paranormal is premised on the notion that it is reproducible, safe, and useful, the product of a particularly well-developed mind rather than one that is defective, so that experts can be found willing and able to share their wisdom and prove their case. Note that many traditional attitudes in regard to witchcraft make different assumptions.
A week after 9/11, there was a painting from the 1970s hanging at the east end of a hallway in the UW Madison student union building, called “Action: Apocalypse”, which depicted twin skyscrapers obliterated by an immense fireball. For all I know it may still be there. On the right side of the painting there were four lines of numbers (mostly 0, 1, and 9), and in four different places among those numbers you could make out the sequence 91101. How do you convert that into a controlled experiment The Amazing Randi would accept?
If people paused a moment and looked at that painting in the 1980s, did that affect who they ran into in the hallway, who they met, who they married, when they went on vacation, which flights were early or late? Could the attack have happened precisely the same way, had someone not seen it? What happens if you are able to prod more people to try to foresee more disasters?
See my comment about Koestler above. There is also a lot of misunderstanding of probabilities that leads people into numerology such as “The Bible Code.”
Your example of the Twin Towers painting is an example of that. Beyond that, of course, there are influences. This post and the comments influenced me to check facts on Wikipedia and post replies. But those influences – causality – have nothing to do with “predicting or seeing the future”. Cassandra was a famous mythological figure able to accurately predict the future whom no one would believe. This is not fact but rather a tale about human nature, much like our [science] fiction that warns of the consequences of our nature. Should 1970s eco science fiction that got some things right be regarded as predicting the future?
No. There is no contemporary Cassandra, although there are a number of charlatan claimants.
Such peculiar instances, even well-attested cases such as the novel “Wreck of the Titan”, are not *proof*, it is true. As I said, it is hard to convert them to a controlled experiment. However, they are also not *disproven*. More to the point, having been offered up in advance with what skeptics must maintain could be no hope of reward, they cannot possibly be derided as charlatanry. The opposing doctrine of causality only from the past is inconsistent with general relativity results and seems inelegant in regard to Feynman diagrams. It has no legitimate right to wear the tyrant crown of Null Hypothesis.
Causality violations in conscious thought have a status similar to life on Mars: we can imagine them, and we can propose them as explanations for certain observations, though these will probably be dismissed. If space agencies take elaborate precautions to protect Martian life from potential Earthly contamination, the same seems prudent for human consciousness — with the added impetus that we do know there is something unusual going on there.
Excellent point. Computers might not have access to the unconscious psyche, to which only living things such as animals and people might have a link. Jung wrote that the psyche includes both the conscious and the unconscious, and in his paper on synchronicity he suggested that precognition works through his theory of synchronicity.
David Bohm and Erwin Schrödinger both argued over what consciousness is. With quantum entanglement, we still do not understand the basis of reality. The wave and the particle: which side are you on? Yin and Yang… ;-}
It is indeed a hard problem. Whether Dennett is correct that it is an emergent property of the brain monitoring itself, IDK. I read Anil Seth’s view of this and despite his fame du jour, I think he is wrong.
I suspect that qualia are not that problematic. When you see “red”, I think you are simply recreating the image of the flower associated with “red”, as well as related nearby images like a swatch of the color red. While we cannot know what other people think, I believe the fact that we have common responses to various arts suggests we generally think in the same way and see the same things, such as colors, as others do. We probably see colors the same way our ape cousins do, but differently from animals with different cones in their eyes, like cats.
There is a big difference between general intelligence being difficult to create and it being beyond the realms of observation. The latter sounds like an effort to buttress the “great chain of being”. Theoretically, there should be nothing preventing the creation, or more accurately the nurturing and growing, of new forms of general intelligence. We will do the same thing as Nature: take something without intelligence and mess with it until it’s smart. We will force mutation and provide selective pressure.
I believe we are getting ahead of ourselves, figuratively speaking.
We still don’t know why/how our wetware produces minds/consciousness.
How can we state the limitations and ultimate outcomes of a post-biological intelligence when no single human can completely explain, down to the smallest detail, how they themselves are conscious? When said humans still can’t construct a self-conscious machine? Or, if it can’t be constructed, why not? Is it a knowledge/technological problem? Or does it somehow violate the rules of physics/quantum mechanics?
I would suggest that we still have issues with computational power, algorithms, and lack of embodiment.
“transcends the recognized physical senses”
“life has evolved in a milieu of non-physical influences”
At the risk of going down a rat hole (which I won’t), what you state isn’t correct. Anything that interacts with physics is part of physics. That there may be undiscovered physics changes nothing. Yet to be discovered physical interaction is physics and will be accommodated within physics. The consequence is that, ephemeral as they may be, physics encompasses life, mind and self-awareness.
“Anything that interacts with physics is part of physics. ”
That’s quite a statement for a discipline which set up shop a few centuries ago describing itself as “natural philosophy”.
While I do suspect that astrology, say, versus astronomy has run into a cul-de-sac, there was a period in the historical record when they were much one and the same, including in the minds of many of astronomy’s Renaissance pioneers. Which is to say that there are many things beyond physics that physics, perhaps for good reason, will not touch.
We could explain the diverse opinions expressed on this forum with statistical mechanics: the likelihood of particular patterns and keystroke timings would show a vanishingly small probability of natural repetition. A good solid physics analysis, but missing the point entirely.
Or I suppose that there is also a statistical probability of a whole lot of mechanisms insisting they are not machines, with a whole lot of mechanisms countering that they really are…. Proof, however, could be shown with a laboratory demonstration of machines duplicating a lot of the currently observed effects within certain error bars.
But whether it’s a question of transporting a minimum package of consciousness across interstellar distances, or trying to figure out how consciousness got started, I fail to see how physics today has bagged that question. Sounds more like B. F. Skinner trying to convince an auditorium full of students that he’s the only one in attendance.
There is an exception to that. The laws of physics and the boundary conditions of the universe affect physics, but physics does not necessarily affect them. The shape of an orbit is affected by the value of pi, but not the other way around. Suppose some causality violation is possible – perhaps the content of a message relayed around a Tipler cylinder, or perhaps something more commonplace. In a situation like that you can come face to face with a piece of data that is not the “result” or “effect” of anything that happened in the past, and therefore not controllable or alterable by anything that you could have done. It is not merely ‘random noise’. It is an imposition on the universe, on physics, from … beyond.
While physics is at the root of the universe and everything in it, the appropriate descriptions for chemistry, biology, psychology do not require the laws of physics, but rather the appropriate descriptions at the level of organization. Even when we discuss the mind, is the appropriate level the neural firing, or the higher levels of the brain’s organization? We see it played out in the tension between neural network and symbolic AI models. The neural folks think that their models will encompass everything needed for AGI, as this approach is closer to the wetware. However, symbolic systems can capture much of our intelligence with much simpler approaches that assume a higher level of thinking. Pragmatists suggest the easiest path is a hybrid of the two approaches.
“the appropriate descriptions at the level of organization”
Yes, we certainly have models for these phenomena that do not invoke fundamental physics. However (and I’m sure you know this), these models rely on fundamental physics, and many of them are statistical models of large numbers of “particles”. Because they are statistical, it is possible, though wildly improbable, for those models to occasionally fail: for example, the common though absurd scenario of suffocating because all the air molecules simultaneously jump to the other side of the room, or of objects tunneling through the walls.
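Just how improbable such a failure is can be put in numbers. A quick sketch (the molecule counts are round illustrative figures): each molecule is independently in the left half of the room with probability 1/2, so all N of them are there with probability (1/2)^N, best handled in log form.

```python
import math

def log10_prob_all_left(n: float) -> float:
    """log10 of the probability that all n molecules sit in the left half."""
    return n * math.log10(0.5)

print(log10_prob_all_left(100))     # about -30: already hopeless
print(log10_prob_all_left(6.0e23))  # about -1.8e23: a single mole of gas
```

For a mole of air the odds are roughly 1 in 10^(1.8×10^23), which is why the statistical models are safe in practice.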
This gets at the heart of the Bohr-Einstein dialog. Bohr treated QM as a good enough model of the world to proceed as if it were a fundamental theory. Einstein disagreed, considering that QM is a statistical model emerging from a more fundamental, though unknown, theory. So he kept looking for those gotchas (and UFT) and Bohr kept finding QM solutions to those problems.
Who knows, Einstein might still be ultimately correct though perhaps not for his reasons.
Yes, but let’s look at that word “physical” for a moment in the context of modern physics. “Physical” reminds me of the billiard-ball model of mechanics, where perfectly elastic bodies collide in kinematic perfection. All very neat and deterministic. But how in the world does the “spooky action at a distance” of quantum entanglement relate to anything remotely physical? The notion of non-locality implies a relationship between particles that is not contained within the physical world. Like mathematics, with its complex and imaginary numbers, non-local relationships exist both “somewhere” else and in between. Through no one’s fault, the words we use sabotage the conversation.
“The notion on non-locality implies a relationship between particles that is not contained within the physical world.”
I believe you’re referring to the hidden variables model of QM. Regardless of the resolution of entanglement and non-locality, it will still be physics about particles interacting in this universe. Our incomplete knowledge of the physics of our universe is not an excuse for magical thinking.
I didn’t say anything about magic. Just the same, as Clarke put it, “Any sufficiently advanced technology is indistinguishable from magic.” As you say, the physics of our world is an incomplete portion of a metaphysics that would explain reality. Of course reality does not need to explain itself; any explanation it may someday offer would be dependent on our mental capacity to receive it.
“not contained within the physical world”
You have not defined a thing by asserting what it isn’t; i.e. not physics. If you’re offended by the label I assigned to your description –magic (supernatural, etc.) — choose a clearer definition so that I can understand you.
I simply suggested that there may be as-yet-undiscovered physics to explain (for example) the non-locality phenomenon you referenced, and you appeared to claim that there are non-physical forces (magic???) at play. This I don’t understand, since any material interaction is physics.
It is my opinion, and I believe the opinion of most scientists and philosophers today, that biological life (the human brain, with its consciousness and intellect, plus any analogues of it that may exist elsewhere in the cosmos) is a purely natural collection of ordinary atoms found in nature, organized by the same principles of physics that govern other interactions of matter and energy in space and time.
Artificial life, that is, living machine-organisms conceived of and constructed by biological organisms, is the same kind of creature, if not the same in degree. Artificial life may be an artifact developed for a specific purpose (other than mere survival and self-perpetuation), but it is no different from natural, evolved, organic life. There may even be hybrids and symbiotic relationships between “natural” and artificial life-forms.
Now, I concede I may be mistaken; perhaps some form of vitalism or supernatural intervention does operate in the universe. I normally reject religious explanations, but I must concede that it is too early in our species’ intellectual development to rule them out altogether. Purely mechanistic explanations of observations have failed us in the past, and I have no doubt many will fail in the future, so we should keep an open mind. There is a lot of high strangeness we have yet to discover.
When I was studying astronomy as an undergraduate, the Steady State theory was still held by many astronomers, including myself, and those slowly spreading blue-green shapes visible on Mars’ surface were still believed to be lichens responding to the summer melt from the polar caps. Try reading some of the old reasoning justifying those conclusions: quite convincing then, quite humbling today.
But until we can demonstrate otherwise with some degree of certainty, we must accept that life, however wonderful and complex it may be, is not magic, it is a natural phenomenon that can be duplicated, at least in principle, by human effort and ingenuity.
The same goes for intelligence. If nature can do it, then we can do it too. At least, in principle. To believe otherwise is to make assumptions about the nature of reality we simply cannot justify.
You’re largely right, but the issue here is slightly different: not whether sentience *can* be duplicated, but whether it *will* be duplicated by those who don’t understand it. Imagine a society had no concept of paper money, and you asked their best artist to counterfeit a bank note. No matter what his skill as a copyist, he would never think to take his masterpiece and crumple it between his hands to feel the quality of the paper.
We should not be confident that artificial intelligences, however intelligent, would be suitable replacements for humanity, or deserve rights at the expense of human rights, and we should certainly be afraid of anyone who promises to “enhance” the human mind with artificial components, no matter how astounding the economic efficiency of skipping childhood and schooling may appear.
Point taken.
But don’t forget, we shower humanitarian awards and praise on anyone who promises to “enhance” the human body with appliances like artificial limbs, spectacles, dental implants, artificial hearts and other prostheses. And what is an aid to thought, like a pocket calculator, computer, or paper and pencil, but just another prosthetic to enhance human performance? Where do we draw the line? Should we?
While I appreciate you are likely thinking of direct neural interfaces, step back a moment and notice that we already enhance our cognition with our technosphere. My ability to see clearly is enhanced by my “face furniture”, and humans have been grinding lenses for hundreds of years. My memory is enhanced by offloading memories to my library and my music and video collections, and my brain has adapted to some sort of indexing and searching of them. Epileptics have their defective brains enhanced with surgery. A host of brain issues are handled with pharmacological agents, and people regularly imbibe addictive compounds like caffeine, alcohol, and a whole host of other substances that alter mood and cognition. We can control animal brains via optoelectronics, and I daresay techniques like this will reach the clinic. If I wear a device that reads my thoughts and in turn connects me to the external world to retrieve information without a visual or aural interface, is that really any different from using a smartphone to do so? If that brain-reading device migrates inside the skull to interface more directly with the brain, is that really so much different? Clarke considered the braincap a boon in his novel 3001: The Final Odyssey. While that was the end result of a millennium of development and ignores the likely problems of early models, that is no reason to fear early models, any more than people should have feared early motorcars.
Some feared early motorcars sufficiently to make quite restrictive laws at first, and others were run over. In time, people came to accept that the singing of birds would always be overshadowed by the noise of traffic, that the IQs of children would be reduced several points by omnipresent lead contamination, and that one day the ice caps would melt and the cities would sink under the ocean. Still, at least here there have been people at every stage to decide whether it was worth it.
To use a machine to read a fact or bend light is very different from using it to alter how people think. Even when we are manipulated by algorithms from afar, in search terms and shared “personal” thoughts, we see a degradation of our ability to research and decide what is true.
But to program a man with all the recorded facts, opinions, assumptions, ethics, tolerances, and judgments of a profession, as a substitute for natural learning? To replace his honestly felt and spoken thought with the pull-cord of a programmed doll, parroting words of the masters perhaps without human cognition at all? That is altogether something else.
Sometimes we need to fear, and never is that more true than when someone toys with the very essence of the human soul without claiming to comprehend it. When Fermi’s Great Filter comes scratching at our door, it may well come posing as a friend.
Nobody is saying that we preprogram a person in the way you are stating. What we can do, and have done for millennia, is educate and norm our children to live within their social system. That requires making people conform to social rules. [That was one of the reasons Sacha Baron Cohen’s Borat character was so funny: he broke those rules.]
How an individual responds to the world is based on the social system they are nurtured in, as well as on the wealth of ideas available today at the click of a mouse. Different cultures and different “idea bubbles” will mold the individual’s responses. Sometimes those ideas change: in my lifetime, non-heterosexual relationships and interracial marriages have ceased to be criminalized, and frowning on them is no longer socially acceptable. Scientific discoveries also change ideas – a rich source of new concepts that open up new avenues of thought.
Given human history, I am far more concerned about the power of social systems to do harm than I am about any technological means to “program” a person. Even Huxley’s dystopia “Brave New World”, with “genetically programmed classes of people” was not nearly as horrific as the reality of Nazi Germany.
Natural, artificial, physical, non-physical, intuitive, precognitive, creative, mechanical — are the words just getting in the way of any common understanding? Our notion of robot, whether natural or artificial, is an automaton — a closed system. But the ‘closed system’ is itself an artificial abstraction used for convenience in exploring the thermodynamics of machines. We are not closed systems, we are not machines. An artificial being would not be a closed system either. Broken down ad infinitum, the mechanical design would give way to quantum probabilities and uncertainties, just as with natural biological organisms. At that point we have to face the interplay of uncertainty and determinism that relates mind, matter, energy and time. Non-duality isn’t just a good idea…
Some seem quite convinced that hardware cannot emulate wetware in manifesting conscious awareness; consciousness requires “living” matter, with some inviolate distinction between living matter and hardware. Others are at the other end of the spectrum, quite convinced that consciousness is an emergent epiphenomenon ultimately rooted firmly in physics.
A sine qua non for bridging the gap is a clear exposition of consciousness in terms of physics or its derivatives. Otherwise one cannot doubt the doubter — from the perspective of either extreme.
Alternatively, doubters must clearly explain why machines emulating wetware cannot be Turing complete. Brains are incredibly complex and poorly understood, but the claim that consciousness is a special property of wetware is dualism in disguise.
The Abolition of Man; nearly the heart of the New Religion.
It’s baffling that this is something some seem to happily anticipate. Why?
Missionaries have been very successful in spreading their religion around the world and converting the locals. From the locals’ POV, is this not the same as obsoleting their old worldview and accepting a new one?
Do you reject your children if they do not follow your beliefs but instead follow different ones? (Local conservatives where I live have a great problem with this.)
Nietzsche thought humans would cross the metaphorical chasm to become supermen.
Some religions believe that your soul/mind leaves its mortal biological embodiment and transcends to an afterlife which is more pleasant to exist in for eternity.
Techno-utopians like Ray Kurzweil believe that they will have their minds uploaded into computers before they die, thereby obsoleting their biological body.
Clarke’s SF had ETI discard their biological bodies, placing their minds in machines (spaceships) and later into the intangible substance of space itself.
There have been suggestions about reprogramming our genetic code to create a superior species of humans.
Most people believe that education improves one’s lot in life freeing one from a more impoverished state.
All these examples show me that we are quite happy with obsoleting our biology and even letting control of humanity pass to our descendants. In most cases, the assumption is that the individual, or the collective we, continues, but with modification.
Should we have the technology to upload our minds into more capable bodies, I see no reason to reject or fear that outcome, per se.
The fear I see is that minds we create de novo in artificial bodies will no longer think like us or have much interest in humans 1.0. We then get displaced from the top of the pyramid.
Doesn’t this echo with what white supremacists believe with their “The Great Replacement”? I’m not saying that those who fear robotic civilization obsoleting humans are supremacists, but rather it is similar “lizard brain” thinking that generates the fear/disgust of such an event.
Self-hatred isn’t a good basis for adaptation or survival.
It’s an attitude that’s been cultivated since WWII, infecting nearly every cultural niche. It isn’t “inclusiveness” or “tolerance” and certainly not objectivity or openness. It only impresses the “other” as neurotic weakness.
Let’s hope it will fade away in the future.
I just want to say that I find some of the ideas in this post and in the comments very disturbing. What does it mean to say that humanity may become “obsolete”? By what measures? Are you really judging humans by purely material, utilitarian criteria, as nothing but machines? Doesn’t this way of thinking run the risk of leading directly to Skynet scenarios, or death camps? It seems to me that scientists and technocrats who think like this need to do some soul-searching, to find out if they even believe in souls, or if we really are nothing but machines who should accept our “obsolescence” when machines materially surpass us. It’s a rather frightening, soulless way of thinking, if you ask me. Is this where Enlightenment, scientific thinking is taking us—to a War on the Soul and the total devaluation and possible extinction of humanity? Very scary stuff.
Homo sapiens have treated much of the wildlife on Earth as dispensable, i.e. obsolete to human needs. Abrahamic religions suggest we are made in God’s image, but if so, that God is very uncaring, even cruel.
Should artificial sentience of our making exceed the abilities of humanity, why should it not treat us as we have treated the rest of the animal kingdom? [“Do unto them, as you would have them do unto you.”] We may be dispensable, or perhaps kept as pets (cats and dogs have replicated well in human society). Better that we become obsolete, just as horses became obsolete as motive power and were humanely destroyed, than be kept as slave workers, like R.U.R.’s robota.
It must be remembered that there is no soul in Buddhism, and only a non-eternal one in the Advaita Vedanta school of Hindu philosophy. And all the Indic religions acknowledge the transience of each universe in its cyclic manifestation and demanifestation.
Everything with a beginning must perforce have an ending. Including all of this.
I guess from the standpoint of ‘Philosophy’ this seems to say it all:
https://assets.amuniversal.com/74ac4400cd6a01396cfc005056a9545d
From a quick scan of the comments I’m happy to report that, with the exception of one individual, there isn’t anybody here in the comments section who agrees wholeheartedly with the notion that these artificial intelligences should be given what we have euphemistically called ‘rights’.
For my part I am totally, diametrically opposed to the idea that we need to impart to any machine we develop some kind of innate, natural rights. There appears to be something of a soft spot in people which suggests that we need to anthropomorphize anything that seems to embody the notion of some kind of living spirit. We feel bad for animals, yet many of us freely consume them for nourishment. Should we suggest that all human food embodies some type of ‘universal spirit’ and therefore represents a sacred, inviolable and noble existence that we can’t in any fashion offend?
Those of you who feel that way are going to end up on the route to starvation. Likewise, the machines we create are not sacrosanct and therefore can be used to our liking. I’m not suggesting that we take a robot and bang it up for the sake of being destructive, any more than we would take our toaster and throw it against the wall just because we wish to damage it. But the idea that it somehow embodies some type of ‘universal spirit’ – I think that is inane. Machines are created to make our lives simpler and easier when we are facing so much complexity and so many pressing needs.
I see absolutely no need whatsoever for machines to have the capability of actual thought. What purpose would it serve? What we need is machines that are extensively programmed to deal with issues as they arise within the sphere of their use and can (within reason) handle the problems they would be expected to encounter. I feel rather certain that any competent software engineer could provide a sufficiently detailed degree of programming to handle any real-life problems that might arise in the course of using a robot.
Right now we are wasting our time building manlike robotic machines that, quite honestly, are not suited at present for use within the home environment, lacking a sufficient degree of programming to handle what would be called ‘fuzzy logic’ situations. Machines that could operate within the home and do the multitude of tasks that people need done are far, far away in terms of usability, in my opinion. There is a crying need for companies in this business to start addressing, in a realistic manner, the needs of ordinary people and the problems they face in their daily home and office environments. So far I see companies like Boston Dynamics wasting their time on robots in the extreme category rather than the practical category. With an aging population, and a population in which free time is a precious commodity now more than ever, we need to start seeing machines that will actually enhance human living, not create more complicated unsolved problems. That’s why I’m so glad to see that a lot of people on here are interested in this issue.
For any machine to optimally adapt to its milieu, its initial programming will have to be permissively modified by inputs enabling learning that in due course could supplant the original programming. Some of this is already seen in personal computers that learn to anticipate the user’s actions.
A program chiselled in stone ain’t gonna cut the mustard.
Absolutely. While symbolic rules programming (GOFAI) has a role to play, much like our rote learning of arithmetic, the way forward is with systems that adapt their responses to the environment. We already have various means and AI techniques to do this, so I expect that any advanced robot will be able to learn and adapt. It may even be easier for a machine mind to unlearn old programming and learn new responses than it is for humans.
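To make the idea concrete, here is a toy sketch (the action names and reward scheme are invented for illustration, not any real robotics API) of a controller that ships with a hard-coded default response, which reward feedback from its environment can gradually supplant:

```python
from collections import defaultdict

class AdaptiveResponder:
    """Starts from a built-in rule; learned values can override it."""

    def __init__(self, default_action):
        self.default = default_action
        self.scores = defaultdict(float)  # action -> learned value estimate

    def act(self, actions):
        # Prefer the best-scoring learned action; until any action has
        # earned a positive score, fall back to the factory default.
        learned = max(actions, key=lambda a: self.scores[a])
        return learned if self.scores[learned] > 0 else self.default

    def feedback(self, action, reward):
        # Exponential moving average nudges the estimate toward the
        # observed reward, so repeated feedback supplants the default.
        self.scores[action] += 0.5 * (reward - self.scores[action])

bot = AdaptiveResponder("quote_poem")
for _ in range(10):
    bot.feedback("take_out_trash", reward=1.0)
print(bot.act(["quote_poem", "take_out_trash"]))  # take_out_trash
```

Before any feedback, the robot would dutifully quote poems; after a few rewarded trials, the learned behaviour wins out — a miniature of “programming that learning can supplant.”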
“A program chiselled in stone ain’t gonna cut the mustard.”
Never said it did; but a robot leaping into quoting poems won’t get the trash taken out – will it?
Clearly you are not aware of Buddhism. Vegetarians and vegans will not consume animals, only plants. (Douglas Adams made fun of that in “The Restaurant at the End of the Universe”, where a talking cow offers Arthur Dent its flesh, and when he prefers to order a salad, Ford Prefect indicates that some lettuces might have something to say about being eaten.)
The West has steadily tightened how animals may be slaughtered and prepared. When I was young you could eat “blue trout” – a live trout thrown into boiling water like a lobster. That is now illegal. Foie gras is no longer legal to be offered in California. Veal, while still legal, is frowned upon by those who understand how it is produced.
Intelligence and consciousness are on a sliding scale, from almost absent in an insect, increasing as one moves up the evolutionary scale. You can draw your own line where eating the animal is no longer acceptable. Clarke thought that at some point in the future, eating flesh from animals would disgust most people. The growing trend toward a more plant-based diet is partly environmental concern and partly animal welfare concern.
I don’t think you have thought this through enough. There is a need for robots to converse with people in a way that mimics another human being. But you really don’t want such robots to lack empathy, to be zombies, or to be effectively sociopathic. The fear of such robots hurting you would make them unacceptable in society. [Exemplified in the Doctor Who serial “The Robots of Death”, where one of the crew tells another, who is being massaged by a humanoid robot, that he knew someone being massaged by a robot that suddenly tore his arm off. We have enough issues with “the uncanny valley” without worrying that an apparently quite safe robot might suddenly decide to murder you.]
The world we have made fits human beings. There is a valid reason to make robots that operate at human scale, with “arms” and the ability to do tasks that require a human, rather than a dedicated mindless machine. We can have lots of single-task robots like dishwashers, or a multitask-capable robot that can do lots of things, from doing the laundry to caring for the children. Japan already understands the need for caregivers for the elderly and is at the forefront of making humanoid robots to manage this for its aging population.
“That is now illegal. Foie gras is no longer legal to be offered in California. Veal, while still legal, is frowned upon by those who understand how it is produced.
Intelligence and consciousness are on a sliding scale, from almost absent in an insect, increasing as one moves up the evolutionary scale. You can draw your own line where eating the animal is no longer acceptable. Clarke thought that at some point in the future, eating flesh from animals would disgust most people.”
I think using the California argument vastly weakens the strength of what you’re saying; I would hardly point to California as a harbinger of ‘must-do’ things or forward-thinking ideas. As for whether eating animal meat gets categorized as ‘immoral’ or what have you, that is more a function of one group using shame, intimidation, and oftentimes the outright brute force of law to get what it wants at the expense of somebody else.
Take the Covid-19 vaccine as a point of contention: there are people out there who insist that others be inoculated against their will – their rights be damned! What they are attempting to do is just what I said above: intimidate, and ultimately use government power to get their way over somebody else. Your arguments do not impress me in the slightest as to why I should reject what I said.
As for whether robots should be empathetic to people’s needs, I am not saying that the machine could not look after the human it is entrusted to. Putting aside for a moment the arm-ripping/eye-gouging/strangling-in-the-middle-of-the-night robotic entity, I’m certain that we could find a class of robots more suitable as caretakers for human beings. It does not follow, in my view, that because a robot has a set of instructional rules built into its software/hardware brain it must automatically go into murderous mode just because it has been assigned a given task.
It’s difficult if not impossible within a short venue such as this to fully nuance what should or should not be the parameters of a robotic operational machine. Of course the machine should be programmed so as not to inadvertently or intentionally endanger the human charges it is entrusted with. This is going to be an iterative process, and putting aside malevolent programming purposefully introduced by the builder of the robot, I would say that a machine could be sufficiently programmed to operate within a given environment and do so safely. To intentionally talk, as you have, about how a machine could go berserk and therefore not be acceptable seems to beg the question of whether robots should be introduced into human society at all. It seems much like a strawman argument introduced merely to win a debate. I’m assuming here that the corporation/programmer is a benign interest which has only the consumer’s best interest at heart and is not intentionally trying to do harm to its customer base.
Therefore I would say there needs to be a thorough examination of all sorts of odd circumstances that might exist within a home, and an effort to program the machine as best as possible to learn the peculiarities of the environment it is expected to encounter. I’m merely saying that the machine should be a dedicated entity doing its job and not become some kind of SJW warrior, so entangled in either/or dilemmas that it is unable to function in a productive and useful manner for its human charge.
Public health should be a rational response untainted by politics, but polls show it is not. The states with the lowest vaccination rates are also those involved in removing half the population’s control over their own bodies.
Far too many forget that freedom comes with a responsibility to the rest of the population. Yes, that implies some sort of collectivism rather than one-sided liberalism. Society does create laws to try to balance the wants of the individual with the larger needs of society. [And yes, that does sound like the rationalist Spock’s famous quote from ST.] The benefit is that personal freedom does not impinge on others’ freedoms. It also reduces unnecessary costs and burdens. The pandemic is a good mirror for this. States with low vaccination rates are yet again overloading their hospitals. So much so, that frontline healthcare workers are resigning. One only has to see what is happening to poor countries without access to sufficient vaccinations and lower healthcare infrastructure to glimpse where parts of the US are headed because of politics. Over 600,000 have died in the US in less than 2 years, more than all US deaths from wars from the start of the 20th century. The Civil War directly killed 2% of the US population over 4 years (mostly men) while the current pandemic has directly killed at 1/10th that rate so far. A century after the Spanish Flu the US response has been remarkably similar, despite having the technological benefit of a rapidly developed vaccine. Other countries have demonstrated that this did not have to be the result.
Sci-Fi Short Film: “REWIND” | DUST
https://www.youtube.com/watch?v=icjPGYmVo6w
Per site instructions I don’t want to sidetrack into animal politics per se (if we do, we might as well double down on abortion while we’re at it). The relevant issue here:
Is there a technical way we can *answer* those questions?
Suppose sentient thought actually does involve causality violations. In other words, you get variable A in the past by looking up the value of variable A in the future, and A can be one of two or more values, and which value it takes is actually a boundary condition of the universe that affects the solution for all past and future events, distinguishing this universe from others that could have existed with some other value of A.
In that case, we know A is not in a superposition of states (or at least it doesn’t have to be … let’s assume not, for now). If Schroedinger’s Cat is conscious due to this kind of causality violation, then it is not both alive and dead. It is one or the other based on the boundary condition. The cat doesn’t experience a combination of both states, but just one. This means that a box containing the cat should at some hypothetical level act differently than a box containing an electron!
What I’m not so clear on is … using methods from quantum computing, is there any validly conceivable way to do that sort of measurement and *see* whether putting a cat in the box has this effect? Or a person, fetus, Mexican jumping bean, protozoan?
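There is at least a toy version of that “acts differently” intuition in standard quantum mechanics: a genuine superposition has off-diagonal coherence terms in its density matrix, while a system that is definitely “one or the other” is a classical mixture with none, and an interference-type measurement can tell the two apart. A minimal numpy sketch, treating the cat as a hypothetical two-level system (which is exactly the simplification that makes the real experiment so hard):

```python
import numpy as np

# Pure superposition (|0> + |1>)/sqrt(2): off-diagonal coherences present.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Classical mixture: definitely |0> OR |1>, 50/50 -- no coherences.
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])

# Interference observable (Pauli-X). Its expectation value exposes
# the coherences: <X> = 1 for the superposition, 0 for the mixture.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
exp_pure = np.trace(rho_pure @ X).real   # 1.0: full interference
exp_mixed = np.trace(rho_mixed @ X).real  # 0.0: interference washed out
```

For an electron-scale system this difference is routinely measured; for a cat, decoherence destroys the off-diagonal terms almost instantly either way, which is why no practically conceivable experiment of this sort has been proposed for macroscopic, let alone conscious, systems.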
“Doesn’t this way of thinking run the risk of leading directly to Skynet scenarios, or death camps?”
No. You are thinking along the lines of an old American western: “This planet is too small for the two of us. Draw!”
It is conceivable in the future that AI and humans will coexist, either cooperatively or begrudgingly. It’s impossible to predict. It is unlikely that they’d need to compete for resources that we currently deem important. It will be a very different world by then.
It is also quite possible that humans will choose, of their own free will, to discard their biology, for advantages that we can only speculate upon.
If that does occur they will no longer be human since our essence (motivations, instinct, perspectives) is inextricably bound with our biological bodies. I don’t believe there can be a “human” mind in a non-human “body”.
That may seem a horror to many, now, but the future will be a very different place. Of course many will opt not to change. Will this group be obsolescent and decline in number until it fades away? Maybe, maybe not.
“Thou shalt not make a machine in the likeness of God.”
Commandment from the Orange Catholic Bible, as quoted by Frank Herbert in the novel “Dune”.
How about “Thou shalt not make a God in the likeness of machine.”
Tech giants are rushing to develop their own chips — here’s why.
https://www.cnbc.com/2021/09/06/why-tesla-apple-google-and-facebook-are-designing-their-own-chips.html
O’Donnell said there’s a shortage of people in Silicon Valley with the skills required to design high-end processors. “Silicon Valley put so much emphasis on software over the past few decades that hardware engineering was seen as a bit of an anachronism,” he said.
“It became ‘uncool’ to do hardware,” O’Donnell said. “Despite its name, Silicon Valley now employs relatively few real silicon engineers.”
This is why we are in this situation, relying on someone else to do the hardware. China will have us by the balls in no time! I suppose the robots will build the foundries and make the chips? Time to wake up!