The end of one year and the beginning of the next seems like a good time to back out to the big picture. The really big picture, where cosmology interacts with metaphysics. Thus today’s discussion of evolution and development in a cosmic context. John Smart wrote me after the recent death of Russian astronomer Alexander Zaitsev, having been with Sasha at the 2010 conference I discussed in my remembrance of Zaitsev. We also turned out to connect through the work of Clément Vidal, whose book The Beginning and the End tackles meaning from the cosmological perspective (see The Zen of SETI). As you’ll see, Smart and Vidal now work together on concepts described below, one of whose startling implications is that a tendency toward ethics and empathy may be a natural outgrowth of networked intelligence. Is our future invariably post-biological, and does such an outcome enhance or preclude journeys to the stars? John Smart is a global futurist, and a scholar of foresight process, science and technology, life sciences, and complex systems. His book Evolution, Development and Complexity: Multiscale Evolutionary Models of Complex Adaptive Systems (Springer) appeared in 2019. His latest title, Introduction to Foresight, 2021, is likewise available on Amazon.
by John Smart
In 2010, physicists Martin Dominik and John Zarnecki ran a Royal Society conference, Towards a Scientific and Societal Agenda on Extra-Terrestrial Life, addressing scientific, legal, ethical, and political issues around the search for extra-terrestrial intelligence (SETI). Philosopher Clément Vidal and I both spoke at that conference. It was the first academic venue where I presented my Transcension Hypothesis: the idea that the more complex advanced intelligence everywhere becomes, the more it may be developmentally fated to venture into inner space, into increasingly local and miniaturized domains with ever-greater density and interiority (simulation capacity, feelings, consciousness), rather than to expand into “outer space”. When this process is taken to its physical limit, we get black-hole-like domains, which a few astrophysicists have speculated may allow us to “instantly” connect with all the other advanced civilizations that have entered a similar domain. Presumably each of these intelligent civilizations will then compare and contrast our locally unique, finite, and incomplete science, experiences, and wisdom, and, if we are lucky, go on to make something even more complex and adaptive (a new network? a universe?) in the next cycle.
Clément and I co-founded our Evo-Devo Universe complexity research and discussion community in 2008 to explore the nature of our universe and its subsystems. Just as there are both evolutionary and developmental processes operating in living systems, with evolutionary processes being experimental, divergent, and unpredictable, and developmental processes being conservative, convergent, and predictable, we think that both evo and devo processes operate in our universe as well. If our universe is a replicating system, as several cosmologists believe, and if it exists in some larger environment (a multiverse), it is plausible that both evolutionary and developmental processes would self-organize, under selection, to be of use to the universe as a complex system. With respect to universal intelligence, it seems reasonable that both evolutionary diversity, with many unique local intelligences, and developmental convergence, with all such intelligences going through predictable hierarchical emergences and a life cycle, would emerge, just as both evolutionary and developmental processes regulate all living intelligences.
Once we grant that developmental processes exist, we can ask what kinds of convergences we might predict for all advanced civilizations. One of those processes, accelerating change, seems particularly obvious, even though we still don’t have a science of that acceleration. (In 2003 I started a small nonprofit, ASF, to make that case.) But what else might we expect? Does surviving universal intelligence become increasingly good, on average? Is there an “arc of progress” for the universe itself?
Developmental processes become increasingly regulated, predictable, and stable as a function of their complexity and developmental history. Think of how much more predictable an adult organism is than a youth (try to predict your young kids’ thinking or behavior!), or how many fewer developmental failures occur in an adult versus a newly fertilized embryo. Development uses local chaos and contingency to converge predictably on a large set of far-future forms and functions, including youth, maturity, replication, senescence, and death, so the next generation may best continue the journey. At its core, life has never been about either individual or group success. Instead, life’s processes have self-organized, under selection, to advance network success. Well-built networks, not individuals or even groups, always progress. As a network, life is immortal, increasingly diverse and complex, and always improving its stability, resiliency, and intelligence.
But does universal intelligence also become increasingly good, on average, at the leading edge of network complexity? We humans are increasingly able to use our accelerating S&T to create evil, with ever-rising scale and intensity. But are we increasingly free to do so, or do we grow ever-more self-regulated and societally constrained? Steven Pinker, Rutger Bregman, and many others argue that we have become increasingly self- and socially-constrained toward the good, for yet-unclear reasons, over our history. Read The Better Angels of Our Nature, 2011, and Humankind, 2020, for two influential books on that thesis. My own view is that we are increasingly constrained to be good because there is a largely hidden but ever-growing network ethics and empathy holding human civilizations together. The subtlety, power, and value of our ethics and empathy grows incessantly in leading networks, apparently as a direct function of their complexity.
As a species, we are often unforesighted, coercive, and destructive. Individually, far too many of us are power-, possession- or wealth-oriented, zero-sum, cruel, selfish, and wasteful. Not seeing and valuing the big picture, we have created many new problems of progress, like climate change and environmental destruction, that we shamefully neglect. Yet we are also constantly progressing, always striving for positive visions of human empowerment, while imagining dystopias that we must prevent.
Ada Palmer’s science fiction debut, Too Like the Lightning, 2016 (I do not recommend the rest of the series), depicts a future world of technological abundance, accompanied by dehumanizing, centrally-planned control over what individuals can say, do, or believe. I don’t think Palmer has written a probable future. But it is plausible, under the wrong series of unfortunate and unforesighted future events, decisions, and actions. Imagining such dystopias, and asking ourselves how to prevent them, is surely as important to improving adaptiveness as positive visions are. I am also convinced we are rapidly and mostly unconsciously creating a civilization that will be ever more organized around our increasingly life-like machines. We can already see that these machines will be far smarter, faster, more capable, more miniaturized, more resource-independent, and more sustainable than our biology. That fast-approaching future will be importantly different from (and better than?) anything Earth’s amazing, nurturing environment has developed to date, and it is not well-represented in science fiction yet, in my view.
On average, then, I strongly believe our human and technological networks grow increasingly good, the longer we survive, as some real function of their complexity. I also believe that postbiological life is an inevitable development, on all the presumably ubiquitous Earthlike planets in our universe. Not only does it seem likely that we will increasingly choose to merge with such life, it seems likely that it will be far smarter, stabler, more capable, more ethical, empathic, and more self-constrained than biological life could ever be, as an adaptive network. There is little science today to prove or disprove such beliefs. But they are worth stating and arguing.
Arguing the goodness of advanced intelligence was the subtext of the main debate at the SETI conference mentioned above. The highlight of this event was a panel debate on whether it is a good idea not only to listen for signs of extraterrestrial intelligence (SETI), but to send messages (METI), broadcasting our existence and, hopefully, increasing the chance that other advanced intelligences will communicate with us earlier rather than later.
One of the most forceful proponents of METI, Alexander Zaitsev, spoke at this conference. Clément and I had some good chats with him there (see picture below). From 1999 onward, Zaitsev used a radio telescope in Ukraine, RT-70, to broadcast “Hello” messages to nearby interesting stars. He did not ask permission, or consult with many others, before sending these messages. He simply acted on his belief that doing so would be a good act, and that those able to receive them would not only be more advanced, but would be inherently more good (ethical, empathic) than us.
Image: Alexander Zaitsev and John Smart, Royal Society SETI Conference, Chicheley Hall, UK, 2010. Credit: John Smart.
Sadly, Zaitsev has now passed away (see Paul Gilster’s beautiful elegy for him in these pages). That piece also describes the 2010 conference, where Zaitsev debated the METI question with others, including David Brin. Brin advocates the most helpful position, one that asks for international and interdisciplinary debate prior to the sending of messages. Such debate, and any guidelines it might lead to, can only help us with these important and long-neglected questions.
It was great listening to these titans debate at the conference, yet I also realized how far we are from a science that could tell us about the general goodness of the universe, and so validate Zaitsev’s belief. We are a long way from his views being popular, or even discussed, today. Many scientists assume that we live in a randomness-dominated, “evolutionary” universe, when it seems much more likely that it is an evo-devo universe, with many things, both unpredictable and predictable, that we can say about the nature of advanced complexity. Also, far too many of us still believe we are headed for the stars, when our history to date shows that the most complex networks are always headed inward, into zones of ever-greater locality, miniaturization, complexity, consciousness, ethics, empathy, and adaptiveness. As I say in my books, it seems that our destiny is density, and dematerialization. Perhaps all of this will even be proven in some future network science. We shall see.
I don’t think this has anything to do with development, but rather cultural norming. That is a very different thing. Human beings should all end up the same according to a developmental model, but in reality, which culture they develop in makes all the difference in beliefs, attitudes, and behavior.
You may be confusing this with survivorship bias. Any genetic defect will quickly eliminate the fetus, whilst those without the defects will develop normally.
What do you mean by network here? What evidence is there to indicate this “network” has any bearing on evolution by the accepted current version of neo-Darwinism?
I am sure you must be aware of the criticisms of Pinker’s thesis on violence. Is the “violence” in western countries just more subtle? For example, is the cruelty of imposed inequality the new form of violence in our anglophone countries dominated by extreme shareholder vs stakeholder laissez-faire capitalism?
All that progressive thought might be lost if our collective inability to mitigate the climate crisis results in a degraded civilization, even an eventual collapse. Would the thesis only survive if the next civilization proves better than our current one? It took more than a millennium after the collapse of the Western Roman Empire for western civilization to reassert the equivalent greatness again. Rather a long timescale for that arc of progress to recover.
I agree with this, although I am not convinced humans will merge with our robotic intelligence; they may instead remain separate, with the robots becoming the dominant space-faring “species”. Whether those robots will have advanced intellectually as you suggest, or will just extend the “paperclip maximizers” of our corporate beings, is uncertain.
Unless there are other civilizations overlapping with ours and close enough to communicate with beyond just a 1-way signal, METI will be as useless as all that prayer for divine guidance and intervention that so many religions promulgate. Religion might have cultural advantages to hold societies together, but there is zero evidence any benefit comes from the gods it abases itself to. METI is probably just shouting into the void, and arguably just a technology-based religion hoping for some sort of communion and possibly salvation. Wouldn’t it be interesting if by some surprise we get a message back, saying in effect, “Don’t bother to call again. You are on your own. You must solve your problems yourselves. If you don’t survive, we may note your passing in the Encyclopedia Galactica.”
In this alternative future, the HHGTTG’s Ford Prefect might again amend the Hitchhiker’s Guide entry for Earth, from “Mostly harmless” to “Mostly stupid. Now dead”.
“For example, is the cruelty of imposed inequality the new form of violence in our anglophone countries dominated by extreme shareholder vs stakeholder laissez-faire capitalism?”
Huh??? If something characterises recent decades in the anglophone and in general western countries (the Americas and Europe), it is a sustained increase in regulation and dictatorship, which has exploded in the pandemic years.
Anglophone countries are the UK, US, Canada, Australia, and New Zealand; Europe is not Anglophone. It is these countries that have run with the maximizing-shareholder-return approach to running corporations, which has also resulted in tax reductions and tax manipulation by these entities, control of the legislatures (using these excess returns), and the undeniable increase in inequality, incarceration rates (especially in the US), poor treatment of outgroups in society (especially in Australia), and a general hardening of attitudes through victim-blaming of the poorest.
One wonders if there are any other civilizations out there, whether this is the convergence they aspire to.
Alex, your analysis here is spot on. Right-wing “Libertarianism”, aka authoritarianism-in-practice, aka closet authoritarianism leading to oligarchy, has been the dominant strain in the Anglophone countries during the last 40 years. Your point that progress will be negated if ways to balance the individual, the collective, and the environment are not improved upon in short order is well taken. The ship has sailed for incremental tweaks of the system.
Hello Alex!
I have long enjoyed the rigor and clarity of your posts on this site, and it is a pleasure to get this kind of detailed and thoughtful feedback. Let me respond in line to your points (A: and J: are you and me; I don’t know if this cut-and-paste will preserve indents).
J: Developmental processes become increasingly regulated, predictable, and stable as a function of their complexity and developmental history. Think of how much more predictable an adult organism is than a youth (try to predict your young kids’ thinking or behavior!)
A: I don’t think this has anything to do with development, but rather cultural norming. That is a very different thing. Human beings should all end up the same according to a developmental model, but in reality, which culture they develop in makes all the difference in beliefs, attitudes, and behavior.
J: Actually, it does. The further along in the life cycle any organism gets, the more it has a set of highly predictable behaviors. Culture of course influences these developmental features, but that is evolutionary variety (of beliefs, norms, ideas) overlaid on the developmental plan. Any developmental psychologist will cite many predictable features of all adults. Youth are the most unpredictable, in all cultures, on many axes. Elders the most predictable. Even elders with no children and few social ties, and thus little to culturally inhibit them, do not “re-radicalize” their behaviors as they did in youth. Instead, they have self-constrained, into routines whose future you can predict greatly from just a few days of study. Can’t do that with kids.
J: how many fewer developmental failures occur in an adult versus a newly fertilized embryo.
A: You may be confusing this with survivorship bias. Any genetic defect will quickly eliminate the fetus, whilst those without the defects will develop normally.
J: I’m describing a survivor curve, and it is the opposite of survivorship bias. Survivorship bias would exist if we assumed that just because adults are more developmentally stable, development has always been stable. What happens is actually the opposite. Survival is very questionable for an embryo, early in its developmental unfolding. There is less tolerance for errors of a certain type, including certain obviously checkable genetic defects, as you describe. But many defects don’t threaten development, only the adaptability of the organism. Besides a growing tolerance for errors, as complex regulatory processes and circuits mature, they themselves stabilize the organism in ways not possible in the less developed organism. It is not clear to me which of these two factors is more important to stability. My intuition is that it is the developed complexity of the mature organism that is the dominant factor in its growing stability. The mature networks stabilize the adult organism further (and in an elderly organism, overstabilize it, setting it up for brittleness and eventual failure and recycling).
J: At its core, life has never been about either individual or group success. Instead, life’s processes have self-organized, under selection, to advance network success. Well-built networks, not individuals or even groups, always progress.
A: What do you mean by network here? What evidence is there to indicate this “network” has any bearing on evolution by the accepted current version of neo-Darwinism?
J: This is a great question that may be difficult to answer well today. But I can offer some starter answers. One way to define what I mean by network is to look at life itself. It has so many levels and scales of structure and function, yet it is a network, with vast connections throughout it. Constant communication, feedback, and dynamic balancing exist across it, at so many scales. Estimates are that fully 10% of even our genome is viral in origin. All of life acts like bacteria, which with their horizontal gene transfer can be considered both a single network and a collection of groups (species). What I’m describing here is that we need what the theoretical biologists call an Extended Evolutionary Synthesis. Standard neo-Darwinism is so narrow a view of what is actually happening that it is dangerous to assume that it is the dominant set of drivers for macrobiological complexity. I wrote a post last year about molecular convergent evolution that speaks to both of these points (the centrality of networks and the limits of Darwinian models). (https://eversmarterworld.com/2020/01/22/the-tangled-tree-isnt-so-tangled-telling-the-story-of-molecular-convergent-evolution/)
A: I am sure you must be aware of the criticisms of Pinker’s thesis on violence. Is the “violence” in western countries just more subtle? For example, is the cruelty of imposed inequality the new form of violence in our anglophone countries dominated by extreme shareholder vs stakeholder laissez-faire capitalism?
J: Yes, Pinker took on a mammoth and thankless task, trying to show the curve of goodness in human history. He was by no means the first. Many others have seen this “arc of self-constraint” and have commented on it or attempted to document it. Norbert Elias did a great job in The Civilizing Process, 1939, covering Europe’s arc of chaotic improvement in ethics and empathy from 800-1900 CE. Rutger Bregman’s Humankind, 2020, is a great update to Pinker’s work. It deftly describes Pinker’s work, deals with his many ideologically motivated critics, and shows the mistakes he made in not describing how much less violent conditions were in the period before the age of empires. So the arc has more humps than Pinker portrayed, but discerning folks knew that anyway. Bregman endorses Pinker’s thesis and then goes well beyond it. I think you in particular may enjoy the book, as he exposes many false assumptions in 20th century popular anthropology in the process.
J: Not seeing and valuing the big picture, we have created many new problems of progress, like climate change and environmental destruction, that we shamefully neglect. Yet we are also constantly progressing, always striving for positive visions of human empowerment, while imagining dystopias that we must prevent.
A: All that progressive thought might be lost if our collective inability to mitigate the climate crisis results in a degraded civilization, even an eventual collapse. Would the thesis only survive if the next civilization proves better than our current one? It took more than a millennium after the collapse of the Western Roman Empire for western civilization to reassert the equivalent greatness again. Rather a long timescale for that arc of progress to recover.
J: Yes, climate change is important, and we must greatly decarbonize our food, energy, manufacturing, and lifestyle, but are we not now doing so at exponential annual rates? Global population growth is slowing rapidly, with fertility falling below replacement in much of the world. Everyone with electronics is choosing increasingly dematerialized lifestyles and sustainability ethics. (See Andrew McAfee’s More From Less, 2019, for a great recent account.) Are you not yourself slipping into political fashionability when you intimate that our climate crisis might lead to civilizational “collapse”? Climate change is surely a major problem of human adaptation, yes. But of human survival? I think not. The best brief piece on climate change I’ve ever read is Matt Ridley’s Why Climate Change is Good for the World, in the UK’s The Spectator, 2013. (https://www.spectator.co.uk/article/why-climate-change-is-good-for-the-world). I like to go back and reread that at least once a year, to give me perspective. That is the true assessment that will never be front-page news, in my view. It would be reckless to shirk our responsibility to change, assuming adaptation will occur without us, but it is the accurate assessment so long as we do change. At the same time, we need ever-heightening panic stories to get ourselves to change. That’s just a political and psychological reality. But those stories never describe the most probable future—that we always adapt.
Your point on the Roman Empire is one I originally believed as well, until I looked closer at the “fall”, and started to see that it was not much of a fall for the network, or even for civilization, but rather, for the particular set of organizations and priorities that was Rome. It is hard to briefly defend this view, but let me try anyway. In evo-devo models, there is a constant tension between bottom-up, exploratory, unpredictable evolutionary change, and top-down, convergent, conservative, developmental change. That tension is evident in phases where one or the other (evolutionary or developmental processes) dominates network complexification, for a time and context, before giving way to the other again. The incredible organizational and technical progress we made during the Roman Empire was one of those dominant developmental phases, in my book. Not for science (we can thank the Greeks for that, and the East after Rome’s fall), but for technical and organizational advances. Many parallels there to how China is advancing today. That empire (but not the network!) eventually became overdeveloped, brittle, senescent, and had to fail and reorganize. It was replaced with another top-down empire, based on Christian monotheism, that devalued many Roman advances (engineering, cities, commerce, warfare, etc.), kept others (authoritarianism and Rome’s late conversion to Christianity), and tried just as ruthlessly to prevent bottom-up, creative, individualized progress and experimentation. Nevertheless, as Anne Robert Jacques Turgot and the medieval historian Lynn White Jr. have shown, technological and organizational progress continued throughout the Dark Ages, and remained on a slow exponential (ran increasingly fast) over time. It was now far more local and practical in scale. Benedictine monks were a network spreading “the useful arts” (technology practices). Water wheel networks sprouted across Europe, driving productivity and commerce.
Eventually we got the Renaissance, a shift to a bottom-up (evolutionary) phase of progress, and our first democracies. In this network-centric view I’ve sketched, network complexity and acceleration, and the cumulative knowledge within the network, are very, very hard to stop or slow down. That knowledge is redundantly stored (one basic feature of adaptive networks), and the centers will often shift location after local catastrophes. Scientific advances and knowledge shifted to the Middle East after Rome fell, for example. The network, in this story, is always accelerating, at its leading edge, and it is almost always progressing somewhere. Sorry for the long-winded response.
J: I also believe that postbiological life is an inevitable development, on all the presumably ubiquitous Earthlike planets in our universe.
A: I agree with this, although I am not convinced humans will merge with our robotic intelligence rather than remain separated, and that the robots will become the dominant space-faring “species”. Whether they will have advanced intellectually as you suggest, or just extend the “paperclip maximizers” of our corporate beings is uncertain.
J: Wonderful! I did not expect your agreement on the postbiological life development on Earthlikes topic. I am very curious as to how you personally come to this conclusion. I came to it via the phenomenon of accelerating change. Carl Sagan’s Cosmic Calendar made a deep impression on me as a youth. You don’t see any long periods of stasis in that calendar. Even in the places where you might expect them, such as Earth’s many extinction events, when you look closer you can see network acceleration everywhere, as with the Dark Ages. Earth’s major extinction events, for example, seem to have done little to reduce the diversity of the genetic network. Yes, most species were lost, but very little (amazingly little!) of our *genetic diversity* (especially of the conserved core of developmental genes), and its intrinsic evolutionary molecular, functional, and morphological diversity. That was the *network that mattered most,* in keeping the biological morphofunctional acceleration going. In fact, immediately after each extinction, the record shows a new acceleration of evolutionary diversity in the surviving forms. So the catastrophes themselves are *catalytic* to network complexity. I think that is one defining feature of well-built networks. Not only are they redundant and fault-tolerant, they are antifragile. They get stronger and more innovative under stress. That will have to be a central feature of leading postbiological life, in my view. As for the postbiological issue, even as our current primitive neuroscience and AI advance, we’re learning to put ever more critical features of our own biological and neural networks into our bio-inspired machines. There are research programs to make them both more evolutionary and developmental, in both hardware and software. Deep learning AIs are today trained, no longer coded. No one understands their algorithms. They are associative, like human neural algorithms.
And they can think, and simulate, at electronic rather than electrochemical speeds. That is at least a *seven-million-fold faster rate of learning*. I think General AI is still many decades away, but with this learning differential (and once they are self-improving, a similar evolutionary and simulation differential) I don’t see how biology remains influential for much longer, from a cosmic perspective. Many others have written on this for a century now, so I won’t belabor the point. But I do have a few posts on the future of human and machine mind and their merger that might be of interest.
Your Personal AI (Five Part Series), Medium, 2016 https://johnsmart.medium.com/your-personal-sim-a07d78ffdd40#.jhfytmbf9
Contemplating Mortality: Personal AIs, Mind Melds, and Other Paths to Our Postbiological Future, Medium, 2020
https://johnsmart.medium.com/contemplating-mortality-personal-ais-mind-melds-and-other-inevitable-paths-to-our-b37f091191c9
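The *seven-million-fold* figure above is back-of-envelope arithmetic. Here is a minimal sketch of it, assuming a peak biological firing rate of roughly 200 Hz against a modest 1.4 GHz electronic clock; both numbers are illustrative assumptions, not measurements:

```python
# Toy comparison of biological vs. electronic "update rates".
# Both rates below are illustrative assumptions, not measurements.

NEURAL_FIRING_HZ = 200        # assumed peak spike rate of a biological neuron
ELECTRONIC_CLOCK_HZ = 1.4e9   # assumed modest electronic switching rate

speedup = ELECTRONIC_CLOCK_HZ / NEURAL_FIRING_HZ
print(f"speed differential: {speedup:,.0f}x")  # speed differential: 7,000,000x
```

Different assumed rates shift the exact ratio, but any plausible pairing lands in the millions-fold range, which is the only point of the exercise.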
If you don’t mind, Alex, I would love to ask you a few questions, to better understand the worldview and assumptions you use to understand complexity and change: What does the phrase “universal development” mean to you? What kinds of universal processes do you think are candidates for being not simply an evolutionary (random, contingent, unpredictable) process, but a developmental (convergent, conservative, predictable) process? Do you see the mathematical case for at least some universal fine tuning? Do you find value in cosmology models that treat the universe as a replicating system, embedded in a multiverse, and possibly under some sort of selection? Can you see the case that all interesting things in the universe (except galaxies and large-scale structure, which we can assume replicate as dependent elements of their parent universe) appear to be replicative, evolutionary, and developmental? Suns, for example, are clearly all three of these things, by the above definitions. Finally, would you agree with me that the process of *biological development*, how it actually works and can be so consistent (as we see in two genetically “identical” twins, even when separated at birth and raised in very different environments), is both the most amazing, and one of the most incompletely mathematically modeled, processes in our known universe? I’m not trying to be argumentative; I am just curious. It is very helpful for me to understand different worldviews.
Finally, on the subject of worldviews (cosmos views?), I would like to make one small observation about how our views may differ at present, at least as I see it today. It seems to me that some of your CD posts display a preference toward, or an assumption about, a “randomness-centric” view of complex systems change. For example, your exoplanet post of May 2021 https://centauri-dreams.org/2021/05/28/are-planets-with-continuous-surface-habitability-rare/ analyzed a paper by Tyrrell in which “It is assumed that there is no inherent bias in the climate systems of planets as a whole towards either negative (stabilising) or positive (destabilising) feedbacks.” This is to me a poor assumption to make about such an important feature (feedback) in any complex system. I would argue that it leads to a math that is so one-sided and simplistic that its conclusions cannot be trusted. You have a great facility with math and rigor, and I am very impressed with it, but I do think it is easy for us to get our assumptions wrong when we set up our analysis. I’m sure you know that there are many geochemists, planetologists, astrobiologists, physicists, ecologists, and biologists who find value in some variation of the geo-chemo-bio-climate homeostatic Gaia Hypothesis. Not as aggressively as Lovelock stated it, but in some milder and much more qualified form. Earthlikes do appear to be very, very good nurseries for life. Life sprang up on our Earth almost as soon as the crust was cooling. Earthlikes may be both a great nursery and a great “finishing school”, for taking life to the postbiological state, in an accelerating, network-centric process. Many of these climate-stabilizing Gaia processes are prebiotic. Plate tectonics strongly stabilizes atmospheric CO2 buildup on Earthlikes, for example, even without ocean plankton. To make an assumption of feedback randomness in climate systems takes a piece of the whole complex puzzle and simplifies it dangerously, as I see it at least.
Perhaps I am misunderstanding your analysis and the paper. To me, that kind of assumption makes sense if we live in a randomness-dominated universe. But if we live in an *evo-devo universe*, where both random and predictable physics operate at all scales, in all hierarchies of complexity, and throughout the life cycle of the complex system (organism, star, planet), then we will need math of both types to describe such critical system properties. And if not just evolution but evolutionary development is occurring on our most complex planets, we will need math that describes how the critical networks on those planets are increasingly stabilizing and antifragile (and in my view, ethical and empathic) as a function of their complexity, just as biological network development appears to be. Without that kind of math, I don’t think we can expect to rigorously address the big questions. I expect that math will come from biology and complex networks in coming decades, and we’ll eventually learn to apply it on cosmic scales.
This is all quite speculative of course, but I find great value in making our limited arguments today, as best we can. Thanks again for all you do for the CD community. Warmest regards, John
Hi John,
Thank you for what I think is the most thoughtful reply to my comments on CD to date.
1. development of networks and stability. Let’s just use the brain as an example. The lower predictability of the pre-adult brain (those unruly teenagers) is due to the greater connections between neurons and chaotic firing. With aging, connections get reduced, reducing network complexity in favor of predictability. I would argue that network complexity is reduced with age, not increased. A counterargument might be that effective network complexity is improved with age, at the cost of plasticity (old dogs, new tricks, etc.). What about bodily homeostasis? Anyone who has aged knows that homeostasis starts to fail with age, e.g. the inability to stay warm in winter.
2. survivorship curve vs survivor bias. I am reminded of Abraham Wald’s famous wartime analysis, in which his team had to determine where the armor on aircraft had to be reinforced. It wasn’t where the cannon holes were, but rather where they were not. Fetal development rapidly goes wrong at an early stage due to genetic and other defects. Miscarriages, early post-natal death, etc. Once those individuals have been removed, those that remain, the survivors, are relatively free of defects. I apologize if I have the terms mixed up, but as far as I can tell, what we are seeing in adults is survivorship bias. It is why one does not pay attention to CEOs who succeed, or oldsters explaining why they have lived a long life. The reality is that they have just had the good fortune to be free of the problems (and random accidents) of their peers. Survivorship Bias.
3. I read your piece about the tangled world. I don’t think anyone doubts that the constraints of physics drive organisms to try to climb the same fitness hill using the assets they already have. We certainly see the classic morphological convergence between sharks, dolphins, and ichthyosaurs. Your example of the antifreeze proteins is similar. But what are the constraints for the non-physical world, e.g. abstract thought? As I said, McCarthy thought that this would cause convergence in intelligence, as the universe seems to have common laws of physics, logic, and math. But we really cannot be sure that this will drive intelligence to converge, other than perhaps its subsystems that deal with these constraints. The average human spends more time thinking about other things, such as social interactions, which are surely not so constrained. For example, the traits that drive sexual selection appear entirely arbitrary, once one abandons the Kiplingesque “just so” explanations of the features.
Yes, I agree that the web of life is likely more tangled than we thought. Lynn Margulis was one proponent of horizontal gene transfer between higher animals, and there may be some elements of this in different invertebrate life stages too. Retroviruses do insert themselves into genomes, but AFAIK most cause damage, such as cancer. Some will confer benefits when modified and given a promoter sequence to express them. Do we have any idea of how frequent retrovirus insertions are, and how many confer advantages rather than disadvantages?
Lastly, the increasing complexity of ecological webs does confer robustness – usually. But bear in mind that it is highly dependent on network structure. Small-world networks often have single points of failure. In the animal world, the removal of just one species, such as a top predator, can degrade the whole ecosystem that depended on a key function of that organism. One we are facing right now is the loss of honey bee pollinators. They are just one type of pollinator, but by far the most important. If honey bees disappeared tomorrow, an awful lot of plant diversity, and of our foods, would disappear too.
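To make the network-structure point concrete, here is a toy sketch in Python (my own illustration, not an ecological model): a hub-dominated web in which every species depends on a single keystone node. Remove that one node and the web shatters.

```python
from collections import deque

def largest_component(nodes, edges):
    """Size of the largest connected component, found by simple BFS."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:  # ignore edges to removed nodes
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            n = queue.popleft()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, size)
    return best

# A hub-dominated "ecosystem": one keystone species (node 0) links all others.
nodes = list(range(11))
hub_edges = [(0, i) for i in range(1, 11)]

# Intact: everything is connected through the hub.
print(largest_component(nodes, hub_edges))      # 11

# Remove the keystone species: the web collapses into isolated singletons.
survivors = nodes[1:]
print(largest_component(survivors, hub_edges))  # 1
```

The same removal applied to a densely cross-linked web would barely dent the largest component, which is the robustness-versus-structure trade-off being described.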
4. I would really try to avoid Matt Ridley. I consider him as unreliable as Bjørn Lomborg (the Skeptical Environmentalist).
Regarding Ancient Rome, I like the economic analysis in Paul Kennedy’s “The Rise and Fall of the Great Powers”. I would argue that the failure to maintain the frontier represents a diminution of the network as peripheral networks gain relative power. Conditions in England certainly got worse for many after Rome abandoned the island. It was also the beginning of the Dark Ages. Travel became more unreliable. Trade was more difficult. I see all this as a breaking of established networks, not an increase in them.
If the Pax Americana ends, and there is no replacement by a Pax Sinica, do you really believe that networks are getting more complex? The recent problems with supply chains show how brittle those “complex networks” are. And when broken, network effects can cause cascade failures.
Why do I think that we will be followed by postbiological “life”? Let me clarify and say that robots and artificial intelligence will be the dominant “life” in space. I have used this analogy many times: We are like Devonian fish wanting to colonize the dry land and needing to build aquaria to do so. It was the evolution of new organisms, in the vertebrate case reptiles, that achieved this goal. Robots are pre-adapted to space, able to survive in many environments, effectively naked. They can slumber through long journeys and therefore achieve interstellar travel. Biological humans might follow in some cases, building supportive habitats and ingenious ways to travel vast distances, but robots have a clear advantage. On Earth, I see humans as remaining biological, albeit speciating with genetic engineering and becoming increasingly cyborg in some cases. But there will be immense variety, from humans 1.0 to all manner of species and biology-machine hybrids.
In the space of a mere century, we have seen robots develop from concept (Čapek’s R.U.R., Asimov’s robots) to crude metal demos, semi-intelligent humanoid robots, software AIs, and robotic probes. This pace will continue. Unless there is some limit that artificial computation comes up against to prevent AGI, I see this as inevitable.
So a mix of fable and reality, some knowledge of symbolic and neural AI, and the likely merging of the two. Because we can already assemble, disassemble, and rebuild robots, and transfer their brains electronically, robotic minds can spread across the galaxy at the speed of light, as long as there is an assembler to recreate the physical form at the other end.
Re: Universal development. This is the first I have heard of the term. I have no thoughts on it at the moment.
Re: processes that are developmental. As I don’t know that I even agree with your viewpoint, this is difficult. I do think about attractors – whether purely mathematical, or embodied in life. Stuart Kauffman’s book “The Origins of Order” is full of attractors, computationally modeled, but reflecting what he sees in biology. It is that very problem that I am currently trying to deal with computationally, to arrive at ways biology can display complex innate behaviors without learning. Given how much stochasticity neural growth seems to display, I am at a loss to understand how that development leads to innate behavior, such as the required responses to intricate sensory input like mating dances in insects. As regards the fine-tuning of our universe, that could just be chance and the fact that we inhabit the only one that works. Without knowing the details of the creation of universes, I have no idea what factors could be involved in the natural selection of certain types of universe. I don’t accept the idea of a creator, but if there was proof that one exists[ed] then I would change my mind.
Re: The post on Tyrrell’s paper. He and I had discussions about the model. I understood what he was trying to achieve, but personally I think the real feedbacks are more stabilizing than he suggested. I ran a number of experiments altering his model parameters to understand it better, and I know he is looking to incorporate more realistic models for planetary climates than his simple Matlab model used.
Having said that, randomness is a very powerful tool in computation. It solves a lot of problems in goal-seeking and optimization that break purely deterministic models. It should always be used as a null hypothesis to test against. I have David Raup’s book “Extinction”, in which he shows that random models can explain extinctions without resorting to external events, or even genetics. IOW, under such models the association of the five great extinctions with cosmic or geologic events would be coincidental. I don’t think that is true, but if randomness alone can explain the extinctions, it is important to show why the extinctions are truly caused by external events, and not just coincident with them.
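For anyone curious, here is a tiny sketch of the kind of null model Raup describes (my own toy version, not his actual model): constant, independent extinction risk per lineage, with origination replacing the losses. Even with no external events at all, the worst time steps stand well above the average kill rate, which is exactly why the null must be tested before blaming a comet.

```python
import random

random.seed(42)  # reproducible toy run

# Raup-style "field of bullets" null model: each of N lineages independently
# goes extinct with the same small probability at every time step, and
# origination replaces the losses. No comets, no volcanoes, no genetics.
N_LINEAGES = 1000
P_EXTINCT = 0.02
STEPS = 500

extinctions_per_step = [
    sum(1 for _ in range(N_LINEAGES) if random.random() < P_EXTINCT)
    for _ in range(STEPS)
]

mean_kill = sum(extinctions_per_step) / STEPS
worst = max(extinctions_per_step)

# Even under constant risk, the worst steps look like "events" that an
# observer might be tempted to pin on an external cause.
print(f"mean kill per step {mean_kill:.1f}, worst step {worst}")
```

An observed extinction record would have to exceed what this chance process produces before an external cause can be claimed with confidence.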
I do agree with you that mathematical modeling is going to help find the answers to some big questions. I am always interested in models that appear to show how events happen, such as planetary formation, but at the same time, I am aware that bias can creep in to include “magic numbers” and the use of some functions that will result in the outcomes wanted. Science is a self-correcting process that will eventually find the truth, especially when anchored to data.
Hi Alex,
Fantastic response, thank you. Let me make a brief reply (work starts again tomorrow).
1. Yes, I can see many ways network complexity reduces with age. I was referring to regulatory stability, a different concept. That goes up as those connections prune away. Roughly half the brain’s cells disappear in the first five years. It’s a massive pruning, and a stabilization as well. That stabilization is a real hallmark of development (and overstabilization and brittleness in advanced age). We can find this curve in people, ecosystems, organizations, cultures, and, I’d argue, for all of biological life, prior to getting its developmental code rejuvenated.
2. We may be talking past each other here. Demographers have a concept, the survival curve, that describes the odds of continuing to survive from where you are at present. Your odds go up steadily until you hit maturity. That’s the stabilization I’m talking about that happens with normal development. You may misperceive the danger of the past (survivorship bias) or accurately know the past survival curve, but that’s a perception topic. I’m just talking about a known feature of biological development, one that may operate if the universe is also developing. Developmental genetics, and metastable environmental boundary conditions, presumably guide that process. The subset of universal parameters that appear to be finely tuned, and metastable multiversal boundary conditions, could simultaneously guide the same curiously smooth acceleration we see on Earth and, presumably, other Earthlikes. In other words, the core features (intelligence, ethics, and empathy) that all complex living networks may have to develop, along with our universe’s special physics that allows continual miniaturization, digitization, and network redundancy, may be core to that stabilizing process (the “chaotic arc of progress” we are musing about).
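A toy life table (the numbers are made up, purely for illustration) shows what I mean by the survival curve’s conditional odds rising toward maturity:

```python
# Hypothetical cohort fractions still alive at the start of each life stage.
# The conditional odds of surviving each stage are the successive ratios.
stages = ["embryo", "infant", "juvenile", "adult"]
alive = [1.00, 0.60, 0.55, 0.53]  # made-up fractions, illustration only

for i in range(len(stages) - 1):
    p = alive[i + 1] / alive[i]
    print(f"P(survive {stages[i]} -> {stages[i + 1]}) = {p:.2f}")
# The conditional probabilities climb (0.60, 0.92, 0.96): the heaviest
# losses come earliest, and survival stabilizes as development proceeds.
```

The rising ratios are the developmental stabilization being described, and they are a property of the process itself, independent of any observer’s survivorship bias.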
3. re: sex and cultural behaviors being “unconstrained.” I wonder if this analogy helps. Developmental processes always create an “envelope” to constrain evolutionary stochasticity. So for every “butterfly effect” there is an “envelope effect” to limit the size and scope of the downstream effect. The way the brain develops, for example, is enveloped by a small number of cell types creating a framework (radial glial cells, etc.). Their future is predictably fated. The rest of the cells populate and wire up stochastically. Culture and mating work like that. Lots of unpredictable violent actions at the individual level in a city, but they are always enveloped by space, time, and function. Much is predictable (predictive policing will tell you crime statistics down to the block, in advance); individually, it is stochastic. Mating works the same way, in all species: individually stochastic, yet observed over time, the envelope of mating strategies used by each species is, on average, highly predictable. It is the subset of developmental genes that creates that predictability. So I think we are both correct. We must see both the butterfly effect and the envelope effect, evo and devo, always operating simultaneously in any complex replicating system.
4. A: “It is that very problem that I am currently trying to deal with computationally to arrive at ways biology can display complex innate behaviors without learning. ”
This is a core question in evo-devo philosophy of biology. I would commend that literature to you; it may help. As I understand instinct, it is a form of evolutionary learning, but it required many previous developmental cycles, under selection. So the learning occurred in the past, by gene-protein regulatory networks, using evolution under selection, via accreting into (adding onto) the core set of developmental genes and regulatory systems (which has some heterochrony capacity, but not much). The DGT (developmental genetic toolkit) of a human is a kludge of accreted systems. We share the same trunk with many other much simpler organisms. And of course, there is also learning within the life cycle of the organism, epigenetic, individual, cultural, etc., overlaid on all this genetic learning.
I really think the evo-devo biologists are working on the most important problem for the future of AI, how do you take a cycling developmental system, give it the ability to evolve, and use that evolutionary variety under selection to keep updating (progressing) the developmental code. Part of what makes life so resilient is not just that it develops all manner of diverse forms, but that it develops higher intelligence, an intelligence which becomes *increasingly general*, increasingly able to represent anything and be useful in any context, at its leading edge of complexity. It is that *generality* of intelligence, along with its stabilizing ethics and empathy, and the knowledge accumulation and niche construction that intelligence affords, that really makes life so special, in my view.
Because of the generality of our intelligence, this acceleration train just keeps running faster, until you get postbiological life, then, as you and I both agree, you don’t even need planets, or suns. That’s a truly different state of existence, and it seems right around the corner for us, in astronomical time. We are doing a very poor job governing our planet, in many ways, as you point out. But we are also riding an accelerating train toward postbiology in the process, a train that we could not stop even if we wanted to. We are evolving unpredictably and developing predictably toward a very specific set of future capabilities. We don’t fully run the show. The universe, in my book, runs the developmental aspects of cosmic complexity, via past selection. We aren’t talking about a God or an entity here, simply the same processes that created us, and that drive certain predictable features of our own futures.
Finally, thank you for your points on randomness. It is indeed a very powerful and useful tool; all I am saying is that it is easy to overuse it. To study complex systems, I think we need a math that includes both butterfly effects and envelope effects. Consider Conway’s Game of Life. That is an interesting model, because it has developmental predictability (gliders, guns, etc.) and evolutionary randomness (not knowing where the matrix will go next, if there is randomness in the initial conditions). I’m not saying the universe is a deterministic cellular automaton. What I am saying is that we need models that have both functions: complex, hierarchical predictability and stochastic unpredictability. When I see only Monte Carlo methods in a model, I immediately realize there is no developmental thinking going into that model. It can describe individual and evolutionary effects quite well, but there are also environmental and developmental effects that are cumulative, on a life cycle curve (fetus, youth, maturity, replication, senescence, recycling). If we aren’t using both kinds of math in our models, we can’t accurately model long-run dynamics in any system that is both evolutionary and developmental. Thanks for the conversation!
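To make the Game of Life point concrete, here is a minimal sketch (my own implementation of the standard B3/S23 rules): the glider is the “developmental” side of the system, with a fully predictable fate. After exactly four steps it reproduces itself one cell down and to the right.

```python
from collections import Counter

def life_step(cells):
    """One Game of Life step; `cells` is a set of live (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has 3 neighbors, or 2 and is already live.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The glider: a structure whose future is fully predictable.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# After 4 steps the glider has recreated itself, shifted by (1, 1).
assert state == {(x + 1, y + 1) for (x, y) in glider}
print("glider translated by (1, 1) after 4 steps")
```

Feed the same `life_step` a random soup instead of a glider and the long-run outcome is practically unpredictable, so both behaviors live in one deterministic rule set.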
Hi Alex,
I’d like to add a bit more on the randomness point. I had a day to think about what you said here:
“randomness is a very powerful tool in computation. It solves a lot of problems in goal-seeking and optimization that breaks purely deterministic models. It should always be used as a null hypothesis to test against. I have David Raup’s book “Extinction” in which he shows that random models explain extinctions without resorting to external events, or even genetics. IOW, the idea that the 5 great extinctions are caused by cosmic or geologic events is coincidental. I don’t think that is true, but if randomness explains the extinctions, it is important to show why the extinctions are truly caused, not just coincidental, by external events.”
Thanks for this. To me this is a good example (I expect to cite it in my next book) of how easily our randomness models can be misused. They are so powerful and flexible, and so true for the great majority of contexts, that we can use them to explain all kinds of phenomena, including ones that we know, obviously, are not fully random. There are important causal processes and constraints operating at all levels, all the time. It’s just often very hard to see their history in the data.
One of the key things I’ve learned about the intersection of evolutionary and developmental processes in complex living systems is the 95/5 Rule. In brief, it states that on average, roughly 95% of the change is unpredictable (evolutionary), and 5% is predictable (developmental). Yet the 5% of top-down, constraining and converging processes that are in-principle predictable (if you can see and model them) are easily as important, to long range dynamics, as the 95% that are bottom-up, exploratory, stochastic and contingent. For example, 95% of metazoan genes can and do change unpredictably from one replication to the next, in any organism, and 5% are highly conserved over replications (the “conserved core” developmental genes). This has many downstream consequences that work the same way. Remember those radial glial cells I described in the developing brain? They are part of a class of only 5% of cells (actually, less) that have predetermined spatiotemporal fates, driven by those developmental genes. The rest stochastically populate, bottom-up and locally, within the constraining envelope of those top-down, global, developmentally-controlled cells. Likewise for our psychological attributes, etc. I give many other examples of this 95/5 rule in my book, including in organizations and societies. Happy to send you and any others here a PDF if helpful.
Here’s a question: If the 95/5 rule is operating in a complex system, randomness will be a valid descriptor for only 95% of events (19:1 in the sample). What is the right approach to test for the existence of a few parameters that will have downstream convergent and constraining “envelope effects” on the stochasticity of the system? That kind of statistical test would seem to be particularly important as a *counterpart* to the null hypothesis, in probing any replicative complex adaptive system, given that it seems to describe best how development works. Planetary climates are clearly a replicative system; our universe replicates them prolifically, in many variations.
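One crude way to probe for such envelope parameters, sketched in Python (my own toy construction, not an established statistical test): simulate a replicated system where 5 of 100 parameters genuinely constrain the outcome, then check whether a simple correlation screen can recover that 5% against the 95% noise background.

```python
import random

random.seed(1)  # reproducible toy run

# Toy "developmental hypothesis" screen: 100 parameters vary across runs.
# 95 are pure noise ("evolutionary"); 5 strongly drive the outcome
# ("developmental"). Can a naive screen find the 5?
N_RUNS, N_PARAMS, N_DEVO = 400, 100, 5

runs = []
for _ in range(N_RUNS):
    params = [random.gauss(0, 1) for _ in range(N_PARAMS)]
    # The outcome converges on the few "developmental" parameters.
    outcome = sum(3.0 * params[i] for i in range(N_DEVO)) + random.gauss(0, 1)
    runs.append((params, outcome))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

outcomes = [o for _, o in runs]
flagged = [i for i in range(N_PARAMS)
           if abs(corr([p[i] for p, _ in runs], outcomes)) > 0.3]
print("flagged parameters:", flagged)  # expect roughly the first N_DEVO indices
```

A real version would need proper multiple-comparison corrections and nonlinear dependence measures, but it illustrates the shape of the question: test for a small constrained subset, not just against a global null of randomness.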
Predictable events are always occurring in every replicative complex system, but they aren’t the simple causal processes we learned in discrete math and logic. I’d call this test the “developmental hypothesis”, and in addition to the classic NH, the DH should have to be tested in a number of variations before we’d be confident it could be rejected, if we are probing any replicative complex system. If you know of any statistical work or terms that address this topic, seeking downstream convergent and enveloping effects, based sensitively on the values of a few system parameters, I’d be grateful for the guidance. Thanks.
The late John McCarthy of AI fame once stated that intelligence was likely convergent, rather than very varied. He based his argument on what he believed the commonality of math, physics, and logic across the universe would do. One could argue that if he was right, it might be extended to your thesis of universal goodness in the universe. A consequence of that would be to indicate that the anti-METI people are wrong, and that Zaitsev would be doing no harm in broadcasting, whether or not ETI existed during the time of humanity’s L of the Drake equation.
Thank you for this great McCarthy reference. I’ll use it in my next book. Asking what converges, on a statistically reliable basis, at various levels of evolutionary complexity, seems core to understanding the nature and future of life, intelligence, values, and adaptiveness. Just as there is a continuum of statistical predictability in complex systems, around many variables, there is a continuum of structural and functional convergence.
There seem to be two convergence groups especially worth discussing. Universal convergences, expected in all systems of a certain developmental complexity, and majority convergences, expected only in the majority (the bulk of the Gaussian) in a population at that level of complexity.
Human ethics and empathy clearly have both convergence types. We all have a few developmental universals, ethical and empathic algorithms encoded as instincts, expressed in all of us in normal development. These include things like prosociality, reciprocity, and a constant search for positive-sum games we can play and enforce (see Bob Wright’s Nonzero, 2000). We also have a much larger set of majority-only convergences. Yet well-built networks (including well-built democracies) can very effectively police the sociopaths, criminals, creatives, and geniuses as long as there is a majority convergence. I think most convergence is of this weaker (not universal) type, because each individual is so limited in our intelligence. Adaptive groups always need healthy experimentation away from the norm. Sometimes the outlier is what we need, especially when conditions suddenly change.
I expect this is true for our ETs as well. Frank Drake famously said that were we to look at ET from across a darkened room, he’d expect them to look like us, in general anthropoid form (and he said implicitly, in certain ethical and empathic universals), but curiously and helpfully not like us when we turned on the light. This is the kind of careful thinking that I believe we need a lot more of in evolutionary astrobiology. It will be speculative today, but we can build better models over time, and as our computational capacities improve, simulations from first principles will increasingly settle the issue of what is universal, what is majority, and what is unpredictable and unique.
What I would add to Drake’s observation is that when we turn on the light, we should expect to find a much larger class of majority convergences, things that are still the same in most ETs in a universal population, on average, but not all. Then the largest class of observations will be all the evolutionary differences.
An important insight, from an evo-devo perspective, is that all of those evolutionary differences will be occurring *within the constraint environment* imposed by those first two classes of convergence. That means the vast majority of them will not be extreme or large enough to threaten those convergences. When we do see that level of evolutionary variation in biological processes, development itself fails (as in your comment about genetic abnormalities in utero, or in cancer in an adult).
From the evo-devo perspective that I favor, all of life’s evolutionary variety, in other words, occurs within and services the developmental life cycle. The better we understand the predictabilities and the hierarchies of that life cycle, the better we understand the degrees and kinds of variation and uniqueness we will see. If our universe is an evo-devo system, all the evolutionary variety of our ETs will be kept in service to the successful development (and replication) of the universe as a system, by those two classes of constraint. We shall see if any of this holds up, but it seems to me to describe part of how life, as a network, has remained so successful for so long, in so many different environments. Life has a well built core of developmental regulatory systems that have, so far, carefully harnessed evolutionary variety to keep it in service to an ever-increasing network adaptiveness, at the leading edge of complexity. It can’t do that forever, in the biological substrate, because the code is accretive, and life cannot freely edit, by and large, its developmental code. There are good arguments that life’s developmental code, now over three billion years since LUCA, is increasingly restrictive, brittle, and sclerotic, in its most complex organisms. When life goes postbiological, it will be able to edit and rejuvenate that code. That seems yet another serious adaptive advantage. Yet if evo-devo processes are at the heart of all well-built complex networks, postbiological systems will continue to live in this tension between evolutionary creativity and developmental constraint. They won’t be able to escape that universal dynamic.
To apply all this back to McCarthy’s observation, I’d like to cite the late, great mathematician Michael Atiyah, who famously said that math is both invented and discovered. We humans explore and experiment with most of our theoretical math (in an evolutionary process), and we discover and generalize with special subsets of applied math (a developmental process). To simplify, we could say that the first activity serves a universal value of Beauty, the second, of Truth. But note also that we can subdivide the “discovered math” into these two important constraint groups: There will be inevitable constraints (truths), like number theory, geometry, etc., discovered by all ETs. But there will also be majority-only mathematical truths, some set of generally useful applied math that most ET civs, but not all, will discover. Then there will be a third class, a huge variety of evolutionary (creative, beautiful) theoretical maths, uniquely invented by each ET civilization, which none of us (not being omniscient or omnipotent) will be able to call true or useful, *yet.* As with our experience to date, some subset of that third category will be found true or useful in the future, if our local environment, or the universe itself, attains some future, more evolved and developed state of complexity.
To me, all this suggests the deep adaptive value of a universe that develops (self-organizes?), perhaps over many past cycles, in such a way that:
1) all the interesting intelligences are *kept spatially and informationally separated from each other,* over the great majority of their evolutionary development, so that each evolves (varies) in usefully unique ways. In other words, the inaccessibility is self-organized, to benefit the whole. [PS: this is how Graafian follicles and ovulation work in the female ovary as well.]
2) has developmental laws which drive all evolved intelligences to accelerate their complexification, at the leading edge, toward a highly local, dense, and dematerialized network that looks increasingly like a black hole, or some Planck-scale structure. In other words, all the fastest growing ones go inward on average, not outward, and
3) requires all of them to exist in a universe with a special structure and topology (wormholes? Hyperspace? entanglement?) such that when we approach that black-hole-like or Planck-like domain, we all get to meet each other, and compare and contrast the limited science, intelligence and wisdom we have each obtained. All of them also learn, as their science, ethics and empathy improve, that it is best to let each civilization develop in its own unique way, to maximize local uniqueness prior to meeting and merger. [Arthur Clarke said something roughly like this, in a few of his essays].
I expect our future science to give us the outlines of these three things long before we are actually able to “meet and greet” any ETs. My transcension hypothesis paper https://www.researchgate.net/publication/256935188_The_transcension_hypothesis_Sufficiently_advanced_civilizations_invariably_leave_our_universe_and_implications_for_METI_and_SETI describes the kind of optical SETI results that I think we will discover if this model is correct. Within a few decades, I predict our space-based SETI will see signs of Earthlikes “winking out”, turning into something that, to us, looks like a black hole. It won’t be a black hole; it will be some kind of gateway to all the other ETs, but like black holes it will be importantly outside of, and beyond, this ancient Universe, which will be simple, boring, senescent, and unaware by comparison to the complexity and consciousness that exists in all those special points (and networks) of entities.
This is a lot of speculation. One day I hope we’ll see if any of it is even roughly correct. Thanks again for the comment.
If you are interested, here is the link to the SETI talk with John McCarthy. I was at that talk and got to ask a question.
Convergence of Intelligence – John McCarthy (SETI Talks)
Here is Keith Devlin giving a [partial?] counterargument.
Contact with ET using Math? Not so fast. – Keith Devlin (SETI Talks)
Thanks for sharing these. It would be interesting to see the two of them debate. McCarthy would surely have laughed when Devlin wasn’t willing to concede that “any sufficiently intelligent life” would necessarily have number theory and primes. Devlin takes Plato’s Cave way too far, into Idealism. There is a real objective world that all sufficiently advanced representation systems must find, in my view. We all use that representational grounding to build language and to swap and vary memes, for example. That’s not saying that all ETs would share the same theoretical math, by any stretch. But they would all have to share a core universal math, and most would share various applied maths. Just my perspective of course; feel free to disagree.
The evolution of morphology is highly contingent. If Drake is saying that, of all the organisms that can attain technology and that we might meet, only a humanoid form will work, then that is a very limited, almost anthropocentric view. Far more likely is that any form with appendages bearing fine manipulators, plus a good brain, should be possible. But more importantly, he assumes ET will be biological. I think the ETs that we will meet will be machines with very different morphologies from humanoid. IOW, there is no convergence to humanoid form, either biological or artificial.
I disagree here. DNA is more like a set of building blocks that can be arranged in many ways. Just as LEGO started with simple bricks and then diversified its range of pieces to allow a much greater variety of structures, the assembly of DNA’s parts can be altered by a range of processes: breaking genes, evolving new genes, making different proteins from a single gene, turning transcription promoters and enhancers on and off, epigenetic controls, and so on. What is constrained is that each new structure must work, so it can only change incrementally from the parent. But over time we have seen an explosion of forms and capabilities. There are constraints, though. IIRC, Niles Eldredge thought there were important morphological constraints and “good” forms that were not considered by gene-first proponents of evolution like Dawkins. For physical forms, there is none better than D’Arcy Thompson and his classic “On Growth and Form”.
Regarding math: mathematicians want proofs, hence the need to prove the conjectures that show universality. But the usefulness of all sorts of ideas and conjectures can be demonstrated, up to a limit, by brute-force computation. I cannot prove Fermat’s Last Theorem (it is far above my capabilities), but I can write a program to show that it holds for as many integer combinations and powers as I have computational resources at my disposal. David Gelernter has proposed this experimental approach to doing math. Whether ETI has a largely overlapping set of similar mathematical ideas or very different ones, IDK. I would be fascinated to find out what they do, as computation is likely universal and they will apply it to their problems.
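As an aside, the brute-force “experimental math” approach described above is easy to sketch. Here is a minimal, illustrative Python example (my own code, with arbitrary search bounds); Wiles’s proof guarantees the search comes up empty, which is exactly the point of an empirical check:

```python
def fermat_counterexamples(max_base=30, max_power=6):
    """Search for (a, b, c, n) with a^n + b^n == c^n and n > 2.

    Fermat's Last Theorem says no such tuple exists; this brute-force
    check just verifies that within the given (small) bounds.
    """
    hits = []
    for n in range(3, max_power + 1):
        # Precompute nth powers so membership testing is O(1).
        # 2 * max_base comfortably covers any c with c^n <= a^n + b^n here.
        powers = {c ** n: c for c in range(1, 2 * max_base)}
        for a in range(1, max_base + 1):
            for b in range(a, max_base + 1):
                total = a ** n + b ** n
                if total in powers:
                    hits.append((a, b, powers[total], n))
    return hits

print(fermat_counterexamples())  # → [] (no counterexamples in range, as expected)
```

Scaling `max_base` and `max_power` up is limited only by the computational resources at hand, which is the spirit of the comment.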
IDK quite what to make of this. It seems to me that your argument about biological complexity, rigidity, and senescence applies to this situation too. Unless they create lots of separate sub-networks that can loosely connect, the centralized network will likely be robust but relatively static. Both biological and cultural evolution get major boosts when small populations are isolated – the “founder effect”. The chance availability of alleles in small populations allows rapid evolution through various means, such as genetic drift, whilst big populations are constrained by Hardy-Weinberg equilibrium. I believe the same is true of cultures. If ever there was a good reason for humans to form small colonies in widely separated space habitats, it is to stimulate cultural diversity, which in toto is ultimately more robust than one large connected population vulnerable to an unexpected perturbation.
If we do transcend to another form or state, I think we must remain connected to the real universe. Even if the culture as a whole lives virtual lives, there will be those who wish to grapple with reality, just as there will always be mountain climbers tackling mountains, and not simulating those mountains in some virtual world, however real.
To mathematically capture both diversity and constraint, perhaps the math has to follow some logistic path. Early on it can model accelerating growth and the testing of new ideas, later converging to some constrained optimum. Technologies follow this logistic growth curve. Perhaps we need a math that can expand within this logistic envelope?
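For concreteness, the standard logistic function captures exactly this envelope: near-exponential growth early on, saturating toward a carrying capacity later. A minimal sketch (parameter values are arbitrary):

```python
import math

def logistic(t, K=1.0, r=0.5, t0=0.0):
    """Logistic curve: y = K / (1 + exp(-r * (t - t0))).

    K is the carrying capacity (the constrained optimum), r the growth
    rate, and t0 the inflection point where growth stops accelerating.
    """
    return K / (1.0 + math.exp(-r * (t - t0)))

# Early values look exponential; late values flatten toward K = 1.0.
for t in (-10, -5, 0, 5, 10):
    print(f"t={t:+3d}  y={logistic(t):.4f}")
```

Before the inflection point the curve is nearly indistinguishable from exponential growth, which is why logistic processes so often get mistaken for unbounded ones until the constraint bites.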
Alex: “The evolution of morphology is highly contingent.”
Yes, but there are also optimal morphologies and functions that will be discovered by all that stochastic evolutionary search. We have to learn to see them and talk about them. It’s not anthropomorphic; it’s finding the outlines of universal development. Bilateral symmetry will win in multicellularity. That gets you to tetrapods. You only need two grasping appendages, and the ability to use them in air (not water), for collective tool use to make you dominant over all other non-tool users. There’s your anthropoid form. It’s an optimum, for all Earthlikes, sitting there in the middle of all that contingency. Dinosaurs were trending toward this form with the increasingly successful Troodon clades when the meteorite hit.
So yes, many forms are possible, but only a few will be deeply accelerative, and thus dominate their environment via competitive exclusion. Octopuses could never reach our level, even though they have prehensile limbs and can build huts, because they can’t use tools and groups to dominate their environment. Water is too dense a medium relative to the force that can be generated by creatures made of protein. Cultural acceleration had to emerge first on land, etc.
The convergence to humanoid form is only relevant to get you to the next stage of development. Yes, once you have postbiological life, humans can go into any biological form. Some surely will. But will many? Or will most or all find it by far the most valuable to migrate to postbiology? One thing is certain: the ethics and goals of the postbiologicals will drive the future at that point, if biology has an unbridgeable, multi-million-fold learning and action disadvantage, plus all the other limitations of depending on biochemistry to survive.
Regarding the senescence of life’s developmental code, see figure 22 in my 2009 paper, Evo Devo Universe? here:
https://www.researchgate.net/publication/253514068_Evo_Devo_Universe_A_Framework_for_Speculations_on_Cosmic_Culture/citations?latestCitations=PB:356127378#fullTextFileContent
I’d love it if you have any feedback on any aspect of that paper (if you ever have a chance to skim it). Even life’s family-origination rate has been saturating since the Cambrian. Constraint just goes up and up over time in any developing system. Not only does everything new still have to work, as you point out; it also can’t break a lot of the old legacy code it’s built on, and it has to compete with and adapt to increasingly complex actors and environments. Most evolutionary biologists entirely miss this process, but it is very real. I call it “terminal differentiation” in the paper. It’s just our familiar logistic curve, applied to all of biological life as a morphological explorer. I’d expect it to work the same on other Earthlikes. Within a few decades, I predict astrochemists will also know that liquid-phase organic chemistry is a unique developmental portal to complex cells, and that Earthlikes are needed to keep that chemistry stable for billennia. Many would call that view “Earth-centric”. I’d call it simply looking carefully for the special subset of circumstances that will sustain billions of years of predictable acceleration of complexity. I’m always looking for flaws in this argument, however. It’s pretty speculative at present. If you find any in my paper, I’d be very appreciative to learn of them.
Thanks for the points about computation and math. Gelernter is a hero of mine. His book Mirror Worlds was very helpful to me in understanding simulation. Yes, evo-devo dynamics and life cycles, with eventual senescence, would seem to have to apply to any real physical entity, no matter how dense and dematerialized it becomes. But you know all the weird things that happen in general relativity when you get to physical extremes. When you are a near-black-hole entity, I would expect you’d surely still need to be an ecosystem of separate networks, each evolving differently yet in contact with the whole, as you point out, and you’d surely want to be in the physical world as well as in the virtual world.
But how much time would you spend in each? If the large-scale physical world gets increasingly old, slow, boring, and highly simulable the more intelligent your civilization becomes, wouldn’t you spend most of your time living and experimenting in the small-scale (pico, femto) physical world? And how much would you act vs. “think”? As kids, we acted all day. As adults, we think much more and act much less. Once we have enough action primitives simulated, we play much less in physical space and much more in virtual space (imagination). This seems inevitable, as soon as the simulation gets sufficiently good. We always do both, but the ratios change, and at those extreme scales the physical:virtual ratios may be similarly extreme.
I like your (implied?) idea that the math (theoretical and applied) we can learn follows a logistic path. There must be a limit to what we can learn, if we are finite creatures. The universe is big but apparently finite, and we know there is a limit to its lifespan and to the acceleration that can occur. Once we hit the Planck scale, there’s no more room at the bottom for our own local acceleration. And if there is no faster-than-light capability, a constraint I expect is there for a very good (self-organized) reason, to keep us all evolutionarily isolated for a time, then there will be a limit to what we can all learn in a rapid timeframe. If our intelligence has been self-organized to be of use to the universe, it seems that the big payoff, at that point, would be for all the finite understandings each of us has made to be compared, via some way for all of us to meet each other by transcending the universe just as the acceleration stops, rather than attempting to expand across it. We shall see if the physics bears that idea out. Thanks for the stimulating conversation, Alex.
The aged, network-tending peoples aren’t making themselves known to this youthful, network-pillaging people. If this is a measure of their goodness, or of universal goodness, then I don’t understand how they could consider our messaging of nearby stars as good, or as considerate of the network.
Interesting topic. Consider the classic ‘The Day the Earth Stood Still.’ What is regarded as good for the cosmos might well be bad for the People of Earth, i.e., the greater good of the galaxy requires the subjugation or destruction of our planet’s unruly denizens. Also, increasing developmental consciousness would not be facilitated by walling ourselves off in a black-hole simulation; it would instead be accommodated by an increase of sensitivity to the outer cosmos, as in the case of an advanced radio astronomy that brings the ‘real’ universe to us. Beyond good and evil, we should consider whether a non-human intelligence in the universe will even be capable of making this kind of distinction (to be fair, we can equally well ask this of human intelligence). What would the ‘form of the Good’ be to an alien mind?
I’m reminded of Clarke’s “galaxy building” background in the Odyssey series: the aliens who seeded the galaxy to cultivate mind WEEDED as well as harvested the results. In the last novel, 3001: The Final Odyssey, the plot included the idea that the monolith may have judged humans to be a species to be weeded.
Organized religion might get a severe jolt if we received a message that humanity was found forever unacceptable to join a galactic civilization and would therefore be terminated. However, if such an end were to come, I think it would be more like that of the Vogon constructor fleet demolishing Earth, except that the intelligences would no more notice us than we notice insects during construction.
Perhaps a non-human intelligence (especially an artificial one) would be indistinguishable (as we see things) from evil. They (it?) would have no problem having all members on the same page, because they’re equally intelligent. Here, not so much. Humans come in a wide variety of intelligence levels, and many aren’t overly bright. We have institutions like democracy guaranteeing that IQ isn’t the determining factor in voting for policies. Many of us seem to think it’s unsafe to eat GMO foods and are active in preventing GMO crops in poorer countries that really need the increased yields. At least part of this is IQ-driven. An artificial intelligence wouldn’t likely be so fractured. Humans wouldn’t be able to have that universal focus.
I don’t understand why this would be the case. It certainly isn’t the case for our AIs.
IQ and voting in democracies: we have plenty of well-educated politicians and business leaders who hold very unscientific views. There is a long history of elites wanting to restrict voting by “the masses”, but I don’t think the results show improved outcomes, other than for the interests of those elites. Democracy seems to be the least-worst approach to governance, offering a self-correcting mode. Not as self-correcting as the scientific method, but better than rule by restricted means – monarchs, oligarchs, plutocrats, etc.
Hi Project Studio,
Thank you for your great comment. It turns out, because of the way gravitational lensing works, a manufactured black hole is actually the ideal universal observational environment, via a “focal sphere” of microscopic entities orbiting it. Clement Vidal and I have both done some basic estimation work on this, based on Claudio Maccone’s great work with a Sol-orbiting system. So you aren’t walling yourself off as you densify and miniaturize; you are actually becoming the best eye you can become.
What’s more, if you can miniaturize and densify your local civilization at a rate substantially faster than you can travel *through* the galaxy, you’re also learning, observationally, far faster by going to inner space than by sending any kind of eyes or probes into outer space. This is not to take away from all the ingenious work of many at this site, on solar sails and the like. But to me, we are in a situation similar to A. E. van Vogt’s 1944 sci-fi short Far Centaurus https://en.wikipedia.org/wiki/Far_Centaurus, in which newer, faster interstellar voyages forever overtake the older, slower missions.
No matter what kind of interstellar travel any local sentience comes up with, it seems very likely to me that inner-space travel, building a black hole focal sphere, will beat it to observing the whole galaxy well, at least by creating our own local black hole. If our ability to do this kind of miniaturization is truly on a Moore’s-law-like curve, as some have speculated, we might gain this capability not thousands but, amazingly, just hundreds of years into the future. I believe it was Lawrence Krauss who estimated that a “galactic Moore’s law” might run for just 600 more years before we reached Planck-scale engineering, if current trends continue. I wouldn’t believe any of this was possible if I hadn’t seen the billions of years of acceleration represented in Carl Sagan’s Cosmic Calendar, or personally witnessed what Moore’s law has done since 1965. But I’ve studied the former and lived the latter, so I think a black-hole-like entity within the next millennium is a quite reasonable future possibility, and it is described in my next book. Unlike macroscale acceleration, like human population growth and our unsustainable consumer culture, which always runs up against resource limits, there are no resource limits that prevent this inner-space acceleration. The only limits to that acceleration are our local intelligence and what the laws of physics allow.
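A ~600-year figure of this kind can at least be sanity-checked with a back-of-envelope count of halvings between today’s engineering scale and the Planck length. The numbers below (a ~1 nm current scale and a 7-year halving cadence) are my own illustrative assumptions, not figures from Krauss:

```python
import math

# How many halvings of feature size separate ~nanometre engineering
# from the Planck length, and how long would that take at an assumed
# constant halving cadence? Purely illustrative.

feature_size_m = 1e-9       # assumed current engineering scale (~1 nm)
planck_length_m = 1.6e-35   # Planck length, ~1.6e-35 m
years_per_halving = 7.0     # assumed cadence (hypothetical)

halvings = math.log2(feature_size_m / planck_length_m)
years = halvings * years_per_halving

print(f"halvings needed: {halvings:.0f}")        # ~86
print(f"years at that cadence: {years:.0f}")     # ~600
```

About 86 halvings at roughly 7 years each lands near 600 years, so the quoted estimate is at least internally consistent with a steady Moore’s-law-style trend, whatever one thinks of extrapolating that far.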
We haven’t even talked about all the computational (and presumably, ethical, empathic, and consciousness) advantages of turning one’s postbiological civilization into a near-black hole density network. As far as we have found today, the further we venture into inner space, the more amazing capabilities emerge. For example, consider Neven’s law in quantum computing. https://www.quantamagazine.org/does-nevens-law-describe-quantum-computings-rise-20190618/
Not only will we not be walled off, we’ll be the most capable entities in the universe of observing anything of value.
But here’s a funny thing: I believe there will be less and less of value to actually observe, and to simulate, in old, slow, and simple outer space, the more complex we locally become. The astronomer Martin Harwit published a great book, Cosmic Discovery, 1981, that made this point early and well. See my transcension hypothesis paper if you’d like many more arguments, including a summary of Harwit’s and tentative evidence on this point. Everything worth observing in our universe is increasingly in inner space.
https://www.researchgate.net/publication/256935188_The_transcension_hypothesis_Sufficiently_advanced_civilizations_invariably_leave_our_universe_and_implications_for_METI_and_SETI
Regarding “what would be the form of the good, to an alien mind?” I have some tentative thoughts on that, based on evo-devo systems thinking, in a paper of mine. See “Five Goals of Complex Systems” here:
https://evodevouniverse.com/wiki/Evolutionary_development_(evo_devo,_ED)#Evo-devo_models_require_advances_in_a_variety_of_theories.2C_especially_our_theories_of_intelligence
This is all speculative of course, but we all have to start somewhere in thinking about these important topics. I hope it is helpful to you in some way.
Warm regards, John
Thank you for your reply. I imagined we were considering the limit of density as inside a black hole, and I’m not sure whether the gravitational lensing you describe would be observable from there. Using the black hole as an observational lens for orbiting observers would be akin to radio astronomy in ‘bringing the universe to us.’
Seeing the universe reflected in a grain of sand is a perfectly valid study, validated by the simple principle that every outside has an inside.
I did have a look at the theories-of-intelligence section in your article. Representational intelligence (modelling) appears to be akin to Plato’s forms, which he believed to be an aid to intelligibility, and therefore of the form of the Good. But a form that lends intelligibility to the mind of a human being might not do so for a being with radically different sensory and formatory apparatus. Even in our science of mechanics, the intelligibility of the forces of nature hinges on our physical experience of tension and pressure, and on the mental constructs built up from them. An alien’s living conditions might not provide the same opportunity to develop these constructs, and alternative experiences and forms would be substituted.
There is such a broad body of work that your work references with which I am unfamiliar. It appears to me to be a theory of everything. In reference to the evolution-development of the universe, perhaps your hypothesis could provide the basis for a more satisfying understanding than that provided by the anthropic principle.
It grows late…
Aum, Aum, Aumigod…
Perhaps the secret of the universe is that there is no secret.
https://www.thirdmindbooks.com/pictures/2226.jpg?v=1440450143
An interesting topic, and an even more interesting debate to be had. With the last conference in 2010, it is time for a new one; these should be decadal events, as our understanding and our technological and scientific knowledge evolve and expand.
My own personal view is that the life we discover away from Earth will be in the clouds of Venus, as I have advocated since I was 12, and under the surface of Mars; and now that we know many moons in the solar system harbour potential habitats, possibly there too.
Whilst it is not impossible that we may find and prove evidence of an advanced and intelligent society at some point, communication will be all but impossible beyond ‘proof of life’ communication, and meeting them directly will never happen.
I believe that most, if not all, advanced societies concentrate on exploiting and using the resources of their home planetary system and eventually go naturally extinct. After all, we know from life on Earth that most species live only around 2 million years before being supplanted by a new one. Evolution of DNA is likely a key factor, and whilst an advanced species can likely correct for most changes, the cumulative effect may be inevitable.
Of course there are other reasons; natural disasters and conflict are likely candidates, with natural disasters being able to reset societies to earlier levels of technology without wiping them out. That could happen to humanity with little or no warning.
Lastly, whilst I would love it to be possible, I believe interstellar travel to be a pipe dream, possible by drones, but not living creatures.
Happy New Year to you, Paul, and to all the members of this wonderful, inspiring community. The past year threw so many obstacles into our lives, but perhaps the successful launch of the Webb telescope (fingers crossed it will continue to be a success) will be the sign of a welcome change in the coming days!
Thank you for a thought provoking hypothesis!
Consider the masking of quantum mechanical phenomena by the sheer numbers of particles in macroscopic matter. While there is a finite and well known probability that a single “particle” may behave in a manner that defies classical physics, such behaviors are not observed macroscopically. Is this not an example of system complexity leading to well-constrained behavior?
Arguing that the constraints represent “good” is something else entirely :)
Great point David! Yes, these are perilous ladders we climb together. Easy to slip and make a flawed analogy or a mistaken assumption. Thanks!
Predicting how consciousness will change in form or behavior depends on understanding what it is – solving the hard problem of consciousness. Without that, we can’t know if we can increase “interiority” by transferring consciousness to a finite state machine, quantum computer, the magnetic field of the Sun, or a drop of water. We can’t know if those substrates already possess their own sort of consciousness, which focuses on attributes of the universe that seem irrelevant to our experience.
I don’t want to re-rant here so soon, but in brief I suspect consciousness evolved when animal neural networks evolved a way to detect faint physical traces of the *future* state of some of their cells. The resulting temporal paradoxes act as physical boundary conditions of the universe, and these conditions act as an I/O device for the universe as a whole, i.e. qualia and free will.
Is consciousness a requirement? For example, are corporations conscious beyond the consciousness of the individuals in the organization? Is the global economy conscious? If the answer is no, which I believe to be the case, why should ETI need to be conscious, when it should be able to operate purely on goal-seeking? It may not look anything like our terrestrial life, but I question whether consciousness is a necessary requirement.
The entirety of the “not-I” – including the physical, the mental, and the imaginary, and the body-mind complex – is realized by the “I” as an “of” in its awareness. (Since one can observe one’s thoughts, and they possess no intrinsic sentience, the mind belongs, along with the body, to the “not-I”.) The surmise of a “not-I” anywhere and anywhen is predicated upon an awareness “of”. In the case of the awareness, if there is the absence of an “of” – that “of” being the sine qua non of all “not-I” – there is therefore no “not-I” and correspondingly no “I”. The “I” is manifest when any of the “not-I”, including time and space, matter and energy, is projected/reflected in it.
The Ashtavakra Gita elucidates some of this.
Thank you all for these lovely and thoughtful comments! I will try to get responses here by tomorrow. Happy New Year to all.
I feel the need to defend Ada Palmer’s series, including the 3 books following Too Like the Lightning. Of course it predicts an unlikely future–as is any other future you can describe. But Dr. Palmer is an extremely creative and thoughtful historian, and the books are mind-bending in the best way.
Thanks Tom! I should have qualified the comment about not personally recommending the rest of the series, sorry. It was no attack on Ada Palmer, but rather an expression of my own thoughts on the kind of sci-fi I find most valuable. We each have our reasons for reading sci-fi. I personally read most of my sci-fi for its plausible future description value, with entertainment and education as distinctly secondary priorities. As such, I often will find the most value in the first novel of a good sci-fi series, as I do here. I must disagree with your proposal that “any other future you can describe” is equally unlikely. There is an entire subgenre of sci-fi, called future fiction, that strives to accurately predict the future, and that cites articles and papers that explain the current state of many of the story’s future elements. You can find many good examples of that genre. One example is Burn-In, by Peter Singer and August Cole, 2020 https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution/dp/1328637239
You can find more in my book, Introduction to Foresight, 2021. https://www.amazon.com/gp/product/1736558501
Thanks for the feedback, I’ll try to be less pedantic in the future.
Much to ponder but not tonight. First thought is “The Matrix” which already seems to have been virtually implemented on a large fraction of the Western population by our narcissistic rulers. As pointed out by various critics, unlike the movie, we voluntarily crawled into the virtual pods. Is this the merger that the author suggested? Gaming + porn + social isolation?
And what is “good”? If the author suggests something related to “Western values”, I will fall to the floor in laughter, as would much of the world’s population bombed and sanctioned for “good”.
The topic is inherently interesting but moralistic criteria such as “good” renders the discussion banal.
I am afraid I have to agree; the use of the word ‘good’ in a subject like this causes a philosophical dissonance. And while a race might be both morally impeccable and hold the flag of truth, justice, and the Procyon way high, that’s no guarantee they would have the same standards toward another race and civilisation. Just see how we humans mostly treat other (semi-)intelligent species, pigs for example. They learn how to open a door just from seeing you do so, yet we put them in concentration camps. Step into a barn of a pig farm and you will hear screaming that will haunt you for life. They have language, with a limited vocabulary, though no one has bothered to figure it out. But the tone of despair in that screaming cannot be mistaken.
If there are other civs out there so advanced that we would suffer a culture shock, or go cargo cult at best, we can only hope they do not have the capacity to reach us physically. Otherwise we will end up as pets at best, or as pigs… and the aliens will still remain on their pedestal of ‘goodness’, since they remain just as moral toward their own kind.
Just considering how we humans have treated others over history suggests we might be following a “goodness” arc, albeit a very bumpy one. Medieval torture and witch-burning were horrendous. The Nazi death camps, with their industrialized slaughter, were arguably not as bad as those tortures (although their “medical experiments” come closer). The Chinese would regard their treatment of the Uighur population as positively “progressive” by comparison to the death camps.
The Rawlsian approach to how to structure society definitely looks more communitarian than ours, both between individuals in a nation and between nations. The response of wealthy nations in the current pandemic illustrates this.
We could certainly extend the Rawlsian approach to animals. Certainly, we have started granting some rights to animals, especially with regard to experimentation and cooking. In my lifetime, “Blue Trout” was banned, most of us don’t eat veal, and foie gras is illegal to sell in California. The old discrimination between vertebrates and invertebrates has been breached, with the UK banning the boiling of lobsters (how will we ever watch that scene in “Annie Hall”?), and the same is proposed for octopus and squid (there goes calamari). It is said that a trip to an abattoir can put one off eating meat. A childhood visit to Madame Tussaud’s and a brief look around the chamber of horrors left me such an indelible memory that even today I cannot watch horror movies. It makes me wonder whether school trips to abattoirs would drastically curtail the consumption of meat, though with a possible side effect of PTSD for the population (the meat industry would never allow this).
Does a Rawlsian view, extended to all life, push us in the direction of Buddhism, or at least for a greater reverence for all life on Earth? Would it enforce the UN declaration of human rights across all people, perhaps even ending many socio-political organizations from religions, corporations, municipalities, and nations?
Asimov’s famous three laws of robotics were an idea for allowing robots as slaves while offering enough loopholes for interesting stories. Yet the robots invented a zeroth law that required them to remove themselves from human affairs. While a clever way to join the Robot and Foundation universes, it resonates with those who today believe humans are so destructive that the Earth would be better off without us.
How to get from where we are to a much better world isn’t at all clear. KSR’s The Ministry for the Future is a rather elitist, authoritarian approach to solving the global climate crisis. But as with his earlier books on eco-SciFi, he avoids “pocket utopias” and explains how the better future can arrive, however messy that journey may be. How we might arrive at a far better world preserving Western Enlightenment values and without authoritarian domination by either of the 2 major communist experiments is unclear. Preserving our current way of life seems likely to result in a lot of misery over the next century, the start of which we are seeing today.
If there are ETIs with a higher “goodness” state than ours, what would they make of our global civilizational situation? Would their “goodness” require them to intercede on humanity’s behalf? Would they maintain a strict Prime Directive as that is the only way to contain the damage (in Clarke’s terms, weed us out)? Or would they be so indifferent, as they had far more important things to think about and do, that we would be as ants and termites fighting for territory?
I’m not convinced that animal rights are applicable here. To my eye, it seems there is something *different* about human consciousness compared to the others. And if this difference were only a matter of degree, then why don’t we see the occasional octopus writing mathematical theorems, or at least pigs building pyramids? It is all the rage for people to excoriate how humans treat other animals, but why is it only humans who are supposed to have a duty to treat other animals ethically, when any other species is excused as acting according to its nature when it harms other animals? No one scolds a chimpanzee when it rips a monkey limb from limb. It can be more than a trifle absurd at times: no one can blame a cat for harming a bird, so they blame the person who feeds the cat for its being around to harm the bird. That is not consistent with the premise that humans and cats have the same consciousness and the same rights.
Our consciousness is a mystery, and it is not a mere matter of language or tool use, which others point out are hardly unique, and in any case are nowhere spooky enough to explain something so fundamental. I don’t think responding to that mystery by saying it doesn’t exist and we are the same as houseflies is really the right answer. We need to understand this mystery to know if our consciousness will be the same as that of aliens. If the aliens are more like pussycats, we are in deep trouble no matter what we do.
Just reflect on what you have said about animals in relation to humans with low intelligence. Are they qualitatively different from humans who can write mathematical theorems? If so, how should we treat them?
IMO, there is a case for humans to treat animals with a version of noblesse oblige.
Sorry if I was confusing – the main issue about human behavior is not merely a question of intelligence (though it is often attributed to that), but of diversity. Humans transform themselves and the world with a seemingly endless variety of behavior, not all of it good. Doing math, arguing politics, fighting wars over ideas, playing musical instruments, forming cults with special rituals, performing plays, inventing technologies … there is no telling what people will do next. And the astounding part is that all the endless variety of human behavior seems immune to classical genetics. I don’t believe you can point to one allele, one family, or one geographical population that has otherwise normal human behavior except those involved can’t do math, never had religious beliefs, or are immune from criminal behavior. Every human, regardless of origin, has the potential to be a Gandhi or a Hitler, a soldier or a singer or a scholar or a sycophant. We can’t even use the enormous resources amassed on human genomic variation to predict which kids will do well or poorly in class.
Yet beside unpredictability, consistency! H. neanderthalensis seemed to have some of the same odd practices as modern humans that were within the reach of their technology, from making jewelry and textiles to drawing pictures and bringing flowers to their dead. We don’t know as much about H. floresiensis, but there is evidence they could hunt and eat giant Komodo dragons – a supreme feat of courage and teamwork no modern hunter will surpass.
There is a lack of visible behavioral evolution among modern human populations, and also a very recent origin of humanity. Humans and chimps diverged about as recently as species of fruit flies that have slightly different spots on their wings. The only way I can see to reconcile these facts is to accept that a wide-ranging qualitative distinction really did arise between humans and other animals.
I think much of the variety of human behavior is cultural “technology”. I am reminded of economist Brian Arthur’s thesis about technology: as new technologies are invented, the range of possible technologies grows in a combinatorial explosion. Once liberated from a Malthusian existence, humans can do the same with cultural inventions, thus increasing the range of behavioral possibilities. Just think of the drastic change in work titles and jobs in just the last 50 years. The arts often go through explosions of new forms as experimentation takes place. Animals are far more restricted. But consider that apes taught sign language can teach it to their offspring, potentially setting off an early cultural explosion in these apes. The barest form of uplift, but a start.
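One toy way to see Arthur’s combinatorial point (the technology names here are invented for illustration, and Arthur’s actual argument is richer than pure subset-counting): with n building blocks there are 2^n − 1 non-empty combinations, so each new invention roughly doubles the space of possible recombinations.

```python
from itertools import combinations

# Toy sketch of Brian Arthur's thesis: technologies combine with any
# subset of existing technologies, so the space of possible combinations
# grows exponentially with the number of building blocks.
def combination_space(techs):
    return [c for r in range(1, len(techs) + 1)
            for c in combinations(techs, r)]

base = ["fire", "wheel", "writing", "metallurgy"]
print(len(combination_space(base)))                  # 2^4 - 1 = 15
print(len(combination_space(base + ["printing"])))   # 2^5 - 1 = 31
```

Adding one invention took the space from 15 to 31 possibilities; cultural inventions, on this view, feed the same explosion.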
But the reason to give animals rights and treat them well is that we understand that they have inner lives. Tests for consciousness suggest that many animals are conscious. More objectively, animals experience pain, and therefore can suffer. Since animals can communicate, they can understand that their fellows are suffering (e.g. in an abattoir). We try to alleviate suffering in humans, and that effort should be extended to animals. For that, they need advocates and legal rights to prevent abuse.
The author noted some sort of convergence or lessening of human behavioral diversity, as if this were proof of an evolutionary advance. Nonsense. What we see is the intrusion of mass and social media’s calibrated psychological coercion on the human psyche, which stunts the potential for creativity. It results in a hollow, fake society full of fakers and posers. This is not good, but fortunately it is not sustainable.
Though I don’t believe there is a compelling definition of ‘good’, or that humanity is anywhere near such a concept, I do believe that with increasing complexity, and thus nuance, humanity is realizing answers that may not have been available before: answers that come from increasing awareness, data collection, processing of that data, including others in that data, and including others in the recommendations resulting from that data. It is hard for us to visualize, I think, what constitutes a greater intelligence, either AI or extraterrestrial, but I think it may include notions that run askew from current human values. What we consider empathy, foresight and thoughtfulness may simply be an increased complexity in sensing, processing, and considering of all options, for I believe that many, if not most, acts of human non-good come from acting impulsively, blindly, without comprehensive information, and with little data from previous precedent. All shortcomings that would be mitigated by the next level of intelligence: complexity, with the sensing and memory to manage and maximize it. But this would be a true Other, beyond Star Trek’s Data, a being of bizarre values; perhaps a species, network, or civilization whose complexity was so great that many of its features would appear unapproachably wise, recklessly selfless, and counter-productive. Consider an advanced intelligence in which some or all members have little sense of self-preservation, because they value complex, long-term results over the short-term and localized. Consider an advanced intelligence which seeks a path where its members choose to disperse and lose communications with each other, because that furthers its value of greater ‘complexity’. Which is fascinating, since such an intelligence appears to be confronting universal entropy itself, the noblest of all long-term goals.
Goodness as network success, where network success means a growing population of diverse nodes, is as close to religion as I get. Any node can increase or decrease network success, and the demand for regulation has to be directly proportional to network success. Connectivity and interdependence can provide a measure of distributed regulation.
I’ve just described the premise behind a successful market economy. The conservatives in the crowd shouldn’t be having a negative reaction to the premise of network success.
A potentially successful network of self-regulating galactic peoples is not trying to add us to the network. How do we square their expertise with our efforts to add ourselves to it?
With goodness I always associate empathy. Without empathy (the ability to understand and share the feelings of another) there is no goodness. A certain level of complexity is required to achieve this. I see evidence for both increasing inward and outward behaviour in humans. I also see a lack of long-term planning for the survival of both our species and the millions of others we cohabit the planet with. The loss of E. O. Wilson recently was a terrible blow for us. He spoke passionately about the need to preserve the Earth’s ecosystems. This will become increasingly vital and hopefully more obvious to people. Many powerful people, politicians and demagogues among them, are telling ever more dangerous lies that prevent us from acting before ecosystem crashes become commonplace. We are in an increasingly complex and increasingly fragile world. Anyone who can’t see the effects of human-created climate change now is deliberately deceiving themselves, and probably others, through the unregulated social media whose names I won’t mention but which are almost ubiquitous in people’s lives now. We need goodness and empathy now more than ever.
For some time, I argued that the Transcension Hypothesis is the explanation of the Great Silence. All civilizations necessarily become non-expansive, in terms of the matter, space and energy they use, somewhere between our point of development and Kardashev Type II+. But the mechanism remains completely unknown; it is probably something we cannot foresee now.
There’s a possible mechanism for the Goodness of the Universe, specifically the benevolence of aliens. Technological progress increases the risk of self-annihilation, and in progress-oriented civilizations whose psychological nature is static, eventual collapse is inevitable, because the propensity for destruction remains close to constant while destructive power increases. Achieving iterative interstellar colonization (IIC) requires such advanced technology that aggressive species overthrow themselves (long) before they could reach the stars. For most intelligent species, IIC requires massive psychological self-engineering, or biological evolution through many cycles of falls and attempted rises spanning many millions of years. The latter process is somewhat paradoxical, because survival in a post-apocalyptic environment requires aggression, but the more downfall cycles a species has passed through, the more it can learn from its mistakes.
Aggressive species don’t make it to the stars, and the more aggressive they are, the more confined they are – to their home systems, or even to their home planets. This could explain why we have not yet been forcefully incorporated into a Galactic Empire, despite arguments like the abundance of earthlike planets, the relative ease of thermonuclear-powered galaxy-spanning flight on sub-megayear timescales, and the possible five-to-nine-billion-year head start of the first intelligence in the Galaxy compared to us.
But if someone managed to pass the Barrier of Benevolence and reached IIC, why would they cease it?
Hi Torque,
I very much like the failure-driven, trial and error way you use here to model the evolution of goodness. That seems to be the most accurate and practical way to frame evolutionary learning, that the good (general network adaptiveness) is what is left over when we are continually throwing away the bad (less fit) in cooperation, competition, and survival. It is a net subtractive process, like the pruning in developing brains, or the statue inside the marble block.
There’s a great hypothesis, self-domestication, that takes this “goodness is what’s left over after eliminating badness” approach. The SDH describes how we humans may have done this to each other over the last 15,000 years. Basically, as tribe sizes grew, we killed or ostracized many of the most overly aggressive, violent, and untrustworthy among us, perhaps even decreasing our brain size about 10% on average, and making us much, much more prosocial, ethical, and empathic. Richard Wrangham’s The Goodness Paradox (2019) has a lot on this. He’s a leading scholar of this hypothesis. I think it will be proven correct (including the 10% brain-shrinkage piece). We “niced up” our brains just as we weakened our bodies. Both made us much more dependent on our networks, rather than on our own self-sufficiency, as was the case for early paleolithic humanity.
https://smile.amazon.com/Goodness-Paradox-Relationship-Violence-Evolution/dp/1101870907/
Warmest regards, John
I haven’t read the book, but the 15,000-year time scale is within the divergence of individual human races. Selection of human behavior and intellect on that time scale would imply that some races went through this process more effectively than others, and it would also be likely that remaining “bad” alleles would mean that children of criminals would be inherently predisposed to become criminals. This was the naive expectation based on how any other characteristic we can think of evolved.
My understanding is that people have been looking for such evidence since the dawn of the Human Genome Project (and before), but never came up with a real example. There are disabilities; there are some interesting cases like COMT that affect stress responses (but it isn’t a gene for criminal behavior); there are discredited ideas like the “XYY supermale”. Even the one obvious case – the ordinary Y chromosome – seems in doubt; entire categories of rude behavior (duels, bar fights, groping) that once seemed inherent to the male sex have nearly disappeared, within too short a time for selection to operate, simply by weakening the enforcement of social stereotypes. If that large a genetic difference can be set aside without the need for selection, I can’t believe unproven genes predispose, or ever predisposed, children with the wrong genetics to be brutal.
Hi Mike,
Thank you for this. We wade into difficult and controversial territory with such hypotheses. You make excellent points. I agree that social constraints are far more important to modern humanity, also that racial differences are minor and hugely overblown. I was much too specific in my attachment of 15,000 years to the SDH. I’m sure I’m greatly oversimplifying. I was basing it on the average age of the Boskop skulls (25% greater cranial capacity than current average humans), but they are just one data point, and their middle stone age era stretched back at least 200,000 years earlier. I am unsure when the effect would likely have started, if it is real. Perhaps before the first migration out of Africa? Wrangham surely has theories, but I doubt there is any consensus yet. How long did it take for our skeletal system to become anatomically modern (and so much weaker)? 100,000 years? 200?
There may be a game theory to self-domestication that makes it more evenly applied in all cultures, over a much longer period, than other selective variables. One of the mechanisms Wrangham documents is group ambush killing of powerful deviants who kill within the tribe. That might be universal or cultural, I don’t know. As to whether one modern race is more impulsive and aggressive than another, I wouldn’t know or care to say, since we all clearly want so much for that kind of difference to disappear. Thus I expect it will disappear, with advancing ethics and empathy driving our global genetic reconvergence, in my view. I grant this SDH could be more minor than I am claiming. But my intuition would be that it is real, with most of it happening weakly over much longer timescale prior to migration. We shall see. Thanks for the great points.
Another reason why I’m skeptical about the “self-domestication” as summarized here is that it makes it sound as if ancient societies had a fair and effective form of justice, to remove heinous evildoers. To me it seems like even in modern societies it is the MLKs and Navalnys who are most at risk. If selection had an impact, over the course of history that impact would be made on those most at risk of being killed, i.e. the slave classes. While any given culture tries to make their hierarchy sound eternal, the helots of ancient Sparta and the slaves of 6th-century Rome are long since mixed back into the gene pool. If autodomestication *did* alter brain size in some past era, I would be more likely to interpret it as a sign of the suppression of activism, creativity and critical thinking relative to some former ARHGAP11B-rearranged “Hopeful Monster” ancestor than as any positive development.
Nonetheless, I would expect the change in brain size is only the simple consequence of (a) selection against large heads through death during childbirth and (b) a decrease in overall body size, since smaller muscles need fewer motor units and less motor cortex, and smaller skins send fewer sensory neurons to the somatosensory cortex. Changes in brain size don’t have a very strong correlation with IQ, and the causality is another issue.
Interesting points. I recall Mumford’s Technics and Civilization, which talked about the incredible effort it took to get people synchronized around the clock. I do think autodomestication would reduce many forms of behavioral creativity. Certainly rulebreaking is both activism and behavioral creativity. But I do think there is a game theory of interdependence being applied there, and I can see how it would make quite a bit of previously independent circuitry unnecessary and disruptive, once the group gained sufficient complexity. Evolution loves to prune out unnecessary things. So the hypothesis works for me in theory at least.
IIRC death during childbirth was horrific until just the last century, and body size has grown (and there is a good argument that motor control has gotten more sophisticated as our technology has proliferated), all while brain size has shrunk. That’s the curious thing.
I should have mentioned that every single animal humans have domesticated has seen its cranial capacity shrink 20-30%. Dogs, cats, cows, pigs, sheep, you name it. That’s the main reason we call it self-domestication. I’d also argue that hounds are smarter, in new ways, than wild dogs are. They and we both lost a lot of self-sufficiency in orientation to the group (or to us), but we gained new capacities within the group. Our brains are probably much more selected as carriers and variers of conceptual and behavioral memes. I suspect we’ll eventually settle this question with genetics, given that we have access to some paleolithic DNA. Cheers Mike.
I’ve been thinking about mechanisms for self-domestication, and your comment about childbirth sparked a thought. I wonder if it’s true that in all cultures, more prosocial people have more children. It certainly seems to fit what I’ve seen in modern cultures. If the increasing complexity of culture also gently constrains for prosociality, this may be enough to explain much of the cultural domestication we see. It is a form of Cultural Selection. We know Sexual Selection in individuals selects for lots of crazy variation. Culture, by contrast, seems to me to be a net constrainer, with many universals across all cultures. The AI pioneer Marvin Minsky had a lot to say on that point. He called it a “virus of the mind”. (He was quite individualistic and iconoclastic, and a personal acquaintance of mine.) You could say dogs had their brain capacity shrunk 30% by being constrained by our cultural selection, in a very similar process. We bred more of the dogs that were more prosocial. So if this is true, Wrangham’s ambush killing (which he documents in fourth world tribes today) might be a minor contributor to SD. The major contributor might be this positive effect. Culture breeds for more kids who fit into culture. We shall see.
“when our history to date shows that the most complex networks are always headed inward”
The problem with this line of reasoning is that it’s almost circularly-defined and selectively deterministic. That which “heads inward” ends up forced to become more complex, not the other way around, while the density forces systems less adapted to that density farther outward.
I liken it to stellar evolution, and how cores densify while less-“distilled” elements expand into larger and cooler shells that can eventually do ironically complicated things like molecule-forming. One could even expand that definition out to the existence of planets, and the systems that occur on planets.
Human history has a startling level of analogy to this type of evolution, and it makes no more sense to interpret that history by selectively staring at Hong Kong or Mumbai while pretending the “outer shell” of humanity hadn’t exploded out to the farthest corners of the planet in opposition to such density.
It would be hard to explain phenomena like urban sprawl and wanderlust if humans were somehow destined to become the equivalent of a neutron star. Even harder to explain why the products of such sprawl and “dissipation” so often in history end up leapfrogging the more complex core societies.
How and why did little cities nestled between the mountains of Anatolia and Greece end up towering over the already-ancient and philosophical civilizations of the great river deltas? How did hillside villagers in Italy in turn conquer those? How did a gloomy, rain-sodden island at the edge of Europe invade the entire planet?
The answer is they had a different balance that made them more energetic than predecessors.
That dynamic means that if we get the chance, humankind probably will head to the stars. It will also stay on Earth while it can, densifying and complexifying, but I suspect that, as in history, far-flung offspring may return as conquerors.
Maybe “simpler” on the surface to the eyes of the desperate, myopic crowds they find in the ruins of their mythologized past, but perhaps more interested in accomplishing something that ennobles humanity than fighting for crumbs or mystic illusions.
Brian,
Thanks for these great points. I agree strongly with most of what you say here. I probably oversimplified my argument. Apologies if so.
I agree that the more dense systems can quickly overdensify, and overdevelop, and get overtaken by other nodes in the network. Accelerating complexification seems to be a network phenomenon. It is lumpy, like biological networks. It’s not easy to find it persisting in any one node forever. Rome overcentralizes, and the East takes the lead in scientific advance for a time. Ancient China becomes too inward-focused, and loses its technological edge, etc.
Even slime molds know when to come together to move as one hierarchical organism under scarcity, taking directions from the tip at the top, and when to become a more amorphous network. As you point out, we’re always moving between both states, and various nodes in the network are failing or succeeding. Many of our arguing mindsets (nodes) lose out over time, and their circuitry becomes less spatiotemporally dense (activity theory, long-term suppression).
So I agree we are constantly swinging between expansion and densification, and bottom-up, and top-down control modes, and testing the value of both. I think of the communes of the 1960s, the fleeing of decaying US inner cities in the 1970s, the fleeing of overpriced cities in our current pandemic, and many other examples as periodic dedensifications of less adapted network nodes. We are also always voyaging outward, into “next adjacent” evolutionary spaces, both physical and conceptual. Steven Johnson writes elegantly about the “adjacent possible”.
Yet if civilization is not just evolving, but developing, there are always new regulatory systems emerging as complexity grows, systems that predictably constrain the amount of exploration we do. The new regulatory system I see coming for us, full speed, is this technological layer that is today simpler yet far faster than us, and is beginning to think for itself.
As Cesar Hidalgo of MIT points out in his books (Atlas of Economic Complexity, Why Information Grows), there is also always a growing virtual density to the network proper. Most of the folks fleeing the big cities today seem to be going to better-run (and locally walkable and dense) small towns, not to rural areas. When they do go fully country, they must be supported by a strong virtually dense (dematerialized) network, with high-value information streaming to and from their nature escapes. These places are not like their grandparents’ weekend trips to the woods. They are bringing their bandwidth, and eventually robots, with them.
Ed Glaeser, in Triumph of the City (2012), observed that two thirds of Americans already crowd into 3% of the country’s land. I’d predict 75% or more of Americans on 3% of our land by 2100. We’ll make our cities increasingly attractive. We’re already bringing nature to the city with parklets and the reclaiming of urban greenspace. Even if physical density saturates, just as human population will soon saturate, our virtual density is still on a very rapid exponential. The total digital data for our planet grows around 55% a year, much faster than Moore’s law. There’s no reason I know of that this stunning rate of digital exponentiation can’t continue, pushing us into a world with pervasive AI, AR, sensors, and effectors, in a cosmic blink of future time.
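For scale, those growth rates can be turned into doubling times with a back-of-envelope sketch (55%/yr is the figure quoted above; ~41%/yr is the rate implied by a Moore’s-law doubling every two years):

```python
import math

def doubling_time(annual_rate):
    """Years to double at a compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.55), 2))   # ~1.58 years at 55%/yr data growth
print(round(doubling_time(0.41), 2))   # ~2.02 years, roughly Moore's law
# A decade of 55%/yr growth multiplies total data roughly 80-fold:
print(round(1.55 ** 10))               # ~80
```

So the quoted data-growth rate doubles the world’s digital information every year and a half or so, comfortably outpacing a two-year Moore’s-law cadence.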
I suspect I didn’t explain what I mean by network densification sufficiently well. If any of this is right, it’s not a monolithic neutron star we’re headed for; it’s something more like the shell of the sun that you described, a rich, diverse, cooperative and competitive evo-devo network. Back in the 1940s Teilhard de Chardin pointed out the “finite sphericity” of the planet, and the growing electronic linkage density (he was observing telephones at the time) across its surface. The coming iterations of the web are going to continue that densification and push it to incredible new levels. Digital transparency will greatly increase, and one huge unknown and moral challenge for us will involve ensuring that it respects privacy and personal freedoms. David Brin’s Transparent Society, 1998 is still a touchstone on this, in my view.
Finite sphericity plus digital exponentiation seem to set our civilization up for a phase transition in complexity. Because of that, and the great natural obstacles to interstellar travel, space habitats, and terraforming neighboring planets that we have been discussing on this site, we seem to be on a course for what de Chardin called “planetization” and “encephalization” of our planet. Others have called it the “global superorganism”. I like that term best because it asks us to think about all the systems, organs, and functional diversity of the coming system, not just its “highest thoughts” or “noosphere”.
I suspect that versions of ethics, empathy, and immune systems are regulatory systems that must emerge in any complex collective. If our planet is becoming like a superorganism, with future societies, organizations, and individuals much more interdependent and self-regulated, I suspect it will have some form of ethics, empathy, and an immune system. There are always deviant and opportunistic actors in an organism, but we use such systems to keep individual actions in check, and in service to the greater purpose of the whole.
So when I look at the future of our human-machine hybrid civilization, it looks to me increasingly like development. Evolutionary processes will surely continue, but perhaps no longer in biological morphology, or even much in areas like urban design, but more in ideas, culture, and computation, and in the domain of the very small (nanoscale).
My proposal in the transcension hypothesis is that all of our universe’s civilizations are separated by vast distances, and are deeply accelerative, because that separation and acceleration maximizes the value of each civilization’s unique experiences. No finite system will ever be godlike, and this local acceleration must stop, eventually.
I suspect that future information theory will show that we need two-way communication and selection to create more adaptive network complexity. That’s how information seems to work in biology, at least. One-way information flow in biology is useful to protect existing complexity (development), but it doesn’t create diversity; instead it removes it. If it is also true that accelerating and densifying local systems eventually connect up with all the other such systems in our universe, via some type of black hole/wormhole/hyperspatial physics, I can see strong reasons why intelligent civilizations would want to prevent von Neumann probe colonization or Encyclopedia Galactica sending. Doing so would just homogenize our local transition, and make a less diverse and intelligent network. We’d meet clones of ourselves on the other side. Useful differences would be minimized.
I described elsewhere the competition between expanding out into space to learn more, and turning ourselves into a black-hole-like civilization, able to use the lensing physics of black holes to observe everything in our galaxy as if it is within our solar system (if I understand the astrophysicists correctly). If we can do that in just a few hundred years of continued exponentiation of computation, as Seth Lloyd and Larry Strauss have suggested, colonization seems doomed, to me.
I find it curious that black holes are the best eyes we can imagine for observing our galaxy, the best computers we could produce, and in standard relativity (no exotic physics), an entity at the event horizon will experience instantaneous (from its reference frame) merger with all the other gravitationally bound black holes (intelligent and otherwise) in it and its neighboring galaxies. There seem to be many benefits to this densification, and much to learn here.
So I agree, that if we biological humans get a chance, many of us will head to the stars. But if we are developing as well as evolving, I don’t think we will get that chance, and I suspect postbiological humanity will be much more ethically and immunologically constrained than we are. If we truly are headed to inner space, and will meet everyone else there, and if the universe self organized under selection to maximize our uniqueness prior to such an event, and if Earth is becoming something like a superorganism, I think the ethical and empathic and immunologic injunctions against expansion will work to constrain us, with a force proportional to their complexity.
I hope I haven’t offended with any of these words. I’ve tried to boil this argument down to a paper that you might enjoy skimming:
https://www.researchgate.net/publication/256935188_The_transcension_hypothesis_Sufficiently_advanced_civilizations_invariably_leave_our_universe_and_implications_for_METI_and_SETI
If you get a chance to look at it and find any flaws or mistakes in it, do let me know, and I will try to fix them. Thanks for the conversation!
Warmest regards,
John
The paper depends a great deal on the physics of black holes, and the most basic issues with black holes are anything but clear to me. To begin with, does anything ever fall into a black hole? This seems to be one of those issues where two sides each have their own truth, and I’m not qualified to say who is right… but when I look at something like https://arxiv.org/abs/2106.08935 I think that the infalling observer, in this case even light, *really is* moving through vast distances of bent space without reaching the event horizon. (That was the paper about a galaxy whose straight-line light will take two decades longer to reach us than light bent around a group of galaxies in the middle.) I’m thinking Schwarzschild was right all along in calculating a singularity at the event horizon. If nothing falls into a black hole in a finite time, and all black holes decay by Hawking radiation in a finite time, then black holes are a one-way trip to the future, but not outside of space and time as we know it. Carter-Penrose diagrams showing people passing outside an entire spacetime through an event horizon would be discarded as a fantasy requiring an immortal hole. I don’t understand why people much smarter and vastly more knowledgeable about physics than I am are agonizing about “firewalls” that seem to magically destroy information at the event horizon!
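(For reference, the standard Schwarzschild-coordinate result behind this claim, for radially infalling light, with no exotic assumptions: the coordinate time for light to fall from radius $r_0$ to radius $r$ diverges logarithmically as $r$ approaches the horizon radius $r_s$.)

```latex
% Radial null geodesic in Schwarzschild coordinates:
%   c \, dt = -\frac{dr}{1 - r_s/r}
% Integrating from r_0 down to r gives
c\,t(r) = (r_0 - r) + r_s \ln\!\left(\frac{r_0 - r_s}{r - r_s}\right)
          \;\longrightarrow\; \infty \quad \text{as } r \to r_s,
```

so in the distant observer’s time coordinate the crossing never completes, even though the proper time to the horizon measured in the infalling frame is finite. Which of those two descriptions is the “real” one is, of course, exactly the interpretive dispute at issue here.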
I also have trouble with the notion that the universe could be ‘ergodic’ (boring) to the point that these civilizations need to escape. If their processing faces a speed of light limit, why not just spread out and slow down to fast-forward the show, rather than diving into a black hole? How do I reconcile the notion of a ‘prime directive’ to make sure other civilizations stay diverse and ‘interesting’ with a planet-wide merger that fuses all of their own people to one consciousness?
And … I’m skeptical about the very popular notion that our way of existence – stars, nuclei, chemistry – is the only relevant state for our universe, and that after our stars die out, the show is forever over, or needs to be restarted in a ‘baby universe’ with only slightly different rules. It seems to me that there have been many eras before us with very different rules, on very short time scales. Maybe once there was a culture among the all-pervading quark-gluon plasma, interacting according to a complex and fascinating physics. With their immense heat and the sheer number of collisions and interactions occurring, their subjective time should have made that era seem cosmologically long. A round trip of their tiny universe might have seemed as implausible to them as a round trip of our universe does to us. Likewise, there may be eras long after our own when all the protons and black holes have decayed and assemblages of neutrinos (‘neutrino nuggets’) carry out interesting chemical-like reactions, taking longer than the history of our present universe to complete a single metabolic step. Between these, who knows? Maybe a Big Rip, another episode of cosmic inflation. I picture the universe as a circus where an untold number of acts move through, each abiding by their own laws of physics for what to them is a long show, before vanishing into the ‘Big Bang’ sunset of unmeasurably short and energetic events. And in the future … always more surprises.
Fantastic comments Mike. I try to respond to some below.
M: The transcension hypothesis depends a great deal on the physics of black holes.
Absolutely. And as you point out, those physics are anything but clear today. Yet there appear to be hundreds of trillions of these entities in our universe. And perhaps also microscopic, primordial black holes (dark matter candidates). They do seem fundamental to the fabric of spacetime.
If black holes are seeds for new universe creation, as Lee Smolin argues in his Cosmological Natural Selection model, they seem to be the best candidates for such seeds.
https://blogs.scientificamerican.com/guest-blog/the-logic-and-beauty-of-cosmological-natural-selection/
To me, the most parsimonious model for the emergence of complexity is a replicator that has to follow evo-devo processes, under selection. It seems plausible to me that black holes are some manner of seed, and universes “organisms”, and the multiverse a selection environment, perhaps a finite environment, in which competition for finite spacetime resources exists, as one of many potential selection pressures. In such a multiverse, it seems reasonable that universes might begin as primordial replicators, but over time, become complex replicators, in which internal evolved and developed intelligence aids replication.
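The replicator-under-selection logic above can be made concrete with a toy simulation. This is my own illustrative sketch, not Smolin's actual model: each "universe" carries one heritable parameter, its fecundity (black-hole offspring count) is assumed to peak at a hypothetical optimum value, and a finite population is resampled under that selection pressure with small mutations each generation.

```python
import random

# Toy sketch of cosmological-natural-selection-style dynamics (illustrative
# assumptions only). Each "universe" carries one heritable parameter;
# fecundity (black-hole offspring count) is assumed to peak at OPTIMUM.
# Selection on a finite population should drive the parameter toward the peak.

random.seed(42)

OPTIMUM = 0.7    # hypothetical black-hole-maximizing parameter value
POP_SIZE = 200   # a finite "multiverse" -- a finite selection environment

def fecundity(param):
    """Offspring count falls off linearly as the parameter leaves the optimum."""
    return max(0.0, 1.0 - abs(param - OPTIMUM))

def step(population):
    """One generation: universes spawn offspring via black holes, with mutation."""
    weights = [fecundity(p) for p in population]
    offspring = random.choices(population, weights=weights, k=POP_SIZE)
    return [p + random.gauss(0, 0.02) for p in offspring]  # small mutations

population = [random.random() for _ in range(POP_SIZE)]  # primordial replicators
for _ in range(100):
    population = step(population)

mean_param = sum(population) / POP_SIZE
print(f"mean parameter after selection: {mean_param:.2f}")  # drifts toward ~0.7
```

The point of the sketch is only that a population of "primordial replicators" with heritable variation, under competition for finite resources, will tend toward parameter values that maximize replication — the bare mechanism the evo-devo argument assumes.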
I also find it interesting that Einstein-Rosen wormhole networks might exist, but that current physical models suggest they’d only be stable for entities far smaller than us in size. Macroscopic matter apparently collapses the wormholes, but objects approaching the Planck scale may not. Everything in our universe seems to get quite strange, and often much less constrained, when we model it at very small scales.
https://www.scientificamerican.com/article/wormhole-tunnels-in-spacetime-may-be-possible-new-research-suggests/
M: I don’t understand why people much smarter and vastly more knowledgeable about physics than I am are agonizing about “firewalls” that seem to magically destroy information at the event horizon!
Yes, these arguments suggest to me that we are beginning to ask the right questions about something important that presently doesn’t make sense. I take comfort in the knowledge that paradigm shifts in science often come from resolving particularly fundamental and irritating paradoxes and anomalies. Black hole and quantum information theory seems to be one of those paradoxical domains.
M: If their processing faces a speed of light limit, why not just spread out and slow down to fast-forward the show, rather than diving into a black hole?
As I understand it, spreading out and slowing down would be a phase transition for complexity in our universe, after billions of years of accelerating densification and dematerialization. It might happen, as our only option, if we can’t use black holes as forward time travel devices or gateways to some kind of wormhole network. But if the topology of our universe allows these kinds of shortcuts through spacetime (just as quantum entanglement is a shortcut through spacetime) it seems to me that this would be by far the most desired developmental path.
M: How do I reconcile the notion of a ‘prime directive’ to make sure other civilizations stay diverse and ‘interesting’ with a planet-wide merger that fuses all of their own people to one consciousness?
In biological evo-devo, every new transition to greater complexity seems always to incorporate a primarily bottom-up, evolutionary network of competing and cooperating nodes. Our own consciousness emerges from the many continually arguing, significantly unique and “interesting” mindsets we carry. We can even be conscious of them when we are arguing with ourselves (and they may stay independent for years, until the data comes in to resolve their disagreements). Any kind of planetization (global superorganism) layer that emerges in our transition to postbiology would have to keep this mostly bottom-up architecture, in my view. To a rough approximation, all complex adaptive systems seem to have this mainly bottom-up feature, a topic I call the 95/5 rule. The universe itself appears to me to be such a mostly bottom-up, massively parallel developer of unique intelligences, all of which are presumably as finite and incomplete as ours.
https://evodevouniverse.com/wiki/Evolutionary_development_(evo_devo,_ED)#Evolutionary_development_in_organisms:_The_95.2F5_rule
M: I’m skeptical about the very popular notion that our way of existence – stars, nuclei, chemistry – is the only relevant state for our universe – and that after our stars die out, the show is forever over, or needs to be restarted in a ‘baby universe’ with only slightly different rules. It seems to me that there have been many eras before us with very different rules, on very short time scales. Maybe once there was a culture among the all-pervading quark-gluon plasma interacting according to a complex and fascinating physics.
These are fascinating points! If Smolin’s CNS model is roughly correct, then earlier replicating universes may have had these different rulesets. Note also that Smolin’s model leaves out a role for internally evolved and developed intelligence, so it seems to be only weakly evo-devo based, at present. In biological development, there is a very old (and roughly correct) concept that “ontogeny recapitulates phylogeny”. It tells us that early-stage developmental environments will have different structural and functional dynamics (rulesets and environmental conditions) than later stages. The strange physics (inflation, etc.) that may happen in our universe long before first light (100K years in) may be partly developmental legacy code, inherited from previous developmental states necessary for simpler replicators.
Yet I would also argue that there appears to be an arc of network complexity to all replicating and selecting cultures in our universe. Quark-gluon culture would be likely to be less generally adaptive, and intelligent, than ours, in my view. So too with prokaryotic cultures on Earth versus ours. I don’t think such reasoning is human-centric, but simply evo-devo centric. We shall see if it holds up.
M: I picture the universe as a circus where an untold number of acts move through, each abiding by their own laws of physics for what to them is a long show, before vanishing into the ‘Big Bang’ sunset of unmeasurably short and energetic events. And in the future … always more surprises.
I agree strongly with this idea of “always more surprises ahead”. If ours is an evo-devo universe, every death is accompanied by replication, with further evolutionary activities and complexity in the next cycle. As our intellect advances, we can increasingly understand and simulate our developmental architecture and constraints, but our evolutionary possibilities become increasingly unpredictable and unknown the further ahead we look. Always more surprises and change ahead.
Thanks for the stimulating conversation. It is greatly appreciated.
I am in agreement with the evo-devo model, and much of what is written here, but I am of the mind that certain societal ills will have to be addressed before we can merge with our AI. Who, for example, will ensure that our AI will not carry the biases of the rather homogeneous group of AI developers? (https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html)
The unceasing drive for greater profits under capitalism is another problem I fear will hinder our progress. Just look to how pharmaceutical corporations are widening the vaccine divide. (https://jacobinmag.com/2021/11/covid-19-pfizer-moderna-jj-mrna-profits-poor-countries) I worry that our species won’t have the time to wait for advanced AI to save us. There are things we will have to solve before the singularity.
Hi Jolly Ray,
I agree deeply with all of these points; thanks so much for making them. AI, in its current incarnation, is primarily a tool of powerful actors, being used to advance their influence and profits, and to perpetuate our unsustainable consumer culture. Today, in its early developmental state, AI is a mostly top-down, centralizing force in society. But as our AI tools democratize, I have long predicted that the great majority of us will get and use Personal AIs (PAIs), which will increasingly understand our values, tasks, and desires, and will increasingly aid us in our daily lives. My first public writing about this was in 2003, here: https://www.accelerationwatch.com/lui.html
Corporations will surely give us their own versions of these Personal AIs, slick in design and oriented to maximizing their profits, but there will also be open source versions, oriented to improving our lives, and to finding the best evidence-based solutions consistent with our values. We’ll endlessly customize our PAIs just by speaking to them, and their models of us will sit behind our private firewalls, just as our email and texts do today. For the first time since the information revolution began in the 1950s, we’ll have personal data models that are better than the ones the marketers have always had and sold to each other, models they increasingly use to manipulate us. Some fraction of PAI users will use them to escape into fantasies and echo chambers, and they will “lean back”, and let powerful actors continue to control them. But another fraction will choose to use them in a “lean forward” mode, where they let their PAIs seek out evidence-based information, and nudge them into personally and collectively empowering, ethical, and sustainable behaviors. I expect the latter group to increasingly be the more informed and empowered, and democratic societies will depend on a healthy fraction of people being in “lean forward” mode if they are to remain adaptive.
I expect major battles over PAI use and control in coming years. But once they emerge, perhaps in the 2030s, I can see a substantial fraction of people learning in an exponential manner throughout their lives, via their PAIs. Exponential personal learning is not possible today. I also expect that “lean forward”, evidence-based PAI users will increasingly be economic winners and opinion-leaders in the complex network world ahead. Once PAIs make up the vast majority of applied AI in our civilization, I think the mostly bottom-up, network driven improvement of society that we need to get to the next level of societal complexity will finally be able to emerge. Democracies can be revitalized, and greatly improved, in such a world.
Today’s AI is very weakly bio-inspired, neuro-inspired, and evo-devo. Yet it is already a rapid learning system. Many of the widely publicized perceptual biases in Google’s first-generation autocaptioning system (misidentification of ethnic groups) were rapidly unlearned by the system itself over time. Some of that unlearning was supervised (by those homogeneous developers you refer to), but a good deal of it was unsupervised. The associative network itself learned its way out of those biased associations.
That is why I am optimistic that even with today’s homogeneous tech titans building today’s expensive, centralized versions of AI, problems like fake news, unconsented microtargeting, and all the hidden agendas in much of today’s digital information content will be overcome in the future. Consider the way our antivirus and spam filters (mostly) work today. In other words, if our best AI increasingly becomes an evo-devo, network learning system, we can expect it to increasingly self-improve and self-direct over time, even as powerful actors seek to keep controlling it for their own ends.
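The "learning its way out" dynamic described above can be sketched in miniature. This is my own toy illustration, not Google's actual system: an associative model that stores evidence counts, where a biased starting association is simply outweighed, without a supervisor editing the weights directly, as a stream of correctly associated data arrives.

```python
from collections import defaultdict

# Toy sketch of unsupervised bias "unlearning" in an associative model
# (illustrative only). The model stores feature-label evidence counts;
# a biased prior association fades as better-distributed data accumulates.

counts = defaultdict(lambda: defaultdict(int))

# Biased starting state: feature "f1" strongly tied to the wrong label.
counts["f1"]["wrong_label"] = 50

def observe(feature, label):
    """Unsupervised-style update: just accumulate evidence from new data."""
    counts[feature][label] += 1

def predict(feature):
    """Predict the label with the most accumulated evidence."""
    return max(counts[feature], key=counts[feature].get)

assert predict("f1") == "wrong_label"   # the initial bias dominates

for _ in range(200):                    # stream of correctly associated data
    observe("f1", "right_label")

print(predict("f1"))  # prints "right_label" -- the bias has been outweighed
```

The design point is that no one reached in and deleted the biased association; it was overwhelmed by the network's own ongoing learning, which is the optimistic mechanism the paragraph above appeals to.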
The fact that the public GitHub hosts by far the largest single code base on the planet (28 million repositories at present), that open source coders are the largest group of developers (73 million at present), and that many of our tech titans (Google, Amazon, Microsoft, Baidu, FB) freely post their top AI tools there, in order to gain the innovation advantages of that community, gives me optimism that these tools will become far more personalized and individually controlled in the future, and thus that a mostly bottom-up AI future on our planet will eventually emerge.
I speculate about that future in my Medium series on Personal AIs.
https://johnsmart.medium.com/your-personal-sim-a07d78ffdd40#.jhfytmbf9
But your points are excellent! I need to state what you say here much more often in my own writing. It will take a great deal of bold and enlightened activism, education, entrepreneurship, and policy to get us to a less dehumanizing and extractive world than the one we live in today. We need to describe what is coming much better, and enlist others who want to increase personal and group empowerment and freedom, while improving ethics, empathy, and sustainability.
We may be headed for a particular developmental future (Personal AIs, General AIs, and eventually, postbiological life) but the evolutionary paths that our societies take toward such futures are entirely in our hands. There are many poor paths we can and will take to this apparently developmental future. Our tools will become increasingly powerful, with increasing ways to misuse them. Our empathy, ethics, foresight, and strategies matter now more than ever.
Thanks for pointing all this out. It is the most important stuff, the question of our daily choices and actions, as we try to construct better futures for all life on this precious planet.
This is something that has been on my mind for a long time. As I was raised in liberal religion, the idea of the world being a good place comes naturally to my mind. As a non-religious agnostic, I no longer think in religious terms, but I nonetheless maintain the moral vision of my upbringing.
I have read Pinker’s book that you mention. I’m not familiar, though, with Bregman. I don’t know if I’ll get around to Bregman’s book, as I already know the basic argument. Is there anything particularly different between the cases made by Pinker and Bregman? What does the latter bring up that the former did not?
It is an interesting argument that “we have become increasingly self- and socially-constrained toward the good, for yet-unclear reasons, over our history.” The “yet-unclear reasons” is the important part, of course. You argue that the reason “why we are increasingly constrained to be good is because there is a largely hidden but ever-growing network of ethics and empathy holding human civilizations together.”
This is supposedly based on the complexity of networks. Accordingly, “Well-built networks, not individuals or even groups, always progress.” That is an intriguing perspective, but understanding its implications would require knowing the exact nature of these networks. It makes me think of hyperobjects.
Where your thinking really captures my curiosity is when you conclude that, “our history to date shows that the most complex networks are always headed inward, into zones of ever-greater locality, miniaturization, complexity, consciousness, ethics, empathy, and adaptiveness.” That seems to relate to a previous post of yours where you also speak of complexity, in quoting Sam Harris from his book The Moral Landscape:
https://eversmarterworld.com/2012/01/17/the-moral-landscape-a-six-part-review-part-4/
“Political conservatism… is a fairly well-defined perspective characterized by a general discomfort with societal change and a ready acceptance of social inequality… The psychologist John Jost and colleagues analyzed data from twelve countries, acquired from 23,000 subjects, and found this attitude [political conservatism] to [also] be correlated with dogmatism, inflexibility, death anxiety, need for closure, and anticorrelated with openness to experience, cognitive complexity, self-esteem, and social stability.”
I’m familiar with that kind of research, including the work of Jost, although I’m not sure I’ve looked into that specific study. Obviously, cognitive complexity is necessary for the development of complex networks. But it’s also core to ever more complex ideological worldviews and identities, in how cognitive complexity is linked to cognitive empathy in the development of theory of mind.
This is seen with how “white liberals” are the first US demographic that has been measured to have a pro-outgroup bias, which simply means they don’t identify narrowly as a mere demographic. Imaginatively empathizing with others different from oneself requires greater cognitive ability. By the way, this probably relates to the WEIRD bias, as liberals are specifically among the WEIRD demographic of those with higher rates of education, literacy, etc.
https://benjamindavidsteele.wordpress.com/2021/02/20/we-are-all-white-liberals-now/
Related to that, have you read Joseph Henrich’s book The WEIRDest People in the World? If not, I highly recommend it. He theorizes that the main component is literacy and literary culture. Regularly reading books since a young age alters brain development. This might fit in with the explanation of novel-reading as assisting in greater development of theory of mind and cognitive empathy.
There are real-world consequences to this, of course. Even though literacy has been around a long time, a literary culture only came into existence in the past few centuries. During the Middle Ages, most of the elite didn’t read at all or else didn’t read much. The Enlightenment saw the spread of literacy among both the elite and the non-elite.
Maybe as expected, the Enlightenment was also the period during which there was popularization of the belief that every person had a common human nature; i.e., the increased abstraction of theory of mind as a complex universalization of cognitive empathy. So, everyone had a ‘soul’ and so had the same human potential — everyone including women, the poor, slaves, and ‘savages’. That might’ve been the single most radical idea to come out of that era.
You might like the blog of another scholarly writer who tackles issues of empathy and systems thinking:
https://empathy.guru/