Imagine a future in which we manage to reach average speeds in the area of one percent of the speed of light. That would make for a 437-year one-way trip to the Alpha Centauri system, too long for anything manned other than generation ships or missions with crews in some kind of suspended animation. Although 0.01c is well beyond our current capabilities, there is absolutely nothing in the laws of physics that would prevent our attaining such velocities, assuming we can find the energy source to drive the vehicle. And because it seems an achievable goal, it’s worth looking at what we might do with space probes and advanced robotics that can move at such velocities.
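The 437-year figure is simple arithmetic: the 4.37 light-year distance to Alpha Centauri divided by the cruise speed. A back-of-the-envelope sketch in Python, ignoring the time spent accelerating and decelerating:

```python
# Back-of-the-envelope check: one-way travel time is just distance
# divided by cruise speed (acceleration and deceleration ignored).
distance_ly = 4.37      # distance to Alpha Centauri in light-years
cruise_speed = 0.01     # cruise speed as a fraction of the speed of light

travel_time_years = distance_ly / cruise_speed
print(f"One-way trip at 0.01c: {travel_time_years:.0f} years")  # ~437 years
```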
How, in other words, would a spacefaring culture use artificial intelligence and fast probes to move beyond its parent solar system? John Mathews (Pennsylvania State) looks at the issue in a new paper, with a nod to the work of John von Neumann on self-reproducing automata and the subsequent thoughts of Ronald Bracewell and Frank Tipler on how, even at comparatively slow (in interstellar terms) speeds like 0.01c, such a culture could spread through the galaxy. There are implications for our own future here, but also for SETI, for Mathews uses the projected human future as a model for what any civilization might accomplish. Assume the same model of incremental expansion through robotics and you may uncover the right wavelengths to use in observing an extraterrestrial civilization, if indeed one exists.
Image: The spiral galaxy M101. If civilizations choose to build them, self-reproducing robotic probes could theoretically expand across the entire disk within a scant million years, at speeds well below the speed of light. Credit: STScI.
But let’s leave SETI aside for a moment and ponder robotics and intelligent probes. Building on recent work by James and Gregory Benford on interstellar beacons, Mathews likewise wants to figure out the most efficient and cost-effective way of exploring nearby space, one that assumes exploration like this will proceed using only a small fraction of the Gross Planetary Product (GPP) and (much later) the Gross Solar System Product (GSSP). The solution, given constraints of speed and efficiency, is the autonomous, self-replicating robot, early versions of which we have already sent into the cosmos in the form of probes like our Pioneers and Voyagers.
The role of self-replicating probes — Mathews calls them Explorer roBots, or EBs — is to propagate throughout the Solar System and, eventually, the nearby galaxy, finding the resources needed to produce the next generation of automata and looking for life. Close to home, we can imagine such robotic probes moving at far less than 0.01c as they set out to do something targeted manned missions can’t accomplish, reaching and cataloging vast numbers of outer system objects. Consider that the main asteroid belt is currently known to house over 500,000 objects, while the Kuiper Belt is thought to contain more than 70,000 objects 100 kilometers or more in diameter. Move into the Oort Cloud and we’re talking about billions of potential targets.
A wave of self-reproducing probes (with necessary constraints to avoid uninhibited growth) could range freely through these vast domains. Mathews projects forward not so many years to find that ongoing trends in computerization will allow for the gradual development of the self-sufficient robots we need, capable of using the resources they encounter on their journeys and communicating with a growing network in which observations are pooled. Thus the growth toward a truly interstellar capability is organic, moving inexorably outward through robotics of ever-increasing proficiency, a wave of exploration that does not need continual monitoring from humans who are, in any case, gradually going to be far enough away to make two-way communications less and less useful.
[Addendum: By using ‘organic’ above, I really meant to say something like ‘the growth toward a truly interstellar capability mimics an organic system…’ Sorry about the confusing use of the word!]
From the paper:
The number of objects comprising our solar system requires autonomous robotic spacecraft to visit more than just a few. As the cost of launching sufficient spacecraft from earth would quickly become prohibitive, it would seem that these spacecraft would necessarily be or become self-replicating systems. Even so, the number of robots needed to thoroughly explore the solar system on even centuries timescales is immense. These robots would form the prototype EBs (proto-EB) and would ultimately explore out to the far edge of the Oort Cloud.
The robotic network is an adjunct to manned missions within the Solar System itself, but includes the capability of data return from regions that humans would find out of reach:
These proto-EBs would also likely form a system whereby needed rare resources are mined, processed, and transported inward while also providing the basis for our outward expansion to the local galaxy. EB pioneering activities would also likely be used to establish bases for actual human habitation of the solar system should economics permit. Additionally, this outward expansion would necessarily include an efficient and cost effective, narrow-beam communications system. It is suggested that any spacefaring species would face these or very similar issues and take this or a similar path.
Note that last suggestion. It’s gigantic in its consequences, but Mathews is trying to build upon what we know; the possibility of civilizations whose technologies let them operate outside this paradigm is one reason why SETI must cast a wide net. Even so, EB networks point to a region of the SETI spectrum that hasn’t been well investigated, as we’ll see in tomorrow’s post.
To analyze how a robotic network like the paper’s Explorer Network (ENET) might be built, and what it would need to move from early proxy explorers like Voyager to later self-reproducing prototypes and then to a fully functional, expansive network, Mathews examines the systems that would be necessary and relates them to what an extraterrestrial civilization might do in a similar exploratory wave. In doing this he echoes the thinking of Frank Tipler, who argued that colonizing the entire galactic disk by such methods would take no more than a million years. Note that both Mathews and Tipler see the possibility of intelligence spreading throughout the galaxy with technologies that work well within the speed of light limitation. Extraterrestrial civilizations need not be hyper-advanced. “In fact,” says Mathews, “it seems possible that we have elevated ET far beyond what seems reasonable.”
This is an absorbing paper laced with ingenious ideas about how a robotic network among the stars would work, including thoughts on propulsion and deceleration, the survival of electronics in long-haul missions, and the ethics and evolution of our future robot explorers. Tomorrow I want to continue with Mathews’s concepts to address some of these questions and their implications for the Fermi paradox and SETI. For now, the paper is Mathews, “From Here to ET,” Journal of the British Interplanetary Society 64 (2011), pp. 234-241.
Greg,
I disagree. Manufacturing variation does not matter, because it will not be transmitted to the next generation. The only thing that could lead to evolution is a change in the genetic code, or, in this case, in the stored blueprints and procedures, whether altered in place or during download to the progeny.
So, I would submit that we really are talking only about fidelity in copying digital information. Everything else is what biologists call somatic: it affects the present organism but is not transmitted to the progeny.
I heard you claim that error correcting codes and redundancy will not work for longer than a few decades. Are you serious about that? Such methods are mathematically exact and can be made arbitrarily safe with very little extra effort. Information that is encoded using error correcting codes will not deteriorate at all over time if it is periodically refreshed (i.e. recovered and re-encoded). A simple checksum makes it impossible (ok, astronomically unlikely) for mutations to remain undetected.
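To make the refresh idea concrete, here is a minimal sketch using the classic Hamming(7,4) code (an illustration only, not drawn from the paper or this thread): each 4-bit block is stored as 7 bits, any single flipped bit per block can be corrected, and periodically decoding and re-encoding keeps errors from accumulating, provided no more than one error arises per block between refreshes.

```python
import random

def hamming_encode(nibble: int) -> int:
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming_decode(code: int) -> int:
    """Correct up to one flipped bit and return the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]
    # The syndrome names the position (1..7) of a single error, or 0 if none.
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1                   # flip the bad bit back
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

# One random bit flip per 'epoch', followed by a refresh (decode + re-encode).
# After a thousand epochs the stored data has not deteriorated at all.
rng = random.Random(1)
data = 0b1011
codeword = hamming_encode(data)
for _ in range(1000):
    codeword ^= 1 << rng.randrange(7)                    # cosmic-ray style bit flip
    codeword = hamming_encode(hamming_decode(codeword))  # periodic refresh
assert hamming_decode(codeword) == data
```

Real systems use far more efficient codes (Reed-Solomon, LDPC and the like), but the principle is the same: as long as refreshes outpace the raw error rate, the stored information is recovered exactly, every time.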
I agree with Eniac on the subject of error-correcting codes. They can be made arbitrarily robust, even against simultaneous errors in a significant fraction of the bits, via greater redundancy and more frequent refreshes.
OTOH, it’s far, far more difficult to “refresh” the hardware the digital codes must run on. On the scale of thousands of years, the maintenance, recycling, and re-manufacturing of high-tech parts requires, with our present kind of technology, nothing less than a planet-wide division of labor. This isn’t a problem of reproductive mutation, though; it’s a problem of survival itself.
“a trillion details would require 100 million books to write down. Google estimates that there are 130 million books ever written.”
Only a minuscule fraction of the details that go into products is ever published in books. Much of it is never written down at all; most of it is tacit knowledge, like knowing how to ride a bike.
Most of it? You are saying that the preservation of high-technology knowledge is driven primarily by folklore? I beg to differ.
I can tell that you are lucky enough never to have to deal with Standard Operating Procedures and other forms of process documentation… :-)
Only the most basic of details do not need to be written down. Which way to turn a screwdriver, perhaps, although I would not be surprised to find more than ten words written on that subject, too. Everything more complex needs to be taught, and the primary mode of teaching is the classroom, with textbooks and all the other trappings of education.
Care to estimate how many pages exist in the literature about how to ride a bike? Billions of trillions, I am tempted to say, if I didn’t know better. :-))
@Eniac
“Most of it? You are saying that the preservation of high-technology knowledge is driven primarily by folklore? I beg to differ.”
Never heard of “institutional/tribal knowledge”? These are common terms in manufacturing. You’d be surprised how many processes required for manufacturing are undocumented, simply passed on through on-the-job training. Also, the instructions needed for a robot to manufacture something are different from those needed for a human to do it. In an ideal world, everything would be documented in error-proofed computer code, but alas, the real world is not ideal. The universe does not work according to our mathematical models; rather, our mathematical models only approximate the way the universe acts. Perhaps this is why your mathematical models show that error proofing is perfect. Biological transcription also uses error-correction mechanisms, but really, evolution itself is the ultimate error proofing.
Some tautologies are interesting purely because of their high potential to mislead. I am wondering if the quote below is in that category.
“The only thing that could lead to evolution is a change in the genetic code”
Evolution is about a lineage accruing enduring advantage over many repeated cycles. However, the way a genetic code is expressed can vary dramatically with environment. Furthermore, the form of the probes sent to identical subsequent star systems can vary according to their previous location, and that can have consequences for the next generation or so. There is thus much potential for unexpected change with unexpected consequences, even if that change is not evolutionary in the strict biological sense.
Greg,
No, it is the other way around. We use mathematics to obtain perfect transmission of information in an imperfect world. This is the very essence of digital technology. It really works. Take the lowly Windows 7 installation disk: 4 GB of mostly junk, but every single bit of it is unchangeably perfect, despite millions of flipped bits and scratches on the actual physical disk, until there are so many of them that the disk is unusable. There is no room in between, where the information changes just a little bit, unnoticed. I am surprised that with your professed engineering experience you do not know this. It really works. It does.
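That all-or-nothing behavior is easy to demonstrate with a toy model (a sketch only; real optical discs use Reed-Solomon-style codes, but the qualitative picture is the same). Store several copies of a payload, corrupt them with random bit flips at increasing rates, then recover by majority vote and check against a stored hash. At every error rate the result is either bit-for-bit identical to the original or a flagged read error, never a payload that has changed just a little bit, unnoticed:

```python
import hashlib
import random

def corrupt(data: bytes, flip_prob: float, rng: random.Random) -> bytearray:
    """Flip each bit of the data independently with probability flip_prob."""
    out = bytearray(data)
    for i in range(len(out)):
        for bit in range(8):
            if rng.random() < flip_prob:
                out[i] ^= 1 << bit
    return out

def recover(copies: list[bytearray]) -> bytes:
    """Bytewise majority vote across the stored copies."""
    recovered = bytearray(len(copies[0]))
    for i in range(len(recovered)):
        column = [c[i] for c in copies]
        recovered[i] = max(set(column), key=column.count)
    return bytes(recovered)

payload = bytes(range(256)) * 16                     # 4 KB of test data
digest = hashlib.sha256(payload).hexdigest()
rng = random.Random(42)

for flip_prob in (1e-4, 1e-3, 1e-2, 1e-1):
    copies = [corrupt(payload, flip_prob, rng) for _ in range(5)]
    recovered = recover(copies)
    if hashlib.sha256(recovered).hexdigest() == digest:
        print(f"bit error rate {flip_prob:g}: recovered bit-perfect")
    else:
        print(f"bit error rate {flip_prob:g}: unreadable (error detected)")
```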
Rob,
I think you have it exactly correct. The genetic code of a self-replicating machine is a computer program, nothing less, nothing more. We can make sure that the program will not change from generation to generation. We can NOT make sure the program does exactly what it is supposed to. It may seem a subtle difference, but it is a huge one.
Evolution requires the program to change from generation to generation; if it does not change, there is no evolution. The program may do unanticipated things; you can call that a design flaw, and a necessary consequence of complexity. The big difference is that evolution is open-ended and unlimited, while design flaws remain strictly limited by what was there from the outset.
I’m surprised anyone would put “Windows installation” and “perfect” in the same sentence with a straight face! Seriously, I can’t tell you how many times I’ve had to reinstall because it didn’t take the first time. Just as in biology, most random errors are catastrophic. Only when one slips through that is advantageous does it get replicated. I should also note that the only real way to make robots resistant to software viruses and malware is to enable code evolution and variation.
Greg,
You have a good point, although my face wasn’t entirely straight at the time :-)
See my note above, though. Even though the information itself is far from perfect, it is still perfectly replicated. Evolution needs imperfect replication of the genome, and what I am saying is that this imperfection can be eliminated quite easily, no matter how lousy the protected content is.
Malware is another matter. Even well-meaning tinkering with the code can have catastrophic consequences. But that is different from evolution, and it is not unique to self-replicating systems. I do agree that intentional intelligent intervention, with some effort (not sure how major), could indeed produce an evolving system.
Greg,
Actually, for biology, this is not true. The overwhelming majority of mutations are neutral. Of the remainder, most are detrimental. I am not sure about software; my guess would be that if there were random errors on that Windows disk, they would also be neutral in most cases (most of those ~32 billion bits are probably never actually used). I would expect the ratio of advantageous to detrimental mutations to be a LOT smaller, though, which by itself would destroy the balance between deterioration and natural selection, aborting evolution before it can even get started.
I think I’ll have to take this back. I believe Windows uses checksums to guard against counterfeits and malware, so it is quite possible that it would refuse to run if any, even just one, bit is flipped.
Interesting discussion about the Fermi Paradox on Next Big Future:
http://nextbigfuture.com/2012/03/mass-effect-3-and-goat-guy-provide.html