Imagine a future in which we manage to reach average speeds in the area of one percent of the speed of light. That would make for a 437-year one-way trip to the Alpha Centauri system, too long for anything manned other than generation ships or missions with crews in some kind of suspended animation. Although 0.01c is well beyond our current capabilities, there is absolutely nothing in the laws of physics that would prevent our attaining such velocities, assuming we can find the energy source to drive the vehicle. And because it seems an achievable goal, it’s worth looking at what we might do with space probes and advanced robotics that can move at such velocities.
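As a sanity check on that number, here is a minimal Python sketch; the only input is the standard 4.37 light-year distance to Alpha Centauri:

```python
# Trip time at a constant fraction of lightspeed: distance in light
# years divided by speed as a fraction of c gives the time in years.
def trip_years(distance_ly: float, speed_c: float) -> float:
    return distance_ly / speed_c

print(trip_years(4.37, 0.01))  # ~437 years to Alpha Centauri at 0.01c
```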
How, in other words, would a spacefaring culture use artificial intelligence and fast probes to move beyond its parent solar system? John Mathews (Pennsylvania State) looks at the issue in a new paper, with a nod to the work of John von Neumann on self-reproducing automata and the subsequent thoughts of Ronald Bracewell and Frank Tipler on how, even at comparatively slow (in interstellar terms) speeds like 0.01c, such a culture could spread through the galaxy. There are implications for our own future here, but also for SETI, for Mathews uses the projected human future as a model for what any civilization might accomplish. Assume the same model of incremental expansion through robotics and you may uncover the right wavelengths to use in observing an extraterrestrial civilization, if indeed one exists.
Image: The spiral galaxy M101. If civilizations choose to build them, self-reproducing robotic probes could theoretically expand across the entire disk within a scant million years, at speeds well below the speed of light. Credit: STScI.
But let’s leave SETI aside for a moment and ponder robotics and intelligent probes. Building on recent work by James and Gregory Benford on interstellar beacons, Mathews likewise wants to figure out the most efficient and cost-effective way of exploring nearby space, one that assumes exploration like this will proceed using only a small fraction of the Gross Planetary Product (GPP) and (much later) the Gross Solar System Product (GSSP). The solution, given constraints of speed and efficiency, is the autonomous, self-replicating robot, early versions of which we have already sent into the cosmos in the form of probes like our Pioneers and Voyagers.
The role of self-replicating probes — Mathews calls them Explorer roBots, or EBs — is to propagate throughout the Solar System and, eventually, the nearby galaxy, finding the resources needed to produce the next generation of automata and looking for life. Close to home, we can imagine such robotic probes moving at far less than 0.01c as they set out to do something targeted manned missions can’t accomplish, reaching and cataloging vast numbers of outer system objects. Consider that the main asteroid belt is currently known to house over 500,000 objects, while the Kuiper Belt is thought to hold more than 70,000 objects 100 kilometers across or larger. Move into the Oort Cloud and we’re talking about billions of potential targets.
A wave of self-reproducing probes (with necessary constraints to avoid uninhibited growth) could range freely through these vast domains. Mathews projects forward not so many years to find that ongoing trends in computerization will allow for the gradual development of the self-sufficient robots we need, capable of using the resources they encounter on their journeys and communicating with a growing network in which observations are pooled. Thus the growth toward a truly interstellar capability is organic, moving inexorably outward through robotics of ever-increasing proficiency, a wave of exploration that does not need continual monitoring from humans who are, in any case, gradually going to be far enough away to make two-way communications less and less useful.
[Addendum: By using ‘organic’ above, I really meant to say something like ‘the growth toward a truly interstellar capability mimics an organic system…’ Sorry about the confusing use of the word!]
From the paper:
The number of objects comprising our solar system requires autonomous robotic spacecraft to visit more than just a few. As the cost of launching sufficient spacecraft from earth would quickly become prohibitive, it would seem that these spacecraft would necessarily be or become self-replicating systems. Even so, the number of robots needed to thoroughly explore the solar system on even centuries timescales is immense. These robots would form the prototype EBs (proto-EB) and would ultimately explore out to the far edge of the Oort Cloud.
The robotic network is an adjunct to manned missions within the Solar System itself, but includes the capability of data return from regions that humans would find out of reach:
These proto-EBs would also likely form a system whereby needed rare resources are mined, processed, and transported inward while also providing the basis for our outward expansion to the local galaxy. EB pioneering activities would also likely be used to establish bases for actual human habitation of the solar system should economics permit. Additionally, this outward expansion would necessarily include an efficient and cost effective, narrow-beam communications system. It is suggested that any spacefaring species would face these or very similar issues and take this or a similar path.
Note that last suggestion. It’s gigantic in its consequences, but Mathews is trying to build upon what we know — civilizations with technologies that allow them to operate outside this paradigm are an illustration of why SETI must necessarily cast a wide net. Even so, EB networks offer an area of SETI spectrum that hasn’t been well investigated, as we’ll see in tomorrow’s post.
To analyze how a robotic network like what the paper calls the Explorer Network (ENET) might be built and what it would need to move from the early proxy explorers like Voyager to later self-reproducing prototypes and then a fully functional, expansive network, Mathews explores the various systems that would be necessary and relates these to what an extraterrestrial civilization might do in a similar exploratory wave. In doing this, he echoes the thinking of Frank Tipler, who argued that colonizing the entire galactic disk using these methods would take no more than a million years. Note that both Mathews and Tipler see the possibility of intelligence spreading throughout the galaxy with technologies that work well within the speed-of-light limitation. Extraterrestrial civilizations need not be hyper-advanced. “In fact,” says Mathews, “it seems possible that we have elevated ET far beyond what seems reasonable.”
This is an absorbing paper laced with ingenious ideas about how a robotic network among the stars would work, including thoughts on propulsion and deceleration, the survival of electronics in long-haul missions, and the ethics and evolution of our future robot explorers. Tomorrow I want to continue with Mathews’s concepts to address some of these questions and their implications for the Fermi paradox and SETI. For now, the paper is Mathews, “From Here to ET,” Journal of the British Interplanetary Society 64 (2011), pp. 234-241.
“A wave of self-reproducing probes (with necessary constraints to avoid uninhibited growth) could range freely through these vast domains.”
I’m unclear if there are any possible “constraints” that would prevent uninhibited growth. To the extent that each probe would contain instructions for its own replication, and that replication of these instructions is necessarily prone to possible error, I don’t see how it is possible to prevent a Darwinian process of probes eventually overcoming any imposed limits. How would the galaxy not end up with the equivalent of kudzu, or rabbits in Australia, or Africanized bees?
We could do 1%, maybe 3%, maybe more, for an unmanned probe now if we were willing to do it. That is Alpha Centauri in 100 years:
http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion)
“self-reproducing robotic probes”
So far the only self-reproducing systems we know of are carbon based and rather fussy about their environment. At a minimum, liquid water is required, and the self-replication is error prone. The more complex systems require a support pyramid of many other carbon based systems just to function.
In the past 60 years there has been zero progress in the direction dreamed of by von Neumann, Dyson, and Drexler. “TSMC will be investing 9.3 billion dollars in its Fab15 300 mm wafer manufacturing facility in Taiwan to be operational in 2012.” Why such a waste of money when computer chips could be made to self-replicate? Because they can’t.
As a former molecular and cell biologist, I have numerous reasons to believe that “dry life” is impossible. People who have actually worked trying to build small complex things out of metals and semiconductors tend to agree with me, “One of the most outspoken critics of some concepts of “molecular assemblers” was Professor Richard Smalley (1943–2005) who won the Nobel prize for his contributions to the field of nanotechnology. Smalley believed that such assemblers were not physically possible and introduced scientific objections to them. His two principal technical objections were termed the “fat fingers problem” and the “sticky fingers problem”. He believed these would exclude the possibility of “molecular assemblers” that worked by precision picking and placing of individual atoms.”
Jack Williamson, in his book “The Humanoids,” explores this subject well: the terrifying or ennobling unexpected consequences of AI exploring the universe in our name. Ordinary young men seem to imagine robots as enemies; but what if they are not enemies?
A novel thought to explore: what if 200 IQ robots become our race’s alter ego? Perhaps men will explore the solar system and not the stars? What if going to the stars is not for biological beings?
We must try harder to reach for the stars or when and how does the human era finally end? Perhaps the stars will come to men in the form of benevolent alien intelligence?
SETI is more important than ever…One way or another…
The best stories end in a question mark, as does The Humanoids…will men become pets to the race of metal men? The list of depressing or ennobling possibilities is endless…
Williamson feared the sleek 200 IQ humanoid robots. Do you?
One current name for this fear is: The Singularity in AI.
Computer intelligence which gradually overpowers humanity…
Odd if this happens about the time men start exploring the planets…
Philosophers might say: right on time…
JDS
The problem with self-replicating probes is that they will inevitably have errors in replication. Some of those errors will increase the number of offspring. Those that minimize time spent on the designer’s well-meaning intentions may be the first to arise. Darwinian selection will turn explorerbots into galaxy-devouring pests.
Since this hasn’t happened yet, we can assume that either civilizations capable of building them don’t arise for all the usual Fermi paradox reasons, or that civilizations that can build them are wise enough not to, or that most civilizations master the self-replication part sooner than the 0.01c part, so the destruction is limited to the local system.
I believe we will be much more likely to colonize other systems before our destruction if we adopt a species wide moral prohibition against constructing self replicating entities outside of virtual containers.
The paper by John Mathews discusses the type of signal he expects the Explorer roBot (EB) communications network would use, and suggests that listening for it, essentially leakage radiation, is a SETI search strategy.
There is a recent paper by David Messerschmitt on the type of signal that would be sent by a beacon or a communications system to maximize reception of it. The result is that it will be spread-spectrum.
The Messerschmitt paper is at
http://arxiv.org/abs/1111.0547v2
I expect the EB communications network, which Mathews suggests listening for, would also use such encoding. Messerschmitt is now working on the issue of plasma dispersion in the interstellar medium and its effects on such signals. He concludes that this impairment has strong implications for the nature of an information-bearing signal, in particular its modulation. To me that means only amplifiers need apply for the transmitter. They’re harder to do than oscillators, especially at high power and high frequency, where mode competition limits cavity size.
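For readers who want a feel for what spread-spectrum signalling actually does, here is a toy direct-sequence sketch in Python/NumPy. The spreading factor and codes are arbitrary illustrative choices, not anything drawn from the Messerschmitt paper:

```python
import numpy as np

rng = np.random.default_rng(42)
CHIPS_PER_BIT = 64                            # spreading factor (arbitrary)
pn = rng.choice([-1, 1], size=CHIPS_PER_BIT)  # pseudo-noise spreading code

def spread(bits):
    """Widen each +1/-1 data bit across the chips of the PN code."""
    return np.concatenate([b * pn for b in bits])

def despread(signal):
    """Correlate each chip block against the PN code to recover the bits."""
    return np.sign(signal.reshape(-1, CHIPS_PER_BIT) @ pn)

bits = np.array([1, -1, -1, 1])
noisy = spread(bits) + rng.normal(0, 2.0, size=bits.size * CHIPS_PER_BIT)
print(despread(noisy))  # recovers [ 1. -1. -1.  1.] despite heavy per-chip noise
```

The correlation gain (64x here) is what lets such a signal sit below the per-chip noise floor, which is also why this kind of leakage would be hard to stumble across without knowing the code.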
While I wouldn’t go quite so far as to say “impossible”, I agree at least that these “self-replicating robot” scenarios are an extreme form of hand-waving. Given our current and plausible near-future state of the art, abiological self-replication requires planet-sized economies, not little “probes” or “seeds”.
“Dry life” is a non-sequitur, though. First, there’s plenty of water and other volatiles to be had in almost any star system; second, we already know that most technology is “wet”: it requires lubrication, cooling, etc., which usually involves fluids (either liquid or gas or both).
The basic problem is that manufacturing is far more efficient when it is specialized. The technology we enjoy is possible because of a highly elaborated division of labor among billions of people. Try to make a general-purpose machine, a machine that can make a thousand kinds of things instead of one, and its output will fall dramatically. An economy, whether human or “AI”, consisting solely of such machines would be one of dire poverty, not one capable of interstellar exploration.
Some day manufacturing technology may have evolved to such an advanced state that we can have our cake (general machines and self-replication) and eat it too (efficient production), but we are nowhere near that point and indeed have hardly any clue how to get there.
I have some agreement with Joy here.
In a way, the problem of self-replication of mechanical devices shares some of the same difficulties as interstellar flight. The conventional approaches (to both problems) lack the power and finesse needed for success, and we are stuck waiting for new science, or at least new insights into the problem. On the other hand, building self-replicating BIOLOGICAL systems from the ground up is approaching feasibility very rapidly. Artificial biology and synthetic intelligence (or you can mix the modifier words the other way) may both be approaching at a rate that makes them possible in our (extended) lifetimes. By the way, it looks like the Alzheimer’s treatment may be around to help us all, in time, as will personal genome sequencing (in the next three years; yes, the machines to do this already exist!).
One more note, and you heard it here first: the WISE infrared telescope is preparing to post its whole data set, so look for some important announcements about outer system planets soon, if they are to come from this data, or within two years if they require more survey data from, say, Pan-STARRS. We will then have no choice but to explore these outer solar system bodies robotically, at least until our systems can throw a human-occupied ship across the solar system in under two or three years…
The opposite side of our Galaxy is 75,000 ly away from us. At 0.01 c that’s 7.5 million years. If we assume 10 ly hops and equal time to reproduce probe and propulsion system, then 15 million years is the time to encompass the Milky Way disk, though reaching the furthermost globular clusters might need ~ 30 million years.
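Spelling that arithmetic out as a small Python sketch (the 10 ly hop size and the equal reproduction time are the assumptions stated above):

```python
# Rough expansion timescales at 0.01c, per the figures above.
GALAXY_DIAMETER_LY = 75_000  # distance to the far side of the disk
SPEED_C = 0.01
HOP_LY = 10                  # assumed spacing between way stations

transit_years = GALAXY_DIAMETER_LY / SPEED_C  # 7.5 million years, nonstop
hops = GALAXY_DIAMETER_LY / HOP_LY            # 7,500 hops of 10 ly each
# If each hop's travel time is matched by an equal pause to reproduce the
# probe and its propulsion system, the total simply doubles:
print(transit_years, hops, transit_years * 2)  # 15 million years in hops
```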
Tipler clearly wasn’t using 0.01c as his benchmark then, though the reference to ‘using these methods’ is really to the idea of self-reproducing automated probes rather than to specific velocities. Point well taken, though, Adam, and thanks.
Joy is certainly correct that self-assembling robots have no proof of concept. However, biology shows that in principle, self-assembly is possible, at least for carbon-based life in a water environment. We just don’t know if the current [lack of] progress is because the avenues pursued are dead ends (like ornithopters with feathered wings) or because the goal is impossible in principle (despite von Neumann).
If biology is to be the organizing principle for our machines, then my imagination fails at understanding how we construct such machines to be useful robotic probes. But failure of imagination is not a guide post.
A minimal deep space robot would need some sort of sensor, a radio/optical transmission system and a high-Isp propulsion system. Could biology create all the necessary components? Sensors I can see. High-Isp propulsion is perhaps simplest with a solar sail (so slow unless it uses the inner solar system for gravity boosts and higher solar intensity for acceleration). Perhaps some sort of mirror/lens and a light source to send signals? I can certainly see self-assembling organic circuitry for processing data. Presumably comets/icy bodies would be the breeding grounds. Would the robot need to bury itself in a comet, create a “warm” pocket and replicate?
0.1c probes might be a stretch, but maybe, just maybe, we could build self replicating robots for outer solar system exploration. It would certainly be interesting to try and build/evolve the components to see if that could be done at all.
Tulse:
matt grosso:
A surprisingly common misperception. It is no problem at all to make sure replication is error-free, and in fact great effort would be needed to make the system tolerant of errors should they happen anyway.
Error free copying of information is routine, you do it every day on your computer. For that we have error correction codes, hash keys and check sums, the works. What happens if the computer is not well designed and there is an error? You guessed it, the computer crashes, the movie won’t play, the document will no longer open. Is there a chance it will improve? No, not even once in a billion times.
Biological systems are deliberately designed to tolerate errors and allow for improvements. Without this, life could never have evolved. Machines do not have this constraint, they will be designed to replicate error-free. Or not at all.
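That verify-before-accept discipline is trivial to implement; a minimal Python sketch using an off-the-shelf cryptographic hash (SHA-256 is just one convenient choice, and flip_a_bit is a stand-in for whatever corruption the transfer might introduce):

```python
import hashlib

def copy_verified(blueprint: bytes, channel) -> bytes:
    """Transmit a blueprint, accepting the copy only if its hash matches."""
    digest = hashlib.sha256(blueprint).digest()
    received = channel(blueprint)  # a possibly corrupting transfer
    if hashlib.sha256(received).digest() != digest:
        raise ValueError("copy corrupted; discard and retransmit")
    return received

# A flipped bit is detected and rejected, never inherited by the offspring:
flip_a_bit = lambda data: bytes([data[0] ^ 1]) + data[1:]
try:
    copy_verified(b"EB replication program v1", flip_a_bit)
except ValueError as e:
    print(e)
```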
@Joy:
You should also mention that these purely theoretical objections are strongly contested by many in the field. I am not in this field, but my impression is that Smalley is in the minority on this one.
We do know that the fingers are neither too fat nor too sticky to arrange individual atoms into the letters IBM (http://news.cnet.com/8301-30685_3-10362747-264.html), or to create really small origami (http://www.dna.caltech.edu/~pwkr/), so in one version of the story Smalley has already been proven wrong by experiment.
This is true. However, we are not trying to supply a billion people with automobiles, so we can trade a lot of throughput for flexibility. There is great progress being made in machines that can make “almost” anything. One good place to start reading is this: http://en.wikipedia.org/wiki/Fab_lab. I also like direct metal laser sintering (http://en.wikipedia.org/wiki/Direct_metal_laser_sintering) and there is much other interesting stuff out there to find on this issue, if you haven’t already.
Even the old-fashioned machine shop can make “almost” everything.
Now, I agree with you that this is not the same thing as a true autonomous self-replicating machine, that one is VERY much harder. But with biological systems as a proof of principle, and industrial manufacturing as a starting point, I do not see how we could not make good headway on this if we really tried. In my view, miniaturization is the key. Not so small that we won’t have any tools or experience or existing building blocks (the problem with nanotechnology), but much smaller than common industrial processes. I am thinking toy sized, with watchmaking as the model for mechanics and direct-bonding of chips for electronics. Mass goes with the third power of dimension, and so does power requirement. Scaling down is nearly a freebie, and thus a no-brainer, if you think about it. It also allows important work to be done in a suburban garage…
Up to now, what has been missing is the human labor part, the thousands or tens of thousands of people that would be necessary given the division of labor required because no person could possibly have all the skills required to pull this off. Henry Ford came pretty close, his auto manufacturing plants were designed to manufacture every part on site, with only raw materials coming in. This has fallen out of favor these days because of the increased efficiency you mention that comes with specialization. However, there is no reason the model could not be revived and improved upon, perhaps with some of the above mentioned flexible manufacturing techniques to help out.
Just in the last decade or two, we have developed information technology that is capable of storing and processing ALL of the information necessary for EVERY industrial process that ever existed on a hard drive the size of a book. This opens up the possibility that human specialization and skill can be entirely eliminated from manufacturing. If you have ever seen a modern manufacturing plant, you will realize how far we have come this way already, without really noticing. This process is going to accelerate, even without anyone trying to build a self-replicating machine. And very soon, we will all notice.
Project Longshot claimed 4.5% c cruising speed, with a design that doesn’t require contained fusion: a fission reactor provides power to drive laser-inertial fusion rockets, where plasma leakage is presumably a feature, not a bug. There are also ideas for fission-fragment rockets, exploiting their natural 3% c exhaust velocity. And Orion, of course. While I wouldn’t say We Can Do It until we’d actually done it, it does seem like a fission or fission/fusion interstellar rocket might well be doable if we wanted to, and a lot easier than a fusion reactor, where we’ve tried and failed. The hard parts might be building something that lasts for 100-400 years (maybe not that hard: robustness and redundancy) and automation/self-replication (this is the hard part).
As for “dry life”… planes don’t fly like birds, computers don’t work like brains, and von Neumann probes needn’t work like bacteria. I’ve suspected they’d likely work more like social insects, with a factory/robot system. Your probe isn’t a small locust that crawls around feeding itself, but an industrial complex that has robots going out and bringing back raw materials for the factory to work on, making new robots and factory parts for the robots to assemble somewhere. Suits our industrial style more, and has some advantages of efficiency and error correction.
There’s still hand-waving there, of course, given how dependent real industry is on human expertise, on the order of millions of people, and how hard that is to replicate. What’s unclear is whether different approaches can help. If it’s simpler but more expensive to refine by blasting materials into plasma, it’d be uncompetitive on Earth with chemical engineering but might be appropriate for probes.
Also, hybrid approaches are possible, starting with the factory containing bioreactors of bacteria for various tasks.
Speaking of errors, we can get the error rate of digital programs down to arbitrarily low levels. Especially if your units are large factories, it takes fewer replications to cover the galaxy than it does to make a human body, and you can have much better error correction than our cells do.
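The replication-count comparison is simple arithmetic; a sketch assuming order-of-magnitude figures of 10^11 star systems in the galaxy and the commonly cited ~3.7 x 10^13 cells in a human body:

```python
import math

STARS_IN_GALAXY = 1e11   # order-of-magnitude assumption
CELLS_IN_HUMAN = 3.7e13  # commonly cited estimate

# Generations of doubling needed, starting from a single unit:
print(math.ceil(math.log2(STARS_IN_GALAXY)))  # 37 doublings to seed every system
print(math.ceil(math.log2(CELLS_IN_HUMAN)))   # 46 divisions to build one body
```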
My point about self-replicating robots (as usually conceived) being “dry life” is that the only possible advantage they would have over carbon based life is that they would not require an (internal) environment at liquid solvent temperature and pressure. Otherwise, what would be the point?
Also in agreement with jkittle, that engineered biological systems (already known to be excellent at self-reproduction) are a better bet for self-reproducing space probes. There are already clams that live 400+ years, and could be engineered with an aeroshell/pressure vessel and upgraded (octopus derived?) brains. As to engineering a high delta V self-replicating biological deep space propulsion system, I am at a complete loss, and this goes for silicon and metal replicators as well. Biological solar sails are the best I can imagine. Beyond that, one is back to crewed spaceflight and one has to ship living crews to (eventually) build more silicon and metal ships.
Hi Paul
It has been a while since I used that nom de plume on CD, but maybe it’s due a revival. My question for the comms experts: since there’s nothing physically preventing the use of gravitational lensing for signal amplification, wouldn’t sufficiently advanced EB networks be nearly impossible to detect? Even more so using spread-spectrum signalling.
Daniel Cartin’s paper, On the maximum sufficient range of interstellar vessels, makes the interesting observation that destinations at most 10 ly apart form a network which can be reached in a series of short hops across the Galaxy. Decelerating and accelerating a long distance starship every 10 ly would be tedious, but a network of “repeater stations” would have no such issues, meaning long-distance signalling to a central home receiver would be pointless for EBs. Instead they would form a real network, able to maintain fairly narrow, high bandwidth links all the way back to the point of origin (The Creator, according to V’Ger…)
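One way to see why a repeater network wins: received power falls off as 1/d², so, ignoring antenna gains and relay overhead, each 10 ly hop is vastly easier than a single direct link across the disk. A rough Python illustration:

```python
import math

HOP_LY = 10         # repeater spacing from Cartin's observation
DIRECT_LY = 75_000  # one shot across the Galaxy

advantage = (DIRECT_LY / HOP_LY) ** 2          # inverse-square, per link
print(f"{advantage:.1e}")                      # ~5.6e7
print(f"{10 * math.log10(advantage):.0f} dB")  # ~78 dB easier per hop
```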
As a side note, Philip K. Dick may have written the first SF story to explore the implications of von Neumann machines and probes. This would be “Autofac,” 1955.
It’s in his collected short stories, but I am sure it appeared in Galaxy magazine, November 1955.
http://en.wikipedia.org/wiki/Autofac
Good grief, Al, are you reading my mind or what? “Autofac” is on the agenda as part of tomorrow’s post — I first wrote about it in the Centauri Dreams book back in 2004. And you’re right, it was Galaxy, Nov. of 55.
Oh no, now I have to read PKD again. It always gives me a taste in the mouth of badly made speed and weird flashbacks from my demented youth…
The human race won’t go to the stars until suspended animation technology is developed to pair with known propulsion, or until “magical FTL” technologies are discovered.
The biologists with an objection to transistor based “life” might have a point.
The Fermi paradox still sits in the background hinting at what’s possible.
‘Self-reproducing’ is a hollow abstraction covering up a complete inability to picture something concrete, by people with zero experience at actual manufacturing. In biology this term is a label for the cycle of cell growth and division, which is an elaboration of the constant self-remodelling within every cell. No cell externally manufactures another one de novo by assembling dead parts, and certainly no multicellular organism does that, yet this is all that von Neumann really meant.
von Neumann committed terminological abuse by using this bio-science phrase to label what is mere automated manufacturing of automated manufacturing equipment. No robot is actually going to reproduce in outer space, because the extraction of scarce elements from asteroids will be an intricate large-scale process that no single robot will be able to do. How many ounces of rare earths are you going to find in a single small asteroid? Not enough to supply even one robot, I’d bet.
With all these decades of airy talk of von Neumann probes, there has yet to be a single concrete design proposal for one, because it’s nothing but a myth, with no possible basis in reality. Robot construction is always going to require an entire industrial economy, manned by people, not just computers, which by their very nature can never be ‘intelligent’.
We’ve already abandoned the FTL myth, so it shouldn’t be so hard to abandon two more hopeless myths: AI and von-Neumann probes.
Both have had zero progress since their mid-’50s conception, and don’t cite Deep Blue, Watson, or driverless cars. Their intelligence is entirely that of their highly capable programmers. As computers per se they have zero intelligence, since they merely run their programs.
Hanson’s brain emulation hopes notwithstanding, no path to AI has ever been discovered, and I’m betting it never will. Until Google has not just keyword-associativity but a true natural language interface, which would require true AI, we need to stop wasting time on fantasies of AI rescuing us from the harsh realities of interstellar expansion, which isn’t going to be done by robots.
@Interstellar Bill. I think you may find yourself the subject of Clarke’s 1st law:
When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Also bear in mind that as a human, you cannot assemble new cells without external help, and you certainly cannot reproduce yourself without help. And yet ….. you exist.
Continuing the theme of biologic probes, could the propulsion be catalyzed H2O2? The biology is already available – the Bombardier beetle.
The Isp is low even with engineered rockets, ~ 162 seconds. But if we are talking about probes in the outer solar system, even 1 km/s delta v is enough to hop icy bodies. The probes might look like bags of propellants, with “wings” to radiate excess heat and possibly reflect communication light pulses to a target receiver near the sun.
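To put that Isp in perspective, the Tsiolkovsky rocket equation gives the propellant fraction such a bag-of-propellants probe would need for the 1 km/s delta-v mentioned above; a quick Python check:

```python
import math

G0 = 9.81         # m/s^2, standard gravity
ISP = 162         # s, the catalyzed-H2O2 figure quoted above
DELTA_V = 1000.0  # m/s, enough to hop between icy bodies

ve = ISP * G0                        # effective exhaust velocity, ~1.6 km/s
mass_ratio = math.exp(DELTA_V / ve)  # Tsiolkovsky: m0/m1 = e^(dv/ve)
print(f"{1 - 1 / mass_ratio:.0%}")   # ~47% of launch mass is propellant
```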
I would anticipate an “r reproduction strategy”, simple probes, most dying in space, unable to reproduce, just a few lucky ones landing on new bodies to reproduce.
Just how long would it take such slow probes to explore every body in the Kuiper belt, or even the Oort? Hundreds of years? Thousands?
Interstellar Bill, your argument is disproven by the existence of the plan you say doesn’t exist. Robert Freitas did a massive study of a self-reproducing Lunar Factory in the late 1970s. It has only gotten more plausible since then. The original report is online too, as a Google hunt will quickly tell you.
Just to clarify a couple of historical points.
(1) Project Longshot assumed Daedalus style pulsed fusion engines and added nothing to the research required to actually build them.
(2) Project Orion used small fusion-boosted fission pulse-units. The performance was too low for interstellar. Freeman Dyson’s 1968 Interstellar Orion paper assumed large deuterium fusion devices and didn’t prove the oft quoted performance was doable either.
(3) Project Daedalus has been, to date, the definitive study of fusion pulse drives for interstellar flight, but it didn’t prove their viability at the required performance level either. Simplifications about the fusion ignition process make the assumed performance dubious, but not unreasonable or disproven.
(4) Fission fragment rockets are physically possible, but will need serious development to hit interstellar performance levels. That might come quicker than fusion drives because fission fragment reactors are being developed with much of the features of the rocket engine already in place.
@Interstellar Bill
This logic is meaningless, since the assertion of those who believe in AI is that it is precisely by merely running a program that intelligence can be achieved. Your rebuttal is an exercise in faith, not reason.
Since you are merely a bunch of atoms, your opinion cannot possibly be valuable anyway, so why go to the trouble to post here? ;-)
@Interstellar Bill
No, but it takes in dead parts and utilizes them to increase its size. Inside the cell there are little “machines”, which do all the work necessary. This model will suffice for a mechanical system as well. I am not sure where you see the distinction.
@Interstellar Bill
You seem to be implying that an industrial economy requires intelligence to run. This is not the case, and we can again look at biology for a proof of principle: there is nothing remotely intelligent in a cell, yet it can maintain a fully closed metabolism and reproduces.
Joy: “the only possible advantage they would have over carbon based life is that they would not require an (internal) environment at liquid solvent temperature and pressure. Otherwise, what would be the point?”
I have to disagree with Joy on this one. The engineered version eventually ends up better. Planes can be much bigger and fly much faster than birds. Ships can be much bigger and travel much faster than fish. Etc. etc. And machines can do many things that biology never evolved (x-ray, radio, radar, launching things into space, travelling through space, nuclear power, etc. just very briefly off the top of my head).
Adam: ” your argument is disproven by the existence of the plan you say doesn’t exist. Robert Freitas did a massive study of a self-reproducing Lunar Factory in the late 1970s.”
Studies don’t prove anything, and certainly not this one, which was a silly exercise in hand-waving. As well as committing some obvious errors, such as forgetting that the machines in his design depended on volatiles for lubrication, cooling, etc. — which at that time were thought to not exist on the moon. He completely ignored this issue, as well as a number of more important issues such as several raised in this forum on this subject.
@Alex Tolley “Continuing the theme of biologic probes, could the propulsion be catalyzed H2O2? The biology is already available – the Bombardier beetle.”
Good one! Also some yeasts, bacteria, and mushrooms can make hydrazine! The existing biological toolkit is much greater than generally appreciated.
The problem with ensuring controlled growth of the replicators is not error in the replication — that can be rather easily prevented with error correcting coding — but rather predictability. The robots will be so complex that it will be difficult to assure they will behave as you want, when interacting with a complex universe. This becomes doubly true if the probes are capable of learning.
Adam: I’ve read that report. If handwaving could get us to the stars, we’d be on Alpha Centauri III by now.
Nick & Paul, the point was that such a study existed when Bill said none such were in evidence. Attacking minor details is non-germane to the main argument. Hand-waving is a weaselly term of abuse which allows the critic to move the goal-posts. Claiming something isn’t possible or can’t be done is a mammoth task in evidentiary terms when there’s no substantive physical objection being made, a point ignored by both your responses.
Joy states a belief that only “wet life” is technically feasible. Interstellar Bill writes, “No cell externally manufactures another one de novo by assembling dead parts”. They seem to hint at a possible unrecognised problem.
Surely, the only difference between living parts and dead parts relevant to the above discussion is that the living ones are integrated into the current operating phase of other living parts. If this were an economy, I would say only living parts receive a price signal and dead ones do not, with the result that they deteriorate rapidly.
Note that it is undemonstrated whether self-replicating systems with this property might be any simpler than ones without. The fact that all life uses it is unsurprising, since once *born* they need this type of flexibility – so it is much easier to start with it also. The question then becomes to what degree do other types of von Neumann probes also need it.
Eniac disagrees that “an industrial economy requires intelligence to run”, but I think that this underestimates the problem. Depending on the minimum necessary information content in that massive network of price signals, the human economy itself might be anywhere from very simple, to more complex than our fastest supercomputers can cope with. If anyone has any way of estimating this complexity, I would love to know it.
@Rob: It does not make sense to distinguish between living parts and dead parts. All parts are dead. Only the complete organism can be called living.
When you talk about “price signals”, I assume you mean the method by which it is determined which parts need to be produced/acquired most, given the current state of the system and its environment. The supply and demand system that we use in our economy is particularly suitable for us because it is simple and does not require central planning, of which humans are notoriously incapable. This could also be used in mechanical systems, but more likely, given the absence of intelligence, there will be a direct control network similar to what we find in living cells, where the balance is achieved by a fairly elaborate network of feedback loops. In the cell, these take the shape of genetic and chemical cascades, in the mechanical system they would all be programmed into a distributed computer system.
“Attacking minor details is non-germane to the main argument.”
No, in the case of self-replication the “minor” details are the main argument. It’s the astronomical complexity of the task that is the issue, and “hand-waving” is exactly the metaphor needed to describe approaches like Freitas’, which take a mile-high view of a design whose actual implementation would require mastery of trillions of details, to throw out a number that probably severely underestimates the complexity of the problem.
Of course Freitas didn’t even get the mile-high view correct: by ignoring the need for volatiles, he demonstrated a gross misunderstanding of how the machines he was invoking work.
“If anyone has any way of estimating this complexity, I would love to know it.”
Well consider this:
In other words, Beinhocker estimates that around 10 billion distinct kinds of material products are being sold just in this one, hardly self-sufficient region. And each of these is typically made from a very large number of parts and material components, and requires an even greater variety (and economic value of) supporting services. The number of distinct parts and material components, and distinct combinations thereof, and supporting services used globally (and thus making up a self-sufficient economy, and thus one capable of self-replication) is of course far greater still.
“(1) Project Longshot assumed Daedalus style pulsed fusion engines and added nothing to the research required to actually build them.”
OTOH, it removed the assumption of self-sustaining fusion engines, or of being able to extract power from the engine, replacing that part with a fission reactor. And we can make fusion pulses readily enough, it’s power extraction that’s hard.
I forgot to note: being dependent on fissionables might limit the field for self-replicators. Especially old red dwarf systems, say.
@Eniac, to me dead parts can be identical to living ones, just as you state. They just don’t form part of a living whole. Of course they could also be different, such as denatured proteins, and that adds difficulty. I could restate the problem thus: note how a bunch of parts in a dead unit seems to suffer a rapid rise in entropy.
Furthermore, we are not just interested in producing new parts, but also in maintaining energy supply to every single part of manufacturing, and this catabolism is just as important to life on Earth as the anabolic processes on which you concentrate.
However, your solution looks disarmingly good, unless we are still missing something. Life is seldom as simple as it looks.
Nick,
You left out the following paragraph preceding your quote:
Which could be interpreted in another way: Several orders of magnitude could be gained by simply reducing product redundancy. Self-replicating machines do not require 275 varieties of breakfast cereal and 150 types of lipstick.
I also do not follow Beinhocker’s numbers; they seem almost as exaggerated as yours. According to his claim, the articles stocked at Walmart would constitute 0.001% of all items available, which seems incredibly low to me.
I am not sure where there is a dichotomy here that you seem to be implying. Energy is just one of the inputs of every process. In the simplest case, this would be provided from sunlight, either as electric power using photovoltaics, or as process heat using concentrators. Most likely both, whichever one is most useful for a given process. Out in the Oort cloud, sunlight would not be available, so energy has to be gotten from somewhere else. This we discussed in another thread.
One thing some here may not be visualizing enough is that these probes would not literally “self”-replicate. They would do many things cooperatively, such as collecting raw materials, building power plants and factories, maintaining fuel depots, and so on. A single probe/robot would be just as helpless on its own as an ant, it would not be able to replicate.
Nick,
Of course Freitas did not ignore volatiles, as a quick inspection of a random piece of his work easily demonstrates:
(from http://www.molecularassembler.com/KSRM/3.13.2.2.htm)
You can see that your above assertion is plain wrong.
As for handwaving, Andy has characterized this particular accusation better than I could. Freitas’ analysis is hands-down superior in this respect to anything anyone here on this comments section has come up with, yourself included. It is in fact superior to almost any treatment of this topic that is out there, and Freitas deserves credit for it.
Nick,
Assuming a “detail” takes about 10 words to describe, and an average book having 100,000 words, a trillion details would require 100 million books to write down. Google estimates that there are 130 million books ever written. So, that would just about fit, until you consider what fraction of these are actually dedicated to recording details about the workings of the economy. And of those, how many are about cereal and lipstick. And whether there is really one important detail every 10 words.
So, no, I do not think that particular number seriously understates the complexity of the problem, just the opposite. You throw around extra large numbers with no connection whatsoever to reality and use that as the basis for your arguments. Not convincing, not to me.
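For what it’s worth, that book arithmetic spelled out in a couple of lines (all three inputs are the assumptions stated above):

```python
DETAILS = 1e12         # Nick's "trillions of details" figure
WORDS_PER_DETAIL = 10  # assumed description length
WORDS_PER_BOOK = 1e5   # an average book

print(f"{DETAILS * WORDS_PER_DETAIL / WORDS_PER_BOOK:.0e} books")  # 1e+08
```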
Eniac, I was referring to the Freitas study I had been discussing previously in my response to Adam, which he had cited: the study about self-replicating robots using lunar material, not the one you cite. Yes, I have read his lunar factory study, and yet he did indeed commit the egregious error in that study of neglecting the crucial role of volatiles in machinery (including the role of air, which is often taken for granted) for lubrication, cooling, etc. Of course, paying attention to the issue would have ruined his study, as at that time volatiles were thought to not exist in extractable amounts on the moon. In other words, when confronted with an issue that ruined his thesis, he either very dishonestly or very ignorantly swept it under the rug.
It seems Freitas has since discovered volatiles for his replicators on earth, but between his ignorance of engineering and his ignorance of economics we still should not come anywhere close to trusting his crazy claims.
I’m with Joy and Matt on the big picture. Biological solutions are preferable to creating abiotic replicators, but what’s even more preferable is to go ourselves, and do the work required, one person, one act at a time. Any kind of automation is really hand waving. Once you take the human effort out of the equation, we become nothing more than parasites to the system, which will eventually evolve more effectively without us. It would be far preferable to create a solution to populating the stars where humans are as integral to the system as mitochondria are to our own systems. That is the only way to ensure our species survival over evolutionary timescales.
I also agree that abiotic replicators would eventually evolve mutations. Electronic transcription errors are inevitable. I work at a space electronics company and can assure you that data error correction strategies are really only viable for decades (not even considering the damage of ionic collisions at relativistic speeds). That’s using components built on Earth in clean rooms, with human QA at every step along the way. But really, we aren’t just talking about fidelity in copying electronic patterns across prebuilt memory chips, we are talking about physical transcription errors in duplicating chips, circuits, and mechanical systems, physically made by a replicator. Mutations are inevitable. Not accepting this reality takes not just a lack of manufacturing understanding, but a lack of imagination. Murphy’s Law is just as binding as the 2nd Law of Thermodynamics. Variation is inherent in manufacturing; that’s why there are quality engineers.
That said, it is not evident that abiotic replicators would necessarily evolve intelligence. Most biological life did not, and it has still managed to permeate every crevice we’ve ever explored on this planet.
Interstellar travel probably cannot feasibly be done on chemical and solar power alone. Leaving propulsion aside, just the energy to sustain operations (especially if life support is a consideration) would probably require a nuclear source to be sustainable over the course of centuries between stars. Can anyone conceive of a biological solution to that?
@Greg
…just the energy to sustain operations (especially if life support is a consideration) would probably require a nuclear source to be sustainable over the course of centuries between stars. Can anyone conceive of a biological solution to that?
Biological solutions are just energy use minimization strategies – hibernation, estivation, quiescent spore and seed production, etc.
“Beinhocker estimates that around 10 billion distinct kinds of material products are being sold just in this one, hardly self-sufficient region.”
So there are more than 100 unique kinds of product for every New Yorker? This seems difficult, even if we’re counting every variation of blue jeans and toothpaste and pens. Which as Eniac points out isn’t very meaningful for a von Neumann system which can get by with the equivalent of “leg coverings” and “screw toothpaste, I’m using dentures”.
Human technology may reach the level of sophistication needed to explore the galaxy in this manner given the rate of progress being made in materials science, computing, biotechnology, etc. However, on the propulsion front less progress is being made and, even though progress is being made in the other areas just mentioned, what is the likelihood that any of this advancement will ever be put to the use envisioned in this post? After all, we have had nuclear fission technology for several decades, but we have yet to apply it to the problem of more efficient space exploration. Our capabilities in space, frustratingly, have diminished at the same time our overall technological prowess has increased dramatically. The will is missing; sadly, we seem to have a hard enough time even making sure that an amazingly successful scientific observatory, the Kepler mission, will get enough funding to continue to look for an Earth-sized planet in the habitable zone of a sun-like star.
I think of the Robotic Network mentioned here as a way of bringing home the galaxy to us. How fascinating it would be to have data and pictures streaming back of billions of extrasolar environments. In terms of the SETI angle, our creation of such a network could have just as great implications for SETI as a current SETI search for an extraterrestrial version of the network already in existence, as individual members of the Robotic Network could land on and explore the surfaces (perhaps oceans and sub-surfaces as well) of countless worlds to search for Life. A lack of discovery of anything remotely meeting the requirements for Life (self-replication, complexity, self-repair, chemical system capable of undergoing Darwinian evolution) on even a single other Milky Way planet would decisively answer the question as to whether or not we are alone in the Universe on the most basic microbial level let alone the more complex levels (e.g. intelligent communicative beings).
On the one hand I find it hard to believe that such a Robotic Network would find nothing except sterile landscapes, seascapes, etc. On the other hand, every day I look around me and I see how varied and complex the Life on our own planet is, how special and almost magical it is, how drastically different it seems from the puddle of mud on my dirt road or the big boulder off in the woods; it is times like these when I wonder if all of the splendor here is just a once in the Universe affair.