Shrinking our instrumentation is one of the great hopes for extending spacecraft missions into the Kuiper Belt and beyond. No matter what kind of propulsion system we’re talking about, lower payload weight gets us more bang for the buck. That’s why a new imaging system out of the University of Rochester catches my eye this morning. It promises to capture images better than anything we can fly today, working at wavelengths from the ultraviolet to the mid-infrared.
It also uses a good deal less power, but here’s the real kicker: The new system shrinks the required hardware on a planetary mission from the size of a crate down to a chip no bigger than your thumb. The creation of Zeljko Ignjatovic and team (University of Rochester), the detector uses an analog-to-digital converter at each pixel. “Previous attempts to do this on-pixel conversion have required far too many transistors, leaving too little area to collect light,” said Ignjatovic. “First tests on the chip show that it uses 50 times less power than the industry’s current best, which is especially helpful on deep-space missions where energy is precious.”
Precious indeed. But imagine the benefits of carrying miniaturization still further. Nanotechnology pioneer Robert Freitas has speculated provocatively about space probes shrunk from the bulk of a Galileo or Cassini into a housing no larger than a sewing needle. Launched by the thousands to nearby stars, such probes could turn their enclosed nano-scale assemblers loose on the soil of asteroids or moons in the destination system. They could build a macro-scale research station, working from the molecular level up to create tools for continuing investigation and communicating data back to Earth.
The new sensor out of Rochester is a long way from that kind of miniaturization, but surely the dramatic changes in computing over the past few decades have shown us how potent shrinking our tools — and packing more and more capability into them — can be. And when you’re working with finite payload weight and can insert a new set of tools because they’re smaller than before, you’ve dramatically extended what a given space mission can accomplish. Getting a millimeter-wide needle to Alpha Centauri may not be Star Trek, but it could be how we start.
Hi Paul
Forgive my skepticism, but I doubt such vigorously reproducing assemblers will be viable, because of the information density required and their vulnerability to data corruption from cosmic rays and a stiff proton wind while travelling at relativistic speeds.
Yet, having said that, I think of biology and how mighty trees begin as tiny seeds. So a scientific base may yet be compressible, but unpacking it will be quite a trick. I’m not optimistic that it will be done soon, but it’s a goal to strive towards.
Adam
You’re absolutely right, Adam, and Freitas worries about the same thing. One thing he pointed out when I talked to him about this for my book was that devices on the nanometer scale are uniquely vulnerable to cosmic rays. All it takes is one carbon atom knocked out of a nanogear and you may have corrupted the device. That means lots of redundancy if this is to work.
Why would relativistic travel be necessary with such technology? If you’re not going in person, does it really matter how long it takes? Does it make much difference to the parent civilization if it takes 10 years or 100 years to reach the target star?
With machinery capable of self-replicating from bulk, unprocessed feedstocks (rock, ice, and so on), the limitations we are used to don’t apply in the same way. The idea of “redundancy” as we understand it from ordinary machinery is transformed into something more akin to “population stability”. Imagine, for example, that you have a population of a mere 5 self-replicating machines, and that one is damaged in a fatal fashion. Now imagine that one or several of the other machines deconstruct the broken machine and reconstruct it into a functional unit again (either from the atom up, or by disassembling it into component parts, fixing or rebuilding the broken component, and reassembling).

The issue with regard to the total accumulation of cosmic rays then becomes not the number of machines that could remain undamaged by the end of the journey, but rather the energy available to the machines along the way (which sets the total number of repairs possible) and whether the rate of damaging cosmic-ray hits can overwhelm the repair rate of the machine population. The population that reached the target star could then be the same as the one that left the origin star only in a “Lincoln’s Axe” sort of way, each machine having been rebuilt by the others at some point.
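As a toy illustration of that damage-rate versus repair-rate threshold, here is a minimal simulation sketch in Python. Every number in it is hypothetical, chosen only to show the two regimes; it makes no claim about real assembler hardware.

```python
import random

def simulate_journey(pop_size=5, years=100, damage_rate=0.02,
                     repairs_per_year=1, seed=42):
    """Toy model: a self-repairing machine population in transit.

    All parameters are hypothetical. damage_rate is the chance per
    machine per year that a cosmic-ray hit fatally corrupts it;
    repairs_per_year stands in for the energy budget, i.e. how many
    broken machines the survivors can rebuild each year.
    """
    rng = random.Random(seed)
    working, broken = pop_size, 0
    for _ in range(years):
        # Cosmic-ray damage: each working machine may fail this year.
        hits = sum(rng.random() < damage_rate for _ in range(working))
        working -= hits
        broken += hits
        if working == 0:
            return 0  # nobody left to perform repairs
        # Repairs: survivors rebuild broken units, up to the budget.
        fixed = min(broken, repairs_per_year)
        working += fixed
        broken -= fixed
    return working

print(simulate_journey())                 # low damage rate: population holds
print(simulate_journey(damage_rate=0.5))  # damage outruns repair: collapse
```

With the illustrative defaults the five-machine population rides out a century of hits, while a damage rate that exceeds the repair budget collapses it – which is the “population stability” point exactly.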
Consider also how self-replicating molecular assemblers change the nature of travel. You only ever need to send one self-supporting population of such devices to a planetary system. Once at their destination, they can bloom in population and manufacture any device they have instructions for, and those instructions can be transmitted to them at the speed of light. Say you discover a better way to build such self-replicating assemblers: you can transmit that information to the population at the target star, and they can begin constructing the new assemblers to replace the old ones (though the old assemblers could surely stick around as well). Similarly, anything you had devised a way to build could be constructed at the target star merely by transmitting the instructions.
The perfect example is, of course, a fleet of interstellar spacecraft to send more machines to yet more stars (and so on).
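To put rough, purely illustrative numbers on the light-speed point above, compare upgrading an established colony by radio with shipping new hardware; the 0.01c cruise speed below is an arbitrary assumption, not a design figure.

```python
# Illustrative comparison only: upgrading an existing assembler colony
# by transmitting instructions versus physically shipping new hardware.
distance_ly = 4.37       # Alpha Centauri, in light years
probe_speed_c = 0.01     # assumed probe cruise speed, as a fraction of c

signal_years = distance_ly                    # instructions travel at c
hardware_years = distance_ly / probe_speed_c  # physical delivery time

print(f"Design upgrade by radio: {signal_years:.2f} years")
print(f"Same upgrade shipped by probe: {hardware_years:.0f} years")
```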
This gets especially interesting in the case of sentient machine intelligences that have such technology.
Fermi’s paradox comes to mind: if it were so easy to send “thousands of needles to nearby stars”, then where are they?
And what will we do if a self-replicating probe lands on Earth? Should we let it grow?
Hi All
I suspect that Fermi’s Paradox is telling us that such systems do become irretrievably corrupted by cosmic rays, because there’s no sign of them here – unless bacteria are nanomachines, as Robert Zubrin and others have suggested.
But then, would we know a nanomachine if we saw its activities in space? I once read that only about 1% of the short-period comet remnants expected from current arrival rates are actually observed. Have they all been fragmented by probes?
It could be argued that a nanomachine technology capable of getting such probes to us might well be able to avoid detection. Could assemblers build a planet-wide monitoring system that would be both ubiquitous and all but invisible to inhabitants? I’m not arguing for anything but the proposition that Fermi’s Paradox doesn’t get us very far when we’re talking about technologies this far beyond our own. The only supposition would have to be that they want to remain unseen, which is probably what we would want to do if we were studying another stellar system with a technological civilization in it.
Maybe bacteria are nanomachines and we are the first product able to accomplish the task?
Time for a new religion? No ancient books, ‘just’ decipher the oldest DNA/RNA to find the instructions the colony was supposed to follow.
Hans
How many people on this planet are looking for nanotech probes? How many people would actually recognize one as such even if they did find one?

If a nanotech probe did come to our star system, why would it need or want to build a science/comm relay station on Earth when a planetoid would do much better?

Why do we continue to assume that ETI with the ability to send nanoprobes across the galaxy are all so fixated on one little planet with inhabitants so backwards that they consider a vehicle that can parallel park itself to be amazing?