If Breakthrough Starshot can achieve its goal of delivering small silicon chip payloads to Proxima Centauri or other nearby stars, it will be because we’ve solved any number of daunting problems in the next 30 years. That’s the length of time the project’s leaders currently sketch out to get the mission designed, built and launched, assuming it survives its current phase of intense scrutiny. The $100 million that currently funds the project will go into several years of feasibility analysis and design to see what is possible.
That means scientists will work a wide range of issues, from the huge ground-based array that will propel the payload-bearing sails to the methods of communications each will use to return data to the Earth. Also looming is the matter of how to develop a chip that can act as all-purpose controller for the numerous observations we would like to make in the target system.
If the idea of a spacecraft on a chip is familiar, it’s doubtless because you’ve come across the work of Mason Peck (Cornell University), whose work on the craft he calls ‘sprites’ has appeared many times in these pages (see, for example, Sprites: A Chip-Sized Spacecraft Solution). Both Peck and Harvard’s Zac Manchester, who worked in Peck’s lab at Cornell, have been active players in Breakthrough Starshot’s choice of single-chip payloads and continue to advise the project.
Image: A small fleet of ‘sprites,’ satellites on a chip, as envisioned in low Earth orbit. Can single-chip spacecraft designs now be developed into payloads for an interstellar mission? Credit: Space Systems Design Studio.
Meanwhile, NASA itself has been working with the Korea Advanced Institute of Science and Technology (KAIST) on the design of single-chip spacecraft. A key issue, discussed at the International Electron Devices Meeting in San Francisco in early December, is how to keep such a chip healthy given the hazards of deep space. For Starshot, the matter involves not just the few minutes of massive acceleration (over 60,000 g’s) of launch from Earth orbit, but the 20 years of cruise time at 20 percent of the speed of light before reaching the target star.
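The quoted figures are easy to sanity-check with a classical back-of-the-envelope calculation (a rough sketch only; the actual mission profile is still undefined):

```python
C = 299_792_458.0   # speed of light, m/s
G = 9.81            # standard gravity, m/s^2

v_cruise = 0.2 * C            # target cruise speed
accel = 60_000 * G            # quoted launch acceleration

# Boost-phase duration; the classical formula is accurate to ~2% at 0.2c
t_burn = v_cruise / accel
print(f"boost phase: {t_burn:.0f} s")          # ~100 s, i.e. a couple of minutes

# Cruise time to Proxima Centauri at ~4.25 light years
t_cruise_years = 4.25 / 0.2
print(f"cruise: {t_cruise_years:.1f} years")   # ~21 years
```

Both numbers come out consistent with the figures quoted above: a boost phase measured in minutes, and a cruise of roughly two decades.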
The first part of the question seems manageable, as hardening electronics against huge accelerations is an area well studied by the military, so data are abundant. The cruise phase, though, opens up concerns about radiation. According to KAIST’s Yang-Kyu Choi, interstellar radiation can degrade performance through the accumulation of positively charged defects in the silicon dioxide depths of the chip. Such defects can produce anomalous current flow and changes to the operation of critical transistors. The matter of malfunctioning chips is discussed in this recent story in IEEE Spectrum.
At the San Francisco meeting, self-healing chips were the theme, drawing on work that comes out of the 1990s that showed heating could help radiation sensors recover their functionality. Mixing this with work on flash memory out of Taiwan’s Macronix International, an integrated device manufacturer in the Non-Volatile Memory (NVM) market, the new NASA study uses concepts developed at KAIST to make on-chip healing more efficient. From the IEEE story:
This study uses KAIST’s experimental “gate-all-around” nanowire transistor. Gate-all-around nanowire transistors use nanoscale wires as the transistor channel instead of today’s fin-shaped channels. The gate, the electrode that turns on or off the flow of charge through the channel, completely surrounds the nanowire. Adding an extra contact to the gate allows you to pass current through it. That current heats the gate and the channel it surrounds, fixing any radiation-induced defects.
It might seem natural to simply provide more shielding for the chip during the two decades of interstellar cruise, but shielding adds mass, a critical issue when trying to drive a payload to a significant fraction of the speed of light. Thus the self-healing alternative, which assumes potential damage but provides self-analysis of the problem and heat inside the chip to work the healing magic. We also gain from the standpoint of further miniaturization — at scales of tens of nanometers, nanowire transistors are significantly smaller than the kind of transistors on chips currently used in spacecraft, adding savings in chip size and weight.
According to the IEEE report, KAIST’s “gate-all-around” device is likely to see wide production in the early 2020s as it begins to replace the older FinFET (Fin Field Effect Transistor) technologies. From the standpoint of single-chip spacecraft, it’s heartening to learn that radiation repairs can be made over and over, with flash memory recovered up to 10,000 times. A scenario emerges in which a chip on an interstellar flight can be powered down, heated internally to restore full performance, and then returned to service.
Pondering interstellar performance for chips that weigh no more than a gram is cause for reflection. Within just a few years we’ve gone from the idea of massive fusion-driven designs like Project Daedalus to payloads smaller than smartphones. The idea invariably brings to mind Robert Freitas’ concept of a ‘needle’ probe that could be sent in swarms to nearby stars, loaded with nanotech assemblers that would construct scientific instruments and communications devices out of material they found in the destination system.
It wasn’t so long ago that former NASA administrator Dan Goldin was speaking of a probe as light as a Coke can, but the Freitas probe and Breakthrough Starshot go well beyond that. The trick here is not getting too far ahead of the curve of technological development. With a 30-year window, Starshot can anticipate breakthroughs that will solve some of its key challenges, but relying on the future to plug in a solution doesn’t always go as planned. Thus it’s heartening to see potential answers to the cruise problem already beginning to emerge.
Thanks for the insightful take. Here’s some of my current thinking.
Starshot needs innovation way beyond the traditional approaches to nanofabrication. Rather than working the very hard problem of making traditional microdevices survive radiation, think differently: radiation harden these systems by design. Specifically, Starshot might benefit not from a single chip with tiny transistors but much larger, thin-film electronics deposited on the laser sail. Such large devices can’t be damaged catastrophically by impacts with interstellar medium or high-energy radiation. Starting with the assumption that the solution will come from the chip-fab world puts the cart before the horse.
Spacecraft engineers are better at solving spacecraft problems. We know that the everyday approach to shielding these devices is a mistake: the amount of material necessary to deflect impacts at 20%c is just not going to be lightweight enough for this mission to be successful. So, don’t shield. Spread out the circuit material so that a few holes in it don’t matter.
And, by the way, a single dense chip in the middle of a large, diaphanous sail isn’t great for the structure, either. When this system accelerates at 60,000 times Earth’s gravity, that chip risks tearing free from the sail. Even spinning the sail might not be enough to stiffen it in response to differential loading from uneven mass distribution. So, even it out by smearing the electronics over a large surface area.
Hunter, Zac, and I continue to evolve Sprites using internal funding, Kickstarter $, and volunteer labor. NASA has been very helpful in offering launch opportunities, e.g. manifesting Kicksat 1 and 2 for Sprite flight experiments. Actual money would be really nice, though.
This enthusiastic post by Dr. Peck is a terrific way to start my morning, which begins with Centauri Dreams, of course. It’s a case of dramatically lateral thinking. Quite exciting and inspiring. And enviable as well.
I wrote about something like that: putting thin-film chips on the sail material with multiple redundancy.
https://naked-science.ru/article/column/o-vozmozhnom-sposobe-peredachi
However, Starshot still faces the problem of heating.
I like large monolithic designs. Starshot should invest in something like Sea Dragon, built in Russian shipyards. We need increased HLLV size, not lots of new space junk.
A large differentiated membrane can take a hole–like a needle through a slab of jello.
The large surface area of a bunch of discrete spacecraft could lead to a chain reaction like the one in GRAVITY.
Accelerating the chip to 0.2c in the near vicinity of Earth is going to subject the chip to quite intense radiation as it ploughs through the interplanetary medium, even assuming it can avoid all dust grains that might well destroy it.
Once into interstellar space, the density of the ISM is less, although the total number of collisions is probably larger.
Are these self-healing chips really able to cope with this radiation? Will we need to accelerate large numbers of chips with the assumption that a small number will survive intact?
A huge number of chips will make it likely some will survive.
The r strategy for survival.
I gather that the idea for Starshot is to turn the sail (and chip) edge on to the direction of travel to reduce the surface area that will sweep up particles. We already have materials that can change shape in response to some stimulus, so possibly the craft could roll itself up into a tube further protecting the chip.
There are probably so many failure modes that the r-strategy is the best way to solve the problem. It also fits nicely into ideas of swarms of craft that can communicate, combine their observations and possibly even join and cooperate to send back the data to Earth. Such designs might even be used to create very sensitive receivers for that data, located in suitable places in our solar system. [ And as suggested by someone else in a previous post’s comments, a better way to develop huge telescopes than laser carving asteroid material. ]
“We already have materials that can change shape in response to some stimulus, so possibly the craft could roll itself up into a tube further protecting the chip.”
https://naked-science.ru/article/column/kak-razvernut-kosmicheskiy-parus
It could be launched as a tube. It doesn’t need to change shape. A tube is a better design for the optics anyway. The sensor could be on one side of the tube and the lens on the other, creating a larger focal length. And the travel cross-section would still be very small. Also, the tube does not have to be round. I have been looking at triangular tubes for structural stiffness. This would allow for three cameras facing different sides.
At the moment there is a higher probability of winning a lottery by buying a handful of tickets than there is of 1 chip out of a million surviving this interstellar voyage. We cannot blithely wave away serious technical challenges with large numbers.
There must be a quantifiable survivable mode, and only then can statistics be used to determine how many chips are needed to credibly engender success. We are not there yet.
What do we really know about interstellar hazards?
For now there’s only one way to find out more.
Really? I’d say you need to think this through more fully. For example:
– Is this really the only or most economical experiment to test for and measure hazards?
– Are you sure we have no available data on the hazards?
– How will you know whether the experiment succeeds or fails?
The Apollo Rules should apply: freeze the technology to designs proven at least 10 years old; considering the length of the project, perhaps 25 years would be better.
Now, this approach is more compelling than other interstellar probe concepts, if the problem of damage in the cruise phase can be worked out. I am particularly intrigued because it puts two things into play. With such lightweight payloads, we can think about speeds 2x-3x faster than heavier probes. And as a consequence, it opens up more targets than just Proxima Centauri.
Thermal healing for the chip is a great idea. This answers the radiation question, and this article answered the 60,000g question, which I did not know had already been solved by the military. I guess it makes sense if you want to shoot chips for smart artillery from a cannon.
We may not be able to make a Starchip yet, but we already have the technology to simulate them. The Raspberry Pi Zero weighs only nine grams and has a camera port. The camera for it, available with or without infrared capability, weighs 3.4 grams. Then it’s just power and point. Attitude control and power would have to be added for a space flight, but for ground tests a USB power supply is pretty handy. I’m not sure the Pi Zero is actually space-worthy, but it could be used to test imaging software.
‘silicon dioxide depths of the chip’
How thick does a silicon chip need to be? I’ve always thought of chips in terms of a two dimensional circuit pattern, the thickness of the chip being some function of the manufacturing process, probably coming down to cost somehow.
If we are thinking about chips as spaceships, with weight being important, this is going to be important. Perhaps a minimal amount of semiconductor could be vapour deposited on something else, like carbon fibre perhaps?
Trying to predict future technology and technology trends is as hazardous as the radiation these chipships will have to face. As an example, we’ve achieved much of the “ancillary” Star Trek technology, e.g., smartphones, sensors, etc., in less than fifty years but are not really close to the primary tech of antimatter, transporters, and warp drive. In another example, we don’t have the flying cars predicted in the forties and fifties. So accurate technology prediction is a longshot at best.
One danger I’ve seen in my experience as a program manager is waiting for the next big development, or trying to achieve the perfect design by bringing in the latest technology update. You end up with delays and cost increases, if you ever finish the task. At some point you need to freeze the design and go with your best shot. Good is good enough, particularly if you meet the basic requirements with what you have. We’re not at that point yet, of course, and still need technology development to have a decent chance of making this actually happen.
I thought about using metal-filled nanotubes as transistors and wires; after an impact event the metal cools back into a solid, self-annealing. Nanotubes can take metals inside via capillary action, so I can’t see a problem here.
Forgot to attach the link
http://www.nanowerk.com/spotlight/spotid=6371.php
This is probably the tip of the iceberg regarding challenges for this project. The project still has to figure out how to get a signal back.
It would be good if there were a sub-light wind tunnel that could throw dust and interstellar radiation at the probe… Creating or even approaching a simulation of that environment would be as great a challenge as any we have faced.
Particle accelerators could do the job, at least for charged impactors.
Is there any way these chipsats could send signals over interstellar distances without the need for radio waves or lasers? If we just want them to show us that they made it and transmitting devices are too much to deal with, could they somehow trigger something at the target star system to let Earth know?
If they carried a certain element with them then plunged into the star, would there be enough so we could distinguish their presence if astronomers looked at the star’s spectrum for an element or elements not normally found in that star?
I was thinking of Robert Goddard’s idea in 1920 to impact a rocket on the Moon and to release some flash powder to let astronomers watching back on Earth know that the rocket had made it there. I know making an actual explosion visible at even Alpha Centauri would not work; just thinking out loud.
A neutral impactor source is likely viable. One of the species used in proton beam sources is the H(-) ion. The two electrons are stripped off in a second step after initial generation to yield H(+). Conversion is nowhere near 100%, though, and much of the loss is in neutral hydrogen H(0). Now optimize for H(0) production instead of H(+).
This device would need its own development cycle. At 0.2c, the beam energy of an H species is around 20 MeV (assuming no bone-headed mistakes on the back of the envelope). The current H(-) proton sources operate at lower energies, so you’d need an intermediate accelerator and a stripper that operates at higher beam energy.
Neutral beams are routinely used in fusion research to heat the plasma.
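For what it’s worth, the back-of-the-envelope figure above checks out; a quick relativistic kinetic-energy calculation for a proton (a good approximation for any H species) at 0.2c:

```python
import math

BETA = 0.2          # probe speed as a fraction of c
M_P_MEV = 938.272   # proton rest energy, MeV (CODATA value)

# Relativistic kinetic energy: KE = (gamma - 1) * m * c^2
gamma = 1.0 / math.sqrt(1.0 - BETA ** 2)
ke_mev = (gamma - 1.0) * M_P_MEV
print(f"{ke_mev:.1f} MeV per nucleon")   # ~19.3 MeV, i.e. "around 20 MeV"
```

So no bone-headed mistakes: roughly 19 MeV per hydrogen atom in the probe’s frame.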
Here’s a way to space-test such chips soon:
Place samples of them on the front of a probe which is to be fired at hypervelocity into a sun-grazing trajectory. Time the shot such that the probe runs directly into the coma/tail of an oppositely orbiting comet.
The farther from the Sun such a probe is launched, the faster the solar passage would be, falling down the Sun’s gravity well all the way in while getting hit by solar wind and radiation. Then it gets slapped in the face by dust and ions from the comet (while zipping through the million-degree solar corona).
Survive THIS, and then these chips should be good to go.
Re: getting signals back
How about sending many probes to the target, each probe separated from the following one by some distance? Then as one probe zips by the target, it can send its data to the following probe, which relays to the one after, et cetera, all the way back to us. If the probe separation is 1000 AU, for example, we will only need approx 250 probes at a time in the pipeline to Alpha Centauri.
There are many advantages. If we keep sending probes through the pipeline, we will have continuous science return as one probe after another zooms by the target.
Also, the many probes will give us some redundancy and therefore greater likelihood of a successful mission.
You can probably imagine other advantages.
Is communication feasible when the sender and receiver are separated by 1000 AU, especially for tiny probes? If not, we can reduce the separation (and increase the number of probes in the pipeline).
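A rough count of how many probes such a pipeline needs, using the 1000 AU spacing assumed above and roughly 4.37 light years to Alpha Centauri:

```python
AU_PER_LY = 63_241        # astronomical units per light year
DIST_LY = 4.37            # approximate distance to Alpha Centauri
SPACING_AU = 1_000        # assumed probe separation (from the comment above)

n_probes = DIST_LY * AU_PER_LY / SPACING_AU
print(f"{n_probes:.0f} probes in the pipeline at any one time")   # ~276
```

That is close to the ~250 quoted; halving the separation to ease the link budget would double the count to around 550.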
The very concept of a swarm suggests an intrinsic shielding strategy. “Loss Leaders” becomes a repurposed idea. Non-trivial is how we create the swarm with a single laser array as driver.
We could encode a signal into each laser that is picked up by the individual components of the swarm; if each knows where it is in the grand scheme of things, then they can find their way towards each other.
World’s smallest radio receiver has building blocks the size of two atoms
December 16, 2016
Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have made the world’s smallest radio receiver – built out of an assembly of atomic-scale defects in pink diamonds.
This tiny radio—whose building blocks are the size of two atoms—can withstand extremely harsh environments and is biocompatible, meaning it could work anywhere from a probe on Venus to a pacemaker in a human heart.
The research was led by Marko Loncar, the Tiantsai Lin Professor of Electrical Engineering at SEAS, and his graduate student Linbo Shao and published in Physical Review Applied.
The radio uses tiny imperfections in diamonds called nitrogen-vacancy (NV) centers. To make NV centers, researchers replace one carbon atom in a diamond crystal with a nitrogen atom and remove a neighboring atom—creating a system that is essentially a nitrogen atom with a hole next to it. NV centers can be used to emit single photons or detect very weak magnetic fields. They have photoluminescent properties, meaning they can convert information into light, making them powerful and promising systems for quantum computing, photonics and sensing.
Full article here:
http://phys.org/news/2016-12-world-smallest-radio-blocks-size.html
To quote:
The radio is extremely resilient, thanks to the inherent strength of diamond. The team successfully played music at 350 degrees Celsius—about 660 Fahrenheit.
“Diamonds have these unique properties,” said Loncar. “This radio would be able to operate in space, in harsh environments and even the human body, as diamonds are biocompatible.”
To be honest, I find this entire scheme to be highly problematical and probably unworkable as well. If we look at just the communications conundrum alone, the inability to communicate en route to the new solar system is almost certainly a nonstarter.
Presently, radio dishes beam millions of watts of power so that microwatts can be received by spacecraft at interplanetary distances. Even if lasers are used to transmit to the spacecraft, where is the power going to come from in these chips to permit transmission of data back to Earth-based receivers? As the chips are so tiny, they will probably send off at most a few billionths of a watt, which will be almost vanishingly small by the time it reaches Earth. The whole point of the trip is to send back data, and I don’t see how chips are going to accomplish this.
In addition, they will be passing through any target star system at 20 percent of the speed of light. So the question arises: how much data can you expect to get as you pass through?
I totally agree. Big ships propelled by fusion still seem the most viable option to me, especially if we don’t use inertial fusion but a design based on a flow-stabilized Z-pinch like that of Uri Shumlak at the University of Washington.
The problem with big ships, apart from the energy to propel them, is the kinetic energy they acquire. A 100k MT starship (about the size of a CVN nuclear carrier) traveling at 0.1c would have the kinetic energy of a 2 km asteroid travelling at 30 km/s. Hitting a planet would cause a mass extinction.
A contemporary John Carpenter might warn Earth not about nuclear weapons, but about the use of large, fast starships. Space might be big, mind-bogglingly big, but aiming a starship towards a star with worlds might be as foolhardy as using firearms near crowds.
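The kinetic-energy comparison above holds up to within an order of magnitude. A quick sketch (the asteroid’s 3000 kg/m³ density is my assumption, not from the comment):

```python
import math

C = 2.998e8   # speed of light, m/s

# Starship: 100,000 metric tons at 0.1c (relativistic kinetic energy)
m_ship = 1.0e8                              # kg
gamma = 1.0 / math.sqrt(1.0 - 0.1 ** 2)
ke_ship = (gamma - 1.0) * m_ship * C ** 2   # ~4.5e22 J

# Asteroid: 2 km diameter sphere, assumed density 3000 kg/m^3, at 30 km/s
r = 1_000.0                                 # m
m_ast = (4.0 / 3.0) * math.pi * r ** 3 * 3000.0
ke_ast = 0.5 * m_ast * 30_000.0 ** 2        # ~5.7e21 J

print(f"ship: {ke_ship:.1e} J, asteroid: {ke_ast:.1e} J")
```

The ship actually comes out nearly an order of magnitude more energetic than the asteroid, but both sit in the 10²¹–10²² joule, mass-extinction range.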
The solution to the kinetic energy problem is to slow down when you get there and not just fly by. Sending large, crewed ships that stop at the target star makes sense. Sending the very tiniest of flyby probes to check out the target star also makes sense. We just don’t know yet how small we can make them and still survive the trip and return useful data.
That’s nonsense. First, the ship will decelerate well before reaching the star system. And if, for whatever reason, it can’t decelerate, it will be destroyed by dust well before reaching a planet. And in the infinitesimally unlikely case that it isn’t destroyed by dust, the probability of hitting a planet, since star systems are mostly, by far, empty space, is also infinitesimal.
As if our own probes do not have problems with firing their engines in the solar system. Juno just had such a glitch that forced mission control to delay an engine firing until it was fixed. A starship traveling for many years could have all sorts of reasons to fail, resulting in it not slowing down in the target system. Would the dust in the system be sufficient protection? I don’t know. Can you guarantee it? As for missing the target planet, we’ve recently demonstrated incredible precision at targeting our probes, from Curiosity to Juno. Why assume the starship wouldn’t be extremely accurate too, possibly even using manoeuvring thrusters to fine-tune the approach? It may be unlikely, but so are asteroid hits on Earth, and we are slowly setting up ways to ensure near misses are not so near, just in case. One might forgive ETIs for wanting to take similar precautions against nearby starships as well as their originating system.
More nonsense.
As I said, the ship will decelerate MUCH BEFORE reaching the target system. Years before. If it does fail in this phase, then trajectory correction will be aborted or simply won’t work either, and thus the very low probabilities apply again, making it pass very far away from the system.
We know the dust content of many stellar systems by using radio and submillimeter telescopes. We even saw exozodiacal light in the visible. And, of course, simple astrophysical reasoning shows that dust must be much more abundant near a star than in interstellar space.
There are billions of asteroids, they are under the gravitational influence of the Sun and they move very slowly. All of this increases the probability of collision hugely, and none of this applies to the ship.
Nanocircuits are an interesting avenue for research and development. The “self-healing” strategy seems promising. In addition to this, building on a tiny scale should leave plenty of room for redundancy – a vital concept for keeping both data and systems intact.
Nay you say, Charlie?
Well then, overcoming objections both valid and not so valid will also be a hurdle to get past. Sure this concept is hard. But so was going to the Moon. Dreams CAN be attained IF the can-do attitude isn’t lost.
That’s not an argument. The same can be said for any other technology.
Nay you say, to my spacecraft-throwing slingshot to Proxima?
Well then, overcoming objections both valid and not so valid will also be a hurdle to get past. Sure this concept is hard. But so was going to the Moon. Dreams CAN be attained IF the can-do attitude isn’t lost.
True, Antonio, I wasn’t making much of an argument at all. My comment was merely a response to Charlie’s.
I have a trampoline we need to get rid of. Perhaps it will help with your spacecraft-throwing slingshot project. ;)
The only solution to these seemingly impossible problems seems to lie in using swarm behavior capabilities. Like social insects, each of our chips will be helpless alone in space, but may contribute to an organisation of surprising strength when working together.
As Michael said :
”We could encode a signal into each laser that is picked up by the individual components of the swarm, if each knows where it is in the grand scheme of things then they can find their way towards each other.” …and so the ability to move independently in order to find each other becomes CRITICAL for any solution. First, a ‘hive’ of chips could be accelerated by the main propulsion laser in a way that would gradually bring them as close as possible to each other (which would not be very close), and then they would need an independent ability to close the remaining gap. This maneuvering may be achieved by using the ‘leftover’ last few percent of effective laser beam-power in an individual and selective way before the chips get out of range. If the chips can find each other after being accelerated and combine their resources, any number of ‘hive’ schemes can be cooked up to solve the rest of the problems.
As I have mentioned on the Starshot blog, we could have thin films of alpha emitters on moveable MEM actuators, which would allow directional changes and power the chip. Since we will only need a small change in velocity and distance, only a small amount of radioactive material will be needed.
Here’s a sketch of an idea for the signalling.
. Build chip craft with tiny gyroscopes that can be used to alter their orientation in space.
. Run loops of wire around the edge of the sail. If a pair of craft in the swarm ran a current around these, they would start drifting towards or away from each other, depending on the directions of the currents. If the craft were able to communicate with each other, they would be able to reorganise their positions about the swarm’s common centre of mass. Movements would be slow, but there are decades to play with.
. Cover the entire rear side of the sail with overlapping microscopic MEMS mirrors, like in an overhead projector. The lasers on earth shine their beam at Alpha C at a frequency where there is a space in the spectrum. The swarm arranges itself into a reflector that will reflect the light back at Sol. The MEMS mirrors modulate the light, either reflecting the light at Sol or away in another direction, thus achieving signalling. A big telescope on or about Earth observes the reflected data.
I’m reminded of the organisation of sponges and their relatives, where the sponge can go through a liquidizer and still reassemble. To radiation harden, pick the highest-temperature semiconductor.
Robert G
“The MEMS mirrors modulate the light, either reflecting the light at Sol or away in another direction, thus achieving signalling. A big telescope on or about Earth observes the reflected data.”
‘Mirrors … flashing light signals from light years away to be detected by telescopes on Earth…’ Oh, really? That we have difficulty detecting planetary bodies in orbit around other stars even with spaceborne telescopes says it all. Again, the problem of signaling is going to be, I think, a showstopper. Unless you have something on board with a fair degree of power that can broadcast back to Earth, you’re engaged in wishful thinking.
It seems to me that if we’re going to pursue a laser type of propulsion system which will drive a spacecraft to 20% lightspeed, we should at least look at laser systems that can push payloads of at least tens of kilograms, which can carry some reasonable power source with a chance of being able to broadcast back to Earth.
I realize, of course, that such a more massive payload will require lasers that are probably on the order of 1,000, perhaps 10,000, times more powerful, and might require basing such a laser system on the Moon so as not to have to blast energy through the atmosphere. But the real question has to be asked here: are we going to try to do this on the cheap, or is our goal here to ACTUALLY get some kind of space probe on its way to another stellar system, even if it is considerably more costly?
Sometimes you have to realize that you are running up against physical limits, and no amount of cleverness or workarounds is going to do the job short of some bare minimum size of spacecraft that can actually perform its function. And chips just don’t seem to be the way to go if you have any realistic expectation of sending back some kind of data that you can actually analyze. Small radio receivers mean nothing if there does not exist power to rebroadcast a meaningful, detectable signal back home. So why go through all these shenanigans?
It doesn’t seem even remotely conceivable that we will ever solve the problem of detecting the vanishingly small amounts of power, with any type of optical or radio system we can conceive of, from a fleeing spacecraft that’s light years from us.
Are we truly willing to risk a $100 billion (I think a reasonable guess, given the specified mission) spacecraft in the form of a chip on a gamble that it is going to give us something in return? That’s, at least, my two cents’ worth; thanks for allowing me a forum.
Enabling small interstellar probes to communicate back to Earth really is a big issue. The fundamental problem is the nature of the drop-off in signal strength with increasing distance, the inverse square law. This affects even tightly beamed/focused signals.
So the question becomes what is the max distance that a probe can reliably send back data? Then send batches of these out toward the target system to relay the data back home.
Wasn’t there some idea of using a relay of probes to boost the signal strength much the same way early transatlantic cables worked? The energy a tiny 1g probe can muster is still extremely small, so whether this would even work is still an open question.
It will be interesting to see what solutions can be devised, or whether this will be a showstopper.
Just what power would we need to detect any probe in a distant solar system? What conceivable detection technology and infrastructure would be needed and what are the cost tradeoffs?
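As a rough feel for the numbers, here is a toy link budget for a diffraction-limited optical downlink. Every parameter (1 W transmitter, 4 m sail used as the aperture, 30 m receiving telescope, 1 μm wavelength) is an illustrative assumption of mine, not a Starshot design figure:

```python
import math

P_TX = 1.0                 # transmit power, W (assumed)
WAVELEN = 1.0e-6           # laser wavelength, m (near-infrared, assumed)
D_TX = 4.0                 # transmit aperture, m (the sail itself, assumed)
D_RX = 30.0                # receiving telescope diameter near Earth, m (assumed)
DIST = 4.24 * 9.461e15     # distance to Proxima Centauri, m

# Diffraction-limited beam divergence spreads the beam into a huge spot
# at the receiver -- this is where the inverse-square law bites
theta = WAVELEN / D_TX
spot_area = math.pi * (theta * DIST) ** 2
rx_area = math.pi * (D_RX / 2.0) ** 2

p_rx = P_TX * rx_area / spot_area                  # fraction of the beam caught
photon_energy = 6.626e-34 * 2.998e8 / WAVELEN      # E = hc / lambda
print(f"received: {p_rx:.1e} W, about {p_rx / photon_energy:.0f} photons/s")
```

Under these assumptions the receiver catches a few attowatts, on the order of ten photons per second: desperately faint, but not obviously below what photon-counting detectors and long integration times could work with.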
Perhaps suitably synchronised spread-spectrum signals from many probes could be phased to sum at the point of reception, sending back a combined signal when required.
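The payoff of phase-matched summing can be shown with a toy calculation: if N emitters arrive in phase at the receiver, the fields add and the power scales as N², whereas with random phases the expected power only scales as N. A minimal sketch:

```python
import cmath, math, random

random.seed(0)
N = 100

# Phase-matched emitters: fields add coherently, power goes as N^2.
coherent = abs(sum(cmath.exp(1j * 0.0) for _ in range(N))) ** 2

# Random phases: averaged over many trials, power only goes as N.
trials = 2000
total = 0.0
for _ in range(trials):
    s = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi)) for _ in range(N))
    total += abs(s) ** 2
incoherent = total / trials

print(f"coherent power ~ {coherent:.0f}, incoherent average ~ {incoherent:.0f}")
```

The hard part, of course, is holding phase synchronisation to a fraction of a wavelength across probes separated by large and changing distances.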
Some very useful things have surfaced here about how the chips could ”find” each other in space. Several different mechanisms have been suggested:
1. The power of the main acceleration beam can be varied over time for a ‘swarm’ of units, creating a moment when they should be ‘passing by’ each other (the last one launched will overtake the first).
2. Alpha emitters on movable MEMS actuators for smaller velocity changes.
3. An electromagnetic attraction force activated in the split second when two units collide, or get close enough.
4. Coding information into the main accelerating laser, which can be used for course corrections. This demands that information go both ways, so it may be necessary to position one or more high-powered targeting radar stations as far away as technology will allow, maybe as far as Mars orbit. The chips would fly close past such a satellite, enabling a measurement of each unit’s deviation from optimal behaviour; that information is sent back to Earth for encoding into the main laser.
5. Once a swarm of units has connected, its first task is to function as a relay station connecting other swarms farther away, and perhaps also to re-emit laser energy for charging batteries.
The encoding would just give a position in space where they should all be; the individual chip sails can communicate with each other, as the distances will be quite small.
”The distances will be quite small” … that’s a VERY optimistic statement. We are talking about gigantic acceleration forces; even if they can be balanced with all the tricks in the book, there will be deviation, and it will grow fast if not corrected. The good news is that the whole plan is well suited for experimentation: as soon as a minimum number of lasers is operating, it should be possible to start learning how to control the trajectory.
The laser beam footprint is expected to be around 2–3 m at 2 million km, so the deviation distance should be on a similar scale of divergence; it won’t be millions of km, I would think.
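That footprint is consistent with simple diffraction. For a beam of wavelength λ from an aperture of diameter D, the far-field spot diameter grows as roughly λ·R/D; the 1.06 µm wavelength and kilometre-scale array assumed below are guesses in line with published Starshot sketches, not confirmed parameters:

```python
def spot_diameter_m(wavelength_m, aperture_m, range_m):
    # Far-field spot size ~ (lambda / D) * R; the Airy-disc prefactor
    # (~2.44 to the first null) would roughly double this estimate.
    return wavelength_m * range_m / aperture_m

# 1.06 um laser, 1 km array, 2 million km range (all assumed figures).
print(spot_diameter_m(1.06e-6, 1000.0, 2e9))  # ~2 m
```

A deviation comparable to the beam footprint, rather than millions of km, is indeed what this suggests, though pointing jitter and atmospheric effects would add to it.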
A new advance in diamond radio nano-receivers could help in building the StarChip: http://phys.org/news/2016-12-world-smallest-radio-blocks-size.html
Wanted to mention the sci-fi novel EXISTENCE by D. Brin, which deals with the aftermath of, shall we say, excessive exuberance to explore the galaxy (without giving anything away). Personally I did not like the way the central ideas were framed in the plot of the novel, but the ideas themselves were fairly deep. Your mileage may vary.
In what may now be the ultimate nail in the coffin of this entire scheme, and of any type of star travel via such methods as outlined above, this has just been announced.
If what has been tested by the Chinese holds up under tests conducted by our satellite people, then this system will eventually be able to replace pretty much anything else for star flight, given that it is practically the equivalent of flight using no onboard propellant.
Let the following speak for itself:
“China says tests of Propellentless EMDrive on Tiangong 2 space station were successful ”
“The establishment of an experimental verification platform to complete the milli-level micro thrust measurement test, as well as several years of repeated experiments and investigations into corresponding interference factors, confirm that in this type of thruster, thrust exists.”
According to a chart that appears at this particular link, a minimum-time trajectory at .01, .1, or 1 G shows a flight time from Earth to Jupiter of 3.5 months, 23 days, or 12 days respectively, as long as you maintain constant acceleration throughout. These are staggering flight times for this particular new type of drive, which would presumably operate on a continuous basis to get you where you’re going. Certainly there would be no need for such things as lightweight chips to get to the nearest star system. And it’s predicted that at one G a flight to the nearest star would take 52 years. The advantage here is obvious: no heavy propellant to take on board, making it equivalent to the idea of the ramjet or the laser sail, but without all the extra external paraphernalia that has been dogging those concepts.
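As a sanity check on those figures, a constant-acceleration “brachistochrone” profile (accelerate to the midpoint, flip, decelerate) gives t = 2·√(d/a). The Earth–Jupiter distance assumed below (4.2 AU, roughly closest approach) is a guess at what the chart used, so the numbers land in the same ballpark rather than matching exactly:

```python
import math

def brachistochrone_days(distance_m, accel_g):
    """Accelerate to the midpoint, flip, decelerate: t = 2 * sqrt(d / a)."""
    a = accel_g * 9.80665  # convert g's to m/s^2
    return 2.0 * math.sqrt(distance_m / a) / 86400.0

AU = 1.496e11
d_jupiter = 4.2 * AU  # assumed Earth-Jupiter distance near closest approach
for g in (0.01, 0.1, 1.0):
    print(f"{g:5.2f} g -> {brachistochrone_days(d_jupiter, g):6.1f} days")
```

Note that each factor of 100 in acceleration only buys a factor of 10 in trip time, since time scales as 1/√a.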
As for my own particular viewpoint, I still think this drive is some type of radiation-emitting drive rather than something that smacks of a new type of physics, but this is for researchers to determine. The link is provided below:
http://www.nextbigfuture.com/2016/12/china-says-tests-of-propellentless.html
I’ll be running two comprehensive reviews of the peer-reviewed work on the EMDrive concept within the next ten days. These have been assembled with the help of a number of our Tau Zero network scientists; both have been in progress for several months.
You may conclude that I see FRBs as clues to every astromystery :)
However, a system to project real mass across the universe by a propellantless (EM) drive, approaching light speed in a single pulse, would have a byproduct that would look like an FRB after dispersion.
This in no way invalidates small laser probes for interstellar exploration.
It also makes no mention of the energy or fuel mass for the propellantless drive. The EM drive is interesting, but the thrust levels are very small for the power required, and generating that power requires fuel, whether it leaves the spacecraft as rocket exhaust or not. And if it does not, how is it a good thing to keep all the spent fuel on-board? Do the math on the energy calculations and you will see that the EM drive would have to be powered by antimatter to have an advantage over a fusion rocket.
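One way to see the energy problem: any drive claiming a fixed thrust-to-power ratio r runs into trouble once the craft’s speed exceeds 1/r, because the kinetic power F·v delivered to the vehicle then exceeds the electrical input power in the launch frame. The roughly 1 mN/kW figure used below is an assumption taken from EmDrive test reports, not an established value:

```python
# Claimed thrust-to-power ratio, ~1 mN per kW (an assumed figure from
# EmDrive test reports, not an established value).
thrust_per_watt = 1.0e-3 / 1.0e3  # N/W

# In the launch frame, kinetic power delivered to the craft is F * v,
# while input power is F / r.  They cross at v = 1 / r; beyond that
# speed the drive would output more kinetic energy than it consumes.
v_breakeven = 1.0 / thrust_per_watt  # m/s
c = 2.998e8
print(f"break-even speed: {v_breakeven:.2e} m/s = {v_breakeven / c:.4f} c")
```

Any interstellar mission at 10% of c sits roughly 30 times above this break-even speed, so the claimed figures cannot all survive at once: either the thrust-to-power ratio must fall with speed, or energy conservation is violated.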
In my eagerness to announce this particular finding, I made an error regarding flight time to the nearest star using this drive. It would have been at .01 G, rather than at 1 G as I stated above; also, this assumes acceleration at that lower value to the halfway point and then deceleration into the star system at the same value. This leads to the 52-year flight time, with a peak of 10% the speed of light. Sorry for the error.
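For what it’s worth, the 52-year figure is reproducible if the profile is taken as: accelerate at 0.01 g up to 10% of c, coast, then decelerate symmetrically (a pure accelerate-to-midpoint profile at 0.01 g would actually peak above 0.2 c and arrive in about 40 years). A quick check, treating the 4.24 light-year distance and the 0.1 c cap as given:

```python
c = 2.998e8                 # speed of light, m/s
ly = 9.461e15               # metres per light-year
a = 0.01 * 9.80665          # 0.01 g, m/s^2
v_max = 0.1 * c             # cruise speed cap at 10 percent of c
d = 4.24 * ly               # distance to Proxima Centauri (assumed)

t_burn = v_max / a                     # time spent accelerating (= decelerating)
d_burn = v_max ** 2 / (2.0 * a)        # distance covered during each burn
t_coast = (d - 2.0 * d_burn) / v_max   # cruise leg at 0.1 c
total_years = (2.0 * t_burn + t_coast) / (365.25 * 86400.0)
print(f"total trip time: {total_years:.1f} years")
```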
Wow. Warp drive will likely never happen, but impulse power could be just around the corner!
A lateral solution would be to accept hundreds of years for the journey and slow down a group of humans, still within the Sol system, to be available to respond.
So far, all we really know about the EmDrive is that it has been very hard to prove whether its effect is big enough to prove anything at all, so perhaps we should wait a little longer before casting doubt on existing plans involving accelerations in the 1,000 g class.
Sources are variable, but there are a few experimental results:
http://emdrive.wiki/Experimental_Results
Inside the Breakthrough Starshot Mission to Alpha Centauri
A billionaire-funded plan aims to send a probe to another star. But can it be done?
By Ann Finkbeiner on December 22, 2016
In the spring of 2016 I was at a reception with Freeman Dyson, the brilliant physicist and mathematician, then 92 and emeritus at the Institute for Advanced Study in Princeton, N.J. He never says what you expect him to, so I asked him, “What’s new?”
He smiled his ambiguous smile and answered, “Apparently we’re going to Alpha Centauri.” This star is one of our sun’s nearest neighbors, and a Silicon Valley billionaire had recently announced that he was funding a project called Breakthrough Starshot to send some kind of spaceship there. “Is that a good idea?” I asked. Dyson’s smile got wider:
“No, it’s silly.” Then he added, “But the spacecraft is interesting.”
Full article here:
https://www.scientificamerican.com/article/inside-the-breakthrough-starshot-mission-to-alpha-centauri/
The “silly” part is that the point of the Starshot mission is not obviously science. The kinds of things astronomers want to know about stars are not the kinds of things that can be learned from a quick flyby—and no one knows whether Alpha Centauri even has a planet, so Starshot could not even promise close-ups of other worlds. “We haven’t given nearly as much thought to the science,” says astrophysicist Ed Turner of Princeton University, who is on the Starshot Advisory Committee. “We’ve almost taken for granted that the science will be interesting.”
But in August 2016 the Starshot team got lucky: a completely unrelated consortium of European astronomers discovered a planet around the next star over, Proxima Centauri, a tenth of a light-year closer to us than Alpha Centauri. Suddenly, Starshot became the only semifeasible way in the foreseeable future to visit a planet orbiting another star. Even so, Starshot sounds a little like the dreams of those fans of science fiction and interstellar travel who talk seriously and endlessly about sending humans beyond the solar system with technologies that would surely work, given enough technological miracles and money.
and…
Of course, even the presence of Proxima Centauri b still does not make Starshot slam-dunk science. The chip could take images, maybe look at the planet’s magnetic field, perhaps sample the atmosphere—but it would do this all on the fly in minutes. Given the time to launch and the eventual price, says Princeton astrophysicist David Spergel, “we could build a 12- to 15-meter optical telescope in space, look at the planet for months and get much more information than a rapid flyby could.”
But billionaires are free to invest in whatever they wish, and kindred souls are free to join them in that wish. Furthermore, even those who question Starshot’s scientific value often support it anyway because in developing the technology, its engineers will almost certainly come up with something interesting. “They won’t solve all the problems, but they’ll solve one or two,” Spergel says. And an inventive solution to just one difficult problem “would be a great success.” Plus, even if Starshot does not succeed, missions capitalizing on the technologies it develops could reach some important destinations both within and beyond our solar system.
and…
The contradictions inherent in such dreams are perhaps best expressed by Freeman Dyson. Starshot’s laser-driven sail with its chip makes sense, he says, and those behind the project are smart and “quite sensible.” But he thinks they should stop trying to go to Alpha or Proxima Centauri and focus on exploring the solar system, where StarChips could be driven by more feasible, less powerful lasers and travel at lower speeds.
“Exploring is something humans are designed for,” he says. “It’s something we’re very good at.” He thinks “automatic machines” should explore the universe—that there is no scientific justification for sending people. And then, being Dyson and unpredictable, he adds, “On the other hand, I still would love to go.”
It seems to me we should do both. Small sails with low power beams would be used to test the technology and do useful work. Only then would the system be used to try for the stars if it worked well. The high power lasers needed are very expensive to build, even with anticipated reduced costs for space launch. Instead, use small lasers to power sails to deliver data throughout the solar system, refining the technology as we develop it. By all means keep an eye on the stars, but if that proves too big a leap at first, we shouldn’t abandon the idea or technology which could be developed for local system use.
The great point about this concept is the modularity of the design: we can send many local probes out to explore the solar system, getting faster and faster probes as the project develops. It can also be used in a laser-ablation rocket system, and if we add a balloon system we can send probes into all sorts of orbital configurations. This project has many uses beyond just a starshot; cleverly done, it need not cost the earth to build.
In several places above it was stated that an EM drive would require an enormous amount of fuel to operate, given the thrust levels available for the power levels stated in the article. It should be noted that these are ‘initial’ values for the thrust levels, and you would expect efficiency to improve as research proceeds to optimize the system being deployed.
It’s apparent from the standpoint of energetics (and hopefully from the standpoint of cost) that this particular system will easily deliver appreciable thrust levels and constant accelerations, permitting both interstellar and interplanetary missions. I refer you to the link in my previous posting.
I agree, a lot of power; and if it were light-years away it would look like an FRB when seen from Earth.
December 27, 2016 07:00 AM ET
How Realistic Is the Starship From ‘Passengers’?
The science fiction movie features Jennifer Lawrence, Chris Pratt and an interstellar spaceship transporting 5,000 people to a distant planet, and much of the starship’s design is rooted in real science.
http://www.seeker.com/interstellar-movie-jennifer-lawrence-chris-pratt-science-fiction-2166848033.html
https://arxiv.org/abs/1612.08733
Artificial Intelligence Probes for Interstellar Exploration and Colonization
Andreas M. Hein
(Submitted on 24 Dec 2016)
A recurring topic in interstellar exploration and the search for extraterrestrial intelligence (SETI) is the role of artificial intelligence. More precisely, these are programs or devices that are capable of performing cognitive tasks that have been previously associated with humans such as image recognition, reasoning, decision-making etc. Such systems are likely to play an important role in future deep space missions, notably interstellar exploration, where the spacecraft needs to act autonomously.
This article explores the drivers for an interstellar mission with a computation-heavy payload and provides an outline of a spacecraft and mission architecture that supports such a payload.
Based on existing technologies and extrapolations of current trends, it is shown that AI spacecraft development and operation will be constrained and driven by three aspects: power requirements for the payload, power generation capabilities, and heat rejection capabilities.
A likely mission architecture for such a probe is to get into an orbit close to the star in order to generate maximum power for computational activities, and then to prepare for further exploration activities. Given current levels of increase in computational power, such a payload with a similar computational power as the human brain would have a mass of hundreds to dozens of tons in a 2050 – 2060 timeframe.
Subjects: Popular Physics (physics.pop-ph); Space Physics (physics.space-ph)
Cite as: arXiv:1612.08733 [physics.pop-ph]
(or arXiv:1612.08733v1 [physics.pop-ph] for this version)
Submission history
From: Andreas M. Hein [view email]
[v1] Sat, 24 Dec 2016 14:13:45 GMT (1092kb)
https://arxiv.org/ftp/arxiv/papers/1612/1612.08733.pdf
Interesting paper, but Hein is way out of his depth, made obvious early on when he raises the issue of consciousness and AGI.
We are already reducing power requirements of neural AI models with neuromorphic chips that promise to put brains in boxes with human level power demands. But a probe could avoid that payload mass by building its AGI level brain with the target star’s resources.
I don’t expect StarShot’s 1 gm chips to have more than rudimentary AI, but future interplanetary and interstellar craft could house quite substantial AI capabilities, even possibly AGI with quite small masses.
Alex, could you expand on your objections to Hein being out of his depth? He seems to have a fairly good grasp of AI concepts and problems to me.
The probe would have a hard time building an AGI from target star resources unless it stops, which is not in the plan for Starshot, at least not yet.
I agree that larger craft could have larger AIs, but the launch method is not likely to be the same. Once the laser has been built and tested with gram-sized payloads, perhaps they could test with larger, lower-g payloads. However, that increases the acceleration time to the point that the laser could not be on Earth, and an Earth-bound laser is the reason for the high-g launch in the first place. A space based laser of that power would be a powerful weapon if it were used against an Earth-based target.
1. Hein seems to be assuming general computational hardware trends. These define his constraints on power, mass and heat load; e.g., IBM’s TrueNorth chip could be a “brain in a box” with cortex-scale numbers of neurons, massing 1 kg and using 1 kW, by 2020. By Hein’s mid-century timeframe this will be a lot better. We are talking about a platform that should support AGI.
2. AI can be put on nano chips. See Pete Worden’s blogpost: “Nano-computers are coming!”
3. “There are conceptual and practical issues to overcome to create an AGI. The conceptual issues are basically questions about the nature of consciousness, which are still hotly debated. One of these assumptions is computability of human consciousness. It assumes that the human brain is based on computational principles. If yes, it should be replicable on an artificial substrate. If not, we may have to modify our models of computability. The practical issues are considered with the development of algorithms.” True. But the hard problem of consciousness is not required. The digression into “mind” suggests to me that Hein doesn’t really know what he is talking about. Packing in existing AI algorithms is probably sufficient for most purposes.
4. Assumptions of human-level AGI computation. While Hein suggests a lower bound of 10 W (a human brain is ~25 W), he actually uses:
IMO, that is already obsolete. (See point 1)
His conclusion is:
I think this is wildly conservative given the technology we already know we have.
I don’t think it’s that conservative. Silicon is getting leaner energy-wise, but simulating neurons is still incredibly energy-intense. Hundreds of megawatts is likely unless the silicon itself is arranged in the fashion of neurons, with all their inherent noise and imprecision.
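A back-of-envelope check, with every figure an assumption for illustration: take ~10^14 synapses firing at an average 10 Hz, ~100 nJ per synaptic event for a conventional software simulation, and ~26 pJ per event for neuromorphic hardware (the order reported for chips like TrueNorth):

```python
synapses = 1e14              # assumed synapse count, human-brain order
rate_hz = 10                 # assumed mean synaptic event rate
e_conventional = 1e-7        # J per event, software simulation (assumed)
e_neuromorphic = 2.6e-11     # J per event, neuromorphic hardware (assumed)

events_per_s = synapses * rate_hz
p_conv_mw = events_per_s * e_conventional / 1e6   # megawatts
p_neuro_kw = events_per_s * e_neuromorphic / 1e3  # kilowatts
print(f"conventional simulation: ~{p_conv_mw:.0f} MW")
print(f"neuromorphic hardware:   ~{p_neuro_kw:.0f} kW")
```

Under these assumptions both sides of the thread are right: on the order of 100 MW for a straightforward simulation, but several orders of magnitude less if the silicon is organized neuromorphically.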
Can we provide a “honeypot” for radiation? A place on the craft that radiation might prefer to go that might decrease the proportion of radiation impacting sensitive electronics. Possibly an EM field that diverts some types of radiation but doesn’t increase mass?