Time dilation has long been understood, even if its effects are still mind-numbing. It was in 1963 that Carl Sagan laid out the idea of exploiting relativistic effects for reaching other civilizations. In a paper called “Direct Contact Among Galactic Civilizations by Relativistic Interstellar Flight,” Sagan speculated on how humans could travel vast distances, reaching beyond the Milky Way in a single lifetime by traveling close to the speed of light. At such speeds, time for the crew slows even as the millennia pass on Earth. No going home after a journey like this, unless you want to see what happened to your remote descendants in an unimaginable future.
Before Sagan’s paper appeared (Planetary and Space Science 11, pp. 485-98), he sent a copy to Soviet astronomer and astrophysicist Iosif Shklovskii, whose book Universe, Life, Mind had been published in Moscow the previous year. The two men found much common ground in their thinking, and went on to collaborate on a translation and extended revision of the Shklovskii book that appeared as Intelligent Life in the Universe (Holden-Day, 1966).
This one should be on the shelf of anyone tracking interstellar issues. My own battered copy is still right here by my desk, and I haven’t lost the sense of wonder I felt upon reading its chapters on matters like interstellar contact by automatic probes, the distribution of technical civilizations in the galaxy, and optical communications with extraterrestrial cultures.
Much has changed since 1966, of course, and we no longer speculate, as Shklovskii did in this book, that Phobos might be hollow and conceivably of artificial origin (the chapter is, nonetheless, fascinating). But for raw excitement, ponder this Sagan passage on what possibilities open up when you travel close to lightspeed:
If for some reason we were to desire a two-way communication with the inhabitants of some nearby galaxy, we might try the transmission of electromagnetic signals, or perhaps even the launching of an automatic probe vehicle. With either method, the elapsed transit time to the galaxy would be several millions of years at least. By that time in our future, there may be no civilization left on Earth to continue the dialogue. But if relativistic interstellar spaceflight were used for such a mission, the crew would arrive at the galaxy in question after about 30 years in transit, able not only to sing the songs of distant Earth, but to provide an opportunity for cosmic discourse with inhabitants of a certainly unique and possibly vanished civilization.
The songs of distant Earth indeed! An Earth distant not only in trillions of kilometers but in time. Memories of Poul Anderson’s Leonora Christine (from the classic novel Tau Zero) come to mind, and so do Alastair Reynolds’ ‘lighthuggers.’ Could you find a crew willing to leave everything they knew behind to embark on a journey into the future? Sagan had no doubts on the matter:
Despite the dangers of the passage and the length of the voyage, I have no doubt that qualified crew for such missions could be mustered. Shorter, round-trip journeys to destinations within our Galaxy might prove even more attractive. Not only would the crews voyage to a distant world, but they would return in the distant future of their own world, an adventure and a challenge certainly difficult to duplicate.
But while the physics of such a journey seem sound, the problems are obvious, not the least of which is what kind of propulsion system would get you to speeds crowding the speed of light. The Bussard ramjet once seemed a candidate (and indeed, this is essentially what Anderson used in Tau Zero), but we’ve since learned that issues of drag make the concept unworkable and better suited to interstellar braking than acceleration. And then there’s the slight issue of survival, which William Edelstein (Johns Hopkins) and Arthur Edelstein (UCSF) discussed at the recent conference of the American Physical Society (abstract here). The Edelsteins worry less about propulsion and more about what happens when a relativistic rocket encounters interstellar hydrogen.
Figure two hydrogen atoms on average per cubic centimeter of interstellar space, and that average can vary wildly depending on where you are. A relativistic spacecraft encounters this hydrogen in highly compressed form. Travel at 99.999998 percent of the speed of light and the kinetic energy you encounter from hydrogen atoms reaches levels attainable on Earth only within the Large Hadron Collider, once it’s fully ramped up for service. This New Scientist article comments on the Edelsteins’ presentation, noting that the crew would be exposed to a radiation dose of 10,000 sieverts within a second at such speeds. Six sieverts is considered a fatal dose.
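A rough back-of-the-envelope check of that comparison (a minimal Python sketch; the only inputs are the quoted speed and the proton rest energy, and the LHC comparison is approximate):

    import math

    beta = 0.99999998                       # speed as a fraction of c, as quoted above
    gamma = 1.0 / math.sqrt(1.0 - beta**2)  # Lorentz factor, roughly 5000

    proton_rest_energy_mev = 938.272        # proton rest energy in MeV
    kinetic_energy_tev = (gamma - 1.0) * proton_rest_energy_mev / 1.0e6

    print(f"gamma = {gamma:.0f}")
    print(f"kinetic energy per hydrogen atom = {kinetic_energy_tev:.1f} TeV")
    # roughly 4.7 TeV per atom, i.e. the same ballpark as LHC beam energies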
Traveling near lightspeed seems a poor choice indeed. The Edelsteins calculate that a 10-centimeter layer of aluminum shielding would absorb less than one percent of all this energy, and of course as you add layer upon layer of further shielding, you dramatically increase the mass of the vehicle you are hoping to propel to these fantastic velocities. The increased heat load would likewise demand huge expenditures of energy to cool the ship.
If travel between the stars within human lifetimes is possible, it most likely will happen at much lower speeds. Ten percent of lightspeed gets you to the Centauri stars in forty-three years, a long but perhaps feasible mission for an extraordinary crew. If we eventually find shortcuts through space (wormholes) or warp drive à la Miguel Alcubierre, so much the better, but getting too close to lightspeed itself seems a dangerous and unlikely goal.
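A quick sanity check on that figure: taking the Alpha Centauri distance as roughly 4.3 to 4.4 light years, 4.37 ly ÷ 0.1 c works out to about 43-44 years of Earth time, and with the Lorentz factor at 0.1 c only about 1.005, the crew saves just a few months of ship time. At 0.05 c the same crossing takes roughly 87 years.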
People seem to be talking at cross-purposes here. Many are discussing the physics or engineering feasibility of certain types of propulsion and how energy-consuming they will be, or whether we will ever attain the technology for really exotic methods. Yet the key issue is: how fast can starships go and still successfully carry carbon lifeforms? It will avail us naught if we manage to fold space or get warp drive (unlikely, but you get the drift) and find out that the shielding requirements are prohibitive or that the radiation renders every eukaryotic lifeform on the starship sterile.
Kenneth: I personally don’t think there’s anything magic about 150 years. But yes, renewal of brain neurons is the ultimate showstopper. I discuss this and related issues (including the question of uploading into non-carbon frames) here:
Ghost in the Shell: Why Our Brains Will Never Live in the Matrix
http://hplusmagazine.com/articles/ai/ghost-shell-why-our-brains-will-never-live-matrix
Eniac said:
“I like the idea of the lithium/deuterium fusion drive. It addresses several problems: 1) The easiest of the fusion reactions (D-T) can be used, 2) the dreaded neutrons are not lost but captured to breed the tritium, 3) the fuel is very stable, 4) the fuel can be arrayed ahead of the craft for shielding without any mass fraction penalty, and 5) most of the structural mass can also be fuel, giving a good head start on the rocket equation.”
Eniac is on the right track. Deuterium is the most practical fuel for a starship because of its nuclear stability, low ignition temperature and abundance (it can be mined from Saturn’s rings). Also, deuterium makes a dandy cosmic radiation shield if it’s present in a thick enough layer. The problem, as Eniac correctly stated, is those fast neutrons. The enabling technology for interstellar travel is to come up with a scheme that converts most of those neutrons into useful thrust.
Transmuting lithium into tritium is one possible solution, and maybe the correct one. Lithium deuteride is the fusion fuel for thermonuclear weapons, so this is already well-established technology. However, to use the lithium there is a mechanical process issue: there would need to be a liquid lithium jacket around the combustion chamber. The tritium bred from the lithium would need to be extracted and then fabricated into target pellets through some mechanical process. The design is already getting complicated.
Getting back to the thermonuclear weapon model, the lithium deuteride serves as the secondary stage, but there is the tertiary stage of the U-238 tamper that surrounds the lithium deuteride thermonuclear fuel. Most of the yield from a typical thermonuclear weapon comes from the U-238 tamper, which undergoes fission from the fast neutron flux emanating from the tritium-deuterium fusion reaction. Maybe(?) that’s the answer. The wacky idea is to create a hybrid propulsion scheme of a U-238 plasma trapped in either a magnetically or hydrodynamically confined torus surrounding an inertially confined deuterium-deuterium fusion reaction that’s compressed either with lasers or relativistic particle beams. The fast neutrons emanating from the fusion reaction would cause a secondary fission reaction in the U-238 plasma. The superheated fission products could then serve as the working fluid for a high-Isp fission product rocket. One might be able to model the combustion chamber after a diesel engine: as the intake stroke, the U-238 plasma is injected into the combustion chamber and forms its torus. The pellet is then injected into the center of the torus and zapped. After detonation, the fissioning plasma is expelled from the combustion chamber in a single direction as the power stroke.
Lots of devil in the details. The combustion chamber walls would have to be protected from the plasma with magnetic fields and would still get hotter than hell due to radiation. This is where the lithium jacket becomes really handy, i.e., use it as a coolant to transfer heat from the combustion chamber walls to radiators and then extract tritium as it accumulates. This could enable a deuterium-tritium fuel cycle instead of the harder-to-ignite deuterium-deuterium reaction. The same magnetic field in the combustion chamber would have to be constructed to expel the plasma and extract an EMF to recharge the lasers or relativistic particle beams. Would the U-238 fission process make the neutron issue even worse (more of a problem than a solution)? Lots of questions… Again, the problem is getting rid of the fast neutrons….
“Is there anything seriously wrong with this concept, compared to others?”
Deceleration? You still need the shield in front but likely need to send engine exhaust in the same direction.
ljk said:
“Shklovskii may not have been entirely off about Phobos. …”
No way Phobos is hollow, but it may be a big carbonaceous chondrite. Phobos is definitely the gateway to sending people to Mars. This was more or less recognized by the Augustine Commission’s “flexible approach”. As we discovered with the Apollo Program and the recently cancelled Constellation Program, human exploration of the Moon is a dead end and/or a non-starter. The obvious alternative is human Mars exploration, but it’s politically impossible to achieve due to the high price tag’s sticker shock. So the next best solution is to sneak up on Mars one asteroid at a time through a flexible approach. Eventually we’d find ourselves on Phobos, orbiting above Mars. We could also use all that lovely organic material on Phobos for in situ propellant production. Pictures of astronauts standing on Phobos with the disk of Mars in the background would provide the political basis for that last (expensive) push to get people down onto the Martian surface.
gamma factor at 0.05 c = 1.00125
gamma factor at 0.1 c = 1.005
An increase from about 0.125% to 0.5% you call serious? WT…?
From the relativistic rocket article at Wikipedia.
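For reference, the same numbers straight from the defining formula (a minimal sketch):

    import math

    def gamma(beta):
        """Lorentz factor for a speed given as a fraction of c."""
        return 1.0 / math.sqrt(1.0 - beta**2)

    for beta in (0.05, 0.1):
        g = gamma(beta)
        print(f"v = {beta:.2f} c: gamma = {g:.5f} (time dilation of about {100*(g - 1):.3f} %)")
    # v = 0.05 c: gamma = 1.00125 (about 0.125 %)
    # v = 0.10 c: gamma = 1.00504 (about 0.504 %)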
Athena Andreadis asked:
“Yet the key issue is: how fast can starships go and still successfully carry carbon lifeforms?”
I would respond that this is a non-issue because it’s physically impossible to carry most (all?) carbon based lifeforms to other star systems. The 0.05 c upper limit means a biological payload is exposed to cosmic radiation for about a century. Unless it’s Deinococcus radiodurans or something custom built for a high radiation environment, it’s not going to survive. However as I have said in previous discussions, we don’t need to physically transport people to other star systems. Instead we transport digital information with parity bits describing human/terrestrial biology and then reconstruct it using in situ resources.
the 0.05 C limit is completely fictitious.
Or the ship has some shielding.
T_U_T,
Read the previously provided link discussing the relativistic rocket equation. The algebra is straightforward yielding the exhaust velocities for different propulsion concepts.
Gary:
Deuterium, as opposed to Helium3, can be refined from water, right here on Earth, and is commercially available in large quantities. No need to go to Saturn. This is one of the killer advantages of this concept over Helium3 based ones.
The lithium might make a good shield, too. Or lithium deuteride. Both have the advantage of being solid and stable up to fairly high temperatures, such as should be expected with the energy deposited into the shield.
If mechanical process issues are what we most need to worry about, we have come pretty far, I would think.
Ron:
That issue would affect most other concepts the same, or not?
If the shield goes far enough in front of the engine, it will block very little of the exhaust, and the exhaust could be sent slightly sideways to address even that. Adopting an engine-in-front / tensile-truss design such as that of Pellegrino would be another possible solution.
Gary, digital information is as prone to being disturbed by radiation as biological systems. Also, you cannot “reconstruct” terrestrial biology by the method you describe. That’s La-La Land unless you want to start (maybe) from bacteria… and let it take its course from there.
Gary:
I have trouble seeing the advantage of adding the U-238, given that fission converts much less mass to energy compared with fusion, and given that this fraction is really the fundamental limit in the achievable velocity.
My guess is that the tamper in thermonuclear weapons is there at least as much for its inertia as for its fission yield. It keeps the fusion fuel contained just a little longer, increasing the yield. The fission is an extra bonus, along with all the nasty fission products that make for very “productive” fall-out. No clean tech here…
Both Lithium isotopes can be bred to Tritium, one by fast and one by slow neutrons. It is likely that the isotope ratio can be optimized to take maximum advantage of neutrons, but whether that is worth the extra enrichment trouble is not clear. A competing consideration would be which of the two provides more energy overall.
Indeed. Radiation shielding is troublesome, but far from a fundamental limitation to space travel. The fundamental limits are set by the rocket equation and the heating and deceleration caused by oncoming ISM and CMB. Both permit travel near light speed, but not so near as to enjoy radical relativistic effects. My guess would be 0.1-0.3 c with fusion and 0.6-0.9 c with antimatter.
It will take very dedicated pioneers to go on such a long journey, but it is feasible, in principle.
Athena Andreadis said:
“digital information is as prone to being disturbed by radiation as biological systems.”
As I earlier mentioned, digital information can have Extended Error Correction (EEC) parity bits that allow information to be restored if an arbitrary bit gets flipped by a cosmic ray. This brings me to a technical question that I’ve long wondered about. Years ago I had a lunchtime conversation with a molecular biologist who told me that DNA had its own form of EEC parity checking but did not explain the precise mechanism. I never had the opportunity to complete that conversation. I understand that DNA is a double-stranded helix made up of codons that are each made up of three nucleotides. Linking to the following:
http://en.wikipedia.org/wiki/Genetic_code
I am simply amazed by the sophistication of the RNA codon table. I guess for organisms as complicated as ourselves, we need something with this sort of sophistication in order for our genetics to work. Is there error checking beyond the requirement that the codons fall within the structure of the RNA codon table? For example, in the DNA/RNA transcription process is there some mechanism that inspects the transcribed RNA and determines whether the copy is correct?
Also, I have this vague memory that the DNA or the RNA of the Archaea is slightly different from everything else, e.g. they use a weird extra amino acid in their genetic structure that nothing else uses. Do you know anything about that?
Athena also said:
“That’s La-La Land unless you want to start (maybe) from bacteria… and let it take its course from there.”
On the subject of La-La Land, I’m reminded of our earlier discussion where I speculated whether ETIs might have diddled with our genetic ancestors using viruses. I was thinking that if I was an ETI fiddling with the genetics of unevolved life forms on primitive worlds, I’d want to leave some form of control mark or version number encoded in the genetics of the creatures that I was manipulating. Doing so would be a necessary step since it would be likely that other ETIs of my kind would follow in my wake and want to know what prior modifications had been made. However if I left the control marks in human DNA then over the eons, natural mutation might make the control marks unrecognisable. To deal with this possibility, I could leave the control marks in another life form’s genetics that was modified to be very insensitive to mutation. I could design this other life form to be some sort of parasite (staph bacteria?) that associated with the life forms of interest or even put it into something useful like the mitochondria’s DNA. Are there bacteria associated with human beings that are very insensitive to mutation?
Eniac said:
“I have trouble seeing the advantage of adding the U-238, given that fission converts much less mass to energy compared with fusion, and given that this fraction is really the fundamental limit in the achievable velocity.”
I tend to agree, but the problem of fast neutron elimination remains. With the U-238 plasma idea, I was originally thinking of using a metallic plasma to absorb unwanted fast neutrons (thermalize them). The idea crossed my mind of a plasma based on cadmium. However, I then thought: why not have it produce additional energy rather than passively absorb the neutrons? Again, there needs to be some sort of mechanism to make the fast neutrons contribute to the exhaust velocity. I understand that the cure of using a U-238 plasma may be worse than the disease. Also, making the lithium jacket so thick that it absorbed almost all of the neutrons is probably not the answer (it would be too massive).
During a lunchtime conversation with some Lawrence Livermore physicists, we were discussing the National Ignition Facility (NIF) and I was told of a mechanism where fast neutrons could be converted directly into electrical energy using fine wires. Unfortunately, I was not provided with a physical explanation that I could understand. Does anyone know anything about this?
Actually, Gary, you don’t understand correctly. A little knowledge is a dangerous thing. I will repeat the suggestion that you invest in a basic biochemistry/molecular biology text before resorting to statements about biology, and to biological hypotheses that resemble godly interventions so much that you might as well call yourself a creationist.
To begin with, DNA and RNA are not made of amino acids. They are made of nucleotides. Proteins are made of amino acids. Let me translate your sentence to the equivalent in chemistry, to illustrate how it sounds: “Water is made of nitrogen and sulfur.” Now tell me how you would consider the rest of the statements that stemmed from a person who said this.
DNA is a double helix made of four nucleotides (large aromatic molecules on a sugar phosphate backbone), which can pair by hydrogen bonding, A to T, C to G. That’s how it attains the capacity to give rise to identical daughter molecules during replication. Simple but ingenious. DNA nucleotides are not grouped in any way. Each chromosome is a continuous backbone. If you looked at it, you wouldn’t see any happy triplets waving group flags.
DNA has embedded signals that direct chromosomal packing/unpacking, replication, transcription, splicing and translation. All in the same molecule. It’s the equivalent of having a single text that can be read in six languages. How the DNA gets “read” depends on what function it is discharging. The only time it’s read in triplets (codons) is when it has been transcribed into RNA and is being translated into protein. And to do that you need a large, dynamic hybrid organelle called the ribosome, made of both RNA and protein and using a large retinue of small adaptor RNAs.
The genetic code of translation triplets is IDENTICAL for all terrestrial lifeforms. There is no better proof that all terrestrial lifeforms arose from a single event and a single source. And if some organisms have a slightly different amino acid, it is always made post-translationally by specialized biosynthetic enzymes. This is true across the board, from archaea to plants to humans (examples of the latter: hydroxyproline in collagen; gamma-aminobutyric acid, aka GABA, a major brain neurotransmitter). So much for archaeal uniqueness.
Furthermore, RNA has the capacity to autocatalyze its replication without the presence of enzymes (unlike DNA, which is more stable as an information storage template but less independent). It is almost certain that the transition between chemistry and biochemistry was achieved through RNA. Nothing mystical, and no missing links — although, as usual, the details have not been worked out.
There are several systems of DNA repair enzymes, each specialized for subfunctions. Some are there to proofread the strands during replication, some during stress of different kinds (heat, radiation, toxins, viruses). RNA is more error-prone because it’s single-stranded. Hence the very fast evolution of retroviruses (HIV is a prime example by virtue of its familiarity).
No terrestrial lifeform is insensitive to mutation; any that were would have reached evolutionary dead ends and died out long ago. When biologists say “insensitive to mutation” it’s shorthand for “survives well and/or recovers easily from mutational pressures”.
So please refrain from statements of the “archaea have unique amino acids in their DNA” kind. They hurt my brain but, worse yet, hurt your credibility.
There were two more points I wanted to make.
Given the repair capacity of DNA, which I described only briefly in my previous long post, it’s actually better than digital parity bits. Hence my point about biological forms having an advantage over computer bits.
Secondly, even if anyone wants to defend the idea that aliens seeded life on earth, that just moves the question of origins one step. Eventually, life has to arise from non-living ingredients. However, it’s abundantly clear from the evidence we have that terrestrial life arose here and arose once (the successful version, that is; earlier or parallel less successful starts have disappeared). The uniformity of the genetic code alone is proof.
Hi Kenneth;
Thanks for asking. I am sorry to have taken so long to get back to you regarding the above question. I was busy running errands all day today from about noon EST, USA, and only now just had the opportunity to log on again to Tau Zero.
I look at being able to achieve ever higher gamma factors as both a physics and engineering puzzle that will possibly never have a final solution. I anticipate that, given perhaps billions of years of human sci-tech evolution, gamma factors that currently seem all but inconceivable will present themselves in real-world practical star flight, and galaxy flight.
However, as you mentioned, getting beyond 0.7 C this century or next century will take some doing. I see 0.7 C as doable over the next couple of centuries and I agree with your mindset that it is a good goal to aim for.
A perhaps interesting way to obtain 0.7 C with a large M0/M1 ratio might be to use antimatter-catalyzed fusion to process fuel all the way up to the most stable forms of iron, if such can be done with as little antimatter as possible. No doubt, producing higher atomic number exothermic fusion fuel will result in less return for given incremental fusion cycles, but if we can somehow do such complete fusion processing of the fuel, then perhaps we can achieve an effective Isp greater than 0.119 for fusion fuel. If we can somehow obtain an effective Isp of say 0.13 or 0.15, the M0/M1 ratio for relativistic rockets to achieve 0.7 C will be significantly reduced.
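As a rough numerical check (a sketch using the standard relativistic rocket equation, and assuming the “effective Isp” figures above are exhaust velocities expressed as fractions of c):

    import math

    def mass_ratio(delta_v, v_exhaust):
        """Relativistic rocket equation: M0/M1 for a given delta-v.
        Both arguments are fractions of c."""
        return math.exp(math.atanh(delta_v) / v_exhaust)

    for v_ex in (0.119, 0.13, 0.15):
        print(f"v_ex = {v_ex} c -> M0/M1 for 0.7 c = {mass_ratio(0.7, v_ex):,.0f}")
    # v_ex = 0.119 c -> M0/M1 around 1,460
    # v_ex = 0.13 c  -> M0/M1 around 790
    # v_ex = 0.15 c  -> M0/M1 around 325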
But to start with, I see the 0.2 C or 0.3 C perhaps obtainable by Project Icarus as a good starting point. Aiming for the 0.7 C of the ISV Venture Star of “Avatar” seems like good material for the 22nd century, barring some exotic physics breakthroughs.
There is something beautiful about a pioneering vision wherein we aim at ever higher gamma factors, even to a gamma factor of infinity, assuming that the human race will somehow last forever, in the temporal limits of infinity.
A gamma factor of (5.4 x 10 EXP 44)(3.1 x 10 EXP 7) would enable a spacecraft to travel one light year in one Planck time unit of ship time, or travel the radius of the currently observable universe in about 4 x 10 EXP -34 seconds ship time, and to travel 5.4 x 10 EXP 44 years into the future in one second ship time, if the ship could somehow be brought up to speed. I would imagine that zero point field energy sequestration might be required, as well as the whole host of other necessaries that you referred to as miracles, and I must say, rightly so.
What is the ultimate Special Relativistic Lorentz factor limit according to the expression gamma = {1 – [(v/C) EXP 2]} EXP (-1/2)? We know that the equation would pose a limit of v = C exactly at gamma = infinity, and so one is hard pressed to see how the associated energy could be captured short of artificially inducing an inflationary big bang tailor-made universe in order to provide the stupendous energy requirements. But given trillions if not quadrillions of years of human evolution, and the same for any of our ETI brothers and sisters, I say never say never, although I remain guarded about stating possibilities of gamma = infinity. We may never obtain gamma = infinity, even if we last eternally in some form of bodily life, but I believe and hope that the race to obtain ever higher gamma factors will delight physicists and engineers for millennia, eons, terayears, and perhaps for all eternity.
I like to think of myself as an eternal optimist when it comes to human progress.
When I was in high school, one of my counselors, who was a Catholic religious brother, used to give me and some other students a gentle ribbing when we expressed undue enthusiasm and ambition during periods of occasional poor academic performance, and sometimes my performance was very poor, as I would often get caught up reading sci-fi novels and the like while neglecting my homework: I was an average B student through most of high school. He would chant the theme song from a popular 1977-era sitcom that went “Movin’ on up, to the East Side…”. I first took an interest in the possibility of real star flight concepts at about this time. I now live the mantra of “Movin’ on up, to the future” with Special Relativity.
I enjoy the famous line from President JFK that goes something like, “We choose not to do things because they are easy, but rather, because they are hard!”
But back to the present reality, once again, I feel that your ballpark “knee-bend” calculations of 0.2 C, 0.3 C and 0.7 C are probably right on track. The bold collaboration between the Tau Zero Foundation and the British Interplanetary Society is aiming for somewhere around 0.2 C in Project Icarus. I would be more than happy to settle for this velocity and a respective ship launch by 2060. I will even hope to crack a bottle of good wine over the stern of the ship in orbit for its commissioning ceremony. I would be 98 years old then, so I had better drop much weight and start exercising again so that I might have such an opportunity.
Regards;
Jim
Yes, it is, indeed. And it says that a fission drive with exhaust velocity 0.04 c can reach 0.1 c with a fuel/payload ratio of 11.285, which is not that much different from the ratio for payload to LEO by chemical rockets. And we DO fly to LEO. For now, at least.
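That 11.285 figure does check out against the relativistic rocket equation (a minimal sketch, taking “fuel/payload ratio” to mean M0/M1 minus 1):

    import math

    # delta-v of 0.1 c with an exhaust velocity of 0.04 c, both as fractions of c
    m0_over_m1 = math.exp(math.atanh(0.1) / 0.04)
    print(f"fuel/payload ratio = {m0_over_m1 - 1.0:.3f}")   # prints 11.285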
I’d like to add to my previous statement that I can definitely see the advantages of getting close to c, which would cause time dilation and lead to less experienced time on star voyages. That would replace the purpose of hibernation/suspended animation at lower speeds, and could work as a forward time machine, as you mentioned. However, while the Lorentz factor climbs slowly throughout most relativistic speeds, it increases much more after 90% c and spikes way up very close to c… and that’s not even considering hydrogen atoms. Speeds near c may not be manageable even with advanced technology, but even so, the stars are still within reach of possibility.
This is true in principle, but not absolutely, all-caps, true… There is some limited variety in mitochondrial and even some nuclear genetic codes, as described here: http://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi.
DNA uses a very primitive means of redundancy: duplication. There are two strands carrying the same information. If a loss of information occurs on one of them, a sophisticated complex of enzymes can reconstruct the damaged section using the intact strand as a template.
Duplication is indeed better than a single parity bit, yes, but it is much worse than sophisticated ECC schemes, which can be made extremely robust. For example, a 10-fold redundant forward error correction (FEC) code can reconstruct the original information perfectly if 90% of the transmitted bits are destroyed (gory details at http://en.wikipedia.org/wiki/Forward_error_correction). No chance of that with DNA.
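To make the comparison concrete, here is a minimal sketch of the kind of parity-based correction Gary alluded to: a Hamming(7,4) code, which repairs any single flipped bit in a seven-bit block (the 10-fold FEC figure above refers to far more heavily redundant codes, not to this toy example):

    def hamming74_encode(d1, d2, d3, d4):
        """Encode 4 data bits into a 7-bit Hamming codeword."""
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(codeword):
        """Correct up to one flipped bit and return the 4 data bits."""
        c = list(codeword)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        error_position = s1 + 2 * s2 + 4 * s3   # 1-based; 0 means no error detected
        if error_position:
            c[error_position - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    word = hamming74_encode(1, 0, 1, 1)
    word[4] ^= 1                                 # simulate a single cosmic-ray bit flip
    assert hamming74_decode(word) == [1, 0, 1, 1]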
While the genetic code is somewhat redundant, this does not appear to be motivated by error correction. First, as Athena has pointed out, the code is not involved in germ-line information transmission. Second, its redundancy is likely incidental, because maintaining 64 instead of 20 amino acids and all their biochemical pathways would be much more trouble than it is worth. You could imagine a two-letter code, but it would be constraining, given that some codons must be initiation and termination signals. A roomier code provides leeway to accommodate the vagaries of chemistry when it comes to accurate code matching by tRNA. In addition, translated DNA may simultaneously carry non-translated signals on it, which is possible only if the code is sufficiently redundant.
I don’t think there is sufficient, much less abundant, evidence that life arose here on Earth. How would we know? The uniformity of the genetic code only tells us that there is one single strain of life, not two or three. It tells us nothing about where this strain came from, or, as you say, whether and when others might have existed.
That said, I agree that, following Occam, it should be assumed life arose here, because it is the simplest explanation fitting the evidence.
This is true, all natural lifeforms are capable of mutation, because that is how they arose. However, the original question was: If we wanted to create a life-form to seed a particular niche, would we be able to do it in such a way that a permanent signal remains? There are several aspects:
1) How long can biological sequences be preserved? The existing tree of life has a number of extremely conserved DNA sequences, with sequence similarity across the entire tree. These must have been preserved from very close to the beginning of life. As would be expected, they tend to be ribosomal sequences, i.e. those related to the genetic code. The genetic code itself can be seen as a message of extremely long persistence.
2) Do artificial organisms have to evolve? I think the answer is no. If you design an organism that is well-adapted to the target environment, if that environment is stable within the design parameters, and if there are no competing evolving organisms, the organism will thrive without adaptation. So, given that you want to seed a stable, otherwise lifeless environment, the question reduces to whether it can technically be done. You cannot get the physical mutation rate to zero, but you could ensure that mutation leads to death. While in theory this is easy by using a good error-checking code, it is not at all easy to implement such a thing to the necessary precision using biochemistry. Difficult, but not impossible, I would think. Some variation of existing proof-reading mechanisms coupled to apoptosis might do it, or perhaps a more exotic check-summing proof-reader that walks along the DNA, counting up checksums and cutting the strands when it encounters a violation. The mechanism would have to be redundant, so that it cannot itself be disabled by one or two chance mutations.
So, if we wanted to do such a thing, we’d have at least two reasonably realistic options: 1) redesign existing organisms to change their genetic code to something that carries a message. Not many bits, admittedly, but we would be absolutely assured that the message would persist as long as our lifeforms. We could add more bits by altering functionally preserved ribosomal sequences, but that would be more tricky. 2) design a mechanism to integrate into existing lifeforms that will make them absolutely intolerant to mutation.
Neither is easy to do, and would have to await much more progress in the field of synthetic life, but neither is impossible in principle.
Since there does not seem to be a message in our genetic code, it appears that no-one beat us to it, here on Earth.
A thickness of 50-100 cm is needed for breeding one tritium for each tritium burned, according to here: https://lasers.llnl.gov/programs/ife/how_ife_works.php.
Because of the constraints of the rocket equation, we would likely have a stacked design, with many engines, to be discarded as the stack gets smaller. As long as we empty out the lithium from engines before we discard them, the lithium jacket counts as fuel and its mass does not impact our mass ratio. We would be left with one jacket worth of lithium at the end which will not be burned, perhaps a couple of tons worth. It could still serve as reaction mass for an ion drive when running about in the target system on star power.
Eniac, you will note that in my short post that followed the longer one, I state that no matter how/where/when life arose, it will eventually have to be accounted for by non-living antecedents. Bootstrapping will be with us, whether on Earth or Rigel IV. I can write tomes on the genetic code, DNA repair mechanisms and the conservation of genomes and function, but will forbear since I don’t consider nitpicking of details productive.
Also, by definition if you create an organism impervious to mutation, it’s doomed even if there are no competitors. Environments change, and what is completely optimal in one context becomes seriously suboptimal in another. If you actually look at genomes, cells and organisms, you will see redundancy and jerry-rigging at all scales. There’s a reason for this lack of “streamlining”: it leaves them leeway to change and adapt.
Nature follows the engineer’s dictum: The perfect is the enemy of the good. In the case of living organisms, perfection leads to death. Literally.
I completely agree with that, which my reference to Occam was supposed to express. I objected to the “abundant … evidence we have that terrestrial life arose here”, which, I now realize, you may not have meant the way it sounds. I promise to try and nitpick less in the future…
You do not need perfection, just positive growth. You have a good point, of course, but I still cannot see why an organism with a static genome cannot be sufficiently flexible to thrive in the absence of competition. Organisms adapt to short term changes all the time, without mutating their genome. How are long-term changes qualitatively different? Larger, perhaps, but large enough to render the possible impossible?
Athena: I reread the long post, and after subtracting the comments directed at Gary, it is easily the shortest and most well-written introduction to molecular biology and the origin of life that I have ever seen. I tend to pick the nits without acknowledging the beauty of the whole, in my search of something to say that has not already been said.
First, something that lacks the potential for evolution is not technically an organism. (Fire can “breed” and does have a “metabolism”, yet we don’t consider a flame to be an organism.)
Second, niches appear and disappear over time. Virtually no niche lasts infinitely long. So an organism either has to spread to other niches or it dies when the one it currently occupies disappears.
An addendum on the genetic code: It is fairly clear from the code that it evolved first as a two-nucleotide (C/G) code and that the (A/T) was added somewhat later. So, it is likely that the original peptide synthesis pathway had only 8 distinct codons, and that the other nucleotide pair was recruited to improve the chemical versatility of peptides in a separate evolutionary step. A two-place codon has no likely place in this path, thus three really is the most plausible codon size, then and now.
I could not find reference to evolution or mutation in any of the major dictionary definitions of organism. In any case, I’ll be glad to call it by a different name if you propose one.
Some environments are not niches, i.e. are large, widespread, and do not appear and disappear over time. Some organisms can live in a wide range of environments, and have no problem spreading to other places if necessary. Without mutating.
You quoted my statement but left out my argument without addressing it. Can you?
Regarding the point that the lack of genome changes leads to death, does that mean that we’re all just “dead ‘men’ walking” (just a phrase, I don’t mean only men but humanity as a whole, and it is not meant to offend women – just in case someone wants to go there…)?
I’ll qualify that a bit further. Usually genomes evolve due to selection pressures such as the weather, predator evasion, increased prey capture, adapting to new resources and avoiding/fighting off/adapting to diseases and viruses – there are probably a host of others. Humans have no requirement to kneel to these selection pressures anymore, and so their genomes are by and large under no pressure to change – hence relatively “static”. Are we already dead?
Or have we escaped the life that bred us and “moved on”?
Eniac, earlier said:
“Both Lithium isotopes can be bred to Tritium, one by fast and one by slow neutrons.”
I didn’t think this through. The problem is to get rid of the fast neutrons. Eniac proposed to breed tritium from lithium to dispose of the fast neutrons. Initially this works, but then what do you do with the tritium? Obviously, you burn the tritium in a fusion reaction with deuterium. There are two possible reactions, i.e. tritium + deuterium and tritium + tritium. The first reaction produces a single fast neutron as a by-product and the second produces two fast neutrons as by-products. The problem has been made worse.
Why is it important to get rid of the neutrons? Because they have no charge and will not provide useful thrust against a magnetic nozzle. Getting back to my thermonuclear diesel engine concept. The U-238 plasma torus could act as a hydrodynamic nozzle assuming its density was sufficient to absorb most of the deuterium-deuterium fusion generated neutrons and the resultant fission products were sufficient to absorb most of the fission generated neutrons. This is where a nuclear engineer needs to step in and kill or support the concept. The nuclear engineer needs to determine the density and volume of the torus that is sufficient to capture all of the neutrons from a given spherical volume of inertially confined fusing deuterium. The other limiting factor is the U-238 plasma must not be so dense that the thermal radiation and convective heating from the fission reaction would cause the combustion chamber walls to melt. The heat from the fission reaction would have to be convected from the walls by a liquid lithium (or sodium) jacket surrounding the combustion chamber and rejected via a radiator as black body radiation. The mass and size of the radiator is the final problem. The radiator must have an area large enough to reject the heat but at the same time remain within the shadow of the erosion shield protecting the vehicle from the interstellar medium.
You have not thought it through, yet. Because we want to breed at least one new tritium for each tritium burned, we actually need a little more than one neutron generated, because we cannot hope to catch them all. As you say, the T-T reaction is a source of extra neutrons, and the D-D reaction is a source of extra tritium. Both of these help push the breeding ratio above 1. The cross section of both of these reactions is much smaller than that of D-T, so the margin is small, which is why we need a thick blanket to capture 99% or so of the neutrons. That also means that only a few percent will go to waste. Tritium breeding is not idle conjecture; it has been very well researched and is the way that commercial fusion reactors are going to work, if they ever arrive. It is also the way tritium is produced today.
Of course, not all of the energy of the fast neutrons will be saved in the lithium-tritium conversion, there definitely are thermal losses. But that is pretty much par for the course with any type of fusion engine, and this one does have huge readiness and fuel availability advantages over the others.
Gary: It occurred to me that you may be talking about D-D fusion, while I am talking about D-T fusion. D-T is easier because its cross-section is about a factor of a hundred higher. D-D and T-T are minor side reactions of a few percent, at most. And “easier” is not just a small advantage, it is a huge difference that under most circumstances makes D-T break-even feasible and D-D not.
google definition of life
Then an asteroid hits. Or a supervolcano blows. Or both. Or the planet enters a global ice age. Or all three at once. Bye bye, unadaptive organism.
tesh:
We have moved on. Evolutionary pressure is negligible on short timescales, and the rate of our technological development has completely outstripped our capacity to evolve since the time we started holding tools and making fire. This does not mean, though, that we are doomed. On the contrary, technology has replaced evolution in what it does: allow us to adapt to the environment, or better yet, adapt the environment to ourselves.
You might think that the irrelevance of evolutionary selection that comes with this would be detrimental to our genomes. It might be, in a few million years, but this is quite irrelevant. At the pace that progress is being made in human genetics, we will start improving our genome by targeted intervention in a few decades. In some ways we are doing this already, by prenatal genetic testing and genetic selection during in vitro fertilization. At first it will be motivated just by the elimination of horrible genetic diseases, where the ethics of it is a slam-dunk, but it will spread, and I cannot say what social and ethical consequences there might be. One thing is clear, though: the impact on the human genome will be far greater than anything natural selection can offer.
Thus, for us, evolution has outlived its purpose and is not coming back.
None of these events leave organisms enough time to evolve their genome to respond. In the face of a meteor, volcano, or ice age, all organisms might as well have static genomes. Evolution is slow. You are confusing adaptation in general with evolutionary adaptation. An organism with a static genome can still be very adaptive. You have not yet addressed my original reasoning.
Ok, you win, I will call it “thing formerly called organism with static genome that T_U_T is still searching for a better name for” then, in the future.
i think “nonliving autocatalytic system”
or “‘a selfreplicating robot” would do it ;)
But they evolve beforehand, and if their diversity becomes big enough, then at least some will happen to survive. The diversity of self-replicating but otherwise static robots does not increase. If their working conditions are exceeded, they will all be destroyed.
I’ll call it “critter” for brevity, how is that ;)
You are right, there is no diversity, and the working conditions cannot be exceeded. But if the critter is widespread, working conditions would have to be exceeded everywhere, at the same time. None of your catastrophes does that, given a sufficiently tolerant critter.
The environment of choice would be the open ocean, and the critter a photosynthetic autotroph, living on CO2, water and light, plus trace minerals. It would have to be released into a reducing environment, like the early Earth, and it would grow until its population size would be limited by excess oxygen or lack of CO2. It would survive freezing and thrive in temperate and tropical temperatures. Nothing short of the evaporation of all oceans could stop it, I think. Except, of course, the rise of competing organisms.
Three words: Late Heavy Bombardment. Perhaps not strong enough to boil away all the oceans, but strong enough to kill anything down to a few hundred meters below the surface. And plunge the world into several years of darkness. Bye-bye anything that cannot survive without photosynthesis for extended periods of time. Of course you can claim that your hypothetical ‘critter’ could be pre-programmed to have alternative means of obtaining energy, to make durable spores, etc. However, I feel that the program of such an ‘all in one’ robot would be too complex to be capable of being copied without error at a reasonable energy cost.
You also have to cope with the fact that your critter would be better off using energy for reproduction instead of survival, so if a mutation arose that makes it more likely to reproduce at the expense of survival, it would rapidly be propagated.
The LHB probably lasted hundreds of millions of years. How many hundreds of years between impacts? Even the largest would leave plenty of habitat on the other side of the Earth. Darkness is relative. It is not going to be pitch dark. Prochlorococcus can survive on 1/1000th of full sunlight. And excluding that entire early period still gives us almost 4 billion years of peace and quiet.
Exactly.
I don’t think this is the problem. Some existing organisms are very versatile and hardy as is, and there is no reason to believe that the additional proofreading necessary to freeze the genome would have an insurmountable energy cost. Remember, you do not have to copy without error, you just have to detect errors and abort.
I think the real problem is that it will be difficult to predict what might happen. On Earth, perhaps we had a fortunate set of circumstances that allowed the oceans to remain a fairly stable environment for nearly 4 billion years, despite a 30% solar radiation increase, radical transformations of the atmosphere, etc. etc. We can perhaps design a critter to make it through all that, but there is a good chance that we would forget something, and the “experiment” would fail. It gets worse with an alien planet, where there can be no assurance that similar benevolent circumstances will exist over billions of years. The other option, to encode information in the immutable parts of the genome (the genetic code and perhaps a few enzymes critical to its interpretation) would avoid this problem, at a substantial cost of storage space. Hybrids would not work, I think, as the evolving part would eventually eject the “useless” static part.
Given the obvious silliness of conducting experiments that last billions of years, on the whole I completely agree that this is something we should not try, and would probably fail at if we did.