by Kimberly Trent
Here we depart briefly from the norm by looking at the work of Kimberly Trent, a graduate student in the Applied Physics Program at the University of Michigan. Working as an intern with Marc Millis at NASA’s Glenn Research Center, Trent examined the broad issues of advanced propulsion and focused on a research topic that takes off on a Robert Forward idea from the 1960s. The goal: To develop a propulsion concept involving non-Newtonian frame-dragging effects, which Trent studies in relation to the work of Martin Tajmar. The details follow, in an article designed to show one student’s involvement in the kind of studies Tau Zero hopes to encourage at other institutions.
This past summer, I interned at the NASA Glenn Research Center in Cleveland, OH through the NASA Academy program. My individual research project was in theoretical spacecraft propulsion. This area involves research into devices and concepts such as space drives, warp drives, gravity control, and faster-than-light travel. This research began in the 1960s and was mainly carried out by private companies, individual efforts, and the Air Force. For example, Robert L. Forward, who founded Forward Unlimited and consulted for NASA and the Air Force, did studies on the possibility of generating gravity-controlling forces using an electromagnetic torus configuration with an accelerated mass flow instead of an accelerated electric current.
In the 1990s, governmental funding increased for theoretical propulsion research. One of the programs funded was the Breakthrough Propulsion Physics (BPP) Project (1996-2003), headed by Marc Millis, who was my advisor this summer. This project supported research that strove to uncover new physics that would lead to the development of more advanced propulsion methods. Since current propulsion technology is not suited for extended manned missions and interstellar travel, new physics, or a deeper understanding of the laws of nature, is needed so that more advanced propulsion methods can be developed. If new propulsion physics is discovered, a new class of technologies would result, revolutionizing spaceflight and enabling humanity to learn even more about the Universe.
Image: An artist’s conception of an interstellar ramjet. Credit: European Space Agency (ITSF)/Manchu.
On the other hand, even if a breakthrough does not exist, using the narrower goal of breakthrough propulsion introduces a different point of view for tackling the lingering unknowns of physics. Analyzing the unanswered questions of science from this perspective can provide insights that might otherwise be overlooked from just curiosity-driven science. Theoretical spacecraft propulsion research is important and useful because its minimum value, which is what can be learned along the way, can still be revolutionary. The BPP Project ended in 2003 when NASA’s priorities shifted.
Even though research has decreased in this area due to lack of funding, progress is still being made by private companies and individual effort. For example, Millis and some of the scientists who worked with him on the BPP Project, along with other scientists that he networks with around the world, continue to pursue research in their spare time. Millis founded the Tau Zero Foundation, which formally connects this group of researchers. In addition to collaborating on further research and reviewing papers, they create tools that will make it easier for the next generation of researchers to access past research in this field.
One of these tools is a textbook put together by Millis and his associates entitled Frontiers of Propulsion Science, which is pending publication through AIAA. A great amount of research in these areas has been compiled for the first time in this book. In addition, the book provides guidance for researchers who wish to enter the field, stressing to them the importance of approaching this research in a scientifically rigorous way. In order for this research to be taken seriously by critics and other scientists, Millis and his colleagues emphasize that researchers must strike a balance, conducting their study within the rigorous constraints of conventional physics while still remaining open to the possibility of results that may test or extend our understanding of those principles.
This summer, I used the manuscript of this book along with a number of other individual research papers to help me carry out my research. During the first part of the summer, I read through much of the book to familiarize myself with past and current theoretical spacecraft propulsion research. Then I decided on a research topic that was of interest to me. The topic I chose explores the non-Newtonian frame-dragging gravitational forces that have been observed near distant rotating massive cosmological objects, and that NASA’s Gravity Probe B has attempted to detect near the Earth. These gravitational forces can also be derived from Einstein’s general relativity field equations.
Due to the small magnitude of these forces on a cosmological scale, it was thought that they could only be detected and studied through astronomy and very sensitive satellite experiments. Then in the early 2000s, Martin Tajmar began investigating rotating ultra-cold rings cooled by liquid helium, and detected what appeared to be macroscopic frame-dragging fields. These effects are still undergoing examination in ongoing experiments. However, preliminary mathematical analysis of these fields using Einstein’s equations shows that these frame-dragging forces and the Newtonian gravitational force may have a relationship analogous to that described by Maxwell’s equations for electricity and magnetism.
Using this preliminary analysis and these initial results from Tajmar’s experiments, I wrote a paper in which I performed a cursory frame-dragging propulsion assessment. I proposed a conceptual device similar to the gravitational version of the electromagnetic torus Robert Forward proposed in the 1960s. However, his torus was based on cosmological observations and therefore required a mass flow with the density of a dwarf star flowing through tubing with the diameter of a football field to produce a gravitational field at the center of the torus equal to 1g.
Using Tajmar’s results, rotating ultra-cold rings with a diameter of about 6 inches replaced the massive tubing and mass flow and resulted in a gravitational field with the same 1g magnitude at the center. Still, relativistic accelerations on the order of 10^11 m/s^2 would be needed even with this version of the torus. However, this analysis shows that we may be making progress towards figuring out how to make something like gravity control a possibility in the future.
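To see why the classical version of this idea is so demanding, here is a rough back-of-envelope sketch. This is my own dimensional estimate, not Trent’s actual derivation: it assumes the gravitoelectromagnetic (GEM) analogy in which an accelerating mass current induces a field at the center of a ring of radius r scaling like (G/c^2)·(1/r)·d(I_m)/dt, with geometric factors of order unity ignored.

```python
# Back-of-envelope GEM scaling: how fast must a mass current I_m change
# to induce a 1 g field at the center of a ~6 inch diameter ring?
# Assumed scaling (dimensional estimate): a ~ (G / c^2) * (1 / r) * dI_m/dt
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
g = 9.81           # target field strength, m/s^2
r = 0.075          # ring radius, m (6 inch diameter)

dI_dt = g * r * c**2 / G   # required mass-current change rate, kg/s^2
print(f"required d(I_m)/dt ~ {dI_dt:.2e} kg/s^2")
```

The answer is around 10^27 kg/s^2, which is why Forward’s classical torus needed dwarf-star densities. Tajmar’s claimed anomalously large coupling would reduce the requirement by many orders of magnitude, which is what makes the tabletop version worth analyzing at all.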
Keep in mind that this analysis was a preliminary assessment in which many assumptions were made. Its purpose was to demonstrate how the frame-dragging effect might be used for propulsion so as to stimulate additional thought and further research on this topic. At the end of my paper the “next-steps” for further research in this area were outlined. These include deriving equations for the torus configuration from Einstein’s equations directly so that a more exact analysis can be performed, and carrying out additional experiments on the frame-dragging effect Tajmar is observing so that it can be better understood. At the end of my summer program, Millis found the paper I wrote to be suitable for submission to the Journal of the British Interplanetary Society. We submitted the paper and are now awaiting the peer review process.
I don’t have a subscription to JBIS, wouldn’t mind reading your paper though, any chance you can send the PDF once published? Thanks.
Although I am a layman boob, would not a side effect of being able to manipulate gravity also be the ability to manipulate “apparent” mass? This would be a holy grail for space travel.
You could fart your way to the nearest star system in a reasonable amount of time with next to zero apparent mass.
Study should continue into breakthrough propulsion physics because there is always a chance that a real breakthrough will happen which changes everything. But the relativistic accelerations required by a large dwarf-star-density torus show that many concepts in this field remain far from practical.
Does Tau Zero give undue attention to breakthrough propulsion at the expense of exploring much more practical near-term concepts? If so, then can Tau Zero change its priorities, or does there have to develop an Interstellar Society which will take up the task of figuring out that mission design which is most likely to be the first?
John, Tau Zero incorporates both near and long-term prospects. The current focus, largely the result of work on the Frontiers of Propulsion Science book, is but one aspect of what we hope to accomplish. The two tracks, near-term and breakthrough concepts, aren’t perceived as exclusive.
By the way, Gregory Matloff, who is (among other things) a Tau Zero practitioner, has just released his new solar sail book. Written with Les Johnson (MSFC) and Giovanni Vulpetti, it’s Solar Sails: A Novel Approach to Interplanetary Travel (Copernicus, 2008), and despite its sub-title, it makes abundant reference to the interstellar question. I’ll be posting review notes on the book soon.
You should know that the Earthtech people are planning a replication of the Tajmar experiments. They are also running a separate inertia-modification experiment. The Earthtech guys have some unorthodox ideas about physics. However, they are first-rate experimenters.
Rob’s methane propulsion would certainly represent a breakthrough. With his propellant coming from an all-natural source, I think that it would quickly result in his being found to be set apart from the crowd and ultimately recognized to be outstanding in his field!
Rob,
Regardless of whether manipulation of inertial mass is possible, it’s certainly not something we would ever, ever want to do. The reason is that the dimensions of atoms and molecules are acutely sensitive to the mass of the electron, and to the other masses as well; reduce the mass, and there is hell to pay in disrupting all of the chemistry going on in your spaceship.
For example, there is currently a proposal to use muons to catalyse fusion, which has experimentally (I seem to recall, correct me if I’m wrong) come close to break-even energy returns. In hydrogen, the radius of an atomic orbital is inversely proportional to the mass of the “electron”; with a muon substituted (about 207 times the electron mass) the radius is much smaller, to the point that it’s entirely possible for fusion to occur spontaneously, with the two nuclei bound very close together and tunneling across the internuclear potential.
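The scale of the effect described above is easy to check: the Bohr radius goes inversely with the reduced mass of the orbiting particle, so swapping the electron for a muon shrinks hydrogen by a factor of roughly 186 — a bit less than the bare mass ratio of ~207, because the reduced-mass correction matters at muon masses. A small sketch using standard mass ratios:

```python
# Bohr radius scales inversely with the reduced mass of the orbiting
# particle, which is why muonic hydrogen is so compact.
a0 = 5.292e-11          # m, ordinary Bohr radius
m_e = 1.0               # electron mass (in units of m_e)
m_mu = 206.77           # muon mass
m_p = 1836.15           # proton mass

def reduced(m_orbiter, m_nucleus):
    """Reduced mass of a two-body orbit."""
    return m_orbiter * m_nucleus / (m_orbiter + m_nucleus)

# Radius shrinks by the ratio of reduced masses.
shrink = reduced(m_e, m_p) / reduced(m_mu, m_p)
print(f"muonic hydrogen radius ~ a0 / {1/shrink:.0f} = {a0 * shrink:.2e} m")
```

The resulting radius of a few hundred femtometers is close enough for the nuclei in a muonic molecule to tunnel through the internuclear barrier at useful rates, which is the basis of muon-catalyzed fusion.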
So, in short, playing with mass is a serious business at the atomic scale.
Not to mention that humans evolved under a regime of normal inertia. I’d be curious to see how the body would respond if normal inertia of its organs, blood, and chemistry was to be suddenly reduced.
I’m not sure whether manipulating inertia would result in a reduction of mass or would mean less responsiveness of mass to forces. Are the two things equivalent?
@Alex: my impression from what I read is, the latter: that anti-gravity would not result in any real mass reduction, hence no risk there, but in its behaviour relative to forces (which inertia is also). E.g. a 1000 kg mass could be lifted and accelerated as if it were a 1 kg mass. Interesting question whether this would also imply a modified speed of light, c, within a limited space influenced by the antigravity.
I just read the Wiki on muon-catalyzed fusion.
If inertial mass can be manipulated in a cost effective manner, could a form of fusion power be developed? Also what about transmutation (manufacturing expensive metals out of base metals)?
There is money in them thar inertia modification experiments.
Thinking about future propulsion schemes always makes me look back to the past.
After that it gets weird.
If you told someone in 1869 that in 100 years man would set foot on the moon, then asked them how they think the men got there, they would probably describe something along the lines of Jules Verne and his huge Columbiad cannon.
I doubt if anyone in 1869 would have dreamed up a 3-stage Saturn V, the LM and Command Module, or the computer/process controllers and precise docking maneuvers involved, let alone the life support, celestial navigation, complex metallurgy, exotic plastics, and other systems and materials that were unheard of back in the 1860s.
So what do future propulsion schemes look like to us? They look like the “cannons” we know today, basically ejecting a reaction mass, either stored onboard or collected enroute, or provided from the kick of a laser beam photon reflecting off a mirrored surface. Not sure if it’s possible to escape this kind of thinking today. It’s what we know, so we build on that. Even the warp drive of Star Trek isn’t too far removed from this concept. Stretch out space in front, compress it in back, and you’re essentially pushing off one thing and into another.
So the transportation scheme of 2069 that carries the first 3-man crew to the nearest star system will be quite unlike a Saturn V rocket. I don’t think it will be any system which expels mass rearward, or rides on a beam of laser light. It will have to be based on some physics which we don’t yet understand. I think once we do understand the new physics required, the trip will follow soon afterward.
Maybe if the extra dimensions of string theory are finally proven to exist and thoroughly mapped, we can find some way to twist ourselves through them, *instantly* appearing wherever we like in the universe. Who needs a ship? Who needs reaction mass? Who needs warp drive? I suspect if we ever do this, we could go any “when” as well as any “where.”
Careful where you point that thing. ;-)
“Maybe if the extra dimensions of string theory are finally proven to exist and thoroughly mapped, we can find some way to twist ourselves through them, *instantly* appearing wherever we like in the universe.”
It only works if the extra dimensions are around 10^-6 meters. If these dimensions are far smaller, say 10^-18 m, then the speed is reduced from 10^32 c to ~200 ly/s. Another problem is the energy we need to create this exotic bubble; it is equivalent to the binding energy of a gas giant. Therefore, I don’t think we will have this kind of ability in this century. Well, maybe the technological singularity will do something about it.
Hi All
That putative top speed of 10^32 c is intriguing. Makes just about anywhere within 10^25 lightyears about a second away. In an infinite flat or hyperbolic cosmos that’s a mere drop in the endless ocean though.
Here’s some interesting historical dates regarding flight to the moon.
129 BC – Hipparchus estimates the distance to the Moon
904 AD – First known use of rockets
1605 – Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy
1813 – William Moore derives the rocket equation
1865 – Jules Verne’s novel From the Earth to the Moon
1891 – Tsiolkovskiy clarifies the theory of rocket flight
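The rocket equation that Moore derived and Tsiolkovsky later developed is compact enough to show directly. This sketch (with an illustrative exhaust velocity and mass ratio, not figures from the thread) makes plain why chemical rockets top out near Earth-escape speeds and are hopeless for interstellar distances:

```python
import math

def delta_v(v_exhaust, m_initial, m_final):
    """Tsiolkovsky rocket equation: achievable change in velocity."""
    return v_exhaust * math.log(m_initial / m_final)

# Example: LH2/LOX-class exhaust velocity ~4.4 km/s, mass ratio 10
# (i.e. 90% of the vehicle's launch mass is propellant).
dv = delta_v(4400.0, 10.0, 1.0)
print(f"delta-v = {dv / 1000:.1f} km/s")
```

Because the achievable delta-v grows only with the logarithm of the mass ratio, piling on more propellant yields rapidly diminishing returns.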
As for inertia manipulation, it’s as yet unclear:
a) if this is possible
b) if the inertial and gravitational masses of a particle can be altered independently
My best bet for the fusion idea is this: it’s like lighting a fire with an atom bomb, except shifted up a tier. To increase the mass of the particles you need to put in energy at least equal to the added rest mass-energy, to conserve energy, plus any gravitational potential energy they might have. That would be orders of magnitude higher than the energy you’d get from the nuclear fusion, which changes masses by a very small percentage at best – less than a percent difference between the mass of 12 moles of hydrogen and 1 of carbon, for example, which is a fair few rungs up the fusion ladder.
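The "less than a percent" figure above is easy to verify: fusing twelve hydrogen-1 atoms all the way to carbon-12 releases the mass difference between them, about 0.78% of the input mass. A quick check using standard atomic masses:

```python
# Fractional mass defect for fusing hydrogen-1 up to carbon-12.
m_H1 = 1.007825    # atomic mass units (u), hydrogen-1
m_C12 = 12.0       # u, carbon-12 (exact by definition)
c = 2.998e8        # m/s
u = 1.6605e-27     # kg per atomic mass unit

defect = 12 * m_H1 - m_C12             # u released per C-12 nucleus formed
fraction = defect / (12 * m_H1)        # fraction of input mass converted
energy_J = defect * u * c**2           # energy released per C-12 nucleus
print(f"mass defect fraction = {fraction:.4%}")
print(f"energy per C-12 nucleus = {energy_J:.2e} J")
```

So fusion liberates under 1% of the fuel's rest energy, supporting the point that any mass-manipulation scheme costing of order the full rest mass-energy could never pay for itself this way.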
What might be more interesting is reducing masses to liberate their mass energy, though you have the problem that if you extract work from the process, you’ve got a collection of particles, or a region of space or however it works, which doesn’t have enough energy to snap back to what it was before. The idea of an expanding novo-vacuum as in Schild’s Ladder comes to mind.
This is of course extremely speculative musings from someone with a knowledge of physics amounting to high school plus general interest, so far. I’d be interested to see any hard research on the topic.
It does, however, raise a more general point. Even using classical physics to propel a ship, the forces involved are not amenable to transporting humans efficiently, with any more than a few g being dangerous. Using exotic physics can only be worse. If you modify particle masses, as I’ve said, on a molecular level – even lower, perhaps, but I don’t have enough of a grounding in particle physics to say – but at least on a molecular level, you cause havoc. Now when we’re talking about exotic space-time effects (e.g. the Alcubierre drive), string-dimension warping or gravitophotons in Heim theory, what is more interesting than whether it is possible to travel very quickly, is the question as to whether you could do so and survive it. Survive even in a very specialised sense – whether you could send anything at all which could resume useful functioning and transport information, whether a human or a computer or something more advanced. Scrambling physics that much can’t be healthy, and when going over the speed of light, you hit causality problems which could necessitate such a scrambling.
SPOILER ALERT
Something like this happens in Alastair Reynolds’ new novel House of Suns, I might add. They find that wormholes can be easily created and maintained, but that the process scrambles matter so much that anything above the scale of atoms is irreversibly given over to entropy. They use wormholes, for example, for transporting matter in bulk over large distances, such as when they funnel hydrogen from a young star with no inhabited worlds into an old one to keep it going.
The climax of the novel – and please, stop reading if you don’t want major spoilers – is that when AIs managed to fix the wormhole information problem, they did so by creating a causality barrier around the ends of the wormhole, expanding at the speed of light in all directions, preventing any information from being transmitted until the light cones once more overlap – effectively, resetting the light cones. Thus when they set up the wormhole connection to Andromeda, the galaxy disappears from view of the Milky Way several million years late, so that no causality violation could occur.
hello there,
Where can we find the book Frontiers of Propulsion Science, please ?
thanks
Charles
Slightly off-topic, but Adam’s brief comment (August 27th, 2008 at 17:46) triggers me to ask a question that I have had for quite a while but was afraid to ask (well, did not find an opportunity to):
We know by now that the observable universe is an estimated 46.5 billion ly in radius *as observed from us* (not 13.7 billion as the age of the universe, because of cosmic inflation). This is commonly known as *the* universe.
However, we also know that this is just that: the observable universe, i.e. the part of the universe from which light and any other radiation has had the time to reach us since the beginning.
Scientific assumption is that the *total* universe is even vastly bigger, i.e. that our observable universe is only “a miniscule fraction of the total universe”, as an astronomer put it.
Questions:
– Is there any indication or guesstimate of the size of the *total* universe (WMAP established only a bare minimum)?
– Are there any (even just theoretically) conceivable ways to estimate this total size of the universe (since light etc. is not available from beyond the edge of the observable universe), such as gravity, …?
– If the total universe is so much bigger than the observable universe, does this imply that the expansion has been much faster than previously calculated (since so much is beyond the edge)?
Adam, James, anybody?
C. Philipps, we’re still in the page proof process with the Frontiers of Propulsion Science book. It will be coming out as an AIAA publication some time this fall, probably in the late fall or even early winter. I’ll be sure to announce its availability at that time.
I’d take 200 light years per second no problems. Three and a half hours to Andromeda, anyone? It’s probably more manageable than 10^32 c which would make any target smaller than a galactic cluster extremely difficult to aim at.
(Hmmm… that’s an interesting idea for sci-fi, a universe where FTL is possible but unmanageably fast)
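For what it’s worth, the arithmetic behind the "three and a half hours" figure checks out:

```python
# Sanity check: Andromeda at ~2.5 million light years,
# travelling at 200 light years per second.
distance_ly = 2.5e6
speed_ly_per_s = 200.0
t_hours = distance_ly / speed_ly_per_s / 3600.0
print(f"travel time ~ {t_hours:.1f} hours")
```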
I hope that Martin Tajmar’s experiments are confirmed and published in an important scientific journal like Nature or Science.
If gravity control could be used, as a first step, to launch tons of cargo – absolutely better than rockets – and to explore the solar system, that would be a great step for mankind toward colonization of the solar system.
I think gravity control would be a cheap way to get to space – maybe at the price of an airplane ticket, or maybe you could even buy your own spaceship, just like a car today!
All of this thanks to gravity control, which I hope will happen in the future.
I found this interesting site about the Tajmar experiments http://www.hpcc-space.de/ and this interesting connection between the Tajmar experiments and Gravity Probe B http://www.hpcc-space.de/publications/documents/JPC2008.pdf
Andy,
It is impossible to travel at 10^32 c because the energy required is more than 40 orders of magnitude greater than the total amount of energy in our universe. Besides, I find it hard to believe that the extra dimensions are just a few micrometers in size, so 10^32 c exists only in fiction.
Much of the credibility of the article is lost in the title….Or is it too much to expect accurate spelling and respectable research?
Good grief, it was just a typo (mine, not Kimberly Trent’s). Now fixed.
The http://www.hpcc-space.de site is run by Dröscher and Häuser, who are promulgating a version of Heim theory called Extended Heim Theory (EHT). They’ve had limited contact with Tajmar. Dröscher and Häuser have proposed a modified version of Tajmar’s experiment to test their version of EHT. They have been talking about this for over a year and need to run the experiment.
Needless to say, EHT is controversial and is not well accepted by mainstream researchers. However, EHT is a credible theory. If EHT is real, there is the possibility of an FTL that does not require the ridiculous energy conditions of the M-brane based concept mentioned earlier in this thread.
I think it’s too early to call Heim’s theory, or the later extensions by Dröscher and Häuser, credible. A theory has to be well studied and understood to achieve credibility, and we’re at the very beginning of that process with Heim. We’ll see how this plays out — mainstream researchers can’t endorse it simply because there is insufficient study upon which to base the necessary judgments.
Here’s one possible next step, as suggested by Marc Millis in an earlier article here (https://centauri-dreams.org/?p=504):
Hi All
Ron asks a few pertinent questions. I’ll re-post his text and my answers…
Slightly off-topic, but Adam’s brief comment (August 27th, 2008 at 17:46) triggers me to ask a question that I have had for quite a while but was afraid to ask (well, did not find an opportunity to):
We know by now that the observable universe is an estimated 46.5 billion ly in radius *as observed from us* (not 13.7 billion as the age of the universe, because of cosmic inflation). This is commonly known as *the* universe.
However, we also know that this is just that: the observable universe, i.e. the part of the universe from which light and any other radiation has had the time to reach us since the beginning.
Scientific assumption is that the *total* universe is even vastly bigger, i.e. that our observable universe is only “a miniscule fraction of the total universe”, as an astronomer put it.
Questions:
– Is there any indication or guesstimate of the size of the *total* universe (WMAP established only a bare minimum)?
[there are estimates based on certain topological assumptions – for example the Poincare Dodecahedral space-time is a bit smaller than the visible universe according to some analyses of the WMAP data. Or if the Universe is actually a hypersphere then its size can be worked out from the relevant equations, assuming a certain level of cosmological constant etc.]
– Are there any (even just theoretically) conceivable ways to estimate this total size of the universe (since light etc. is not available from beyond the edge of the observable universe), such as gravity, …?
[the heat-haze of the CMB limits how far the astronomers can view the distant cosmos, and so only what data the CMB can give us can be used for topological studies. This leads to “Circles in the Sky” searches – looking for repetitions in the CMB due to a finite size. The implications are currently equivocal – some see “circles” and some don’t. Because the evidence is statistically derived from the raw data it’s really hard to be definite in an absolute way.]
– If the total universe is so much bigger than the observable universe, does this imply that the expansion has been much faster than previously calculated (since so much is beyond the edge)?
[Not necessarily. The visible Universe is the causal horizon for our little patch – beyond is largely unknown. Certain theories make assumptions – the most gratuitous being an assumed *infinite* universe – but the evidence beyond our current horizon will only come to our eyes as aeons pass. If the cosmological constant is truly *constant* then NO further evidence will become visible and our cosmic horizon will shrink to a *small* value. If our region is but a perturbation away from some true state of affairs – for example it only *appears* to be accelerating, but in reality the universe is closed – then eventually the rest of the Universe will come into view, but probably not for a trillion years or so.]
[There is an upper limit on overall cosmic density – the density at which the actual radius is equal to the current Hubble radius. That’s at twice the critical density. Lower densities produce a larger, but finite, true radius – assuming the Universe is a hypersphere, of course.]
[There are many ways of producing what we see, but some topologies and mixes of cosmological constants are more contrived than others. The current challenge is seeing where the *simplest* assumptions can take us – but the risk is that might not be weird enough for the *true* universe.]
Adam
oh… final point. Our little visible universe would have expanded alongside other patches out of the much larger body. No special super-speed expansion is needed, except in theories like inflation or whatever. Space-time can expand much faster than ‘c’ because it’s the background against which *local* speeds are restricted to c. That’s why the Alcubierre warp-drive isn’t a violation of SR: at no instantaneous point is ‘c’ exceeded by anything. However, causality does suffer some violations as seen by some observers, but whether that means anything in the real universe is arguable.
@Adam: thanks for your reply!
“there are estimates based on certain topological assumptions”. You make me curious. Well, what are those estimates, do you know of any concrete ones?
Thanks again.
I myself have had a few ideas ref. total size of the universe and theoretical ways for estimation, but these may be totally wrong and scientifically unsound:
– ‘gravity leaks’ from beyond the edge (comoving frontier), resulting in more gravity (or apparent mass) than can be explained from visible mass. But this only works if the speed of gravity is (significantly) > c.
– Further to the previous: the missing mass of the universe and/or dark matter being actual normal mass/matter within a much bigger universe (but this may not work, because the missing mass/dark matter/dark energy seems to be missing per galaxy).
– If the total universe is so much bigger than our visible universe, shouldn’t there be noticeable (but very small) deviations in the expansion rate in different directions?
The speed of gravity is indeed higher than c in 4+1 space-times.
Your last question is very interesting. However, the Einstein equations are very hard to solve even in 3+1 space-times. If the metrics are 4+1 or higher, then the equations are almost impossible to deal with, because most of the realistic nonlinear wave equations only make sense in 3+1 space-times. I’ve never seen 5-D Navier–Stokes equations before.
Ok, I’m a caveman compared to most of you, but reading through this article I’ve come to the following conclusion:
If we were to use an inertial mass manipulation system (IMMS) for space flight, it would have to have an extremely wide area of effect, which would make control extremely difficult or close to impossible. Along with that is the danger of the mass imploding on itself or being fused together under the pressure created. This would call for a method of a complete (closest term I could think of) anti-gravity/matter field to hold everything in place. Such a system would basically nullify the IMMS unless you opened a small “window” or “tunnel” to act as a vacuum, pulling the ship/object forward.