Centauri Dreams
Imagining and Planning Interstellar Exploration
Dyson Sphere ‘Feedback’: A Clue to New Observables?
Although so-called Dysonian SETI has been much in the air in recent times, its origins date back to the birth of SETI itself. It was in 1960 – the same year that Frank Drake used the National Radio Astronomy Observatory in Green Bank, West Virginia to study Epsilon Eridani and Tau Ceti – that Freeman Dyson proposed the Dyson sphere. In fiction, Olaf Stapledon had considered such structures in his novel Star Maker in 1937. As Macy Huston and Jason Wright (both at Penn State) remind us in a recent paper, Dyson’s idea of energy-gathering structures around an entire star evolved toward numerous satellites around the star rather than a (likely unstable) single spherical shell.
We can’t constrain what a highly advanced technological civilization might do, so both solid sphere and ‘swarm’ models can be searched for, and indeed have been, for in SETI terms we’re looking for infrared waste heat. And if we stick with Dyson (often a good idea!), we would be looking for structures orbiting in a zone where temperatures fall in the 200-300 K range, which by Wien’s law translates into searching at about 10 microns, the wavelength of choice. But Huston and Wright introduce a new factor: the irradiation from the interior of the sphere onto the surface of the star.
This is intriguing because it extends our notions of Dyson spheres well beyond the habitable zone as we consider just what an advanced civilization might do with them. It also offers up the possibility of new observables. So just how does such a Dyson sphere return light back to a star, affecting its structure and evolution? If we can determine that, we will have a better way to predict these potential observables. As we adjust the variables in the model, we can also ponder the purposes of such engineering.
Think of irradiation as Dyson shell ‘feedback.’ We immediately run into the interesting fact that adding energy to a star causes it to expand and cool. The authors explain this by noting that total stellar energy is a sum of thermal and gravitational energies. Let’s go straight to the paper on this. In the clip below, E* refers to the star’s total energy, with Etherm being thermal energy:
When energy is added to a star (E* increases), gravitational energy increases and thermal energy decreases, so we see the star expand and cool both overall (because Etherm is lower) and on its surface (because, being larger at the same or a lower luminosity, its effective temperature must drop). A larger star also means less pressure on a cooler core, so we expect its luminosity to decrease as well.
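The quickest way to see why, assuming the standard virial argument for a non-relativistic, ideal-gas star (my paraphrase, not a quotation from the paper): virial equilibrium gives

$$E_{\rm grav} = -2\,E_{\rm therm}, \qquad E_* = E_{\rm therm} + E_{\rm grav} = -E_{\rm therm},$$

so any increase in E* forces Etherm down (the star cools overall) and Egrav up (the star becomes less tightly bound, which is to say it expands).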
Image: Artist’s impression of a Dyson sphere under construction. Credit: Steve Bowers.
Digging into this effect, Huston and Wright calculate the difference in radius and temperature between the normal and irradiated stellar models. Work on irradiated stars goes back to the late 1980s, and includes the interesting result that a star of half a solar mass, if subjected to a bath of irradiation at a temperature of 10,000 K, has its main sequence lifetime shortened by approximately half. The star expands and cools overall, but the redistribution of thermal energy causes its central temperature to increase.
A bit more on this background work: The 1989 paper in question, by C. A. Tout and colleagues, has nothing to do with Dyson spheres, but provides data on stellar irradiation of the sort that would be produced by proximity to a quasar or active galactic nucleus. Tout et al. worked on isotropic radiation baths at constant temperatures up to 10,000 K, finding that while the effects on stars whose energy transport is radiative are minor, convective stars increase in size. Keep in mind that cooler, low-mass stars are fully convective; hotter and more massive stars transport their energy from the interior through a radiative zone that forms and expands from the core.
Applying this to Dyson spheres, a star surrounded by technology would have light reflected back onto the star, while at the same time the sphere would become warm and emit thermal energy. I was intrigued to see that the paper gives a nod to Shkadov thrusters, which we’ve discussed in these pages before – these are stellar ‘engines’ using a portion of a star’s light to produce a propulsive effect. See Cosmic Engineering and the Movement of Stars, as well as Greg Benford’s look at the physics of the phenomenon, as developed by himself and Larry Niven, in Building the Bowl of Heaven.
Huston and Wright model how a Dyson sphere would affect the structure and evolution of a star, incorporating Dyson sphere luminosity as returned to the surface of the star. Each star is modeled from the start of its enclosure within the Dyson sphere to the end of its main sequence lifetime. Beyond luminosity, the authors use Wright’s previous work on Dyson sphere parameters and his formulation for radiative feedback, while deploying a tool called Modules for Experiments in Stellar Astrophysics (MESA) to assist the calculations.
The authors consider stars in a mass range from 0.2 to 2 solar masses while varying the fraction of luminosity returned to the star from 0.01 to 0.50. Combining the effects of energy feedback on the stars with the calculated Dyson sphere properties yields absolute magnitudes for the combined systems of central star plus surrounding sphere. From the paper:
Irradiated stars expand and cool. A Dyson sphere may send a fraction of a star’s light back toward it, either by direct reflection or thermal re-emission. This returning energy can be effectively transported through convective zones but not radiative zones. So, it can have strong impacts on low mass main sequence stars with deep convective zones which extend to the surface. It causes them to expand and cool, slowing fusion and increasing main sequence lifetimes. For higher mass stars with little to no convective exterior, the returned energy cannot penetrate far into the star and therefore has little effect on the star’s structure and evolution, besides some surface heating.
The effects are observationally significant only for spheres with high reflectivity or high temperatures – remember that Dyson assumed a sphere in the ~300 K area to correspond to a planet in the habitable zone. The authors combine the spectrum of the host star and the Dyson sphere into a ‘system spectrum,’ which allows them to calculate absolute magnitudes. The calculations involve the star itself, the interior of the sphere (both would be hidden) and the exterior of the sphere, which would be unobscured.
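To make the ‘system spectrum’ idea concrete, here is a minimal sketch of how one might combine the two components, assuming simple blackbodies and a sphere that lets a fraction of the starlight through. This is my own illustration, not the authors’ code, and the parameter names are mine:

```python
# A toy 'system spectrum': transmitted starlight plus thermal emission from
# the Dyson sphere exterior, both treated as blackbodies (illustrative only).
import numpy as np

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_lambda(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda in W m^-3 sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2 * H * C**2 / wavelength_m**5 / np.expm1(x)

def system_spectrum(wavelength_m, t_star, r_star, t_sphere, r_sphere, tau):
    """Quantity proportional to observed flux; tau is the fraction of
    starlight transmitted through the sphere."""
    star = tau * planck_lambda(wavelength_m, t_star) * r_star**2
    sphere = planck_lambda(wavelength_m, t_sphere) * r_sphere**2
    return star + sphere

# Example: Sun-like star inside a ~300 K sphere of radius 1 AU, 10% transmission.
wl = np.logspace(-7, -4, 400)   # 0.1 to 100 microns
spec = system_spectrum(wl, 5772.0, 6.96e8, 300.0, 1.496e11, tau=0.1)
print(f"Spectrum peaks near {wl[np.argmax(spec)]*1e6:.1f} microns")
```

Even this crude version shows the qualitative behavior described in the paper: the sphere dominates in the mid-infrared while the dimmed star still sets the optical colors.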
Wright has previously developed a set of five defining characteristics of a Dyson sphere, involving the intercepted starlight, the power of the sphere’s thermal waste heat, its characteristic temperature and other factors in a formalism called AGENT. The authors run their calculations on hot Dyson spheres and their opposite – cold, mirrored Dyson spheres that return starlight to the star without significant heating. Thus we go through a range of Dyson spheres intercepting starlight, including the familiar notion:
As the classical idea of a Dyson sphere, we can examine a solar mass star with low transmission of starlight through the sphere and a Dyson sphere radius of roughly 1 AU. We see that the feedback levels are very low and that the systems will appear, relative to a bare solar mass star, to be dimmed in the optical range and reddened in both optical and infrared colors.
Notice the range of temperatures we are talking about, for this is where we can expand our thinking on what a Dyson sphere might involve:
For our 0.2 and 0.4 M⊙ stars, feedback levels above roughly 1% cause at least a 1% change in nuclear luminosity; their effective temperatures do not significantly change. For our 1 and 2 M⊙ stars, feedback levels above roughly 6% cause at least a 1% change in the star’s effective temperature; their nuclear luminosities do not significantly change. Physically, these limits may correspond with a cold, mirrored surface covering the specified fraction of the star’s solid angle. For light-absorbing, non-reflective Dyson spheres, these feedback levels correspond to very hot spheres, with temperatures of thousands of Kelvin.
Dyson spheres in the latter temperature ranges are utterly unlike the more conventional concept of a civilization maintaining habitable conditions within the shell to gain not just energy but vastly amplified living space. But a hot Dyson sphere could make sense from the standpoint of stellar engineering, for feedback mechanisms can be adjusted to extend a star’s lifetime or reduce its luminosity. Indeed, looking at Dyson spheres in the context of a wide range of feedback variables is useful in helping jog our thinking about what might be found as the signature of an advanced technological civilization.
The paper is Huston & Wright, “Evolutionary and Observational Consequences of Dyson Sphere Feedback,” accepted at the Astrophysical Journal (abstract / preprint). The paper by Tout et al. is “The evolution of irradiated stars,” Monthly Notices of the Royal Astronomical Society Volume 238, Issue 2 (May 1989), pp. 427–438 (abstract). For an overview of Dyson spheres and their background in the literature, see Wright’s “Dyson Spheres,” Serbian Astronomical Journal Issue 200, pp. 1-18 (2020). Abstract / preprint. Thanks to my friend Antonio Tavani for the early pointer to this work.
The Long Result: Star Travel and Exponential Trends
Reminiscing about some of Robert Forward’s mind-boggling concepts, as I did in my last post, reminds me that it was both Forward and the Daedalus project that convinced many people to look deeper into the prospect of interstellar flight. Not that there weren’t predecessors – Les Shepherd comes immediately to mind (see The Worldship of 1953) – but Forward was able to advance a key point: Interstellar flight is possible within known physics. He argued that the problem was one of engineering.
Daedalus made the same point. When the British Interplanetary Society came up with a starship design that grew out of freelance scientists and engineers working on their own dime in a friendly pub, the notion was not to actually build a starship that would bankrupt an entire planet for a simple flyby mission. Rather, it was to demonstrate that even with technologies that could be extrapolated in the 1970s, there were ways to reach the stars within the realm of known physics. Starflight was incredibly hard and expensive, but if it were possible, we could try to figure out how to make it feasible.
And if figuring it out takes centuries rather than decades, what of it? The stars are a goal for humanity, not for individuals. Reaching them is a multi-generational effort that builds one mission at a time. At any point in the process, we do what we can.
What steps can we take along the way to start moving up the kind of technological ladder that Phil Lubin and Alexander Cohen examine in their recent paper? Because you can’t just jump to Forward’s 1000-kilometer sails pushed by a beam from a power station in solar orbit that feeds a gigantic Fresnel lens constructed in the outer Solar System between the orbits of Saturn and Uranus. The laser power demand for some of Forward’s missions is roughly 1000 times our current power consumption. That is to say, 1000 times the power consumption of our entire civilization.
Clearly, we have to find a way to start at the other end, looking at just how beamed energy technologies can produce early benefits through far smaller-scale missions right here in the Solar System. Lubin and Cohen hope to build on those by leveraging the exponential growth we see in some sectors of the electronics and photonics industries, which gives us that tricky moving target we looked at last time. How accurately can you estimate where we’ll be in ten years? How stable is the term ‘exponential’?
These are difficult questions, but we do see trends here that are sharply different from what we’ve observed in chemical rocketry, where we’re still using launch vehicles that anyone watching a Mercury astronaut blast off in 1961 would understand. Consumer demand doesn’t drive chemical propulsion, but power beaming draws on electronics and photonics industries in which consumer demand plays a key role. We also see the exponential growth in capability paralleled by exponential decreases in cost in areas that can benefit beamed technologies.
Lubin and Cohen see such growth as the key to a sustainable program that builds capability in a series of steps, moving ever outward in terms of mission complexity and speed. Have a look at trends in photonics, as shown in Figure 5 of their paper.
Image (click to enlarge): This is Figure 5 from the paper. Caption: (a) Picture of current 1-3 kW class Yb laser amplifier which forms the baseline approach for our design. Fiber output is shown at lower left. Mass is approx 5 kg and size is approximately that of this page. This will evolve rapidly, but is already sufficient to begin. Courtesy Nufern. (b) CW fiber laser power vs year over 25 years showing a “Moore’s Law” like progression with a doubling time of about 20 months. (c) CW fiber lasers and Yb fiber laser amplifiers (baselined in this paper) cost/watt with an inflation index correction to bring it to 2016 dollars. Note the excellent fit to an exponential with a cost “halving” time of 18 months.
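Just to illustrate how such a trend compounds, here is a toy projection using the roughly 18-month cost-halving time quoted in the caption. The starting cost per watt is a placeholder of mine, not a number from the paper:

```python
# Toy projection of $/W under a constant cost-halving time (illustrative only).
def cost_per_watt(years_from_now, cost_now=100.0, halving_time_yr=1.5):
    """Project cost per watt; cost_now is a hypothetical starting value."""
    return cost_now * 0.5 ** (years_from_now / halving_time_yr)

for yr in (0, 5, 10, 20):
    print(f"t + {yr:2d} yr: ${cost_per_watt(yr):,.2f} per watt (toy numbers)")
```

Ten years of an 18-month halving time is a factor of roughly 100 in cost; twenty years is roughly 10,000. That compounding is the entire argument for patience.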
Such growth makes developing a cost-optimized model for beamed propulsion a tricky proposition. We’ve talked in these pages before about the need for such a model, particularly in Jim Benford’s Beamer Technology for Reaching the Solar Gravity Focus Line, where he presented his analysis of cost optimized systems operating at different wavelengths. That article grew out of his paper “Intermediate Beamers for Starshot: Probes to the Sun’s Inner Gravity Focus” (JBIS 72, pg. 51), written with Greg Matloff in 2019. I should also mention Benford’s “Starship Sails Propelled by Cost-Optimized Directed Energy” (JBIS 66, pg. 85 – abstract), and note that Kevin Parkin authored “The Breakthrough Starshot System Model” (Acta Astronautica 152, 370-384) in 2018 (full text). So resources are there for comparative analysis on the matter.
But let’s talk some more about the laser driver that can produce the beam needed to power space missions like those in the Lubin and Cohen paper, remembering that while interstellar flight is a long-term goal, much smaller systems can grow through such research as we test and refine missions of scientific value to nearby targets. The authors see the photon driver as a phased laser array, the idea being to replace a single huge laser with numerous laser amplifiers in what is called a “MOPA (Master Oscillator Power Amplifier) configuration with a baseline of Yb [ytterbium] amplifiers operating at 1064 nm.”
Lubin has been working on this concept through his Starlight program at UC-Santa Barbara, which has received Phase I and II funding through NASA’s Innovative Advanced Concepts program under the headings DEEP-IN (Directed Energy Propulsion for Interstellar Exploration) and DEIS (Directed Energy Interstellar Studies). You’ll also recognize the laser-driven sail concept as a key part of the Breakthrough Starshot effort, for which Lubin continues to serve as a consultant.
Crucial to the laser array concept in economic terms is that the array replaces conventional optics with numerous low-cost optical elements. The idea scales in interesting ways, as the paper notes:
The basic system topology is scalable to any level of power and array size where the tradeoff is between the spacecraft mass and speed and hence the “steps on the ladder.” One of the advantages of this approach is that once a laser driver is constructed it can be used on a wide variety of missions, from large mass interplanetary to low mass interstellar probes, and can be amortized over a very large range of missions.
So immediately we’re talking about building not a one-off interstellar mission (another Daedalus, though using beamed energy rather than fusion and at a much different scale), but rather a system that can begin producing scientific returns early in the process as we resolve such issues as phase locking to maintain the integrity of the beam. The authors liken this approach to building a supercomputer from a large number of modest processors. As it scales up, such a system could produce:
- Beamed power for ion engine systems (as discussed in the previous post);
- Power to distant spacecraft, possibly eliminating onboard radioisotope thermoelectric generators (RTG);
- Planetary defense systems against asteroids and comets;
- Laser scanning (LIDAR) to identify nearby objects and analyze them.
Take this to a full-scale 50 to 100 GW system and you can push a tiny payload (like Starshot’s ‘spacecraft on a chip’) to perhaps 25 percent of lightspeed using a meter-class reflective sail illuminated for a matter of no more than minutes. Whether you could get data back from it is another matter, and a severe constraint upon the Starshot program, though one that continues to be analyzed by its scientists.
But let me dwell on closer possibilities: A system like this could also push a 100 kg payload to 0.01 c and – the one that really catches my eye – a 10,000 kg payload to more than 1,000 kilometers per second. At this scale of mass, the authors think we’d be better off going to IDM (Indirect Drive Mode) methods, with the beam supplying power to onboard propulsion, but the point is we would have startlingly swift options for reaching the outer Solar System and beyond with payloads allowing complex operations there.
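For a sense of what those mission classes ask of the beamer, here is a rough comparison of the kinetic energies involved, using the relativistic expression (γ − 1)mc². The gram-scale figure for the Starshot-style chip is my assumption; the other numbers come from the text above:

```python
# Rough kinetic-energy comparison for the mission classes quoted above.
import math

C = 2.998e8  # speed of light, m/s

def kinetic_energy_j(mass_kg, speed_m_s):
    """Relativistic kinetic energy (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (speed_m_s / C) ** 2)
    return (gamma - 1.0) * mass_kg * C**2

missions = [
    ("~1 g sail-and-chip at 0.25 c (assumed mass)", 1e-3, 0.25 * C),
    ("100 kg payload at 0.01 c",                    100.0, 0.01 * C),
    ("10,000 kg payload at 1,000 km/s",             1e4,   1.0e6),
]
for label, m, v in missions:
    print(f"{label}: {kinetic_energy_j(m, v):.2e} J")
```

The interplanetary payloads actually carry far more kinetic energy than the relativistic chip; what changes is how long the beam must dwell on the target and how the energy is delivered.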
If we can build it, a laser array like this can be modular, drawing on mass production for its key elements and thus achieving economies of scale. It is an enabler for interstellar missions but also a tool for building infrastructure in the Solar System:
There are very large economies of scale in such a system in addition to the exponential growth. The system has no expendables, is completely solid state, and can run continuously for years on end. Industrial fiber lasers have MTBF in excess of 50,000 hours. The revolution in solid state lighting including upcoming laser lighting will only further increase the performance and lower costs. The “wall plug” efficiency is excellent at 42% as of this year. The same basic system can also be used as a phased array telescope for the receive side in the laser communications as well as for future kilometer-scale telescopes for specialized applications such as spectroscopy of exoplanet atmospheres and high redshift cosmology studies…
Such capabilities have to be matched against the complications inevitable in such a design. These ideas rely on industrial capacity catching up, a process eased by finding technologies driven by other sectors or produced in mass quantities so as to reach the needed price point. A major issue: Can laser amplifiers follow what is happening in the current LED lighting market, where costs continue to plummet? A similar trajectory in laser amplifiers would, over the next 20 years, reduce their cost enough that it would not dominate the overall system cost.
This is problematic. Lubin and Cohen point out that LED costs are driven by the large volume needed. There is no such demand in laser amplifiers. Can we expect the exponential growth to continue in this area? I asked Dr. Lubin about this in an email. Given the importance of the issue, I want to quote his response at some length:
There are a number of ways we are looking at the economics of laser amplifiers. Currently we are using fiber based amplifiers pumped by diode lasers. There are other types of amplification that include direct semiconductor amplifiers known as SOA (Semiconductor Optical Amplifier). This is an emerging technology that may be a path forward in the future. This is an example of trying to predict the future based on current technology. Often the future is not just “more of the same” but rather the future often is disrupted by new technologies. This is part of a future we refer to as “integrated photonics” where the phase shifting and amplification are done “on wafer” much like computation is done “on wafer” with the CPU, memory, GPU and auxiliary electronics all integrated in a single elements (chip/ wafer).
Lubin uses the analogy of a modern personal computer as compared to an ENIAC machine from 1943, as we went from room-sized computers that drew 100 kW to something that, today, we can hold in our hands and carry in our pockets. We enjoy a modern version that is about 1 billion times faster and features a billion times the memory. And he continues:
In the case of our current technique of using fiber based amplifiers the “intrinsic raw materials cost” of the fiber laser amplifier is very low and if you look at every part of the full system, the intrinsic costs are quite low per sub element. This works to our advantage as we can test the basic system performance incrementally and as we enlarge the scale to increase its capability, we will be able to reduce the final costs due to the continuing exponential growth in technology. To some extent this is similar to deploying solar PV [photovoltaics]. The more we deploy the cheaper it gets per watt deployed, and what was not long ago conceivable in terms of scale is now readily accomplished.
Hence the need to work out how to optimize the cost of the laser array that is critical to a beamed energy propulsion infrastructure. The paper is offered as an attempt to produce such a cost function, taking in the wide range of system parameters and their complex connections. Comparing their results to past NASA programs, Lubin and Cohen point out that exponential technologies fundamentally change the game, with the cost of the research and development phase being amortized over decades. Moreover, directed energy systems are driven by market factors in areas as diverse as telecommunications and commercial electronics in a long-term development phase.
An effective cost model generates the best cost given the parameters necessary to produce a product. A cost function that takes into account the complex interconnections here is, to say the least, challenging, and I leave the reader to explore the equations the authors develop in the search for cost minimums, relating system parameters to the physics. Thus speed and mass are related to power, array size, wavelength, and so on. The model also examines staged system goals – in other words, it considers the various milestones that can be achieved as the system grows.
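As a purely illustrative toy of the kind of trade the full model formalizes, consider a total cost with one exponentially falling term (the photonics) and one slowly inflating term (labor, land, conventional construction). Every coefficient below is invented for the sketch; none come from the paper:

```python
# Toy 'time of entry' trade-off: photonics costs fall exponentially while
# conventional costs inflate, so total program cost has a minimum in time.
def total_cost(t_yr,
               power_w=1e8,        # hypothetical array power, W
               laser_cost0=100.0,  # $/W today (placeholder)
               halving_yr=1.5,     # assumed photonics cost-halving time
               fixed_cost0=5e8,    # site, labor, optics today (placeholder)
               inflation=0.03):    # annual growth of non-exponential costs
    laser = power_w * laser_cost0 * 0.5 ** (t_yr / halving_yr)
    fixed = fixed_cost0 * (1.0 + inflation) ** t_yr
    return laser + fixed

best_year = min(range(41), key=total_cost)
print(f"Toy cost minimum at entry year {best_year}: ~${total_cost(best_year):.2e}")
```

The real model has many more coupled parameters (array size, wavelength, spacecraft mass, desired speed), but the qualitative feature is the same: waiting buys cheaper photonics at the price of inflation and delayed knowledge.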
Bear in mind that this is a cost model, not a cost estimate, which the authors argue would not be credible given the long-term nature of the proposed program. But it’s a model based on cost expectations drawn from existing technologies. We can see that the worldwide photonics market is expected to exceed $1 trillion by this year (growing from $180 billion in 2016), with annual growth rates of 20 percent.
These are numbers that dwarf the current chemical launch industry; Lubin and Cohen consider them to reveal the “engine upon which a DE program would be propelled” through the integration of photonics and mass production. While fundamental physics drives the analytical cost model, it is the long term emerging trends that set the cost parameters in the model.
Today’s paper is Lubin & Cohen, “The Economics of Interstellar Flight,” to be published in a special issue of Acta Astronautica (preprint).
Interstellar Reach: The Challenge of Beamed Energy
I’ve learned that you can’t assume anything when giving a public talk about the challenge of interstellar flight. For a lot of people, the kind of distances we’re talking about are unknown. I always start with the distances we’ve reached with spacecraft thus far, measured in the hundreds of AU. With Voyager 1 now almost 156 AU out, I can get a rise out of the audience by showing a slide of the Earth at 1 AU, and I can mention a speed: 17.1 kilometers per second. We can then come around to Proxima Centauri at 260,000 AU. A sense of scale begins to emerge.
But what about propulsion? I’ve been thinking about this in relation to a fundamental gap in our aspirations, moving from today’s rocketry to what may become tomorrow’s relativistic technologies. One thing to get across to an audience is just how little certain things have changed. It was exhilarating, for example, to watch the Ariane booster carry the James Webb Space Telescope aloft, but we’re still using chemical (liquid- and solid-fuel) engines that carry steep limitations. Rockets using fission and fusion engines could ramp up performance, with fusion in particular being attractive if we can master it. But finding ways to leave the fuel behind may be the most attractive option of all.
I was corresponding with Philip Lubin (UC-Santa Barbara) about this in relation to a new paper we’ll be looking at over the next few days. Dr. Lubin makes a strong point on where rocketry has taken us. Let me quote him from a recent email:
…when you look at space propulsion over the past 80 years, we are still using the same rocket design as the V2 only larger But NOT faster. Hence in 80 years we have made incredible strides in exploring our solar system and the universe but our propulsion system is like that of internal combustion engine cars. No real change. Just bigger cars. So for space exploration to date – “just bigger rockets” but “not faster rockets”. [SpaceX’s] Starship is incredible and I love what it will do for humanity but it is fundamentally a large V2 using LOX and CH4 instead of LOX and Alcohol.
The point is that we have to do a lot better if we’re going to talk about practical missions to the stars. Interstellar flight is feasible today if we accept mission durations measured in thousands of years (well over 70,000 years at Voyager 1 speeds to travel the distance to Proxima Centauri). But taking instrumented probes, much less ships with human crews, to the nearest star demands a completely different approach, one that Lubin and team have been exploring at UC-SB. Beamed or ‘directed energy’ systems may do the trick one day if we can master both the technology and the economics.
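The 70,000-year figure is easy to verify from the numbers earlier in this post (Proxima at roughly 260,000 AU, Voyager 1 at about 17.1 km/s):

```python
# Back-of-envelope travel time to Proxima Centauri at Voyager 1's speed.
AU_M = 1.496e11           # meters per astronomical unit
SEC_PER_YEAR = 3.156e7    # seconds per year

distance_m = 260_000 * AU_M
speed_m_s = 17.1e3
years = distance_m / speed_m_s / SEC_PER_YEAR
print(f"Travel time at Voyager 1 speed: ~{years:,.0f} years")   # ~72,000
```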
Let’s ponder what we’re trying to do. Lubin likes to show the diagram below, which brings out some fundamental issues about how we bring things up to speed. On the one hand we have chemical propulsion, which as the figure hardly needs to note, is not remotely relativistic. At the high end, we have the aspirational goal of highly relativistic acceleration enabled by directed energy – a powerful beam pushing a sail.
Image: This is Figure 1 from “The Economics of Interstellar Flight,” by Philip Lubin and colleague Alexander Cohen (citation below). Caption: Speed and fractional speed of light achieved by human accelerated objects vs. mass of object from sub-atomic to large macroscopic objects. Right side y-axis shows γ − 1, where γ is the relativistic “gamma factor.” (γ − 1) times the rest mass energy is the kinetic energy of the object.
Thinking again of how I might get this across to an audience, I fall back on the energies involved, for as Lubin and Cohen’s paper explains, the energy available in chemical bonds is simply not sufficient for our purposes. It is mind-boggling to follow this through, as the authors do. Take the entire mass of the universe and turn it into chemical propellant. Your goal is to accelerate a single proton with this unimaginable rocket. The final speed you achieve is in the range of 300 to 600 kilometers per second.
That’s fast by Voyager standards, of course, but it’s also just a fraction of light speed (let’s give this a little play and say you might get as high as 0.3 percent), and the payload is no more than a single proton! We need energy levels a billion times that of chemical reactions. We do know how to accelerate elementary particles to relativistic velocities, but as the universe-sized ‘rocket’ analogy makes clear, we can’t dream of doing this through chemical energy. Particle accelerators reach these velocities with electromagnetic means, but we can’t yet do it beyond the particle level.
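The ‘universe as propellant’ result follows from the Tsiolkovsky rocket equation. Here is the arithmetic, with a commonly quoted order-of-magnitude figure for the ordinary matter in the observable universe standing in for the propellant mass (an assumption of mine, not a number from the paper):

```python
# Tsiolkovsky rocket equation applied to the 'entire universe as propellant'
# thought experiment: delta_v = v_exhaust * ln(mass_ratio).
import math

V_EXHAUST = 3.0e3     # m/s, typical chemical exhaust velocity
M_PROPELLANT = 1e53   # kg, rough estimate of ordinary matter in the universe
M_PAYLOAD = 1.67e-27  # kg, a single proton

delta_v = V_EXHAUST * math.log(M_PROPELLANT / M_PAYLOAD)
print(f"Final speed ~{delta_v/1e3:.0f} km/s, "
      f"or {delta_v/2.998e8*100:.2f}% of light speed")
```

The logarithm is the killer: even 80 orders of magnitude in mass ratio only buys a factor of about 180 over the exhaust velocity, which is why the answer lands in the few-hundred-km/s range no matter how you tune the inputs.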
Directed energy offers us a way forward but only if we can master the trends in photonics and electronics that can empower this new kind of propulsion in realistic missions. In their new paper, to be published in a special issue of Acta Astronautica, Lubin and Cohen are exploring how we might leverage the power of growing economies and potentially exponential growth in enough key areas to make directed energy work as an economically viable, incrementally growing capability.
Beaming energy to sails should be familiar territory for Centauri Dreams readers. For the past eighteen years, we’ve been looking at solar sails and sails pushed by microwave or laser, concepts that take us back to the mid-20th Century. The contribution of Robert Forward to the idea of sail propulsion was enormous, particularly in spreading the notion within the space community, but sails have been championed by numerous scientists and science fiction authors for decades. Jim Benford, who along with brother Greg performed the first laboratory work on beamed sails, offers a helpful Photon Beam Propulsion Timeline, available in these pages.
In the Lubin and Cohen paper, the authors make the case that two fundamental mission spaces exist for beamed energy. What they call Direct Drive Mode (DDM) uses a highly reflective sail that is accelerated directly by the momentum of the reflected photons. This is the fundamental mechanism for achieving relativistic flight. Some of Bob Forward’s mission concepts could make an interstellar crossing within the lifetime of human crews. In fact, he even developed braking methods using segmented sails that could decelerate at destination for exploration at the target star and eventual return.
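A quick sanity check on direct drive, using the full-scale 100 GW beam figure mentioned elsewhere in these posts and an assumed gram-scale sail-plus-chip: a perfectly reflecting sail feels a thrust of 2P/c, and the resulting acceleration is ferocious.

```python
# Idealized direct-drive numbers for a perfectly reflecting, gram-scale sail.
P = 100e9     # W, beam power (full-scale figure quoted in these posts)
C = 2.998e8   # m/s
MASS = 1e-3   # kg, assumed sail-plus-chip mass

thrust_n = 2 * P / C               # momentum flux of reflected photons
accel = thrust_n / MASS            # m/s^2
t_to_quarter_c = 0.25 * C / accel  # ignores relativity and beam divergence

print(f"Thrust ~{thrust_n:.0f} N, acceleration ~{accel:.1e} m/s^2")
print(f"Time to 0.25 c ~{t_to_quarter_c/60:.1f} minutes (idealized)")
```

That minutes-long acceleration run is why the full-scale concepts pair enormous beam power with vanishingly small spacecraft.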
Lubin and Cohen also see an Indirect Drive Mode (IDM), which relies on beamed energy to power up an onboard ion engine that then provides the thrust. My friend Al Jackson, working with Daniel Whitmire, did an early analysis of such a system (see Rocketry on a Beam of Light). The difference is sharp: A system like this carries fuel onboard, unlike its Direct Drive Mode cousin, and thus has limits that make it best suited to work within the Solar System. While ruling out high mass missions to the stars, this mode offers huge advantages for reaching deep into the system, carrying high mass payloads to the outer planets and beyond. From the paper:
…for the same mission thrust desired, an IDM approach uses much lower power BUT achieves much lower final speed. For solar system missions with high mass, the final speeds are typically of order 100 km/s and hence an IDM approach is generally economically preferred. Another way to think of this is that a system designed for a low mass relativistic mission can also be used in an IDM approach for a high mass, low speed mission.
We shouldn’t play down IDM because it isn’t suited for interstellar missions. Fast missions to Mars are a powerful early incentive, while projecting power to spacecraft and eventual human outposts deeper in the Solar System is a major step forward. Beamed propulsion is not a case of a specific technology for a single deep space mission, but rather a series of developing systems that advance our reach. The fact that such systems can play a role in planetary defense is a not inconsiderable benefit.
Image: Beamed propulsion leaves propellant behind, a key advantage. It could provide a path for missions to the nearest stars. Credit: Adrian Mann.
If we’re going to analyze how we go from here, where we’re at the level of lab experiments, to there, with functioning directed energy missions, we have to examine these trends in terms of their likely staging points. What I mean is that we’re looking not at a single breakthrough that we immediately turn into a mission, but a series of incremental steps that ride the economic wave that can drive down costs. Each incremental step offers scientific payoff as our technological prowess develops.
Getting to interstellar flight demands patience. In economic terms, we’re dealing with moving targets, making the assessment at each stage complicated. Think of photovoltaic arrays of the kind we use to feed power to our spacecraft. As Lubin and Cohen point out, until recently the cost of solar panels was the dominant economic fact about implementing this technology. Today, this is no longer true. Now it’s background factors – installation, wiring, etc. – that dominate the cost. We’ll get into this more in the next post, but the point is that when looking at a long-term outcome, we have a number of changing factors that must be considered.
Some parts of a directed energy system show exponential growth, such as photonics and electronics. And some do not. The costs of metals, concrete and glass move at anything but exponential rates. What “The Economics of Interstellar Flight” considers is a cost model that minimizes the cost for a specific outcome.
To do this, the authors have to consider the system parameters, such things as the power array that will feed the spacecraft, its diameter, the wavelength in use. And you can see the complication: When some key technologies are growing at exponential rates, time becomes a major issue. A longer wait means lower costs, while the cost of labor, land and launch may well increase with time. We can also see a ‘knowledge cost’: Wait time delays knowledge acquisition. As the authors note in relation to lasers:
The other complication is that many system parameters are interconnected and there is the severe issue that we do not currently have the capacity to produce the required laser power levels we will need and hence industrial capacity will have to catch up, but we do not want to be the sole customer. Hence, finding technologies that are driven by other sectors or adopting technologies produced in mass quantity for other sectors may be required to get to the desired economic price point.
System costs, in other words, are dynamic, given that some technologies are seeing exponential growth and others are not, making a calculation of what the authors call ‘time of entry’ for any given space milestone a challenging goal. I want to carry this discussion of how the burgeoning electronics and photonics industries – driven by powerful trends in consumer spending – factor into our space ambitions into the next post. We’ll look at how dreams of Centauri may eventually be achieved through a series of steps that demand a long-term, deliberate approach relying on economic growth.
The paper is Lubin & Cohen, “The Economics of Interstellar Flight,” to be published in Acta Astronautica (preprint).
Energetics of Archaean Life in the Ocean Vents
If SETI is all about intelligence, and specifically technology, at the other end of astrobiology is the question of abiogenesis. Does life of any kind in fact occur elsewhere, or does Earth occupy a unique space in the scheme of things? Alex Tolley looks today at one venue where life may evolve, deep inside planetary crusts, with implications that include what we may find “locally” at places like Europa or Titan. In doing so, he takes a deep dive into a new paper from Jeffrey Dick and Everett Shock, while going on to speculate on broader questions forced by life’s emergence. Organisms appearing in the kind of regions we are discussing today would doubtless be undetectable by our telescopes, but with favorable energetics, deep ocean floors may spawn abundant life outside the conventional habitable zone, just as they have done within our own ‘goldilocks’ world.
by Alex Tolley
Are the deep hot ocean vents more suitable for life than previously thought?
In a previous article [1] I explored the possibility that while we think of hot planetary cores and tidal heating of icy moons as the drivers that maintain liquid water and potentially support chemotrophic life at the crust-ocean interface, radiolysis can also provide the means to do the same and allow life to exist at depth in the crust despite the most hostile of surface conditions. On Earth we have evidence of a lithospheric biosphere extending to a depth of over 1 kilometer, and the geothermal gradient suggests that extremophiles could live several kilometers down in the crust.
Scientists are actively searching for biosignatures in the crust of Mars, away from the UV, radiation, and toxic conditions on the surface examined by previous landers and rovers. Plans are also being drawn up to look for biosignatures in Jupiter’s icy moon Europa, where hot vents at the bottom of a subsurface ocean could host life. It is hypothesized that Titan may have liquid water at depth below its hydrocarbon surface, and even frozen Pluto may have liquid water deep below its surface of frozen gases. The dwarf planet Ceres also may have a slushy, salty ocean beneath its surface as salts left by cryovolcanism indicate. Conditions conducive to supporting life may be common once we look beyond the surface conditions, and therefore subsurface biospheres might be more common than our terrestrial one.
Image: Rainbow vent field. Credit: Royal Netherlands Institute for Sea Research.
The conditions of heat and ionizing radiation at depth, coupled with the appropriate geology and water, are energetically favorable for splitting hydrogen (H2) from water and then reducing carbon dioxide (CO2) to methane (CH4) via the serpentinization reaction. Chemotrophs feed on this reduced carbon as fuel to power their metabolisms. The reaction has an energy barrier that leaves more reactants and less product than would be expected at equilibrium. Because the reaction energetics are favorable, life evolves to exploit them, with catalytic metabolic pathways that overcome the energy barrier and allow equilibrium to be approached, capturing the reaction energy.
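For concreteness, simplified textbook forms of the two reactions in play (my shorthand, not equations from the paper; the real mineral chemistry is messier): serpentinization of the iron end-member of olivine liberates hydrogen, and methanogenesis then uses that hydrogen to reduce CO2:

$$3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}$$

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}$$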
Biologists now classify life into three domains: the bacteria, the eukaryotes, and the archaea. The bacteria are an extremely diverse group, accounting for more species than any other domain on Earth. They can transfer genes between species, allowing for rapid evolution and adaptation to conditions. [It is this horizontal gene transfer that can create antibiotic resistance in bacteria never previously exposed to these treatments.] The eukaryotes, which include the plants, animals and fungi, range from single-celled organisms such as yeast and photosynthetic algae to complex organisms spanning all the main animal phyla from sponges to vertebrates. The archaea were only relatively recently (1977) recognized as a distinct domain, separate from the bacteria. Archaea include many of the extremophiles, but perhaps most importantly, they exploit the reduction of CO2 with H2 to produce CH4. These archaea are called autotrophic methanogens and require anaerobic conditions. The CH4 is released into the environment, just as plants release oxygen (O2) from photosynthesis. In close proximity to the hot, reducing ocean vent conditions, cold, oxygenated seawater supports aerobic metabolisms, resulting in a biologically rich ecosystem despite the almost lightless conditions in the abyssal ocean depths.
While CH4 and other reduced carbon compounds are produced both abiotically and biotically, we tend to assume that the formation of biological compounds such as amino acids requires energy released from the metabolism of fixed carbon from autotrophs, whether that carbon is CH4 or the sugars and fats used by complex organisms. While this is the case in the temperate conditions at the Earth’s surface, metabolic energy inputs do not appear to be needed under some ocean vent conditions.
The energetics of amino acid and protein synthesis is explored in a new paper by collaborators Jeffrey Dick and Everett Shock [2], building on their prior work. The paper examines conditions at two vent fields, Rainbow and Endeavour, compares the energetics of amino acids in those locations, and relates the findings to the proteins of the biota. The two vent fields have very different geologies. The Rainbow vent field is located on the Mid-Atlantic Ridge near the Azores and is composed of ultramafic mantle rock extruded as the tectonic plates are driven apart, slowly widening the Atlantic ocean. In contrast, the Endeavour vent field is located in the eastern Pacific ocean, southwest of Canada’s British Columbia province, and is part of the Juan de Fuca Ridge. It is principally composed of basalt, a volcanic mafic rock.
Mafic rocks such as basalt have a silica (SiO2) content of 45-53%, with smaller fractions of ferrous oxide, alumina, calcium oxide, and magnesium oxide, while ultramafic peridotites, rich in the mineral olivine, have a SiO2 content below 45% and are mainly composed of magnesium-iron silicate [(Mg,Fe)2SiO4]. As a result of the difference in composition and structure, ultramafic rocks produce more hydrogen than the higher-SiO2 mafic rocks.
Typically, the iron sequesters the oxygen from the serpentinization reaction to form magnetite (Fe3O4), preventing the H2 and CH4 from being oxidized. The authors use the chemical affinity measure, Ar, to explore the energetic favorability of the production of CH4, amino acids, and proteins. The chemical affinity is positive if the reaction would release Gibbs free energy on proceeding to equilibrium and is being held short of it; that is, there are more reactants and less product than equilibrium would indicate. Positive chemical affinities indicate that there is energy to be gained from the reaction reaching equilibrium.
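The affinity being used here is the standard thermodynamic quantity (my restatement of the usual definition):

$$A_r = -\Delta G_r = RT\,\ln\!\left(\frac{K}{Q}\right),$$

where K is the equilibrium constant and Q the activity quotient of the actual vent-fluid mixture. A positive Ar means the reaction is held short of equilibrium and can still release energy on the way there, which is exactly the condition an organism can exploit.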
Figure 2 below shows the calculated chemical affinity values at the two ocean vent fields, Rainbow and Endeavour, across the range of temperatures produced as the hot vent water mixes with the cold surrounding seawater. They show that the ultramafic geology at Rainbow has positive affinities for both CH4 and most amino acids, while Endeavour has positive, but lower, affinities for CH4 and negative affinities for amino acids. Not only does the Endeavour field have lower CH4 affinities than Rainbow at any temperature, but its affinities only become positive above a cutoff temperature of about 100C, well above that of Rainbow. As few organisms can live above this temperature, this indicates that methanogens living at Endeavour cannot use the potential free energy of CH4 synthesis to power their metabolisms.
Figure 2b shows that the peak affinities for the amino acids at Rainbow occur at around 30-40C, similar to that of CH4. While there is a range of temperatures at Rainbow where most amino acids have positive affinities, allowing organisms to gain energy from amino acid synthesis, the conditions at Endeavour exclude this possibility entirely. As a result, Rainbow vents have conditions that life can exploit to extract energy from amino acid, and hence protein, production, whilst this is not available to organisms at Endeavour.
Exploitation of these affinities by life at these two vent fields means that autotrophic methanogens are only likely to gain metabolic energy from producing CH4, and from anabolic metabolism to produce many amino acids, at Rainbow, not at Endeavour. This would suggest that the Rainbow environment is more conducive to the growth of methanogens, whilst Endeavour offers them little competitive advantage over other chemotrophs.
Figure 1. The 20 amino acids and their letter codes needed to interpret figure 2b.
Figure 2. a. CH4 production releases more energy at the Rainbow hot vent field, with its ultramafic geology, than at the mafic Endeavour field when the hot fluids at the vent are mixed with greater amounts of cold 2C seawater to reduce the temperature. b. The energetics of amino acid formation at Rainbow. More than half the amino acids are energetically favored. c. None of the amino acids are energetically favored at Endeavour, primarily due to the much lower molar H2 concentrations there.
Figure 2b shows that some amino acids release energy when hot 350C water carrying reactants from the Rainbow vents is mixed with cold seawater (approximately 6-10x dilution), while others require energy. The low H2 concentration in samples from the Endeavour vents, about 25x more dilute, accounts for the negative affinities across all mixing temperatures at Endeavour. Why might this difference in affinities between amino acids exist? One explanation is shown in figure 3a, which shows the oxidation values (Zc) of the amino acids. [Zc is a function of the oxidizing elements and charge, normalized by the number of carbon atoms of each amino acid. This sets the range of values to [-1.0, 1.0].] Notably, the amino acids more energetically favored in figure 2b also tend to be the least oxidized; that is, they are mostly non-polar, hydrophobic amino acids with C-H bonds dominating.
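To make the Zc measure concrete, here is how a nominal average carbon oxidation state can be computed from a molecular formula. This is a common definition consistent with the [-1.0, 1.0] range quoted above; I am assuming it matches the one used in the paper:

```python
# Nominal average oxidation state of carbon, Zc, for a molecule C_c H_h N_n O_o S_s
# (assumed neutral). Oxygen, nitrogen and sulfur count as oxidizing, hydrogen as reducing.
def zc(c, h, n=0, o=0, s=0, charge=0):
    return (3 * n + 2 * o + 2 * s - h + charge) / c

amino_acids = {                    # (C, H, N, O, S)
    "glycine (G)":       (2, 5, 1, 2, 0),
    "alanine (A)":       (3, 7, 1, 2, 0),
    "glutamic acid (E)": (5, 9, 1, 4, 0),
    "methionine (M)":    (5, 11, 1, 2, 1),
    "leucine (L)":       (6, 13, 1, 2, 0),
}
for name, formula in amino_acids.items():
    print(f"{name:18s} Zc = {zc(*formula):+.2f}")
```

Glycine comes out at +1.0 and leucine at −1.0, the two ends of the quoted range; the hydrophobic, C-H-rich residues cluster at the reduced end, consistent with figure 3a.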
Figure 3. a. The oxidation level of the amino acids. The higher the Zc value, the greater the number of oxidizing and polar atoms composing the amino acid. b. Histogram of all the proteins in the archaean Methanocaldococcus jannaschii based on their average per-carbon oxidation score.
Figure 3b shows the distribution of the Zc scores for the proteins of the archaean Methanocaldococcus jannaschii, which is found in samples from the Rainbow field. The distribution is notably skewed towards the more reduced proteins. The authors suggest that this may be associated with the amino acids that have higher affinities, whose energy release on formation can therefore be exploited by M. jannaschii.
The paper shows that all the organism’s proteins, with their varying amino acid sequences, have positive affinities from 0C to nearly 100C. As M. jannaschii has a preferred growth temperature of 85C, its whole protein production yields a net energy gain rather than requiring energy at this vent field, an energetic advantage it would not have if living at Endeavour. One might also expect that other methanogens, with optimal growth at lower temperatures closer to those of the peak affinity values, would have a competitive advantage.
As the authors state:
“Keeping in mind that temperature and composition are explicitly linked, these results show that the conditions generated during fluid mixing at ultramafic-hosted submarine hydrothermal systems are highly conducive to the formation of all of the proteins used by M. jannaschii.”
As the archaea already exploit the energetics of methane formation, do they also exploit the favorability of certain amino acids in the composition of their proteins, which are also favored energetically as the peptide bonds are formed?
While figure 3b is an interesting observation for one archaean species found at Rainbow, a natural question to ask is whether the differing energetic favorability of certain amino acids is exploited by organisms at the vents by biasing the amino acid sequences of their proteomes, or whether this distribution is common across similar organisms, both hot vent-living and surface-living, among methanogenic archaea and other types of bacteria.
To put the M. jannaschii proteome Zc distribution in context, I have extended the authors’ analysis to other archaea and bacteria, living in hot vents, hypersaline, and constant mild temperature environments. Figure 4 shows the proteome Zc score distribution for 9 organisms. The black distributions are for the archaeans, and the red distributions for bacteria. The distributions for M. jannaschii and the model gut-living bacterium Escherichia coli are bolded.
Figure 4. Histogram of proteome oxidation for various archaea (black) and bacteria (red). Several archaea living at high temperatures are clustered together. The facultatively anaerobic, gut-living E. coli has a very different distribution. The bacterium Prosthecochloris, which also lives in the hot vents, has a distribution more similar to E. coli, whilst the hot vent-living T. hydrothermale has a distribution between the vent-living archaea and E. coli. Two of the archaea also have distributions that deviate from the vent archaeans, one of which is adapted to hot, hypersaline volcanic pools on the surface. (source author, Alex Tolley)
Figure 4 suggests that the explanation is more complex than simply the energetics as reflected in the proteome’s amino acid composition.
Firstly, the proteome distributions of M. jannaschii and E. coli are very different. They represent different domains of life, inhabit very different environments, and only M. jannaschii is a methanogen. So we have a number of different variables to consider.
Several archaea, all methanogens living in vents at different preferred temperatures, have similar proteome Zc score distributions. The two hypersaline archaea, Candidatus sp., have distributions biased towards higher Zc scores, which may reflect proteomes evolved to handle high salt concentrations. One is likely a methanogen, yet its distribution is biased to an even higher Zc score than the other. Of the bacteria, the hot vent-living Thermotomaculum hydrothermale has a Zc score distribution between E. coli and the similar archaean group. It is not a methanogen, but possibly it exploits the amino acid affinities of the hot vent environment.
The other vent-living bacterium, Prosthecochloris sp., has a distribution like that of E. coli. It is a photosynthesizing green sulfur bacterium that extracts geothermal light energy. It is not a methanogen. It is found in the sulfur-rich smoker vents of the East Pacific Rise.
There seem to be two main possible explanations for the proteomic Zc distributions. Firstly, it may be due to a bias in the selection of amino acids that release energy on formation in the H2-rich Rainbow habitat. Secondly, it could be the types of protein secondary structures needed for methanogenesis, so that structural reasons are the cause.
Figure 5 shows the protein structures for three approximately matching Zc scores and sequence length for M. jannaschii and E. coli. What stands out is that the lower the Zc score, the more the alpha-helix secondary structure appears in the protein tertiary structure. Both organisms appear to have similar secondary structure compositions when the Zc scores are matched, suggesting that the distribution differences are due to the numbers of proteins with alpha-helix structures rather than some fundamental difference in the sequences. Is this a clue to the underlying distribution?
The amino acids that principally appear in helices are the “MALEK” set: methionine, alanine, leucine, glutamic acid and lysine [6]. A helix made up of equal amounts of each of these amino acids has a Zc score of -0.4, which falls within the positive affinity range of amino acids at Rainbow, as shown in figure 2b. This is highly suggestive that the reason for the different distributions is a bias towards proteins with an abundance of alpha-helix secondary structures.
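That -0.4 figure checks out if you carbon-weight the Zc values of the five residues (same Zc definition as in the earlier sketch, so treat this as my own arithmetic rather than anything from the paper):

```python
# Carbon-weighted Zc of an equal-parts "MALEK" helix.
malek = {            # residue: (carbon atoms, Zc of the free amino acid)
    "M": (5, -0.4),
    "A": (3,  0.0),
    "L": (6, -1.0),
    "E": (5,  0.4),
    "K": (6, -2.0 / 3.0),
}
total_carbon = sum(c for c, _ in malek.values())
weighted_sum = sum(c * z for c, z in malek.values())
print(f"MALEK helix Zc = {weighted_sum / total_carbon:+.2f}")   # -> -0.40
```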
Figure 5. Comparison of selected proteins from M. jannaschii and E. coli spanning the range of Zc scores.
Which proteins might be those with sequences rich in alpha-helix structures? As methanogenesis is a distinguishing feature of these archaea, a good start is to look at the proteins involved in methane metabolism.
Figure 6 shows the methanogenesis pathways of methane metabolism highlighted. The genes associated with the methanogenesis annotated proteins of M. jannaschii are boxed in blue and are mostly connected with the early CO2 metabolism. From this, some proteins were selected that had tertiary structure available to be viewed in the Uniprot database [3].
Figure 6. Methane metabolism pathway highlighted. Source: Kegg database [4].
Figure 7. Selected proteins from the methane metabolism pathway of archaea showing the predominance of helix structures. [The Kegg #.#.#.# identifiers are shown to map to figure 6.]
The paucity of good, available tertiary protein structures for the methanogenesis pathways makes this selection more anecdotal than analytic. The selected proteins do suggest that they are largely composed of alpha-helices. If the methanogenesis pathways are more heavily populated with proteins having helical structures, then the explanation for the proteome distributions of the hot vent-living archaeans might hold.
In other words, it is not the energetic favorability in particular that determines the proteome composition, but rather the types of metabolic pathways, most likely methanogenesis, that are responsible. It should be noted that each function in the Kegg pathway is not filled by one unique protein; several closely related genes/proteins can be involved in the same function.
Figure 8. Cumulative distribution of proteins for methanogenesis and amino acid metabolism for M. jannaschii. The methanogenesis proteins are biased towards the lower Zc values, indicating a greater probability of alpha-helix structures.
Figure 8 shows the normalized cumulative distributions of 15 methanogenesis proteins and 58 amino acid metabolizing proteins that have been well identified for M. jannaschii.
The distribution clearly shows a bias towards lower Zc values for the methanogenesis proteins compared with the more widely distributed amino acid metabolic proteins. While not definitive, it suggests that the differences in proteome Zc score distribution between organisms may be accounted for by the presence and numbers of methanogenesis proteins.
Lastly, I want to touch on some speculation on the larger question of abiogenesis. It is unclear whether bacteria or archaea are the older life forms, closer to the last universal common ancestor (LUCA). Because the archaea share some similarities with the eukaryotes, either the bacteria are the earlier form, or they are a later form that branched off from the archaea, with the eukaryotes evolving from the archaean branch. The attractiveness of the archaea as the most ancestral forms, as their domain name suggests, lies in their extremophile nature and their ability as autotrophs to extract energy from the geologic production of H2 to form CH4, rather than consuming CH4, a reaction that has been shown to sit out of equilibrium because of the energy barrier to completing it.
If so, does the energetic favorability of amino acid formation at ultramafic hot vent locations suggest a possible route to abiogenesis via a metabolism-first model? While the abiotic reactions that create amino acids may proceed only slowly, the products could accumulate over time as long as the reverse reactions that degrade them are largely absent. As peptide bonds are energetically favored, oligopeptides and proteins could form abiotically at the vents as the hot fluids mix with the cold ocean water.
If so, could random small proteins form autocatalytic sets that lead to metabolism and reproduction? A number of experiments indicate that amino acids will spontaneously link together and that the resulting peptides can be autocatalytic for self-replication. Peptides replacing the sugar-phosphate backbone can link nucleobases that can also replicate, a variation on the replication held to be a feature of the RNA World model.
But there is a potential fly in the ointment of this explanation of abiogenic protein formation. Such proteins should be formed from amino acids of both L and D chiral forms. Life has selected one form and is homochiral, a feature suggested as a marker of biological origin for any extraterrestrial organic molecules we might detect. Experiments have suggested that any small bias in chirality, due perhaps to the crystal surface structure of the rocks, can lead to an exponential dominance of one chiral form over the other. Ribo et al. published a review of this spontaneous mirror symmetry breaking (SMSB) [5].
So we have a possible model of abiotically formed peptides of random amino acid sequences that collect in the pores of rocks at the vents and may be surrounded by lipid membranes. The proteins can both form metabolic pathways and self-replicate. If the peptides mostly form self-replicating helices, and these can be co-opted to further extract energy via methanogenesis, then we have a possible model for the emergence of life.
As my earlier article speculated that radiolysis could ensure that chemotrophs in the crust of a wide variety of planets and moons could be supported, we can now speculate that the favorable energetics of amino acid and protein formation may also drive the emergence of life.
As autotrophic organisms like archaea can evolve to exploit the energetics of CH4 and protein production under favorable conditions at seafloor vents, and support the evolving ecosystems of chemotrophs, this suggests that abiotic reactions may have started the process that evolved into the sophisticated methanogenesis pathways of methanogens we see today.
If correct, life may be common in the galaxy wherever the conditions are right: wherever ultramafic mantle rocks, heated from below by various means, are in contact with cold ocean water, whether on a planet similar to the early Earth or at the boundary between the rocky interior and the deep subsurface oceans of icy moons outside the bounds of the traditional habitable zone.
References
Tolley, A “Radiolytic H2: Powering Subsurface Biospheres” (2021) URL accessed 12/01/2021:
https://www.centauri-dreams.org/2021/07/02/radiolytic-h2-powering-subsurface-biospheres/
Dick, J., Shock, E. “The Release of Energy During Protein Synthesis at Ultramafic-Hosted Submarine Hydrothermal Ecosystems” (2021) Journal of Geophysical Research: Biogeosciences, v126:11.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021JG006436
UniProt database
uniprot.org
KEGG database
genome.jp/kegg/
Ribó, J. et al “Spontaneous mirror symmetry breaking and origin of biological homochirality” (2017) Journal of the Royal Society Interface, v14:137
https://royalsocietypublishing.org/doi/10.1098/rsif.2017.0699
Alpha-Helix
https://en.wikipedia.org/wiki/Alpha_helix
The ‘Disintegrating Planet’ Factor
Using machine learning to provide an algorithmic approach to the abundant data generated by the Transiting Exoplanet Survey Satellite (TESS) has proven unusually productive. I’m looking at an odd object called TIC 400799224, as described in a new paper in The Astronomical Journal from Brian Powell (NASA GSFC) and team, a source that displays a sudden drop in brightness – 25% in a matter of four hours – followed by a series of brightness variations. What’s going on here?
We’re looking at something that will have to be added to a small catalog of orbiting objects that emit dust; seven of these are presented in the paper, including this latest one. The first to turn up was KIC 12557548, whose discovery paper in 2012 argued that the object was a disintegrating planet emitting a dust cloud, a model that was refined in subsequent analyses. K2-22b, discovered in 2015, showed similar features, with varying transit depths and shapes, although no signs of gas absorption.
In fact, the objects in what we can call our ‘disintegrating planet catalog’ are rather fascinating. WD 1145+017 is a white dwarf showing evidence for orbiting bodies emitting dust, each with a mass comparable to our Moon’s; these appear to be concentrations of dust rather than solid bodies. Another find, ZTF J0139+5245, may turn out to be a white dwarf orbited by extensive planetary debris.
So TIC 400799224 isn’t entirely unusual in showing variable transit depths and durations, a possibly disintegrating body whose transits may or may not occur when expected. But dig deeper and, the authors argue, this object may be in a category of its own. This is a widely separated binary system, the stars approximately 300 AU apart, and at this point it is not clear which of the two stars is the host to the flux variations. The light curve dips are found only in one out of every three to five transits.
All of this makes it likely that what is occulting the star is some kind of dust cloud. Studying the TESS data and following up with a variety of ground-based instruments, the authors make the case: One of the stars is pulsating with a 19.77 day period that is probably the result of an orbiting body emitting clouds of dust. The dust cloud involved is substantial enough to block between 37% and 75% of the star’s light, depending on which of the two stars is the host. But while the quantity of dust emitted is large, the periodicity of the dips has remained the same over six years of observation.
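To get a feel for the size such a cloud implies, a crude geometric estimate helps: treat the occulter as an opaque patch in front of a uniform stellar disk, so the blocked fraction is simply the ratio of the areas. This is my own simplification for illustration (real dust is translucent and the host star’s radius is not pinned down here), not a calculation from the paper.

```python
# Crude estimate: radius of an opaque circular patch needed to block a given
# fraction of a star's light, assuming the patch sits fully in front of a
# uniform stellar disk. Sun-like radius assumed purely for illustration.
import math

R_STAR_KM = 695_700  # assumed Sun-like stellar radius, km

def occulter_radius(blocked_fraction, r_star_km=R_STAR_KM):
    """Radius of the opaque patch whose area covers the given fraction of the disk."""
    return math.sqrt(blocked_fraction) * r_star_km

for f in (0.37, 0.75):
    r = occulter_radius(f)
    print(f"blocking {f:.0%} -> effective radius ~ {r:,.0f} km "
          f"({r / R_STAR_KM:.2f} stellar radii)")
```

Either way, the effective occulter spans a substantial fraction of the stellar disk itself, far larger than any plausible solid body.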
Image: An optical/near-infrared image of the sky around the TESS Input Catalog (TIC) object TIC 400799224 (the crosshair marks the location of the object, and the width of the field of view is given in arcminutes). Astronomers have concluded that the mysterious periodic variations in the light from this object are caused by an orbiting body that periodically emits clouds of dust that occult the star. Credit: Powell et al., 2021.
How is this object producing so much dust, and how does it remain intact, with no apparent variation in periodicity? The authors consider sublimation as a possibility but find that it doesn’t replicate the mass loss rate found in TIC 400799224. Also possible: A ‘shepherding’ planet embedded within the dust, although here we would expect more consistent light curves from one transit to the next. Far more likely is a series of collisions with a minor planet. Let me quote the paper on this:
A long-term (at least years) phase coherence in the dips requires a principal body that is undergoing collisions with minor bodies, i.e., ones that (i) do not destroy it, and (ii) do not even change its basic orbital period. The collisions must be fairly regular (at least 20-30 over the last 6 years) and occur at the same orbital phase of the principal body.
This scenario emerges, in the authors’ estimate, as the most likely:
Consider, for example, that there is a 100 km asteroid in a 20 day orbit around TIC 400799224. Further suppose there are numerous other substantial, but smaller (e.g., ≲1/10th the radius), asteroids in near and crossing orbits. Perhaps this condition was set up in the first place by a massive collision between two larger bodies. Once there has been such a collision, all the debris returns on the next orbit to nearly the same region in space. This high concentration of bodies naturally leads to subsequent collisions at the same orbital phase. Each subsequent collision produces a debris cloud, presumably containing considerable dust and small particles, which expands and contracts vertically, while spreading azimuthally, as time goes on. This may be sufficient to make one or two dusty transits before the cloud spreads and dissipates. A new collision is then required to make a new dusty transit.
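To put that hypothetical 20-day orbit in spatial terms, Kepler’s third law fixes the orbital distance once a stellar mass is assumed; the sketch below adopts roughly a solar mass purely for illustration, since the quoted scenario specifies only the period.

```python
# Kepler's third law: a^3 = G*M*T^2 / (4*pi^2). Assumes (for illustration only)
# a host star of about one solar mass; the quoted scenario gives only the period.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

def semi_major_axis(period_days, stellar_mass=M_SUN):
    T = period_days * 86400.0
    return (G * stellar_mass * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

a = semi_major_axis(20.0)
print(f"a ~ {a / AU:.2f} AU (~{a / 1e9:.0f} million km)")   # roughly 0.14 AU
```

For a Sun-like host that puts the colliding debris at roughly 0.14 AU, close enough in that fresh collision products sweep around to transit again within weeks.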
Amateur astronomers may want to see what they can learn about this object themselves. The authors point out that it’s bright enough to be monitored by ‘modest-size backyard telescopes,’ allowing suitably equipped home observers to look for transits. Such transits should also show up in historical data, giving us further insights into the behavior of the binary and the dust cloud producing this remarkably consistent variation in flux. As noted, the object in question evidently remains intact.
Digression: I mentioned earlier how much machine learning has helped our analysis of TESS data. The paper makes this clear, citing beyond TIC 400799224 such finds as:
- several hundred thousand eclipsing binaries in TESS light curves;
- a confirmed sextuple star system;
- a confirmed quadruple star system;
- many additional quadruple star system candidates;
- numerous triple star system candidates;
- “candidates for higher-order systems that are currently under investigation.”
Algorithmic approaches to light curves are becoming an increasingly valuable part of the exoplanet toolkit, about which we’ll be hearing a great deal more.
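To give a flavor of what an algorithmic pass over a light curve involves at its simplest, the sketch below flags points that fall well below a rolling median of the flux, using a synthetic light curve with one injected 25% dip. It is a generic illustration only; the actual Powell et al. pipeline is a far more sophisticated machine learning search, and every number here is invented for the example.

```python
# Generic dip finder: flag points far below a rolling median of the flux.
# Illustrative only -- not the machine learning pipeline used by Powell et al.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: unit flux, 0.1% noise, one 25% dip injected by hand.
time = np.arange(0.0, 27.0, 0.02)             # days, TESS-sector-like span
flux = 1.0 + 0.001 * rng.standard_normal(time.size)
flux[(time > 12.0) & (time < 12.2)] -= 0.25   # the injected dip

def find_dips(flux, window=101, depth_threshold=0.01):
    """Indices where flux drops below a rolling median by more than the threshold."""
    half = window // 2
    padded = np.pad(flux, half, mode="edge")
    rolling_median = np.array(
        [np.median(padded[i:i + window]) for i in range(flux.size)]
    )
    return np.flatnonzero(rolling_median - flux > depth_threshold)

dips = find_dips(flux)
if dips.size:
    print(f"candidate dip between t = {time[dips[0]]:.2f} "
          f"and t = {time[dips[-1]]:.2f} days")
```

Real searches have to cope with stellar variability, data gaps and instrumental systematics across millions of stars, which is where the machine learning earns its keep.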
The paper is Powell et al, “Mysterious Dust-emitting Object Orbiting TIC 400799224,” The Astronomical Journal Vol. 162, No. 6 (8 December 2021). Full text.
Rogue Planet Discoveries Challenge Formation Models
As we begin the New Year, I want to be sure to catch up with the recent announcement of a discovery regarding ‘rogue’ planets, those interesting worlds that orbit no central star but wander through interstellar space alone (or perhaps with moons). Possibly ejected from their host systems through gravitational interactions (more on this in a moment), such planets become interstellar targets in their own right; given the numbers now being suggested, there may be rogue planets near the Solar System.
Image: Rogue planets are elusive cosmic objects that have masses comparable to those of the planets in our Solar System but do not orbit a star, instead roaming freely on their own. Not many were known until now, but a team of astronomers, using data from several European Southern Observatory (ESO) telescopes and other facilities, has just discovered at least 70 new rogue planets in our galaxy. This is the largest group of rogue planets ever discovered, an important step towards understanding the origins and features of these mysterious galactic nomads. Credit: ESO/COSMIC-DANCE Team/CFHT/Coelum/Gaia/DPAC.
A brief digression on the word ‘interstellar’ in this context. I consider any mission outside the heliosphere to be interstellar, in that it takes the spacecraft into the interstellar medium. Our two Voyagers are in interstellar space – hence NASA’s moniker Voyager Interstellar Mission – even if they were not designed for it. The Sun’s gravitational influence extends much farther, as the Oort Cloud attests, but the heliosphere marks a useful boundary, one that contains the solar wind. The great goal, a mission from one star to another, is obviously the ultimate interstellar leap.
How many rogue planets may be passing through our galactic neighborhood? Without a star to illuminate them, they can be, and have been, searched for via their microlensing signatures. But the new work, in the hands of Hervé Bouy (Laboratoire d’Astrophysique de Bordeaux), uses data not just from the Very Large Telescope but also from other instruments in Chile, including the adjacent VISTA (Visible and Infrared Survey Telescope for Astronomy), the VLT Survey Telescope and the MPG/ESO 2.2-metre telescope.
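A quick back-of-the-envelope calculation shows why microlensing searches for unbound planets are so demanding: the duration of a lensing event scales with the square root of the lens mass. The sketch below uses the standard point-lens Einstein radius with illustrative distances and a typical transverse velocity of my own choosing, none of it specific to this survey.

```python
# Rough microlensing timescales: t_E = R_E / v_perp, with R_E the physical
# Einstein radius at the lens. Distances and velocity are typical illustrative
# values, not parameters of the ESO survey discussed here.
import math

G, c = 6.674e-11, 2.998e8     # SI units
M_SUN = 1.989e30              # kg
KPC = 3.086e19                # m
DAY = 86400.0                 # s

def einstein_time(mass_kg, d_lens=4 * KPC, d_source=8 * KPC, v_perp=200e3):
    """Einstein-radius crossing time (seconds) for a point lens."""
    r_e = math.sqrt(4 * G * mass_kg / c**2 * d_lens * (d_source - d_lens) / d_source)
    return r_e / v_perp

for label, mass in [("1 solar mass  ", M_SUN),
                    ("1 Jupiter mass", 9.55e-4 * M_SUN),
                    ("1 Earth mass  ", 3.0e-6 * M_SUN)]:
    print(f"{label}: t_E ~ {einstein_time(mass) / DAY:5.2f} days")
```

A Jupiter-mass lens produces an event lasting only about a day, easy to miss, which is one reason a deep infrared survey of young, still-glowing planets makes such a useful complement.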
Tens of thousands of wide-field images went into the survey. The target: A star-forming region close to the Sun in the constellations Scorpius and Ophiuchus. The work takes advantage of the fact that planets that are young enough continue to glow brightly in the infrared, allowing us to go beyond microlensing methods to find them. About 70 potential rogue planets turn up in this survey. Núria Miret-Roig (Laboratoire d’Astrophysique de Bordeaux), first author of the paper on this work, comments:
“We measured the tiny motions, the colours and luminosities of tens of millions of sources in a large area of the sky. These measurements allowed us to securely identify the faintest objects in this region, the rogue planets…There could be several billions of these free-floating giant planets roaming freely in the Milky Way without a host star.”
Image: The locations of 115 potential FFPs [free-floating planets] in the direction of the Upper Scorpius and Ophiuchus constellations, highlighted with red circles. The exact number of rogue planets found by the team is between 70 and 170, depending on the age assumed for the study region. This image was created assuming an intermediate age, resulting in a number of planet candidates in between the two extremes of the study. Credit: ESO/N. Risinger (skysurvey.org).
But we need to pause on the issue of stellar age. What these measurements lack is the ability to determine the mass of the discovered objects. Without that, we have problems distinguishing between brown dwarfs – above roughly 13 Jupiter masses – and planets. What Miret-Roig’s team did was to rely on the brightness of the objects to set upper limits on their numbers. Brightness varies with age, so that in older regions, brighter objects are likely above 13 Jupiter masses, while in younger ones, they are assumed to be below that value. The age of the region in question is uncertain enough to yield between 70 and 170 rogue planets.
Planet formation is likewise an interesting question here. I mentioned ejection from planetary systems above, but these free-floating planets are young enough to call that scenario into question. In fact, other mechanisms are discussed in the literature. The paper notes the possibility of core-collapse (a version of star formation), with the variant that a stellar embryo might be ejected from a star-forming nursery before building up sufficient mass to become a star. We need to build up our data on rogue planets to discover the relative contribution of each method to the population.
This work uncovers roughly seven times as many rogue planets as core-collapse models predict, making it likely that other mechanisms are at work:
This excess of FFPs [free-floating planets] with respect to a log-normal mass distribution is in good agreement with the results reported in σ Orionis [a multiple system that is a member of an open cluster in Orion]. Interestingly, our observational mass function also has an excess of low-mass brown dwarfs and FFPs with respect to simulations including both core-collapse and disc fragmentation. This suggests that some of the FFPs in our sample could have formed via fast core-accretion in discs rather than disc fragmentation. We also note that the continuity of the shape of the mass function at the brown dwarf/planetary mass transition suggests a continuity in the formation mechanisms at work for these two classes of objects.
So the formation of rogue planets is considerably more complicated than I made it appear in my opening paragraph. The authors believe that ejection from planetary systems is roughly comparable to core-collapse as a planet formation model. That would imply that dynamical instabilities in exoplanet systems (at least, those containing gas giants) produce ejections frequently within the first 10 million years after formation. Investigating these extremely faint objects further will require the capabilities of future instrumentation like the Extremely Large Telescope.
The paper is Miret-Roig et al, “A rich population of free-floating planets in the Upper Scorpius young stellar association,” published online at Nature Astronomy 22 December 2021 (abstract).