Archives For astrophysics

black hole accretion disk

Apologies for the sudden hiatus everyone. In the last several weeks, life has interfered with any possibility of writing, and when there has been time for anything, it's been occupied by Project X, which actually does concern this blog and will be detailed in the future. But I'm finally back, and back with an astronomical bang, or fast radio bursts (FRBs) to be exact. You see, recently astronomers have been puzzled by extremely energetic radio bursts that last for fractions of a second and vanish forever. It's like a GRB, the birth cry of a newly born black hole, except it's all over in less than the blink of an eye rather than lasting as long as the size of the cataclysm dictates. No one is really sure what FRBs are, where in the night sky they generally originate, or how much energy they're really emitting, repeating the original dilemma posed by GRBs when they were first discovered. Now we have our first theoretical contender: SURONs, or Supramassive Rotating Neutron Stars, the end result of supernovae that should have created black holes but didn't, not yet at least. They're essentially ticking black hole time bombs floating in space.

When our sun dies, it will slowly pulsate and cool into a white dwarf because its mass is below the Chandrasekhar limit, the maximum mass a stellar remnant can support before it becomes too heavy not to collapse in on itself. There are some objects that challenge exactly where this limit comes into play, but it seems to be about 1.44 solar masses. Stars heavy enough to blow past it produce iron in their cores during the last stages of their lives, and the unique thing about iron is that fusing it produces no net energy output. Basically, the strong nuclear force's interactions with iron's nucleons create a point of diminishing returns on nuclear binding energy, and iron's tightly bound nucleus makes it the first element from which a fusion reaction can't extract anything worthwhile. No matter how much iron is being fused, there's just not enough energy to keep the star's outer layers from collapsing inward and detonating as a supernova. This is when another important astronomical limit comes into play, the Tolman-Oppenheimer-Volkoff limit. (Yes, that Oppenheimer.) If the neutron star left after a supernova weighs in at more than about two solar masses, it will collapse on itself into a black hole.
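To get a feel for why that limit sits around two solar masses, it helps to compare the Schwarzschild radius of such an object to the size of a typical neutron star. Here's a quick sketch in Python; the 12 km radius is my own illustrative assumption, not a figure from the research discussed here.

```python
# Back-of-the-envelope: how close is a two solar mass neutron star
# to being a black hole already?
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(mass_kg):
    """Radius below which an object of this mass becomes a black hole."""
    return 2 * G * mass_kg / c**2 / 1000.0

r_s = schwarzschild_radius_km(2.0 * M_SUN)
ns_radius_km = 12.0  # assumed typical neutron star radius

print(f"Schwarzschild radius of 2 solar masses: {r_s:.1f} km")
print(f"Assumed neutron star radius:            {ns_radius_km:.1f} km")
# The horizon radius is already about half the star's size, which is
# why squeezing in much more mass tips the scales toward collapse.
```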

Although "will" is kind of a strong word really; a better one would be "should." And this is exactly where SURONs come into play. Neutron stars are made of degenerate matter, or particles in such a high density environment that the only thing keeping them from falling into each other is, well, each other. Compressing them any more shatters matter as we know it and creates chaotic maelstroms of energy that flow into each other. Degenerate matter at the core of a neutron star can be so hot and dense that it's basically a weird quantum fluid with no viscosity already, so it's not going to take all that much to turn it into a black hole. In fact, SURONs are just over the limit and the pressure of their outer layers should've triggered a collapse, but the particles in their cores were given a brief reprieve. Stars spin, and whatever angular momentum is left after their fiery death has to transfer to the pulsar left behind. Because the star was well over a million miles across and a typical pulsar is tens of miles across, that momentum sends the little pulsar spinning wildly around its axis, possibly as fast as 1,122 times per second. This relieves just enough pressure to keep the core from imploding and leaves the SURON a neutron star spinning wildly through space.
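Conservation of angular momentum is what does the spinning up, and the effect is dramatic because the rotation period shrinks with the square of the radius. Here's a rough Python sketch, treating the collapsing core as a uniform sphere and ignoring the mass blown off in the explosion; the starting radius and rotation period are illustrative assumptions of mine.

```python
# Rough spin-up estimate: with L = I * omega conserved and I ~ M * R^2,
# the rotation period scales as (R_new / R_old)^2.
core_radius_m   = 7.0e8          # assume a core about a solar radius across
pulsar_radius_m = 1.2e4          # ~12 km neutron star left behind
core_period_s   = 25 * 86400.0   # assume a leisurely 25-day rotation

pulsar_period_s = core_period_s * (pulsar_radius_m / core_radius_m) ** 2
spin_hz = 1.0 / pulsar_period_s

print(f"Collapsed spin period: {pulsar_period_s * 1000:.2f} ms")
print(f"Spin rate: ~{spin_hz:.0f} rotations per second")
# Of the same order as the fastest observed millisecond pulsars.
```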

But there's a catch. SURONs have extremely strong magnetic fields, and those fields will interact with the nebula left behind, as will the interactions between their radioactive death beams and the surrounding gas and dust. Over thousands of years, this will all put a brake on how quickly the neutron star spins, which means that at a certain point, the pressure on its core will start building back up until the inevitable happens and the degenerate matter swallows itself and becomes a black hole. Since a SURON is relatively puny, this collapse happens in a fraction of a second. Its fearsome and powerful magnetic fields will be severed from the just-formed event horizon and reconnect very, very violently just outside of it, generating a potent and very short radio pulse. An FRB. This is a nice and tidy explanation because SURONs would be roughly the same size and the event would be pretty much uniform, almost like a Type Ia supernova being used as a standard candle for measuring the rate of the universe's expansion. We don't know if these neutron stars ticking away into new black holes really do dot the sky, and this is not the only possible explanation of FRBs, but it is a pretty good one and it seems quite solid. And that's often as good as it gets in astronomy…

See: Falcke, H., Rezzolla, L. (2013). Fast radio bursts: the last sign of supramassive neutron stars. Astronomy & Astrophysics arXiv: 1307.1409v1

black hole accretion disk

While many news outlets were reporting a new paper showing that a black hole’s accretion disk can accelerate gas to nearly the speed of light by the event horizon’s distortion of the very fabric of time and space around it, they missed something quite important. Yes, knowing for a fact that gas is screaming around the event horizon at relativistic speeds and beaming out violent X-rays we can see for millions of light years is definitely awesome. Every time the universe does such amazing things, just being able to witness it, document it, and understand how it happens tends to be a huge feat. But there’s more to what this tells us than just how the event horizon works or how relativistic frame-dragging is affecting the flow of gas in the accretion disk. It also tells us a little about what must have happened to the black hole to make it as large as it is.

You see, after analyzing the spectrum of X-rays from iron spinning around the event horizon of the supermassive black hole at the center of the galaxy NGC 1365, physicists could rule out the idea that instead of spinning around the event horizon, the gas was just obscuring what really went on around the black hole's maw. There was too much distortion for gas to be in the way of the incoming radiation. But the measurements also show that the event horizon itself is zipping around its own axis so quickly that its "surface," the boundary past which a particle would have to be traveling faster than the speed of light to escape the singularity's pull, is moving at nearly the speed of light itself. Not only is that astonishing, but there is only one way it could be spinning that quickly. Collisions with other giant black holes.

If a supermassive black hole gained its mass simply by sipping matter off stars and gas, it would have a low spin because the angular momentum of these objects is small. But merging with one or two other black holes could do the trick because these objects can spin as quickly as 1,000 times per second when they're at stellar mass. When they collapse, the initial spin of the star that formed them is conserved and all that energy needs to go somewhere, so the black hole begins to churn around its axis faster and faster. And when two fast-spinning black holes merge, they impart their energy to each other's spin, making the resulting object spin even faster. The sum of the mechanics involved is mind-boggling because the collision is between superheated quark and gluon streams of energy with zero viscosity, so there are few real world analogies we could refer to when talking about it, but the general mathematical results give us something not unlike the monster at the heart of NGC 1365, which shows us that we're starting to understand more about how supermassive black holes feed on each other and shape the galaxies around them.

See: Risaliti, G., et al. (2013). A rapidly spinning supermassive black hole at the centre of NGC 1365. Nature, 494 (7438), 449-451. DOI: 10.1038/nature11938

[ illustration by NASA/JPL ]


Since the dawn of modern cosmology there's been an implicit assumption that no particular spot in the universe was supposed to be any more special than the rest. On the biggest scales of all, scales at which galaxies are treated like tiny particles, the universe is supposed to be isotropic and homogeneous, i.e. more or less uniform in composition and in its expansion from the Big Bang. For decades, simulations and observations seemed to show that this was really the case, but as a newly published paper argues, this might no longer be true, because lurking at the dawn of the universe was a group of quasars stretching for nearly 4 billion light years and tipping the very large metaphorical scales at 6.1 quintillion solar masses. That's a big enough cluster to shatter the theorized limit on how big cosmological structures should be able to get by a factor of more than three. It looks as if the cosmological principle might need some refining, unless it turns out that the data from the Sloan Digital Sky Survey is wrong and this cluster is much, much smaller than it appears.

Here are the basics on the fancifully named Huge Large Quasar Group, or Huge-LQG for short. It's made up of 73 quasars arranged like a Y chromosome that has been shot right through the center with a high speed projectile. The upper, crescent-shaped branch holds 56 quasars and the remaining 17 cluster tightly right underneath it. It's about eight times the width of the Great Wall, which was once considered such an enormous cluster of galaxies that it too was billed as a discovery that would challenge the cosmological principle. But simulations showed that it simply wasn't big enough and that clusters as wide as 1.2 billion light years still leave the cosmos more or less uniform and isotropic. And this is the major issue with Huge-LQG. It's more than three times wider and there's no explanation for how a structure this big could exist without being torn apart by gravity and the expansion of space-time long before it gets anywhere near that size. Now, we can't exactly toss the cosmological principle away yet, but we at least have to refine it.

Obviously, something is missing, and if we were to simply adjust and say that 4 billion light years should now be the new limit on quasar groups, we would be missing why that's the case. Letting go of the cosmological principle opens us up to new models of galactic and cosmic evolution and exciting new ideas. However, it's not really that simple because we'd also have to explain how an anisotropic early universe became the mostly isotropic, homogeneous mature one we see today while working in the confined space of a finite cosmos. One easy way to stick with homogeneity could be to declare that the known universe must be much bigger than we think, because if your scale is big enough, any structure becomes small enough to be homogenized away. But without being able to see beyond 13 billion light years or so, super-sizing the universe is an extremely questionable proposition. Either way, Huge-LQG leaves us with a dilemma that really gives the status quo a run for its money, and that's how the really exciting breakthroughs can be made, fascinating new science gets done, and Nobel Prizes are eventually earned…

See: Clowes, R., et al. (2013). A structure in the early Universe at z ∼ 1.3 that exceeds the homogeneity scale of the R-W concordance cosmology. MNRAS. DOI: 10.1093/mnras/sts497

primordial black hole

Usually a new discovery in deep space tends to further complicate our picture of the universe, almost as if the cosmos says "oh yeah, you think you have a good idea of how this works?" and throws a monkey wrench into the works, or sometimes, the whole screaming, angry monkey. So when it comes to phenomena as complex and exciting as black holes, surely there can’t be any data that makes them easier to understand. But this time, when physicists wanted to figure out if jets from black holes followed the same patterns as the mass of the objects went up, nature was willing to cooperate. As it turns out, the powerful jets of material shot from the accretion disks of black holes of 20 solar masses and 20 million solar masses follow the same mechanism. How do we know that? By plotting their strength against the mass of the black hole. If the data follows a linear trend, we know that the physics don’t require a new process to explain the numbers.

So what exactly is happening around black holes? As you may already know, black holes aren’t the cosmic vacuum cleaners far too many sci-fi movies made them out to be. They simply stay where they were very violently born and their immense tidal forces accelerate anything straying nearby into their maws. But black holes are tiny on an astronomical scale and only eat so much at a time. Whatever doesn’t fall directly into their event horizons is whipped around them until it heats up into a glowing accretion disk we can detect. And some of this material gets trapped in the powerful magnetic fields around the black hole and is launched into deep space at 99.9% of the speed of light in the form of highly energetic jets which produce powerful gamma rays. This process seemed to be the same for every black hole observed, but there’s no way to be sure if the black holes affected the jets beyond kinetic energy unless you start comparing gamma ray bursts to one another and plotting them along a trend line.

If the trend were exponential, that would mean new physics is needed to explain sudden surges in power as we go up in the jets' energy, and vice versa. But the observed trend between the kinetic energy of the jets and the power of the gamma ray bursts is linear, which means that it's rather likely that the process behind forming the jets is the same across the entire spectrum of known black holes. The black hole's mass affects how much it can swallow at a time and how powerful the jets it emits could be. The power of the jets affects the observed gamma ray bursts when a new black hole is formed and when it's in the middle of a large meal consisting of stars and gas floating through interstellar space. So if we know that when we up the jets' power, we also make the GRB stronger in a predictable way, that tells us that we can more or less confidently scale up what we learn about smaller black holes to their immense siblings, and estimate black hole sizes based on the GRBs' strength. And that's very useful for learning more about these prolific and extremely influential gravitational ghosts of giant stars.
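The trend-line test itself is simple enough to sketch. Here's a toy Python version with synthetic black holes I invented for illustration: jet power is set proportional to mass with some scatter, and an ordinary least-squares fit in log-log space recovers a slope near one, the signature of a single process at work across all masses.

```python
import math, random

random.seed(42)
# Synthetic sample spanning stellar to supermassive black holes
masses = [10 ** random.uniform(1, 8) for _ in range(50)]
# Assume jet power proportional to mass, with log-normal scatter
powers = [m * 10 ** random.gauss(0, 0.2) for m in masses]

xs = [math.log10(m) for m in masses]
ys = [math.log10(p) for p in powers]

# Ordinary least-squares slope in log-log space
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

print(f"log-log slope: {slope:.2f}")
# A slope near 1 means one scaling law covers the whole mass range;
# a very different slope would hint at new physics along the way.
```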

See: Nemmen, R., et al. (2012). A universal scaling for the energetics of relativistic jets from black hole systems. Science, 338 (6113), 1445-1448. DOI: 10.1126/science.1227416


As we discussed many times on Weird Things, black holes are the most amazing and terrifying things in the universe we know, and they're not shy about gathering every law of physics we're sure we understand, then laughing at them and doing something completely different. Well, not completely different per se, but the incredible heat and gravity of these objects makes time and space flow in ways they can't anywhere else. One of these extreme phenomena is time dilation induced by gravity. We talked about the extreme effects of time dilation at relativistic speeds over the last few years and mentioned that you could technically cross the entire visible universe in a human lifetime if you were traveling close enough to the speed of light. And the same effect applies when you're exposed to an extremely gravitationally powerful object. Time would continue normally for you if you were falling into a black hole, but to an outside observer, you'd be frozen in time, and they would never see you spaghettified and turned into quark-gluon soup past the event horizon.
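Both flavors of time dilation fall out of two short textbook formulas, so here's a small Python sketch of each; the 0.99c figure and the hovering distance are illustrative choices of mine.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def lorentz_gamma(v_frac_c):
    """Time dilation factor for motion at a given fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_frac_c ** 2)

def gravity_clock_rate(mass_kg, r_m):
    """Clock-rate factor sqrt(1 - r_s / r) outside a non-rotating mass."""
    r_s = 2 * G * mass_kg / c ** 2
    return math.sqrt(1.0 - r_s / r_m)

print(f"gamma at 0.99c: {lorentz_gamma(0.99):.2f}")
# Hovering just 1% outside the horizon of a 10 solar mass black hole:
r_s = 2 * G * 10 * M_SUN / c ** 2
rate = gravity_clock_rate(10 * M_SUN, 1.01 * r_s)
print(f"clock rate near the horizon: {rate:.3f}")
# The rate tends to zero at the horizon, which is why an outside
# observer sees an infalling object freeze rather than cross it.
```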

Or at least that's the theory, which one physicist says might be wrong. According to his view, any particle falling into a black hole would never actually cross the horizon because the dilation is so extreme as to keep it falling until the black hole evaporates. Unfortunately for him, this really doesn't sound even remotely right since that would prevent black holes from accreting mass, and we know they do exactly that. If particles could never fall into a black hole, there would be absolutely no accretion and black holes would keep the same mass with which they were created. It could be possible, but it would make explaining hypermassive 10 billion solar mass beasts very, very difficult. You'd need a significant portion of a galaxy to implode in on itself just right, circling into itself over eons without enough gravitational nudges and tugs from the various stars and solar systems inside to maintain some semblance of equilibrium. And that just doesn't sound right. It's a lot more straightforward to assume that supermassive black holes are born several orders of magnitude smaller and work their way up through galactic collisions, gaining most of their mass during massive cataclysms rather than steady feedings.

The root of the problem with this paper lies with its author seemingly forgetting that dilation has an observable effect from the outside while time for the object in question continues as if nothing happened. Were the test particle in the paper to see another particle going at the speed of light right next to it, it wouldn't keep pace with it; the other particle would seem as if it was flying away from its point of reference at the speed of light. He achieves his result by removing a metric he doesn't seem to have any grounds to remove, and while describing how a black hole accretes a good amount of matter, then evaporates over time due to Hawking radiation, he says that a test particle will just fall until the black hole unravels into nothingness. These sequences of events seem to contradict each other, and unless I'm missing something crucial, since the paper describes opposite outcomes to the same process, methinks it's staying put on arXiv. The whole point of a black hole's event horizon is that it can be crossed but never escaped, and once something crosses the event horizon, it's effectively inside the black hole. If your paper doesn't get the definition of this critical juncture right, it's pretty much bound to be flawed.

starship concept

Well ladies and germs, it appears that when I tried to calculate how much effort it would take for an alien civilization to create a warp drive, I may have been wrong, and so were the theoreticians on whose work I based my numbers. And that's a good thing, because the latest buzz from the DARPA-sponsored 100 Year Starship Symposium is that warp drives are many, many orders of magnitude more feasible than initially assumed. Rather than requiring the mass-energy of all of Jupiter to jump-start, it would require just 67.8 exajoules, which translates to roughly 755 kilograms of material. Considering that just a few decades ago, the first theoretical basis for warp drives was considered impossible because it seemed like it would take more than the energy of the entire cosmos to create a space-time bubble, the new requirement lowers the bar to interstellar travel down to almost nothing. Yes, there's the matter of how we can create a burst of energy approaching 68 exajoules, but we certainly have ideas involving large and powerful lasers.
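That energy-to-mass conversion is just Einstein's E = mc² run in reverse, and it's easy to double check:

```python
# Sanity check of the quoted figures with mass-energy equivalence
c = 2.998e8                 # speed of light, m/s

energy_j = 67.8e18          # 67.8 exajoules
mass_kg = energy_j / c ** 2

print(f"Mass equivalent of 67.8 EJ: {mass_kg:.0f} kg")
# Lands right around the ~755 kilograms quoted above.
```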

Hold on though, how did we go from having to turn Jupiter into a spark plug to less than one ton of matter to kick-start a warp bubble? By fine-tuning the warping of space and time required. In the classical scenario, we'd need a spherical bubble containing the ship, and aside from causing a number of rather nasty side effects, this arrangement turns out to be very energy-demanding since there's so much space to warp. The first downgrade came from changing how the energy was applied. Rather than blasting out a space-time bubble, you'd basically implode space and time around you to manipulate the cosmological constant, the Λ in Einstein's equations associated with dark energy. This downgrade in energy requirements does away with the spherical warp bubble and proposes an oblong doughnut shape in which the ship sits in a pocket of normal and stable space-time being moved faster than light. For all intents and purposes, the spaceship stands still as the universe moves around it. It sounds like a sci-fi cliché, but it may just work.

From what I've read on the subject, I'd speculate that it's entirely possible there would be a leak of Hawking radiation or a high-energy halo from the warp field, but these may not be big obstacles to warp travel. If anything, we may want to use powerful magnetic fields to channel all this energy into acceleration and really put the pedal to the metal when traveling to very distant stars. We'll need to do a lot of experiments to know for sure, and those experiments are already starting as a small NASA lab tries to create space-time disruptions on an atomic scale with laser beams. When it can do that reliably, it can start scaling up to real-world objects and see if space and time will cooperate. If they do, we may be on our way to becoming the kind of space-faring species we only read about in sci-fi novels, and space exploration will become a lot easier and more important. But at the same time, we have to stay realistic and understand that this is a tentative first baby step towards warp drives and into barely charted territory in which the laws of physics may cooperate with us just as easily as they might hinder us…

[ illustration by Adrian Mann ]

Say that somewhere out there is a species of space-faring aliens with relativistic rockets or warp drive technology that lets them travel between solar systems. Considering the sheer size of the universe, it's probably a good bet that at least one exists. And as these aliens are tooling around, their spacecraft will likely leave what we could call a wake in the fabric of space and time, a wake that we could observe under the right conditions, when the stars align. This is the main gist of an arXiv paper which argues that despite the odds of successfully detecting an alien craft's fly-by being almost nil, we could still try just in case we do get lucky. To start a long term survey, we just need to find star pairs close to each other and aligned with the Earth at about the right angle to give us a good view of the space between them. Then we just look and wait for something to show up, ideally a smear of light magnified by the relativistic wake of the spacecraft we're trying to detect. It's a neat idea, and the authors readily acknowledge that we may just be too far away to notice alien travelers, or be in a region of space where there are no civilizations capable of interstellar travel, which keeps them grounded when discussing such a lofty SETI approach. But there is one thing they may want to explore a little further…

When we last discussed the Icarus project, did you notice the sheer size of the probe being considered? Go and have a look at that monstrosity and note that the Empire State Building does not look all that much bigger by comparison. That's not because Icarus' designers have a thing for really large spacecraft; it's because this craft will have to carry so much fuel and have giant engines to accelerate. Any future interstellar craft designed to support humans would be even bigger than Icarus to carry all the essentials across trillions and trillions of miles. Let's say that at some point we'll actually decide to build a ship able to ferry humans between the Sun and Alpha Centauri at relativistic speeds, and equip it with a brand new, state of the art artificial black hole engine which should get us up to relativistic speeds very nicely, shaving the travel time down to mere years instead of several millennia. We'd need to build something much like the Burj Khalifa tower in Dubai to house all the things necessary to comfortably support and house our crew, then get another pair of similar structures and devote them to being engines and fuel tanks, and at least another one to function as a backup tank and to securely house all the shuttle craft that will let the crew go down to the surface of their target world, because that giant assembly is simply never going to be able to land. It's far too huge and heavy. And keep in mind that these estimates are probably erring on the small side, relying on a radical propulsion system.

Now, our imaginary spaceship, which we'll call something inspiring, say, The Really, Really Huge, would have an approximate mass of 2 million tons empty, without fuel and without the micro black hole suspended between the giant engines armed with nuclear lasers. The black hole would add at least another million tons, and all of its fuel, all the relevant supplies, and supporting spacecraft would bring the total mass of our interstellar craft to something in the neighborhood of 4 million tons. Depending on its configuration, it could be close to 1,000 or so meters long, which is just about two thirds of a mile, and about a quarter of a mile across. Sounds huge and very, very expensive, doesn't it? And this baby goes from zero to ~0.5c in just 6.3 months! How could alien astronomers not notice something like that screaming through the voids of space, warping the photons from the sunlight behind it and leaving a high speed smear in the spectrum of our sun on its way out? Well, for all the size and speed of this thing, you have to remember that it's traveling through space, and as such it's tiny if we're going to compare it to the kind of objects telescopes can actually resolve. We have trouble imaging gas giants in other solar systems, gas giants which are more than a hundred thousand times bigger than our hypothetical ship. Sure, its wake is going to affect how the spectrum of a star looks, but the warping would be so tiny that it may not even be visible as an artifact of the imaging process, the tiniest fraction of a pixel across, smaller than an exomoon.
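To put rough numbers on that last point, here's a quick Python sketch using naive, non-relativistic averages of my own, not figures from the paper:

```python
import math

c = 2.998e8                        # speed of light, m/s
months_s = 6.3 * 30.44 * 86400.0   # 6.3 months in seconds

# Average acceleration to reach 0.5c
accel = 0.5 * c / months_s
print(f"average acceleration: {accel:.1f} m/s^2 (~{accel / 9.81:.1f} g)")

# Apparent size of a 1,000 m ship seen from 4.37 light years away
ly_m = 9.461e15
angle_rad = 1000.0 / (4.37 * ly_m)
microarcsec = math.degrees(angle_rad) * 3600e6
print(f"angular size from Alpha Centauri: {microarcsec:.4f} microarcseconds")
# Thousandths of a microarcsecond -- far beyond any telescope's resolution.
```

So the ship pulls a comfortable ~1 g the whole way, and even from the nearest star it subtends a few thousandths of a microarcsecond, which is exactly why catching one in an image is such a long shot.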

And that's the real gotcha in an otherwise interesting plan. Even if you're lucky enough to catch an alien ship in the middle of crossing between two nearby solar systems and snap that one-in-a-quadrillion shot, how exactly do you prove that this microscopic smudge in the spectrum is the trail of an extraterrestrial spacecraft? What says it wasn't dust in the air or atmospheric fluctuations at the time of the shot? Even if you take the picture with an orbital telescope to keep a stray air particle from blotting out your snapshot of a relativistic craft, there's still the potential for a microscopic speck of space debris or a wandering electron to mess with the shot. If the alien species in question builds a ship the size of Mercury and flies it past our solar system, we'd probably have some chance of catching its relativistic wake by happenstance. Otherwise, the ship will be just too small for a proper identification, if it would even register in the image in the first place. Likewise, if we set our sights on a few dozen nearby stars floating close to each other, we wouldn't necessarily boost our odds of seeing aliens traverse between them since we have no guarantee that they would evolve and thrive in those systems, just a vague estimate of the probability that a planet supporting life in general may exist there. It seems that if we'll ever catch ET mid-flight, it would've had to buzz our telescopes on its way to planets unknown…

See: Garcia-Escartin, J.C., et al. (2012). Scouting the spectrum for interstellar travelers arXiv: 1203.3980v1

Speaking of space-based weapons, here's an interesting one for you. Once upon a time, when writing about warp drive physics, I asked whether the creation of a warp bubble by a superluminal ship could have some very nasty effects on any nearby planets due to the energy involved, and had my fears validated by a paper on the potential energy output of a warp drive. Now, a small group of theoretical physicists has taken a look at an interesting related problem: the violent interactions of the warp bubble with the fabric of time and space. A few years ago, we looked at how such bubbles can create showers of radioactive particles inside them to see whether a ship could survive being in said warp bubble and whether the bubble could be stabilized. But what about the space outside the bubble? Space may be famous for its vast voids between stars and planets, but it's not entirely empty since streams of particles constantly travel through it. So as you're blasting along in a superluminal spacecraft possibly feeding off the energy the bubble generates internally, what happens to the countless particles picked up by the gravitational distortion? When the ship stops, those particles still have a lot of inertia and are unleashed as blasts of extremely energetic particles easily capable of annihilating anything in their path.

Sounds like a rather straightforward application of Newtonian physics with some relativity for context, but let's keep in mind that the physics in question are anything but straightforward, and the entire idea relies on a warp bubble behaving like an object moving at superluminal speeds. But that's not really how warp bubbles should behave, since they're wrapping a physical object in a closed pocket of space-time, not creating a shield which lets the physical object inside accelerate past the speed of light. To visualize the difference, imagine driving a car on a long stretch of highway through a swarm of insects. The faster you go, the more bugs will smack into your windshield, and with greater force. As conceptual insects, they don't actually squish and stick to the glass because they have no internal organs, but they stay on because you're moving a lot faster than they could. If you're suddenly stopped, these imaginary bugs will fly forward with the same speed your car was going, as classic Newtonian physics dictates, and smash into something else. But now, imagine your car traveling in its own air pocket, with these imaginary insects carried by the wind around your vehicle, never even knowing that it's there. The former scenario is how the paper treats active warp drives; the latter is how they should work.

Certainly, an accelerating warp bubble screeching to a sudden stop, swiftly followed by a lethal aurora of very unstable particles which were just accelerated to enormous energies and now need to regain some sort of equilibrium, irradiating entire planets into oblivion, sounds like an awesome sci-fi weapon. However, wouldn't accelerating particles surfing on a space-time tsunami violate some law of physics? The authors allude to an unlikely result as a giveaway that something seems to be off when they say that the accelerated particles will have no limit on how fast they could move or how much energy they give off. Would that not violate the widely accepted mass-energy equivalence principle codified in Einstein's famous equation? Obviously, the math is very complex and far be it from me to double-check it (comp sci math is very different from physics math), but random particles being able to drift through the bubble throughout the journey sounds really off because it implies that the bubbles generated by warp drives are permeable. And if these bubbles really are permeable, they should then be subject to a set of physical phenomena that would render superluminal travel impossible for any object with mass. As a spacecraft in the bubble tried to accelerate to relativistic speeds, it would be pelted into oblivion by incoming dust and cosmic rays while being vaporized by the thermal energy generated by the bubble itself, something the warp bubble should prevent if it's supposed to carry a craft through space.

On the other hand, however, one wonders exactly what happens when a superluminal craft casts off the warp bubble when it arrives at its destination. How big of a gravity wave would it generate? Would it be detectable by someone on the planet near which it emerged? Could such waves propagate widely enough through a stellar neighborhood to be used as a SETI detection method? Considering that the energy required to create warp bubbles for a decent sized interstellar ship would be on par with vaporizing Saturn, one could conceive of at least a faint echo from a nearby solar system being detectable by sensitive enough instruments. But a caveat to trying to figure all this out using the theoretical physical constructs we have for superluminal propulsion is that we don't really know if a warp drive would work the way we envision, and protracted investments in figuring out how it would affect space around it could well turn into a search for the proper adjective to describe the colors of the emperor's new silk cape. It may be a somewhat safer bet to see what it would take to keep a ship using something like a black hole engine safe from bombardment by particles and radiation, or from running into rogue or previously unknown planets as it careens through space, and see how much energy would be generated in the process. That way, we'll have a good idea of what needs to be achieved to build relativistic rockets, and a clue as to what high energy events nearby we might want to investigate for signs of alien intelligence.

See: McMonigal, B., et al. (2012). The Alcubierre Warp Drive: On the Matter of Matter. Phys. Rev. D. arXiv: 1202…

Quite a bit of scientific literature on astrobiology is filled with references to very exacting criteria for exoplanets capable of sustaining alien ecosystems. They have to be just the right distance from their suns, have the right kind of atmosphere, fall in the right temperature range, and hopefully, have a large stabilizing moon to keep their constant orbital wobbles from creating ice ages and migrating ice caps around the poles. But as we see more exoplanets out in the wild and do more accurate simulations, we’re finding that a lot of these constraints are starting to fall away. It seems that life could have a chemical basis in a liquid ethane lake, and might not even need a star to host a habitable ocean. And now, it looks like it might not even need a big moon to keep its axis more or less steady over the eons, allowing complex life to evolve without swift climate changes. It’s certainly a nice thing to have for a flourishing ecosystem, kind of like traction control in your car is a really nice and helpful feature, especially on icy and wet roads. But you could certainly get by without it if you had to, just as potential alien life on exoplanets without a big moon like ours could cope with an occasional climate shift.

It all started with a simulation in 1993 which showed that without the Moon, our planet could wobble by as much as 85° on its axis, which means that the long term climate patterns humans have enjoyed for many thousands of years just wouldn’t be possible. On geologic timescales, we’d be looking at mass extinctions on a far more frequent basis than we see in the fossil record as life would struggle to adapt. Planets are not exactly dainty things and all this would happen over tens of millions of years, but if we consider that Earth has been home to living things for roughly 3.5 billion years or so, these are fairly rapid and extreme changes which would test evolution’s ability to produce complex multicellular organisms when the selective pressure is to stay small and very efficient. So if an alien planet wants to be home to a massive, complex, and diverse ecosystem, it had better be just as stable as we are, wobbling only by 2.6° at most thanks to our massive Moon, right? Turns out that’s not the case at all because the range found in the original study was exaggerated by more than a factor of two. In fact, had the Moon never formed, we would’ve wobbled between 10° and 50° on our axis over 4 billion years. Not a bad improvement on the originally predicted arc that could turn our planet sideways, then upright again.

And there’s another surprise. Within those 4 billion years without the Moon’s influence, there are stable cycles lasting for 500 million years. While the planet’s orbital wobble would be far more extreme than what we have now, it wouldn’t be anywhere near 85° off axis. A more accurate figure seems to be 15° or so, which would entail the occasional massive ice age followed by rapid warming periods, but on timescales that would span almost all the evolutionary changes that led from giant sea scorpions, to dinosaurs, to us. How can this kind of stability be possible if we didn’t have a lunar rudder? Well, generally a planet wobbles due to the very slight tugs from other objects in the solar system accumulating over millions and millions of years. But the same tugs that send a planet wobbling can also be corrective, and the occasional comet or asteroid impact could nudge the planet in another direction by countering a tug from a distant world or a passing comet. It all adds up to a slow, almost reluctant wobble rather than uncontrolled tumbling through space. And if a planet happens to be in a retrograde orbit (orbiting in the opposite direction of its siblings), its wobbles fall in the same range as our current axial oscillations. That means we can bravely widen our search to include rocky worlds without large, stabilizing moons as a potential home for macroscopic aliens, if not other intelligent life.

See: Lissauer, J., Barnes, J., Chambers, J. (2012). Obliquity variations of a moonless Earth. Icarus, 217 (1), 77-87. DOI: 10.1016/j.icarus.2011.10.013

dark planet

Depending on who you talk to, planets around alien suns are either somewhat rare due to the chaotic nature of planetary formation around infant stars, or even more plentiful than the stars themselves. Since exoplanets are rather small and dim, lost in the glare of their host suns, spotting them takes a lot of time and effort. Catching one in the act means spotting a momentary dip in starlight from an object of indeterminate size at a time that’s random to the observer. If a currently unknown planet orbits its sun every 237 days, how will you know to point your telescope at the right star every 237 days? There just has to be a better way of taking the galactic census so we can figure out what the average solar system looks like, and ultimately, what the chances are that one may have the right conditions to host life. And there may be one, according to a group of astronomers who used a very familiar manifestation of general relativity to sidestep the normal fuss and bother of exoplanet detection, trying to find planets that orbit a little farther from their suns to get a rough measure of solar system sizes.

When we last talked about the physics of wormholes, we looked at microlensing, essentially the distortion in the appearance of an astronomical object caused by the gravity of something relatively small in front of it in our line of sight bending the fabric of space. Usually we deal with gravitational lensing on the scale of galaxies and galaxy clusters, and it’s partially how we know that dark matter exists. But at either end of the light distorting spectrum, it’s the same mechanism at work: traveling photons are skewed by the uneven fabric of space and time. So, the researchers posited, if galaxies can distort the appearance of other galaxies and we see stars doing the same thing, what about planets orbiting stars? We know they also bend light with their gravitational wells, so a planet orbiting at some distance from a star should distort its halo. And so, after watching 100 million stars, they found evidence of exoplanets in orbit around their parent suns exactly as general relativity predicted would happen if they were there. Of course, this is all easier said than done.
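To get a feel for just how small these distortions are, we can plug the textbook Einstein radius formula into a few lines of Python. This is purely a back-of-the-envelope sketch with assumed distances (a sun-like lens roughly halfway to a background star in the galactic bulge), not numbers from the study itself.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # one kiloparsec in meters

def einstein_radius(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius (radians) for a point-mass lens:

        theta_E = sqrt( (4GM/c^2) * (D_s - D_l) / (D_l * D_s) )
    """
    d_ls = d_source_m - d_lens_m
    return math.sqrt((4 * G * mass_kg / C**2) * d_ls / (d_lens_m * d_source_m))

# Assumed geometry: a solar-mass lens 4 kpc away in front of
# a background star 8 kpc away, typical of bulge surveys.
theta = einstein_radius(M_SUN, 4 * KPC, 8 * KPC)
theta_mas = theta * 206265 * 1000  # radians -> milliarcseconds

print(f"Einstein radius: {theta_mas:.2f} mas")  # about 1 milliarcsecond
```

An Einstein radius of roughly a milliarcsecond is far too small to resolve as separate images, which is why these surveys watch for a star's brightness rising and falling over time instead, and why a planet adding its own tiny blip on top of the star's lensing curve is such a rare catch.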

Since a star pumps out so much light and a planet has to orbit at just the right inclination to be spotted in the act of disturbing the halo, the 100 million stars had to be narrowed down to just 500 promising ones, and those 500 had to be watched for five long years until ten cases of direct microlensing were finally seen. But all that effort didn’t yield very consistent data, with results that are all over the place. According to the tally, between 6% and 23% of stars seem to host a Jupiter-like world, between 23% and 74% have a Neptune-like body, and between 25% and as many as 97% might have a terrestrial planet around them. As the planet size gets smaller, the uncertainty increases wildly, so much so as to be almost meaningless for terrestrial worlds, which are the ultimate goal of all planet hunters. The exoworlds are just too dim, too far away, and too small to register prominently on our existing instruments, and although the study does imply that pretty much all stars have a solar system of some sort, it can’t actually tell us anything definitive about what sort of planet we could usually expect. Going by this survey, it could be anything from a turbulent gas giant to Earth 2.0.
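Taking the midpoints of those wide ranges, purely as a rough illustration and not as anything the survey authors computed, shows how a headline figure of at least one planet per star falls out of the tally:

```python
# Reported occurrence ranges from the survey, as (low, high) fractions.
occurrence = {
    "Jupiter-like": (0.06, 0.23),
    "Neptune-like": (0.23, 0.74),
    "terrestrial":  (0.25, 0.97),
}

# Crude midpoint estimate of expected planets per star. The actual
# analysis fits a planetary mass function; this is just arithmetic
# on the published ranges to show the order of magnitude.
expected = sum((lo + hi) / 2 for lo, hi in occurrence.values())
print(f"Rough expectation: {expected:.2f} planets per star")  # 1.24
```

Even this crude midpoint sum lands above one planet per star, which squares with the study's conclusion, but the enormous spread on the terrestrial bin (25% to 97%) is exactly why the result can't say what a typical planet looks like.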

Don’t get me wrong, trying to use microlensing to find the statistical distribution of planets is a terrific idea. It’s just that the universe keeps placing interesting things too far away for us to spot with our current tools. We could even try this trick again with better equipment and hyper-sensitive telescopes to see if we can get more accurate and predictive tallies. Until then, however, the candidate worlds seen by Kepler, and in the future, the Terrestrial Planet Finder, will provide us with the most accurate and predictive sampling of our galactic neighborhood, since they can point to actual planets with an accuracy I doubt we could get from even the most precise measurements of planet-created microlensing around distant alien suns. That sort of survey would give us a more accurate picture of planetary distribution across the galaxy and allow us to build a model of a typical alien solar system. With such a model, we could look at any random star, have a decent idea of what we should expect to see orbiting it and at approximately what distances, and better time our telescopes’ observations in the hunt for another planet that hosts intelligent life.

See: Cassan, A., et al. (2012). One or more bound planets per Milky Way star from microlensing observations Nature, 481 (7380), 167-169 DOI: 10.1038/nature10684