Archives For cosmology

icy void

Remember the anomalous Cold Spot, the bizarre, low temperature area spotted in maps of the Cosmic Microwave Background Radiation, or CMBR for short, the echo of the Big Bang which gives us a very high level overview of the structure of our universe? Cosmologists bristled at an anomaly stretching some 1.8 billion light years and seemingly violating what we thought was a universal rule that our cosmos is isotropic and homogeneous, i.e. expanding similarly in every direction and with roughly the same density of galaxies from end to end. And so they analyzed the map using different means, and some were able to rule it out as an artifact in the data. Still, the question of whether it was really there never went away, because finding a way to erase something from your data set just because it seems weird doesn’t mean you’ve actually gotten rid of it, and sure enough, it appeared yet again on Planck’s CMBR map and was now stuck for good. This left scientists with a dilemma. Why was there a cold spot so large and so cold?

Well, the answer to that is a distinct lack of galaxies which makes the Cold Spot about 20% less dense than a typical patch of the sky. This has of course given pop sci headline writers cover to call it The Great Void, a grandiose moniker which overstates the shortfall in density for this area of the universe, and when billed as the answer to why the Cold Spot is so cold, oversells the effect it has on the background temperature in this patch of the sky. In fact, just 10% of the temperature drop can be linked back to the lack of density while the rest is still very much open to debate. To give credit where credit is due, virtually all iterations of this story did mention this somewhere along the line, but since people usually read just the first half of most articles, I thought I’d put my disclaimers and conditionals in the top half of my post rather than towards the bottom, as the articles in question did, because my feeling is that a lot of people will be convinced that the Cold Spot mystery is solved when in fact, it has actually deepened.

While you can find anything you want in the CMBR if you stare hard enough, seeing the spot in both the WMAP and Planck results shows that it’s a persistent feature, unlike Roger Penrose’s proposed echoes of past Big Bangs, a hypothesis he was never sufficiently able to explain, and evidence for which strongly depends on how you process the data. And while it’s not really the biggest structure in the known cosmos, as much of the media claims, since that title belongs to a group of quasars more than twice as large if we get nitpicky, it’s still a really important feature. When combined with some other weird observations, it hints at something under the surface of our cosmological framework. If you take the so-called Dark Flow discovered several years ago, and add it to the Cold Spot, as well as galactic superclusters which challenge the cosmological principle, one of the odd but still plausible explanations that ties all of them together is that our universe is being bumped by other universes, essentially giving us evidence of a multiverse we think should exist to explain inflation and making the Cold Spot a cosmological bruise.

Of course now the big question is how we can validate that hypothesis, because we steer right into the horizon problem, which puts other universes out of our reach, and any attempt to even create a census of what occupies the multiverse is fraught with problems for which we have no existing solutions. Frustratingly, if the colliding universe explanation is in fact the right one, we’ll have to hold off on giving out the Nobel Prize for it because it would remain just out of reach of our instruments, tantalizing us through anomalous patterns in the CMBR and mysterious flows hinting at bizarre mechanics just beneath the fabric of space and time we can observe, but not study in enough depth to come to a solid conclusion. Even a few years ago, we would’ve simply defaulted to Occam’s Razor and dismissed what we’re seeing as artifacts from data processing, but the fact that the anomalies keep showing up pretty much rules out that explanation. Now some of our more exotic cosmological theories may well have to be put to the test.

See: Szapudi, I., et al. (2015). Detection of a supervoid aligned with the cold spot of the cosmic microwave background. MNRAS, 450(1), 288-294. DOI: 10.1093/mnras/stv488

cosmic mesh

Dark matter is a substance that makes up most of the mass in the universe, but decades after we discovered it, all we have are indirect measurements which show us that it’s there in very large amounts, forming galactic halos, but ultimately, little else. It doesn’t seem to interact with any of the stuff that makes stars, dust, and planets, it emits and reflects no radiation, and this utter lack of interesting properties we could study leads to much wailing and gnashing of teeth on physics blogs and forums, wondering if it even exists. But there might finally be a glimmer of light in the study of dark matter because there’s now evidence that it can interact with itself and matches at least one theoretical behavior. While that doesn’t sound like much, it’s actually a pretty big deal because it narrows down the possible culprits and shows that we can design some way to catch particles exhibiting this behavior to figure out this mystery once and for all. Hopefully.

Last year, a team of researchers was examining the Bullet Cluster, which is actually two galaxy clusters undergoing a series of violent collisions, to try and detect dark matter interactions and figure out what, if anything other than gravity, dark matter responds to. The observations were not exactly conclusive, but they didn’t completely rule out dark matter particles colliding, they just set a bound within which such collisions can be expected. Armed with this data, the same team tried to catch a glimpse of interacting dark matter particles among the four bright central galaxies of the cluster Abell 3827, hoping to get more detail on how their galactic halos behave during tidal stripping events. Despite sounding like something one galaxy does for another to keep things interesting and relieve a little stress, tidal stripping is actually when galaxies shed stars, gas, dust, and dark matter to larger galaxies which exert powerful tidal forces on them across millions of light years.

Now, during tidal stripping, there’s a lag between matter being absorbed into a new galaxy and more matter coming in from the old galaxy because as clouds of dust and gas collide, they heat up, producing radiation, and create drag that pushes incoming material back. One inconclusive observation may have detected odd gamma ray flares that could be dark matter colliding during this phenomenon, but since no other observation has, some cosmologists concluded that dark matter doesn’t interact with itself. But the team observing Abell 3827 found the telltale signs of a significant lag in dark matter halos, with a rate of interaction which fell neatly within their previous bounds. This means that dark matter particles are colliding, creating shockwaves and a detectable lag between absorbed and incoming clouds. In fact, this lag can be up to 5,000 light years, which isn’t much on a galactic scale, but is definitely big enough that it’s unlikely to be just a fluke, or a random artifact in the data. Finally, we know something new about dark matter!

Of course we still don’t know what it really is, but we can now rule out a whole host of extremely exotic candidates which can’t interact with each other, and start designing detectors to seek out even more such events to confirm the observation and gather more data. With each new piece of information we tease out, we can eliminate more and more culprits until we can actually design a way to capture dark matter itself. It may take decades more until we get to that point, but just as a punishing, extremely difficult game can give you immense satisfaction when you finally manage to figure out the rules and advance, so can a profound and difficult to solve mystery like finding out what dark matter really is. Maybe it will be nothing groundbreaking in the end, and maybe it won’t change anything we think we know about the universe, but just the fact that we persisted, observed, experimented, theorized, and then observed some more to figure it out should make us a little more proud of our species in general for not giving up on a very difficult question.

See: Harvey, D., et al. (2015). The nongravitational interactions of dark matter in colliding galaxy clusters. Science, 347(6229), 1462-1465. DOI: 10.1126/science.1261381

Massey, R., et al. (2015). The behavior of dark matter associated with bright cluster galaxies in the core of Abell 3827. MNRAS, 449(4), 3393-3406. DOI: 10.1093/mnras/stv467

[ illustration by AYM Creations / Ali Yaser ]

primordial black hole

At two events of the World Science Festival in early June, a group of five theoretical physicists debated whether we’re living in a multiverse, and more surprisingly, whether our current understanding of the cosmos all but mandates that multiple universes exist. It all goes back to the instant of the Big Bang, the femtosecond that set the rules for all reality as we know it in scientific terms. Each tiny little quantum instability and flux was stretched and projected across billions of light years to influence the shape of galaxy clusters and the tiny filaments that underpin our mostly isotropic, homogeneous universe. It’s kind of like the chaos theory adage about the flap of a butterfly’s wings eventually causing a tsunami halfway across the world, but taken to incredible extremes. We’re talking about a fluctuation among point particles becoming an archipelago a million galaxies across. So, why wouldn’t some of these instabilities become their own universes, sealed off from each other by the fabric of space and time? The inflation we just described should make this inevitable.

Here’s the issue. As our infant universe was inflating, it shouldn’t have spun off uniformly, since that would have made the fluctuations in early matter impossible and prevented the formation of stars and galaxies. It would’ve had to have disruptions large enough to kick-start other universes, or even itself be a product of another universe undergoing rapid inflation. And if one universe can inflate, so too must the rest, because otherwise inflation becomes a unique event, and science is not happy with a one-off event as an explanation. Every significant process we know of happens more than once, and on universal time scales of countless trillions of years, the possibilities are pretty much infinite. Over time, we should be able to see new universes bubbling up from dark voids in the fabric of space-time. There might even be room to imagine a bizarre, hyper-advanced species of the far future crossing into a brand new universe as theirs dies in a void ship isolated from reality as we know it, Doctor Who-style, hopefully one that’s nothing like the Daleks.

Problem is, how do we prove that inflation works in more than one universe when we can’t see into the multiverse? One suggestion is that inflation basically wraps the universe into a sphere, an unbreachable, self-contained environment that seems flat to us and where trying to travel to the edge of the cosmos will result in the spaceship ending up back where it started, as if it were on a Möbius strip. Simple, elegant, and convenient as far as solutions to cosmological problems go, don’t you think? And that’s precisely what’s so bothersome about it. Nothing in cosmology is that simple, even inflation itself. Instead of slowing down, the expansion is accelerating. Instead of flying apart into clouds of stars and gas under their own momentum, galaxies are keeping their shapes until a collision distorts them, thanks to invisible dark matter. Hell, some 96% of the universe isn’t even ordinary matter, and almost three quarters of it is some mysterious energy feeding its expansion. Does it really make sense that in a universe like that, simple, convenient explanations will fly?

galaxy in hands

Planck’s unblinking eye on the sky far from Earth was supposed to map the cosmic background radiation, the echoes of the Big Bang, to figure out whether the previous CMBR maps were right and see how much we know about the universe and how it works. Now, after more than a year of very strenuous stargazing, some 29 papers are being published on the results, and while they adjust the proportions of ordinary matter, dark matter, and dark energy slightly, they’re still very much in line with what we thought we knew about the cosmos. There’s more ordinary matter like the type that makes everything we see and touch, a decent dollop more dark matter, and a little less dark energy, which means that the universe’s expansion rate is slightly slower and the age of all space as we know it is slightly higher, which makes it 13.81 billion years old rather than 13.77 billion, give or take a few tens of millions of years. But otherwise, not much needs to change in a science textbook aside from having it pay even less attention to some exotic theories.

Honestly, it’s a little boring because science really likes to make breakthroughs, and having the universe as seen by Planck present us with a completely different CMBR landscape than WMAP would’ve made a few hundred careers and even a couple of Nobel Prizes, as well as attracted a lot of attention to the field. But at the same time, science ultimately needs to stand up to scrutiny at every level, and once in a while, it’s nice to get pretty much what you expect from an experiment, showing you that you have a good grasp of the big picture. And this doesn’t mean that there’s a lack of projects in cosmology’s future. If anything, Planck showed us that we have the outlines of the cosmic puzzle right and have filled out a good chunk of the inside. We could start channeling more and more time and effort into resolving more complex mysteries within a well established framework to uncover what’s behind enigmatic anomalies and exactly why the CMBR map looks the way it does, which would give us a more accurate view of the Big Bang…

hello monster

Oh for crying out loud, I’m gone for a Murphy’s Law kind of week and as soon as I can get back to blogging, the universe is supposed to explode. Well, at least it’s all uphill from here. I mean if the end of the universe in a random fiery explosion of quantum fluctuations isn’t the worst thing that could happen to us, what is? You can blame the Higgs boson for all this because due to its effects on matter as we know it, we can extend the known laws of the Standard Model one way and end up with a universe that’s more or less stable as it is today, but could easily be brought down to a lower energy level, which is a theoretical physicist’s euphemism for "cataclysmic blast violent enough to change the fabric of existence." All that’s needed is a little bubble of lower energy vacuum nucleating somewhere, and next thing you know, fireballs will engulf the entire cosmos at the speed of light.

Or at least that’s one way to read the data, a way which makes for an exciting headline from what’s an otherwise very specialized conference where scientists throw around big ideas just to see if any seem to catch the mass media’s interest. You see, we just found out that matter is stable over a very, very long period of time, and we’re also pretty sure that tiny quantum instabilities happen pretty much all the time, forming virtual particle/anti-particle pairs, so stray quantum fluctuations in the depths of space shouldn’t force matter across the cosmos to start radiating energy. And on top of that, as noted by Joseph Lykken, the originator of the hypothesis, if even the tiniest change to our current models has to be made after the LHC performs its next round of experiments in the next three years, the entire notion of a universe on the brink of disaster from a quantum vacuum has to go out the window. Suddenly, doomsday doesn’t seem so imminent, huh?

Basically, this idea is like forecasting that humans will be exterminated by an alien horde one of these days. It’s not entirely unthinkable and it could happen, but the odds aren’t exactly in favor of this event, and we have very little reliable data with which to make this prediction with any sort of concrete authority. Sure, the Standard Model is incredibly well tested and underpins much of what we know to be true about matter, but when it comes to its predictive powers for all things cosmic, it’s not exactly a crystal ball, more of a murky lake with odd shapes twitching and slithering underneath. So why would Lykken make such a claim? Remember the part about media interest being the purpose of the meeting where the idea was aired? There you go. Now the media is abuzz with doomsday fever and people are talking about quantum physics on the web, exactly what the meeting’s organizers were hoping would happen.

Again, this could all be true, but if we consider that the claim was made for the press and laden with enough caveats to make it more or less a wild guesstimate based on a hunch rather than a peer reviewed body of work on entropy with an attempt at a Grand Unified Theory, I’d say that it’s a pretty safe bet to be very skeptical of this one. Though it’s rather hard not to concede that "instantaneous death by quantum collapse of the cosmos" would be a pretty badass cause of death on your official paperwork because you could well claim that when you went down, you took the entire damn universe with you in a fiery explosion. Just a thought…


Since the dawn of modern cosmology there’s been an implicit assumption that no particular spot in the universe was supposed to be any more special than the rest. On the biggest scales of all, scales at which galaxies are treated like tiny particles, the universe is supposed to be isotropic and homogeneous, i.e. more or less uniform in composition and its expansion from the Big Bang. For decades, simulations and observations seemed to show that this was really the case, but as a newly published paper argues, this might no longer be true, because lurking at the dawn of the universe was a group of quasars stretching for nearly 4 billion light years and tipping the very large metaphorical scales at 6.1 quintillion solar masses. That’s a big enough cluster to shatter the theorized limit on how big cosmological structures should be able to get by a factor of more than three. It looks as if the cosmological principle might need some refining unless it turns out that data from the Sloan Digital Sky Survey is wrong and this cluster is much, much smaller than it appears.

Here are the basics on the fancifully named Huge Large Quasar Group, or Huge-LQG for short. It’s made up of 73 quasars arranged like a Y chromosome that has been shot right through the center with a high speed projectile. The upper, crescent-shaped branch consists of 56 quasars and the remaining 17 cluster tightly right underneath it. It’s about eight times the width of the Great Wall, which was once considered such an enormous cluster of galaxies that it too was billed as a discovery that would challenge the cosmological principle. But simulations showed that it simply wasn’t big enough and that clusters as wide as 1.2 billion light years still leave the cosmos more or less uniform and isotropic. And this is the major issue with Huge-LQG. It’s more than three times wider than that limit and there’s no explanation for how a structure this big could exist without being torn apart by gravity and the expansion of space-time long before it gets anywhere near that size. Now, we can’t exactly toss the cosmological principle away yet, but we at least have to refine it.
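
Just to put numbers on those comparisons, here’s a quick back-of-the-envelope check using the round figures quoted in this post; the Great Wall length is the approximate value implied by the "eight times" comparison above, not a precise measurement.

```python
# Rough size comparison using the round figures quoted in this post.
huge_lqg_ly = 4.0e9            # longest dimension of Huge-LQG, in light years (approximate)
homogeneity_limit_ly = 1.2e9   # widest cluster simulations comfortably allow
great_wall_ly = 0.5e9          # rough length of the Great Wall, for scale

print(f"Huge-LQG vs homogeneity limit: {huge_lqg_ly / homogeneity_limit_ly:.1f}x")
print(f"Huge-LQG vs Great Wall:        {huge_lqg_ly / great_wall_ly:.1f}x")
# About 3.3 times the homogeneity scale and roughly 8 times the Great Wall, which is
# why this structure is so hard to square with the cosmological principle as stated.
```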

Obviously, something is missing, and if we were to simply adjust and say that 4 billion light years should now be the new limit on quasar groups, we would be missing why that’s the case. Letting go of the cosmological principle opens us up to new models of galactic and cosmic evolution and exciting new ideas. However, it’s not really that simple because we’d also have to explain how an anisotropic early universe became the mostly isotropic, homogeneous mature one we see today while working in the confined space of a finite cosmos. One easy way to stick with homogeneity could be to declare that the known universe must be much bigger than we think, because if your scale is big enough, any structure becomes small enough to be averaged away, but without being able to see beyond 13 billion light years or so, super-sizing the universe is an extremely questionable proposition. Either way, Huge-LQG leaves us with a dilemma that really gives the status quo a run for its money, and that’s how the really exciting breakthroughs can be made, fascinating new science gets done, and Nobel Prizes are eventually earned…

See: Clowes, R., et al. (2013). A structure in the early Universe at z ~ 1.3 that exceeds the homogeneity scale of the R-W concordance cosmology. MNRAS. DOI: 10.1093/mnras/sts497

particle decay at the lhc

Once upon a time, we looked at an explanation for dark matter involving a theory about how all matter around us could decay over 6.6 × 10^33 years and noted that there’s a controversy as to whether protons actually decay. To help settle this, astronomers took advantage of the fact that telescopes are relativistic time machines, and peered through them at a galaxy known as PKS 1830-211 — a name only a scientist could love — that just so happens to be a gravitational lens allowing us to see some 7 billion years back. To be a bit more precise, it lets us look at clouds of alcohol molecules formed eons ago in deep space and compare their spectrum to that of booze analyzed in a lab right here on Earth. Don’t worry, no hard liquor was harmed in the process as the alcohol in question is methanol, the kind used in fuel and manufacturing, and which causes blindness if ingested, not the ethanol in which we can indulge. But even if no buzz was killed for the sake of science, what exactly does looking at the light spectra of alcohol tell us about how our universe formed and its possible fate many quadrillions of years from now?

Well, the spectrum of a molecule depends on μ, the ratio of the proton’s mass to the electron’s mass. That’s an extremely important metric because it’s sensitive to the strong force, one of the fundamental interactions of matter as we know it, responsible for building atomic nuclei. Because the proton gets nearly all of its mass from the strong interactions of the elementary particles inside it, if μ falls below or exceeds 1,836.15267245(75) and the difference is reproducibly recorded, we can say that something changed the effect of this fundamental force on matter. Hence, if the 7 billion year old methanol emits an appreciably different spectrum from methanol we create today, this would mean that one of the fundamental forces has changed as the universe grew and that matter could be decaying on cosmic time scales. Lucky for us, it turns out that atoms are very much stable, since the spectrum of methanol was for all intents and purposes identical over 7 billion years, which is just over half of the way back to the Big Bang itself.
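
For a concrete sense of how a drift in μ actually gets measured, here’s a minimal sketch of the comparison described above: methanol transitions respond to a change in μ by different amounts, captured by sensitivity coefficients usually labeled K, so comparing the apparent redshifts of two lines with very different K values bounds Δμ/μ. The redshifts and coefficients below are made-up placeholders rather than values from PKS 1830-211, and sign conventions for K vary between papers.

```python
# Illustrative estimate of a fractional drift in the proton-to-electron mass ratio
# from two absorption lines with different sensitivity coefficients K.
# All numbers here are hypothetical, chosen only to show the arithmetic.

def delta_mu_over_mu(z_1, z_2, k_1, k_2, z_abs):
    """Split the difference in apparent redshift between two transitions by their
    difference in sensitivity to mu; a result consistent with zero means no drift."""
    return (z_1 - z_2) / ((1.0 + z_abs) * (k_2 - k_1))

z_abs = 0.89                       # roughly the redshift of the absorbing cloud
z_line_1, k_1 = 0.8900001, -1.0    # a strongly mu-sensitive methanol line (hypothetical)
z_line_2, k_2 = 0.8900000, -0.05   # a weakly sensitive anchor line (hypothetical)

print(f"delta mu / mu ~ {delta_mu_over_mu(z_line_1, z_line_2, k_1, k_2, z_abs):.1e}")
# Here the two lines agree to about one part in ten million, so the inferred drift
# sits below the 1e-7 level, which is the flavor of limit the methanol study reports.
```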

This tells us a couple of things about the fate of the universe. First is that the Standard Model of physics is still accurate and can make viable predictions about atomic structure and decay. The second is that matter will continue to be matter at the end of the universe, or will decay so slowly that it would only matter on time scales far exceeding the lifetimes of supermassive black holes. Finally, it allows us to rule out overly exotic explanations for the origins of dark matter involving the decay of particular subatomic particles or quirky behavior of the strong force, since these results match a number of previous experiments designed to find out the same thing. In a universe flying apart, churning with explosions, collisions, and radiation, it’s nice to know that the matter that makes up you and the planet on which you live isn’t also slowly decaying on you like a ticking cosmic time bomb. And while space may be out to get you through GRBs, asteroids, and huge galactic train wrecks, it will at least spare the very fabric of your existence.

See: Bagdonaite, J., et al. (2012). A stringent limit on a drifting proton-to-electron mass ratio from alcohol in the early universe. Science. DOI: 10.1126/science.1224898

reaching out

Welcome back to yet another installment of the question of whether we’re all just products of an advanced simulation that created an entire universe, but this time, instead of plunging deep into the lore of the Matrix with Moore’s Law hijinks and philosophy, we’ll be hunting for physical proof that the universe is actually a simulation in the realm of quantum chromodynamics. What exactly is quantum chromodynamics? It’s the study of the strong interactions between quarks and gluons, the point particles that make up matter as we know it and its more exotic forms we sometimes glimpse when we smash atoms together with enough force. How these particles interact basically defines what is and isn’t possible across the entire universe because without their fluctuations, the cosmos would still be a zoo of particles in no way, shape, or form resembling the planets, stars, and galaxies we know and love today. So the big question is whether those point particle interactions have a very telling limit and what this limit could tell us about the underlying nature of the universe.

One idea is that these limits should fit a three dimensional lattice around the interaction, which essentially means that interactions between point particles should fit into a predictable model on which other interactions can be neatly stacked. Since the authors of the idea in question aren’t computer scientists, they refer to this packet of quantum information as a cubical lattice. Being a computer person, I would refer to it as a voxel; it’s a three dimensional pixel which makes up the environment in which the simulation should take place. Think of Minecraft, but with blocks on the smallest possible scale we know how to measure, a scale on which point particles would be as big as ants while atoms would be the size of buildings. This is essentially what the researchers are talking about when considering if our universe is a simulation: countless tiny voxels moving through a mind-bendingly complex simulation governed by the exotic math of a computing device of unknown power, origin, purpose, and accuracy, defining the laws of physics we can detect.
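
To get a feel for the scales such a lattice would have to live at, here’s a rough sketch built on the simple cutoff argument the paper leans on: a lattice with spacing a can’t accommodate particles with momenta above roughly π/a, so the most energetic cosmic rays we’ve already observed push any spacing down to absurdly tiny values. The spacing used below is a placeholder for illustration, not a measured quantity.

```python
# Rough scale check: the highest particle energy a cubic lattice of spacing `a`
# could accommodate, assuming a simple momentum cutoff of pi/a. Purely illustrative.
import math

HBAR = 1.054_571_8e-34   # J*s
C = 2.997_924_58e8       # m/s
EV = 1.602_176_6e-19     # J per electronvolt

def max_energy_ev(lattice_spacing_m):
    """Energy corresponding to the pi/a momentum cutoff, in electronvolts."""
    return math.pi * HBAR * C / lattice_spacing_m / EV

spacing = 1e-27  # metres, i.e. about 1e-12 femtometres (a placeholder value)
print(f"E_max ~ {max_energy_ev(spacing):.1e} eV")
# ~6e20 eV. Since cosmic rays near 1e20 eV have already been observed, any such
# lattice spacing would have to be at least this fine, far below anything we can
# probe directly in a lab.
```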

But how does one prove that we live in a simulated environment and that the limits of point particle interactions don’t simply happen to fall into a voxel on their own? Doesn’t the whole idea rest on circular logic? The voxels should have an energy limit of Ψ, and if the quarks and gluons that we measure have an energy limit of Ψ, they are voxels? Something just does not add up here. If we try to control the state of something virtual, we have to expend a lot of energy to do it. Today, it takes a supercomputer to simulate the behaviors seen in a cube of space barely big enough to fit a few simple atoms. If we want to do even a simple bit flip, we have to run a current that will be interpreted as 0s and 1s. Even on a quantum computer, we’ll need to apply a good bit of energy to keep the qubits in a state we can manipulate. So if a universe is being simulated with some sort of a hypercomputer, it requires an immense amount of energy to run, even if all the supernovae and galactic collisions are just instructions on a stack.

Who would have such energy generation capabilities, and why would anyone decide to simulate the universe in such detail? Simulations are best when they focus on the specific things to model at the appropriate level of abstraction. When researchers look at virtual galaxy collisions, they don’t spend the computing power and electricity to model the position of each star because they don’t really need to know where each star moves for the purposes of seeing how galaxies affect each other. They’re concerned about the overall shape of merging galactic arms, so exact details of every solar system involved would only slow the simulation down. Likewise, a simulation of an entire universe down to the detail of a point particle doesn’t seem to make much sense unless the simulation’s goal is to create something like Laplace’s Demon, which we could do with enough computing grunt but which would mean little in the real world. Beyond that, we get into philosophical and abstract questions like who designed the simulation and whether their universe is a simulation too. And we’ll quickly arrive at the First Cause dilemma on rather shaky grounds. Not exactly the place a scientific proposal wants to end up when taken through its implied consequences…

See: Beane, S. R., Davoudi, Z., & Savage, M. J. (2012). Constraints on the universe as a numerical simulation. arXiv: 1210.1847v1

Few things are as reviled in popular science and physics comment sections as dark matter and dark energy because aside from indirect observations, we’ve never actually detected either. We can see that something is pushing galaxies apart from each other while another invisible force holds these galaxies together, but there have been many attempts to do away with both in a theoretical sense. From imagining universes filled with a low energy plasma, to trying to re-imagine the Big Crunch model, to systematic reviews of CMBR data, just about everything seems to have been thrown at dark matter and dark energy, but no model can yet explain how galaxies are being held together in their current configurations and why they seem to be flying apart from each other. But there is a new idea out there that sounds like it may be on to something. Rather than mathematically rewiring the entire universe, it tries to eliminate the hidden forces by creating a new model of how the gravity of distant matter ripples throughout space, and concluding that its wake may explain the force behind dark matter not with exotic particles or quantum phenomena on cosmological spans, but with something far more familiar…

Generally, we don’t spend a whole lot of time thinking about how distant objects would affect each other since, like a number of other forces, gravity follows the inverse-square law, meaning that if you double the distance between two objects, their gravitational pull on each other is reduced to a quarter of its strength. Basically, you could imagine gravity like a beam of light diffusing with distance, its influence dropping off quickly enough to make the pull of distant stars and planets on each other a somewhat irrelevant concern. Yes, it registers when we get to the scale of galaxies spinning around huge central black holes, but when dealing with hundreds of thousands to millions of light years between two objects, even galactic scale entities shouldn’t matter, right? Well, an Italian mathematician begs to differ and he’s come up with a metric which ties in gravitational wakes from sprawling webs of galaxies and casts these interactions as the enigmatic dark matter. In other words, he says, what we call dark matter is actually just gravity on an intergalactic level. And his paper even includes examples of real galaxies behaving right on track with his models, give or take an occasional nudge from a big dwarf galaxy or the occasional supermassive black hole belch. So mystery solved, right? Well, not quite yet…
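
To see why distant matter is usually written off, here’s a back-of-the-envelope comparison of the pull the Sun feels from the Milky Way’s own interior mass versus the pull from Andromeda; the masses and distances are round, textbook-level figures used only for illustration.

```python
# Newtonian comparison of "nearby" vs "faraway" gravity, using rough round numbers.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # metres in a kiloparsec
MLY = 9.461e21         # metres in one million light years

def accel(mass_kg, distance_m):
    """Gravitational acceleration from a point mass: G * M / r^2."""
    return G * mass_kg / distance_m**2

a_milky_way = accel(1e11 * M_SUN, 8 * KPC)    # mass interior to the Sun's orbit, ~8 kpc away
a_andromeda = accel(1e12 * M_SUN, 2.5 * MLY)  # roughly ten times heavier, ~2.5 million ly away

print(f"Milky Way: {a_milky_way:.1e} m/s^2, Andromeda: {a_andromeda:.1e} m/s^2")
print(f"ratio ~ {a_milky_way / a_andromeda:.0f} to 1")
# The local mass wins by roughly three orders of magnitude, which is why any model
# pinning galactic rotation on faraway matter has to rely on the collective effect
# of the whole cosmic web rather than on any single distant neighbor.
```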

One problem with calling the existing observations that fit the model a slam dunk is the fact that there are over 100 billion galaxies in various stages of development, growth, maturity, or turmoil out there, so odds are that you’ll be able to find galaxies that match your predictions no matter what they are if you invest enough time in the search. While we haven’t mapped anywhere near even a tenth of them, we do have a huge galactic catalog spanning millions of entries, again enough to find virtually any behavior you’d want, and many galaxies caught rotating, forming, or colliding in ways we don’t even know are possible yet since we don’t have enough people to review all the data we currently have. If you wanted to posit that new galaxies are birthed by the supermassive black holes of other galaxies, I’m sure you could find snapshots of early galaxies that look very much like they were being born from the radioactive jets of quasars. Only if something like two thirds of all galaxies rotate just as this new model of dark matter predicts could we say that we’re on to something. However, an effort to confirm something as complex as this would take a long time and a lot of resources, and when parts of the model don’t deal with such things as gravitational lensing, or explain what look like random clumps of dark matter in intergalactic space, there may not be too many astronomers willing to devote a lot of time to testing it.

Similarly, when we’re dealing with a mathematical model, we always have to make sure that the numbers fit a particular set of observations not because the math has been retroactively set to fit them, but because they do simply by virtue of the equations’ results. In this case, an impressive test of this model would be pointing to a random region of space, using the model to calculate the rotation rate of a hypothetical galaxy, then seeing a new galaxy just like the one described in very similar circumstances doing the same thing. Though even then it could be argued that the model’s central metric is simply dark matter without being called dark matter, since one of the key dilemmas with dark matter is its supposed preponderance. With just about 4% of all universal contents being regular matter, there has to be somewhere in the neighborhood of five times as much dark matter as plain old matter by mass. Can rotating matter really have enough momentum to account for the pull and energy of something roughly five times as abundant when averaged out across the entire cosmos? Seems rather unlikely, but then again, the universe keeps showing us that all sorts of intuitively unlikely things tend to be the norm rather than the exception we tend to think they are at first.
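
Here’s a quick sanity check on that mass budget, using round density fractions of the pre-Planck era; the values are approximate and only meant to illustrate the imbalance.

```python
# Rough cosmic inventory check with approximate, pre-Planck density fractions.
omega_baryon = 0.046   # ordinary matter's share of the universe's contents
omega_dark = 0.23      # dark matter's share

print(f"dark matter to ordinary matter: ~{omega_dark / omega_baryon:.0f} to 1")
# Roughly five parts dark matter for every part of ordinary matter by mass, which is
# the gap any gravity-only substitute for dark matter would have to close.
```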

See: Carati, A. (2011). Gravitational effects of faraway matter on the rotation of spiral galaxies. arXiv: 1111.57…

Since we last discussed the universe according to Roger Penrose, I thought the physics community wasn’t going to dedicate more time to the theory of cyclical cosmology, but apparently, I was wrong. It seems that the theory still lives and is being debated by scientists trying to figure out whether the concentric circles that could be spotted in CMBR maps mean anything significant, or if they’re just artifacts of the kind of anomalies we can expect after a Big Bang. Meanwhile, picking up on the criticism offered by many physicists about the need for a trigger for multiple incarnations of the universe, Penrose brought up a potential explanation for how we’d get an old universe that’s run out of gas to suddenly leave an imprint on a new one. Now, one could certainly see how a cyclical cosmology would be attractive. It all but eliminates the question of the source of the mass and energy behind the Big Bang by pointing back to the previous universe. However, were we to look past that, we’d find the theory making matters much more complex, especially when it comes to the cosmic reincarnation scheduled whenever entropy gets too low, because the mechanism now given for it only introduces new problems.

First, let’s recap. When famed physicist Roger Penrose and his colleague Vahe Gurzadyan looked at a model of the cosmic microwave background radiation, or the CMBR, the universe’s first echoes of activity which give us the ability to see back to the very dawn of time as we know it, they spotted what resembled big, concentric circles of cooler temperatures. They then proceeded to theorize that these circles could well be scars left over from ancient Big Bangs and that each of them happened when a universe before ours cooled and died. Their chosen method for explaining how this would work was to correlate low entropy just after a Big Bang with a similar state after an old universe has cooled completely, and leave it at that. In essence, they were saying that because countless tons of trillion degree quark-gluon plasma have an entropy value similar to that of icy nothingness, we can just flip the two and presto, a new universe is born from nothing, kind of like your can of pop suddenly turning into a fireball after you leave it in the freezer too long. So as you can imagine, cosmologist after cosmologist couldn’t figure out how Penrose actually expected the cyclical universe to work and how his past and future Big Bangs were being generated. They also couldn’t figure out what made those circles such unique features, and noted that current models also produce these features, which puts their status as traces of something very special and significant in question. Why would they matter in the big picture?

Now we’re being told that these concentric circles are traces of collisions between supermassive black holes from an earlier universe leaving anomalies in our current cycle. This is a puzzling statement to make since it means a few stray supermassive black holes will register on the CMBR of a new universe but past Big Bangs won’t, and it clashes with Penrose’s earlier assessment that these features are evidence of other Big Bangs, not just activity in the previous universe. Maybe a collision of some ancient supermassive black holes triggered the birth of a new universe? After all, if a black hole is big enough and lasts long enough, it will eventually shed so much of its mass by Hawking radiation that it will no longer be able to self-gravitate, spewing out something a lot like raw quark-gluon plasma which could then undergo baryogenesis and condense into matter. After all, the universe is expected to spend the vast majority of its time as a cold, empty stretch of vacuum dotted by an occasional supermassive black hole, and given the sheer length of time involved, even stranger things might happen. So we’ve got a plausible mechanism for cyclical cosmology then, right? Not so fast. Black holes are not magic and they don’t simply appear out of nowhere. Either vast clouds of hydrogen or an incredibly heavy star will need to collapse into one, and it will take millions of years of feeding and collisions to grow one huge cosmic singularity. After it evaporates, it should release less matter than the universe that birthed it.
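
To put "lasts long enough" into numbers, here’s a rough sketch using the standard Hawking evaporation estimate, t ≈ 5120πG²M³/(ħc⁴); it ignores accretion and the cosmic background a hole would have to outlast, so treat the results as order-of-magnitude illustrations rather than predictions.

```python
# Order-of-magnitude Hawking evaporation times for black holes of different masses.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
HBAR = 1.0546e-34   # J*s
C = 2.998e8         # m/s
M_SUN = 1.989e30    # kg
YEAR = 3.156e7      # s

def evaporation_time_years(mass_kg):
    """Approximate time for a black hole to evaporate via Hawking radiation."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR

for label, mass in [("1 solar mass", M_SUN), ("1 billion solar masses", 1e9 * M_SUN)]:
    print(f"{label}: ~{evaporation_time_years(mass):.0e} years")
# On the order of 1e67 years for a stellar-mass hole and 1e94 years for a supermassive
# one, which is the kind of timescale the "cold, empty stretch of vacuum" above implies.
```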

Using supermassive black holes as progenitors of new universes means that the amount of matter available for each new universe shrinks exponentially, and we’re actually on a course to a universe that will stay in a near perpetual state of maximum entropy after it cools off into nothing. And that brings us right back to the problem of the first Big Bang rather than a set of cycles which keep the universe bouncing back from its low entropy end. This may be why Penrose isn’t using black holes as his Big Bang generators, just as a source of gravitational waves to create little ripples in the CMBR map. However, we already know that a universe right after the Big Bang should have tiny variations, since the blast itself did not need to be perfectly uniform and the tiniest little quantum fluctuation at the instant of the explosion could’ve left a major mark on the new universe as it expanded. And when modeling those tiny fluctuations, we also find concentric circles of slightly cooler temperatures in the CMBR without black holes of dead universes or complex cyclical cosmologies which imply bizarre universes that resurrect themselves. It’s not that we couldn’t all be children of primeval supermassive black holes starting a whole slew of Big Bangs, or that the universe can’t be cyclical. It’s just that we have zero real evidence for these ideas past Penrose’s models and his repeated statements that his critics just don’t understand his work.

See: Moss, A., Scott, D., & Zibin, J. (2011). No evidence for anomalously low variance circles in the sky. Journal of Cosmology and Astroparticle Physics, 2011(04), 033. DOI: 10.1088/1475-7516/2011/04/033