Archives For quantum mechanics

broken causality

Countless poems, essays, and novels have ruminated on the inexorable forward march of time, how it slowly but surely grinds even the mightiest empires to dust and has an equal fate in store for the wealthiest of the wealthy and the poorest of the poor. But that only seems to apply if you are larger than a subatomic particle. If you’re an electron or a photon, time seems to be a very fungible thing that doesn’t always flow as one would expect and regularly ignores a pillar of the fabric of space and time: the fundamental limits imposed on the exchange of information by the speed of light. But some scientists were hoping they could bring the quantum world to heel with better-designed experiments, arguing that since we had never observed a single photon in an entangled system changing state faster than the speed of light would allow, only clouds of them analyzed with advanced statistical methods, perhaps the noise had drowned out the signals.

Well, Dutch scientists, with the help of several colleagues in France, decided to test quantum entanglement using stable, heavy electrons entangled with photons, so they could observe how the systems changed on stable particles without worrying about decoherence. After managing to successfully entangle the system 245 times, they collected enough data to plug into a formula known as Bell’s inequality, designed to determine whether there are hidden variables in an experiment involving quantum systems. The result? No hidden variables could have been present, while the spooky action of instantly changing quantum systems was reliably observed every time. It’s one of the most thorough and complete tests of quantum causality ever undertaken, and there have been a few murmurings of a potential Nobel Prize for the work. However, the paper is still under peer review, and with the widespread attention it has drawn, it is bound to be scrutinized for flaws.
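For readers curious about what Bell’s inequality actually checks: tests like this one usually compute the CHSH statistic from correlations measured at several pairs of detector angles. Any local hidden-variable theory caps its magnitude at 2, while quantum mechanics allows up to 2√2. Here is a minimal sketch of that arithmetic, using the idealized textbook correlation for a spin-singlet pair rather than the team’s actual measurement data (the angles and correlation function below are standard illustrative values, not figures from the paper):

```python
import math

def chsh(a, a2, b, b2, E):
    """CHSH statistic: any local hidden-variable theory keeps |S| <= 2."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Idealized quantum correlation for a spin-singlet pair.
def quantum(a, b):
    return -math.cos(a - b)

# Detector angles (radians) that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = chsh(a, a2, b, b2, quantum)
print(abs(S))  # 2*sqrt(2) ~ 2.83, above the classical bound of 2
```

An experiment whose measured correlations push the statistic past 2, as this one reportedly did, cannot be explained by any set of hidden variables.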

What does this mean for us? Well, it shows that we’re right about weird physics on a subatomic level happening exactly as counter-intuitively and inexplicably as we thought. But it also tells us that we can’t draw one simple conclusion from it, and hints that some of the laws of physics might be scale-variant, i.e. different depending on the size and scope of the objects they affect. A scale-variant universe is going to make coming up with a unified theory of everything even more difficult than it already is, because we now need to understand why it works that way. But again, this is science at its finest. We’re not trying to come up with one definitive answer to everything just by running enough experiments or watching the world around us for a long time; we’re trying to expand how much we know and broaden our horizons, finding answers and raising new questions which may be answered centuries down the road. Sometimes just knowing what you don’t know can be a big step forward, because you now at least know where to start looking for an answer to a particularly nagging or difficult problem, and where you will hit a dead end.


Imagine a problem with seemingly countless solutions, a paradox that’s paradoxically solved by completely unrelated mechanisms, some of which violate the rules of physics as we know them, while others raise more questions than they answer. That paradox is what happens to an object unfortunate enough to fall into a black hole. Last time we talked about this puzzle, we reviewed why the very concept of something falling into a black hole is an insanely complicated problem which plays havoc with what we think we know about quantum mechanics. Currently, a leading theory posits that tiny wormholes allow the scrambled particles of the doomed object to maintain some sort of presence in this universe without violating the laws of physics. But not content with someone else’s theories, and knowing full well that his last finding about black holes made them necessary in the first place, as explained by the linked post, Stephen Hawking now claims to have found a new solution to the paradox and will be publishing a paper shortly.

While we don’t know the exact wording of the paper, we know enough about his solution to say that he has not really found a satisfactory answer to the paradox. Why? His answer rests on an extremely hard to test notion that objects falling into a black hole are smeared across the edge of the event horizon and emit just enough photons for us to reconstruct holographic projections of what they once were. Unfortunately, those projections would be more scrambled than the Playboy channel on a really old TV, so anyone trying to figure out what the object was probably won’t be able to do it. But it will be something at least, which is all that thermodynamics needs to balance out the equations and make it seem that the paradox has been solved. Except it really hasn’t, because we haven’t the slightest idea of how to test this hypothesis. It still violates the monogamy of entanglement, and because the photons we’re supposed to see are meant to be scrambled into an unidentifiable flash of high-speed, high-energy particles, good luck proving the original source of the information.

Unless we physically travel to a black hole and drop a powerful probe into it, we will only have guesses and complex equations we can’t rule out with practical observations. Sadly, a probe launched today would take 55.3 million years to get to the nearest one, which means any practical experiments are absolutely out of the question. Creating micro black holes, as both an experiment for laboratory study and a potential relativistic power source, would take energy we can’t really generate right now, rendering experiments in controlled conditions impossible for a long time. And that means we’re very unlikely to get closer to solving the black hole information paradox for the foreseeable future unless, by some lucky coincidence, we see something out in deep space able to shed light on the fate of whatever falls into a black hole’s physics-shattering maw, regardless of what the papers tell you or the stature of the scientist making the claim…
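The travel-time figure is simple arithmetic. As a rough sketch, assuming a probe moving at about Voyager 1’s cruise speed and a nearest known stellar-mass black hole on the order of 3,000 light-years away (both round illustrative numbers, not the exact inputs behind the 55.3 million year figure):

```python
# Rough travel time to a nearby stellar-mass black hole.
# Assumed inputs (illustrative round numbers, not the article's exact figures):
probe_speed_kms = 17.0   # roughly Voyager 1's cruise speed, km/s
distance_ly = 3000.0     # order of magnitude for the nearest known black hole

KM_PER_LY = 9.461e12     # kilometers in one light-year
SECONDS_PER_YEAR = 3.156e7

distance_km = distance_ly * KM_PER_LY
travel_years = distance_km / probe_speed_kms / SECONDS_PER_YEAR
print(f"{travel_years / 1e6:.1f} million years")  # tens of millions of years
```

Whatever the exact distance and speed you plug in, the answer stays in the tens of millions of years, which is why laboratory study is the only realistic path.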

black hole accretion disk

Falling into a black hole is a confusing and complicated business, rife with paradoxes and weird quantum effects to reconcile. About a month ago, we looked at black holes’ interactions with the outside world when something falls into them, and today, we’re going to look into the other side of the fall. Conventional wisdom holds that inside a black hole, gravity increases exponentially until time, space, and energy as we know them completely break down at the singularity. Notice I’m not talking about matter at all, because at such tremendous gravitational forces and with searing temperatures in the trillions of degrees, matter simply can’t exist anymore. Movies imagine that singularity as some sort of mysterious portal where anything can happen, while in reality, we’re clueless about what it looks like or even whether it really exists. We don’t even know if anything makes it down to the singularity in the first place. But what we do know is that somewhere, whatever is swallowed by the black hole should persist in some weird quantum state, because we don’t see any evidence of black holes violating the first law of thermodynamics. Enter the fuzzball.

Quantum fuzzballs aren’t really objects or boundary layers as we know them. Instead, they’re a tangle of quarks and gluons made up of the matter that gave rise to the black hole and what it’s been eating over its lifetime. They don’t have singularities, just loops of raw energy trapped by the immense gravitational forces exerted on them. On the one hand, thinking of a black hole as just a hyper-dense fuzzball eliminates the anomalies and paradoxes inherent in descriptions of singularities, but on the other, simply making a problem go away with equations doesn’t mean it was solved. And that’s the real problem with quantum fuzzballs. They appear as exotic math when general relativity is extended deep into a realm where its predictive powers begin to fail, so while it’s entirely possible that we’ve identified the direction we need to explore and what we’d expect were we to look into a black hole, it’s equally likely that the classic idea of their anatomy still holds. Unless we drop something into one of those gravitational zombies nearby, we won’t know if the current toy models of what lies inside them are right. All we have is conjecture.

alpha centauri bb

Carbon is a great element for kick-starting life thanks to its uncanny ability to form reactive, but still stable molecules perfect for creating proteins, amino acids, and even the backbone of DNA and RNA, or their functional equivalents. And yet, according to those who argue that the reason we exist is that the universe is somehow fine-tuned for us, or that life exists as a random, one-in-a-trillion chance, it shouldn’t even be here. You see, when the first stars started fusing hydrogen into helium-4 deep in their searing cores, the resulting helium atoms should have combined into beryllium-8, which decays so quickly that there should have been virtually no chance for another helium atom to combine with it to form carbon-12, which accounts for 98.9% of all carbon in the known universe and makes life possible. According to astronomer Fred Hoyle, whose misuse of the anthropic principle has been used to justify many an anti-evolutionary screed, since carbon-based life exists, there must be a mechanism by which this beryllium bottleneck is resolved, and the clue to this mechanism must lie in the conditions under which the star fuses helium.

You see, when atoms fuse into a new element, the newly formed nucleus has to be at one of its natural, stable energy levels, otherwise the combination of the protons’ and neutrons’ energies, as well as the energy of their kinetic motion, will prevent the fusion. Hoyle’s insight was that carbon-12 must have a natural energy level in resonance with the combined energies of a beryllium-8 nucleus and a passing helium-4 atom, so the reaction could happen during beryllium’s fleeting existence and still settle into a stable carbon-12 nucleus. Imagine rolling magnetic spheres down a hill, and as these magnets roll, they collide. Some will hit each other with just enough energy to keep rolling as a single unit and absorb new spheres they run into; others combine, then break apart, or just roll on their own. The angle, the force of impact, and the speed and masses of the spheres all have to be right for them to join, and when they do, they’ll have to stay that way long enough to settle down. This is quantum resonance in a nutshell, and it’s what made carbon-12 possible.
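Hoyle’s prediction can be put in numbers. Commonly quoted values, used here as illustrative assumptions, place the combined rest energy of beryllium-8 plus helium-4 about 7.37 MeV above carbon-12’s ground state, and the excited state Hoyle predicted (and experimentalists later found) at about 7.65 MeV, just close enough for the fusion to proceed resonantly:

```python
# Energies above the carbon-12 ground state, in MeV (commonly quoted values,
# used here as illustrative assumptions rather than precision data).
be8_plus_he4_threshold = 7.367  # rest energy of Be-8 + He-4 relative to C-12
hoyle_state = 7.654             # excited state of C-12 predicted by Hoyle

# The resonance works because the gap is tiny: thermal kinetic energy in a
# helium-burning stellar core (~10^8 K) easily bridges a few hundred keV.
gap_kev = (hoyle_state - be8_plus_he4_threshold) * 1000
print(f"gap: {gap_kev:.0f} keV above threshold")
```

Had that excited state sat even a little farther from the threshold, the triple-alpha process would have stalled and stars would have made far less carbon.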

But while this is all well and good, especially for us carbon-based lifeforms, where does Hoyle’s discovery leave us in regards to the question of whether the universe was fine-tuned for life? If we assume that only carbon-based life is possible, and that the only life that could exist is what exists today, the argument makes sense. However, those assumptions don’t hold. Even if there were no quantum resonance between helium-4, beryllium-8, and carbon-12 in the earliest stars from which the first atoms of organic molecules were spawned, the first stars were massive, and it’s a reasonable guess that when they went supernova, they would have created carbon, silicon, and metals like aluminium and titanium. All four elements can be useful in creating molecules which can form the chemical backbones of living organisms. In fact, it’s entirely possible that we could one day find alien life based on silicon, and that in some corner of the galaxy there are microbes with genomes wound around a titanium scaffold. Life does not have to exist as we know it, and only as we know it. We didn’t have to exist either; it’s just lucky for us that we did.

When creationists try to come up with the probability that life exactly the way we understand it, or have at least observed it to exist, came out the way it has, against all other probabilities, they are bound to get ridiculous odds against us being here. But what they’re really doing is calculating the probability of a reaction-for-reaction, mutation-for-mutation, event-for-event repeat of the entire history of life on Earth, all 4 billion years of it, based on the self-absorbed and faulty assumption that because we’re here, there must be a reason why that’s the case. The idea that there’s no real predisposition towards modern humans evolving in North Africa, or that life could exist even if there were no abundant carbon-12 to help bind its molecules, is just something they cannot accept, because the notion that our universe created us by accident and we can be gone in the blink of a cosmic eye, replaced by something unlike ourselves in every way, is just too scary for them. They simply don’t know how to deal with not feeling like they are somehow special, or with the fact that nature isn’t really interested in whether they exist or not, just as it hasn’t been for at least 13.8 billion years…

black hole eating planet

Black holes are, needless to say, strange places, and over the years, I’ve written much about all the bizarre paradoxes and extreme questions they pose. All this weirdness is what makes them fun to study, because solving some of these paradoxes and questions ultimately gets us closer and closer to figuring out how time and space work. Consider the science this way. When you build an airplane wing, you want to flex it as hard as you can until it snaps, because that will tell you the limits of the materials you used and the soundness of your design. Much the same way as destructive testing helps engineers hone their craft, studying a place where physics seems, well, broken helps scientists test the outer limits of their discipline. Of course, when you have the broken fabric of space-time to piece together, some problems will be much harder to solve than others, and one of the most persistent ones is whether black holes have firewalls.

What exactly is a black hole firewall? We don’t really know, because it’s not supposed to be one of the defining features of a black hole’s anatomy. Instead, it’s what happens when the spontaneous quantum particles that litter the cosmos appear in the wrong place. These particles constantly blink into existence as particle/anti-particle pairs which instantly annihilate each other, and while this poses no problem anywhere else in the cosmos, when they appear too close to the event horizon of a black hole, one particle gets drawn in while the other is repelled into space, carrying away a small flash of energy which the black hole must give up to keep with the laws of physics. It’s a fraction of a fraction of a nanowatt, but over the eons a black hole will exist, this adds up, and the black hole would eventually be unable to hold itself together and explode. At least this is the theory behind what we call Hawking radiation, which balances out the escaping particle’s energy and slowly returns the swallowed matter’s energy to the universe. So what’s the problem?
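To get a feel for just how faint this trickle of energy is, the temperature of Hawking radiation is T = ħc³/(8πGMk_B), so the bigger the black hole, the more dimly it glows. A quick sketch with standard physical constants shows that a solar-mass black hole radiates at tens of billionths of a kelvin, far colder than the cosmic microwave background around it:

```python
import math

# Standard physical constants (SI units).
hbar = 1.0546e-34   # reduced Planck constant, J*s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.3807e-23    # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Temperature of a black hole's Hawking radiation, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_sun)
print(f"{T:.2e} K")  # ~6e-8 K: billions of times colder than the CMB
```

Since the temperature falls as mass grows, a stellar black hole today actually absorbs more energy from the microwave background than it radiates away, which is why evaporation only matters over truly cosmic timescales.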

Well, the problem lies in a technicality that’s actually quite a big deal, because it breaks a very fundamental principle of how quantum systems work. Entangled quantum particles are subject to what’s called the monogamy of entanglement, meaning that once a particle is fully entangled with a partner in a system, it can’t be entangled with another. To paraphrase a great explanation the source of which I just can’t recall, imagine quantum entanglement as rolling a pair of magical dice: no matter what numbers come up on each individual die, the sum of those numbers is the same with every roll. This is important because it lets us know that if we entangle a pair of dice and get 12 on our first throw, then when we throw them again and one comes up as a 7, we know the other shows a 5 without even looking at it. Understanding how this works allows us to do some amazing experiments on the very nature of causality itself. But for this to balance out, we would have to know for sure that our 5 isn’t also being used to change the sum of another roll elsewhere.

And this is exactly what a black hole’s event horizon allows. When one of our virtual particles is swallowed and the black hole gives off the teeny Hawking emission, the remaining particle and the emission are entangled. But the infalling anti-particle is still there, and the outgoing one is still entangled with it. Two independent quantum systems have created a mass-energy surplus, which is a very blatant violation of the laws of thermodynamics, and most solutions to this weird state of affairs involve even further violations of the laws of physics. Enter the firewall. Not really an ongoing phenomenon just beyond the event horizon, it’s instead a line whose crossing would permanently sever the entanglement between the outgoing and infalling particles, leaving just the Hawking emission and the outgoing particle as a quantum system. This would release massive amounts of energy proportional to the event, and trap the particle in the black hole forever. It’s not a tidy solution, but it sort of works if you try not to think about it too hard.

Of course, thinking too hard about things is what scientists do, and they quickly pointed out that breaking quantum entanglement on a whim just doesn’t work, no matter how much energy you release to compensate for the inequality in the resulting equations. And that means the firewall isn’t really the answer to what happens to the energy and information when black holes devour something. A new solution proposes that black holes actually spawn wormholes when they eat entangled particles. These aren’t the conventional kind of wormholes we think of, and they couldn’t be used to cross space and time on a whim, but they’re essentially a connection which keeps both the escaping and the infalling particle entangled. Recall our dice-based quantum system, and imagine that you roll one die in NYC while someone else in Hong Kong rolls the other. Both will still sum to 12, and if your die shows a 6, the one in Hong Kong will as well. But should you be unable to see what you rolled, you can count on a call telling you what the other die is showing at the moment. That phone call? That’s more or less the wormhole in question.

Yes, this does basically jettison Hawking radiation and leads to its own weird conclusions about the fabric of the universe being composed of a constantly entangled quantum mesh, but that’s how science works. Slowly and carefully, we chip away at complex problems and flesh out all of the toy models until we can simulate real systems, then try to observe them and their behavior out in the wild. What happens to matter that falls into a black hole, and whether it’s still connected to a quantum system on the outside, is still a wide open question. But the fact that it’s just so difficult to even try to answer what seems like a simple question at first glance shows just how bizarre, complex, and self-contradictory the universe can be. Far from a steady, ordered system, it’s an incredibly wild mess that seems to barely be governed by its own rules should we look just a bit too closely, and nowhere is this more evident than with black holes. They’re places where all we know about time and space is broken. But they’re also the places that could teach us the most about these laws, especially because that’s where they’re being tested at their extremes…


A long time ago, I shared one of my favorite jokes about philosophers. It went like this. Once, the president of a large and prestigious university was asked which of his staff were the most expensive to fund. "Physicists and computer scientists," he replied without hesitation, "they always want some brand new machine that costs a fortune to build and operate, not like mathematicians who only need paper, pencils, and erasers. Or better yet, my philosophers. Those guys don’t even need the erasers!" Yes, yes, I know, I’m a philosophical philistine; I’ve been told this so many times that I should start some sort of contest. But my lack of reverence for the discipline is not helped by philosophers who decide to speak up for their occupation in an age of big data and powerful new tools for scientific experimentation meant to propose answers to new and ever more complex real world questions. Case in point, a column by Raymond Tallis declaring that physics is broken, so much so that it needs metaphysics to pull itself back together and produce real results.

Physics is a discipline near and dear to my heart because certain subsets of it can be applied to cutting edge hardware, and as someone whose primary focus is distributed computing, the area of computer science which gives us all our massive web applications, cloud storage, and parallel processing, there’s a lot of value in keeping up with the relevant underlying science. And maybe there’s already an inherent bias here when my mind starts to wonder how metaphysics will help someone build a quantum cloud or radically increase hard drive density, but the bigger problem is that Tallis doesn’t seem to have any command of the scientific issues he declares to be in dire need of graybeards in tweed suits pondering the grand mechanics of existence with little more than the p’s and q’s of propositional logic. For example, take his description of why physics has chased itself into a corner with quantum mechanics…

A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent which can be remedied only by vastly multiplying worlds.

As science bloggers love to say, this isn’t even wrong. Tallis and Wallace have mixed up three very different concepts into a grab bag of confusion. Quantum mechanics can do very, very odd things that seem to defy the normal flow of time, but there’s nothing that says we can’t know the general topology of a quantum system. The oft-cited and abused Uncertainty Principle is based on the fact that certain fundamental building blocks of the universe can function as both a wave and a particle, and each state has its own set of measurements. If you try to treat the blocks as particles, you can measure the properties of the particle state. If you try to treat them as waves, you can only measure the properties of the waves. The problem is that you can’t get both at the same exact time because you have to choose which state you measure. However, what you can do is create a wave packet, where you should get a good, rough approximation of how the block behaves in both states. In other words, measurement of quantum systems is very possible.
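The wave packet compromise can even be checked numerically. For a Gaussian wave packet, the minimum-uncertainty state, the product of the position and momentum spreads sits exactly at the Heisenberg floor of ħ/2: finite in both, zero in neither. A sketch with ħ set to 1, computing the position spread by brute-force integration and using the known analytic width of a Gaussian’s momentum distribution (the choice of σ is arbitrary):

```python
import math

# Uncertainty product for a Gaussian wave packet (hbar = 1 for simplicity).
sigma = 1.3   # position-space width of |psi|^2, an arbitrary choice
dx = 0.001

# <x^2> for |psi(x)|^2, a normalized Gaussian of standard deviation sigma,
# integrated numerically over [-20, 20] (many standard deviations wide).
norm = 1.0 / (sigma * math.sqrt(2 * math.pi))
var_x = sum(
    norm * math.exp(-x * x / (2 * sigma * sigma)) * x * x * dx
    for x in (i * dx for i in range(-20000, 20001))
)

# Fourier theory: the momentum distribution of a Gaussian packet is itself
# Gaussian, with standard deviation 1 / (2 * sigma) in these units.
sigma_p = 1.0 / (2 * sigma)

product = math.sqrt(var_x) * sigma_p
print(product)  # ~0.5, the Heisenberg lower bound hbar/2
```

Whatever σ you pick, the product stays pinned at 0.5: squeeze the packet in position and it spreads in momentum, exactly the trade-off the wave packet picture describes.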

All right, so this covers the Uncertainty Principle mixup; what about the other two concepts? The biggest problem in physics today is the lack of unification between the noisy quantum mechanics on the subatomic scale and the ordered patterns of general relativity. String theory and the very popular but nearly impossible to test many worlds theory try to explain the effects of the basic forces that shape the universe on all scales in terms of different dimensions or leaks from other universes. So when Tallis says that it’s been 40 years and we still don’t know which one is right, then piles his misunderstanding of quantum mechanics on top of Wallace’s seeming inability to tell the difference between multiverses and string theory, he ends up with the mess above. We get a paradox where there isn’t one, and scope creep from particle physics into cosmology. Not quite a ringing endorsement of philosophy in physics so far. And then Tallis makes it worse…

The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).

Again, a grab bag of not even wrong is supposed to sell us on the idea that a philosopher could help where our tools are pushed to their limits. Considering that Tallis dismisses the entire idea that neuroscience as a discipline has any merit, no wonder he proclaims that we don’t have any clue of what consciousness is from a biological perspective. The fact is that we do have lots of clues. Certain patterns of brain activity are strongly associated with a person being aware of his or her environment, being able to meaningfully interact, and being able to store and recall information as needed. It’s hardly the full picture of course, but it’s a lot more than Tallis thinks it is. His bizarre claim that scientists consider some nerve impulses to be conscious while the majority are said not to be is downright asinine. Just about every paper on the study of the conscious mind in a peer-reviewed, high-quality journal refers to consciousness as a product of the entire brain.

The rest of his argument is just a meaningless, vitalist word salad. If brain activity is irrelevant to consciousness, why do healthy living people have certain patterns while those who’ve had massive brain injuries have different ones depending on the site of injury? Why do all those basic brain wave patterns repeat again and again in test after test? Just for the fun of seeing themselves on an EEG machine’s output? And what does it mean that it’s a surprising fact that we can perceive matter around us? Once again, hardly a serious testament to the usefulness of philosophers in science, because so far all we’ve got are meaningless questions accusing scientists of being unable to solve problems that aren’t problems, a couple of buzzwords used incorrectly, and bits and pieces of different theories haphazardly cobbled into an overreaching statement that initially sounds well researched but means pretty much nothing. And that’s before we get to Tallis outright dismissing the science without explaining what’s wrong with it…

Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication.

Here we get a double whammy of Tallis getting the science wrong and deciding that he doesn’t like the existing ideas because they don’t pass his smell test. He’s combining competing ideas to declare them inconsistent within a unified framework, seemingly unaware that the hypotheses he’s ridiculing aren’t complementary by design. Yes, we don’t know how the universe was created; all we have is evidence of the Big Bang, and we want to know exactly what banged and how. This is why we have competing theories about quantum fluxes, virtual particles, branes, and all sorts of other mathematical ideas created in a giant brainstorm, waiting to be tested for any hint of a real application to observable phenomena. Pop sci magazines might declare that math proved that a stray quantum particle caused the Big Bang, or that we were all vomited out by some giant black hole, or are living in the event horizon of one, but in reality, that math is just one idea. So yes, Tallis is right about the confusion under the algebra, but he’s wrong about why it exists.

And here’s the bottom line. If the philosopher trying to make the case for his profession’s inclusion in the realms of physics and neuroscience doesn’t understand what the problems are, what the fields do, and how the fields work, why would we even want to hear how he could help? If you read his entire column, he never does explain how, but really, after all his whoppers and not even wrongs, do you care? Philosophers are useful when you want to define a process or wrap your head around where to start your research on a complex topic, like how to create an artificial intelligence. But past that, hard numbers and experiments are required to figure out the truth; otherwise, all we have are debates about semantics which at some point may well turn into questions of what it means to exist in the first place. Not to say that this last part is not a debate worth having, but it doesn’t add much to a field where we can actually measure and calculate a real answer to a real question and apply what we learn to dig even further.

industrial laser

Most of us learned about lasers from science fiction. We know that lasers come in red if you’re the bad guy, and green or blue if you’re the good guy. We know that they travel at the speed of sound between two space fighters, and they make a phew-phew sound when fired. And they all travel in perfect straight lines. Of course real lasers are very different. They come in all colors, depending on how they’re powered and fired, they’re silent, some are invisible until they reach the kind of energy levels used in fusion reactor prototypes when fired at a real world target, and they travel to their targets so quickly, they seem to flash into existence and disappear in an instant. Oh, and they don’t always travel in a straight line. In fact, as noted elsewhere on the web by a scientist and science blogger, they can bend it like Schrödinger if they emit an Airy beam, curving slightly after passing through a filter that changes their quantum waveforms. Previously, this feat had only been accomplished with photons, but now, it’s been done with electrons.

Airy beams, named after the British astronomer whose work in optics gave us the function that describes them, have a couple of very interesting properties. Not only do they curve, but they’re not as prone to diffraction as our run-of-the-mill laser beams, and they can heal themselves after hitting an obstacle that should severely diffuse them, reassembling to continue their curved path after passing through it. It’s even more impressive that electron Airy beams behave just like their photon counterparts, because that allows for significant improvements in electron microscopes, precision sensors, and possibly even alternative computer chip designs that can better control the flow of electrons through themselves. How do you get electrons to do such bizarre things? A specially designed hologram projected in front of an electron gun changes their quantum state and sends them on whatever trajectory you need them to follow. Pretty much anything that uses the flow of electrons to do something very precise in tight quarters can benefit from the ability to attach a sort of steering wheel to particles that would otherwise travel in straight lines.

Now it’s important to keep in mind that curving is not what makes this an Airy laser; it’s the ability to change the quantum states of the photons and electrons being fired. Being able to scale up such lasers could be huge not just in the lab or in specialized applications, but even for very common, everyday things like high-speed wi-fi access, secure transmissions, and major gains in energy efficiency for a whole slew of electronic devices we use on a regular basis. With so much talk about how much money is being "wasted" on basic research like this, it’s amazing how little attention has been paid to the possibilities Airy lasers could offer if we could integrate their key principles into today’s devices. After all, experiments like this one are the very definition of basic research: the science says something should be possible, so let’s try it and see what happens. In this case, Israeli scientists showed that Airy lasers can indeed do some pretty cool things…

See: Voloch-Bloch, N., et al. (2013). Generation of electron Airy beams. Nature, 494(7437), 331–335. DOI: 10.1038/nature11840

hello monster

Oh for crying out loud, I’m gone for a Murphy’s Law kind of week and as soon as I can get back to blogging, the universe is supposed to explode. Well, at least it’s all uphill from here. I mean if the end of the universe in a random fiery explosion of quantum fluctuations isn’t the worst thing that could happen to us, what is? You can blame the Higgs boson for all this because due to its effects on matter as we know it, we can extend the known laws of the Standard Model one way and end up with a universe that’s more or less stable as it is today, but could easily be brought down to a lower energy level, which is a theoretical physicist’s euphemism for "cataclysmic blast violent enough to change the fabric of existence." All that’s needed is a little quantum vacuum fluctuation and next thing you know, fireballs will engulf the entire cosmos at the speed of light.

Or at least that’s one way to read the data which makes for an exciting headline from what’s an otherwise very specialized conference where scientists throw around big ideas just to see if any catch the mass media’s interest. You see, we just found out that matter is stable over a very, very long period of time, and we’re also pretty sure that tiny quantum instabilities happen pretty much all the time, forming virtual particle/anti-particle pairs, so little quantum vacuum fluctuations in the depths of space shouldn’t force matter across the cosmos to start radiating energy. And on top of that, as noted by Joseph Lykken, the originator of the hypothesis, if the tiniest change to our current models has to be made after the LHC performs its next round of experiments in the next three years, the entire notion of a universe on the brink of disaster from a quantum vacuum has to go out the window. Suddenly, doomsday doesn’t seem so imminent, huh?

Basically, this idea is like forecasting that humans will be exterminated by an alien horde one of these days. It’s not entirely unthinkable and it could happen, but the odds aren’t exactly in its favor, and we have very little reliable data with which to make this prediction with any sort of concrete authority. Sure, the Standard Model is incredibly well tested and underpins much of what we know to be true about matter, but when it comes to its predictive powers for all things cosmic, it’s not exactly a crystal ball, more of a murky lake with odd shapes twitching and slithering underneath. So why would Lykken make such a claim? Remember the media interest part about the purpose of the meeting where the idea was aired? There you go. Now the media is abuzz with doomsday fever and people are talking about quantum physics on the web, exactly what the meeting’s organizers were hoping would happen.

Again, this could all be true, but if we consider that the claim was made for the press and laden with enough caveats to make it more or less a wild guesstimate based on a hunch rather than a peer reviewed body of work on entropy with an attempt at a Grand Unified Theory, I’d say that it’s a pretty safe bet to be very skeptical of this one. Though it’s rather hard not to concede that "instantaneous death by quantum collapse of the cosmos" would be a pretty badass cause of death on your official paperwork because you could well claim that when you went down, you took the entire damn universe with you in a fiery explosion. Just a thought…

beyond absolute zero

Suppose you take some potassium atoms and put them in a vacuum where you cool them to as close to absolute zero as you possibly can in a lab. What you’ve done is reduce the entropy of this system of atoms because the colder it gets, the less kinetic energy they have, and the less energy they can exchange with each other. Sure, there will be some quantum effects that will upset the perfect stillness of these atoms, which is why it’s theorized that we’ll never see absolute zero temperatures in the wild, but for all intents and purposes, you’ve hit the coldest that matter can get. Now, with a laser, start heating up the atoms, but tune their interactions so they attract each other and stay in their place in the system. Their energy goes up but they can’t exchange it or move in any direction. The overall entropy of the system is now technically lower and you’ve just broken a limit we had the gall to preface with the word "absolute." You’ve effectively "cooled" potassium to a billionth of a degree below absolute zero, or at least to a quantum state that seems like it.
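The "broken limit" is less paradoxical when you look at the thermodynamic definition of temperature, a standard textbook identity relating temperature to how entropy S changes with energy E:

```latex
\frac{1}{T} = \frac{\partial S}{\partial E}
```

If adding energy lowers the entropy — which is only possible when there’s a ceiling on how much energy the atoms can hold — the right-hand side goes negative, and so, formally, does the temperature.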

This is exactly what a team of scientists recently achieved in the lab, and they’re excited about a slew of possible experiments to test the behavior of atoms and molecules in an exotic quantum state, opening new avenues for investigating the nature of dark matter and dark energy. As the media reports it, they managed to chill something below -273.15 °C, but take a moment to note that the word "cooled" in the description of the experiment is in quote marks. That’s because they didn’t actually go below this temperature. What they really did is way, way more complicated and has actually long been thought possible, just never accomplished. Absolute zero is still important because it marks a point at which injecting energy into a system changes how it’s distributed. For the positive temperature range, which in this case is anything above absolute zero, more energy brings more atoms to the same energy state. Negative temperatures, however, make the exchange of energy much more difficult and can create inequalities between the atoms’ energy states.

Again, seems rather counter-intuitive, doesn’t it? In this setup, positive temperatures should be the low entropy ones, right? Well, in this range, atoms can move and exchange their energy with no limit, which means that their possible number of quantum states could be infinite. Atoms which have to deal with negative temperature have a limit to how many energy states they could be in, meaning that you can keep injecting energy into the system but it will be more or less trapped in the atoms, and the lattice will remain stable rather than fly apart as the atoms start moving more and more in response. In short, when you go into negative temperatures, you lower entropy as you add energy, with the bizarre added twist that as you initially heat up the atoms, they could be in an infinite number of energy states, then abruptly find themselves trapped in ever fewer. Just another way quantum mechanics makes things fun, and by fun I mean really, really weird.
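The bounded-spectrum picture above is easy to play with numerically. Here’s a minimal sketch using a made-up five-level system in units where Boltzmann’s constant is 1 — none of these numbers come from the actual potassium experiment, they just show how a negative temperature flips the population of energy states upside down:

```python
import numpy as np

# Toy bounded energy spectrum: five discrete levels, hypothetical values
# (the real experiment pins atoms in an optical lattice, but any finite
# ladder of levels shows the same effect).
energies = np.arange(5.0)

def populations(T):
    """Boltzmann populations p_i proportional to exp(-E_i / T), with k_B = 1."""
    w = np.exp(-energies / T)
    return w / w.sum()

p_pos = populations(+2.0)   # positive temperature: lower levels favored
p_neg = populations(-2.0)   # negative temperature: HIGHER levels favored

# At T > 0, occupation falls as energy rises; at T < 0 it rises instead --
# the population inversion that defines a negative absolute temperature.
print(np.all(np.diff(p_pos) < 0))   # True
print(np.all(np.diff(p_neg) > 0))   # True
```

Note that pumping more energy into the T < 0 branch crowds ever more atoms into the topmost levels, which is exactly the "entropy drops as energy rises" behavior described above.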

So what does this all mean? It means that in this case, absolute zero has nothing to do with how cold things are, but with how energy states are distributed in a system, and while we thought that this temperature was the dividing line between the two types of energy distribution, this is really the first experimental proof we have that crossing it can happen in nature. If this seems really confusing, it is, because that’s just the complicated nature of the beast. But knowing that one can achieve a negative temperature under the right conditions means that you can explore an entire realm of very bizarre quantum states that could explain otherwise seemingly inexplicable behaviors, one of which could offer an explanation for dark energy and give experimentally verifiable answers to one of cosmology’s biggest mysteries. And while yours truly would love to dive deeper into these possibilities, it may be best for everyone just to digest what we have so far and get ready for the imminent flood of Twitter and Facebook posts about cooling things below absolute zero…

See: Braun, S., et al. (2013). Negative absolute temperature for motional degrees of freedom. Science, 339(6115), 52–55. DOI: 10.1126/science.1227831

schroedinger's cat

Here’s what sounds like a rather typical experiment with quantum mechanics. A pair of devices we’ll call Alice and Bob, or A and B in cryptographic parlance, measure entangled photons, which we know can show the effects of entanglement at least 10,000 times faster than the speed of light. A third device called Victor, an intermediary in the very same cryptographic convention we just used, will randomly choose to entangle or not to entangle another pair of photons. So of course when Victor entangles its pair of photons, Bob and Alice would find the photons to be entangled, right? Except there’s a catch. Victor entangles or doesn’t entangle its photons after Alice and Bob have already made their measurements. Barring some sort of technical gaffe in the setup, Alice and Bob are basically predicting what Victor will do or somehow influencing Victor’s supposedly random choice of whether to entangle its photons or not. In other words, causality just took a lead pipe to the kneecap as past and future cross wires on a subatomic level. This shouldn’t happen because the two pairs of entangled photons are not related to each other and Victor is dealing with a photon from each pair, and yet, it’s happening.

One of the reasons why the names of the devices are in cryptographic convention is because cryptography is the best way to follow what’s actually happening. Imagine sending two secure e-mails containing two entirely separate passwords to two friends, then, after these e-mails have been received, forwarding copies of those passwords to a system administrator who might just randomly reset them. And when those passwords are reset, somehow, your two friends get the new passwords instead of the ones you just sent them, even though the system administrator hadn’t even received the originals to reset yet. This prompts the question of why and how in the hell this could possibly happen. According to the researchers, we could view the measurements of the photons’ states not as discrete results but as a sort of probability list of their possible states, i.e. they’re both entangled and not entangled depending on what will happen through the rest of the system. Then, when their fate is decided, the wavefunction collapses into one particular result, like the famous Schrödinger’s cat taken one notch higher up the causality ladder, a cat which will only be truly dead or alive when the observer writes down the result of his or her observations in the official logbook after another observer confirms them.
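The algebra behind entanglement swapping helps here too. Assuming the convention typical of these experiments, in which both sources emit singlet pairs (photons 1 and 2 from one source, 3 and 4 from the other, with Alice and Bob holding 1 and 4), the joint four-photon state can be rewritten exactly as an equal-weight sum over Bell states of the (1,4) and (2,3) pairs:

```latex
|\Psi^-\rangle_{12}\otimes|\Psi^-\rangle_{34}
= \tfrac{1}{2}\Big(
   |\Psi^+\rangle_{14}\otimes|\Psi^+\rangle_{23}
 - |\Psi^-\rangle_{14}\otimes|\Psi^-\rangle_{23}
 - |\Phi^+\rangle_{14}\otimes|\Phi^+\rangle_{23}
 + |\Phi^-\rangle_{14}\otimes|\Phi^-\rangle_{23}\Big)
```

So when Victor performs a Bell measurement on photons 2 and 3, photons 1 and 4 are projected into the matching entangled state. The identity itself carries no timestamp, which is what leaves room for the delayed choice in the first place.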

Hold on though, what about the entanglement being nearly instantaneous? Maybe it’s simpler than all of this mumbo jumbo about collapsing wavefunctions and we don’t need to awaken the zombie of the Copenhagen interpretation of quantum mechanics? Victor could have entangled the photons, and the spooky action, moving much, much faster than the speed of light, could have reached the detectors before the first measurements. We’d be breaking the rules of special relativity, which dictate that information can’t travel faster than light, but surely this is a far more elegant solution, right? Unfortunately, we can’t prove that information travels faster than light, as shown by the neutrino saga at the OPERA labs, and until we find a way to detect honest to goodness tachyons, we have to follow the special relativity framework, and in the experiment, each half of the photon pair was measured a few femtoseconds prior to reaching Victor. Granted, since a glitch in OPERA’s fiendishly delicate arrangement turned into a 60 nanosecond error, surely a femtosecond or two of discrepancy could be caused by a bad angle or a tiny manufacturing defect inside a fiber optic cable as well. This is why the researchers suggest more experiments using much longer wires to make the delay even longer and see if their results hold up. However, the experimental setup here has been well calibrated and seems rather unlikely to be subject to a systematic error, so you probably shouldn’t bet the farm on their results being wrong.

Provided that future research validates their experiment, what does this mean for practical applications? Well, we may not have to cool a quantum computer to near absolute zero to measure its output if we can simply collapse the wavefunction with an algorithm that uses it as an input. Furthermore, we could implement quantum computer-like features in photonic computing to speed up ordinarily time consuming processes we can’t readily parallelize across several CPUs, using an algorithm that tries to collapse the wavefunctions of all possible relationships between objects, or all objects with a certain value. So obviously this is an exciting result and it’s interesting to think about all the things we could do with this quantum phenomenon in the realm of computing and, ultimately, communications technology. One also wonders whether objects much bigger than run of the mill photons can be induced to laugh in causality’s face by being cooled to near absolute zero, since recent experiments have shown that objects much larger than we’d expect can adopt the odd behaviors of subatomic particles, and what we could ultimately do with these super-cooled pseudo-quantum things. But first and foremost, as with any groundbreaking and bizarre experiment, it may be a good idea to replicate it to rule out any interference or technical anomalies and avoid another OPERA-esque drama…

See: Ma, X., et al. (2012). Experimental delayed-choice entanglement swapping. Nature Physics DOI: 10.1038/nph…