oh quantum causality, we hardly knew ye…
Here’s what sounds like a rather typical experiment in quantum mechanics. A pair of devices we’ll call Alice and Bob, or A and B in cryptographic parlance, measure entangled photons, the kind whose correlations would have to propagate at least 10,000 times faster than the speed of light if anything were actually travelling between them. A third device called Victor, an intermediary in the same cryptographic convention, will randomly choose whether or not to entangle a second pair of photons, one partner taken from each of the pairs Alice and Bob measure. So of course, when Victor entangles his pair of photons, Alice and Bob should find their photons to be entangled, right?
Except there’s a catch. Victor entangles or doesn’t entangle his photons after Alice and Bob have already made their measurements. Barring some sort of technical goof in the setup, Alice and Bob are either predicting what Victor will do or somehow influencing Victor’s supposedly random choice of whether to entangle his photons. In other words, causality just took a lead pipe to the kneecap, with past and future crossing wires on a subatomic level. This shouldn’t happen because the two entangled pairs have nothing to do with each other and Victor only handles one photon from each pair, and yet, it’s happening.
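For those who want to see why Victor’s choice should matter at all, here’s the textbook form of entanglement swapping, under the assumption that both sources emit singlet pairs (the exact states and phase conventions in the actual paper may differ). Label Alice’s photon 1, Victor’s two photons 2 and 3, and Bob’s photon 4:

$$ |\Psi^-\rangle_{12} \otimes |\Psi^-\rangle_{34} \;=\; \tfrac{1}{2}\Bigl( |\Psi^+\rangle_{14}|\Psi^+\rangle_{23} \;-\; |\Psi^-\rangle_{14}|\Psi^-\rangle_{23} \;-\; |\Phi^+\rangle_{14}|\Phi^+\rangle_{23} \;+\; |\Phi^-\rangle_{14}|\Phi^-\rangle_{23} \Bigr) $$

If Victor projects photons 2 and 3 onto one of the Bell states on the right, photons 1 and 4 are left in the matching Bell state even though they never interacted; if he measures them separately instead, photons 1 and 4 stay uncorrelated. Nothing in that algebra cares about when Victor makes his choice, which is exactly the point of the experiment.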
One of the reasons the devices are named in cryptographic convention is that cryptography is the best way to follow what’s actually happening. Imagine sending two secure e-mails containing two entirely separate passwords to two friends, then, after those e-mails have been received, forwarding copies of the passwords to a system administrator who might just randomly reset them. And when those passwords are reset, your two friends somehow end up with the new passwords instead of the ones you sent, even though the system administrator hasn’t even received the originals to reset yet. Which prompts the question of why and how in the hell this could possibly happen.
According to the researchers, we could view the measurements of the photons’ states not as discrete results but as a sort of probability distribution over their possible states, i.e. the photons are both entangled and not entangled depending on what happens in the rest of the system. Then, when their fate is decided, the wave function collapses into a particular result, like the famous Schrödinger’s cat taken one notch higher up the causality ladder: a cat that’s only truly dead or alive once the observer writes the result of the observation into the official logbook and another observer confirms it.
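To make that “probability list” picture a bit more concrete, here’s a minimal numpy sketch, not the authors’ code, again assuming both sources emit singlet pairs. It shows that Alice’s and Bob’s records look completely uncorrelated on their own, and only sort into perfectly correlated or anti-correlated subsets once they’re grouped by Victor’s Bell-state result:

```python
import numpy as np

# Photons are labelled 1, 2, 3, 4; the state vector uses tensor order 1 (x) 2 (x) 3 (x) 4.
# Alice holds photon 1, Bob photon 4, and Victor photons 2 and 3.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def bell(kind, sign):
    """Bell states: 'psi' -> (|01> +/- |10>)/sqrt(2), 'phi' -> (|00> +/- |11>)/sqrt(2)."""
    if kind == 'psi':
        return (np.kron(ket0, ket1) + sign * np.kron(ket1, ket0)) / np.sqrt(2)
    return (np.kron(ket0, ket0) + sign * np.kron(ket1, ket1)) / np.sqrt(2)

# Two independent singlet pairs: (1,2) from one source, (3,4) from the other.
state = np.kron(bell('psi', -1), bell('psi', -1)).reshape(2, 2, 2, 2)  # indices: photons 1,2,3,4

# Correlation observables along two different axes for Alice's and Bob's photons.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
zz, xx = np.kron(sz, sz), np.kron(sx, sx)

# Unsorted data: trace out Victor's photons 2 and 3 -> photons 1 and 4 are maximally mixed.
rho_14 = np.einsum('abcd,ebcf->adef', state, state.conj()).reshape(4, 4)
print("unsorted   <ZZ> = %+.1f, <XX> = %+.1f" % (np.trace(rho_14 @ zz).real,
                                                 np.trace(rho_14 @ xx).real))

# Sorted data: condition on Victor projecting photons 2 and 3 onto each Bell state.
for name, kind, sign in [('psi-', 'psi', -1), ('psi+', 'psi', +1),
                         ('phi-', 'phi', -1), ('phi+', 'phi', +1)]:
    bra = bell(kind, sign).reshape(2, 2).conj()
    out_14 = np.einsum('abcd,bc->ad', state, bra)          # unnormalised state of photons 1 and 4
    prob = np.sum(np.abs(out_14) ** 2)                     # 0.25 for each of Victor's outcomes
    rho = np.outer(out_14.ravel(), out_14.ravel().conj()) / prob
    print("Victor %s: <ZZ> = %+.1f, <XX> = %+.1f" % (name, np.trace(rho @ zz).real,
                                                     np.trace(rho @ xx).real))
```

The punchline is that none of this arithmetic depends on the order of the measurements; the correlations only show up once the records are sorted using Victor’s result, which is why the “nothing is a definite result until the whole record exists” reading doesn’t require any signals racing backwards in time.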
Hold on though, what about the entanglement being nearly instantaneous? Maybe it’s simpler than all of this mumbo jumbo about collapsing wave functions and we don’t need to awaken the zombie of the Copenhagen interpretation of quantum mechanics? Victor could have entangled the photons, and the spooky action, moving much, much faster than the speed of light, could have reached the detectors before the first measurements were finished. That would break the rules of special relativity, which dictate that information can’t travel faster than light, but surely it’s a far more elegant solution, right? Unfortunately, we can’t show that information travels faster than light, as the faster-than-light neutrino saga at OPERA demonstrated, and until we find a way to detect honest to goodness tachyons, we have to stick with the special relativity framework. And in the experiment, Alice and Bob made their measurements a few femtoseconds before the remaining photons even reached Victor.
Granted, since a glitch in OPERA’s fiendishly delicate setup turned into a 60 nanosecond error, surely a femtosecond or two of discrepancy could be caused by a bad angle or a tiny manufacturing defect in a fiber optic cable as well. This is why the researchers suggest repeating the experiment with much longer fibers, so the delay between the first measurements and Victor’s choice grows even larger and the results can be checked again. However, the setup here was carefully calibrated and seems rather unlikely to be hiding a systematic error, so you probably shouldn’t bet the farm on the results being wrong.
Provided that future research validates the experiment, what does this mean for practical applications? Well, we may not have to cool a quantum computer to near absolute zero to read out its results if we could simply collapse the wave function with an algorithm that uses it as an input. Furthermore, we could implement quantum-computer-like features in photonic computing to speed up ordinarily time-consuming processes that we can’t readily parallelize across several CPUs, using an algorithm that tries to collapse the wave functions of all possible relationships between objects, or of all objects with a certain value.
So, obviously this is an exciting result, and it’s interesting to think about all the things we could do with this quantum phenomenon in the realm of computing and, ultimately, communications technology. One also wonders whether objects much bigger than run-of-the-mill photons can be induced to laugh in causality’s face by being cooled to near absolute zero; in the recent past, experiments have shown that objects much larger than we’d expect can adopt the odd behaviors of subatomic particles, and it’s worth asking what we could ultimately do with these super-cooled pseudo-quantum things. But first and foremost, as with any groundbreaking and bizarre experiment, it would be a good idea to replicate it and rule out any interference or technical anomalies, to avoid another OPERA-esque drama…
See: Ma, X., et al. (2012). Experimental delayed-choice entanglement swapping. Nature Physics. DOI: 10.1038/nph…