Archives For physics

looking into the universe

No one seems to be exempt from having some sort of an issue with weight nowadays, not even the universe itself. You see, by measuring the gravitational pull of all the galaxies, we can estimate how much the cosmos weighs. But since the measurement is indirect, our observations don’t necessarily line up with each other, leaving us with something a lot like placing a person on a scale only to see a weight roughly twice what would make sense for any human that size. For a long time, astronomers looked for any trace of the missing matter and found that sometimes, it’s hiding in plain sight behind clouds of gas and dust. Still, just because you’re now able to see stars and galaxies you couldn’t see before in one region of space doesn’t mean you can call the whole matter resolved and retire for drinks at the local pub. You still have to show that the same kind of phenomenon is hiding stars and galaxies everywhere, which is no trivial task. You have to keep scouring the cosmos for any sign of them to be sure.

Sometimes, though, the universe gives you a break from an unexpected source. In this case, a stray signal from an FRB, which, despite media reports to the contrary, is not aliens or an open microwave door in the telescope facility but a real, violent cosmic phenomenon, traveled a mind-boggling 6 billion light years to reach Earth. Not only was it the first time an FRB was pinned down to a particular galaxy, but its radio afterglow, studied in unprecedented detail by a sizeable team of researchers around the world, showed that the missing matter we’ve inferred to exist really is there, affecting radio waves in just the ways our models say it should. How can we tell? We can see it in the dispersion of the FRB’s signal, which is smeared out by travel through the interstellar and intergalactic medium. The more matter the signal passes through on its way to Earth, the more pronounced the effect, and the result can be generalized because at the scales involved, the universe is more or less homogeneous in density. Averaging out the matter we think is there, at the density the equations say it should have based on its mass, gives us a very straightforward benchmark for the expected dispersion, and this FRB matched it.
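For a rough sense of how dispersion encodes the amount of matter along the way, here’s a minimal sketch of the standard cold-plasma delay formula; the dispersion measure and observing band below are picked purely for illustration, not taken from this particular burst:

```python
# Cold-plasma dispersion: lower radio frequencies arrive later, and the
# delay scales with the dispersion measure (DM), the column density of
# free electrons between us and the source.
K_DM = 4.149  # dispersion constant, in ms GHz^2 cm^3 / pc

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) between a low and a high observing frequency
    for a signal with dispersion measure dm (pc cm^-3)."""
    return K_DM * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# An illustrative DM of 800 pc/cm^3 observed between 1.2 and 1.5 GHz:
print(round(dispersion_delay_ms(800.0, 1.2, 1.5), 1))  # ~830 ms of smearing
```

The more intervening plasma, the larger the DM, so once you know how far the burst traveled, that number doubles as a census of the otherwise invisible ionized matter along the line of sight.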

Now, newspapers and blogs less familiar with science, or unable to read scientific papers, will be trumpeting that we’ve solved the mystery of dark matter, claiming it was just boring galaxies astronomers couldn’t see before. But that’s not true. The missing matter in question is part of the 5% or so of standard, baryonic matter the observable universe is made of. Dark matter and dark energy still make up roughly 95% of the universe’s mass-energy, and their existence was inferred using the same methods that showed the missing matter found in the FRB’s dispersion needed to be there. Still, the fact that we now know we’ve weighed the universe correctly is a huge boon to further research in astronomy and cosmology. Science is very exciting when new data overturns something we’ve long held to be true. But at the same time, we need at least the core principles of how we think the universe works to hold as firm foundations so we can capitalize on breakthroughs and have a context for them. This discovery of missing matter, alongside the recent detection of gravitational waves, is exactly what we needed: nature’s confirmation that as science moves forward, we’re starting to get key things about how the universe works right.

futurama takeoff

Far be it from me to claim psychic powers, because those aren’t real, but the moment news of weird results from experiments to test the EmDrive came to my attention, I knew that one day I’d have to write a post about it. Not wanting to simply jump on the bandwagon and join the chorus of voices explaining that it was impossible, I waited for proper experiments to show that the minuscule thrust recorded in earlier tests was within the margin of error, a little bit of interesting noise but nothing more, and prove my premonition wrong. But as odd as it sounds, the EmDrive is still being tested, still showing faint signs of life, and getting a whole lot of press claiming we’re on the verge of building a warp drive. And so, it’s time to quit stalling, roll up my sleeves, and explain why the EmDrive can show us some interesting physics in weird environments, but simply would not work as a viable spacecraft engine as planned.

Getting right to the point, the biggest concern with the EmDrive is that it’s yet another version of a reactionless drive, proposed by those who thought they spied something that isn’t there in general relativity and tortured complex equations until they seemed to say what they wanted them to say. But such devices are impossible because they violate fundamental laws of physics we know to be true after centuries of observation and study. Objects at rest stay at rest until an external force acts on them, and to accelerate, a craft has to push on something, even if that something is its own exhaust. That’s what we’re taught in our very first physics class as the fundamental laws governing motion. When a device like the EmDrive comes along, it asks us to throw out conservation of momentum and believe that whatever is going on inside a sealed cavity can act like an external force large enough to move the whole craft without expelling anything. How that happens is usually peppered with tortured ret-cons of general relativity and buzzwords about group motion, frequencies, and reference frames.

Basically, think of piloting a spacecraft with an EmDrive as trying to make a sailboat in a vacuum go simply by blowing into the sails. Sure, it’ll react a little at first as you introduce the initial tidbit of new energy, but in a closed system, the air you blow out of your lungs will simply dissipate as the system reaches equilibrium, and all motion will stop fairly quickly. Same with the EmDrive. It seems that bouncing microwaves do produce some odd effects as they collide in the resonant chamber, but in a closed system, in which, by the way, it has never actually been tested, this too will dissipate and reach equilibrium, so even the infinitesimal thrust currently being detected will be gone. Tellingly, the experiments on the versions of the EmDrive that seemed the most promising deviate in principle from the original design by including a nozzle to expel photons from the chamber as the reaction takes place, while the original was supposed to propel craft just by resonating away, with no propellant or thruster, like an alien warp drive in a sci-fi movie.
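The sailboat analogy can be put in numbers with a toy simulation (masses and impulses invented for illustration): no matter how the parts of a closed system push on each other, the total momentum never budges.

```python
# Toy closed system: a heavy "craft" and the light "air" blown around inside it.
# Every internal push is paired with an equal and opposite one, so the
# total momentum stays exactly where it started.
def internal_push(m1, v1, m2, v2, impulse):
    """Apply an internal impulse: -impulse to body 1, +impulse to body 2."""
    return v1 - impulse / m1, v2 + impulse / m2

m_craft, m_air = 1000.0, 1.0   # kg, invented for illustration
v_craft, v_air = 0.0, 0.0      # everything starts at rest

for _ in range(100):           # keep "blowing into the sail"
    v_craft, v_air = internal_push(m_craft, v_craft, m_air, v_air, impulse=0.5)

total_momentum = m_craft * v_craft + m_air * v_air
print(abs(total_momentum) < 1e-9)  # True: still zero, give or take float noise
```

The parts jiggle relative to each other, but the center of mass of the whole system goes nowhere, which is exactly why a sealed cavity can’t be a thruster.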

In the end, we’re left with pop sci blogs and news telling us that the EmDrive works while citing a few possibly intriguing experiments with a very inefficient Q-thruster design that departs from a core principle of the EmDrive’s planned implementation. That’s how it works. It’s not breaking a fundamental law of physics; it’s trying to resonate the well known, but still rather poorly understood, quantum particles that pop in and out of existence from the fabric of space and time. It’s a cool concept and not out of the realm of plausibility, but it’s very unclear whether it could actually be used as a real spacecraft engine, and it’s not the reactionless drive we’re being told it is by pretty much all of the media. That’s what a small skunk works lab at NASA actually tested, just to see if the concept was plausible, not the “impossible drive that violates the laws of physics,” and while it might not really go anywhere, and seems rather buggy and hard to definitively verify today, it’s still a pretty interesting way to find out if we can actually do anything with zero point energy.

sci-fi plane

Now, I don’t mean to alarm you, but if Boeing is serious about its idea for a fusion powered jet engine and puts it into a commercial airplane in the near future more or less as it is now, you’re probably going to be killed when it’s turned on as the plane gets ready to taxi. How exactly your life will end is a matter of debate, really. The most obvious way is being poisoned by a shower of stray neutrons and electrons emanating from the fusion process and from the fissile shielding, which would absorb some of the neutrons and start a chain reaction much like in a commercial fission plant, but with basically nothing between you and the radiation. If you want to know exactly what that would do to your body, and want to lose sleep for a few days, simply do a search — and for the love of all things Noodly, not an image search, anything but that — for Hisashi Ouchi. Another way would be a swift crash landing after the initial reaction gets the plane airborne but just can’t continue consistently enough to keep it in the air. A third involves electrical components fried by the steady radioactive onslaught giving out mid-flight. I could go on and on, but you get the point.

Of course, this assumes that Boeing would actually build such a jet engine, which is pretty much impossible without some absolutely amazing breakthroughs in physics and materials science, and a subsequent miniaturization of all these huge leaps into something that will fit into a commercial jet engine. While you’ve seen something the size of a NYC or San Francisco studio apartment hanging off each wing of planes that routinely cross oceans, that’s not nearly enough space for even one component of Boeing’s fusion engine. It would be like planning to shrink one of the very first room-sized computers down to a Raspberry Pi back in 1952, when we theoretically knew we should be able to do it someday, but had no idea how. We know that fusion should work. It’s basically the predominant high energy reaction in the universe. But we just can’t scale it down until we figure out how to negotiate turbulent plasma streams and charged particles repelling each other in the early stages of ignition. Right now, we can mostly recoup the energy from the initial laser bursts, but we’re still far off from breaking even on the whole system, much less generating more power.

Even in ten years, there are unlikely to be lasers powerful enough to start fusion with enough net gain to send a jet down a runway. The most compact and energetic fission reactors today are used by submarines and icebreakers, but they’re twice the size of even the biggest jet engines, with weights measured in thousands of tons. Add between 1,000 pounds and a ton of uranium-238 for the fissile shielding, plus the laser assembly, and you’re quickly looking at close to ten times the maximum takeoff weight of the largest aircraft ever built, with just two engines. Even if you could travel in time and bring back the technology to make all this work, your plane couldn’t land at any airport in existence. Just taxiing onto the runway would crush the tarmac. Landing would tear it to shreds as the plane drove straight through solid ground. And of course, it would rain all sorts of radioactive particles over its flight path. If chemtrails weren’t just a conspiracy theory for people who don’t know what contrails are, I’d take them over a fusion-fission jet engine, and I’m pretty closely acquainted with the fallout from Chernobyl, having lived in Ukraine when it happened.

So the question hanging in the air is why Boeing would patent an engine that can’t work without sci-fi technology. Partly, as noted by Ars in the referenced story, it shows just how easy it is for corporate entities with lots of lawyers to get purely speculative defensive patents. Knowing how the engineers who design jet engines work, I’m betting they understand full well that this is just another fanciful take on nuclear jet propulsion, which was briefly explored in the 1950s when the dream was nuclear powered everything. We’re also entertaining the idea of using small nuclear reactors for interplanetary travel, which could conceivably fit into an aircraft engine, though they lack the necessary oomph for producing constant, powerful thrust. But one day, all of this, or even a few key components, could actually combine to produce safe, efficient nuclear power at almost any scale and be adopted into a viable jet engine design for a plane that would need to refuel a few times per year at most. Boeing wants to be able to exploit such designs while protecting its technology from patent trolls, so it seems likely that it nabbed this patent just in case, as a plan for a future that might never come, but needs to be protected should it actually arrive.

[ illustration by Adam Kop ]

primordial black hole

At two events of the World Science Festival in early June, a group of five theoretical physicists debated whether we’re living in a multiverse, and more surprisingly, whether our current understanding of the cosmos all but mandates that multiple universes exist. It all goes back to the instant of the Big Bang, the femtosecond that set the rules for all reality as we know it in scientific terms. Each tiny quantum instability and flux was stretched and projected across billions of light years to influence the shape of galaxy clusters and the filaments that underpin our mostly isotropic, homogeneous universe. It’s kind of like the chaos theory saying about the flap of a butterfly’s wings eventually causing a tsunami halfway across the world, but taken to incredible extremes. We’re talking about a fluctuation among point particles growing into an archipelago of a million galaxies. So, why wouldn’t some of these instabilities become their own universes, sealed off from each other by the fabric of space and time? The inflation we just described should make this inevitable.

Here’s the issue. As our infant universe inflated, it shouldn’t have done so uniformly, since that would have made the fluctuations in early matter impossible and prevented the formation of stars and galaxies. It would’ve had to have disruptions large enough to kick-start other universes, or even itself be the product of another universe undergoing rapid inflation. And if one universe can inflate, so too must others, because otherwise inflation becomes a unique event, and science is not happy with a one-off as an explanation. Every significant process we know of happens more than once, and on universal time scales of countless trillions of years, the possibilities are pretty much infinite. We should, over time, be able to see new universes bubbling up from dark voids in the fabric of space-time. There might even be room to imagine a bizarre, hyper-advanced species of the far future crossing into a brand new universe as theirs dies, in a void ship isolated from reality as we know it, Doctor Who-style, hopefully one that’s nothing like the Daleks.

The problem is, how do we prove that inflation works in more than one universe when we can’t see into the multiverse? One suggestion is that inflation basically wraps the universe into a sphere, an unbreachable, self-contained environment that seems flat to us, and where trying to travel to the edge of the cosmos would leave a spaceship right back where it started, as if it were on a Möbius strip. Simple, elegant, and convenient as far as solutions to cosmological problems go, don’t you think? And that’s precisely what’s so bothersome about it. Nothing in cosmology is that simple, not even inflation itself. Instead of slowing down, the universe’s expansion is accelerating. Instead of flying apart into clouds of stars and gas under their own momentum, galaxies keep their shapes until a collision distorts them, thanks to invisible dark matter. Hell, some 96% of the universe isn’t even ordinary matter, and almost three quarters of it is some mysterious energy feeding its expansion. Does it really make sense that in a universe like that, simple, convenient explanations would fly?

alien bacteria

In a fair bit of science fiction, we see advanced alien species use some sort of shielding to walk around other planets or survive being ejected into space. Something around them flickers, an invisible protective bubble is raised, and they’re spared a horrible death by dehydration as all the fluid in their bodies effectively boils away. As it turns out, that’s actually possible. So far it’s only been done with fruit fly and mosquito larvae, but we apparently know how to create a shield against extreme conditions by trapping water and necessary gases in a field of electrons or plasma. All you have to do is take a specimen into a scanning electron microscope and send a shower of electrons or a plasma beam at your target. The electrons and ions envelop the living specimen, creating a little, almost skin-tight biodome that contains just enough air for it to move and otherwise stay very obviously alive for about an hour. So, you might ask, electromagnetic spacesuits for everybody? Well, no, not exactly. There are a few really important caveats.

First and foremost, the specimens are being irradiated, and the more powerful the shielding has to be, the more radiation it takes to organize it. Humans could get radiation poisoning as their suits are beamed onto them, or at least risk extremely dangerous exposure levels. But if you think a little cancer is worth it, there’s still the issue of being trapped with your air supply. With no scrubbers, your respiration would build up dangerous levels of carbon dioxide and you would suffocate after about 45 minutes to an hour or so, depending, of course, on your breathing and how much of an air supply you started with. And now might be a good time to mention that a spacesuit created by nothing but charged particles wasn’t the original goal of the research; the idea was to insulate insects so their movements could be studied in the vacuum of the electron microscope’s sample chamber, so there’s not going to be a team working on these issues in the near or far future. But at least we now know that there really is something to the electromagnetic shielding we see in sci-fi all the time, even though it would make for a lousy spacesuit…

See: Takaku, Y., et al. (2013). A thin polymer membrane, nano-suit, enhancing survival across the continuum between air and high vacuum PNAS DOI: 10.1073/pnas.1221341110

industrial laser

Most of us learned about lasers from science fiction. We know that lasers come in red if you’re the bad guy, and green or blue if you’re the good guy. We know that they travel at roughly the speed of sound between two space fighters, and make a pew-pew sound when fired. And they all travel in perfectly straight lines. Of course, real lasers are very different. They come in all colors depending on how they’re powered and fired, they’re silent, some are invisible until they reach the kind of energy levels used in fusion reactor prototypes and hit a real world target, and they travel to their targets so quickly that they seem to flash into existence and disappear in an instant. Oh, and they don’t always travel in a straight line. In fact, as noted elsewhere on the web by a scientist and science blogger, they can bend it like Schrödinger if they emit an Airy beam, curving slightly after passing through a filter that changes their quantum waveforms. Previously, this feat had only been accomplished with photons, but now, it’s been done with electrons.

Airy beams — named for the Airy function, which a 19th century British astronomer introduced while studying optics, and which turns out to solve a form of Schrödinger’s equation — have a couple of very interesting properties. Not only do they curve, but they’re not as prone to diffraction as run of the mill laser beams, and they can heal themselves after hitting an obstacle that should severely diffuse them, reassembling to continue their curved path after passing through it. It’s even more impressive that electron Airy beams behave just like their photon counterparts, because that allows for significant improvements in electron microscopes, precision sensors, and possibly even alternative computer chip designs that can better control the flow of electrons through themselves. How do you get electrons to do such bizarre things? A specially designed hologram placed in front of an electron gun changes their quantum state and sends them on whatever trajectory you need them to follow. Pretty much anything that uses the flow of electrons to do something very precise in tight quarters can benefit from the ability to attach a sort of steering wheel to particles that would otherwise travel in straight lines.

Now, it’s important to keep in mind that curving is not what makes this an Airy beam; it’s the ability to change the quantum states of the photons and electrons being fired. Being able to scale up such lasers could be huge not just in the lab or in specialized applications, but even for very common, everyday things like high speed wi-fi access, secure transmissions, and major gains in energy efficiency for a whole slew of electronic devices we use on a regular basis. With so much talk about how much money is being "wasted" on basic research like this, it’s amazing how little attention has been paid to the possibilities Airy lasers could offer if we integrated their key principles into today’s devices. After all, experiments like this one are the very definition of basic research: the science says something should be possible, so let’s try it and see what happens. In this case, Israeli scientists showed that Airy lasers can indeed do some pretty cool things…

See: Voloch-Bloch, N., et al. (2013). Generation of electron Airy beams Nature, 494 (7437), 331-335 DOI: 10.1038/nature11840

beyond absolute zero

Suppose you take some potassium atoms and put them in a vacuum, where you cool them as close to absolute zero as you possibly can in a lab. What you’ve done is reduce the entropy of this system of atoms, because the colder it gets, the less kinetic energy they have, and the less energy they can exchange with each other. Sure, there will be some quantum effects that upset the perfect stillness of these atoms, which is why it’s theorized that we’ll never see absolute zero temperatures in the wild, but for all intents and purposes, you’ve hit the coldest that matter can get. Now, with a laser, start heating up the atoms, but tune their interactions so they attract each other and stay in place in the system. Their energy goes up, but they can’t exchange it or move in any direction. The overall entropy of the system is now technically lower, and you’ve just broken a limit we had the gall to preface with the word "absolute." You’ve effectively "cooled" potassium to a billionth of a degree below absolute zero, or at least to a quantum state that looks like it.

This is exactly what a team of scientists recently achieved in the lab, and they’re excited about a slew of possible experiments to test the behavior of atoms and molecules in an exotic quantum state, opening new avenues for investigating the nature of dark matter and dark energy. As the media reports it, they managed to chill something below -273.15 °C, but take a moment to note that the word "cooled" in the description of the experiment is in quote marks. That’s because they didn’t actually go below this temperature. What they really did is way, way more complicated and has long been thought possible, just never accomplished. Absolute zero is still important because it marks the point at which injecting energy into a system changes how it’s distributed. In the positive temperature range, which in this case is anything above absolute zero, more energy brings more atoms to the same energy state. Negative temperatures, however, make the exchange of energy much more difficult and can create inequalities between the atoms’ energy states.

Again, seems rather counter-intuitive, doesn’t it? In this setup, positive temperatures should be the low entropy ones, right? Well, in that range, atoms can move and exchange their energy without limit, which means their possible number of quantum states could be infinite. Atoms dealing with negative temperature have a limit to how many energy states they can occupy, meaning you can keep injecting energy into the system, but it will be more or less trapped in the atoms, and the lattice will remain stable rather than fly apart as the atoms move more and more in response. In short, when you go into negative temperatures, you lower entropy as you add energy, with the bizarre added twist that as you initially heat up the atoms, they could be in an infinite number of energy states, then abruptly find themselves trapped in ever fewer. Just another way quantum mechanics makes things fun, and by fun I mean really, really weird.
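A quick way to see the population flip is to plug a negative temperature into the ordinary Boltzmann distribution over a bounded set of energy levels; the levels and temperatures below are arbitrary illustrative units, not values from the actual experiment:

```python
import math

# Boltzmann weights p_i ∝ exp(-E_i / kT) over a bounded energy ladder.
# At positive T the ground state is the most populated; at negative T
# the ordering inverts and the TOP level dominates instead.
def populations(energies, kT):
    weights = [math.exp(-e / kT) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

levels = [0.0, 1.0, 2.0]                  # arbitrary units
cold = populations(levels, kT=0.5)        # ordinary positive temperature
inverted = populations(levels, kT=-0.5)   # negative absolute temperature

print(cold[0] > cold[-1])          # True: ground state wins at T > 0
print(inverted[-1] > inverted[0])  # True: highest state wins at T < 0
```

Because the ladder is bounded, pumping in energy while the populations invert actually narrows the distribution, which is the entropy drop the experiment exploits.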

So what does this all mean? It means that in this case, absolute zero has nothing to do with how cold things are, but with how energy states are distributed in a system, and while we thought this temperature was the dividing line between the two types of energy distribution, this is really the first experimental proof we have that crossing it can happen in nature. If this seems really confusing, it is, because that’s just the complicated nature of the beast. But knowing that one can achieve a negative temperature under the right conditions means you can explore an entire realm of very bizarre quantum states that could explain otherwise seemingly inexplicable behaviors, one of which could offer an explanation for dark energy and give experimentally verifiable answers to one of cosmology’s biggest mysteries. And while yours truly would love to dive deeper into these possibilities, it may be best for everyone to just digest what we have so far and get ready for the imminent flood of Twitter and Facebook posts about cooling things below absolute zero…

See: Braun, S., et al. (2013). Negative absolute temperature for motional degrees of freedom Science, 339 (6115), 52-55 DOI: 10.1126/science.1227831


According to the Cthulhu Mythos, somewhere between New Zealand and Chile in the waters of the South Pacific, an underwater city known as R’lyeh houses a malevolent monster that came to our planet eons ago and is now dead-dreaming until the stars align and he can once more send his spawn across the land, sowing death, destruction, and chaos, feeding on the souls of both his followers and his victims. Of course, this is just the setting for a string of horror stories, and there’s no record of such things as Cthulhu, R’lyeh, or the Necronomicon, but that doesn’t mean a curious physicist can’t have a little fun with a sci-fi horror story and see what it would take for the mythical city of bizarre geometry and warped dimensions to exist. His conclusion? R’lyeh’s odd distinguishing features described in The Call of Cthulhu are either powered by a warp drive, or are the effects of a cloaking device that works much like a warp drive would. And that would make the mythos’ main character’s description as an alien invader seem a lot more convincing…

So how would the sailors who landed on the island housing R’lyeh see a warped landscape and an enormous eldritch metropolis that made no sense to them? The layout and architecture would’ve obviously been made for alien creatures, so it’s unlikely to resemble the building patterns we use in our own cities. Winged extraterrestrials who either float or move on tentacles wouldn’t need stairs, and strictly defined doors, floors, and windows are unlikely to be mandatory. But that doesn’t explain the strange colors and the seemingly impossible geometry. That’s the effect of a gravitational lens on a very small scale, one created by the warp drive enveloping R’lyeh. Light would be bent in very unusual ways, giving familiar things bizarre colors and shapes and subjecting the sailors to constant optical illusions, making the whole city look like a giant M.C. Escher sketch with a liberal touch of late Eocene Clawed and Tentacled Horror and Mild Acid Trip. And just to add to the weirdness, time inside R’lyeh would move much slower than it does outside, thanks to the time dilation created by the active warp drive or gravitational cloak.
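For the curious, the time dilation invoked here is the standard Schwarzschild factor, which you can sketch for any cloaked mass you like; the mass and radii below are entirely hypothetical and chosen only to show the trend:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def clock_rate(mass_kg, r_m):
    """Rate of a clock at radius r from mass M relative to a distant observer,
    using the Schwarzschild factor sqrt(1 - 2GM / (r c^2))."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (r_m * C ** 2))

# Hypothetical numbers: the closer you sit to the mass, the slower your clock.
print(clock_rate(7.3e22, 1.0e6) < clock_rate(7.3e22, 1.0e7))  # True
```

To get a slowdown dramatic enough to strand sailors for subjective decades, the cloak would have to curve spacetime far more strongly than any plausible mass, which is exactly why the paper reaches for exotic warp-drive geometry instead.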

You certainly wouldn’t want to get stuck in this city if you were lost at sea. Not only would space and time appear and flow differently for you, but the primeval ruins, populated by only FSM knows what (or more likely doesn’t know) that may be eager to devour you, or tear you limb from limb to satisfy its curiosity about the strange bipedal squishy thing making lots of noise in its home, could turn even the shortest stay into decades by the time you make it back to the real world. Good thing this is all just one spine-tingling story from a pulp sci-fi magazine of a long-gone era, and in the many decades since it accurately described what sounds like an alien generation ship, there hasn’t been so much as a hint of anything weird in the South Pacific pole of inaccessibility where R’lyeh was said to be sitting at the bottom of the sea. Well, if you don’t count The Bloop — which no one has been able to explain to full scientific satisfaction. But as I’ve already said, it’s all just creepy fiction. We’re all probably just fine. Probably…

See: Tippett, B. (2012). Possible Bubbles of Spacetime Curvature in the South Pacific arXiv: 1210.8144v1

transformer box

Last year, I wrote about Andrea Rossi’s claim to have created a cold fusion reactor, and suffice it to say that I wasn’t very optimistic about the prospects. Not only did it seem to defy some basic laws of physics, but its inventor was exceedingly cagey about how the device worked, claiming at first to have simply stumbled upon the wondrous reaction, then claiming to be protecting a trade secret behind a new 1 MW power plant he was building for a client. The paper he and his partner sent to physics journals and the patent they tried to register were both rejected for the same reason: in place of a basic diagram of how their cold fusion reactor was supposed to work, they placed a black box. Without a plausible explanation of how they were getting the reaction they claimed was taking place, and without formal validation of the results by a qualified third party, there just wasn’t enough for reviewers or the patent office to conclude that the results were legitimate. And then there were the two tiny little problems of Rossi being a convicted con man, and of his engineering degree coming from a now defunct diploma mill, both making it hard for him to establish credibility.

But it seems that Rossi is nothing if not persistent, and he’s kept his experiments going. The Pop Sci story really tries to keep an open mind when talking about cold fusion, or as its advocates refer to it, low energy nuclear reactions (LENR), and tries not to be too hard on Rossi, but what it portrays is very unflattering nonetheless. Rossi steadfastly refuses to release any details, hand-picks the audiences for his demonstrations, doesn’t unplug his device during these demos, and refers only to "important institutions" and "major technical reports in progress" when pressed for specifics. Not only that, but he is almost pathologically averse to criticism, so much so that he refused to meet with the story’s author several times because he got wind of the fact that his critics would be asked to weigh in as well, finally consenting to be interviewed in one of the pettiest ways imaginable: at the exact time the author had booked his critics’ interviews. If I were a potential investor in his business, this would definitely spook me. And considering that we aren’t told who may be interested in investing, there may be no real takers for Rossi’s E-Cat.

After the first post on Rossi, several people posted and sent me links to cold fusion sites filled with papers claiming to see low energy reactions in a variety of improbable machines, arguing in favor of keeping cold fusion in mind as a potentially viable power source. However, none of the papers seemed to make a whole lot of sense from either a physics or an engineering standpoint, and the vast majority of them reported the kind of energy that could’ve easily been created by a random chemical process, or by impurities in the materials used to construct the test reactors. To claim viable cold fusion, you need more than a small temperature rise. You need a major spike in energy and some radioactivity to prove that the reaction is indeed nuclear in nature. On top of that, you need the reactor to be clearly scalable, enough to get a 15 to 30-fold return on the initial energy investment so plans for power plants could be drawn up. A lot of weird stuff can happen at very small scales, but that weird stuff is not cold fusion or anything like it, as far as all the verifiable, consistent evidence we have tells us.

What seems to be happening is that LENR advocates see small temperature bumps and a wide variety of anomalies they can’t explain, and because they so badly want to do what science said cannot be done, they attribute them to the first stirrings of cold fusion. And since they’re so invested in the idea, they refuse to take no for an answer and reject any alternative explanation for what they’re seeing at small scales, insisting that with just a little funding their tabletop experiments could be turned into energy farms. But because they can’t explain what they’re seeing or how it works, no scientist with access to the cash they need is going to be swayed into hobbling his or her own research to gamble on an unexplained anomaly. After all, the LENR enthusiasts are asking us to do the equivalent of buying, without a test drive, a car that doesn’t seem to have wheels or a gas tank and whose hood is welded shut, on their claim that they’ve driven it, and that while it won’t do 0 to 60 in 5 seconds, it can waddle along thanks to some wondrous phenomenon they can’t explain, which is why we need to buy it and invest in modifying it into a cargo ship. It’s simply too much to swallow before writing a check, which is why cold fusion has long been mothballed.

reaching out

Welcome back to yet another installment of the question of whether we’re all just products of an advanced simulation that created an entire universe. This time, instead of plunging deep into the lore of the Matrix with Moore’s Law hijinks and philosophy, we’ll be hunting for physical proof that the universe is actually a simulation in the realm of quantum chromodynamics. What exactly is quantum chromodynamics? It’s the theory describing how quarks and gluons interact, the point particles that make up matter as we know it and its more exotic forms we sometimes glimpse when we smash atoms together with enough force. How these particles interact basically defines what is and isn’t possible across the entire universe, because without their fluctuations, the cosmos would still be a zoo of particles in no way, shape, or form resembling the planets, stars, and galaxies we know and love today. So the big question is whether those point particle interactions have a very telling limit, and what this limit could tell us about the underlying nature of the universe.

One idea is that these limits should fit a three dimensional lattice around the interaction, which essentially means that interactions between point particles should fit into a predictable model onto which other interactions can be neatly stacked. Since the authors of the idea in question aren’t computer scientists, they refer to this packet of quantum information as a cubical lattice. Being a computer person, I would call it a voxel: a three dimensional pixel making up the environment in which the simulation takes place. Think of Minecraft, but with blocks at the smallest possible scale we know how to measure, a scale at which point particles would be as big as ants while atoms would be the size of buildings. This is essentially what the researchers are talking about when considering whether our universe is a simulation: countless tiny voxels moving through a mind-bendingly complex simulation governed by the exotic math of a computing device of unknown power, origin, purpose, and accuracy, defining the laws of physics we can detect.
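The voxel picture can be sketched in a few lines of code: space is chopped into cells of some fixed spacing, and every continuous position snaps to a cell index, which is all a lattice simulation ever stores. The spacing below is a made-up placeholder, not a claim about the actual grid size in the paper.

```python
# Toy sketch of a voxel lattice: continuous positions are snapped to
# discrete cell indices, the only coordinates a lattice simulation stores.
# LATTICE_SPACING is a placeholder value, not a physical claim.

LATTICE_SPACING = 1e-15  # meters; hypothetical cell size

def to_voxel(x, y, z, spacing=LATTICE_SPACING):
    """Map a continuous position to the integer lattice cell containing it."""
    return (int(x // spacing), int(y // spacing), int(z // spacing))

# Two points closer together than one cell land in the same voxel:
# below this scale, the simulated universe simply has no finer detail.
a = to_voxel(3.2e-15, 0.0, 0.0)
b = to_voxel(3.7e-15, 0.0, 0.0)
print(a == b)  # True: both fall in cell (3, 0, 0)
```

The key property is the one in the last comment: anything smaller than a cell is invisible to the simulation, which is exactly the kind of fingerprint the researchers propose to hunt for.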

But how does one prove that we live in a simulated environment and that the limits of point particle interactions don’t simply happen to fall into a voxel on their own? Doesn’t the whole idea rest on circular logic? The voxels should have an energy limit of Ψ, and if the quarks and gluons we measure have an energy limit of Ψ, they’re voxels? Something just does not add up here. If we try to control the state of something virtual, we have to expend a lot of energy to do it. Today, it takes a supercomputer to simulate the behavior seen in a cube of space barely big enough to fit a few simple atoms. If we want to flip even a single bit, we have to run a current that will be converted into 0s and 1s. Even on a quantum computer, we’ll need to apply a good bit of energy to keep the qubits in a state we can manipulate. So if a universe is being simulated on some sort of hypercomputer, it requires an immense amount of energy to run, even if all the supernovae and galactic collisions are just instructions on a stack.
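The testable part of the proposal rests on a standard lattice fact: a grid with spacing a can’t represent momenta above roughly π·ħ/a, so for ultra-relativistic particles (where E ≈ pc) a simulated universe should show a hard energy cutoff of about π·ħc/a. With ħc ≈ 197.327 MeV·fm, the implied cutoff is easy to compute; the spacing used below is illustrative, not the value the paper actually constrains.

```python
# The sharpest observable consequence of a lattice: momenta above ~pi*hbar/a
# cannot exist on a grid with spacing a. For ultra-relativistic particles
# E ~ p*c, so the cutoff energy is roughly pi * hbar*c / a.
# The spacing used in the example call is illustrative, not a fitted value.
import math

HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*femtometers

def lattice_cutoff_gev(spacing_fm):
    """Maximum energy (GeV) representable on a lattice with the given spacing."""
    cutoff_mev = math.pi * HBAR_C_MEV_FM / spacing_fm
    return cutoff_mev / 1000.0  # MeV -> GeV

# A hypothetical spacing of 1e-12 fm (a trillionth of a femtometer):
print(f"{lattice_cutoff_gev(1e-12):.3e} GeV")  # ~6.2e11 GeV
```

The finer the grid, the higher the cutoff, which is why the place to look for it is in the very highest energy cosmic rays: if their spectrum ends abruptly at some energy for no other good reason, a lattice becomes one candidate explanation.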

Who would have such energy generation capabilities, and why would anyone decide to simulate the universe in such detail? Simulations are best when they focus on the specific things being modeled at the appropriate level of abstraction. When researchers look at virtual galaxy collisions, they don’t spend the computing power and electricity to model the position of each star, because they don’t really need to know where each star moves to see how the galaxies affect each other. They’re concerned with the overall shape of merging galactic arms, so the exact details of every solar system involved would only slow the simulation down. Likewise, a simulation of an entire universe down to the detail of a point particle doesn’t seem to make much sense unless the simulation’s goal is to create something like Laplace’s Demon, which we could do with enough computing grunt but which would mean little in the real world. Beyond that, we get into philosophical and abstract questions like who designed the simulation and whether their universe is a simulation too, and we quickly arrive at the First Cause dilemma on rather shaky grounds. Not exactly the place a scientific proposal wants to end up when taken through its implied consequences…

See: Silas R. Beane, Zohreh Davoudi, & Martin J. Savage (2012). Constraints on the universe as a numerical simulation. arXiv:1210.1847v1