computing the universe, one model at a time
Is it possible to simulate and compute our way to solving the mysteries of the universe and actually get it right in the end?
In one of my favorite science fiction tales, The Last Question by Isaac Asimov, a powerful hyper-computer evolves from a tool for performing complex calculations into a knowledge base able to give potential answers to deep and profound scientific questions. While we don’t have a machine quite like that yet, we do have a strong candidate for the part: Wolfram Alpha. The basic idea behind it could probably be summed up as taking what we see on Wikipedia and adding one more level of complexity and accuracy to provide in-depth answers to all kinds of questions. But it’s being sold as something much more than that. In fact, Stephen Wolfram wants it to become a tool for very serious scientific research into the mysteries of physics. Here’s his pitch at TED…
Let’s keep in mind that while there’s some real scientific content here, this was more of a sales presentation than anything else, and as a sales pitch, it was rather ambitious. Even though Mathematica and Alpha are a very significant set of tools and can offer quite a bit to their users, they face the same fundamental limitations as all software. The models they create and the answers they provide are only as accurate as the knowledge being fed into their databases and the millions of lines of code defining how they’ll work. In the presentation, Wolfram alluded to this when he said that his team has to curate “zillions of sources of information” and that he still needs to collect more and more scientific data. But if we were to give it a few more years and keep feeding tools like Alpha with more and more knowledge, would we then have enough data to come up with viable scientific theories that reveal the universe’s deepest mysteries, with everyone writing their own complex, viable models simply by entering a plain-language query into a Mathematica/Alpha hybrid?
Personally, I would say no, with provisions. Sure, if everyone has the tools to create their own computer model of genetic structures, or stars, or entire universes, a lot of people will do it. But again, that doesn’t necessarily mean these models will be accurate, since they’re based on people’s queries, which are in turn based on what those people know about a particular topic. Here’s a scary thought to illustrate the issue. Just imagine the self-crowned luminary of information theory, Bill Dembski, using one of Wolfram’s tools to model fine-tuning in the universe and the impossibility of evolution. Considering that he has no clue what evolution involves, I’d be willing to bet that whatever he conjures up and tries to pass off as a serious scientific model of the cosmos would be a textbook example of a GIGO computation: garbage in, garbage out. The possibility for cranks and pseudoscientists to abuse and torture Alpha’s vast body of knowledge, then pass their “models” off as a legitimate and serious exercise in scientific research, could be a serious concern. Democratizing the ability to perform complex computing can be great, but it can have its side effects.
But what about serious scientists? Surely, if we put some powerful tools in their hands, experts could take the models they already have and improve on them further and further, right? Well, yes and no. As we’ve said, how well Wolfram’s tools will perform and how accurate they’ll be depends on the quality and quantity of data fed to them throughout their existence. If a popular scientific model has a very subtle and complex mistake, science would need to correct it through good, old-fashioned experimental research carried out to test the model itself. Rather than becoming oracles revealing deep insights into time and space, the models created by Wolfram’s software would simply provide new hypotheses to test, just like all models used by scientists. A model doesn’t mean much if it can’t be backed up by observations or experiments, and even the closest virtual match to our universe created in the confines of a computer via a combination of evolutionary algorithms may be completely useless if its basic dynamics deviate from the behaviors we see in the real world. This is why many scientists today use the word model to mean calculations backed by extensive observations and built on well-known and well-understood natural processes.
In the end, it’s not about computing or elegant applications of algorithms, but about helping experts generate new ideas to test, using powerful and sophisticated new tools for elaborating, simulating, and visualizing many of the driving principles behind our universe. I’m sure that Stephen Wolfram is keenly aware of this and that it’s the overall goal of his products. However, that important disclaimer tends to get lost in his sales pitches, and in quite a bit of the overly excited media coverage that his experiments tend to receive.