Archives For computers

crt head

Humans beware. Our would-be cybernetic overlords made a leap toward hyper-intelligence in the last few months, as artificial neural networks can now be trained on specialized chips which use memristors, electrical components that “remember” the current that has flowed through them by changing their resistance. Using these specialized chips, robots, supercomputers, and sensors could solve complex real world problems faster, easier, and with far less energy. Or at least this is how I’m pretty sure a lot of devoted Singularitarians are taking the news that a team of researchers created a proof of concept chip able to house and train an artificial neural network with aluminum oxide and titanium dioxide electrodes. Currently, it’s a fairly basic 12 by 12 grid of “synapses,” but there’s no reason why it couldn’t be scaled up into chips carrying billions of these artificial synapses that sip about the same amount of power as a cell phone imparts on your skin. Surely, the AIs of Kurzweilian lore can’t be far off, right?
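The physics behind that efficiency is worth a quick sketch. In a memristor crossbar, each synapse’s weight is stored as a conductance, so applying input voltages to the rows yields the network’s weighted sums as output currents on the columns, courtesy of Ohm’s and Kirchhoff’s laws. Below is a minimal numerical simulation of that idea; the 12 by 12 size matches the chip described above, but the conductance and voltage values are made up for illustration:

```python
# Sketch of how a memristor crossbar evaluates a neural layer in analog.
# Each synapse's weight is stored as a conductance G (siemens); applying
# input voltages V to the rows yields output currents I on the columns,
# by Ohm's and Kirchhoff's laws -- a vector-matrix product for "free".
# The values here are illustrative, not taken from the paper.

import random

ROWS, COLS = 12, 12

# Conductance matrix: one memristor per row/column crossing.
G = [[random.uniform(1e-6, 1e-4) for _ in range(COLS)] for _ in range(ROWS)]

# Input pattern encoded as row voltages.
V = [random.uniform(0.0, 0.5) for _ in range(ROWS)]

# Each column wire sums the currents from every row: I_j = sum_i V_i * G[i][j].
I = [sum(V[i] * G[i][j] for i in range(ROWS)) for j in range(COLS)]

print(I)  # one analog "neuron activation" per column
```

No instruction fetching, no memory bus, no software stack: the multiplication and addition happen in the wires themselves, which is where the energy savings come from.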

By itself, the design in question is a long-proposed solution to the problem of how to scale a big artificial neural network when relying on the cloud isn’t an option. If you use Chrome, you’ve probably right clicked on an image and asked the search engine to find it on the web and suggest similar ones. This is powered by an ANN which basically carves up the image you send it into hundreds or thousands of pieces, each of which is analyzed for information that will help it find a match, or something in the same color palette and, hopefully, the same subject matter. It’s not perfect, but when you’re aware of its limitations and use it accordingly, it can be quite handy. The problem is that to do its job, it requires a lot of neurons and synapses, and running them is very expensive from both a computational and a fiscal viewpoint. It has to take up server resources which don’t come cheap, even for a corporate Goliath like Google. A big part of the reason why is the lack of specialization in the servers, which could just as easily execute other software.

Virtually every computer used today is based on what’s known as von Neumann architecture, a revolutionary idea back when it was proposed, despite seeming obvious to us now. Instead of a specialized wiring diagram dictating how computers would run programs, von Neumann wanted programmers to just write instructions and have a machine smart enough to execute them with zero changes to its hardware. If you asked your computer whether it was running some office software, a game, or a web browser, it couldn’t tell you. To it, every program is a set of specific instructions loaded into memory, fetched and completed one by one on each CPU core, each finished instruction making room for the next order. All of these instructions boil down to where to move a byte or series of bytes in memory and what their values should be. It’s perfect for when a computer could run anything and everything, and you either have no control over what it runs, or want it to be able to run whatever software you throw its way.
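To make that concrete, here’s a toy sketch of the fetch-and-execute cycle at the heart of the von Neumann design. The three-instruction machine below is invented purely for illustration, but it shows the key point: code and data live in the same memory, so a new program is just new bytes, not new wiring.

```python
# Minimal sketch of the von Neumann fetch-decode-execute cycle. The program
# lives in the same memory as the data, so changing the program means writing
# new values into memory, never rewiring the machine. The instruction set
# here is a made-up three-operation toy, not any real CPU's.

def run(memory):
    pc = 0   # program counter
    acc = 0  # accumulator
    while True:
        op, arg = memory[pc]      # fetch
        pc += 1
        if op == "LOAD":          # decode + execute
            acc = memory[arg][1]
        elif op == "ADD":
            acc += memory[arg][1]
        elif op == "HALT":
            return acc

# Program and data share one memory: cells 0-2 are code, cells 3-4 are data.
memory = [
    ("LOAD", 3),
    ("ADD", 4),
    ("HALT", 0),
    ("DATA", 40),
    ("DATA", 2),
]
print(run(memory))  # 42
```

Swap in a different list of tuples and the same loop runs a different program, which is exactly the flexibility the hardware never has to know about.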

In computer science, this ability to hide the nitty-gritty details of how a complex process on which a piece of functionality relies actually works is called an abstraction. Abstractions are great; I use them every day to design database schemas and write code. But they come at a cost. Making something more abstract means you incur an overhead. In virtual space, that means more time for something to execute, and in physical space, that means more electricity, more heat, and in the case of cloud based software, more money. Here’s where the memristor chip for ANNs has its time to shine. Knowing that certain computing systems like routers and robots could need to run a specialized process again and again, the researchers designed a purpose-built piece of hardware which does away with abstractions, reducing overhead and allowing them to train and run their neural nets with just a little strategically directed electricity.
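You can see that overhead on any machine. The sketch below times the same trivial addition done directly versus routed through a few layers of pass-through functions; the exact numbers will vary from machine to machine, but the point is only that every extra layer of indirection costs something.

```python
# Rough illustration of abstraction overhead: identical arithmetic done
# directly versus through a stack of pass-through function calls. Timings
# vary by machine; only the relative cost of the extra layers matters.

import timeit

def add_direct(a, b):
    return a + b

def layer1(a, b): return add_direct(a, b)
def layer2(a, b): return layer1(a, b)
def layer3(a, b): return layer2(a, b)

direct = timeit.timeit(lambda: add_direct(2, 3), number=100_000)
layered = timeit.timeit(lambda: layer3(2, 3), number=100_000)

print(f"direct:  {direct:.4f}s")
print(f"layered: {layered:.4f}s")  # typically slower: each layer adds a call
```

A hardware version of the same trade-off is what the memristor chip exploits: strip the layers away and the job gets cheaper.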

Sure, that’s neat, but it’s also what an FPGA, or Field Programmable Gate Array, can do already. Unlike these memristor chips, though, FPGAs can’t be easily retrained to run new neural nets with a little reverse current and a new training session; they need to be re-configured, and they can’t use less power by “remembering” the current. This is what makes this experiment so noteworthy: it created a proof of concept for a much more efficient alternative to the FPGA just as techies are looking for new ways to speed up resource-hungry algorithms that require probabilistic approaches. And this is also why these memristor chips won’t change computing as we know it. They’re meant for very specific problems as add-ons to existing software and hardware, much like GPUs are used for intensive parallelization while CPUs handle day to day applications, without one substituting for the other. The von Neumann model is just too useful, and it’s not going anywhere soon.

While many an amateur tech pundit will regale you with a vision of super-AIs built with this new technology taking over the world, or becoming your sapient 24/7 butler, the reality is that you’ll never be able to build a truly useful computer out of nothing but ANNs. You would lose the flexible nature of modern computing and the ability to just run an app without worrying about training a machine how to use it. These chips are very promising, and there’s a lot of demand for them to hit the market sooner rather than later, but they’ll just be another tool to make technology a little more awesome, secure, and reliable for you, the end user. Just like quantum computing, they’re one means of tackling the growing list of demands of our connected world without making you wait for days, if not months, for a program to finish running and a request to complete. But the fact that they’re not going to become the building blocks of an Asimovian positronic brain does not make them any less cool in this humble techie’s professional opinion.

See: Prezioso, M., et al. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550), 61–64. DOI: 10.1038/nature14441

humanoid robot

With easy, cheap access to cloud computing, a number of popular artificial intelligence models that computer scientists have wanted to put to the test for decades are now finally able to summon the necessary oomph to drive cars and perform sophisticated pattern recognition and classification tasks. With these new probabilistic approaches, we’re on the verge of having robotic assistants, soldiers, and software able to talk to us and help us process mountains of raw data based not on code we enter, but on the questions we ask as we play with the output. But with that immense power come potential dangers which have alarmed a noteworthy number of engineers and computer scientists, sending them wondering aloud how to build artificial minds that hold values similar to ours and see the world enough like we do to avoid harming us by accident or, even worse, by their own independent decision after seeing us as being “in the way” of their task.

Their ideas on how to do that are quite sound, if exaggerated somewhat to catch the eye of the media and encourage interested non-experts to take this seriously, and they’re not thinking of some sort of Terminator-style or even Singularitarian scenario, but of how to educate an artificial intelligence on our human habits. But the flaw I see in their plans has nothing to do with how to train computers. Ultimately, an AI will do what its creator wills it to do. If its creator is hell bent on wreaking havoc, there’s nothing we can do other than stop him or her from creating it. We can’t assume that everyone wants a docile, friendly, helpful AI system. I’m sure they realize it, but all that I’ve found so far on the subject ignores bad actors. Perhaps it’s because they’re well aware that the technology itself is neutral and the intent of the user is everything. But it’s easier just to focus on technical safeguards than on how to stop criminals and megalomaniacs…

police graffiti

Ignorance of the law is no excuse, we’re told, when we try to defend ourselves by saying that we had no idea a law existed or worked the way it did after getting busted. But what if not even the courts actually know whether you broke a law, or the law is so vague, or based on such erroneous ideas of what’s actually being regulated, that your punishment, if you were even sentenced to one, is guaranteed to be more or less arbitrary? This is what an article over at the Atlantic about two cases taken up by the Supreme Court dives into, asking if there will be a decision that allows vague laws to be struck down as invalid because they can’t be properly enforced and rely on the courts to do lawmakers’ jobs. Yes, it’s the courts’ job to interpret the law, but if a law is so unclear that a room full of judges can’t agree on what it’s actually trying to do and how, applying it would require legislating from the bench, a practice which runs afoul of the Constitution’s stern insistence on separation of powers in government.

Now, the article itself deals mostly with the question of how vague is too vague for a judge to be able to understand what the law really says, which, while important in its own right, is suited a lot better to a law or poli-sci blog than a pop science and tech one. But it also bumps into how poor understanding of science and technology creates vague laws intended to keep criminals from getting off on a technicality. Specifically, in the case of McFadden v. United States, lawmakers didn’t want someone caught manufacturing and selling a designer drug to be able to admit that he did indeed make and sell it, yet walk away because one slight chemical difference between what’s made in his lab and the illegal substance put him well within the law, leaving prosecutors pretty much no choice but to drop the matter. So they created a law which says that a chemical substance “substantially similar” to something illegal is also, by default, illegal. Prosecutors now have legal leverage to bring a case, but chemists say they can be charged with making an illegal drug on a whim if someone finds out a compound of theirs can be used to get high.

Think of it as the Drug War equivalent of a trial by the Food Babe: one property of a chemical, taken out of context and compared to a drug that has some similarity to the chemical in question in the eyes of the court, but instead of being flooded with angry tweets and Facebook messages from people who napped through their middle school chemistry, there’s decades of jail time to look forward to at the end of the whole thing. Scary, right? No wonder the Supreme Court wants to take another look at the law and possibly invalidate it. Making the Drug War even more expensive and filling jails with even more people would make it an even greater disaster than it has been already, especially when you’re filling them with people who didn’t even know they were breaking the law, sentenced by judges more worried about how they were going to get reelected than whether the law was sound and the punishment fair and deserved. Contrary to the popular belief of angry mobs, you can get too tough on crime.

But if you think that because you’re not a chemist you’re safe from this vague, predatory overreach, you are very wrong, especially if you’re in the tech field, specifically web development, if the Computer Fraud and Abuse Act, or the CFAA, has anything to say about it. Something as innocuous as a typo in the address bar uncovering a security flaw which you report right away can land you in legal hot water under its American and international permutations. It’s the same law which may well have helped drive Aaron Swartz to suicide. And it gets even worse when a hack you find and want to disclose gives a major corporation grief. Under the CFAA, seeing data you weren’t supposed to see by design is a crime, even if you make no use of it and warn the gatekeepers that someone else could see it too. Technically, that data has to be involved in some commercial or financial activity to qualify as a violation of the law, but the vagueness of the act means that nearly all online activity could fall under this designation. So as it stands, the law gives companies legal cover to call finding their complete lack of any security a malicious, criminal activity.

And this is why so many people like me harp on the danger of letting lawyers go wild with laws, budgets, and goal-setting when it comes to science and technology. If they don’t understand a topic on which they’re legislating, or are outright antagonistic toward it, we get not just the typical setbacks to basic research and underfunded labs, but also laws driven by a very strong desire to do something without enough understanding of the problem to deal with it in a sane and meaningful way. It’s true of chemistry, computers, and a whole host of other subjects requiring specialized knowledge we apparently feel confident that lawyers, business managers, and lifelong political operatives will be zapped with when they enter Congress. We can tell ourselves the comforting lie that surely they would consult someone before making these laws, since that’s the job, or we can look at the reality of what actually happens: lobbyists with pre-written bills and blind ambition produce laws that we can’t interpret or properly enforce, and which criminalize things that shouldn’t be illegal.


When an investment loses half its value in about six hours, people notice and people worry. So as the virtual bitcoin currency fell from about $260 per BTC to $130 per BTC during what had to be a really scary day for bitcoin investors, articles all over the web started questioning bitcoin as a viable currency all over again. Now the price is back up to the $160 per BTC range and things seem to have stabilized. They might even pick up a bit as aggressive investors see an opportunity to scoop up a valuable asset at a discount. But the question remains: is bitcoin a viable and solid investment, or is it just a virtual casino which will one day crumble? Last time I wrote about the currency, everyone was wondering what’s next for bitcoin. And after all this time, there’s still no clear strategy for what bitcoin could be other than something to buy and hold on to until it gains value, and a medium for anonymous transactions on the web. Since it does have uses, it’s not going anywhere soon, but it’s a little unnerving to think about its long term viability.

Here’s the dilemma. Money is worth something when a) we say that it is, and b) it’s adopted with enough enthusiasm to be widespread and useful. Dollars are used around the world because of a common understanding across the planet that dollars are valuable and backed by the world’s largest economy and dominant military superpower. This is why everybody will take your green bills as valid payment, and should your dollars need to be exchanged, there’s a big international network of markets that figures out exchange rates with other established currencies. This is very basic Econ 101 stuff, right? Well, bitcoin has some of the same things going for it. We agree that bitcoins have value, and there are international exchanges to figure out what they’re worth compared to other currencies. And there are adopters who take them as valid payment. But are the adopters enthusiastic enough, and are there enough of them out there? After all, who stands by the value and strength of the bitcoin? Who secures its worth as a currency? Right now, it’s a few major exchanges and a large online community. That’s a start, but it may not be enough.

Rather than being spent around the world to pay for things on a regular basis, bitcoins sit in virtual lockers on hard drives in safe deposit boxes. Instead of mining and spending, most of the activity seems to be mining and hoarding. There are something like 11 million bitcoins in circulation and another 10 million left to mine. What then? Is that when people will finally start to spend them and mainstream businesses will adopt them as a valid form of payment? And will people deal with a finite supply of bitcoins by taking advantage of the fact that bitcoins can be split into very, very small decimal values, creating artificial inflation as the bitcoin floodgates are opened? And let’s not even get started on alternative block chains that could split bitcoin into two, or three, or hundreds of different currencies, negating the original libertarian idealism behind it. The way the bitcoin economy was originally meant to work basically caps its size, which means that once every bitcoin to be mined has been mined, the economy will no longer grow, prices will only deflate to allow an increase in the volume of trade, and we’ll see some very bizarre PPP ratios.
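For the record, that divisibility is baked into the protocol: each bitcoin can be split into a hundred million units, commonly called satoshis, so even the hard cap of 21 million coins leaves an enormous number of spendable pieces. A quick back-of-the-envelope calculation (the price in the last two lines is made up for illustration):

```python
# Bitcoin's divisibility in numbers: the protocol caps the supply at
# 21 million coins, each divisible to eight decimal places. The smallest
# unit, 10^-8 BTC, is known as a satoshi.

SATOSHIS_PER_BTC = 10 ** 8
MAX_SUPPLY_BTC = 21_000_000

total_units = MAX_SUPPLY_BTC * SATOSHIS_PER_BTC
print(f"{total_units:,}")  # 2,100,000,000,000,000 indivisible units

# Amounts are best handled as integer satoshis to avoid float rounding;
# the price below is a hypothetical example, not a real quote.
price_in_btc = 0.00013500
price_in_satoshis = round(price_in_btc * SATOSHIS_PER_BTC)
print(price_in_satoshis)  # 13500
```

So running out of whole coins is never the practical problem; the deflationary pressure on a capped supply is.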

Interestingly, this is somewhat reminiscent of the ancient world, in which a fixed supply of gold had to be broken down further and further into more and more complex currency to allow transactions to keep happening, so there is precedent for this. However, there were no international exchanges to figure out what currencies were worth back then, and minting gold, silver, and bronze coins was pretty much the safest way to go for many official transactions. Now we have such exchanges, so we don’t need to price our currencies solely on commodities. In fact, we’re doing it the other way around, using commodities as a hedge against a fall in a currency’s value. But of course, all this is predicated on the simple fact that these currencies are used in enough transactions to be truly valuable; otherwise, they fall out of use and much of the country’s debts have to go unpaid or be repaid in commodities that can be made liquid on international markets. (Well, that’s the idea; the practice is way more complicated.) Who would stand behind bitcoin? Who would repay all those left holding the bag if bitcoin crashes?

It’s that anxiety about what will happen when the bitcoins are all mined, and all the complexities of using a currency with no central bank behind it, that keep a whole lot of people from using it in a mainstream environment. And this is very likely what caused the massive correction we just saw. Those who were not True Believers™ in the power of virtual currencies free of central banks or regulators decided they’d made enough and cashed out. More speculators will return and new ones will buy the now discounted bitcoins, so we could see $260 or even $300+ BTC rates in no time at all. But if all they’re doing is hoarding a currency with very limited use (and an uncomfortable deal of that use quite unsavory, such as paying for cybercrime tools and illegal goods), the value of the bitcoin is limited to a novelty that’s ripe for aggressive speculation and little else. At the same time, though, bitcoin shows that you can do an awful lot with virtual cash, and we might want to adopt its design to phase out paper money and coins for established currencies.

All in all, it looks like bitcoin is an interesting experiment, ripe for some fun speculation to make a little, or a lot of, quick cash (relevant aside: I have no investment in bitcoins), and with things to teach us when it comes to modifying modern, existing currencies for the wired world. But it really doesn’t look like it could rise to become the next franc, or dollar, or yen. It will have its uses, but it’s going to stay more or less a novelty for the foreseeable future. That is, unless bitcoin finds a new way to open up and, instead of hoarding the currency in virtual wallets on encrypted hard drives buried deep in a remote mountain, locked in safe deposit boxes designed to survive a thermonuclear blast and the inevitable zombie apocalypse that will follow, bitcoin holders go out and spend it in the real world. Maybe then the lack of a central issuing authority might not be an absolute deal breaker, and if it stays small enough, people might not want regulators to sort out particularly rough price changes and the Fed will just let it keep flying under the radar…

circuit boards

A few years ago, when theoretical physicist Michio Kaku took on the future of computing in his thankfully short-lived Big Think series, I pointed out the many things he got wrong. Most of them weren’t pedantic little issues either; they were a fundamental misunderstanding of not only the existing computing arsenal deployed outside academia, but the business of technology itself. So when the Future Tense blog put up a post from highly decorated computer expert Sethuraman Panchanathan purporting to answer the question of what comes after computer chips, a serious and detailed answer should’ve been expected. And there was one. Only it wasn’t a reply to the question that was asked. It was a breezy overview of brain-machine interfaces. Just like Kaku’s venture into the future of computing in response to a question clearly asked by someone whose grasp of computing is sketchy at best, Panchanathan’s answer was a detour around what should’ve been done instead: an explanation of why the question was not even wrong.

Every computing technology not based on living things, a somewhat esoteric topic in the theory of computation we once covered, will rely on some form of a computer chip. It’s currently one of the most efficient ways we’ve found of working with binary data, and it’s very unlikely that we will be abandoning integrated circuitry and compact chips anytime soon. We might fiddle around with how they work on the inside, making them probabilistic, or building them out of exotic materials, or even modifying them to read quantum fluctuations as well as electron pulses, but there isn’t a completely new approach to computing that’s poised to completely replace the good old chip in the foreseeable future. Everything Panchanathan mentions is based on integrating the signals from neurons with currents running through computer chips. Even cognitive computing for future AI models relies on computer chips. And why shouldn’t it? The chips give us lots of bang for our buck, so asking “what comes after them” doesn’t make a whole lot of sense.

If computer chips weren’t keeping up with our computing demands and could not be modified to do so due to some law of physics or chemistry standing in the way, this question would be pretty logical, just like asking how we’ll store data when our typical spinning disk hard drives can’t read or write fast enough to keep up with data center demands and create unacceptable lag. But in the case of aging hard drive technology, we have good answers like RAID configurations and a new generation of solid state drives, because these are real problems for which we had to find real solutions. Computer chips, however, aren’t a future bottleneck. In fact, they’re the very engine of a modern computer, and we’d have to substantially extend the theory of computation to even consider devices that don’t function like computer chips or whose job couldn’t be done by them. Honestly, I’m at a complete loss as to what these devices could be and how they could work. Probably the most novel idea I’ve found is using chemical reactions to create logic gates, but even that tries to improve a computer chip’s function and design, not outright replace it as the question implies.

Maybe we’re going a little too far with this. Maybe the person asking the question really wanted to know about designs that will replace today’s CMOS chips, not to challenge computation as most of us in the field know it. Then Panchanathan could’ve talked about boron-enriched diamond, graphene, or graphene-molybdenum disulfide chips rather than future applications of computer chips in what are quite exciting areas of computer science all by themselves. But that’s the problem with a bad question from someone who doesn’t know the topic: we don’t know what’s really being asked and can’t give a proper answer. Considering that it originally came from a popular science and tech discussion, though, answering it becomes a political proposition. If, instead of an answer, you explain that the entire premise is wrong, you risk coming across as patronizing and making the topic seem way too complex for those whose expertise is not in your field. That may be why Panchanathan took a shot at it, though I really wish he had tried to educate the person asking the question instead…

math love

Sometimes it’s hard to decide whether an article asking about the role of computers in research is simply clickbait that lures readers to disagree and boost views, or a legitimate question that a writer is trying to investigate. In this case, an article on Wired about the future of math, focused ever more on computer proofs and algorithms, asks whether computers are steamrolling over all human mathematicians because they can calculate so much so quickly, then answers itself with notes on how easily code can be buggy and proofs of complex theorems can go wrong. Maybe the only curious note is that of an eccentric mathematician at Rutgers who credits his computers as co-authors on his papers, and his polar opposite, an academic who eschews programming to such an extent that he delegates problems requiring code to his students, thinking it’s not worth his time to bother learning the new technology. It’s a quirky study in contrasts, but little else.

But aside from the obvious answers and problems with the initial questions, a few things jumped out at me. I’m not a mathematician by any stretch of the imagination. My software deals with the applied world. Nevertheless, I’m familiar with how to write code in general, and when a mathematical proof that takes 50,000 lines of code is being discussed, my first thought is how you could possibly need that much code to test one problem. The entire approach seems bizarre for what sounds like an application of graph theory that shouldn’t take more than a few functions to implement, especially in a higher level language. And this is not counting the 300 pages of the proof’s dissection, which again seems like tackling the problem with a flood of data rather than a basic understanding of the solution’s roots. In this case, the computer seemed like it was aiding and abetting a throw-everything-and-the-kitchen-sink-at-it methodology, and that’s not good.
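To put the “a few functions” claim in perspective, here’s what a small brute-force graph computation looks like: an exhaustive count of proper colorings of a graph, the kind of case-checking that computer-assisted proofs often lean on. The graph and color count below are toy examples of my own, not anything taken from the proof in question:

```python
# Brute-force checker for proper k-colorings of a small graph -- a tiny
# example of exhaustive case verification in a few functions. The 4-cycle
# graph and k=3 are illustrative choices, not from any published proof.

from itertools import product

def is_proper(coloring, edges):
    """A coloring is proper if no edge joins two same-colored vertices."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def count_colorings(n_vertices, edges, k):
    """Count proper k-colorings by enumerating all k^n assignments."""
    return sum(
        is_proper(c, edges)
        for c in product(range(k), repeat=n_vertices)
    )

# A 4-cycle: 0-1-2-3-0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_colorings(4, edges, 3))  # 18, matching the chromatic polynomial
```

Of course, real computer-assisted proofs blow up because the case space explodes, not because each check is hard, which is exactly why a human who understands the structure of the problem is still the scarce resource.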

When you use computers to generate vast reams of data in which a solution may be hiding, or just record what they said after running a program you designed, you might get the right answer. The catch is that you’re never going to be sure unless you can solve the problem itself, or come very close to the real answer and just need the computer to scale up your calculations and fill in most of the decimal places you know need to be there. After all, computers were designed for doing repetitive, well-defined work that would take humans far too long to do, and in which missing an insignificant detail would quickly throw everything off by the end. They are not thinking machines, and they rely on a programmer really knowing what’s going on under the hood to be truly useful in the academic field. Otherwise, mathematics could end up with 300 pages and 50,000 lines of code for one paper and two pages of computer printouts for another. And both extremes would get us nowhere pretty fast without a human who knows how to tackle the real problem…

overheated mouse

As an old expression teaches us, when you have a hammer, all your problems look like nails, so it’s no surprise that Silicon Valley bigwigs interested in improving education quickly turn to coding and training kids for future computer science jobs. Really, that’s pretty much all they know, and they were very successful at it, so surely our nation’s economic and educational woes can be solved by teaching everyone how to code, from toddlers to marketing executives, right? According to the brothers behind the project, computer science classes would remove the need for the tax hikes and spending cuts currently being debated into oblivion on Capitol Hill, as well as make countless workers immune to outsourcing. Just so you know how seriously they’re taking the need for coding in schools, here’s a money quote from the article that details exactly how they plan to fix schools and reclaim economic prosperity with programming classes…

[Hadi Partovi] told me, “It’s a challenge that our country needs to face.” Some of these gaps are because schools don’t treat computer science the way they should, and they don’t recognize coding as an essential skill, like reading and writing. Partovi has taken this on as his personal goal, as well as the goal of

How can I put this delicately? You need to be able to write your name to function in society. You need to be able to read signs to get anywhere on your own. You don’t need to know how to write recursive JavaScript functions to get a mortgage or apply for a credit card. You don’t need to be able to write an implementation of Dijkstra’s algorithm in Python to find your way around town. I’d say it would be great if you could, and more power to you if you enjoy working with graph theory as it applies to the real world, but we have GPS devices for that, and they’re already built to find the most efficient and practical routes to your destination. We also have maps and street signs, which require that you know how to read rather than how to code. Schools don’t see coding as a critical life skill because it’s not. It’s an essential skill for programmers, but for some odd reason, some members of my profession in Silicon Valley tend to forget that not everyone out there is a programmer, and not everyone wanted to be a programmer since childhood.
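For the curious, the route-finding computation mentioned above really is compact. Here’s a minimal sketch of Dijkstra’s algorithm over a made-up road graph; the actual software inside a GPS unit is far more elaborate, but this is the core idea:

```python
# Minimal sketch of Dijkstra's shortest-path algorithm. The road graph
# and weights below are invented for illustration.

import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a weighted graph given as
    {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter path
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

roads = {
    "home": [("store", 2), ("office", 9)],
    "store": [("office", 4)],
    "office": [],
}
print(dijkstra(roads, "home"))  # {'home': 0, 'store': 2, 'office': 6}
```

Which is exactly the point: this is a fine exercise for a programmer, and entirely unnecessary knowledge for getting around town.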

When basing essential skills on one’s own career, we could argue that plumbing, woodworking, accounting, or electrical engineering should rank just as highly as basic literacy. Pipes leak and taxes need to get done, not to mention that homes can have bad wiring and people need some furniture around the house, and you can definitely make a living doing any of these things as a full time professional. But when was the last time you had to lay new pipes in your home? Or the last time you had to fix the wiring in your office? Or built your own furniture? It’s impossible to be skilled in everything, and every useful job can lay the same exact claims made by the Partovis as to why it should be given outsize attention and resources in schools. But hold on a minute, say the Partovis: by 2020 there will be a million IT jobs with no one to fill them. Imagine the benefit to our economy if we filled a million high paying, outsourcing-proof jobs with computer science students trained for their first day of professional coding since grade school.

Yeah, about those million jobs. This assumes a straight-line projection in which computer science jobs grow at twice the national average without a hiccup for seven years, and that none of them could be outsourced. It’s possible that IT jobs will keep exploding, but considering that a few Indian firms with an enormous IT consulting footprint have convinced many a CEO and CIO to ship countless programming jobs overseas or hire their coders, the idea that these jobs are here to stay isn’t a given. If anything, a shortage of projected workers would give companies an incentive to crank up outsourcing rather than invest in education at home. Why? Because it’s cheaper on paper, despite many a well justified warning about the unreliable quality of the code that comes back. It would also be a good idea to keep in mind that schools are now graded mostly by high-stakes testing, and while teachers are being told to teach their students how to take all the mandatory tests and score well enough on them to keep their schools funded, they’re probably not going to be all that keen on incorporating computer science into the curriculum.

Likewise, the assumption that colleges will keep churning out the same number of comp sci grads over the next seven years doesn’t seem plausible to me. In late summer and early fall, as colleges were getting ready to start a new academic year, hardly a week went by without e-mails and Facebook messages asking what I thought about computer science as a major, usually referring to some friend or cousin who has an IT job and seems to be doing very well. People are well aware that computer science is a lucrative field with a lot of demand. But the truth of the matter is that not everyone can be a programmer and not everyone wants to be. Going out of our way to show how supposedly easy and fun coding is won’t make more people choose it, and if the only reason they go into the field is the size of their expected paychecks, they’re not going to like it and will either quit or be run out of their city’s IT companies in a hurry. Better education for the nation begins with reining in testing for the sake of testing, and with more time to explore and study science, not by plopping kids down in front of a computer and telling them programming is a crucial life skill when it’s not.


The mindset of a Singularitarian is an interesting one. It’s certainly very optimistic, countering a lot of criticisms of their ideas by declaring that surely, someone will solve them with the mighty and omnipotent technology of the future, technology that pre-Singularity primitives like us can’t even conceive of because we don’t understand their mythology of exponential growth in scientific sophistication. It also holds some very strange ideas about computers, casting them as useful and powerful tools, our potential overlords, rogue agents to be tamed like pets, and new homes for our brains after our bodies are past their use-by date, all at the same time. Now, I’m not exactly surprised by this, because the original concept of the Singularity as detailed in a paper by Vernor Vinge is pretty much all over the place, so overlap and conflicting opinions are inevitable as everyone tries to define what the Singularity really is and when it will arrive, generally settling on vague, almost meaningless clichés for the press.

But what does surprise me is how brazenly Singularitarians embrace the idea of a future where computers can and will do it all just by having more processing power or more efficient CPUs, on display in this H+ Magazine review of a transhumanist guide. While ruminating in Q&A format on the awesome things we’ll get to do with infinite technological prowess, the book’s author blithely dismisses the notion of using advanced cyborg technology for space exploration. According to him, we’ll have so much computing power available that we could simulate anything we wanted, making space exploration obsolete. In the words of Wolfgang Pauli, this isn’t even wrong. We do have a lot of computational power available today, through the cloud or by assembling immense supercomputers with many thousands of cores and algorithms which distribute the work to squeeze the most processing power out of them. But all that power means squat if it isn’t used wisely, and simulating things we know too little about to simulate is not using it wisely.

How can we simulate Mars or Titan, and use those simulations as viable models for exploration, if we’re still not sure of their exact composition and natural processes? Look at the models we had for alien solar systems in the 1970s and how little resemblance they bear to what we’re actually finding as we explore the cosmos. Instead of organizing themselves into neat groups on orbits that look like slightly elongated circles, exoplanets are all over the place. We didn’t even think a hot Jupiter was possible until we saw one, and even then, it took us years to confirm that they really exist. And after all that, we found that they appear to be rather common, making our solar system something of an outlier. Now, this may all change with new observations, of course, but the point is that we can’t simulate what we don’t know, and the only way to know is to go, look, experiment, and repeat the findings. Raw computing power is no substitute for a real world research program or genuine space exploration done by humans and machines.

The scary thing about this proposal is that I’ve heard very similar views casually echoed by members of the Singularity Institute and mentioned by transhumanists around the web as they disparage the future of human spaceflight. I’m a firm believer that if anything could qualify as a Singularity, it would be augmented humans living and working in space, carrying out complex engineering and scientific missions beyond Earth orbit. Considering what long term stays in microgravity and cosmic radiation do to the human body, augmenting our future astronauts is downright logical, especially because the technology could be put to great use after it proves its worth, helping stroke and trauma victims regain control of their bodies or giving them new limbs which become permanent parts of them, not just prosthetics. Rather than run with the idea, however, too many Singularitarians prefer to believe that magical computers endowed with powerful enough CPUs will just do everything for them, even their scientific research. That’s intellectually lazy and a major disservice to their goal of merging with machines.

[ illustration by Oliver Wetter ]


Political parties don’t take well to losing. They’re in the business of winning elections because a winner attracts money and attention, money and attention they can use to grow stronger. So in the fallout from this presidential election, one wing of the Republican party is calling for a much needed and long overdue period of self-reflection in which the GOP swings closer to the center and becomes much more libertarian, minus the borderline anarchist overtones, while another is mourning the death of traditional America at the hands of liberal freeloaders and spinning constant conspiracy theories. This reaction is not too dissimilar from what you could see after 2004, when swarms of liberal bloggers sighed heavily about losing the America they knew to bloodthirsty, Bible-thumping theocrats, and tossed out conspiracy theories about voting machines. But in the conservative blogosphere, a conspiracy theory about voting and technology that doesn’t target voting machines is now trying to gain traction by accusing coders of political sabotage.

Basically, the theory goes as follows. Romney’s campaign created an app called Orca to track the Republican vote and give conservative voters a tool to report what they saw as obstructions to voting on the spot via their smartphones. One of the companies behind it employed a developer once contracted for some unspecified work with the Gore campaign, and another Orca developer was black and therefore, supposedly, a likely Obama supporter. And so, the theory holds, they and all their like-minded friends on the project intentionally sabotaged it, making it difficult to crank up a get out the vote effort and report voting incidents and mishaps in a timely manner; the app was too slow, it frustrated too many users, and you can see the result in the low Republican turnout. That’s a little odd to say the least when you consider the hundreds of millions spent on ads, canvassing, robocalls, mass mailings, and every other known effort to get people to vote over the last two years. Being slammed with election talk for a year didn’t get enough Republicans to the polls, but a vote-tracking app would’ve made a multi-million vote difference?

Now this is an interesting election conspiracy theory because it’s the first one I’ve heard that goes after developers and campaign tools rather than making the classic allegation of rigged voting machines. It’s true that voting machines were rigged in some cities, but they were rigged for Romney and the GOP, so that angle wouldn’t have worked. Going after Orca shows that there’s some original thought happening here, even though that original thought amounts to holding a more than decade old stint against a developer, who can easily end up working on a campaign he would rather not support and whose code would be reviewed before being added to the final product, and to playing the race card. The odds that a couple of developers snuck some sort of malicious code into Orca aren’t high, because delivering a bad product means a black mark on your track record, and the developers in question didn’t volunteer to write code for a campaign. They’re employees who were assigned units of work, not a small team of tech-savvy political activists who built Orca for Romney out of conviction.

But if the developers can’t be held liable without a lot more proof and source code to back it up, why did Orca suddenly fail on election night? The data points to a simple but pressing issue that has little to do with the code: infrastructure, or rather, the lack of it. If you’re going to collect a lot of data in a very short amount of time, you had better be ready for it. When just ten servers were hit with 1,200 or so requests per minute and the mobile part of the system was housed on a single server, it was only a question of when the system would either crash or jam so badly that for all intents and purposes it appeared dead to the outside world. Had Orca been built with the proper scale in mind, it would’ve lived on a hundred servers, with the mobile end taking up half of that capacity, and there would’ve been special agreements with ISPs to get the most throughput on election night. None of that seems to have been done, according to reports across the web. And when all it would’ve taken to catch the problem was a Romney staffer counting the servers and asking "are you sure that’s enough?", calling this sabotage seems hyperbolic.
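Spelling out the arithmetic makes the point. Here’s a minimal sketch using the server count and request rate reported for Orca; the surge multiplier is my own assumption, not a reported figure.

```python
# Rough capacity arithmetic from the reported Orca figures: roughly
# 1,200 requests per minute across about ten servers, with all mobile
# traffic funneled through a single box.

def per_server_load(requests_per_minute, servers):
    """Average requests per second landing on each server."""
    return requests_per_minute / 60 / servers

print(per_server_load(1200, 10))  # spread over ten servers, 2 req/s each
print(per_server_load(1200, 1))   # the lone mobile box eats 20 req/s

# A hypothetical 10x surge as polls close concentrates the load on
# that single mobile server, with no spare capacity to absorb it.
print(per_server_load(1200, 1) * 10)
```

Averages like these hide the real problem: election-night traffic isn’t uniform, and a single choke point fails at its peak load, not its mean.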

What seems far more likely is that Romney dropped the ball, and those in key positions in his campaign failed to do their research and follow up with the Orca team. Even if it had been a perfectly working app, it was unlikely to make much of a difference because it could only track who voted and where, not spring into action and get more people to the polls. When at the end of the day we’re talking about a difference of nearly 3.4 million votes, Orca would’ve needed to get more than 1.8 million Republican voters into the booths within several hours. Romney had spent almost a decade campaigning. If all the hundreds of millions he and the GOP spent, along with the barrage of exhortations to vote from talk shows, Fox News, and right wing blogs, made little difference, what exactly would a tracking app do? If anything, the campaign did what many techies like me see on a daily basis in the business world: the boss went after a buzzword, then threw a lot of money and effort at a tool he didn’t quite know how to use, but which he could show to reporters as comparable to what his main competitor was using…

digital cloud

Good stories need conflict, and if you’re going to have conflict, you need a villain. But you don’t always get the right villain, as we can see with the NYT’s scathing article on waste in the giant data centers which form the backbone of cloud computing. According to the article, data centers waste between 88% and 94% of all the electricity they consume on servers sitting idle. When they’re going through enough electricity to power a medium sized town, that adds up to a lot of wasted energy, and their diesel backups generate quite a bit of pollution on top of that. Much of the article is devoted to portraying data centers as lumbering, risk averse giants who refuse to innovate out of fear and have no incentive to curb their wasteful habits. The real issue, the fact that their end users demand 99.999% uptime and will tear their heads off if their servers are down for any reason at any time, especially during a random traffic surge, is glossed over in just a few brief paragraphs despite being the key to why data centers are so overbuilt.

Here’s a practical example. This blog is hosted by MediaTemple and has recently been using a cloud service to improve performance. Over the last few years, it’s been down five or six times, primarily because database servers went offline or crashed. During those outages, the blog was unreachable by readers and its feed survived only in the cache of the syndication company, a cache that refreshes fairly frequently. That means fewer views, because for all intents and purposes, the links leading to Weird Things are dead. Fewer views mean a smaller payout at the end of the month, and when that was a chunk of the income I needed to pay the bills, the hit was unpleasant. Imagine what would’ve happened if, right as my latest post gained serious momentum on news aggregator sites (I once had a post make the front pages of both Reddit and StumbleUpon and pull in 25,000 views in two hours), the site had gone down with another server error. A major and lucrative spike would’ve been stopped dead in its tracks.

Now, keep in mind that Weird Things is a small site doing between 40,000 and 60,000 or so views per month. What about a site that gets 3 million hits a month? Or 30 million? Or the massive news aggregators dealing with hundreds of millions of views in the same time frame, for which being down for an hour means tens of thousands of dollars in lost revenue? Data centers are supposed to be the Atlases holding up the world of on-demand internet in a broadband era, and if they can’t handle the load, they’re dead in the water. So what if they waste 90% of the energy they consume? The clients are happy and the income stream continues. They’ll win no awards for turning off a server, then taking a minute or two to boot it back up and restart all the instances of the applications it runs. Of course, since each instance takes only a small amount of memory and processing capability even on a heavily used server, there’s always the viable option of consolidating several virtualized servers onto a single box to utilize more of its hardware.

If you were to go by the NYT article, you’d think that data centers are avoiding this, but they’re actually trying to virtualize more and more servers. The problem is that virtualization on such a scale isn’t easy to implement, and there are a number of technical issues any data center needs to address before going into it full tilt. Considering that each center runs on what a professor of mine used to call "their secret sauce," it has to make sure that any extensive virtualization scheme it wants to deploy won’t interfere with that secret sauce recipe. When we talk about changing how thousands of servers work, we have to accept that a major update like that takes a while to test and deploy. Is there an element of fear there? Yes. But do you really expect there not to be when the standards to which these data centers are held are so high? That 99.999% uptime figure allows for only about five minutes of total downtime in an entire year, and a small glitch here or there can easily put a data center in breach of its service contracts. So while they virtualize, they’re keeping their eye on the money.
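The downtime budget behind those guarantees is easy to work out: a year has 365 × 24 × 60 = 525,600 minutes, and each additional nine in the uptime figure cuts the allowance by a factor of ten. A minimal sketch:

```python
# Downtime budget implied by an uptime guarantee. A non-leap year has
# 365 * 24 * 60 = 525,600 minutes; the budget is whatever fraction of
# that the guarantee leaves uncovered.

def downtime_minutes_per_year(uptime_percent):
    return (1 - uptime_percent / 100) * 365 * 24 * 60

for uptime in (99.9, 99.99, 99.999):
    print(f"{uptime}% uptime -> {downtime_minutes_per_year(uptime):7.2f} min/year")
```

At three nines the budget is a comfortable 8.76 hours a year; at five nines it’s barely five minutes, which a single botched deployment can blow through on its own.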

But the silver lining here is that once virtualization in data centers becomes the norm, we’ll be set for a very long time in terms of data infrastructure. Few, if any, additional major data centers will need to be built, and users can keep sending huge files across the web at will just as they do today. If you want to blame anyone for the energy waste in data centers, you have to point the finger squarely at consumers with extremely high demands. They’re the ones for whom these centers are built, and they’re the ones who would bankrupt a data center should an outage big enough to affect their end-of-month metrics ever happen. This, by the way, includes us, the typical internet users, as well. Our e-mails, documents, videos, IM transcripts, and backups in case our computers break or get stolen all have to live somewhere, and these wasteful data centers are where they end up. After all, the cloud really is just huge clusters of hard drives filled to the brim with stuff we may well have forgotten by now, alongside the e-mails we read last night and the Facebook posts we made last week…