Archives For computers

[ image: bitcoin ]

When an investment loses half its value in about six hours, people notice and people worry. So as the virtual bitcoin currency fell from about $260 per BTC to $130 per BTC during what had to be a really scary day for bitcoin investors, articles all over the web started questioning bitcoin as a viable currency all over again. Now the price is back up to the $160 per BTC range and things seem to have stabilized. They might even pick up a bit as aggressive investors see an opportunity to scoop up a valuable asset at a discount. But the question remains. Is bitcoin a viable and solid investment, or is it just a virtual casino that will one day crumble? Last time I wrote about the currency, everyone was wondering what’s next for bitcoin. And after all this time, there’s still no clear strategy for what bitcoin could be other than something to buy and hold on to until it gains value, and something to use for anonymous transactions on the web. Since it does have uses, it’s not going anywhere soon, but it’s a little unnerving to think about its long term viability.

Here’s the dilemma. Money is worth something when a) we say that it is, and b) it’s adopted with enough enthusiasm to be widespread and useful. Dollars are used around the world because there’s a common understanding across the planet that dollars are valuable and backed by the world’s largest economy and dominant military superpower. This is why everybody will take your green bills as valid payment, and should you need to exchange your dollars, there’s a big international network of markets that figures out exchange rates with other established currencies. This is very basic Econ 101 stuff, right? Well, bitcoin has some of the same things going for it. We agree that bitcoins have value, there are international exchanges to figure out what they’re worth compared to other currencies, and there are adopters who take them as valid payment. But are the adopters enthusiastic enough and are there enough of them out there? After all, who stands by the value and strength of the bitcoin? Who secures its worth as a currency? Right now, it’s a few major exchanges and a large online community. That’s a start, but it may not be enough.

Rather than being spent around the world to pay for things on a regular basis, bitcoins sit in virtual lockers on hard drives in safe deposit boxes. Instead of mining and spending, most of the momentum seems to be toward mining and hoarding. There are something like 11 million bitcoins in circulation and another 10 million left to mine. What then? Is that when people will finally start to spend them and mainstream businesses adopt them as a valid form of payment? And will people deal with a finite supply of bitcoins by taking advantage of the fact that bitcoins can be split into very, very small decimal values, creating artificial inflation as the bitcoin floodgates are opened? And let’s not even get started with alternative block chains that could split bitcoin into two, or three, or hundreds of different currencies, negating the original libertarian idealism behind it. The way the bitcoin economy was originally meant to work basically caps its size, which means that once every bitcoin to be mined has been mined, the economy will no longer grow, prices will have to deflate to allow any increase in the volume of trade, and the result will be some very bizarre PPP ratios.
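To put that divisibility in perspective, here’s a quick back-of-the-envelope sketch in Python, assuming the roughly 21 million coin cap implied by the numbers above and the protocol’s eight decimal places per coin…

# A rough look at how far a capped supply stretches through divisibility alone.
TOTAL_SUPPLY_BTC = 21_000_000        # approximate hard cap on coins that can ever be mined
SATOSHIS_PER_BTC = 10 ** 8           # smallest unit the protocol can represent

print("smallest unit: %.8f BTC" % (1 / SATOSHIS_PER_BTC))
print("total indivisible units: {:,}".format(TOTAL_SUPPLY_BTC * SATOSHIS_PER_BTC))
# smallest unit: 0.00000001 BTC
# total indivisible units: 2,100,000,000,000,000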

Interestingly, this is somewhat reminiscent of the ancient world, in which a fixed supply of gold had to be broken down further and further into more and more complex currency to allow transactions to keep happening, so there is precedent for this. However, there were no international exchanges to figure out what currencies were worth back then, and minting gold, silver, and bronze coins was pretty much the safest way to go for many official transactions. Now we have such exchanges, so we don’t need to price our currencies solely on commodities. In fact, we’re doing it the other way around and using commodities as a useful hedge against a fall in a currency’s value. But of course all this is predicated on the simple fact that these currencies are being used in enough transactions to be truly valuable, otherwise they fall out of use and much of a country’s debts have to go unpaid or be repaid in commodities that can be made liquid on international markets. (Well, that’s the idea, the practice is way more complicated.) Who would stand behind bitcoin? Who would repay all those left holding the bag if bitcoin crashes?

It’s that anxiety about what will happen when the bitcoins are all mined, and all the complexities of using a currency with no central bank behind it, that keep a whole lot of people from using it in a mainstream environment. And this is very likely what caused the massive, massive correction we just saw. Those who were not True Believers™ in the power of virtual currencies with no central bank or regulators attached decided they had made enough and cashed out. More speculators will return and new ones will buy the now discounted bitcoins, so we could see $260 or $300+ BTC rates in no time at all. But if all they’re doing is hoarding a currency with very limited use (and an uncomfortable share of that use quite unsavory, such as paying for cybercrime tools and illegal goods), the value of the bitcoin is limited to that of a novelty ripe for aggressive speculation and little else. At the same time though, bitcoin shows that you can do an awful lot with virtual cash and we might want to adopt its design to phase out paper money and coins for established currencies.

All in all, it looks like bitcoin is an interesting experiment ripe for some fun speculation to make a little, or a lot of, quick cash (relevant aside: I have no investment in bitcoins), and one with things to teach us about modifying modern, existing currencies for the wired world. But it really doesn’t look like it could rise to become the next franc, or dollar, or yen. It will have its uses, but it’s going to stay more or less a novelty for the foreseeable future. That is, unless bitcoin finds a new way to open up, and instead of hoarding it in virtual wallets on encrypted hard drives buried deep in a remote mountain, locked in safe deposit boxes designed to survive a thermonuclear blast and the inevitable zombie apocalypse that will follow, bitcoin holders go out and spend the currency in the real world. Maybe then the lack of a central issuing authority might not be an absolute deal breaker, and if it stays small enough, people might not want regulators to sort out particularly rough price changes and the Fed will just let it keep flying under the radar…


[ image: circuit boards ]

A few years ago, when theoretical physicist Michio Kaku took on the future of computing in his thankfully short-lived Big Think series, I pointed out the many things he got wrong. Most of them weren’t pedantic little issues either; they reflected a fundamental misunderstanding of not only the existing computing arsenal deployed outside academia, but the business of technology itself. So when the Future Tense blog put up a post from highly decorated computer expert Sethuraman Panchanathan purporting to answer the question of what comes after computer chips, a serious and detailed answer should’ve been expected. And there was one. Only it wasn’t a reply to the question that was asked. It was a breezy overview of brain-machine interfaces. Just like Kaku’s venture into the future of computing in response to a question clearly asked by someone whose grasp of computing is sketchy at best, Panchanathan’s answer was a detour that avoided what should’ve been done instead: an explanation of why the question was not even wrong.

Every computing technology not based on living things, a somewhat esoteric topic in the theory of computation we once covered, will rely on some form of a computer chip. It’s currently one of the most efficient ways we’ve found of working with binary data and it’s very unlikely that we will be abandoning integrated circuitry and compact chips anytime soon. We might fiddle around with how they work on the inside, making them probabilistic, or building them out of exotic materials, or even modifying them to read quantum fluctuations as well as electron pulses, but there isn’t a completely new approach to computing that’s poised to completely replace the good old chip in the foreseeable future. Everything Panchanathan mentions is based on integrating the signals from neurons with currents running through computer chips. Even cognitive computing for future AI models relies on computer chips. And why shouldn’t it? The chips give us lots of bang for our buck, so asking "what comes after them" doesn’t make a whole lot of sense.

If computer chips weren’t keeping up with our computing demands and could not be modified to do so because some law of physics or chemistry stood in the way, this question would be pretty logical, just like asking how we’ll store data when our typical spinning disk hard drives can’t read or write fast enough to keep up with data center demands and create unacceptable lag. In the case of aging hard drive technology, we have good answers like RAID configurations and a new generation of solid state drives because these are real problems for which we had to find real solutions. But computer chips aren’t a future bottleneck. In fact, they’re the very engine of a modern computer, and we’d have to heavily add on to the theory of computing to even consider devices that don’t function like computer chips or whose job couldn’t be done by them. Honestly, I’m at a complete loss as to what these devices could be and how they could work. Probably the most novel idea I’ve found was using chemical reactions to create logic gates, but that’s an attempt to improve a computer chip’s function and design, not outright replace it as the question implies.

Maybe we’re going a little too far with this. Maybe the person asking the question really wanted to know about designs that will replace today’s CMOS chips, not challenge computation as most of us in the field know it. Then he could’ve talked about boron-enriched diamond, graphene, or graphene-molybdenum disulfide chips rather than future applications of computer chips in what are quite exciting areas of computer science all by themselves. But that’s the problem with a bad question from someone who doesn’t know the topic. We don’t know what’s really being asked and can’t give a proper answer. Considering that it originally came from a popular science and tech discussion, though, answering it becomes a political proposition. If instead of an answer you explain that the entire premise is wrong, you risk coming across as patronizing and as making the topic way too complex for those whose expertise is not in your field. That may be why Panchanathan took a shot, though I really wish he had tried to educate the person asking the question instead…


[ image: math love ]

Sometimes it’s hard to decide whether an article asking about the role of computers in research is simply clickbait that lures readers to disagree and boost views, or a legitimate question a writer is trying to investigate. In this case, an article on Wired about a future of math focused ever more on computer proofs and algorithms asks whether computers are steamrolling over human mathematicians because they can calculate so much so quickly, then answers itself with notes on how easily code can be buggy and proofs of complex theorems can go wrong. Maybe the only curious note is that of an eccentric mathematician at Rutgers who credits his computers as co-authors on his papers, and his polar opposite, an academic who eschews programming to such an extent that he delegates problems requiring code to his students, thinking it’s not worth his time to bother learning the new technology. It’s a quirky study in contrast, but little else.

But aside from the obvious answers and problems with the initial questions, a few things jumped out at me. I’m not a mathematician by any stretch of the imagination. My software deals with the applied world. But nevertheless, I’m familiar with how to write code in general, and when there’s a mathematical proof being discussed that takes 50,000 lines of code, my first thought is how you could possibly need that much code to test one problem. The entire approach seems bizarre for what sounds like an application of graph theory that shouldn’t take more than a few functions to implement, especially in a higher level language. And this is not counting the 300 pages of the proof’s dissection, which again seems like tackling the problem with a flood of data rather than a basic understanding of the solution’s roots. In this case, the computer seemed like it was aiding and abetting a throw-everything-and-the-kitchen-sink-at-it methodology, and that’s not good.
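To show what I mean by a few functions in a higher level language, here’s a minimal, generic graph routine in Python: breadth-first search plus a connectivity check. It’s not the proof the article discusses, just a sense of how compact this kind of code usually is…

from collections import deque

def bfs(graph, start):
    # Return the set of vertices reachable from start in an adjacency-list graph.
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def is_connected(graph):
    # True if every vertex is reachable from an arbitrary starting vertex.
    return not graph or bfs(graph, next(iter(graph))) == set(graph)

# A triangle with a pendant vertex is connected.
print(is_connected({1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}))  # True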

When you use computers to generate vast reams of data in which a solution may be hiding, or just record what they said after running a program you designed, you might get the right answer. The catch is that you’re never going to be sure unless you can solve the problem itself, or come very close to the real answer and just need the computer to scale up your calculations and fill in most of the decimal places you know need to be there. After all, computers were designed for doing repetitive, well-defined work that would take humans far too long to do and in which missing an insignificant detail would quickly throw everything off by the end. They are not thinking machines, and they rely on a programmer who really knows what’s going on under the hood to be truly useful in the academic field. Otherwise, mathematics could end up with 300 pages and 50,000 lines of code for one paper and two pages of computer printouts for another. And both extremes would get us nowhere pretty fast without a human who knows how to tackle the real problem…


[ image: overheated mouse ]

As an old expression teaches us, when you have a hammer, all your problems look like nails, so it’s no surprise that Silicon Valley bigwigs interested in improving education quickly turn to coding and training kids for future computer science jobs. Really, that’s pretty much all they know, and they were very successful, so surely our nation’s economic and educational woes can be solved by teaching everyone how to code, from toddlers to marketing executives, right? According to the brothers behind the Code.org project, computer science classes would remove the need for the tax hikes and spending cuts currently being debated into oblivion on Capitol Hill, as well as make countless workers immune to outsourcing. Just so you know how seriously they’re taking the need for coding in schools, here’s a money quote from the article that details exactly how they plan to fix schools and reclaim economic prosperity with programming classes…

[Hadi Partovi] told me “It’s a challenge that our country needs to face.” Some of these gaps are because schools don’t treat computer science the way it should, and they don’t recognize coding as an essential skill, like reading and writing is. Partovi has taken this on as his personal goal, as well as the goal of Code.org.

How can I put this delicately? You need to be able to write your name to function in society. You need to be able to read signs to get anywhere on your own. You don’t need to know how to write recursive JavaScript functions to get a mortgage or apply for a credit card. You don’t need to be able to write an implementation of Dijkstra’s algorithm in Python to find your way around town. I’d say it would be great if you could, and more power to you if you enjoy working with graph theory as it applies to the real world, but we have GPS devices for that, and they’re already built to find the most efficient and practical routes to your destination. We also have maps and street signs, which require that you know how to read rather than how to code. Schools don’t see coding as a critical life skill because it’s not. It’s an essential skill for programmers, but for some odd reason, some members of my profession in Silicon Valley tend to forget that not everyone out there is a programmer, and not everyone wanted to be a programmer since childhood.
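For the curious, here’s roughly what such an exercise looks like, a minimal sketch of Dijkstra’s algorithm in Python over a made-up toy road network, the kind of thing programmers write for fun and nobody needs for a credit card application…

import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}; returns shortest distances from source.
    distances = {node: float("inf") for node in graph}
    distances[source] = 0
    heap = [(0, source)]
    while heap:
        dist, node = heapq.heappop(heap)
        if dist > distances[node]:
            continue  # stale queue entry
        for neighbor, weight in graph[node]:
            candidate = dist + weight
            if candidate < distances[neighbor]:
                distances[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return distances

# Hypothetical driving times between a few intersections.
roads = {"A": [("B", 4), ("C", 1)], "B": [("D", 1)], "C": [("B", 2), ("D", 6)], "D": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}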

When basing essential skills on one’s own career, we could argue that plumbing, woodworking, accounting, or electrical engineering should rank just as highly as basic literacy. Pipes leak and taxes need to get done, not to mention that homes can have bad wiring and people need some furniture around the house, and you can definitely make a living doing any of these things as a full time professional. But when was the last time you had to lay new pipes in your home? Or the last time you had to fix the wiring in your office? Or built your own furniture? It’s impossible to be skilled in everything, and every useful trade can lay the same exact claims made by the Partovis as to why it should be given outsize attention and resources in schools. But hold on a minute, say the Partovis, by 2020 there will be a million IT jobs with no one to fill them. Imagine the benefit to our economy if we found a million high paying jobs for computer science students immune to all outsourcing, trained for their first day of professional coding since grade school.

Yeah, about those million jobs. This assumes a straight-line projection in which computer science jobs grow at twice the national average without a hiccup for seven years and that none of them could be outsourced. It could be possible that IT jobs will keep exploding, but considering that a few Indian firms have an enormous IT consulting footprint and have convinced many a CEO and CIO to ship countless programming jobs overseas or hire their coders, the idea that these jobs are here to stay isn’t a given. If anything, a projected worker shortage would give employers an incentive to crank up outsourcing rather than invest in education at home. Why? Because it’s cheaper on paper, despite many a well justified warning about the unreliable quality of code that comes back. It would also be a good idea to keep in mind that schools are now being graded mostly by high-stakes testing, and while teachers are being told to teach their students how to take all the mandatory tests and score well enough on them to keep their schools from being defunded, they’re probably not going to be all that keen on incorporating computer science into the curriculum.

Likewise, the assumption that colleges will continue to churn out the same number of comp sci grads over the next seven years doesn’t seem plausible to me. In late summer and early fall, as colleges were getting ready to start a new academic year, hardly a week went by without e-mails and Facebook messages asking what I thought about computer science as a major, referring to some friend or cousin who has an IT job and seems to be doing very well. People are well aware that computer science is a lucrative field with a lot of demand. But the truth of the matter is that not everyone can be a programmer and not everyone wants to be. Trying to create armies of coders by going out of our way to show how supposedly easy and fun it is doesn’t mean that more people will choose the field, and if the only reason they’re going into it is the size of their expected paychecks, they’re not going to like it and will either quit or be run out of their city’s IT companies in a hurry. Better education for the nation begins with reining in testing for the sake of testing, and with more time to explore and study science, not with plopping kids down in front of a computer and telling them that programming is a crucial life skill when it’s not.


[ image: gynoid ]

The mindset of a Singularitarian is an interesting one. It’s certainly very optimistic, countering a lot of criticisms of their ideas by declaring that surely, someone will solve them with the mighty and omnipotent technology of the future, technology that pre-Singularity primitives like us can’t even conceive of because we don’t understand their mythology of exponential growth in scientific sophistication. And it also has some very strange ideas about computers, placing them as useful and powerful tools, our potential overlords, rogue agents to be tamed like pets, and new homes for our brains after our bodies are past their use-by date, all at the same time. Now, I’m not exactly surprised by this because the original concept of the Singularity, as detailed in a paper by Vernor Vinge, is pretty much all over the place, so overlap and conflicting opinions are pretty much inevitable as everyone tries to define what the Singularity really is and when it will arrive, generally settling on vague, almost meaningless cliches for the press.

But what does surprise me is how brazenly Singularitarians embrace the idea of a future where computers can and will do it all just by having more processing power or more efficient CPUs, on display in this H+ Magazine review of a transhumanist guide. While ruminating in Q&A format on the awesome things we’ll get to do with infinite technological prowess, the book’s author blithely dismisses the notion of using advanced cyborg technology for space exploration. According to him, we’ll have so much computing power available that we could simulate anything we wanted, making the notion of space exploration obsolete. In the words of Wolfgang Pauli, this isn’t even wrong. We have a lot of computational power available today through the cloud, or by assembling immense supercomputers with many thousands of cores and algorithms which can distribute the work to squeeze the most processing power out of them. All that power means squat, though, if it’s not used wisely, and trying to simulate things we know too little about to simulate is not a wise use.

How can we simulate Mars or Titan, and use those simulations as viable models for exploration, if we’re still not sure of their exact composition and natural processes? Look at the models we had for alien solar systems in the 1970s and how little resemblance they bear to what we’re actually seeing as we explore the cosmos. Instead of organizing into neat groups and orbits which look like slightly elongated circles, exoplanets are all over the place. We didn’t even think that a Hot Jupiter was a thing until we saw one, and even then it took us years to confirm that yes, they really exist. And after all that, we also find that they appear to be rather common, making our solar system an outlier. Now, this may all change with new observations, of course, but the point is that we can’t simulate what we don’t know, and the only way to know is to go, look, experiment, and repeat the findings. Raw computing power is no substitute for a real world research program or genuine space exploration done by humans and machines.

The scary thing about this proposal though is that I’ve heard very similar views casually echoed by members of the Singularity Institute as well and mentioned by transhumanists around the web while they disparage the future of human spaceflight. I’m a firm believer that if anything would be able to qualify for a Singularity, it would be augmented humans living and working in space and carrying out complex engineering and scientific missions beyond Earth orbit. Considering what long term stays in microgravity and cosmic radiation do to the human body, augmentation of our future astronauts is just downright logical, especially because it could be put to great use after it proves its worth to help stroke and trauma victims regain control of their bodies or give them new limbs which will become permanent parts of them, not just prosthetics. Rather than run with the idea, however, a number of Singularitarians prefer to believe that magical computers endowed with powerful enough CPUs will just do everything for them, even their scientific research. That’s just intellectually lazy and a major disservice to their goal of merging with machines.

[ illustration by Oliver Wetter ]


[ image: ballot ]

Political parties don’t take well to losing. They’re in the business of winning elections because a winner attracts money and attention, money and attention they can use to grow stronger. So in the fallout from this presidential election, one wing of the Republican party is calling for a much needed and long overdue period of self-reflection in which the GOP swings closer to the center and becomes much more libertarian but without the borderline anarchist overtones, while another is mourning the death of traditional America at the hands of liberal freeloaders and spinning constant conspiracy theories. This reaction is not too dissimilar from what you could see after 2004, when swarms of liberal bloggers sighed heavily about losing the America they knew to bloodthirsty, Bible-thumping theocrats, and tossed out conspiracy theories about voting machines. But in the conservative blogosphere, a conspiracy theory about voting and technology that doesn’t target voting machines is now trying to get some traction by accusing coders of political sabotage.

Basically, the theory goes as follows. Romney’s campaign created an app called Orca to track the Republican vote and give conservative voters a tool to report what they saw as obstructions to voting on the spot via their smartphones. One of the companies involved employed a developer once contracted for some unspecified work with the Gore campaign, and another Orca developer was black and therefore, a likely Obama supporter. And so, they and all their like-minded friends working on this project intentionally sabotaged it, making it difficult to really crank up a get out the vote effort and report voting incidents and mishaps in a timely manner; the app was too slow, frustrating too many users, and you can see that in the low turnout for Republicans. That’s a little odd to say the least when you consider that hundreds of millions have been spent on ads, canvassing, robocalls, mass mailings, and every other known effort to get people to vote during the last two years. Being slammed with election talk for a year didn’t get enough Republicans to the polls, but a vote-tracking app would’ve made a multi-million vote difference?

Now this is an interesting election conspiracy theory because it’s the first one I’ve heard going after developers and campaign tools rather than making the classic allegation that voting machines are being rigged. It’s true that voting machines were rigged in some cities, but they were rigged for Romney and the GOP, so that angle wouldn’t have worked. Going after Orca shows that there’s some original thought happening here, even though that original thought consists of holding a stint more than a decade old against a developer who can easily end up working on a campaign he would rather not support, and whose code will be reviewed before being added into the final product, and of playing the race card. The odds that a couple of developers snuck some sort of malicious code into Orca aren’t all that high because delivering a bad product means a black mark on your track record, and the developers in question didn’t simply volunteer to write code for a campaign. They’re employees who were assigned some units of work, not a small team of tech-savvy political activists who volunteered to create Orca for Romney.

But if the developers can’t be held liable without a lot more proof and source code to back it up, why would Orca suddenly fail on election night? The data points to a simple but pressing issue that has little to do with the code: infrastructure. Or rather a lack of it. If you’re going to collect a lot of data in a very short amount of time, you had better be ready for it. When just ten servers were hit with 1,200 or so requests per minute and the mobile part of the system was housed on only one server, it was just a question of when the system would either crash or jam so badly that for all intents and purposes it appeared dead to the outside world. If Orca had been built with the proper scale in mind, it would’ve lived on a hundred servers and the mobile end would have taken up half of all that capacity. There would’ve been special agreements with ISPs to get the most throughput on election night. None of that seems to have been done, according to reports across the web. And when we pause to consider that Romney staffers could’ve simply counted the number of servers and asked "are you sure that’s enough?" to catch the issue, calling this sabotage seems hyperbolic.
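For what it’s worth, the kind of capacity arithmetic that should have happened beforehand fits in a few lines. The numbers below are hypothetical placeholders, not figures from the Orca postmortems; the point is only that the estimate is cheap to make…

import math

def servers_needed(peak_requests_per_sec, per_server_capacity, headroom=0.5):
    # Size the pool so a surge only uses a fraction of each server's capacity.
    usable = per_server_capacity * (1 - headroom)
    return math.ceil(peak_requests_per_sec / usable)

# Hypothetical: volunteers reporting in bursts as polls open and close.
print(servers_needed(peak_requests_per_sec=2000, per_server_capacity=100))  # 40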

What seems far more likely is that Romney dropped the ball and those in key positions across his campaign failed to do their research and follow up with the Orca team. Even if it had been a perfectly working app, it was unlikely to make all that much of a difference because it could only track who voted and where, not spring into action and get more people to the polls. When at the end of the day we’re talking about a difference of nearly 3.4 million votes, Orca would’ve needed to get more than 1.8 million Republican voters into the booths within several hours. Romney had spent almost a decade campaigning. If all the hundreds of millions he and the GOP spent, along with the barrage of exhortations to vote from talk shows, Fox News, and right wing blogs, made little difference, what exactly would a tracking app do? If anything, the campaign did what many techies like me see on a daily basis in the business world. The boss went after a buzzword, then threw a lot of money and effort into a tool he didn’t quite know how to use, but which he could show to reporters as something very comparable to what his main competitor was using…


[ image: digital cloud ]

Good stories need conflict, and if you’re going to have conflict, you need a villain. But you don’t always get the right villain in the process, as we can see with the NYT’s scathing article on waste in the giant data centers which form the backbone of cloud computing. According to the article, data centers waste between 88% and 94% of all the electricity they consume on idle servers. When they’re going through enough electricity to power a medium sized town, that adds up to a lot of wasted energy, and diesel backups generate quite a bit of pollution on top of that. Much of the article focuses on portraying data centers as lumbering, risk averse giants who either refuse to innovate out of fear alone, or have no incentive to reduce their wasteful habits. The real issue, the fact that their end users demand 99.999% uptime and will tear their heads off if their servers are down for any reason at any time, especially during a random traffic surge, is glossed over in just a few brief paragraphs despite being the key to why data centers are so overbuilt.

Here’s a practical example. This blog is hosted by MediaTemple and has recently been using a cloud service to improve performance. Over the last few years, it’s been down five or six times, primarily because database servers went offline or crashed. During those five or six times, this blog was unreachable by readers and its feed was present only in the cache of the syndication company, a cache that refreshes fairly frequently. This means fewer views because, for all intents and purposes, the links leading to Weird Things are dead. Fewer views mean a smaller payout at the end of the month, and when this was a chunk of the income I needed to pay the bills, it was unpleasant to take the hit. Imagine what would’ve happened if, right as my latest post got serious momentum on news aggregator sites (once I had a post make the front pages of both Reddit and StumbleUpon and get 25,000 views in two hours), the site went down due to another server error. A major and lucrative spike would’ve been dead in its tracks.

Now, keep in mind that Weird Things is a small site doing between 40,000 and 60,000 or so views per month. What about a site that gets 3 million hits a month? Or 30 million? Or how about the massive news aggregators dealing with hundreds of millions of views in the same time frame, for which being down for an hour means tens of thousands of dollars in lost revenue? Data centers are supposed to be Atlases holding up the world of on-demand internet in a broadband era, and if they can’t handle the load, they’ll be dead in the water. So what if they waste 90% of all the energy they consume? The clients are happy and the income stream continues. They’ll win no awards for turning off a server, then taking a minute or two to boot it back up and start all the instances of the applications it needs to run. Of course each instance takes only a small amount of memory and processing capability even on a heavily used server, so there’s always the viable option of virtualizing servers on a single box to utilize more of the hardware.

If you were to go by the NYT article, you’d think that data centers are avoiding this, but they’re actually trying to virtualize more and more servers. The problem is that virtualization on a scale like this isn’t an easy thing to implement, and there are a number of technical issues that any data center will need to address before going into it full tilt. Considering that each center uses what a professor of mine used to call "their secret sauce," it will need to make sure that any extensive virtualization schemes it wants to deploy won’t interfere with their secret sauce recipe. When we talk about changing how thousands of servers work, we have to accept that it takes a while for a major update like that to be tested and deployed. Is there an element of fear there? Yes. But do you really expect there not to be any when the standards to which these data centers are held are so high? That 99.999% uptime figure allows for just over five minutes of total downtime in an entire year, and a small glitch here or there can easily make the data center fail its service contract requirements. So while they virtualize, they’re keeping their eye on the money.
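For reference, the arithmetic behind those uptime guarantees is simple enough to check in a few lines of Python, and it shows just how little room each extra nine leaves…

MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    allowed = MINUTES_PER_YEAR * (1 - availability)
    print("%.3f%% uptime -> %.1f minutes of downtime per year" % (availability * 100, allowed))
# 99.900% uptime -> 526.0 minutes of downtime per year
# 99.990% uptime -> 52.6 minutes of downtime per year
# 99.999% uptime -> 5.3 minutes of downtime per year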

But the silver lining here is that once virtualization in data centers becomes the norm, we will be set for a very long time in terms of data infrastructure. Very few, if any, additional major data centers will need to be built, and users can continue to send huge files across the web at will just as they do today. If you want to blame anyone for the energy waste in data centers, you have to point the finger squarely at consumers with extremely high demands. They’re the ones for whom these centers are built and they’re the ones who will bankrupt a data center should an outage major enough to affect their end of month metrics happen. This, by the way, includes us, the typical internet users, as well. Our e-mails, documents, videos, IM transcripts, and backups in case our computers break or get stolen all have to be housed somewhere, and all these wasteful data centers are where they end up. After all, the cloud really is just huge clusters of hard drives filled to the brim with stuff we may well have forgotten by now, alongside the e-mails we read last night and the Facebook posts we made last week…


[ image: cyborg hand and eye ]

Journalist and skeptic Steven Poole is breathing fire in his scathing review of the current crop of trendy pop neuroscience books, citing rampant cherry-picking, oversimplifications, and constant presentations of much-debated functions of the brain as having been settled with fMRI and the occasional experiment or two with supposedly definitive results. He goes a little too heavy on the style, ridiculing the clichés of pop neurology and abuse of the science to land corporate lecture gigs where executives eager to seem innovative want to try out the latest trend in management, and is a little too light on some of the scientific debates he touches, but overall his point is quite sound. We do not know enough about the brain to start writing casual manuals on how it works and how you can best get in touch with your inner emotional supercomputer. And since so much of the human mind is still an enigma, how can we even approach trying to build an artificial one as requested by the Singularitarians and those waiting for robot butlers and maids?

While working on the key part of my expansion of Hivemind — which I really need to start putting on GitHub and documenting for public comment — that question has been weighing heavily on my mind because this is basically what I’m building: a decentralized robot brain. But despite my passable knowledge of how operating systems, microprocessors, and code work, and a couple of years of psychology in college, I’m hardly a neuroscientist. How would I go about replicating the sheer complexity of a brain in silicon, stacks, and bytes? My answer? I’d take the easy way out and not even try. Evolution is a messy process involving living things that don’t stop to try to debug and optimize themselves, so it’s little wonder that the brain is a maze of neurons loosely organized by some very vague, basic rules, and is really, really difficult to unravel. It has the immense task of carrying fragments of memory to be reconstructed, consciousness, learned and instinctual responses, sensory processing and recognition, and even high level logic in one wet lump of metabolically vampiric tissue which has to work 24/7/365 for decades.

Computers, however, don’t have such taxing requirements. They can save what they need to a physical medium like spinning hard drives or SSDs, and they focus on carrying out just one or a handful of basic instructions at a time. With such a tolerant substrate, why would I want to set my sights on the equivalent of jumping into orbit when I can build something functional enough to serve as a brain for a heap of plastic, metal, and integrated circuitry? For the Hivemind toolkit, I used a structure representing a tree of related concepts set by a user to deal with higher level logic, sort of like how we learn to compartmentalize and categorize the concepts we know, and the same approach will be used in the spawn of Hivemind. Low-level implementation and recognition will also adopt the same pattern of detection and action explained in the paper. But that’s good for carrying out a few scripted actions or looping those actions. For a more nuanced and useful set of behaviors, I’m pursuing a different implementation built on a tool for organizing collections of synchronous and asynchronous monads invented by a team of computer scientists Microsoft imprisons in its dark lair under Mt. Rainier… I mean employs.

Here’s the basic idea. When a robot is called on to accomplish a task, we summon all the relevant concepts and their implementations as simple, specialized neural networks which extend from initial classification and recognition of stimuli to the appropriate reaction to said stimuli. That gives us just one fine-tuned neural network per concept. We associate the concepts with the tasks at hand, and put the implementations of the relevant concepts into a collection of actions waiting to fire off as scripted. Then, after the connection with the robot is established and it sends us its sensor data, we fire off the neural networks in the queue and beam back the appropriate commands in milliseconds. Each target and each task is its own distinct entity, in stark contrast to the overlaps we see in biological brains. Overlaps here come from the higher level logic used to tie concepts together rather than from connections between the artificial neurons, and alternatives can be loaded and calculated in parallel, ready to fire off as soon as we’ve made sense of what the robot reported back to us. And at this point we can even bring in other robots and establish future timelines for possible events by directing entire bots as the appendages of a decentralized brain.
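To make that a little more concrete, here’s a rough Python sketch of the flow I just described, with hypothetical names and trivial stand-ins where the real per-concept networks would go. It’s an illustration of the queue-of-concepts idea, not the actual Hivemind code…

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ConceptNet:
    # One small, specialized network: recognize a stimulus and map it to an action.
    name: str
    recognize: Callable[[dict], bool]   # does this sensor frame match the concept?
    act: Callable[[dict], str]          # command to beam back if it does

def build_queue(task: str, library: Dict[str, List[ConceptNet]]) -> List[ConceptNet]:
    # Pull the concept networks relevant to a task into an ordered action queue.
    return library.get(task, [])

def process_frame(frame: dict, queue: List[ConceptNet]) -> List[str]:
    # Fire each queued network against incoming sensor data and collect commands.
    return [net.act(frame) for net in queue if net.recognize(frame)]

# Hypothetical usage: an obstacle avoidance concept attached to a "patrol" task.
obstacle = ConceptNet("obstacle",
                      recognize=lambda f: f.get("range_cm", 999) < 30,
                      act=lambda f: "turn_left")
queue = build_queue("patrol", {"patrol": [obstacle]})
print(process_frame({"range_cm": 12}, queue))  # ['turn_left']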

Certainly, something like that has very little resemblance to what we generally think of when we imagine a brain because we’re used to the notion of a mind being a monolithic entity composed of tightly knit modules rather than a branching queue pulling together distinctly separate bits and pieces of data from distinct compartments. But it has the capacity for carrying out complex and nuanced behaviors, and it can talk to robots that can work with SOAP formatted messages. And that’s what we really need an AI to do, isn’t it? We want something that can make decisions, be aware of its environment, give us a way to teach it how to weave complex actions from a simple set of building blocks, and a way to interact with the outside world. Maybe forgoing a single, self-aware entity is a good way to make that happen and lay the groundwork for combining bigger and more elaborate systems into a single, cohesive whole sometime in the future. Or maybe, we could just keep it decentralized and let different instances communicate with each other, kind of like Skynet, but without that whole nuclear weapons and enslavement of humanity thing as it replicates via the web. Though to be up front, I should warn you that compiled, its key services are about 100 kilobytes so it could technically spread via a virus…


[ image: circuit boards ]

Once upon a time I wrote a post about the sacrifices in intelligence our rovers have to make to be able to travel to other worlds, and why these sacrifices are necessary. Basically, we can build very smart bots here on Earth because we can give them a big energy supply for faster, more complex, and more energy demanding computation. On Mars, however, that big energy supply becomes a big liability since it has to take away from a rover’s ability to move or from its overall mission time. I’m still pretty confident in my earlier assessment, but some stories spreading around pop sci blogs made me realize that there was an AI-hobbling factor I hadn’t addressed: cosmic rays. As rovers explore Mars, they’re bombarded with radiation that easily penetrates the red planet’s thin atmosphere. To give Curiosity the best possible tools to explore the Martian surface, it was given a very powerful setup, at least by spacecraft standards.

BAE Systems’ RAD750 chips provide it with two blazing 200 MHz processors and 256 MB of DRAM, as well as an entire 2 GB of flash memory. Again, this is blazing only in the world of space travel since these are pretty much the specs of a low end smartphone, and even that probably has a dual core 1 GHz CPU. But the low end smartphone probably can’t withstand a massive radioactive bombardment without going haywire. The problem is the DRAM, the memory the computer uses to keep all the things it needs to run. When hit by cosmic rays, it can suffer something called a bit flip. Ordinarily, for us, this is no big deal because the vast majority of the memory our devices use is taken up by some background process, usually one with enough temporary variables to absorb the hit before being cleared out of a register in a matter of nanoseconds. This means we either don’t care, or don’t notice, and that’s just fine for those rare cases when a stray particle flips a bit or two. Hell, we lose entire packets when we send them around the internet with certain protocols, and that’s a lot more than a bit, but life goes on.

For rovers on other worlds, this is a much, much bigger issue. Not only are bit flips a lot more frequent since the rovers are being showered with energetic particles, there’s a lot less margin for error since their setups are a lot more lean. Were a particle to cause the most significant bit to flip while a small array of bytes is telling the rover how to move, the consequences could be disastrous. The value 0x00 [00000000] could turn into 0x80 [10000000] and instead of telling the wheel motors to stop, the byte stream would give them the command to apply 50% power to each wheel, driving the rover into a ditch, or right off a cliff. And this is why the RAD750 chip is designed to tolerate only a single bit flip per year, or about two over the entire Curiosity mission. Were the scenario I just outlined to happen, the chip would auto-correct the stream to keep 0x00 as it was when assigned. Rovers go on their merry way, JPL is not living in fear of cosmic rays giving Curiosity a mind of its own, and we get great high-res pictures from the surface of another planet. Win, win, win, right?
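Here’s a toy Python illustration of that failure mode and of a simple parity check catching it. Real radiation-hardened memory uses proper error-correcting codes rather than a single parity bit, so treat this purely as a sketch of the idea…

def parity(byte):
    # Even parity: 1 if the byte has an odd number of set bits.
    return bin(byte).count("1") % 2

command = 0x00                    # "stop all wheel motors"
stored_parity = parity(command)

corrupted = command ^ 0x80        # cosmic ray flips the most significant bit
print("0x%02X -> 0x%02X" % (command, corrupted))               # 0x00 -> 0x80
print("flip detected:", parity(corrupted) != stored_parity)    # True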

Yes, but the auto-correction and the radiation hardening necessitate some tradeoffs. They make the chip more expensive, or make it consume a little more power, or slow down the CPU cycles, all of which could otherwise be used to make rovers smarter and more autonomous. Though dumbing them down a little is a small sacrifice for making sure they’re a lot less likely to randomly drive off a cliff, unless you have the budget to build a much bigger robot, launch it on a much more powerful rocket, and devise a way for it to land safely tens if not hundreds of millions of miles from home. Don’t get me wrong, Curiosity’s dual cores and an RTG will make it a lot smarter than previous rovers, but it’s hardly an e-Einstein, and unless we find a way to double or triple the size of our Martian rovers, or create artificial magnetospheres for our spacecraft, it’s going to be fairly close to the peak of the kind of intelligence we can get in an interplanetary robot for the next decade or so. Actually, considering that just testing and certifying a new radiation-hardened chip can take that long, that may be an optimistic assessment.

And this is why, ultimately, we have to go to other worlds ourselves if we want to do high impact science quickly and efficiently. Robots are safer, they’re cheaper, and they don’t want hazard pay, true. But ultimately, humans are going to be much better explorers than the rovers and probes they send. Not only do they have the necessary brainpower to deal with challenging alien environments without a 34 minute delay between actions, they also have the will and interest to try new things and fit in an experiment or two that can’t be crammed into a rover’s schedule but can teach us something new and exciting as well. And this is not to mention the medical benefits we’d reap from getting humans ready to walk on other worlds, and the possible wonders it could do for surgeries, physical therapy, and regenerative treatments as all these technologies and ideas are forced to come together, compete, and produce a roadmap that can be empirically tested and proven by a real mission…


Contrary to the gripes of many security types, your antivirus software is not useless. Were you to turn it off, many routine infections from contaminated websites, which nowadays are more likely to ask you to give to the poor than to pay for a live nude webcam show, would quickly turn your computer into a gold mine for a lazy identity thief armed with simple viruses. Really advanced and powerful malware using zero day exploits, however, will always elude it because that’s the nature of the arms race between virus writers and antivirus makers. Those with the means and motive attack systems and applications; the companies and researchers who discover a security breach either patch the vulnerability if possible, or add a new algorithm to look for the threat’s signature in the future, such as self-modifying files or local services suddenly trying to open an internet connection. And a piece of malware that slips by the antivirus and doesn’t get reported can work in silence for years, just like the widely reported cyberweapons Stuxnet and Flame did. To explain how these worms went unnoticed, both Ars Technica and Wired published a self-defensive missive by an antivirus company executive which basically boils down to an admission of defeat when it comes to proactively recognizing sophisticated malware.
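As a heavily simplified sketch of the kind of behavioral heuristics described above, flagging suspicious actions rather than known file signatures, something like this Python snippet captures the idea. The rule names and event fields are made up for illustration; real engines are vastly more sophisticated…

SUSPICIOUS_RULES = [
    ("self-modifying code", lambda e: e.get("writes_to_own_executable", False)),
    ("unexpected network use", lambda e: e.get("type") == "local_service" and e.get("opens_socket", False)),
]

def heuristic_flags(event):
    # Return the names of any heuristic rules the observed behavior trips.
    return [name for name, rule in SUSPICIOUS_RULES if rule(event)]

print(heuristic_flags({"type": "local_service", "opens_socket": True}))
# ['unexpected network use']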

Slightly longer version? Some of the most advanced cyberweapons work a lot like typical software and use a lot of the same tools, or use legitimate frameworks and packages included in most legitimate software as a launching pad for deploying hidden code designed to act in the sort of malicious ways an antivirus would flag as an attack, but executed in a way that circumvents the channels through which it would scan. So when Flame is installed, the antivirus checks its components, probably saying to itself "all right, we got what looks like a valid certificate, SQL, SSH, some files encrypted using a standard hashing algorithm… yeah, it all checks out, that’s probably a network monitoring tool of some sort." And herein lies the problem. Start blocking all these tools or preventing their installation and you’re going to cripple perfectly valid applications or make them very difficult to install, because every bit of them will have to be approved by the user. How does the user know which piece of software or which DLL is legitimate and which one is not? For the antivirus to help there, it would need to read the decompiled code and make judgments about which behaviors are safe to execute on your machine.

But having an antivirus suite decompile and check the code of every application you run for possible threats is not much of a solution, because the decisions it makes are only as good as the judgment of the programmers who wrote it, and because a lot of perfectly legitimate applications have potentially exploitable code in them; a rather unfortunate but very real fact of life. Remember when your antivirus asked you if a program you installed just a couple of minutes ago could access the internet or modify a registry key? Just imagine being faced with a dialog asking you to decide whether some potentially exploitable function call in one of your programs should be allowed to run or not, armed only with the following disassembly snippet to help you make a decision…

00000010 89 45 E4              mov    dword ptr [ebp-1Ch],eax
00000013 83 3D A4 14 9D 03 00  cmp    dword ptr ds:[039D14A4h],0
0000001a 74 05                 je     00000021
0000001c E8 5E 40 3D 76        call   763D407F

Certainly you can see why an antivirus suite that tries to predict malicious behavior, rather than simply watch for something suspicious starting to happen on your system, wouldn’t be practical. No user, no matter how advanced, wants to view computer-generated flowcharts and disassembly dumps before being able to run a piece of software, and nontechnical users confronted with something like the scary mess above may just turn their computers off and sob quietly as they imagine their machines crawling with viruses, worms, back doors for identity thieves looking for their banking information, and other nightmarish scenarios. Conspiracy theorist after conspiracy theorist would start posting such disassembly dumps to Prison Planet, Rense, and ATS, and portray them as proof that the Illuminati are spying on them through their computers. Unless we want to parse every function call and variable assignment, look into every nook and cranny of every bit of software we’ve ever installed, or write our own operating systems, browsers, and applications, never use the web, and shut off and physically disconnect all our modems, we’ll just have to accept that there will always be malware or spyware, and the best we can do is keep our systems patched and basic defenses running.
