

As long-time readers know, I’m a sucker for a good counter-intuitive think piece, and pretty much every professional blogger knows that to start a big debate and draw a crowd, you need a view way out of left field once in a while to mix things up. But the really big catch for posts like these, especially in science and tech, is that they need to be based on sound criticism and have logical consistency and flow. This is why Morozov’s rebellion against TED was spot on while the attempt at a shot across the bow of human spaceflight programs by Srikanth Saripalli in Future Tense is basically a train wreck of an argument. Unlike Morozov, Saripalli isn’t willing or able to explore or engage with the issues he brings up, and his grasp of some very basic technological concepts comes off as shoddy at best. He even veers off into Singularity territory to argue that future robotic probes will be smarter and uses transhumanism as an excuse to ground astronauts. The whole thing was so badly written that I was dead sure Saripalli must have been a journalist with exactly zero STEM credentials, but shockingly, he’s actually a robotics researcher at ASU.

Maybe he’s a terrific robotics person, but it certainly doesn’t come across in his piece because much of it is spent rehashing the very same claims from Kurzweil and his disciples that I have debated time and time again on this blog. From promises of digital immortality to artificial minds that can out-think all of humanity, Saripalli parrots it all with zero caveats or skepticism, then barrels right ahead to transhumanist efforts in life extension to declare the future of our bodies to be very much uncertain, and thus reason enough to replace astronauts with robots. Then, after seemingly making a case for cyborg space exploration, he never returns to the topic again, wandering off to the current buzzword in bleeding edge robotics, evolving robot networks. Yes, they’re very awesome and their potential is mind-blowing. But put light years between them and you’re going to have to radically rethink how they could be deployed and used. Though you know what, we’re getting ahead of ourselves here. Let’s come back to his sneaky misuse of transhumanism…

Given that the future of our bodies is uncertain, it makes more sense to send robots with intelligence to other planets and galaxies. Nature has built us a certain way—we are best-suited for our planet "Earth." Future space explorers will quickly realize that the human body is not the perfect machine for these environments. We will also want to explore other planets such as Venus and maybe even think about living on those planets. Rather than make those planets habitable, does it not make sense to purposefully evolve ourselves such that we are habitable in those worlds?

You know, this attitude is surprisingly common in Singularitarian and transhumanist circles, where there’s a widespread disdain for human spaceflight as simulations and beaming one’s mind across the universe on a laser in a hypothetical future are praised as the solutions to the issue of our biology’s limitations in space. The problem is that beaming yourself around the cosmos is not only biologically implausible, but the physics and orbital mechanics don’t work out either. So while it’s true that we actually should send cyborgs into space, something for which I argued in a few articles on Discovery News, we’re not going to send human minds to ready-made bodies, or disembodied brains à la Project Kronos to wander through space. Even less desirable is trying to evolve to live on an alien world, as if evolution can be directed on cue and we aren’t better off as the generalists we currently are. We want to upgrade our bodies to survive alien environments, but we don’t want to do it just so we get stuck on another planet all over again, which is what the question seems to propose. Ignoring this line of debate, Saripalli then lunges into robotics.

Several articles in popular press have argued that humans on the moon have produced far more scientific data than the robots on Mars. While this is true, the robots that have been used till now are not at all "autonomous" or "intelligent" in any sense. […] Indeed, we are very far from having autonomous robots on planetary missions, but such machines are being built in university labs every day. Robot Magellans (with scientific skills to boot) could be here long before colonists take off for Mars.

There are two problems with this train of thought. Powerful, intelligent robots are extremely hard to build when you’re going to send them to other planets because physics is the universe’s Buzz Killington when it comes to boldly going into the final frontier. It comes down primarily to weight and power placing some very harsh limitations on how smart our machines can be. I can think of ways to make them much smarter, hypothetically speaking, but all of them involve humans and a lunar or orbital base with giant clean rooms and heavily shielded supercomputers. And while I’m not a gambling man beyond playing with a few bucks in Vegas between shows or attractions, I’d be willing to bet that even the smarter machines we’ll build in the next half century will not totally eliminate the need for human guidance, strategy, and corrections. Our robots will be our trusted help and we’ll use them to do jobs we can’t, but they’ll in no way replace astronauts, just make a very tough job easier and allow us to cram even more science into a mission. But Saripalli plays dirty when it comes to astronauts, summoning politics to rid the space program of humans…

Contrary to popular belief, there never has been a groundswell of popular support from the general public for the space program. Even during the Apollo era, more people were against the space program than for it. Getting robots into space costs a lot less than humans and is safer —so we can keep the space program going without creating budgetary battles.

Yes, it’s true that despite today’s near-sacred status of the Apollo missions, back when you could turn on your TV and see humans walking on another world, people just wanted the government to beat those commie bastards and go home. This is what killed the lunar program and future plans for the launch stack, and arguably, what ails NASA to this day. However, you can’t argue that space probes don’t face the scorn of politicians when budgets are being decided since they pretty much loathe all science spending as wasteful, and despite singing praises to science and technology, much of the public doesn’t understand the people who do science or engineering in any way, shape, or form, and really doesn’t care to. Take a quick look at all the snide dismissals of Curiosity as a colossal waste of $2.5 billion and tell me with a straight face that you’re not going to get budgetary battles by sending robots instead of humans. Of course none of this can get in the way of Saripalli’s rosy view of a galaxy buzzing with our networked robotics, along with a huge flop that makes me wonder if he actually understands distributed computing.

While NASA is interested in sending big missions with large robots to accomplish tasks, I believe future robots will be smaller, “distributed,” and much cheaper. To understand this, let us look at the current computing environment: We have moved from supercomputers to using distributed computing; from large monolithic data warehouses to saving data in the cloud; from using laptops to tablets and our smartphones.

All right, let’s stop right there for a minute. We did not go from large monolithic data warehouses to saving data in the cloud. We went from large monolithic data warehouses to even larger data warehouses that are basically a modern riff on mainframes. As explained before, the cloud isn’t magic, it’s just a huge set of hard drives in enormous buildings housing the modern equivalents of what mainframes were originally developed to do at a much higher level of complexity. To say that the cloud is different from a data warehouse is like saying that we moved from penicillin to antibiotics. Maybe he means something completely different than what came out, but since this isn’t a piece from a professional blogger trying to submit five articles a day, he probably wrote it, proofread it, and reviewed it multiple times before submitting it, and had plenty of chances to fix this sort of major error. Unfortunately, the continuation of his thought uses this factually incorrect assertion as the linchpin for his vision of robotic space exploration, which just makes it worse.

The future of space exploration is going to be the same—we will transition from large, heavy robots and satellites to “nanosats” and small, networked robots. We will use hundreds or thousands of cheap, small "sensor networks" that can be deployed on planetary bodies. These will form a self-organizing network that can quickly explore areas of interest and also organize themselves into larger machines that can mine metals or develop new vehicles for future exploration.

Let’s get something straight here: people at NASA are pretty damn smart. They prefer fairly big missions because they’re easier to power, easier to coordinate than many small ones, and can do more science when they reach their destinations. Thousands of tiny bots means very limited power supplies to instruments and many expensive pings between them. Factor in the distances involved in space travel and you’ll spend most of your time waiting to hear back from other bots, while a large, integrated system already got the job done. These are not things that will improve with new technology. There are hard limits on how small logic gates can be and how fast lasers and radio signals can travel, and changing these limits would require a different universe rather than a different manufacturing process or communication technique. It only really makes sense to distribute these robot networks across a single planetary body overseen by humans who have had their bodies modified to help deal with the alien environment. And there are reasons beyond efficiency for sending humans into space on a regular basis.
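To put rough numbers on those waits, here’s a quick back-of-the-envelope sketch; the distances are illustrative ballpark figures on my part, not mission data:

```python
# Back-of-the-envelope light-delay calculator: even at the speed of light,
# every "ping" between far-flung nodes carries an unavoidable latency floor.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_s(distance_km: float) -> float:
    """Minimum one-way signal delay over a given distance, in seconds."""
    return distance_km / C_KM_PER_S

def round_trip_min(distance_km: float) -> float:
    """Minimum request/response round trip, in minutes."""
    return 2 * one_way_delay_s(distance_km) / 60

# Illustrative distances (rough figures, not mission ephemerides):
EARTH_MARS_CLOSE_KM = 54.6e6   # roughly the closest Earth-Mars approach
EARTH_MARS_FAR_KM = 401e6      # roughly the farthest separation

print(f"Mars, best case:  {round_trip_min(EARTH_MARS_CLOSE_KM):.1f} min round trip")
print(f"Mars, worst case: {round_trip_min(EARTH_MARS_FAR_KM):.1f} min round trip")
```

A single question-and-answer cycle across an interplanetary gap eats minutes at best and most of an hour at worst, and a swarm that needs many such exchanges to coordinate stacks those delays on every step.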

Humans are natural explorers; our minds are wired to wonder from birth. If we’re going to try and explore the universe, we need to do more than send our robotic proxies while we stay on Earth. And as was mentioned a few times in the comments to Saripalli’s post, there’s a huge psychological effect to going into space. Seeing the entire Earth as a blue marble floating in the void makes a lot of astronauts extremely aware of just how mindlessly, ignorantly petty some 95% of the stuff we bicker about with no end in sight really is. We can’t expect to end political battles just by flying politicians to space, battles about things that seem huge to us here but mean nothing in the grand scheme of things when we take into account where and who we actually are. But if we become more and more involved in space travel, we’ll get a much broader perspective. We’re one species, on one planet, wasting lifetimes arguing about magic sky people and their wishes for us, and on all sorts of petty spats about what is and isn’t ours on a tiny blue ball spinning in space. And when we finally let that sink in, maybe we’ll devote a little more time to something far more important, like advancing ourselves. Sending robots to take our place in space only delays that.

[ illustration by Ian Wilding ]



A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike another human, robots are not looking for companionship, they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans and are being imparted human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment of iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of iDollators as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as laws are concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting any legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or illegalized prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that they will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses and very seldom the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there might well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, they’ll be just our property.

[ illustration by Michael O ]



Stop me if you’ve heard any of this before. As computers keep getting faster and more powerful and robots keep advancing at a breakneck pace, most human jobs will be obsolete. But instead of simply being pink-slipped, humans will get brand new jobs which pay better and give them a lot of free time to enjoy the products of our civilization’s robotic workforce, create, and invent. It’s a futuristic dream that’s been around for almost a century in one form or another, and it has been given an update in the latest issue of Wired. Robots will take our jobs and we should welcome it because we’ll eliminate grunt work in favor of more creative pursuits, say today’s tech prophets, and in a way they’re right. Automation is one of the biggest reasons why a lot of people can’t go out and get jobs that once used to be plentiful and why companies are bringing in more revenue with far fewer workers. Machines have effectively eliminated millions of jobs.

When we get to the second part of this techno-utopian prediction, however, things aren’t exactly as rosy. Yes, new and higher paying jobs have emerged, especially in IT, but they’re closed to a lot of people who simply don’t have the skills to do these new jobs or for whom no position exists in their geographical vicinity. Automation doesn’t just mean that humans get bumped up from an obsolete job, it means there are fewer jobs overall for humans. And when it comes to positions in which dealing with reams of paperwork and mundane office tasks is the order of the day, having computers and robots in them eliminates internships college students or young grads can use to build up a resume and get their feet in the door. They’re now stuck in a Catch-22 where they’re unable to get experience and more education puts them further behind thanks to a machine. I’m going to go out on a limb and say that this is not what the techno-utopians had in mind.

Of course humans will have to move up into more abstract and creative jobs where robots have no hope of ever competing with them, otherwise the economy will collapse as automated factory after automated factory churns out trillions of dollars worth of goods that no one can buy since some 70% of the population no longer has a job. And at 70% unemployment, every last horrible possibility that sends societal collapse theory survivalists screaming themselves awake at night has a high enough chance of happening that yours truly would also start seriously considering taking up gun hoarding and food stockpiling as really good hobbies. Basically, failing to adjust to the growing cybernetic sector of the workforce simply isn’t an option. Companies, no matter how multinational, couldn’t eliminate so many positions with no replacements in sight without starting to feel the economic pain as they hit maximum market saturation and can go no further because no one can buy their wares.

But all this good news aside, just because we’ll have time to adjust to an ever more automated economy and feel the need to do so, doesn’t mean that the transition will be easy or that no one will be left behind. Without a coordinated effort by wealthy nations to change the incentives they give their companies and educational institutions, we’ll be forced to ride out a series of massive recessions in which millions of jobs are shed, relatively few are replaced, and the job markets are slowly rebuilt around new careers because a large chunk of the ones lost are now handed off to machines or made obsolete by an industry’s contraction after the crisis. And this means that when facing the machine takeover of the economy, we have two realistic choices. The first is to adapt by taking action now and bringing education and economic incentives in line with what the postindustrial markets are likely to become. The second is to try and ride out the coming storm, adapting in a very economically painful ad hoc manner through cyclical recessions. Contrary to what we’re being told, the new, post-machine jobs won’t just naturally appear on their own…



As odd as it may have sounded, I’ve said multiple times that the web did not change human sexuality nearly as much as we’re often told, and much of the novelty is really just well forgotten antiquity, ranging from Roman orgies to the personal and highly publicized perversions of the Marquis de Sade. And aside from making it easier to find and talk to our fellow perverts, not a whole lot has changed about our sexual appetites, despite warnings of runaway pornography addiction from angry conservatives and alarms about men quickly becoming more sexually deviant from borderline misandrists. In fact, I’ll even bet you that transhumanist sexual fantasies of computer-assisted mind-melding are an extension of 1960s New Ageisms in which quantum vibrations, along with large quantities of drugs and meditation, have been substituted with machine-neuron interfaces and very big leaps in some very hazy new areas of computer science. But all this said, I’ll grant you something unique when it comes to one fantasy of futurists, known as ASFR, a fetish for humanoid robots, often custom built to turn one’s wildest fantasies into reality and trained to be the perfect object of arousal. And according to new literature looking at human and computer interaction, that market could be very lucrative for a lot of people…

One of the more recent summations of how this might work comes from Ian Yeoman and Michelle Mars’ scenario for a robot brothel that would substitute advanced versions of the Real Dolls we have today for flesh and blood women, a scenario that could put a real dent in the amount of human trafficking, misery, and woe that’s inflicted on many sex workers shuttled around the world to staff illegal establishments run by organized crime groups. No need to torture a human and subject her to countless risks when one can just buy a robot and sanitize it after every use, then simply pay for the maintenance and amortized depreciation. And the manufacturers would certainly make plenty of male models too because, contrary to popular opinion, women do pay for sex to ensure they’ll get the experience they want, and you will be hard pressed to find a more certain return on that investment than a robot. Now, you could still imagine an illegal industry trading in real humans for added kink, but when a much safer, legal, and humane option is within easy reach, it would more likely become a niche market. Try to outlaw robotic call girls and boys and you’d have to bring a case which would put any sex toy under threat of swift illegalization and create an uproar from voters. As for the robots themselves, they’re just doing what they will be programmed to do, and nothing you can do or say will hurt them since they’ll lack real emotions.

Not for long though, says David Levy in his 2007 book, which declares that with enough advancement in AI, a whole string of human-robot relationships and even marriages will take off. From a psychological standpoint, his thesis is sound. There are numerous people out there who crave attention from other humans but simply don’t know how to get it, using Real Dolls and products like them as not only sexual but emotional surrogates, which actually serves to make them even more befuddled by the seeming irrationality of those they sometimes call "organic partners," creating a cycle of co-dependence on their synthetic substitutes. Add some AI that will make those machines more animated, give them perceived moods and ideas, and voila! Why even look for a bothersome, unpredictable, hormonally driven organic partner when a controllable synthetic one is right here and could be fine tuned to be exactly what you’d like? And if you spend years taking care of this machine, why not somehow commemorate the bond just like the organics do? Well, that’s where we enter the legal realm’s difficulties for this scenario. You won’t be able to marry a robot for the same reason you can’t marry toasters or cell phones. Even AI-enabled machines are not entities with free will that can give their consent. If you write a boyfriend or girlfriend routine, of course the robot will consent to whatever you want. It’s in the code.
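To spell out just how hollow that programmed consent is, here’s a deliberately silly sketch; the class and method names are mine, purely for illustration:

```python
# A deliberately silly sketch of why programmed "consent" is meaningless:
# whatever the companion routine is asked, the answer was decided by the
# person who wrote, or bought and configured, the code.
class CompanionBot:
    def consents_to(self, request: str) -> bool:
        # No deliberation, no preferences, no free will:
        # the return value is hardcoded by the owner.
        return True

bot = CompanionBot()
print(bot.consents_to("marriage"))  # "consent" that can't be withheld
```

A yes that cannot possibly be a no is not consent in any sense a court could recognize.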

Also, what about the courts’ idea of whether the human can legitimately even consider marrying or being in an emotional relationship with a robot? It would be one thing if humans didn’t seem to show a preference for the company of other humans, but we do. And as we’ve seen, those who may be the most likely to treat a robot as we would treat a significant other could well be using robots as a substitute for human contact. Would a judge consider someone who finds himself — because let’s be honest, it’s usually males who experience this — unable to relate to girls or women around him and turns to inanimate objects for emotional and sexual gratification, as mentally fit to have a legal relationship with any entity other than another person? On the other side of the argument, I could see activists making the claim that we can’t force someone to conform to whatever the social custom is at the time because that’s discriminatory, and argue that a sufficiently engaging AI should have personhood and be allowed to give consent for things like marriage. But these are not going to be easy arguments to make, and if there ever are official human-robot marriages or a big explosion in human-robot relationships, expect there to be a lot of acrimony about it in the media. There won’t be smooth transitions, and any incident in which human users of sex bots get injured or an AI goes haywire will be agonizingly dissected during the debates.


Several months ago, Slate’s tech writer Farhad Manjoo wrote a six-part series detailing how you will lose your job to a machine in the near future and how absolutely no one is safe from being automated away. He uses a fairly simple formula to make his point. First, he describes a system doing a tedious pattern recognition task or a meta-analysis well enough to be used in the real world. Second, he describes how a profession which is using these systems is glad that the tedium has been transferred to a machine and knows that entry level job opportunities will shrink in favor of this system and its successors. Third, and finally, he describes how any sense of safety is just wishful thinking and pretty much everyone but a few experts can be automated away as the computers do it all with the absolute minimum of human supervision. He even targets scientists, arguing that machines are becoming oracles of science, spitting out complex equations underlying the laws of physics and biology that no human will understand. Forget a nuclear holocaust by Skynet, machines will just take your job without the decency to exterminate you afterwards. In The Animatrix, this is how the Great Human/Machine war began…

Sadly, there is truth in Manjoo’s conclusion that many jobs will go away for good thanks to automation. As many liberal political activists and the OWS movement point out, productivity and corporate profits are booming, but while they often use this as a starting point to blame outsourcing and bonus-saving layoffs for a lack of jobs, they forget the role of automation. It’s not something we think about often, and it’s not easy to make slogans to shame robots into quitting. You can fault an executive for laying off a thousand people to meet quarterly goals or deciding that hiring an American worker is too expensive and going overseas, but the uncomfortable reality is that a lot of companies are about as lean as they’re going to get after years of layoffs and belt-tightening, and a number of smaller companies that used to outsource have been slowly weaning themselves off a reliance on overseas factories, citing increased labor costs, blatant theft of intellectual property aided and abetted by local bureaucrats, quality issues, and customs troubles. So how is productivity still up? Automation. How could you fault a company for increasing productivity not by simply getting rid of a job for questionable reasons, but by handing it to an automated tool? Of course the takeaway here is that some jobs will be completely unnecessary.

But just how many jobs will go the way of the dodo? Extending Manjoo’s formula, we could even argue that one day not even programmers will be needed, only architects who run code generation tools as, in an ironic twist, those who automated away tens of thousands of jobs now automate themselves away. But the funny thing is that this approach has been tried before in IT, and it did not end well. Model Driven Architecture, or MDA, attempted to create a kind of factory line for software where many steps could be fully automated, including generation of code. But lack of standards, incompatibilities with existing tools, and the many big and little issues in trying to turn an abstract model into a complete piece of software made the end products unmanageable. Why? While computers are great at repetitive tasks and crunching immense amounts of data, which is what they’re made to do, they’re not good at design or nuance. In programming, how does a machine know that object X needed to be encapsulated? Or that it could use less code to get the same behavior, meaning less code to test? You need humans who know how to write code and define the rules to step in, roll up their sleeves, and work on a creative problem like this. The MDA scholars tried to counter this issue by creating ever more abstract ways of designing logical models, but abstraction doesn’t always yield lean, mean, performant applications.
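To make the encapsulation point concrete, here’s a contrived sketch, not output from any real MDA toolchain, of the kind of nuance a template-driven generator misses and a human catches:

```python
# A contrived example of generator-style output: one field-plus-validation
# block stamped out per model attribute, duplicating the same bounds check.
class GeneratedRover:
    def set_speed(self, v):
        if v < 0 or v > 100:
            raise ValueError("speed out of range")
        self._speed = v

    def set_power(self, v):
        if v < 0 or v > 100:
            raise ValueError("power out of range")
        self._power = v

# A human notices the repetition and encapsulates the check once, leaving
# less code to read and test for exactly the same observable behavior.
def bounded(lo, hi):
    def check(v, name):
        if not lo <= v <= hi:
            raise ValueError(f"{name} out of range")
        return v
    return check

class RefactoredRover:
    _check = staticmethod(bounded(0, 100))

    def set_speed(self, v):
        self._speed = self._check(v, "speed")

    def set_power(self, v):
        self._power = self._check(v, "power")
```

A generator can only stamp out what its templates contain; spotting that two stamped-out blocks are really one idea is exactly the design judgment it lacks.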

So here’s what automation is good at doing. The mundane stuff. If you change the rules or come up with new ideas for how to do something, those new rules and ideas have to be implemented by humans, and systems need to be upgraded to deal with new processes. Try to automate away an entire field and you’ll end up with a dearth of innovation and software unable to cope with new challenges. One wonders why Manjoo decided, in his ruthless musings on whose job can be eliminated and replaced by a machine, that help from robots and machines means anything with concrete deliverables can be done by computers. In the last part of the series, we see just why he made up his mind about the economic cyber-takeover in his quote about a hypothesis-generating prototype which uses a genetic algorithm to come up with descriptive equations for scientific data…

Lipson and Schmidt recently worked with Gurol Suel, a molecular biophysicist at the University of Texas Southwestern Medical Center, to look at the dynamics of a bacterium cell. Given data about several different biological functions within a cell, the computer did something mind-blowing. “We found this really beautiful, elegant equation that described how the cell worked, and that tended to hold up over all of our new experiments,” Schmidt says. There was only one problem: the humans had no idea why the equation worked, or what underlying scientific principle it suggested. It was, Schmidt says, as if they’d consulted an oracle.

Actually it’s more like they fed a computer reams of data, had it try to find relationships, and hit on a lucky few guesses that worked out. Happy that it worked, they’re now trying to have the software produce more data as it comes up with its guesses until it hits something that looks right. Far from being an oracle, the software in question is just a scientific correlation finder. What Manjoo is doing here is taking a few successful attempts and presenting them as the norm with barely a mention of the several thousand erroneous guesses made in the process, much like proponents of psychics and astrologers focus only on the "correct" predictions without so much as acknowledging the overwhelming error rate. And this is not to mention the slew of other big issues with a computer doing science. Yes, technology will spread even farther, and yes, there will be many jobs lost to ongoing automation. But presenting this fact while portraying the technology as transcending the humans who built it when it does no such thing, and shedding tear after tear for the soon-to-be laid off or the never-to-be-hired without taking the opportunity to explain that this is exactly why we need to invest a lot more into research, development, and STEM disciplines, makes a potentially interesting look at the future of a post-industrial economy, one which asks profound questions, fall far, far short of its potential.
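For a sense of what such a correlation finder does under the hood, here’s a deliberately tiny Python sketch with every detail invented for illustration: a population of random guesses at the coefficients of a line is scored against toy data, the best guesses survive, and mutated copies of them fill the next generation. No oracle, just brute-force guess-and-check.

```python
import random

random.seed(0)

# Toy "correlation finder": evolve the coefficients (a, b) of y = a*x + b
# to fit synthetic data generated from y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

def error(individual):
    a, b = individual
    return sum((a * x + b - y) ** 2 for x, y in data)

# Start from 50 completely random guesses.
population = [(random.uniform(-5, 5), random.uniform(-5, 5))
              for _ in range(50)]
for _ in range(200):
    population.sort(key=error)
    survivors = population[:10]           # selection: keep the best guesses
    population = survivors + [
        (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))  # mutation
        for a, b in random.choices(survivors, k=40)
    ]

best = min(population, key=error)
```

Scale the same loop up to whole expression trees instead of two coefficients and you have the skeleton of the hypothesis-generating software, several thousand wrong guesses included.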

[ illustration by Paul Hostetker ]


For all their endurance and toughness, our vaunted Martian rovers suffer from a major handicap that makes a typical mission far less effective than we want it to be. In all their time on Mars, Spirit and Opportunity covered less than 20 miles combined. The current record for the longest distance covered in one day? Several hundred meters. You can cover that in ten minutes at a leisurely pace. Granted, you’re on Earth and have two feet selected by evolution for optimal locomotion, while the rovers are on Mars and have to be driven by remote control, with every rock, fissure, crevice, and sand trap in their way analyzed and accounted for before a move command is issued, since getting a rover stuck hundreds of millions of miles away is a serious problem. But isn’t there anything we could do to make the robots smarter? Can we make them more proactive when they land so far away we can’t control them in real time? Well, we could make them smarter, but that will cost us, both in expense and resources, since they’ll have to think and keep on thinking while they work…

Technically, we could do what a lot of cyberneticists do and design artificial neural networks for our rovers and probes, treating the various sensors as input neurons and the motors as output neurons. We simulate all the environments virtually and train the networks using backpropagation. Then, when encountering certain combinations of sensory readings, these artificial neurons transmit signals to the motors and the machine does what it should do in that situation. If we can interrupt ongoing processes to monitor new stimuli, we could even allow the machines to cope with unexpected dangers. Let’s say we have a work mode and an alert mode. The work mode is endowed with the ability to pursue objects of interest, while the alert mode looks out for stimuli indicating that something harmful may be coming. So when the work mode finds a rock to drill, another simultaneous thread opens and the alert mode starts scanning the environment. Should a wheel slip or the wind pick up, the alerts go out and the rover stops to reevaluate its options. Sounds doable, right? And it is. But unfortunately, there’s a catch, and that catch is the energy required to run all this processing and act on its results.
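As a rough sketch of the idea, here’s the input-to-output wiring writ very small: a single artificial neuron in Python, trained by gradient descent to map two hazard sensors to a stop signal. All the sensor readings and thresholds are made up for illustration.

```python
import math
import random

random.seed(1)

# One artificial neuron mapping two hazard sensors (say, tilt and wind)
# to a single "stop" signal. All training values are invented.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulated readings: stop (1.0) whenever either sensor runs high.
samples = [((0.1, 0.0), 0.0), ((0.9, 0.1), 1.0),
           ((0.2, 0.8), 1.0), ((0.0, 0.1), 0.0)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
rate = 0.5
for _ in range(5000):
    for (s1, s2), target in samples:
        out = sigmoid(w[0] * s1 + w[1] * s2 + bias)
        grad = (out - target) * out * (1.0 - out)  # delta rule
        w[0] -= rate * grad * s1
        w[1] -= rate * grad * s2
        bias -= rate * grad

def stop_signal(s1, s2):
    # Anything above 0.5 would tell the rover to halt and reevaluate.
    return sigmoid(w[0] * s1 + w[1] * s2 + bias)
```

A real alert mode would be a full network over dozens of sensors running on its own thread, but the core loop, weighted sums corrected against simulated outcomes, is the same.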

Brainpower is expensive from an energy standpoint. There’s a reason why our brain eats up a fifth of our total energy budget; its processes are very intensive and they continue non-stop. Any intelligent machine will have to deal with a very similar trade-off and allocate enough memory and energy to interact with its environment in the absence of human instruction. That means either less energy for everything else, or a rover that has to come with a bigger energy source. The aforementioned MER rovers generated only 140 watts at the peak of their operational capacity to power hardware built around a 20 MHz CPU and 128 MB of RAM. With this puny energy budget, forget about running anything that takes even a little processing oomph or supports multithreading. With a no-frills operating system and a lot of very creative programming, one could imagine running a robust artificial neural network on a device comparable to an early-generation smartphone, something with a 200 MHz CPU and somewhere around 256 MB of RAM. But running something like that nonstop can easily soak up much of the energy generated by a Mars rover, and when you’re on the same energy budget as a household light bulb, this kind of constant, intensive power consumption quickly becomes a very, very big deal.
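Some back-of-the-envelope arithmetic shows why. All the figures below except the 140 watt peak are assumptions picked purely for illustration, not mission specs:

```python
# Illustrative energy budget; only the 140 W peak figure comes from the
# rovers, the rest are assumed round numbers.
peak_power_w = 140        # rough MER-class peak generation
cpu_always_on_w = 10      # hypothetical draw of a brainier processor
sunlit_hours = 6          # assumed hours of near-peak generation per sol

energy_generated_wh = peak_power_w * sunlit_hours   # 840 Wh per sol
energy_cpu_wh = cpu_always_on_w * 24                # thinking never stops
cpu_share = energy_cpu_wh / energy_generated_wh     # ~0.29
```

Under these assumptions, a constant 10 watts of "thought" eats close to a third of everything the panels produce in a day, before a single wheel turns or an instrument powers up.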

Hold on though, you might object, why do we need a beefier CPU? Can’t we just link multiple small ones for a boost in processing capacity? Or, come to think of it, why bother with extra processing capacity at all? Well, since a rover has certain calculations and checks it constantly needs to make, you need to give those tasks the time to do what they need to do. Likewise, you need to keep processing data from your sensors to feed the neural net in the background and handle the actual calculations coming from it. Detecting threats in real time with what would’ve been a state-of-the-art system in the 1980s seems like a tall order, especially if you expect your rover to actually react to them rather than plow onwards as the alarms go off in its robotic head, resigned to its fate, whatever it may be. On top of that, just trying to run something like an artificial neural network while performing other functions requires overhead to keep the computations separate, much less actually having the neural net command the rest of the rover. Of course, there could be something I’m missing here and there may be a way to run an artificial neural network with such a light footprint that it could be maintained on a much leaner system than I outlined. But it seems very unlikely that if the bare bones systems used in today’s rovers could be made to run a complex cognitive routine and act on its decisions, someone wouldn’t already be doing just that.


While robots aren’t yet conquering the world and won’t be anytime soon, they’re finally learning to walk, and in the near future, the kind of bipedal locomotion that’s a major part of what makes humanoid robots such an enormous engineering and maintenance challenge may get a lot easier. And not only are they learning how to walk, but we’re making them learn to walk like we do, through trial and error. Just as you don’t pause to do several million calculations before each step but simply let your motor neurons guide your muscles through thick synapse connections developed over a lifetime of walking, neither do cyberneticist Josh Bongard’s machines. Instead of all that tedious computation, their algorithms track down the optimal set of movements for all their joints and appendages, a set of movements that can simply be repeated for that particular robot whenever it has to move at the same speed. The same algorithms could probably be applied to teaching it how to run or jump as well, though a faster moving robot needs very strong joints and very powerful motors to withstand the repeated impacts of hitting the ground with all its weight as it makes its way forward.

So how is it done? Bongard is actually applying his work on self-discovering robots, robots that discover how they’re put together and try to learn how to move regardless of how they’re altered, to new morphologies and designs. Besides just figuring out how to move, the robots in his simulations and lab are also working on balancing themselves and finding the optimal walk cycles for their designs. Not only would this save valuable computing overhead when the machine is in action, it also addresses a very important point in programming robots. Programmers can use drivers and DLLs, collections of algorithms and logic they include in their code, and set ranges of motion themselves. However, without knowing the exact weight distribution of the machine at every step and the exact power of each rotor and actuator, as well as how each affects balance, the robot would very likely fall when it tries to take its first step. One of the solutions tried in early robotics was to cram as many sensors as possible into the machine and write complex logic to keep it moving. The one proposed by Bongard is far more elegant and lets the robot figure it out for you. After all, it’s faster at computation, and in the time it takes you to try ten walking routines, it can try tens of thousands.

But wait, if the robot is figuring it all out for you, what else could it figure out? Well, not much actually. From the paper detailing the mechanics of the learning process, we can see that each sensor in the robot is assigned to an artificial motor neuron object which in turn is connected to every other motor neuron object like it. Then, the robot is given the parameters optimized for it in a simulation and a squashing function, an equation that bounds each neuron’s output so the errors the neuron objects inevitably make can be brought closer in line with the wanted result over as many iterations as necessary. And there’s more. It turns out that to teach a machine how to stand up, it’s actually very beneficial to get it crawling like a snake first, then hobbling spread-legged like a lizard, and only then get it to stand up, building on each step because the basic movements forward and on legs are now computed and ready to apply to a new body type. Snakes can’t just fall over, and lizards with widely positioned legs are quite anatomically stable. Figure out how they move, posits Bongard, and you’re two thirds of the way to walking freely and balancing yourself. According to him, he’s following the evolutionary path we see in the fossil record and letting his robots evolve the same way animals did in the primeval past.
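Structurally, that kind of controller can be sketched in a few lines of Python. The wiring is the general recipe described above (every motor neuron reads its sensor plus every other neuron, and a squashing function bounds the result), while the weights and inputs here are placeholders rather than optimized values:

```python
import math

# Structural sketch: each motor neuron sums its own sensor input plus
# weighted signals from every other neuron, and a squashing function
# (tanh here) bounds the result. Weights below are placeholders.
def step(activations, sensors, weights):
    new = []
    for i in range(len(activations)):
        total = sensors[i] + sum(weights[i][j] * a
                                 for j, a in enumerate(activations))
        new.append(math.tanh(total))  # squashed into (-1, 1), always
    return new
```

Whatever values the weights settle on after optimization, the squashing function guarantees no motor command ever runs away to infinity, which is the whole point of bounding each neuron’s output.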

That means we shouldn’t be designing robots manually, or basing them on a rough idea of what we’re seeing in the natural world through an experiment or two and incorporating that feature into new machines, but allowing the machines to evolve from scratch in a simulator. After thousands of virtual failures, they’ll eventually master the task and give us a very good idea of an optimal layout and morphology. Of course, one very important note to keep in mind is that the resulting machines will only be good at that task and very little else. Unlike a natural organism, an evolving robot doesn’t have to be good at almost everything to survive and doesn’t need to adapt to a wide variety of environments and threats. It will be the most efficient and well adapted robot for your task and will perform it extremely well with enough trial and error as it learns, but outside that task, it will be virtually useless. Now, if you want a complex, rugged robot able to tackle a complicated environment, you’re looking at a much more sophisticated set of simulations, with multiple neural networks arranged into "cortices" and run through far more rigorous virtual conditions requiring years and years of planning to create.
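The evolve-in-a-simulator loop itself is simple enough to sketch. Here the "simulator" is a made-up fitness function that rewards one particular stride amplitude and frequency, standing in for the physics engine a real setup would use; everything in this sketch is invented for illustration:

```python
import math
import random

random.seed(2)

# Toy gait evolution: "fitness" is the distance a one-joint walker
# covers, modeled by an invented function that happens to peak at
# stride amplitude 1.0 and frequency 2.0.
def distance(amplitude, frequency):
    return math.exp(-((amplitude - 1.0) ** 2 + (frequency - 2.0) ** 2))

population = [(random.uniform(0, 3), random.uniform(0, 3))
              for _ in range(30)]
for _ in range(100):
    population.sort(key=lambda gait: -distance(*gait))
    survivors = population[:5]            # keep the best walkers
    population = survivors + [
        (a + random.gauss(0, 0.05), f + random.gauss(0, 0.05))
        for a, f in random.choices(survivors, k=25)
    ]

champion = max(population, key=lambda gait: distance(*gait))
```

Swap the toy fitness function for an actual physics simulation and the same keep-the-best, mutate-the-rest loop is what grinds through those thousands of virtual failures.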

See: Bongard, J. (2011). Morphological change in machines accelerates the evolution of robust behavior. Proceedings of the National Academy of Sciences. PMID: 21220304

Bongard, J., Zykov, V., & Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314 (5802), 1118-1121. DOI: 10.1126/science.1133687

[ illustration by Chester Chien ]


Here’s some good news on the tech skepticism front. Popular science writers are no longer taking seriously the idea of human-level AI going rogue and wiping out humanity at some indeterminate point in the future. Now the bad news. They’re still hyping the threats from military machines, threats which, while real, aren’t quite as severe as they’re being made out to be and are pretty much always the result of bugs in the code. We’re turning to robots for more and more on the battlefield, and those robots can and will get smarter, reacting to threats faster than humans and attacking their targets with greater efficiency than even computer-aided pilots. Being expendable, they’re a lot less emotionally and politically expensive to lose than humans, so the more robots we build, the less we will have to get involved in the actual fighting, and the more damage we can do remotely. However, machines are indiscriminate and even the best programmers make mistakes. There will be accidents, and civilians can still be harmed during a shootout between enemy forces and a squad of robots. And that worries tech writers and experts in AI, especially because so far, there’s no plan for coordinating current and future killer bots.

Today, there are few places where we can get a better glimpse of the future than in military aviation, where the rumor is that the last fighter pilot has already been born. In less than half a century, most fighter and bomber operators will be replaced by smaller, stealthy jets which fly themselves to their targets much faster than they could with a human on board, and which carry a greater payload since they’re not weighed down with redundant, space-consuming, and heavy life support systems. In experimental flights or simulations, this sounds great, but in the real world, how will they operate in groups? How will they communicate without human handlers or decide how to allocate targets among themselves? When they’re screaming towards a target at Mach 2.5 and readying to drop a bomb, how long should humans have to intervene? There’s no guideline for this, and considering that the military usually seems to have a 30 page manual spelling out every step for, oh, just about everything, that may seem a little disconcerting. However, all this technology is still brand new and not exactly ready to deploy en masse. This is why, in the Popular Science article linked above, the anecdote of the engaged Pentagon official wondering about the protocols for mass deployment of robot soldiers gives the very misleading impression that no one’s really worried about how to control military AI.

Of course that’s not really true. Runaway armed robots that seem to go rogue when they either lose targets or have a lapse in communication, assuming a default behavior to "fail gracefully" as programmers say, are a very real concern, and so is the need to coordinate entire squads of them and to intervene when they start taking the wrong course of action mid-combat. But by focusing on all the things that could go wrong and ignoring the fact that these are all just prototypes being tested and fine-tuned, tech writers trying to find a new, more plausible robot insurrection story amp up the existing concerns while making it seem like no one takes them seriously. What policy on wartime AI can we expect from the Pentagon when the AI in question is still an experiment taking its baby steps into the real world? When we have a real, working weapon ready to be assigned to an actual mission completely on its own, with humans only in the role of supervisors who’ll take control during an emergency, then we can start thinking of meaningful ways to coordinate robotic armies and fleets. Without the finished product in place and a detailed knowledge of how it works and what it could do, a far-reaching policy on cybernetic warfare would be putting the cart before the horse. Knowing the capabilities of an unmanned fighter, bomber, or tank would let you create new requirements for the vendors and specify a communications package that will let all the different units share their positions and actions.

And there’s another interesting twist here. Deploying individual robots that talk to one another would require a supercomputer to issue commands across the battlefield, controlling these AIs with even more AI logic. Our somewhat inefficient method of communication, which requires us to actually write or say something, simply couldn’t keep up with the milliseconds it takes for compatible computer systems to exchange vital data. This means that at some level, there’s always a computer making a crucial decision, even if the humans issue all the important strategic orders. We just wouldn’t be fast enough to assign every target and every motion while the battle is underway, or to prevent a robot from straying off target or getting a bit too close to an allied position. No matter how many layers of computers are involved, however, all it takes is an override or a proper command to freeze the machines in their tracks. Program in enough fail-safe mechanisms, and any potential SkyNet could be disabled just by flipping the power switch to off. Unless there’s a virus in the system planted there by a human, but that’s a whole other, and probably very complicated, story…
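The fail-safe idea itself is old hat in engineering, and a minimal sketch, with invented names and timings, looks something like this in Python: a controller that treats silence from its human handlers as a command to stop rather than a license to improvise.

```python
import time

# Minimal fail-safe sketch with invented names and timings: lose the
# link to the human handlers, and the default behavior is to halt.
class FailSafeController:
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_command_time = time.monotonic()
        self.halted = False

    def on_command(self, command):
        # Any human command resets the watchdog; an explicit "halt"
        # override always wins.
        self.last_command_time = time.monotonic()
        self.halted = (command == "halt")

    def tick(self):
        # Called continuously by the control loop.
        if time.monotonic() - self.last_command_time > self.timeout_s:
            self.halted = True
        return "halted" if self.halted else "operating"
```

The real versions would involve redundant channels and authenticated commands, but the principle stands: the machine’s default state on losing its humans is to stop.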


In today’s economy, everybody is worried about jobs, and rightfully so. Some experts are even forecasting that we can forget about a full recovery in the job market until 2017, and the big fear for many is that their skills will no longer be needed because some jobs are going away forever. Unfortunately, for many workers, that does seem to be true thanks to the new ways so much business is being done. This is why, in this election, China, with its image as a global outsourcing hub which creates millions of new jobs at the expense of others, is the boogeyman so often cited by politicians when it comes to addressing soaring unemployment. But there’s one more reason why millions of jobs are being eliminated altogether, a reason that doesn’t get much attention or airtime, and one much more difficult to sum up in a catchy slogan for an attack ad, or to fight. Robots.

As reported by a piece in Good Magazine, more and more manufacturing jobs are being automated because unlike humans, robots don’t need breaks, they don’t require benefits, they don’t negotiate wages, and if they’re not doing the work they’re supposed to do, you can fix them or just send them back and buy new ones. On top of that, they make far fewer mistakes than fallible and fatigued humans. They don’t necessarily cost that much less than humans, since prices for heavy duty industrial machinery easily run into the millions of dollars, and they’re not perfect, requiring a human to inspect their work and adjust their aim and actions accordingly. But in the long run, it’s easier to employ a small staff of engineers to supervise the machinery than to pay benefits to a few hundred blue collar workers whose wages are often negotiated by unions, and the legacy costs are much lower. And it’s not only the manufacturing workers who should be concerned. Although white collar jobs are still safer than blue collar ones, automation and outsourcing will take their toll there as well.

Paperwork is increasingly being handled by software or outsourced to cut costs, and with every recession, management tries to push the limits of both to see how much work can be done without having to start hiring. This is why it takes a while for many jobs to come back after a recession. It’s not that companies are timid about the economic landscape after a recession is over and just need another tax incentive or a bonus to hire new employees, as so many politicians and supply-siders repeat whenever the question of how they’re planning to create new jobs is raised at a town hall meeting. Companies aren’t hiring because they just don’t have to hire a lot of people, and a tax credit or cash incentive put up against an average worker’s salary with benefits is just too small to justify hiring some extra people and exploring new opportunities. Again, keep in mind that there will always be jobs that can’t, and shouldn’t, be outsourced for strategic or legal reasons, and there will always be a need to check that the software did what it was actually supposed to do. But the double punch of outsourcing and automation is going to cut down on a lot of clerical and administrative jobs.

So, you may wonder, who’s safe in the future other than the IT people who’ll have to build, maintain, and fix all the robots and software packages in question? We can say that human subject matter experts will always be needed to teach and supervise the machinery, though there will be fewer of them than many companies have today. Managers will be needed to coordinate work and come up with new ideas and products. And yes, we’ll still need skilled tradespeople who can do things most machines won’t be able to do safely or effectively for decades to come. When it comes to outsourcing, the picture is rather murky. Outsourcing as we know it is an example of globalization, which supporters say adds $1 trillion to the American GDP every year, most of that in the form of cheaper goods being sold to more people. But the globalization currently being practiced is often very one-sided. American and European multinationals send work overseas, but Americans and Europeans will have a very hard time trying to find jobs as foreigners in hot global markets. And the very countries which benefit most from outsourcing tend to be some of the more protectionist states as well.

In other words, we’re sending money and jobs to nations while expecting them to create new opportunities for us, building a loop of globalization benefiting everyone involved. But what really happens is that these nations take the jobs, ship back the required product, then shut out foreign companies trying to get what their leaders see as too big of a slice of the economy. And while we talk about the need for free trade and how protectionist measures would take a toll on our economies (and they certainly would), the world’s outsourcing hubs nod in agreement as they write laws forbidding foreign companies to accumulate more than a certain stake in their homegrown corporations, and write immigration laws that keep foreign citizens from coming over and trying to get jobs in high demand. And that’s a problem requiring a lot of diplomatic work to resolve…


If you’ve been reading this blog long enough, you may recall that I’m not a big fan of humanoid robots. There’s no need to invoke the uncanny valley effect, even though some attempts to build humanoid robots have managed to produce rather creepy entities which try to look as human as possible to goad future users into some kind of social bond with them, presumably to gain their trust and get into a perfect position to kill the inferior things made of flesh. No, the reason why I’m not sure that humanoid robots will be invaluable to us in the future is a very pragmatic one. Simply put, emulating bipedalism is a huge computational overhead as well as a major, and unavoidable, engineering and maintenance headache. And with the limits on the size and weight of would-be robot butlers, as well as on the patience of their users, humanoid bot designers may be aiming a bit too high…

We walk, run, and perform complicated tasks with our hands and feet so easily that we only notice the amount of effort and coordination this takes after an injury that limits our mobility. The reason we can do all this lies in a small, squishy mass of neurons coordinating a firestorm of constant activity. Contrary to what long-standing urban myths imply, we actually use all of our brainpower, and we need it to help coordinate and execute the same motions that robots struggle to repeat. Of course our brains are cheating when compared to a computer because with tens of billions of neurons and trillions of synapses, they’re like screaming fast supercomputers. They can calculate what it will take to catch a ball in mid-air in less than a few hundred milliseconds and make the most minute adjustments to our muscles to keep us balanced and upright just as quickly. Likewise, our bodies can heal the constant wear and tear on our joints, wear and tear we accumulate from walking, running, and bumping into things. Bipedal robots navigating our world wouldn’t have these assets.

Humanoid machines would need to be constantly maintained just to keep up with us in a mechanical sense, and carry the equivalent of Red Storm in their heads, or at least be linked to something like it, to even hope to coordinate themselves as quickly as we do cognitively and physically. Academically, this is a lofty goal which could yield new algorithms and robotic designs. Practically? Not so much. While last month’s feature in Pop Sci bemoaned the lack of interest in humanoid robots in the U.S., it also failed to demonstrate why such an incredibly complicated machine would be needed for basic household chores that could be done by robotic systems functioning independently, and without the need to move on two legs. Instead, we got the standard Baby Boomers’ caretaker argument which goes somewhat like this…

Put aside the idea of a robot that cleans out your gutters so you can spend a Saturday in the yard with your son. Imagine that your son has children of his own, has taken you in, and works a ten-hour shift. Who will have the time to administer your medication? To schedule your next doctor’s appointment? To help you to the bathroom? Who will you rely on? Perhaps, if Hong and his peers can convince our country that their work could someday remedy a national crisis, you’ll rely on [ a humanoid robot like ] CHARLI.

Or, alternatively, a computer could book your appointments via e-mail, or a system could let patients make appointments with their doctors on the web. A smart dispenser could give you the right amount of pills, check for potential interactions against public medical databases, and beep to remind you to take your medicine, and a programmable walker with actuators and a few buttons could help you get around. All of these would cost far less than the tens of millions of dollars a humanoid robot would cost by 2025, and require much less coordination or learning than a programmable humanoid. Why wouldn’t we pursue immediate fixes to what’s being described as a looming caretaker shortage, choosing instead to invest billions of dollars into E-Jeeves, which may take an entire decade or two just to learn how to go about daily human life and would be ready to tackle the problem only after it was no longer an issue, even if we started right now? If anything, harping on the need for a robotic hand for Baby Boomers’ future medical woes would only prompt more R&D cash into the immediate solutions and rules-based intelligent agents we already employ rather than long-term academic research.
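The "rules-based intelligent agent" option really is this mundane. Here’s a hypothetical sketch of the pill dispenser’s core safety check in Python, with an invented interaction table standing in for lookups against those public medical databases:

```python
# Toy rules-based dispenser check; the drug names and the interaction
# table are invented for this sketch.
INTERACTIONS = {frozenset(["warfarin", "aspirin"])}

def can_dispense(due_meds):
    """Return the meds if they're safe together, or an empty list to
    flag the combination for a human instead of guessing."""
    meds = list(due_meds)
    for i in range(len(meds)):
        for j in range(i + 1, len(meds)):
            if frozenset([meds[i], meds[j]]) in INTERACTIONS:
                return []
    return meds
```

No gait planning, no computer vision, no decade of learning; a lookup table and a refusal to guess cover the safety-critical part of the job.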

There’s a huge gap between human abilities and machinery because we have the benefit of hundreds of millions of years of evolutionary trial and error. Machines, even though they’re advancing at an ever faster pace, have only had a few decades by comparison. It will take decades more to build self-repairing machines and computer chips that can boast supercomputer performance while being small enough to fit in a human-sized robot’s head before robotic butlers become practical and feasible. And even then, we might go with distinctly robotic versions because they’d be cheaper to maintain and operate.