Archives For artificial intelligence

sleeping cell phone

It seems that George Dvorsky and I will never see eye to eye on AI matters. We couldn’t agree on some key things when we were on two episodes of Science For The People when it was still called Skeptically Speaking, and after his recent attempt at dispelling popular myths about what artificial intelligence is and how it may endanger us, I don’t see a reason to break with tradition. It’s not that Dvorsky is completely wrong in what he says, but like many pundits fascinated with bleeding edge technology, he ascribes abilities and a certain sentience to computers that they simply don’t have, and borrows from vague Singularitarianism, which uses grandiose terms with seemingly no fixed definitions for what they mean. The result is a muddled list which has some valid points and does provide valuable information, but not for the reasons actually specified, as some fundamental problems are waved off as if they don’t matter. Articles like this are why I’m doing the open source AI project, which I swear is being worked on in my spare time, although that’s been a bit hard to come by as I was navigating a professional roller coaster recently. But while the pace of my code review has slowed, I still have time to be a proper AI skeptic.

The very first problem with Dvorsky’s attempt at myth busting comes with his effort to tackle the very first “myth”: that we won’t create AI with human-like intelligence. His argument? We made machines that can beat humans at certain games and that can trade stocks faster than us. If that’s all there is to human intelligence, that’s pretty deflating. We’ve succeeded in writing some apps and neural networks which we trained to be extremely good at tasks that require a lot of repetition, and the strategies for which lie in very fixed domains where there are a few, really well defined correct answers, which is why we built computers in the first place. They automate repetitive tasks during which our attention and focus can drift and cause errors. So it’s not that surprising that we can build a search engine that can look up an answer faster than the typical human will remember it, or a computer that can play a board game by keeping track of enough probabilities with each move to beat a human champion. Make those machines do something their neural networks have not been trained to do and watch them fail. But a human is going to figure out the new task and train himself or herself to do it until it’s second nature.
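
To make that narrowness concrete, here’s a minimal sketch in Python using scikit-learn’s bundled digits dataset; the dataset, model, and all the names below are my own illustrative choices, not anything from Dvorsky’s piece. The point is that a network trained to classify digits has no concept of “not a digit” at all; hand it something outside its fixed domain and it shoehorns the input into one of the ten answers it knows.

```python
# A minimal sketch of how narrow a trained model really is, using
# scikit-learn's bundled digits dataset. Purely illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# A small neural network trained to do exactly one thing:
# classify 8x8 grayscale images of handwritten digits.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("In-domain accuracy:", net.score(X_test, y_test))  # typically ~0.97

# Feed it something it was never trained on: pure random noise.
# It has no way to answer "I don't know"; it picks a digit anyway.
noise = np.random.RandomState(0).uniform(0, 16, size=(1, 64))
print("'Digit' found in noise:", net.predict(noise)[0])
print("Assigned probability:", net.predict_proba(noise).max())
```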

For all the gung-ho quotes from equally enthusiastic laypeople with only tangential expertise in the subject matter, and the typical Singularitarian mantras that brains are just meat machines, throwing around the term “human-like intelligence” while scientists still struggle to define what it means to be intelligent in the first place is not even an argument. It’s basically a typical techie’s rough day on the job: listening to clients debate their big ideas and simply assume that with enough elbow grease, whatever they want can be done, never realizing that their requests are only loosely tethered to reality because they’re regurgitating the promotional fluff they read on some tech blogs. And besides, none of the software Dvorsky so approvingly cites appeared ex nihilo; there were people who wrote it and tested it, so to say that software beat a person at a particular task isn’t even what happened. People wrote software to beat other people at certain tasks. All that’s happening with the AI part is that they used well understood math and data structures to avoid writing too much code and let the software guess its way to better performance. To neglect the programmers like that is like praising a puck for getting into the net past a goalie while forgetting to mention that oh yeah, there was a team that lined up the shot and got it in.

Failing to grasp this fundamental fact of where we are with AI, looking at fancy calculators and an advanced search engine, then imagining HAL 9000 and Skynet as the next logical steps for straightforward probabilistic algorithms, leaves the rest of the myths as philosophical what-ifs instead of the definitive facts Dvorsky presents them to be. Can someone write a dangerous AI that we might have to fear or that may turn against us? Sure. But will it be so smart that we’ll be unable to shut it down if we have to, as he claims? Probably not. Just like the next logical step for your first rocket to make it into orbit is not a fully functioning warp drive (which may or may not be feasible in the first place, and even if it is, is unlikely to be anything like what’s shown in science fiction), an AI system today is on track to be a glorified calculator, search engine, and workflow supervisor. In terms of original and creative thought, it’s a tool to extend a human’s abilities by crunching the numbers on speculative ideas, but little else. There’s a reason why computer scientists are not writing countless philosophical treatises in droves about artificial intelligence co-existing with lesser things of flesh and bone, while pundits, futurists, and self-proclaimed AI experts churn out vast papers passionately debating the contents of vague PopSci Tech section articles after all…

designer v. developer

After some huffing, puffing, and fussing around with GitHub pages and wikis, I can finally bring you the promised first installment of my play-along-at-home AI project, in which there’s no code to review just yet, but a high level explanation of how it will be implemented. It’s nothing fancy, but that’s kind of the point. Simple, easy to follow modules are easier to deal with and debug, so that’s the direction in which I’m headed: they’ll be snapped on top of cloud service interfaces which will provide the on-demand computing resources required as the project ramps up. There are also explanations for some of the choices I’m making where there are several decent implementation options for a particular feature set; some of those choices more or less come down to personal preference, while others have long-view reasons to definitely pick one option over another.
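
To show what I mean by simple modules snapped on top of cloud service interfaces, here’s a minimal sketch of the shape I have in mind; the class and method names are hypothetical illustrations for this post, not the project’s actual API.

```python
# A minimal sketch of small, single-purpose modules delegating heavy
# lifting to whatever backend they're handed. Names are illustrative.
from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """Thin interface over whatever service provides the muscle."""
    @abstractmethod
    def submit(self, task: dict) -> dict: ...

class LocalBackend(ComputeBackend):
    """Runs tasks in-process; a cloud backend would swap in transparently."""
    def submit(self, task: dict) -> dict:
        return {"task": task["name"], "status": "done"}

class TrainingModule:
    """Knows only its own job; asks its backend for computing resources."""
    def __init__(self, backend: ComputeBackend):
        self.backend = backend

    def run(self, network_name: str) -> dict:
        return self.backend.submit({"name": f"train:{network_name}"})

print(TrainingModule(LocalBackend()).run("demo_ann"))
```

Swapping LocalBackend for a cloud-based one is the whole scaling story: the module never has to change.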

In the next update, there will be database designs and SQL, which may look like overkill for a framework to run some ANNs, particularly when there are hundred line Python scripts that run them without a hiccup. But remember that for what’s being built, ANNs are just one component, so the overhead goes into managing where the data flows and securing it, because if I’ve learned anything about security, it’s that if it’s not baked in from the start but layered on top after all the functionality has been completed, you end up with only one layer of defense that may be easily pierced by exploiting a vulnerability out of your control. Inputs may not get sanitized with proper care, your framework’s package for CSRF prevention might not have been updated, and without a security model to put up some roadblocks between a hacker and your data, you may as well not have bothered. Likewise, there’s going to be a fair amount of code and resources to define the ANNs’ inputs and outputs so we can actually harness them to do useful things.
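
As a taste of both points, here’s a minimal sketch of a schema for tracking ANNs and their inputs and outputs, along with the kind of parameterized query that sanitizes inputs by construction. It uses sqlite3 so it runs anywhere; the table layout is a hypothetical illustration, not the actual design coming in the next update.

```python
# A minimal sketch of schema and query hygiene: ownership lives in
# the schema, and parameterized queries neutralize injection attempts.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE networks (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL UNIQUE,
    owner TEXT NOT NULL              -- security model starts in the schema
);
CREATE TABLE network_io (
    id         INTEGER PRIMARY KEY,
    network_id INTEGER NOT NULL REFERENCES networks(id),
    direction  TEXT CHECK (direction IN ('input', 'output')),
    label      TEXT NOT NULL         -- the terminology used to invoke it
);
""")

# Placeholders treat input strictly as data; concatenating the string
# into the SQL instead is exactly the vulnerability to avoid.
user_supplied = "vision_net'; DROP TABLE networks; --"
conn.execute("INSERT INTO networks (name, owner) VALUES (?, ?)",
             (user_supplied, "demo_user"))
print(conn.execute("SELECT name FROM networks").fetchall())
```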

curious bot

Defense contractor Raytheon is working on robots that can talk to each other by tweaking how machine learning is commonly done. Out with top-down algorithmic instructions, in with neural network collaboration and delegation across numerous machines. Personally, I think this is not just a great idea but a fantastic one, so much so that I ended up writing my thesis on it and had some designs and code lying around for a proof of concept. Sadly, it’s been a few years and I got side-tracked by work, my eventual cross-country move, and other pedestrian concerns. But all that time, this idea just kept nagging me, and so after reading about Raytheon’s thoughts on networked robotics, I decided to dust off my old project and build it anew with modern tools in a series of posts, laying out not just the core concepts but the details of the implementation. Yes, there’s going to be a lot of in-depth discussion about code, but I’ll do my best to keep it easy to follow and discuss, whether you’re a seasoned professional coder, or just byte-curious.

All right, all right, that’s enough with all the groaning, I design and write software for a living, not pack comedy clubs in West Hollywood. And before you write any software, you have to lay out a few basic goals for what you want it to do. First and foremost, this project should be flexible and easily expandable, because all we know is that we’re going to have neural networks running for machines with inputs and outputs, and we want to tie them to a certain terminology we can invoke when calling them. Secondly, it should be easily scalable and ready for the cloud, where all it takes to ramp it up is tweaking a few settings on the administration screen. Thirdly, it should be capable of accepting and executing custom rules for making sure the digital representations of the robots in the system are valid on the fly. And finally, it should allow for custom interfaces to different machines inhabiting the real world, or at least get close enough to providing a generic way to talk to real world entities, as in the sketch below. Sounds pretty ambitious, I know, but hey, if you’re going to be dealing with artificial intelligence, why not try to see just how far you can take an idea?
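
Here’s a minimal sketch of what the third and fourth goals might look like in code: validation rules added at runtime, and a generic stand-in for a real-world machine. Every name in it is a hypothetical illustration rather than the project’s eventual design.

```python
# A minimal sketch of on-the-fly validation rules applied to generic
# digital representations of machines. All names are illustrative.
from typing import Callable

class MachineInterface:
    """Generic stand-in for 'a way to talk to real world entities'."""
    def __init__(self, name: str):
        self.name = name
        self.state = {"battery": 0.8, "position": (0.0, 0.0)}

class Registry:
    """Holds machine representations and the rules that validate them."""
    def __init__(self):
        self.machines: dict[str, MachineInterface] = {}
        self.rules: list[Callable[[MachineInterface], bool]] = []

    def add_rule(self, rule: Callable[[MachineInterface], bool]) -> None:
        self.rules.append(rule)          # custom rules added at runtime

    def register(self, machine: MachineInterface) -> bool:
        if all(rule(machine) for rule in self.rules):
            self.machines[machine.name] = machine
            return True
        return False                     # invalid representations rejected

registry = Registry()
registry.add_rule(lambda m: m.state["battery"] > 0.2)  # one custom rule
print(registry.register(MachineInterface("rover_1")))  # True
```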

Before we proceed though, I’d like to tackle the obvious question of why one would want to dive into a project like this on a skeptical pop sci blog. Well, for the last few years, artificial intelligence has figured in popular science news as some sort of dark magic able to create utopias and ruin economies by making nearly half of all jobs obsolete in mere decades, courtesy of writers who can’t fact check the claims they quote and use them to build elaborate scenarios of the future. But even if you don’t dive into the code and experiment with it yourself, you’ll get a good idea of what AI actually is and isn’t. Then, the next time you read some clickbait laying out preposterous claims about how robots will take over the world and enslave us as we remain oblivious to it, you can recall that AI isn’t a digital ghost from sci-fi comic books waiting to turn on a humanity it comes to resent, like the hateful supercomputer of I Have No Mouth, and I Must Scream, but something you’ve seen diagrammed and rendered in code you can run on your very own computer on an odd little pop sci blog, and feel accordingly unimpressed with the cheap sensationalism. So with that in mind, here’s your chance to stop worrying and learn to understand your future machine overlords.

Here’s how this project is going to work. Each new post in this series will point to a GitHub wiki entry with code and details, keeping the code and in-depth analysis in the same place while the posts here give the high level overview. This way, if you prefer to stick to very high level overviews, that’s what you get to see first, because as I’ve been told by so many bloggers who specialize in popular science and technology, big blocks of math and code are guaranteed to scare off an audience. But if the details intrigue you and you want a better look under the hood, it’s only a link away, and even though it may look scary at first, I would really encourage you to click on it and see how much you can follow along. Meanwhile, you’ll still get your dose of skeptical and scientific content in between, so don’t think Weird Things is about to turn into a comp sci blog the whole time this project is underway. After all, after long days of dealing with code and architectural designs, even someone who can’t imagine doing anything else will need a break from talking about computers and writing even more code for public review…

x47b takeoff

The peaceniks at Amnesty International have been worried about killer robots for a while, so as the international community convenes in Geneva to talk about weapons of the future, they once again launched a media blitz about what they see as an urgent need to ban killer robots. In the future they envision, merciless killer bots mow down soldiers and civilians alike with virtually no human intervention, kind of like in the opening scene of the Robocop remake. In an age when vast global trade empires with far too much to lose by fighting each other use their soldiers and war machines to tackle far-flung “low intensity conflicts,” in military wonk parlance, where telling a civilian apart from a combatant is no easy feat, Amnesty International raises an important issue to consider. If we build robots to kill, there’s bound to be a time when they’ll make a decision in error and end someone’s life when they shouldn’t have. Who will be held responsible? Was it a bug or a feature that they killed who they did? Could we prevent similar incidents in the future?

Having seen machines take on the role of perfect bad guys in countless sci-fi tales, I can’t shake the feeling that a big part of the objection to autonomous armed robots comes from the innate anxiety at the idea of being killed because some lines of code ruled you a target. It’s an uneasy feeling even for someone who works with computers every day. Algorithms are too often buggy and screw up edge cases far too easily. Programmers rushing to meet a hard deadline will sometimes cut corners to make something work, then never go back to fix it. They mean to, but as new projects start and time gets away from them, an update breaks their code and bugs emerge seemingly out of nowhere. Ask a roomful of programmers to raise their hands if they’ve done this at least a few times in their careers, and almost all of them will. The few who don’t are lying. When this is a bug in a game or a mobile app, it’s seldom a big deal. When it’s in code deployed in an active war zone, it becomes a major problem very quickly.

Even worse, imagine bugs in the robots’ security systems. Shoddy encryption, or the lack of it, was once exploited to capture live video feeds from drones on patrol. Poorly secured APIs meant to talk to the robot mid-action could be hijacked to turn the killer bot against its handlers, and as seen in pretty much every movie ever, this turn of events never has a good ending. Even good, secure APIs might not stay that way, because cybersecurity is a very lopsided game in which all the cards are heavily stacked in the hackers’ favor. Security experts need to execute perfectly on every patch, update, and code change to keep their machines safe. Hackers only need to take advantage of a single slip-up or bug to gain access and do their dirty work. This is why security for killer robots’ systems could never be perfect, and the only thing their creators could do is make the machines extremely hard to hack with rigorously vetted code and constantly updated secure connections to their base stations, and include a way to quickly reset or destroy them when they do get hacked.
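
For a sense of what a secure connection to a base station involves at its most basic level, here’s a minimal sketch of authenticating commands with a pre-shared key. This is a generic HMAC illustration of my own, not any actual military protocol; real systems layer encryption, key rotation, and hardened hardware on top of anything like this.

```python
# A minimal sketch of command authentication between a base station
# and a robot using a pre-shared key. Purely illustrative.
import hashlib
import hmac
import json
import secrets
import time

SHARED_KEY = secrets.token_bytes(32)  # provisioned before deployment

def sign_command(command: dict) -> dict:
    """Base station attaches a timestamp and a keyed digest."""
    command = {**command, "ts": time.time()}
    payload = json.dumps(command, sort_keys=True).encode()
    command["sig"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return command

def verify_command(command: dict, max_age: float = 5.0) -> bool:
    """Robot rejects forged or stale commands before acting on them."""
    command = dict(command)
    sig = command.pop("sig", "")
    payload = json.dumps(command, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - command.get("ts", 0) < max_age
    return fresh and hmac.compare_digest(sig, expected)

cmd = sign_command({"action": "return_to_base"})
print(verify_command(cmd))        # True: authentic and fresh

tampered = {**cmd, "action": "patrol_indefinitely"}  # hijack attempt
print(verify_command(tampered))   # False: signature no longer matches
```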

Still, all of this isn’t necessarily an argument against killer robots. It’s a reminder of how serious the challenges of making them are, and those challenges had better be heeded, because no matter how much it may pain pacifist groups and think tanks, these weapons are coming. While they’ll inevitably kill civilians in war zones, in the mind of a general, so do flesh and blood soldiers, and if well trained humans with all the empathy and complex reasoning skills being human entails cannot get it right all the time, what hope do robots have? Plus, to paraphrase the late General Patton, you don’t win wars by dying for your country but by making someone else die for theirs, and what better way to do that than by substituting machinery you don’t mind losing nearly as much for your live troops in combat? I covered the “ideal” scenario for how all this would work back in the early days of this blog, and in the years since, the technology to make it all possible isn’t just growing ever more advanced, it’s practically already here. From a military standpoint, it would make little sense to just throw it all away and continue to risk human lives in war zones.

And here’s another thing to think about when envisioning a world where killer robots making life or death decisions dominate the battlefield. Only advanced countries could afford to build robot armies and deploy them instead of humans in conflict. Third World states would have no choice but to rely on flesh and blood soldiers, meaning that one side loses thousands of lives fighting a vast, expendable metal swarm armed with high tech weaponry able to outflank any human-held position before its defenders even have time to react. How easy would it be to start wars when one side’s soldiers no longer need to be put at risk and the other side either won’t have good enough robots or must put humans on the front lines? If today all it takes to send thousands into combat is saying that they volunteered and their sacrifice won’t be in vain, how quickly will future chicken hawks vote to send the killer bots to settle disputes, often in nations where only humans will be capable of fighting back, all but assuring the robots’ swift tactical victory?

ultron

There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant, in a process that makes most computer scientists weep when it’s described during the film, decides that because its mission is to save the world, it must wipe out humanity, since humans are violent. It’s a plot so old that one imagines an encyclopedia listing every time it’s been used is itself covered by its own hefty weight in cobwebs, and yet we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or it was malicious by design, but we always omit the humans involved in, and responsible for, its design and implementation, and go straight to treating the machine as its own entity wherein lies the error.

And the same error repeats itself in an interesting, but ultimately flawed, idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans, since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what they do? Do you take time out of your day to explain to them why it’s important for you to build rockets and launch satellites, and how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings toward the bees and would only get rid of as many of them as you have to and no more. A hyper-intelligent AI system, the idea goes, would do its business the exact same way.

And while, sadly, Vice decided on using Eliezer Yudkowsky for peer review when writing its quick overview, he was able to illustrate the right caveat to the notion of an AI which will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum. In its likeliest iteration, it will need vast amounts of space and energy to run itself, and we humans are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we control and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw in all this, overlooked by both Yudkowsky and Svedic, is that AI will not emerge the way we see in sci-fi, ex nihilo. It’s more probable for a baby to be born an evil genius at a single digit age than for a computer to do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world around a black box that contains the inner workings of this hyper AI construct, as if how it’s built is irrelevant, when it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we set immense pattern sifters loose on billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of those data points, you’ll have to change the equations and run the whole thing from scratch again. If you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have taught Watson how to handle them, that all the data you’re using is completely correct, and that nature really will follow the rules your computers spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.

seamus

When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy wookie who wasn’t quite sure what to make of us. Over the next several days we got used to each other and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down, a trim at the groomers’, changed his name to Seamus because frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have been very intelligent breeds, and while I’m sure there are dogs smarter than him out there, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, Seamus is clever; he has an actual personality instead of just reflexive reactions to food, water, and opportunities to mate, which, sadly, is not an option for him anymore thanks to a little snip snip at the shelter. If I throw treats his way to lure him somewhere he doesn’t want to go and he’s seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do either. If it’s not chewy and gamey, he wants nothing to do with it. He’s very careful with whom he’s friendly, and after a past as a stray, he’s always ready to show other dogs how tough he can be when they stare too long or won’t leave him alone. Finally, from the scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can’t quite put their fingers on, which immediately gives Singularitarians room to dismiss their objections as “vitalism” unnecessary to address. But that’s not right at all, because the thing on which non-Singularitarians often can’t put their finger is personality, an intricate, messy process of responding to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat and he knows he will need to shake or sit to be allowed to have it, and if he doesn’t get it, he will voice both his dismay and frustration, reactions to something he sees as unfair in the environment around him which he now wants to correct. And not all of his reactions are food related. He’s excited to see us after we’ve left him alone for a little while, and he misses us when we’re gone. My laptop, on the other hand, couldn’t give less of a damn whether I’m home or not.

No problem, say Singularitarians, we’ll just give computers goals and motivations so they can come up with a personality and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it’s a biological duck or a cybernetic one if you can’t tell the difference? And it’s true, you could just build a robotic copy of Seamus, including mimicking his personality, and say that you’ve built an artificial intelligence as smart as a clever dog. But why? What’s the point? How does this use a piece of technology meant for complex calculations and logical flows for its purpose? Why go to all this trouble to recreate for machines something we already have, when they don’t need it? There’s nothing divinely special in biological intelligence, but to dismiss it as just another set of computations you can mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?

So many people all over the news seem so wrapped up in imagining AIs that have a humanoid personality and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let’s program the right solutions, or teach the computers the parameters of the problem so they can solve it in a way which yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while, and then having to raise it so it won’t get mad at us and decide to nuke us into a real world version of Mad Max? Personally, I’m not the least bit worried about the AI boogeymen from the sci-fi world becoming real. I’m more worried about a curiosity built for no other reason than to show it can be done being programmed to get offended or even violent, because that’s how we can get, turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up maiming or killing its creators.

one half of a debate


sad robots

And now, how about a little classic Singularity skepticism after the short break? What’s that? It’s probably a good idea to go back in time and revisit the intellectual feud between Jaron Lanier, a virtual reality pioneer turned Luddite-lite in recent years, and Ray Kurzweil, the man who claims to see the future and generally has about the same accuracy as a psychic doing a cold reading when he tries? Specifically the One-Half of a Manifesto vs. One-Half of an Argument debate, the public scuffle now some 15 years old and surprisingly relevant today? Very well, my well read imaginary reader, whatever you want. Sure, this debate is old and nothing in the positions of the personalities involved has changed, but that’s actually what makes it so interesting: a decade and a half of technological advancements and dead ends didn’t budge either of these people who claim to be authorities on the subject matter. And that’s in no small part because the approach from both sides was to take a distorted position and preach it past each other.

No, this isn’t a case where you can get those on opposing sides to compromise and arrive at the truth somewhere in the middle. Both of them are very wrong about many basic facts of the economics, the technology, and the understanding of what makes one human for the foreseeable future, and they build strawmen to assault each other with their errors, clinging to their old accomplishments to argue from authority. Lanier has developed a vision of absolute gloom and doom in which algorithms and metrics have taken over for humans, deployed by engineers who place zero value on human input and interaction. Kurzweil insists that Lanier can only see the problems to overcome, and became a pessimist solely because he can’t solve them, while in the Singularitarian world, the magic of exponential advancement will eventually solve it all with computers armed with super-smart AI; the very AI Lanier is convinced will make humanity obsolete, not by being smarter than humans, but through the actions of those who believe it is.

What strikes me as bizarre is that neither of them ever looks at the current trend for what it is: making machines perform computationally tedious, complex calculations, offloading the things we’ve long known computers do better and more accurately than us, and then having humans make decisions based on this information. Computers will not replace us. We’re the ones with the creative ideas, goals, and motivation, not them. We’re the ones who tell them what to do, what to calculate, and how to calculate it. Today, we’re going through a period of what we could generously call creative destruction, in which some jobs are sadly becoming obsolete and we’re lacking the political spine to apply what we know are policy fixes to political problems, which is unfair and cruel to those affected. But the idea that this is a political, not a technical, problem is not even considered. Computers are their hammers and all they see are nails, so they will hammer away at these problems until the problems go away, and wonder why they refuse to.

Search only for downsides without considering solutions, failing to grasp both the promise of AI and human/machine interfaces, as Lanier does, or look only for upsides without even acknowledging problems or limitations, overestimating what these technologies can do based on wildly unrealistic notions from popular computer science news headlines, as Kurzweil does, and you get optimism and pessimism recycling the same arguments against each other for a decade and a half while omitting the human dimension of the problems they manage to describe, the dimension both claim is the most important. If humans are greater than the sum of their parts, as Lanier argues, why would they be displaced by a fancy enough calculator, having nothing useful to offer past making more computers? And if humans are so easy to boil down to a finite list of parts and pieces, why is it that we can’t define what makes them creative and how to imbue machines with the same creativity outside of a well defined problem space limited by propositional logic? Try to answer these questions and we’d have a real debate.

humanoid robot

With easy, cheap access to cloud computing, a number of popular artificial intelligence models computer scientists have wanted to put to the test for decades are now finally able to summon the necessary oomph to drive cars and perform sophisticated pattern recognition and classification tasks. With these new probabilistic approaches, we’re on the verge of having robotic assistants, soldiers, and software able to talk to us and help us process mountains of raw data based not on code we enter, but on the questions we ask as we play with the output. But with that immense power come potential dangers, which have alarmed a noteworthy number of engineers and computer scientists and sent them wondering aloud how to build artificial minds that hold values similar to ours and can see the world enough like we do to avoid harming us by accident, or even worse, by their own independent decision after seeing us as being “in the way” of their task.

Their ideas on how to do that are quite sound, if exaggerated somewhat to catch the eye of the media and encourage interested non-experts to take this seriously, and they’re not thinking of some sort of Terminator-style or even Singularitarian scenarios, but of how to educate an artificial intelligence about our human habits. But the flaw I see in their plans has nothing to do with how to train computers. Ultimately, an AI will do what its creator wills it to do. If its creator is hell bent on wreaking havoc, there’s nothing we can do other than stop him or her from creating it. We can’t assume that everyone wants a docile, friendly, helpful AI system. I’m sure they realize it, but all that I’ve found so far on the subject ignores bad actors. Perhaps it’s because they’re well aware that the technology itself is neutral and the intent of the user is everything. But it’s easier to focus on technical safeguards than on how to stop criminals and megalomaniacs…

fish kung fu

Robots and software are steadily displacing more and more workers. We’ve known this for the last decade as automation picked up the pace and entire professions began facing obsolescence with the relentless march of the machines. But surely there are safe, creative careers no robot would ever be able to do. Say, for example, cooking. Can a machine write an original cookbook and create a step-by-step guide for another robot to perfectly replicate the recipe every time on demand? Oh, it can. Well, damn. There go line cooks at some point in the foreseeable future. Really, can any mass market job not somehow dealing with making, modifying, and maintaining our machines and software be safe from automation? Sadly, the answer to that question seems to be a pretty clear and resounding “no,” as we’ve started hooking up our robots to the cloud to finally free them of the computational limits that held them back from their full potential. But what does this mean for us? Do we have to build a new post-industrial society?

Over the last century or so, we’ve gotten used to a factory work model. We report to the office, the factory floor, or a work site, spend a certain number of hours doing the job, go home, then get up in the morning and do it all over again, day after day, year after year. We’ve based virtually all of Western society on this work cycle. Now that an end to it is in sight, we don’t know how we’re going to deal with that. Not everybody can be an artisan or an artist, and not everyone can perform a task so specialized that building robots to do it instead would be too expensive and time consuming. What happens when robots build every house, dirt cheap RFID tags on products and cloud-based payment systems make cashiers unnecessary, and smart kiosks and shelf-stocking robots replace the last retail odd job?

As a professional techie, I’m writing this from a rather privileged position. Jobs like mine really can’t go away, since they’re responsible for the smarter software and hardware. There have been rumors about software that can write software and robots that can build other robots for years, and while we actually do have all this technology already, a steady expert hand is still a necessity, and always will be, since making these things is more of an art than a science. I can also see plenty of high end businesses and professions where human to human relationships are essential holding out just fine. But my concern is best summarized as First World nations turning into country-sized versions of San Francisco, a city that doesn’t know how to adapt to a post-industrial future: massive income inequalities, insanely priced and seldom available housing, and a culture that encourages class-based self-segregation.

The only ways I see out of this dire future are either unrolling a wider social safety net (a political no-no that would never survive conservative fury), or making education cost almost nothing so workers can be retrained on the fly (a political win-win that never gets funded). We don’t really have much time to debate this and do nothing. This painful adjustment has been underway for more than five years now and we’ve been sitting on our hands letting it happen. It’s definitely most acute on the coasts, especially here on the West Coast, but it’s been making a mess out of factories and suburbs in the Midwest and the South. When robots are writing cookbooks and making lobster bisque that even competition-winning chefs praise as superior to their own creations, it’s time to tackle this problem instead of just talking about how we’re going to talk about a solution.

[ illustration by Andre Kutscherauer ]

plaything

A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike other humans, robots are not looking for companionship; they were built to be companions. Second, and perhaps most important, anatomically correct robots are often used as surrogates for contact with humans, imparted with human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator (the term sometimes used by technosexuals to identify themselves) in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He has projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment of iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of an iDollator, as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t apply to relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making known the wishes of a human on his or her deathbed, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law is concerned. More than likely, the robot would be auctioned off, or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting any legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy, or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots solving the ethical and criminal issues involved with semi-legal or outlawed prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap, and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we can safely predict that sex bots will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses, and very seldom the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there may well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans, and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll grow emotionally attached in the same way we’d be emotionally attached to a favorite pet, but ultimately, they’ll be just our property.

[ illustration by Michael O ]