Archives For artificial intelligence

[ image: Seamus ]

When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy wookie who wasn’t quite sure what to make of us. Over the next several days we got used to each other and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down, a trim at the groomers’, changed his name to Seamus because frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have been very intelligent breeds, and while I’m sure there are dogs smarter than him out there, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, because Seamus is clever. He has an actual personality instead of just reflexive reactions to food, water, and chances to mate, the last of which, sadly, is no longer an option for him thanks to a little snip snip at the shelter. If I throw treats his way to lure him somewhere he doesn't want to go and he's seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do either. If it's not chewy and gamey, he wants nothing to do with it. He's very careful with whom he's friendly, and after a past as a stray, he's always ready to show other dogs how tough he can be when they stare too long or won't leave him alone. Finally, from the scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can't quite put their fingers on, which immediately gives Singularitarians room to dismiss their objections as "vitalism" and thus unnecessary to address. But that's not right at all, because the thing on which non-Singularitarians often can't put their finger is personality, an intricate, messy response to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat and he knows he needs to shake or sit to be allowed to have it, and if he doesn't get it, he will voice both his dismay and frustration, reactions to something he sees as unfair in the environment around him and now wants to correct. And not all of his reactions are food related. He's excited to see us after we've left him alone for a little while and he misses us when we're gone. My laptop, on the other hand, couldn't give less of a damn whether I'm home or not.

No problem, say Singularitarians, we'll just give computers goals and motivations so they can develop personalities and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it's a biological duck or a cybernetic one if you can't tell the difference? And it's true, you could just build a robotic copy of Seamus, including mimicking his personality, and say that you've built an artificial intelligence as smart as a clever dog. But why? What's the point? How is that putting a piece of technology meant for complex calculations and logical flows to its intended use? Why go to all this trouble to recreate something we already have for machines that don't need it? There's nothing divinely special in biological intelligence, but to dismiss it as just another set of computations you can mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?

So many people all over the news seem wrapped up in imagining AIs that have humanoid personalities and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let's program the right solutions or teach them the parameters of the problem so they can solve it in a way which yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while, then having to raise it so it won't get mad at us and decide to nuke us into a real world version of Mad Max? Personally, I'm not the least bit worried about the AI boogeymen from the sci-fi world becoming real. I'm more worried about a curiosity built for no other reason than to show it can be done, programmed to get offended or even violent because that's how we can get, turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up maiming or killing its creators.

[ image: sad robots ]

And now, how about a little classic Singularity skepticism after the short break? What's that? It's probably a good idea to go back in time and revisit the intellectual feud between Jaron Lanier, a virtual reality pioneer turned Luddite-lite in recent years, and Ray Kurzweil, the man who claims to see the future and generally has about the same accuracy as a psychic doing a cold reading when he tries? Specifically, the One-Half of a Manifesto vs. One-Half of an Argument debate, the public scuffle now some 15 years old and surprisingly relevant today? Very well, my well-read imaginary reader, whatever you want. Sure, this debate is old and nothing in the positions of the personalities involved has changed, but that's actually what makes it so interesting: a decade and a half of technological advancements and dead ends didn't budge either of the people who claim to be authorities on the subject matter. And all of this is in no small part because the approach from both sides was to take a distorted position and preach it past each other.

No, this isn't a case where you can get those on opposing sides to compromise and arrive at the truth somewhere in the middle. Both of them are very wrong about many basic facts of the economics, the technology, and the understanding of what makes one human for the foreseeable future, and they build strawmen to assault each other with their errors, clinging to their old accomplishments to argue from authority. Lanier has developed a vision of absolute gloom and doom in which algorithms and metrics have been put in charge of humans by engineers who place zero value on human input and interaction. Kurzweil insists that Lanier can only see all of the problems to overcome and became a pessimist solely because he can't solve them, while in the Singularitarian world, the magic of exponential advancement will eventually solve it all with computers armed with super-smart AI, the same AI Lanier is convinced will make humanity obsolete not by being smarter than humans, but through the actions of those who believe it is.

What strikes me as bizarre is that neither of them ever looked at the current trend: making machines perform the computationally tedious, complex calculations we've long known computers do better and more accurately than us, then having us make decisions based on this information. Computers will not replace us. We're the ones with the creative ideas, goals, and motivation, not them. We're the ones who tell them what to do or what to calculate and how to calculate it. Today, we're going through a period of what we could generously call creative destruction, in which some jobs are sadly becoming obsolete and we lack the political spine to apply what we know are policy fixes to political problems, which is unfair and cruel to those affected. But the idea that this is a political, not a technical, problem is never even considered. Computers are their hammers and all they see are nails, so they will hammer away at these problems until they go away, and wonder why the problems refuse to.

Fail to grasp both the promise of AI and human/machine interfaces and search only for downsides without considering solutions, as Lanier does, or overestimate what they can do based on wildly unrealistic notions from popular computer science news headlines, looking only for upsides without even acknowledging problems or limitations, as Kurzweil does, and you get optimism and pessimism recycling the same arguments against each other for a decade and a half while omitting the human dimension of the problems they manage to describe, the very dimension both claim is the most important. If humans are greater than the sum of their parts, as Lanier argues, why would they be displaced by a fancy enough calculator with nothing useful to offer past making more computers? And if humans are so easy to boil down to a finite list of parts and pieces, why is it that we can't define what makes them creative and how to imbue machines with the same creativity outside of a well defined problem space limited by propositional logic? Try to answer these questions and we'd have a real debate.

[ image: humanoid robot ]

With easy, cheap access to cloud computing, a number of popular artificial intelligence models computer scientists have wanted to put to the test for decades are now finally able to summon the necessary oomph to drive cars and perform sophisticated pattern recognition and classification tasks. With these new probabilistic approaches, we're on the verge of having robotic assistants, soldiers, and software able to talk to us and help us process mountains of raw data based not on code we enter, but on the questions we ask as we play with the output. But with that immense power come potential dangers which have alarmed a noteworthy number of engineers and computer scientists, sending them wondering aloud how to build artificial minds that share values similar to ours and see the world enough like we do to avoid harming us by accident, or even worse, by their own independent decision after seeing us as being "in the way" of their task.

Their ideas on how to do that are quite sound, if exaggerated somewhat to catch the eye of the media and encourage interested non-experts to take this seriously, and they're not thinking of some sort of Terminator-style or even Singularitarian scenarios, but of how to educate an artificial intelligence on our human habits. But the flaw I see in their plans has nothing to do with how to train computers. Ultimately, an AI will do what its creator wills it to do. If its creator is hell-bent on wreaking havoc, there's nothing we can do other than stop him or her from creating it. We can't assume that everyone wants a docile, friendly, helpful AI system. I'm sure they realize this, but all that I've found so far on the subject ignores bad actors. Perhaps it's because they're well aware that the technology itself is neutral and the intent of the user is everything. But it's easier to focus on technical safeguards than on how to stop criminals and megalomaniacs…

[ image: fish kung fu ]

Robots and software are steadily displacing more and more workers. We've known this for the last decade, as automation picked up the pace and entire professions began facing obsolescence with the relentless march of the machines. But surely, there are safe, creative careers no robot would ever be able to take over. Say, for example, cooking. Can a machine write an original cookbook and create a step-by-step guide for another robot to perfectly replicate the recipe every time on demand? Oh, it can. Well, damn. There go line cooks at some point in the foreseeable future. Really, can any mass market job not somehow dealing with making, modifying, and maintaining our machines and software be safe from automation? Well, sadly, the answer to that question seems to be a pretty clear and resounding "no," as we've started hooking up our robots to the cloud to finally free them of the computational limits that held them back from their full potential. But what does this mean for us? Do we have to build a new post-industrial society?

Over the last century or so, we've gotten used to a factory work model. We report to the office, the factory floor, or a work site, spend a certain number of hours doing the job, go home, then get up in the morning and do it all over again, day after day, year after year. We based virtually all of Western society on this work cycle. Now that an end to it is in sight, we don't know how we're going to deal with it. Not everybody can be an artisan or an artist, and not everyone can perform a task so specialized that building robots to do it instead would be too expensive, time consuming, and cost ineffective. What happens when robots build every house, dirt cheap RFID tags on products and cloud-based payment systems have made cashiers unnecessary, and smart kiosks and shelf-stocking robots have replaced the last retail odd job?

As a professional techie, I'm writing this from a rather privileged position. Jobs like mine can't really go away since they're responsible for the smarter software and hardware. There have been rumors of software that can write software and robots that can build other robots for years, and while we actually do have all this technology already, a steady expert hand is still a necessity, and always will be, since making these things is more of an art than a science. I can also see plenty of high end businesses and professions where human to human relationships are essential holding out just fine. But my concern is best summarized as First World nations turning into country-sized versions of San Francisco, a post-industrial city which doesn't know how to adapt to a post-industrial future: massive income inequalities, insanely priced and seldom available housing, and a culture that encourages class-based self-segregation.

The only ways I see out of this dire future are either unrolling a wider social safety net (a political no-no that would never survive conservative fury), or making education cost almost nothing to retrain workers on the fly (a political win-win that never gets funded). We don't really have much time left to debate this while doing nothing. This painful adjustment has been underway for more than five years now and we've been sitting on our hands letting it happen. It's definitely very acute on the coasts, especially here on the West Coast, but it's been making a mess out of the factories and suburbs of the Midwest and the South. When robots are writing cookbooks and making lobster bisque that even competition-winning chefs praise as superior to their own creations, it's time to tackle this problem instead of just talking about how we're going to talk about a solution.

[ illustration by Andre Kutscherauer ]

[ image: plaything ]

A while ago, I wrote about some futurists' ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn't be surprised if there are sex bots for rent in a wealthy first world country's red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike another human, robots are not looking for companionship; they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans, given human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don't have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn't work out for him. He's too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It's little wonder he's so attached to his dolls. He has projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it's really not meant to be an indictment of iDollators, and it's entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of iDollators, as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn't work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that's about it. Inheriting the human's other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law would be concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here's another caveat. All of this is based on the idea of advancements in AI we aren't even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they'll be is a very open question. Being able to change the robot's mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can't even think of granting legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I'm still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or criminalized prostitution, especially in the developing world. To human traffickers, their victims' lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that they will emerge and become quite complex and engaging over the coming decades, they're unlikely to be anything more than a niche product. They won't be legally viable spouses and very seldom the first choice of companion. They won't help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can't do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there may well be a limit to how well they can interact with us. Considering our history and biology, it's a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we'll have emotional attachments in the same way we'd be emotionally attached to a favorite pet, but ultimately, just our property.

[ illustration by Michael O ]

[ image: Tron police ]

When four researchers decided to see what would happen when robots issue speeding tickets and the impact it might have on the justice system, they found out two seemingly obvious things about machines. First, robots make binary decisions, so if you're over the speed limit, you get no leeway or second chances. Second, robots are not smart enough to take into account all of the little nuances that a police officer notes when deciding whether to issue a ticket or not. And here lies the value of this study. Rather than trying to figure out how to get computers to write tickets and determine when to write them, something we already know how to do, the study showed that computers would generate significantly more tickets than human law enforcement, and that even the simplest human laws are too much for our machines to handle without many years of training and very complex artificial neural networks to understand what's happening and why. A seemingly simple and straightforward task turned out to be anything but.

Basically, here's what the legal scholars involved say, in example form. Imagine you're speeding down an empty highway at night. You're sober, alert, in control, and a cop sees you coming and knows you're speeding. You notice her, hit the brakes, and slow down to an acceptable 5 to 10 miles per hour over the speed limit. Chances are that she'll let you keep going because you're not being a menace to anyone and the sight of another car, especially a police car, was enough to relieve your mild case of lead foot. Try doing that on a crowded road during rush hour and you'll more than likely be stopped, especially if you're aggressively passing or riding bumpers. Robots will issue you a ticket either way because they don't really track or understand your behavior or the danger you may pose to others, while another human can make a value judgment. Yes, this means that the law isn't being properly enforced 100% of the time, but that's ok because it's not as important to enforce as, say, laws against robbery or assault. Those laws take priority.
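
To make that contrast concrete, here's a minimal sketch of the two decision processes side by side; every input and threshold in it is a hypothetical stand-in of mine, not anything taken from the study:

```python
# A minimal sketch of the gap the study describes: the robotic rule is a
# bare threshold, while the human decision weighs context the machine
# never sees. All inputs and thresholds here are hypothetical stand-ins.

def robot_issues_ticket(speed, limit):
    # Binary decision: any speed over the limit earns a ticket.
    return speed > limit

def human_issues_ticket(speed, limit, traffic_density,
                        slowed_when_seen, driving_aggressively):
    # A value judgment: mild speeding on an empty road by a driver who
    # slows down at the sight of a patrol car usually gets a pass.
    over = speed - limit
    if over <= 0:
        return False
    if (over <= 10 and slowed_when_seen
            and traffic_density < 0.3 and not driving_aggressively):
        return False  # not a menace to anyone; seeing the cruiser was enough
    return True

print(robot_issues_ticket(72, 65))  # True, every single time
print(human_issues_ticket(72, 65, traffic_density=0.1,
                          slowed_when_seen=True,
                          driving_aggressively=False))  # False: let it go
```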

Even though this study was clearly done with lawyers in mind, there is a lot for the comp sci crowd to dissect as well, and it brings into focus the amazing complexity behind a seemingly mundane, if not outright boring, activity and the challenge it poses to AI models. If there's such a rich calculus of philosophical and social cues and decisions behind something like writing a speeding ticket, just imagine how incredibly more nuanced something like tracking potential terrorists half a world away becomes when we break it down on a machine level. We would literally need to create a system with a personality, compassion, and discipline at the same time; in other words, a walking pile of stark contradictions, just like us. And then, we'd need to teach it to find the balance between the need to be objective and decisive, and compassionate and thoughtful, depending on the context of the situation in question. We, who do this our entire lives, have problems with that. How do we get robots to develop such self-contradictory complexity in the form of probabilistic code?

Consider this anecdote. Once upon a time, yours truly and his wife were sitting in a coffee shop after a busy evening, talking about one thing or another. Suddenly, there was a tap on the glass window to my left, and I turned around to see a young, blonde girl with two friends in tow pressing her open palm against the glass. On her palm, she had written in black marker "hi 5." So of course I high-fived her through the glass, much to her and her friends' delight, and they skipped off down the street. Nothing about that encounter or our motivations makes logical sense to any machine whatsoever. Yet, I'm sure you can think of reasons why it took place and propose why the girl and her friends were out collecting high fives through glass windows, or why I decided to play along, and why others might not have. But this requires situational awareness on a scale we're not exactly sure how to create, collecting so much information that it would probably take a small data center to process by recursive neural networks weighing hundreds of factors.

And that is why we are so far from AI as seen in sci-fi movies. We underestimate the complexity of the world around us because we had the benefit of evolving to deal with it. Computers had no such advantage and must start from scratch. If anything, they have a handicap because all the humans who are supposed to program them work at such high levels of cognitive abstraction that it takes them a very long time to even describe their process, much less spell out each and every factor influencing it. After all, how would you explain how to disarm someone wielding a knife to someone who doesn't even know what a punch is, much less how to throw one? How do you teach urban planning to someone who doesn't understand what a car is and what it's built to do? And just when we think we've found something nice and binary, yet complex enough to have real world implications to teach our machines, like writing speeding tickets, we suddenly find out that there was a small galaxy of things we just took for granted in the back of our minds…

[ image: android chip ]

There’s been a blip in the news cycle I’ve been meaning to dive into, but lately, more and more projects have been getting in the way of a steady writing schedule, and there are only so many hours in the day. So what’s the blip? Well, professional tech prophet and the public face of the Singularity as most of us know it, Ray Kurzweil, has a new gig at Google. His goal? To use stats to create an artificial intelligence that will handle web searches and explore the limits of how one could use statistics and inference to teach a synthetic mind. Unlike many of his prognostications about where technology is headed, this project is actually on very sound ground because we’re using search engines more and more to find what we want, and we do it based on the same type of educated guessing that machine learning can tackle quite well. And that’s why instead of what you’ve probably come to expect from me when Kurzweil embarks on a mission, you’ll get a small preview of the problems an artificially intelligent search engine will eventually face.

Machine learning and artificial neural networks are all the rage in the press right now because lots and lots of computing power can now run the millions of simulations required to train rather complex and elaborate behaviors in a relatively short amount of time. Watson couldn't have been built a few decades ago, when artificial neural networks were being mathematically formalized, because we simply didn't have the technology we do today. Today's cloud storage ideas require roughly the same kind of computational might as an intelligent system, and the thinking goes that if you pair the two, you'll not only have your data available anywhere with an internet connection, but you'll also have a digital assistant to fetch you what you need without having to browse through a myriad of folders. Hence systems like Watson and Siri, and now, whatever will come out of the joint Google-Kurzweil effort. These functional AI prototypes are good at navigating context with a probabilistic approach, which successfully models how we think about the world.

So far so good, right? If we're looking for something like "auto mechanics in Random, AZ," your search assistant living in the cloud would know to look at the relevant business listings, and if a lot of these listings link to reviews, it would assume that reviews are an important part of such a search result and bring them over as well. Knowing that reviews are important, it would likely do what it can to read through the reviews and select the mechanics with the most positive reviews that really read as if they were written by actual customers, parsing the text and looking for telltale signs of sockpuppeting like too many superlatives or a rash of users in what seems like a strangely short time window compared to the rest of the reviews. You get good results, some warnings about who to avoid, the AI did its job, you're happy, the search engine is happy, and a couple of dozen tech reporters write gushing articles about this Wolfram Alpha Mark 2. But what if, just what if, you were to search for something scientific, something that brings up lots and lots of manufactroversies, like evolution, climate change, or sexual education materials? The AI isn't going to have the tools to give you the most useful or relevant recommendations there.
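
For illustration, here's a minimal sketch of what that kind of telltale-sign filtering could look like; the superlative list, thresholds, and data layout are all hypothetical inventions of mine, not anything from an actual search engine:

```python
# A minimal sketch of the sockpuppet heuristics described above: flag a
# batch of reviews that leans too hard on superlatives or arrives in a
# strangely short time window. All thresholds are hypothetical.
from datetime import datetime, timedelta

SUPERLATIVES = {"best", "greatest", "amazing", "perfect", "incredible"}

def superlative_ratio(text):
    words = text.lower().split()
    hits = sum(w.strip(".,!?") in SUPERLATIVES for w in words)
    return hits / max(len(words), 1)

def looks_astroturfed(reviews, max_ratio=0.08,
                      burst_window=timedelta(days=2), burst_share=0.5):
    """reviews: list of (timestamp, text) tuples."""
    # Sign 1: a rash of reviews packed into a suspiciously short window.
    times = sorted(t for t, _ in reviews)
    for i, start in enumerate(times):
        burst = [t for t in times[i:] if t - start <= burst_window]
        if len(burst) / len(times) >= burst_share:
            return True
    # Sign 2: too many superlatives across the whole batch.
    avg = sum(superlative_ratio(text) for _, text in reviews) / len(reviews)
    return avg > max_ratio

reviews = [(datetime(2013, 1, 1), "Decent work, fair price."),
           (datetime(2013, 3, 2), "The best, most amazing shop! Perfect!"),
           (datetime(2013, 3, 2), "Greatest mechanic ever, incredible!")]
print(looks_astroturfed(reviews))  # True: two-thirds posted the same day
```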

First off, there's only so much that knowing context will do. For the AI, any page discussing the topic is valid, so a creationist website savaging evolution with unholy fury and a barrage of very, very carefully mined quotes designed to look respectable to the novice reader has the same validity as the archives at Talk Origins, unless a human tells it to prioritize scientific content over religious misrepresentations. Likewise, sites discussing healthy adult sexuality, sites launching into condemnations of monogamy, and sites decrying any sexual activity before marriage as an immoral indulgence of the emotionally defective are all the same to an AI without human input. I shudder to think of the kind of mess trying to accommodate a statistical approach here can make. Yes, we could say that if a user lives in what we know to be a socially conservative area, we place a marked emphasis on the prudish and religious side of things, and if a user is in a moderate or a liberal area, we show a gradient of sound science and alternative views on sexuality. Statistically, it makes sense. In the big picture, it perpetuates socio-political echo chambers.
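
Here's a minimal sketch of that statistical accommodation at work, with made-up labels, scores, and weights, just to show the mechanics of the echo chamber:

```python
# A minimal sketch of the geographic weighting imagined above: results
# matching the local lean get a boost, so each region mostly sees what it
# already believes. Labels, scores, and the boost are all hypothetical.

REGION_LEAN = {"conservative": "traditionalist", "liberal": "scientific"}

def rerank(results, region):
    """results: list of (url, relevance, slant) tuples."""
    preferred = REGION_LEAN.get(region)

    def score(item):
        url, relevance, slant = item
        boost = 1.5 if slant == preferred else 1.0  # the echo-chamber step
        return relevance * boost

    return sorted(results, key=score, reverse=True)

results = [("talkorigins.org", 0.9, "scientific"),
           ("quote-mine.example", 0.8, "traditionalist")]
print(rerank(results, "conservative"))
# quote-mine.example now outranks the science archive
```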

And that introduces a moral dilemma Google and Kurzweil will have to face. Today's search bar takes in your input, finds what look like good matches, and spits them out in pages. Good? Bad? Moral? Immoral? Scientifically valid? Total crackpottery? You, the human, will decide. Having an intelligent search assistant, however, places at least some of the responsibility for trying to filter out or flag bad or heavily biased information on the technology involved, and if the AI is way too accommodating to the user, it will simply perpetuate misinformation and propaganda. If it's a bit too confrontational, or follows a version of the Golden Mean fallacy, it will be seen as defective by users who don't like to step outside of their bubble too much, or by those who'd like their AI to be a little more opinionated and put up an intellectual challenge. Hey, no one said that indexing and curating all human knowledge would be easy, or that it wouldn't require making a stand on what gets top billing when someone tries to dive into your digital library. And here, no amount of machine learning and statistical analysis will save your thinking search engine…

[ image: giant robot ]

Personally, I'm a big fan of Ray Villard's columns because he writes about the same kind of stuff that gets dissected on this blog and the kind of stuff I like to read. Since most of it is wonderfully and wildly speculative, I seldom find something with which to really disagree. But his latest foray into futurism, inspired by Cambridge University's Center for the Study of Existential Risk and its project trying to assess the danger artificial intelligence poses to us, is an exception to this rule. Roughly speaking, Ray takes John Good's idea of humans designing robots better at making new robots than humans are, and runs with its darkest adaptations in futurist lore. His endgame? Galaxies ruled not by "thinking meat" but by immortal machinery which surpassed its squishy creators and built civilizations that dominated their home worlds and beyond. The cosmos, it seems, is destined to be in the cold, icy grip of intelligent machinery rather than a few clever space-faring species.

To cut straight to the heart of the matter, the notion that we'll build robots better at making new and different robots than us is not an objective one. We can certainly build machines that have more efficient approaches and can mass produce their new designs faster than us. But when it comes to a nebulous notion like "better," we have to ask in what way. Over the last century, we've really excelled at measuring how well we do in tasks like math, pattern recognition, or logic. With concrete answers to most problems in these categories, it's fairly straightforward to administer a test heavily emphasizing these skills and compare the scores among the general populace. In dealing with things like creativity or social skills, things are much harder to measure, and it's easy to end up measuring inconsequential things as if they were make or break metrics, or give up on measuring them at all. And the difficulty only goes up when we consider context.

We can complicate the matter even further when we start taking who's judging into account. To judges who aren't very creative people and never have been, some robots' designs might seem like feats beyond the limits of the human imagination. To a panel of artists and pro designers, a machine's effort at creating other robots might seem nifty but predictable, or far too specialized for a particular task to be useful in more than one context. To a group of engineers, having the ability to design just-for-the-job robots might seem like just the right mix of creativity and utility, even though they'd question whether this isn't just a wasteful design. If you're starting to get fuzzy on this hypothetical design-by-machine concept, don't worry. You're supposed to be, since grading designs without very specific guidelines is basically just a matter of personal taste and opinion, where trying to inject objective criteria doesn't help in the least. And yet the Singularitarians who run with Good's idea expect us to assume that this will be an easy win for the machines.

This unshakable belief that computers are somehow destined to surpass us in all things as they get faster and gain bigger hard drives is at the core of the Singularitarianism that gives us these dramatic visions of organic obsolescence and machine domination of the galaxy. But it's wrong from the ground up because it equates processing power and complexity of programming with a number of cognitive abilities which can't be objectively measured for our entire species. Humans are no match for machinery if we have to do millions of mathematical calculations or read a few thousand books in a matter of days. Machines are stronger, faster, and immune to things that'll kill us in a heartbeat. But once we get past measuring FLOPS, upload rates, and spec sheets on industrial robots, how can we argue that robots will be more imaginative than us? How do we explain how they'll get there in more than a few Singularitarian buzzwords that mean nothing in the world of computer science? We don't even know what makes a human creative in a useful or appreciable way. How would we train a computer to replicate a feat we don't understand?

[ illustration by Chester Chien ]

[ image: X-47B takeoff ]

Human Rights Watch has seen the future of warfare and they don't like it, not one bit. It's pretty much inevitable that machines will be doing more and more fighting because they're cheap, and when one of them is destroyed by enemy fire, no one has to lose a father or a mother. Another one will be rolled off the assembly line and thrown into the fray. But the problem, according to a lengthy report by HRW, is that robots couldn't tell civilians from enemy combatants during a war, and so humans should be the ones deciding who gets killed and who doesn't. Being able to distinguish civilians from hostiles is absolutely crucial today because most wars being fought are asymmetric and often involve complex, loosely affiliated groups which move through a civilian population and recruit civilians or so-called "non-state actors" to join them. How do you tell the difference, especially when you're just a collection of circuits running code?

Just as HRW warns in its grandly titled report, robots left to make all the decisions could easily turn into indiscriminate killers, butchering everyone in sight with no human accountable for their actions, because one could always blame a bug or a lack of testing in real world situations for what could all too easily become a war crime. But considering that humans have a hard time telling who is on whose side in Afghanistan, and faced the same problem in Iraq, barely keeping the country together until the population decided to come down hard on the worst of the sectarian militias, how well would a robot fare? HRW may be asking for an impossible goal here: to make a robot better at telling civilians apart from combatants than humans who spend years learning to do that. Of course, as a computer person, I'm intrigued by the idea, but the only viable possibility that I see is to keep the entire population under constant surveillance, log their every movement, word, keystroke, and nervous tic, and parse the resulting oceans of data for patterns.

But how would that look? Excuse us, mind if we wire your building as if we're shooting a reality show, install spyware on your computer, and tap your phones to record everything you say and do so our supercomputer doesn't tell a drone to lob a 1,000 pound warhead through your living room window? Something tells me that's not a viable plan, and even then, mistakes could easily be made by both humans and robots since our intra-cultural interactions are very complex and hard to interpret with certainty. And again, we already spy on people and mistakes are still made, so it's doubtful this technique would help, especially when we consider just how much data would come pouring in. Really, it all comes down to the fact that war is terrible and people get killed in armed conflicts. Mistakes can and will inevitably be made, robots or no robots, and asking that a nation looking to automate its mechanized infantry and air force keep on risking humans is like yelling into the wind. The only way civilians will be spared is if wars are prevented, but preventing wars is a task at which we've been spectacularly failing for thousands of years…

[ image: cyborg hand and eye ]

Journalist and skeptic Steven Poole is breathing fire in his scathing review of the current crop of trendy pop neuroscience books, citing rampant cherry-picking, oversimplifications, and constant presentations of much-debated functions of the brain as having been settled with fMRI and the occasional experiment or two with supposedly definitive results. He goes a little too heavy on the style, ridiculing the clichés of pop neurology and the abuse of the science to land corporate lecture gigs where executives eager to seem innovative want to try out the latest trend in management, and is a little too light on some of the scientific debates he touches on, but overall his point is quite sound. We do not know enough about the brain to start writing casual manuals on how it works and how you can best get in touch with your inner emotional supercomputer. And since so much of the human mind is still an enigma, how can we even approach trying to build an artificial one, as requested by the Singularitarians and those waiting for robot butlers and maids?

While working on the key part of my expansion on Hivemind — which I really need to start putting on GitHub and documenting for public comment — that question has been weighing heavily on my mind because this is basically what I'm building: a decentralized robot brain. But despite my passable knowledge of how operating systems, microprocessors, and code work, and a couple years of psychology in college, I'm hardly a neuroscientist. How would I go about replicating the sheer complexity of a brain in silicon, stacks, and bytes? My answer? I'd take the easy way out and not even try. Evolution is a messy process involving living things that don't stop to try to debug and optimize themselves, so it's little wonder that the brain is a maze of neurons loosely organized by some very vague, basic rules, and is really, really difficult to unravel. It has the immense task of carrying fragments of memory to be reconstructed, consciousness, learned and instinctual responses, sensory processing and recognition, and even high level logic in one wet lump of metabolically vampiric tissue which has to work 24/7/365 for decades.

Computers, however, don't have such taxing requirements. They can save what they need to a physical medium like spinning hard drives or SSDs, and they focus on carrying out just one or a handful of basic instructions at a time. With such a tolerant substrate, why would I want to set my sights on the equivalent of jumping into orbit when I can build something functional enough to serve as a brain for a heap of plastic, metal, and integrated circuitry? For the Hivemind toolkit, I used a structure representing a tree of related concepts set by a user to deal with higher level logic, sort of like how we learn to compartmentalize and categorize the concepts we know, and the same approach will be used in the spawn of Hivemind. Low-level implementation and recognition will also adopt the same pattern of detection and action as explained in the paper. But that's good for carrying out a few scripted actions or looping those actions. For a more nuanced and useful set of behaviors, I'm pursuing a different implementation built on a tool for organizing collections of synchronous and asynchronous monads invented by a team of computer scientists Microsoft imprisons in its dark lair under Mt. Rainier… I mean employs.
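
Since the toolkit isn't on GitHub yet, here's a minimal sketch of what such a user-defined concept tree could look like; the class and method names are my own stand-ins, not Hivemind's actual API:

```python
# A minimal sketch of a user-defined tree of related concepts for higher
# level logic, in the spirit described above. Names are stand-ins rather
# than Hivemind's actual API.

class Concept:
    def __init__(self, name, action=None):
        self.name = name        # e.g. "obstacle", "doorway"
        self.action = action    # callable fired when the concept applies
        self.children = []      # more specific, related concepts

    def add(self, child):
        self.children.append(child)
        return child

    def walk(self):
        # Depth-first traversal: general concepts first, specifics after,
        # mirroring how we compartmentalize and categorize what we know.
        yield self
        for child in self.children:
            yield from child.walk()

navigate = Concept("navigate")
obstacle = navigate.add(Concept("obstacle", action=lambda: "stop"))
obstacle.add(Concept("moving obstacle", action=lambda: "wait and re-scan"))
navigate.add(Concept("doorway", action=lambda: "center and pass through"))

print([c.name for c in navigate.walk()])
# ['navigate', 'obstacle', 'moving obstacle', 'doorway']
```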

Here's the basic idea. When a robot is called on to accomplish a task, we summon all the relevant ideas and their implementations as simple, specialized neural networks which extend from initial classification and recognition of stimuli to the appropriate reaction to said stimuli. That gives us just one fine-tuned neural network per concept. We associate the ideas with the tasks at hand, and put the implementations of the relevant concepts into a collection of actions waiting to fire off as scripted. Then, after the connection with the robot is established and it sends its sensor data to us, we fire off the neural networks in the queue and beam back the appropriate commands in milliseconds. Each target and each task is its own distinct entity, in stark contrast to the overlaps we see in biological brains. Overlaps here come from the higher level logic used to tie concepts together rather than from connections between the artificial neurons, and alternatives can be loaded and calculated in parallel, ready to fire off as soon as we make sense of what the robot reported back to us. And at this point, we can even bring in other robots and establish future timelines for possible events, directing entire bots as the appendages of a decentralized brain.
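
In code, a toy version of that dispatch loop might look something like this; the two stand-in "networks" are trivial lambdas where real, pre-trained classifiers would go:

```python
# A minimal sketch of the queue-and-fire flow described above: one small,
# fine-tuned network per concept, only the relevant ones queued for the
# task, each sensor frame firing them to produce commands. The classify
# and react callables are trivial stand-ins for real neural networks.

from collections import deque

class ConceptNet:
    """One specialized network per concept: stimulus in, reaction out."""
    def __init__(self, concept, classify, react):
        self.concept = concept
        self.classify = classify  # sensor frame -> does this concept apply?
        self.react = react        # sensor frame -> command for the robot

def build_queue(task_concepts, registry):
    # Summon only the concepts relevant to the task at hand.
    return deque(net for net in registry if net.concept in task_concepts)

def on_sensor_frame(queue, frame):
    # Fire the queued networks against the frame, beam back any commands.
    return [net.react(frame) for net in queue if net.classify(frame)]

registry = [
    ConceptNet("obstacle",
               classify=lambda f: f["range_cm"] < 50,
               react=lambda f: "halt"),
    ConceptNet("target",
               classify=lambda f: f["target_seen"],
               react=lambda f: "approach"),
]

queue = build_queue({"obstacle", "target"}, registry)
print(on_sensor_frame(queue, {"range_cm": 30, "target_seen": True}))
# ['halt', 'approach']: each concept stays its own distinct entity
```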

Certainly, something like that bears very little resemblance to what we generally think of when we imagine a brain, because we're used to the notion of a mind being a monolithic entity composed of tightly knit modules rather than a branching queue pulling together distinctly separate bits and pieces of data from distinct compartments. But it has the capacity for carrying out complex and nuanced behaviors, and it can talk to robots that can work with SOAP formatted messages. And that's what we really need an AI to do, isn't it? We want something that can make decisions, be aware of its environment, give us a way to teach it how to weave complex actions from a simple set of building blocks, and offer a way to interact with the outside world. Maybe forgoing a single, self-aware entity is a good way to make that happen and lay the groundwork for combining bigger and more elaborate systems into a single, cohesive whole sometime in the future. Or maybe we could just keep it decentralized and let different instances communicate with each other, kind of like Skynet, but without that whole nuclear weapons and enslavement of humanity thing as it replicates via the web. Though to be up front, I should warn you that, compiled, its key services are about 100 kilobytes, so it could technically spread via a virus…