Archives For artificial intelligence

[ image: plaything ]

A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike other humans, robots are not looking for companionship; they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans, given human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from being a perpetual third wheel to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment of iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of iDollators as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law is concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or criminalized prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that they will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses and will very seldom be the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there might well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, they’ll be just our property.

[ illustration by Michael O ]


[ image: tron_police_600 ]

When four researchers decided to see what would happen when robots issue speeding tickets and the impact it might have on the justice system, they found out two seemingly obvious things about machines. First, robots make binary decisions, so if you’re over the speed limit, you get no leeway or second chances. Second, robots are not smart enough to take into account all of the little nuances a police officer notes when deciding whether or not to issue a ticket. And herein lies the value of this study. Rather than trying to figure out how to get computers to write tickets and determine when to write them, something we already know how to do, the study showed that computers would generate significantly more tickets than human law enforcement, and that even the simplest human laws are too much for our machines to handle without many years of training and very complex artificial neural networks to understand what’s happening and why. A seemingly simple and straightforward task turned out to be anything but.

Basically, here’s what the legal scholars involved say in example form. Imagine you’re speeding down an empty highway at night. You’re sober, alert, in control, and a cop sees you coming and knows you’re speeding. You notice her, hit the brakes, and slow down to an acceptable 5 to 10 miles per hour over the speed limit. Chances are that she’ll let you keep going because you are not being a menace to anyone and the sight of another car, especially a police car, is enough to relieve your mild case of lead foot. Try doing that on a crowded road during rush hour and you’ll more than likely be stopped, especially if you’re aggressively passing or riding bumpers. Robots will issue you a ticket either way because they don’t really track or understand your behavior or the danger you may pose to others, while another human can make a value judgment. Yes, this means that the law isn’t being properly enforced 100% of the time, but that’s ok because it’s not as important to enforce as, say, laws against robbery or assault. Those laws take priority.
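
To make the contrast concrete, here’s a minimal sketch of the difference, with factors and weights I invented for illustration rather than anything taken from the study: the robotic enforcer reduces the whole decision to a single threshold check, while an officer-like policy has to weigh context before writing anything up.

```python
def robot_ticket(speed, limit):
    # The machine's version: one binary check, no leeway, no second chances.
    return speed > limit

def officer_ticket(speed, limit, traffic_density, aggressive, slowed_when_seen):
    # A rough stand-in for an officer's judgment: the same violation is
    # weighed against context before anything gets written up.
    if speed <= limit:
        return False
    danger = min((speed - limit) / limit, 0.5)   # how far over the limit
    if traffic_density > 0.7:
        danger += 0.5                            # rush hour vs. empty highway
    if aggressive:
        danger += 0.3                            # riding bumpers, weaving
    if slowed_when_seen:
        danger -= 0.4                            # the sight of the cruiser worked
    return danger > 0.5

# Empty highway at night, driver slows down the moment the cruiser appears:
print(robot_ticket(72, 65))                          # True: ticketed every time
print(officer_ticket(72, 65, 0.1, False, True))      # False: let off with a scare
```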

Even though this study is clearly done with lawyers in mind, there is also a lot for the comp sci crowd to dissect, and it brings into focus the amazing complexity behind a seemingly mundane, if not outright boring, activity and the challenge it poses to AI models. If there’s such a rich calculus of philosophical and social cues and decisions behind something like writing a speeding ticket, just imagine how much more nuanced something like tracking potential terrorists half a world away becomes when we break it down on a machine level. We literally need to create a system with a personality, compassion, and discipline at the same time, in other words, a walking pile of stark contradictions, just like us. And then, we’d need to teach it to find the balance between the need to be objective and decisive, and compassionate and thoughtful, depending on the context of the situation in question. We, who do this our entire lives, have problems with that. How do we get robots to develop such self-contradictory complexity in the form of probabilistic code?

Consider this anecdote. Once upon a time, yours truly and his wife were sitting in a coffee shop after a busy evening and talking about one thing or another. Suddenly, there was a tap on the glass window to my left, and I turned around to see a young, blonde girl with two friends in tow pressing her open palm against the glass. On her palm, she had written "hi 5" in black marker. So of course I high-fived her through the glass, much to her and her friends’ delight, and they skipped off down the street. Nothing about that encounter or our motivations makes logical sense to any machine whatsoever. Yet, I’m sure you can think of reasons why it took place and propose why the girl and her friends were out collecting high fives through glass windows, or why I decided to play along, and why others might not have. But this requires situational awareness on a scale we’re not exactly sure how to create, collecting so much information that processing it with recursive neural networks weighing hundreds of factors would probably require a small data center.

And that is why we are so far from AI as seen in sci-fi movies. We underestimate the complexity of the world around us because we had the benefit of evolving to deal with it. Computers had no such advantage and must start from scratch. If anything, they have a handicap because all the humans who are supposed to program them work at such high levels of cognitive abstraction, it takes them a very long time to even describe their process, much less elaborate on each and every factor influencing it. After all, how would you explain how to disarm someone wielding a knife to someone who doesn’t even know what a punch is, much less how to throw one? How do you try to teach urban planning to someone who doesn’t understand what a car is and what it’s built to do? And just when we think we’ve found something nice and binary yet complex enough to have real world implications to teach our machines, like writing speeding tickets, we suddenly find out that there was a small galaxy of things we just took for granted in the back of our minds…


[ image: android chip ]

There’s been a blip in the news cycle I’ve been meaning to dive into, but lately, more and more projects have been getting in the way of a steady writing schedule, and there are only so many hours in the day. So what’s the blip? Well, professional tech prophet and the public face of the Singularity as most of us know it, Ray Kurzweil, has a new gig at Google. His goal? To use stats to create an artificial intelligence that will handle web searches and explore the limits of how one could use statistics and inference to teach a synthetic mind. Unlike many of his prognostications about where technology is headed, this project is actually on very sound ground because we’re using search engines more and more to find what we want, and we do it based on the same type of educated guessing that machine learning can tackle quite well. And that’s why instead of what you’ve probably come to expect from me when Kurzweil embarks on a mission, you’ll get a small preview of the problems an artificially intelligent search engine will eventually face.

Machine learning and artificial neural networks are all the rage in the press right now because lots and lots of computing power can now run the millions of simulations required to train rather complex and elaborate behaviors in a relatively short amount of time. Watson couldn’t have been built a few decades ago, when artificial neural networks were being mathematically formalized, because we simply didn’t have the technology we do today. Today’s cloud storage ideas require roughly the same kind of computational might as an intelligent system, and the thinking goes that if you pair the two, you’ll not only have your data available anywhere with an internet connection, but you’ll also have a digital assistant to fetch you what you need without having to browse through a myriad of folders. Hence, systems like Watson and Siri, and now, whatever will come out of the joint Google-Kurzweil effort, and these functional AI prototypes are good at navigating context with a probabilistic approach, which successfully models how we think about the world.

So far so good, right? If we’re looking for something like "auto mechanics in Random, AZ," your search assistant living in the cloud would know to look at the relevant business listings, and if a lot of these listings link to reviews, it would assume that reviews are an important part of such a search result and bring them over as well. Knowing that reviews are important, it would likely do what it can to read through the reviews and select the mechanics with the most positive reviews that really read as if they were written by actual customers, parsing the text and looking for any telltale signs of sockpuppeting, like too many superlatives or a rash of reviews posted in what seems like a strangely short time window compared to the rest. You get good results, some warnings about who to avoid, the AI did its job, you’re happy, the search engine is happy, and a couple of dozen tech reporters write gushing articles about this Wolfram Alpha Mark 2. But what if, just what if, you were to search for something scientific, something that brings up lots and lots of manufactroversies, like evolution, climate change, or sexual education materials? The AI isn’t going to have the tools to give you the most useful or relevant recommendations there.
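
For the sake of illustration, here’s a rough sketch of the kind of heuristics I’m describing, with thresholds and field names pulled out of thin air rather than from any real search engine’s pipeline: flag a listing when its reviews lean too heavily on superlatives or arrive in a suspiciously tight burst.

```python
from datetime import date, timedelta

SUPERLATIVES = {"best", "amazing", "perfect", "incredible", "flawless", "unbeatable"}

def looks_like_sockpuppetry(reviews, burst_days=3, burst_fraction=0.5):
    """Flag a listing whose reviews show the telltale signs described above:
    a glut of superlatives or a rash of posts in a strangely short window."""
    if not reviews:
        return False
    gushing = sum(
        1 for r in reviews
        if len(SUPERLATIVES & set(r["text"].lower().split())) >= 2
    )
    dates = sorted(r["posted"] for r in reviews)
    window = timedelta(days=burst_days)
    biggest_burst = max(
        sum(1 for d in dates if start <= d <= start + window) for start in dates
    )
    return (gushing / len(reviews) > 0.5
            or biggest_burst / len(reviews) > burst_fraction)

# Five glowing reviews posted within two days of each other get flagged.
burst = [{"text": "best mechanic ever, perfect amazing service",
          "posted": date(2013, 5, 1) + timedelta(days=i % 2)} for i in range(5)]
print(looks_like_sockpuppetry(burst))   # True
```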

First off, there’s only so much that knowing context will do. For the AI, any page discussing the topic is valid, so a creationist website savaging evolution with unholy fury and a barrage of very, very carefully mined quotes designed to look respectable to the novice reader, and the archives at Talk Origins, have the same validity unless a human tells it to prioritize scientific content over religious misrepresentations. Likewise, sites discussing healthy adult sexuality, sites going off in their condemnations of monogamy, and sites decrying any sexual activity before marriage as an immoral indulgence of the emotionally defective, are all the same to an AI without human input. I shudder to think of the kind of mess trying to accommodate a statistical approach here can make. Yes, we could say that if a user lives in what we know to be a socially conservative area, we should place a marked emphasis on the prudish and religious side of things, and if a user is in a moderate or liberal area, show a gradient of sound science and alternative views on sexuality. Statistically, it makes sense. In the big picture, it perpetuates socio-political echo chambers.
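
Here’s a toy model of that statistical shortcut, with invented click rates, region names, and site names, just to show why it looks defensible query by query and still walls each region into its own bubble.

```python
# A purely illustrative model of the "tailor results to the region" shortcut.
# The click rates, region names, and site names are all invented.
REGIONAL_CLICK_RATES = {
    "conservative_county": {"abstinence_site": 0.8, "health_dept_curriculum": 0.2},
    "liberal_city":        {"abstinence_site": 0.1, "health_dept_curriculum": 0.9},
}

def rank_results(results, region, relevance):
    # Score = topical relevance nudged by what the region already clicks on.
    # Reasonable for any one query, but each region only ever sees its own bubble.
    prior = REGIONAL_CLICK_RATES[region]
    return sorted(results, key=lambda r: relevance[r] * prior.get(r, 0.05), reverse=True)

results = ["abstinence_site", "health_dept_curriculum"]
relevance = {"abstinence_site": 0.7, "health_dept_curriculum": 0.7}   # equally on-topic
print(rank_results(results, "conservative_county", relevance))  # abstinence_site first
print(rank_results(results, "liberal_city", relevance))         # health_dept_curriculum first
```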

And that introduces a moral dilemma Google and Kurzweil will have to face. Today’s search bar takes in your input, finds what look like good matches, and spits them out in pages. Good? Bad? Moral? Immoral? Scientifically valid? Total crackpottery? You, the human, will decide. Having an intelligent search assistant, however, places at least some of the responsibility for trying to filter out or flag bad or heavily biased information on the technology involved, and if the AI is way too accommodating to the user, it will simply perpetuate misinformation and propaganda. If it’s a bit too confrontational, or follows a version of the Golden Mean fallacy, it will be seen as defective by users who don’t like to step outside of their bubble too much, or by those who’d like their AI to be a little more opinionated and put up an intellectual challenge. Hey, no one said that indexing and curating all human knowledge would be easy, or that it wouldn’t require taking a stand on what gets top billing when someone tries to dive into your digital library. And here, no amount of machine learning and statistical analysis will save your thinking search engine…


[ image: giant robot ]

Personally, I’m a big fan of Ray Villard’s columns because he writes about the same kind of stuff that gets dissected on this blog and the kind of stuff I like to read. Since most of it is wonderfully and wildly speculative, I seldom find something with which to really disagree. But his latest foray into futurism, inspired by Cambridge University’s Centre for the Study of Existential Risk and its project to assess the danger artificial intelligence poses to us, is an exception to this rule. Roughly speaking, Ray takes John Good’s idea of humans designing robots better at making new robots than we are, and runs with its darkest adaptations in futurist lore. His endgame? Galaxies ruled not by "thinking meat" but by immortal machinery which surpassed its squishy creators and built civilizations that dominated their home worlds and beyond. The cosmos, it seems, is destined to be in the cold, icy grip of intelligent machinery rather than a few clever space-faring species.

To cut straight to the heart of the matter, the notion that we’ll build robots better at making new and different robots than us is not an objective one. We can certainly build machines that have more efficient approaches and can mass produce their new designs faster than we can. But when it comes to a nebulous notion like "better," we have to ask in what way. Over the last century, we’ve really excelled at measuring how well we do in tasks like math, pattern recognition, or logic. With concrete answers to most problems in these categories, it’s fairly straightforward to administer a test heavily emphasizing these skills and compare the scores among the general populace. When dealing with things like creativity or social skills, things are much harder to measure, and it’s easy to end up measuring inconsequential things as if they were make or break metrics, or give up on measuring them at all. And the difficulty only goes up when we consider context.

We can complicate the matter even further when we start taking who’s judging into account. To judges who aren’t very creative people and never have been, some robots’ designs might seem like feats beyond the limits of the human imagination. To a panel of artists and pro designers, a machine’s effort at creating other robots might seem nifty but predictable, or far too specialized for a particular task to be useful in more than one context. To a group of engineers, having the ability to design just-for-the-job robots might seem like just the right mix of creativity and utility, even though they’d question whether this isn’t just a wasteful design. If you’re starting to get fuzzy on this hypothetical design-by-machine concept, don’t worry. You’re supposed to be, since grading designs without very specific guidelines is basically just a matter of personal taste and opinion, where trying to inject objective criteria doesn’t help in the least. And yet the Singularitarians who run with Good’s idea expect us to assume that this will be an easy win for the machines.

This unshakable belief that computers are somehow destined to surpass us in all things as they get faster and have bigger hard drives is at the core of the Singularitarianism that gives us these dramatic visions of organic obsolescence and machine domination of the galaxy. But it’s wrong from the ground up because it equates processing power and complexity of programming with a number of cognitive abilities which can’t be objectively measured for our entire species. Humans are no match for machinery if we have to do millions of mathematical calculations or read a few thousand books in a matter of days. Machines are stronger, faster, and immune to things that’ll kill us in a heartbeat. But once we get past measuring FLOPS, upload rates, and spec sheets on industrial robots, how can we argue that robots will be more imaginative than us? How do we try to explain how they’ll get there in anything more than a few Singularitarian buzzwords that mean nothing in the world of computer science? We don’t even know what makes a human creative in a useful or appreciable way. How would we train a computer to replicate a feat we don’t understand?

[ illustration by Chester Chien ]


[ image: x47b takeoff ]

Human Rights Watch has seen the future of warfare and they don’t like it, not one bit. It’s pretty much inevitable that machines will be doing more and more fighting because they’re cheap, and when one of them is destroyed by enemy fire, no one has to lose a father or a mother. Another one will be rolled off the assembly line and thrown into the fray. But the problem, according to a lengthy report by HRW, is that robots couldn’t tell civilians from enemy combatants during a war, and so humans should be the ones deciding who gets killed and who doesn’t. Being able to distinguish civilians from hostiles is absolutely crucial because most wars being fought today are asymmetric and often involve complex, loosely affiliated groups which move through a civilian population and recruit civilians or so-called "non-state actors" to join them. How do you tell the difference, especially when you’re just a collection of circuits running code?

Just as HRW warns in its grandly titled report, robots left to make all the decisions could easily turn into indiscriminate killers, butchering everyone in sight, and no human would be accountable for their actions because one could always blame what could all too easily become a war crime on a bug or a lack of testing in real world situations. But considering that humans have a hard time telling who is on whose side in Afghanistan, and faced the same problem in Iraq, where the country only held together once the population decided to come down hard on the worst of the sectarian militias, how well would a robot fare? HRW may be asking for an impossible goal here: to make a robot better at telling civilians apart from combatants than humans who spend years learning to do that. Of course as a computer person, I’m intrigued by the idea, but the only viable possibility that I see is to keep the entire population under constant surveillance, log their every movement, word, keystroke, and nervous tic, and parse the resulting oceans of data for patterns.

But how would that look? Excuse us, mind if we wire your building as if we’re shooting a reality show, install spyware on your computer, and tap your phones to record everything you say and do so our supercomputer doesn’t tell a drone to lob a 1,000 pound warhead through your living room window? Something tells me that’s not a viable plan, and even then, mistakes could easily be made by both humans and robots since our intra-cultural interactions are very complex and hard to interpret with certainty. And again, we already spy on people and mistakes are still made, so it’s doubtful this technique would help, especially when we consider just how much data would come pouring in. Really, it all comes down to the fact that war is terrible and people get killed in armed conflicts. Mistakes can and will inevitably be made, robots or no robots, and asking that a nation looking to automate its mechanized infantry and air force keep on risking humans is like yelling into the wind. The only way civilians will be spared is if wars are prevented, but preventing wars is a task at which we’ve been spectacularly failing for thousands of years…


[ image: cyborg hand and eye ]

Journalist and skeptic Steven Poole is breathing fire in his scathing review of the current crop of trendy pop neuroscience books, citing rampant cherry-picking, oversimplifications, and constant presentations of much-debated functions of the brain as having been settled with fMRI and the occasional experiment or two with supposedly definitive results. He goes a little too heavy on the style, ridiculing the clichés of pop neurology and the abuse of the science to land corporate lecture gigs where executives eager to seem innovative want to try out the latest trend in management, and he is a little too light on some of the scientific debates he touches on, but overall his point is quite sound. We do not know enough about the brain to start writing casual manuals on how it works and how you can best get in touch with your inner emotional supercomputer. And since so much of the human mind is still an enigma, how can we even approach trying to build an artificial one, as requested by the Singularitarians and those waiting for robot butlers and maids?

While working on the key part of my expansion on Hivemind — which I really need to start putting on GitHub and documenting for public comment — that question has been weighing heavily on my mind because this is basically what I’m building: a decentralized robot brain. But despite my passable knowledge of how operating systems, microprocessors, and code work, and a couple years of psychology in college, I’m hardly a neuroscientist. How would I go about replicating the sheer complexity of a brain in silicon, stacks, and bytes? My answer? I’d take the easy way out and not even try. Evolution is a messy process involving living things that don’t stop to try to debug and optimize themselves, so it’s little wonder that the brain is a maze of neurons loosely organized by some very vague, basic rules and really, really difficult to unravel. It has the immense task of carrying fragments of memory to be reconstructed, consciousness, learned and instinctual responses, sensory processing and recognition, and even high level logic in one wet lump of metabolically vampiric tissue which has to work 24/7/365 for decades.

Computers, however, don’t have such taxing requirements. They can save what they need to a physical medium like spinning hard drives or SSDs, and they focus on carrying out just one or a handful of basic instructions at a time. With such a tolerant substrate, why would I want to set my sights on the equivalent of jumping into orbit when I can build something functional enough to serve as a brain for a heap of plastic, metal, and integrated circuitry? For the Hivemind toolkit, I used a structure representing a tree of related concepts set by a user to deal with higher level logic, sort of like how we learn to compartmentalize and categorize the concepts we know, and the same approach will be used in the spawn of Hivemind. Low-level implementation and recognition will also adopt the same pattern of detection and action as explained in the paper. But that’s good for carrying out a few scripted actions or looping those actions. For a more nuanced and useful set of behaviors, I’m pursuing a different implementation built on a tool for organizing collections of synchronous and asynchronous monads invented by a team of computer scientists Microsoft imprisons in its dark lair under Mt. Rainier… I mean employs.
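
To give a rough idea of what I mean by a user-defined concept tree, here’s a bare-bones sketch. It’s not the actual Hivemind code, and the strings standing in for networks and commands are purely hypothetical.

```python
class Concept:
    """A node in a user-defined tree of related concepts. Category nodes organize,
    leaf nodes point at the small network that recognizes the concept and the
    command to issue when it's recognized."""
    def __init__(self, name, network=None, action=None):
        self.name = name
        self.network = network    # a specialized classifier; None for pure categories
        self.action = action      # command to send to the robot on recognition
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        # Depth-first lookup by concept name.
        if self.name == name:
            return self
        for child in self.children:
            found = child.find(name)
            if found:
                return found
        return None

# "obstacle" is a category; "wall" and "ledge" are leaf concepts with their own nets.
root = Concept("navigation")
obstacle = root.add(Concept("obstacle"))
obstacle.add(Concept("wall", network="wall_net", action="turn_left"))
obstacle.add(Concept("ledge", network="ledge_net", action="stop"))
print(root.find("ledge").action)   # stop
```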

Here’s the basic idea. When a robot is called to accomplish a task, we summon all the relevant ideas and their implementations as simple, specialized neural networks which extend from initial classification and recognition of stimuli to the appropriate reaction to said stimuli. That gives us just one fine-tuned neural network per concept. We associate the ideas with the tasks at hand, and put the implementations of the relevant concepts into a collection of actions waiting to fire off as scripted. Then, after the connection with the robot is established and it sends its sensor data to us, we fire off the neural networks in the queue and beam back the appropriate commands in milliseconds. Each target and each task is its own distinct entity, in stark contrast to the overlaps we see in biological brains. Overlaps here come from the higher level logic used to tie concepts together rather than from connections between the artificial neurons, and alternatives can be loaded and calculated in parallel, ready to fire off as soon as we’ve made sense of what the robot reported back to us. And at this point we can even bring in other robots and establish future timelines for possible events by directing entire bots as the appendages of a decentralized brain.
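
In skeletal form, that dispatch loop might look something like the sketch below. The classifiers, thresholds, and commands are stand-ins I made up for illustration, and a plain loop takes the place of the asynchronous streams the real implementation would use.

```python
from collections import deque

def plan_task(task, concept_index):
    """Summon the per-concept networks relevant to a task and queue them up:
    one fine-tuned network per concept, waiting for sensor data to arrive."""
    return deque(concept_index[name] for name in task["concepts"])

def on_sensor_data(queue, frame):
    """Fire each queued network against the incoming frame and collect the
    commands to beam back to the robot."""
    commands = []
    for concept in queue:
        score = concept["network"](frame)          # recognition step
        if score > concept["threshold"]:
            commands.append(concept["action"])     # reaction step
    return commands

# Hypothetical per-concept classifiers standing in for trained networks.
concept_index = {
    "wall":  {"network": lambda f: 1.0 if f["range_cm"] < 30 else 0.0,
              "threshold": 0.5, "action": "turn_left"},
    "ledge": {"network": lambda f: 1.0 if f["drop_cm"] > 10 else 0.0,
              "threshold": 0.5, "action": "stop"},
}
queue = plan_task({"concepts": ["wall", "ledge"]}, concept_index)
print(on_sensor_data(queue, {"range_cm": 22, "drop_cm": 2}))   # ['turn_left']
```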

Certainly, something like that bears very little resemblance to what we generally think of when we imagine a brain, because we’re used to the notion of a mind being a monolithic entity composed of tightly knit modules rather than a branching queue pulling together distinctly separate bits and pieces of data from distinct compartments. But it has the capacity for carrying out complex and nuanced behaviors, and it can talk to robots that can work with SOAP-formatted messages. And that’s what we really need an AI to do, isn’t it? We want something that can make decisions, be aware of its environment, give us a way to teach it how to weave complex actions from a simple set of building blocks, and a way to interact with the outside world. Maybe forgoing a single, self-aware entity is a good way to make that happen and lay the groundwork for combining bigger and more elaborate systems into a single, cohesive whole sometime in the future. Or maybe, we could just keep it decentralized and let different instances communicate with each other, kind of like Skynet, but without that whole nuclear weapons and enslavement of humanity thing as it replicates via the web. Though to be up front, I should warn you that, compiled, its key services are about 100 kilobytes, so it could technically spread via a virus…


Once again, apologies for the long silence and highly intermittent posts, but I’ve been working on a number of things that required a lot of time and careful attention, and I’m ready to start making some announcements. In the very near future, you’ll be seeing a lot more about AI projects, and those of you who are not faint of heart will also get a chance to see some real code intended to give what are ordinarily mindless machines a collective brain and, hopefully, with enough data and experiments, some semblance of sapience. And maybe a few little helper apps you could use on other projects thrown in as a bonus, since the Hivemind framework uses them, and if I’m making the code available for others to modify at will, it only makes sense to include the source files for them as well. But of course there’s more to this project than two relatively small open source repositories; those are just a baseline to refine some of the key pieces of the code. I’m already working on a new version, a distributed application closer to a finished product, featuring things like logs and support for parallel queries, though more on that tomorrow. Oh, and since I’m dropping hints, there may be a future tie-in to a book…


Singularitarians generally believe two things about artificial intelligence. First and foremost, they say, it’s just a matter of time before we have an AI system that will quickly become superhumanly intelligent. Secondly, and a lot more ominously, they believe that this system would sweep away humanity, not because it will be evil by nature but because it won’t care about humans or what happens to them, so one of the biggest priorities for a researcher in the AI field should be figuring out how to develop a friendly artificial intelligence, almost training it like one would train a pet, with a mix of operant conditioning and software. The first point is one that I’ve covered several times before, pointing out again and again that superhuman is a very relative term and that computers are in many ways already superhuman without being intelligent. The second point is one that I haven’t yet given a proper examination. And neither have vocal Singularitarians. Why? Because if you read any of the papers on their version of friendly AI, you’ll soon discover how quickly they begin to describe the system they’re trying to tame as a black box with mostly known inputs and measurable outputs, hardly a confident and lucid description of how an artificial intelligence would function and, ultimately, what rules will govern it.

No problem there, say the Singularitarians, the system will be so advanced by the time this happens that we’ll be very unlikely to know exactly how it functions anyway. It will modify its own source code, optimize how well it performs, and generally be all but inscrutable to computer scientists. Sounds great for comic books, but when we’re talking about real artificially intelligent systems, this approach sounds more like surrendering, letting robots, artificial neural networks, and Bayesian classifiers come up with whatever intelligence they want while all the researchers and programmers are sent out for coffee in the meantime. Artificial intelligence will not grow from a vacuum; it will come together from systems used to tackle discrete tasks, governed by several common frameworks, if not a single one, that exchange information between these systems. I say this because the only forms of intelligence we can readily identify are found in living things which use a brain to perform cognitive tasks, and since brains seem to be wired this way and we’re trying to emulate the basic functions of the brain, it wouldn’t be all that much of a stretch to assume that we’d want to combine systems good at related tasks and build on the accomplishments of existing systems. And to combine them, we’ll have to know how to build them.

Conceiving of an AI as a black box is a good approach if we want to test how a particular system should react when working with the AI, focusing on the system we’re trying to test by mocking the AI’s responses down the chain of events. Think of it as dependency injection with an AI interfacing system. But by abstracting the AI away, what we’ve also done is made it impossible to test the inner workings of the AI system. No wonder then that the Singularitarian fellows have to bring in operant conditioning or social training to basically housebreak the synthetic mind into doing what they need it to do. They have no other choice. In their framework, we cannot simply debug the system or reset its configuration files to limit its actions. But why have they resigned themselves to such an odd notion, and why do they assume that computer scientists are creating something they won’t be able to control? Even more bizarrely, why do they think that an intelligence that can’t be controlled by its creators could be controlled by a module they’ll attach to the black box to regulate how nice or malevolent towards humans it would be? Wouldn’t it just find a way around that module too if it’s superhumanly smart? Wouldn’t it make a lot more sense for its creators to build it to act in cooperation with humans, by watching what humans say or do, treating each reaction or command as a trigger for carrying out a useful action it was trained to perform?
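
If the dependency injection analogy sounds abstract, here’s a tiny illustrative sketch, with made-up class names, of what mocking an AI’s responses looks like: you can verify everything wired around the black box while learning nothing about what’s inside it.

```python
class CannedAI:
    """A mock standing in for the black box: fixed responses to known inputs.
    Perfect for testing everything wired around the AI, useless for testing
    anything inside it."""
    def __init__(self, responses):
        self.responses = responses

    def decide(self, situation):
        return self.responses[situation]

class Dispatcher:
    """Some downstream system that consumes the AI's decisions. The AI is
    injected, so tests can hand it a CannedAI instead of the real thing."""
    def __init__(self, ai):
        self.ai = ai

    def handle(self, situation):
        return f"executing: {self.ai.decide(situation)}"

def test_dispatcher_routes_the_decision():
    dispatcher = Dispatcher(CannedAI({"intruder_detected": "alert_operator"}))
    assert dispatcher.handle("intruder_detected") == "executing: alert_operator"

test_dispatcher_routes_the_decision()
```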

And that brings us back full circle. To train machines to do something, we have to lay out a neural network and some higher level logic to coordinate what the networks’ outputs mean. We’ll need to confirm that the training was successful before we employ it for any specific task. Therefore, we’ll know how it learned, what it learned, and how it makes its decisions, because all machines work on propositional logic and hence would make the same choice or set of choices at any given time. If it didn’t, we wouldn’t use it. So of what use is a black box AI here when we can just lay out the logical diagram and figure out how it’s making decisions and how we can alter its cognitive process if need be? Again, we could isolate the components and mock their behavior to test how individual sub-systems function on their own, eliminating the dependencies for each set of tests. Beyond that, this black box is either a hindrance to a researcher or a vehicle for someone who doesn’t know how to build a synthetic mind but really, really wants to talk about what he imagines it will be like and how to harness its raw cognitive power. And that’s ok, really. But let’s not pretend that we know that an artificial intelligence beyond its creators’ understanding will suddenly emerge from the digital aether when the odds of that are similar to my toaster coming to life and barking at me when it thinks I want to feed it some bread.
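
As a toy example of that train-confirm-inspect loop, here’s a throwaway perceptron learning logical AND. The specifics are mine for illustration, but the point stands: you verify the training before you rely on it, the learned weights are right there to inspect, and the same input produces the same answer every time.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a toy perceptron; data is a list of ((x1, x2), label) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical AND, then confirm the training worked before relying on it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
accuracy = sum(predict(w, b, *x) == y for x, y in data) / len(data)
assert accuracy == 1.0                               # confirm training before use
assert predict(w, b, 1, 1) == predict(w, b, 1, 1)    # same input, same answer, every time
print(w, b)                                          # the learned weights are inspectable
```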


Long time readers probably noticed that the last month was a little off. Posts weren’t coming as per the blog’s natural rhythm and the annual April Fools gag was also absent. But there was a good reason for this, one I’d be happy to share with you if it weren’t for the fact that you cannot use your own blog to shamelessly plug your stuff or the blogging police will come to your house. Wait, what? You totally can? And there’s no definitive body of blogging standards looking after you? Well, in that case, here’s one project I’ve been trying to eke out a few minutes here and there to finish: Hivemind, a software kit for managing random robots and the artificial neural networks they would use to detect and respond to stimuli. Instead of being custom tailored to any particular robotics platform and meant to make a specific machine or two more autonomous, Hivemind was built on the idea of having a small swarm of bots ready to do your bidding and organized by what you’ve taught them to do, so the right ones can be selected for a task you have in mind. Of course this is still a work in progress, but the basics of maintaining all the necessary information and the libraries for a complete API are almost there.

While looking for a topic for my thesis project in grad school, I came across many different ideas for how one could work with robots, ranging from various applications of graph theory to individual machines which would then figure out who’s coming and who’s going, to using robots as web services, something touted by PopSci as a groundbreaking project but in reality abandoned in the ROS open source repository as an experimental library, not guaranteed to work. Hivemind is designed to take a step back, answer the question of what you’re trying to get the robot or robots to accomplish, and then select the right bots for the job. I’m hoping that with an adequate amount of time and feedback, it could even be used to recommend robot configurations, but for now it’s still all about refining the basics and making sure the underlying structure works smoothly and can provide an honest to goodness framework for training, experimenting with, and managing robot swarms. It doesn’t train bots on its own because there are a lot of ANN packages out there used by a lot of researchers and I doubt I’d make them switch over to something completely new. Instead, they could simply export their ANNs’ data into a string-based format for Hivemind, outlined in the paper, and plug it into the framework as a new asset.
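
The actual string format is the one outlined in the paper cited below, so don’t take the following as anything more than a made-up stand-in; it only illustrates the general workflow of serializing a network trained elsewhere and registering it with the framework as a named asset.

```python
import json

# NOT the format from the paper; a made-up JSON stand-in to show the workflow:
# export a network trained elsewhere as a string, then register it as an asset.
def export_ann(weights, outputs):
    return json.dumps({"weights": weights, "outputs": outputs})

class AssetRegistry:
    def __init__(self):
        self.assets = {}

    def register(self, name, ann_string):
        # Plug the serialized network into the framework as a new named asset.
        self.assets[name] = json.loads(ann_string)

    def outputs_for(self, name):
        return self.assets[name]["outputs"]

registry = AssetRegistry()
registry.register("ir_obstacle_detector",
                  export_ann([[0.4, -1.2], [0.9, 0.3]], ["clear", "obstacle"]))
print(registry.outputs_for("ir_obstacle_detector"))   # ['clear', 'obstacle']
```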

Ultimately, this is something I’d like to finish polishing and post on GitHub for beta testing, to collect feedback from anyone who’d like to use it to have their trained robots tool around showing off what they do, or to find out how well their neural networks perform in the real world. The sad part is that because there’s no standardized set of libraries for communicating with all robotic platforms, users would have to either write their own or use the utilities provided with their machines. For its part, Hivemind would let them correctly format their commands to be sent, and hook up the neural network outputs to the right commands via a utility library. Meanwhile, in case you’re wondering, this will be submitted to a peer-reviewed journal as soon as it’s pared down to the template required by the journal I’m targeting. Even in computer science it can still take months between submission and being told whether the paper was accepted or not, so while that’s going on in the background, I figure that there’s nothing to lose from posting a preprint and refining the project. If anything, comments, questions, and critiques from those interested in the research would only make it a better tool. Oh, and for those of you who’d like to try it out when it’s posted but are horrified at the idea of running it on Windows 7, look into Mono.

See: Fish, G. (2012). Managing artificial neural networks with a service-based mediator. arXiv:1204.0262v1


When a former experimental developer like Ray Kurzweil forecasts that computers will soon awaken and rise to become smarter than their human handlers, you can see why he would make this argument. After all, if you go strictly by the pop sci press, you would think that robots are taking over all human jobs and that we’re oh so close to unlocking the secrets of consciousness and memory. Having become an armchair computer scientist years ago and profited by starting a movement of those anticipating the coming AI Rapture, while somehow still remaining an influential atheist, he doesn’t know better, to be blunt. Probe his predictions any deeper than the surface and you’ll see that despite his claims of constant study, he hasn’t done any research into the very real and well known problems with his big ideas, as demonstrated very clearly by his genome-to-brain flap. And this is why his record of successfully predicting future tech trends is abysmal. But what do we make of a serious, well published AI researcher giving a TED talk about the Omega Point, a time when his inventions will simply take over and figure out all the complex and interesting scientific phenomena we just can’t seem to crack?

If you haven’t heard anything about cutting edge computers being used by scientists to crunch data in new and useful ways or keep track of all the papers being generated by their colleagues, Jürgen Schmidhuber will sound very convincing, especially when he talks about the sheer volume of data that can be processed by a computer and how easily it can be programmed to tease out relationships between individual points of odd or otherwise interesting values. But the truth of the matter is that he’s coming to his conclusions by simplifying what is actually involved in the process of making all these things work. Well duh, of course he’s simplifying a complex field, you might argue, it’s a TED talk! How complicated can you get in fifteen minutes about things it takes years to properly study? While that’s certainly true, the oversimplification in play here is one in his mind, not the result of having to compress his thoughts into a short presentation. He’s assuming that because you can build a machine that can crunch a lot of data and offer a lot of formulas you’ll have to sort through to find a promising one, or point to new areas of study after crunching immense amounts of data, you can summon a digital equivalent of Tesla or Einstein and outsource all your problems to this machine. He calls the point when this should happen the Omega, then admits that it’s just a rehash of Singularity lore. We’ve seen before why a good deal of Singularitarian thought on AI just doesn’t work, and his overview is not much better; if anything, it’s more of a plug for his ambitious artificial neural network lab than anything else.

When you take a look at his work, you’ll see a lot of experiments with artificial neural networks to achieve fast, efficient memory utilization at runtime, improve reliability in existing classifiers, or hybridize concepts from a couple of networks to come up with brand new ways of computing values for virtual neurons and their inputs, along with a number of papers on universal computing concepts. One paper may talk about optimizing how a particular ANN setup will crunch numbers faster, another will lay out formulas for how a machine would make a choice to modify itself or find an optimal solution to a particular problem. They’re all very interesting and well written, certainly something plenty of computer scientists could sink their teeth into for a while, but none of it is in any way, shape or form as close to letting him build a virtual scientist as he claims in his TED talk. He actually seems to be borrowing from Nick Bostrom’s concept of a self-emergent superintelligence, one of Bostrom’s loftier ideas about the future of computation, emerging from his musings on the likelihood of us being a virtual experiment of an advanced species rather than existing in what one would call reality. Again, it’s great fodder for science fiction and makes for mind-blowing slideshows, but it falls short when it comes to making all this actually work. If the scientific knowledge we’ve accumulated so far never had to be updated, and we knew with absolute certainty that everything we’ve figured out by this point is correct and all peer-reviewed works could be safely taken at their word, we could at least entertain the notion. But we know that’s not the case.

[ story tip by Jordan ]
