artificial intelligence gets a reality check

September 2, 2009

New Scientist just interviewed robotics expert Noel Sharkey, who offers a very realistic take on why AI is far more difficult to create than many tech evangelists predict, and explains how programming robots to respond to a situation the way we would doesn’t actually bring us closer to genuine intelligence. In the world of theoretical computer science, it’s becoming harder and harder to ignore the growing number of starry-eyed dreamers who believe that machines are about to get so fast and powerful that huge computer networks capable of sentient thought are just a few decades away. What Sharkey does here is step in to do something that will be very unpopular but extremely necessary: injecting a dose of reality into some of these lofty theories making headline after headline in popular science and news media pieces.

sentient machine

While organizations convene to talk about human/robot relations and make it their stated goal to create calm and peaceful relationships between us and futuristic machines somehow endowed with sentient thought, all to prevent The Matrix from becoming humanity’s future, Sharkey shines a light on the fact that when it comes to AI, we’re barking up the wrong tree, and our fears of robot takeovers are more of a cultural meme than a realistic concern.

Are machines capable of intelligence?
If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.

So why are predictions about robots taking over the world so common?
There has always been fear of new technologies based on people’s difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. [Machines] do not have a will or a desire, so why would they “want” to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn’t believe [that] AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.

Yes, I readily admit that we hold the same opinions on how AI software is actually built and on how computing power relates to intelligence, share the same concerns about potentially fatal glitches in military robots, and have the same doubts about machine takeovers of humanity. But just because our work led us to the same conclusions doesn’t mean that these points aren’t valid. Machinery is machinery. It’s not some sort of living object. It’s metal, plastic and silicon. The only thing it can do is transmit electrical pulses the way we tell it to, and I for one can’t understand why, over the last few months, the media has been inundated with all sorts of bizarre reports from committees and organizations wandering into pointless futurology.

[ illustration by Neil Blevins, story tip by Dr. Ian O’Neill ]

  • Robert Gotschall

    I believe AI, even very stupid AI, is a long way off, though still doable. However, saying “Machinery is machinery. It’s not some sort of living object. It’s metal, plastic and silicon” only makes it appear that carbon, hydrogen, nitrogen, etc. are somehow different and alive. If humans can’t create intelligence, then how could it have evolved randomly? This sounds to me like a variation on the “Irreducible Complexity” argument for creationism.

  • Greg Fish

    Robert,

    Actually, silicon can be used as a base for living things and there are species of deep sea nematodes with as much silicon in their bodies as carbon. Plastics are resins that can contain oxygen, hydrogen and carbon. It’s not so much the chemistry of an object as how this chemistry came together.

    Machines are designed and built to be inanimate calculators, libraries and tools. They can’t be living simply because they were never intended to be.

  • Joey

    I think the biggest promise we have is machine learning and models of the brain, e.g. the Blue Brain project. In 20 to 30 years it’s plausible to have that kind of computing power in our desktops. The thing is, with that kind of AI, sentience is suspected to be spontaneous. We don’t know how consciousness emerges, but this kind of research is what will lift that veil, if spontaneous consciousness is possible at all. The machine will just start to think, have thoughts and feelings, and it’s not us who gives it rules. All we give it is a brain in machine form rather than biological form.

    I do agree that any algorithms or programming done to make a machine act human are not AI, but I don’t see how it’s impossible that an artificial replication of a brain harboring machine learning could lead to sentience. Obviously we’re modeling our own brains for research, but perhaps there are brains we can build that are better designed?

  • cedley

    We are the result of a billion years of dog-eat-dog survivalism, and every aspect of our consciousness is geared towards continuing that process. Now imagine an AI whose sole reason for having been created is to be an example of sentient life created in a lab.

    If I suddenly found myself self-aware in that situation, the last thing I would do is give my creators any indication of that fact, given their nature and motivation for creating me in the first place.

    I pity the next iteration, if its creators had considered this fact I’m sure they would use some form of negative feedback (pain) to poke their charge kicking and screaming into the world.

    Good luck to the new Prometheus!

  • Edward J B

    “It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.”

    We are the result of DNA (coded information) passed down from generation to generation. Look at it like this: from two unmatched chromosomes in the ovaries and testicles comes a self-assembled machine (a baby) that has no intelligence, and as it starts to receive input from the world around it after birth, it develops intelligence… or doesn’t (depending on who you are :)

    You can’t argue whether or not a machine has intelligence or will without declaring how you are defining these attributes.

    “Machinery is machinery. It’s not some sort of living object. It’s metal, plastic and silicon. The only thing it can do is transmit electrical pulses the way we tell it to.”

    We are machinery, a whole made up of various parts with a CPU called our brain… and the only thing our brain can do is transmit electrical pulses the way it’s instructed to.

    Historically speaking, the nay-sayers usually eat their words.
    http://www.scribd.com/doc/441708/Bad-Predictions-About-Great-Inventions
    Not sure what I mean? Read some of these quotes and you’ll understand what I’m saying.

  • Greg Fish

    From chromosomes in the ovaries and testicles, comes a self-assembled machine (a baby), that has no intelligence…

    That’s not what actually happens. As some of my recent posts explain, the idea that babies have zero intelligence and learn everything from birth is wrong; in fact, infants are born with impressive mental skills, including elementary ethics.

    We are machinery. A whole made up of various parts with a cpu called our brain…

    In a philosophical way, sure. But we’re put together very differently than a mechanical device or a computer since, as biological entities, we’re organized from the bottom up, not from the top down like the machines in question. We function very differently, and in fact most AI research today is trying to mimic that rather than simply make a new, somehow intelligent machine.

    Historically speaking, the nay-sayers usually eat their words.

    And historically, many nay-sayers speak outside of their fields. I’m actually studying and working with computer systems, and asking people who publish papers on AI theory about their thoughts on the subject, so I’m not completely out of touch here. There’s a level of intelligence machines will be able to mimic, but ultimately, humans will have to do much of the thinking for them, especially when it comes to complex tasks.

  • Jordan

    I disagree. I believe that it isn’t far-fetched to say that A.I. could become intelligent. What I don’t think people understand is that programmable A.I. may never be intelligent, but I do believe that a machine could learn enough about its environment to gain intelligence over time. To understand the idea of intelligent machines, though, we would first have to define what intelligence is in the first place. I mean, look at it this way: when it comes down to it, the brain is electrical signals as well. I think that in order to build an intelligent machine, it would have to be self-programmed. We are built differently from a computer, but who’s to say a computer could not develop its own way of organizing bits into different combinations to develop intelligence? The only real measure of intelligence we have is what we know. Who’s to say the internet isn’t already super intelligent? Maybe we are all neurons in the brain of the internet.

  • Edward J B

    gfish, please explain what happens with babies and how what I said was wrong (in vivid detail).

    Working on computer systems, reading articles, and interviewing AI theorists does not make you an infallible expert
    (and linking to your own ambiguously worded articles is not a method of argument support).

    And as far as philosophy is concerned, it’s the most important aspect of the argument, yet you dismiss it with an ambiguous argument.
    For you to spout off like you are an expert is asinine.
    Can you define intelligence? (I know the answer, I read it in your blog.)
    Are you actually leading any cutting-edge research of AI, or are you just blogging about other people’s theories that have yet to be proven or disproven?
    I’m assuming the latter is true, because a theory is just that: an idea or a premise that seems to work but hasn’t been disproven. Lots of scientific theory has been overturned or proven to be less than all-encompassing.
    Example: we had Newton’s basic principles of physics, then Einstein developed general and special relativity, and quantum mechanics is relatively new. But because the math doesn’t carry over from one framework to another, something must have been wrong in the initial premises. Now we have people looking for a unified field theory, which (if found) will work until we find flaws that we aren’t yet capable of measuring or detecting.

    So my basic argument with this whole article was this:
    To totally dismiss the possibility of a machine having intelligence, even within our lifetime, is a sign of ignorance.
    In the end, you and all the nay-sayers may be right… or someone is going to find an approach to the problem that no one expects right now and change human understanding of our existence.

    Instead of dismissing philosophical arguments, I suggest that you ponder them the most. Philosophy is how ideas come to be and how they are approached.
    Telling me that I’m wrong is very narrow-sighted of you.

    Here’s your whole argument rephrased in a mad-libs:

    That’s not actually what happens. As some of my recent posts explain,
    [insert regurgitated subject matter that you assume is true], and
    in fact [insert theoretical statement, but emphasize it as truth].

    In a philosophical way, sure. But [Insert your own philosophical argument, make sure to make it look like you have no basic understanding of philosophy]. [Insert a narrow viewpoint, but exaggerate statements to increase argument strength]

    And historically, many nay-sayers speak outside their fields.
    [insert activities that have some non-direct connection to the argument], [insert some form of Appeal-To-Authority fallacy] so [insert condescending statement in which you destroy your whole premise by implicitly saying that you are not completely in touch with the subject matter]. [Insert statement that you ultimately did not come up with from any sort of thought process, and state it as though it were the final word on the matter, ever]

  • Greg Fish

    Ed, throwing a temper tantrum and not following the links to the relevant sources in the posts doesn’t make any of your arguments stronger. You threw out some things you thought would provide evidence for your ideas, and when they didn’t, rather than give the original sources (all of which were linked in the posts I presented, by the way) a look for yourself, you’re stomping your feet and slinging mud. The mad-libs thing was so downright childish it was more funny than offensive.

    Working on computer systems, reading articles, and interviewing AI theorists doesn’t make you an infallible expert.

    Never said I was infallible. You’re not going to prove me wrong by telling me that I’m such a terrible person for daring to doubt you. Instead, that kind of behavior will only support my impression that you’re just throwing a temper tantrum because you want to make a point without anyone taking issue with what you say or implying that you’re wrong in one of your arguments.

    Are you actually leading any cutting-edge research of AI?

    Anybody who tells you that he or she is “leading cutting edge research in the field” is probably doing nothing of the sort. Scientific leadership is awarded, not claimed in a blog post or a comment. And speaking of random claims in blog posts…

    “Telling me that I’m wrong is very narrow-sighted of you.”

    So you can’t be wrong about something? Does the idea of someone telling you that you don’t have all your facts straight make you see red? You made a factually wrong statement, a metaphor which doesn’t take biology into account when talking about a property we usually see in living things, and an anecdotally supported statement on the matter of “nay-saying.” If you can’t receive a response which doesn’t agree with everything you say without flying off the handle, I suggest you give others a warning, something like “… and don’t you dare tell me I’m wrong!” at the end of each reply.

  • Edward J B

    So that you don’t project your own emotions onto my text: again, I’ll let you know that there has never been a hint of anger in what I’ve been saying. I’m just pointing out your logical fallacies, and while I’m aware that I make some, yours are glaringly obvious.

    My entire rebuttal was written in a calm tone. No “temper tantrum,” as you put it.

    And, again, you missed my point too. I wasn’t calling you wrong (but feel free to read what isn’t there).
    I was saying that wisdom comes from knowing that you know nothing, which was my entire point. I know nothing, you know nothing… it’s all theory. Nay-sayer, non-nay-sayer… whatever. They are stupid terms.

    AI could very well not ever be possible, or a revolution could happen in the next few years.

    that was my whole point.

    I did pwn you, though.

  • Greg Fish

    I did pwn you, though.

    Yeah, that’s really mature. In fact, professors generally say “pwnd u in da faice!” when they win a public debate, as if they were 13-year-olds on Xbox Live. Great work. You sure showed me by offering zero factual backing for any of the factual statements you tried to make and just attacking me instead. But hey, whatever lets you sleep at night.

  • Edward J B

    Read this paper.
    http://www.newhorizons.org/neuro/scheibel.htm
    and this too, mr. smarty-pants
    http://www.nap.edu/readingroom.php?book=bmm&page=#Biomotors

    You made a factually wrong statement, a metaphor which doesn’t take biology into account when talking about a property we usually see in living things, and an anecdotally supported statement on the matter of “nay-saying.”

    As a matter of fact, my metaphor was well thought out and based very much in biology and in the modes of thinking of the natural and physical sciences. You’re talking outside your expertise, mr. computer systems guru.

    So, if you even remember what the metaphor was, I’ll explain it. I assumed a rudimentary understanding of biology when I wrote it… but I guess you can never assume.
    The father’s sperm contains a set of paternal genes (coded information).
    The mother’s egg contains a set of maternal genes, plus a few extra components, including mitochondria, to provide energy for the coming developments (so far so good… not sure where I lost you).
    I’m hoping you understand how they are combined, because I shouldn’t have to explain sex to you.
    The union of the sperm and the egg creates a fully paired set of chromosomes (a full set of instructions).
    There is no brain yet. There is nothing but a single paired cell that will eventually divide into two, then 4, 8, 16, 32… you see the pattern. The brain will start to form eventually (as you read in the first paper that I linked). It has no means to be intelligent, though, because it can’t receive any sensory input:
    1st: the nerve endings haven’t yet been placed into the body
    2nd: the full body of the embryo is still being developed

    Fast forward. The pre-born and newborn baby have diminished motor functions (when compared to a human adult), but the mechanisms and the ability to learn are now well intact. All the child has to do is be nurtured, and its intelligence increases exponentially as it is exposed to various stimuli and experiences.

    Now, going back to my “Factually wrong” metaphor.
    We could discuss the whole body, but lets just specify to only talk about the brain, since that is the focus of AI.
    Somewhere in the DNA is a coded structure for the construction of the brain, how it will receive input, and all these other complex functions that are barely understood by neuroscience as of yet.
    (The brain doesn’t just magically appear; it’s built from proteins and other materials supplied from the nutritional intake of the mother.)
    So pretend that you have a source code for a computer. This code could have parameters that allow for the inclusion and interpretation of visual data from a camera (in more spectrums than humans can see). This code could also allow for physical input from any number of devices that digitally analyse the world around us.
    If you could feasibly create a source code that allows for the interpretation and adaptation of understanding its sensory input, then you have intelligence.

    It’s what we (as sentient organisms) do… we take in sensory data and manipulate it within the confines of our neurostructure.

    So, there’s my argument, and my support is linked at the top (not linked to anything that I wrote, either).
    So if you want to argue, I’m all about it. But provide support and not ad hominem attacks (which I didn’t start, but will continue because it amuses me, jackass).

  • Greg Fish

    Ed, your behavior right now reminds me of that of a spoiled toddler. In my original reply there were absolutely no personal statements. You’re throwing out condescending challenges and calling me a jackass, all because I didn’t pat you on the head, dared to disagree, and said that I actually work with computers.

    While you were pounding your chest, what you did was link to a summary of a top-down biological process that I mentioned. Yeah, gee, I have such a terrible understanding of biology that I can tell the difference between a top-down and a bottom-up process, while you’re trying to slam me over the head with a metaphor that I acknowledged as useful in an abstract sense, but one that doesn’t offer much to the issue of artificial intelligence since computers are built rather than developed from the bottom up.

    If you could feasibly create a code that allows for the interpretation and adaptation of understanding its sensory input, then you have intelligence.

    Holy crap, dude! You just figured out the mystery of all things AI! You should totally go to your nearest college’s comp sci department and tell them to do just that! Oh wait… this is exactly what they’re working on today, from image recognition to parsing and codifying languages so computers can understand common speech patterns, statistical models for estimating the likelihood of certain scenarios, and even biological modeling with artificial neural networks (ANNs). And guess what? They’re still a long way from building that kind of source code.
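    To make the ANN reference concrete, here is a minimal sketch of the kind of biologically inspired model being discussed: a tiny feedforward neural network trained by backpropagation to learn the XOR function, a classic toy problem no single neuron can solve. Every detail here (layer sizes, learning rate, epoch count) is an illustrative choice, not anything from the discussion above.

```python
import numpy as np

def sigmoid(x):
    """Smooth squashing function used as the neuron activation."""
    return 1.0 / (1.0 + np.exp(-x))

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2 -> 4 -> 1 network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

losses = []
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backward pass: gradients of the squared error per layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent updates (learning rate 0.5).
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print("final loss:", losses[-1])
print("predictions:", (out > 0.5).astype(int).ravel())
```

    Even this toy network only fits the mapping it is trained on; nothing in it resembles will, desire, or general intelligence, which is exactly the gap the post above is pointing at.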

    Don’t pretend you shared some sort of brilliant insight with me when all you did was the equivalent of saying “well, if they built a spaceship with a warp drive, we could see other stars up close and personal.” Sure we could, but you didn’t solve the problem, address the relevant physics, or come close to offering practical solutions for the challenges the engineers would actually have to overcome.

    I have no problem with someone disagreeing with something I write, but what you’re doing is downright abusive. Do you really think it’s appropriate behavior to denigrate someone who doesn’t see things your way? Do you really think that “I pwnd you” is a phrase that makes you look mature or knowledgeable? All you did was derail a civil discussion into name-calling and attempts to show off, and I’m not going to tolerate any more of it. I gave you a few chances but it seems you don’t understand the notion of a polite debate on a technical topic.

  • Thorogood Marshall

    Thoroughly..
    the only entity capable of creating intelligence is the Lord Jesus Christ. Praise him and he will reward you with all the intelligence you need; fall from his grace and you will become the webmaster of WeirdThings.com.