artificial intelligence gets a reality check

Finally, there's a skeptical expert voice addressing the panic about our potential AI overlords.
stargazing wall-e

New Scientist just interviewed robotics expert Noel Sharkey, who offers a very realistic take on why AI is far more difficult to create than many tech evangelists predict, and explains why programming robots to respond to a situation the way we would doesn’t actually bring us closer to genuine intelligence. In the world of theoretical computer science, it’s becoming harder and harder to ignore the growing number of starry-eyed dreamers who believe that machines are about to become so fast and powerful that huge computer networks capable of sentient thought are just a few decades away. What Sharkey does here is step in to do something very unpopular but extremely necessary: inject a dose of reality into some of the lofty theories making headline after headline in popular science and news media.

While organizations convene to talk about human/robot relations and make it their stated goal to foster calm, peaceful relationships between us and futuristic machines somehow endowed with sentient thought, all in the hope of preventing The Matrix from becoming humanity’s future, Sharkey shines a light on the fact that when it comes to AI, we’re barking up the wrong tree, and our fears of robot takeovers are more of a cultural meme than a realistic concern.

Are machines capable of intelligence?

If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.

So why are predictions about robots taking over the world so common?

There has always been fear of new technologies based on people’s difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. [Machines] do not have a will or a desire, so why would they “want” to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn’t believe [that] AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.

Yes, I readily admit that we hold the same views on what it takes to actually create AI software and on how computing power relates to intelligence, and that we share the same concerns about potentially fatal glitches in military robots, as well as doubts about machine takeovers of humanity, but just because our work led us to the same conclusions doesn’t mean that these points aren’t valid. Machinery is machinery. It’s not some sort of living object. It’s metal, plastic and silicon. The only thing it can do is transmit electrical pulses the way we tell it to, and I for one can’t understand why, over the last few months, the media has been inundated with all sorts of bizarre reports from committees and organizations wandering into pointless futurology.

# tech // artificial intelligence / noel sharkey / robotics

