when philosophy meets artificial intelligence

Computer scientist Jaron Lanier wants to remind you not to worship technology as a panacea for all the world's ills, albeit in a very bizarre, rambling way...
[ illustration: philosophical robot ]

When you’re dealing with anything even remotely approaching artificial intelligence, you will always bump into the realm of philosophy. Considering that a good deal of the research involves actually defining intelligence in concrete terms, there’s plenty of room to ask what really makes us intelligent. Is it the size of a brain? Is it the complexity of our neural connections? Is it that our brain is a collection of biological constructs, each very good at a specific task, coming together into one organ? And what defines our thoughts and the notion of intellect in the first place? That’s what a rather confusing and somewhat rambling essay in the NYT tries to address. Its author is Jaron Lanier, an accomplished computer scientist who ventured into the philosophy of artificial intelligence and wrote a book to remind us that machines are just tools made by us, and for us.

It seems like a pretty obvious idea, but then again, if you remember a major feature on the progress of AI, with its share of bizarre definitions and statements about the future of the discipline from Singularitarians and a futurist, you might see the benefit of repeating it every once in a while. Lanier quickly homes in on those very same people and their motivations for seeing technology as the ultimate solution to all that ails them, but his ruminations on how some people view the web and the computers that power it as an independent entity, at the expense of valuing human skills and imagination, sound more like a technophobe’s plea than a professional’s evaluation. His column sets up a false dichotomy between seeing machines as nothing more than useful gadgets and worshipping them as the masters of a new world order, one that reduces the need for, and importance of, humans in the process. At times, he sounds like he’s about to slip into Bill McKibben territory, especially when we consider that most people are painfully aware of our technology’s limitations and wonder why they keep hearing about how smart computers are becoming when their own machines either constantly break or simply refuse to let them do what they need to do for the day thanks to bugs in the software.

Lanier’s justification for sounding the alarm is Silicon Valley’s much-discussed interest in what’s being preached by Ray Kurzweil and his disciples at Singularity University. In fact, the followers of transhumanism across the tech world made it to the corporate-gossip-obsessed TechCrunch, though they got the short end of the stick from one of its bloggers, and unfairly so. But really, while the big shots at tech companies might talk about how to build a super-intellect from a massive server farm, or how to create a science fiction AI from social media plugins, they still have to build it, and when they do, I have trouble seeing the programmers who’d try to make it happen forgo all credit if they’re successful. Generally, it’s the people who don’t know how technology works who tend to see it as a mystery box which keeps getting more and more complex and does more and more stuff. They don’t care about the hard work that goes on behind the scenes, and they probably never will. And that’s not something to bemoan, especially for those of us working in tech R&D. A constantly growing demand for more and better phones, computers, and software is what pays our bills. Does it matter that much if some people don’t realize how much hard work goes into our computing infrastructure, or believe that either The Matrix or doomsday military robots are upon us?

# tech // artificial intelligence / computer science / technological singularity

