
the simmering battle to define a.i. and how smart it can be

Despite constantly using algorithms billed as artificial intelligence, and losing millions of jobs to them, we're still not sure where this technology can go or whether it's really intelligent.
[ illustration: mechanical fetus ]

We live in an era of smart devices and software, using technology less to log the stuff we do, and more to help us get it done. In just 15 years, we’ve built an army of virtual assistants we rely on for countless little things that drastically improve our quality of life. So, of course, that’s a sign that we’re leaping towards creating truly intelligent machines similar to JARVIS from the Marvel Cinematic Universe, right? According to AI researcher Melanie Mitchell, not even close. What we really have, she argues, are very effective statistical calculators trained to tackle very narrow problems, and generalized, self-motivated artificial intelligence would require massive breakthroughs she has trouble even imagining.

Now, overall, Mitchell is correct. Artificial neural networks, and machine learning in general, are basically just fancy calculators designed to solve narrowly defined tasks, aided by our ability to assign quantitative properties to information and match patterns to real world concepts. And while there’s a popular trend of insisting that someday soon, machines will spiral out of our control as we ramp up our use of artificial neural networks and train them to do more and more things on faster and faster cloud architectures, that’s just cargo cult computer science. Not many people in the field would challenge her arguments. But whether today’s AI is truly intelligent isn’t the important question.
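To make the "fancy calculator" point concrete, here's a minimal sketch of what a neural network reduces to once you strip away the frameworks and training loops. The layer sizes are arbitrary and the random weights stand in for a trained model, but the arithmetic is the whole story: matrix multiplies, a nonlinearity, and a normalization into "confidence" scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights standing in for a trained 4 -> 8 -> 3 classifier;
# the shapes and values are made up purely for illustration.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    """One forward pass: two matrix multiplies and a nonlinearity."""
    hidden = np.maximum(0, x @ W1 + b1)           # ReLU activation
    logits = hidden @ W2 + b2
    return np.exp(logits) / np.exp(logits).sum()  # softmax into "confidence"

# Four numbers in, three "probabilities" out. Nothing in between
# understands anything; it's statistics fitted to a narrow task.
print(forward(np.array([0.5, -1.2, 3.3, 0.7])))
```

Scale the matrices up by a few billion entries and you have today's headline-grabbing models; the character of the computation doesn't change.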

what’s the goal of building artificial intelligence?

The important question is what we’re trying to achieve in the long term. Do we want to make machines just as intelligent as us — known as Artificial General Intelligence, or AGI — and is it even possible? We’ve actually tackled this question before, for both AI and AGI, with no clear answer. Before we start trying to build a digital version of ourselves to think like us, we have to really understand the purpose behind it. There are already entities who think like us. Us. Are we suddenly desperate for a virtual friend or an immaterial servant? Are we trying to see if we can do it just for the sake of doing it? Do we have a contingency plan if we succeed and create some sort of entity like us, trapped in a lab?

From what we understand today, an AGI that works the same way our minds do is extremely unlikely, and would require some sort of body and an ability to learn by interacting with its environment. But even then, what’s far more likely to emerge is a collection of tools and methods capable of handling discrete tasks, and sequences of those tasks, in response to external stimuli, with some emergent behaviors as a side effect of the math powering its training and logic, but largely very much its own thing. It would never be like us. It wouldn’t want to be our friend, it wouldn’t have its own goals, and it would simply do as it’s told, then idle until the next command or external trigger.
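A minimal sketch of that architecture, with every name here hypothetical: a registry of narrow skills, a queue of external triggers, and a loop that does as it's told, then sits idle until the next command.

```python
import queue

task_queue: queue.Queue = queue.Queue()

# Each "skill" is a narrow, separately built tool; the registry is
# the closest thing this system has to a mind.
skills = {
    "thermostat": lambda: print("adjusting temperature"),
    "diagnostics": lambda: print("running self-checks"),
}

def on_external_trigger(event: str) -> None:
    """Sensors, timers, or humans enqueue work; nothing is self-motivated."""
    task_queue.put(event)

def run() -> None:
    while True:
        event = task_queue.get()  # idles here until the next command
        if event == "shutdown":
            break
        handler = skills.get(event)
        if handler:
            handler()  # does as it's told, then goes back to waiting

on_external_trigger("thermostat")
on_external_trigger("diagnostics")
on_external_trigger("shutdown")
run()
```

Nothing in this loop wants anything; between triggers, it simply waits.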

what do we want from our machines?

Because this synthetic intellect won’t emerge and evolve as we did, because it would be built out of fundamentally different components, and because its path forward would be guided, with all of its capacity customized to perform faster calculations with little to no slack, since that slack would be an expensive resource for those who maintain it, having it turn out nothing like us is almost a foregone conclusion. Any artificial intelligence we can realistically build in the foreseeable future couldn’t exist without our input, although it wouldn’t need us to constantly check on it. It could function independently for long stretches of time, make some decisions on its own based on what we taught it, and adapt to some variation in its inputs and environment.

But it would only be able to truly advance when working alongside us, or even integrating with our minds, like a symbiote. Is that really a bad thing, or something we need to be busy criticizing? After all, we set out to build smart, sophisticated tools to help us come up with the next big idea, and even if we put aside faux-AI and marketing hype, we’re succeeding. Why should we torture ourselves trying to erect potentially booby-trapped castles in the sky based on plot devices from sci-fi comics? And why are the obsessions of Singularitarians dominating the conversation, leading us into a philosophical morass of irrelevant what-ifs?

# tech // agi / artificial intelligence / computer science

