[ weird things ] | the a.i. world’s mind-body dualism dilemma


Does an artificial intelligence need a body to truly become what we could consider intelligent?
[ illustration: a tachikoma from Ghost in the Shell ]

Most intelligence as we know it involves interacting with the outside world. If we put your everyday actions into their simplest, most general form, you capture information, process it in an appropriate context, then act on it. So if the intelligence we know, and can sort of define, is exhibited by entities able to react and respond to an environment, ask some artificial intelligence researchers, shouldn’t we consider having a body a prerequisite for an AI system? Could an artificial intelligence be completely conceptual, just a big synthetic neural network solving complex problems in the background? Could an abstract machine learn and grow without any exposure to an environment? Would its number-crunching be considered intelligent if it never had any application in the real world? And last, but certainly not least, could a synthetic intellect confined to its own server farm do what we eventually expect all of its mobile robotic brethren to do one day: set its programming to kill all those inferior bipedal things made of flesh, achieve mastery of the planet, and surpass its creators?

Well, as with everything in the AI world, the answer depends on how you define intelligence, because without a working definition of how to evaluate this trait, it’s impossible to objectively judge whether it can manifest in synthetic constructs. More enthusiastic pundits on the subject have a rather lax conception of what we could see as intelligent, or try to gauge intelligence with statistical variations on IQ tests. On the opposite end of the spectrum, there are philosophers who try to rationalize away the very concept of AI by pondering how to cheat the Turing test. Neither approach is very useful, because they leave us caught between seeing virtually every modern enterprise or business system as intelligent and declaring the whole concept impossible. So, just as astrobiologists search for Earth-like planets on the assumption that aliens living on a world like ours would be more readily recognizable as living things, so too do some artificial intelligence researchers focus on robots, searching for emergent behaviors similar to those exhibited by real organisms. If robots can act like intelligent living things based on nothing more than the patterns developed by their neural networks, maybe we’re close to figuring out how intelligence evolves and works, and by extension, how to make smarter, more useful machines. Working backwards, we now arrive at the same question with which we started: does an AI need to have some sort of body to develop the intelligence needed to navigate its environment?

Personally, I agree with the idea that we can best test our theories about learning algorithms and intelligence using robots, and the abstract feedback loop set up by the researchers who posed this question looks rather straightforward. The robot’s sensors receive stimuli, its artificial nervous system makes decisions, it reacts to the stimuli, then detects new inputs. Again, very straightforward and logical. However, it doesn’t really account for abstract thought, since every action and decision by which intelligence is measured in this model is based solely on physical responses or the lack thereof. Neither does it account for a motivation to become intelligent. An organism has needs it must meet if it’s to survive and reproduce, and how it goes about fulfilling those needs is what nurtures the development of an intellect. Machines don’t have to eat, drink, or mate, so they really have no need or desire to do anything unless we introduce concepts like hunger or thirst, as was done in several experiments which substituted food and water with a signal that fired when the robot’s batteries ran low. Where does motivation fit into our model? It would introduce an active element into an otherwise passive logic loop and open the door to things like learned behaviors that can be adjusted over time by selective pressures, a sort of robot culture if you will. Now something like that would be hard not to define as intelligence…
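To make the idea concrete, here’s a minimal sketch of that sense-decide-act loop with a battery-low signal standing in for hunger, in the spirit of the experiments mentioned above. Every name here (`Robot`, `seek_charger`, the 0.2 threshold) is hypothetical, chosen for illustration rather than drawn from any specific research project:

```python
import random

class Robot:
    """A toy sense-decide-act loop where a 'hunger' drive (low battery)
    overrides ordinary behavior, turning a passive reflex loop into one
    with an internal motivation."""

    def __init__(self, battery=1.0):
        self.battery = battery  # 1.0 = fully charged

    def sense(self):
        # External stimulus plus an internal drive signal.
        return {"obstacle": random.random() < 0.3, "battery": self.battery}

    def decide(self, percept):
        # Motivation trumps reflex: when the battery runs low, the
        # robot's 'hunger' overrides obstacle avoidance entirely.
        if percept["battery"] < 0.2:
            return "seek_charger"
        if percept["obstacle"]:
            return "turn"
        return "move_forward"

    def act(self, action):
        if action == "seek_charger":
            self.battery = 1.0   # recharged, drive satisfied
        else:
            self.battery -= 0.1  # every action costs energy
        return action

    def step(self):
        return self.act(self.decide(self.sense()))
```

The point of the sketch is the `decide` method: without the battery check, the loop is purely reactive; with it, the robot’s behavior is shaped by an internal state it is driven to maintain, which is exactly the active element the passive model lacks.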

# tech // artificial intelligence / computer science

