
so much for the three laws of robotics…

While Singularitarians are focused on creating a friendly AI, the first real-world artificial intelligence systems are likely to be as unfriendly as possible by design...

When you try to build an artificial intelligence system, you probably want to make sure it won’t turn on you when you least expect it. But how would you do that? According to the Singularity Institute, which spends a lot of time thinking about the future, you just follow the plan to build a friendly machine immune to our human emotions. Come to think of it, maybe friendly is a loose description and benevolent would be slightly more accurate. However, one wonders why anyone would actually build such a system and how it would apply to the real world. Considering that the likeliest use for AI would be in future military campaigns, and that defense agencies fund a good deal of relevant research, it’s fair to say that friendliness is not their primary concern.

One very important thing to remember about software is that it’s designed to perform a specific task. You can’t just build an application that does everything and expect to receive the resources and funding to keep such a project going. So before setting out to work on artificial intelligence, the big question is who could afford to get this technology off the ground. Today, that’s DARPA, which has a hand in IBM’s brain modeling projects as well as implants that can control machines by thought alone. And it makes perfect sense that the military wants to harness computers capable of making complex decisions on their own. As high-tech defensive and offensive systems that keep tabs on every square inch of the battlefield become ever more important in modern warfare, it becomes harder and harder to manage and sort through all the data they generate.

Rudimentary AI solutions that can combine numerous streams of data, organize them, and identify what may be really important could be a huge tactical advantage during massive engagements in a challenging setting like a major city. This is where the research money is going. What would be the point of investing decades of hard work, backed by the vast R&D coffers of the Pentagon, into creating a digital version of Gandhi? To build the friendly AI of the Singularitarians’ design, you’d need billions of dollars and a very large, dedicated staff of computer scientists who could somehow secure vast grants to create an idealized version of intelligent life on a supercomputer, with seemingly no other purpose than to show that we can make something with the traits of what we’d call sapience. Considering the trajectory of AI development today, it seems that not only will the first cognitive machines tackling real-world problems be less than concerned about harming humans with their actions, but assisting in delivering harm as efficiently as possible would actually be their goal.
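To make the "combine streams of data and flag what may be important" idea a bit more concrete, here is a purely illustrative Python sketch. Everything in it is an assumption for the sake of the example: the report fields, the sources, and the crude salience score are all invented, and a real battlefield data-fusion system would be vastly more complex. The point is only to show the shape of the task, merging reports from several sources and surfacing the few that look most urgent.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical report format; field names and sources are invented for illustration.
@dataclass
class Report:
    source: str        # e.g. "drone_video", "radio_intercept", "ground_sensor"
    location: str      # coarse grid reference
    confidence: float  # 0.0 - 1.0, how reliable the source judges this report
    threat_level: int  # 0 (benign) through 5 (immediate danger)

def salience(report: Report) -> float:
    """Toy scoring rule: weigh the reported threat level by the source's confidence."""
    return report.threat_level * report.confidence

def fuse_and_rank(streams: List[List[Report]], top_n: int = 3) -> List[Report]:
    """Merge reports from all incoming streams and return the few that score highest."""
    merged = [report for stream in streams for report in stream]
    return sorted(merged, key=salience, reverse=True)[:top_n]

if __name__ == "__main__":
    drone = [Report("drone_video", "grid_B4", 0.9, 2)]
    radio = [Report("radio_intercept", "grid_C1", 0.6, 5)]
    ground = [
        Report("ground_sensor", "grid_B4", 0.4, 1),
        Report("ground_sensor", "grid_C1", 0.8, 4),
    ]

    # Print the highest-priority reports across all streams.
    for r in fuse_and_rank([drone, radio, ground]):
        print(f"{r.location}: {r.source} (score {salience(r):.2f})")
```

Even this toy version hints at why the military would fund the real thing: the hard problems are not in the ranking loop but in judging confidence, resolving conflicting reports, and deciding what "important" means, which is exactly where machine cognition would be expected to earn its keep.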

# tech // artificial intelligence / cognitive computing / computer science / military

