
the people vs. killer robot #567-b?

UK academics want to know whether a sufficiently advanced robot that goes on a killing spree should also stand trial for its crimes.

The UK’s Royal Academy of Engineering recently published a report on the social and legal implications of using more and more automated systems in our daily lives. Among its main questions: who should be held responsible if one of these systems malfunctions with a lethal outcome? Is the machine its own entity, learning and making bad decisions as it goes? Are the coders and designers to blame for its glitches? Do we need to separate machines from the humans who build them, and if so, when? Can you haul a robot to court and charge it with negligent homicide or manslaughter?

Admittedly, when it comes to dealing with computer systems, my approach doesn’t linger in the realm of the theoretical. In fact, I’m not a huge fan of reports and essays like this because there’s a very fine line past which ideas about future technologies become philosophical navel-gazing, and we lose focus on how these systems will actually be designed, developed, tested and implemented. Practical brainstorming sessions identify problems and come up with creative solutions that are then critiqued for feasibility, and they give us new software, updates to systems we’re outgrowing and hardware that meets our demands. Theoretical philosophizing tends to inspire transhumanists and Singularitarians instead.

So, from a practical standpoint, who should be responsible if an automated system makes an error and kills someone? Any automated system has to run a software package to function, and any software is a set of instructions carried out according to predetermined rules. You can update those rules as much as you’d like, but every time the software needs to carry out an operation, it will do so according to the rules given to it by human developers. And when software makes mistakes, it does so because it trips up on the logic built into it, or lacks a rule to deal with something new and throws an exception. That means any malfunction has to be attributed to the system’s development team, and they’re the ones who have to fix it.
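To make that concrete, here’s a minimal sketch in Python of how rule-based software “decides” anything. Every name in it is hypothetical and made up for illustration; the point is simply that the machine only follows the table its developers gave it, and an event with no matching rule ends in an exception, not an act of independent judgment.

```python
# Hypothetical example: an automated system's "decisions" are just lookups
# in rules that human developers wrote ahead of time.
RULES = {
    "obstacle_ahead": "brake",
    "lane_clear": "proceed",
    "pedestrian_detected": "full_stop",
}

def decide(sensor_event):
    """Return the action a developer pre-programmed for this event."""
    try:
        return RULES[sensor_event]
    except KeyError:
        # No rule covers this situation. The software can't improvise a new
        # one on the spot; it can only fail, and the fix has to come from
        # the development team.
        raise RuntimeError("unhandled event: " + sensor_event)

print(decide("obstacle_ahead"))  # -> brake

try:
    decide("deer_on_overpass")
except RuntimeError as error:
    print(error)  # -> unhandled event: deer_on_overpass
```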

Computers can’t think for themselves. In fact, they never think, and they require designers and developers to do it for them. This is why we, humans, install cutoff switches, emergency overrides and manual controls. With every new technology, it’s our job to know that we may have to jump in, take over from the machines and solve a problem ourselves. Allowing a computer to take complete and absolute control of anything and everything, with no option for human intervention, is just begging for trouble. In the worst case scenario, the end result is unintentional suicide by robot. This is why you’re not seeing many drivers warm up to the persistent push for self-driving cars, and why people like me frown on vehicles that don’t let you turn off a feature that’s a bother in an otherwise great car. It’s not because we’re technophobes. It’s because we know full well how machines can fail where we could make the right decision and avoid an accident.
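In the same illustrative spirit, here’s a hypothetical sketch of what a manual override amounts to in code: whatever the automated logic proposes, a human input takes precedence. The class and method names are assumptions for the sake of the example, not any real system’s API.

```python
# Hypothetical sketch of a cutoff switch / manual override: the automated
# controller proposes an action, but a human operator's input always wins.
class Controller:
    def __init__(self):
        self.manual_override = False  # the human-operated kill switch

    def step(self, proposed_action, operator_action=None):
        if self.manual_override or operator_action is not None:
            # A human has taken over; the machine's plan is ignored.
            return operator_action if operator_action is not None else "halt"
        return proposed_action

ctrl = Controller()
print(ctrl.step("accelerate"))                           # -> accelerate
print(ctrl.step("accelerate", operator_action="brake"))  # -> brake (human wins)
ctrl.manual_override = True
print(ctrl.step("accelerate"))                           # -> halt (cutoff engaged)
```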

But wait a second, you may ask, what about AI? What happens in the future, when computers can think for themselves? As I’ve written numerous times, the hurdles to actual, unaided, creative problem solving from a box of plastic and silicon are immense. Any AI system will have to be developed in small stages, over many decades of work, and considering how real software is built, it will be designed for a very specialized set of tasks. No BA or developer would be crazy enough to suddenly plunk billions of dollars and decades of time into a real world application that does anything and everything under the sun. That’s the IT equivalent of a Sisyphean task. At best, it would be a needless technical curiosity with a list of maintenance tasks that would keep the project gushing red ink.

Experimenting with AI software is great for academia, but don’t expect it to be developed anytime soon or to be plugged into the world’s technical infrastructure on completion. Most likely, academic developments in creative computing will be spun off to solve very task-specific problems because, remember, the goal of software design is to make a system that solves a certain problem, not an OmniApp. That means any future systems that try to think for themselves within parameters given to them by their development teams will come with a lot of documentation and will be mapped and scrutinized thoroughly enough that we’ll know how to deal with them.

Despite what theoreticians seem to worry about, building automated systems in the real world is a very intensive process in which something like artificial consciousness wouldn’t just slip past QA and suddenly emerge out of nowhere. The algorithms for it would have to be purposefully built in from the start and come with a set of business rules and functional requirements. And those are a lot more specific than just “build a self-aware machine that tells great knock-knock jokes and runs the planet’s energy grids.”

# tech // artificial intelligence / automated systems / robots

