how a nice idea would quickly fail in practice

A moral test for new technology is a fine idea. But if its implementation is too strict, it might leave us without any real innovations at all.

A constant, and in certain contexts fair, criticism of the tech field is that we will create something before asking whether we should have created it in the first place. We can let people tag their photos with geographic coordinates that pin them to a specific time and place, which is convenient when they want to tell everyone exactly how much fun they’re having at their favorite bar or amusement park. But the same technology can track them at all times and create ready-made audit logs of where they go, who they call, and for how long, amassing enough data for anyone with access to those logs to know far too much about their lives.

So the question is: was building a tracking capability a bad idea in the first place, and should there be a process that lets us judge whether such an idea should’ve even been considered? Well, AI researcher Damon Horowitz thinks we could create guidelines for approving or halting the development of technologies that could be put to any sort of unethical use, on the premise that if we don’t create a potentially dangerous technology, it can’t be misused because there’s nothing to misuse in the first place. He even did a TED talk on the subject.

By now, you’re probably thinking that there’s a gap in Horowitz’s logic. Wouldn’t those who want to build some sort of technology for nefarious purposes simply develop it on their own? Instead of weighing the trade-offs between a technology’s beneficial and malicious uses, they’d just build abusive devices outright, since they don’t much care about his moral principles. In our example above, so what if geo-tagging failed an elaborate morality test and was never developed? Someone would simply build a piece of malware to track a person through an active smartphone. A moral operating system, as Horowitz calls it, seems a little naive because, to be effective, it would have to be adopted by people who share the same ethical values.

And on the flip side, who are we to deny people the benefits of a system that could be abused? Sure, geo-tagging images can allow for major invasions of privacy, but if people want to geo-tag their photos, or use the feature in an emergency to tell police or an ambulance crew exactly where they are, should we really say “we won’t give you the ability because some bad people could abuse it?” It’s a small-scale version of the debacle over research into H5N1. Could papers that show what genetic changes can trigger another flu pandemic be used for bioterrorism? Yes. But can we use this research to carefully monitor wild strains of H5N1 to detect an incoming outbreak? Absolutely.

So what do we do with technology that can be abused? We can’t stop it from coming to light, and we can’t put the genie back in the bottle once we unleash a system that can be misused. We also can’t deny users the benefits of new technologies, because if we withheld everything with a potential for abuse until every benefit had been weighed against every downside, we’d have to go back to the farm and abandon pretty much everything modern. Planes are useful for transporting a lot of people quickly and efficiently across long distances, but they can also be used as missiles or held hostage by hijackers. No more planes for us? Painkillers help people cope with injuries while they heal, but a big enough overdose can kill them, and the medication itself can be addictive.

So what now? No more Vicodin, just walk it off and take a good swig of whiskey when it really hurts? Obviously this stretches Horowitz’s argument to an extreme, but doing so reminds us that our entire lives consist of trade-offs in just about everything as we search for the right mix of safety, enjoyment, threat, and benefit. Technology is governed by the same analysis, and the only way to make sure a technology isn’t widely abused is to inform users of the potential for their software and hardware to be used against them. Explain the risks and benefits, then let the users decide what to do instead of adopting a paternalistic attitude and deciding what’s best for them.

# tech // computers / ethics / research and development

