how facial recognition technology could cause more crime than it helps solve | [ weird things ]


Law enforcement agencies can’t wait to deploy facial recognition AI in daily policing, and are pressuring lawmakers to get out of their way. But their zeal for face-seeking AI can easily backfire.

Seeing how China turned facial recognition technology into a tool to further its police state, a number of American lawmakers are trying to proactively regulate its use in the United States. If you want any degree of freedom, you’ll need at least some degree of privacy, and not having your face scanned without cause everywhere you go, with sightings of you stored in a database for future access and data mining, is probably a good step toward that privacy. However, these lawmakers face pressure from police departments who want to deploy the technology to solve crimes. And this leaves us with a pressing question: how do we reconcile the need for privacy with the need to catch criminals? Well, the problem at the heart of the matter is a lot more complicated than privacy vs. law enforcement.

In a way, this is reminiscent of the encryption debate happening right now in both America and Australia. If we could all be surveilled at all times by every digital device across the world, with every one of our conversations recorded and stored for future use, it would be easier to solve a whole lot of crimes. But it would also be easier for the police to manufacture a convincing case from selective fragments of footage and recordings, something that already happens with forensic evidence presented in courts across the country. Likewise, if the databases holding our facial scans and other digital records are poorly secured — and let’s face it, many most certainly will be — and compromised by hackers, criminals would have a field day stealing identities.

And this is the real danger of the state spying on you with AI. All the gathered data ultimately has to go somewhere for further study and reference, and those servers will be prime targets not only for cybercriminals but for adversarial states. Consider that foreign intelligence services might scan these databases to check whether someone is a spy, or to look for blackmail material on someone they’d like to recruit as an informant. It’s not only possible, it may have already happened: Sweden’s government accidentally turned over access to such a database to contractors within Russia’s reach. There wasn’t even any hacking involved, because political appointees with little computer savvy didn’t bother to heed experts’ numerous and loud warnings.

On top of all this, the very law enforcement agents lobbying against limits on facial recognition would also be recorded and recognized in public places and added to the data in question. Hackers able to access these databases could easily use that data to target the officers investigating them, or sell it to other criminals. Just scrub recordings of the police to protect them from criminals, you say? But what if they’re potential suspects in a crime and know they can get away with it thanks to the scrubbing policy? And if recording the police is dangerous because criminals could gain enough data to target them, what about civilians? Why shouldn’t they get the same protections from criminals as the police? How do we justify that?

Here’s the bottom line. There can be legitimate uses for facial recognition at borders and in solving certain crimes. But it has to be deployed very carefully and thoughtfully, with the explicit understanding that it’s far from perfect, and the data it collects must be limited in scope, heavily encrypted, and handled with great care, because otherwise it’s a gold mine for criminals and spies. Arguing against efforts to prevent a massive breach of personal data and to protect the privacy of law-abiding citizens, out of the misguided hope that doing so will make it easy to solve crimes, with little evidence that this would be the case, just sets everyone involved up for very serious problems down the road as the data and technology are inevitably compromised and abused for questionable, criminal, or otherwise nefarious purposes.

# tech // artificial intelligence / computer science / cybersecurity
