
why your artificially intelligent assistant should judge you

Regular interaction with robots has the potential to blur some important lines in how we deal with the world around us. The good news is that our machines can still enforce them.

In the very near future, we’re going to spend an awful lot of time dealing with robots and AI built specifically to interact with us as a crucial part of their jobs, and that’s making some experts ask how dealing with a non-human simulation of intelligence could change us along the way. Much of the reaction so far has been either wildly utopian idealism or grouchy Luddism masquerading as caution, with very little in between. But what’s the sane approach here? Certainly we can’t expect routine dealings with digital entities not to change us in some way, because the rules for communicating with other humans won’t hold true for machines.

Want to confess to cheating on your taxes to a computer? Go ahead. Want to ask about a favorite porn genre you think would prompt visible confusion and furrowed brows in mixed human company? Fire at will. Have a revenge fantasy after being slighted at work? Share away. Computers don’t care and don’t judge because they don’t understand basic human customs or interactions. They simply listen for key words and phrases, break what you say into elements, and process those elements to extract a query or command.
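To make that contrast concrete, here’s a minimal sketch of what keyword-driven parsing might look like; the intents, patterns, and example query below are hypothetical, and the point is simply that nothing in this flow understands or evaluates what the user actually said.

```python
import re

# A toy, keyword-driven parser: it matches phrases to intents and pulls out
# arguments, but attaches no meaning or judgment to what the user said.
INTENT_PATTERNS = {
    "web_search": re.compile(r"^(?:search for|look up)\s+(?P<query>.+)$", re.I),
    "set_reminder": re.compile(r"^remind me to\s+(?P<task>.+)$", re.I),
}

def parse_utterance(utterance: str):
    """Return (intent, arguments) for the first matching pattern, else a fallback."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.match(utterance.strip())
        if match:
            return intent, match.groupdict()
    return "unknown", {"raw": utterance}

# Whether the query is a tax question or a revenge fantasy, the parser treats it the same.
print(parse_utterance("look up how to file an amended tax return"))
# ('web_search', {'query': 'how to file an amended tax return'})
```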

the a.i. will question your motives now

At the same time, we do have the ability to make AI sound a lot more human while maintaining boundaries it refuses to cross, directing us to other humans when we ask questions that seem a little too dark, weird, or personal to expect a machine to handle. Likewise, we could argue that genuinely problematic behavior, like asking an AI to look up how to commit suicide, or an obsessive repetition of certain searches that points to potential stalking or cyber-bullying, should prompt a response challenging the user’s actions, even if it’s something as simple as asking “are you really sure this is what you want to do?” a few times.
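As a rough illustration of what such a boundary could look like, here’s a sketch of a guardrail check that runs before a request is executed; the category names, keyword lists, and response text are hypothetical placeholders, and a real system would rely on something far more nuanced than keyword matching.

```python
# A toy guardrail layer: before executing a request, check it against a few
# concerning categories and respond with friction or a referral instead of
# silently complying. Keyword lists and messages are illustrative only.
CONCERN_CATEGORIES = {
    "self_harm": ["how to commit suicide", "ways to hurt myself"],
    "harassment": ["find her home address", "flood his inbox"],
}

REFERRALS = {
    "self_harm": "It sounds like you may be going through something serious. "
                 "Would you like me to connect you with a crisis line?",
    "harassment": "Are you really sure this is what you want to do?",
}

def check_request(utterance: str):
    """Return a referral or friction message if the request looks concerning, else None."""
    lowered = utterance.lower()
    for category, phrases in CONCERN_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return REFERRALS[category]
    return None  # nothing flagged; hand the request to the normal pipeline

response = check_request("look up how to commit suicide")
if response:
    print(response)  # pause and redirect instead of executing the search
```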

We don’t want robotic assistants to become accessories to potential crimes or cold enablers of self-harm, so we would, in effect, train them to judge their owners and exercise at least a modicum of independence. This is similar to the issue raised by the idea of using sex bots as an outlet for incels. If we provide a way for people to circumvent basic rules and norms, some will absolutely take it and see how far they can go. With robotic companions who are just there to do what they’re told, and have no line past which they’ll either refuse to execute a command or start questioning their owners and users, things could absolutely escalate as the people in question are locked in an echo chamber of one, with a plastic and metal equivalent of a parrot repeating their own worst thoughts and worst ideas back to them with a cheery ping.

But hold on, you might object, couldn’t someone just build their own AI assistant meant solely to indulge their worst impulses, less of a helper and more of a henchman? The answer is yes, absolutely. However, since AI needs data, proper code, and training to run, it would be a fairly cumbersome and complicated process, requiring engineering and coding chops, or at least the determination to find a dark web marketplace that will walk you through creating and training the animatronic Harley Quinn to your Joker.

slowing the long, dark slide into madness

If you felt the need to go that far because the AIs you used before refused to go along with you willingly, it indicates a need for something much stronger than a few concerned prompts to stop you, and it’s a red flag for the humans who would have to step in at that point and handle whatever it is you’re actually trying to do. This is especially relevant for people who got lost in conspiracy theories and the bowels of the hyperpartisan internet, then turned violent in the real world. Some examples include the MAGA Bomber, Lane Davis, Edgar Maddison Welch, and, on a less dramatic note, former friends and loved ones who became obsessed with the “red pill” internet.

Again, the goal is not to create systems that will be perfect at stopping humans’ worst urges and ideas, because that’s impossible. What we can do, however, is remind those who fit the profile of the internet obsessives who fall down the rabbit hole into dark and angry conspiracy theories, or viciously bombard others with propaganda and insults from their side, that they may want to take a few minutes to step aside and consider doing anything else. The point is to introduce what interface designers call friction between a thought and an executed command. If we just asked them to stop and think for a moment on a regular basis, how many would?
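One way to picture that kind of friction is a simple counter that notices repeated queries and periodically interrupts with a prompt to pause; the threshold and wording here are made up purely for illustration.

```python
from collections import Counter

# A toy friction mechanism: count repeated queries and, every few repetitions,
# interrupt with a prompt to pause before executing. The threshold is arbitrary.
PAUSE_EVERY = 3
query_counts = Counter()

def with_friction(query: str) -> str:
    """Either run the query or ask the user to stop and think for a moment."""
    query_counts[query] += 1
    if query_counts[query] % PAUSE_EVERY == 0:
        return "You've asked about this a few times now. Want to take a break first?"
    return f"Running search: {query}"

for _ in range(4):
    print(with_friction("posts about my ex's new partner"))
```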

# tech // artificial intelligence / future / psychology / sociology

