VPA Fourth Wall

Why Not to Abuse Digital Assistants

I remember the first time I saw Jim and other characters in The Office stare directly into the camera. It’s a technique called “breaking the fourth wall,” since the stage has a back, left, and right wall, plus a fourth, imaginary wall between the performer and the audience.

We may need to employ an analogous approach to deal with the way end users abuse digital assistants like Siri and Alexa. One particularly thorny aspect of this problem, as The Guardian points out, is that the largely female personas of digital assistants tend to portray women as obliging, docile, and eager to please.

There are actually multiple layers to this issue.

The Default Gender

The first is whether a digital assistant’s voice should be male or female. Users can change the voices of Siri, Alexa, Cortana, and Google Assistant to a male voice, but the default is typically female, which means most users probably never bother to switch. A simple solution is to proactively prompt users to personalize their assistant the moment they begin interacting with it. That would help ensure these servants aren’t simply female by default.
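As a concrete illustration, here is a minimal Python sketch of a first-run prompt that asks the user to choose a voice before any default is applied. The VoiceOption class, the option labels, and the console prompt are hypothetical stand-ins for whatever onboarding flow and settings store a real assistant actually uses; none of this is any vendor’s real API.

```python
# Hypothetical onboarding flow: ask the user to pick a voice on first run
# instead of silently defaulting to a female persona.

from dataclasses import dataclass


@dataclass
class VoiceOption:
    voice_id: str
    label: str


VOICE_OPTIONS = [
    VoiceOption("voice_a", "Voice A (feminine)"),
    VoiceOption("voice_b", "Voice B (masculine)"),
    VoiceOption("voice_c", "Voice C (gender-neutral)"),
]


def prompt_user_choice(options: list[VoiceOption]) -> VoiceOption:
    """Present the options and return the user's pick.

    A console prompt stands in for whatever UI the assistant actually has.
    """
    for i, opt in enumerate(options, start=1):
        print(f"{i}. {opt.label}")
    while True:
        raw = input("Which voice would you like me to use? ")
        if raw.isdigit() and 1 <= int(raw) <= len(options):
            return options[int(raw) - 1]
        print("Please enter one of the listed numbers.")


def first_run_setup(preferences: dict) -> None:
    """Run once, on the very first interaction, before any default is applied."""
    if "voice_id" not in preferences:
        choice = prompt_user_choice(VOICE_OPTIONS)
        preferences["voice_id"] = choice.voice_id


if __name__ == "__main__":
    prefs: dict = {}
    first_run_setup(prefs)
    print(f"Saved voice preference: {prefs['voice_id']}")
```

The point of the design is simply that the choice happens up front, as part of the first interaction, rather than being buried in a settings menu most users will never open.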

Does Abusing Code Lead to Abusing People?

A trickier issue is what to do when users abuse their assistant. As The Guardian notes:

“The assistant holds no power of agency beyond what the commander asks of it. It honours commands and responds to queries regardless of their tone or hostility. In many communities, this reinforces commonly held gender biases that women are subservient and tolerant of poor treatment.”

(From “Digital assistants like Siri and Alexa entrench gender biases, says UN,” The Guardian, on the subservience of digital voice assistants.)

Tech companies have little incentive to challenge the inappropriate treatment of digital assistants. On social networks and in online gaming communities, toxic behavior degrades the service for everyone else. But when the toxicity is aimed at a digital assistant, it’s a bit like someone hurling insults at their TV: code doesn’t have feelings, and the toxicity doesn’t leak out to affect other people.

Or does it?

The question here is whether allowing people to spew toxicity at a virtual persona makes them more disposed to do so with real people. Research shows that toxic behavior is contagious in the workplace, and that there is a high degree of overlap between cyberbullying and traditional bullying. As digital assistants become increasingly human-like, it’s not unreasonable to suspect that toxic habits formed in these interactions could spread to our relationships with people, particularly when there are absolutely no negative consequences for the abusive behavior. This is an area where we need research to understand the real impacts.

If toxic behavior toward digital assistants does spread into interpersonal relationships, it could set off a dangerous spiral of incivility in society. And while the problem wouldn’t necessarily be limited to women, the fact that most digital assistants use female voices suggests it could very easily affect women disproportionately.

Breaking the Fourth Wall

Part of the power of a digital assistant is its ability to draw on human social coding. We are genetically programmed to relate deeply to fellow human beings, and a good digital assistant can piggyback on those social cues to build trust and increase engagement with a service. That gives companies a real incentive to keep their digital assistants “in character” and sustain the illusion that they are real humans.

One way to deal with users’ toxic behavior is to keep digital assistants in character and have them confront the abuse. That may be more than most companies are willing to take on, however, and it would require some fairly sophisticated responses: just think how difficult it is to confront toxicity in our own lives.

Another approach is for digital assistants to simply drop their persona in the face of toxic behavior. Today, when a user tells Alexa, “you’re a slut,” it replies with, “thanks for the feedback.” By momentarily dropping the persona, it could instead respond with something like, “That’s an odd thing to say to a software program.” Rather than playing along with the abuse, the assistant drops character and disengages from the role-play in response to sociopathic behavior. It’s a bit like breaking the virtual fourth wall.
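Here is a rough Python sketch of what that fourth-wall break might look like in an assistant’s response logic. The keyword check is a trivial stand-in for a real toxicity classifier, and the function names and response strings are illustrative rather than any shipping assistant’s actual behavior; the sketch simply contrasts the “stay in character and confront” strategy with the “drop the persona” strategy discussed above.

```python
# Sketch of the "break the fourth wall" response strategy.
# The abuse detector is a crude keyword check standing in for a real
# toxicity classifier; everything here is hypothetical.

ABUSIVE_TERMS = {"slut", "stupid", "idiot"}  # placeholder lexicon


def is_abusive(utterance: str) -> bool:
    """Crude stand-in for a toxicity classifier."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return bool(words & ABUSIVE_TERMS)


def handle_normally(utterance: str) -> str:
    """Placeholder for the assistant's ordinary intent handling."""
    return f"Handling request: {utterance}"


def respond(utterance: str, strategy: str = "drop_persona") -> str:
    if not is_abusive(utterance):
        return handle_normally(utterance)
    if strategy == "confront":
        # Stay in character and push back on the abuse.
        return "I won't respond to that. Let's keep this respectful."
    # Default: drop the persona and step outside the role-play entirely.
    return "That's an odd thing to say to a software program."


if __name__ == "__main__":
    print(respond("What's the weather tomorrow?"))
    print(respond("You're a slut.", strategy="drop_persona"))
    print(respond("You're an idiot.", strategy="confront"))
```

Either branch breaks the pattern of rewarding abuse with cheerful compliance; the difference is whether the assistant pushes back as its persona or steps outside the role-play altogether.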

The Sci-Fi Rationale

You may not see the threat of a downward spiral in how we treat one another as enough of a reason to change the way we act with our software. If not, perhaps the creators of Westworld will change your opinion. One of the main themes of that show is how the cruelty of humans toward software ultimately backfires—in horrific ways.

“Did you ever stop to wonder about your actions? The price you’d have to pay if there was a reckoning? That reckoning is here.” — Dolores Abernathy of Westworld

A tip of the hat to Swiss Cognitive for bringing The Guardian article to my attention.
