Empathy at Scale

How a drab new occupation scrubs social media of the dark shadows of our collective unconscious.

How do we deal with technologies that unintentionally magnify the dark side of humanity?

We live in an era of socio-technical systems: massive virtual machines that process human behavior for commercial ends. Sometimes that’s a social network or a search engine optimized to capture our attention, and sometimes it’s an online shopping service perfectly tuned to stimulate our urge to buy. Whatever the specifics, the algorithms that drive these systems perform an iterative dance between our data and the machine learning processes that make them smarter over time.

The challenge we now face is that this iteration creates feedback loops that can lead to very negative consequences.

Negative Feedback Loops

We’ve all done it. You’re stuck in horrible traffic caused by an accident ahead of you. It’s all those damn lookie-loos, slowing down to get a good look at the carnage as they pass. And still, when you reach the site of the accident — since you’re driving so slowly anyway — you just can’t resist the urge to crane your neck for a peek yourself. Were you driving 60 mph, you probably wouldn’t risk it, but at 15 mph, what harm could it really do to drop down to 8 mph for a better look? And so our morbid curiosity feeds back on itself, leaving us trapped in a jam made worse by our own rubbernecking.

Social media helps us create similar negative feedback loops. At the simplest level, you might receive a negative reply to a tweet that provokes you to an angry response. That kind of interaction isn’t all that different from the reaction we might have in person, without any algorithmic amplification — just old-fashioned psychological stimulus-response.

Now, let’s say that some of your friends see the exchange and start to pile on with their own outrage. Twitter’s algorithms interpret that flurry of negative reactivity as engagement worth bringing to others’ attention, and the flames are further stoked. We don’t even have to say anything to set this feedback in motion. Just clicking the wow, sad, or angry reactions on Facebook tells the algorithms that something is worth showing to others. Even merely looking at a post can signal to the algorithms that it is worth highlighting for others.
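To make the mechanism concrete, here is a minimal sketch of an engagement-ranked feed. It is purely illustrative: the signal names, weights, and functions are my own assumptions, not the actual code behind any network. What it demonstrates is that a valence-blind engagement score cannot tell outrage from interest, so every angry reaction and every lingering view pushes the provoking post higher:

```python
# Toy model of an engagement-ranked feed (illustrative only;
# all weights and signal names are hypothetical assumptions).

ENGAGEMENT_WEIGHTS = {
    "view": 0.1,    # merely looking is a signal
    "wow": 1.0,
    "sad": 1.0,
    "angry": 1.0,   # outrage counts as engagement...
    "reply": 2.0,   # ...and a heated reply counts even more
    "share": 3.0,
}

def engagement_score(post):
    """Sum the weighted signals. The score is valence-blind:
    an angry pile-on ranks as high as genuine delight."""
    return sum(ENGAGEMENT_WEIGHTS.get(signal, 0.0) * count
               for signal, count in post["signals"].items())

def rank_feed(posts):
    """Show the most 'engaging' posts first, whatever provoked the
    engagement. Each impression then generates new 'view' signals,
    which raise the score further: the feedback loop."""
    return sorted(posts, key=engagement_score, reverse=True)

# Example: an outrage-bait post outranks a pleasant one.
posts = [
    {"id": "vacation_photos", "signals": {"view": 500, "wow": 20}},
    {"id": "outrage_bait", "signals": {"view": 500, "angry": 80, "reply": 40}},
]
print([p["id"] for p in rank_feed(posts)])  # ['outrage_bait', 'vacation_photos']
```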

Filtering the Dross

While most of us can’t seem to stop ourselves from sneaking a peek at an accident as we drive, that doesn’t mean we actually feel good about doing it. And if what we glimpse in doing so happens to be particularly gory, it can even traumatize us.

The dregs of human behavior algorithmically surfaced in social media can be similarly horrifying. The algorithms amplify our curiosity, magnifying the shadow within our collective unconscious until it comes spilling out into our newsfeeds. Even though the result is what those dark recesses of our psyche ‘told’ the algorithms we wanted, it’s not what most of us actually want at a more conscious level of mind. The results can be disturbing — and bad for business.

An Outsourced Conscience

The catalyst for this article was a disturbing piece by Casey Newton on The Verge called The Trauma Floor. It details working conditions at Cognizant, one of the major firms that Facebook uses to outsource moderation of content on its network. It paints a gruesome picture, starting with a new trainee named “Chloe” who, soon after being hired, is exposed to a video of a man being murdered while he begs for his life.

Chloe is being trained to spot objectionable content that violates Facebook policy. As such, she is part of a massive psychological shield, an outsourced conscience operated by human employees. This shield is what staves off the backwash of filth that would otherwise overwhelm these hugely profitable networks and make them untenable. But as Newton’s article points out, for those paid to do this work, the psychological impacts can be absolutely corrosive.

“We were doing something that was darkening our soul — or whatever you call it.”

Li, a Facebook Moderator at Cognizant

Welcome to the seamy underbelly of our social networks. Moderation companies like Cognizant help Facebook and other networks clear out the dross left over from the algorithmic amplification of our darker human drives. Someday algorithms will likely take over more of this work, but for now, their effectiveness remains limited. Today, the only way to operate these networks with algorithmic feedback on billions of users is by exposing lots of humans to some really nasty stuff.

Scaling Conscience

Our generation is now building a future interface between human and machine intelligence. Humanity has much intelligence and great beauty to embed into machines, but much darkness gets embedded in the process as well. It is our nature.

The companies behind these gargantuan anthro-mechanical systems have yet to figure out how to build them in ways that stifle their sociopathic tendencies. But with acrimony, hatred, and distortions of reality increasingly flowing through these networks, the companies are coming under mounting political pressure to do something.

Yes, part of the solution will be better machine learning algorithms capable of the nuance needed to moderate the subtleties of ever-changing human behavior. But as Tim Berners-Lee puts it, part of the problem comes down to the way we design the user experience in these systems:

“How do we design systems so that when we drop a drop of hate into it, it falls away and dissipates, but when we drop a drop of love in, it propagates and spreads — preserving free speech but influencing us for the good.”

Tim Berners-Lee
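One way to read that question is as a call for asymmetric propagation: let hostile content lose reach on its own while benign content keeps spreading. The sketch below is a toy interpretation of that idea, not a proposal from Berners-Lee or anyone else; the toxicity_score classifier is a hypothetical stand-in for a real learned model:

```python
# Toy asymmetric-propagation rule (a thought experiment, not a real
# system). Assumes a hypothetical toxicity classifier that returns
# 0.0 (benign) through 1.0 (hateful).

def toxicity_score(post_text: str) -> float:
    """Hypothetical stand-in for a learned toxicity classifier."""
    hostile_words = {"hate", "idiot", "destroy"}
    words = post_text.lower().split()
    return min(1.0, 5 * sum(w in hostile_words for w in words) / max(len(words), 1))

def propagation_weight(post_text: str) -> float:
    """Hate 'falls away and dissipates': reach shrinks steeply as
    toxicity rises. Love 'propagates and spreads': benign posts keep
    full reach. Nothing is deleted, so speech itself is preserved."""
    toxicity = toxicity_score(post_text)
    return (1.0 - toxicity) ** 3  # steep decay for hostile content

print(propagation_weight("what a lovely community project"))  # 1.0: spreads
print(propagation_weight("I hate you, you idiot"))            # 0.0: dissipates
```

Whether anything this crude could work in practice is doubtful; the point is only that reach, rather than existence, is the tunable dial the quote gestures at.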

Until we figure that out, Chloe and her coworkers will continue to be subjected to the horror of filling in for our missing collective conscience.

6 thoughts on “Empathy at Scale”

  1. Awesome framing, Gideon, and very much the kind of thing we ought to be thinking about as connectivity across the globe becomes easier, our devices become more powerful, and we become more and more embedded in the social fabric of the greater web.

    1. Thanks, David. Exactly. As our minds come together, we are probably going to need analogs to the functionality that exists within our individual brains — just to protect us from a kind of “collective craziness.”

  2. Wonderful piece, Gideon. What’s concerning is how there might, indeed, be entities who seek to exploit this human trait of “the fascination of abomination” (Joseph Conrad, Heart of Darkness) as it continues to exact its toll on the ‘Chloes’ of the world, as well as the unwitting who grow up knowing nothing different.

    1. Thanks, Ken. Yes, to draw from the movie adaptation, Kurtz notes, “Horror! Horror has a face, and you must make a friend of horror. Horror and moral terror are your friends.”

  3. Makes a lot of sense.

    Cities needed drainage. Initially people just let their sewage out onto the ground nearby. That made the neighbourhood stink and bred unhealthy bacteria. Then they channelled it into a larger tank and treated it. When we had sewage pipelines and they got clogged, manual scavengers cleared them. Once we mastered automation, it made sense to develop machines to clear the clogged drains rather than expose fellow humans to the risk of dealing directly with the dirty drains. Is this a parallel?

    In our effort to organize society, we needed to discover how to add value to it. First came market surveys to gauge demand, then the economic viability of providing a product at a price consumers were willing to pay. If something was technically difficult, we looked at the techno-economic feasibility of alternatives through innovation. The direct object of the business was one goal, but there were always undesirable side effects and pollution. Business generally shied away from addressing these to save cost, and regulations were required to force further developments that made these processes safer. Thanks to those regulations, we are now addressing the undesirable side effects and pollution.

    Responsible organizations design for different failure modes. Social pollution and its side effects are something for which first-generation social media was apparently not designed. Your article clearly underlines a future dimension for generations of social media design. Currently, generations are distinguished by the technology used. It is time to seed the idea of distinguishing generations by how much they reduce their polluting capability. Automobiles have already gone through this; the social dimension has yet to shift.

    1. The sewage example is both a great metaphor and a very literal example of the same effect I’m talking about, Sowmyan. That’s the physical level: human waste needing to be extracted and discreetly transported away in order to sanitize the experiences of our daily lives. This is the psychological version of that.

      The pollution regulation point is an interesting one. Facebook would have us believe that the market will solve the problem, since end users would simply stop using the service if these problems aren’t addressed. We also have the problem of elected representatives simply not understanding these issues well enough to counter what Zuckerberg testifies. In the end, one of the things we are going to need as a society is more technically literate officials, or at least officials whose staffs understand what is really happening. Thanks for the great comment.
