How Artificial Intelligence is Reinventing the Art of Influencing Human Behavior

Neuroscientist Jeffrey Lin wants to dramatically reduce people’s toxic behavior in online gaming communities, and he’s using artificial intelligence to do it.

When players experience persistent abuse or toxic behavior in a game, they are on average 320% more likely to leave that game and never come back. Toxic behavior isn’t just a conspicuous PR problem for the gaming companies; it costs them real money.

I’m not a gamer and I don’t generally write about gaming, but it’s clear to me that what Lin and his colleagues at Riot Games are doing deserves attention. They aren’t just shaping the future of gaming and online communities; they are demonstrating how artificial intelligence may one day be used to modify human behavior on very large scales.

Crowdsourcing the Judges

In 2011, Riot Games unveiled something called the “Tribunal” to deal with toxic behavior among the 67 million players of its flagship game, League of Legends. The system allowed players to file reports on abusive players, the most egregious of which were then reviewed by volunteer judges drawn from within the player community. These crowdsourced judges reviewed player feedback, metadata from games, and logs from chat conversations before rendering their decisions. The Tribunal proved a powerful tool for engaging players in policing their own community, and in combination with a number of other player behavior initiatives at Riot, it’s proven quite effective in reducing toxicity.
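To make the mechanics concrete, here’s a rough sketch of how a crowdsourced case like this might be represented and resolved. The data model, names, and quorum rule are my own invention, not Riot’s actual system.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical data model, purely for illustration; not Riot's actual system.
@dataclass
class TribunalCase:
    reported_player: str
    chat_log: list[str]                              # chat lines from the reported games
    reports: list[str]                               # free-text reports from other players
    votes: list[str] = field(default_factory=list)   # judge votes: "punish" or "pardon"

    def verdict(self, quorum: int = 10):
        """Return the majority ruling once enough judges have voted."""
        if len(self.votes) < quorum:
            return None
        ruling, _ = Counter(self.votes).most_common(1)[0]
        return ruling

case = TribunalCase(
    reported_player="summoner123",
    chat_log=["you are all useless", "uninstall please"],
    reports=["verbal abuse the entire game"],
    votes=["punish"] * 8 + ["pardon"] * 3,
)
print(case.verdict())  # -> punish
```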

Profiling players by the frequency of their toxic behavior, Lin’s team discovered that the worst 1% of players contributed just 5% of total toxicity in League of Legends, which meant it wasn’t just a matter of banning a few bad apples. The real challenge was a very large-scale one: reforming the vast majority of players who simply had occasional outbursts of toxicity.
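A little back-of-the-envelope arithmetic shows why banning the worst offenders wouldn’t move the needle. The per-player incident counts below are invented; only the 1%-versus-5% shape follows Lin’s finding.

```python
# Toy arithmetic illustrating the distribution Lin describes; the per-player
# incident counts are invented, only the 1%/5% shape matches the article.
players = 100_000
worst_1_percent = players // 100            # persistently toxic players
everyone_else = players - worst_1_percent   # players with occasional outbursts

toxicity_from_worst = worst_1_percent * 5   # assume ~5 toxic incidents each
toxicity_from_rest = everyone_else * 1      # assume ~1 occasional incident each

share = toxicity_from_worst / (toxicity_from_worst + toxicity_from_rest)
print(f"The worst 1% account for only {share:.0%} of all toxicity")  # ~5%
```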

Last year, Riot took the Tribunal offline to revamp it. Lin recently outlined some of the reasons for that decision, the most interesting of which was that the Tribunal’s feedback was simply too slow. Decisions were taking a week or more, too long a gap between infraction and consequence for reported players to even clearly remember what they’d done, let alone meaningfully change their behavior.

The new system would need to be large-scale and provide immediate feedback — both things that machines do quite well.

League of Legends Judges Become Teachers

Over this last year, Riot has used some 100 million votes cast in Tribunal judgements to build an artificial intelligence system that automates responses to toxicity in League of Legends. Think of it as an artificial immune response system.

Riot has used machine learning techniques to extract patterns from the Tribunal and other massive datasets and teach an artificial intelligence system how to emulate the collective wisdom of its community. As a result, Tribunal volunteers effectively changed their job from judges to teachers. Their past judgements now form the basis of a massive-scale, real-time judgement engine, grounded in the very human values of the League of Legends community.
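Riot hasn’t published its models or features, but the general technique (learning a classifier from crowdsourced labels) can be sketched in miniature. The chat excerpts and rulings below are invented for illustration.

```python
# Minimal sketch of learning a toxicity classifier from crowdsourced rulings.
# The data, labels, and model choice are invented; Riot's real system is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example pairs a chat excerpt with the Tribunal's majority ruling.
chat_excerpts = [
    "gg wp everyone, close game",
    "uninstall the game, you are useless",
    "nice ward, that saved us",
    "report this idiot, worst player ever",
]
tribunal_rulings = ["pardon", "punish", "pardon", "punish"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # learn from phrases, not just single keywords
    LogisticRegression(),
)
model.fit(chat_excerpts, tribunal_rulings)

print(model.predict(["you are all garbage, report mid"]))  # e.g. ['punish']
```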

This new system is coming online in phases, the first being an “instant feedback system” designed to notify both the reporter and the reported player of the system’s rulings. It’s not just some crude filter for profanity and offensive keywords; it is capable of understanding phrases and, by drawing on Tribunal voting history, emulating a nuanced understanding of what is and is not considered toxic behavior within the League of Legends community.
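Continuing the toy classifier above, a hypothetical instant-feedback flow might look something like this. The function names, player IDs, and message routing are mine, not Riot’s.

```python
# Hypothetical instant-feedback flow built on the toy classifier sketched earlier.
def notify(player_id: str, message: str) -> None:
    print(f"[to {player_id}] {message}")

def handle_report(model, reporter_id: str, reported_id: str, chat_lines: list[str]) -> None:
    ruling = model.predict([" ".join(chat_lines)])[0]
    if ruling == "punish":
        notify(reported_id, "Your peers judged your behavior to be far below the "
                            "standards of the League of Legends community.")
        notify(reporter_id, "Thank you for your report. Action has been taken.")
    else:
        notify(reporter_id, "Thank you for your report. No violation was found.")

handle_report(model, reporter_id="p42", reported_id="p13",
              chat_lines=["uninstall the game, you are useless"])
```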

As you might suspect, situational context is difficult for a machine to crack, as are behaviors like sarcasm and passive aggressiveness. The system needs to adjust over time too. A phrase like “easy gg” is an obnoxious thing to say at the end of a game today, but it might someday lose its sting and the system would need to adjust accordingly. The Tribunal is still offline, but once it’s back up, Riot will once again have a sufficient flow of community feedback to automatically and continuously tune the system so it remains relevant and accurate over time.
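In spirit, that continuous tuning could be as simple as folding fresh Tribunal verdicts back into the training set and refitting. Again, this extends the invented sketch above rather than describing Riot’s pipeline.

```python
# Rough sketch of continuous tuning: fold fresh Tribunal verdicts back into the
# training set and refit, so a phrase like "easy gg" can drift in or out of the
# "punish" class as community judgement shifts. Continues the invented sketch above.
def retrain(model, excerpts, rulings, new_excerpts, new_rulings):
    excerpts.extend(new_excerpts)
    rulings.extend(new_rulings)
    model.fit(excerpts, rulings)   # refit on the updated corpus
    return model

# Once the Tribunal is back online, its fresh verdicts feed straight back in:
model = retrain(model, chat_excerpts, tribunal_rulings,
                new_excerpts=["easy gg", "well played, rematch sometime?"],
                new_rulings=["punish", "pardon"])
```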

The new instant feedback system has only been up for a few months, but the results are already impressive. Lin recently noted that it is only generating about one mistake for every five thousand decisions: a 0.02% error rate.

Imagine, just for a moment, receiving this message after having lost your cool in an online game:

“Your peers judged your behavior to be far below the standards of the League of Legends community. Think through the conversation and reflect on your words. League is an intense, competitive game, but every player deserves respect.”

Remember, this judgement was carried out by an artificial intelligence – not a human. Can you sense that odd feeling? It’s the future knocking.

Crowdsourced Artificial Intelligence

What Riot Games is building is a prime example of an important new trend: “crowdsourced machine learning.”

Crowdsourced machine learning requires both scale and feedback loops. It’s no coincidence that most of today’s leaders in machine learning are giants like Google, Facebook, and Baidu, whose Internet platforms engage hundreds of millions of users in powerful feedback loops.

League of Legends starts with game behavior data, adds a feedback loop from player-judges, and uses it to map community values. Google starts with third-party websites and creates a feedback loop from user clickthrough behavior on search results. The result is a Knowledge Graph. Facebook starts with end user posts and creates a feedback loop through likes and other forms of user engagement. The result is an Interest Graph.
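Stripped of the specifics, the shared pattern can be written down schematically. This is just the shape of the loop, not anyone’s actual pipeline.

```python
# Schematic shape of crowdsourced machine learning: raw data plus a human feedback
# signal, looped back into training. Not any company's real pipeline.
def crowdsourced_learning_loop(raw_data, collect_feedback, train, model=None, rounds=3):
    for _ in range(rounds):
        labeled = collect_feedback(raw_data, model)   # clicks, likes, Tribunal votes...
        model = train(labeled)                        # an updated model, graph, or ruleset
    return model
```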

Riot is clearly out front in applying artificial intelligence in gaming. I predict the next large-scale applications of crowdsourced artificial intelligence will be in social networks. Facebook and Google have snapped up the leaders of a particularly promising type of machine learning called Deep Learning, and Twitter is ramping up its artificial intelligence investments too. Demis Hassabis, CEO of Google’s DeepMind acquisition, recently noted that “In six months to a year’s time we’ll start seeing some aspects of what we’re doing embedded in Google Plus, natural language and maybe some recommendation systems.” Google Photos, which was recently spun out from Google Plus, uses machine learning to recognize people and things amongst users’ own photos. The images to the right, for example, came from a simple search for cathedrals in my own photos.

Influencing Employee Behavior

It just takes a little creativity to see where all this might go. Imagine a coffee shop where baristas’ interactions with customers are recorded and translated into transcripts, which are then matched to feedback from customer evaluations. Retail evaluation systems like this wouldn’t just catch toxic employee interactions with customers; they would evaluate cash register rings against employee communications, body language, conflict resolution, speed and other variables. Think of it as artificial intelligence extending a company’s best practices, policies and business rules.
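To make the thought experiment concrete, the same pattern from the League of Legends sketches would apply here too. Everything below is invented, purely to illustrate the hypothetical.

```python
# Speculative sketch of the coffee-shop thought experiment: pair interaction
# transcripts with customer evaluations and learn what "good" looks like.
# Entirely hypothetical; no such product is described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "good morning! what can I get started for you today?",
    "we're out of oat milk, deal with it",
]
customer_ratings = ["positive", "negative"]   # from post-visit evaluations

evaluator = make_pipeline(TfidfVectorizer(), LogisticRegression())
evaluator.fit(transcripts, customer_ratings)
print(evaluator.predict(["sorry for the wait, your latte is on the house"]))
```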

There is real potential for these tools to introduce a frightening new “AI-driven Taylorism” into the workplace. If you think I’m overplaying the desire for this kind of control over employee behavior, just consider the way call center employees are evaluated today.

This call may be recorded for quality assurance.

Changing Human Culture with Machines

Rather than end on that dystopian note, I think it’s important to highlight what is actually happening with Riot’s application of this kind of crowdsourced artificial intelligence.

The company has rooted its system design in a bottom-up feedback loop, designed to emulate its stakeholders’ values, and there’s something very admirable about that. Sure, toxic behavior generates customer churn and that costs the company money, but listening to Lin, these efforts also seem grounded in a bigger goal of healing the culture of online gaming.

The first time I heard about this project, I thought about “broken windows theory.” It’s the idea that small symbols of urban disorder, like broken windows, can create an atmosphere of perceived lawlessness where crime is to be expected. In this case, the goal is shifting the culture so that we no longer simply expect toxicity as a given in online gaming. In this sense, we’re talking about a very pragmatic, tractable approach to shifting human culture, and I think it’s worth studying.

Since Riot turned on the new artificial intelligence in League of Legends a few months ago, something dramatic really has happened. The culture is shifting:

“As a result of these governance systems changing online cultural norms, incidences of homophobia, sexism and racism in League of Legends have fallen to a combined 2 percent of all games. Verbal abuse has dropped by more than 40 percent, and 91.6 percent of negative players change their act and never commit another offense after just one reported penalty.”

Anyone who’s ever suffered from online harassment or toxicity will immediately understand just how important this work is. That’s one of the reasons it matters.

One of the other reasons it matters relates to the future of artificial intelligence, where one of the potential challenges is around values alignment, or how you ensure that the values driving an artificial intelligence remain in line with those of humanity. What’s particularly interesting about what Riot is doing is that it demonstrates an artificial intelligence that has been trained explicitly on the values of the League of Legends community.

This is groundbreaking work – a pragmatic demonstration of how to build values alignment into an intelligent system. Yes, it raises a number of difficult questions, but it should also give us hope that we may just figure out a way to seed the next intelligence on the planet with the echoes of our better angels.

 
