Facebook Experiment Reveals It Can Emotionally Manipulate Its Stream

It was January 11th, 2012, when 689,003 people started to experience a slightly different reality on Facebook. The distortion was subtle and lasted just a week, but while it lasted, they saw less emotional content from their friends.

What was it that caused this emotional dampening? A full moon in Scorpio or some strange new flu symptom? No. It was a massive online psychological experiment to test whether emotions are contagious on Facebook.

The short answer is yes, emotions are contagious on Facebook. When subjects saw fewer negative posts from their friends, they were slightly less negative and slightly more positive in their own posts. And when subjects saw fewer positive posts from friends, they were more negative and less positive in their own. The effects were real, even if they weren't particularly strong.
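According to the published write-up of the study (Kramer, Guillory, and Hancock, PNAS 2014), the manipulation worked by probabilistically withholding posts containing emotional words from each subject's News Feed. Here is a minimal Python sketch of that kind of per-viewer omission; the word list, the omission probability, and the function names are illustrative placeholders, not Facebook's actual parameters.

```python
import random

# Crude stand-in for a dictionary-based negativity check; the real study
# used LIWC word categories, not this toy list.
NEGATIVE_WORDS = {"sad", "angry", "awful", "lonely", "terrible"}

def looks_negative(post: str) -> bool:
    return any(word in NEGATIVE_WORDS for word in post.lower().split())

def experimental_feed(posts: list[str], user_id: int, omit_prob: float = 0.5) -> list[str]:
    """Withhold each negative post with probability omit_prob.

    Seeding on user_id makes the omission deterministic per viewer, echoing
    the study's note that omission chances were tied to the user's ID.
    """
    rng = random.Random(user_id)
    return [p for p in posts if not (looks_negative(p) and rng.random() < omit_prob)]

feed = ["Such a lovely morning!", "I feel awful today.", "Lunch at noon?"]
print(experimental_feed(feed, user_id=42))
```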

The experiment itself has generated a lot of controversy over the ethics of running this kind of emotional experiment on so many people without their knowledge or consent. The lead researcher at Facebook has even apologized in a public post.

What the Facebook Experiment Actually Demonstrated

My concerns have less to do with how the research was conducted and more to do with what Facebook demonstrated in the process of conducting it.

What Facebook demonstrated was its ability to use sentiment-mining tools such as Linguistic Inquiry and Word Count (LIWC) to filter its stream based on the emotional content of its users' posts. It should probably come as no surprise that the company has this ability, but now we know that it has actually used it – at least once – to modify the stream of content that users see based on its emotional content. This is a big deal.
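LIWC itself is a proprietary, dictionary-based tool: it scores text by counting words against curated category lists. A minimal sketch of that mechanism, with tiny toy word sets standing in for the real LIWC dictionaries:

```python
import string

# A rough sketch of dictionary-based sentiment counting in the spirit of
# LIWC. The real tool uses large, curated, proprietary category lists;
# these word sets are stand-ins for illustration only.

POSITIVE_WORDS = {"happy", "love", "great", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "awful", "lonely", "terrible"}

def emotion_profile(post: str) -> dict[str, float]:
    """Return the share of positive and negative dictionary words in a post."""
    words = [w.strip(string.punctuation) for w in post.lower().split()]
    total = max(len(words), 1)
    return {
        "positive": sum(w in POSITIVE_WORDS for w in words) / total,
        "negative": sum(w in NEGATIVE_WORDS for w in words) / total,
    }

print(emotion_profile("What a wonderful, happy surprise!"))
# -> {'positive': 0.4, 'negative': 0.0}
```

A scorer like this, applied per post, is all a feed pipeline would need to decide which items to suppress or promote.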

Incentives for Emotional Manipulation

In explaining the reasons for the experiment, the lead Facebook researcher noted that "we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook." This is a remarkable admission of the true motives behind the research: they were trying to make the network stickier, and so, to keep users coming back, they tested a way to deliberately distort reality.

A few years back, researchers at Wharton discovered that people are more likely to forward and engage with online content that is emotionally positive. If more emotional, more positive content generates more engagement, then it also drives more eyeballs to Facebook's advertisers. That means there is likely a significant commercial incentive to filter social media streams for positive emotions, which is why I'm betting this experiment wasn't the first time Facebook has used emotional filtering on its stream. And I'm guessing it won't be the last.
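To make the incentive concrete, here is a toy model of a feed ranker with an optional positivity boost. Every number, field name, and function here is invented for illustration, not drawn from Facebook's actual ranking system.

```python
# Toy model of the commercial incentive: if positive posts earn more
# engagement on average, a ranker that boosts positivity raises the
# expected click-through of the top feed slot. All values are invented.

posts = [
    {"text": "Our trip was amazing!", "relevance": 0.6, "positivity": 0.9, "ctr": 0.08},
    {"text": "Rough week at work.", "relevance": 0.7, "positivity": -0.6, "ctr": 0.03},
    {"text": "New phone released.", "relevance": 0.8, "positivity": 0.1, "ctr": 0.05},
]

def rank(posts: list[dict], positivity_weight: float = 0.0) -> list[dict]:
    """Order posts by relevance plus an optional positivity boost."""
    return sorted(
        posts,
        key=lambda p: p["relevance"] + positivity_weight * p["positivity"],
        reverse=True,
    )

print(rank(posts)[0]["ctr"])                         # 0.05 – neutral ranking
print(rank(posts, positivity_weight=0.5)[0]["ctr"])  # 0.08 – boosted ranking
```

In this made-up data, turning on the positivity boost moves the cheerier post into the top slot and raises the expected click-through there, which is the whole commercial logic in miniature.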

Facebook isn’t the first big media company to be interested in manipulating emotions to maximize profits. News organizations long ago learned that hijacking our fear and our negativity bias is a powerful way to build urgency, viewership, and ad revenue. “Could that milk in your refrigerator be slowly killing your children? Tune in at eleven to find out what you need to know!”

Software-based emotional manipulation is more sophisticated, though. It’s not just personalized, it’s social, which means it comes from people with whom we already have positive emotional connections. What we share on social networks also defines who we are, which is why the content we share there tends to be more positive than the doom and gloom of yesterday’s news networks.

Is It Already Happening?

So now, in addition to these advantages, researchers have confirmed that Facebook is a medium of emotional contagion, which is to say, it’s good at spreading emotions. While the research found the contagion to be only mildly viral, it’s important to remember that these findings are from over two years ago; Facebook’s software engineers may well already be working on behind-the-scenes tricks to boost this virality. The tricky part is that this kind of manipulation would be extremely difficult to detect from the outside, and there’s little incentive for Facebook to disclose this kind of activity if it were happening. These algorithms are closely guarded secrets.

Caution for the Future

Computer scientists are now starting to train software to make it smarter. The most visible example is the way IBM researchers trained Watson to use logic and knowledge to trounce two Jeopardy! world champions in February 2011.

Emotional intelligence gets less attention, but I believe it will be a critical aspect of artificial intelligence, and that online social networks will play an important role in training our software to be emotionally intelligent.

If Google is the master of online information, Facebook is the master of online emotion. Facebook is a marketing engine disguised as an online social network; it is designed to extract information about what moves us most, and, in the long run, it is this emotional sophistication that is Facebook’s greatest commercial advantage.

I am an advocate for building emotional intelligence into our software. But in doing so, we should aim for empathy and compassion, and not the cold, calculating emotional manipulation of the sociopath. And that’s what worries me most about the Facebook experiment.


Resources:

Affect research on Twitter from December 23, 2010:
The link between affect, defined as the capacity for sentimental arousal on the part of a message, and virality, defined as the probability that it be sent along, is of significant theoretical and practical importance, e.g. for viral marketing. A quantitative study of emailing of articles from the NY Times (Berger and Milkman, 2010) finds a strong link between positive affect and virality, and, based on psychological theories, it is concluded that this relation is universally valid. The conclusion appears to be in contrast with classic theory of diffusion in news media (Galtung and Ruge, 1965) emphasizing negative affect as promoting propagation. In this paper we explore the apparent paradox in a quantitative analysis of information diffusion on Twitter. Twitter is interesting in this context as it has been shown to present the characteristics of both social and news media (Kwak et al., 2010). The basic measure of virality in Twitter is the probability of retweet. Twitter is different from email in that retweeting does not depend on pre-existing social relations, but often occurs among strangers; thus in this respect Twitter may be more similar to traditional news media. We therefore hypothesize that negative news content is more likely to be retweeted, while for non-news tweets positive sentiments support virality. To test the hypothesis we analyze three corpora: a complete sample of tweets about the COP15 climate summit, a random sample of tweets, and a general text corpus including news. The latter allows us to train a classifier that can distinguish tweets that carry news and non-news information. We present evidence that negative sentiment enhances virality in the news segment, but not in the non-news segment. We conclude that the relation between affect and virality is more complex than expected based on the findings of Berger and Milkman (2010); in short, “if you want to be cited: sweet talk your friends or serve bad news to the public.”
