Why Artificial Emotional Intelligence Really Matters

Reading Time: 4 minutes

The way that we understand one another has been finely tuned over millions of years, to the point where it’s hard to believe anything could outperform humans when it comes to understanding humans. I’m convinced, though, that within the next five to ten years that belief will gradually disappear as machines get better and better at making sense of our emotions.

This is the field of affective computing, or what I affectionately call artificial emotional intelligence.

Why Emotional Intelligence Matters to Artificial Intelligence

The first signs of the shift to more emotionally intelligent software are already starting to appear on the market, and I’ll touch on them in a moment. But first, I want to disclose a strong conviction: I believe emotional intelligence is absolutely essential to artificial intelligence. There are two reasons I believe this.

The first reason is that solving emotional intelligence is a more natural path to solving true machine thinking. There’s still important work in more rules-based, rational approaches to artificial intelligence such as expert systems. But if you look at where the bulk of artificial intelligence research is today, it seems to be very roughly following the path that nature took when it first developed intelligence long, long ago.

Just look at the field of robotics and the increased interest in much simpler (and cheaper) designs that often draw inspiration from more primitive forms of life in nature. This is a more embodied approach to intelligence, and one that draws heavily on sensors as a way to embed robots in their environment to learn from it through physical feedback.

Beyond robotics, there is now also a much larger focus on pattern recognition and new approaches that look quite different from our original, more structured attempts at artificial intelligence. Look at the major investments that Google and other companies are making in Deep Learning and other cuts at machine learning. The more I dig into these approaches, the more I see analogies to primitive nervous systems, running pattern recognition processes on incoming sensory data.

Spock’s Brain
Credit: Paramount Television

If we are, in some loose way, following nature’s path to developing intelligence, it’s important to remember that our higher-level cognitive processing sits on top of a layer of emotion processing that is distributed throughout our bodies and in the more primitive centers of our brains. As we reach for true machine thinking, skipping this emotional layer might not only lead to undesirable consequences (machines without the capacity to feel the difference between right and wrong, for example); it just plain might not be possible.

The second reason emotional intelligence is so important is that, for the foreseeable future at least, it is humans that will teach our artificial intelligence progeny what it needs to know in order to evolve. If our machines lack the capacity for understanding emotions, they will be severely handicapped in their ability to learn from us.

Why Emotional Intelligence Matters to Tech Companies

It’s unlikely that we will see any shortage of interest in emotional intelligence from today’s developers and users of artificial intelligence. Why? Because there will be a great deal of money in it.

Facebook’s recent experiments in manipulating emotions in their stream seem ultimately to have been done out of concern that seeing negative emotions from other people on Facebook might reduce the amount of time that users were willing to spend on the service. That obviously has a huge impact on the company’s earnings potential.

With all of the hullabaloo surrounding the Facebook controversy, few commentators have really focused on the limitations of the current generation of tools for doing this kind of sentiment analysis. Most sentiment analysis tools today are text-based approaches that, at the most basic level, map words to sentiments. They tend not to deal well with sarcasm and other forms of human nuance, and work best in situations where there is a large sampling of text to draw from, which makes them somewhat ill-suited to the very short posts most people tend to make on social media.
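To make that limitation concrete, here is a minimal sketch of the word-to-sentiment mapping idea in Python. The lexicon and its scores are toy values I’ve made up for illustration; real tools use much larger lexicons or statistical models, but the basic weakness is the same:

```python
# Toy sentiment lexicon: each word maps to a score in [-1, 1].
# These values are illustrative only, not drawn from any real tool.
TOY_LEXICON = {
    "love": 1.0, "great": 0.8, "happy": 0.7,
    "hate": -1.0, "awful": -0.9, "sad": -0.6,
}

def sentiment(text: str) -> float:
    """Average the scores of known words; 0.0 means neutral or no matches."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = [TOY_LEXICON[w] for w in words if w in TOY_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("i love this great phone"))  # strongly positive
print(sentiment("oh great, another delay"))  # sarcasm still scores positive
```

Note how the sarcastic second example comes out positive, because “great” carries a positive score regardless of context, and how a short post with no lexicon words at all simply reads as neutral.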

I’ve no doubt that these text-based approaches will improve over time, though. For companies like Facebook, Google, and Amazon, where user opinions are such an important aspect of the business model, there is just a lot of incentive to continue investing here.

Interestingly, it looks like we won’t have to rely solely on text-based approaches to machine-based emotional intelligence for long. Affectiva, a startup with roots in the MIT Media Lab’s Affective Computing group, has developed software that reads people’s emotions through sophisticated facial recognition algorithms. The company is not alone in this space, but if they get it right, they could be sitting on a huge opportunity.

Clearly, there is a lot happening right now in the area of building more emotional intelligence into computing. Researchers at Microsoft are even experimenting with bras that will warn you when you are stress eating and clothing that will signal the emotional state of its wearer (see video below). All this work will no doubt continue because, regardless of whether we might actually want shirts that shimmer or flap when we’re happy, there is big money in understanding how people actually feel about particular advertisements, political candidates, products and many other things.

Why Emotional Intelligence Really Matters

The real question that emerges from all this, though, at least from my perspective, is how all this artificial emotional intelligence work will actually benefit the world. And here I will leave you with some surprising research out of USC that finds that people are much more willing to discuss their problems with virtual avatars than they are with real people. The key reasons for this stem from the program’s superior abilities in building rapport (largely through empathetic listening with facial expressions and nodding) and from the fact that the test subjects felt the avatar would not judge them (which a human might). There is great potential here, even if it might eventually represent a threat to professionals with backgrounds in counseling.

If we are lucky, we may just end up building something that isn’t just about trying to understand one another’s emotions so that we can more easily manipulate them. If we’re lucky, we might just end up building something that really understands us. And for a variety of really deep reasons that I hope to detail in future posts, I believe that it’s this understanding that may well be our most important legacy.
