Mind Reading

Mind Reading Research Sees What You Mean

When we see words on a screen or piece of paper, their meaning comes to us quite naturally. But how does that understanding happen?

That is the question that Professor Marcel Just, the Director of the Center for Cognitive Brain Imaging at Carnegie Mellon University, spends his time trying to answer. I came upon Just’s work through a paper he and his colleagues recently published on how we map the meaning of sentences in our brains. That research changed my understanding of the way we make meaning out of concepts.

Mind Reading Research

In the simplest terms, this new research details a computational model for predicting the patterns of neural activation that particular sentences produce in the brain. While having their brains scanned by functional magnetic resonance imaging (fMRI), the study’s research participants were asked to read some 240 sample sentences. Just and his team then fed this data into machine learning algorithms, which distilled it into 42 elements they dubbed “neurally plausible semantic features” (NPSFs). The resulting computer model accurately predicted which areas of the brain would light up when subjects read a set of new test sentences. It also made it possible to gain a rough understanding of what these individuals were thinking simply by looking at how their brains were activating.
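The paper’s exact modeling pipeline isn’t reproduced here, so the following is only a minimal sketch of the general approach: represent each sentence as a vector of semantic features, fit a regularized linear map from those features to voxel activations, and test the map on sentences the model never saw. All of the data below is simulated, and the dimensions (42 features, 240 sentences) simply echo the numbers mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 240 sentences, each coded on 42 semantic
# features (standing in for the paper's NPSFs), predicting activation
# at, say, 200 fMRI voxels.  Everything here is simulated.
n_sentences, n_features, n_voxels = 240, 42, 200
X = rng.normal(size=(n_sentences, n_features))    # semantic feature codings
W_true = rng.normal(size=(n_features, n_voxels))  # hidden feature-to-voxel map
Y = X @ W_true + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Hold out sentences the model never saw, mirroring the paper's
# test on novel sentences.
train, test = slice(0, 200), slice(200, 240)

# Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
XtX = X[train].T @ X[train]
W = np.linalg.solve(XtX + alpha * np.eye(n_features), X[train].T @ Y[train])

# Predict activation patterns for the held-out sentences and score
# them by correlation with the (simulated) observed patterns.
Y_hat = X[test] @ W
r = np.corrcoef(Y_hat.ravel(), Y[test].ravel())[0, 1]
print(f"held-out pattern correlation: {r:.3f}")
```

The key property this sketch illustrates is generalization: because the mapping is learned at the level of semantic features rather than whole sentences, it can predict activation for sentences that never appeared in training.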

Yes, this represents an early step towards mind reading.

What This Research Means

I asked Just what he thought was most significant about this research:

“We’re getting at the basic building blocks of human thought. We like to think that you can think any old idea that you want to. Everybody thinks they’re creative, they’re generative, they’re unique and there’s some truth to that. But we’re all given a very similar set of Lego pieces with which to build our thoughts. We’re finding out what those pieces are, what types of pieces they are, and to some extent how they go together.”

In the last few years, a number of studies have shown that individual concepts light up particular locations in the brain. But in this research, Just and his team demonstrated for the first time the ability to map full sentences against the activation patterns they generate in the brain. We now know which parts of the brain light up from reading “the judge met the mayor” and which ones from reading “the flood damaged the hospital.”

In other words, we now know how to look inside the brain to see the way it connects concepts together in simple sentences. This research thus marks an important step in unraveling how the nothingness of ideas manifests within the spongy, pink tissue of a human brain. By looking at someone’s brain activation, we can now get a rough understanding of what they’re thinking about. It is, in fact, a rudimentary form of mind reading.

If that weren’t enough, the researchers also found that the same sentences activated the same regions of the brain across different research participants. That means that when I read “the fish lived in the river,” the neurons in my brain fire in a very similar pattern to those in yours. This finding suggests that we humans share a conceptual map with each other and that that map is baked right into our brains. It is a remarkable finding that dramatically changes the way we understand our relationship to conceptual meaning.

What’s more, these conceptual maps are language-independent. In a separate paper, the researchers demonstrated that English-language sentences activate the same areas in an English speaker’s brain as their translations into Portuguese do in the brain of a Portuguese speaker. According to Just, this phenomenon isn’t restricted to alphabetic languages either, as they recently found similar results with Mandarin Chinese speakers.

Validating with Other Models

The research applies an additional layer of evaluation (called factor analysis) to further group the model’s conceptual elements into four bigger buckets of meaning: people, places, actions, and feelings. These categories of thought map to what we already know about the functions of various regions in the brain. As Just puts it:

“…From the brain function, from the information processing of the brain, we can then get back to its biological structure but the biological structure just reflects the functional subdivisions.”

(The paper includes a graphic depicting these four clusters.)
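To make the factor-analysis step concrete, here is a small sketch using scikit-learn. The data is simulated: 42 features are constructed so that each secretly loads on one of 4 latent factors (a stand-in for the people / places / actions / feelings buckets), and factor analysis with a varimax rotation is then asked to recover that grouping. None of this is the paper’s actual data or code.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated stand-in data: 240 sentences scored on 42 semantic
# features, where each feature loads mainly on one of 4 latent
# factors (echoing the paper's four buckets of meaning).
n_sentences, n_features, n_factors = 240, 42, 4
latent = rng.normal(size=(n_sentences, n_factors))
loadings = np.zeros((n_factors, n_features))
for j in range(n_features):
    loadings[j % n_factors, j] = 1.0        # feature j -> factor j mod 4
X = latent @ loadings + 0.1 * rng.normal(size=(n_sentences, n_features))

# Varimax rotation encourages each feature to load on a single factor,
# which makes the recovered buckets easy to read off.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax",
                    random_state=0).fit(X)

# Assign each feature to the factor with the largest absolute loading.
buckets = np.abs(fa.components_).argmax(axis=0)
print("features per factor:", np.bincount(buckets, minlength=n_factors))
```

In the real research the interpretation runs the other way: the factors emerge from the data first, and their meanings (people, places, actions, feelings) are read off afterward from which features cluster together.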

The other interesting cross-check in this research compared the model to a different approach to categorizing concepts from the field of Natural Language Processing. These “vector representations” use machine learning to group like concepts by their proximity to one another in large bodies of text. The researchers evaluated four of these techniques against their model by comparing them to labor-intensive human assessments of concept similarities. The researchers’ model performed reliably better than the vector representations, but it’s interesting to note that the validity of these other approaches was also corroborated by the research. Because these vector representations can be easily automated, they are much less expensive than measuring brain activations. Just also noted in our conversation that there are ways to link the two approaches:
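The validation described above can be sketched in a few lines: compute pairwise cosine similarities from the vectors, then rank-correlate them with human similarity judgments. The vectors and ratings below are invented for illustration; real evaluations use large pretrained embeddings and published human-rating datasets.

```python
import numpy as np

# Toy vector representations (invented, 3-dimensional for readability).
vectors = {
    "river": np.array([0.9, 0.1, 0.0]),
    "lake":  np.array([0.8, 0.2, 0.1]),
    "judge": np.array([0.1, 0.9, 0.2]),
    "mayor": np.array([0.2, 0.8, 0.1]),
}

# Hypothetical human similarity ratings for the same pairs
# (0 = unrelated, 1 = nearly the same concept).
pairs = [("river", "lake"), ("river", "judge"), ("river", "mayor"),
         ("lake", "judge"), ("lake", "mayor"), ("judge", "mayor")]
human = np.array([0.9, 0.1, 0.15, 0.1, 0.2, 0.8])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

model = np.array([cosine(vectors[a], vectors[b]) for a, b in pairs])

# Spearman rank correlation between the model's and humans' orderings.
def spearman(x, y):
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

rho = spearman(model, human)
print(f"rank correlation with human judgments: {rho:.2f}")
```

A high rank correlation means the embedding space orders concept pairs the way people do, which is the sense in which the paper “corroborated” the vector-representation approaches.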

“You can map those (vector representation) co-occurrence properties to the brain activation and it works out not bad. So suppose for some reason you wanted to write a sentence or a text that activates certain brain areas, you could do it using the vector representation.”
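Just’s suggestion in the quote can be sketched as a two-step procedure: first learn a linear map from text-derived vectors to brain activation, then search candidate sentences for the one whose predicted activation is strongest in a target region. Everything below is simulated; it only illustrates the shape of the idea, not any published method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated training data: embeddings of 300 sentences and the brain
# activation each produced (a hidden linear map plus noise).
dim, n_voxels, n_train = 50, 100, 300
E = rng.normal(size=(n_train, dim))       # sentence embeddings
M = rng.normal(size=(dim, n_voxels))      # hidden embedding-to-voxel map
A = E @ M + 0.1 * rng.normal(size=(n_train, n_voxels))

# Least-squares fit of the embedding-to-activation mapping.
M_hat, *_ = np.linalg.lstsq(E, A, rcond=None)

# Score new candidate sentences by their predicted mean activation in
# a target set of voxels (here, arbitrarily, the first ten).
candidates = rng.normal(size=(20, dim))
target = slice(0, 10)
scores = (candidates @ M_hat)[:, target].mean(axis=1)
best = int(scores.argmax())
print(f"candidate {best} best activates the target region")
```

In practice one would generate or retrieve real candidate sentences, embed them with a trained language model, and pick the top scorer; the linear map is the piece Just describes as working out “not bad.”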

The Limits of Today’s Mind Reading

One of the questions that kept coming back to me while digesting this research was just how nuanced these conceptual maps were, especially when it comes to interpreting what someone is thinking. Here’s what Just had to say:

“We can do kind of mind reading but not very precisely. We can tell if you’re talking about eating something and maybe we can tell the difference between a banana and a peach by the way you hold it, but we certainly can’t tell the difference between a peach and a nectarine… Obviously, your brain knows the difference between a nectarine and a peach, and if it’s in your brain you should be able to get it. We haven’t tried that level of discrimination.”

He went on to note that their model interprets a concept like “jury” as a slightly broader concept, something akin to, say, a “group of people.” Getting that level of conceptual discrimination for mind reading and other applications will likely require a next-generation brain scanning technology:

“It’s really the sensitivity of the instrument in general. Fundamentally fMRI is not direct enough to give you the mapping to very, very specific content. We need the next technology to get us that sensitivity. I don’t know what it’s going to be. People are doing intra-cranial recording with micro-electrode arrays, but you need patients who have their brain exposed to do that.”

Applications of Conceptual Mapping

Professor Just was very eager to get into the potential applications of this research. We talked about a few areas specifically.

The first is psychiatric diagnosis. In a 2014 paper on the neurocognitive markers of autism, Just and his colleagues used machine learning to isolate a dimension they called “self-representation” as a reliable means of identifying patients on the autism spectrum. For people with high-functioning autism, Just notes, “the concept of a social interaction, like persuading, hugging or insulting doesn’t involve a self-representation. It’s just a dictionary definition.”
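At its simplest, a diagnostic marker like this reduces to a single learned dimension that separates two groups. The sketch below simulates that situation with invented “self-representation” scores and a midpoint threshold; the actual paper used fMRI-derived features and a trained classifier, not this toy rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical self-representation scores (simulated): one group with
# strong self-representation in social concepts, one with weak.
controls = rng.normal(loc=1.0, scale=0.3, size=30)
autism   = rng.normal(loc=0.2, scale=0.3, size=30)

# A one-dimensional threshold classifier at the midpoint of group means.
threshold = (controls.mean() + autism.mean()) / 2
accuracy = ((controls > threshold).sum() + (autism <= threshold).sum()) / 60
print(f"classification accuracy: {accuracy:.2f}")
```

The point is not the classifier, which is trivial, but the finding that a single interpretable dimension carries enough signal to separate the groups reliably.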

The other application that Just is excited about is in education. Last year, he and Robert Mason published research on how the brain incorporates scientific concepts. As Just puts it:

“Basic physics concepts get mapped to these fundamental brain systems that have been there for thousands of years even though the physics concepts have only been formalized for a couple hundred. To give you an example, one of the factors in the representation of physics concepts is ‘radiating energy.’ And your brain doesn’t really care whether it’s heat, light or sound. It’s just ‘radiating energy’ that it understands because radiating energy is a fundamental thing that we experience from heat or whatever. So then you take your physics course and someone writes on the board some equation that pertains to radiating energy and that’s where that goes in the brain – that’s how it gets represented.”

What is exciting about these results is the possibility of designing curricula around these underlying, natural conceptual hooks. Just imagine how lectures and homework deliberately tied into our existing brain structures might change the efficacy of teaching.

I recently wrote about using machine learning to explore interspecies communication and asked Just about this possibility. While it is not his area of focus, he noted that others are using fMRI imaging to understand what’s happening in the heads of monkeys and dogs. He’s generally bullish:

“I think with this approach, one could do comparative psychology like never before. We’re going to find out how primates and other animals think. We’re going to find out how infants think.”

Changing Our Understanding of Meaning

One of the things I found most heartening in speaking with Just is his focus on the social and emotional aspects of knowing something. I asked him about how he saw the connection between more objective representations of knowledge and the subjective experience that it generates in us:

“In the dictionary when you look up “snake” it just tells you what it is. But I think, in the human brain “snake” also evokes fear in a lot of people. I think that’s part of the brain’s definition of “snake.” I don’t make that much of a distinction. Just as its shape is one of its properties, its ability to evoke fear, for reasons I don’t understand, is one of its properties… The non-objective elements of meaning are a normal part of the representation of meaning.”

Just is a cognitive neuroscientist. He noted that when he started his work, his focus was all about the objective aspects of thought. “But what brain imaging has revealed is how incredibly emotionally and interpersonally oriented human brain activity is. We view our world in that emotional and interpersonal context. It’s there. It’s ubiquitous.”

As we turn to the human brain as a model for the future of artificial intelligence, it’s important to keep this insight front-of-mind.
