Tapping the Brain: Decoding fMRI

How researchers are predicting specific thoughts from brain activity

Revealing the brain’s hidden stash of pictures, thoughts, and plans has, until recently, been the work of parlor magicians. Yet within the last decade, neuroscientists have gained powerful methods for delving into the brain’s contents, allowing them to predict specific thoughts—including images, memories, and intentions—from patterns of brain activity.

 

“When these pattern recognition techniques came out, it gave the field a big boost. People realized that now we can really get at content,” says John-Dylan Haynes, PhD, at the Bernstein Center for Computational Neuroscience in Berlin.

 

Since the 1990s, functional magnetic resonance imaging (fMRI) has been used to track the flow of blood and oxygen in the brain, thus showing which spots in the brain are busy. Around 2005, neuroscientists discovered that computational multi-voxel pattern analysis (MVPA)—a pattern recognition technique of the kind used in other fields such as fingerprint identification—could help them do more than just pinpoint what part of the brain is active. It could help them read meaning in the patterns of activity. If fMRI takes pictures of the brain’s hidden bar codes, MVPA decodes them. Two studies below show the power of MVPA applied to intentions and memories; a third shows how modeling can recreate the images in the mind’s eye.

 

Decoding Intentions

Some of the early studies using MVPA showed how scientists could use a volunteer’s brain patterns to train a linear classifier to predict whether the volunteer was looking at, say, a face or a chair. Researchers then expanded the approach to predict other mental content: emotions, sounds, and memories, for example.
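To make the idea concrete, here is a minimal sketch of that kind of pattern classification, using scikit-learn on simulated data. The array sizes, the strength of the planted signal, and the choice of a linear support vector machine are illustrative assumptions, not the pipeline of any particular study.

```python
# Minimal sketch of MVPA-style decoding with a linear classifier.
# Each simulated "trial" is a vector of voxel activations; a weak
# class-specific signal stands in for real fMRI response patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500           # invented experiment size
labels = rng.integers(0, 2, n_trials)   # 0 = face, 1 = chair

signal = rng.normal(size=n_voxels)      # direction of the "chair" pattern
X = rng.normal(size=(n_trials, n_voxels)) + 0.3 * np.outer(labels, signal)

# Cross-validated accuracy of a linear classifier on the voxel patterns.
scores = cross_val_score(LinearSVC(max_iter=10000), X, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")   # well above 0.5
```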

 

A Brain’s Choice to Make. Before a volunteer clicked on either a left or right button, Haynes and his group collected the volunteer’s brain activity using fMRI. The scientists found regions (in green) that, analyzed using pattern recognition techniques, could predict the button the volunteer would push. Those regions revealed the predicted decision up to seven seconds before the volunteer felt he or she had made a decision. Credit: John-Dylan Haynes and Chun Siong Soon.

Intentions, Haynes found, are also predictable. In a study published first in 2008 in Nature Neuroscience and repeated using ultra-high-field fMRI in June 2011 in PLoS ONE, Haynes and his coworkers asked volunteers to enter an MRI machine with one button near their left finger and one near their right finger. Relax, they told the volunteers. Watch this stream of letters in front of your eyes, choose which button you will push, remember the letter on the screen at the moment you choose, and then press the button. Afterward, the volunteers reported which letter had been in front of their eyes when they chose which button to press. Haynes and his coworkers found that not only could the classifier predict which button a volunteer was about to press, it could do so far earlier than the volunteer could—up to seven seconds before the volunteer reported consciously choosing which button to push.

 

With this study, Haynes established that our brains know some things before we do. “Not all of our decisions are made consciously,” he says.

 

Haynes was also one of the first to use MVPA to explore the brain for patterns. Because he didn’t know exactly where in the brain he might find patterns of decision-making, or how many seconds before the decision those patterns might appear, he used MVPA as an initial tool for exploration. A procedure called searchlight decoding let him search the whole brain without making any assumptions ahead of time and pinpoint the areas and times that forecast decision-making. More recently, Haynes and his group have found they can predict not just button pushing, but abstract thought as well, such as deciding whether to add or subtract two numbers.
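In spirit, searchlight decoding slides a small neighborhood across every voxel, runs a cross-validated classifier on just the voxels inside it, and maps the resulting accuracy back to the neighborhood’s center. A schematic version, with invented data and a cubic neighborhood standing in for the usual sphere:

```python
# Schematic searchlight decoding: at each voxel, classify using only the
# small neighborhood around it, and record the cross-validated accuracy.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, shape = 60, (10, 10, 10)      # tiny toy volume, invented numbers
volumes = rng.normal(size=(n_trials, *shape))
labels = rng.integers(0, 2, n_trials)   # e.g., left vs. right button

radius = 1                              # 3x3x3 neighborhood for brevity
accuracy = np.zeros(shape)
for x in range(radius, shape[0] - radius):
    for y in range(radius, shape[1] - radius):
        for z in range(radius, shape[2] - radius):
            sphere = volumes[:,
                             x - radius:x + radius + 1,
                             y - radius:y + radius + 1,
                             z - radius:z + radius + 1].reshape(n_trials, -1)
            scores = cross_val_score(LinearSVC(max_iter=5000), sphere,
                                     labels, cv=3)
            accuracy[x, y, z] = scores.mean()

# Voxels whose neighborhoods decode above chance mark candidate regions.
print("peak searchlight accuracy:", accuracy.max().round(2))
```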

 

Decoding Memories

Others, including Frank Tong, PhD, at Vanderbilt University, have grappled with decoding memories. Tong and his group designed an fMRI experiment to test for working memory (published in Nature in April 2009) in the early visual areas of the brain—areas that perceive basic visual features. They showed a volunteer first one set of parallel lines, then another set in a different orientation, and told the volunteer to remember one of the sets. After an 11-second pause, they asked the volunteer to recall the orientation he or she had kept in mind.

 

During that pause, the overall activity in the early visual areas often returned to baseline. However, even at that low level of activity (no greater than would be expected when viewing a blank screen), Tong found that pattern classifiers could pick out subtle shifts in brain patterns associated with each orientation, and could still be trained to predict which orientation the volunteer held in memory. “We can see these faint echoes of what they saw before, what they are actively holding in mind,” Tong says. “That would be invisible if we didn’t do multivoxel pattern analysis.”
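The key observation, near-baseline average activity that still carries a decodable pattern, can be illustrated with simulated data. The numbers and the logistic-regression classifier below are assumptions for the sketch, not Tong’s actual analysis:

```python
# Sketch of the working-memory result: mean activity sits near baseline
# during the delay, yet the multivoxel pattern still reveals which
# orientation is being held in mind. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 300
orientation = rng.integers(0, 2, n_trials)   # 0 or 1: remembered grating

# Zero-mean orientation signal buried in noise: per-trial mean activity
# stays near zero (like a blank screen), but the pattern differs.
pattern = rng.normal(size=n_voxels)
pattern -= pattern.mean()                    # signal adds ~nothing on average
X = rng.normal(size=(n_trials, n_voxels)) + \
    0.25 * np.outer(2 * orientation - 1, pattern)

print("mean delay-period signal:", X.mean().round(3))        # ~0, baseline
scores = cross_val_score(LogisticRegression(max_iter=2000), X,
                         orientation, cv=5)
print("pattern decoding accuracy:", scores.mean().round(2))  # above chance
```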

 

Tong’s study shows that working memory can reside in the early visual areas of the brain, a zone where few expected to find such a high-level thought process. And it helps establish that images held only in the mind are robust enough that pattern algorithms can decode and predict them.

 

Encoding Models

Researchers have also started decoding novel brain patterns—something a linear classifier cannot do, because it can only predict patterns it has already seen. In a study published in Current Biology in October 2011, computational neuroscientist Jack Gallant, PhD, post-doctoral researchers Shinji Nishimoto, PhD, and Thomas Naselaris, PhD, and their group at the University of California, Berkeley, used a combination of brain modeling and decoding to reconstruct a primitive version of a movie someone was watching.

 

After showing a series of movie clips to a volunteer lying inside an MRI machine, the scientists ran the data through a two-step process, Naselaris says. They first created an encoding model: in essence, a virtual brain that generates signals when images are presented to it. In the early visual region that Gallant studies, the brain processes images as pieces of moving edges. Each two-millimeter cube of brain space (also called a voxel) processes the moving edges in a unique way, and Gallant’s group created a model for each cube. Each model maps various features of the movie input—motion, color, spatial frequency, and so on—into an output signal that matches as closely as possible the brain activity seen in each cube of the volunteer’s fMRI brain data. “It is the signature of that one individual voxel,” Naselaris says.
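In spirit, each voxel’s encoding model is a regression from stimulus features to that voxel’s response over time. Here is a hedged sketch using ridge regression on simulated features; the actual study used motion-energy features and more elaborate fitting:

```python
# Sketch of fitting per-voxel encoding models: regress each voxel's
# response onto stimulus features. Features, responses, and sizes are
# all simulated stand-ins for the study's motion-energy pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_timepoints, n_features, n_voxels = 400, 50, 20    # invented sizes
features = rng.normal(size=(n_timepoints, n_features))  # e.g., motion energy

# Simulate voxel responses: each voxel has its own feature weighting
# (its "signature") plus noise.
true_weights = rng.normal(size=(n_features, n_voxels))
responses = features @ true_weights + rng.normal(size=(n_timepoints, n_voxels))

# One regularized linear model per voxel, fit jointly via multi-output ridge.
encoder = Ridge(alpha=10.0).fit(features, responses)

# Given new stimulus features, the model predicts every voxel's activity.
new_features = rng.normal(size=(1, n_features))
predicted_pattern = encoder.predict(new_features)   # shape (1, n_voxels)
print(predicted_pattern.shape)
```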

 

Reconstructing Movies via the Brain. Using brain activity of volunteers watching a movie (frames in top row), Gallant and his coworkers created a model that predicts brain patterns from movie clips. They then used that model and a database of 18 million seconds of random movie clips from the Internet to predict the brain activity that would be expected from each clip. They then averaged together the 100 clips in the random library that were most likely to have produced brain activity similar to what was actually observed (bottom row). See Nishimoto S et al., Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Current Biology 21:1641-1646 (2011).

 

Then, using a brand-new set of movie clips, Gallant’s group collected new brain activity data and set the model running in the other direction: they fed in the brain activity to see if the model could now predict movie scenes from brain data. A novel form of Bayesian decoding helped them match images, taken from a massive database of 1-second movie clips, to brain activity patterns. Their dark, blurry movie reconstructions are an average of the top 100 short clips whose predicted brain patterns best fit the actual brain activity.
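A toy version of that decoding pass: predict the brain pattern for every clip in a library, score each prediction against the observed activity, and keep the best matches. Everything below, from the random “encoder” weights to plain correlation scoring, is a simplified stand-in for the paper’s Bayesian machinery:

```python
# Toy decoding pass: score library clips by how well their predicted
# brain patterns match the observed activity, then keep the top matches.
import numpy as np

rng = np.random.default_rng(4)
n_clips, n_features, n_voxels = 1000, 50, 20
library_features = rng.normal(size=(n_clips, n_features))  # 1-second clips

# Predicted pattern for every library clip via an encoding model
# (a random weight matrix here, standing in for a trained encoder).
weights = rng.normal(size=(n_features, n_voxels))
predicted = library_features @ weights              # (n_clips, n_voxels)

observed = predicted[123] + rng.normal(size=n_voxels)  # "measured" activity

# Correlate each clip's predicted pattern with the observed pattern.
pred_z = predicted - predicted.mean(axis=1, keepdims=True)
pred_z /= predicted.std(axis=1, keepdims=True)
obs_z = (observed - observed.mean()) / observed.std()
match = pred_z @ obs_z / n_voxels                   # per-clip correlation

top_idx = np.argsort(match)[-100:]                  # 100 best-matching clips
# The reconstruction would average these clips' frames; here we just check
# that the true clip is among them and which clip ranks first.
print("true clip in top 100:", 123 in top_idx)
print("best match is clip", int(top_idx[-1]), "(true clip was 123)")
```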

 

Naselaris cautions that they can’t recreate dreams or other mind’s-eye images with this model. Because they focused on an early visual brain region, their model “has everything to do with the light that is hitting your eye,” Naselaris says.

 

Yet Gallant and his group’s strategy is powerful for a number of reasons. With it, they can make surprisingly accurate predictions about images that the model has not seen before. In addition, their strategy of first creating an encoding model before decoding gives researchers a method for testing theories about how the brain processes information. For other areas of the brain, the strategy will likely have to be modified. “The brain is a pretty complicated place,” Gallant says, “so there is no one grand approach that will solve everything. Instead, there are thousands of neuroscientists using thousands of different approaches to try to move ahead.”


