How Scientists Are Learning to Read Our Minds

An excerpt from “The Future of the Mind.”

The following is an excerpt from The Future of the Mind, by Michio Kaku.

Houdini, some historians believe, was the greatest magician who ever lived. His breathtaking escapes from locked, sealed chambers and death-defying stunts left audiences gasping. He could make people disappear and then re-emerge in the most unexpected places. And he could read people’s minds.

Or, at least it seemed that way.

Houdini took pains to explain that everything he did was an illusion, a series of clever sleight-of-hand tricks. Mind reading, he would remind people, was impossible. He was so outraged that unscrupulous magicians would cheat wealthy patrons by performing cheap parlor tricks and séances that he took it upon himself to go around the country exposing fakes. He was even on a committee organized by Scientific American, which offered a generous reward to anyone who could positively prove they had psychic power. (No one ever picked up the reward.)

Houdini believed that true telepathy was impossible. But science is proving Houdini wrong.

Telepathy is now the subject of intense research at universities around the world, where scientists have already been able to read individual words, images, and thoughts from the brain by combining the latest scanning technology with pattern-recognition software. This could revolutionize the way we communicate with stroke and accident victims who are "locked in" their bodies, unable to articulate their thoughts except through blinks of their eyes. But that's just the start. It might also radically change the way we interact with computers and the outside world.

As we know, the brain is electrical. In general, any time an electron is accelerated, it gives off electromagnetic radiation, and the same holds true for electrons oscillating inside the brain. It sounds like something out of science fiction or fantasy, but humans naturally emit radio waves. These signals are too faint to be detected by others, however, and even if we could perceive them, it would be difficult to make sense of them. Computers are changing all this. Scientists have already been able to get crude approximations of a person's thoughts using EEG scans. Subjects put on a helmet studded with EEG sensors and concentrate on certain pictures, say, the image of a car or a house. The EEG signals are recorded for each image, and eventually a rudimentary dictionary of thought is created, with a one-to-one correspondence between a person's thoughts and the EEG patterns. Then, when the person is shown a picture of another car, the computer recognizes the EEG pattern as that of a car.
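
The "dictionary of thought" described here is, in essence, a template-matching classifier. The sketch below shows the idea on toy data; the feature vectors, their length, and the correlation-based matching are all illustrative assumptions, not the actual experimental pipeline.

```python
import numpy as np

# Hypothetical recorded data: one EEG feature vector per picture (e.g.,
# averaged band power per electrode), captured while a subject
# concentrated on that image. These are the dictionary entries.
templates = {
    "car":   np.array([0.8, 0.1, 0.3, 0.5]),
    "house": np.array([0.2, 0.9, 0.6, 0.1]),
}

def classify(eeg_sample: np.ndarray) -> str:
    """Return the dictionary entry whose template best matches the sample."""
    def correlation(label: str) -> float:
        return np.corrcoef(eeg_sample, templates[label])[0, 1]
    return max(templates, key=correlation)

# A new recording made while the subject views another car should land
# closest to the "car" template.
print(classify(np.array([0.75, 0.15, 0.35, 0.45])))  # -> "car"
```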

The advantage of EEG sensors is that they are non-invasive and quick. You simply place a helmet containing many electrodes on the scalp, and the EEG can rapidly identify signals that change every millisecond. But the problem with EEG sensors, as we have seen, is that electromagnetic waves deteriorate as they pass through the skull, making it difficult to locate the precise source. This method can tell if you are thinking of a car versus a house, but it cannot recreate an image of the car. That is where Dr. Gallant's work comes in.

VIDEOS OF THE MIND

The epicenter for much of this research is the University of California at Berkeley, where I received my own Ph.D. in theoretical physics years ago. I had the pleasure of touring the laboratory of Dr. Jack Gallant, whose group has accomplished a feat once considered to be impossible: videotaping people's thoughts. "This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our mind," says Dr. Gallant.

When I visited his laboratory, the first thing I noticed was the team of young, eager postdoctoral and graduate students huddled behind their computer screens, looking intently at video images that were reconstructed from someone’s brain scans. Talking to his team, you feel as though you are witnessing scientific history in the making.

Dr. Gallant explained to me that first, the subject lies flat on a stretcher, which is slowly inserted head first into a huge, state-of-the-art MRI machine costing upwards of $3 million. The subject is then shown several movie clips (such as movie trailers readily available on YouTube). To accumulate enough data, you have to lie motionless for hours watching these clips, a truly arduous task. I asked one of the post-docs, Dr. Shinji Nishimoto, how they found volunteers who were willing to lie still for hours on end with only fragments of video footage to occupy the time. He said the people in the room, the grad students and post-docs, volunteered to be guinea pigs for their own research.

As the subject watches the movies, the MRI machine creates a 3D image of the blood flow within the brain. The MRI image looks like a vast collection of 30,000 dots or voxels. Each voxel represents a pinpoint of neural energy, and the color of the dot corresponds to the intensity of the signal and blood flow. Red dots represent points of large neural activity, while blue dots represent points of less activity. (The final image looks very much like thousands of Christmas lights in the shape of the brain. Immediately, you can see the brain is concentrating most of its mental energy in the visual cortex while watching these videos.)

At first, this color 3D collection of dots looks like gibberish. But after years of research, Dr. Gallant and his colleagues have developed a mathematical formula that begins to make connections between certain features of a picture (edges, textures, intensity, etc.) and the MRI voxels. For example, if you look at a boundary, you'll notice it's a region separating lighter and darker areas, and hence the edge generates a certain pattern of voxels. By having subject after subject view a large library of movie clips, this mathematical formula is refined, allowing the computer to analyze how all sorts of images are converted into MRI voxels. Eventually, the scientists were able to ascertain a direct correlation between certain MRI patterns of voxels and each picture. "We built a model for each voxel that describes how space and motion information in the movie is mapped into brain activity," Dr. Nishimoto told me.
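
Fitting "a model for each voxel" amounts to a per-voxel regression from stimulus features to measured activity. Here is a minimal sketch on synthetic data using plain least squares; the feature set, the data sizes, and the linear form are illustrative assumptions (the group's published approach uses motion-energy features with regularized regression).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each movie frame is summarized by a feature
# vector (edge energy, texture, motion, ...) paired with the activity of
# the voxels measured while the subject watched that frame.
n_frames, n_features, n_voxels = 500, 20, 100   # toy sizes, not the real 30,000
features = rng.standard_normal((n_frames, n_features))
true_map = rng.standard_normal((n_features, n_voxels))
voxels = features @ true_map + 0.1 * rng.standard_normal((n_frames, n_voxels))

# Fit one linear model per voxel: ordinary least squares solves every
# voxel's weight vector in a single call.
weights, *_ = np.linalg.lstsq(features, voxels, rcond=None)

# The fitted model predicts the voxel pattern a new frame should evoke
# (features -> voxels), the direction used for decoding described next.
predicted_pattern = features[0] @ weights
```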

At this point, the subject is shown another movie trailer while a computer analyzes the voxels generated during the viewing and re-creates a rough approximation of the original image. (The computer selects the images from 100 movie clips that most closely resemble the one the subject just saw and then merges them to create a close approximation.) In this way, the computer is able to create a fuzzy video of the visual imagery going through your mind. Dr. Gallant's mathematical formula is so versatile that it can take a collection of MRI voxels and convert it into a picture, or it can do the reverse, taking a picture and converting it into MRI voxels.
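
The select-and-merge step can be sketched as a ranking problem: push each candidate clip's features through the fitted encoding model, score how well the predicted voxel pattern correlates with the one actually measured, and blend the best matches. Everything below (the names, the correlation score, the flat top-k selection) is an illustrative simplification of the group's actual reconstruction procedure.

```python
import numpy as np

def decode(observed_voxels, clip_features, weights, top_k=5):
    """Rank candidate clips by how well their predicted voxel patterns
    match the measured one; return the indices of the best matches."""
    predictions = clip_features @ weights        # one predicted pattern per clip
    scores = np.array([np.corrcoef(observed_voxels, p)[0, 1] for p in predictions])
    return np.argsort(scores)[-top_k:]           # frames of these clips get merged

# Toy usage: treat each of 100 candidate clips as a single feature vector
# and decode one observation.
rng = np.random.default_rng(1)
clip_features = rng.standard_normal((100, 20))
weights = rng.standard_normal((20, 100))
observed = clip_features[42] @ weights           # pretend the subject saw clip 42
print(decode(observed, clip_features, weights))  # clip 42 should rank highest
```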

I had a chance to view one of the videos created by Dr. Gallant's group, and it was very impressive. Watching it was like viewing a movie through dark glasses: although you could not see the close-up details of the faces, animals, street scenes, and buildings, you could clearly identify the kind of object you were seeing.

Not only can this program decode what you are looking at, it can also decode imaginary images circulating in your head. Let's say you are asked to think of the Mona Lisa. We know from MRI scans that even though you're not viewing the painting with your eyes, the visual cortex of your brain will light up. Dr. Gallant's program then scans your brain and flips through its data files of pictures, trying to find the closest match. In one experiment that I saw, the computer selected a picture of the actress Salma Hayek as the closest approximation to the Mona Lisa. Of course, the average person can easily recognize hundreds of faces, but the fact that the computer analyzed an image within a person's brain and then picked out this picture from the millions of random pictures at its disposal is still pretty impressive.

The goal of this whole process is to create an accurate dictionary that allows you to rapidly match an object in the real world with the MRI pattern in your brain. In general, a detailed match is very difficult and will take years, but some categories are actually easy to read just by flipping through some photographs. Dr. Stanislas Dehaene of the Collège de France in Paris was examining MRI scans of the parietal lobe, where numbers are recognized, when one of his post-docs casually mentioned that just by quickly scanning the MRI pattern, he could tell what number the subject was looking at. In fact, numbers create distinctive patterns on the MRI scan.

This leaves open the question of when we might be able to have picture-quality videos of our thoughts. Unfortunately, information is lost when a person visualizes an image. Brain scans corroborate this: when you compare the MRI scan of the brain when it is looking at a flower with the MRI scan when it is merely thinking about a flower, you immediately see that the second image has far fewer dots than the first. So although this technology will vastly improve in the coming years, it will never be perfect. It reminds me of a short story I once read in which a man meets a genie who offers to create anything the man can imagine. The man immediately asks for a luxury car, a jet plane, and a million dollars. At first, he is ecstatic. But when he looks at these items in detail, he sees that the car and the plane have no engines, and the image on the cash is all blurred. Everything is useless. This is because our memories are only approximations to the real thing.

But given the rapidity with which scientists are beginning to decode the MRI patterns in our brain, does this mean we will soon be able to go beyond seeing the images, to actually reading words and thoughts circulating in the mind?

READING THE MIND

In fact, in a building next to Gallant’s laboratory, Dr. Brian Pasley and his colleagues are literally reading thoughts—at least in principle. One of the post-docs there, Dr. Sara Szczepanski, explained to me how they are able to identify words inside the mind.

The scientists used what is called ECoG (electrocorticography) technology, which is a vast improvement over the jumble of signals that EEG scans produce. ECoG scans are unprecedented in accuracy and resolution, since the signals come directly from the brain tissue and do not pass through the skull. The flipside, of course, is that one has to remove a large portion of the skull to place a mesh containing 64 electrodes in an 8 × 8 grid directly on top of the exposed brain.

Luckily, these scientists were able to get permission to conduct experiments with ECoG scans on epileptic patients suffering from debilitating seizures. The ECoG mesh was placed on their brains while open-brain surgery was being performed by doctors at the nearby University of California at San Francisco.

As the patient hears various words, signals from the brain pass through the electrodes and are recorded. Eventually, a dictionary is formed, matching each word with the pattern of signals emanating from the electrodes. Later, when the word is uttered again, the same electrical pattern appears. It also means that if one is merely thinking of a certain word, the computer can pick up the characteristic signals and identify it.
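
In software terms, the dictionary is a mapping from words to averaged electrode patterns, and identification is a nearest-template lookup. The sketch below assumes the 8 × 8 grids are flattened to 64-element vectors; the averaging and the Euclidean distance are illustrative choices, not the published method.

```python
import numpy as np

def build_dictionary(trials: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Average several training trials per word (each trial a 64-element
    vector from the 8 x 8 electrode grid) into one template per entry."""
    return {word: recordings.mean(axis=0) for word, recordings in trials.items()}

def identify(dictionary: dict[str, np.ndarray], pattern: np.ndarray) -> str:
    """Return the word whose stored template lies nearest to the observed
    electrode pattern."""
    return min(dictionary, key=lambda w: np.linalg.norm(dictionary[w] - pattern))

# Toy usage: two words, three noisy trials each.
rng = np.random.default_rng(2)
base = {"yes": rng.standard_normal(64), "no": rng.standard_normal(64)}
trials = {w: base[w] + 0.1 * rng.standard_normal((3, 64)) for w in base}
dictionary = build_dictionary(trials)
print(identify(dictionary, base["yes"] + 0.1 * rng.standard_normal(64)))  # -> "yes"
```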

With this technology, stroke victims who are totally paralyzed may be able to "talk" through a voice synthesizer that recognizes the brain patterns of the individual words they're thinking of. It might also be possible to have a conversation that takes place entirely telepathically.

Not surprisingly, BMI (brain-machine interface) has become a hot field, with groups around the country making significant breakthroughs. Similar results were obtained in 2011 by scientists at the University of Utah, who placed two grids, each containing 16 electrodes, over the facial motor cortex (which controls movements of the mouth, lips, tongue, and face) and over Wernicke's area, which processes information about language.

The person was then asked to say ten common words, such as "yes" and "no," "hot" and "cold," "hungry" and "thirsty," "hello" and "goodbye," and "more" and "less." Using a computer to record the brain signals when these words were uttered, the scientists were able to create a rough one-to-one correspondence between the spoken words and the brain's signals. Later, when the patient voiced certain words, they were able to correctly identify each one with an accuracy of 76 to 90 percent. The next step is to use grids with 121 electrodes to get better resolution.
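
Scoring such a system is straightforward: replay held-out trials through the classifier and count how often each word is identified correctly. A minimal sketch, with the trial data purely hypothetical:

```python
import numpy as np

def per_word_accuracy(true_words, predicted_words):
    """Fraction of trials identified correctly, reported per word. A
    76-to-90-percent range across words would be measured this way."""
    words = sorted(set(true_words))
    return {
        w: float(np.mean([p == t for t, p in zip(true_words, predicted_words) if t == w]))
        for w in words
    }

# Toy usage with made-up results for two of the ten words.
truth = ["yes", "yes", "no", "no", "yes"]
preds = ["yes", "no",  "no", "no", "yes"]
print(per_word_accuracy(truth, preds))  # {'no': 1.0, 'yes': 0.666...}
```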

In the future, this procedure may prove useful for individuals suffering from strokes or paralyzing illnesses such as Lou Gehrig's disease, who would be able to speak effortlessly using this brain-to-computer technique.


Excerpted from The Future of the Mind, by Michio Kaku. Excerpt courtesy of Doubleday.


Meet the Writer

About Michio Kaku

Michio Kaku is the author of The Future of the Mind (Doubleday, 2014) and a professor of theoretical physics at the City College of New York and the City University of New York.
