Neuroscientists at the University of California, Berkeley, have developed a technique for creating digital images that correspond to neural activity in the brain.
This represents one of the first steps toward a computer being able to tap directly into what our brain sees, imagines, and even dreams.
(Link to video) Every image we see activates photoreceptors in the retina of the eye. That information is fed through the optic nerve to the back of the brain, where it is assembled and interpreted by successively higher-level processes.
In this experiment, subjects watched clips of movie trailers while an fMRI machine scanned their brains in real time. The computer mapped activity throughout millions of “voxels” (3D pixels).
The computer gradually learned to associate the shapes, edges, and motion occurring in the film with corresponding patterns of brain activity.
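If that description is right, the learning step is essentially a regression problem. Here is a minimal sketch, assuming the film is summarized by per-volume shape/edge/motion features and the scans give a time-by-voxel matrix; the array sizes, the ridge penalty, and the random stand-in data are all invented for illustration, and the actual Berkeley work used a more elaborate motion-energy model:

```python
import numpy as np

# Hypothetical sizes: T fMRI volumes, F stimulus features, V voxels.
T, F, V = 2000, 512, 1000
features = np.random.randn(T, F)   # stand-in for shape/edge/motion features per volume
bold = np.random.randn(T, V)       # stand-in for the measured voxel activity

# Ridge regression: for every voxel, learn a weight on each stimulus feature.
# W solves (X'X + lambda*I) W = X'Y, giving one column of weights per voxel.
lam = 10.0
W = np.linalg.solve(features.T @ features + lam * np.eye(F), features.T @ bold)

def predict_bold(new_features):
    """Predicted voxel pattern(s) for a new clip's features, shape (T_new, V)."""
    return new_features @ W
```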
It then built “dictionaries” matching video content to patterns of brain activity, and used them to predict the patterns it guessed would be evoked by novel videos, drawing on a palette of 18 million seconds of random clips taken from the internet. Over time, the computer could crunch all this data into a set of images that played out alongside the original video.
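Under the same assumptions, the “dictionary” step could be sketched as scoring every library clip by how closely its predicted brain pattern (from the model above) matches what was actually observed. Correlation is used here as a stand-in for the study's actual Bayesian scoring:

```python
def rank_library_clips(observed_pattern, library_features, top_k=100):
    """Rank candidate clips by how well their *predicted* brain activity
    matches the pattern observed while the subject watched the novel video.

    observed_pattern : (V,) measured voxel pattern
    library_features : (N, F) one feature vector per candidate clip
    """
    predicted = predict_bold(library_features)            # (N, V) predicted patterns
    pz = predicted - predicted.mean(axis=1, keepdims=True)
    pz /= pz.std(axis=1, keepdims=True)
    oz = (observed_pattern - observed_pattern.mean()) / observed_pattern.std()
    scores = pz @ oz / len(oz)                            # per-clip correlation
    best = np.argsort(scores)[::-1][:top_k]               # indices of the ~100 best matches
    return best, scores[best]
```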
If I understand the process correctly, the images we’re seeing on the right side (“clips reconstructed from brain activity”) are actually running averages created by blending a hundred or so random YouTube clips that met the computer’s predictions of what images would match the patterns it was monitoring in the brain.
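If that reading is right, the reconstructed frame is just a weighted blend of the best-matching clips' frames, along these lines (all names hypothetical):

```python
def reconstruct_frame(library_frames, best, scores):
    """Blend frames of the top-matching clips into one reconstructed image.

    library_frames : (N, H, W, 3) a representative frame per library clip
    best, scores   : indices and match scores from rank_library_clips
    """
    weights = np.clip(scores, 0, None)          # drop negatively correlated clips
    weights /= weights.sum()
    frames = library_frames[best].astype(float)
    # The blur of the result falls straight out of averaging a hundred
    # loosely related images.
    return np.tensordot(weights, frames, axes=1)   # (H, W, 3)
```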
In other words, the right-hand image is generated from existing clips, not from scratch. In this video (link), you can see the novel video that's causing the brain activity in the upper left of the screen, and some of the samples (strung out in a line) that the computer is guessing must be causing that kind of brain activity.
That would explain the momentary ghostly word fragments that pop up in the images, as well as the strange shifts in color and shape from the original.
The result is a moving image that looks like a blurry version of the original video, but one that has been generalized based on the available palette of averaged clips. Evidently the perception of faces drives the brain especially strongly, judging from how much clearer the computer's reconstructions of faces are than those of other kinds of images.
I wonder what would happen if you set this system up in a biofeedback loop, so that the brain activity and image generation could play off against each other. It might be like a computer-aided hallucination.
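Purely as speculation, and reusing the sketches above, that loop might look like this; read_brain_pattern and display are invented I/O callbacks, and real fMRI would lag seconds behind the stimulus:

```python
import time

def biofeedback_loop(read_brain_pattern, display, library_features,
                     library_frames, seconds=60):
    """Speculative closed loop: the reconstruction the subject is viewing
    becomes part of the stimulus driving the next reconstruction."""
    t_end = time.time() + seconds
    while time.time() < t_end:
        pattern = read_brain_pattern()                 # (V,) current voxel pattern
        best, scores = rank_library_clips(pattern, library_features)
        display(reconstruct_frame(library_frames, best, scores))
        time.sleep(1.0)                                # fMRI yields ~1 volume per second at best
```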
Article on Gizmodo
Thanks,
Christian Schlierkamp