Brain-reading uses the responses of multiple voxels in the brain, evoked by a stimulus and detected by fMRI, to decode the original stimulus. Brain-reading studies differ in the type of decoding employed (i.e. classification, identification, or reconstruction), the target (i.e. decoding visual patterns, auditory patterns, or cognitive states), and the decoding algorithm used (linear classification, nonlinear classification, direct reconstruction, Bayesian reconstruction, etc.).
In classification, a pattern of activity across multiple voxels is used to determine the particular class from which the stimulus was drawn. Many studies have classified visual stimuli, but this approach has also been used to classify cognitive states.
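Classification of this kind can be illustrated with a minimal sketch. The data here are synthetic stand-ins for preprocessed voxel responses, and the decoder is a simple nearest-centroid rule, one of many possible linear classifiers rather than the method of any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 100, 100, 50

# Two stimulus classes, each with its own mean multi-voxel pattern.
patterns = rng.normal(0.0, 1.0, (2, n_voxels))

def simulate(n):
    """Simulate n trials: class pattern plus per-voxel measurement noise."""
    labels = rng.integers(0, 2, n)
    responses = patterns[labels] + rng.normal(0.0, 1.5, (n, n_voxels))
    return responses, labels

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# Nearest-centroid decoder: assign each held-out pattern to the class
# whose mean training pattern it lies closest to.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2)
predicted = dists.argmin(axis=1)
accuracy = (predicted == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.50 chance level
```

The same recipe applies whether the two classes are visual stimuli or cognitive states; only the source of the voxel patterns changes.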
In reconstruction, the aim is to create a literal picture of the image that was presented. Early studies used voxels from early visual cortex areas (V1, V2, and V3) to reconstruct geometric stimuli made up of flickering checkerboard patterns.
More recent studies used voxels from both early visual cortex and the anterior visual areas forward of it (visual areas V3A, V3B, V4, and the lateral occipital complex), together with Bayesian inference techniques, to reconstruct complex natural images. This brain-reading approach uses three components: a structural encoding model that characterizes responses in early visual areas; a semantic encoding model that characterizes responses in anterior visual areas; and a Bayesian prior that describes the distribution of structural and semantic scene statistics.
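The core Bayesian step can be sketched numerically. This toy version makes strong simplifying assumptions: a single known linear encoding model (the real approach combines structural and semantic models fit to data), Gaussian noise, and a flat prior over a small sampled image set standing in for the natural image prior. All sizes and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_features, n_prior = 40, 20, 500

# Assumed linear encoding model: voxel responses = W @ image_features + noise.
W = rng.normal(0, 1, (n_voxels, n_features))
# Sampled image set standing in for the natural image prior.
prior_images = rng.normal(0, 1, (n_prior, n_features))

# Responses evoked by one (unknown) image drawn from the prior set.
true_idx = 123
noise_sd = 2.0
observed = W @ prior_images[true_idx] + rng.normal(0, noise_sd, n_voxels)

# Posterior over prior images (flat prior): Gaussian log-likelihood of the
# measurement given each candidate image's predicted response pattern.
predicted = prior_images @ W.T                      # (n_prior, n_voxels)
log_lik = -((observed - predicted) ** 2).sum(axis=1) / (2 * noise_sd**2)
reconstruction_idx = log_lik.argmax()
print(reconstruction_idx == true_idx)
```

With a flat prior the maximum a posteriori choice reduces to picking the prior image whose predicted responses best match the measurement; a richer prior would reweight the candidates by scene statistics.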
Experimentally, subjects first view 1750 black and white natural images while the evoked voxel activation in their brains is recorded. Subjects then view another 120 novel target images, and information from the earlier scans is used to reconstruct them. Natural images used include pictures of a seaside cafe and harbor, performers on a stage, and dense foliage.
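The two-stage procedure just described can be sketched as follows. The feature space, linear model, and noise levels here are simplified stand-ins, not the actual encoding models used in the studies; only the train/test set sizes mirror the experiment:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_features = 60, 30
n_train, n_test = 1750, 120   # sizes mirror the experiment described above

# Ground-truth (unknown to the decoder) stimulus-to-voxel mapping.
W_true = rng.normal(0, 1, (n_voxels, n_features))

def responses(features, noise_sd=1.0):
    """Simulate measured voxel responses to a batch of image features."""
    return features @ W_true.T + rng.normal(0, noise_sd, (len(features), n_voxels))

# Stage 1: estimate each voxel's encoding model from the training images.
train_feats = rng.normal(0, 1, (n_train, n_features))
train_resp = responses(train_feats)
W_hat, *_ = np.linalg.lstsq(train_feats, train_resp, rcond=None)

# Stage 2: decide which of the 120 novel images evoked each measured
# response by matching it against every candidate's predicted response.
test_feats = rng.normal(0, 1, (n_test, n_features))
test_resp = responses(test_feats)
predicted = test_feats @ W_hat
correct = sum(
    np.linalg.norm(test_resp[i] - predicted, axis=1).argmin() == i
    for i in range(n_test)
)
print(f"identified {correct}/{n_test} images (chance: 1/{n_test})")
```

Note that matching a measurement to one of 120 known candidates is identification; full reconstruction additionally searches over a prior image set as in the Bayesian sketch above.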
It is possible to track which of two forms of a rivalrous binocular illusion a person is subjectively experiencing from fMRI signals. The category of event that a person freely recalls can be identified from fMRI before they say what they remembered. Statistical analysis of EEG brainwaves has been claimed to allow the recognition of phonemes and, at a 60% to 75% level, of color and visual shape words. It has also been shown that brain-reading can be achieved in a complex virtual environment.
Brain-reading accuracy is increasing steadily as the quality of the data and the complexity of the decoding algorithms improve. In one recent experiment it was possible to identify which single image was being seen from a set of 120. In another, it was possible to identify which of two categories the stimulus came from 90% of the time, and the specific semantic category (out of 23) of the target image 40% of the time.
It has been noted that brain reading is so far limited. As Naselaris and colleagues put it: "in practice exact reconstructions are impossible to achieve by any reconstruction algorithm on the basis of brain activity signals acquired by fMRI. This is because all reconstructions will inevitably be limited by inaccuracies in the encoding models and noise in the measured signals. Our results demonstrate that the natural image prior is a powerful (if unconventional) tool for mitigating the effects of these fundamental limitations. A natural image prior with only six million images is sufficient to produce reconstructions that are structurally and semantically similar to a target image."
- Bayesian brain
- Brain fingerprinting
- Mind uploading
- Minority Report (film)
- Kamitani, Yukiyasu; Tong, Frank (2005). "Decoding the visual and subjective contents of the human brain". Nature Neuroscience 8 (5): 679–85. doi:10.1038/nn1444. PMC 1808230. PMID 15852014.
- Miyawaki, Y; Uchida, H; Yamashita, O; Sato, M; Morito, Y; Tanabe, H; Sadato, N; Kamitani, Y (2008). "Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders". Neuron 60 (5): 915–29. doi:10.1016/j.neuron.2008.11.004. PMID 19081384.
- Thirion, Bertrand; Duchesnay, Edouard; Hubbard, Edward; Dubois, Jessica; Poline, Jean-Baptiste; Lebihan, Denis; Dehaene, Stanislas (2006). "Inverse retinotopy: Inferring the visual content of images from brain activation patterns". NeuroImage 33 (4): 1104–16. doi:10.1016/j.neuroimage.2006.06.062. PMID 17029988.
- Naselaris, Thomas; Prenger, Ryan J.; Kay, Kendrick N.; Oliver, Michael; Gallant, Jack L. (2009). "Bayesian Reconstruction of Natural Images from Human Brain Activity". Neuron 63 (6): 902–15. doi:10.1016/j.neuron.2009.09.006. PMID 19778517.
- Haynes, J; Rees, G (2005). "Predicting the Stream of Consciousness from Activity in Human Visual Cortex". Current Biology 15 (14): 1301–7. doi:10.1016/j.cub.2005.06.026. PMID 16051174.
- Polyn, S. M.; Natu, VS; Cohen, JD; Norman, KA (2005). "Category-Specific Cortical Activity Precedes Retrieval During Memory Search". Science 310 (5756): 1963–6. doi:10.1126/science.1117645. PMID 16373577.
- Suppes, Patrick; Perreau-Guimaraes, Marcos; Wong, Dik Kin (2009). "Partial Orders of Similarity Differences Invariant Between EEG-Recorded Brain and Perceptual Representations of Language". Neural Computation 21 (11): 3228–69. doi:10.1162/neco.2009.04-08-764. PMID 19686069.
- Suppes, Patrick; Han, Bing; Epelboim, Julie; Lu, Zhong-Lin (1999). "Invariance of brain-wave representations of simple visual images and their names". Proceedings of the National Academy of Sciences of the United States of America 96 (25): 14658–63. doi:10.1073/pnas.96.25.14658. PMC 24492. PMID 10588761.
- Chu, Carlton; Ni, Yizhao; Tan, Geoffrey; Saunders, Craig J.; Ashburner, John (2010). "Kernel regression for fMRI pattern prediction". NeuroImage 56 (2): 662–673. doi:10.1016/j.neuroimage.2010.03.058. PMID 20348000.
- Kay, Kendrick N.; Naselaris, Thomas; Prenger, Ryan J.; Gallant, Jack L. (2008). "Identifying natural images from human brain activity". Nature 452 (7185): 352–5. doi:10.1038/nature06713. PMID 18322462.
- "Brain scanners can tell what you're thinking about", New Scientist article on brain-reading, 28 October 2009
- 2007 Pittsburgh Brain Activity Interpretation Competition: Interpreting subject-driven actions and sensory experience in a rigorously characterized virtual world