Reading the mind's eye: decoding category information during mental imagery

Leila Reddy, Naotsugu Tsuchiya, Thomas Serre

Research output: Contribution to journal › Article › Research › peer-review

191 Citations (Scopus)

Abstract

Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral-temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of diagnostic voxels (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom-up input, cortical back projections can selectively re-activate specific patterns of neural activity.
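The cross-condition decoding analysis described above (train a classifier on viewed trials, test on imagined trials, or vice versa) can be illustrated with a minimal sketch. The data below are synthetic stand-ins for multi-voxel fMRI patterns, not the study's data; voxel counts, noise levels, and the choice of a logistic-regression classifier are illustrative assumptions, with category means shared across conditions to mimic the overlap the study reports.

```python
# Hypothetical sketch of cross-condition MVPA decoding.
# Synthetic "voxel" patterns: each category has one mean pattern
# shared by the viewed and imagined conditions; only the noise differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40
categories = ["food", "tools", "faces", "buildings"]

# One mean activity pattern per category (shared across conditions).
means = {c: rng.normal(0.0, 1.0, n_voxels) for c in categories}

def simulate(noise_sd):
    """Generate n_trials noisy patterns per category."""
    X, y = [], []
    for c in categories:
        X.append(means[c] + rng.normal(0.0, noise_sd, (n_trials, n_voxels)))
        y += [c] * n_trials
    return np.vstack(X), np.array(y)

X_view, y_view = simulate(noise_sd=0.5)  # actual viewing: stronger signal
X_imag, y_imag = simulate(noise_sd=1.0)  # imagery: weaker signal

# Train on the viewed condition, test on the imagery condition.
clf = LogisticRegression(max_iter=1000).fit(X_view, y_view)
cross_acc = clf.score(X_imag, y_imag)
print(f"cross-condition accuracy: {cross_acc:.2f} (chance = 0.25)")
```

Because the category means are shared across conditions, the classifier transfers well above the 25% chance level, which is the signature of overlapping perception and imagery representations that the study reports for ventral-temporal cortex.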
Original language: English
Pages (from-to): 818-825
Number of pages: 8
Journal: NeuroImage
Volume: 50
Issue number: 2
DOIs
Publication status: Published - 2010
Externally published: Yes
