Summation of visual motion across eye movements reflects a nonspatial decision mechanism

Adam P Morris, Charles C Liu, Simon J Cropper, Jason D Forte, Bart Krekelberg, Jason B Mattingley

Research output: Contribution to journal › Article › Research › peer-review

27 Citations (Scopus)


Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature representations anchored in environmental rather than retinal coordinates (e.g., spatiotopic receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when it was preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
Original language: English
Pages (from-to): 9821-9830
Number of pages: 10
Journal: Journal of Neuroscience
Issue number: 29
Publication status: Published - 2010
Externally published: Yes