Eye tracking technology could become obsolete for many neuroscience studies, as researchers demonstrate they can predict where people look during movies using only brain scan data. This breakthrough addresses a persistent technical challenge that has limited naturalistic brain research for decades.
A deep learning model called DeepMReye successfully estimated eye movements from fMRI signals across three independent datasets of people watching films. Individual predictions showed moderate accuracy (correlations of -0.38 to 0.67 with actual eye tracking), but group-averaged predictions reached much higher reliability (correlations of 0.7-0.8). The technique works by detecting subtle eyeball-related signals embedded within standard brain imaging data, eliminating the need for specialized eye-tracking cameras, which often fail in MRI environments.
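The gap between individual and group-level accuracy follows from basic signal averaging: if each subject's predicted gaze trace is the true trace plus independent noise, averaging across subjects cancels much of that noise and raises the correlation with ground truth. The sketch below illustrates this with purely synthetic data; it is not the authors' pipeline, and all quantities (the random-walk "gaze" trace, the noise level, the subject count) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not DeepMReye itself): why averaging noisy
# per-subject gaze predictions across a group raises correlation
# with the true gaze trace. All values here are synthetic.
rng = np.random.default_rng(0)

n_timepoints = 500
# Synthetic "true" horizontal gaze position: a random walk over time.
true_gaze = np.cumsum(rng.normal(size=n_timepoints))

n_subjects = 20
# Assume each subject's prediction = true signal + independent noise,
# with noise twice the signal's standard deviation (an assumption).
noise_sd = 2.0 * true_gaze.std()
predictions = true_gaze + rng.normal(scale=noise_sd,
                                     size=(n_subjects, n_timepoints))

def pearson_r(a, b):
    """Pearson correlation between two 1-D traces."""
    return np.corrcoef(a, b)[0, 1]

individual_rs = [pearson_r(p, true_gaze) for p in predictions]
group_r = pearson_r(predictions.mean(axis=0), true_gaze)

print(f"mean individual r: {np.mean(individual_rs):.2f}")
print(f"group-averaged  r: {group_r:.2f}")
```

Because the noise terms are independent across subjects, the group average shrinks the noise by roughly the square root of the subject count, so the group-level correlation comfortably exceeds the typical individual one, mirroring the moderate-versus-high pattern reported in the study.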
This computational approach represents a significant methodological advance for cognitive neuroscience. Traditional eye tracking in MRI scanners faces numerous obstacles: head movement, magnetic field interference, and equipment compatibility issues frequently compromise data quality. The ability to extract gaze patterns retrospectively from existing brain scans could unlock analysis of thousands of archived datasets that lack concurrent eye tracking.
However, the moderate individual-level accuracy suggests this method works best for population-level studies rather than precise single-subject analyses. The technique also revealed that group-averaged predictions activated broader brain networks including established eye movement control regions, while individual predictions primarily engaged visual cortex. This finding indicates the method captures genuine oculomotor signals, not just visual processing artifacts. For researchers studying attention, perception, and social cognition during naturalistic viewing, this represents a practical solution to a longstanding technical barrier.