Event-related potentials (ERPs) are widely used in basic neuroscience and in clinical diagnostic procedures. However, the neurophysiological insights gained from ERPs have been limited, because several different mechanisms can give rise to them. Besides stereotypically repeated responses (additive evoked responses), these mechanisms include asymmetric amplitude modulation and phase resetting of ongoing oscillatory activity. A method is therefore needed that differentiates between these mechanisms and, moreover, quantifies the stability of a response. We propose a constrained subspace independent component analysis that exploits the multivariate information present in the all-to-all relationship of recordings over trials. Our method identifies additive evoked activity and quantifies its stability over trials. We evaluate identification performance on biologically plausible simulated data and two neurophysiological test cases: local field potential (LFP) recordings from a visuo-motor integration task in the awake behaving macaque, and magnetoencephalography (MEG) recordings of steady-state visual evoked fields (SSVEFs). In the LFPs we find additive evoked response contributions in visual areas V2/4 but not in primary motor cortex A4, although visually triggered ERPs were also observed in area A4. MEG-SSVEFs were mainly created by additive evoked response contributions. Our results demonstrate that the identification of additive evoked response contributions is possible both in invasive and in non-invasive electrophysiological recordings.
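The distinction the abstract draws can be illustrated with a toy simulation. The sketch below is not the paper's constrained subspace ICA; it is a minimal numpy example (all signal parameters invented) showing why an additive evoked response survives trial averaging while a random-phase ongoing oscillation does not, and how single-trial stability can be quantified by correlating each trial with the average response.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 200
t = np.arange(n_samples) / 1000.0  # hypothetical 1 kHz sampling, in seconds

# Additive evoked response: the same waveform appears in every trial.
evoked = np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))

# Ongoing 10 Hz oscillation with a random phase per trial (non-additive activity).
phases = rng.uniform(0, 2 * np.pi, n_trials)
ongoing = np.sin(2 * np.pi * 10.0 * t[None, :] + phases[:, None])

trials = evoked[None, :] + ongoing + 0.2 * rng.standard_normal((n_trials, n_samples))

# The trial average recovers the additive component: random-phase
# oscillations cancel out at a rate of ~1/sqrt(n_trials).
erp = trials.mean(axis=0)

# Stability of the response: correlation of each single trial with the average.
stability = np.array([np.corrcoef(tr, erp)[0, 1] for tr in trials])
print(f"mean single-trial correlation: {stability.mean():.2f}")
```

In this toy setting the averaged waveform correlates strongly with the true additive component, while the single-trial correlations are markedly lower, reflecting the ongoing activity that the averaging discards; the paper's subspace ICA is designed to make this kind of separation explicit rather than relying on averaging alone.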
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at millisecond timescales. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to train properly, and to the present day there is a lack of large brain datasets that extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high-temporal-resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the image conditions of the recorded EEG data in a zero-shot fashion, using EEG responses synthesized for hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions and the trial repetitions of the EEG dataset contribute to the trained models’ prediction accuracy. Fourth, we built encoding models whose predictions generalize well to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
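The first two validation steps can be sketched in a few lines of numpy. The example below is an illustration under invented data, not the paper's pipeline or feature set: a ridge-regression "linearizing encoding model" maps hypothetical image features to simulated EEG responses, and zero-shot identification then matches each recorded test response to the best-correlated synthesized response among the candidates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_chan_time = 500, 50, 100, 60

# Hypothetical image features (e.g. DNN activations) and a ground-truth linear map;
# all sizes and noise levels here are invented for the demo.
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_chan_time))

Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_chan_time))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_chan_time))

# Linearizing encoding model: ridge regression from features to EEG responses.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat), X_train.T @ Y_train)
Y_pred = X_test @ W  # synthesized EEG responses to held-out images

# Zero-shot identification: z-score each response pattern, then assign every
# recorded response to the candidate with the highest correlation.
Yz = (Y_test - Y_test.mean(1, keepdims=True)) / Y_test.std(1, keepdims=True)
Pz = (Y_pred - Y_pred.mean(1, keepdims=True)) / Y_pred.std(1, keepdims=True)
corr = Yz @ Pz.T / n_chan_time  # (n_test, n_test) correlation matrix
identified = corr.argmax(axis=1)
accuracy = (identified == np.arange(n_test)).mean()
print(f"identification accuracy: {accuracy:.2f}")
```

The key property this demonstrates is that identification requires no training data for the test images: any image whose features can be computed can be ranked as a candidate, which is what lets the paper scale the candidate set to hundreds of thousands of conditions.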