We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed free eye movements and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase-scrambled, spatially filtered, mirrored), and stimulus categories (natural and urban scenes, websites, fractals, pink noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset make it well suited for evaluating and comparing computational models of overt attention, and for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
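Evaluating a model of overt attention against fixation data is commonly done by scoring how well the model's saliency map separates fixated from non-fixated locations, e.g. with an AUC metric. The sketch below illustrates the idea on fully synthetic data (a toy center-bias saliency map and simulated fixations); it is not the authors' evaluation pipeline, and all sizes and parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a toy saliency map (centered Gaussian, i.e. a
# pure center-bias model) and simulated fixations clustered near the center.
H, W = 96, 128
yy, xx = np.mgrid[0:H, 0:W]
saliency = np.exp(-(((yy - H / 2) ** 2) + ((xx - W / 2) ** 2)) / (2 * 20.0 ** 2))

n_fix = 200
fx = np.clip(rng.normal(W / 2, 15, n_fix).astype(int), 0, W - 1)
fy = np.clip(rng.normal(H / 2, 15, n_fix).astype(int), 0, H - 1)

def fixation_auc(smap, fy, fx, n_neg=10_000, rng=rng):
    """AUC: probability that the saliency value at a fixated pixel exceeds
    that at a uniformly sampled non-fixation pixel (Mann-Whitney form)."""
    pos = smap[fy, fx]
    ny = rng.integers(0, smap.shape[0], n_neg)
    nx = rng.integers(0, smap.shape[1], n_neg)
    neg = smap[ny, nx]
    diffs = pos[:, None] - neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

auc = fixation_auc(saliency, fy, fx)
print(f"AUC = {auc:.3f}")  # chance level is 0.5
```

With real data, the same scoring loop would run per image and per observer, which is where the dataset's 2.7 million fixations become useful for discriminating between models.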
Modeling long-term neuronal dynamics may require running long-lasting simulations. Such simulations are computationally expensive, so it is advantageous to use simplified models that sufficiently reproduce the relevant neuronal properties. Reducing the complexity of the neuronal dendritic tree is one option. We have therefore developed a new reduced-morphology model of the rat CA1 pyramidal cell that retains the major dendritic branch classes. To validate our model against experimental data, we used HippoUnit, a recently established standardized test suite for CA1 pyramidal cell models. HippoUnit allowed us to systematically evaluate the somatic and dendritic properties of the model and compare them to models publicly available in the ModelDB database. Our model reproduced (1) somatic spiking properties, (2) somatic depolarization block, (3) EPSP attenuation, (4) action potential backpropagation, and (5) synaptic integration at oblique dendrites of CA1 neurons. In these tests, the model achieved higher biological accuracy than the other tested models. We conclude that, owing to its realistic biophysics and low morphological complexity, our model captures key physiological features of CA1 pyramidal neurons while reducing computation time. The validated reduced-morphology model can thus be used as a substitute for more complex models in computationally demanding simulations.
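The motivation for morphological reduction — per-timestep cost grows with the number of compartments — can be illustrated with a toy passive compartmental chain. This is purely illustrative (explicit Euler, arbitrary units, no active conductances) and is not the authors' model or the HippoUnit API:

```python
import time
import numpy as np

def simulate_passive(n_comp, t_stop=100.0, dt=0.025):
    """Toy explicit-Euler simulation of a passive compartmental chain:
    leak current plus nearest-neighbor axial coupling (arbitrary units)."""
    g_leak, g_axial, e_leak = 0.1, 1.0, -65.0
    v = np.full(n_comp, e_leak)
    i_inj = np.zeros(n_comp)
    i_inj[0] = 5.0  # constant current into the "somatic" compartment
    for _ in range(int(t_stop / dt)):
        axial = np.zeros(n_comp)
        axial[:-1] += g_axial * (v[1:] - v[:-1])
        axial[1:] += g_axial * (v[:-1] - v[1:])
        v += dt * (g_leak * (e_leak - v) + axial + i_inj)
    return v

# Fewer compartments -> less work per timestep, same qualitative behavior.
for n in (20, 200):
    t0 = time.perf_counter()
    v = simulate_passive(n)
    elapsed = time.perf_counter() - t0
    print(f"{n:4d} compartments: soma V = {v[0]:.2f}, {elapsed * 1e3:.1f} ms")
```

A full-morphology CA1 model can have thousands of compartments with many active conductances each, so reducing the tree while preserving branch classes directly cuts simulation time.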
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating on a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to train properly, and to date there is a lack of large brain datasets that extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high-temporal-resolution EEG responses to images of objects on a natural background. The dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the image conditions of the recorded EEG data in a zero-shot fashion, using EEG responses synthesized for hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions and the trial repetitions of the EEG dataset contribute to the trained models’ prediction accuracy. Fourth, we built encoding models whose predictions generalize well to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
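A linearizing encoding model maps (typically DNN-derived) image features to brain responses with a linear regression, and zero-shot identification then matches each recorded response to the best-correlated synthesized candidate. The sketch below shows both steps on synthetic data with closed-form ridge regression; all dimensions, the noise model, and the ground-truth linear map are invented for illustration and do not reflect the paper's actual features or EEG preprocessing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: image feature vectors -> multichannel EEG pattern.
n_train, n_test, n_feat, n_eeg = 500, 50, 40, 60

# Synthetic stand-in for image features and a noisy linear ground truth.
W_true = rng.normal(size=(n_feat, n_eeg))
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
Y_train = X_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_eeg))
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_eeg))

# Linearizing encoding model: closed-form ridge regression.
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)
Y_pred = X_test @ W_hat  # synthesized responses for held-out images

# Zero-shot identification: match each recorded response to the candidate
# synthesized response with the highest Pearson correlation.
def zscore(a):
    return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)

corr = zscore(Y_test) @ zscore(Y_pred).T / n_eeg  # recorded x candidates
identified = corr.argmax(axis=1)
accuracy = (identified == np.arange(n_test)).mean()
print(f"identification accuracy: {accuracy:.2f} (chance = {1 / n_test:.2f})")
```

In the paper's setting the candidate set is far larger (hundreds of thousands of image conditions), which makes correct identification a much stronger test of the encoding model.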