
Open Access


150 Psychology


Author

  • Zyl, Llewellyn Ellardus van (33)
  • Dick, Rolf van (21)
  • Fiebach, Christian (19)
  • Mitscherlich, Margarete (17)
  • Stangier, Ulrich (16)
  • Shing, Yee Lee (13)
  • Hasselhorn, Marcus (12)
  • Reif, Andreas (12)
  • Võ, Melissa Lê-Hoa (12)
  • Freitag, Christine M. (11)

Year of publication

  • 2021 (106)
  • 2020 (72)
  • 2019 (66)
  • 2018 (43)
  • 2017 (38)
  • 2022 (32)
  • 2012 (31)
  • 2013 (31)
  • 2014 (31)
  • 2015 (31)

Document Type

  • Article (429)
  • Doctoral Thesis (127)
  • Preprint (32)
  • Contribution to a Periodical (27)
  • Review (19)
  • Book (16)
  • Part of a Book (15)
  • Part of Periodical (15)
  • Report (5)
  • Magister Thesis (2)

Language

  • English (420)
  • German (277)

Has Fulltext

  • yes (695)
  • no (2)

Is part of the Bibliography

  • no (694)
  • yes (3)

Keywords

  • Psychoanalyse (17)
  • Freud, Sigmund (14)
  • working memory (12)
  • Human behaviour (9)
  • confirmatory factor analysis (8)
  • fMRI (8)
  • Behavior (7)
  • EEG (7)
  • ADHD (6)
  • COVID-19 (6)

Institute

  • Psychologie (349)
  • Psychologie und Sportwissenschaften (104)
  • Medizin (68)
  • Präsidium (37)
  • Deutsches Institut für Internationale Pädagogische Forschung (DIPF) (31)
  • Gesellschaftswissenschaften (26)
  • Erziehungswissenschaften (24)
  • Frankfurt Institute for Advanced Studies (FIAS) (22)
  • Sigmund-Freud-Institut – Forschungsinstitut für Psychoanalyse und ihre Anwendungen (18)
  • Ernst Strüngmann Institut (17)

697 search hits

A simple model for detailed visual cortex maps predicts fixed hypercolumn sizes (2022)
Weigand, Marvin ; Cuntz, Hermann
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
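The neural placement idea described in this abstract (arranging neurons so that connectivity neighbours become spatial neighbours) can be sketched with an off-the-shelf t-SNE applied to a toy connectivity matrix. Everything below is an illustrative assumption, not the authors' MAPStoolbox implementation: the orientation-tuned connectivity rule, the Gaussian width, and the scikit-learn parameters.

```python
# Sketch: t-SNE neural placement on a toy orientation-tuned connectivity.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n = 200
# Toy feature: each neuron's preferred orientation, in [0, pi).
theta = rng.uniform(0, np.pi, n)
# Connectivity: stronger links between similarly tuned neurons.
diff = np.abs(theta[:, None] - theta[None, :])
diff = np.minimum(diff, np.pi - diff)          # circular distance
W = np.exp(-(diff / 0.3) ** 2)
# Dissimilarity = 1 - normalized connectivity; embed into 2D positions.
D = 1.0 - W / W.max()
pos = TSNE(n_components=2, metric="precomputed", init="random",
           perplexity=30, random_state=0).fit_transform(D)
# Neurons that end up close in the 2D map share similar orientations.
```

The embedding returns one 2D coordinate per neuron; in this toy setting, map neighbours have similar preferred orientations, the basic ingredient of an orientation map.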
A general principle of dendritic constancy: a neuron’s size and shape invariant excitability (2019)
Cuntz, Hermann ; Bird, Alexander D ; Beining, Marcel ; Schneider, Marius ; Mediavilla Santos, Laura ; Hoffmann, Felix Z. ; Deller, Thomas ; Jedlička, Peter
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies irrespective of any particular arborisation complexity. Also, spiking models result in morphology invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. It rather provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
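The length-invariance claim above has a simple numeric illustration: at a fixed synaptic density, both the total input current and the total leak grow with cable length, so the distributed response stays put. A minimal sketch with a discretized passive cable (the conductance and current values are arbitrary assumptions, not the paper's analytical cable theory):

```python
# Numeric check: mean steady-state voltage of a passive cable with one
# identical synaptic current per compartment is independent of length.
import numpy as np

def mean_response(n_comp, g_leak=1e-3, g_axial=5e-2, i_syn=1e-4):
    """Solve G v = i for an n-compartment passive cable (sealed ends)
    with the same synaptic current injected into every compartment."""
    G = np.diag(np.full(n_comp, g_leak))
    for k in range(n_comp - 1):            # axial coupling between neighbours
        G[k, k] += g_axial;  G[k + 1, k + 1] += g_axial
        G[k, k + 1] -= g_axial;  G[k + 1, k] -= g_axial
    v = np.linalg.solve(G, np.full(n_comp, i_syn))
    return v.mean()

v_short, v_long = mean_response(50), mean_response(500)
# A 10x longer cable gives the same mean response: i_syn / g_leak.
```

Because the injected current vector is uniform, the axial (Laplacian) part of the conductance matrix contributes nothing, and the response reduces to current density over leak density, whatever the length.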
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex (2019)
Bird, Alexander D. ; Deters, Lisa Hilde ; Cuntz, Hermann
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons; finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with densities of neurite higher than is necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
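The scaling stated in this abstract (potential contacts increasing linearly with each neurite's cable length and with maximum spine length, and decreasing with the overlapping volume) can be written as a hypothetical estimator. The prefactor `c`, the function name, and the example numbers are illustrative assumptions, not the paper's derived expression.

```python
def expected_contacts(axon_len_um, dend_len_um, spine_len_um,
                      shared_vol_um3, c=2.0):
    """Hypothetical potential-contact estimator with the abstract's scaling:
    E[n] = c * L_axon * L_dend * s / V_shared,
    where c is an assumed geometric prefactor."""
    return c * axon_len_um * dend_len_um * spine_len_um / shared_vol_um3

# Example: 1 mm of axon and 1 mm of dendrite sharing a (100 um)^3 volume,
# with a 2 um maximum spine length.
n_potential = expected_contacts(1000.0, 1000.0, 2.0, 100.0 ** 3)
```

Doubling either cable length doubles the estimate, while doubling the shared volume halves it, matching the qualitative dependencies the abstract reports.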
Visual exposure enhances stimulus encoding and persistence in primary cortex (2021)
Lazar, Andreea ; Lewis, Christopher ; Fries, Pascal ; Singer, Wolf ; Nikolić, Danko
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
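The analysis style this abstract describes (project population responses into a low-dimensional space, then use a simple classifier to quantify stimulus-specific clustering) can be sketched on synthetic data. The data below are fabricated for illustration only; they are not the study's recordings, and the model choices (PCA, logistic regression) are assumptions.

```python
# Sketch: low-dimensional projection + simple classifier on synthetic
# "persistent" population responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_stimuli = 40, 80, 4
# Synthetic responses: a stimulus-specific mean pattern plus trial noise.
means = rng.standard_normal((n_stimuli, n_neurons))
X = np.vstack([means[s] + 0.8 * rng.standard_normal((n_trials, n_neurons))
               for s in range(n_stimuli)])
y = np.repeat(np.arange(n_stimuli), n_trials)
Z = PCA(n_components=3).fit_transform(X)      # low-dimensional projection
acc = cross_val_score(LogisticRegression(max_iter=1000), Z, y, cv=5).mean()
# High decoding accuracy indicates stimulus-specific clusters in the
# projected population activity.
```

In this framing, the paper's finding corresponds to post-exposure responses yielding higher decoding accuracy (better-segregated clusters) than pre-exposure responses.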
Where’s the noise? Key features of neuronal variability and inference emerge from self-organized learning (2014)
Hartmann, Christoph ; Lazar, Andreea ; Triesch, Jochen
Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing. In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes to spatio-temporally varying input sequences. We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary.
In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
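A minimal flavour of the SORN described above can be sketched with binary threshold (McCulloch-Pitts) neurons combining STDP, synaptic normalization, and intrinsic plasticity. The network sizes, learning rates, and the sparse external input below are illustrative choices, not the paper's parameters, and this toy omits several of the model's ingredients.

```python
# Sketch: deterministic excitatory/inhibitory binary network with STDP
# and homeostatic plasticity (SORN-style).
import numpy as np

rng = np.random.default_rng(1)
NE, NI, T = 100, 20, 500
W_EE = rng.random((NE, NE)) * (rng.random((NE, NE)) < 0.1)  # sparse E->E
np.fill_diagonal(W_EE, 0.0)
W_EI = rng.random((NE, NI)) * 0.5                            # I -> E
W_IE = rng.random((NI, NE)) * 0.1                            # E -> I
T_E = rng.random(NE) * 0.5                                   # thresholds
T_I = rng.random(NI) * 0.5
eta_stdp, eta_ip, target = 1e-3, 1e-3, 0.1
x = (rng.random(NE) < 0.1).astype(float)
y = (rng.random(NI) < 0.1).astype(float)
rates = []
for t in range(T):
    drive = rng.random(NE) < 0.02        # sparse external input sequence
    x_new = ((W_EE @ x - W_EI @ y - T_E + drive) > 0).astype(float)
    y = ((W_IE @ x_new - T_I) > 0).astype(float)
    # STDP: potentiate pre-before-post pairs, depress the reverse order.
    W_EE += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W_EE = np.clip(W_EE, 0.0, None)
    # Synaptic normalization: incoming excitatory weights sum to one.
    W_EE /= W_EE.sum(axis=1, keepdims=True) + 1e-12
    # Intrinsic plasticity: thresholds track a target firing rate.
    T_E += eta_ip * (x_new - target)
    x = x_new
    rates.append(x.mean())
```

Apart from the external input sequence, every update is deterministic, which is the point the paper makes: structured, variable-looking activity can emerge without any injected noise.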
EEG-triggered TMS reveals stronger brain state-dependent modulation of motor evoked potentials at weaker stimulation intensities (2018)
Schaworonkow, Natalie ; Triesch, Jochen ; Ziemann, Ulf ; Zrenner, Christoph
Background: Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of µ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.
Objective: We further characterize the effect of µ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.
Methods: A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain states corresponding to µ-rhythm surface positive and negative peaks, with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.
Results: MEP amplitude is modulated by µ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP modulation was observed for weak intensities, the largest absolute MEP modulation for intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the µ-rhythm negative peak.
Conclusion: The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by µ-phase. Lower stimulation intensities enable more efficient differentiation of EEG µ-phase-defined brain states.
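The brain-state targeting this abstract builds on amounts to estimating µ-rhythm phase and triggering at the surface-negative peak. An offline sketch using a band-pass filter and the Hilbert transform on a synthetic signal (a real-time system must forecast phase from a sliding window; this post-hoc version only illustrates the phase convention, and all signal parameters are assumptions):

```python
# Offline sketch: estimate mu-band phase and mark negative-peak samples.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic "EEG": an 11 Hz rhythm buried in broadband noise.
eeg = np.sin(2 * np.pi * 11 * t) + 0.5 * rng.standard_normal(t.size)
b, a = butter(3, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
mu = filtfilt(b, a, eeg)                     # isolate the mu band
phase = np.angle(hilbert(mu))                # instantaneous phase
# Negative peak (trough) of the oscillation <-> phase near +/- pi.
neg_peak = np.abs(np.abs(phase) - np.pi) < 0.1
trigger_idx = np.flatnonzero(neg_peak)       # candidate TMS trigger times
```

Since the filtered signal equals envelope times cosine of the estimated phase, samples flagged near phase ±π necessarily sit in the trough of the oscillation, which is the brain state the study stimulates.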
Active efficient coding explains the development of binocular vision and its failure in amblyopia (2020)
Eckmann, Samuel ; Klimmasch, Lukas ; Shi, Bertram E. ; Triesch, Jochen
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to co-ordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Distinct feedforward and feedback pathways for cell-type specific attention effects (2022)
Spyropoulos, Georgios ; Schneider, Marius ; Kempen, Jochem van ; Gieselmann, Marc Alwin ; Thiele, Alexander ; Vinck, Martin
Spatial attention increases both inter-areal synchronization and spike rates across the visual hierarchy. To investigate whether these attentional changes reflect distinct or common mechanisms, we performed simultaneous laminar recordings of identified cell classes in macaque V1 and V4. Enhanced V4 spike rates were expressed by both excitatory neurons and fast-spiking interneurons, and were most prominent and arose earliest in time in superficial layers, consistent with a feedback modulation. By contrast, V1-V4 gamma-synchronization reflected feedforward communication and surprisingly engaged only fast-spiking interneurons in the V4 input layer. In mouse visual cortex, we found a similar motif for optogenetically identified inhibitory-interneuron classes. Population decoding analyses further indicate that feedback-related increases in spike rates encoded attention more reliably than feedforward-related increases in synchronization. These findings reveal distinct, cell-type-specific feedforward and feedback pathways for the attentional modulation of inter-areal synchronization and spike rates, respectively.
Understanding how visual information is represented in humans and machines (2022)
Dwivedi, Kshitij
In the human brain, the incoming light to the retina is transformed into meaningful representations that allow us to interact with the world. In a similar vein, RGB pixel values are transformed by a deep neural network (DNN) into meaningful representations relevant to the computer vision task it was trained for. Therefore, in my research, I aim to reveal insights into the visual representations in the human visual cortex and in DNNs solving vision tasks. Over the past decade, DNNs have emerged as the state-of-the-art models for predicting neural responses in the human and monkey visual cortex. Research has shown that training on a task related to a brain region's function leads to better predictivity than a randomly initialized network. Based on this observation, we proposed that DNNs trained on different computer vision tasks can be used to identify the functional mapping of the human visual cortex. To validate the proposed idea, we first investigate a brain region, the occipital place area (OPA), using DNNs trained on a scene parsing task and on a scene classification task. From previous investigations of OPA's function, we knew that it encodes navigational affordances, which require spatial information about the scene. We therefore hypothesized that OPA's representation should be closer to a scene parsing model than to a scene classification model, as the scene parsing task explicitly requires spatial information about the scene. Our results showed that scene parsing models had representations closer to OPA than scene classification models, thus validating our approach. We then selected multiple DNNs performing a wide range of computer vision tasks, ranging from low-level tasks such as edge detection, through 3D tasks such as surface normal estimation, to semantic tasks such as semantic segmentation. We compared the representations of these DNNs with all the regions of the visual cortex, thus revealing the functional representations of its different regions.
Our results converged strongly with previous investigations of these brain regions, validating the feasibility of the proposed approach for finding functional representations of the human brain. Our results also provided new insights into under-investigated brain regions that can serve as starting hypotheses and promote further investigation into those regions. We applied the same approach to find representational insights about the DNNs themselves. A DNN usually consists of multiple layers, each performing a computation, leading to the final layer that makes the prediction for a given task. Training on different tasks could lead to very different representations. Therefore, we first investigate at which stage the representations in DNNs trained on different tasks start to differ. We further investigate whether DNNs trained on similar tasks learn similar representations and DNNs trained on dissimilar tasks learn more dissimilar ones. We selected the same set of DNNs used in the previous work, trained on the Taskonomy dataset on a diverse range of 2D, 3D, and semantic tasks. Then, given a DNN trained on a particular task, we compared the representations of multiple layers to the corresponding layers in other DNNs. From this analysis, we aimed to reveal where in the network architecture task-specific representations become prominent. We found that task specificity increases with depth in the DNN architecture and that similar tasks cluster into groups. The grouping we obtained from representational similarity was highly correlated with a grouping based on transfer learning, suggesting an interesting application of the approach to model selection in transfer learning. In previous works, several new measures had been introduced to compare DNN representations. We therefore identified the commonalities between different measures and unified them into a single framework, referred to as duality diagram similarity.
This work opens up new possibilities for similarity measures to understand DNN representations. While demonstrating a much higher correlation with transfer learning than previous state-of-the-art measures, we extend it to understanding layer-wise representations of models trained on the ImageNet and Places datasets using different tasks and demonstrate its applicability to layer selection for transfer learning. In all the previous works, we used task-specific DNN representations to understand the representations in the human visual cortex and other DNNs. We were able to interpret our findings in terms of computer vision tasks such as edge detection, semantic segmentation, and depth estimation; however, we were not able to map the representations to human-interpretable concepts. Therefore, in our most recent work, we developed a new method that associates individual artificial neurons with human-interpretable concepts. Overall, the works in this thesis revealed new insights into the representations of the visual cortex and DNNs...
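The layer-to-layer comparisons described in this thesis follow the general logic of representational similarity analysis: summarize each layer by a representational dissimilarity matrix (RDM) over stimuli, then correlate RDMs across layers or models. A generic sketch on synthetic data (this is plain RDM correlation, not the thesis's duality diagram similarity, and the toy "layers" are random linear projections introduced for illustration):

```python
# Sketch: compare two representations via RDM correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Pairwise condition dissimilarities (1 - Pearson r), flattened."""
    return pdist(activations, metric="correlation")

def rdm_similarity(acts_a, acts_b):
    """Spearman correlation between two representations' RDMs."""
    return spearmanr(rdm(acts_a), rdm(acts_b)).correlation

rng = np.random.default_rng(0)
stim = rng.standard_normal((50, 20))             # 50 stimuli, 20 features
layer_a = stim @ rng.standard_normal((20, 64))   # two toy "layers" driven
layer_b = stim @ rng.standard_normal((20, 64))   # by the same stimuli
unrelated = rng.standard_normal((50, 64))        # stimulus-independent
s_same = rdm_similarity(layer_a, layer_b)
s_noise = rdm_similarity(layer_a, unrelated)
# Representations of the same stimulus geometry yield similar RDMs;
# unrelated activity does not.
```

The same comparison, applied between a DNN layer and a brain region's response patterns, is what lets task-trained networks serve as hypotheses about that region's function.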
Developmental loss of ErbB4 in PV interneurons disrupts state-dependent cortical circuit dynamics (2020)
Batista-Brito, Renata ; Majumdar, Antara ; Nuno, Alejandro ; Vinck, Martin ; Cardin, Jessica A.
GABAergic inhibition plays an important role in the establishment and maintenance of cortical circuits during development. Neuregulin 1 (Nrg1) and its interneuron-specific receptor ErbB4 are key elements of a signaling pathway critical for the maturation and proper synaptic connectivity of interneurons. Using conditional deletions of the ERBB4 gene in mice, we tested the role of this signaling pathway at two developmental timepoints in parvalbumin-expressing (PV) interneurons, the largest subpopulation of cortical GABAergic cells. Loss of ErbB4 in PV interneurons during embryonic, but not late postnatal, development leads to alterations in the activity of excitatory and inhibitory cortical neurons, along with severe disruption of cortical temporal organization. These impairments emerge by the end of the second postnatal week, prior to the complete maturation of the PV interneurons themselves. Early loss of ErbB4 in PV interneurons also results in profound dysregulation of excitatory pyramidal neuron dendritic architecture and a redistribution of spine density at the apical dendritic tuft. In association with these deficits, excitatory cortical neurons exhibit normal tuning for sensory inputs, but a loss of state-dependent modulation of the gain of sensory responses. Together, these data support a key role for early developmental Nrg1/ErbB4 signaling in PV interneurons as a powerful mechanism underlying the maturation of both the inhibitory and excitatory components of cortical circuits.
