Frankfurt Institute for Advanced Studies (FIAS): Articles, 2014 (20 documents, English, full text available)
Currently, little is known about how synesthesia develops and which aspects of synesthesia can be acquired through a learning process. We review the increasing evidence for the role of semantic representations in the induction of synesthesia, and argue for the thesis that synesthetic abilities are developed and modified by semantic mechanisms. That is, in certain people, semantic mechanisms associate concepts with perception-like experiences, and this association occurs in an extraordinary way. This phenomenon can be referred to as “higher” synesthesia or ideasthesia. The present analysis suggests that synesthesia develops during childhood and is enriched further throughout a synesthete's lifetime; for example, already existing concurrents may be adopted by novel inducers, or new concurrents may be formed. For a deeper understanding of the origin and nature of synesthesia, we propose to focus future research on two aspects: (i) the similarities between synesthesia and ordinary phenomenal experiences based on concepts; and (ii) the tight entanglement of perception, cognition and the conceptualization of the world. Importantly, an explanation of how biological systems come to generate experiences, synesthetic or not, may have to involve an explanation of how semantic networks are formed in general and what role they play in the ability to be aware of the surrounding world.
The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common-neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs, have been experimentally observed in cortical slice data. Some of these data, particularly those regarding bidirectional connectivity, are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.
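The core mechanism can be sketched in a few lines: if connection probability decays with distance, then the two directions of a pair share the same distance, and reciprocal connections become overrepresented relative to a random graph with the same overall density. The following is a minimal illustration of that effect, not the paper's model; all parameter values (volume, length constant, baseline probability) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper).
n_neurons = 400
side_um = 300.0      # edge length of the sampled cube (micrometers)
lambda_um = 150.0    # length constant of the exponential connection rule
p0 = 0.4             # connection probability at zero distance

# Place neurons uniformly in a cube; connect each ordered pair
# independently with probability p0 * exp(-d / lambda_um).
pos = rng.uniform(0.0, side_um, size=(n_neurons, 3))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p = p0 * np.exp(-d / lambda_um)
np.fill_diagonal(p, 0.0)
adj = rng.random((n_neurons, n_neurons)) < p

# Reciprocal-pair overrepresentation relative to an Erdos-Renyi graph
# with the same overall connection probability.
p_conn = adj.sum() / (n_neurons * (n_neurons - 1))
n_pairs = n_neurons * (n_neurons - 1) / 2
recip = np.sum(adj & adj.T) / 2
recip_ratio = (recip / n_pairs) / p_conn**2
print(f"connection prob: {p_conn:.3f}, reciprocal overrepresentation: {recip_ratio:.2f}")
```

Because distance varies across pairs, the ratio exceeds 1 purely by Jensen's inequality (E[p(d)^2] > E[p(d)]^2), without any explicit reciprocity rule; restricting the sampled volume (as a slice does) changes the distance distribution and hence the measured ratio.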
Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, become less chaotic during self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
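The perturbation-based estimate of the maximum Lyapunov exponent mentioned above can be illustrated on a system whose exponent is known in closed form: the logistic map at r = 4 has lambda = ln 2. The sketch below (a generic textbook method, not the SORN analysis itself) tracks how a tiny perturbation grows per iteration, renormalizing it at every step.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def max_lyapunov(x0, n_steps=10000, eps=1e-8, burn_in=100):
    # Discard the transient so the estimate is taken on the attractor.
    x = x0
    for _ in range(burn_in):
        x = logistic(x)
    y = x + eps                       # perturbed twin trajectory
    logs = []
    for _ in range(n_steps):
        x, y = logistic(x), logistic(y)
        d = abs(y - x)
        logs.append(np.log(d / eps))  # per-step expansion of the perturbation
        y = x + eps * np.sign(y - x)  # renormalize to keep the separation tiny
    return float(np.mean(logs))

lam = max_lyapunov(0.3)
print(f"estimated lambda: {lam:.3f} (theory: ln 2 = {np.log(2):.3f})")
```

A positive average expansion rate indicates chaos; applied to a network, the same bookkeeping on the norm of a state perturbation yields the estimate the abstract refers to, and the "deferred chaos" phenomenon corresponds to the expansion only becoming visible after a long delay.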
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models of morphology and electrophysiology exist for many types of single neurons, the role of detailed single-cell morphology at the population level has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields, bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger-scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.
Top-down influences on ambiguous perception: the role of stable and transient states of the observer
(2014)
The world as it appears to the viewer is the result of a complex process of inference performed by the brain. The validity of this apparently counter-intuitive assertion becomes evident whenever we face noisy, feeble or ambiguous visual stimulation: in these conditions, the state of the observer may play a decisive role in determining what is currently perceived. On this background, ambiguous perception and its amenability to top-down influences can be employed as an empirical paradigm to explore the principles of perception. Here we offer an overview of both classical and recent contributions on how stable and transient states of the observer can impact ambiguous perception. As to the influence of the stable states of the observer, we show that what is currently perceived can be influenced (1) by cognitive and affective aspects, such as meaning, prior knowledge, motivation, and emotional content; (2) by individual differences, such as gender, handedness, genetic inheritance, clinical conditions, and personality traits; and (3) by learning and conditioning. As to the impact of transient states of the observer, we outline the effects of (4) attention and (5) voluntary control, which have attracted much empirical work throughout the history of ambiguous perception. Within the extensive literature on the topic, we trace a distinction between the observer's ability to control dominance (i.e., the maintenance of a specific percept in visual awareness) and reversal rate (i.e., the switching between two alternative percepts). Other transient states of the observer that have more recently drawn researchers' attention concern (6) the effects of imagery and visual working memory. (7) Furthermore, we describe the transient effects of prior history of perceptual dominance. (8) Finally, we address the currently available computational models of ambiguous perception and how they can take into account the crucial role played by the state of the observer in perceiving ambiguous displays.
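A common class of computational models in this literature explains perceptual alternations through mutual inhibition between two percept-selective populations, with slow adaptation of whichever population is currently dominant. The following is a generic textbook-style sketch of that mechanism, not a model from any specific study reviewed here; all parameter values are illustrative assumptions.

```python
import numpy as np

def f(x):
    # Steep sigmoid gain function (slope and threshold are assumptions).
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

dt, n_steps = 0.1, 20000
tau, tau_a = 1.0, 100.0          # fast activity vs. slow adaptation timescales
I, beta, phi = 0.5, 1.5, 2.0     # drive, cross-inhibition, adaptation strength

u = np.array([0.6, 0.4])         # slight initial bias toward percept 1
a = np.zeros(2)                  # adaptation variables
dominant = np.empty(n_steps, dtype=int)
for t in range(n_steps):
    inp = I - beta * u[::-1] - phi * a   # each unit inhibited by the other
    u = u + dt / tau * (-u + f(inp))
    a = a + dt / tau_a * (-a + u)        # adaptation tracks activity slowly
    dominant[t] = int(u[1] > u[0])

switches = int(np.abs(np.diff(dominant)).sum())
print(f"dominance switches: {switches}")
```

The dominant population suppresses its rival until its own adaptation erodes that suppression, producing alternations on the slow timescale; adding noise to `inp` turns the regular alternation into the stochastic switching statistics that the models discussed in point (8) aim to capture.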
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
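The ensemble idea can be made concrete with a simplified discrete plug-in estimator (this is an illustration of the principle, not the estimator used in the work): probabilities for a time-resolved transfer entropy TE_{X→Y}(t) are estimated by counting across trials at a fixed time point, rather than by pooling over time.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def transfer_entropy_ensemble(x, y, t):
    """Time-resolved TE_{X->Y}(t); x, y: int arrays of shape (n_trials, n_samples)."""
    n = x.shape[0]
    c3 = Counter(zip(y[:, t + 1], y[:, t], x[:, t]))   # (y_next, y_past, x_past)
    cyy = Counter(zip(y[:, t + 1], y[:, t]))           # (y_next, y_past)
    cyx = Counter(zip(y[:, t], x[:, t]))               # (y_past, x_past)
    cy = Counter(y[:, t])                              # (y_past,)
    te = 0.0
    for (yn, yp, xp), k in c3.items():
        p_cond_full = k / cyx[(yp, xp)]                # p(y_next | y_past, x_past)
        p_cond_self = cyy[(yn, yp)] / cy[yp]           # p(y_next | y_past)
        te += (k / n) * np.log2(p_cond_full / p_cond_self)
    return te

# Toy ensemble: Y copies X with a one-step lag and 10% flip noise.
n_trials, n_samples = 5000, 6
x = rng.integers(0, 2, size=(n_trials, n_samples))
y = rng.integers(0, 2, size=(n_trials, n_samples))
y[:, 1:] = x[:, :-1] ^ (rng.random((n_trials, n_samples - 1)) < 0.1)

te = transfer_entropy_ensemble(x, y, t=2)
te_null = transfer_entropy_ensemble(rng.permutation(x), y, t=2)
print(f"TE coupled: {te:.3f} bits, trial-shuffled: {te_null:.3f} bits")
```

Because every trial contributes one observation per time point, the estimate is valid even when the processes are non-stationary in time; shuffling trial pairings destroys the coupling and serves as a simple significance check. The GPU implementation described in the abstract addresses the cost of doing such estimation with continuous-valued, nearest-neighbor-based estimators.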
Evidence from anatomical and functional imaging studies has highlighted major modifications of cortical circuits during adolescence. These include reductions of gray matter (GM), increases in the myelination of cortico-cortical connections, and changes in the architecture of large-scale cortical networks. It is currently unclear, however, how the ongoing developmental processes impact upon the folding of the cerebral cortex and how changes in gyrification relate to the maturation of GM and white matter (WM) volume, thickness and surface area. In the current study, we acquired high-resolution (3 Tesla) magnetic resonance imaging (MRI) data from 79 healthy subjects (34 males and 45 females) between the ages of 12 and 23 years and performed whole-brain analysis of cortical folding patterns with the gyrification index (GI). In addition to GI-values, we obtained estimates of cortical thickness, surface area, and GM and WM volume, which permitted correlations with changes in gyrification. Our data show pronounced and widespread reductions in GI-values during adolescence in several cortical regions, including precentral, temporal and frontal areas. Decreases in gyrification overlap only partially with changes in the thickness, volume and surface of GM and were characterized overall by a linear developmental trajectory. Our data suggest that the observed reductions in GI-values represent an additional, important modification of the cerebral cortex during late brain maturation which may be related to cognitive development.
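The gyrification index compares the length of the full folded (pial) contour with that of a smooth outer contour enclosing it, so GI > 1 quantifies how much cortex is buried in folds. A minimal 2D sketch of that ratio, using a synthetic sinusoidally folded contour rather than brain data, looks like this:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 20000)
R, A, k = 60.0, 6.0, 18   # mean radius (mm), fold amplitude, number of folds (assumptions)

def contour_length(r, theta):
    # Numerical arc length of the polar curve r(theta).
    x, y = r * np.cos(theta), r * np.sin(theta)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

pial = contour_length(R + A * np.sin(k * theta), theta)   # folded contour
outer = contour_length(np.full_like(theta, R + A), theta) # smooth outer envelope
gi = pial / outer
print(f"gyrification index: {gi:.2f}")
```

Reductions in GI during adolescence, as reported above, correspond to the folded contour shortening relative to the outer envelope, which is why GI changes need not track thickness or surface area changes one-to-one.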
In self-organized critical (SOC) systems, avalanche size distributions follow power laws. Power laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (thereby eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
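The "one spike triggers a little less than one spike" principle is exactly a sub-critical branching process. A minimal sketch of such a model (a generic branching process, not the specific model implementation used in the study; offspring statistics are assumed Poisson):

```python
import numpy as np

rng = np.random.default_rng(3)

def avalanche_sizes(sigma, n_avalanches=20000, max_size=10**5):
    """Total activity triggered by one seed spike, offspring mean sigma per spike."""
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < max_size:
            # Each active spike triggers a Poisson number of spikes
            # with mean sigma in the next time step.
            active = rng.poisson(sigma * active)
            size += active
        sizes.append(size)
    return np.array(sizes)

sub = avalanche_sizes(0.9)   # slightly sub-critical branching parameter
print(f"mean avalanche size at sigma=0.9: {sub.mean():.1f}")
```

For a sub-critical branching process the mean avalanche size is 1/(1 - sigma) (here 10), so the measured mean avalanche size directly constrains the branching parameter; at sigma = 1 the distribution develops the power-law tail associated with criticality.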
We study the equilibrium properties of strongly interacting infinite parton-hadron matter, characterized by transport coefficients such as shear and bulk viscosity and electric conductivity, as well as the non-equilibrium dynamics of heavy-ion collisions, within the Parton-Hadron-String Dynamics (PHSD) transport approach. This approach incorporates explicit partonic degrees of freedom in terms of strongly interacting quasiparticles (quarks and gluons), in line with an equation of state from lattice QCD, as well as dynamical hadronization and hadronic collision dynamics in the final reaction phase. We discuss in particular the possible origin of the strong elliptic flow v2 of direct photons observed at RHIC energies.
The so-called Pygmy Dipole Resonance, an additional structure of low-lying electric dipole strength, has attracted strong interest in recent years. Different experimental approaches have been used over the last decade to investigate this interesting nuclear excitation mode. In this contribution, an overview of the available experimental data is given.