We study the implications for compact star properties of a soft nuclear equation of state determined from kaon production at subthreshold energies in heavy-ion collisions. On the one hand, we apply these results to study the radii and moments of inertia of light neutron stars. Heavy-ion data provide constraints on nuclear matter at densities relevant for those stars and, in particular, on the density dependence of the symmetry energy of nuclear matter. On the other hand, we derive an upper limit of three solar masses on the neutron star mass. For that purpose, we use the information on the nucleon potential obtained from the analysis of the heavy-ion data combined with a causality constraint on the nuclear equation of state.
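The causality constraint invoked above can be stated concretely: the sound speed of the equation of state, c_s² = dp/dε, must never exceed the speed of light (c = 1 in natural units). The following sketch (all parameters are hypothetical and not taken from the paper) checks this condition numerically for the stiffest causal equation of state, whose pressure rises linearly with energy density, and for a softer polytrope:

```python
# Illustrative sketch of the causality condition c_s^2 = dp/d(epsilon) <= 1.
# The matching point (eps0, p0) and polytrope parameters are hypothetical.

def sound_speed_sq(p, eps, d_eps=1e-6):
    """Squared sound speed c_s^2 = dp/d(epsilon), via a central difference."""
    return (p(eps + d_eps) - p(eps - d_eps)) / (2.0 * d_eps)

# Stiffest causal EoS: pressure grows linearly with energy density (slope 1),
# matched at a hypothetical fiducial point (units: MeV/fm^3).
eps0, p0 = 150.0, 10.0
causal = lambda eps: p0 + (eps - eps0)

# A softer polytropic EoS for comparison (hypothetical parameters).
K, gamma = 0.01, 1.5
polytrope = lambda eps: K * eps**gamma

for eps in (200.0, 400.0, 800.0):
    print(f"eps={eps:6.1f}  causal c_s^2={sound_speed_sq(causal, eps):.3f}  "
          f"polytrope c_s^2={sound_speed_sq(polytrope, eps):.3f}")
```

The stiffest causal EoS saturates the bound (c_s² = 1) at every density, which is what makes it the extreme case used to bound the maximum mass; the softer polytrope stays well below it.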
Two generic mechanisms for emergence of direction selectivity coexist in recurrent neural networks
(2013)
Poster presentation at the Twenty-Second Annual Computational Neuroscience Meeting: CNS*2013, Paris, France, 13-18 July 2013.
In the mammalian visual cortex, the time-averaged response of many neurons is maximal for stimuli moving in a particular direction. Such a direction selective response is not found in the LGN, one stage upstream in the visual processing pathway, suggesting that cortical networks play a strong role in the generation of direction selectivity. Here we investigate the mechanisms for the emergence of direction selectivity in recurrent networks of nonlinear firing rate neurons in layer 4 of V1 receiving input from the LGN. In the model, the LGN inputs are characterized by different receptive field positions, and their relative temporal phase shifts are reversed for stimuli moving in the opposite direction. We propose that two distinct mechanisms produce the direction selective neuronal response in these recurrent networks. The first is a result of nonlinear feed-forward summation of several time-shifted inputs. The second is based on competition between neurons for firing in a winner-take-all regime. Both mechanisms rely on inhibitory interactions in the matrix of lateral connections, but the second one involves inhibitory loops. Typically, the first mechanism results in lower selectivity values than the second, but the direction selective response is acquired faster under the first mechanism. Importantly, the two mechanisms have different input frequency tuning. The first mechanism, based on nonlinear summation, results in a relatively narrow tuning curve around the preferred stimulus frequency in the case of a moving grating. In contrast, the direction selectivity arising from the second mechanism depends only weakly on the input frequency, i.e. it has a broader tuning curve. These differences allow us to provide a recipe for identifying experimentally which of the two mechanisms is used by a given direction selective neuron.
We then analyze how the statistics of the connections in random recurrent networks affect the relative contributions of these two mechanisms, and we determine the distributions of the direction selectivity values. We identify the motifs in the connectivity matrix that are required for each mechanism and show that the minimal conditions for both mechanisms are met in a very broad set of random recurrent networks with sufficiently strong inhibitory connections. Thus, we propose that these mechanisms coexist in generic recurrent networks with inhibition. Our results may account for the recent experimental observations that direction selectivity is present in dark-reared mice and ferrets [1,2]. They can also explain the emergence of direction selectivity in species lacking a spatially organized direction selectivity map.
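The first mechanism described above can be illustrated with a minimal toy model (a sketch under simplifying assumptions, not the authors' network): two LGN-like inputs with spatially offset receptive fields whose relative temporal phase shift reverses with motion direction are summed and passed through a rectifying nonlinearity, yielding a direction selective time-averaged rate:

```python
import numpy as np

# Toy sketch of direction selectivity from nonlinear feed-forward summation
# of two time-shifted inputs. All parameters (omega, k_dx, tau) are
# hypothetical and chosen so that the inputs align for one motion direction
# and cancel for the other.

def mean_rate(direction, omega=2 * np.pi, k_dx=np.pi / 2, tau=0.25, n=10000):
    """Time-averaged rectified-squared response to a drifting grating.

    direction = +1 or -1. With omega*tau = k_dx = pi/2, the two inputs add
    constructively for direction +1 and destructively for direction -1.
    """
    t = np.linspace(0.0, 10.0, n)
    inp1 = np.cos(direction * omega * t)                 # RF at x = 0
    inp2 = np.cos(k_dx - direction * omega * (t + tau))  # offset, phase-shifted RF
    r = np.maximum(inp1 + inp2, 0.0) ** 2                # rectifying nonlinearity
    return r.mean()

r_pref, r_null = mean_rate(+1), mean_rate(-1)
dsi = (r_pref - r_null) / (r_pref + r_null)
print(f"preferred={r_pref:.3f}  null={r_null:.3f}  DSI={dsi:.2f}")
```

Note that the linear sum alone would not suffice: without the rectification, the time-averaged response is zero for both directions, which is why the nonlinearity is essential to this mechanism.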
Poster presentation at the Twenty-Third Annual Computational Neuroscience Meeting: CNS*2014, Québec City, Canada, 26-31 July 2014: We study random, strongly heterogeneous recurrent networks of firing rate neurons, introducing the notion of cohorts: groups of co-active neurons that compete for firing with one another and whose presence depends sensitively on the structure of the input. The identities of the neurons recruited to and dropped from an active cohort change smoothly with varying input features. We search for network parameter regimes in which the activation of cohorts is robust yet easily switchable by the external input, and which exhibit large repertoires of different cohorts. We apply these networks to model the emergence of orientation and direction selectivity in visual cortex. We feed these random networks with a set of harmonic inputs that vary across neurons only in their temporal phase, mimicking the feedforward drive due to a moving grating stimulus. The relationship between the phases, which carries the information about the orientation of the stimulus, determines which cohort of neurons is activated. As a result, the individual neurons acquire non-monotonic orientation tuning curves characterized by high orientation and direction selectivity. This mechanism for the emergence of direction selectivity differs from the classical motion detector scheme, which is based on the nonlinear summation of time-shifted inputs. In our model these two mechanisms coexist in the same network, but they can be distinguished by their different frequency and contrast dependences. In general, the mechanism we study here converts a temporal phase sequence into population activity and could therefore also be used to extract and represent various other relevant stimulus features.
TRENTOOL : an open source toolbox to estimate neural directed interactions with transfer entropy
(2011)
To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement in the prediction of the future of a time series X from its own past by incorporating information from the past of a second time series Y is taken as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique.
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison against surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the FieldTrip data format. ...
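To make Schreiber's measure concrete, here is a minimal plug-in estimator of transfer entropy for discrete (binary) sequences with history length 1. This is only an illustration of the definition TE(Y→X) = Σ p(x_{t+1}, x_t, y_t) log₂[p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t)]; TRENTOOL itself uses the data-efficient continuous estimators mentioned above, not this naive histogram approach:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in transfer entropy TE(source -> target) in bits,
    for discrete symbol sequences, history length 1."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))  # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(target[:-1], source[:-1]))             # (x_t, y_t)
    pairs_xx = Counter(zip(target[1:], target[:-1]))              # (x_{t+1}, x_t)
    singles = Counter(target[:-1])                                # x_t
    n = len(target) - 1
    te = 0.0
    for (x1, x, y), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x, y)]                # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x)] / singles[x]       # p(x_{t+1} | x_t)
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

# Toy data: X copies Y with a one-step delay, so Y drives X but not vice versa.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000).tolist()
x = [0] + y[:-1]
te_yx, te_xy = transfer_entropy(y, x), transfer_entropy(x, y)
print(f"TE(Y->X) = {te_yx:.3f} bits, TE(X->Y) = {te_xy:.3f} bits")
```

Because x_{t+1} is fully determined by y_t, TE(Y→X) approaches 1 bit, while TE(X→Y) stays near zero (up to the small positive bias of the plug-in estimator); the asymmetry is exactly the directedness Wiener's principle is after.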
Network or graph theory has become a popular tool to represent and analyze large-scale interaction patterns in the brain. To derive a functional network representation from experimentally recorded neural time series, one has to identify the structure of the interactions between these time series. In neuroscience, this is often done by pairwise bivariate analysis, because a fully multivariate treatment is typically not possible due to limited data and excessive computational cost. Furthermore, a truly multivariate analysis would consist of analyzing the combined effects, including information-theoretic synergies and redundancies, of all possible subsets of network components. Since these subsets form the power set of the network components, their number grows exponentially with network size, leading to a combinatorial explosion (i.e. a computationally intractable problem). In contrast, a pairwise bivariate analysis of interactions is typically feasible, but it introduces the possibility of falsely detecting spurious interactions between network components, especially due to cascade and common drive effects. These spurious connections in a network representation may bias subsequently computed graph-theoretical measures (e.g. clustering coefficient or centrality), as these measures depend on the reliability of the graph representation from which they are computed. Strictly speaking, graph-theoretical measures are meaningful only if the underlying graph structure can be guaranteed to consist of one type of connection only, i.e. connections in the graph are guaranteed to be non-spurious. ...
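The common drive effect mentioned above is easy to reproduce with synthetic signals (a hedged sketch with hypothetical data, using plain correlation as a stand-in for any pairwise bivariate measure): a source Z drives both X and Y, which share no direct connection, yet the pairwise estimate reports a strong X–Y link that would enter the graph as a spurious edge.

```python
import numpy as np

# Common-drive demonstration: Z -> X and Z -> Y, but no X <-> Y connection.
rng = np.random.default_rng(42)
n = 20000
z = rng.standard_normal(n)             # common driver
x = z + 0.3 * rng.standard_normal(n)   # Z -> X, plus private noise
y = z + 0.3 * rng.standard_normal(n)   # Z -> Y, plus private noise

# Naive pairwise analysis reports a strong (spurious) X-Y interaction.
r_xy = np.corrcoef(x, y)[0, 1]

# Conditioning on the driver (regressing Z out of both signals, i.e. a
# partial correlation) removes the spurious link.
resid_x = x - z * (x @ z) / (z @ z)
resid_y = y - z * (y @ z) / (z @ z)
r_xy_given_z = np.corrcoef(resid_x, resid_y)[0, 1]

print(f"corr(X,Y) = {r_xy:.2f}, corr(X,Y | Z) = {r_xy_given_z:.2f}")
```

Of course, conditioning is only possible when the driver is among the recorded components; when it is unobserved, the spurious edge survives, which is precisely why pairwise-derived graphs must be interpreted with care.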