Frankfurt Institute for Advanced Studies (FIAS)
Background: Transfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information-theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions. Hence, it offers the power to analyze linear and nonlinear interactions alike. This allows, for example, the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL that allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametric statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans), where a neuronal one-way connection is likely present.
Results: In simulated data, TE detected information flow in the simulated direction reliably, with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false positive interactions in the reverse direction were detected.
Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information-theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open source license (GPL v3). For use with neural data, TRENTOOL seamlessly integrates with the popular FieldTrip toolbox.
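TRENTOOL itself uses nearest-neighbour (Kraskov-style) estimators on embedded continuous data; as a minimal illustration of the quantity it estimates, the following sketch computes a plug-in transfer entropy for binary sequences with history length 1. The function name and test signals are hypothetical, not taken from the toolbox.

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of TE from x to y for discrete series,
    using the history-length-1 definition
    TE = sum p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ]."""
    x, y = np.asarray(x), np.asarray(y)
    triplets = list(zip(y[1:], y[:-1], x[:-1]))
    n = len(triplets)
    joint = {}
    for t in triplets:                       # empirical joint distribution
        joint[t] = joint.get(t, 0) + 1
    te = 0.0
    for (y1, y0, x0), c in joint.items():
        p_xyz = c / n
        p_y0x0 = sum(v for (a, b, d), v in joint.items() if (b, d) == (y0, x0)) / n
        p_y1y0 = sum(v for (a, b, d), v in joint.items() if (a, b) == (y1, y0)) / n
        p_y0 = sum(v for (a, b, d), v in joint.items() if b == y0) / n
        te += p_xyz * np.log2(p_xyz * p_y0 / (p_y0x0 * p_y1y0))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)        # driver: random bits
y = np.roll(x, 1); y[0] = 0          # receiver copies the driver with lag 1
te_xy = transfer_entropy(x, y)       # large: y is fully determined by past x
te_yx = transfer_entropy(y, x)       # near zero: no influence in this direction
```

The asymmetry between `te_xy` and `te_yx` is the directedness that distinguishes TE from symmetric measures such as mutual information.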
We present a non-parametric and computationally efficient method that detects spatiotemporal firing patterns and pattern sequences in parallel spike trains and tests whether the observed numbers of repeating patterns and sequences on a given timescale are significantly different from those expected by chance. The method is generally applicable and uncovers coordinated activity with arbitrary precision by comparing it to appropriate surrogate data. The analysis of coherent patterns of spatially and temporally distributed spiking activity on various timescales enables the immediate tracking of diverse qualities of coordinated firing related to neuronal state changes and information processing. We apply the method to simulated data and multineuronal recordings from rat visual cortex and show that it reliably discriminates between data sets with random pattern occurrences and with additional exactly repeating spatiotemporal patterns and pattern sequences. Multineuronal cortical spiking activity appears to be precisely coordinated and exhibits a sequential organization beyond the cell assembly concept.
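The abstract does not spell out its test statistic, so the following is only a toy version of the surrogate logic it describes: count exact repeats of population spike patterns and compare against surrogates in which each neuron's spike train is circularly shifted, which preserves rates and autostructure but destroys coordination. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_repeat(binary, min_spikes=4):
    """Largest occurrence count of any population 'word' (time-bin column)
    with at least `min_spikes` coactive neurons."""
    counts = {}
    for col in binary.T:
        w = tuple(col)
        if sum(w) >= min_spikes:
            counts[w] = counts.get(w, 0) + 1
    return max(counts.values(), default=0)

def surrogate_test(binary, n_surr=200):
    """Compare the observed repeat count with circularly shifted surrogates."""
    obs = max_repeat(binary)
    null = np.array([max_repeat(np.stack(
        [np.roll(row, rng.integers(1, binary.shape[1])) for row in binary]))
        for _ in range(n_surr)])
    p = (np.sum(null >= obs) + 1) / (n_surr + 1)   # conservative p-value
    return obs, p

n_neurons, n_bins = 8, 2000
data = (rng.random((n_neurons, n_bins)) < 0.05).astype(int)  # independent background
pattern = np.array([1, 1, 1, 1, 0, 0, 0, 0])                 # an exactly repeating pattern
for t in rng.choice(n_bins, 30, replace=False):
    data[:, t] = pattern
obs, p = surrogate_test(data)
```

With the injected pattern present, the observed repeat count far exceeds anything in the surrogate distribution, so the null hypothesis of chance repetition is rejected.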
During meditation, practitioners are required to center their attention on a specific object for extended periods of time. When their thoughts get diverted, they learn to quickly disengage from the distracter. We hypothesized that learning to respond to the dual demand of engaging attention on specific objects and disengaging quickly from distracters enhances the efficiency with which meditation practitioners can allocate attention. We tested this hypothesis in a global-to-local task while measuring electroencephalographic activity from a group of eight highly trained Buddhist monks and nuns and a group of eight age- and education-matched controls with no previous meditation experience. Specifically, we investigated the effect of attentional training on the global precedence effect, i.e., faster detection of targets on a global than on a local level. We expected to find a reduced global precedence effect in meditation practitioners but not in controls, reflecting that meditators can more quickly disengage their attention from the dominant global level. Analysis of reaction times confirmed this prediction. To investigate the underlying changes in brain activity and their time course, we analyzed event-related potentials. Meditators showed an enhanced ability to select the respective target level, as reflected by enhanced processing of target level information. In contrast to the control group, which showed a local target selection effect only in the P1 and a global target selection effect in the P3 component, meditators showed effects of local information processing in the P1, N2, and P3 and of global processing in the N1, N2, and P3. Thus, meditators seem to display enhanced depth of processing. In addition, meditation altered the uptake of information such that meditators selected target level information earlier in the processing sequence than controls.
In a longitudinal experiment, we could replicate the behavioral effects, suggesting that meditation modulates attention already after a 4-day meditation retreat. Together, these results suggest that practicing meditation enhances the speed with which attention can be allocated and relocated, thus increasing the depth of information processing and reducing response latency.
In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations inducing leftward or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving leftward or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when the direction of visual motion is insufficiently determined, without affecting eye movements.
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that, I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results on various forms of tasks and setups, which show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretical optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform similarly to the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced as well as different algorithmic implementations. Secondly, a short review on the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors.
I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which it has to consider multiple possible underlying generating models (perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations of the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach tries to address the neuronal substrates of the learning process for cue integration.
I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section to discuss recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented together with an analysis of the influence of different plasticity mechanisms on it. Again benefits and shortcomings and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
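A minimal sketch of the interaction the abstract describes, under strong simplifying assumptions: toy Poisson rate patterns, a softmax standing in for lateral inhibition, divisive input normalization standing in for feedforward inhibition, and weight renormalization standing in for synaptic scaling. None of the constants are from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hidden causes, each a Poisson rate pattern over four input channels
# (toy rates chosen for illustration).
protos = np.array([[8.0, 1.0, 1.0, 8.0], [1.0, 8.0, 8.0, 1.0]])
X = rng.poisson(protos[rng.integers(0, 2, 2000)]).astype(float)
Xn = X / np.maximum(X.sum(1, keepdims=True), 1.0)  # feedforward inhibition: divisive normalization

beta, lr = 50.0, 0.05
# initialise the two units on data points that are far apart
W = np.stack([Xn[0], Xn[np.abs(Xn - Xn[0]).sum(1).argmax()]])

for xn in Xn:
    a = beta * (W @ xn)
    s = np.exp(a - a.max()); s /= s.sum()    # lateral inhibition: softmax competition
    W += lr * s[:, None] * xn[None, :]       # Hebbian growth from pre/post coactivity
    W /= W.sum(1, keepdims=True)             # synaptic scaling: fixed total afferent weight

targets = protos / protos.sum(1, keepdims=True)  # ML solution: normalised mixture components
```

After training, each weight row approximates one normalised mixture component, illustrating the abstract's claim that Hebbian plasticity plus scaling approximates the maximum likelihood solution on the normalized input subspace.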
Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
The influence of visual tasks on short and long-term memory for visual features was investigated using a change-detection paradigm. Subjects completed 2 tasks: (a) describing objects in natural images, reporting a specific property of each object when a crosshair appeared above it, and (b) viewing a modified version of each scene, and detecting which of the previously described objects had changed. When tested over short delays (seconds), no task effects were found. Over longer delays (minutes) we found the describing task influenced what types of changes were detected in a variety of explicit and incidental memory experiments. Furthermore, we found surprisingly high performance in the incidental memory experiment, suggesting that simple tasks are sufficient to instill long-lasting visual memories. Keywords: visual working memory, natural scenes, natural tasks, change detection
Visual selective attention and visual working memory (WM) share the same capacity-limited resources. We investigated whether and how participants can cope with a task in which these 2 mechanisms interfere. The task required participants to scan an array of 9 objects in order to select the target locations and to encode the items presented at these locations into WM (1 to 5 shapes). Determination of the target locations required either few attentional resources (“pop-out condition”) or an attention-demanding serial search (“non-pop-out condition”). Participants were able to achieve high memory performance in all stimulation conditions but, in the non-pop-out conditions, this came at the cost of additional processing time. Both empirical evidence and subjective reports suggest that participants invested the additional time in memorizing the locations of all target objects prior to the encoding of their shapes into WM. Thus, they seemed to be unable to interleave the steps of search with those of encoding. We propose that the memory for target locations substitutes for perceptual pop-out and thus may be the key component that allows for flexible coping with the common processing limitations of visual WM and attention. The findings have implications for understanding how we cope with real-life situations in which the demands on visual attention and WM occur simultaneously. Keywords: attention, working memory, interference, encoding strategies
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities, and can do so even when the reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or from multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
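A reward-driven sketch of the central claim, not the paper's model-free algorithm: an agent that adjusts a single cue-mixing weight by following the reward gradient ends up at the Bayesian reliability weighting. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
sig1, sig2 = 1.0, 2.0        # cue noise levels (cue 1 is more reliable)
w, lr = 0.5, 0.005           # learned weight on cue 1, learning rate
tail = []
for step in range(20000):
    s = rng.uniform(-5, 5)                 # true stimulus position
    x1 = s + rng.normal(0, sig1)           # noisy cue 1
    x2 = s + rng.normal(0, sig2)           # noisy cue 2
    a = w * x1 + (1 - w) * x2              # orienting response
    err = a - s                            # reward is -err**2; follow its gradient in w
    w -= lr * 2 * err * (x1 - x2)
    if step >= 15000:                      # average the weight after convergence
        tail.append(w)
w_learned = float(np.mean(tail))
w_optimal = sig2**2 / (sig1**2 + sig2**2)  # Bayesian reliability weighting = 0.8
```

The learned weight settles near the inverse-variance-optimal value, without the agent ever representing the cue reliabilities explicitly.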
Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors
(2011)
Background: Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. Methodology/Principal Findings: We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. Conclusions/Significance: Twelve compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
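The rotation-invariance argument behind such descriptors can be checked numerically: a rotation mixes spherical harmonic coefficients only within each degree l, so the per-degree coefficient norms stay fixed. The sketch below uses a toy band-limited "surface radius" function and Monte-Carlo integration; it is not the paper's descriptor or surface parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)

def real_sh(u):
    """Real spherical harmonics up to degree l = 2 at unit vectors u (n, 3)."""
    x, y, z = u.T
    return {
        0: 0.282095 * np.ones_like(x)[:, None],
        1: 0.488603 * np.stack([y, z, x], axis=1),
        2: np.stack([1.092548 * x * y, 1.092548 * y * z,
                     0.315392 * (3 * z**2 - 1),
                     1.092548 * x * z, 0.546274 * (x**2 - y**2)], axis=1),
    }

def degree_energies(f_vals, u):
    """Monte-Carlo coefficients c_lm = integral of f * Y_lm over the sphere,
    reduced to per-degree energies sum_m c_lm**2 (the rotation-invariant norms)."""
    sh = real_sh(u)
    return np.array([np.sum((4 * np.pi * np.mean(f_vals[:, None] * sh[l], axis=0))**2)
                     for l in (0, 1, 2)])

# Monte-Carlo quadrature nodes: uniform random directions on the sphere
u = rng.normal(size=(200000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

def radius(u):
    """Toy band-limited 'surface radius' standing in for a molecular surface."""
    x, y, z = u.T
    return 1.0 + 0.4 * x + 0.2 * z + 0.3 * (x**2 - y**2)

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random orthogonal matrix
E_orig = degree_energies(radius(u), u)
E_rot = degree_energies(radius(u @ Q.T), u)   # same surface, rotated
```

`E_orig` and `E_rot` agree to within Monte-Carlo error, which is what makes such norms usable for orientation-independent shape comparison.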
As important as the intrinsic properties of an individual nerve cell is the network of neurons in which it is embedded and by virtue of which it acquires a great part of its responsiveness and functionality. In this study we have explored how the topological properties and conduction delays of several classes of neural networks affect the capacity of their constituent cells to establish well-defined temporal relations among the firing of their action potentials. This ability of a population of neurons to produce and maintain millisecond-precise coordinated firing (either evoked by external stimuli or internally generated) is central to neural codes that exploit precise spike timing for the representation and communication of information. Our results, based on extensive simulations of conductance-based model neurons in an oscillatory regime, indicate that only certain network topologies allow for coordinated firing at a local and long-range scale simultaneously. Besides network architecture, axonal conduction delays are also observed to be another important factor in the generation of coherent spiking. We report that such communication latencies not only set the phase difference between the oscillatory activity of remote neural populations but also determine whether the interconnected cells can engage in coherent firing at all. In this context, we have also investigated how the balance between the network's synchronizing effects and the dispersive drift caused by inhomogeneities in natural firing frequencies across neurons is resolved. Finally, we show that the observed roles of conduction delays and frequency dispersion are not particular to canonical networks: experimentally measured anatomical networks, such as the macaque cortical network, display the same type of behavior.
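The role of conduction delays in setting phase relations can be illustrated with the simplest possible stand-in: two delay-coupled phase oscillators. This is a Kuramoto-type sketch, not the study's conductance-based model; frequency, coupling and delays are arbitrary choices.

```python
import numpy as np

def settled_phase_lag(delay, k=5.0, f=40.0, dt=1e-4, T=5.0):
    """Two delay-coupled Kuramoto-type phase oscillators at a gamma-band
    frequency. Returns the phase difference after transients; the
    conduction delay decides between in-phase and anti-phase locking."""
    n, d = int(T / dt), int(round(delay / dt))
    w = 2 * np.pi * f
    th = np.zeros((n, 2))
    th[0] = [0.0, 1.0]                      # slightly asymmetric start
    for i in range(1, n):
        j = max(i - 1 - d, 0)               # index of the delayed partner signal
        th[i, 0] = th[i-1, 0] + dt * (w + k * np.sin(th[j, 1] - th[i-1, 0]))
        th[i, 1] = th[i-1, 1] + dt * (w + k * np.sin(th[j, 0] - th[i-1, 1]))
    dphi = th[-1, 0] - th[-1, 1]
    return (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap into (-pi, pi]

lag_zero = settled_phase_lag(0.0)      # no delay: in-phase locking (lag near 0)
lag_half = settled_phase_lag(0.0125)   # half a 40 Hz cycle of delay: anti-phase locking
```

Even in this two-node caricature, the communication latency alone flips the stable phase relation between the "populations", echoing the abstract's point that latencies set the phase difference between remote oscillatory activity.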
Dynamics of chaotic strings
(2011)
The main topic of this thesis is the investigation of dynamical properties of coupled Tchebycheff map networks. At every node of the network the dynamics is given by the iteration of a Tchebycheff map, which shows the strongest possible chaotic behaviour. By applying a coupling between the various individual dynamics along the links of the network, a rich structure of complex dynamical patterns emerges. Accordingly, coupled chaotic map networks provide prototypical models for studying the interplay between local dynamics, network structure, and the emergent global dynamics. An exciting application of coupled Tchebycheff map lattices in quantum field theory has been proposed by Beck in 'Spatio-temporal chaos and vacuum fluctuations of quantized fields' (2002). In this so-called chaotic string model, the coupled map lattice dynamics generates the noise needed for the Parisi-Wu approach of stochastic quantization. The remarkable observation is that the respective dynamics seems to reproduce distinguished numerical values of coupling constants that coincide with those observed in the standard model of particle physics. The results of this thesis give insights into the chaotic string model and its network generalization from a dynamical point of view. This leads to a deeper understanding of the dynamics, which is essential for a critical discussion of possible physical embeddings. Apart from this specific application to particle physics, the investigated concepts, like synchronization or a most random behaviour of the dynamics, are of general interest for dynamical systems theory and the science of complex networks. As a first approach, discrete symmetry transformations of the model are studied. These transformations are formulated in a general way in order to be also applicable to similar dynamics on bipartite network structures. An observable of main interest in the chaotic string model is the interaction energy.
In 'Spatio-temporal chaos and vacuum fluctuations of quantized fields' (2002) it has been observed that certain chaotic string couplings, corresponding to a vanishing interaction energy, coincide with coupling constants of the standard model of elementary particle physics. Since the interaction energy is basically a spatial correlation measure, an interpretation of the respective dynamical states in terms of a most random behaviour is tempting. In order to distinguish certain states as 'most random', or to invoke another dynamical principle, a deeper understanding of the dynamics is essential. In the present thesis the dynamics is studied numerically via Lyapunov measures, spatial correlations, and ergodic properties. It is shown that the zeros of the interaction energy are distinguished only with respect to this specific observable, but not by a more general dynamical principle. The original chaotic string model is defined on a one-dimensional lattice (ring-network) as the underlying network topology. This thesis studies a modification of the model based on the introduction of tunable disorder. The effects of inhomogeneous coupling weights as well as small-world perturbations of the ring-network structure on the interaction energy are discussed. Synchronization properties of the chaotic string model and its network generalization are studied in later chapters of this thesis. The analysis is based on the master stability formalism, which relates the stability of the synchronized state to the spectral properties of the network. Apart from complete synchronization, where the dynamics at all nodes of the network coincide, two-cluster synchronization on bipartite networks is also studied. For both types of synchronization it is shown that, depending on the type of coupling, the synchronized dynamics can display chaotic as well as periodic or quasi-periodic behaviour.
The semi-analytical calculations reveal that the respective synchronized states are often stable for a wide range of coupling values even for the ring-network, although the respective basins of attraction may occupy only a small fraction of the phase space. To provide analytical results in closed form, the stability of all fixed points and period-2 orbits under complete synchronization is determined analytically for all chaotic string networks. The master stability formalism allows one to treat the ring-network of the chaotic string model as a special case, but the results are valid for coupled Tchebycheff maps on arbitrary networks. For two-cluster synchronization on bipartite networks, selected fixed points and period-2 orbits are analyzed.
Background: The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely used standard for choosing these residue ranges for experimentally determined protein structures, where the manual selection of residue ranges or the use of suboptimal criteria remain commonplace. Results: We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible without including residues whose addition would lead to a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions: The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions.
Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures. Additional files Additional file 1: Dependence of Q on the order parameter rank. The quantity Qi is plotted against the order parameter rank i for 9 different protein structure bundles. Additional file 2: Dependence of P on the clustering stage. The quantity Pi is plotted against the clustering stage i for 9 different protein structure bundles. Additional file 3: Dependence of CYRANGE results on the minimal cluster size parameter μ. The sequence coverage (red) and RMSD (blue) of the residue ranges determined by CYRANGE were plotted as a function of μ for 9 different protein structure bundles. The dotted vertical line indicates the default value, μ = 8. Where CYRANGE found two domains, the RMSD values of the individual domains are shown in light and dark blue. Additional file 4: Dependence of CYRANGE results on the domain boundary extension parameter m. See Additional File 3 for details. Additional file 5: Dependence of CYRANGE results on the minimal gap width g. See Additional File 3 for details. Additional file 6: Dependence of CYRANGE results on the relative RMSD decrease parameter δ. See Additional File 3 for details. Additional file 7: Dependence of CYRANGE results on the absolute RMSD decrease parameter δabs. See Additional File 3 for details. Additional file 8: Dependence of CYRANGE results on the gap penalty parameter γ. See Additional File 3 for details.
Additional file 9: Correlation between the sequence coverage from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents a protein shown in Figures 3 and 4. The coverage is the percentage of amino acid residues included in the residue ranges found by the different methods. The GDT_TS value is defined by GDT_TS = (P1 + P2 + P4 + P8)/4, where Pd is the fraction of residues that can be superimposed under a distance cutoff of d Å. Additional file 10: Correlation between the RMSD value for the residue ranges from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents one protein domain. See Additional File 9 for details.
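The GDT_TS definition quoted in Additional file 9 can be written down directly. One simplification: the full GDT procedure re-optimises the superposition for each cutoff, whereas here the per-residue distances are taken as given.

```python
import numpy as np

def gdt_ts(distances):
    """GDT_TS = (P1 + P2 + P4 + P8) / 4, with Pd the fraction of residues
    whose displacement after superposition is at most d Angstrom."""
    d = np.asarray(distances, dtype=float)
    return float(np.mean([(d <= cut).mean() for cut in (1.0, 2.0, 4.0, 8.0)]))

# five residues with displacements 0.5, 0.9, 1.5, 3.0 and 9.0 Angstrom:
# P1 = 2/5, P2 = 3/5, P4 = 4/5, P8 = 4/5  ->  GDT_TS = 0.65
score = gdt_ts([0.5, 0.9, 1.5, 3.0, 9.0])
```

Because each Pd saturates at 1, GDT_TS rewards broad agreement at several tolerance levels rather than punishing a few badly fitting residues the way a global RMSD does.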
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. One of the central questions in neuroscience is how neural activity is organized across different spatial and temporal scales. As larger populations oscillate and synchronize at lower frequencies and smaller ensembles are active at higher frequencies, cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. Although various experiments have revealed amplitude-to-amplitude and phase-to-phase coupling, the most common and most celebrated result is that the phase of the lower-frequency component modulates the amplitude of the higher-frequency component. Over the past five years, the number of experimental studies reporting such phase-amplitude coupling in LFP, ECoG, EEG and MEG has grown tremendously (summarized in [1]). We suggest that although the mechanism of cross-frequency coupling (CFC) is theoretically very tempting, the current analysis methods might overestimate any physiological CFC actually evident in LFP, ECoG, EEG and MEG signals. In particular, we point out three conceptual problems in assessing the components of a time series and their correlations. Although we focus on phase-amplitude coupling, most of our argument is relevant for any type of coupling. 1) The first conceptual problem is related to isolating physiological frequency components of the recorded signal. The key point is to notice that there are many different mathematical representations of a time series, but the physical interpretation we make of them depends on the choice of the components to be analyzed. In particular, when one isolates the components by Fourier-representation-based filtering, it is the width of the filtering bands that defines what we consider as our components and how their power or group phase changes in time.
We will discuss clear-cut examples where the interpretation of the existence of CFC depends on the width of the filtering bands. 2) A second problem concerns the origin of spectral correlations as detected by current cross-frequency analysis. It is known that non-stationarities are associated with spectral correlations in the Fourier space. There are therefore two possibilities for interpreting any observed CFC. One scenario is that basic neuronal mechanisms indeed generate an interaction across different time scales (or frequencies), resulting in processes with non-stationary features. The other, problematic possibility is that unspecific non-stationarities can also produce spectral correlations, which will in turn be detected by cross-frequency measures even if there is no physiological causal interaction between the frequencies. 3) We discuss the role of nonlinearities as generators of cross-frequency interactions. As an example, we performed a phase-amplitude coupling analysis of two nonlinearly related signals: atmospheric noise and its square (Figure 1), observing an enhancement of phase-amplitude coupling in the second signal, while no pattern is observed in the first. Finally, we discuss some minimal conditions that need to be tested to resolve some of the ambiguities noted here. In summary, we simply want to point out that finding a significant cross-frequency pattern does not necessarily imply that there is indeed a physiological cross-frequency interaction in the brain.
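The squaring demonstration in point 3 can be sketched with stdlib Python. A deterministic two-component signal stands in for atmospheric noise, and the band limits, sampling rate and mean-vector coupling index are illustrative assumptions, not the analysis pipeline of the poster:

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) discrete Fourier transform (stdlib only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def band_analytic(x, fs, lo, hi):
    """Analytic signal of the band [lo, hi] Hz: keep the positive-frequency
    DFT bins inside the band (doubled), zero everything else, and invert."""
    n = len(x)
    X = dft(x)
    Z = [2 * X[k] if lo <= k * fs / n <= hi else 0j for k in range(n // 2)]
    Z += [0j] * (n - len(Z))
    return [sum(Z[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def pac_index(x, fs, slow_band, fast_band):
    """Mean-vector phase-amplitude coupling index in [0, 1]."""
    phase = [cmath.phase(z) for z in band_analytic(x, fs, *slow_band)]
    amp = [abs(z) for z in band_analytic(x, fs, *fast_band)]
    return abs(sum(a * cmath.exp(1j * p) for a, p in zip(amp, phase))) / sum(amp)

fs, n = 200, 400  # 2 s of data at 200 Hz
t = [i / fs for i in range(n)]
# Toy stand-in for the noise signal: a 5 Hz and a 40 Hz component with no
# built-in phase-amplitude coupling (plus an offset).
x = [1.0 + math.cos(2 * math.pi * 5 * ti) + 0.3 * math.cos(2 * math.pi * 40 * ti)
     for ti in t]
y = [xi * xi for xi in x]  # the squared (nonlinearly related) signal

mi_x = pac_index(x, fs, (2, 8), (30, 50))  # near zero: no coupling built in
mi_y = pac_index(y, fs, (2, 8), (30, 50))  # clearly enhanced by the squaring
```

The squaring alone creates the 5 Hz phase to 40 Hz amplitude coupling, via the cross term of the square, without any interaction being present in the original signal.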
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. Parallel multiunit recordings from V1 in the anesthetized cat were collected during the presentation of random sequences of drifting sinusoidal gratings at 12 fixed orientations while gamma oscillations were present. In agreement with the seminal work [1], most units were orientation selective to varying degrees, and synchronization was evident in spike-train cross-correlograms computed between units with similar preferred orientations, particularly during the presentation of optimal stimuli. Interestingly, a subset of units, which we refer to as synchronization hubs, were additionally found to synchronize with units having differing preferred orientations, consistent with a previous study [2]. Moreover, oscillatory patterning in spike-train autocorrelograms was found to be strongest in units denoted as synchronization hubs, and synchronization hubs also tended to have narrower tuning curves than other units. We used simplified computational models of small networks of V1 neurons to demonstrate that neurons subject to a sufficiently strong level of inhibitory input can function as synchronization hubs. Neurons were endowed with either integrate-and-fire or conductance-based dynamics, and each neuron received a combination of Poisson-distributed excitatory (AMPA) synaptic inputs and inhibitory (GABA) inputs that were coherent in the gamma-frequency range. If the strength of rhythmic inhibition was increased for a subset of neurons in the network, and excitation was increased simultaneously to maintain a fixed firing rate, then these neurons produced stronger oscillatory patterning in their discharge probabilities. The oscillations in turn synchronized these neurons with other neurons in the network.
Importantly, the strength of synchronization with neurons of differing orientation preferences increased even though no direct synaptic coupling existed between the hubs and the other neurons. Enhanced levels of inhibition account for the emergence of synchronization hubs in the following way: inhibitory inputs exhibiting a gamma rhythm determine a time window within which a cell is likely to discharge. Increased levels of inhibition narrow this window further, simultaneously leading to (i) even stronger oscillatory patterning of the neuron's activity and (ii) enhanced synchronization with other neurons. This enables synchronization even between cells with differing orientation preferences. Additionally, the same increased levels of inhibition may be responsible for the narrow tuning curves of hub neurons. In conclusion, synchronization hubs may be the cells that interact most strongly with the network of inhibitory interneurons during gamma oscillations in primary visual cortex.
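The inhibition-based windowing mechanism can be sketched with a single leaky integrate-and-fire neuron. All parameters (membrane time constant, drive levels, 40 Hz inhibition) are illustrative assumptions, not the AMPA/GABA conductance model used in the study:

```python
import cmath
import math

def lif_spike_phases(drive, tau=0.01, f_gamma=40.0, dt=1e-4, t_max=2.0):
    """Euler-integrated leaky integrate-and-fire neuron (threshold 1, reset 0).
    `drive` maps time (s) to input; returns spike phases w.r.t. the 40 Hz rhythm."""
    v, phases = 0.0, []
    for i in range(int(t_max / dt)):
        t = i * dt
        v += dt * (-v + drive(t)) / tau
        if v >= 1.0:
            v = 0.0
            phases.append(2 * math.pi * ((f_gamma * t) % 1.0))
    return phases

def vector_strength(phases):
    """1 = all spikes at the same gamma phase; 0 = phases uniformly spread."""
    if not phases:
        return 0.0
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

# Tonic excitation only: the neuron fires regularly but unrelated to 40 Hz.
tonic = lambda t: 1.05
# Strong 40 Hz inhibition with excitation raised to keep the neuron firing:
# spikes become confined to the window where inhibition is weak.
inhibited = lambda t: 1.7 - 1.6 * (1 + math.cos(2 * math.pi * 40.0 * t)) / 2

vs_tonic = vector_strength(lif_spike_phases(tonic))
vs_inhibited = vector_strength(lif_spike_phases(inhibited))
```

The rhythmically inhibited neuron discharges only in the narrow window of weak inhibition, so its spike phases concentrate and its vector strength rises, which is the patterning that lets hubs synchronize broadly.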
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. Background: Oscillatory activity in the high-beta and gamma bands (20-80 Hz) is known to play an important role in cortical processing and is linked to cognitive processes and behavior. Beta/gamma oscillations are thought to emerge in local cortical circuits via two mechanisms: the interaction between excitatory principal cells and inhibitory interneurons – pyramidal-interneuron gamma (PING) [1] – and in networks of coupled inhibitory interneurons under tonic excitation – interneuronal gamma (ING) [2]. Experimental evidence underlines the important role of inhibitory interneurons, especially of the fast-spiking (FS) interneurons [3,4]. We show in simulation that an important property of FS neurons, namely membrane resonance (frequency preference), represents an additional mechanism – resonance-induced gamma (RING), i.e. modulation of oscillatory discharge by resonance. RING promotes frequency stability and enables oscillations in purely excitatory networks. Methods: Local circuits were modeled as networks of 80% excitatory and 20% inhibitory neurons interconnected in a small-world topology by realistic conductance-based synapses. Neuron populations consisted of leaky integrate-and-fire (LIF) or Izhikevich resonator (RES) neurons. We also tested networks of purely inhibitory and purely excitatory RES neurons. Networks were stimulated with miniature postsynaptic potentials (MINIs) [5] and with low-frequency sinusoidal (0.5 Hz) input that mimics the effect of gratings passing through the visual field. The activity was calibrated to match recordings from cat visual cortex (firing rate, oscillatory activity). Results: Sinusoidal input modulates the network oscillation frequency.
This effect is most prominent in networks with IF excitatory and IF inhibitory populations (IF-IF) and about four times less prominent in IF-RES or RES-IF networks, where the frequency remains relatively stable. The most stable frequency was observed in networks of pure resonators (RES-RES, None-RES, RES-None). Interestingly, purely excitatory RES networks (RES-None) were also able to exhibit oscillations through RING. By contrast, purely excitatory or purely inhibitory IF networks (IF-None, None-IF) were not able to express oscillations under these conditions, which matched experimental parameters. Conclusions: In both PING and ING, adding membrane resonance to principal cells or inhibitory interneurons stabilizes the network oscillation frequency via the RING mechanism. Notably, in purely excitatory networks, where ING and PING are not defined, oscillations can emerge via the RING mechanism if membrane resonance is expressed. Thus, RING appears to be a potentially important mechanism for promoting stable network oscillations.
TRENTOOL: an open source toolbox to estimate neural directed interactions with transfer entropy
(2011)
To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement in the prediction of the future of a time series X from its own past, obtained by incorporating information from the past of a second time series Y, is taken as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are frequently reported. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique.
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison against surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the FieldTrip data format. ...
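For orientation, the quantity itself can be written for discrete data with history length one as TE(Y->X) = sum over (x_t+1, x_t, y_t) of p(x_t+1, x_t, y_t) log2[ p(x_t+1 | x_t, y_t) / p(x_t+1 | x_t) ]. The following toy plug-in estimate (in Python, with invented binary data; it is not TRENTOOL's data-efficient estimator, which works on continuous signals) recovers the asymmetry of a one-way coupling:

```python
import math
import random
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in transfer entropy TE(source -> target) for discrete sequences,
    history length 1: sum p(x+, x, y) * log2( p(x+|x, y) / p(x+|x) ),
    where x is the target's past, y the source's past, x+ the target's next value."""
    n = len(target) - 1
    c_xyn, c_xy, c_xn, c_x = Counter(), Counter(), Counter(), Counter()
    for t in range(n):
        x, y, nxt = target[t], source[t], target[t + 1]
        c_xyn[(x, y, nxt)] += 1
        c_xy[(x, y)] += 1
        c_xn[(x, nxt)] += 1
        c_x[x] += 1
    te = 0.0
    for (x, y, nxt), c in c_xyn.items():
        te += (c / n) * math.log2((c / c_xy[(x, y)]) /
                                  (c_xn[(x, nxt)] / c_x[x]))
    return te

random.seed(0)
y = [random.randint(0, 1) for _ in range(5000)]
# One-way coupling: x copies y with a one-step delay, flipped 10% of the time.
x = [0] + [yi if random.random() > 0.1 else 1 - yi for yi in y[:-1]]

te_y_to_x = transfer_entropy(y, x)  # strong: roughly 1 - H(0.1) bits
te_x_to_y = transfer_entropy(x, y)  # near zero: no coupling in this direction
```

In practice the hard parts are exactly what such a sketch omits and the toolbox handles: state-space embedding of continuous data, data-efficient estimation, and significance testing against surrogates.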
The dynamics of relativistic heavy-ion collisions is investigated on the basis of a simple (1+1)-dimensional hydrodynamical model in light-cone coordinates. The main emphasis is put on studying the sensitivity of the dynamics and observables to the equation of state and the initial conditions. Low sensitivity of pion rapidity spectra to the presence of a phase transition is demonstrated, and some inconsistencies of the equilibrium scenario are pointed out. Possible non-equilibrium effects are discussed, in particular the possibility of an explosive disintegration of the deconfined phase into quark-gluon droplets. Simple estimates show that the characteristic droplet size should decrease with increasing collective expansion rate. These droplets will hadronize individually by emitting hadrons from the surface. This scenario should reveal itself through strong non-statistical fluctuations of observables. Critical Point and Onset of Deconfinement, 4th International Workshop, July 9-13, 2007, GSI Darmstadt, Germany.