### Refine

#### Document Type

- Article (8)
- Conference Proceeding (2)

#### Keywords

- STDP (1)
- alpha peak (1)
- auto-structure (1)
- hippocampal volume (1)
- integrate and fire (1)
- magnetoencephalography (1)
- mild cognitive impairment (1)
- non-Poissonian (1)
- slowing (1)
- spike train (1)

#### Publications

- TRENTOOL: a Matlab open source toolbox to analyse information flow in time series data with transfer entropy (2011)
- Background: Transfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions. Hence, it offers the power to analyze linear and nonlinear interactions alike. This allows, for example, the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL that allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametric statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans), where a neuronal one-way connection is likely present. Results: In simulated data TE detected information flow in the simulated direction reliably, with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false positive interactions in the reverse direction were detected. Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open source license (GPL v3). For use with neural data TRENTOOL seamlessly integrates with the popular FieldTrip toolbox.
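The abstract above describes transfer entropy only verbally. As a minimal sketch of the underlying quantity (a didactic toy in Python, not TRENTOOL's MATLAB nearest-neighbour estimator), the snippet below computes a plug-in TE estimate for binary sequences with a target history of one sample; the coupled pair of sequences and the 10% flip probability are illustrative assumptions:

```python
import numpy as np

def transfer_entropy_binary(source, target):
    """Plug-in TE(source -> target) in bits for binary sequences, using a
    target history of one sample. A didactic toy, not TRENTOOL's estimator."""
    s, x = np.asarray(source), np.asarray(target)
    x_next, x_past, s_past = x[1:], x[:-1], s[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((x_next == a) & (x_past == b) & (s_past == c))
                if p_abc == 0.0:
                    continue
                p_bc = np.mean((x_past == b) & (s_past == c))
                p_ab = np.mean((x_next == a) & (x_past == b))
                p_b = np.mean(x_past == b)
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# hypothetical unidirectional coupling: x copies y's previous value with 10% flips
rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)
x = np.empty(n, dtype=int)
x[0] = 0
flips = rng.random(n) < 0.1
x[1:] = np.where(flips[1:], 1 - y[:-1], y[:-1])

print(transfer_entropy_binary(y, x) > transfer_entropy_binary(x, y))   # True
```

The asymmetry of the estimate (large from driver to target, near zero in the reverse direction) is exactly the property the toolbox exploits to detect directed interactions.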

- TRENTOOL: an open source toolbox to estimate neural directed interactions with transfer entropy (2011)
- Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement of the prediction of the future of a time series X from its own past by the incorporation of information from the past of a second time series Y is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of non-linear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]) TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique.
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the FieldTrip data format. We evaluated the performance of the toolbox on simulated data and on a neuronal dataset with truly unidirectional connections, to circumvent the following generic problem: typically, for any result of an analysis of directed interactions in the brain there will be a plausible explanation because of the combination of feedforward and feedback connectivity between any two measurement sites. Therefore, we estimated TE between the electroretinogram (ERG) and the LFP response in the tectum of the turtle (Chrysemys scripta elegans) under visual stimulation by random light pulses. In addition, we also investigated transfer entropy between the input to the light source (TTL pulse) and the ERG, to test the ability of TE to detect directed interactions between signals with vastly different properties. We found significant (p<0.0005) causal interactions from the TTL pulse to the ERG and from the ERG to the tectum, as expected. No significant TE was detected in the reverse direction. Conclusion: TRENTOOL is an easy-to-use implementation of transfer entropy estimation combined with statistical testing routines suitable for the analysis of directed interactions in neuronal data.
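The nonparametric testing mentioned above compares the observed estimate against surrogate data in which the trial pairing between source and target is shuffled. The Python sketch below illustrates that permutation logic; a simple lagged-correlation statistic stands in for the TE estimator, and the trial counts, coupling model, and permutation count are invented for illustration:

```python
import numpy as np

def lagged_stat(src, tgt, lag=1):
    """Directed statistic: correlation of the source's past with the target's
    future. A stand-in for a TE estimator; the surrogate logic is identical."""
    return abs(np.corrcoef(src[:-lag], tgt[lag:])[0, 1])

def trial_shuffle_test(src_trials, tgt_trials, n_perm=500, seed=0):
    """Nonparametric test: shuffling which source trial is paired with which
    target trial destroys the directed relation while preserving each trial's
    autostructure; the p-value is the fraction of surrogates >= observed."""
    rng = np.random.default_rng(seed)
    observed = np.mean([lagged_stat(s, t) for s, t in zip(src_trials, tgt_trials)])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(src_trials))
        null[i] = np.mean([lagged_stat(src_trials[j], t)
                           for j, t in zip(perm, tgt_trials)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# hypothetical coupled data: each target trial echoes its source trial at lag 1
rng = np.random.default_rng(1)
src = [rng.standard_normal(200) for _ in range(20)]
tgt = [np.roll(s, 1) + 0.3 * rng.standard_normal(200) for s in src]
print(trial_shuffle_test(src, tgt) < 0.05)   # True: the coupling is detected
```

Shuffling whole trials rather than samples is what keeps each trial's autostructure intact, so the null distribution reflects only the loss of the cross-trial relation.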

- Zero-lag long-range synchronization of Hodgkin-Huxley neurons is enhanced by dynamical relaying : poster presentation (2007)
- Background The synchrony hypothesis postulates that precise temporal synchronization of different pools of neurons conveys information that is not contained in their firing rates. The synchrony hypothesis has been supported by experimental findings demonstrating that millisecond-precise synchrony of neuronal oscillations across well separated brain regions plays an essential role in visual perception and other higher cognitive tasks [1]. Although more evidence is accumulating in favour of its role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such a dynamic and precise synchrony, especially zero-lag even in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization for a wide range of temporal delays [3]. We demonstrate that zero-lag synchronization between two distant neurons or neural populations can be achieved by relaying the dynamics via a third mediating single neuron or population. Methods We simulated the dynamics of two Hodgkin-Huxley neurons that interact with each other via an intermediate third neuron. The synaptic coupling was mediated through alpha-functions. Individual temporal delays of the arrival of pre-synaptic potentials were modelled by a gamma distribution. The strength of the synchronization and the phase-difference between each individual pair were derived by cross-correlation of the membrane potentials. Results In the regular spiking regime the two outer neurons consistently synchronize with zero phase lag irrespective of the initial conditions. This robust zero-lag synchronization naturally arises as a consequence of the relay and redistribution of the dynamics performed by the central neuron. This result is independent of whether the coupling is excitatory or inhibitory and can be maintained for arbitrarily long time delays (see Fig. 1).
Conclusion We have presented a simple and extremely robust network motif able to account for the isochronous synchronization of distant neural elements in a natural way. As opposed to other possible mechanisms of neural synchronization, neither inhibitory coupling, gap junctions nor precise tuning of morphological parameters are required to obtain zero-lag synchronized neuronal oscillations.
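The relay mechanism can be illustrated without the full Hodgkin-Huxley machinery. Below is a hedged Python sketch using delay-coupled Kuramoto phase oscillators instead of conductance-based neurons (a deliberate simplification of the paper's model): two outer nodes couple only through a central relay, and their phase difference settles near zero despite a conduction delay of roughly half an oscillation period. All parameter values are assumptions chosen for the demo:

```python
import numpy as np

# Delay-coupled Kuramoto phase oscillators in a relay motif: outer nodes 0 and 2
# interact only through the central node 1, with conduction delay tau.
dt, omega, K = 0.01, 1.0, 1.0           # assumed parameter values
tau_steps = 300                          # delay of 3 time units, ~half a period
n_steps = 40000
adj = {0: [1], 1: [0, 2], 2: [1]}        # relay topology
rng = np.random.default_rng(2)
theta = np.zeros((n_steps, 3))
theta[:tau_steps + 1] = rng.uniform(0.0, 2.0 * np.pi, 3)  # random initial history

for t in range(tau_steps, n_steps - 1):
    for i in range(3):
        drive = sum(np.sin(theta[t - tau_steps, j] - theta[t, i]) for j in adj[i])
        theta[t + 1, i] = theta[t, i] + dt * (omega + K * drive)

# circular mean of the outer nodes' phase difference over the final stretch
lag_outer = np.angle(np.mean(np.exp(1j * (theta[-5000:, 0] - theta[-5000:, 2]))))
print(abs(lag_outer))   # near zero once the outer nodes lock
```

The two outer nodes obey identical equations driven by the same delayed relay phase, so the zero-lag state is an invariant manifold; the simulation shows that it is also the state the system settles into from random initial conditions.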

- Analyzing possible pitfalls of cross-frequency analysis : poster presentation from Twentieth Annual Computational Neuroscience Meeting CNS*2011 Stockholm, Sweden, 23 - 28 July 2011 (2011)
- One of the central questions in neuroscience is how neural activity is organized across different spatial and temporal scales. As larger populations oscillate and synchronize at lower frequencies and smaller ensembles are active at higher frequencies, a cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. Although various experiments have revealed amplitude-to-amplitude and phase-to-phase coupling, the most common and most celebrated result is that the phase of the lower frequency component modulates the amplitude of the higher frequency component. Over the past five years, the number of experimental works reporting such phase-amplitude coupling in LFP, ECoG, EEG and MEG has grown tremendously (summarized in [1]). We suggest that although the mechanism of cross-frequency coupling (CFC) is theoretically very tempting, the current analysis methods might overestimate any physiological CFC actually evident in the signals of LFP, ECoG, EEG and MEG. In particular, we point out three conceptual problems in assessing the components of a time series and their correlations. Although we focus on phase-amplitude coupling, most of our argument is relevant for any type of coupling. 1) The first conceptual problem is related to isolating physiological frequency components of the recorded signal. The key point is to notice that there are many different mathematical representations for a time series, but the physical interpretation we make out of them is dependent on the choice of the components to be analyzed. In particular, when one isolates the components by Fourier-representation based filtering, it is the width of the filtering bands that defines what we consider as our components and how their power or group phase change in time.
We will discuss clear-cut examples where the interpretation of the existence of CFC depends on the width of the filtering process. 2) A second problem deals with the origin of spectral correlations as detected by current cross-frequency analysis. It is known that non-stationarities are associated with spectral correlations in the Fourier space. Therefore, there are two possibilities regarding the interpretation of any observed CFC. One scenario is that basic neuronal mechanisms indeed generate an interaction across different time scales (or frequencies), resulting in processes with non-stationary features. The other and problematic possibility is that unspecific non-stationarities can also be associated with spectral correlations, which in turn will be detected by cross-frequency measures even if physiologically there is no causal interaction between the frequencies. 3) We discuss the role of non-linearities as generators of cross-frequency interactions. As an example, we performed a phase-amplitude coupling analysis of two nonlinearly related signals, atmospheric noise and the square of it (Figure 1), observing an enhancement of phase-amplitude coupling in the second signal while no pattern is observed in the first. Finally, we discuss some minimal conditions that need to be tested to resolve some of the ambiguities noted here. In summary, we simply want to point out that finding a significant cross-frequency pattern does not always imply that there indeed is a physiological cross-frequency interaction in the brain.
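The phase-amplitude coupling analysis discussed above typically proceeds by band-pass filtering, Hilbert-transforming, and correlating low-band phase with high-band amplitude. The Python sketch below implements a normalized mean-vector-length PAC measure on two synthetic signals, one with genuine theta-phase modulation of gamma amplitude and one without; the frequency bands, filter order, and signal model are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
slow = np.sin(2 * np.pi * 6 * t)                        # 6 Hz "theta" component
coupled = slow + 0.5 * (1 + slow) * np.sin(2 * np.pi * 60 * t) \
          + 0.3 * rng.standard_normal(t.size)           # gamma amplitude follows theta
uncoupled = slow + 0.5 * np.sin(2 * np.pi * 60 * t) \
            + 0.3 * rng.standard_normal(t.size)         # constant gamma amplitude

def pac_mvl(sig):
    """Normalized mean-vector-length estimate of phase-amplitude coupling:
    low-band phase (4-8 Hz) against high-band amplitude (40-80 Hz)."""
    sos_lo = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band", output="sos")
    sos_hi = butter(4, [40 / (fs / 2), 80 / (fs / 2)], btype="band", output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos_lo, sig)))
    amp = np.abs(hilbert(sosfiltfilt(sos_hi, sig)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

print(pac_mvl(coupled) > pac_mvl(uncoupled))   # True: only the modulated signal shows PAC
```

Note that every choice in `pac_mvl` (band edges, filter order, the measure itself) is exactly the kind of analysis parameter the abstract warns can shape the outcome.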

- Graphical analyses in delay interaction networks (2013)
- Poster presentation: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013. Network or graph theory has become a popular tool to represent and analyze large-scale interaction patterns in the brain. To derive a functional network representation from experimentally recorded neural time series one has to identify the structure of the interactions between these time series. In neuroscience, this is often done by pairwise bivariate analysis because a fully multivariate treatment is typically not possible due to limited data and excessive computational cost. Furthermore, a true multivariate analysis would consist of the analysis of the combined effects, including information theoretic synergies and redundancies, of all possible subsets of network components. Since these subsets form the power set of the network components, their number grows exponentially with the number of components, leading to a combinatorial explosion (i.e. a computationally intractable problem). In contrast, a pairwise bivariate analysis of interactions is typically feasible but introduces the possibility of false detection of spurious interactions between network components, especially due to cascade and common drive effects. These spurious connections in a network representation may introduce a bias to subsequently computed graph theoretical measures (e.g. clustering coefficient or centrality) as these measures depend on the reliability of the graph representation from which they are computed. Strictly speaking, graph theoretical measures are meaningful only if the underlying graph structure can be guaranteed to consist of one type of connection only, i.e. connections in the graph are guaranteed to be non-spurious. We propose an approximate solution to improve this situation in the form of an algorithm that flags potentially spurious edges that are due to cascade effects and "three node" common drive effects in a network representation of bivariately analyzed interactions.
As these two effects are responsible for a large part of spurious connections in bivariate analyses, their removal would mean a significant improvement of the network representation over existing bivariate solutions. Our approach is based on the detection of directed interactions and the weighting of these interactions by their reconstructed interaction delays. We demonstrate how both questions can be addressed using a modified estimator of transfer entropy (TE). TE is an implementation of Wiener's principle of observational causality based on information theory [1], and detects arbitrary linear and non-linear interactions. Using a modified TE estimator that uses delayed states of the driving system, one can mathematically prove that transfer entropy values peak if the delay of the state of the driving system equals the true interaction delay [2]. From this analysis, we derive a delay-weighted network representation of directed interactions. On this network representation, potentially spurious interactions can be detected by analyzing sets of alternative paths between two endpoints in terms of their summed delays. The proposed algorithm may be used to prune spurious edges from the network, improving the reliability of the network representation itself and enhancing the applicability of subsequent graph theoretical measures. For the detection of "multi-node" common drive effects, which are not considered in this study, a theoretical solution exists as well, extending the power of the method, but this solution has not been implemented yet. We demonstrate the application of this algorithm to networks of interacting neural sources in magneto-encephalographic data, and show that roughly 30% of bivariate interactions in these data are potentially spurious, and thus alter graph properties.
We conclude that the post hoc correction provided by our approach is a computationally less demanding alternative to a fully multivariate analysis of directed interactions, and preferable in cases where a multivariate treatment of the data is difficult due to the limited amount of data available.
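The path-based flagging idea can be sketched compactly. The toy Python function below flags a direct edge as potentially spurious when a two-edge alternative path reproduces its reconstructed delay, which captures only the cascade case described above (not the common-drive cases); the edge names, delay values, and tolerance are invented for illustration:

```python
def flag_spurious(edges, tol=1.0):
    """Flag a directed edge (u, v) as potentially spurious if an alternative
    two-edge path u -> w -> v exists whose summed reconstructed delay matches
    the direct edge's delay within tolerance `tol` (ms): the cascade effect.
    `edges` maps (u, v) -> reconstructed delay in ms. A simplified sketch;
    the full algorithm also handles three-node common-drive effects."""
    flagged = set()
    for (u, v), d_direct in edges.items():
        for (a, w), d_first in edges.items():
            if a != u or w == v or (w, v) not in edges:
                continue
            if abs(d_first + edges[(w, v)] - d_direct) <= tol:
                flagged.add((u, v))
    return flagged

# Cascade A -> B -> C (5 ms + 7 ms) produces an apparent direct A -> C at 12 ms.
edges = {("A", "B"): 5.0, ("B", "C"): 7.0, ("A", "C"): 12.0, ("C", "D"): 3.0}
print(flag_spurious(edges))   # {('A', 'C')}
```

Flagged edges would then be pruned (or down-weighted) before computing graph measures such as clustering coefficient or centrality.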

- Measuring information-transfer delays (2013)
- In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
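Under a joint-Gaussian assumption, the delay-sensitive transfer entropy described above reduces to a closed form in partial correlations, which makes the delay-scanning idea easy to sketch. The Python demo below (all process parameters are assumptions, and the Gaussian shortcut is a simplification, not the paper's estimator) scans candidate source delays while retaining the conditioning on the target's immediately preceding state, and recovers the true delay at the TE peak:

```python
import numpy as np

def gaussian_te(source, target, u):
    """TE(source -> target) under a joint-Gaussian assumption, with the source
    delayed by u samples: equals -0.5*ln(1 - r^2), where r is the partial
    correlation of the target's next value and the delayed source, given the
    target's own immediately preceding value (the conditioning the abstract
    insists on)."""
    x, y = np.asarray(target, float), np.asarray(source, float)
    idx = np.arange(u, len(x) - 1)
    xn, xp, yd = x[idx + 1], x[idx], y[idx + 1 - u]
    c = np.corrcoef([xn, xp, yd])
    r = ((c[0, 2] - c[0, 1] * c[1, 2])
         / np.sqrt((1 - c[0, 1] ** 2) * (1 - c[1, 2] ** 2)))
    return -0.5 * np.log(1 - r ** 2)

# AR(1) target driven by a white-noise source with an assumed true delay of 4
rng = np.random.default_rng(4)
n, u_true = 10000, 4
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + (0.8 * y[t - u_true] if t >= u_true else 0.0) \
           + 0.1 * rng.standard_normal()

delays = list(range(1, 9))
te = [gaussian_te(y, x, u) for u in delays]
print(delays[int(np.argmax(te))])   # recovers the assumed delay, 4
```

Dropping the conditioning on `xp` would let the target's own autocorrelation smear the peak across several delays, which is why the abstract stresses that this term must be retained.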

- Brain-wide slowing of spontaneous alpha rhythms in mild cognitive impairment (2013)
- The neurophysiological changes associated with Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) include an increase in low frequency activity, as measured with electroencephalography or magnetoencephalography (MEG). A relevant property of spectral measures is the alpha peak, which corresponds to the dominant alpha rhythm. Here we studied the spatial distribution of MEG resting state alpha peak frequency and amplitude values in a sample of 27 MCI patients and 24 age-matched healthy controls. Power spectra were reconstructed in source space with a linearly constrained minimum variance beamformer. Then, 88 Regions of Interest (ROIs) were defined and an alpha peak per ROI and subject was identified. Statistical analyses were performed at every ROI, accounting for age, sex and educational level. Peak frequency was significantly decreased (p < 0.05) in MCIs in many posterior ROIs. The average peak frequency over all ROIs was 9.68 ± 0.71 Hz for controls and 9.05 ± 0.90 Hz for MCIs, and the average normalized amplitude was (2.57 ± 0.59)·10⁻² for controls and (2.70 ± 0.49)·10⁻² for MCIs. Age and gender were also found to play a role in the alpha peak, since its frequency was higher in females than in males in posterior ROIs and correlated negatively with age in frontal ROIs. Furthermore, we examined the dependence of peak parameters on hippocampal volume, which is a commonly used marker of early structural AD-related damage. Peak frequency was positively correlated with hippocampal volume in many posterior ROIs. Overall, these findings indicate a pathological alpha slowing in MCI.
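Extracting an alpha peak per region boils down to estimating a power spectrum and locating its maximum within a search band. A minimal Python sketch on a synthetic trace (the sampling rate, search band, and signal model are illustrative assumptions, not the study's beamformer pipeline):

```python
import numpy as np
from scipy.signal import welch

fs = 600.0                                  # assumed sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
# synthetic resting-state trace: a 9.5 Hz alpha rhythm over 1/f-like noise
sig = np.sin(2 * np.pi * 9.5 * t) + 0.01 * np.cumsum(rng.standard_normal(t.size))

f, pxx = welch(sig, fs=fs, nperseg=4096)    # Welch power spectral density
band = (f >= 6.0) & (f <= 14.0)             # search window around the alpha band
peak_freq = f[band][np.argmax(pxx[band])]   # dominant alpha frequency
peak_amp = pxx[band].max() / pxx.sum()      # peak amplitude, normalized to total power
print(round(peak_freq, 1))                  # ~9.5
```

A slowing effect like the one reported (9.68 Hz in controls versus 9.05 Hz in MCI) would show up here simply as a lower `peak_freq` for the patient group's spectra.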

- Using transfer entropy to measure the patterns of information flow through cortex : application to MEG recordings from a visual Simon task (2009)
- Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality and most synchronization measures do not distinguish causal and non-causal interactions among remote brain areas, i.e. determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive Magneto-/Electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction leading to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and not influenced by volume conduction. ...

- A mechanism for achieving zero-lag long-range synchronization of neural activity (2009)
- Poster presentation: How can two distant neural assemblies synchronize their firings at zero-lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more evidence is accumulating in favor of its functional role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such a dynamic and precise synchrony, especially zero-lag even in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that when two distant neural assemblies do not interact directly but instead relay their dynamics via a third mediating neuron or population, they eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms the distant neural populations can synchronize their firings at zero-lag with millisecond precision after the exchange of a few spikes. The roles of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations are also studied, confirming the robustness of this synchronization mechanism. The proposed network module is densely embedded within the complex functional architecture of the brain and especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication.
In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous works, neither inhibitory coupling, gap junctions, nor complex network topologies need to be invoked to provide a stable mechanism of zero-phase correlated activity of neural populations in the presence of large conduction delays.

- Spike train auto-structure impacts post-synaptic firing and timing-based plasticity (2011)
- Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impacts the post-synaptic firing of a conductance-based integrate-and-fire neuron. Both the excitatory and inhibitory inputs were modeled by renewal gamma processes with varying shape factors, covering both regular firing and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard-bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing and the spike-timing-dependent plasticity to the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification.
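The input model in this abstract, a renewal process with gamma-distributed inter-spike intervals, is easy to reproduce. The Python sketch below generates a Poisson-like train (shape 1) and a regular train (shape 10) at the same mean rate and feeds each into a toy current-based integrate-and-fire neuron (a simplification of the paper's conductance-based model; all parameter values are assumptions), showing that output firing depends on the input auto-structure and not only on the rate:

```python
import numpy as np

def gamma_spike_train(rate, shape, t_max, rng):
    """Renewal process with gamma-distributed ISIs: shape 1 is Poisson firing,
    larger shapes give more regular trains at the same mean rate (CV=1/sqrt(shape))."""
    isi = rng.gamma(shape, 1.0 / (shape * rate), size=int(2 * rate * t_max) + 50)
    times = np.cumsum(isi)
    return times[times < t_max]

def lif_response(spike_times, t_max, dt=1e-4, tau=0.02, w=0.3, v_th=1.0):
    """Toy current-based leaky integrate-and-fire neuron (a simplification of
    the paper's conductance-based model); returns the output spike count."""
    n = int(t_max / dt)
    drive = np.zeros(n)
    idx = np.minimum((spike_times / dt).astype(int), n - 1)
    np.add.at(drive, idx, w)                 # accumulate coincident input spikes
    v, count = 0.0, 0
    for i in range(n):
        v += dt / tau * (-v) + drive[i]      # leak plus instantaneous input jumps
        if v >= v_th:
            v, count = 0.0, count + 1        # fire and reset
    return count

rng = np.random.default_rng(6)
poisson_in = gamma_spike_train(100.0, 1.0, 10.0, rng)    # irregular input
regular_in = gamma_spike_train(100.0, 10.0, 10.0, rng)   # regular input, same rate
print(lif_response(poisson_in, 10.0), lif_response(regular_in, 10.0))
# same mean input rate, but the irregular train drives more output spikes
```

With the assumed parameters the neuron sits below threshold on average, so output spikes are driven by input fluctuations; the irregular train supplies those fluctuations, the regular train largely does not.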