### Refine

#### Document Type

- Article (16)
- Conference Proceeding (4)

#### Keywords

- MEG (2)
- Causality (1)
- Crossmodal (1)
- DCM (1)
- Effective connectivity (1)
- Electroencephalography (1)
- Functional connectivity (1)
- Functional magnetic resonance imaging (1)
- Independent component analysis (1)
- Information theory (1)

#### Institute

- Medizin (18)
- Frankfurt Institute for Advanced Studies (FIAS) (12)
- MPI für Hirnforschung (2)
- Informatik (1)
- Physik (1)
- Psychologie (1)

- TRENTOOL: an open source toolbox to estimate neural directed interactions with transfer entropy (2011)
- Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. To investigate directed interactions in neural networks, we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement in the prediction of the future of a time series X from its own past, achieved by incorporating information from the past of a second time series Y, is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle in terms of information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique.
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison against surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the FieldTrip data format. We evaluated the performance of the toolbox on simulated data and on a neuronal dataset with truly unidirectional connections, chosen to circumvent the following generic problem: typically, any result of an analysis of directed interactions in the brain will have a plausible explanation, because of the combination of feedforward and feedback connectivity between any two measurement sites. Therefore, we estimated TE between the electroretinogram (ERG) and the LFP response in the tectum of the turtle (Chrysemys scripta elegans) under visual stimulation by random light pulses. In addition, we investigated transfer entropy between the input to the light source (TTL pulse) and the ERG, to test the ability of TE to detect directed interactions between signals with vastly different properties. We found significant (p<0.0005) causal interactions from the TTL pulse to the ERG and from the ERG to the tectum – as expected. No significant TE was detected in the reverse direction. CONCLUSION: TRENTOOL is an easy-to-use implementation of transfer entropy estimation combined with statistical testing routines suitable for the analysis of directed interactions in neuronal data.
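
The core quantity behind the toolbox can be illustrated with a deliberately simple plug-in estimator of Schreiber's transfer entropy for discrete time series (history length 1). The coupled toy signals below are hypothetical, and TRENTOOL itself uses far more data-efficient nearest-neighbour estimators; this sketch only shows the asymmetry of the measure:

```python
# Plug-in estimate of transfer entropy TE(Y -> X) for discrete series,
# following Schreiber's definition:
#   TE = sum p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ]
# Histories of length 1 for both signals; purely illustrative.
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """TE from y to x (in bits), with history length 1."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_next, x_past, y_past)
    pairs_xx = Counter(zip(x[1:], x[:-1]))         # (x_next, x_past)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))        # (x_past, y_past)
    singles = Counter(x[:-1])                      # x_past
    n = len(x) - 1
    te = 0.0
    for (xn, xp, yp), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(xp, yp)]
        p_cond_x = pairs_xx[(xn, xp)] / singles[xp]
        te += p_joint * log2(p_cond_xy / p_cond_x)
    return te

random.seed(0)
y = [random.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]  # x copies y with a one-step lag: x_{t+1} = y_t

print(transfer_entropy(x, y))  # close to 1 bit: y determines x's next value
print(transfer_entropy(y, x))  # close to 0: x adds nothing about y's future
```

The asymmetry between the two directions is exactly the property the toolbox tests for statistically.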

- TRENTOOL: a Matlab open source toolbox to analyse information flow in time series data with transfer entropy (2011)
- Background: Transfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information-theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions. Hence, it offers the power to analyze linear and nonlinear interactions alike. This allows, for example, the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL that allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametric statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans), where a neuronal one-way connection is likely present. Results: In simulated data, TE detected information flow in the simulated direction reliably, with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false positive interactions in the reverse direction were detected. Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information-theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open source license (GPL v3). For the use with neural data, TRENTOOL seamlessly integrates with the popular FieldTrip toolbox.
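
The non-parametric testing idea, comparing an observed dependence statistic against surrogates that destroy the trial-wise source-target pairing, can be sketched as follows. This is a generic permutation test with a simple lagged-covariance statistic standing in for TRENTOOL's TE estimator; the signals and coupling parameters are invented for illustration:

```python
# Surrogate-data test sketch: shuffling which source trial is paired with
# which target trial preserves each trial's dynamics but destroys the
# source-target relationship, yielding a null distribution for the statistic.
import random
random.seed(1)

def lagged_cov(src, tgt, lag=1):
    """Covariance between src[t] and tgt[t+lag] (toy dependence statistic)."""
    n = len(tgt) - lag
    ms = sum(src[:n]) / n
    mt = sum(tgt[lag:]) / n
    return sum((src[i] - ms) * (tgt[i + lag] - mt) for i in range(n)) / n

def make_trial():
    """Source drives target with a one-sample lag plus noise (hypothetical)."""
    src = [random.gauss(0, 1) for _ in range(200)]
    tgt = [0.0] * 200
    for t in range(1, 200):
        tgt[t] = 0.8 * src[t - 1] + 0.3 * random.gauss(0, 1)
    return src, tgt

trials = [make_trial() for _ in range(20)]
observed = sum(lagged_cov(s, t) for s, t in trials) / len(trials)

# Null distribution from shuffled source-target trial pairings
null = []
for _ in range(500):
    perm = random.sample(range(len(trials)), len(trials))
    null.append(sum(lagged_cov(trials[perm[i]][0], trials[i][1])
                    for i in range(len(trials))) / len(trials))

p_value = sum(1 for v in null if v >= observed) / len(null)
print(f"observed={observed:.3f}, p={p_value:.3f}")  # small p for coupled data
```

The same scheme applies unchanged when the statistic is transfer entropy.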

- Transfer entropy - a model-free measure of effective connectivity for the neurosciences (2010)
- Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance, because a large part of the brain's activity is thought to be internally generated and, hence, quantifying stimulus-response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model-based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity on electrophysiological data, based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.

- Measuring information-transfer delays (2013)
- In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
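
The delay-scan idea of this extended transfer entropy, computing TE for a range of candidate source-target delays u while retaining the conditioning on the target's immediate past, and taking the maximizing u as the estimated interaction delay, can be sketched on toy discrete data. The plug-in estimator and the noisy delayed-copy coupling below are illustrative assumptions, not the estimator used in the study:

```python
# Delay-sensitive transfer entropy sketch: TE(Y -> X) as a function of the
# assumed source-target delay u, with history length 1 on both sides.
import random
from collections import Counter
from math import log2

def te_delay(x, y, u):
    """Plug-in TE from y to x with source-target delay u."""
    xn = x[u:]        # x_t
    xp = x[u - 1:-1]  # x_{t-1}: the target's immediately previous state
    yp = y[:-u]       # y_{t-u}: the delayed source state
    n = len(xn)
    triples = Counter(zip(xn, xp, yp))
    pairs_xx = Counter(zip(xn, xp))
    pairs_xy = Counter(zip(xp, yp))
    singles = Counter(xp)
    te = 0.0
    for (a, b, c), cnt in triples.items():
        te += (cnt / n) * log2((cnt / pairs_xy[(b, c)]) /
                               (pairs_xx[(a, b)] / singles[b]))
    return te

random.seed(2)
true_delay = 3
y = [random.randint(0, 1) for _ in range(8000)]
# x follows y with a 3-step delay, with 10% of samples flipped as noise
x = [random.randint(0, 1)] * true_delay + [
    yi if random.random() > 0.1 else 1 - yi for yi in y[:-true_delay]
]

scan = {u: te_delay(x, y, u) for u in range(1, 7)}
estimated = max(scan, key=scan.get)
print(estimated)  # → 3
```

TE peaks sharply at the true delay, which is the estimator's selection criterion.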

- Distinct gamma-band components reflect the short-term memory maintenance of different sound lateralization angles (2008)
- Oscillatory activity in human electro- or magnetoencephalogram has been related to cortical stimulus representations and their modulation by cognitive processes. Whereas previous work has focused on gamma-band activity (GBA) during attention or maintenance of representations, there is little evidence for GBA reflecting individual stimulus representations. The present study aimed at identifying stimulus-specific GBA components during auditory spatial short-term memory. A total of 28 adults were assigned to 1 of 2 groups who were presented with only right- or left-lateralized sounds, respectively. In each group, 2 sample stimuli were used which differed in their lateralization angles (15° or 45°) with respect to the midsagittal plane. Statistical probability mapping served to identify spectral amplitude differences between 15° versus 45° stimuli. Distinct GBA components were found for each sample stimulus in different sensors over parieto-occipital cortex contralateral to the side of stimulation peaking during the middle 200–300 ms of the delay phase. The differentiation between "preferred" and "nonpreferred" stimuli during the final 100 ms of the delay phase correlated with task performance. These findings suggest that the observed GBA components reflect the activity of distinct networks tuned to spatial sound features which contribute to the maintenance of task-relevant information in short-term memory.

- The timing of feedback to early visual cortex in the perception of long-range apparent motion (2008)
- When 2 visual stimuli are presented one after another in different locations, they are often perceived as a single moving object. Feedback from the human motion complex hMT/V5+ to V1 has been hypothesized to play an important role in this illusory perception of motion. We measured event-related responses to illusory motion stimuli of varying apparent motion (AM) content and retinal location using electroencephalography. Detectable cortical stimulus processing started around 60 ms post-stimulus in area V1. This component was insensitive to AM content and sequential stimulus presentation. Sensitivity to AM content was observed starting around 90 ms after the second stimulus of a sequence and most likely originated in area hMT/V5+. This AM-sensitive response was insensitive to retinal stimulus position. The stimulus-sequence-related response became sensitive to retinal stimulus position at a longer latency of 110 ms. We interpret our findings as evidence for feedback from area hMT/V5+ or a related motion-processing area to early visual cortices (V1, V2, V3).

- Subsampling effects in neuronal avalanche distributions recorded in vivo (2009)
- Background: Many systems in nature are characterized by complex behaviour, where large cascades of events, or avalanches, unpredictably alternate with periods of little activity; snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s), and sigma=1, are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult, because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short-term memory task. Results: Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both f(s) and sigma depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusions: Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system-specific scaling of f(s) and sigma under subsampling conditions may prove useful for selecting physiologically motivated models of brain function: models that better reproduce f(s) and sigma calculated from the physiological recordings may be selected over alternatives.
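
A minimal sketch of how avalanche sizes s and the branching parameter sigma are typically read off binned event counts (following the common convention that an avalanche is a run of consecutive non-empty time bins, with sigma estimated from the ratio of events in the second bin to events in the first; the bin counts below are invented):

```python
# Extract avalanche sizes and a branching-parameter estimate from a
# sequence of per-bin event counts.
def avalanches(counts):
    """Return (sizes, sigma): avalanche sizes and mean 2nd/1st-bin ratio."""
    sizes, ratios = [], []
    run = []
    for c in counts + [0]:          # trailing 0 flushes the final run
        if c > 0:
            run.append(c)
        elif run:
            sizes.append(sum(run))  # size = total events in the run
            ratios.append((run[1] if len(run) > 1 else 0) / run[0])
            run = []
    sigma = sum(ratios) / len(ratios) if ratios else 0.0
    return sizes, sigma

# Toy binned activity: empty bins separate four avalanches
counts = [0, 2, 4, 1, 0, 0, 1, 0, 3, 3, 0, 5, 2, 1, 0]
sizes, sigma = avalanches(counts)
print(sizes)   # → [7, 1, 6, 8]
print(sigma)   # mean of [2, 0, 1, 0.4] = 0.85
```

Under subsampling, both quantities computed this way can deviate strongly from their fully sampled values, which is the point of the study above.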

- Using transfer entropy to measure the patterns of information flow through cortex: application to MEG recordings from a visual Simon task (2009)
- Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality, and most synchronization measures do not distinguish causal from non-causal interactions among remote brain areas, i.e. they do not determine effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive magneto-/electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction, which leads to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and is not influenced by volume conduction. ...

- Detection of single trial power coincidence for the identification of distributed cortical processes in a behavioral context (2009)
- Poster presentation: The analysis of neuronal processes distributed across multiple cortical areas aims at the identification of interactions between signals recorded at different sites. Such interactions can be described by measuring the stability of phase angles in the case of oscillatory signals, or other forms of signal dependencies for less regular signals. Before any form of interaction can be analyzed at a given time and frequency, however, it is necessary to assess whether all potentially contributing signals are present. We have developed a new statistical procedure for the detection of coincident power in multiple simultaneously recorded analog signals, allowing the classification of events as 'non-accidental co-activation'. This method can effectively operate on single trials, each lasting only a few seconds. Signals first need to be transformed into time-frequency space, e.g. by applying a short-time Fourier transformation using a Gaussian window. The discrete wavelet transform (DWT) is then used to weight the resulting power patterns according to their frequency. Subsequently, the weighted power patterns are binarized by applying a threshold. At the final stage, significant power coincidence is determined across all subgroups of channel combinations for individual frequencies, using the maximum ratio between observed and expected duration of co-activation as the test statistic. The null hypothesis that the activity in each channel is independent of the activity in every other channel is simulated by independent, random rotation of the respective activity patterns. We applied this procedure to single trials of multiple simultaneously sampled local field potentials (LFPs) obtained from occipital, parietal, central and precentral areas of three macaque monkeys. Since their task was to use visual cues to perform a precise arm movement, co-activation of numerous cortical sites was expected.
In a data set with 17 channels, up to 13 sites expressed simultaneous power in the range between 5 and 240 Hz. On average, more than 50% of active channels participated at least once in a significant power co-activation pattern (PCP). Because the significance of such PCPs can be evaluated at the level of single trials, we are confident that this procedure is useful for studying single-trial variability with sufficient accuracy that much of the behavioral variability can be explained by the dynamics of the underlying distributed neuronal processes.
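
The rotation-based null hypothesis can be sketched with binarized power patterns: circularly rotating each channel's pattern preserves its on/off statistics but destroys cross-channel alignment. The signals and thresholds below are invented for illustration, and a simple total co-activation duration replaces the subgroup-wise maximum-ratio statistic of the actual procedure:

```python
# Coincidence test sketch: observed co-activation duration of binarized
# channel patterns versus a null built from independent circular rotations.
import random
random.seed(3)

def coactive_duration(patterns):
    """Number of time bins in which all channels are active simultaneously."""
    return sum(all(bits) for bits in zip(*patterns))

def rotate(p, k):
    """Circular rotation, preserving each channel's activity statistics."""
    return p[k:] + p[:k]

# Three channels sharing a common supra-threshold epoch (bins 20-59)
T = 200
patterns = []
for _ in range(3):
    p = [1 if 20 <= t < 60 and random.random() < 0.9 else
         (1 if random.random() < 0.1 else 0) for t in range(T)]
    patterns.append(p)

observed = coactive_duration(patterns)

null = []
for _ in range(1000):
    rotated = [rotate(p, random.randrange(T)) for p in patterns]
    null.append(coactive_duration(rotated))

p_value = sum(1 for v in null if v >= observed) / len(null)
print(observed, p_value)  # aligned epoch -> far more coincidence than chance
```

Because the rotation null needs no trial averaging, the decision can be made on a single trial, as described above.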

- Neuronal avalanches recorded in the awake and sleeping monkey do not show a power law but can be reproduced by a self-organized critical model (2009)
- Poster presentation: Self-organized critical (SOC) systems are complex dynamical systems that may express cascades of events, called avalanches [1]. The SOC state was proposed to govern brain function because of its activity fluctuations over many orders of magnitude, its sensitivity to small input and its long-term stability [2,3]. In addition, the critical state is optimal for information storage and processing [4]. Both hallmark features of SOC systems, a power law distribution f(s) for the avalanche size s and a branching parameter (bp) of unity, were found for neuronal avalanches recorded in vitro [5]. However, recordings in vivo yielded contradictory results [6]. Electrophysiological recordings in vivo only cover a small fraction of the brain, while criticality analysis assumes that the complete system is sampled. We hypothesized that spatial subsampling might influence the observed avalanche statistics. In addition, SOC models can have different connectivity, but always show a power law for f(s) and bp = 1 when fully sampled. This may not be the case under subsampling, however. Here, we wanted to know whether a state change from awake to asleep could be modeled by changing the connectivity of a SOC model without leaving the critical state. We simulated a SOC model [1] and calculated f(s) and bp obtained from sampling only the activity of a set of 4 × 4 sites, representing the electrode positions in the cortex. We compared these results with multielectrode recordings of local field potentials (LFP) in the cortex of behaving monkeys. We calculated f(s) and bp for the LFP activity recorded while the monkey was either awake or asleep, and compared these results to those obtained from two subsampled SOC models with different connectivity. f(s) and bp were very similar for the experiments and the subsampled SOC models, but in contrast to the fully sampled model, f(s) did not show a power law and bp was smaller than unity. With increasing distance between the sampling sites, f(s) changed from an "apparently supercritical" to an "apparently subcritical" distribution in both the model and the LFP data. f(s) and bp calculated from LFP recorded during wakefulness and sleep differed. These changes could be explained by altering the connectivity in the SOC model. Our results show that subsampling can prevent the observation of the characteristic power law and bp in SOC systems, and misclassifications of critical systems as sub- or supercritical are possible. In addition, a change in f(s) and bp for different states (awake/asleep) does not necessarily imply a change from criticality to sub- or supercriticality; it can also be explained by a change in the effective connectivity of the network without leaving the critical state.
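
The subsampling effect itself can be reproduced with a generic critical branching process (not the SOC models used in the study): when only a small subset of units is observed, the branching parameter estimated from descendant/ancestor ratios drops below unity even though the underlying dynamics are critical. All parameters below are toy assumptions:

```python
# Critical branching process (mean offspring = 1) observed fully versus
# through a small subset of units, cf. a 4x4 electrode patch.
import random
random.seed(4)

N = 1000                  # units in the full system
VISIBLE = set(range(16))  # observed subset

def run_avalanche(max_steps=200):
    """One avalanche; returns the set of active units at each time step."""
    active = {random.randrange(N)}
    steps = []
    while active and len(steps) < max_steps:
        steps.append(active)
        nxt = set()
        for _ in active:
            for _ in range(random.choice([0, 1, 2])):  # 0/1/2 offspring, mean 1
                nxt.add(random.randrange(N))
        active = nxt
    steps.append(set())  # record extinction so dying transitions count
    return steps

def step_ratios(steps, visible=None):
    """Descendant/ancestor count ratios, optionally restricted to visible units."""
    ratios = []
    for a, b in zip(steps, steps[1:]):
        if visible is not None:
            a, b = a & visible, b & visible
        if a:
            ratios.append(len(b) / len(a))
    return ratios

full_ratios, sub_ratios = [], []
for _ in range(1000):
    steps = run_avalanche()
    full_ratios += step_ratios(steps)
    sub_ratios += step_ratios(steps, VISIBLE)

sigma_full = sum(full_ratios) / len(full_ratios)
sigma_sub = sum(sub_ratios) / len(sub_ratios)
print(f"sigma fully sampled: {sigma_full:.2f}, subsampled: {sigma_sub:.2f}")
```

The subsampled estimate is systematically biased downward, consistent with the "apparently subcritical" statistics reported above.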