Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. One of the central questions in neuroscience is how neural activity is organized across different spatial and temporal scales. As larger populations oscillate and synchronize at lower frequencies and smaller ensembles are active at higher frequencies, cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. Although various experiments have revealed amplitude-to-amplitude and phase-to-phase coupling, the most common and most celebrated result is that the phase of the lower-frequency component modulates the amplitude of the higher-frequency component. Over the past five years, the number of experimental studies reporting such phase-amplitude coupling in LFP, ECoG, EEG and MEG has grown tremendously (summarized in [1]). We suggest that although the mechanism of cross-frequency coupling (CFC) is theoretically very tempting, current analysis methods might overestimate any physiological CFC actually present in LFP, ECoG, EEG and MEG signals. In particular, we point out three conceptual problems in assessing the components of a time series and their correlations. Although we focus on phase-amplitude coupling, most of our argument is relevant for any type of coupling. 1) The first conceptual problem is related to isolating physiological frequency components of the recorded signal. The key point is that there are many different mathematical representations of a time series, but the physical interpretation we draw from them depends on the choice of the components to be analyzed. In particular, when one isolates the components by Fourier-representation-based filtering, it is the width of the filtering bands that defines what we consider our components and how their power or group phase changes in time.
We will discuss clear-cut examples where the interpretation of the existence of CFC depends on the width of the filtering bands. 2) A second problem concerns the origin of spectral correlations as detected by current cross-frequency analysis. It is known that non-stationarities are associated with spectral correlations in Fourier space. Therefore, there are two possible interpretations of any observed CFC. One scenario is that basic neuronal mechanisms indeed generate an interaction across different time scales (or frequencies), resulting in processes with non-stationary features. The other, problematic possibility is that unspecific non-stationarities are also associated with spectral correlations, which cross-frequency measures will then detect even though there is no physiological causal interaction between the frequencies. 3) We discuss the role of non-linearities as generators of cross-frequency interactions. As an example, we performed a phase-amplitude coupling analysis of two nonlinearly related signals: atmospheric noise and its square (Figure 1), observing an enhancement of phase-amplitude coupling in the second signal while no pattern is observed in the first. Finally, we discuss some minimal conditions that need to be tested to resolve some of the ambiguities noted here. In summary, we simply want to point out that finding a significant cross-frequency pattern does not necessarily imply that there is indeed a physiological cross-frequency interaction in the brain.
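The third point can be reproduced numerically. The sketch below is our illustration, not the authors' analysis: it uses FFT-based filtering, the Tort modulation index as one common phase-amplitude coupling measure, and synthetic low-frequency-rich noise as a stand-in for atmospheric noise. Squaring the signal produces a clear phase-amplitude coupling pattern that is absent in the original.

```python
import numpy as np

def bandpass(sig, lo, hi, fs):
    """Zero-phase band-pass by zeroing the FFT outside [lo, hi] Hz."""
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    spec = np.fft.rfft(sig)
    spec[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(spec, len(sig))

def analytic(sig):
    """Analytic signal (Hilbert transform) via the FFT."""
    spec = np.fft.fft(sig)
    h = np.zeros(len(sig))
    h[0] = h[len(sig) // 2] = 1.0
    h[1:len(sig) // 2] = 2.0
    return np.fft.ifft(spec * h)

def tort_mi(sig, fs, lo=(2, 8), hi=(60, 120), nbins=18):
    """Tort modulation index: KL divergence of the phase-binned
    high-band amplitude distribution from uniform, over log(nbins)."""
    phase = np.angle(analytic(bandpass(sig, *lo, fs)))
    amp = np.abs(analytic(bandpass(sig, *hi, fs)))
    edges = np.linspace(-np.pi, np.pi, nbins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, nbins - 1)
    m = np.array([amp[idx == b].mean() for b in range(nbins)])
    p = m / m.sum()
    return (np.log(nbins) + (p * np.log(p)).sum()) / np.log(nbins)

rng = np.random.default_rng(0)
fs, n = 1000, 2**17                       # ~131 s at 1 kHz
w = rng.standard_normal(n)                # broadband component
slow = bandpass(rng.standard_normal(n), 0.5, 8, fs)
x = 2.0 * slow / slow.std() + w           # noise with strong slow content
y = x**2                                  # nonlinear (squared) version

mi_x, mi_y = tort_mi(x, fs), tort_mi(y, fs)
print(f"MI(x) = {mi_x:.4f}, MI(x^2) = {mi_y:.4f}")
```

Although no causal cross-frequency interaction was built in, the squared signal shows markedly higher phase-amplitude coupling, simply because the squaring nonlinearity couples the bands.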
The disruption of coupling between brain areas has been suggested as the mechanism underlying loss of consciousness in anesthesia. This hypothesis has been tested previously by measuring the information transfer between brain areas, and by taking reduced information transfer as a proxy for decoupling. Yet, information transfer is a function of the amount of information available in the information source—such that transfer decreases even for unchanged coupling when less source information is available. Therefore, we reconsidered past interpretations of reduced information transfer as a sign of decoupling, and asked whether impaired local information processing leads to a loss of information transfer. An important prediction of this alternative hypothesis is that changes in locally available information (signal entropy) should be at least as pronounced as changes in information transfer. We tested this prediction by recording local field potentials in two ferrets after administration of isoflurane in concentrations of 0.0%, 0.5%, and 1.0%. We found strong decreases in the source entropy under isoflurane in area V1 and the prefrontal cortex (PFC)—as predicted by our alternative hypothesis. The decrease in source entropy was stronger in PFC compared to V1. Information transfer between V1 and PFC was reduced bidirectionally, but with a stronger decrease from PFC to V1. This links the stronger decrease in information transfer to the stronger decrease in source entropy—suggesting reduced source entropy reduces information transfer. This conclusion fits the observation that the synaptic targets of isoflurane are located in local cortical circuits rather than on the synapses formed by interareal axonal projections. Thus, changes in information transfer under isoflurane seem to be a consequence of changes in local processing more than of decoupling between brain areas. 
We suggest that source entropy changes must be considered whenever interpreting changes in information transfer as decoupling.
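The core argument can be illustrated with a toy model (our illustration, not the study's analysis): a source X drives Y through a fixed coupling, yet a plug-in transfer entropy estimate drops when the source entropy is reduced, with the coupling rule unchanged.

```python
import numpy as np
from collections import Counter

def entropy(seq):
    """Plug-in Shannon entropy (bits)."""
    n = len(seq)
    return -sum(c / n * np.log2(c / n) for c in Counter(seq).values())

def transfer_entropy(x, y):
    """Plug-in TE(X -> Y) with history length 1: I(Y_t ; X_{t-1} | Y_{t-1})."""
    trip = list(zip(y[1:], x[:-1], y[:-1]))
    n = len(trip)
    c_abc = Counter(trip)
    c_bc = Counter((b, c) for _, b, c in trip)
    c_ac = Counter((a, c) for a, _, c in trip)
    c_c = Counter(c for _, _, c in trip)
    return sum(k / n * np.log2(k * c_c[c] / (c_bc[(b, c)] * c_ac[(a, c)]))
               for (a, b, c), k in c_abc.items())

rng = np.random.default_rng(1)

def run(p, n=100_000):
    """Bernoulli(p) source; Y copies X with 10% bit flips (fixed coupling)."""
    x = (rng.random(n) < p).astype(int)
    y = np.empty(n, dtype=int)
    y[0] = 0
    flip = (rng.random(n - 1) < 0.1).astype(int)
    y[1:] = x[:-1] ^ flip
    return entropy(x), transfer_entropy(x, y)

h_full, te_full = run(0.5)   # maximum source entropy
h_low,  te_low  = run(0.9)   # reduced source entropy, identical coupling
print(f"H(X): {h_full:.3f} -> {h_low:.3f} bits; "
      f"TE: {te_full:.3f} -> {te_low:.3f} bits")
```

Transfer entropy falls roughly in step with the source entropy, even though the coupling mechanism never changed, which is exactly the confound the abstract warns about.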
Operating in a reverberating regime enables rapid tuning of network states to task requirements
(2018)
Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch, or even combine the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation," presents a central organization principle of cortical networks and discuss first experimental evidence.
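The interpolation between asynchronous-irregular and critical dynamics can be sketched with a driven branching process (a toy model with illustrative parameters, not the paper's in-vivo estimation): nudging the branching parameter m toward 1 lengthens the intrinsic integration time τ = −1/ln(m).

```python
import numpy as np

def simulate(m, h, T, rng):
    """Driven branching process: A[t+1] ~ Poisson(m * A[t] + h)."""
    a = np.empty(T)
    a[0] = h / (1 - m)                        # start at the stationary mean
    for t in range(T - 1):
        a[t + 1] = rng.poisson(m * a[t] + h)
    return a

rng = np.random.default_rng(2)
taus = {}
for m in (0.9, 0.99):                         # farther from vs. closer to critical
    a = simulate(m, h=10.0, T=200_000, rng=rng)
    m_hat = np.polyfit(a[:-1], a[1:], 1)[0]   # regression slope recovers m
    taus[m] = -1.0 / np.log(m_hat)            # integration time (time steps)
print(taus)
```

A small change in the effective branching parameter (e.g. via synaptic strength or excitation-inhibition ratio) changes the integration time by an order of magnitude, which is the adaptability argument made above.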
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, until very recently, formal information-theoretic definitions were available only for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. Here we review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise that PID and the analysis of information modification hold for the understanding of neural systems.
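Why synergy is a natural basis for information modification can be seen in a minimal example (ours, not the paper's analysis): for a binary XOR, each input alone carries zero information about the output, while both together carry one bit. The co-information computed below is a lower bound on the PID synergy term and equals it here because redundancy is zero for XOR.

```python
import numpy as np
from itertools import product

def mutual_info(pxy):
    """Mutual information (bits) from a joint probability table p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

# XOR gate with uniform binary inputs: joint table p(x1, x2, y)
p = np.zeros((2, 2, 2))
for x1, x2 in product((0, 1), repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25

i1 = mutual_info(p.sum(axis=1))       # I(X1; Y): input 1 alone says nothing
i2 = mutual_info(p.sum(axis=0))       # I(X2; Y): likewise zero
i12 = mutual_info(p.reshape(4, 2))    # I(X1, X2; Y): one full bit
synergy = i12 - i1 - i2               # information carried only jointly
print(f"I(X1;Y)={i1:.3f}, I(X2;Y)={i2:.3f}, I(X1,X2;Y)={i12:.3f}, "
      f"synergy={synergy:.3f} bits")
```

The entire output bit is created only by combining both inputs, which is the intuition behind defining information modification via the synergistic PID term.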
When studying real-world complex networks, one rarely has full access to all their components. As an example, the human central nervous system consists of about 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables that characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic of SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all other units.
Under subsampling, the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated.
Since under subsampling the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when changing the number of and the distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even when brain activity is far from fully sampled.
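A minimal toy illustration of the subsampling problem (a Galton-Watson caricature of an avalanching system, not one of the three models studied): when each activation is observed only with probability q, small avalanches escape detection entirely and observed sizes shrink drastically.

```python
import numpy as np

def avalanche_sizes(m, n_avalanches, rng, cap=10_000):
    """Galton-Watson avalanches: each active unit triggers
    Poisson(m) offspring; size = total number of activations."""
    sizes = np.empty(n_avalanches, dtype=int)
    for i in range(n_avalanches):
        active, size = 1, 1
        while active and size < cap:
            active = rng.poisson(m * active)
            size += active
        sizes[i] = size
    return sizes

rng = np.random.default_rng(3)
full = avalanche_sizes(m=0.95, n_avalanches=20_000, rng=rng)

q = 0.05                                  # fraction of activity sampled
seen = rng.binomial(full, q)              # each activation seen with prob q
observed = seen[seen > 0]                 # fully missed avalanches vanish

print(f"full: mean size {full.mean():.1f} over {len(full)} avalanches")
print(f"subsampled: mean size {observed.mean():.1f} over {len(observed)}")
```

Both the number of detected avalanches and their mean size are strongly biased downward, so naive comparison of subsampled data to a fully sampled model would reject even the correct model.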
Neuronal dynamics differs between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, termed self-organized criticality (SOC), has been proposed to govern human brain dynamics because of its optimal information coding and processing capabilities. Here we address two open questions: First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; the interaction between brain areas, however, may be organized differently. Here we asked whether the interaction between brain areas is SOC. ...
Inspiration for artificial biologically inspired computing is often drawn from neural systems. This article shows how to analyze neural systems using information theory with the aim of obtaining constraints that help to identify the algorithms run by neural systems and the information they represent. Algorithms and representations identified this way may then guide the design of biologically inspired computing systems. The material covered includes the necessary introduction to information theory and to the estimation of information-theoretic quantities from neural recordings. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is partitioned into component processes of information storage, transfer, and modification – locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
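As a small taste of the storage component of local information dynamics, the sketch below (our illustrative plug-in estimator with history length k, not the estimators discussed in the article) computes active information storage I(X_t; X_{t-k..t-1}): near zero for a memoryless coin flip, one full bit for a perfectly alternating sequence.

```python
import numpy as np
from collections import Counter

def active_info_storage(x, k=1):
    """Plug-in active information storage I(X_t ; X_{t-k..t-1}) in bits."""
    pairs = [(x[t], tuple(x[t - k:t])) for t in range(k, len(x))]
    n = len(pairs)
    c_joint = Counter(pairs)
    c_now = Counter(a for a, _ in pairs)
    c_past = Counter(b for _, b in pairs)
    return sum(c / n * np.log2(c * n / (c_now[a] * c_past[b]))
               for (a, b), c in c_joint.items())

rng = np.random.default_rng(4)
coin = list(rng.integers(0, 2, 100_000))   # memoryless: storage ~ 0
alternating = [0, 1] * 50_000              # fully predictable: storage ~ 1 bit

ais_coin = active_info_storage(coin)
ais_alt = active_info_storage(alternating)
print(f"AIS coin: {ais_coin:.4f} bits, AIS alternating: {ais_alt:.4f} bits")
```

The same quantity can also be evaluated locally, per sample, which is the "local in space and time" perspective the framework emphasizes.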
Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality, and most synchronization measures do not distinguish causal from non-causal interactions among remote brain areas, i.e., they do not determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive magneto-/electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction, which leads to correlated sensor signals even in the absence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among interacting systems and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and not influenced by volume conduction. ...
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data.
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
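The ensemble idea can be sketched with a linear-Gaussian toy example (our simplification; the estimator in the paper is nearest-neighbor based, not regression based): pooling realizations across trials at each time point yields a time-resolved transfer entropy that tracks a coupling switched on mid-trial, with no stationarity assumption over time.

```python
import numpy as np

rng = np.random.default_rng(5)
R, T, t_on = 2000, 60, 30                 # trials, samples per trial, onset

# non-stationary system: X drives Y only from t_on onwards
x = rng.standard_normal((R, T))
y = np.zeros((R, T))
for t in range(1, T):
    c = 1.0 if t >= t_on else 0.0
    y[:, t] = 0.5 * y[:, t - 1] + c * x[:, t - 1] + 0.5 * rng.standard_normal(R)

def resid_var(target, predictors):
    """Residual variance of a least-squares fit (with intercept)."""
    X = np.column_stack(predictors + (np.ones(len(target)),))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Gaussian TE at each time point, estimated ACROSS trials, not over time
te = np.array([
    0.5 * np.log(resid_var(y[:, t], (y[:, t - 1],))
                 / resid_var(y[:, t], (y[:, t - 1], x[:, t - 1])))
    for t in range(1, T)
])
print(f"TE before onset: {te[:t_on - 1].mean():.3f} nats, "
      f"after: {te[t_on:].mean():.3f} nats")
```

Because every probability is estimated from the trial ensemble at a fixed time point, the estimate remains valid even though the process statistics change within the trial.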
The neurophysiological changes associated with Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) include an increase in low frequency activity, as measured with electroencephalography or magnetoencephalography (MEG). A relevant property of spectral measures is the alpha peak, which corresponds to the dominant alpha rhythm. Here we studied the spatial distribution of MEG resting state alpha peak frequency and amplitude values in a sample of 27 MCI patients and 24 age-matched healthy controls. Power spectra were reconstructed in source space with a linearly constrained minimum variance beamformer. Then, 88 Regions of Interest (ROIs) were defined and an alpha peak per ROI and subject was identified. Statistical analyses were performed at every ROI, accounting for age, sex and educational level. Peak frequency was significantly decreased (p < 0.05) in MCIs in many posterior ROIs. The average peak frequency over all ROIs was 9.68 ± 0.71 Hz for controls and 9.05 ± 0.90 Hz for MCIs and the average normalized amplitude was (2.57 ± 0.59)·10⁻² for controls and (2.70 ± 0.49)·10⁻² for MCIs. Age and gender were also found to play a role in the alpha peak, since its frequency was higher in females than in males in posterior ROIs and correlated negatively with age in frontal ROIs. Furthermore, we examined the dependence of peak parameters on hippocampal volume, which is a commonly used marker of early structural AD-related damage. Peak frequency was positively correlated with hippocampal volume in many posterior ROIs. Overall, these findings indicate a pathological alpha slowing in MCI.
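Alpha-peak extraction from a power spectrum can be sketched as follows (our minimal stand-in: a synthetic 9 Hz rhythm in noise and an averaged-periodogram spectrum, not the beamformer source reconstruction used in the study):

```python
import numpy as np

rng = np.random.default_rng(6)
fs, dur = 250, 120                              # sampling rate (Hz), seconds
t = np.arange(int(fs * dur)) / fs
sig = np.sin(2 * np.pi * 9.0 * t) + 2.0 * rng.standard_normal(len(t))

def welch(sig, fs, nperseg=1024):
    """Averaged periodogram (Hann window, 50% overlap)."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [sig[i:i + nperseg] * win
            for i in range(0, len(sig) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s))**2 for s in segs], axis=0)
    return np.fft.rfftfreq(nperseg, 1 / fs), psd

f, psd = welch(sig, fs)
band = (f >= 7) & (f <= 13)                     # alpha search band
peak_freq = f[band][np.argmax(psd[band])]       # alpha peak frequency (Hz)
peak_amp = psd[band].max() / psd.sum()          # peak amplitude, normalized
print(f"alpha peak: {peak_freq:.2f} Hz, normalized amplitude {peak_amp:.3f}")
```

In the study this per-spectrum step would be repeated per ROI and subject, with the resulting peak frequencies entering the group statistics.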