Conference Proceedings (3) · 2013 · Frankfurt Institute for Advanced Studies (FIAS) · English · full text available
Network or graph theory has become a popular tool to represent and analyze large-scale interaction patterns in the brain. To derive a functional network representation from experimentally recorded neural time series, one has to identify the structure of the interactions between these time series. In neuroscience, this is often done by pairwise bivariate analysis, because a fully multivariate treatment is typically not possible due to limited data and excessive computational cost. Furthermore, a true multivariate analysis would consist of the analysis of the combined effects, including information-theoretic synergies and redundancies, of all possible subsets of network components. Since the number of these subsets equals the size of the power set of the network components, i.e. 2^N for N components, this leads to a combinatorial explosion: the problem becomes computationally intractable. In contrast, a pairwise bivariate analysis of interactions is typically feasible but introduces the possibility of falsely detecting spurious interactions between network components, especially due to cascade and common-drive effects. Such spurious connections in a network representation may bias subsequently computed graph-theoretical measures (e.g. clustering coefficient or centrality), as these measures depend on the reliability of the graph representation from which they are computed. Strictly speaking, graph-theoretical measures are meaningful only if the underlying graph structure can be guaranteed to consist of one type of connection only, i.e. if connections in the graph are guaranteed to be non-spurious. ...
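The combinatorial argument can be made concrete with a short sketch comparing the number of pairwise analyses with the size of the power set; the network sizes used here are purely illustrative, not taken from the study:

```python
from math import comb

def interaction_subset_counts(n):
    """For a network of n components, compare the number of pairwise
    (bivariate) analyses with the number of all possible subsets
    (the power set), whose size 2**n grows exponentially and makes a
    fully multivariate analysis computationally intractable."""
    pairwise = comb(n, 2)   # number of component pairs
    multivariate = 2 ** n   # number of subsets in the power set
    return pairwise, multivariate

# Illustrative network sizes (hypothetical, not from the study):
for n in (10, 50, 100):
    pairs, subsets = interaction_subset_counts(n)
    print(f"n={n}: {pairs} pairs vs 2^{n} = {subsets} subsets")
```

Even at n = 100, the pairwise count stays in the thousands while the number of subsets exceeds 10^30, which is why the abstract calls the fully multivariate treatment intractable.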
When studying real-world complex networks, one rarely has full access to all their components. As an example, the human central nervous system consists of about 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Observations are thus hampered by immense subsampling. While subsampling does not affect observables of single-neuron activity, it can heavily distort observables that characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference about neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal patterns of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity-propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity-propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches, which are characteristic of both SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all other units.
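To make the subsampling procedure concrete, here is a minimal toy sketch. It uses a generic critical branching model with randomly chosen targets, not any of the three models from the study, and all parameter values are illustrative. Subsampling is mimicked exactly as described above: only activations of a small observed subset of units are counted, and all other activations are discarded.

```python
import random

def avalanche(n_units, p_branch, observed, rng):
    """Simulate one avalanche in a toy branching model of n_units:
    each active unit activates each of two random target units with
    probability p_branch (p_branch = 0.5 gives a critical branching
    ratio of 1). Returns the full avalanche size and the size seen
    when only the `observed` units are sampled."""
    active = [rng.randrange(n_units)]          # seed one active unit
    size_full = size_sub = 0
    while active and size_full < 10 * n_units:  # cap runaway cascades
        unit = active.pop()
        size_full += 1
        if unit in observed:                   # subsampling: count only
            size_sub += 1                      # observed activations
        for _ in range(2):                     # two potential targets
            if rng.random() < p_branch:
                active.append(rng.randrange(n_units))
    return size_full, size_sub

rng = random.Random(1)
observed = set(range(16))                      # sample 16 of 1024 units
sizes = [avalanche(1024, 0.5, observed, rng) for _ in range(2000)]
full_mean = sum(f for f, s in sizes) / len(sizes)
sub_mean = sum(s for f, s in sizes) / len(sizes)
print(f"mean avalanche size: full={full_mean:.1f}, subsampled={sub_mean:.2f}")
```

Because most activations fall on unobserved units, the subsampled avalanche sizes are systematically smaller than the true ones, illustrating how subsampling distorts the avalanche measures A discussed below.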
Under subsampling, the avalanche measures A depended on three main factors. First, A depended on the interaction rules of the model and on its topology, so each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n: with small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites: with small d, A was overestimated, while with large d, A was underestimated.
Since, under subsampling, the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when the number of, and distance between, electrodes in neural tissue and sampled units in a model are changed analogously, the observables of a correct model should behave the same as those of the neural tissue. Incorrect models can thereby easily be discarded. Systematic subsampling thus offers a promising and unique approach to model selection, even when brain activity is far from fully sampled.
Neuronal dynamics differs between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, termed self-organized critical (SOC), has been proposed to govern human brain dynamics because of its optimal information-coding and information-processing capabilities. Here we address two open questions. First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; the interaction between brain areas, however, may be organized differently. We therefore asked whether the interaction between brain areas is SOC. ...