In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
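The delay-scanning idea can be illustrated with a toy plug-in estimator on synthetic binary data: a hypothetical source Y drives X with a fixed lag, and the delay-parametrized transfer entropy I(X_t ; Y_{t−u} | X_{t−1}) peaks at the true lag. This is a minimal sketch of the principle only, not the paper's estimator (which uses delay embedding and data-efficient estimators on continuous signals); all data and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 20000, 3

# Source: random binary sequence; target copies the source with a lag
# (the coupling), otherwise follows noisy self-dynamics.
y = rng.integers(0, 2, n)
x = np.zeros(n, dtype=int)
for t in range(1, n):
    if t >= true_delay and rng.random() < 0.8:
        x[t] = y[t - true_delay]                 # delayed source-target coupling
    else:
        x[t] = x[t - 1] ^ (rng.random() < 0.1)   # own dynamics plus noise

def cond_mi(a, b, c):
    """I(A;B|C) for small discrete alphabets via plug-in entropies."""
    def H(*cols):
        joint = np.stack(cols, axis=1)
        _, counts = np.unique(joint, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()
    return H(a, c) + H(b, c) - H(a, b, c) - H(c)

def te_at_delay(x, y, u):
    """Delay-parametrized TE: I(X_t ; Y_{t-u} | X_{t-1}), keeping the
    conditioning on the target's immediately previous state."""
    return cond_mi(x[u + 1:], y[1:-u], x[u:-1])

delays = range(1, 8)
te = [te_at_delay(x, y, u) for u in delays]
best = delays[int(np.argmax(te))]   # recovered interaction delay
```

Scanning the candidate delay u and taking the argmax recovers the simulated lag; the conditioning on X_{t−1} is what prevents the target's own predictable dynamics from inflating the estimate at wrong delays.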
Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain “decide” what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: An attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although affecting our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: Hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function).
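The separation of the two effects can be sketched as a two-line Bayesian update: hysteresis biases the prior toward the previous percept, adaptation attenuates the likelihood of the previous percept. The percept space, parameter values, and function below are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

def perceive(evidence, prev_percept, hysteresis=0.2, adaptation=0.3):
    """Posterior over two percepts (0 or 1) for an ambiguous input.

    hysteresis:  shifts the prior toward the previous percept (attractive).
    adaptation:  scales down the likelihood of the previous percept (repulsive).
    Parameter values are illustrative only.
    """
    prior = np.array([0.5, 0.5])
    prior[prev_percept] += hysteresis          # attractive effect: alters the prior
    prior /= prior.sum()

    like = np.array(evidence, dtype=float)     # sensory evidence per percept
    like[prev_percept] *= (1.0 - adaptation)   # repulsive effect: changes the likelihood

    post = prior * like
    return post / post.sum()

# Fully ambiguous input: the two opposing effects compete directly.
post = perceive(evidence=[1.0, 1.0], prev_percept=0)
```

With these toy parameters the repulsive likelihood term slightly outweighs the attractive prior term, so the alternative percept is favored; setting `adaptation=0` flips the outcome, showing the two causes act through distinct terms of the same posterior.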
Mitochondria form a dynamic tubular reticulum within eukaryotic cells. Currently, a quantitative understanding of its morphological characteristics is largely absent, despite major progress in deciphering the molecular fission and fusion machineries shaping its structure. Here we address the principles of formation and the large-scale organization of the cell-wide network of mitochondria. On the basis of experimentally determined structural features, we establish tip-to-tip and tip-to-side fission and fusion events as the dominant reactions in the motility of this organelle. Subsequently, we introduce a graph-based model of the chondriome that encompasses its inherent variability in a single framework. Using both mean-field deterministic and explicit stochastic mathematical methods, we establish a relationship between the structural characteristics of the chondriome network and the underlying kinetic rate parameters. The computational analysis indicates that mitochondrial networks exhibit a percolation threshold. Intrinsic morphological instability of the mitochondrial reticulum, resulting from its vicinity to the percolation transition, is proposed as a novel mechanism that cells can utilize to optimize their functional competence via dynamic remodeling of the chondriome. The detailed size distribution of the network components predicted by the dynamic graph representation introduces a relationship between chondriome characteristics and cell function. It forms a basis for understanding the architecture of mitochondria as a cell-wide but inhomogeneous organelle. Analysis of the reticulum's adaptive configuration directly clarifies its impact on the numerous physiological processes that depend strongly on mitochondrial dynamics and organization, such as the efficiency of cellular metabolism, tissue differentiation, and aging.
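The percolation threshold can be seen in a minimal random-graph caricature: treat fusion events as randomly added links between mitochondrial segments and watch the largest connected cluster as the effective fusion rate (mean degree) crosses 1. This is a generic random-graph sketch, not the paper's kinetically detailed chondriome model; all sizes and rates are invented.

```python
import numpy as np

def giant_fraction(n, mean_degree, rng):
    """Fraction of segments in the largest cluster of a random graph
    built by union-find; fusion events are stand-ins for tip-to-tip links."""
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    m = int(mean_degree * n / 2)            # number of fusion links
    for a, b in rng.integers(0, n, size=(m, 2)):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                 # fuse the two clusters

    roots = np.array([find(i) for i in range(n)])
    _, sizes = np.unique(roots, return_counts=True)
    return sizes.max() / n

rng = np.random.default_rng(1)
n = 4000
below = giant_fraction(n, 0.5, rng)   # below the percolation threshold (c = 1)
above = giant_fraction(n, 2.0, rng)   # above it: a cell-spanning cluster appears
```

Below threshold the network stays fragmented into many small components; just above it, a single cluster spans a large fraction of the system, which is why small shifts in fusion/fission balance near the transition can remodel the whole reticulum.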
Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update of the status of this hypothesis through summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and non-zero phase lag synchronization. In the second part, we shall discuss the role of neural synchrony for expectancy during perceptual organization and its role in conscious experience. This will be followed by evidence that indicates that in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders, such as schizophrenia and autism spectrum disorders. We conclude this paper with suggestions for further research as well as with critical issues that need to be addressed in future studies.
Mitochondrial dynamics and mitophagy play a key role in ensuring mitochondrial quality control. Impairment thereof was proposed to be causative to neurodegenerative diseases, diabetes, and cancer. Accumulation of mitochondrial dysfunction was further linked to aging. Here we applied a probabilistic modeling approach integrating our current knowledge on mitochondrial biology, allowing us to simulate mitochondrial function and quality control during aging in silico. We demonstrate that cycles of fusion and fission and mitophagy indeed are essential for ensuring a high average quality of mitochondria, even under conditions in which random molecular damage is present. Prompted by earlier observations that mitochondrial fission itself can cause a partial drop in mitochondrial membrane potential, we tested the consequences of mitochondrial dynamics being harmful on its own. Next to directly impairing mitochondrial function, pre-existing molecular damage may be propagated and enhanced across the mitochondrial population by content mixing. In this situation, such an infection-like phenomenon impairs mitochondrial quality control progressively. However, when imposing an age-dependent deceleration of cycles of fusion and fission, we observe a delay in the loss of average quality of mitochondria. This provides a rationale for why fusion and fission rates are reduced during aging and why loss of a mitochondrial fission factor can extend life span in fungi. We propose the ‘mitochondrial infectious damage adaptation’ (MIDA) model according to which a deceleration of fusion–fission cycles reflects a systemic adaptation increasing life span.
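The core quality-control loop can be caricatured in a few lines: random damage lowers unit quality, fusion–fission cycles mix content across pairs, and mitophagy culls the worst units, which biogenesis replaces. This toy model and all its parameters are illustrative assumptions, not the published probabilistic model; it only shows why selective removal keeps average quality high under ongoing damage.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(steps=2000, n=200, mitophagy=True):
    """Toy mitochondrial population with quality in (0, 1].

    Each step: one unit takes random molecular damage, one random pair
    undergoes a fusion-fission cycle (content mixing averages quality),
    and, if enabled, mitophagy occasionally removes the worst unit,
    which is replaced by biogenesis at full quality.
    """
    q = np.ones(n)
    for _ in range(steps):
        q[rng.integers(n)] *= 0.98             # random molecular damage
        i, j = rng.integers(n, size=2)
        q[i] = q[j] = 0.5 * (q[i] + q[j])      # fusion + fission: content mixing
        if mitophagy and rng.random() < 0.2:
            q[np.argmin(q)] = 1.0              # selective removal + biogenesis
    return q.mean()

with_qc = simulate(mitophagy=True)
without_qc = simulate(mitophagy=False)
```

With mitophagy switched off, content mixing merely spreads the accumulating damage across the population and mean quality drifts downward; with it on, the cull-and-replace cycle holds the average near its healthy value.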
Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impacts the post-synaptic firing of a conductance-based integrate-and-fire neuron. Both the excitatory and inhibitory inputs were modeled by renewal gamma processes with varying shape factors, covering both regular and temporally random (Poisson) activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of the temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard-bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing and spike-timing-dependent plasticity to the auto-structure of a neuron's input could be used to modulate the learning rate of synaptic modification.
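Gamma renewal inputs of the kind described can be generated directly from gamma-distributed inter-spike intervals: the shape factor sets the regularity (shape 1 recovers Poisson; larger shapes give more regular trains at the same mean rate, with ISI coefficient of variation 1/√shape). The rates and durations below are illustrative, and the integrate-and-fire neuron itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_train(rate_hz, shape, duration_s, rng):
    """Renewal spike train with gamma-distributed inter-spike intervals.
    The scale is chosen so the mean ISI equals 1/rate regardless of shape."""
    n_draw = int(rate_hz * duration_s * 2)       # oversample, then truncate
    isis = rng.gamma(shape, 1.0 / (shape * rate_hz), size=n_draw)
    t = np.cumsum(isis)
    return t[t < duration_s]

poisson = gamma_train(20.0, 1.0, 100.0, rng)     # shape 1: Poisson-like
regular = gamma_train(20.0, 8.0, 100.0, rng)     # shape 8: more regular

def cv(spikes):
    """Coefficient of variation of the inter-spike intervals."""
    isis = np.diff(spikes)
    return isis.std() / isis.mean()
```

Both trains have the same mean rate, so feeding them to a post-synaptic model isolates the effect of input auto-structure (CV) from that of rate, which is the manipulation the study relies on.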
Background: Transfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information-theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions. Hence, it offers the power to analyze linear and nonlinear interactions alike. This allows, for example, the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL that allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametric statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans), where a neuronal one-way connection is likely present.
Results: In simulated data TE detected information flow in the simulated direction reliably with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false positive interactions in the reverse directions were detected.
Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information-theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open source license (GPL v3). For use with neural data, TRENTOOL seamlessly integrates with the popular FieldTrip toolbox.
In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left or rightward) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.
Visual selective attention and visual working memory (WM) share the same capacity-limited resources. We investigated whether and how participants can cope with a task in which these 2 mechanisms interfere. The task required participants to scan an array of 9 objects in order to select the target locations and to encode the items presented at these locations into WM (1 to 5 shapes). Determination of the target locations required either few attentional resources (“pop-out condition”) or an attention-demanding serial search (“non-pop-out condition”). Participants were able to achieve high memory performance in all stimulation conditions but, in the non-pop-out conditions, this came at the cost of additional processing time. Both empirical evidence and subjective reports suggest that participants invested the additional time in memorizing the locations of all target objects prior to the encoding of their shapes into WM. Thus, they seemed to be unable to interleave the steps of search with those of encoding. We propose that the memory for target locations substitutes for perceptual pop-out and thus may be the key component that allows for flexible coping with the common processing limitations of visual WM and attention. The findings have implications for understanding how we cope with real-life situations in which the demands on visual attention and WM occur simultaneously.
Keywords: attention, working memory, interference, encoding strategies
TRENTOOL : an open source toolbox to estimate neural directed interactions with transfer entropy
(2011)
To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement of the prediction of the future of a time series X from its own past by the incorporation of information from the past of a second time series Y is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique.
TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open-source MATLAB toolbox based on the FieldTrip data format. ...
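For reference, Schreiber's transfer entropy, the quantity TRENTOOL estimates, can be stated compactly. With $x_t^{(k)}$ and $y_t^{(l)}$ denoting delay-embedded past states of the target and source:

```latex
TE_{Y \to X} \;=\; \sum_{x_{t+1},\, x_t^{(k)},\, y_t^{(l)}}
  p\bigl(x_{t+1},\, x_t^{(k)},\, y_t^{(l)}\bigr)\,
  \log \frac{p\bigl(x_{t+1} \,\big|\, x_t^{(k)},\, y_t^{(l)}\bigr)}
            {p\bigl(x_{t+1} \,\big|\, x_t^{(k)}\bigr)}
```

It vanishes exactly when the source's past adds no predictive information about the target's next state beyond the target's own past, i.e., it operationalizes Wiener's criterion without assuming a parametric model of the interaction.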
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
Background: In this interdisciplinary project, the biological effects of heavy ions are compared to those of X-rays using tissue slice culture preparations from rodents and humans. Advantages of this biological model are the conservation of an organotypic environment and the independence from the genetic immortalization strategies used to generate cell lines. Its open access allows easy treatment and observation via live-imaging microscopy. Materials and methods: Rat brains and human brain tumor tissue are cut into 300 µm thick tissue slices. These slices are cultivated using a membrane-based culture system and kept in an incubator at 37°C until treatment. The slices are treated with X-rays at the radiation facility of the University Hospital in Frankfurt at doses of up to 40 Gy. The heavy-ion irradiations were performed at the UNILAC facility at GSI with different ions of 11.4 A MeV and fluences ranging from 0.5–10 × 10⁶ particles/cm². Using 3D confocal microscopy, cell death and immune cell activation in the irradiated slices are analyzed. Planning of the irradiation experiments is done with simulation programs developed at GSI and FIAS. Results: After receiving a single application of either X-rays or heavy ions, slices were kept in culture for up to 9 days post-irradiation. DNA damage was visualized using γH2AX staining. Here, a dose-dependent increase and a time-dependent decrease could clearly be observed for the X-ray irradiation. Slices irradiated with heavy ions showed fewer γH2AX-positive cells distributed evenly throughout the slice, even though particles were calculated to penetrate only 90–100 µm into the slice. Conclusions: Single irradiations of brain tissue, even at high doses of 40 Gy, result neither in macroscopically visible tissue damage nor in necrosis. This is in line with the view that the brain is highly radio-resistant. However, DNA damage can be detected very well in tissue slices using γH2AX immunostaining.
Thus, slice cultures are an excellent tool to study radiation-induced damage and repair mechanisms in living tissues.
Poster presentation: Introduction Adequate anesthesia is crucial to the success of surgical interventions and subsequent recovery. Neuroscientists, surgeons, and engineers have sought to understand the impact of anesthetics on information processing in the brain and to properly assess the level of anesthesia in a non-invasive manner. Studies have indicated a more reliable depth of anesthesia (DOA) detection if multiple parameters are employed. Indeed, commercial DOA monitors (BIS, Narcotrend, M-Entropy and A-line ARX) use more than one feature extraction method. Here, we propose TESPAR (Time Encoded Signal Processing And Recognition), a time-domain signal processing technique novel to EEG DOA assessment that could enhance existing monitoring devices. ...
Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality and most synchronization measures do not distinguish causal and non-causal interactions among remote brain areas, i.e. determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive Magneto-/Electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction leading to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and not influenced by volume conduction. ...
Poster presentation: Introduction Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regular firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (like pharmacological treatment or genetic manipulations) on these neuronal discharge patterns may be subtle. Applying a stochastic model is a quantitative approach to reveal these changes. ...
Poster presentation: Introduction The brain is a highly interconnected network of constantly interacting units. Understanding the collective behavior of these units requires a multi-dimensional approach. The results of such analyses are hard to visualize and interpret. Hence tools capable of dealing with such tasks become imperative. ...
The timing of feedback to early visual cortex in the perception of long-range apparent motion
(2008)
When 2 visual stimuli are presented one after another in different locations, they are often perceived as a single moving object. Feedback from the human motion complex hMT/V5+ to V1 has been hypothesized to play an important role in this illusory perception of motion. We measured event-related responses to illusory motion stimuli of varying apparent motion (AM) content and retinal location using electroencephalography. Detectable cortical stimulus processing started around 60 ms post-stimulus in area V1. This component was insensitive to AM content and sequential stimulus presentation. Sensitivity to AM content was observed starting around 90 ms after the second stimulus of a sequence and most likely originated in area hMT/V5+. This AM-sensitive response was insensitive to retinal stimulus position. The stimulus-sequence-related response started to be sensitive to retinal stimulus position at a longer latency of 110 ms. We interpret our findings as evidence for feedback from area hMT/V5+ or a related motion-processing area to early visual cortices (V1, V2, V3).
The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.