In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a ‘goal function’, of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. ‘edge filtering’, ‘working memory’). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon’s mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID quantifies the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and it also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called ‘coding with synergy’, which combines external input and prior knowledge in a synergistic manner.
We suggest that this novel goal function may be highly useful in neural information processing.
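The PID quantities named above (unique, shared, and synergistic information) can be made concrete with a toy computation. The sketch below is our own illustration, not code from the paper: it implements the original Williams–Beer measure, which defines shared information via the minimum specific information I_min. For an XOR target the decomposition yields pure synergy; for a target that copies one input, pure unique information.

```python
from collections import Counter
from math import log2

def _mi(xs, ys):
    """Mutual information I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def pid(samples):
    """Two-source PID of I(X1,X2;Y) using Williams-Beer I_min
    redundancy; samples is a list of (x1, x2, y) tuples."""
    n = len(samples)
    x1s, x2s, ys = (list(t) for t in zip(*samples))
    py = Counter(ys)

    def specific_info(xs):
        # I(Y=y; X) = sum_x p(x|y) * log2( p(y|x) / p(y) )
        px, pxy = Counter(xs), Counter(zip(xs, ys))
        return {y: sum((pxy[(x, y)] / py[y])
                       * log2((pxy[(x, y)] / px[x]) / (py[y] / n))
                       for x in px if pxy[(x, y)] > 0)
                for y in py}

    s1, s2 = specific_info(x1s), specific_info(x2s)
    shared = sum((py[y] / n) * min(s1[y], s2[y]) for y in py)
    i1, i2 = _mi(x1s, ys), _mi(x2s, ys)
    i12 = _mi(list(zip(x1s, x2s)), ys)
    return {"shared": shared, "unique1": i1 - shared,
            "unique2": i2 - shared,
            "synergy": i12 - i1 - i2 + shared}

# XOR: neither input alone is informative, together they determine Y
print(pid([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]))
```

For the XOR table this returns 1 bit of synergy and zero for the other three atoms; replacing the target with a copy of X1 moves the full bit into unique information of X1. Note that later PID measures differ from I_min for some distributions, but agree on these canonical examples.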
Event-related potentials (ERPs) are widely used in basic neuroscience and in clinical diagnostic procedures. In contrast, neurophysiological insights from ERPs have been limited, as several different mechanisms lead to ERPs. Apart from stereotypically repeated responses (additive evoked responses), these mechanisms are asymmetric amplitude modulations and phase-resetting of ongoing oscillatory activity. Therefore, a method is needed that differentiates between these mechanisms and moreover quantifies the stability of a response. We propose a constrained subspace independent component analysis that exploits the multivariate information present in the all-to-all relationship of recordings over trials. Our method identifies additive evoked activity and quantifies its stability over trials. We evaluate identification performance for biologically plausible simulation data and two neurophysiological test cases: Local field potential (LFP) recordings from a visuo-motor-integration task in the awake behaving macaque and magnetoencephalography (MEG) recordings of steady-state visual evoked fields (SSVEFs). In the LFPs we find additive evoked response contributions in visual areas V2/4 but not in primary motor cortex A4, although visually triggered ERPs were also observed in area A4. MEG-SSVEFs were mainly created by additive evoked response contributions. Our results demonstrate that the identification of additive evoked response contributions is possible both in invasive and in non-invasive electrophysiological recordings.
Individual differences in perception are widespread. Considering inter-individual variability, synesthetes experience stable additional sensations; schizophrenia patients suffer perceptual deficits in, e.g., perceptual organization (alongside hallucinations and delusions). Is there a unifying principle explaining inter-individual variability in perception? There is good reason to believe perceptual experience results from inferential processes whereby sensory evidence is weighted by prior knowledge about the world. Perceptual variability may result from different precision weighting of sensory evidence and prior knowledge. We tested this hypothesis by comparing visibility thresholds in a perceptual hysteresis task across medicated schizophrenia patients (N = 20), synesthetes (N = 20), and controls (N = 26). Participants rated the subjective visibility of stimuli embedded in noise while we parametrically manipulated the availability of sensory evidence. Additionally, precise long-term priors in synesthetes were leveraged by presenting either synesthesia-inducing or neutral stimuli. Schizophrenia patients showed increased visibility thresholds, consistent with overreliance on sensory evidence. In contrast, synesthetes exhibited lowered thresholds exclusively for synesthesia-inducing stimuli, suggesting high-precision long-term priors. Additionally, in both synesthetes and schizophrenia patients explicit, short-term priors—introduced during the hysteresis experiment—lowered thresholds but did not normalize perception. Our results imply that perceptual variability might result from differences in the precision afforded to prior beliefs and sensory evidence, respectively.
Human behaviour is inextricably linked to the interaction of emotion and cognition. For decades, emotion and cognition were perceived as separable processes, yet with mutual interactions. Recently, this differentiation has been challenged by more integrative approaches, but without addressing the exact neurophysiological basis of their interaction. Here, we aimed to uncover neurophysiological mechanisms of emotion-cognition interaction. We used an emotional Flanker task paired with EEG/FEM beamforming in a large cohort (N=121) of healthy human participants, obtaining high temporal and fMRI-equivalent spatial resolution. Spatially, emotion and cognition processing overlapped in the right inferior frontal gyrus (rIFG), specifically in pars triangularis. Temporally, emotion and cognition processing overlapped during the transition from emotional to cognitive processing, with a stronger interaction in β-band power leading to worse behavioral performance. Despite functionally segregated subdivisions in rIFG, frequency-specific information flowed extensively within IFG and top-down to visual areas (V2, Precuneus) – explaining the behavioral interference effect. Thus, we show here for the first time the neural mechanisms of emotion-cognition interaction in space, time, frequency and information transfer with high temporal and spatial resolution, revealing a central role for β-band activity in rIFG. Our results support the idea that rIFG plays a broad role in both inhibitory control and emotional interference inhibition, as it is a site of convergence in both processes. Furthermore, our results have potential clinical implications for understanding dysfunctional emotion-cognition interaction and emotional interference inhibition in psychiatric disorders, e.g. major depression and substance use disorder, in which patients have difficulties in regulating emotions and executing inhibitory control.
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
Aging is accompanied by unisensory decline. To compensate for this, two complementary strategies may be relied upon increasingly: First, older adults integrate more information from different sensory organs. Second, according to the predictive coding (PC) model, we form “templates” (internal models or “priors”) of the environment through our experiences. It is through increased life experience that older adults may rely more on these templates compared to younger adults. Multisensory integration and predictive coding would be effective strategies for the perception of near-threshold stimuli, which may however come at the cost of integrating irrelevant information. Both strategies can be studied in multisensory illusions because these require the integration of different sensory information, as well as an internal model of the world that can take precedence over sensory input. Here, we elicited a classic multisensory illusion, the sound-induced flash illusion, in younger (mean: 27 years, N = 25) and older (mean: 67 years, N = 28) adult participants while recording the magnetoencephalogram. Older adults perceived more illusions than younger adults. Older adults had increased pre-stimulus beta-band activity compared to younger adults, as predicted by microcircuit theories of predictive coding, which suggest priors and predictions are linked to beta-band activity. Transfer entropy analysis and dynamic causal modeling of pre-stimulus magnetoencephalography data revealed a stronger illusion-related modulation of cross-modal connectivity from auditory to visual cortices in older compared to younger adults. We interpret this as the neural correlate of increased reliance on a cross-modal predictive template in older adults leading to the illusory percept.
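Transfer entropy, used in the study above to quantify cross-modal connectivity, measures the information the past of one signal adds about the next sample of another signal beyond that signal's own past. The minimal discrete plug-in estimator below, with history length 1, is our own sketch; the study itself analyzes continuous MEG source signals with more elaborate estimators.

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """TE(X -> Y) in bits with history length 1:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ],
    where y1 is the next sample and y0, x0 are the past samples."""
    trips = list(zip(y[1:], y[:-1], x[:-1]))  # (y1, y0, x0)
    n = len(trips)
    c3 = Counter(trips)
    c2 = Counter((y0, x0) for _, y0, x0 in trips)
    cy = Counter((y1, y0) for y1, y0, _ in trips)
    c1 = Counter(y0 for _, y0, _ in trips)
    te = 0.0
    for (y1, y0, x0), c in c3.items():
        p_full = c / c2[(y0, x0)]        # p(y1 | y0, x0)
        p_self = cy[(y1, y0)] / c1[y0]   # p(y1 | y0)
        te += (c / n) * log2(p_full / p_self)
    return te

# Y copies X with a one-sample lag, so X's past fully predicts Y
random.seed(1)
x = [random.getrandbits(1) for _ in range(4000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y))  # close to 1 bit
```

For an independent source the estimate drops to (nearly) zero, up to a small positive finite-sample bias, which is why surrogate-data significance testing is standard practice in empirical analyses.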
Background: Cognitive dysfunctions represent a core feature of schizophrenia and a predictor for clinical outcomes. One possible mechanism for cognitive impairments could involve an impairment in the experience-dependent modifications of cortical networks.
Methods: To address this issue, we employed magnetoencephalography (MEG) during a visual priming paradigm in a sample of chronic patients with schizophrenia (n = 14), and in a group of healthy controls (n = 14). We obtained MEG-recordings during the presentation of visual stimuli that were presented three times either consecutively or with intervening stimuli. MEG-data were analyzed for event-related fields as well as spectral power in the 1–200 Hz range to examine repetition suppression and repetition enhancement. We defined regions of interest in occipital and thalamic regions and obtained virtual-channel data.
Results: Behavioral priming did not differ between groups. However, patients with schizophrenia showed a prominently reduced oscillatory response to novel stimuli in the gamma-frequency band, as well as significantly reduced repetition suppression of gamma-band activity and reduced repetition enhancement of beta-band power in occipital cortex, for both consecutive repetitions and repetitions with intervening stimuli. Moreover, schizophrenia patients were characterized by a significant deficit in suppression of the C1m component in occipital cortex and thalamus, as well as of the late positive component (LPC) in occipital cortex.
Conclusions: These data provide novel evidence for impaired repetition suppression in cortical and subcortical circuits in schizophrenia. Although behavioral priming was preserved, patients with schizophrenia showed deficits in repetition suppression as well as repetition enhancement in thalamic and occipital regions, suggesting that experience-dependent modification of neural circuits is impaired in the disorder.
Inspiration for biologically inspired computing is often drawn from neural systems. This article shows how to analyze neural systems using information theory with the aim of obtaining constraints that help to identify the algorithms run by neural systems and the information they represent. Algorithms and representations identified this way may then guide the design of biologically inspired computing systems. The material covered includes the necessary introduction to information theory and to the estimation of information-theoretic quantities from neural recordings. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is partitioned into component processes of information storage, transfer, and modification – locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
Local active information storage as a tool to understand distributed neural information processing (2014)
Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing has been hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage (LAIS) was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data has been made to date. Here we measure local active information storage on a local scale in time and space in voltage-sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity to test theories of cortical function, such as predictive coding.
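The local quantity used in this study can be sketched in a few lines. For a discrete time series, local active information storage compares the probability of the next sample given its own length-k past against its unconditioned probability, a(t) = log2[ p(x_t | x_{t-k..t-1}) / p(x_t) ], so that predictable samples score positive and surprising samples negative. The plug-in estimator below is our own illustration; the paper's analyses use embeddings tailored to the imaging data.

```python
from collections import Counter
from math import log2

def local_ais(series, k=1):
    """Local active information storage per time step, in bits:
    a(t) = log2( p(x_t | length-k past) / p(x_t) ),
    with all probabilities taken as plug-in estimates."""
    past = [tuple(series[t - k:t]) for t in range(k, len(series))]
    nxt = series[k:]
    n = len(nxt)
    p_joint = Counter(zip(past, nxt))
    p_past = Counter(past)
    p_next = Counter(nxt)
    return [log2((p_joint[(h, x)] / p_past[h]) / (p_next[x] / n))
            for h, x in zip(past, nxt)]

# A perfectly alternating signal is fully predictable from a single
# past sample, so every local storage value is close to 1 bit.
vals = local_ais([0, 1] * 50, k=1)
print(min(vals), max(vals))
```

Averaging the local values over time recovers the standard (average) active information storage; the point of the local variant, as in the study above, is precisely that it resolves storage sample by sample in space and time.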