Special Subject Collection Full Texts (Sondersammelgebiets-Volltexte)
Refine
Year of publication
Document Type
- Article (42)
- Conference Proceeding (3)
Language
- English (45)
Has Fulltext
- yes (45)
Is part of the Bibliography
- no (45)
Keywords
- visual cortex (2)
- BPTI (1)
- NACI (1)
- NMR spectroscopy (1)
- NMR spectrum (1)
- NMR structure determination (1)
- Naja naja atra (1)
- Non-negative matrix factorization (1)
- Peak overlap (1)
- Peak picking (1)
Institute
- Frankfurt Institute for Advanced Studies (FIAS) (45)
Summary
Wild relatives of crops thrive in habitats where environmental conditions can be restrictive for productivity and survival of cultivated species. The genetic basis of this variability, particularly for tolerance to high temperatures, is not well understood. We examined the capacity of wild and cultivated accessions to acclimate to rapid temperature elevations that cause heat stress (HS).
We investigated genotypic variation in thermotolerance of seedlings of wild and cultivated accessions. The contribution of polymorphisms associated with thermotolerance variation was examined regarding alterations in function of the identified gene.
We show that tomato germplasm underwent a progressive loss of acclimation to strong temperature elevations. Sensitivity is associated with intronic polymorphisms in the HS transcription factor HsfA2, which affect the splicing efficiency of its pre‐mRNA. Intron splicing in wild species results in increased synthesis of the isoform HsfA2‐II, implicated in the early stress response, at the expense of HsfA2‐I, which is involved in establishing short‐term acclimation and thermotolerance.
We propose that the selection for modern HsfA2 haplotypes reduced the ability of cultivated tomatoes to rapidly acclimate to temperature elevations, but enhanced their short‐term acclimation capacity. Hence, we provide evidence that alternative splicing has a central role in the definition of plant fitness plasticity to stressful conditions.
We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radiative transfer code, BHOSS. The resulting synthetic intensity maps of accretion onto black holes are found to be convergent with increasing resolution and are anticipated to play a crucial role in the interpretation of horizon-scale images resulting from upcoming radio observations of the source at the Galactic Center.
Abstract: Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both structure and function of cortical synaptic wiring.
Author Summary: The problem of how the brain wires itself up has important implications for the understanding of both brain development and cognition. The microscopic structure of the circuits of the adult neocortex, often considered the seat of our highest cognitive abilities, is still poorly understood. Recent experiments have provided a first set of findings on the structural features of these circuits, but it is unknown how these features come about and how they are maintained. Here we present a neural network model that shows how these features might come about. It gives rise to numerous connectivity features, which have been observed in experiments, but never before simultaneously produced by a single model. Our model explains the development of these structural features as the result of a process of self-organization. The results imply that only a few simple mechanisms and constraints are required to produce, at least to the first approximation, various characteristic features of a typical fragment of brain microcircuitry. In the absence of any of these mechanisms, simultaneous production of all desired features fails, suggesting a minimal set of necessary mechanisms for their production.
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models for morphology and electrophysiology exist for many types of single neurons, the role of detailed single cell morphology in the population has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.
The YaeJ protein is a codon-independent release factor with peptidyl-tRNA hydrolysis (PTH) activity, and functions as a stalled-ribosome rescue factor in Escherichia coli. To identify residues required for YaeJ function, we performed mutational analysis for in vitro PTH activity towards rescue of ribosomes stalled on a non-stop mRNA, and for ribosome-binding efficiency. We focused on residues conserved among bacterial YaeJ proteins. Additionally, we determined the solution structure of the GGQ domain of YaeJ from E. coli using nuclear magnetic resonance spectroscopy. YaeJ and a human homolog, ICT1, had similar levels of PTH activity, despite various differences in sequence and structure. While no YaeJ-specific residues important for PTH activity occur in the structured GGQ domain, Arg118, Leu119, Lys122, Lys129 and Arg132 in the following C-terminal extension were required for PTH activity. All of these residues are completely conserved among bacteria. The equivalent residues were also found in the C-terminal extension of ICT1, allowing an appropriate sequence alignment between YaeJ and ICT1 proteins from various species. Single amino acid substitutions for each of these residues significantly decreased ribosome-binding efficiency. These biochemical findings provide clues to understanding how YaeJ enters the A-site of stalled ribosomes.
Background: Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlaps leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments.
Results: To alleviate this problem, a more sophisticated peak decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments.
Conclusions: Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890) as well as to synthetic HSQC data shows that peaks can be picked accurately also in spectral regions with strong overlap.
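The NMF decomposition described above can be illustrated with a minimal sketch (not the authors' implementation): standard Lee–Seung multiplicative updates applied to a synthetic 2D spectral region containing two overlapping Gaussian peaks, with each picked peak position read off from the argmax of a rank-1 component.

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0, eps=1e-9):
    """Factor a non-negative matrix V (m x n) as W @ H via Lee-Seung updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative updates keep
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # factors non-negative
    return W, H

# Synthetic 2D "spectral region": two overlapping Gaussian peaks
x = np.linspace(0, 1, 80)
g = lambda c, w: np.exp(-((x - c) / w) ** 2)
V = np.outer(g(0.45, 0.08), g(0.40, 0.08)) + 0.7 * np.outer(g(0.55, 0.08), g(0.60, 0.08))

W, H = nmf(V, k=2)
# Each rank-1 component W[:, i] x H[i, :] models one peak; its argmax gives
# the picked peak position (row index, column index).
peaks = sorted((int(np.argmax(W[:, i])), int(np.argmax(H[i]))) for i in range(2))
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(peaks, round(recon_err, 4))
```

Because the synthetic region is exactly rank 2 and non-negative, the decomposition recovers both peak centres even though the peaks overlap, which is the behaviour a simple lineshape-fitting picker would lose.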
Tumour cells show a varying susceptibility to radiation damage as a function of the current cell cycle phase. While this sensitivity is averaged out in an unperturbed tumour due to unsynchronised cell cycle progression, external stimuli such as radiation or drug doses can induce a resynchronisation of the cell cycle and consequently induce a collective development of radiosensitivity in tumours. Although this effect has been regularly described in experiments it is currently not exploited in clinical practice and thus a large potential for optimisation is missed. We present an agent-based model for three-dimensional tumour spheroid growth which has been combined with an irradiation damage and kinetics model. We predict the dynamic response of the overall tumour radiosensitivity to delivered radiation doses and describe corresponding time windows of increased or decreased radiation sensitivity. The degree of cell cycle resynchronisation in response to radiation delivery was identified as a main determinant of the transient periods of low and high radiosensitivity enhancement. A range of selected clinical fractionation schemes is examined and new triggered schedules are tested which aim to maximise the effect of the radiation-induced sensitivity enhancement. We find that the cell cycle resynchronisation can yield a strong increase in therapy effectiveness, if employed correctly. While the individual timing of sensitive periods will depend on the exact cell and radiation types, enhancement is a universal effect which is present in every tumour and accordingly should be the target of experimental investigation. Experimental observables which can be assessed non-invasively and with high spatio-temporal resolution have to be connected to the radiosensitivity enhancement in order to allow for a possible tumour-specific design of highly efficient treatment schedules based on induced cell cycle synchronisation.
Author Summary: The sensitivity of a cell to a dose of radiation is largely affected by its current position within the cell cycle. While under normal circumstances progression through the cell cycle will be asynchronous in a tumour mass, external influences such as chemo- or radiotherapy can induce a synchronisation. Such a common progression of the inner clock of the cancer cells makes the effectiveness of any drug or radiation dose critically dependent on suitable timing of its administration. We analyse the exact evolution of the radiosensitivity of a sample tumour spheroid in a computer model, which enables us to predict time windows of decreased or increased radiosensitivity. Fractionated radiotherapy schedules can be tailored in order to avoid periods of high resistance and exploit the induced radiosensitivity for an increase in therapy efficiency. We show that the cell cycle effects can drastically alter the outcome of fractionated irradiation schedules in a spheroid cell system. By using the correct observables and continuous monitoring, the cell cycle sensitivity effects have the potential to be integrated into treatment planning of the future and thus to be employed for a better outcome in clinical cancer therapies.
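The cycle-phase dependence of radiosensitivity discussed in these two abstracts can be illustrated with the standard linear-quadratic (LQ) survival model; the phase-specific α and β values below are assumed for demonstration only, not parameters from the study.

```python
import numpy as np

# Illustrative LQ parameters (alpha in 1/Gy, beta in 1/Gy^2) per cycle phase;
# assumed values for demonstration (G2/M is typically the most radiosensitive).
phases = {"G1": (0.30, 0.03), "S": (0.20, 0.02), "G2M": (0.50, 0.05)}

def survival(alpha, beta, dose):
    """Linear-quadratic model: S(D) = exp(-alpha*D - beta*D^2)."""
    return np.exp(-alpha * dose - beta * dose ** 2)

def population_survival(fractions, dose):
    """Average survival of a mixed population; `fractions` maps phase -> share."""
    return sum(f * survival(*phases[p], dose) for p, f in fractions.items())

dose = 2.0                                      # Gy, a typical fraction size
async_pop = {"G1": 0.5, "S": 0.3, "G2M": 0.2}   # unsynchronised population
sync_pop = {"G1": 0.1, "S": 0.1, "G2M": 0.8}    # resynchronised in G2/M
print(round(population_survival(async_pop, dose), 3),
      round(population_survival(sync_pop, dose), 3))
```

A fraction delivered while the population transits a sensitive phase kills markedly more cells than the same dose given to the unsynchronised mix, which is the timing effect the proposed triggered schedules exploit.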
When studying real world complex networks, one rarely has full access to all their components. As an example, the human central nervous system consists of about 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single neuron activity, it can heavily distort observables which characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic for SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all the other units.
Under subsampling the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated.
Since, under subsampling, the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: When the number and spacing of electrodes in neural tissue and of sampled units in a model are changed analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even when brain activity is far from being fully sampled.
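The basic subsampling effect on avalanche sizes can be sketched with a toy critical branching process (an illustrative stand-in, not one of the three SOC models studied): observing only n of N units systematically shrinks the measured avalanche measure.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000          # units in the fully sampled model
p_branch = 1.0    # critical branching parameter: each spike triggers ~1 spike

def avalanche_size(max_steps=10_000):
    """Total number of spikes in one avalanche of a critical branching process."""
    active, size = 1, 1
    for _ in range(max_steps):
        active = rng.binomial(active * 2, p_branch / 2)  # 0-2 children per spike
        size += active
        if active == 0 or size > N:
            break
    return min(size, N)   # an avalanche cannot exceed the network size

def subsampled_size(full_size, n):
    """Spikes observed when only n of the N units carry 'electrodes'."""
    units = rng.choice(N, size=full_size, replace=True)   # where spikes land
    sampled = rng.choice(N, size=n, replace=False)        # observed units
    return int(np.isin(units, sampled).sum())

sizes = [avalanche_size() for _ in range(2000)]
n = 64
sub = [subsampled_size(s, n) for s in sizes]
print(round(float(np.mean(sizes)), 1), round(float(np.mean(sub)), 2))
```

On average the observed size is scaled down by roughly n/N, and the shape of the size distribution is distorted as well, which is why measures from a few hundred electrodes cannot be read as the true avalanche statistics.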
Neuronal dynamics differs between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, termed self-organized criticality (SOC), has been proposed to govern human brain dynamics for its optimal information coding and processing capabilities. Here we address two open questions: First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; however, the interaction between brain areas may be organized differently. Here we asked whether the interaction between brain areas is SOC. ...
The Taiwan cobra (Naja naja atra) chymotrypsin inhibitor (NACI) consists of 57 amino acids and is related to other Kunitz-type inhibitors such as bovine pancreatic trypsin inhibitor (BPTI) and Bungarus fasciatus fraction IX (BF9), another chymotrypsin inhibitor. Here we present the solution structure of NACI. We determined the NMR structure of NACI with a root-mean-square deviation of 0.37 Å for the backbone atoms and 0.73 Å for the heavy atoms on the basis of 1,075 upper distance limits derived from NOE peaks measured in its NOESY spectra. To investigate the structural characteristics of NACI, we compared the three-dimensional structure of NACI with BPTI and BF9. The structure of the NACI protein comprises one 3₁₀-helix, one α-helix and one double-stranded antiparallel β-sheet, which is comparable with the secondary structures in BPTI and BF9. The RMSD value between the mean structures is 1.09 Å between NACI and BPTI and 1.27 Å between NACI and BF9. In addition to similar secondary and tertiary structure, NACI might possess similar types of protein conformational fluctuations as reported in BPTI, such as Cys14–Cys38 disulfide bond isomerization, based on line broadening of resonances from residues which are mainly confined to a region around the Cys14–Cys38 disulfide bond.
Adequate digital resolution and signal sensitivity are two critical factors for protein structure determinations by solution NMR spectroscopy. The prime objective for obtaining high digital resolution is to resolve peak overlap, especially in NOESY spectra with thousands of signals where the signal analysis needs to be performed on a large scale. Achieving maximum digital resolution is usually limited by the practically available measurement time. We developed a method utilizing non-uniform sampling for balancing digital resolution and signal sensitivity, and performed a large-scale analysis of the effect of the digital resolution on the accuracy of the resulting protein structures. Structure calculations were performed as a function of digital resolution for about 400 proteins with molecular sizes ranging between 5 and 33 kDa. The structural accuracy was assessed by atomic coordinate RMSD values from the reference structures of the proteins. In addition, we also monitored the number of assigned NOESY cross peaks, the average signal sensitivity, and the chemical shift spectral overlap. We show that high resolution is equally important for proteins of every molecular size. The chemical shift spectral overlap depends strongly on the corresponding spectral digital resolution. Thus, knowing the extent of overlap can be a predictor of the resulting structural accuracy. Our results show that for every molecular size a minimal digital resolution, corresponding to the natural linewidth, needs to be achieved for obtaining the highest accuracy possible for the given protein size using state-of-the-art automated NOESY assignment and structure calculation methods.
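The atomic coordinate RMSD used above as the accuracy measure is conventionally computed after optimal superposition of the two structures; below is a minimal numpy sketch of the standard Kabsch algorithm (illustrative, not the authors' pipeline).

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between point sets P, Q (n x 3) after optimal rotation/translation."""
    P = P - P.mean(axis=0)               # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)    # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))   # guard against improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

rng = np.random.default_rng(0)
ref = rng.normal(size=(50, 3))           # toy "reference structure" coordinates
theta = 0.7                              # rotate + translate: RMSD should be ~0
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = ref @ Rz.T + np.array([1.0, -2.0, 0.5])
print(round(kabsch_rmsd(ref, moved), 6))
# Coordinate noise mimics the spread of a calculated structure bundle
noisy = moved + rng.normal(scale=0.3, size=ref.shape)
print(round(kabsch_rmsd(ref, noisy), 2))
```

A pure rigid-body transform gives an RMSD of essentially zero, so the measure reports only genuine coordinate differences between a calculated structure and its reference.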
Background: After induction of DNA double strand breaks (DSBs), the DNA damage response (DDR) is activated. One of the earliest events in the DDR is the phosphorylation of serine 139 on the histone variant H2AX (γH2AX), catalyzed by phosphatidylinositol 3-kinase-related kinases. Despite extensive study, the distribution of H2AX across the genome and the spreading of γH2AX around DSB sites in the context of different chromatin compaction states or transcription are yet to be fully elucidated.
Materials and methods: γH2AX was induced in human hepatocellular carcinoma cells (HepG2) by exposure to 10 Gy X-rays (250 kV, 16 mA). Samples were incubated for 0.5, 3 or 24 hours post-irradiation to investigate early, intermediate and late stages of the DDR, respectively. Chromatin immunoprecipitation was performed to select H2AX-, H3- and γH2AX-enriched chromatin fractions. Chromatin-associated DNA was then sequenced on the Illumina ChIP-Seq platform. HepG2 gene expression and histone modification (H3K36me3, H3K9me3) ChIP-Seq profiles were retrieved from the Gene Expression Omnibus (accession numbers GSE30240 and GSE26386, respectively).
Results: First, we combined G/C usage, gene content, gene expression or histone modification profiles (H3K36me3, H3K9me3) to define genomic compartments characterized by different chromatin compaction states or transcriptional activity. Next, we investigated H3, H2AX and γH2AX distributions in such defined compartments before and after exposure to ionizing radiation (IR) to study DNA repair kinetics during the DDR. Our sequencing results indicate that H2AX distribution followed H3 occupancy and, thus, the nucleosome pattern. The highest H2AX and H3 enrichment was observed in transcriptionally active compartments (euchromatin), while the lowest was found in low-G/C, gene-poor compartments (heterochromatin). Under physiological conditions, the body of highly and moderately transcribed genes was devoid of γH2AX, despite presenting high H2AX levels. γH2AX accumulation was instead observed in 5’ or 3’ flanking regions. The same genes showed a prompt γH2AX accumulation during the early stage of the DDR, which then decreased over time as the DDR proceeded.
Finally, during the late stage of the DDR the residual γH2AX signal was entirely retained in heterochromatic compartments. At this stage, euchromatic compartments were completely devoid of γH2AX despite presenting high levels of non-phosphorylated H2AX.
Conclusions: We show that γH2AX distribution ultimately depends on H2AX occupancy, the latter following H3 occupancy and, thus, the nucleosome pattern. Both H2AX and H3 levels were higher in actively transcribed compartments. However, γH2AX levels were remarkably low over the body of actively transcribed genes, suggesting that transcription levels antagonize γH2AX spreading. Moreover, repair processes did not take place uniformly across the genome; rather, DNA repair was affected by genomic location and transcriptional activity. We propose that the higher H2AX density in euchromatic compartments results in a high relative γH2AX concentration soon after the activation of the DDR, thus favoring the recruitment of the DNA repair machinery to those compartments. When the damage is repaired and γH2AX is removed, its residual fraction is retained in the heterochromatic compartments, which are then targeted and repaired at later times.
Radiation damage following the irradiation of tissue proceeds through different scenarios and mechanisms depending on the projectile or radiation modality. We investigate the radiation damage effects due to shock waves produced by ions. We analyse the strength of the shock wave capable of directly producing DNA strand breaks and, depending on the ion's linear energy transfer, estimate the radius from the ion's path within which DNA damage by the shock wave mechanism is dominant. At much smaller values of linear energy transfer, the shock waves turn out to be instrumental in propagating reactive species formed close to the ion's path to large distances, successfully competing with diffusion.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples, in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near symmetry breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how these timescales can be compared to those of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned and when considering more than 2 feature dimensions. We show that, although maps typically undergo substantial rearrangement, no other solutions than pinwheel crystals and stripes dominate in the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. 
We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
Mitochondria form a dynamic tubular reticulum within eukaryotic cells. Currently, quantitative understanding of its morphological characteristics is largely absent, despite major progress in deciphering the molecular fission and fusion machineries shaping its structure. Here we address the principles of formation and the large-scale organization of the cell-wide network of mitochondria. On the basis of experimentally determined structural features we establish the tip-to-tip and tip-to-side fission and fusion events as dominant reactions in the motility of this organelle. Subsequently, we introduce a graph-based model of the chondriome able to encompass its inherent variability in a single framework. Using both mean-field deterministic and explicit stochastic mathematical methods we establish a relationship between the chondriome structural network characteristics and underlying kinetic rate parameters. The computational analysis indicates that mitochondrial networks exhibit a percolation threshold. Intrinsic morphological instability of the mitochondrial reticulum resulting from its vicinity to the percolation transition is proposed as a novel mechanism that can be utilized by cells for optimizing their functional competence via dynamic remodeling of the chondriome. The detailed size distribution of the network components predicted by the dynamic graph representation introduces a relationship between chondriome characteristics and cell function. It forms a basis for understanding the architecture of mitochondria as a cell-wide but inhomogeneous organelle. Analysis of the reticulum adaptive configuration offers a direct clarification for its impact on numerous physiological processes strongly dependent on mitochondrial dynamics and organization, such as efficiency of cellular metabolism, tissue differentiation and aging.
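The percolation threshold central to this model can be illustrated with a generic random-graph sketch (an Erdős–Rényi graph, used here only as an analogy for the fission/fusion balance, not the paper's kinetic model): the largest connected component jumps from microscopic to macroscopic as the mean degree crosses 1.

```python
import numpy as np

def largest_component_fraction(n, mean_degree, rng):
    """Fraction of nodes in the largest component of an Erdos-Renyi graph."""
    p = mean_degree / (n - 1)
    parent = np.arange(n)          # union-find forest over the n nodes

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    rows, cols = np.triu_indices(n, k=1)    # all candidate edges
    mask = rng.random(rows.size) < p        # keep each edge with prob. p
    for a, b in zip(rows[mask], cols[mask]):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                 # union the two components
    roots = np.array([find(i) for i in range(n)])
    return np.bincount(roots).max() / n

rng = np.random.default_rng(3)
for k in (0.5, 1.0, 2.0):   # mean degree; percolation threshold at k = 1
    print(k, round(largest_component_fraction(800, k, rng), 2))
```

Below the threshold the reticulum analogue is a dust of small fragments; above it a single cell-spanning component dominates, which is the kind of abrupt structural transition the chondriome is proposed to exploit.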
Mitochondrial dynamics and mitophagy play a key role in ensuring mitochondrial quality control. Impairment thereof has been proposed to be causative in neurodegenerative diseases, diabetes, and cancer. Accumulation of mitochondrial dysfunction was further linked to aging. Here we applied a probabilistic modeling approach integrating our current knowledge on mitochondrial biology allowing us to simulate mitochondrial function and quality control during aging in silico. We demonstrate that cycles of fusion and fission and mitophagy indeed are essential for ensuring a high average quality of mitochondria, even under conditions in which random molecular damage is present. Prompted by earlier observations that mitochondrial fission itself can cause a partial drop in mitochondrial membrane potential, we tested the consequences of mitochondrial dynamics being harmful on its own. Next to directly impairing mitochondrial function, pre-existing molecular damage may be propagated and enhanced across the mitochondrial population by content mixing. In this situation, such an infection-like phenomenon impairs mitochondrial quality control progressively. However, when imposing an age-dependent deceleration of cycles of fusion and fission, we observe a delay in the loss of average quality of mitochondria. This provides a rationale for why fusion and fission rates are reduced during aging and why loss of a mitochondrial fission factor can extend life span in fungi. We propose the ‘mitochondrial infectious damage adaptation’ (MIDA) model according to which a deceleration of fusion–fission cycles reflects a systemic adaptation increasing life span.
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors
(2011)
Background: Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. Methodology/Principal Findings: We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. Conclusions/Significance: Twelve compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for the representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore- and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
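The "norm of spherical harmonics expansion coefficients" idea can be illustrated with mock coefficients: rotations act unitarily within each degree l, so the per-degree norms are rotation-invariant. The demo below uses a rotation about the z axis, where the action reduces to a phase factor per order m; the coefficients are random stand-ins, not values from an actual molecular surface:

```python
import numpy as np

rng = np.random.default_rng(2)
L_MAX = 4

# mock spherical-harmonics expansion coefficients c[l][m], m = -l..l
coeffs = {l: rng.normal(size=2*l + 1) + 1j*rng.normal(size=2*l + 1)
          for l in range(L_MAX + 1)}

def descriptor(c):
    # rotation-invariant shape descriptor: per-degree l2 norms
    return np.array([np.linalg.norm(c[l]) for l in sorted(c)])

def rotate_z(c, phi):
    # a rotation about z multiplies c_{l,m} by exp(-i m phi)
    return {l: c[l] * np.exp(-1j * np.arange(-l, l + 1) * phi) for l in c}

d0 = descriptor(coeffs)
d1 = descriptor(rotate_z(coeffs, 0.7))
print(np.max(np.abs(d0 - d1)))   # ~0: the descriptor is unchanged by rotation
```

For general rotations the coefficients transform by Wigner-D matrices, which are unitary within each degree, so the same invariance argument applies.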
Background: The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely-used standard for choosing these residue ranges for experimentally determined protein structures, where the manual selection of residue ranges or the use of suboptimal criteria remain commonplace. Results: We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible without the inclusion of further residues causing a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions: The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions.
Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods, without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures.
Additional files
- Additional file 1: Dependence of Q on the order parameter rank. The quantity Qi is plotted against the order parameter rank i for 9 different protein structure bundles.
- Additional file 2: Dependence of P on the clustering stage. The quantity Pi is plotted against the clustering stage i for 9 different protein structure bundles.
- Additional file 3: Dependence of CYRANGE results on the minimal cluster size parameter my. The sequence coverage (red) and RMSD (blue) of the residue ranges determined by CYRANGE are plotted as a function of my for 9 different protein structure bundles. The dotted vertical line indicates the default value, my = 8. Where CYRANGE found two domains, the RMSD values of the individual domains are shown in light and dark blue.
- Additional file 4: Dependence of CYRANGE results on the domain boundary extension parameter m. See Additional file 3 for details.
- Additional file 5: Dependence of CYRANGE results on the minimal gap width g. See Additional file 3 for details.
- Additional file 6: Dependence of CYRANGE results on the relative RMSD decrease parameter delta. See Additional file 3 for details.
- Additional file 7: Dependence of CYRANGE results on the absolute RMSD decrease parameter delta abs. See Additional file 3 for details.
- Additional file 8: Dependence of CYRANGE results on the gap penalty parameter gamma. See Additional file 3 for details.
- Additional file 9: Correlation between the sequence coverage from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents a protein shown in Figures 3 and 4. The coverage is the percentage of amino acid residues included in the residue ranges found by the different methods. The GDT_TS value is defined by GDT_TS = (P1 + P2 + P4 + P8)/4, where Pd is the fraction of residues that can be superimposed under a distance cutoff of d Å.
- Additional file 10: Correlation between the RMSD value for the residue ranges from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents one protein domain. See Additional file 9 for details.
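The GDT_TS definition quoted above can be evaluated directly. The per-residue distances below are made up for illustration:

```python
# per-residue distances (Å) between superimposed model and reference
dists = [0.4, 0.9, 1.6, 2.5, 3.1, 4.7, 6.0, 9.5]

def gdt_ts(d):
    # GDT_TS = (P1 + P2 + P4 + P8) / 4, with Pd the fraction of
    # residues within a distance cutoff of d Å
    frac = lambda cut: sum(x <= cut for x in d) / len(d)
    return sum(frac(c) for c in (1, 2, 4, 8)) / 4

print(f"GDT_TS = {gdt_ts(dists):.3f}")
```

Here P1 = 2/8, P2 = 3/8, P4 = 5/8 and P8 = 7/8, giving GDT_TS ≈ 0.531.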
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. One of the central questions in neuroscience is how neural activity is organized across different spatial and temporal scales. As larger populations oscillate and synchronize at lower frequencies and smaller ensembles are active at higher frequencies, cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. Although various experiments have revealed amplitude-to-amplitude and phase-to-phase coupling, the most common and most celebrated result is that the phase of the lower-frequency component modulates the amplitude of the higher-frequency component. Over the last five years, a tremendous number of experimental studies have reported such phase-amplitude coupling in LFP, ECoG, EEG and MEG (summarized in [1]). We suggest that although the mechanism of cross-frequency coupling (CFC) is theoretically very appealing, the current analysis methods might overestimate any physiological CFC actually evident in the signals of LFP, ECoG, EEG and MEG. In particular, we point out three conceptual problems in assessing the components of a time series and their correlations. Although we focus on phase-amplitude coupling, most of our argument is relevant for any type of coupling. 1) The first conceptual problem is related to isolating physiological frequency components of the recorded signal. The key point is to notice that there are many different mathematical representations of a time series, but the physical interpretation we make of them depends on the choice of the components to be analyzed. In particular, when one isolates the components by Fourier-representation-based filtering, it is the width of the filtering bands that defines what we consider as our components and how their power or group phase change in time.
We will discuss clear-cut examples where the interpretation of the existence of CFC depends on the width of the filtering process. 2) A second problem deals with the origin of spectral correlations as detected by current cross-frequency analysis. It is known that non-stationarities are associated with spectral correlations in the Fourier space. Therefore, there are two possibilities regarding the interpretation of any observed CFC. One scenario is that basic neuronal mechanisms indeed generate an interaction across different time scales (or frequencies), resulting in processes with non-stationary features. The other, problematic possibility is that unspecific non-stationarities can also be associated with spectral correlations, which in turn will be detected by cross-frequency measures even if physiologically there is no causal interaction between the frequencies. 3) We discuss the role of non-linearities as generators of cross-frequency interactions. As an example, we performed a phase-amplitude coupling analysis of two nonlinearly related signals, atmospheric noise and its square (Figure 1), observing an enhancement of phase-amplitude coupling in the second signal while no pattern is observed in the first. Finally, we discuss some minimal conditions that need to be tested to resolve some of the ambiguities noted here. In summary, we simply want to point out that finding a significant cross-frequency pattern does not always imply that there is indeed a physiological cross-frequency interaction in the brain.
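The kind of phase-amplitude coupling analysis discussed here can be sketched on synthetic signals (not the atmospheric-noise data of the poster): a 40 Hz carrier whose amplitude is locked to the phase of a 4 Hz rhythm yields a high modulation index, while an unmodulated carrier does not. An FFT-based analytic signal stands in for the usual band-pass + Hilbert pipeline:

```python
import numpy as np

fs, T = 1000.0, 4.0
t = np.arange(0, T, 1/fs)

def hilbert_analytic(x):
    # analytic signal via FFT (zero out negative frequencies)
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1; h[1:len(x)//2] = 2; h[len(x)//2] = 1
    return np.fft.ifft(X * h)

def modulation_index(low, high):
    # mean-vector-length coupling measure between low-band phase
    # and high-band amplitude
    phase = np.angle(hilbert_analytic(low))
    amp = np.abs(hilbert_analytic(high))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

slow = np.cos(2*np.pi*4*t)
coupled = (1 + 0.8*np.cos(2*np.pi*4*t)) * np.cos(2*np.pi*40*t)  # amp locked to phase
flat = np.cos(2*np.pi*40*t)                                     # no coupling

mi_coupled = modulation_index(slow, coupled)
mi_flat = modulation_index(slow, flat)
print(f"MI coupled: {mi_coupled:.3f}, MI flat: {mi_flat:.3f}")
```

For the coupled signal the envelope is exactly 1 + 0.8·cos(phase), so the index evaluates to 0.4; for the flat carrier it vanishes.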
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. Parallel multiunit recordings from V1 in anesthetized cat were collected during the presentation of random sequences of drifting sinusoidal gratings at 12 fixed orientations while gamma oscillations were present. In agreement with the seminal work [1], most units were orientation selective to varying degrees, and synchronization was evident in spike train crosscorrelograms computed between units with similar preferred orientations, particularly during the presentation of optimal stimuli. Interestingly, a subset of units, which we refer to as synchronization hubs, were additionally found to synchronize with units having differing preferred orientations, which is consistent with a previous study [2]. Moreover, oscillatory patterning in spike train autocorrelograms was found to be strongest in units denoted as synchronization hubs, and synchronization hubs also tended to have narrower tuning curves relative to other units. We used simplified computational models of small networks of V1 neurons to demonstrate that neurons subject to a sufficiently strong level of inhibitory input can function as synchronization hubs. Neurons were endowed with either integrate-and-fire or conductance-based dynamics, and each neuron received a combination of excitatory (AMPA) synaptic inputs that were Poisson-distributed and inhibitory (GABA) inputs that were coherent in the gamma-frequency range. If the strength of rhythmic inhibition was increased for a subset of neurons in the network, and excitation was increased simultaneously to maintain a fixed firing rate, then these neurons produced stronger oscillatory patterning in their discharge probabilities. The oscillations in turn synchronized these neurons with other neurons in the network.
Importantly, the strength of synchronization increased even between neurons of differing orientation preferences, although no direct synaptic coupling existed between the hubs and the other neurons. Enhanced levels of inhibition account for the emergence of synchronization hubs in the following way: inhibitory inputs exhibiting a gamma rhythm determine a time window within which a cell is likely to discharge. Increased levels of inhibition narrow this window further, simultaneously leading to (i) even stronger oscillatory patterning of the neuron's activity and (ii) enhanced synchronization with other neurons. This enables synchronization even between cells with differing orientation preferences. Additionally, the same increased levels of inhibition may be responsible for the narrow tuning curves of hub neurons. In conclusion, synchronization hubs may be the cells that interact most strongly with the network of inhibitory interneurons during gamma oscillations in primary visual cortex.
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. Background: Oscillatory activity in the high-beta and gamma bands (20–80 Hz) is known to play an important role in cortical processing and is linked to cognitive processes and behavior. Beta/gamma oscillations are thought to emerge in local cortical circuits via two mechanisms: the interaction between excitatory principal cells and inhibitory interneurons – the pyramidal-interneuron gamma (PING) [1] – and in networks of coupled inhibitory interneurons under tonic excitation – the interneuronal gamma (ING) [2]. Experimental evidence underlines the important role of inhibitory interneurons, and especially of fast spiking (FS) interneurons [3,4]. We show in simulation that an important property of FS neurons, namely membrane resonance (frequency preference), provides an additional mechanism – resonance-induced gamma (RING), i.e., the modulation of oscillatory discharge by resonance. RING promotes frequency stability and enables oscillations in purely excitatory networks. Methods: Local circuits were modeled as networks of 80% excitatory and 20% inhibitory neuron populations interconnected in a small-world topology by realistic conductance-based synapses. Neuron populations consisted of leaky integrate-and-fire (LIF) or Izhikevich resonator (RES) neurons. We also tested networks of purely inhibitory and purely excitatory RES neurons. Networks were stimulated with miniature postsynaptic potentials (MINIs) [5] and with a low-frequency sinusoidal (0.5 Hz) input that mimics the effect of gratings passing through the visual field. The activity was calibrated to match recordings from cat visual cortex (firing rate, oscillatory activity). Results: Sinusoidal input modulates the network oscillation frequency.
This effect is most prominent in IF excitatory and IF inhibitory (IF-IF) networks and about 4 times less prominent in IF-RES or RES-IF networks, where the frequency remains relatively stable. The most stable frequency was observed in networks of pure resonators (RES-RES, None-RES, RES-None). Interestingly, purely excitatory RES networks (RES-None) were also able to exhibit oscillations through RING. By contrast, purely excitatory or purely inhibitory IF networks (IF-None, None-IF) were not able to express oscillations under these conditions matched to experimental parameters. Conclusions: In both PING and ING, adding membrane resonance to principal cells or inhibitory interneurons stabilizes the network oscillation frequency via the RING mechanism. Notably, in purely excitatory networks, where ING and PING are not defined, oscillations can emerge via the RING mechanism if membrane resonance is expressed. Thus, RING appears to be a potentially important mechanism for promoting stable network oscillations.
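The difference between integrator and resonator dynamics that underlies RING can be illustrated with subthreshold (linear) frequency responses; the time constant and resonance frequency below are illustrative choices, not the poster's calibrated parameters:

```python
import numpy as np

freqs = np.array([5., 20., 40., 80.])         # drive frequencies (Hz)
w = 2 * np.pi * freqs

# integrator (LIF subthreshold): tau dv/dt = -v + I  ->  low-pass response
tau = 0.02                                    # 20 ms membrane time constant
gain_int = 1 / np.sqrt(1 + (w * tau)**2)

# resonator: v'' + 2*gamma*v' + w0^2 v = I  ->  band-pass response near w0
w0, gamma = 2 * np.pi * 40, 2 * np.pi * 5     # resonance at 40 Hz
gain_res = 1 / np.sqrt((w0**2 - w**2)**2 + (2 * gamma * w)**2)

print("integrator gain:", gain_int / gain_int.max())  # monotonically falling
print("resonator gain: ", gain_res / gain_res.max())  # peaks at 40 Hz
```

The integrator's gain only falls with frequency, while the resonator prefers drive near its resonance, which is the frequency-preference property the RING mechanism exploits.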
Gamma synchronization has generally been associated with grouping processes in the visual system. Here, we examine in monkey V1 whether gamma oscillations play a functional role in segmenting surfaces of plaid stimuli. Local field potentials (LFPs) and spiking activity were recorded simultaneously from multiple sites in the opercular and calcarine regions while the monkeys were presented with sequences of single and superimposed components of plaid stimuli. In accord with previous studies, responses to the single components (gratings) exhibited strong and sustained gamma-band oscillations (30–65 Hz). The superposition of the second component, however, led to profound changes in the temporal structure of the responses, characterized by a drastic reduction of gamma oscillations in the spiking activity and systematic shifts to higher frequencies in the LFP (~10% increase). Comparisons between cerebral hemispheres and across monkeys revealed robust subject-specific spectral signatures. A possible interpretation of our results may be that single gratings induce strong cooperative interactions among populations of cells that share similar response properties, whereas plaids lead to competition. Overall, our results suggest that the functional architecture of the cortex is a major determinant of the neuronal synchronization dynamics in V1. Key words: attention, gamma, gratings, oscillation, visual cortex
Human Transformer2-beta (hTra2-beta) is an important member of the serine/arginine-rich protein family, and contains one RNA recognition motif (RRM). It controls the alternative splicing of several pre-mRNAs, including those of the calcitonin/calcitonin gene-related peptide (CGRP), the survival motor neuron 1 (SMN1) protein and the tau protein. Accordingly, the RRM of hTra2-beta specifically binds to two types of RNA sequences [the CAA and (GAA)2 sequences]. We determined the solution structure of the hTra2-beta RRM (spanning residues Asn110–Thr201), which not only has a canonical RRM fold, but also an unusual alignment of the aromatic amino acids on the beta-sheet surface. We then solved the complex structure of the hTra2-beta RRM with the (GAA)2 sequence, and found that the AGAA tetra-nucleotide was specifically recognized through hydrogen-bond formation with several amino acids on the N- and C-terminal extensions, as well as stacking interactions mediated by the unusually aligned aromatic rings on the beta-sheet surface. Further NMR experiments revealed that the hTra2-beta RRM recognizes the CAA sequence when it is integrated in the stem-loop structure. This study indicates that the hTra2-beta RRM recognizes two types of RNA sequences in different RNA binding modes.
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, e.g., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (<= ~20 ms), and persisted for several hundred milliseconds beyond the offset of the stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs.
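A minimal version of such a linear-readout analysis can be sketched on surrogate data: Poisson spike counts of a ~100-neuron ensemble with stimulus-specific tuning, decoded by a least-squares linear classifier. The tuning values and trial counts are invented for illustration, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_stim, n_trials = 100, 3, 60

# stimulus-specific mean firing rates for the ensemble
tuning = rng.uniform(2, 20, size=(n_stim, n_neurons))
labels = np.repeat(np.arange(n_stim), n_trials)
X = rng.poisson(tuning[labels]).astype(float)    # spike counts per trial

# train/test split
idx = rng.permutation(len(labels))
tr, te = idx[:120], idx[120:]

# one-vs-rest linear readout by least squares
# (a weighted sum, i.e. what a single downstream neuron could implement)
Y = np.eye(n_stim)[labels]
Xb = np.hstack([X, np.ones((len(X), 1))])        # bias column
W, *_ = np.linalg.lstsq(Xb[tr], Y[tr], rcond=None)
acc = np.mean((Xb[te] @ W).argmax(axis=1) == labels[te])
print(f"linear decoding accuracy: {acc:.2f}")
```

With well-separated tuning, even this simplest linear readout decodes the stimulus identity far above the 1/3 chance level.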
Poster presentation: How can two distant neural assemblies synchronize their firing at zero lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more and more evidence is accumulating in favor of its functional role as a binding mechanism for distributed neural responses, the physical and anatomical substrate for such dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that two distant neural assemblies that do not interact directly, but instead relay their dynamics via a third mediating neuron or population, can eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms, the distant neural populations can synchronize their firing at zero lag with millisecond precision after the exchange of a few spikes. The roles of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations are also studied, confirming the robustness of this synchronization mechanism. The proposed network motif is densely embedded within the complex functional architecture of the brain, and especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication.
In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous works, neither inhibition, gap junctions, nor complex network topologies need to be invoked to provide a stable mechanism of zero-phase correlated activity of neural populations in the presence of large conduction delays.
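The dynamical-relaying motif can be sketched with three delay-coupled phase oscillators, a strong simplification of the Hodgkin-Huxley populations used in the poster (all parameters illustrative): the outer units interact only through the relay, yet end up firing at zero lag while each keeps a non-zero lag to the relay itself:

```python
import numpy as np

# three Kuramoto-style phase oscillators: outer units 1 and 2 are not
# directly connected but relay their dynamics via unit 3; every
# connection carries the same conduction delay tau
dt, tau, K, omega = 0.001, 0.010, 5.0, 2 * np.pi * 5   # 10 ms delay, 5 Hz rhythm
steps, d = 60_000, int(round(0.010 / 0.001))

theta = np.zeros((steps, 3))
theta[:d + 1] = [0.0, 2.0, 4.0]                 # distinct initial phases
for n in range(d, steps - 1):
    t1, t2, t3 = theta[n]
    d1, d2, d3 = theta[n - d]                   # delayed phases
    theta[n + 1, 0] = t1 + dt * (omega + K * np.sin(d3 - t1))
    theta[n + 1, 1] = t2 + dt * (omega + K * np.sin(d3 - t2))
    theta[n + 1, 2] = t3 + dt * (omega + K * np.sin(d1 - t3) + K * np.sin(d2 - t3))

lag_outer = np.angle(np.exp(1j * (theta[-1, 0] - theta[-1, 1])))
lag_relay = np.angle(np.exp(1j * (theta[-1, 0] - theta[-1, 2])))
print(f"outer-outer lag: {lag_outer:.4f} rad, outer-relay lag: {lag_relay:.4f} rad")
```

Because both outer units receive the identical delayed drive from the relay, their phase difference contracts to zero regardless of the delay, which is the essence of the relay mechanism.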
Poster presentation: Background To test the importance of synchronous neuronal firing for information processing in the brain, one has to investigate whether the strength of synchronous firing is correlated with the experimental conditions. This requires a tool that can compare the strength of synchronous firing across different conditions while correcting for other features of neuronal firing, such as spike rate modulation or the auto-structure of the spike trains, that might co-occur with synchronous firing. Here we present the bi- and multivariate extension of the previously developed method NeuroXidence [1,2], which allows the amount of synchronous firing to be compared between different conditions. ...
Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal patterns, has long been believed to be a major component of neuronal activity. However, the discussion of whether coordinated activity really exists has remained heated and controversial. A major uncertainty is that many analysis approaches either ignored the auto-structure of the spiking activity, assumed a very simplified model (Poissonian firing), or changed the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity occurs beyond chance can be rendered invalid if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike trains modeled as renewal processes. We considered Gamma processes with different shape parameters as well as renewal processes in which the inter-spike interval (ISI) distribution is log-normal. For Gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width and also investigated how they depend on the firing rate and on the rate difference between the neurons. We used Monte-Carlo simulations to estimate the whole distribution for these parameters and also for other values of the shape parameter. Moreover, we considered the effect of dithering for both of these processes and found that while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence count distribution depends very critically and in a non-trivial way on the detailed properties of the inter-spike interval distribution, 2) the dependencies of the Fano factor on the coefficient of variation (CV) of the ISI distribution are complex and mostly non-monotonic.
Moreover, the Fano factor depends on the very detailed properties of the individual point processes and cannot be predicted by the CV alone. Hence, given a recorded data set, the estimated value of the CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence count distribution, and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases, and especially for complex synchronous and spatiotemporal patterns across many neurons, spike jittering strongly increased the likelihood of false-positive findings. Last, we discuss a procedure [1] that considers the complete auto-structure of each individual spike train for testing whether synchronous firing occurs beyond chance, and therefore overcomes the danger of an increased level of false positives.
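A Monte-Carlo check of this kind can be set up in a few lines: independent Gamma renewal trains, per-bin coincidence counts, and the mean and Fano factor across trials. Rates, bin width and shape parameters are arbitrary illustrative choices, and coincidences are counted here as per-bin count products, whose expectation for independent trains is r1·r2·b·T:

```python
import numpy as np

rng = np.random.default_rng(4)

def gamma_train(rate, shape, T):
    # renewal process with Gamma-distributed ISIs, mean ISI = 1/rate
    isi = rng.gamma(shape, 1.0 / (shape * rate), int(1.5 * rate * T) + 30)
    t = np.cumsum(isi)
    return t[t < T]

def coincidences(a, b, binwidth, T):
    bins = np.arange(0.0, T + binwidth, binwidth)
    ca, _ = np.histogram(a, bins)
    cb, _ = np.histogram(b, bins)
    return int(np.sum(ca * cb))      # per-bin joint spike counts

rate, T, b, n_trials = 20.0, 10.0, 0.005, 2000
means, fanos = [], []
for shape in (1, 4):                 # Poisson-like vs. more regular firing
    c = np.array([coincidences(gamma_train(rate, shape, T),
                               gamma_train(rate, shape, T), b, T)
                  for _ in range(n_trials)])
    means.append(c.mean())
    fanos.append(c.var() / c.mean())
    print(f"shape {shape}: mean {c.mean():.2f} "
          f"(expected {rate * rate * b * T:.1f}), Fano {c.var() / c.mean():.2f}")
```

The mean coincidence count matches the independence prediction for both shapes, while the Fano factor of the counts depends on the auto-structure, which is the point of the abstract.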
Poster presentation: Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to characterize the relationship between neural spiking activity and the features of putative stimuli: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and, more recently, the point process generalized linear model (GLM). We compared the performance of these three approaches in estimating receptive field properties and orientation tuning of 251 V1 neurons recorded from 2 monkeys during a fixation period in response to a moving bar. The GLM consisted of two formulations of the conditional intensity function for a point process characterization of the spiking activity: one with a stimulus-only component and one with the stimulus and spike history. We fit the GLMs by maximum likelihood using GLMfit in Matlab. Goodness-of-fit was assessed using cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem to evaluate the accuracy with which each model predicts the spiking activity of individual neurons and for each movement direction (4016 models in total, for 251 neurons and 16 different directions). The GLMs that considered spike history of up to 35 ms accurately predicted neuronal spiking activity (95% confidence intervals for the KS test) with a performance of 97.0% (3895/4016) for the training data and 96.5% (3876/4016) for the test data. If spike history was not considered, performance dropped to 73.1% in the training and 71.3% in the test data. In contrast, the WVK and the STA predicted spiking accurately for only 24.2% and 44.5% of the test data examples, respectively. The receptive field size estimates obtained from the GLM (with and without history), WVK and STA were comparable. Relative to the GLM, orientation tuning was underestimated on average by a factor of 0.45 by the WVK and the STA.
The main reason for using the STA and WVK approaches is their apparent simplicity. However, our analyses suggest that more accurate spike prediction, as well as more credible estimates of receptive field size and orientation tuning, can be computed easily using GLMs implemented in Matlab with standard functions such as GLMfit.
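The time-rescaling goodness-of-fit idea used above can be sketched for a known inhomogeneous Poisson intensity (a generic illustration with made-up rate parameters, not the poster's V1 models): integrating the intensity between successive spikes should yield Exp(1) intervals, so their transform should be uniform, which a KS statistic can check:

```python
import numpy as np

rng = np.random.default_rng(5)

# conditional intensity used both to generate and to test the spikes
T, dt = 100.0, 0.001
t = np.arange(0, T, dt)
lam = 10 * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))   # time-varying rate (Hz)

# simulate an inhomogeneous Poisson train (per-bin Bernoulli thinning)
spikes = t[rng.random(len(t)) < lam * dt]

# time-rescaling: z_k = integral of lambda between successive spikes ~ Exp(1)
Lam = np.cumsum(lam) * dt
z = np.diff(np.interp(spikes, t, Lam))
u = np.sort(1 - np.exp(-z))                          # ~ Uniform(0,1) if model is right

# KS statistic against the uniform CDF
n = len(u)
ks = np.max(np.abs(u - (np.arange(1, n + 1) - 0.5) / n))
print(f"{n} ISIs, KS statistic {ks:.4f} (95% bound ~ {1.36 / np.sqrt(n):.4f})")
```

A correct model keeps the KS statistic within the confidence bound; a model without the needed terms (e.g. omitting spike history when it matters) pushes it outside, which is how the 97% vs. 73% figures above arise.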
Poster presentation: Introduction Rhythmic synchronization of neural activity in the gamma-frequency range (30–100 Hz) has been observed in many brain regions; see the review in [1]. The functional relevance of these oscillations remains to be clarified, a task that requires modeling of the relevant aspects of information processing. The temporal correlation hypothesis, reviewed in [2], proposes that the temporal correlation of neural units provides a means to group the neural units into so-called neural assemblies that are supposed to represent mental objects. Here, we approach the modeling of the temporal grouping of neural units from the perspective of oscillatory neural network systems based on phase-model oscillators. Patterns are assumed to be stored in the network based on Hebbian memory, and assemblies are identified with phase-locked subsets of these patterns. Going beyond previous discussions, we demonstrate the combination of two recently discussed mechanisms, referred to as "acceleration" [3] and "pooling" [4]. The combination realizes, in a complementary manner, a competition for activity on a local scale while providing a competition for coherence among different assemblies on a non-local scale. ...
Poster presentation: Introduction Adequate anesthesia is crucial to the success of surgical interventions and subsequent recovery. Neuroscientists, surgeons, and engineers have sought to understand the impact of anesthetics on information processing in the brain and to properly assess the level of anesthesia in a non-invasive manner. Studies have indicated that depth of anesthesia (DOA) detection is more reliable if multiple parameters are employed. Indeed, commercial DOA monitors (BIS, Narcotrend, M-Entropy and A-line ARX) use more than one feature extraction method. Here, we propose TESPAR (Time Encoded Signal Processing And Recognition), a time-domain signal processing technique novel to EEG DOA assessment that could enhance existing monitoring devices. ...
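TESPAR codes a waveform by its zero-crossing epochs, each described by a duration D and a shape S (number of interior extrema). The sketch below is a simplified reading of that idea, not the authors' implementation; the test signals are synthetic sinusoids standing in for EEG:

```python
import numpy as np

def tespar_symbols(x):
    """Split a waveform at zero crossings into epochs and code each epoch
    by D (duration in samples) and S (number of interior extrema)."""
    sign = np.sign(x)
    sign[sign == 0] = 1
    edges = np.flatnonzero(np.diff(sign)) + 1
    symbols = []
    for a, b in zip(np.r_[0, edges], np.r_[edges, len(x)]):
        seg = np.abs(x[a:b])
        # interior local minima of |x| within the epoch (the "shape" S)
        s = int(np.sum((seg[1:-1] < seg[:-2]) & (seg[1:-1] < seg[2:])))
        symbols.append((b - a, s))
    return symbols

fs = 1000
t = np.arange(0, 1, 1 / fs)
low = np.sin(2 * np.pi * 5 * t)       # slow rhythm (deep-anesthesia-like)
high = np.sin(2 * np.pi * 50 * t)     # fast rhythm
d_low = np.mean([d for d, _ in tespar_symbols(low)])
d_high = np.mean([d for d, _ in tespar_symbols(high)])
print(f"mean epoch duration: slow {d_low:.1f} samples, fast {d_high:.1f} samples")
```

Histograms of the (D, S) symbol stream (the TESPAR S- and A-matrices) then serve as features for a classifier; here the slow signal's epochs are about ten times longer than the fast signal's.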
Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality, and most synchronization measures do not distinguish causal from non-causal interactions among remote brain areas, i.e., they do not determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive magneto-/electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction, which leads to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and is not influenced by volume conduction. ...
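A minimal discrete transfer-entropy estimator (history length 1, binary toy signals, far simpler than what M/EEG analysis requires) shows the directional asymmetry TE detects when one signal drives another:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)

def transfer_entropy(source, target):
    """Plug-in TE(source -> target) in bits, with history length 1."""
    trip = list(zip(target[1:], target[:-1], source[:-1]))
    n = len(trip)
    p_xyz = Counter(trip)
    p_yz = Counter((y, z) for _, y, z in trip)
    p_xy = Counter((x, y) for x, y, _ in trip)
    p_y = Counter(y for _, y, _ in trip)
    te = 0.0
    for (x, y, z), c in p_xyz.items():
        # p(x_t | y_{t-1}, z_{t-1}) versus p(x_t | y_{t-1})
        te += (c / n) * np.log2((c / p_yz[(y, z)]) / (p_xy[(x, y)] / p_y[y]))
    return te

x = rng.integers(0, 2, 20000)
y = np.empty_like(x)
y[0] = 0
# y copies x with one step of delay and 10% random flips
y[1:] = np.where(rng.random(len(x) - 1) < 0.9, x[:-1], 1 - x[:-1])

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(f"TE(x->y) = {te_xy:.3f} bits, TE(y->x) = {te_yx:.3f} bits")
```

TE(x->y) approaches 1 - H(0.1) ≈ 0.53 bits while TE(y->x) stays near zero, reflecting the one-way information flow that symmetric correlation measures cannot reveal.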
Poster presentation: Our work deals with the self-organization [1] of a memory structure that comprises multiple hierarchical levels with massive recurrent communication within and between them. Such a structure has to provide a representational basis for the relevant objects to be stored and recalled in a rapid and efficient way. Assuming that object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the formation of this structure and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits that receive common afferents and are bound by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, with all types of synapses – bottom-up, lateral, and top-down – being plastic. During iterative learning, the lower layer of the system is exposed to responses of Gabor filter banks extracted at local points of the face images. Facing an unsupervised learning problem, the system develops a synaptic structure capturing local features and their relations at the lower level, as well as the global identity of the person at the higher level of processing, gradually improving its recognition performance over learning time. ...
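The Gabor front end assumed for the lower layer can be sketched with a small filter bank; kernel size and parameters below are placeholders, not the values used in the poster:

```python
import numpy as np

def gabor_kernel(size, wavelength, orientation, sigma):
    """Real-valued 2-D Gabor kernel; parameter values below are placeholders."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()            # zero mean: flat image patches give zero response

# bank of four orientations; the local feature vector at an image point is the
# set of dot products between the kernels and the surrounding patch
bank = [gabor_kernel(15, 6.0, th, 3.0) for th in np.linspace(0, np.pi, 4, endpoint=False)]
patch = bank[0] / np.linalg.norm(bank[0])           # a patch matching orientation 0
resp = [float(np.sum(kern * patch)) for kern in bank]
print("strongest response at orientation index:", int(np.argmax(resp)))
```

Each local point on a face image thus yields an orientation-tuned response vector, the "local feature" on which the parts-based learning operates.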
Poster presentation: Introduction We study the problem of object recognition invariant to transformations such as translation, rotation, and scale. A system is underdetermined if its degrees of freedom (the number of possible transformations and potential objects) exceed the available information (the image size). Regularization theory solves this problem by adding constraints [1]. It is unclear, however, what constraints biological systems use. We suggest that, rather than seeking constraints, an underdetermined system can make decisions based on the available information by grouping its variables. To demonstrate this strategy, we propose a dynamical system as a minimal system for invariant recognition. ...
Poster presentation: Introduction Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regularly firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (such as pharmacological treatment or genetic manipulation) on these neuronal discharge patterns may be subtle. Applying a stochastic model is a quantitative approach to revealing such changes. ...
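One common stochastic model of this kind treats inter-spike intervals as draws from a Gamma renewal process, whose shape parameter separates pacemaker-like from Poisson-like firing. A moment-matching sketch on synthetic data (illustrative, not necessarily the model used in the poster):

```python
import numpy as np

rng = np.random.default_rng(4)

def gamma_shape_from_isis(isis):
    """Moment-matching estimate of a Gamma shape parameter from inter-spike
    intervals: shape = 1/CV^2, where CV is the coefficient of variation.
    Large shape -> pacemaker-like regularity; shape near 1 -> Poisson-like."""
    cv = np.std(isis) / np.mean(isis)
    return 1.0 / cv**2

# synthetic ISI data (ms): a regular and an irregular cell
regular = rng.gamma(shape=10.0, scale=10.0, size=5000)
irregular = rng.gamma(shape=1.0, scale=100.0, size=5000)
est_regular = gamma_shape_from_isis(regular)
est_irregular = gamma_shape_from_isis(irregular)
print(f"estimated shapes: regular ~ {est_regular:.1f}, irregular ~ {est_irregular:.1f}")
```

A shift in the fitted shape parameter between conditions then quantifies even subtle changes in discharge regularity.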
NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events
(2009)
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing differs significantly from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including changes in the rate responses and history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data, we compare the performance of NeuroXidence with the UE-method [2,3] and cross-correlation analysis. An application of NeuroXidence to 42 single units (SUs) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexity, involving the firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexity showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and thereby support the temporal coding hypothesis in visual cortex. ...
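The core idea of testing observed joint-spike counts against surrogates that preserve slow rate structure can be illustrated with a simple spike-time jitter test. This is a crude stand-in, not the NeuroXidence algorithm itself; all function names and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def count_coincidences(a, b, window=2.0):
    """Spikes in train a (ms) with at least one spike of b within +/-window ms."""
    return int(sum(np.any(np.abs(b - t) <= window) for t in a))

def jitter_test(a, b, window=2.0, jitter=20.0, n_surr=500):
    """Compare observed coincidences against spike-time-jittered surrogates.
    Jittering by +/-jitter ms preserves slow rate modulations but destroys
    millisecond-scale synchrony (a crude stand-in for surrogates that keep
    the full auto-structure of the data, as NeuroXidence does)."""
    obs = count_coincidences(a, b, window)
    surr = [count_coincidences(a, b + rng.uniform(-jitter, jitter, b.size), window)
            for _ in range(n_surr)]
    p = (1 + sum(s >= obs for s in surr)) / (1 + n_surr)
    return obs, p

# two 1-s spike trains with 40 injected near-coincident events plus background
shared = np.sort(rng.uniform(0, 1000, 40))
a = np.sort(np.concatenate([shared, rng.uniform(0, 1000, 40)]))
b = np.sort(np.concatenate([shared + rng.normal(0, 0.5, 40), rng.uniform(0, 1000, 40)]))
obs, p = jitter_test(a, b)
print(f"observed coincidences: {obs}, surrogate p-value: {p:.3f}")
```

Injected millisecond synchrony survives the observed count but not the surrogates, so the test rejects the chance hypothesis.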
Poster presentation: Introduction We focus here on constructing a hierarchical neural system for position-invariant recognition, one of the most fundamental forms of invariant recognition achieved in visual processing [1,2]. Invariant recognition has been hypothesized to proceed by matching the sensory image of a particular object projected onto the retina to the most suitable representation stored in memory in higher visual cortical areas. Here a general problem arises: in such visual processing, the position of the object image on the retina is initially uncertain. Furthermore, the retinal activity carrying the sensory information differs greatly from the activity in the higher area, where part of the object information is lost. Nevertheless, despite this ambiguity, the object can be recognized effortlessly. Our aim in this work is to resolve this general recognition problem. ...
Poster presentation: Introduction We address here the problem of integrating information about multiple objects and their positions in the visual scene. The primate visual system has little difficulty in rapidly achieving such integration, given only a few objects. Computer vision, in contrast, still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism for handling information about objects and their positions in parallel. Elaborating on this idea, we propose a neurally plausible mechanism for linking local decisions about "what" and "where" information to global multi-object recognition. ...
Poster presentation: Introduction The brain is a highly interconnected network of constantly interacting units. Understanding the collective behavior of these units requires a multi-dimensional approach. The results of such analyses are hard to visualize and interpret. Hence, tools capable of dealing with these tasks become imperative. ...
A small-world network has been suggested to be an efficient solution for achieving both modular and global processing, a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to estimate the strength of coupling from the degree of synchrony. Visual responses were recorded in the visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and therefore provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties: the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of "hubs" in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding.
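The two small-world diagnostics used above, clustering coefficient and average shortest path length, can be computed directly from a binarized interaction network. A minimal sketch on a toy ring lattice with shortcuts (illustrative only, not the recorded data):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)

def clustering(adj):
    """Mean clustering coefficient of an undirected 0/1 adjacency matrix."""
    coeffs = []
    for i in range(len(adj)):
        nb = np.flatnonzero(adj[i])
        k = len(nb)
        if k < 2:
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2       # edges among neighbours
        coeffs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

def avg_path_length(adj):
    """Mean shortest-path length over connected ordered pairs (BFS per node)."""
    n = len(adj)
    total = pairs = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# toy network: ring lattice (high clustering) plus a few random shortcuts
n, k = 30, 4
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for d in range(1, k // 2 + 1):
        adj[i, (i + d) % n] = adj[(i + d) % n, i] = 1
for _ in range(8):                                  # shortcuts shorten paths
    i, j = rng.choice(n, size=2, replace=False)
    adj[i, j] = adj[j, i] = 1
c, apl = clustering(adj), avg_path_length(adj)
print(f"clustering: {c:.2f}, average path length: {apl:.2f}")
```

The shortcut edges pull the path length toward random-graph values while the lattice neighbourhoods keep the clustering high, which is the signature reported for the reconstructed cortical networks.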
Background Synchronous neuronal firing has been discussed as a potential neuronal code. A large set of tools has been developed to test, first, whether synchronous firing exists; second, whether it is modulated by behaviour; and third, whether it occurs above chance level. However, to test whether synchronous neuronal firing is really involved in information processing, one needs a direct comparison of the amount of synchronous firing across different factors, such as experimental or behavioural conditions. To this end, we present an extended version of the previously published method NeuroXidence [1], which tests, based on a bi- and multivariate test design, whether the amount of synchronous firing above chance level differs between factors.
Background The synchrony hypothesis postulates that precise temporal synchronization of different pools of neurons conveys information that is not contained in their firing rates. The hypothesis has been supported by experimental findings demonstrating that millisecond-precise synchrony of neuronal oscillations across well-separated brain regions plays an essential role in visual perception and other higher cognitive tasks [1]. Although evidence keeps accumulating in favour of its role as a binding mechanism for distributed neural responses, the physical and anatomical substrate for such dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization over a wide range of temporal delays [3]. We demonstrate that zero-lag synchronization between two distant neurons or neural populations can be achieved by relaying the dynamics via a third mediating neuron or population. Methods We simulated the dynamics of two Hodgkin-Huxley neurons that interact with each other via an intermediate third neuron. The synaptic coupling was mediated through alpha-functions. Individual temporal delays of the arrival of pre-synaptic potentials were modelled by a gamma distribution. The strength of the synchronization and the phase difference between each individual pair were derived by cross-correlation of the membrane potentials. Results In the regular spiking regime, the two outer neurons consistently synchronize with zero phase lag irrespective of the initial conditions. This robust zero-lag synchronization arises naturally as a consequence of the relay and redistribution of the dynamics performed by the central neuron. The result is independent of whether the coupling is excitatory or inhibitory and is maintained for arbitrarily long time delays (see Fig. 1). 
Conclusion We have presented a simple and extremely robust network motif that accounts for the isochronous synchronization of distant neural elements in a natural way. In contrast to other proposed mechanisms of neural synchronization, neither inhibitory coupling, gap junctions, nor precise tuning of morphological parameters is required to obtain zero-lag synchronized neuronal oscillations.
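The relay mechanism can be illustrated with identical phase oscillators in place of Hodgkin-Huxley neurons: two outer units coupled only through a central one, with a conduction delay on every link, still lock at zero lag. A minimal sketch with illustrative coupling strength and delay:

```python
import numpy as np

# Three identical phase oscillators in a chain 1 - 2 - 3 (no direct 1-3 link),
# every link having the same conduction delay. Hodgkin-Huxley dynamics are
# replaced by Kuramoto-style phases purely for illustration; the common
# natural frequency is factored out (co-rotating frame).
dt, steps = 0.01, 20000
delay = 30                      # 0.3 time units, in integration steps
K = 1.0                         # coupling strength (illustrative value)
theta = np.zeros((steps, 3))
theta[0] = [0.0, 2.0, 4.0]      # arbitrary initial phases
for t in range(1, steps):
    cur = theta[t - 1]
    past = theta[max(t - 1 - delay, 0)]             # delayed presynaptic phases
    dtheta = K * np.array([
        np.sin(past[1] - cur[0]),                               # outer 1 <- relay
        np.sin(past[0] - cur[1]) + np.sin(past[2] - cur[1]),    # relay <- both outers
        np.sin(past[1] - cur[2]),                               # outer 3 <- relay
    ])
    theta[t] = cur + dt * dtheta
# phase lag between the two outer oscillators, wrapped to (-pi, pi]
lag = (theta[-1, 0] - theta[-1, 2] + np.pi) % (2 * np.pi) - np.pi
print(f"outer-outer phase lag: {lag:.4f} rad")
```

Because both outer units receive the same delayed drive from the relay, the difference of their phases decays regardless of the delay, reproducing in simplified form the zero-lag locking of the outer Hodgkin-Huxley neurons.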