004 Data Processing; Computer Science
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the many functionally diverse brain networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives on and accelerates research into the human brain basis of visual event perception.
The ubiquitin (Ub) code denotes the complex Ub architectures, including Ub chains of different lengths, linkage types and linkage combinations, which enable ubiquitination to control a wide range of protein fates. Although many linkage-specific interactors have been described, how interactors are able to decode more complex architectures is not fully understood. We conducted a Ub interactor screen, in humans and yeast, using Ub chains of varying length as well as homotypic and heterotypic branched chains of the two most abundant linkage types – K48- and K63-linked Ub. We identified some of the first K48/K63 branch-specific Ub interactors, including histone ADP-ribosyltransferase PARP10/ARTD10, E3 ligase UBR4 and huntingtin-interacting protein HIP1. Furthermore, we revealed the importance of chain length by identifying interactors with a preference for Ub3 over Ub2 chains, including Ub-directed endoprotease DDI2, autophagy receptor CCDC50 and p97-adaptor FAF1. Crucially, we compared datasets collected using two common DUB inhibitors – chloroacetamide and N-ethylmaleimide. This revealed inhibitor-dependent interactors, highlighting the importance of inhibitor consideration during pulldown studies. This dataset is a key resource for understanding how the Ub code is read.
Recurrent cortical network dynamics play a crucial role in sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for understanding recurrent neural computation, it often requires manual adjustment of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials) or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that rely solely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean-field approximation of the variances of neural activities, which allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is determined dynamically as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, but precise tuning is negatively affected by interneural correlations. Furthermore, flow control showed consistent task performance over a wide range of input strengths and variances. Variance control, on the other hand, did not yield the desired spectral radii with the same precision. Moreover, task performance was less consistent across different input strengths.
Given the better performance and simpler mathematical form of flow control, we concluded that local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical "set point" homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that the network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. A direct calculation of the spectral radius is therefore not feasible for biological networks. However, we show that there exists a local and biologically plausible adaptation mechanism, flow control, which makes it possible to control the spectral radius of the recurrent weights while the network operates under the influence of external inputs. Flow control is based on a theorem of random matrix theory, which is applicable when inter-synaptic correlations are weak. We apply the new adaptation rule to echo state networks tasked with performing a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
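To make the flow control idea concrete, below is a minimal numerical sketch of an echo state network whose per-neuron scaling factors are adapted online by comparing the squared recurrent membrane potential with the squared neural activity scaled by the target spectral radius. The tanh nonlinearity, the adaptation rate and the exact multiplicative update form are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of flow control in an echo state network (assumptions: tanh units,
# Gaussian reservoir; eps and the update form are illustrative choices).
import numpy as np

rng = np.random.default_rng(0)
N = 200                  # reservoir size
R_target = 1.0           # desired spectral radius
eps = 1e-3               # adaptation rate

W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # raw recurrent weights
a = np.ones(N)           # per-neuron scaling factors, adapted locally
W_in = rng.normal(0.0, 1.0, size=N)
x = np.zeros(N)          # neural activities

for t in range(20000):
    u = rng.choice([-1.0, 1.0])        # random binary input stream
    I_rec = (a[:, None] * W) @ x       # recurrent membrane potential
    I_ext = W_in * u                   # external input current (separable)
    x_new = np.tanh(I_rec + I_ext)
    # Flow control: instantaneous (stochastic) comparison of the squared
    # recurrent input with the squared activity that drove it, scaled by
    # the target spectral radius; the means are tracked implicitly.
    a *= 1.0 + eps * (R_target**2 * x**2 - I_rec**2)
    x = x_new

rho = np.max(np.abs(np.linalg.eigvals(a[:, None] * W)))
print(f"spectral radius after adaptation: {rho:.3f}")  # close to R_target
```

At the fixed point the mean squared recurrent input equals R_target² times the mean squared activity, which, by the random-matrix argument mentioned in the summary, pins the spectral radius near R_target as long as interneural correlations stay weak.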
Graphs are an omnipresent way to represent information in machine learning. In neuroscience research especially, data from Diffusion-Tensor Imaging (DTI) and functional Magnetic Resonance Imaging (fMRI) are commonly represented as graphs. Exploiting the graph structure of these modalities with graph-specific machine learning applications is currently hampered by the lack of easy-to-use software. PHOTONAI Graph aims to close the gap between machine learning experts, graph experts and neuroscientists. Leveraging the rapid model development features of the Python machine learning API PHOTONAI, PHOTONAI Graph enables practitioners to design, optimize and evaluate reliable graph machine learning models. As such, it provides easy access to custom graph machine learning pipelines, including hyperparameter optimization and algorithm evaluation, ensuring reproducibility and valid performance estimates. Integrating established algorithms such as graph neural networks, graph embeddings and graph kernels, it allows researchers without significant coding experience to build and optimize complex graph machine learning models within a few lines of code. We showcase the versatility of this toolbox by building pipelines for both resting-state fMRI and DTI data, in the hope that it will increase the adoption of graph-specific machine learning algorithms in neuroscience research.
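As an illustration of the intended workflow, the following sketch builds a graph classification pipeline using PHOTONAI's documented Hyperpipe/PipelineElement pattern. The graph element names ('GraphConstructorThreshold', 'GraphEmbeddingNode2Vec') are assumptions for illustration and may differ from the names actually registered in PHOTONAI Graph.

```python
# Hedged sketch of a PHOTONAI-style graph pipeline. Hyperpipe and
# PipelineElement are PHOTONAI's core API; the graph element names below
# are illustrative assumptions.
from photonai.base import Hyperpipe, PipelineElement
from sklearn.model_selection import KFold

pipe = Hyperpipe('fmri_graph_pipeline',
                 project_folder='./photonai_results',
                 optimizer='grid_search',
                 metrics=['accuracy', 'balanced_accuracy'],
                 best_config_metric='balanced_accuracy',
                 outer_cv=KFold(n_splits=5),
                 inner_cv=KFold(n_splits=3))

# Build graphs from connectivity matrices, embed them, then classify.
pipe += PipelineElement('GraphConstructorThreshold',
                        hyperparameters={'threshold': [0.8, 0.9, 0.95]})
pipe += PipelineElement('GraphEmbeddingNode2Vec')
pipe += PipelineElement('SVC', hyperparameters={'C': [0.1, 1.0, 10.0]})

# X: subjects x regions x regions connectivity matrices, y: class labels
# pipe.fit(X, y)
```

The `+=` composition and nested cross-validation shown here are what yields the reproducibility and valid performance estimates the abstract refers to: hyperparameters are tuned only on inner folds, never on the held-out outer folds.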
Bioinformatics analysis quantifies neighborhood preferences of cancer cells in Hodgkin lymphoma
(2017)
Motivation Hodgkin lymphoma is a tumor of the lymphatic system and one of the most frequent lymphomas in the Western world. It is characterized by Hodgkin cells and Reed-Sternberg cells, which exhibit a broad morphological spectrum. The cells are visualized by immunohistochemical staining of tissue sections. In pathology, tissue images are mainly evaluated manually, relying on the expertise and experience of pathologists. Computational quantification methods are becoming increasingly essential for evaluating tissue images. In particular, the distribution of cancer cells is of great interest.
Results Here, we systematically quantified and investigated cancer cell properties and their spatial neighborhood relations by applying statistical analyses to whole-slide images of Hodgkin lymphoma and lymphadenitis, a non-cancerous inflammation of the lymph node. We differentiated cells by their morphology and studied the spatial neighborhood relations of more than 400,000 immunohistochemically stained cells. We found that, according to their morphological features, cells exhibited significant preferences for, and aversions to, cells of specific profiles as nearest neighbors. We quantified differences between Hodgkin lymphoma and lymphadenitis concerning the neighborhood relations and sizes of cells. The approach can easily be applied to other cancer types.
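The kind of neighborhood-preference analysis described above can be sketched in a few lines: compare observed nearest-neighbor label pairings against a permutation null in which cell labels are shuffled over fixed positions. This is an illustrative reconstruction on synthetic data, not the authors' exact pipeline.

```python
# Nearest-neighbor preference analysis, illustrative version.
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_index(coords):
    """Index of each cell's nearest neighbor (k=2: the first hit is the cell itself)."""
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=2)
    return idx[:, 1]

def pair_counts(labels, nn_idx, n_types):
    """counts[i, j]: how often a type-i cell has a type-j cell as nearest neighbor."""
    counts = np.zeros((n_types, n_types))
    np.add.at(counts, (labels, labels[nn_idx]), 1)
    return counts

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1000, size=(5000, 2))   # synthetic cell centroids (px)
labels = rng.integers(0, 3, size=5000)          # 3 morphological profiles

nn_idx = nearest_neighbor_index(coords)
observed = pair_counts(labels, nn_idx, 3)

# Permutation null: shuffle labels over the fixed cell positions.
null = np.zeros_like(observed)
n_perm = 200
for _ in range(n_perm):
    null += pair_counts(rng.permutation(labels), nn_idx, 3)
null /= n_perm

# Ratios > 1 indicate a preference, < 1 an aversion, for type j as nearest neighbor.
print(observed / null)
```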
isiKnock is a new software tool that automatically conducts in silico knockouts for mathematical models of biochemical pathways. The software allows for the prediction of the behavior of biological systems after single or multiple knockouts. The implemented algorithm applies transition invariants and the novel concept of Manatee invariants. A knockout matrix visualizes the results. The tool enables the analysis of dependencies, for example in signal flows from receptor activation to the cell response at steady state.
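A toy sketch of the knockout-matrix idea follows. It assumes a simplified semantics in which a pathway response survives a knockout if at least one invariant supporting it contains none of the knocked-out transitions; all transition and response names are hypothetical, and isiKnock's actual algorithm, based on Manatee invariants, is more involved.

```python
# Toy knockout matrix over transition invariants (hypothetical names).
invariants = {
    "inv1": {"receptor_binding", "kinase_A", "response_X"},
    "inv2": {"receptor_binding", "kinase_B", "response_X"},
    "inv3": {"receptor_binding", "kinase_B", "response_Y"},
}
responses = ["response_X", "response_Y"]
knockouts = ["kinase_A", "kinase_B"]

def survives(response, knocked_out):
    """True if some invariant contains the response but no knocked-out transition."""
    return any(response in inv and not (inv & set(knocked_out))
               for inv in invariants.values())

# Knockout matrix: rows = knockouts, columns = responses.
for ko in knockouts:
    row = ["+" if survives(r, [ko]) else "-" for r in responses]
    print(f"{ko:10s} {row}")
# kinase_A   ['+', '+']   (response_X still supported via inv2)
# kinase_B   ['+', '-']   (response_Y loses all support)
```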
Motivation: Partial differential equations (PDEs) are a well-established and powerful tool for simulating multi-cellular biological systems. However, freely available tools for validating such models against data are not established. The PDEparams module provides flexible functionality in Python for parameter estimation in PDE models.
Results: The PDEparams module provides a flexible interface and readily accommodates different parameter analysis tools for PDE models, such as the computation of likelihood profiles and parametric bootstrapping, along with direct visualisation of the results. To our knowledge, it is the first open, freely available tool for parameter fitting of PDE models.
Availability and implementation: The PDEparams module is distributed under the MIT license. The source code, usage instructions and step-by-step examples are freely available on GitHub at github.com/systemsmedicine/PDE_params.
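To illustrate the kind of workflow PDEparams supports, the following generic sketch estimates the diffusion coefficient of a 1D diffusion equation from noisy synthetic observations by least squares; it deliberately uses plain scipy rather than the PDEparams API, whose exact function names are not reproduced here.

```python
# Generic PDE parameter estimation sketch (method of lines + least squares).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

x = np.linspace(0, 1, 51)
dx = x[1] - x[0]
u0 = np.exp(-100 * (x - 0.5) ** 2)        # initial condition: Gaussian bump

def rhs(t, u, D):
    """Method-of-lines discretisation of u_t = D u_xx with zero-flux boundaries."""
    du = np.empty_like(u)
    du[1:-1] = D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    du[0] = D * 2 * (u[1] - u[0]) / dx**2
    du[-1] = D * 2 * (u[-2] - u[-1]) / dx**2
    return du

def simulate(D, t_obs):
    sol = solve_ivp(rhs, (0, t_obs[-1]), u0, t_eval=t_obs, args=(D,))
    return sol.y.T                         # shape: (times, space)

t_obs = np.linspace(0.01, 0.1, 5)
rng = np.random.default_rng(2)
data = simulate(0.02, t_obs) + rng.normal(0, 0.005, (5, 51))  # synthetic data

sse = lambda D: np.sum((simulate(D, t_obs) - data) ** 2)
fit = minimize_scalar(sse, bounds=(1e-4, 0.1), method='bounded')
print(f"estimated D = {fit.x:.4f} (true value 0.02)")
```

A likelihood profile of the kind mentioned in the abstract would repeat this fit while holding one parameter fixed at a grid of values and recording the resulting residual sum of squares.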
DNA points accumulation for imaging in nanoscale topography (DNA-PAINT) is a super-resolution technique with relatively easy-to-implement multi-target imaging. However, image acquisition is slow, as sufficient statistical data has to be generated from spatio-temporally isolated single emitters. Here, we trained the neural network (NN) DeepSTORM to predict fluorophore positions from high-emitter-density DNA-PAINT data. This reduces image acquisition to one minute. We demonstrate multi-color super-resolution imaging of structure-conserved semi-thin neuronal tissue and imaging of large samples. This improvement can be integrated into any single-molecule microscope and enables fast single-molecule super-resolution microscopy.
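A minimal PyTorch sketch of a DeepSTORM-style localisation network is shown below: a fully convolutional net that maps a raw high-density camera frame to an upsampled emitter-density map. The published DeepSTORM architecture is deeper; layer sizes and the upsampling factor here are placeholder choices.

```python
# DeepSTORM-style density-prediction network, minimal illustrative version.
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    def __init__(self, upsample=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upsample, mode='nearest'),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # per-pixel emitter density
        )

    def forward(self, frame):
        return self.net(frame)

model = DensityNet()
frame = torch.randn(1, 1, 32, 32)           # one camera frame (batch, ch, H, W)
density = model(frame)                       # (1, 1, 256, 256) super-res grid
print(density.shape)
# Training would regress density maps built from ground-truth emitter
# positions, e.g. MSE against Gaussian-blurred position maps, which is
# what lets the network handle overlapping emitters in dense frames.
```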
Electrocardiograms (ECGs) record heart activity and are the most common and reliable method to detect cardiac arrhythmias such as atrial fibrillation (AFib). Lately, many commercially available devices, such as smartwatches, offer ECG monitoring. There is therefore increasing demand for deep learning models designed with the perspective of being physically implemented on these small portable devices with limited energy supply. In this paper, a workflow for the design of a small, energy-efficient recurrent convolutional neural network (RCNN) architecture for AFib detection is proposed; the approach generalizes well to any type of long time series. In contrast to previous studies that demand thousands of additional network neurons and millions of extra model parameters, the logical steps for the generation of a CNN with only 114 trainable parameters are described. The model consists of a small segmented CNN in combination with an optimal-energy classifier. The architectural decisions are made by treating energy consumption as a metric of equal importance to accuracy. The optimisation steps focus on the software, which can afterwards be embedded on a physical chip. Finally, a comparison with previous relevant studies suggests that the huge CNNs widely used for similar tasks are mostly redundant and needlessly computationally expensive.
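For scale, a sketch of a comparably tiny 1D CNN for binary AFib classification is given below; the layer shapes are assumptions that land near, but not exactly at, the paper's 114 trainable parameters, and the energy-aware classifier is not reproduced.

```python
# Illustrative tiny 1D CNN for AFib detection (layer shapes are assumptions).
import torch
import torch.nn as nn

class TinyAFibNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=5, stride=2), nn.ReLU(),  # 4*1*5+4 = 24
            nn.Conv1d(4, 4, kernel_size=5, stride=2), nn.ReLU(),  # 4*4*5+4 = 84
            nn.AdaptiveAvgPool1d(1),       # global pooling: no parameters
        )
        self.classifier = nn.Linear(4, 2)                          # 4*2+2 = 10

    def forward(self, ecg):
        # ecg: (batch, 1, samples) -> class logits (batch, 2)
        return self.classifier(self.features(ecg).squeeze(-1))

model = TinyAFibNet()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params}")   # 118 with these choices
logits = model(torch.randn(8, 1, 1000))      # a batch of 1000-sample ECG segments
```

Strided convolutions plus global average pooling keep the parameter count independent of the input length, which is what makes such a model plausible for long ECG time series on energy-constrained hardware.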