004 Data processing; computer science
Tracking influenza A virus infection in the lung from hematological data with machine learning
(2022)
The tracking of pathogen burden and host responses with minimally invasive methods during respiratory infections is central for monitoring disease development and guiding treatment decisions. Utilizing a standardized murine model of respiratory influenza A virus (IAV) infection, we developed and tested different supervised machine learning models to predict viral burden and immune response markers, i.e. cytokines and leukocytes in the lung, from hematological data. We performed independent in vivo infection experiments to acquire extensive data for training and testing the models. We show here that lung viral load, neutrophil counts, cytokines like IFN-γ and IL-6, and other lung infection markers can be predicted from hematological data. Furthermore, feature analysis of the models shows that blood granulocytes and platelets play a crucial role in prediction and are highly involved in the immune response against IAV. The proposed in silico tools pave the way towards improved tracking and monitoring of influenza infections, and possibly other respiratory infections, based on minimally invasively obtained hematological parameters.
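The supervised approach described above can be sketched as follows. This is a hypothetical illustration: the feature names, the synthetic data, and the choice of a ridge-regression model are all stand-ins for the paper's actual measurements and models, but it shows how feature analysis reveals which blood parameters drive the prediction.

```python
# Hypothetical sketch: a linear model maps blood (hematological) features to
# a lung infection marker; its weights indicate which features matter.
# Features and data are invented, not the paper's measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic standardized features: [granulocytes, platelets, lymphocytes]
X = rng.normal(size=(n, 3))
# Synthetic target ("lung viral load"), driven by the first two features
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=n)

X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Ridge regression in closed form: w = (X^T X + a I)^(-1) X^T y
a = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + a * np.eye(3), X_tr.T @ y_tr)

pred = X_te @ w
r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(r2)  # held-out R^2
print(w)   # weights recover the simulated feature effects
```

On this synthetic data, the recovered weights are large for the two informative features and near zero for the noise feature, mirroring the kind of feature analysis reported in the abstract.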
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
Poster presentation. A central problem in neuroscience is to bridge local synaptic plasticity and the global behavior of a system. It has been shown that Hebbian learning of connections in a feedforward network performs PCA on its inputs [1]. In a recurrent Hopfield network with binary units, the Hebbian-learnt patterns form the attractors of the network [2]. Starting from a random recurrent network, Hebbian learning reduces system complexity from chaotic to fixed-point dynamics [3]. In this paper, we investigate the effect of Hebbian plasticity on the attractors of a continuous dynamical system. In a Hopfield network with binary units, it can be shown that Hebbian learning of an attractor stabilizes it, deepening the energy landscape and enlarging the basin of attraction. We are interested in how these properties carry over to continuous dynamical systems. Consider a system of the form

    dx_i/dt = -x_i + f_i( g_i Σ_j T_ij x_j ),   (1)

where x_i is a real variable and f_i a nondecreasing nonlinear function with range [-1,1]. T is the synaptic matrix, assumed to have been learned from orthogonal binary ({1,-1}) patterns ξ^μ by the Hebbian rule T_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ. As in the continuous Hopfield network [4], the ξ^μ are no longer attractors unless the gains g_i are large. Assume that the system settles into an attractor X* and undergoes Hebbian plasticity: T' = T + εX*X*^T, where ε > 0 is the learning rate. We study how the attractor dynamics change following this plasticity. We show that, in system (1) under certain general conditions, Hebbian plasticity moves the attractor towards its corner of the hypercube. Linear stability analysis around the attractor shows that the maximum eigenvalue becomes more negative with learning, indicating a deeper landscape. This in a way improves the system's ability to retrieve the corresponding stored binary pattern, although the attractor itself is not stabilized the way it is in binary Hopfield networks.
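The setup above can be sketched numerically. The following is an illustrative toy, not the authors' analysis: it assumes dynamics of the form dx_i/dt = -x_i + tanh(g (T x)_i) with a single stored pattern, relaxes to an attractor, applies the Hebbian update T' = T + ε X* X*^T, and checks that the new attractor sits closer to a corner of the hypercube.

```python
# Toy sketch of the continuous Hopfield-type system and Hebbian plasticity
# described above. Dynamics form, tanh nonlinearity, and all parameter
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, g, eps, dt = 50, 4.0, 0.1, 0.05

# One stored binary (+1/-1) pattern via the Hebbian rule T_ij = xi_i xi_j / N
xi = rng.choice([-1.0, 1.0], size=N)
T = np.outer(xi, xi) / N

def relax(T, x, steps=2000):
    # Euler integration of dx_i/dt = -x_i + tanh(g * sum_j T_ij x_j)
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(g * (T @ x)))
    return x

x_star = relax(T, xi + 0.3 * rng.normal(size=N))   # attractor of the system

# Hebbian plasticity on the reached attractor: T' = T + eps * x* x*^T
T_new = T + eps * np.outer(x_star, x_star)
x_new = relax(T_new, x_star)

# After learning, attractor components move closer to +/-1
print(np.abs(x_star).mean(), np.abs(x_new).mean())
```

In this toy, the mean absolute value of the attractor components increases after the Hebbian update, consistent with the attractor moving towards its corner of the hypercube.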
Neural computations emerge from recurrent neural circuits that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, it is challenging to predict which spiking network connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions for the computational state of cortical spiking circuits are diverse, including the balanced state where excitatory and inhibitory inputs balance almost perfectly or the inhibition stabilized state (ISN) where the excitatory part of the circuit is unstable. It remains an open question whether these states can co-exist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We established a mapping between the stabilized supralinear network (SSN) and spiking activity which allowed us to pinpoint the location in parameter space where these activity regimes occur. Notably, we found that biologically sized spiking networks can have irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input, and we showed that the dynamic firing rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
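The rate-model side of the SSN mapping mentioned above can be sketched in a few lines. This is an illustrative two-population (E, I) rate model with a supralinear power-law transfer function; the parameter values are invented for the sketch and are not those used in the work.

```python
# Illustrative stabilized-supralinear-network (SSN) style rate model:
# two populations with a rectified power-law nonlinearity settle into a
# stable fixed point. All parameters are assumptions for illustration.
import numpy as np

k, n = 0.01, 2.0                      # power-law gain and exponent
W = np.array([[1.0, -1.5],            # E<-E, E<-I
              [1.2, -1.0]])           # I<-E, I<-I
tau = np.array([0.02, 0.01])          # time constants (s)

def simulate(c, dt=1e-4, steps=20000):
    # Euler integration of tau dr/dt = -r + k [W r + c]_+^n
    r = np.zeros(2)
    for _ in range(steps):
        z = np.clip(W @ r + c, 0.0, None)   # rectified net input
        drdt = (-r + k * z ** n) / tau
        r = r + dt * drdt
    return r, drdt

for c in (20.0, 40.0):
    r, drdt = simulate(np.array([c, c]))
    print(c, r, np.abs(drdt).max())   # rates settle (derivative ~ 0)
```

Despite the expansive nonlinearity, recurrent inhibition keeps the rates finite and the network converges to a stable fixed point for each input level, which is the basic phenomenon the SSN framework formalizes.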
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success. Keywords: synaptic plasticity, intrinsic plasticity, recurrent neural networks, reservoir computing, time series prediction
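A minimal sketch of a SORN-like network is given below. It is a simplified stand-in, not the original model: sizes, learning rates, and update rules are assumptions, but it combines the three forms of local plasticity named above (STDP, synaptic normalization, intrinsic plasticity) in a binary recurrent network.

```python
# Minimal SORN-like sketch: binary threshold units with STDP, synaptic
# normalization, and intrinsic plasticity (IP). Simplified assumptions,
# not the original SORN model.
import numpy as np

rng = np.random.default_rng(2)
N, h_ip, eta_stdp, eta_ip = 100, 0.1, 0.001, 0.01

W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse excitatory weights
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True) + 1e-12              # synaptic normalization
theta = rng.random(N) * 0.5                            # firing thresholds
x = (rng.random(N) < h_ip).astype(float)

rates = []
for t in range(5000):
    x_new = (W @ x - theta > 0).astype(float)          # binary threshold units
    # STDP: strengthen pre-before-post pairs, weaken post-before-pre pairs
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None)
    W /= W.sum(axis=1, keepdims=True) + 1e-12          # keep input sums fixed
    theta += eta_ip * (x_new - h_ip)                   # IP: track target rate
    x = x_new
    rates.append(x.mean())

# Intrinsic plasticity drives the mean firing rate toward the target h_ip,
# keeping the dynamics in a healthy regime
print(np.mean(rates[-2000:]))
```

The point of the sketch is the interplay the abstract describes: STDP alone would destabilize the weights, while normalization and intrinsic plasticity keep activity near a target rate.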
Self-organized complexity and Coherent Infomax from the viewpoint of Jaynes’s probability theory
(2012)
This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes’s probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. It argues that neural systems do this by combining local reliability with flexible, holistic context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes’s theory.
Structural rearrangements play a central role in the organization and function of complex biomolecular systems. In principle, Molecular Dynamics (MD) simulations enable us to investigate these thermally activated processes with an atomic level of resolution. In practice, an exponentially large fraction of computational resources must be invested to simulate thermal fluctuations in metastable states. Path sampling methods focus the computational power on sampling the rare transitions between states. One of their outstanding limitations is the difficulty of efficiently generating paths that visit significantly different regions of the conformational space. To overcome this issue, we introduce a new algorithm for MD simulations that integrates machine learning and quantum computing. First, using functional integral methods, we derive a rigorous low-resolution spatially coarse-grained representation of the system’s dynamics, based on a small set of molecular configurations explored with machine learning. Then, we use a quantum annealer to sample the transition paths of this low-resolution theory. We provide a proof-of-concept application by simulating a benchmark conformational transition with all-atom resolution on the D-Wave quantum computer. By exploiting the unique features of quantum annealing, we generate uncorrelated trajectories at every iteration, thus addressing one of the challenges of path sampling. Once larger quantum machines become available, the interplay between quantum and classical resources may emerge as a new paradigm of high-performance scientific computing. In this work, we provide a platform to implement this integrated scheme in the field of molecular simulations.
The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.
Robotic gesture recognition
(1998)
Robots of the future should communicate with humans in a natural way. We are especially interested in vision-based gesture interfaces. In the context of robotics several constraints exist, which make the task of gesture recognition particularly challenging. We discuss these constraints and report on progress being made in our lab in the development of techniques for building robust gesture interfaces which can handle these constraints. In an example application, the techniques are shown to be easily combined to build a gesture interface for a real robot grasping objects on a table in front of it.
Background: The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents quickPIV, an efficient 3D PIV package implemented in the Julia programming language. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data.
Results: QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, in both nuclear- and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data.
Conclusions: The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview.
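The core PIV step underlying the package above can be illustrated in 2D: the displacement between two interrogation windows is estimated by locating the peak of their cross-correlation. This sketch uses a plain FFT-based circular cross-correlation in Python; quickPIV's actual implementation (Julia, with zero-normalized and normalized squared error correlations, sub-pixel approximation, and multi-pass) is more elaborate.

```python
# Sketch of the core PIV step: estimate the displacement between two
# interrogation windows from the peak of their cross-correlation.
# Circular FFT correlation suits this synthetic wrap-around example.
import numpy as np

rng = np.random.default_rng(3)
frame1 = rng.random((64, 64))
dy, dx = 5, -3                                # known shift for the synthetic test
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))

# Cross-correlation via FFT: ifft( conj(F1) * F2 ) peaks at the displacement
corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Map the circular peak position back to a signed displacement
shift = [int(p) if p <= s // 2 else int(p - s) for p, s in zip(peak, corr.shape)]
print(shift)                                  # prints [5, -3]
```

A full PIV analysis repeats this estimate over a grid of interrogation windows (in 3D, over volumes), producing the vector fields of collective motion described in the abstract.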