- Intrinsic Motivations Drive Learning of Eye Movements: An Experiment with Human Adults (2015)
- Intrinsic motivations drive the acquisition of knowledge and skills on the basis of novel or surprising stimuli, or the pleasure of learning new skills. In this they differ from extrinsic motivations, which are mainly linked to drives that promote survival and reproduction. Intrinsic motivations have been implicitly exploited in several psychological experiments but, for lack of proper paradigms, they are rarely a direct subject of investigation. This article investigates how different intrinsic motivation mechanisms can support the learning of visual skills, such as “foveate a particular object in space”, using a gaze-contingency paradigm. In the experiment, participants could freely foveate objects shown on a computer screen. Foveating each of two “button” pictures caused a different effect: one caused the appearance of a simple image (a blue rectangle) at unexpected positions, while the other evoked the appearance of an always-novel picture (objects or animals). The experiment studied how two possible intrinsic motivation mechanisms might guide learning to foveate one or the other button picture: one based on the sudden, surprising appearance of a familiar image at unpredicted locations, and one based on the content novelty of the images. The results show that the mechanism based on image novelty is effective, whereas they do not support the operation of the mechanism based on the surprising location of the image appearance. Interestingly, these results were also obtained with participants who, according to a post-experiment questionnaire, had not understood the functions of the different buttons, suggesting that novelty-based intrinsic motivation mechanisms might operate even at an unconscious level.
- Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning (2014)
- Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change as the network evolves are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, become less chaotic during self-organization, developing into a regime where only a few perturbations are amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
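The perturbation analysis described above can be illustrated with a minimal sketch (this is an assumption for illustration, not the paper's code): estimate the maximum Lyapunov exponent of a discrete-time system by tracking the growth of a tiny perturbation, renormalizing the separation at each step. A negative estimate means perturbations decay; a positive one means they are amplified.

```python
import numpy as np

def max_lyapunov(step, x0, eps=1e-8, n_steps=1000, seed=None):
    """Estimate the maximum Lyapunov exponent of the map `step`.

    step: function mapping a state vector to the next state (the network update).
    Returns the average log growth rate of a small perturbation per step.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d0 = rng.normal(size=x.shape)
    y = x + eps * d0 / np.linalg.norm(d0)      # perturbed copy of the trajectory
    log_growth = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / eps)          # accumulate log expansion rate
        y = x + eps * (y - x) / d              # renormalize separation to eps
    return log_growth / n_steps                # < 0: perturbations die out

# Toy check: a contracting affine map x -> 0.5 x + 0.1 halves every
# perturbation per step, so the estimate equals log(0.5).
A = 0.5 * np.eye(3)
lam = max_lyapunov(lambda v: A @ v + 0.1, np.ones(3), seed=0)
```

For a SORN one would replace `step` with the network's state update; "deferred chaos" corresponds to the separation `d` staying near `eps` for many steps before growing.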
- SORN: a self-organizing recurrent neural network (2009)
- Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms shape recurrent networks into effective information-processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success. Keywords: synaptic plasticity, intrinsic plasticity, recurrent neural networks, reservoir computing, time series prediction
- Robust active binocular vision through intrinsically motivated learning (2013)
- The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.
- Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures (2014)
- The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common-neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs, have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity, are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.
- Analysis of a biologically-inspired system for real-time object recognition (2005)
- We present a biologically-inspired system for real-time, feed-forward object recognition in cluttered scenes. Our system utilizes a vocabulary of very sparse features that are shared between and within different object models. To detect objects in a novel scene, these features are located in the image, and each detected feature votes for all objects that are consistent with its presence. Due to the sharing of features between object models our approach is more scalable to large object databases than traditional methods. To demonstrate the utility of this approach, we train our system to recognize any of 50 objects in everyday cluttered scenes with substantial occlusion. Without further optimization we also demonstrate near-perfect recognition on a standard 3-D recognition problem. Our system has an interpretation as a sparsely connected feed-forward neural network, making it a viable model for fast, feed-forward object recognition in the primate visual system.
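The feature-voting scheme described above can be sketched in a few lines (a hypothetical toy illustration, not the paper's implementation; the feature names and vocabulary are made up): each detected feature votes for every object model that shares it, and objects whose vote count reaches a threshold are reported.

```python
from collections import Counter

def recognize(detected_features, feature_to_objects, threshold=2):
    """Return objects supported by at least `threshold` detected features.

    feature_to_objects maps a feature id to all object models that
    contain it, so shared features contribute votes to many objects.
    """
    votes = Counter()
    for f in detected_features:
        for obj in feature_to_objects.get(f, ()):
            votes[obj] += 1
    return [obj for obj, v in votes.items() if v >= threshold]

# Hypothetical shared vocabulary: "edge_A" appears in both cup and bowl models.
vocab = {"edge_A": ["cup", "bowl"], "edge_B": ["cup"], "edge_C": ["bowl", "plate"]}
print(recognize(["edge_A", "edge_B"], vocab))   # → ['cup']
```

Because the vocabulary is shared across models, adding a new object reuses existing features rather than requiring a new detector per object, which is what makes the approach scale to large object databases.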
- Learning more by sampling less: subsampling effects are model specific (2013)
- Poster presentation: Twenty-Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013. When studying real-world complex networks, one rarely has full access to all their components. As an example, the central nervous system of the human consists of 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables that characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited. We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo [2,3]. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches, which are characteristic of SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all other units.
Under subsampling, the avalanche measures A depended on three main factors. First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n: with small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites: with small d, A was overestimated, while with large d, A was underestimated. Since the observables under subsampling depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when changing the number of and distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even when brain activity is far from being fully sampled.
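A minimal sketch of the avalanche measurement and the subsampling effect (an illustrative assumption, not the study's code): an avalanche is taken as a contiguous run of time bins containing any activity, its size as the total number of events in the run; subsampling is mimicked by keeping only some rows of the activity raster.

```python
import numpy as np

def avalanche_sizes(raster):
    """Compute avalanche sizes from a binary activity raster (units x time).

    An avalanche is a maximal run of time bins with nonzero activity;
    its size is the total number of events within the run.
    """
    active = np.asarray(raster).sum(axis=0)   # events per time bin
    sizes, current = [], 0
    for a in active:
        if a > 0:
            current += a
        elif current > 0:                     # run just ended
            sizes.append(current)
            current = 0
    if current > 0:                           # run reaching the last bin
        sizes.append(current)
    return sizes

raster = np.array([[1, 1, 0, 0, 1],
                   [0, 1, 0, 1, 1]])
full = avalanche_sizes(raster)        # fully sampled: two avalanches of size 3
sub  = avalanche_sizes(raster[:1])    # subsampled: only unit 0 is observed
```

In this toy raster the full recording yields sizes `[3, 3]`, while observing only the first unit yields `[2, 1]`: subsampling splits and shrinks avalanches, which is the kind of distortion of the measures A discussed above.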
- Task-specific modulation of memory for object features in natural scenes (2008)
- The influence of visual tasks on short- and long-term memory for visual features was investigated using a change-detection paradigm. Subjects completed 2 tasks: (a) describing objects in natural images, reporting a specific property of each object when a crosshair appeared above it, and (b) viewing a modified version of each scene and detecting which of the previously described objects had changed. When tested over short delays (seconds), no task effects were found. Over longer delays (minutes), we found that the describing task influenced what types of changes were detected in a variety of explicit and incidental memory experiments. Furthermore, we found surprisingly high performance in the incidental memory experiment, suggesting that simple tasks are sufficient to instill long-lasting visual memories. Keywords: visual working memory, natural scenes, natural tasks, change detection
- Learning the optimal control of coordinated eye and head movements (2011)
- Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration, and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
- Independent component analysis in spiking neurons (2010)
- Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing-dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered when the activity of different neurons is decorrelated by adaptive lateral inhibition.