Physics
Recurrent cortical network dynamics play a crucial role in sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for understanding recurrent neural computation, it often requires manual adjustment of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials) or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that rely solely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean-field approximation of the variances of neural activities, which allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is determined dynamically as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
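As an illustration of the kind of update flow control amounts to, the sketch below applies a flow-control-style scaling of the recurrent weights in a small tanh echo state network. The network size, connection density, adaptation rate and the per-neuron gain a_i are illustrative choices, not the parameters or exact notation of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200            # network size (illustrative)
p = 0.1            # recurrent connection probability (illustrative)
R_target = 1.0     # desired spectral radius
eps_a = 1e-3       # adaptation rate (illustrative)

# Sparse random recurrent weights and a per-neuron synaptic scaling factor a_i.
mask = rng.random((N, N)) < p
W = mask * rng.normal(0.0, 1.0 / np.sqrt(p * N), (N, N))
w_in = rng.normal(0.0, 1.0, N)   # external input weights
a = np.ones(N)                   # local scaling factors, one per neuron

y = np.zeros(N)                  # neural activities
for t in range(20000):
    u = rng.choice([-1.0, 1.0])          # binary input stream
    x_rec = a * (W @ y)                  # recurrent membrane potential (locally scaled)
    y_new = np.tanh(x_rec + w_in * u)    # external and recurrent inputs kept separable

    # Flow-control-style update: each neuron compares its squared recurrent
    # membrane potential with its own (target-scaled) squared activity --
    # both quantities are locally accessible.
    a *= 1.0 + eps_a * (R_target**2 * y**2 - x_rec**2)
    y = y_new

# The spectral radius of the effective recurrent matrix a_i * W_ij should now
# be close to R_target (exactly so only for weak inter-neural correlations).
rho = np.max(np.abs(np.linalg.eigvals(a[:, None] * W)))
print(f"spectral radius after adaptation: {rho:.3f}")
```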
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, although precise tuning is negatively affected by inter-neural correlations. Furthermore, flow control showed consistent task performance over a wide range of input strengths and variances. Variance control, on the other hand, did not yield the desired spectral radii with the same precision. Moreover, its task performance was less consistent across different input strengths.
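The time-delayed XOR benchmark can be stated compactly: the target at time t is the XOR of two earlier input bits. The sketch below constructs such a target sequence; the particular delay and the pairing of the two past bits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

T, tau = 5000, 3                      # sequence length and delay (illustrative)
u = rng.integers(0, 2, T)             # random binary input sequence

# Time-delayed XOR target: output at time t combines the inputs from
# tau and tau + 1 steps in the past.
target = np.zeros(T, dtype=int)
target[tau + 1:] = u[1:T - tau] ^ u[:T - tau - 1]

# A linear readout (e.g. ridge regression on the reservoir states) would then
# be trained to reproduce `target`, and performance scored per delay tau.
```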
Given the better performance and simpler mathematical form of flow control, we concluded that a local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical “set point” homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that the network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. A direct calculation of the spectral radius is therefore not feasible for biological networks. However, we show that there exists a local and biologically plausible adaptation mechanism, flow control, which allows the spectral radius of the recurrent weights to be controlled while the network is operating under the influence of external inputs. Flow control is based on a theorem from random matrix theory, which is applicable if inter-synaptic correlations are weak. We apply the new adaptation rule to echo-state networks tasked with performing a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
The phase diagram of the square lattice bilayer Hubbard model: a variational Monte Carlo study
(2014)
We investigate the phase diagram of the square lattice bilayer Hubbard model at half-filling with the variational Monte Carlo method, for both the magnetic and the paramagnetic case, as a function of the interlayer hopping t⊥ and the on-site Coulomb repulsion U. With this study we resolve some discrepancies in previous calculations based on dynamical mean-field theory, and we are able to determine the nature of the phase transitions between metal, Mott insulator and band insulator. In the magnetic case we find only two phases: an antiferromagnetic Mott insulator at small t⊥ for any value of U, and a band insulator at large t⊥. At large U values we approach the Heisenberg limit. The paramagnetic phase diagram shows, at small t⊥, a metal to Mott insulator transition at moderate U values and a Mott to band insulator transition at larger U values. We also observe a re-entrant Mott insulator to metal transition and metal to band insulator transition for increasing t⊥ at intermediate values of U. Finally, we discuss the phase diagrams obtained in relation to findings from previous studies based on different many-body approaches.
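For reference, the bilayer Hubbard Hamiltonian underlying such a study is conventionally written as below, with t the intra-layer hopping, t⊥ the interlayer hopping, U the on-site repulsion, m = 1, 2 the layer index and ⟨i,j⟩ the nearest-neighbour bonds of the square lattice; sign and normalization conventions may differ from the original paper.

```latex
H = -t \sum_{\langle i,j\rangle,\,m,\,\sigma}
      \left( c^{\dagger}_{i m \sigma} c_{j m \sigma} + \mathrm{h.c.} \right)
    \;-\; t_{\perp} \sum_{i,\sigma}
      \left( c^{\dagger}_{i 1 \sigma} c_{i 2 \sigma} + \mathrm{h.c.} \right)
    \;+\; U \sum_{i,m} n_{i m \uparrow}\, n_{i m \downarrow}
```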
Cortical pyramidal neurons have a complex dendritic anatomy whose function remains an active field of research. In particular, the segregation between the soma and the apical dendritic tree is believed to play an active role in processing feed-forward sensory information and top-down or feedback signals. In this work, we use a simple two-compartment model that accounts for the nonlinear interactions between the basal and apical input streams, and we show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractions in the basal input space and demonstrate its effectiveness in a linear classification task.
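A minimal sketch of this coincidence-detection idea follows, assuming a rate neuron with an Oja-type normalized Hebbian rule on the basal weights and a hypothetical "teacher" direction standing in for the top-down apical signal; neither the nonlinearity nor the parameters are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

N_in = 50          # number of basal (feed-forward) synapses
eta = 0.02         # learning rate (illustrative)
g_apical = 1.0     # strength of the apical (top-down) drive (illustrative)

# Hidden "teacher" direction: input patterns overlapping with it also
# excite the apical compartment (the top-down target signal).
teacher = rng.normal(size=N_in)
teacher /= np.linalg.norm(teacher)

w = 0.1 * rng.normal(size=N_in)          # basal synaptic weights

for step in range(5000):
    x = rng.normal(size=N_in)            # basal input, mostly "distractors"
    apical = max(teacher @ x, 0.0)       # rectified top-down signal
    # Toy two-compartment interaction: the apical drive depolarizes the soma
    # on top of the basal drive, so output is largest when both coincide.
    y = np.tanh(w @ x + g_apical * apical)
    # Standard normalized Hebbian (Oja-type) update on the basal weights.
    w += eta * y * (x - y * w)

# The basal weights should end up aligned with the teacher direction.
cos = (w @ teacher) / np.linalg.norm(w)
print(f"cosine between basal weights and top-down target direction: {cos:.2f}")
```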
Human societies are characterized by, among others, three constituent features. (A) Options, such as jobs and societal positions, differ with respect to their associated monetary and non-monetary payoffs. (B) Competition leads to reduced payoffs when individuals compete for the same option as others. (C) People care about how they are doing relative to others. The latter trait, the propensity to compare one's own success with that of others, expresses itself as envy. It is shown that the combination of (A)–(C) leads to spontaneous class stratification. Societies of agents split endogenously into two social classes, an upper and a lower class, when envy becomes relevant. A comprehensive analysis of the Nash equilibria characterizing a basic reference game is presented. Class separation is due to the condensation of the strategies of lower-class agents, which play an identical mixed strategy. Upper-class agents do not condense, following individualist pure strategies. The model and results are size-consistent, holding for arbitrarily large numbers of agents and options. The analytic results are confirmed by extensive numerical simulations. An analogy to interacting confined classical particles is discussed.
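A deliberately simplified best-response simulation can make the interplay of (A)–(C) concrete. The payoff splitting, the envy penalty and all parameters below are illustrative stand-ins, not the authors' reference game, which also admits mixed strategies.

```python
import numpy as np

rng = np.random.default_rng(3)

N_agents, N_options = 20, 20
epsilon = 0.6                                             # envy strength (illustrative)
base = np.sort(rng.uniform(1.0, 2.0, N_options))[::-1]   # (A) unequal option payoffs

choice = rng.integers(0, N_options, N_agents)             # each agent picks one option

def payoffs(choice):
    """(B) Competition: agents selecting the same option split its payoff."""
    counts = np.bincount(choice, minlength=N_options)
    return base[choice] / counts[choice]

def utility(i, k, choice):
    """(C) Envy: own payoff minus a penalty for lagging behind better-off agents."""
    trial = choice.copy()
    trial[i] = k
    p = payoffs(trial)
    return p[i] - epsilon * np.mean(np.maximum(p - p[i], 0.0))

# Asynchronous best-response dynamics until the choices (approximately) settle.
for sweep in range(200):
    for i in rng.permutation(N_agents):
        choice[i] = int(np.argmax([utility(i, k, choice) for k in range(N_options)]))

print("final payoffs (sorted):", np.round(np.sort(payoffs(choice)), 3))
```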
We study simulated animats in the form of wheeled robots with the simplest neural controller possible: a single neuron per actuator. The system is fully self-organized in the sense that the controlling neuron receives only the current angle of the wheel as input. Non-trivial locomotion emerges in structured environments, with the robot autonomously determining its direction of movement (time-reversal symmetry is spontaneously broken). Our controller, which mimics the mechanism used to transmit power in steam locomotives, abstracts from the body plan of the animat, working without problems in the presence of noise and for chains of individual two-wheeled cars. Being fully compliant, our controller may also be used, in the spirit of morphological computation, as a basic unit for higher-level evolutionary algorithms.
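The following self-contained toy shows one way such a one-neuron angle-to-torque loop can be realized; the equations and parameters are illustrative and not taken from the paper. A leaky neuron reads the current wheel angle, and a crank-like projection (the "drive rod") maps its output onto a torque; the membrane time constant supplies the phase lag that pumps energy into rotation, whose direction is selected by an arbitrarily small initial kick.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, steps = 0.01, 30000
tau, gain = 0.5, 2.0         # neuron time constant and gain (illustrative)
c, gamma, I = 1.0, 0.1, 1.0  # drive strength, wheel friction, wheel inertia

phi = 0.0                    # wheel angle (the only sensory input)
omega = 0.01 * rng.normal()  # tiny random kick; biases the rotation direction
x = 0.0                      # membrane potential of the single controlling neuron

for step in range(steps):
    x += dt / tau * (-x + np.sin(phi))             # leaky neuron driven by the wheel angle
    torque = -c * np.tanh(gain * x) * np.cos(phi)  # "drive rod" geometry maps the neuron's
                                                   # output onto a tangential wheel torque
    omega += dt / I * (torque - gamma * omega)     # wheel dynamics with friction
    phi += dt * omega

print(f"steady rotation speed: {omega:+.2f} rad/s "
      f"(sign chosen by spontaneous symmetry breaking)")
```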
Envy, the inclination to compare rewards, can be expected to unfold when inequalities in terms of payoff differences are generated in competitive societies. It is shown that increasing levels of envy lead inevitably to a self-induced separation into a lower and an upper class. Class stratification is Nash stable and strict, with members of the same class receiving identical rewards. Upper-class agents play exclusively pure strategies, while all lower-class agents play the same mixed strategy. The fraction of upper-class agents decreases progressively with larger levels of envy, until a single upper-class agent is left. Numerical simulations and a complete analytic treatment of a basic reference model, the shopping trouble model, are presented. The properties of the class-stratified society are universal and only indirectly controllable through the underlying utility function, which implies that class-stratified societies are intrinsically resistant to political control. Implications for human societies are discussed. It is pointed out that the repercussions of envy are amplified when societies become increasingly competitive.