Behavior is characterized by sequences of goal-oriented activities, such as food uptake, socializing and resting. Classically, one would define for each task a corresponding satisfaction level, with the agent engaging, at any given time, in the activity with the lowest satisfaction level. Alternatively, one may consider that the agent follows the overarching objective of generating sequences of distinct activities. Achieving a balanced distribution of activities, rather than mastering a specific task, would then be the primary goal. In this setting the agent shows two types of behavior, task-oriented and task-searching phases, with the latter interspersed between the former. We study the emergence of autonomous task switching for the case of a simulated robot arm. Grasping one of several moving objects corresponds in this setting to a specific activity. Overall, the arm should follow a given object temporarily and then move away, in order to search for a new target and reengage. We show that this behavior can be generated robustly when modeling the arm as an adaptive dynamical system. The dissipation function is in this approach time dependent. The arm is in a dissipative state when searching for a nearby object, dissipating energy on approach. Once close, the dissipation function starts to increase, with the eventual sign change implying that the arm takes up energy and wanders off. The resulting explorative state ends when the dissipation function becomes negative again and the arm selects a new target. We believe that our approach may be generalized to produce self-organized sequences of activities in other settings.
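The engage/disengage mechanism described in this abstract can be illustrated with a one-dimensional toy model. All names and parameters below are invented for illustration and are not taken from the study: a point "arm" x is attracted to a target, while a dissipation function f adapts slowly, turning positive (energy uptake) once the arm has settled close by.

```python
import numpy as np

# One-dimensional toy sketch (illustrative parameters, not the study's
# model): a point "arm" x is attracted to a target; the dissipation
# function f adapts slowly, turning positive once the arm stays close.
dt, k, eps, d_c = 0.01, 1.0, 0.05, 0.4
x, v, f = 2.0, 0.0, -1.0           # position, velocity, dissipation function
target = 0.0
phases = []
for step in range(60000):
    d = abs(x - target)
    a = -k * (x - target) + f * v  # f < 0: dissipative approach
    x += dt * v
    v += dt * a
    # slow relaxation of f: towards +1 when engaged (d < d_c),
    # towards -1 while exploring far away from the target
    f += dt * eps * ((1.0 if d < d_c else -1.0) - f)
    phases.append(f > 0)           # True during explorative phases

# both behavioral modes appear repeatedly along the trajectory
print(any(phases), not all(phases))
```

The sign change of f generates a self-organized limit cycle of approach (dissipative) and wandering-off (explorative) phases, the one-dimensional analogue of the task-switching behavior of the arm.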
Cortical pyramidal neurons have a complex dendritic anatomy, whose function is an active field of research. In particular, the segregation between the soma and the apical dendritic tree is believed to play an active role in processing feed-forward sensory information and top-down or feedback signals. In this work, we use a simple two-compartment model accounting for the nonlinear interactions between basal and apical input streams, and show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractions in the basal input space and demonstrate its effectiveness in a linear classification task.
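The alignment mechanism can be sketched in a few lines. The following toy implementation makes simplifying assumptions not present in the paper's model, namely a linear somatic response and an Oja-type Hebbian rule gated multiplicatively by the apical signal: the basal weight vector then aligns with the input direction that coincides with the top-down signal, despite strong distractors.

```python
import numpy as np

# Toy sketch under simplifying assumptions (linear somatic response,
# Oja-type Hebbian rule gated by the apical signal; parameters invented):
# basal weights align with the input direction that coincides with the
# top-down apical target, despite strong distractors.
rng = np.random.default_rng(0)
n, eta = 10, 0.01
w = rng.normal(0.0, 0.1, n)                  # basal weights
pattern = np.zeros(n); pattern[0] = 1.0      # feed-forward signal direction
for step in range(5000):
    apical = rng.integers(0, 2)              # top-down target signal (0/1)
    x = apical * pattern + rng.normal(0.0, 0.5, n)  # basal input + distractors
    y = w @ x                                # somatic (basal) drive
    # coincidence detection: plasticity only while the apical
    # compartment is active; Oja's decay term keeps |w| bounded
    w += eta * apical * y * (x - y * w)

alignment = abs(w @ pattern) / np.linalg.norm(w)
print(alignment)
```

The final alignment is close to one, even though the distractor noise dominates the basal input on any single trial.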
Charts are used to measure relative success for a large variety of cultural items. Traditional music charts have been shown to follow self-organizing principles with regard to the distribution of item lifetimes, the on-chart residence times. Here we examine whether this observation also holds for (a) music streaming charts, (b) book best-seller lists and (c) social network activity charts, such as Twitter hashtags and the number of comments Reddit postings receive. We find that charts based on the active production of items, like commenting, are more likely to be influenced by external factors, in particular by the 24 h day–night cycle. External factors are less important for consumption-based charts (sales, downloads), which can be explained by a generic theory of decision-making. In this view, humans aim to optimize the information content of the internal representation of the outside world, which is logarithmically compressed. Further support for information maximization is argued to arise from the comparison of hourly, daily and weekly charts, which allow one to gauge the importance of decision times with respect to the chart compilation period.
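The information-maximization argument can be illustrated with a small numerical example. Assuming, for illustration only, that item popularity follows a Zipf law, a logarithmically binned internal representation carries more information (higher entropy) than a linearly binned one:

```python
import numpy as np

# Illustration (assumes a Zipf-distributed popularity, an assumption made
# here only for the example): binning a heavy-tailed quantity on a log
# scale yields a more uniform, hence higher-entropy, representation.
rng = np.random.default_rng(1)
ranks = np.arange(1, 10001)
p = 1.0 / ranks
p = p / p.sum()
samples = rng.choice(ranks, size=100000, p=p)

def entropy_bits(counts):
    q = counts[counts > 0] / counts.sum()
    return float(-(q * np.log2(q)).sum())

lin_edges = np.linspace(1, 10001, 21)            # 20 linear bins
log_edges = np.logspace(0, np.log10(10001), 21)  # 20 logarithmic bins
h_lin = entropy_bits(np.histogram(samples, lin_edges)[0])
h_log = entropy_bits(np.histogram(samples, log_edges)[0])
print(h_lin, h_log)
```

Under linear binning most of the probability mass falls into the first bin, whereas logarithmic compression spreads it nearly uniformly, maximizing the entropy of the internal representation.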
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent to survival in the first place. Assuming that the resting-state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to a non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of working in the vicinity of a second-order phase transition, such as signal enhancements, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term 'self-regulated' in this context, instead of 'self-organized'.
Modern societies face the challenge that the time scale of opinion formation is continuously accelerating, in contrast to the time scale of political decision making. With the latter remaining of the order of the election cycle, we examine here the case that the political state of a society is determined by the continuously evolving values of the electorate. Given this assumption, we show that the time lags inherent in the election cycle will inevitably lead to political instabilities for advanced democracies characterized both by an accelerating pace of opinion dynamics and by high sensitivities (political correctness) to deviations from mainstream values. Our result is based on the observation that dynamical systems become generically unstable whenever time delays become comparable to the time it takes to adapt to the steady state. In addition, the time needed to recover from external shocks grows dramatically close to the transition. Our estimates for the order of magnitude of the involved time scales indicate that socio-political instabilities may develop once the aggregate time scale for the evolution of the political values of the electorate falls below 7–15 months.
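The delay-instability argument can be made concrete with a minimal toy model, not the society model of the study: a variable relaxing towards a steady state with adaptation time tau, but based on its own delayed value, becomes oscillatorily unstable once the delay T exceeds pi*tau/2.

```python
import numpy as np

# Toy delay-differential equation dx/dt = -x(t - T) / tau (illustrative,
# not the study's model): the fixed point x = 0 loses stability via
# growing oscillations once the delay T exceeds pi * tau / 2.
def late_amplitude(T, tau=1.0, dt=0.001, t_end=200.0):
    n_delay = int(T / dt)
    n = int(t_end / dt)
    x = np.zeros(n + 1)
    x[0] = 0.1                                   # small perturbation
    for i in range(n):
        x_delayed = x[i - n_delay] if i >= n_delay else x[0]
        x[i + 1] = x[i] + dt * (-x_delayed / tau)
    return float(np.max(np.abs(x[-int(20.0 / dt):])))  # late-time amplitude

amp_stable = late_amplitude(T=1.0)    # T < pi/2: perturbation decays
amp_unstable = late_amplitude(T=2.0)  # T > pi/2: oscillations grow
print(amp_stable, amp_unstable)
```

The same qualitative transition, from damped recovery to growing oscillations when the delay becomes comparable to the adaptation time, underlies the instability estimate for the election cycle.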
The phase diagram of the square lattice bilayer Hubbard model: a variational Monte Carlo study
(2014)
We investigate the phase diagram of the square lattice bilayer Hubbard model at half-filling with the variational Monte Carlo method, for both the magnetic and the paramagnetic case, as a function of the interlayer hopping t⊥ and the on-site Coulomb repulsion U. With this study we resolve some discrepancies in previous calculations based on dynamical mean-field theory, and we are able to determine the nature of the phase transitions between metal, Mott insulator and band insulator. In the magnetic case we find only two phases: an antiferromagnetic Mott insulator at small t⊥ for any value of U, and a band insulator at large t⊥. At large U values we approach the Heisenberg limit. The paramagnetic phase diagram shows, at small t⊥, a metal to Mott insulator transition at moderate U values and a Mott to band insulator transition at larger U values. We also observe a re-entrant Mott insulator to metal transition and a metal to band insulator transition for increasing t⊥ in an intermediate range of U. Finally, we discuss the phase diagrams obtained in relation to findings from previous studies based on different many-body approaches.
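For readers unfamiliar with the method, the following generic variational Monte Carlo sketch illustrates the technique on a toy problem, a one-dimensional harmonic oscillator with a Gaussian trial wavefunction, rather than the bilayer Hubbard model: sample |psi|^2 with the Metropolis algorithm and average the local energy.

```python
import numpy as np

# Generic VMC sketch on a toy problem (1D harmonic oscillator, not the
# bilayer Hubbard model): Metropolis sampling of |psi_alpha|^2 for the
# Gaussian trial state psi_alpha(x) = exp(-alpha x^2).
rng = np.random.default_rng(3)

def vmc_energy(alpha, n_steps=200000, step=1.0):
    x, energies = 0.0, []
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance ratio |psi(x_new) / psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy E_L = alpha + x^2 (1/2 - 2 alpha^2)
        energies.append(alpha + x**2 * (0.5 - 2.0 * alpha**2))
    return float(np.mean(energies[n_steps // 10:]))  # discard burn-in

e_opt = vmc_energy(0.5)   # exact ground state for alpha = 1/2
e_lo = vmc_energy(0.3)    # non-optimal trial states lie higher
e_hi = vmc_energy(0.8)
print(e_opt, e_lo, e_hi)
```

At alpha = 0.5 the trial state is the exact ground state and the local energy is constant, the standard zero-variance check for a VMC implementation.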
In physics, the wavefunctions of bosonic particles collapse when the system undergoes a Bose–Einstein condensation. In game theory, the strategy of an agent describes the probability to engage in a certain course of action. Strategies are expected to differ in competitive situations, namely when there is a penalty for doing the same as somebody else. We study what happens when agents are interested in how they fare not only in absolute terms, but also relative to others. This preference, denoted envy, is shown to induce the emergence of distinct social classes via a collective strategy condensation transition. Members of the lower class pursue identical strategies, in analogy to the Bose–Einstein condensation, with the upper class remaining individualistic.
Zukunftsforschung ohne Orakel : zur langfristigen Szenarienbildung und der Initiative "Zukunft 25"
(2007)
Every century produces its own visions of the future, typically by extrapolating those developments that are particularly prominent in current research. In the 19th century these were, as the collectible pictures shown here attest, above all transport and mobility. In his novel "Around the World in Eighty Days", Jules Verne expresses the fascination with places and people moving closer together, as distances could be bridged ever faster thanks to modern means of transport such as the automobile, the railway and the aircraft. The predominantly optimistic expectations of the 19th century have since given way to more critical, if not pessimistic, visions. Looking at films such as "Blade Runner" or "Matrix", the themes that occupy us today include the artificial or manipulated human being. The futurologist Claudius Gros, too, reflects on the consequences of an artificial womb. Yet he looks to the future with optimism.
Recurrent cortical network dynamics plays a crucial role for sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for the understanding of recurrent neural computation, it often requires manual adjustments of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials), or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that solely rely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean field approximation on the variances of neural activities that allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule that we considered, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is dynamically determined as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, but precise tuning is negatively affected by interneural correlations. Furthermore, flow control showed a consistent task performance over a wide range of input strengths/variances. Variance control, on the other hand, did not yield the desired spectral radii with the same precision. Moreover, task performance was less consistent across different input strengths.
Given the better performance and simpler mathematical form of flow control, we concluded that a local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical “set point” homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. Therefore, a direct calculation of the spectral radius is not feasible for biological networks. However, we show that there exists a local and biologically plausible adaptation mechanism, flow control, which allows the recurrent weight spectral radius to be controlled while the network is operating under the influence of external inputs. Flow control is based on a theorem of random matrix theory, which is applicable if inter-synaptic correlations are weak. We apply the new adaptation rule to echo-state networks having the task to perform a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
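The flow-control idea admits a compact numerical sketch. The update rule and all constants below are simplified and invented for illustration, not the exact scheme of the paper: each neuron rescales its incoming recurrent weights by comparing its squared recurrent input with the mean squared network activity, which drives the spectral radius of the effective weight matrix towards a target value.

```python
import numpy as np

# Hedged sketch of a flow-control-style adaptation (simplified update,
# invented constants): neurons rescale incoming recurrent weights by
# comparing squared recurrent input with mean squared activity, driving
# the spectral radius of the effective weight matrix towards R_target.
rng = np.random.default_rng(2)
N, R_target, eps = 200, 1.0, 0.005
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # raw recurrent weights
a = np.full(N, 0.5)                            # local row-scaling factors
y = rng.uniform(-0.1, 0.1, N)
for t in range(5000):
    x_ext = rng.normal(0.0, 0.5, N)            # external input stream
    x_rec = (a[:, None] * W) @ y               # recurrent input (separable)
    y = np.tanh(x_rec + x_ext)
    # local comparison: squared recurrent input vs. squared activity
    a *= 1.0 + eps * (R_target**2 * np.mean(y**2) - x_rec**2)

rho = float(np.max(np.abs(np.linalg.eigvals(a[:, None] * W))))
print(rho)
```

Each scaling factor only needs locally available quantities, yet the spectral radius of the adapted matrix ends up close to the target, up to the correlation-induced imprecision discussed above.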
Biological as well as advanced artificial intelligences (AIs) need to decide which goals to pursue. We review nature's solution to the time allocation problem, which is based on a continuously readjusted categorical weighting mechanism we experience introspectively as emotions. One observes phylogenetically that the available number of emotional states increases hand in hand with the cognitive capabilities of animals, and that rising levels of intelligence entail ever larger sets of behavioral options. Our ability to experience a multitude of potentially conflicting feelings is in this view not a leftover of a more primitive heritage, but a generic mechanism for attributing values to behavioral options that cannot be specified at birth. In this view, emotions are essential for understanding the mind. For concreteness, we propose and discuss a framework which mimics emotions on a functional level. Based on time allocation via emotional stationarity (TAES), emotions are implemented as abstract criteria, such as satisfaction, challenge and boredom, which serve to evaluate activities that have been carried out. The resulting timeline of experienced emotions is compared with the "character" of the agent, which is defined in terms of a preferred distribution of emotional states. The long-term goal of the agent, to align experience with character, is achieved by optimizing the frequency for selecting individual tasks. Upon optimization, the statistics of emotion experience becomes stationary.
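The optimization step of the TAES framework can be illustrated with a toy example. The emotion matrix, the damped multiplicative update and all numbers below are invented for illustration; the sketch only shows how reweighting task-selection frequencies can make the stationary emotion statistics match a target "character" distribution.

```python
import numpy as np

# Toy TAES-style illustration (emotion matrix, update rule and numbers
# are invented): tasks elicit emotions with fixed probabilities, and the
# agent reweights its task-selection frequencies until the stationary
# emotion statistics match its "character".
M = np.array([[0.7, 0.1, 0.2],   # P(satisfaction | task)
              [0.2, 0.8, 0.1],   # P(challenge    | task)
              [0.1, 0.1, 0.7]])  # P(boredom      | task)
character = np.array([0.5, 0.4, 0.1])  # preferred emotion mix
p_task = np.full(3, 1.0 / 3.0)         # task-selection frequencies
for step in range(5000):
    experienced = M @ p_task           # stationary emotion statistics
    # select more often those tasks whose emotions are under-represented
    # relative to the character (damped multiplicative reweighting)
    p_task *= (M.T @ (character / experienced)) ** 0.1
    p_task /= p_task.sum()

print(M @ p_task)   # approaches the character distribution
```

Upon convergence the experienced emotion statistics become stationary and match the character, the alignment condition that defines the agent's long-term goal in the framework.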
Learning and animal movement
(2021)
Integrating diverse concepts from animal behavior, movement ecology, and machine learning, we develop an overview of the ecology of learning and animal movement. Learning-based movement is clearly relevant to ecological problems, but the subject is rooted firmly in psychology, including a distinct terminology. We contrast this psychological origin of learning with the task-oriented perspective on learning that has emerged from the field of machine learning. We review conceptual frameworks that characterize the role of learning in movement, discuss emerging trends, and summarize recent developments in the analysis of movement data. We also discuss the relative advantages of different modeling approaches for exploring the learning-movement interface. We explore in depth how individual and social modalities of learning can matter to the ecology of animal movement, and highlight how diverse kinds of field studies, ranging from translocation efforts to manipulative experiments, can provide critical insight into the learning process in animal movement.