The rapid spread of the Coronavirus (COVID-19) confronts policy makers with the problem of measuring the effectiveness of containment strategies, balancing public health considerations with the economic costs of social distancing measures. We introduce a modified epidemic model that we name the controlled-SIR model, in which the disease reproduction rate evolves dynamically in response to political and societal reactions. An analytic solution is presented. The model reproduces official COVID-19 case counts of a large number of regions and countries that surpassed the first peak of the outbreak. A single unbiased feedback parameter is extracted from field data and used to formulate an index that measures the efficiency of containment strategies (the CEI index). CEI values for a range of countries are given. For two variants of the controlled-SIR model, detailed estimates of the total medical and socio-economic costs are evaluated over the entire course of the epidemic. Costs comprise medical care cost, the economic cost of social distancing, as well as the economic value of lives saved. Under plausible parameters, strict measures fare better than a hands-off policy. Strategies based on current case numbers lead to substantially higher total costs than strategies based on the overall history of the epidemic.
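The feedback mechanism described above can be illustrated with a minimal numerical sketch. The specific feedback form g = g0/(1 + alpha*X) and all parameter values below are illustrative assumptions, not the paper's calibrated model:

```python
def run_sir(alpha, g0=2.5, gamma=0.1, days=400):
    """Controlled-SIR sketch: the reproduction factor g is reduced by
    societal feedback growing with the cumulative case count X."""
    S, I, X = 1.0 - 1e-4, 1e-4, 1e-4   # susceptible, infected, cumulative
    peak = 0.0
    for _ in range(days):
        g = g0 / (1.0 + alpha * X)     # assumed feedback form
        new = gamma * g * S * I        # new infections per day
        S -= new
        I += new - gamma * I           # recoveries drain I
        X += new
        peak = max(peak, I)
    return peak, X

peak_free, X_free = run_sir(alpha=0.0)    # hands-off policy
peak_ctrl, X_ctrl = run_sir(alpha=100.0)  # strong containment feedback
```

With feedback switched on, both the peak medical load and the final cumulative case count drop by more than an order of magnitude, illustrating why strict measures can fare better overall.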
Poster presentation: The brain is autonomously active and this self-sustained neural activity is in general modulated, but not driven, by the sensory input data stream [1,2]. Traditionally one has regarded this eigendynamics as resulting from inter-modular recurrent neural activity [3]. Understanding the basic modules for cognitive computation is, in this view, the primary focus of research, and the overall neural dynamics would be determined by the topology of the intermodular pathways. Here we examine an alternative point of view, asking whether certain aspects of the neural eigendynamics have a central functional role for overall cognitive computation [4,5]. Transiently stable neural activity is regularly observed on the cognitive time-scale of 80–100 ms, with indications that neural competition [6] plays an important role in the selection of the transiently stable neural ensembles [7], also denoted winning coalitions [8]. We report on a theory approach which implements these two principles, transient-state dynamics and neural competition, in terms of an associative neural network with clique encoding [9]. A cognitive system [10] with a non-trivial internal eigendynamics has two seemingly contrasting tasks to fulfill. The internal processes need to be regular, and not chaotic, on the one hand, but sensitive to the afferent sensory stimuli on the other. We show that these two contrasting demands can be reconciled within our approach based on competitive transient-state dynamics, when allowing the sensory stimuli to modulate the competition for the next winning coalition. By testing the system with the bars problem, we find an emerging cognitive capability. Based only on the two basic architectural principles, neural competition and transient-state dynamics, and with no explicit algorithmic encoding, the system performs on its own a non-linear independent component analysis of the input data stream. The system has rudimentary biological features.
All learning is local, Hebbian-style, unsupervised and online. The system exhibits an ever-ongoing eigendynamics; at no time is the state or the value of the synaptic strengths reset, nor is the system restarted – there is no separation between training and performance. We believe this kind of approach – cognitive computation with autonomously active neural networks – to be an emerging field, relevant both for systems neuroscience and for synthetic cognitive systems.
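The bars problem used to test the system is a standard benchmark for non-linear independent component analysis. A minimal stimulus generator (grid size and bar probability are arbitrary illustrative choices) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def bars_pattern(L=5, p=0.2):
    """One bars-problem stimulus: each of the L horizontal and L vertical
    bars is switched on independently with probability p; overlapping
    bars superpose non-linearly (pixel-wise OR, not additively)."""
    img = np.zeros((L, L))
    for i in range(L):
        if rng.random() < p:
            img[i, :] = 1.0   # horizontal bar
        if rng.random() < p:
            img[:, i] = 1.0   # vertical bar
    return img

patterns = [bars_pattern() for _ in range(200)]
```

The task of an ICA-performing system is then to recover the individual bars as independent components from such superpositions.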
Predicting the cumulative medical load of COVID-19 outbreaks after the peak in daily fatalities
(2021)
The distinct ways the COVID-19 pandemic has been unfolding in different countries and regions suggest that local societal and governmental structures play an important role not only for the baseline infection rate, but also for short- and long-term reactions to the outbreak. We propose to investigate the question of how societies as a whole, and governments in particular, modulate the dynamics of a novel epidemic using a generalization of the SIR model, the reactive SIR (short-term and long-term reaction) model. We posit that containment measures are equivalent to a feedback between the status of the outbreak and the reproduction factor. Short-term reaction to an outbreak corresponds in this framework to the reaction of governments and individuals to daily cases and fatalities. Long-term reaction, in contrast, captures the response to the cumulative number of cases or deaths rather than to daily numbers. We present the exact phase space solution of the controlled SIR model and use it to quantify containment policies for a large number of countries in terms of short- and long-term control parameters. We find increased contributions of long-term control for countries and regions in which the outbreak was suppressed substantially, together with a strong correlation between the strength of societal and governmental policies and the time needed to contain COVID-19 outbreaks. Furthermore, for numerous countries and regions we identify a predictive relation between the number of fatalities within a fixed period before and after the peak of daily fatality counts, which makes it possible to gauge the cumulative medical load of COVID-19 outbreaks that should be expected after the peak. These results suggest that the proposed model is applicable not only for understanding the outbreak dynamics, but also for predicting future cases and fatalities once the effectiveness of outbreak suppression policies is established with sufficient certainty.
Finally, we provide a web app (https://itp.uni-frankfurt.de/covid-19/) with tools for visualising the phase space representation of real-world COVID-19 data and for exporting the preprocessed data for further analysis.
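The qualitative difference between short-term and long-term control can be sketched numerically. The feedback forms and parameter values below are illustrative assumptions: feedback on daily case numbers settles into a long plateau, while feedback on the cumulative count shuts the outbreak down:

```python
def run(feedback, strength, g0=2.5, gamma=0.1, days=600):
    """SIR with the reproduction factor reduced either by daily new
    cases ('short') or by the cumulative case count ('long')."""
    S, I, X, new = 1.0 - 1e-4, 1e-4, 1e-4, 0.0
    peak = 0.0
    for _ in range(days):
        signal = new if feedback == "short" else X
        g = g0 / (1.0 + strength * signal)   # assumed feedback form
        new = gamma * g * S * I
        S -= new
        I += new - gamma * I
        X += new
        peak = max(peak, I)
    return peak, X

peak_s, X_short = run("short", 1000.0)   # reacts to daily numbers
peak_l, X_long = run("long", 100.0)      # reacts to the epidemic's history
```

Short-term control keeps the daily load low but lets the cumulative count grow for a long time; long-term control terminates the outbreak at a far smaller total size.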
Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius toward the desired value. For both mechanisms the spectral radius is autonomously adapted while the network receives and processes inputs under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols. Moreover, we evaluated the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is however negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control did however not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biologically plausible alternative to conventional methods using set-point homeostatic feedback controls of neural firing.
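The gist of flow control – driving the spectral radius to a target value using only quantities available to the dynamics itself – can be sketched as follows. The network-wide scalar form of the update and all parameter values are simplifying assumptions; the rule described in the paper operates per neuron:

```python
import numpy as np

rng = np.random.default_rng(0)
N, R_target, eps = 200, 1.5, 0.002
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights
x = rng.uniform(-1.0, 1.0, N)

for _ in range(4000):
    u = W @ x                                    # recurrent membrane potential
    # flow control: rescale weights until |u|^2 matches R_target^2 |x|^2,
    # which for weakly correlated activity pins the spectral radius
    W *= 1.0 + eps * (R_target**2 * (x @ x) - u @ u) / (x @ x)
    x = np.tanh(u + 0.3 * rng.normal(size=N))    # ongoing input stream

rho = np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius after adaptation
```

The adapted spectral radius settles near the target; residual deviations reflect the correlation-induced bias mentioned in the abstract.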
An empirical study of the per capita yield of science Nobel prizes: is the US era coming to an end?
(2018)
We point out that the Nobel prize production of the USA, the UK, Germany and France has occurred in numbers large enough to allow for a reliable analysis of the long-term historical developments. Nobel prizes are often split, such that up to three awardees receive a corresponding fractional prize. The historical trends for the fractional number of Nobelists per population are surprisingly robust, indicating in particular that Nobel productivity peaked in the 1970s for the USA and around 1900 for both France and Germany. The yearly success rates of these three countries are to date of the order of 0.2–0.3 physics, chemistry and medicine laureates per 100 million inhabitants, with the US value being a factor of 2.4 down from the maximum attained in the 1970s. The UK, in contrast, managed to retain during most of the last century a rate of 0.9–1.0 science Nobel prizes per year and per 100 million inhabitants. For the USA, one finds that the entire history of science Nobel prizes is described on a per capita basis to astonishing accuracy by a single large productivity boost decaying at a continuously accelerating rate since its peak in 1972.
Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional generating a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in terms of conflicting optimization targets, on the resulting dynamics. Objective function stress is absent when the target level for the mean of neural activities is identical for the two generating functionals, and the resulting latching dynamics is then found to be regular. Objective function stress is present when the respective target activity levels differ, inducing intermittent bursting latching dynamics.
For a chaotic system, pairs of initially close-by trajectories eventually become fully uncorrelated on the attracting set. This process of decorrelation can split into an initial exponential decrease and a subsequent diffusive process on the chaotic attractor that causes the final loss of predictability. The two processes can evolve on either similar or very different time scales. In the latter case, the two trajectories linger within a finite but small distance (relative to the overall extent of the attractor) for exceedingly long times and remain partially predictable. Standard tests for chaos widely use inter-orbital correlations as an indicator. However, such tests yield mostly ambiguous results for partially predictable chaos, which is characterized by attractors of fractally broadened braids. As a resolution, we introduce a novel 0-1 indicator for chaos based on the cross-distance scaling of pairs of initially close trajectories. This test robustly discriminates chaos, including partially predictable chaos, from laminar flow. Additionally, using the finite-time cross-correlation of pairs of initially close trajectories, we are able to identify laminar flow as well as strong and partially predictable chaos in a 0-1 manner solely from the properties of pairs of trajectories.
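The cross-distance scaling idea can be illustrated on the logistic map; the map and the slope threshold below are illustrative stand-ins for the flows studied in the paper. For regular motion, the time-averaged distance between two initially close trajectories stays proportional to the initial separation, while for chaos it saturates at the attractor scale, independently of it:

```python
import numpy as np

def mean_cross_distance(r, delta, T=2000, burn=100):
    """Time-averaged distance between two logistic-map trajectories
    started delta apart, after a transient of `burn` steps."""
    x = 0.3
    for _ in range(burn):
        x = r * x * (1.0 - x)
    y = x + delta
    total = 0.0
    for _ in range(T):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        total += abs(x - y)
    return total / T

def chaos_indicator(r):
    """0-1 test: slope ~1 of log-distance vs log-delta means regular
    motion (indicator 0), slope ~0 means chaos (indicator 1)."""
    deltas = np.array([1e-9, 1e-7, 1e-5])
    dists = np.array([mean_cross_distance(r, d) for d in deltas])
    slope = np.polyfit(np.log(deltas), np.log(dists), 1)[0]
    return 1 if slope < 0.5 else 0
```

At r = 4.0 the logistic map is fully chaotic, at r = 3.2 it settles on a stable two-cycle, and the indicator separates the two cases.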
Self-organized robots may develop attracting states within the sensorimotor loop, that is, within the combined phase space of neural activity, body and environmental variables. Fixpoints, limit cycles and chaotic attractors correspond in this setting to a non-moving robot, to directed locomotion, and to irregular locomotion, respectively. Short higher-order control commands may hence be used to kick the system robustly from one self-organized attractor into the basin of attraction of a different attractor, a concept termed here kick control. The individual sensorimotor states serve in this context as highly compliant motor primitives. We study different implementations of kick control for simulated and real-world wheeled robots, for which the dynamics of the individual wheels is generated independently by local feedback loops. The feedback loops are mediated by rate-encoding neurons receiving exclusively proprioceptive inputs, in terms of projections of the current rotational angle of the wheel. The changes in neural activity are then transmitted into a rotational motion by a simulated transmission rod, akin to the transmission rods used for steam locomotives. We find that the self-organized attractor landscape may be morphed both by higher-level control signals, in the spirit of kick control, and by interaction with the environment. Bumping against a wall destroys the limit cycle corresponding to forward motion, with the consequence that the dynamical variables are then attracted in phase space by the limit cycle corresponding to backward motion. The robot, which has no distance or contact sensors, hence reverses direction autonomously.
Which factors underlie human information production on a global level? To gain insight into this question, we study a corpus of 252–633 million publicly available data files on the Internet, corresponding to an overall storage volume of 284–675 terabytes. Analyzing the file size distribution for several distinct data types, we find indications that the neuropsychological capacity of the human brain to process and record information may constitute the dominant limiting factor for the overall growth of globally stored information, with real-world economic constraints having only a negligible influence. This supposition draws support from the observation that the file size distributions follow a power law for data without a time component, like images, and a log-normal distribution for multimedia files, for which time is a defining quale.
Author summary: The generation of new information is limited by two key factors: the economic costs incurred, and the capacity of the human brain to process and store data and information; the controlling agent needs to retain an overall understanding even when data is generated by semiautomatic processes. These processes are reflected in the statistical properties of the data files publicly available on the Internet. Collecting a corpus of 252–633 million files, we find that the statistics of the file size distribution are consistent with the supposition that data production on a global level is shaped and limited by the neuropsychological information-processing capacity of the brain, with economic and hardware constraints having a negligible influence.
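A standard way to verify power-law behavior, as claimed here for file sizes without a time component, is tail-index estimation. The following sketch uses synthetic Pareto data (the exponent and sample size are arbitrary), not the paper's file corpus:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true = 1.5
sample = rng.pareto(alpha_true, 100_000) + 1.0   # Pareto, tail index 1.5

# Hill estimator: average log-excess over the k largest observations
k = 5000
tail = np.sort(sample)[-k:]
alpha_hill = 1.0 / np.mean(np.log(tail / tail[0]))
```

For genuinely power-law data the estimate is stable as k varies; for a log-normal it drifts systematically, which is one way to distinguish the two distribution families named in the summary.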
Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that the excitatory (E) and inhibitory (I) drives in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks that are deterministic, autonomously active and not subject to stochastic external or internal drives. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, which can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's nonlinear input-output function. Networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been considered as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
We present an effective model for timing-dependent synaptic plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and to the calcium concentration in the dendritic spine of the postsynaptic neuron. This model is intended to bridge the worlds of existing simplistic phenomenological rules and highly detailed models, thus constituting a practical tool for studying the interplay of neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes, the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and timescales for the causal and the anticausal contributions. The model otherwise contains only three free parameters, which can be adjusted to reproduce triplet nonlinearities in hippocampal culture and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns.
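A minimal two-trace mechanism of this general kind can be sketched as follows; the trace time constants and amplitudes are placeholder values, not the fitted parameters of the paper:

```python
def stdp_weight_change(pre_times, post_times, t_max=200.0, dt=0.1,
                       tau_pre=20.0, tau_post=40.0, a_plus=1.0, a_minus=0.5):
    """Two-trace STDP sketch: a presynaptic trace r (NMDA-like) and a
    postsynaptic trace o (calcium-like); potentiation reads r at post
    spikes, depression reads o at pre spikes. Times in ms."""
    r, o, w = 0.0, 0.0, 0.0
    pre = {int(round(t / dt)) for t in pre_times}
    post = {int(round(t / dt)) for t in post_times}
    for i in range(int(t_max / dt)):
        r -= dt * r / tau_pre    # presynaptic trace decays
        o -= dt * o / tau_post   # postsynaptic trace decays
        if i in pre:
            w -= a_minus * o     # pre after post: depression
            r += 1.0
        if i in post:
            w += a_plus * r      # post after pre: potentiation
            o += 1.0
    return w
```

For an isolated spike pair this reproduces the standard pairwise STDP window: a causal pair (pre before post) potentiates, an anticausal pair depresses, and the magnitude decays with the spike-time difference.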
The Fisher information constitutes a natural measure for the sensitivity of a probability distribution with respect to a set of parameters. An implementation of the stationarity principle for synaptic learning in terms of the Fisher information results in a Hebbian self-limiting learning rule for synaptic plasticity. In the present work, we study the dependence of the solutions of this rule on the moments of the input probability distribution and find a preference for non-Gaussian directions, making it a suitable candidate for independent component analysis (ICA). We confirm in a numerical experiment that a neuron trained under these rules is able to find the independent components in the non-linear bars problem. The specific form of the plasticity rule depends on the transfer function used, becoming a simple cubic polynomial of the membrane potential for the case of the rescaled error function. The cubic learning rule is also an excellent approximation for other transfer functions, such as the standard sigmoidal, and can be used to show analytically that the proposed plasticity rules are selective for directions in the space of presynaptic neural activities characterized by a negative excess kurtosis.
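The selectivity for directions of negative excess kurtosis can be checked directly. In the following sketch, a toy two-source mixture is scanned over projection directions; the mixing matrix and sources are arbitrary illustrations, and minimizing the sample kurtosis stands in for running the actual plasticity rule:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
s1 = rng.choice([-1.0, 1.0], n)   # bimodal source: excess kurtosis -2
s2 = rng.normal(size=n)           # Gaussian source: excess kurtosis 0

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return float(np.mean(z**4) - 3.0)

# linear mixture of the two sources (hypothetical mixing matrix)
mix = np.vstack([s1 + 0.5 * s2, 0.5 * s1 + s2])

# scan projection directions; the minimum of the excess kurtosis
# singles out the bimodal independent component
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
kurts = [excess_kurtosis(np.cos(a) * mix[0] + np.sin(a) * mix[1])
         for a in angles]
best = angles[int(np.argmin(kurts))]
recovered = np.cos(best) * mix[0] + np.sin(best) * mix[1]
corr = abs(np.corrcoef(recovered, s1)[0, 1])
```

The direction of most negative excess kurtosis recovers the bimodal source almost perfectly, which is exactly the selectivity the analytic argument attributes to the cubic rule.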
Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks as relevant for computational intelligence. We propose and explore a new objective function, which yields plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian and self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities, whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions having bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance; full homeostatic adaptation of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows for stable online learning for arbitrary durations. The neuron acquires new information when the statistics of the input activities change at a certain point of the simulation, showing, however, a distinct resilience against unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
Envy, the inclination to compare rewards, can be expected to unfold when inequalities in terms of pay-off differences are generated in competitive societies. It is shown that increasing levels of envy lead inevitably to a self-induced separation into a lower and an upper class. Class stratification is Nash stable and strict, with members of the same class receiving identical rewards. Upper-class agents play exclusively pure strategies, all lower-class agents the same mixed strategy. The fraction of upper-class agents decreases progressively with larger levels of envy, until a single upper-class agent is left. Numerical simulations and a complete analytic treatment of a basic reference model, the shopping trouble model, are presented. The properties of the class-stratified society are universal and only indirectly controllable through the underlying utility function, which implies that class-stratified societies are intrinsically resistant to political control. Implications for human societies are discussed. It is pointed out that the repercussions of envy are amplified when societies become increasingly competitive.
We study simulated animats in terms of wheeled robots with the simplest neural controller possible – a single neuron per actuator. The system is fully self-organized in the sense that the controlling neuron receives only the current angle of the wheel as input. Non-trivial locomotion results in structured environments, with the robot determining the direction of movement autonomously (time-reversal symmetry is spontaneously broken). Our controller, which mimics the mechanism used to transmit power in steam locomotives, abstracts from the body plan of the animat, working without problems also in the presence of noise and for chains of individual two-wheeled cars. Being fully compliant, our controller may also be used, in the spirit of morphological computation, as a basic unit for higher-level evolutionary algorithms.
Human societies are characterized by three constituent features, among others. (A) Options, such as jobs and societal positions, differ with respect to their associated monetary and non-monetary payoffs. (B) Competition leads to reduced payoffs when individuals compete for the same option as others. (C) People care about how they are doing relative to others. The latter trait – the propensity to compare one's own success with that of others – expresses itself as envy. It is shown that the combination of (A)–(C) leads to spontaneous class stratification. Societies of agents split endogenously into two social classes, an upper and a lower class, when envy becomes relevant. A comprehensive analysis of the Nash equilibria characterizing a basic reference game is presented. Class separation is due to the condensation of the strategies of lower-class agents, which all play an identical mixed strategy. Upper-class agents do not condense, following individualist pure strategies. The model and its results are size-consistent, holding for arbitrarily large numbers of agents and options. Analytic results are confirmed by extensive numerical simulations. An analogy to interacting confined classical particles is discussed.
Five decades of US, UK, German and Dutch music charts show that cultural processes are accelerating
(2019)
Analysing the timelines of the US, UK, German and Dutch music charts, we find that the evolution of album lifetimes and of the size of weekly rank changes provides evidence for an acceleration of cultural processes. For most of the past five decades, number-one albums needed more than a month to climb to the top; nowadays, in contrast, an album is top ranked either from the start or not at all. Over the last three decades, the number of top-listed albums consequently increased from roughly a dozen per year to about 40. The distribution of album lifetimes evolved during the last decades from a log-normal distribution to a power law, a profound change. Presenting an information-theoretical approach to human activities, we suggest that the fading relevance of personal time horizons may be causing this phenomenon. Furthermore, we find that sales- and airplay-based charts differ statistically and that the inclusion of streaming affects chart diversity adversely. We point out in addition that opinion dynamics may accelerate not only in cultural domains, as found here, but also in other settings, in particular in politics, where it could have far-reaching consequences.
Behavior is characterized by sequences of goal-oriented conducts, such as food uptake, socializing and resting. Classically, one would define for each task a corresponding satisfaction level, with the agent engaging, at a given time, in the activity having the lowest satisfaction level. Alternatively, one may consider that the agent follows the overarching objective of generating sequences of distinct activities. Achieving a balanced distribution of activities, and not mastering a specific task, would then be the primary goal. In this setting the agent shows two types of behaviors, task-oriented and task-searching phases, with the latter interleaved between the former. We study the emergence of autonomous task switching for the case of a simulated robot arm. Grasping one of several moving objects corresponds in this setting to a specific activity. Overall, the arm should follow a given object temporarily and then move away, in order to search for a new target and reengage. We show that this behavior can be generated robustly when modeling the arm as an adaptive dynamical system with a time-dependent dissipation function. The arm is in a dissipative state when searching for a nearby object, dissipating energy on approach. Once close, the dissipation function starts to increase, with the eventual sign change implying that the arm will take up energy and wander off. The resulting explorative state ends when the dissipation function becomes negative again and the arm selects a new target. We believe that our approach may be generalized to produce self-organized sequences of activities in other settings.
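The mechanism – a dissipation that changes sign when the agent stays close to its target – can be caricatured by a one-dimensional adaptive oscillator. All functional forms, clamps and constants below are illustrative assumptions, not the simulated arm model of the paper:

```python
import numpy as np

dt, c, theta = 0.01, 0.5, 0.3
x, v, g = 2.0, 0.0, 1.0       # coordinate, velocity, dissipation
xs, gs = [], []
for _ in range(60000):
    prox = np.exp(-x * x)      # closeness to the target at x = 0
    v += dt * (-g * v - x)     # g > 0: damped approach; g < 0: energy uptake
    x += dt * v
    # dissipation decreases while engaged near the target, recovers when far
    g = float(np.clip(g + dt * c * (theta - prox), -0.5, 2.0))
    xs.append(abs(x))
    gs.append(g)

late_x, late_g = xs[40000:], gs[40000:]
```

The system never settles: phases of close target-following alternate indefinitely with explorative excursions, and the dissipation keeps crossing zero, which is the sign change driving the task switching described above.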
Cortical pyramidal neurons have a complex dendritic anatomy, whose function is an active field of research. In particular, the segregation between the soma and the apical dendritic tree is believed to play an active role in the processing of feed-forward sensory information and of top-down or feedback signals. In this work, we use a simple two-compartment model accounting for the nonlinear interactions between the basal and apical input streams, and show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractions in the basal input space and demonstrate its effectiveness in a linear classification task.
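A minimal sketch of apical-gated Hebbian alignment illustrates the idea; the learning rule, rates and dimensions are illustrative assumptions, not the paper's two-compartment model. The basal weights align with the input direction that carries the top-down signal, despite heavy distraction:

```python
import numpy as np

rng = np.random.default_rng(3)
N, eps = 50, 0.01
d = rng.normal(size=N)
d /= np.linalg.norm(d)            # hidden input direction carrying the signal
w = 0.01 * rng.normal(size=N)     # basal synaptic weights

for _ in range(5000):
    s = rng.choice([-1.0, 1.0])            # latent signal
    x = s * d + 0.5 * rng.normal(size=N)   # basal input with strong distractors
    a = s                                   # apical top-down target
    w += eps * a * x                        # apically gated Hebbian update

alignment = (w @ d) / np.linalg.norm(w)    # cosine between w and d
```

Because the distractor components are uncorrelated with the apical signal, they average out, and the weight vector converges onto the signal-carrying direction.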
Charts are used to measure relative success for a large variety of cultural items. Traditional music charts have been shown to follow self-organizing principles with regard to the distribution of item lifetimes, the on-chart residence times. Here we examine whether this observation also holds for (a) music streaming charts, (b) book best-seller lists and (c) social-network activity charts, such as Twitter hashtags and the number of comments Reddit postings receive. We find that charts based on the active production of items, like commenting, are more likely to be influenced by external factors, in particular by the 24 h day-night cycle. External factors are less important for consumption-based charts (sales, downloads), which can be explained by a generic theory of decision-making. In this view, humans aim to optimize the information content of the internal representation of the outside world, which is logarithmically compressed. Further support for information maximization is argued to arise from the comparison of hourly, daily and weekly charts, which allows one to gauge the importance of decision times with respect to the chart compilation period.
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent to survival in the first place. Assuming that the resting-state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to a non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of working in the vicinity of a second-order phase transition, such as signal enhancement, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term 'self-regulated' in this context, instead of 'self-organized'.
Modern societies face the challenge that the time scale of opinion formation is continuously accelerating, in contrast to the time scale of political decision making. With the latter remaining of the order of the election cycle, we examine here the case that the political state of a society is determined by the continuously evolving values of the electorate. Given this assumption, we show that the time lags inherent in the election cycle will inevitably lead to political instabilities for advanced democracies characterized both by an accelerating pace of opinion dynamics and by a high sensitivity (political correctness) to deviations from mainstream values. Our result is based on the observation that dynamical systems become generically unstable whenever time delays become comparable to the time it takes to adapt to the steady state. In addition, the time needed to recover from external shocks grows dramatically close to the transition. Our estimates for the order of magnitude of the involved time scales indicate that socio-political instabilities may develop once the aggregate time scale for the evolution of the political values of the electorate falls below 7–15 months.
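The underlying instability mechanism can be reproduced with the scalar delayed-feedback equation dx/dt = -a x(t - T), which is stable only while the product a*T stays below pi/2; the specific values below are illustrative:

```python
def delayed_feedback_amplitude(a, T, dt=0.01, t_max=100.0):
    """Euler-integrate dx/dt = -a x(t - T) with history x = 1 and
    return the oscillation amplitude near t_max."""
    buf = [1.0] * int(round(T / dt))   # buffer of delayed values of x
    x = 1.0
    tail = []
    for _ in range(int(t_max / dt)):
        x += dt * (-a * buf.pop(0))    # feedback acts on the delayed state
        buf.append(x)
        tail.append(abs(x))
    return max(tail[-1000:])           # amplitude over the last 10 time units

amp_stable = delayed_feedback_amplitude(1.0, 1.0)    # a*T = 1.0 < pi/2
amp_unstable = delayed_feedback_amplitude(1.0, 2.0)  # a*T = 2.0 > pi/2
```

Increasing only the delay, with the adaptation rate a fixed, turns a damped return to the steady state into a runaway oscillation, which is the generic mechanism invoked in the abstract.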
The phase diagram of the square lattice bilayer Hubbard model: a variational Monte Carlo study
(2014)
We investigate the phase diagram of the square-lattice bilayer Hubbard model at half-filling with the variational Monte Carlo method, for both the magnetic and the paramagnetic case, as a function of the interlayer hopping t⊥ and the on-site Coulomb repulsion U. With this study we resolve some discrepancies in previous calculations based on dynamical mean-field theory, and we are able to determine the nature of the phase transitions between metal, Mott insulator and band insulator. In the magnetic case we find only two phases: an antiferromagnetic Mott insulator at small t⊥ for any value of U, and a band insulator at large t⊥. At large U values we approach the Heisenberg limit. The paramagnetic phase diagram shows, at small t⊥, a metal to Mott insulator transition at moderate U values, and a Mott to band insulator transition at larger U values. We also observe a re-entrant Mott insulator to metal transition and a metal to band insulator transition for increasing t⊥ at intermediate values of U. Finally, we discuss the phase diagrams obtained in relation to findings from previous studies based on different many-body approaches.
In physics, the wavefunctions of bosonic particles collapse when the system undergoes a Bose–Einstein condensation. In game theory, the strategy of an agent describes the probability of engaging in a certain course of action. Strategies are expected to differ in competitive situations, namely when there is a penalty for doing the same as somebody else. We study what happens when agents are interested in how they fare not only in absolute terms, but also relative to others. This preference, denoted envy, is shown to induce the emergence of distinct social classes via a collective strategy condensation transition. Members of the lower class pursue identical strategies, in analogy to the Bose–Einstein condensation, with the upper class remaining individualistic.
Futures research without an oracle: on long-term scenario building and the "Zukunft 25" initiative
(2007)
Every century produces its own visions of the future, extrapolating above all those developments that are most prominent in current research. In the 19th century these were, as the collectible pictures shown here document, primarily transport and mobility. In his novel "Around the World in Eighty Days", Jules Verne expresses the fascination that places and people are moving closer together, because distances can be bridged ever more quickly thanks to modern means of transport such as the automobile, the railway and the airplane. The predominantly optimistic expectations of the 19th century have since given way to more critical, if not pessimistic, visions. Looking at films such as "Blade Runner" or "Matrix", the themes occupying us today include the artificial or manipulated human being. The futures researcher Claudius Gros also reflects on the consequences of an artificial womb. Yet he looks to the future with optimism.
Recurrent cortical network dynamics plays a crucial role in sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for the understanding of recurrent neural computation, it often requires manual adjustments of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials), or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that rely solely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean-field approximation on the variances of neural activities, which allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule that we considered, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is dynamically determined as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, although precise tuning is negatively affected by interneural correlations. Furthermore, flow control showed consistent task performance over a wide range of input strengths and variances. Variance control, on the other hand, did not yield the desired spectral radii with the same precision, and task performance was less consistent across different input strengths.
Given the better performance and simpler mathematical form of flow control, we concluded that a local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical “set point” homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that the network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. A direct calculation of the spectral radius is therefore not feasible for biological networks. However, we show that there exists a local and biologically plausible adaptation mechanism, flow control, which makes it possible to control the spectral radius of the recurrent weights while the network is operating under the influence of external inputs. Flow control is based on a theorem of random matrix theory, which is applicable if inter-synaptic correlations are weak. We apply the new adaptation rule to echo-state networks tasked with performing a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
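A minimal sketch of how a flow-control rule of this kind could look for a random echo-state reservoir; the network size, learning rate, input protocol, and variable names are our own illustrative assumptions, not taken from the study. Each neuron compares its squared recurrent membrane potential with its own (target-scaled) squared activity and rescales its afferent recurrent weights, using only locally available quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
R_target = 1.0   # desired spectral radius (illustrative choice)
eps = 0.001      # adaptation rate (illustrative choice)

W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random recurrent weights
W *= 2.5 / np.max(np.abs(np.linalg.eigvals(W)))     # start deliberately mistuned
a = np.ones(N)                                       # local synaptic scaling factors
y = rng.uniform(-0.5, 0.5, size=N)                   # neural activities

for t in range(10000):
    x_rec = a * (W @ y)               # recurrent membrane potential
    x_ext = 0.5 * rng.normal(size=N)  # independent external drive per neuron
    y = np.tanh(x_rec + x_ext)
    # flow control: each neuron compares its squared recurrent input with
    # its own target-scaled squared activity -- all quantities are local
    a *= 1.0 + eps * (R_target**2 * y**2 - x_rec**2)

# the effective recurrent matrix is diag(a) @ W; its spectral radius
# should settle close to R_target when interneural correlations are weak
rho = np.max(np.abs(np.linalg.eigvals(a[:, None] * W)))
```

The final, global check of the spectral radius is only for inspection; the adaptation loop itself never accesses non-local information.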
Biological as well as advanced artificial intelligences (AIs) need to decide which goals to pursue. We review nature's solution to the time allocation problem, which is based on a continuously readjusted categorical weighting mechanism that we experience introspectively as emotions. One observes phylogenetically that the available number of emotional states increases hand in hand with the cognitive capabilities of animals, and that rising levels of intelligence entail ever larger sets of behavioral options. Our ability to experience a multitude of potentially conflicting feelings is in this view not a leftover of a more primitive heritage, but a generic mechanism for attributing values to behavioral options that cannot be specified at birth. In this view, emotions are essential for understanding the mind. For concreteness, we propose and discuss a framework which mimics emotions on a functional level. Based on time allocation via emotional stationarity (TAES), emotions are implemented as abstract criteria, such as satisfaction, challenge and boredom, which serve to evaluate activities that have been carried out. The resulting timeline of experienced emotions is compared with the "character" of the agent, which is defined in terms of a preferred distribution of emotional states. The long-term goal of the agent, to align experience with character, is achieved by optimizing the frequency for selecting individual tasks. Upon optimization, the statistics of emotion experience becomes stationary.
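The TAES scheme can be sketched in a few lines: an agent selects tasks, records the emotions they elicit, and nudges its task-selection frequencies until the cumulative emotion statistics match its character distribution. The task-to-emotion mapping, the character vector, and the adaptation rate below are hypothetical illustrations, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

emotions = ["satisfaction", "challenge", "boredom"]
character = np.array([0.5, 0.3, 0.2])  # preferred emotion frequencies (assumed)

# hypothetical task -> emotion probabilities (rows: tasks, cols: emotions)
task_emotion = np.array([
    [0.8, 0.1, 0.1],   # routine task: mostly satisfying
    [0.1, 0.8, 0.1],   # hard task: mostly challenging
    [0.1, 0.1, 0.8],   # idle task: mostly boring
])

p = np.full(3, 1.0 / 3.0)  # task-selection frequencies, to be optimized
counts = np.ones(3)        # running counts of experienced emotions
eta = 0.01                 # adaptation rate (our choice)

for step in range(50000):
    task = rng.choice(3, p=p)                        # select a task
    emotion = rng.choice(3, p=task_emotion[task])    # experience an emotion
    counts[emotion] += 1
    experienced = counts / counts.sum()
    # raise the frequency of tasks whose typical emotions are
    # underrepresented relative to the character distribution
    p *= 1.0 + eta * (task_emotion @ (character - experienced))
    p = np.clip(p, 1e-3, None)
    p /= p.sum()

experienced = counts / counts.sum()  # approaches the character distribution
```

Once the experienced distribution matches the character, the update term vanishes on average and the emotion statistics become stationary, mirroring the long-term goal described above.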
Learning and animal movement
(2021)
Integrating diverse concepts from animal behavior, movement ecology, and machine learning, we develop an overview of the ecology of learning and animal movement. Learning-based movement is clearly relevant to ecological problems, but the subject is rooted firmly in psychology, including a distinct terminology. We contrast this psychological origin of learning with the task-oriented perspective on learning that has emerged from the field of machine learning. We review conceptual frameworks that characterize the role of learning in movement, discuss emerging trends, and summarize recent developments in the analysis of movement data. We also discuss the relative advantages of different modeling approaches for exploring the learning-movement interface. We explore in depth how individual and social modalities of learning can matter to the ecology of animal movement, and highlight how diverse kinds of field studies, ranging from translocation efforts to manipulative experiments, can provide critical insight into the learning process in animal movement.