Zukunftsforschung ohne Orakel : zur langfristigen Szenarienbildung und der Initiative "Zukunft 25"
(2007)

Every century produces its own visions of the future, typically extrapolating those developments that figure most prominently in contemporary research. In the 19th century these were, as the collectible picture cards shown here document, above all transport and mobility. In his novel "Around the World in Eighty Days", Jules Verne expresses the fascination that places and people are drawing closer together, because distances can be bridged more quickly thanks to modern means of transport such as the automobile, the railway and the airplane. The predominantly optimistic expectations of the 19th century have since given way to more critical, if not pessimistic, visions. Looking at films such as "Blade Runner" or "The Matrix", the themes that occupy us today include the artificial or manipulated human being. The futures researcher Claudius Gros also reflects on the consequences of an artificial uterus. Yet he looks to the future with optimism.

Poster presentation: The brain is autonomously active, and this self-sustained neural activity is in general modulated, but not driven, by the sensory input data stream [1,2]. Traditionally, this eigendynamics has been regarded as resulting from inter-modular recurrent neural activity [3]. Understanding the basic modules for cognitive computation is, in this view, the primary focus of research, and the overall neural dynamics would be determined by the topology of the intermodular pathways. Here we examine an alternative point of view, asking whether certain aspects of the neural eigendynamics have a central functional role for overall cognitive computation [4,5]. Transiently stable neural activity is regularly observed on the cognitive time scale of 80–100 ms, with indications that neural competition [6] plays an important role in the selection of the transiently stable neural ensembles [7], also denoted winning coalitions [8]. We report on a theory approach which implements these two principles, transient-state dynamics and neural competition, in terms of an associative neural network with clique encoding [9]. A cognitive system [10] with a non-trivial internal eigendynamics has two seemingly contrasting tasks to fulfill: the internal processes need to be regular and not chaotic on the one hand, but sensitive to the afferent sensory stimuli on the other. We show that these two contrasting demands can be reconciled within our approach based on competitive transient-state dynamics, when the sensory stimuli are allowed to modulate the competition for the next winning coalition. Testing the system with the bars problem, we find an emerging cognitive capability: based only on the two basic architectural principles, neural competition and transient-state dynamics, and with no explicit algorithmic encoding, the system performs on its own a non-linear independent component analysis of the input data stream. The system has rudimentary biological features. All learning is local, Hebbian-style, unsupervised and online. It exhibits an ever-ongoing eigendynamics; at no time is the state or the value of the synaptic strengths reset, nor is the system restarted: there is no separation between training and performance. We believe this kind of approach, cognitive computation with autonomously active neural networks, to be an emerging field, relevant both for systems neuroscience and for synthetic cognitive systems.
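The two architectural principles named above, neural competition and transient-state dynamics, can be caricatured in a few lines. The sketch below is a toy, not the clique-encoding network of the poster; the coalition count, fatigue time constant and bias strength are invented for illustration. A slow fatigue variable destabilizes each winning coalition, while a weak sensory bias modulates, but does not drive, which coalition wins next:

```python
import numpy as np

n_cliques = 5                      # candidate winning coalitions (toy size)
activity = np.zeros(n_cliques)
fatigue = np.zeros(n_cliques)      # slow local variable destabilizing the winner

def step(sensory_bias, tau=20.0):
    """One competition step: the coalition with the largest effective
    support wins; slow fatigue eventually dethrones it, producing
    transiently stable states instead of a permanent winner."""
    global activity, fatigue
    support = activity - 2.0 * fatigue + sensory_bias
    winner = int(np.argmax(support))
    activity[:] = 0.0
    activity[winner] = 1.0
    fatigue += (activity - fatigue) / tau   # grows for winner, decays for rest
    return winner

# a weak constant bias toward coalition 2 modulates the ongoing sequence
# of transiently stable winners
bias = 0.1 * (np.arange(n_cliques) == 2)
winners = [step(bias) for _ in range(200)]
```

With the fatigue term switched off, the first winner would remain active forever; the slow variable is what turns the winning coalitions into transient states.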

An empirical study of the per capita yield of science Nobel prizes : is the US era coming to an end?
(2018)

We point out that the Nobel prize production of the USA, the UK, Germany and France has occurred in numbers large enough to allow for a reliable analysis of the long-term historical developments. Nobel prizes are often split, such that up to three awardees receive a corresponding fractional prize. The historical trends for the fractional number of Nobelists per population are surprisingly robust, indicating in particular that Nobel productivity peaked in the 1970s for the USA and around 1900 for both France and Germany. The yearly success rates of these three countries are to date of the order of 0.2–0.3 physics, chemistry and medicine laureates per 100 million inhabitants, with the US value being a factor of 2.4 below the maximum attained in the 1970s. The UK, in contrast, managed to retain during most of the last century a rate of 0.9–1.0 science Nobel prizes per year and per 100 million inhabitants. For the USA, one finds that the entire history of science Nobel prizes is described on a per capita basis, to astonishing accuracy, by a single large productivity boost decaying at a continuously accelerating rate since its peak in 1972.
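The per-capita bookkeeping behind these rates is elementary: split prizes enter fractionally, and counts are normalized per year and per 100 million inhabitants. A minimal sketch (the numeric inputs below are invented for illustration, not the paper's data):

```python
def fractional_prizes(split_sizes):
    """Total fractional prize count: a prize shared by n awardees
    contributes 1/n per awardee listed here."""
    return sum(1.0 / n for n in split_sizes)

def per_capita_rate(fractional, years, population):
    """Fractional science Nobel prizes per year and per 100 million inhabitants."""
    return fractional / years / (population / 1e8)

# one unshared prize plus two halves of split prizes -> 2.0 fractional prizes
total = fractional_prizes([1, 2, 2])

# hypothetical country: 7 fractional prizes over a decade, 320 million people
rate = per_capita_rate(fractional=7.0, years=10, population=320e6)
```

The hypothetical rate works out to about 0.22 laureates per year and per 100 million inhabitants, which lands in the 0.2–0.3 band quoted above.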

Cortical pyramidal neurons have a complex dendritic anatomy whose function is an active field of research. In particular, the segregation between the soma and the apical dendritic tree is believed to play an active role in processing feed-forward sensory information and top-down or feedback signals. In this work, we use a simple two-compartment model accounting for the nonlinear interactions between basal and apical input streams and show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractions in the basal input space and demonstrate its effectiveness in a linear classification task.
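A heavily simplified version of such a two-compartment learning rule can be sketched as follows. The hard apical gate, the Oja-like weight decay and all parameter values are assumptions for illustration, not the model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

n_basal = 8
w = rng.normal(0, 0.1, n_basal)       # basal synaptic weights

def plateau(apical):
    """Apical (top-down) drive gating the somatic output: a crude stand-in
    for the nonlinear basal/apical interaction of a two-compartment model."""
    return 1.0 if apical > 0.5 else 0.0

def learn(x_basal, apical, eta=0.05):
    """Hebbian update in the basal compartment, gated by the apical signal."""
    global w
    y = plateau(apical)
    w += eta * y * (x_basal - w)      # decay term keeps the weights bounded
    return y

# target pattern the apical compartment 'wants' the basal input to align with
target = np.zeros(n_basal)
target[:3] = 1.0

for _ in range(500):
    distraction = rng.normal(0, 0.3, n_basal)   # noise in the basal input space
    learn(target + distraction, apical=1.0)

# cosine similarity between learned basal weights and the top-down target
alignment = w @ target / (np.linalg.norm(w) * np.linalg.norm(target))
```

After training, the basal weight vector points along the target direction despite the distractions, which is the alignment effect the abstract describes.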

Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional generating a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in terms of conflicting optimization targets, on the resulting dynamics. Objective-function stress is absent when the target level for the mean of the neural activities is identical for the two generating functionals; the resulting latching dynamics is then found to be regular. Objective-function stress is present when the respective target activity levels differ, inducing intermittent bursting latching dynamics.
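The core mechanism, slow local variables turning Hopfield attractors into attractor ruins, can be illustrated with binary units. This is a caricature: the paper works with continuous rates and two generating functionals, and the patterns, coupling strength and adaption rate below are invented:

```python
import numpy as np

# two stored binary patterns define the Hopfield energy functional
patterns = np.array([[ 1,  1,  1, -1, -1, -1],
                     [-1, -1,  1,  1,  1, -1]], dtype=float)
W = patterns.T @ patterns / patterns.shape[1]   # Hebbian couplings
np.fill_diagonal(W, 0.0)

def run(adaption_rate, steps=60):
    """Iterate the network from pattern 0, recording its overlap with it."""
    s = patterns[0].copy()
    phi = np.zeros_like(s)          # slow local variables
    trace = []
    for _ in range(steps):
        h = W @ s - 2.0 * phi       # Hopfield drive minus slow adaption
        s = np.where(h >= 0, 1.0, -1.0)
        phi += adaption_rate * (s - phi)
        trace.append(patterns[0] @ s / len(s))
    return trace

frozen = run(adaption_rate=0.0)     # pure Hopfield: pattern 0 is stable
ruined = run(adaption_rate=0.1)     # adaption turns it into an attractor ruin
```

In the first run the overlap with the initial pattern stays at one; in the second the slow variables force the state to leave, which is the destabilization step behind latching dynamics.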

For a chaotic system, pairs of initially close trajectories eventually become fully uncorrelated on the attracting set. This process of decorrelation can split into an initial exponential decrease and a subsequent diffusive process on the chaotic attractor, which causes the final loss of predictability. The two processes can be either of the same or of very different time scales. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall extent of the attractor) for exceedingly long times and remain partially predictable. Standard tests for chaos widely use inter-orbital correlations as an indicator. Applied to partially predictable chaos, however, they yield mostly ambiguous results, as this type of chaos is characterized by attractors of fractally broadened braids. As a resolution we introduce a novel 0-1 indicator for chaos based on the cross-distance scaling of pairs of initially close trajectories. This test robustly discriminates chaos, including partially predictable chaos, from laminar flow. Using in addition the finite-time cross-correlation of pairs of initially close trajectories, we are able to identify laminar flow as well as strong and partially predictable chaos in a 0-1 manner, solely from the properties of pairs of trajectories.
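The idea behind a cross-distance indicator can be illustrated in one dimension with the logistic map; this is a drastic simplification of the flows the abstract refers to, and the map, the perturbation size and the 0/1 construction below are assumptions. Launch a pair of trajectories a distance delta0 apart and check whether their distance scales up to the size of the attractor or stays of order delta0:

```python
import math

def logistic_orbit(x0, r, n):
    """Iterates of the logistic map x -> r x (1 - x)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def cross_distance_indicator(r, delta0=1e-9, n=80):
    """0-1 style indicator: ~1 when the distance between two initially
    close trajectories blows up to the attractor scale (chaos), ~0 when
    it stays of order delta0 (laminar flow)."""
    a = logistic_orbit(0.4, r, n)
    b = logistic_orbit(0.4 + delta0, r, n)
    # maximal cross-distance over the second half of the orbits
    d_max = max(abs(ai - bi) for ai, bi in zip(a[n // 2:], b[n // 2:]))
    # normalized log-scaling exponent, clipped to [0, 1]
    nu = math.log(max(d_max / delta0, 1.0)) / math.log(1.0 / delta0)
    return min(nu, 1.0)

chaotic = cross_distance_indicator(r=4.0)   # fully developed chaos
laminar = cross_distance_indicator(r=3.2)   # stable period-2 orbit
```

For r = 4 the pair decorrelates exponentially and the indicator is close to one; for r = 3.2 both trajectories converge to the same stable cycle and the indicator stays near zero.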

Self-organized robots may develop attracting states within the sensorimotor loop, that is, within the phase space of neural activity, body and environmental variables. In this setting, fixpoints, limit cycles and chaotic attractors correspond to a non-moving robot, to directed locomotion, and to irregular locomotion, respectively. Short higher-order control commands may hence be used to kick the system robustly from one self-organized attractor into the basin of attraction of a different attractor, a concept termed here kick control. The individual sensorimotor states serve in this context as highly compliant motor primitives. We study different implementations of kick control for simulated and real-world wheeled robots, for which the dynamics of the distinct wheels is generated independently by local feedback loops. The feedback loops are mediated by rate-encoding neurons whose only inputs are propriosensoric, namely projections of the actual rotational angle of the wheel. The changes of the neural activity are then converted into a rotational motion by a simulated transmission rod akin to the transmission rods used for steam locomotives. We find that the self-organized attractor landscape may be morphed both by higher-level control signals, in the spirit of kick control, and by interaction with the environment. Bumping against a wall destroys the limit cycle corresponding to forward motion, with the consequence that the dynamical variables are then attracted in phase space by the limit cycle corresponding to backward motion. The robot, which is not equipped with any distance or contact sensors, hence reverses direction autonomously.
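A one-dimensional caricature of kick control can be written down directly; the bistable rate neuron, its gain and the kick amplitude are invented for illustration, whereas in the paper the attractors live in the full sensorimotor loop of the wheeled robot. Two stable fixpoints stand in for the 'forward' and 'backward' attractors, and a short strong control command kicks the state from one basin of attraction into the other:

```python
import math

def step(x, kick=0.0, gain=2.0, dt=0.1):
    """Euler step of dx/dt = -x + tanh(gain * x) + kick for a single rate
    neuron; for gain > 1 the dynamics is bistable, with the two fixpoints
    standing in for self-organized 'forward' and 'backward' attractors."""
    return x + dt * (-x + math.tanh(gain * x) + kick)

x = 0.5                       # start in the basin of the 'forward' attractor
for _ in range(200):
    x = step(x)
forward = x                   # settles near the positive fixpoint

for _ in range(5):            # short higher-order command: the kick
    x = step(x, kick=-30.0)
for _ in range(200):          # after the kick, relax freely again
    x = step(x)
backward = x                  # now trapped by the negative fixpoint
```

The kick itself is brief; it only has to move the state across the basin boundary, after which the self-organized dynamics does the rest. This is the sense in which short control commands select among compliant motor primitives.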