The intrinsic complexity of the brain can lead one to set aside issues related to its relationship with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect, the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations of its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However, this formulation contains a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided among them. We show that this problem can be solved and that diverse task combinations are beneficial in learning rather than a complication, as is usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
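The credit-assignment problem described above can be illustrated with a minimal sketch. This is not the algorithm from the paper, only a hypothetical toy: several modules are active in varying combinations, only their summed (global) reward is observed with measurement noise, and each module updates its own contribution estimate toward its share of the residual. All variable names and the update rule are illustrative assumptions.

```python
import random

random.seed(0)

true_rewards = [1.0, 3.0]   # hidden per-step payoff of each module
r_hat = [0.0, 0.0]          # each module's estimate of its own contribution
alpha = 0.1                 # learning rate

for step in range(5000):
    # A random subset of modules is active on each step; diverse
    # combinations are what make the individual payoffs identifiable.
    active = [i for i in range(len(r_hat)) if random.random() < 0.5]
    if not active:
        continue
    # Only the noisy global reward is observed, not per-module rewards.
    global_r = sum(true_rewards[i] for i in active) + random.gauss(0.0, 0.5)
    # Residual between observed reward and current estimates is split
    # evenly among the modules that were active.
    residual = global_r - sum(r_hat[i] for i in active)
    for i in active:
        r_hat[i] += alpha * residual / len(active)

print([round(v, 1) for v in r_hat])
```

Note that if the two modules were always active together, only the sum of their payoffs would be identifiable; the varying combinations are what let the per-module estimates separate, which mirrors the abstract's point that diverse task combinations aid learning rather than complicate it.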
Currently, little is known about how synesthesia develops and which aspects of synesthesia can be acquired through a learning process. We review the increasing evidence for the role of semantic representations in the induction of synesthesia, and argue for the thesis that synesthetic abilities are developed and modified by semantic mechanisms. That is, in certain people semantic mechanisms associate concepts with perception-like experiences, and this association occurs in an extraordinary way. This phenomenon can be referred to as "higher" synesthesia or ideasthesia. The present analysis suggests that synesthesia develops during childhood and is enriched further throughout the synesthete's lifetime; for example, already existing concurrents may be adopted by novel inducers, or new concurrents may be formed. For a deeper understanding of the origin and nature of synesthesia, we propose to focus future research on two aspects: (i) the similarities between synesthesia and ordinary phenomenal experiences based on concepts; and (ii) the tight entanglement of perception, cognition and the conceptualization of the world. Importantly, an explanation of how biological systems come to generate experiences, synesthetic or not, may have to involve an explanation of how semantic networks are formed in general and what their role is in the ability to be aware of the surrounding world.