MPI für empirische Ästhetik
Congenitally blind individuals have been shown to activate the visual cortex during non-visual tasks. The neuronal mechanisms of such cross-modal activation are not fully understood. Here, we used an auditory working memory training paradigm in congenitally blind and sighted adults. We hypothesized that the visual cortex becomes integrated into auditory working memory networks after these networks have been challenged by training. We investigated the spectral profile of the functional networks that mediate cross-modal reorganization following visual deprivation. A training-induced integration of the visual cortex into task-related networks in congenitally blind individuals was expected to result in changes in long-range functional connectivity (imaginary coherency) in the theta, beta, and gamma bands between the visual cortex and working memory networks. Magnetoencephalographic data were recorded in congenitally blind and sighted individuals during resting state as well as during a voice-based working memory task; the task was performed before and after working memory training with either auditory or tactile stimuli, or a control condition. Auditory working memory training strengthened theta-band (2.5-5 Hz) connectivity in the sighted and beta-band (17.5-22.5 Hz) connectivity in the blind. In sighted participants, theta-band connectivity increased between brain areas typically involved in auditory working memory (inferior frontal, superior temporal, and insular cortex). In blind participants, beta-band networks largely emerged during training, and connectivity increased between brain areas involved in auditory working memory and, as predicted, the visual cortex. Our findings highlight long-range connectivity as a key mechanism of functional reorganization following congenital blindness and provide new insights into the spectral characteristics of functional network connectivity.
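Imaginary coherency, the connectivity measure used in this study, is the imaginary part of the normalized cross-spectrum between two signals; it discards zero-phase-lag coupling, which in MEG can arise from field spread rather than genuine interaction. A minimal sketch of the estimator (illustrative only, not the authors' analysis pipeline; the function name, test signals, and parameters are assumptions):

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherency(x, y, fs, nperseg=256):
    """Imaginary part of coherency: Im( S_xy / sqrt(S_xx * S_yy) ).

    Instantaneous (zero-lag) coupling has a purely real cross-spectrum,
    so it drops out of the imaginary part.
    """
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)    # cross-spectral density
    _, sxx = welch(x, fs=fs, nperseg=nperseg)     # auto-spectra
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    return f, np.imag(sxy / np.sqrt(sxx * syy))

# Illustration: a shared 20 Hz component with a 90-degree phase lag
# produces strong imaginary coherency near 20 Hz.
rng = np.random.default_rng(0)
fs = 200.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t - np.pi / 2) + rng.standard_normal(t.size)
f, icoh = imaginary_coherency(x, y, fs)
```

Because a purely amplitude-shared, zero-lag signal would yield a real-valued coherency, a large imaginary part is harder to explain by volume conduction alone, which is why the measure is popular for MEG/EEG connectivity.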
When experienced in person, engagement with art has been associated with positive outcomes for well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even 'trips' to museums have begun to take place online, via computers, smartphones, tablets, or virtual reality. Similar to what has been reported for in-person visits, online art engagement, easily accessible from personal devices, has also been associated with well-being benefits. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used an interactive Monet art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in responsiveness to art. Beyond replicating previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the well-being benefits of online art viewing, and further that these trait-level differences were mediated by the subjective experiences of pleasure and, especially, meaningfulness felt during the online art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.
Studies investigating the prevalence, cause, and consequence of multiple sclerosis (MS) fatigue typically use single measures that implicitly assume symptom stability over time, neglecting information about whether, when, and why severity fluctuates. We aimed to examine the extent of moment-to-moment and day-to-day variability in fatigue in relapsing-remitting MS and healthy individuals, and to identify daily life determinants of these fluctuations. Over 4 weekdays, 76 participants (38 with relapsing-remitting MS; 38 controls) recruited from multiple sites provided real-time self-reports six times daily (n = 1661 observations analyzed) measuring fatigue severity, stressors, mood, and physical exertion, as well as daily self-reports of sleep quality. Fatigue fluctuations were evident in both groups. Fatigue was highest in relapsing-remitting MS, typically peaking in late afternoon. In controls, fatigue started lower and increased steadily until bedtime. Real-time stressors and negative mood were associated with increased fatigue, and positive mood with decreased fatigue, in both groups. Increased fatigue was related to physical exertion in relapsing-remitting MS and to poorer sleep quality in controls. In relapsing-remitting MS, fatigue fluctuates substantially over time. Many daily life determinants of these fluctuations are similar in relapsing-remitting MS and healthy individuals (stressors, mood), but physical exertion appears more relevant in relapsing-remitting MS and sleep quality most relevant in healthy individuals.
Talking about emotion and sharing emotional experiences is a key component of human interaction. In particular, individuals often consider the reactions of other people when evaluating the meaning and impact of an emotional stimulus. It has not yet been investigated, however, how emotional arousal ratings and physiological responses elicited by affective stimuli are influenced by the ratings of an interaction partner. In the present study, pairs of participants were asked to rate and communicate the degree of their emotional arousal while viewing affective pictures. Strikingly, participants adjusted their arousal ratings to match those of their interaction partner. In anticipation of the affective picture, the interaction partner's arousal ratings correlated positively with activity in the anterior insula and prefrontal cortex. During picture presentation, social influence was reflected in the ventral striatum; that is, activity in the ventral striatum correlated negatively with the interaction partner's ratings. The results show that emotional alignment through the influence of another person's communicated experience must be considered a complex phenomenon integrating different components, including emotion anticipation and conformity.
The lateralization of the neuronal processing underpinning hearing, speech, language, and music is widely studied, vigorously debated, and still not understood in a satisfactory manner. One set of hypotheses focuses on the temporal structure of perceptual experience and links auditory cortex asymmetries to underlying differences in neural populations with differential temporal sensitivity (e.g., ideas advanced by Zatorre et al. (2002) and Poeppel (2003)). The Asymmetric Sampling in Time (AST) theory (Poeppel, 2003) builds on cytoarchitectonic differences between the auditory cortices and predicts that modulation frequencies within, roughly, the range of the syllable rate are more accurately tracked by the right hemisphere. To date, this conjecture is reasonably well supported: while there is some heterogeneity in the reported findings, the predicted asymmetric entrainment has been observed in various experimental protocols. Here, we show that under specific processing demands, the rightward dominance disappears. We propose an enriched and modified version of the asymmetric sampling hypothesis in the context of speech. Recent work (Rimmele et al., 2018b) proposes two different mechanisms underlying the auditory tracking of the speech envelope: one derived from the intrinsic oscillatory properties of auditory regions; the other induced by top-down signals from non-auditory regions of the brain. We propose that under non-speech listening conditions, the intrinsic auditory mechanism dominates and thus, in line with AST, entrainment is rightward-lateralized, as is widely observed. However, (i) depending on individual structural and functional brain differences, and/or (ii) in the context of specific speech listening conditions, the relative weight of the top-down mechanism can increase. In this scenario, the typically observed auditory sampling asymmetry (and its rightward dominance) diminishes or vanishes.
In this study, we investigated the impact of two constraints on the linear order of constituents in German preschool children's and adults' speech production: a rhythmic constraint (*LAPSE, militating against sequences of unstressed syllables) and a semantic one (ANIM, requiring animate referents to be named before inanimate ones). Participants were asked to produce coordinated bare noun phrases in response to picture stimuli (e.g., Delfin und Planet, 'dolphin and planet') without any predefined word order. Overall, both children and adults preferably produced animate items before inanimate ones, confirming the findings of Prat-Sala, Shillcock, and Sorace (2000). In the group of preschoolers, the strength of the animacy effect correlated positively with age. Furthermore, the order of the conjuncts was affected by the rhythmic constraint, such that disrhythmic sequences, i.e., stress lapses, were avoided. In both groups, the latter result was significant when the two stimulus pictures did not differ with respect to animacy. In sum, our findings suggest a stronger influence of animacy than of rhythmic well-formedness on conjunct ordering for German-speaking children and adults, in line with the findings of McDonald, Bock, and Kelly (1993), who investigated English-speaking adults.
Music is an effective means of stress reduction. However, to date there has been no systematic comparison between musical and language-based means of stress reduction in an ambulatory setting. Furthermore, although the aim pursued when listening to music appears to play a role in its effect, this has not yet been investigated thoroughly. We compared musical means, language-based means such as guided relaxation or self-enhancement exercises, and a combination of both with respect to their potential to reduce perceived stress. Furthermore, we investigated whether the aim one wants to achieve by listening to these means had an impact on their effect. We tested 64 participants (age: M = 40.09 years; 18 female) for 3–10 days during their everyday life using an app containing three means: musical means, language-based means, and a combination of both. For the music and the combination conditions, participants were asked to select an aim: relaxation or activation. We measured perceived stress, relaxation, activation, and electrical skin resistance (ESR) as a marker of sympathetic nervous system (SNS) activity before and after each use of the app. Participants were instructed to use the app as often as desired. Overall, perceived stress was reduced after using the app, while perceived relaxation and activation were increased. There were no differences between the three means regarding their effects on perceived stress and relaxation, but music led to a greater increase in perceived activation than the other means and was the only condition in which ESR decreased, indicating increased SNS activity. Moreover, perceived stress was reduced, and perceived relaxation increased, to a greater extent if the aim "relaxation" had been selected. Perceived activation, however, showed a larger increase if the aim had been "activation," an effect that was even more marked for music listening. Our results indicate that all three means reduced perceived stress and promoted feelings of relaxation and activation. For enhancing feelings of activation, music seems to be more effective than the other means, which was also reflected in increased SNS activity. Furthermore, the choice of an aim plays an important role in the reduction of stress and the promotion of relaxation and activation.
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on these different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system has not been widely investigated, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively 'local' and 'global' scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and abstracts global acoustic patterns using long temporal windows. Performance on behavioral tasks that require processing fine-detail information does not improve with longer stimulus length, contrary to the predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern models. Moreover, the perceptual construction of putatively 'unitary' auditory events requires hundreds of milliseconds or more. These findings support the hypothesis of dual-scale processing, likely implemented in the auditory cortex.
Switching between reading tasks leads to phase-transitions in reading times in L1 and L2 readers
(2019)
Reading research uses different tasks to investigate different levels of the reading process, such as word recognition, syntactic parsing, or semantic integration. It seems to be tacitly assumed that the underlying cognitive processes that constitute reading are stable across those tasks. However, nothing is known about what happens when readers switch from one reading task to another. The stability assumption about the reading process suggests that the cognitive system resolves such switching between two tasks quickly. Here, we present an alternative language-game hypothesis (LGH) of reading that begins by treating reading as a softly assembled process and that assumes, instead of stability, context-sensitive flexibility of the reading process. LGH predicts that switching between two reading tasks leads to longer-lasting, phase-transition-like patterns in the reading process. Using the nonlinear-dynamical tool of recurrence quantification analysis, we test these predictions by examining series of individual word reading times in self-paced reading tasks in which native (L1) and second-language (L2) readers transition between random-word and ordered-text reading. We find consistent evidence for phase transitions in the reading times when readers switch from ordered-text to random-word reading, but mixed evidence when readers transition from random-word to ordered-text reading. In the latter case, L2 readers show moderately stronger signs of phase transitions than L1 readers, suggesting that familiarity with a language influences whether and how such transitions occur. The results provide evidence for LGH and suggest that the cognitive processes underlying reading are not fully stable across tasks but exhibit soft assembly in the interaction between task and reader characteristics.
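Recurrence quantification analysis, the tool named above, thresholds pairwise distances between observations into a binary recurrence matrix and then summarizes its line structures; deterministic, rule-governed dynamics show up as long diagonal lines, quantified by the determinism (DET) measure. A minimal one-dimensional sketch (illustrative only, not the authors' implementation; real RQA of reading times typically embeds the series in a higher-dimensional phase space first):

```python
import numpy as np

def recurrence_matrix(x, radius):
    """Binary recurrence matrix: R[i, j] = 1 iff |x[i] - x[j]| <= radius."""
    d = np.abs(x[:, None] - x[None, :])
    return (d <= radius).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent point pairs, excluding the trivial main diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * (n - 1))

def determinism(R, lmin=2):
    """Fraction of recurrent points lying on diagonal lines of length >= lmin."""
    n = R.shape[0]
    lengths = []
    for k in range(1, n):                       # upper diagonals (R is symmetric)
        diag = np.diagonal(R, offset=k)
        padded = np.concatenate(([0], diag, [0]))
        edges = np.flatnonzero(np.diff(padded)) # run boundaries of consecutive 1s
        lengths.extend((edges[1::2] - edges[0::2]).tolist())
    total = sum(lengths)
    return sum(l for l in lengths if l >= lmin) / total if total else 0.0

# A strictly periodic series recurs only along diagonals, so DET = 1.
x = np.tile(np.array([0.0, 1.0, 2.0, 3.0]), 10)
R = recurrence_matrix(x, radius=0.5)
```

A phase transition in a reading-time series would appear as a change in such recurrence measures (e.g., a drop in DET) computed in sliding windows around the task switch.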
The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal association in which stimuli from different sensory modalities are linked to each other because of an implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association is not clear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or of emotional body postures were tested. In these experiments, participants had to infer the relation between the visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support, or even contradicted, the predicted association. The results show that implicit associations between the articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.