150 Psychology
Congenitally blind individuals have been shown to activate the visual cortex during non-visual tasks. The neuronal mechanisms of such cross-modal activation are not fully understood. Here, we used an auditory working memory training paradigm in congenitally blind and sighted adults. We hypothesized that the visual cortex becomes integrated into auditory working memory networks after these networks have been challenged by training. We investigated the spectral profile of the functional networks that mediate cross-modal reorganization following visual deprivation. A training-induced integration of the visual cortex into task-related networks in congenitally blind individuals was expected to result in changes in long-range functional connectivity (imaginary coherency) in the theta, beta, and gamma bands between the visual cortex and working memory networks. Magnetoencephalographic data were recorded in congenitally blind and sighted individuals during resting state as well as during a voice-based working memory task; the task was performed before and after working memory training with either auditory or tactile stimuli, or a control condition. Auditory working memory training strengthened theta-band (2.5-5 Hz) connectivity in the sighted and beta-band (17.5-22.5 Hz) connectivity in the blind. In sighted participants, theta-band connectivity increased between brain areas typically involved in auditory working memory (inferior frontal, superior temporal, and insular cortex). In blind participants, beta-band networks largely emerged during the training, and connectivity increased between brain areas involved in auditory working memory and, as predicted, the visual cortex. Our findings highlight long-range connectivity as a key mechanism of functional reorganization following congenital blindness and provide new insights into the spectral characteristics of functional network connectivity.
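Imaginary coherency, the connectivity measure named in the abstract above, keeps only the imaginary part of the complex coherency, so zero-lag coupling (such as instantaneous field spread between MEG sensors) drops out. A minimal sketch using SciPy's Welch-averaged spectra; the function name and parameters are illustrative, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, nperseg=256):
    """Imaginary part of the coherency between two signals.

    Instantaneous (zero-lag) coupling yields a strictly real coherency,
    so it is removed by taking only the imaginary part -- the reason this
    measure is popular for MEG/EEG connectivity analyses.
    """
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)   # Welch-averaged cross-spectrum
    _, sxx = welch(x, fs=fs, nperseg=nperseg)    # auto-spectra for normalization
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    coherency = sxy / np.sqrt(sxx * syy)
    return f, np.imag(coherency)
```

Band-specific values (e.g., for a 17.5-22.5 Hz beta window) can then be obtained by averaging over the corresponding frequency bins.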
When experienced in person, engagement with art has been associated with positive outcomes for well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or virtual reality. Similar to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used an interactive Monet art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in responsiveness to art. Beyond replicating previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and, further, that these trait-level effects were mediated by subjective experiences of pleasure and, especially, meaningfulness felt during the online art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.
Studies investigating the prevalence, cause, and consequence of multiple sclerosis (MS) fatigue typically use single measures that implicitly assume symptom stability over time, neglecting information about whether, when, and why severity fluctuates. We aimed to examine the extent of moment-to-moment and day-to-day variability in fatigue in relapsing-remitting MS and healthy individuals, and to identify daily-life determinants of fluctuations. Over 4 weekdays, 76 participants (38 with relapsing-remitting MS; 38 controls) recruited from multiple sites provided real-time self-reports six times daily (n = 1661 observations analyzed) measuring fatigue severity, stressors, mood, and physical exertion, and daily self-reports of sleep quality. Fatigue fluctuations were evident in both groups. Fatigue was highest in relapsing-remitting MS, typically peaking in late afternoon. In controls, fatigue started lower and increased steadily until bedtime. Real-time stressors and negative mood were associated with increased fatigue, and positive mood with decreased fatigue, in both groups. Increased fatigue was related to physical exertion in relapsing-remitting MS, and to poorer sleep quality in controls. In relapsing-remitting MS, fatigue fluctuates substantially over time. Many daily-life determinants of fluctuations are similar in relapsing-remitting MS and healthy individuals (stressors, mood), but physical exertion seems more relevant in relapsing-remitting MS and sleep quality most relevant in healthy individuals.
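The variability question in this abstract amounts to decomposing repeated ratings into between-person and within-person components. A simplified sketch of that decomposition (an ambulatory-assessment study like this one would typically use multilevel models instead):

```python
import numpy as np

def within_between_sd(values, person_ids):
    """Split repeated ratings into a between-person SD (spread of the
    person means) and a pooled within-person SD (spread of each rating
    around its person's own mean)."""
    values = np.asarray(values, dtype=float)
    person_ids = np.asarray(person_ids)
    means = {p: values[person_ids == p].mean() for p in np.unique(person_ids)}
    between = np.std(list(means.values()), ddof=1)
    centered = values - np.array([means[p] for p in person_ids])
    within = np.std(centered, ddof=1)
    return between, within
```

A large within-person SD relative to the between-person SD is exactly the kind of moment-to-moment fluctuation that single fatigue measures miss.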
The lateralization of neuronal processing underpinning hearing, speech, language, and music is widely studied, vigorously debated, and still not understood in a satisfactory manner. One set of hypotheses focuses on the temporal structure of perceptual experience and links auditory cortex asymmetries to underlying differences in neural populations with differential temporal sensitivity (e.g., ideas advanced by Zatorre et al. (2002) and Poeppel (2003)). The Asymmetric Sampling in Time (AST) theory (Poeppel, 2003) builds on cytoarchitectonic differences between auditory cortices and predicts that modulation frequencies within the range of, roughly, the syllable rate are more accurately tracked by the right hemisphere. To date, this conjecture is reasonably well supported: while there is some heterogeneity in the reported findings, the predicted asymmetrical entrainment has been observed in various experimental protocols. Here, we show that under specific processing demands, the rightward dominance disappears. We therefore propose an enriched and modified version of the asymmetric sampling hypothesis in the context of speech. Recent work (Rimmele et al., 2018b) proposes that two different mechanisms underlie the auditory tracking of the speech envelope: one derived from the intrinsic oscillatory properties of auditory regions; the other induced by top-down signals from non-auditory regions of the brain. We propose that under non-speech listening conditions, the intrinsic auditory mechanism dominates and thus, in line with AST, entrainment is rightward-lateralized, as is widely observed. However, (i) depending on individual structural/functional brain differences, and/or (ii) in the context of specific speech listening conditions, the relative weight of the top-down mechanism can increase. In this scenario, the typically observed auditory sampling asymmetry (and its rightward dominance) diminishes or vanishes.
Music is an effective means of stress reduction. However, to date there has been no systematic comparison between musical and language-based means of stress reduction in an ambulatory setting. Furthermore, although the aim pursued when listening to music appears to play a role in its effect, this has not yet been investigated thoroughly. We compared musical means, language-based means such as guided relaxation or self-enhancement exercises, and a combination of both with respect to their potential to reduce perceived stress. Furthermore, we investigated whether the aim one wants to achieve by listening to these means had an impact on their effect. We tested 64 participants (age: M = 40.09 years; 18 female) for 3–10 days during their everyday life using an app containing three means: musical means, language-based means, and a combination of both. For the music and the combination conditions, participants were asked to select an aim: relaxation or activation. We measured perceived stress, relaxation, activation, and electrical skin resistance (ESR) as a marker of sympathetic nervous system (SNS) activity before and after using the app. Participants were instructed to use the app as often as desired. Overall, perceived stress was reduced after using the app, while perceived relaxation and activation were increased. There were no differences between the three means regarding their effect on perceived stress and relaxation, but music led to a greater increase in perceived activation compared to the other means, and a decrease in ESR (indicating increased SNS activity) was observed only for music. Moreover, perceived stress was reduced, and perceived relaxation increased, to a greater extent if the aim “relaxation” had been selected. Perceived activation, however, showed a larger increase if the aim had been “activation,” which was even more marked in the case of music listening. Our results indicate that all three means reduced perceived stress and promoted feelings of relaxation and activation.
For enhancing feelings of activation, music seems to be more effective than the other means, as was also reflected in increased SNS activity. Furthermore, the choice of an aim plays an important role in the reduction of stress and the promotion of relaxation and activation.
Switching between reading tasks leads to phase-transitions in reading times in L1 and L2 readers
(2019)
Reading research uses different tasks to investigate different levels of the reading process, such as word recognition, syntactic parsing, or semantic integration. It seems to be tacitly assumed that the underlying cognitive processes that constitute reading are stable across those tasks. However, nothing is known about what happens when readers switch from one reading task to another. The stability assumption suggests that the cognitive system resolves such switching between two tasks quickly. Here, we present an alternative language-game hypothesis (LGH) of reading that treats reading as a softly assembled process and that assumes, instead of stability, context-sensitive flexibility of the reading process. LGH predicts that switching between two reading tasks leads to longer-lasting, phase-transition-like patterns in the reading process. Using the nonlinear-dynamical tool of recurrence quantification analysis, we test these predictions by examining series of individual word reading times in self-paced reading tasks in which native (L1) and second-language (L2) readers transition between random-word and ordered-text reading tasks. We find consistent evidence for phase transitions in the reading times when readers switch from ordered-text to random-word reading, but mixed evidence when readers transition from random-word to ordered-text reading. In the latter case, L2 readers show moderately stronger signs of phase transitions than L1 readers, suggesting that familiarity with a language influences whether and how such transitions occur. The results provide evidence for LGH and suggest that the cognitive processes underlying reading are not fully stable across tasks but exhibit soft assembly in the interaction between task and reader characteristics.
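Recurrence quantification analysis, the tool named above, starts from a binary recurrence plot of the reading-time series; summary measures such as the recurrence rate, tracked in sliding windows around the task switch, can then reveal phase-transition-like shifts. A minimal sketch (the radius and the choice of measure are illustrative, not the paper's exact settings):

```python
import numpy as np

def recurrence_matrix(series, radius):
    """Binary recurrence plot: entry (i, j) is 1 when the reading times
    at positions i and j differ by no more than `radius`."""
    series = np.asarray(series, dtype=float)
    dist = np.abs(series[:, None] - series[None, :])
    return (dist <= radius).astype(int)

def recurrence_rate(rp):
    """Share of recurrent points, excluding the trivial main diagonal."""
    n = rp.shape[0]
    return (rp.sum() - np.trace(rp)) / (n * n - n)
```

A sudden, sustained change in the windowed recurrence rate at the transition between random-word and ordered-text reading is the kind of signature LGH predicts.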
Music, like language, is characterized by hierarchically organized structure that unfolds over time. Music listening therefore requires not only tracking notes and beats but also internally constructing high-level musical structures, or phrases, and anticipating incoming content. Unlike for language, mechanistic evidence for online musical segmentation and prediction at a structural level is sparse. We recorded neurophysiological data from participants listening to music in its original form as well as in manipulated versions with locally or globally reversed harmonic structures. We discovered a low-frequency neural component that modulated the neural rhythms of beat tracking and reliably parsed musical phrases. We further identified phrasal phase precession, suggesting that listeners established structural predictions from ongoing listening experience to track phrasal boundaries. The data point to brain mechanisms that listeners use to segment continuous music at the phrasal level and to predict abstract structural features of music.
The ability to extract regularities from the environment is arguably an adaptive characteristic of intelligent systems. In the context of speech, statistical learning is thought to be an important mechanism for language acquisition. By considering individual differences in speech auditory-motor synchronization, an independent component analysis of fMRI data revealed that the neural substrates of statistical word-form learning are not fully shared across individuals. While a network of auditory and superior pre/motor regions is universally activated in the process of learning, a fronto-parietal network is additionally and selectively engaged by some individuals, boosting their performance. Furthermore, interfering with the use of this network via articulatory suppression (producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights into language-related statistical learning and reconciles previous contrasting findings, while highlighting the need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one set is typically spoken (or silently read) whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem processing towards the left hemisphere and a bias of song processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemispheres, respectively, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of attributing importance to melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
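The proportion of significant autocorrelations (PSA) used in the study above can be read as: of the first L lags of a pitch series, how many exceed the approximate white-noise significance bound of ±1.96/√N. An illustrative sketch (the lag range and estimator details are assumptions, not the study's exact procedure):

```python
import numpy as np

def autocorrelation(x, lag):
    """Sample autocorrelation of a demeaned series at a given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def psa(pitch, max_lag):
    """Proportion of lags 1..max_lag whose autocorrelation exceeds the
    approximate white-noise significance bound +/- 1.96 / sqrt(N)."""
    bound = 1.96 / np.sqrt(len(pitch))
    hits = [abs(autocorrelation(pitch, k)) > bound for k in range(1, max_lag + 1)]
    return float(np.mean(hits))
```

A smoothly varying, self-similar pitch contour yields a PSA near 1, while an unstructured contour stays near the 5% false-positive floor, which is what makes the measure a usable index of melodicity.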
Free gaze and moving images are typically avoided in EEG experiments due to the expected generation of artifacts and noise. Yet for a growing number of research questions, loosening these rigorous restrictions would be beneficial. Among these is research on visual aesthetic experiences, which often involve open-ended exploration of highly variable stimuli. Here we systematically compare the effect of conservative vs. more liberal experimental settings on various measures of behavior, brain activity and physiology in an aesthetic rating task. Our primary aim was to assess EEG signal quality. Forty-three participants either maintained fixation or were allowed to gaze freely, and viewed either static images or dynamic (video) stimuli consisting of dance performances or nature scenes. A passive auditory background task (auditory steady-state response; ASSR) was added as a proxy measure for overall EEG recording quality. We recorded EEG, ECG and eye tracking data, and participants rated their aesthetic preference and state of boredom on each trial. Whereas both behavioral ratings and gaze behavior were affected by task and stimulus manipulations, EEG SNR was barely affected and generally robust across all conditions, despite only minimal preprocessing and no trial rejection. In particular, we show that using video stimuli does not necessarily result in lower EEG quality and can, on the contrary, significantly reduce eye movements while increasing both the participants’ aesthetic response and general task engagement. We see these as encouraging results indicating that — at least in the lab — more liberal experimental conditions can be adopted without significant loss of signal quality.
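An ASSR-based quality proxy like the one described above amounts to a narrow-band SNR: power in the FFT bin at the stimulation frequency relative to neighboring bins. A minimal sketch (the neighbor count, gap, and 40 Hz target are illustrative choices, not the study's parameters):

```python
import numpy as np

def narrowband_snr(signal, fs, target_freq, n_neighbors=10, gap=1):
    """SNR at `target_freq`: power in the nearest FFT bin divided by the
    mean power of `n_neighbors` bins on each side, skipping `gap` bins
    around the target to avoid spectral leakage."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    idx = int(np.argmin(np.abs(freqs - target_freq)))
    lo = power[idx - gap - n_neighbors: idx - gap]
    hi = power[idx + gap + 1: idx + gap + 1 + n_neighbors]
    return power[idx] / np.mean(np.concatenate([lo, hi]))
```

Because the steady-state response is locked to a known stimulation frequency, this single number tracks recording quality regardless of what the participant is viewing, which is what makes it useful as a background probe.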