MPI für empirische Ästhetik
Background/Objectives: Sharing the bed with a partner is common among adults and affects sleep quality, with potential implications for mental health. However, findings to date are contradictory, and polysomnographic data on co-sleeping couples in particular are extremely rare. The present study aimed to investigate the effects of a bed partner's presence on individual and dyadic sleep neurophysiology.
Methods: Young healthy heterosexual couples underwent sleep-lab-based polysomnography in two sleeping arrangements: individual sleep and co-sleep. Individual and dyadic sleep parameters (i.e., synchronization of sleep stages between partners) were collected. The latter were assessed using cross-recurrence quantification analysis (see the sketch following this abstract). Additionally, subjective sleep quality, relationship characteristics, and chronotype were monitored. Data were analyzed by comparing co-sleep with individual sleep. Interaction effects of the sleeping arrangement with gender, chronotype, and relationship characteristics were also tested.
Results: Compared to sleeping individually, co-sleeping was associated with about 10% more REM sleep, less fragmented REM sleep (p = 0.008), longer undisturbed REM fragments (p = 0.0006), and more limb movements (p = 0.007). None of the other sleep stages was significantly altered. Social support interacted with sleeping arrangement such that individuals with suboptimal social support showed the largest effect of the sleeping arrangement on REM sleep. Sleep architectures were more synchronized between partners during co-sleep (p = 0.005), even when wake phases were excluded (p = 0.022). Moreover, sleep architectures were significantly coupled across lags of up to ±5 min. Depth of relationship was an additional significant main effect on synchronization, the two being positively associated. Neither REM sleep nor synchronization was influenced by gender, chronotype, or other relationship characteristics.
Conclusion: Depending on the sleeping arrangement, couples' sleep architecture and synchronization show alterations that are modified by relationship characteristics. We discuss how these alterations could be part of a self-enhancing feedback loop between REM sleep and sociality, and a mechanism through which sociality prevents mental illness.
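A minimal sketch of the lag-based stage-synchronization analysis referenced in the Methods above, assuming hypnograms scored in 30-s epochs and coded as integers. It computes simple stage agreement at lags of up to ±5 min as a stand-in for a full cross-recurrence quantification analysis; the stage coding and data are purely illustrative.

```python
# Simplified illustration (not the authors' pipeline): stage agreement between
# two hypnograms as a proxy for cross-recurrence rate. Stages are coded as
# integers (e.g., Wake=0, N1=1, N2=2, N3=3, REM=4); coding is an assumption.
import numpy as np

def cross_recurrence_rate(stages_a, stages_b, max_lag_epochs=10):
    """Fraction of matching sleep stages between partners at each lag.

    With 30-s epochs, a lag of 10 epochs corresponds to the +/- 5 min
    window mentioned in the abstract.
    """
    a = np.asarray(stages_a)
    b = np.asarray(stages_b)
    rates = {}
    for lag in range(-max_lag_epochs, max_lag_epochs + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        n = min(len(x), len(y))
        rates[lag] = float(np.mean(x[:n] == y[:n]))
    return rates

# Toy example: two random hypnograms of one night (~8 h = 960 epochs)
rng = np.random.default_rng(0)
partner_1 = rng.integers(0, 5, 960)
partner_2 = rng.integers(0, 5, 960)
print(cross_recurrence_rate(partner_1, partner_2)[0])  # lag-0 stage agreement
```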
The ability to vocalize is ubiquitous in vertebrates, but neural networks underlying vocal control remain poorly understood. Here, we performed simultaneous neuronal recordings in the frontal cortex and dorsal striatum (caudate nucleus, CN) during the production of echolocation pulses and communication calls in bats. This approach allowed us to assess the general aspects underlying vocal production in mammals and the unique evolutionary adaptations of bat echolocation. Our data indicate that before vocalization, a distinctive change in high-gamma and beta oscillations (50–80 Hz and 12–30 Hz, respectively) takes place in the bat frontal cortex and dorsal striatum. Such precise fine-tuning of neural oscillations could allow animals to selectively activate motor programs required for the production of either echolocation or communication vocalizations. Moreover, the functional coupling between frontal and striatal areas, occurring in the theta oscillatory band (4–8 Hz), differs markedly at the millisecond level, depending on whether the animals are in a navigational mode (that is, emitting echolocation pulses) or in a social communication mode (emitting communication calls). Overall, this study indicates that fronto-striatal oscillations could provide a neural correlate for vocal control in bats.
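The band-specific pre-vocal changes described above could be quantified, in a much simplified form, as band-limited envelope power in a window preceding vocal onset. The sketch below uses toy data and an assumed 1 kHz sampling rate; the helper `band_power` and the window length are illustrative, not the study's analysis code.

```python
# Illustrative sketch (hypothetical data): pre-vocal power in the beta
# (12-30 Hz) and high-gamma (50-80 Hz) bands highlighted in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(lfp, fs, low, high):
    """Mean envelope power of a band-pass filtered signal."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, lfp)
    return float(np.mean(np.abs(hilbert(filtered)) ** 2))

fs = 1000                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
pre_vocal_lfp = rng.standard_normal(fs)    # 1 s of toy "frontal cortex" signal

beta = band_power(pre_vocal_lfp, fs, 12, 30)
high_gamma = band_power(pre_vocal_lfp, fs, 50, 80)
print(f"beta power: {beta:.3f}, high-gamma power: {high_gamma:.3f}")
```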
Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4–8 Hz) and beta–gamma (around 15–40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
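To make the bimodal spectral response profile described above concrete, the toy sketch below computes the amplitude spectrum of a simulated evoked response and averages it within the theta and beta-gamma ranges; the sampling rate, time window, and signal are assumptions for illustration only.

```python
# Hedged sketch (toy data, not the patients' recordings): spectral profile of
# an averaged evoked response, illustrating the theta (~4-8 Hz) vs. beta-gamma
# (~15-40 Hz) decomposition described in the abstract.
import numpy as np

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)              # 1-s post-stimulus window
rng = np.random.default_rng(2)

# Toy evoked response: a theta and a beta-gamma component plus noise
evoked = (np.sin(2 * np.pi * 6 * t) * np.exp(-t / 0.3)
          + 0.5 * np.sin(2 * np.pi * 30 * t) * np.exp(-t / 0.1)
          + 0.2 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amplitude = np.abs(np.fft.rfft(evoked)) / t.size

theta = amplitude[(freqs >= 4) & (freqs <= 8)].mean()
beta_gamma = amplitude[(freqs >= 15) & (freqs <= 40)].mean()
print(f"theta: {theta:.4f}, beta-gamma: {beta_gamma:.4f}")
```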
Music listening has become a highly individualized activity with smartphones and music streaming services providing listeners with absolute freedom to listen to any kind of music in any situation. Until now, little has been written about the processes underlying the selection of music in daily life. The present study aimed to disentangle some of the complex processes among the listener, situation, and functions of music listening involved in music selection. Utilizing the experience sampling method, data were collected from 119 participants using a smartphone application. For 10 consecutive days, participants received 14 prompts using stratified-random sampling throughout the day and reported on their music-listening behavior. Statistical learning procedures on multilevel regression models and multilevel structural equation modeling were used to determine the most important predictors and analyze mediation processes between person, situation, functions of listening, and music selection. Results revealed that the features of music selected in daily life were predominantly determined by situational characteristics, whereas consistent individual differences were of minor importance. Functions of music listening were found to act as a mediator between characteristics of the situation and music-selection behavior. We further observed several significant random effects, which indicated that individuals differed in how situational variables affected their music selection behavior. Our findings suggest a need to shift the focus of music-listening research from individual differences to situational influences, including potential person-situation interactions.
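A minimal sketch of the kind of multilevel (mixed-effects) regression mentioned above, assuming a long-format experience-sampling table with one row per listening episode. Variable names such as `tempo` and `situation_arousal` are hypothetical and the data are simulated; the point is only to show random intercepts and slopes across participants, i.e., person-situation variation in how situational variables relate to music selection.

```python
# Minimal sketch with simulated data (not the study's dataset or model spec).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_participants, n_prompts = 119, 140            # 10 days x 14 prompts
data = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_prompts),
    "situation_arousal": rng.standard_normal(n_participants * n_prompts),
})
# Toy outcome: tempo of selected music depends on situational arousal,
# with participant-specific intercepts and slopes (random effects).
intercepts = rng.normal(120, 10, n_participants)
slopes = rng.normal(8, 3, n_participants)
data["tempo"] = (intercepts[data["participant"]]
                 + slopes[data["participant"]] * data["situation_arousal"]
                 + rng.normal(0, 5, len(data)))

# Multilevel model: fixed effect of the situation, random intercept and slope
model = smf.mixedlm("tempo ~ situation_arousal", data,
                    groups=data["participant"],
                    re_formula="~situation_arousal")
print(model.fit().summary())
```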
To prepare for an impending event of unknown temporal distribution, humans internally increase the perceived probability of event onset as time elapses. This effect is termed the hazard rate of events. We tested how the neural encoding of the hazard rate changes when participants are given prior information on temporal event probability. We recorded behavioral and electroencephalographic (EEG) data while participants listened to continuously repeating five-tone sequences, composed of four standard tones followed by a non-target deviant tone, delivered at slow (1.6 Hz) or fast (4 Hz) rates. The task was to detect a rare target tone, which appeared with equal probability at position two, three, or four of the repeating sequence. In this design, potential target position acts as a proxy for elapsed time. For participants uninformed about the target's distribution, elapsed time to the uncertain target onset increased response speed, displaying a significant hazard rate effect at both slow and fast stimulus rates. However, only in fast sequences did prior information about the target's temporal distribution interact with elapsed time, suppressing the hazard rate. Importantly, in the fast, uninformed condition, pre-stimulus power synchronization in the beta band (Beta 1, 15–19 Hz) predicted the hazard rate of response times. Prior information suppressed pre-stimulus power synchronization in the same band, which nonetheless still significantly predicted response times. We conclude that Beta 1 power does not simply encode the hazard rate but, more generally, internal estimates of temporal event probability based on contextual information.
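The hazard-rate construct can be made concrete with the target positions used above: when the target is equally likely at position two, three, or four, its conditional onset probability rises from 1/3 to 1/2 to 1 as positions pass without a target. A small worked example (illustrative arithmetic only):

```python
# Hazard rate for a target equally likely at position 2, 3, or 4:
# hazard(t) = P(target at t) / P(target has not yet occurred before t)
probabilities = {2: 1 / 3, 3: 1 / 3, 4: 1 / 3}

survival = 1.0  # probability the target has not occurred yet
for position, p in sorted(probabilities.items()):
    hazard = p / survival
    print(f"position {position}: hazard = {hazard:.2f}")
    survival -= p
# -> 0.33, 0.50, 1.00: perceived event probability rises as time elapses,
#    which is the behavioral hazard-rate effect the study measures.
```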
The lateralization of neuronal processing underpinning hearing, speech, language, and music is widely studied, vigorously debated, and still not understood in a satisfactory manner. One set of hypotheses focuses on the temporal structure of perceptual experience and links auditory cortex asymmetries to underlying differences in neural populations with differential temporal sensitivity (e.g., ideas advanced by Zatorre et al. (2002) and Poeppel (2003)). The Asymmetric Sampling in Time (AST) theory (Poeppel, 2003) builds on cytoarchitectonic differences between the auditory cortices and predicts that modulation frequencies within roughly the range of the syllable rate are more accurately tracked by the right hemisphere. To date, this conjecture is reasonably well supported, since, while there is some heterogeneity in the reported findings, the predicted asymmetrical entrainment has been observed in various experimental protocols. Here, we show that under specific processing demands the rightward dominance disappears. We propose an enriched and modified version of the asymmetric sampling hypothesis in the context of speech. Recent work (Rimmele et al., 2018b) proposes two different mechanisms underlying auditory tracking of the speech envelope: one derived from the intrinsic oscillatory properties of auditory regions, the other induced by top-down signals coming from non-auditory regions of the brain. We propose that under non-speech listening conditions the intrinsic auditory mechanism dominates and thus, in line with AST, entrainment is rightward lateralized, as is widely observed. However, (i) depending on individual brain structural/functional differences, and/or (ii) in the context of specific speech listening conditions, the relative weight of the top-down mechanism can increase. In this scenario, the typically observed auditory sampling asymmetry (and its rightward dominance) diminishes or vanishes.
Talking about emotion and sharing emotional experiences is a key component of human interaction. Specifically, individuals often consider the reactions of other people when evaluating the meaning and impact of an emotional stimulus. It has not yet been investigated, however, how emotional arousal ratings and physiological responses elicited by affective stimuli are influenced by the rating of an interaction partner. In the present study, pairs of participants were asked to rate and communicate the degree of their emotional arousal while viewing affective pictures. Strikingly, participants adjusted their arousal ratings to match those of their interaction partner. In anticipation of the affective picture, the interaction partner's arousal ratings correlated positively with activity in the anterior insula and prefrontal cortex. During picture presentation, social influence was reflected in the ventral striatum, that is, activity in the ventral striatum correlated negatively with the interaction partner's ratings. The results show that emotional alignment through the influence of another person's communicated experience must be considered a complex phenomenon integrating different components, including emotion anticipation and conformity.
Beauty is the single most frequently and most broadly used aesthetic virtue term. The present study aimed at providing higher conceptual resolution to the broader notion of beauty by comparing it with three closely related aesthetically evaluative concepts which are likewise lexicalized across many languages: elegance, grace(fulness), and sexiness. We administered a variety of questionnaires that targeted perceptual qualia, cognitive and affective evaluations, as well as specific object properties that are associated with beauty, elegance, grace, and sexiness in personal looks, movements, objects of design, and other domains. This allowed us to reveal distinct and highly nuanced profiles of how a beautiful, elegant, graceful, and sexy appearance is subjectively perceived. As aesthetics is all about nuances, the fine-grained conceptual analysis of the four target concepts of our study provides crucial distinctions for future research.
A body of research convincingly demonstrates a role for synchronization of auditory cortex to the rhythmic structure of sounds, including speech and music. Some studies hypothesize that an oscillator in auditory cortex could underlie important temporal processes such as segmentation and prediction. An important critique of these findings raises the plausible concern that what is measured is perhaps not an oscillator but instead a sequence of evoked responses. The two distinct mechanisms could look very similar in the case of rhythmic input, but an oscillator might better provide the computational roles mentioned above (i.e., segmentation and prediction). We advance an approach to adjudicate between the two models: analyzing the phase lag between stimulus and neural signal across different stimulation rates. We ran numerical simulations of evoked and oscillatory computational models, showing that in the evoked case phase lag is heavily rate-dependent, while the oscillatory model displays marked phase concentration across stimulation rates. Next, we compared these model predictions with magnetoencephalography data recorded while participants listened to music of varying note rates. Our results show that the phase concentration of the experimental data is more in line with the oscillatory model than with the evoked model. This finding supports an auditory cortical signal that (i) contains components of both bottom-up evoked responses and internal oscillatory synchronization, whose strengths are weighted by their appropriateness for particular stimulus types, and (ii) cannot be explained by evoked responses alone.
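The model comparison described above hinges on how the stimulus-brain phase lag behaves across stimulation rates. The toy simulation below (our own illustration, not the paper's code) contrasts a purely evoked model with a fixed response latency against an oscillator with a fixed phase lag, and summarizes each with a phase-concentration measure (mean resultant vector length); all parameter values are assumed.

```python
# Toy contrast of the two models' signatures across stimulation rates.
import numpy as np

rates = np.array([1.0, 2.0, 4.0, 8.0])    # note rates in Hz (illustrative)
latency = 0.05                             # evoked-response latency in seconds
oscillator_phase_lag = 0.5                 # constant phase lag in radians

# Phase lag at the stimulation frequency under each model:
evoked_lags = 2 * np.pi * rates * latency           # fixed latency -> lag grows with rate
oscillatory_lags = np.full(rates.shape, oscillator_phase_lag)  # fixed phase lag

def phase_concentration(phases):
    """Length of the mean resultant vector (1 = identical phase across rates)."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

print("evoked model:     ", phase_concentration(evoked_lags))
print("oscillator model: ", phase_concentration(oscillatory_lags))
```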
Congenitally blind individuals have been shown to activate the visual cortex during non-visual tasks. The neuronal mechanisms of such cross-modal activation are not fully understood. Here, we used an auditory working memory training paradigm in congenitally blind and sighted adults. We hypothesized that the visual cortex becomes integrated into auditory working memory networks after these networks have been challenged by training. We investigated the spectral profile of the functional networks that mediate cross-modal reorganization following visual deprivation. A training-induced integration of the visual cortex into task-related networks in congenitally blind individuals was expected to result in changes in long-range functional connectivity (imaginary coherency) between the visual cortex and working memory networks in the theta, beta, and gamma bands. Magnetoencephalographic data were recorded in congenitally blind and sighted individuals during resting state and during a voice-based working memory task; the task was performed before and after working memory training with either auditory or tactile stimuli, or a control condition. Auditory working memory training strengthened theta-band (2.5-5 Hz) connectivity in the sighted and beta-band (17.5-22.5 Hz) connectivity in the blind. In sighted participants, theta-band connectivity increased between brain areas typically involved in auditory working memory (inferior frontal, superior temporal, and insular cortex). In blind participants, beta-band networks largely emerged during the training, and connectivity increased between brain areas involved in auditory working memory and, as predicted, the visual cortex. Our findings highlight long-range connectivity as a key mechanism of functional reorganization following congenital blindness and provide new insights into the spectral characteristics of functional network connectivity.
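Imaginary coherency, the connectivity measure named above, is the imaginary part of the normalized cross-spectrum and discounts zero-lag (volume-conduction-like) coupling. The sketch below computes it on toy signals within the beta band reported for the blind group; the sampling rate, signals, and segment length are illustrative assumptions, not the study's source-space pipeline.

```python
# Hedged sketch of imaginary coherency on toy signals (Nolte-style metric).
import numpy as np
from scipy.signal import csd, welch

fs = 250
rng = np.random.default_rng(4)
n = 60 * fs                                   # one minute of toy source data
visual = rng.standard_normal(n)
frontal = np.roll(visual, 10) + rng.standard_normal(n)  # lagged, shared signal

f, s_xy = csd(visual, frontal, fs=fs, nperseg=fs)
_, s_xx = welch(visual, fs=fs, nperseg=fs)
_, s_yy = welch(frontal, fs=fs, nperseg=fs)

imcoh = np.imag(s_xy) / np.sqrt(s_xx * s_yy)   # imaginary part of coherency
beta_band = (f >= 17.5) & (f <= 22.5)          # band limits from the abstract
print("mean |ImCoh| in the beta band:", float(np.mean(np.abs(imcoh[beta_band]))))
```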