MPI für empirische Ästhetik
Cortical tracking of stimulus features (such as the envelope) is a crucial tractable neural mechanism, allowing us to investigate how we process continuous music. We here tested whether cortical and behavioural tracking of beat, typically related to rhythm processing, are modulated by pitch predictability. In two experiments (n=20, n=52), participants’ ability to tap along to the beat of musical sequences was measured for tonal (high pitch predictability) and atonal (low pitch predictability) music. In Experiment 1, we additionally measured participants’ EEG and analysed cortical tracking of the acoustic envelope and of pitch surprisal (using IDyOM). In both experiments, finger-tapping performance was better in the tonal than the atonal condition, indicating a positive effect of pitch predictability on behavioural rhythm processing. Neural data revealed that the acoustic envelope was tracked more strongly while listening to atonal than tonal music, potentially reflecting listeners’ violated pitch expectations. Our findings show that cortical envelope tracking, beyond reflecting musical rhythm processing, is modulated by pitch predictability (as well as musical expertise and enjoyment). Stronger cortical surprisal tracking was linked to overall worse envelope tracking and worse finger-tapping performance for atonal music. Specifically, the low pitch predictability in atonal music seems to draw attentional resources, resulting in a reduced ability to follow the rhythm behaviourally. Overall, cortical envelope and surprisal tracking were differentially related to behaviour in tonal and atonal music, likely reflecting differential processing under conditions of high and low predictability. Taken together, our results show diverse effects of pitch predictability on musical rhythm processing.
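The envelope-tracking measure at the heart of this abstract can be sketched in a few lines: correlate a stimulus amplitude envelope with a (possibly delayed) neural time series. The helper names and toy data below are illustrative assumptions, not the study's actual pipeline, which used EEG and IDyOM-based surprisal.

```python
# Hedged sketch: envelope "tracking" as the best stimulus-brain Pearson
# correlation over a few positive lags (neural signal lagging the stimulus).
# Toy data; real analyses use filtered EEG and proper statistics.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def tracking_score(envelope, neural, max_lag=3):
    """Best correlation across small lags, as a crude tracking index."""
    best = -1.0
    for lag in range(max_lag + 1):
        n = len(envelope) - lag
        best = max(best, pearson(envelope[:n], neural[lag:lag + n]))
    return best

# A neural trace that follows the envelope with a one-sample delay
env = [0.0, 1.0, 0.5, 1.5, 0.2, 1.2, 0.4, 1.4]
neural = [0.1] + env[:-1]
print(round(tracking_score(env, neural), 3))  # 1.0 (perfect at lag 1)
```

A condition contrast such as tonal vs. atonal would then compare such scores across stimulus sets.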
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem-processing towards the left hemisphere and a bias of song-processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
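The proportion-of-significant-autocorrelations (PSA) idea lends itself to a compact illustration. The lag range and the 2/√N white-noise significance bound below are generic textbook choices; the study's exact procedure may differ.

```python
# Illustrative PSA-style measure for a pitch contour: the share of lags
# whose autocorrelation exceeds an approximate 95% white-noise bound.
import math

def autocorr(x, lag):
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

def psa(pitch, max_lag=10):
    bound = 2.0 / math.sqrt(len(pitch))
    sig = [lag for lag in range(1, max_lag + 1)
           if abs(autocorr(pitch, lag)) > bound]
    return len(sig) / max_lag

# A periodic ("melodic") contour yields many significant autocorrelations
periodic = [math.sin(2 * math.pi * i / 8) for i in range(80)]
print(psa(periodic))  # 0.7
```

Higher PSA values would correspond to more strongly patterned melodic contours.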
Free gaze and moving images are typically avoided in EEG experiments due to the expected generation of artifacts and noise. Yet for a growing number of research questions, loosening these rigorous restrictions would be beneficial. Among these is research on visual aesthetic experiences, which often involve open-ended exploration of highly variable stimuli. Here we systematically compare the effect of conservative vs. more liberal experimental settings on various measures of behavior, brain activity and physiology in an aesthetic rating task. Our primary aim was to assess EEG signal quality. 43 participants either maintained fixation or were allowed to gaze freely, and viewed either static images or dynamic (video) stimuli consisting of dance performances or nature scenes. A passive auditory background task (auditory steady-state response; ASSR) was added as a proxy measure for overall EEG recording quality. We recorded EEG, ECG and eye tracking data, and participants rated their aesthetic preference and state of boredom on each trial. Whereas both behavioral ratings and gaze behavior were affected by task and stimulus manipulations, EEG SNR was barely affected and generally robust across all conditions, despite only minimal preprocessing and no trial rejection. In particular, we show that using video stimuli does not necessarily result in lower EEG quality and can, on the contrary, significantly reduce eye movements while increasing both the participants’ aesthetic response and general task engagement. We see these as encouraging results indicating that — at least in the lab — more liberal experimental conditions can be adopted without significant loss of signal quality.
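The ASSR-based quality proxy can be thought of as power at the stimulation frequency relative to neighbouring frequency bins. The 40 Hz rate, toy signal, and bin layout below are assumptions for demonstration, not the study's recording parameters.

```python
# Hedged sketch of a spectral SNR measure at a steady-state frequency.
import cmath, math, random

def power_spectrum(signal):
    """Naive DFT power spectrum (sufficient for a short toy signal)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

def snr_at_bin(pxx, k, flank=3, gap=1):
    """Power at bin k relative to mean power in flanking bins."""
    neighbours = pxx[k - gap - flank:k - gap] + pxx[k + gap + 1:k + gap + 1 + flank]
    return pxx[k] / (sum(neighbours) / len(neighbours))

random.seed(1)
fs, f0, n = 200, 40, 200                 # 1 s at 200 Hz; bin k = k Hz
sig = [math.sin(2 * math.pi * f0 * t / fs) + 0.05 * random.gauss(0, 1)
       for t in range(n)]
pxx = power_spectrum(sig)
print(snr_at_bin(pxx, f0) > 10.0)        # True: clear 40 Hz response
```

A robust SNR at the tagging frequency across conditions is the kind of evidence the abstract reports.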
When experienced in-person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similar to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used an interactive Monet art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in the responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and further that such inter-individual differences at the trait level were mediated by subjective experiences of pleasure and especially meaningfulness felt during the online-art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.
In this study, we investigated the impact of two constraints on the linear order of constituents in German preschool children’s and adults’ speech production: a rhythmic one (*LAPSE, militating against sequences of unstressed syllables) and a semantic one (ANIM, requiring animate referents to be named before inanimate ones). Participants were asked to produce coordinated bare noun phrases in response to picture stimuli (e.g., Delfin und Planet, ‘dolphin and planet’) without any predefined word order. Overall, children and adults preferentially produced animate items before inanimate ones, confirming findings of Prat-Sala, Shillcock, and Sorace (2000). In the group of preschoolers, the strength of the animacy effect correlated positively with age. Furthermore, the order of the conjuncts was affected by the rhythmic constraint, such that disrhythmic sequences, i.e., stress lapses, were avoided. In both groups, the latter result was significant when the two stimulus pictures did not vary with respect to animacy. In sum, our findings suggest a stronger influence of animacy compared to rhythmic well-formedness on conjunct ordering for German-speaking children and adults, in line with findings by McDonald, Bock, and Kelly (1993), who investigated English-speaking adults.
The ability to extract regularities from the environment is arguably an adaptive characteristic of intelligent systems. In the context of speech, statistical learning is thought to be an important mechanism for language acquisition. By considering individual differences in speech auditory-motor synchronization, an independent component analysis of fMRI data revealed that the neural substrates of statistical word form learning are not fully shared across individuals. While a network of auditory and superior pre/motor regions is universally activated in the process of learning, a fronto-parietal network is instead additionally and selectively engaged by some individuals, boosting their performance. Furthermore, interfering with the use of this network via articulatory suppression (producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights into language-related statistical learning and reconciles previous contrasting findings, while highlighting the need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum between about 4.3 and 5.5 Hz, reflecting the production and processing of linguistic information chunks (syllables, words) every ∼200 ms. Interestingly, ∼200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we demonstrate that German readers sample written text at ∼5 Hz. A subsequent meta-analysis with 142 studies from 14 languages replicates this result, but also shows that sampling frequencies vary across languages between 3.9 and 5.2 Hz, and that this variation systematically depends on the complexity of the writing systems (character-based vs. alphabetic systems, orthographic transparency). Finally, we demonstrate empirically a positive correlation between speech spectrum and eye-movement sampling in low-skilled readers. Based on this convergent evidence, we propose that during reading, our brain’s linguistic processing systems imprint a preferred processing rate, i.e., the rate of spoken language production and perception, onto the oculomotor system.
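The correspondence between ∼200 ms fixations and a ∼5 Hz sampling rate is simple arithmetic; a toy computation with made-up fixation durations:

```python
# Mean fixation duration (ms) -> fixations per second ("sampling rate").
# Durations are invented for illustration.

def sampling_rate_hz(fixation_durations_ms):
    mean_ms = sum(fixation_durations_ms) / len(fixation_durations_ms)
    return 1000.0 / mean_ms

fixations = [180, 210, 190, 220, 200]  # ms; mean = 200 ms
print(sampling_rate_hz(fixations))     # 5.0
```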
Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, relative duration and relative distance in time of two sequentially organized events (standard S, with constant duration, and comparison C, varying trial-by-trial) are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. Importantly, a significant interaction suggests that a longer interstimulus interval (ISI) helps counteract this serial distortion effect only if the constant S is in first position, but not if the unpredictable C is in first position. These results suggest the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. We simulated our behavioral results with a Bayesian model and replicated the finding that participants disproportionately expand first-position dynamic (unpredictable) short events. Our results clarify the mechanics generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.
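The Bayesian model named above is not spelled out here, but its core mechanism (precision-weighted fusion of a noisy duration measurement with a prior over durations) can be sketched; the prior weight below is an arbitrary assumption.

```python
# Minimal Bayesian-observer sketch: perceived duration as a weighted average
# of the sensory measurement and the prior mean. This produces the classic
# central-tendency pattern: short events expand, long events contract.

def perceived(measured_ms, prior_mean_ms, w_prior=0.3):
    return (1 - w_prior) * measured_ms + w_prior * prior_mean_ms

short, long_, prior = 400, 800, 600  # ms
print(round(perceived(short, prior), 1))  # 460.0 -> short event expands
print(round(perceived(long_, prior), 1))  # 740.0 -> long event contracts
```

Making the effective weight depend on predictability would mimic the reported asymmetry between the constant S and the unpredictable C.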
Research points to neurofunctional differences underlying fluent speech production in stutterers and non-stutterers. There has been considerably less work focusing on the processes that underlie stuttered speech, primarily due to the difficulty of reliably eliciting stuttering in the unnatural contexts associated with neuroimaging experiments. We used magnetoencephalography (MEG) to test the hypothesis that stuttering events result from global motor inhibition, a “freeze” response typically characterized by increased beta power in nodes of the action-stopping network. We leveraged a novel clinical interview to develop participant-specific stimuli in order to elicit comparable numbers of stuttered and fluent trials. Twenty-nine adult stutterers participated. The paradigm included a cue prior to a go signal, which allowed us to isolate processes associated with stuttered and fluent trials prior to speech initiation. During this pre-speech time window, stuttered trials were associated with greater beta power in the right pre-supplementary motor area, a key node in the action-stopping network, compared to fluent trials. Beta power in the right pre-supplementary motor area was related to a clinical measure of stuttering severity. We also found that anticipated words identified independently by participants were stuttered more often than those generated by the researchers, which were based on the participants’ reported anticipated sounds. This suggests that global motor inhibition results from stuttering anticipation. This study represents the largest comparison of stuttered and fluent speech to date. The findings provide a foundation for clinical trials that test the efficacy of neuromodulation on stuttering. Moreover, our study demonstrates the feasibility of using our approach for eliciting stuttering during MEG and functional magnetic resonance imaging experiments so that the neurobiological bases of stuttered speech can be further elucidated.
When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioral experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of the systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (Digit Span) and higher linguistic, model-based sentence predictability – particularly so at higher speech rates and for individuals with high auditory-motor synchronization. These findings support the notion of an individual preferred auditory–motor regime that allows for optimal speech processing. The data provide evidence for a model that assigns a central role to motor-system-dependent individual flexibility in continuous speech comprehension.
Speech imagery (the ability to generate internally quasi-perceptual experiences of speech) is a fundamental ability linked to cognitive functions such as inner speech, phonological working memory, and predictive processing. Speech imagery is also considered an ideal tool to test theories of overt speech. The study of speech imagery is challenging, primarily because of the absence of overt behavioral output as well as the difficulty in temporally aligning imagery events across trials and individuals. We used magnetoencephalography (MEG) paired with temporal-generalization-based neural decoding and a simple behavioral protocol to determine the processing stages underlying speech imagery. We monitored participants’ lip and jaw micromovements during mental imagery of syllable production using electromyography. Decoding participants’ imagined syllables revealed a sequence of task-elicited representations. Importantly, participants’ micromovements did not discriminate between syllables. The decoded sequence of neuronal patterns maps well onto the predictions of current computational models of overt speech motor control and provides evidence for hypothesized internal and external feedback loops for speech planning and production, respectively. Additionally, the results expose the compressed nature of representations during planning which contrasts with the natural rate at which internal productions unfold. We conjecture that the same sequence underlies the motor-based generation of sensory predictions that modulate speech perception as well as the hypothesized articulatory loop of phonological working memory. The results underscore the potential of speech imagery, based on new experimental approaches and analytical methods, and further pave the way for successful non-invasive brain-computer interfaces.
Music, like language, is characterized by hierarchically organized structure that unfolds over time. Music listening therefore requires not only the tracking of notes and beats but also internally constructing high-level musical structures or phrases and anticipating incoming contents. Unlike for language, mechanistic evidence for online musical segmentation and prediction at a structural level is sparse. We recorded neurophysiological data from participants listening to music in its original forms as well as in manipulated versions with locally or globally reversed harmonic structures. We discovered a low-frequency neural component that modulated the neural rhythms of beat tracking and reliably parsed musical phrases. We next identified phrasal phase precession, suggesting that listeners established structural predictions from ongoing listening experience to track phrasal boundaries. The data point to brain mechanisms that listeners use to segment continuous music at the phrasal level and to predict abstract structural features of music.
Background/Objectives: Sharing the bed with a partner is common among adults and impacts sleep quality, with potential implications for mental health. However, findings to date are contradictory, and polysomnographic data on co-sleeping couples in particular are extremely rare. The present study aimed to investigate the effects of a bed partner's presence on individual and dyadic sleep neurophysiology.
Methods: Young healthy heterosexual couples underwent sleep-lab-based polysomnography of two sleeping arrangements: individual sleep and co-sleep. Individual and dyadic sleep parameters (i.e., synchronization of sleep stages) were collected. The latter were assessed using cross-recurrence quantification analysis. Additionally, subjective sleep quality, relationship characteristics, and chronotype were monitored. Data were analyzed comparing co-sleep vs. individual sleep. Interaction effects of the sleeping arrangement with gender, chronotype, or relationship characteristics were moreover tested.
Results: As compared to sleeping individually, co-sleeping was associated with about 10% more REM sleep, less fragmented REM sleep (p = 0.008), longer undisturbed REM fragments (p = 0.0006), and more limb movements (p = 0.007). None of the other sleep stages was significantly altered. Social support interacted with sleeping arrangement such that individuals with suboptimal social support showed the largest effect of the sleeping arrangement on REM sleep. Sleep architectures were more synchronized between partners during co-sleep (p = 0.005) even if wake phases were excluded (p = 0.022). Moreover, sleep architectures were significantly coupled across lags of ±5 min. Depth of relationship represented an additional significant main effect regarding synchronization, reflecting a positive association between the two. Neither REM sleep nor synchronization was influenced by gender, chronotype, or other relationship characteristics.
Conclusion: Depending on the sleeping arrangement, couple's sleep architecture and synchronization show alterations that are modified by relationship characteristics. We discuss that these alterations could be part of a self-enhancing feedback loop of REM sleep and sociality and a mechanism through which sociality prevents mental illness.
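The dyadic synchronization measure (cross-recurrence quantification analysis in the Methods) can be approximated by a lag-resolved stage-match rate between two hypnograms. This is a deliberate simplification of full CRQA, with invented toy data.

```python
# Share of epochs in which partner A's sleep stage at time t matches
# partner B's stage at time t+lag; the peak across lags is a crude
# synchronization index (a stand-in for full CRQA).

def match_rate(a, b, lag=0):
    if lag >= 0:
        pairs = list(zip(a[:len(a) - lag], b[lag:]))
    else:
        pairs = list(zip(a[-lag:], b[:len(b) + lag]))
    return sum(x == y for x, y in pairs) / len(pairs)

def peak_sync(a, b, max_lag=2):
    return max(match_rate(a, b, lag) for lag in range(-max_lag, max_lag + 1))

# Toy hypnograms (W = wake, N = NREM, R = REM); B trails A by one epoch
a = ["W", "N", "N", "R", "R", "N", "W", "N"]
b = ["W"] + a[:-1]
print(match_rate(a, b, 0), peak_sync(a, b))  # 0.375 1.0
```

Allowing small lags, as in the study's ±5 min window, is what reveals coupling that a zero-lag comparison would miss.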
The ability to vocalize is ubiquitous in vertebrates, but neural networks underlying vocal control remain poorly understood. Here, we performed simultaneous neuronal recordings in the frontal cortex and dorsal striatum (caudate nucleus, CN) during the production of echolocation pulses and communication calls in bats. This approach allowed us to assess the general aspects underlying vocal production in mammals and the unique evolutionary adaptations of bat echolocation. Our data indicate that before vocalization, a distinctive change in high-gamma and beta oscillations (50–80 Hz and 12–30 Hz, respectively) takes place in the bat frontal cortex and dorsal striatum. Such precise fine-tuning of neural oscillations could allow animals to selectively activate motor programs required for the production of either echolocation or communication vocalizations. Moreover, the functional coupling between frontal and striatal areas, occurring in the theta oscillatory band (4–8 Hz), differs markedly at the millisecond level, depending on whether the animals are in a navigational mode (that is, emitting echolocation pulses) or in a social communication mode (emitting communication calls). Overall, this study indicates that fronto-striatal oscillations could provide a neural correlate for vocal control in bats.
Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4–8 Hz) and beta–gamma (around 15–40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
Music listening has become a highly individualized activity with smartphones and music streaming services providing listeners with absolute freedom to listen to any kind of music in any situation. Until now, little has been written about the processes underlying the selection of music in daily life. The present study aimed to disentangle some of the complex processes among the listener, situation, and functions of music listening involved in music selection. Utilizing the experience sampling method, data were collected from 119 participants using a smartphone application. For 10 consecutive days, participants received 14 prompts using stratified-random sampling throughout the day and reported on their music-listening behavior. Statistical learning procedures on multilevel regression models and multilevel structural equation modeling were used to determine the most important predictors and analyze mediation processes between person, situation, functions of listening, and music selection. Results revealed that the features of music selected in daily life were predominantly determined by situational characteristics, whereas consistent individual differences were of minor importance. Functions of music listening were found to act as a mediator between characteristics of the situation and music-selection behavior. We further observed several significant random effects, which indicated that individuals differed in how situational variables affected their music-selection behavior. Our findings suggest a need to shift the focus of music-listening research from individual differences to situational influences, including potential person-situation interactions.
To prepare for an impending event of unknown temporal distribution, humans internally increase the perceived probability of event onset as time elapses. This effect is termed the hazard rate of events. We tested how the neural encoding of hazard rate changes by providing human participants with prior information on temporal event probability. We recorded behavioral and electroencephalographic (EEG) data while participants listened to continuously repeating five-tone sequences, composed of four standard tones followed by a non-target deviant tone, delivered at slow (1.6 Hz) or fast (4 Hz) rates. The task was to detect a rare target tone, which equiprobably appeared at either position two, three or four of the repeating sequence. In this design, potential target position acts as a proxy for elapsed time. For participants uninformed about the target’s distribution, elapsed time to uncertain target onset increased response speed, displaying a significant hazard rate effect at both slow and fast stimulus rates. However, only in fast sequences did prior information about the target’s temporal distribution interact with elapsed time, suppressing the hazard rate. Importantly, in the fast, uninformed condition, pre-stimulus power synchronization in the beta band (Beta 1, 15–19 Hz) predicted the hazard rate of response times. Prior information suppressed pre-stimulus power synchronization in the same band, while power in this band still significantly predicted response times. We conclude that Beta 1 power does not simply encode the hazard rate, but—more generally—internal estimates of temporal event probability based upon contextual information.
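The hazard rate has a standard discrete definition: the probability of the event at position t given that it has not yet occurred. For the task's equiprobable target positions it rises to certainty at the final position. This is the textbook computation, not the paper's exact analysis.

```python
# Discrete hazard: h(t) = P(event at t) / P(event has not yet occurred).

def hazard(probs):
    h, survived = [], 1.0
    for p in probs:
        h.append(p / survived)
        survived -= p
    return h

# Target equally likely at positions two, three, or four
h = hazard([1 / 3, 1 / 3, 1 / 3])
print([round(x, 3) for x in h])  # [0.333, 0.5, 1.0]
```

The monotonic rise of h across positions is what predicts faster responses at later positions, i.e., the behavioural hazard rate effect.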
The lateralization of neuronal processing underpinning hearing, speech, language, and music is widely studied, vigorously debated, and still not understood in a satisfactory manner. One set of hypotheses focuses on the temporal structure of perceptual experience and links auditory cortex asymmetries to underlying differences in neural populations with differential temporal sensitivity (e.g., ideas advanced by Zatorre et al. (2002) and Poeppel (2003)). The Asymmetric Sampling in Time (AST) theory (Poeppel, 2003) builds on cytoarchitectonic differences between auditory cortices and predicts that modulation frequencies within roughly the range of the syllable rate are more accurately tracked by the right hemisphere. To date, this conjecture is reasonably well supported, since – while there is some heterogeneity in the reported findings – the predicted asymmetrical entrainment has been observed in various experimental protocols. Here, we show that under specific processing demands, the rightward dominance disappears. We propose an enriched and modified version of the asymmetric sampling hypothesis in the context of speech. Recent work (Rimmele et al., 2018b) proposes two different mechanisms to underlie the auditory tracking of the speech envelope: one derived from the intrinsic oscillatory properties of auditory regions; the other induced by top-down signals coming from other non-auditory regions of the brain. We propose that under non-speech listening conditions, the intrinsic auditory mechanism dominates and thus, in line with AST, entrainment is rightward lateralized, as is widely observed. However, (i) depending on individual brain structural/functional differences, and/or (ii) in the context of specific speech listening conditions, the relative weight of the top-down mechanism can increase. In this scenario, the typically observed auditory sampling asymmetry (and its rightward dominance) diminishes or vanishes.
Talking about emotion and sharing emotional experiences is a key component of human interaction. Specifically, individuals often consider the reactions of other people when evaluating the meaning and impact of an emotional stimulus. It has not yet been investigated, however, how emotional arousal ratings and physiological responses elicited by affective stimuli are influenced by the rating of an interaction partner. In the present study, pairs of participants were asked to rate and communicate the degree of their emotional arousal while viewing affective pictures. Strikingly, participants adjusted their arousal ratings to match up with their interaction partner. In anticipation of the affective picture, the interaction partner’s arousal ratings correlated positively with activity in anterior insula and prefrontal cortex. During picture presentation, social influence was reflected in the ventral striatum, that is, activity in the ventral striatum correlated negatively with the interaction partner’s ratings. Results of the study show that emotional alignment through the influence of another person’s communicated experience has to be considered as a complex phenomenon integrating different components including emotion anticipation and conformity.
Beauty is the single most frequently and most broadly used aesthetic virtue term. The present study aimed at providing higher conceptual resolution to the broader notion of beauty by comparing it with three closely related aesthetically evaluative concepts which are likewise lexicalized across many languages: elegance, grace(fulness), and sexiness. We administered a variety of questionnaires that targeted perceptual qualia, cognitive and affective evaluations, as well as specific object properties that are associated with beauty, elegance, grace, and sexiness in personal looks, movements, objects of design, and other domains. This allowed us to reveal distinct and highly nuanced profiles of how a beautiful, elegant, graceful, and sexy appearance is subjectively perceived. As aesthetics is all about nuances, the fine-grained conceptual analysis of the four target concepts of our study provides crucial distinctions for future research.
A body of research convincingly demonstrates a role for synchronization of auditory cortex to rhythmic structure in sounds, including speech and music. Some studies hypothesize that an oscillator in auditory cortex could underlie important temporal processes such as segmentation and prediction. An important critique of these findings raises the plausible concern that what is measured is perhaps not an oscillator but instead a sequence of evoked responses. The two distinct mechanisms could look very similar in the case of rhythmic input, but an oscillator might better provide the computational roles mentioned above (i.e., segmentation and prediction). We advance an approach to adjudicate between the two models: analyzing the phase lag between stimulus and neural signal across different stimulation rates. We ran numerical simulations of evoked and oscillatory computational models, showing that in the evoked case, phase lag is heavily rate-dependent, while the oscillatory model displays marked phase concentration across stimulation rates. Next, we compared these model predictions with magnetoencephalography data recorded while participants listened to music of varying note rates. Our results show that the phase concentration of the experimental data is more in line with the oscillatory model than with the evoked model. This finding supports an auditory cortical signal that (i) contains components of both bottom-up evoked responses and internal oscillatory synchronization whose strengths are weighted by their appropriateness for particular stimulus types and (ii) cannot be explained by evoked responses alone.
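The logic of the phase-lag diagnostic can be illustrated with a toy computation (our own sketch, not the authors' actual simulations): a purely evoked response with a fixed latency produces a phase lag that grows with stimulation rate, whereas an oscillator locking at a preferred phase yields concentrated lags across rates. The latency and locked-phase values below are arbitrary illustration choices.

```python
import numpy as np

def evoked_phase_lag(rates_hz, latency_s=0.05):
    """Evoked model: a fixed response latency maps onto a phase lag
    that grows with stimulation rate (lag = 2*pi*rate*latency)."""
    return (2 * np.pi * np.asarray(rates_hz) * latency_s) % (2 * np.pi)

def oscillator_phase_lag(rates_hz, locked_phase=np.pi / 4):
    """Oscillator model: the neural signal locks at (roughly) the same
    phase regardless of stimulation rate."""
    return np.full(np.asarray(rates_hz).shape, locked_phase)

def phase_concentration(phases):
    """Resultant vector length: 1 = identical phase lags across rates,
    0 = uniformly spread lags."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rates = np.array([1.0, 2.0, 4.0, 8.0])    # note rates in Hz
evoked = evoked_phase_lag(rates)           # lags spread out with rate
oscillatory = oscillator_phase_lag(rates)  # lags concentrated across rates
```

Comparing `phase_concentration(evoked)` with `phase_concentration(oscillatory)` reproduces the qualitative signature the abstract describes: high concentration under the oscillatory model, lower concentration under the evoked model.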
Congenitally blind individuals have been shown to activate the visual cortex during non-visual tasks. The neuronal mechanisms of such cross-modal activation are not fully understood. Here, we used an auditory working memory training paradigm in congenitally blind and in sighted adults. We hypothesized that the visual cortex gets integrated into auditory working memory networks after these networks have been challenged by training. We investigated the spectral profile of the functional networks that mediate cross-modal reorganization following visual deprivation. A training-induced integration of visual cortex into task-related networks in congenitally blind individuals was expected to result in changes in long-range functional connectivity in the theta, beta, and gamma bands (imaginary coherency) between visual cortex and working memory networks. Magnetoencephalographic data were recorded in congenitally blind and sighted individuals during resting state as well as during a voice-based working memory task; the task was performed before and after working memory training with either auditory or tactile stimuli, or a control condition. Auditory working memory training strengthened theta-band (2.5-5 Hz) connectivity in the sighted and beta-band (17.5-22.5 Hz) connectivity in the blind. In sighted participants, theta-band connectivity increased between brain areas typically involved in auditory working memory (inferior frontal, superior temporal, insular cortex). In blind participants, beta-band networks largely emerged during the training, and connectivity increased between brain areas involved in auditory working memory and, as predicted, the visual cortex. Our findings highlight long-range connectivity as a key mechanism of functional reorganization following congenital blindness and provide new insights into the spectral characteristics of functional network connectivity.
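Imaginary coherency, the connectivity measure used here, discards zero-phase-lag coupling of the kind produced by field spread, keeping only phase-lagged interactions. A minimal sketch of how it might be estimated (Welch-style cross-spectral averaging over non-overlapping segments; all signals and parameter values below are illustrative, not those of the study):

```python
import numpy as np

def imaginary_coherency(x, y, fs, nperseg):
    """Imaginary part of coherency between two signals, from cross-
    spectra averaged over non-overlapping segments. Zero-lag coupling
    (e.g., volume conduction) contributes nothing to the imaginary part."""
    n_seg = min(len(x), len(y)) // nperseg
    Sxx = Syy = 0.0
    Sxy = 0.0 + 0.0j
    for k in range(n_seg):
        sl = slice(k * nperseg, (k + 1) * nperseg)
        X, Y = np.fft.rfft(x[sl]), np.fft.rfft(y[sl])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.imag(Sxy / np.sqrt(Sxx * Syy))

# Toy check: y lags x by a quarter cycle at 20 Hz, which yields a large
# imaginary coherency at 20 Hz despite added noise.
fs = 128
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t - np.pi / 2) + 0.1 * rng.standard_normal(t.size)
freqs, icoh = imaginary_coherency(x, y, fs=fs, nperseg=128)
```

With a quarter-cycle lag the coupling is maximally visible to this measure; two signals coupled at zero lag would instead yield an imaginary part near zero.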
Music is an effective means of stress-reduction. However, to date there has been no systematic comparison between musical and language-based means of stress reduction in an ambulatory setting. Furthermore, although the aim for listening to music appears to play a role in its effect, this has not yet been investigated thoroughly. We compared musical means, language-based means like guided relaxation or self-enhancement exercises, and a combination of both with respect to their potential to reduce perceived stress. Furthermore, we investigated whether the aim one wants to achieve by listening to these means had an impact on their effect. We tested 64 participants (age: M = 40.09 years; 18 female) for 3–10 days during their everyday life using an app containing three means: musical means, language-based means, and a combination of both. For the music and the combination conditions participants were asked to select an aim: relaxation or activation. We measured perceived stress, relaxation, activation, and electrical skin resistance (ESR) as a marker of sympathetic nervous system (SNS) activity before and after using the app. Participants were instructed to use the app as often as desired. Overall, perceived stress was reduced after using the app, while perceived relaxation and activation were increased. There were no differences between the three means regarding their effect on perceived stress and relaxation, but music led to a greater increase in ESR and perceived activation compared to the other means. There was a decrease in ESR only for music. Moreover, perceived stress was reduced and perceived relaxation was increased to a greater extent if the aim “relaxation” had been selected. Perceived activation, however, showed a larger increase if the aim had been “activation,” which was even more marked in the case of music listening. Our results indicate that all three means reduced perceived stress and promoted feelings of relaxation and activation.
For enhancing feelings of activation music seems to be more effective than the other means, which was reflected in increased SNS activity as well. Furthermore, the choice of an aim plays an important role for the reduction of stress, and promotion of relaxation and activation.
Switching between reading tasks leads to phase-transitions in reading times in L1 and L2 readers
(2019)
Reading research uses different tasks to investigate different levels of the reading process, such as word recognition, syntactic parsing, or semantic integration. It seems to be tacitly assumed that the underlying cognitive processes that constitute reading are stable across those tasks. However, nothing is known about what happens when readers switch from one reading task to another. The stability assumptions of the reading process suggest that the cognitive system resolves this switching between two tasks quickly. Here, we present an alternative language-game hypothesis (LGH) of reading that begins by treating reading as a softly-assembled process and that assumes, instead of stability, context-sensitive flexibility of the reading process. LGH predicts that switching between two reading tasks leads to longer-lasting phase-transition-like patterns in the reading process. Using the nonlinear-dynamical tool of recurrence quantification analysis, we test these predictions by examining series of individual word reading times in self-paced reading tasks where native (L1) and second-language (L2) readers transition between random-word and ordered-text reading tasks. We find consistent evidence for phase-transitions in the reading times when readers switch from ordered text to random-word reading, but we find mixed evidence when readers transition from random-word to ordered-text reading. In the latter case, L2 readers show moderately stronger signs of phase-transitions compared to L1 readers, suggesting that familiarity with a language influences whether and how such transitions occur. The results provide evidence for LGH and suggest that the cognitive processes underlying reading are not fully stable across tasks but exhibit soft-assembly in the interaction between task and reader characteristics.
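Recurrence quantification analysis, the tool used here, starts from a binary recurrence matrix over the time series; the determinism measure (DET) then captures how much of that recurrence falls on diagonal line structures, i.e., repeated stretches of the trajectory. A minimal sketch for a scalar series (no embedding; the threshold and the toy series are our illustrative choices, not the authors' analysis settings):

```python
import numpy as np

def recurrence_matrix(series, radius):
    """Binary recurrence matrix: time points i and j 'recur' when the
    series values are closer than the chosen radius."""
    s = np.asarray(series, dtype=float)
    return (np.abs(s[:, None] - s[None, :]) <= radius).astype(int)

def determinism(rp, lmin=2):
    """DET: fraction of off-diagonal recurrent points that fall on
    diagonal line segments of length >= lmin (repeated trajectories)."""
    n = rp.shape[0]
    recurrent = det_points = 0
    for k in range(1, n):                 # upper off-diagonals suffice
        diag = np.diagonal(rp, offset=k)
        recurrent += int(diag.sum())
        run = 0
        for v in np.append(diag, 0):      # appended 0 flushes final run
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run
                run = 0
    return det_points / recurrent if recurrent else 0.0

# Toy contrast: a periodic series shows far more deterministic diagonal
# structure in its recurrence plot than uniform noise does.
t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 20)
noise = np.random.default_rng(1).uniform(-1, 1, 200)
det_periodic = determinism(recurrence_matrix(periodic, 0.1))
det_noise = determinism(recurrence_matrix(noise, 0.1))
```

A phase transition between reading modes would show up in such an analysis as a change in the recurrence structure of the word-by-word reading-time series around the task switch.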
In transferring the concept of flow to the context of fiction reading, a new approach to understanding the evolvement of reading pleasure is provided. This study presents the Reading Flow Short Scale (RFSS), the first reading-specific flow measurement tool. The RFSS was applied to 229 readers via online survey after 20 min of reading in self-selected novels. In a systematic analysis of psychometric properties, the RFSS’ factorial structure, reliability, and associations with theoretically related constructs were examined. As expected, the RFSS showed a two-factor structure, positive correlations with variables related to reading pleasure and flow, and an inverted U-shaped association with perceived fit between reader skills and text challenge. Comparisons of confirmatory factor analysis models confirmed that RFSS items loaded on different latent variables than items assessing other narrative engagement concepts, namely presence, identification, suspense, and cognitive mastery, and hence distinctly capture flow states in fiction reading. In sum, our findings indicate that the RFSS is a useful instrument for assessing flow states in fiction reading, thereby enriching the portfolio of measurement instruments in reading research.
Correction to: Nature Communications https://doi.org/10.1038/s41467-017-01045-x, published online 31 October 2017
It has come to our attention that we did not specify whether the stimulation magnitudes we report in this Article are peak amplitudes or peak-to-peak. All references to intensity given in mA in the manuscript refer to peak-to-peak amplitudes, except in Fig. 2, where the model is calibrated to 1 mA peak amplitude, as stated. In the original version of the paper we incorrectly calibrated the computational models to 1 mA peak-to-peak, rather than 1 mA peak amplitude. This means that we divided by a value twice as large as we should have. The correct estimated fields are therefore twice as large as shown in the original Fig. 2 and Supplementary Fig. 11. The corrected figures are now properly calibrated to 1 mA peak amplitude. Furthermore, the sentence in the first paragraph of the Results section ‘Intensity ranged from 0.5 to 2.5 mA (current density 0.125–0.625 mA/cm2), which is stronger than in previous reports’, should have read ‘Intensity ranged from 0.5 to 2.5 mA peak to peak (peak current density 0.0625–0.3125 mA/cm2), which is stronger than in previous reports.’ These errors do not affect any of the Article’s conclusions. Correct versions of Fig. 2 and Supplementary Fig. 11 are presented below as Figs. 1 and 2.
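The factor-of-two relation between peak-to-peak and peak amplitude, and the corrected current densities, can be checked directly. Note that the 4 cm² electrode area below is our inference from the quoted density values, not a figure stated in the correction itself:

```python
# For a sinusoidal tACS current, peak amplitude is half the
# peak-to-peak value -- the source of the factor-of-two error.
def peak_from_ptp(ptp_mA):
    return ptp_mA / 2.0

# The corrected densities (0.0625-0.3125 mA/cm^2) follow from the
# 0.5-2.5 mA peak-to-peak range if the electrode area is 4 cm^2
# (our inference from the quoted numbers, not stated in the correction).
ELECTRODE_AREA_CM2 = 4.0

def peak_current_density(ptp_mA, area_cm2=ELECTRODE_AREA_CM2):
    return peak_from_ptp(ptp_mA) / area_cm2

print(peak_current_density(0.5))   # 0.0625 mA/cm^2
print(peak_current_density(2.5))   # 0.3125 mA/cm^2
```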
Transcranial electrical stimulation has widespread clinical and research applications, yet its effect on ongoing neural activity in humans is not well established. Previous reports argue that transcranial alternating current stimulation (tACS) can entrain and enhance neural rhythms related to memory, but the evidence from non-invasive recordings has remained inconclusive. Here, we measure endogenous spindle and theta activity intracranially in humans during low-frequency tACS and find no stable entrainment of spindle power during non-REM sleep, nor of theta power during resting wakefulness. As positive controls, we find robust entrainment of spindle activity to endogenous slow-wave activity in 66% of electrodes as well as entrainment to rhythmic noise-burst acoustic stimulation in 14% of electrodes. We conclude that low-frequency tACS at common stimulation intensities neither acutely modulates spindle activity during sleep nor theta activity during waking rest, likely because of the attenuated electrical fields reaching the cortical surface.
Using the method of time-delayed embedding, a signal can be embedded into a higher-dimensional space in order to study its dynamics. This requires knowledge of two parameters: the delay parameter τ and the embedding dimension parameter D. Two standard methods to estimate these parameters in one-dimensional time series involve the inspection of the Average Mutual Information (AMI) function and the False Nearest Neighbor (FNN) function. In some contexts, however, such as phase-space reconstruction for Multidimensional Recurrence Quantification Analysis (MdRQA), the empirical time series that need to be embedded already possess a dimensionality higher than one. In the current article, we present extensions of the AMI and FNN functions for higher-dimensional time series and their application to data from the Lorenz system, coded in Matlab.
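For a one-dimensional series, the standard AMI-based estimate of τ that the article generalizes can be sketched as follows: compute the mutual information between the series and a lagged copy of itself, and take the first local minimum as the delay. This is a sketch using a simple histogram MI estimator (the bin count and the test signal are our illustrative choices, and this is Python rather than the Matlab implementation the article provides):

```python
import numpy as np

def average_mutual_information(x, lag, bins=16):
    """Histogram estimate of the mutual information (in bits) between
    x(t) and x(t + lag)."""
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def first_minimum_lag(x, max_lag=50):
    """Choose the delay parameter tau as the first local minimum of the
    AMI function, the standard heuristic for 1-D embedding."""
    ami = [average_mutual_information(x, lag) for lag in range(1, max_lag)]
    for i in range(1, len(ami) - 1):
        if ami[i] < ami[i - 1] and ami[i] <= ami[i + 1]:
            return i + 1                   # lags start at 1
    return int(np.argmin(ami)) + 1

# For a sine sampled at 40 points per cycle, the first AMI minimum is
# expected near a quarter period (about 10 samples).
x = np.sin(2 * np.pi * np.arange(2000) / 40)
tau = first_minimum_lag(x)
```

The article's contribution is the extension of this AMI logic (and the analogous FNN criterion for D) to time series that are already multidimensional.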
Previous magnetoencephalography (MEG) studies have revealed gamma-band activity at sensors over parietal and fronto-temporal cortex during the delay phase of auditory spatial and non-spatial match-to-sample tasks, respectively. While this activity was interpreted as reflecting the memory maintenance of sound features, we noted that task-related activation differences might have been present already prior to the onset of the sample stimulus. The present study focused on the interval between a visual cue indicating which sound feature was to be memorized (lateralization or pitch) and sample sound presentation to test for task-related activation differences preceding stimulus encoding. MEG spectral activity was analyzed with cluster randomization tests (N = 15). Whereas there were no differences in frequencies below 40 Hz, gamma-band spectral amplitude (about 50–65 and 90–100 Hz) was higher for the lateralization than the pitch task. This activity was localized at right posterior and central sensors and present for several hundred ms after task cue offset. Activity at 50–65 Hz was also increased throughout the delay phase for the lateralization compared with the pitch task. Apparently cortical networks related to auditory spatial processing were activated after participants had been informed about the task.
Research on the music-language interface has extensively investigated similarities and differences of poetic and musical meter, but largely disregarded melody. Using a measure of melodic structure in music (autocorrelations of sound sequences consisting of discrete pitch and duration values), we show that individual poems feature distinct and text-driven pitch and duration contours, just like songs and other pieces of music. We conceptualize these recurrent melodic contours as an additional, hitherto unnoticed dimension of parallelistic patterning. Poetic speech melodies are higher-order units beyond the level of individual syntactic phrases, and also beyond the levels of individual sentences and verse lines. Importantly, autocorrelation scores for pitch and duration recurrences across stanzas are predictive of how melodious naive listeners perceive the respective poems to be, and how likely these poems were to be set to music by professional composers. Experimentally removing classical parallelistic features characteristic of prototypical poems (rhyme, meter, and others) led to decreased autocorrelation scores of pitches, independent of spoken renditions, along with reduced ratings for perceived melodiousness. This suggests that the higher-order parallelistic feature of poetic melody strongly interacts with the other parallelistic patterns of poems. Our discovery of a genuine poetic speech melody has great potential for deepening the understanding of the music-language interface.
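The core measure here, autocorrelation of a discrete pitch sequence at stanza-length lags, can be sketched in a few lines. This is a toy illustration with MIDI-like pitch numbers and an invented contour, not the authors' actual pipeline or data:

```python
import numpy as np

def autocorrelation(seq, lag):
    """Normalized autocorrelation of a discrete value sequence at a
    given lag (e.g., syllable pitches, lag = one stanza length)."""
    s = np.asarray(seq, dtype=float)
    s = s - s.mean()
    denom = np.sum(s * s)
    return np.sum(s[:-lag] * s[lag:]) / denom if lag else 1.0

# Toy example: a pitch contour repeated across four "stanzas" of eight
# syllables yields a high autocorrelation at the stanza lag.
contour = [60, 62, 64, 62, 60, 59, 60, 57]   # MIDI-like pitch numbers
pitches = contour * 4
r_stanza = autocorrelation(pitches, lag=8)    # recurrence across stanzas
r_off = autocorrelation(pitches, lag=3)       # off-period lag, much lower
```

A poem whose spoken pitch contour recurs stanza by stanza would, on this logic, score high at the stanza lag, and degrading its parallelistic features would lower that score.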
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system is not widely investigated in the literature, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively ‘local’ and ‘global’ scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and uses long temporal windows to abstract global acoustic patterns. Behavioral task performance that requires processing fine-detail information does not improve with longer stimulus length, contrary to predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern model. Moreover, the perceptual construction of putatively ‘unitary’ auditory events requires more than hundreds of milliseconds. These findings support the hypothesis of a dual-scale processing likely implemented in the auditory cortex.
In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
Emotional competence has an important influence on development in school. We hypothesized that reading and discussing children’s books with emotional content increases children’s emotional competence. To examine this assumption, we developed a literature-based intervention, named READING and FEELING, and tested it on 104 second and third graders in their after-school care center. Children who attended the same care center but did not participate in the emotion-centered literary program formed the control group (n = 104). Our goal was to promote emotional competence and to evaluate the effectiveness of the READING and FEELING program. Emotional competence variables were measured prior to the intervention and 9 weeks later, at the end of the program. Results revealed significant improvements in the emotional vocabulary, explicit emotional knowledge, and recognition of masked feelings. Regarding the treatment effect for detecting masked feelings, we found that boys benefited significantly more than girls. These findings underscore the assumption that children’s literature is an appropriate vehicle to support the development of emotional competence in middle childhood.
Studies investigating the prevalence, cause, and consequence of multiple sclerosis (MS) fatigue typically use single measures that implicitly assume symptom-stability over time, neglecting information about if, when, and why severity fluctuates. We aimed to examine the extent of moment-to-moment and day-to-day variability in fatigue in relapsing-remitting MS and healthy individuals, and identify daily life determinants of fluctuations. Over 4 weekdays, 76 participants (38 relapsing-remitting MS; 38 controls) recruited from multiple sites provided real-time self-reports six times daily (n = 1661 observations analyzed) measuring fatigue severity, stressors, mood, and physical exertion, and daily self-reports of sleep quality. Fatigue fluctuations were evident in both groups. Fatigue was highest in relapsing-remitting MS, typically peaking in late-afternoon. In controls, fatigue started lower and increased steadily until bedtime. Real-time stressors and negative mood were associated with increased fatigue, and positive mood with decreased fatigue in both groups. Increased fatigue was related to physical exertion in relapsing-remitting MS, and poorer sleep quality in controls. In relapsing-remitting MS, fatigue fluctuates substantially over time. Many daily life determinants of fluctuations are similar in relapsing-remitting MS and healthy individuals (stressors, mood) but physical exertion seems more relevant in relapsing-remitting MS and sleep quality most relevant in healthy individuals.
Background: Early-life institutional deprivation produces disinhibited social engagement (DSE). Although DSE is typically portrayed as a childhood condition, little is known about the persistence of DSE-type behaviours into adulthood, their presentation during adulthood, and their impact on adult functioning.
Aims: We examine these issues in the young adult follow-up of the English and Romanian Adoptees study.
Method: A total of 122 of the original 165 Romanian adoptees who had spent up to 43 months as children in Ceauşescu's Romanian orphanages and 42 UK adoptees were assessed for DSE behaviours, neurodevelopmental and mental health problems, and impairment between ages 2 and 25 years.
Results: Young adult DSE behaviour was strongly associated with early childhood deprivation, with a sixfold increase for those who spent more than 6 months in institutions. However, although DSE overlapped with autism spectrum disorder and attention-deficit hyperactivity disorder symptoms it was not, in itself, related to broader patterns of mental health problems or impairments in daily functioning in young adulthood.
Conclusions: DSE behaviour remained a prominent, but largely clinically benign, young adult feature of some adoptees who experienced early deprivation.
We aimed to prospectively assess changes in chronic stress among young adults transitioning from high school to university or working life. A population-based cohort in Munich and Dresden (Germany) was followed from age 16–18 (2002–2003) to age 20–23 (2007–2009) (n = 1688). Using the Trier Inventory for the Assessment of Chronic Stress, two dimensions of stress at university or work were assessed: work overload and work discontent. In the multiple ordinal generalized estimating equations, socio-demographics, stress outside the workplace, and job history were additionally considered. At follow-up, 52% of the population were university students. Work overload increased statistically significantly from first to second follow-up, while work discontent remained constant at the population level. Students, compared to employees, reported a larger increase in work overload (adjusted odds ratio (OR): 1.33; 95% confidence interval (95% CI): 1.07, 1.67), while work discontent did not differ between the groups. In conclusion, work overload increases when young adults transition from school to university/job life, with university students experiencing the largest increase.
In the later stages of addiction, automatized processes play a prominent role in guiding drug-seeking and drug-taking behavior. However, little is known about the neural correlates of automatized drug-taking skills and drug-related action knowledge in humans. We employed functional magnetic resonance imaging (fMRI) while smokers and non-smokers performed an orientation affordance task, where compatibility between the hand used for a behavioral response and the spatial orientation of a priming stimulus leads to shorter reaction times resulting from activation of the corresponding motor representations. While non-smokers exhibited this behavioral effect only for control objects, smokers showed the affordance effect for both control and smoking-related objects. Furthermore, smokers exhibited reduced fMRI activation for smoking-related as compared to control objects for compatible stimulus-response pairings in a sensorimotor brain network consisting of the right primary motor cortex, supplementary motor area, middle occipital gyrus, left fusiform gyrus and bilateral cingulate gyrus. In the incompatible condition, we found higher fMRI activation in smokers for smoking-related as compared to control objects in the right primary motor cortex, cingulate gyrus, and left fusiform gyrus. This suggests that the activation and performance of deeply embedded, automatized drug-taking schemata employ less brain resources. This might reduce the threshold for relapsing in individuals trying to abstain from smoking. In contrast, the interruption or modification of already triggered automatized action representations require increased neural resources.
The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal associations in which stimuli from different sensory modalities are associated with each other due to the implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association is not clear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or emotional body postures were tested. In these experiments participants had to infer the relation between visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support, or even contradicted, the predicted association. Results show that implicit associations between articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4–7 Hz) and gamma band ranges (31–45 Hz) but, contrary to expectation, not at the timescale corresponding to alpha (8–12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
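Intertrial phase coherence, the central dependent measure here, is the length of the mean unit phasor of per-trial phases at a given frequency. A minimal sketch with synthetic trials (all parameter values and signals are illustrative, not the study's stimuli or recording parameters):

```python
import numpy as np

def itpc(trials, freq_hz, fs):
    """Intertrial phase coherence at one frequency: 1 = identical phase
    across trials, values near 0 = random phases across trials."""
    trials = np.asarray(trials, dtype=float)
    n = trials.shape[1]
    k = int(round(freq_hz * n / fs))       # FFT bin closest to freq_hz
    coeffs = np.fft.rfft(trials, axis=1)[:, k]
    phases = np.angle(coeffs)
    return np.abs(np.mean(np.exp(1j * phases)))

# Toy example: 5 Hz phase-locked trials vs. trials with random phase.
fs, n = 100, 200                            # 2-s trials at 100 Hz
t = np.arange(n) / fs
rng = np.random.default_rng(0)
locked = [np.sin(2 * np.pi * 5 * t + 0.2) for _ in range(30)]
jittered = [np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
            for _ in range(30)]
itpc_locked = itpc(locked, 5, fs)           # near 1
itpc_jittered = itpc(jittered, 5, fs)       # much lower
```

Strong ITPC at a band corresponding to a stimulus modulation rate, as reported for the theta and gamma ranges above, indicates that cortical phase reliably tracks the acoustic dynamics at that timescale.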
Aesthetic perception and judgement are not merely cognitive processes, but also involve feelings. Therefore, the empirical study of these experiences requires conceptualization and measurement of aesthetic emotions. Despite the long-standing interest in such emotions, we still lack an assessment tool to capture the broad range of emotions that occur in response to the perceived aesthetic appeal of stimuli. Elicitors of aesthetic emotions are not limited to the arts in the strict sense, but extend to design, built environments, and nature. In this article, we describe the development of a questionnaire that is applicable across many of these domains: the Aesthetic Emotions Scale (Aesthemos). Drawing on theoretical accounts of aesthetic emotions and an extensive review of extant measures of aesthetic emotions within specific domains such as music, literature, film, painting, advertisements, design, and architecture, we propose a framework for studying aesthetic emotions. The Aesthemos, which is based on this framework, contains 21 subscales with two items each, that are designed to assess the emotional signature of responses to stimuli’s perceived aesthetic appeal in a highly differentiated manner. These scales cover prototypical aesthetic emotions (e.g., the feeling of beauty, being moved, fascination, and awe), epistemic emotions (e.g., interest and insight), and emotions indicative of amusement (humor and joy). In addition, the Aesthemos subscales capture both the activating (energy and vitality) and the calming (relaxation) effects of aesthetic experiences, as well as negative emotions that may contribute to aesthetic displeasure (e.g., the feeling of ugliness, boredom, and confusion).
BACKGROUND: Time-limited, early-life exposures to institutional deprivation are associated with disorders in childhood, but it is unknown whether effects persist into adulthood. We used data from the English and Romanian Adoptees study to assess whether deprivation-associated adverse neurodevelopmental and mental health outcomes persist into young adulthood.
METHODS: The English and Romanian Adoptees study is a longitudinal, natural experiment investigation into the long-term outcomes of individuals who spent from soon after birth to up to 43 months in severe deprivation in Romanian institutions before being adopted into the UK. We used developmentally appropriate standard questionnaires, interviews completed by parents and adoptees, and direct measures of IQ to measure symptoms of autism spectrum disorder, inattention and overactivity, disinhibited social engagement, conduct or emotional problems, and cognitive impairment (IQ score <80) during childhood (ages 6, 11, and 15 years) and in young adulthood (22-25 years). For analysis, Romanian adoptees were split into those who spent less than 6 months in an institution and those who spent more than 6 months in an institution. We used a comparison group of UK adoptees who did not experience deprivation. We used mixed-effects regression models for ordered-categorical outcome variables to compare symptom levels and trends between groups.
FINDINGS: Romanian adoptees who experienced less than 6 months in an institution (n=67 at age 6 years; n=50 at young adulthood) and UK controls (n=52 at age 6 years; n=39 at young adulthood) had similarly low levels of symptoms across most ages and outcomes. By contrast, Romanian adoptees exposed to more than 6 months in an institution (n=98 at age 6 years; n=72 at young adulthood) had persistently higher rates than UK controls of symptoms of autism spectrum disorder, disinhibited social engagement, and inattention and overactivity through to young adulthood (pooled p<0·0001 for all). Cognitive impairment in the group who spent more than 6 months in an institution remitted from markedly higher rates at ages 6 years (p=0·0001) and 11 years (p=0·0016) compared with UK controls, to normal rates at young adulthood (p=0·76). By contrast, self-rated emotional symptoms showed a late-onset pattern with minimal differences versus UK controls at ages 11 years (p=0·0449) and 15 years (p=0·17), and then marked increases by young adulthood (p=0·0005), with similar effects seen for parent ratings. The high deprivation group also had a higher proportion of people with low educational achievement (p=0·0195), unemployment (p=0·0124), and mental health service use (p=0·0120, p=0·0032, and p=0·0003 for use when aged <11 years, 11-14 years, and 15-23 years, respectively) than the UK control group. A fifth (n=15) of individuals who spent more than 6 months in an institution were problem-free at all assessments.
INTERPRETATION: Notwithstanding the resilience shown by some adoptees and the adult remission of cognitive impairment, extended early deprivation was associated with long-term deleterious effects on wellbeing that seem insusceptible to years of nurturance and support in adoptive families.
FUNDING: Economic and Social Research Council, Medical Research Council, Department of Health, Jacobs Foundation, Nuffield Foundation.
A variety of joint action studies show that people tend to fall into synchronous behavior with others participating in the same task, and that such synchronization is beneficial, leading to greater rapport, satisfaction, and performance. It has been noted, however, that many of these task environments involve simple interactions that require little planning of action coordination toward a shared goal. The present study used a complex joint construction task in which dyads were instructed to build model cars while their hand movements and heart rates were measured. Participants built these models under varying conditions that delimited how freely they could divide labor during a build session. Hand-movement synchrony was sensitive to the different task conditions and outcomes, whereas the heart rate measure showed no effects of interpersonal synchrony. Results for hand movements show that the more participants were constrained by a particular building strategy, the greater their behavioral synchrony. Within the different conditions, the degree of synchrony was predictive of subjective satisfaction and objective product outcomes. However, in contrast to many previous findings, synchrony was negatively associated with product quality and, depending on the constraints on the interaction, positively or negatively correlated with subjective satisfaction. These results show that the task context critically shapes the role of synchronization during joint action, and that in more complex tasks it may be complementary types of behavior, rather than behavioral synchronization, that are associated with superior task outcomes.
The increasing number of casting shows and talent contests in the media over the past years suggests a public interest in rating the quality of vocal performances. In many of these formats, laymen act as judges alongside music experts. Whereas experts' judgments are considered objective and reliable when it comes to evaluating the singing voice, little is known about laymen's ability to evaluate their peers. On the one hand, lay listeners, who by definition have no formal training or regular musical practice, are known to have internalized the musical rules on which singing accuracy is based. On the other hand, lay listeners' judgments of their own vocal skills are highly inaccurate, and their competence in pitch perception has proven limited when compared with that of music experts. The present study investigates laypersons' ability to objectively evaluate melodies performed by untrained singers. For this purpose, lay listeners were asked to judge sung melodies, and the results were compared with those of music experts who had performed the same task in a previous study. Interestingly, the findings show high objectivity and reliability in lay listeners. Whereas laymen's and experts' definitions of pitch accuracy overlap, differences in the musical criteria employed in the rating task were evident. The findings suggest that the effect of expertise is circumscribed, and they support the view that laypersons make trustworthy judges when evaluating the pitch accuracy of untrained singers.