Free gaze and moving images are typically avoided in EEG experiments due to the expected generation of artifacts and noise. Yet for a growing number of research questions, loosening these rigorous restrictions would be beneficial. Among these is research on visual aesthetic experiences, which often involve open-ended exploration of highly variable stimuli. Here we systematically compare the effect of conservative vs. more liberal experimental settings on various measures of behavior, brain activity and physiology in an aesthetic rating task. Our primary aim was to assess EEG signal quality. Forty-three participants either maintained fixation or were allowed to gaze freely, and viewed either static images or dynamic (video) stimuli consisting of dance performances or nature scenes. A passive auditory background task (auditory steady-state response; ASSR) was added as a proxy measure for overall EEG recording quality. We recorded EEG, ECG and eye tracking data, and participants rated their aesthetic preference and state of boredom on each trial. Whereas both behavioral ratings and gaze behavior were affected by task and stimulus manipulations, EEG SNR was barely affected and generally robust across all conditions, despite only minimal preprocessing and no trial rejection. In particular, we show that using video stimuli does not necessarily result in lower EEG quality and can, on the contrary, significantly reduce eye movements while increasing both the participants’ aesthetic response and general task engagement. We see these as encouraging results indicating that — at least in the lab — more liberal experimental conditions can be adopted without significant loss of signal quality.
The ability to vocalize is ubiquitous in vertebrates, but neural networks underlying vocal control remain poorly understood. Here, we performed simultaneous neuronal recordings in the frontal cortex and dorsal striatum (caudate nucleus, CN) during the production of echolocation pulses and communication calls in bats. This approach allowed us to assess the general aspects underlying vocal production in mammals and the unique evolutionary adaptations of bat echolocation. Our data indicate that before vocalization, a distinctive change in high-gamma and beta oscillations (50–80 Hz and 12–30 Hz, respectively) takes place in the bat frontal cortex and dorsal striatum. Such precise fine-tuning of neural oscillations could allow animals to selectively activate motor programs required for the production of either echolocation or communication vocalizations. Moreover, the functional coupling between frontal and striatal areas, occurring in the theta oscillatory band (4–8 Hz), differs markedly at the millisecond level, depending on whether the animals are in a navigational mode (that is, emitting echolocation pulses) or in a social communication mode (emitting communication calls). Overall, this study indicates that fronto-striatal oscillations could provide a neural correlate for vocal control in bats.
Using the method of time-delayed embedding, a signal can be embedded into a higher-dimensional space in order to study its dynamics. This requires knowledge of two parameters: the delay parameter τ and the embedding dimension parameter D. Two standard methods to estimate these parameters in one-dimensional time series involve the inspection of the Average Mutual Information (AMI) function and the False Nearest Neighbor (FNN) function. In some contexts, however, such as phase-space reconstruction for Multidimensional Recurrence Quantification Analysis (MdRQA), the empirical time series that need to be embedded already possess a dimensionality higher than one. In the current article, we present extensions of the AMI and FNN functions to higher-dimensional time series and their application to data from the Lorenz system, coded in Matlab.
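To make the core construction concrete, here is a minimal sketch of uniform time-delay embedding in Python (the paper's own Matlab implementation is not reproduced here; the function name and toy signal are our own):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a 1-D signal.

    Stacks lagged copies of x so that each row is one point in a
    `dim`-dimensional reconstructed phase space with delay `tau`.
    """
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau  # number of fully defined embedded points
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy example: embed a sine wave with D = 3 and tau = 5 samples.
signal = np.sin(np.linspace(0, 8 * np.pi, 200))
embedded = delay_embed(signal, dim=3, tau=5)
print(embedded.shape)  # (190, 3)
```

Estimating τ from the first minimum of the AMI function and D from the drop-off of the FNN fraction, as the abstract describes, would then supply the two arguments above.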
A variety of joint action studies show that people tend to fall into synchronous behavior with others participating in the same task, and that such synchronization is beneficial, leading to greater rapport, satisfaction, and performance. It has been noted that many of these task environments require simple interactions that involve little planning of action coordination toward a shared goal. The present study utilized a complex joint construction task in which dyads were instructed to build model cars while their hand movements and heart rates were measured. Participants built these models under varying conditions, delimiting how freely they could divide labor during a build session. While hand movement synchrony was sensitive to the different tasks and outcomes, the heart rate measure did not show any effects of interpersonal synchrony. Results for hand movements show that the more participants were constrained by a particular building strategy, the greater their behavioral synchrony. Within the different conditions, the degree of synchrony was predictive of subjective satisfaction and objective product outcomes. However, in contrast to many previous findings, synchrony was negatively associated with superior products, and, depending on the constraints on the interaction, positively or negatively correlated with higher subjective satisfaction. These results show that the task context critically shapes the role of synchronization during joint action, and that in more complex tasks, not synchronization of behavior, but rather complementary types of behavior may be associated with superior task outcomes.
Switching between reading tasks leads to phase-transitions in reading times in L1 and L2 readers
(2019)
Reading research uses different tasks to investigate different levels of the reading process, such as word recognition, syntactic parsing, or semantic integration. It seems to be tacitly assumed that the underlying cognitive processes that constitute reading are stable across those tasks. However, nothing is known about what happens when readers switch from one reading task to another. The stability assumption about the reading process suggests that the cognitive system resolves this switching between two tasks quickly. Here, we present an alternative language-game hypothesis (LGH) of reading that begins by treating reading as a softly assembled process and that assumes, instead of stability, context-sensitive flexibility of the reading process. LGH predicts that switching between two reading tasks leads to longer-lasting, phase-transition-like patterns in the reading process. Using the nonlinear-dynamical tool of recurrence quantification analysis, we test these predictions by examining series of individual word reading times in self-paced reading tasks where native (L1) and second-language (L2) readers transition between random-word and ordered-text reading tasks. We find consistent evidence for phase-transitions in the reading times when readers switch from ordered-text to random-word reading, but we find mixed evidence when readers transition from random-word to ordered-text reading. In the latter case, L2 readers show moderately stronger signs of phase-transitions compared to L1 readers, suggesting that familiarity with a language influences whether and how such transitions occur. The results provide evidence for LGH and suggest that the cognitive processes underlying reading are not fully stable across tasks but exhibit soft assembly in the interaction between task and reader characteristics.
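The recurrence quantification analysis mentioned in this abstract builds on a recurrence matrix over the time series. As a minimal sketch (not the authors' pipeline; the toy reading times and the radius are invented for illustration), the simplest RQA measure, the recurrence rate, can be computed like this:

```python
import numpy as np

def recurrence_rate(series, radius):
    """Fraction of distinct point pairs whose distance is below `radius`:
    the simplest recurrence quantification measure."""
    x = np.asarray(series, dtype=float)
    d = np.abs(x[:, None] - x[None, :])    # pairwise distance matrix
    rec = d < radius                       # binary recurrence matrix
    n = len(x)
    return (rec.sum() - n) / (n * (n - 1)) # exclude the self-recurrent diagonal

# Hypothetical per-word reading times in ms.
rts = np.array([310, 305, 480, 300, 295, 470, 315, 300])
print(recurrence_rate(rts, radius=20))
```

Full RQA additionally quantifies the line structures of the recurrence matrix (e.g. determinism, laminarity), which is what makes phase-transition-like patterns in reading times detectable.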
When experienced in-person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similar to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom, and how, online-delivered art might have well-being impacts is still lacking. In the present study, we used a Monet interactive art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in the responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and further that such inter-individual differences at the trait level were mediated by subjective experiences of pleasure and especially meaningfulness felt during the online-art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.
Transferring the concept of flow to the context of fiction reading provides a new approach to understanding the development of reading pleasure. This study presents the Reading Flow Short Scale (RFSS), the first reading-specific flow measurement tool. The RFSS was administered to 229 readers via online survey after 20 min of reading self-selected novels. In a systematic analysis of psychometric properties, the RFSS’ factorial structure, reliability, and associations with theoretically related constructs were examined. As expected, the RFSS showed a two-factor structure, positive correlations with variables related to reading pleasure and flow, and an inverted U-shaped association with perceived fit between reader skills and text challenge. Comparisons of confirmatory factor analysis models confirmed that RFSS items loaded on different latent variables than items assessing other narrative engagement concepts, namely presence, identification, suspense, and cognitive mastery, and hence distinctly capture flow states in fiction reading. In sum, our findings indicate that the RFSS is a useful instrument for assessing flow states in fiction reading, thereby enriching the portfolio of measurement instruments in reading research.
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4–7 Hz) and gamma band ranges (31–45 Hz) but, contrary to expectation, not at the timescale corresponding to alpha (8–12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
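The intertrial phase coherence measure central to this abstract has a compact definition: the length of the mean resultant vector of per-trial phases at a given frequency. A minimal sketch (simulated phases only, not the authors' MEG analysis; variable names are our own):

```python
import numpy as np

def intertrial_phase_coherence(phases):
    """Length of the mean resultant vector of per-trial phases (radians):
    1 = identical phase on every trial, near 0 = random phases."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(1)
n_trials = 200
phase_locked = rng.normal(0.0, 0.3, n_trials)        # phases clustered near 0 rad
phase_random = rng.uniform(-np.pi, np.pi, n_trials)  # no preferred phase

print(intertrial_phase_coherence(phase_locked))  # close to 1
print(intertrial_phase_coherence(phase_random))  # close to 0
```

High coherence in the theta band for all modulation rates, but in the gamma band only for matching rates, is what supports the selective dual-timescale tracking reported above.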
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system is not widely investigated in the literature, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively ‘local’ and ‘global’ scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and uses long temporal windows to abstract global acoustic patterns. Behavioral task performance that requires processing fine-detail information does not improve with longer stimulus length, contrary to predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern model. Moreover, the perceptual construction of putatively ‘unitary’ auditory events requires more than hundreds of milliseconds. These findings support the hypothesis of a dual-scale processing likely implemented in the auditory cortex.
Music, like language, is characterized by hierarchically organized structure that unfolds over time. Music listening therefore requires not only the tracking of notes and beats but also internally constructing high-level musical structures or phrases and anticipating incoming contents. Unlike for language, mechanistic evidence for online musical segmentation and prediction at a structural level is sparse. We recorded neurophysiological data from participants listening to music in its original forms as well as in manipulated versions with locally or globally reversed harmonic structures. We discovered a low-frequency neural component that modulated the neural rhythms of beat tracking and reliably parsed musical phrases. We next identified phrasal phase precession, suggesting that listeners established structural predictions from ongoing listening experience to track phrasal boundaries. The data point to brain mechanisms that listeners use to segment continuous music at the phrasal level and to predict abstract structural features of music.