Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4–8 Hz) and beta–gamma (around 15–40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a finer-grained and more nuanced characterization of cortical auditory processing in the 2 hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
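The band-limited spectral profiling described in this abstract can be illustrated with a short sketch. It is not the authors' pipeline; the array shape, sampling rate, and band edges are assumptions for illustration. The idea is to average trials into an evoked response, estimate its power spectrum, and compare integrated power in the theta and beta-gamma bands.

```python
# Minimal sketch (not the authors' pipeline): characterize the spectral
# profile of an evoked response by comparing power in the theta and
# beta-gamma bands. The data array and sampling rate are hypothetical.
import numpy as np
from scipy.signal import welch

fs = 1000                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

# Hypothetical single-channel recording: trials x time, averaged into
# an evoked response (real data would be iEEG epochs around the transient).
trials = rng.standard_normal((200, fs))     # 200 trials, 1 s each
evoked = trials.mean(axis=0)

# Power spectral density of the evoked response.
freqs, psd = welch(evoked, fs=fs, nperseg=512)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over a frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

theta = band_power(freqs, psd, 4, 8)         # theta range
beta_gamma = band_power(freqs, psd, 15, 40)  # beta-gamma range
print(f"theta / beta-gamma power ratio: {theta / beta_gamma:.2f}")
```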
Purpose: The aim of the study was to compare three different elastography methods, namely Strain Elastography (SE), Point Shear-Wave Elastography (pSWE) using Acoustic Radiation Force Impulse (ARFI) imaging, and 2D Shear-Wave Elastography (2D-SWE), in the same study population for the differentiation of thyroid nodules.
Materials and methods: All patients received a conventional ultrasound scan, SE, and 2D-SWE, and all patients except for two received ARFI imaging. Cytology/histology of thyroid nodules was used as the reference method. SE measures the relative stiffness within the region of interest (ROI), using the surrounding tissue as reference. ARFI mechanically excites the tissue at the ROI using acoustic pulses to generate localized tissue displacements. 2D-SWE measures tissue elasticity from the velocity of shear waves as they propagate through the tissue.
Results: 84 nodules (73 benign, 11 malignant) in 62 patients were analyzed. Sensitivity, specificity, and negative predictive value (NPV) of SE were 73%, 70%, and 94%, respectively. Sensitivity, specificity, and NPV of ARFI and 2D-SWE were 90%, 79%, and 98%, and 73%, 67%, and 94%, respectively, using cut-off values of 1.98 m/s for ARFI and 2.65 m/s (21.07 kPa) for 2D-SWE. The areas under the receiver operating characteristic curve (AUROC) of SE, ARFI, and 2D-SWE for the diagnosis of malignant thyroid nodules were 52%, 86%, and 71%, respectively. A significant difference in AUROC was found between SE and ARFI (p = 0.008), while no significant difference was found between ARFI and 2D-SWE (86% vs. 71%, p = 0.31) or between 2D-SWE and SE (71% vs. 52%, p = 0.26).
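As a worked illustration of how these metrics follow from a velocity cut-off, the sketch below computes sensitivity, specificity, NPV, and AUROC on synthetic shear-wave velocities. The velocity distributions are invented; only the 1.98 m/s cut-off and the 73/11 benign/malignant split mirror the numbers above, so the printed values will not reproduce the study's results.

```python
# Minimal sketch (synthetic data, not the study's): derive sensitivity,
# specificity, NPV, and AUROC from shear-wave velocities at a fixed cut-off.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical velocities (m/s): benign nodules slower, malignant faster.
benign = rng.normal(1.6, 0.4, 73)
malignant = rng.normal(2.6, 0.5, 11)
velocity = np.concatenate([benign, malignant])
label = np.concatenate([np.zeros(73), np.ones(11)])   # 1 = malignant

cutoff = 1.98                        # m/s, as for ARFI above
pred = (velocity >= cutoff).astype(int)

tp = np.sum((pred == 1) & (label == 1))   # true positives
tn = np.sum((pred == 0) & (label == 0))   # true negatives
fp = np.sum((pred == 1) & (label == 0))   # false positives
fn = np.sum((pred == 0) & (label == 1))   # false negatives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)
auroc = roc_auc_score(label, velocity)    # threshold-free discrimination
print(f"Sens {sensitivity:.0%}, Spec {specificity:.0%}, "
      f"NPV {npv:.0%}, AUROC {auroc:.2f}")
```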
Conclusion: pSWE using ARFI and 2D-SWE showed comparable results for the differentiation of thyroid nodules. pSWE using ARFI was superior to SE.
Research on the music-language interface has extensively investigated similarities and differences between poetic and musical meter, but has largely disregarded melody. Using a measure of melodic structure in music (autocorrelations of sound sequences consisting of discrete pitch and duration values), we show that individual poems feature distinct, text-driven pitch and duration contours, just like songs and other pieces of music. We conceptualize these recurrent melodic contours as an additional, hitherto unnoticed dimension of parallelistic patterning. Poetic speech melodies are higher-order units beyond the level of individual syntactic phrases, and also beyond the levels of individual sentences and verse lines. Importantly, autocorrelation scores for pitch and duration recurrences across stanzas are predictive of how melodious naive listeners perceive the respective poems to be, and of how likely these poems were to be set to music by professional composers. Experimentally removing classical parallelistic features characteristic of prototypical poems (rhyme, meter, and others) led to decreased autocorrelation scores for pitches, independent of spoken renditions, along with reduced ratings of perceived melodiousness. This suggests that the higher-order parallelistic feature of poetic melody strongly interacts with the other parallelistic patterns of poems. Our discovery of a genuine poetic speech melody has great potential for deepening the understanding of the music-language interface.
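A minimal sketch of the kind of autocorrelation measure described here, on invented data: a poem is reduced to a per-syllable pitch sequence, and cross-stanza melodic recurrence is scored as the correlation of the sequence with itself shifted by one stanza length. The stanza length, pitch values, and noise level are all hypothetical, not taken from the paper's corpus.

```python
# Minimal sketch (hypothetical pitch values, not the paper's corpus):
# score cross-stanza melodic recurrence as the autocorrelation of a
# per-syllable pitch sequence at a lag of one stanza length.
import numpy as np

rng = np.random.default_rng(2)
stanza_len = 20                           # syllables per stanza (assumed)

# Four stanzas sharing a recurring pitch contour (semitones) plus noise.
contour = rng.normal(0, 2, stanza_len)
pitch = np.concatenate([contour + rng.normal(0, 0.5, stanza_len)
                        for _ in range(4)])

def autocorr(x, lag):
    """Pearson correlation of a sequence with itself shifted by `lag`."""
    x = x - x.mean()
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

score = autocorr(pitch, stanza_len)       # recurrence across stanzas
print(f"cross-stanza pitch autocorrelation: {score:.2f}")
```

A high score indicates that the pitch contour of one stanza recurs in the next, which is the sense in which the abstract treats melody as a parallelistic pattern.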
The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal association in which stimuli from different sensory modalities are linked due to the implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association has remained unclear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or emotional body postures were tested. In these experiments, participants had to infer the relation between the visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support, or even contradicted, the predicted association. Results show that implicit associations between articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of directly perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4–7 Hz) and gamma band (31–45 Hz) but, contrary to expectation, not at the timescale corresponding to the alpha band (8–12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structure at 3 timescales (modulation periods of approximately 190, 100, and 30 ms) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics, but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
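Intertrial phase coherence, the core measure in this study, can be sketched as follows on synthetic trials: band-pass filter each trial, extract the instantaneous phase via the Hilbert transform, and take the mean resultant length of the phases across trials. The sampling rate, trial count, and the phase-locked 5 Hz component are assumptions for illustration, not the MEG data.

```python
# Minimal sketch (synthetic trials, not the MEG data): intertrial phase
# coherence (ITC) in the theta and gamma bands via band-pass filtering
# and the Hilbert transform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
t = np.arange(fs) / fs

# 100 synthetic trials: a phase-locked 5 Hz component embedded in noise,
# so theta ITC should be high and gamma ITC near chance.
trials = np.sin(2 * np.pi * 5 * t) + rng.standard_normal((100, fs))

def itc(trials, fs, lo, hi):
    """Mean resultant length of instantaneous phase across trials."""
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    # Coherence per time point, averaged over time.
    return np.abs(np.exp(1j * phase).mean(axis=0)).mean()

print(f"theta ITC: {itc(trials, fs, 4, 7):.2f}")    # phase-locked -> high
print(f"gamma ITC: {itc(trials, fs, 31, 45):.2f}")  # noise only  -> low
```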