Objectives: Combined electric-acoustic stimulation (EAS) is a well-accepted therapeutic treatment for cochlear implant (CI) users with residual hearing in the low frequencies but severe to profound hearing loss in the high frequencies. The recently introduced SONNETeas audio processor offers different microphone directionality (MD) settings and wind noise reduction (WNR) as front-end processing. The aim of this study was to compare speech perception in quiet and in noise between the two EAS audio processors DUET 2 and SONNETeas, and to assess the impact of MD and WNR on speech perception in EAS users in the absence of wind. Subjective ratings of hearing performance were also recorded.
Method: Speech perception and subjective ratings with the SONNETeas or DUET 2 audio processor were assessed in 10 experienced EAS users. Speech perception was measured in quiet and in a diffuse multi-source noise field (MSNF) setup. The SONNETeas processor was tested with three MD settings (omnidirectional, natural, adaptive) and with different intensities of WNR. Subjective auditory benefit and sound quality were assessed using two questionnaires.
Results: There was no significant difference between the DUET 2 and SONNETeas processors using the omnidirectional microphone, either in quiet or in noise. The speech reception threshold (SRT) improved significantly with the MD settings natural (2.2 dB) and adaptive (3.6 dB). No detrimental effect of the WNR algorithm on speech perception was found in the absence of wind. Sound quality was rated as “moderate” for both audio processors.
Conclusions: The different MD settings of the SONNETeas processor can provide EAS users with better speech perception than an omnidirectional microphone. Concerning speech perception in quiet and quality of life, the performance of the DUET 2 and SONNETeas audio processors was comparable.
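Speech reception thresholds in noise, as reported above, are typically measured with an adaptive procedure that converges on the signal-to-noise ratio yielding about 50% intelligibility. A minimal sketch of such a 1-up/1-down staircase; the function names, parameter values, and the simulated listener are hypothetical and do not reflect the test procedure actually used in the study:

```python
import math
import random

def adaptive_srt(trial_fn, start_snr=10.0, step_db=2.0, n_trials=30):
    """1-up/1-down staircase: lower the SNR after a correct response,
    raise it after an error; converges near 50% intelligibility (the SRT)."""
    snr = start_snr
    reversals = []
    last_correct = None
    for _ in range(n_trials):
        correct = trial_fn(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)  # direction change: near threshold
        snr += -step_db if correct else step_db
        last_correct = correct
    tail = reversals[-6:] if reversals else [snr]
    return sum(tail) / len(tail)  # mean SNR over the last reversals

def simulated_listener(snr, true_srt=0.0, slope=1.0):
    """Hypothetical listener: logistic psychometric function around true_srt."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_srt)))
    return random.random() < p_correct

random.seed(0)
estimate = adaptive_srt(simulated_listener)
print(f"estimated SRT: {estimate:.1f} dB SNR")
```

With the simulated listener's true SRT at 0 dB SNR, the staircase estimate lands within a few dB of zero.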
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4–7 Hz) and gamma band ranges (31–45 Hz) but, contrary to expectation, not at the timescale corresponding to alpha (8–12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
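Intertrial phase coherence, the measure reported above, quantifies how consistently oscillatory phase aligns across trials at each time point. A minimal numpy sketch; the array shapes and example data are illustrative and not the study's analysis pipeline:

```python
import numpy as np

def intertrial_phase_coherence(phases):
    """ITC: length of the mean resultant vector of single-trial phases.
    0 = uniformly random phase across trials, 1 = perfect phase locking.
    `phases` is (n_trials, n_times) in radians, e.g. the angle of a
    band-limited analytic signal."""
    return np.abs(np.exp(1j * phases).mean(axis=0))

rng = np.random.default_rng(0)
n_trials, n_times = 50, 4
# Perfectly stimulus-locked phase: identical trajectory on every trial
locked = np.tile(np.linspace(0.0, np.pi, n_times), (n_trials, 1))
# Phase unrelated to the stimulus: uniform random on every trial
jitter = rng.uniform(-np.pi, np.pi, size=(n_trials, n_times))
print(intertrial_phase_coherence(locked))  # 1.0 at every time point
print(intertrial_phase_coherence(jitter))  # near 0 at every time point
```

In the logic of the study, a stimulus modulated at a gamma-band rate should yield high ITC in the gamma band (the locked case) but low ITC in bands whose phase does not track the stimulus (the jitter case).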
In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
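Two-tone images of this kind are commonly generated by smoothing a grayscale photograph and binarising it at a single threshold. A minimal numpy sketch of that generic recipe; the parameter values are illustrative, and this is not the exact procedure used to create the stimulus set described above:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel truncated at ~3 sigma, normalised to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def two_tone(image, sigma=2.0):
    """Blur with a separable Gaussian, then binarise at the median:
    the basic two-tone ('Mooney-style') transformation."""
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(np.convolve, 1, image, k, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, blurred, k, mode="same")
    return (blurred > np.median(blurred)).astype(np.uint8)

rng = np.random.default_rng(0)
photo = rng.random((64, 64))  # stand-in for a grayscale face photograph
mooney = two_tone(photo)
print(mooney.shape, np.unique(mooney).tolist())  # (64, 64) [0, 1]
```

The choice of blur width and threshold controls how much shape information survives, which is exactly what makes some two-tone images easy to close perceptually and others hard.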
Auditory and visual percepts are integrated even when they are not perfectly temporally aligned with each other, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected auditory and visual signals to be integrated over a wider temporal window for actions creating sounds intentionally (tap dancing) than for actions creating sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. We therefore systematically varied event density and rhythmicity in Study 2, this time using drumming stimuli to exert full control over these variables, with the same simultaneity judgment task. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body actions, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected synchrony of sound and sight was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
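Simultaneity-judgment data of the kind described above are often summarised by a temporal integration window: the range of stimulus-onset asynchronies (SOAs) that observers still judge as synchronous above some criterion. A minimal sketch with made-up numbers; the criterion, SOA grid and synchrony proportions are hypothetical and not the study's results:

```python
def integration_window(soas_ms, p_sync, criterion=0.5):
    """Span of SOAs (ms) judged 'synchronous' at or above `criterion`.
    Returns (earliest, latest) SOA inside the window, or None if empty."""
    inside = [soa for soa, p in zip(soas_ms, p_sync) if p >= criterion]
    return (min(inside), max(inside)) if inside else None

# Hypothetical data; negative SOA = visual signal leads the auditory one
soas = [-300, -200, -100, 0, 100, 200, 300]
p    = [0.35, 0.60, 0.85, 0.95, 0.70, 0.40, 0.15]
print(integration_window(soas, p))  # (-200, 100)
```

The asymmetric window in this toy example, wider on the visual-first side, mirrors the visual-leading tolerance the abstract reports for whole-body actions and speech alike.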
Background: Delayed-onset muscle soreness (DOMS) refers to dull pain and discomfort in people after participating in exercise, sport or recreational physical activities. The aim of this study was to detect underlying mechanical thresholds in an experimental model of DOMS.
Methods: Randomised study detecting mechanical pain thresholds, assessed in randomised order, following experimentally induced DOMS of the non-dominant arm in healthy participants. The main outcome was the pressure pain threshold (PPT); secondary outcomes included the mechanical detection threshold (MDT) and mechanical pain threshold (MPT), pain intensity, pain perceptions, and the maximum isometric voluntary force (MIVF).
Results: Twenty volunteers (9 female and 11 male; age 25.2 ± 3.2 years, weight 70.5 ± 10.8 kg, height 177.4 ± 9.4 cm) participated in the study. DOMS reduced the PPT (baseline 5.9 ± 0.4 kg/cm²) by a maximum of 1.5 ± 1.4 kg/cm² (-24%) at 48 hours (p < 0.001). This reduction correlated with the decrease in MIVF (r = -0.48, p = 0.033). Whereas subjective pain indicated impairment mainly during the first 48 hours, the reduction in PPT was still present after 72 hours (r = 0.48, p = 0.036). Other mechanical thresholds also changed significantly with DOMS but showed no clinically or physiologically remarkable alterations.
Conclusions: Functional impairment following DOMS seems related to the increased excitability of high-threshold mechanosensitive nociceptors. The PPT was the most valid mechanical threshold for quantifying the extent of dysfunction. Thus, the PPT rather than pain intensity should be considered a possible marker of an athlete's potential risk of injury.
Abstract: The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
Author Summary: Human visual perception is a complex cognitive feat known to be mediated by distinct cortical regions of the brain. However, the exact function of these regions remains unknown, and thus it remains unclear how those regions together orchestrate visual perception. Here, we apply an AI-driven brain mapping approach to reveal visual brain function. This approach integrates multiple artificial deep neural networks trained on a diverse set of functions with functional recordings of the whole human brain. Our results reveal a systematic tiling of visual cortex by mapping regions to particular functions of the deep networks. Together this constitutes a comprehensive account of the functions of the distinct cortical regions of the brain that mediate human visual perception.
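A common way to compute the "explainable variance explained" figure quoted above is to divide the model's R² by the noise ceiling, i.e. the reliability-limited R² that any model could reach given measurement noise in the fMRI responses. A minimal sketch of that convention; the numbers are illustrative, not values from the study, and the study's exact estimator may differ:

```python
def explainable_variance_explained(r2_model, r2_noise_ceiling):
    """Fraction of the explainable (reliable) variance a model accounts
    for: model R-squared divided by the noise-ceiling R-squared."""
    if not 0.0 < r2_noise_ceiling <= 1.0:
        raise ValueError("noise ceiling must lie in (0, 1]")
    return r2_model / r2_noise_ceiling

# Illustrative: a model R² of 0.30 against a ceiling of 0.45 accounts
# for about 67% of the explainable variance, i.e. "more than 60%".
print(round(explainable_variance_explained(0.30, 0.45), 2))  # 0.67
```

Normalising by the ceiling matters because raw R² values in fMRI are bounded well below 1 by trial-to-trial noise, so two regions with very different raw R² can be equally well explained relative to what is explainable.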