Auditory and visual percepts are integrated even when they are not perfectly aligned in time, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three whole-body actions with natural action-induced sounds: hurdling, tap dancing, and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected auditory and visual signals to be integrated over a wider temporal window for actions that create sounds intentionally (tap dancing) than for actions that create sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. In Study 2, we therefore systematically varied event density and rhythmicity, this time using drumming stimuli to exert full control over these variables, with the same simultaneity judgment task. Results suggest that high event density biases observers to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body actions, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected, that is, synchrony of sound and sight, was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
Strong dose response after immunotherapy with PQ grass using conjunctival provocation testing
(2019)
Background: Pollinex Quattro Grass (PQ Grass) is an effective, well-tolerated, short pre-seasonal subcutaneous immunotherapy for seasonal allergic rhinoconjunctivitis (SAR) due to grass pollen. In this Phase II study, four cumulative doses of PQ Grass were evaluated against placebo to determine the optimal cumulative dose.
Methods: Patients with grass pollen-induced SAR were randomised to either a cumulative dose of PQ Grass (5100, 14400, 27600 or 35600 SU) or placebo, administered as 6 weekly subcutaneous injections over 31–41 days (EudraCT number 2017-000333-31). Standardized conjunctival provocation tests (CPT) using grass pollen allergen extract were performed at screening, baseline, and post-treatment (approximately 4 weeks after dosing) to determine the total symptom score (TSS). Three models were pre-defined (Emax, logistic, and linear in log-dose) to evaluate the dose-response relationship.
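For reference, the three pre-specified candidate models correspond to standard dose-response forms used in dose-finding analyses. The parameterizations below are the conventional ones (with $E_0$ the placebo response, $E_{\max}$ the maximum effect, $ED_{50}$ the dose giving half-maximal effect, and $\delta$ a slope/scale parameter); they are a general sketch, not taken from the study protocol itself:

```latex
% Candidate dose-response models, conventional parameterizations.
% d denotes the cumulative dose; c is a small offset so the
% log-dose model is defined at d = 0 (placebo).
\begin{align*}
  \text{Emax:}               \quad & f(d) = E_0 + \frac{E_{\max}\, d}{ED_{50} + d} \\[4pt]
  \text{Logistic:}           \quad & f(d) = E_0 + \frac{E_{\max}}{1 + \exp\bigl((ED_{50} - d)/\delta\bigr)} \\[4pt]
  \text{Linear in log-dose:} \quad & f(d) = E_0 + \delta \log(d + c)
\end{align*}
```

The Emax and logistic forms both saturate at high doses, which is consistent with the curvilinear dose response reported in the conclusions; the log-linear form serves as a monotonic, non-saturating comparator.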
Results: In total, 95.5% of the 447 randomized patients received all 6 injections. A highly statistically significant (p < 0.0001), monotonic dose response was observed for all three pre-specified models. All treatment groups showed a statistically significant decrease from baseline in TSS compared to placebo, with the largest decrease observed after the 27600 SU cumulative dose (p < 0.0001). Treatment-emergent adverse events were similar across PQ Grass groups and were mostly mild and transient.
Conclusions: PQ Grass demonstrated a strong curvilinear dose response in TSS following CPT without compromising its safety profile.
Most human actions produce concomitant sounds. Action sounds can be either part of the action goal (GAS, goal-related action sounds), as in tap dancing, or a mere by-product of the action (BAS, by-product action sounds), as in hurdling. It is currently unclear whether these two types of action sounds, intentional and incidental, differ in their neural representation, and whether their impact on the performance evaluation of an action diverges. Here, we examined whether auditory information is a more important factor for positive action quality ratings during the observation of tap dancing than during the observation of hurdling. Moreover, we tested whether observation of tap dancing vs. hurdling led to stronger attenuation in primary auditory cortex, and to a stronger mismatch signal when sounds did not match expectations. We recorded individual point-light videos of newly trained participants performing tap dancing and hurdling. In the subsequent functional magnetic resonance imaging (fMRI) session, participants were presented with the videos displaying their own actions, including the corresponding action sounds, and were asked to rate the quality of their performance. Videos were either in their original form or scrambled in the visual modality, the auditory modality, or both. As hypothesized, behavioral results showed significantly lower rating scores in the GAS condition compared to the BAS condition when the auditory modality was scrambled. Functional MRI contrasts between BAS and GAS actions revealed higher activation of primary auditory cortex in the BAS condition, speaking in favor of stronger attenuation in GAS, as well as stronger activation of the posterior superior temporal gyri and the supplementary motor area in GAS.
Results suggest that the processing of self-generated action sounds depends on whether we intend to produce a sound with our action, and that action sounds may be more readily used as sensory feedback when they are part of the explicit action goal. Our findings contribute to a better understanding of the function of action sounds in learning and controlling sound-producing actions.