11th Speech in Noise Workshop, 10-11 January 2019, Ghent, BE

Linking audiovisual integration to audiovisual speech-in-noise performance

Anja Gieseler(a), Stephanie Rosemann, Maike Tahden(b), Christiane Thiel, Hans Colonius(b)
Department of Psychology and Cluster of Excellence “Hearing4all”, University of Oldenburg, Germany

(a) Presenting
(b) Attending

The process of audiovisual (AV) integration may allow for improved processing of simple and complex stimuli such as speech. In fact, research has shown that redundant visual information improves speech comprehension, especially in adverse listening conditions. However, listeners differ greatly in understanding speech in noise and in the benefit they obtain from additional visual cues. This variability has not yet been fully accounted for by measures of age, auditory abilities, or cognitive abilities. Presumably, further variables such as AV integration capacities, i.e. the extent to which auditory and visual input are integrated, play a role, as real-life communication does not occur only in the auditory domain but activates different modalities simultaneously, making speech essentially multisensory in nature. Paradoxically, classical speech reception threshold (SRT) tests assess speech intelligibility solely in auditory-only (AO) conditions, which might not reflect a realistic listening scenario.

To address these issues, we focus here on the relation between AV integration capacities and AV speech-in-noise performance. AV integration capacities can be quantified based on illusory percepts induced by incongruent audiovisual conditions. We use susceptibility to the McGurk effect and to the sound-induced flash illusion (SIFI) as measures of the strength of AV integration. To determine the temporal window of integration, i.e. the time interval within which the inputs are likely to be integrated, we vary the stimulus onset asynchrony (SOA) between the auditory and visual stimuli from 70 to 420 ms. To assess AV speech intelligibility and the gain from adding visual information in terms of SRTs, we employ a newly developed audiovisual version of the well-established Oldenburg sentence test (OLSA), the AV-OLSA, using AV and AO conditions in different noise types.
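The audiovisual gain mentioned above is conventionally quantified as the difference between the AO and AV speech reception thresholds, since a lower SRT (in dB SNR) indicates better speech-in-noise performance. A minimal sketch of this computation, with hypothetical function and variable names and example values not taken from the study:

```python
# Hedged sketch: AV gain as an SRT difference in dB SNR.
# Assumes the common convention that a lower SRT means better performance,
# so a positive gain reflects a benefit from adding visual information.
def av_gain(srt_ao_db, srt_av_db):
    """Return the audiovisual gain in dB: SRT(auditory-only) minus SRT(audiovisual)."""
    return srt_ao_db - srt_av_db

# Hypothetical example: AO SRT of -4.2 dB SNR, AV SRT of -7.5 dB SNR
gain = av_gain(-4.2, -7.5)
print(round(gain, 1))  # 3.3 -> a 3.3 dB benefit from the visual cues
```

Under this convention, inter-individual variability in gain is simply the spread of these difference scores across participants.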

Testing 25 normal-hearing elderly individuals (60–80 years), we aim to investigate:
(1) how inter-individual differences in the strength and window of AV integration relate to the variability in AV speech intelligibility in noise and in the benefit obtained from additional visual information, i.e. audiovisual gains.
(2) how the two tests of AV integration (SIFI, McGurk) relate to each other, as they reflect distinct types of changes in perception: in the SIFI, visual perception is modulated by auditory stimuli, whereas in the McGurk effect, auditory perception is changed by visual stimuli.

Furthermore, these data will be compared to a group of mild-to-moderately hearing-impaired elderly individuals in order to disentangle the contributions of hearing loss and age to changes in AV integration and AV gains. Results will be presented at the conference.

Last modified 2018-12-08 00:23:30