Do speakers make active use of the visual modality when communicating in noise?
It is now well known that seeing speech improves its perception, especially when speech is perturbed in the acoustic domain, for example by a noisy background. However, it is not yet clear how the visual modality is exploited in speech production, and whether speakers can make active use of the visual channel (consciously or not) to improve their intelligibility in noisy conditions.
Six native speakers of Canadian French produced speech in quiet conditions and in 85 dB of babble noise, in three situations: interacting face-to-face with the experimenter (AV), interacting through the auditory modality only (AO), or reading aloud (NI, no interaction). The audio signal was recorded together with the three-dimensional movements of their lips and tongue, captured by electromagnetic articulography.
All the speakers reacted similarly to the presence vs. absence of communicative interaction, showing significant speech modifications under noise exposure in both interactive and non-interactive conditions, not only for parameters directly related to voice intensity or for lip movements (highly visible) but also for tongue movements (less visible); greater adaptation was nevertheless observed in the interactive conditions. In contrast, speakers reacted differently to the availability of visual information: as expected, four of them enhanced their visible articulatory movements under noise exposure more in the AV condition than in the AO condition, whereas one participant showed the opposite behavior. The final participant adopted an intermediate strategy, enhancing acoustic cues more in the AO condition while amplifying lip protrusion cues and visible inter-vowel contrasts more in the AV condition. These results support the idea that the Lombard effect is at least partly a listener-oriented adaptation. However, only some speakers appear to make active use of the visual modality to clarify their speech in noisy conditions.