On the use of response time in a single-task paradigm to evaluate listening effort in noisy and reverberant conditions
Everyday communication most often takes place in the concomitant presence of reverberation and background noise, and many real-world maskers fluctuate in level. Such adverse conditions impair speech intelligibility and make listening effortful. The present work investigates the combined effects of noise fluctuations and reverberation on listening effort, with reference to a speech reception task and a group of young adults with normal hearing. The behavioral measure of response time (RT) in a single-task paradigm is used to evaluate listening effort, with a slowing of RT interpreted as an increased allocation of cognitive resources.
Speech-in-noise tests were presented to 79 participants in reverberant conditions, created by convolving anechoic speech and background noises with simulated binaural room impulse responses. Speech reception was measured in the presence of continuous, spatially diffuse stationary and fluctuating (ICRA) noise over a wide range of signal-to-noise ratios (SNRs); three reverberation conditions were considered (anechoic, and reverberation times of 0.30 and 0.65 s). The experiment used a closed-set format; for each participant, data on speech intelligibility (SI) and manual RT (the time elapsed between the offset of the audio playback and the response selection on a touchscreen) were collected.
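The stimulus-generation step described above (convolution with an impulse response, then mixing at a target SNR) can be sketched as follows. This is a minimal illustration only: it assumes single-channel signals, uses a toy exponential-decay impulse response in place of a real simulated binaural room impulse response, and adopts a simple power-based SNR definition; none of these specifics are taken from the study.

```python
import numpy as np

def reverberate(signal, impulse_response):
    """Apply a room impulse response to a signal by convolution."""
    return np.convolve(signal, impulse_response)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Placeholder signals: white noise standing in for anechoic speech and masker.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)

# Toy impulse response (exponential decay), NOT a real binaural room response.
brir = np.exp(-np.arange(800) / 200.0)

# Reverberant speech and noise, trimmed to a common length, mixed at -5 dB SNR.
rev_speech = reverberate(speech, brir)[:16000]
rev_noise = reverberate(noise, brir)[:16000]
stimulus = mix_at_snr(rev_speech, rev_noise, snr_db=-5.0)
```

In a full experiment each masker type and reverberation condition would supply its own noise recording and impulse response, with the SNR swept over the tested range.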
The SI results showed a benefit in speech reception under fluctuating noise in both reverberant conditions, attributable to the “listening in the dips” phenomenon. The RTs were sensitive to the effects of SNR and reverberation, slowing with both decreasing SNR and increasing reverberation; notably, the effect was more pronounced in the presence of fluctuating noise. Moreover, the RTs were sensitive to noise type only in the less reverberant condition, where faster RTs were found for the fluctuating noise. At fixed SI, RTs in fluctuating noise became gradually longer than those in stationary noise as reverberation increased; the opposite trend was observed in anechoic conditions.
The pattern of the data indicates that adding reverberation to background noise reduces processing speed in a speech reception task, and even more so in the presence of noise fluctuations. The outcomes at fixed SI suggest that RT measures obtained in anechoic conditions may not be fully representative of the listening effort experienced in realistic conditions. The present findings support the need to consider both noise and reverberation when predicting listening effort in real-life conditions.