Speech intelligibility in realistic virtual sound environments
The use of realistic yet controlled sound scenarios for evaluating hearing-aid algorithms in a virtual sound environment (VSE) has the potential to positively impact the auditory quality of life of many hearing-impaired (HI) users. To achieve this in an ecologically valid way, these critical sound scenarios (CSSs) need to be selected based on acoustic scenes that hearing-aid users experience as important owing to their difficulty and frequency of occurrence.
This study aims to select a set of appropriate CSSs based on results from the literature and ecological momentary assessment (EMA) data, to capture them in a real scene using a spherical microphone array, and to reproduce them in an acoustically and perceptually valid way inside a VSE. A speech intelligibility task is implemented to obtain speech reception thresholds (SRTs) for normal-hearing (NH) and HI subjects and to compare them with SRTs obtained with artificial background noise. In addition, a method for measuring realistic in situ speech levels between normal-hearing subjects is developed and used to derive NH and HI speech intelligibility performance.