The brain's temporal response function: Can it be improved with a behavioural account of the attended stream?
During the past decade, there has been growing interest in the neural correlates of selective attention to speech. In these studies, listeners were instructed to focus their attention on one of two concurrent speech streams. However, in everyday-life situations, listeners are unlikely to maintain undivided attention on a single talker and may instead switch rapidly between different voices. To study this phenomenon, we have developed a behavioural protocol that provides information about which of two competing voices is listened to at different time points, thus reflecting the dynamic nature of concurrent speech perception.
A corpus of short stories was extracted from an audiobook. After listening to two simultaneous stories, a target and an interferer, participants had to identify, among a set of words, those that had occurred in the target story. Their responses were then used to estimate, retrospectively, when they had been listening to the target and when to the interferer.
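As an illustration, the sketch below shows one way such word responses could be turned into a time-resolved estimate of the attended stream. It assumes that each probe word has a known onset time in the target or interferer story, and that recognising a word counts as evidence that the corresponding stream was attended around that time; the function and variable names are hypothetical and do not describe the actual analysis used in the study.

    import numpy as np

    def attention_trace(word_onsets, streams, recognized, t_grid, sigma=2.0):
        """Estimate, at each time in t_grid (seconds), how likely it is that
        the target stream was being attended, from word-recognition responses.

        word_onsets : word onset times in the stimuli (s)
        streams     : per-word labels, "target" or "interferer"
        recognized  : per-word booleans, True if the participant selected the word
        sigma       : width (s) of the Gaussian smoothing kernel
        """
        word_onsets = np.asarray(word_onsets, dtype=float)
        streams = np.asarray(streams)
        recognized = np.asarray(recognized, dtype=bool)
        t_grid = np.asarray(t_grid, dtype=float)

        # Evidence per word: +1 when the response suggests the target was attended
        # (a recognised target word or a rejected interferer word), -1 otherwise.
        evidence = np.where(streams == "target", 1.0, -1.0)
        evidence *= np.where(recognized, 1.0, -1.0)

        # Gaussian-weighted average of the evidence around each time point.
        w = np.exp(-0.5 * ((t_grid[:, None] - word_onsets[None, :]) / sigma) ** 2)
        trace = (w * evidence).sum(axis=1) / np.clip(w.sum(axis=1), 1e-12, None)

        # Map from [-1, 1] to a nominal probability of attending the target.
        return 0.5 * (trace + 1.0)

Applied to the responses from one trial, with t_grid spanning the duration of the stories, this yields a curve between 0 and 1 indicating when the target was most likely being attended.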
Neural data recorded with EEG and these behavioural measures were combined to extract the brain's temporal response function to the stimuli. To modulate how often listeners switched between the two voices over the course of the stories, the interferers were uttered by the same talker as the target stories, but the voice parameters were manipulated to parametrically control the similarity of the two voices, from clearly dissimilar to almost identical.
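For concreteness, the sketch below shows one common way a temporal response function can be estimated, namely ridge-regularised linear regression of the EEG onto time-lagged copies of a speech envelope; the parameter values and the plain least-squares solver are illustrative assumptions rather than the specific pipeline used in this study.

    import numpy as np

    def estimate_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e2):
        """Estimate a temporal response function by ridge regression.

        envelope : (n_samples,) acoustic envelope of one speech stream
        eeg      : (n_samples, n_channels) EEG recording, same sampling rate
        fs       : sampling rate (Hz)
        tmin/max : lag window of the TRF (s)
        lam      : ridge regularisation parameter
        Returns (lags_in_seconds, trf) with trf of shape (n_lags, n_channels).
        """
        envelope = np.asarray(envelope, dtype=float)
        eeg = np.asarray(eeg, dtype=float)
        lags = np.arange(int(round(tmin * fs)), int(round(tmax * fs)) + 1)
        n = envelope.shape[0]

        # Design matrix: column j holds the envelope delayed by lags[j] samples.
        X = np.zeros((n, lags.size))
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = envelope[: n - lag]
            else:
                X[: n + lag, j] = envelope[-lag:]

        # Ridge solution: (X'X + lam*I)^-1 X'Y, solved once for all channels.
        trf = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)
        return lags / fs, trf

Under such a scheme, the behavioural estimate of the attended stream could, for instance, be used to select or weight the time segments entering the regression for the target and interferer envelopes, which is the sense in which a behavioural account of attention might improve the TRF.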
We will discuss the results in terms of attentional selection and voice confusion, and suggest possible applications of this dynamic behavioural test of selective auditory attention.