Discovering the building blocks of hearing: a data-driven approach
Experimental studies of hearing typically rely on simple stimuli to allow for controlled experiments. To improve our general understanding of the features that are important for hearing in more complex environments, we propose a data-driven approach to determine good basic auditory features for speech processing. More specifically, we introduce a neuro-inspired feature detection model that relies on a modest number of parameters. We first show that our model is capable of detecting a range of features thought to be important for noise-robust speech processing, such as amplitude modulations and onsets. Additionally, we propose a new methodology to identify important features within the parameter space of our model. This analysis leverages both information theory (in particular, the Information Bottleneck principle) and supervised machine learning. The validity of our methodology is confirmed by comparing our results with psychoacoustic studies. Altogether, our analysis framework for this new class of feature detectors may improve our current understanding of human hearing in challenging environments, both in terms of fundamental science and of reproducing this ability in machine hearing systems.