CarlW is a new system designed to assess hearing-aid (HA) performance in terms of gains in speech intelligibility and comprehension. CarlW implements the idea that Automatic Speech Recognition (ASR) can be used to replicate human speech-processing performance (Fontan et al., 2017; Fontan et al., 2016; Fontan et al., 2015) and thus can potentially be used to fine-tune HAs. This principle, for which a European patent has been filed (Aumont & Wilhem-Jaureguiberry, 2009), is used to provide the audiologist/HA dispenser with fast, reliable, and objective information on HA performance. The system consists of three main parts:
- A real-ear electronic device able to record speech stimuli inside the client/patient’s ear canal.
- Software that simulates some of the perceptual consequences of hearing loss, such as elevated hearing thresholds, loudness recruitment and loss of frequency selectivity.
- An ASR system.
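As a rough illustration of the second component, one of the simulated perceptual consequences (elevated hearing thresholds) can be sketched as a frequency-dependent attenuation driven by the patient's audiogram. This is a minimal, hypothetical sketch, not CarlW's actual simulation software: real simulators also model loudness recruitment and loss of frequency selectivity, which are omitted here, and the function name and audiogram format are assumptions.

```python
import numpy as np

def simulate_threshold_elevation(signal, sample_rate, audiogram):
    """Crudely simulate elevated hearing thresholds.

    audiogram: dict mapping audiometric frequency (Hz) to hearing loss (dB HL).
    Each FFT bin is attenuated by the loss of the nearest audiogram frequency.
    Loudness recruitment and reduced frequency selectivity are NOT modelled.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centres = np.array(sorted(audiogram))
    losses = np.array([audiogram[f] for f in centres])
    # for every FFT bin, pick the nearest audiogram frequency
    idx = np.abs(freqs[:, None] - centres[None, :]).argmin(axis=1)
    gain = 10.0 ** (-losses[idx] / 20.0)  # dB loss -> linear attenuation
    return np.fft.irfft(spectrum * gain, n=len(signal))
```

For example, with an audiogram of {1000 Hz: 0 dB, 4000 Hz: 40 dB}, a 4 kHz tone passed through this function comes out attenuated by roughly 40 dB while a 1 kHz tone is left untouched.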
More precisely, CarlW is meant to work as follows. Speech samples (e.g. words, sentences) are recorded inside the patient/client's ear canal, near the eardrum, both with and without a HA. The recordings are processed in order to simulate the perceptual consequences of the patient/client's hearing loss, based on his/her audiometric thresholds. The resulting audio files are then fed to the ASR system, which attempts to recognize the original speech stimuli, and the results are used to qualify the fitting of the HA through various scores (e.g. word error rate, phonological distances between stimuli and ASR results) and analyses (e.g. identification of underperforming frequency ranges).
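The first score mentioned above, word error rate, can be sketched as a word-level Levenshtein (edit) distance between the original stimulus and the ASR output, normalized by the length of the reference. The sketch below is a textbook implementation for illustration only, not CarlW's scoring code; the phonologically weighted distances used in the project additionally weight substitutions by phonological similarity.

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match/substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For instance, comparing the reference "the cat sat on the mat" with the ASR output "the cat sat on mat" yields one deletion out of six reference words, i.e. a WER of about 0.17; lower aided WERs than unaided ones would indicate an intelligibility gain from the HA.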
(See the complete list of references on the CarlW project page)
- Aumont, X., & Wilhem-Jaureguiberry, A. (2009). European Patent No. 2136359 — Method and Device for Measuring the Intelligibility of a Sound Distribution System. Courbevoie, France: Institut National de la Propriété Industrielle.
- Fontan, L., Ferrané, I., Farinas, J., Pinquier, J., Magnen, C., Tardieu, J., Gaillard, P., Aumont, X., & Füllgrabe, C. (2017). Automatic speech recognition predicts speech intelligibility and comprehension for listeners with simulated age-related hearing loss. Journal of Speech, Language, and Hearing Research. DOI: 10.1044/2017_JSLHR-S-16-0269
- Fontan, L., Ferrané, I., Farinas, J., Pinquier, J., & Aumont, X. (2016). Using phonologically weighted Levenshtein distances for the prediction of microscopic intelligibility. In Proceedings of Interspeech '16 (pp. 650–654). San Francisco, CA: International Speech Communication Association.
- Fontan, L., Farinas, J., Ferrané, I., Pinquier, J., & Aumont, X. (2015). Automatic intelligibility measures applied to speech signals simulating age-related hearing loss. In Proceedings of Interspeech '15 (pp. 663–667). Dresden, Germany: International Speech Communication Association.
- Fontan, L., Magnen, C., Tardieu, J., Ferrané, I., Pinquier, J., Farinas, J., Gaillard, P., & Aumont, X. (2015). Comparaison de mesures perceptives et automatiques de l'intelligibilité : application à de la parole simulant la presbyacousie [Comparison of perceptual and automatic measures of intelligibility: Application to speech simulating age-related hearing loss]. Traitement Automatique des Langues, 55(2), 151–174.