Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Simon Klein
  • Lando Rossol
  • Finn Venema
  • Sven Schonewald
  • Jens Karrenbauer
  • Holger Blume

Details

Original language: English
Title of host publication: Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 172-173
Number of pages: 2
ISBN (electronic): 9798331595524
ISBN (print): 979-8-3315-9553-1
Publication status: Published - 28 July 2025
Event: 36th IEEE International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025 - Vancouver, Canada
Duration: 28 July 2025 - 30 July 2025

Publication series

Name: Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors
ISSN (print): 2160-0511
ISSN (electronic): 2160-052X

Abstract

Many deep neural networks (DNNs) have recently been applied in the field of speech enhancement. One particular subfield where DNNs have shifted the boundaries of what is considered possible is noise reduction, in which the degrading effects of sounds interfering with speech are minimized. This is especially relevant for hearing-impaired listeners, as their ability to understand speech in noisy conditions is reduced. In contrast to traditional methods, which are known to improve speech quality, DNNs promise to also improve speech intelligibility. Due to their high computational complexity, DNNs have not yet been deployed on a hearing-aid processor, which is constrained to clock frequencies up to 50 MHz and memory up to 2 MB. In this work we deploy a convolutional neural network (CNN) trained for noise reduction to a hearing-aid system-on-chip (SoC) developed at our institute. Real-time capability is achieved by thorough optimization of the C code, leading to a speed-up by a factor of 88 for the inference-relevant layers compared to a naïve C implementation. The CNN approach is compared to an implementation of a traditional noise reduction method with respect to speech enhancement performance on white and complex noise and computational cost. While both methods improve speech quality as measured with Perceptual Evaluation of Speech Quality (PESQ), only the CNN achieves a Short-Time Objective Intelligibility (STOI) improvement, of 0.077 for complex noise. On the other hand, the CNN has a higher processor utilization of 60.1% compared to 23.5% for the traditional approach. Nonetheless, both methods are real-time capable, consuming only 3.3 mW (CNN) and 1.78 mW (traditional approach), respectively.
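The abstract's real-time figures can be sanity-checked with a back-of-the-envelope cycle budget. The sketch below uses the 50 MHz clock and the reported utilizations (60.1% for the CNN, 23.5% for the traditional method); the 16 kHz sample rate and 128-sample hop are assumptions chosen for illustration, not values stated in this record.

```python
# Back-of-the-envelope per-frame cycle budget for the hearing-aid SoC
# described in the abstract (50 MHz clock). Sample rate and hop size
# are ASSUMPTIONS for illustration only.

CLOCK_HZ = 50_000_000      # processor clock from the abstract
SAMPLE_RATE = 16_000       # assumed audio sample rate
HOP = 128                  # assumed hop size (new samples per frame)

frame_period_s = HOP / SAMPLE_RATE        # time available to process one frame
cycle_budget = CLOCK_HZ * frame_period_s  # cycles available per frame

# Reported processor utilizations from the abstract
cycles_cnn = 0.601 * cycle_budget    # CNN noise reduction
cycles_trad = 0.235 * cycle_budget   # traditional noise reduction

print(f"budget per frame : {cycle_budget:,.0f} cycles")
print(f"CNN uses         : {cycles_cnn:,.0f} cycles")
print(f"traditional uses : {cycles_trad:,.0f} cycles")
```

Under these assumptions each frame affords 400,000 cycles, so even the CNN's 60.1% utilization leaves headroom, which is consistent with the abstract's claim that both methods run in real time.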


Cite

Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks. / Klein, Simon; Rossol, Lando; Venema, Finn et al.
Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025. Institute of Electrical and Electronics Engineers Inc., 2025. pp. 172-173 (Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors).


Klein, S, Rossol, L, Venema, F, Schonewald, S, Karrenbauer, J & Blume, H 2025, Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks. in Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025. Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors, Institute of Electrical and Electronics Engineers Inc., pp. 172-173, 36th IEEE International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025, Vancouver, British Columbia, Canada, 28 July 2025. https://doi.org/10.1109/ASAP65064.2025.00037
Klein, S., Rossol, L., Venema, F., Schonewald, S., Karrenbauer, J., & Blume, H. (2025). Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks. In Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025 (pp. 172-173). (Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ASAP65064.2025.00037
Klein S, Rossol L, Venema F, Schonewald S, Karrenbauer J, Blume H. Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks. In Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025. Institute of Electrical and Electronics Engineers Inc. 2025. p. 172-173. (Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors). doi: 10.1109/ASAP65064.2025.00037
Klein, Simon; Rossol, Lando; Venema, Finn et al. / Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks. Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025. Institute of Electrical and Electronics Engineers Inc., 2025. pp. 172-173 (Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors).
BibTeX
@inproceedings{4d12ab00dd914966aaf32f349ad4a66d,
title = "Noise Reduction in Hearing-Aid Processors: Traditional Methods vs. Neural Networks",
abstract = "Many deep neural networks (DNNs) have been applied lately in the field of speech enhancement. One particular subfield, where DNNs have shifted the boundaries of what is considered possible, is noise reduction, where the degrading effects of sounds interfering with speech are minimized. This is especially relevant for hearing impaired listeners, as their ability to understand speech in noisy circumstances is reduced. In contrast to traditional methods, which are known to improve speech quality, DNNs promise to also improve speech intelligibility. Due to the high computational complexity, DNNs have not yet been deployed on a hearing aid processor, constrained by frequencies up to 50 MHz and memory up to 2 MB. In this work we deploy a convolutional neural network (CNN) trained for noise reduction to a hearing-aid system-on-chip (SoC) developed at our institute. Real time capability is achieved by thorough optimization of the C -Code, leading to a speed up by a factor of 88 for the inference relevant layers when compared to a na{\"i}ve C-Code implementation. The CNN approach is compared to an implementation of a traditional noise reduction method regarding their speech enhancement performance on white and complex noise and their computational cost. While both methods improve the speech quality measured with Perceptual Evaluation of Speech Quality (PESQ), only the CNN achieves a Short-Time Objective Intelligibility (STOI) improvement of 0.077 for complex noise. On the other hand, the CNN has a higher processor utilization of 60.1% compared to 23.5% for the traditional approach. Nonetheless, both methods are real time capable and consume only 3.3 mW for the CNN and 1.78 mW for the traditional approach, respectively.",
keywords = "Deep Neural Networks, Hearing Aids, Noise Reduction, SmartHeaP, Speech Enhancement",
author = "Simon Klein and Lando Rossol and Finn Venema and Sven Schonewald and Jens Karrenbauer and Holger Blume",
note = "Publisher Copyright: {\textcopyright} 2025 IEEE.; 36th IEEE International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025, ASAP 2025 ; Conference date: 28-07-2025 Through 30-07-2025",
year = "2025",
month = jul,
day = "28",
doi = "10.1109/ASAP65064.2025.00037",
language = "English",
isbn = "979-8-3315-9553-1",
series = "Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "172--173",
booktitle = "Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025",
address = "United States",

}

RIS

TY - GEN

T1 - Noise Reduction in Hearing-Aid Processors

T2 - 36th IEEE International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025

AU - Klein, Simon

AU - Rossol, Lando

AU - Venema, Finn

AU - Schonewald, Sven

AU - Karrenbauer, Jens

AU - Blume, Holger

N1 - Publisher Copyright: © 2025 IEEE.

PY - 2025/7/28

Y1 - 2025/7/28

N2 - Many deep neural networks (DNNs) have been applied lately in the field of speech enhancement. One particular subfield, where DNNs have shifted the boundaries of what is considered possible, is noise reduction, where the degrading effects of sounds interfering with speech are minimized. This is especially relevant for hearing impaired listeners, as their ability to understand speech in noisy circumstances is reduced. In contrast to traditional methods, which are known to improve speech quality, DNNs promise to also improve speech intelligibility. Due to the high computational complexity, DNNs have not yet been deployed on a hearing aid processor, constrained by frequencies up to 50 MHz and memory up to 2 MB. In this work we deploy a convolutional neural network (CNN) trained for noise reduction to a hearing-aid system-on-chip (SoC) developed at our institute. Real time capability is achieved by thorough optimization of the C -Code, leading to a speed up by a factor of 88 for the inference relevant layers when compared to a naïve C-Code implementation. The CNN approach is compared to an implementation of a traditional noise reduction method regarding their speech enhancement performance on white and complex noise and their computational cost. While both methods improve the speech quality measured with Perceptual Evaluation of Speech Quality (PESQ), only the CNN achieves a Short-Time Objective Intelligibility (STOI) improvement of 0.077 for complex noise. On the other hand, the CNN has a higher processor utilization of 60.1% compared to 23.5% for the traditional approach. Nonetheless, both methods are real time capable and consume only 3.3 mW for the CNN and 1.78 mW for the traditional approach, respectively.

AB - Many deep neural networks (DNNs) have been applied lately in the field of speech enhancement. One particular subfield, where DNNs have shifted the boundaries of what is considered possible, is noise reduction, where the degrading effects of sounds interfering with speech are minimized. This is especially relevant for hearing impaired listeners, as their ability to understand speech in noisy circumstances is reduced. In contrast to traditional methods, which are known to improve speech quality, DNNs promise to also improve speech intelligibility. Due to the high computational complexity, DNNs have not yet been deployed on a hearing aid processor, constrained by frequencies up to 50 MHz and memory up to 2 MB. In this work we deploy a convolutional neural network (CNN) trained for noise reduction to a hearing-aid system-on-chip (SoC) developed at our institute. Real time capability is achieved by thorough optimization of the C -Code, leading to a speed up by a factor of 88 for the inference relevant layers when compared to a naïve C-Code implementation. The CNN approach is compared to an implementation of a traditional noise reduction method regarding their speech enhancement performance on white and complex noise and their computational cost. While both methods improve the speech quality measured with Perceptual Evaluation of Speech Quality (PESQ), only the CNN achieves a Short-Time Objective Intelligibility (STOI) improvement of 0.077 for complex noise. On the other hand, the CNN has a higher processor utilization of 60.1% compared to 23.5% for the traditional approach. Nonetheless, both methods are real time capable and consume only 3.3 mW for the CNN and 1.78 mW for the traditional approach, respectively.

KW - Deep Neural Networks

KW - Hearing Aids

KW - Noise Reduction

KW - SmartHeaP

KW - Speech Enhancement

UR - http://www.scopus.com/inward/record.url?scp=105015844270&partnerID=8YFLogxK

U2 - 10.1109/ASAP65064.2025.00037

DO - 10.1109/ASAP65064.2025.00037

M3 - Conference contribution

AN - SCOPUS:105015844270

SN - 979-8-3315-9553-1

T3 - Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors

SP - 172

EP - 173

BT - Proceedings - 2025 IEEE 36th International Conference on Application-Specific Systems, Architectures and Processors, ASAP 2025

PB - Institute of Electrical and Electronics Engineers Inc.

Y2 - 28 July 2025 through 30 July 2025

ER -
