
Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research

Authors

Kolja Hedrich, Lennart Hinz, Eduard Reithmeier

Research metrics
  • Citations
    • Citation Indexes: 4
  • Captures
    • Readers: 4

Details

Original language: English
Title of host publication: Proc. SPIE 12019, AI and Optical Data Sciences III
Editors: Bahram Jalali, Ken-ichi Kitayama
ISBN (electronic): 9781510649095
Publication status: Published - 2022
Event: SPIE OPTO - San Francisco, United States
Duration: 22 Jan 2022 – 27 Jan 2022

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 12019
ISSN (Print): 0277-786X
ISSN (electronic): 1996-756X

Abstract

Within the aviation industry, considerable interest exists in minimizing possible maintenance expenses. In particular, the examination of critical components such as aircraft engines is of significant relevance. Currently, many inspection processes are still performed manually using hand-held endoscopes to detect coating damages in confined spaces and therefore require a high level of individual expertise. Particularly due to the often poorly illuminated video data, these manual inspections are susceptible to uncertainties. This motivates an automated defect detection to provide defined and comparable results and also enable significant cost savings. For such a hand-held application with video data of poor quality, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damages are suitable and further examined in this work. Due to high efforts required in image annotation and a significant lack of broadly divergent image data (domain gap), only a few expressive annotated images are available. This necessitates extensive training methods to utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods, which implement Generative Adversarial Networks (GAN) to improve the training of segmentation networks by optimizing weights and generating synthetic annotated RGB image data for further training procedures. For this, small individual encoder and decoder structures are designed to resemble the implemented structures of the GANs. This enables an exchange of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. The usage of unsupervised domains in training with the GANs leads to a better generalization of the networks and tackles the challenges caused by the domain gap. Furthermore, a test series is presented that demonstrates the impact of these methods compared to standard supervised training and transfer learning methods based on common datasets. Finally, the developed CNNs are compared to larger state-of-the-art segmentation networks in terms of feed-forward computational time, accuracy and training duration.
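The weight exchange between the GAN and the segmentation network described in the abstract can be sketched schematically. The following is a hypothetical, dependency-free illustration (networks are modeled as plain `{parameter_name: (shape, values)}` dictionaries, and all layer names and shapes are invented for the example), not the authors' implementation:

```python
# Hypothetical sketch of the weight-exchange idea: because the GAN
# generator and the segmentation decoder share the same layer structure,
# every parameter whose name and shape match can be copied across.

def transfer_weights(src, dst):
    """Copy each parameter whose name and shape match from src to dst.

    Parameters that do not match (e.g. the final output layer, whose
    channel count differs between an RGB generator and a class-mask
    head) keep their original values in dst.
    """
    out = dict(dst)
    moved = []
    for name, (shape, values) in src.items():
        if name in dst and dst[name][0] == shape:
            out[name] = (shape, values)
            moved.append(name)
    return out, moved

# GAN generator tail: upsampling layers ending in a 3-channel RGB output.
gan_generator = {
    "up1.weight": ((32, 64, 2, 2), "gan-up1"),
    "up2.weight": ((16, 32, 2, 2), "gan-up2"),
    "head.weight": ((3, 16, 1, 1), "gan-head"),
}
# Segmentation decoder: identical structure, but a 2-class output head.
seg_decoder = {
    "up1.weight": ((32, 64, 2, 2), "seg-up1"),
    "up2.weight": ((16, 32, 2, 2), "seg-up2"),
    "head.weight": ((2, 16, 1, 1), "seg-head"),
}

seg_decoder, moved = transfer_weights(gan_generator, seg_decoder)
print(moved)  # ['up1.weight', 'up2.weight'] -- the mismatched head is skipped
```

In the paper the same idea additionally covers optimizer states; in a real framework (e.g. a state-dict-based one) the filtering step would compare tensor shapes the same way before loading.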

Keywords

    CNN, GAN, Semantic segmentation, U-Net, adversarial methods, damage inspection, endoscopic inspection, semi-supervised learning, transfer learning

Cite this

Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data. / Hedrich, Kolja; Hinz, Lennart; Reithmeier, Eduard.
Proc. SPIE 12019, AI and Optical Data Sciences III. ed. / Bahram Jalali; Ken-ichi Kitayama. 2022. 1201902 (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 12019).


Hedrich, K, Hinz, L & Reithmeier, E 2022, Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data. in B Jalali & K Kitayama (eds), Proc. SPIE 12019, AI and Optical Data Sciences III., 1201902, Proceedings of SPIE - The International Society for Optical Engineering, vol. 12019, SPIE OPTO, San Francisco, California, United States, 22 Jan 2022. https://doi.org/10.1117/12.2610123
Hedrich, K., Hinz, L., & Reithmeier, E. (2022). Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data. In B. Jalali, & K. Kitayama (Eds.), Proc. SPIE 12019, AI and Optical Data Sciences III Article 1201902 (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 12019). https://doi.org/10.1117/12.2610123
Hedrich K, Hinz L, Reithmeier E. Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data. In Jalali B, Kitayama K, editors, Proc. SPIE 12019, AI and Optical Data Sciences III. 2022. 1201902. (Proceedings of SPIE - The International Society for Optical Engineering). doi: 10.1117/12.2610123
Hedrich, Kolja ; Hinz, Lennart ; Reithmeier, Eduard. / Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data. Proc. SPIE 12019, AI and Optical Data Sciences III. editor / Bahram Jalali ; Ken-ichi Kitayama. 2022. (Proceedings of SPIE - The International Society for Optical Engineering).
@inproceedings{fada4e167c984e68b80ea0944de772ff,
title = "Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data",
abstract = "Within the aviation industry, considerable interest exists in minimizing possible maintenance expenses. In particular, the examination of critical components such as aircraft engines is of significant relevance. Currently, many inspection processes are still performed manually using hand-held endoscopes to detect coating damages in confined spaces and therefore require a high level of individual expertise. Particularly due to the often poorly illuminated video data, these manual inspections are susceptible to uncertainties. This motivates an automated defect detection to provide defined and comparable results and also enable significant cost savings. For such a hand-held application with video data of poor quality, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damages are suitable and further examined in this work. Due to high efforts required in image annotation and a significant lack of broadly divergent image data (domain gap), only a few expressive annotated images are available. This necessitates extensive training methods to utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods, which implement Generative Adversarial Networks (GAN) to improve the training of segmentation networks by optimizing weights and generating synthetic annotated RGB image data for further training procedures. For this, small individual encoder and decoder structures are designed to resemble the implemented structures of the GANs. This enables an exchange of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. The usage of unsupervised domains in training with the GANs leads to a better generalization of the networks and tackles the challenges caused by the domain gap. Furthermore, a test series is presented that demonstrates the impact of these methods compared to standard supervised training and transfer learning methods based on common datasets. Finally, the developed CNNs are compared to larger state-of-the-art segmentation networks in terms of feed-forward computational time, accuracy and training duration.",
keywords = "CNN, GAN, Semantic segmentation, U-Net, adversarial methods, damage inspection, endoscopic inspection, semi-supervised learning, transfer learning",
author = "Kolja Hedrich and Lennart Hinz and Eduard Reithmeier",
note = "Funding Information: The underlying project of this conference contribution was funded by the German Federal Ministry of Education and Research as part of the Aviation Research and Technology Program of the Niedersachsen Ministry of Economic Affairs, Employment, Transport and Digitalisation. The project is carried out in cooperation with the MTU Maintenance Hannover GmbH.; SPIE OPTO ; Conference date: 22-01-2022 Through 27-01-2022",
year = "2022",
doi = "10.1117/12.2610123",
language = "English",
series = "Proceedings of SPIE - The International Society for Optical Engineering",
editor = "Bahram Jalali and Ken-ichi Kitayama",
booktitle = "Proc. SPIE 12019, AI and Optical Data Sciences III",

}


TY - GEN

T1 - Damage segmentation using small convolutional neuronal networks and adversarial training methods on low-quality RGB video data

AU - Hedrich, Kolja

AU - Hinz, Lennart

AU - Reithmeier, Eduard

N1 - Funding Information: The underlying project of this conference contribution was funded by the German Federal Ministry of Education and Research as part of the Aviation Research and Technology Program of the Niedersachsen Ministry of Economic Affairs, Employment, Transport and Digitalisation. The project is carried out in cooperation with the MTU Maintenance Hannover GmbH.

PY - 2022

Y1 - 2022

N2 - Within the aviation industry, considerable interest exists in minimizing possible maintenance expenses. In particular, the examination of critical components such as aircraft engines is of significant relevance. Currently, many inspection processes are still performed manually using hand-held endoscopes to detect coating damages in confined spaces and therefore require a high level of individual expertise. Particularly due to the often poorly illuminated video data, these manual inspections are susceptible to uncertainties. This motivates an automated defect detection to provide defined and comparable results and also enable significant cost savings. For such a hand-held application with video data of poor quality, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damages are suitable and further examined in this work. Due to high efforts required in image annotation and a significant lack of broadly divergent image data (domain gap), only a few expressive annotated images are available. This necessitates extensive training methods to utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods, which implement Generative Adversarial Networks (GAN) to improve the training of segmentation networks by optimizing weights and generating synthetic annotated RGB image data for further training procedures. For this, small individual encoder and decoder structures are designed to resemble the implemented structures of the GANs. This enables an exchange of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. The usage of unsupervised domains in training with the GANs leads to a better generalization of the networks and tackles the challenges caused by the domain gap. Furthermore, a test series is presented that demonstrates the impact of these methods compared to standard supervised training and transfer learning methods based on common datasets. Finally, the developed CNNs are compared to larger state-of-the-art segmentation networks in terms of feed-forward computational time, accuracy and training duration.

AB - Within the aviation industry, considerable interest exists in minimizing possible maintenance expenses. In particular, the examination of critical components such as aircraft engines is of significant relevance. Currently, many inspection processes are still performed manually using hand-held endoscopes to detect coating damages in confined spaces and therefore require a high level of individual expertise. Particularly due to the often poorly illuminated video data, these manual inspections are susceptible to uncertainties. This motivates an automated defect detection to provide defined and comparable results and also enable significant cost savings. For such a hand-held application with video data of poor quality, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damages are suitable and further examined in this work. Due to high efforts required in image annotation and a significant lack of broadly divergent image data (domain gap), only a few expressive annotated images are available. This necessitates extensive training methods to utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods, which implement Generative Adversarial Networks (GAN) to improve the training of segmentation networks by optimizing weights and generating synthetic annotated RGB image data for further training procedures. For this, small individual encoder and decoder structures are designed to resemble the implemented structures of the GANs. This enables an exchange of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. The usage of unsupervised domains in training with the GANs leads to a better generalization of the networks and tackles the challenges caused by the domain gap. Furthermore, a test series is presented that demonstrates the impact of these methods compared to standard supervised training and transfer learning methods based on common datasets. Finally, the developed CNNs are compared to larger state-of-the-art segmentation networks in terms of feed-forward computational time, accuracy and training duration.

KW - CNN

KW - GAN

KW - Semantic segmentation

KW - U-Net

KW - adversarial methods

KW - damage inspection

KW - endoscopic inspection

KW - semi-supervised learning

KW - transfer learning

UR - http://www.scopus.com/inward/record.url?scp=85129869028&partnerID=8YFLogxK

U2 - 10.1117/12.2610123

DO - 10.1117/12.2610123

M3 - Conference contribution

T3 - Proceedings of SPIE - The International Society for Optical Engineering

BT - Proc. SPIE 12019, AI and Optical Data Sciences III

A2 - Jalali, Bahram

A2 - Kitayama, Ken-ichi

T2 - SPIE OPTO

Y2 - 22 January 2022 through 27 January 2022

ER -
