Appearance based deep domain adaptation for the classification of aerial images

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • D. Wittich
  • F. Rottensteiner

Details

Original language: English
Pages (from-to): 82-102
Number of pages: 21
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 180
Early online date: 19 Aug 2021
Publication status: Published - Oct 2021

Abstract

This paper addresses appearance-based domain adaptation for the pixel-wise classification of remotely sensed data using deep neural networks (DNN) as a strategy to reduce the requirements of DNN with respect to the availability of training data. We focus on the setting in which labelled data are only available in a source domain D_S, but not in a target domain D_T, known as unsupervised domain adaptation in computer vision. Our method is based on adversarial training of an appearance adaptation network (AAN) that transforms images from D_S such that they look like images from D_T. Together with the original label maps from D_S, the transformed images are used to adapt a DNN to D_T. The AAN has to change the appearance of objects of a certain class such that they resemble objects of the same class in D_T. Many approaches try to achieve this goal by incorporating cycle consistency in the adaptation process, but such approaches tend to hallucinate structures that occur frequently in one of the domains. In contrast, we propose a joint training strategy for the AAN and the classifier, which constrains the AAN to transform the images such that they are correctly classified. To further improve the adaptation performance, we propose a new regularization loss for the discriminator network used in adversarial training. We also address the problem of finding the optimal values of the trained network parameters, proposing a new unsupervised entropy-based parameter selection criterion that compensates for the fact that there is no validation set in D_T that could be monitored. As a minor contribution, we present a new weighting strategy for the cross-entropy loss, addressing the problem of imbalanced class distributions. Our method is evaluated in 42 adaptation scenarios using datasets from 7 cities, all consisting of high-resolution digital orthophotos and height data. It achieves a positive transfer in all cases, and on average it improves the performance in the target domain by 4.3% in overall accuracy. In adaptation scenarios between the Vaihingen and Potsdam datasets from the ISPRS semantic labelling benchmark, our method outperforms those from recent publications by 10-20% with respect to the mean intersection over union.
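
The unsupervised entropy-based parameter selection criterion can be made concrete with a small sketch. The Python snippet below is illustrative only, assuming the criterion amounts to keeping the parameter state whose softmax predictions on unlabelled target-domain images have the lowest mean per-pixel entropy; the names mean_prediction_entropy, predict, saved_states and target_images are hypothetical and not taken from the paper.

import numpy as np

def mean_prediction_entropy(prob_maps):
    # prob_maps: iterable of arrays of shape (C, H, W) holding the class
    # probabilities predicted for unlabelled target-domain images.
    entropies = []
    for p in prob_maps:
        p = np.clip(p, 1e-12, 1.0)         # guard against log(0)
        h = -(p * np.log(p)).sum(axis=0)   # per-pixel entropy, shape (H, W)
        entropies.append(h.mean())
    return float(np.mean(entropies))

# Hypothetical usage: 'predict' runs the classifier with a given parameter
# state; the state with the lowest mean entropy on the target domain is kept.
# best_state = min(saved_states,
#                  key=lambda s: mean_prediction_entropy(predict(s, target_images)))

The class-imbalance problem addressed by the paper's new cross-entropy weighting can likewise be illustrated with the standard inverse-frequency baseline; this is a generic sketch, not the weighting strategy proposed in the paper.

def inverse_frequency_weights(label_maps, num_classes):
    # label_maps: iterable of integer arrays of shape (H, W) with class ids
    # from the source-domain training labels.
    counts = np.zeros(num_classes, dtype=np.float64)
    for labels in label_maps:
        counts += np.bincount(labels.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-12)  # rare classes get larger weights
    return weights / weights.mean()          # normalise so the mean weight is 1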

Cite

Appearance based deep domain adaptation for the classification of aerial images. / Wittich, D.; Rottensteiner, F.
In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 180, 10.2021, p. 82-102.

Wittich D, Rottensteiner F. Appearance based deep domain adaptation for the classification of aerial images. ISPRS Journal of Photogrammetry and Remote Sensing. 2021 Oct;180:82-102. Epub 2021 Aug 19. doi: 10.1016/j.isprsjprs.2021.08.004, 10.48550/arXiv.2108.07779
@article{d481e668f4964229ba86eb99d3775eaf,
title = "Appearance based deep domain adaptation for the classification of aerial images",
keywords = "Aerial Images, Appearance Adaptation, Deep Learning, Domain Adaptation, Pixel-wise Classification, Remote Sensing",
author = "D. Wittich and F. Rottensteiner",
note = "Funding Information: We thank the Landesamt f{\"u}r Vermessung und Geoinformation Schleswig-Holstein and the Landesamt f{\"u}r Geoinformation und Landesvermessung Niedersachsen (LGLN) for providing the datasets. We also thank the International Society for Photogrammetry and Remote Sensing (ISPRS) for providing the data of the ISPRS labelling challenge. The Vaihingen dataset was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) (Cramer, 2010): http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html.",
year = "2021",
month = oct,
doi = "10.1016/j.isprsjprs.2021.08.004",
language = "English",
volume = "180",
pages = "82--102",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
issn = "0924-2716",
publisher = "Elsevier",

}

TY - JOUR

T1 - Appearance based deep domain adaptation for the classification of aerial images

AU - Wittich, D.

AU - Rottensteiner, F.

N1 - Funding Information: We thank the Landesamt für Vermessung und Geoinformation Schleswig-Holstein and the Landesamt für Geoinformation und Landesvermessung Niedersachsen (LGLN) for providing the datasets. We also thank the International Society for Photogrammetry and Remote Sensing (ISPRS) for providing the data of the ISPRS labelling challenge. The Vaihingen dataset was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) (Cramer, 2010): http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html.

PY - 2021/10

Y1 - 2021/10

KW - Aerial Images

KW - Appearance Adaptation

KW - Deep Learning

KW - Domain Adaptation

KW - Pixel-wise Classification

KW - Remote Sensing

UR - http://www.scopus.com/inward/record.url?scp=85113140428&partnerID=8YFLogxK

U2 - 10.1016/j.isprsjprs.2021.08.004

DO - 10.1016/j.isprsjprs.2021.08.004

M3 - Article

AN - SCOPUS:85113140428

VL - 180

SP - 82

EP - 102

JO - ISPRS Journal of Photogrammetry and Remote Sensing

JF - ISPRS Journal of Photogrammetry and Remote Sensing

SN - 0924-2716

ER -