Details
Original language | English |
---|---|
Title of host publication | Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings |
Editors | Linwei Wang, Qi Dou, P. Thomas Fletcher, Stefanie Speidel, Shuo Li |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 654-663 |
Number of pages | 10 |
ISBN (electronic) | 9783031164316 |
ISBN (print) | 9783031164309 |
Publication status | Published - 15 Sep 2022 |
Event | 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 - Singapore, Singapore. Duration: 18 Sep 2022 → 22 Sep 2022 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 13431 LNCS |
ISSN (print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
Computer-aided diagnosis (CAD) has gained increasing attention in the general research community over the last few years as an example of a typical limited-data application, with experiments on labeled datasets of 100k–200k samples. Although these datasets are still small compared to natural-image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no established baseline indicating which methods to build on in the low-data regime. In this work we bridge this gap by providing an extensive study on medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low-data regime of 5000 images on the CheXpert dataset. We find that models pretrained on ImageNet21k achieve a higher AUC and that larger models require fewer training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All ‘modern’ architectures achieve a higher AUC than ResNet50. Regularizing Big Transfer models with MixUp or Mean Teacher improves calibration; MixUp also improves accuracy. Vision Transformers achieve results comparable to or on par with Big Transfer models.
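The abstract mentions MixUp regularization when fine-tuning pretrained backbones on 5000 labeled CheXpert images. As a rough, hypothetical sketch only (not the authors' code), the snippet below shows how MixUp is commonly combined with a single fine-tuning step in PyTorch; the model, optimizer, multi-label BCE criterion and the Beta parameter alpha = 0.3 are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: MixUp-regularized fine-tuning step (PyTorch).
# All names and hyperparameters here are illustrative assumptions,
# not taken from the paper.
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.3):
    """Blend images and (multi-)labels with a Beta-sampled coefficient."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    idx = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[idx], y, y[idx], lam

def finetune_step(model, batch, optimizer, criterion):
    """One optimization step on a pretrained backbone with MixUp."""
    x, y = batch
    x_mixed, y_a, y_b, lam = mixup_batch(x, y)
    optimizer.zero_grad()
    logits = model(x_mixed)
    # Interpolate the loss between the two label sets with the same lambda.
    loss = lam * criterion(logits, y_a) + (1.0 - lam) * criterion(logits, y_b)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with a multi-label criterion, e.g. CheXpert-style labels:
# criterion = torch.nn.BCEWithLogitsLoss()
# loss = finetune_step(model, (images, labels), optimizer, criterion)
```

Note that the same mixing coefficient lambda is applied to both the image blend and the loss interpolation, which is what makes the regularization consistent between inputs and targets.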
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings. ed. / Linwei Wang; Qi Dou; P. Thomas Fletcher; Stefanie Speidel; Shuo Li. Springer Science and Business Media Deutschland GmbH, 2022. pp. 654-663 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13431 LNCS).
Publication: Chapter in book/report/conference proceedings › Conference contribution › Research › Peer review
TY - GEN
T1 - A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000
AU - Ihler, Sontje
AU - Kuhnke, Felix
AU - Spindeldreier, Svenja
PY - 2022/9/15
Y1 - 2022/9/15
N2 - Computer-aided diagnosis (CAD) has gained increasing attention in the general research community over the last few years as an example of a typical limited-data application, with experiments on labeled datasets of 100k–200k samples. Although these datasets are still small compared to natural-image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no established baseline indicating which methods to build on in the low-data regime. In this work we bridge this gap by providing an extensive study on medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low-data regime of 5000 images on the CheXpert dataset. We find that models pretrained on ImageNet21k achieve a higher AUC and that larger models require fewer training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All ‘modern’ architectures achieve a higher AUC than ResNet50. Regularizing Big Transfer models with MixUp or Mean Teacher improves calibration; MixUp also improves accuracy. Vision Transformers achieve results comparable to or on par with Big Transfer models.
AB - Computer-aided diagnosis (CAD) has gained increasing attention in the general research community over the last few years as an example of a typical limited-data application, with experiments on labeled datasets of 100k–200k samples. Although these datasets are still small compared to natural-image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no established baseline indicating which methods to build on in the low-data regime. In this work we bridge this gap by providing an extensive study on medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low-data regime of 5000 images on the CheXpert dataset. We find that models pretrained on ImageNet21k achieve a higher AUC and that larger models require fewer training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All ‘modern’ architectures achieve a higher AUC than ResNet50. Regularizing Big Transfer models with MixUp or Mean Teacher improves calibration; MixUp also improves accuracy. Vision Transformers achieve results comparable to or on par with Big Transfer models.
KW - Limited data
KW - Medical image classification
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85138802602&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16431-6_62
DO - 10.1007/978-3-031-16431-6_62
M3 - Conference contribution
AN - SCOPUS:85138802602
SN - 9783031164309
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 654
EP - 663
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings
A2 - Wang, Linwei
A2 - Dou, Qi
A2 - Fletcher, P. Thomas
A2 - Speidel, Stefanie
A2 - Li, Shuo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
Y2 - 18 September 2022 through 22 September 2022
ER -