Improving instrument detection for a robotic scrub nurse using multi-view voting

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Jorge Badilla-Solórzano
  • Sontje Ihler
  • Nils Claudius Gellrich
  • Simon Spalthoff

External organisations

  • Medizinische Hochschule Hannover (MHH)

Details

Original language: English
Pages (from-to): 1961-1968
Number of pages: 8
Journal: International journal of computer assisted radiology and surgery
Volume: 18
Issue number: 11
Early online date: 2 Aug 2023
Publication status: Published - Nov 2023

Abstract

Purpose: A basic task of a robotic scrub nurse is surgical instrument detection. Deep learning techniques could potentially address this task; nevertheless, their performance is subject to some degree of error, which could render them unsuitable for real-world applications. In this work, we aim to demonstrate how the combination of a trained instrument detector with an instance-based voting scheme that considers several frames and viewpoints is enough to guarantee a strong improvement in the instrument detection task. Methods: We exploit the typical setup of a robotic scrub nurse to collect RGB data and point clouds from different viewpoints. Using trained Mask R-CNN models, we obtain predictions from each view. We propose a multi-view voting scheme based on predicted instances that combines the gathered data and predictions to produce a reliable map of the location of the instruments in the scene. Results: Our approach reduces the number of errors by more than 82% compared with the single-view case. On average, the data from five viewpoints are sufficient to infer the correct instrument arrangement with our best model. Conclusion: Our approach can drastically improve an instrument detector’s performance. Our method is practical and can be applied during an actual medical procedure without negatively affecting the surgical workflow. Our implementation and data are made available for the scientific community (https://github.com/Jorebs/Multi-view-Voting-Scheme).
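
The abstract sketches the method at a high level: Mask R-CNN detections from several viewpoints are fused by letting each predicted instance cast a vote on the identity of the instrument at its location. The authors' actual implementation is in the GitHub repository linked above. Purely as an illustrative sketch, the following Python snippet shows one way such instance-based voting across views can work; the function name vote_over_views, the matching radius MATCH_RADIUS, and the centroid-matching step are assumptions of this sketch, not details taken from the paper.

# Illustrative sketch of instance-based multi-view voting. NOT the authors'
# code; see https://github.com/Jorebs/Multi-view-Voting-Scheme for the real
# implementation. All names and thresholds here are assumed.
from collections import Counter, defaultdict

import numpy as np

MATCH_RADIUS = 0.03  # metres; assumed radius for matching detections to instances

def vote_over_views(per_view_detections):
    """Fuse per-view detections into one labelled map of instrument instances.

    per_view_detections: list over viewpoints; each viewpoint is a list of
    (xyz, class_label) pairs whose 3D centroids are assumed to be already
    transformed into a common world frame (e.g. via the robot's kinematics).
    Returns a list of (mean_xyz, majority_label) tuples, one per instance.
    """
    centroids = []                 # running mean 3D position per instance
    counts = []                    # number of detections fused per instance
    votes = defaultdict(Counter)   # instance index -> class-label vote counts

    for view in per_view_detections:
        for xyz, label in view:
            xyz = np.asarray(xyz, dtype=float)
            # Match the detection to the nearest existing instance, if close enough.
            idx = None
            if centroids:
                dists = np.linalg.norm(np.stack(centroids) - xyz, axis=1)
                best = int(np.argmin(dists))
                if dists[best] < MATCH_RADIUS:
                    idx = best
            if idx is None:        # no match: start a new instance
                centroids.append(xyz)
                counts.append(1)
                idx = len(centroids) - 1
            else:                  # match: update the running mean centroid
                counts[idx] += 1
                centroids[idx] = centroids[idx] + (xyz - centroids[idx]) / counts[idx]
            votes[idx][label] += 1
    return [(centroids[i], votes[i].most_common(1)[0][0])
            for i in range(len(centroids))]

# Toy usage: the second viewpoint misclassifies the forceps as scissors,
# but the majority vote over three views recovers the correct label.
views = [
    [((0.10, 0.20, 0.0), "scalpel"), ((0.40, 0.10, 0.0), "forceps")],
    [((0.11, 0.20, 0.0), "scalpel"), ((0.41, 0.10, 0.0), "scissors")],
    [((0.10, 0.21, 0.0), "scalpel"), ((0.40, 0.11, 0.0), "forceps")],
]
for position, label in vote_over_views(views):
    print(np.round(position, 3), label)

The toy run shows, in miniature, how aggregating predictions over multiple viewpoints can overrule occasional single-view errors, the effect the paper quantifies as an error reduction of more than 82%.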

Cite

Improving instrument detection for a robotic scrub nurse using multi-view voting. / Badilla-Solórzano, Jorge; Ihler, Sontje; Gellrich, Nils Claudius et al.
In: International journal of computer assisted radiology and surgery, Vol. 18, No. 11, 11.2023, p. 1961-1968.

Badilla-Solórzano J, Ihler S, Gellrich NC, Spalthoff S. Improving instrument detection for a robotic scrub nurse using multi-view voting. International journal of computer assisted radiology and surgery. 2023 Nov;18(11):1961-1968. Epub 2023 Aug 2. doi: 10.1007/s11548-023-03002-0
Badilla-Solórzano, Jorge ; Ihler, Sontje ; Gellrich, Nils Claudius et al. / Improving instrument detection for a robotic scrub nurse using multi-view voting. In: International journal of computer assisted radiology and surgery. 2023 ; Vol. 18, No. 11. pp. 1961-1968.
BibTeX
@article{09829cced4174b4f800c5ecff5e118c6,
title = "Improving instrument detection for a robotic scrub nurse using multi-view voting",
abstract = "Purpose: A basic task of a robotic scrub nurse is surgical instrument detection. Deep learning techniques could potentially address this task; nevertheless, their performance is subject to some degree of error, which could render them unsuitable for real-world applications. In this work, we aim to demonstrate how the combination of a trained instrument detector with an instance-based voting scheme that considers several frames and viewpoints is enough to guarantee a strong improvement in the instrument detection task. Methods: We exploit the typical setup of a robotic scrub nurse to collect RGB data and point clouds from different viewpoints. Using trained Mask R-CNN models, we obtain predictions from each view. We propose a multi-view voting scheme based on predicted instances that combines the gathered data and predictions to produce a reliable map of the location of the instruments in the scene. Results: Our approach reduces the number of errors by more than 82% compared with the single-view case. On average, the data from five viewpoints are sufficient to infer the correct instrument arrangement with our best model. Conclusion: Our approach can drastically improve an instrument detector{\textquoteright}s performance. Our method is practical and can be applied during an actual medical procedure without negatively affecting the surgical workflow. Our implementation and data are made available for the scientific community (https://github.com/Jorebs/Multi-view-Voting-Scheme).",
keywords = "Mask R-CNN, Multi-viewpoint inference, Robot-assisted surgery, Robotic scrub nurse, Surgical instrument detection",
author = "Jorge Badilla-Sol{\'o}rzano and Sontje Ihler and Gellrich, {Nils Claudius} and Simon Spalthoff",
note = "Funding Information: The main author wants to offer his gratitude to the University of Costa Rica for providing financial support, enabling the completion of the hereby presented research. ",
year = "2023",
month = nov,
doi = "10.1007/s11548-023-03002-0",
language = "English",
volume = "18",
pages = "1961--1968",
journal = "International journal of computer assisted radiology and surgery",
issn = "1861-6410",
publisher = "Springer Verlag",
number = "11",

}

RIS

TY - JOUR
T1 - Improving instrument detection for a robotic scrub nurse using multi-view voting
AU - Badilla-Solórzano, Jorge
AU - Ihler, Sontje
AU - Gellrich, Nils Claudius
AU - Spalthoff, Simon
N1 - Funding Information: The main author wants to offer his gratitude to the University of Costa Rica for providing financial support, enabling the completion of the hereby presented research.
PY - 2023/11
Y1 - 2023/11
AB - Purpose: A basic task of a robotic scrub nurse is surgical instrument detection. Deep learning techniques could potentially address this task; nevertheless, their performance is subject to some degree of error, which could render them unsuitable for real-world applications. In this work, we aim to demonstrate how the combination of a trained instrument detector with an instance-based voting scheme that considers several frames and viewpoints is enough to guarantee a strong improvement in the instrument detection task. Methods: We exploit the typical setup of a robotic scrub nurse to collect RGB data and point clouds from different viewpoints. Using trained Mask R-CNN models, we obtain predictions from each view. We propose a multi-view voting scheme based on predicted instances that combines the gathered data and predictions to produce a reliable map of the location of the instruments in the scene. Results: Our approach reduces the number of errors by more than 82% compared with the single-view case. On average, the data from five viewpoints are sufficient to infer the correct instrument arrangement with our best model. Conclusion: Our approach can drastically improve an instrument detector’s performance. Our method is practical and can be applied during an actual medical procedure without negatively affecting the surgical workflow. Our implementation and data are made available for the scientific community (https://github.com/Jorebs/Multi-view-Voting-Scheme).
KW - Mask R-CNN
KW - Multi-viewpoint inference
KW - Robot-assisted surgery
KW - Robotic scrub nurse
KW - Surgical instrument detection
UR - http://www.scopus.com/inward/record.url?scp=85166510779&partnerID=8YFLogxK
U2 - 10.1007/s11548-023-03002-0
DO - 10.1007/s11548-023-03002-0
M3 - Article
C2 - 37530904
AN - SCOPUS:85166510779
VL - 18
SP - 1961
EP - 1968
JO - International journal of computer assisted radiology and surgery
JF - International journal of computer assisted radiology and surgery
SN - 1861-6410
IS - 11
ER -
