COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS

Publication: Contribution to journal › Conference article in journal › Research › Peer review

Authors

P. Trusheim, M. Mehltretter, F. Rottensteiner, C. Heipke


Details

Original language: English
Pages (from-to): 169-177
Number of pages: 9
Journal: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume: 5
Issue number: 1
Publication status: Published - 17 May 2022
Event: 2022 24th ISPRS Congress on Imaging Today, Foreseeing Tomorrow, Commission I - Nice, France
Duration: 6 June 2022 - 11 June 2022

Abstract

In the context of image orientation, it is commonly assumed that the environment is completely static. This is why dynamic elements are typically filtered out using robust estimation procedures. Especially in urban areas, however, many such dynamic elements are present in the environment, which leads to a noticeable amount of errors that have to be detected via robust adjustment. This problem is even more evident in the case of cooperative image orientation using dynamic objects as ground control points (GCPs), because such dynamic objects carry the relevant information. One way to deal with this challenge is to detect these dynamic objects prior to the adjustment and to process the related image points separately. To do so, a novel methodology to distinguish dynamic and static image points in stereoscopic image sequences is introduced in this paper, using a neural network for the detection of potentially dynamic objects and additional checks via forward intersection. To investigate the effects of the consideration of dynamic points in the adjustment, an image sequence of an inner-city traffic scenario is used; image orientation, as well as the 3D coordinates of tie points, are calculated via a robust bundle adjustment. It is shown that compared to a solution without considering dynamic points, errors in the tie points are significantly reduced, while the median of the precision of all 3D coordinates of the tie points is improved.
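
As a rough illustration of the idea described in the abstract (a neural-network detector for potentially dynamic objects combined with a forward-intersection consistency check), the Python sketch below flags a tie point as dynamic if it lies inside a detected object mask and its forward-intersected 3D position moves noticeably between epochs. This is not the authors' implementation: the function names, the linear DLT triangulation, and the 0.10 m motion threshold are illustrative assumptions.

import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) forward intersection of one point from an oriented stereo pair.
    P_left, P_right: 3x4 projection matrices; x_left, x_right: 2D image points."""
    A = np.stack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                     # homogeneous -> Euclidean

def is_dynamic(track, stereo_pairs, in_detector_mask, motion_tol=0.10):
    """Classify a tie point as dynamic (True) or static (False).
    track: dict epoch -> (x_left, x_right) image observations of the point;
    stereo_pairs: dict epoch -> (P_left, P_right) oriented projection matrices;
    in_detector_mask: True if the point falls inside a 'potentially dynamic'
    object detected by the neural network; motion_tol is an assumed threshold."""
    if not in_detector_mask:
        return False                        # background point, treat as static
    epochs = sorted(track)
    positions = [triangulate(*stereo_pairs[t], *track[t]) for t in epochs]
    # compare consecutive forward-intersected 3D positions over time
    shifts = [np.linalg.norm(positions[i + 1] - positions[i])
              for i in range(len(positions) - 1)]
    return max(shifts, default=0.0) > motion_tol

Points flagged this way could then be handled separately in the bundle adjustment, as the abstract suggests, rather than being left to the robust estimator alone.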

ASJC Scopus subject areas

Cite this

COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS. / Trusheim, P.; Mehltretter, M.; Rottensteiner, F. et al.
In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 5, No. 1, 17.05.2022, pp. 169-177.


Trusheim, P, Mehltretter, M, Rottensteiner, F & Heipke, C 2022, 'COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS', ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 5, no. 1, pp. 169-177. https://doi.org/10.5194/isprs-annals-V-1-2022-169-2022
Trusheim, P., Mehltretter, M., Rottensteiner, F., & Heipke, C. (2022). COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 5(1), 169-177. https://doi.org/10.5194/isprs-annals-V-1-2022-169-2022
Trusheim P, Mehltretter M, Rottensteiner F, Heipke C. COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2022 May 17;5(1):169-177. doi: 10.5194/isprs-annals-V-1-2022-169-2022
Trusheim, P. ; Mehltretter, M. ; Rottensteiner, F. et al. / COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2022 ; Vol. 5, No. 1. pp. 169-177.
Download (BibTeX)
@article{1b6d44da891e492fb96265e2c4689d4d,
title = "COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS",
abstract = "In the context of image orientation, it is commonly assumed that the environment is completely static. This is why dynamic elements are typically filtered out using robust estimation procedures. Especially in urban areas, however, many such dynamic elements are present in the environment, which leads to a noticeable amount of errors that have to be detected via robust adjustment. This problem is even more evident in the case of cooperative image orientation using dynamic objects as ground control points (GCPs), because such dynamic objects carry the relevant information. One way to deal with this challenge is to detect these dynamic objects prior to the adjustment and to process the related image points separately. To do so, a novel methodology to distinguish dynamic and static image points in stereoscopic image sequences is introduced in this paper, using a neural network for the detection of potentially dynamic objects and additional checks via forward intersection. To investigate the effects of the consideration of dynamic points in the adjustment, an image sequence of an inner-city traffic scenario is used; image orientation, as well as the 3D coordinates of tie points, are calculated via a robust bundle adjustment. It is shown that compared to a solution without considering dynamic points, errors in the tie points are significantly reduced, while the median of the precision of all 3D coordinates of the tie points is improved. ",
keywords = "Bundle Adjustment, Cooperative Localisation, Dynamic Scene, Image Orientation",
author = "P. Trusheim and M. Mehltretter and F. Rottensteiner and C. Heipke",
note = "Funding Information: This work was supported by the German Research Foundation (DFG) as a part of the Research Training Group i.c.sens [GRK2159].; 2022 24th ISPRS Congress on Imaging Today, Foreseeing Tomorrow, Commission I ; Conference date: 06-06-2022 Through 11-06-2022",
year = "2022",
month = may,
day = "17",
doi = "10.5194/isprs-annals-V-1-2022-169-2022",
language = "English",
volume = "5",
pages = "169--177",
number = "1",

}

Download (RIS)

TY - JOUR

T1 - COOPERATIVE IMAGE ORIENTATION CONSIDERING DYNAMIC OBJECTS

AU - Trusheim, P.

AU - Mehltretter, M.

AU - Rottensteiner, F.

AU - Heipke, C.

N1 - Funding Information: This work was supported by the German Research Foundation (DFG) as a part of the Research Training Group i.c.sens [GRK2159].

PY - 2022/5/17

Y1 - 2022/5/17

N2 - In the context of image orientation, it is commonly assumed that the environment is completely static. This is why dynamic elements are typically filtered out using robust estimation procedures. Especially in urban areas, however, many such dynamic elements are present in the environment, which leads to a noticeable amount of errors that have to be detected via robust adjustment. This problem is even more evident in the case of cooperative image orientation using dynamic objects as ground control points (GCPs), because such dynamic objects carry the relevant information. One way to deal with this challenge is to detect these dynamic objects prior to the adjustment and to process the related image points separately. To do so, a novel methodology to distinguish dynamic and static image points in stereoscopic image sequences is introduced in this paper, using a neural network for the detection of potentially dynamic objects and additional checks via forward intersection. To investigate the effects of the consideration of dynamic points in the adjustment, an image sequence of an inner-city traffic scenario is used; image orientation, as well as the 3D coordinates of tie points, are calculated via a robust bundle adjustment. It is shown that compared to a solution without considering dynamic points, errors in the tie points are significantly reduced, while the median of the precision of all 3D coordinates of the tie points is improved.

AB - In the context of image orientation, it is commonly assumed that the environment is completely static. This is why dynamic elements are typically filtered out using robust estimation procedures. Especially in urban areas, however, many such dynamic elements are present in the environment, which leads to a noticeable amount of errors that have to be detected via robust adjustment. This problem is even more evident in the case of cooperative image orientation using dynamic objects as ground control points (GCPs), because such dynamic objects carry the relevant information. One way to deal with this challenge is to detect these dynamic objects prior to the adjustment and to process the related image points separately. To do so, a novel methodology to distinguish dynamic and static image points in stereoscopic image sequences is introduced in this paper, using a neural network for the detection of potentially dynamic objects and additional checks via forward intersection. To investigate the effects of the consideration of dynamic points in the adjustment, an image sequence of an inner-city traffic scenario is used; image orientation, as well as the 3D coordinates of tie points, are calculated via a robust bundle adjustment. It is shown that compared to a solution without considering dynamic points, errors in the tie points are significantly reduced, while the median of the precision of all 3D coordinates of the tie points is improved.

KW - Bundle Adjustment

KW - Cooperative Localisation

KW - Dynamic Scene

KW - Image Orientation

UR - http://www.scopus.com/inward/record.url?scp=85132814806&partnerID=8YFLogxK

U2 - 10.5194/isprs-annals-V-1-2022-169-2022

DO - 10.5194/isprs-annals-V-1-2022-169-2022

M3 - Conference article

AN - SCOPUS:85132814806

VL - 5

SP - 169

EP - 177

JO - ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

JF - ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences

SN - 2194-9042

IS - 1

T2 - 2022 24th ISPRS Congress on Imaging Today, Foreseeing Tomorrow, Commission I

Y2 - 6 June 2022 through 11 June 2022

ER -
