PanoSCU: A Simulation-Based Dataset for Panoramic Indoor Scene Understanding

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Mariia Khan
  • Yue Qiu
  • Yuren Cong
  • Jumana Abu-Khalaf
  • David Suter
  • Bodo Rosenhahn

External Research Organisations

  • Edith Cowan University
  • National Institute of Advanced Industrial Science and Technology

Details

Original language: English
Pages (from-to): 72456-72476
Number of pages: 21
Journal: IEEE ACCESS
Volume: 13
Publication status: Published - 15 Apr 2025

Abstract

Panoramic images offer a comprehensive spatial view that is crucial for indoor robotics tasks such as visual room rearrangement, where an agent must restore objects to their original positions or states. Unlike existing 2D scene change understanding datasets, which rely on single-view images, panoramic views capture richer spatial context, object relationships, and occlusions, making them better suited for embodied artificial intelligence (AI) applications. To address this, we introduce Panoramic Scene Change Understanding (PanoSCU), a dataset specifically designed to advance the visual object rearrangement task. Our dataset comprises 5,300 panoramas generated in an embodied simulator, encompassing 48 common indoor object classes. PanoSCU supports eight research tasks: single-view and panoramic detection, single-view and panoramic segmentation, single-view and panoramic change understanding, embodied object tracking, and change reversal. We also present PanoStitch, a training-free method for automatic panoramic data collection within embodied environments. We evaluate state-of-the-art methods on the panoramic segmentation and change understanding tasks. Existing methods fall short because they are not designed for panoramic inputs and struggle with the varying aspect ratios and object sizes that arise from the unique challenges of visual object rearrangement. Our findings reveal these limitations and underscore PanoSCU's potential to drive progress in developing models capable of robust panoramic reasoning and fine-grained scene change understanding.
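The record itself contains no code, but the collection idea behind PanoStitch, rotating an embodied agent in place and stitching the captured perspective views into a single panorama, is concrete enough to sketch. The Python snippet below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the AI2-THOR-style controller API, the 30-degree rotation step, and OpenCV's feature-based stitcher are all choices made for the example.

# Illustrative sketch only; not the authors' PanoStitch code.
# Assumes an AI2-THOR-style simulator (pip install ai2thor) and OpenCV.
import cv2
from ai2thor.controller import Controller

def collect_panorama(scene="FloorPlan1", step_degrees=30):
    """Rotate the agent in place, grab one frame per step, and
    stitch the views into a panorama with OpenCV's stitcher."""
    controller = Controller(scene=scene)
    frames = []
    for _ in range(360 // step_degrees):
        event = controller.step(action="RotateRight", degrees=step_degrees)
        # event.frame is an RGB array; OpenCV expects BGR channel order.
        frames.append(cv2.cvtColor(event.frame, cv2.COLOR_RGB2BGR))
    controller.stop()
    # Training-free stitching via OpenCV's feature-based stitcher.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status {status}")
    return panorama

if __name__ == "__main__":
    cv2.imwrite("panorama.png", collect_panorama())

A faithful pipeline would additionally handle equirectangular projection and the transfer of segmentation and change annotations into panoramic coordinates; the paper, not this sketch, defines those details.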

Keywords

change detection algorithms, embodied artificial intelligence, image stitching, object segmentation

Cite this

PanoSCU: A Simulation-Based Dataset for Panoramic Indoor Scene Understanding. / Khan, Mariia; Qiu, Yue; Cong, Yuren et al.
In: IEEE ACCESS, Vol. 13, 15.04.2025, pp. 72456-72476.

Khan M, Qiu Y, Cong Y, Abu-Khalaf J, Suter D, Rosenhahn B. PanoSCU: A Simulation-Based Dataset for Panoramic Indoor Scene Understanding. IEEE ACCESS. 2025 Apr 15;13:72456-72476. doi: 10.1109/ACCESS.2025.3561055
Khan, Mariia; Qiu, Yue; Cong, Yuren et al. / PanoSCU: A Simulation-Based Dataset for Panoramic Indoor Scene Understanding. In: IEEE ACCESS. 2025; Vol. 13. pp. 72456-72476.
BibTeX
@article{824999392e7d4ba4b97ad532cc794d7c,
title = "PanoSCU: A Simulation-Based Dataset for Panoramic Indoor Scene Understanding",
abstract = "Panoramic images offer a comprehensive spatial view that is crucial for indoor robotics tasks such as visual room rearrangement, where an agent must restore objects to their original positions or states. Unlike existing 2D scene change understanding datasets, which rely on single-view images, panoramic views capture richer spatial context, object relationships, and occlusions, making them better suited for embodied artificial intelligence (AI) applications. To address this, we introduce Panoramic Scene Change Understanding (PanoSCU), a dataset specifically designed to advance the visual object rearrangement task. Our dataset comprises 5,300 panoramas generated in an embodied simulator, encompassing 48 common indoor object classes. PanoSCU supports eight research tasks: single-view and panoramic detection, single-view and panoramic segmentation, single-view and panoramic change understanding, embodied object tracking, and change reversal. We also present PanoStitch, a training-free method for automatic panoramic data collection within embodied environments. We evaluate state-of-the-art methods on the panoramic segmentation and change understanding tasks. Existing methods fall short because they are not designed for panoramic inputs and struggle with the varying aspect ratios and object sizes that arise from the unique challenges of visual object rearrangement. Our findings reveal these limitations and underscore PanoSCU{\textquoteright}s potential to drive progress in developing models capable of robust panoramic reasoning and fine-grained scene change understanding.",
keywords = "change detection algorithms, embodied artificial intelligence, image stitching, object segmentation",
author = "Mariia Khan and Yue Qiu and Yuren Cong and Jumana Abu-Khalaf and David Suter and Bodo Rosenhahn",
note = "Publisher Copyright: {\textcopyright} 2013 IEEE.",
year = "2025",
month = apr,
day = "15",
doi = "10.1109/ACCESS.2025.3561055",
language = "English",
volume = "13",
pages = "72456--72476",
journal = "IEEE ACCESS",
issn = "2169-3536",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

RIS

TY - JOUR
T1 - PanoSCU
T2 - A Simulation-Based Dataset for Panoramic Indoor Scene Understanding
AU - Khan, Mariia
AU - Qiu, Yue
AU - Cong, Yuren
AU - Abu-Khalaf, Jumana
AU - Suter, David
AU - Rosenhahn, Bodo
N1 - Publisher Copyright: © 2013 IEEE.
PY - 2025/4/15
Y1 - 2025/4/15
N2 - Panoramic images offer a comprehensive spatial view that is crucial for indoor robotics tasks such as visual room rearrangement, where an agent must restore objects to their original positions or states. Unlike existing 2D scene change understanding datasets, which rely on single-view images, panoramic views capture richer spatial context, object relationships, and occlusions, making them better suited for embodied artificial intelligence (AI) applications. To address this, we introduce Panoramic Scene Change Understanding (PanoSCU), a dataset specifically designed to advance the visual object rearrangement task. Our dataset comprises 5,300 panoramas generated in an embodied simulator, encompassing 48 common indoor object classes. PanoSCU supports eight research tasks: single-view and panoramic detection, single-view and panoramic segmentation, single-view and panoramic change understanding, embodied object tracking, and change reversal. We also present PanoStitch, a training-free method for automatic panoramic data collection within embodied environments. We evaluate state-of-the-art methods on the panoramic segmentation and change understanding tasks. Existing methods fall short because they are not designed for panoramic inputs and struggle with the varying aspect ratios and object sizes that arise from the unique challenges of visual object rearrangement. Our findings reveal these limitations and underscore PanoSCU's potential to drive progress in developing models capable of robust panoramic reasoning and fine-grained scene change understanding.
AB - Panoramic images offer a comprehensive spatial view that is crucial for indoor robotics tasks such as visual room rearrangement, where an agent must restore objects to their original positions or states. Unlike existing 2D scene change understanding datasets, which rely on single-view images, panoramic views capture richer spatial context, object relationships, and occlusions, making them better suited for embodied artificial intelligence (AI) applications. To address this, we introduce Panoramic Scene Change Understanding (PanoSCU), a dataset specifically designed to advance the visual object rearrangement task. Our dataset comprises 5,300 panoramas generated in an embodied simulator, encompassing 48 common indoor object classes. PanoSCU supports eight research tasks: single-view and panoramic detection, single-view and panoramic segmentation, single-view and panoramic change understanding, embodied object tracking, and change reversal. We also present PanoStitch, a training-free method for automatic panoramic data collection within embodied environments. We evaluate state-of-the-art methods on the panoramic segmentation and change understanding tasks. Existing methods fall short because they are not designed for panoramic inputs and struggle with the varying aspect ratios and object sizes that arise from the unique challenges of visual object rearrangement. Our findings reveal these limitations and underscore PanoSCU's potential to drive progress in developing models capable of robust panoramic reasoning and fine-grained scene change understanding.
KW - change detection algorithms
KW - embodied artificial intelligence
KW - image stitching
KW - object segmentation
UR - http://www.scopus.com/inward/record.url?scp=105002779883&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2025.3561055
DO - 10.1109/ACCESS.2025.3561055
M3 - Article
AN - SCOPUS:105002779883
VL - 13
SP - 72456
EP - 72476
JO - IEEE ACCESS
JF - IEEE ACCESS
SN - 2169-3536
ER -
