Details
| Original language | English |
| --- | --- |
| Pages (from-to) | 15-25 |
| Number of pages | 11 |
| Journal | ISPRS Journal of Photogrammetry and Remote Sensing |
| Volume | 131 |
| Publication status | Published - 27 Jul 2017 |
Abstract
Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry, and scene graphs provide valuable information for it. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, precise support relations between objects are inferred by exploiting two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not rely on pixel-wise segmentation to obtain objects, which is computationally costly. We also propose several methods for evaluating the generated scene graphs, which this community has been lacking. Our experiments are carried out on the NYUv2 dataset. The results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.
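Since the abstract describes the pipeline only at a high level, the following Python sketch illustrates the general idea of combining a physical-stability cue with a category-level support prior to turn CNN detections into a support graph. It is not the authors' implementation: the stability proxy, the prior table, the weights, and the score threshold below are all illustrative assumptions.

```python
# Minimal sketch of the pipeline in the abstract (illustrative, not the paper's
# method): detect objects, score candidate support relations by combining a
# geometric stability term with a category-level support prior, and collect the
# winning relations as directed edges of a semantic scene graph.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Detection:
    label: str                      # object category from the CNN detector
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in image coordinates
    score: float                    # detector confidence

# Hypothetical prior: probability that a category is supported by another,
# e.g. estimated from annotated training scenes such as NYUv2.
SUPPORT_PRIOR: Dict[Tuple[str, str], float] = {
    ("cup", "table"): 0.9,
    ("pillow", "bed"): 0.8,
    ("table", "floor"): 0.95,
}

MIN_SCORE = 0.3  # assumed threshold below which no support edge is emitted

def stability_score(obj: Detection, support: Detection) -> float:
    """Crude geometric proxy for physical stability: the supported object's
    bottom edge should lie near the supporter's top edge, and the two boxes
    should overlap horizontally."""
    ox1, _, ox2, oy2 = obj.box
    sx1, sy1, sx2, _ = support.box
    overlap = max(0, min(ox2, sx2) - max(ox1, sx1))
    width = max(1, ox2 - ox1)
    vertical_gap = abs(oy2 - sy1)
    return (overlap / width) / (1.0 + 0.05 * vertical_gap)

def infer_support_relations(dets: List[Detection]) -> List[Tuple[str, str]]:
    """For each object, pick the most plausible supporter by combining the
    stability cue with the category prior (equal weights are an assumption)."""
    edges = []
    for obj in dets:
        best, best_score = None, 0.0
        for sup in dets:
            if sup is obj:
                continue
            prior = SUPPORT_PRIOR.get((obj.label, sup.label), 0.1)
            s = 0.5 * stability_score(obj, sup) + 0.5 * prior
            if s > best_score:
                best, best_score = sup, s
        if best is not None and best_score >= MIN_SCORE:
            edges.append((obj.label, best.label))  # "obj is supported by best"
    return edges

# Toy example: a floor, a table on the floor, a cup on the table.
dets = [
    Detection("floor", (0, 350, 640, 480), 0.99),
    Detection("table", (50, 200, 400, 380), 0.97),
    Detection("cup", (120, 150, 170, 205), 0.88),
]
print(infer_support_relations(dets))
# -> [('table', 'floor'), ('cup', 'table')]
```

The edge list is kept as plain tuples to stay dependency-free; a graph library such as networkx could equally hold the same supported-by edges plus node attributes (category, detection score) to form the full semantic scene graph.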
Keywords
- Object detection, Scene graph, Scene understanding, Spatial relationship, Support relationship
ASJC Scopus subject areas
- Physics and Astronomy (all)
- Atomic and Molecular Physics, and Optics
- Engineering (all)
- Engineering (miscellaneous)
- Computer Science (all)
- Computer Science Applications
- Earth and Planetary Sciences (all)
- Computers in Earth Sciences
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 131, 27.07.2017, p. 15-25.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - On support relations and semantic scene graphs
AU - Yang, Michael Ying
AU - Liao, Wentong
AU - Ackermann, Hanno
AU - Rosenhahn, Bodo
N1 - Funding information: The work is funded by DFG (German Research Foundation) YA 351/2-1. The authors gratefully acknowledge the support.
PY - 2017/7/27
Y1 - 2017/7/27
N2 - Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry, and scene graphs provide valuable information for it. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, precise support relations between objects are inferred by exploiting two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not rely on pixel-wise segmentation to obtain objects, which is computationally costly. We also propose several methods for evaluating the generated scene graphs, which this community has been lacking. Our experiments are carried out on the NYUv2 dataset. The results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.
AB - Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry, and scene graphs provide valuable information for it. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, precise support relations between objects are inferred by exploiting two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not rely on pixel-wise segmentation to obtain objects, which is computationally costly. We also propose several methods for evaluating the generated scene graphs, which this community has been lacking. Our experiments are carried out on the NYUv2 dataset. The results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.
KW - Object detection
KW - Scene graph
KW - Scene understanding
KW - Spatial relationship
KW - Support relationship
UR - http://www.scopus.com/inward/record.url?scp=85026192760&partnerID=8YFLogxK
U2 - 10.1016/j.isprsjprs.2017.07.010
DO - 10.1016/j.isprsjprs.2017.07.010
M3 - Article
AN - SCOPUS:85026192760
VL - 131
SP - 15
EP - 25
JO - ISPRS Journal of Photogrammetry and Remote Sensing
JF - ISPRS Journal of Photogrammetry and Remote Sensing
SN - 0924-2716
ER -