RelTR: Relation Transformer for Scene Graph Generation

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Yuren Cong
  • Michael Ying Yang
  • Bodo Rosenhahn

External Research Organisations

  • University of Twente

Details

Original language: English
Pages (from-to): 11169-11183
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 9
Early online date: 19 Apr 2023
Publication status: Published - 1 Sept 2023

Abstract

Different objects in the same scene are more or less related to each other, but only a limited number of these relationships are noteworthy. Inspired by the Detection Transformer (DETR), which excels in object detection, we view scene graph generation as a set prediction problem. In this article, we propose Relation Transformer (RelTR), an end-to-end scene graph generation model with an encoder-decoder architecture. The encoder reasons about the visual feature context, while the decoder infers a fixed-size set of subject-predicate-object triplets using different types of attention mechanisms with coupled subject and object queries. We design a set prediction loss that matches ground-truth and predicted triplets for end-to-end training. In contrast to most existing scene graph generation methods, RelTR is a one-stage method that predicts sparse scene graphs directly, using only visual appearance, without combining entities and labeling all possible predicates. Extensive experiments on the Visual Genome, Open Images V6, and VRD datasets demonstrate the superior performance and fast inference of our model.
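
Editorial illustration: the abstract names two DETR-style ingredients, coupled subject/object queries decoded into a fixed-size triplet set, and a set prediction loss that matches predictions to ground truth. The PyTorch sketch below renders those two ideas under stated assumptions and is not the authors' implementation: the class and function names, the Visual Genome-style class counts, the shared decoder for both query sets, and the predicate-only matching cost are all simplifications (RelTR couples the two branches through several dedicated attention types and matches on entity labels and boxes as well).

# Minimal sketch (not the authors' code) of coupled subject/object
# queries and set-based triplet matching, as described in the abstract.
# All names, sizes, and the predicate-only matching cost are
# illustrative assumptions.
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment

class TripletDecoderSketch(nn.Module):
    def __init__(self, d_model=256, num_triplets=200,
                 num_entity_classes=150, num_predicate_classes=50):
        super().__init__()
        # Coupled queries: slot i owns one subject query and one object
        # query, so the i-th decoder outputs always form one triplet.
        self.subject_queries = nn.Embedding(num_triplets, d_model)
        self.object_queries = nn.Embedding(num_triplets, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        # "+ 1" adds a no-object / no-relation class for unused slots.
        self.sub_cls = nn.Linear(d_model, num_entity_classes + 1)
        self.obj_cls = nn.Linear(d_model, num_entity_classes + 1)
        self.rel_cls = nn.Linear(2 * d_model, num_predicate_classes + 1)

    def forward(self, memory):
        # memory: (B, HW, d_model) flattened encoder output.
        B = memory.size(0)
        sq = self.subject_queries.weight.unsqueeze(0).expand(B, -1, -1)
        oq = self.object_queries.weight.unsqueeze(0).expand(B, -1, -1)
        hs = self.decoder(sq, memory)   # subject representations
        ho = self.decoder(oq, memory)   # object representations
        # The predicate head sees both roles of the coupled slot.
        return (self.sub_cls(hs), self.obj_cls(ho),
                self.rel_cls(torch.cat([hs, ho], dim=-1)))

def match_triplets(predicate_logits, gt_predicates):
    """One-to-one matching of predicted to ground-truth triplets.

    predicate_logits: (num_triplets, C) logits for one image.
    gt_predicates:    (num_gt,) ground-truth predicate indices.
    A full cost would also include subject/object classification and
    box terms; this sketch scores the predicate class only.
    """
    prob = predicate_logits.softmax(-1)     # (num_triplets, C)
    cost = -prob[:, gt_predicates]          # (num_triplets, num_gt)
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return list(zip(rows, cols))  # matched (prediction, ground-truth) pairs

Under such a matching, unmatched slots can be supervised toward the extra no-relation class, which is how a fixed-size prediction set can still yield a sparse scene graph.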

Keywords

    One-stage, scene graph generation, scene understanding, visual relationship detection

Cite this

RelTR: Relation Transformer for Scene Graph Generation. / Cong, Yuren; Yang, Michael Ying; Rosenhahn, Bodo.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 9, 01.09.2023, p. 11169-11183.

Research output: Contribution to journal › Article › Research › peer review

Cong Y, Yang MY, Rosenhahn B. RelTR: Relation Transformer for Scene Graph Generation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2023 Sep 1;45(9):11169-11183. Epub 2023 Apr 19. doi: 10.1109/TPAMI.2023.3268066 (preprint doi: 10.48550/arXiv.2201.11460)
Cong, Yuren; Yang, Michael Ying; Rosenhahn, Bodo. / RelTR: Relation Transformer for Scene Graph Generation. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 2023; Vol. 45, No. 9, pp. 11169-11183.
BibTeX
@article{2d231ba3d20240f099e7c2e19588ce61,
title = "RelTR: Relation Transformer for Scene Graph Generation",
abstract = "Different objects in the same scene are more or less related to each other, but only a limited number of these relationships are noteworthy. Inspired by the Detection Transformer (DETR), which excels in object detection, we view scene graph generation as a set prediction problem. In this article, we propose Relation Transformer (RelTR), an end-to-end scene graph generation model with an encoder-decoder architecture. The encoder reasons about the visual feature context, while the decoder infers a fixed-size set of subject-predicate-object triplets using different types of attention mechanisms with coupled subject and object queries. We design a set prediction loss that matches ground-truth and predicted triplets for end-to-end training. In contrast to most existing scene graph generation methods, RelTR is a one-stage method that predicts sparse scene graphs directly, using only visual appearance, without combining entities and labeling all possible predicates. Extensive experiments on the Visual Genome, Open Images V6, and VRD datasets demonstrate the superior performance and fast inference of our model.",
keywords = "One-stage, scene graph generation, scene understanding, visual relationship detection",
author = "Yuren Cong and Yang, {Michael Ying} and Bodo Rosenhahn",
note = "Funding Information: Innovations (ZDIN) and the Deutsche Forschungsgemeinschaft (DFG) under Germany{\textquoteright}s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122).",
year = "2023",
month = sep,
day = "1",
doi = "10.1109/TPAMI.2023.3268066",
language = "English",
volume = "45",
pages = "11169--11183",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "9",

}

RIS

TY - JOUR

T1 - RelTR: Relation Transformer for Scene Graph Generation

AU - Cong, Yuren

AU - Yang, Michael Ying

AU - Rosenhahn, Bodo

N1 - Funding Information: Innovations (ZDIN) and the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122).

PY - 2023/9/1

Y1 - 2023/9/1

N2 - Different objects in the same scene are more or less related to each other, but only a limited number of these relationships are noteworthy. Inspired by the Detection Transformer (DETR), which excels in object detection, we view scene graph generation as a set prediction problem. In this article, we propose Relation Transformer (RelTR), an end-to-end scene graph generation model with an encoder-decoder architecture. The encoder reasons about the visual feature context, while the decoder infers a fixed-size set of subject-predicate-object triplets using different types of attention mechanisms with coupled subject and object queries. We design a set prediction loss that matches ground-truth and predicted triplets for end-to-end training. In contrast to most existing scene graph generation methods, RelTR is a one-stage method that predicts sparse scene graphs directly, using only visual appearance, without combining entities and labeling all possible predicates. Extensive experiments on the Visual Genome, Open Images V6, and VRD datasets demonstrate the superior performance and fast inference of our model.

AB - Different objects in the same scene are more or less related to each other, but only a limited number of these relationships are noteworthy. Inspired by the Detection Transformer (DETR), which excels in object detection, we view scene graph generation as a set prediction problem. In this article, we propose Relation Transformer (RelTR), an end-to-end scene graph generation model with an encoder-decoder architecture. The encoder reasons about the visual feature context, while the decoder infers a fixed-size set of subject-predicate-object triplets using different types of attention mechanisms with coupled subject and object queries. We design a set prediction loss that matches ground-truth and predicted triplets for end-to-end training. In contrast to most existing scene graph generation methods, RelTR is a one-stage method that predicts sparse scene graphs directly, using only visual appearance, without combining entities and labeling all possible predicates. Extensive experiments on the Visual Genome, Open Images V6, and VRD datasets demonstrate the superior performance and fast inference of our model.

KW - One-stage

KW - scene graph generation

KW - scene understanding

KW - visual relationship detection

UR - http://www.scopus.com/inward/record.url?scp=85153484290&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2023.3268066

DO - 10.1109/TPAMI.2023.3268066

M3 - Article

VL - 45

SP - 11169

EP - 11183

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 9

ER -