Multi-Vehicle Multi-Camera Tracking with Graph-Based Tracklet Features

Research output: Contribution to journal › Article › Research › Peer review

Authors

  • Tuan T. Nguyen
  • Hoang H. Nguyen
  • Mina Sartipi
  • Marco Fisichella

Research Organisations

External Research Organisations

  • University of Tennessee, Chattanooga

Details

Original language: English
Pages (from-to): 972-983
Number of pages: 12
Journal: IEEE transactions on multimedia
Volume: 26
Early online date: 8 May 2023
Publication status: Published - 2024

Abstract

Multi-target multi-camera tracking (MTMCT) is an important application in intelligent transportation systems (ITS). Conventional works follow the tracking-by-detection scheme and use the information from each object image separately when matching objects across different cameras. As a result, the association information between object images is lost. To exploit this information, we propose an efficient MTMCT approach that builds features in the form of a graph and customizes graph similarity to match vehicle objects across cameras. We present algorithms for both the online scenario, where only past images are used to match a vehicle object, and the offline scenario, where a given vehicle object is tracked using both past and future images. In the offline scenario, our method achieves an IDF1 score of 0.8166 on the CityFlow dataset, which contains real city scenes captured by multiple street cameras. In the online scenario, our method achieves an IDF1 score of 0.75 at 14 FPS.
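
The abstract describes the method only at a high level: each vehicle tracklet is represented as a graph built from its per-frame information, and tracklets from different cameras are associated by a customized graph similarity. The minimal Python sketch below is not the authors' implementation; it only illustrates that general pattern under simplifying assumptions: chain-structured tracklet graphs over per-frame appearance vectors, a mean-cosine stand-in for the learned graph similarity, and Hungarian assignment for cross-camera matching. All names and parameters are invented for illustration.

# Hedged sketch (NOT the paper's implementation): cross-camera tracklet
# association via graph-style tracklet features. The similarity definition,
# thresholds, and function names are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def tracklet_graph(appearance_feats):
    """Build a toy tracklet 'graph': nodes are per-frame appearance vectors,
    edges form a temporal chain between consecutive frames."""
    nodes = np.asarray(appearance_feats, dtype=float)      # shape (T, D)
    edges = [(t, t + 1) for t in range(len(nodes) - 1)]    # temporal chain
    return nodes, edges


def graph_similarity(graph_a, graph_b):
    """Assumed stand-in for a learned graph similarity: cosine similarity of
    the mean node features (edges are ignored in this toy version)."""
    a = graph_a[0].mean(axis=0)
    b = graph_b[0].mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def match_across_cameras(tracklets_cam1, tracklets_cam2, min_sim=0.5):
    """Associate tracklets from two cameras by maximizing total graph
    similarity (Hungarian assignment), keeping only confident pairs."""
    sim = np.array([[graph_similarity(a, b) for b in tracklets_cam2]
                    for a in tracklets_cam1])
    rows, cols = linear_sum_assignment(-sim)               # maximize similarity
    return [(r, c, sim[r, c]) for r, c in zip(rows, cols) if sim[r, c] >= min_sim]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cam1 = [tracklet_graph(rng.normal(size=(5, 8))) for _ in range(3)]
    cam2 = [tracklet_graph(rng.normal(size=(4, 8))) for _ in range(3)]
    print(match_across_cameras(cam1, cam2, min_sim=-1.0))  # toy data: keep all pairs

For context, the IDF1 scores quoted in the abstract follow the standard identity-aware F1 definition, IDF1 = 2·IDTP / (2·IDTP + IDFP + IDFN), where IDTP, IDFP, and IDFN are identity-matched true positives, false positives, and false negatives.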

Keywords

    Cameras, Feature extraction, Graph neural networks, ITS, MTMCT, Multi-camera tracking, Object detection, Predictive models, Target tracking, Trajectory, Vehicle tracking

Cite this

Multi-Vehicle Multi-Camera Tracking with Graph-Based Tracklet Features. / Nguyen, Tuan T.; Nguyen, Hoang H.; Sartipi, Mina et al.
In: IEEE transactions on multimedia, Vol. 26, 2024, p. 972-983.

Nguyen TT, Nguyen HH, Sartipi M, Fisichella M. Multi-Vehicle Multi-Camera Tracking with Graph-Based Tracklet Features. IEEE transactions on multimedia. 2024;26:972-983. Epub 2023 May 8. doi: 10.1109/TMM.2023.3274369
Nguyen, Tuan T. ; Nguyen, Hoang H. ; Sartipi, Mina et al. / Multi-Vehicle Multi-Camera Tracking with Graph-Based Tracklet Features. In: IEEE transactions on multimedia. 2024 ; Vol. 26. pp. 972-983.
BibTeX
@article{50261600f455438e9c4982eb14c9f1e5,
title = "Multi-Vehicle Multi-Camera Tracking with Graph-Based Tracklet Features",
abstract = "Multi-target multi-camera tracking (MTMCT) is an important application in intelligent transportation systems (ITS). The conventional works follow the tracking-by-detection scheme and use the information of the object image separately while matching the object from different cameras. As a result, the association information from the object image is lost. To utilize this information, we propose an efficient MTMCT application that builds features in the form of a graph and customizes graph similarity to match the vehicle objects from different cameras. We present algorithms for both the online scenario, where only the past images are used to match a vehicle object, and the offline scenario, where a given vehicle object is tracked with past and future images. For offline scenarios, our method achieves an IDF1-score of 0.8166 on the Cityflow dataset, which contains the actual scenes of the city from multiple street cameras. For online scenarios, our method achieves an IDF1-score of 0.75 with an FPS of 14.",
keywords = "Cameras, Feature extraction, Graph neural networks, ITS, MTMCT, Multi-Camera Tracking, Object detection, Predictive models, Target tracking, Trajectory, Vehicle Tracking, multi-camera tracking, vehicle tracking",
author = "Nguyen, {Tuan T.} and Nguyen, {Hoang H.} and Mina Sartipi and Marco Fisichella",
note = "ACKNOWLEDGMENT This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes anywarranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. All authors contributed equally in providing critical feedback and helping shape the research, analysis, planning, and development of the evaluation and manuscript. M.F. conceived the original idea and experimental settings and directed the project. T.T.N. and H.H.N. developed the framework and performed the experiments for the evaluation. M.F. and M.S. verified the analytical methods",
year = "2024",
doi = "10.1109/TMM.2023.3274369",
language = "English",
volume = "26",
pages = "972--983",
journal = "IEEE transactions on multimedia",
issn = "1520-9210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

RIS

TY - JOUR

T1 - Multi-Vehicle Multi-Camera Tracking with Graph-Based Tracklet Features

AU - Nguyen, Tuan T.

AU - Nguyen, Hoang H.

AU - Sartipi, Mina

AU - Fisichella, Marco

N1 - ACKNOWLEDGMENT This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. All authors contributed equally in providing critical feedback and helping shape the research, analysis, planning, and development of the evaluation and manuscript. M.F. conceived the original idea and experimental settings and directed the project. T.T.N. and H.H.N. developed the framework and performed the experiments for the evaluation. M.F. and M.S. verified the analytical methods.

PY - 2024

Y1 - 2024

N2 - Multi-target multi-camera tracking (MTMCT) is an important application in intelligent transportation systems (ITS). The conventional works follow the tracking-by-detection scheme and use the information of the object image separately while matching the object from different cameras. As a result, the association information from the object image is lost. To utilize this information, we propose an efficient MTMCT application that builds features in the form of a graph and customizes graph similarity to match the vehicle objects from different cameras. We present algorithms for both the online scenario, where only the past images are used to match a vehicle object, and the offline scenario, where a given vehicle object is tracked with past and future images. For offline scenarios, our method achieves an IDF1-score of 0.8166 on the Cityflow dataset, which contains the actual scenes of the city from multiple street cameras. For online scenarios, our method achieves an IDF1-score of 0.75 with an FPS of 14.

AB - Multi-target multi-camera tracking (MTMCT) is an important application in intelligent transportation systems (ITS). The conventional works follow the tracking-by-detection scheme and use the information of the object image separately while matching the object from different cameras. As a result, the association information from the object image is lost. To utilize this information, we propose an efficient MTMCT application that builds features in the form of a graph and customizes graph similarity to match the vehicle objects from different cameras. We present algorithms for both the online scenario, where only the past images are used to match a vehicle object, and the offline scenario, where a given vehicle object is tracked with past and future images. For offline scenarios, our method achieves an IDF1-score of 0.8166 on the Cityflow dataset, which contains the actual scenes of the city from multiple street cameras. For online scenarios, our method achieves an IDF1-score of 0.75 with an FPS of 14.

KW - Cameras

KW - Feature extraction

KW - Graph neural networks

KW - ITS

KW - MTMCT

KW - Multi-Camera Tracking

KW - Object detection

KW - Predictive models

KW - Target tracking

KW - Trajectory

KW - Vehicle Tracking

KW - multi-camera tracking

KW - vehicle tracking

UR - http://www.scopus.com/inward/record.url?scp=85159826656&partnerID=8YFLogxK

U2 - 10.1109/TMM.2023.3274369

DO - 10.1109/TMM.2023.3274369

M3 - Article

AN - SCOPUS:85159826656

VL - 26

SP - 972

EP - 983

JO - IEEE transactions on multimedia

JF - IEEE transactions on multimedia

SN - 1520-9210

ER -
