Object detection and tracking with decoupled DeepSORT
based on αβ filter
Dublin Core
Title
Object detection and tracking with decoupled DeepSORT
based on αβ filter
Subject
Deep learning
DeepSORT
Higher order tracking accuracy
Object detection
Object tracking
Video surveillance
Description
With the rapid growth of the population, the demand for autonomous video surveillance systems has increased substantially. Recently, artificial intelligence has played a key role in the development of such systems. In this paper, we present an enhanced autonomous system for object detection and tracking in video streams, tailored for transportation and video surveillance applications. The system comprises two main stages. The detection stage employs You Only Look Once (YOLO)v8m, trained on the KITTI dataset and configured to detect only pedestrians and cars; the model achieves an average precision of 97.3% for the car class and 87.1% for the pedestrian class, resulting in a final mean average precision (mAP) of 92.2%. The tracking stage uses the DeepSORT algorithm, which originally incorporates a Kalman filter for motion prediction and performs data association using cosine and Mahalanobis distances to maintain consistent object identifiers across frames. To improve tracking performance, we introduce two key modifications to the original DeepSORT: an architecture modification and a replacement of the Kalman filter with an αβ filter. Tracking tests are carried out on the KITTI and MOTChallenge benchmarks. The higher order tracking accuracy (HOTA) scores reach 77.645 for the car class and 54.019 for the pedestrian class on the KITTI benchmark, and 45.436 for the pedestrian class on the MOTChallenge benchmark.
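The paper's headline change is swapping DeepSORT's Kalman filter for an αβ filter. As a general illustration of that technique only (not the authors' implementation), the sketch below shows a minimal one-dimensional αβ tracker in Python; the class name, gain values, and the idea of filtering each bounding-box coordinate independently are assumptions made for this example.

```python
class AlphaBetaFilter:
    """Minimal 1-D alpha-beta filter: a fixed-gain simplification of the
    Kalman filter that tracks a position and its velocity."""

    def __init__(self, alpha=0.85, beta=0.005, dt=1.0):
        self.alpha = alpha   # position correction gain (fixed, not computed)
        self.beta = beta     # velocity correction gain (fixed, not computed)
        self.dt = dt         # time step between frames
        self.x = 0.0         # estimated position
        self.v = 0.0         # estimated velocity

    def update(self, z):
        # Predict with a constant-velocity motion model
        x_pred = self.x + self.v * self.dt
        # Residual between the new measurement and the prediction
        r = z - x_pred
        # Correct the state using the fixed gains
        self.x = x_pred + self.alpha * r
        self.v = self.v + (self.beta / self.dt) * r
        return self.x, self.v


# Example: smooth a noisy sequence of bounding-box centre-x measurements
f = AlphaBetaFilter()
for z in [100.0, 103.2, 106.1, 109.4]:
    x_est, v_est = f.update(z)
```

In a DeepSORT-style tracker, one such filter per state component (for example box centre, aspect ratio, and height) could stand in for the Kalman predict/update cycle, trading the optimal gain computation for fixed gains and a lower computational cost.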
Creator
Lakhdar Djelloul Mazouz, Abdessamad Kaddour Trea, Tarek Amiour, Abdelaziz Ouamri
Source
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
Date
Oct 19, 2025
Contributor
PERI IRAWAN
Format
PDF
Language
ENGLISH
Type
TEXT
Files
Collection
Citation
Lakhdar Djelloul Mazouz, Abdessamad Kaddour Trea, Tarek Amiour, Abdelaziz Ouamri, “Object detection and tracking with decoupled DeepSORT
based on αβ filter,” Repository Horizon University Indonesia, accessed January 12, 2026, https://repository.horizon.ac.id/items/show/10397.