Convolutional neural network-based real-time drowsy driver detection for accident prevention
Dublin Core
Title
Convolutional neural network-based real-time drowsy driver detection for accident prevention
Subject
Convolutional neural network
Deep learning
Driver
Drowsiness
Lightweight convolutional neural network
Description
Drowsy driving significantly threatens road safety and contributes to many accidents globally. This paper presents a convolutional neural network (CNN)-based real-time drowsy driver detection system aimed at preventing such accidents, particularly for deployment in Android applications. We propose a lightweight CNN architecture that identifies drowsiness and microsleep episodes by classifying driver facial expressions into four categories: closed-eye expressions, open-eye expressions, yawning, and no yawning. The model employs facial landmark detection and several pre-processing techniques to enhance accuracy, and it achieves 96.6% accuracy, surpassing several popular CNN architectures, including VGG16, VGG19, MobileNetV2, ResNet50, and DenseNet121. Notably, the proposed model is highly efficient, with only 0.4 million parameters and a memory footprint of 1.51 MB, making it well suited to real-time applications. The comparative analysis highlights the model's superior balance between accuracy and resource efficiency, demonstrating its potential for practical deployment in reducing accidents caused by driver fatigue.
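As a rough illustration of the kind of lightweight four-class classifier the abstract describes, the sketch below builds a small Keras CNN with only a few hundred thousand parameters. The layer sizes, the 64x64 grayscale input, and the class ordering (closed eyes, open eyes, yawn, no yawn) are assumptions made for illustration only; they are not the authors' published architecture or pre-processing pipeline.

```python
# Minimal sketch of a lightweight 4-class drowsiness classifier.
# Assumption: 64x64 grayscale crops of the driver's face, already
# pre-processed (e.g., via facial landmark detection) upstream.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lightweight_cnn(input_shape=(64, 64, 1), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),   # keeps the parameter count small
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lightweight_cnn()
model.summary()  # total parameters stay well under one million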
Creator
Nippon Datta, Tanjim Mahmud, Manoara Begum, Mohammad Tarek Aziz, Dilshad Islam, Md. Faisal Bin Abdul Aziz, Khudaybergen Kochkarov, Temur Eshchanov, Valisher Sapayev Odilbek Uglu, Sobir Parmanov, Mohammad Shahadat Hossain, Karl Andersson
Source
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
Date
Mar 11, 2025
Contributor
PERI IRAWAN
Format
PDF
Language
ENGLISH
Type
TEXT
Files
Collection
Citation
Nippon Datta, Tanjim Mahmud, Manoara Begum, Mohammad Tarek Aziz, Dilshad Islam, Md. Faisal Bin Abdul Aziz, Khudaybergen Kochkarov, Temur Eshchanov, Valisher Sapayev Odilbek Uglu, Sobir Parmanov, Mohammad Shahadat Hossain, Karl Andersson, “Convolutional neural network-based real-time drowsy driver detection for accident prevention,” Repository Horizon University Indonesia, accessed February 3, 2026, https://repository.horizon.ac.id/items/show/10040.