Ensemble of Convolution Neural Networks for Driver Smartphone Usage Detection Using Multiple Cameras

  • Zhang, Ziyi (School of Mechanical Engineering, Kyungpook National University)
  • Kang, Bo-Yeong (School of Mechanical Engineering, Kyungpook National University)
  • Received : 2020.03.13
  • Accepted : 2020.06.09
  • Published : 2020.06.30

Abstract

Approximately 1.3 million people die in traffic accidents each year, and smartphone use while driving is one of the main causes of such accidents. Detecting driver smartphone usage has therefore become an important part of distracted-driving detection. Previous studies have collected driver images with a single camera; however, single-camera detection can fail when the driver occludes the phone. In this paper, we present a driver smartphone usage detection system that uses multiple cameras to capture the driver from different perspectives and processes the resulting images with an ensemble of convolutional neural networks. The ensemble comprises three individual convolutional neural networks combined by a simple voting scheme: each network classifies the image from one camera perspective, and the voting mechanism selects the final classification. Experimental results verified that the proposed method avoided the limitations observed in single-camera methods and achieved 98.96% accuracy on our dataset.
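The fusion step described above (three per-view networks combined by a simple vote) can be sketched as follows. This is a minimal illustration only, assuming a PyTorch implementation with ResNet-18 backbones and a binary phone/no-phone output; the backbone choice, the three-view setup, and the class count are assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of a multi-camera ensemble: three per-view CNN classifiers
# whose per-image predictions are fused by majority voting.
# Backbone (ResNet-18) and two-class output are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class MultiCameraEnsemble(nn.Module):
    """Three view-specific CNNs; the final label is the majority vote."""

    def __init__(self, num_classes: int = 2, num_views: int = 3):
        super().__init__()
        # One backbone per camera perspective.
        self.views = nn.ModuleList(
            [self._make_backbone(num_classes) for _ in range(num_views)]
        )

    @staticmethod
    def _make_backbone(num_classes: int) -> nn.Module:
        net = models.resnet18(weights=None)  # hypothetical backbone choice
        net.fc = nn.Linear(net.fc.in_features, num_classes)
        return net

    def forward(self, images: list[torch.Tensor]) -> torch.Tensor:
        # images: one tensor of shape (batch, 3, H, W) per camera view.
        votes = torch.stack(
            [net(x).argmax(dim=1) for net, x in zip(self.views, images)],
            dim=1,
        )  # shape: (batch, num_views)
        # Simple majority vote across the per-view predictions.
        return votes.mode(dim=1).values  # shape: (batch,)


if __name__ == "__main__":
    model = MultiCameraEnsemble(num_classes=2)
    dummy = [torch.randn(4, 3, 224, 224) for _ in range(3)]
    print(model(dummy))  # predicted class index per sample
```

With three voters and two classes, a tie is impossible, so the simple mode over the per-view predictions always yields a single final label; each network can be trained separately on images from its own camera before the vote is applied at inference time.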
