Driver Assistance System for Integrated Interpretation of Driver's Gaze and Selective Attention Model

  • Received : 2016.03.22
  • Accepted : 2016.06.10
  • Published : 2016.06.30

Abstract

This paper proposes a system that estimates the driver's cognitive state by integrating information from inside and outside the vehicle and uses it to support safe driving. Two web cameras are installed in the vehicle, one capturing the scene ahead and one capturing the driver. The driver's gaze information and the scene information extracted from the external camera are fused on the basis of mutual information. For the external view, we propose a selective attention model based on Gestalt principles; the resulting saliency map (SM) emphasizes important external stimuli such as traffic lights. For the internal view, facial features are used to estimate where the driver's attention is directed and to detect the region the driver is gazing at. To this end, the driver's facial features are detected in real time: a modified census transform (MCT) based Adaboost algorithm detects the driver's face, and the POSIT (POS with ITerations) algorithm estimates the head orientation and gaze direction in 3D space. Experimental results confirm that the proposed system identifies, in real time, both the region the driver is looking at and driving-relevant information such as traffic lights, and that it can be applied effectively to driver assistance systems.
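
To make the pipeline described above concrete, the following is a minimal sketch of the fusion idea only, not the authors' implementation: OpenCV's spectral-residual saliency stands in for the Gestalt-principle saliency map, a Haar-cascade face detector stands in for the MCT-based Adaboost detector, and a fixed fixation point stands in for the gaze obtained from POSIT head-pose estimation. File names such as road_scene.jpg and driver_face.jpg are placeholders, and opencv-contrib-python is assumed for the saliency module.

```python
# Hypothetical sketch: compare a bottom-up saliency map of the road scene with a
# top-down "gaze map" around the driver's estimated fixation, and score their
# agreement with mutual information. Stand-ins, NOT the paper's exact components:
#   spectral-residual saliency  ~ Gestalt-principle saliency map (SM)
#   Haar-cascade face detector  ~ MCT-based Adaboost face detector
#   fixed fixation point        ~ gaze from POSIT 3D head-pose estimation
import cv2
import numpy as np


def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two maps scaled to [0, 1]."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, range=[[0, 1], [0, 1]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))


def gaze_map(shape, fixation, sigma=40.0):
    """Gaussian blob centred on the estimated fixation point (x, y)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - fixation[0]) ** 2 + (ys - fixation[1]) ** 2) / (2 * sigma ** 2))
    return g / g.max()


# External camera: bottom-up saliency of the road scene.
road = cv2.imread("road_scene.jpg")                                   # placeholder file
sal_algo = cv2.saliency.StaticSaliencySpectralResidual_create()      # needs opencv-contrib
ok, sm = sal_algo.computeSaliency(road)
sm = cv2.normalize(sm.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)

# Internal camera: face detection as a proxy for the driver-monitoring side.
driver = cv2.imread("driver_face.jpg")                                # placeholder file
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(cv2.cvtColor(driver, cv2.COLOR_BGR2GRAY), 1.1, 5)

# In the paper the face features feed POSIT for 3D head pose; here we simply
# assume the fixation point mapped onto the road image is already known.
fixation = (road.shape[1] // 2, road.shape[0] // 3)                   # assumed (x, y)
gm = gaze_map(sm.shape, fixation)

# Fusion: how well does the driver's gaze cover the salient regions?
score = mutual_information(sm, gm)
print(f"faces detected: {len(faces)}, gaze/saliency mutual information: {score:.3f} bits")
```

Under these assumptions, a higher mutual-information score indicates that the driver's gaze map covers the currently salient regions of the road scene (for example, a traffic light), which is the kind of agreement the proposed system evaluates.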

Keywords

References

  1. L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Anal. Mach. Intell., Vol. 20, No. 11, pp. 1254-1259, 1998. https://doi.org/10.1109/34.730558
  2. M. Wertheimer, "Untersuchungen zur Lehre von der Gestalt," Psychological Research, Vol. 4, No. 1, pp. 301-350, 1923. https://doi.org/10.1007/BF00410640
  3. B. Froba and A. Ernst, "Face Detection with the Modified Census Transform," Proc. 6th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 91-96, 2004.
  4. D. F. DeMenthon and L. S. Davis, "Model-based object pose in 25 lines of code," Computer Vision - ECCV'92, Springer Berlin Heidelberg, pp. 335-343, 1992.
  5. F. Friedrichs and B. Yang, "Camera-based drowsiness reference for driver state classification under real driving conditions," IEEE Intell. Veh. Symp., pp. 101-106, 2010.
  6. T. Brandt, R. Stemmer, and A. Rakotonirainy, "Affordable visual driver monitoring system for fatigue and monotony," IEEE Int. Conf. Syst., Man, Cybern., Vol. 7, pp. 6451-6456, 2004.
  7. X. H. Sun, L. Xu, and J. Y. Yang, "Driver fatigue alarm based on eye detection and gaze estimation," MIPPR: Automatic Target Recognition and Image Analysis; and Multispectral Image Acquisition, p. 678612, 2007.
  8. M. Suzuki, N. Yamamoto, O. Yamamoto, T. Nakano, and S. Yamamoto, "Measurement of driver's consciousness by image processing - A method for presuming driver's drowsiness by eye-blinks coping with individual differences," IEEE Int. Conf. Syst., Man, Cybern., Vol. 4, pp. 2891-2896, 2006.
  9. L. Bergasa, J. Nuevo, M. Sotelo, R. Barea, and E. Lopez, "Real-time system for monitoring driver vigilance," IEEE Trans. Intell. Transp. Syst., Vol. 7, No. 1, pp. 63-77, Mar. 2006. https://doi.org/10.1109/TITS.2006.869598
  10. H.-S. Cho and H.-S. Kim, "Real Time Eye and Gaze Tracking," Journal of the Korea Academia-Industrial cooperation Society, Vol. 6, No. 2, pp. 195-201, 2005.
  11. R. Senaratne, D. Hardy, B. Vander, and S. Halgamuge, "Driver fatigue detection by fusing multiple cues," Proc. 4th Int. Symp. Neural Netw., Lecture Notes in Computer Science, Vol. 4492, pp. 801-809, 2007.
  12. R. O. Mbouna, S. G. Kong, and M.-G. Chun, "Visual analysis of eye state and head pose for driver alertness monitoring," IEEE Trans. Intell. Transp. Syst., Vol. 14, No. 3, pp. 1462-1469, 2013. https://doi.org/10.1109/TITS.2013.2262098
  13. T. D'Orazio, et al., "A visual approach for driver inattention detection," Pattern Recognition, Vol. 40, No. 8, pp. 2341-2355, 2007. https://doi.org/10.1016/j.patcog.2007.01.018
  14. B.-H. Oh, K.-W. Chung, and K.-S. Hong, "Gaze Recognition System using Random Forests in Vehicular Environment based on Smart-Phone," The Journal of The Institute of Internet, Broadcasting and Communication (JIIBC), Vol. 15, No. 1, pp. 191-197, 2015. https://doi.org/10.7236/JIIBC.2015.15.1.191
  15. S. Jeong, S.-W. Ban, and M. Lee, "Stereo saliency map considering affective factors and selective motion analysis in a dynamic environment," Neural Networks, Vol. 21, No. 10, pp. 1420-1430, 2008. https://doi.org/10.1016/j.neunet.2008.10.002
  16. K. Fukushima, "Use of non-uniform spatial blur for image comparison: symmetry axis extraction," Neural Networks, Vol. 18, pp. 23-32, 2005. https://doi.org/10.1016/j.neunet.2004.08.001
  17. H. Wang, et al., "Self quotient image for face recognition," Proc. 2004 International Conference on Image Processing (ICIP'04), Vol. 2, IEEE, 2004.