http://dx.doi.org/10.5391/JKIIS.2008.18.6.754

Development of Driver's Emotion and Attention Recognition System using Multi-modal Sensor Fusion Algorithm  

Han, Cheol-Hun (School of Electrical and Electronics Engineering, Chung-Ang University)
Sim, Kwee-Bo (School of Electrical and Electronics Engineering, Chung-Ang University)
Publication Information
Journal of the Korean Institute of Intelligent Systems, vol. 18, no. 6, pp. 754-761, 2008
Abstract
As the automobile industry and its technologies develop, drivers tend to be more concerned with service features than with mechanical ones. For this reason, interest in recognizing human cognition and emotion in order to provide a safe and convenient driving environment is steadily increasing. Such recognition belongs to emotion engineering, a field studied since the late 1980s to provide people with human-friendly services. Emotion engineering analyzes a person's emotional state from the face, voice, and gestures; applied to the automobile, it can supply drivers with services suited to each driver's situation and help them drive safely. Furthermore, by recognizing the driver's state, it can help prevent accidents caused by careless or drowsy driving. The purpose of this paper is to develop a system that recognizes the driver's emotional and attentional states for safe driving. First, we detect signals related to the driver's emotion, sleepiness, and attention using bio-signals and motion signals, and build several types of databases. By analyzing these databases, we extract characteristic features of the driver's emotion, sleepiness, and attention, and fuse the per-modality results with a multi-modal method to build the system.
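The paper provides no source code; the sketch below is only a minimal Python illustration of the kind of decision-level multi-modal fusion the abstract describes: per-modality classifiers (for example, a PCA-based facial classifier and a bio-signal classifier) each output class probabilities, which are combined by a weighted average. The label set, weights, and probability values are hypothetical and not taken from the paper.

# Minimal sketch (hypothetical, not the authors' code): each modality yields a
# class probability vector, and the fused estimate is a confidence-weighted
# average of those vectors.
import numpy as np

STATES = ["neutral", "happy", "angry", "drowsy"]  # illustrative label set

def fuse_decisions(modality_probs, weights):
    """Weighted sum of per-modality probability vectors, renormalized."""
    fused = np.zeros(len(STATES))
    for probs, w in zip(modality_probs, weights):
        fused += w * np.asarray(probs)
    return fused / fused.sum()

# Example: the face-based classifier is fairly confident in "drowsy",
# the bio-signal classifier less so.
face_probs = [0.10, 0.05, 0.05, 0.80]
bio_probs  = [0.30, 0.10, 0.20, 0.40]
fused = fuse_decisions([face_probs, bio_probs], weights=[0.6, 0.4])
print(STATES[int(np.argmax(fused))])  # -> "drowsy"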
Keywords
Multi-Modal Sensor Fusion; PCA; AdaBoost;
Citations & Related Records
Times Cited By KSCI : 2  (Citation Analysis)
연도 인용수 순위