• Title/Summary/Keyword: facial expression recognition technology

67 search results

Enhancing e-Learning Interactivity via Emotion Recognition Computing Technology (감성 인식 컴퓨팅 기술을 적용한 이러닝 상호작용 기술 연구)

  • Park, Jung-Hyun;Kim, InOk;Jung, SangMok;Song, Ki-Sang;Kim, JongBaek
    • The Journal of Korean Association of Computer Education
    • /
    • v.11 no.2
    • /
    • pp.89-98
    • /
    • 2008
  • Providing appropriate interaction between the learner and the e-Learning system is an essential factor in a successful e-Learning system. Although many interaction functions are employed in multimedia Web-based Instruction content, learners experience a lack of the real-time feedback an educator would give, due to the limitations of Human-Computer Interaction techniques. In this paper, an emotion recognition system based on learner facial expressions has been developed and applied to a tutoring system. As a human educator does, the system observes learners' emotions from their facial expressions and provides pertinent feedback. Such varied feedback can motivate learners and relieve the sense of isolation of studying alone in an e-Learning environment. The test results showed that this system may provide significant improvement in terms of interest and educational achievement.
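The feedback loop this abstract describes can be sketched as a simple mapping from a recognized emotion to a tutoring action. The emotion labels and feedback messages below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: map a recognized learner emotion to tutoring
# feedback, as a human educator would. Labels/messages are invented.
FEEDBACK_RULES = {
    "bored":      "Switch to an interactive quiz to re-engage the learner.",
    "confused":   "Show a worked example and slow the presentation pace.",
    "frustrated": "Offer a hint and an encouraging message.",
    "engaged":    "Continue the current lesson flow.",
}

def select_feedback(emotion: str) -> str:
    """Return tutoring feedback for a recognized facial emotion."""
    return FEEDBACK_RULES.get(emotion, "Ask the learner if help is needed.")
```

In a real tutoring system the key would come from the facial-expression recognizer's output label rather than a hard-coded string.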


A New 3D Active Camera System for Robust Face Recognition by Correcting Pose Variation

  • Kim, Young-Ouk;Jang, Sung-Ho;Park, Chang-Woo;Sung, Ha-Gyeong;Kwon, Oh-Yun;Paik, Joon-Ki
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference (제어로봇시스템학회 학술대회논문집)
    • /
    • 2004.08a
    • /
    • pp.1485-1490
    • /
    • 2004
  • Recently, there have been remarkable developments in intelligent robot systems. Notable features of an intelligent robot are that it can track a user and recognize faces, capabilities vital for many surveillance-based systems. An advantage of face recognition over other biometrics is that the coerciveness and physical contact usually required when acquiring biometric characteristics are absent. However, the accuracy of face recognition is lower than that of other biometrics, due to the loss of dimension in the image acquisition step and the many variations in face pose and background. Many factors deteriorate face recognition performance, such as the distance from camera to face, lighting change, pose change, and change of facial expression. In this paper, we implement a new 3D active camera system to prevent the pose variations that degrade face recognition performance, and propose a face recognition algorithm for intelligent surveillance systems and mobile robot systems.
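The core of an active camera system is steering the camera so the face stays centered, which reduces pose variation before recognition. A minimal sketch of that correction step, with assumed frame size and field-of-view values (the paper's actual control law is not given in the abstract):

```python
# Illustrative sketch (not the paper's algorithm): compute pan/tilt
# commands that re-center a detected face in the frame.
from typing import Tuple

def center_face(face_center: Tuple[int, int],
                frame_size: Tuple[int, int] = (640, 480),
                fov_deg: Tuple[float, float] = (60.0, 45.0)) -> Tuple[float, float]:
    """Return (pan, tilt) in degrees that moves the face toward frame center."""
    (fx, fy), (w, h), (hfov, vfov) = face_center, frame_size, fov_deg
    pan = (fx - w / 2) / w * hfov   # positive -> rotate camera right
    tilt = (fy - h / 2) / h * vfov  # positive -> rotate camera down
    return pan, tilt
```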


Driver's Face Detection Using Space-time Restrained Adaboost Method

  • Liu, Tong;Xie, Jianbin;Yan, Wei;Li, Peiqin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.9
    • /
    • pp.2341-2350
    • /
    • 2012
  • Face detection is the first step of vision-based driver fatigue detection. Traditional face detection methods suffer from high false-detection rates and long detection times. This paper presents a space-time restrained Adaboost method that resolves these problems. Firstly, the possible position of a driver's face in a video frame is estimated relative to the previous frame. Secondly, a space-time restriction strategy is designed to constrain the detection window and scale of the Adaboost method, reducing both the time consumption and the false detections of face detection. Finally, a face-knowledge restriction strategy is designed to verify the faces detected by the Adaboost method. Comparative experiments confirm that a driver's face can be detected rapidly and precisely.
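The space restriction idea can be sketched as follows: instead of scanning the whole frame, the Adaboost detector searches only a window around the face found in the previous frame. The margin factor and the (x, y, w, h) box format are assumptions for illustration:

```python
# Minimal sketch of the space restriction step: expand the previous
# frame's face box by a margin and clip it to the frame, then run the
# Adaboost cascade only inside this window.
def restrict_search_window(prev_box, frame_w, frame_h, margin=0.5):
    """Expand the previous face box by `margin` and clip to the frame."""
    x, y, w, h = prev_box
    dx, dy = int(w * margin), int(h * margin)
    nx, ny = max(0, x - dx), max(0, y - dy)
    nw = min(frame_w, x + w + dx) - nx
    nh = min(frame_h, y + h + dy) - ny
    return nx, ny, nw, nh
```

Because the detector scans a much smaller region at a constrained scale, both detection time and background false positives drop, which matches the motivation stated in the abstract.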

Facial Expression Recognition using Model-based Feature Extraction in Image Sequence (동영상에서의 모델기반 특징추출을 이용한 얼굴 표정인식)

  • Park Mi-Ae;Choi Sung-In;Im Don-Gak;Ko Je-Pil
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.06b
    • /
    • pp.343-345
    • /
    • 2006
  • This paper proposes a method for recognizing facial expressions from video using an ASM (Active Shape Model) and a state-based model. The ASM is fitted to the facial feature points of each input image, and the shape parameter vector generated in that process is extracted. The set of shape parameter vectors extracted from the video is converted into a state vector taking one of three states, and the facial expression is recognized by a classifier. In the classification step, a new instance-based learning method is proposed to improve classification performance. Experiments show that the newly proposed instance-based learning method achieves a better recognition rate than a KNN classifier.
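The conversion of per-frame shape parameters into one of three states can be sketched as a simple quantization. The thresholds and state names below are illustrative assumptions; the paper does not specify them in the abstract:

```python
# Hedged sketch of the state-based step: each frame's ASM shape-parameter
# magnitude is quantized into one of three states, and the per-frame
# state sequence summarizes the expression. Thresholds are invented.
def to_state(magnitude: float, low: float = 0.3, high: float = 0.7) -> str:
    """Quantize a shape-parameter magnitude into one of three states."""
    if magnitude < low:
        return "neutral"
    return "onset" if magnitude < high else "apex"

def sequence_states(magnitudes):
    """Convert a sequence of magnitudes into a state sequence."""
    return [to_state(m) for m in magnitudes]
```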


A Study on Face Recognition Method based on Binary Pattern Image under Varying Lighting Condition (조명 변화 환경에서 이진패턴 영상을 이용한 얼굴인식 방법에 관한 연구)

  • Kim, Dong-Ju;Sohn, Myoung-Kyu;Lee, Sang-Heon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.2
    • /
    • pp.61-74
    • /
    • 2012
  • In this paper, we propose an illumination-robust face recognition system using MCS-LBP and the 2D-PCA algorithm. Binary pattern transforms, which have been used in face recognition and facial expression analysis, are characteristically robust to illumination. Accordingly, this paper proposes MCS-LBP, which is more robust to illumination than the conventional LBP, and a face recognition system that fuses it with the 2D-PCA algorithm. The performance of the proposed system was evaluated against various binary pattern images and well-known face recognition features such as PCA, LDA, 2D-PCA, and the ULBP histogram of Gabor images. The evaluation used the YaleB, extended YaleB, and CMU-PIE face databases, all constructed under varying lighting conditions, and the proposed system, which combines the MCS-LBP image with the 2D-PCA feature, showed the best recognition accuracy.
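The abstract does not give the MCS-LBP details, so the sketch below shows the ordinary 3x3 LBP transform it builds on: each pixel is replaced by an 8-bit code comparing its neighbors with the center, which is what makes binary pattern images robust to monotonic lighting change:

```python
# Plain 3x3 LBP code (the baseline that MCS-LBP extends, per the abstract).
def lbp_code(patch):
    """8-bit LBP code for a 3x3 patch given as 3 rows of 3 values."""
    c = patch[1][1]
    # Clockwise neighbor order starting at the top-left pixel.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:          # threshold neighbor against the center
            code |= 1 << bit
    return code
```

Because the code depends only on which neighbors exceed the center, uniformly brightening or darkening the patch leaves it unchanged.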

Representation of Dynamic Facial Image Graphics for Multi-Dimensional Data (다차원 데이터의 동적 얼굴 이미지그래픽 표현)

  • 최철재;최진식;조규천;차홍준
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.10
    • /
    • pp.1291-1300
    • /
    • 2001
  • This article studies a visualization technique for human visual perception, grounded in dynamic graphics that can change in real time by manipulating an image through graphic factors derived from multi-dimensional data. The key ideas of this realization are as follows: the characteristic points of a human face, together with parameter control values obtained from an existing image recognition algorithm, are mapped to the multi-dimensional data; the image is synthesized; and a virtual image is created whose emotional expression changes with the data. The proposed DyFIG system is realized as a complete module, and through manipulation and experiments we present a human-face graphics module capable of expressing emotion, resulting in a description and technology for emotional data expression.
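The mapping from data dimensions to facial parameters (in the spirit of Chernoff faces) can be sketched as a normalization step. The feature names and value ranges below are invented for illustration and are not from the paper:

```python
# Hypothetical sketch: linearly map each data dimension into [0, 1]
# for a facial feature parameter that a renderer would then draw.
def map_to_face_params(values, ranges,
                       features=("mouth_curve", "eye_open", "brow_tilt")):
    """Return a dict of facial feature parameters normalized to [0, 1]."""
    params = {}
    for feat, v, (lo, hi) in zip(features, values, ranges):
        params[feat] = (v - lo) / (hi - lo)
    return params
```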


Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.2
    • /
    • pp.271-279
    • /
    • 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals. Emotion detection is important for applying emotion recognition to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (Electrodermal Activity), SKT (Skin Temperature), PPG (Photoplethysmogram), and ECG (Electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from these signals. Statistical analysis for emotion classification was done by DFA (discriminant function analysis) (SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from those during the baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study has shown that emotions can be classified from various physiological signals. However, future study is needed to obtain additional signals from other modalities such as facial expression, face temperature, or voice to improve the classification rate, and to examine the stability and reliability of this result in comparison with the accuracy of emotion classification using other algorithms. Application: This work could help emotion recognition studies achieve a better chance of recognizing various human emotions from physiological signals, and can be applied to human-computer interaction systems for emotion recognition. It can also be useful in developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
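The feature step described in the Method section, taking the difference between emotional-state and baseline values per signal before discriminant analysis, can be sketched directly. The signal names follow the abstract; the numeric values in the usage are illustrative:

```python
# Sketch of the abstract's feature extraction: per-signal difference
# features (emotional-state value minus baseline value), which are then
# fed to discriminant function analysis.
def baseline_difference(baseline: dict, emotional: dict) -> dict:
    """Return per-signal (emotional - baseline) difference features."""
    return {sig: emotional[sig] - baseline[sig] for sig in baseline}
```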

Factors Affecting the Usage of Face Recognition Payment Service (얼굴인식 결제서비스 이용에 영향을 미치는 요인)

  • Zhang, Yi Ning;Ma, Jian;Park, Hyun Jung
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.8
    • /
    • pp.490-499
    • /
    • 2019
  • Face recognition payment is an innovative payment method based on face recognition technology that is now emerging in China. Various unmanned-sales industries are likely to adopt face recognition payment services in the future. This study investigated the factors influencing the usage intention of Chinese consumers who have experience using the service, employing a questionnaire survey analyzed with SPSS and AMOS. The results are as follows. First, consumers' attitudes toward the non-contact and non-coercive characteristics of the face recognition payment service positively affected perceived usefulness. Second, among the service's recognition accuracy, security, and rapidness, it was the rapidness of facial recognition payment that affected perceived ease of use. Third, social influences such as subjective norms also influenced the intention to use. Fourth, it was confirmed that a higher level of self-expression awareness increases the intention to use the face recognition payment service. Based on these results, implications for the design and communication of related innovative services are discussed.

CNN-based facial expression recognition (CNN 기반의 얼굴 표정 인식)

  • Choi, In-Kyu;Ahn, Ha-Eun;Song, Hyok;Ko, Min-Soo;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.271-272
    • /
    • 2016
  • This paper proposes a facial expression recognition technique based on the CNN (Convolutional Neural Network), one of the deep learning technologies. Facial images of five major expressions are used to train the CNN, forming a feature map suited to each expression pattern, and incoming input images are classified into the appropriate expression through these feature maps. The number of convolutional layers and nodes in the existing CNN structure was adjusted to fit the dataset used in this paper, which greatly reduced the number of parameters required for training and recognition. Experimental results show that the proposed technique achieves high facial expression classification performance.
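The feature maps the abstract refers to are produced by convolutional layers; as a dependency-free sketch, a single 2-D convolution (valid padding, stride 1) over a nested-list image shows how one feature map is formed. The paper's actual layer sizes are not given, so this is only the generic operation:

```python
# Generic 2-D convolution (cross-correlation) producing one feature map,
# the building block of the CNN described in the abstract.
def conv2d(image, kernel):
    """Valid 2-D convolution of nested lists, stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + u][j + v] * kernel[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out
```

Stacking many such kernels, with nonlinearities and pooling between layers, yields the per-expression feature maps used for classification.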


Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.8
    • /
    • pp.738-744
    • /
    • 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their expressed behaviors. This paper proposes a novel methodology for reliable intention analysis of humans based on this approach. To identify intention, 8 behavioral features are extracted from 4 characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are handled by recognition modules over multimodal sensors: localizing the speaker's sound source in the audition part, recognizing the frontal face and facial expression in the vision part, and estimating human trajectories, body pose and lean, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies intention from multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to effectively interact with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
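The final integration step, combining multi-dimensional cues with weight factors into an intent decision, can be sketched as a weighted sum. The cue names, weights, and decision threshold below are assumptions for illustration, not the paper's values:

```python
# Sketch of weighted cue fusion: each nonverbal cue contributes a
# confidence in [0, 1], scaled by an assumed weight factor.
def intent_score(cues: dict, weights: dict) -> float:
    """Weighted sum of confidence values for each nonverbal cue."""
    return sum(weights[c] * v for c, v in cues.items())

def wants_interaction(cues, weights, threshold=0.5):
    """Decide engagement when the fused score reaches the threshold."""
    return intent_score(cues, weights) >= threshold
```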