• Title/Abstract/Keyword: facial expression-classification


The Facial Expression Recognition using the Inclined Face Geometrical information

  • Zhao, Dadong;Deng, Lunman;Song, Jeong-Young
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2012년도 추계학술대회 / pp.881-886 / 2012
  • This paper presents facial expression recognition based on geometrical information from an inclined face. Because the mouth plays the key role in expressing emotion, the features are derived mainly from the shape of the mouth, followed by the eyes and eyebrows. The feature values are dispersed by a weighting function, a classification method with good discrimination between expressions is proposed, and the final recognition model is constructed.
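The geometric approach described above can be illustrated with a small sketch: measure a few shape features around the mouth, eyes, and eyebrows, spread them with a weighting function that favors the mouth, and assign the nearest expression prototype. The feature names, weights, prototypes, and nearest-prototype rule below are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Hypothetical geometric features measured from detected landmarks:
# mouth aspect ratio, eye openness, eyebrow-to-eye distance (all normalized).
FEATURES = ["mouth_aspect", "eye_openness", "brow_height"]

# Weighting function that disperses the feature values so that the mouth,
# as the dominant cue, contributes most to the decision (weights assumed).
WEIGHTS = np.array([0.6, 0.25, 0.15])

# Illustrative expression prototypes in the same feature space.
PROTOTYPES = {
    "happy":     np.array([0.80, 0.55, 0.50]),
    "surprised": np.array([0.70, 0.90, 0.85]),
    "neutral":   np.array([0.30, 0.50, 0.50]),
}

def classify(feature_vector):
    """Nearest-prototype classification of a weighted geometric feature vector."""
    x = WEIGHTS * np.asarray(feature_vector, dtype=float)
    distances = {label: np.linalg.norm(x - WEIGHTS * proto)
                 for label, proto in PROTOTYPES.items()}
    return min(distances, key=distances.get)

print(classify([0.78, 0.60, 0.52]))  # -> "happy" for this made-up measurement
```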


Facial Expression Classification Using Deep Convolutional Neural Network

  • Choi, In-kyu;Ahn, Ha-eun;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 1 / pp.485-492 / 2018
  • In this paper, we propose facial expression recognition using a CNN (convolutional neural network), one of the deep learning technologies. The proposed structure is intended to give consistent classification performance regardless of environment or subject. For this purpose, we collect a variety of databases and organize them into six expression classes: 'expressionless', 'happy', 'sad', 'angry', 'surprised', and 'disgusted'. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. Starting from an existing CNN structure, the structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully connected layers. Experimental results show good classification performance compared to the state of the art in cross-validation and cross-database experiments. It is also confirmed that, compared to other conventional models, the proposed structure achieves superior classification performance with less execution time.
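As a rough sketch of the kind of network this abstract describes (not the authors' exact architecture), a small CNN for six expression classes might look like the following; the input resolution, number of feature maps, and fully connected sizes are assumptions, and they are exactly the knobs the abstract says were tuned.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6  # expressionless, happy, sad, angry, surprised, disgusted

class ExpressionCNN(nn.Module):
    """Small CNN for 48x48 grayscale face crops; the feature-map and FC sizes
    are the tunable hyperparameters (values here are assumptions)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ExpressionCNN()
logits = model(torch.randn(8, 1, 48, 48))  # dummy batch of 8 face crops
print(logits.shape)                        # torch.Size([8, 6])
```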

얼굴 특징점 추적을 통한 사용자 감성 인식 (Emotion Recognition based on Tracking Facial Keypoints)

  • 이용환;김흥준
    • 반도체디스플레이기술학회지 / Vol. 18, No. 1 / pp.97-101 / 2019
  • Understanding and classifying human emotion are important tasks in human-machine communication systems. This paper proposes an emotion recognition method that extracts facial keypoints with an Active Appearance Model and classifies the emotion with a proposed classification model of the facial features. The appearance-model stage captures the expression variations, which the proposed classification model evaluates as the facial expression changes. The method classifies four basic emotions (normal, happy, sad, and angry). To evaluate performance, the success ratio is measured on common datasets, achieving a best accuracy of 93% and an average of 82.2% for facial emotion recognition. The results show that the proposed method performs emotion recognition effectively compared to existing schemes.
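A minimal sketch of the landmark-based classification stage is shown below; the Active Appearance Model fitting itself is omitted, and the 68-keypoint layout, the descriptor, and the RBF-SVM classifier are assumptions used only to make the pipeline concrete.

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["normal", "happy", "sad", "angry"]

def keypoint_descriptor(landmarks):
    """Turn an (N, 2) array of tracked facial keypoints into a translation-
    and scale-invariant descriptor."""
    pts = np.asarray(landmarks, dtype=float)
    centered = pts - pts.mean(axis=0)
    return (centered / (np.linalg.norm(centered) + 1e-8)).ravel()

# Training data would come from AAM-tracked keypoints with emotion labels;
# the random arrays below only illustrate the expected shapes (68 keypoints).
rng = np.random.default_rng(0)
X_train = np.stack([keypoint_descriptor(rng.normal(size=(68, 2))) for _ in range(40)])
y_train = rng.integers(0, len(EMOTIONS), size=40)

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(EMOTIONS[clf.predict(X_train[:1])[0]])
```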

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / Vol. 9, No. 1 / pp.173-188 / 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature based cascade classifiers, and these region-based features are stored into separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, which is a popular multi-classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
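A minimal sketch of this region-based pipeline, using OpenCV's bundled Haar cascades and a scikit-image LBP, is shown below. Only the face and eye cascades shipped with OpenCV are loaded here; nose and mouth cascades would be loaded the same way from separately distributed XML files, and in a real system the regions would be fixed so every face yields a descriptor of the same length.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Haar cascades bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def lbp_histogram(patch, P=8, R=1):
    """Uniform LBP histogram of one facial region."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def face_descriptor(gray):
    """Concatenate LBP histograms of the detected face and eye regions."""
    feats = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5)[:1]:
        face = gray[y:y + h, x:x + w]
        feats.append(lbp_histogram(face))
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face, 1.1, 5)[:2]:
            feats.append(lbp_histogram(face[ey:ey + eh, ex:ex + ew]))
    return np.concatenate(feats) if feats else None

# One-vs-rest RBF SVM over the concatenated histograms, as in the abstract:
# clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
```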

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal / Vol. 32, No. 5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, these are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese female facial expression databases. The higher classification accuracy demonstrates the superiority of the LDP descriptor over other appearance-based feature descriptors.
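A common formulation of LDP computes the eight Kirsch compass-mask responses at each pixel and sets the k strongest directions to 1 in the 8-bit code (k = 3 is typical in the LDP literature). The sketch below follows that formulation as an approximation of the descriptor described here; the choice of k and of the Kirsch masks is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

# Eight Kirsch compass masks (east, north-east, ..., south-east).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_codes(gray, k=3):
    """LDP code image: the k strongest of the 8 directional edge responses
    at each pixel are set to 1 in an 8-bit code."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in KIRSCH])
    kth = np.sort(responses, axis=0)[-k]          # per-pixel k-th largest response
    bits = (responses >= kth).astype(np.uint8)    # 1 for the k strongest directions
    weights = (1 << np.arange(8)).reshape(8, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint8)

def ldp_histogram(gray):
    """Normalized histogram of LDP codes (only C(8,3)=56 codes occur for k=3)."""
    hist = np.bincount(ldp_codes(gray).ravel(), minlength=256)
    return hist / hist.sum()
```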

로봇과 인간의 상호작용을 위한 얼굴 표정 인식 및 얼굴 표정 생성 기법 (Recognition and Generation of Facial Expression for Human-Robot Interaction)

  • 정성욱;김도윤;정명진;김도형
    • 제어로봇시스템학회논문지 / Vol. 12, No. 3 / pp.255-263 / 2006
  • In the last decade, face analysis (e.g., face detection, face recognition, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial emotion mimic system that can recognize human facial expressions and also generate the recognized facial expression. To recognize human facial expressions in real time, we propose a facial expression classification method based on weak classifiers built from new rectangular feature types. In addition, the recognized facial expression is reproduced on the developed robotic system, whose design is based on biological observation. Finally, experimental results on facial expression recognition and generation demonstrate the validity of the robotic system.

에이다부스트와 신경망 조합을 이용한 표정인식 (Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms)

  • 홍용희;한영준;한헌수
    • 한국지능시스템학회논문지 / Vol. 20, No. 6 / pp.806-813 / 2010
  • Facial expression is a primary means of expressing human emotion, and for this reason it can be used as an effective way to convey a person's intent to a computer. This paper proposes a method that combines the discrete AdaBoost algorithm with a neural network to recognize facial expressions in 2D images more quickly and accurately. First, the AdaBoost algorithm finds the position and size of the face in the image; second, AdaBoost strong classifiers trained for each expression produce per-expression output values; finally, these values are used as the input of a neural network trained on the strong-classifier outputs, which recognizes the final expression. By appropriately exploiting the real-time property of the AdaBoost algorithm and the reliability of the neural-network-based recognizer that improves accuracy, the proposed method secures real-time operation of the overall recognizer while improving its accuracy. The implemented algorithm recognizes five expressions (calm, happy, sad, angry, and surprised) in real time with an average accuracy of 86-95%.
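The two-stage idea (per-expression AdaBoost strong classifiers whose outputs feed a neural network) can be sketched with scikit-learn as below. The face-detection stage, the feature extraction, and all sizes are placeholders; the random data only shows the shapes involved.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["calm", "happy", "sad", "angry", "surprised"]

# X: feature vectors of detected/cropped faces, y: expression indices.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)

# Stage 1: one AdaBoost strong classifier per expression (one-vs-rest),
# each producing a real-valued score for its expression.
stage1 = [AdaBoostClassifier(n_estimators=50).fit(X, (y == i).astype(int))
          for i in range(len(EXPRESSIONS))]

def stage1_scores(X):
    return np.column_stack([clf.decision_function(X) for clf in stage1])

# Stage 2: a small neural network trained on the strong-classifier outputs
# makes the final decision.
stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(stage1_scores(X), y)

print(EXPRESSIONS[stage2.predict(stage1_scores(X[:1]))[0]])
```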

얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구 (Study for Classification of Facial Expression using Distance Features of Facial Landmarks)

  • 배진희;왕보현;임준식
    • 전기전자학회논문지 / Vol. 25, No. 4 / pp.613-618 / 2021
  • Facial expression recognition has been a continuing research topic in many fields. In this paper, features extracted by computing the distances between facial image landmarks are used to analyze the relationships among the landmarks and to classify five expressions. The reliability of the data and labels was improved through labeling performed by multiple observers. Faces were detected in the original data, landmark coordinates were extracted and used as features, and a genetic algorithm was used to select the features that contribute relatively more to classification. Facial expression classification was performed with the proposed method, and its performance was better than that of classification with a CNN.
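A sketch of the distance-feature extraction is given below. The 68-landmark convention, the inter-ocular normalization, and the random mask standing in for the genetic-algorithm selection are assumptions made only for illustration.

```python
import numpy as np
from itertools import combinations

def distance_features(landmarks):
    """All pairwise Euclidean distances between facial landmarks, normalized
    by the inter-ocular distance so the features are scale-invariant.
    `landmarks` is an (N, 2) array; indices 36 and 45 are assumed to be the
    outer eye corners (68-point convention)."""
    pts = np.asarray(landmarks, dtype=float)
    iod = np.linalg.norm(pts[36] - pts[45]) + 1e-8
    return np.array([np.linalg.norm(pts[i] - pts[j]) / iod
                     for i, j in combinations(range(len(pts)), 2)])

# A genetic algorithm would search over boolean masks like this one, keeping
# the subset of distances that best separates the five expressions.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, size=(68, 2))
feats = distance_features(landmarks)          # 68*67/2 = 2278 distances
mask = rng.random(feats.shape) < 0.1          # stand-in for the GA-selected subset
print(feats.shape, feats[mask].shape)
```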

얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습 (Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition)

  • 강현우;임길택;원철호
    • 한국멀티미디어학회논문지 / Vol. 20, No. 5 / pp.748-757 / 2017
  • Recognizing facial expressions requires good features that express them well, as well as the characteristic facial areas where expressions appear most discriminatively. In this study, we propose a directional LBP feature for facial expression recognition and a method for finding the directional LBP operation and the feature regions for facial expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions learned through AdaBoost. The facial expression classifier is implemented as an SVM classifier based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
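The region-learning side can be sketched as below: each candidate region is described by an LBP histogram and scored by how well it alone separates the expressions, and the best regions are kept. The cross-validated linear SVM scoring is only a stand-in for the AdaBoost learning of discriminative regions and operation factors described in the abstract.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def region_histogram(gray, box, P=8, R=1):
    """LBP histogram of one candidate feature region (x, y, w, h)."""
    x, y, w, h = box
    codes = local_binary_pattern(gray[y:y + h, x:x + w], P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def rank_regions(images, labels, boxes):
    """Score each candidate region by how well its histogram alone separates
    the expressions and return the regions sorted from best to worst."""
    scores = []
    for box in boxes:
        X = np.stack([region_histogram(img, box) for img in images])
        scores.append(cross_val_score(LinearSVC(), X, labels, cv=3).mean())
    return sorted(zip(scores, boxes), reverse=True)
```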

실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법 (New Rectangle Feature Type Selection for Real-time Facial Expression Recognition)

  • 김도형;안광호;정명진;정성욱
    • 제어로봇시스템학회논문지 / Vol. 12, No. 2 / pp.130-137 / 2006
  • In this paper, we propose a method for selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach to face detection, but instead of the previous Haar-like features we choose rectangle features for facial expression recognition from among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared in capability with one built with the previous rectangle features. Simulation and experimental results show that the proposed approach gives better facial expression recognition performance.
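A sketch of how a rectangle feature defined on a 3×3 grid of sub-blocks can be evaluated in constant time from an integral image is given below; the particular weight pattern and window placement are assumptions, and the AdaBoost selection loop is only indicated in a comment.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column so box sums are O(1)."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return ii

def box_sum(ii, x, y, w, h):
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def rectangle_feature(ii, x, y, w, h, pattern):
    """A rectangle feature defined on a 3x3 grid of sub-blocks: `pattern` is a
    3x3 array of integer weights applied to the sub-block sums."""
    bw, bh = w // 3, h // 3
    value = 0
    for r in range(3):
        for c in range(3):
            value += pattern[r][c] * box_sum(ii, x + c * bw, y + r * bh, bw, bh)
    return value

# Example 3x3 pattern: compares the middle row of the window (e.g. the eye
# line) against the rows above and below it.
PATTERN = [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]]

gray = np.random.default_rng(0).integers(0, 256, size=(48, 48))
print(rectangle_feature(integral_image(gray), 0, 0, 48, 48, PATTERN))
# AdaBoost would then pick the (pattern, position, size) combinations whose
# thresholded values best separate the expressions.
```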