• Title/Summary/Keyword: Facial Feature


Intelligent Wheelchair System using Face and Mouth Recognition (얼굴과 입 모양 인식을 이용한 지능형 휠체어 시스템)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Journal of KIISE: Software and Applications / v.36 no.2 / pp.161-168 / 2009
  • In this paper, we develop an Intelligent Wheelchair (IW) control system for people with various disabilities. The aim of the proposed system is to increase the mobility of severely handicapped people by providing an adaptable and effective interface for a power wheelchair. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. To analyze these gestures, our system consists of a facial feature detector, a facial feature recognizer, and a converter. In the detection stage, the facial region of the intended user is first obtained using AdaBoost; thereafter, the mouth region is detected based on edge information. The extracted features are sent to the facial feature recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. When the effectiveness of the proposed system was assessed with 34 users unable to operate a standard joystick, the results showed that it provided a friendly and convenient interface.
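
The pipeline above pairs AdaBoost-based face detection with K-means clustering of mouth shapes. A minimal sketch of that flow, assuming OpenCV's Haar-cascade detector as the AdaBoost stage; the edge-density mouth descriptor below is an illustrative stand-in, not the authors' exact feature:

```python
import cv2
import numpy as np

# AdaBoost-based face detection via OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Return the largest detected face box (x, y, w, h), or None."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

def mouth_descriptor(gray, face):
    """Illustrative descriptor: edge density over the lower face third,
    pooled into a small grid (stands in for the paper's edge-based step)."""
    x, y, w, h = face
    mouth = gray[y + 2 * h // 3 : y + h, x : x + w]
    edges = cv2.Canny(mouth, 50, 150)
    return (cv2.resize(edges, (8, 4)).astype(np.float32) / 255.0).ravel()

def cluster_mouth_shapes(descriptors, k=2):
    """K-means over mouth descriptors, e.g. to separate open vs. closed."""
    data = np.float32(descriptors)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.ravel(), centers
```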

Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin;Kim, Jaewon;Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery / v.1 no.2 / pp.57-61 / 2014
  • We present a robust 3D facial reconstruction method using a single image generated by a face-specific super-resolution (SR) technique. From several consecutive low-resolution frames, we generate a single high-resolution image and build a three-dimensional facial model from it. To do this, we apply the PME method to compute patch similarities for SR after two-phase warping according to facial attributes. From the SR image, we extract facial features automatically and reconstruct a 3D facial model, with basis vectors selected adaptively according to facial statistical data, in less than a few seconds. Thereby, we can provide facial images from various viewpoints that a single camera position cannot supply.
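
The paper's two-phase warping and PME patch matching are face-specific, but the underlying multi-frame fusion idea can be sketched with simple global registration and averaging. A toy version, assuming same-size grayscale frames and a pure translation between them (both assumptions for illustration):

```python
import cv2
import numpy as np

def fuse_frames_sr(frames, scale=2):
    """Toy multi-frame SR: upsample each low-res grayscale frame, register
    it to the first frame by phase correlation, and average the stack."""
    ref = cv2.resize(frames[0], None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC).astype(np.float32)
    acc, n = ref.copy(), 1
    for f in frames[1:]:
        up = cv2.resize(f, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC).astype(np.float32)
        # Estimate a global shift between frames (stand-in for the
        # paper's two-phase facial warping).
        (dx, dy), _ = cv2.phaseCorrelate(ref, up)
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(up, m, (up.shape[1], up.shape[0]))
        n += 1
    return (acc / n).astype(np.uint8)
```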

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.
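
Steps (2) and (3) map onto standard primitives: sample a Bezier curve from eye/mouth control points, then compare curves across images with the Hausdorff distance. A minimal sketch, assuming cubic curves with four control points (the control-point values below are made up for the example; SciPy supplies the directed Hausdorff distance):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def bezier_curve(ctrl, n=50):
    """Sample a cubic Bezier curve from 4 control points, shape (4, 2)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two sampled curves."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Illustrative mouth curves: neutral vs. smiling control points.
neutral = bezier_curve(np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0]]))
smile   = bezier_curve(np.array([[0, 0], [1, 0.8], [2, 0.8], [3, 0]]))
print(hausdorff(neutral, smile))  # larger distance => bigger shape change
```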


Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.18 no.1 / pp.97-101 / 2019
  • Understanding and classification of human emotion play an important role in interaction between humans and machine communication systems. This paper proposes an emotion recognition method based on extracted facial keypoints, which is able to understand and classify human emotion using an Active Appearance Model and a proposed classification model of the facial features. The appearance model captures expression variations, which are evaluated by the proposed classification model as the facial expression changes. The proposed method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we assess the success ratio on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs emotion recognition effectively compared to existing schemes.
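
With landmarks in hand (from an Active Appearance Model or any other detector), classification can be as simple as comparing normalized displacements from a neutral face. A minimal sketch using k-NN as a stand-in for the paper's classification model (the 68-point layout and eye-corner indices are assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

EMOTIONS = ["normal", "happy", "sad", "angry"]

def keypoint_features(landmarks, neutral):
    """Feature vector: per-landmark displacement from the neutral face,
    normalized by inter-ocular distance (indices 36/45 are illustrative
    outer eye corners in a 68-point layout)."""
    iod = np.linalg.norm(landmarks[36] - landmarks[45]) + 1e-8
    return ((landmarks - neutral) / iod).ravel()

def train_classifier(train_landmarks, neutral, train_labels, k=3):
    """train_landmarks: (N, 68, 2) keypoint arrays; train_labels: indices
    into EMOTIONS. Returns a fitted k-NN classifier."""
    feats = np.stack([keypoint_features(l, neutral) for l in train_landmarks])
    return KNeighborsClassifier(n_neighbors=k).fit(feats, train_labels)
```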

Hardware Implementation of Facial Feature Detection Algorithm (얼굴 특징 검출 알고리즘의 하드웨어 설계)

  • Kim, Jung-Ho;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.1-10 / 2008
  • In this paper, we design facial feature (eyes, mouth, and nose) detection hardware based on the ICT transform, which was developed earlier for face detection. Our design uses a pipelined architecture for high throughput, and it also reduces memory size and memory access rate. The algorithm and its hardware implementation were tested on the BioID database, a worldwide face detection test bed, and the facial feature detection rate was 100% in both software and hardware, assuming the face boundary was correctly detected. After synthesizing the hardware with the Dongbu $0.18{\mu}m$ CMOS library, the die size was $376,821{\mu}m^2$ with a maximum operating clock of 78 MHz.
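
Hardware like this is typically verified against a bit-accurate software reference model. A minimal sketch of such a model, assuming the ICT transform behaves like a census-transform-style 3x3 binary encoding (an assumption for illustration; the paper's exact transform may differ):

```python
import numpy as np

def census_3x3(img):
    """Software reference model of a census-transform-style encoding:
    each pixel becomes an 8-bit pattern of neighbor-vs-center comparisons.
    Illustrative stand-in for the paper's ICT transform."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        out |= (neighbor > center).astype(np.uint8) << bit
    return out
```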

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is efficiently detected from each video frame with a non-parametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
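
The RBF step in the cloning stage is standard scattered-data interpolation: displacements known at the control points are solved for once and then propagated to the surrounding non-feature vertices. A minimal sketch with a Gaussian kernel (the kernel choice and `sigma` value are assumptions for illustration):

```python
import numpy as np

def rbf_deform(vertices, ctrl_pts, ctrl_disp, sigma=0.1):
    """Propagate control-point displacements to all mesh vertices via
    Gaussian RBF interpolation: solve for per-control-point weights,
    then evaluate the interpolant at every vertex."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    phi = kernel(ctrl_pts, ctrl_pts)  # (m, m) interpolation system
    weights = np.linalg.solve(phi + 1e-8 * np.eye(len(ctrl_pts)), ctrl_disp)
    return vertices + kernel(vertices, ctrl_pts) @ weights
```

Solving the small (m, m) system once per frame keeps the per-vertex cost to a single matrix product, which is why RBF fitting suits real-time expression control.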

A Simple Way to Find Face Direction (간단한 얼굴 방향성 검출방법)

  • Park Ji-Sook;Ohm Seong-Yong;Jo Hyun-Hee;Chung Min-Gyo
    • Journal of Korea Multimedia Society / v.9 no.2 / pp.234-243 / 2006
  • The recent rapid development of HCI and surveillance technologies has brought great interest in application systems that process faces. Much of the research effort in these systems has focused primarily on areas such as face recognition, facial expression analysis, and facial feature extraction. However, not many approaches to face direction detection have been reported. This paper proposes a method to detect the direction of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, based on a single monocular view of the face, the proposed method introduces very simple formulas to estimate the horizontal or vertical rotation angle of the face. The horizontal rotation angle can be calculated from the ratio between the areas of the left and right facial triangles, while the vertical angle can be obtained from the ratio between the base and height of the facial triangle. Experimental results showed that our method obtains the horizontal angle within an error tolerance of ${\pm}1.68^{\circ}$, and that it performs better as the magnitude of the vertical rotation angle increases.
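
A minimal sketch of the area-ratio idea, assuming the facial triangle is split into left and right halves at the midpoint between the eyebrows; the mapping from area ratio to degrees is an illustrative placeholder, not the paper's formula:

```python
import numpy as np

def tri_area(a, b, c):
    """Area of a 2-D triangle from its vertex coordinates."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (b[1] - a[1]) * (c[0] - a[0]))

def head_yaw_deg(left_brow, right_brow, lip):
    """Estimate horizontal rotation from the left/right triangle areas.
    The midpoint between the eyebrows splits the facial triangle."""
    mid = (left_brow + right_brow) / 2.0
    area_l = tri_area(left_brow, mid, lip)
    area_r = tri_area(mid, right_brow, lip)
    ratio = area_l / (area_r + 1e-8)
    # Illustrative mapping: equal areas (ratio 1) => frontal, 0 degrees.
    return np.degrees(np.arctan(ratio - 1.0))
```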


Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.193-201 / 2014
  • This paper proposes a facial expression recognition system using face detection, face alignment, facial unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is found by a face detector. From the result, a face alignment algorithm extracts feature points. The facial units are drawn from a subset of action units generated by combining the obtained feature points. The facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, and hence can be applied in real-time scenarios. Experimental results in real scenarios showed that the proposed system achieves excellent performance, with recognition rates over 90%.
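
A minimal sketch of the training/testing stage, assuming facial-unit features (e.g., distances and angles between aligned feature points) have already been collected into vectors; scikit-learn's AdaBoostClassifier over decision stumps stands in for the paper's boosted classifiers:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def train_expression_classifier(X, y):
    """X: (N, D) facial-unit feature vectors; y: expression labels.
    Trains AdaBoost over decision stumps and reports held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # weak learners
        n_estimators=100)
    clf.fit(X_tr, y_tr)
    print("recognition rate:", clf.score(X_te, y_te))
    return clf
```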

A Study on Facial Expression Recognition using Boosted Local Binary Pattern (Boosted 국부 이진 패턴을 적용한 얼굴 표정 인식에 관한 연구)

  • Won, Chulho
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1357-1367 / 2013
  • Recently, as one of the image-based methods for facial expression recognition, research using ULBP block histogram features and an SVM classifier has been performed. Due to the properties of the LBP introduced by Ojala, such as high discriminative capability, robustness to illumination changes, and simple computation, LBP is widely used in the field of image recognition. In this paper, we combine $LBP_{8,2}$ and $LBP_{8,1}$ to describe micro-features, in addition to shift and size changes, when calculating the ULBP block histogram. From 660 sub-windows of $LBP_{8,1}$ and 550 sub-windows of $LBP_{8,2}$, 1210 ULBP histogram features were extracted, and 50 weak classifiers were generated using AdaBoost. Using the hybrid ULBP histogram features that combine $LBP_{8,1}$ and $LBP_{8,2}$ together with an SVM classifier, the facial expression recognition rate was improved, as confirmed through various experiments. A facial expression recognition rate of 96.3% with the hybrid boosted ULBP block histogram demonstrates the superiority of the proposed method.
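
A minimal sketch of the feature side, assuming scikit-image's non-rotation-invariant uniform LBP as the ULBP operator: block-wise histograms of $LBP_{8,1}$ and $LBP_{8,2}$ codes are concatenated into one hybrid vector (the 16-pixel block size is an assumption):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def ulbp_block_histogram(gray, radius, block=16):
    """Uniform LBP codes (P=8), histogrammed per non-overlapping block.
    'nri_uniform' with P=8 yields 59 distinct code values."""
    codes = local_binary_pattern(gray, 8, radius, method="nri_uniform")
    h, w = codes.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(codes[y:y + block, x:x + block],
                                   bins=59, range=(0, 59))
            feats.append(hist / (hist.sum() + 1e-8))
    return np.concatenate(feats)

def hybrid_feature(gray):
    """Concatenate LBP_{8,1} and LBP_{8,2} block histograms."""
    return np.concatenate([ulbp_block_histogram(gray, 1),
                           ulbp_block_histogram(gray, 2)])
```

The resulting hybrid vector would then feed AdaBoost for weak-classifier selection and an SVM for the final decision, per the abstract.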

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Model-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae;Ko, Jae-Pil
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1465-1473 / 2006
  • In this paper, we present an approach for facial expression recognition using Active Shape Models (ASM) and a state-based model in image sequences. Given an image frame, we use the ASM to obtain the shape parameter vector of the model while locating the facial feature points. We thus obtain the shape parameter vector set for all frames of an image sequence, and this vector set is converted by the state-based model into a state vector, each entry taking one of three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the variation regions of one expression sequence differ from those of other expression sequences. In experiments on the public KCFD database, we demonstrate that the proposed measure slightly outperforms the existing binary measure: with k = 1, the k-NN achieves 89.1% recognition with the proposed measure versus 86.2% with the binary measure.
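
A minimal sketch of that classification step, assuming each sequence has been reduced to a per-region state vector with entries in {0, 1, 2}; the weighted agreement score is an illustrative stand-in for the paper's similarity measure:

```python
import numpy as np
from collections import Counter

def similarity(a, b, weights=None):
    """Illustrative stand-in for the paper's measure: weighted agreement
    between two state vectors (each entry is one of three states 0/1/2)."""
    w = np.ones(len(a)) if weights is None else np.asarray(weights)
    return float(np.sum(w * (np.asarray(a) == np.asarray(b))) / w.sum())

def knn_classify(query, train_states, train_labels, k=1):
    """k-NN over state vectors using the similarity above (higher = closer);
    majority vote among the k most similar training sequences."""
    sims = [similarity(query, s) for s in train_states]
    top = np.argsort(sims)[::-1][:k]
    return Counter(train_labels[i] for i in top).most_common(1)[0][0]
```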
