• Title/Summary/Keyword: facial feature


Implementation of communication system using signals originating from facial muscle constructions

  • Kim, EungSoo; Eum, TaeWan
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.217-222 / 2004
  • People communicate with each other through language, but some disabled people cannot convey their ideas through writing or gesture. We implemented a communication system based on EEG so that such users can communicate. After feature extraction from the EEG, which includes facial muscle signals, the facial muscle activity is converted into a control signal that lets the user select characters and express ideas.
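The character-selection idea in this abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the scanning interface, the threshold value, and the signal values are all assumptions.

```python
# Illustrative sketch (not the paper's implementation): a facial-muscle
# signal is thresholded into a binary "select" command, and characters
# are scanned one per time step; a select command picks the current one.

def to_select_commands(signal, threshold=0.5):
    """Convert a raw muscle-activity signal into True/False select events."""
    return [amplitude > threshold for amplitude in signal]

def scan_and_select(commands, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ "):
    """Step through the alphabet once per time step; emit the highlighted
    character whenever a select command arrives."""
    message = []
    for step, selected in enumerate(commands):
        current = alphabet[step % len(alphabet)]
        if selected:
            message.append(current)
    return "".join(message)

# A burst of muscle activity at steps 7 and 8 selects 'H' then 'I'.
signal = [0.1] * 7 + [0.9, 0.9] + [0.1] * 5
print(scan_and_select(to_select_commands(signal)))  # -> HI
```

A real system would debounce the signal and use a slower scan rate, but the control flow from thresholded signal to character selection is the same.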

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions for a vector graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined from Russell's model of internal emotional states. From this, we find the principal component vectors that have the biggest effect on the character's facial features and expressions, and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weights applied to the character's features and expressions. This saves considerable memory space and creates intermediate expressions with little computation, so the performance of character generation systems can be improved considerably in web, mobile, and game services that require real-time control.
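The PCA-plus-interpolation pipeline described above can be sketched in a few lines. The data below are synthetic stand-ins for the paper's feature-point coordinates; the number of components kept and the dimensions are assumptions for illustration.

```python
import numpy as np

# Hypothetical data: each row is one expression pose of a character,
# flattened (x, y) feature-point coordinates. Values are illustrative,
# not the paper's data.
rng = np.random.default_rng(0)
expressions = rng.normal(size=(9, 20))      # 9 expressions, 10 feature points

# Principal component analysis via SVD of the mean-centred data.
mean = expressions.mean(axis=0)
centred = expressions - mean
_, _, components = np.linalg.svd(centred, full_matrices=False)

# Keep only the leading components that affect the face most (the
# "simplified" principal component vectors) and project onto them.
k = 3
basis = components[:k]                      # (k, 20)
weights = centred @ basis.T                 # (9, k) low-dimensional codes

def blend(w_a, w_b, t):
    """Linearly interpolate two expressions in PCA-weight space and
    reconstruct the intermediate feature-point layout."""
    w = (1 - t) * w_a + t * w_b
    return mean + w @ basis

halfway = blend(weights[0], weights[1], 0.5)
print(halfway.shape)  # -> (20,)
```

Storing only k weights per expression instead of the full coordinate vector is what yields the memory saving the abstract mentions, and blending in weight space is a single small matrix product.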

Intelligent Wheelchair System using Face and Mouth Recognition (얼굴과 입 모양 인식을 이용한 지능형 휠체어 시스템)

  • Ju, Jin-Sun; Shin, Yun-Hee; Kim, Eun-Yi
    • Journal of KIISE: Software and Applications / v.36 no.2 / pp.161-168 / 2009
  • In this paper, we develop an Intelligent Wheelchair (IW) control system for people with various disabilities. The aim of the proposed system is to increase the mobility of severely handicapped people by providing an adaptable and effective interface for a power wheelchair. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the wheelchair is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. To analyze these gestures, our system consists of a facial feature detector, a facial feature recognizer, and a converter. In the detection stage, the facial region of the intended user is first obtained using AdaBoost, after which the mouth region is detected based on edge information. The extracted features are sent to the facial feature recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. When assessing the effectiveness of the proposed system with 34 users unable to operate a standard joystick, the results showed that it provided a friendly and convenient interface.
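The K-means stage of the recognizer can be sketched as below. The mouth-shape features (width/height measurements) and their values are invented for illustration; the paper does not specify them here.

```python
import numpy as np

# Illustrative sketch of the recognizer stage: K-means groups simple
# mouth-shape measurements (hypothetical width/height pairs) into
# "go" and "stop" commands. The feature values are made up.
rng = np.random.default_rng(1)
open_mouths = rng.normal(loc=[1.0, 0.8], scale=0.05, size=(20, 2))
closed_mouths = rng.normal(loc=[1.0, 0.1], scale=0.05, size=(20, 2))
features = np.vstack([open_mouths, closed_mouths])

def kmeans(points, k=2, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centre, recompute."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(points[:, None] - centres[None], axis=2)
        labels = dist.argmin(axis=1)
        centres = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return centres, labels

centres, labels = kmeans(features)
# The cluster whose centroid has the larger mouth height is "open" (go).
go_cluster = centres[:, 1].argmax()
print((labels[:20] == go_cluster).mean())  # fraction of open samples in "go"
```

With well-separated shape clusters like these, the two centroids converge to the open and closed mouth prototypes and new frames can be labelled by nearest centroid.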

Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin; Kim, Jaewon; Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery / v.1 no.2 / pp.57-61 / 2014
  • We present a robust 3D facial reconstruction method that uses a single image generated by a face-specific super-resolution (SR) technique. From several consecutive low-resolution frames, we generate a single high-resolution image and build a three-dimensional facial model from it. To do this, we apply the PME method to compute patch similarities for SR after two-phase warping according to facial attributes. From the SR image, we extract facial features automatically and reconstruct a 3D facial model, with bases selected adaptively according to facial statistical data, in less than a few seconds. Thereby, we can provide facial images from various viewpoints that cannot be obtained from the single viewpoint of a camera.
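The core idea of fusing several aligned low-resolution frames into one higher-resolution image can be sketched as below. This is a toy sketch with known integer shifts, not the paper's PME/two-phase-warping pipeline, and the scene data are synthetic.

```python
import numpy as np

# Simplified illustration of multi-frame super-resolution: several
# low-resolution frames that sample the same scene at known half-pixel
# shifts are placed onto a 2x grid and fused.
def fuse_frames(frames, shifts):
    """Place each LR frame on a 2x grid at its known shift and average
    the contributions at each HR pixel."""
    h, w = frames[0].shape
    acc = np.zeros((2 * h, 2 * w))
    cnt = np.zeros((2 * h, 2 * w))
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::2, dx::2] += frame
        cnt[dy::2, dx::2] += 1
    cnt[cnt == 0] = 1        # avoid division by zero on uncovered pixels
    return acc / cnt

# Four shifted samplings of a toy 16x16 "scene" cover the full HR grid.
scene = np.arange(256, dtype=float).reshape(16, 16)
shifts = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
frames = [scene[dy::2, dx::2] for dy, dx in shifts]
restored = fuse_frames(frames, shifts)
print(np.allclose(restored, scene))  # -> True
```

Real SR must estimate the sub-pixel shifts (the paper's warping step) and handle non-integer offsets; here the shifts are given, so the scene is recovered exactly.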

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying the facial expression and emotion.
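Step (3), classification by Hausdorff distance, can be sketched directly. The curve samples and the template names below are hypothetical; the distance itself is the standard symmetric Hausdorff distance between point sets.

```python
import math

# Sketch of the classification step: the Hausdorff distance compares the
# curve point set extracted from a face against a template point set per
# emotion. The sample points and template names below are illustrative.
def hausdorff(set_a, set_b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(set_a, set_b), directed(set_b, set_a))

# Hypothetical mouth-curve samples: a flat (neutral) template and an
# upward-curved (happy) template.
observed = [(0, 0.0), (1, 0.4), (2, 0.5), (3, 0.4), (4, 0.0)]
templates = {
    "neutral": [(0, 0.0), (1, 0.0), (2, 0.0), (3, 0.0), (4, 0.0)],
    "happy":   [(0, 0.0), (1, 0.5), (2, 0.6), (3, 0.5), (4, 0.0)],
}
best = min(templates, key=lambda name: hausdorff(observed, templates[name]))
print(best)  # -> happy
```

The observed curve lies within 0.1 of every "happy" template point but up to 0.5 from the flat one, so the minimum-distance template wins.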


Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan; Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.18 no.1 / pp.97-101 / 2019
  • Understanding and classifying human emotion are important tasks in interaction between humans and machine communication systems. This paper proposes a novel emotion recognition method that extracts facial keypoints using the Active Appearance Model and classifies the emotion with the proposed classification model of facial features. The appearance model captures expression variations, which are evaluated by the proposed classification model as the facial expression changes. The proposed method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we assess the success ratio on common datasets and achieve a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs well on emotion recognition compared with existing schemes.
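One simple way to classify emotion from tracked keypoints, in the spirit of the "expression of variations" above, is to represent each face as its keypoint displacement from a neutral shape and pick the nearest per-emotion mean displacement. This is an illustrative stand-in, not the paper's classification model, and all data here are synthetic.

```python
import numpy as np

# Illustrative sketch (not the paper's AAM pipeline): faces are vectors
# of keypoint coordinates, expressions are displacements from a neutral
# shape, and classification is nearest mean displacement.
rng = np.random.default_rng(2)
neutral_shape = rng.normal(size=8)          # 4 keypoints, flattened (x, y)

def make_samples(offset, n=10):
    return neutral_shape + offset + rng.normal(scale=0.05, size=(n, 8))

emotion_offsets = {"normal": np.zeros(8),
                   "happy": np.full(8, 0.5),
                   "sad": np.full(8, -0.5),
                   "angry": np.r_[np.full(4, 0.5), np.full(4, -0.5)]}
centroids = {name: (make_samples(off) - neutral_shape).mean(axis=0)
             for name, off in emotion_offsets.items()}

def classify(shape):
    """Label a face by the nearest per-emotion mean displacement."""
    disp = shape - neutral_shape
    return min(centroids, key=lambda e: np.linalg.norm(disp - centroids[e]))

probe = neutral_shape + np.full(8, 0.5) + rng.normal(scale=0.05, size=8)
print(classify(probe))  # -> happy
```

Working in displacement space rather than raw coordinates makes the classifier insensitive to the person's resting face shape, which is why expression variation, not absolute position, is what gets modelled.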

Hardware Implementation of Facial Feature Detection Algorithm (얼굴 특징 검출 알고리즘의 하드웨어 설계)

  • Kim, Jung-Ho; Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.1-10 / 2008
  • In this paper, we designed facial feature (eyes, mouth, and nose) detection hardware based on the ICT transform, which was developed earlier for face detection. Our design uses a pipeline architecture for high throughput, and it also tries to reduce memory size and the memory access rate. The algorithm and its hardware implementation were tested on the BioID database, a worldwide face detection test bed, and the facial feature detection rate was 100% in both software and hardware, assuming the face boundary was correctly detected. After synthesizing the hardware with the Dongbu 0.18 μm CMOS library, the die size was 376,821 μm² with a maximum operating clock of 78 MHz.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometry of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF).
From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
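The final RBF step, where non-feature points follow the displaced control points, can be sketched as standard RBF interpolation. The Gaussian kernel and its width are assumptions for illustration; the paper only names RBF without specifying the kernel here.

```python
import numpy as np

# Sketch of the cloning step: control (feature) points are moved by the
# animation parameters, and nearby non-feature vertices follow via
# Radial Basis Function interpolation. The Gaussian kernel width sigma
# is an assumed parameter, not taken from the paper.
def rbf_weights(control_pts, displacements, sigma=1.0):
    """Solve for RBF weights so the interpolant reproduces the control
    point displacements exactly."""
    d = np.linalg.norm(control_pts[:, None] - control_pts[None], axis=2)
    phi = np.exp(-(d / sigma) ** 2)
    return np.linalg.solve(phi, displacements)

def deform(points, control_pts, weights, sigma=1.0):
    """Displace arbitrary points by the weighted sum of kernels."""
    d = np.linalg.norm(points[:, None] - control_pts[None], axis=2)
    return points + np.exp(-(d / sigma) ** 2) @ weights

control = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
moved = np.array([[0.0, 0.1], [1.0, 0.0], [0.0, 1.0]])  # lift one corner
w = rbf_weights(control, moved - control)

# Each control point maps exactly to its target; vertices in between
# receive a smoothly blended displacement.
print(np.allclose(deform(control, control, w), moved))  # -> True
nearby = deform(np.array([[0.1, 0.1]]), control, w)
```

Because the kernel matrix is solved exactly, feature points hit their animation targets, while the smooth falloff of the kernel is what carries the motion to surrounding non-feature vertices.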

A Simple Way to Find Face Direction (간단한 얼굴 방향성 검출방법)

  • Park Ji-Sook; Ohm Seong-Yong; Jo Hyun-Hee; Chung Min-Gyo
    • Journal of Korea Multimedia Society / v.9 no.2 / pp.234-243 / 2006
  • The recent rapid development of HCI and surveillance technologies has brought great interest in application systems that process faces. Much of the research effort in these systems has focused on areas such as face recognition, facial expression analysis, and facial feature extraction; however, few approaches have been reported for face direction detection. This paper proposes a method to detect the direction of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, based on a single monocular view of the face, the proposed method introduces very simple formulas to estimate the horizontal or vertical rotation angle of the face. The horizontal rotation angle can be calculated from the ratio between the areas of the left and right facial triangles, while the vertical angle can be obtained from the ratio between the base and height of the facial triangle. Experimental results showed that our method obtains the horizontal angle within an error tolerance of ±1.68°, and that it performs better as the magnitude of the vertical rotation angle increases.
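The area-ratio idea can be sketched as below. The abstract does not give the exact formulas, so the weak-perspective ratio-to-angle model here is an assumption made purely for illustration; only the triangle-area and ratio machinery is from the abstract.

```python
import math

# Illustrative sketch, not the paper's exact formulas: the facial
# triangle (two eyebrows and the lower lip) is split at the vertical
# midline, and an assumed weak-perspective model maps the left/right
# area ratio to a horizontal rotation angle.
def triangle_area(a, b, c):
    """Shoelace formula for the area of triangle abc."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2

def horizontal_angle(left_area, right_area):
    """Assumed model: the half turned away from the camera shrinks, so
    recover an angle from the area ratio (illustrative only)."""
    ratio = min(left_area, right_area) / max(left_area, right_area)
    angle = math.degrees(math.asin((1 - ratio) / (1 + ratio)))
    return angle if right_area <= left_area else -angle

# Frontal face: both half-triangles are equal, so the angle is zero.
brow_l, brow_r, lip = (0.0, 0.0), (4.0, 0.0), (2.0, 3.0)
midpoint = (2.0, 0.0)
left = triangle_area(brow_l, midpoint, lip)
right = triangle_area(midpoint, brow_r, lip)
print(horizontal_angle(left, right))  # -> 0.0
```

Turning the head shrinks one half-triangle's projected area, so the ratio departs from 1 and the recovered angle grows, matching the qualitative behaviour the abstract describes.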


Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong; Choi, Jaesik; Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.193-201 / 2014
  • This paper proposes a facial expression recognition system using face detection, face alignment, facial unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is located by a face detector. From the result, a face alignment algorithm extracts feature points. The facial units come from a subset of action units generated by combining the obtained feature points. The facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, so they can be applied to real-time scenarios. Experimental results in real scenarios showed that the proposed system achieves an excellent performance, with recognition rates over 90%.
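The AdaBoost training loop at the heart of such a classifier can be sketched with decision stumps as weak learners. The 1-D "facial unit" feature and the toy data are made up for illustration; the boosting algorithm itself is the standard discrete AdaBoost.

```python
import math

# Minimal AdaBoost sketch with threshold stumps as weak learners; the
# 1-D facial-unit feature and labels below are illustrative only.
def best_stump(xs, ys, weights):
    """Pick the threshold/polarity stump with the lowest weighted error."""
    best = None
    for threshold in xs:
        for polarity in (1, -1):
            preds = [polarity if x >= threshold else -polarity for x in xs]
            err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, threshold, polarity)
    return best

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, threshold, polarity = best_stump(xs, ys, weights)
        err = max(err, 1e-10)                      # guard a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, threshold, polarity))
        # Re-weight: boost the examples this stump got wrong.
        weights = [w * math.exp(-alpha * y *
                                (polarity if x >= threshold else -polarity))
                   for w, x, y in zip(weights, xs, ys)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * (polarity if x >= threshold else -polarity)
                for alpha, threshold, polarity in ensemble)
    return 1 if score >= 0 else -1

# Toy data: a facial-unit value above 0.5 means "smiling" (+1).
xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys)
print([predict(model, x) for x in (0.05, 0.8)])  # -> [-1, 1]
```

A real system boosts over many facial-unit features rather than one, but each round still selects the single weak rule with the lowest weighted error and re-weights the mistakes.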