• Title/Summary/Keyword: Facial State Vector


Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Models-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae;Ko, Jae-Pil
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1465-1473 / 2006
  • In this paper, we present an approach to facial expression recognition in image sequences using Active Shape Models (ASM) and a state-based model. Given an image frame, we use ASM to locate the facial feature points and obtain the shape parameter vector of the model. We thereby obtain the shape parameter vectors for all frames of an image sequence, and the state-based model converts this vector set into a state sequence over three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the variation regions of an expression sequence differ from those of other expression sequences. In experiments on the public database KCFD, the proposed measure slightly outperforms the existing binary measure: with k = 1, the k-NN achieves a recognition rate of 89.1% with the proposed measure versus 86.2% with the binary measure.
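
The classification step can be pictured with a short sketch: each sequence becomes a string of per-frame states, and a k-NN classifier compares sequences with a similarity that emphasizes the frames where the expression varies. The three-state coding, the toy sequences, and the variation weighting below are hypothetical stand-ins, since the abstract does not fully specify the paper's measure.

```python
import numpy as np

# Each frame is mapped to one of three states by the state-based model, so a
# sequence is a string over {0, 1, 2}. The weighting below is a hypothetical
# illustration of emphasizing variation regions, not the paper's exact measure.
def sequence_similarity(a, b):
    """Count matching states, weighting frames where either sequence varies."""
    a, b = np.asarray(a), np.asarray(b)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    varies = np.zeros(n, dtype=bool)
    varies[1:] = (a[1:] != a[:-1]) | (b[1:] != b[:-1])   # variation regions
    weights = np.where(varies, 2.0, 1.0)
    return float(((a == b) * weights).sum() / weights.sum())

def knn_classify(query, train_seqs, train_labels, k=1):
    sims = [sequence_similarity(query, s) for s in train_seqs]
    top = np.argsort(sims)[::-1][:k]                     # k most similar sequences
    labels = [train_labels[i] for i in top]
    return max(set(labels), key=labels.count)            # majority vote

train = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 1]]
labels = ["smile", "smile", "surprise"]
print(knn_classify([0, 0, 1, 2, 2], train, labels, k=1))
```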


The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions for a vector graphic character using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined on the basis of Russell's internal emotion states. From this we find the principal component vectors with the largest effect on the character's facial features and expressions, and use them to generate the expressions. We also create natural intermediate characters and expressions by interpolating the weights applied to the character's features and expressions. The method saves considerable memory and creates intermediate expressions with little computation, so it can markedly improve the performance of character generation in web, mobile, and game services that require real-time control.
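
As a rough sketch of this pipeline, expressions stored as flattened coordinate vectors can be reduced to a few principal components, and intermediate expressions generated by interpolating the component weights; the data shapes and the blended endpoint expressions below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 9 expressions, each a flattened vector of control-point
# coordinates for the vector graphic character.
rng = np.random.default_rng(0)
expressions = rng.standard_normal((9, 60))

# Keep only the components that explain most of the variation across the faces.
pca = PCA(n_components=3)
weights = pca.fit_transform(expressions)       # per-expression component weights

def blend(i, j, t):
    """Interpolate component weights between expressions i and j (0 <= t <= 1)
    and reconstruct the intermediate face from the simplified components."""
    w = (1.0 - t) * weights[i] + t * weights[j]
    return pca.inverse_transform(w[None, :])[0]

half_way = blend(0, 1, 0.5)    # an intermediate expression between faces 0 and 1
print(half_way.shape)          # same dimensionality as the original face vector
```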

Automatic facial expression generation system of vector graphic character by simple user interface (간단한 사용자 인터페이스에 의한 벡터 그래픽 캐릭터의 자동 표정 생성 시스템)

  • Park, Tae-Hee;Kim, Jae-Ho
    • Journal of Korea Multimedia Society / v.12 no.8 / pp.1155-1163 / 2009
  • This paper proposes an automatic facial expression generation system for a vector graphic character using a Gaussian process model. The proposed method extracts the main feature vectors from twenty-six facial data of a character redefined on the basis of Russell's internal emotion states. Using a recent Gaussian process model, the SGPLVM, we find low-dimensional feature data from the extracted high-dimensional feature vectors and learn a probability distribution function (PDF). All parameters of the PDF are estimated by maximizing the likelihood of the learned expression data, and they are used to select the desired facial expressions in a two-dimensional space in real time. Simulation results confirm that the proposed facial expression generation tool works on small facial expression data sets and can generate various facial expressions without prior knowledge of the relation between facial expressions and emotions.
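
A plain (unscaled) GPLVM conveys the core idea: 2-D latent coordinates, initialized by PCA, are optimized to maximize the Gaussian process likelihood of the observed expression vectors. This is a minimal sketch; the random matrix stands in for the twenty-six expression vectors, and the SGPLVM's per-dimension scaling and kernel hyperparameter estimation are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_log_likelihood(x_flat, Y, q, gamma=1.0, noise=1e-3):
    """Negative GP log-likelihood of data Y given latent coordinates X."""
    n, d = Y.shape
    X = x_flat.reshape(n, q)
    K = np.exp(-gamma * cdist(X, X, "sqeuclidean")) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, Y)
    log_det = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * d * log_det + 0.5 * (Y * alpha).sum()

rng = np.random.default_rng(0)
Y = rng.standard_normal((26, 40))   # 26 expressions x 40 features (hypothetical)
Y -= Y.mean(axis=0)

# Initialize the 2-D latent space with PCA, then refine it by maximizing the
# GP likelihood of the expression data.
U, S, _ = np.linalg.svd(Y, full_matrices=False)
X0 = U[:, :2] * S[:2]
res = minimize(neg_log_likelihood, X0.ravel(), args=(Y, 2), method="L-BFGS-B")
X_latent = res.x.reshape(26, 2)     # 2-D points from which expressions are picked
```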


Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A / v.11A no.2 / pp.189-194 / 2004
  • This paper describes a method for distributing high-dimensional facial expression motion data over a two-dimensional space, and a method for creating facial expression animation in real time as an animator navigates this space and selects the desired expressions. The expression space is composed of about 2,400 facial expression frames, and its construction comes down to determining the shortest distance between any two expressions. In the expression space, treated as a manifold, distance is approximated as follows: we first define an expression state vector that describes each expression via a distance matrix holding the distances between pairs of markers; if two expressions are adjacent, their distance is taken as an approximation of the shortest distance between them. Once the distances between adjacent expressions are determined, these adjacency distances are chained together with the Floyd algorithm to yield the shortest distance between any two expression states. To realize this high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through the user interface.
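
The distance construction and projection can be sketched as follows: pairwise frame distances are thresholded to keep only adjacent expressions, the Floyd algorithm chains the adjacency distances into manifold (geodesic) distances, and a gradient-descent version of Sammon's mapping projects the result to 2-D. The random state vectors, the adjacency threshold, and the plain gradient step (Sammon's original method uses a Newton-like update) are assumptions for illustration, and the adjacency graph is assumed to be connected.

```python
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
frames = rng.standard_normal((200, 30))     # hypothetical expression state vectors

# Keep only distances between adjacent expressions; zeros mean "no edge".
D = squareform(pdist(frames))
threshold = np.quantile(D[D > 0], 0.1)
adjacency = np.where(D <= threshold, D, 0.0)

# Chain adjacency distances into shortest manifold distances (Floyd algorithm).
# Assumes the adjacency graph is connected, otherwise infinities appear.
geodesic = floyd_warshall(adjacency, directed=False)

def sammon(D, n_iter=300, lr=0.3, seed=0):
    """Project points with pairwise distances D to 2-D by gradient descent
    on Sammon's stress."""
    n = D.shape[0]
    Y = np.random.default_rng(seed).standard_normal((n, 2)) * 0.01
    c = D.sum()
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(d, 1.0)            # dummy values to avoid divide-by-zero
        Dm = D + np.eye(n)
        w = (Dm - d) / (Dm * d)
        np.fill_diagonal(w, 0.0)
        Y -= lr * (-2.0 / c) * (w[:, :, None] * diff).sum(axis=1)
    return Y

embedding = sammon(geodesic)                # the 2-D space the animator navigates
```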

Interactive Facial Expression Animation of Motion Data using CCA (CCA 투영기법을 사용한 모션 데이터의 대화식 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • Journal of Internet Computing and Services / v.6 no.1 / pp.85-93 / 2005
  • This paper describes how to distribute a vast quantity of high-dimensional facial expression data over a suitable space and produce facial expression animations as an animator navigates this space in real time, selecting expressions. We constructed the expression space from about 2,400 facial expression frames, by calculating the shortest distance between any two expressions. The distance between two points in the expression space, which is a manifold, is approximated as follows: after defining the expression state vector of each facial state via a distance matrix expressing the distances between pairs of markers, two expressions whose linear distance is shorter than a chosen value are considered adjacent, and that linear distance is taken as the shortest (manifold) distance between them. Once the distances between adjacent expressions are determined, the Floyd algorithm connects these adjacency distances to yield the shortest distance between any two expressions. We use CCA (Curvilinear Component Analysis) to visualize the high-dimensional expression space in two dimensions. As animators navigate this two-dimensional space, they produce facial animation through the user interface in real time.
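
Relative to Sammon's mapping in the previous entry, Curvilinear Component Analysis weights each pair by a decreasing function of the output distance, so nearby points in the projection are matched most faithfully. A minimal batch sketch follows; the step-function neighbourhood radius lam, the fixed step size, and the stand-in distance matrix are assumptions.

```python
import numpy as np

def cca_project(D, lam=1.0, n_iter=300, lr=0.05, seed=0):
    """Curvilinear Component Analysis sketch: minimize (D_ij - d_ij)^2 for
    pairs whose output distance d_ij lies inside the neighbourhood radius lam."""
    n = D.shape[0]
    Y = np.random.default_rng(seed).standard_normal((n, 2)) * 0.01
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(d, 1.0)             # avoid divide-by-zero on diagonal
        F = (d < lam).astype(float)          # only nearby output pairs contribute
        w = F * (D - d) / d
        np.fill_diagonal(w, 0.0)
        Y += lr * (w[:, :, None] * diff).sum(axis=1)
    return Y

rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 5))
D = np.sqrt(((pts[:, None] - pts[None]) ** 2).sum(-1))  # stand-in distance matrix
embedding = cca_project(D)                  # 2-D space for interactive navigation
```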


GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model

  • Ahn, Hyunchul;Kim, Seongjin;Kim, Jae Kyeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.6 / pp.2056-2069 / 2014
  • In order to implement interactive and personalized Web services properly, it is necessary to understand the tangible and intangible responses of the users and to recognize their emotional states. Recently, some studies have attempted to build emotional state estimation models based on facial expressions. Most of these studies have applied multiple regression analysis (MRA), artificial neural networks (ANN), or support vector regression (SVR) as the prediction algorithm, but the prediction accuracies have been relatively low. To improve the prediction performance of the emotion prediction model, we propose a novel SVR model that is optimized using a genetic algorithm (GA). The proposed algorithm, GASVR, is designed to optimize the kernel parameters and the feature subsets of SVRs in order to predict the levels of two aspects of the users' emotions: valence and arousal. To validate the usefulness of GASVR, we collected a real-world data set of facial responses and emotional states via a survey, and applied GASVR and the other algorithms, including MRA, ANN, and conventional SVR, to this data set. GASVR outperformed all of the comparative algorithms in predicting both the valence and the arousal levels.
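
A toy version of the search makes the design concrete: each chromosome encodes the SVR's C and gamma together with a feature-inclusion bitmask, and cross-validated error serves as the fitness. The population size, mutation rate, selection scheme, and synthetic valence target below are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 20))          # facial response features (synthetic)
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(120)   # e.g. valence level

N_FEAT, POP, GENS = X.shape[1], 20, 15

def fitness(chrom):
    """Cross-validated negative MSE of an SVR built from one chromosome."""
    mask, log_c, log_g = chrom[:N_FEAT] > 0.5, chrom[-2], chrom[-1]
    if not mask.any():
        return -np.inf
    svr = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(svr, X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

pop = rng.random((POP, N_FEAT + 2))
pop[:, -2:] = rng.uniform(-2, 2, (POP, 2))  # last two genes: log10(C), log10(gamma)
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][: POP // 2]]  # truncation selection
    cut = rng.integers(1, N_FEAT + 1)
    children = parents.copy()
    children[:, :cut] = parents[::-1, :cut]              # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] += rng.normal(0, 0.3, mutate.sum()) # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected features:", np.flatnonzero(best[:N_FEAT] > 0.5))
```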

Orthodromic Transfer of the Temporalis Muscle in Incomplete Facial Nerve Palsy

  • Aum, Jae Ho;Kang, Dong Hee;Oh, Sang Ah;Gu, Ja Hea
    • Archives of Plastic Surgery / v.40 no.4 / pp.348-352 / 2013
  • Background: Temporalis muscle transfer produces prompt surgical results with a one-stage operation in facial palsy patients. The orthodromic method is surgically simple, and the vector of muscle action is similar to the direction of natural temporalis muscle action. This article describes transfer of the temporalis muscle insertion to treat patients with incomplete facial nerve palsy. Methods: Between August 2009 and November 2011, six patients with unilateral incomplete facial nerve palsy underwent orthodromic temporalis muscle transfer. A preauricular incision was performed to expose the mandibular coronoid process, which was transected with a saw. Three strips of fascia lata were anchored to the muscle of the nasolabial fold through subcutaneous tunneling. The tension of the strips was adjusted by observing the shape of the nasolabial fold, and when optimal tension was achieved, the temporalis muscle was sutured to the strips. The surgical results were assessed by comparing pre- and postoperative photographs, which three independent observers evaluated. Results: The symmetry of the mouth corner improved in the resting state, and movement of the oral commissure was enhanced in facial animation after surgery. Conclusions: The orthodromic temporalis muscle transfer technique can produce prompt results by exploiting the natural temporalis muscle vector. It preserves residual facial nerve function in patients with incomplete facial nerve palsy and produces satisfying cosmetic outcomes without the malar muscle bulging that often occurs with the turn-over technique.

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The expression space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face; the set of distance matrices serves as the expression space. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this process, we visualize the expression space in 2D using Principal Component Analysis (PCA) projection. To assess the system's effectiveness, we had users control the facial expressions of a 3D avatar with it, and we evaluate the results.
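
The representation and projection are straightforward to sketch: each frame's state is the vector of pairwise distances between its feature points, and PCA maps those vectors onto a 2-D plane for navigation. The marker count and random frames below are placeholders for the roughly 2,400 captured frames.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.standard_normal((2400, 15, 3))    # 2400 frames x 15 face points (hypothetical)

# Expression state of a frame: distances between all pairs of feature points.
states = np.array([pdist(f) for f in frames])  # (2400, 105) distance vectors

# Visualize the expression space on a 2-D plane for interactive navigation.
plane = PCA(n_components=2).fit_transform(states)
print(plane.shape)                             # (2400, 2)
```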


Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we fix a representation that expresses facial states based on facial motion data. By distributing the facial expressions into an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a two-dimensional plane and selecting a series of expressions, animators can create animations or control the expressions of three-dimensional avatars in real time. To distribute the approximately 2,400 expression data into an intuitive space, the state of each expression must be represented; for this, a distance matrix holding the distances between pairs of feature points on the face is used. The LLE algorithm then visualizes these data on the two-dimensional plane. Animators control facial expressions and create animations through the system's user interface, and we evaluate the experimental results.
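
scikit-learn's LocallyLinearEmbedding gives a compact stand-in for the projection step; the same pairwise-distance state vectors as in the PCA example above serve as input, and the neighbour count is an assumed parameter.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
frames = rng.standard_normal((2400, 15, 3))        # hypothetical motion-capture frames
states = np.array([pdist(f) for f in frames])      # per-frame distance-matrix states

# Distribute the expression states into an intuitive 2-D space with LLE.
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12)
space = lle.fit_transform(states)                  # the plane navigated in the UI
```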

An Adaptive Face Recognition System Based on a Novel Incremental Kernel Nonparametric Discriminant Analysis

  • SOULA, Arbia;SAID, Salma BEN;KSANTINI, Riadh;LACHIRI, Zied
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2129-2147 / 2019
  • This paper introduces an adaptive face recognition method based on a novel Incremental Kernel Nonparametric Discriminant Analysis (IKNDA) that is able to learn through time. More precisely, IKNDA incrementally reduces data dimensionality, in a discriminative manner, as new samples are added asynchronously, so it handles dynamic and large data sets well. To perform face recognition effectively, we combine Gabor features and ordinal measures to extract facial features that are coded across local parts, as visual primitives. The ordinal measures are extracted from the Gabor filtering responses, and the histograms of these primitives across a variety of facial zones are concatenated to form a feature vector, whose dimension is reduced using PCA. Finally, this facial feature vector is the input to IKNDA. A comparative evaluation of IKNDA is performed for face recognition and for other classification tasks, comparing the IKNDA model to relevant state-of-the-art incremental and batch discriminant models. Experimental results show that IKNDA outperforms these discriminant models and is a better tool for improving face recognition performance.
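
The feature pipeline can be approximated with standard tools: Gabor responses at several orientations, binarized ordinal comparisons between responses as local primitives, and zone-wise statistics concatenated and compressed with PCA. Since IKNDA itself is the paper's contribution, batch LDA from scikit-learn stands in for the classifier below; the filter bank parameters, zone grid, and synthetic faces are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def face_vector(img, zones=4):
    """Gabor responses -> ordinal codes -> zone histograms -> one feature vector."""
    responses = [gabor(img, frequency=0.2, theta=t)[0]
                 for t in np.linspace(0, np.pi, 4, endpoint=False)]
    # Ordinal measure: compare responses of neighbouring orientations pixel-wise.
    codes = [(responses[i] > responses[(i + 1) % 4]).astype(np.uint8)
             for i in range(4)]
    feats = []
    h, w = img.shape
    for code in codes:
        for zi in range(zones):
            for zj in range(zones):
                zone = code[zi * h // zones:(zi + 1) * h // zones,
                            zj * w // zones:(zj + 1) * w // zones]
                feats.append(zone.mean())          # fraction of set primitives per zone
    return np.array(feats)

rng = np.random.default_rng(0)
faces = rng.random((40, 32, 32))                   # synthetic stand-in face images
labels = np.repeat(np.arange(4), 10)               # four hypothetical identities
X = np.array([face_vector(f) for f in faces])

X = PCA(n_components=20).fit_transform(X)          # slim the feature dimension
clf = LinearDiscriminantAnalysis().fit(X, labels)  # batch stand-in for IKNDA
print(clf.score(X, labels))
```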