• Title/Summary/Keyword: 표정 공간 (expression space)

A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability (표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크)

  • Kim, Jin-Ok
    • Journal of KIISE: Computing Practices and Letters, v.11 no.3, pp.284-291, 2005
  • I propose a framework for detecting, recognizing, and classifying facial features based on learned expression patterns. The framework recognizes facial expressions by using PCA and an expression HMM (EHMM), a Hidden Markov Model (HMM) approach that represents both the spatial information and the temporal dynamics of time-varying visual expression patterns. Because low-level spatial feature extraction is fused with the temporal analysis, this unified spatio-temporal HMM approach handles the common detection, tracking, and classification problems effectively. Recognition is accomplished by applying the posterior probability between current visual observations and previous visual evidence. Consequently, the framework shows accurate and robust recognition results on simple expressions as well as on the six basic facial expression patterns. The method allows us to perform a set of important tasks such as facial-expression recognition, HCI, and key-frame extraction.
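
The abstract names a concrete pipeline: PCA features, one HMM per expression class, and classification by posterior probability. A minimal sketch of that pipeline follows, assuming hmmlearn and scikit-learn as stand-ins for the authors' implementation; all names, shapes, and hyperparameters are illustrative, not the paper's code.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

def train_models(sequences_by_class, n_pca=20, n_states=4):
    """sequences_by_class: {label: list of (T_i, D) frame-feature arrays}."""
    all_frames = np.vstack([s for seqs in sequences_by_class.values() for s in seqs])
    pca = PCA(n_components=n_pca).fit(all_frames)   # low-level spatial features
    total = sum(len(seqs) for seqs in sequences_by_class.values())
    models, priors = {}, {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack([pca.transform(s) for s in seqs])
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)                           # temporal dynamics per class
        models[label], priors[label] = m, len(seqs) / total
    return pca, models, priors

def classify(seq, pca, models, priors):
    """Pick the class maximizing the posterior: log P(O|c) + log P(c)."""
    z = pca.transform(seq)
    scores = {c: models[c].score(z) + np.log(priors[c]) for c in models}
    return max(scores, key=scores.get)
```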

Intuitive Quasi-Eigenfaces for Facial Animation (얼굴 애니메이션을 위한 직관적인 유사 고유 얼굴 모델)

  • Kim, Ig-Jae;Ko, Hyeong-Seok
    • Journal of the Korea Computer Graphics Society, v.12 no.2, pp.1-7, 2006
  • Methods for creating an expression basis for blendshape-based facial animation fall broadly into two categories: manual modeling by an animator and modeling based on statistical methods. A basis created by manual modeling has the advantage that each basis element corresponds to an intuitively recognizable expression, which makes traditional key-frame control possible. However, because such a basis covers only part of the expression space, it incurs large errors when expressions are reconstructed from motion data. In contrast, a statistically derived basis (eigenfaces) covers nearly the entire expression space and thus minimizes reconstruction error, but the resulting expression models are not visually intuitive. This paper therefore presents a method that transforms a manually created basis into quasi-eigenfaces. The resulting basis preserves visually intuitive facial expressions while its coverage of the expression space is extended to approximate that of a statistically derived basis.
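
The statistical half of the comparison above is the eigenface construction. A short sketch of that step, plus a projection that measures how much of a hand-modeled blendshape falls inside the eigen-expression space, is given below under the assumption that expressions are flattened vertex arrays; the paper's actual quasi-eigenface construction is more involved.

```python
import numpy as np

def eigen_expressions(frames, k):
    """frames: (N, 3V) flattened vertex positions of N captured expressions."""
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]                       # k orthonormal eigen-expressions

def coverage(manual_basis_vec, eigen_basis):
    """Fraction of a hand-modeled basis vector captured by the eigen space."""
    coeffs = eigen_basis @ manual_basis_vec   # rows of eigen_basis are orthonormal
    return np.linalg.norm(coeffs) / np.linalg.norm(manual_basis_vec)
```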

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory, v.31 no.12, pp.726-734, 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use the distance matrix that holds the distances between pairs of feature points on the face. The shortest trajectories are found by dynamic programming. The space of facial expressions is multidimensional. To navigate this space, we visualize it in 2D by using multidimensional scaling (MDS). But because there are too many facial expressions to choose from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. In the beginning, the system creates about 10 clusters from the space of 2,400 facial expressions. Every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on the 2D screen and are used as candidate key frames for key-frame animation. The user selects new key frames along the navigation path of the previous level. At the maximum level, the user completes key-frame specification. We let animators use the system to create example animations and evaluate the system based on the results.
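
The navigation pipeline in this abstract (distance matrix, MDS to 2D, fuzzy-cluster centers as candidate key frames) is compact enough to sketch. The code below assumes scikit-learn's MDS and scikit-fuzzy's c-means as stand-ins for the paper's implementation; cluster counts and parameters are illustrative.

```python
import numpy as np
from sklearn.manifold import MDS
import skfuzzy as fuzz

def embed_2d(dist_matrix):
    """dist_matrix: (N, N) pairwise expression distances."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist_matrix)      # (N, 2) screen coordinates

def candidate_keyframes(points_2d, n_clusters):
    """Fuzzy c-means centers serve as candidate key frames for one level."""
    cntr, u, *_ = fuzz.cmeans(points_2d.T, c=n_clusters, m=2.0,
                              error=1e-4, maxiter=300)
    return cntr, np.argmax(u, axis=0)          # centers, hard labels

# Hierarchy: about 10 clusters at the top level, doubling at each level.
# for level in range(4):
#     centers, labels = candidate_keyframes(xy, 10 * 2 ** level)
```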

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions: Part A, v.11A no.4, pp.277-284, 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to choose from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. In the beginning, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on the 2D screen and are used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), the user wants to see more details, so the system creates more clusters for the new zoom level. Every time the zoom-in level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification. The user can also go back to the previous level by zooming out and update the navigation path. We let users use the system to control the facial expression of a 3D avatar and evaluate the system based on the results.
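
Since this abstract adds the discrete zoom behavior to the system described in the previous entry, a tiny state sketch of just that behavior may help: zooming in doubles the cluster count, and zooming out returns to the previous level so the navigation path can be revised. Clustering is delegated to a callback (for example, the fuzzy c-means sketch above); the path-update rule shown is an assumption, not the paper's.

```python
class ZoomNavigator:
    """Discrete zoom over a hierarchy of fuzzy clusterings (illustrative)."""

    def __init__(self, cluster_fn, base_clusters=11):
        self.cluster_fn = cluster_fn   # (points, n_clusters) -> (centers, labels)
        self.base = base_clusters      # about 11 clusters at the top level
        self.level = 0
        self.path = []                 # key frames chosen so far

    def clusters_at_level(self, points):
        return self.cluster_fn(points, self.base * 2 ** self.level)

    def zoom_in(self):
        self.level += 1                # more clusters -> more detail

    def zoom_out(self):
        if self.level > 0:
            self.level -= 1
            self.path = self.path[:-1]  # assumed: drop the finest choice
```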

Model based Facial Expression Recognition using New Feature Space (새로운 얼굴 특징공간을 이용한 모델 기반 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B, v.17B no.4, pp.309-316, 2010
  • This paper introduces a new model-based method for facial expression recognition that uses facial grid angles as the feature space. To recognize the six main facial expressions, the proposed method takes a grid approach and establishes a new feature space based on the angles formed by each grid cell's edges and vertices. This approach is robust to several affine transformations, such as translation, rotation, and scaling, which severely degrade the accuracy of other facial expression recognition algorithms. The paper also demonstrates how the feature space is created from the angles and how a feature subset is selected within this space using a wrapper approach. The selected features are classified by SVM and 3-NN classifiers, and the classification results are validated with two-tier cross-validation. The proposed method achieves a 94% classification rate, and the feature selection algorithm improves results by up to 10% over the full feature set.
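
The angle feature this abstract relies on is easy to make concrete. The sketch below computes the angle at a grid vertex (invariant to translation, rotation, and scaling, matching the robustness claim) and wires up wrapper-style feature selection with SVM and 3-NN classification. scikit-learn is an assumed stand-in for the paper's tooling, and the plain 5-fold CV here stands in for the paper's two-tier validation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

def vertex_angle(p, a, b):
    """Angle at grid vertex p formed by edges p->a and p->b."""
    u, v = a - p, b - p
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def evaluate(X, y, n_features=10):
    """Wrapper feature selection, then SVM / 3-NN scores (5-fold CV)."""
    svm = SVC(kernel="rbf")
    knn = KNeighborsClassifier(n_neighbors=3)
    selector = SequentialFeatureSelector(svm, n_features_to_select=n_features)
    Xs = selector.fit_transform(X, y)
    return (cross_val_score(svm, Xs, y, cv=5).mean(),
            cross_val_score(knn, Xs, y, cv=5).mean())
```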

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features (표정별 가버 웨이블릿 주성분특징을 이용한 실시간 표정 인식 시스템)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.6, pp.821-827, 2009
  • Human emotion is reflected in facial expressions, so recognizing facial expressions is a good way to understand people's emotions. Conventional facial expression recognition systems select interest points and then extract features without analyzing their physical meaning; finding the interest points takes a long time, and it is hard to estimate their positions accurately. Moreover, to implement facial expression recognition on a real-time embedded system, the algorithm must be simplified and its resource usage reduced. In this paper, we propose a real-time facial expression recognition algorithm that projects grid points onto an expression space based on Gabor wavelet features. A facial expression is described simply by feature vectors in the expression space and is classified by a neural network whose resource requirements are dramatically reduced. The proposed system handles five expressions: anger, happiness, neutral, sadness, and surprise. In experiments, the average execution time is 10.251 ms and the recognition rate is measured at 87-93%.
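
A sketch of the feature-extraction step described above: Gabor responses sampled at fixed grid points, then projected and classified. OpenCV and scikit-learn are assumed stand-ins, and the filter-bank parameters are illustrative, not the paper's.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def gabor_bank(n_orient=4, n_scale=3):
    """Small bank of Gabor kernels over orientations and scales."""
    kernels = []
    for s in range(n_scale):
        for o in range(n_orient):
            theta = np.pi * o / n_orient
            lambd = 4.0 * 2 ** s              # wavelength grows with scale
            kernels.append(cv2.getGaborKernel((21, 21), sigma=lambd / 2,
                                              theta=theta, lambd=lambd,
                                              gamma=0.5, psi=0))
    return kernels

def grid_features(gray_face, grid_points, kernels):
    """Stack Gabor responses at each (x, y) grid point into one vector."""
    responses = [cv2.filter2D(gray_face, cv2.CV_32F, k) for k in kernels]
    return np.array([r[y, x] for r in responses for (x, y) in grid_points])

# Expression-space projection and a compact classifier, per the abstract:
# pca = PCA(n_components=30).fit(train_X)
# clf = MLPClassifier(hidden_layer_sizes=(32,)).fit(pca.transform(train_X), y)
```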

Analysis of the Accuracy of Quaternion-Based Spatial Resection Based on the Layout of Control Points (기준점 배치에 따른 쿼터니언기반 공간후방교회법의 정확도 분석)

  • Kim, Eui Myoung;Choi, Han Seung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.4, pp.255-262, 2018
  • To determine three-dimensional position in photogrammetry, spatial resection is a prerequisite step for determining the exterior orientation parameters. The existing spatial resection method relies on non-linear equations that require initial values of the exterior orientation parameters and can suffer from gimbal lock. In contrast, spatial resection using quaternions is a closed-form solution that requires no initial values for the EOP (Exterior Orientation Parameters) and eliminates the gimbal lock problem. In this study, to analyze the stability of quaternion-based spatial resection, the exterior orientation parameters were determined for different layouts of control points and compared with the values determined using the existing non-linear equations. The results show that quaternion-based spatial resection is affected by the layout of the control points. Therefore, if initial values of the exterior orientation parameters cannot be obtained, it is more effective to estimate them using quaternion-based spatial resection and then apply them to the collinearity-equation-based spatial resection method.
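
The quaternion ingredient that makes the closed-form resection free of gimbal lock is the direct quaternion-to-rotation-matrix map, sketched below. The paper's full resection, which solves for the quaternion and projection center from the control points, is not reproduced here.

```python
import numpy as np

def quat_to_rotation(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix (no gimbal lock).
    This matrix can seed a collinearity-equation refinement, as the
    abstract recommends when no initial orientation values are available."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```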

On Parameterizing of Human Expression Using ICA (독립 요소 분석을 이용한 얼굴 표정의 매개변수화)

  • Song, Ji-Hey;Shin, Hyun-Joon
    • Journal of the Korea Computer Graphics Society, v.15 no.1, pp.7-15, 2009
  • In this paper, a novel framework that synthesizes and clones facial expressions in parameter spaces is presented. To overcome the difficulty of manipulating face geometry models with many degrees of freedom, many parameterization methods have been introduced. Here, a data-driven parameterization method is proposed that represents a variety of expressions with a small set of fundamental independent movements based on the ICA technique. The face deformation induced by the parameters is also learned from the data to capture the nonlinearity of facial movements. With this parameterization, one can control the expression of an animated character's face through the parameters. By separating the parameterization from the deformation learning process, we believe this framework can be adopted for a variety of applications, including expression synthesis and cloning. The experimental results demonstrate the efficient production of realistic expressions using the proposed method.
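
A minimal sketch of the data-driven parameterization described above, assuming scikit-learn's FastICA as a stand-in: independent components become the control parameters, and the inverse transform maps a parameter setting back to geometry. Note that the inverse here is linear, whereas the paper learns a nonlinear deformation from the data.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_expression_parameters(frames, n_params=8):
    """frames: (N, 3V) flattened vertex offsets of N captured expressions."""
    ica = FastICA(n_components=n_params, random_state=0)
    params = ica.fit_transform(frames)      # (N, n_params) per-frame controls
    return ica, params

def synthesize(ica, param_vector):
    """Map one parameter setting back to face geometry (linear stand-in
    for the learned nonlinear deformation in the paper)."""
    return ica.inverse_transform(param_vector[None, :])[0]
```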

A Study on the Automation of Interior Orientation and Relative Orientation (내부표정과 상호표정의 자동화에 관한 연구)

  • Jeong, Soo;Park, Choung-Hwan;Yun, Kong-Hyun;Yeu, Bock-Mo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.17 no.2, pp.105-116, 1999
  • Owing to the rapid development of computer systems and the introduction of image processing techniques, recent photogrammetric studies have concentrated on automating the orientation work that has been carried out by skilled professionals in analog and/or analytical photogrammetry. To automate the whole photogrammetric workflow, the orientation processes, including interior, relative, and absolute orientation, must be automated first. This study aims to automate the interior and relative orientation processes. For this purpose, we applied the Hough transform to interior orientation and an object-space matching technique to relative orientation. As a result, we present a method that automates the interior and relative orientation processes, which are operated only semi-automatically in most commercial digital photogrammetric workstations currently available.
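
The interior-orientation half of this abstract reduces to two steps that are easy to sketch: locate the fiducial marks with a Hough transform, then fit a pixel-to-photo affine transform. OpenCV and NumPy are assumed stand-ins for the paper's tooling, and all thresholds are illustrative.

```python
import cv2
import numpy as np

def find_fiducials(gray, n_marks=4):
    """Detect circular fiducial marks with the circle Hough transform."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                               param1=120, param2=30, minRadius=5, maxRadius=40)
    return None if circles is None else circles[0, :n_marks, :2]

def interior_orientation(pixel_xy, photo_xy):
    """Least-squares 6-parameter affine fit: photo = A @ [px, py, 1]."""
    G = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])
    A, *_ = np.linalg.lstsq(G, photo_xy, rcond=None)
    return A.T                               # (2, 3) affine matrix
```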

The Design of Context-Aware Middleware Architecture for Processing Facial Expression Information (얼굴표정정보를 처리하는 상황인식 미들웨어의 구조 설계)

  • Jin-Bong Kim
    • Proceedings of the Korea Information Processing Society Conference, 2008.11a, pp.649-651, 2008
  • Context-aware computing can broadly be seen as part of ubiquitous computing, but its approach to application differs from that of ubiquitous computing. Context-aware computing research to date has focused mainly on identifying the objects that generate situations in a designated space and on recognizing the situations those objects generate; location has been used almost exclusively as the context information. In this paper, we instead propose the architecture of CM-FEIP, a context-aware middleware that recognizes emotion by using an object's facial expression as context information. The virtual-space modeling of CM-FEIP consists of context modeling and service modeling. An ontology built on facial expression recognition technology is used to recognize the object's emotion: the facial expression serves as the context information, and when the expression is neutral, various environmental data (temperature, humidity, weather, etc.) are used instead. The ontology expresses the object's emotion in the OWL language, and Jena is used as the emotion inference engine.
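
The ontology step above is described in terms of OWL plus the Jena inference engine (Java). As a rough Python stand-in, the sketch below uses rdflib with a made-up mini vocabulary to show the shape of the flow: a facial expression asserted as context, a Jena-style rule applied, and the inferred emotion queried back out.

```python
from rdflib import Graph, Namespace, Literal, RDF

CTX = Namespace("http://example.org/cm-feip#")   # hypothetical namespace

g = Graph()
g.add((CTX.user1, RDF.type, CTX.Person))
g.add((CTX.user1, CTX.hasFacialExpression, Literal("smile")))

# A Jena-style rule, rendered here as a conditional insert:
# (?p hasFacialExpression "smile") -> (?p hasEmotion "joy")
for person in g.subjects(CTX.hasFacialExpression, Literal("smile")):
    g.add((person, CTX.hasEmotion, Literal("joy")))

for row in g.query(
    "SELECT ?p ?e WHERE { ?p <http://example.org/cm-feip#hasEmotion> ?e }"
):
    print(row.p, "feels", row.e)
```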