• Title/Summary/Keyword: facial expression (얼굴 표정)

Facial Expression Synthesis Using 3D Facial Modeling (3차원 얼굴 모델링을 이용한 표정 합성)

  • 심연숙;변혜란;정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1998.11a
    • /
    • pp.40-44
    • /
    • 1998
  • Research on natural facial animation is being actively pursued to provide users with a friendly interface [5][6]. This paper proposes an animation method for synthesizing natural facial expressions. To animate the face of a specific person, a generic model composed of a 3D mesh is first fitted to that person to obtain a person-specific 3D face model. For natural expression synthesis of Korean faces, we build a generic model that reflects the characteristics of Korean faces, based on studies of the standard Korean face, and use it to obtain the person-specific 3D face model. An expression synthesis method based on anatomical structure, such as the facial muscles and skin tissue of a real face, is used so that realistic and natural facial animation can be achieved.

The Congruent Effects of Gesture and Facial Expression of Virtual Character on Emotional Perception: What Facial Expression is Significant? (가상 캐릭터의 몸짓과 얼굴표정의 일치가 감성지각에 미치는 영향: 어떤 얼굴표정이 중요한가?)

  • Ryu, Jeeheon;Yu, Seungbeom
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.5
    • /
    • pp.21-34
    • /
    • 2016
  • In designing and developing a virtual character, it is important to correctly deliver the target emotion generated by the combination of facial expression and gesture. The purpose of this study is to examine the effect of congruence/incongruence between gesture and facial expression on the perceived target emotion. Four emotions were applied: joy, sadness, fear, and anger. The results showed that the sadness emotion was incorrectly perceived; it was perceived as anger instead of sadness. Sadness can easily be confused when facial expression and gesture are presented simultaneously. For the other emotional states, however, the intended emotional expressions were correctly perceived. The overall evaluation of the virtual character's emotional expression was significantly low when a joy gesture was combined with a sad facial expression. The results suggest that the emotional gesture is more influential in correctly delivering the target emotion to users. The study also suggests that social cues such as the gender or age of the virtual character should be studied further.

Facial Expression Feature Extraction for Expression Recognition (표정 인식을 위한 얼굴의 표정 특징 추출)

  • Kim, Young-Il;Kim, Jung-Hoon;Hong, Seok-Keun;Cho, Seok-Je
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.537-540
    • /
    • 2005
  • This paper extracts expression features for recognizing expressions such as smiling, sadness, drowsiness, surprise, winking, and a neutral face, which carry diverse information about a person's emotion, health, and mental state, by detecting the eyes and mouth, the local facial components that characterize an expression. The overall algorithm first detects the face region from the input image using color information, then extracts the local feature components, the eyes and mouth, using the positional information of feature points in the face. Preprocessing algorithms such as edge detection, binarization, morphology, and labeling are applied in this feature-point extraction step. Primary feature points such as the eyes, eyebrows, nose, and mouth are extracted using the size of the labeled regions, and the exact eyes and mouth are then obtained through a secondary extraction step that uses accumulated histogram values and structural positional relationships. To quantitatively measure expression features under expression changes, geometric information such as the size and area of the extracted eyes and mouth, the distance between the eyebrows, and the distance from the eyes to the mouth is used to extract features for the six expressions.
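
The abstract names a chain of preprocessing operations (color-based localization, binarization, morphology, labeling) without code. The following is a minimal, hypothetical sketch of that chain using OpenCV, applied to locate face-region candidates; the skin-color range and area threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: color segmentation -> binarization -> morphology -> labeling.
import cv2
import numpy as np

def extract_candidate_regions(bgr_image):
    # 1. Rough face localization from color information (YCrCb skin range is an assumption).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # 2. Binarized mask cleaned up with morphological opening and closing.
    kernel = np.ones((5, 5), np.uint8)
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

    # 3. Labeling: keep connected components large enough to be face-region candidates.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(skin_mask)
    candidates = [stats[i] for i in range(1, num_labels)
                  if stats[i, cv2.CC_STAT_AREA] > 500]  # area threshold is arbitrary
    return skin_mask, candidates
```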

Modeling Based on a Muscle Model for Three-Dimensional Facial Expression Animation (3차원 얼굴 표정 애니메이션을 위한 근육모델 기반의 모델링)

  • 이혜진;정현숙;이일병
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04a
    • /
    • pp.742-744
    • /
    • 2002
  • Facial animation, which makes it easy to distinguish individuals and serves as an aid to efficient communication, has recently been studied actively. In this paper, to generate facial expressions we use a muscle-based modeling method grounded in anatomical structure, such as the skin tissue and facial muscles of a real face, so that realistic and natural facial animation can be achieved. In addition, to obtain a smooth face model, we subdivide the polygon mesh and add the facial muscles that most strongly influence facial expressions, presenting a method for producing varied and natural expressions. By applying the proposed method to the Waters model [3], we obtained results that approach more realistic facial animation. The results can be used in many areas such as video conferencing, virtual reality, distance education, and film.
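
The abstract describes a muscle-based deformation in the spirit of the Waters model but gives no equations. As a rough illustration only (not the authors' implementation), a linear muscle can be sketched as vertices inside an influence cone being pulled toward the muscle's attachment point with angular and radial falloff; the falloff shapes and parameters below are assumptions.

```python
# Simplified, hypothetical linear-muscle deformation acting on a polygon mesh.
import numpy as np

def apply_linear_muscle(vertices, attachment, insertion, contraction,
                        influence_angle_deg=30.0):
    # vertices: (n, 3) mesh vertices; attachment/insertion: (3,) muscle end points;
    # contraction: scalar in [0, 1] controlling how strongly the muscle pulls.
    axis = insertion - attachment
    axis_len = np.linalg.norm(axis)
    axis_dir = axis / axis_len
    cos_limit = np.cos(np.radians(influence_angle_deg))

    deformed = vertices.copy()
    for i, v in enumerate(vertices):
        to_v = v - attachment
        dist = np.linalg.norm(to_v)
        if dist == 0 or dist > axis_len:
            continue  # outside the muscle's reach
        cos_angle = np.dot(to_v / dist, axis_dir)
        if cos_angle < cos_limit:
            continue  # outside the influence cone
        angular = (cos_angle - cos_limit) / (1.0 - cos_limit)   # 0..1 angular falloff
        radial = np.cos(0.5 * np.pi * dist / axis_len)          # fades toward the insertion
        deformed[i] = v - contraction * angular * radial * (v - attachment)
    return deformed
```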

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.7-15
    • /
    • 2014
  • This paper proposes a facial expression recognition algorithm using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is divided into two sub-images: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components, the eyes and mouth, begins by obtaining the eye image and the mouth image. An eigenface is produced by the PCA training process on the learning images, and an eigeneye and an eigenmouth are derived from the eigenface. The eye image is obtained by template matching the upper image with the eigeneye, and the mouth image by template matching the lower image with the eigenmouth. Expression recognition then uses geometrical properties of the eyes and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous methods; in particular, the extraction ratio of the mouth image reaches 99%. A recognition system using the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
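
The matching step described above (sliding an eigen-template such as the eigeneye over the upper or lower half of the face) can be sketched with normalized cross-correlation template matching. The function names and the use of cv2.matchTemplate are illustrative assumptions, not necessarily the authors' exact procedure.

```python
# Hypothetical sketch of locating a facial component with an eigen-template.
import cv2
import numpy as np

def locate_component(face_gray, eigen_template):
    # face_gray: grayscale face sub-image; eigen_template: small grayscale template.
    result = cv2.matchTemplate(face_gray, eigen_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    h, w = eigen_template.shape[:2]
    x, y = max_loc
    return (x, y, w, h), max_val  # bounding box of the best match and its score

# Usage sketch: split the face into upper/lower halves, then match each template.
# upper = face_gray[: face_gray.shape[0] // 2]
# lower = face_gray[face_gray.shape[0] // 2 :]
# eye_box, _ = locate_component(upper, eigen_eye)
# mouth_box, _ = locate_component(lower, eigen_mouth)
```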

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • 김성호
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.4
    • /
    • pp.569-578
    • /
    • 2003
  • This paper presents a method in which the user produces a real-time facial animation by navigating a space of facial expressions created from a large number of captured expressions. The core of the method is how to define the distance between facial expressions, how to use that distance to distribute the expressions in an intuitive space, and a user interface for generating real-time facial expression animation in this space. We created the search space from about 2,400 captured facial expression frames, and as the user freely travels through the space, the expressions located on the path are displayed in sequence. To visually distribute the roughly 2,400 captured expressions, the distance between every pair of frames is needed; we use Floyd's algorithm to obtain all-pairs shortest paths between frames and derive a manifold distance from them. The frames are then distributed in a 2D intuitive space by applying multidimensional scaling to the manifold distances, preserving the original distances between expression frames as much as possible. A major advantage of the presented method is that the user can navigate the intuitive space freely and without restriction, because there are always expression frames along the navigated path from which to generate animation. It is also efficient in that the user can preview and regenerate the desired facial expression animation in real time through an easy-to-use interface.
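
The pipeline in the abstract (neighbor distances, Floyd's all-pairs shortest paths as a manifold distance, then multidimensional scaling into 2D) can be sketched roughly as below using SciPy and scikit-learn rather than the authors' code; the k-nearest-neighbor graph construction and parameter values are assumptions for illustration.

```python
# Rough sketch: frames -> k-NN graph -> Floyd-Warshall manifold distances -> 2D MDS layout.
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from sklearn.manifold import MDS
from sklearn.neighbors import kneighbors_graph

def embed_expression_frames(frames, n_neighbors=8):
    # frames: (n_frames, n_features) array, e.g. stacked marker coordinates per frame.
    graph = kneighbors_graph(frames, n_neighbors, mode="distance")  # sparse adjacency with distances
    manifold_dist = floyd_warshall(graph, directed=False)           # all-pairs shortest paths
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(manifold_dist)                         # (n_frames, 2) layout
```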

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A
    • /
    • v.11A no.2
    • /
    • pp.189-194
    • /
    • 2004
  • This paper describes a method for distributing a large amount of high-dimensional facial expression motion data in a two-dimensional space, and a method for creating facial expression animation as the animator navigates this space and selects the desired expressions in real time. The expression space was composed of about 2,400 facial expression frames. Construction of the expression space comes down to determining the shortest distance between any two expressions. The expression space, as a manifold space, approximates the distance between two points as follows: after defining an expression state vector that describes each expression using a distance matrix of the distances between markers, the distance between two adjacent expressions is regarded as an approximation of their shortest distance. Once these adjacency distances are determined, they are chained together to yield the shortest distance between any two expression states, for which Floyd's algorithm is used. To visualize the high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through the user interface.
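
Sammon's mapping itself is not detailed in the abstract; the following is a small, illustrative gradient-descent implementation of the Sammon stress, with step size, iteration count, and initialization as assumptions, not the authors' settings.

```python
# Illustrative Sammon's mapping by plain gradient descent on the Sammon stress.
import numpy as np

def sammon_mapping(dist, n_iter=300, lr=0.3, seed=0):
    # dist: (n, n) symmetric matrix of pairwise (e.g. manifold) distances.
    n = dist.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(n, 2))      # random 2D initialization
    eps = 1e-12
    scale = dist[np.triu_indices(n, 1)].sum()    # normalizing constant in the stress

    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]     # (n, n, 2) pairwise differences
        d_emb = np.sqrt((diff ** 2).sum(-1)) + eps
        np.fill_diagonal(d_emb, 1.0)             # avoid division by zero on the diagonal
        # Gradient of the Sammon stress with respect to each embedded point.
        factor = (d_emb - dist) / (d_emb * dist + eps)
        np.fill_diagonal(factor, 0.0)
        grad = 2.0 / scale * (factor[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y
```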

Cartoon Rendering for Facial Expression (얼굴 표정의 카툰 렌더링)

  • Jung, Hye-Moon;Byun, Hae-Won
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.449-454
    • /
    • 2009
  • In contrast with general objects, the human face carries "expression" as an important visual factor. For this reason, cartoonists draw shadows that emphasize facial shape and facial expression in order to convey the atmosphere of a scene and the traits of a character. Such shadows should be considered when doing cartoon rendering of facial expressions, even though they are not physically based shading. This paper proposes a cartoon rendering system for facial expressions based on the shading techniques of real cartoonists. First, we surveyed these techniques across a variety of collected cartoon images and defined shadow templates for each facial expression so that cartoon rendering can be done differently per expression. We then built a cartoon rendering system for facial expressions that, based on the survey results, effectively emphasizes facial shape and expression, and showed its usefulness through a user questionnaire.
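
The expression-specific shadow templates are not specified in the abstract; as generic background only, cartoon (cel) shading is commonly implemented by quantizing the diffuse lighting term into a few flat bands, as in this standard sketch, which is not the authors' template-based method.

```python
# Standard cel-shading sketch: quantize diffuse lighting into flat bands.
import numpy as np

def toon_shade(normals, light_dir, bands=(0.2, 0.5, 1.0)):
    # normals: (n, 3) unit surface normals; light_dir: (3,) unit light direction.
    diffuse = np.clip(normals @ light_dir, 0.0, 1.0)
    thresholds = np.linspace(0.0, 1.0, len(bands) + 1)[1:]   # band boundaries
    indices = np.searchsorted(thresholds, diffuse)            # which band each value falls in
    indices = np.clip(indices, 0, len(bands) - 1)
    return np.asarray(bands)[indices]                         # flat intensity per sample
```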

Robust Facial Expression Recognition using PCA Representation (PCA 표상을 이용한 강인한 얼굴 표정 인식)

  • Shin Young-Suk
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.323-331
    • /
    • 2005
  • This paper proposes an improved system for recognizing facial expressions in various internal states that is illumination-invariant and does not require a detectable cue such as a neutral expression. As preprocessing to extract the facial expression information, a whitening step was applied: the mean of the images is set to zero and the variances are equalized to unit variance, which reduces much of the variability due to lighting. After the whitening step, we used facial expression information based on a principal component analysis (PCA) representation that excludes the first principal component. It is therefore possible to extract features from facial expression images without a detectable cue of a neutral expression. The experimental results also show that varied and natural facial expression recognition can be implemented, because the recognition is based on a dimensional model of internal states and is performed on images selected randomly from facial expression images corresponding to 83 internal emotional states.
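
A minimal sketch, under assumptions, of the representation described above: per-image zero-mean/unit-variance whitening followed by a PCA projection that discards the first principal component (which often absorbs global lighting variation). The component count is illustrative, not the paper's value.

```python
# Hypothetical sketch: whitening + PCA representation excluding the first component.
import numpy as np
from sklearn.decomposition import PCA

def expression_features(images, n_components=50):
    # images: (n_samples, n_pixels) array of flattened face images.
    X = images - images.mean(axis=1, keepdims=True)          # zero mean per image
    X = X / (X.std(axis=1, keepdims=True) + 1e-8)            # unit variance per image
    pca = PCA(n_components=n_components).fit(X)
    projected = pca.transform(X)
    return projected[:, 1:]                                   # drop the first principal component
```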

A Study on Face Expression Recognition using LDA Mixture Model and Nearest Neighbor Pattern Classification (LDA 융합모델과 최소거리패턴분류법을 이용한 얼굴 표정 인식 연구)

  • No, Jong-Heun;Baek, Yeong-Hyeon;Mun, Seong-Ryong;Gang, Yeong-Jin
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.11a
    • /
    • pp.167-170
    • /
    • 2006
  • This paper concerns a facial expression recognition algorithm that uses an LDA fusion model, a linear classifier, together with minimum-distance pattern classification. The proposed algorithm recognizes facial expressions through a two-stage feature extraction process followed by a recognition stage. In the feature extraction stage, images containing facial expressions are first transformed from a high-dimensional to a low-dimensional space using PCA, and the feature vectors are then separated into classes using LDA. In the next stage, the facial expression is recognized by applying minimum-distance pattern classification to the feature vectors computed through the LDA fusion model. Experiments on a database of the six basic emotions (happiness, anger, surprise, fear, sadness, and disgust) confirmed that the proposed algorithm achieves a higher recognition rate than existing algorithms and a recognition rate that is uniform regardless of the particular expression.
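
The two-stage pipeline in the abstract (PCA for dimensionality reduction, LDA for class separation, then minimum-distance classification against class centroids) can be sketched as follows; the component counts and helper names are hypothetical, not the paper's.

```python
# Hypothetical sketch: PCA -> LDA -> nearest-class-mean (minimum distance) classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train(train_images, labels, n_pca=60):
    # train_images: (n_samples, n_pixels); labels: (n_samples,) array of expression classes.
    pca = PCA(n_components=n_pca).fit(train_images)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(train_images), labels)
    feats = lda.transform(pca.transform(train_images))
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])  # class centroids
    return pca, lda, classes, means

def classify(image, pca, lda, classes, means):
    feat = lda.transform(pca.transform(image.reshape(1, -1)))
    dists = np.linalg.norm(means - feat, axis=1)           # distance to each class centroid
    return classes[np.argmin(dists)]                        # minimum-distance decision
```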
