• Title/Summary/Keyword: facial muscles (얼굴근육)

Search results: 65 (processing time: 0.023 seconds)

Model-Independent Facial Animation Tool (모델 독립적 얼굴 표정 애니메이션 도구)

  • 이지형;김상원;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.193-196 / 1999
  • Generating human facial expressions is one of the classic topics in computer graphics. Much related research has therefore been carried out, but most studies provide facial expression animation only for a specific model, because expressions require facial muscle information and other auxiliary data that are tied to a particular 3D face model. This paper proposes a tool that lets muscles be set up and auxiliary information edited on a generic 3D face model, making expression animation possible on a variety of 3D face models.

  • PDF

3D Facial Expression Creation System Based on Muscle Model (근육모델 기반의 3차원 얼굴표정 생성시스템)

  • 이현철;윤재홍;허기택
    • Proceedings of the Korea Multimedia Society Conference / 2002.05c / pp.465-468 / 2002
  • As computer-based vision technology has advanced, human-centered research has gained importance, and various new approaches to the human-computer interface are being explored. In particular, research on modeling facial shape and animating changes in facial expression is being actively pursued, with diverse uses and a growing range of applications. In this paper, we create a standard generic model suited to the facial characteristics of Koreans and build a 3D shape model that preserves an accurate, photograph-like shape for each individual. To generate natural facial expressions, we develop a muscle-model-based facial expression generation system that produces natural and realistic facial animation.

  • PDF

A Study on 3D Face Modelling based on Dynamic Muscle Model for Face Animation (얼굴 애니메이션을 위한 동적인 근육모델에 기반한 3차원 얼굴 모델링에 관한 연구)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.322-327 / 2003
  • This paper proposes a 3D face modeling technique, based on a dynamic muscle model, for building efficient face animation. Facial muscles are composed along face lines connecting 256 points placed according to the dynamic muscle model, and a wireframe is constructed from them. After a standard model is composed from the wireframe, texture mapping with front and side 2D photographs produces an individualized 3D face model. Front and side feature points are used for accurate mapping: a textured face is first built from the 2D coordinates of the front image and its feature points, and then completed with the 2D coordinates of the side image and its feature points.

Recognition of Facial Expressions Using Muscle-based Feature Models (근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구)

  • 김동수;남기환;한준희;박호식;차영석;최현수;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.416-419 / 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models to track facial features. Since the feature models are constructed with a small number of parameters and are deformable only within a limited range and in limited directions, the search space for each feature can be restricted. The technique estimates degrees of muscular contraction to classify the six principal facial expressions. Contraction vectors are obtained from the deformations of the facial muscle models; similarities between these vectors and representative vectors of the principal expressions are then used to determine the facial expression.

  • PDF
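The contraction-vector classification described in the abstract above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the muscle set, the representative vectors, and the use of cosine similarity as the similarity measure are all assumptions.

```python
import math

# Representative contraction vectors for a few principal expressions
# (illustrative values only; one component per tracked facial muscle).
REPRESENTATIVES = {
    "happiness": [0.9, 0.1, 0.7, 0.0],
    "surprise":  [0.1, 0.9, 0.2, 0.8],
    "anger":     [0.2, 0.6, 0.0, 0.1],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two contraction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classify_expression(contraction):
    """Pick the principal expression whose representative vector is most
    similar to the observed muscle contraction vector."""
    return max(REPRESENTATIVES,
               key=lambda name: cosine_similarity(contraction, REPRESENTATIVES[name]))
```

A measured contraction vector close to the "happiness" prototype would then be labeled accordingly, e.g. `classify_expression([0.85, 0.15, 0.6, 0.05])`.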

Improvement of Face Recognition Rate by Normalization of Facial Expression (표정 정규화를 통한 얼굴 인식율 개선)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B / v.15B no.5 / pp.477-486 / 2008
  • Facial expression, which changes face geometry, usually has an adverse effect on the performance of a face recognition system. To improve the recognition rate, we propose a facial expression normalization method that diminishes the difference in expression between probe and gallery faces. Two approaches are used for modeling and normalizing facial expression from single still images with a generic facial muscle model, without the need for large image databases. The first approach estimates the geometry parameters of linear muscle models to obtain a biologically inspired model of the facial expression, which can then be changed intuitively. The second approach uses RBF (Radial Basis Function) based interpolation and warping to normalize the facial muscle model to an unexpressed face according to the given expression. As a preprocessing stage for face recognition, these approaches achieve significantly higher recognition rates than the un-normalized case with the eigenface approach, local binary patterns, and a grey-scale correlation measure.
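The RBF-based warping mentioned in the second approach can be sketched as follows. This is a minimal 2D illustration under stated assumptions — a Gaussian kernel and exact interpolation of feature-point displacements; the paper's actual kernel, dimensionality, and solver are not given here.

```python
import math

def gaussian_rbf(r, sigma=1.0):
    """Gaussian radial basis function of the distance r."""
    return math.exp(-(r / sigma) ** 2)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf_warp(src_pts, dst_pts, sigma=1.0):
    """Fit per-axis RBF weights so that the warp maps each source
    feature point exactly onto its destination (neutral) position."""
    A = [[gaussian_rbf(dist(p, q), sigma) for q in src_pts] for p in src_pts]
    wx = solve(A, [d[0] - s[0] for s, d in zip(src_pts, dst_pts)])
    wy = solve(A, [d[1] - s[1] for s, d in zip(src_pts, dst_pts)])
    return wx, wy

def warp(p, src_pts, wx, wy, sigma=1.0):
    """Apply the fitted RBF displacement field to an arbitrary point."""
    phis = [gaussian_rbf(dist(p, q), sigma) for q in src_pts]
    return (p[0] + sum(w * f for w, f in zip(wx, phis)),
            p[1] + sum(w * f for w, f in zip(wy, phis)))
```

Fitting on the expressed-face feature points (`src_pts`) against their neutral positions (`dst_pts`) yields a smooth field that also moves all in-between vertices toward the unexpressed face.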

Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • 이용동;최창석;최갑석
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.4 / pp.614-623 / 1994
  • Synthesizing speech and facial images is necessary for a human interface in which man and machine converse as naturally as humans do. The target of this paper is synthesizing the facial images. A three-dimensional (3D) shape model of the face is used to realize variations in facial expression and lip shape. The various facial expressions and the lip shapes matched to each syllable are synthesized by deforming the 3D model on the basis of facial muscular actions. Combinations of the consonants and vowels make 14,364 syllables. The vowels dominate most lip shapes, while the consonants determine a part of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip shape patterns according to the vowels and consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants. The paper then determines synthesis rules for the classified lip shape patterns. This method yields natural facial images with various facial expressions and lip shape patterns.

  • PDF

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed rather than a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness to varying lighting conditions and thus require additional work to extract the exact facial region. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to vary the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the expression of the 3D face model.

  • PDF
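Waters' linear muscle model, which several of the entries above adopt, can be sketched roughly as follows. This is a simplified 2D variant for illustration only — the fall-off functions and parameter names are assumptions, not Waters' exact formulation.

```python
import math

def waters_displace(vertex, head, tail, k, r_start, r_fall, omega):
    """Displace one 2D mesh vertex under a simplified Waters-style
    linear muscle (a sketch, not Waters' exact formulation).
    head: bony attachment (fixed end); tail: skin insertion;
    k: contraction factor in [0, 1); r_start/r_fall: radii where the
    radial fall-off begins/ends; omega: half-angle of the muscle's
    cone of influence, in radians."""
    dx, dy = vertex[0] - head[0], vertex[1] - head[1]
    r = math.hypot(dx, dy)
    if r == 0.0 or r > r_fall:
        return vertex                      # outside the zone of influence
    ax, ay = tail[0] - head[0], tail[1] - head[1]
    cos_a = (dx * ax + dy * ay) / (r * math.hypot(ax, ay))
    angle = math.acos(max(-1.0, min(1.0, cos_a)))
    if angle > omega:
        return vertex                      # outside the angular sector
    # Angular fall-off: 1 on the muscle axis, 0 at the cone's edge.
    angular = math.cos(angle / omega * math.pi / 2)
    # Radial fall-off: full effect up to r_start, fading to 0 at r_fall.
    radial = 1.0 if r <= r_start else \
        math.cos((r - r_start) / (r_fall - r_start) * math.pi / 2)
    s = k * angular * radial               # fraction pulled toward the head
    return (head[0] + (1.0 - s) * dx, head[1] + (1.0 - s) * dy)
```

Applying this per vertex for each active muscle, with per-expression contraction factors `k`, is the basic mechanism behind muscle-driven expression animation; vertices on the axis and inside `r_start` move the most, and the effect fades smoothly to zero at the cone boundary.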

Study of expression in virtual character of facial smile by emotion recognition (감성인식에 따른 가상 캐릭터의 미소 표정변화에 관한 연구)

  • Lee, Dong-Yeop
    • Cartoon and Animation Studies / s.33 / pp.383-402 / 2013
  • In this study, we apply the Facial Action Coding System (FACS), an anatomical approach to coding the facial muscular system, to the expressions displayed in response to emotional change, and verify it by applying the Duchenne smile to a virtual character. Duchenne smiles were extracted through an emotion-induction experiment with trained theater and film students (two men, two women). Based on the extracted expressions, data on the facial muscles were collected; the frequency of muscle movements around the mouth, lips, and other parts of the face was calculated, and the data were applied to the virtual character. As the Zygomatic Major contracts, the corners of the lips move upward and the cheeks rise, while the Orbicularis Oculi raises the lower eyelid, producing the look of a smile. Movement of the Zygomatic Major is observed together with the muscles around the nose (AU9) and the muscles around the mouth associated with openness (AU25, AU26, AU27). The Duchenne smile occurs when the Orbicularis Oculi and the Zygomatic Major move at the same time. On this basis, by separating the Orbicularis Oculi, which appears with genuinely felt laughter, from the Zygomatic Major, which can be moved at will, and applying both to the virtual character, we examine the character's ability to distinguish genuine from posed expressions.

A system for facial expression synthesis based on a dimensional model of internal states (내적상태 차원모형에 근거한 얼굴표정 합성 시스템)

  • 한재현;정찬섭
    • Korean Journal of Cognitive Science / v.13 no.3 / pp.11-21 / 2002
  • Parke and Waters' model [1] of muscle-based face deformation was used to develop a system that can synthesize facial expressions when the pleasure-displeasure and arousal-sleep coordinate values of internal states are specified. Facial expressions sampled from a database developed by Chung, Oh, Lee and Byun [2], together with its underlying model of internal states, were used to find rules for face deformation. The internal-state model included dimensional and categorical values of the sampled facial expressions. To find deformation rules for each expression, changes in the lengths of 21 facial muscles were measured. A set of multiple regression analyses was then performed to relate the muscle lengths to the internal states. The deformation rules obtained in this way produced natural-looking expressions when the internal states were specified by pleasure-displeasure and arousal-sleep coordinates. This result implies that rules derived from a large-scale database and regression analyses capturing the variation of individual muscles can serve as a useful and powerful tool for synthesizing facial expressions.

  • PDF

Realistic Facial Expression Animation and 3D Face Synthesis (실감 있는 얼굴 표정 애니메이션 및 3차원 얼굴 합성)

  • 한태우;이주호;양현승
    • Science of Emotion and Sensibility / v.1 no.1 / pp.25-31 / 1998
  • Advances in computer hardware and multimedia technology have created a need for advanced interfaces that use multimedia input/output devices. To provide a friendly user interface, the demand for realistic facial animation is growing. In this paper, we animate facial expressions, which convey a person's internal state well, using a 3D model. To add realism, the 3D face model is deformed using real face images, and texture mapping is performed with face images captured from several directions. To animate facial expressions with the deformed 3D model, a modified version of Waters' anatomically based muscle model is used, and the six representative expressions proposed by Ekman are synthesized.

  • PDF