• Title/Summary/Keyword: Real-time Facial Expression Control (실시간 얼굴 표정 제어)


Realtime Synthesis of Virtual Faces with Facial Expressions and Speech (표정짓고 말하는 가상 얼굴의 실시간 합성)

  • 송경준;이기영;최창석;민병의
    • The Journal of the Acoustical Society of Korea, v.17 no.8, pp.3-11, 1998
  • This paper proposes a method for synthesizing a natural virtual face in real time by integrating high-quality facial animation with prosody-enriched speech. The method takes Korean (Hangul) text as input, synthesizes lip shapes and speech according to the text, and synchronizes the facial animation with the speech. First, the text undergoes phonological conversion; the sentence is then analyzed and durations are assigned to the consonants and vowels. Facial animation is generated by varying the lip shape according to the phonemes and their durations. In addition to lip-shape changes matching the text, a natural virtual face is synthesized in real time through 3D head motion and varied expression changes. In the speech synthesis, accentual phrases and intonational phrases are determined from the sentence-analysis results, and a prosody model built from them controls the duration, intonation, and pauses required for high-quality synthesis. The synthesis units are a combination of demisyllables and triphones (VCV), which allows an unlimited vocabulary, and the synthesis method is TD-PSOLA.

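The syllable-to-lip-shape step described above can be sketched in a few lines. The jamo decomposition below is the standard Unicode arithmetic for precomposed Hangul syllables; the duration values and the idea of giving vowels longer mouth-shape slots are illustrative assumptions, not the paper's actual duration model.

```python
# Sketch: decompose Hangul syllables into jamo and assign per-jamo durations,
# producing a timed track that could drive mouth shapes in sync with speech.
# Duration constants are illustrative assumptions.

LEADS = ["g","kk","n","d","tt","r","m","b","pp","s","ss","","j","jj","ch","k","t","p","h"]
VOWELS = ["a","ae","ya","yae","eo","e","yeo","ye","o","wa","wae","oe","yo",
          "u","wo","we","wi","yu","eu","ui","i"]
TAILS = ["","g","kk","gs","n","nj","nh","d","l","lg","lm","lb","ls","lt","lp","lh",
         "m","b","bs","s","ss","ng","j","ch","k","t","p","h"]

def decompose(syllable):
    """Split one precomposed Hangul syllable (U+AC00..U+D7A3) into jamo."""
    idx = ord(syllable) - 0xAC00
    lead, vowel, tail = idx // 588, (idx % 588) // 28, idx % 28
    return LEADS[lead], VOWELS[vowel], TAILS[tail]

def viseme_track(text, consonant_ms=60, vowel_ms=120):
    """Build a (jamo, start_ms, duration_ms) list; vowels get longer slots
    since they dominate the visible mouth opening."""
    track, t = [], 0
    for ch in text:
        if not (0xAC00 <= ord(ch) <= 0xD7A3):
            continue  # skip non-Hangul characters
        lead, vowel, tail = decompose(ch)
        for jamo, dur in ((lead, consonant_ms), (vowel, vowel_ms), (tail, consonant_ms)):
            if jamo:  # empty string means no lead/tail jamo
                track.append((jamo, t, dur))
                t += dur
    return track
```

A track like this would then be aligned against the durations produced by the prosody model rather than fixed constants.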

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering, v.12 no.9, pp.1547-1553, 2008
  • This paper presents a method that generates various facial expressions of a vector-graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined on the basis of Russell's internal emotion states. From this, we find the principal component vectors having the greatest effect on the character's facial features and expression and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weights applied to the character's features and expressions. This saves considerable memory and produces intermediate expressions with little computation, so the performance of a character-generation system can be considerably improved in web, mobile, and game services that require real-time control.
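The weight-interpolation idea above can be sketched as follows: run PCA over a small set of expression key shapes, keep only the strongest components, and blend in weight space. The feature vectors here are random toy data and the component count is an arbitrary choice, not the paper's data or parameters.

```python
import numpy as np

# Sketch: PCA over a set of expression key shapes, then interpolation of the
# principal-component weights to produce intermediate expressions.
# The 20-dimensional "feature point" vectors are synthetic toy data.

rng = np.random.default_rng(0)
neutral = rng.normal(size=20)                                # hypothetical face vector
expressions = neutral + rng.normal(scale=0.5, size=(9, 20))  # nine key expressions

mean = expressions.mean(axis=0)
centered = expressions - mean
# SVD rows of vt are the principal component directions of the expression set.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 3                        # keep only the strongest components (assumption)
components = vt[:k]

def weights(expr):
    """Project an expression onto the reduced principal-component basis."""
    return components @ (expr - mean)

def blend(expr_a, expr_b, t):
    """Interpolate in weight space, then reconstruct the face vector."""
    w = (1 - t) * weights(expr_a) + t * weights(expr_b)
    return mean + components.T @ w
```

Because only `k` weights per expression need to be stored and interpolated, the memory and per-frame computation savings the abstract claims fall out directly.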

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB, v.14B no.4, pp.311-320, 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
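The final fitting step, deforming non-feature points around the moved control points with an RBF, can be sketched as below. The Gaussian kernel, the bandwidth, and the toy 2D points are illustrative assumptions; the paper does not specify these details in the abstract.

```python
import numpy as np

# Sketch: feature (control) points move by known animation displacements, and
# surrounding non-feature vertices are deformed by a Gaussian RBF interpolant
# fitted to those displacements. Points and displacements are toy 2D data.

def rbf_weights(controls, displacements, sigma=1.0):
    """Solve Phi w = d for Gaussian RBF weights (one system per coordinate)."""
    d2 = np.sum((controls[:, None, :] - controls[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(phi, displacements)

def rbf_deform(points, controls, weights, sigma=1.0):
    """Displace arbitrary points by the RBF field defined at the controls."""
    d2 = np.sum((points[:, None, :] - controls[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    return points + phi @ weights

controls = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.2, 0.0], [0.0, 0.1], [0.0, 0.0]])  # feature-point motion
w = rbf_weights(controls, disp)
moved = rbf_deform(controls, controls, w)  # controls land exactly on targets
```

By construction the interpolant reproduces the control-point displacements exactly, while nearby non-feature points receive a smoothly decaying blend of them.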

Supporting the Korean Lip Synchronization and Facial Expression (한글 입술 움직임과 얼굴 표정이 동기화된 3차원 개인 아바타 대화방 시스템)

  • Lee, Jung;Oh, Beom-Soo;Jeong, Won-Ki;Kim, Chang-Hun
    • Proceedings of the Korean Information Science Society Conference, 2000.04b, pp.640-642, 2000
  • Chat-room and messaging systems based on text and video are widely used. This paper proposes techniques for creating and managing a chat-room system featuring 3D avatars. The distinguishing features of this avatar chat room are a technique for converting a photograph into a 3D personal avatar, lip movements suited to Korean pronunciation for that avatar, and appropriate expression changes according to the message. In particular, the 3D personal avatar can be generated from a photograph alone, and the texture-mapped 3D avatar is controlled so that a realistic chat service is possible in real time.


Speech Animation with Multilevel Control (다중 제어 레벨을 갖는 입모양 중심의 표정 생성)

  • Moon, Bo-Hee;Lee, Son-Ou;Wohn, Kwang-yun
    • Korean Journal of Cognitive Science, v.6 no.2, pp.47-79, 1995
  • Since the early days of computer graphics, facial animation has been applied to various fields, and it has since found several novel applications such as virtual reality (for representing virtual agents), teleconferencing, and man-machine interfaces. When we want to apply facial animation to a system with multiple participants connected via a network, it is hard to animate facial expressions as desired in real time because of the amount of information that must be exchanged to maintain efficient communication. This paper's major contribution is to adapt 'Level-of-Detail' to facial animation in order to solve this problem. Level-of-Detail has been studied in computer graphics as a way to represent the appearance of complicated objects efficiently and adaptively, but until now no attempt had been made to apply it to facial animation. In this paper, we present a systematic scheme that enables this kind of adaptive control using Level-of-Detail. The implemented system can generate speech-synchronized facial expressions from various types of user input such as text, voice, GUI, and head motion.

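A Level-of-Detail policy of the kind described above amounts to a mapping from viewer distance (or importance) to an animation-parameter budget. The thresholds, level names, and budgets below are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch: a Level-of-Detail policy for networked facial animation. Nearer
# participants get the full expression-parameter set; distant ones get a
# coarse subset, reducing the per-frame data that must be transmitted.
# All thresholds and budgets are illustrative assumptions.

LEVELS = [
    (2.0, "full"),              # lips + full expression set + head motion
    (8.0, "medium"),            # lips + a few key expression parameters
    (float("inf"), "minimal"),  # mouth open/closed only
]

def detail_level(distance):
    """Map viewer distance (arbitrary units) to an animation detail level."""
    for limit, level in LEVELS:
        if distance <= limit:
            return level

def parameters_to_send(distance, full_params):
    """Truncate the expression-parameter vector according to the level."""
    budget = {"full": len(full_params), "medium": 4, "minimal": 1}
    return full_params[:budget[detail_level(distance)]]
```

In a real system the budget would select semantically important parameters (jaw, lip corners) rather than a simple prefix, but the bandwidth-versus-fidelity trade-off is the same.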

VRmeeting : Distributed Virtual Environment Supporting Real Time Video Chatting on WWW (VRmeeting: 웹상에서 실시간 화상 대화 지원 분산 가상 환경)

  • Jung, Heon-Man;Tak, Jin-Hyun;Lee, Sei-Hoon;Wang, Chang-Jong
    • Annual Conference of KIPS, 2000.10a, pp.715-718, 2000
  • Multi-user distributed virtual environment systems support text chat and TTS for communication among participants, and to support non-verbal communication they add animation features so that avatars, the participants' proxies, can express gestures, facial expressions, and emotions. However, because avatar animation is limited in conveying participants' intentions and emotions, an environment that supports free meetings and conversation is needed. To solve this problem, including the participants' faces and voices in the virtual space would enable clearer and more realistic communication and emotional expression. In this paper, we design a distributed virtual environment system supporting real-time video chat that maximizes participants' communication and emotional expression and provides free meetings and conversation in a multi-user virtual environment formed over computer networks. The designed system has an architecture that can optimize the system load by dynamically controlling the volume of events according to participants' distance and gaze direction.

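The distance-and-gaze event throttling mentioned at the end of the abstract can be sketched as an interest-management filter: each sender's event rate is scaled down with distance and dropped entirely outside the viewer's field of view. All constants, and the assumption that `viewer_dir` is a unit vector, are illustrative, not the paper's design.

```python
import math

# Sketch: per-sender event-rate budget for a distributed virtual environment.
# Events from far-away or out-of-view participants are throttled or dropped,
# bounding the load each client must handle. Constants are assumptions.

def event_rate(viewer_pos, viewer_dir, target_pos,
               max_dist=10.0, fov_deg=90.0, full_rate=30.0):
    """Return an events-per-second budget for one sender, scaled by distance
    and zeroed outside the viewer's field of view. viewer_dir must be unit."""
    dx = target_pos[0] - viewer_pos[0]
    dy = target_pos[1] - viewer_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return full_rate            # co-located: always full rate
    if dist > max_dist:
        return 0.0                  # too far: drop the stream entirely
    angle = math.degrees(math.acos(
        max(-1.0, min(1.0, (dx * viewer_dir[0] + dy * viewer_dir[1]) / dist))))
    if angle > fov_deg / 2:
        return 0.0                  # outside field of view: drop
    return full_rate * (1.0 - dist / max_dist)  # nearer sender, higher rate
```

The server (or each peer) would evaluate this per viewer-sender pair and forward only the budgeted fraction of video and animation events.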

New Rectangle Feature Type Selection for Real-time Facial Expression Recognition (실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법)

  • Kim Do Hyoung;An Kwang Ho;Chung Myung Jin;Jung Sung Uk
    • Journal of Institute of Control, Robotics and Systems, v.12 no.2, pp.130-137, 2006
  • In this paper, we propose a method of selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach, which is used for face detection. Instead of the previous Haar-like features, we choose rectangle features for facial expression recognition among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared with one using the previous rectangle features in terms of performance. The simulation and experimental results show that the proposed approach performs better in facial expression recognition.

Representation of Dynamic Facial Image Graphics for Multi-Dimensional Data (다차원 데이터의 동적 얼굴 이미지그래픽 표현)

  • 최철재;최진식;조규천;차홍준
    • Journal of the Korea Computer Industry Society, v.2 no.10, pp.1291-1300, 2001
  • This article studies a visualization technique based on dynamic graphics that can change in real time, treating the facial image as a graphic representation of multi-dimensional data. The key idea is as follows: by mapping multi-dimensional data to the feature points of a human face and to the parameter control values obtained from existing image-recognition algorithms, and then synthesizing the image, a virtual image is created whose emotional expression changes with the data. The proposed DyFIG system is implemented as a complete module, and through implementation and experiments we present a facial-graphics module that can render emotional expressions, realizing the description and representation of emotional data.

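The data-to-face mapping described above can be sketched as a simple normalization from data dimensions to facial control parameters, in the spirit of Chernoff faces. The parameter names and ranges are illustrative assumptions; the DyFIG system's actual parameter set is not specified in the abstract.

```python
# Sketch: map each dimension of a data record to one facial control
# parameter, so that the synthesized face's expression visualizes the data.
# Parameter names and value ranges are illustrative assumptions.

FACE_PARAMS = ["mouth_curve", "eye_openness", "brow_height", "mouth_width"]

def normalize(value, lo, hi):
    """Clamp and rescale a raw data value into [0, 1]."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def data_to_face(record, ranges):
    """record: raw values, one per dimension; ranges: (lo, hi) per dimension.
    Returns a dict of facial parameters in [0, 1]."""
    return {
        name: normalize(v, lo, hi)
        for name, v, (lo, hi) in zip(FACE_PARAMS, record, ranges)
    }

face = data_to_face([7.5, 0.2, -3.0, 50.0],
                    [(0, 10), (0, 1), (-5, 5), (0, 100)])
```

The normalized parameters would then drive the image-synthesis stage, so that a change in any data dimension shows up as a visible change of expression.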