• Title/Summary/Keyword: 3D facial expression

Search Results: 88

A Study on the Fabrication of Facial Blend Shape of 3D Character - Focusing on the Facial Capture of the Unreal Engine (3D 캐릭터의 얼굴 블렌드쉐입(blendshape)의 제작연구 -언리얼 엔진의 페이셜 캡처를 중심으로)

  • Lou, Yi-Si;Choi, Dong-Hyuk
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.8
    • /
    • pp.73-80
    • /
    • 2022
  • Facial expression is an important means of conveying character in films and animation, and facial capture technology can support the production of facial animation for 3D characters more quickly and effectively. Blendshape techniques are the most widely used methods for producing high-quality 3D facial animation, but traditional blendshape production often takes a long time. The purpose of this study is therefore to shorten the blendshape production period while achieving results that are not far behind those of traditional production. In this paper, a method of transferring blendshapes across models is compared with the traditional method of producing blendshapes by hand, and the validity of the new method is verified. Using the kit boy character developed by Unreal Engine as the experimental subject, this study conducted a facial capture test with the two blendshape production techniques and compared and analyzed the facial effects linked to the blendshapes.
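The blendshape technique that both production methods target can be sketched as a weighted sum of per-target vertex offsets added to a neutral mesh; the meshes and weights below are toy values for illustration, not data from the study.

```python
import numpy as np

def blend(neutral, targets, weights):
    """neutral: (V, 3) vertices; targets: list of (V, 3) target shapes;
    weights: blend weights in [0, 1], one per target."""
    result = neutral.astype(float).copy()
    for target, w in zip(targets, weights):
        result += w * (target - neutral)   # add weighted delta toward target
    return result

neutral = np.zeros((4, 3))                      # toy 4-vertex "mesh"
smile = np.zeros((4, 3)); smile[0, 1] = 1.0     # mouth-corner vertex raised
blended = blend(neutral, [smile], [0.5])
# vertex 0 moves halfway toward the smile target; all others stay put
```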

Automatic Synchronization of Separately-Captured Facial Expression and Motion Data (표정과 동작 데이터의 자동 동기화 기술)

  • Jeong, Tae-Wan;Park, Sang-Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.1
    • /
    • pp.23-28
    • /
    • 2012
  • In this paper, we present a new method for automatically synchronizing captured facial expression data with its corresponding body motion data. In a typical optical motion capture setup, detailed facial expression cannot be captured in the same session as the body motion because its resolution requirement is higher than that of body motion capture. The two are therefore captured in separate sessions and must be synchronized in post-processing to produce a convincing character animation. Based on the patterns of the actor's neck movement extracted from the two data sets, we present a non-linear time warping method for automatic synchronization. We demonstrate the viability of the method with actual examples.
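The non-linear time warping the paper describes is in the spirit of dynamic time warping; the sketch below aligns two 1D signals (think of them as neck-movement traces from the two sessions) with the classic DTW recurrence, as an illustration rather than the paper's exact formulation.

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic-time-warping alignment between two 1D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from the corner to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], D[n, m]

a = [0, 1, 2, 1, 0]
b = [0, 0, 1, 2, 1, 0]          # same shape, stretched at the start
path, cost = dtw_path(a, b)     # perfect alignment despite the stretch
```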

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • Extracting expression data from face images captured in video is essential for online 3D facial animation. Recently, many studies have investigated vision-based approaches that capture an actor's expression in video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks face and expression data from real-time video input. The system operates in three steps: face detection, facial feature extraction, and feature tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area with a Haar-based classifier. We use brightness and color information to extract eye and lip data related to facial expression, obtaining 10 feature points from the eye and lip areas based on the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track expression data at about 8 fps.
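The first step of the pipeline, skin-pixel detection in YCbCr space, can be sketched as below. The RGB-to-YCbCr conversion follows the standard ITU-R BT.601 formulas; the Cb/Cr bounds are commonly cited skin ranges, not necessarily the exact thresholds used in the paper.

```python
import numpy as np

def skin_mask(rgb):
    """rgb: (H, W, 3) uint8 image -> boolean mask of skin-colored pixels."""
    r, g, b = [rgb[..., k].astype(float) for k in range(3)]
    # BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # widely used skin-tone bounds in YCbCr space
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 140, 120)   # skin-like tone
img[0, 1] = (0, 200, 0)       # saturated green, clearly not skin
mask = skin_mask(img)
```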

A Study on Character's Emotional Appearance in Distinction Focused on 3D Animation "Inside Out" (3D 애니메이션 "인사이드 아웃" 분석을 통한 감성별 캐릭터 외형특징 연구)

  • Ahn, Duck-ki;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.15 no.2
    • /
    • pp.361-368
    • /
    • 2017
  • This study analyzes the characteristic appearances that distinguish emotional changes, examining the visual forms of psychology alongside character development in the 3D animation industry. To this end, the study examines the five emotion characters from Pixar's animation Inside Out to demonstrate the psychological effects of a character's visual appearance. Drawing on Paul Ekman's and Robert Plutchik's research on basic human emotions, the study analyzes the visual representations of both emotional facial expression and emotional color expression. The purpose of this study is to present a visual guideline for the appearance of emotional characters, based on the variety of human expression, for differentiated character development in animation production.

Study on Effective Facial Rigging Process for Facial Expression of 3D Animation Character (3D 애니메이션 캐릭터의 표정연출을 위한 효율적인 페이셜 리깅 공정 연구)

  • Yu, Jiseon
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2014.11a
    • /
    • pp.169-170
    • /
    • 2014
  • With the development of computer graphics, 3D animation delivers to audiences the enjoyment of animation's distinctive unrealistic situations and fictional characters through visual realism and striking imagery. In particular, a character's facial expressions carry important information for emotional communication and meaning, and thus require detailed acting. 3D animation characters therefore demand a variety of facial functions, and in addition to common blendshapes and clusters, various techniques are used for cartoon-style expression. In existing pipelines, all of these functions are combined in a single facial rig, resulting in a complex and demanding facial rigging process. This study investigates an efficient facial rigging process through a layered approach that targets various functions using blendshapes, which have been used only in limited ways in existing pipelines.


3D Facial Model Expression Creation with Head Motion (얼굴 움직임이 결합된 3차원 얼굴 모델의 표정 생성)

  • Kwon, Oh-Ryun;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.1012-1018
    • /
    • 2007
  • In this paper, we propose a vision-based system for automatically generating expressions on a 3D facial model. Previous research on 3D facial animation has focused on generating expressions while excluding the motion estimation that represents head movement, and research on facial motion estimation and expression control has been conducted independently. The proposed expression generation system consists of three main stages: face detection, facial motion estimation, and expression control. Face detection comprises detecting candidate face regions and then the face region itself: candidate regions are detected with an HT color model, and the face region is extracted from the candidates through PCA transformation and template matching. Facial motion estimation and expression control are then performed on the detected face region. The facial motion is estimated using the projection of a 3D cylinder model and the LK algorithm, and the estimated result is applied to the 3D facial model; image correction makes the motion estimation robust. To generate the model's expression, a feature-point-based method is applied, generating the expression from 12 facial feature points. Feature points around the eyebrows, eyes, and mouth are detected using the structural information of the face and template matching, and are tracked with the LK algorithm. Because the positions of the tracked feature points combine motion information and expression information, a geometric transformation is used to obtain the animation parameters: the feature-point displacements as they would appear with the face in a frontal pose. The control points of the facial model are moved according to these animation parameters, and the surrounding vertices are deformed by RBF interpolation. Expressions are generated from the deformed facial model, and by applying the motion estimation result to the model, an expression of the 3D facial model combined with head motion is generated.
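The final deformation step, moving a control point and letting surrounding vertices follow via RBF interpolation, can be sketched as below. A Gaussian kernel is used here; the kernel choice and width are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def rbf_deform(vertices, control, displacement, sigma=1.0):
    """Move each vertex by the control-point displacement, attenuated by a
    Gaussian of its distance to the control point."""
    d = np.linalg.norm(vertices - control, axis=1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))   # ~1 at the control, ~0 far away
    return vertices + w[:, None] * displacement

verts = np.array([[0.0, 0.0, 0.0],   # vertex at the control point
                  [5.0, 0.0, 0.0]])  # distant vertex
moved = rbf_deform(verts,
                   control=np.array([0.0, 0.0, 0.0]),
                   displacement=np.array([0.0, 1.0, 0.0]))
# the vertex at the control point moves fully; the distant one barely moves
```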


A Study on 3D Character Design for Games (About Improvement efficiency with 2D Graphics) (3D Game 제작을 위한 Character Design에 관한 연구 (3D와 2D Graphics의 결합효율성에 관하여))

  • Cho, Dong-Min;Jung, Sung-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.10
    • /
    • pp.1310-1318
    • /
    • 2007
  • First of all, what modeling technique was used to build the 3D game characters? It is a technique refined over several years of experience, based on low-polygon characters. We always work in low polygon for two reasons: a low-poly character can be easily modified (changing shapes, making morphs for facial expressions, and so on), and a low-poly character can be easily animated. Second, advances in computer hardware in recent years have brought an expansion and development of diverse 3D digital moving-image content, and 3D digital techniques are now used across animation, virtual reality, film, advertising, games, and more. As computing power continues to increase, the development of 3D animation and characters is in growing demand. To meet this demand, research on 3D game modeling that represents a character's emotions and sensibilities is beginning to appear. 3D characters in games are central to communicating emotion and information to users through their facial expressions, characteristic motions, and sounds, and interest in 3D motion and facial expression grows as they are used more frequently. In this study we therefore propose an effective method for modeling 3D characters based on 2D graphics.


Putting Your Best Face Forward: Development of a Database for Digitized Human Facial Expression Animation

  • Lee, Ning-Sung;Alia Reid Zhang Yu;Edmond C. Prakash;Tony K.Y Chan;Edmund M-K. Lai
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.153.6-153
    • /
    • 2001
  • 3-dimensional (3D) digitization of the human body is a relatively new technology, with present uses such as radiotherapy, identification systems, and commercial applications, as well as potential future applications. In this paper, we analyzed and experimented to determine the easiest and most efficient method that would give the most accurate results. We also constructed a database of realistic expressions and high-quality human heads. We scanned people's heads and facial expressions in 3D using a Minolta Vivid 700 scanner, then edited the resulting models on a Silicon Graphics workstation. We investigated present and potential uses of 3D digitized models of the human head and developed ideas for ...


An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.924-942
    • /
    • 2020
  • Facial expressions are diverse and vary from person to person due to psychological factors, whereas facial actions are comparatively stable because of the fixed anatomical structure of the face. Improving action unit recognition therefore facilitates facial expression recognition and provides a sound basis for mental state analysis and related tasks. It remains a challenging task with limited recognition accuracy, however, because the muscle movements around the face are tiny and the corresponding facial actions are subtle. Taking into account that muscle movements affect one another when a person expresses emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt a 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple action units around the brows, nose, and mouth, which contribute most to emotion expression, using their co-occurrence relationships as a constraint. Experiments were conducted on the public CASME dataset and its variant CASME2. The results show that the proposed AU co-occurrence constrained 3DCNN approach outperforms current approaches and demonstrate the effectiveness of exploiting AU relationships in AU recognition.
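One simple way to express a co-occurrence constraint is as an extra loss term that penalizes disagreement between the predicted activations of AU pairs that frequently fire together. The pair list and weights below are made up for illustration; the paper's exact constraint formulation may differ.

```python
import numpy as np

def cooccurrence_penalty(probs, pairs):
    """probs: (K,) predicted AU activation probabilities;
    pairs: list of (i, j, w) where w weights how strongly AUs i and j
    co-occur in the training data."""
    return sum(w * (probs[i] - probs[j]) ** 2 for i, j, w in pairs)

probs = np.array([0.9, 0.2, 0.8])   # toy predictions for three AUs
pairs = [(0, 2, 1.0)]               # AU 0 and AU 2 tend to fire together
loss = cooccurrence_penalty(probs, pairs)
# small penalty here, since 0.9 and 0.8 largely agree
```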

Noise-Robust Capturing and Animating Facial Expression by Using an Optical Motion Capture System (광학식 동작 포착 장비를 이용한 노이즈에 강건한 얼굴 애니메이션 제작)

  • Park, Sang-Il
    • Journal of Korea Game Society
    • /
    • v.10 no.5
    • /
    • pp.103-113
    • /
    • 2010
  • In this paper, we present a practical method for generating facial animation using an optical motion capture system. Our setup assumes that body motion and facial expression are captured simultaneously, which degrades the quality of the captured marker data. To overcome this problem, we provide an integrated framework, based on the local coordinate system of each marker, for labeling the marker data, filling holes, and removing noise. We validate the method by applying it to the production of a short animated film.