• Title/Summary/Keyword: 3D얼굴 모델링 (3D face modeling)


A Study on 3D Character Animation Production Based on Human Body Anatomy (인체 해부학을 바탕으로 한 3D 캐릭터 애니메이션 제작방법에 관한 연구)

  • 백승만
    • Archives of Design Research / v.17 no.2 / pp.87-94 / 2004
  • 3D character animation is used across entertainment fields such as film, advertising, games, and cyber idols, and occupies an important position in the video industry. Although character animation enables varied productions and realistic expression, it is difficult to model a character resembling the human body without an anatomical understanding of it. Human anatomy is the basic knowledge for analyzing physical structure, and it is of great help in character modeling and in rendering body movement and facial expression delicately when character animation is produced. This study therefore examines the structure and proportions of the human body and focuses on character modeling and animation production based on an anatomical understanding of the body.

  • PDF

Photoscan method for Achieving the Photorealistic Facial Modeling and Rendering (Photoscan 방식의 사실적인 페이셜 모델링 및 렌더링 제작 방법)

  • Zhang, Qi;Fu, Linwei;Jiang, Haitao;Ji, Yun;Qu, Lin;Yun, Taesoo
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.51-52 / 2018
  • Creating a realistic digital character is one of the hardest challenges in the 3D field. Because facial features are especially difficult to model, facial capture technology is becoming increasingly common. This paper presents a method for obtaining a high-quality digital character model through Photoscan facial capture. The method not only shortens production time but is also highly efficient, making this workflow very useful for obtaining high-quality digital character models.

  • PDF

Facial animation production method based on depth images (깊이 이미지 이용한 페이셜 애니메이션 제작 방법)

  • Fu, Linwei;Jiang, Haitao;Ji, Yun;Qu, Lin;Yun, Taesoo
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.49-50 / 2018
  • This paper introduces a method for producing facial animation using depth images. The TrueDepth camera of the iPhone X accurately captures the depth of a human face and, through evenly distributed dots, records every change in facial expression as mobile data, from which the facial animation is produced. The proposed approach omits the rigging step of the conventional facial-animation pipeline, so the recorded facial-expression data can be transferred directly to the 3D model. This shortens the overall facial-animation production process, making it simpler and more efficient.

  • PDF

A Tracking of Head Movement for Stereophonic 3-D Sound (스테레오 입체음향을 위한 머리 움직임 추정)

  • Kim Hyun-Tae;Lee Kwang-Eui;Park Jang-Sik
    • Journal of Korea Multimedia Society / v.8 no.11 / pp.1421-1431 / 2005
  • There are two methods of 3-D sound reproduction: a surround system, such as the 3.1-channel method, and a binaural system using two channels. The binaural system exploits the human ability to localize sound with two ears. In general, crosstalk between the channels of a two-channel loudspeaker system must be canceled to produce natural 3-D sound, which requires tracking the listener's head movement. In this paper, we propose a new algorithm to accurately track the head movement of a listener. The proposed algorithm is based on detecting the face and eyes: the face is detected from image intensity, and the eye positions are detected by mathematical morphology. When the listener's head moves, the length of the borderline between the face region and the eyes changes, and we use this information to track head movement. Computer simulation results show that head movement is effectively estimated within a margin of error of ±10 using the proposed algorithm.

  • PDF
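As a rough illustration of how facial features can encode head rotation, the sketch below estimates yaw from the apparent interocular distance, which shrinks roughly as cos(yaw) when the head turns. This is a toy geometric model under that assumption, not the paper's morphology-based borderline measurement.

```python
import math

def estimate_yaw(left_eye_x, right_eye_x, frontal_distance):
    """Estimate head yaw in degrees from the projected distance
    between the eyes, given the distance measured when the head
    faces the camera (`frontal_distance`)."""
    d = abs(right_eye_x - left_eye_x)
    ratio = min(1.0, d / frontal_distance)  # clamp against measurement noise
    return math.degrees(math.acos(ratio))
```

When the eyes appear at their full frontal separation the estimate is 0 degrees; at half the separation it is 60 degrees.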

A 3D Game Character Design Using MAYA (MAYA를 이용한 3D게임 캐릭터 디자인)

  • Ryu, Chang-Su;Hur, Chang-Wu
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.6 / pp.1333-1337 / 2011
  • Owing to improvements in smartphone CPU processing speed, the loading of 3D engines, and expanded usable capacity, the next-generation smartphone game market is growing briskly. Accordingly, realistic, free-form animation on a small smartphone screen is becoming important when creating 3D game characters. In this paper, as a method of creating characters that engage the user, we completed a face (eyes, then nose, then mouth) using MAYA NURBS data and modeled the hands and feet with the Polygon Cube tool. After dividing a cube in half and modeling it, we completed the whole body by mirror copying and built the low-polygon model. To model realistic, free-form characters, we then completed each detail in ZBrush, applying a Divide level of up to 4. Although they may look rough and exaggerated, we sought to express protruding and recessed parts effectively and smoothly with the Smooth brush, and to map and design the low-polygon 3D characters.

A 3D Game Character Design Using MAYA (MAYA를 이용한 3D게임 캐릭터 디자인)

  • Ryu, Chang-Su;Hur, Chang-Wu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.300-303 / 2011
  • Owing to improvements in smartphone CPU processing speed, the loading of 3D engines, and expanded usable capacity, the next-generation smartphone game market is growing briskly. Accordingly, realistic, free-form animation on a small smartphone screen is becoming important when creating 3D game characters. In this paper, as a method of creating characters that engage the user, we completed a face (eyes, then nose, then mouth) using MAYA NURBS data and modeled the hands and feet with the Polygon Cube tool. After dividing a cube in half and modeling it, we completed the whole body by mirror copying and built the low-polygon model. To model realistic, free-form characters, we then completed each detail in ZBrush, applying a Divide level of up to 4. Although they may look rough and exaggerated, we sought to express protruding and recessed parts effectively and smoothly with the Smooth brush, and to map and design the low-polygon 3D characters.

  • PDF

Design of Behavioral Classification Model Based on Skeleton Joints (Skeleton Joints 기반 행동 분류 모델 설계)

  • Cho, Jae-hyeon;Moon, Nam-me
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1101-1104 / 2019
  • Kinect, an RGB-D camera, makes it possible to collect skeleton data of the human body's bones and joints in 3D space. Action classification using skeleton data has been approached with various neural networks such as RNNs and CNNs. This study collects Skeleton Joints with Kinect and classifies actions by training a DNN-based skeleton model. In the Skeleton Joints processing stage, the coordinates of 25 Skeleton Joints are obtained from Kinect's depth-map-based skeleton tracker; as preprocessing for training, each coordinate is converted to a relative coordinate, the amount of data is limited, and exceptions are handled for joints that fail to be tracked. In the skeleton-model training stage, a three-layer DNN is built and, using the softmax_cross_entropy function, the Skeleton Joints are classified into four actions: picking up, putting down, crossing the arms, and bringing a hand close to the face.
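The preprocessing and classification pipeline the abstract describes can be sketched as follows. This is a minimal illustration, assuming 25 joints with (x, y, z) coordinates, a root joint at index 0, and arbitrary layer sizes; the paper's actual network dimensions and training code are not given.

```python
import numpy as np

def to_relative(joints, root=0):
    """Preprocessing step from the abstract: convert the 25 absolute
    Kinect joint coordinates to coordinates relative to a root joint."""
    joints = np.asarray(joints, dtype=float)
    return joints - joints[root]

def softmax(z):
    """Numerically stable softmax over the class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, p):
    """Three-layer fully connected network ending in a softmax over the
    four action classes (pick up, put down, arms crossed, hand to face)."""
    h1 = np.maximum(0.0, x @ p["W1"] + p["b1"])
    h2 = np.maximum(0.0, h1 @ p["W2"] + p["b2"])
    return softmax(h2 @ p["W3"] + p["b3"])
```

With 25 joints of 3 coordinates each, the flattened input has 75 values; training would minimize softmax cross-entropy over the four class labels.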

3D Face Modeling based on FACS (Facial Action Coding System) (FACS 기반을 둔 3D 얼굴 모델링)

  • Oh, Du-Sik;Kim, Yu-Sung;Kim, Jae-Min;Cho, Seoung-Won;Chung, Sun-Tae
    • Proceedings of the IEEK Conference / 2008.06a / pp.1015-1016 / 2008
  • In this paper, a method is suggested that finds facial features and transforms them using FACS (Facial Action Coding System) for face modeling. FACS provides a way to decompose a facial expression into AUs (Action Units) and to build various facial expressions from them. The system finds the accurate Action Units of a sample face and uses the predefined AUs. It then computes the coefficients for transforming the face model by 2D AU matching.

  • PDF
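Combining Action Units with per-AU coefficients, as the abstract outlines, is commonly realized as a linear blendshape model: a sketch under that assumption (the paper's exact transformation scheme may differ).

```python
import numpy as np

def apply_aus(neutral, au_deltas, coefficients):
    """Deform a neutral face mesh by a weighted sum of Action Unit
    displacement fields. `au_deltas` maps an AU name to a per-vertex
    displacement array of the same shape as the mesh; `coefficients`
    are the per-AU weights obtained from matching."""
    mesh = np.asarray(neutral, dtype=float).copy()
    for au, c in coefficients.items():
        mesh += c * np.asarray(au_deltas[au], dtype=float)
    return mesh
```

For example, AU12 (lip-corner puller) at weight 0.5 combined with AU4 (brow lowerer) at 0.25 displaces each vertex by the corresponding weighted sum of the two fields.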

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference / 2005.05a / pp.113-115 / 2005
  • This paper presents a method for 3D modeling of facial expression from a frontal view and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and a simple mirror property, it is robust, accurate, and inexpensive, and it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect side views of the markers on the face. To optimize the system, facial feature points closely associated with human emotion must be selected; we therefore refer to the FDPs (Facial Definition Parameters) and FAPs (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). Colored dot markers are placed on the selected feature points to detect facial deformation as the subject makes various expressions. Before computing the 3D coordinates of the extracted facial feature points, we group the points by facial part, which makes the matching process automatic. We experimented on about twenty Koreans in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expression.

  • PDF
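The core idea of the setup above, recovering a 3D marker position from one frontal view plus a mirror-reflected side view, can be sketched in a toy orthographic form. This assumes an idealized mirror placement where the horizontal coordinate in the mirror image maps linearly to depth; the paper derives the actual geometry from the mirror's reflection property.

```python
def reconstruct_point(frontal_xy, mirror_x, scale=1.0):
    """Toy orthographic reconstruction: the frontal view supplies the
    (x, y) position of a marker, while the mirror-reflected side view
    supplies its depth z. `scale` converts mirror-image pixels to the
    frontal view's units (an assumed calibration factor)."""
    x, y = frontal_xy
    z = scale * mirror_x
    return (x, y, z)
```

The practical benefit noted in the abstract is that both views come from a single camera frame, so no cross-camera synchronization is needed.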

Human-like Fuzzy Lip Synchronization of 3D Facial Model Based on Speech Speed (발화속도를 고려한 3차원 얼굴 모형의 퍼지 모델 기반 립싱크 구현)

  • Park Jong-Ryul;Choi Cheol-Wan;Park Min-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.416-419 / 2006
  • This paper proposes a new lip-synchronization method that takes speech speed into account. Using a fuzzy algorithm, we established the relationship between speech speed and mouth shape and size from a database built through experiments. Because conventional lip-sync methods do not consider speech speed, they show the same lip shape and size regardless of how fast the speech is. By applying the relationship between speech speed and lip shape, the proposed method can realize lip synchronization closer to that of a human. Moreover, by using fuzzy theory, it can model ambiguous changes in mouth size and shape that cannot be expressed precisely in numbers. To demonstrate this, we compare the proposed lip-sync algorithm with conventional methods and apply it to a real application by building a 3D graphics platform.

  • PDF
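A fuzzy mapping from speech speed to mouth opening, of the kind the abstract describes, can be sketched with triangular membership functions and weighted-average defuzzification. The breakpoints and output levels below are illustrative assumptions, not values from the paper's database.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mouth_scale(syllables_per_sec):
    """Map speech speed to a mouth-opening scale factor with a small
    fuzzy rule base (slow speech -> full opening, fast speech ->
    reduced opening), defuzzified as a weighted average."""
    mu = {
        "slow":   tri(syllables_per_sec, 0.0, 2.0, 4.0),
        "normal": tri(syllables_per_sec, 2.0, 4.0, 6.0),
        "fast":   tri(syllables_per_sec, 4.0, 6.0, 8.0),
    }
    out = {"slow": 1.0, "normal": 0.8, "fast": 0.5}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else out["normal"]
```

Between the breakpoints the output varies smoothly, which is exactly the kind of gradual, hard-to-quantify change in mouth shape that the fuzzy formulation is meant to capture.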