• Title/Summary/Keyword: 3D model animation


2D Image-Based Individual 3D Face Model Generation and Animation (2차원 영상 기반 3차원 개인 얼굴 모델 생성 및 애니메이션)

  • 김진우;고한석;김형곤;안상철
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.11b / pp.15-20 / 1999
  • In this paper, we present a method for generating an individual 3D face model by extracting feature points of each facial component from color video of a person's frontal face, and for animating the model according to facial expression movements. The proposed method extracts facial feature points from the first frame of a 2D video obtained with a head-mounted camera designed to capture only the front of the face, and computes the coordinates of 3D facial feature points from these points and a generic 3D face model. Expression changes are detected from the differences between the feature-point positions in the initial image and those in subsequent images. To enable a wider range of applications, the extracted feature points and facial motions are represented in the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) formats of MPEG-4 SNHC, whose first-phase standard was recently finalized, and the individual face model and its animation were generated using them. Because it operates on video captured by a single camera, the proposed method can be usefully applied to MPEG-4-based video telephony and video conferencing systems.

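The expression step described above, feature-point position differences between the first frame and later frames expressed in MPEG-4 FAP form, can be sketched as a displacement computation scaled by a neutral-face distance (MPEG-4 FAPs are expressed in such units). The function and parameter names below are illustrative, not the paper's:

```python
import numpy as np

def fap_like_displacements(initial_pts, current_pts, neutral_unit):
    """Expression change as per-feature displacement between frame 0 and
    frame t, scaled by a neutral-face distance (a FAPU-style unit).
    Illustrative sketch; not the paper's actual parameter extraction."""
    initial = np.asarray(initial_pts, dtype=float)  # (N, 2) feature points, first frame
    current = np.asarray(current_pts, dtype=float)  # (N, 2) feature points, frame t
    return (current - initial) / neutral_unit       # displacements in FAPU-like units
```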

A Study on the Construction of a Real-time Sign-language Communication System between Korean and Japanese Using 3D Model on the Internet (인터넷상에 3차원 모델을 이용한 한-일간 실시간 수화 통신 시스템의 구축을 위한 기초적인 검토)

  • Kim, Sang-Woon;Oh, Ji-Young;Aoki, Yoshinao
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.7 / pp.71-80 / 1999
  • Sign-language communication can be a useful way of exchanging messages between people who use different languages. In this paper, we report an experimental study on the construction of a Korean-Japanese sign-language communication system using 3D models. For real-time communication, we introduced an intelligent communication method and built the system as a client-server architecture on the Internet. A character model is stored on the clients in advance, and a series of animation parameters is sent instead of real image data. The input sentence is converted into a series of Korean or Japanese sign-language parameters at the server. The parameters are transmitted to the clients and used to generate the animation. We also employ emotional expressions, a variable frame allocation method, and cubic spline interpolation to enhance the realism of the animation. The proposed system is implemented with Visual C++ and the Open Inventor library on the Windows platform. Experimental results show that the system could serve as a non-verbal means of communication across the language barrier.

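The interpolation scheme mentioned above, cubic splines over sparse keyframe parameters with a variable number of frames per segment, can be sketched as follows. A Catmull-Rom cubic is used here for self-containment; the paper's exact spline and all names are assumptions:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic interpolation between keyframes p1 and p2, t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def interpolate_keyframes(keys, frames_per_segment):
    """Expand sparse animation-parameter keyframes into per-frame values.
    `frames_per_segment` lets each segment receive a different frame count
    (variable frame allocation). Illustrative names, not the paper's API."""
    keys = np.asarray(keys, dtype=float)
    padded = np.concatenate([keys[:1], keys, keys[-1:]])  # clamp the endpoints
    out = []
    for i, n in enumerate(frames_per_segment):
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            out.append(catmull_rom(padded[i], padded[i + 1],
                                   padded[i + 2], padded[i + 3], t))
    out.append(keys[-1])
    return np.array(out)
```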

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical problems that must be solved when developing vision-based computer animation, and we address both in this paper. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered regardless of illumination variations and self-occlusion. In the expression synthesis phase, the variations of the major facial feature points in the face images are tracked by optical flow and retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is achieved by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
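The RBF deformation step in the abstract above can be sketched as a scattered-data interpolation: solve for weights that reproduce the tracked feature-point displacements exactly at the feature points, then evaluate the interpolant at every mesh vertex. The Gaussian kernel and all names are assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_deform(vertices, centers, displacements, sigma=1.0):
    """Deform mesh vertices by Gaussian RBFs centered at tracked feature
    points: each vertex moves by a distance-weighted blend of the feature
    displacements. A minimal sketch of RBF-based local deformation."""
    V = np.asarray(vertices, dtype=float)        # (M, 3) mesh vertices
    C = np.asarray(centers, dtype=float)         # (K, 3) feature points
    D = np.asarray(displacements, dtype=float)   # (K, 3) tracked feature motion
    # Solve for weights so the deformation reproduces D exactly at C.
    Phi = np.exp(-np.sum((C[:, None] - C[None]) ** 2, axis=-1) / (2 * sigma ** 2))
    W = np.linalg.solve(Phi + 1e-9 * np.eye(len(C)), D)  # (K, 3) RBF weights
    # Evaluate the interpolant at every vertex and displace it.
    Psi = np.exp(-np.sum((V[:, None] - C[None]) ** 2, axis=-1) / (2 * sigma ** 2))
    return V + Psi @ W
```

Vertices near a feature point follow its displacement closely, while distant vertices are left almost untouched, which is what restricts the deformation to the local area around each feature.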

Model-Independent Facial Animation Tool (모델 독립적 얼굴 표정 애니메이션 도구)

  • 이지형;김상원;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.193-196 / 1999
  • Generating human facial expressions is one of the classic topics in computer graphics. Although much related research has been conducted, most studies have provided facial expression animation only for a specific model, because facial expressions require facial muscle information and other auxiliary information that are tied to a particular 3D face model. In this paper, we propose a tool that defines muscles on a generic 3D face model and edits the remaining information, so that expression animation becomes possible on a variety of 3D face models.


H-Anim-based Definition of Character Animation Data (캐릭터 애니메이션 데이터의 H-Anim 기반 정의)

  • Lee, Jae-Wook;Lee, Myeong-Won
    • Journal of KIISE: Computing Practices and Letters / v.15 no.10 / pp.796-800 / 2009
  • Currently, many software tools can generate 3D human figure models and animations thanks to advances in computer graphics technology. However, interoperability of human models across applications remains a problem because no common data model exists. To address this issue, the Web3D Consortium and ISO/IEC JTC1 SC24 WG6 have developed the H-Anim standard. H-Anim defines the structure of a human figure, but it does not include human motion data formats. This research aims to achieve interoperable human animation by defining the data for human motions in H-Anim figures. In this paper, we describe a syntactic method for defining motion data for H-Anim figures and its implementation. In addition, we describe a method for specifying the motion parameters needed to generate animations using an arbitrary character model data set created with a general graphics tool.

A Study of Technical Innovation Model of Digital Contents (디지털콘텐츠의 기술기반 진화모델연구)

  • Han, Chang-Wan
    • Cartoon and Animation Studies / s.10 / pp.159-178 / 2006
  • Digital media have advanced and spread worldwide simultaneously. Digital contents have evolved as the product technology and process technology of 3D digital animation and online games have completed their innovative development. Several problems can occur when product technology cannot lead the evolution of digital contents. To accomplish the evolution of both the product technology and the process technology of digital contents, the entire production system and process system must be modularized. Modularization of the system can be completed by consistent and stable integration of the platform. Modularization of the production system can bring about the modularization of product technology, which in turn can accelerate commercialization and market testing.


3D Face Modeling based on Statistical Model for Animation (애니메이션을 위한 통계적 모델에 기반을 둔 3D 얼굴모델링)

  • Oh, Du-Sik;Kim, Jae-Min;Cho, Seoung-Won;Chung, Sun-Tae
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.435-438 / 2008
  • In this paper, we present a method for finding 3D dense correspondence for face modeling, where faces are modeled for animation by combining facial Action Units (AUs). AUs are facial feature representations of expressions, emotions, and pronunciation, and can be constructed using the statistical method PCA (Principal Component Analysis). Doing so first requires finding corresponding points on the 3D models. In 2D, major facial feature points can be found with various algorithms, but these alone are not sufficient to represent a 3D face model. In this paper, to find correspondences between 3D face models, we project each 3D model into 2D using a cylindrical coordinate system, find correspondences by warping between the resulting 2D images, and then map them back to correspondences between the 3D models. This requires less computation than transforming the 3D models themselves and has the advantage of not deforming the original shapes.

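The projection described above, unwrapping a 3D face model into a 2D image via cylindrical coordinates so that image warping can establish correspondences, can be sketched per vertex as follows. This minimal sketch assumes the cylinder axis is the vertical (y) axis through the origin:

```python
import math

def cylindrical_project(vertex):
    """Map a 3D face-model vertex to cylindrical coordinates: the 2D image
    coordinates are the azimuth angle and the height, and the radius is
    kept so the mapping can be inverted back to 3D after warping."""
    x, y, z = vertex
    theta = math.atan2(x, z)   # azimuth around the vertical axis
    r = math.hypot(x, z)       # radius from the axis, retained for inversion
    return theta, y, r
```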

3D Library Platform Construction using Drone Images and its Application to Kangwha Dolmen (드론 촬영 영상을 활용한 3D 라이브러리 플랫폼 구축 및 강화지석묘에의 적용)

  • Kim, Kyoung-Ho;Kim, Min-Jung;Lee, Jeongjin
    • Cartoon and Animation Studies / s.48 / pp.199-215 / 2017
  • Although the drone was originally built for military purposes, it is now used in general-purpose applications and is actively employed for content creation and image acquisition. In this paper, we develop a 3D library module platform using 3D mesh model data generated from drone images and their point clouds. First, a large number of 2D images are captured by a drone, and point cloud data are generated from them. 3D mesh data are then acquired from the point cloud data. Finally, we develop a service library platform that makes the transformed 3D data available for multiple uses. Our platform can minimize the cost and time of content creation for special effects during the production of a movie, drama, or documentary. It can also contribute to training experts in digital content production in the fields of realistic media, special imagery, and exhibitions.

Dynamic Reconstruction Algorithm of 3D Volumetric Models (3D 볼류메트릭 모델의 동적 복원 알고리즘)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.2 / pp.207-215 / 2022
  • The high geometrical accuracy and realism of the latest volumetric technology ensure a high degree of correspondence between the real object and the captured 3D model. Nevertheless, because the 3D models obtained this way form a sequence of completely independent models from frame to frame, consistency of the model's surface structure (geometry) is not guaranteed across frames, the density of vertices is very high, and the interconnecting edges become very complicated. 3D models created with this technology are inherently different from models created in movie or video game production pipelines and are not suitable for direct use in applications such as real-time rendering, animation, simulation, and compression. In contrast, our method achieves consistent quality across the volumetric 3D model sequence by combining re-meshing, which ensures high inter-frame consistency of the model's surface structure, with gradual deformation and texture transfer through correspondence and matching of non-rigid surfaces. It maintains the consistency of volumetric 3D model sequence quality and provides post-processing automation.

A Study on Creation of 3D Facial Model Using Facial Image (임의의 얼굴 이미지를 이용한 3D 얼굴모델 생성에 관한 연구)

  • Lee, Hea-Jung;Joung, Suck-Tae
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.21-28 / 2007
  • Facial modeling and animation technology has long been studied in the computer graphics field. Facial modeling technology is widely used in virtual reality research such as MPEG-4, as well as in movies, advertising, and the game industry. Therefore, developing a 3D facial model that can interact with humans is essential for a more realistic interface. We developed a realistic and convenient 3D facial modeling system that uses only an arbitrary facial image. The system easily fits the Korean standard face model (generic model) to an arbitrary facial image: after the control points on the generic model's wireframe are fitted to the image, a 3D facial model is generated intuitively as the user elastically adjusts the control points. The resulting 3D facial model can be inspected and modified through translation, magnification, reduction, and rotation. We tested the system with 30 facial images of 630×630 pixels to verify its usefulness.

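The fitting step described above, aligning the generic model's control points to points on an arbitrary facial image before elastic per-point adjustment, is commonly initialized with a similarity (scale, rotation, translation) fit. A hypothetical sketch of such an initialization, not the paper's actual procedure:

```python
import numpy as np

def similarity_fit(model_pts, image_pts):
    """Fit 2D model control points to matching image points with a
    similarity transform: x_img ~ s * R @ x_model + t. Illustrative
    sketch of a standard Procrustes alignment, not the paper's method."""
    A = np.asarray(model_pts, dtype=float)
    B = np.asarray(image_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - ca, B - cb                      # center both point sets
    # Optimal rotation via SVD of the cross-covariance (orthogonal Procrustes).
    U, S, Vt = np.linalg.svd(A0.T @ B0)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
        S = S.copy()
        S[-1] *= -1
    s = S.sum() / (A0 ** 2).sum()                # optimal uniform scale
    t = cb - s * (R @ ca)                        # translation
    return s, R, t
```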