• Title/Summary/Keyword: 3D Computer Animation

Search Results: 235

A STUDY FOR MODELING AND ANIMATION OF A HUMAN WITH BONE STRUCTURE AND CLOTHES

  • Suzuki, Tohru; Yamamoto, Toshiyuki; Nagase, Hiroshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.821-824 / 2009
  • A method for visualizing the human body in various poses is proposed. The method offers three 3D views of the same body: first, a body wearing clothes specified from dress patterns; second, the body shape; and lastly, the body's bone structure. For this purpose, standard body data constructed from CT images are prepared, and individual bodies are measured with a 3D body scanner. The present status of our research is limited to still images, though we are working to accommodate various poses.


Authoring Software Development of 3D Natural Environment for Realistic Contents (실감형 콘텐츠 제작을 위한 3D 자연환경 저작 소프트웨어 개발)

  • Lee, Ran-Hee; Lee, Kyu-Nam; Kang, Im-Chul
    • The Journal of the Korea Contents Association / v.7 no.9 / pp.108-116 / 2007
  • Many graphics researchers are now interested in 3D outdoor environments. They want to express more realistic natural backgrounds with natural phenomena, because computer hardware has become more powerful and demand for 3D natural-environment backgrounds in content has grown. E-sports and simulation content set outdoors, in particular, need a more natural environment for their backgrounds than indoor content does, and this technology is very important for the quality of 3D outdoor content. We propose EMtool (Environment Making Tool), a software package for authoring natural environments for realistic content. EMtool has been developed to depict the relationships and interactions between natural phenomena, and it includes methods for creating natural environments and natural objects. The proposed results are applied to real-time 3D content such as 3D golf games and simulations of natural objects.

H-Anim-based Definition of Character Animation Data (캐릭터 애니메이션 데이터의 H-Anim 기반 정의)

  • Lee, Jae-Wook; Lee, Myeong-Won
    • Journal of KIISE: Computing Practices and Letters / v.15 no.10 / pp.796-800 / 2009
  • Thanks to advances in computer graphics technology, there are now many software tools that can generate 3D human figure models and animations. However, interoperability of human body models across applications remains a problem because no common data model exists. To address this issue, the Web3D Consortium and ISO/IEC JTC1 SC24 WG6 have developed the H-Anim standard. However, although H-Anim defines the structure of a human figure, it does not include a human motion data format. This research aims to achieve interoperable human animation by defining motion data for H-Anim figures. In this paper, we describe a syntactic method for defining motion data for H-Anim figures and its implementation. In addition, we describe a method of specifying the motion parameters necessary for generating animations using an arbitrary character model created with a general graphics tool.
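A hedged sketch of the idea: H-Anim itself is defined as X3D markup, but the separation this paper targets — a standard joint hierarchy plus per-joint motion channels sampled over time — can be illustrated with a small, hypothetical Python data structure. The joint names, channel layout, and linear interpolation here are illustrative assumptions, not the H-Anim standard's or the authors' actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    # An H-Anim-style joint: a named node with child joints and an
    # optional rotation channel keyed by time. The channel layout is
    # a hypothetical stand-in for the paper's motion data format.
    name: str
    children: list = field(default_factory=list)
    keys: dict = field(default_factory=dict)  # time -> (x, y, z) Euler angles, degrees

def sample(joint: Joint, t: float):
    """Linearly interpolate the joint's rotation channel at time t."""
    times = sorted(joint.keys)
    if t <= times[0]:
        return joint.keys[times[0]]
    if t >= times[-1]:
        return joint.keys[times[-1]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            r0, r1 = joint.keys[t0], joint.keys[t1]
            return tuple((1 - a) * x0 + a * x1 for x0, x1 in zip(r0, r1))

# A two-joint fragment of a humanoid skeleton with one motion channel
elbow = Joint("l_elbow", keys={0.0: (0.0, 0.0, 0.0), 1.0: (90.0, 0.0, 0.0)})
shoulder = Joint("l_shoulder", children=[elbow])
```

With this layout, `sample(elbow, 0.5)` yields `(45.0, 0.0, 0.0)`; the point of keeping the motion channel inside the joint node, as the paper proposes for H-Anim, is that any tool reading the skeleton can also read the animation.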

Formative Properties of 3D Animation Based on Gestalt Theory of Visual Perception -Focusing on the Korean Film <Welcome to Dongmakgol>- (게슈탈트 시지각 이론에 의한 3D 애니메이션의 조형성 -한국 영화 <웰컴투 동막골>을 중심으로-)

  • Kim, Kyung-Eun; Yun, Young-Du
    • Proceedings of the Korea Contents Association Conference / 2006.05a / pp.279-284 / 2006
  • The field of film art has grown, with its range of expression broadening as media have developed. Recently, interest and effort have concentrated on digital film using computer graphics (CG), one form of special effects (SFX). In this paper, I examine the formative properties, based on Gestalt theory, of <Welcome to Dongmakgol>, one of the most successful Korean films of 2005. As a result, I confirmed that inserting 3D animation into a film can convey, as the producer intends, a fantasy or virtual world that cannot be experienced in reality, and that it was used to free scenes from the spatial restrictions of live filming. I also confirmed that partial modeling animation, following the Gestalt principle that perception gives totality to objects and seeks closure, can help convey the meaning of the film.


A Study on the Construction of a Real-time Sign-language Communication System between Korean and Japanese Using 3D Model on the Internet (인터넷상에 3차원 모델을 이용한 한-일간 실시간 수화 통신 시스템의 구축을 위한 기초적인 검토)

  • Kim, Sang-Woon; Oh, Ji-Young; Aoki, Yoshinao
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.7 / pp.71-80 / 1999
  • Sign-language communication can be a useful way of exchanging messages between people who use different languages. In this paper, we report an experimental study on the construction of a Korean-Japanese sign-language communication system using 3D models. For real-time communication, we introduced an intelligent communication method and built the system in a client-server architecture on the Internet. A character model is stored in advance on the clients, and a series of animation parameters is sent instead of real image data. An input sentence is converted into a series of Korean or Japanese sign-language parameters at the server; the parameters are then transmitted to the clients and used to generate the animation. We also employ emotional expressions, a variable frame-allocation method, and cubic spline interpolation to enhance the realism of the animation. The proposed system is implemented with Visual C++ and the Open Inventor library on the Windows platform. Experimental results show that the system could serve as a non-verbal means of communication across the linguistic barrier.
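The abstract's key bandwidth trick — send keyframed animation parameters rather than images, then smooth them with cubic spline interpolation on the client — can be sketched as follows. The paper does not say which cubic spline it uses; this sketch assumes a Catmull-Rom spline (one common cubic choice for keyframe channels) and integer-spaced keyframes, both illustrative assumptions:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic spline: value between keys p1 and p2 at t in [0, 1]."""
    return 0.5 * (
        2.0 * p1
        + (p2 - p0) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t
    )

def interpolate_channel(keys, t):
    # Interpolate one transmitted animation-parameter channel (e.g. a
    # joint angle of the signing avatar) at time t, given keyframe
    # values assumed to be sampled at integer times 0, 1, 2, ...
    # Endpoint keys are duplicated so the spline covers the full range.
    padded = [keys[0]] + list(keys) + [keys[-1]]
    i = min(int(t), len(keys) - 2)
    return catmull_rom(padded[i], padded[i + 1], padded[i + 2], padded[i + 3], t - i)
```

The spline passes exactly through every keyframe, so the client reproduces the server's poses while filling in smooth motion between them — which is what makes sending sparse parameters instead of video frames viable.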


Behavior Generation for an Android-oriented 3D Character Using Kinect (Kinect를 활용한 안드로이드용 3D 캐릭터 행동 제작)

  • Choi, Hong-Seon; Lee, Kang-Hee; Lee, Won-Joo
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.31-32 / 2012
  • This paper addresses a method of easily producing a 3D character's motions via Kinect-based motion capture, as a step prior to authoring the character's animation. It also proposes a technique for exporting the produced character animation to the md2 format and playing it back using OpenGL on the Android platform. Building on this, we aim to produce a variety of emotional behaviors for a software robot or agent acting as a helper in future smartphone augmented reality.


Implementation of 3D Korean Manual Alphabet Animation (수화 아바타를 위한 3D 지화 애니메이션)

  • Ahn, Chang; Bae, Woo-Jeong; Song, Hang-Sook
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.627-630 / 2001
  • Hearing- and speech-impaired people, who are members of the information society like everyone else, face difficulties not only in everyday situations that require conveying and exchanging information, but also in educational settings. Assistive systems for the disabled are being developed to overcome these difficulties, but the results are still limited. The goal of this paper is to provide a better educational environment by building a system that supports information delivery to hearing- and speech-impaired people through interaction with a computer. That is, we design and build a system that conveys information through an avatar performing sign language in cyberspace. As a first step, a word received through an input device is decomposed into its graphemes and expressed as 3D fingerspelling animation; this result will serve as the foundation of a sign-language avatar system.
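The grapheme-decomposition step this abstract describes — splitting an input word into the jamo that the fingerspelling animation plays back one by one — follows directly from the Unicode layout of precomposed Hangul syllables. A minimal sketch (the mapping from jamo to animation clips is left out; the decomposition arithmetic itself is standard Unicode):

```python
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"                     # 19 leading consonants
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"              # 21 vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 tails + none

def to_jamo(word):
    """Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into jamo —
    the grapheme sequence a fingerspelling animation would play back."""
    jamo = []
    for ch in word:
        s = ord(ch) - 0xAC00
        if not 0 <= s <= 11171:            # pass non-Hangul characters through
            jamo.append(ch)
            continue
        jamo.append(CHOSEONG[s // 588])          # 588 = 21 vowels * 28 tail slots
        jamo.append(JUNGSEONG[(s % 588) // 28])
        if s % 28:                               # tail consonant only if present
            jamo.append(JONGSEONG[s % 28])
    return jamo
```

For example, `to_jamo("한글")` produces the six jamo ㅎ, ㅏ, ㄴ, ㄱ, ㅡ, ㄹ, each of which the avatar would render as one manual-alphabet handshape.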


Problems of Using Cyberdramaturgy in Modern Foreign Cinematography

  • Portnova, Tatiana V.
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.25-30 / 2022
  • The article is devoted to the topical problem of the use of digital technologies in modern cinema in developed countries. The purpose of the study is to clarify the term "cyberdramaturgy" and the problems of its use in modern film production. The research methodology is based on a systematic approach and includes general scientific methods (analysis, synthesis, deduction, induction) as well as several special methods: content analysis of the scientific literature on the research topic, a sociological survey, and statistical analysis. The survey results were analyzed with the Neural Designer program (a tool for advanced statistical analytics) and rendered as graphical diagrams for clarity. Answers in 75 questionnaires were scored as the average across six analysis criteria, which allowed all calculations to be brought to a 10-point scale. The study led the author to the following conclusions: directors believe that using cyber-analogues of actors and backgrounds blurs genres and hybridizes cinema and animation, and they are also concerned about the prospect of the director being replaced by a program; screenwriters are chiefly concerned with machine-generated scripts of almost infinite variability beyond human imagination; producer-directors believe that the development of cyberdramaturgy will lead to entirely new standards of cinematic quality, sharply different from the traditional assessment of acting and scene setting, with 3D animation prized as the highest category of the art. Such innovations effectively devalue all international cinematography awards, since cyberdrama reduces the value of cyber-actors to zero: an "Oscar" or a "Golden Globe" cannot be handed to a digital double or a standalone cyber-model used in a film in place of actors.

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and we address both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. Dynamic head pose estimation robustly estimates a 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and updating the template dynamically makes it possible to recover the head pose despite lighting variations and self-occlusion. In the expression-synthesis phase, the movements of the major facial feature points are tracked with optical flow and retargeted to the 3D face model, while an RBF (Radial Basis Function) is exploited to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis directly tracks the variations of the major feature points and indirectly estimates the variations of the regional feature points. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
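The RBF retargeting step described above has a standard core: solve for kernel weights so the deformation exactly reproduces the tracked displacement at each feature point, then apply the same weighted kernel sum to nearby mesh vertices. A minimal sketch under stated assumptions — the Gaussian kernel, the sigma value, and the 2D points are illustrative choices, not the paper's actual configuration:

```python
import math

def gaussian(r, sigma=1.0):
    # Gaussian RBF kernel; the paper does not specify its kernel choice.
    return math.exp(-(r / sigma) ** 2)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def solve(A, b):
    # Gaussian elimination with partial pivoting on an augmented matrix.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_weights(controls, displacements, sigma=1.0):
    # Solve Phi w = d per coordinate so the deformation exactly
    # reproduces the tracked displacement at every feature point.
    Phi = [[gaussian(dist(ci, cj), sigma) for cj in controls] for ci in controls]
    return [solve(Phi, [d[k] for d in displacements])
            for k in range(len(displacements[0]))]

def deform(vertex, controls, weights, sigma=1.0):
    # Displace an arbitrary mesh vertex by the weighted sum of kernels
    # centered at the feature points: nearby vertices move almost as
    # much as the feature point, distant ones barely move.
    return tuple(
        v + sum(w[i] * gaussian(dist(vertex, controls[i]), sigma)
                for i in range(len(controls)))
        for v, w in zip(vertex, weights)
    )
```

Because the weights are solved from the interpolation conditions, evaluating `deform` at a feature point returns exactly its tracked displacement, while the kernel falloff confines the deformation to the local area around the feature points, as the abstract describes.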

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon; Lee, Yu-Jin; Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, real-time computer animation technology is becoming more important. However, the computer animation process consists largely of manual work or marker-based motion capture, and even experienced professionals need a very long time to obtain realistic results. To solve these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of implementing natural human movement in a deep-learning and Kinect-camera-based animation production system, each chosen for its environmental characteristics and accuracy. The first method uses only a Kinect camera; the second uses a Kinect camera with a calibration algorithm; the third uses only a deep learning model; and the fourth uses a deep learning model together with the Kinect. Experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, produced the best results.
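The fourth configuration combines the deep-learning pose estimate with the Kinect skeleton. The paper's actual combination rule is not given in the abstract; one simple, purely illustrative way such a combination can work is a per-joint weighted average of the two pose estimates:

```python
def fuse_skeletons(kinect, model, w_kinect=0.5):
    # Hypothetical per-joint fusion: blend two skeleton estimates
    # (dicts mapping joint name -> 3D position) with a fixed weight.
    # A real system might vary the weight per joint, e.g. trusting
    # the sensor less for joints the camera sees as occluded.
    return {
        name: tuple(w_kinect * k + (1.0 - w_kinect) * m
                    for k, m in zip(kinect[name], model[name]))
        for name in kinect
    }
```

For example, fusing a Kinect estimate of `(0, 0, 0)` for a joint with a model estimate of `(2, 2, 2)` at equal weight yields `(1, 1, 1)`; the appeal of any such scheme is that sensor noise and model bias can partially cancel, which is consistent with the combined method performing best in the paper's experiments.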