• Title/Summary/Keyword: 3D model animation


Real-time Avatar Animation using Component-based Human Body Tracking (구성요소 기반 인체 추적을 이용한 실시간 아바타 애니메이션)

  • Lee Kyoung-Mi
    • Journal of Internet Computing and Services, v.7 no.1, pp.65-74, 2006
  • Human tracking is a requirement for advanced human-computer interfaces (HCI). This paper proposes a method that uses a component-based human model to detect body parts, estimate human postures, and animate an avatar. Each body part consists of color, connection, and location information and matches a corresponding component of the human model. For human tracking, the 2D information of human posture is used for body tracking by computing similarities between frames. The depth information is decided by the relative location between components and is converted into a moving direction to build a 2-1/2D human model. While each body part is modeled by posture and direction, the corresponding component of a 3D avatar is rotated in 3D using the information transferred from the human model. We achieved a 90% tracking rate on a test video containing a variety of postures, and the rate increased as the proposed system processed more frames.
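
The paper does not publish its similarity measure, but a minimal sketch of frame-to-frame matching for a colored body part, assuming a histogram-intersection score over RGB pixels (all names here are illustrative, not from the paper), could look like:

```python
import numpy as np

def color_histogram(region, bins=16):
    """Quantize an (N, 3) array of RGB pixels into a normalized 3D histogram."""
    idx = (np.asarray(region) // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def similarity(region_a, region_b):
    """Histogram intersection: 1.0 means identical color distributions."""
    ha, hb = color_histogram(region_a), color_histogram(region_b)
    return float(np.minimum(ha, hb).sum())
```

A tracker would score each candidate region in the next frame against the part's region in the current frame and keep the best match.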


Cloth Simulation System for a Web-based 3D Fashion Shopping Mall (웹 기반 3D 패션몰을 위한 의복 시뮬레이션 시스템)

  • Kim, Ju-Ri;Joung, Suck-Tae;Jung, Sung-Tae
    • Journal of the Korea Institute of Information and Communication Engineering, v.13 no.5, pp.877-886, 2009
  • In this paper, we propose a new method for the design and implementation of a cloth simulation system for a Web-based 3D fashion shopping mall. The Web 3D shopping mall is implemented using a Web3D authoring tool, ISB, which provides easy mouse operation. 3D human models and clothing item models are designed by the low-polygon modeling method of 3D MAX. The designed models are exported to an XML file. Finally, the 3D human models and clothing item models are displayed and animated on the Web using an ActiveX control based on DirectX. We also implemented a textile palette and mapped it onto the clothes model by alpha blending during simulation.
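
The alpha blending the abstract mentions is the standard per-pixel mix of a textile swatch over the base cloth color. A minimal sketch (my own illustration, not the paper's DirectX code):

```python
import numpy as np

def alpha_blend(base, textile, alpha):
    """Blend a textile color over the base cloth color.

    alpha in [0, 1]: 0 shows only the base, 1 shows only the textile.
    Inputs are RGB triples or arrays of them; the result is clamped to uint8.
    """
    base = np.asarray(base, dtype=float)
    textile = np.asarray(textile, dtype=float)
    return (alpha * textile + (1.0 - alpha) * base).astype(np.uint8)
```

During simulation, re-blending each frame lets the user swap swatches from the textile palette without rebuilding the cloth mesh.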

A Study of Design and Simulation for 3-Dimensional Fashion (3차원 의상 설계 시뮬레이션에 관한 연구)

  • Kim, Ju-Ri;Lee, Hyun-Chang
    • Proceedings of the Korean Society of Computer Information Conference, 2010.07a, pp.23-26, 2010
  • In this paper, we propose a new method for the design and implementation of a Web-based 3D fashion shopping mall. The Web 3D shopping mall is implemented using a Web3D authoring tool, ISB, which provides easy mouse operation. 3D human models and clothing item models are designed by the low-polygon modeling method of 3D MAX. The designed models are exported to an XML file. Finally, the 3D human models and clothing item models are displayed and animated on the Web using an ActiveX control based on DirectX. We also implemented a textile palette and mapped it onto the clothes model by alpha blending during simulation.


Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B, v.9B no.5, pp.563-570, 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
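
The SVD factorization step is in the spirit of Tomasi-Kanade structure-from-motion: stack the tracked 2D feature coordinates into a measurement matrix, center each row, and split the rank-3 result into motion and shape factors. A minimal sketch under those assumptions (the paper's paraperspective correction and metric upgrade are omitted):

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix.

    Rows are the x then y image coordinates of P features over F frames.
    Returns motion M (2F x 3) and shape S (3 x P) with M @ S approximating
    the centered measurements.
    """
    W = W - W.mean(axis=1, keepdims=True)      # register: subtract per-row centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]       # shape factor
    return M, S
```

On noise-free rank-3 data the product M @ S reproduces the centered measurements exactly; with real tracks it gives the best rank-3 approximation.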

Method of Automatic Reconstruction and Animation of Skeletal Character Using Metacubes (메타큐브를 이용한 캐릭터 골격 및 애니메이션 자동 생성 방법)

  • Kim, Eun-Seok;Hur, Gi-Taek;Youn, Jae-Hong
    • The Journal of the Korea Contents Association, v.6 no.11, pp.135-144, 2006
  • The implicit surface model is convenient for modeling objects composed of complicated surfaces, such as characters and liquids. Moreover, it can express various forms of surfaces using a relatively small amount of data, and it can represent both the surface and the volume of objects. Therefore, the modeling technique can be applied efficiently to the deformation of objects and 3D animation. However, existing implicit primitives are parallel to an axis or symmetrical with respect to the axes, so it is not easy to use them in modeling objects with various forms of motion. In this paper, we propose an efficient animation method for modeling various poses of characters by matching them with motion-capture data, adding a rotation attribute to the metacube, one of the implicit primitives.
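
To illustrate the implicit-primitive idea the paper builds on: a point is inside the model when the summed field of all primitives exceeds an iso-level. A minimal metaball sketch with classic inverse-square falloff (the paper's metacube primitive and its rotation attribute are not reproduced here):

```python
import numpy as np

def metaball_field(p, centers, radii):
    """Summed inverse-square field of spherical primitives at point p.

    The implicit surface is the iso-level f(p) = 1; a single ball of
    radius r contributes exactly 1 at distance r from its center.
    """
    p = np.asarray(p, dtype=float)
    d2 = ((np.asarray(centers, dtype=float) - p) ** 2).sum(axis=1)
    r2 = np.asarray(radii, dtype=float) ** 2
    return float((r2 / np.maximum(d2, 1e-12)).sum())

def inside(p, centers, radii, iso=1.0):
    """True when p lies inside the blended implicit surface."""
    return metaball_field(p, centers, radii) >= iso
```

Overlapping fields blend smoothly, which is why such primitives suit soft character deformation.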


DEVELOPMENT OF AN INTEGRATED MODEL OF 3D CAD OBJECT AND AUTOMATIC SCHEDULING PROCESS

  • Je-Seung Ryu;Kyung-Hwan Kim
    • International Conference on Construction Engineering and Project Management, 2009.05a, pp.1468-1473, 2009
  • Efficient communication of construction information is critical for successful project performance, and Building Information Modeling (BIM) has emerged as a tool for such communication. Through 3D CAD objects, it is possible to check interferences and collisions between objects in advance. In addition, 4D simulation, based on 3D objects integrated with time information, makes it possible to review scheduling and to detect potential scheduling errors. However, current scheduling simulation remains at the level of animation because 3D objects and scheduling data are integrated manually. Accordingly, this study aims to develop an integrated model of 3D CAD objects that automatically creates scheduling information.
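
The core of 4D simulation is the join between object identifiers and activity dates. A minimal sketch of that linkage, with hypothetical names of my own choosing (the study's actual data model is not described in the abstract):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Activity:
    """A schedule activity linked to one 3D CAD object by its id."""
    object_id: str
    start: date
    finish: date

def visible_objects(activities, day):
    """4D simulation step: ids of objects whose activity has started by `day`."""
    return sorted(a.object_id for a in activities if a.start <= day)
```

Stepping `day` through the schedule and rendering only the returned objects turns a static 3D model into a construction-sequence review.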


The Body Shape and 3D Human Body Model for the Electronic Commerce of Clothing Products for College Women in Their Twenties (의류제품(衣類製品)의 전자상거래(電子商去來)를 위한 20대(代) 여대생(女大生)의 체형(體型) 및 3D 인체(人體) 모형(模型))

  • Kim, Hyo-Sook;Lee, So-Young
    • Journal of Fashion Business, v.8 no.4, pp.94-103, 2004
  • The purpose of this study was to help activate electronic business transactions of clothes. The subjects were 149 college women aged 19-24, the group most likely to buy products through the Internet. By comparing the 149 women's body shapes with a 3D model, each woman's body shape could be judged objectively. We built an average 3D model from the body measurements of the 19-24-year-old women, who are major customers of Internet shopping malls. By understanding the difference between real somatotype and perceived somatotype, we can reduce disadvantages such as clothing returns. The virtual fitting model can also be used for Internet shopping malls, animation work, fashion shows, and advertising, which shows the expected value of this study.

Simulation of Virtual Marionette with 3D Animation Data (3D Animation Data를 활용한 가상 Marionette 시뮬레이션)

  • Oh, Eui-Sang;Sung, Jung-Hwan
    • The Journal of the Korea Contents Association, v.9 no.12, pp.1-9, 2009
  • A doll made of various materials is a miniature of the human body, and it has long been a component of puppet shows as part of human cultural activity. However, demand and supply in the puppet-show industry keep decreasing, since professional puppeteers are disappearing rapidly and the skill is difficult to learn. Therefore, many studies of robotic marionettes for the automation of puppet shows have been conducted internationally, and more efficient structural design and process development are required for better movement and expression of puppets with motor-based controllers. In this research, we suggest an effective way to express a marionette's motion using motion data based on motion capture and a 3D graphics program; by applying 3D motion data and the proposed simulation process, time and expense can be saved when a robotic marionette system is actually built.
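
One way motion-capture data can drive a robotic marionette, sketched under my own simplifying assumption (not stated in the abstract) that each string hangs from a fixed overhead anchor to a puppet joint, so each frame the motor winds or releases the change in string length:

```python
import math

def string_length(anchor, joint):
    """Straight-line string length from the overhead anchor to a joint."""
    return math.dist(anchor, joint)

def motor_delta(anchor, prev_joint, next_joint):
    """Length the motor must release (+) or wind in (-) between two frames."""
    return string_length(anchor, next_joint) - string_length(anchor, prev_joint)
```

Running this over every joint trajectory from the capture data yields per-frame motor commands for simulation before any hardware is built.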

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • Korea Information Convergence Society: Conference Proceedings, 2008.06a, pp.53-56, 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: 1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); 2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components of the system are: RITEL, a question-and-answer system searching raw text, which produces a text (the answer) and attitudinal information; this attitudinal information is then processed to deliver expressive tags; and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio-visual scene. The project also puts a lot of effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering.
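
The phoneme-to-viseme step in such a pipeline is a many-to-one lookup, since several phonemes share one mouth shape. A minimal sketch; the bilabial (p, b, m) and labiodental (f, v) groups match the MPEG-4 viseme table, but the remaining ids and the vowel grouping here are illustrative, not the project's actual mapping:

```python
# Hypothetical, partial phoneme-to-viseme grouping.
PHONEME_TO_VISEME = {
    "p": 1, "b": 1, "m": 1,     # bilabial closure (MPEG-4 viseme 1)
    "f": 2, "v": 2,             # labiodental (MPEG-4 viseme 2)
    "t": 4, "d": 4,             # alveolar stop (id illustrative)
    "a": 10, "e": 11, "i": 12,  # vowels (grouping illustrative)
}

def visemes(phonemes):
    """Map a phoneme sequence to viseme ids, dropping unmapped symbols."""
    return [PHONEME_TO_VISEME[p] for p in phonemes if p in PHONEME_TO_VISEME]
```

The renderer then interpolates keypoints between successive viseme targets in time with the synthesized audio.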
