• Title/Summary/Keyword: 3D Motion (3D 모션)

Search Results: 275

A Study on the 3-D Stereoscopic Video Techniques Using Motion Typography (모션 타이포그래피를 이용한 3차원 입체영상 제작방법에 관한 연구)

  • Lee, Jun-Sang;Park, Sung-Dae;Kim, Chee-Yong;Han, Soo-Whan
    • Journal of Korea Multimedia Society / v.14 no.8 / pp.1070-1081 / 2011
  • Among visual communication methods based on new media, typography has come to play a key role in information transfer with the development of motion graphics. Recently, research on stereoscopic image production has been actively carried out to increase the realism of created images, and various new approaches to producing stereoscopic images have been investigated; motion typography, however, has not yet been studied sufficiently in this context. In this study, stereoscopic images of moving typography are therefore proposed and produced through three experiments: text movement, camera movement, and finally editing and composition. From the stereoscopic motion typography produced by the proposed methods, various visual effects were obtained, and visual communication with a strong sense of reality was achieved.
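
The core depth cue behind such stereoscopic typography is horizontal parallax between the left-eye and right-eye views. The sketch below only illustrates that principle, not the authors' production pipeline (which used text movement, camera movement, and compositing in 3D tools): it renders one text layer twice with a small horizontal offset using Pillow, where the `parallax_px` value is an assumed parameter.

```python
# A minimal sketch, assuming Pillow is installed: build a left/right stereo pair of a
# text layer by shifting it horizontally. Shifting the left view right and the right
# view left gives negative parallax, so the typography appears to pop out of the screen.
from PIL import Image, ImageDraw, ImageFont

def stereo_text_pair(text, size=(640, 360), parallax_px=12):
    """Render the same text into left and right views with a horizontal offset."""
    font = ImageFont.load_default()
    views = []
    for eye_sign in (+1, -1):                      # +1: left-eye view, -1: right-eye view
        img = Image.new("RGB", size, "black")
        draw = ImageDraw.Draw(img)
        x = size[0] // 2 + eye_sign * parallax_px // 2
        draw.text((x, size[1] // 2), text, font=font, fill="white")
        views.append(img)
    return views  # (left, right); a larger parallax_px gives a stronger depth cue

left_view, right_view = stereo_text_pair("MOTION")
```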

A study of center of gravity on 3d character animation (3D 캐릭터 애니메이션에서의 무게중심 관한 연구)

  • Cho, Jae-Yun
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2006.02b / pp.356-361 / 2006
  • Motion capture technology is already widely used in many animations and games. Nevertheless, many animators still animate by hand. This is partly because of the cost and production time of motion capture, but also because applying human motion to a 3D character whose body shape differs from a human's often looks awkward; in addition, exaggerated expression, such as emphasizing or distorting a character's traits, is not possible with captured motion. The life of a character lies in expressing movement that is natural to its own personality and body shape. Only when its characteristics are exaggerated according to the plan and intent, yet do not look awkward to eyes accustomed to human motion, can a truly lively character be created. Characters of different shapes have different centers of gravity, and animating without considering this causes various problems; it is one of the main reasons a character looks unnatural. The purpose of this paper is to help 3D characters in games and animation look more vivid and realistic, focusing on understanding the center of gravity, a key element, and on how to make use of it. The center of gravity must be considered for natural character movement, and it also strongly affects the expression of a character's traits and personality. The value of this paper lies in informing animators of the importance of the center of gravity and in suggesting a new approach to it.
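
As a rough illustration of the center-of-gravity idea discussed above (not the author's method), the sketch below approximates a character's center of gravity as the mass-weighted average of its body-part positions; the segment names and masses are hypothetical values that an animator would replace with those of the actual rig.

```python
# A minimal sketch: approximate a rigged character's centre of gravity as the
# mass-weighted mean of its body-part centroids, which an animator can then compare
# against the supporting foot to judge whether a pose reads as balanced.
from dataclasses import dataclass

@dataclass
class BodyPart:
    name: str
    mass: float        # relative mass of the segment (arbitrary units)
    position: tuple    # (x, y, z) centroid of the segment in world space

def center_of_gravity(parts):
    total = sum(p.mass for p in parts)
    return tuple(sum(p.mass * p.position[i] for p in parts) / total for i in range(3))

# Hypothetical pose: a heavy torso leaning forward past the supporting foot.
pose = [
    BodyPart("torso", 40.0, (0.3, 1.2, 0.0)),
    BodyPart("head",   8.0, (0.4, 1.7, 0.0)),
    BodyPart("legs",  30.0, (0.0, 0.6, 0.0)),
    BodyPart("arms",  12.0, (0.3, 1.1, 0.0)),
]
cog = center_of_gravity(pose)
print("COG:", cog)  # if its x lies outside the foot contact area, the pose looks off-balance
```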


3D Position Information Extraction of Video Image for Motion Simulation (모션 시뮬레이션을 위한 동영상에서의 3D 위치 정보 추출)

  • 박혜선;강신국;박민호;김항준
    • Proceedings of the Korean Information Science Society Conference / 2003.04c / pp.223-225 / 2003
  • A pattern-based AR (Augmented Reality) system is a good way to accurately register virtual objects in real-time video. To implement an AR system, the 3D position information of the scene the camera is viewing must first be extracted. This paper proposes a system that automatically extracts the 3D position information of a chessboard pattern seen by the camera and renders a virtual object that moves in synchronization with it. The proposed method can extract relatively accurate 3D position information with a single camera, using only temporal information and no sensors or markers, and a natural 3D motion simulation can be realized from the extracted 3D position information.
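
A present-day approximation of the same setup (single camera, chessboard pattern, no sensors or extra markers) can be written with OpenCV's chessboard detector and solvePnP. The sketch below is not the original 2003 implementation, and the camera intrinsics are placeholders that would normally come from calibration.

```python
# A minimal OpenCV sketch of the same idea: recover the 3D pose of a chessboard
# relative to a single camera, which can then drive a virtual object.
import cv2
import numpy as np

PATTERN = (7, 6)          # number of inner corners of the chessboard
SQUARE = 0.025            # square size in metres (assumed)

# 3D coordinates of the chessboard corners in the board's own coordinate frame.
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1]])  # placeholder intrinsics
dist = np.zeros(5)                                               # assume no lens distortion

def board_pose(frame_bgr):
    """Return (rvec, tvec) of the chessboard relative to the camera, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    return (rvec, tvec) if ok else None   # tvec is the 3D position used to place the virtual object
```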


A Study on the Motion Base Control by Using Maya (Maya를 이용한 모션 베이스 컨트롤에 관한 연구)

  • Hong, Min-Sung;Kim, Joo-Chul
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.18 no.4 / pp.423-429 / 2009
  • The motion base response to a film shown in 4D cinemas or theme parks differs depending on the operator's control and the hardware system, which causes inefficiency in theater management. Several simulation software packages are on the market, but they are difficult for animation producers to use. This study introduces an approach in which a film producer simulates the movement of the motion base using the computer graphics animation tools most commonly adopted in production, creating both the 3D animation and the motion control data. Maya and 3D Cross were used to visualize the path of a motion camera. Attitude and axis control data were extracted from the movement of the virtual motion base and used to control a prototype of the motion base. As a result, motion control data can be created from computer graphics animation tools, so a film producer can create standard motion control data independently of the hardware and the operator's skill.
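
The data flow described above, animation attitude data converted into commands for a motion base prototype, can be illustrated with a small sketch. The exporter below is hypothetical (the paper's actual Maya / 3D Cross toolchain is not reproduced): per-frame roll, pitch, and heave values sampled from the virtual motion base are clamped and normalized into a range a 3-DOF platform controller could consume.

```python
# A minimal sketch, assuming a simple 3-DOF (roll, pitch, heave) platform and
# hypothetical axis limits: normalize one frame of attitude data to [-1, 1].
import math

def attitude_to_commands(roll_deg, pitch_deg, heave_m,
                         max_angle_deg=15.0, max_heave_m=0.2):
    clamp = lambda v, lim: max(-lim, min(lim, v))
    return {
        "roll":  clamp(roll_deg,  max_angle_deg) / max_angle_deg,
        "pitch": clamp(pitch_deg, max_angle_deg) / max_angle_deg,
        "heave": clamp(heave_m,   max_heave_m)   / max_heave_m,
    }

# Hypothetical per-frame samples exported from the animation tool (3 seconds at 30 fps).
frames = [(math.sin(i / 10) * 10, math.cos(i / 10) * 5, 0.05) for i in range(90)]
commands = [attitude_to_commands(*f) for f in frames]   # one command set per frame
```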


3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from each video frame with a non-parametric HT skin color model and template matching. For 3D head motion tracking, a cylindrical head model is exploited and projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning, a feature-based method is utilized: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points contain both head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video.
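
The RBF step mentioned in the abstract, propagating the displacements of the feature (control) points to the surrounding non-feature vertices, can be sketched as follows. The Gaussian kernel and the `sigma` value are assumptions; the paper does not specify the exact basis function or parameters.

```python
# A minimal sketch, assuming a Gaussian RBF and NumPy: displacements known at facial
# feature (control) points are interpolated to the other vertices of the face model.
import numpy as np

def rbf_deform(control_pts, control_disp, vertices, sigma=0.05):
    """Interpolate 3D displacements from control points to arbitrary vertices."""
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = kernel(control_pts, control_pts)                                     # (n, n)
    W = np.linalg.solve(K + 1e-8 * np.eye(len(control_pts)), control_disp)   # (n, 3) weights
    return kernel(vertices, control_pts) @ W                                 # (m, 3) displacements

# Hypothetical data: 10 feature points with known motion, 500 other face vertices.
ctrl = np.random.rand(10, 3)
disp = 0.01 * np.random.randn(10, 3)
verts = np.random.rand(500, 3)
deformed = verts + rbf_deform(ctrl, disp, verts)
```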

3D Avatar Motion Control Using the Script in the Mobile Environment (모바일 환경에서 스크립트를 이용한 3차원 아바타 동작 제어)

  • Choi Seunghyuk;Kim Jaekyung;Lim Soon-Bum;Choy Yoon-Chul
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.574-576 / 2005
  • Recently, mobile 3D has become one of the biggest issues among major mobile carriers, handset/chip manufacturers, and content providers. In this environment, research on 3D avatar animation on mobile devices is under way. Natural avatar motion gives the user a sense of life-likeness and believability, so the avatar can serve as a more familiar interface, and its use is therefore expanding beyond chatting and online games into education, shopping malls, business, and other fields. However, research so far has concentrated on generating natural motion, with little work on how to control an avatar easily; in particular, research on efficient avatar motion control in the mobile environment is lacking. This paper proposes a script-based avatar motion control technique for efficient avatar motion control in the mobile environment. First, an Avatar Motion Script is defined for motion generation, so that avatar animation can be recorded and played back. Second, a Multi-Level Script technique allows avatar motion to be controlled with only a small amount of data. Third, the structure is platform-independent, so motion can be generated and played back in any environment. Fourth, using key-frame-based motion, dynamic motion generation is possible in which the motion changes according to the avatar's situation.
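
The key-frame idea behind such a motion script can be illustrated with a small sketch. The script format below is hypothetical (it is not the paper's Avatar Motion Script specification): each joint stores a few key frames, and in-between poses are reconstructed by linear interpolation, which keeps the amount of data sent to a mobile client small.

```python
# A minimal sketch of a key-frame motion script: a few (time, angle) keys per joint,
# with in-between poses reconstructed on the device by linear interpolation.
motion_script = {
    "wave_right_arm": {
        "r_shoulder_z": [(0.0, 0.0), (0.5, 70.0), (1.0, 40.0), (1.5, 70.0)],  # (time s, angle deg)
        "r_elbow_z":    [(0.0, 0.0), (0.5, 20.0), (1.5, 20.0)],
    }
}

def sample(keys, t):
    """Linearly interpolate a joint angle at time t from its key frames."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keys[-1][1]

pose_at_075s = {joint: sample(keys, 0.75) for joint, keys in motion_script["wave_right_arm"].items()}
```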


Motion-based Controlling 4D Special Effect Devices to Activate Immersive Contents (실감형 콘텐츠 작동을 위한 모션 기반 4D 특수효과 장치 제어)

  • Kim, Kwang Jin;Lee, Chil Woo
    • Smart Media Journal / v.8 no.1 / pp.51-58 / 2019
  • This paper describes a gesture application for controlling the special effects of physical devices for 4D contents using the PWM (Pulse Width Modulation) method. A user operation recognized by an infrared sensor is interpreted as a command for 3D content control, several of which manipulate the devices that generate special effects so as to present physical stimuli to the user. With content controlled through an NUI (Natural User Interface) technique, the user can be placed directly into an immersive experience, which provides a higher degree of interest and attention. To measure the efficiency of the proposed method, we implemented a PWM-based real-time linear control system that manages the parameters of the motion recognition and animation controller using the infrared sensor and transmits the events.
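
The control idea, mapping a recognized gesture onto a PWM duty cycle for a special-effect device, can be sketched in a hardware-agnostic way. The function names and the duty-cycle range below are assumptions; the paper's actual infrared-sensor pipeline and device interface are not reproduced.

```python
# A minimal, hardware-agnostic sketch: a normalized gesture intensity from the motion
# recognizer is mapped linearly onto a PWM duty cycle that would drive a 4D effect
# device such as a fan or a vibration seat.
def gesture_to_duty_cycle(intensity, min_duty=10.0, max_duty=90.0):
    """Map a gesture intensity in [0, 1] to a PWM duty cycle in percent."""
    intensity = max(0.0, min(1.0, intensity))
    return min_duty + (max_duty - min_duty) * intensity

def send_pwm(channel, duty_cycle):
    # Placeholder for the real device driver (serial, GPIO, DMX, ...).
    print(f"channel {channel}: duty cycle {duty_cycle:.1f}%")

# Example: a fast swipe detected by the motion recognizer turns the wind effect up.
send_pwm(channel=1, duty_cycle=gesture_to_duty_cycle(0.8))
```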

A Study on the Development of Digital Space Design Process Using User′s Motion Data (사용자 모션데이터를 활용한 디지털 공간디자인 프로세스 개발에 관한 연구)

  • 안신욱;박혜경
    • Korean Institute of Interior Design Journal / v.13 no.3 / pp.187-196 / 2004
  • The purpose of this study is to develop 'a digital space design process using user's motion data' through theoretical and experimental study. In developing the design process, this study concentrated on finding a digital method that applies the user's interactive responses. Introducing the concept of spatial form generated by the user's experiences, we propose 'a digital design process using user's motion data'. In the experimental stage, the user's motion data were extracted and transferred as digital information by means of user behavior analysis, an optical motion capture system, an immersive VR system, 3D software, and computer programming. As a result, a useful digital design process was embodied by building up a digital form-transforming method using 3D software that provides internal algorithms. This study is meaningful as an attempt at a creative and interactive digital space design method that avoids the dehumanization of existing approaches, through theoretical study and experimental work.
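
One way to picture the kind of form transformation the process describes is to let captured motion data displace a surface. The sketch below is an assumed illustration only (the study's actual 3D-software algorithm is not reproduced): user positions from a motion-capture track are accumulated on a floor grid, and the visit density drives the height of the generated form.

```python
# A minimal sketch: accumulate captured user positions on a floor grid and use the
# visit density to displace the heights of a surface, so the generated spatial form
# reflects how the user actually moved.
import numpy as np

GRID = 20                     # 20 x 20 cells over a 10 m x 10 m floor
CELL = 10.0 / GRID

def form_from_motion(track_xy, max_height=2.0):
    """track_xy: (n, 2) array of captured user positions in metres."""
    density = np.zeros((GRID, GRID))
    idx = np.clip((track_xy / CELL).astype(int), 0, GRID - 1)
    for i, j in idx:
        density[i, j] += 1
    if density.max() > 0:
        density /= density.max()
    return density * max_height          # per-cell height of the generated form

heights = form_from_motion(np.random.rand(500, 2) * 10.0)  # hypothetical motion-capture track
```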

Development of Smart Phone App. Contents for 3D Sign Language Education (3D 수화교육 스마트폰 앱콘텐츠 개발)

  • Jung, Young Kee
    • Smart Media Journal / v.1 no.3 / pp.8-14 / 2012
  • In this paper, we develop smartphone app contents for 3D sign language to widen the opportunity for Korean sign language education for hearing-impaired and hearing people. In particular, we propose a sign language conversion algorithm that automatically transforms the structure of Korean phrases into the structure of sign language. We also implement a 3D sign language animation DB, using a motion capture system and data gloves to acquire natural motions. Finally, the UNITY 3D engine is used for real-time 3D rendering of the sign language motion. The proposed app, with a 3D sign language DB of 1,300 words, is being distributed through the iPhone App Store and Android app stores.
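
The conversion step can be illustrated with a deliberately tiny sketch. The dictionary and particle list below are illustrative assumptions, not the paper's algorithm or its 1,300-word DB: Korean words are mapped to sign-language glosses, particles without a signed counterpart are dropped, and the resulting gloss sequence is what the 3D animation DB would be queried with.

```python
# A minimal sketch, assuming the Korean input is already tokenized and morphologically
# analyzed: map words to sign glosses and drop particles that are not signed.
SIGN_DICT = {"나": "ME", "학교": "SCHOOL", "가다": "GO"}      # word -> gloss (toy dictionary)
PARTICLES = {"는", "은", "이", "가", "를", "을", "에"}         # dropped in sign structure

def to_gloss_sequence(tokens):
    glosses = []
    for word in tokens:
        if word in PARTICLES:
            continue                             # many Korean particles have no signed counterpart
        glosses.append(SIGN_DICT.get(word, word.upper()))
    return glosses

# "나는 학교에 가다" -> ['ME', 'SCHOOL', 'GO']; each gloss indexes a motion clip in the DB.
print(to_gloss_sequence(["나", "는", "학교", "에", "가다"]))
```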


A Case analysis of NFT digital art works (NFT 디지털아트 작품 사례분석)

  • Yoon, Heesun;Chung, Jeanhun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.55-61 / 2022
  • With the rapid development of information technology, Metaverse and Non-Fungible Token (hereinafter NFT) technology create new supply and demand markets not only for digital art creators but also for existing artists. As interest in and trading of virtual assets and coins increase, so does the demand for digital art trading in the NFT market. This study examines the theoretical background of NFTs, blockchain, and the Metaverse, and analyzes various expressions of NFT art that are currently popular. As a case study, 100 projects in the overall OpenSea ranking, including 2D graphics, 3D graphics, and motion graphics works, were selected and analyzed. From the creator's perspective, the graphic styles of NFT digital art are then divided into four types, 2D graphics, 3D graphics, 2D dynamic graphics, and 3D dynamic graphics, and analyzed. It is hoped that this study can suggest directions for graphic styles to digital art NFT creators.