• Title/Summary/Keyword: 3D Computer Animation


Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society / v.18 no.3 / pp.9-16 / 2012
  • In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because of the difference in the required capture resolution between full-body capture and facial expression capture, the two have rarely been performed simultaneously. However, for generating natural animation, simultaneous capture of body and face is essential. For this purpose, we provide a method for capturing detailed facial expression with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce its dimensionality. The dimensionality reduction enables us to estimate the full data from a part of the data. We demonstrate the viability of our method by applying it to dynamic scenes.
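The core idea of the abstract above, estimating the full marker set from a sparse subset through a PCA basis built from an expression database, can be sketched as follows. This is a minimal illustration under assumed shapes (database rows as flattened marker vectors), not the paper's implementation:

```python
import numpy as np

def build_pca(X, k):
    """Build a k-component PCA basis from an expression database X (samples x dims)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                     # mean vector and k principal directions

def reconstruct(mean, basis, observed_idx, observed_vals):
    """Estimate the full marker vector from a small observed subset of markers."""
    A = basis[:, observed_idx].T            # restrict the basis to observed dimensions
    b = observed_vals - mean[observed_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # fit PCA coefficients to the sparse data
    return mean + coeffs @ basis            # expand back to full resolution
```

If the expression database is well explained by k components, a handful of markers suffices to pin down the coefficients, which is exactly what makes simultaneous body-and-face capture feasible with a coarse marker set.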

A Study on Replacing Method Global Illumination Using Ambient Occlusion (Ambient Occlusion을 이용한 Global Illumination 대체기법 연구)

  • Park, Jae-Wook;Kim, Yun-Jung
    • Cartoon and Animation Studies / s.36 / pp.493-510 / 2014
  • From game consoles to TV and Hollywood films, 3D rendering technology is involved in many fields. Up until the late 90s, the dominant computer image rendering method was rasterization, mainly using Phong shading, and until recently it remained the go-to method for movies and film animation. In the 21st century, the quality provided by ray tracing and the development of Global Illumination became much more realistic and thus popularized. However, despite its growing use in architectural rendering markets, Global Illumination in film animation and movies has been limited by its long render times. In this paper, by taking the core concept of each rendering method and considering it from a mathematical perspective, we adapt the Ambient Occlusion equation to the illuminance loop equation used in rasterization. This modified algorithm can reflect lighting in a diverse array of colors, as in Global Illumination, with a fast render time, as in rasterization; an example RenderMan shader is based on this new algorithm. In conclusion, combining Global Illumination's naturalistic lighting with rasterization's rendering speed yields a new method with a short rendering time and good quality. We hope animations and films can benefit from this algorithm through reduced budgets and better-quality output in VFX production.
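The modification described above, folding an ambient-occlusion term into the rasterization illuminance loop, can be sketched in plain Python rather than RenderMan Shading Language. This is an illustration of the idea only; the Lambertian model and parameter names are assumptions, not the paper's shader:

```python
def shade(albedo, normal, lights, ao):
    """Classic Lambertian illuminance loop with an ambient-occlusion factor
    standing in for Global Illumination's soft indirect shadowing.
    albedo: (r, g, b); normal: unit surface normal (x, y, z);
    lights: list of (unit_direction, (r, g, b) intensity); ao: factor in [0, 1]."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    out = [0.0, 0.0, 0.0]
    for direction, intensity in lights:      # the rasterization illuminance loop
        ndotl = max(0.0, dot(normal, direction))
        for c in range(3):
            out[c] += albedo[c] * intensity[c] * ndotl
    return tuple(ao * v for v in out)        # AO darkens indirectly occluded areas
```

Because the AO factor multiplies the already-colored sum over lights, creased and occluded regions darken while still picking up the hue of each light, which is the cheap approximation of GI's look that the abstract describes.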

Design and Implementation of Real-time Augmented Reality Building Information System Combined with 3D Map (3D 지도와 결합된 실시간 증강현실 건물 안내 시스템의 설계 및 구현)

  • Kim, Sang-Joon;Bae, Yoon-Min;Choi, Yoo-Joo
    • Journal of the Korea Computer Graphics Society / v.24 no.4 / pp.39-54 / 2018
  • Recently, augmented reality (AR)-based building information applications using a smart phone provide information in a static form irrespective of the distance between the user and a target building. If many target buildings are located close to each other, the distinguishability of the information is reduced due to overlapping information objects. Furthermore, it is difficult to intuitively grasp the current position of the user in previous AR-based applications. In this paper, to address these limitations, we have designed and implemented a novel building information system in which the location and size of information objects are adaptively displayed according to the locations of the user and the target buildings, and which allows users to intuitively understand their location through a 3D map that displays the user's location and all target buildings within a given distance in real time. The AR-based building information application proposed in this paper focuses on the building guide for Deoksu Palace in Jung-gu, Seoul.
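The distance-adaptive display described above can be illustrated with a simple scale function for an information object. This is a sketch of the idea only; the near/far distances and scale bounds are illustrative values, not the paper's parameters:

```python
def label_scale(distance_m, near=10.0, far=200.0, max_scale=1.0, min_scale=0.25):
    """Scale an AR information object inversely with user-to-building distance,
    clamped at both ends so nearby labels dominate and distant ones stay small
    but readable."""
    if distance_m <= near:
        return max_scale
    if distance_m >= far:
        return min_scale
    t = (distance_m - near) / (far - near)   # 0 at 'near', 1 at 'far'
    return max_scale + t * (min_scale - max_scale)
```

Shrinking far-away labels this way is one direct remedy for the overlap problem the abstract mentions when many target buildings sit close together in the camera view.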

A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan;Jeong, Cheol-Hui;Cho, Sun-Young;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society / v.17 no.1 / pp.9-16 / 2011
  • In this paper, we describe the development of a system for generating a 3-dimensional human face and predicting its appearance as it ages over subsequent years, using 3D-scanned facial data and photo images. It is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age using a scanned facial model and photo images. The texture mapping is done using three photo images: a front and two side images of a face. The facial definition parameter input tool is a user interface necessary for texture mapping, used to match facial feature points between photo images and a 3D-scanned facial model in order to obtain material values in high resolution. We have calculated material values for future facial models and predicted future facial models in high resolution using a statistical analysis of 100 scanned facial models.

Emotion-based Gesture Stylization For Animated SMS (모바일 SMS용 캐릭터 애니메이션을 위한 감정 기반 제스처 스타일화)

  • Byun, Hae-Won;Lee, Jung-Suk
    • Journal of Korea Multimedia Society / v.13 no.5 / pp.802-816 / 2010
  • Creating gestures from new text input is an important problem in computer games and virtual reality. Recently, there has been increasing interest in gesture stylization that imitates the gestures of celebrities, such as announcers. However, no attempt has been made so far to stylize gestures using emotions such as happiness and sadness, and previous research has not focused on real-time algorithms. In this paper, we present a system that automatically generates gesture animation from SMS text and stylizes the gestures according to emotion. A key feature of this system is a real-time algorithm for combining gestures with emotion. Because the system's platform is a mobile phone, we distribute the work between server and client, so the system guarantees real-time performance of 15 or more frames per second. First, we extract words that express feelings, and their corresponding gestures, from Disney video, and model the gestures statistically. We then apply the theory of Laban Movement Analysis to combine gesture and emotion. To evaluate our system, we analyze user survey responses.
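The stylization step, modulating a base gesture by an emotion-dependent factor, can be caricatured as follows. This is a deliberately crude stand-in for the paper's Laban-Movement-Analysis-based combination; the emotion weights and pose representation are invented for illustration:

```python
def stylize_gesture(frames, emotion, weights):
    """Scale the joint displacements of a base gesture by an emotion-dependent
    factor: larger, more energetic motion for an emotion like 'happy',
    damped motion for 'sad'. frames is a list of poses (flat joint values)."""
    w = weights[emotion]
    return [[w * v for v in pose] for pose in frames]
```

In Laban terms this only touches the Effort-like intensity of the motion; the actual system would also adjust timing and spatial qualities, but even this scalar version shows how one statistical gesture model can serve several emotions.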

The Aspectual Theory of the Cybercharacter (사이버캐릭터의 위상론)

  • 이선교
    • Archives of Design Research / v.12 no.4 / pp.182-190 / 1999
  • There has been a rapid paradigm shift with the overflow of computer-related terms such as information, digital, cyber, and virtual world, and with the change in the concept of time, as rendering the world into virtual time has become common. This study concerns cybercharacters working in broadcasting alongside the rapidly developing internet. Cybercharacters, including 3D animation developed from 2D animation, are manufactured with electronic media and computers and exist in electronic form. Though the emergence of cybercharacters has many genetic roots depending on their objectives, they have in common that they are made with 3D graphics and work in virtual space. The principal traits of cybercharacters lie in the extension of the interface function and in ecological growth. In cyberspace, the interface, the meeting point between a computer and its users, is the most important element, and cybercharacters, as a medium providing a new human interface, become effective with growing interest in virtual reality. Cybercharacters also keep ecological traits, and they can create added value through the infusion of imagery and the development of the network. These cybercharacters can play important parts in the continually developing cyberspace. The successful birth of a cybercharacter rests on technological power, funding, and cultural background; the information-entertainment of cybercharacters functions well only with the accompaniment of these three things. Cybercharacters can embody a subject that keeps a single issue as a central point of virtual reality, and can also be connected with the equity of the "Korean knowledge information society" in the cultural rules of the internet and in sociocultural identity.


Support Vector Machine Based Phoneme Segmentation for Lip Synch Application

  • Lee, Kun-Young;Ko, Han-Seok
    • Speech Sciences / v.11 no.2 / pp.193-210 / 2004
  • In this paper, we develop a real-time lip-synch system that animates a 2-D avatar's lip motion in synch with an incoming speech utterance. To realize real-time operation, we contain the processing time by invoking merge and split procedures that perform coarse-to-fine phoneme classification. At each stage of phoneme classification, we apply a support vector machine (SVM) to reduce the computational load while retaining the desired accuracy. The coarse-to-fine phoneme classification is accomplished via two stages of feature extraction: first, each speech frame is acoustically analyzed into 3 classes of lip opening using Mel Frequency Cepstral Coefficients (MFCC) as features; second, each frame's classification is further refined into a detailed lip shape using formant information. We implemented the system with 2-D lip animation, which shows the effectiveness of the proposed two-stage procedure in accomplishing a real-time lip-synch task. We observed that the method using phoneme merging and SVMs achieved about twice the recognition speed of a method employing a Hidden Markov Model (HMM). Typical latency per frame for our method was on the order of 18.22 milliseconds, while the HMM method applied under identical conditions resulted in about 30.67 milliseconds.
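The two-stage control flow above, a coarse lip-opening decision on MFCCs followed by a per-class refinement on formants, can be sketched as follows. A nearest-centroid rule stands in for the paper's trained SVMs, and all class names, centroids, and feature values are invented for illustration:

```python
def nearest_centroid(centroids):
    """Stand-in classifier (the paper trains SVMs): returns the label of the
    centroid nearest to the feature vector. centroids: {label: vector}."""
    def predict(x):
        return min(centroids,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in zip(x, centroids[lbl])))
    return predict

def classify_frame(mfcc, formants, coarse, fine):
    """Coarse-to-fine lip-shape classification: stage 1 picks one of the
    3 lip-opening classes from MFCC features; stage 2 refines it to a
    detailed lip shape from formant features, using a separate
    classifier per coarse class."""
    c = coarse(mfcc)
    return c, fine[c](formants)
```

The speed advantage the abstract reports comes from this structure: the expensive fine decision only ever runs against the handful of shapes inside one coarse class, never against the full phoneme inventory.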


Perception based video anticipation generation (선택적 주의 기법 기반의 영상의 기대효과 자동생성)

  • Yoon, Jong-Chul;Lee, In-Kwon
    • Journal of the Korea Computer Graphics Society / v.13 no.3 / pp.1-6 / 2007
  • The anticipation effect has long been used as a traditional technique to enhance dynamic motion in 2D animation. Basically, anticipation is an action in the opposite direction performed before the real action. In this paper, we propose a perception-based video anticipation method that guides the viewer's visual attention to an important region. Using an image-based attention map, we compute the visual attention region and then combine this map with the temporal saliency of the video. We apply the anticipation effect in these saliency regions using a blur kernel. Using our method, we can generate dynamic video motion that provides attentive guidance.
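The blur-kernel step guided by a saliency map can be sketched in one dimension. This is only an illustration of saliency-masked blurring, not the paper's method; the box kernel, threshold, and the choice of blurring the low-saliency pixels are assumptions:

```python
import numpy as np

def saliency_guided_blur(frame, saliency, radius=2, threshold=0.5):
    """Box-blur the pixels of a 1-D grayscale signal whose saliency falls
    below a threshold, leaving the high-attention region sharp so the eye
    is drawn to it. frame and saliency are same-length float arrays."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.convolve(frame, kernel, mode="same")  # blur everything once
    return np.where(saliency >= threshold, frame, blurred)  # keep salient pixels sharp
```

A real implementation would operate on 2-D frames over time and derive the mask from the combined spatial/temporal saliency, but the masking structure is the same.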


A Synchronized Playback Method of 3D Model and Video by Extracting Golf Swing Information from Golf Video (골프 동영상으로부터 추출된 스윙 정보를 활용한 3D 모델과 골프 동영상의 동기화 재생)

  • Oh, Hwang-Seok
    • Journal of the Korean Society for Computer Game / v.31 no.4 / pp.61-70 / 2018
  • In this paper, we propose a method for synchronized playback of a 3D reference model and video by extracting golf swing information from a learner's golf video, so that each motion can be precisely compared and analyzed at each position and time in the golf swing, and we present the implementation results. To synchronize the 3D model with the learner's swing video, the learner's golf swing is first recorded, and relative time information is extracted from the video according to the position of the golf club from the address posture to the finish posture. This time information is then applied to a 3D reference model, built by rigging onto a 3D model the motion of a professional golfer's swing captured at 120 frames per second with high-quality motion capture equipment. By synchronizing the 3D reference model with the learner's swing video, the learner can correct or learn his or her posture by precisely comparing it with the reference model at each position of the golf swing. Synchronized playback can thus replace the manual adjustment otherwise needed to compare the reference model with the learner's swing. Apart from the image processing step that detects each position of the golf posture, the method of automatically extracting the time information of each position from video and playing back in synchronization is expected to extend to sports training in general.
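The synchronization itself, warping the reference model's timeline onto the learner's using matched swing positions (address, top, impact, finish, and so on), amounts to a piecewise-linear time remap, sketched below. The key times are illustrative, not values from the paper:

```python
def remap_time(t, learner_keys, reference_keys):
    """Map a learner-video timestamp t to the reference model's timeline.
    learner_keys and reference_keys are matching, increasing key times for
    the same swing positions; between keys the mapping is linear."""
    for i in range(len(learner_keys) - 1):
        a, b = learner_keys[i], learner_keys[i + 1]
        if a <= t <= b:
            u = (t - a) / (b - a)                    # fraction through this segment
            ra, rb = reference_keys[i], reference_keys[i + 1]
            return ra + u * (rb - ra)
    raise ValueError("t lies outside the captured swing")
```

Playing the 3D model at `remap_time(t, ...)` while the video plays at `t` keeps the pro's posture aligned with the learner's at every key position, even when their swing tempos differ segment by segment.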

Ubiquitous Car Maintenance Services Using Augmented Reality and Context Awareness (증강현실을 활용한 상황인지기반의 편재형 자동차 정비 서비스)

  • Rhee, Gue-Won;Seo, Dong-Woo;Lee, Jae-Yeol
    • Korean Journal of Computational Design and Engineering / v.12 no.3 / pp.171-181 / 2007
  • Ubiquitous computing is a vision of our future computing lifestyle in which computer systems seamlessly integrate into our everyday lives, providing services and information in an anywhere, anytime fashion. Augmented reality (AR) can naturally complement ubiquitous computing by providing an intuitive and collaborative visualization and simulation interface to a three-dimensional information space embedded within physical reality. This paper presents a service framework and its applications for providing context-aware u-car maintenance services using augmented reality, which can support a rich set of ubiquitous services and collaboration. It realizes bi-augmentation between physical and virtual spaces using augmented reality. It also offers a context processing module to acquire, interpret, and disseminate context information. In particular, the context processing module considers the user's preferences and security profile to provide private and customer-oriented services. A prototype system has been implemented to support 3D animation, TTS (text-to-speech), augmented manuals, annotation, and pre- and post-augmentation services in ubiquitous car service environments.
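The context-processing idea, filtering the available maintenance services through the user's security profile and the current situation, can be sketched as follows. The field names and service list are illustrative, not the paper's schema:

```python
def select_services(services, context):
    """Return the names of the maintenance services permitted by the user's
    security clearance and relevant to the current vehicle situation.
    services: list of {'name', 'clearance', 'situation'} dicts;
    context: {'clearance': int, 'situations': set of active situations}."""
    return [s["name"] for s in services
            if s["clearance"] <= context["clearance"]   # security profile check
            and s["situation"] in context["situations"]]  # context relevance check
```

A filter like this is what turns a generic AR overlay into the "private and customer-oriented" service selection the abstract describes: two users at the same car can be shown different augmented content.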