• Title/Summary/Keyword: face animation

A Case Study of Short Animation Production Using Third Party Program in University Animation Curriculum

  • Choi, Chul Young
    • International Journal of Internet, Broadcasting and Communication, v.13 no.4, pp.97-102, 2021
  • The development of CG technology throughout the 2000s brought significant growth in the animation market. That growth increased the number of people required by related industries, which in turn increased the number of related majors at universities. CG application technologies are becoming more common with the advent of YouTube and virtual YouTubers, but a high level of technical skill is still required for students to find jobs, and it is not easy for a college animation curriculum to cover both technical and creative skills. Building students' creativity demands extensive production experience, which requires a great deal of knowledge and time when only tools such as Maya and 3ds Max are used. In this paper, we tried to devote more time to storytelling by minimizing the technical processes required for production and by handling the repetitive or difficult content-creation steps with third-party programs. Over a 12-week class, this experimental production process was applied from planning through completion of the animation works the students submitted to an advertising contest.

Detection of Face-element for Facial Analysis (표정분석을 위한 얼굴 구성 요소 검출)

  • 이철희;문성룡
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.2, pp.131-136, 2004
  • With the development of media, many kinds of information are now recorded, and facial expression is among the most interesting, because an expression reveals a person's inner state. Inner intention can be conveyed by gesture, but an expression carries more information; it can also be produced deliberately to mask one's real intent, and it has characteristics unique to each person, which makes classification possible. In this paper, to analyze expressions in USB camera video, we detect the components of the face, since the feature points that change with a person's expression lie on those components. For component detection, we capture one frame of the video, locate the face, segment the face region, and then detect the feature points of each facial component (a brief sketch of this pipeline follows below).
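
A minimal sketch of such a capture-and-detect pipeline, using OpenCV's stock Haar cascades as stand-ins; the paper does not name its detectors, so the cascade files and parameters here are assumptions:

```python
import cv2

# Stock OpenCV cascades stand in for the paper's unspecified detectors.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)              # USB camera, as in the paper
ok, frame = cap.read()                 # capture one frame of the video
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the camera")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face_roi = gray[y:y + h, x:x + w]              # separate the face region
    eyes = eye_cascade.detectMultiScale(face_roi)  # components inside the face
    print(f"face at ({x},{y},{w},{h}): {len(eyes)} eye candidate(s)")
```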

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB, v.14B no.4, pp.311-320, 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model combined with template matching detects the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto an initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked with the optical flow method. For facial expression cloning we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are displaced by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF); a sketch of this RBF step follows below. The experiments show that the developed vision-based animation system can create realistic facial animation, with robust head pose estimation and facial variation recovered from the input video.
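
A minimal sketch of the RBF step, which moves non-feature vertices smoothly according to the control-point displacements; the Gaussian kernel and its width heuristic are assumptions, since the abstract only states that RBFs are used:

```python
import numpy as np

def rbf_deform(ctrl_pts, ctrl_disp, verts, eps=1e-9):
    """Spread control-point displacements onto surrounding vertices with a
    Gaussian RBF interpolant (kernel choice is illustrative, not the paper's)."""
    d = np.linalg.norm(ctrl_pts[:, None, :] - ctrl_pts[None, :, :], axis=-1)
    sigma = d.max() / 2 + eps                   # heuristic kernel width
    phi = np.exp(-(d / sigma) ** 2)             # n_ctrl x n_ctrl basis matrix
    weights = np.linalg.solve(phi, ctrl_disp)   # per-axis RBF weights
    dv = np.linalg.norm(verts[:, None, :] - ctrl_pts[None, :, :], axis=-1)
    return verts + np.exp(-(dv / sigma) ** 2) @ weights

# Toy usage: two of four control points lift, and nearby vertices follow.
ctrl = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
disp = np.array([[0., 0, .1], [0, 0, 0], [0, 0, 0], [0, 0, .1]])
patch = np.random.rand(10, 3)
print(rbf_deform(ctrl, disp, patch).shape)      # (10, 3)
```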

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory, v.32 no.1, pp.39-48, 2005
  • In traditional 2D animation, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal components analysis, applied directly to the given key-framed and/or motion-captured facial animation data; the vertices in a single component have similar directions of motion in the animation (see the PCA sketch after this entry). For each component, the animation is examined to find an anticipation effect for the given facial expression, and the effect that best preserves the topology of the face model is selected. The best anticipation effect is then automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for motion-captured and key-framed facial animations. This paper deals with one part of the broad subject of applying the principles of traditional 2D animation to 3D animation, showing how to incorporate anticipation into 3D facial animation: animators can add anticipation simply by selecting a facial expression in the animation.
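
A rough sketch of the PCA-based grouping, under stated assumptions: per-vertex motion trajectories serve as the samples, and each vertex is assigned to the principal motion component it loads on most (the abstract does not give the paper's exact classification rule):

```python
import numpy as np

def group_vertices_by_motion(anim, n_components=4):
    """Group vertices with similar motion directions via PCA.
    anim: (n_frames, n_verts, 3) vertex positions over the animation."""
    disp = anim - anim[0]                                  # motion from frame 0
    feats = disp.transpose(1, 0, 2).reshape(anim.shape[1], -1)
    feats = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)   # principal directions
    proj = feats @ vt[:n_components].T                     # PCA coordinates
    return np.abs(proj).argmax(axis=1)   # vertex -> strongest component

# Toy usage: 20 frames of drifting motion over 50 vertices.
anim = np.cumsum(np.random.randn(20, 50, 3) * 0.01, axis=0)
print(np.bincount(group_vertices_by_motion(anim)))         # vertices per group
```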

Realtime Face Animation using High-Speed Texture Mapping Algorithm (고속 텍스처 매핑 알고리즘을 이용한 실시간 얼굴 애니메이션)

  • 최창석;김지성;최운영;전준현
    • Proceedings of the IEEK Conference, 1999.11a, pp.544-547, 1999
  • This paper proposes a high-speed texture mapping algorithm and applies it to real-time face animation. The mapping process is divided into pixel correspondence, Z-buffering, and pixel-value interpolation. Pixel correspondence and Z-buffering are computed exactly by the algorithm, while pixel-value interpolation is approximated without additional calculations. The algorithm dramatically reduces the operations needed for texture mapping: only three additions are needed to compute a pixel value (a scanline sketch of this incremental scheme follows below). We simulated a 256×240-pixel facial image with a face width of about 100 pixels. The results show frame generation speeds of about 60, 44, and 21 frames/second on Pentium PCs at 550 MHz, 400 MHz, and 200 MHz, respectively.
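
The three-additions claim fits a classic forward-difference scanline: texture coordinates and depth get per-pixel increments computed once per span, so the inner loop only adds. A sketch under that interpretation (the paper's exact span setup may differ):

```python
import numpy as np

def draw_scanline(tex, out, zbuf, y, x0, x1, u0, v0, z0, u1, v1, z1):
    """Fill one scanline with per-pixel increments: after span setup,
    each pixel costs three additions (du, dv, dz)."""
    n = max(x1 - x0, 1)
    du, dv, dz = (u1 - u0) / n, (v1 - v0) / n, (z1 - z0) / n  # per-span divides
    u, v, z = u0, v0, z0
    for x in range(x0, x1):
        if z < zbuf[y, x]:                   # Z-buffer visibility test
            zbuf[y, x] = z
            out[y, x] = tex[int(v), int(u)]  # nearest-texel fetch
        u += du; v += dv; z += dz            # the three additions per pixel

# Toy usage: map one row of an 8x8 checkerboard into a 16x16 frame.
tex = (np.indices((8, 8)).sum(0) % 2 * 255).astype(np.uint8)
out = np.zeros((16, 16), np.uint8)
zbuf = np.full((16, 16), np.inf)
draw_scanline(tex, out, zbuf, 8, 2, 14, 0, 0, 1.0, 7.99, 7.99, 0.5)
print(out[8])
```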

A Study on Facial Blendshape Rig Cloning Method Based on Deformation Transfer Algorithm (메쉬 변형 전달 기법을 통한 블렌드쉐입 페이셜 리그 복제에 대한 연구)

  • Song, Jaewon;Im, Jaeho;Lee, Dongha
    • Journal of Korea Multimedia Society, v.24 no.9, pp.1279-1284, 2021
  • This paper addresses the task of transferring facial blendshape models to an arbitrary target face. Blendshapes are a common method for facial rigging; however, producing a blendshape rig is a time-consuming step in current facial animation pipelines. We propose automatic blendshape facial rigging based on our blendshape transfer method, which computes the difference between the source and target facial models and then transfers the source blendshapes to the target face using a deformation transfer algorithm (a simplified sketch follows below). Our automatic method enables efficient production of a controllable digital human face; the results can be applied to applications such as games, VR chatting, and AI agent services.
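
A deliberately simplified sketch of the transfer idea: it replays per-vertex blendshape offsets on the target, whereas the paper builds on a triangle-based deformation transfer algorithm; vertex correspondence between the meshes is assumed:

```python
import numpy as np

def transfer_blendshape_deltas(src_neutral, src_shape, tgt_neutral):
    """Vertex-delta transfer: a simplified stand-in for triangle-based
    deformation transfer; assumes shared vertex correspondence."""
    delta = src_shape - src_neutral   # what the blendshape does to the source
    return tgt_neutral + delta        # replay those offsets on the target

# Toy usage: a 5-vertex "face" whose smile shape lifts every vertex.
src_neutral = np.zeros((5, 3))
src_smile = src_neutral + np.array([0.0, 0.02, 0.0])
tgt_neutral = np.random.rand(5, 3)
tgt_smile = transfer_blendshape_deltas(src_neutral, src_smile, tgt_neutral)
print(np.allclose(tgt_smile - tgt_neutral, [0.0, 0.02, 0.0]))   # True
```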

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association, v.9 no.7, pp.159-170, 2009
  • Extracting expression data and capturing a face image from video are very important for online 3D face animation. Recently there has been much research on vision-based approaches that capture an actor's expression in video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and its expression data from real-time video input. The system consists of three steps: face detection, facial feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area with a Haar-based classifier (a sketch of this step follows below). We use brightness and color information to extract the eye and lip data related to facial expression, taking 10 feature points from the eye and lip areas in accordance with the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track the expression data at about 8 fps.
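
A compact sketch of the detection step: a YCbCr skin mask gates the frame, then a Haar cascade verifies the face. The threshold values are common textbook ranges, not the paper's, and OpenCV stores the channels as Y, Cr, Cb:

```python
import cv2

def detect_face_skin(frame):
    """Skin-color gating in YCbCr followed by Haar-cascade verification.
    Skin thresholds are illustrative defaults, not the paper's values."""
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # skin pixels
    gated = cv2.bitwise_and(frame, frame, mask=skin)           # keep skin only
    gray = cv2.cvtColor(gated, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, 1.2, 4)              # verified faces

# Usage (hypothetical file name): faces = detect_face_skin(cv2.imread("frame.png"))
```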

A Character Speech Animation System for Language Education for Each Hearing Impaired Person (청각장애우의 언어교육을 위한 캐릭터 구화 애니메이션 시스템)

  • Won, Yong-Tae;Kim, Ha-Dong;Lee, Mal-Rey;Jang, Bong-Seog;Kwak, Hoon-Sung
    • Journal of Digital Contents Society, v.9 no.3, pp.389-398, 2008
  • There has been some research into speech systems for communication between hearing-impaired people and people with normal hearing, but owing to social indifference and a lack of marketability, instruction has relied on inefficient methods in which teachers tutor each student individually. To overcome this weakness, there is a need to develop content that uses 3D animation and digital technology. To establish a standard face and standard mouth shapes for building a character, this study collected extensive data on third- to sixth-grade elementary school students in Seoul and Gyeonggi, Korea, and drew up face and mouth-shape standards for such students. These data are not merely baseline data for developing content for the hearing impaired; they also offer standard measurements and standard types realistically applicable to them. As a system for understanding conversation through 3D character animation and for teaching self-expression, the character speech animation system combines 3D technology with motion capture to support effective language education for hearing-impaired children who need it at home and in special-education institutions.

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB, v.12B no.7 s.103, pp.795-802, 2005
  • In this paper, we present a facial expression recognition and synthesis system that automatically recognizes the 7 basic emotions and renders the face in a non-photorealistic style on a PDA. To recognize a facial expression, we first detect the face area within the image acquired from the camera, then apply a normalization procedure for geometric and illumination correction. To classify the expression, we found that combining Gabor wavelets with an enhanced Fisher model gives the best result (a Gabor filter-bank sketch follows below); in our case the output is a set of 7 emotion weights. This weighting information, transmitted to the PDA over a mobile network, drives the non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than linear interpolation.
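
A sketch of the Gabor feature stage only, with illustrative kernel parameters and crude mean-magnitude pooling; the enhanced Fisher model stage (a discriminant classifier on these features) is omitted:

```python
import cv2
import numpy as np

def gabor_features(face, wavelengths=(4, 8), orientations=8):
    """Pool Gabor filter responses over a normalized face crop.
    Kernel sizes and parameters are illustrative assumptions."""
    feats = []
    for lam in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lam, 0.5, 0)
            resp = cv2.filter2D(face.astype(np.float32), cv2.CV_32F, kern)
            feats.append(np.abs(resp).mean())   # one pooled value per filter
    return np.array(feats)

# Toy usage on a synthetic 64x64 "face" crop.
print(gabor_features(np.random.rand(64, 64)).shape)   # (16,)
```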

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference, 2005.05a, pp.113-115, 2005
  • This paper presents a method for 3D modeling of facial expression from frontal and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and a simple property of mirrors, it is robust, accurate, and inexpensive; it also avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect side views of the markers on the face (a sketch of the reflection geometry follows below). To optimize the system, we must select facial feature points closely associated with human emotion, so we refer to the FDPs (Facial Definition Parameters) and FAPs (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). We place colored dot markers on the selected feature points to detect facial deformation as the subject makes various expressions. Before computing the 3D coordinates of the extracted feature points, we group the points by facial part, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expression.
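
The "simple property of mirrors" is that a mirror view equals the view of a virtual camera reflected across the mirror plane, so one real camera plus two mirrors behaves like three synchronized views. A sketch of that reflection (the plane parameters are placeholders):

```python
import numpy as np

def reflect_across_mirror(points, plane_point, plane_normal):
    """Householder reflection of 3D points across a mirror plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n    # mirror image of each point

# Toy usage: reflect a marker across the plane x = 1.
marker = np.array([[0.2, 0.5, 0.3]])
print(reflect_across_mirror(marker, np.array([1.0, 0, 0]), np.array([1.0, 0, 0])))
# -> [[1.8 0.5 0.3]]
```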
