Title/Summary/Keyword: Motion Capture Animation

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon; Lee, Yu-Jin; Park, Goo-man
    • Journal of Broadcast Engineering, v.26 no.3, pp.269-282, 2021
  • As the distribution of 3D content such as augmented reality and virtual reality grows, real-time computer animation technology is becoming increasingly important. However, computer animation is still produced mostly by hand or with marker-based motion capture, which requires experienced professionals and a very long time to obtain realistic results. To address these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we study four ways of implementing natural human movement in a deep learning and Kinect camera-based animation production system, each chosen for its environmental characteristics and accuracy. The first method uses a Kinect camera alone; the second adds a calibration algorithm to the Kinect camera; the third uses a deep learning model alone; and the fourth uses the deep learning model and the Kinect together. Experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, gave the best results. (A minimal sketch of one possible fusion step appears below.)
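
The paper does not publish its fusion algorithm; the following is a minimal illustrative sketch of how two skeleton estimates might be combined, assuming both the Kinect skeleton stream and a deep learning pose estimator return per-joint 3D positions with confidence scores. All names and shapes here are assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical inputs: per-joint 3D positions (J x 3) and confidences (J,)
# from a Kinect skeleton stream and a deep-learning pose estimator.
def fuse_skeletons(kinect_joints, kinect_conf, dl_joints, dl_conf):
    """Confidence-weighted fusion of two skeleton estimates.

    Not the paper's published method; just one plausible way to combine
    the two sources the paper compares.
    """
    w_k = kinect_conf[:, None]           # (J, 1) weights for Kinect joints
    w_d = dl_conf[:, None]               # (J, 1) weights for DL-model joints
    total = np.clip(w_k + w_d, 1e-6, None)
    return (w_k * kinect_joints + w_d * dl_joints) / total

# Example with a 25-joint skeleton (the Kinect v2 joint count).
J = 25
fused = fuse_skeletons(
    np.random.rand(J, 3), np.random.rand(J),
    np.random.rand(J, 3), np.random.rand(J),
)
print(fused.shape)  # (25, 3)
```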

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su; Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering, v.9 no.1, pp.117-122, 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. FCPs play an important role in representing expressions for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract FCPs with an expensive motion capture device or with markers, which are inconvenient and place a psychological burden on the subject. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve conventional algorithms for detecting facial components, which form the basis of FCP extraction. (A marker-free landmark-extraction sketch appears below.)
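
The paper's own component detectors are not reproduced here. As a hedged stand-in for marker-free, image-processing-only feature point extraction, the sketch below uses dlib's 68-point facial landmark model, which plays a role analogous to the FCPs. The model file and image paths are assumptions.

```python
import dlib
import cv2

# Stand-in for marker-free FCP extraction: dlib's 68-point face landmarks.
# The .dat model file must be downloaded separately (path is an assumption).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 1):                    # detect face regions
    shape = predictor(gray, face)                 # fit 68 landmark points
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    print(f"extracted {len(points)} characteristic points")
```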

Study on Digitalization of Cultural Archetype Based on the Tale of Cheoyong (처용설화의 문화원형 디지털콘텐츠화에 관한 연구)

  • Jung, Jai-Jin
    • Journal of Digital Contents Society, v.9 no.4, pp.591-600, 2008
  • Korean traditional culture has a long history and is rooted deep in the national culture; it is a source of sustenance and pride for the Korean people. The tale of Cheoyong is a precious cultural heritage that has existed in the public awareness of the Korean people since the Unified Silla period. Based on its mysterious and beautiful story, the value of this traditional culture can be revealed through the development of digital contents; with such a precious cultural resource, engaging contents that the general public can enjoy can be digitalized. Ways of digitalizing the cultural archetype are suggested by analyzing the content elements of the tale of Cheoyong. By taking a scientific approach to the development process, the feasibility of digitalization is demonstrated through the 3D animation restoration of the Cheoyong Dance and the process of turning the tale of Cheoyong into animation. Through analysis of this contents development process, a cultural archetype source has been developed that has great value as a contents product and can be used in various cultural content art works. Along with studies on development methods for digitalizing traditional culture, scientific study and investigation must be carried out continuously so that the digitalization of Korean cultural heritage, which will shine in the world, can continue as well.

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho; Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory, v.31 no.12, pp.726-734, 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use a distance matrix giving the distances between pairs of feature points on the face. The shortest trajectories through the space are found by dynamic programming. Because the space of facial expressions is multidimensional, we visualize it in 2D using multidimensional scaling (MDS). There are too many facial expressions to choose from directly, so we visualize the space hierarchically, partitioning it into a hierarchy of subspaces with fuzzy clustering. At the top level, the system creates about 10 clusters from the 2,400 expressions; every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on the 2D screen and serve as candidate key frames for key frame animation. The user selects new key frames along the navigation path of the previous level and completes the key frame specification at the maximum level. We had animators use the system to create example animations and evaluated it based on the results. (See the sketch after this entry.)
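
A minimal sketch of the projection-and-clustering pipeline the abstract describes, using synthetic data in place of the 2,400 captured frames. scikit-learn's KMeans stands in for the paper's fuzzy clustering, and the frame counts are scaled down; this illustrates the idea, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_frames, n_points = 240, 20                   # stand-in for ~2,400 frames
frames = rng.random((n_frames, n_points, 2))   # 2D feature points per frame

# Per-frame state: pairwise distances between feature points (the paper's
# distance-matrix representation), flattened into a feature vector.
states = np.stack([pdist(f) for f in frames])

# Expression-to-expression dissimilarity, then MDS projection to 2D.
D = squareform(pdist(states))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# Top level of the hierarchy: ~10 clusters whose centers act as candidate
# key frames. (KMeans stands in for the paper's fuzzy clustering.)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(coords)
print(coords.shape, np.bincount(labels))
```

Deeper levels would repeat the clustering with twice as many clusters, restricted to the neighborhood of the key frames chosen at the previous level.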

A Synchronized Playback Method of 3D Model and Video by Extracting Golf Swing Information from Golf Video (골프 동영상으로부터 추출된 스윙 정보를 활용한 3D 모델과 골프 동영상의 동기화 재생)

  • Oh, Hwang-Seok
    • Journal of the Korean Society for Computer Game, v.31 no.4, pp.61-70, 2018
  • In this paper, we propose a method for synchronized playback of a 3D reference model and video by extracting golf swing information from a learner's golf video, so that each motion can be precisely compared and analyzed at each position and time in the swing, and we present the implementation results. To synchronize the 3D model with the learner's swing video, the learner's swing is first recorded, and relative time information is extracted from the video according to the position of the golf club from the address posture to the finish posture. This time information is then applied to a 3D reference model, built by rigging a professional golfer's swing, captured at 120 frames per second with high-quality motion capture equipment, onto a 3D model. By synchronizing the reference model with the learner's swing video, the learner can correct or learn his or her posture by comparing it precisely with the reference model at each position of the swing. Synchronized playback removes the need to manually align the reference model and the learner's swing for comparison. Apart from the image processing step that detects each swing position, the method of automatically extracting per-position time information from video and playing it back in sync is expected to extend to other everyday sports. (A sketch of the time-alignment step follows.)
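
A minimal sketch of the time-alignment idea, assuming the swing-position timestamps (address, top of backswing, impact, finish) have already been detected in both the learner's video and the reference motion. All timestamps below are hypothetical.

```python
import numpy as np

# Hypothetical swing-position timestamps in seconds: address, backswing top,
# impact, finish. These would come from image processing on each video.
learner_keytimes   = np.array([0.0, 1.4, 1.9, 3.0])
reference_keytimes = np.array([0.0, 1.0, 1.5, 2.5])  # 120 fps mocap model

def reference_time(t_learner):
    """Piecewise-linear time warp: learner video time -> reference model time."""
    return np.interp(t_learner, learner_keytimes, reference_keytimes)

def reference_frame(t_learner, fps=120):
    """Mocap frame of the 3D reference model to display at learner time t."""
    return int(round(reference_time(t_learner) * fps))

# While the learner video plays, sample the reference model in sync:
for t in np.arange(0.0, 3.0, 0.5):
    print(f"learner t={t:.1f}s -> reference frame {reference_frame(t)}")
```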

Avatar's Lip Synchronization in Talking Involved Virtual Reality (대화형 가상 현실에서 아바타의 립싱크)

  • Lee, Jae Hyun; Park, Kyoungju
    • Journal of the Korea Computer Graphics Society, v.26 no.4, pp.9-15, 2020
  • Having a virtual talking face along with a virtual body increases immersion in VR applications. As virtual reality (VR) techniques develop, applications involving talking avatars are increasing, including multi-user social networking and education. Because consumer-grade VR lacks the sensory information needed for full face and body motion capture, most VR applications do not show a talking face synced with the body. We propose a novel method, targeted at VR applications, for a talking face synced with audio combined with upper-body inverse kinematics. In single-user applications our system presents a mirrored avatar of the user; in multi-user environments it visualizes a synced conversational partner. We found that a realistic talking face avatar is more influential than an unsynced talking avatar or an invisible one. (A sketch of audio-driven mouth animation follows.)
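
The paper's lip sync pipeline is not reproduced here. As a heavily simplified illustrative sketch, the code below maps short-time audio loudness to a single mouth-open blendshape weight; real lip sync systems map phonemes or visemes to several blendshapes, so treat this only as a placeholder for the audio-to-face coupling.

```python
import numpy as np

def mouth_open_weights(audio, sample_rate=16000, frame_rate=60):
    """Map short-time audio loudness to a 0..1 mouth-open blendshape weight.

    A much-simplified stand-in for audio-driven lip sync: one weight per
    video frame, derived from RMS amplitude of the matching audio window.
    """
    hop = sample_rate // frame_rate              # audio samples per video frame
    n_frames = len(audio) // hop
    rms = np.array([
        np.sqrt(np.mean(audio[i * hop:(i + 1) * hop] ** 2))
        for i in range(n_frames)
    ])
    peak = rms.max() if rms.max() > 0 else 1.0
    return np.clip(rms / peak, 0.0, 1.0)

# Hypothetical 1-second clip of noise standing in for recorded speech.
audio = np.random.randn(16000).astype(np.float32)
weights = mouth_open_weights(audio)
print(len(weights), weights[:5])                 # 60 weights, one per frame
```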

Analysis of Two-Way Communication Virtual Being Technology and Characteristics in the Content Industry (콘텐츠 산업에서 나타난 양방향 소통 가상존재 기술 및 특성 분석)

  • Kim, Jungho; Park, Jin Wan; Yoo, Taekyung
    • Journal of Broadcast Engineering, v.25 no.4, pp.507-517, 2020
  • Along with the development of computer graphics, real-time rendering, motion capture, and artificial intelligence technology, virtual beings capable of two-way communication have emerged in the content industry. Although commercialized technologies and platforms make such virtual beings possible, there is little analysis of what characteristics they have and how they can be used in each field. Therefore, through a survey of the technical background and case studies of virtual being production, we analyze the characteristics a two-way communication virtual being needs for emotional exchange. We divide these characteristics into interaction, individuality, and autonomy, classify cases by which characteristic they emphasize, and discuss how two-way communication virtual beings can be used in the content field. As a basic study of virtual beings that analyzes the technical background and characteristics required for their production, this work is expected to provide useful implications for research on content production and utilization using virtual beings.

A Study on the Practical Human Robot Interface Design for the Development of Shopping Service Support Robot (쇼핑 서비스 지원 로봇 개발을 위한 실체적인 Human Robot Interface 디자인 개발에 관한 연구)

  • Hong Seong-Soo; Heo Seong-Cheol; Kim Eok; Chang Young-Ju
    • Archives of Design Research, v.19 no.4 s.66, pp.81-90, 2006
  • Robot design serves as the crucial link between a human and a robot, the latter being cutting-edge technology. The importance of robot design will be emphasized even more when the consumer robot market matures. For humans and robots to coexist, human-friendly interface design and robot design that considers human interaction need to be developed. Through a series of case studies, this research extracts the functions needed for planning and designing a shopping support robot; the robot is planned according to HRI design considerations, and the design process follows. Concrete results are derived by applying HRI elements such as gestures, expressions, and sound. To verify the effectiveness of applying these HRI elements, this research proposes a unified human-robot interaction built from motion capture, animation, brain waves, and sound.

A Study on XR Handball Sports for Individuals with Developmental Disabilities

  • Byong-Kwon Lee; Sang-Hwa Lee
    • Journal of the Korea Society of Computer and Information, v.29 no.6, pp.31-38, 2024
  • This study proposes a novel approach to enhancing the social inclusion and participation of individuals with developmental disabilities. Utilizing virtual reality (VR) technology, we designed and developed a metaverse simulator that enables individuals with developmental disabilities to safely and conveniently experience indoor handball. The simulator provides an environment in which individuals with disabilities can experience and practice handball matches. For the modeling and animation of handball players, we employed advanced modeling and motion capture technologies to accurately replicate the movements required in matches. We also ported various training programs, including basic drills, penalty throws, and target games, to XR (Extended Reality) devices. Through this research, we explored the development of immersive assistive tools that enable individuals with developmental disabilities to participate more easily in activities that may be challenging in real life. This is anticipated to broaden their scope of social participation and enhance their overall quality of life.

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.2, pp.117-124, 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial expression states based on facial motion data. By distributing facial expressions into an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the facial expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the space. By navigating the space projected onto a 2D plane and selecting a series of expressions, animators can create animations or control the expressions of 3D avatars in real time. To distribute the roughly 2,400 expression frames into this intuitive space, the state of each expression must be represented; for this we use a distance matrix holding the distances between pairs of feature points on the face. The LLE algorithm then projects this data onto the 2D plane for visualization. We had animators use the system's interface to control facial expressions and create animations, and we evaluate the results of these experiments. (See the sketch below.)
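
A minimal sketch of the projection step the abstract describes, using scikit-learn's LocallyLinearEmbedding on synthetic data in place of the 2,400 captured frames. The frame counts and feature-point layout are assumptions for illustration; the paper's data and interface are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
n_frames, n_points = 240, 20                   # stand-in for ~2,400 frames
frames = rng.random((n_frames, n_points, 2))   # feature points per frame

# State of each expression: pairwise feature-point distances, flattened
# (the paper's distance-matrix representation).
states = np.stack([pdist(f) for f in frames])

# Project the expression states onto a 2D plane with LLE for navigation;
# each 2D point is then a selectable expression in the user interface.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=1)
coords = lle.fit_transform(states)
print(coords.shape)  # (240, 2)
```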