• Title/Summary/Keyword: Computer Animation (컴퓨터 애니메이션)

Search Result 491

A Comparison of Scenario Construction in Korea, the U.S., and Japan (한ㆍ미ㆍ일 시나리오 구성 비교)

  • Lee, Jin-Heon
    • Digital Contents
    • /
    • no.1 s.152
    • /
    • pp.52-58
    • /
    • 2006
  • Is the digital literature that captivates today's readers as gentle and responsible as the nineteenth-century novel, or as dangerous and deadly as a hallucinogenic drug? We now stand in the middle of the new-media revolution. Just as movable type in the fourteenth century and photography in the nineteenth century delivered revolutionary shocks to the society and culture of their day, all of culture is now produced, distributed, and communicated through the computer as its medium. This paper compares the scenario construction methods of Korean, American, and Japanese animation on the basis of the Hollywood formula.


Directions for Fostering the Digital Content Industry to Lead the Digital Economy (디지털 경제를 주도할 디지털 컨텐츠 산업의 육성방향)

  • 박영일
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.10a
    • /
    • pp.1-11
    • /
    • 1999
  • What is digital content (multimedia content)? Multimedia refers to media in which the text, audio, photo, video, and animation domains, which grew separately under analog technology, have been integrated as digital technology developed. Digitization converts all kinds of information (text, sound, images, video, numbers, and so on) into signals a computer can recognize (binary code). (Abridged)


Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.2
    • /
    • pp.11-19
    • /
    • 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps artists create facial animation. Direct mapping from motion capture data to the facial rig is highly convenient because artists are already familiar with rigs, and the mapping produces results that are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not a trivial task: rigs vary widely in structure, so it is hard to devise a generalized mapping method for them. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose facial shapes differ greatly from the human face.
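The core idea of a data-driven mapping can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: it learns a linear map from paired examples of capture features to rig control values and applies it to new frames. All names and the feature layout are hypothetical.

```python
import numpy as np

def fit_retarget_map(X, Y):
    """Learn W minimizing ||X @ W - Y|| by least squares.

    X: capture-feature matrix (rows = example frames).
    Y: matching rig-control values for those frames.
    """
    W, *_ = np.linalg.lstsq(np.asarray(X, float), np.asarray(Y, float),
                            rcond=None)
    return W

def retarget(W, x):
    """Map one capture-feature vector to rig control values."""
    return np.asarray(x, float) @ W
```

In practice a real rig would need a nonlinear or per-control model, but the example shows why paired example data alone can drive the mapping regardless of the rig's internal structure.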

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.85-92
    • /
    • 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because that style reflects the character's personality, preserving it consistently is very important. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that preserves the style of the character's motion using only a small amount of animation data created specifically for that character. Instead of machine learning approaches that require large amounts of training data, we suggest a search-based method that directly looks up the character pose in the animation data most similar to the user's current pose. To show the usability of our method, we conducted experiments with a character model and animation data created by an expert designer for a virtual reality game. To demonstrate that our method preserves the character's original motion style well, we compared our result with one obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
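The search-based lookup described above can be sketched as a nearest-neighbor query. This is a toy illustration of the idea, not the authors' system; pose vectors here are hypothetical flat arrays of joint values.

```python
import numpy as np

def nearest_pose(user_pose, animation_poses):
    """Return (index, pose) of the animation pose closest to the user's.

    Instead of training a model, search the character's own animation
    data directly, so every returned pose stays in the character's style.
    """
    poses = np.asarray(animation_poses, float)
    dists = np.linalg.norm(poses - np.asarray(user_pose, float), axis=1)
    best = int(np.argmin(dists))
    return best, poses[best]
```

A real system would add temporal smoothing and restrict the search to poses reachable from the previous frame, but the lookup itself needs only the animation data.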

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.269-282
    • /
    • 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, real-time computer animation technology is becoming more important. However, the computer animation process still consists mostly of manual work or marker-based motion capture, which requires experienced professionals and a very long time to obtain realistic results. To address these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of implementing natural human movement in a deep learning model and Kinect camera-based animation production system, each chosen for its environmental characteristics and accuracy. The first method uses only a Kinect camera; the second uses a Kinect camera with a calibration algorithm; the third uses only the deep learning model; and the fourth uses the deep learning model and the Kinect together. Our experiments showed that the fourth method, combining the deep learning model with the Kinect, gave the best results.
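A recurring step in Kinect-driven animation pipelines like the one above is converting tracked 3D joint positions into joint angles a rig can consume. The helper below is a hypothetical illustration of that conversion, not code from the paper.

```python
import numpy as np

def joint_angle_deg(parent, joint, child):
    """Angle at `joint` between the bones joint->parent and joint->child.

    Kinect-style skeleton data arrives as 3D joint positions; many rigs
    instead want the interior angle at each joint (e.g. the elbow).
    """
    a = np.asarray(parent, float) - np.asarray(joint, float)
    b = np.asarray(child, float) - np.asarray(joint, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```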

Vehicle Animation Making Tools based on Simulation and Trajectory Library (차량 시뮬레이션과 경로 라이브러리에 기반한 차량 애니메이션 저작도구)

  • Jeong, Jinuk;Kang, Daeun;Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.5
    • /
    • pp.57-66
    • /
    • 2017
  • In this paper, we present a novel physics-based real-time animation technique for vehicles and introduce an easy, intuitive animation authoring tool built on it. After a user specifies the trajectory of a virtual car as input, our system produces a more accurate simulation, faster than previous work, by splitting the trajectory according to directional features and consulting a trajectory library. As a result, the user can create not only car animations that include lane changes and overtaking, but also crash animations, a rarely researched topic. We also propose a virtual car structure that approximates a real car's structure for real-time simulation; the resulting animation is highly plausible, down to the small vibration that occurs when the virtual car brakes and the deformation when an accident happens.
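Splitting a trajectory on directional features, as described above, can be illustrated by cutting a 2D path wherever the heading change between consecutive segments exceeds a threshold. This is a simplified sketch of the idea, not the paper's method; the threshold and point format are assumptions.

```python
import numpy as np

def split_by_heading(points, thresh_deg=30.0):
    """Return indices that bound trajectory segments.

    A new segment starts wherever the heading turns by more than
    `thresh_deg` between consecutive steps of the path.
    """
    pts = np.asarray(points, float)
    deltas = np.diff(pts, axis=0)
    headings = np.degrees(np.arctan2(deltas[:, 1], deltas[:, 0]))
    cuts = [0]
    for i in range(1, len(headings)):
        # signed turn folded into [-180, 180]
        turn = abs((headings[i] - headings[i - 1] + 180) % 360 - 180)
        if turn > thresh_deg:
            cuts.append(i)
    cuts.append(len(pts) - 1)
    return cuts
```

Each resulting segment could then be matched against a library of precomputed trajectories, as the abstract suggests.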

Empirical Analysis of the Feeling of Shooting in 2D Shooting Games (2차원 슈팅 게임에서의 타격감에 대한 실험적 분석)

  • Seo, Jin-Seok;Kim, Nam-Gyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.2
    • /
    • pp.75-81
    • /
    • 2010
  • The feeling of shooting is one of the most important features of shooting games. Game developers have tried to improve it with techniques such as visual and sound effects, rumble effects, animations, and camera work. In this paper, we present the results of an empirical analysis of several of these techniques in a 2D shooting game. We carried out two experiments in which the level of the feeling of shooting was measured in a simple 2D shooting game. The first experiment used 16 combinations of four techniques (visual, animation, sound, and rumble effects) applied to a shooting object (a cannon); the second used 16 combinations of two techniques (visual and sound effects) applied to the shooting object, the exploding objects (enemy ships), or both. The analysis showed that each technique was a statistically significant factor. We also found that sound and rumble effects are more effective than visual effects and animations, and that the exploding objects matter more than the shooting object.

Facial Color Control based on Emotion-Color Theory (정서-색채 이론에 기반한 게임 캐릭터의 동적 얼굴 색 제어)

  • Park, Kyu-Ho;Kim, Tae-Yong
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.8
    • /
    • pp.1128-1141
    • /
    • 2009
  • Graphical expression continues to improve, spurred by the astonishing growth of the game technology industry. Despite these improvements, users still demand a more natural gaming environment and truer reflections of human emotion. In real life, people read a person's mood from facial color and expression, so interactive facial colors in game characters provide a deeper level of reality. In this paper we propose a facial color adaptive technique that combines an emotional model based on human emotion theory, emotional expression patterns drawn from the colors of animation content, and an emotional reaction-speed function based on human personality theory, in contrast to past methods that expressed emotion through blood flow, pulse, or skin temperature. Experiments show that facial color expression based on the proposed technique, together with the expression patterns of animation content, is effective in conveying character emotions. Moreover, the proposed facial color adaptive technique can be applied not only to 2D games but to 3D games as well.
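The interplay of a target emotion color and a personality-dependent reaction speed can be sketched as exponential easing. This is a toy illustration with hypothetical names, not the paper's model: `speed` stands in for the reaction-speed function, and colors are plain RGB tuples.

```python
import math

def update_face_color(current_rgb, target_rgb, speed, dt):
    """Ease the face color toward the emotion's target color.

    A higher `speed` (a hypothetical per-character personality value)
    closes more of the remaining color gap per time step `dt`.
    """
    k = 1.0 - math.exp(-speed * dt)  # fraction of the gap closed this step
    return tuple(c + (t - c) * k for c, t in zip(current_rgb, target_rgb))
```

Calling this once per frame makes a quick-tempered character flush almost instantly (large `speed`) while a calm one shifts color gradually.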


Study on the shot rhythm by the spatial map model of animation

  • Shin, Yeo-Nu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.4
    • /
    • pp.63-70
    • /
    • 2020
  • In this paper, we propose a typology of the shot rhythms found in globally successful animation. Based on studies of spatial variables in Campbell's heroic narrative and of shot duration in video, we analyzed the shot rhythm that follows the work's spatial narrative. Three conclusions were drawn. First, narrative intensity can be visualized through the shot density of the narrative. Second, a typology of shot rhythms was presented, comprising ascending, descending, mountain, depressed, and complex types, and the narrative characteristics of each were analyzed. Third, the strength of the shot rhythm shown in the hero-narrative spatial map model was divided into top, middle, and bottom levels, providing measurement criteria for narrative strength. This study is meaningful in that it visualizes and typifies artistic emotion as objective data in the study of animation content, an area that has centered on philosophical and qualitative methods.
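Classifying a sequence of shot durations into rhythm types can be illustrated with a toy classifier. This sketch covers only three of the five types named above and uses made-up criteria, purely to show how shot-length data could be typed programmatically.

```python
def shot_rhythm(durations):
    """Classify a list of shot durations (seconds) into a rhythm type.

    Monotonically lengthening shots -> "ascending", monotonically
    shortening -> "descending", anything else -> "complex".
    """
    rising = all(b >= a for a, b in zip(durations, durations[1:]))
    falling = all(b <= a for a, b in zip(durations, durations[1:]))
    if rising and not falling:
        return "ascending"
    if falling and not rising:
        return "descending"
    return "complex"
```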

Digital Acting Method (디지털 연기 연구)

  • Park, Hoyoung
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.11
    • /
    • pp.205-212
    • /
    • 2018
  • Learning the expressive characteristics that each medium requires makes it easier to perform the acting that different media demand; the style of acting required varies with the characteristics of the medium. In particular, digital acting using computer graphics technology requires actors to imagine a virtual space and perform as if they were present in it. Through computer graphics post-production, the actual space that will appear on the final screen is completed, and the story is built on that realized setting. The role of digital actors in films that apply motion capture is becoming increasingly important. Natural interplay between live-action actors and digital character actors has become a trend, and even in animated films in which only digital actors appear, a real actor plays a major role in grounding the digital actors' characters. The core of a digital actor is the realization of a unique character performance. In the era of transmedia, the importance of digital acting grows day by day.