• Title/Summary/Keyword: Interactive animation

Study on the brand personality of animation character and the consumer's personality (A study on the effect of the congruence between brand personality and consumer personality on brand attitude)

  • Lim, Byung-Woo
    • Archives of design research
    • /
    • v.19 no.1 s.63
    • /
    • pp.141-150
    • /
    • 2006
  • Animation must be produced to attract the audience's interest. Its characters acquire their intended personalities through interaction with audiences, beyond their given attributes. These personalities form a brand identity when they are exposed visually and become powerful brand assets that lead the animation industry toward high-value-added products. The brand identity of a character, as a brand asset, can be applied to various products through licensing and is known to produce a positive leverage effect. In this regard, the author conducted an empirical study of animation characters from a brand perspective, adopting the Brand Personality Scale (BPS) from J. Aaker's (1997) work, which defines brand personality as the human characteristics associated with a brand. In addition, this study examines the relationship among animation, brand, and consumer based on Sirgy's (1982) finding that the closer the match between brand personality and consumer personality, the more favorable the brand attitude. The results show that animation characters have three personality dimensions: refinement/ability, integrity, and interest. Consumer personality was divided in the survey into a 'practical ego-image' and an 'ideal ego-image', and the brand personality of the animation character was found to lie between the two. (A congruity-score sketch follows this entry.)

  • PDF
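
The congruence effect that the study builds on (Sirgy, 1982) is commonly operationalized as a distance between a brand-personality profile and a consumer self-image profile rated on the same trait scales. The TypeScript sketch below is a minimal illustration of that general idea, not the paper's instrument; the trait names, scale, and Euclidean-distance score are assumptions.

```typescript
// Minimal sketch of a Sirgy-style self-congruity score.
// Trait names and the distance-based score are illustrative assumptions,
// not the instrument used in the paper.

type TraitProfile = Record<string, number>; // ratings, e.g. on a 1..7 scale

const TRAITS = ["refined", "competent", "sincere", "exciting"]; // hypothetical traits

// Lower distance = higher congruence between brand personality and self-image.
function selfCongruity(brand: TraitProfile, self: TraitProfile): number {
  let sumSq = 0;
  for (const t of TRAITS) {
    const d = (brand[t] ?? 0) - (self[t] ?? 0);
    sumSq += d * d;
  }
  return Math.sqrt(sumSq);
}

// Example: compare a character's brand personality with a respondent's
// actual and ideal self-images; the smaller distance indicates the closer match.
const character: TraitProfile = { refined: 5, competent: 6, sincere: 4, exciting: 6 };
const actualSelf: TraitProfile = { refined: 4, competent: 5, sincere: 5, exciting: 4 };
const idealSelf: TraitProfile = { refined: 6, competent: 6, sincere: 5, exciting: 6 };

console.log("distance to actual self:", selfCongruity(character, actualSelf).toFixed(2));
console.log("distance to ideal self:", selfCongruity(character, idealSelf).toFixed(2));
```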

The Aesthetic Transformation of Shadow Images and the Extended Imagination (Focusing on Digital Silhouette Animation and Recent Trends in Media Art)

  • Kim, Young-Ok
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.651-676
    • /
    • 2017
  • Shadow images have been a representative medium and means of expression for the imagination that lies between consciousness and unconsciousness for thousands of years. Wherever light exists, people can play with their own shadows without special skill and instantly create a fantasy. Shadow images have long been used as subjects and materials in literature, art, philosophy, and popular culture. In the field of art in particular, artists have experimented with visual stimulation through the uniqueness of simple silhouette images. In animation, the form came to be recognized as a non-mainstream area that is difficult to produce. Recently, however, shadow images have been used more actively in digital art and media art. In this technological environment, various formative imaginations are being expressed with shadow images in a new dimension. This study introduces and analyzes these trends, their aesthetic transformations, and their extended methods, focusing on digital silhouette animation and recent media art works that use shadow images. Screen-based silhouette animation combined with digital technology, together with new approaches that depart from conventional methods, has removed most of the elements once considered limitations, and these elements have become a matter of choice for directors. In particular, in display environments using various light sources, projection, and camera technology, shadow images are expressed across multiple layered virtual spaces, making a newly extended imagination possible. Computer vision has made it possible to find new gazes and spatial images and to use them more flexibly. These changes have opened new possibilities for using shadow images in different ways.

A Study of Production Technology of Digital Contents upon the Platform Integration : Focusing on Cross - Platform Game (플랫폼 통합에 따른 디지털콘텐츠 제작기술 경향연구 : 크로스 플랫폼게임(Cross-Platform Game) 사례를 중심으로)

  • Han, Chang-Wan
    • Cartoon and Animation Studies
    • /
    • s.14
    • /
    • pp.151-164
    • /
    • 2008
  • Cross-platform games have expanded the game market, a result of technological innovation that overcomes the limits of game consumption. The new model integrates offline and online game services, so gamers can now enjoy a game service regardless of age, time, and place. If a digital-content technology such as a cross-platform game engine can provide content for several platforms at the same time, interactive services can be exploited to the fullest. It is also necessary to allocate and switch data, and to improve data transmission technology, according to each platform. Providing the same content for as many platforms as possible is the most suitable strategy for enhancing efficiency and profit. For interactive services to be realized completely, however, data-switching and distribution technologies must first be developed. To lead the next digital-content market, one should develop network engine technology that can optimize consumption in interactive network services. (A data-switching sketch follows this entry.)

  • PDF
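
The per-platform data allocation and switching described above can be pictured as one platform-neutral state message that is adapted before transmission to each target platform. The sketch below is only a hypothetical illustration of that pattern in TypeScript; the message fields, platform profiles, and adaptation rules are assumptions, not the paper's engine design.

```typescript
// Hypothetical sketch of per-platform data switching for a cross-platform game.
// A single neutral state update is adapted to each platform's constraints
// (payload size, whether animation names are sent) before transmission.

interface StateUpdate {
  tick: number;
  entities: { id: number; x: number; y: number; anim: string }[];
}

interface PlatformProfile {
  name: "console" | "pc" | "mobile";
  maxEntities: number;    // assumed bandwidth/CPU budget
  sendAnimNames: boolean; // low-end targets may resolve animations locally
}

function adaptForPlatform(update: StateUpdate, profile: PlatformProfile): string {
  const entities = update.entities
    .slice(0, profile.maxEntities)
    .map(e => (profile.sendAnimNames ? e : { id: e.id, x: e.x, y: e.y }));
  return JSON.stringify({ tick: update.tick, entities }); // one wire format, per-platform payload
}

// Example: the same update is "switched" into platform-specific payloads.
const update: StateUpdate = {
  tick: 42,
  entities: [{ id: 1, x: 10, y: 20, anim: "run" }, { id: 2, x: 5, y: 8, anim: "idle" }],
};
const profiles: PlatformProfile[] = [
  { name: "console", maxEntities: 100, sendAnimNames: true },
  { name: "mobile", maxEntities: 20, sendAnimNames: false },
];
for (const p of profiles) console.log(p.name, adaptForPlatform(update, p));
```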

Procedural Animation Method for Realistic Behavior Control of Artificial Fish (절차적 애니메이션 방법을 이용한 인공물고기의 사실적 행동제어)

  • Kim, Chong Han;Youn, Jae Hong;Kim, Byung Ki
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.11
    • /
    • pp.801-808
    • /
    • 2013
  • In a virtual space with interactive 3D content, the degree of mental satisfaction is determined by how fully the space reflects the real world. A few factors contribute to a highly complete virtual space. The first is modeling technique: high-polygon models and high-resolution textures heighten the visual effect. The second is functionality: how realistically dynamic interactions between the virtual space and the user or the system are represented. Although studies on techniques for animating and controlling virtual characters have continued, problems remain, such as long production times, high costs, and animations that do not show the expected behaviors. This paper suggests a method of behavior control for animation by designing an optimized skeleton that produces the character's movement and by applying a procedural technique based on physical laws and mathematical analysis. The proposed method is free from the constraint of one-to-one correspondence rules, reduces production time by controlling simple parameters, and increases the degree of visual satisfaction. (An illustrative parameter-driven sketch follows below.)
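
The paper's specific skeleton design and physical formulation are not reproduced here, but the sketch below conveys the general flavor of parameter-driven procedural swimming: each spine joint is rotated by a phase-shifted travelling sine wave governed by a handful of parameters. The joint count, wave model, and parameter names are illustrative assumptions.

```typescript
// Sketch of parameter-driven procedural swimming: each spine joint of an
// artificial fish is rotated by a travelling sine wave. The wave model and
// parameters are illustrative assumptions, not the paper's exact formulation.

interface SwimParams {
  frequency: number;  // tail beats per second
  amplitude: number;  // max joint rotation in radians
  wavelength: number; // phase offset from head to tail (radians)
  taper: number;      // how much amplitude grows toward the tail (0..1)
}

// Returns one rotation angle per spine joint at time t (seconds),
// with the head (joint 0) moving least and the tail moving most.
function spineAngles(t: number, jointCount: number, p: SwimParams): number[] {
  const angles: number[] = [];
  for (let i = 0; i < jointCount; i++) {
    const along = i / (jointCount - 1);               // 0 at head, 1 at tail
    const envelope = (1 - p.taper) + p.taper * along; // amplitude taper
    const phase = 2 * Math.PI * p.frequency * t - along * p.wavelength;
    angles.push(p.amplitude * envelope * Math.sin(phase));
  }
  return angles;
}

// Example: sample a 6-joint spine over half a second of a relaxed swim.
const params: SwimParams = { frequency: 1.5, amplitude: 0.35, wavelength: Math.PI, taper: 0.8 };
for (let t = 0; t <= 0.5; t += 0.1) {
  console.log(t.toFixed(1), spineAngles(t, 6, params).map(a => a.toFixed(2)).join(" "));
}
```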

Animation Techniques with Direction Control in Pull-down Menu for Improving Web User Interface (웹 사용자 인터페이스 향상을 위한 풀다운메뉴에서 방향제어가 가능한 애니메이션 기법)

  • Cho, Han-Soo
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.11
    • /
    • pp.525-536
    • /
    • 2016
  • As obtaining information via the Internet has increased in recent years, the importance of web-based user interfaces is emphasized more than ever. The purpose of this study is to improve the sliding structure of submenus in pull-down menus and thereby improve the web user interface. The animation techniques applied to the submenus of the pull-down menus used on many web sites have very monotonous sliding structures. To solve this problem, a new sliding algorithm is proposed, based on enlarging/reducing and moving animation techniques with direction control in the pull-down menu. The proposed method not only improves visual effects significantly but is also easily applied to the implementation of web-based user interfaces compared with previous pull-down menus. Finally, experiments applying the proposed sliding algorithm to a responsive image slider show that it achieves good results. Further studies that take performance into account are needed to implement web-based interactive content using the proposed method. (A direction-controlled sliding sketch follows below.)
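
As a rough picture of what a direction-controlled enlarging/moving slide can look like, the sketch below animates a submenu element by scaling and translating it in from a chosen direction with requestAnimationFrame. It is an assumed illustration of the general technique only; the paper's actual sliding algorithm, easing, and parameters are not reproduced.

```typescript
// Illustrative sketch (not the paper's algorithm): slide a pull-down submenu
// in from a chosen direction by animating scale and translation together.

type SlideDirection = "down" | "up" | "left" | "right";

function slideIn(el: HTMLElement, dir: SlideDirection, duration = 250): void {
  const start = performance.now();
  const offset = 40; // px to travel, an assumed default

  const from: Record<SlideDirection, [number, number]> = {
    down: [0, -offset], up: [0, offset], left: [offset, 0], right: [-offset, 0],
  };
  const [dx, dy] = from[dir];

  el.style.display = "block";
  el.style.transformOrigin = dir === "down" ? "top" : dir === "up" ? "bottom" : "center";

  function frame(now: number): void {
    const t = Math.min((now - start) / duration, 1);
    const ease = 1 - Math.pow(1 - t, 3); // ease-out cubic
    const scale = 0.6 + 0.4 * ease;      // enlarge while moving
    el.style.opacity = String(ease);
    el.style.transform =
      `translate(${dx * (1 - ease)}px, ${dy * (1 - ease)}px) scale(${scale})`;
    if (t < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

// Example: slide the submenu downward when its parent item is hovered.
// document.querySelector<HTMLElement>(".menu-item")!.addEventListener("mouseenter", () => {
//   slideIn(document.querySelector<HTMLElement>(".submenu")!, "down");
// });
```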

Development of Korean Music Multimedia Contents for Preschooler - With Priority to Animation - (유아용 한국음악 멀티미디어 콘텐츠 개발 연구 - 애니메이션을 중심으로 -)

  • Choi, Yoo-Mi
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.132-141
    • /
    • 2007
  • Traditional music education for young children is one of the most important steps in forming human knowledge. Because children's intelligence and emotions develop rapidly at this age, such education is valuable not only for improving knowledge but also for helping children understand the unique sensibility of the Korean people. Under the present circumstances, however, there is not enough content for traditional music education. Therefore, by researching and analyzing existing educational content and adapting it to a multimedia environment in the form of animation, we conducted a series of experiments with kindergarten children using interactive animations familiar to them. The children became interested in the content, and after a lesson with the short bamboo flute content they showed clear improvement in playing the instrument. This shows that Korean music educational content realized as animation can be an alternative means of improving educational effectiveness by arousing children's enjoyment and interest.

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A
    • /
    • v.11A no.2
    • /
    • pp.189-194
    • /
    • 2004
  • This paper describes a method for distributing high-dimensional facial expression motion data over a two-dimensional space, and a method for creating facial expression animation by selecting desired expressions in real time as the animator navigates this space. In this work, the expression space was composed of about 2,400 facial expression frames. Construction of the expression space reduces to determining the shortest distance between any two expressions. The expression space, as a manifold, approximates the distance between two points as follows. An expression state vector describing each expression is defined using a distance matrix that records the distances between markers; if two expressions are adjacent, this distance is regarded as an approximation of the shortest distance between them. Once adjacency distances between neighboring expressions are determined, these distances are chained to yield the shortest distance between any two expression states, using the Floyd algorithm. To visualize the high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through the user interface. (A shortest-path and projection sketch follows below.)
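
The two computational steps named in the abstract, the Floyd algorithm for all-pairs shortest expression distances and Sammon's mapping onto two dimensions, are both standard and can be sketched compactly. The code below is a generic illustration with an assumed gradient-descent form of the Sammon update and toy adjacency data; it is not the paper's implementation.

```typescript
// Sketch of the two standard steps described in the abstract:
// (1) Floyd-Warshall all-pairs shortest paths over adjacency distances,
// (2) Sammon's mapping of the resulting distance matrix onto 2D points.
// The step size and iteration count are assumed, not the paper's settings.

// (1) Floyd-Warshall: dist[i][j] starts as adjacency distance (Infinity if not adjacent).
function floydWarshall(dist: number[][]): number[][] {
  const n = dist.length;
  const d = dist.map(row => row.slice());
  for (let k = 0; k < n; k++)
    for (let i = 0; i < n; i++)
      for (let j = 0; j < n; j++)
        if (d[i][k] + d[k][j] < d[i][j]) d[i][j] = d[i][k] + d[k][j];
  return d;
}

// (2) Sammon's mapping by simple gradient descent on the Sammon stress.
function sammon2D(d: number[][], iters = 300, lr = 0.1): [number, number][] {
  const n = d.length;
  const y = Array.from({ length: n }, (): [number, number] => [Math.random(), Math.random()]);
  const c = d.flat().reduce((s, v) => s + v, 0) / 2; // sum of d[i][j] over i < j
  for (let it = 0; it < iters; it++) {
    for (let i = 0; i < n; i++) {
      let gx = 0, gy = 0;
      for (let j = 0; j < n; j++) {
        if (i === j) continue;
        const dx = y[i][0] - y[j][0], dy = y[i][1] - y[j][1];
        const dij = Math.sqrt(dx * dx + dy * dy) + 1e-9; // current 2D distance
        const coeff = (dij - d[i][j]) / (d[i][j] * dij);
        gx += coeff * dx;
        gy += coeff * dy;
      }
      y[i][0] -= lr * (2 / c) * gx;
      y[i][1] -= lr * (2 / c) * gy;
    }
  }
  return y;
}

// Example: four expression states with only chain adjacency known.
const INF = Infinity;
const adjacency = [
  [0, 1, INF, INF],
  [1, 0, 1, INF],
  [INF, 1, 0, 1],
  [INF, INF, 1, 0],
];
console.log(sammon2D(floydWarshall(adjacency)));
```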

Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.231-241
    • /
    • 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for achieving interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interaction with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user. (A Q-learning sketch follows below.)
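
The learning technique named in the abstract is tabular Q-learning. The sketch below shows the generic update rule and epsilon-greedy action selection on a toy state-action table; the states, actions, reward, and hyperparameters are placeholder assumptions, not the paper's avatar training setup.

```typescript
// Generic tabular Q-learning sketch (the technique named in the abstract),
// with placeholder states, actions, and rewards; not the paper's avatar setup.

type State = string;
type Action = string;

class QLearner {
  private q = new Map<string, number>(); // key: `${state}|${action}`

  constructor(
    private actions: Action[],
    private alpha = 0.1,   // learning rate
    private gamma = 0.9,   // discount factor
    private epsilon = 0.2, // exploration rate
  ) {}

  private key(s: State, a: Action): string { return `${s}|${a}`; }
  private value(s: State, a: Action): number { return this.q.get(this.key(s, a)) ?? 0; }

  // Epsilon-greedy action selection.
  chooseAction(s: State): Action {
    if (Math.random() < this.epsilon)
      return this.actions[Math.floor(Math.random() * this.actions.length)];
    return this.actions.reduce((best, a) => this.value(s, a) > this.value(s, best) ? a : best);
  }

  // Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
  update(s: State, a: Action, reward: number, next: State): void {
    const maxNext = Math.max(...this.actions.map(a2 => this.value(next, a2)));
    const old = this.value(s, a);
    this.q.set(this.key(s, a), old + this.alpha * (reward + this.gamma * maxNext - old));
  }
}

// Example: one trial-and-error step for an avatar approaching a target (toy reward).
const learner = new QLearner(["step_forward", "turn_left", "turn_right", "wait"]);
const state = "target_ahead";
const action = learner.chooseAction(state);
const reward = action === "step_forward" ? 1 : 0; // assumed toy reward
learner.update(state, action, reward, "target_near");
console.log("chose", action);
```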

A Service Framework for Emotional Contents on Broadcast and Communication Converged IPTV Systems (IPTV를 위한 방송통신 융합형 감성 콘텐츠의 운용 및 서비스 기술)

  • Sung, Min-Young;Paek, Seon-Uck;Ahn, Seong-Hye
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.737-742
    • /
    • 2009
  • As increasing emphasis is placed on user experience design, RIA technology is widely deployed for user interfaces and software operation on embedded devices, including cell phones and TVs. In particular, RIA-based IPTV enables the creation of various interactive content through sophisticated animation and various input devices. This paper proposes a service framework for emotional content on broadcast and communication-converged IPTV systems. We design a programming interface extension for IPTV-based Flash content and develop a prototype Flash runtime with the extended programming support. Because the proposed runtime was carefully designed to fully utilize the built-in graphics acceleration hardware of the media processor, it supports high-resolution graphic animation in resource-constrained IPTV environments. (A hypothetical interface sketch follows this entry.)

  • PDF
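
To make the idea of a content-facing programming interface backed by hardware acceleration concrete, the sketch below shows a purely hypothetical extension and a software stub. Every name in it is invented for illustration; the paper's actual Flash extension and runtime API are not described in the abstract.

```typescript
// Purely hypothetical sketch of a content-facing API extension for a
// hardware-accelerated animation runtime on IPTV. Every name here is invented
// for illustration and does not reflect the paper's actual extension.

interface AcceleratedAnimationAPI {
  // Ask the runtime to run a tween on the media processor's graphics hardware
  // instead of in the content script interpreter.
  tween(target: string, prop: "x" | "y" | "alpha" | "scale",
        from: number, to: number, durationMs: number): Promise<void>;
  // Hint how the scene should be prioritized against broadcast video.
  setScenePriority(priority: "ui" | "broadcast-overlay" | "fullscreen"): void;
}

// Minimal software stub so the example runs; a real runtime would dispatch
// these calls to the acceleration hardware.
const stubRuntime: AcceleratedAnimationAPI = {
  tween: (target, prop, from, to, durationMs) => {
    console.log(`tween ${target}.${prop}: ${from} -> ${to} over ${durationMs}ms`);
    return new Promise(resolve => setTimeout(resolve, durationMs));
  },
  setScenePriority: priority => console.log(`scene priority: ${priority}`),
};

// A content script written against the hypothetical extension: an emotional
// overlay fades and slides in without burdening the set-top box CPU.
async function showEmotionOverlay(api: AcceleratedAnimationAPI): Promise<void> {
  api.setScenePriority("broadcast-overlay");
  await Promise.all([
    api.tween("emotionPanel", "alpha", 0, 1, 400),
    api.tween("emotionPanel", "y", 720, 560, 400),
  ]);
}

showEmotionOverlay(stubRuntime);
```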

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • Proceedings of the Korea Information Convergence Society Conference
    • /
    • 2008.06a
    • /
    • pp.53-56
    • /
    • 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: (1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); (2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head into virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components are: RITEL, a question-and-answer system that searches raw text and produces a text (the answer) together with attitudinal information; the attitudinal information is then processed to deliver expressive tags, and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio and visual scene. The project also puts considerable effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move through the virtual scene with realistic 3D visual and audio rendering. (A message-type sketch follows this entry.)

  • PDF
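
The module pipeline described above (answer text plus attitudinal information, then expressive tags, then phoneme/viseme/prosody descriptions, then synchronized audio and MPEG-4 keypoint animation) lends itself to typed messages exchanged between processes. The sketch below only illustrates that data flow with assumed field names and a toy tagging rule; it is not the actual RITEL/VirChor protocol.

```typescript
// Sketch of message types flowing between the modules described in the
// abstract. Field names are assumptions; this is not the actual RITEL/VirChor
// network protocol.

interface AnswerMessage {          // from the question-answering module
  text: string;
  attitude: { confidence: number; valence: number }; // attitudinal information
}

interface ExpressiveScript {       // after attitudinal processing
  text: string;
  tags: { start: number; end: number; expression: string }[];
}

interface SpeechDescription {      // phoneme/viseme/prosody for synthesis
  phonemes: { symbol: string; durationMs: number }[];
  visemes: { id: number; atMs: number }[];
  prosody: { pitchHz: number; atMs: number }[];
}

interface RenderCommand {          // consumed by the audio player and 3D engine
  audioUrl: string;
  keypointTrack: { frameMs: number; keypoints: number[] }[]; // MPEG-4-style values
}

// One step of the pipeline: derive expressive tags from attitudinal information
// (a toy rule; the real mapping is part of the system's attitude processing).
function tagAnswer(msg: AnswerMessage): ExpressiveScript {
  const expression = msg.attitude.confidence > 0.7 ? "assertive" : "hesitant";
  return { text: msg.text, tags: [{ start: 0, end: msg.text.length, expression }] };
}

console.log(tagAnswer({ text: "The answer is 42.", attitude: { confidence: 0.9, valence: 0.3 } }));
```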