• Title/Summary/Keyword: narrative events (내러티브 이벤트)


User's Emotion Modeling on Dynamic Narrative Structure : towards of Film and Game (동적 내러티브 구조에 대한 사용자 감정모델링 : 영화와 게임을 중심으로)

  • Kim, Mi-Jin; Kim, Jae-Ho
    • The Journal of the Korea Contents Association, v.12 no.1, pp.103-111, 2012
  • This paper is a basic study toward a system that can predict the success or failure of entertainment content at an early stage of production. It proposes a model of the user's emotions over the dynamic narrative of entertainment content. To this end, 1) a dynamic-narrative emotion model is proposed, based on theoretical research into narrative structure and cognitive emotion models; 2) emotion types and emotion values are configured, and the proposed model's three emotion parameters (desire, expectation, emotion type) are derived; and 3) to measure the user's emotion at each story event of the dynamic narrative, the user's cognitive behavior and its description (for film and game) are established. Earlier user-research studies took a conceptual, analytic approach aimed at predicting media reviews and user attitudes, and their results were purely descriptive. In contrast, this paper proposes a method of modeling the user's emotions over a dynamic narrative, which could contribute to the emotional evaluation of entertainment content using specific information.
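The three emotion parameters named in the abstract can be captured as a simple per-event record. The schema below is a hypothetical sketch for illustration only; the field names and value ranges are assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """One user-emotion measurement for a single story event.

    Hypothetical schema illustrating the paper's three emotion
    parameters: desire, expectation, and emotion type.
    """
    event_id: str        # identifier of the story event being measured
    desire: float        # assumed range 0..1: strength of the user's desire
    expectation: float   # assumed range 0..1: expected likelihood of fulfillment
    emotion_type: str    # categorical label from a cognitive emotion model

# one sample for one (invented) story event
sample = EmotionState(event_id="act1_inciting_incident",
                      desire=0.8, expectation=0.3, emotion_type="hope")
```

Collecting one such record per story event yields an emotion trajectory over the narrative, which is the kind of event-level measurement the paper's model targets.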

A Visual Effect Retrieval System Design for Communication in Film-production - Focused on the Effect Using Computer Graphics Technology - (영화 비주얼 이펙트 제작의 커뮤니케이션을 위한 자료검색 시스템 제안 - 컴퓨터 그래픽 기술을 이용한 이펙트를 중심으로 -)

  • Jo, Kook-Jung; Suk, Hae-Jung
    • The Journal of the Korea Contents Association, v.9 no.6, pp.92-103, 2009
  • With the help of computer graphics technologies, visual effects techniques have replaced most of the special effects techniques used in early films. Because of this change, in contemporary films directors and visual effects creators build an effect in a scene through mutual agreement. However, they undergo a great deal of trial and error while making a visual effects scene, because they cannot communicate their ideas perfectly: the director speaks the language of narrative, while the visual effects creator speaks the language of computer graphics technology. This research proposes the design of a visual effects data retrieval system for efficient communication between directors and visual effects creators. The application provides a searchable database of visual effects scenes extracted from 14 remarkable movies in visual effects history, indexed by narrative and by visual effects technique. Directors and creators can search visual effects scenes with the application, and the shared data can foster communication between them, enabling an efficient production pipeline.
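At its core, a retrieval system of the kind described reduces to filtering scene records on two indexed facets, narrative and technique. The snippet below is a minimal sketch of that idea; the field names and example records are invented for illustration and are not the paper's actual database schema or film set:

```python
# Minimal sketch of faceted retrieval over visual-effects scene records.
# The records below are invented examples, not the paper's 14-film database.
scenes = [
    {"film": "Jurassic Park", "narrative": "creature reveal",
     "technique": "CG character animation"},
    {"film": "Titanic", "narrative": "disaster",
     "technique": "digital water simulation"},
    {"film": "The Matrix", "narrative": "action",
     "technique": "bullet-time compositing"},
]

def search(records, narrative=None, technique=None):
    """Return records whose fields contain the given facet keywords."""
    hits = []
    for r in records:
        if narrative and narrative.lower() not in r["narrative"].lower():
            continue
        if technique and technique.lower() not in r["technique"].lower():
            continue
        hits.append(r)
    return hits

search(scenes, technique="compositing")   # -> the 'The Matrix' record only
```

Keyword matching on both facets lets a director query in narrative terms and a visual effects creator query in technique terms against the same records, which is the communication bridge the paper aims for.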

ViStoryNet: Neural Networks with Successive Event Order Embedding and BiLSTMs for Video Story Regeneration (ViStoryNet: 비디오 스토리 재현을 위한 연속 이벤트 임베딩 및 BiLSTM 기반 신경망)

  • Heo, Min-Oh; Kim, Kyung-Min; Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices, v.24 no.3, pp.138-144, 2018
  • A video is a vivid medium similar to human visual-linguistic experience, since it can convey a sequence of situations, actions, or dialogues that can be told as a story. In this study, we propose frameworks for learning and regenerating stories from videos, with successive event order supervision for contextual coherence. This supervision induces each episode to take the form of a trajectory in the latent space, which constructs a composite representation of ordering and semantics. We used kids' videos as training data; their advantages include an omnibus style, short and explicit storylines, chronological narrative order, and a relatively limited number of characters and spatial environments. We build an encoder-decoder structure with successive event order embedding (SEOE) and train bidirectional LSTMs as sequence models for multi-step sequence prediction. Using approximately 200 episodes of the kids' video series 'Pororo the Little Penguin', we give empirical results for story regeneration and SEOE. In addition, each episode shows a trajectory-like shape in the model's latent space, which provides geometric information for the sequence models.
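The "successive event order" supervision can be illustrated as an ordering penalty on consecutive event embeddings: each step through an episode should make progress along a shared direction in latent space, so the episode traces a trajectory. The hinge-style loss below is a hypothetical sketch of that idea; the paper's actual loss is not given in the abstract:

```python
def successive_order_loss(events, direction, margin=1.0):
    """Hinge penalty that is zero when every consecutive pair of event
    embeddings advances by at least `margin` along `direction`."""
    loss = 0.0
    for prev, cur in zip(events, events[1:]):
        # projection of the step prev -> cur onto the shared direction
        progress = sum((c - p) * d for p, c, d in zip(prev, cur, direction))
        loss += max(0.0, margin - progress)
    return loss

# Toy episode: 4 three-dimensional event embeddings advancing along axis 0.
episode = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0], [4.5, 0.0, 0.0]]
axis = [1.0, 0.0, 0.0]
successive_order_loss(episode, axis)        # 0.0: ordered episode, no penalty
successive_order_loss(episode[::-1], axis)  # 7.5: reversed order is penalized
```

Minimizing such a penalty during encoder training would push each episode's event embeddings into the trajectory-like shape the abstract reports observing in the latent space.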