• Title/Summary/Keyword: Automatic Story Generation

6 search results

Automatic Music-Story Video Generation Using Music Files and Photos in Automobile Multimedia System (자동차 멀티미디어 시스템에서의 사진과 음악을 이용한 음악스토리 비디오 자동생성 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.9 no.5 / pp.80-86 / 2010
  • This paper presents an automated music-story video generation technique as one of the entertainment features equipped in a vehicle's multimedia system. By connecting the user's mobile phone to the in-vehicle multimedia system, the system automatically creates stories that accompany music using photos stored on the phone. Users watch the generated music-story video while listening to music that matches its mood. The performance of the automated music-story video generation is measured by the accuracies of music classification, photo classification, and text-keyword extraction, and by the results of a user MOS test.
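The abstract above describes a pipeline that classifies music by mood, classifies photos, and pairs them into a story sequence. A minimal sketch of that data flow is below; the classifier stubs, labels, and affinity table are illustrative assumptions, not the paper's actual models.

```python
# Hypothetical sketch: classify a music file's mood, classify photos,
# then order the photos so those matching the mood lead the video.

def classify_music_mood(music_file: str) -> str:
    """Stub mood classifier (assumption; the paper uses audio features)."""
    return "calm" if "ballad" in music_file else "energetic"

def classify_photo(photo: str) -> str:
    """Stub photo scene classifier (assumption)."""
    return "landscape" if "trip" in photo else "portrait"

# Assumed mood-to-scene affinity scores.
AFFINITY = {("calm", "landscape"): 2, ("calm", "portrait"): 1,
            ("energetic", "portrait"): 2, ("energetic", "landscape"): 1}

def build_story(music_file: str, photos: list[str]) -> list[str]:
    mood = classify_music_mood(music_file)
    # Photos with the highest mood affinity come first in the story.
    return sorted(photos, key=lambda p: -AFFINITY[(mood, classify_photo(p))])

print(build_story("ballad_01.mp3", ["selfie.jpg", "trip_beach.jpg"]))
```

A real system would replace the stubs with trained audio and image classifiers; only the ordering logic is the point here.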

Automatic Generation of Diverse Cartoons using User's Profiles and Cartoon Features (사용자 프로파일 및 만화 요소를 활용한 다양한 만화 자동 생성)

  • Song, In-Jee;Jung, Myung-Chul;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications / v.34 no.5 / pp.465-475 / 2007
  • With the spread of the Internet, web users record their daily lives in articles, pictures, and cartoons to recollect personal memories or share their experiences. To ease this recollection and sharing process, this paper proposes diverse cartoon-generation methods using landmark lists that represent the user's behavior and emotional state. Based on the priority and causality of each landmark, critical landmarks are selected to compose the cartoon scenario, which is then revised with a story ontology. Using the similarity between cartoon images and each landmark in the revised scenario, a suitable cartoon cut is composed for each landmark. To make the cartoon story more diverse, weather, nightscape, supporting characters, exaggeration, and animation effects are additionally applied. The diversity of the generated cartoons is verified through example scenarios and usability tests.
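The selection-and-matching steps above (pick critical landmarks by priority, then match each to the most similar cartoon cut) can be sketched as follows; the data structures and Jaccard similarity are assumptions for illustration, not the paper's actual ontology-based method.

```python
# Illustrative sketch: select top-priority landmarks, then match each
# landmark to the cartoon cut with the most similar feature tags.
from dataclasses import dataclass

@dataclass
class Landmark:
    name: str
    priority: int          # importance of the event
    features: set          # keywords for the user's behavior/emotion

def select_critical(landmarks, top_k=3):
    """Keep the top-k landmarks by priority for the scenario."""
    return sorted(landmarks, key=lambda l: -l.priority)[:top_k]

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def match_cut(landmark, cuts):
    """Pick the (name, tags) cut most similar to the landmark's features."""
    return max(cuts, key=lambda cut: jaccard(landmark.features, cut[1]))

cuts = [("rainy_walk", {"rain", "walk"}), ("party", {"friends", "music"})]
lm = Landmark("after-work walk", 5, {"walk", "tired"})
print(match_cut(lm, cuts)[0])  # rainy_walk
```

The paper additionally revises the selected scenario with a story ontology and causality, which this sketch omits.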

A Study of an AI-Based Content Source Data Generation Model using Folk Paintings and Genre Paintings (민화와 풍속화를 이용한 AI 기반의 콘텐츠 원천 데이터 생성 모델의 연구)

  • Yang, Seokhwan;Lee, Young-Suk
    • Journal of Korea Multimedia Society / v.24 no.5 / pp.736-743 / 2021
  • Due to COVID-19, the non-face-to-face content market is growing rapidly. However, most non-face-to-face content, such as webtoons and web novels, is produced based on the traditional cultures of other countries rather than on Korean traditional culture. The biggest cause of this situation is the lack of reference materials for creation based on Korean traditional culture, so the need for materials on traditional Korean culture that can be used in content creation is emerging. In this paper, we propose a model that generates source data from traditional folk paintings through the fusion of Korean folk painting and AI technology. The proposed model secures basic data on folk paintings, analyzes their styles and characteristics, and converts their historical backgrounds and related stories into structured data. In addition, using the built data, various new stories are created with AI technology. The proposed model is highly useful in that it provides a foundation for new creation based on Korean traditional folk painting and AI technology.
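The abstract's data flow (convert folk-painting metadata into structured source records, then derive new story seeds from the built data) can be loosely sketched as below; every field name and the recombination step are assumptions, since the abstract does not specify an implementation and the real model uses AI generation rather than random recombination.

```python
# Loose sketch of the described flow: structured source records built from
# folk-painting metadata, then recombined into new story seeds.
import random

def to_source_record(title: str, era: str, motifs: list) -> dict:
    """Convert one folk painting's metadata into a structured record."""
    return {"title": title, "era": era, "motifs": motifs}

def generate_story_seed(records: list, rng: random.Random) -> str:
    # A real system would use a generative model; two records are simply
    # recombined here to illustrate "new stories from the built data".
    a, b = rng.sample(records, 2)
    return f"A tale set in the {a['era']} period featuring {b['motifs'][0]}."

records = [
    to_source_record("Tiger and Magpie", "Joseon", ["tiger", "magpie"]),
    to_source_record("Scholar's Study", "Joseon", ["books", "chaekgeori"]),
]
print(generate_story_seed(records, random.Random(0)))
```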

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.42-50 / 2023
  • The development of technology has driven digital innovation throughout the media industry, including production and editing techniques, and the OTT and streaming era has diversified how viewers consume content. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but studies that reflect the author's intention and generate contextually smooth stories remain insufficient. In this paper, we describe the flow of pictures in a storyboard using image-caption generation techniques and automatically generate story-tailored scenarios through a language model. Using a CNN and an attention mechanism for image captioning, we generate sentences describing the pictures on the storyboard and feed them into the Korean natural-language-processing model KoGPT-2 to automatically generate scenarios that meet the planning intention. Through this approach, scenarios customized to the author's intention and story can be created in large quantities, easing the burden of content creation, and artificial intelligence participates in the overall process of digital content production, activating media intelligence.
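The pipeline described above (caption each storyboard frame, then expand the captions into a scenario with a language model, calling the generator recursively) can be shown structurally; both models are replaced here by stubs, since the paper's CNN + attention captioner and KoGPT-2 are not reproduced.

```python
# Structural sketch of the captioning-to-scenario pipeline. The stubs stand
# in for the CNN + attention captioner and KoGPT-2 (assumptions), so only
# the data flow (caption -> prompt -> recursive generation) is shown.

def caption_image(image_path: str) -> str:
    """Stub for the CNN + attention caption model."""
    return f"a scene described from {image_path}"

def generate_scenario(prompt: str, max_sentences: int = 1) -> str:
    """Stub standing in for KoGPT-2 text generation."""
    return prompt + " Then the story continues." * max_sentences

def storyboard_to_scenario(images: list) -> str:
    # Caption every storyboard frame and join the captions into a prompt.
    prompt = " ".join(caption_image(img) for img in images)
    # Feed each generated chunk back in as the next prompt: the paper's
    # "recursive calls" that extend the story step by step.
    for _ in range(2):
        prompt = generate_scenario(prompt)
    return prompt

print(storyboard_to_scenario(["cut1.png", "cut2.png"]))
```

In a real implementation the second stub would be a call to a KoGPT-2 checkpoint (e.g. via a text-generation library), with each pass conditioned on the previously generated text.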

Automatic Weblog Generation from Mobile Context using Bayesian Network and Petri Net (베이지안 네트워크와 페트리넷을 이용한 모바일 상황정보로부터의 블로그 자동 생성)

  • Lee, Young-Seol;Cho, Sung-Bae
    • Journal of KIISE:Computing Practices and Letters / v.16 no.4 / pp.467-471 / 2010
  • The weblog is one of the most widespread web services, and its content records daily events and emotions. If personal information is collected with mobile devices and used to create a weblog, users can build their own weblogs easily, and some researchers have already developed systems that create weblogs in mobile environments. In this paper, the user's activity is inferred from personal information on the mobile device. The inferred activities and a story-generation engine are used to generate the weblog text. Finally, the text, photographs, and the user's movement on Google Maps are integrated into a weblog.
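The inference step above (a Bayesian network over mobile context producing an activity, which a story engine turns into text) can be sketched with a single-feature Bayes update; the priors, likelihoods, and template are invented numbers for illustration, not the paper's learned network or Petri-net story engine.

```python
# Minimal Bayes-rule sketch: infer the most probable activity from one
# observed mobile-context feature, then template it into weblog text.

PRIOR = {"commuting": 0.3, "working": 0.5, "leisure": 0.2}
# Assumed P(observation | activity) for the feature "moving".
LIKELIHOOD = {
    ("moving", "commuting"): 0.8,
    ("moving", "working"): 0.1,
    ("moving", "leisure"): 0.4,
}

def infer_activity(observation: str) -> str:
    # Unnormalized posterior is enough to pick the argmax activity.
    posterior = {a: PRIOR[a] * LIKELIHOOD.get((observation, a), 0.0)
                 for a in PRIOR}
    return max(posterior, key=posterior.get)

def weblog_sentence(activity: str, place: str) -> str:
    # The story-generation engine is reduced to a template here.
    return f"Today I spent time {activity} near {place}."

print(weblog_sentence(infer_activity("moving"), "Gangnam"))
# Today I spent time commuting near Gangnam.
```

The paper's system conditions on many context variables (location, time, call logs) rather than one feature, and sequences the resulting events with a Petri net before generating text.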

Automatic Video Editing Technology based on Matching System using Genre Characteristic Patterns (장르 특성 패턴을 활용한 매칭시스템 기반의 자동영상편집 기술)

  • Mun, Hyejun;Lim, Yangmi
    • Journal of Broadcast Engineering / v.25 no.6 / pp.861-869 / 2020
  • We introduce an application that automatically combines several images stored on a user's device into one video, using the different climax patterns that appear in each film genre. To classify genre characteristics, a climax-pattern model was created by analyzing domestic movies (drama, action, horror) and foreign movies (drama, action, horror). The climax pattern was characterized by changes in shot size, shot length, and the frequency of insert shots in specific scenes of a movie, and the results were visualized. The genre-specific visualized models were developed into templates using Firebase DB. Images stored on the user's device are selected and matched with the climax-pattern template for each genre. Although the output is a short video, the proposed application can create an emotional story video that reflects the characteristics of the genre. Recently, platform operators such as YouTube and Naver have been upgrading applications that automatically generate video from pictures or clips taken with a smartphone, but applications that carry movie-like genre characteristics or include story-oriented video-generation technology are still insufficient. The proposed automatic video editing has the potential to develop into a video-editing application capable of conveying emotion.
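The matching step above (assign user images to a genre's climax pattern so the clip pacing follows that genre) can be sketched as a timeline builder; the per-genre duration templates below are invented illustrative values, not the paper's analyzed shot statistics.

```python
# Hypothetical sketch: each genre template lists target clip durations in
# seconds, and user images are assigned those durations in order so the
# resulting video mimics the genre's climax pacing.

# Assumed templates: clips shorten toward the climax for action,
# lengthen for drama (illustrative values only).
TEMPLATES = {
    "action": [2.0, 1.5, 1.0, 0.5],
    "drama":  [1.0, 1.5, 2.0, 3.0],
}

def build_timeline(images: list, genre: str) -> list:
    durations = TEMPLATES[genre]
    # Repeat the pattern if there are more images than template slots.
    return [(img, durations[i % len(durations)])
            for i, img in enumerate(images)]

timeline = build_timeline(["a.jpg", "b.jpg", "c.jpg"], "action")
print(timeline)  # [('a.jpg', 2.0), ('b.jpg', 1.5), ('c.jpg', 1.0)]
```

The paper's system additionally varies shot size and insert-shot frequency per genre and fetches the templates from Firebase DB, which this sketch omits.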