• Title/Summary/Keyword: Generate AI Video

Analysis of the possibility of utilizing customized video production using generative AI

  • Hyun Kyung Seo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.11
    • /
    • pp.127-136
    • /
    • 2024
  • As generative AI technology develops, the paradigm of video production is also changing. After an initial stage in which generated footage could not be used in actual productions because of low quality and difficulties with consistency and continuity, videos produced with generative AI are now being used across the video industry. Following these changes, this paper identifies the potential of customized generative AI. It examines the direction of technological development of generative AI in the video industry and analyzes recent cases in advertising, film, and animation, showing that generative AI is increasingly adopted because it serves the essential purpose of the content as well as delivering quality results. Through this process, we highlight the potential of generative AI in the video industry.

Artificial Intelligence-Based Video Content Generation (인공지능 기반 영상 콘텐츠 생성 기술 동향)

  • Son, J.W.;Han, M.H.;Kim, S.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.3
    • /
    • pp.34-42
    • /
    • 2019
  • This study introduces artificial intelligence (AI) techniques for video generation. For clarity, techniques for video generation are classified as either semi-automatic or automatic. First, we discuss some recent achievements in semi-automatic video generation and explain which types of AI techniques can be applied to produce films and improve film quality. Additionally, we provide an example of video content that has been generated using AI techniques. Then, two automatic video-generation techniques are introduced with technical details. As there is currently no feasible automatic video-generation technique that can produce commercial videos, we explain their technical details and suggest future directions for researchers. Finally, we discuss several considerations for more practical automatic video-generation techniques.

Enhancing Video Storyboarding with Artificial Intelligence: An Integrated Approach Using ChatGPT and Midjourney within AiSAC

  • Sukchang Lee
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.3
    • /
    • pp.253-259
    • /
    • 2023
  • The incorporation of AI into video storyboard creation has been increasing recently. Traditionally, the production of storyboards requires significant time, cost, and specialized expertise. However, the integration of AI can amplify the efficiency of storyboard creation and enhance storytelling. In Korea, AiSAC stands at the forefront of AI-driven storyboard platforms, boasting the capability to generate realistic images built on open-dataset foundations. Yet, a notable limitation is the difficulty of intricately conveying a director's vision within the storyboard. To address this challenge, we propose applying the image-generation features of ChatGPT and Midjourney to AiSAC. Through this research, we aim to enhance the efficiency of storyboard production and refine the intricacy of expression, thereby facilitating advancements in the video production process.
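
As a rough illustration of the text-to-image step such an integration could rely on, the sketch below turns a scene description into a storyboard frame through the OpenAI Python client (the image model behind ChatGPT's generation feature). The model name, prompt, and parameters are illustrative assumptions; the paper's actual AiSAC integration, and any Midjourney workflow (which offers no public API), are not reproduced here.

```python
# Hypothetical storyboard-frame generation via the OpenAI image API.
# Requires the `openai` package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative scene description a director might supply to the storyboard tool.
scene = ("Wide shot at dawn: a rain-soaked street, the protagonist walking "
         "toward the camera, cinematic lighting.")

response = client.images.generate(
    model="dall-e-3",                                   # illustrative model choice
    prompt=f"Storyboard frame, rough pencil-sketch style: {scene}",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # URL of the generated storyboard frame
```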

Enhanced MCTS Algorithm for Generating AI Agents in General Video Games (일반적인 비디오 게임의 AI 에이전트 생성을 위한 개선된 MCTS 알고리즘)

  • Oh, Pyeong;Kim, Ji-Min;Kim, Sun-Jeong;Hong, Seokmin
    • The Journal of Information Systems
    • /
    • v.25 no.4
    • /
    • pp.23-36
    • /
    • 2016
  • Purpose: Recently, many researchers have paid attention to the artificial intelligence fields of general video game playing (GVGP) and procedural content generation (PCG). This paper suggests that an improved MCTS algorithm applied to the GVGP framework can generate a better AI agent. Design/methodology/approach: MCTS delivers strong performance without advanced training and is therefore well suited to GVGP, which assumes no prior knowledge of the game. With the improved and modified MCTS, the survival rate increases noticeably and the search is carried out more effectively. The study was conducted with two different game sets. Findings: The results showed that, on the set of ten training games for which no prior knowledge was given and on the second set used for validation, the proposed algorithm performed better than the existing MCTS algorithm. Based on these results, further study was suggested.
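
For readers unfamiliar with the baseline the paper modifies, the following is a minimal Python sketch of the standard MCTS loop (selection, expansion, simulation, backpropagation) used by GVGP agents. The GameState interface (legal_actions, step, is_terminal, score) is a hypothetical stand-in for the GVG-AI framework's forward model, and none of the paper's specific improvements are reproduced here.

```python
# Minimal MCTS sketch with UCB1 selection and random rollouts.
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state          # game state reached at this node
        self.parent = parent
        self.action = action        # action taken from the parent
        self.children = []
        self.untried = list(state.legal_actions())
        self.visits = 0
        self.value = 0.0            # accumulated rollout reward

    def ucb1(self, c=1.4):
        # Upper Confidence Bound balances exploration and exploitation.
        return (self.value / self.visits +
                c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=200, rollout_depth=20):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: try one untried action.
        if node.untried and not node.state.is_terminal():
            action = node.untried.pop()
            child = Node(node.state.step(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout up to a fixed depth.
        state = node.state
        for _ in range(rollout_depth):
            if state.is_terminal():
                break
            state = state.step(random.choice(state.legal_actions()))
        reward = state.score()
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Act greedily with respect to visit counts at the root.
    return max(root.children, key=lambda n: n.visits).action
```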

A Feasibility Study on RUNWAY GEN-2 for Generating Realistic Style Images

  • Yifan Cui;Xinyi Shan;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.99-105
    • /
    • 2024
  • Runway released an updated version, Gen-2, in March 2023, which introduced features that differ from Gen-1: it can convert text or images into videos, or convert text and images together into video based on text instructions. The update was officially opened to the public in June 2023 so that more people could enjoy and use their creativity. With these new features, users can easily transform text and images into impressive video creations. However, as with all new technologies, the instability of AI also affects the results generated by Runway. This article verifies the feasibility of using Runway to generate a desired video from several aspects through hands-on practice. In that practice, problems with Runway's generation were identified, and improvement methods are proposed to increase the accuracy of the generated results. The study found that although the instability of AI is a factor that requires attention, careful adjustment and testing still allow users to make full use of this feature and create stunning video works. This update marks the beginning of a more innovative and diverse future for the digital creative field.

Kernel-Based Video Frame Interpolation Techniques Using Feature Map Differencing (특성맵 차분을 활용한 커널 기반 비디오 프레임 보간 기법)

  • Dong-Hyeok Seo;Min-Seong Ko;Seung-Hak Lee;Jong-Hyuk Park
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.17-27
    • /
    • 2024
  • Video frame interpolation is an important technique in the video and media field, as it increases the continuity of motion and enables smooth playback of videos. In research on deep-learning-based video frame interpolation, kernel-based methods capture local changes well but have limitations in handling global changes. In this paper, we propose a new U-Net structure that applies feature map differencing and two directions to focus on capturing major changes, generating intermediate frames more accurately while reducing the number of parameters. Experimental results show that the proposed structure outperforms the existing model by up to 0.3 dB in PSNR with about 61% fewer parameters on common datasets such as Vimeo, Middlebury, and a new YouTube dataset. Code is available at https://github.com/Go-MinSeong/SF-AdaCoF.
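
To make the idea of a kernel-based method concrete, here is a toy PyTorch sketch in which a small network predicts a per-pixel blending kernel over a local window of the two input frames and synthesizes the intermediate frame by applying it. The architecture is purely illustrative; the paper's U-Net with feature map differencing is not reproduced.

```python
# Toy kernel-based frame interpolation: predict per-pixel kernels, apply them
# to local neighborhoods of both input frames, and blend into a middle frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyKernelInterpolator(nn.Module):
    def __init__(self, k=5):
        super().__init__()
        self.k = k
        # Predict one k*k kernel per pixel for each of the two source frames.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2 * k * k, 3, padding=1),
        )

    def forward(self, frame0, frame1):
        b, c, h, w = frame0.shape
        weights = self.net(torch.cat([frame0, frame1], dim=1))   # B x 2k^2 x H x W
        weights = F.softmax(weights, dim=1)                       # normalize kernels
        # Gather k x k neighborhoods of both frames via unfold.
        patches = torch.cat([
            F.unfold(frame0, self.k, padding=self.k // 2),        # B x (3k^2) x HW
            F.unfold(frame1, self.k, padding=self.k // 2),
        ], dim=1).view(b, 2, c, self.k * self.k, h, w)
        weights = weights.view(b, 2, 1, self.k * self.k, h, w)
        # Weighted sum over the window and the two frames gives the middle frame.
        return (patches * weights).sum(dim=(1, 3))

# Usage: interpolate between two random 3-channel frames.
f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
mid = ToyKernelInterpolator()(f0, f1)
print(mid.shape)  # torch.Size([1, 3, 64, 64])
```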

GENERATION OF FUTURE MAGNETOGRAMS FROM PREVIOUS SDO/HMI DATA USING DEEP LEARNING

  • Jeon, Seonggyeong;Moon, Yong-Jae;Park, Eunsu;Shin, Kyungin;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.82.3-82.3
    • /
    • 2019
  • In this study, we generate future full-disk magnetograms 12, 24, 36, and 48 hours in advance from SDO/HMI images using deep learning. To perform this generation, we apply the conditional generative adversarial network (cGAN) algorithm to a series of SDO/HMI magnetograms. We use SDO/HMI data from 2011 to 2016 to train four models. The models produce AI-generated images for 2017 HMI data, which we compare with the actual HMI magnetograms for evaluation. The AI-generated images from each model are very similar to the actual images. The average correlation coefficient between the two images over about 600 data sets is about 0.85 for all four models. We are examining hundreds of active regions for a more detailed comparison. In the future we will use pix2pix HD and video2video translation networks for image prediction.
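
The following compressed PyTorch sketch shows the kind of image-to-image cGAN training step such magnetogram forecasting builds on (a pix2pix-style objective with an adversarial term plus L1 reconstruction). The placeholder generator and discriminator are illustrative stand-ins, not the paper's networks.

```python
# Pix2pix-style cGAN training step: G maps a past magnetogram to a future one,
# D judges (input, output) pairs.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))          # stand-in generator
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))          # stand-in PatchGAN
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(past, future, lambda_l1=100.0):
    """past/future: B x 1 x H x W magnetogram tensors."""
    fake = G(past)
    # Discriminator: real pairs -> 1, generated pairs -> 0.
    d_real = D(torch.cat([past, future], dim=1))
    d_fake = D(torch.cat([past, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator and stay close to the target (L1 term).
    d_fake = D(torch.cat([past, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, future)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One illustrative step on random data.
print(train_step(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)))
```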

Automatic Poster Generation System Using Protagonist Face Analysis

  • Yeonhwi You;Sungjung Yong;Hyogyeong Park;Seoyoung Lee;Il-Young Moon
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.287-293
    • /
    • 2023
  • With the rapid development of domestic and international over-the-top markets, a large amount of video content is being created. As the volume of video content increases, consumers increasingly check information about a video before watching it. To address this demand, video summaries in the form of plot descriptions, thumbnails, posters, and other formats are provided to consumers. This study proposes an approach that automatically generates posters to effectively convey video content while reducing the cost of video summarization. In the automatic generation of posters, face recognition and clustering are used to gather and classify character data, and keyframes from the video are extracted to learn the overall atmosphere of the video. This study used the facial data of the characters and the keyframes as training data and employed technologies such as DreamBooth, a text-to-image generation model, to automatically generate video posters. This process significantly reduces the time and cost of video-poster production.
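
A rough sketch of the character-gathering step described above: faces are detected in sampled frames with OpenCV, embedded, and clustered so that frames of the protagonist can be selected. The naive resized-pixel embedding and the clustering thresholds are illustrative assumptions; the paper's pipeline (and its DreamBooth fine-tuning stage) would use a proper face-embedding model.

```python
# Face collection and clustering for protagonist identification (illustrative).
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def collect_faces(video_path, every_n_frames=30):
    cap, faces, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                # Naive embedding: resized grayscale crop flattened to a vector.
                crop = cv2.resize(gray[y:y+h, x:x+w], (32, 32))
                faces.append(crop.flatten().astype(np.float32) / 255.0)
        idx += 1
    cap.release()
    return np.array(faces)

faces = collect_faces("trailer.mp4")       # hypothetical input file
if len(faces):
    labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(faces)
    # The largest non-noise cluster is taken as the protagonist's face set.
    main = max(set(labels) - {-1}, key=list(labels).count, default=None)
    print("protagonist cluster:", main, "faces:", list(labels).count(main))
```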

Automatic Generation of Video Metadata for the Super-personalized Recommendation of Media

  • Yong, Sung Jung;Park, Hyo Gyeong;You, Yeon Hwi;Moon, Il-Young
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.4
    • /
    • pp.288-294
    • /
    • 2022
  • The media content market has been growing as various types of content are mass-produced owing to the recent proliferation of the Internet and digital media. In addition, platforms that provide personalized services for content consumption are emerging and competing with each other to recommend personalized content. Existing platforms use a method in which a user directly inputs video metadata, so significant amounts of time and cost are consumed in processing large amounts of data. In this study, keyframes based on the YCbCr color model and audio spectra were extracted from a movie trailer for the automatic generation of metadata. The extracted audio spectra and image keyframes were used as training data for genre recognition with deep learning. Deep learning was implemented to determine the genre among the video metadata, and suggestions for its utilization were proposed. A system that automatically generates metadata, built on the results of this study, will be helpful for research on recommendation systems for media super-personalization.
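
A simplified sketch of the feature-extraction stage described above: keyframes are picked from a trailer by thresholding frame-to-frame change in YCbCr space, and the audio track (assumed to be available as a separate WAV file) is converted into a log-mel spectrogram for the genre classifier. Thresholds, sampling choices, and file names are illustrative.

```python
# Keyframe and audio-spectrogram extraction for genre recognition (illustrative).
import cv2
import numpy as np
import librosa

def extract_keyframes(video_path, diff_threshold=30.0):
    cap, keyframes, prev = cv2.VideoCapture(video_path), [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ycbcr = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb).astype(np.float32)
        if prev is None or np.abs(ycbcr - prev).mean() > diff_threshold:
            keyframes.append(frame)        # large change -> treat as a keyframe
        prev = ycbcr
    cap.release()
    return keyframes

def audio_spectrogram(wav_path):
    y, sr = librosa.load(wav_path, sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    return librosa.power_to_db(mel)        # log-mel spectrogram for the CNN

# The keyframes and the spectrogram would then be fed to a genre classifier.
frames = extract_keyframes("trailer.mp4")
spec = audio_spectrogram("trailer_audio.wav")
print(len(frames), spec.shape)
```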

Generation of Stage Tour Contents with Deep Learning Style Transfer (딥러닝 스타일 전이 기반의 무대 탐방 콘텐츠 생성 기법)

  • Kim, Dong-Min;Kim, Hyeon-Sik;Bong, Dae-Hyeon;Choi, Jong-Yun;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.11
    • /
    • pp.1403-1410
    • /
    • 2020
  • Recently, as interest in non-face-to-face experiences and services increases, demand for web video content that can be easily consumed on mobile devices such as smartphones and tablets is rapidly increasing. To meet these requirements, this paper proposes a technique for efficiently producing video content that provides the experience of visiting famous locations (i.e., a stage tour) from animations or movies. To this end, an image dataset was established by collecting images of stage areas using the Google Maps and Google Street View APIs. Afterwards, we present a deep learning-based style transfer method that applies the unique style of animation videos to the collected street-view images and generates the video content from the style-transferred images. Finally, we show through various experiments that the proposed method can produce more engaging stage-tour video content.
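
For reference, a condensed PyTorch sketch of Gatys-style neural style transfer, the family of techniques the paper applies to street-view images, is given below. Layer choices, weights, and the optimization schedule are illustrative, not the paper's configuration.

```python
# Gatys-style style transfer: optimize an image to match content features of
# a photo and Gram-matrix style statistics of a reference frame.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(1, 6, 11, 20)):
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):
    # Style statistics of a feature map (assumes batch size 1).
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content, style, steps=200, style_weight=1e5):
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    c_feats = features(content)
    s_grams = [gram(f) for f in features(style)]
    for _ in range(steps):
        feats = features(img)
        c_loss = F.mse_loss(feats[-1], c_feats[-1])                 # keep content
        s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, s_grams))
        loss = c_loss + style_weight * s_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return img.detach()

# Usage with random tensors standing in for a street-view photo and an
# animation frame (both 1 x 3 x 256 x 256; ImageNet-normalized in practice).
result = stylize(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256), steps=5)
print(result.shape)
```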