• Title/Summary/Keyword: Video Generation


A Feasibility Study on RUNWAY GEN-2 for Generating Realistic Style Images

  • Yifan Cui;Xinyi Shan;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.99-105 / 2024
  • Runway released an updated version, Gen-2, in March 2023, which introduced features absent from Gen-1: it can convert text or images into video, or combine text and images to generate video guided by text instructions. The update was officially opened to the public in June 2023, allowing a wider audience to use it creatively. With these new features, users can easily transform text and images into impressive video creations. However, as with all new technologies, the instability of AI affects the results that Runway generates. This article verifies, through hands-on practice, the feasibility of using Runway to generate a desired video from several aspects. In the course of this practice, problems with Runway's generation were identified, and improvement methods are proposed to raise the accuracy of its output. The study found that although the instability of AI is a factor requiring attention, with careful adjustment and testing users can still make full use of this feature and create striking video works. This update marks the beginning of a more innovative and diverse future for the digital creative field.

Seeking for Underlying Meaning of the 'house' and Characteristics in Music Video - Analyzing Seotaiji and Boys and BTS Music Video in Perspective of Generation - ( 뮤직비디오에 나타난 '집'의 의미와 성격 - 서태지와 아이들, 방탄소년단 작품에 대한 세대론적 접근 -)

  • Kil, Hye Bin;Ahn, Soong Beum
    • The Journal of the Korea Contents Association / v.19 no.5 / pp.24-34 / 2019
  • This study compares in depth songs performed by two groups, 'Seo Taiji and Boys' (X Generation) and 'BTS' (C Generation), based on the discourse about the 'X Generation' of the 1990s and the 'C Generation' of the 2010s. It focuses specifically on the 'home' that carries great significance in each music video and seeks its sociocultural meaning. Based on the analysis, the original performance by Seo Taiji and Boys demonstrated a vertical structure of enlightenment and discipline and narrated its story with a plot of 'maturity'; the meaning of 'home' in the original shifts from a target of resistance to a subject of internalization. The BTS remake music video demonstrated a horizontal structure of empathy and solidarity and narrated its story with a plot of 'pursuit/discovery'; the 'home' here can be read as the very life of a person who maintains his or her self-identity.

Fast Generation of 3-D Video Holograms using a Look-up Table and Temporal Redundancy of 3-D Video Image (룩업테이블과 3차원 동영상의 시간적 중복성을 이용한 3차원 비디오 홀로그램의 고속 생성)

  • Kim, Seung-Cheol;Kim, Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1076-1085 / 2009
  • In this paper, a new method for efficient computation of CGH (computer-generated hologram) patterns for 3-D video images is proposed, based on the combined use of temporal redundancy and look-up table techniques. In the conventional N-LUT (novel look-up table) method, the fringe patterns of the object points on an image plane are obtained by simply shifting pre-calculated PFPs (principal fringe patterns). Even so, real-time generation of 3-D video holograms has faced many practical limitations, because the required computation time is massively increased compared to that of static holograms. On the other hand, ordinary 3-D moving pictures exhibit numerous similarities between video frames, known as temporal redundancy, and this redundancy is already exploited to compress video. Therefore, this paper proposes an efficient hologram generation method that combines the temporal redundancy of 3-D video images with the N-LUT method. To confirm its feasibility, experiments with test 3-D videos are carried out, and the results are compared with conventional methods in terms of the number of object points and computation time.
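  The temporal-redundancy idea described in this abstract can be sketched in a few lines: because a hologram is a linear sum over object points, only the points that appeared or vanished between frames need their fringe contributions (re)computed. This is an illustrative NumPy sketch, not the paper's N-LUT implementation; the wavelength, pixel pitch, and resolution are assumed values.

```python
import numpy as np

# Hypothetical optical parameters -- assumed for illustration, not from the paper.
WAVELEN = 532e-9          # laser wavelength (m)
PITCH = 10e-6             # hologram pixel pitch (m)
H, W = 256, 256           # hologram resolution

ys, xs = np.meshgrid(np.arange(H) * PITCH, np.arange(W) * PITCH, indexing="ij")

def fringe(point):
    """Fresnel zone pattern contributed by one object point (x, y, z, amplitude)."""
    px, py, pz, amp = point
    r2 = (xs - px) ** 2 + (ys - py) ** 2
    return amp * np.cos(np.pi / (WAVELEN * pz) * r2)

def hologram(points):
    """Full CGH: sum the fringe of every object point (the slow baseline)."""
    out = np.zeros((H, W))
    for p in points:
        out += fringe(p)
    return out

def update_hologram(prev_holo, prev_points, cur_points):
    """Temporal-redundancy update: recompute only the points that changed
    between frames, instead of regenerating the whole hologram."""
    prev_set, cur_set = set(prev_points), set(cur_points)
    out = prev_holo.copy()
    for p in prev_set - cur_set:   # points removed since the last frame
        out -= fringe(p)
    for p in cur_set - prev_set:   # points added since the last frame
        out += fringe(p)
    return out
```

  Subtracting the fringes of vanished points and adding those of new ones reproduces a full recomputation, at a cost proportional to the number of changed points rather than the whole object.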

Efficient Side Information Generation Techniques and Performance Evaluation for Distributed Video Coding System (분산 동영상 부호화 시스템을 위한 부가정보 생성 기법의 성능 평가)

  • Moon, Hak-Soo;Lee, Chang-Woo;Lee, Seong-Won
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.3C / pp.140-148 / 2011
  • The side information in a distributed video coding system is generated using motion-compensated interpolation methods. Since the accuracy of the generated side information affects the number of parity bits needed to reconstruct the Wyner-Ziv frame, it is important to produce accurate side information. In this paper, we analyze the performance of various side information generation methods and propose an effective side information generation technique. We also compare the side information generation methods from a hardware point of view and analyze the performance of a distributed video coding system using each of them.
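  Motion-compensated interpolation of the kind this abstract refers to can be sketched as a full-search block matcher between the two key frames, placing the averaged block halfway along the estimated motion trajectory. This is a minimal NumPy sketch; the block size and search range are assumed parameters, not taken from the paper.

```python
import numpy as np

BLOCK = 8      # block size in pixels (assumed)
SEARCH = 4     # +/- full-search range in pixels (assumed)

def motion_compensated_interpolation(key0, key1):
    """Estimate the frame lying midway between two key frames: for every
    block of key0, find its best SAD match in key1, then write the average
    of the two matched blocks at the halved motion position."""
    h, w = key0.shape
    side_info = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            blk = key0[by:by + BLOCK, bx:bx + BLOCK].astype(np.float64)
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + BLOCK <= h and 0 <= x and x + BLOCK <= w:
                        cand = key1[y:y + BLOCK, x:x + BLOCK].astype(np.float64)
                        sad = np.abs(blk - cand).sum()
                        if sad < best:
                            best, best_dy, best_dx = sad, dy, dx
            # place the averaged block halfway along the motion trajectory
            my = int(np.clip(by + best_dy // 2, 0, h - BLOCK))
            mx = int(np.clip(bx + best_dx // 2, 0, w - BLOCK))
            match = key1[by + best_dy:by + best_dy + BLOCK,
                         bx + best_dx:bx + best_dx + BLOCK].astype(np.float64)
            side_info[my:my + BLOCK, mx:mx + BLOCK] = (blk + match) / 2.0
    return side_info
```

  The closer this interpolated frame is to the true Wyner-Ziv frame, the fewer parity bits the decoder has to request, which is why the abstract stresses side-information accuracy.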

Study of Next Generation Game Animation (넥스트 제너레이션 게임애니메이션 연구)

  • Park, Hong-Kyu
    • Cartoon and Animation Studies / s.13 / pp.223-236 / 2008
  • The video game industry is preoccupied with the notion of the "next generation game". The appearance of next generation game consoles has required the video game industry to develop new technologies across its entire production pipeline. This tendency greatly increases production cost: game companies have to hire more designers to create a solid concept, more artists to generate more detailed content, and more programmers to optimize for more complex hardware. All of this costly effort yields great-looking games, but the potential of next generation consoles does not end there; they also open up possibilities for new types of gameplay. A next generation game commands a much larger pool of memory for every video game element. An entire video game used to use roughly 800 animation files, while a next generation game pushes scripted events well over 4,000 animation files. That allows a great deal of unique custom animation for nearly every action in the game, giving players a far more vivid and realistic experience of the virtual world. Players no longer see the same animation recycled over and over while playing a next generation game. The main purpose of this thesis is to define the concept of the next generation game and to analyze a new animation pipeline for use in shooter games.


Optimizing the Joint Source/Network Coding for Video Streaming over Multi-hop Wireless Networks

  • Cui, Huali;Qian, Depei;Zhang, Xingjun;You, Ilsun;Dong, Xiaoshe
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.800-818 / 2013
  • Supporting video streaming over multi-hop wireless networks is particularly challenging due to the time-varying and error-prone characteristics of the wireless channel. In this paper, we propose a joint optimization scheme for video streaming over multi-hop wireless networks. Our coding scheme, called Joint Source/Network Coding (JSNC), combines source coding and network coding to maximize the video quality under the limited wireless resources and coding constraints. JSNC segments the streaming data into generations at the source node and exploits the intra-session coding on both the source and the intermediate nodes. The size of the generation and the level of redundancy influence the streaming performance significantly and need to be determined carefully. We formulate the problem as an optimization problem with the objective of minimizing the end-to-end distortion by jointly considering the generation size and the coding redundancy. The simulation results demonstrate that, with the appropriate generation size and coding redundancy, the JSNC scheme can achieve an optimal performance for video streaming over multi-hop wireless networks.
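  The generation-based intra-session coding that JSNC builds on can be illustrated with random linear network coding over GF(2). This is a simplification for illustration only: practical systems typically code over GF(2^8), and the systematic packets used in the test below are not part of the paper's scheme.

```python
import random

def encode_generation(packets, n_coded, seed=0):
    """Intra-session coding over GF(2): each coded packet is the XOR of a
    random non-empty subset of the generation's source packets. A packet is
    represented as an int; the coefficient vector as a k-bit mask."""
    k = len(packets)
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        mask = 0
        while mask == 0:                        # all-zero coefficients carry no information
            mask = rng.getrandbits(k)
        payload = 0
        for i in range(k):
            if mask >> i & 1:
                payload ^= packets[i]
        coded.append((mask, payload))
    return coded

def decode_generation(k, coded):
    """Gaussian elimination over GF(2): returns the k source packets once
    k linearly independent coded packets have arrived, else None."""
    basis = {}                                  # pivot bit -> (mask, payload)
    for mask, payload in coded:
        for piv in sorted(basis, reverse=True): # reduce by known pivots, high to low
            if mask >> piv & 1:
                pm, pp = basis[piv]
                mask ^= pm
                payload ^= pp
        if mask:                                # innovative packet: record its pivot
            basis[mask.bit_length() - 1] = (mask, payload)
    if len(basis) < k:
        return None                             # generation not yet decodable
    out = [0] * k
    for piv in sorted(basis):                   # back-substitute, low to high
        mask, payload = basis[piv]
        for j in range(piv):
            if mask >> j & 1:
                payload ^= out[j]
        out[piv] = payload
    return out
```

  The generation size k trades coding efficiency against decoding delay, and the redundancy n_coded − k determines how many losses a generation can absorb; these are exactly the two knobs the JSNC optimization tunes.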

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
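  Two key steps of this abstract, the hypothesized depth-gradient model for the initial depth and the difference-image update that suppresses flicker, can be sketched as follows. The gradient direction and the difference threshold are assumed values, not taken from the paper.

```python
import numpy as np

DIFF_THRESHOLD = 10.0   # assumed luminance-difference threshold (not from the paper)

def gradient_depth(h, w):
    """Hypothesized depth-gradient model: the bottom of the frame is assumed
    near (depth 255), the top far (depth 0)."""
    return np.tile(np.linspace(0.0, 255.0, h).reshape(-1, 1), (1, w))

def update_depth(prev_depth, new_depth, prev_frame, cur_frame):
    """Difference-image update: keep the previous depth wherever the image
    barely changed, and adopt the new estimate only where it moved. Static
    regions therefore keep a stable depth, which reduces flicker."""
    diff = np.abs(cur_frame.astype(np.float64) - prev_frame.astype(np.float64))
    return np.where(diff > DIFF_THRESHOLD, new_depth, prev_depth)
```

  Flicker arises precisely when static pixels receive a different depth on every frame; gating the update on the difference image pins those pixels to their previous depth.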

Generation of Video Clips Utilizing Shot Boundary Detection (샷 경계 검출을 이용한 영상 클립 생성)

  • Kim, Hyeok-Man;Cho, Seong-Kil
    • Journal of KIISE: Computing Practices and Letters / v.7 no.6 / pp.582-592 / 2001
  • Video indexing plays an important role in applications such as digital video libraries or web VOD services that archive large volumes of digital video. Video indexing is usually based on video segmentation. In this paper, we propose a software tool called V2Web Studio which can generate video clips using a shot boundary detection algorithm. With the V2Web Studio, clip generation consists of the following four steps: 1) automatic detection of shot boundaries by parsing the video, 2) elimination of errors by manually verifying the detection results, 3) building a logical hierarchy model from the verified shots, and 4) generating multiple video clips corresponding to each logically modeled segment. These steps are performed by the shot detector, shot verifier, video modeler, and clip generator of the V2Web Studio, respectively.
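  The first and last steps of the four-step process can be sketched with a simple histogram-difference detector. The V2Web Studio's actual algorithm is not specified in the abstract; the bin count and threshold below are assumed values.

```python
import numpy as np

HIST_BINS = 16
CUT_THRESHOLD = 0.5    # assumed threshold on histogram distance

def frame_histogram(frame):
    """Normalized gray-level histogram of one frame."""
    hist, _ = np.histogram(frame, bins=HIST_BINS, range=(0, 256))
    return hist / max(hist.sum(), 1)

def detect_shot_boundaries(frames):
    """Step 1: declare a shot boundary wherever the histogram distance
    between consecutive frames exceeds the threshold."""
    boundaries = []
    for i in range(1, len(frames)):
        d = np.abs(frame_histogram(frames[i]) - frame_histogram(frames[i - 1])).sum()
        if d > CUT_THRESHOLD:
            boundaries.append(i)
    return boundaries

def shots_to_clips(num_frames, boundaries):
    """Step 4: turn (manually verified) boundaries into end-exclusive
    (start, end) clip ranges, one clip per shot."""
    starts = [0] + boundaries
    ends = boundaries + [num_frames]
    return list(zip(starts, ends))
```

  The manual verification of step 2 fits between these two functions: a human inspects and corrects the boundary list before it is turned into clips, which is why the tool separates the detector from the verifier.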


Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.15 no.2 / pp.31-39 / 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a promising solution to the shortage of 3D content during the transition to a mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.

Construction of a Video Dataset for Face Tracking Benchmarking Using a Ground Truth Generation Tool

  • Do, Luu Ngoc;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Na, In Seop;Kim, Sun Hee
    • International Journal of Contents / v.10 no.1 / pp.1-11 / 2014
  • In the current generation of smart mobile devices, object tracking is one of the most important research topics for computer vision. Because human face tracking can be widely used for many applications, collecting a dataset of face videos is necessary for evaluating the performance of a tracker and for comparing different approaches. Unfortunately, the well-known benchmark datasets of face videos are not sufficiently diverse. As a result, it is difficult to compare the accuracy between different tracking algorithms in various conditions, namely illumination, background complexity, and subject movement. In this paper, we propose a new dataset that includes 91 face video clips that were recorded in different conditions. We also provide a semi-automatic ground-truth generation tool that can easily be used to evaluate the performance of face tracking systems. This tool helps to maintain the consistency of the definitions for the ground-truth in each frame. The resulting video data set is used to evaluate well-known approaches and test their efficiency.