• Title/Summary/Keyword: Video Contents Generation

Search Result 96

Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.15 no.2 / pp.31-39 / 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a promising way to ease the shortage of 3D content during the transition to a full-fledged 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for the local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in both complexity and subjective quality.
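The abstract does not specify the depth pattern itself, but the pipeline it describes (a global depth pattern followed by stereoscopic view synthesis) can be sketched in Python. The vertical-gradient pattern and the DIBR-style pixel shift below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def global_depth_pattern(h, w):
    """Hypothetical global depth pattern: depth increases from the top
    of the frame (far) to the bottom (near), a common cue in many scenes."""
    col = np.linspace(0.0, 1.0, h).reshape(h, 1)
    return np.tile(col, (1, w))

def synthesize_right_view(left, depth, max_disparity=2):
    """Shift each pixel horizontally by a disparity proportional to its
    depth (nearer pixels shift more), basic DIBR-style view synthesis."""
    h, w = depth.shape
    right = np.zeros_like(left)
    disp = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disp[y, x]
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
    return right

left = np.random.rand(4, 6)           # stand-in for a luminance image
depth = global_depth_pattern(4, 6)
right = synthesize_right_view(left, depth)
```

The temporal-consistency benefit claimed in the abstract follows from the pattern being fixed per scene rather than re-estimated per frame.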

Biological Infectious Watermarking Model for Video Copyright Protection

  • Jang, Bong-Joo;Lee, Suk-Hwan;Lim, SangHun;Kwon, Ki-Ryong
    • Journal of Information Processing Systems / v.11 no.2 / pp.280-294 / 2015
  • This paper presents an infectious watermarking model (IWM) for the protection of video content, modeled on the infection route and procedure of a biological virus. Infectious watermarking is designed as a new protection paradigm for video content, regarding the hidden watermark as an infectious virus, the video content as the host, and the codec as the contagion medium. Pathogen, mutant, and contagion are used as the infectious watermark, and techniques for infectious watermark generation and authentication, kernel-based infectious watermarking, and content-based infectious watermarking are defined. The model is evaluated using existing watermarking methods as the kernel-based and content-based infectious watermarking media, and its practical applicability is verified through these experiments.

Block-Centered Symmetric Motion Estimation for Side Information Generation in Distributed Video Coding (분산 비디오 부호화에서 보조정보 생성을 위한 블록중심 대칭형의 움직임 탐색 기법)

  • Lee, Chan-Hee;Kim, Jin-Soo
    • The Journal of the Korea Contents Association / v.10 no.6 / pp.35-42 / 2010
  • Side information generation techniques largely determine the overall performance of a DVC (Distributed Video Coding) system. Most conventional techniques for side information generation are based on block matching with symmetric motion estimation between previously reconstructed key frames. However, these techniques tend to produce mismatches between the estimated motion vectors and the actual positions of moving objects, so they must be modified to locate moving objects more accurately. To overcome this problem, this paper proposes a block-centered symmetric motion estimation technique in which the search is centered on the same coordinates as the given block. Computer simulations show that the proposed algorithm outperforms conventional schemes in objective quality.
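A minimal sketch of block-centered symmetric motion estimation, assuming a SAD matching criterion and a small search radius (both illustrative; the abstract does not give the paper's exact criterion). The search is anchored at the block's own coordinates, and each candidate vector is applied symmetrically (+v into the next key frame, -v into the previous one):

```python
import numpy as np

def block_centered_symmetric_me(prev_key, next_key, block_xy, bsize=4, radius=2):
    """Return the symmetric motion vector minimizing the SAD between the
    patch at (block - v) in the previous key frame and (block + v) in the
    next key frame, searching around the block's own coordinates."""
    y0, x0 = block_xy
    h, w = prev_key.shape
    best_v, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            py, px = y0 - dy, x0 - dx   # previous key frame: -v
            ny, nx = y0 + dy, x0 + dx   # next key frame: +v
            if not (0 <= py and py + bsize <= h and 0 <= px and px + bsize <= w):
                continue
            if not (0 <= ny and ny + bsize <= h and 0 <= nx and nx + bsize <= w):
                continue
            sad = np.abs(prev_key[py:py+bsize, px:px+bsize].astype(int)
                         - next_key[ny:ny+bsize, nx:nx+bsize].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_v = sad, (dy, dx)
    return best_v, best_sad

frame = np.arange(64).reshape(8, 8)
v, sad = block_centered_symmetric_me(frame, frame, (2, 2))
# identical key frames: the zero vector matches exactly
```

The side-information block would then be interpolated midway along the winning vector; that averaging step is omitted here for brevity.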

Improved Side Information Generation using Field Coding for Wyner-Ziv Codec (Wyner-Ziv 부호화기를 위한 필드 부호화 기반 개선된 보조정보 생성)

  • Han, Chan-Hee;Jeon, Yeong-Il;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.9 no.11 / pp.10-17 / 2009
  • Wyner-Ziv video coding is a new video compression paradigm based on the distributed source coding theory of Slepian-Wolf and Wyner-Ziv. Wyner-Ziv coding enables a light-encoder/heavy-decoder structure by shifting complex modules, including motion estimation/compensation, to the decoder. Instead of the encoder performing the complicated motion estimation process, the Wyner-Ziv decoder performs motion estimation to generate side information, the predicted signal for the Wyner-Ziv frame. The efficiency of side information generation deeply affects the overall coding performance, since the bit-rate of Wyner-Ziv coding depends directly on the side information. In this paper, an improved side information generation method using field coding is proposed. In the proposed method, top fields are coded with the existing SI generation method, while bottom fields are coded with a new SI generation method that exploits information from the top fields. Simulation results show that the proposed method improves the quality of the side information and the rate-distortion performance compared to the conventional method.
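How an already-decoded top field might inform the bottom-field side information can be sketched as follows; the vertical-interpolation rule here is a hypothetical stand-in, since the abstract does not specify the paper's actual bottom-field SI method:

```python
import numpy as np

def bottom_field_si_from_top(top_field):
    """Hypothetical bottom-field side information: estimate each bottom
    line as the average of the two neighbouring top-field lines of the
    same frame (intra-frame vertical interpolation)."""
    h, w = top_field.shape
    bottom = np.empty((h, w), dtype=float)
    for i in range(h):
        lo = top_field[i].astype(float)
        hi = top_field[min(i + 1, h - 1)].astype(float)  # clamp at the last line
        bottom[i] = (lo + hi) / 2.0
    return bottom

top = np.array([[0, 0], [2, 2], [4, 4]])
bottom = bottom_field_si_from_top(top)
```

The appeal of field coding here is that the bottom-field predictor uses data from the same instant in time, avoiding the motion-induced errors of purely temporal interpolation.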

A Comparative Study on the Features and Applications of AI Tools -Focus on PIKA Labs and RUNWAY

  • Biying Guo;Xinyi Shan;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.86-91 / 2024
  • In the field of artistic creation, the iterative development of AI video generation software has pushed the boundaries of multimedia content creation and provided powerful creative tools for non-professionals. This paper examines two leading AI video generation tools, PIKA Labs and RUNWAY, discussing their functions, performance differences, and application scopes in the video generation domain. Through detailed operational examples, a comparative analysis of their functionalities is presented, along with the advantages and limitations of each in generating video content. The comparison shows that both PIKA Labs and RUNWAY perform well in terms of stability and creativity. The purpose of this study is therefore to elucidate the operating mechanisms of these two tools and to demonstrate the advantages of each. At the same time, this study provides a reference for professionals and creators in the video production field, helping them select the most suitable tool for a given scenario and thereby advancing the application and development of AI video generation software in multimedia content creation.

From the Viewpoint of Technological Innovation, Generation Classification of the Video Game Industry (기술혁신 관점에서 비디오 게임 산업의 세대구분)

  • Jeon, Jeong-Hwan;Son, Sang-Il;Kim, Dong-Nam;Cho, Hyung-Rae
    • The Journal of the Korea Contents Association / v.17 no.6 / pp.203-224 / 2017
  • With the development of the IT industry and the growth of the cultural industry, the game industry has become increasingly important. In this regard, this study classifies the generations of video games based on technological characteristics, from the perspective of technological innovation. SEGA, Nintendo, Microsoft, SONY, and ATARI were chosen as research subjects, and the survey covered the period from the ATARI era to 2017. The results are expected to help in developing technology strategies for the future video game industry.

A Study for Depth-map Generation using Vanishing Point (소실점을 이용한 Depth-map 생성에 관한 연구)

  • Kim, Jong-Chan;Ban, Kyeong-Jin;Kim, Chee-Yong
    • Journal of Korea Multimedia Society / v.14 no.2 / pp.329-338 / 2011
  • Recent augmented reality applications demand more realistic multimedia data mixing various media. High technology for multimedia data, which combines existing media with audio and video, dominates the media industry. In particular, there is a growing need for augmented reality, 3-dimensional content, and real-time interaction systems, which serve as communication and visualization tools on the Internet. Existing services do not generate the depth values needed to recover 3-dimensional scene structure, which gives solidity to existing content. Research on effective depth-map generation from 2-dimensional video is therefore required. Complementing the shortcomings of existing depth-map generation methods for 2-dimensional video, this paper proposes an enhanced depth-map generation method that defines the depth direction with respect to the vanishing point location in a video, which no existing algorithm has defined.
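One common way to turn a vanishing point into a depth map, offered here only as an illustrative sketch (the abstract does not give the paper's actual assignment rule), is to treat the vanishing point as the farthest scene point and let depth grow with each pixel's distance from it:

```python
import numpy as np

def depth_from_vanishing_point(h, w, vp):
    """Hypothetical depth assignment: 0 at the vanishing point (farthest)
    rising to 1 at the pixel most distant from it (nearest)."""
    vy, vx = vp
    ys, xs = np.mgrid[0:h, 0:w]          # per-pixel row/column coordinates
    dist = np.hypot(ys - vy, xs - vx)    # Euclidean distance to the VP
    return dist / dist.max()

depth = depth_from_vanishing_point(5, 5, (2, 2))
```

A refinement in the spirit of the abstract would be to bias this gradient along the perspective lines converging at the vanishing point, so the depth direction follows the scene geometry rather than being radially symmetric.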

The Design and Implementation of an Internet Broadcasting Video Solution Applying FlashVideo (FlashVideo를 적용한 인터넷 방송 동영상 솔루션의 설계 및 구현)

  • Kwon, O-Byung;Kim, Kyeong-Su
    • Journal of Digital Convergence / v.10 no.6 / pp.241-246 / 2012
  • In this paper, a next-generation Internet broadcasting video solution based on FlashVideo is designed and implemented. The solution compresses HD video in real time in the field and provides both live Internet broadcasting and VOD services through an online system, supporting easy-to-operate live broadcasting, VOD, and UCC services. First, it is the nation's first real-time encoder system, compressing camera video in real time into MPEG-4 and WMV using the H.264 codec and streaming it over the Internet, a software product suited to web and smart environments with the latest codec technology. Second, it is a two-way Internet broadcasting system in which video can be played in an MP4 player and the chat function can be customized. Third, its CMS (Contents Management System) manages video and course contents and streams them in real time to Android phones and iPhones.

UCC Cutout Animation Generation using Video Inputs (비디오 입력을 이용한 UCC 컷아웃 애니메이션 생성 기법)

  • Lee, Yun-Jin;Yang, Seung-Jae;Kim, Jun-Ho
    • The Journal of the Korea Contents Association / v.11 no.6 / pp.67-75 / 2011
  • We propose a novel non-photorealistic rendering technique that generates a cutout animation from a video for UCC. Our method consists of four parts: first, an interactive system for building an articulated character; second, motion extraction from the input video; third, motion transformation to reflect the characteristics of cutout animation; and fourth, rendering of the extracted or transformed components in cutout-animation style. We developed a unified system that lets a user easily create a cutout animation from an input video, and showed that the system generates cutout animations efficiently.