• Title/Summary/Keyword: Video Synthesis

HDR Video Synthesis Using Superpixel-Based Motion Estimation (슈퍼픽셀 기반의 움직임 추정을 이용한 HDR 동영상 합성)

  • Vo, Tu Van;Lee, Chul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.90-91 / 2018
  • We propose a novel high dynamic range (HDR) video synthesis algorithm using alternately exposed low dynamic range (LDR) videos. We first develop a superpixel-based, illumination-invariant correspondence estimation algorithm. Then, we propose a reliability weight to further improve the quality of the synthesized HDR frame. Experimental results show that the proposed algorithm provides higher-quality HDR frames than conventional algorithms.
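
The abstract does not detail the merging step, so the following is a minimal sketch of the general idea of reliability-weighted HDR merging from aligned, alternately exposed LDR frames, assuming a linear camera response; the paper's superpixel-based correspondence estimation and its specific reliability weight are not reproduced, and all names and parameters here are illustrative.

```python
import numpy as np

def merge_ldr_to_hdr(ldr_frames, exposure_times, reliability=None):
    """Merge motion-aligned, alternately exposed LDR frames into one HDR frame.

    ldr_frames     : list of HxWx3 float arrays in [0, 1] (already aligned).
    exposure_times : list of exposure times in seconds.
    reliability    : optional list of HxW weights in [0, 1] that down-weight
                     pixels with unreliable correspondences (illustrative
                     stand-in for the paper's reliability weight).
    """
    num = np.zeros_like(ldr_frames[0])
    den = np.zeros_like(ldr_frames[0])
    for i, (img, t) in enumerate(zip(ldr_frames, exposure_times)):
        # Hat-shaped weight: trust mid-tone pixels, distrust under/over-exposure.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        if reliability is not None:
            w = w * reliability[i][..., None]        # broadcast HxW over channels
        radiance = img / t                           # simple linear camera model
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-6)
```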

Hardware Implementation of Transform and Quantization for H.264/JVT (하드웨어 기반의 H.264/JVT 변환 및 양자화 구현)

  • 임영훈;정용진
    • Proceedings of the IEEK Conference / 2003.11a / pp.83-86 / 2003
  • In this paper, we propose a new hardware architecture for the integer transform and quantizer of the new video coding standard H.264/JVT. We describe the algorithm and derive the hardware architecture, emphasizing the importance of area for low cost and low power consumption. The proposed architecture has been verified on a PCI-interfaced emulation board using an APEX-II Altera FPGA and also by ASIC synthesis using a Samsung 0.18 µm CMOS cell library. The ASIC synthesis result shows that the proposed hardware can operate at 100 MHz, processing more than 1,300 QCIF video frames per second. The hardware is going to be used as a core module when implementing a complete H.264 video encoder/decoder ASIC for real-time multimedia applications.
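
For reference, the 4x4 forward core transform that such hardware implements can be modeled in a few lines of software. This is a sketch of the transform defined in H.264/AVC (quantization is omitted here), not a description of the paper's hardware datapath.

```python
import numpy as np

# H.264/AVC 4x4 forward integer ("core") transform matrix.
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int32)

def forward_core_transform(block4x4):
    """Y = Cf * X * Cf^T on a 4x4 residual block, using integer arithmetic only."""
    x = np.asarray(block4x4, dtype=np.int32)
    return CF @ x @ CF.T

# Example: transform a residual block of all ones (only the DC coefficient survives).
print(forward_core_transform(np.ones((4, 4))))
```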

Analysis of Depth Map Resolution for Coding Performance in 3D Video System (깊이영상 해상도 조절에 따른 3 차원 비디오 부호화 성능 분석)

  • Lee, Do Hoon;Yang, Yun mo;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.452-454 / 2015
  • This paper compares the coding performance for different depth map resolutions in a 3D video system. In a multiview-plus-depth system, the depth map is used for rendering synthesized views and therefore affects their quality. We present experimental results for varying depth map resolutions in the 3D video system and show how the performance changes when a dilation filter is applied.
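
As a rough illustration of the kind of experiment described, the sketch below downscales a depth map and applies a grey-level dilation before view synthesis. The scale factor, the structuring-element size, and the assumption that larger depth values encode nearer objects are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import grey_dilation, zoom

def prepare_depth(depth, scale=0.5, dilate_size=3):
    """Downscale a depth map and apply grey-level dilation before view synthesis.

    Grey dilation (a max filter) expands the larger depth values, i.e. the
    foreground if nearer objects are encoded with larger values, which tends to
    reduce cracks around object boundaries in the synthesized view.
    """
    small = zoom(depth, scale, order=0)                  # nearest-neighbor resize
    small = grey_dilation(small, size=(dilate_size, dilate_size))
    # Resize back to (approximately) the original resolution for rendering.
    return zoom(small, 1.0 / scale, order=0)
```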

Interaction art using Video Synthesis Technology

  • Kim, Sung-Soo;Eom, Hyun-Young;Lim, Chan
    • International Journal of Advanced Culture Technology / v.7 no.2 / pp.195-200 / 2019
  • Media art, which is a combination of media technology and art, is making a lot of progress in combination with AI, IoT and VR. This paper aims to meet people's needs by creating a video that simulates the dance moves of an object that users admire by using media art that features interactive interactions between users and works. The project proposed a universal image synthesis system that minimizes equipment constraints by utilizing a deep running-based Skeleton estimation system and one of the deep-running neural network structures, rather than a Kinect-based Skeleton image. The results of the experiment showed that the images implemented through the deep learning system were successful in generating the same results as the user did when they actually danced through inference and synthesis of motion that they did not actually behave.
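
A motion-transfer pipeline of this kind can be summarized as below. The callables `pose_estimator` and `renderer` are hypothetical stand-ins for the skeleton estimation network and the synthesis network; nothing here reproduces the authors' actual models.

```python
def synthesize_dance_video(user_frames, target_appearance, pose_estimator, renderer):
    """Illustrative motion-transfer pipeline (not the authors' exact system).

    pose_estimator : callable mapping an image to an (N_joints, 2) keypoint array,
                     e.g. any off-the-shelf deep-learning skeleton estimator.
    renderer       : callable mapping (target_appearance, keypoints) to an image,
                     standing in for the generative synthesis network.
    """
    output = []
    for frame in user_frames:
        keypoints = pose_estimator(frame)        # 2D skeleton of the user's pose
        output.append(renderer(target_appearance, keypoints))
    return output
```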

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun;Hur, Namho;Lee, Seong-Whan
    • ETRI Journal / v.36 no.2 / pp.242-252 / 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated using the coded texture video and the coded depth video. Such synthesized views can be distorted by quantization noise and by inaccurate 3D warping positions; thus, it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, we propose an edge noise suppression filtering process that preserves the edges of the depth video and a method, based on a total variation approach to maximum a posteriori probability estimation, for reducing the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared with a synthesized view without the post-processing methods.
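
As an illustration of the MAP-with-TV-prior idea mentioned for the texture video, the sketch below is a generic gradient-descent total variation denoiser; the paper's exact formulation and its depth-edge-preserving filter are not reproduced, and the parameter values are arbitrary.

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.2, iters=50, eps=1e-6):
    """Gradient descent on 0.5*||u - img||^2 + lam*TV(u) for a grayscale image.

    A generic smoothed total-variation denoiser: the data term keeps u close to
    the decoded picture, the TV prior suppresses quantization noise while
    keeping strong edges.
    """
    u = img.astype(np.float64).copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u              # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)          # data term + TV gradient
    return u
```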

Implementation Method for DASH-based Free-viewpoint Video Streaming System (DASH 기반 자유시점 비디오 스트리밍 시스템 구현)

  • Seo, Minjae;Paik, Jong-ho
    • Journal of Internet Computing and Services / v.20 no.1 / pp.47-55 / 2019
  • A free-viewpoint video (FVV) service provides multiple viewpoints of the content and synthesizes intermediate views that were not captured at certain angles, so that users can watch from whichever viewpoint they choose. View synthesis is an essential technique for providing an FVV service, because videos of the FVV content for every possible view angle cannot all be physically stored on the content server. For this reason, fast view synthesis can improve the quality of the video service and increase user satisfaction. In one previous study of FVV services, a method was proposed to transmit FVV based on DASH (Dynamic Adaptive Streaming over HTTP), which has the major advantage of being widely used for video delivery. However, that method was only a conceptual proposal, so it is difficult to implement a system from it. In this paper, we propose an implementation method to provide a real-time FVV service smoothly. We describe a system structure and operation method on the server and client sides in detail, designed so that views can be synthesized quickly. We also propose additionally generating an FVV service map that controls the overall FVV service; real-time information about the whole service is managed through this service map, and the service can be controlled so as to reduce delays caused by network conditions.
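
Since the paper's service map format is not given in the abstract, the snippet below is only a hypothetical, simplified example of what such a map could contain; every field name and URL is an illustrative assumption.

```python
# A hypothetical, simplified FVV "service map": the field names and layout are
# illustrative assumptions, not the structure defined in the paper.
service_map = {
    "content_id": "fvv-demo-01",
    "captured_viewpoints": [0, 1, 2, 3, 4],          # physically captured cameras
    "virtual_viewpoints": [0.5, 1.5, 2.5, 3.5],      # synthesized in between
    "mpd_url_template": "http://server.example/fvv/view_{view}/manifest.mpd",
    "segment_duration_sec": 2,
}

def mpd_for_view(view):
    """Return the DASH MPD URL for a requested (captured or virtual) viewpoint."""
    return service_map["mpd_url_template"].format(view=view)

print(mpd_for_view(1.5))
```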

Depth Estimation and Intermediate View Synthesis for Three-dimensional Video Generation (3차원 영상 생성을 위한 깊이맵 추정 및 중간시점 영상합성 방법)

  • Lee, Sang-Beom;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1070-1075 / 2009
  • In this paper, we propose new depth estimation and intermediate view synthesis algorithms for three-dimensional video generation. In order to improve the temporal consistency of the depth map sequence, we add a temporal weighting function to the conventional matching function when computing the matching cost for depth estimation. In addition, we propose a boundary noise removal method for the view synthesis operation: after finding boundary noise areas using the depth map, we replace them with the corresponding texture information from the other reference image. Experimental results showed that the proposed algorithm improved the temporal consistency of the depth sequence and reduced flickering artifacts in the virtual view. It also improved the visual quality of the synthesized virtual views by removing the boundary noise.
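
The temporal weighting idea can be sketched as an extra term in the matching cost, as below; the exact weighting function used in the paper is not reproduced, and `lam` and the winner-takes-all selection are illustrative simplifications.

```python
import numpy as np

def temporally_weighted_depth(cur_cost, prev_depth, depth_candidates, lam=0.1):
    """Combine a data cost with a temporal consistency term and pick a depth label.

    cur_cost         : HxWxD data cost for the current frame (e.g. SAD per label).
    prev_depth       : HxW depth labels estimated for the previous frame.
    depth_candidates : length-D array of candidate depth labels.
    lam              : strength of the temporal term (illustrative value).

    Labels that deviate from the previous frame's depth are penalized, which
    encourages temporal consistency of the depth map sequence.
    """
    temporal = np.abs(depth_candidates[None, None, :] - prev_depth[..., None])
    total = cur_cost + lam * temporal
    return depth_candidates[np.argmin(total, axis=2)]   # winner-takes-all depth
```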

Hardware Implementation of Integer Transform and Quantization for H.264 (하드웨어 기반의 H.264 정수 변환 및 양자화 구현)

  • 임영훈;정용진
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.12C / pp.1182-1191 / 2003
  • In this paper, we propose a new hardware architecture for the integer transform, quantizer, inverse quantizer, and inverse integer transform of the new video coding standard H.264/JVT. We describe the algorithm and derive the hardware architecture, emphasizing the importance of area for low cost and low power consumption. The proposed architecture has been verified on a PCI-interfaced emulation board using an APEX-II Altera FPGA and also by ASIC synthesis using a Samsung 0.18 µm CMOS cell library. The ASIC synthesis result shows that the proposed hardware can operate at 100 MHz, processing more than 1,300 QCIF video frames per second. The hardware is going to be used as a core module when implementing a complete H.264 video encoder/decoder ASIC for real-time multimedia applications.
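
To complement the transform sketch given with the conference version above, the following is a software model of the H.264/AVC-style forward quantizer in its common MF-table formulation, |Z| = (|W|·MF + f) >> qbits; it illustrates the operation the hardware implements, not the paper's datapath.

```python
import numpy as np

# Per-position multiplier MF, indexed by QP % 6 (rows) and coefficient class:
#   class 0: positions (0,0), (0,2), (2,0), (2,2)
#   class 1: positions (1,1), (1,3), (3,1), (3,3)
#   class 2: all remaining positions
MF_TABLE = np.array([[13107, 5243, 8066],
                     [11916, 4660, 7490],
                     [10082, 4194, 6554],
                     [ 9362, 3647, 5825],
                     [ 8192, 3355, 5243],
                     [ 7282, 2893, 4559]], dtype=np.int64)

POS_CLASS = np.array([[0, 2, 0, 2],
                      [2, 1, 2, 1],
                      [0, 2, 0, 2],
                      [2, 1, 2, 1]])

def quantize_4x4(coeffs, qp, intra=True):
    """Forward quantization of 4x4 core-transform coefficients:
    |Z| = (|W| * MF + f) >> qbits, with the sign restored afterwards."""
    w = np.asarray(coeffs, dtype=np.int64)
    qbits = 15 + qp // 6
    mf = MF_TABLE[qp % 6][POS_CLASS]
    f = (1 << qbits) // (3 if intra else 6)          # rounding offset
    z = (np.abs(w) * mf + f) >> qbits
    return np.sign(w) * z
```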

Fast Stereoscopic 3D Broadcasting System using x264 and GPU (x264와 GPU를 이용한 고속 양안식 3차원 방송 시스템)

  • Choi, Jung-Ah;Shin, In-Yong;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.15 no.4 / pp.540-546 / 2010
  • Since stereoscopic 3-dimensional (3D) video, which provides users with a realistic multimedia service, requires twice as much data as 2-dimensional (2D) video, it is difficult to construct a fast system. In this paper, we propose a fast stereoscopic 3D broadcasting system based on depth information. Before transmission, we encode the input 2D-plus-depth video using x264, an open-source fast H.264/AVC encoder, to reduce the size of the data. At the receiver, we decode the transmitted bitstream in real time using the compute unified device architecture (CUDA) video decoder API on an NVIDIA graphics processing unit (GPU). Then, we apply a fast view synthesis method that generates the virtual view on the GPU. The proposed system can display the output video on both 2DTV and 3DTV. From the experiments, we verified that the proposed system can provide stereoscopic 3D content at up to 24 frames per second.
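
The per-pixel view synthesis that the system runs on the GPU can be illustrated with a naive CPU-side depth-image-based rendering sketch like the one below. The disparity scaling, the assumption that larger depth values mean nearer pixels, and the lack of hole filling are simplifications; this is not the authors' CUDA implementation.

```python
import numpy as np

def synthesize_view(texture, depth, max_disparity=16):
    """Naive depth-image-based rendering from a 2D-plus-depth pair.

    Each pixel is shifted horizontally by a disparity proportional to its depth
    value; within a row, pixels are processed from far to near so that nearer
    pixels win when they land on the same target column. Disocclusion holes are
    left unfilled and reported in the `filled` mask.
    """
    h, w = depth.shape
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(np.int32)
    out = np.zeros_like(texture)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        order = np.argsort(disparity[y])             # far first, near last wins
        for x in order:
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = texture[y, x]
                filled[y, nx] = True
    return out, filled
```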

Improved Video Synthesis Method by Depth Map Rearrangement (깊이 맵의 재배열을 통한 개선된 영상 합성 방법)

  • Kim, Tae-Woo;Park, Jin-Hyun;Won, Seok-Ho;Shin, Jitae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.352-355 / 2011
  • In this paper, we propose a method for synthesizing improved views through a rearrangement of the depth map. The proposed method divides the entire depth map into several groups and assigns a different weight to each group so that nearer objects are given a larger share of the depth values. We compared the proposed method with the conventional scheme through depth estimation and intermediate view synthesis. As a result, we obtained visually more natural images while maintaining the PSNR over the entire video sequence.
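
A possible reading of the depth rearrangement step is sketched below: the depth range is split into groups, and nearer groups are given a larger share of the output range. The group count and weights are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def rearrange_depth(depth, group_weights=(1.0, 2.0, 4.0)):
    """Split the 8-bit depth range into equal-width groups and re-map each group
    so that groups containing nearer objects (assumed to be the larger depth
    values) occupy a larger share of the output range."""
    depth = depth.astype(np.float64)
    n = len(group_weights)
    edges = np.linspace(0, 256, n + 1)                       # input group bounds
    widths = 256.0 * np.asarray(group_weights) / np.sum(group_weights)
    out_edges = np.concatenate(([0.0], np.cumsum(widths)))   # output group bounds
    out = np.zeros_like(depth)
    for i in range(n):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        scale = widths[i] / (edges[i + 1] - edges[i])
        out[mask] = out_edges[i] + (depth[mask] - edges[i]) * scale
    return np.clip(out, 0, 255).astype(np.uint8)
```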
