• Title/Summary/Keyword: Video Synthesis

Implementing 3DoF+ 360 Video Compression System for Immersive Media (실감형 미디어를 위한 3DoF+ 360 비디오 압축 시스템 구현)

  • Jeong, Jong-Beom;Lee, Soonbin;Jang, Dongmin;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.5 / pp.743-754 / 2019
  • Systems for three degrees of freedom plus (3DoF+) and 6DoF require multi-view, high-resolution 360 video transmission to provide user-viewport-adaptive 360 video streaming. In this paper, we implement a 3DoF+ 360 video compression system that removes the redundancy between multi-view videos and merges the residuals into one video, so that high-quality 360 video can be provided efficiently in response to the user's head movement. The paper explains the implementation of 3D-warping-based redundancy removal between 3DoF+ 360 videos, together with residual extraction and merging. The proposed system achieves a maximum BD-rate reduction of 20.14% compared to a conventional high-efficiency video coding (HEVC) based system.
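The redundancy-removal step the abstract describes can be illustrated with a minimal sketch: warp a reference view into the target view and keep only the pixels the warp could not predict as the residual to be merged and coded. A simple horizontal-disparity shift stands in for full 3D warping, and all names (`extract_residual`, etc.) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def extract_residual(ref, target, disparity):
    """Warp `ref` toward `target` by per-pixel integer disparity;
    return the warped prediction, the residual, and the hole mask."""
    h, w = ref.shape
    warped = np.zeros_like(ref)
    covered = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = xs + disparity
    valid = (xt >= 0) & (xt < w)
    warped[ys[valid], xt[valid]] = ref[valid]
    covered[ys[valid], xt[valid]] = True
    residual_mask = ~covered                       # pixels the warp missed
    residual = np.where(residual_mask, target, 0)  # only these need coding
    return warped, residual, residual_mask
```

Only the residual (plus the reference view) would then be transmitted, instead of every full-resolution view.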

Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.15 no.2 / pp.31-39 / 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a good solution to the shortage of 3D content during the transition to a mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.
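A minimal sketch of the two stages the abstract names, under assumed specifics: a fixed global depth pattern (here a linear top-far/bottom-near gradient, reusable across frames for temporal consistency) plus a cheap local refinement driven by luminance detail. The pattern choice and function names are illustrative, not the paper's.

```python
import numpy as np

def global_depth_pattern(h, w):
    """Bottom-near / top-far linear depth pattern in [0, 255]."""
    col = np.linspace(0, 255, h)
    return np.tile(col[:, None], (1, w))

def refine_local_depth(depth, luma, strength=0.1):
    """Nudge depth toward luminance detail to enhance object pop-out."""
    centered = luma.astype(np.float64) - luma.mean()
    return np.clip(depth + strength * centered, 0, 255)
```

Because the global pattern is the same for every frame, the dominant depth component cannot flicker over time; only the small local refinement varies.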

Depth Video Post-processing for Immersive Teleconference (원격 영상회의 시스템을 위한 깊이 영상 후처리 기술)

  • Lee, Sang-Beom;Yang, Seung-Jun;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.6A / pp.497-502 / 2012
  • In this paper, we present an immersive videoconferencing system that enables gaze correction between users in the internet protocol TV (IPTV) environment. The proposed system synthesizes gaze-corrected images using depth estimation and virtual view synthesis, two of the most important techniques in 3D video systems. These conventional processes, however, cause several problems, in particular temporal inconsistency of the depth video, which leads to flickering artifacts that discomfort viewers. To reduce the temporal inconsistency, we exploit a joint bilateral filter extended to the temporal domain and additionally apply an outlier reduction operation in the temporal domain. Experimental results verify that the proposed system generates natural gaze-corrected images and realizes immersive videoconferencing.
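The temporal extension of the joint bilateral filter can be sketched as follows: each depth pixel is averaged over a temporal window, with weights driven by color similarity in the guiding texture frames and by temporal distance, so that depth stays stable wherever the color stays stable. Window radius and sigma values are illustrative assumptions.

```python
import numpy as np

def temporal_joint_bilateral(depths, colors, t, radius=2,
                             sigma_c=10.0, sigma_t=1.0):
    """Filter depth frame `t` using the color frames as guidance."""
    T = len(depths)
    num = np.zeros_like(depths[t], dtype=np.float64)
    den = np.zeros_like(depths[t], dtype=np.float64)
    for dt in range(-radius, radius + 1):
        s = t + dt
        if not (0 <= s < T):
            continue
        # Range weight: color similarity to the current frame,
        # damped by temporal distance.
        diff = colors[s].astype(np.float64) - colors[t].astype(np.float64)
        w = (np.exp(-(diff ** 2) / (2 * sigma_c ** 2))
             * np.exp(-(dt ** 2) / (2 * sigma_t ** 2)))
        num += w * depths[s]
        den += w
    return num / den
```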

A Study on the Spacing of the Camera Axis of the Video Shooting Multi-planar live-action (실사 다면영상 촬영에서의 카메라 축 간격에 대한 연구)

  • Baek, Seoung-ho;Choi, Won-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.529-530 / 2014
  • Multi-planar video has been used in various fields such as advertising, exhibitions, and public relations, but the content shown there is in most cases composited imagery and graphics. Shooting live-action footage that can serve directly as source video is very difficult, and problems introduced at the shooting stage are hard to compensate for in post-production. This study therefore identifies the problems that arise in live-action multi-planar shooting and subsequent post-production, and attempts to verify improvements experimentally. Specifically, experiments are conducted to determine the camera spacing that minimizes the distortion generated at the edges of the image, which varies with the distance between cameras. Building on this, the study aims to contribute to the creation of live-action multi-planar content that exploits the strengths of the medium more effectively.
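As hypothetical background arithmetic (not from the paper): for a parallel multi-camera rig, on-screen disparity grows linearly with the inter-camera baseline, so a camera spacing that keeps edge distortion acceptable can be bounded by capping the disparity of the nearest object. Symbols: f = focal length in pixels, B = baseline in meters, Z = scene depth in meters.

```python
def disparity_px(f, B, Z):
    """Pixel disparity for a parallel rig: d = f * B / Z."""
    return f * B / Z

def max_baseline(f, Z_near, d_max):
    """Largest baseline keeping the nearest object's disparity under d_max."""
    return d_max * Z_near / f
```

For example, with f = 1000 px and the nearest object at 2 m, a 6.5 cm baseline yields about 32.5 px of disparity.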

A Depth-based Disocclusion Filling Method for Virtual Viewpoint Image Synthesis (가상 시점 영상 합성을 위한 깊이 기반 가려짐 영역 메움법)

  • Ahn, Il-Koo;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.48-60 / 2011
  • Nowadays, the 3D community is actively researching 3D imaging and free-viewpoint video (FVV). Free-viewpoint rendering in multi-view video, which virtually moves through the scene to create different viewpoints, has become a popular topic in 3D research with many potential applications. However, it faces restrictions in cost-effectiveness and occupies a large bandwidth in video transmission. An alternative that avoids this problem is to generate virtual views from a single texture image and a corresponding depth image. A critical issue in generating virtual views is that regions occluded by foreground (FG) objects in the original view may become visible in the synthesized views. Filling these disocclusions (holes) in a visually plausible manner determines the quality of the synthesis results. In this paper, a new approach for handling disocclusions in synthesized views using a depth-based inpainting algorithm is presented. Patch-based non-parametric texture synthesis, which shows excellent performance, has two critical elements: determining where to fill first and determining which patch to copy. In this work, a noise-robust filling priority based on the structure tensor is proposed, along with a patch matching algorithm that excludes the foreground region using the depth map and takes the epipolar line into account. The superiority of the proposed method over existing methods is demonstrated experimentally.
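The noise-robust filling priority can be sketched under assumed specifics: score each hole-boundary patch by the fraction of known pixels (confidence) times the coherence of the local structure tensor, so that strong, consistent edges are propagated into the hole first. The exact formulation here is illustrative, not the authors'.

```python
import numpy as np

def structure_coherence(Ix, Iy):
    """Coherence (l1-l2)/(l1+l2) of the patch structure tensor
    J = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]]."""
    jxx, jyy, jxy = np.sum(Ix * Ix), np.sum(Iy * Iy), np.sum(Ix * Iy)
    tr = jxx + jyy
    det = jxx * jyy - jxy * jxy
    disc = np.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return (l1 - l2) / (l1 + l2 + 1e-12)

def filling_priority(Ix, Iy, known_mask):
    """Confidence (known-pixel fraction) times structure coherence."""
    confidence = known_mask.mean()
    return confidence * structure_coherence(Ix, Iy)
```

A coherence near 1 means one dominant gradient direction (an edge entering the hole); near 0 means isotropic texture or noise, which is deprioritized.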

Hardware Implementation of a Fast Inter Prediction Engine for MPEG-4 AVC (MPEG-4 AVC를 위한 고속 인터 예측기의 하드웨어 구현)

  • Lim Young hun;Lee Dae joon;Jeong Yong jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.102-111 / 2005
  • In this paper, we propose an advanced hardware architecture for the fast inter prediction engine of the video coding standard MPEG-4 AVC. We describe the algorithm and derive the hardware architecture, emphasizing real-time operation of quarter-pel-based motion estimation. The fast inter prediction engine is composed of block segmentation, motion estimation, motion compensation, and a fast quarter-pel calculator. The proposed architecture has been verified on an ARM-interfaced emulation board using Excalibur & Virtex2 FPGAs, and also by synthesis on Samsung 0.18 um CMOS technology. The synthesis result shows that the proposed hardware can operate at 62.5 MHz, processing about 88 QCIF video frames per second. The hardware is used as a core module in a complete MPEG-4 AVC video encoder chip for real-time multimedia applications.
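For context, the sub-pel interpolation underlying quarter-pel motion estimation in MPEG-4 AVC (H.264) uses the standard 6-tap filter (1, -5, 20, 20, -5, 1)/32 for half-pel samples and rounding averages for quarter-pel samples. A 1-D software sketch follows; the engine described above implements this separably in 2-D hardware.

```python
import numpy as np

def half_pel(row):
    """6-tap half-pel interpolation of a 1-D sample row (len >= 6)."""
    taps = np.array([1, -5, 20, 20, -5, 1])
    out = np.convolve(row, taps[::-1], mode="valid") / 32.0
    return np.clip(np.round(out), 0, 255)

def quarter_pel(a, b):
    """Quarter-pel sample: rounded average of two neighbouring samples."""
    return (a + b + 1) // 2
```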

Depth Boundary Sharpening for Improved 3D View Synthesis (3차원 합성영상의 화질 개선을 위한 깊이 경계 선명화)

  • Song, Yunseok;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37A no.9 / pp.786-791 / 2012
  • This paper presents a depth boundary sharpening method for improved view synthesis in 3D video. In depth coding, distortion occurs around object boundaries, degrading the quality of synthesized images. To counter this problem, the proposed method estimates an edge map for each frame so that only the boundary regions are filtered. In particular, a window-based filter is employed to choose the most reliable pixel as the replacement, considering three factors: frequency, similarity, and closeness. The proposed method was implemented as post-processing after the deblocking filter in JMVC 8.3. Compared to conventional methods, it achieved a 0.49 dB PSNR increase and a 16.58% bitrate decrease on average; the improvement was also confirmed subjectively.
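A sketch of the window-based replacement rule, with an assumed scoring that mirrors the three named factors: candidate depth values in the window accumulate votes weighted by similarity (to the window median, as a robust reference) and by spatial closeness to the center, so frequency arises through the accumulation; the best-scoring value replaces the distorted boundary depth. Sigma values and the use of the median are illustrative, not the paper's exact design.

```python
import numpy as np

def sharpen_boundary_pixel(window, sigma_v=10.0, sigma_s=2.0):
    """Pick the replacement depth for the center pixel of `window`."""
    h, w = window.shape
    cy, cx = h // 2, w // 2
    med = np.median(window)
    scores = {}
    for y in range(h):
        for x in range(w):
            v = float(window[y, x])
            similarity = np.exp(-((v - med) ** 2) / (2 * sigma_v ** 2))
            closeness = np.exp(-((y - cy) ** 2 + (x - cx) ** 2)
                               / (2 * sigma_s ** 2))
            # Frequency emerges from accumulating votes per value.
            scores[v] = scores.get(v, 0.0) + similarity * closeness
    return max(scores, key=scores.get)
```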

A Instructional Contents Creator using Wavelet for Lossless Image Compression (웨이브렛 기반 무손실 압축 방법을 사용한 동영상 강의 콘텐츠 제작기 구현)

  • Lee, Sang-Yeob;Park, Seong-Won
    • Journal of the Korea Society of Computer and Information / v.16 no.2 / pp.71-81 / 2011
  • To create video lectures easily, an algorithm is needed that combines camera recordings, whiteboard images, video attachments, and document data in real time. In this study, we implemented a video lecture content creation system that uses wavelet-based lossless compression to composite multimedia objects in real time and reproduce the images. Running on a commercially available PC, it is useful when lecturers want to produce instructional video content, and it can be operated easily and quickly, making it an efficient system for e-Learning and m-Learning. In addition, the proposed multimedia synthesis and real-time lossless compression technology can be applied to various fields, including other kinds of multimedia authoring, remote conferencing, and e-commerce, and is therefore highly significant.
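The lossless-wavelet idea can be illustrated with an integer-to-integer Haar lifting step (the S-transform), which maps pixel pairs to a reversible average/difference pair so screen-content frames can be decorrelated and entropy coded without loss. This is a generic sketch, not the system's actual transform; a full codec would add 2-D decomposition and an entropy coder.

```python
import numpy as np

def haar_forward(x):
    """Integer Haar lifting: (approx, detail), perfectly invertible."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b
    s = b + (d >> 1)          # s = floor((a + b) / 2)
    return s, d

def haar_inverse(s, d):
    """Exact inverse of haar_forward."""
    b = s - (d >> 1)
    a = d + b
    out = np.empty(s.size * 2, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out
```

Because the rounding in the forward step is reproduced exactly in the inverse, reconstruction is bit-exact, which is what "lossless" requires.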

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model

  • Kim, Soowoong;Kang, Jungwon
    • ETRI Journal / v.44 no.1 / pp.51-61 / 2022
  • In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis for the immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization assigns a precomputed UV map to each voxel through a UV map lookup table, enabling efficient, high-quality texture mapping without a complex unwrapping process. Leveraging this convenient parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients over an orthogonal specular map basis. The view-dependent specular maps for an arbitrary view are then estimated by combining the specular weights of the source views according to the locations of the arbitrary and source viewpoints. The experimental results demonstrate that the proposed method effectively synthesizes textures for arbitrary views, enabling the visualization of view-dependent effects such as specularity and mirror reflection.
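The view-dependent blending step can be sketched under assumed specifics: the per-voxel specular weights stored for each source view are combined for an arbitrary viewpoint using angular proximity between the target and source view directions, then applied to the specular basis on top of the view-independent diffuse map. The proximity power and names are illustrative, not the ETRI implementation.

```python
import numpy as np

def blend_specular_weights(source_dirs, source_weights, target_dir, power=8):
    """Blend per-source specular coefficients for a target view direction.

    source_dirs:    (n_views, 3) unit-ish view directions
    source_weights: (n_views, k) specular-basis coefficients per view
    """
    dirs = source_dirs / np.linalg.norm(source_dirs, axis=1, keepdims=True)
    t = target_dir / np.linalg.norm(target_dir)
    sim = np.clip(dirs @ t, 0.0, 1.0) ** power   # angular proximity
    w = sim / (sim.sum() + 1e-12)                # normalized blend weights
    return w @ source_weights                    # (k,) blended coefficients
```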

Boundary Artifacts Reduction in View Synthesis of 3D Video System (3차원 비디오의 합성영상 경계 잡음 제거)

  • Lee, Dohoon;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering / v.21 no.6 / pp.878-888 / 2016
  • This paper proposes an efficient method to remove the boundary artifacts in rendered views caused by damaged depth maps in the 3D video system. First, the characteristics of boundary artifacts arising from compression noise in depth maps are carefully studied. An artifact suppression method is then proposed based on the iterative projection onto convex sets (POCS) algorithm, with convex sets defined in both the pixel and frequency domains. The proposed method is applied separately to the texture and depth maps during view rendering. Simulation results show that the boundary artifacts are greatly reduced, improving the quality of the synthesized views.
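A 1-D sketch of the POCS iteration, with an illustrative low-pass frequency constraint standing in for the paper's spectral set: alternately project the estimate onto the pixel-domain set (keep the known, artifact-free samples) and the frequency-domain set (clip the high-frequency bins where boundary-noise energy lives) until the two constraints agree.

```python
import numpy as np

def pocs_restore(signal, known_mask, keep_bins, iters=50):
    """Restore samples where known_mask is False under a low-pass prior."""
    x = np.where(known_mask, signal, 0.0)
    n = x.size
    for _ in range(iters):
        X = np.fft.fft(x)
        # Frequency-domain projection: keep DC plus (keep_bins - 1)
        # harmonics on each side, zero everything else.
        X[keep_bins: n - keep_bins + 1] = 0.0
        x = np.fft.ifft(X).real
        # Pixel-domain projection: known samples must match the input.
        x = np.where(known_mask, signal, x)
    return x
```

Both sets are convex, so the alternating projections converge to a signal in their intersection whenever one exists.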