• Title/Summary/Keyword: 3D Video


3D Video Processing for 3DTV

  • Sohn, Kwang-Hoon
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2007.08b
    • /
    • pp.1231-1234
    • /
    • 2007
  • This paper presents an overview of 3D video processing technologies for 3DTV, such as 3D content generation, 3D video codecs, and video processing techniques for 3D displays. Some experimental results for 3D content generation are shown for 3D mixed reality and 2D/3D conversion.


Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.713-725
    • /
    • 2014
  • Multi-view and 3D video technologies for next-generation video services are widely studied. These technologies give users a realistic experience by supporting various views. Because the acquisition and transmission of a large number of views are costly, the main challenges for multi-view and 3D video include view synthesis, video coding, and depth coding. Recently, JCT-3V (joint collaborative team on 3D video coding extension development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for the development of a fast 3D video encoder as well as new 3D video coding algorithms.

Convert 2D Video Frames into 3D Video Frames (2차원 동영상의 3차원 동영상 변화)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.6
    • /
    • pp.117-123
    • /
    • 2009
  • In this paper, an algorithm that converts 2D video frames into the 3D video frames of a parallel-looking stereo camera is proposed. The proposed algorithm finds the disparity information between two consecutive video frames and generates 3D video frames from the obtained disparity maps. The disparity information is obtained from a modified iterative convergence algorithm. A method of generating 3D video frames from the disparity information is also proposed. The proposed algorithm uses a coherence method that overcomes the limitations of video-pattern-based algorithms.
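The pipeline the abstract describes, estimate disparity between consecutive frames, then synthesize a stereo view from the disparity map, can be sketched roughly as below. This is a minimal illustration using simple block matching, not the paper's modified iterative convergence algorithm; all function names and parameters are my own assumptions.

```python
import numpy as np

def block_disparity(prev, curr, block=8, search=4):
    """Per-block horizontal shift (SAD matching) between consecutive frames."""
    h, w = prev.shape
    dmap = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(-search, search + 1):
                if x + d < 0 or x + d + block > w:
                    continue  # candidate block would fall outside the frame
                sad = np.abs(ref - prev[y:y + block, x + d:x + d + block]).sum()
                if sad < best:
                    best, best_d = sad, d
            dmap[by, bx] = best_d
    return dmap

def synthesize_right_view(frame, dmap, block=8):
    """Displace each block horizontally by its disparity to form a second view."""
    right = np.zeros_like(frame)
    for by in range(dmap.shape[0]):
        for bx in range(dmap.shape[1]):
            y, x = by * block, bx * block
            xs = int(np.clip(x + dmap[by, bx], 0, frame.shape[1] - block))
            right[y:y + block, xs:xs + block] = frame[y:y + block, x:x + block]
    return right
```

A real converter would add hole filling and the coherence constraint the paper mentions; the sketch only shows the disparity-then-synthesis structure.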

Stereo Audio Matched with 3D Video (3D영상에 정합되는 스테레오 오디오)

  • Park, Sung-Wook;Chung, Tae-Yun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.153-158
    • /
    • 2011
  • This paper presents subjective experimental results on how audio should be changed when a video clip is watched in 3D rather than 2D. This paper divides auditory perceptual information into two categories: distance and azimuth, to which a sound source contributes most, and spaciousness, to which the scene or environment contributes most. According to the experiment on distance and azimuth, i.e. sound localization, we found that the distance and azimuth of sound sources were magnified when heard with 3D rather than 2D video. This led us to conclude that 3D sound for localization should be designed to have more distance and azimuth than 2D sound. We also found that 3D sound is preferred not only with 3D video clips but also with 2D video clips. According to the experiment on spaciousness, we found that people prefer sound with more reverberation when they watch 3D video clips than 2D video clips. This can be understood to mean that 3D video provides more spatial information than 2D video. These subjective experimental results can help audio engineers familiar with 2D audio create 3D audio, and serve as fundamental information for future research on 2D-to-3D audio conversion systems. Furthermore, when designing a 3D broadcasting system with limited bandwidth that remains compatible with 2D TV, we propose transmitting stereoscopic video, audio with enhanced localization, and metadata for TV sets to generate reverberation for spaciousness.
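The spaciousness finding, that viewers prefer more reverberation with 3D video, could in principle be realized receiver-side from metadata, as the abstract proposes. A minimal sketch of adding synthetic reverberation (the decaying-noise impulse response and all parameters are my own illustrative assumptions, not the paper's method):

```python
import numpy as np

def add_reverb(dry, sr=48000, rt60=0.8, seed=0):
    """Convolve `dry` with a decaying-noise impulse response of length RT60."""
    rng = np.random.default_rng(seed)
    n = int(sr * rt60)
    t = np.arange(n) / sr
    # amplitude decays to -60 dB (factor 1/1000, ln(1000) ~ 6.91) at t = rt60
    ir = rng.standard_normal(n) * np.exp(-6.91 * t / rt60)
    wet = np.convolve(dry, ir)[:len(dry)]
    return wet / (np.abs(wet).max() + 1e-12)  # normalize to avoid clipping
```

A broadcast chain following the paper's proposal would carry only the parameters (e.g. an RT60-like value) as metadata and let the TV set synthesize the reverberation.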

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.17 no.4
    • /
    • pp.255-260
    • /
    • 2019
  • This study concerns the efficient compositing of 360° video and 3D graphics. First, video footage filmed by a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then moved into a 3D program to create 3D objects, and methods for natural compositing were researched. As a result, rendering factors and a rendering method for naturally compositing 360° video and 3D graphics were derived. First, the rendering factors were the 3D objects' location and material quality, lighting, and shadow. Second, as for the rendering method, the necessity of an actual-video-based rendering method was found. Providing a method for the natural compositing of 360° video and 3D graphics through this study's process and results is expected to be helpful for the research and production of 360° video and VR video content.

Method for Applying Wavefront Parallel Processing on Cubemap Video (큐브맵 영상에 Wavefront 병렬 처리를 적용하는 방법)

  • Hong, Seok Jong;Park, Gwang Hoon
    • Journal of Broadcast Engineering
    • /
    • v.22 no.3
    • /
    • pp.401-404
    • /
    • 2017
  • 360 VR video is stored in a projection format such as an equirectangular or cubic (cubemap) layout. Although these formats have different characteristics, they have in common that their resolution is higher than that of normal 2D video. Therefore, coding/decoding 360 VR video takes much longer than 2D video, so parallel processing techniques are essential when coding 360 VR video. HEVC, the state-of-the-art 2D video codec, standardizes Wavefront Parallel Processing (WPP) for parallelization. This technique is optimized for 2D videos and does not show optimal performance when used on 3D videos, so a WPP method suitable for 3D video is required. In this paper, we propose a WPP coding/decoding method that improves WPP performance on cubemap-format 3D video. The experiment was conducted on the HEVC reference software HM 12.0. The experimental results show no significant loss of PSNR compared with the existing WPP, while coding complexity is reduced by a further 15% to 20%. The proposed method is expected to be included in future 3D VR video codecs.
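For readers unfamiliar with WPP: in HEVC, each CTU row may start decoding once the row above is two CTUs ahead, because CABAC context is synchronized from the second CTU of the previous row. A small sketch of the resulting wavefront schedule (unit decode time per CTU; the helper names are my own, and this illustrates baseline WPP, not the paper's cubemap-specific modification):

```python
def wpp_schedule(rows, cols):
    """Earliest start slot for each CTU under the 2-CTU wavefront lag."""
    return [[c + 2 * r for c in range(cols)] for r in range(rows)]

def parallel_speedup(rows, cols):
    """Sequential slots divided by wall-clock slots under WPP."""
    wpp_time = (rows - 1) * 2 + cols  # last CTU finishes at this slot count
    return rows * cols / wpp_time
```

The 2-row lag means wide, short pictures parallelize poorly; one motivation for per-face treatment of a cubemap is that each of the six faces can host its own wavefront.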

Haptic Rendering Technology for Touchable Video (만질 수 있는 비디오를 위한 햅틱 렌더링 기술)

  • Lee, Hwan-Mun;Kim, Ki-Kwon;Sung, Mee-Young
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.5
    • /
    • pp.691-701
    • /
    • 2010
  • We propose a haptic rendering technology for touchable video. Our touchable video technique allows users to feel the sense of touch while probing directly on 2D objects in video scenes, or while manipulating 3D objects brought out from video scenes, using haptic devices. In our technique, a server sends video and haptic data as well as the information of 3D model objects. The clients receive the video and haptic data from the server and render the 3D models. A video scene is divided into small grid cells, and each cell has tactile information corresponding to a specific combination of four attributes: stiffness, damping, static friction, and dynamic friction. Users feel the sense of touch when they directly touch the cells of a scene using a haptic device. Users can also examine objects by touching or manipulating the corresponding 3D objects after bringing them out from the screen. The touchable video technique proposed in this paper lets users fully experience haptic-audio-visual effects directly on the video scenes of movies or home-shopping video content.
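The per-cell tactile grid the abstract describes maps naturally onto a small data structure: each grid cell stores the four attributes, and a touch point in pixel coordinates is resolved to its cell. A minimal sketch (class and field layout are my assumptions; the four attribute names come from the abstract):

```python
from dataclasses import dataclass

@dataclass
class TactileCell:
    stiffness: float
    damping: float
    static_friction: float
    dynamic_friction: float

class TouchableScene:
    """Video frame partitioned into cells, each carrying tactile attributes."""
    def __init__(self, width, height, cell_size, default_cell):
        self.cell_size = cell_size
        self.grid = [[default_cell for _ in range(width // cell_size)]
                     for _ in range(height // cell_size)]

    def cell_at(self, x, y):
        """Tactile properties under a touch point given in pixel coordinates."""
        return self.grid[y // self.cell_size][x // self.cell_size]
```

A haptic device loop would query `cell_at` each frame and feed the returned parameters into its force model.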

Temporal Anti-aliasing of a Stereoscopic 3D Video

  • Kim, Wook-Joong;Kim, Seong-Dae;Hur, Nam-Ho;Kim, Jin-Woong
    • ETRI Journal
    • /
    • v.31 no.1
    • /
    • pp.1-9
    • /
    • 2009
  • Frequency domain analysis is a fundamental procedure for understanding the characteristics of visual data. Several studies have been conducted on 2D videos, but analyses of stereoscopic 3D videos have rarely been carried out. In this paper, we derive the Fourier transform of a simplified 3D video signal and analyze how a 3D video is influenced by disparity and motion in terms of temporal aliasing. It is already known that object motion affects the temporal frequency characteristics of a time-varying image sequence. In our analysis, we show that a 3D video is influenced not only by motion but also by disparity. Based on this conclusion, we present a temporal anti-aliasing filter for 3D video. Since the human process of depth perception mainly determines the quality of a reproduced 3D image, 2D image processing techniques are not directly applicable to 3D images. The analysis presented in this paper should be useful for reducing undesirable visual artifacts in 3D video as well as for assisting the development of relevant technologies.
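The known motion result the abstract builds on can be stated in one line for the simplified 1D case: a pattern with spatial frequency f_s (cycles/pixel) moving at v (pixels/frame) produces a temporal frequency f_t = f_s·v (cycles/frame), which aliases once it exceeds the Nyquist limit of 0.5 cycles/frame. A tiny worked illustration (helper names are mine; the paper's contribution, the additional disparity term for stereo, is not reproduced here):

```python
def temporal_frequency(f_s, v):
    """Cycles/frame produced by spatial frequency f_s moving at v px/frame."""
    return f_s * v

def max_alias_free_velocity(f_s):
    """Fastest motion (px/frame) before temporal aliasing sets in."""
    return 0.5 / f_s
```

For stereoscopic video, the paper's point is that the left and right views see velocities offset by the disparity change, so an anti-aliasing filter must account for both terms.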


Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.2
    • /
    • pp.31-39
    • /
    • 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views using depth cues inherent in the 2D video. This technology is a good solution to the problem of 3D content shortage during the transition to a fully mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency in the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for the local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.
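The two-stage structure the abstract outlines, a fixed global depth pattern for temporal consistency plus a cheap local refinement in object regions, can be sketched as follows. The specific gradient pattern, the luminance cue, and the blending weight are all my own assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def global_depth_pattern(h, w):
    """Fixed vertical gradient: 0 (far, top) .. 255 (near, bottom).

    Being frame-independent, the pattern is temporally consistent by construction.
    """
    col = np.linspace(0.0, 255.0, h)
    return np.tile(col[:, None], (1, w))

def refine_depth(global_depth, luma, weight=0.3):
    """Blend a local cue (here simply luminance) into the global pattern."""
    return (1.0 - weight) * global_depth + weight * luma.astype(float)
```

The refined depth map would then drive depth-image-based rendering to produce the stereoscopic views.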

A Novel Selective Frame Discard Method for 3D Video over IP Networks

  • Chung, Young-Uk
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.6
    • /
    • pp.1209-1221
    • /
    • 2010
  • Three-dimensional (3D) video is expected to be an important application for broadcast and IP streaming services. One of the main limitations on the transmission of 3D video over IP networks is network bandwidth mismatch due to the large size of 3D data, which causes fatal decoding errors and mosaic-like damage. This paper presents a novel selective frame discard method to address the problem. The main idea of the proposed method is the symmetrical discard of the two-dimensional (2D) video frame and the depth-map frame. The frames to be discarded are selected after additionally considering the playback deadline, the network bandwidth, and the inter-frame dependency relationships within a group of pictures (GOP). This enables efficient utilization of the network bandwidth and a high-quality 3D IPTV service. The simulation results demonstrate that the proposed method enhances the media quality of 3D video streaming even under bad network conditions.
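The symmetric-discard idea can be sketched as: when bandwidth falls short, drop the same frame indices from both the 2D texture stream and the depth-map stream, preferring frames no other frame depends on so decoding stays error-free. This is a simplified illustration (the drop-ordering heuristic and function names are my assumptions; the paper additionally weighs the playback deadline and measured bandwidth):

```python
def select_discards(gop_types, deficit):
    """Pick `deficit` frame indices to drop from one GOP.

    Non-reference B frames are dropped first; P frames only as a last
    resort, from the end of the GOP where they break the fewest dependants.
    """
    order = [i for i, t in enumerate(gop_types) if t == 'B']
    order += [i for i, t in enumerate(gop_types) if t == 'P'][::-1]
    return sorted(order[:deficit])

def discard_symmetric(texture_gop, depth_gop, gop_types, deficit):
    """Drop the SAME indices from texture and depth so views stay aligned."""
    drop = set(select_discards(gop_types, deficit))
    keep = [i for i in range(len(gop_types)) if i not in drop]
    return [texture_gop[i] for i in keep], [depth_gop[i] for i in keep]
```

Dropping asymmetrically would leave texture frames without matching depth (or vice versa), which is exactly the mismatch the paper's method avoids.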