Title/Summary/Keyword: 2D/3D Video Conversion


Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee; Kang, Hyun-Soo; Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.15 no.2 / pp.31-39 / 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a promising solution to the shortage of 3D content during the transition to a mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is introduced. A low-complexity refinement algorithm for the local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.
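The global-depth idea above can be sketched in a few lines: a fixed depth pattern is assigned to every frame, so the global depth is temporally consistent by construction. The top-far/bottom-near vertical gradient and the 0-255 value range used here are illustrative assumptions, not the authors' exact pattern model.

```python
import numpy as np

def pattern_depth(height, width, far=0, near=255):
    """Global depth from a fixed vertical-gradient pattern:
    top rows far, bottom rows near (a commonly hypothesized model)."""
    column = np.linspace(far, near, height)        # one depth value per row
    return np.tile(column[:, None], (1, width)).astype(np.uint8)

# Same pattern for every frame -> no temporal flicker in the global depth.
depth = pattern_depth(4, 3)
```

Because the pattern does not depend on frame content, it costs almost nothing per frame; the paper's local refinement step would then adjust depth only inside object regions.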

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.129-135 / 2011
  • In this paper, we propose an algorithm for the 3D conversion of 2D video that uses optical flow and machine learning-based segmentation. For segmentation that enables successful 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method and optical flow is introduced to focus on regions with motion. The depth map is then calculated from the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
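A minimal sketch of the motion-to-depth step: regions with larger apparent motion are assumed closer to the camera, so their depth value is larger. The exhaustive block search below is a crude stand-in for a real optical-flow estimator, and the linear motion-to-depth scaling is an assumption, not the paper's energy-function formulation.

```python
import numpy as np

def block_flow(prev, curr, block=4, search=2):
    """Per-block motion magnitude via exhaustive block matching
    (a simple stand-in for optical flow)."""
    h, w = prev.shape
    mags = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(int)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy + block, xx:xx + block].astype(int)
                        sad = np.abs(ref - cand).sum()   # sum of absolute differences
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mags[by, bx] = np.hypot(*best_mv)
    return mags

def depth_from_motion(mags, scale=64.0):
    """Assumed cue: larger motion -> nearer -> larger depth value."""
    return np.clip(mags * scale, 0, 255).astype(np.uint8)

prev = np.arange(64).reshape(8, 8)   # synthetic frame; every 4x4 window is unique
curr = np.roll(prev, 2, axis=1)      # content shifts right by 2 pixels
depth = depth_from_motion(block_flow(prev, curr))
```

In practice a dense flow estimator would replace the block search, and the per-region depth would come from the segmented regions rather than a fixed block grid.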

A Trend Study on 2D to 3D Video Conversion Technology using Analysis of Patent Data (특허 분석을 통한 2D to 3D 영상 데이터 변환 기술 동향 연구)

  • Kang, Michael M.; Lee, Wookey; Lee, Rich. C.
    • Journal of Information Technology and Architecture / v.11 no.4 / pp.495-504 / 2014
  • This paper presents a strategy for intellectual property acquisition and a direction for core technology development based on an analysis of 2D-to-3D video conversion patent data. The analysis of patent trends shows that 2D-to-3D conversion is a highly promising technology field. A strategic patent map built from the patent trend study can help keep a company ahead of the competition in the 2D-to-3D image data conversion market.

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo; Kim, Hye-Sun; Ban, Yun-Ji; Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
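The shot-segmentation step described above can be sketched as follows: a per-frame intensity histogram is compared with that of the previous frame, and a cut is declared when the histogram difference exceeds a threshold. The histogram size and threshold below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Return indices where a new shot starts, based on the
    normalized L1 distance between consecutive frame histograms."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                       # normalize per frame
        if prev_hist is not None:
            if np.abs(hist - prev_hist).sum() / 2 > threshold:
                cuts.append(i)                         # histogram jump -> cut
        prev_hist = hist
    return cuts

# Three dark frames followed by three bright frames -> one cut at index 3.
dark = np.full((4, 4), 20)
bright = np.full((4, 4), 200)
cuts = detect_cuts([dark] * 3 + [bright] * 3)
```

Assigning the hypothesized depth-gradient model per detected cut, rather than per frame, is what keeps the initial depth stable within each shot.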

3D Video Processing for 3DTV

  • Sohn, Kwang-Hoon
    • Proceedings of the Korean Information Display Society Conference / 2007.08b / pp.1231-1234 / 2007
  • This paper presents an overview of 3D video processing technologies for 3DTV, such as 3D content generation, 3D video codecs, and video processing techniques for 3D displays. Experimental results for 3D content generation are shown for 3D mixed reality and 2D/3D conversion.


3D Conversion of 2D Video Encoded by H.264

  • Hong, Ho-Ki; Ko, Min-Soo; Seo, Young-Ho; Kim, Dong-Wook; Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology / v.7 no.6 / pp.990-1000 / 2012
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded by H.264, instead of using two cameras as is conventional. Highly accurate motion vectors are available in H.264 bitstreams because of the variety of available block sizes. The 2D/3D conversion algorithm proposed in this paper creates left and right images using the extracted motion information. The image type of a given frame is first determined from the extracted motion information, and each image type is handled by a different conversion algorithm. Cut detection is also performed to prevent two entirely different scenes from overlapping in the left and right images. Experimental results show the improved performance of the proposed algorithm.

3D conversion of 2D video using depth layer partition (Depth layer partition을 이용한 2D 동영상의 3D 변환 기법)

  • Kim, Su-Dong; Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.16 no.1 / pp.44-53 / 2011
  • In this paper, we propose a 3D conversion algorithm for 2D video using a depth layer partition method. In the proposed algorithm, we first form frame groups using a cut detection algorithm. Dividing the sequence into frame groups reduces the possibility of error propagation during motion estimation. Depth map generation is the core technique in a 2D/3D conversion algorithm, so we use two depth map generation algorithms: one based on segmentation and motion information, the other on an edge directional histogram. After applying the depth layer partition algorithm, which separates objects (foreground) and the background in the original image, the two extracted depth maps are merged. Experiments verify that the proposed algorithm generates reliable depth maps and good conversion results.
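A minimal sketch of the merging step, assuming the foreground mask comes from segmentation and that the motion-based depth map serves the foreground while the edge-histogram depth map serves the background. The abstract does not specify the exact combination rule, so this hard per-pixel selection is an illustrative choice.

```python
import numpy as np

def merge_depth_layers(fg_mask, fg_depth, bg_depth):
    """Depth layer partition merge: foreground pixels take one
    depth map, background pixels take the other."""
    return np.where(fg_mask, fg_depth, bg_depth).astype(np.uint8)

# Toy example: checkerboard foreground mask, constant depth maps.
mask = np.array([[True, False], [False, True]])
merged = merge_depth_layers(mask, np.full((2, 2), 200), np.full((2, 2), 50))
```

A real implementation would likely blend the maps near object boundaries to avoid hard depth discontinuities.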

Stereo Audio Matched with 3D Video (3D영상에 정합되는 스테레오 오디오)

  • Park, Sung-Wook; Chung, Tae-Yun
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.2 / pp.153-158 / 2011
  • This paper presents subjective experimental results on how audio should change when a video clip is watched in 3D rather than 2D. We divided auditory perceptual information into two categories: distance and azimuth, to which a sound source contributes most, and spaciousness, to which the scene or environment contributes most. In the experiment on distance and azimuth, i.e., sound localization, we found that the perceived distance and azimuth of sound sources were magnified with 3D video compared with 2D video. This leads us to conclude that 3D sound for localization should be designed with greater distance and azimuth than 2D sound. We also found that 3D sound is preferred not only with 3D video clips but also with 2D video clips. In the experiment on spaciousness, we found that people prefer sound with more reverberation when watching 3D video clips than when watching 2D video clips, which suggests that 3D video provides more spatial information than 2D video. These subjective results can help audio engineers familiar with 2D audio create 3D audio, and serve as a basis for future research on 2D-to-3D audio conversion systems. Furthermore, when designing a 3D broadcasting system with limited bandwidth and 2D TV compatibility, we propose transmitting stereoscopic video, audio with enhanced localization, and metadata that allows TV sets to generate reverberation for spaciousness.
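The metadata-driven reverberation idea at the end can be sketched with a single feedback comb filter, the basic building block of classic Schroeder-style reverberators; a broadcast metadata field could carry the delay and gain so the TV set synthesizes spaciousness locally. The parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def comb_reverb(signal, delay=4, gain=0.5):
    """Feedback comb filter: y[n] = x[n] + gain * y[n - delay].
    Larger gain/delay -> more perceived spaciousness."""
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        out[n] = signal[n] + (gain * out[n - delay] if n >= delay else 0.0)
    return out

# An impulse produces a decaying train of echoes spaced `delay` samples apart.
impulse = np.zeros(12)
impulse[0] = 1.0
out = comb_reverb(impulse)
```

A full reverberator would combine several comb filters with allpass stages, but even this single stage shows how two metadata parameters control the reverberant character.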

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee; Choi, Hae-Chul; Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.12 no.12 / pp.1-13 / 2012
  • 3D video is regarded as the next-generation content in numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a mature 3D video era. In 2D-to-3D conversion methods, after the depth image of each scene in the 2D video is estimated, stereoscopic video is synthesized using DIBR (Depth Image Based Rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates the multiple depth cues contained in a 2D video to generate stereoscopic video. For proper depth fusion, each cue is checked for reliability in the current scene. Based on the results of the reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion combines the reliable depth cues to generate the final depth information. Simulation results show that each depth cue is utilized reasonably according to the scene type and that the final depth is generated from cues that effectively represent the current scene.
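The fusion idea can be sketched as a reliability-gated weighted average: cues that fail the reliability test get zero weight, and which cues survive implicitly determines the scene type. The reliability threshold and weighting rule below are illustrative assumptions, not the paper's exact classifier.

```python
import numpy as np

def fuse_depth_cues(cues, reliabilities, min_reliability=0.5):
    """Weighted fusion of depth-cue maps; unreliable cues are dropped.

    cues          : list of HxW depth maps (float)
    reliabilities : per-cue reliability score in [0, 1]
    """
    weights = [r if r >= min_reliability else 0.0 for r in reliabilities]
    if sum(weights) == 0:
        raise ValueError("no reliable depth cue in this scene")
    return sum(w * c for w, c in zip(weights, cues)) / sum(weights)

# Two cues: a reliable one (0.9) and an unreliable one (0.3, dropped).
cue_a = np.full((2, 2), 100.0)
cue_b = np.full((2, 2), 200.0)
fused = fuse_depth_cues([cue_a, cue_b], [0.9, 0.3])
```

With both cues reliable, the result would be a weighted blend; here only the reliable cue contributes, mirroring the scene-adaptive behavior the abstract describes.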

Convert 2D Video Frames into 3D Video Frames (2차원 동영상의 3차원 동영상 변화)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.117-123 / 2009
  • In this paper, an algorithm that converts 2D video frames into the 3D video frames of a parallel-looking stereo camera is proposed. The proposed algorithm finds disparity information between two consecutive video frames and generates 3D video frames from the obtained disparity maps. The disparity information is obtained from a modified iterative convergence algorithm. A method of generating 3D video frames from the disparity information is also proposed. The proposed algorithm uses a coherence method to overcome the limitations of video pattern-based algorithms.
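The view-synthesis step shared by several of the papers above can be sketched as simple disparity-based pixel shifting: each pixel moves horizontally by half its disparity in opposite directions to form the left and right views. Hole filling and occlusion handling are omitted, and the half-disparity split is an illustrative convention, not any one paper's method.

```python
import numpy as np

def render_stereo(image, disparity):
    """Naive DIBR: shift each pixel by +/- disparity/2 to build
    left/right views. Unfilled pixels (holes) stay 0."""
    h, w = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(disparity[y, x]) // 2
            if 0 <= x + d < w:
                left[y, x + d] = image[y, x]
            if 0 <= x - d < w:
                right[y, x - d] = image[y, x]
    return left, right

# Uniform disparity of 2 shifts the row right/left by one pixel per view.
img = np.array([[0, 0, 9, 0, 0]], dtype=np.uint8)
left, right = render_stereo(img, np.full((1, 5), 2))
```

Real DIBR renderers process pixels in depth order and inpaint the disoccluded holes; this sketch only shows the geometric shift itself.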