• Title/Summary/Keyword: stereoscopic conversion


3D Conversion of 2D H.264 Video (2D H.264 동영상의 3D 입체 변환)

  • Hong, Ho-Ki;Baek, Yun-Ki;Lee, Seung-Hyun;Kim, Dong-Wook;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.12C / pp.1208-1215 / 2006
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded with H.264, instead of using the conventional stereo-camera process. The motion information of each frame can be obtained from the motion vectors provided in most videos encoded under the MPEG standards; for H.264 streams in particular, the motion vectors are accurate because a variety of block sizes is available. The 2D/3D video conversion algorithm proposed in this paper creates left and right images corresponding to the original image by using a cut-detection method, delay factors, motion types, and image types. Within a given cut, the motion type and direction are usually consistent because the frames in the same cut are highly correlated. Experimental results show the improved performance of the proposed algorithm.
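The delayed-frame selection this abstract describes can be sketched roughly as follows. The function names, the fixed one-frame delay, and the sign convention for motion direction are illustrative assumptions, not the authors' exact algorithm:

```python
# Sketch of delayed-frame stereo synthesis driven by H.264 motion vectors.
# Within a cut, the dominant horizontal motion decides which of the current
# and delayed frames serves as the left-eye and right-eye view.

def mean_horizontal_motion(motion_vectors):
    """Average horizontal component of a frame's motion vectors."""
    xs = [mv[0] for mv in motion_vectors]
    return sum(xs) / len(xs) if xs else 0.0

def stereo_pair(frames, mvs, t, delay=1):
    """Pick (left, right) images from the current and a delayed frame.

    For rightward motion the delayed frame acts as the left-eye view;
    for leftward motion the roles are swapped.
    """
    cur, prev = frames[t], frames[max(t - delay, 0)]
    if mean_horizontal_motion(mvs[t]) >= 0:   # rightward motion
        return prev, cur
    return cur, prev
```

In the paper's terms, the delay factor and the motion type would be estimated per cut, since frames inside one cut are highly correlated.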

A Method for Estimating a Distance Using the Stereo Zoom Lens Module (양안 줌렌즈를 이용한 물체의 거리추정)

  • Hwang, Eun-Seop;Kim, Nam;Kwon, Ki-Chul
    • Korean Journal of Optics and Photonics / v.17 no.6 / pp.537-543 / 2006
  • A method of estimating distance with a single zoom camera is limited to a range along the optical axis within the field of view. In this paper, we therefore propose a method of estimating distance information over a wide range in a stereoscopic display using a stereo zoom lens module. The binocular stereo zoom lens system is composed of horizontally moving camera modules. The left and right images are acquired on a polarized stereo monitor to obtain the convergence and estimate the distance. The error between the optically traced distance and the estimated distance is under 10 mm over the left-right range (0 mm to 500 mm) at the center. This shows that a system combining zoom and convergence yields more precise distance information than convergence control alone. Also, distance estimation with horizontally moving cameras is more precise than with toe-in cameras, as shown by comparing the error distances of the two camera configurations.
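The abstract does not give the estimation formula, but for a horizontally moving (parallel-axis) camera pair the standard triangulation relation Z = f·B/d applies. This minimal sketch assumes that model; the parameter names and units are illustrative:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Standard parallel-axis stereo triangulation: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_mm  -- distance between the two camera centers
    disparity_px -- horizontal shift of a point between left/right images
    Returns depth in the same units as the baseline.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

For example, a 60 mm baseline, a 1000-pixel focal length, and a 120-pixel disparity place the point 500 mm from the rig, which matches the 0-500 mm working range the abstract reports.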

Implementation of Real-time Stereoscopic Image Conversion Algorithm Using Luminance and Vertical Position (휘도와 수직 위치 정보를 이용한 입체 변환 알고리즘 구현)

  • Yun, Jong-Ho;Choi, Myul-Rul
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.5 / pp.1225-1233 / 2008
  • In this paper, a 2D/3D conversion algorithm is proposed. A single frame of the 2D image is used for real-time processing. The proposed algorithm creates a 3D image with a depth map generated from the vertical position information of an object in a single frame. For real-time processing and reduced hardware complexity, the depth map is generated using image sampling, object segmentation with luminance standardization, and boundary scanning. The algorithm is suitable for both still and moving images, and because it uses vertical position information it provides a good 3D effect for images such as long-distance shots, landscapes, and panorama photographs. The proposed algorithm can apply a 3D effect to an image without restrictions on the direction, velocity, or scene changes of an object. It has been evaluated with a visual test and by comparison with the MTD (Modified Time Difference) method using the APD (Absolute Parallax Difference).
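A toy version of a depth map driven by vertical position and luminance, the two cues this paper names, might look like the following. The weights and the linear combination are illustrative assumptions, not the published method:

```python
def depth_map(image, w_pos=0.7, w_lum=0.3):
    """Toy depth map from two monocular cues.

    Rows lower in the frame (larger y) are assumed nearer, which is why
    the method suits landscapes and panoramas; brighter pixels are pulled
    slightly nearer. `image` is a 2-D list of luminance values in
    [0, 255]; returns depths in [0, 1] (0 = far, 1 = near).
    """
    h = len(image)
    depth = []
    for y, row in enumerate(image):
        pos = y / (h - 1) if h > 1 else 0.0      # vertical-position cue
        depth.append([w_pos * pos + w_lum * (v / 255.0) for v in row])
    return depth
```

The left and right views would then be synthesized by shifting each pixel horizontally in proportion to its depth value.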

3DTV System Adaptive to User's Environment (사용자 환경에 적응적인 3DTV 시스템)

  • Baek, Yun-Ki;Choi, Mi-Nam;Park, Se-Whan;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.10C / pp.982-989 / 2007
  • In this paper, we propose a 3DTV system that considers the user's viewpoint and display environment. The proposed system consists of three parts: a multi-view encoder/decoder, a face tracker, and a 2D/3D converter. The system encodes a multi-view sequence and decodes it in accordance with the user's viewpoint; it also gives stereopsis to the multi-view image through 2D/3D conversion, which converts the decoded two-dimensional (2D) image into a three-dimensional (3D) image. Experimental results show that we can correctly reconstruct a stereoscopic view that exactly corresponds to the user's viewpoint.

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility / v.16 no.1 / pp.29-42 / 2013
  • In this paper, we analyze the effects on depth perception, volume perception, and visual discomfort of changes in the quality of the depth image and the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth images generated from different depth positions of the major object and background, represented at three levels of detail. A subjective test was carried out with the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects according to the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted at three levels, and the detail of the depth map was likewise represented at three levels. Experimental results showed that the quality of the depth image affected depth perception, volume perception, and visual discomfort differently according to the absolute and relative depth position of the major object. A cardboard depth image severely damaged volume perception regardless of the depth position of the major object; in particular, depth perception deteriorated more severely with the cardboard depth image when the major object was located inside the screen than outside it. Furthermore, the subjects did not feel any difference in depth perception, volume perception, or visual comfort between the 3D images generated from the detailed depth map and those from the rough depth map. As a result, an excessively detailed depth map is not necessary for enhancing stereoscopic perception in 2D-to-3D conversion.


Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering / v.18 no.4 / pp.559-571 / 2013
  • This paper presents several problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the video with holes, which are then processed separately. This paper focuses on detecting and extracting the overlay text graphic region, the first step of the proposed scenario. To decide whether overlay text is included within a frame, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid of the color and motion information of the region. The experiments show the results of overlay text region detection and extraction on video sequences from several genres.
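The corner-density decision step can be sketched as follows. In practice the corner points would come from a Harris detector (e.g. OpenCV's `cornerHarris`); here they are plain (x, y) points, and the block size and threshold are illustrative assumptions:

```python
def corner_density_map(corners, width, height, block=16):
    """Count detected corners per block of the frame.

    Overlay text (subtitles, logos, tickers) tends to produce dense
    clusters of corner responses, so a per-block count is a cheap cue.
    """
    bw = (width + block - 1) // block
    bh = (height + block - 1) // block
    grid = [[0] * bw for _ in range(bh)]
    for x, y in corners:
        grid[y // block][x // block] += 1
    return grid

def text_candidate_blocks(grid, min_corners=4):
    """Blocks whose corner count reaches the threshold are text candidates."""
    return [(bx, by) for by, row in enumerate(grid)
            for bx, n in enumerate(row) if n >= min_corners]
```

The candidate blocks would then be refined with the color and motion cues the abstract mentions before the region is cut out and inpainted.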

Stereo Audio Matched with 3D Video (3D영상에 정합되는 스테레오 오디오)

  • Park, Sung-Wook;Chung, Tae-Yun
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.2 / pp.153-158 / 2011
  • This paper presents subjective experimental results on how audio should be changed when a video clip is watched in 3D rather than 2D. We divided auditory perceptual information into two categories: distance and azimuth, to which a sound source contributes most, and spaciousness, to which the scene or environment contributes most. In the experiment on distance and azimuth, i.e. sound localization, we found that the distance and azimuth of sound sources were magnified when heard with 3D rather than 2D video. This led us to conclude that, for localization, 3D sound should be designed with greater distance and azimuth than 2D sound. We also found that 3D sound was preferred not only with 3D video clips but also with 2D video clips. In the experiment on spaciousness, we found that people prefer sound with more reverberation when watching 3D video clips than 2D video clips, which can be understood in terms of 3D video providing more spatial information than 2D video. These subjective results can help audio engineers familiar with 2D audio create 3D audio, and serve as fundamental information for future research on 2D-to-3D audio conversion systems. Furthermore, when designing a 3D broadcasting system with limited bandwidth and backward compatibility with 2D TV, we propose transmitting stereoscopic video, audio with enhanced localization, and metadata that lets the TV set generate reverberation for spaciousness.
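As a rough illustration of the TV-side reverberation generation the proposal describes, a single feedback comb filter is the simplest building block (real reverberators combine several such filters with allpass stages); all parameter values here are illustrative assumptions:

```python
def add_reverb(samples, delay=4, decay=0.5, mix=0.3):
    """Add crude reverberation with one feedback comb filter.

    delay -- echo spacing in samples
    decay -- feedback gain (< 1 for stability)
    mix   -- dry/wet blend; a metadata field could carry this so the
             receiver adjusts spaciousness for 3D playback.
    """
    n = len(samples)
    wet = [0.0] * n
    out = [0.0] * n
    for i in range(n):
        fb = wet[i - delay] if i >= delay else 0.0
        wet[i] = samples[i] + decay * fb          # comb-filtered signal
        out[i] = (1 - mix) * samples[i] + mix * wet[i]
    return out
```

With `mix=0`, the output is the untouched 2D mix, so the same stream can serve a legacy 2D set while a 3D set raises the wet level.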