• Title/Summary/Keyword: 2D depth map


Depth estimation and View Synthesis using Haze Information (실안개를 이용한 단일 영상으로부터의 깊이정보 획득 및 뷰 생성 알고리듬)

  • Soh, Yong-Seok;Hyun, Dae-Young;Lee, Sang-Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.241-243 / 2010
  • Previous approaches to the 2D-to-3D conversion problem require heavy computation or a considerable amount of user input. In this paper, we propose a relatively simple method for estimating the depth map from a single image using a monocular depth cue: haze. Using the haze imaging model, we obtain distance information and estimate a reliable depth map from a single scenery image. Using the depth map, we also propose an algorithm that converts the single image into 3D stereoscopic images. We determine a disparity value for each pixel of the original 'left' image and generate a corresponding 'right' image. Results show that the algorithm produces well-refined depth maps despite the simplicity of the approach.
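
The pipeline described above (haze imaging model, relative depth, per-pixel disparity, synthesized right view) can be sketched roughly as follows. This is a minimal illustration, assuming a dark-channel-style minimum filter as the transmission estimator and illustrative values for the scattering coefficient, the airlight, and the maximum disparity; none of these are taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def depth_from_haze(img, patch=15, beta=1.0, airlight=1.0):
    """Relative depth from a single hazy image (values in [0, 1]) via the
    haze imaging model I = J*t + A*(1 - t) with t = exp(-beta * d), hence
    d = -ln(t) / beta.  The transmission t is approximated with a
    dark-channel style minimum filter; beta and the airlight A are
    illustrative constants, not values from the paper."""
    dark = minimum_filter(img.min(axis=2), size=patch)   # darkest value per patch
    t = np.clip(1.0 - dark / airlight, 0.05, 1.0)        # transmission estimate
    depth = -np.log(t) / beta                            # relative scene depth
    return depth / (depth.max() + 1e-6)                  # normalise to [0, 1]

def synthesize_right_view(left, depth, max_disparity=16):
    """Generate the 'right' image by shifting each 'left' pixel with a
    disparity proportional to (1 - depth): nearer pixels shift more."""
    h, w, _ = left.shape
    right = np.zeros_like(left)
    disparity = ((1.0 - depth) * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```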


The Study of Stereo Matching for 3D Image Implementation in Augmented Reality (증강현실에서 3D이미지 구현을 위한 스테레오 정합 연구)

  • Lee, Yonghwan;Kim, Youngseop;Park, Inho
    • Journal of the Semiconductor & Display Technology / v.15 no.4 / pp.103-106 / 2016
  • 3D technology is a key element of augmented reality, and a depth map is essential for creating a stereoscopic effect from a 2D image. Among the many ways to construct a depth map, stereo matching is the most widely used. This paper presents how to generate a depth map using stereo matching. The existing dynamic programming method is used for matching, and a high-boost filter is applied as a preprocessing step to make the matching more accurate. As a result, the accuracy of the generated depth map, measured against ground truth, improved markedly.
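
A rough sketch of the two stages named in the abstract, high-boost pre-filtering followed by per-scanline dynamic-programming matching, is given below. The matching cost, smoothness penalty, and parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_boost(gray, k=1.5, size=3):
    """High-boost sharpening used as the preprocessing step:
    output = input + k * (input - lowpass(input))."""
    g = gray.astype(np.float32)
    return g + k * (g - uniform_filter(g, size=size))

def dp_scanline_disparity(left_row, right_row, max_disp=16, smooth=2.0):
    """Dynamic-programming matching of one scanline pair: minimise the
    absolute intensity difference plus a penalty on disparity changes
    between neighbouring pixels, then backtrack the optimal path."""
    n = len(left_row)
    INF = 1e9
    cost = np.full((n, max_disp + 1), INF, dtype=np.float64)
    back = np.zeros((n, max_disp + 1), dtype=int)
    cost[0, 0] = abs(float(left_row[0]) - float(right_row[0]))
    for x in range(1, n):
        for d in range(max_disp + 1):
            if x - d < 0:
                continue                      # no valid match this far left
            match = abs(float(left_row[x]) - float(right_row[x - d]))
            prev = cost[x - 1] + smooth * np.abs(np.arange(max_disp + 1) - d)
            back[x, d] = int(np.argmin(prev))
            cost[x, d] = match + prev[back[x, d]]
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(n - 2, -1, -1):            # backtrack the optimal path
        disp[x] = back[x + 1, disp[x + 1]]
    return disp
```

In use, both grayscale views would be passed through high_boost first, and dp_scanline_disparity would then be run row by row to assemble the depth map.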

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility / v.16 no.1 / pp.29-42 / 2013
  • In this paper, we analyze the effects on depth perception, volume perception, and visual discomfort caused by changes in the quality of the depth image and in the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth images generated from different depth positions of the major object and the background, each represented at three levels of detail. A subjective test was carried out with the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects as the depth position of the major object and the quality of the depth map changed. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted in three levels, and the detail of the depth map was also represented in three levels. Experimental results showed that the quality of the depth image affected depth perception, volume perception, and visual discomfort differently depending on the absolute and relative depth position of the major object. The cardboard depth image severely damaged volume perception regardless of the depth position of the major object. In particular, depth perception deteriorated more severely with the cardboard depth image when the major object was located inside the screen than when it was outside the screen. Furthermore, the subjects did not perceive a difference in depth perception, volume perception, or visual comfort between the 3D images generated from the detailed depth map and those from the rough depth map. Consequently, the analysis indicates that an excessively detailed depth map is not necessary for enhancing stereoscopic perception in 2D-to-3D conversion.


Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association / v.18 no.11 / pp.447-454 / 2018
  • This paper proposes a depth image generation algorithm for stereo images using a deep learning model composed of CNNs (convolutional neural networks). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each view, and a depth learning unit, which learns the disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each view through the Xception module and the ASPP (atrous spatial pyramid pooling) module, which are composed of 2D CNN layers. Then, the feature maps of the two views are accumulated into a 3D volume according to the disparity, and the depth image is estimated after passing through the depth learning unit, which learns the depth estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
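
A compact sketch of the core construction, a disparity-indexed feature volume fed to a small 3D CNN with soft-argmin regression, is shown below in PyTorch. The concatenation-based volume, the layer sizes, and the regression head are common choices assumed for illustration; the paper's exact Xception/ASPP extractor is omitted.

```python
import torch
import torch.nn as nn

def build_feature_volume(left_feat, right_feat, max_disp=48):
    """Stack left/right feature maps into a 5-D volume
    [batch, 2*channels, max_disp, height, width]: at each candidate
    disparity d the right features are shifted by d and concatenated
    with the left features (a common construction, assumed here)."""
    b, c, h, w = left_feat.shape
    volume = left_feat.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :c, d] = left_feat
            volume[:, c:, d] = right_feat
        else:
            volume[:, :c, d, :, d:] = left_feat[..., d:]
            volume[:, c:, d, :, d:] = right_feat[..., :-d]
    return volume

class DepthRegression(nn.Module):
    """Minimal 3D CNN head that turns the feature volume into a disparity map."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, volume):
        cost = self.conv(volume).squeeze(1)              # [b, max_disp, h, w]
        prob = torch.softmax(-cost, dim=1)               # soft-argmin weights
        disp = torch.arange(cost.size(1), device=cost.device, dtype=cost.dtype)
        return (prob * disp.view(1, -1, 1, 1)).sum(dim=1)  # expected disparity
```

The soft-argmin regression keeps the disparity estimate differentiable, which is why it is a common stand-in for a hard argmin in this kind of model.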

Depth Map Coding Using Histogram-Based Segmentation and Depth Range Updating

  • Lin, Chunyu;Zhao, Yao;Xiao, Jimin;Tillo, Tammam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.3 / pp.1121-1139 / 2015
  • In the texture-plus-depth format, depth map compression is an important task. Unlike normal texture images, depth maps have little texture information but contain many homogeneous regions separated by sharp edges. This property is exploited to form an efficient depth map coding scheme in this paper. First, the histogram of the depth map is analyzed to find an appropriate threshold that segments the depth map into foreground and background regions, which also yields the edge between the two regions. Second, the two regions are encoded through rate-distortion optimization with a shape-adaptive wavelet transform, while the edges are losslessly encoded with JBIG2. Finally, a depth-updating algorithm based on the threshold and the depth range is applied to enhance the quality of the decoded depth maps. Experimental results demonstrate effective performance in terms of both depth map quality and synthesized view quality.
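
The first and last steps, histogram-based foreground/background segmentation and depth-range updating of the decoded map, might look roughly like the sketch below. The valley-between-peaks threshold search and the per-region rescaling are assumptions made for illustration; the shape-adaptive wavelet and JBIG2 coding stages are not shown.

```python
import numpy as np

def split_by_histogram(depth, bins=256):
    """Pick a threshold at the least-populated bin between the two dominant
    histogram peaks and split the depth map into background/foreground
    masks (assuming larger depth values mean nearer, i.e. foreground)."""
    hist, edges = np.histogram(depth, bins=bins)
    p1, p2 = sorted(np.argsort(hist)[-2:])          # two dominant peaks
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))   # valley between them
    threshold = edges[valley + 1]
    background = depth < threshold
    return threshold, background, ~background

def update_depth_range(decoded, region_mask, d_min, d_max):
    """Depth-range updating: rescale the decoded values of one region back
    into the original [d_min, d_max] range known for that region."""
    region = decoded[region_mask].astype(np.float32)
    if region.size == 0 or region.max() == region.min():
        return decoded
    scaled = (region - region.min()) / (region.max() - region.min())
    updated = decoded.copy()
    updated[region_mask] = d_min + scaled * (d_max - d_min)
    return updated
```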

Multi-scale 3D Panorama Content Augmented System using Depth-map

  • Kim, Cheeyong;Kim, Eung-Kon;Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.17 no.6 / pp.733-740 / 2014
  • With the development and spread of 3D displays, users can easily experience augmented reality with 3D features, so the demand for augmented reality content is growing rapidly in various fields. A traditional augmented reality environment was generally created with CG (computer graphics) modeling tools. However, this approach takes too much time and effort: to create an augmented environment similar to the real world, everything in the real world has to be measured, modeled, and placed in the augmented environment, and even then the result does not match the real world, making it hard for users to feel a sense of reality. In this study, a multi-scale 3D panorama content augmentation system using a depth map is proposed. By finding matching features between images, a depth map is derived and rendered as a panorama, adding 3D features to the augmented environment and producing a high-quality augmented content system with a sense of reality. This overcomes the limits of 2D panorama technologies and provides users with a sense of reality, immersion, and natural navigation.
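
A minimal sketch of the feature-matching step that the abstract relies on, turning ORB matches between two overlapping views into sparse relative depths, is given below. The camera parameters and the simple disparity-to-depth relation are placeholders; the paper's multi-scale panorama pipeline is considerably richer than this.

```python
import cv2
import numpy as np

def sparse_depth_from_pair(img_left, img_right, focal_px=700.0, baseline_m=0.1):
    """Match ORB features between two overlapping grayscale views and turn
    the horizontal disparity of each match into a relative depth value.
    focal_px and baseline_m are illustrative placeholders."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    points, depths = [], []
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, _ = kp2[m.trainIdx].pt
        disparity = x1 - x2
        if disparity > 1.0:                       # keep well-separated matches
            points.append((x1, y1))
            depths.append(focal_px * baseline_m / disparity)
    return np.array(points), np.array(depths)
```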

Foreground Extraction and Depth Map Creation Method based on Analyzing Focus/Defocus for 2D/3D Video Conversion (2D/3D 동영상 변환을 위한 초점/비초점 분석 기반의 전경 영역 추출과 깊이 정보 생성 기법)

  • Han, Hyun-Ho;Chung, Gye-Dong;Park, Young-Soo;Lee, Sang-Hun
    • Journal of Digital Convergence / v.11 no.1 / pp.243-248 / 2013
  • In this paper, the depth of the foreground is analyzed through focus analysis and color-based grouping for 2D/3D video conversion, and a method for generating foreground depth using focus and motion information is proposed. A candidate foreground image is generated from the estimated motion of the image focus information in order to extract the foreground from the 2D video. The foreground region is then extracted by filling hole areas inside objects in the candidate foreground image using color analysis. Depth information is generated by analyzing the focus values in the current frame in order to assign depth to the extracted foreground region, and the depth is further weighted by the motion information. The quality of the generated depth information is evaluated by comparing the results of a previously proposed algorithm with those of the method proposed in this paper.
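
The focus-analysis part of the method can be illustrated roughly as follows: a local variance-of-Laplacian focus measure, a percentile threshold for the candidate foreground, and a depth value blended from focus and motion magnitude. The threshold and the blending weight are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_map(gray, window=9):
    """Per-pixel focus measure: local variance of the Laplacian response.
    Sharp (in-focus) regions score high, defocused background scores low."""
    lap = laplace(gray.astype(np.float32))
    mean = uniform_filter(lap, size=window)
    return uniform_filter(lap * lap, size=window) - mean * mean

def candidate_foreground(gray, window=9, percentile=75):
    """Rough foreground mask: pixels whose focus measure exceeds a
    percentile threshold (an illustrative threshold)."""
    fm = focus_map(gray, window)
    return fm > np.percentile(fm, percentile)

def assign_depth(focus, motion_mag, mask, alpha=0.7):
    """Blend normalised focus and motion magnitude into a foreground depth
    value; the weighting factor alpha is an assumed parameter."""
    f = (focus - focus.min()) / (focus.max() - focus.min() + 1e-6)
    m = (motion_mag - motion_mag.min()) / (motion_mag.max() - motion_mag.min() + 1e-6)
    depth = np.zeros_like(f)
    depth[mask] = alpha * f[mask] + (1.0 - alpha) * m[mask]
    return depth
```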

3D HDTV service method based on MPEG-C part.3 (MPEG-C part.3를 이용한 고화질 3D HDTV 전송방안)

  • Kang, Jeonho;Lee, Gilbok;Kim, Kyuheon;Cheong, Won-Sik;Yun, Kugjin
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.11a / pp.298-301 / 2010
  • One of the major topics in the electronics industry today is 3D. 3D stereoscopic imaging technology is changing the media environment, and the broadcasting environment is changing along with it. High-definition 3D stereoscopic broadcasting will be a service in which 2D and 3D programs are provided in a temporally interleaved manner while maintaining compatibility with the existing 2D broadcasting service. High-definition 3D stereoscopic broadcasting cannot be delivered within the existing 2D-based digital broadcasting service environment. The international standards body MPEG has standardized ISO/IEC 23002-3 (MPEG-C part.3) as a 3D service scheme. MPEG-C part.3 uses a depth map and a parallax map as auxiliary data. However, for video with high spatial frequency, the object boundaries in the depth map and parallax map are not well defined, so object edges can become smeared when the 3D stereoscopic view is rendered. This paper therefore presents a transmission scheme for a high-definition 3D stereoscopic broadcasting service and introduces a stereoscopic video transmission method based on MPEG-C part.3.
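
For orientation, a generic per-pixel conversion from an 8-bit depth value to screen parallax is sketched below. The near/far parallax limits are placeholder values, and this is not the normative mapping of ISO/IEC 23002-3, only an illustration of how a depth map can drive the auxiliary parallax data.

```python
import numpy as np

def depth_to_parallax(depth_8bit, k_near=10.0, k_far=5.0):
    """Generic mapping from an 8-bit depth value to a screen parallax in
    pixels, using near/far parallax limits.  k_near and k_far are
    illustrative; MPEG-C part.3 signals its own parameters and its
    normative mapping may differ from this sketch."""
    v = depth_8bit.astype(np.float32) / 255.0     # 1.0 = nearest, 0.0 = farthest
    return v * (k_near + k_far) - k_far           # positive = in front of the screen
```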


Coding Technique using Depth Map in 3D Scalable Video Codec (확장된 스케일러블 비디오 코덱에서 깊이 영상 정보를 활용한 부호화 기법)

  • Lee, Jae-Yung;Lee, Min-Ho;Chae, Jin-Kee;Kim, Jae-Gon;Han, Jong-Ki
    • Journal of Broadcast Engineering / v.21 no.2 / pp.237-251 / 2016
  • The conventional 3D-HEVC uses the depth data of the other view instead of that of the current view, because the texture data has to be encoded before the corresponding depth data of the current view has been encoded; the depth data of the other view is thus used as the predicted depth for the current view. Whereas the conventional 3D-HEVC has no candidate for the predicted depth information other than that of the other view, the scalable 3D-HEVC utilizes the depth data of the lower spatial layer whose view ID is equal to that of the current picture. The depth data of the lower spatial layer is up-scaled to the resolution of the current picture, and the enlarged depth data is used as the predicted depth information. Because the quality of the enlarged depth is much higher than that of the depth of the other view, the proposed scheme increases the coding efficiency of the scalable 3D-HEVC codec. Computer simulation results show that the scalable 3D-HEVC is useful and that the proposed scheme of using the enlarged depth data for the current picture provides a significant coding gain.
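
The inter-layer prediction idea, up-scaling the lower spatial layer's depth map and coding only the residual against it, can be sketched as follows. Nearest-neighbour up-scaling stands in for the codec's actual interpolation filter, which is an assumption of this sketch.

```python
import numpy as np

def upscale_depth(base_depth, scale=2):
    """Nearest-neighbour up-scaling of the lower spatial layer's depth map
    to the enhancement-layer resolution (the real codec uses a more
    elaborate interpolation filter)."""
    return np.repeat(np.repeat(base_depth, scale, axis=0), scale, axis=1)

def depth_residual(current_depth, base_depth, scale=2):
    """Inter-layer depth prediction: the up-scaled base-layer depth serves
    as the predictor, and only this residual would need to be coded."""
    predicted = upscale_depth(base_depth, scale)
    h, w = current_depth.shape
    return current_depth.astype(np.int16) - predicted[:h, :w].astype(np.int16)
```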

Effective Route Decision of an Automatic Moving Robot(AMR) using a 2D Spatial Map of the Stereo Camera System

  • Lee, Jae-Soo;Han, Kwang-Sik;Ko, Jung-Hwan
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.20 no.9 / pp.45-53 / 2006
  • This paper proposes a method for effective, intelligent route decisions for automatic moving robots (AMR) using a 2D spatial map obtained from a stereo camera system. In this method, depth information and a disparity map are extracted from the input images of a parallel stereo camera. The distance between the automatic moving robot and an obstacle is detected, and a 2D spatial map is built from the location coordinates, from which the relative distances between the obstacle and other objects are deduced. The robot then moves automatically, making effective and intelligent route decisions using the obtained 2D spatial map. Experiments on robot driving with 240 frames of stereo images showed that the error ratio of the calculated distance to the measured distance between objects was very low, 1.52% on average.
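
A sketch of the geometry behind the 2D spatial map is given below: the stereo relation Z = f * B / d converts disparity to distance, and each image column's nearest obstacle is projected into a top-down occupancy grid. The focal length, baseline, and grid parameters are placeholders, not the paper's calibration values.

```python
import numpy as np

def disparity_to_distance(disparity, focal_px, baseline_m):
    """Classic stereo relation Z = f * B / d (distance in metres);
    zero-disparity pixels are marked invalid."""
    d = np.where(disparity > 0, disparity, np.nan)
    return focal_px * baseline_m / d

def build_2d_spatial_map(depth_m, focal_px, cx, cell=0.1, size=200):
    """Project each image column's nearest obstacle into a top-down
    occupancy grid: x comes from the pixel column, z from the stereo
    distance.  Grid cell size and extent are illustrative parameters."""
    grid = np.zeros((size, size), dtype=np.uint8)
    h, w = depth_m.shape
    for u in range(w):
        col = depth_m[:, u]
        if np.all(np.isnan(col)):
            continue
        z = np.nanmin(col)                    # nearest obstacle in this column
        x = (u - cx) * z / focal_px           # lateral offset in metres
        gx = int(size / 2 + x / cell)
        gz = int(z / cell)
        if 0 <= gx < size and 0 <= gz < size:
            grid[gz, gx] = 1
    return grid
```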