Title/Summary/Keyword: depth information

Resolution-independent Up-sampling for Depth Map Using Fractal Transforms

  • Liu, Meiqin; Zhao, Yao; Lin, Chunyu; Bai, Huihui; Yao, Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2730-2747 / 2016
  • Due to the limited bandwidth and capture resolution of depth cameras, low-resolution depth maps must be up-sampled to high resolution so that they correspond to their texture images. In this paper, a novel depth map up-sampling algorithm is proposed that exploits the internal self-referential property of fractals. The fractal parameters extracted from a depth map describe its internal self-referential structure: they introduce no inherent scale and retain only the relational information of the depth map. In other words, fractal transforms provide a resolution-independent description of depth maps and can up-sample them to an arbitrarily high resolution. An enhancement method is also proposed to further improve the quality of the up-sampled depth map. Experimental results demonstrate that the synthesized views achieve better quality in both objective and subjective terms. Most importantly, depth maps of arbitrary resolution can be obtained with the proposed scheme; a toy sketch of the idea follows.
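The core idea, that a fractal code stores only block-to-block relations and can therefore be decoded at any scale, can be illustrated with a toy fractal coder. The sketch below is not the authors' algorithm: the block sizes, the integer-only scale factor, and the brute-force domain search are simplifying assumptions.

```python
import numpy as np

def fractal_encode(depth, r=4):
    """Toy fractal coder: approximate every r x r range block by an affine
    map s*D + o of some 2r x 2r domain block shrunk to r x r.
    Assumes the depth dimensions are divisible by r."""
    H, W = depth.shape
    doms = []
    for y in range(0, H - 2 * r + 1, r):
        for x in range(0, W - 2 * r + 1, r):
            d = depth[y:y + 2 * r, x:x + 2 * r].astype(float)
            doms.append(((y, x), d.reshape(r, 2, r, 2).mean(axis=(1, 3))))
    code = []
    for y in range(0, H, r):
        for x in range(0, W, r):
            rv = depth[y:y + r, x:x + r].astype(float).ravel()
            best = None
            for k, (_, d) in enumerate(doms):
                dv = d.ravel()
                var = dv.var()
                # Least-squares contrast s and brightness o for rv ~ s*dv + o.
                s = 0.0 if var == 0 else np.cov(dv, rv, bias=True)[0, 1] / var
                s = np.clip(s, -0.9, 0.9)          # keep the map contractive
                o = rv.mean() - s * dv.mean()
                err = ((s * dv + o - rv) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            code.append((y // r, x // r) + best[1:])
    return code, [p for p, _ in doms]

def fractal_decode(code, dom_pos, src_shape, scale=3, r=4, iters=15):
    """Decode the SAME code at an arbitrary integer scale: every piece of
    block geometry is simply multiplied by `scale`."""
    R = r * scale
    img = np.zeros((src_shape[0] * scale, src_shape[1] * scale))
    for _ in range(iters):
        out = np.empty_like(img)
        for by, bx, k, s, o in code:
            dy, dx = (c * scale for c in dom_pos[k])
            d = img[dy:dy + 2 * R, dx:dx + 2 * R]
            d = d.reshape(R, 2, R, 2).mean(axis=(1, 3))   # 2x shrink
            out[by * R:(by + 1) * R, bx * R:(bx + 1) * R] = s * d + o
        img = out
    return img
```

Because the code stores only relative block positions and (s, o) pairs, the same encoding decodes at scale 2, 3, or 4 without re-encoding, which is the resolution-independence the abstract describes.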

Depth Map Coding Using Histogram-Based Segmentation and Depth Range Updating

  • Lin, Chunyu; Zhao, Yao; Xiao, Jimin; Tillo, Tammam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.3 / pp.1121-1139 / 2015
  • In the texture-plus-depth format, depth map compression is an important task. Unlike normal texture images, depth maps carry little texture while containing many homogeneous regions separated by sharp edges. This feature is exploited to form an efficient depth map coding scheme in this paper. First, the histogram of the depth map is analyzed to find an appropriate threshold that segments the depth map into foreground and background regions, from which the edge between the two regions is obtained. Second, the two regions are encoded through rate-distortion optimization with a shape-adaptive wavelet transform, while the edges are losslessly encoded with JBIG2. Finally, a depth-updating algorithm based on the threshold and the depth range is applied to enhance the quality of the decoded depth maps. Experimental results demonstrate effective performance in both depth map quality and synthesized view quality. A thresholding sketch follows.
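The abstract does not say how the "appropriate threshold" is chosen, so the sketch below substitutes Otsu's classic between-class-variance criterion as a plausible stand-in; the 4-neighbour edge extraction is likewise an illustrative assumption.

```python
import numpy as np

def otsu_threshold(depth):
    """Pick the depth-histogram threshold that maximizes between-class
    variance (assumes an 8-bit depth map)."""
    hist = np.bincount(depth.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # background probability
    mu = np.cumsum(p * np.arange(256))          # background partial mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def segment_depth(depth):
    """Split into foreground/background and trace the edge between them."""
    fg = depth > otsu_threshold(depth)
    # Edge = foreground pixels touching background in the 4-neighbourhood.
    # (np.roll wraps at the borders; acceptable for a sketch.)
    inner = (np.roll(fg, 1, 0) & np.roll(fg, -1, 0) &
             np.roll(fg, 1, 1) & np.roll(fg, -1, 1))
    return fg, fg & ~inner
```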

A New Copyright Protection Scheme for Depth Map in 3D Video

  • Li, Zhaotian; Zhu, Yuesheng; Luo, Guibo; Guo, Biao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3558-3577 / 2017
  • In the 2D-to-3D video conversion process, virtual left and right views can be generated from a 2D video and its corresponding depth map by depth-image-based rendering (DIBR). The depth map plays an important role in the conversion system, so copyright protection for it is necessary. However, the generated virtual views may be distributed illegally, while the depth map itself is never directly exposed to viewers. In previous works, copyright information embedded into the depth map could not be extracted from the virtual views after the DIBR process. In this paper, a new copyright protection scheme for the depth map is proposed in which the copyright information can be detected from the virtual views even without the depth map. Experimental results show that the proposed method is robust against JPEG compression, filtering, and noise attacks. A DIBR warping sketch is given below.
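For context, DIBR itself reduces to shifting each pixel horizontally by a disparity proportional to its nearness. The sketch below is a generic DIBR warp, not the paper's watermarking scheme; the depth convention (255 = nearest), the maximum disparity, and the left-neighbour hole filling are assumptions.

```python
import numpy as np

def dibr_view(color, depth, max_disp=8.0, left=True):
    """Warp `color` (H x W x 3, uint8) into a virtual view using `depth`
    (H x W, uint8; assumed 255 = nearest). Far pixels are painted first so
    near pixels win at occlusions."""
    H, W, _ = color.shape
    disp = max_disp * depth.astype(float) / 255.0
    order = np.argsort(depth, axis=None)          # far -> near
    ys, xs = np.unravel_index(order, (H, W))
    shift = disp[ys, xs] * (-1.0 if left else 1.0)
    xt = np.clip(np.round(xs + shift).astype(int), 0, W - 1)
    view = np.zeros_like(color)
    filled = np.zeros((H, W), bool)
    view[ys, xt] = color[ys, xs]
    filled[ys, xt] = True
    for x in range(1, W):                         # naive hole filling
        hole = ~filled[:, x]
        view[hole, x] = view[hole, x - 1]
        filled[hole, x] = True
    return view
```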

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel arrangement and ToF depth sensors are used to capture the 3D scene. Although each ToF depth sensor can measure the depth of the scene in real time, it has several problems to overcome, so after capturing low-resolution depth images with the ToF sensors we apply a post-processing step to address them. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying belief-propagation stereo matching with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time. Both steps are sketched below.
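Two of the steps, reprojecting ToF depth into the color camera and deriving a depth-discontinuity map, can be sketched directly. The pinhole model and jump threshold below are assumptions; the paper's calibration and belief-propagation matcher are not reproduced.

```python
import numpy as np

def warp_tof_to_color(depth, K_tof, K_color, R, t, out_shape):
    """Back-project every ToF pixel, transform into the color camera frame,
    and project; returns a sparse depth image in color-camera coordinates.
    (No z-buffering: later pixels simply overwrite earlier ones.)"""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])     # 3 x N
    pts = (np.linalg.inv(K_tof) @ pix) * depth.ravel()         # 3-D points
    pts = R @ pts + t.reshape(3, 1)                            # -> color frame
    proj = K_color @ pts
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    ok = (proj[2] > 0) & (uc >= 0) & (uc < out_shape[1]) \
         & (vc >= 0) & (vc < out_shape[0])
    out = np.zeros(out_shape)
    out[vc[ok], uc[ok]] = pts[2, ok]
    return out

def depth_discontinuity_map(depth, jump=8):
    """Mark pixels whose depth differs from a 4-neighbour by more than
    `jump`; a matcher can relax its smoothness prior across these pixels."""
    d = depth.astype(int)
    disc = np.zeros(d.shape, bool)
    disc[:, 1:] |= np.abs(np.diff(d, axis=1)) > jump
    disc[1:, :] |= np.abs(np.diff(d, axis=0)) > jump
    return disc
```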

Fast Mode Decision For Depth Video Coding Based On Depth Segmentation

  • Wang, Yequn; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Shao, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.4 / pp.1128-1139 / 2012
  • With the development of three-dimensional displays and related technologies, depth video coding has become a new topic attracting great attention from industry and research institutes. Because (1) the depth video is not a sequence of images viewed directly by end users but an aid for rendering, and (2) depth video is simpler than the corresponding color video, a fast algorithm for depth video coding is both necessary and feasible for reducing the computational burden of the encoder. This paper proposes a fast mode decision algorithm for depth video coding based on depth segmentation. First, based on depth perception, the depth video is segmented into three regions: edge, foreground, and background. Then, different mode candidates are searched per region to decide the encoding macroblock mode. Finally, the encoding time, bit rate, and virtual-view quality of the proposed algorithm are tested. Experimental results show that the proposed algorithm saves between 82.49% and 93.21% of encoding time with negligible degradation of rendered virtual-view quality and negligible bit rate increase. The region-dependent candidate search is sketched below.
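A hedged sketch of the region-dependent mode search: the thresholds and candidate mode lists below are illustrative stand-ins, not the paper's measured settings.

```python
import numpy as np

# Illustrative candidate sets: edges get a full search, smooth background
# gets only the cheap modes (the actual lists are an assumption).
EDGE_MODES = ["INTRA4x4", "P8x8", "P8x16", "P16x8", "P16x16"]
FG_MODES   = ["P16x16", "P16x8", "P8x16"]
BG_MODES   = ["SKIP", "P16x16"]

def classify_macroblocks(depth, mb=16, edge_thresh=10, fg_thresh=128):
    """Label each mb x mb macroblock as edge / foreground / background
    from its depth range and mean (thresholds are assumptions)."""
    H, W = depth.shape
    labels = {}
    for y in range(0, H - mb + 1, mb):
        for x in range(0, W - mb + 1, mb):
            blk = depth[y:y + mb, x:x + mb].astype(int)
            if blk.max() - blk.min() > edge_thresh:
                labels[(y, x)] = "edge"
            elif blk.mean() > fg_thresh:
                labels[(y, x)] = "foreground"
            else:
                labels[(y, x)] = "background"
    return labels

def candidate_modes(label):
    """Restrict the encoder's mode search to the region's candidate set."""
    return {"edge": EDGE_MODES,
            "foreground": FG_MODES,
            "background": BG_MODES}[label]
```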

Auto-Covariance Analysis for Depth Map Coding

  • Liu, Lei; Zhao, Yao; Lin, Chunyu; Bai, Huihui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.9 / pp.3146-3158 / 2014
  • Efficient depth map coding is crucial to the multi-view plus depth (MVD) format of 3-D video representation, as the quality of the synthesized virtual views depends strongly on the accuracy of the depth map. A depth map contains smooth areas within objects but distinct boundaries, and these boundary areas significantly affect the visual quality of synthesized views. In this paper, we characterize the depth map by an auto-covariance analysis that reveals its locally anisotropic features. Based on this characterization, we propose an efficient depth map coding scheme in which the directional discrete cosine transform (DDCT) replaces the conventional 2-D DCT to preserve boundary information and thereby improve the quality of the synthesized view. Experimental results show that the proposed scheme outperforms the conventional DCT in both bitrate savings and rendering quality. A small auto-covariance sketch follows.
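The auto-covariance of a local patch makes the anisotropy visible: for a flat region it is an isotropic bump, while across a depth edge it elongates along the edge direction. A minimal sketch, assuming small integer lags:

```python
import numpy as np

def local_autocovariance(patch, max_lag=4):
    """Normalized 2-D auto-covariance of a depth patch. An elongated ridge
    in the output indicates a locally anisotropic (edge-like) region."""
    p = patch.astype(float) - patch.mean()
    H, W = p.shape
    cov = np.zeros((2 * max_lag + 1, 2 * max_lag + 1))
    for dy in range(-max_lag, max_lag + 1):
        for dx in range(-max_lag, max_lag + 1):
            # Overlapping sub-arrays of p shifted by (dy, dx).
            a = p[max(0, dy):H + min(0, dy), max(0, dx):W + min(0, dx)]
            b = p[max(0, -dy):H + min(0, -dy), max(0, -dx):W + min(0, -dx)]
            cov[dy + max_lag, dx + max_lag] = (a * b).mean()
    c0 = cov[max_lag, max_lag]                 # zero-lag variance
    return cov if c0 == 0 else cov / c0
```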

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng; Jiang, Yifeng; Huang, Zhuandi; Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4968-4986 / 2017
  • In this paper, we address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the existing DepthTransfer algorithm, transfers depth information at the level of superpixels within an instance-based learning framework that replaces the pixel basis. A key superpixel feature that enhances matching precision is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified cross bilateral filter is applied to refine the final depth field. Experiments on the Make3D Range Image Dataset demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can automatically convert 2D images into stereo pairs for 3D visualization, producing anaglyph images that are more realistic and immersive. The superpixel-level transfer step is sketched below.
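The instance-based transfer step can be sketched as a k-nearest-neighbour lookup in superpixel feature space. The feature vectors, k, and median aggregation below are assumptions; superpixel extraction, semantic labels, and the cross-bilateral refinement are omitted.

```python
import numpy as np

def transfer_superpixel_depth(query_feats, train_feats, train_depths, k=5):
    """Assign each query superpixel the median depth of its k nearest
    training superpixels (features: N x D arrays; depths: length N)."""
    out = np.empty(len(query_feats))
    for i, f in enumerate(query_feats):
        d2 = ((train_feats - f) ** 2).sum(axis=1)   # squared distances
        nn = np.argpartition(d2, k)[:k]             # k nearest, unordered
        out[i] = np.median(train_depths[nn])
    return out
```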

Volumetric Visualization using Depth Information of Stereo Images (스테레오 영상에서의 깊이정보를 이용한 3차원 입체화)

  • 이성재; 김정훈; 윤성원; 최종주; 이명호
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2000.10a / pp.541-541 / 2000
  • This paper presents a method for reconstructing 3D depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, we performed feature-point-based stereo matching to recover depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) algorithm. The resulting image aids the visual understanding of depth information. The triangulation step is sketched below.
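For matched feature points in a rectified (parallel) stereo pair, depth recovery reduces to the classic triangulation formula Z = fB/d. A minimal sketch, with the focal length and baseline treated as assumed calibration outputs:

```python
import numpy as np

def triangulate_depth(x_left, x_right, focal_px, baseline_mm):
    """Parallel-stereo triangulation for matched feature points:
    depth Z = f * B / disparity, with disparity = x_left - x_right."""
    disp = np.asarray(x_left, float) - np.asarray(x_right, float)
    disp = np.where(disp <= 0, np.nan, disp)   # invalid matches -> NaN
    return focal_px * baseline_mm / disp

# e.g. a point matched 12 px apart, f = 800 px, B = 6 mm  ->  Z = 400 mm
print(triangulate_depth([112.0], [100.0], 800.0, 6.0))
```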

2D-to-3D Conversion System using Depth Map Enhancement

  • Chen, Ju-Chin; Huang, Meng-yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1159-1181 / 2016
  • This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for human viewers. Linear and atmospheric perspective cues, which compensate for each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from the depth cues, a direction angle of the image is estimated and then a depth gradient, in accordance with that angle, is integrated with superpixels to obtain the depth map. However, the stereoscopic effects of views synthesized from this depth map are limited and can dissatisfy viewers. To achieve more impressive visual effects, the viewer's main focus is considered: salient object detection is performed to locate the region of visual attention, and the depth map is refined by locally modifying the depth values within that region. The refinement process not only maintains global depth consistency by correcting non-uniform depth values but also enhances the stereoscopic effect. Experimental results show that the subjectively evaluated degree of satisfaction with the proposed method is approximately 7% higher than that of both existing commercial conversion software and a state-of-the-art approach. Both stages are sketched below.
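The two stages, a planar depth gradient laid along the estimated direction angle and a local refinement inside the salient region, can be sketched as follows. The depth convention (0 = near, 255 = far), the flatten-and-offset refinement, and the `pull` amount are assumptions, not the paper's exact procedure.

```python
import numpy as np

def gradient_depth(h, w, angle_deg):
    """Planar depth gradient along an estimated direction angle
    (convention here: 0 = near, 255 = far)."""
    theta = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    proj = xs * np.cos(theta) + ys * np.sin(theta)  # distance along direction
    span = proj.max() - proj.min()
    proj = (proj - proj.min()) / (span if span else 1.0)
    return (255.0 * proj).astype(np.uint8)

def refine_salient(depth, salient_mask, pull=40):
    """Flatten the salient region's depth and pull it toward the viewer."""
    out = depth.astype(int)
    out[salient_mask] = max(int(np.median(out[salient_mask])) - pull, 0)
    return np.clip(out, 0, 255).astype(np.uint8)
```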

Depth Up-Sampling via Pixel-Classifying and Joint Bilateral Filtering

  • Ren, Yannan; Liu, Ju; Yuan, Hui; Xiao, Yifan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3217-3238 / 2018
  • In this paper, a depth image up-sampling method based on pixel classification and joint bilateral filtering is put forward. By analyzing the edge maps derived from the high-resolution color image and the low-resolution depth map respectively, pixels in the up-sampled depth map are classified into four categories: edge points, edge-neighbor points, texture points, and smooth points. First, the joint bilateral up-sampling (JBU) method is used to generate an initial up-sampled depth image. Then, for each pixel category, a different refinement method is employed to modify the initial up-sampled depth image. Experimental results show that the proposed algorithm reduces blurring artifacts while achieving a lower bad pixel rate (BPR). The JBU step is sketched below.
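The initial step is standard joint bilateral up-sampling, sketched below in plain (slow) Python. The Gaussian widths and window radius are illustrative, and the per-category refinement methods are not reproduced.

```python
import numpy as np

def jbu(depth_lo, color_hi, scale, sigma_s=1.5, sigma_r=12.0, radius=2):
    """Joint bilateral up-sampling: each high-res depth value is a weighted
    mean of low-res depth samples; weights combine spatial closeness with
    colour similarity measured in the high-res guide image."""
    Hh, Wh = color_hi.shape[:2]
    Hl, Wl = depth_lo.shape
    guide = color_hi.astype(float)
    out = np.zeros((Hh, Wh))
    for y in range(Hh):
        for x in range(Wh):
            yl, xl = y / scale, x / scale       # position on the low-res grid
            num = den = 0.0
            for j in range(int(yl) - radius, int(yl) + radius + 1):
                for i in range(int(xl) - radius, int(xl) + radius + 1):
                    if not (0 <= j < Hl and 0 <= i < Wl):
                        continue
                    gy = min(int(j * scale), Hh - 1)  # guide pixel for (j, i)
                    gx = min(int(i * scale), Wh - 1)
                    dc = guide[y, x] - guide[gy, gx]
                    w = np.exp(-((j - yl) ** 2 + (i - xl) ** 2)
                               / (2 * sigma_s ** 2)
                               - dc @ dc / (2 * sigma_r ** 2))
                    num += w * depth_lo[j, i]
                    den += w
            out[y, x] = num / den if den else depth_lo[min(int(yl), Hl - 1),
                                                       min(int(xl), Wl - 1)]
    return out
```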