• Title/Summary/Keyword: depth image-based


An Image Coding Algorithm for the Representation of the Set of the Zoom Images (Zoom 영상 표현을 위한 영상 코딩 알고리듬)

  • Jang, Bo-Hyeon; Kim, Do-Hyeon; Yang, Yeong-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.5 / pp.498-508 / 2001
  • In this paper, we propose an efficient coding algorithm for zoom images that finds the optimal depth and texture information. The proposed algorithm is an area-based method consisting of two consecutive steps: i) a depth extraction step and ii) a texture extraction step. The X-Y plane of the object space is divided into triangular patches; the depth value of each node is determined in the first step, and the texture of each patch is extracted in the second step. In the depth extraction step, the depth of a node is determined by applying a block-based disparity compensation method to the windowed area centered at the node. In the second step, the texture of each triangular patch is extracted from the zoom images by applying an affine-transformation-based disparity compensation method to the patches, using the depth values found in the first step. To improve image quality, interpolation is performed in the object space instead of on the image plane.
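The block-based disparity search at the heart of the depth extraction step can be sketched as follows. This is a minimal illustration under simple assumptions (grayscale images as nested lists, a sum-of-absolute-differences matching cost); the function names and window handling are hypothetical, not the authors' implementation:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def best_disparity(left, right, row, col, size, max_disp):
    """Find the horizontal shift that minimizes SAD for the block
    whose top-left corner is (row, col) in the left image."""
    ref = [r[col:col + size] for r in left[row:row + size]]
    best_cost, best_d = None, 0
    for d in range(max_disp + 1):
        if col - d < 0:            # shifted block would leave the image
            break
        cand = [r[col - d:col - d + size] for r in right[row:row + size]]
        cost = sad(ref, cand)
        if best_cost is None or cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In the paper's setting the same cost comparison would be applied to the windowed area centered at each triangular-mesh node rather than to a rectangular grid.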


A Landmark Based Localization System using a Kinect Sensor (키넥트 센서를 이용한 인공표식 기반의 위치결정 시스템)

  • Park, Kwiwoo; Chae, JeongGeun; Moon, Sang-Ho; Park, Chansik
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.1 / pp.99-107 / 2014
  • In this paper, a landmark-based localization system using a Kinect sensor is proposed and evaluated with an implemented system for precise and autonomous navigation of low-cost robots. The proposed localization method finds the positions of landmarks on the image plane and their depth values using the color and depth images. Coordinate transforms are defined using the depth value; through them, a position on the image plane is transformed into a position in the body frame. The ranges between the landmarks and the Kinect sensor are the norms of the landmark positions in the body frame. The Kinect sensor position is computed by trilateration whose inputs are these ranges and the known landmark positions. In addition, a new matching method using the pinhole model is proposed to reduce the mismatch between the depth and color images. Furthermore, a height-error compensation method using the relationship between the body frame and real-world coordinates is proposed to reduce the effect of incorrect leveling. An error analysis is also given to quantify the effect of focal length, principal point, and depth value on the range. Experiments using 2D bar codes with the implemented system show that the position is obtained with less than 3 cm of error in an enclosed space (3,500 mm × 3,000 mm × 2,500 mm).
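The trilateration step, which recovers the sensor position from the measured ranges and the known landmark positions, can be sketched in 2D by subtracting the first circle equation from the others to get a linear system (a hedged illustration; the paper's frame conventions and 3D handling may differ):

```python
def trilaterate(landmarks, ranges):
    """Solve for (x, y) from three known landmark positions and their
    measured ranges.  Subtracting the first equation
    (x - x1)^2 + (y - y1)^2 = r1^2 from the other two yields a 2x2
    linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) + r1**2 - r2**2
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) + r1**2 - r3**2
    det = a11 * a22 - a12 * a21   # zero iff the landmarks are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With more than three landmarks the same linearization gives an overdetermined system, solved by least squares.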

2D to 3D Anaglyph Image Conversion using Linear Curve in HTML5 (HTML5에서 직선의 기울기를 이용한 2D to 3D 입체 이미지 변환)

  • Park, Young Soo
    • Journal of Digital Convergence / v.12 no.12 / pp.521-528 / 2014
  • In this paper, we propose a method of converting a 2D image to a 3D image using linear curves in HTML5. We use only one image, without any additional depth-map information, to create the 3D image, so we filter the original image to extract the RGB colors for the left and right eyes. After selecting the ready-made control points of the linear curves that set up the depth values, users can set and modify those values, and the result reflects the depth values the end users select. An anaglyph 3D image is automatically produced from the whole and partial depth information. As all of this work has been designed and implemented in a Web environment using HTML5, it is easy and convenient, and end users can create any 3D image they want.
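The anaglyph construction, taking the red channel from a depth-shifted view and the green/blue channels from the unshifted one, can be sketched as below. This is an assumption-laden illustration in Python rather than HTML5/JavaScript; the pixel layout (nested lists of RGB tuples) and the linear shift model are hypothetical:

```python
def make_anaglyph(image, depth, max_shift=5):
    """Build a red/cyan anaglyph from one image plus a per-pixel
    depth map in [0, 1].  The red channel is sampled from a pixel
    shifted horizontally by depth * max_shift (the synthetic left-eye
    view); green and blue come from the original (right-eye) view."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            shift = int(depth[y][x] * max_shift)
            xs = min(w - 1, x + shift)       # clamp at the image border
            r = image[y][xs][0]
            _, g, b = image[y][x]
            out[y][x] = (r, g, b)
    return out
```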

2D-to-3D Conversion System using Depth Map Enhancement

  • Chen, Ju-Chin; Huang, Meng-yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1159-1181 / 2016
  • This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for humans. Linear and atmospheric perspective cues, which compensate for each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from the depth cues, a direction angle of the image is estimated, and the depth gradient corresponding to that angle is integrated with superpixels to obtain the depth map. However, the stereoscopic effects of synthesized views obtained from this depth map alone are limited and can dissatisfy viewers. To obtain more impressive visual effects, the viewer's main focus is considered: salient object detection is performed to find the region of visual attention, and the depth map is then refined by locally modifying the depth values within that region. The refinement process not only maintains global depth consistency by correcting non-uniform depth values but also enhances the stereoscopic effect. Experimental results show that in subjective evaluation, the degree of satisfaction with the proposed method is approximately 7% greater than that of both existing commercial conversion software and a state-of-the-art approach.
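The depth-gradient-plus-superpixel idea can be sketched as follows: a planar depth gradient is laid along an assumed direction angle, then each pixel's depth is replaced by the mean over its superpixel. This is a simplified sketch; the paper's estimation of the direction angle and its superpixel segmentation are not reproduced, and the label map below is assumed given:

```python
import math

def gradient_depth(h, w, angle_deg):
    """Planar depth gradient: depth increases along the direction
    given by angle_deg (0 degrees = left-to-right), normalized to
    roughly [0, 1] over the image extent."""
    a = math.radians(angle_deg)
    dx, dy = math.cos(a), math.sin(a)
    return [[(x * dx + y * dy) / max(w, h) for x in range(w)]
            for y in range(h)]

def smooth_by_superpixels(depth, labels):
    """Replace every pixel's depth by the mean depth of its
    superpixel, enforcing per-segment depth consistency."""
    sums, counts = {}, {}
    for row_d, row_l in zip(depth, labels):
        for d, l in zip(row_d, row_l):
            sums[l] = sums.get(l, 0.0) + d
            counts[l] = counts.get(l, 0) + 1
    return [[sums[l] / counts[l] for l in row_l] for row_l in labels]
```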

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho; Jeon, Dae-Seong; Yun, Yeong-U
    • The KIPS Transactions: Part B / v.8B no.5 / pp.549-556 / 2001
  • The general problem of recovering 3D from 2D imagery requires depth information for each picture element. The manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm, which extracts the depth information of every region from a monocular image sequence with camera translation, to implement 3D video in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on its depth. Full-search motion estimation based on a block matching algorithm is exploited in the first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates the motion of objects by analyzing a monocular motion picture and also calculates the average frame depth and each region's depth relative to that average. Simulation results show that the depth assigned to regions belonging to near or distant objects is in accord with the relative depth that the human visual system perceives.
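The relationship the paper relies on, that under pure camera translation apparent motion is inversely proportional to scene depth, can be illustrated with a small sketch. The proportionality constant and the per-region motion magnitudes are assumptions for illustration, not the paper's calibration:

```python
def relative_depths(motion_mags, focal=1.0, cam_speed=1.0):
    """Under pure camera translation, apparent image motion is
    inversely proportional to depth: depth ~ focal * cam_speed / |motion|.
    Returns each region's depth divided by the frame-average depth,
    i.e. the relative depth the abstract describes."""
    depths = [focal * cam_speed / m for m in motion_mags]
    avg = sum(depths) / len(depths)
    return [d / avg for d in depths]
```

A region moving twice as fast as another thus comes out at half its depth, before normalization by the frame average.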


Depth Map Generation Using Infocused and Defocused Images (초점 영상 및 비초점 영상으로부터 깊이맵을 생성하는 방법)

  • Mahmoudpour, Saeed; Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.3 / pp.362-371 / 2014
  • Blur variation caused by camera defocusing provides a useful cue for depth estimation. The Depth from Defocus (DFD) technique calculates the blur amount present in an image, given that the blur amount is directly related to scene depth. Conventional DFD methods use two defocused images, which can yield a low-quality estimated depth map as well as a poorly reconstructed in-focus image. To solve this, a new DFD methodology based on an in-focus image and a defocused image is proposed in this paper. In the proposed method, the outcome of Subbarao's DFD is combined with a novel edge-blur estimation method so that improved blur estimation can be achieved. In addition, a saliency map mitigates the ill-posed nature of blur estimation in regions with low intensity variation. To validate the feasibility of the proposed method, twenty image sets of in-focus and defocused images at 2K FHD resolution were acquired from a camera with focus control. In the experiments, a 3D stereoscopic image generated from an estimated depth map and the input in-focus image delivered satisfactory 3D perception in terms of the spatial depth of scene objects.
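The blur-to-depth relation that DFD exploits can be illustrated with a deliberately simplified model. This is not Subbarao's calibration: the linear relation between blur sigma and the deviation of inverse depth from the in-focus plane, and the constant `k`, are assumptions for illustration only:

```python
def depth_from_blur(sigma, focus_dist, k, behind=True):
    """Simplified DFD model (assumption, not Subbarao's exact method):
    blur grows linearly with the deviation of inverse depth from the
    in-focus plane,  sigma = k * |1/d - 1/focus_dist|.
    Inverts that relation for d, on the chosen side of the focus
    plane (behind=True means farther than focus_dist)."""
    dev = sigma / k
    inv = 1.0 / focus_dist - dev if behind else 1.0 / focus_dist + dev
    return 1.0 / inv
```

The sign ambiguity (nearer vs. farther than the focus plane) is exactly why practical DFD uses two differently focused images, or one in-focus and one defocused image as in this paper.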

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo; Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.1 / pp.141-150 / 2013
  • This study proposes a method that uses an Active Shape Model (ASM) to track an object after separating it with a depth sensor. Unlike a common visual camera, a depth sensor is not affected by the intensity of illumination, so objects can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component. In addition, morphology and labeling operations are applied for efficient image correction and object extraction. By applying the Active Shape Model to the extracted object, the method can track it more robustly; the ASM is robust to object occlusion. Compared with visual-camera-based object tracking algorithms, the proposed method, using the sensor's depth information, is more efficient and robust at object tracking. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.

Image Enhancement Using Adaptive Region-based Histogram Equalization for Multiple Color-Filter Aperture System (다중 컬러필터 조리개 시스템을 위한 적응적 히스토그램 평활화를 이용한 영상 개선)

  • Lee, Eun-Sung; Kang, Won-Seok; Kim, Sang-Jin; Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.65-73 / 2011
  • In this paper, we present a novel digital multifocusing approach using adaptive region-based histogram equalization for the multiple color-filter aperture (MCA) system with an insufficient amount of incoming light. From an image acquired by the MCA system, we can estimate the depth of objects at different distances by measuring the amount of misalignment among the RGB color planes. The estimated depth information is used to obtain multifocused images through region-of-interest (ROI) classification, registration, and fusion. However, the MCA system suffers from a low-exposure problem because of the limited size of its apertures. To overcome this problem, we propose adaptive region-based histogram equalization. Experimental results show that the proposed algorithm obtains in-focus images in low-light environments.
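Plain histogram equalization of a single region, the building block the adaptive region-based method extends, can be sketched as follows. The region is a grayscale block given as nested lists; the adaptive selection of regions from the MCA depth estimate is not reproduced here:

```python
def equalize(region, levels=256):
    """Histogram-equalize one grayscale region: map each intensity
    through the normalized cumulative histogram so the output spreads
    over the full [0, levels-1] range."""
    hist = [0] * levels
    n = 0
    for row in region:
        for p in row:
            hist[p] += 1
            n += 1
    cdf, total = [0] * levels, 0
    for i, h in enumerate(hist):
        total += h
        cdf[i] = total
    cdf_min = next(c for c in cdf if c > 0)   # smallest nonzero CDF value
    lut = [round((c - cdf_min) / max(1, n - cdf_min) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in region]
```

The adaptive variant would apply such a mapping per region (e.g. per ROI from the depth estimate) and blend at region borders to avoid seams.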

Image Synthesis and Multiview Image Generation using Control of Layer-based Depth Image (레이어 기반의 깊이영상 조절을 이용한 영상 합성 및 다시점 영상 생성)

  • Seo, Young-Ho; Yang, Jung-Mo; Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.8 / pp.1704-1713 / 2011
  • This paper proposes a method to generate multiview images from a synthesized image consisting of layered objects. A camera system comprising a depth camera and an RGB camera is used to capture the objects and extract 3-dimensional information. Considering the position and distance of each object in the image being synthesized, the objects are composed into a layered image. The synthesized image is then expanded into multiview images using multiview generation tools. In this paper, we synthesized two images consisting of objects and a human, and multiview images with 37 viewpoints were generated from the synthesized images.
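The layered synthesis and view generation can be illustrated with a one-row depth-image-based rendering sketch: each layered pixel is shifted in proportion to inverse depth, and nearer layers win a z-buffer test. The 1D simplification and the `baseline / depth` shift model are assumptions, not the paper's multiview tool:

```python
def render_view(colors, depths, baseline, w):
    """Render one virtual view of a single image row.  A pixel at
    depth d shifts by round(baseline / d) (nearer objects shift
    more); None marks empty layer positions; a z-buffer keeps the
    nearest layer where shifted pixels collide."""
    out = [None] * w
    zbuf = [float("inf")] * w
    for x, (c, d) in enumerate(zip(colors, depths)):
        if c is None:
            continue
        xs = x + round(baseline / d)
        if 0 <= xs < w and d < zbuf[xs]:
            out[xs], zbuf[xs] = c, d
    return out
```

Varying `baseline` over 37 values would give 37 viewpoints of the same layered scene, as in the paper's setup; unfilled positions (`None`) are the disocclusion holes that real DIBR systems must inpaint.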

Reconstruction of 3D Virtual Reality Using Depth Information of Stereo Image (스테레오 영상에서의 깊이정보를 이용한 3D 가상현실 구현)

  • Lee, S.J.; Kim, J.H.; Lee, J.W.; Ahn, J.S.; Lee, D.J.; Lee, M.H.
    • Proceedings of the KIEE Conference / 1999.07g / pp.2950-2952 / 1999
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, we performed feature-point-based stereo matching to find the depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) method and OpenGL. The final image aids the visual understanding of the depth information.
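The disparity-to-depth conversion underlying the stereo matching step follows the standard pinhole relation Z = f·B/d. The relation itself is standard; the parameter values in the example are made up for illustration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Pinhole stereo relation: depth Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline (here in mm), and
    d the matched feature's disparity in pixels.  Depth comes out in
    the baseline's units."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

Applying this to each matched feature point yields the sparse depth samples that the NURBS surface fitting then interpolates.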
