• Title/Summary/Keyword: Depth camera


3D Depth Measurement System based on Parameter Calibration of the Multi-Sensors (실거리 파라미터 교정식 복합센서 기반 3차원 거리측정 시스템)

  • Kim, Jong-Man;Kim, Won-Sop;Hwang, Jong-Sun;Kim, Yeong-Min
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2006.05a / pp.125-129 / 2006
  • This paper analyzes a depth measurement system composed of multiple sensors (laser, camera, and mirror) and proposes a parameter calibration technique for it. In the proposed system, the laser beam is directed onto the object by a rotating mirror, and the position of the reflected spot is observed by the camera through the same mirror. The depth of the object point illuminated by the laser is computed from the pixel position of the spot on the CCD. Several internal and external parameters, such as the inter-pixel distance, the focal length, and the position and orientation of the system components, contribute to the depth measurement error. An error sensitivity analysis of these parameters shows that the dominant error sources are the angle of the laser beam and the inter-pixel distance.
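
As a rough illustration of the triangulation and sensitivity analysis described in this abstract, the sketch below computes depth from the laser-spot pixel position for a simplified laser-camera geometry and perturbs the beam angle and inter-pixel distance numerically. The baseline, focal length, and pixel pitch are assumed values, not the paper's calibrated parameters.

    import numpy as np

    def depth_from_pixel(u, theta, b=0.20, f=0.016, p=10e-6):
        # u: laser-spot position in pixels from the principal point
        # theta: laser beam angle from the baseline (rad); b: camera-laser baseline (m)
        # f: focal length (m); p: inter-pixel distance (m)
        alpha = np.pi / 2 - np.arctan(u * p / f)      # camera ray angle from the baseline
        return b / (1.0 / np.tan(alpha) + 1.0 / np.tan(theta))

    u, theta = 120.0, np.deg2rad(60.0)
    z = depth_from_pixel(u, theta)

    # finite-difference sensitivities: depth change per unit error in each parameter
    dz_dtheta = (depth_from_pixel(u, theta + 1e-6) - z) / 1e-6        # metres per radian of beam angle
    dz_dp = (depth_from_pixel(u, theta, p=10e-6 + 1e-9) - z) / 1e-9   # metres per metre of pixel pitch
    print(z, dz_dtheta, dz_dp)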


Investigation on the Applicability of Defocus Blur Variations to Depth Calculation Using Target Sheet Images Captured by a DSLR Camera

  • Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.109-121 / 2020
  • Calculating the depth of objects in a scene from images is one of the most studied problems in image processing, computer vision, and photogrammetry. Conventionally, depth is calculated from a pair of overlapping images captured at different viewpoints, but there have also been studies on calculating depth from a single image. In theory, depth can be calculated from the diameter of the CoC (Circle of Confusion) caused by defocus under the assumption of a thin lens model. This study therefore verifies the validity of the thin lens model for calculating depth from the amount of edge blur, which corresponds to the radius of the CoC. A commercially available DSLR (Digital Single Lens Reflex) camera was used to capture a set of target sheets with different edge contrasts. To characterize how edge blur varies with the combination of FD (Focusing Distance) and OD (Object Distance), the camera was set to a series of FDs, and target sheet images were captured at varying ODs under each FD. The edge blur and edge displacement were then estimated from edge slope profiles using a brute-force method. The experimental results show that the variation of edge blur observed in the target images deviates from the theoretical amounts derived under the thin lens assumption, but that it can still be used to calculate depth from a single image under conditions similar to those of the experiment, in which the relationship between FD and OD is evident.
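
The thin-lens CoC relation that the paper tests can be written down directly. The sketch below, with an assumed focal length and f-number rather than the DSLR settings used in the study, computes the CoC diameter for a given FD/OD pair and inverts it to recover the object distance.

    def coc_diameter(s, s_f, f=0.05, N=2.8):
        """CoC diameter (m) for an object at distance s with the lens focused at s_f (thin lens)."""
        A = f / N                                        # aperture diameter
        return A * f * abs(s - s_f) / (s * (s_f - f))

    def depth_from_coc(c, s_f, f=0.05, N=2.8, far_side=True):
        """Invert the thin-lens CoC model; far_side selects the solution beyond s_f."""
        A = f / N
        denom = A * f - c * (s_f - f) if far_side else A * f + c * (s_f - f)
        return A * f * s_f / denom

    s_f, s = 2.0, 3.0                                    # focusing distance and true object distance (m)
    c = coc_diameter(s, s_f)
    print(c, depth_from_coc(c, s_f, far_side=True))      # recovers roughly 3.0 m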

Fast Digital Hologram Generation Using True 3D Object (실물에 대한 디지털 홀로그램 고속 생성)

  • Kang, Hoon-Jong;Lee, Gang-Sung;Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11B / pp.1283-1288 / 2009
  • In general, a 3D computer graphics model is used as the input for generating a digital hologram because the 3D information of an object can easily be extracted from such a model. The 3D information of a real scene, however, can be extracted with a depth camera. In this work, a point cloud corresponding to the real scene is extracted from an image pair captured by a depth camera, consisting of a gray texture image and a depth map, and this point cloud is used as the input for hologram generation. The digital hologram is generated with the coherent holographic stereogram, a fast, segmentation-based hologram generation algorithm. The hologram generated from the depth-camera image pair is reconstructed using the Fresnel approximation. In this way, a digital hologram of a real scene or a real object can be generated with the fast generation algorithm, and the experimental results are satisfactory.
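
As a hedged illustration of Fresnel-based hologram generation from a point cloud, the sketch below accumulates the Fresnel phase term of each point over the hologram plane. This is the brute-force per-point formulation, not the segmentation-based coherent holographic stereogram used in the paper, and the wavelength, pixel pitch, and point cloud are made-up values.

    import numpy as np

    wavelength = 532e-9                  # illumination wavelength (m)
    pitch = 8e-6                         # hologram pixel pitch (m)
    H, W = 512, 512
    ys, xs = np.meshgrid((np.arange(H) - H / 2) * pitch,
                         (np.arange(W) - W / 2) * pitch, indexing="ij")

    # point cloud: (x, y, z, amplitude), e.g. obtained from a depth map
    points = [(0.0, 0.0, 0.30, 1.0), (1e-3, -2e-3, 0.32, 0.8)]

    hologram = np.zeros((H, W), dtype=complex)
    for px, py, pz, a in points:
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        hologram += a * np.exp(1j * np.pi * r2 / (wavelength * pz))   # Fresnel phase of one point

    fringe = np.real(hologram)           # interference pattern to be encoded/displayed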

Implementation of Vehicle Plate Recognition Using Depth Camera

  • Choi, Eun-seok;Kwon, Soon-kak
    • Journal of Multimedia Information System / v.6 no.3 / pp.119-124 / 2019
  • In this paper, a method for detecting vehicle plates in depth pictures is proposed. A vehicle plate can be recognized by detecting planar areas. First, plane factors are calculated for each square block of the depth image. Neighboring blocks are then grouped into the same planar area when their plane factors are similar. The width and height of each detected planar area are obtained, and if they match those of an actual vehicle plate, the area is recognized as a vehicle plate. Simulation results show that the recognition rate of the proposed method is about 87.8%.
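
A minimal sketch of the block-wise plane grouping idea is given below, under assumptions of my own: a least-squares plane fit per block, a fixed block size, and a simple similarity tolerance, none of which are the paper's values.

    import numpy as np
    from collections import deque

    def block_planes(depth, bs=16):
        """Least-squares plane z = a*x + b*y + c for every bs-by-bs block of the depth map."""
        gh, gw = depth.shape[0] // bs, depth.shape[1] // bs
        yy, xx = np.mgrid[0:bs, 0:bs]
        A = np.c_[xx.ravel(), yy.ravel(), np.ones(bs * bs)]
        F = np.zeros((gh, gw, 3))
        for i in range(gh):
            for j in range(gw):
                blk = depth[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs].ravel()
                F[i, j] = np.linalg.lstsq(A, blk, rcond=None)[0]
        return F

    def group_similar_blocks(F, tol=0.02):
        """Flood-fill neighbouring blocks whose plane factors differ by less than tol."""
        gh, gw, _ = F.shape
        labels = -np.ones((gh, gw), dtype=int)
        cur = 0
        for si in range(gh):
            for sj in range(gw):
                if labels[si, sj] >= 0:
                    continue
                labels[si, sj] = cur
                q = deque([(si, sj)])
                while q:
                    i, j = q.popleft()
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if (0 <= ni < gh and 0 <= nj < gw and labels[ni, nj] < 0
                                and np.linalg.norm(F[ni, nj] - F[i, j]) < tol):
                            labels[ni, nj] = cur
                            q.append((ni, nj))
                cur += 1
        return labels

Each labelled region would then be accepted as a plate only if its metric width and height, recovered from the depth values, agree with those of an actual vehicle plate, as the abstract describes.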

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.519-524 / 2009
  • This paper describes a new sensor system for 3D environment perception that uses stereo structured infrared light sources and a single camera. Sensing the environment and obstacles is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but they are expensive, and because they use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful for obstacle detection, map building, and localization. Stereo vision is a common way of obtaining the depth of a 3D environment, but it requires that correspondences be clearly identified and it depends heavily on the lighting conditions. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used here to reduce the effect of ambient light while obtaining a 3D depth map. Modeling the projected light pattern enables precise estimation of the range. Capturing two successive images, one with the left and one with the right infrared projection, provides several benefits, including a wider depth measurement area, higher spatial resolution, and visibility perception.
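
One common way to realise single-camera structured-light ranging, which the "modeling of the projected light pattern" above suggests, is to intersect the camera ray through the detected light feature with the known plane of the projected light. The intrinsics, source position, and light-sheet orientation below are assumed values, not the paper's calibration.

    import numpy as np

    def ray_plane_depth(pixel, K, n, d):
        """Intersect the camera ray through `pixel` with the light sheet n.X + d = 0
        (camera coordinates); returns the 3-D point, whose z component is the depth."""
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        t = -d / (n @ ray)
        return t * ray

    K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # illustrative intrinsics
    proj_pos = np.array([-0.10, 0.0, 0.0])                        # IR source 10 cm left of the camera
    n = np.array([0.94, 0.0, -0.34])                              # light-sheet normal from the pattern model
    d = -n @ proj_pos                                             # the sheet passes through the source
    print(ray_plane_depth((350, 240), K, n, d))                   # point at roughly 0.32 m depth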

Estimation of Human Height Using Downward Depth Images (하방 촬영된 깊이 영상을 이용한 신장 추정)

  • Kim, Heung-Jun;Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.20 no.7 / pp.1014-1023 / 2017
  • In this paper, we propose a method for estimating human height from downward-looking depth images. The point with the smallest depth value within a detected person is taken as the top of the head, and the height is estimated from the depth difference between this point and the floor. Since the measured depth of the floor varies with the camera angle, a correction formula is applied. In addition, the binarization threshold is varied so that height can still be estimated when several people stand close together. Simulation results show that the proposed method outperforms conventional methods. It is expected to be useful for body measurement, intelligent surveillance, and marketing.
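
A minimal sketch of the height computation follows, assuming a simple per-ray cosine correction for the camera angle; the paper derives its own correction formula, and the ranges and angles here are toy values.

    import numpy as np

    def vertical_distance(range_m, ray_angle_from_vertical_deg):
        """Convert a range measured along a ray into vertical distance below the camera."""
        return range_m * np.cos(np.deg2rad(ray_angle_from_vertical_deg))

    def estimate_height(head_range, head_angle_deg, floor_range, floor_angle_deg):
        """Height = vertical distance to the floor minus vertical distance to the head top."""
        return (vertical_distance(floor_range, floor_angle_deg)
                - vertical_distance(head_range, head_angle_deg))

    # toy example: floor seen at 2.7 m along a ray 15 deg off vertical, head top at 0.9 m along 5 deg
    print(estimate_height(0.9, 5.0, 2.7, 15.0))    # roughly 1.71 m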

A Recognition Method for Moving Objects Using Depth and Color Information (깊이와 색상 정보를 이용한 움직임 영역의 인식 방법)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.19 no.4 / pp.681-688 / 2016
  • Recognizing moving objects is an important issue in intelligent video surveillance. Conventional recognition methods based on color have problems such as sensitivity to lighting and difficulty distinguishing similar colors. Recognition methods using depth information have also been studied, but their accuracy is limited because depth cameras cannot measure depth values precisely. In this paper, we propose a method for recognizing moving objects that uses both depth and color information: the depth information is used to extract the areas of moving objects, and the color information is then used to correct the extracted areas. Tests with typical videos containing moving objects confirm that the proposed method extracts the areas of moving objects more accurately than methods that use only one of the two types of information. The proposed method can be applied not only to CCTV but also to other fields that require the recognition of moving objects.
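
One plausible reading of "depth extracts, color corrects" is sketched below: candidate pixels come from the depth difference against a depth background model, and candidates whose color still matches the color background are dropped. The thresholds are assumed values, and the paper's actual correction rule may differ.

    import numpy as np

    def moving_object_mask(depth, depth_bg, color, color_bg, d_thr=0.05, c_thr=30.0):
        """depth/depth_bg: depth maps in metres; color/color_bg: HxWx3 uint8 frames of the scene."""
        depth_motion = np.abs(depth - depth_bg) > d_thr              # coarse moving region from depth
        color_diff = np.linalg.norm(color.astype(float) - color_bg.astype(float), axis=2)
        # keep depth-detected pixels, but drop those whose color still matches the background
        # (e.g. pixels produced by depth noise); this is the color-based correction step
        return depth_motion & (color_diff > c_thr)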

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo;Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.1 / pp.141-150 / 2013
  • This study proposes a method that uses an Active Shape Model (ASM) to track an object after separating it with a depth sensor. Unlike a common visual camera, a depth sensor is not affected by the illumination intensity, so objects can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component; morphology and labeling operations are then applied for image correction and efficient object extraction. Applying the Active Shape Model to the extracted object allows it to be tracked robustly, since the ASM is resilient to object occlusion. Compared with object tracking algorithms based on visual cameras, the proposed technique, which uses depth information, is more efficient and more robust. Experimental results show that the proposed depth-sensor-based ASM algorithm can track objects robustly in real time.
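
A minimal sketch of the extraction stage that precedes the ASM fitting is given below, substituting a simple depth-band threshold for the paper's horizontal/vertical component separation; the depth band, structuring element, and minimum area are assumed values, and the ASM fitting itself is not shown.

    import numpy as np
    from scipy import ndimage

    def extract_object_mask(depth, near=0.5, far=2.5, min_area=500):
        """Return a boolean mask of the largest connected region inside the chosen depth band."""
        mask = (depth > near) & (depth < far)                              # keep the band containing the object
        mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))     # morphology: remove speckle noise
        labels, n = ndimage.label(mask)                                    # connected-component labelling
        if n == 0:
            return np.zeros_like(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        biggest = int(np.argmax(sizes)) + 1                                # keep the largest component
        return (labels == biggest) if sizes.max() >= min_area else np.zeros_like(mask)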

High-quality 3-D Video Generation using Scale Space (계위 공간을 이용한 고품질 3차원 비디오 생성 방법 -다단계 계위공간 개념을 이용해 깊이맵의 경계영역을 정제하는 고화질 복합형 카메라 시스템과 고품질 3차원 스캐너를 결합하여 고품질 깊이맵을 생성하는 방법-)

  • Lee, Eun-Kyung;Jung, Young-Ki;Ho, Yo-Sung
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.620-624 / 2009
  • In this paper, we present a new camera system that combines a high-quality 3-D scanner with a hybrid camera system to generate multiview video-plus-depth. To obtain the 3-D video, we first acquire depth information for the background region from the 3-D scanner and the depth map for the foreground area from the hybrid camera system. Initial depths for each view image are estimated by 3-D warping with this depth information. Multiview depth estimation using the initial depths is then carried out to obtain an initial disparity map for each view, and the initial disparity maps are corrected with a belief propagation algorithm to produce high-quality multiview disparity maps. Finally, the depths along the foreground boundary are refined using extracted edge information. Experimental results show that the proposed depth map generation method produces 3-D video with more accurate multiview depths and more natural 3-D views than previous works.
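
The 3-D warping step used above to propagate depth into each view can be sketched as back-project, transform, and re-project. The intrinsics and pose in the sketch below are placeholders, and splatting, z-buffering, and hole handling are omitted.

    import numpy as np

    def warp_depth(depth, K_ref, K_tgt, R, t):
        """depth: reference-view depth map (m); R, t: pose of the target camera w.r.t. the reference."""
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])        # homogeneous pixel coordinates
        X = np.linalg.inv(K_ref) @ pix * depth.ravel()                # back-projected 3-D points
        Xt = R @ X + t[:, None]                                       # points in the target camera frame
        p = K_tgt @ Xt
        u2 = (p[0] / p[2]).astype(int)
        v2 = (p[1] / p[2]).astype(int)
        z2 = p[2]
        warped = np.zeros_like(depth)
        ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (z2 > 0)
        warped[v2[ok], u2[ok]] = z2[ok]                               # nearest-pixel forward warp
        return warped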


Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.29-35 / 2012
  • Depth image-based rendering (DIBR) is a technique for rendering virtual views from a color image and its corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called dis-occlusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which is based on infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map, warp it to the virtual viewpoint, and apply median filtering to reduce truncation errors. The color image is then back-projected to the virtual viewpoint using the warped depth map. To fill the remaining holes caused by dis-occlusion, we perform a background-based image in-painting operation and finally obtain a synthesized image without dis-occlusions. Experimental results show that the proposed algorithm generates natural images in real time.
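
A minimal sketch of this synthesis loop for a purely horizontal camera shift, assuming a pinhole model with focal length f and baseline b: forward-warp the depth, median-filter it, back-project the color with the warped depth, and fill the remaining dis-occlusions from one side. The hole filling here is a crude nearest-neighbour fill rather than the paper's background-based in-painting, occlusion handling is omitted, and the camera parameters are illustrative.

    import numpy as np
    from scipy.ndimage import median_filter

    def synthesize_view(color, depth, f=525.0, baseline=0.05):
        """Render a virtual view shifted horizontally by `baseline` from one color+depth pair."""
        h, w = depth.shape
        xs = np.arange(w)
        shift = f * baseline / np.maximum(depth, 1e-6)            # per-pixel disparity
        warped_depth = np.zeros_like(depth)
        for y in range(h):                                        # forward-warp the depth map
            xt = np.round(xs - shift[y]).astype(int)
            ok = (xt >= 0) & (xt < w)
            warped_depth[y, xt[ok]] = depth[y, ok]
        warped_depth = median_filter(warped_depth, size=3)        # suppress warping cracks
        virtual = np.zeros_like(color)
        shift_v = f * baseline / np.maximum(warped_depth, 1e-6)
        for y in range(h):                                        # back-project color with the warped depth
            xs_src = np.round(xs + shift_v[y]).astype(int)
            valid = (warped_depth[y] > 0) & (xs_src >= 0) & (xs_src < w)
            virtual[y, valid] = color[y, xs_src[valid]]
            for x in np.where(~valid)[0][::-1]:                   # fill dis-occlusions from the
                virtual[y, x] = virtual[y, min(x + 1, w - 1)]     # neighbouring (background) side
        return virtual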
