• Title/Summary/Keyword: 3D Depth Camera


3D Depth Measurement System-based Nonlinear Trail Recognition for Mobile Robots (3 차원 거리 측정 장치 기반 이동로봇용 비선형 도로 인식)

  • Kim, Jong-Man;Kim, Won-Sop;Shin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2007.06a
    • /
    • pp.517-518
    • /
    • 2007
  • A method to recognize unpaved road regions using a 3D depth measurement system is proposed for mobile robots. For the autonomous maneuvering of mobile robots, recognizing obstacles or the road region is an essential task. In this paper, a 3D depth measurement system composed of a rotating mirror, a line laser, and a mono-camera is employed to detect depth: the laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined. The obtained depth information is converted into an image. The depth image of the road region appears even and planar, while that of the off-road region is irregular or textured; the problem therefore reduces to texture identification. The road region is detected with a simple spatial differentiation technique that finds the plainly textured area. Identification results for diverse nonlinear trail situations are included in this paper.
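
The spatial-differentiation idea can be sketched as follows (a hypothetical reconstruction, not the authors' exact operator or threshold): low differential energy in a depth-image patch suggests the flat road surface, while high energy suggests off-road texture.

```python
def texture_energy(depth, r, c, win=3):
    """Mean absolute spatial difference (horizontal + vertical) in a win x win patch."""
    total, count = 0.0, 0
    for i in range(r, r + win):
        for j in range(c, c + win):
            total += abs(depth[i][j] - depth[i][j + 1])  # horizontal differential
            total += abs(depth[i][j] - depth[i + 1][j])  # vertical differential
            count += 2
    return total / count

def classify_patch(depth, r, c, threshold=0.5, win=3):
    """Smooth, planar depth -> road; irregular, textured depth -> off-road."""
    return "road" if texture_energy(depth, r, c, win) < threshold else "off-road"

# Synthetic depth patches: a flat (road-like) one and a rough (off-road) one.
flat = [[10.0] * 8 for _ in range(8)]
rough = [[10.0 + (i * 7 + j * 13) % 5 for j in range(8)] for i in range(8)]
```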


Detection of Moving Objects using Depth Frame Data of 3D Sensor (3D센서의 Depth frame 데이터를 이용한 이동물체 감지)

  • Lee, Seong-Ho;Han, Kyong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.5
    • /
    • pp.243-248
    • /
    • 2014
  • This study investigates how to detect areas of object movement with the Kinect's depth frames, which provide 3D information regardless of external light sources. To remove noise along object boundaries in the depth information received from the sensor, a blurring technique was applied to the x and y pixel coordinates and a frequency filter to the z coordinate. In addition, a clustering filter based on the amount of change in adjacent pixels was applied to extract the areas of moving objects. The system was also designed to detect movements faster than a threshold set by the filter settings, making it applicable to mobile robots. Detected movements can be applied to security systems when delivered to remote locations over a network, and can be expanded to large-scale data through the associated information.
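
The change-detection and neighbour-clustering steps can be sketched as below (hypothetical thresholds and function names; the paper's exact filters are not reproduced here): frame differencing flags pixels whose depth changed, and a clustering filter drops isolated, noise-like detections.

```python
def moving_mask(prev, curr, delta=50):
    """Pixels whose depth changed by more than `delta` (e.g. mm) between frames."""
    return [[abs(curr[i][j] - prev[i][j]) > delta for j in range(len(prev[0]))]
            for i in range(len(prev))]

def cluster_filter(mask, min_neighbors=2):
    """Keep a changed pixel only if enough 4-neighbours also changed (noise removal)."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            n = sum(mask[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w)
            out[i][j] = n >= min_neighbors
    return out

# Example: a 3x3 object moves closer; one isolated pixel is sensor noise.
prev = [[2000] * 6 for _ in range(6)]
curr = [row[:] for row in prev]
for i in range(1, 4):
    for j in range(1, 4):
        curr[i][j] = 1500
curr[5][5] = 1000                        # isolated depth-noise pixel
moved = cluster_filter(moving_mask(prev, curr))
```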

Depth Map Enhancement and Up-sampling Techniques of 3D Images for the Smart Media (스마트미디어를 위한 입체 영상의 깊이맵 화질 향상 및 업샘플링 기술)

  • Jung, Jae-Il;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.22-28
    • /
    • 2012
  • As smart media becomes more popular, the demand for high-quality 3D images and depth maps is increasing. However, the performance of current depth-acquisition technologies is not yet sufficient: depth maps from stereo matching methods have low accuracy in homogeneous regions, and depth maps from depth cameras are noisy and low-resolution due to technical limitations. In this paper, we introduce the state-of-the-art algorithms for depth map enhancement and up-sampling, from conventional methods that use only depth maps to the latest algorithms that refer to both depth maps and their corresponding color images. We also present depth map enhancement algorithms for hybrid camera systems in detail.
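
A representative color-guided up-sampling technique in this family is joint bilateral upsampling; a minimal grayscale sketch (assuming a single-channel guide image and Gaussian spatial/range weights; not the survey's specific algorithms) follows:

```python
import math

def joint_bilateral_upsample(depth_lo, color_hi, scale, sigma_s=1.0, sigma_r=20.0):
    """Upsample depth_lo by `scale`, using color_hi edges to keep depth edges sharp."""
    h_lo, w_lo = len(depth_lo), len(depth_lo[0])
    h_hi, w_hi = h_lo * scale, w_lo * scale
    out = [[0.0] * w_hi for _ in range(h_hi)]
    for y in range(h_hi):
        for x in range(w_hi):
            cy, cx = y / scale, x / scale          # position in the low-res grid
            acc = wsum = 0.0
            for ly in range(h_lo):
                for lx in range(w_lo):
                    ds = (ly - cy) ** 2 + (lx - cx) ** 2                     # spatial term
                    dc = (color_hi[y][x] - color_hi[ly * scale][lx * scale]) ** 2  # range term
                    w = math.exp(-ds / (2 * sigma_s ** 2) - dc / (2 * sigma_r ** 2))
                    acc += w * depth_lo[ly][lx]
                    wsum += w
            out[y][x] = acc / wsum
    return out

# Low-res depth with an edge, and a high-res grayscale guide with the same edge.
depth_lo = [[1.0 if x < 2 else 10.0 for x in range(4)] for _ in range(4)]
color_hi = [[0 if x < 4 else 255 for x in range(8)] for _ in range(8)]
depth_hi = joint_bilateral_upsample(depth_lo, color_hi, scale=2)
```

The color term suppresses depth samples from across the guide-image edge, which is what keeps the upsampled depth edge aligned with the color edge.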


Depth Acquisition Techniques for 3D Contents Generation (3차원 콘텐츠 제작을 위한 깊이 정보 획득 기술)

  • Jang, Woo-Seok;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.15-21
    • /
    • 2012
  • Depth information is necessary for generating various three-dimensional contents. Depth acquisition techniques can be broadly categorized into two approaches, active and passive depth sensing, depending on how depth information is obtained. In this paper, we survey several ways of acquiring depth. We present not only depth acquisition methods based on each approach, but also hybrid methods that combine both approaches to compensate for the drawbacks of each. Furthermore, we introduce several matching cost functions and post-processing techniques that enhance temporal consistency and reduce the flickering artifacts and user discomfort caused by inaccurate depth estimation in 3D video.
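
As an illustration of the kind of matching cost function surveyed here, a window-based sum-of-absolute-differences (SAD) cost with winner-takes-all disparity selection can be sketched (a generic textbook formulation, not the paper's specific cost functions):

```python
def sad_cost(left, right, x, y, d, win=1):
    """SAD between a left-image patch at (x, y) and the right patch shifted by disparity d."""
    cost = 0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            cost += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
    return cost

def best_disparity(left, right, x, y, max_d, win=1):
    """Winner-takes-all: the disparity with the minimum matching cost."""
    return min(range(max_d + 1), key=lambda d: sad_cost(left, right, x, y, d, win))

# Rectified synthetic pair: the right view is the left view shifted by 2 pixels.
left = [[(x * 37 + y * 17) % 100 for x in range(12)] for y in range(8)]
right = [[left[y][x + 2] for x in range(10)] for y in range(8)]
```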


Panoramic 3D Reconstruction of an Indoor Scene Using Depth and Color Images Acquired from A Multi-view Camera (다시점 카메라로부터 획득된 깊이 및 컬러 영상을 이용한 실내환경의 파노라믹 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • HCI Society of Korea: Conference Proceedings
    • /
    • 2006.02a
    • /
    • pp.24-32
    • /
    • 2006
  • This paper proposes a novel method for 3D reconstruction of an indoor environment using partial 3D point clouds acquired from a multi-view camera. Various disparity estimation algorithms have been proposed to date, which means a variety of depth images are available; this paper therefore addresses indoor reconstruction with a generalized multi-view camera. First, the depth images are refined: 3D points with large temporal variation are removed based on the temporal characteristics of the point clouds, and empty regions are filled by referring to neighboring 3D points based on their spatial characteristics. Second, the 3D point clouds from two successive viewpoints are projected onto the same image plane, and correspondences are found with a modified KLT (Kanade-Lucas-Tomasi) feature tracker; precise registration is then performed by minimizing the distance error between corresponding points. Finally, the positions of the 3D points are fine-tuned using the 3D point clouds acquired from several viewpoints together with a pair of 2D images, producing the final 3D model. The proposed method reduces computational complexity by finding correspondences on the 2D image plane, and works effectively even when the precision of the 3D data is low. Moreover, by using a multi-view camera, an indoor environment can be reconstructed from the depth and color images of only a few viewpoints. The proposed method can be used not only for navigation but also for generating 3D models for interaction.
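
The depth-refinement step (drop temporally unstable points, then fill the holes from spatial neighbours) can be sketched as below; the variance threshold and helper names are illustrative assumptions, not the paper's parameters.

```python
def refine_depth(frames, var_thresh=4.0):
    """Average depth over frames; drop pixels with high temporal variance,
    then fill the resulting holes from valid 4-neighbours."""
    h, w, n = len(frames[0]), len(frames[0][0]), len(frames)
    out = [[None] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [f[i][j] for f in frames]
            mean = sum(vals) / n
            var = sum((v - mean) ** 2 for v in vals) / n
            if var <= var_thresh:            # temporally stable pixel: keep the mean
                out[i][j] = mean
    for i in range(h):                        # spatial hole filling
        for j in range(w):
            if out[i][j] is None:
                nb = [out[i + di][j + dj]
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= i + di < h and 0 <= j + dj < w
                      and out[i + di][j + dj] is not None]
                if nb:
                    out[i][j] = sum(nb) / len(nb)
    return out

# Three noisy depth frames: pixel (1,1) flickers, everything else is stable.
frames = [[[5.0] * 3 for _ in range(3)] for _ in range(3)]
frames[0][1][1], frames[1][1][1], frames[2][1][1] = 1.0, 9.0, 5.0
refined = refine_depth(frames)
```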


Depth Map Generation Using Infocused and Defocused Images (초점 영상 및 비초점 영상으로부터 깊이맵을 생성하는 방법)

  • Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.19 no.3
    • /
    • pp.362-371
    • /
    • 2014
  • Blur variation caused by camera defocusing provides a useful cue for depth estimation. Depth from Defocus (DFD) techniques calculate the amount of blur present in an image, exploiting the fact that the blur amount is directly related to scene depth. Conventional DFD methods use two defocused images, which can yield a low-quality estimated depth map as well as a poorly reconstructed in-focus image. To solve this, a new DFD methodology based on an in-focus and a defocused image is proposed in this paper. In the proposed method, the outcome of Subbarao's DFD is combined with a novel edge blur estimation method so that improved blur estimation can be achieved. In addition, a saliency map mitigates the ill-posed nature of blur estimation in regions with low intensity variation. To validate the feasibility of the proposed method, twenty sets of in-focus and defocused images at 2K FHD resolution were acquired from a camera with focus control. The 3D stereoscopic images generated from the estimated depth maps and the input in-focus images delivered satisfactory 3D perception in terms of the spatial depth of scene objects.
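
The edge-blur cue behind DFD can be illustrated with a one-dimensional edge-width measure (a simplified stand-in for the paper's estimator): the wider the 10%–90% intensity transition across a step edge, the larger the defocus blur, and hence the farther the surface is from the focal plane.

```python
def edge_blur_width(row, lo_frac=0.1, hi_frac=0.9):
    """Width (pixels) of the 10%-90% intensity transition across a step edge."""
    lo, hi = min(row), max(row)
    t_lo = lo + lo_frac * (hi - lo)
    t_hi = lo + hi_frac * (hi - lo)
    start = next(i for i, v in enumerate(row) if v > t_lo)   # transition begins
    end = next(i for i, v in enumerate(row) if v >= t_hi)    # transition ends
    return end - start

sharp_edge = [0, 0, 0, 100, 100, 100]          # in-focus edge profile
blurred_edge = [0, 0, 20, 50, 80, 100, 100]    # defocused edge profile
```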

Registration of Dental Range Images from an Intraoral Scanner (Intraoral Scanner로 촬영된 치아 이미지의 정렬)

  • Ko, Min Soo;Park, Sang Chul
    • Korean Journal of Computational Design and Engineering
    • /
    • v.21 no.3
    • /
    • pp.296-305
    • /
    • 2016
  • This paper proposes a framework to automatically align dental range images captured by depth sensors such as the Microsoft Kinect. Aligning dental images from intraoral scanning is a difficult problem for applications that require an accurate model of dental-scan datasets with efficient computation; the most important requirement in a dental scanning system is the accuracy of the dental prosthesis. Previous approaches to intraoral scanning use a Z-buffer ICP algorithm for fast registration, but it is relatively inaccurate and may cause cumulative errors. This paper proposes an additional alignment step that starts from the rough result of the intraoral scanning alignment. It requires that each depth image in the set share some overlap with at least one other depth image. This research implements an automatic additional-alignment system that aligns all depth images into a complete model by computing a network of pairwise registrations, where the order of the individual transformations is derived from a global network and AABB bounding-box overlap detection.
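
The AABB overlap test used to decide which depth-image pairs are candidates for pairwise registration can be sketched as follows (hypothetical helper names; the paper's global-network construction is not reproduced):

```python
def aabb(points):
    """Axis-aligned bounding box of a 3D point set: (mins, maxs)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(box_a, box_b):
    """True if the two boxes intersect on every axis."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[k] <= b_max[k] and b_min[k] <= a_max[k] for k in range(3))

def candidate_pairs(scans):
    """Scan pairs worth running pairwise registration on, by bounding-box overlap."""
    boxes = [aabb(s) for s in scans]
    return [(i, j) for i in range(len(scans)) for j in range(i + 1, len(scans))
            if aabb_overlap(boxes[i], boxes[j])]

# Three scans: A and B overlap; C covers a separate region.
scan_a = [(0, 0, 0), (1, 1, 1)]
scan_b = [(0.5, 0.5, 0.5), (1.5, 1.5, 1.5)]
scan_c = [(5, 5, 5), (6, 6, 6)]
```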

Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park Young-Min;Chang Chu-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.247-256
    • /
    • 2005
  • One of the classic research problems in computer vision is stereo, i.e., the reconstruction of three-dimensional shape from two or more images. This paper deals with extracting depth information of non-rigid dynamic 3D scenes from general 2D video sequences taken by a monocular camera, such as movies, documentaries, and dramas. Block depths are extracted from the estimated block motions in two steps: (i) calculation of global parameters concerning the camera translation and focal length, using the locations of the blocks and their motions; (ii) calculation of each block's depth relative to the average image depth, using the global parameters together with the block's location and motion. Both singular and non-singular cases were tested with various video sequences. The resulting relative depths and ego-motion object shapes are virtually identical to human perception.
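
The core geometric relation in step (ii) is motion parallax: under pure camera translation, image motion magnitude is inversely proportional to scene depth. A minimal sketch of the per-block computation (an illustrative simplification, not the paper's full model with focal length and singular cases):

```python
def relative_depths(motions):
    """Depth of each block relative to the scene average, assuming pure camera
    translation: depth is proportional to 1/|motion|, normalised by the mean."""
    mags = [(u * u + v * v) ** 0.5 for u, v in motions]
    avg = sum(mags) / len(mags)
    return [avg / m for m in mags]

# Per-block motion vectors (pixels/frame): faster apparent motion = nearer block.
blocks = [(4.0, 0.0), (2.0, 0.0), (1.0, 0.0)]
depths = relative_depths(blocks)
```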

Deep Learning-based Multi-view Depth Estimation Methodology According to Contents' Characteristics (다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론)

  • Son, Hosung;Shin, Minjung;Kim, Joonsoo;Yun, Kug-jin;Cheong, Won-sik;Lee, Hyun-woo;Kang, Suk-ju
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.4-7
    • /
    • 2022
  • Recently, multi-view depth estimation methods that use deep learning networks for 3D scene reconstruction have gained considerable attention. Multi-view video contents have various characteristics according to their camera composition, environment, and setting; it is important to understand these characteristics and apply the proper depth estimation method for high-quality 3D reconstruction. The camera setting determines the physical distance, called the baseline, between camera viewpoints. Our proposed method focuses on deciding the appropriate depth estimation methodology according to the characteristics of the multi-view video content. Some limitations were found in the empirical results when existing multi-view depth estimation methods were applied to divergent or large-baseline datasets. We therefore verified the necessity of obtaining the proper number of source views and of applying a source-view selection algorithm suited to each dataset's capture environment. In conclusion, when implementing a deep learning-based depth estimation network for 3D scene reconstruction, the results of this study can be used as a guideline for finding adaptive depth estimation methods.
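
Source-view selection can be sketched as picking the views with the smallest baselines to the reference camera (a simplified distance-only criterion for illustration; the paper's algorithm depends on further content characteristics):

```python
def select_source_views(ref_pos, cam_positions, n_views=2):
    """Pick the n source views whose camera centres are closest to the reference
    view: small baselines give more overlap and more reliable matching."""
    dists = [(sum((a - b) ** 2 for a, b in zip(ref_pos, p)) ** 0.5, idx)
             for idx, p in enumerate(cam_positions)]
    return [idx for _, idx in sorted(dists)[:n_views]]

# Reference camera at the origin; four candidate source cameras along x.
ref = (0.0, 0.0, 0.0)
cams = [(1.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
chosen = select_source_views(ref, cams, n_views=2)
```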


Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. The fusion is obtained by registering the data captured by the two sensors, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides the distances between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; automatic driver-assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, processing is needed to generate a depthmap that corresponds to the RGB image. Experimental results are provided to validate the proposed approach.
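
Generating a depthmap aligned to the RGB image amounts to projecting the (extrinsically calibrated) LIDAR points through the camera's pinhole model. A minimal sketch, with assumed toy intrinsics rather than the paper's calibration:

```python
def lidar_to_depthmap(points_cam, fx, fy, cx, cy, width, height):
    """Project LIDAR points (already in the camera frame, metres) through a
    pinhole model, keeping the nearest depth per pixel -> sparse depthmap."""
    depth = [[0.0] * width for _ in range(height)]
    for x, y, z in points_cam:
        if z <= 0:
            continue                          # behind the camera
        u = int(round(fx * x / z + cx))       # pinhole projection to pixel (u, v)
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            if depth[v][u] == 0.0 or z < depth[v][u]:
                depth[v][u] = z               # keep the closest surface
    return depth

# Assumed intrinsics for a 64x48 toy image; points already in the camera frame.
pts = [(0.0, 0.0, 2.0), (0.0, 0.0, 5.0), (0.64, 0.0, 2.0)]
dm = lidar_to_depthmap(pts, fx=100, fy=100, cx=32, cy=24, width=64, height=48)
```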