• Title/Summary/Keyword: depth detection

Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won;Jeong, Chi-Seo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence / v.9 no.3 / pp.192-198 / 2020
  • The 3D point cloud is a key technology for object detection in virtual reality and augmented reality. To apply object detection to a wider range of areas, 3D information and even color information must be obtained more easily. In general, a 3D point cloud is acquired with an expensive scanner device, whereas 3D and characteristic information such as RGB and depth can be obtained easily on a mobile device. A GNN (Graph Neural Network) can perform object detection based on these characteristics. In this paper, we generate RGB and RGBD inputs by extracting basic and characteristic information from the KITTI dataset, which is widely used for 3D point cloud object detection. We build an RGB-GNN from the color information obtainable on mobile devices and an RGBD-GNN that additionally characterizes depth, the most widely used LiDAR characteristic, and compare and analyze their object detection accuracy.
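
A minimal sketch of the pipeline this abstract describes, assuming a k-nearest-neighbour graph over the 3D points and a single mean-aggregation message-passing layer over per-point RGB-D features (the toy data, layer width, and function names are illustrative, not the authors' implementation):

```python
import numpy as np

def knn_graph(points, k=4):
    """Return an (N, k) index array of each point's k nearest neighbours."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]

def gnn_layer(features, neighbours, weight):
    """One mean-aggregation message-passing step followed by a linear map + ReLU."""
    aggregated = features[neighbours].mean(axis=1)             # (N, F) neighbour average
    return np.maximum(0.0, (features + aggregated) @ weight)

# Toy point cloud: xyz positions plus per-point (r, g, b, depth) features.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 10, size=(100, 3))
rgbd = rng.uniform(0, 1, size=(100, 4))

nbrs = knn_graph(xyz, k=4)
w = rng.normal(scale=0.1, size=(4, 16))      # hidden width chosen arbitrarily
hidden = gnn_layer(rgbd, nbrs, w)            # (100, 16) per-point embeddings
print(hidden.shape)
```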

2D-to-3D Conversion System using Depth Map Enhancement

  • Chen, Ju-Chin;Huang, Meng-yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1159-1181 / 2016
  • This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for humans. Linear and atmospheric perspective cues, which complement each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from the depth cues, a direction angle of the image is estimated and the depth gradient corresponding to that angle is integrated with superpixels to obtain the depth map. However, the stereoscopic effect of views synthesized from this depth map alone is limited and unsatisfying to viewers. To obtain a more impressive visual effect, the viewer's main focus is taken into account: salient object detection is performed to find the region of visual attention, and the depth map is then refined by locally modifying the depth values within that region. The refinement process not only maintains global depth consistency by correcting non-uniform depth values but also enhances the stereoscopic effect. Experimental results show that in subjective evaluation, the degree of satisfaction with the proposed method is approximately 7% higher than that of both existing commercial conversion software and a state-of-the-art approach.
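
A rough sketch of the depth-gradient idea described above, assuming the direction angle is already estimated and using square blocks as a stand-in for superpixels; the salient-region refinement is reduced to a constant offset inside a hypothetical centre box:

```python
import numpy as np

def gradient_depth_map(h, w, angle_deg):
    """Planar depth that increases along the estimated direction angle."""
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    theta = np.deg2rad(angle_deg)
    ramp = xx * np.cos(theta) + yy * np.sin(theta)
    return (ramp - ramp.min()) / (np.ptp(ramp) + 1e-9)   # normalise to [0, 1]

def average_over_blocks(depth, block=16):
    """Stand-in for superpixel integration: average depth inside square blocks."""
    out = depth.copy()
    h, w = depth.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y+block, x:x+block] = depth[y:y+block, x:x+block].mean()
    return out

h, w = 240, 320
depth = average_over_blocks(gradient_depth_map(h, w, angle_deg=30.0))

# Refinement: pull a hypothetical salient centre box forward by a fixed offset.
salient = np.zeros((h, w), dtype=bool)
salient[80:160, 120:200] = True
depth[salient] = np.clip(depth[salient] + 0.2, 0.0, 1.0)
```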

Implementation of Vehicle Plate Recognition Using Depth Camera

  • Choi, Eun-seok;Kwon, Soon-kak
    • Journal of Multimedia Information System / v.6 no.3 / pp.119-124 / 2019
  • In this paper, a method of detecting vehicle plates from depth pictures is proposed. A vehicle plate can be recognized by detecting plane areas. First, plane factors are calculated for each square block. After that, blocks belonging to the same plane area are grouped by comparing neighboring blocks to determine whether they lie on similar planes. The width and height of each detected plane area are then obtained, and if they match those of an actual vehicle plate, the area is recognized as a vehicle plate. Simulation results show that the recognition rate of the proposed method is about 87.8%.
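
A simplified illustration of block-wise plane detection in a depth image, in the spirit of the description above: fit plane factors per square block, then group neighbouring blocks whose factors are close (the block size and tolerance are arbitrary choices, not the paper's values):

```python
import numpy as np

def plane_factors(depth_block):
    """Least-squares fit of z = a*x + b*y + c inside one square block."""
    h, w = depth_block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth_block.ravel(), rcond=None)
    return coeffs                                     # (a, b, c)

def group_plane_blocks(depth, block=16, tol=0.05):
    """Give each block a label, merging with a left/upper neighbour on a similar plane."""
    h, w = depth.shape
    gh, gw = h // block, w // block
    factors = np.array([[plane_factors(depth[y*block:(y+1)*block,
                                             x*block:(x+1)*block])
                         for x in range(gw)] for y in range(gh)])
    labels = -np.ones((gh, gw), dtype=int)
    next_label = 0
    for y in range(gh):
        for x in range(gw):
            for ny, nx in ((y, x - 1), (y - 1, x)):   # compare with prior neighbours
                if ny >= 0 and nx >= 0 and \
                   np.linalg.norm(factors[y, x] - factors[ny, nx]) < tol:
                    labels[y, x] = labels[ny, nx]
                    break
            if labels[y, x] < 0:
                labels[y, x] = next_label
                next_label += 1
    return labels
```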

Real-Time Stereoscopic Image Conversion Using Motion Detection and Region Segmentation (움직임 검출과 영역 분할을 이용한 실시간 입체 영상 변환)

  • Kwon Byong-Heon;Seo Burm-suk
    • Journal of Digital Contents Society / v.6 no.3 / pp.157-162 / 2005
  • In this paper, we propose real-time conversion methods that convert 2-D moving images into stereoscopic images using a depth map formed from motion detection and region segmentation of the image. The depth map, which represents the depth information of the image, and the proposed absolute parallax image are used as measures of qualitative evaluation. We compared depth information, parallax processing, and segmentation between objects at different depths for the proposed and conventional methods. As a result, we confirmed that the proposed method can offer a realistic stereoscopic effect regardless of the direction and velocity of moving objects in a moving image.
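
A toy sketch of turning a depth map into a stereo pair by horizontal parallax shifting, which is the basic operation behind the stereoscopic conversion described above (hole handling is deliberately naive, and the toy frame and depth ramp are illustrative):

```python
import numpy as np

def synthesize_stereo(image, depth, max_shift=8):
    """Shift each pixel horizontally in proportion to its depth value.

    `depth` is assumed normalised to [0, 1]; larger values get larger parallax.
    Pixels left unassigned by the forward mapping simply stay zero here.
    """
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        shift = (depth[y] * max_shift).astype(int)
        left[y, np.clip(cols - shift, 0, w - 1)] = image[y, cols]
        right[y, np.clip(cols + shift, 0, w - 1)] = image[y, cols]
    return left, right

rng = np.random.default_rng(1)
frame = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
toy_depth = np.tile(np.linspace(0.0, 1.0, 160), (120, 1))
left_view, right_view = synthesize_stereo(frame, toy_depth)
```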

Improving Detection Range for Short Baseline Stereo Cameras Using Convolutional Neural Networks and Keypoint Matching (컨볼루션 뉴럴 네트워크와 키포인트 매칭을 이용한 짧은 베이스라인 스테레오 카메라의 거리 센싱 능력 향상)

  • Byungjae Park
    • Journal of Sensor Science and Technology / v.33 no.2 / pp.98-104 / 2024
  • This study proposes a method to overcome the limited detection range of short-baseline stereo cameras (SBSCs). The proposed method includes two steps: (1) predicting an unscaled initial depth using monocular depth estimation (MDE) and (2) adjusting the unscaled initial depth by a scale factor. The scale factor is computed by triangulating the sparse visual keypoints extracted from the left and right images of the SBSC. The proposed method allows the use of any pre-trained MDE model without the need for additional training or data collection, making it efficient even when considering the computational constraints of small platforms. Using an open dataset, the performance of the proposed method was demonstrated by comparing it with other conventional stereo-based depth estimation methods.
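
A sketch of the two-step idea in this abstract, assuming OpenCV ORB keypoints for the sparse stereo matches and a median depth ratio as the scale factor; the function name, thresholds, and the assumption that the MDE map shares the left image's resolution are illustrative, not the author's code:

```python
import cv2
import numpy as np

def scale_mde_depth(left_img, right_img, mde_depth, focal_px, baseline_m):
    """Rescale an unscaled monocular depth map using sparse stereo keypoints."""
    orb = cv2.ORB_create(1000)
    kp_l, des_l = orb.detectAndCompute(left_img, None)
    kp_r, des_r = orb.detectAndCompute(right_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

    ratios = []
    for m in matches:
        (xl, yl), (xr, _) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        disparity = xl - xr
        if disparity <= 1.0:                            # skip unreliable matches
            continue
        metric_z = focal_px * baseline_m / disparity    # triangulated depth
        ratios.append(metric_z / mde_depth[int(yl), int(xl)])
    if not ratios:
        raise RuntimeError("no reliable keypoint matches for scale estimation")
    return mde_depth * np.median(ratios)                # robust scale factor
```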

A Fast and Accurate Face Detection and Tracking Method by using Depth Information (깊이정보를 이용한 고속 고정밀 얼굴검출 및 추적 방법)

  • Bae, Yun-Jin;Choi, Hyun-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.7A / pp.586-599 / 2012
  • This paper proposes a fast face detection and tracking method that uses depth images as well as RGB images. It consists of a face detection procedure and a face tracking procedure. The face detection method basically uses an existing method, AdaBoost, but reduces the size of the search area by using the depth image. The proposed face tracking method uses a template matching technique and incorporates an early-termination scheme to further reduce the execution time. Experimental results showed that the proposed face detection method takes only about 39% of the execution time of the existing method, and the proposed tracking method takes only 2.48 ms per frame at 640×480 resolution. In terms of accuracy, the proposed detection method showed a slightly lower detection ratio, but its error ratio, i.e., the rate of detections that are not actually faces, was only about 38% of that of the previous method. The proposed face tracking method turned out to have a trade-off relationship between execution time and accuracy; in all cases except one special case, the tracking error ratio is as low as about 1%. Therefore, we expect the proposed face detection and tracking methods to be used individually or in combination in many applications that need fast execution and exact detection or tracking.
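
A rough sketch of the two ingredients described above: a depth-gated search mask and SAD template matching with an early-termination check. The depth band, array types (float arrays; convert uint8 frames with `.astype(np.float32)` first), and scan order are assumptions for illustration:

```python
import numpy as np

def depth_search_mask(depth_mm, near=400, far=1500):
    """Restrict the face search area to a plausible depth band (values in mm)."""
    return (depth_mm > near) & (depth_mm < far)

def match_with_early_termination(frame, template, best_so_far=np.inf):
    """SAD template matching that abandons a candidate once it exceeds the best cost."""
    th, tw = template.shape
    best_pos, best_cost = None, best_so_far
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            cost = 0.0
            for row in range(th):                       # accumulate row by row
                cost += np.abs(frame[y + row, x:x + tw] - template[row]).sum()
                if cost >= best_cost:                   # early termination
                    break
            else:
                best_pos, best_cost = (y, x), cost
    return best_pos, best_cost
```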

Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul;Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing / v.3 no.4 / pp.195-204 / 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern distortion analysis from a structured light system identifies the object with the greatest depth relative to its background. An automated assembling robot should preferentially select and pick the object with the greatest depth to reduce physical harm during the picking action of the robot arm. Object detection is then combined with the depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to an automated assembling robot, equipped with a laser-based proximity sensor, for picking up and placing an object in the intended place. The depth evaluation process is accelerated so that an image frame can be processed with the simplest experimental setup, which consists of a single camera and a projector. The depth evaluation experiments required 31 ms to 32 ms per frame, which is suitable for a robot vision system equipped with a 30-frames-per-second camera.
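
A minimal sketch of structured-light depth triangulation and a hypothetical helper that flags pixels standing out most from the background, loosely following the description above (the shift map and margin are assumed inputs, not the paper's pattern-distortion analysis):

```python
import numpy as np

def depth_from_pattern_shift(shift_px, focal_px, baseline_m):
    """Triangulate depth from the horizontal shift of a projected pattern.

    `shift_px` is a per-pixel displacement between the projected and observed
    pattern, used here as a stand-in for pattern-distortion analysis.
    """
    shift = np.asarray(shift_px, dtype=float)
    depth = np.full(shift.shape, np.inf)
    valid = shift > 0
    depth[valid] = focal_px * baseline_m / shift[valid]
    return depth

def protruding_object_mask(depth, background_depth, margin_m=0.05):
    """Mark pixels closer to the camera than the background by at least `margin_m`."""
    return depth < (background_depth - margin_m)
```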

Depth-map Preprocessing Algorithm Using Two Step Boundary Detection for Boundary Noise Removal (경계 잡음 제거를 위한 2단계 경계 탐색 기반의 깊이지도 전처리 알고리즘)

  • Pak, Young-Gil;Kim, Jun-Ho;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.14 no.12 / pp.555-564 / 2014
  • The boundary noise in image synthesis using DIBR consists of noisy pixels that spill from foreground objects into the background region. It is generated mainly by edge misalignment between the reference image and the depth map, or by blurred edges in the reference image. Since hole areas are generally filled with neighboring pixels, boundary noise adjacent to a hole is the main cause of quality degradation in synthesized images. To solve this problem, a new boundary noise removal algorithm based on preprocessing of the depth map is proposed in this paper. The most common way to eliminate boundary noise caused by boundary misalignment is to modify the depth map so that its boundary matches that of the reference image. Most conventional methods, however, show poor boundary detection performance, especially around blurred edges, because they rely on a simple boundary search based on the signal gradient. In the proposed method, a two-step hierarchical approach to boundary detection is adopted, which enables effective detection of the boundary between the transition and background regions. Experimental results show that the proposed method outperforms conventional ones both subjectively and objectively.
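
A single-scanline sketch of snapping a depth-map boundary to the nearest strong image edge, a simplified stand-in for the two-step hierarchical boundary search described above (the search window and fill rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def align_depth_boundary(depth_row, image_row, search=5):
    """Snap the strongest depth edge in one scanline to a nearby strong image edge."""
    d_grad = np.abs(np.diff(depth_row.astype(float)))
    i_grad = np.abs(np.diff(image_row.astype(float)))
    edge = int(np.argmax(d_grad))                     # coarse depth boundary
    lo, hi = max(0, edge - search), min(len(i_grad), edge + search + 1)
    refined = lo + int(np.argmax(i_grad[lo:hi]))      # strongest image edge nearby
    out = depth_row.copy()
    if refined > edge:                                # extend the left-side depth value
        out[edge + 1:refined + 1] = depth_row[edge]
    elif refined < edge:                              # extend the right-side depth value
        out[refined + 1:edge + 1] = depth_row[edge + 1]
    return out
```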

Structural Crack Detection Using Deep Learning: An In-depth Review

  • Safran Khan;Abdullah Jan;Suyoung Seo
    • Korean Journal of Remote Sensing / v.39 no.4 / pp.371-393 / 2023
  • Crack detection in structures plays a vital role in ensuring their safety, durability, and reliability. Traditional crack detection methods often require significant manual inspection, which is laborious, expensive, and prone to human error. Deep learning algorithms, which can learn intricate features from large-scale datasets, have recently emerged as a viable option for automated crack detection. This study presents an in-depth review of crack detection methods used to date, including image processing, traditional machine learning, and deep learning methods. Specifically, it provides a comparative analysis of crack detection methods using deep learning, aiming to provide insights into the advancements, challenges, and future directions in this field. To facilitate comparative analysis, this study surveys publicly available crack detection datasets and benchmarks commonly used in deep learning research. Evaluation metrics employed to assess the performance of different models are discussed, with emphasis on accuracy, precision, recall, and F1-score. Moreover, this study provides an in-depth analysis of recent studies and highlights key findings, including state-of-the-art techniques, novel architectures, and innovative approaches that address the shortcomings of existing methods. Finally, this study summarizes the key insights gained from the comparative analysis, highlighting the potential of deep learning to revolutionize methodologies for crack detection. The findings of this research will serve as a valuable resource for researchers in the field, aiding them in selecting appropriate methods for crack detection and inspiring further advancements in this domain.
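
Since the review emphasizes accuracy, precision, recall, and F1-score, a minimal sketch of how these metrics are computed for a binary crack mask (the pixel-wise formulation and function name are illustrative, not taken from the paper):

```python
import numpy as np

def crack_segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, precision, recall, and F1-score for binary crack masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (pred == truth).mean()
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```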

Real-time Hand Region Detection based on Cascade using Depth Information (깊이정보를 이용한 케스케이드 방식의 실시간 손 영역 검출)

  • Joo, Sung Il;Weon, Sun Hee;Choi, Hyung Il
    • KIPS Transactions on Software and Data Engineering / v.2 no.10 / pp.713-722 / 2013
  • This paper proposes a cascade-based method that uses depth information to detect the hand region in real time. To ensure stable and fast detection of the hand region even under lighting changes in the test environment, this study uses only depth-based features and detects the hand region with a classifier built from boosting and cascading. First, to extract features from depth information alone, we compute the difference between the depth value at the center of the input image and the average depth value within each segmented block; to ensure that hand regions of all sizes are detected, we use the central depth value and a second-order linear model to predict the size of the hand region. The cascade method is applied to perform training and recognition on the features extracted from the hand region. The proposed classifier maintains accuracy and enhances speed by composing each stage of a single weak classifier and selecting the threshold that satisfies the target detection rate with the lowest error rate during training. The trained classifier classifies candidate hand regions and determines the final hand region in a final merging stage. Lastly, to verify performance, we conduct quantitative and qualitative comparative analyses with various conventional AdaBoost algorithms to confirm the efficiency of the proposed hand region detection algorithm.
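
A rough sketch of the pieces described above: the centre-versus-block-mean depth feature, a hypothetical second-order size model (coefficients are made up for illustration), and a cascade in which each stage is a single weak classifier that rejects a candidate as soon as it fails:

```python
import numpy as np

def center_block_feature(depth, cx, cy, block_half):
    """Difference between the centre depth and the mean depth of a surrounding block."""
    y0, y1 = max(0, cy - block_half), cy + block_half + 1
    x0, x1 = max(0, cx - block_half), cx + block_half + 1
    return float(depth[cy, cx]) - float(depth[y0:y1, x0:x1].mean())

def predicted_hand_size(center_depth_mm, a=-0.00002, b=-0.05, c=120.0):
    """Hypothetical second-order model: window size shrinks as depth grows."""
    return max(8.0, a * center_depth_mm**2 + b * center_depth_mm + c)

def cascade_classify(feature_values, stage_thresholds):
    """Each stage is one weak classifier; reject the candidate at the first failed stage."""
    for value, threshold in zip(feature_values, stage_thresholds):
        if value < threshold:
            return False                     # rejected at this stage
    return True                              # passed every stage: hand candidate
```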