• Title/Summary/Keyword: Vision/LiDAR


Integrated Navigation Design Using a Gimbaled Vision/LiDAR System with an Approximate Ground Description Model

  • Yun, Sukchang;Lee, Young Jae;Kim, Chang Joo;Sung, Sangkyung
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.14 no.4
    • /
    • pp.369-378
    • /
    • 2013
  • This paper presents a vision/LiDAR integrated navigation system that provides accurate relative navigation performance over a general ground surface in GNSS-denied environments. The ground surface encountered during flight is approximated as a piecewise continuous model with flat and sloped surface profiles. The presented system consists of a strapdown IMU and an aiding sensor block comprising a vision sensor and a LiDAR on a stabilized gimbal platform. Two-dimensional optical flow vectors from the vision sensor and range information from the LiDAR to the ground are used to overcome the performance limit of the tactical-grade inertial navigation solution without GNSS signals. In the filter realization, the INS error model is employed with measurement vectors containing two-dimensional velocity errors and one differenced altitude in the navigation frame. In computing the altitude difference, the ground slope angle is estimated in a novel way through two bisectional LiDAR signals, under a practical assumption representing a general ground profile. Finally, the overall integrated system is implemented within an extended Kalman filter framework, and its performance is demonstrated through a simulation study with an aircraft flight trajectory scenario.
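The two-beam slope estimate can be illustrated with simple planar geometry. The closed form below is a reconstruction under a locally flat sloped-ground assumption, not the paper's exact derivation; the beam half-angle `delta` and the range symbols are illustrative.

```python
import math

def slope_from_two_beams(r_fwd, r_aft, delta):
    """Estimate the ground slope angle (rad) from two LiDAR ranges taken
    at +delta and -delta (rad) about the gimbal-stabilized nadir,
    assuming a locally planar sloped ground."""
    # For a plane of slope alpha at height h below the sensor:
    #   r(+delta) = h / (cos(delta) + sin(delta)*tan(alpha))
    #   r(-delta) = h / (cos(delta) - sin(delta)*tan(alpha))
    # Eliminating h yields a closed-form slope estimate.
    k = (r_aft - r_fwd) / (r_aft + r_fwd)
    return math.atan(k / math.tan(delta))

# Synthetic check: 10 deg slope, 5 deg beam half-angle, 100 m altitude
alpha, delta, h = math.radians(10.0), math.radians(5.0), 100.0
r_fwd = h / (math.cos(delta) + math.sin(delta) * math.tan(alpha))
r_aft = h / (math.cos(delta) - math.sin(delta) * math.tan(alpha))
est = slope_from_two_beams(r_fwd, r_aft, delta)
print(math.degrees(est))  # recovers ~10.0
```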

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image-based object detection algorithm, YOLO, and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
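The first step, mapping a pixel bounding box into the vehicle's local ground frame with a homography, can be sketched as below. The matrix values are placeholders, not a calibrated homography from the paper.

```python
import numpy as np

def pixel_to_local(H, u, v):
    """Map a pixel coordinate to the vehicle's local ground plane
    using a 3x3 homography (projective transform)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]          # perspective division

# Illustrative homography: here it simply scales pixels to metres
# and shifts the origin (a real one is found by calibration).
H = np.array([[0.01, 0.0, -3.0],
              [0.0, 0.01, -2.0],
              [0.0,  0.0,  1.0]])

x, y = pixel_to_local(H, 640, 360)   # bottom-centre of a bounding box
print(x, y)  # -> 3.4 1.6 (metres in the local frame)
```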

Development of Parallel Signal Processing Algorithm for FMCW LiDAR based on FPGA (FPGA 고속병렬처리 구조의 FMCW LiDAR 신호처리 알고리즘 개발)

  • Jong-Heon Lee;Ji-Eun Choi;Jong-Pil La
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.2
    • /
    • pp.335-343
    • /
    • 2024
  • Real-time target signal processing techniques for FMCW LiDAR are described in this paper. FMCW LiDAR is gaining attention as the next-generation LiDAR for self-driving cars because of its robust detection even in adverse environmental conditions such as rain, snow, and fog, in addition to its long-range measurement capability. The hardware architecture required for high-speed data acquisition, data transfer, and parallel frequency-domain signal processing is described. A Fourier transform of the acquired time-domain signal is implemented on the FPGA in real time. The paper also details the CFAR algorithm used to ensure robust target detection in the transformed target spectrum, and elaborates on enhancing the frequency measurement resolution of the spectrum and converting the measured frequencies into range and velocity data. A 3D image is generated and displayed using the 2D scanner position and target distance data. Real-time target signal processing and the high-resolution image acquisition capability of FMCW LiDAR using the proposed FPGA-based parallel signal processing algorithms are verified.
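The detection step above is a standard CFAR pass over the FFT spectrum. A minimal cell-averaging (CA-CFAR) sketch, with illustrative window sizes and scale factor (the paper's exact CFAR variant and parameters are not given in the abstract):

```python
import numpy as np

def ca_cfar(spectrum, num_train=8, num_guard=2, scale=5.0):
    """Cell-averaging CFAR: flag cells whose power exceeds the local
    noise estimate (mean of training cells on both sides, excluding
    guard cells) by the given scale factor."""
    n = len(spectrum)
    half = num_train + num_guard
    detections = []
    for i in range(half, n - half):
        lead = spectrum[i - half : i - num_guard]
        trail = spectrum[i + num_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate([lead, trail]))
        if spectrum[i] > scale * noise:
            detections.append(i)
    return detections

# Synthetic spectrum: flat noise floor with one target peak
spec = np.ones(200)
spec[100] = 30.0
print(ca_cfar(spec))  # -> [100]
```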

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.232-238
    • /
    • 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying BVLOS. We use a vision sensor and LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can be degraded in specific situations depending on the characteristics of the sensor. If the detection result from a single sensor is missing or false, the detection accuracy needs to be complemented. To complement the accuracy of the single-sensor detection algorithms, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm in simulation using the Gazebo simulator.
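The final step, recovering a 3D position from a pixel detection plus a LiDAR range, can be sketched with a pinhole camera model. The intrinsics below are placeholders, and back-projecting along the camera ray is one common interpretation of combining pixel position with measured distance, not necessarily the paper's exact formulation.

```python
import numpy as np

def pixel_range_to_3d(u, v, rng, fx, fy, cx, cy):
    """Back-project a pixel through a pinhole camera model and scale
    the unit ray by the LiDAR-measured range to get a 3D position
    in the camera frame (x right, y down, z forward)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return rng * ray / np.linalg.norm(ray)

# Illustrative intrinsics (not from the paper)
fx = fy = 500.0
cx, cy = 320.0, 240.0
p = pixel_range_to_3d(320.0, 240.0, 10.0, fx, fy, cx, cy)
print(p)  # object on the optical axis, 10 m ahead -> [0. 0. 10.]
```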

Development of Wideband Frequency Modulated Laser for High Resolution FMCW LiDAR Sensor (고분해능 FMCW LiDAR 센서 구성을 위한 광대역 주파수변조 레이저 개발)

  • Jong-Pil La;Ji-Eun Choi
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1023-1030
    • /
    • 2023
  • An FMCW LiDAR system with robust target detection capability even under adverse operating conditions such as snow, rain, and fog is addressed in this paper. Our focus is primarily on enhancing the performance of FMCW LiDAR by improving the characteristics of the frequency-modulated laser, which directly influence the range resolution, coherence length, and maximum measurement range of the LiDAR. We describe the use of an unbalanced Mach-Zehnder laser interferometer to measure real-time changes in the lasing frequency and to correct frequency modulation errors through an optical phase-locked loop. To extend the coherence length of the laser, we employ an extended-cavity laser diode as the laser source and implement the laser interferometer with a photonic integrated circuit to miniaturize the optical system. The developed FMCW LiDAR system exhibits a modulation bandwidth of 10.045 GHz and a remarkable distance resolution of 0.84 mm.
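For context, the classical transform-limited FMCW range resolution is ΔR = c/(2B), which for the 10.045 GHz bandwidth above works out to roughly 15 mm; the sub-millimetre figure reported in the abstract therefore implies frequency estimation finer than one FFT bin (the companion FPGA paper above mentions such resolution enhancement). A quick check of the classical bound:

```python
C = 299_792_458.0          # speed of light, m/s

def fmcw_range_resolution(bandwidth_hz):
    """Transform-limited FMCW range resolution: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

dr = fmcw_range_resolution(10.045e9)
print(f"{dr * 1e3:.2f} mm")  # -> 14.92 mm
```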

Anomaly Event Detection Algorithm of Single-person Households Fusing Vision, Activity, and LiDAR Sensors

  • Lee, Do-Hyeon;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.6
    • /
    • pp.23-31
    • /
    • 2022
  • Due to the recent outbreak of COVID-19, an aging population, and an increase in single-person households, the amount of time household members spend on various activities at home has increased significantly. In this study, we propose an algorithm for detecting anomalies affecting members of single-person households, including the elderly, based on the results of human movement and fall detection from an image sensor algorithm using home CCTV, an activity sensor algorithm using the accelerometer built into a smartphone, and a 2D LiDAR sensor-based algorithm. Each single-sensor algorithm, however, has the disadvantage that it is difficult to detect anomalies in certain situations due to the limitations of the sensor. Accordingly, rather than relying on a single sensor-based algorithm, we developed a fusion method that combines the algorithms to detect anomalies in a variety of situations. We evaluated the performance of the algorithms on data collected by each sensor and show, through several scenarios, that even in situations where one algorithm alone cannot accurately detect an anomaly event, the algorithms complement each other to detect it efficiently.
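The abstract does not spell out the fusion rule. The sketch below is one minimal, availability-aware interpretation in which any available detector can raise an anomaly event, so each sensor covers the others' blind spots; the actual combination logic in the paper may differ.

```python
from typing import Optional

def fuse_anomaly(vision: Optional[bool],
                 activity: Optional[bool],
                 lidar: Optional[bool]) -> bool:
    """OR-fuse three detector outputs; None marks a sensor whose
    algorithm cannot run in the current situation (e.g. CCTV occluded,
    phone not carried), so that sensor is simply skipped."""
    available = [r for r in (vision, activity, lidar) if r is not None]
    return any(available)

# CCTV occluded, accelerometer flags a fall, LiDAR sees no anomaly:
print(fuse_anomaly(None, True, False))  # -> True
```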

2D LiDAR based 3D Pothole Detection System (2차원 라이다 기반 3차원 포트홀 검출 시스템)

  • Kim, Jeong-joo;Kang, Byung-ho;Choi, Su-il
    • Journal of Digital Contents Society
    • /
    • v.18 no.5
    • /
    • pp.989-994
    • /
    • 2017
  • In this paper, we propose a pothole detection system using 2D LiDAR together with a pothole detection algorithm. Conventional pothole detection methods can be divided into vibration-based, 3D reconstruction, and vision-based methods. The proposed system uses two inexpensive 2D LiDARs and improves pothole detection performance. The detection algorithm is divided into preprocessing for noise reduction, clustering and line extraction for visualization, and a gradient function for the pothole decision. Using the gradient of the distance data, we check for the existence of a pothole and measure its depth and width. The pothole detection system is built with the two LiDARs, and 3D pothole detection performance is demonstrated by detecting potholes with the moving LiDAR system.
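The gradient-based decision step can be sketched on a 1D road height profile: pothole edges show up as large discrete gradients, and depth and width follow from the flagged segment. The threshold and profile below are synthetic illustrations, not the paper's parameters.

```python
import numpy as np

def detect_pothole(z, dx, grad_thresh=1.0):
    """Scan a road height profile z (m), sampled every dx (m), for a
    pothole: edges are where the discrete gradient exceeds the
    threshold; depth and width follow from the flagged segment."""
    grad = np.diff(z) / dx
    edges = np.where(np.abs(grad) > grad_thresh)[0]
    if edges.size < 2:
        return None
    left, right = edges[0] + 1, edges[-1] + 1   # samples inside the dip
    depth = np.median(z) - z[left:right].min()  # median = road baseline
    width = (right - left) * dx
    return depth, width

# Synthetic profile: flat road with a 5 cm deep, 20 cm wide pothole
z = np.zeros(100)
z[40:60] = -0.05
depth, width = detect_pothole(z, dx=0.01)
print(depth, width)  # depth ~0.05 m, width ~0.2 m
```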

Ceiling-Based Localization of Indoor Robots Using Ceiling-Looking 2D-LiDAR Rotation Module (천장지향 2D-LiDAR 회전 모듈을 이용한 실내 주행 로봇의 천장 기반 위치 추정)

  • An, Jae Won;Ko, Yun-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.7
    • /
    • pp.780-789
    • /
    • 2019
  • In this paper, we propose a new indoor localization method for indoor mobile robots using LiDAR. Indoor mobile robots operating in limited areas usually require high-precision localization to provide high-level services. The performance of the widely used localization methods based on radio waves or computer vision is highly dependent on the usage environment, so the reproducibility of the localization is insufficient for high-level services. To overcome this problem, we propose a new localization method based on comparing ceiling shape information obtained from LiDAR measurements with the building blueprint. Specifically, the method includes a reliable segmentation method that classifies point clouds into connected planes, and an effective comparison method that estimates position by matching the 3D point clouds against the 2D blueprint information. Since the ceiling shape rarely changes, the proposed localization method is robust to its usage environment. Simulation results show that the position error of the proposed localization method is less than 10 cm.
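The plane segmentation and blueprint matching are more involved than can be shown here, but the core matching idea, aligning a measured ceiling profile against the blueprint, can be sketched in 1D with a sum-of-squared-differences search. The profiles below are synthetic stand-ins for the paper's 3D point clouds.

```python
import numpy as np

def match_position(blueprint, scan):
    """Slide the measured ceiling profile along the blueprint profile
    and return the offset with the smallest sum of squared differences."""
    errs = [np.sum((blueprint[i:i + len(scan)] - scan) ** 2)
            for i in range(len(blueprint) - len(scan) + 1)]
    return int(np.argmin(errs))

# Synthetic blueprint ceiling-height profile; scan cut from position 70
rng = np.random.default_rng(0)
blueprint = rng.normal(loc=2.5, scale=0.3, size=200)  # heights (m)
scan = blueprint[70:100]
print(match_position(blueprint, scan))  # -> 70
```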

A Study on the Displacement Measuring Method of High-rise Buildings using LiDAR (라이다를 이용한 고층 건물의 변위 계측 기법에 관한 연구)

  • Lee Hong-Min;Park Hyo-Seon
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2006.04a
    • /
    • pp.151-158
    • /
    • 2006
  • Structural health monitoring is concerned with the safety and serviceability of the users of structures, especially building structures and infrastructure. When considering the safety of a structure, the maximum stress in a member due to live load, earthquake, wind, or other unexpected loading must be checked so that it does not exceed the stress specified in a code. Even if a structure does not fail at yield, excessively large displacements will deteriorate its serviceability. To guarantee the safety and serviceability of structures, the maximum displacement must be monitored, because the actual displacement is a direct assessment index of a structure's stiffness. However, no practical method has been reported for monitoring displacement, particularly for high-rise buildings, because of their limited accessibility. In this paper, a displacement measuring method for high-rise buildings using LiDAR is studied. The method is evaluated by analyzing the accuracy of displacements measured on an existing building.


A Research on V2I-based Accident Prevention System for the Prevention of Unexpected Accident of Autonomous Vehicle (자율주행 차량의 돌발사고 방지를 위한 V2I 기반의 사고 방지체계 연구)

  • Han, SangYong;Kim, Myeong-jun;Kang, Dongwan;Baek, Sunwoo;Shin, Hee-seok;Kim, Jungha
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.3
    • /
    • pp.86-99
    • /
    • 2021
  • This research proposes an accident prevention system that uses V2I communication to prevent collision accidents caused by blind spots at locations such as crossroads and school zones. Vision and LiDAR sensors installed in the infrastructure at such locations recognize objects and warn vehicles at risk of accidents, preventing accidents in advance. Deep learning-based YOLOv4 is used to recognize objects entering the intersection, and Manhattan distance values computed from the LiDAR sensors are used to calculate the expected collision time and the braking-distance weight, securing a safe distance. The V2I link uses ROS (Robot Operating System) communication to prevent accidents in advance by conveying various information to the vehicle, including the class, distance, and entry speed of objects, in addition to the collision warning.
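The collision-time and braking-distance computation is not fully specified in the abstract. Below is a minimal sketch assuming a constant closing speed and a simple friction-limited braking model; the reaction time, friction coefficient, and g values are illustrative, not the paper's.

```python
def manhattan_distance(p, q):
    """Manhattan (L1) distance between two 2D points, as used for the
    LiDAR-based proximity measure."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def expected_collision_time(distance, closing_speed):
    """Time to collision under a constant closing-speed assumption."""
    return distance / closing_speed if closing_speed > 0 else float("inf")

def safe_distance(speed, t_react=1.0, mu=0.7, g=9.81):
    """Reaction distance plus friction-limited braking distance."""
    return speed * t_react + speed ** 2 / (2 * mu * g)

d = manhattan_distance((0.0, 0.0), (12.0, 8.0))   # -> 20.0 m
print(expected_collision_time(d, 10.0))           # -> 2.0 s
print(round(safe_distance(10.0), 2))              # -> 17.28 m
```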