• Title/Summary/Keyword: LiDAR sensor


Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association, v.13 no.4, pp.7-13, 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object image detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
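
The high-level fusion step described in this entry (pixel-to-vehicle-frame transformation via a homography, followed by angle-based matching of vision and LiDAR tracks) can be illustrated with a minimal sketch. The homography matrix, track layout, and matching threshold below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def pixel_to_vehicle_frame(u, v, H):
    """Map an image point (e.g., the bottom-center of a YOLO box) to the
    vehicle-local ground plane using a pre-calibrated homography H (3x3)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]          # (x, y) in the subject-vehicle frame

def match_tracks(vision_xy, lidar_tracks, max_angle_deg=3.0):
    """Assign the vision classification to the LiDAR track whose bearing
    (angle from the vehicle's x-axis) is closest to the vision detection."""
    vision_angle = np.arctan2(vision_xy[1], vision_xy[0])
    best_id, best_diff = None, np.radians(max_angle_deg)
    for track_id, (x, y) in lidar_tracks.items():
        diff = abs(np.arctan2(y, x) - vision_angle)
        if diff < best_diff:
            best_id, best_diff = track_id, diff
    return best_id

# Hypothetical calibration and LiDAR tracks, for illustration only.
H = np.array([[0.02, 0.0, -0.8], [0.0, -0.05, 23.0], [0.0, 0.0, 1.0]])
lidar_tracks = {7: (12.1, 1.9), 8: (25.3, -4.0)}
xy = pixel_to_vehicle_frame(640, 420, H)
print("matched LiDAR track:", match_tracks(xy, lidar_tracks))   # -> 7
```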

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal, v.45 no.5, pp.847-861, 2023
  • Dynamic object detection is essential for ensuring safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors have excellent accuracy in estimating distance, they lack texture or color information and have a lower resolution than conventional cameras. In addition, performance degradation occurs when a LiDAR-based object detection model is applied to different driving environments, or when sensors from different LiDAR manufacturers are used, owing to the domain gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.

Building DSMs Generation Integrating Three Line Scanner (TLS) and LiDAR

  • Suh, Yong-Cheol;Nakagawa, Masafumi
    • Korean Journal of Remote Sensing, v.21 no.3, pp.229-242, 2005
  • Photogrammetry is a standard method of GIS data acquisition. However, considerable manpower and expenditure are required to produce detailed 3D spatial information, especially in urban areas where a wide variety of buildings exist, and no photogrammetric system can fully automate the process of spatial information acquisition. On the other hand, LiDAR has high potential for automating 3D spatial data acquisition because it can directly measure the 3D coordinates of objects, but it is rather difficult to recognize objects from LiDAR data alone owing to its comparatively low resolution. Against this background, we believe it is very advantageous to integrate LiDAR data and stereo CCD images for more efficient and automated acquisition of high-resolution 3D spatial data. In this research, an automatic urban object recognition methodology was proposed by integrating ultra-high-resolution stereo images and LiDAR data. Moreover, a more reliable and detailed stereo matching method for the CCD images was examined, using the LiDAR data as initial 3D data to determine the search range and to detect possible occlusions. Finally, intelligent DSMs, in which urban features are identified at high resolution, were generated with high-speed processing.
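
The idea of using LiDAR as initial 3D data to constrain the stereo search range can be sketched as follows. The focal length, baseline, window size, and SAD cost are hypothetical choices for illustration, not the matcher the authors actually used.

```python
import numpy as np

def constrained_disparity(left, right, row, col, z_lidar,
                          f=5000.0, baseline=0.6, margin=3, win=5):
    """Search for the best-matching disparity only near the disparity
    predicted from a LiDAR depth estimate: d0 = f * baseline / Z."""
    d0 = int(round(f * baseline / z_lidar))
    half = win // 2
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_cost = d0, np.inf
    for d in range(max(1, d0 - margin), d0 + margin + 1):
        c = col - d
        if c - half < 0:
            continue
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        cost = np.abs(patch.astype(float) - cand.astype(float)).sum()  # SAD
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic example: an image pair shifted by 30 px, LiDAR depth ~100 m.
left = np.random.rand(100, 400)
right = np.roll(left, -30, axis=1)
print(constrained_disparity(left, right, row=50, col=200, z_lidar=100.0))  # -> 30
```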

Developing and Valuating 3D Building Models Based on Multi Sensor Data (LiDAR, Digital Image and Digital Map) (멀티센서 데이터를 이용한 건물의 3차원 모델링 기법 개발 및 평가)

  • Wie, Gwang-Jae;Kim, Eun-Young;Yun, Hong-Sic;Kang, In-Gu
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.25 no.1, pp.19-30, 2007
  • Modeling 3D buildings is an essential process for reproducing the real world in a computer. There are two ways to create a 3D building model. The first method uses the building layer of 1:1000 digital maps together with high-density point data obtained from airborne laser surveying. The second method uses LiDAR point data together with digital images acquired along with the LiDAR survey. In this research we processed a 3D building model for the area of one 1:1000 digital map sheet with both methods, developed a workflow, analyzed it quantitatively, and evaluated its efficiency, accuracy, and realism. The results differed depending on the building shape: the first method was effective for simple buildings, and the second method was effective for complicated buildings. We also evaluated the accuracy of the produced models. Comparing the 3D buildings based on LiDAR data and digital images against the digital maps, the horizontal accuracy was within ±50 cm. From this we concluded that 3D building modeling is more effective when it is based on LiDAR data and digital maps. The produced 3D building model data can be utilized as digital content in various fields such as 3D GIS, U-City, telematics, navigation, virtual reality, and games.

Development of Wideband Frequency Modulated Laser for High Resolution FMCW LiDAR Sensor (고분해능 FMCW LiDAR 센서 구성을 위한 광대역 주파수변조 레이저 개발)

  • Jong-Pil La;Ji-Eun Choi
    • The Journal of the Korea institute of electronic communication sciences, v.18 no.6, pp.1023-1030, 2023
  • This paper addresses an FMCW LiDAR system with robust target detection capability even under adverse operating conditions such as snow, rain, and fog. Our focus is primarily on enhancing the performance of the FMCW LiDAR by improving the characteristics of the frequency-modulated laser, which directly influence the range resolution, coherence length, and maximum measurement range of the LiDAR. We describe the use of an unbalanced Mach-Zehnder laser interferometer to measure real-time changes in the lasing frequency and to correct frequency-modulation errors through an optical phase-locked loop technique. To extend the coherence length of the laser, we employ an extended-cavity laser diode as the laser source and implement the laser interferometer as a photonic integrated circuit to miniaturize the optical system. The developed FMCW LiDAR system exhibits a bandwidth of 10.045 GHz and a remarkable distance resolution of 0.84 mm.
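
As general background (standard FMCW ranging theory, not taken from this paper): for a linear chirp of bandwidth B swept over a period T_m, a target at range R produces a beat frequency proportional to R, and the sweep bandwidth sets the nominal range resolution. Correcting frequency-modulation nonlinearity, as done here with the optical phase-locked loop, keeps the beat-frequency peak narrow so these relations hold in practice.

```latex
f_b = \frac{2 R B}{c\, T_m}
\qquad\Longrightarrow\qquad
R = \frac{c\, f_b\, T_m}{2 B},
\qquad
\Delta R_{\min} = \frac{c}{2 B}
```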

Identifying Puddles based on Intensity Measurement using LiDAR

  • Minyoung Lee;Ji-Chul Kim;Moo Hyun Cha;Hanmin Lee;Sooyong Lee
    • Journal of Sensor Science and Technology, v.32 no.5, pp.267-274, 2023
  • LiDAR, one of the most important sensing methods used in mobile robots and in cars with assistive/autonomous driving functions, is used to locate surrounding obstacles and to build maps. For real-time path generation, the detection of potholes and puddles on the driving surface is crucial. To achieve this, we used the coordinates of the reflection points provided by the LiDAR, as well as the intensity information, to classify water areas; this was achieved by applying a linear regression method to the intensity distribution. The rationale for using the LiDAR index as an input variable for the linear regression is presented, and we demonstrate that it is not affected by errors in the distance measurement. Because of the LiDAR's vertical scanning, a non-uniform reflective surface is divided into different groups according to the intensity distribution, and a mathematical basis for this is presented. Through experiments in an outdoor driving area, we could distinguish between flat ground, potholes, and puddles, and a kinematic analysis was performed to calculate the maximum width that could be crossed for a given vehicle body size and wheel radius.
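
A minimal sketch of the intensity-regression idea: fit a line to the intensity values over the scan index and flag contiguous runs whose intensity falls well below the fit as candidate water surfaces. The relative-drop threshold and grouping rule here are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def find_water_segments(intensity, drop=0.35, min_len=5):
    """Fit intensity vs. scan index with linear regression, then mark
    contiguous index runs whose intensity drops far below the fitted line."""
    idx = np.arange(len(intensity))
    slope, offset = np.polyfit(idx, intensity, 1)        # least-squares line
    expected = slope * idx + offset
    low = intensity < (1.0 - drop) * expected             # relative drop
    segments, start = [], None
    for i, flag in enumerate(np.append(low, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                segments.append((start, i - 1))
            start = None
    return segments

# Synthetic scan: smoothly varying ground intensity with a dark (specular) dip.
inten = np.linspace(80, 60, 60)
inten[25:40] *= 0.4                                       # puddle-like dip
print(find_water_segments(inten))                          # -> [(25, 39)]
```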

An Automatic Collision Avoidance System for Drone using a LiDAR sensor (LiDAR 센서를 이용한 드론 자동 충돌방지 시스템)

  • Chong, Ui-Pil;An, Woo-Jin;Kim, Yearn-Min;Lee, Jung-Chul
    • Journal of the Institute of Convergence Signal Processing, v.19 no.2, pp.54-60, 2018
  • In this paper, we propose an efficient automatic control method for drone collision avoidance. In general, drones are controlled by passing to the flight control (FC) module the PWM signals received from an RC controller, which converts stick movements into PWM signals. We implemented a collision avoidance module between the receiver and the FC module to monitor and modify the throttle, pitch, and roll control signals so as to avoid collisions. To detect obstacles, a LiDAR distance sensor mounted on a servo motor periodically measures the obstacle distance over a range of -45 to +45 degrees around the flight direction. If a collision is predicted, the received PWM signal is modified and transmitted to the FC module to prevent the collision. We applied the proposed method to a hexacopter, and the experimental results show that safety is improved because collisions caused by inadvertent or inexperienced maneuvers can be prevented.
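
The pass-through/override behaviour of a module sitting between the RC receiver and the FC can be sketched as below. The PWM range, the neutral-pitch convention, and the safety distance are illustrative assumptions; the paper's actual control law is not specified in the abstract.

```python
# Illustrative pass-through module between RC receiver and flight controller.
# PWM values are in microseconds (typical 1000-2000 us), with 1500 us taken
# here as neutral pitch (assumption).

SAFE_DISTANCE_M = 2.0      # assumed minimum allowed obstacle distance
NEUTRAL_PWM = 1500

def adjust_commands(rc_pitch_pwm, rc_throttle_pwm, sweep_distances_m):
    """Forward RC commands unchanged unless the closest obstacle in the
    -45..+45 degree LiDAR sweep is inside the safety distance; in that case
    hold pitch at neutral (stop moving forward) and keep throttle as-is."""
    closest = min(sweep_distances_m)
    if closest < SAFE_DISTANCE_M and rc_pitch_pwm > NEUTRAL_PWM:
        # Pilot is commanding forward flight toward an obstacle: cancel it.
        return NEUTRAL_PWM, rc_throttle_pwm
    return rc_pitch_pwm, rc_throttle_pwm

# One servo sweep (10 samples across -45..+45 degrees), obstacle at 1.4 m.
sweep = [5.1, 4.8, 3.9, 2.6, 1.4, 1.6, 2.9, 4.2, 5.0, 5.5]
print(adjust_commands(rc_pitch_pwm=1700, rc_throttle_pwm=1550,
                      sweep_distances_m=sweep))   # -> (1500, 1550)
```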

An Algorithm of Identifying Roaming Pedestrians' Trajectories using LiDAR Sensor (LiDAR 센서를 활용한 배회 동선 검출 알고리즘 개발)

  • Jeong, Eunbi;You, So-Young
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.16 no.6, pp.1-15, 2017
  • Recent terrorism targets unspecified masses and causes massive destruction, so-called super terrorism, and many countries have worked hard to protect their citizens with various preparations and safety nets. With inexpensive and advanced sensor technologies, surveillance systems have attracted attention, but few studies have attempted to classify pedestrians' trajectories and the differences among them. Therefore, we collected individual trajectories at Samseoung Station using an analytical system for pedestrian trajectories based on a LiDAR sensor. Based on the collected trajectory data, a comprehensive framework for classifying the types of pedestrian trajectories was developed, using data normalization and a trajectory association rule-based algorithm. As a result, trajectories with low similarity within the same cluster can be detected.
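
One common way to make trajectories of different lengths comparable before computing similarity, roughly in the spirit of the normalization step mentioned above, is to resample each trajectory to a fixed number of points and compare them point-wise. The resampling size and the Euclidean similarity measure below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def resample(traj, n_points=20):
    """Resample an (N, 2) trajectory to n_points spaced evenly along its arc length."""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_points)
    x = np.interp(targets, s, traj[:, 0])
    y = np.interp(targets, s, traj[:, 1])
    return np.stack([x, y], axis=1)

def trajectory_distance(a, b, n_points=20):
    """Mean point-wise Euclidean distance between two resampled trajectories."""
    ra, rb = resample(a, n_points), resample(b, n_points)
    return float(np.linalg.norm(ra - rb, axis=1).mean())

# Two pedestrians walking a similar straight path vs. one roaming in a loop.
straight_a = [(0, 0), (2, 0.1), (4, 0.0), (6, 0.2), (8, 0.1)]
straight_b = [(0, 0.3), (3, 0.2), (5, 0.4), (8, 0.3)]
loop = [(0, 0), (2, 2), (0, 4), (-2, 2), (0, 0), (2, 2)]
print(trajectory_distance(straight_a, straight_b))   # small -> same cluster
print(trajectory_distance(straight_a, loop))         # large -> roaming-like
```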

LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving (도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘)

  • Kim, Jongho;Lee, Hojoon;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association, v.13 no.4, pp.14-19, 2021
  • This paper presents a LiDAR static-obstacle-map-based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of ego motion and of perceived objects. Therefore, when noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement of the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian-rule-based static obstacle map is constructed from the continuous LiDAR point cloud input. Second, the vehicle odometry over the time interval is calculated by matching against the static obstacle map using the Normal Distributions Transform (NDT) method, and the velocity and yaw rate of the vehicle are then estimated with an Extended Kalman Filter (EKF) that uses the vehicle odometry as the measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and is verified with data obtained from actual driving on urban roads. The test results show a more robust and accurate dynamic state estimation when there is a bias in the chassis IMU sensor.
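
A minimal sketch of the final stage, a Kalman update that takes map-matching odometry as the measurement, is given below. The state layout, random-walk prediction, noise covariances, and frame interval are illustrative assumptions; with this linear measurement model the EKF reduces to an ordinary Kalman filter, and the paper's actual filter design is not spelled out in the abstract.

```python
import numpy as np

DT = 0.1                       # LiDAR frame interval [s] (assumed)

class VehicleStateFilter:
    """State x = [v, yaw_rate]. Prediction is a random walk; the measurement is
    NDT map-matching odometry over one frame: z = [delta_s, delta_yaw]."""

    def __init__(self):
        self.x = np.zeros(2)                       # [v (m/s), yaw rate (rad/s)]
        self.P = np.eye(2)
        self.Q = np.diag([0.5, 0.1])               # process noise (assumed)
        self.R = np.diag([0.05, 0.01])             # odometry noise (assumed)

    def predict(self):
        self.P = self.P + self.Q                   # x unchanged (random walk)

    def update(self, delta_s, delta_yaw):
        H = np.array([[DT, 0.0], [0.0, DT]])       # z = H @ x for this model
        z = np.array([delta_s, delta_yaw])
        y = z - H @ self.x                         # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

kf = VehicleStateFilter()
for _ in range(20):                                # vehicle at ~10 m/s, slight turn
    kf.predict()
    kf.update(delta_s=1.0, delta_yaw=0.002)
print("v = %.2f m/s, yaw rate = %.4f rad/s" % (kf.x[0], kf.x[1]))
```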

Semantic Object Detection based on LiDAR Distance-based Clustering Techniques for Lightweight Embedded Processors (경량형 임베디드 프로세서를 위한 라이다 거리 기반 클러스터링 기법을 활용한 의미론적 물체 인식)

  • Jung, Dongkyu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.10, pp.1453-1461, 2022
  • The accuracy of surrounding-object recognition algorithms that use 3D sensors such as LiDAR in autonomous vehicles has been improving through many studies, but this requires high-performance hardware and complex structures. Such object recognition algorithms place a heavy load on the main processor of an autonomous vehicle, which must run and manage many processes while driving. To reduce this load while still exploiting the advantages of 3D sensor data, we propose 2D data-based recognition using ROIs generated by extracting physical properties from the 3D sensor data. In an environment where the brightness of the base image was reduced by 50%, the proposed method showed 5.3% higher accuracy and a 28.57% shorter execution time than the existing 2D-based model. On the base image, it trades 2.46% lower accuracy relative to the 3D-based model for a 6.25% reduction in execution time.
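
The ROI-generation idea (cluster the LiDAR points by distance, then hand the 2D detector only the image region covered by each cluster) can be sketched as below; the camera intrinsics, the camera-frame coordinates, and the clustering threshold are hypothetical values for illustration.

```python
import numpy as np

def cluster_by_distance(points, gap=0.8):
    """Group range-ordered LiDAR points: start a new cluster whenever the
    Euclidean gap to the previous point exceeds the threshold."""
    clusters, current = [], [points[0]]
    for prev, pt in zip(points[:-1], points[1:]):
        if np.linalg.norm(np.asarray(pt) - np.asarray(prev)) > gap:
            clusters.append(current)
            current = []
        current.append(pt)
    clusters.append(current)
    return clusters

def cluster_to_roi(cluster, K):
    """Project a cluster of (x, y, z) points (camera frame, z forward) through
    the intrinsic matrix K and return the bounding pixel box (u0, v0, u1, v1)."""
    pts = np.asarray(cluster, dtype=float)
    uv = K @ pts.T
    uv = (uv[:2] / uv[2]).T
    u0, v0 = uv.min(axis=0)
    u1, v1 = uv.max(axis=0)
    return int(u0), int(v0), int(u1), int(v1)

# Hypothetical intrinsics and a small scan line containing two separated objects.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
scan = [(-0.4, 0.0, 8.0), (-0.2, 0.0, 8.1), (0.0, 0.0, 8.0),   # object A
        (3.0, 0.2, 15.0), (3.2, 0.2, 15.1)]                    # object B
for c in cluster_by_distance(scan):
    print(cluster_to_roi(c, K))    # one ROI per cluster for the 2D detector
```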