• Title/Summary/Keyword: Light detection and ranging


A robust collision prediction and detection method based on neural network for autonomous delivery robots

  • Seonghun Seo;Hoon Jung
    • ETRI Journal
    • /
    • v.45 no.2
    • /
    • pp.329-337
    • /
    • 2023
  • For safe last-mile autonomous robot delivery services in complex environments, rapid and accurate collision prediction and detection is vital. This study proposes a suitable neural network model that relies on multiple navigation sensors. A light detection and ranging technique is used to measure the relative distances to potential collision obstacles along the robot's path of motion, and an accelerometer is used to detect impacts. The proposed method tightly couples relative-distance and acceleration time-series data in a complementary fashion to minimize errors. A long short-term memory network, a fully connected layer, and a softmax function are integrated to train on and classify the rapidly changing collision-countermeasure state during robot motion. Simulation results show that the proposed method effectively performs collision prediction and detection for various obstacles.
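
As a rough illustration of the kind of architecture the abstract describes (an LSTM followed by a fully connected layer and a softmax over fused distance/acceleration time series), the sketch below builds a minimal PyTorch classifier. The input layout, layer sizes, and the three collision-state classes are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class CollisionStateClassifier(nn.Module):
    """Toy LSTM classifier over fused LiDAR-distance and accelerometer sequences.
    Assumed input layout: (batch, time, 4) = [relative distance, ax, ay, az]."""
    def __init__(self, input_dim=4, hidden_dim=64, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        out, _ = self.lstm(x)                 # (batch, time, hidden)
        logits = self.fc(out[:, -1])          # classify from the last time step
        return torch.softmax(logits, dim=-1)  # for training, return logits and use CrossEntropyLoss

# Example: 8 sequences of 50 time steps; classes might be normal / imminent / collision
model = CollisionStateClassifier()
probs = model(torch.randn(8, 50, 4))
print(probs.shape)  # torch.Size([8, 3])
```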

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on social and road safety and on the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In the case of autonomous vehicles especially, efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying objects at both short and long distances. This paper presents the classification of objects using CNN-based vision and Light Detection and Ranging (LIDAR) fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LIDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method can obtain an informative feature representation for object classification in an autonomous vehicle environment using the integrated vision and LIDAR data. This method is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
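
The sketch below illustrates the general pipeline the abstract outlines: densify a sparse LiDAR depth image, stack it with the RGB channels, and feed the 4-channel tensor to a CNN. The dilation-based densification, network shape, and class count are placeholders, not the paper's actual upsampling method or architecture.

```python
import numpy as np
import torch
import torch.nn as nn

def sparse_depth_to_dense(depth, iters=5):
    """Very rough densification of a sparse depth image by repeated 3x3 max
    dilation; stands in for the paper's upsampling step."""
    dense = depth.copy()
    for _ in range(iters):
        padded = np.pad(dense, 1, mode="edge")
        stacked = np.stack([padded[i:i + dense.shape[0], j:j + dense.shape[1]]
                            for i in range(3) for j in range(3)])
        filled = stacked.max(axis=0)                 # nearest non-empty depth nearby
        dense = np.where(dense > 0, dense, filled)   # only fill the holes
    return dense

class RGBDClassifier(nn.Module):
    """Small CNN over a 4-channel RGB-D tensor (illustrative only)."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

rgb = np.random.rand(64, 64, 3).astype(np.float32)
sparse = np.zeros((64, 64), np.float32)
sparse[np.random.randint(0, 64, 200), np.random.randint(0, 64, 200)] = np.random.rand(200) * 50
rgbd = np.dstack([rgb, sparse_depth_to_dense(sparse)[..., None]])
logits = RGBDClassifier()(torch.from_numpy(rgbd).permute(2, 0, 1).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 5])
```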

Automatic Extraction of Fractures and Their Characteristics in Rock Masses by LIDAR System and the Split-FX Software (LIDAR와 Split-FX 소프트웨어를 이용한 암반 절리면의 자동추출과 절리의 특성 분석)

  • Kim, Chee-Hwan;Kemeny, John
    • Tunnel and Underground Space
    • /
    • v.19 no.1
    • /
    • pp.1-10
    • /
    • 2009
  • Site characterization for structural stability in rock masses mainly involves the collection of joint property data, and in current practice, much of this data is collected by hand directly at exposed slopes and outcrops. There are many issues with collecting this data in the field, including safety, slope access, field time, lack of data quantity, reusability of data, and human bias. It is shown that information on joint orientation, spacing, and roughness in rock masses can be automatically extracted from LIDAR (light detection and ranging) point clouds using the currently available Split-FX point cloud processing software, thereby reducing processing time and mitigating safety and human-bias issues.
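
Joint orientation extraction of the kind the abstract mentions typically reduces to fitting a plane to a patch of points and converting the plane normal into dip direction and dip. The snippet below is a generic sketch of that step, not the Split-FX algorithm; the synthetic patch, axis convention, and noise level are illustrative.

```python
import numpy as np

def joint_orientation(points):
    """Fit a plane to a patch of LiDAR points (N x 3, x=east, y=north, z=up)
    by SVD and return (dip_direction_deg, dip_deg)."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                      # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # azimuth of the downdip direction
    return dip_direction, dip

# Synthetic patch: a plane dipping ~30 degrees toward the east (090), with noise
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + rng.normal(0, 0.01, 500)
print(joint_orientation(np.column_stack([xy, z])))  # approximately (90.0, 30.0)
```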

Experimental Analysis of Physical Signal Jamming Attacks on Automotive LiDAR Sensors and Proposal of Countermeasures (차량용 LiDAR 센서 물리적 신호교란 공격 중심의 실험적 분석과 대응방안 제안)

  • Ji-ung Hwang;Yo-seob Yoon;In-su Oh;Kang-bin Yim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.217-228
    • /
    • 2024
  • LiDAR (Light Detection And Ranging) sensors, which play a pivotal role alongside cameras, RADAR (RAdio Detection And Ranging), and ultrasonic sensors in the safe operation of autonomous vehicles, can recognize and detect objects in 360 degrees. However, because LiDAR sensors use lasers to measure distance, they are exposed to attackers and face various security threats. In this paper, we examine several security threats against LiDAR sensors, namely relay, spoofing, and replay attacks; analyze the feasibility and impact of physical jamming attacks; and assess the risk these attacks pose to the reliability of autonomous driving systems. Through experiments, we show that jamming attacks can cause errors in the ranging ability of LiDAR sensors. Drawing on vehicle-to-vehicle (V2V) communication, multi-sensor fusion currently under development, and LiDAR anomaly data detection, this work aims to provide a basic direction for countermeasures against these threats to enhance the security of autonomous vehicles; the practical applicability and effectiveness of the proposed countermeasures will be verified in future research.
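
As a minimal illustration of LiDAR anomaly data detection of the kind mentioned as a countermeasure direction, the sketch below flags dropouts and implausible range jumps within a single scan. The thresholds and symptoms are assumptions for demonstration, not the detection logic proposed in the paper.

```python
import numpy as np

def flag_range_anomalies(ranges, max_jump=5.0, min_valid=0.2):
    """Illustrative frame-level check for jamming symptoms in one LiDAR scan:
    dropouts (near-zero returns) and physically implausible jumps between
    neighbouring beams. Thresholds are arbitrary example values in metres."""
    ranges = np.asarray(ranges, dtype=float)
    dropout = ranges < min_valid
    jumps = np.abs(np.diff(ranges, prepend=ranges[0])) > max_jump
    return dropout | jumps

scan = np.full(360, 12.0)   # clean scan: a 12 m wall all around
scan[100:120] = 0.0         # simulated jamming-induced dropout
flags = flag_range_anomalies(scan)
print(f"{flags.sum()} of {flags.size} beams flagged")
```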

Uncertainty Analysis on Vertical Wind Profile Measurement of LIDAR for Wind Resource Assessment (풍력자원평가를 위한 라이다 관측 시 풍속연직분포 불확도 분석)

  • Kim, Hyun-Goo;Choi, Ji-Hwee;Jang, Moon-Seok;Jeon, Wan-Ho
    • Korean Society for New and Renewable Energy: Conference Proceedings
    • /
    • 2010.06a
    • /
    • pp.185.1-185.1
    • /
    • 2010
  • Remote sensing refers to technology that acquires information about a target from a distance, without physical contact. SODAR equipment has long been widely used in meteorological observation, but recently LIDAR has begun to be actively used, alongside SODAR, for wind measurements in wind resource assessment. For reference, SODAR (SOnic Detection And Ranging) emits acoustic pulses vertically and in the east-west and north-south directions, receives the echoes scattered and reflected by the atmospheric flow, measures the frequency shift and echo intensity, and derives wind direction and speed by vector-combining the echo data from each direction. In contrast, LIDAR (Light Detection And Ranging) is a laser-based remote sensor developed relatively recently for wind measurement; it measures wind direction and speed from the Doppler shift of laser light scattered by airborne particles (dust, water vapor, cloud, fog, pollutants, etc.). For wind resource assessment, the accuracy of LIDAR has been verified through comparative field evaluations against numerous met-mast measurements in accordance with IEC 61400-12 (Albers et al., 2009). The LIDAR system operated by the Korea Institute of Energy Research scans 360° per second and measures the spectra of laser light reflected from 50 points (as shown on the right of Fig. 1), and the wind speed at a set measurement height is defined as the average over a sampling volume. The adopted algorithm converts the average of the spectra observed over a 25 m height interval, 12.5 m above and below the set measurement height, into the wind speed at its center. Therefore, when observing a nonlinearly varying vertical wind profile, measurement errors introduced by this wind-speed conversion algorithm may arise. Accordingly, this study quantitatively analyzes the uncertainty arising from the interval averaging over the sampling volume in LIDAR measurements of the vertical wind profile, and thereby quantitatively evaluates the uncertainty of LIDAR-based vertical wind profile observations.
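
The bias the abstract attributes to volume averaging can be illustrated numerically: average a nonlinear (power-law) wind profile over a ±12.5 m sampling volume and compare it with the point value at the centre height. The uniform weighting and profile parameters below are simplifying assumptions, not the instrument's actual weighting function.

```python
import numpy as np

def power_law(z, v_ref=8.0, z_ref=80.0, alpha=0.2):
    """Power-law vertical wind profile commonly used in wind resource assessment."""
    return v_ref * (z / z_ref) ** alpha

def volume_averaged_speed(z_center, half_depth=12.5, n=251):
    """LiDAR-style estimate: average the profile over a +/-12.5 m sampling volume
    and assign the result to the centre height (uniform weighting assumed)."""
    z = np.linspace(z_center - half_depth, z_center + half_depth, n)
    return power_law(z).mean()

for h in (40.0, 80.0, 120.0):
    point = power_law(h)
    averaged = volume_averaged_speed(h)
    print(f"z={h:5.1f} m  point={point:5.3f} m/s  volume-avg={averaged:5.3f} m/s  "
          f"bias={averaged - point:+.4f} m/s")
```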

Accuracy Analysis of Point Cloud Data Produced Via Mobile Mapping System LiDAR in Construction Site (건설현장 MMS 라이다 기반 점군 데이터의 정확도 분석)

  • Park, Jae-Woo;Yeom, Dong-Jun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.25 no.3
    • /
    • pp.397-406
    • /
    • 2022
  • Recently, research and development to revitalize smart construction are being actively carried out. Accordingly, 3D mapping technology that digitizes construction sites is drawing attention. To create a 3D digital map of a construction site, a point cloud generation method based on LiDAR (light detection and ranging) using an MMS (mobile mapping system) is mainly used. The purpose of this study is to analyze the accuracy of MMS LiDAR-based point cloud data. As a result, the accuracy of the MMS point cloud data was analyzed as dx = 0.048 m, dy = 0.018 m, and dz = 0.045 m on average. In future studies, the accuracy of point cloud data produced via UAV (unmanned aerial vehicle) photogrammetry should be compared with that of MMS LiDAR.
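
A per-axis accuracy figure like the reported dx/dy/dz values can be obtained by comparing matched MMS points with surveyed reference coordinates. The sketch below shows one straightforward way to do this with hypothetical checkpoints; it is not the paper's data or exact evaluation procedure.

```python
import numpy as np

def axis_errors(measured, reference):
    """Mean absolute per-axis error between matched MMS points and reference
    (e.g., surveyed checkpoint) coordinates, both N x 3 arrays in metres."""
    diff = np.abs(np.asarray(measured) - np.asarray(reference))
    return dict(zip(("dx", "dy", "dz"), diff.mean(axis=0)))

# Hypothetical matched checkpoints (illustrative values only)
ref = np.array([[10.0, 20.0, 5.0], [15.0, 22.0, 5.5], [18.0, 25.0, 6.0]])
mms = ref + np.array([[0.05, 0.02, 0.04], [-0.04, 0.01, -0.05], [0.05, -0.02, 0.04]])
print(axis_errors(mms, ref))
```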

Correction in the Measurement Error of Water Depth Caused by the Effect of Seafloor Slope on Peak Timing of Airborne LiDAR Waveforms (지형 기울기에 의한 항공 수심 라이다 수심 측정 오차 보정)

  • Sim, Ki Hyeon;Woo, Jae Heun;Lee, Jae Yong;Kim, Jae Wan
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.34 no.3
    • /
    • pp.191-197
    • /
    • 2017
  • Light detection and ranging (LiDAR) is one of the most efficient technologies to obtain the topographic and bathymetric map of coastal zones, superior to other technologies, such as sound navigation and ranging (SONAR) and synthetic aperture radar (SAR). However, the measurement results using LiDAR are vulnerable to environmental factors. To achieve a correspondence between the acquired LiDAR data and reality, error sources must be considered, such as the water surface slope, water turbidity, and seafloor slope. Based on the knowledge of those factors' effects, error corrections can be applied. We concentrated on the effect of the seafloor slope on LiDAR waveforms while restricting other error sources. A simulation regarding in-water beam scattering was conducted, followed by an investigation of the correlation between the seafloor slope and peak timing of return waveforms. As a result, an equation was derived to correct the depth error caused by the seafloor slope.
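
For context, the baseline relation that such a correction refines converts the delay between the surface and bottom waveform peaks into depth using the speed of light in water. The sketch below implements only this flat-seafloor baseline; the paper's slope-dependent correction equation is not reproduced here.

```python
# Baseline depth-from-timing relation used in airborne bathymetric LiDAR:
# the round-trip delay between surface and bottom return peaks, travelled at
# c / n_water, corresponds to twice the water depth.
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # approximate refractive index of seawater

def depth_from_peak_delay(delta_t_ns):
    """Depth (m) from the delay (ns) between surface and bottom waveform peaks,
    assuming a flat, horizontal seafloor (the case the slope correction refines)."""
    delta_t = delta_t_ns * 1e-9
    return (C / N_WATER) * delta_t / 2.0

print(f"{depth_from_peak_delay(44.4):.2f} m")  # roughly 5 m for a 44.4 ns delay
```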

ETLi: Efficiently annotated traffic LiDAR dataset using incremental and suggestive annotation

  • Kang, Jungyu;Han, Seung-Jun;Kim, Nahyeon;Min, Kyoung-Wook
    • ETRI Journal
    • /
    • v.43 no.4
    • /
    • pp.630-639
    • /
    • 2021
  • Autonomous driving requires a computerized perception of the environment for safety and machine-learning evaluation. Recognizing semantic information is difficult, as the objective is to instantly recognize and distinguish items in the environment. Training a model with real-time semantic capability and high reliability requires extensive and specialized datasets. However, generalized datasets are unavailable and are typically difficult to construct for specific tasks. Hence, a light detection and ranging semantic dataset suitable for semantic simultaneous localization and mapping and specialized for autonomous driving is proposed. This dataset is provided in a form that can be easily used by users familiar with existing two-dimensional image datasets, and it contains various weather and light conditions collected from a complex and diverse practical setting. An incremental and suggestive annotation routine is proposed to improve annotation efficiency. A model is trained to simultaneously predict segmentation labels and suggest class-representative frames. Experimental results demonstrate that the proposed algorithm yields a more efficient dataset than uniformly sampled datasets.
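
One common way to realize a suggestive-annotation loop is to rank unlabeled frames by the model's prediction uncertainty and hand the most uncertain ones to annotators. The sketch below uses mean per-point entropy as the ranking score; this is a generic strategy, not necessarily the selection criterion used for ETLi.

```python
import numpy as np

def suggest_frames(frame_probs, k=3):
    """Toy suggestive-annotation step: rank unlabeled frames by mean per-point
    prediction entropy and return the k most uncertain frame indices.
    frame_probs: list of (num_points, num_classes) softmax arrays, one per frame."""
    scores = []
    for probs in frame_probs:
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        scores.append(entropy.mean())
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
# Mix of confident (small alpha) and uncertain (large alpha) synthetic frames
frames = [rng.dirichlet(np.full(4, alpha), size=1000)
          for alpha in (0.2, 5.0, 0.5, 10.0, 1.0)]
print(suggest_frames(frames))  # indices of the most uncertain frames first
```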

Miniature Biochip Fluorescence Detection System with Spatial Separation of Fluorescence from Excitation Light (형광과 여기광을 공간적으로 분리하는 바이오칩용 소형 형광측정시스템)

  • Kim Ho-seong;Choi Jea-ho;Park Ju-han;Lee Kook-nyung;Kim Yong-Kweon
    • The Transactions of the Korean Institute of Electrical Engineers C
    • /
    • v.54 no.8
    • /
    • pp.378-383
    • /
    • 2005
  • We report the development of miniature fluorescence detection systems that employ a miniature prism, mirrors, and a low-cost CCD camera to detect the fluorescence emitted from 40 fluorescently labeled protein patterns without a scanner. This kind of miniature fluorescence detection system can be used at the point of care. We introduce two systems: one uses a prism + mirror block, and the other uses a prism and two mirrors. A large-NA microscope eyepiece and a low-cost CCD camera are used. We fabricated a protein chip containing multiple patterns of BSA labeled with Cy5 using MEMS technology and chemically modified the surface to clean it and immobilize the proteins. The measurements show that the combination of prism and mirrors can homogenize the elliptical excitation light over the sample with higher optical efficiency and increase the separation between the excitation and fluorescence light at the CCD, giving higher signal intensity and a higher signal-to-noise ratio. The measurements also show that protein concentrations ranging from 10 ng/ml to 1000 ng/ml can be assayed with very small error. We believe that the proposed fluorescence detection system can be refined to build a commercially valuable hand-held or miniature detection device.

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal
    • /
    • v.43 no.4
    • /
    • pp.603-616
    • /
    • 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. The odometry technique that estimates displacement from consecutive sensor inputs is essential for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed as a vision-based method, so a fast execution speed can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with a mathematical analysis, two keypoint sampling methods that use the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested with the KITTI dataset and in real environments. Evaluation results yielded a translation error of 0.69% and a rotation error of 0.0031°/m on the KITTI training dataset, with an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and remarkably higher speed than conventional techniques.
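
The spherical range image at the core of this approach maps each point's azimuth and elevation to image columns and rows and stores the range as the pixel value. The sketch below shows a generic version of that projection; the resolution and vertical field of view are typical rotating-LiDAR values assumed for illustration, not the paper's exact parameters.

```python
import numpy as np

def to_spherical_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an h x w spherical range image.
    Vertical field of view (degrees) is an assumed rotating-LiDAR value."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).clip(0, h - 1).astype(int)
    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r                                          # last point in a cell wins
    return image

rng = np.random.default_rng(0)
cloud = np.hstack([rng.uniform(-50, 50, size=(20000, 2)),   # x, y spread around the sensor
                   rng.uniform(-3, 2, size=(20000, 1))])    # z near ground level
sri = to_spherical_range_image(cloud)
print(sri.shape, round(float((sri > 0).mean()), 3))         # (64, 1024) and fill ratio
```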