• Title/Abstract/Keywords: Light Detection And Ranging

Search results: 223

A robust collision prediction and detection method based on neural network for autonomous delivery robots

  • Seonghun Seo;Hoon Jung
    • ETRI Journal
    • /
    • Vol. 45, No. 2
    • /
    • pp.329-337
    • /
    • 2023
  • For safe last-mile autonomous robot delivery services in complex environments, rapid and accurate collision prediction and detection is vital. This study proposes a suitable neural network model that relies on multiple navigation sensors. A light detection and ranging technique is used to measure the relative distances to potential collision obstacles along the robot's path of motion, and an accelerometer is used to detect impacts. The proposed method tightly couples the relative-distance and acceleration time-series data in a complementary fashion to minimize errors. A long short-term memory (LSTM) layer, a fully connected layer, and a softmax function are integrated to train on and classify the rapidly changing collision-countermeasure state during robot motion. Simulation results show that the proposed method effectively performs collision prediction and detection for various obstacles.
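A rough sketch of the coupling idea in this abstract — distance and acceleration time series fused and classified into a collision state via softmax — might look like the following. The window contents, class labels, and the hand-crafted logits standing in for the learned LSTM features are all illustrative assumptions, not the authors' model:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def collision_state(distances, accels):
    """Classify {safe, imminent, collided} from coupled time series.
    A stand-in for the LSTM + fully connected + softmax classifier in
    the abstract: hand-crafted logits replace learned recurrent features."""
    closing = distances[0] - distances[-1]    # how fast the gap shrinks
    impact = max(abs(a) for a in accels)      # accelerometer spike
    logits = [
        2.0 - closing - impact,   # safe: range stable, no impact
        closing * 3.0 - impact,   # imminent: range closing fast
        impact - 1.0,             # collided: impact spike detected
    ]
    probs = softmax(logits)
    labels = ["safe", "imminent", "collided"]
    return labels[probs.index(max(probs))], probs
```

In the paper the logits come from a trained LSTM and fully connected layer; the thresholds here are hand-tuned only to make the classification behavior concrete.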

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on social and road safety and on the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, efficient fusion of the data from these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous-vehicle environment. The method is based on a convolutional neural network (CNN) and image-upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth information is concatenated with red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification using the integrated vision and LiDAR data and is adopted to guarantee both object-classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
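The upsampling-and-fusion step described above — sparse LiDAR returns densified into a pixel-level depth map and concatenated with RGB before entering the CNN — can be sketched as follows. The nearest-neighbor fill and the toy image sizes are assumptions; the paper's actual upsampling scheme is not specified here:

```python
def upsample_depth(sparse, h, w):
    """Fill a pixel-level depth map from sparse projected LiDAR returns
    by nearest-neighbor assignment (a crude stand-in for the upsampling
    step in the abstract). `sparse` maps (row, col) -> depth in meters."""
    depth = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # pick the projected LiDAR point closest to this pixel
            (pr, pc), d = min(
                sparse.items(),
                key=lambda kv: (kv[0][0] - r) ** 2 + (kv[0][1] - c) ** 2)
            depth[r][c] = d
    return depth

def fuse_rgbd(rgb, depth):
    """Concatenate depth with RGB into a 4-channel RGB-D value per pixel,
    the kind of input tensor the deep CNN would consume."""
    h, w = len(rgb), len(rgb[0])
    return [[rgb[r][c] + (depth[r][c],) for c in range(w)] for r in range(h)]
```

A real pipeline would project the point cloud through the camera calibration first and use a smarter interpolant; the brute-force nearest-neighbor search here only makes the data flow concrete.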

LIDAR와 Split-FX 소프트웨어를 이용한 암반 절리면의 자동추출과 절리의 특성 분석 (Automatic Extraction of Fractures and Their Characteristics in Rock Masses by LIDAR System and the Split-FX Software)

  • 김치환
    • 터널과지하공간
    • /
    • Vol. 19, No. 1
    • /
    • pp.1-10
    • /
    • 2009
  • When constructing structures in rock masses, the characteristics of the rock mass are investigated to evaluate mechanical stability. These characteristics are governed mainly by the joints within the rock mass. Until now, joint characteristics have been surveyed by approaching exposed slopes or outcrops and observing them directly with the naked eye. This approach suffers from limitations such as difficult access to steep slopes, worker safety, long survey times, the small amount of information obtained relative to the survey time, poor reproducibility of the information, and measurement error. To overcome these problems, a rock mass was scanned with LIDAR (light detection and ranging), and the resulting point cloud was processed with the Split-FX software; as a result, joint characteristics such as joint orientation, spacing, and surface roughness could be analyzed accurately and efficiently.

차량용 LiDAR 센서 물리적 신호교란 공격 중심의 실험적 분석과 대응방안 제안 (Experimental Analysis of Physical Signal Jamming Attacks on Automotive LiDAR Sensors and Proposal of Countermeasures)

  • 황지웅;윤요섭;오인수;임강빈
    • 정보보호학회논문지
    • /
    • Vol. 34, No. 2
    • /
    • pp.217-228
    • /
    • 2024
  • Among the cameras, RADAR (RAdio Detection And Ranging), and ultrasonic sensors used for the safe operation of autonomous vehicles, the LiDAR (Light Detection And Ranging) sensor plays a pivotal role, recognizing and detecting objects over 360 degrees. However, because LiDAR sensors measure distance with lasers, they are easily exposed to attackers and face various security threats. This paper therefore examines several security threats against LiDAR sensors — relay, spoofing, and replay attacks — analyzes the feasibility and impact of physical signal jamming attacks, and assesses the risk such attacks pose to the stability of autonomous driving systems. Experiments show that physical jamming attacks can induce errors in the distance-measuring capability of a LiDAR sensor. Drawing on vehicle-to-vehicle (V2V) communication, which is under development, multi-sensor fusion, and detection of anomalous LiDAR data, the paper presents countermeasures against these threats and basic directions for strengthening the security of autonomous vehicles, and aims to verify the practical applicability and effectiveness of the proposed countermeasures in future research.

풍력자원평가를 위한 라이다 관측 시 풍속연직분포 불확도 분석 (Uncertainty Analysis on Vertical Wind Profile Measurement of LIDAR for Wind Resource Assessment)

  • 김현구;최지휘;장문석;전완호
    • 한국신재생에너지학회:학술대회논문집
    • /
    • Abstracts of the 2010 Spring Conference of the 한국신재생에너지학회
    • /
    • pp.185.1-185.1
    • /
    • 2010
  • Remote sensing refers to techniques for acquiring information about an object from a distance, without contact. SODAR equipment has long been used widely in meteorological observation, but recently LIDAR, alongside SODAR, has begun to be used actively for wind measurement in wind resource assessment. For reference, SODAR (SOnic Detection And Ranging) emits sound waves vertically and in the east-west and north-south directions, receives the echoes scattered and reflected by atmospheric flow, measures the frequency shift and echo intensity, and derives wind direction and speed by vector synthesis of the echo data from each direction. LIDAR (Light Detection And Ranging), by contrast, is a laser-based remote sensor developed relatively recently for wind measurement; it determines wind direction and speed from the Doppler shift of laser light scattered by airborne particles (dust, water vapor, cloud, fog, pollutants, etc.). For wind resource assessment, the accuracy of LIDAR has been verified through comparison against numerous met-mast measurements per IEC 61400-12 (Albers et al., 2009). The LIDAR system operated by the Korea Institute of Energy Research, as shown on the right of Fig. 1, scans 360° per second and measures the laser returns from 50 points as spectra, with the wind speed at a configured measurement height defined as the average over a sampling volume. This sampling volume averages the spectra over a 25 m height interval, ±12.5 m about the configured height, and converts the mean into the wind speed at the mid-point. Consequently, when a nonlinearly varying vertical wind profile is observed, the speed-conversion algorithm can introduce measurement error. This study therefore quantitatively analyzes the uncertainty arising from interval averaging over the sampling volume, and thereby quantifies the uncertainty of LIDAR vertical wind profile measurement.
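Two pieces of this abstract lend themselves to a small numerical sketch: the Doppler-shift relation Δf = 2v/λ that wind LIDAR relies on, and the bias introduced when a nonlinear vertical profile is averaged over the ±12.5 m sampling volume. The power-law profile and its parameters below are illustrative assumptions, not values from the study:

```python
def power_law_speed(z, v_ref=8.0, z_ref=80.0, alpha=0.2):
    """Illustrative power-law wind profile: v(z) = v_ref * (z/z_ref)**alpha."""
    return v_ref * (z / z_ref) ** alpha

def sampling_volume_mean(profile, z_center, half=12.5, n=251):
    """Average a profile over the +/-12.5 m sampling volume, mimicking the
    interval-averaging algorithm described in the abstract."""
    zs = [z_center - half + 2.0 * half * i / (n - 1) for i in range(n)]
    return sum(profile(z) for z in zs) / n

def los_speed(wavelength_m, doppler_hz):
    """Line-of-sight speed from the Doppler shift: delta_f = 2*v/lambda."""
    return 0.5 * wavelength_m * doppler_hz

# For a nonlinear (concave) profile, the volume mean underestimates the
# wind speed at the center height -- the kind of bias the study quantifies.
bias = sampling_volume_mean(power_law_speed, 40.0) - power_law_speed(40.0)
```

By Jensen's inequality the bias is negative for any concave profile; its magnitude grows with the curvature of the profile across the 25 m averaging interval.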

건설현장 MMS 라이다 기반 점군 데이터의 정확도 분석 (Accuracy Analysis of Point Cloud Data Produced Via Mobile Mapping System LiDAR in Construction Site)

  • 박재우;염동준
    • 한국산업융합학회 논문집
    • /
    • Vol. 25, No. 3
    • /
    • pp.397-406
    • /
    • 2022
  • Recently, research and development to revitalize smart construction have been actively carried out, and 3D mapping technology that digitizes construction sites is accordingly drawing attention. To create a 3D digital map of a construction site, a point cloud generation method based on LiDAR (light detection and ranging) using an MMS (mobile mapping system) is mainly used. The purpose of this study is to analyze the accuracy of MMS LiDAR-based point cloud data. As a result, the accuracy of the MMS point cloud data was analyzed as dx = 0.048 m, dy = 0.018 m, dz = 0.045 m on average. In future studies, the accuracy of point cloud data produced via UAV (unmanned aerial vehicle) photogrammetry should be compared with that of MMS LiDAR.
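The per-axis accuracies quoted above (dx = 0.048 m, dy = 0.018 m, dz = 0.045 m) are averages of coordinate offsets between MMS points and reference positions; a minimal sketch of such a summary statistic follows. Whether the study averages signed or absolute offsets is not stated, so absolute offsets are assumed here:

```python
def mean_axis_errors(measured, reference):
    """Mean absolute per-axis offsets (dx, dy, dz) between matched point
    pairs -- the kind of accuracy summary quoted in the abstract.
    `measured` and `reference` are equal-length lists of (x, y, z) tuples."""
    n = len(measured)
    dx = sum(abs(m[0] - r[0]) for m, r in zip(measured, reference)) / n
    dy = sum(abs(m[1] - r[1]) for m, r in zip(measured, reference)) / n
    dz = sum(abs(m[2] - r[2]) for m, r in zip(measured, reference)) / n
    return dx, dy, dz
```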

지형 기울기에 의한 항공 수심 라이다 수심 측정 오차 보정 (Correction in the Measurement Error of Water Depth Caused by the Effect of Seafloor Slope on Peak Timing of Airborne LiDAR Waveforms)

  • 심기현;우제흔;이재용;김재완
    • 한국정밀공학회지
    • /
    • Vol. 34, No. 3
    • /
    • pp.191-197
    • /
    • 2017
  • Light detection and ranging (LiDAR) is one of the most efficient technologies to obtain the topographic and bathymetric map of coastal zones, superior to other technologies, such as sound navigation and ranging (SONAR) and synthetic aperture radar (SAR). However, the measurement results using LiDAR are vulnerable to environmental factors. To achieve a correspondence between the acquired LiDAR data and reality, error sources must be considered, such as the water surface slope, water turbidity, and seafloor slope. Based on the knowledge of those factors' effects, error corrections can be applied. We concentrated on the effect of the seafloor slope on LiDAR waveforms while restricting other error sources. A simulation regarding in-water beam scattering was conducted, followed by an investigation of the correlation between the seafloor slope and peak timing of return waveforms. As a result, an equation was derived to correct the depth error caused by the seafloor slope.

ETLi: Efficiently annotated traffic LiDAR dataset using incremental and suggestive annotation

  • Kang, Jungyu;Han, Seung-Jun;Kim, Nahyeon;Min, Kyoung-Wook
    • ETRI Journal
    • /
    • Vol. 43, No. 4
    • /
    • pp.630-639
    • /
    • 2021
  • Autonomous driving requires a computerized perception of the environment for safety and machine-learning evaluation. Recognizing semantic information is difficult, as the objective is to instantly recognize and distinguish items in the environment. Training a model with real-time semantic capability and high reliability requires extensive and specialized datasets. However, generalized datasets are unavailable and are typically difficult to construct for specific tasks. Hence, a light detection and ranging semantic dataset suitable for semantic simultaneous localization and mapping and specialized for autonomous driving is proposed. This dataset is provided in a form that can be easily used by users familiar with existing two-dimensional image datasets, and it contains various weather and light conditions collected from a complex and diverse practical setting. An incremental and suggestive annotation routine is proposed to improve annotation efficiency. A model is trained to simultaneously predict segmentation labels and suggest class-representative frames. Experimental results demonstrate that the proposed algorithm yields a more efficient dataset than uniformly sampled datasets.

형광과 여기광을 공간적으로 분리하는 바이오칩용 소형 형광측정시스템 (Miniature Biochip Fluorescence Detection System with Spatial Separation of Fluorescence from Excitation Light)

  • 김호성;김용권;박주한;이국녕;최재호
    • 대한전기학회논문지:전기물성ㆍ응용부문C
    • /
    • Vol. 54, No. 8
    • /
    • pp.378-383
    • /
    • 2005
  • We report the development of miniature fluorescence detection systems that employ a miniature prism, mirrors, and a low-cost CCD camera to detect, without a scanner, the fluorescence emitted from 40 fluorescently labeled protein patterns. This kind of miniature fluorescence detection system can be used at the point of care. We introduce two systems: one uses a prism + mirror block, and the other uses a prism and two mirrors. A large-NA microscope eyepiece and a low-cost CCD camera are used. We fabricated a protein chip containing multiple patterns of BSA labeled with Cy5 using MEMS technology, and chemically modified the surface to clean it and to immobilize proteins. The measurements show that the combination of prism and mirrors can homogenize the elliptical excitation light over the sample with higher optical efficiency and increase the separation between the excitation and fluorescence light at the CCD, yielding higher signal intensity and a higher signal-to-noise ratio. The measurements also show that protein concentrations ranging from 10 ng/ml to 1000 ng/ml can be assayed with very small error. We believe the proposed fluorescence detection system can be refined into a commercially valuable hand-held or miniature detection device.

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal
    • /
    • Vol. 43, No. 4
    • /
    • pp.603-616
    • /
    • 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. The odometry technique that estimates displacement from consecutive sensor inputs is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry is developed in a vision-based method, and a fast execution speed can be expected. However, applying LiDAR data is difficult because of the sparsity. To solve this problem, we propose an SRI generation method and mathematical analysis, two key point sampling methods using SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested with the KITTI dataset and real environments. Evaluation results yielded a translation error of 0.69%, a rotation error of 0.0031°/m in the KITTI training dataset, and an execution time of 17 ms. The results demonstrated high precision comparable with state-of-the-art and remarkably higher speed than conventional techniques.
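The core of DiLO's representation — projecting a 3-D point cloud onto a 2-D spherical range image by azimuth and elevation — can be sketched as below. The image resolution and vertical field of view are illustrative assumptions (typical of rotating automotive LiDAR), not the paper's settings:

```python
import math

def project_to_sri(points, width=1024, height=64,
                   fov_up=math.radians(3.0), fov_down=math.radians(-25.0)):
    """Project a 3-D point cloud onto a spherical range image (SRI):
    each point maps to (row, col) via its elevation and azimuth angles,
    and the pixel stores the range to the point."""
    fov = fov_up - fov_down
    sri = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        rng = math.sqrt(x * x + y * y + z * z)
        azimuth = math.atan2(y, x)        # [-pi, pi], around the sensor
        elevation = math.asin(z / rng)    # angle above the horizontal plane
        col = int(0.5 * (1.0 - azimuth / math.pi) * width) % width
        row = int((1.0 - (elevation - fov_down) / fov) * height)
        row = min(max(row, 0), height - 1)
        sri[row][col] = rng
    return sri
```

Once the cloud lives on this dense 2-D grid, image-style direct alignment can be applied to consecutive scans, which is what makes the vision-derived odometry formulation (and its speed) carry over to LiDAR data.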