• Title/Summary/Keyword: lidar sensors

Registration of Aerial Image with Lines using RANSAC Algorithm

  • Ahn, Y.;Shin, S.;Schenk, T.;Cho, W.
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.6_1
    • /
    • pp.529-536
    • /
    • 2007
  • Registration between image and object space is a fundamental step in photogrammetry and computer vision. With the rapid development of sensors (multi/hyperspectral, laser scanning, radar, etc.), the need for registration between different sensors is ever increasing. Two considerations are central to multi-sensor registration: extracting sensor-invariant features and establishing correspondence between them. Since point-to-point correspondence does not exist between imagery and laser scanning data, higher-level entities are needed for extraction and correspondence. This requires, first, modifying the existing mathematical and geometrical model, which was designed for point measurements, to handle line measurements, and second, adapting the matching scheme. In this research, linear features are selected as the sensor-invariant features and matching entities. They are incorporated into the mathematical model in the form of an extended collinearity equation for the registration problem known as photo resection, which computes the exterior orientation parameters. The other emphasis is on finding matched entities with the aid of RANSAC (RANdom SAmple Consensus) in the absence of known correspondences. To relieve the computational load common to sampling schemes, a deterministic sampling technique that selects four line features, one from each of four sectors, is applied.
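The consensus idea this abstract builds on can be illustrated with the classic RANSAC toy problem of fitting a line to points with gross outliers; the function, tolerances, and sample data below are illustrative sketches, not the paper's line-based resection model.

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Fit a 2D line a*x + b*y + c = 0 to noisy points by RANSAC."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line through the two sampled points in implicit form.
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample: both points coincide
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Ten collinear points plus two gross outliers.
pts = [(i, i) for i in range(10)] + [(0, 9), (9, 0)]
inliers = ransac_line(pts)
```

The paper replaces this random sampling with a deterministic scheme (one line from each of four image sectors) to bound the number of hypotheses tested.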

Overview of sensor fusion techniques for vehicle positioning (차량정밀측위를 위한 복합측위 기술 동향)

  • Park, Jin-Won;Choi, Kae-Won
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.2
    • /
    • pp.139-144
    • /
    • 2016
  • This paper provides an overview of recent trends in sensor fusion technologies for vehicle positioning. GNSS by itself cannot satisfy the precision and reliability required for autonomous driving. We survey sensor fusion techniques that combine GNSS output with inertial navigation sensors such as an odometer and a gyroscope. Moreover, we review landmark-based positioning, which matches landmarks detected by a lidar or stereo vision against high-precision digital maps.
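The GNSS/inertial fusion the survey covers is typically realized with a Kalman filter; a minimal one-dimensional sketch, with made-up noise values, shows the predict-with-odometry, correct-with-GNSS cycle:

```python
def kf_fuse(x, p, u, z, q=0.1, r=1.0):
    """One predict/update cycle of a 1-D Kalman filter.
    x, p : position estimate and its variance
    u    : displacement integrated from odometer/gyro (prediction input)
    z    : GNSS position fix (measurement)
    q, r : process and measurement noise variances (illustrative values)
    """
    x_pred = x + u               # dead-reckoning prediction
    p_pred = p + q               # prediction uncertainty grows
    k = p_pred / (p_pred + r)    # Kalman gain weighs GNSS vs. prediction
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = kf_fuse(0.0, 1.0, u=1.0, z=1.2)
```

The fused estimate lands between the dead-reckoned prediction and the GNSS fix, and its variance shrinks after each correction.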

Lane Change Driving Analysis based on Road Driving Data (실도로 주행 데이터 기반 차선변경 주행 특성 분석)

  • Park, Jongcherl;Chae, Heungseok;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.10 no.1
    • /
    • pp.38-44
    • /
    • 2018
  • This paper presents an analysis of driving safety in lane change situations based on road driving data. Autonomous driving is a global trend in the vehicle industry. LKAS technologies are already applied in commercial vehicles, and lane change maneuvers have been actively studied. In autonomous vehicles, imitating human driving behavior is as important as safety control. Driving data analysis in lane change situations has usually dealt only with ego vehicle information such as longitudinal acceleration, yaw rate, and steering angle. For this reason, a safety index based on surrounding vehicle information, grounded in human driving data, is needed. In this research, driving data were collected from a perception module using LIDAR, radar, and RT-GPS sensors. By analyzing human driving patterns in lane change maneuvers, a safety index that considers both the ego vehicle and surrounding vehicle states, using relative velocity and longitudinal clearance, has been designed.
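A safety index over relative velocity and longitudinal clearance, as the abstract describes, might take a shape like the following; the thresholds and the piecewise form are illustrative assumptions, not the paper's actual index:

```python
def lane_change_safety_index(clearance, rel_speed, ttc_min=3.0, c_min=5.0):
    """Heuristic safety index in [0, 1]: 1.0 is safe, 0.0 is unsafe.
    clearance : longitudinal gap to the surrounding vehicle [m]
    rel_speed : closing speed, positive when the gap is shrinking [m/s]
    ttc_min   : time-to-collision regarded as fully safe [s] (assumed)
    c_min     : minimum acceptable gap [m] (assumed)
    """
    if clearance <= c_min:
        return 0.0           # gap already too small
    if rel_speed <= 0:
        return 1.0           # gap is opening: safe
    ttc = clearance / rel_speed
    return min(1.0, ttc / ttc_min)
```

Such an index lets a lane change planner rank candidate gaps the way human drivers implicitly do.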

Longitudinal Motion Planning of Autonomous Vehicle for Pedestrian Collision Avoidance (보행자 충돌 회피를 위한 자율주행 차량의 종방향 거동 계획)

  • Kim, Yujin;Moon, Jongsik;Jeong, Yonghwan;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.11 no.3
    • /
    • pp.37-42
    • /
    • 2019
  • This paper presents an autonomous acceleration planning algorithm for pedestrian collision avoidance in urban environments. Various scenarios between pedestrians and a vehicle are designed to exercise the planning algorithm. To simulate the scenarios, we analyze pedestrian behavior and identify the limitations of the fused sensors, lidar and vision camera. Acceleration is determined optimally by considering TTC (Time To Collision) and the pedestrian's intention. The pedestrian's crossing intention is estimated from velocity and position changes, enabling quick control decisions that minimize full-braking situations. The feasibility of the proposed algorithm is verified by simulations using CarSim and Simulink, and by comparison with actual driving data.
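The decision structure the abstract outlines, braking earlier and more gently when crossing intention is detected with margin, and fully only when TTC is short, can be sketched as follows; the thresholds and acceleration levels are illustrative, not the paper's optimized values:

```python
def plan_longitudinal_accel(gap, ego_v, ped_crossing, ttc_brake=2.0,
                            a_comfort=-2.0, a_full=-8.0):
    """Pick a longitudinal acceleration command [m/s^2] from
    time-to-collision and the estimated pedestrian intention.
    gap          : distance to the pedestrian [m]
    ego_v        : ego vehicle speed [m/s]
    ped_crossing : True if crossing intention is estimated
    """
    if ego_v <= 0:
        return 0.0               # already stopped
    if not ped_crossing:
        return 0.0               # pedestrian not entering the lane
    ttc = gap / ego_v
    if ttc < ttc_brake:
        return a_full            # imminent: full braking
    return a_comfort             # early, comfortable deceleration
```

Estimating intention early widens the band where the comfortable branch suffices, which is exactly the full-braking minimization the paper targets.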

Integrated Visualization Method using Multiple Lidar Sensors (다수 라이다 센서를 이용한 통합 시각화 방법)

  • Lee, Eun-Seok;Lee, Yoon-Yim;Noh, Heejeon;Kim, Young-Chul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.159-160
    • /
    • 2022
  • This paper describes a method for visualizing data from multiple lidar sensors, which have recently come into use for perimeter security at major facilities, in a single unified 3D coordinate system so that the sensors can be used more efficiently. Camera-based CCTV offers high accuracy but a narrow field of view, so it is often combined with sensors such as radar. Radar data covers a wide area but is noisy and cannot measure object shape precisely. A LiDAR sensor uses a laser to measure distant, wide areas with fine detail. Because of that precision, lidar produces a large volume of data, and when multiple sensors are used it is difficult to handle them on a single screen. The proposed work provides an integrated viewing environment that unifies the measurements from multiple lidar sensors into one coordinate system in real time so that they can be displayed as a single image.
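Unifying several lidars into one coordinate system amounts to applying each sensor's mounting pose (a rigid transform) to its points before merging; a minimal sketch, assuming a yaw-only mounting rotation and made-up sensor poses:

```python
import math

def to_common_frame(points, yaw, tx, ty, tz):
    """Rotate a sensor's points about z by its mounting yaw, then
    translate by its mounting position, yielding shared-frame coords."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# Two hypothetical sensors observing the same physical point
# from different mounting poses; after transformation they agree.
cloud_a = to_common_frame([(1.0, 0.0, 0.0)], yaw=0.0,
                          tx=0.0, ty=0.0, tz=0.0)
cloud_b = to_common_frame([(0.0, -1.0, 0.0)], yaw=math.pi / 2,
                          tx=0.0, ty=0.0, tz=0.0)
merged = cloud_a + cloud_b
```

In a real deployment the pose of each sensor comes from extrinsic calibration, and the transform runs per frame so the merged cloud can be rendered in one view.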


Development of Simulation Environment for Autonomous Driving Algorithm Validation based on ROS (ROS 기반 자율주행 알고리즘 성능 검증을 위한 시뮬레이션 환경 개발)

  • Kwak, Jisub;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.14 no.1
    • /
    • pp.20-25
    • /
    • 2022
  • This paper presents the development of a simulation environment for validating autonomous driving (AD) algorithms based on the Robot Operating System (ROS). ROS is one of the frameworks commonly used to control autonomous vehicles. For the evaluation of AD algorithms, a 3D autonomous driving simulator has been developed based on LGSVL. Two additional sensors are implemented on the simulation vehicle. First, a lidar sensor is mounted on the ego vehicle for real-time driving environment perception. Second, a GPS sensor is equipped to estimate the ego vehicle's position. With this sensor configuration, the AD algorithm can perceive the local environment and determine control commands through motion planning. The simulation environment has been evaluated with lane changing and lane keeping scenarios. The simulation results show that the proposed 3D simulator can successfully imitate the operation of a real-world vehicle.
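A lane keeping scenario like the one used for evaluation needs only a simple lateral controller to close the loop in simulation; the proportional law and gains below are a generic sketch, not the paper's controller:

```python
def lane_keep_steer(lateral_offset, heading_err, k_y=0.5, k_psi=1.0,
                    max_steer=0.5):
    """Proportional lane-keeping law.
    lateral_offset : signed distance from lane center [m]
    heading_err    : heading error relative to the lane [rad]
    Returns a saturated steering angle command [rad]."""
    steer = -(k_y * lateral_offset + k_psi * heading_err)
    return max(-max_steer, min(max_steer, steer))
```

In a ROS setup this function would sit in a node subscribing to the simulator's localization topic and publishing steering commands each cycle.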

A 5-DOF Ground Testbed for Developing Rendezvous/Docking Algorithm of a Nano-satellite (초소형 위성의 랑데부/도킹 알고리즘 개발을 위한 5자유도 지상 테스트베드)

  • Choi, Won-Sub;Cho, Dong-Hyun;Song, Ha-Ryong;Kim, Jong-Hak;Ko, Su-Jeong;Kim, Hae-Dong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.43 no.12
    • /
    • pp.1124-1131
    • /
    • 2015
  • This paper describes a 5-DOF ground testbed that emulates the micro-gravity environment for developing rendezvous/docking algorithms for a nano-satellite. The testbed consists of two parts: a lower part that eliminates friction with the ground, and an upper part with 3-DOF rotational motion relative to the lower part. For vision-based autonomous navigation, a camera, LIDAR, and AHRS are used as sensors, and eight cold-gas thrusters and three-axis reaction wheels serve as actuators. All system software is implemented in C++ on an on-board computer running Linux.

Object detection and distance measurement system with sensor fusion (센서 융합을 통한 물체 거리 측정 및 인식 시스템)

  • Lee, Tae-Min;Kim, Jung-Hwan;Lim, Joonhong
    • Journal of IKEEE
    • /
    • v.24 no.1
    • /
    • pp.232-237
    • /
    • 2020
  • In this paper, we propose an efficient sensor fusion method for object recognition and distance measurement in autonomous vehicles. Typical sensors used in autonomous vehicles are radar, lidar, and camera. Among these, the lidar sensor is used to create a map around the vehicle, but it performs poorly in bad weather and the sensor is expensive. To compensate for these shortcomings, this paper measures distance with a radar sensor, which is relatively inexpensive and unaffected by snow, rain, and fog, and fuses it with a camera sensor, which has an excellent object recognition rate, to measure object distance. The fused video is transmitted to a smartphone in real time through an IP server and can be used in a driving assistance system that assesses the current situation inside and outside the vehicle.
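Fusing camera recognition with radar ranging boils down to associating each camera detection with the radar target at the closest bearing; a minimal sketch, with illustrative data formats that are assumptions rather than the paper's interfaces:

```python
def fuse_detections(boxes, radar_targets, max_az_diff=0.1):
    """Attach a radar range to each camera detection by nearest azimuth.
    boxes         : list of (label, azimuth_rad) from the camera
    radar_targets : list of (azimuth_rad, range_m) from the radar
    Returns (label, range_m) pairs; range is None if nothing matches."""
    fused = []
    for label, az in boxes:
        best = min(radar_targets, key=lambda t: abs(t[0] - az),
                   default=None)
        if best and abs(best[0] - az) <= max_az_diff:
            fused.append((label, best[1]))
        else:
            fused.append((label, None))
    return fused
```

The azimuth gate (`max_az_diff`) prevents a camera box from grabbing the range of an unrelated radar return.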

Road Environment Black Ice Detection Limits Using a Single LIDAR Sensor (단일 라이다 센서를 이용한 도로환경 블랙아이스 검출 한계)

  • Sung-Tae Kim;Won-Hyuck Choi;Je-Hong Park;Seok-Min Hong;Yeong-Geun Lim
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.6
    • /
    • pp.865-870
    • /
    • 2023
  • Recently, accidents caused by black ice, road surface freezing caused by natural conditions, have been increasing. Black ice is difficult to identify directly with the naked eye and is easily mistaken for standing water, so the rate of accidents caused by vehicles sliding on it is high. To address this problem, this paper presents a method of detecting black ice centered on a LiDAR sensor. Using a small, inexpensive, high-accuracy light detection and ranging (LiDAR) sensor, black ice and asphalt are distinguished by measuring their differing reflection characteristics while varying temperature and inclination angle. The experiments carried out in the study indicate that additional research and improvement are needed to increase accuracy, through which more reliable black ice detection methods can be developed. By supporting early system design research, this approach can help prevent black-ice accidents in advance.
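Detection by reflection differences reduces to classifying the normalized return intensity against reference levels for ice and dry asphalt; the thresholds below are placeholders, since the real values depend on wavelength, incidence angle, and temperature, as the paper emphasizes:

```python
def classify_surface(intensity, dry_ref=0.6, ice_ref=0.2, margin=0.1):
    """Classify a road surface from a normalized LiDAR return intensity.
    dry_ref, ice_ref, margin are illustrative reference levels, not
    calibrated values from the paper."""
    if abs(intensity - ice_ref) <= margin:
        return "black_ice"
    if abs(intensity - dry_ref) <= margin:
        return "asphalt"
    return "unknown"
```

The "unknown" band is where the paper's reported accuracy limits bite: when ice and asphalt returns overlap, intensity alone cannot separate them.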

Evaluation of Applicability for 3D Scanning of Abandoned or Flooded Mine Sites Using Unmanned Mobility (무인 이동체를 이용한 폐광산 갱도 및 수몰 갱도의 3차원 형상화 위한 적용성 평가)

  • Soolo Kim;Gwan-in Bak;Sang-Wook Kim;Seung-han Baek
    • Tunnel and Underground Space
    • /
    • v.34 no.1
    • /
    • pp.1-14
    • /
    • 2024
  • An image-reconstruction approach that deploys unmanned mobile platforms equipped with high-speed LiDAR (Light Detection And Ranging) is proposed to reconstruct the shape of abandoned mines. Unmanned platforms are remarkably useful in abandoned mines fraught with operational difficulties including, but not limited to, obstacles, sludge, flooding, and narrow tunnels with diameters of 1.5 m or more. For real abandoned mine cases, quadruped robots, quadcopter drones, and underwater drones are deployed on land, in the air, and at water-filled sites, respectively. In addition to scanning the mines with 2D solid-state lidar sensors, rotating the beam at an inclination angle increases the efficiency of simultaneously reconstructing mineshaft shapes and detecting obstacles. Sensor and robot posture were used to compute rotation matrices, which in turn yield the geographical coordinates of the solid-state lidar data. The quadruped robot then scanned an actual site to reconstruct the tunnel shape. Lastly, the optimal elements necessary to increase utility in real fields were identified and proposed.
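Composing the robot posture with the sensor's inclination, as the abstract describes, is a product of rotation matrices followed by a translation to the robot's position; a minimal sketch with hypothetical angles and pose values:

```python
import math

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(r, p, t):
    """Rotate point p by r, then translate by t."""
    return tuple(sum(r[i][k] * p[k] for k in range(3)) + t[i]
                 for i in range(3))

def rot_z(a):  # robot yaw (posture)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):  # sensor inclination about the lateral axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

# Robot yaw composed with a 10-degree sensor tilt, then the robot's
# position is added to georeference one lidar return 5 m ahead.
R = matmul(rot_z(math.pi / 2), rot_y(math.radians(10)))
point_world = apply(R, (5.0, 0.0, 0.0), (100.0, 200.0, 0.0))
```

Running this per return, with the pose updated every frame, is what lets the rotating inclined scanner accumulate a consistent 3D tunnel shape.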