• Title/Summary/Keyword: LiDAR Sensor

Search Results: 139

Road Environment Black Ice Detection Limits Using a Single LIDAR Sensor (단일 라이다 센서를 이용한 도로환경 블랙아이스 검출 한계)

  • Sung-Tae Kim;Won-Hyuck Choi;Je-Hong Park;Seok-Min Hong;Yeong-Geun Lim
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.865-870 / 2023
  • Recently, accidents caused by black ice, a road-surface freezing phenomenon caused by natural forces, have been increasing. Black ice is difficult to identify directly with the human eye and is easily mistaken for standing water, so it leads to a high rate of accidents caused by vehicles sliding. To address this problem, this paper presents a method of detecting black ice centered on a LiDAR sensor. Using a small, inexpensive, high-accuracy light detection and ranging (LiDAR) sensor, black ice and asphalt are distinguished by comparing their reflected signals while varying the temperature and the inclination angle. The LiDAR experiments carried out in the study indicate that additional research and improvement are needed to increase accuracy, through which more reliable black ice detection methods can be proposed. By helping prevent accidents caused by black ice in advance, this work suggests an early-stage system design for black ice detection.
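
The abstract does not give the authors' decision rule, so the following is only a rough, hypothetical sketch of an intensity-threshold check keyed to incidence angle; the angle bins and thresholds are invented for illustration and are not taken from the paper.

```python
# Hypothetical intensity-threshold classifier for black ice vs. asphalt.
# The angle bins and thresholds are invented for this sketch; the paper's
# actual calibration values are not given in the abstract.

# Assumed calibration table: (maximum incidence angle in degrees, intensity threshold).
# Returns dimmer than the threshold are treated as black ice (specular surfaces
# reflect little light back to the sensor at oblique angles), brighter ones as asphalt.
CALIBRATION = [(10.0, 0.35), (20.0, 0.25), (30.0, 0.15)]

def classify_surface(intensity: float, incidence_deg: float) -> str:
    """Classify a single LiDAR return as 'black_ice', 'asphalt', or 'unknown'."""
    for max_angle, threshold in CALIBRATION:
        if incidence_deg <= max_angle:
            return "black_ice" if intensity < threshold else "asphalt"
    return "unknown"  # outside the calibrated angular range

# Example: a dim return arriving at a 15-degree incidence angle.
print(classify_surface(intensity=0.12, incidence_deg=15.0))  # -> black_ice
```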

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.21-30 / 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities built as infrastructure for autonomous cooperative driving. The features are extracted from point cloud data acquired by LiDAR and built into a sample dataset for recognizing the facilities. In order to build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud data, the DBSCAN method was first applied to the points and a modified OTSU method to the reflected intensity, and a cylindrical projection was then applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflected intensity were set as the features of each facility, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from LiDAR data, a common CNN model was selected, trained, and tested, showing an accuracy of about 90% or more and confirming the feasibility of facility recognition. Through continued experiments, we will refine the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
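
As a rough illustration of the described pipeline (clustering, intensity thresholding, cylindrical projection), the sketch below uses scikit-learn's DBSCAN and a plain Otsu threshold from scikit-image as a stand-in for the paper's modified OTSU method; the clustering parameters and projection details are placeholders, not the authors' settings.

```python
# Illustrative pipeline: DBSCAN clustering -> intensity thresholding -> cylindrical projection.
# A standard Otsu threshold stands in for the paper's modified OTSU method;
# eps/min_samples and the projection convention are placeholder choices.
import numpy as np
from sklearn.cluster import DBSCAN
from skimage.filters import threshold_otsu

def extract_facility_features(points: np.ndarray, intensity: np.ndarray):
    """points: (N, 3) array of x, y, z; intensity: (N,) reflected intensity."""
    # 1. Cluster the raw points to isolate candidate facility objects.
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)

    features = []
    for label in set(labels) - {-1}:          # -1 marks DBSCAN noise
        mask = labels == label
        pts, inten = points[mask], intensity[mask]

        # 2. Keep only high-reflectivity returns (reflector sheets) via Otsu.
        keep = inten >= threshold_otsu(inten)
        pts, inten = pts[keep], inten[keep]
        if len(pts) == 0:
            continue

        # 3. Cylindrical projection: unwrap azimuth/height around the cluster axis.
        cx, cy = pts[:, 0].mean(), pts[:, 1].mean()
        azimuth = np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx)
        proj_2d = np.stack([azimuth, pts[:, 2]], axis=1)

        # Feature = 3D coordinates, projected 2D coordinates, and reflected intensity.
        features.append((pts, proj_2d, inten))
    return features
```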

GENERATION OF AIRBORNE LIDAR INTENSITY IMAGE BY NORMALIZING RANGE DIFFERENCES

  • Shin, Jung-Il;Yoon, Jong-Suk;Lee, Kyu-Sung
    • Proceedings of the KSRS Conference / v.1 / pp.504-507 / 2006
  • Airborne LiDAR technology has been applied to diverse applications thanks to its accurate 3D information. Furthermore, LiDAR intensity, the backscattered signal power, can provide additional information about a target's characteristics. LiDAR intensity varies with target reflectance, moisture condition, range, and viewing geometry. This study aims to generate a normalized airborne LiDAR intensity image that accounts for such influential factors as reflectance, range, and geometric/topographic factors (scan angle, ground height, aspect, slope, and local incidence angle, LIA). Laser points from one flight line were extracted to simplify the geometric conditions. Laser intensities of sample plots, selected using a set of reference data and a ground survey, were then statistically analyzed against the independent variables. Target reflectance, the range between sensor and target, and surface slope were the main factors influencing laser intensity. The intensity of the laser points was initially normalized by removing the range effect only. However, microsite topographic factors such as slope angle were not normalized, owing to the difficulty of computing them automatically.
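
The range correction mentioned here is commonly modeled with a range-squared falloff; the sketch below assumes that model and an arbitrary reference range, neither of which is taken from the paper.

```python
# Range-only intensity normalization, assuming the usual 1/R^2 falloff model:
#   I_norm = I_raw * (R / R_ref)**2
# The reference range R_ref and the exponent of 2 are assumptions for this sketch.
import numpy as np

def normalize_intensity(intensity: np.ndarray, range_m: np.ndarray,
                        ref_range_m: float = 1000.0) -> np.ndarray:
    """Rescale raw intensities as if every return came from ref_range_m."""
    return intensity * (range_m / ref_range_m) ** 2

# Example: two returns from the same target at different ranges
# become comparable after normalization.
print(normalize_intensity(np.array([120.0, 30.0]), np.array([500.0, 1000.0])))
```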

Implementation of Bellboy Robot using LiDAR (라이더를 이용한 벨보이 로봇의 구현)

  • Park, Cha-Hun;Park, Seong-Sik;Jo, Seong-Gik;Kim, Young-Chan
    • Proceedings of the Korean Society of Computer Information Conference / 2018.01a / pp.231-232 / 2018
  • Recently, the growth of the aviation industry has increased the number of travelers, and the number of hotel guests is rising as well. As hotel guests increase, service delays, congestion, and degraded service quality arise when using hotel facilities, lowering guests' satisfaction; at the same time, the workload and fatigue of hotel staff accumulate and their job satisfaction falls, creating a chain of problems. This study considers that a bellboy robot that performs various service tasks on behalf of staff, such as carrying luggage, guiding guests to their rooms, and responding to calls, can alleviate these problems to some extent and thereby increase both customer satisfaction and the job satisfaction of hotel employees.

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1827-1836 / 2022
  • Disasters occur unexpectedly and are difficult to predict. In addition, their scale and damage are increasing compared to the past, and one disaster can sometimes develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters who are deployed to the scene face considerable risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection And Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at the disaster site. A multi-sensor module was developed by combining a LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board. This module was then mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was studied. To check the performance of the module, SLAM was tested inside a disaster building, and several SLAM algorithms and distance comparisons were evaluated. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In the future, to further enhance usability at disaster sites, various experiments will be conducted in a rough-terrain environment with many obstacles.
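
PackSLAM itself is not detailed in the abstract; purely as an illustration of the scan-matching step at the heart of many LiDAR SLAM front ends, the sketch below aligns two consecutive scans with point-to-point ICP in Open3D. The library, the ICP choice, and the threshold are assumptions, not the paper's method.

```python
# Generic LiDAR scan matching with point-to-point ICP (Open3D), shown only as an
# illustration of a typical SLAM front-end step; this is not the paper's PackSLAM.
import numpy as np
import open3d as o3d

def to_cloud(points_xyz: np.ndarray) -> o3d.geometry.PointCloud:
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points_xyz)
    return cloud

def estimate_relative_pose(prev_scan: np.ndarray, curr_scan: np.ndarray) -> np.ndarray:
    """Return the 4x4 transform that aligns curr_scan to prev_scan."""
    source, target = to_cloud(curr_scan), to_cloud(prev_scan)
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        0.5,          # placeholder correspondence distance in metres
        np.eye(4),    # identity initial guess; an IMU prior could be used here instead
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

# Chaining these relative poses over consecutive scans yields the robot trajectory.
```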

A New Object Region Detection and Classification Method using Multiple Sensors on the Driving Environment (다중 센서를 사용한 주행 환경에서의 객체 검출 및 분류 방법)

  • Kim, Jung-Un;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1271-1281 / 2017
  • It is essential to collect and analyze information about targets around the vehicle for autonomous driving. Based on this analysis, environmental information such as location and direction must be analyzed in real time to control the vehicle. In particular, occlusion or truncation of objects in the image must be handled to provide accurate information about the vehicle's environment and to facilitate safe operation. In this paper, we propose a method to simultaneously generate 2D and 3D bounding box proposals using a LiDAR edge image produced by filtering LiDAR sensor information. We classify each proposal by connecting it with a Region-based Fully Convolutional Network (R-FCN), a deep-learning object classifier that takes two-dimensional images as input. Each 3D box is then adjusted using the class label and the subcategory information of each class to finally complete the 3D bounding box corresponding to the object. Because the 3D bounding boxes are created in 3D space, object information such as spatial coordinates and object size can be obtained at once, and the 2D bounding boxes associated with the 3D boxes do not suffer from problems such as occlusion.
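
One common way (not necessarily the authors') to associate a 3D proposal with a 2D image box is to project the eight corners of the 3D box through the camera intrinsics and take their image-plane extent; the intrinsics below are placeholders.

```python
# Projecting a 3D bounding box into the image to obtain the associated 2D box.
# The pinhole intrinsics below are placeholders; the paper's calibration is not given.
import numpy as np

K = np.array([[700.0,   0.0, 640.0],   # fx,  0, cx
              [  0.0, 700.0, 360.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def project_box(corners_cam: np.ndarray) -> tuple[float, float, float, float]:
    """corners_cam: (8, 3) box corners in the camera frame (z > 0).
    Returns (xmin, ymin, xmax, ymax) of the projected 2D bounding box."""
    uvw = (K @ corners_cam.T).T         # (8, 3) homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective division
    xmin, ymin = uv.min(axis=0)
    xmax, ymax = uv.max(axis=0)
    return float(xmin), float(ymin), float(xmax), float(ymax)
```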

A Study on Utilizing Raspberry Pi and Multi-Sensors for Effective Time Management in Shared Spaces (공유 공간에서의 효과적인 사용 시간 관리를 위한 라즈베리 파이와 다중 센서 활용에 관한 연구)

  • Sung Jin Kim;Hyun Bin Jeong;Chae Ryeong Ahn;Hyeon Bin Yang;Da Hyeon Kim;Ju Heon Lee;Jai Soon Baek
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.661-664 / 2023
  • The goal of this project is to develop an automatic time-tracking system using a Raspberry Pi and sensor technology to improve the management of customer usage time in shared spaces. The system uses three sensors, a load cell, a vibration sensor, and a LiDAR sensor, to detect whether a user is present in a chair, to distinguish a person from an object, and to determine when the user leaves the chair. In particular, when a customer sits down, the system automatically starts timing and measures the usage time in real time. The collected information is provided through a web-based user interface, making usage-time management more convenient.
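
The timing logic described above can be illustrated with a minimal state machine that starts a timer when the fused sensors report a seated person and stops it when the seat is released; the fusion rule and the class below are hypothetical, not the project's actual implementation.

```python
# Minimal seat-usage timer: starts when a seated person is detected, stops when
# the seat is released. Requiring all three sensors to agree is an assumption
# made for this sketch, not the project's actual fusion rule.
import time

class SeatTimer:
    def __init__(self) -> None:
        self.start_time = None

    def update(self, load_cell_on: bool, vibration_on: bool, lidar_person: bool):
        """Feed one sensor reading per call; returns elapsed seconds when the user leaves."""
        occupied = load_cell_on and vibration_on and lidar_person  # person, not an object
        if occupied and self.start_time is None:
            self.start_time = time.monotonic()              # user sat down: start timing
        elif not occupied and self.start_time is not None:
            elapsed = time.monotonic() - self.start_time    # user left: report usage time
            self.start_time = None
            return elapsed
        return None
```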

Development of a Fluorescence Measurement System Capable of Rapid Red Tide Monitoring (신속한 적조 예찰이 가능한 형광 측정시스템 개발)

  • Kyung-hoon Baek;Yeongji Oh;Hyeonseo Cho;Yoonja Kang;Joon-seok Lee
    • Journal of Sensor Science and Technology / v.33 no.1 / pp.30-33 / 2024
  • The occurrence of harmful algae along the coast of Korea has caused damage to the aquaculture industry and deterioration of the coastal ecosystem. A method is required to predict their outbreaks in real time on site. Therefore, this study attempted to develop a small hybrid optical sensor and a real-time monitoring system based on LiDAR that can be used in the field and in the laboratory and applied to various platforms. The developed instrument, FMS-L, estimates the amount of chlorophyll a (Chl a) in a sample by measuring and analyzing the fluorescence emitted in response to the irradiating light. The accuracy of FMS-L was verified by measuring the concentrations of standard chlorophyll a substances and of Margalefidinium polykrikoides. In addition, its precision was verified by comparing FMS-L measurements with those of the commercial instrument Phyto-PAM-II. The equipment is compact and easy to move; therefore, it can be easily applied to field surveys, allows short measurement times (10 s), and can be operated at a distance of 10 m from the measurement site.
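
The calibration against chlorophyll a standards implied by the abstract can be sketched as a least-squares calibration curve; the linear model and all of the data below are invented for illustration.

```python
# Hypothetical linear calibration of the fluorescence signal against Chl a standards.
# Both the calibration data and the linear model are invented for illustration.
import numpy as np

known_chl_a = np.array([0.5, 1.0, 2.0, 5.0, 10.0])          # ug/L standards (invented)
fluorescence = np.array([12.0, 23.0, 45.0, 110.0, 215.0])   # sensor readings (invented)

slope, intercept = np.polyfit(fluorescence, known_chl_a, deg=1)

def estimate_chl_a(signal: float) -> float:
    """Convert a fluorescence reading into an estimated Chl a concentration (ug/L)."""
    return slope * signal + intercept

print(round(estimate_chl_a(80.0), 2))
```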

Drone Obstacle Avoidance Algorithm using Camera-based Reinforcement Learning (카메라 기반 강화학습을 이용한 드론 장애물 회피 알고리즘)

  • Jo, Si-hun;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.27 no.5 / pp.63-71 / 2021
  • Among drone autonomous flight technologies, obstacle avoidance is a very important technology that can prevent damage to the drone or its surroundings and avert danger. Although LiDAR-based obstacle avoidance shows relatively high accuracy and is widely used in recent studies, it has the disadvantages of a high unit price and limited capacity for processing visual information. Therefore, this paper proposes an obstacle avoidance algorithm for drones using camera-based PPO (Proximal Policy Optimization) reinforcement learning, which is relatively inexpensive and highly scalable because it uses visual information. The drone, obstacles, and target points are placed randomly in a three-dimensional learning environment, stereo images are obtained using a Unity camera, and YOLOv4-Tiny object detection is performed. Next, the distance between the drone and each detected object is measured through triangulation with the stereo camera. Based on this distance, the presence or absence of an obstacle is determined; penalties are given for obstacles and rewards for target points. The experiments show that the camera-based obstacle avoidance algorithm achieves accuracy and average time to reach the target point comparable to a LiDAR-based algorithm, so it is highly likely to be useful.
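
The distance step described here is standard stereo triangulation, depth = focal length × baseline / disparity; the focal length and baseline in the sketch are placeholder values, not the paper's camera setup.

```python
# Stereo triangulation: depth Z = f * B / d, where d is the disparity (pixel shift)
# of the object's horizontal position between the left and right images.
# The focal length and baseline below are placeholders, not the paper's camera setup.
FOCAL_PX = 640.0     # focal length in pixels
BASELINE_M = 0.2     # distance between the two cameras in metres

def depth_from_boxes(left_center_x: float, right_center_x: float) -> float:
    """Estimate the distance to an object detected in both stereo images."""
    disparity = left_center_x - right_center_x
    if disparity <= 0:
        raise ValueError("the object must appear further to the left in the left image")
    return FOCAL_PX * BASELINE_M / disparity

# Example: an obstacle whose detection box centre shifts by 32 px between views.
print(depth_from_boxes(400.0, 368.0))   # -> 4.0 metres
```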

Empirical Research on Improving Traffic Cone Considering LiDAR's Characteristics (LiDAR의 특성을 고려한 자율주행 대응 교통콘 개선 실증 연구)

  • Kim, Jiyoon;Kim, Jisoo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.253-273 / 2022
  • Automated vehicles rely on information collected through sensors to drive; therefore, the uncertainty of the information collected from a sensor is an important issue to address. To this end, research in the road and traffic field seeks to resolve this sensor uncertainty through infrastructure or facilities. Accordingly, this study developed a traffic cone that can maintain its gaze-guidance function at construction sites by securing sufficient LiDAR detection performance even in rainy conditions, and verified the improvement through a demonstration. Two types of cones, a cross type and a flat type, were manufactured to increase reflective performance compared with an existing cone. The demonstration confirms that the flat-type traffic cone has better detection performance than the existing cone even in 50 mm/h rainfall, which affects a driver's field of vision. In addition, it was confirmed that both cones maintained their clear-day detection level under 20 mm/h rainfall. In the future, improvement measures should be developed so that traffic cones that can improve the safety of automated driving can be put into practice.