• Title/Summary/Keyword: light detection and ranging

Search Results: 223

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.6
    • /
    • pp.635-644
    • /
    • 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) have been used in various applications, and optical images are mainly used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets as well as optical images for training the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) models. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features, which represent statistical texture information, derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data, but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings by applying hybrid fusion - a method mixing early fusion and late fusion - resulted in a 32.65% improvement in building detection rate compared to training with optical images only. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
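As background for readers unfamiliar with fusion strategies, the early/late fusion distinction the abstract refers to can be illustrated with a toy NumPy sketch (the array shapes, function names, and averaging weights here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def early_fusion(optical, infrared, lidar):
    """Early fusion: stack the modalities as channels of one input tensor."""
    return np.concatenate([optical, infrared, lidar], axis=-1)

def late_fusion(scores_per_modality, weights=None):
    """Late fusion: combine per-modality detection scores, here by a
    (possibly weighted) average."""
    scores = np.stack(scores_per_modality, axis=0)
    if weights is None:
        weights = np.full(len(scores_per_modality), 1.0 / len(scores_per_modality))
    return np.tensordot(np.asarray(weights), scores, axes=1)
```

A hybrid scheme, as described in the abstract, would apply early fusion to some modalities at the input and late fusion to the resulting per-branch predictions.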

Development of Autonomous Algorithm for Boat Using Robot Operating System (로봇운영체제를 이용한 보트의 자율운항 알고리즘 개발)

  • Jo, Hyun-Jae;Kim, Jung-Hyeon;Kim, Su-Rim;Woo, Ju-Hyun;Park, Jong-Yong
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.58 no.2
    • /
    • pp.121-128
    • /
    • 2021
  • According to the increasing interest in and demand for Autonomous Surface Vessels (ASVs), autonomous navigation systems covering obstacle detection, avoidance, and path planning are being developed. In general, an autonomous navigation algorithm controls the ship by detecting obstacles with various sensors and planning a path for collision avoidance. This study aims to construct and validate an autonomous navigation algorithm that integrates various sensors using the Robot Operating System (ROS). In this study, the safety zone technique was used to avoid obstacles. The safety zone was selected by an algorithm that determines an obstacle-free area using a 2D LiDAR. Then, the drift angle of the ship was controlled by the thrust difference between the port and starboard sides based on PID control. The algorithm's performance was verified by participating in the 2020 Korea Autonomous BOAT (KABOAT) competition.
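The heading-control idea described above - driving a thrust difference between the port and starboard sides with a PID controller - can be sketched generically as follows (gains, time step, and thrust units are illustrative, not taken from the paper):

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def differential_thrust(base_thrust, correction):
    """Turn by unbalancing port/starboard thrust around a base value."""
    return base_thrust + correction, base_thrust - correction
```

At each control step, the heading (drift-angle) error would be fed to `PID.update`, and the output applied as the correction in `differential_thrust`.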

Displacement Measuring Method using Terrestrial LiDAR for Safety and Serviceability Monitoring of Steel Beams (지상 LiDAR를 이용한 철골보의 안전 및 사용성 모니터링을 위한 변위 계측기법)

  • Lee Hong-Min;Park Hyo-Seon;Lee Im-Pyeong
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2005.04a
    • /
    • pp.190-197
    • /
    • 2005
  • To monitor the safety and serviceability of a structure, structural responses, including displacements due to various design and unexpected loadings, must be measured. The maximum displacement of a structure and its distribution can be used as a direct index for assessing its stiffness. For this reason, measurement of the maximum displacement of a structure has been studied in diverse ways. However, there is still no practical method for measuring the displacement of a structure. Therefore, in this paper, a new displacement measuring method is developed, and the accuracy of LiDAR is examined in detail for the development of such a method.


A Fast Ground Segmentation Method for 3D Point Cloud

  • Chu, Phuong;Cho, Seoungjae;Sim, Sungdae;Kwak, Kiho;Cho, Kyungeun
    • Journal of Information Processing Systems
    • /
    • v.13 no.3
    • /
    • pp.491-499
    • /
    • 2017
  • In this study, we proposed a new approach to segmenting ground and nonground points acquired from a 3D laser range sensor. The primary aim of this research was to provide a fast and effective method for ground segmentation. In each frame, we divide the point cloud into small groups. All threshold points and start-ground points in each group are then analyzed. To determine threshold points, we rely on three features: gradient, lost threshold points, and abnormalities in the distance between the sensor and a particular threshold point. After a threshold point is determined, a start-ground point is identified by considering the height difference between two consecutive points. All points from a start-ground point to the next threshold point are ground points; the other points are nonground. This process is repeated until all points are labelled.
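A heavily simplified sketch of the core idea - labelling points as ground while the height difference between consecutive points stays small - is shown below (the threshold value and the flat-start assumption are illustrative; the paper's full method additionally analyzes gradient and lost threshold points):

```python
def segment_ground(points, height_threshold=0.15):
    """Label (x, y, z) points 'ground'/'nonground' by comparing consecutive
    heights along a scan line. Simplified: assumes the scan starts on ground."""
    labels = []
    prev_ground_z = points[0][2]
    for x, y, z in points:
        if abs(z - prev_ground_z) <= height_threshold:
            labels.append('ground')
            prev_ground_z = z  # track the last accepted ground height
        else:
            labels.append('nonground')
    return labels
```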

Intelligent robotic walker with actively controlled human interaction

  • Weon, Ihn-Sik;Lee, Soon-Geul
    • ETRI Journal
    • /
    • v.40 no.4
    • /
    • pp.522-530
    • /
    • 2018
  • In this study, we developed a robotic walker that actively controls its speed and direction of movement according to the user's gait intention. Sensor fusion between a low-cost light detection and ranging (LiDAR) sensor and inertial measurement units (IMUs) helps determine the user's gait intention. The LiDAR determines the walking direction by detecting both knees, and the IMUs attached on each foot obtain the angular rate of the gait. The user's gait intention is given as the directional angle and the speed of movement. The two motors in the robotic walker are controlled with these two variables, which represent the user's gait intention. The estimated direction angle is verified by comparison with a Kinect sensor that detects the centroid trajectory of both the user's feet. We validated the robotic walker with an experiment by controlling it using the estimated gait intention.
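The walking-direction estimate from two detected knee positions can be illustrated with a toy geometric calculation (the coordinate convention and function name are assumptions for illustration, not the paper's algorithm):

```python
import math

def gait_direction(left_knee, right_knee):
    """Heading angle (rad) toward the midpoint of the two detected knees.
    Coordinates are (lateral, forward) in the walker frame; 0 means the
    user is centered straight ahead."""
    mid_lateral = (left_knee[0] + right_knee[0]) / 2.0
    mid_forward = (left_knee[1] + right_knee[1]) / 2.0
    return math.atan2(mid_lateral, mid_forward)
```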

MEASURING CROWN PROJECTION AREA AND TREE HEIGHT USING LIDAR

  • Kwak Doo-Ahn;Lee Woo-Kyun;Son Min-Ho
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.515-518
    • /
    • 2005
  • LiDAR (Light Detection and Ranging) combined with digital aerial photographs can be used to measure tree growth factors such as total height, clear-length height, DBH (diameter at breast height), and crown projection area. Delineating crowns is an important process for identifying and numbering individual trees. Crown delineation can be done by the watershed method, which segments basins according to the elevation values of the DSMmax produced from LiDAR. Digital aerial photographs can be used to validate the crown projection area obtained from LiDAR. Tree height can then be acquired by image processing with a window filter (3 cells × 3 cells or 5 cells × 5 cells) that compares the grid elevation values within each individual crown segmented by the watershed method.

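The window-filter step for tree height - comparing grid elevation values within a moving 3×3 or 5×5 neighborhood - resembles local-maximum (treetop) detection on a canopy height model. A generic pure-Python sketch (grid encoding and the positive-height condition are illustrative assumptions):

```python
def local_maxima(grid, window=3):
    """Return (row, col, height) of cells that are the maximum of their
    window x window neighborhood, as a stand-in for treetop detection."""
    rows, cols = len(grid), len(grid[0])
    r = window // 2
    peaks = []
    for i in range(rows):
        for j in range(cols):
            neighborhood = [
                grid[ii][jj]
                for ii in range(max(0, i - r), min(rows, i + r + 1))
                for jj in range(max(0, j - r), min(cols, j + r + 1))
            ]
            if grid[i][j] == max(neighborhood) and grid[i][j] > 0:
                peaks.append((i, j, grid[i][j]))
    return peaks
```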

Investigation of Airborne LIDAR Intensity data

  • Chang Hwijeong;Cho Woosug
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.646-649
    • /
    • 2004
  • A LiDAR (Light Detection and Ranging) system can record intensity data as well as range data. Recently, LiDAR intensity data have been widely used for land-cover classification, as ancillary data for feature extraction, for vegetation species identification, and so on. Since the intensity return value is associated with several factors, the same feature is not consistent within a single flight or across multiple flights. This paper investigated the correlation between intensity and range data. Once the effect of range was determined, single and multiple flight line normalizations were performed using an empirical function derived from the relationship between range and return intensity.

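A common simple form of range normalization - assumed here for illustration, not necessarily the paper's empirical function - scales intensity by a power of the range ratio, compensating for signal falloff with distance:

```python
def normalize_intensity(intensity, rng, ref_range=1000.0, exponent=2.0):
    """Normalize a LiDAR intensity return to a reference range, assuming an
    inverse power-law falloff (exponent 2 is the common range-squared model)."""
    return intensity * (rng / ref_range) ** exponent
```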

A Study of Store & Management of Airborne LiDAR Data (항공LiDAR 데이터의 관계형 DBMS 저장 및 관리방안 연구)

  • Kim, Ho-Kun;Kwon, Chang-Hee
    • Journal of Advanced Navigation Technology
    • /
    • v.12 no.6
    • /
    • pp.548-553
    • /
    • 2008
  • While the map-making process using field survey tools such as MicroStation was relatively time-consuming in the past, we can now make more precise maps effectively with airborne LiDAR and GPS devices. The data captured by LiDAR are also very large in volume, so a relational DBMS is needed to manage and process them. In this study, we propose how to store and manage LiDAR data using an RDBMS.

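Storing LiDAR returns in relational tables, as the paper proposes, can be sketched with SQLite (the schema, table name, and bounding-box query are illustrative assumptions, not the paper's design):

```python
import sqlite3

def store_points(db_path, points):
    """Store LiDAR returns (x, y, z, intensity) in a relational table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS lidar_points ("
        "id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL, intensity REAL)"
    )
    con.executemany(
        "INSERT INTO lidar_points (x, y, z, intensity) VALUES (?, ?, ?, ?)",
        points,
    )
    con.commit()
    return con

def points_in_box(con, xmin, xmax, ymin, ymax):
    """Retrieve points inside a 2D bounding box."""
    cur = con.execute(
        "SELECT x, y, z, intensity FROM lidar_points "
        "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
        (xmin, xmax, ymin, ymax),
    )
    return cur.fetchall()
```

In practice a spatial index (e.g. an R-tree) would replace the plain `BETWEEN` scan for large point clouds.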

LiDAR-based Mapping Considering Laser Reflectivity in Indoor Environments (실내 환경에서의 레이저 반사도를 고려한 라이다 기반 지도 작성)

  • Roun Lee;Jeonghong Park;Seonghun Hong
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.135-142
    • /
    • 2023
  • Light detection and ranging (LiDAR) sensors have been most widely used in terrestrial robotic applications because they can provide dense and precise measurements of the surrounding environments. However, the reliability of LiDAR measurements can vary considerably with the laser reflectivity of the surface materials. This study presents a LiDAR-based mapping method that is robust to varying laser reflectivities in indoor environments, using the framework of simultaneous localization and mapping (SLAM). The proposed method minimizes degradation of SLAM accuracy by checking and discarding potentially unreliable LiDAR measurements in the SLAM front-end process. The gaps in the point-cloud maps created by this filtering are then filled by a Gaussian process regression method. Experimental results with a mobile robot platform in an indoor environment are presented to validate the effectiveness of the proposed methodology.

LiDAR-based Mobile Robot Exploration Considering Navigability in Indoor Environments (실내 환경에서의 주행가능성을 고려한 라이다 기반 이동 로봇 탐사 기법)

  • Hyejeong Ryu;Jinwoo Choi;Taehyeon Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.4
    • /
    • pp.487-495
    • /
    • 2023
  • This paper presents a method for autonomous exploration of indoor environments using a 2-dimensional Light Detection And Ranging (LiDAR) scanner. The proposed frontier-based exploration method considers navigability from the current robot position to extracted frontier targets. An approach to constructing the point cloud grid map that accurately reflects the occupancy probability of glass obstacles is proposed, enabling identification of safe frontier grids on the safety grid map calculated from the point cloud grid map. Navigability, indicating whether the robot can successfully navigate to each frontier target, is calculated by applying the skeletonization-informed rapidly exploring random tree algorithm to the safety grid map. While conventional exploration approaches have focused on frontier detection and target position/direction decision, the proposed method discusses a safe navigation approach for the overall exploration process until the completion of mapping. Real-world experiments have been conducted to verify that the proposed method leads the robot to avoid glass obstacles and safely navigate the entire environment, constructing the point cloud map and calculating the navigability with low computing time deviation.
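The frontier-detection step underlying this class of exploration methods - finding free cells that border unknown space - can be sketched on a toy occupancy grid (the cell encoding is an assumption; the paper's safety-grid and navigability computations are not shown):

```python
def find_frontiers(grid):
    """Return frontier cells: free cells (0) with a 4-connected unknown
    neighbor (-1). Occupied cells are encoded as 1."""
    frontiers = []
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] != 0:
                continue  # only free cells can be frontiers
            neighbors = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(0 <= a < rows and 0 <= b < cols and grid[a][b] == -1
                   for a, b in neighbors):
                frontiers.append((i, j))
    return frontiers
```

The proposed method would then rank such frontiers by navigability rather than by distance alone.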