• Title/Summary/Keyword: LiDAR point cloud


Fruit Tree Row Recognition and 2D Map Generation for Autonomous Driving in Orchards (과수원 자율 주행을 위한 과수 줄 인식 및 2차원 지도 생성 방법)

  • Ho Young Yun;Duksu Kim
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.1-8, 2024
  • We present a novel algorithm for creating 2D maps tailored for autonomous navigation within orchards. Recognizing that fruit trees in orchards are typically aligned in rows, our primary goal is to accurately detect these tree rows and project this information onto the map. We first propose a simple algorithm that recognizes trees from point cloud data by analyzing the spatial distribution of points, and then introduce a method for detecting fruit tree rows based on the positions of the recognized trees, which are integrated into the 2D orchard map. The proposed approach was validated using real-world orchard point cloud data acquired via LiDAR. The results demonstrate a high tree detection accuracy of 90% and precise tree row mapping, confirming the method's efficacy. Additionally, the generated maps facilitate the development of natural navigation paths that align with the orchard's layout.
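
The two-step idea in this abstract (detect individual trees from the spatial distribution of points, then fit rows through the detected trees) can be sketched as follows. The grid-density detector, cell size, and least-squares row fit are illustrative stand-ins, not the authors' exact algorithm:

```python
import numpy as np

def detect_tree_centers(points, cell=0.5, min_pts=20):
    """Naive tree detection: bin XY coordinates of 3D points into a grid
    and treat dense cells as tree trunks (a stand-in for the paper's
    spatial-distribution analysis; `cell` and `min_pts` are illustrative)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells, counts = np.unique(ij, axis=0, return_counts=True)
    dense = cells[counts >= min_pts]
    return (dense + 0.5) * cell  # cell centers as estimated tree positions

def fit_row_line(centers):
    """Fit one tree row as a 2D line y = slope*x + intercept by least squares."""
    slope, intercept = np.polyfit(centers[:, 0], centers[:, 1], 1)
    return slope, intercept
```

A detected row line like this can then be rasterized onto an occupancy-grid map and used to derive navigation paths parallel to the rows.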

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1245-1254, 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes the 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, it is necessary to transform the point cloud acquired from stereo cameras or LiDAR along 6 degrees of freedom (DOF) and apply it to the SAR signal processing. However, matching images is difficult due to the different geometric structures of images acquired from different sensors. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud using a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. The SAR image reconstructed using the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index, compared to SAR images reconstructed from radar coordinates.
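
The entropy objective that drives the 6-DOF search can be sketched as below. The histogram binning is illustrative, and the paper's gradient descent over the pose parameters is not reproduced here; the point is only that a sharper (better-focused) SAR image has a lower amplitude-histogram entropy, which makes entropy a usable descent metric:

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy of the image amplitude histogram. Better-focused
    SAR images concentrate amplitude into fewer histogram bins, so the
    6-DOF pose can be optimized by descending this metric."""
    hist, _ = np.histogram(np.abs(img), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())
```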

Applicability Assessment of Disaster Rapid Mapping: Focused on Fusion of Multi-sensing Data Derived from UAVs and Disaster Investigation Vehicle (재난조사 특수차량과 드론의 다중센서 자료융합을 통한 재난 긴급 맵핑의 활용성 평가)

  • Kim, Seongsam;Park, Jesung;Shin, Dongyoon;Yoo, Suhong;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing, v.35 no.5_2, pp.841-850, 2019
  • The purpose of this study is to strengthen rapid disaster-mapping capability by improving positioning accuracy and fusing multi-sensing point cloud data derived from Unmanned Aerial Vehicles (UAVs) and a disaster investigation vehicle. Positioning accuracy was evaluated for two drone-mapping procedures in Agisoft PhotoScan: 1) general geo-referencing by self-calibration, and 2) the proposed geo-referencing with an optimized camera model using fixed, accurate Interior Orientation Parameters (IOPs) derived from an indoor camera calibration test and bundle adjustment. The analysis showed that positioning RMS error improved from 2-3 m to 0.11-0.28 m horizontally and from 2.85 m to 0.45 m vertically. In addition, the proposed fusion of multi-sensing point clouds under height constraints reduced the point matching error to under about 0.07 m. Accordingly, the proposed data fusion approach will enable effective and timely generation of ortho-imagery and high-resolution three-dimensional geographic data for national disaster management in the future.

Design of Memory-Efficient Octree to Query Large 3D Point Cloud (대용량 3차원 포인트 클라우드의 탐색을 위한 메모리 효율적인 옥트리의 설계)

  • Han, Soohee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.31 no.1, pp.41-48, 2013
  • The aim of the present study is to design a memory-efficient octree for querying large 3D point clouds. This is achieved by omitting the variables for the minimum bounding hexahedron (MBH) of each octree node, expressed in C++, and instead passing the re-estimated MBH from parent nodes to child nodes. Further efficiency is gained by a two-fold process of generating pseudo and regular trees, which declares an array for all anticipated nodes instead of using the new operator for each child node. Experiments were conducted by constructing tree structures and querying neighbor points from a real point cloud of more than 18 million points. Compared with conventional methods that store MBH information in each node, the suggested methods, despite a trade-off between speed and memory efficiency, proved more memory-efficient and are practical alternatives applicable to large 3D point clouds.
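
The paper's central memory-saving idea, in outline: no bounding box is stored in any node; the child MBH is re-derived from the parent's bounds during descent. A minimal sketch follows (in Python rather than the paper's C++, with an illustrative leaf capacity and depth limit, and without the paper's pseudo/regular two-pass array allocation):

```python
class Node:
    # No min/max bound fields stored per node: bounds are re-derived from
    # the parent's bounds during traversal, which is the memory-saving idea.
    __slots__ = ("children", "points")
    def __init__(self):
        self.children = None   # list of 8 child nodes, or None for a leaf
        self.points = []

def _child_bounds(lo, hi, idx):
    """Compute child octant `idx` bounds on the fly from the parent's (lo, hi)."""
    mid = [(l + h) / 2 for l, h in zip(lo, hi)]
    clo = [mid[d] if idx >> d & 1 else lo[d] for d in range(3)]
    chi = [hi[d] if idx >> d & 1 else mid[d] for d in range(3)]
    return clo, chi

def insert(node, lo, hi, p, depth=0, max_depth=8, leaf_cap=8):
    if node.children is None:
        node.points.append(p)
        if len(node.points) > leaf_cap and depth < max_depth:
            node.children = [Node() for _ in range(8)]
            pts, node.points = node.points, []
            for q in pts:                      # redistribute into children
                insert(node, lo, hi, q, depth, max_depth, leaf_cap)
        return
    mid = [(l + h) / 2 for l, h in zip(lo, hi)]
    idx = sum((p[d] >= mid[d]) << d for d in range(3))
    clo, chi = _child_bounds(lo, hi, idx)
    insert(node.children[idx], clo, chi, p, depth + 1, max_depth, leaf_cap)

def query(node, lo, hi, qlo, qhi, out):
    """Collect points inside the axis-aligned query box [qlo, qhi],
    re-deriving node bounds from the root bounds during descent."""
    if any(hi[d] < qlo[d] or lo[d] > qhi[d] for d in range(3)):
        return
    if node.children is None:
        out.extend(p for p in node.points
                   if all(qlo[d] <= p[d] <= qhi[d] for d in range(3)))
        return
    for i in range(8):
        clo, chi = _child_bounds(lo, hi, i)
        query(node.children[i], clo, chi, qlo, qhi, out)
```

Only the root bounds ever need to be kept; every other MBH exists transiently on the call stack, which is what removes the per-node storage cost.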

Collision Avoidance Sensor System for Mobile Crane (전지형 크레인의 인양물 충돌방지를 위한 환경탐지 센서 시스템 개발)

  • Kim, Ji-Chul;Kim, Young Jea;Kim, Mingeuk;Lee, Hanmin
    • Journal of Drive and Control, v.19 no.4, pp.62-69, 2022
  • Construction machinery is exposed to accidents such as collisions, entrapment, and overturning during operation. In particular, a mobile crane is operated only with the driver's vision and the limited information of an assistant worker, so the risk of accidents is high. Recently, collision avoidance devices using sensors such as cameras and LiDAR have been applied, but they are still insufficient to prevent collisions in omnidirectional 3D space. In this study, a rotating LiDAR device was developed and applied to a 250-ton crane to obtain a full-space point cloud, and an algorithm was developed to provide distance information and safety status to the driver. A deep-learning segmentation algorithm was also used to classify human workers. The developed device could recognize obstacles within 100 m over a 360-degree range. In the experiment, the safety distance was calculated with an error of 10.3 cm at 30 m, giving the operator an accurate distance and collision alarm.
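
The distance-and-alarm logic described here reduces to a nearest-point check around the suspended load. A minimal sketch, in which the 5 m safety radius and the load position are illustrative parameters rather than the paper's calibrated values:

```python
import numpy as np

def check_collision_risk(cloud, load_pos, safety_radius=5.0):
    """Return the distance to the nearest obstacle point around the
    suspended load, and whether it breaches the safety radius
    (illustrative threshold, not the paper's calibrated value)."""
    d = np.linalg.norm(np.asarray(cloud, float) - np.asarray(load_pos, float),
                       axis=1)
    dmin = float(d.min())
    return dmin, dmin < safety_radius
```

In practice the cloud would first be segmented so that the crane's own structure and the classified human workers are handled separately from generic obstacles.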

Development of the Program for Reconnaissance and Exploratory Drones based on Open Source (오픈 소스 기반의 정찰 및 탐색용 드론 프로그램 개발)

  • Chae, Bum-sug;Kim, Jung-hwan
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.1, pp.33-40, 2022
  • With the recent increase in the development of military drones, they are being adopted into combat systems at the battalion level or higher. However, under the current unit organization of the Korean military, it is difficult to use drones in battles below the platoon level. In this paper, we therefore developed a program for reconnaissance and exploration drones equipped with a thermal imaging camera and a LiDAR sensor that can be applied in battles below the platoon level. Using these drones, we studied the possibility and feasibility of small-scale combat drones that can find hidden enemies, search for an appropriate detour, and conduct reconnaissance and search of battlefields, hiding spots, and cover through image processing. Besides searching for enemies lying in ambush, the proposed drone can be used to check the optimal movement path for a combat unit on the move, or the optimal place for cover or hiding. In particular, because the features of the terrain can be checked from various viewpoints through 3D modeling, routes other than the one recommended by the program can also be examined. We verified flight feasibility by designing and assembling a drone from open-source racing drone parts with added LiDAR and thermal imaging camera modules, developed autonomous flight and search functions usable even by non-professional drone operators based on open-source software, and installed them to verify their feasibility.

A Parallel Approach for Accurate and High Performance Gridding of 3D Point Data (3D 점 데이터 그리딩을 위한 고성능 병렬처리 기법)

  • Lee, Changseop;Rizki, Permata Nur Miftahur;Lee, Heezin;Oh, Sangyoon
    • KIPS Transactions on Computer and Communication Systems, v.3 no.8, pp.251-260, 2014
  • 3D point data are utilized across industry domains for their highly accurate representation of an object's surface, and are used substantially in geography for terrain scanning and analysis. Generally, 3D point data must be transformed by gridding, which produces a regularly spaced array of z values from irregularly spaced xyz data, but interpolating the grid coordinates requires long processing time and high resource cost. Kriging interpolation is attractive because it is more accurate than other methods; however, it has not been used frequently because its processing is complex and slow. In this paper, we present a parallel gridding algorithm that incorporates Kriging, together with a grid data structure that fits the algorithm to the MapReduce paradigm. Experiments were conducted on 1.6 and 4.3 billion points from airborne LiDAR files using the proposed MapReduce structure, and the results show that total execution time decreased by more than a factor of three compared to the conventional sequential program on three heterogeneous clusters.
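
The per-cell work that the paper distributes via MapReduce is the ordinary kriging system, which can be sketched serially as follows. The spherical variogram and its sill/range/nugget parameters are illustrative assumptions; in the paper each mapper would solve this for its own subset of grid cells:

```python
import numpy as np

def ordinary_kriging(xy, z, grid_pts, sill=1.0, rng_=10.0, nugget=0.0):
    """Minimal serial ordinary kriging with a spherical variogram.
    Solves the (n+1)x(n+1) kriging system once per grid point; the
    parallel version partitions `grid_pts` across MapReduce workers."""
    def gamma(h):
        h = np.minimum(h, rng_)
        return nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    n = len(xy)
    D = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(D)        # variogram block
    A[n, n] = 0.0               # Lagrange multiplier row/column
    out = []
    for g in grid_pts:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - g, axis=1))
        w = np.linalg.solve(A, b)
        out.append(float(w[:n] @ z))   # weighted sum of sample values
    return np.array(out)
```

Because the left-hand matrix depends only on the sample locations, it can be factorized once per partition and reused for every grid cell in that partition, which is what makes the per-cell cost tractable at billions of points.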

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information, v.29 no.1, pp.21-30, 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities built as infrastructure for autonomous cooperative driving. Features are extracted from point cloud data acquired by LiDAR and compiled into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud data, the DBSCAN method was applied to the points, a modified OTSU method was applied to the reflected intensity, and a cylindrical projection was then applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflection intensity were set as the features of a facility, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from LiDAR data, a common CNN model was selected, trained, and tested, showing an accuracy of about 90% or more and confirming the feasibility of facility recognition. Through continued experiments, we will improve the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
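
The intensity-thresholding step can be sketched in its standard form (the paper uses a modified OTSU; the variant is not specified in the abstract, so the classic between-class-variance formulation is shown, with illustrative bin count and synthetic intensities):

```python
import numpy as np

def otsu_threshold(intensity, bins=256):
    """Otsu's threshold on LiDAR reflection intensity, used here to
    separate high-reflectivity facility returns (reflector sheets)
    from background returns."""
    hist, edges = np.histogram(intensity, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 probability
    w1 = 1.0 - w0                        # class-1 probability
    mu = np.cumsum(p * centers)          # class-0 cumulative mean mass
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)   # between-class variance
    var_b[~np.isfinite(var_b)] = 0.0
    return centers[np.argmax(var_b)]
```

Points above the threshold would then be clustered with DBSCAN and projected cylindrically, per the pipeline described in the abstract.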

Analysis of Optimal Pathways for Terrestrial LiDAR Scanning for the Establishment of Digital Inventory of Forest Resources (디지털 산림자원정보 구축을 위한 최적의 지상LiDAR 스캔 경로 분석)

  • Ko, Chi-Ung;Yim, Jong-Su;Kim, Dong-Geun;Kang, Jin-Taek
    • Korean Journal of Remote Sensing, v.37 no.2, pp.245-256, 2021
  • This study identified the applicability of a LiDAR sensor to forest resource inventories by comparing tree position, height, and DBH data obtained by the sensor with those from existing forest inventory methods, for Cryptomeria japonica in the Jeolmul forest in Jeju, South Korea. A backpack personal LiDAR (Greenvalley International, Model D50) was employed. To facilitate data collection, the scanning patterns were divided into seven, considering sample plot density and work efficiency, and the accuracy of estimating the variables of each tree was assessed. The time spent acquiring and processing the data by each method was compared to evaluate efficiency. The findings showed that the LiDAR detected 100% of standing trees. High statistical accuracy was also observed for both Pattern 5 (DBH: RMSE 1.07 cm, bias -0.79 cm; height: RMSE 0.95 m, bias -3.2 m) and Pattern 7 (DBH: RMSE 1.18 cm, bias -0.82 cm; height: RMSE 1.13 m, bias -2.62 m), compared with the results of the typical inventory. Regarding time, 115 to 135 minutes per hectare were needed to process the LiDAR data, while 375 to 1,115 minutes were spent with the existing method, proving the higher efficiency of the device. It can thus be concluded that a backpack personal LiDAR increases the efficiency of a forest resources inventory in a planted coniferous forest with understory vegetation, and that further research in a variety of forests is warranted.
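
The accuracy figures quoted above (RMSE and bias of LiDAR-derived DBH and height against field measurements) follow the usual definitions, sketched here:

```python
import numpy as np

def rmse_bias(estimated, reference):
    """RMSE and bias of sensor-derived values (e.g. DBH, height)
    against field-measured reference values."""
    e = np.asarray(estimated, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(e ** 2))), float(np.mean(e))
```

Note that bias is signed: the negative height biases reported above mean the sensor systematically underestimated tree height relative to the field measurements.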

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.1099-1110, 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance, and research on deep learning models based on the fusion of camera and LiDAR sensors is being actively conducted. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Attacks on existing multi-sensor-based autonomous driving recognition systems have focused on suppressing obstacle detection by lowering the confidence score of the object recognition model, but such attacks work only against the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered; moreover, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR data. Attack performance experiments with different scaling sizes caused fusion errors in more than 77% of cases on average.
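
The core perturbation, rescaling the LiDAR points fed to the calibration model, can be sketched as follows. Scaling about the centroid and the 1.1 factor are illustrative choices, not the paper's exact attack pipeline:

```python
import numpy as np

def scale_points(points, factor=1.1):
    """Scale a point cloud about its centroid. When the rescaled cloud
    is fed to a camera-LiDAR calibration model, its geometry no longer
    matches the camera image, degrading the predicted extrinsics
    (`factor` is an illustrative attack parameter)."""
    points = np.asarray(points, float)
    c = points.mean(axis=0)
    return (points - c) * factor + c
```

Because the centroid is preserved and the distortion is uniform, the perturbed cloud remains visually plausible, which is exactly why such an attack is hard to detect by inspection.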