• Title/Summary/Keyword: Cloud-point extraction


Semi-automatic Extraction of 3D Building Boundary Using DSM from Stereo Images Matching (영상 매칭으로 생성된 DSM을 이용한 반자동 3차원 건물 외곽선 추출 기법 개발)

  • Kim, Soohyeon;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.34 no.6_1 / pp.1067-1087 / 2018
  • Studies on LiDAR-based building boundary extraction have usually relied on a dense point cloud to cluster building rooftop areas and extract building outlines. However, when a DSM generated from stereo image matching is used instead, it is not trivial to cluster rooftop areas automatically because of outliers and large holes in the point cloud. We therefore propose a technique to extract building boundaries semi-automatically from a DSM created from stereo images. The technique combines watershed segmentation, which uses simple user input as markers, with a recursive MBR algorithm. Because the only input is marker information indicating building areas within the DSM, the method creates building boundaries efficiently while minimizing user interaction. A sketch of the marker-based watershed step follows this entry.
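
A minimal sketch of marker-based watershed segmentation of a DSM, assuming a 2D numpy array `dsm` and user-clicked (row, col) marker points; the function name, label scheme, and gradient choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from skimage.segmentation import watershed
from skimage.filters import sobel

def segment_rooftop(dsm, building_markers, background_markers):
    """Label rooftop pixels from sparse user markers."""
    gradient = sobel(dsm)                  # height discontinuities guide the flooding
    markers = np.zeros(dsm.shape, dtype=np.int32)
    for r, c in building_markers:          # label 2 = building seed
        markers[r, c] = 2
    for r, c in background_markers:        # label 1 = ground seed
        markers[r, c] = 1
    labels = watershed(gradient, markers)
    return labels == 2                     # boolean rooftop mask

# The rooftop mask would then feed a recursive MBR step that approximates
# the outline with nested minimum bounding rectangles.
```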

Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV (MMS로부터 취득된 LiDAR 점군데이터의 반사강도 영상과 UAV 영상의 정합을 위한 특징점 기반 매칭 기법 연구)

  • Choi, Yoonjo;Farkoushi, Mohammad Gholami;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.453-464 / 2019
  • Recently, as the demand for 3D geospatial information increases, the importance of rapid and accurate data construction has grown. Although many studies have registered UAV (Unmanned Aerial Vehicle) imagery to LiDAR (Light Detection and Ranging) data, which allows precise 3D data construction, studies using LiDAR data mounted on an MMS (Mobile Mapping System) remain scarce. This study therefore compared and analyzed nine feature-point-based matching algorithms for registering reflectance images converted from LiDAR point cloud intensity data acquired from an MMS with image data from a UAV. Our results indicate that the SIFT (Scale Invariant Feature Transform) algorithm stably secured high matching accuracy and extracted sufficient conjugate points even in various road environments. In the registration accuracy analysis, SIFT achieved an accuracy of about 10 pixels except when the overlapping area was small and the same pattern was repeated, which is reasonable considering the distortion introduced by the UAV altitude at the time of image capture. The results of this study are expected to serve as basic research for 3D registration of LiDAR point cloud intensity data and UAV imagery. A sketch of SIFT-based conjugate-point extraction follows this entry.
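
A minimal sketch of SIFT-based registration between a LiDAR intensity (reflectance) image and a UAV image, assuming both are already loaded as grayscale numpy arrays; variable names, the ratio threshold, and the homography model are illustrative assumptions.

```python
import cv2
import numpy as np

def register_sift(intensity_img, uav_img, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(intensity_img, None)
    kp2, des2 = sift.detectAndCompute(uav_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # reject outliers
    return H, good, inliers
```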

Development of Mean Stand Height Module Using Image-Based Point Cloud and FUSION S/W (영상 기반 3차원 점군과 FUSION S/W 기반의 임분고 분석 모듈 개발)

  • KIM, Kyoung-Min
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.4 / pp.169-185 / 2016
  • Mean stand height has recently been added as a new attribute to forest type maps, but it is too costly and time consuming to measure 9,100,000 points manually from countrywide stereo aerial photos. In addition, tree heights are frequently measured around tombs and forest edges, which poorly represent the interior of a stand. This work estimates mean stand height using an image-based point cloud extracted from stereo aerial photos with the FUSION S/W. A digital terrain model (DTM) was created by filtering the DSM point cloud, and the DTM was subtracted from the DSM to produce an nDSM representing object heights (buildings, trees, etc.). The RMSE was calculated to compare observed tree heights with those extracted from the nDSM; the resulting RMSE of average total plot height was 0.96 m. Individual tree heights over the whole study site were extracted using the USDA Forest Service's FUSION S/W. Finally, mean stand height was produced by averaging individual tree heights within each stand polygon of the forest type map. To automate mean stand height extraction with photogrammetric methods, a module was developed as an ArcGIS add-in toolbox. A sketch of the nDSM computation follows this entry.
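
A minimal sketch of the nDSM (normalized DSM) step and the per-stand averaging described above, assuming the DSM and DTM are co-registered 2D numpy arrays in metres; the array names and the clipping of negative residuals are illustrative assumptions.

```python
import numpy as np

def compute_ndsm(dsm, dtm):
    """Object heights = surface model minus terrain model."""
    ndsm = dsm - dtm
    return np.clip(ndsm, 0.0, None)    # treat negative residuals as ground

def mean_stand_height(tree_heights):
    """Average individual tree heights within one stand polygon."""
    heights = np.asarray(tree_heights, dtype=float)
    return float(heights.mean()) if heights.size else float("nan")
```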

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1245-1254 / 2023
  • Significant research has been conducted on W-band synthetic aperture radar (SAR) systems that utilize 77 GHz frequency modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, the point cloud acquired from stereo cameras or LiDAR must be transformed along 6 degrees of freedom (DOF) and applied to the SAR signal processing. However, matching images acquired from different sensors is difficult because of their different geometric structures. In this study, we present a method to extract an optimized depth map by estimating the 6 DOF of the point cloud with a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. Compared to SAR images reconstructed in radar coordinates, the image reconstructed with the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index. A sketch of entropy-driven 6-DOF optimization follows this entry.
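
A minimal sketch of entropy-driven 6-DOF search, assuming a user-supplied function `render_sar(pose)` that reprojects the point cloud with a given [tx, ty, tz, roll, pitch, yaw] pose and returns the reconstructed SAR image; that function, the finite-difference gradient, the step sizes, and the iteration count are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def optimize_pose(render_sar, pose0, lr=1e-2, eps=1e-3, iters=100):
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        grad = np.zeros(6)
        for i in range(6):                         # finite-difference gradient
            d = np.zeros(6); d[i] = eps
            grad[i] = (image_entropy(render_sar(pose + d))
                       - image_entropy(render_sar(pose - d))) / (2 * eps)
        pose -= lr * grad                          # lower entropy = better focused image
    return pose
```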

Technical Development for Extraction of Discontinuities in Rock Mass Using LiDAR (LiDAR를 이용한 암반 불연속면 추출 기술의 개발 현황)

  • Lee, Hyeon-woo;Kim, Byung-ryeol;Choi, Sung-oong
    • Tunnel and Underground Space / v.31 no.1 / pp.10-24 / 2021
  • Rock mass classification for the construction of underground facilities is essential to secure their stability. Reliable classification values derived from precise information on rock discontinuities are therefore critical, because discontinuities strongly affect the physical and mechanical properties of a rock mass. Conventional rock mass classification has usually been performed by hand mapping, which raises issues of precision and reliability, for instance in large-scale regional geological surveys or when carried out by non-professional engineers. For these reasons, automated rock mass classification using LiDAR has become popular for obtaining quick and precise information. However, several algorithms have been suggested for analyzing rock mass discontinuities from LiDAR-scanned point cloud data, different algorithms usually give different solutions, and it is not simple to reproduce hand-mapping values exactly. In this paper, several discontinuity extraction algorithms are explained, and their extraction processes are simulated on a real rock bench. This review is expected to be a useful reference for future research on extracting rock mass discontinuities from digital point cloud data acquired by laser scanners such as LiDAR. A sketch of one common approach, plane segmentation of the point cloud, follows this entry.
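
A minimal sketch of one common discontinuity-extraction approach: iterative RANSAC plane fitting on a LiDAR point cloud with Open3D. This is a generic illustration, not any specific algorithm reviewed in the paper; the thresholds, plane count, and the point-cloud file name are assumptions.

```python
import open3d as o3d

def extract_planes(pcd, max_planes=5, dist=0.02):
    """Peel off planar facets (candidate discontinuity surfaces)."""
    planes = []
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < 100:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist,
                                            ransac_n=3,
                                            num_iterations=1000)
        planes.append((model, rest.select_by_index(inliers)))   # (a, b, c, d) + facet points
        rest = rest.select_by_index(inliers, invert=True)
    return planes

# pcd = o3d.io.read_point_cloud("rock_bench.ply")   # hypothetical input file
# facets = extract_planes(pcd)
```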

Vehicle Detection Method Based on Object-Based Point Cloud Analysis Using Vertical Elevation Data (OBPCA 기반의 수직단면 이용 차량 추출 기법)

  • Jeon, Junbeom;Lee, Heezin;Oh, Sangyoon;Lee, Minsu
    • KIPS Transactions on Software and Data Engineering / v.5 no.8 / pp.369-376 / 2016
  • Among various vehicle extraction techniques, OBPCA (Object-Based Point Cloud Analysis) quickly calculates features from coarse-grained rectangles fitted to the top view of vehicle candidates. However, because it uses only a top-view rectangle to detect a vehicle, it has difficulty separating rectangular objects of similar size, and this accuracy issue affects DEM generation and traffic monitoring tasks. In this paper, we propose a novel method that uses the most distinguishing vertical elevations to calculate additional features. The proposed method uses the same features as the top view, determines new thresholds, and decides whether a candidate is a vehicle. We compared the accuracy and execution time of the original OBPCA and the proposed method. The experimental results show that our method increases precision by 6.61% and decreases the false positive rate by 13.96% with only a marginal increase in execution time, indicating that it can reduce misclassification. A sketch of the vertical-profile feature idea follows this entry.
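
A minimal sketch of adding a vertical-profile feature to a top-view rectangle candidate, assuming `points` is an (N, 3) numpy array of the candidate's LiDAR returns; the feature set and thresholds are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def vertical_profile_features(points, n_slices=10):
    """Summarize how elevation varies along the candidate's long axis."""
    x, z = points[:, 0], points[:, 2]
    edges = np.linspace(x.min(), x.max(), n_slices + 1)
    slice_max = np.array([
        z[(x >= lo) & (x < hi)].max() if np.any((x >= lo) & (x < hi)) else 0.0
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return {
        "height_range": float(slice_max.max() - slice_max.min()),
        "profile_std": float(slice_max.std()),   # vehicles show a low, smooth profile
    }

def is_vehicle(features, max_range=1.0, max_std=0.4):
    return features["height_range"] < max_range and features["profile_std"] < max_std
```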

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.182-189 / 2023
  • Depth estimation is a key technology for 3D map generation in the autonomous driving of vehicles, robots, and drones. Existing sensor-based methods are accurate but expensive and low in resolution, while camera-based methods are more affordable and offer higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and its weight size is reduced to keep the computational load low. The estimated depth and camera pose are converted into a point cloud, which is mapped into a 3D map using an octree-based occupancy grid. The proposed network is evaluated on synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies. A sketch of a lightweight self-attention block follows this entry.
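
A minimal sketch of a lightweight 2D self-attention block of the kind the abstract describes, assuming a PyTorch encoder feature map; the channel-reduction factor, layer names, and residual scaling are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightSelfAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)   # reduced weight size
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))                    # learned residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW) global attention map
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection
```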

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud map. A sketch of RANSAC-based rigid-motion estimation from 3D correspondences follows this entry.
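
A minimal sketch of 6-DOF visual odometry from matched 3D feature points, using a Kabsch rigid-transform fit inside a RANSAC loop; the arrays `src` and `dst` are assumed (N, 3) correspondences back-projected from consecutive RGB-D frames, and the thresholds and iteration counts are illustrative assumptions.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # correct a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_pose(src, dst, iters=200, thresh=0.05):
    best_inliers = np.array([], dtype=int)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal 3-point sample
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    R, t = fit_rigid(src[best_inliers], dst[best_inliers])  # refit on all inliers
    return R, t, best_inliers
```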

The extraction of high-quality frame from video for 3D reconstruction (3 차원 복원을 위한 비디오에서 고품질 프레임 추출)

  • Choi, Jongho;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.9-11 / 2017
  • To reconstruct a 3D model from a video sequence, frames that allow reliable estimation of the geometric model must be selected. This paper proposes a method for easily and automatically extracting high-quality frames from ordinary videos, rather than professional videos shot with stabilizers. The proposed technique combines optical flow based matching analysis, a check for an adequate baseline distance between frames, fast skipping for quick search within the video, GRIC scores for the homography and fundamental matrix between two frames, and removal of motion-blurred frames. Experiments with videos captured in indoor spaces show that our method generates a 3D point cloud more robustly in situations with motion blur and degenerate motion. A sketch of the frame-quality checks follows this entry.
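
A minimal sketch of two of the frame-quality checks described above: optical-flow-based matching between candidate frames and a motion-blur test via the variance of the Laplacian. The thresholds are illustrative assumptions, and the GRIC model-selection step is omitted.

```python
import cv2
import numpy as np

def flow_displacement(prev_gray, cur_gray):
    """Median feature displacement as a proxy for baseline distance."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    return float(np.median(np.linalg.norm(nxt[good] - pts[good], axis=2)))

def is_sharp(gray, blur_thresh=100.0):
    """Reject motion-blurred frames via the variance of the Laplacian."""
    return cv2.Laplacian(gray, cv2.CV_64F).var() > blur_thresh

def is_good_key_frame(prev_gray, cur_gray, min_disp=20.0, max_disp=120.0):
    disp = flow_displacement(prev_gray, cur_gray)
    return is_sharp(cur_gray) and min_disp < disp < max_disp
```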


Extraction of Key Frames for 3D Reconstruction (3차원 재구성을 위한 키 프레임 추출)

  • Choi, Jongho;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.5-8 / 2016
  • Key frame extraction is a method of selecting the frames that are essential for reconstructing a 2D video sequence in 3D. This paper proposes a technique that rapidly inspects the frames of a video and selects optimal key frames. Focusing on the preprocessing stage for 3D reconstruction, the proposed technique determines the jump size between frames by checking the ratio of corresponding points and selects frames from which the geometric model can be estimated reliably. The final 3D point cloud data are then obtained through a 3D reconstruction post-processing stage. When its performance was compared with other techniques in experiments, the proposed technique required less reconstruction time and produced denser 3D data. A sketch of correspondence-ratio-driven frame skipping follows this entry.
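
A minimal sketch of correspondence-ratio-driven frame skipping, assuming a helper `match_ratio(frame_a, frame_b)` that returns the fraction of features in frame_a that find a match in frame_b (for example via ORB matching with a ratio test); the helper, the target overlap band, and the step-adjustment rule are illustrative assumptions.

```python
def select_key_frames(frames, match_ratio, low=0.4, high=0.7, step0=5):
    """Pick frames whose overlap with the last key frame stays in a target band."""
    keys = [0]
    i, step = 0, step0
    while i + step < len(frames):
        r = match_ratio(frames[keys[-1]], frames[i + step])
        if r > high and i + step * 2 < len(frames):
            step *= 2                    # too much overlap: jump further ahead
        elif r < low and step > 1:
            step //= 2                   # too little overlap: fall back to a shorter jump
        else:
            keys.append(i + step)        # overlap in the useful range: accept key frame
            i += step
            step = step0
    return keys
```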
