• Title/Summary/Keyword: 영상 점군 (image point cloud)

49 search results

The Analysis of Accuracy in According to the Registration Methods of Terrestrial LiDAR Data for Indoor Spatial Modeling (건물 실내 공간 모델링을 위한 지상라이다 영상 정합 방법에 따른 정확도 분석)

  • Kim, Hyung-Tae;Pyeon, Mu-Wook;Park, Jae-Sun;Kang, Min-Soo
    • Korean Journal of Remote Sensing, v.24 no.4, pp.333-340, 2008
  • For indoor spatial modeling with terrestrial LiDAR and analysis of the resulting positional accuracy, two terrestrial LiDARs with different specifications were used at a test site. This paper shows the disparity in accuracy, under the condition of a limited number of control points, between (1) transformation of each point cloud unit into the structure coordinate system individually using control points and (2) relative registration among all point cloud units followed by a bulk transformation into the structure coordinate system. As a result, the latter produced smaller errors with a narrower distribution than the former, even though different specifications and acquisition methods were used.
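
Both strategies compared above ultimately reduce to estimating a rigid-body transformation from corresponding points (control points or tie points). As a rough illustration of that core step, a least-squares (Kabsch/SVD-style) fit could look like the sketch below; this is a generic formulation, not necessarily the solver used in the paper.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding points, e.g. control points measured
    in the scanner frame and in the building (structure) coordinate system.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Strategy (1): apply rigid_transform() to each point cloud unit separately,
# using only the control points visible in that unit.
# Strategy (2): first register the units to one another, then apply a single
# rigid_transform() to the merged cloud using all control points at once.
```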

A Hybrid Approach for Automated Building Area Extraction from High-Resolution Satellite Imagery (고해상도 위성영상을 활용한 자동화된 건물 영역 추출 하이브리드 접근법)

  • An, Hyowon;Kim, Changjae;Lee, Hyosung;Kwon, Wonsuk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.6, pp.545-554, 2019
  • This research aims to provide a building area extraction approach for areas where data acquisition through field surveying, aerial photography, or LiDAR scanning is impossible. Hence, high-resolution satellite images, which offer high accessibility over the earth, are utilized for automated building extraction in this study. 3D point clouds or DSMs (Digital Surface Models) derived from stereo image matching provide low-quality building area extraction on their own due to their high levels of noise and holes. In this regard, this research proposes a hybrid building area extraction approach that utilizes 3D point clouds (from image matching) together with color and linear information (from the imagery). First, ground and non-ground points are separated from the 3D point cloud; then, the initial building hypothesis is extracted from the non-ground points. Second, a color-based building hypothesis is produced by considering the overlap between the initial building hypothesis and the color segmentation result. Afterwards, line detection and space partitioning results are utilized to acquire the final building areas. The proposed approach shows 98.44% correctness, 95.05% completeness, and 1.05 m positional accuracy. Moreover, the results suggest that irregularly shaped building areas can be extracted through the proposed approach.
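
As a rough illustration of the overlap step in the pipeline above, a hypothetical raster intersection of the point-cloud-derived building hypothesis with a color segmentation result might look like the following; the function name and the 0.5 overlap threshold are assumptions, not the authors' parameters.

```python
import numpy as np

def color_based_hypothesis(nonground_mask, segment_labels, min_overlap=0.5):
    """Keep color segments that sufficiently overlap the initial building hypothesis.

    nonground_mask : (H, W) bool raster of the building hypothesis from 3D points
    segment_labels : (H, W) int raster of color segmentation labels
    min_overlap    : assumed threshold; the paper's actual value is not given here
    """
    keep = np.zeros_like(nonground_mask)
    for label in np.unique(segment_labels):
        segment = segment_labels == label
        overlap = (segment & nonground_mask).sum() / segment.sum()
        if overlap >= min_overlap:       # segment is mostly covered by the hypothesis
            keep |= segment
    return keep

# Line detection and space partitioning would then trim `keep` to the final
# building areas, as described in the abstract.
```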

A Study on Utilization 3D Shape Pointcloud without GCPs using UAV images (UAV 영상을 이용한 무기준점 3D 형상 점군데이터 활용 연구)

  • Kim, Min-Chul;Yoon, Hyuk-Jin
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.2, pp.97-104, 2018
  • Recently, many studies have examined UAVs (unmanned aerial vehicles), which can replace and supplement existing surveying sensors, systems, and images. This study focused on the use of UAV images and assessed their usability in areas where it is difficult to obtain GCPs (ground control points), such as disaster sites. 3D (three-dimensional) point cloud data were therefore generated from UAV images, and the absolute and relative accuracy of the model data generated with and without GCPs was assessed. The results showed that the 3D shape point cloud generated by UAV image matching maintained its relative accuracy regardless of whether GCPs were used, with a quantitative measurement error rate within 1%. Even if the absolute accuracy is low, a 3D shape point cloud that has been post-processed quickly is sufficient for use when GCPs cannot be acquired or urgent analysis is required. In particular, quantitative measurements and meaningful data, such as lengths and areas, can be obtained even without ground control point surveying.
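
The relative-accuracy assessment described above amounts to comparing inter-point distances measured in the GCP-referenced model with the same distances in the GCP-free model. A minimal sketch of that comparison, with placeholder data structures, is shown below.

```python
import numpy as np

def relative_error_percent(pts_gcp, pts_free, pairs):
    """Percent error of inter-point distances between two reconstructions.

    pts_gcp, pts_free : (N, 3) coordinates of the same features in the
                        GCP-referenced and GCP-free models
    pairs             : iterable of (i, j) index pairs whose distances are compared
    """
    errors = []
    for i, j in pairs:
        d_ref = np.linalg.norm(pts_gcp[i] - pts_gcp[j])
        d_rel = np.linalg.norm(pts_free[i] - pts_free[j])
        errors.append(abs(d_rel - d_ref) / d_ref * 100.0)
    return np.array(errors)   # the abstract reports errors within about 1%
```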

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI, v.42 no.3 s.303, pp.39-52, 2005
  • In this paper, a registration method is presented to register partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and a long registration time. Moreover, these methods are not robust for 3D point clouds with comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined based on a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors based on an adaptive search range. Finally, a final color is calculated with reference to the colors of corresponding points, and the indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points with low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
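
For the correspondence step, a rough stand-in for the modified KLT tracker described above is OpenCV's pyramidal KLT implementation applied to the projected images; the sketch below uses assumed variable names and default parameters, not the authors' two-step integer mapping.

```python
import cv2
import numpy as np

def find_correspondences(img_a, img_b, max_pts=500):
    """Track feature points from view A to view B on the common image plane."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    pts_a = cv2.goodFeaturesToTrack(gray_a, max_pts, 0.01, 7)
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)
    ok = status.ravel() == 1
    return pts_a[ok].reshape(-1, 2), pts_b[ok].reshape(-1, 2)

# Each 2D correspondence indexes back into the projected 3D points of the two
# views; the fine registration step then refines the rigid transform between
# the views by minimizing the 3D distances of these correspondences.
```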

Implementation of CUDA-based Octree Algorithm for Efficient Search for LiDAR Point Cloud (라이다 점군의 효율적 검색을 위한 CUDA 기반 옥트리 알고리듬 구현)

  • Kim, Hyung-Woo;Lee, Yang-Won
    • Korean Journal of Remote Sensing, v.34 no.6_1, pp.1009-1024, 2018
  • With the increased use of LiDAR (Light Detection and Ranging), which can acquire datasets of millions of points or more, methodologies for efficient search and dimensionality reduction of the point cloud have become a crucial technique. The existing octree-based "parametric algorithm" has proved its efficiency and has been contributed as a part of PCL (Point Cloud Library). However, implementing the algorithm on a GPU (Graphics Processing Unit) is considered very difficult because of structural constraints of the octree implemented in PCL. In this paper, we present a method for running the parametric algorithm in a GPU environment and implement a projection of the queried points in four directions with improved noise reduction.
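
One common GPU-friendly octree layout, offered here only as an assumed illustration rather than the PCL internals the paper works around, keys every point by an interleaved Morton code so that nodes become contiguous key ranges addressable with simple bit arithmetic:

```python
def morton_key(ix, iy, iz, depth=10):
    """Interleave the bits of integer voxel indices into a single 3D Morton key."""
    key = 0
    for b in range(depth):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

# Sorting points by morton_key groups them by octree leaf; an internal node at
# level L is then the contiguous range of keys sharing the top 3*L bits, which
# maps well onto massively parallel (e.g. CUDA) search kernels.
```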

Three-Dimensional Positional Accuracy Analysis of UAV Imagery Using Ground Control Points Acquired from Multisource Geospatial Data (다종 공간정보로부터 취득한 지상기준점을 활용한 UAV 영상의 3차원 위치 정확도 비교 분석)

  • Park, Soyeon;Choi, Yoonjo;Bae, Junsu;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing, v.36 no.5_3, pp.1013-1025, 2020
  • The Unmanned Aerial Vehicle (UAV) platform is widely used in disaster monitoring and smart cities, having the advantage of being able to quickly acquire images of small areas at low cost. Ground Control Points (GCPs) for positioning UAV images are essential to achieve cm-level accuracy when producing UAV-based orthoimages and Digital Surface Models (DSMs). However, the on-site acquisition of GCPs takes considerable manpower and time. This research aims to provide an efficient and accurate way to replace on-site GNSS surveying with three different sources of geospatial data. The three geospatial datasets used in this study are as follows: 1) 25 cm aerial orthoimages and a Digital Elevation Model (DEM) based on the 1:1000 digital topographic map, 2) point cloud data acquired by a Mobile Mapping System (MMS), and 3) hybrid point cloud data created by merging MMS data with UAV data. For each dataset, a three-dimensional positional accuracy analysis of the UAV-based orthoimage and DSM was performed by comparing the three-dimensional coordinates of independent check points with those of an RTK-GNSS survey. The results show the third case, in which MMS and UAV data were combined, to be the most accurate, with an RMSE of 8.9 cm horizontally and 24.5 cm vertically. In addition, the distribution of geospatial GCPs was shown to affect the vertical accuracy more strongly than the horizontal accuracy.
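
The reported accuracy figures follow from the RMSE of coordinate differences at the independent check points; a minimal sketch of that computation (array names are placeholders) is given below.

```python
import numpy as np

def rmse_3d(estimated, rtk_gnss):
    """Horizontal and vertical RMSE of check-point coordinates, both (N, 3) in metres."""
    diff = estimated - rtk_gnss
    rmse_h = np.sqrt(np.mean(diff[:, 0] ** 2 + diff[:, 1] ** 2))   # planimetric (X, Y)
    rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))                     # height (Z)
    return rmse_h, rmse_v

# For the MMS+UAV hybrid GCP case the abstract reports roughly (0.089, 0.245) m.
```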

Development of Mean Stand Height Module Using Image-Based Point Cloud and FUSION S/W (영상 기반 3차원 점군과 FUSION S/W 기반의 임분고 분석 모듈 개발)

  • KIM, Kyoung-Min
    • Journal of the Korean Association of Geographic Information Studies, v.19 no.4, pp.169-185, 2016
  • Recently, mean stand height has been added as a new attribute to forest type maps, but it is often too costly and time-consuming to manually measure 9,100,000 points from countrywide stereo aerial photos. In addition, tree heights are frequently measured around tombs and forest edges, which are poor representations of the interior of a tree stand. This work proposes estimating mean stand height from an image-based point cloud extracted from stereo aerial photos with the FUSION software. A digital terrain model (DTM) was created by filtering the DSM point cloud, and the DTM was subtracted from the DSM, resulting in an nDSM that represents object heights (buildings, trees, etc.). The RMSE was calculated to compare the differences between observed tree heights and those extracted from the nDSM; the resulting RMSE of average total plot height was 0.96 m. Individual tree heights over the whole study area were extracted using the USDA Forest Service's FUSION software. Finally, mean stand height was produced by averaging individual tree heights within each stand polygon of the forest type map. In order to automate the mean stand height extraction using photogrammetric methods, the module was developed as an ArcGIS add-in toolbox.
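
The core raster arithmetic described above, nDSM = DSM - DTM followed by averaging detected tree heights per stand polygon, can be sketched as follows, assuming co-registered numpy arrays; the names are placeholders rather than the module's actual API.

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    """Object heights above ground: nDSM = DSM - DTM (co-registered rasters)."""
    return dsm - dtm

def mean_stand_height(tree_heights, stand_ids):
    """Average detected tree heights per forest-type-map stand polygon.

    tree_heights : (N,) heights of individual trees in metres
    stand_ids    : (N,) id of the stand polygon each tree falls in
    """
    return {sid: tree_heights[stand_ids == sid].mean()
            for sid in np.unique(stand_ids)}
```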

Automatic Registration of Point Cloud Data between MMS and UAV using ICP Method (ICP 기법을 이용한 MSS 및 UAV 간 점군 데이터 자동정합)

  • KIM, Jae-Hak;LEE, Chang-Min;KIM, Hyeong-Joon;LEE, Dong-Ha
    • Journal of the Korean Association of Geographic Information Studies, v.22 no.4, pp.229-240, 2019
  • 3D geospatial models have been widely used in civil engineering, medicine, computer graphics, urban management, and many other fields. In particular, the demand for high-quality 3D spatial information, such as precise road maps, has increased explosively, and MMS and UAV techniques have been actively used to acquire it more easily and conveniently in the surveying and geospatial fields. However, in order to perform 3D modeling by integrating the two datasets from MMS and UAV, a proper registration method is required to efficiently correct the differences arising from the raw data acquisition sensors, the point cloud generation methods, and the observation accuracy of the two techniques. In this study, we obtained UAV point cloud data in the Yeouido area as the study area in order to evaluate the performance of automatic registration between MMS and UAV point cloud data using the ICP (Iterative Closest Point) method. MMS observations were then performed in the study area, divided into four zones according to the overlap ratio and observation noise level relative to the UAV data. After manually registering the MMS data to the UAV data, we compared the results against those registered automatically using the ICP method. In conclusion, the higher the overlap ratio and the lower the noise level, the more accurate the results of automatic registration with the ICP method.
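
For reference, point-to-point ICP alignment of this kind can be reproduced with an off-the-shelf implementation such as Open3D; the sketch below uses assumed file names, threshold, and initial alignment, not the study's actual settings.

```python
import numpy as np
import open3d as o3d

# MMS cloud (source) is aligned onto the UAV cloud (target); file names are placeholders.
source = o3d.io.read_point_cloud("mms_zone1.pcd")
target = o3d.io.read_point_cloud("uav_yeouido.pcd")

threshold = 0.5    # max correspondence distance in metres (assumed, not the study's value)
init = np.eye(4)   # coarse initial alignment, e.g. from the manual registration step

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness, result.inlier_rmse)   # higher overlap / lower noise -> better fit
source.transform(result.transformation)     # apply the estimated rigid transform
```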

Camera Exterior Orientation for Image Registration onto 3D Data (3차원 데이터상에 영상등록을 위한 카메라 외부표정 계산)

  • Chon, Jae-Choon;Ding, Min;Shankar, Sastry
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.25 no.5, pp.375-381, 2007
  • A novel method to register images onto 3D data, such as 3D point clouds, 3D vectors, and 3D surfaces, is proposed. The proposed method estimates the exterior orientation of a camera with respect to the 3D data by fitting pairs of normal vectors: for each matched line, one normal belongs to the plane passing through the focal point and a 2D line extracted from the image, and the other to the plane passing through the focal point and the corresponding 3D line extracted from the 3D data. The fitting condition is that the angle between each pair of normal vectors must be zero, which can be expressed numerically using the inner product of the normal vectors. Simulation tests demonstrate that the proposed method can estimate the exterior orientation for image registration.
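
One way to read the condition above: for each matched line pair, the normal of the plane through the focal point and the 2D image line must be parallel to the (rotated) normal of the plane through the focal point and the 3D line, so their cross product vanishes for unit normals. A hedged sketch of such a residual for a general nonlinear least-squares solver follows; the parameterization and solver choice are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, img_lines, world_lines, f):
    """Normal-alignment residuals for each matched 2D/3D line pair.

    params      : [rx, ry, rz, cx, cy, cz] rotation vector (world->camera) and camera centre
    img_lines   : list of ((u1, v1), (u2, v2)) endpoints in pixels, relative to the principal point
    world_lines : list of (A, B) 3D endpoints of the matched lines
    f           : focal length in pixels
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    C = params[3:]
    res = []
    for (p1, p2), (A, B) in zip(img_lines, world_lines):
        n_img = np.cross(np.array([*p1, f], float), np.array([*p2, f], float))
        n_3d = R @ np.cross(np.asarray(A, float) - C, np.asarray(B, float) - C)
        n_img /= np.linalg.norm(n_img)
        n_3d /= np.linalg.norm(n_3d)
        res.extend(np.cross(n_img, n_3d))   # zero when the two normals are parallel
    return np.asarray(res)

# sol = least_squares(residuals, x0, args=(img_lines, world_lines, f))
```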

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology, v.54 no.2, pp.177-186, 2021
  • For disaster management and the mitigation of earthquakes in the Korean Peninsula, active fault investigation has been conducted for the past five years. In particular, the investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological survey, and geophysical exploration, and unearths subsurface active faults by trench survey. However, the fault traces revealed by trench surveys are only available for investigation for a limited time before the sites are restored to their previous condition. Thus, the geological data describing the fault trench sites remain only as qualitative data in research articles and reports. To overcome the limitations imposed by the temporal nature of such geological studies, we utilized a terrestrial LiDAR to produce 3D point clouds of the fault trench sites and restored them in a digital space. The terrestrial LiDAR scanning was conducted at two trench sites located near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information by combining photogrammetry with the LiDAR system. The scanned data were merged to form 3D point clouds with an average geometric error of 0.003 m, which is sufficiently accurate to restore the details of the surveyed trench sites. However, we found that more post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied depending on the scan positions, and the colors of the trench surfaces were captured differently depending on the light exposure available at the time. Such point clouds are also very large and can be visualized with only a limited set of software tools, which limits data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, for visualizing the point clouds of the trench sites. As a result, we find that terrestrial LiDAR data can be a practical means of increasing the reproducibility of geological field studies and can be made easily accessible to researchers and students in the Earth Sciences.