• Title/Summary/Keyword: Drone photogrammetry

Search results: 87

A Study on 3D Model Building of Drones-Based Urban Digital Twin (드론기반 도심지 디지털트윈 3차원 모형 구축에 관한 연구)

  • Lim, Seong-Ha;Choi, Kyu-Myeong;Cho, Gi-Sung
    • Journal of Cadastre & Land InformatiX / v.50 no.1 / pp.163-180 / 2020
  • In this study, to build the spatial information infrastructure that forms a component of a smart city, a 3D digital twin model of a downtown area was built using drones, the latest spatial information acquisition technology, and several analysis models were implemented with it. Although the three drone photogrammetry software packages differed in data processing time and quality, the accuracy of the constructed model was ±0.04 m in the N direction, ±0.03 m in the E direction, and ±0.02 m in the Z direction, within the 0.1 m allowable range of surveying and inspection performance for boundary points in areas where boundary point registration is executed. The standard deviation, the error limit for photo control points at 1:500 to 1:600 scale in the aerial survey work regulation, fell within 0.14 cm, satisfying the large-scale error limits specified in cadastral and aerial surveying. In addition, to increase the usability of the drone-based 3D urban digital twin model for smart city realization, the model built in this study was used to implement prospect-right analysis, landscape analysis, right-of-light analysis, patrol route analysis, and fire suppression simulation training. Compared with the existing aerial photographic survey method, the accuracy of naked-eye reading points was judged to be higher (about 10 cm), and construction cost can be reduced for construction areas of about 30 ㎢ or less.

Drone Image based Time Series Analysis for the Range of Eradication of Clover in Lawn (드론 영상기반 잔디밭 내 클로버의 퇴치 범위에 대한 시계열 분석)

  • Lee, Yong Chang;Kang, Joon Oh;Oh, Seong Jong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.4 / pp.211-221 / 2021
  • White clover (Trifolium repens, literally 'rabbit grass' in Korean) is a representative harmful plant in lawns: it starts growing earlier than lawn grass, forms a canopy over the lawn, and hinders the lawn's photosynthesis and growth. As a result, in the competition between lawn and clover, the clover territory spreads while the lawn is damaged and dries up. Damage to the affected lawn area accelerates during the rainy season as well as during the plant's late growth stage, spreading the area of exposed soil, so restoring damaged lawn causes psychological stress and a considerable economic burden. The purpose of this study is to distinguish clover, a representative harmful plant on lawns, to identify the distribution of areas damaged by its spread, and to review changes in vegetation before and after its eradication. For this purpose, a time series analysis of three vegetation indices, calculated from images of a convergence drone with RGB (Red Green Blue) and BG-NIR (Near InfraRed) sensors, was reviewed to identify the separation between lawn and clover for selective eradication and the distribution of damaged lawn for a recovery plan. In particular, time series changes in the ecology of clover were examined before and after weed-whacking by hand and by brush cutter, and a method of distinguishing lawn from clover was explored during the mid-year growth period of the two plants. This study shows that time series analysis of the MGRVI (Modified Green-Red Vegetation Index), NDVI (Normalized Difference Vegetation Index), and MSAVI (Modified Soil Adjusted Vegetation Index) indices of drone-based RGB and BG-NIR images, according to the growth characteristics of lawn and clover, can confirm trends of change after lawn damage and clover eradication.
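
The three vegetation indices named in this abstract have standard per-pixel definitions. A minimal sketch of those formulas on scalar band reflectances (band names and example values are illustrative, not taken from the paper):

```python
import math

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)
    return (nir - red) / (nir + red)

def mgrvi(green, red):
    # Modified Green-Red Vegetation Index: (G^2 - R^2) / (G^2 + R^2)
    return (green**2 - red**2) / (green**2 + red**2)

def msavi(nir, red):
    # Modified Soil Adjusted Vegetation Index (self-adjusting soil term)
    return (2*nir + 1 - math.sqrt((2*nir + 1)**2 - 8*(nir - red))) / 2

# Illustrative reflectances (0-1) for a vigorous-vegetation pixel
print(ndvi(0.5, 0.1), mgrvi(0.4, 0.1), msavi(0.5, 0.1))
```

Applied per pixel to the RGB and BG-NIR orthoimages, differencing these index maps between acquisition dates yields the kind of time series change the study tracks.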

Automatic Geo-referencing of Sequential Drone Images Using Linear Features and Distinct Points (선형과 특징점을 이용한 연속적인 드론영상의 자동기하보정)

  • Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.1 / pp.19-28 / 2019
  • Images captured by drone have the advantage of quickly constructing spatial information over small areas and are applied in fields that require quick decision making. If an image registration technique that can automatically register a drone image onto an ortho-image with a ground coordinate system is applied, the imagery can be used for various analyses. In this study, a methodology was proposed for geo-referencing a single image and sequential drone images using linear features and distinct points, even when the images differ in spatio-temporal resolution. Projective transformation parameters for the initial geo-referencing between images were determined from the linear features, and the final geo-referencing was then performed through template matching of distinct points extracted from the images. Experimental results showed that geo-referencing accuracy was high in areas where relief displacement of the terrain was small. On the other hand, there were some quantitative errors in areas where the terrain changed strongly. Nevertheless, the geo-referencing results for the sequential images were considered fully usable for qualitative analysis.
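
The initial geo-referencing step rests on a projective (homography) transform between the drone image and the ortho-image. A minimal sketch of applying such a transform to an image point, assuming the 3×3 matrix H has already been estimated from the matched linear features (the estimation step itself is beyond this sketch):

```python
def apply_homography(H, x, y):
    # H is a 3x3 projective matrix as nested lists; maps (x, y) -> (x', y')
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    xp = (H[0][0]*x + H[0][1]*y + H[0][2]) / w
    yp = (H[1][0]*x + H[1][1]*y + H[1][2]) / w
    return xp, yp

# Pure translation as a degenerate example: shift by (+5, -3)
H = [[1, 0, 5],
     [0, 1, -3],
     [0, 0, 1]]
print(apply_homography(H, 10.0, 10.0))  # (15.0, 7.0)
```

A homography has eight free parameters, so it can only model a global image-to-image mapping; the paper's per-point template matching then corrects the local residuals this global model leaves behind.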

Stability Analysis of a Stereo-Camera for Close-range Photogrammetry (근거리 사진측량을 위한 스테레오 카메라의 안정성 분석)

  • Kim, Eui Myoung;Choi, In Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.123-132 / 2021
  • To determine 3D (three-dimensional) positions using a stereo-camera in close-range photogrammetry, camera calibration must first determine not only the interior orientation parameters of each camera but also the relative orientation parameters between the cameras. As time passes after camera calibration, the interior and relative orientation parameters of non-metric cameras may change due to internal instability or external factors. In this study, to evaluate the stability of the stereo-camera, the stability of the two single cameras and of the stereo-camera was analyzed, and the three-dimensional position accuracy was evaluated using checkpoints. Over three camera calibration experiments spanning four months, the root mean square error of the two single cameras was ±0.001 mm, and that of the stereo-camera ranged from ±0.012 mm to ±0.025 mm. In addition, as the distance accuracy at the checkpoints was ±1 mm, the interior and relative orientation parameters of the stereo-camera were considered stable over that period.
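
Once a stereo-camera's interior and relative orientation are known and stable, 3D positions follow from triangulation; for an ideally calibrated, rectified pair this reduces to the textbook parallax relation Z = f·B/d. A minimal sketch under that assumption (the numbers are illustrative, and the paper's point is precisely that real calibration parameters can drift away from this ideal):

```python
def stereo_point(focal_px, baseline_m, xl_px, xr_px, yl_px):
    # Rectified stereo: disparity d = xl - xr, depth Z = f * B / d,
    # then X = Z * xl / f and Y = Z * yl / f
    # (image coordinates measured from the principal point)
    d = xl_px - xr_px
    z = focal_px * baseline_m / d
    return z * xl_px / focal_px, z * yl_px / focal_px, z

# f = 1000 px, baseline 0.2 m, 50 px disparity -> 4 m depth
print(stereo_point(1000.0, 0.2, 100.0, 50.0, 25.0))
```

An error in the relative orientation shows up directly as a disparity error, which is why the study validates the drifting parameters against independently surveyed checkpoints.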

Quality Evaluation of Drone Image using Siemens star (Siemens star를 이용한 드론 영상의 품질 평가)

  • Lee, Jae One;Sung, Sang Min;Back, Ki Suk;Yun, Bu Yeol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.217-226 / 2022
  • From the viewpoint of high-precision spatial information production, UAV (Unmanned Aerial Vehicle) photogrammetry lacks specific procedures and detailed regulations for quantitative quality verification or certification of captured images. In addition, test tools for UAV image quality assessment use only the GSD (Ground Sample Distance), not the MTF (Modulation Transfer Function), which reflects image resolution and contrast at the same time. This often makes the quality of UAV images inferior to that of manned aerial images. We performed MTF and GSD analysis simultaneously using a Siemens star to confirm the necessity of MTF analysis in UAV image quality assessment. Analysis of UAV images taken with different payloads and sensors shows a large difference in σMTF values, which represent image resolution and degree of contrast, but only a slight difference in GSD. We conclude that MTF analysis is a more objective and reliable method than GSD analysis alone, and that high-quality drone images can be obtained only when the operator captures images after properly judging sensor performance, image overlap, and payload type. However, the results of this study are derived from images acquired with limited sensors and imaging conditions; more objective and reliable results are expected if continuous research accumulates varied experimental data in related fields.
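
GSD, the metric the abstract argues is insufficient on its own, is a pure geometry quantity: pixel pitch scaled by the ratio of flying height to focal length. A minimal sketch (the sensor numbers below are illustrative assumptions, not the payloads tested in the paper):

```python
def gsd_cm(sensor_width_mm, image_width_px, focal_mm, altitude_m):
    # Ground Sample Distance = pixel pitch * (flying height / focal length)
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * altitude_m / focal_mm * 100.0  # metres -> cm

# Illustrative 1-inch-class sensor: 13.2 mm wide, 5472 px, 8.8 mm lens, 100 m AGL
print(round(gsd_cm(13.2, 5472, 8.8, 100.0), 2))  # ~2.74 cm/px
```

Because nothing in this formula depends on the lens quality, focus, or motion blur, two payloads with identical GSD can differ sharply in MTF, which is exactly the gap the paper's Siemens-star analysis exposes.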

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of images taken with a camera, but the two fields are not directly compatible due to differences in camera lens distortion modeling methods and camera coordinate systems. In general, data processing of drone images is performed by bundle block adjustment using computer vision-based software, and plotting of the imagery for mapping is then performed by photogrammetry-based software. In this case, the camera lens distortion model must be converted into the formulation used in photogrammetry. This study therefore described the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposed a methodology for converting between them. To verify the conversion formulas, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated to check the accuracy of the photogrammetric lens distortion coefficients; the calculated root mean square error of y-parallax was within 0.3 pixels.
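
The convention gap the paper addresses can be seen in the Brown radial model: computer-vision software applies distortion to ideal coordinates, while photogrammetry removes it from measured ones, so converting coefficients amounts to inverting the model. A minimal two-coefficient sketch (k1, k2 and the coordinates are illustrative; the paper's full models also include tangential terms):

```python
def distort(x, y, k1, k2):
    # Computer-vision convention: add Brown radial distortion to an ideal
    # point (x, y) given in normalized image coordinates
    r2 = x*x + y*y
    s = 1 + k1*r2 + k2*r2*r2
    return x*s, y*s

def undistort(xd, yd, k1, k2, iters=20):
    # Photogrammetry-style removal: invert the model by fixed-point iteration
    x, y = xd, yd
    for _ in range(iters):
        r2 = x*x + y*y
        s = 1 + k1*r2 + k2*r2*r2
        x, y = xd/s, yd/s
    return x, y

xd, yd = distort(0.3, 0.4, -0.1, 0.01)
x, y = undistort(xd, yd, -0.1, 0.01)
print(abs(x - 0.3) < 1e-9, abs(y - 0.4) < 1e-9)  # True True
```

Because the inverse has no closed form, round-tripping a grid of points this way and measuring the residual, as the paper does with virtual coordinates, is the natural check on a coefficient conversion.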

Image Registration of Drone Images through Association Analysis of Linear Features (선형정보의 연관분석을 통한 드론영상의 영상등록)

  • Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.441-452 / 2017
  • Drones are increasingly being used to investigate disaster damage because they can quickly capture images from the air. To investigate such damage, it is necessary to extract the damaged area by registering the drone images with existing ortho-images. In this process, we may face the problem of registering two images with different times and spatial resolutions. To solve this problem, we propose a new methodology that performs an initial image transformation using line pairs extracted from the images and an association matrix, and then refines this initial result through a final registration based on linear features. The applicability of the proposed methodology was evaluated through experiments on areas with artifacts and on natural terrain. The root mean square error was 1.29 pixels for areas with artifacts and 4.12 pixels for natural terrain; relatively high accuracy was obtained in the region with artifacts, where a large amount of linear information was extracted.

Efficient method for acquirement of geospatial information using drone equipment in stream (드론을 이용한 하천공간정보 획득의 효율적 방안)

  • Lee, Jong-Seok;Kim, Si-Chul
    • Journal of Korea Water Resources Association / v.55 no.2 / pp.135-145 / 2022
  • This study aims to verify drone utilization and the accuracy of global navigation satellite system (GNSS), drone RGB photogrammetry (D-RGB), and drone LiDAR (D-LiDAR) surveying in the downstream reaches of a local stream. Measurements of Ground Control Point (GCP) and Check Point (CP) coordinates were used to assess performance, and the GNSS, D-RGB, and D-LiDAR results were compared against hydraulic characteristics calculated with the HEC-RAS model. The accuracy of the three survey methods was compared in the study area, where 6 GCPs and 3 CPs were installed; the comparison showed that D-LiDAR surveying performed best. The 100-year frequency design flood discharge was applied to the channel sections of the small stream. The average bed elevation was 2.30 m from D-RGB surveying and 1.80 m from D-LiDAR, and the average flood level was 4.73 m from D-RGB and 4.25 m from D-LiDAR. The results indicate that D-LiDAR surveying is an efficient and useful technique for acquiring geospatial information of stream channels using drone equipment.

Example of Application of Drone Mapping System based on LiDAR to Highway Construction Site (드론 LiDAR에 기반한 매핑 시스템의 고속도로 건설 현장 적용 사례)

  • Seung-Min Shin;Oh-Soung Kwon;Chang-Woo Ban
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_3 / pp.1325-1332 / 2023
  • Recently, much research has been conducted on point cloud data to support innovations such as construction automation in the transportation field and virtual national territory. Such data are often measured by remote control in terrain that is difficult for humans to access, using devices such as UAVs and UGVs. Drones, a type of UAV, are mainly used to acquire point cloud data, but photogrammetry with a vision camera, which takes a long time to create a point cloud map, is difficult to apply at construction sites where the terrain changes periodically and surveying is difficult. In this paper, we developed a point cloud mapping system adopting non-repetitive scanning LiDAR and confirmed its improvements through field application. For accuracy analysis, a point cloud map was created from a 2 minute 40 second flight and about 30 seconds of software post-processing over a 144.5 × 138.8 m site. Compared with actual measured distances on structures averaging 4 m, an average error of 4.3 cm was recorded, confirming performance within an error range applicable in the field.

Land Cover Mapping and Availability Evaluation Based on Drone Images with Multi-Spectral Camera (다중분광 카메라 탑재 드론 영상 기반 토지피복도 제작 및 활용성 평가)

  • Xu, Chun Xu;Lim, Jae Hyoung;Jin, Xin Mei;Yun, Hee Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.589-599 / 2018
  • Land cover maps have been produced using satellite and aerial images. However, these two image sources are limited in spatial resolution, and it is difficult to acquire images of an area at the desired time because of cloud cover. In addition, mapping the land cover of a small area with satellite and aerial images is costly and time-consuming. This study used a drone with a multispectral camera to acquire multi-temporal images for orthoimage generation, and the efficiency of the produced land cover map was evaluated using time series analysis. The results indicated that the proposed method can generate RGB and multispectral orthoimages with RMSE (Root Mean Square Error) of ±10 mm, ±11 mm, ±26 mm and ±28 mm, ±27 mm, ±47 mm in X, Y, H, respectively. The accuracy of the pixel-based and object-based land cover maps was analyzed; the accuracy and Kappa coefficient of object-based classification were higher than those of pixel-based classification, at 93.75% and 92.42% in July, 92.50% and 91.20% in October, and 92.92% and 91.77% in February, respectively. Moreover, the proposed method can accurately capture quantitative area changes of objects. In summary, this study demonstrated the possibility and efficiency of using a multispectral camera-based drone for land cover map production.
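
The overall accuracy and Kappa coefficient reported above both come from a classification confusion matrix; Kappa additionally discounts agreement expected by chance. A minimal sketch of both statistics (the small matrix is an illustrative example, not the study's data):

```python
def accuracy_and_kappa(cm):
    # cm: square confusion matrix (rows = reference, cols = classified)
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n                 # overall accuracy
    row_t = [sum(cm[i]) for i in range(k)]
    col_t = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row_t[i] * col_t[i] for i in range(k)) / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Illustrative two-class matrix: 45+40 correct out of 100 samples
acc, kap = accuracy_and_kappa([[45, 5], [10, 40]])
print(round(acc, 2), round(kap, 2))  # 0.85 0.7
```

Kappa is always at or below overall accuracy, which matches the pattern in the figures reported for each month above.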