• Title/Summary/Keyword: Drone images


Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo; Choi, Han Seung; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision both determine the three-dimensional coordinates of points from images taken with a camera, but the two fields are not directly compatible with each other due to differences in camera lens distortion modeling methods and camera coordinate systems. In general, data processing of drone images is performed by bundle block adjustment using computer vision-based software, and the plotting of the images for mapping is then performed by photogrammetry-based software. In this case, we face the problem of converting the camera lens distortion model into the formulation used in photogrammetry. Therefore, this study described the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposed a methodology for converting between them. To verify the conversion formula for the camera lens distortion models, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion model. The distortion coefficients were then determined using the photogrammetry-based lens distortion model, the lens distortions were removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels, indicating good agreement. In addition, epipolar images were generated by applying the photogrammetric lens distortion coefficients to assess accuracy; the calculated root mean square error of the y-parallax was within 0.3 pixels.
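The verification procedure above can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the paper's exact formulation: radial-only distortion, arbitrary coefficients, and a simple least-squares fit are assumed. It adds distortion to virtual distortion-free coordinates with a computer-vision-style model, fits photogrammetry-style (Brown) coefficients that remove it, and reports the round-trip RMS residual.

```python
import numpy as np

# Computer-vision convention (e.g., OpenCV): distortion is applied to ideal
# normalized coordinates,  x_d = x * (1 + k1*r^2 + k2*r^4).
def add_cv_distortion(xy, k1, k2):
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

# Photogrammetric convention (Brown model): the correction is applied to the
# measured (distorted) coordinates,  x = x_d + x_d*(K1*r_d^2 + K2*r_d^4).
def remove_pg_distortion(xy_d, K1, K2):
    r2 = np.sum(xy_d**2, axis=1, keepdims=True)
    return xy_d + xy_d * (K1 * r2 + K2 * r2**2)

# 1) Virtual distortion-free coordinates on a grid.
gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, 21), np.linspace(-0.4, 0.4, 17))
xy = np.column_stack([gx.ravel(), gy.ravel()])

# 2) Add distortion with the computer-vision model (coefficients assumed).
xy_d = add_cv_distortion(xy, k1=-0.12, k2=0.05)

# 3) Estimate photogrammetric coefficients by linear least squares so that
#    removing distortion from xy_d reproduces xy.
r2 = np.sum(xy_d**2, axis=1, keepdims=True)
M = np.hstack([xy_d * r2, xy_d * r2**2])        # columns: x*r2, y*r2, x*r4, y*r4
A = np.vstack([M[:, [0, 2]], M[:, [1, 3]]])
b = np.concatenate([(xy - xy_d)[:, 0], (xy - xy_d)[:, 1]])
K1, K2 = np.linalg.lstsq(A, b, rcond=None)[0]

# 4) Round-trip residual, analogous to the paper's RMS check.
resid = remove_pg_distortion(xy_d, K1, K2) - xy
print("round-trip RMS residual:", np.sqrt(np.mean(np.sum(resid**2, axis=1))))
```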

Estimation of the Lodging Area in Rice Using Deep Learning (딥러닝을 이용한 벼 도복 면적 추정)

  • Ban, Ho-Young; Baek, Jae-Kyeong; Sang, Wan-Gyu; Kim, Jun-Hwan; Seo, Myung-Chul
    • KOREAN JOURNAL OF CROP SCIENCE / v.66 no.2 / pp.105-111 / 2021
  • Rice lodging occurs annually when typhoons bring strong winds and heavy rainfall, resulting in damage related to pre-harvest sprouting during the ripening period. Rapid estimation of the lodged area is therefore necessary to enable a timely response to the damage. To this end, we obtained images of rice lodging using a drone in Gimje, Buan, and Gunsan, which were converted to 128 × 128 pixel images. A convolutional neural network (CNN), a deep learning model, was trained on these images to predict rice lodging, classified into two types (lodging and non-lodging), with the images divided into a training set and a validation set at an 8:2 ratio. The CNN model was constructed and trained using three optimizers (Adam, RMSprop, and SGD). The lodged area was then evaluated for three fields using the obtained data, excluding the training and validation sets. The images were combined into composite images of the entire fields using Metashape, and these composites were divided into 128 × 128 pixel tiles. Lodging in the divided images was predicted using the trained CNN model, and the lodged area was calculated by multiplying the ratio of lodging images to the total number of field images by the area of the entire field. The results for the training and validation sets showed that accuracy increased as training progressed, eventually exceeding 0.919. The results for each of the three fields showed high accuracy for all optimizers, among which Adam showed the highest accuracy (normalized root mean square error: 2.73%). On the basis of these findings, it is anticipated that the lodged rice area can be rapidly predicted using deep learning.
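A minimal sketch of the tile-classification idea is given below; the network architecture, tile tensors, and field area are hypothetical placeholders rather than the authors' exact setup, but it illustrates the binary 128 × 128 classifier, the three optimizers compared, and the area calculation from the ratio of lodging tiles.

```python
import numpy as np
import tensorflow as tf

# Hypothetical binary lodging / non-lodging classifier on 128 x 128 tiles,
# trainable with any of the three optimizers compared ("adam", "rmsprop", "sgd").
def build_model(optimizer="adam"):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 128, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # lodging probability
    ])
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Field-level lodged area: (lodging tiles / all tiles) x area of the field.
def lodged_area(model, tiles, field_area_m2, threshold=0.5):
    probs = model.predict(tiles, verbose=0).ravel()
    return field_area_m2 * np.mean(probs >= threshold)
```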

Comparison of Topographic Surveying Results using a Fixed-wing and a Popular Rotary-wing Unmanned Aerial Vehicle (Drone) (고정익 무인항공기(드론)와 보급형 회전익 무인항공기를 이용한 지형측량 결과의 비교)

  • Lee, Sungjae; Choi, Yosoon
    • Tunnel and Underground Space / v.26 no.1 / pp.24-31 / 2016
  • Recently, many studies have been conducted on the use of fixed-wing and rotary-wing unmanned aerial vehicles (UAVs, drones) for topographic surveying in open-pit mines. Because fixed-wing and rotary-wing UAVs have different characteristics, such as flight height, speed, flight time, and the performance of the mounted cameras, their topographic surveying results at the same site need to be compared. This study selected a construction site in Yangsan-si, Gyeongsangnam-do, Korea as the study area and compared the topographic surveying results from a fixed-wing UAV (SenseFly eBee) and a popular rotary-wing UAV (DJI Phantom2 Vision+). By processing the aerial photos taken by the eBee and the Phantom2 Vision+, orthomosaic images and digital surface models with about 4 cm grid spacing were generated. Comparison of the X, Y, Z coordinates of 7 ground control points measured by differential global positioning system with those determined from the eBee and Phantom2 Vision+ data revealed that the root mean square errors of the X, Y, and Z coordinates were each around 10 cm.
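The accuracy comparison reduces to a per-axis RMSE at the ground control points; a minimal sketch with hypothetical coordinate arrays is shown below.

```python
import numpy as np

# Per-axis RMSE between DGPS-surveyed ground control points and the
# coordinates read from each UAV-derived model.
# Both inputs are placeholder arrays of shape (7, 3) = (points, [X, Y, Z]).
def per_axis_rmse(gcp_xyz, uav_xyz):
    diff = np.asarray(uav_xyz) - np.asarray(gcp_xyz)
    return np.sqrt(np.mean(diff**2, axis=0))    # [RMSE_X, RMSE_Y, RMSE_Z]

# Example usage for the two platforms compared (arrays are assumed to exist):
# rmse_ebee = per_axis_rmse(gcp_dgps, gcp_from_ebee)
# rmse_phantom = per_axis_rmse(gcp_dgps, gcp_from_phantom)
```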

Field and remote acquisition of hyperspectral information for classification of riverside area materials (현장 및 원격 초분광 정보 계측을 통한 하천 수변공간 재료 구분)

  • Shin, Jaehyun; Seong, Hoje; Rhee, Dong Sop
    • Journal of Korea Water Resources Association / v.54 no.12 / pp.1265-1274 / 2021
  • The hyperspectral characteristics of materials near the South Han River were analyzed using riverside measurements from drone-mounted hyperspectral sensors. The spectral reflectance of the riverside materials, consisting of grass, concrete, soil, etc., was compared and analyzed. To verify the drone-mounted hyperspectral measurements, a ground spectrometer was deployed for field measurements of the same materials. The comparison showed that the riverside materials had distinctive hyperspectral band characteristics and that the field measurements were similar to the remote sensing data. For classification of the riverside area, the K-means clustering method and the SVM classification method were utilized. The supervised SVM method classified the riverside area more accurately than the unsupervised K-means method. Using these classification and clustering methods, the inherent spectral characteristics of each material were identified, allowing the riverside materials to be classified from drone hyperspectral images.
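A minimal sketch of the two approaches compared, using scikit-learn with placeholder spectra and labels (the real inputs are per-pixel hyperspectral reflectances labelled as grass, concrete, soil, etc.):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: each row of `spectra` is one pixel's hyperspectral
# reflectance; `labels` encodes the material class of that pixel.
rng = np.random.default_rng(0)
spectra = rng.random((600, 150))        # 600 pixels x 150 bands (placeholder)
labels = rng.integers(0, 3, 600)        # placeholder material labels

# Unsupervised: K-means clustering into as many clusters as materials.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(spectra)

# Supervised: SVM trained on labelled pixels, evaluated on held-out pixels.
X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.3,
                                          random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```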

Evaluation of SWIR bands utilization of Worldview-3 satellite imagery for mineral detection (광물탐지를 위한 Worldview-3 위성영상의 SWIR 밴드 활용성 평가)

  • Kim, Sungbo; Park, Honglyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.203-209 / 2021
  • With the recent development of satellite sensor technology, high-spatial-resolution imagery covering various spectral wavelength bands has become available. The Worldview-3 satellite sensor provides high-spatial-resolution panchromatic images and lower-spatial-resolution VNIR (Visible Near InfraRed) and SWIR (ShortWave InfraRed) bands, so it can be used in various fields such as defense, environment, and surveying. In this study, mineral detection was performed using Worldview-3 satellite imagery. To effectively utilize the VNIR and SWIR bands of the Worldview-3 image, a sharpening technique was applied to bring them to the spatial resolution of the panchromatic image. To confirm the utility of the SWIR bands for mineral detection, detection using only the VNIR bands was also performed and comparatively evaluated. SAM (Spectral Angle Mapper), a representative similarity technique, was applied as the mineral detection method, and pixels were flagged as minerals by applying an empirical threshold to the analysis result. The accuracy of mineral detection was quantitatively evaluated against reference data. The detection rate and false detection rate using the SWIR bands were 0.882 and 0.011, respectively, while the corresponding results using only the VNIR bands were 0.891 and 0.037. The detection rate when the SWIR bands were additionally used was slightly lower than when only the VNIR bands were used, but the false detection rate was significantly reduced, confirming the applicability of the SWIR bands to mineral detection.
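The SAM similarity measure and the detection/false-detection rates can be sketched as follows; the threshold value and the exact rate definitions are assumptions for illustration, since the abstract only states that an empirical threshold was applied.

```python
import numpy as np

# SAM (Spectral Angle Mapper): the angle between each pixel spectrum and a
# reference mineral spectrum; pixels with a small angle are flagged as mineral.
def spectral_angle(pixels, reference):
    # pixels: (n_pixels, n_bands), reference: (n_bands,)
    num = pixels @ reference
    den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(num / den, -1.0, 1.0))      # angle in radians

def detect_mineral(pixels, reference, threshold=0.10):    # threshold is assumed
    return spectral_angle(pixels, reference) < threshold

# One plausible definition of detection rate / false detection rate against a
# reference (ground-truth) mineral mask.
def rates(pred, truth):
    detection = np.sum(pred & truth) / max(np.sum(truth), 1)
    false_det = np.sum(pred & ~truth) / max(np.sum(~truth), 1)
    return detection, false_det
```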

A Study on Orthogonal Image Detection Precision Improvement Using Data of Dead Pine Trees Extracted by Period Based on U-Net model (U-Net 모델에 기반한 기간별 추출 소나무 고사목 데이터를 이용한 정사영상 탐지 정밀도 향상 연구)

  • Kim, Sung Hun; Kwon, Ki Wook; Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.251-260 / 2022
  • Although the number of trees affected by pine wilt disease is decreasing, the affected area is expanding across the country. Recently, with the development of deep learning technology, it is being rapidly applied to studies on detecting pine wood nematode damage and dead trees. The purpose of this study is to acquire deep learning training data efficiently and to obtain accurate true values so as to further improve the detection ability of U-Net models through training. To achieve this, a filtering method applying a step-by-step deep learning algorithm was used to minimize ambiguity in the model's analysis basis, enabling efficient analysis and judgment. As a result, in detecting dead pine trees caused by the pine wood nematode with the U-Net algorithm, the U-Net model using true values analyzed by period showed a recall 0.5%p lower, a precision 7.6%p higher, and an F-1 score 4.1%p higher than the U-Net model using the previously provided true values. In the future, it is judged that the precision of wilt detection can be increased by applying various filtering techniques, and that a drone surveillance method using drone orthoimages and artificial intelligence can be used in the pine wilt nematode disaster prevention project.
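For reference, the per-model metrics being compared (recall, precision, F-1) can be computed from a predicted dead-tree mask and the corresponding true-value mask, as in this minimal sketch; the percentage-point differences quoted above follow by comparing two models' metrics.

```python
import numpy as np

# Precision, recall, and F1 score for a binary dead-tree mask predicted by a
# segmentation model versus the ground-truth (true-value) mask.
def precision_recall_f1(pred_mask, true_mask):
    pred = np.asarray(pred_mask, dtype=bool).ravel()
    true = np.asarray(true_mask, dtype=bool).ravel()
    tp = np.sum(pred & true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```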

Study on the Effect of Emissivity for Estimation of the Surface Temperature from Drone-based Thermal Images (드론 열화상 화소값의 타겟 온도변환을 위한 방사율 영향 분석)

  • Jo, Hyeon Jeong; Lee, Jae Wang; Jung, Na Young; Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.1 / pp.41-49 / 2022
  • Recently, interest in the application of thermal cameras has increased with advances in image analysis technology. Beyond simple image acquisition, applications such as digital twins and thermal image management systems have gained popularity. In this context, we studied the effect of emissivity on the DN (Digital Number) value in deriving a relational expression for converting DN values to actual surface temperatures. The DN value represents the spectral band value of the thermal image and is an important element of the thermal image data. However, the DN value is not a temperature value indicating the actual surface temperature but a brightness value representing relative heat levels, and it has a non-linear relationship with the actual surface temperature. A reliable relationship between DN and the actual surface temperature is therefore critical for thermal image processing. We examined the relationship between the actual surface temperature and the DN value of the thermal image, and then performed an emissivity adjustment to better estimate actual surface temperatures. As a result, the relationship between the actual surface temperature and the DN value showed a linear pattern similar to that between the emissivity-adjusted non-contact thermometer readings and the DN value, and the non-contact temperature after the emissivity adjustment was closer to the actual surface temperature than before the adjustment.
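A minimal sketch of the two steps described, not the authors' calibration: fitting a linear DN-to-temperature relation from paired observations, and a common Stefan-Boltzmann-based emissivity correction of an apparent (blackbody-equivalent) temperature, where the emissivity and reflected background temperature are assumed values.

```python
import numpy as np

# Step 1: least-squares line  temp = a*DN + b  from paired DN / reference
# surface-temperature observations (as suggested by the linear pattern found).
def fit_dn_to_temp(dn, temp_ref):
    a, b = np.polyfit(dn, temp_ref, 1)
    return a, b

# Step 2: a common emissivity correction. The sensor's apparent (blackbody)
# temperature mixes emitted and reflected radiation:
#   T_app^4 = e * T_obj^4 + (1 - e) * T_refl^4   (in Kelvin).
def emissivity_correct(t_apparent_c, emissivity=0.95, t_reflected_c=20.0):
    t_app = t_apparent_c + 273.15            # Celsius -> Kelvin
    t_ref = t_reflected_c + 273.15
    t_obj4 = (t_app**4 - (1.0 - emissivity) * t_ref**4) / emissivity
    return t_obj4**0.25 - 273.15             # back to Celsius
```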

Comparative analysis of water surface spectral characteristics based on hyperspectral images for chlorophyll-a estimation in Namyang estuarine reservoir and Baekje weir (남양호와 백제보의 Chlorophyll-a 산정을 위한 초분광 영상기반 수체분광특성 비교 분석)

  • Jang, Wonjin; Kim, Jinuk; Kim, Jinhwi; Nam, Guisook; Kang, Euetae; Park, Yongeun; Kim, Seongjoon
    • Journal of Korea Water Resources Association / v.56 no.2 / pp.91-101 / 2023
  • In this study, we estimated the concentration of chlorophyll-a (Chl-a) from hyperspectral water surface reflectance in an inland weir (Baekje Weir) and an estuarine reservoir (Namyang Reservoir) to monitor algal occurrence in the fresh waters of South Korea. The hyperspectral reflectance was measured by aircraft at Baekje Weir (BJW) from 2016 to 2017 and by drone at Namyang Reservoir (NYR) from 2020 to 2021. The 30 reflectance bands (BJW: 400-530, 620-680, 710-730, 760-790 nm; NYR: 400-430, 655-680, 740-800 nm) most highly related to Chl-a concentration were selected using permutation importance. An artificial neural network-based Chl-a estimation model was developed using the selected reflectance bands for both water bodies, and its performance was evaluated with the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE). The Chl-a estimation model for the two water bodies achieved R2 of 0.63 and 0.82, RMSE of 9.67 and 6.99, and MAE of 11.25 and 8.48, respectively. The Chl-a model developed in this study may be used as a foundational tool for the optimal management of freshwater algal blooms in the future.
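A minimal sketch of the band-selection-plus-ANN workflow with scikit-learn; the data, network size, and train/test split are placeholders, but it shows permutation-importance ranking, selection of 30 bands, and evaluation with R2, RMSE, and MAE.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
reflectance = rng.random((200, 120))          # placeholder: pixels x bands
chl_a = rng.random(200) * 100                 # placeholder Chl-a values

X_tr, X_te, y_tr, y_te = train_test_split(reflectance, chl_a, test_size=0.3,
                                          random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

# Rank bands by permutation importance and keep the 30 most important.
imp = permutation_importance(ann, X_te, y_te, n_repeats=10, random_state=0)
top_bands = np.argsort(imp.importances_mean)[::-1][:30]

# Refit the ANN on the selected bands and evaluate.
ann_sel = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                       random_state=0).fit(X_tr[:, top_bands], y_tr)
pred = ann_sel.predict(X_te[:, top_bands])
print("R2  :", r2_score(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
print("MAE :", mean_absolute_error(y_te, pred))
```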

A Study on the Applicability of Deep Learning Algorithm for Detection and Resolving of Occlusion Area (영상 폐색영역 검출 및 해결을 위한 딥러닝 알고리즘 적용 가능성 연구)

  • Bae, Kyoung-Ho; Park, Hong-Gi
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.11 / pp.305-313 / 2019
  • Recently, spatial information has been actively constructed from images obtained by drones. Because occlusion areas occur due to buildings as well as many obstacles, such as trees, pedestrians, and banners in urban areas, an efficient way to resolve the problem is necessary. Instead of the traditional approach, which replaces the occlusion area with other images obtained from different positions, various deep learning-based models were examined and compared. A comparison of the HOG feature descriptor with the machine learning-based SVM and the deep learning-based DNN, CNN, and RNN showed that the CNN is broadly used to detect and classify objects. Until now, many studies have focused on the development and application of individual models, so it is difficult to select a single optimal model. On the other hand, improvements in deep learning-based detection and classification techniques can be expected, because many researchers are attempting to improve model accuracy as well as reduce computation time. In that case, the procedures for generating spatial information will change so that occlusion areas are detected and replaced with simulated images automatically, improving the efficiency of time, cost, and workforce.
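As a point of reference, the classical pipeline in the comparison (HOG features fed to an SVM) can be sketched as follows with placeholder patches and labels; a CNN would instead learn its features directly from the raw patches.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder grayscale patches and occluded / not-occluded labels.
rng = np.random.default_rng(0)
patches = rng.random((300, 64, 64))
labels = rng.integers(0, 2, 300)

# HOG descriptor per patch, then an SVM classifier on the descriptors.
features = np.array([hog(p, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                     for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                          random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("HOG + SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```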

Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park; Seung-Ki Baek; Seung Hyun Jeong
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.71-80 / 2024
  • In this study, an ensemble learning technique is proposed to enhance the semantic segmentation performance on images captured by Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in fields such as urban planning, techniques utilizing deep learning segmentation methods for land cover segmentation have been actively developed. The study suggests a method that utilizes prominent segmentation models, namely U-Net, DeepLabV3, and Fully Convolutional Network (FCN), to improve segmentation prediction performance. The proposed approach integrates the training loss, validation accuracy, and class scores of the three segmentation models to enhance overall prediction performance. The method was applied and evaluated on a land cover segmentation problem involving seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using images captured by UAVs. The performance of the ensemble model was evaluated by mean Intersection over Union (mIoU), and comparison of the proposed ensemble model with the three existing segmentation methods showed that mIoU performance was improved. Consequently, the study confirms that the proposed technique can enhance the performance of semantic segmentation models.
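A minimal sketch of the ensembling idea and the mIoU evaluation; the model-level weights here are hypothetical, whereas the paper derives its weighting from training loss, validation accuracy, and class scores.

```python
import numpy as np

# Weighted average of the per-class probability maps produced by the three
# segmentation models (e.g., U-Net, DeepLabV3, FCN), then arg-max per pixel.
def ensemble_predict(prob_maps, weights):
    # prob_maps: list of arrays of shape (H, W, n_classes), one per model.
    stacked = np.stack([w * p for w, p in zip(weights, prob_maps)], axis=0)
    return np.argmax(stacked.sum(axis=0), axis=-1)      # (H, W) class map

# Mean Intersection over Union over the classes present in either map.
def mean_iou(pred, truth, n_classes):
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```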