• Title/Summary/Keyword: UAV Imagery

Semantic Segmentation of Heterogeneous Unmanned Aerial Vehicle Datasets Using Combined Segmentation Network

  • Song, Ahram
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.87-97 / 2023
  • Unmanned aerial vehicles (UAVs) can capture high-resolution imagery from a variety of viewing angles and altitudes, but they are generally limited to collecting images of small scenes within larger regions. To improve the utility of UAV datasets for deep learning applications, multiple datasets created from various regions under different conditions are needed. To demonstrate a method for integrating heterogeneous UAV datasets, this paper applies a combined segmentation network (CSN) in which the encoding blocks are shared between the UAVid and Semantic Drone datasets to learn their general features, while the decoding blocks are trained separately on each dataset. Experimental results show that the CSN improves the accuracy of specific classes (e.g., cars) that make up a low proportion of both datasets. From this result, it is expected that the range of UAV dataset utilization will increase.
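
The shared-encoder, dataset-specific-decoder idea described above can be sketched as a toy PyTorch module; this is an illustrative assumption of the structure, not the authors' CSN implementation, and all layer sizes and class counts are placeholders.

```python
import torch
import torch.nn as nn

class CombinedSegNet(nn.Module):
    """Toy sketch: shared encoder, one decoder per dataset (not the authors' code)."""
    def __init__(self, n_classes_uavid, n_classes_sdd):
        super().__init__()
        # Encoding blocks shared by both datasets to learn general features
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        def make_decoder(n_classes):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_classes, 1),
            )
        # Decoding blocks trained separately on each dataset
        self.decoder_uavid = make_decoder(n_classes_uavid)
        self.decoder_sdd = make_decoder(n_classes_sdd)

    def forward(self, x, dataset="uavid"):
        features = self.encoder(x)
        decoder = self.decoder_uavid if dataset == "uavid" else self.decoder_sdd
        return decoder(features)

# Class counts below are placeholders, not the datasets' actual label sets.
logits = CombinedSegNet(n_classes_uavid=8, n_classes_sdd=20)(torch.rand(1, 3, 256, 256))
```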

Development of Biomass Evaluation Model of Winter Crop Using RGB Imagery Based on Unmanned Aerial Vehicle (무인기 기반 RGB 영상을 이용한 동계작물 바이오매스 평가 모델 개발)

  • Na, Sang-il;Park, Chan-won;So, Kyu-ho;Ahn, Ho-yong;Lee, Kyung-do
    • Korean Journal of Remote Sensing / v.34 no.5 / pp.709-720 / 2018
  • In order to optimize the evaluation of biomass in crop monitoring, accurate and timely data on the crop field are required. Evaluating above-ground biomass helps to monitor crop vitality and to predict yield. Unmanned Aerial Vehicle (UAV) imagery is being assessed for analyzing within-field spatial variability for precision agricultural management, because UAV imagery can be acquired quickly during critical periods of rapid crop growth. This study reports on the development of remote sensing techniques for evaluating the biomass of winter crops. The specific objective was to develop statistical models for estimating the dry weight of barley and wheat using an Excess Green index (ExG)-based Vegetation Fraction (VF) and a Crop Surface Model (CSM)-based Plant Height (PH) value. As a result, multiple linear regression equations relating three independent variables (VF, PH, and VF × PH) to above-ground dry weight provided good fits, with coefficients of determination (R²) ranging from 0.86 to 0.99 across five cultivars. For barley, the coefficient of determination was 0.91 and the root mean square error was 102.09 g/m²; for wheat, the coefficient of determination was 0.90 and the root mean square error was 110.87 g/m². Therefore, it will be possible to evaluate the biomass of winter crops from UAV imagery for crop growth monitoring.
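
A minimal sketch of the two predictors the regression is built on, assuming the common chromatic-coordinate form of the Excess Green index and a hypothetical vegetation-fraction threshold; the plot-level values below are invented for illustration and are not the paper's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def excess_green(rgb):
    """ExG = 2g - r - b on chromatic coordinates; rgb is an H x W x 3 float array in [0, 1]."""
    s = rgb.sum(axis=2) + 1e-6
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return 2 * g - r - b

def vegetation_fraction(exg, threshold=0.05):
    # Fraction of pixels classified as vegetation; the threshold is a hypothetical choice.
    return float((exg > threshold).mean())

# Hypothetical plot-level samples (VF from ExG maps, PH from a crop surface model):
VF = np.array([0.30, 0.42, 0.51, 0.63, 0.74, 0.82])
PH = np.array([0.18, 0.26, 0.35, 0.44, 0.52, 0.60])               # m
dry_weight = np.array([95.0, 180.0, 270.0, 400.0, 530.0, 660.0])  # g/m^2

# Three-variable model: VF, PH, and their interaction VF * PH
X = np.column_stack([VF, PH, VF * PH])
model = LinearRegression().fit(X, dry_weight)
print("coefficients:", model.coef_, "R^2:", round(model.score(X, dry_weight), 3))
```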

Application and Evaluation of the Attention U-Net Using UAV Imagery for Corn Cultivation Field Extraction (무인기 영상 기반 옥수수 재배필지 추출을 위한 Attention U-NET 적용 및 평가)

  • Shin, Hyoung Sub;Song, Seok Ho;Lee, Dong Ho;Park, Jong Hwa
    • Ecology and Resilient Infrastructure / v.8 no.4 / pp.253-265 / 2021
  • In this study, crop cultivation fields were extracted using Unmanned Aerial Vehicle (UAV) imagery and deep learning models to overcome the limitations of satellite imagery and to contribute to technologies for assessing the status of crop cultivation fields. The study area was set around Yidam-li, Gammul-myeon, Goesan-gun, Chungbuk, and orthoimages of the area were produced from UAV images. Training data for the deep learning model were collected using the Farm Map, modified through fieldwork. The Attention U-Net was used as the deep learning model to extract corn cultivation fields from the UAV imagery. After training, the model's performance in extracting corn cultivation fields was evaluated on data excluded from training. We report the model's precision, recall, and F1-score, which were 0.94, 0.96, and 0.92, respectively. This study showed that the method is effective for extracting corn cultivation fields and suggested its potential applicability to other crops.
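
As a rough illustration of what distinguishes Attention U-Net from a plain U-Net, the sketch below implements a standard additive attention gate in PyTorch; it is a generic textbook form, not the authors' network or hyperparameters.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate of the kind used in Attention U-Net (illustrative sketch)."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # transform skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # transform gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # scalar attention per pixel

    def forward(self, skip, gate):
        # Assumes skip and gate have already been brought to the same spatial size.
        a = torch.relu(self.theta(skip) + self.phi(gate))
        alpha = torch.sigmoid(self.psi(a))   # attention coefficients in [0, 1]
        return skip * alpha                  # re-weighted skip connection

# Example: gate 64-channel skip features with a 128-channel decoder signal.
att = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
out = att(torch.rand(1, 64, 64, 64), torch.rand(1, 128, 64, 64))
```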

A Study on Data Acquisition in the Invisible Zone of UAV through LTE Remote Control (LTE 원격관제를 통한 UAV의 비가시권 데이터 취득방안)

  • Jeong, HoHyun;Lee, Jaehee;Park, Seongjin
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.987-997 / 2019
  • Recently, the demand for drones has been increasing rapidly with the development of Unmanned Aerial Vehicles (UAVs) and the growing interest in them. Compared with traditional satellite and aerial imagery, UAVs can be operated at low cost with effective data acquisition, so they can be used in various research fields (environment, geographic information, ocean observation, and remote sensing). However, compared with satellites and aircraft, the traditional remote sensing platforms, a UAV can acquire only a small area, depending on its battery capacity and the distance limit between the Ground Control System (GCS) and the UAV. If remote control at long range were possible, the potential for using UAVs in the remote sensing field would increase. Therefore, a communication network system capable of controlling the UAV regardless of the distance between the UAV and the GCS is needed. Communication between the UAV and the GCS typically relies on simple radio devices (RF 2.4 GHz, 915 MHz, 433 MHz), which limit the range to around 2 km. If the operating environment of the UAV is improved using a Long-Term Evolution (LTE) communication network so that UAVs can be managed simultaneously, greater effects can be achieved through convergence with existing industries. In this study, we performed LTE-controlled flights with a maximum straight-line distance of 6.1 km from the GCS, a test area of 2.2 ㎢, and a total flight distance of 41.75 km. In addition, we analyzed the possibility of communication disconnection at the base stations of the LTE network.
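
The paper does not specify its control software, but one common way to run a GCS over an LTE link is to tunnel a MAVLink telemetry stream over TCP; the sketch below assumes a MAVLink-compatible autopilot and uses a placeholder host and port, so it is only indicative of the setup rather than the authors' system.

```python
# Minimal sketch, assuming the UAV's companion computer forwards a MAVLink stream
# over the LTE link to a TCP endpoint reachable by the GCS; host/port are placeholders.
from pymavlink import mavutil

link = mavutil.mavlink_connection("tcp:203.0.113.10:5760")  # GCS side of the LTE tunnel
link.wait_heartbeat()                                        # confirm the vehicle is reachable
print("Heartbeat from system", link.target_system)

# Pull one position report to verify telemetry is flowing over the long-range link.
msg = link.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=10)
if msg:
    print("lat/lon:", msg.lat / 1e7, msg.lon / 1e7)
```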

Three-Dimensional Positional Accuracy Analysis of UAV Imagery Using Ground Control Points Acquired from Multisource Geospatial Data (다종 공간정보로부터 취득한 지상기준점을 활용한 UAV 영상의 3차원 위치 정확도 비교 분석)

  • Park, Soyeon;Choi, Yoonjo;Bae, Junsu;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1013-1025 / 2020
  • The Unmanned Aerial Vehicle (UAV) platform is widely used in disaster monitoring and smart city applications, with the advantage of being able to quickly acquire images of small areas at low cost. Ground Control Points (GCPs) for positioning UAV images are essential to achieve cm-level accuracy when producing UAV-based orthoimages and Digital Surface Models (DSMs). However, on-site acquisition of GCPs takes considerable manpower and time. This research aims to provide an efficient and accurate way to replace on-site GNSS surveying with three different sources of geospatial data. The three geospatial datasets used in this study are as follows: 1) 25 cm aerial orthoimages and a Digital Elevation Model (DEM) based on a 1:1000 digital topographic map; 2) point cloud data acquired by a Mobile Mapping System (MMS); and 3) hybrid point cloud data created by merging MMS data with UAV data. For each dataset, a three-dimensional positional accuracy analysis of the UAV-based orthoimage and DSM was performed by comparing the three-dimensional coordinates of independent check points with those obtained from an RTK-GNSS survey. The results show the third case, in which MMS and UAV data were combined, to be the most accurate, with an RMSE of 8.9 cm horizontally and 24.5 cm vertically. In addition, the distribution of the geospatial GCPs was shown to affect vertical accuracy more strongly than horizontal accuracy.
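
The horizontal and vertical RMSE figures quoted above are computed from check-point coordinate differences in the usual way; a small NumPy sketch with invented coordinates (not the study's check points) is shown below.

```python
import numpy as np

# Hypothetical check-point coordinates in metres: columns are Easting, Northing, Height.
rtk = np.array([[2.10, 5.40, 31.2], [8.75, 3.15, 30.8], [4.20, 9.60, 32.1]])  # RTK-GNSS survey
uav = np.array([[2.16, 5.31, 31.5], [8.69, 3.22, 30.5], [4.29, 9.52, 32.4]])  # UAV orthoimage/DSM

diff = uav - rtk
rmse_h = np.sqrt(np.mean(diff[:, 0] ** 2 + diff[:, 1] ** 2))  # horizontal RMSE (E and N)
rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))                    # vertical RMSE (H)
print(f"horizontal {rmse_h:.3f} m, vertical {rmse_v:.3f} m")
```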

Detection of Collapse Buildings Using UAV and Bitemporal Satellite Imagery (UAV와 다시기 위성영상을 이용한 붕괴건물 탐지)

  • Jung, Sejung;Lee, Kirim;Yun, Yerin;Lee, Won Hee;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.3 / pp.187-196 / 2020
  • In this study, collapsed building detection was carried out using UAV (Unmanned Aerial Vehicle) and PlanetScope satellite images, suggesting the possibility of using heterogeneous sensors to detect objects on the surface. To this end, an area where about 20 buildings had collapsed due to forest fire damage was selected as the study site. First, object-based segmentation was performed on high-resolution UAV images, and object features such as ExG (Excess Green), GLCM (Gray-Level Co-occurrence Matrix) texture, and a DSM (Digital Surface Model) were generated. These features were then used to detect candidate collapsed buildings. In this process, the result of change detection using PlanetScope images was used together with the features to improve detection accuracy. More specifically, the changed pixels obtained from the bitemporal PlanetScope images were used as seed pixels to correct misdetected and overdetected areas in the candidate group of collapsed buildings. The accuracy of collapsed building detection using only the UAV image, and that obtained when the UAV and PlanetScope images were used together, were analyzed against a manually digitized reference image. As a result, detection using only the UAV image achieved an F1-score of 0.4867, while using the UAV and PlanetScope images together improved the F1-score to 0.8064. Moreover, the Kappa coefficient also improved dramatically, from 0.3674 to 0.8225.
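
One of the object features mentioned, GLCM texture, can be computed per segmented object with scikit-image as sketched below; the patch, distances, and angles are hypothetical choices, since the abstract does not state the paper's GLCM parameters.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit grayscale patch standing in for one segmented object.
rng = np.random.default_rng(0)
patch = (rng.random((64, 64)) * 255).astype(np.uint8)

# Gray-level co-occurrence matrix at distance 1 px, in two directions.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean()        # averaged over the two directions
homogeneity = graycoprops(glcm, "homogeneity").mean()
print("contrast:", contrast, "homogeneity:", homogeneity)
```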

Analysis of Surface Temperature Characteristics by Land Surface Fabrics Using UAV TIR Images (UAV 열적외 영상을 활용한 피복재질별 표면온도 특성 분석)

  • SONG, Bong-Geun;KIM, Gyeong-Ah;SEO, Kyeong-Ho;LEE, Seung-Won;PARK, Kyung-Hun
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.3 / pp.162-175 / 2018
  • The purpose of this study was to analyze the surface temperature of different land surface fabrics using UAV TIR images, in order to mitigate thermal environment problems in urban areas. Surface temperature values derived from UAV images were compared with those measured in situ during approximately the same period as the image acquisition. The difference between the in-situ measured and UAV-derived surface temperatures was highest for gray concrete roof fabrics, at 17°C, and lowest for urethane fabrics, at 0.3°C. The explanatory power of the scatter plot between in-situ measured and UAV-derived surface temperatures was 63.75%, indicating a high correlation between the two. The surface fabrics with high temperatures were metal roofs (48.9°C), urethane (43.4°C), and gray concrete roofs (42.9°C), and those with low temperatures were barren land (30.2°C), areas with trees and lawns (30.2°C), and white concrete roofs (34.9°C). These results show that an accurate analysis of the thermal characteristics of surface fabrics is possible using UAV images. In the future, it will be necessary to increase the usability of UAV images via comparison with in-situ data and linkage to satellite imagery.
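
The quoted explanatory power corresponds to the coefficient of determination of a simple linear fit between the paired temperature sets; a short sketch with invented temperature pairs (not the study's measurements) is given below.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical paired surface temperatures (°C): in-situ measurements vs UAV TIR estimates.
in_situ = np.array([31.0, 35.5, 42.8, 48.2, 43.0, 30.5])
uav_tir = np.array([30.1, 36.8, 45.0, 47.1, 41.2, 32.0])

fit = linregress(in_situ, uav_tir)
print("slope:", round(fit.slope, 3), "R^2 (explanatory power):", round(fit.rvalue ** 2, 4))
```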

Monitoring Onion Growth using UAV NDVI and Meteorological Factors

  • Na, Sang-Il;Park, Chan-Won;So, Kyu-Ho;Park, Jae-Moon;Lee, Kyung-Do
    • Korean Journal of Soil Science and Fertilizer / v.50 no.4 / pp.306-317 / 2017
  • Unmanned aerial vehicles (UAVs) have become popular platforms for the collection of remotely sensed data in recent years. This study deals with the monitoring of multi-temporal onion growth at very high resolution by means of low-cost equipment. The monitoring concept was to estimate multi-temporal onion growth using the normalized difference vegetation index (NDVI) and meteorological factors. For this study, UAV imagery was acquired over the Changnyeong, Hapcheon, and Muan regions eight times, from early February to late June, during the onion growing season. In precision agriculture, frequent remote sensing at such scales during the vegetation period provides important spatial information on crop status. Meanwhile, four plant growth parameters, plant height (P.H.), leaf number (L.N.), plant diameter (P.D.), and fresh weight (F.W.), were measured for about three hundred plants (twenty plants per plot) in each field campaign. Three meteorological factors, average temperature, rainfall, and irradiation, were considered over the entire onion growth period. Multiple linear regression models were developed, with the independent variables selected by stepwise regression. As a result, NDVI_UAV and rainfall in the models explain 88% and 68% of the variation in P.H. and F.W., with root mean square errors (RMSE) of 7.29 cm and 59.47 g, respectively, and NDVI_UAV alone explains 43% of the variation in L.N. with an RMSE of 0.96. These results indicate that the characteristics of variation in onion growth according to NDVI_UAV and the meteorological factors were well reflected in the models.
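
A compact sketch of the kind of multi-variable linear model described above, using NDVI = (NIR − Red) / (NIR + Red) as the vegetation index; the plot-level values are invented for illustration and do not reproduce the paper's models or coefficients.

```python
import numpy as np

# Hypothetical plot-level values over the growing season (not the paper's data):
ndvi_uav = np.array([0.31, 0.42, 0.55, 0.63, 0.70, 0.72])   # mean NDVI_UAV per campaign
rainfall = np.array([10.0, 22.0, 38.0, 51.0, 70.0, 95.0])   # cumulative rainfall, mm
plant_h  = np.array([17.0, 25.0, 36.0, 44.0, 52.0, 56.0])   # measured plant height, cm

# Two-predictor linear model of plant height, in the spirit of the stepwise regression.
X = np.column_stack([np.ones_like(ndvi_uav), ndvi_uav, rainfall])
coef, *_ = np.linalg.lstsq(X, plant_h, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - plant_h) ** 2))
print("intercept and coefficients:", coef, "RMSE (cm):", round(rmse, 2))
```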

Comparative Analysis of Rice Lodging Area Using a UAV-based Multispectral Imagery (무인기 기반 다중분광 영상을 이용한 벼 쓰러짐 영역의 특성 분석)

  • Moon, Hyun-Dong;Ryu, Jae-Hyun;Na, Sang-il;Jang, Seon Woong;Sin, Seo-ho;Cho, Jaeil
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.917-926 / 2021
  • Rice lodging is one of the critical agro-meteorological disasters. In this study, UAV-based multispectral imagery acquired before and after rice lodging in a paddy field of the Jeollanam-do Agricultural Research and Extension Services in 2020 was analyzed. The UAV imagery from 14 August shows the paddy rice without any damage, whereas the imagery from 4 and 19 September shows areas of lodged rice. A multispectral camera with 10 bands from 444 nm to 842 nm was used. In areas where lodged rice had been restored, reflectance from 531 nm to 842 nm decreased compared with un-lodged rice. In areas of lodged rice, reflectance around 668 nm increased slightly, and reflectance in the blue and NIR (near-infrared) wavelengths increased more. However, the change in reflectance differed according to the type of lodging. The NDVI (Normalized Difference Vegetation Index) and NDRE (Normalized Difference Red Edge) showed some sensitivity to lodged rice, but their responses differed by lodging type. These results will be useful for developing algorithms to detect areas of lodged rice using a UAV.
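
NDVI and NDRE follow the standard normalized-difference form; the sketch below computes both from hypothetical reflectance rasters and compares lodged versus intact areas with a made-up mask, only to show the mechanics rather than the paper's results.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-6)

def ndre(nir, red_edge):
    """NDRE = (NIR - RedEdge) / (NIR + RedEdge)."""
    return (nir - red_edge) / (nir + red_edge + 1e-6)

# Hypothetical reflectance rasters for NIR (842 nm), red (668 nm), and a red-edge band;
# `lodged` is a boolean mask of the lodged-rice area (e.g., digitized from the imagery).
rng = np.random.default_rng(1)
nir, red, red_edge = (rng.random((200, 200)) for _ in range(3))
lodged = np.zeros((200, 200), dtype=bool)
lodged[50:120, 60:140] = True

ndvi_map, ndre_map = ndvi(nir, red), ndre(nir, red_edge)
print("NDVI lodged vs intact:", ndvi_map[lodged].mean(), ndvi_map[~lodged].mean())
print("NDRE lodged vs intact:", ndre_map[lodged].mean(), ndre_map[~lodged].mean())
```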

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1135-1147 / 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the time of image acquisition is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct RPC errors. The representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates; however, it is often difficult to identify GCPs in the satellite image because of image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features), a representative feature-based matching method, and phase correlation, a representative area-based matching method, were compared. RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with a GCP-based RPC correction result. The GCP-based method improved correction accuracy by 2.14 pixels in the sample direction and 5.43 pixels in the line direction compared with the vendor-provided RPC. The proposed method using SURF and phase correlation improved the sample accuracy by 0.83 and 1.49 pixels, and the line accuracy by 4.81 and 5.19 pixels, respectively, compared with the vendor-provided RPC. These results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
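
Of the two matching approaches compared, phase correlation is straightforward to demonstrate with OpenCV's cv2.phaseCorrelate; the sketch below uses a synthetic shifted patch in place of real KOMPSAT-3A and UAV chips, which in practice would first be resampled to a common ground sample distance and patch size.

```python
import cv2
import numpy as np

# Synthetic stand-in for the pre-processed patches: a reference chip and a copy that is
# circularly shifted by 5 px in x and 3 px in y (both single-band float32, same size).
rng = np.random.default_rng(0)
sat_patch = rng.random((256, 256)).astype(np.float32)
uav_patch = np.roll(sat_patch, shift=(3, 5), axis=(0, 1))

# phaseCorrelate returns the sub-pixel translation between the two patches and the
# correlation peak response; the offset magnitude here matches the simulated 5 px / 3 px shift.
(shift_x, shift_y), response = cv2.phaseCorrelate(sat_patch, uav_patch)
print(f"estimated offset: dx={shift_x:.2f}, dy={shift_y:.2f}, peak response={response:.3f}")
```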