• Title/Summary/Keyword: Drone images


Classification of Summer Paddy and Winter Cropping Fields Using Sentinel-2 Images (Sentinel-2 위성영상을 이용한 하계 논벼와 동계작물 재배 필지 분류 및 정확도 평가)

  • Hong, Joo-Pyo;Jang, Seong-Ju;Park, Jin-Seok;Shin, Hyung-Jin;Song, In-Hong
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.1 / pp.51-63 / 2022
  • Up-to-date statistics on crop cultivation status are essential for farmland management planning, and advances in remote sensing technology allow farming information to be updated rapidly. The objective of this study was to develop a classification model for rice paddy and winter crop fields based on the NDWI, NDVI, and HSV indices using Sentinel-2 satellite images. Eighteen locations in central Korea were selected as target areas and photographed once each in summer and winter with an eBee drone to establish ground truth on crop cultivation. The NDWI was used to classify summer paddy fields, while the NDVI and HSV were used and compared for identifying winter crop cultivation areas. The summer paddy field classification with the criteria of -0.195
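
The spectral indices named in this abstract can be sketched in a few lines. The Sentinel-2 band names and the NDWI threshold below are illustrative assumptions (the abstract's summer-paddy criterion of -0.195 is truncated, so the exact rule is not shown here):

```python
# Per-pixel spectral indices for paddy/winter-crop classification.
# Band reflectances are floats in [0, 1]; names follow common Sentinel-2
# usage (green = B3, red = B4, nir = B8). The -0.195 NDWI threshold is
# taken from the abstract's (truncated) summer-paddy criterion.

def ndwi(green: float, nir: float) -> float:
    """Normalized Difference Water Index: high over open water/flooded paddy."""
    return (green - nir) / (green + nir)

def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index: high over green vegetation."""
    return (nir - red) / (nir + red)

def is_summer_paddy(green: float, nir: float, threshold: float = -0.195) -> bool:
    # Flooded paddies reflect more green than NIR, pushing NDWI upward.
    return ndwi(green, nir) > threshold

# A flooded pixel (green > NIR) vs. a dry vegetated pixel (NIR >> green):
print(is_summer_paddy(green=0.10, nir=0.05))  # True
print(is_summer_paddy(green=0.08, nir=0.40))  # False
```

The same band arithmetic applies per pixel across a whole scene; only the thresholding rule differs between the summer (NDWI) and winter (NDVI/HSV) cases described above.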

Drone Image Classification based on Convolutional Neural Networks (컨볼루션 신경망을 기반으로 한 드론 영상 분류)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.97-102 / 2017
  • Recently, deep learning techniques such as convolutional neural networks (CNNs) have been introduced to classify high-resolution remote sensing data. In this paper, we investigated the applicability of a CNN to crop classification of farmland images captured by drones. The farming area was divided into seven classes: rice field, sweet potato, red pepper, corn, sesame leaf, fruit tree, and vinyl greenhouse. We performed image pre-processing and normalization before applying the CNN, and the resulting image classification accuracy exceeded 98%. These results are expected to accelerate the transition from existing image classification methods to deep learning-based methods and confirm the feasibility of the approach.
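
The pre-processing and normalization step mentioned above can be illustrated minimally. The [0, 1] min-max scaling shown here is a common choice and an assumption, since the abstract does not specify the exact scheme:

```python
# Min-max normalization of pixel values to [0, 1] before feeding image
# patches to a CNN; a common pre-processing step (the paper's exact
# normalization scheme is not given in the abstract).

def normalize_patch(pixels: list[float]) -> list[float]:
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # constant patch: map to zeros
        return [0.0] * len(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]

patch = [0, 64, 128, 255]             # 8-bit grey values
print(normalize_patch(patch))
```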

Operation Model for Forest-UAV for Detection of Forest Disease (산림병해충 검출을 위한 산림무인항공기 운영 모델)

  • Byun, Sangwoo;Kang, Yunhee
    • Journal of Platform Technology / v.8 no.1 / pp.3-9 / 2020
  • In Korea, forests cover 63% of the nation's land, and the average global temperature has been rising. The forest service has operated a proactive control system to prevent the spread of forest pests such as pine wilt disease, but timely control has been hindered by weather, topography, and manpower management difficulties. In this paper, we propose a model for building a fast, accurate, and efficient control system that automatically categorizes damaged and dead trees in images acquired by small unmanned aerial vehicles, based on information and communication technology. In particular, the proposed model establishes an effective government response system through cooperation with the private sector. It can also create new jobs in the unmanned aerial vehicle business and service industries.

Accuracy Analysis of Satellite Imagery in Road Construction Site Using UAV (도로 토목 공사 현장에서 UAV를 활용한 위성 영상 지도의 정확도 분석)

  • Shin, Seung-Min;Ban, Chang-Woo
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_2 / pp.753-762 / 2021
  • Google provides mapping services based on satellite imagery, and these are widely used in research. Over roughly the past 20 years, research and business applications using drones have expanded, and Pix4D is widely used to create 3D information models from drone images. This study compared distance errors between road construction site measurements derived from Google Earth and from Pix4D DSM data, in order to assess the reliability of distance measurements in Google Earth. Matching 49,666 key points per image yielded a DTM result of 3.08 cm/pixel. Lengths and altitudes in Pix4D and Google Earth were measured and compared using the obtained point cloud data (PCD). The average distance error relative to the Pix4D data was 0.68 m, confirming a relatively small error. Comparing altitudes, however, the maximum error of the satellite-based Google Earth measurements was 83.214 m, a large error indicating considerable inaccuracy. These results show that analyzing and acquiring data for road construction sites using Google Earth is difficult, and that drone-based point cloud data are necessary.
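
The error comparison described above reduces to a mean absolute error over paired distance measurements. The sample values below are made up for illustration and are not the study's data:

```python
# Mean absolute error between reference distances (e.g. from a drone/Pix4D
# point cloud) and distances measured in another source (e.g. Google Earth).
# The values below are illustrative, not the study's measurements.

def mean_abs_error(reference: list[float], measured: list[float]) -> float:
    assert len(reference) == len(measured)
    return sum(abs(r - m) for r, m in zip(reference, measured)) / len(reference)

ref = [120.4, 85.1, 240.0]   # metres, hypothetical Pix4D distances
ge  = [121.0, 84.3, 240.9]   # metres, hypothetical Google Earth distances
print(round(mean_abs_error(ref, ge), 2))  # → 0.77
```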

Implementation of Photovoltaic Panel failure detection system using semantic segmentation (시멘틱세그멘테이션을 활용한 태양광 패널 고장 감지 시스템 구현)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1777-1783 / 2021
  • The use of drones is gradually increasing for the efficient maintenance of large-scale renewable energy generation complexes. Photovoltaic panels have long been photographed with drones to manage panel loss and contamination, and various artificial-intelligence approaches are being tried for the efficient maintenance of large-scale photovoltaic complexes. Recently, semantic segmentation-based techniques have been developed to solve image classification problems. In this paper, we propose a classification model using semantic segmentation to determine the presence or absence of failures such as arcs, disconnections, and cracks in solar panel images obtained by a drone equipped with a thermal imaging camera. An efficient classification model was implemented by tuning several factors, such as the data size and type and a customized loss function, in U-Net, which shows robust classification performance even with a small dataset.
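
The loss-function customization mentioned above can be illustrated with the Dice loss, a common choice for segmentation on small or imbalanced datasets; treating it as this paper's exact loss is an assumption:

```python
# Dice loss over flattened binary masks: 1 - 2|P∩G| / (|P| + |G|).
# Often used to customize U-Net training on small/imbalanced datasets;
# shown here as a plausible example, not necessarily the paper's loss.

def dice_loss(pred: list[float], target: list[int], eps: float = 1e-6) -> float:
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * intersection + eps) / (total + eps)

# Perfect overlap -> loss ~0; no overlap -> loss ~1.
print(dice_loss([1.0, 1.0, 0.0], [1, 1, 0]))
print(dice_loss([1.0, 0.0], [0, 1]))
```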

A Measurement Technique of Flow Velocities in Small and Medium Sized-Rivers using Drone Images (드론 영상을 이용한 중소하천의 유속 측정)

  • Liu, Binghao;Yu, Kwonkyu;Lee, Namjoo
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.52-52 / 2020
  • Methods that measure river velocity and discharge from images are attracting attention as a quick and simple way to measure flow velocity, and various attempts have been made to measure river velocity and discharge using drones, a relatively new technology. This study presents a method to estimate the velocity distribution of a river from drone still images and video without separate surveying or complicated procedures. To make the method easier to apply to small and medium-sized rivers, we propose obtaining the camera's intrinsic parameters through camera calibration, using the EXIF data on the drone's position and attitude recorded in still images, and estimating velocities even from slightly shaky video. First, a program for drone camera calibration was written and the camera's intrinsic parameters were estimated. The drone's position (GPS) and attitude (gyroscope) were obtained from the EXIF information stored in still images (JPG files). These were used to determine the positions of reference points in the field and the approximate position and attitude of the camera during hovering flight for water-surface filming; this information is needed to locate the reference points appearing in the video used for the actual velocity measurement. The velocity distribution of the measurement cross-section was then analyzed from the subsequently recorded video using spatio-temporal image velocimetry, with slight shaking in the video adequately compensated by FFT analysis. Testing the developed method near the Daeri water-level gauge on the Danjang stream of the Miryang River yielded results quite close to measurements made with a conventional current meter.
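
At its core, image velocimetry converts a pixel displacement on the water surface into a ground velocity via the ground sampling distance. The sketch below shows only that conversion, under assumed camera values, and omits the spatio-temporal image analysis and FFT shake compensation described above:

```python
# Surface velocity from pixel displacement in nadir drone video:
# v = displacement_px * GSD / dt, where the ground sampling distance is
# GSD = flight_height * sensor_pixel_pitch / focal_length.
# All numbers below are illustrative assumptions, not the study's values.

def ground_sampling_distance(height_m: float, pixel_pitch_m: float,
                             focal_length_m: float) -> float:
    """Metres of ground covered by one image pixel (nadir view)."""
    return height_m * pixel_pitch_m / focal_length_m

def surface_velocity(displacement_px: float, gsd_m: float, dt_s: float) -> float:
    """Velocity (m/s) of a surface pattern moving displacement_px in dt_s."""
    return displacement_px * gsd_m / dt_s

gsd = ground_sampling_distance(height_m=50.0, pixel_pitch_m=2.4e-6,
                               focal_length_m=4.0e-3)   # 0.03 m/pixel
print(round(surface_velocity(displacement_px=20, gsd_m=gsd, dt_s=0.5), 3))
```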

Tack Coat Inspection Using Unmanned Aerial Vehicle and Deep Learning

  • da Silva, Aida;Dai, Fei;Zhu, Zhenhua
    • International conference on construction engineering and project management / 2022.06a / pp.784-791 / 2022
  • Tack coat is a thin layer of asphalt between the existing pavement and the asphalt overlay. During construction, insufficient tack coat layering can later cause surface defects such as slippage, shoving, and rutting. This paper proposes a method for improving tack coat inspection using an unmanned aerial vehicle (UAV) and a deep learning neural network for automatic non-uniformity assessment of the applied tack coat area. In this method, drone-captured images are assessed using a combination of Mask R-CNN and the Grey Level Co-occurrence Matrix (GLCM). Mask R-CNN detects the tack coat region and segments the region of interest from its surroundings; GLCM analyzes the texture of the segmented region and measures the uniformity or non-uniformity of the tack coat on the existing pavement. Field experiments showed that both the intersection over union of Mask R-CNN and the non-uniformity measured by GLCM were promising in terms of accuracy. The proposed method is automatic and cost-efficient, and would be of value to state Departments of Transportation for better management of pavement construction and rehabilitation work.
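
The GLCM texture analysis above can be sketched generically: count co-occurrences of grey-level pairs between neighbouring pixels, then summarize the matrix with a uniformity statistic. The horizontal-neighbour offset and the "energy" statistic below are common defaults and assumptions, not necessarily the paper's exact configuration:

```python
# Grey Level Co-occurrence Matrix for horizontally adjacent pixels, plus
# the "energy" statistic as a uniformity measure: a uniform texture
# yields a concentrated GLCM and energy near 1. Generic GLCM sketch,
# not the paper's implementation.
from collections import Counter

def glcm_horizontal(image: list[list[int]], levels: int) -> list[list[float]]:
    """Normalized co-occurrence frequencies of (left, right) grey-level pairs."""
    counts = Counter()
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return [[counts[(i, j)] / total for j in range(levels)]
            for i in range(levels)]

def energy(glcm: list[list[float]]) -> float:
    """Sum of squared GLCM entries; 1.0 means perfectly uniform texture."""
    return sum(p * p for row in glcm for p in row)

uniform   = [[1, 1, 1], [1, 1, 1]]        # constant texture
irregular = [[0, 1, 0], [1, 0, 1]]        # alternating texture
print(energy(glcm_horizontal(uniform, 2)))    # 1.0
print(energy(glcm_horizontal(irregular, 2)))  # 0.5
```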

Development of Deep Learning-based Land Monitoring Web Service (딥러닝 기반의 국토모니터링 웹 서비스 개발)

  • In-Hak Kong;Dong-Hoon Jeong;Gu-Ha Jeong
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.3 / pp.275-284 / 2023
  • Land monitoring involves systematically understanding changes in land use, leveraging spatial information such as satellite imagery and aerial photographs. Recently, the integration of deep learning technologies, notably object detection and semantic segmentation, into land monitoring has spurred active research. This study developed a web service to facilitate such integrations, allowing users to analyze aerial and drone images using CNN models. The web service architecture comprises AI, WEB/WAS, and DB servers and employs three primary deep learning models: DeepLab V3, YOLO, and Rotated Mask R-CNN. Specifically, YOLO offers rapid detection capabilities, Rotated Mask R-CNN excels in detecting rotated objects, while DeepLab V3 provides pixel-wise image classification. The performance of these models fluctuates depending on the quantity and quality of the training data. Anticipated to be integrated into the LX Corporation's operational network and the Land-XI system, this service is expected to enhance the accuracy and efficiency of land monitoring.
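
Detection outputs from models like those above are typically scored and filtered with intersection over union between boxes. A minimal axis-aligned IoU is shown below; rotated boxes, as used by Rotated Mask R-CNN, require a more involved computation:

```python
# Intersection over union of two axis-aligned boxes (x1, y1, x2, y2),
# the standard overlap measure for evaluating object detectors.
# Rotated Mask R-CNN uses rotated boxes, whose IoU is more involved.

def iou(a: tuple, b: tuple) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # partial overlap → 1/3
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint → 0.0
```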

Development of tracer concentration analysis method using drone-based spatio-temporal hyperspectral image and RGB image (드론기반 시공간 초분광영상 및 RGB영상을 활용한 추적자 농도분석 기법 개발)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun;Han, Eunjin;Kwon, Siyoon;Kim, Youngdo
    • Journal of Korea Water Resources Association / v.55 no.8 / pp.623-634 / 2022
  • Due to river maintenance projects such as the creation of waterfront areas and the Four Rivers Project, the flow characteristics of rivers are continuously changing, and the risk of water quality accidents caused by the inflow of various pollutants is increasing. In the event of a water quality accident, the effect on downstream reaches must be minimized by predicting the concentration and arrival time of pollutants in consideration of the river's flow characteristics. Tracking the behavior of these pollutants requires calculating the diffusion and dispersion coefficients for each river section; the dispersion coefficient, in particular, is used to analyze the diffusion range of soluble pollutants. Existing experimental studies of pollutant behavior require considerable manpower and cost, and obtaining spatially high-resolution data is difficult due to limited equipment operation. Recent studies have tracked contaminants using RGB drones, but RGB images collect only limited spectral information. In this study, to overcome these limitations, a hyperspectral sensor was mounted on a drone-based remote sensing platform to collect data with higher temporal and spatial resolution than conventional contact measurement. Using the collected spatio-temporal hyperspectral images, the tracer concentration was calculated and the transverse dispersion coefficient was derived. By overcoming the limitations of the drone platform and upgrading the dispersion coefficient calculation technique in future research, it is expected that various pollutants leaking into the water system can be detected, along with changes in various water quality parameters and river factors.
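
A transverse dispersion coefficient like the one derived above is commonly estimated by the method of moments, from the growth rate of the concentration-weighted variance of the transverse profile: D_y ≈ ½·Δσ²/Δt. The profiles below are synthetic illustrations, not the study's hyperspectral data, and this generic estimator is an assumption about the calculation, not the paper's exact procedure:

```python
# Method-of-moments estimate of the transverse dispersion coefficient:
# D_y ≈ 0.5 * (var2 - var1) / (t2 - t1), where each var is the
# concentration-weighted variance of the transverse profile.
# Profiles below are synthetic, not the study's data.

def transverse_variance(y: list[float], c: list[float]) -> float:
    """Concentration-weighted variance of transverse position y (metres)."""
    total = sum(c)
    mean = sum(yi * ci for yi, ci in zip(y, c)) / total
    return sum(ci * (yi - mean) ** 2 for yi, ci in zip(y, c)) / total

def dispersion_coefficient(var1: float, t1: float,
                           var2: float, t2: float) -> float:
    return 0.5 * (var2 - var1) / (t2 - t1)

y = [-2.0, -1.0, 0.0, 1.0, 2.0]          # transverse positions (m)
upstream   = [0.0, 1.0, 2.0, 1.0, 0.0]   # narrow tracer cloud at t = 10 s
downstream = [0.5, 1.0, 1.0, 1.0, 0.5]   # spread-out cloud at t = 60 s
v1 = transverse_variance(y, upstream)
v2 = transverse_variance(y, downstream)
print(round(dispersion_coefficient(v1, 10.0, v2, 60.0), 4))  # m²/s
```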

A Performance Comparison of Land-Based Floating Debris Detection Based on Deep Learning and Its Field Applications (딥러닝 기반 육상기인 부유쓰레기 탐지 모델 성능 비교 및 현장 적용성 평가)

  • Suho Bak;Seon Woong Jang;Heung-Min Kim;Tak-Young Kim;Geon Hui Ye
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.193-205 / 2023
  • A large amount of floating debris from land-based sources during heavy rainfall has negative social, economic, and environmental impacts, but monitoring systems for the accumulation areas and amounts of floating debris are lacking. With recent developments in artificial intelligence, there is a need to study large water systems quickly and efficiently using drone imagery and deep learning-based object detection models. In this study, we acquired drone images along with various other images and trained You Only Look Once (YOLO)v5s and the more recently developed YOLOv7 and YOLOv8s, comparing the performance of each model to propose an efficient detection technique for land-based floating debris. The qualitative evaluation showed that all three models detect floating debris well under normal circumstances, but the YOLOv8s model missed or duplicated objects when images were overexposed or the water surface reflected sunlight strongly. The quantitative evaluation showed that YOLOv7 performed best, with a mean Average Precision (intersection over union, IoU 0.5) of 0.940, better than YOLOv5s (0.922) and YOLOv8s (0.922). When distortions were introduced into the color and high-frequency components to compare model performance under degraded data quality, the YOLOv8s model degraded most noticeably, while the YOLOv7 model degraded least. This study confirms that the YOLOv7 model is more robust than YOLOv5s and YOLOv8s in detecting land-based floating debris. The proposed deep learning-based floating debris detection technique can identify the spatial distribution of floating debris by category, which can contribute to planning future cleanup work.
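
The mAP (IoU 0.5) scores quoted above rest on matching each prediction to a ground-truth box at IoU ≥ 0.5. Below is a minimal greedy matcher computing precision and recall at that threshold; the actual evaluation harness used by the study is an assumption, and the sample boxes are made up:

```python
# Greedy matching of predicted to ground-truth boxes at IoU >= 0.5,
# yielding precision and recall: the building blocks behind mAP(IoU 0.5).
# Boxes are (x1, y1, x2, y2); the sample boxes are illustrative.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    matched, tp = set(), 0
    for p in preds:                  # preds assumed sorted by confidence
        best = max((g for g in range(len(gts)) if g not in matched),
                   key=lambda g: iou(p, gts[g]), default=None)
        if best is not None and iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    return tp / len(preds), tp / len(gts)

gts   = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 0, 10, 10), (50, 50, 60, 60)]   # one hit, one false positive
print(precision_recall(preds, gts))  # (0.5, 0.5)
```

Averaging precision over the ranked prediction list, and then over classes, gives the mAP figures the abstract reports.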