• Title/Summary/Keyword: UAV image (무인항공기 이미지)

Search Results: 28

High-Resolution Mapping Techniques for Coastal Debris Using YOLOv8 and Unmanned Aerial Vehicle (YOLOv8과 무인항공기를 활용한 고해상도 해안쓰레기 매핑)

  • Suho Bak;Heung-Min Kim;Youngmin Kim;Inji Lee;Miso Park;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.151-166 / 2024
  • Coastal debris presents a significant environmental threat globally. This research sought to improve coastal debris monitoring by combining deep learning and remote sensing. An object detection approach based on the You Only Look Once (YOLO)v8 model was implemented, a comprehensive image dataset covering 11 primary types of coastal debris found in South Korea was built, and a protocol for real-time detection and analysis of debris was proposed. Drone imagery was collected over Sinja Island, at the estuary of the Nakdong River, and analyzed with a custom YOLOv8-based analysis program to identify type-specific hotspots of coastal debris. The proposed mapping and analysis methodology is expected to be useful for coastal debris management.
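The custom YOLOv8 analysis program described in the abstract is not public; the following is only a minimal sketch of how YOLOv8 inference on drone imagery might look with the Ultralytics API, where the weights file and image path are hypothetical placeholders rather than artifacts from the paper.

```python
# Minimal sketch of YOLOv8 inference on drone imagery (Ultralytics API).
# "coastal_debris_yolov8.pt" and the image path are hypothetical placeholders.
from collections import Counter

from ultralytics import YOLO

model = YOLO("coastal_debris_yolov8.pt")  # weights assumed fine-tuned on 11 debris classes
results = model.predict("sinjado_ortho_tile.jpg", conf=0.25)

# Count detections per debris class to approximate a type-specific hotspot summary.
counts = Counter()
for r in results:
    for cls_id in r.boxes.cls.tolist():
        counts[model.names[int(cls_id)]] += 1
print(counts)
```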

A Study on Field Compost Detection by Using Unmanned Aerial Vehicle Image and Semantic Segmentation Technique based Deep Learning (무인항공기 영상과 딥러닝 기반의 의미론적 분할 기법을 활용한 야적퇴비 탐지 연구)

  • Kim, Na-Kyeong;Park, Mi-So;Jeong, Min-Ji;Hwang, Do-Hyun;Yoon, Hong-Joo
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.367-378 / 2021
  • Field compost is a representative non-point pollution source from livestock farming. If field compost flows into the water system during rainfall, nutrients such as phosphorus and nitrogen contained in it can degrade river water quality. In this paper, we propose a method for detecting field compost using unmanned aerial vehicle images and deep learning-based semantic segmentation. From 39 orthoimages acquired over the study area, about 30,000 training samples were obtained through data augmentation. A semantic segmentation algorithm based on U-Net, combined with OpenCV filtering, was then applied and its accuracy evaluated. The evaluation yielded a pixel accuracy of 99.97%, precision of 83.80%, recall of 60.95%, and F1-score of 70.57%. The low recall relative to precision stems from under-segmentation of compost pixels when only a small proportion of compost pixels appears at the edges of the image. Accuracy is expected to improve by adding datasets with spectral bands beyond RGB.
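As a quick arithmetic check, the reported F1-score follows directly from the reported precision and recall (their harmonic mean); the snippet below simply recomputes it.

```python
# Recompute F1 from the reported precision and recall (harmonic mean).
precision = 0.8380
recall = 0.6095

f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # ~0.7057, matching the reported 70.57%
```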

Real-time Underground Facility Map Production Using Drones (드론을 이용한 실시간 지하시설물도 작성)

  • NO, Hong-Suk;BAEK, Tae-Kyung
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.4 / pp.39-54 / 2017
  • Between 1998 and 2010, the computerization of underground facility maps was completed in 84 cities. Since 2011, new pipelines have been laid and existing pipelines have been maintained, renovated, and renewed. To record the exact location and depth of an exposed pipe, an underground facility map is drafted before the trench is backfilled, following National Geographic Information Institute Regulation No. 134 (revised in 2010) on underground facility surveys. In the proposed workflow, drone imagery is georeferenced against ground control points; points falling outside the allowable error range are removed and the data are reprocessed to obtain the best result. This drone-based on-site measuring approach allows workers to produce a high-accuracy underground facility map within the error bound, and could serve as a new way to standardize the process.
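The paper's exact reprocessing workflow is not reproduced here; the following is a minimal sketch, under assumed variable names, of the idea described above: drop ground control points (GCPs) whose residuals exceed the allowable error, then reprocess with the remaining set. The photogrammetric re-adjustment itself is outside the sketch.

```python
# Sketch of ground control point (GCP) filtering by residual, as described above.
# `gcps` are surveyed coordinates, `measured` are coordinates estimated from imagery (both N x 3).
import numpy as np

def filter_gcps(gcps, measured, tolerance_m=0.10):
    gcps = np.asarray(gcps, dtype=float)
    measured = np.asarray(measured, dtype=float)
    keep = np.ones(len(gcps), dtype=bool)
    while True:
        residuals = np.linalg.norm(gcps[keep] - measured[keep], axis=1)
        if residuals.size == 0 or residuals.max() <= tolerance_m:
            break
        # Remove the single worst point, then re-evaluate the remaining set.
        worst_idx = np.flatnonzero(keep)[np.argmax(residuals)]
        keep[worst_idx] = False
    return keep  # mask of GCPs to retain for reprocessing
```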

Study on the Application of RT-DETR to Monitoring of Coastal Debris on Unmanaged Coasts (비관리 해변의 해안 쓰레기 모니터링을 위한 RT-DETR 적용 방안 연구)

  • Ye-Been Do;Hong-Joo Yoon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.2 / pp.453-466 / 2024
  • To improve the monitoring of coastal debris in South Korea, which is difficult to estimate with limited resources and vertex-based surveys, an approach based on UAV (Unmanned Aerial Vehicle) imagery and the RT-DETR (Real-Time DEtection TRansformer) model was proposed for detecting coastal debris. Through comparison with field investigations, the study demonstrates the possibility of quantitatively detecting coastal debris and estimating the total amount deposited along the natural coastline of South Korea. The RT-DETR model achieved 0.894 mAP@0.5 and 0.693 mAP@0.5:0.95 in training, and when applied to unmanaged coasts its accuracy for the total number of coastal debris items was 72.9%. If guidelines defining the monitoring of unmanaged coasts are established alongside this research, it should be possible to estimate the total amount of coastal debris deposited in South Korea.
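For readers unfamiliar with the two accuracy figures: mAP@0.5 is mean average precision at a single IoU threshold of 0.5, while mAP@0.5:0.95 averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05 (the COCO convention). A minimal sketch, assuming a hypothetical per-threshold AP function is already available:

```python
# mAP@0.5:0.95 is the mean of AP evaluated at ten IoU thresholds (COCO convention).
# `average_precision_at_iou` is a hypothetical evaluation helper, not part of RT-DETR.
import numpy as np

def map_50_95(average_precision_at_iou):
    thresholds = np.arange(0.50, 1.00, 0.05)  # 0.50, 0.55, ..., 0.95
    return float(np.mean([average_precision_at_iou(t) for t in thresholds]))
```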

Vehicle Detection in Dense Area Using UAV Aerial Images (무인 항공기를 이용한 밀집영역 자동차 탐지)

  • Seo, Chang-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.693-698 / 2018
  • This paper proposes a vehicle detection method for parking areas using unmanned aerial vehicles (UAVs) and YOLOv2, a fast real-time object detection algorithm. The YOLOv2 convolutional network computes class probabilities over the entire image in a single pass and predicts bounding box locations at the same time; because detection runs through a single network, it is very fast and easy to optimize. Sliding-window methods and the region-based convolutional neural network (R-CNN) family rely on many region proposals and require too much computation per class, which makes them poorly suited to real-time applications. This research uses YOLOv2 to overcome that limitation. Darknet, OpenCV, and CUDA (Compute Unified Device Architecture) were used as open-source tools, and a deep learning server handled training and per-vehicle detection. In the experiments, the algorithm detected cars in a dense area from UAV imagery with reduced detection overhead, and could be applied in real time.
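In dense parking areas, overlapping boxes on adjacent vehicles are typically resolved with non-maximum suppression (NMS), which is standard in YOLO-family pipelines; the sketch below illustrates the idea and is not the paper's own code.

```python
# Minimal non-maximum suppression (NMS) sketch for dense vehicle detections.
import numpy as np

def iou(a, b):
    # Boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    order = np.argsort(scores)[::-1]  # highest-confidence boxes first
    keep = []
    while order.size:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        # Discard remaining boxes that overlap the kept box too strongly.
        order = np.array([i for i in rest if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep
```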

A Study on the UAV-based Vegetation Index Comparison for Detection of Pine Wilt Disease Trees (소나무재선충병 피해목 탐지를 위한 UAV기반의 식생지수 비교 연구)

  • Jung, Yoon-Young;Kim, Sang-Wook
    • Journal of Cadastre & Land InformatiX / v.50 no.1 / pp.201-214 / 2020
  • This study aimed at early detection of trees damaged by pine wilt disease using vegetation indices derived from UAV images. Location data for 193 pine-wilt-damaged trees were collected through field surveys, and vegetation index analyses of NDVI, GNDVI, NDRE, and SAVI were performed on multispectral UAV images acquired at the same time. The K-Means algorithm was adopted to classify damaged trees, and a confusion matrix was used to compare classification accuracy. The results are summarized as follows. First, overall classification accuracy ranked NDVI (88.04%, Kappa coefficient 0.76) > GNDVI (86.01%, Kappa coefficient 0.72) > NDRE (77.35%, Kappa coefficient 0.55) > SAVI (76.84%, Kappa coefficient 0.54), with NDVI highest. Second, unsupervised K-Means classification using NDVI or GNDVI can identify damaged trees to some extent; in particular, it can support early detection because of its efficient operation, low user intervention, and relatively simple analysis process. In the future, using time-series images or applying deep learning techniques is expected to increase classification accuracy.
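The four indices compared above are simple band ratios of the multispectral UAV bands; a minimal sketch is given below, with band array names assumed and SAVI using the common soil adjustment factor L = 0.5. Two-cluster K-Means on the resulting NDVI (or GNDVI) map then separates damaged from healthy canopy, as described in the abstract.

```python
# Vegetation indices from multispectral UAV bands.
# nir, red, green, red_edge are assumed NumPy reflectance arrays; eps avoids division by zero.
import numpy as np

def vegetation_indices(nir, red, green, red_edge, L=0.5, eps=1e-9):
    ndvi  = (nir - red) / (nir + red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    ndre  = (nir - red_edge) / (nir + red_edge + eps)
    savi  = (nir - red) * (1 + L) / (nir + red + L + eps)
    return ndvi, gndvi, ndre, savi
```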

Implementation of virtual reality for interactive disaster evacuation training using close-range image information (근거리 영상정보를 활용한 실감형 재난재해 대피 훈련 가상 현실 구현)

  • KIM, Du-Young;HUH, Jung-Rim;LEE, Jin-Duk;BHANG, Kon-Joon
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.1 / pp.140-153 / 2019
  • Close-range image information from drones and ground-based cameras has been used frequently in disaster mitigation for 3D modeling and mapping. In addition, the use of virtual reality (VR) is increasing as realistic 3D models are combined with VR technology to simulate large-scale disaster situations. In this paper, we created a VR training program by extracting realistic 3D models from close-range images taken with an unmanned aircraft and a handheld digital camera, and examined issues arising during implementation as well as the effectiveness of applying VR to disaster evacuation training. First, we built a disaster scenario and created 3D models after processing the close-range imagery. The 3D models were imported into Unity, a tool for creating augmented/virtual reality content, as the background for Android-based mobile phones, and the VR environment was implemented with C# scripts. The resulting virtual reality includes a scenario in which the trainee moves to a safe place along the evacuation route in the event of a disaster, and successful training was judged to be achievable in this virtual environment. In addition, VR-based training has advantages over actual evacuation drills in terms of cost, space, and time efficiency.

Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park;Seung-Ki Baek;Seung Hyun Jeong
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.71-80 / 2024
  • This study proposes an ensemble learning technique to enhance the semantic segmentation of images captured by Unmanned Aerial Vehicles (UAVs). As UAVs are increasingly used in fields such as urban planning, deep learning segmentation methods for land cover mapping have been actively developed. The proposed method combines three prominent segmentation models, U-Net, DeepLabV3, and Fully Convolutional Network (FCN), integrating their training loss, validation accuracy, and class scores to improve overall prediction performance. The approach was applied and evaluated on a land cover segmentation problem with seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using UAV imagery. Performance was evaluated by mean Intersection over Union (mIoU), and comparison of the ensemble with the three individual segmentation models showed improved mIoU. The study therefore confirms that the proposed technique can enhance the performance of semantic segmentation models.
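The paper's exact weighting scheme is not reproduced here; as a minimal sketch, a soft-voting ensemble averages (possibly weighted) per-class probability maps from the three models, and mIoU is the per-class IoU averaged over the seven classes. All array names and the uniform weights below are assumptions.

```python
# Soft-voting ensemble of per-class probability maps and mIoU evaluation (sketch).
# prob_unet, prob_deeplab, prob_fcn: arrays of shape (num_classes, H, W); weights are assumed.
import numpy as np

def ensemble_predict(prob_unet, prob_deeplab, prob_fcn, weights=(1/3, 1/3, 1/3)):
    stacked = np.stack([prob_unet, prob_deeplab, prob_fcn])        # (3, C, H, W)
    combined = np.tensordot(np.asarray(weights), stacked, axes=1)  # weighted sum -> (C, H, W)
    return combined.argmax(axis=0)                                 # per-pixel class labels

def mean_iou(pred, target, num_classes=7):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```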