• Title/Summary/Keyword: Drone images

Search Results: 203

Comparison of Deep-Learning Algorithms for the Detection of Railroad Pedestrians

  • Fang, Ziyu;Kim, Pyeoungkee
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.1
    • /
    • pp.28-32
    • /
    • 2020
  • Railway transportation is the main form of land-based transportation in most countries, so railway-transportation safety has long been a key research topic. Pedestrian accidents are the main cause of railway-transportation casualties. In this study, we conduct experiments to determine which of the latest convolutional neural network models and algorithms are appropriate for building a pedestrian railroad accident prevention system. While a drone cruises along a pre-specified path and altitude, the real-time status around the rail is recorded and the image information is transmitted back to the server. The images are then analyzed to determine whether pedestrians are present near the railroad; if so, a speed-deceleration order is immediately sent to the train driver, reducing pedestrian railroad accidents. This is the first part of an envisioned drone-based intelligent security system, which can effectively address the problem of an insufficient human patrol force.

Semantic Segmentation of Heterogeneous Unmanned Aerial Vehicle Datasets Using Combined Segmentation Network

  • Song, Ahram
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.87-97
    • /
    • 2023
  • Unmanned aerial vehicles (UAVs) can capture high-resolution imagery from a variety of viewing angles and altitudes, but a single acquisition is generally limited to a small scene within a larger region. To improve the utility of UAV datasets for deep learning applications, multiple datasets created from various regions under different conditions are needed. To demonstrate a method for integrating heterogeneous UAV datasets, this paper applies a combined segmentation network (CSN) in which the UAVid and Semantic Drone datasets share encoding blocks to learn their general features, while the decoding blocks are trained separately on each dataset. Experimental results show that the CSN improves the accuracy of specific classes (e.g., cars) that currently comprise a low ratio in both datasets. This result suggests that the range of UAV dataset utilization will increase.
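The shared-encoder/separate-decoder idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the class name, the linear "blocks", the feature size, and the dataset keys are all assumptions made for the sketch.

```python
class CombinedSegmentationNet:
    """Toy sketch of a CSN: encoder weights shared across datasets,
    with one dataset-specific decoder head per dataset."""

    def __init__(self, datasets, n_features=4):
        # Shared encoding block: one weight per input feature.
        self.w_enc = [0.5] * n_features
        # Separate decoding block (head) per dataset.
        self.decoders = {d: [1.0] * n_features for d in datasets}

    def forward(self, x, dataset):
        # Shared features are computed the same way for every dataset...
        feat = [xi * w for xi, w in zip(x, self.w_enc)]
        # ...but decoded with the weights belonging to that dataset only.
        dec = self.decoders[dataset]
        return sum(f * w for f, w in zip(feat, dec))

net = CombinedSegmentationNet(["UAVid", "SemanticDrone"])
score = net.forward([1.0, 1.0, 1.0, 1.0], "UAVid")
```

Training would update `w_enc` from both datasets but each entry of `decoders` from its own dataset only, which is the property the paper exploits.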

Development and Verification of UAV-UGV Hybrid Robot System (드론-지상 하이브리드 로봇 시스템 개발 및 검증)

  • Jongwoon Woo;Jihoon Kim;Changhyun Sung;Byeongwoo Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.3
    • /
    • pp.233-240
    • /
    • 2023
  • In this paper, we propose a hybrid robot that performs surveillance and reconnaissance on the ground and in the air simultaneously. The drone extends the surveillance and reconnaissance range by carrying the ground robot quickly to otherwise hidden areas. First, the ground robot is delivered to the mission area by the drone and performs surveillance and reconnaissance missions in urban-warfare or mountainous environments. Second, the drone transports the ground robot quickly, transmits the ground robot's surveillance and reconnaissance imagery to the control system, and performs reconnaissance missions at the same time. Finally, to secure the interoperability of this hybrid robot, its basic performance and environmental performance were verified; the evaluation methods were tested and verified based on the KS standards.

Research on Digital Construction Site Management Using Drone and Vision Processing Technology (드론 및 비전 프로세싱 기술을 활용한 디지털 건설현장 관리에 대한 연구)

  • Seo, Min Jo;Park, Kyung Kyu;Lee, Seung Been;Kim, Si Uk;Choi, Won Jun;Kim, Chee Kyeung
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2023.11a
    • /
    • pp.239-240
    • /
    • 2023
  • Construction site management involves overseeing tasks from the construction phase to the maintenance stage, and digitalization of construction sites is necessary for digital construction site management. In this study, we conduct research on object recognition at construction sites using drones. Images of construction sites captured by drones are reconstructed into BIM (Building Information Modeling) models, and objects are recognized after the models are partially rendered using artificial intelligence. For the photorealistic rendering of the BIM models, both traditional filtering techniques and a generative adversarial network (GAN) model were used, while the YOLO (You Only Look Once) model was employed for object recognition. This study is expected to provide insights into the research direction of digital construction site management and to help assess the potential and future value of introducing artificial intelligence in the construction industry.


Research on the Design of Drone System for Field Support Using AR Smart Glasses Technology (AR스마트안경 기술을 접목한 현장 지원용 드론(Drone)시스템 설계에 대한 연구)

  • Lee, Kyung-Hwan;Jeong, Jin-Kuk;Ryu, Gab-Sang
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.4
    • /
    • pp.27-32
    • /
    • 2020
  • High-resolution images taken by drones are used for a variety of purposes, including monitoring. The management of agricultural facilities, however, still relies mostly on human survey methods. Surveying agricultural facilities and their surrounding environments involves legal and environmental constraints that can make sites inaccessible to humans. In addition, in areas where information such as 3D maps and satellite maps is outdated or unavailable, human investigation is unavoidable and consumes a great deal of time and money. The purpose of this research is to design and develop a field-support drone system incorporating AR smart glasses technology for the maintenance and management of agricultural facilities, improving on the difficulties of using existing drones. We also suggest ways to secure the safety of personal information, to mitigate the harm caused by exposure of personal information captured during video shooting.

Discriminant analysis to detect fire blight infection on pear trees using RGB imagery obtained by a rotary wing drone

  • Kim, Hyun-Jung;Noh, Hyun-Kwon;Kang, Tae-Hwan
    • Korean Journal of Agricultural Science
    • /
    • v.47 no.2
    • /
    • pp.349-360
    • /
    • 2020
  • Fire blight is a contagious disease affecting apples, pears, and some other members of the family Rosaceae. Because of its extremely strong infectivity, once an orchard is confirmed to be infected, all trees within 100 m must be removed and buried, and the site is prohibited from cultivating any fruit trees for 5 years. In South Korea, fire blight was first confirmed in the Ansung area in 2015, and infections are still identified every year. Traditional approaches to detecting fire blight are expensive and time-consuming; in addition, the inspectors themselves can transmit the pathogen. It is therefore necessary to develop a remote, unmanned monitoring system for fire blight to prevent the spread of the disease. This study was conducted to detect fire blight on pear trees using discriminant analysis with color information collected from a rotary-wing drone. Images of infected trees were obtained at a pear orchard in Cheonan using an RGB camera attached to a rotary-wing drone at an altitude of 4 m, and also using a smartphone RGB camera on the ground. RGB and Lab color spaces and discriminant analysis were used to develop the image processing algorithm. The proposed method achieved an accuracy of approximately 75%, although several limitations of the system still need to be addressed.
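As a sketch of the kind of color-based discrimination described above, the snippet below classifies pixels by nearest class-mean color. This is a minimal stand-in for the paper's discriminant analysis, not its actual method; the sample colors and class names are invented for illustration.

```python
import math

def train_centroids(samples):
    """samples: {"infected": [(r, g, b), ...], "healthy": [...]}.
    Returns the mean color per class - the simplest linear discriminant."""
    means = {}
    for label, pixels in samples.items():
        n = len(pixels)
        means[label] = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    return means

def classify(pixel, means):
    # Assign the pixel to the class whose mean color is closest (Euclidean).
    return min(means, key=lambda lbl: math.dist(pixel, means[lbl]))

# Usage with made-up training colors: reddish-brown for blighted tissue,
# green for healthy foliage.
means = train_centroids({
    "infected": [(200, 80, 60), (190, 90, 70)],
    "healthy": [(60, 160, 70), (70, 150, 80)],
})
```

A real pipeline would work in a perceptually uniform space such as Lab, as the paper does, rather than raw RGB.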

Gimbal System Control for Drone for 3D Image (입체영상 촬영을 위한 드론용 짐벌시스템 제어)

  • Kim, Min;Byun, Gi-Sig;Kim, Gwan-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.11
    • /
    • pp.2107-2112
    • /
    • 2016
  • This paper develops a gimbal control stabilizer so that a drone can capture clean 3D images despite the shaking and wavering of the drone platform. The stabilizer consists of mounts that hold the camera modules and IMU (Inertial Measurement Unit) sensor modules at exact angles, blocking external vibrations from reaching the camera modules. It is difficult for the camera modules to capture clean images because of the irregular movements and various vibrations produced by a flying drone. Moreover, a general PID controller used for the rolling, pitching, and yawing movements often needs its PID parameters readjusted to handle vibrations at various frequencies. Therefore, this paper applies an intelligent PID controller and designs the gimbal control stabilizer to obtain clean images and to mitigate the irregular movement and vibration problems described above.
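For reference, the textbook PID loop that the intelligent PID builds on can be written in a few lines. The gains, the 0.01 s time step, and the single-axis integrator plant below are illustrative assumptions, not the paper's values.

```python
class PID:
    """Basic PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt            # accumulate I term
        deriv = (err - self.prev_err) / self.dt   # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Usage: drive a toy gimbal roll axis (modeled as a pure integrator,
# angle' = u) to a 1.0 rad setpoint over 30 s of simulated time.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(3000):
    u = pid.update(1.0, angle)   # control effort from the current error
    angle += u * 0.01            # integrator plant response
```

The fixed gains here are exactly what an intelligent PID replaces: it adapts `kp`, `ki`, and `kd` online instead of requiring manual retuning for each vibration regime.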

A Study on Automatic Vehicle Extraction within Drone Image Bounding Box Using Unsupervised SVM Classification Technique (무감독 SVM 분류 기법을 통한 드론 영상 경계 박스 내 차량 자동 추출 연구)

  • Junho Yeom
    • Land and Housing Review
    • /
    • v.14 no.4
    • /
    • pp.95-102
    • /
    • 2023
  • Numerous investigations have explored the integration of machine learning algorithms with high-resolution drone images for object detection in urban settings. However, a prevalent limitation in vehicle extraction studies is the reliance on bounding boxes rather than instance segmentation, which hinders the precise determination of vehicle direction and exact boundaries. Instance segmentation provides detailed object boundaries but requires labour-intensive labelling of individual objects, prompting the need for research on automating unsupervised instance segmentation in vehicle extraction. In this study, a novel approach was proposed for vehicle extraction using unsupervised SVM classification applied to vehicle bounding boxes in drone images. The method addresses the challenges associated with bounding-box-based approaches and provides a more accurate representation of vehicle boundaries. The study showed promising results, demonstrating 89% accuracy in vehicle extraction. Notably, the proposed technique proved effective even when dealing with significant variations in spectral characteristics within the vehicles. This research contributes to the field by offering a viable solution for automatic and unsupervised instance segmentation in vehicle extraction from drone images.
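The unsupervised step of such a pipeline - separating vehicle from background pixels inside a bounding box without labels - can be sketched with a tiny 1-D two-cluster k-means that produces pseudo-labels. This is a simplified stand-in for the abstract's unsupervised SVM stage (which would then train an SVM on these pseudo-labels); the brighter-cluster-is-vehicle assumption is purely illustrative.

```python
def two_means(values, iters=20):
    """Tiny 1-D k-means with k=2, initialized at the min and max values."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # Assign each value to the nearer of the two centroids.
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        # Recompute centroids; keep the old one if a group is empty.
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

def label_pixels(values):
    """Pseudo-label pixel intensities inside a bounding box: 1 = vehicle,
    0 = background, assuming (for illustration) the brighter cluster is
    the vehicle."""
    c = two_means(values)
    fg = max(range(2), key=lambda i: c[i])
    return [1 if abs(v - c[fg]) <= abs(v - c[1 - fg]) else 0 for v in values]
```

In the full method, these pseudo-labels would supervise an SVM whose decision boundary then traces the vehicle outline within the box.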

Drone Image based Time Series Analysis for the Range of Eradication of Clover in Lawn (드론 영상기반 잔디밭 내 클로버의 퇴치 범위에 대한 시계열 분석)

  • Lee, Yong Chang;Kang, Joon Oh;Oh, Seong Jong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.4
    • /
    • pp.211-221
    • /
    • 2021
  • White clover (Trifolium repens, hereafter 'clover') is a representative harmful plant in lawns. It starts growing earlier than lawn grass, forming a canopy over the lawn and hindering the lawn's photosynthesis and growth. As a result, in the competition between lawn and clover, the clover territory spreads while the lawn is damaged and dries up. Damage to the affected lawn area accelerates during the rainy season as well as during the plant's later growth stage, spreading the area of exposed soil. The restoration of damaged lawn therefore causes psychological stress and a considerable economic burden. The purpose of this study is to distinguish clover from lawn, to identify the distribution of areas damaged by the spread of clover, and to review changes in vegetation before and after the eradication of clover. For this purpose, a time series analysis of three vegetation indices, calculated from images acquired by a drone carrying RGB (Red Green Blue) and BG-NIR (Near Infrared) sensors, was reviewed to separate lawn from clover for selective eradication and to map the distribution of damaged lawn for a recovery plan. In particular, we examined time series changes in the ecology of clover before and after weed removal by hand and by brush cutter, and explored how to distinguish lawn from clover during the mid-growth period of the two plants. This study shows that time series analysis of the MGRVI (Modified Green-Red Vegetation Index), NDVI (Normalized Difference Vegetation Index), and MSAVI (Modified Soil Adjusted Vegetation Index) indices from drone-based RGB and BG-NIR images, according to the growth characteristics of lawn and clover, can confirm trends of change after lawn damage and clover eradication.
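The three indices named in the abstract have standard published definitions, shown below for per-pixel band values. Reflectances are assumed to be scaled to [0, 1]; the example values are arbitrary.

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def mgrvi(green, red):
    """Modified Green-Red Vegetation Index (RGB-only, no NIR needed)."""
    return (green ** 2 - red ** 2) / (green ** 2 + red ** 2)

def msavi(nir, red):
    """Modified Soil Adjusted Vegetation Index (self-adjusting soil factor)."""
    return (2 * nir + 1 - math.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Example: a healthy-vegetation pixel with NIR=0.5, red=0.1, green=0.4.
print(ndvi(0.5, 0.1), mgrvi(0.4, 0.1), msavi(0.5, 0.1))
```

MGRVI is what makes the RGB-only sensor useful here; NDVI and MSAVI require the BG-NIR sensor's near-infrared band.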

The Study on Spatial Classification of Riverine Environment using UAV Hyperspectral Image (UAV를 활용한 초분광 영상의 하천공간특성 분류 연구)

  • Kim, Young-Joo;Han, Hyeong-Jun;Kang, Joon-Gu
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.10
    • /
    • pp.633-639
    • /
    • 2018
  • Securing high-resolution images through remote sensing (RS) is important for spatial classification because of the complex and varied factors that make up the river environment. The purpose of this study is to evaluate the accuracy of classification results and to demonstrate the applicability of high-resolution hyperspectral images obtained by drone for spatial classification. Hyperspectral images of the study area were reduced in dimensionality using PCA and MNF transformations to remove the effects of noise. Spatial classification was performed with the supervised classifiers MLC (Maximum Likelihood Classification), SVM (Support Vector Machine), and SAM (Spectral Angle Mapper). Overall, the highest classification accuracy was obtained when MLC was applied to the MNF-transformed image. However, misclassification was mainly found at the boundaries of some classes, including water bodies and shadowed areas. The results of this study can serve as basic data for remote sensing using drones and hyperspectral sensors, and are expected to be applicable to a wider range of river environments through the development of additional algorithms.
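Of the three classifiers compared above, SAM is the simplest to show concretely: it treats each pixel spectrum as a vector and assigns the class whose reference spectrum subtends the smallest angle. The sketch below uses invented three-band reference spectra; a real hyperspectral pixel would have dozens or hundreds of bands.

```python
import math

def spectral_angle(a, b):
    """Angle in radians between two spectra - the core of SAM. Insensitive
    to overall brightness, since scaling a spectrum leaves the angle unchanged."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references):
    """Assign the pixel to the reference class with the smallest angle."""
    return min(references, key=lambda k: spectral_angle(pixel, references[k]))

# Usage with made-up 3-band reference spectra for two river-environment classes.
refs = {"water": [0.1, 0.1, 0.05], "vegetation": [0.05, 0.3, 0.4]}
```

The brightness insensitivity is also SAM's weakness at shadowed boundaries, consistent with the misclassification the study reports there.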