• Title/Summary/Keyword: Drone Images


A Study on the Density Analysis of Multi-objects Using Drone Imaging (드론 영상을 활용한 다중객체의 밀집도 분석 연구)

  • WonSeok Jang;HyunSu Kim;JinMan Park;MiSeon Han;SeongChae Baek;JeJin Park
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.2
    • /
    • pp.69-78
    • /
    • 2024
  • Recently, the use of CCTV to prevent crowd accidents has been promoted, but research is needed to compensate for the spatial limitations of CCTV. In this study, pedestrian density was measured using drone footage, and based on a review of existing literature, a threshold of 6.7 people/m² was selected as the risk level for crowd accidents. In addition, a preliminary study was conducted to determine drone parameters, and the pedestrian recognition rate was found to be high at a drone altitude of 20 m and a camera angle of 60°. Based on a previous study, we selected a target area with a high concentration of pedestrians and measured its pedestrian density, which was found to be 0.27 to 0.30 people/m². The study shows that it is possible to measure risk levels by determining pedestrian densities in target areas using drone images. We believe drone surveillance will be used for crowd safety management in the near future.
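
As a brief illustration of the density measure described in this abstract, the following Python sketch converts a hypothetical pedestrian count and monitored area into a density value and compares it against the 6.7 people/m² threshold the paper cites; the counts, area, and function names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: convert a pedestrian count from a drone frame into a
# density value and compare it against the 6.7 people/m^2 risk threshold
# cited in the abstract. Counts and area are hypothetical placeholders.

RISK_THRESHOLD = 6.7  # people per square metre, from the cited literature review

def pedestrian_density(count: int, area_m2: float) -> float:
    """Return pedestrian density (people/m^2) for a monitored ground area."""
    return count / area_m2

def is_crowd_risk(count: int, area_m2: float) -> bool:
    """True when the measured density reaches the crowd-accident risk level."""
    return pedestrian_density(count, area_m2) >= RISK_THRESHOLD

# Example: 30 pedestrians detected over a 100 m^2 target area -> 0.30 people/m^2
print(pedestrian_density(30, 100.0))   # 0.3
print(is_crowd_risk(30, 100.0))        # False, well below the risk threshold
```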

Comparison of Deep-Learning Algorithms for the Detection of Railroad Pedestrians

  • Fang, Ziyu;Kim, Pyeoungkee
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.1
    • /
    • pp.28-32
    • /
    • 2020
  • Railway transportation is the main form of land transportation in most countries; accordingly, railway-transportation safety has always been a key issue for many researchers. Railway pedestrian accidents are the main cause of railway-transportation casualties. In this study, we conduct experiments to determine which of the latest convolutional neural network models and algorithms are appropriate for building pedestrian railroad accident prevention systems. While a drone cruises over a pre-specified path and altitude, the real-time situation around the rail is recorded, and the image information is transmitted back to the server in time. The images are then analyzed to determine whether pedestrians are present around the railroads, and a deceleration order is immediately sent to the train driver, reducing the number of pedestrian railroad accidents. This is the first part of an envisioned drone-based intelligent security system, which can effectively address the problem of an insufficient manual police force.
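
The detect-and-alert workflow summarized above (relay a frame from the drone, check for pedestrians, issue a deceleration order) could be sketched roughly as follows with an off-the-shelf detector; the pretrained Faster R-CNN backbone, the frame path, and the send_deceleration_order() hook are placeholder assumptions, not the authors' system.

```python
# Hedged sketch of a detect-and-alert loop: run a pretrained detector on a
# frame relayed from the drone and, if a person is found, issue a
# (placeholder) deceleration order.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON_LABEL = 1  # "person" index in the COCO label map used by torchvision detectors

def pedestrian_in_frame(frame: Image.Image, score_threshold: float = 0.7) -> bool:
    """Return True if the detector finds at least one person in the frame."""
    with torch.no_grad():
        detections = model([to_tensor(frame)])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if label.item() == PERSON_LABEL and score.item() >= score_threshold:
            return True
    return False

def send_deceleration_order() -> None:
    print("Deceleration order sent to train driver")  # placeholder for the real link

frame = Image.open("drone_frame.jpg")  # hypothetical relayed frame
if pedestrian_in_frame(frame):
    send_deceleration_order()
```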

Semantic Segmentation of Heterogeneous Unmanned Aerial Vehicle Datasets Using Combined Segmentation Network

  • Ahram, Song
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.87-97
    • /
    • 2023
  • Unmanned aerial vehicles (UAVs) can capture high-resolution imagery from a variety of viewing angles and altitudes, but they are generally limited to collecting images of small scenes within larger regions. To improve the utility of UAV datasets for deep learning applications, multiple datasets created from various regions under different conditions are needed. To demonstrate a method for integrating heterogeneous UAV datasets, this paper applies a combined segmentation network (CSN) in which the UAVid and semantic drone datasets share encoding blocks to learn their general features, whereas the decoding blocks are trained separately on each dataset. Experimental results show that the CSN improves the accuracy of specific classes (e.g., cars) that currently account for a low proportion of both datasets. From this result, it is expected that the range of UAV dataset utilization will increase.
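
The shared-encoder / separate-decoder idea behind the combined segmentation network can be sketched in PyTorch as below; the layer sizes and class counts are illustrative assumptions and do not reproduce the paper's architecture.

```python
# Hedged sketch of the CSN idea: both datasets pass through one shared
# encoder, while each dataset keeps its own decoder head.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

class DatasetDecoder(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, feats):
        return self.head(feats)

encoder = SharedEncoder()                      # sees images from both datasets
decoder_uavid = DatasetDecoder(num_classes=8)  # trained only on UAVid labels
decoder_sdd = DatasetDecoder(num_classes=20)   # illustrative count for the semantic drone dataset

x = torch.randn(1, 3, 256, 256)                # dummy UAV image batch
logits_uavid = decoder_uavid(encoder(x))       # shape (1, 8, 256, 256)
```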

Development and Verification of UAV-UGV Hybrid Robot System (드론-지상 하이브리드 로봇 시스템 개발 및 검증)

  • Jongwoon Woo;Jihoon Kim;Changhyun Sung;Byeongwoo Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.3
    • /
    • pp.233-240
    • /
    • 2023
  • In this paper, we propose a hybrid robot that performs surveillance and reconnaissance on the ground and in the air simultaneously. The surveillance and reconnaissance range can be expanded by extending the coverage of the ground robot and moving it quickly to otherwise hidden areas by drone. First, the ground robot is carried to the mission area by the drone and performs surveillance and reconnaissance missions in urban-warfare or mountainous areas. Second, the drone moves the ground robot quickly, transmits the ground robot's surveillance and reconnaissance images to the control system, and performs its own reconnaissance missions at the same time. Finally, to secure the interoperability of this hybrid robot, its basic performance and environmental performance were verified; the evaluation methods were tested and verified based on KS standards.

Research on Digital Construction Site Management Using Drone and Vision Processing Technology (드론 및 비전 프로세싱 기술을 활용한 디지털 건설현장 관리에 대한 연구)

  • Seo, Min Jo;Park, Kyung Kyu;Lee, Seung Been;Kim, Si Uk;Choi, Won Jun;Kim, Chee Kyeung
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2023.11a
    • /
    • pp.239-240
    • /
    • 2023
  • Construction site management involves overseeing tasks from the construction phase to the maintenance stage, and digitalization of construction sites is necessary for digital construction site management. In this study, we aim to conduct research on object recognition at construction sites using drones. Images of construction sites captured by drones are reconstructed into BIM (Building Information Modeling) models, and objects are recognized after partially rendering the models using artificial intelligence. For the photorealistic rendering of the BIM models, both traditional filtering techniques and the generative adversarial network (GAN) model were used, while the YOLO (You Only Look Once) model was employed for object recognition. This study is expected to provide insights into the research direction of digital construction site management and help assess the potential and future value of introducing artificial intelligence in the construction industry.
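
A minimal sketch of the YOLO-based recognition step mentioned in this abstract is shown below; the ultralytics package, the generic pretrained weights, and the rendered-view filename are assumptions, since the paper does not specify its implementation.

```python
# Hedged sketch: run a YOLO detector over a view rendered from the
# construction-site model and print the recognized objects.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained checkpoint, assumed for illustration

# "rendered_view.png" stands in for an image rendered from the BIM model.
results = model("rendered_view.png")

for result in results:
    for box in result.boxes:
        class_name = model.names[int(box.cls)]
        confidence = float(box.conf)
        print(f"{class_name}: {confidence:.2f}, xyxy={box.xyxy.tolist()}")
```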


Research on the Design of Drone System for Field Support Using AR Smart Glasses Technology (AR스마트안경 기술을 접목한 현장 지원용 드론(Drone)시스템 설계에 대한 연구)

  • Lee, Kyung-Hwan;Jeong, Jin-Kuk;Ryu, Gab-Sang
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.4
    • /
    • pp.27-32
    • /
    • 2020
  • High-resolution images taken by drones are being used for a variety of purposes, including monitoring. The management of agricultural facilities, however, still relies mostly on human survey methods. Surveying agricultural facilities, their condition, and their surrounding environment involves legal and environmental constraints, and some areas are inaccessible to humans. In addition, in areas where information such as 3D maps and satellite maps is outdated or not provided, human investigation is unavoidable, and a lot of time and money are spent. The purpose of this research is to design and develop a drone system for field support incorporating AR smart glasses technology for the maintenance and management of agricultural facilities, improving on the difficulties of using existing drones. We also suggest ways to secure the safety of personal information in order to address the damage caused by exposure of personal information that may occur through video recording.

Discriminant analysis to detect fire blight infection on pear trees using RGB imagery obtained by a rotary wing drone

  • Kim, Hyun-Jung;Noh, Hyun-Kwon;Kang, Tae-Hwan
    • Korean Journal of Agricultural Science
    • /
    • v.47 no.2
    • /
    • pp.349-360
    • /
    • 2020
  • Fire blight is a contagious disease affecting apples, pears, and some other members of the family Rosaceae. Because of its extremely strong infectivity, once an orchard is confirmed to be infected, all of the trees in orchards located within 100 m must be buried, and cultivation of any fruit trees on the site is prohibited for 5 years. In South Korea, fire blight was confirmed for the first time in the Ansung area in 2015, and infections are still being identified every year. Traditional approaches to detecting fire blight are expensive and time-consuming; in addition, the inspectors themselves can transmit the pathogen. Thus, it is necessary to develop a remote, unmanned monitoring system for fire blight to prevent the spread of the disease. This study was conducted to detect fire blight on pear trees using discriminant analysis with color information collected from a rotary-wing drone. Images of the infected trees were obtained at a pear orchard in Cheonan using an RGB camera attached to a rotary-wing drone at an altitude of 4 m, and also using a smartphone RGB camera on the ground. RGB and Lab color spaces and discriminant analysis were used to develop the image processing algorithm. As a result, the proposed method had an accuracy of approximately 75%, although the system still has many flaws that need to be improved.
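
A rough sketch of color-based discriminant analysis in the spirit of this abstract is given below; the synthetic training pixels, the feature layout (stacked RGB and Lab values), and the scikit-learn/scikit-image calls are illustrative assumptions, not the authors' dataset or code.

```python
# Hedged sketch: represent pixels by their RGB and Lab values and use a
# linear discriminant classifier to separate infected from healthy tissue.
import numpy as np
from skimage.color import rgb2lab
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def color_features(rgb_pixels: np.ndarray) -> np.ndarray:
    """Stack RGB and Lab values (6 features) for an (N, 3) array of RGB pixels in [0, 1]."""
    lab = rgb2lab(rgb_pixels.reshape(-1, 1, 3)).reshape(-1, 3)
    return np.hstack([rgb_pixels, lab])

# Placeholder training pixels: rows are RGB triples; label 0 = healthy, 1 = infected.
rng = np.random.default_rng(0)
healthy = rng.uniform([0.1, 0.4, 0.1], [0.3, 0.8, 0.3], size=(200, 3))    # greenish foliage
infected = rng.uniform([0.3, 0.1, 0.05], [0.6, 0.3, 0.2], size=(200, 3))  # brownish lesions
X = color_features(np.vstack([healthy, infected]))
y = np.concatenate([np.zeros(200), np.ones(200)])

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(color_features(np.array([[0.45, 0.2, 0.1]]))))  # likely class 1 (infected)
```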

Gimbal System Control for Drone for 3D Image (입체영상 촬영을 위한 드론용 짐벌시스템 제어)

  • Kim, Min;Byun, Gi-Sig;Kim, Gwan-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.11
    • /
    • pp.2107-2112
    • /
    • 2016
  • This paper develops a gimbal control stabilizer for drones (gimbal system control for 3D imaging) to ensure clear images in the shaking and vibrating environment of a drone system. The stabilizer consists of mechanisms that hold the camera modules and IMU (Inertial Measurement Unit) sensor modules at precise angles and block vibrations from outside the camera modules. It is difficult for the camera modules to obtain clear images because of the irregular movements and various vibrations produced by a flying drone. Moreover, a general PID controller used to control the rolling, pitching, and yawing movements against vibrations of various frequencies often needs its PID control parameters readjusted. Therefore, this paper applies an intelligent PID controller and designs the gimbal control stabilizer to obtain clear images and to mitigate the irregular movement and vibration problems described above.
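
The PID loop at the core of such a gimbal stabilizer can be sketched as below; the gains, axis, and update rate are illustrative, and the intelligent (self-tuning) extension the paper describes is only hinted at by keeping the gains as adjustable attributes.

```python
# Hedged sketch of a single-axis PID loop: drive the measured camera angle
# (e.g. from the IMU) toward a target angle.
class PIDController:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd   # adjustable gains (re-tuned online in an intelligent PID)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_angle: float, measured_angle: float, dt: float) -> float:
        """Return a corrective command for one gimbal axis (roll, pitch, or yaw)."""
        error = target_angle - measured_angle
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold the roll axis level (0 deg) while the airframe oscillates.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1)    # illustrative gains only
measured_roll = 5.0                             # deg, e.g. read from the IMU
command = pid.update(target_angle=0.0, measured_angle=measured_roll, dt=0.01)
print(command)
```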

A Study on Automatic Vehicle Extraction within Drone Image Bounding Box Using Unsupervised SVM Classification Technique (무감독 SVM 분류 기법을 통한 드론 영상 경계 박스 내 차량 자동 추출 연구)

  • Junho Yeom
    • Land and Housing Review
    • /
    • v.14 no.4
    • /
    • pp.95-102
    • /
    • 2023
  • Numerous investigations have explored the integration of machine learning algorithms with high-resolution drone images for object detection in urban settings. However, a prevalent limitation in vehicle extraction studies is the reliance on bounding boxes rather than instance segmentation, which hinders the precise determination of vehicle direction and exact boundaries. Instance segmentation, while providing detailed object boundaries, requires labour-intensive labelling of individual objects, prompting the need for research on automating unsupervised instance segmentation for vehicle extraction. In this study, a novel approach was proposed for vehicle extraction using unsupervised SVM classification applied to vehicle bounding boxes in drone images. The method aims to address the challenges associated with bounding box-based approaches and provide a more accurate representation of vehicle boundaries. The study showed promising results, demonstrating 89% accuracy in vehicle extraction. Notably, the proposed technique proved effective even when dealing with significant variations in spectral characteristics within the vehicles. This research contributes to advancing the field by offering a viable solution for automatic and unsupervised instance segmentation in the context of vehicle extraction from images.
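
One plausible reading of unsupervised SVM classification inside a bounding box is sketched below: pseudo-labels are taken from the box geometry (central pixels as likely vehicle, border pixels as likely background), an SVM is fit on those pixel colors, and every pixel in the box is then classified. This is an illustrative interpretation under stated assumptions, not the paper's exact procedure.

```python
# Hedged sketch: geometry-derived pseudo-labels + SVM to segment a vehicle
# inside a bounding-box crop without manual labels.
import numpy as np
from sklearn.svm import SVC

def extract_vehicle_mask(box_rgb: np.ndarray, margin: int = 5) -> np.ndarray:
    """Return a boolean vehicle mask for an (H, W, 3) RGB bounding-box crop."""
    h, w, _ = box_rgb.shape
    pixels = box_rgb.reshape(-1, 3).astype(float)

    # Pseudo-labels from geometry: border ring -> background (0),
    # central window -> vehicle (1), everything else -> unlabeled (-1).
    label_img = np.full((h, w), -1)
    label_img[:margin, :] = 0
    label_img[-margin:, :] = 0
    label_img[:, :margin] = 0
    label_img[:, -margin:] = 0
    label_img[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3] = 1
    labels = label_img.reshape(-1)

    train = labels >= 0
    svm = SVC(kernel="rbf", gamma="scale").fit(pixels[train], labels[train])
    return svm.predict(pixels).reshape(h, w).astype(bool)

# Synthetic example: dark vehicle body centred on brighter asphalt.
crop = np.full((40, 60, 3), 180, dtype=np.uint8)
crop[10:30, 15:45] = (40, 40, 60)
mask = extract_vehicle_mask(crop)
print(mask.sum(), "pixels classified as vehicle")
```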

Performance Comparison of CNN-Based Image Classification Models for Drone Identification System (드론 식별 시스템을 위한 합성곱 신경망 기반 이미지 분류 모델 성능 비교)

  • YeongWan Kim;DaeKyun Cho;GunWoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.639-644
    • /
    • 2024
  • Recent developments in the use of drones on battlefields, extending beyond reconnaissance to firepower support, have greatly increased the importance of technologies for early automatic drone identification. In this study, to identify an effective image classification model that can distinguish drones from other aerial targets of similar size and appearance, such as birds and balloons, we utilized a dataset of 3,600 images collected from the internet. We adopted a transfer learning approach that combines the feature extraction capabilities of three pre-trained convolutional neural network models (VGG16, ResNet50, InceptionV3) with an additional classifier. Specifically, we conducted a comparative analysis of the performance of these three pre-trained models to determine the most effective one. The results showed that the InceptionV3 model achieved the highest accuracy at 99.66%. This research represents a new endeavor in utilizing existing convolutional neural network models and transfer learning for drone identification, which is expected to make a significant contribution to the advancement of drone identification technologies.
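
A minimal transfer-learning sketch of the setup this abstract describes is shown below, using a frozen InceptionV3 backbone (the best performer reported) with a small classifier head; the input size, head layout, class list, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: frozen pretrained backbone + new classifier head for
# distinguishing drones from visually similar aerial targets.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 3  # e.g. drone, bird, balloon (assumed class list)

backbone = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
backbone.trainable = False  # reuse ImageNet features, train only the new head

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would use an image dataset, for example:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "aerial_targets/", image_size=(299, 299), label_mode="categorical")
# model.fit(train_ds, epochs=10)
```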