• Title/Summary/Keyword: Drone image


Development of Face Recognition System based on Real-time Mini Drone Camera Images (실시간 미니드론 카메라 영상을 기반으로 한 얼굴 인식 시스템 개발)

  • Kim, Sung-Ho
    • Journal of Convergence for Information Technology / v.9 no.12 / pp.17-23 / 2019
  • In this paper, I propose a system development methodology that receives images in real time from a camera attached to a mini drone while the drone is being controlled, and recognizes and confirms the face of a specific person. OpenCV, Python-related libraries, and the drone SDK are used to develop the system. To increase the face recognition rate for a specific person in real-time drone images, the system uses a deep-learning-based facial recognition algorithm, in particular the triplet (triplet-loss) principle. To check the performance of the system, 30 face recognition experiments based on the author's face showed a recognition rate of about 95% or higher. The research results of this paper are expected to be usable for quickly finding a specific person with a drone at tourist sites and festival venues.
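
Below is a minimal, hypothetical sketch of the recognition step described in this abstract, assuming the open-source `face_recognition` and `opencv-python` packages rather than the author's actual code; the stream URL and reference image path are placeholders standing in for the drone SDK's video feed.

```python
# Sketch only: match faces in a video stream against a reference face using
# deep-learning embeddings, in the spirit of the triplet-based approach above.
import cv2
import face_recognition

# Reference embedding of the person to find (path is hypothetical).
ref_image = face_recognition.load_image_file("target_person.jpg")
ref_encoding = face_recognition.face_encodings(ref_image)[0]

# Stand-in for the drone camera stream delivered by the drone SDK.
cap = cv2.VideoCapture("udp://0.0.0.0:11111")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)              # detect faces
    encodings = face_recognition.face_encodings(rgb, locations)   # 128-d embeddings
    for (top, right, bottom, left), enc in zip(locations, encodings):
        # Small embedding distance => same person.
        match = face_recognition.compare_faces([ref_encoding], enc, tolerance=0.6)[0]
        color = (0, 255, 0) if match else (0, 0, 255)
        cv2.rectangle(frame, (left, top), (right, bottom), color, 2)
    cv2.imshow("drone feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```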

Deep Learning Based Drone Detection and Classification (딥러닝 기반 드론 검출 및 분류)

  • Yi, Keon Young;Kyeong, Deokhwan;Seo, Kisung
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.2 / pp.359-363 / 2019
  • As commercial drones have come into wide use, concerns about collisions with people and intrusion into secured properties are emerging. Drone detection is a challenging problem. Deep-learning-based object detection techniques have been applied to detecting drones, but only in limited cases such as distinguishing drones from birds and/or the background. We attempt not only detection of drones but also classification of different drone types with an end-to-end model. YOLOv2 is used as the object detection model. To supplement the insufficient data obtained by photographing drones, data augmentation is applied to the collected images, and transfer learning from ImageNet is performed for the YOLOv2 Darknet framework. The experimental results for drone detection are compared and analysed in terms of average IoU and recall.
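
Since the abstract reports average IoU and recall, the following sketch shows how those two detection metrics can be computed from predicted and ground-truth boxes; the boxes below are made-up examples, not the paper's data.

```python
# Sketch only: average IoU and recall for drone detections. Boxes are (x1, y1, x2, y2).
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def evaluate(detections, ground_truths, iou_threshold=0.5):
    """Greedy one-to-one matching of detections to ground truths."""
    ious, matched = [], set()
    for det in detections:
        best, best_gt = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            v = iou(det, gt)
            if v > best:
                best, best_gt = v, i
        if best_gt is not None and best >= iou_threshold:
            matched.add(best_gt)
            ious.append(best)
    recall = len(matched) / len(ground_truths) if ground_truths else 0.0
    avg_iou = float(np.mean(ious)) if ious else 0.0
    return avg_iou, recall

dets = [(48, 30, 112, 90), (200, 150, 260, 210)]                 # hypothetical detector outputs
gts = [(50, 32, 110, 92), (205, 148, 258, 212), (300, 40, 340, 80)]
print(evaluate(dets, gts))                                        # -> (average IoU, recall)
```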

Study on Design of Two-Axis Image Stabilization Controller through Drone Flight Test Data Standardization

  • Jeongwon, Kim;Gyuchan, Lee;Dong-gi, Kwag
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.470-477 / 2022
  • EOTS for drones is opening a new dimension of market expansion into detection and recognition areas previously occupied by artificial satellites. A two-axis EOTS for drones must control the vibration and disturbance generated by the drone during a mission so that the EOTS can accurately track its target. Because the vibration generated by the drone is transmitted to the EOTS, it is essential to develop a stabilization controller that attenuates this transmitted vibration so that the EOTS can maintain its viewing angle. This requires standardizing the drone disturbance and, on that basis, verifying the performance of an EOTS disturbance-attenuation controller optimized for the disturbance level. In this paper, a method of standardizing the drone disturbance applied to the EOTS is studied; using it, an EOTS controller simulation is performed and the structure of the stabilization controller is selected and designed.
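
As a rough illustration of the kind of disturbance-rejection simulation the abstract describes (one gimbal axis shown; the second axis is analogous), the sketch below runs a simple PID loop against an assumed sinusoidal vibration torque. The inertia, gains, and disturbance values are invented for illustration, not the paper's standardized disturbance.

```python
# Illustrative sketch only: discrete-time PID loop attenuating a sinusoidal
# disturbance torque on one gimbal axis.
import numpy as np

dt, T = 0.001, 2.0                 # time step [s], duration [s]
J = 0.002                          # axis inertia [kg*m^2] (assumed)
kp, ki, kd = 8.0, 40.0, 0.15       # PID gains (assumed)

theta, omega, integ, prev_err = 0.0, 0.0, 0.0, 0.0
history = []
for k in range(int(T / dt)):
    t = k * dt
    disturbance = 0.02 * np.sin(2 * np.pi * 25 * t)   # 25 Hz vibration torque [N*m] (assumed)
    err = 0.0 - theta                                  # hold line of sight at 0 rad
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integ + kd * deriv             # control torque
    prev_err = err
    omega += (u + disturbance) / J * dt                # rigid-body axis dynamics
    theta += omega * dt
    history.append(theta)

print(f"max pointing error: {max(abs(x) for x in history):.5f} rad")
```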

Automatic Geo-referencing of Sequential Drone Images Using Linear Features and Distinct Points (선형과 특징점을 이용한 연속적인 드론영상의 자동기하보정)

  • Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.1 / pp.19-28 / 2019
  • Images captured by drones have the advantage of quickly constructing spatial information over small areas and are applied in fields that require rapid decision making. If an image registration technique that can automatically register a drone image onto an ortho-image in the ground coordinate system is applied, the image can be used for various analyses. In this study, a methodology is proposed for geo-referencing a single image and sequential images taken by drones using linear features and distinct points, even when the images differ in spatio-temporal resolution. Linear features are first used to determine the projective transformation parameters for the initial geo-referencing between images, and the geo-referencing is then completed through template matching on distinct points extracted from the images. Experimental results showed that geo-referencing accuracy was high in areas where relief displacement of the terrain was small, whereas some quantitative errors occurred in areas where the terrain changed considerably. Even so, the geo-referencing results for the sequential images were judged to be fully usable for qualitative analysis.
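
The two-step idea above (projective transformation from linear features, then refinement by template matching on distinct points) can be approximated with OpenCV as in the sketch below; file names, correspondences, and the chosen distinct point are placeholders, and this is not the authors' pipeline.

```python
# Sketch only: initial geo-referencing via a projective (homography) transform,
# followed by template matching to refine a distinct point's location.
import cv2
import numpy as np

drone_img = cv2.imread("drone_frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder file
ortho_img = cv2.imread("ortho_image.png", cv2.IMREAD_GRAYSCALE)    # placeholder file

# Correspondences, e.g. intersections of matched linear features (placeholder values).
src_pts = np.float32([[120, 80], [640, 95], [610, 470], [140, 455]])
dst_pts = np.float32([[300, 210], [820, 230], [790, 615], [325, 600]])

# Initial geo-referencing: projective transformation between drone image and ortho-image.
H, _ = cv2.findHomography(src_pts, dst_pts)
warped = cv2.warpPerspective(drone_img, H, (ortho_img.shape[1], ortho_img.shape[0]))

# Refinement: locate a distinct point's template from the warped image in the ortho-image.
y, x, half = 300, 400, 16                      # placeholder distinct point and patch size
template = warped[y - half:y + half, x - half:x + half]
result = cv2.matchTemplate(ortho_img, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("refined match at", (max_loc[0] + half, max_loc[1] + half), "score", max_val)
```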

Sensor Fusion Docking System of Drone and Ground Vehicles Using Image Object Detection (영상 객체 검출을 이용한 드론과 지상로봇의 센서 융합 도킹 시스템)

  • Beck, Jong-Hwan;Park, Hee-Su;Oh, Se-Ryeong;Shin, Ji-Hun;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.217-222 / 2017
  • Recent studies of robots for work in dangerous places have focused on large unmanned ground vehicles or four-legged robots, which offer long operating times, but these are difficult to apply in real hazardous fields that require real-time operation, high mobility, and delicate manipulation. This research presents a collaborative docking system for a drone and a ground vehicle that combines an image processing algorithm with laser sensors for effective detection of docking markers, and is thus capable of both long-distance movement and very delicate work. We propose a sensor-fusion docking system for the drone and ground vehicle, together with two template matching methods appropriate for this application. The system showed a 95% docking success rate over 50 docking attempts.
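
The abstract does not give the two proposed template matching methods, so the sketch below only illustrates the general idea: locate a docking marker by normalized cross-correlation and derive a pixel offset for guidance. Paths and the confidence threshold are assumptions.

```python
# Sketch only: marker detection by template matching and offset computation for docking.
import cv2

frame = cv2.imread("ground_vehicle_view.png")        # camera frame (placeholder)
marker = cv2.imread("docking_marker_template.png")   # marker template (placeholder)

result = cv2.matchTemplate(frame, marker, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.8:                                      # assumed confidence threshold
    h, w = marker.shape[:2]
    marker_cx = top_left[0] + w // 2
    marker_cy = top_left[1] + h // 2
    frame_cx, frame_cy = frame.shape[1] // 2, frame.shape[0] // 2
    # Offset (dx, dy) would be fed to the drone's lateral position controller.
    dx, dy = marker_cx - frame_cx, marker_cy - frame_cy
    print(f"marker found (score={score:.2f}), offset=({dx}, {dy}) px")
else:
    print("marker not found")
```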

Performance Comparison of CNN-Based Image Classification Models for Drone Identification System (드론 식별 시스템을 위한 합성곱 신경망 기반 이미지 분류 모델 성능 비교)

  • YeongWan Kim;DaeKyun Cho;GunWoo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.4 / pp.639-644 / 2024
  • Recent developments in the use of drones on battlefields, extending beyond reconnaissance to firepower support, have greatly increased the importance of technologies for early automatic drone identification. In this study, to identify an effective image classification model that can distinguish drones from other aerial targets of similar size and appearance, such as birds and balloons, we utilized a dataset of 3,600 images collected from the internet. We adopted a transfer learning approach that combines the feature extraction capabilities of three pre-trained convolutional neural network models (VGG16, ResNet50, InceptionV3) with an additional classifier. Specifically, we conducted a comparative analysis of the performance of these three pre-trained models to determine the most effective one. The results showed that the InceptionV3 model achieved the highest accuracy at 99.66%. This research represents a new endeavor in utilizing existing convolutional neural network models and transfer learning for drone identification, which is expected to make a significant contribution to the advancement of drone identification technologies.
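
A minimal sketch of the transfer-learning setup described above (pre-trained backbone plus a new classifier head), assuming TensorFlow/Keras; the classifier layers, dataset paths, and training settings are illustrative, not the authors' exact configuration.

```python
# Sketch only: frozen InceptionV3 feature extractor with an added classifier.
import tensorflow as tf

NUM_CLASSES = 3   # e.g. drone / bird / balloon

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                       # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(299, 299, 3)),
    tf.keras.layers.Lambda(tf.keras.applications.inception_v3.preprocess_input),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training data would come from the image folders, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "drone_dataset/train", image_size=(299, 299), batch_size=32)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same script can be rerun with VGG16 or ResNet50 as the backbone to reproduce the kind of comparison the paper performs.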

How to Acquire the Evidence Capability of Video Images Taken by Drone (드론으로 촬영한 영상물의 증거능력 확보방안)

  • Kim, Yong-Jin;Song, Jae-Keun;Lee, Gyu-An
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.1 / pp.163-168 / 2018
  • With the advent of the fourth industrial revolution, the use of drones has been spreading rapidly in various fields, and drones will now be used extensively in the area of criminal investigation. Whereas crime scene photographs have so far remained 2D digital images, it is becoming possible not only to reproduce 3D images but also to recreate a crime scene with a 3D printer. First, video images taken by an investigative agency using drones are digital image evidence, and the requirements for securing their evidence capability are no different from the conditions for authenticating other digital evidence. However, as drones become a new area of scientific investigation, it is essential to systematize how the authenticity of drone-captured images is established so that they can be used as evidence. In this paper, I propose a method to secure the evidence capability (admissibility) of digital images taken by drones.

Comparison and analysis of spatial information measurement values of specialized software in drone triangulation (드론 삼각측량에서 전문 소프트웨어의 공간정보 정확도 비교 분석)

  • Park, Dong Joo;Choi, Yeonsung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.4 / pp.249-256 / 2022
  • In drone photogrammetry, Metashape, Pix4D Mapper, ContextCapture, and the "pixel to point tool" module of Global Mapper GIS, a simpler software package, are widely used. Each software package has its own logic for aerial triangulation analysis, but from the user's point of view it is necessary to choose a package by comparatively analyzing the geospatial coordinate values it produces. Aerial photos were taken for drone photogrammetry, GCP reference points were surveyed by VRS-GPS, and the acquired data were processed with each software package to construct an ortho-image and DSM. The coordinates (X, Y) of the center of each GCP target on the ortho-image and the elevation (EL) of each GCP point from the DSM were then compared with the GCP survey results. According to the Public Surveying Work Regulations, the results of every software package were within the allowable error, so whichever package is used, there is no problem meeting the regulations.
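
A small sketch of the comparison step, with made-up coordinates: residuals between each package's GCP-derived values and the VRS-GPS survey are computed and checked against an assumed tolerance (the actual limits come from the Public Surveying Work Regulations).

```python
# Sketch only: compare software-derived GCP coordinates and elevations against survey values.
import numpy as np

# Surveyed GCP values: columns X [m], Y [m], EL [m] (placeholder values).
survey = np.array([[200100.12, 450200.45, 35.21],
                   [200180.77, 450310.02, 36.05],
                   [200250.31, 450150.88, 34.90]])

# One result set per software package (placeholder values).
results = {
    "Metashape":    survey + [[0.03, -0.02, 0.05], [0.01, 0.04, -0.03], [-0.02, 0.02, 0.04]],
    "Pix4D Mapper": survey + [[-0.04, 0.03, -0.06], [0.02, -0.01, 0.05], [0.03, 0.02, -0.02]],
}

allowable_xy, allowable_el = 0.10, 0.15   # assumed tolerances [m], not the regulation values

for name, coords in results.items():
    res = coords - survey
    rmse_xy = np.sqrt(np.mean(np.sum(res[:, :2] ** 2, axis=1)))
    rmse_el = np.sqrt(np.mean(res[:, 2] ** 2))
    ok = np.all(np.abs(res[:, :2]) <= allowable_xy) and np.all(np.abs(res[:, 2]) <= allowable_el)
    print(f"{name}: RMSE_XY={rmse_xy:.3f} m, RMSE_EL={rmse_el:.3f} m, within tolerance: {ok}")
```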

Drone Flight Path for Counteracting Industrial Disasters (산업 재해 대응 드론 비행경로 설정 방법)

  • Choo, Sang-Mok;Chong, Ui-Pil;Lee, Jung-Chul
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.2 / pp.132-137 / 2017
  • Drones are currently used in a wide range of application areas in everyday life and are performing increasingly important functions. We propose a drone operation system for the prevention of industrial disasters. In normal operation, the drone monitors the industrial site along the planned flight path, acquires images, and sends the image information to the server. The server compares the images with database information by calculating a similarity score against a threshold, and the system then decides whether the industrial site has a problem. If an abnormal condition occurs, the drone switches to an abnormal-condition flight path, keeps monitoring the site while measuring the air status with its sensors, and sends all information to the server system on the ground. If an emergency occurs, the drone approaches the position closest to the accident point, acquires all available information, and sends it to the server and the 119 emergency center.
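
The abstract does not specify the similarity measure, so the sketch below uses OpenCV histogram correlation as one plausible way to compare a monitored image against a database reference and flag an abnormal condition; the threshold and file paths are assumptions.

```python
# Sketch only: threshold-based anomaly check via image histogram similarity.
import cv2

def similarity(img_a, img_b):
    """Correlation between HSV hue/saturation histograms of two images."""
    hists = []
    for img in (img_a, img_b):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(h, h)
        hists.append(h)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

THRESHOLD = 0.90                               # assumed anomaly threshold

current = cv2.imread("monitored_site.png")     # image from the drone (placeholder)
reference = cv2.imread("db_reference.png")     # stored DB image (placeholder)

score = similarity(current, reference)
if score < THRESHOLD:
    print(f"abnormal condition suspected (similarity={score:.2f}): switch flight path")
else:
    print(f"site normal (similarity={score:.2f})")
```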

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning / v.30 no.1 / pp.57-66 / 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams using the combination of drone remote sensing and support vector machine (SVM) techniques. The chosen study area was the Idong stream, nestled within Geosan-gun, Chunbuk, South Korea. The initial stage involved image acquisition through a fixed-wing drone named ebee. This drone carried two sensors: the S.O.D.A visible camera for capturing detailed visuals and the Sequoia+ multispectral sensor for gathering rich spectral data. The survey captured the stream's features on August 18, 2023. Leveraging the multispectral images, a range of vegetation indices were calculated. These included the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI) that factors in soil background, and the normalized difference water index (NDWI) for identifying water bodies. The third stage saw the development of an SVM model based on the calculated vegetation indices. The RBF kernel was chosen for the SVM, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows: (a) High-Resolution Imaging: The drone-based image acquisition delivered high-resolution images (1 cm/pixel) of the Idong stream. These detailed visuals effectively captured the stream's morphology, including its width, variations in the streambed, and the intricate vegetation cover patterns adorning the stream banks and bed. (b) Vegetation Insights through Indices: The calculated vegetation indices revealed distinct spatial patterns in vegetation cover and moisture content. NDVI emerged as the strongest indicator of vegetation cover, while SAVI and NDWI provided insights into moisture variations. (c) Accurate Classification with SVM: The SVM model, fueled by the combination of NDVI, SAVI, and NDWI, achieved an outstanding accuracy of 0.903, calculated from the confusion matrix. This performance translated into precise classification of vegetation, soil, and water within the stream area. The study's findings demonstrate the effectiveness of drone remote sensing and SVM techniques in developing accurate vegetation cover classification models for small streams. These models hold immense potential for various applications, including stream monitoring, informed management practices, and effective stream restoration efforts. By incorporating imagery and additional details about the specific drone and sensor technology, we can gain a deeper understanding of small streams and develop effective strategies for stream protection and management.
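
A minimal sketch of the index computation and RBF-kernel SVM step, assuming scikit-learn; the band values, labels, and hyperparameter grid are placeholders rather than the study's data.

```python
# Sketch only: compute NDVI/SAVI/NDWI features and fit an RBF-kernel SVM with a grid search.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for per-pixel reflectance bands from the multispectral sensor.
n = 3000
red, green, nir = rng.random(n), rng.random(n), rng.random(n)

# Vegetation / water indices used as SVM features (standard formulas).
L = 0.5
ndvi = (nir - red) / (nir + red + 1e-9)
savi = (nir - red) / (nir + red + L) * (1 + L)
ndwi = (green - nir) / (green + nir + 1e-9)
X = np.column_stack([ndvi, savi, ndwi])

# Placeholder labels: 0 = vegetation, 1 = soil, 2 = water (the study's classes).
y = rng.integers(0, 3, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with a grid search over the C and gamma hyperparameters.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=3)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test accuracy:", grid.best_estimator_.score(X_test, y_test))
```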