Title/Summary/Keyword: Drone image

A Study on Automatic Vehicle Extraction within Drone Image Bounding Box Using Unsupervised SVM Classification Technique (무감독 SVM 분류 기법을 통한 드론 영상 경계 박스 내 차량 자동 추출 연구)

  • Junho Yeom
    • Land and Housing Review, v.14 no.4, pp.95-102, 2023
  • Numerous investigations have explored the integration of machine learning algorithms with high-resolution drone imagery for object detection in urban settings. However, a prevalent limitation of vehicle extraction studies is their reliance on bounding boxes rather than instance segmentation, which hinders precise determination of vehicle direction and exact boundaries. Instance segmentation, while providing detailed object boundaries, requires labour-intensive labelling of individual objects, prompting research on automating unsupervised instance segmentation for vehicle extraction. In this study, a novel approach was proposed that extracts vehicles by applying unsupervised SVM classification to vehicle bounding boxes in drone images. The method addresses the challenges of bounding-box-based approaches and provides a more accurate representation of vehicle boundaries. The study showed promising results, demonstrating 89% accuracy in vehicle extraction, and the proposed technique proved effective even under significant variations in spectral characteristics within the vehicles. This research advances the field by offering a viable solution for automatic, unsupervised instance segmentation in vehicle extraction from imagery (a rough code sketch of the idea follows below).
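
The paper publishes no code, so the following is only a minimal sketch of one plausible reading of its core idea: generate pseudo-labels inside each detected box without supervision, then let an SVM refine the per-pixel decision. The KMeans labelling step and the border heuristic are illustration-only assumptions, not the authors' procedure; it requires numpy and scikit-learn and expects an OpenCV-style BGR array.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_vehicle_mask(image_bgr, box):
    """Unsupervised vehicle/background split inside one (x, y, w, h) box."""
    x, y, w, h = box
    patch = image_bgr[y:y + h, x:x + w]
    pixels = patch.reshape(-1, 3).astype(np.float32)

    # Step 1 (assumption): 2-cluster KMeans supplies pseudo-labels.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

    # Heuristic: the cluster covering less of the box border is the vehicle.
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    border_labels = labels.reshape(h, w)[border]
    vehicle_cluster = 1 if (border_labels == 1).mean() < 0.5 else 0

    # Step 2: train an SVM on the pseudo-labels and re-classify every pixel.
    svm = SVC(kernel="rbf", gamma="scale")
    svm.fit(pixels, (labels == vehicle_cluster).astype(int))
    return svm.predict(pixels).reshape(h, w).astype(np.uint8) * 255
```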

Generation of Epipolar Image from Drone Image Using Direction Cosine (방향코사인을 이용한 드론영상의 에피폴라 영상제작)

  • Kim, Eui Myoung; Choi, Han Seung; Hong, Song Pyo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.4, pp.271-277, 2018
  • Generating an epipolar image, in which the y-parallax of the original image has been removed, is an essential technique for creating a 3D stereoscopic model or producing a map. Epipolar images can be produced either by estimating the relative orientation parameters after matching distinct points extracted from the two images, or by using the baseline and rotation angles of the two images after determining the exterior orientation parameters. In this study, a methodology was proposed that generates epipolar images using the direction cosines contained in the exterior orientation parameters of the input images, together with a transformation matrix that simplifies the calculation when converting from the original image to the epipolar image. The applicability of the proposed methodology was evaluated using images taken from fixed-wing and rotary-wing drones; epipolar images were generated successfully regardless of the type of drone (a sketch of one such formulation follows below).
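
As a rough illustration only, the sketch below builds a common epipolar rotation from two direction-cosine matrices and warps each frame with the induced homography. The omega-phi-kappa convention, the camera matrix K, and the frame construction are assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def rotation_from_opk(omega, phi, kappa):
    """Direction-cosine matrix from omega-phi-kappa angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def epipolar_pair(img1, img2, K, R1, R2, C1, C2):
    """Warp both frames into a shared epipolar geometry (assumed scheme)."""
    b = (C2 - C1) / np.linalg.norm(C2 - C1)   # baseline direction = new x-axis
    z = (R1[2] + R2[2]) / 2.0                 # mean viewing direction
    y = np.cross(z, b); y /= np.linalg.norm(y)
    R_epi = np.stack([b, y, np.cross(b, y)])  # rows are the new axes
    H1 = K @ R_epi @ R1.T @ np.linalg.inv(K)
    H2 = K @ R_epi @ R2.T @ np.linalg.inv(K)
    h, w = img1.shape[:2]
    return (cv2.warpPerspective(img1, H1, (w, h)),
            cv2.warpPerspective(img2, H2, (w, h)))
```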

Accuracy of Drone Based Stereophotogrammetry in Underground Environments (지하 환경에서의 드론 기반 입체사진측량기법의 정확도 분석)

  • Kim, Jineon; Kang, Il-Seok; Lee, Yong-Ki; Choi, Ji-won; Song, Jae-Joon
    • Explosives and Blasting, v.38 no.3, pp.1-14, 2020
  • Stereophotogrammetry can be used for accurate and fast investigation of the over-break or under-break that may form during the blasting of underground space. When integrated with small unmanned aerial vehicles (UAVs), or drones, stereophotogrammetry can be performed much more efficiently. However, since previous research has mostly focused on surface environments, underground applications of drone-based stereophotogrammetry remain limited and rare. In order to expand its use in underground environments, this study investigated a rock surface of an underground mine through drone-based stereophotogrammetry. The accuracy of the investigation was evaluated and analyzed, proving the method to be accurate in underground environments, and recommendations were proposed for the image acquisition and matching conditions needed for accurate and efficient application of drone-based stereophotogrammetry underground.

Measurement of Construction Material Quantity through Analyzing Images Acquired by Drone And Data Augmentation (드론 영상 분석과 자료 증가 방법을 통한 건설 자재 수량 측정)

  • Moon, Ji-Hwan; Song, Nu-Lee; Choi, Jae-Gab; Park, Jin-Ho; Kim, Gye-Young
    • KIPS Transactions on Software and Data Engineering, v.9 no.1, pp.33-38, 2020
  • This paper proposes a technique for counting construction materials by analyzing images acquired by a drone. The proposed technique uses the drone log, which includes drone and camera information such as yaw and FOV; an RCNN that locates piles of building materials in the image and predicts the material type and pile area; and photogrammetry to count the construction materials. Existing research suffers from large error ranges in material detection and pile-area prediction because of a lack of training data. To reduce these error ranges and improve prediction stability, this paper augments the training data, but uses only rotated copies of the training images so as to prevent overfitting of the model. For the quantity calculation, the drone log, the RCNN predictions, and the formula suggested in the paper are combined to calculate the actual quantity of the material pile (the geometric step is sketched below). The superiority of the proposed method is demonstrated through experiments.
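
The geometric step referenced above can be sketched from the abstract alone: a standard nadir-image approximation converting a detected pile's pixel area to ground area using flying height and FOV from the drone log. This is not necessarily the paper's exact formula, and all values below are placeholders.

```python
import math

def ground_sampling_distance(height_m, fov_deg, image_width_px):
    """Metres per pixel for a nadir shot: swath width / image width."""
    swath = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    return swath / image_width_px

def pile_area_m2(pixel_area, height_m, fov_deg, image_width_px):
    """Ground area covered by a detected region of `pixel_area` pixels."""
    gsd = ground_sampling_distance(height_m, fov_deg, image_width_px)
    return pixel_area * gsd ** 2

# e.g. a 12000-px detection at 50 m altitude, 84-degree FOV, 4000-px frame
print(pile_area_m2(12000, 50.0, 84.0, 4000))  # roughly 6 square metres
```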

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok; So, Hyeong-Yoon; Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX, v.53 no.2, pp.5-18, 2023
  • As the development of drones and sensors accelerates, new services and values are created by fusing data acquired from the various sensors mounted on drones. However, spatial information constructed through data fusion depends mainly on imagery, so data quality is determined by the specification and performance of the hardware, and field use is difficult because expensive equipment is required to construct high-quality spatial information. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired through the RGB and THM cameras mounted on a drone, and quantitative evaluation and feature point extraction were performed on the generated high-resolution images. The experiments showed that the high-resolution images generated by super-resolution maintained the characteristics of the original images and that, as the resolution improved, more features could be extracted than from the originals. Therefore, applying low-resolution images to a super-resolution deep learning model is judged to be a new way to construct high-quality spatial information without being restricted by hardware (the evaluation step is sketched below).
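
Assuming the SR model has already produced an output, the evaluation half of the workflow can be illustrated with standard tools. PSNR/SSIM and SIFT are common choices for this kind of study but are assumptions here, and the file names are placeholders; requires opencv-python and scikit-image.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

low = cv2.imread("drone_low.png")          # placeholder low-res input
sr = cv2.imread("drone_sr.png")            # placeholder SR output
ref = cv2.imread("drone_reference.png")    # placeholder reference image

# Quantitative evaluation against the reference (same dimensions assumed).
print("PSNR:", peak_signal_noise_ratio(ref, sr))
print("SSIM:", structural_similarity(ref, sr, channel_axis=2))

# Feature-count comparison: more keypoints suggests more usable detail.
sift = cv2.SIFT_create()
for name, img in [("low-res", low), ("super-resolved", sr)]:
    kp = sift.detect(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
    print(f"{name}: {len(kp)} SIFT keypoints")
```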

Experimental Optimal Choice of Initial Candidate Inliers of the Feature Pairs with Well-Ordering Property for the Sample Consensus Method in the Stitching of Drone-based Aerial Images

  • Shin, Byeong-Chun; Seo, Jeong-Kweon
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.4, pp.1648-1672, 2020
  • There are several types of image registration for stitching separate images that overlap each other; one of these is feature-based registration using a common feature descriptor. In this study, we generate a mosaic of drone aerial images using feature-based registration with the scale-invariant feature transform (SIFT) descriptor. To verify the authenticity of the feature points and obtain the mapping function, we employ the sample consensus method: exploiting an inherent characteristic of the sensed images, namely the geometric congruence between the feature points of the two images, we propose a novel hypothesis estimation of the stitching's mapping function via optimally chosen initial candidate inliers. Experimental results show the efficiency of the proposed method compared with the benchmark random sample consensus (RANSAC) method; the well-ordering property defined in the paper and the extensive stitching examples support its utility. Moreover, the proposed sample consensus scheme is uncomplicated and robust, and the fatal mis-stitching occasionally produced by RANSAC is remarkably reduced as measured by pixel difference (the baseline pipeline is sketched below).
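
For orientation, the baseline SIFT-plus-RANSAC pipeline that the proposed inlier seeding improves upon looks roughly like the sketch below; the paper's well-ordering-based selection of initial candidate inliers is not reproduced here.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, ratio=0.75):
    """Stitch two overlapping frames with SIFT matches and a RANSAC homography."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Lowe ratio test to keep only distinctive matches.
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img1 into img2's frame and overlay img2 on the shared canvas.
    h, w = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H, (w * 2, h))
    canvas[0:h, 0:w] = img2
    return canvas, int(inliers.sum())
```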

Automatic Extraction of Rescue Requests from Drone Images: Focused on Urban Area Images (드론영상에서 구조요청자 자동추출 방안: 도심지역 촬영영상을 중심으로)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management, v.15 no.3, pp.37-44, 2019
  • In this study, we propose a method for automatically extracting rescue requesters from drone images. Before classification, a central object is extracted from each image using a central object extraction method [7]. A central object in an image is defined as a set of regions located around the center of the image with a significant texture distribution against its surroundings. Artificial objects often exhibit straight-line edges and regular, directive texture, whereas natural objects do not. These characteristics are captured by the edge direction histogram energy and the Gabor texture energy: the former is calculated from the directions of non-circular edges only, and the latter from a 24-dimensional Gabor filter bank, selecting the maximum and minimum energies along each direction in the filter dictionary (a sketch follows below). Finally, rescue requester object areas are extracted using the dominant features of the objects. Through experiments, we obtained an accuracy of more than 75% for extraction using these features.
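
The Gabor energy cue can be sketched as follows, under the assumption that the 24-filter bank is 6 orientations by 4 wavelengths (one plausible reading of "24-dimensional"); the paper's exact filter parameters may differ.

```python
import cv2
import numpy as np

def gabor_energies(gray):
    """Per-orientation (max, min) Gabor response energies for a gray image."""
    energies = []
    for theta in np.arange(0, np.pi, np.pi / 6):       # 6 orientations
        per_scale = []
        for lam in (4, 8, 16, 32):                      # 4 wavelengths
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lam, 0.5)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            per_scale.append(float(np.mean(resp ** 2)))
        # Regular, directive (man-made) texture tends to show a large spread
        # between the strongest and weakest response along a direction.
        energies.append((max(per_scale), min(per_scale)))
    return energies
```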

Standardization Research on Drone Image Metadata in the Agricultural Field (농업분야 드론영상 메타데이터 표준화 연구)

  • Won-Hui Lee; Seung-Hun Bae; Jin Kim; Young Jae Lee; Keo Bae Lim
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.3, pp.293-302, 2023
  • This study examines and proposes standardization approaches to address the heterogeneity of metadata in drone imagery within the agricultural sector. Image metadata comes in various formats depending on the camera manufacturer, with most manufacturers using EXIF and XMP. The metadata of cameras used on fixed-wing and rotary-wing platforms, along with the metadata required by image alignment software, was analyzed for sensors such as the DJI XT2, MicaSense RedEdge-M, and Sentera Double4K. In the agricultural domain, multispectral imagery is crucial for vegetation analysis, making the provision of such imagery essential. Based on the Pix4D software, a comparative analysis of metadata attributes was performed, and the necessary elements were compiled and presented as a proposed standardization draft in the form of tag information (reading these tags is sketched below).
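
As a minimal illustration of where this heterogeneity lives, the sketch below reads standard EXIF tags with Pillow and probes for an embedded XMP packet. The file name is a placeholder, and vendor-specific XMP tags vary by camera.

```python
from PIL import Image, ExifTags

path = "dji_multispectral.tif"  # placeholder file name

# Standard EXIF: tag IDs mapped to human-readable names.
exif = Image.open(path).getexif()
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), value)

# XMP is an XML packet embedded in the raw file bytes; a crude probe:
raw = open(path, "rb").read()
start = raw.find(b"<x:xmpmeta")
end = raw.find(b"</x:xmpmeta>")
if start != -1 and end != -1:
    print(raw[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "ignore"))
```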

Performance Comparison and Analysis between Keypoints Extraction Algorithms using Drone Images (드론 영상을 이용한 특징점 추출 알고리즘 간의 성능 비교)

  • Lee, Chung Ho; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.40 no.2, pp.79-89, 2022
  • Images taken using drones have been applied to fields that require rapid decision-making because they can quickly provide high-quality 3D spatial information for small regions. To construct spatial information based on drone images, it is necessary to determine the relationship between images by extracting keypoints between adjacent drone images and performing image matching. Therefore, in this study, three study regions photographed by drone were selected: a region where parking lots and a lake coexist, a downtown region with buildings, and a field region of natural terrain; the performance of the AKAZE (Accelerated-KAZE), BRISK (Binary Robust Invariant Scalable Keypoints), KAZE, ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale Invariant Feature Transform), and SURF (Speeded Up Robust Features) algorithms was then analyzed. The algorithms were compared in terms of the distribution of extracted keypoints, the distribution of matched points, processing time, and matching accuracy. In the region where the parking lot and lake coexist, the BRISK algorithm was fastest, while the SURF algorithm showed the best keypoint and matched-point distributions and matching accuracy. In the downtown region with buildings, the AKAZE algorithm was fastest and the SURF algorithm again performed best on distribution and matching accuracy. In the field region of natural terrain, the keypoints and matched points of the SURF algorithm were evenly distributed throughout the image, but the AKAZE algorithm showed the highest matching accuracy and the fastest processing (a compact version of such a comparison is sketched below).
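
A compact version of such a comparison can be run with OpenCV alone, as sketched below. SURF is omitted because it is patented and requires opencv-contrib builds, and the frame path is a placeholder.

```python
import time
import cv2

gray = cv2.imread("drone_frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder

detectors = {
    "AKAZE": cv2.AKAZE_create(),
    "BRISK": cv2.BRISK_create(),
    "KAZE":  cv2.KAZE_create(),
    "ORB":   cv2.ORB_create(nfeatures=5000),
    "SIFT":  cv2.SIFT_create(),
}

# Keypoint count and wall-clock time per detector on the same frame.
for name, det in detectors.items():
    t0 = time.perf_counter()
    kp, desc = det.detectAndCompute(gray, None)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(kp)} keypoints in {dt:.3f}s")
```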

Development of Face Recognition System based on Real-time Mini Drone Camera Images (실시간 미니드론 카메라 영상을 기반으로 한 얼굴 인식 시스템 개발)

  • Kim, Sung-Ho
    • Journal of Convergence for Information Technology, v.9 no.12, pp.17-23, 2019
  • In this paper, I propose a system development methodology that receives, in real time, images taken by the camera attached to a mini drone while controlling it, and recognizes and confirms the face of a specific person. OpenCV, Python-related libraries, and the drone SDK are used for the development of this system. To increase the recognition rate for a specific person in real-time drone images, the system uses a deep-learning-based facial recognition algorithm, in particular the triplet principle. To check the performance of the system, 30 face recognition experiments based on the author's face showed a recognition rate of about 95% or higher. The research results of this paper could be used to quickly find a specific person via drone at tourist sites and festival venues (a minimal recognition loop is sketched below).
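
A minimal sketch of such a recognition loop, under stated assumptions: the widely used face_recognition package (whose underlying dlib model was trained with a triplet-style metric loss, in line with the triplet principle mentioned above) stands in for the paper's unspecified implementation, and a UDP video URL stands in for the drone SDK stream.

```python
import cv2
import face_recognition

# Placeholder reference photo of the person to find.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("target_person.jpg"))[0]

cap = cv2.VideoCapture("udp://0.0.0.0:11111")  # e.g. a Tello's video port
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locs = face_recognition.face_locations(rgb)
    for loc, enc in zip(locs, face_recognition.face_encodings(rgb, locs)):
        # Euclidean distance in embedding space decides the match.
        if face_recognition.compare_faces([known], enc, tolerance=0.5)[0]:
            top, right, bottom, left = loc
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.imshow("drone", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```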