• Title/Summary/Keyword: 드론 영상 (drone imagery)

429 search results

Real-time Tele-operated Drone System with LTE Communication (LTE 통신을 이용한 실시간 원격주행 드론 시스템)

  • Kang, Byoung Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.35-40
    • /
    • 2019
  • In this research, we propose a real-time tele-driving system for unmanned drone operation using LTE communication. The drone operator is located 180 km away and controls the altitude and position of the drone with a 50 ms time delay. The motion data and video from the drone are streamed to the operator: the video is played on the operator's head-mounted display (HMD), and the motion data drives a drone emulation on the operator's simulator. In general, a drone is operated using an RF signal, and the maximum distance for direct control is limited to about 2 km. For long-range control beyond 2 km, an automatic flight mode is enabled using a mission plan together with GPS data; in an emergency, the autopilot is stopped and the "return home" function is executed. In this research, an immersive tele-driving system with a 50 ms time delay over LTE communication is proposed for drone operation. A successful test run of the proposed system has already been performed between an operator in Daejeon and a drone in Inje (Gangwon-do), approximately 180 km apart.
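
The paper reports a 50 ms control delay over LTE but does not describe its software stack. Below is a minimal, hypothetical sketch of how one might measure round-trip control latency over a UDP link; the endpoint address, port, and packet format are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical round-trip latency probe for a tele-operation link.
# The endpoint address, port, and payload format are illustrative only.
import socket, struct, time

DRONE_ADDR = ("192.0.2.10", 9000)   # assumed echo endpoint on the drone side

def measure_rtt(samples=20, timeout=1.0):
    """Send timestamped probes and return the mean round-trip time in ms."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    for seq in range(samples):
        t_send = time.perf_counter()
        sock.sendto(struct.pack("!Id", seq, t_send), DRONE_ADDR)
        try:
            data, _ = sock.recvfrom(64)          # drone side echoes the probe back
        except socket.timeout:
            continue                             # drop lost probes
        echoed_seq, _ = struct.unpack("!Id", data[:12])
        if echoed_seq == seq:
            rtts.append((time.perf_counter() - t_send) * 1000.0)
    sock.close()
    return sum(rtts) / len(rtts) if rtts else float("nan")

if __name__ == "__main__":
    print(f"mean RTT: {measure_rtt():.1f} ms")
```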

Machine learning based radar imaging algorithm for drone detection and classification (드론 탐지 및 분류를 위한 레이다 영상 기계학습 활용)

  • Moon, Min-Jung;Lee, Woo-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.5
    • /
    • pp.619-627
    • /
    • 2021
  • Recent advances in low-cost, light-weight drones have extended their application areas in both the military and private sectors. Accordingly, surveillance against unfriendly drones has become an important issue, and drone detection and classification techniques have long been emphasized in order to prevent attacks or accidents by commercial drones in urban areas. Most commercial drones are small and have low radar reflectivity, so typical sensors that use acoustic, infrared, or radar signals exhibit limited performance. Recently, artificial intelligence algorithms have been actively exploited to enhance radar image identification performance. In this paper, we adopt a machine learning algorithm for high-resolution radar imaging in drone detection and classification applications. For this purpose, simulations are carried out against commercial drone models and compared with experimental data obtained through high-resolution radar field tests.
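
The abstract does not specify the classifier architecture, so the sketch below is only a generic convolutional classifier for small radar image chips, illustrating the kind of machine-learning pipeline described; the input size, class count, and layer sizes are assumptions.

```python
# Illustrative CNN for classifying small radar image chips by drone type.
# Input size (64x64, single channel) and the four-class output are assumptions.
import torch
import torch.nn as nn

class RadarChipClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: one training step on a random batch standing in for radar chips.
model = RadarChipClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
chips = torch.randn(8, 1, 64, 64)          # placeholder radar image chips
labels = torch.randint(0, 4, (8,))         # placeholder drone-type labels
loss = loss_fn(model(chips), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```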

The Optimal GSD and Image Size for Deep Learning Semantic Segmentation Training of Drone Images of Winter Vegetables (드론 영상으로부터 월동 작물 분류를 위한 의미론적 분할 딥러닝 모델 학습 최적 공간 해상도와 영상 크기 선정)

  • Chung, Dongki;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1573-1587
    • /
    • 2021
  • A drone image is an ultra-high-resolution image whose spatial resolution is several to tens of times higher than that of a satellite or aerial image. Therefore, drone-image-based remote sensing differs from traditional remote sensing in the level of objects to be extracted from the image and in the amount of data to be processed. In addition, the optimal scale and size of the data used for model training differ depending on the characteristics of the applied deep learning model. However, most studies do not consider the size of the object to be found in the image or the spatial resolution of the image that reflects the scale, and in many cases the data specification previously used in the model is applied as-is. In this study, the effect of the spatial resolution and image size of drone images on the accuracy and training time of a semantic segmentation deep learning model for six winter vegetables was quantitatively analyzed through experiments. The experiments showed that the average segmentation accuracy for the six winter vegetables increases as the spatial resolution increases, but the rate of increase and the convergence interval differ by crop, and at the same resolution there is a large difference in accuracy and training time depending on the image size. In particular, the optimal resolution and image size were found to differ for each crop. The research results can be used as reference data for efficient drone image acquisition and training data production when developing a winter vegetable segmentation model using drone images.
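
As a purely illustrative aid to the GSD discussion, the sketch below computes ground sample distance from assumed flight and sensor parameters, resamples an orthoimage to a coarser target GSD, and cuts it into training tiles; the parameter values and placeholder image are assumptions, not the study's data.

```python
# Hypothetical GSD computation and retiling of a drone orthoimage.
# Flight altitude, sensor parameters, and the synthetic image are illustrative.
import numpy as np
from PIL import Image

def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD (m/pixel) = altitude * pixel pitch / focal length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def resample_to_gsd(image, native_gsd_m, target_gsd_m):
    """Downsample an orthoimage from its native GSD to a coarser target GSD."""
    scale = native_gsd_m / target_gsd_m
    w, h = image.size
    return image.resize((int(w * scale), int(h * scale)), Image.BILINEAR)

def tile(image, tile_px=512):
    """Cut the image into square training tiles, dropping partial edges."""
    arr = np.asarray(image)
    for r in range(0, arr.shape[0] - tile_px + 1, tile_px):
        for c in range(0, arr.shape[1] - tile_px + 1, tile_px):
            yield arr[r:r + tile_px, c:c + tile_px]

native_gsd = ground_sample_distance(altitude_m=80, focal_length_mm=8.8, pixel_pitch_um=2.4)
print(f"native GSD: {native_gsd * 100:.2f} cm/pixel")
# Placeholder standing in for a real orthoimage read from disk.
ortho = Image.fromarray(np.random.randint(0, 255, (4000, 4000, 3), dtype=np.uint8))
coarse = resample_to_gsd(ortho, native_gsd, target_gsd_m=0.05)
tiles = list(tile(coarse, tile_px=512))
print(f"{len(tiles)} tiles of 512x512 at 5 cm GSD")
```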

Measurement of Construction Material Quantity through Analyzing Images Acquired by Drone And Data Augmentation (드론 영상 분석과 자료 증가 방법을 통한 건설 자재 수량 측정)

  • Moon, Ji-Hwan;Song, Nu-Lee;Choi, Jae-Gab;Park, Jin-Ho;Kim, Gye-Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.1
    • /
    • pp.33-38
    • /
    • 2020
  • This paper proposes a technique for counting construction materials by analyzing images acquired by a drone. The proposed technique uses a drone log that includes drone and camera information, an RCNN for predicting the construction material type and its pile area, and photogrammetry for counting the number of construction materials. Existing research has large error ranges when detecting construction materials and predicting the material pile area because of a lack of training data. To reduce these error ranges and improve prediction stability, this paper increases the training data through data augmentation, but uses only rotated training data for augmentation to prevent overfitting of the training model. For the quantity calculation, we use the drone log containing drone and camera information such as yaw and FOV, and the RCNN model to find the piles of building materials in the image and predict their type. All of this information is then combined and applied to the formula suggested in the paper to calculate the actual quantity of the material pile. The superiority of the proposed method is demonstrated through experiments.
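
The paper's own quantity formula is not given in the abstract. The sketch below only illustrates the two generic building blocks it mentions: rotation-only data augmentation, and a per-pixel ground footprint estimate from altitude and horizontal FOV for a nadir-looking camera; all numeric values and the placeholder image are assumptions.

```python
# Rotation-only augmentation of training images, plus a generic per-pixel
# ground footprint estimate from altitude and horizontal FOV. Values are
# illustrative and not taken from the paper.
import math
import cv2
import numpy as np

def rotate_image(image, angle_deg):
    """Rotate around the image center without changing the canvas size."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, M, (w, h))

def augment_by_rotation(image, angles=(90, 180, 270)):
    """Return the original image plus rotated copies (rotation-only augmentation)."""
    return [image] + [rotate_image(image, a) for a in angles]

def ground_width_per_pixel(altitude_m, hfov_deg, image_width_px):
    """Approximate ground distance covered by one pixel for a nadir-looking camera."""
    swath_m = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    return swath_m / image_width_px

img = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)  # placeholder training image
augmented = augment_by_rotation(img)
print(f"{len(augmented)} images after rotation-only augmentation")

gw = ground_width_per_pixel(altitude_m=50, hfov_deg=84, image_width_px=4000)
pile_pixels = 120_000                          # placeholder segmented-pile pixel count
print(f"approx. pile footprint: {pile_pixels * gw * gw:.1f} m^2")
```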

Object Detection based on Image Processing for Indoor Drone Localization (실내 드론의 위치 추정을 위한 영상처리 기반 객체 검출)

  • Beck, Jong-Hwan;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.04a
    • /
    • pp.1003-1004
    • /
    • 2017
  • This study introduces a marker recognition and detection technique for drone localization in indoor environments. Existing indoor positioning techniques, such as the Global Positioning System and Wi-Fi-based triangulation, are difficult to use indoors because of their respective characteristics. This paper introduces a technique that acquires position information by detecting objects such as 2D barcodes and markers in video streamed in real time from the drone's camera. In the experiments, objects were detected with OpenCV v2.4.10 in the video streamed in real time from the drone's camera, and detection of 2D barcodes was examined according to the camera-to-object distance and barcode size: 15×15 cm 2D barcodes were recognized relatively well, while smaller 11×11 cm barcodes became harder to recognize as the distance increased.
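
The paper used OpenCV 2.4.10; the sketch below instead uses the modern cv2.QRCodeDetector API as a stand-in to show the same idea of detecting a 2D barcode and roughly estimating camera-to-marker distance from its apparent size. The marker size, focal length, and file name are assumed values, not the paper's setup.

```python
# Illustrative 2D-barcode (QR) detection and rough distance estimation.
# Marker size and focal length are assumed; this is not the paper's code.
import cv2
import numpy as np

MARKER_SIZE_M = 0.15        # assumed 15 cm x 15 cm marker
FOCAL_LENGTH_PX = 900.0     # assumed camera focal length in pixels

def detect_marker_and_distance(frame):
    """Return (decoded_text, approx_distance_m) or None if no marker is found."""
    detector = cv2.QRCodeDetector()
    text, points, _ = detector.detectAndDecode(frame)
    if points is None:
        return None
    corners = points.reshape(-1, 2)
    # Apparent side length in pixels, averaged over the four marker edges.
    side_px = np.mean([np.linalg.norm(corners[i] - corners[(i + 1) % 4])
                       for i in range(4)])
    # Pinhole model: distance = focal_length * real_size / apparent_size.
    distance_m = FOCAL_LENGTH_PX * MARKER_SIZE_M / side_px
    return text, distance_m

frame = cv2.imread("drone_frame.png")      # placeholder for a streamed video frame
if frame is not None:
    result = detect_marker_and_distance(frame)
    print(result if result else "no marker detected")
```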

Display Resolution Optimization Method of A Drone Projector (드론 탑재형 프로젝터 디스플레이 영역 해상도 최적화 방법)

  • Lee, Joonhyung;Jeon, Byeungwoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.616-618
    • /
    • 2020
  • In a drone-mounted projector system, vibrations caused by the drone's motors and propellers and by the flight environment are transmitted directly to the projector, so the projected image becomes distorted. To correct this, a projection-image transformation matrix based on the drone's flight information obtained from sensors is applied. This paper proposes a method that determines the resolution of the display region according to the flight environment, instead of limiting it to a fixed value, and applies it to real images. The experimental results show that, with the proposed display-region resolution optimization method, the system can operate at a larger display-region resolution than the conventional fixed one.
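
The paper's actual transformation matrix is not given in this abstract. As a rough illustration of attitude-based projection correction, the sketch below builds a rotation-only homography H = K R K⁻¹ from assumed intrinsics and roll/pitch angles and pre-distorts the output frame with its inverse; every numeric value is an assumption.

```python
# Generic keystone-correction sketch for a projector that has rotated by known
# roll and pitch angles. Intrinsics and angles are assumed, illustrative values.
import cv2
import numpy as np

def rotation_matrix(roll_deg, pitch_deg):
    r, p = np.radians([roll_deg, pitch_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    return Rz @ Rx

def correction_homography(K, roll_deg, pitch_deg):
    """For pure rotation the induced image warp is H = K R K^-1; apply its
    inverse to pre-distort the projected frame."""
    R = rotation_matrix(roll_deg, pitch_deg)
    H = K @ R @ np.linalg.inv(K)
    return np.linalg.inv(H)

# Assumed projector intrinsics for a 1280x720 output frame.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
frame = np.full((720, 1280, 3), 255, dtype=np.uint8)   # placeholder content frame
H_corr = correction_homography(K, roll_deg=3.0, pitch_deg=-2.0)
corrected = cv2.warpPerspective(frame, H_corr, (1280, 720))
print("corrected frame shape:", corrected.shape)
```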


Drone-Based Micro-SAR Imaging System and Performance Analysis through Error Corrections (드론을 활용한 초소형 SAR 영상 구현 및 품질 보상 분석)

  • Lee, Kee-Woong;Kim, Bum-Seung;Moon, Min-Jung;Song, Jung-Hwan;Lee, Woo-Kyung;Song, Yong-Kyu
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.27 no.9
    • /
    • pp.854-864
    • /
    • 2016
  • The use of small drone platforms has become a popular topic in recent years, but their application to SAR operation has been little known because of the burden of the payload implementation. Drone platforms are distinguished from conventional UAV systems by their increased vulnerability to turbulence, control errors, and poor motion stability. Consequently, sophisticated motion compensation may be required to guarantee the successful acquisition of high-quality SAR imagery. Extremely limited power and mass budgets may prevent the use of additional hardware for motion compensation, which further aggravates the difficulty of SAR focusing. In this paper, we carried out a feasibility study of micro-SAR drone operation. We present the image acquisition results from preliminary flight tests, followed by a quality assessment of the experimental SAR images. The in-flight motion errors arising from the drone's distinctive movements are investigated, and attempts are made to compensate for the geometric and phase errors caused by motion away from the nominal trajectory. Finally, the successful operation of the drone SAR system is validated through focused SAR images taken over the test sites.
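
As background to the phase-error compensation described above, the sketch below shows only the generic first-order step of removing the two-way phase error caused by a known line-of-sight deviation from the nominal track; the wavelength, array shapes, and deviation profile are illustrative assumptions, not the paper's processing chain.

```python
# First-order SAR motion compensation sketch: remove the two-way phase error
# caused by a known line-of-sight deviation dr(t) from the nominal trajectory.
# Wavelength, array shapes, and the deviation profile are illustrative only.
import numpy as np

WAVELENGTH_M = 0.031                 # assumed X-band wavelength (~9.6 GHz)
N_PULSES, N_RANGE = 1024, 512

# Placeholder range-compressed data and a smooth line-of-sight deviation
# (e.g. from GPS/IMU) in meters for each transmitted pulse.
rng = np.random.default_rng(0)
raw = rng.standard_normal((N_PULSES, N_RANGE)) + 1j * rng.standard_normal((N_PULSES, N_RANGE))
los_deviation_m = 0.02 * np.sin(2 * np.pi * np.arange(N_PULSES) / N_PULSES)

# Each meter of line-of-sight deviation adds a two-way phase of 4*pi/lambda;
# multiplying by the conjugate phase removes it pulse by pulse.
phase_correction = np.exp(1j * 4 * np.pi * los_deviation_m / WAVELENGTH_M)
compensated = raw * phase_correction[:, np.newaxis]
print("compensated data shape:", compensated.shape)
```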

A Study on Damage Scale Tracking Technique for Debris Flow Occurrence Sections Using Drone Images (드론영상을 활용한 토석류 발생구간의 피해규모 추적기법)

  • Shin, Hyunsun;Um, Jungsup;Kim, Junhyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.517-526
    • /
    • 2017
  • In this study, drone imagery was used to track the damage extent of a debris flow, and the accuracy of the elevation, slope, and area estimates was analyzed by comparing the drone-based method with the 1/5,000 digital topographic map and a GPS ground survey. The results are summarized as follows. First, in the elevation comparison, the value from the drone imagery was 3.024 m lower than the digital topographic map, whereas for slope the drone imagery gave values higher by 1.20° on average and up to 10.46° at maximum. Second, the area from the drone imagery was 462 m² larger than that from the digital topographic map, because the drone imagery reflects the uplift of the terrain and is therefore judged to be more accurate than the digital topographic map. Compared with the existing methods, the drone image method was very effective in terms of time and manpower.
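
As a small illustration of the slope comparison above, the sketch below derives per-cell slope from a drone-style DSM grid using the elevation gradient; the synthetic DSM and the 0.1 m grid spacing are placeholder assumptions, not the study's data.

```python
# Generic slope computation from a DSM grid: slope = arctan(|grad z|).
# The DSM array and grid spacing below are placeholders for illustration.
import numpy as np

def slope_degrees(dsm, cell_size_m):
    """Per-cell slope in degrees from the elevation gradient."""
    dz_dy, dz_dx = np.gradient(dsm, cell_size_m)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Placeholder DSM: a gently tilted plane with some surface roughness.
y, x = np.mgrid[0:200, 0:200]
dsm = 0.05 * x + 0.02 * y + np.random.default_rng(1).normal(0, 0.01, (200, 200))
slope = slope_degrees(dsm, cell_size_m=0.1)
print(f"mean slope: {slope.mean():.2f} deg, max slope: {slope.max():.2f} deg")
```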

Performance Comparison and Analysis between Keypoints Extraction Algorithms using Drone Images (드론 영상을 이용한 특징점 추출 알고리즘 간의 성능 비교)

  • Lee, Chung Ho;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.2
    • /
    • pp.79-89
    • /
    • 2022
  • Images taken using drones can quickly provide high-quality 3D spatial information for small regions and have therefore been applied to fields that require rapid decision-making. To construct spatial information based on drone images, it is necessary to determine the relationship between images by extracting keypoints between adjacent drone images and performing image matching. In this study, three study regions photographed using a drone were selected: a region where a parking lot and a lake coexist, a downtown region with buildings, and a field region of natural terrain, and the performance of the AKAZE (Accelerated-KAZE), BRISK (Binary Robust Invariant Scalable Keypoints), KAZE, ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale Invariant Feature Transform), and SURF (Speeded Up Robust Features) algorithms was analyzed. The keypoint extraction algorithms were compared in terms of the distribution of extracted keypoints, the distribution of matched points, processing time, and matching accuracy. In the region where the parking lot and lake coexist, the BRISK algorithm was fastest, and the SURF algorithm showed excellent performance in the distribution of keypoints and matched points and in matching accuracy. In the downtown region with buildings, the AKAZE algorithm was fastest, and the SURF algorithm again showed excellent performance in the distribution of keypoints and matched points and in matching accuracy. In the field region of natural terrain, the keypoints and matched points of the SURF algorithm were evenly distributed throughout the drone image, but the AKAZE algorithm showed the highest matching accuracy and processing speed.
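
The sketch below shows how such a comparison can be set up with OpenCV for a subset of the detectors studied (AKAZE, BRISK, ORB, SIFT), timing detection and counting ratio-test matches; SURF and KAZE are omitted here because SURF requires the non-free opencv-contrib build, and the image file names are placeholders.

```python
# Illustrative comparison of keypoint detectors on a pair of overlapping drone
# images: keypoint counts, ratio-test matches, and processing time.
import time
import cv2

def compare_detectors(img1, img2):
    detectors = {
        "AKAZE": cv2.AKAZE_create(),
        "BRISK": cv2.BRISK_create(),
        "ORB": cv2.ORB_create(nfeatures=5000),
        "SIFT": cv2.SIFT_create(),
    }
    for name, det in detectors.items():
        t0 = time.perf_counter()
        kp1, des1 = det.detectAndCompute(img1, None)
        kp2, des2 = det.detectAndCompute(img2, None)
        # Binary descriptors (AKAZE/BRISK/ORB) use Hamming distance, SIFT uses L2.
        norm = cv2.NORM_L2 if name == "SIFT" else cv2.NORM_HAMMING
        matches = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
        # Lowe ratio test to keep only distinctive matches.
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
        dt = time.perf_counter() - t0
        print(f"{name:6s} keypoints: {len(kp1):5d}/{len(kp2):5d} "
              f"matches: {len(good):5d} time: {dt:.2f} s")

img1 = cv2.imread("drone_1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image pair
img2 = cv2.imread("drone_2.jpg", cv2.IMREAD_GRAYSCALE)
if img1 is not None and img2 is not None:
    compare_detectors(img1, img2)
```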

Analysis of inundation tracing using advanced imagery (첨단영상기반 침수흔적 분석)

  • Kim, Soo Hyun;Kim, Dong Kyun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2019.05a
    • /
    • pp.66-66
    • /
    • 2019
  • An inundation trace map records flood traces caused by storm and flood damage (inundation depth, water level, duration, etc.) and must be produced under the Countermeasures against Natural Disasters Act for disaster mitigation and rapid evacuation. Such maps serve as basic data for national disaster prevention, but surveying wide areas quickly and accurately is limited by insufficient budgets and inadequate management. Therefore, this study produced a satellite-image-based inundation classification map for the flood damage caused by Typhoon Kong-rey in Yeongdeok-gun, Gyeongsangbuk-do in early October 2018, and compared it with actual records to confirm the applicability of advanced imagery to inundation trace mapping. ESA's Sentinel-1 and Planet Labs' PlanetScope imagery were used, and accuracy was evaluated against CCTV video records. The terrain data used to determine inundation depth and extent were a 10 m DEM and a DSM constructed from drone imagery. The results show that the satellite-based inundation classification map correlated strongly with the actual CCTV records, that accuracy improved when the terrain data were built from drone imagery rather than the DEM, and that higher-resolution satellite imagery identified the inundation extent more consistently with the actual data. Producing inundation trace maps with advanced imagery is expected to enable faster and more extensive data collection than existing surveys.
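
The study's classification procedure is not detailed in the abstract. The sketch below shows only the simplest common approach to SAR-based water mapping: thresholding low backscatter in a Sentinel-1-style dB image to produce an inundation mask; the -18 dB threshold and the synthetic scene are assumptions for illustration.

```python
# Minimal water-mask sketch: threshold Sentinel-1-like backscatter (in dB)
# to separate open water (low backscatter) from land. Threshold and data
# are illustrative assumptions only.
import numpy as np

WATER_THRESHOLD_DB = -18.0

def water_mask(sigma0_db):
    """Pixels darker than the threshold are classified as inundated water."""
    return sigma0_db < WATER_THRESHOLD_DB

# Placeholder backscatter scene: land around -8 dB, a flooded patch around -22 dB.
rng = np.random.default_rng(2)
scene = rng.normal(-8.0, 1.5, (500, 500))
scene[200:300, 150:350] = rng.normal(-22.0, 1.0, (100, 200))
mask = water_mask(scene)
print(f"inundated fraction: {mask.mean() * 100:.1f}% of the scene")
```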
