• Title/Summary/Keyword: Camera drone


Position Recognition and Indoor Autonomous Flight of a Small Quadcopter Using Distributed Image Matching (분산영상 매칭을 이용한 소형 쿼드콥터의 실내 비행 위치인식과 자율비행)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.23 no.2_2
    • /
    • pp.255-261
    • /
    • 2020
  • We consider the problem of autonomously flying a quadcopter in indoor environments. Navigation in indoor settings poses two major issues: first, real-time recognition of the markers captured by the camera; second, combining the distributed images to determine the position and orientation of the quadcopter in an indoor environment. We autonomously fly a miniature RC quadcopter in small known environments using an on-board camera as the only sensor. We use an algorithm that combines data-driven image classification with image-combination techniques on the images captured by the camera to achieve real-time 3D localization and navigation.

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil;Byun, Minsu;Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.541-548
    • /
    • 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced based primarily on the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for the positional information of the pixels, or could be integrated with ground control points (GCPs) that are directly measured on the ground. However, due to the high cost of establishing GCPs prior to georeferencing, or for inaccessible areas, it is often necessary to derive the positions without such reference information. This study aims to provide a means to improve the georeferencing performance of multispectral camera images without such ground reference points, using instead a high resolution RGB camera operated simultaneously onboard. The exterior orientation parameters of the drone cameras are first estimated through bundle adjustment and compared with reference values derived with GCPs. The results showed that incorporating the images from the high resolution RGB camera greatly improved both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of the direction estimation from a ground point to the sensor showed that including RGB images can reduce the angle errors by one order of magnitude.

Coastal Shallow-Water Bathymetry Survey through a Drone and Optical Remote Sensors (드론과 광학원격탐사 기법을 이용한 천해 수심측량)

  • Oh, Chan Young;Ahn, Kyungmo;Park, Jaeseong;Park, Sung Woo
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.29 no.3
    • /
    • pp.162-168
    • /
    • 2017
  • A shallow-water bathymetry survey has been conducted using high-definition color images obtained at an altitude of 100 m above sea level using a drone. Shallow-water bathymetry data are among the most important input data for research on beach erosion problems. Accurate bathymetry data within the closure depth are especially critical, because most of the phenomena of interest occur in the surf zone. However, it is extremely difficult to obtain accurate bathymetry data there due to wave-induced currents and breaking waves. Therefore, an optical remote sensing technique using a small drone is considered an attractive alternative. This paper presents the potential of image processing algorithms using multi-variable linear regression applied to red, green, blue, and grey band images for estimating shallow-water depth with a drone carrying an HD camera. Optical remote sensing analysis conducted at Wolpo beach showed promising results: estimated water depths within 5 m showed a correlation coefficient of 0.99 and a maximum error of 0.2 m compared with water depths surveyed manually as well as with ship-board echo-sounder measurements.
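The multi-variable linear regression the abstract describes can be sketched as an ordinary least-squares fit of depth against the four band intensities. The coefficients and calibration points below are synthetic placeholders, not values from the paper:

```python
import numpy as np

def fit_depth_model(bands, depths):
    """Ordinary least squares: depth ≈ a0 + a1*R + a2*G + a3*B + a4*grey."""
    X = np.column_stack([np.ones(len(depths)), bands])
    coef, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coef

def predict_depth(coef, bands):
    X = np.column_stack([np.ones(len(bands)), bands])
    return X @ coef

# Synthetic calibration points: (R, G, B, grey) intensities with depths that
# are an exact linear function of the bands, so the fit should recover them.
rng = np.random.default_rng(0)
bands = rng.uniform(0.1, 0.9, size=(50, 4))
true_coef = np.array([0.5, 2.0, -1.0, 3.0, 0.7])
depths = true_coef[0] + bands @ true_coef[1:]

coef = fit_depth_model(bands, depths)
pred = predict_depth(coef, bands)
print(np.allclose(pred, depths))  # → True
```

In practice the calibration depths would come from the manual and echo-sounder surveys, and the fitted model would then be applied pixel-wise to the drone imagery.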

Design of Deep Learning-Based Automatic Drone Landing Technique Using Google Maps API (구글 맵 API를 이용한 딥러닝 기반의 드론 자동 착륙 기법 설계)

  • Lee, Ji-Eun;Mun, Hyung-Jin
    • Journal of Industrial Convergence
    • /
    • v.18 no.1
    • /
    • pp.79-85
    • /
    • 2020
  • Recently, the RPAS (Remotely Piloted Aircraft System), operated by remote control and autonomous navigation, has been attracting growing interest and utilization in various industries and public organizations, including delivery drones, fire-fighting drones, ambulance drones, and agricultural drones. The stability of unmanned drones capable of autonomous control is also the biggest challenge to be solved as the drone industry develops. Drones should be able to fly along the path the autonomous flight control system sets and automatically perform an accurate landing at the destination. This study proposes a technique that checks arrival using images of the landing point and controls the landing at the correct spot, compensating for errors in the location data of the drone's sensors and GPS. The system receives imagery of the destination from the Google Maps API and learns it; a drone equipped with a NAVIO2, a Raspberry Pi, and a camera takes images of the landing point and sends them to a server, and the drone's position is adjusted according to a threshold so that it can land automatically at the landing point.
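The arrival check the abstract describes (comparing landing-point images against a threshold) can be illustrated with a minimal sketch; the mean-absolute-difference score and the threshold value are assumptions for illustration, not the paper's deep-learning model:

```python
def image_difference(captured, reference):
    """Mean absolute pixel difference between two equal-size grayscale images."""
    pairs = [(c, r) for crow, rrow in zip(captured, reference)
             for c, r in zip(crow, rrow)]
    return sum(abs(c - r) for c, r in pairs) / len(pairs)

def at_landing_point(captured, reference, threshold=10.0):
    """True when the camera view matches the reference closely enough to descend."""
    return image_difference(captured, reference) <= threshold

# Tiny 2x2 grayscale examples (hypothetical pixel values):
reference = [[100, 100], [100, 100]]
print(at_landing_point([[102, 98], [101, 99]], reference))  # → True
print(at_landing_point([[200, 30], [180, 20]], reference))  # → False
```

The paper's system would replace this pixel comparison with its trained model; the threshold-gated control loop is the part being illustrated.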

Local and Global Navigation Maps for Safe UAV Flight (드론의 안전비행을 위한 국부 및 전역지도 인터페이스)

  • Yu, Sanghyeong;Jeon, Jongwoo;Cho, Kwangsu
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.2
    • /
    • pp.113-120
    • /
    • 2018
  • To fly a drone or unmanned aerial vehicle (UAV) safely, its pilot needs to maintain high situation awareness of its flight space. One of the important ways to improve flight space awareness is to integrate the global and local navigation maps a drone provides. However, the drone pilot often has to use inconsistent reference frames or perspectives between the two maps. Specifically, the global navigation map tends to display spatial information in the third-person perspective, whereas the local map tends to use the first-person perspective through the drone camera. This inconsistency forces the pilot to use mental rotation to align the different perspectives. In addition, integrating the different dimensionalities (2D vs. 3D) of the two maps may aggravate the pilot's cognitive load of mental rotation. Therefore, this study investigates the relation between perspective difference (0°, 90°, 180°, 270°) and map dimensionality match (3D-3D vs. 3D-2D) to improve the way the two maps are integrated. The results show that the pilot's flight space awareness improves when the perspective differences are smaller and when the dimensionalities of the two maps are matched.
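The mental-rotation step the abstract discusses amounts to rotating local-map coordinates by the perspective difference so they align with the global map's frame. A minimal sketch, with the 2D coordinate convention assumed for illustration:

```python
import math

def align_perspective(x, y, diff_deg):
    """Rotate a local-map coordinate by the perspective difference (degrees)
    so it matches the global map's reference frame."""
    rad = math.radians(diff_deg)
    return (round(x * math.cos(rad) - y * math.sin(rad), 9),
            round(x * math.sin(rad) + y * math.cos(rad), 9))

# A point directly ahead of the drone, (0, 1), under each tested difference:
for diff in (0, 90, 180, 270):
    print(diff, align_perspective(0.0, 1.0, diff))
```

The larger the rotation the pilot must perform mentally, the heavier the cognitive load, which is what the study measures.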

Standardization Research on Drone Image Metadata in the Agricultural Field (농업분야 드론영상 메타데이터 표준화 연구)

  • Won-Hui Lee;Seung-Hun Bae;Jin Kim;Young Jae Lee;Keo Bae Lim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.293-302
    • /
    • 2023
  • This study examines and proposes standardization approaches to address the heterogeneity of metadata in drone imagery within the agricultural sector. Image metadata comes in various formats depending on the camera manufacturer, with most utilizing EXIF and XMP. The metadata of cameras used on fixed-wing and rotary-wing platforms, along with the metadata requirements of image alignment software, were analyzed for sensors such as the DJI XT2, MicaSense RedEdge-M, and Sentera Double4K. In the agricultural domain, multispectral imagery is crucial for vegetation analysis, making the provision of such imagery essential. Based on Pix4D software, a comparative analysis of metadata attributes was performed, and the necessary elements were compiled and presented as a draft standardization proposal in the form of tag information.
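The comparative analysis of metadata attributes can be illustrated as set operations over per-sensor tag inventories; the tag sets below are hypothetical examples, not the actual EXIF/XMP contents of these sensors:

```python
# Hypothetical per-sensor tag inventories (illustrative only):
tags = {
    "DJI XT2": {"GPSLatitude", "GPSLongitude", "FocalLength", "BandName"},
    "MicaSense RedEdge-M": {"GPSLatitude", "GPSLongitude", "FocalLength",
                            "BandName", "Irradiance"},
    "Sentera Double4K": {"GPSLatitude", "GPSLongitude", "FocalLength", "ISO"},
}

# Tags every sensor already provides: candidates for a mandatory core.
common = set.intersection(*tags.values())

# Gaps relative to a hypothetical draft standard's required tag list.
required = {"GPSLatitude", "GPSLongitude", "FocalLength", "BandName"}
gaps = {sensor: sorted(required - t) for sensor, t in tags.items()}

print(sorted(common))            # → ['FocalLength', 'GPSLatitude', 'GPSLongitude']
print(gaps["Sentera Double4K"])  # → ['BandName']
```

The intersection identifies tags safe to mandate, while the gap list shows which sensors a draft standard would force to add tags.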

A Study for Drone to Keep a Formation and Prevent Collisions in Case of Formation Flying (드론의 삼각 편대비행에서 포메이션 유지 및 충돌 방지 제어를 위한 연구)

  • Cho, Eun-sol;Lee, Kang-whan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.499-501
    • /
    • 2016
  • In this paper, we suggest an advanced method for maintaining a triangle formation and preventing collisions between drones during formation flight. In existing studies, drone collisions are avoided only by using light entering the camera or by image processing; however, when there is no light, the drones cannot confirm each other's positions and may collide. Therefore, we propose a system that solves this problem by using the distances and relative coordinates of the three drones determined with the ALPS (Ad hoc network Localized Positioning System) algorithm. This system can serve as a new approach to preventing collisions between drones in flight. The proposed algorithm keeps the distance between each drone's coordinates and the measured center of the triangle formation at a fixed value; if the formation is disturbed, the drones reset their positions so as to keep the distances between each drone and the center coordinates constant. The simulation results suggest that applying the proposed algorithm can prevent malfunctions and accidents by avoiding drone collisions in advance.
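The distance-keeping rule the abstract describes (holding each drone at a fixed distance from the formation's center) can be sketched as a radial correction toward the centroid; this simplified stand-in omits the ALPS positioning details, and the positions and formation radius are hypothetical:

```python
import math

def centroid(positions):
    n = len(positions)
    return (sum(x for x, _ in positions) / n, sum(y for _, y in positions) / n)

def correct_formation(positions, target_dist, tol=0.01):
    """Move each drone radially so its distance to the formation centroid
    returns to target_dist (a simplified stand-in for the ALPS-based control)."""
    cx, cy = centroid(positions)
    corrected = []
    for x, y in positions:
        d = math.hypot(x - cx, y - cy)
        if abs(d - target_dist) > tol and d > 0:
            scale = target_dist / d
            corrected.append((cx + (x - cx) * scale, cy + (y - cy) * scale))
        else:
            corrected.append((x, y))
    return corrected

# Triangle formation with the drones drifted off the 2.0 m formation radius:
drones = [(0.0, 2.0), (-1.7320508, -1.0), (3.0, -1.5)]
cx, cy = centroid(drones)
fixed = correct_formation(drones, target_dist=2.0)
print([round(math.hypot(x - cx, y - cy), 3) for x, y in fixed])  # → [2.0, 2.0, 2.0]
```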


The Visual Aesthetics of Drone Shot and Hand-held Shot based on the Representation of Place and Space : focusing on World Travel 'Peninsula de Yucatán' Episode (장소와 공간의 재현적 관점에서 본 드론 쇼트와 핸드헬드 쇼트의 영상 미학 : <세계테마기행> '유카탄 반도'편을 중심으로)

  • Ryu, Jae-Hyung
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.3
    • /
    • pp.251-265
    • /
    • 2020
  • The drone shot consists of moving images captured by a remotely controlled unmanned aerial vehicle and usually takes a bird's-eye view. The hand-held shot consists of moving images recorded by literally hand-held shooting, which is suited to on-the-spot filming; it takes a walker's viewpoint through the vivid realism of its self-reflexive camera movements. The purpose of this study is to comparatively analyze the aesthetic functions of the drone shot and the hand-held shot. For this, the study drew on Certeau's concepts of 'place' and 'space,' chose the World Travel 'Peninsula de Yucatan' episode as a research object, and applied the two concepts analytically to scenes that clearly present the two shots' aesthetic characteristics. As a result, the drone shot took an authoritative viewpoint providing general information and atmosphere as it overlooked the city with silent movements that remove self-reflexivity. This aesthetic function was reinforced by the narration and subtitles mediating prior knowledge about the proper rules and orders of the place. The drone shot tended to project the location as a place. Conversely, the hand-held shot practically experienced the space through free walking, free from the rules and orders inherent in the city. The aesthetics of hand-held images represented the tactic of resisting the strategy of a subject of will and power, in that the hand-held shot practiced anthropological walking by noticing the everyday lives of the small town and countryside rather than the main tourist attractions. In opposition to the drone shot, the hand-held shot tended to reflect the location as a space.

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning
    • /
    • v.30 no.1
    • /
    • pp.57-66
    • /
    • 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams using the combination of drone remote sensing and support vector machine (SVM) techniques. The chosen study area was the Idong stream in Goesan-gun, Chungbuk, South Korea. The initial stage involved image acquisition through a fixed-wing drone named eBee. This drone carried two sensors: the S.O.D.A visible camera for capturing detailed visuals and the Sequoia+ multispectral sensor for gathering rich spectral data. The survey captured the stream's features on August 18, 2023. Leveraging the multispectral images, a range of vegetation indices were calculated. These included the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI) that factors in soil background, and the normalized difference water index (NDWI) for identifying water bodies. The third stage was the development of an SVM model based on the calculated vegetation indices. The RBF kernel was chosen for the SVM, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows: (a) High-Resolution Imaging: The drone-based image acquisition delivered high-resolution images (1 cm/pixel) of the Idong stream. These detailed visuals effectively captured the stream's morphology, including its width, variations in the streambed, and the intricate vegetation cover patterns of the stream banks and bed. (b) Vegetation Insights through Indices: The calculated vegetation indices revealed distinct spatial patterns in vegetation cover and moisture content. NDVI emerged as the strongest indicator of vegetation cover, while SAVI and NDWI provided insights into moisture variations. (c) Accurate Classification with SVM: The SVM model, driven by the combination of NDVI, SAVI, and NDWI, achieved an accuracy of 0.903, calculated from the confusion matrix. This performance translated to precise classification of vegetation, soil, and water within the stream area. The study's findings demonstrate the effectiveness of drone remote sensing and SVM techniques in developing accurate vegetation cover classification models for small streams. These models hold great potential for various applications, including stream monitoring, informed management practices, and effective stream restoration efforts. By incorporating images and additional details about the specific drone and sensor technology, we can gain a deeper understanding of small streams and develop effective strategies for stream protection and management.
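The three vegetation indices named in the abstract follow standard formulas (with the SAVI soil-brightness factor L commonly set to 0.5); the reflectance values below are hypothetical, not the paper's data:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index; L is the soil-brightness factor."""
    return (nir - red) * (1 + L) / (nir + red + L)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters form)."""
    return (green - nir) / (green + nir)

# Hypothetical reflectances for one vegetated pixel and one water pixel:
veg = {"nir": 0.60, "red": 0.10, "green": 0.15}
water = {"nir": 0.02, "red": 0.04, "green": 0.10}

print(round(ndvi(veg["nir"], veg["red"]), 3))        # → 0.714 (high: vegetation)
print(round(ndwi(water["green"], water["nir"]), 3))  # → 0.667 (high: water)
```

The per-pixel index values then form the feature vectors fed to the SVM classifier.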

Stability Analysis of a Stereo-Camera for Close-range Photogrammetry (근거리 사진측량을 위한 스테레오 카메라의 안정성 분석)

  • Kim, Eui Myoung;Choi, In Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.3
    • /
    • pp.123-132
    • /
    • 2021
  • To determine 3D (three-dimensional) positions using a stereo-camera in close-range photogrammetry, camera calibration that determines not only the interior orientation parameters of each camera but also the relative orientation parameters between the cameras must be performed first. As time passes after camera calibration, in the case of non-metric cameras, the interior and relative orientation parameters may change due to internal instability or external factors. In this study, to evaluate the stability of the stereo-camera, the stability of the two single cameras and of the stereo-camera was analyzed, and the three-dimensional position accuracy was evaluated using checkpoints. Evaluating the stability of the two single cameras through three camera calibration experiments over four months gave a root mean square error of ±0.001 mm, while the root mean square error of the stereo-camera ranged from ±0.012 mm to ±0.025 mm. In addition, as the distance accuracy evaluated with the checkpoints was ±1 mm, the interior and relative orientation parameters of the stereo-camera were considered stable over that period.
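The stability assessment amounts to a root-mean-square error of repeated calibration estimates against a reference value; the principal-distance values below are hypothetical, not the paper's data:

```python
import math

def rmse(values, reference):
    """Root mean square error of repeated calibration estimates vs. a reference."""
    return math.sqrt(sum((v - reference) ** 2 for v in values) / len(values))

# Hypothetical principal-distance estimates (mm) from three calibrations:
estimates = [24.9990, 25.0012, 24.9998]
print(round(rmse(estimates, 25.0), 4))  # → 0.0009
```

Applying the same statistic to each interior and relative orientation parameter over the four-month span yields the stability figures the abstract reports.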