• Title/Summary/Keyword: UAV camera

Improve utilization of Drone for Private Security (Drone의 민간 시큐리티 활용성 제고)

  • Gong, Bae Wan
    • Convergence Security Journal / v.16 no.3_2 / pp.25-32 / 2016
  • A drone is an unmanned flying system operated by remote control: it is either flown remotely from a ground station or flies automatically or semi-automatically with no pilot on board. Drones were originally developed and used for military purposes, but they are now employed in a wide variety of areas, such as logistics, the distribution of relief supplies to disaster areas, wireless Internet relay, TV and video shooting, disaster observation, and the tracking of criminals. In particular, they can be used actively in search and rescue at disaster sites and can detect the movement of people and intruders at night using an infrared camera. These capabilities make drones highly effective for private security.

Implementation of Photovoltaic Panel failure detection system using semantic segmentation (시멘틱세그멘테이션을 활용한 태양광 패널 고장 감지 시스템 구현)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1777-1783 / 2021
  • The use of drones is gradually increasing for the efficient maintenance of large-scale renewable-energy generation complexes. Photovoltaic panels have long been photographed with drones to manage panel loss and contamination, and various artificial-intelligence approaches are being tried for the efficient maintenance of large-scale photovoltaic complexes. Recently, semantic-segmentation-based techniques have been developed for image classification problems. In this paper, we propose a classification model that uses semantic segmentation to determine the presence or absence of failures such as arcs, disconnections, and cracks in solar-panel images obtained by a drone equipped with a thermal imaging camera. An efficient classification model was implemented by tuning several factors, such as the data size and type and a customized loss function, in U-Net, which shows robust classification performance even with a small dataset.
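
As a note on the approach above, a minimal U-Net-style sketch in PyTorch follows. The paper does not publish its exact architecture, channel counts, or loss customization, so those details are illustrative assumptions (single-channel thermal input, two classes, even input sizes).

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):            # background vs. defect (assumed)
        super().__init__()
        self.enc1 = conv_block(1, 16)           # single-channel thermal input (assumed)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)                                # per-pixel class logits
```

An image would then be judged faulty if any pixel (or a sufficient area) is assigned to a defect class such as arc, disconnection, or crack.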

Comparison of estimating vegetation index for outdoor free-range pig production using convolutional neural networks

  • Sang-Hyon OH;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.6 / pp.1254-1269 / 2023
  • This study aims to predict the change in corn share according to the grazing of 20 gestating sows in a mature cornfield by taking images with a camera-equipped unmanned aerial vehicle (UAV). Deep learning based on convolutional neural networks (CNNs) has demonstrated strong performance in various areas, including high recognition accuracy and fast detection times in agricultural applications such as pest and disease diagnosis and prediction. A large amount of data is required to train a CNN effectively, but a UAV captures only a limited number of images, so we propose a data augmentation method that can effectively increase the data. Most existing occupancy-prediction approaches design a CNN-based object detector and estimate occupancy by counting the recognized objects or calculating the number of pixels each object occupies; these methods require complex occupancy-rate calculations, and their accuracy depends on whether the object features of interest are visible in the image. In this study, however, the CNN is framed not as a corn detection and classification problem but as a function-approximation (regression) problem, so that the occupancy rate of corn in an image is produced directly as the CNN output. The proposed method effectively estimates occupancy from a limited number of cornfield photos, shows excellent prediction accuracy, and confirms the potential and scalability of deep learning.
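
A minimal sketch of the regression framing described above, in PyTorch; the backbone, pooling choices, and loss are assumptions for illustration, not the authors' published network.

```python
import torch
import torch.nn as nn

class OccupancyRegressor(nn.Module):
    """Maps an RGB UAV image directly to a corn occupancy rate in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))            # works for any input size
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))      # occupancy rate, no detector needed

model = OccupancyRegressor()
loss_fn = nn.MSELoss()  # regression target: ground-truth corn share per image
```

The design choice is the point: because the output is a single scalar, no per-object counting or pixel bookkeeping is needed.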

Example of Application of Drone Mapping System based on LiDAR to Highway Construction Site (드론 LiDAR에 기반한 매핑 시스템의 고속도로 건설 현장 적용 사례)

  • Seung-Min Shin;Oh-Soung Kwon;Chang-Woo Ban
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_3 / pp.1325-1332 / 2023
  • Recently, much research has been conducted on point cloud data to support innovations such as construction automation in the transportation field and virtual national territory. These data are often collected by remotely operated devices such as UAVs and UGVs in terrain that is difficult for humans to access. Drones are mainly used to acquire point cloud data, but photogrammetry with a vision camera takes a long time to produce a point cloud map, making it hard to apply at construction sites where the terrain changes periodically and surveying is difficult. In this paper, we developed a point cloud mapping system based on non-repetitive scanning LiDAR and verified the improvement through field application. For the accuracy analysis, a point cloud map of a 144.5 × 138.8 m site was created from a 2-minute-40-second flight followed by about 30 seconds of software post-processing. Comparing map-derived distances with actual measured distances for structures averaging 4 m, an average error of 4.3 cm was recorded, confirming that the performance is within the error range applicable in the field.
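
A small sketch of the kind of accuracy check the abstract reports, comparing distances measured in the point cloud map against surveyed reference lengths; the coordinates and reference values below are hypothetical.

```python
import numpy as np

def map_distance(p, q):
    """Euclidean distance (m) between two points picked from the point cloud map."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

# (pair of map-derived points, tape-measured reference distance in metres) - hypothetical
checks = [
    (((0.00, 0.00, 0.0), (4.05, 0.00, 0.0)), 4.00),
    (((0.00, 0.00, 0.0), (0.00, 3.96, 0.0)), 4.00),
]
errors = [abs(map_distance(p, q) - ref) for (p, q), ref in checks]
print(f"mean absolute error: {np.mean(errors) * 100:.1f} cm")
```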

Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Min-Jun, Park;Chan-Seok, Ryu;Ye-Seong, Kang;Hye-Young, Song;Hyun-Chan, Baek;Ki-Su, Park;Eun-Ri, Kim;Jin-Ki, Park;Si-Hyeong, Jang
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.4 / pp.295-304 / 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images, acquired on September 2, 2022 with the RGB camera mounted on the UAV, were split into 512 × 512 tiles for YOLOv5 analysis, and sorghum panicles were labeled as bounding boxes in the split images. The 2,000 images of 512 × 512 size were divided at a ratio of 6:2:2 to train, validate, and test the YOLOv5 model, respectively. When training with YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845; with the larger YOLOv5m, they were detected with mAP@50 = 0.844. Although the performance of the two models is similar, YOLOv5s trains faster (4 hours 35 minutes) than YOLOv5m (5 hours 15 minutes). Therefore, in terms of time cost, developing the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, a technique for detecting sorghum panicles using high-resolution RGB images and the YOLOv5 model was presented.
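
A minimal sketch of the tiling step described above; the file path is a placeholder, and this simple version drops partial tiles at the right and bottom edges.

```python
import cv2

def split_tiles(image_path, tile=512):
    img = cv2.imread(image_path)          # high-resolution UAV RGB scene
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(img[y:y + tile, x:x + tile])
    return tiles  # each tile is then labeled with panicle bounding boxes

tiles = split_tiles("uav_rgb_scene.png")  # placeholder path
```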

Accuracy Assessment on the Stereoscope based Digital Mapping Using Unmanned Aircraft Vehicle Image (무인항공기 영상을 이용한 입체시기반 수치도화 정확도 평가)

  • Yun, Kong-Hyun;Kim, Deok-In;Song, Yeong Sun
    • Journal of Cadastre & Land InformatiX / v.48 no.1 / pp.111-121 / 2018
  • In this research, digital elevation models, a true-ortho image, and 3-dimensional digital compiled data were generated and evaluated from unmanned aerial vehicle (UAV) stereoscopic images by applying photogrammetric principles. Implementing stereoscopic vision requires a digital photogrammetric workstation; in this study, GEOMAPPER 1.0, which was developed by the Ministry of Trade, Industry and Energy, was used. To realize stereoscopic vision from two overlapping UAV images, the interior and exterior orientation parameters must be calculated; in particular, the lens distortion of the non-metric camera must be accurately compensated for stereoscopic viewing. In this work, the photogrammetric orientation process was conducted using the commercial software PhotoScan 1.4, and a fixed-wing KRobotics KD-2 was used to acquire the UAV images. A true-ortho photo was generated and a digital topographic map was partially produced. Finally, we present an error analysis of the generated digital compiled map. The results confirm that digital terrain maps at scales of 1:2,500 to 1:3,000 can be produced using the stereoscopic method.
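
The abstract stresses that non-metric lens distortion must be compensated before stereoscopic viewing. A hedged sketch using OpenCV's Brown-model undistortion is shown below; in the paper this step is handled inside PhotoScan 1.4, and the calibration values and file path here are placeholders.

```python
import cv2
import numpy as np

# Hypothetical interior orientation for a non-metric UAV camera.
K = np.array([[3600.0,    0.0, 2000.0],        # fx, skew, cx (pixels)
              [   0.0, 3600.0, 1500.0],        # fy, cy
              [   0.0,    0.0,    1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (Brown model)

img = cv2.imread("uav_frame.jpg")              # placeholder path
undistorted = cv2.undistort(img, K, dist)      # removes radial/tangential distortion
```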

Automatic Detection of Malfunctioning Photovoltaic Modules Using Unmanned Aerial Vehicle Thermal Infrared Images

  • Kim, Dusik;Youn, Junhee;Kim, Changyoon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.6 / pp.619-627 / 2016
  • Cells of a PV (photovoltaic) module can suffer defects from various causes, resulting in a loss of power output. As a malfunctioning cell has a higher temperature than adjacent normal cells, it can be easily detected with a thermal infrared sensor. A conventional method of PV cell inspection is visual inspection with a hand-held infrared sensor; when applied to a large-scale PV power plant, this is time-consuming and costly. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm performs a statistical analysis of the thermal intensity (surface temperature) of each PV module, using the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One characteristic of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object; consequently, a global detection rule based on the mean intensity of all panels is not applicable. Therefore, a local detection rule was applied to automatically detect defective panels using the mean intensity and standard deviation range of each panel by array. The performance of the proposed algorithm was tested on three sample images, verifying a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited to highly sensitive fault detection than a global rule. In this study, we used a panel-area extraction method that we previously developed; fault detection accuracy would improve if panel-area extraction from the images were more precise. Furthermore, the proposed algorithm contributes to the development of a maintenance and repair system for large-scale PV power plants, in combination with a geo-referencing algorithm that accurately determines panel locations using sensor-based orientation parameters and photogrammetry from ground control points.
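
A minimal sketch of the local (per-array) detection rule described above; the threshold factor k and the intensity values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_faulty(panel_means, k=2.0):
    """Flag panels whose mean thermal intensity is anomalously high
    relative to the other panels in the *same* array."""
    m = np.asarray(panel_means, dtype=float)
    mu, sigma = m.mean(), m.std()
    return np.flatnonzero(m > mu + k * sigma)   # indices of suspect panels

# Applying the rule per array cancels the distance-dependent temperature
# offset that defeats a single global threshold. Values are hypothetical.
arrays = [[31.2, 30.8, 31.0, 30.9, 31.1, 37.9, 30.7, 31.0],  # panel 5 runs hot
          [28.4, 28.6, 28.1, 28.3, 28.5]]                     # all normal
for array_id, means in enumerate(arrays):
    print(array_id, detect_faulty(means))   # -> 0 [5]  /  1 []
```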

Accuracy Analysis of Low-cost UAV Photogrammetry for Corridor Mapping (선형 대상지에 대한 저가의 무인항공기 사진측량 정확도 평가)

  • Oh, Jae Hong;Jang, Yeong Jae;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.565-572 / 2018
  • Recently, UAVs (Unmanned Aerial Vehicles), or drones, have gained popularity for engineering surveying and mapping because they enable rapid data acquisition and processing at low operating cost. Their applications have widened to include topographic monitoring, agriculture, and forestry. High geospatial accuracy is reported to be achievable with drone photogrammetry for many applications, but most studies report the best achievable mapping results using well-distributed ground control points, though some have investigated the impact of control points on accuracy. In this study, we focused on drone mapping of corridors such as roads and pipelines, varying the distribution and number of control points along the corridor for the accuracy assessment. The effects of camera self-calibration and of the number of image strips were also studied. The experimental results showed that a biased distribution of ground control points degrades accuracy more than a low density of points. Prior camera calibration was favored over on-the-fly self-calibration, which may produce poor positional accuracy when control points are few or biased. In addition, increasing the number of strips along the corridor did not improve positional accuracy.
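
For context, positional accuracy in assessments like this one is typically summarized as a per-axis RMSE at independent checkpoints; the sketch below shows that computation with hypothetical coordinates (it is not the paper's code or data).

```python
import numpy as np

def rmse_per_axis(estimated, reference):
    """Per-axis RMSE (X, Y, Z) over independent checkpoints, in metres."""
    d = np.asarray(estimated) - np.asarray(reference)
    return np.sqrt((d ** 2).mean(axis=0))

# Hypothetical photogrammetric vs. surveyed checkpoint coordinates (m).
est = np.array([[10.02, 5.01, 0.98], [20.05, 4.97, 1.07], [30.01, 5.03, 0.95]])
ref = np.array([[10.00, 5.00, 1.00], [20.00, 5.00, 1.00], [30.00, 5.00, 1.00]])
print(rmse_per_axis(est, ref))   # -> approx. [0.032 0.025 0.051]
```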

BATHYMETRIC MODULATION ON WAVE SPECTRA

  • Liu, Cho-Teng;Doong, Dong-Jiing
    • Proceedings of the KSRS Conference / 2008.10a / pp.344-347 / 2008
  • Ocean surface waves may be modified by ocean currents, and their observation may be severely distorted if the observer is on a moving platform with changing speed. Tidal current near a sill varies inversely with the water depth and results in spatially inhomogeneous modulation of the surface waves near the sill. Waves propagating upstream encounter stronger current before reaching the sill; their wavelength shortens while the frequency is unchanged, their amplitude increases, and they may break if the wave height exceeds 1/7 of the wavelength. These small-scale (~1 km) changes are not suitable for satellite radar observation. The spatial distribution of wave-height spectra S(x, y) cannot be acquired from wave gauges, which are designed to collect 2-D wave spectra at fixed locations, nor from satellite radar images, which are better suited to observing long swells. Optical images collected from cameras on board a ship, over high ground, or on board an unmanned auto-piloted vehicle (UAV) may have a pixel size small enough to resolve decimeter-scale short gravity waves. If diffuse sky light is the only source of lighting and it is uniform across camera-viewing directions, then the image intensity is proportional to the surface reflectance R(x, y) of diffuse light, and R is directly related to the surface slope. The slope spectrum and wave-height spectra S(x, y) may then be derived from R(x, y). The results are compared with in situ measurements of wave spectra over Keelung Sill from a research vessel. This method supports the analysis and interpretation of satellite images in studies of current-wave interaction, which often require fine-scale wave-height spectra S(x, y) that change dynamically in time and space.
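
A sketch of the spectral step described above, under the stated assumption that image intensity is proportional to surface slope: a 2-D FFT of R(x, y) gives a slope spectrum, and dividing by |k|² converts slope to height under linear wave theory. Normalization constants are omitted, and this is not the authors' processing chain.

```python
import numpy as np

def height_spectrum(R, dx=0.1):
    """R: 2-D reflectance array (slope proxy); dx: pixel size in metres.
    Returns an (unnormalized) wave-height spectrum S(kx, ky)."""
    S_slope = np.abs(np.fft.fftshift(np.fft.fft2(R))) ** 2   # slope spectrum
    ky = np.fft.fftshift(np.fft.fftfreq(R.shape[0], dx)) * 2 * np.pi
    kx = np.fft.fftshift(np.fft.fftfreq(R.shape[1], dx)) * 2 * np.pi
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    k2 = KX ** 2 + KY ** 2
    k2[k2 == 0] = np.inf        # suppress the DC component
    return S_slope / k2         # slope -> height: divide by |k|^2
```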

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences / v.16 no.4 / pp.581-589 / 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed at the front of the follower aircraft captures an image of the leader, and the relative position and attitude are estimated from the image using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision-processing algorithm, a field test was performed using two light sport aircraft, and the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator, with a simulation system consisting of two PCs playing the roles of leader and follower. When the leader flies according to user commands, the follower tracks it using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader better than guidance without it, with mean absolute errors in relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
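
A hedged sketch of the KLT tracking step named above, using OpenCV's pyramidal Lucas-Kanade tracker; the frame file names are placeholders, and the POSIT pose-recovery step is only summarized in a comment.

```python
import cv2

prev = cv2.imread("leader_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("leader_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect corner features on the leader aircraft, then track them frame to frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=7)
p1, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
tracked = p1[status.flatten() == 1]   # keep only successfully tracked points

# Matching these 2-D points to known 3-D points on the leader airframe then
# lets a POSIT-style pose solver recover relative position and attitude,
# which feed the guidance and PI control laws.
```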