• Title/Summary/Keyword: UAV images


Automated Analysis of Scaffold Joint Installation Status of UAV-Acquired Images

  • Paik, Sunwoong;Kim, Yohan;Kim, Juhyeon;Kim, Hyoungkwan
    • International conference on construction engineering and project management / 2022.06a / pp.871-876 / 2022
  • In the construction industry, fatal accidents related to scaffolds frequently occur. To prevent such accidents, scaffolds should be carefully monitored for their safety status. However, manual observation of scaffolds is time-consuming and labor-intensive. This paper proposes a method that automatically analyzes the installation status of scaffold joints based on images acquired from an Unmanned Aerial Vehicle (UAV). Using a deep learning-based object detection algorithm (YOLOv5), scaffold joints and joint components are detected. Based on the detection results, a two-stage rule-based classifier is used to analyze the joint installation status. Experimental results show that joints can be classified as safe or unsafe with 98.2% and 85.7% F1-scores, respectively. These results indicate that the proposed method can effectively analyze the joint installation status in UAV-acquired scaffold images.
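The abstract does not spell out the classification rules, so the sketch below is purely illustrative: it assumes stage one checks the joint detection itself and stage two checks for required components detected inside the joint. The confidence threshold and component names are hypothetical, not from the paper.

```python
# Hypothetical two-stage rule-based joint check over detector output.
# Stage 1: was the joint itself detected with enough confidence?
# Stage 2: were all required components found inside the joint region?

def classify_joint(joint_conf, component_labels, conf_threshold=0.5,
                   required=frozenset({"clamp", "pin"})):
    """Return 'safe' only if the joint detection is confident and every
    required component label was detected for that joint."""
    if joint_conf < conf_threshold:
        return "unsafe"                      # stage 1: weak joint detection
    if not required.issubset(set(component_labels)):
        return "unsafe"                      # stage 2: a component is missing
    return "safe"

print(classify_joint(0.9, ["clamp", "pin"]))   # safe
print(classify_joint(0.9, ["clamp"]))          # unsafe: no pin detected
```

Any real implementation would derive both stages from the YOLOv5 detections described in the paper rather than these placeholder rules.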


Damage Analysis and Accuracy Assessment for River-side Facilities using UAV images

  • Kim, Min Chul;Yoon, Hyuk Jin;Chang, Hwi Jeong;Yoo, Jong Su
    • Journal of Korean Society for Geospatial Information Science / v.24 no.1 / pp.81-87 / 2016
  • When natural disasters damage river-side facilities such as dams, bridges, and embankments, accurate damage information is essential for fast recovery. This study presents a method for effective damage analysis using UAV (Unmanned Aerial Vehicle) images and assesses its accuracy. UAV images were captured around the river-side facilities, and the core of the damage analysis is image matching and a change detection algorithm. The image matching result (a point cloud) constitutes 3-dimensional data reconstructed from the 2-dimensional images; damage areas are extracted by comparing height values at the same locations against reference data. The absolute positional precision of the matching results was evaluated against post-processed aerial LiDAR data used as the reference. The assessment shows that the matching results achieve 10-20 centimeter precision when the exterior orientation parameters are accurate. The suggested method is thus very useful for damage analysis of large structures such as river-side facilities; complex buildings, however, cannot be handled this way and require a different damage analysis method.
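The height-comparison step described above can be sketched as a simple raster difference, assuming the matched point cloud has already been gridded into a DSM on the same grid as the reference data. The 0.3 m damage threshold below is an illustrative value, not the paper's.

```python
import numpy as np

def detect_damage(dsm, reference, threshold=0.3):
    """Flag cells whose elevation differs from the reference by more than
    `threshold` metres; returns a boolean damage mask."""
    diff = dsm - reference
    return np.abs(diff) > threshold

# Toy 3x3 grid: one cell has dropped 0.5 m, e.g. a washed-out section.
reference = np.zeros((3, 3))
dsm = reference.copy()
dsm[1, 1] = -0.5
mask = detect_damage(dsm, reference)
print(int(mask.sum()))   # 1 cell flagged as damaged
```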

Comparison of Orthophotos and 3D Models Generated by UAV-Based Oblique Images Taken in Various Angles

  • Lee, Ki Rim;Han, You Kyung;Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.3 / pp.117-126 / 2018
  • Due to intelligent transport systems, location-based applications, and augmented reality, demand for image maps and 3D (Three-Dimensional) maps is increasing. As a result, data acquisition using UAVs (Unmanned Aerial Vehicles) has flourished in recent years. However, even though orthophoto map production and research using UAVs are flourishing, few studies on 3D modeling have been conducted. In this study, orthophoto and 3D modeling research was performed using images acquired by a UAV at various angles. For orthophotos, accuracy was evaluated using checkpoints acquired by a GPS (Global Positioning System) survey employing VRS (Virtual Reference Station). 3D modeling was evaluated by calculating the RMSE (Root Mean Square Error) of the differences between the building outline height values obtained from the GPS survey and the corresponding 3D model height values. The orthophotos satisfied the acceptable accuracy of NGII (National Geographic Information Institute) for a 1/500 scale map at all angles. In the case of 3D modeling, models based on images taken at 45 degrees reproduced building outlines more accurately than models based on images taken at 30, 60, or 75 degrees. In summary, the orthophotos satisfied 1/500 map accuracy at all angles, while for 3D modeling, images taken at 45 degrees produced the most accurate models.
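The RMSE evaluation described for the 3D models reduces to a short computation over paired height values; the sample heights below are made up for illustration.

```python
import math

def rmse(observed, reference):
    """Root Mean Square Error between two equal-length value sequences."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(observed, reference))
                     / len(observed))

model_heights = [12.1, 8.9, 15.2]   # building outline heights from the 3D model (hypothetical)
gps_heights   = [12.0, 9.0, 15.0]   # the same outlines from the GPS/VRS survey (hypothetical)
print(round(rmse(model_heights, gps_heights), 3))   # 0.141
```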

Automatic Counting of Rice Plant Numbers After Transplanting Using Low Altitude UAV Images

  • Reza, Md Nasim;Na, In Seop;Lee, Kyeong-Hwan
    • International Journal of Contents / v.13 no.3 / pp.1-8 / 2017
  • Rice plant numbers and density are key factors for the yield and quality of rice grains. Precisely estimated rice plant numbers and density can assure high yield from rice fields. The main objective of this study was to automatically detect and count rice plants in images of typical field conditions taken from an unmanned aerial vehicle (UAV). We propose an automatic image processing method, based on morphological operations and the boundaries of connected components, to count rice plants after transplanting. We converted the RGB images to binary images and applied an adaptive median filter to remove distortion and noise. We then applied morphological operations to the binary images and drew boundaries around the connected components to count the rice plants. The results show that the algorithm achieves an F-measure of 89%, corresponding to a Precision of 87% and a Recall of 91%. The best-fit image gives an F-measure of 93%, corresponding to a Precision of 91% and a Recall of 96%. A comparison between the numbers of rice plants counted by the naked eye and those found by the proposed method yielded viable and acceptable results, with an $R^2$ value of approximately 0.893.
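The counting step reduces to a core idea: after binarisation and filtering, each plant appears as one connected blob of foreground pixels, so counting plants means counting connected components. The toy 4-connectivity labeller below is a minimal stand-in for the paper's pipeline, which of course operates on far larger, filtered images.

```python
def count_plants(binary):
    """Count connected components (4-connectivity) in a 2D binary grid."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1                       # new component found
                stack = [(r, c)]
                while stack:                     # flood-fill the whole blob
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

field = [[1, 1, 0, 0],
         [0, 0, 0, 1],
         [0, 1, 0, 1]]
print(count_plants(field))   # 3 separate blobs -> 3 plants
```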

Acquiring Precise Coordinates of Ground Targets through GCP Geometric Correction of Captured Images in UAS

  • Namwon An;Kyung-Mee Lim;So-Young Jeong
    • Journal of the Korea Institute of Military Science and Technology / v.26 no.2 / pp.129-138 / 2023
  • Acquiring precise coordinates of ground targets can be regarded as the key mission of tactical-level military UAS (Unmanned Aerial System) operations. The coordinate deviations of ground targets estimated from UAV (Unmanned Aerial Vehicle) images depend on the sensor specifications and the slant ranges between the UAV and the targets, and are on the order of several tens to hundreds of meters in typical tactical UAV mission scenarios. In this paper, we propose a scheme that precisely acquires target coordinates in a UAS by mapping image pixels to geographical coordinates based on GCPs (Ground Control Points). The scheme was implemented and tested from the ground control station of a UAS. We took images of targets whose exact locations are known and acquired their coordinates using the proposed scheme. The experimental results showed that the errors of the acquired coordinates remained within an order of several meters and that the coordinate accuracy was significantly improved.
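One minimal way to realise a GCP-based pixel-to-coordinate mapping is a least-squares affine fit to the control points; the paper's actual geometric correction may use a fuller photogrammetric model, and the pixel/geo pairs below are made-up GCPs.

```python
import numpy as np

def fit_affine(pixels, geos):
    """Fit geo = A @ [px, py, 1] to GCP pairs by least squares;
    returns the 2x3 affine matrix A."""
    P = np.hstack([np.asarray(pixels, float), np.ones((len(pixels), 1))])
    G = np.asarray(geos, float)
    X, *_ = np.linalg.lstsq(P, G, rcond=None)
    return X.T   # shape (2, 3)

# Four hypothetical GCPs: image corners of a 100x100 px patch.
gcp_px  = [(0, 0), (100, 0), (0, 100), (100, 100)]
gcp_geo = [(127.00, 37.00), (127.01, 37.00),
           (127.00, 36.99), (127.01, 36.99)]
A = fit_affine(gcp_px, gcp_geo)

def to_geo(px, py):
    """Map a pixel position to (lon, lat) with the fitted affine model."""
    return tuple(A @ np.array([px, py, 1.0]))

print(to_geo(50, 50))   # centre of the GCP square
```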

Erosion and Sedimentation Monitoring of Coastal Region using Time Series UAV Image

  • CHO, Gi-Sung;HYUN, Jae-Hyeok;LEE, Geun-Sang
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.2 / pp.95-105 / 2020
  • To promote efficient coastal management, it is important to continuously monitor the characteristics of the terrain, which change due to various factors. In this study, time series UAV images were taken of Gyeokpo beach. Comparison with VRS measurements for UAV position accuracy evaluation yielded standard deviations of ±11 cm (X), ±10 cm (Y), and ±15 cm (Z), confirming that the tolerance of the digital map work rule was satisfied. In addition, monitoring of erosion and sedimentation changes using the DSM (Digital Surface Model) constructed from the UAV images showed an average of 0.01 m of deposition between June 2018 and December 2018 and 0.03 m of erosion between December 2018 and June 2019, for a net erosion of 0.02 m between June 2018 and June 2019. From the topographic change analysis, the areas of erosion and sedimentation were derived by height, and both were widely distributed within the ±0.5 m range. Continuous monitoring of topographic changes in coastal regions with the 3D terrain modeling results from time series UAV images presented in this study can support coastal management tasks such as sand replenishment or dredging.
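The erosion/sedimentation monitoring amounts to differencing two DSM epochs of the same area cell by cell: positive change means deposition, negative means erosion. The cell size and elevations in this sketch are illustrative, not the survey's values.

```python
import numpy as np

def elevation_change(dsm_before, dsm_after, cell_area=1.0):
    """Return (mean elevation change in m, eroded area in m^2,
    deposited area in m^2) for two DSMs on the same grid."""
    diff = dsm_after - dsm_before
    eroded_area = float((diff < 0).sum()) * cell_area
    deposited_area = float((diff > 0).sum()) * cell_area
    return float(diff.mean()), eroded_area, deposited_area

before = np.array([[1.00, 1.00], [1.00, 1.00]])
after  = np.array([[1.02, 1.00], [0.97, 1.05]])
mean_dz, eroded, deposited = elevation_change(before, after)
print(mean_dz, eroded, deposited)
```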

Lane Extraction through UAV Mapping and Its Accuracy Assessment

  • Park, Chan Hyeok;Choi, Kyoungah;Lee, Impyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.1 / pp.11-19 / 2016
  • Recently, global companies have been developing automobile technologies converged with state-of-the-art IT technologies for the commercialization of autonomous vehicles. These autonomous vehicles require accurate lane information to control the vehicle safely and reliably. Hence, this study examined the possibility of applying UAV photogrammetry with high-resolution images obtained at low altitudes. A high-resolution DSM and ortho-images were generated from digital images acquired at a GSD of about 7 cm, and the positions of road features, including the lanes, were extracted from the generated data. The RMSE of the extracted data against verification data was about 15 cm. The results indicate that low-altitude UAV photogrammetry can be applied to generating and updating high-accuracy maps of road areas.

Evaluation of the Feasibility of Deep Learning for Vegetation Monitoring

  • Kim, Dong-woo;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.6 / pp.85-96 / 2023
  • This study proposes a method for forest vegetation monitoring using high-resolution aerial imagery captured by unmanned aerial vehicles (UAV) and deep learning technology. The research site was selected in the forested area of Mountain Dogo, Asan City, Chungcheongnam-do, and the target species for monitoring included Pinus densiflora, Quercus mongolica, and Quercus acutissima. To classify vegetation species at the pixel level in UAV imagery based on characteristics such as leaf shape, size, and color, the study employed semantic segmentation with the widely used U-Net deep learning model. The results indicated that Pinus densiflora Siebold & Zucc., Quercus mongolica Fisch. ex Ledeb., and Quercus acutissima Carruth. could be visually distinguished in 135 aerial images captured by UAV. Of these, 104 images were used as training data for the deep learning model, while 31 images were used for inference. Optimization of the deep learning model resulted in an overall average pixel accuracy of 92.60, with an mIoU of 0.80 and an FIoU of 0.82, demonstrating the successful construction of a reliable deep learning model. This study is significant as a pilot case for the application of UAVs and deep learning to monitor and manage representative species among climate-vulnerable vegetation, including Pinus densiflora, Quercus mongolica, and Quercus acutissima. It is expected that UAVs and deep learning models can be applied to a variety of vegetation species in the future to better support forest management.
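The mIoU quoted above is the per-class intersection-over-union averaged over classes. A minimal sketch on tiny 3-class label maps, standing in for the per-pixel species masks:

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Average per-class IoU between predicted and ground-truth label maps,
    skipping classes absent from both."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 2x3 label maps with 3 classes (e.g. three species).
truth = np.array([[0, 0, 1], [2, 2, 1]])
pred  = np.array([[0, 1, 1], [2, 2, 1]])
print(round(mean_iou(pred, truth, 3), 3))   # 0.722
```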

Measurement Accuracy for 3D Structure Shape Change using UAV Images Matching

  • Kim, Min Chul;Yoon, Hyuk Jin;Chang, Hwi Jeong;Yoo, Jong Soo
    • Journal of Korean Society for Geospatial Information Science / v.25 no.1 / pp.47-54 / 2017
  • Recently, there have been many studies on aerial mapping and on 3-dimensional shape and model reconstruction using UAV (Unmanned Aerial Vehicle) systems and images. In this study, we create 3D point data by matching overlapping UAV images, detect shape changes of a structure, and assess the accuracy of the resulting area ($m^2$) and volume ($m^3$) values. First, we build a test structure model and capture images of its shape before and after a change. Second, in post-processing, the "before" dataset is converted to a raster-format image so that it can be compared with the full 3D point cloud of the "after" dataset. The results show high accuracy for shape changes of more than 30 centimeters; smaller changes remain difficult to handle because of the inherent limits of image matching. Nevertheless, the proposed methodology appears very useful for detecting illegal structures and for the quantitative analysis of structural damage and management.
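The volume ($m^3$) comparison can be sketched as summing per-cell height differences times cell area, assuming both epochs have been rasterised to a common grid as described above; the grid size and elevations are illustrative.

```python
import numpy as np

def volume_change(before, after, cell_area=0.25):
    """Signed volume change in m^3 between two height rasters in metres,
    with cell_area in m^2 per raster cell (+ = material added)."""
    return float((after - before).sum() * cell_area)

before = np.full((2, 2), 1.0)
after  = np.array([[1.4, 1.4], [1.0, 1.0]])   # 0.4 m added on two cells
print(volume_change(before, after))           # ~0.4 * 2 * 0.25 = 0.2 m^3
```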

Development of Android-Based Photogrammetric Unmanned Aerial Vehicle System

  • Park, Jinwoo;Shin, Dongyoon;Choi, Chuluong;Jeong, Hohyun
    • Korean Journal of Remote Sensing / v.31 no.3 / pp.215-226 / 2015
  • Conventional aerial photography with a UAV typically uses a radio frequency (RF) modem in the 430 MHz band, navigating and remotely controlling the aircraft through the link between the UAV and the ground control system. The existing method has a communication range of only 1-2 km with frequent line crossings, and since the wireless link carries information on a radio wave, its signal strength is limited to 10 mW, which restricts long-distance communication. The purpose of this research is to use the communication technologies of a smart camera, such as long-term evolution (LTE), Bluetooth, and Wi-Fi, together with other communication modules and cameras that can transfer data, to design and develop an automatic shooting system that acquires images at the necessary locations for the UAV. We conclude that the Android-based UAV imaging and communication module system can not only capture images with a single smart camera but also connect the UAV system and the ground control system, and can obtain real-time 3D position and attitude information using the UAV system's GPS, gyroscope, accelerometer, and magnetic sensors, which allows the real-time position of the UAV to be used and corrections to be made through aerial triangulation.