• Title/Summary/Keyword: Metashape

Search Results: 7

Assessment of Parallel Computing Performance of Agisoft Metashape for Orthomosaic Generation (정사모자이크 제작을 위한 Agisoft Metashape의 병렬처리 성능 평가)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.427-434 / 2019
  • In the present study, we assessed the parallel computing performance of Agisoft Metashape for orthomosaic generation. Based on SfM (Structure from Motion) technology, Metashape can perform aerial triangulation, generate a three-dimensional point cloud, and produce an orthomosaic. Due to the nature of SfM, most of the processing time is spent on Align Photos, which performs relative orientation, and Build Dense Cloud, which generates the three-dimensional point cloud. Metashape can parallelize these two processes using multiple CPU (Central Processing Unit) cores and the GPU (Graphics Processing Unit). An orthomosaic was created from large-volume UAV (Unmanned Aerial Vehicle) images under six conditions combining three parallel methods (CPU only, GPU only, and CPU + GPU) with two operating systems (Windows and Linux). To assess the consistency of the results across conditions, the RMSE (Root Mean Square Error) of aerial triangulation was measured using ground control points that were automatically detected on the images without human intervention. The results of orthomosaic generation from 521 UAV images of 42.2 million pixels each showed that the CPU + GPU combination performed best on the present system, and that Linux outperformed Windows under all conditions. However, the RMSE values of aerial triangulation differed slightly, within the error range, among the combinations. Therefore, Metashape still leaves room for improvement in ensuring consistent results regardless of parallel method and operating system.
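The per-condition timing comparison described above can be reproduced with a simple stage-timing harness. A minimal Python sketch follows; the placeholder stages stand in for the actual Align Photos and Build Dense Cloud calls, which belong to Metashape's commercial Python API and are not reproduced here:

```python
import time

def time_stages(stages):
    """Run each (name, fn) processing stage and record its wall-clock duration."""
    durations = {}
    for name, fn in stages:
        start = time.perf_counter()
        fn()
        durations[name] = time.perf_counter() - start
    return durations

def speedup(baseline_s, condition_s):
    """Speedup factor of one condition over a baseline, from total durations."""
    return sum(baseline_s.values()) / sum(condition_s.values())

# Placeholder stages standing in for Align Photos / Build Dense Cloud.
cpu_only = time_stages([("align", lambda: time.sleep(0.02)),
                        ("dense", lambda: time.sleep(0.04))])
cpu_gpu = time_stages([("align", lambda: time.sleep(0.01)),
                       ("dense", lambda: time.sleep(0.02))])
print(round(speedup(cpu_only, cpu_gpu), 1))
```

Running each condition through the same harness keeps the comparison fair: only the parallel method (and operating system) changes between runs.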

Comparison and analysis of spatial information measurement values of specialized software in drone triangulation (드론 삼각측량에서 전문 소프트웨어의 공간정보 정확도 비교 분석)

  • Park, Dong Joo;Choi, Yeonsung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.4 / pp.249-256 / 2022
  • For drone photogrammetry, Metashape, Pix4D Mapper, ContextCapture, and the "pixel to point tool" module of Global Mapper GIS, a simpler software package, are widely used. Each software package has its own logic for aerial triangulation, so from the user's point of view it is necessary to select software through comparative analysis of the geospatial coordinate values it produces. Aerial photos were taken for drone photogrammetry, GCP reference points were surveyed by VRS-GPS, and the acquired data were processed with each software package to construct an orthoimage and a DSM. The coordinates (X, Y) of the center of each GCP target on the orthoimage and the height value (EL) of each GCP point from the DSM were then compared with the GCP survey results. According to the "Public Surveying Work Regulations", the results of every software package fall within the allowable margin of error, showing that whichever software is selected, the results comply with the regulations.
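The per-GCP comparison above amounts to checking a horizontal distance and a height difference against allowed tolerances. A minimal sketch, assuming dictionary keys `X`, `Y`, `EL` and illustrative tolerance values (the actual limits come from the "Public Surveying Work Regulations"):

```python
import math

def within_tolerance(measured, surveyed, tol_xy=0.08, tol_el=0.10):
    """Check one GCP: horizontal distance and height difference vs. tolerances.
    Tolerance values here are illustrative, not taken from the regulations."""
    dx = measured["X"] - surveyed["X"]
    dy = measured["Y"] - surveyed["Y"]
    dz = measured["EL"] - surveyed["EL"]
    return math.hypot(dx, dy) <= tol_xy and abs(dz) <= tol_el
```

Applying this check to the orthoimage/DSM coordinates from each software package gives a per-package pass/fail against the regulations.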

Cloud Computing-Based Processing of Large Volume UAV Images Acquired in Disaster Sites (재해/재난 현장에서 취득한 대용량 무인기 영상의 클라우드 컴퓨팅 기반 처리)

  • Han, Soohee
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1027-1036 / 2020
  • In this study, a cloud-based processing method using Agisoft Metashape, a commercial software package, and Amazon Web Services, a cloud computing service, is introduced and evaluated for quickly generating high-precision 3D realistic data from large-volume UAV images acquired at disaster sites. Compared with the on-premises method using a local computer and with the cloud services provided by Agisoft and Pix4D, the processes of aerial triangulation, 3D point cloud and DSM generation, mesh and texture generation, and orthomosaic image production took similar amounts of time. The cloud method required extra time to upload and download the large-volume data, but it offered the clear advantage of making in situ processing practically possible. In both the on-premises and cloud methods, processing time varies with CPU and GPU performance, but not as much as in a performance benchmark. However, it was found that a laptop computer equipped with a low-performance GPU takes too much time to be suitable for in situ processing.
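The trade-off above is simple arithmetic: the cloud method pays for data transfer in both directions, so it only wins when local processing is slow enough to cover that cost. A minimal sketch (the 1 GB = 8000 megabit conversion uses decimal units; function names are illustrative):

```python
def transfer_time_s(data_gb, bandwidth_mbps):
    """Seconds to move data_gb gigabytes over a bandwidth_mbps link."""
    return data_gb * 8000 / bandwidth_mbps

def cloud_total_s(upload_s, processing_s, download_s):
    """Total wall-clock time of the cloud method, transfers included."""
    return upload_s + processing_s + download_s
```

For example, 20 GB of UAV images over a 100 Mbps uplink costs 1600 s to upload; the cloud method is only faster overall when local processing exceeds cloud processing plus both transfers.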

Sharpness Evaluation of UAV Images Using Gradient Formula (Gradient 공식을 이용한 무인항공영상의 선명도 평가)

  • Lee, Jae One;Sung, Sang Min
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.1 / pp.49-56 / 2020
  • In this study, we analyzed the sharpness of UAV images using the gradient formula and produced a MATLAB GUI (Graphical User Interface)-based sharpness analysis tool for easy use. To verify the reliability of the proposed sharpness analysis method, the sharpness values of UAV images measured by the proposed method were compared with those measured by Agisoft's commercial software Metashape. When sharpness was measured with both tools on 10 UAV images, the sharpness values themselves differed for the same image. However, there was a constant bias of 0.11 ~ 0.20 between the two results, and the same sharpness was obtained once this bias was eliminated, which demonstrates the reliability of the proposed method. In addition, to verify the practicality of the proposed method, unsharp images were classified as low-quality, and the quality of orthoimages generated with and without the low-quality images was compared. The orthoimage that included the low-quality images could not be analyzed because the resolution targets were blurred. In contrast, the GSD (Ground Sample Distance) of the orthoimage that excluded the low-quality images was 3.2 cm with a bar target and 4.0 cm with a Siemens star, thanks to the clear resolution targets. This demonstrates the practicality of the proposed sharpness analysis method.
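A gradient-based sharpness score of the general kind described above can be sketched in a few lines. The paper's tool is MATLAB-based and its exact formula may differ; this Python version uses one common form, the mean gradient magnitude, and shows the constant-bias correction between two tools:

```python
import numpy as np

def gradient_sharpness(img):
    """Mean gradient magnitude as a sharpness score (higher = sharper).
    One common gradient-based measure; the paper's exact formula may differ."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def remove_bias(scores, bias):
    """Subtract a constant bias so two tools' scores become comparable."""
    return [s - bias for s in scores]

# A high-contrast stripe pattern scores higher than a low-contrast copy.
img = np.tile([0.0, 255.0], (16, 8))
print(gradient_sharpness(img) > gradient_sharpness(img * 0.2))
```

Once the constant bias between two tools is estimated on a set of images, subtracting it should align their scores, as the abstract reports.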

Accuracy Assessment of Aerial Triangulation of Network RTK UAV (네트워크 RTK 무인기의 항공삼각측량 정확도 평가)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.663-670 / 2020
  • In the present study, we assessed the accuracy of aerial triangulation using a UAV (Unmanned Aerial Vehicle) capable of network RTK (Real-Time Kinematic) survey, in a disaster scenario such as may occur in a semi-urban area mixed with buildings. For a reliable survey of the check points, they were installed on the roofs of buildings, and a static GNSS (Global Navigation Satellite System) survey was conducted for more than four hours. For objective accuracy assessment, coded aerial targets were installed on the check points so that they could be automatically recognized by the software. At each instant of image acquisition, the 3D coordinates of the UAV camera were measured using the VRS (Virtual Reference Station) method, a kind of network RTK survey, and the three axial angles were obtained from the IMU (Inertial Measurement Unit) and the gimbal rotation measurements. After estimating and updating the interior and exterior orientation parameters using Agisoft Metashape, the 3D RMSE (Root Mean Square Error) of aerial triangulation ranged from 0.102 m to 0.153 m depending on the combination of image overlap and image acquisition angle. To achieve higher aerial triangulation accuracy, incorporating oblique images proved more effective than the common practice of increasing the overlap of vertical images. Therefore, for UAV mapping at an urgent disaster site, it is advisable to acquire oblique images as well, rather than merely increasing image overlap.
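The 3D RMSE reported above can be computed from check-point residuals. A minimal sketch, assuming one common definition (root of the mean squared 3D distance; the paper may instead report per-axis RMSEs):

```python
import numpy as np

def rmse_3d(estimated, surveyed):
    """3D RMSE between estimated and surveyed check-point coordinates.

    Both inputs are N x 3 arrays of (X, Y, Z); the score is the square root
    of the mean squared 3D distance over the N check points."""
    d = np.asarray(estimated, float) - np.asarray(surveyed, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

Evaluating this per flight configuration (overlap, acquisition angle) reproduces the kind of comparison the abstract describes.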

Crack Inspection and Mapping of Concrete Bridges using Integrated Image Processing Techniques (통합 이미지 처리 기술을 이용한 콘크리트 교량 균열 탐지 및 매핑)

  • Kim, Byunghyun;Cho, Soojin
    • Journal of the Korean Society of Safety / v.36 no.1 / pp.18-25 / 2021
  • In many developed countries, such as South Korea, efficiently maintaining aging infrastructure is an important issue. Currently, inspectors visually inspect infrastructure for maintenance needs, but this method is inefficient due to its high cost, long logistics times, and hazards to the inspectors. Thus, in this paper, a novel crack inspection approach for concrete bridges is proposed using integrated image processing techniques. The proposed approach consists of four steps: (1) training a deep learning model to automatically detect cracks on concrete bridges, (2) acquiring in-situ images using a drone, (3) generating orthomosaic images based on 3D modeling, and (4) detecting cracks on the orthomosaic images using the trained deep learning model. Cascade Mask R-CNN, a state-of-the-art instance segmentation deep learning model, was trained on 3235 crack images that included 2415 hard negative images. We selected the Tancheon overpass, located in Seoul, South Korea, as a testbed for the proposed approach, and captured images of piers 34-37 and slabs 34-36 using a commercial drone. Agisoft Metashape was used as the 3D model generation program to produce an orthomosaic from the captured images. We applied the proposed approach to four orthomosaic images showing the front, back, left, and right sides of pier 37, and evaluated the trained Cascade Mask R-CNN's crack detection performance using pixel-level precision, with visual inspection of the captured images as the reference. At the coping on the front side of pier 37, the model obtained its best precision, 94.34%, and it achieved an average precision of 72.93% over the orthomosaics of the four sides of the pier. The test results show that the proposed crack detection approach can be a suitable alternative to conventional visual inspection.
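The pixel-level precision metric used above compares predicted crack pixels against a visual-inspection reference. A minimal sketch, assuming binary masks as inputs (function name is illustrative):

```python
import numpy as np

def pixel_precision(pred_mask, gt_mask):
    """Pixel-level precision: fraction of predicted crack pixels that are
    actual crack pixels in the reference mask."""
    pred = np.asarray(pred_mask, bool)
    gt = np.asarray(gt_mask, bool)
    tp = np.logical_and(pred, gt).sum()   # predicted crack, really crack
    fp = np.logical_and(pred, ~gt).sum()  # predicted crack, not crack
    return tp / (tp + fp) if (tp + fp) else 0.0
```

Averaging this score over the orthomosaics of the four pier sides yields an aggregate figure like the 72.93% reported.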

Estimation of the Lodging Area in Rice Using Deep Learning (딥러닝을 이용한 벼 도복 면적 추정)

  • Ban, Ho-Young;Baek, Jae-Kyeong;Sang, Wan-Gyu;Kim, Jun-Hwan;Seo, Myung-Chul
    • KOREAN JOURNAL OF CROP SCIENCE / v.66 no.2 / pp.105-111 / 2021
  • Rice lodging is an annual occurrence caused by typhoons accompanied by strong winds and heavy rainfall, resulting in damage related to pre-harvest sprouting during the ripening period. Rapid estimation of the lodged rice area is therefore necessary to enable timely responses to the damage. To this end, we obtained images of rice lodging using a drone in Gimje, Buan, and Gunsan, and converted them to 128 × 128 pixel images. A convolutional neural network (CNN) model, a deep learning model based on these images, was used to predict rice lodging, classified into two types (lodging and non-lodging), and the images were divided into a training set and a validation set in an 8:2 ratio. The CNN model was layered and trained using three optimizers (Adam, RMSprop, and SGD). The lodged rice area was evaluated for three fields using data excluded from the training and validation sets. The images were combined into composite images of the entire fields using Metashape, and these composites were divided into 128 × 128 pixel tiles. Lodging in the divided images was predicted using the trained CNN model, and the extent of lodging was calculated by multiplying the ratio of lodging images to the total number of field images by the area of the entire field. The results for the training and validation sets showed that accuracy increased as learning progressed and eventually exceeded 0.919. The results for each of the three fields showed high accuracy with all optimizers, among which Adam was the most accurate (normalized root mean square error: 2.73%). On the basis of these findings, it is anticipated that the lodged rice area can be rapidly predicted using deep learning.
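The area calculation described above is a single ratio: the fraction of tiles classified as lodging, multiplied by the field area. A minimal sketch (function name and units are illustrative):

```python
def lodged_area_m2(n_lodging_tiles, n_total_tiles, field_area_m2):
    """Estimate the lodged area: the fraction of 128 x 128 tiles classified
    as lodging by the CNN, multiplied by the total field area."""
    return n_lodging_tiles / n_total_tiles * field_area_m2

# e.g. 25 of 100 tiles classified as lodging in a 1-hectare field
print(lodged_area_m2(25, 100, 10000))  # 2500.0 square meters
```

Comparing this estimate with a ground-truth survey gives the normalized RMSE the abstract reports per optimizer.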