• Title/Summary/Keyword: Agisoft Metashape


Assessment of Parallel Computing Performance of Agisoft Metashape for Orthomosaic Generation (정사모자이크 제작을 위한 Agisoft Metashape의 병렬처리 성능 평가)

  • Han, Soohee; Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.6, pp.427-434, 2019
  • In the present study, we assessed the parallel computing performance of Agisoft Metashape for orthomosaic generation. Based on SfM (Structure from Motion) technology, Metashape can perform aerial triangulation, generate a three-dimensional point cloud, and produce an orthomosaic. Due to the nature of SfM, most of the processing time is spent on Align Photos, which performs relative orientation, and Build Dense Cloud, which generates the three-dimensional point cloud. Metashape can parallelize these two processes using multiple CPU (Central Processing Unit) cores and the GPU (Graphics Processing Unit). An orthomosaic was created from large UAV (Unmanned Aerial Vehicle) images under six conditions combining three parallelization methods (CPU only, GPU only, and CPU + GPU) and two operating systems (Windows and Linux). To assess the consistency of the results across conditions, the RMSE (Root Mean Square Error) of aerial triangulation was measured using ground control points that were detected automatically on the images without human intervention. The results of orthomosaic generation from 521 UAV images of 42.2 million pixels showed that the combination of CPU and GPU performed best on the present system, and that Linux outperformed Windows under all conditions. However, the RMSE values of aerial triangulation showed slight differences, within the error range, among the combinations. Therefore, Metashape still leaves something to be desired in obtaining consistent results regardless of the parallelization method and operating system.
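
The CPU/GPU combinations compared above can be reproduced through Metashape's Python scripting interface. The following is a minimal sketch, assuming the Metashape 1.x Python API (module `Metashape`, calls such as `matchPhotos`, `buildDenseCloud`, and the `gpu_mask` setting); exact method names and defaults differ between versions, and all file paths are hypothetical.

```python
import glob
import Metashape

# Select which processing devices to use.
# gpu_mask is a bit mask over the detected GPU devices (0 disables the GPU);
# cpu_enable keeps CPU cores working alongside the GPU (the CPU + GPU case).
Metashape.app.gpu_mask = 1
Metashape.app.cpu_enable = True

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("/data/uav_images/*.JPG"))  # hypothetical image folder

# Align Photos: feature matching and relative orientation.
chunk.matchPhotos(downscale=1)
chunk.alignCameras()

# Build Dense Cloud: depth maps and the 3D point cloud (the other parallel-heavy step).
chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud()

# DEM and orthomosaic generation from the dense cloud.
chunk.buildDem()
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

chunk.exportRaster("/data/orthomosaic.tif", source_data=Metashape.OrthomosaicData)
doc.save("/data/project.psx")
```

Switching the first two lines (e.g. `gpu_mask = 0` for CPU only, or `cpu_enable = False` with a non-zero mask for GPU only) reproduces the other parallelization conditions.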

Cloud Computing-Based Processing of Large Volume UAV Images Acquired in Disaster Sites (재해/재난 현장에서 취득한 대용량 무인기 영상의 클라우드 컴퓨팅 기반 처리)

  • Han, Soohee
    • Korean Journal of Remote Sensing, v.36 no.5_3, pp.1027-1036, 2020
  • In this study, a cloud-based processing method using Agisoft Metashape, a commercial software package, and Amazon Web Services, a cloud computing service, is introduced and evaluated for quickly generating high-precision 3D realistic data from large volumes of UAV images acquired at disaster sites. Compared with the on-premises method using a local computer and the cloud services provided by Agisoft and Pix4D, the processes of aerial triangulation, 3D point cloud and DSM generation, mesh and texture generation, and orthomosaic image production took similar amounts of time. The cloud method required additional time to upload and download the large volume of data, but it showed the clear advantage that in situ processing was practically possible. In both the on-premises and cloud methods, processing time depends on the performance of the CPU and GPU, but not as much as a performance benchmark would suggest. However, it was found that a laptop computer equipped with a low-performance GPU takes too much time to be applicable to in situ processing.
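
The upload/download overhead noted above is typically handled by transferring the image set to object storage before processing on a cloud instance. Below is a minimal sketch, assuming Amazon S3 accessed through the `boto3` library; the bucket name and local folder are hypothetical, and the actual workflow in the paper may differ.

```python
import glob
import os
import boto3

BUCKET = "uav-disaster-images"   # hypothetical bucket name
LOCAL_DIR = "/data/uav_images"   # hypothetical local image folder

s3 = boto3.client("s3")

# Upload every image so a cloud instance running Metashape can fetch them.
for path in glob.glob(os.path.join(LOCAL_DIR, "*.JPG")):
    key = "raw/" + os.path.basename(path)
    s3.upload_file(path, BUCKET, key)
    print(f"uploaded {path} -> s3://{BUCKET}/{key}")

# After processing finishes on the cloud instance, retrieve the orthomosaic.
s3.download_file(BUCKET, "results/orthomosaic.tif", "/data/orthomosaic.tif")
```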

Sharpness Evaluation of UAV Images Using Gradient Formula (Gradient 공식을 이용한 무인항공영상의 선명도 평가)

  • Lee, Jae One; Sung, Sang Min
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.38 no.1, pp.49-56, 2020
  • In this study, we analyzed the sharpness of UAV images using the gradient formula and produced a MATLAB GUI (Graphical User Interface)-based sharpness analysis tool for easy use. In order to verify the reliability of the proposed sharpness analysis method, sharpness values of the UAV images measured by the proposed method were compared with those measured by Agisoft's commercial software Metashape. When the sharpness of 10 UAV images was measured with both tools, the sharpness values themselves differed for the same image. However, there was a constant bias of 0.11 to 0.20 between the two results, and the same sharpness was obtained once this bias was eliminated. This proved the reliability of the proposed sharpness analysis method. In addition, in order to verify the practicality of the proposed method, unsharp images were classified as low-quality ones, and the quality of orthoimages generated including and excluding the low-quality images was compared. As a result, the quality of the orthoimage including low-quality images could not be analyzed because the resolution targets were blurred. However, the GSD (Ground Sample Distance) of the orthoimage excluding low-quality images was 3.2 cm with a bar target and 4.0 cm with a Siemens star, thanks to the clearly resolved targets. This demonstrates the practicality of the proposed sharpness analysis method.
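
As a rough illustration of a gradient-based sharpness measure (not the authors' exact MATLAB formula, which the abstract does not give), the sketch below computes the mean gradient magnitude of a grayscale image with NumPy; higher values indicate a sharper image. The image file names are hypothetical.

```python
import numpy as np
from PIL import Image

def gradient_sharpness(path: str) -> float:
    """Mean gradient magnitude of a grayscale image as a simple sharpness score."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    gy, gx = np.gradient(img)                 # finite-difference gradients per axis
    magnitude = np.sqrt(gx ** 2 + gy ** 2)    # per-pixel gradient magnitude
    return float(magnitude.mean())

# Hypothetical usage: score UAV frames and flag the least sharp ones for exclusion.
scores = {p: gradient_sharpness(p) for p in ["img_001.jpg", "img_002.jpg"]}
print(sorted(scores.items(), key=lambda kv: kv[1]))
```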

Accuracy Assessment of Aerial Triangulation of Network RTK UAV (네트워크 RTK 무인기의 항공삼각측량 정확도 평가)

  • Han, Soohee; Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.38 no.6, pp.663-670, 2020
  • In the present study, we assessed the accuracy of aerial triangulation using a UAV (Unmanned Aerial Vehicle) capable of network RTK (Real-Time Kinematic) surveying, in a disaster situation that may occur in a semi-urban area with a mixture of buildings. For a reliable survey of the check points, they were installed on the roofs of buildings, and a static GNSS (Global Navigation Satellite System) survey was conducted for more than four hours. For objective accuracy assessment, coded aerial targets were installed on the check points so that they could be recognized automatically by the software. At the instant of image acquisition, the 3D coordinates of the UAV camera were measured using the VRS (Virtual Reference Station) method, a kind of network RTK survey, and the three axial angles were obtained from the IMU (Inertial Measurement Unit) and the gimbal rotation measurement. After estimating and updating the interior and exterior orientation parameters with Agisoft Metashape, the 3D RMSE (Root Mean Square Error) of aerial triangulation ranged from 0.102 m to 0.153 m depending on the combination of image overlap and image acquisition angle. To obtain higher aerial triangulation accuracy, incorporating oblique images proved more effective than the common practice of increasing the overlap of vertical images. Therefore, for UAV mapping at an urgent disaster site, it is better to acquire oblique images together with vertical ones rather than simply improving image overlap.
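
The 3D RMSE reported above is computed from the differences between the adjusted and independently surveyed check-point coordinates. A minimal NumPy sketch, with hypothetical coordinate arrays, assuming RMSE over the 3D point-to-point distances:

```python
import numpy as np

def rmse_3d(estimated: np.ndarray, reference: np.ndarray) -> float:
    """3D RMSE over check points; both arrays have shape (n_points, 3) in metres."""
    diff = estimated - reference
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Hypothetical check-point coordinates (X, Y, Z) in a projected CRS.
estimated = np.array([[100.05, 200.12, 50.03],
                      [150.07, 210.15, 52.01]])
reference = np.array([[100.00, 200.00, 50.00],
                      [150.00, 210.00, 52.10]])
print(f"3D RMSE = {rmse_3d(estimated, reference):.3f} m")
```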

Crack Inspection and Mapping of Concrete Bridges using Integrated Image Processing Techniques (통합 이미지 처리 기술을 이용한 콘크리트 교량 균열 탐지 및 매핑)

  • Kim, Byunghyun; Cho, Soojin
    • Journal of the Korean Society of Safety, v.36 no.1, pp.18-25, 2021
  • In many developed countries, such as South Korea, efficiently maintaining aging infrastructure is an important issue. Currently, inspectors visually inspect the infrastructure for maintenance needs, but this method is inefficient due to its high cost, long logistics times, and hazards to the inspectors. Thus, in this paper, a novel crack inspection approach for concrete bridges is proposed using integrated image processing techniques. The proposed approach consists of four steps: (1) training a deep learning model to automatically detect cracks on concrete bridges, (2) acquiring in-situ images using a drone, (3) generating orthomosaic images based on 3D modeling, and (4) detecting cracks on the orthomosaic images using the trained deep learning model. Cascade Mask R-CNN, a state-of-the-art instance segmentation deep learning model, was trained with 3235 crack images that included 2415 hard negative images. We selected the Tancheon overpass, located in Seoul, South Korea, as a testbed for the proposed approach, and we captured images of piers 34-37 and slabs 34-36 using a commercial drone. Agisoft Metashape was used as the 3D model generation program to produce orthomosaics from the captured images. We applied the proposed approach to four orthomosaic images showing the front, back, left, and right sides of pier 37. The trained Cascade Mask R-CNN's crack detection performance was evaluated in terms of pixel-level precision, with visual inspection of the captured images as the reference. At the coping of the front side of pier 37, the model obtained its best precision, 94.34%, and it achieved an average precision of 72.93% over the orthomosaics of the four sides of the pier. The test results show that the proposed crack detection approach can be a suitable alternative to the conventional visual inspection method.
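
Pixel-level precision, as used above, is the fraction of pixels predicted as crack that are also crack in the reference annotation. A minimal NumPy sketch, assuming binary crack masks of equal size (the mask arrays below are hypothetical):

```python
import numpy as np

def pixel_precision(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Precision = true-positive crack pixels / all pixels predicted as crack."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    predicted_positive = pred.sum()
    if predicted_positive == 0:
        return 0.0
    true_positive = np.logical_and(pred, ref).sum()
    return float(true_positive / predicted_positive)

# Hypothetical 4x4 masks: 1 = crack pixel, 0 = background.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
ref  = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(f"pixel-level precision = {pixel_precision(pred, ref):.2%}")  # 3/3 = 100.00%
```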