
Comparison of the Mapping Accuracy of Construction Sites Using UAVs with Low-Cost Cameras

  • Jeong, Hohyun (Department of Spatial Information Engineering, Pukyong National University) ;
  • Ahn, Hoyong (Department of Climate Change and Agro-Ecology Division, National Institute of Agricultural Sciences) ;
  • Shin, Dongyoon (Disaster Scientific Investigation Division, National Disaster Management Research Institute) ;
  • Choi, Chuluong (Department of Spatial Information Engineering, Pukyong National University)
  • Received : 2018.10.12
  • Accepted : 2018.10.22
  • Published : 2019.02.28

Abstract

The advent of a fourth industrial revolution, built on advances in digital technology, has coincided with studies using various unmanned aerial vehicles (UAVs) being performed worldwide. However, the accuracy of different sensors and their suitability for particular research studies are factors that need to be carefully evaluated. In this study, we evaluated UAV photogrammetry using smart technology. To assess the performance of digital photogrammetry, the accuracy of common procedures for generating orthomosaic images and digital surface models (DSMs) was measured against terrestrial laser scanning (TLS) data. Two types of non-surveying camera (a smartphone camera and a fisheye camera) were attached to the UAV platform. For the fisheye camera, lens distortion was corrected by considering the characteristics of the lens. The accuracy of the generated orthoimages and DSMs was comparatively analyzed using aerial and TLS data. The accuracy comparison proceeded as follows. First, the orthomosaic images were used to compare the check points and the area of a reference target. In addition, the vertical errors of each camera's DSM were compared and analyzed against the TLS data. In this study, we propose and evaluate the feasibility of UAV photogrammetry for acquiring 3D spatial information at low cost on a construction site.

Keywords

1. Introduction

The high accuracy of elevation measurements from bare-earth LiDAR digital terrain model (DTM) data means that this method is preferred for local-scale mapping and three-dimensional (3D) modeling studies. However, in many cases, the high acquisition costs associated with this procedure can limit the geographical area that it is practical to cover (Gruszczyński, 2017).

Good-quality DTMs can also be generated for relatively small areas using Global Navigation Satellite System (GNSS) survey data, particularly if the elements of the terrain are simple and high-density ground sampling is unnecessary (Coveney, 2010). In recent years, increases in computational power, together with multispectral 3D data collected by UAVs and the structure from motion (SfM) (Dandois, 2015) and multi-view stereo (MVS) (Seitz, 2006) techniques of digital photogrammetry and computer vision, have propelled the application of UAV photogrammetry to image-based surface reconstruction. The potential for generating on-demand photogrammetric DEM data from UAV photogrammetry systems (Remondino, 2011) offers new opportunities for DEM-based environmental modeling and mapping.

In this paper, we present and demonstrate an innovative and extremely cost-efficient aerial monitoring and surveillance platform, based on the integration of open-source, small UAVs with highly capable, compact, and lightweight smartphones. We investigated the inbuilt sensors in smartphones, such as the camera, GPS, gyroscope, and accelerometer, in the interests of integrating their computing and sensing capabilities with the UAV in an extremely cost-effective way. Such systems are more cost-efficient than traditional photogrammetry and can be used to collect real-time data as overlapping low-altitude images (Chiabrando, 2011; Eisenbeiss, 2006; Lambers, 2007).

Several SfM photogrammetry programs are available. Among the more commonly used proprietary solutions are PhotoScan, APS and Pix4D, which have become popular owing to their user-friendliness and readily available customer support. These software solutions yield similar 3D reconstruction results. However, there are three main disadvantages: the 3D reconstructions of the models are not always sufficiently clear for photogrammetric applications (Tscharf, 2015); in 3D reconstructions, details including the sides of buildings and sharp corners and edges may be inaccurately rendered (Schwind, 2016); and not all software packages provide sufficient information for judging a reconstruction's completeness (Rumpler, 2017). In acknowledgement of these limitations, this study conducted experiment design, image processing and accuracy evaluation using a UAV. Several studies have used UAVs, but the quality of the research and the results have depended on the commercial program; mere presentation of the results is insufficient, for if the process has not been afforded due consideration, errors may be overlooked.

In this study, we performed UAV photogrammetry using smart technology, evaluated the accuracy and applicability of popular sensors used to produce orthomosaic images and DSMs, and compared the results with TLS data. The use of a smartphone in this test was intended as an experimental examination of the practical applicability of smartphone drones, which have undergone significant development recently. In sum, the purpose of the experiment was to evaluate the accuracy of the results obtained using a smartphone, in comparison with those obtained using TLS.

2. Methodology

1) UAV Photogrammetry System

UAV photogrammetry costs less than conventional aerial photogrammetric methods and offers researchers a variety of options with regard to mapping applications. It is also associated with swift and easy field data acquisition for precision applications (Candiago, 2015). Smartphone and smart camera technologies have recently advanced significantly. These advances, together with developments in information/communication technologies and micro-electromechanical systems (MEMS) sensors, have enhanced the options available for UAV photogrammetry.

Our study mounted two types of relatively low-cost, non-metric cameras on a UAV to assess the accuracy of the resulting orthomosaic images and DSMs, and to evaluate the suitability of these setups for UAS photogrammetry. An application that adapts to the ambient light level and automatically generates overlapping images was developed for use with the smartphones and smart cameras used in the UAV photography. This application uses smart-camera technology to automatically capture the targets specified by the user.

The specifications and performance parameters of the UAV used to acquire the images are presented in Table 1. The UAV used a coaxial motor, and, as such, was easy to maintain and highly efficient for photogrammetry over a small area (Fig. 1). A flight control unit (FCU) is an integral component of all UAVs, and an inertial measurement unit (IMU) assesses the direction of flight, monitors acceleration, and functions as a barometric altimeter.

Table 1. UAV Photogrammetry System Specification


Fig. 1. The anti-vibration mount designed for the camera.

To evaluate their efficacy for photogrammetry, we attached two different cameras to the UAV: a smartphone camera and a fisheye camera (Fig. 2). The Samsung Galaxy S6 Edge camera (Fig. 2 (a)) uses optical image stabilization and auto-exposure real-time high-dynamic-range technologies. The camera also has a relatively large aperture (f/1.9) that allows the sensor to receive 34% more light than the Galaxy S5. The S6 Edge has a resolution of 16 MP (5,312 × 2,988), a focal length of 4.3 mm, and a pixel size of 1.120 μm (Samsung Galaxy S6 Edge, 2016). The GoPro Hero3 Black (Fig. 2 (b)) is a fisheye camera with an ultra-sharp f/2.8 six-element aspherical glass lens that uses an ultra-wide angle to reduce image distortion. This camera has a resolution of 12 MP (3,000 × 4,000), a focal length of 2.77 mm, and a pixel size of 1.55 μm (Balletti, 2014).


Fig. 2. The cameras: (a) smartphone camera (S6 Edge), (b) fisheye camera (GoPro).

2) Study Area and Data Collection

This study was conducted in an industrial complex (Fig. 3) currently under construction in Yeongdeok-gun, Gyeongsangbuk-do, South Korea. The complex is located at an elevation of 120 m above mean sea level, and the construction area measured 328,260 m² in total. Overall construction at the site was 65% complete, with 85% of the earthworks finished. The study area included piles of gravel and sand for construction, and the fluctuations in their relief rendered the terrain ideal for evaluation using DSMs generated from UAV images.


Fig. 3. Study area and flight paths.

Fig. 3 shows the base positions for TLS, the TLS targets, the CPs, and the GCP positions in the study area. GCPs were used for georeferencing the model. A minimum of three GCPs is required to scale, rotate, and locate the model, and each should be checked in at least two images. If there are abundant GCPs in a given project, some may be used as CPs to assess the accuracy of the project (Tong, 2015). GCPs improve the relative and absolute accuracy of a model.

82 GCPs on the outskirts of the study area were selected and used for the study. The red zone was used to check the accuracy of each camera's orthomosaic; within it, the positional accuracy of the orthorectified images was checked against CPs (each equivalent in size to a manhole cover). Landforms that had changed were designated as the blue zone. To verify the accuracy of the DSMs, an area with no change in landform relief was designated as the green zone.

The UAV was used to collect the image data on 9 July 2016 with the anti-vibration mount. The S6 Edge had flight times of approximately 3 min. The GoPro camera followed a longer flight path, requiring a flight time of approximately 10 min. Approximately 80 images were taken altogether with settings of 80% overlap and 60% sidelap.

Taking the instantaneous field of view into account, the time-lapse intervals of each camera were: smartphone, 4-5 s; fisheye camera, 7 s. The weather was clear during the UAV flights, with visibility of approximately 16 km and a wind velocity of approximately 2 m/s. As shown in Table 2, exposure conditions were determined by the ambient light level. The ISO setting was normal (100-200) and the F-stop value was approximately 2. All images captured were unaffected by blurring or the jello effect.

Table 2. Camera settings and environmental conditions


As shown in Table 2, the UAV was controlled automatically to maintain an altitude of 150 m following take-off. Ground sample distance (GSD) was determined by pixel size and focal length according to Equation 1. GSD represents the distance between two consecutive pixel centers measured on the ground. Since the orthomosaic was generated using a 3D point cloud and the camera positions, an average GSD was calculated and applied.

\(\mathrm{GSD}=\mathrm{P} \frac{\mathrm{H}}{\mathrm{C}}\)       (1)

where P is the pixel size, H is the flying altitude, and C is the camera focal length.

The GSD values were 2.5 cm for the S6 Edge and 8.84 cm for the fisheye camera. The GSD of the fisheye camera, which had the shortest focal length, was the greatest.
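As a minimal illustration of Equation 1, the following Python sketch (the function name and unit handling are ours) computes the GSD from the nominal camera parameters given above. Note that values computed this way from the nominal flying height can differ slightly from the software-reported averages, which Pix4D derives from the full 3D reconstruction.

```python
def gsd_m(pixel_size_um: float, altitude_m: float, focal_mm: float) -> float:
    """GSD = P * H / C (Equation 1), with unit conversions to metres."""
    return (pixel_size_um * 1e-6) * altitude_m / (focal_mm * 1e-3)

# Fisheye camera: 1.55 um pixels, 2.77 mm focal length, 150 m flying height
print(f"{gsd_m(1.55, 150, 2.77):.4f} m")  # ~0.0839 m, in line with the reported 8.84 cm
```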

In this study, TLS data were used to generate DSMs for the study area, and the UAV images were used to verify the horizontal/vertical accuracy of the DSMs. A GLS-1000 system was used for TLS, and scans were performed at two stations to ensure the accuracy of the DSMs across the study area. GLS-1000 point accuracy is 4 mm at 150 m. The raw scan data consisted of approximately 3,739,257 points, which included all signals reflected by the ground, objects, trees, or grass. Superfluous data were discarded during processing, and the final scan dataset included approximately 2,365,839 points (Fig. 4). The point density of the scan dataset was approximately 100 points/m².


Fig. 4. Results from the GLS-1000 scan after registering and georeferencing the point cloud.

GPS measurements were taken for the GCPs and CPs using real-time kinematic (RTK) GPS. To improve the positional accuracy, the 82 GCPs illustrated in Fig. 3 were included, and measurements were taken using a Sokkia GRX-1 GPS. The inbuilt accuracy of the GRX-1 GNSS receiver is 10 mm + 1 ppm × horizontal distance (km) and 15 mm + 1 ppm × vertical distance (km). Each point was measured four times. The relative measurements implied that the root-mean-square error (RMSE) was 12-21 mm (horizontal) and 5-7 mm (vertical). However, the expected absolute accuracy of an RTK survey is around 2 cm horizontally and 3-5 cm vertically.

3. Results and Discussion

1) Camera Calibration

A camera lens cannot have ideal curvature; consequently, rays passing through the lens do not travel to the image plane in perfectly straight lines. The resulting lens distortion must therefore be corrected. Accurate interior orientation (IO) parameters are required to extract accurate and reliable 3D position information by photogrammetry, and the process of determining these parameters is known as camera calibration.

\(\left(\begin{array}{l} \Delta \mathrm{x} \\ \Delta \mathrm{y} \end{array}\right)=\left(\begin{array}{l} \left(1+\mathrm{K}_{1} \mathrm{r}^{2}+\mathrm{K}_{2} \mathrm{r}^{4}+\mathrm{K}_{3} \mathrm{r}^{6}\right) \mathrm{x}+2 \mathrm{P}_{1} \mathrm{xy}+\mathrm{P}_{2}\left(\mathrm{r}^{2}+2 \mathrm{x}^{2}\right) \\ \left(1+\mathrm{K}_{1} \mathrm{r}^{2}+\mathrm{K}_{2} \mathrm{r}^{4}+\mathrm{K}_{3} \mathrm{r}^{6}\right) \mathrm{y}+2 \mathrm{P}_{2} \mathrm{xy}+\mathrm{P}_{1}\left(\mathrm{r}^{2}+2 \mathrm{y}^{2}\right) \end{array}\right)\)       (2)

where Δx, Δy are the deviations of the coordinates x, y due to distortion; r² = x² + y²; and K1, K2, K3 and P1, P2 are the radial and tangential lens distortion parameters, respectively. This forms the basis of the Pix4D model, in which the radial distortion terms take the form \(R_x = K_x r^{2x+1}\) (Leica Geosystems, 2008).
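A minimal NumPy sketch of Equation 2 follows. The coordinate convention (normalized image coordinates centred on the principal point) and the coefficient values are illustrative assumptions, not the calibrated values from this study.

```python
import numpy as np

def brown_distortion(x, y, K1, K2, K3, P1, P2):
    """Apply the radial/tangential model of Equation 2 to normalized
    image coordinates (x, y); returns the distorted coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx = radial * x + 2.0 * P1 * x * y + P2 * (r2 + 2.0 * x * x)
    dy = radial * y + 2.0 * P2 * x * y + P1 * (r2 + 2.0 * y * y)
    return dx, dy

# Hypothetical coefficients: K1 < 0 gives the barrel distortion typical
# of wide-angle lenses; points far from the centre move the most.
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
dx, dy = brown_distortion(x, y, K1=-0.25, K2=0.05, K3=0.0, P1=1e-4, P2=-1e-4)
```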

Fig. 5 shows that the S6 Edge camera and the GP (GoPro camera) raw data had significantly increased distortion levels with increasing radial distance, compared with the other data. The S6 Edge camera does not have a function limiting image distortion. The results of the self-calibration and IO calculations demonstrate that the differences between the initial and calibrated focal lengths were as follows: 19.55% → 0% for the S6 Edge; 50.16% → 0.84% for the GP raw data; and 2.13% → 0.75% for the GP corrected data. For effective optimization, each camera should operate at no more than 5% from its optimal performance value (Pix4D Support, 2016). The lens distortion ratios, based on the comparison between the various cameras, are presented in Fig. 5. The greatest distortion ratio was observed in the GP raw data. The maximum level of distortion in the GP raw file was 261.7, which was 654× greater than that in the corrected image.


Fig. 5. Lens distortion rates: (a) S6 Edge and corrected fisheye camera, and (b) raw fisheye camera.

In particular, smartphone cameras commonly use low-priced single lenses, which exhibit large lens distortion compared with surveying cameras. Moreover, the GoPro used in this study has a super-wide-angle lens, made using spherical aberration, with an angle of view of 180° or larger. Such a lens shows negative (barrel) distortion within a limited range of image size, and this distortion is greater than that of a general camera lens. Accordingly, lens calibration was performed on the raw files in this study.
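One way to perform such a raw-file correction is with OpenCV's fisheye module, shown below as an illustrative sketch. The intrinsic matrix, distortion coefficients, and file names are placeholders, not the calibrated GoPro values from this study.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these are estimated with
# cv2.fisheye.calibrate() from checkerboard images taken at the
# same resolution as the survey photographs.
K = np.array([[1050.0,    0.0, 2000.0],
              [   0.0, 1050.0, 1500.0],
              [   0.0,    0.0,    1.0]])
D = np.array([-0.25, 0.08, -0.01, 0.001])  # fisheye coefficients k1..k4

img = cv2.imread("gopro_raw.jpg")  # hypothetical input file
h, w = img.shape[:2]

# Compute an undistortion map and remap the raw image onto it
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
corrected = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("gopro_corrected.jpg", corrected)
```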

Fig. 6 presents images that visually show the amount of distortion for the two types of camera: Fig. 6 (a) shows the raw smartphone image and (b) the smartphone image after distortion calibration; likewise, (c) and (d) are the corresponding fisheye camera images.


Fig. 6. The result of the lens correction process can be seen when the original image (left) is compared to the image without lens distortion (right).

2) Orthoimage and DSM Generation

Owing to recent advances in UAS technology, the SfM method can process hundreds of UAS photographs within a relatively short timeframe. Additionally, because SfM can be applied using non-metric cameras, it produces effective DSMs and 3D models of geographic features from UAS photogrammetry, and it has been implemented in various software programs. During the initial processing in the Pix4D program used in this study, a binary descriptor of the scale-invariant feature transform (SIFT) algorithm (Lowe, 2004), similar to that of Strecha et al. (2012), was used to extract and subsequently match features from the photographs (Küng, 2011); a minimal open-source analogue of this step is sketched below. Based on these matches and the GCP data, Pix4D performed an iterative routine of camera self-calibration, automatic aerial triangulation (AAT) and block bundle adjustment (BBA) to determine and optimize the interior and exterior parameters. The camera's exterior orientation parameters were determined using the extracted image coordinates and collinearity conditions, and a precise 3D point cloud was generated by bundle adjustment. Following the initial processing, maximum point cloud densification was performed based on multi-view stereo (MVS) algorithms (Seitz, 2006), and orthoimages and DSMs were generated (Fig. 7).
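The sketch below uses OpenCV's SIFT implementation with Lowe's ratio test as a stand-in for Pix4D's proprietary binary descriptor; the image file names are hypothetical.

```python
import cv2

img1 = cv2.imread("uav_frame_001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("uav_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints and 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: accept a match only if it is clearly better than
# the second-best candidate, which suppresses ambiguous tie points.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate tie points between the two frames")
```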


Fig. 7. The orthomosaic image.

3) Horizontal Accuracy Evaluation

Accuracy assessments using x- and y-coordinate data were performed on the surveyed points that were not used for georeferencing. The difference in the RMSE of the GP data with and without lens correction was approximately 0.015 m. This difference may be because the images collected by the GoPro camera were captured using a fisheye lens; a fisheye lens causes distortion at the edges of the images, where the focus is difficult to adjust. When the RMSE values for the GCPs and CPs were converted into GSDs and compared according to pixelation, the S6 Edge camera's error was approximately 2 pixels. This was likely due to limitations in the performance of the relatively inexpensive lenses used in these smartphones. Following the accuracy assessment using the GCPs and CPs, the scale and georeferencing of the orthomosaics generated from the UAV imagery were analyzed. Accuracy was assessed in terms of shifts, which indicated errors in georeferencing, and changes in area, which indicated errors in scale. To evaluate the scale and georeferencing parameters, vector data from a previous conventional topographic survey were used. CP errors were expressed using graduated symbols, as illustrated in Fig. 8.
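The RMSE computation behind such an assessment can be sketched as follows; this is a generic formulation (the function and array names are ours), where the residuals come from comparing GPS-surveyed CP coordinates with the same points measured on each orthomosaic.

```python
import numpy as np

def rmse_xy(measured: np.ndarray, reference: np.ndarray):
    """Per-axis and combined horizontal RMSE for N x 2 arrays of
    (x, y) coordinates in metres."""
    d = measured - reference
    rmse_x, rmse_y = np.sqrt(np.mean(d**2, axis=0))
    return rmse_x, rmse_y, np.hypot(rmse_x, rmse_y)
```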


Fig. 8. Errors and error sizes in the UAV-generated mosaic data (Error sizes are shown in meters).

Using the orthoimages generated by each camera, reference targets were selected to compare surface-area accuracy. The results demonstrate that, against the reference surface area (target size: 2.011 m²), the averages/standard deviations (SDs) of the differences were: 0.059 m²/0.060 m² for the S6 Edge camera; 0.133 m²/0.070 m² for the GP raw data; and 0.211 m²/0.057 m² for the GP corrected data.

The images from the S6 Edge camera exhibited various divergences from the average target size of 0.6 m² or less. Images from the GP camera had a minimum target-size difference of 0.133 m², owing to the camera's low resolution, which differed markedly from the S6 Edge camera. The GoPro camera also generated images with significant differences in scaling, and there was a clear shift in the NW direction (Fig. 9). This shift probably occurred because the image overlap was smaller at the NW edge than at the center, resulting in poorer 3D reconstruction and diminishing the model's accuracy.
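A hedged sketch of the surface-area check: digitize each target outline from the orthomosaic and compare the polygon area with the known target size of 2.011 m². Shapely is assumed as the geometry library, and the coordinates below are hypothetical.

```python
from shapely.geometry import Polygon

REFERENCE_AREA = 2.011  # m^2, known size of the reference target

# Hypothetical target outline digitized from an orthomosaic (metres)
target = Polygon([(0.00, 0.00), (1.45, 0.02), (1.43, 1.41), (-0.02, 1.39)])
error = target.area - REFERENCE_AREA
print(f"area {target.area:.3f} m^2, error {error:+.3f} m^2")
```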


Fig. 9. Camera polygon accuracy.

4) Vertical Accuracy Evaluation

To evaluate the accuracy of the UAV-SfM procedure, this study used TLS to assess volumes and generate a more accurate comparison between the results. A zone without any topographic change (the green zone in Fig. 3) was selected to confirm the accuracy of the DSMs, and the TLS data were compared with the DSM of each camera. Using the TLS reference data, a quantitative evaluation of the absolute vertical differences between the DSM and UAV-SfM data was performed, and the results are illustrated as a histogram in Fig. 10.


Fig. 10. The DSM and difference validation histogram for the LiDAR site.

The differences between the TLS values and those generated by each camera (mean/SD) were: 0.048 m²/0.043 m² for the S6 Edge; 0.018 m²/0.040 m² for the GP raw data; and -0.020 m²/0.031 m² for the GP corrected data. The histogram distribution shows that, with the exception of the S6 Edge camera data, the differences between the DSMs generated by each camera and the TLS data were less than 1 pixel GSD for 75% of the data. The RMS values were similar to the SDs, but the S6 Edge camera exhibited a significant difference. The GP raw and GP corrected data followed quasi-Gaussian curves, and the S6 Edge camera data had a moderately positive tail. The fit was very close to unity for all camera comparisons, with the exception of the S6 Edge data. These results demonstrate that the surfaces had excellent vertical accuracy.
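The per-cell comparison underlying these statistics can be sketched with NumPy, assuming the UAV and TLS DSMs have already been resampled to a common, co-registered grid (a step typically handled with GDAL or rasterio):

```python
import numpy as np

def dsm_diff_stats(dsm_uav: np.ndarray, dsm_tls: np.ndarray):
    """Mean, SD, and RMS of vertical differences over the valid cells
    of two co-registered DSM grids; NaN marks nodata cells."""
    diff = dsm_uav - dsm_tls
    valid = diff[np.isfinite(diff)]
    # np.histogram(valid, bins=50) would give the distribution in Fig. 10
    return valid.mean(), valid.std(), np.sqrt(np.mean(valid**2))
```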

Changes in the topography are illustrated in Fig. 11, which shows the DSM differences (red) between each camera and the TLS, based on changes in the elevation of the topography (black). Compared with the flat, unchanging topography, for which accuracy was verified, the areas of topographic change revealed a high standard deviation in the DSM. For the area of changing topography, the differences between the TLS values and those generated by each camera (mean/SD) were: 0.085 m²/0.065 m² for the S6 Edge; 0.067 m²/0.130 m² for the GP raw data; and -0.030 m²/0.063 m² for the GP corrected data. The inclined topography revealed a high standard deviation of the DSM compared with the flat topography. In the case of the GP raw data from the fisheye lens, the SD over the inclined topography varied by more than a factor of two compared with the GP corrected data, implying that the distortion deviation was higher over inclined topography. Topographic surveys are more expensive using TLS than using UAVs, and although GNSS surveys are similar in cost to UAV scans, the resulting DSMs for a similar field effort are less detailed.


Fig. 11. The DSM and difference validation for the LiDAR site (topography changed).

4. Conclusion

UAV photogrammetry is a new and valuable tool for rapidly providing low-cost image data over a small geographical area. An autopilot system guarantees a predetermined flight path, and the camera is also controlled automatically. Interest in using UAVs for digital photogrammetry continues to increase (Ruzgienė et al., 2015). UAV photogrammetry has many advantages and can be used for a variety of terrains; however, it is important to ensure that the equipment used is reliable.

In this paper, we verified the usability of UAV photogrammetry with two non-surveying cameras. In the bundle adjustment, the IO calculation by self-calibration showed that the percentage difference between the initial and calibrated focal lengths was 19.55% for the S6 Edge; for the fisheye camera, it was 50.16% before correction and 2.13% after correction. The values differ depending on whether the camera has a distortion correction function. In other words, it was confirmed that the camera's distortion correction function affects the output.

Georeferencing accuracy was confirmed using the GCPs and CPs. The RMSEs were 0.061 m for the S6 Edge, 0.124 m for the GP raw data, and 0.084 m for the GP corrected data. Calculating the area of a specific target in the orthoimages, each camera's average (m²)/SD (m²) was 0.059/0.060 for the S6 Edge, 0.133/0.070 for the GP raw data, and 0.211/0.057 for the GP corrected data. Since the area estimation is calculated visually, the GP, which had the lowest resolution, showed the biggest difference.

In the DSM height-accuracy evaluation, the UAV DSMs were evaluated against the TLS data by point-by-point comparison. The mean (m²) and standard deviation (m²) of the differences from TLS for each camera were -0.039/0.068 for the S6 Edge, -0.020/0.041 for the GP raw data, and 0.019/0.031 for the GP corrected data.

UAV photogrammetry is low-cost, user-friendly, and ideally suited to small- and medium-sized construction sites. However, to utilize geospatial information systems in the monitoring and management of civil engineering projects, 3D spatial information from a constantly changing construction site needs to be acquired and processed rapidly. The experience gained with the developed UAV system will be useful to researchers or practitioners seeking to adapt UAV technology for their own applications.

Acknowledgements

This work was supported by a Research Grant of Pukyong National University (2017).

References

  1. Balletti, C., F. Guerra, V. Tsioukas, and P. Vernier, 2014. Calibration of action cameras for photogrammetric purposes, Sensors, 14(9): 17471-17490. https://doi.org/10.3390/s140917471
  2. Chiabrando, F., F. Nex, D. Piatti, and F. Rinaudo, 2011. UAV and RPV systems for photogrammetric surveys in archaelogical areas: two tests in the Piedmont region (Italy), Journal of Archaeological Science, 38(3): 697-710. https://doi.org/10.1016/j.jas.2010.10.022
  3. Coveney, S., A. S. Fotheringham, M. Charlton, and T. McCarthy, 2010. Dual-scale validation of a medium-resolution coastal DEM with terrestrial LiDAR DSM and GPS, Computers & Geosciences, 36(4): 489-499. https://doi.org/10.1016/j.cageo.2009.10.003
  4. Candiago, S., F. Remondino, M. De Giglio, M. Dubbini, and M. Gattelli, 2015. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images, Remote Sensing, 7(4): 4026-4047. https://doi.org/10.3390/rs70404026
  5. Dandois, J. P., M. Olano, and E. C. Ellis, 2015. Optimal altitude, overlap, and weather conditions for computer vision UAV estimates of forest structure, Remote Sensing, 7(10): 13895-13920. https://doi.org/10.3390/rs71013895
  6. Eisenbeiss, H. and L. Zhang, 2006. Comparison of DSMs generated from mini UAV imagery and terrestrial laser scanner in a cultural heritage application, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(5): 90-96.
  7. Gruszczynski, W., W. Matwij, and P. Cwiakala, 2017. Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation, ISPRS Journal of Photogrammetry and Remote Sensing, 126: 168-179. https://doi.org/10.1016/j.isprsjprs.2017.02.015
  8. Kung, O., C. Strecha, A. Beyeler, J. C. Zufferey, D. Floreano, P. Fua, and F. Gervaix, 2011. The accuracy of automatic photogrammetric techniques on ultra-light UAV imagery, Proc. of UAV-g 2011-Unmanned Aerial Vehicle in Geomatics, Zurich, CH, Sep. 14-16, no. EPFL-CONF-168806.
  9. Lambers, K., H. Eisenbeiss, M. Sauerbier, D. Kupferschmidt, T. Gaisecker, S. Sotoodeh, and T. Hanusch, 2007. Combining photogrammetry and laser scanning for the recording and modelling of the Late Intermediate Period site of Pinchango Alto, Palpa, Peru, Journal of Archaeological Science, 34(10): 1702-1712. https://doi.org/10.1016/j.jas.2006.12.008
  10. Leica Geosystems, 2008. Leica ALS60 Airborne Laser Scanner Product Specifications, Leica Geosystems AG, Heerbrugg, Switzerland.
  11. Lowe, D. G., 2004. Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60(2): 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  12. Pix4D Support, 2016. Quality Report Help, https://support.pix4d.com/hc/en-us/articles/202558689#label102&gsc.tab=0, Accessed on Dec. 20, 2016.
  13. Remondino, F., L. Barazzetti, F. Nex, M. Scaioni, and D. Sarazzi, 2011. UAV photogrammetry for mapping and 3d modeling-current status and future perspectives, Proc. of International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Sep. 14-16, vol. XXXVIII-1/C22, pp. 25-31.
  14. Ruzgienė, B., T. Berteska, S. Gecyte, E. Jakubauskienė, and V. C. Aksamitauskas, 2015. The surface modelling based on UAV Photogrammetry and qualitative estimation, Measurement, 73: 619-627. https://doi.org/10.1016/j.measurement.2015.04.018
  15. Rumpler, M., A. Tscharf, C. Mostegel, S. Daftry, C. Hoppe, R. Prettenthaler, F. Fraundorfer, G. Mayer, and H. Bischof, 2017. Evaluations on multi-scale camera networks for precise and geo-accurate reconstructions from aerial and terrestrial images with user guidance, Computer vision and image understanding, 157: 255-273. https://doi.org/10.1016/j.cviu.2016.04.008
  16. Samsung Galaxy S6 Edge, 2016. Specifications. Available online, http://www.samsung.com/us/support/owners/product/galaxy-s6-edgeverizon, Accessed on Dec. 14, 2016.
  17. Seitz, S. M., B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, 2006. A comparison and evaluation of multi-view stereo reconstruction algorithms, Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, Jun. 17-22, pp. 519-528.
  18. Strecha, C., A. Bronstein, M. Bronstein, and P. Fua, 2012. LDAHash: Improved matching with smaller descriptors, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1): 66-78. https://doi.org/10.1109/TPAMI.2011.103
  19. Schwind, M., 2016. Comparing and characterizing three-dimensional point clouds derived by structure from motion photogrammetry, Texas A&M University, Texas, USA.
  20. Tscharf, A., M. Rumpler, F. Fraundorfer, G. Mayer, and H. Bischof, 2015. On the use of UAVs in mining and archaeology-geo-accurate 3d reconstructions using various platforms and terrestrial views, Proc. of ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Toronto, Aug. 30-Sep. 2, vol. 2-1/W1, pp. 15-22.
  21. Tong, X., X. Liu, P. Chen, S. Liu, K. Luan, L. Li, S. Liu, X. Liu, H. Xie, Y. Jin, and Z. Hong, 2015. Integration of UAV-based photogrammetry and terrestrial laser scanning for the three-dimensional mapping and monitoring of open-pit mine areas, Remote Sensing, 7(6): 6635-6662. https://doi.org/10.3390/rs70606635
