• Title/Summary/Keyword: Bundle Adjustment (번들조정)

Search results: 50

Application of Smartphone Camera Calibration for Close-Range Digital Photogrammetry (근접수치사진측량을 위한 스마트폰 카메라 검보정)

  • Yun, MyungHyun;Yu, Yeon;Choi, Chuluong;Park, Jinwoo
    • Korean Journal of Remote Sensing, v.30 no.1, pp.149-160, 2014
  • Recently, studies on application development and utilization of the sensors and devices embedded in smartphones have flourished at home and abroad. This study aimed to analyze the accuracy of smartphone images for determining the three-dimensional positions of close-range objects, and to evaluate their feasibility for use, prior to the development of a smartphone-based photogrammetric system. First, camera calibration was conducted for autofocus and infinite-focus modes. Distortion models with both balanced and unbalanced systems were used to determine the lens distortion coefficients; the calibration results for 16 projects showed that in all cases the RMS error from bundle adjustment was less than 1 mm. For the S and S2 models, the distortion curves for autofocus and infinite focus were nearly identical, so the change in distortion pattern with focus mode can be judged to be very small. Comparisons between autofocus and infinite focus, and between the software packages used for multi-image processing, showed standard deviations of less than ±3 mm in all cases; it is judged that the focus mode and the distortion model make little difference to the determined three-dimensional positions. Lastly, the checkpoint coordinates surveyed by total station were taken as the most probable values and the checkpoint coordinates determined in each project as the observed values, and residual statistics were computed for the individual methods. All projects showed relatively large errors in the Y direction (the object-distance direction) compared with the X and Z directions. In terms of the accuracy of three-dimensional positioning for close-range objects, smartphone cameras are therefore judged to be sufficiently feasible.
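As a rough illustration of the calibration step described in this abstract (not the authors' balanced/unbalanced distortion workflow), the sketch below estimates a smartphone camera's interior orientation and lens distortion coefficients from checkerboard images with OpenCV. The checkerboard layout, square size, and image folder are assumptions.

```python
# Minimal calibration sketch: recover focal length, principal point and lens
# distortion coefficients from checkerboard views (folder and pattern assumed).
import glob
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners of the checkerboard (assumed layout)
SQUARE_SIZE = 0.025       # metres, assumed square size

# 3D object points of the flat checkerboard in its own plane (Z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# calibrateCamera jointly refines the interior orientation and the
# radial/tangential distortion coefficients over all views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
print("camera matrix:\n", K)
print("distortion coefficients (k1 k2 p1 p2 k3):", dist.ravel())
```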

Camera Tracking Method based on Model with Multiple Planes (다수의 평면을 가지는 모델기반 카메라 추적방법)

  • Lee, In-Pyo;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of Korea Game Society, v.11 no.4, pp.143-149, 2011
  • This paper presents a novel camera tracking method based on a model with multiple planes. The proposed algorithm detects a QR code, one of the most popular types of two-dimensional barcode, and a 3D model is imported from the detected QR code for the augmented reality application. Based on the geometric properties of the model, the vertices are detected and tracked using optical flow, and a clipping algorithm is applied to identify each plane among the model surfaces. The proposed method estimates a homography from coplanar feature correspondences, which is used to obtain the initial camera motion parameters. After deriving linear equations from many feature points on the model and their 3D information, DLT (Direct Linear Transform) is employed to compute the camera parameters. In the final step, the camera pose errors in every frame are minimized with a local bundle adjustment algorithm in real time.
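A minimal sketch of the pose-estimation idea above, under assumed intrinsics and hand-picked correspondences: a homography from coplanar model points gives the planar relation, and a DLT-style PnP solve (OpenCV's solvePnP) returns the camera pose. The QR-code detection, optical-flow tracking, and bundle adjustment steps are omitted.

```python
# Pose from a planar model: homography from coplanar points, then PnP.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

# Coplanar model corners (Z = 0) and their tracked image positions (hypothetical)
plane_xy = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], np.float32)
img_uv   = np.array([[100, 120], [420, 110], [430, 400], [95, 410]], np.float32)

# Homography from coplanar feature correspondences -> initial camera motion
H, _ = cv2.findHomography(plane_xy, img_uv)

# Full pose from 3D model points and their image observations (DLT-style PnP)
model_xyz = np.hstack([plane_xy, np.zeros((4, 1), np.float32)])
ok, rvec, tvec = cv2.solvePnP(model_xyz, img_uv, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```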

A Study on the Generation of Digital Elevation Model from SPOT Satellite Data (SPOT 위성데이타를 이용한 수치표고모델 생성에 관한 연구)

  • 안철호;안기원;박병욱
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.9 no.2, pp.93-102, 1991
  • This study aims to develop techniques for generating a Digital Elevation Model (DEM) from SPOT Computer Compatible Tape (CCT) data, so as to present an effective way of generating DEMs for large areas. As the first phase of extracting ground heights from SPOT stereo digital data, the bundle adjustment technique was used to determine the satellite exterior orientation parameters. Because SPOT data have the characteristics of multiple perspective projection, the exterior orientation parameters were modelled as a function of the scan lines. In the second phase, a normalized cross-correlation matching technique was applied to search for the conjugate pixels in stereo pairs; the preliminary study showed that a matching window size of 13×13 was adequate. After the image coordinates of the conjugate pixels were determined by the matching technique, the ground coordinates of the corresponding pixels were calculated by the space intersection method, and the DEM was then generated by interpolation. In addition, an algorithm for the elimination of abnormal elevations was developed and applied; the algorithm was very effective in improving the accuracy of the generated DEM.
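The matching step can be illustrated with normalized cross-correlation over a 13×13 window, as below. The file names, the row-wise search band, and the search range are assumptions, and the actual study's epipolar geometry handling is not reproduced.

```python
# Illustrative normalized cross-correlation matching with a 13x13 window.
import cv2
import numpy as np

left  = cv2.imread("spot_left.tif",  cv2.IMREAD_GRAYSCALE).astype(np.float32)
right = cv2.imread("spot_right.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

WIN = 13                      # matching window size found adequate in the study
half = WIN // 2

def match_point(row, col, search=64):
    """Find the conjugate pixel of (row, col) along the same row in the right image."""
    tmpl = left[row - half:row + half + 1, col - half:col + half + 1]
    band = right[row - half:row + half + 1,
                 max(col - search, 0):col + search + WIN]
    ncc = cv2.matchTemplate(band, tmpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(ncc)          # peak of the NCC surface
    match_col = max(col - search, 0) + loc[0] + half
    return match_col, score

print(match_point(500, 700))
```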


Semi-automatic Camera Calibration Using Quaternions (쿼터니언을 이용한 반자동 카메라 캘리브레이션)

  • Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.2, pp.43-50, 2018
  • The camera is a key element in image-based three-dimensional positioning, and camera calibration, which properly determines the internal characteristics of the camera, is a necessary process that must precede the determination of the three-dimensional coordinates of an object. In this study, a new methodology was proposed to determine the interior orientation parameters of a camera semi-automatically, without being influenced by the size and shape of the checkerboard used for calibration. The proposed method consists of exterior orientation parameter estimation using quaternions, recognition of the calibration target, and interior orientation parameter determination through bundle block adjustment. After determining the interior orientation parameters using the chessboard calibration target, the three-dimensional position of a small 3D model was determined. In the accuracy evaluation using checkpoints, the horizontal and vertical position errors were about ±0.006 m and ±0.007 m, respectively.
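The quaternion parameterization that such an exterior-orientation estimation builds on can be sketched as follows; the sample quaternion is arbitrary and the check against SciPy is only a convention sanity test.

```python
# Rotation-matrix parameterization by a unit quaternion (w, x, y, z).
import numpy as np
from scipy.spatial.transform import Rotation

def quat_to_rotmat(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)     # enforce the unit-norm constraint
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Sanity check against SciPy (SciPy uses the scalar-last convention).
q = np.array([0.9, 0.1, 0.2, 0.3])
print(np.allclose(quat_to_rotmat(q),
                  Rotation.from_quat(np.r_[q[1:], q[0]]).as_matrix()))
```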

Analysis of very close-range photogrammetry by non-metric camera (비측정용 사진기에 의한 초근접 사진해석)

  • 강준묵;오원진
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.8 no.2, pp.23-29, 1990
  • A non-metric camera is more advantageous than a metric camera for the geometric analysis of small and elaborate structures, because the focal length of a non-metric camera can be controlled flexibly while a metric camera generally has a fixed focal length. Moreover, if the conventional requirement of much time and effort to measure the three-dimensional positions of control points can be resolved effectively, very close-range photogrammetry can be analyzed more easily and quickly. The purposes of this study are to demonstrate the efficiency of non-metric cameras and to sharply reduce the difficulty of the control survey by introducing a self-control survey method. To this end, very close-range photographs of a small object were obtained using a non-metric camera calibrated for systematic errors, and bundle adjustment was then used for the analysis. As a result, the superiority of the non-metric camera in analyzing very close-range photographs and the suitability of the self-control survey method were demonstrated; the approach is therefore expected to be applicable to the precise analysis of small structures or spearhead parts.
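A compact sketch of the bundle adjustment referred to above: camera poses and object points are refined together by minimizing reprojection residuals. The simple pinhole model, the fixed principal distance, and all array names are illustrative, not the study's self-calibration formulation.

```python
# Bundle adjustment residual sketch for scipy.optimize.least_squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 1500.0  # assumed principal distance in pixels, held fixed for brevity

def project(pose, pts):
    """Project Nx3 object points with pose = 3 rotation-vector + 3 translation params."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts @ R.T + pose[3:6]
    return F * cam[:, :2] / cam[:, 2:3]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, uv_obs):
    poses = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    uv_pred = np.vstack([project(poses[c], pts[p:p + 1])
                         for c, p in zip(cam_idx, pt_idx)])
    return (uv_pred - uv_obs).ravel()

# x0 stacks initial pose and point parameters; cam_idx/pt_idx/uv_obs come
# from the image measurements (all hypothetical here):
# fit = least_squares(residuals, x0, args=(n_cams, n_pts, cam_idx, pt_idx, uv_obs))
```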


Use of a Drone for Mapping and Time Series Image Acquisition of Tidal Zones (드론을 활용한 갯벌 지형 및 시계열 정보의 획득)

  • Oh, Jaehong;Kim, Duk-jin;Lee, Hyoseong
    • Journal of the Korean Institute of Intelligent Systems, v.27 no.2, pp.119-125, 2017
  • The mud flats in Korea are geographical features formed from the sediment of Korean and Chinese rivers, and they are important topography for pollution purification and the fishing industry. Mud flats are difficult to access, so aerial surveying is required to obtain high-resolution spatial information of these areas. In this study, we used drones instead of the conventional aerial and remote sensing approaches, which have shortcomings in cost and revisit time. We carried out a GPS-based control point survey, temporal image acquisition using drones, bundle adjustment, and stereo image processing for DSM and orthophoto generation, followed by co-registration of the spatio-temporal information.
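One simple way to illustrate the final co-registration step is phase correlation between two temporal orthoimages, as sketched below. The file names are assumptions, and the sign of the correcting shift should be checked against OpenCV's convention.

```python
# Co-registering two temporal orthoimages with phase correlation.
import cv2
import numpy as np

ortho_t1 = cv2.imread("ortho_epoch1.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
ortho_t2 = cv2.imread("ortho_epoch2.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Estimate the residual (dx, dy) shift between the two epochs.
(shift_x, shift_y), response = cv2.phaseCorrelate(ortho_t1, ortho_t2)

# Warp the second epoch onto the first; the sign of the correction depends on
# OpenCV's shift convention and may need to be flipped for a given dataset.
M = np.float32([[1, 0, -shift_x], [0, 1, -shift_y]])
aligned = cv2.warpAffine(ortho_t2, M, (ortho_t2.shape[1], ortho_t2.shape[0]))
print("estimated shift (px):", shift_x, shift_y, "confidence:", response)
```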

3D Reconstruction of Structure Fusion-Based on UAS and Terrestrial LiDAR (UAS 및 지상 LiDAR 융합기반 건축물의 3D 재현)

  • Han, Seung-Hee;Kang, Joon-Oh;Oh, Seong-Jong;Lee, Yong-Chang
    • Journal of Urban Science, v.7 no.2, pp.53-60, 2018
  • A digital twin is a technology that creates a replica of real-world objects on a computer, analyzes their past and present operational status by fusing the structure, context, and operation of various physical systems with attribute information, and predicts countermeasures for the future. In particular, 3D reconstruction technology (UAS, LiDAR, GNSS, etc.) is a core technology of the digital twin, so related research and applications have been actively pursued in industry in recent years. However, both UAS (Unmanned Aerial System) and LiDAR (Light Detection And Ranging) leave blind spots that are not reconstructed, depending on the object shape, and these must be compensated for. Terrestrial LiDAR can acquire the point cloud of an object more precisely and quickly at short range, but a blind spot is generated at the upper part of the object, which restricts digital twin modeling. The UAS can model a specific range of objects with high accuracy using high-resolution images at low altitude, and has the advantage of generating a high-density point cloud based on SfM (Structure-from-Motion) image analysis technology; however, it is relatively far from the target compared with terrestrial LiDAR, image analysis takes time, the accuracy of the side surfaces is reduced, and the corresponding blind spots must be compensated for. By fusing the UAS and terrestrial LiDAR data and re-optimizing the result, the residual errors of each modeling method were compensated and a mutually corrected result was obtained. The accuracy of the fusion-based 3D model is better than 1 cm, and it is expected to be useful for digital twin construction.
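A hedged sketch of the fusion/re-optimization idea: the UAS-derived point cloud is aligned to the terrestrial LiDAR cloud with point-to-point ICP in Open3D. File names, the correspondence threshold, and the identity initialization are assumptions; the authors' actual fusion procedure may differ.

```python
# Aligning a UAS SfM point cloud to a terrestrial LiDAR cloud with ICP.
import numpy as np
import open3d as o3d

lidar = o3d.io.read_point_cloud("terrestrial_lidar.ply")   # hypothetical files
uas   = o3d.io.read_point_cloud("uas_sfm.ply")

threshold = 0.05           # 5 cm correspondence distance, an assumed value
init = np.eye(4)           # assume the clouds are already roughly georeferenced

reg = o3d.pipelines.registration.registration_icp(
    uas, lidar, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", reg.fitness, "inlier RMSE (m):", reg.inlier_rmse)
uas.transform(reg.transformation)          # bring the UAS cloud into the LiDAR frame
merged = lidar + uas                       # fused cloud for the 3D model
```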

Relative RPCs Bias-compensation for Satellite Stereo Images Processing (고해상도 입체 위성영상 처리를 위한 무기준점 기반 상호표정)

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.4, pp.287-293, 2018
  • Generating epipolar-resampled images by reducing the y-parallax is a prerequisite for accurate and efficient processing of satellite stereo images. Minimizing the y-parallax requires accurate sensor modeling, which is carried out with ground control points. However, that approach is not feasible over inaccessible areas where control points cannot easily be acquired. In such cases, a relative orientation can be carried out with conjugate points only, but its accuracy for satellite sensors should be studied because their geometry differs from that of well-known frame-type cameras. Therefore, we carried out bias compensation of the RPCs (Rational Polynomial Coefficients) without any ground control points to study its precision and its effect on the y-parallax in epipolar-resampled images. The conjugate points were generated by stereo image matching with outlier removal, and the RPC compensation was performed using affine and polynomial models. We analyzed the reprojection error of the compensated RPCs and the y-parallax in the resampled images. The experimental results showed a one-pixel level of y-parallax for Kompsat-3 stereo data.
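The affine bias compensation can be sketched as a small image-space least-squares fit, as below; the projected and observed tie-point coordinates are placeholder values standing in for RPC projections and matched measurements.

```python
# Image-space affine bias compensation for RPC-projected tie points.
import numpy as np

# RPC-projected (col, row) of tie points and the (col, row) observed by
# stereo matching -- both hypothetical placeholder values.
proj = np.array([[1020.3, 2044.8], [3050.1, 512.9], [220.7, 4100.2], [4988.4, 3001.5]])
obs  = np.array([[1022.1, 2046.0], [3051.8, 514.3], [222.5, 4101.4], [4990.2, 3002.8]])

# Affine model: d_col = a0 + a1*col + a2*row, d_row = b0 + b1*col + b2*row
A = np.column_stack([np.ones(len(proj)), proj[:, 0], proj[:, 1]])
coef_col, *_ = np.linalg.lstsq(A, obs[:, 0] - proj[:, 0], rcond=None)
coef_row, *_ = np.linalg.lstsq(A, obs[:, 1] - proj[:, 1], rcond=None)

def compensate(col_row):
    """Apply the estimated affine bias correction to RPC-projected image coords."""
    a = np.column_stack([np.ones(len(col_row)), col_row[:, 0], col_row[:, 1]])
    return col_row + np.column_stack([a @ coef_col, a @ coef_row])

print("max residual after compensation (px):", np.abs(compensate(proj) - obs).max())
```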

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.4, pp.267-277, 2019
  • Photogrammetry and computer vision are identical in that they determine the three-dimensional coordinates of objects from images taken with a camera, but the two fields are not directly compatible with each other due to differences in their camera lens distortion models and camera coordinate systems. In general, data processing of drone images is performed by bundle block adjustment using computer vision-based software, and the plotting of the images is then performed by photogrammetry-based software for mapping. In this case, we face the problem of converting the camera lens distortion model into the formulation used in photogrammetry. Therefore, this study describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion formulas for the camera lens distortion models, lens distortion was first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the lens distortion was removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated by applying the photogrammetric lens distortion coefficients to assess accuracy; the calculated root mean square error of the y-parallax was within 0.3 pixels.
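The verification idea in this abstract can be sketched with radial terms only: distort ideal coordinates with the computer-vision convention (distortion added to the ideal coordinates), estimate photogrammetry-style coefficients that correct the measured coordinates, and check the residual. The coefficient values are assumed, and tangential/decentering terms are omitted.

```python
# Radial-only sketch of converting between the two distortion conventions.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-0.3, 0.3, size=(500, 2))      # ideal (distortion-free) normalized coords

# Computer-vision convention: radial distortion is ADDED to the ideal coordinates.
k1, k2 = 0.12, -0.03                            # assumed coefficients
r2 = np.sum(xy**2, axis=1, keepdims=True)
xy_d = xy * (1 + k1 * r2 + k2 * r2**2)          # "measured" distorted coordinates

# Photogrammetry convention: a correction is applied to the measured coordinates;
# estimate K1, K2 by least squares so that the correction recovers the ideal coords.
r2d = np.sum(xy_d**2, axis=1, keepdims=True)
A = np.column_stack([(xy_d * r2d).ravel(), (xy_d * r2d**2).ravel()])
rhs = (xy - xy_d).ravel()
(K1, K2), *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Remove the distortion with the estimated coefficients and report the residual.
xy_u = xy_d + xy_d * (K1 * r2d + K2 * r2d**2)
rms = np.sqrt(np.mean(np.sum((xy_u - xy)**2, axis=1)))
print("estimated K1, K2:", K1, K2, " RMS residual:", rms)
```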

Automatic Validation of the Geometric Quality of Crowdsourcing Drone Imagery (크라우드소싱 드론 영상의 기하학적 품질 자동 검증)

  • Dongho Lee;Kyoungah Choi
    • Korean Journal of Remote Sensing, v.39 no.5_1, pp.577-587, 2023
  • The utilization of crowdsourced spatial data has been actively researched; however, issues stemming from the uncertainty of data quality have been raised. In particular, when low-quality data are mixed into drone imagery datasets, they can degrade the quality of the spatial information output. To address these problems, this study presents a methodology for automatically validating the geometric quality of crowdsourced imagery. Key quality factors such as spatial resolution, resolution variation, matching point reprojection error, and bundle adjustment results are utilized. To classify imagery suitable for spatial information generation, training and validation datasets are constructed, and machine learning is conducted using a radial basis function (RBF)-based support vector machine (SVM) model. The trained SVM model achieved a classification accuracy of 99.1%. To evaluate the effectiveness of the quality validation model, image sets before and after applying the model to drone imagery not used in training or validation are compared by generating orthoimages. The results confirm that applying the quality validation model reduces various distortions that can be included in orthoimages and enhances object identifiability. The proposed quality validation methodology is expected to increase the utility of crowdsourced data in spatial information generation by automatically selecting high-quality data from the multitude of crowdsourced data of varying quality.
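An illustrative sketch of the classification step: an RBF-kernel SVM trained on the quality factors listed above. The feature files, column layout, and hyperparameters are assumptions, not the study's configuration.

```python
# RBF-kernel SVM for classifying drone images as usable or not.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Columns (assumed): GSD (m), GSD variation, mean reprojection error (px), BA RMSE (px)
X = np.loadtxt("image_quality_features.csv", delimiter=",")   # hypothetical file
y = np.loadtxt("image_quality_labels.csv", delimiter=",")     # 1 = usable, 0 = reject

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize the features, then fit the RBF SVM classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_te, y_te))
```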