• Title/Summary/Keyword: epipolar

125 search results

Image segmentation and line segment extraction for 3-d building reconstruction

  • Ye, Chul-Soo;Kim, Kyoung-Ok;Lee, Jong-Hun;Lee, Kwae-Hi
    • Proceedings of the KSRS Conference / 2002.10a / pp.59-64 / 2002
  • This paper presents a method of line segment extraction for 3-D building reconstruction. Building roofs are described as a set of planar polygonal patches, each extracted by watershed-based image segmentation, line segment matching, and coplanar grouping. Coplanar grouping and polygonal patch formation are performed per region by selecting 3-D line segments that are matched using epipolar geometry and flight information. The algorithm has been applied to high-resolution aerial images, and the results show accurate 3-D building reconstruction.
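
As a rough illustration of the epipolar constraint used in such line-segment matching, the sketch below keeps only candidate endpoints that lie near the epipolar line of a left-image point. The fundamental matrix and coordinates are assumed values for a rectified pair, not data from the paper.

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    """Distance of x_right from the epipolar line of x_left (homogeneous coords)."""
    l = F @ x_left                           # epipolar line [a, b, c] in the right image
    return abs(l @ x_right) / np.hypot(l[0], l[1])

# Assumed fundamental matrix of a rectified pair: epipolar lines are horizontal.
F = np.array([[0, 0, 0],
              [0, 0, -1],
              [0, 1, 0]], dtype=float)

p_left = np.array([120.0, 85.0, 1.0])        # a line-segment endpoint in the left image
candidates = [np.array([310.0, 86.0, 1.0]),  # near the epipolar line -> kept
              np.array([300.0, 140.0, 1.0])] # far from it -> rejected

matches = [c for c in candidates if epipolar_distance(F, p_left, c) < 1.5]
print(len(matches), "candidate(s) kept")
```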

Topographic Information Extraction from Kompsat Satellite Stereo Data Using SGM

  • Jang, Yeong Jae;Lee, Jae Wang;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.5 / pp.315-322 / 2019
  • A DSM (Digital Surface Model) is a digital representation of ground-surface topography that is widely used for hydrology, slope analysis, and urban planning. Aerial photogrammetry and LiDAR (Light Detection And Ranging) are the main technologies for urban DSM generation, but high-resolution satellite imagery is often the only source available for remote or inaccessible areas. Traditional automated DSM generation is based on correlation methods, but recent studies show that a modern pixelwise image matching method, SGM (Semi-Global Matching), can be an alternative. This study therefore investigated the application of SGM to Kompsat satellite data from KARI (Korea Aerospace Research Institute). First, sensor modeling was carried out for precise ground-to-image computation, followed by epipolar image resampling for efficient stereo processing. Second, SGM was applied with different parameterizations. The generated DSM was evaluated against a reference DSM derived from the first-pulse returns of a LiDAR reference dataset.
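
For readers who want to experiment with SGM on an epipolar-resampled pair, OpenCV's StereoSGBM implements a semi-global matcher. The file names and parameter values below are placeholders, not the study's configuration.

```python
import cv2

left = cv2.imread("epipolar_left.png", cv2.IMREAD_GRAYSCALE)    # assumed resampled pair
right = cv2.imread("epipolar_right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,              # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,             # larger penalty for big disparity jumps
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point output

disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disp_vis)
```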

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Sensor pose estimation is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system suited to a dynamically moving sensor. The orientation estimated by an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers among the feature-point matches are eliminated in the image sequence, and the sensor pose can be obtained from the remaining matches. The IMU helps to eliminate erroneous point matches in images of dynamic scenes at an early stage. After outlier removal, the selected feature-point correspondences are used to calculate a precise fundamental matrix, from which the sensor pose is finally estimated. The proposed procedure was implemented and tested against existing methods, and the experimental results demonstrate its effectiveness.
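
A minimal sketch of the IMU-aided outlier rejection idea, under assumed intrinsics, rotation, and translation direction (the paper's actual formulation is not reproduced): build an essential matrix from the IMU rotation, derive a fundamental matrix, and reject matches with large epipolar residuals.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
R = np.eye(3)                        # frame-to-frame rotation, from the IMU
t = np.array([1.0, 0.0, 0.0])        # assumed (scale-free) translation direction

E = skew(t) @ R                      # essential matrix
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)

def residual(F, p1, p2):
    """Epipolar distance (pixels) of p2 from the line induced by p1."""
    l = F @ np.append(p1, 1.0)
    return abs(l @ np.append(p2, 1.0)) / np.hypot(l[0], l[1])

matches = [((100.0, 120.0), (80.0, 120.0)),   # consistent -> kept
           ((50.0, 60.0), (45.0, 90.0))]      # inconsistent -> rejected
inliers = [m for m in matches if residual(F, *m) < 1.0]
print(len(inliers), "inlier match(es)")
```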

Stereo Vision Based 3D Input Device

  • Yoon, Sang-Min;Kim, Ig-Jae;Ahn, Sang-Chul;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, with a focus on enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6-degrees-of-freedom motion information from the device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched-pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or to driving the navigation device that controls the user's viewpoint in a virtual-reality setting. Since the stereo vision-based 3D input device is wireless, it provides a more natural and efficient interface, effectively realizing a feeling of immersion.
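
Once the stereo geometry is known, a matched point can be triangulated into 3-D, and tracking such points over time (e.g., through the Kalman filtering mentioned above) yields motion. The OpenCV sketch below uses assumed projection matrices and pixel coordinates, not the paper's setup.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])        # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # left camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])      # assumed 10 cm baseline

pt_left = np.array([[352.0], [250.0]])    # matched pixel in each view (assumed)
pt_right = np.array([[344.0], [250.0]])

X_h = cv2.triangulatePoints(P1, P2, pt_left, pt_right)  # homogeneous 4-vector
X = (X_h[:3] / X_h[3]).ravel()
print("3-D point:", X)  # successive points over time give the device's motion
```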

Single-Camera Micro-Stereo 4D-PTV

  • Doh, Deog-Hee;Cho, Young-Beom;Lee, Jae-Min;Kim, Dong-Hyuk;Jo, Hyo-Jae
    • Transactions of the Korean Society of Mechanical Engineers B / v.34 no.12 / pp.1087-1092 / 2010
  • A micro 3D-PTV system was constructed using a single camera. Two viewing holes were created behind the objective lens of the microscopic system to form a stereoscopic image pair, and a hybrid recursive PTV algorithm was used. The concept of the epipolar line was adopted to eliminate spurious match candidates. Three-dimensional velocity vector fields were obtained by calculating the three-dimensional displacements of particles identified as identical. The system consists of a laser light source (Ar-ion, 500 mW), one high-definition camera (1028 × 1024 pixels, 500 fps), a circular plate with two viewing holes, and a host computer. The performance of the developed algorithm was tested using artificial images, and the vector recovery ratio was investigated as a function of particle number. A micro backward-facing-step channel (H × h × W: 36 μm × 70 μm × 3000 μm) was measured with the developed measurement system, and the results were in good qualitative agreement with other results.
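
The epipolar pruning step generalizes naturally to many particles at once. The vectorized sketch below (assumed rectified geometry and centroid positions, not the authors' code) keeps left-right particle pairs that fall within a pixel tolerance of each other's epipolar lines.

```python
import numpy as np

F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)  # assumed rectified F
left = np.array([[100, 50], [200, 80]], dtype=float)           # particle centroids (px)
right = np.array([[140, 50.4], [150, 70], [260, 79.5]], dtype=float)

L = np.hstack([left, np.ones((2, 1))]) @ F.T        # epipolar lines, one per left particle
R = np.hstack([right, np.ones((3, 1))])
d = np.abs(L @ R.T) / np.hypot(L[:, :1], L[:, 1:2]) # pairwise line distances (2 x 3)

pairs = np.argwhere(d < 1.0)                        # surviving (left, right) index pairs
print(pairs)
```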

Relative RPCs Bias-compensation for Satellite Stereo Images Processing

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.4 / pp.287-293 / 2018
  • Generating epipolar-resampled images by reducing the y-parallax is a prerequisite for accurate and efficient processing of satellite stereo images. Minimizing y-parallax requires accurate sensor modeling, which is typically carried out with ground control points. However, this approach is not feasible over inaccessible areas where control points cannot easily be acquired. In that case, a relative orientation can be performed using only conjugate points, but its accuracy for satellite sensors needs to be studied because their geometry differs from that of well-known frame-type cameras. Therefore, we carried out bias compensation of RPCs (Rational Polynomial Coefficients) without any ground control points to study its precision and its effect on the y-parallax in epipolar-resampled images. Conjugate points were generated by stereo image matching with outlier removal, and RPC compensation was performed with affine and polynomial models. We analyzed the reprojection error of the compensated RPCs and the y-parallax in the resampled images. Experimental results showed one-pixel-level y-parallax for Kompsat-3 stereo data.
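
The affine variant of image-space RPC bias compensation can be sketched as a small least-squares problem. The predicted and observed image coordinates below are invented for illustration, not values from the experiment.

```python
import numpy as np

# (row, col) predicted by the uncorrected RPCs vs. observed at conjugate points (assumed)
predicted = np.array([[1000.2, 2000.5], [1500.1, 2500.8],
                      [1800.4, 2200.2], [1200.7, 2600.1]])
observed = np.array([[1001.0, 2001.9], [1500.9, 2502.2],
                     [1801.2, 2201.6], [1201.5, 2601.5]])

# Affine correction per coordinate: corrected = a0 + a1*row + a2*col
A = np.hstack([np.ones((4, 1)), predicted])          # design matrix
coef, *_ = np.linalg.lstsq(A, observed, rcond=None)  # 3 x 2: one column per coordinate

corrected = A @ coef
print("residual RMS (px):", np.sqrt(((corrected - observed) ** 2).mean()))
```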

Calibration Comparison of Single Camera and Stereo Camera

  • Kim, Eui Myoung;Hong, Song Pyo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.4 / pp.295-303 / 2018
  • A stereo camera system has a fixed baseline and therefore a constant scale. However, when relative orientation parameters are determined anew each time through key-point matching between the stereo images, the scale is not fixed, making it difficult to measure actual three-dimensional coordinates. The purpose of this study was therefore to perform a stereo camera calibration that simultaneously determines the internal characteristics of the left and right cameras and the geometric relationship between them, using a modified collinearity equation, and to compare it with two independent single-camera calibrations. In experiments using close-range images, an RMSE (Root Mean Square Error) of ±0.014 m occurred when three-dimensional distances from the single-calibration results were compared. In contrast, the stereo camera calibration yielded better three-dimensional distance accuracy, showing almost no error relative to the results from the two single cameras. In the comparison of epipolar images, the RMSE of the stereo camera was 0.3 pixels larger than that of the two single cameras, but the effect was not significant.
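
The paper solves a modified collinearity-equation formulation; as a rough, more accessible stand-in, OpenCV's stereoCalibrate also estimates both interior orientations and the fixed left-right relative pose in one adjustment. The chessboard pattern, file paths, image size, and initial guesses below are all assumptions.

```python
import cv2
import numpy as np

pattern = (9, 6)                                   # assumed inner-corner grid of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

obj_pts, left_pts, right_pts = [], [], []
for i in range(10):                                # assumed number of image pairs
    imL = cv2.imread(f"left_{i}.png", cv2.IMREAD_GRAYSCALE)
    imR = cv2.imread(f"right_{i}.png", cv2.IMREAD_GRAYSCALE)
    okL, cL = cv2.findChessboardCorners(imL, pattern)
    okR, cR = cv2.findChessboardCorners(imR, pattern)
    if okL and okR:
        obj_pts.append(objp); left_pts.append(cL); right_pts.append(cR)

K_guess = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])  # rough guess
# Refines both interior orientations and the fixed left-to-right R, T in one solve.
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K_guess.copy(), None, K_guess.copy(), None,
    (1280, 960), flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print("RMS reprojection error (px):", rms)
```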

Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera

  • Kim, Eun-Kyeong;Kim, Sung-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.1 / pp.58-63 / 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot based on a stereo camera. To recover position information from the stereo images, the point on the right image corresponding to each pixel on the left image must be found. The general approach searches for the corresponding point by computing pixel similarity along the epipolar line. However, this has drawbacks: every pixel on the epipolar line must be evaluated, and similarity is computed from pixel values alone, such as RGB color. To address this, this paper implements a method that finds the corresponding point simply from the gap in x-coordinates when feature points, extracted and matched by a feature-matching method, form a pair located at the same y-coordinate in the left and right images. In addition, the proposed method preserves as many feature points as possible by falling back to the conventional algorithm for unmatched features, since the number of feature points affects localization accuracy. The position of the mobile robot is then compensated using the 3-D coordinates reconstructed from the feature points and their correspondences. Experimental results show that the proposed method increases the number of feature points available for position compensation and compensates the robot's position better than feature extraction alone.
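
A hedged sketch of the described shortcut, using ORB features in OpenCV (the detector, file names, and thresholds are assumptions, not the paper's choices): matched keypoints lying on the same image row of a rectified pair are accepted directly, and the x-gap serves as the disparity.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kpL, desL = orb.detectAndCompute(left, None)
kpR, desR = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desL, desR)

correspondences = []
for m in matches:
    (xL, yL), (xR, yR) = kpL[m.queryIdx].pt, kpR[m.trainIdx].pt
    if abs(yL - yR) < 1.0:                    # same row -> accept without a line search
        correspondences.append((xL, yL, xL - xR))  # the x-gap is the disparity
print(len(correspondences), "correspondences kept")
```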

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision are identical in determining three-dimensional coordinates from images taken with a camera, but the two fields are not directly compatible with each other due to differences in lens distortion modeling and camera coordinate systems. In general, drone images are processed by bundle block adjustment in computer vision-based software, and plotting for mapping is then performed in photogrammetry-based software. This raises the problem of converting the camera lens distortion model into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision and proposes a methodology for converting between them. To verify the conversion formulas, lens distortion was first added to distortion-free virtual coordinates using the computer vision-based distortion models. The distortion coefficients were then determined using the photogrammetry-based models, the distortion was removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated with the photogrammetric lens distortion coefficients, and the calculated root mean square error of y-parallax was within 0.3 pixels.
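
The verification procedure described above can be mimicked numerically: distort ideal coordinates with a computer-vision (Brown-style) radial model, then fit photogrammetry-style correction coefficients and check the residual. All coefficients below are assumed toy values, not those of the study.

```python
import numpy as np

k1, k2 = -0.12, 0.03                               # assumed CV radial coefficients
xy = np.random.default_rng(0).uniform(-0.4, 0.4, (200, 2))  # ideal normalized coords

r2 = (xy ** 2).sum(axis=1, keepdims=True)
xy_d = xy * (1 + k1 * r2 + k2 * r2 ** 2)           # CV model maps ideal -> distorted

# Photogrammetric-style correction works the other way: x = x_d + x_d*(K1*r^2 + K2*r^4),
# with r measured on the distorted coordinates. Fit K1, K2 by least squares.
r2d = (xy_d ** 2).sum(axis=1, keepdims=True)
A = np.hstack([(xy_d * r2d).reshape(-1, 1), (xy_d * r2d ** 2).reshape(-1, 1)])
b = (xy - xy_d).reshape(-1)
(K1, K2), *_ = np.linalg.lstsq(A, b, rcond=None)

resid = xy_d + xy_d * (K1 * r2d + K2 * r2d ** 2) - xy
print("RMS residual (normalized units):", np.sqrt((resid ** 2).mean()))
```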

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.226-233 / 2015
  • Pose estimation is an important operation in many vision tasks. This paper presents a method of estimating the camera pose from a known landmark for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method takes a distinctive approach to the pose estimation problem: extrinsic parameters from known and unknown 3-D (three-dimensional) feature points are combined with an inertial estimate of the camera's 6-DOF (degrees of freedom) pose into one linear inhomogeneous equation. This allows the resulting optimization problem to be solved neatly with singular value decomposition (SVD). Experimental results demonstrate that the proposed method estimates the camera 6-DOF accurately and is easy to implement.
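
The key numerical step, solving one stacked linear inhomogeneous system for the 6-DOF vector via SVD, can be sketched generically; the constraint rows below are random stand-ins, not the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 6))          # stacked linear constraints on the 6-DOF vector
x_true = rng.normal(size=6)
b = A @ x_true + rng.normal(scale=1e-3, size=40)   # noisy right-hand side

# SVD-based pseudo-inverse solution minimizing ||A x - b||.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)
print("estimation error:", np.linalg.norm(x - x_true))
```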