• Title/Summary/Keyword: epipolar constraint


EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, existing strategies are limited to imposing image-level constraints between pose pairs, which are weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without requiring ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
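The pixel-level constraint referred to above is the classical epipolar relation x2ᵀ F x1 = 0 between corresponding pixels of a pose pair. As a minimal sketch (not the authors' implementation; the intrinsics, pose, and 3D point below are made-up toy values), one can build the fundamental matrix from a relative pose and verify the residual:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def fundamental_from_pose(K, R, t):
    """Fundamental matrix F = K^-T [t]x R K^-1 relating pixels x2^T F x1 = 0."""
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv

# Toy setup: camera 2 translated along x relative to camera 1.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([1.0, 0, 0])
F = fundamental_from_pose(K, R, t)

# Project a 3D point into both cameras and check the epipolar residual.
X = np.array([0.2, -0.1, 4.0])
x1 = K @ X; x1 /= x1[2]
x2 = K @ (R @ X + t); x2 /= x2[2]
residual = x2 @ F @ x1
print(abs(residual) < 1e-9)  # the constraint x2^T F x1 = 0 holds
```

In a learning setting such as EpiLoc, a residual of this kind, accumulated over many pixel pairs, can serve as a differentiable penalty on the regressed poses.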

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have already been established among views. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translations and rotations, by using the epipolar constraint on the matched feature points. After choosing the interest points adjacent to more than two contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.
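The initial translation-and-rotation step described above relies on the epipolar constraint between matched feature points. A standard way to do this (a sketch only, not necessarily this paper's estimator; the rotation, translation, and points below are made-up toy values) is a linear 8-point-style estimate of the essential matrix from normalized correspondences:

```python
import numpy as np

def essential_8point(x1, x2):
    # Each correspondence gives one linear equation x2^T E x1 = 0 in the
    # 9 entries of E; the solution is the right singular vector of the
    # stacked system, projected onto the essential manifold (two equal
    # singular values, one zero).
    A = np.array([np.outer(b, a).ravel() for a, b in zip(x1, x2)])
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic rig: small rotation about y plus a translation (toy values).
rng = np.random.default_rng(0)
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([0.5, 0.1, 0.05])
P = rng.uniform(-1, 1, (12, 3)) + np.array([0, 0, 5.0])  # points in front
x1 = P / P[:, 2:3]                     # normalized image coordinates
Q = P @ R.T + t
x2 = Q / Q[:, 2:3]
E = essential_8point(x1, x2)
residuals = [abs(b @ E @ a) for a, b in zip(x1, x2)]
print(max(residuals))                  # near zero for exact correspondences
```

R and t can subsequently be factored out of E by SVD; here we only check that the recovered E satisfies the epipolar constraint on every correspondence.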

An Epipolar Rectification for Object Segmentation (객체분할을 위한 에피폴라 Rectification)

  • Jeong, Seung-Do;Kang, Sung-Suk;CHo, Jung-Won;Choi, Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.83-91 / 2004
  • Epipolar rectification is the process of transforming the epipolar geometry of a pair of images into a canonical form. This is accomplished by applying to each image a homography that maps the epipole to a predetermined point. In this process, the rectified images produced by the homographies must satisfy the epipolar constraint. These homographies are not unique; however, we can select homographies suited to the system's purpose by imposing an additional constraint. Since the rectified image pair forms a stereo image pair, the disparity can be found efficiently. We are therefore able to estimate the three-dimensional information of objects within an image and apply it to object segmentation. This paper proposes a rectification method for object segmentation and applies the rectification result to segmentation. By using color together with the relative continuity of disparity, the drawbacks of previous segmentation methods are remedied: a single object being split into several regions because its parts differ in color, or distinct objects being merged into one because they share similar colors. Experimental results show that the disparity in images rectified by the proposed method is continuous within each object, confirming that our rectification method is suitable for object segmentation.
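After rectification, the epipolar geometry takes a canonical form: corresponding points lie on the same image row, and the fundamental matrix of the rectified pair reduces to the skew-symmetric matrix of the epipole at infinity. A small numerical illustration (with made-up pixel coordinates):

```python
import numpy as np

# After rectification the fundamental matrix takes the canonical form
# F_rect = [e]_x with the epipole e = (1, 0, 0)^T at infinity along x.
F_rect = np.array([[0.0, 0, 0],
                   [0, 0, -1],
                   [0, 1, 0]])

# Two corresponding pixels in a rectified pair lie on the same row,
# so only their x-coordinates (the disparity) differ.
x_left  = np.array([210.0, 144.0, 1.0])
x_right = np.array([198.0, 144.0, 1.0])  # same row, disparity 12

print(x_right @ F_rect @ x_left)   # 0.0: the epipolar constraint is met
# A vertical offset breaks the constraint:
x_bad = np.array([198.0, 150.0, 1.0])
print(x_bad @ F_rect @ x_left)     # -6.0: residual equals the row offset
```

This is why a rectified pair allows a one-dimensional disparity search along rows, which the paper exploits for segmentation.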

MOEPE: Merged Odd-Even PE Architecture for Stereo Matching Hardware (MOEPE: 스테레오 정합 하드웨어를 위한 Merged Odd-Even PE 구조)

  • Han, Phil-Woo;Yang, Yeong-Yil
    • Proceedings of the IEEK Conference / 1998.10a / pp.1137-1140 / 1998
  • In this paper, we propose a new hardware architecture which implements the stereo matching algorithm using the dynamic programming method. Dynamic programming is used to find the corresponding pixels between the left image and the right image. The proposed MOEPE (Merged Odd-Even PE) architecture operates in a systolic manner and finds the disparities from the intensities of the pixels on the epipolar line. The number of PEs used in the MOEPE architecture equals the size of the range constraint, which dramatically reduces the number of PEs required compared to the traditional method, which uses as many PEs as there are pixels on the epipolar line. For normal-sized images, the number of PEs in the MOEPE architecture is smaller than in the traditional method by a factor of 25. The proposed architecture is modeled in VHDL and simulated with the SYNOPSYS tool.
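The matching recurrence that the PE array evaluates can be sketched in software. The toy below is an illustrative re-implementation, not the hardware description; the absolute-difference cost and the smoothness weight are assumptions. It matches one scanline pair under a disparity range constraint:

```python
def dp_scanline_disparity(left, right, max_disp, smooth=1.0):
    """Dynamic-programming matching along one epipolar line: for every
    left pixel pick a disparity in [0, max_disp] (the range constraint),
    trading off intensity difference against disparity smoothness."""
    n = len(left)
    INF = float("inf")
    cost = [[INF] * (max_disp + 1) for _ in range(n)]
    back = [[0] * (max_disp + 1) for _ in range(n)]
    for i in range(n):
        for d in range(max_disp + 1):
            if i - d < 0:
                continue                      # right pixel out of range
            data = abs(left[i] - right[i - d])
            if i == 0:
                cost[i][d] = data
                continue
            prev = min(range(max_disp + 1),
                       key=lambda p: cost[i - 1][p] + smooth * abs(d - p))
            cost[i][d] = data + cost[i - 1][prev] + smooth * abs(d - prev)
            back[i][d] = prev
    d = min(range(max_disp + 1), key=lambda k: cost[n - 1][k])
    disp = [0] * n
    for i in range(n - 1, -1, -1):
        disp[i] = d
        d = back[i][d]
    return disp

# Right scanline and a left scanline shifted by a constant disparity of 2.
right = [10, 20, 30, 40, 50, 60, 70, 80]
left  = [10, 20, 10, 20, 30, 40, 50, 60]    # left[i] == right[i-2] for i >= 2
print(dp_scanline_disparity(left, right, max_disp=3))
```

Note that the DP state per pixel has max_disp + 1 entries, which is why a PE count equal to the range constraint suffices, versus one PE per epipolar-line pixel in the traditional layout.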


Efficient Depth Map Generation for Various Stereo Camera Arrangements (다양한 스테레오 카메라 배열을 위한 효율적인 깊이 지도 생성 방법)

  • Jang, Woo-Seok;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.6A / pp.458-463 / 2012
  • In this paper, we propose a direct depth map acquisition method for the convergent camera array as well as the parallel camera array. Conventional methods perform image rectification to reduce complexity and improve accuracy. However, image rectification may lead to unwanted consequences for the convergent camera array. Thus, the proposed method excludes image rectification and directly extracts depth values using the epipolar constraint. In order to acquire a more accurate depth map, occlusion detection and handling processes are added. Reasonable depth values are assigned to the detected occlusion regions based on the distance and color differences from neighboring pixels. Experimental results show that the proposed method has fewer limitations than conventional methods and stably generates more accurate depth maps.
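Skipping rectification means depth must be triangulated directly from the epipolar geometry of the original (possibly convergent) cameras. A minimal sketch of such direct triangulation, taking the midpoint of the two viewing rays (the convergent toy rig below is made up, not the paper's setup):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Rays p_i(s) = c_i + s * d_i; solve the 2x2 normal equations for
    the closest-approach parameters and return the midpoint."""
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Convergent toy rig: two camera centers whose rays verge on a point X.
X = np.array([0.0, 0.0, 5.0])
c1 = np.array([-0.5, 0.0, 0.0]); c2 = np.array([0.5, 0.0, 0.0])
d1 = X - c1; d2 = X - c2          # exact viewing rays toward X
X_hat = triangulate_midpoint(c1, d1, c2, d2)
print(np.allclose(X_hat, X))      # True: rays intersect exactly at X
```

With noisy correspondences the two rays no longer intersect, and the midpoint serves as the depth estimate; the epipolar constraint restricts where the corresponding ray may lie.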

MOEPE: Merged Odd-Even PE Architecture for Stereo Matching Hardware (MOEPE: 스테레오 정합 하드웨어를 위한 Merged Odd-Even PE구조)

  • Han, Phil-Woo;Yang, Yeong-Yil
    • Journal of the Institute of Electronics Engineers of Korea SD / v.37 no.10 / pp.57-64 / 2000
  • In this paper, we propose a new hardware architecture which implements the stereo matching algorithm using the dynamic programming method. The proposed MOEPE (Merged Odd-Even PE) architecture operates in a systolic manner and finds the disparities from the intensities of the pixels on the epipolar line. The number of PEs used in the MOEPE architecture equals the size of the range constraint, which dramatically reduces the number of PEs required compared to the traditional method, which uses as many PEs as there are pixels on the epipolar line. For normal-sized images, the number of PEs in the MOEPE architecture is smaller than in the traditional method by a factor of 25. The proposed architecture is modeled in VHDL and simulated with the SYNOPSYS tool.


Recovering the Elevation Map by Stereo Modeling of the Aerial Image Sequence (연속 항공영상의 스테레오 모델링에 의한 지형 복원)

  • 강민석;김준식;박래홍;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.9 / pp.64-75 / 1993
  • This paper proposes a technique for recovering the elevation map by stereo modeling of an aerial image sequence that has been transformed based on the aircraft state. The area-based stereo matching method is simulated and its various parameters are chosen experimentally. In the depth extraction step, the depth is determined by solving a vector equation that is suitable for stereo modeling of aerial images which do not satisfy the epipolar constraint. The performance of the conventional feature-based matching scheme is also compared. Finally, techniques for analyzing the accuracy of the recovered elevation map (REM) are described. The analysis includes error estimation for both heights and contour lines, where accuracy is measured as deviation from estimates obtained manually. The experimental results show the efficiency of the proposed technique.
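The area-based matching mentioned above typically slides a correlation window along the search range. A small sketch of this (an illustrative version using normalized cross-correlation; the window size, search range, and synthetic images are assumptions, not the paper's parameters):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(left, right, row, col, win=2, search=5):
    """Slide a (2*win+1)^2 window along the search range in the right
    image and return the disparity with the highest correlation."""
    patch = left[row - win:row + win + 1, col - win:col + win + 1]
    scores = [ncc(patch, right[row - win:row + win + 1,
                               col - d - win:col - d + win + 1])
              for d in range(search + 1)]
    return int(np.argmax(scores))

# Synthetic pair: the left scanlines are the right ones shifted by 3 px.
rng = np.random.default_rng(0)
right = rng.random((20, 20))
left = np.roll(right, 3, axis=1)
print(best_match(left, right, row=10, col=10))
```

On aerial imagery that violates the epipolar constraint, the search would follow the actual epipolar curve implied by the vector equation rather than a single row; the row-wise search here is a simplification.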


A Study on the 3D Shape Reconstruction Algorithm of an Indoor Environment Using Active Stereo Vision (능동 스테레오 비젼을 이용한 실내환경의 3차원 형상 재구성 알고리즘)

  • Byun, Ki-Won;Joo, Jae-Heum;Nam, Ki-Gon
    • Journal of the Institute of Convergence Signal Processing / v.10 no.1 / pp.13-22 / 2009
  • In this paper, we propose a 3D shape reconstruction method that combines a mosaic method with active stereo matching using a laser beam. The active stereo matching method detects the position of the laser beam irradiated onto the object by analyzing the color and brightness variations of the left and right images, and acquires depth information along the epipolar line. The mosaic method extracts feature points using Harris corner detection, matches the same keypoints across the image sequence using a keypoint descriptor indexing method, and infers the correlation between consecutive images. The depth information of the image sequence is calculated by the active stereo matching and mosaic methods, and the merged depth information is reconstructed into 3D shape information by warping and blending with the image color and texture. The proposed reconstruction method acquires robust 3D distance information and overcomes constraints of place and distance by using a laser slit beam and a stereo camera.


A Head-Eye Calibration Technique Using Image Rectification (영상 교정을 이용한 헤드-아이 보정 기법)

  • Kim, Nak-Hyun;Kim, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC / v.37 no.8 / pp.11-23 / 2000
  • Head-eye calibration is the process of estimating the unknown orientation and position of a camera with respect to a mobile platform, such as a robot wrist. We present a new head-eye calibration technique which can be applied to platforms with rather limited motion capability. In particular, the proposed technique can be applied to find the relative orientation of a camera mounted on a linear translation platform that has no rotation capability. The algorithm finds the rotation using calibration data obtained from pure translations of the camera along two different axes. We have derived a calibration algorithm exploiting the rectification technique in such a way that the rectified images satisfy the epipolar constraint. We present the calibration procedure for both the rotation and the translation components of the camera relative to the platform coordinates. The efficacy of the algorithm is demonstrated through simulations and real experiments.
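The core idea of recovering rotation from two pure translations can be illustrated directly: each platform translation axis is observed in the camera frame as a rotated direction, and two non-parallel direction pairs determine the rotation. The sketch below uses a TRIAD-style construction with made-up axes and angle; the paper's actual algorithm works through image rectification rather than this closed form:

```python
import numpy as np

def rotation_from_two_axes(a1, a2, b1, b2):
    """Recover R with b_i = R a_i from two non-parallel direction pairs:
    build an orthonormal frame from each pair (TRIAD construction)."""
    def triad(u, v):
        e1 = u / np.linalg.norm(u)
        e2 = np.cross(u, v); e2 = e2 / np.linalg.norm(e2)
        return np.column_stack([e1, e2, np.cross(e1, e2)])
    return triad(b1, b2) @ triad(a1, a2).T

# Toy head-eye setup: the platform translates along two known axes a1, a2;
# the camera observes those motion directions as b_i = R a_i.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # 30 deg about z
a1, a2 = np.array([1.0, 0, 0]), np.array([0.0, 0, 1])
b1, b2 = R_true @ a1, R_true @ a2
R_rec = rotation_from_two_axes(a1, a2, b1, b2)
print(np.allclose(R_rec, R_true))   # True
```

In practice the observed directions b_i come from the epipole of each pure-translation image pair, which is where the epipolar constraint enters the calibration.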


Precise Rectification of Misaligned Stereo Images for 3D Image Generation (입체영상 제작을 위한 비정렬 스테레오 영상의 정밀편위수정)

  • Kim, Jae-In;Kim, Tae-Jung
    • Journal of Broadcast Engineering / v.17 no.2 / pp.411-421 / 2012
  • The stagnant growth of the 3D market due to a shortage of 3D movie content is encouraging the development of techniques that reduce production costs. Eliminating the vertical disparity introduced during image acquisition requires the most time and effort in the whole stereoscopic film-making process; this matter is directly related to market competitiveness and is treated as a very important task. The removal of vertical disparity, i.e., image rectification, has long been studied in the photogrammetry field. While computer vision methods focus on fast processing and automation, photogrammetry methods focus on accuracy and precision. However, photogrammetric approaches have not been tried for 3D film-making. In this paper, we propose a photogrammetry-based rectification algorithm that precisely eliminates vertical disparity by reconstructing the geometric relationship at the time of shooting. The proposed algorithm was evaluated against two existing computer vision algorithms by testing the epipolar constraint satisfaction, epipolar line accuracy, and vertical disparity of the resulting images. The proposed algorithm showed better accuracy and precision than the other algorithms and also proved robust to position errors in the tie-points.
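The vertical-disparity test used in such an evaluation can be expressed compactly: warp the tie-points of both images by their rectifying homographies and measure the remaining row difference. This is a sketch with identity homographies and made-up tie-points; the paper's exact metric may differ:

```python
import numpy as np

def mean_vertical_disparity(pts_l, pts_r, H_l, H_r):
    """Mean |y_left - y_right| over tie-points after applying the
    rectifying homographies; ideal rectification drives this to zero."""
    def warp(H, pts):
        q = (H @ np.c_[pts, np.ones(len(pts))].T).T
        return q[:, :2] / q[:, 2:3]
    return float(np.abs(warp(H_l, pts_l)[:, 1] - warp(H_r, pts_r)[:, 1]).mean())

# Identity "rectification" on two made-up tie-point pairs, each with a
# 1-pixel vertical misalignment:
pts_l = np.array([[0.0, 0.0], [10.0, 5.0]])
pts_r = np.array([[3.0, 1.0], [12.0, 4.0]])
I = np.eye(3)
print(mean_vertical_disparity(pts_l, pts_r, I, I))   # 1.0
```

Lower values after applying a candidate algorithm's homographies indicate better satisfaction of the rectified epipolar constraint.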