• Title/Summary/Keyword: epipolar image

Search Results: 94

An Epipolar Rectification for Object Segmentation (객체분할을 위한 에피폴라 Rectification)

  • Jeong, Seung-Do;Kang, Sung-Suk;Cho, Jung-Won;Choi, Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.1C
    • /
    • pp.83-91
    • /
    • 2004
  • An epipolar rectification is the process of transforming the epipolar geometry of a pair of images into a canonical form. This is accomplished by applying a homography to each image that maps the epipole to a predetermined point. The rectified images produced by these homographies must still satisfy the epipolar constraint. The homographies are not unique; however, we find homographies suited to the system's purpose by means of an additional constraint. Since the rectified image pair forms a stereo pair, the disparity can be found efficiently. We can therefore estimate three-dimensional information about objects in an image and apply this information to object segmentation. This paper proposes a rectification method for object segmentation and applies the rectification result to the segmentation itself. By using color together with the relative continuity of disparity, the drawbacks of previous segmentation methods, namely that a single object is split into several regions because its parts differ in color, or that distinct objects are merged into one because they share similar colors, are overcome. Experimental results show that the disparity in the images produced by the proposed rectification method is continuous over each individual object. We therefore confirm that our rectification method is well suited to object segmentation.
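
The following is a minimal, hypothetical sketch of homography-based epipolar rectification using OpenCV, assuming matched point correspondences are already available; it illustrates the general rectification step described above, not the paper's specific additional constraint for choosing the homographies.

```python
import cv2
import numpy as np

def rectify_pair(img1, img2, pts1, pts2):
    """Rectify an uncalibrated stereo pair with homographies (hypothetical helper).
    pts1, pts2: Nx2 float32 arrays of corresponding image points."""
    # Estimate the fundamental matrix from the matches, with RANSAC for robustness.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inl1 = pts1[mask.ravel() == 1]
    inl2 = pts2[mask.ravel() == 1]

    # Compute rectifying homographies H1, H2 that send the epipoles to infinity, so the
    # rectified pair still satisfies the epipolar constraint with horizontal epipolar lines.
    h, w = img1.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(inl1, inl2, F, (w, h))

    # Warp both images; corresponding points now lie on the same scan line,
    # so disparity can be searched along rows only.
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```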

Epipolar Image Resampling from Kompsat-3 In-track Stereo Images (아리랑3호 스테레오 영상의 에피폴라 기하 분석 및 영상 리샘플링)

  • Oh, Jae Hong;Seo, Doo Chun;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.6_1
    • /
    • pp.455-461
    • /
    • 2013
  • Kompsat-3 is an optical high-resolution Earth observation satellite launched in May 2012. The AEISS sensor of the Korean satellite provides 0.7 m panchromatic and 2.8 m multispectral images with a 16.8 km swath width from a sun-synchronous, near-circular orbit at 685 km altitude. Kompsat-3 is more advanced than Kompsat-2, and the improvements include better agility, such as the capability of in-track stereo acquisition. This study investigated the characteristics of the epipolar curves of in-track Kompsat-3 stereo images. To this end, we used the RPCs (Rational Polynomial Coefficients) to derive the epipolar curves over the entire image area and found that a third-order polynomial is required to model the curves. In addition, we observed two different groups of curve patterns caused by the dual CCDs of the AEISS sensor. From the experiment we concluded that a third-order polynomial-based RPC update is required to minimize the image distortion in the sample direction. Finally, we carried out an epipolar resampling experiment; the result showed that the third-order polynomial image transformation produced y-parallax of less than 0.7 pixels.
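
As an illustration of the curve-fitting idea (not the paper's actual RPC-derived data), the sketch below fits a third-order polynomial to synthetic epipolar-curve samples with NumPy; the coefficients and noise level are assumptions.

```python
import numpy as np

# Synthetic stand-in for epipolar-curve samples: in the paper, these (line, sample)
# pairs are derived from the RPCs by projecting a point at several heights.
line = np.linspace(0, 24000, 50)                    # image row coordinates
true_coeff = [1.2e-13, -3.0e-9, 1.05, 15.0]         # assumed cubic trend
sample = np.polyval(true_coeff, line) + np.random.normal(0, 0.1, line.size)

# Fit a third-order polynomial sample = f(line), as the paper finds necessary.
coeff3 = np.polyfit(line, sample, 3)
residual = sample - np.polyval(coeff3, line)
print("RMS residual (pixels):", np.sqrt(np.mean(residual**2)))
```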

SYNTHESIS OF STEREO-MATE THROUGH THE FUSION OF A SINGLE AERIAL PHOTO AND LIDAR DATA

  • Chang, Ho-Wook;Choi, Jae-Wan;Kim, Hye-Jin;Lee, Jae-Bin;Yu, Ki-Yun
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.508-511
    • /
    • 2006
  • Generally, stereo pair images are necessary for 3D viewing. In the absence of a quality stereo pair, it is possible to synthesize a stereo-mate suitable for 3D viewing from a single image and a depth map. In remote sensing, a DEM is usually used as the depth map. In this paper, LiDAR data was used instead of a DEM to make a stereo pair from a single aerial photo. Each LiDAR point was assigned a brightness value from the original single image by registration of the image and the LiDAR data. Then an imaginary exposure station and image plane were assumed. Finally, the LiDAR points with their assigned brightness values were back-projected to the imaginary plane to synthesize the stereo-mate. The imaginary exposure station and image plane were defined by a purely horizontal shift from the original image's exposure station and plane. As a result, the stereo-mate synthesized in this paper fulfilled the epipolar geometry and, together with the original image, yielded an easily perceivable 3D viewing effect. The 3D viewing effect was verified with anaglyph images.
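
A simplified, hypothetical sketch of the back-projection step: brightness-attributed LiDAR points are projected into an imaginary pinhole camera shifted horizontally from the original exposure station. The helper name, the simple pinhole model, and the z-buffer occlusion handling are assumptions, not the paper's exact procedure.

```python
import numpy as np

def synthesize_stereo_mate(points_xyz, brightness, K, R, C, baseline, img_shape):
    """Project brightness-attributed LiDAR points into an imaginary camera shifted
    horizontally by `baseline` from the original exposure station (hypothetical helper).
    points_xyz: Nx3 world coordinates, brightness: length-N array,
    K: 3x3 intrinsics, R: world-to-camera rotation, C: exposure station (world)."""
    h, w = img_shape
    mate = np.zeros((h, w), dtype=np.float32)
    depth = np.full((h, w), np.inf)                     # z-buffer keeps the nearest point

    C_mate = C + R.T @ np.array([baseline, 0.0, 0.0])   # shift along the camera x-axis
    cam = (R @ (points_xyz - C_mate).T).T               # world -> camera coordinates
    in_front = cam[:, 2] > 0
    uvw = (K @ cam[in_front].T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = cam[in_front, 2]
    b = brightness[in_front]

    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi, bi in zip(u[valid], v[valid], z[valid], b[valid]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
            mate[vi, ui] = bi
    return mate
```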

On Design of Visual Servoing using an Uncalibrated Camera and a Calibrated Robot

  • Uchikado, Shigeru;Morita, Masahiko;Osa, Yasuhiro;Mabuchi, Tesuo;Tanya, Kanya
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.23.2-23
    • /
    • 2001
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. It is assumed that the robot arm is calibrated and the camera is uncalibrated. We use a pinhole camera model. The essential notions, namely epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint, are presented; these play an important role in designing visual servoing. For easy understanding of the proposed method, we first show a design for the case of a calibrated camera. The design consists of four steps, and the motion of the robot arm is restricted to a constant direction. This means that the estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space.
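
Since the design hinges on the epipole, here is a minimal sketch, assuming a fundamental matrix F has already been estimated from image correspondences, of recovering the epipoles as the null vectors of F; it is illustrative only and not the authors' servoing controller.

```python
import numpy as np

def epipoles_from_F(F):
    """Return the epipoles of both images from a 3x3 fundamental matrix F.
    e1 satisfies F @ e1 = 0 and e2 satisfies F.T @ e2 = 0 (homogeneous coordinates);
    the epipoles are assumed to be finite so normalization by the last entry is valid."""
    # The right and left singular vectors for the smallest singular value
    # give the null vectors of F and F.T respectively.
    U, S, Vt = np.linalg.svd(F)
    e1 = Vt[-1]            # right null vector: epipole in the first image
    e2 = U[:, -1]          # left null vector: epipole in the second image
    return e1 / e1[2], e2 / e2[2]
```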

Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry (영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법)

  • Lee, Yong-Joo;Kim, Do-Wan;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.5 no.2
    • /
    • pp.46-54
    • /
    • 2015
  • Vision-based tracking has been proposed as a means to efficiently track the large number of construction resources operating on a congested site. In order to obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theory. Detecting and tracking multiple objects requires an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient way of entity matching for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras. Each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line. The proposed method finds the matching pair for a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
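
A simplified sketch of the matching idea, assuming the fundamental matrix F between the two fixed cameras and the detected worker centroids in each view are given; the helper name and the greedy nearest-line assignment are assumptions.

```python
import numpy as np

def match_by_epipolar_distance(F, pts_view1, candidates_view2):
    """For each detected worker centroid in view 1, pick the candidate in view 2
    closest to its epipolar line (simplified sketch of the matching idea).
    pts_view1: iterable of (x, y); candidates_view2: Nx2 array of centroids."""
    matches = []
    for x in pts_view1:
        l = F @ np.array([x[0], x[1], 1.0])   # epipolar line a*x + b*y + c = 0 in view 2
        a, b, c = l
        # Point-to-line distance for every candidate centroid in view 2.
        d = np.abs(a * candidates_view2[:, 0] + b * candidates_view2[:, 1] + c) / np.hypot(a, b)
        matches.append(int(np.argmin(d)))
    return matches
```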

Fast and Accurate Visual Place Recognition Using Street-View Images

  • Lee, Keundong;Lee, Seungjae;Jung, Won Jo;Kim, Kee Tae
    • ETRI Journal
    • /
    • v.39 no.1
    • /
    • pp.97-107
    • /
    • 2017
  • A fast and accurate building-level visual place recognition method built on an image-retrieval scheme using street-view images is proposed. Reference images generated from street-view images usually depict multiple buildings and confusing regions, such as roads, sky, and vehicles, which degrade retrieval accuracy and cause matching ambiguity. The proposed practical database refinement method uses informative reference-image and keypoint selection. For database refinement, the method uses the spatial layout of the buildings in the reference image, specifically a building-identification mask image obtained from a prebuilt three-dimensional model of the site. A global-positioning-system-aware retrieval structure is also incorporated. To evaluate the method, we constructed a dataset over an area of 0.26 km², comprising 38,700 reference images and the corresponding building-identification mask images. The proposed method removed 25% of the database images using informative reference-image selection. It achieved 85.6% recall of the top five candidates in 1.25 s of full processing. The method thus achieved high accuracy at low computational complexity.
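
A hypothetical illustration of the keypoint-selection part of the database refinement, assuming a building-identification mask in which nonzero pixels denote buildings; the function name and mask convention are assumptions, not the paper's implementation.

```python
import numpy as np

def select_informative_keypoints(keypoints, building_mask):
    """Keep only keypoints that fall on building pixels of the identification mask.
    keypoints: iterable of (x, y) pixel positions; building_mask: 2D array, >0 = building."""
    kept = []
    for kp in keypoints:
        x, y = int(round(kp[0])), int(round(kp[1]))
        if 0 <= y < building_mask.shape[0] and 0 <= x < building_mask.shape[1] \
                and building_mask[y, x] > 0:
            kept.append(kp)
    return np.array(kept)
```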

A Study on Extraction Depth Information Using a Non-parallel Axis Image (사각영상을 이용한 물체의 고도정보 추출에 관한 연구)

  • 이우영;엄기문;박찬응;이쾌희
    • Korean Journal of Remote Sensing
    • /
    • v.9 no.2
    • /
    • pp.7-19
    • /
    • 1993
  • In stereo vision, when two parallel-axis images are used, only a small portion of the object is covered, the B/H (base-to-height) ratio is limited by the size of the object, and the depth information is inaccurate. To overcome these difficulties we take a non-parallel-axis image, rotated by θ about the y-axis, and match it against the other, parallel-axis image. The epipolar lines of the non-parallel-axis image are not the same as those of the parallel-axis image, so the two images cannot be matched directly. In this paper, we geometrically transform the non-parallel-axis image using the camera parameters so that its epipolar lines are aligned parallel. NCC (Normalized Cross-Correlation) is used as the matching measure, an area-based matching technique is used to find correspondences, and a 9×9 window size, chosen experimentally, is used. The focal length, which is necessary to obtain depth information for the given object, is calculated with the least-squares method from the CCD camera characteristics and lens properties. Finally, we select 30 test points from the given object, whose elevation varies by up to 150 mm, calculate their heights, and find that the height RMS error is 7.9 mm.
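
A minimal sketch of area-based NCC matching along a rectified epipolar line with a 9×9 window, assuming the two images are already transformed so that corresponding points share the same row; the disparity search range and helper names are assumptions.

```python
import numpy as np

def ncc(patch1, patch2):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch1 - patch1.mean()
    b = patch2 - patch2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_epipolar_row(left, right, x, y, half=4, max_disp=64):
    """Search along the same image row (rectified epipolar line) for the best NCC
    match of a (2*half+1)x(2*half+1) window centered at (x, y) in the left image."""
    template = left[y - half:y + half + 1, x - half:x + half + 1]
    best_score, best_disp = -1.0, 0
    for d in range(max_disp):
        xr = x - d
        if xr - half < 0:
            break
        candidate = right[y - half:y + half + 1, xr - half:xr + half + 1]
        score = ncc(template, candidate)
        if score > best_score:
            best_score, best_disp = score, d
    return best_disp, best_score
```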

Development of Photogrammetric Rectification Method Applying Bayesian Approach for High Quality 3D Contents Production (고품질의 3D 콘텐츠 제작을 위한 베이지안 접근방식의 사진측량기반 편위수정기법 개발)

  • Kim, Jae-In;Kim, Taejung
    • Journal of Broadcast Engineering
    • /
    • v.18 no.1
    • /
    • pp.31-42
    • /
    • 2013
  • This paper proposes a photogrammetric rectification method based on a Bayesian approach as a way to eliminate vertical parallax between stereo images and thus minimize the visual fatigue caused by 3D contents. The image rectification consists of two phases: geometry estimation and epipolar transformation. For geometry estimation, a coplanarity-based relative orientation algorithm was used. To ensure robustness against the mismatches and localization errors introduced by automated tie-point extraction, a Bayesian approach was applied by introducing several prior constraints. For the epipolar transformation, a perspective transformation based on the collinearity condition was used to minimize distortion of the resulting images and modification of the input images. Other algorithms were compared to evaluate performance: for geometry estimation, the traditional relative orientation algorithm, the 8-point algorithm, and a stereo calibration algorithm; for epipolar transformation, the Hartley algorithm and the Bouguet algorithm. The evaluation showed that the proposed algorithm produced results with high accuracy, robustness to error sources, and minimal image modification.
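
For reference, a simplified NumPy sketch of the normalized 8-point algorithm, one of the baseline geometry-estimation methods compared in the paper; this is a generic textbook formulation, not the paper's Bayesian estimator.

```python
import numpy as np

def eight_point_F(pts1, pts2):
    """Normalized 8-point estimate of the fundamental matrix from Nx2 point arrays
    (N >= 8); a simplified sketch of the baseline method."""
    def normalize(p):
        mean = p.mean(axis=0)
        scale = np.sqrt(2.0) / np.mean(np.linalg.norm(p - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1]])
        ph = np.column_stack([p, np.ones(len(p))]) @ T.T
        return ph, T

    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    # Each correspondence contributes one row of the linear system A f = 0
    # derived from the epipolar constraint x2^T F x1 = 0.
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1   # undo the normalization
```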

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.125-132
    • /
    • 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views in advance. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translations and rotations, by using the epipolar constraint on the matched feature points. After choosing interest points adjacent to more than two contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inverse-projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.
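
A minimal sketch of the kind of angular epipolar error used here, assuming the rotation R, translation t, and inverse-projected unit ray vectors are given; the exact error definition in the paper may differ.

```python
import numpy as np

def epipolar_angular_error(R, t, rays1, rays2):
    """Angular deviation of each ray pair from the epipolar (coplanarity) constraint.
    rays1, rays2: 3D viewing vectors obtained by inverse-projecting image points
    through the omnidirectional projection model (assumed given and non-degenerate)."""
    errors = []
    for r1, r2 in zip(rays1, rays2):
        n = np.cross(t, R @ r1)               # normal of the epipolar plane in camera 2
        n = n / np.linalg.norm(n)
        # Angle between the ray and the epipolar plane (zero if perfectly coplanar).
        cos_dev = np.dot(n, r2 / np.linalg.norm(r2))
        errors.append(np.abs(np.arcsin(np.clip(cos_dev, -1.0, 1.0))))
    return np.array(errors)
```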

Analysis on 3D Positioning Precision Using Mobile Mapping System Images in Photogrammetric Perspective (사진측량 관점에서 차량측량시스템 영상을 이용한 3차원 위치의 정밀도 분석)

  • 조우석;황현덕
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.6
    • /
    • pp.431-445
    • /
    • 2003
  • In this paper, we experimentally investigated the precision of 3D positioning using 4S-Van images from a photogrammetric perspective. A 3D calibration target was set up on an outdoor building facade and captured separately by the two CCD cameras installed in the 4S-Van. We then determined the interior orientation parameters for each CCD camera through a self-calibration technique. With the interior orientation parameters computed, bundle adjustment was performed to obtain the exterior orientation parameters of the two CCD cameras simultaneously, using the calibration target's image and object coordinates. The reverse lens-distortion coefficients were computed by the least-squares method so as to introduce lens distortion into the epipolar line. It was shown that the reverse lens-distortion coefficients could transform image coordinates into lens-distorted image coordinates to within about 0.5 pixel. The proposed semi-automatic matching scheme incorporating the lens-distorted epipolar line was applied to scene images captured by the 4S-Van while moving. The experimental results showed that the precision of 3D positioning from 4S-Van images, from a photogrammetric perspective, is within 2 cm at ranges up to 20 m from the camera.
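
A hedged sketch of computing reverse lens-distortion coefficients by least squares, under an assumed radial correction model with made-up coefficients; it only illustrates the idea of mapping an ideal epipolar line back into the distorted image, not the paper's exact model.

```python
import numpy as np

# Assumed forward correction model: r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4),
# mapping distorted radius r_d to ideal radius r_u. The reverse coefficients j1, j2
# map ideal coordinates (e.g. points on an epipolar line) back into the distorted image.
k1, k2 = -2.5e-7, 1.0e-13                 # assumed, not the paper's, coefficients
r_d = np.linspace(0.0, 1200.0, 400)       # distorted radii in pixels
r_u = r_d * (1 + k1 * r_d**2 + k2 * r_d**4)

# Solve r_d = r_u * (1 + j1*r_u^2 + j2*r_u^4) for j1, j2 in the least-squares sense.
A = np.column_stack([r_u**3, r_u**5])
b = r_d - r_u
j1, j2 = np.linalg.lstsq(A, b, rcond=None)[0]

# Residual of the reverse mapping; the paper reports errors of about 0.5 pixel.
r_d_pred = r_u * (1 + j1 * r_u**2 + j2 * r_u**4)
print("max reverse-mapping error (pixels):", np.max(np.abs(r_d_pred - r_d)))
```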