• Title/Summary/Keyword: Camera Extrinsic Parameter


B-snake Based Lane Detection with Feature Merging and Extrinsic Camera Parameter Estimation (특징점 병합과 카메라 외부 파라미터 추정 결과를 고려한 B-snake기반 차선 검출)

  • Ha, Sangheon;Kim, Gyeonghwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.1
    • /
    • pp.215-224
    • /
    • 2013
  • This paper proposes a lane detection algorithm that is robust on bumpy or slope-changing roads by estimating the extrinsic camera parameters, which represent the pose of the camera mounted on the car. The proposed algorithm assumes that the two lanes are parallel with a predefined width. Lane detection and extrinsic camera parameter estimation are performed simultaneously by applying a B-snake to a motion-compensated feature map merged over consecutive frames. The experimental results show the robustness of the proposed algorithm in various road environments. Furthermore, the accuracy of the extrinsic camera parameter estimation is evaluated by calculating the distance to a preceding car with the estimated parameters and comparing it to the radar-measured distance.
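
The distance-to-preceding-car evaluation above amounts to projecting a ground-contact pixel through the estimated camera pose. As a rough illustration only (not the paper's implementation), a flat-road pinhole model with an assumed camera height and the estimated pitch gives the distance directly; the function name and all values below are hypothetical:

```python
import math

def ground_distance(v, f, cy, cam_height, pitch):
    """Distance along the road to a ground-contact pixel at image row v,
    assuming a flat road and a pinhole camera (pitch in radians, downward
    positive, f and cy in pixels, cam_height in meters)."""
    ray = math.atan((v - cy) / f)   # angle of the pixel ray below the optical axis
    angle = pitch + ray             # total angle below the horizontal
    if angle <= 0:
        raise ValueError("pixel is at or above the horizon")
    return cam_height / math.tan(angle)
```

A small pitch error shifts the recovered distance noticeably, which is why an external reference such as the paper's radar-measured distance is a natural way to evaluate the estimated extrinsics.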

Camera Extrinsic Parameter Estimation using 2D Homography and LM Method based on PPIV Recognition (PPIV 인식기반 2D 호모그래피와 LM방법을 이용한 카메라 외부인수 산출)

  • Cha Jeong-Hee;Jeon Young-Min
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.43 no.2 s.308
    • /
    • pp.11-19
    • /
    • 2006
  • In this paper, we propose a method to estimate camera extrinsic parameters based on projective and permutation invariant point features. Because the feature information used in previous research varies with the camera viewpoint, extracting corresponding points is difficult. Therefore, we propose a method for extracting invariant point features, together with a new matching method that uses a similarity evaluation function and the Graham search method to reduce time complexity and find corresponding points accurately. In the camera extrinsic parameter calculation stage, we also propose a two-stage motion parameter estimation method to improve the convergence of the LM algorithm. In the experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate the superiority of the proposed algorithms.
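
Several entries in this list invoke the "Graham search method" to order candidate points during matching. For reference only, this is the classic convex-hull scan that the name refers to; the sketch below uses Andrew's monotone chain, an equivalent variant of Graham's scan, and is not the papers' matching step itself:

```python
def convex_hull(points):
    """Convex hull of 2D points (monotone chain, a Graham-scan variant).
    Returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # concatenate, dropping duplicated endpoints
```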

Vision-based Camera Localization using DEM and Mountain Image (DEM과 산영상을 이용한 비전기반 카메라 위치인식)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.177-186
    • /
    • 2005
  • In this paper, we propose a vision-based camera localization technique using 3D information created by mapping a DEM onto a mountain image. Typically, the image features used for localization have drawbacks: they vary with the camera viewpoint, and the amount of information increases over time. In this paper, we extract geometric invariant features that are independent of the camera viewpoint and estimate the camera extrinsic parameters through accurate corresponding-point matching with the proposed similarity evaluation function and the Graham search method. We also propose a method for creating 3D information using graph theory and visual clues. The proposed method consists of three stages: invariant point feature vector extraction, 3D information creation, and camera extrinsic parameter estimation. In the experiments, we compare and analyze the proposed method with existing methods to demonstrate its superiority.

Estimation of Camera Calibration Parameters using Line Corresponding Method (선 대응 기법을 이용한 카메라 교정파라미터 추정)

  • 최성구;고현민;노도환
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.10
    • /
    • pp.569-574
    • /
    • 2003
  • Computer vision systems are widely adopted in areas such as autonomous vehicles and product-line inspection because they can deal with the environment flexibly. To be applied in such industries, however, a vision system must solve the problem of recognizing its own position parameters, which requires camera calibration. Camera calibration involves the intrinsic parameters, which describe the camera's electrical and optical characteristics, and the extrinsic parameters, which express the pose and position of the camera, and these parameters have to be re-estimated whenever the environment changes. In traditional methods, however, calibration was performed off-line, so the parameters had to be estimated all over again. In this paper, we propose a method for camera calibration using line correspondences in image sequences from a changing environment. By extracting lines, this method statistically compensates for the correspondence errors of point-based methods, and the line correspondence is robust to environmental change. Experimental results show that the error of the estimated parameters is within 1%, demonstrating the effectiveness of the method.

Camera Extrinsic Parameter Estimation using 2D Homography and Nonlinear Minimizing Method based on Geometric Invariance Vector (기하학적 불변벡터 기반 2D 호모그래피와 비선형 최소화기법을 이용한 카메라 외부인수 측정)

  • Cha, Jeong-Hee
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.187-197
    • /
    • 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, image feature information has drawbacks: it varies with the camera viewpoint, and the amount of information increases over time. The LM (Levenberg-Marquardt) method, which uses nonlinear least-squares evaluation for camera extrinsic parameter estimation, also has a weak point: the number of iterations needed to approach the minimum depends on the initial values, and convergence time increases if the process runs into a local minimum. To complement these shortfalls, we first propose constructing feature models using geometric invariant vectors. Secondly, we propose a two-stage calculation method that improves accuracy and convergence by using 2D homography and the LM method. In the experiments, we compared and analyzed the proposed method with an existing method to demonstrate the superiority of the proposed algorithms.

Camera Motion Parameter Estimation Technique using 2D Homography and LM Method based on Invariant Features

  • Cha, Jeong-Hee
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.4
    • /
    • pp.297-301
    • /
    • 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, image feature information has drawbacks: it varies with the camera viewpoint, and the amount of information increases over time. The LM (Levenberg-Marquardt) method, which uses nonlinear least-squares evaluation for camera extrinsic parameter estimation, also has a weak point: the number of iterations needed to approach the minimum depends on the initial values, and convergence time increases if the process runs into a local minimum. To complement these shortfalls, we first propose constructing feature models using geometric invariant vectors. Secondly, we propose a two-stage calculation method that improves accuracy and convergence by using homography and the LM method. In the experiments, we compare and analyze the proposed method with an existing method to demonstrate the superiority of the proposed algorithms.
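
Several of the entries above describe the same two-stage idea: a closed-form estimate (derived from a 2D homography) supplies the initial value, and Levenberg-Marquardt refines it. The sketch below is a toy 2D analogue, not the papers' pipeline: a Procrustes-style closed form initializes a planar rotation and translation, and a minimal LM loop with numeric Jacobians refines them. All function names are made up for the example.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def initial_estimate(P, Q):
    # Stage 1: closed-form 2D rigid alignment (Procrustes) as the LM start.
    pc, qc = P.mean(0), Q.mean(0)
    A, B = P - pc, Q - qc
    theta = np.arctan2((A[:, 0] * B[:, 1] - A[:, 1] * B[:, 0]).sum(),
                       (A * B).sum())
    t = qc - rot(theta) @ pc
    return np.array([theta, t[0], t[1]])

def residuals(x, P, Q):
    theta, tx, ty = x
    return (P @ rot(theta).T + np.array([tx, ty]) - Q).ravel()

def lm_refine(x, P, Q, iters=30, lam=1e-3):
    # Stage 2: minimal Levenberg-Marquardt loop with a numeric Jacobian.
    for _ in range(iters):
        r = residuals(x, P, Q)
        J = np.empty((r.size, 3))
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = 1e-6
            J[:, j] = (residuals(x + dx, P, Q) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        if np.sum(residuals(x + step, P, Q) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, damp harder
    return x
```

The point the abstracts make is visible here: LM converges quickly and reliably only when the closed-form stage places the start near the minimum; a poor start costs iterations or traps the loop in a local minimum.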

Estimation of Camera Motion Parameter using Invariant Feature Models (불변 특징모델을 이용한 카메라 동작인수 측정)

  • Cha, Jeong-Hee;Lee, Keun-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.191-201
    • /
    • 2005
  • In this paper, we propose a method to calculate camera motion parameters based on efficient invariant features that are independent of the camera viewpoint. Since the feature information in previous research varies with the camera viewpoint, the information content increases and extracting accurate features is difficult. The LM (Levenberg-Marquardt) method for camera extrinsic parameter estimation converges exactly on the goal value, but it has the drawback of taking a long time because its minimization proceeds in small steps. Therefore, in this paper, we propose a method for extracting features that are invariant to the camera viewpoint, together with a two-stage calculation of camera motion parameters that enhances accuracy and convergence by using the motion parameters obtained from a 2D homography as the initial value of the LM method. The proposed method consists of a feature extraction stage, a matching stage, and a motion parameter calculation stage. In the experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate the superiority of the proposed algorithm.

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.4
    • /
    • pp.173-180
    • /
    • 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment, which requires accurate measurement of the relative location between the robot and the features. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from parallel cameras mounted on the front of the robot, detect scale-invariant features in each image using SIFT (scale-invariant feature transform), match the feature points between the two images, and obtain the relative location by 3D reconstruction of the matched points. A stereo camera requires high-precision extrinsic parameters and pixel matching between the two camera images; because we use two ordinary cameras together with scale-invariant feature points rather than a stereo camera, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no additional sensor, and its results can be used simultaneously for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm in the range of less than 2 m and ±15 cm in the range between 2 m and 4 m.
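
For two parallel (rectified) cameras, the 3D reconstruction of a matched point pair reduces to the disparity relation Z = f·B/d. A minimal sketch with hypothetical intrinsics (the 20 cm baseline matches the paper's setup; the focal length, principal point, and function name are assumptions):

```python
def triangulate_parallel(uL, vL, uR, f, cx, cy, baseline):
    """3D point (meters, camera frame) from a match in two parallel cameras.
    Assumes rectified images sharing focal length f (pixels) and principal
    point (cx, cy); uL/uR are the column coordinates of the match."""
    disparity = uL - uR
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / disparity       # depth from similar triangles
    X = (uL - cx) * Z / f              # back-project through the left camera
    Y = (vL - cy) * Z / f
    return X, Y, Z
```

The inverse relation between disparity and depth also explains the error figures quoted above: at 4 m a one-pixel matching error costs far more depth accuracy than it does at 2 m.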

A Calibration Method for Multimodal dual Camera Environment (멀티모달 다중 카메라의 영상 보정방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.9
    • /
    • pp.2138-2144
    • /
    • 2015
  • A multimodal dual camera system has a stereo-like configuration equipped with an infrared thermal camera and an optical camera. This paper presents stereo calibration methods for a multimodal dual camera system using a target board that can be recognized by both the thermal and the optical camera. While a typical stereo calibration method is usually performed with extracted intrinsic and extrinsic camera parameters, consecutive image-processing steps are applied in this paper as follows. Firstly, corner points are detected in the two images, and the pixel error rate, the size difference, and the rotation angle between the two images are calculated using the pixel coordinates of the detected corner points. Secondly, calibration is performed with the calculated values via an affine transform. Lastly, the result image is reconstructed by mapping regions onto the calibrated image.
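
The affine-transform step described above can be sketched as a least-squares fit of a 2x3 affine matrix to the matched corner points. This is a generic sketch, not the paper's code, and assumes the corner detection has already produced corresponding point lists:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix M with dst ~= src @ M[:, :2].T + M[:, 2].
    src, dst: (N, 2) arrays of matched corner points, N >= 3 non-collinear."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src   # rows for the x' equations
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # rows for the y' equations
    A[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return p.reshape(2, 3)
```

Using more than the minimal three corners averages out localization noise, which is presumably why the paper computes its correction from all detected corner points rather than a minimal set.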

LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation (3D-2D 모션 추정을 위한 LiDAR 정보 보간 알고리즘)

  • Jeon, Hyun Ho;Ko, Yun Ho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.12
    • /
    • pp.1865-1873
    • /
    • 2017
  • Feature-based visual SLAM requires 3D positions for the extracted feature points to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information with a low computational burden, while a stereo camera suffers from the impossibility of stereo matching in image regions with simple texture, inaccurate depth values due to errors in the intrinsic and extrinsic camera parameters, and a limited number of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even lead to motion estimation failure. Therefore, in this paper, we propose three interpolation methods that can be applied to interpolate sparse LiDAR data. Simulation results obtained by applying these three methods to a visual odometry algorithm demonstrate that selective bilinear interpolation shows better performance in terms of computation speed and accuracy.
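
Plain bilinear interpolation over a cell of four known LiDAR depth samples looks like the following; the paper's selective variant, which chooses which neighbors participate (e.g. to avoid interpolating across depth discontinuities), is not reproduced here, and the function name is made up:

```python
def bilinear(d00, d10, d01, d11, fx, fy):
    """Bilinear interpolation between four known depth samples at the corners
    of a grid cell; (fx, fy) in [0, 1] is the fractional position inside it."""
    top = d00 * (1.0 - fx) + d10 * fx   # interpolate along the top edge
    bot = d01 * (1.0 - fx) + d11 * fx   # interpolate along the bottom edge
    return top * (1.0 - fy) + bot * fy  # then blend the two edges vertically
```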