• Title/Abstract/Keywords: Camera estimation method

Search results: 451 items (processing time: 0.028 s)

Camera Motion Parameter Estimation Technique using 2D Homography and LM Method based on Invariant Features

  • Cha, Jeong-Hee
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 5, No. 4
    • /
    • pp.297-301
    • /
    • 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, image feature information has drawbacks: it varies with camera viewpoint, and the amount of information grows over time. The LM (Levenberg-Marquardt) method, a nonlinear least-squares technique for estimating camera extrinsic parameters, also has a weak point: the number of iterations needed to reach the minimum depends on the initial values, and convergence time increases if the process runs into a local minimum. To address these shortcomings, we first propose constructing feature models using geometrically invariant vectors. Second, we propose a two-stage calculation method that improves accuracy and convergence by combining homography estimation with the LM method. In the experiments, we compare and analyze the proposed method with an existing method to demonstrate the superiority of the proposed algorithm.
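The two-stage idea in this abstract, a closed-form initialisation followed by damped nonlinear least squares, can be sketched on a toy problem. The sketch below refines a single rotation angle with a hand-rolled Levenberg-Marquardt loop; the 2D point model and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import math

def lm_refine(theta0, pts_src, pts_dst, iters=50, lam=1e-3):
    """Toy Levenberg-Marquardt refinement of a single rotation angle.

    theta0 stands in for the homography-based initialisation the paper
    describes: a good starting value keeps the iteration count low and
    avoids local minima.
    """
    def residuals(t):
        c, s = math.cos(t), math.sin(t)
        r = []
        for (x, y), (u, v) in zip(pts_src, pts_dst):
            r += [c * x - s * y - u, s * x + c * y - v]
        return r

    def cost(t):
        return sum(e * e for e in residuals(t))

    theta = theta0
    for _ in range(iters):
        eps = 1e-6
        r = residuals(theta)
        # numerical Jacobian of the residual vector w.r.t. theta
        J = [(a - b) / eps for a, b in zip(residuals(theta + eps), r)]
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * e for j, e in zip(J, r))
        step = -Jtr / (JtJ + lam)      # damped Gauss-Newton step
        if cost(theta + step) < cost(theta):
            theta += step              # accept: relax damping
            lam *= 0.5
        else:
            lam *= 10.0                # reject: increase damping
        if abs(step) < 1e-12:
            break
    return theta

# Synthetic example: points rotated by 0.3 rad, refined from a rough start.
src = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
true_theta = 0.3
dst = [(math.cos(true_theta) * x - math.sin(true_theta) * y,
        math.sin(true_theta) * x + math.cos(true_theta) * y) for x, y in src]
est = lm_refine(0.1, src, dst)
```

The accept/reject damping schedule is the classic LM trade-off between gradient descent (large `lam`) and Gauss-Newton (small `lam`).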

A Study on Ceiling Light and Guided Line based Moving Detection Estimation Algorithm using Multi-Camera in Factory

  • Kim, Ki Rhyoung;Lee, Kang Hun;Cho, Su Hyung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 10, No. 4
    • /
    • pp.70-74
    • /
    • 2018
  • To make the flow of goods more available and flexible and to reduce labor costs, many factories and industrial zones around the world are gradually moving to automated solutions. One of these is the automated guided vehicle (AGV). Current AGV operating methods include line tracing, and estimating the AGV's current position and matching it against a factory map to determine its moving direction. In this paper, we propose a ceiling-light and guided-line based moving-direction estimation algorithm using multiple cameras on an AGV in a smart factory, which enables stable AGV operation by compensating for the disadvantages of existing operating methods. The proposed algorithm estimates the AGV's position and direction using general-purpose cameras instead of dedicated sensors. Based on this, the AGV can correct its movement error and estimate its own movement path.

Lane Detection-based Camera Pose Estimation

  • 정호기;서재규
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • Vol. 23, No. 5
    • /
    • pp.463-470
    • /
    • 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that such patterns are costly to make and install, and previous approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are hard to find in everyday environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while changing the horizontal angle with respect to the markings. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and a vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the fact that planar motion does not change the vanishing line of the plane, and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates in each case; this is verified by checking that the lane markings appear upright in the bird's-eye-view image once the pan angle is compensated.
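The key geometric fact this abstract relies on, that the plane's normal vector can be recovered from the vanishing line, can be sketched as follows. Assuming a pinhole camera with intrinsics $K$, the plane normal in camera coordinates is proportional to $K^{\top}l$ for a vanishing line $l$; the axis conventions, function name, and parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def pose_from_vanishing_line(l, fx, fy, cx, cy):
    """Recover tilt and roll angles from a vanishing line l = (a, b, c),
    given in pixel coordinates (a*u + b*v + c = 0).

    The road-plane normal in camera coordinates is n ∝ K^T l, with
    K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    a, b, c = l
    n = (fx * a, fy * b, cx * a + cy * b + c)   # K^T l
    norm = math.sqrt(sum(v * v for v in n))
    nx, ny, nz = (v / norm for v in n)
    # Assumed convention: y points down, z points forward. Tilt is the
    # rotation of the normal toward the optical axis, roll its in-image
    # inclination.
    tilt = math.atan2(nz, ny)
    roll = math.atan2(nx, ny)
    return tilt, roll

# A horizontal vanishing line through the principal point row implies a
# level camera (zero tilt and roll) under these conventions.
tilt, roll = pose_from_vanishing_line((0.0, 1.0, -240.0), 500.0, 500.0, 320.0, 240.0)
```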

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM

  • 이재민;김곤우
    • The Journal of Korea Robotics Society
    • /
    • Vol. 11, No. 1
    • /
    • pp.33-40
    • /
    • 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature utilized for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by perspective transformation, is reconstructed by the Coupled Line Camera algorithm. To fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using the properties of the line camera, the physical size of the rectangle feature can be derived from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then recovers the relative pose between the camera and the feature by computing the homography. To evaluate performance, we analyzed the results of the proposed method against the reference pose in the Gazebo robot simulator.

Estimation of Rotation of Camera Direction and Distance Between Two Camera Positions by Using Fisheye Lens System

  • Aregawi, Tewodros A.;Kwon, Oh-Yeol;Park, Soon-Yong;Chien, Sung-Il
    • Journal of Sensor Science and Technology
    • /
    • Vol. 22, No. 6
    • /
    • pp.393-399
    • /
    • 2013
  • We propose a method of sensing the rotation of a camera and the distance between two camera positions by using a fisheye lens system as a vision sensor. We estimate the rotation angle of the camera with a modified correlation method, clipping similar regions to avoid symmetry problems and suppressing highlight areas. To eliminate the rectification process for the distorted points of a fisheye lens image, we introduce an offline process using the normalized focal length, which does not require the image sensor size. We also formulate an equation for calculating the distance of a camera movement by matching the feature points of a test image with those of a reference image.
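As a rough illustration of why a normalized focal length removes the dependence on the physical sensor size, the sketch below uses the common equidistant fisheye model $r = f\theta$ with the focal length expressed as a fraction of the image width. The specific projection model is an assumption; the abstract does not name one.

```python
def equidistant_radius(theta_rad, f_norm, image_width_px):
    """Radial image distance (in pixels) of a ray at incidence angle
    theta under the equidistant fisheye model r = f * theta.

    f_norm is the focal length normalized by image width, so the result
    is obtained purely in pixel units, without the sensor's physical
    dimensions. (Model and names are illustrative assumptions.)
    """
    return f_norm * image_width_px * theta_rad

# A ray 0.5 rad off-axis, with f_norm = 0.4 on a 640 px wide image.
r = equidistant_radius(0.5, 0.4, 640)
```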

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 1, No. 2
    • /
    • pp.78-87
    • /
    • 2012
  • In this paper, we address the tracking problems caused by camera motion and the rolling-shutter effect of the CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle-filtering method is proposed for simultaneously tracking objects and compensating for camera motion. Assuming that camera motion produces an affine transform of the image between two successive frames, the proposed method uses an elastic registration (ER) algorithm that accounts for global affine motion as well as brightness and contrast changes between images. Because the camera motion is modeled globally, only the global affine model is considered rather than a local one, and only the brightness parameter is used for intensity variation; the contrast parameters of the original ER algorithm are ignored because the illumination change between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction-state error based on camera motion estimation, (iii) update, and (iv) re-sampling. More particles are needed when camera motion introduces a prediction-state error at the prediction step; the proposed method tracks the object of interest robustly by compensating for this error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly on consumer handheld imaging devices.
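The compensation idea in step (ii) can be sketched as a prediction step that first warps each particle by the globally estimated affine camera motion and then adds diffusion noise. The affine parameters here stand in for the output of the elastic registration step described above; the function and parameter names are illustrative, not the paper's implementation.

```python
import random

def predict(particles, affine, noise=2.0, rng=random.Random(0)):
    """Particle-filter prediction with global camera-motion compensation.

    affine = (a, b, tx, c, d, ty) maps a point (x, y) in the previous
    frame to (a*x + b*y + tx, c*x + d*y + ty) in the current frame,
    modeling the camera motion; Gaussian diffusion then models the
    object's own motion.
    """
    a, b, tx, c, d, ty = affine
    out = []
    for x, y in particles:
        # compensate camera motion first, then random-walk object motion
        xn = a * x + b * y + tx + rng.gauss(0.0, noise)
        yn = c * x + d * y + ty + rng.gauss(0.0, noise)
        out.append((xn, yn))
    return out

# With zero diffusion, a pure camera translation of (+5, -3) moves every
# particle deterministically.
moved = predict([(10.0, 20.0)], (1.0, 0.0, 5.0, 0.0, 1.0, -3.0), noise=0.0)
```

Warping the particles rather than the image keeps the compensation cheap: it costs one affine evaluation per particle instead of a full-frame warp.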


Head Pose Estimation Based on Perspective Projection Using PTZ Camera

  • 김진서;이경주;김계영
    • KIPS Transactions on Software and Data Engineering
    • /
    • Vol. 7, No. 7
    • /
    • pp.267-274
    • /
    • 2018
  • This paper describes a head pose estimation method using a PTZ camera. When the camera's extrinsic parameters change due to rotation or translation, the estimated face pose changes as well. This paper proposes a new method for estimating head pose independently of the PTZ camera's rotation and position changes. The proposed method consists of face detection, feature extraction, and pose estimation: faces are detected using MCT features, facial features are extracted using a regression-tree method, and head pose is estimated with the POSIT algorithm. The original POSIT algorithm does not account for camera rotation, so to estimate head pose robustly under changes in the camera's extrinsic parameters, this paper improves POSIT based on perspective projection. Experiments confirm that the proposed method improves the RMSE by about $0.6^{\circ}$ over the existing method.
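The perspective projection model that such pose estimation builds on can be written as a one-line sketch; the pinhole parameters (focal length `f`, principal point `(cx, cy)`) are illustrative assumptions, not values from the paper.

```python
def project(point3d, f, cx, cy):
    """Perspective (pinhole) projection of a camera-frame 3D point onto
    the image plane: u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = point3d
    return (f * X / Z + cx, f * Y / Z + cy)

# A point on the optical axis projects to the principal point.
uv = project((0.0, 0.0, 2.0), 500.0, 320.0, 240.0)
```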

A Vision-based Position Estimation Method Using a Horizon

  • 신종진;남화진;김병주
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • Vol. 15, No. 2
    • /
    • pp.169-176
    • /
    • 2012
  • GPS (Global Positioning System) is widely used for estimating the position of an aerial vehicle. However, GPS may be unavailable due to hostile jamming or for strategic reasons, and a vision-based position estimation method can be effective when GPS does not work properly. In mountainous areas without man-made landmarks, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method for estimating the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the camera's narrow field of view, two images with different camera views are used to estimate a position. The algorithm is tested on real infrared images acquired on the ground, and the experimental results show that the method can be used for estimating the position of an aerial vehicle.

Estimation of Camera Calibration Parameters using Line Corresponding Method

  • 최성구;고현민;노도환
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • Vol. 52, No. 10
    • /
    • pp.569-574
    • /
    • 2003
  • Computer vision systems are broadly adopted in areas such as autonomous vehicles and product-line inspection because they can deal with the environment flexibly. However, to apply such a system in industry, the problem of recognizing its own position parameters must be solved, so the computer vision system requires camera calibration. Camera calibration determines the intrinsic parameters, which describe electrical and optical characteristics, and the extrinsic parameters, which express the pose and position of the camera; these parameters must be re-estimated as the environment changes. In traditional methods, however, camera calibration is performed offline, so the parameters need to be estimated anew each time. In this paper, we propose a method for camera calibration using line correspondences in image sequences under varying environments. By extracting lines, this method statistically compensates for the correspondence errors of point-based methods, and line correspondence is robust to environmental variation. Experimental results show that the error of the estimated parameters is within 1%, demonstrating the method's effectiveness.

ESTIMATION OF PEDESTRIAN FLOW SPEED IN SURVEILLANCE VIDEOS

  • Lee, Gwang-Gook;Ka, Kee-Hwan;Kim, Whoi-Yul
    • Proceedings of the Korean Institute of Broadcast and Media Engineers Conference
    • /
    • IWAIT 2009
    • /
    • pp.330-333
    • /
    • 2009
  • This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of real-world motion from the observed motion vectors. For this purpose, pixel-to-meter conversion factors are calculated from the camera geometry. The height information, which is lost through camera projection, is predicted statistically from simulation experiments. Compared to previous work on flow-speed estimation, our method can be applied to various camera views because it separates the scene parameters explicitly. Experiments were performed on both simulated image sequences and real video: on the simulated videos, the proposed method estimated the flow speed with an average error of about 0.1 m/s, and it also showed promising results on the real video.
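The pixel-to-meter conversion step can be sketched as a one-line helper. The conversion factor itself would come from the camera geometry described in the abstract; the factor, frame rate, and function name below are illustrative assumptions.

```python
def flow_speed_mps(motion_px_per_frame, fps, meters_per_pixel):
    """Convert an average per-frame motion magnitude (in pixels) into a
    real-world speed in m/s, given a scene-specific pixel-to-meter
    conversion factor derived from the camera geometry."""
    return motion_px_per_frame * meters_per_pixel * fps

# e.g. 3 px/frame at 25 fps with 0.018 m/px gives roughly walking pace.
speed = flow_speed_mps(3.0, 25.0, 0.018)
```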
