Title/Summary/Keyword: camera calibration

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik; Choi, Seong-Gu; Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2004.11c / pp.30-32 / 2004
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up the reference points, and the latter has low accuracy. In this paper we present a camera calibration algorithm that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance of the object, which also makes distance estimation possible in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image; this average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one step further, from static environments toward real-world uses such as autonomous land vehicles.
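
As an aside, the distance-estimation step above rests on the basic pinhole relation between physical line width, imaged width, and range. A minimal sketch under that assumption, with illustrative values only, not the paper's full algorithm:

```python
# Pinhole relation the abstract builds on: an object of known physical width
# W appears w pixels wide at distance Z when the focal length is f pixels,
# so Z = f * W / w. The paper's method also recovers pose and lens distortion
# from ratios of several line widths; this shows only the distance step,
# with hypothetical numbers.

def distance_from_line_width(f_px: float, width_m: float, width_px: float) -> float:
    """Estimate object distance (m) from its imaged width (pixels)."""
    return f_px * width_m / width_px

# Hypothetical example: 800 px focal length, 0.05 m grid line imaged at 10 px.
print(distance_from_line_width(800.0, 0.05, 10.0))  # -> 4.0 m
```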

On-Site vs. Laboratorial Implementation of Camera Self-Calibration for UAV Photogrammetry

  • Han, Soohee; Park, Jinhwan; Lee, Wonhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.4 / pp.349-356 / 2016
  • This study investigates two camera self-calibration approaches, on-site self-calibration and laboratorial self-calibration, both of which are based on self-calibration theory and implemented using a commercial photogrammetric solution, Agisoft PhotoScan. On-site self-calibration performs camera self-calibration and aerial triangulation using the same aerial photos. Laboratorial self-calibration performs camera self-calibration using photos of a patterned target displayed on a digital panel, then conducts aerial triangulation using the aerial photos. The aerial photos were captured by an unmanned aerial vehicle, and the target photos were taken of a 27-inch LCD monitor and a 47-inch LCD TV in two experiments. Calibration parameters were estimated by the two approaches and the errors of aerial triangulation were analyzed. The results reveal that on-site self-calibration outperforms laboratorial self-calibration in terms of vertical accuracy. By contrast, laboratorial self-calibration achieves better horizontal accuracy if the photos are captured at a greater distance from the target using a larger display panel.
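
The study itself uses Agisoft PhotoScan end to end; as a generic, hypothetical stand-in for the laboratorial step, the OpenCV sketch below estimates intrinsics from photos of a chessboard pattern shown on a display. The pattern size and directory are assumptions:

```python
# Generic intrinsic self-calibration from photos of a displayed target.
# This is an illustrative OpenCV workflow, not PhotoScan's implementation.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner grid of the displayed chessboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("target_photos/*.jpg"):  # hypothetical photo directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns RMS reprojection error, the camera matrix, and distortion terms.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```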

Active Calibration of the Robot/camera Pose using Cylindrical Objects (원형 물체를 이용한 로봇/카메라 자세의 능동보정)

  • 한만용; 김병화; 김국헌; 이장명
    • Journal of Institute of Control, Robotics and Systems / v.5 no.3 / pp.314-323 / 1999
  • This paper introduces a methodology for active calibration of a camera pose (orientation and position) using images of the cylindrical objects that are to be manipulated. This active calibration differs from passive calibration, in which a specific pattern must be placed at a known position. In active calibration, a camera attached to the robot captures images of the objects to be manipulated; that is, the prespecified position and orientation data of the cylindrical object are transformed into the camera pose through two consecutive image frames. An ellipse can be extracted from each image frame and is defined as a circular-feature matrix. Therefore, two circular-feature matrices and the motion parameters between the two ellipses are sufficient for the active calibration process. This active calibration scheme is very effective for the precise control of a mobile/task robot that needs to be calibrated dynamically. To verify its effectiveness, fundamental experiments were performed.
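
The first step of the scheme, extracting an ellipse from each image frame, might look like the following OpenCV sketch; the input image, thresholds, and contour choice are assumptions for illustration, not values from the paper:

```python
# Minimal ellipse extraction: edge detection, contour search, ellipse fit.
import cv2

img = cv2.imread("cylinder.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# fitEllipse needs at least 5 points; keep the largest plausible contour.
rim = max((c for c in contours if len(c) >= 5), key=cv2.contourArea)
(cx, cy), (major, minor), angle = cv2.fitEllipse(rim)
print(f"center=({cx:.1f}, {cy:.1f}), axes=({major:.1f}, {minor:.1f}), angle={angle:.1f}")
```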

Novel Calibration Method for the Multi-Camera Measurement System

  • Wang, Xinlei
    • Journal of the Optical Society of Korea / v.18 no.6 / pp.746-752 / 2014
  • In a multi-camera measurement system, the determination of the external parameters, referred to as the calibration of the system, is one of the vital tasks. In this paper, a new geometrical calibration method based on the theory of the vanishing line is proposed. Using a planar target with three equally spaced parallel lines, the normal vector of the target plane can easily be confirmed in every camera coordinate system of the measurement system. By moving the target into more than two different positions, the rotation matrix can be determined from related theory, i.e., the expression of the same vector in different coordinate systems. Moreover, the translation matrix can be derived from the known distance between the adjacent parallel lines. The main factors affecting the calibration are analyzed. Simulations show that the proposed method achieves robustness and accuracy. Experimental results show that the calibration accuracy can reach 1.25 mm over a range of about 0.5 m. Furthermore, this calibration method can also be used for auto-calibration of the multi-camera measurement system, since parallel-line features exist widely.
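
The core rotation step, recovering R from the same plane normals expressed in two camera frames, can be sketched with an SVD (Kabsch) alignment; the data below is a synthetic stand-in for normals measured from three target poses:

```python
# If the same unit normals are measured in two camera frames (rows of n1 and
# n2, with n2[i] ~ R @ n1[i]), the rotation R between the frames follows from
# a least-squares SVD alignment.
import numpy as np

def rotation_between_frames(n1: np.ndarray, n2: np.ndarray) -> np.ndarray:
    """Least-squares rotation R with n2[i] ~ R @ n1[i] (unit-vector rows)."""
    H = n1.T @ n2
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

# Synthetic check: normals rotated by a known R are recovered exactly.
rng = np.random.default_rng(0)
n1 = rng.normal(size=(3, 3))
n1 /= np.linalg.norm(n1, axis=1, keepdims=True)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:       # force a proper rotation (det = +1)
    R_true[:, 0] *= -1
print(np.allclose(rotation_between_frames(n1, n1 @ R_true.T), R_true))
```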

An Algorithm for Ill-Conditioned Non-coplanar Camera Calibration (악조건하의 비동일평면 카메라 교정을 위한 알고리즘)

  • Ahn, Taek-Jin; Lee, Moon-Kyu
    • Journal of Institute of Control, Robotics and Systems / v.7 no.12 / pp.1001-1008 / 2001
  • This paper presents a new camera calibration algorithm for ill-conditioned cases in which the camera plane is nearly parallel to a set of non-coplanar calibration boards. For the ill-conditioned case, most existing calibration approaches, such as Tsai's radial-alignment-constraint method, cannot be applied. Recently, for ill-conditioned coplanar calibration, Lee & Lee [16] proposed an iterative algorithm based on the least-squares method. The non-coplanar calibration algorithm presented in this paper is an iterative two-stage procedure that extends the previous coplanar calibration algorithm. In the first stage, the camera position and orientation parameters, as well as one radial distortion factor, are determined optimally for given values of the scale factor and the focal length. In the second stage, the scale factor and the focal length are locally optimized. This process is repeated until no further improvement can be expected. Computational results are provided to show the performance of the developed algorithm.
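
The alternation described above can be made concrete as a skeleton. This is a sketch under assumed models; the placeholder projector below is illustrative and not the paper's equations:

```python
# Two-stage alternation: stage 1 fits pose + one radial term k1 with the
# scale factor s and focal length f held fixed; stage 2 refits (s, f).
import numpy as np
from scipy.optimize import least_squares

def residuals(pose_k1, s, f, X, uv):
    """Placeholder pinhole projector with one radial-distortion term k1."""
    rx, ry, rz, tx, ty, tz, k1 = pose_k1
    # Small-angle rotation keeps the sketch short; a real implementation
    # would use a proper rotation parameterization (e.g., Rodrigues).
    R = np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])
    Xc = X @ R.T + np.array([tx, ty, tz])
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]
    r2 = x * x + y * y
    u = s * f * x * (1.0 + k1 * r2)
    v = f * y * (1.0 + k1 * r2)
    return np.concatenate([u - uv[:, 0], v - uv[:, 1]])

def two_stage_calibrate(X, uv, s=1.0, f=1000.0, n_iter=10):
    """Alternate the two stages until the iteration budget runs out."""
    pose = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 0.0])  # rough initial guess
    for _ in range(n_iter):
        pose = least_squares(residuals, pose, args=(s, f, X, uv)).x      # stage 1
        s, f = least_squares(lambda p: residuals(pose, p[0], p[1], X, uv),
                             [s, f]).x                                   # stage 2
    return pose, s, f
```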

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

  • Lari, Zahra; Habib, Ayman; Mazaheri, Mehdi; Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.3 / pp.191-204 / 2014
  • In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be applied to different mapping (e.g., mobile mapping and mapping of inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). In order to fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to the data collection procedure. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera and the system mounting parameters are defined relative to that reference camera. The proposed approach is easy to implement and computationally efficient. Its major advantage, compared to available multi-camera system calibration approaches, is the flexibility of being applicable to either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly developed, indirectly geo-referenced multi-camera system.
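
The mounting-parameter idea reduces to transform composition: estimate each camera's pose relative to the reference camera once, then chain it onto the reference pose at every exposure. A minimal sketch with hypothetical numbers:

```python
# The mounting parameters are a fixed homogeneous transform per camera:
# a boresight rotation plus a lever-arm offset relative to the reference.
import numpy as np

def homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# World pose of the reference camera (e.g., from geo-referencing).
T_world_ref = homogeneous(np.eye(3), np.array([10.0, 5.0, 100.0]))
# Mounting parameters of camera j relative to the reference camera.
T_ref_camj = homogeneous(np.eye(3), np.array([0.3, 0.0, 0.0]))  # 0.3 m lever-arm

# Composition gives camera j's world pose at this exposure station.
T_world_camj = T_world_ref @ T_ref_camj
print(T_world_camj[:3, 3])  # -> [ 10.3   5.  100. ]
```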

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su; Im, Chang-Ju; Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea / v.19 no.1 / pp.77-90 / 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to allow for the user's head movements in such an interface. We propose an efficient camera calibration method to track the 3D position and orientation of the user's head accurately, and we evaluate its performance. The experimental error analysis shows that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation method that has been used for camera calibration. The results of this study can be applied to head tracking for eye-controlled human/computer interfaces and virtual reality technology.
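
The conventional baseline named above, the direct linear transformation, estimates the 3x4 projection matrix as the null vector of a homogeneous system built from 3D-to-2D correspondences. A minimal sketch; coordinate normalization, which practical implementations add for conditioning, is omitted:

```python
# DLT: stack two rows per correspondence and take the right singular vector
# associated with the smallest singular value as the projection matrix P.
import numpy as np

def dlt(X: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Estimate P (3x4) from n >= 6 world points X (n,3) and pixels uv (n,2)."""
    A = []
    for (x, y, z), (u, v) in zip(X, uv):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)  # singular vector of the smallest singular value
```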

Extrinsic calibration using a multi-view camera (멀티뷰 카메라를 사용한 외부 카메라 보정)

  • 김기영; 김세환; 박종일; 우운택
    • Proceedings of the IEEK Conference / 2003.11a / pp.187-190 / 2003
  • In this paper, we propose an extrinsic calibration method for a multi-view camera to obtain an optimal pose in 3D space. Conventional calibration algorithms do not guarantee calibration accuracy at mid-to-long range because pixel errors increase as the distance between the camera and the pattern grows. To compensate for these calibration errors, we first apply Tsai's algorithm to each lens to obtain initial extrinsic parameters. We then estimate the extrinsic parameters using distance vectors obtained from the structural cues of the multi-view camera. After obtaining the estimated extrinsic parameters of each lens, we iteratively carry out a non-linear optimization using the relationship between the camera coordinate system and the world coordinate system. The optimal camera parameters can be used to generate 3D panoramic virtual environments and to support AR applications.
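
A compact, generic stand-in for the initialize-then-refine pipeline, using OpenCV's solvePnP for the initial extrinsics (where the paper uses Tsai's algorithm) followed by Levenberg-Marquardt refinement; the pattern, intrinsics, and pose below are synthetic assumptions:

```python
# Initialize extrinsics for one lens, then refine against reprojection error.
import cv2
import numpy as np

obj = np.random.rand(20, 3).astype(np.float32)          # hypothetical pattern
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
dist = np.zeros(5)                                      # no lens distortion
rvec_true = np.array([[0.1], [0.2], [0.05]])
tvec_true = np.array([[0.0], [0.0], [3.0]])
img, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K, dist)

# Initial extrinsic estimate, then iterative non-linear (LM) refinement.
ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, dist, rvec, tvec)
print(np.allclose(tvec, tvec_true, atol=1e-3))
```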

A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 Hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Proceedings of the KSME Conference / 2000.11a / pp.596-601 / 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and for sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, camera extrinsic parameters need not be determined at every configuration of the robot. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.
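
The registration half of this problem is the classic AX = XB hand-eye formulation, which cv2.calibrateHandEye solves directly, without the paper's distortion-aware one-step extension. A sketch that verifies the call on synthetic poses; real inputs come from robot kinematics and per-station camera calibration:

```python
# Build consistent synthetic gripper-in-base and target-in-camera poses from
# a known camera-to-gripper transform X, then recover X with OpenCV.
import cv2
import numpy as np

def rand_pose(rng):
    """Random proper rotation (det = +1) and translation."""
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(R) < 0:
        R[:, 0] *= -1
    return R, rng.normal(size=3)

rng = np.random.default_rng(1)
R_x, t_x = rand_pose(rng)        # true camera-to-gripper transform (unknown X)
R_bt, t_bt = rand_pose(rng)      # fixed target pose in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):              # ten robot stations
    R_bg, t_bg = rand_pose(rng)  # gripper pose in the base frame
    # target-in-camera = inv(X) @ inv(gripper-in-base) @ target-in-base
    R_g2b.append(R_bg)
    t_g2b.append(t_bg)
    R_t2c.append(R_x.T @ R_bg.T @ R_bt)
    t_t2c.append(R_x.T @ (R_bg.T @ (t_bt - t_bg) - t_x))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print(np.allclose(R_est, R_x, atol=1e-6))
```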

Development of a Camera Self-calibration Method for 10-parameter Mapping Function

  • Park, Sung-Min; Lee, Chang-je; Kong, Dae-Kyeong; Hwang, Kwang-il; Doh, Deog-Hee; Cho, Gyeong-Rae
    • Journal of Ocean Engineering and Technology / v.35 no.3 / pp.183-190 / 2021
  • Tomographic particle image velocimetry (PIV) is a widely used method that measures a three-dimensional (3D) flow field by reconstructing camera images into voxel images. In 3D measurements, the setting and calibration of the cameras' mapping functions significantly impact the obtained results. In this study, a camera self-calibration technique is applied to tomographic PIV to reduce the errors arising from such functions. The measured 3D particles are superimposed on the image to create a disparity map, and camera self-calibration is performed by feeding the disparity-map error back into the particle center values. Synthetic vortex-ring images are generated and the developed algorithm is applied. The optimal result is obtained by applying self-calibration once when the center error is less than 1 pixel and by applying it two to three times when the error exceeds 1 pixel; the maximum recovery ratio is 96%. Further self-calibration passes did not improve the results. The algorithm is also evaluated in an actual rotational-flow experiment, where the optimal result is likewise obtained when self-calibration is applied once, consistent with the synthetic-image results. Therefore, the developed algorithm is expected to be useful for improving the performance of 3D flow measurements.
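
The disparity-map feedback loop can be sketched abstractly: reproject triangulated particle centers, measure the pixel disparity against the detections, and fold the mean error back into the mapping. The toy mapping and data below are assumptions, not the paper's 10-parameter function:

```python
# Abstract self-calibration loop: per pass, the mean reprojection disparity
# is absorbed into per-camera pixel offsets.
import numpy as np

def self_calibrate(project, offsets, particles_3d, detected_2d, n_pass=2):
    """Iteratively fold the mean disparity back into the camera offsets."""
    for _ in range(n_pass):
        reproj = project(particles_3d) + offsets
        disparity = detected_2d - reproj            # pixel error per particle
        offsets = offsets + disparity.mean(axis=0)  # disparity-map correction
    return offsets

# Hypothetical check: a constant pixel bias is absorbed in a single pass.
pts = np.random.rand(100, 3)
proj = lambda X: X[:, :2] * 400.0                   # toy linear mapping function
detected = proj(pts) + np.array([1.5, -0.7])
print(self_calibrate(proj, np.zeros(2), pts, detected, n_pass=1))  # ~ [1.5 -0.7]
```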