• Title/Summary/Keyword: Camera calibration data

Camera Calibration when the Accuracies of Camera Model and Data Are Uncertain (카메라 모델과 데이터의 정확도가 불확실한 상황에서의 카메라 보정)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.13 no.1 / pp.27-34 / 2004
  • Camera calibration is an important and fundamental procedure for applying a vision sensor to 3D problems. Recently, many camera calibration methods have been proposed, particularly in the area of robot vision. However, the reliability of the data used in calibration has seldom been considered despite its importance. In addition, no single camera model can guarantee consistently good results under varying conditions. This paper proposes methods to overcome such uncertainties in data and camera models, which are often encountered in practical camera calibration. Using the RANSAC (Random Sample Consensus) algorithm, the few data points with excessively large errors are excluded. Artificial neural networks combined in a two-step structure are trained to compensate for the result of a calibration method based on a particular model under a given condition. The proposed methods are useful because they can be added, when needed, to most existing camera calibration techniques. We applied them to a linear camera calibration method and obtained improved results.
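The outlier-rejection step described above can be prototyped compactly. The sketch below is not the authors' implementation: the 6-point minimal sample, the pixel threshold, and the iteration count are illustrative assumptions around a standard linear (DLT) calibration wrapped in RANSAC.

```python
import numpy as np

def dlt_projection(world, image):
    """Linear (DLT) estimate of a 3x4 projection matrix from >= 6 point pairs."""
    A = []
    for (X, Y, Z), (u, v) in zip(world, image):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def reprojection_error(P, world, image):
    """Per-point pixel distance between projected 3D points and observations."""
    Xh = np.hstack([world, np.ones((len(world), 1))])
    uvw = (P @ Xh.T).T
    return np.linalg.norm(uvw[:, :2] / uvw[:, 2:3] - image, axis=1)

def ransac_dlt(world, image, iters=500, thresh=2.0, seed=0):
    """Exclude points with excessive error, then refit the DLT on the inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(world), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(world), size=6, replace=False)
        P = dlt_projection(world[idx], image[idx])
        inliers = reprojection_error(P, world, image) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return dlt_projection(world[best], image[best]), best
```

The returned boolean mask indicates which correspondences were retained for the final linear fit, which is where a compensating model (such as the paper's two-step neural network) could take over.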

Detection of Calibration Patterns for Camera Calibration with Irregular Lighting and Complicated Backgrounds

  • Kang, Dong-Joong;Ha, Jong-Eun;Jeong, Mun-Ho
    • International Journal of Control, Automation, and Systems / v.6 no.5 / pp.746-754 / 2008
  • This paper proposes a method to detect calibration patterns for accurate camera calibration under the complicated backgrounds and uneven lighting conditions of industrial fields. To measure object dimensions, the preprocessing stage of camera calibration must be able to extract calibration points from a calibration pattern. However, industrial sites for visual inspection rarely provide lighting conditions suited to calibrating the camera of a measurement system. In this paper, a probabilistic criterion is proposed to detect a local set of calibration points, which then guides the extraction of the remaining calibration points in a cluttered background under irregular lighting. Even if only a local part of the calibration pattern is visible, input data for camera calibration can still be extracted. In an experiment using real images, we verified that the method can be applied to camera calibration with poor-quality images obtained under uneven illumination and cluttered backgrounds.
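The paper's probabilistic criterion is not reproduced here, but as a point of reference, a common OpenCV baseline for locating a calibration pattern under uneven illumination combines adaptive thresholding with chessboard corner detection; the pattern size and refinement parameters below are assumptions.

```python
import cv2

def find_pattern_points(gray, pattern_size=(9, 6)):
    """Locate inner chessboard corners in an 8-bit grayscale image.
    Adaptive thresholding and normalization help under uneven lighting."""
    flags = (cv2.CALIB_CB_ADAPTIVE_THRESH
             | cv2.CALIB_CB_NORMALIZE_IMAGE
             | cv2.CALIB_CB_FILTER_QUADS)
    found, corners = cv2.findChessboardCorners(gray, pattern_size, flags=flags)
    if not found:
        return None
    # Refine the detected corners to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
```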

Camera Calibration Using Neural Network with a Small Amount of Data (소수 데이터의 신경망 학습에 의한 카메라 보정)

  • Do, Yongtae
    • Journal of Sensor Science and Technology / v.28 no.3 / pp.182-186 / 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital because it is a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed through complex mathematical modeling and geometric analysis. In contrast, data learning with an artificial neural network can establish the transformation between 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for training, and collecting extensive data accurately in practice demands considerable time and effort with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined using the limited available data. In the second step, this transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained on the generated data. Results of a simulation study show that the proposed method is valid and effective.
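A minimal sketch of the two-step idea, under assumptions of my own (the placeholder projection matrix, the sampled working volume, and the network size): fit a projection matrix from the few real points, synthesize many 3D-2D pairs with it, then train an MLP on the synthetic set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def project(P, world):
    """Project 3D points with a 3x4 projection matrix P."""
    Xh = np.hstack([world, np.ones((len(world), 1))])
    uvw = (P @ Xh.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Step 1: P would come from the small real data set (e.g. a DLT fit).
# Here a placeholder pinhole matrix stands in for that estimate.
P = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Step 2: synthesize many 3D-2D pairs inside an assumed working volume
# and train an MLP to reproduce the 3D -> 2D camera mapping.
rng = np.random.default_rng(0)
world_syn = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 4.0], size=(5000, 3))
image_syn = project(P, world_syn)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(world_syn, image_syn)
```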

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.3 / pp.191-204 / 2014
  • In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be applied to different mapping (e.g., mobile mapping and mapping of inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). In order to fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to the data collection procedure. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera, and the system mounting parameters are defined relative to that reference camera. The proposed approach is easy to implement and computationally efficient. Its major advantage, when compared to available multi-camera system calibration approaches, is that it can be applied to either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly developed, indirectly geo-referenced multi-camera system.
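The notion of mounting parameters relative to a reference camera can be illustrated with plain rigid-transform algebra. This is my own notation, not the authors' single-step bundle adjustment; it only shows how lever-arm and boresight values relate absolute camera poses.

```python
import numpy as np

def absolute_pose(R_ref, t_ref, R_rel, t_rel):
    """Pose of camera i given the reference camera pose (R_ref, t_ref) and the
    mounting parameters (boresight rotation R_rel, lever-arm t_rel)."""
    return R_ref @ R_rel, R_ref @ t_rel + t_ref

def mounting_parameters(R_ref, t_ref, R_i, t_i):
    """Recover the boresight/lever-arm of camera i relative to the reference."""
    R_rel = R_ref.T @ R_i
    t_rel = R_ref.T @ (t_i - t_ref)
    return R_rel, t_rel
```

Fixing these relative quantities as unknowns shared across exposures is what allows the system calibration to be carried out in a single adjustment rather than camera by camera.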

A 2-D Image Camera Calibration using a Mapping Approximation of Multi-Layer Perceptrons (다층퍼셉트론의 정합 근사화에 의한 2차원 영상의 카메라 오차보정)

  • 이문규;이정화
    • Journal of Institute of Control, Robotics and Systems / v.4 no.4 / pp.487-493 / 1998
  • Camera calibration is the process of determining the coordinate relationship between a camera image and its real-world space. Accurate calibration of a camera is necessary for applications that involve quantitative measurement of camera images. However, if the camera plane is parallel or nearly parallel to the calibration board on which the 2-dimensional objects are defined (the so-called "ill-conditioned" case), existing solution procedures do not apply well. In this paper, we propose a neural network-based approach to camera calibration for 2D images formed by a mono camera or a pair of cameras. Multi-layer perceptrons are developed to transform the coordinates of each image point to world coordinates. The validity of the approach is tested with data points that cover the whole 2D space concerned. Experimental results for both the mono-camera and stereo-camera cases indicate that the proposed approach is comparable to Tsai's method [8]. Especially in the stereo-camera case, the approach works better than Tsai's method as the angle between the camera optical axis and the Z-axis increases. Therefore, we believe the approach could be an alternative solution procedure for ill-conditioned camera calibration.
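A minimal sketch of the perceptron-based mapping, with synthetic stand-in data and an assumed network size rather than the authors' setup: the regressor learns the image-to-world coordinate transformation directly from point correspondences.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in training pairs: 2D world coordinates on the calibration board and
# their (noisy, synthetically generated) image pixel coordinates.
rng = np.random.default_rng(0)
world_xy = rng.uniform(0.0, 100.0, size=(200, 2))            # mm on the board
image_uv = 5.0 * world_xy + rng.normal(0.0, 0.5, (200, 2))   # assumed imaging

# Multi-layer perceptron approximating the image -> world mapping.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
mlp.fit(image_uv, world_xy)
predicted_xy = mlp.predict(image_uv[:5])
```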

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.3 / pp.205-216 / 2014
  • Multi-camera systems have been widely used as cost-effective tools for the collection of geospatial data for various applications. In order to fully achieve the potential accuracy of these systems for object-space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, a multi-camera system should be calibrated frequently to confirm the stability of the estimated parameters. Therefore, automated techniques are needed to facilitate and speed up the system calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the signalized object-space targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling approaches is presented. The introduced automated system calibration procedure is then implemented for a newly developed multi-camera system while considering the optimum configuration for data collection. Experimental results from the implemented system calibration procedure are finally presented to verify the feasibility of the proposed automated procedure. A qualitative and quantitative evaluation of the estimated system calibration parameters from two calibration sessions is also presented to confirm the stability of the cameras' interior orientation and system mounting parameters.
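Automatic extraction of signalized targets is often prototyped with a simple blob detector before any labelling or ordering logic is applied. The sketch below is a generic starting point, not the authors' procedure; the area and circularity thresholds are assumptions.

```python
import cv2
import numpy as np

def detect_targets(gray):
    """Generic circular-target (blob) extraction from an 8-bit grayscale image."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30.0            # assumed minimum target size in pixels
    params.filterByCircularity = True
    params.minCircularity = 0.7      # keep roughly circular blobs only
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray)
    # Return blob centers; target identification/labelling would follow separately.
    return np.array([kp.pt for kp in keypoints], dtype=float)
```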

Active Calibration of the Robot/camera Pose using Cylindrical Objects (원형 물체를 이용한 로봇/카메라 자세의 능동보정)

  • 한만용;김병화;김국헌;이장명
    • Journal of Institute of Control, Robotics and Systems / v.5 no.3 / pp.314-323 / 1999
  • This paper introduces a methodology for active calibration of a camera pose (orientation and position) using images of the cylindrical objects that are to be manipulated. This active calibration method differs from passive calibration, in which a specific pattern must be placed at a known position. In active calibration, a camera attached to the robot captures images of the objects to be manipulated. That is, the prespecified position and orientation data of the cylindrical object are transformed into the camera pose through two consecutive image frames. An ellipse can be extracted from each image frame and expressed as a circular-feature matrix. Therefore, two circular-feature matrices and the motion parameters between the two ellipses are sufficient for the active calibration process. This active calibration scheme is very effective for the precise control of a mobile/task robot that needs to be calibrated dynamically. To verify the effectiveness of active calibration, fundamental experiments were performed.
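The circular-feature (conic) matrix of an extracted ellipse can be built from a standard ellipse fit. This is a generic construction under my own conventions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def circular_feature_matrix(contour):
    """Fit an ellipse to an extracted contour and build the symmetric 3x3 conic
    matrix C so that homogeneous points p on the ellipse satisfy p.T @ C @ p = 0."""
    (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
    a, b = w / 2.0, h / 2.0                     # semi-axes
    th = np.deg2rad(angle)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    M = R @ np.diag([1.0 / a**2, 1.0 / b**2]) @ R.T
    c = np.array([cx, cy])
    C = np.zeros((3, 3))
    C[:2, :2] = M
    C[:2, 2] = -M @ c
    C[2, :2] = -M @ c
    C[2, 2] = c @ M @ c - 1.0
    return C
```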

An Algorithm for Ill-conditioned Non-coplanar Camera Calibration (악조건하의 비동일평면 카메라 교정을 위한 알고리즘)

  • Ahn, Taek-Jin;Lee, Moon-Kyu
    • Journal of Institute of Control, Robotics and Systems / v.7 no.12 / pp.1001-1008 / 2001
  • This paper presents a new camera calibration algorithm for ill-conditioned cases in which the camera plane is nearly parallel to a set of non-coplanar calibration boards. For the ill-conditioned case, most existing calibration approaches, such as Tsai's radial-alignment-constraint method, cannot be applied. Recently, for ill-conditioned coplanar calibration, Lee & Lee [16] proposed an iterative algorithm based on the least-squares method. The non-coplanar calibration algorithm presented in this paper is an iterative two-stage procedure that extends the previous coplanar calibration algorithm. In the first stage, the camera position and orientation parameters as well as one radial distortion factor are determined optimally for given values of the scale factor and the focal length. In the second stage, the scale factor and the focal length are locally optimized. This process is repeated until no further improvement can be expected. Computational results are provided to show the performance of the developed algorithm.
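A minimal sketch of the alternating two-stage idea, using a simplified pinhole model with one radial distortion term; the initial guesses, the parameterization, and the use of scipy's least_squares are assumptions of mine rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(world, rvec, t, f, s, k1):
    """Pinhole projection with scale factor s and a single radial term k1
    (a simplified stand-in camera model; principal point omitted)."""
    Xc = Rotation.from_rotvec(rvec).apply(world) + t
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]
    r2 = x * x + y * y
    x, y = x * (1.0 + k1 * r2), y * (1.0 + k1 * r2)
    return np.column_stack([f * s * x, f * y])

def two_stage_calibration(world, image, iters=10):
    rvec, t, k1 = np.zeros(3), np.array([0.0, 0.0, 5.0]), 0.0
    f, s = 800.0, 1.0                          # assumed initial guesses
    for _ in range(iters):
        # Stage 1: pose and radial distortion with scale/focal length fixed.
        res = least_squares(
            lambda p: (project(world, p[:3], p[3:6], f, s, p[6]) - image).ravel(),
            np.r_[rvec, t, k1])
        rvec, t, k1 = res.x[:3], res.x[3:6], res.x[6]
        # Stage 2: scale factor and focal length with the pose fixed.
        res = least_squares(
            lambda p: (project(world, rvec, t, p[0], p[1], k1) - image).ravel(),
            np.array([f, s]))
        f, s = res.x
    return rvec, t, f, s, k1
```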

Accurate Camera Self-Calibration based on Image Quality Assessment

  • Fayyaz, Rabia;Rhee, Eun Joo
    • Journal of Information Technology Applications and Management / v.25 no.2 / pp.41-52 / 2018
  • This paper presents a method for accurate camera self-calibration based on SIFT feature detection and image quality assessment. We performed image quality assessment to select high-quality images for the camera self-calibration process. We defined high-quality images as those that contain little or no blur and have the maximum contrast among images captured within a short period. The image quality assessment includes blur detection and contrast assessment. Blur detection is based on statistical analysis of the energy and standard deviation of the high-frequency components of the images using the Discrete Cosine Transform. Contrast assessment is based on contrast measurement and selection of the highest-contrast images among those captured within a short period. Experimental results show little or no distortion in the perspective view of the images. Thus, the suggested method achieves a camera self-calibration accuracy of approximately 93%.
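The DCT-based blur check and contrast-based selection can be sketched as follows; the cutoff fraction, thresholds, and RMS-contrast choice are assumptions, not the paper's exact statistics.

```python
import numpy as np
from scipy.fft import dctn

def blur_score(gray, cutoff=0.25):
    """Share of spectral energy in high-frequency DCT coefficients;
    low values suggest blur (the cutoff fraction is an assumption)."""
    coeffs = dctn(gray.astype(float), norm='ortho')
    h, w = coeffs.shape
    mask = np.ones_like(coeffs, dtype=bool)
    mask[:int(h * cutoff), :int(w * cutoff)] = False   # drop the low-frequency block
    total = np.sum(coeffs ** 2)
    return np.sum(coeffs[mask] ** 2) / total if total > 0 else 0.0

def contrast_score(gray):
    """RMS contrast: standard deviation of normalized intensities."""
    return float(np.std(gray.astype(float) / 255.0))

def select_frames(frames, blur_thresh=0.02, n_best=5):
    """Keep non-blurred frames, then the highest-contrast ones among them."""
    sharp = [f for f in frames if blur_score(f) > blur_thresh]
    return sorted(sharp, key=contrast_score, reverse=True)[:n_best]
```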

A Study on the Camera Calibration Algorithm of Robot Vision Using Cartesian Coordinates

  • Lee, Yong-Joong
    • Transactions of the Korean Society of Machine Tool Engineers / v.11 no.6 / pp.98-104 / 2002
  • In this study, we developed an algorithm that determines the position and orientation of a camera system in Cartesian coordinates by attaching the camera to the end-effector of an industrial six-axis robot. Starting from Cartesian coordinates to evaluate the suggested algorithm, the orientation vector of the line connecting two points in the coordinate space is refined by a recursive least-squares method that combines the previous result with each new result as the number of image points increases. Therefore, when the camera attached to the end-effector is used at a production site with a calibration mask on which more than eight points are arranged, the simulation confirmed that the position and orientation of the camera system in Cartesian coordinates can be determined even without special measuring equipment.
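The recursive least-squares update referred to above, in its generic textbook form (a linear model rather than the paper's pose parameterization), refines the estimate as each new image point arrives instead of re-solving the full system.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator for a linear model y = x @ theta; each new
    observation refines the previous estimate."""

    def __init__(self, dim, p0=1e3):
        self.theta = np.zeros(dim)          # current parameter estimate
        self.P = np.eye(dim) * p0           # inverse information matrix

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)          # Kalman-style gain vector
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = self.P - np.outer(gain, Px)
        return self.theta
```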