• Title/Summary/Keyword: 2D camera calibration


3D Reconstruction using vanishing points (소실점을 이용한 3차원 재구성)

  • Kim, Sang-Hoon;Choi, Jong-Soo;Kim, Tae-Eun
    • The KIPS Transactions: Part B / v.10B no.5 / pp.515-520 / 2003
  • This paper proposes a calibration method that uses only two images. Camera calibration is required to obtain 3D information from 2D images. Previous approaches needed a calibration object or required more than three images to solve the Kruppa equations; instead, we use the geometric constraints of parallelism and orthogonality that are readily found in man-made scenes. The goal is to obtain the intrinsic and extrinsic camera parameters. The intrinsic parameters are estimated from vanishing points, and the extrinsic parameters, which consist of the camera's rotation matrix and translation vector, are estimated from corresponding points in the two views. From the calibrated parameters, we recover the projection matrix for each viewpoint. These projection matrices are used to recover 3D information about the scene and to visualize new viewpoints.
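
As a rough illustration of the vanishing-point step (a common textbook simplification, not necessarily the paper's exact formulation): with zero skew, unit aspect ratio, and a known principal point, the focal length follows directly from the orthogonality constraint between two vanishing points.

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, c):
    """Focal length from the vanishing points v1, v2 of two orthogonal
    scene directions, assuming zero skew, unit aspect ratio, and a
    known principal point c (all in pixel coordinates).

    Orthogonality of the two directions gives (v1-c).(v2-c) + f^2 = 0.
    """
    v1, v2, c = (np.asarray(a, float) for a in (v1, v2, c))
    d = np.dot(v1 - c, v2 - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))
```

For example, a camera with f = 800 and principal point (320, 240) maps the orthogonal directions (1, 0, 1) and (-1, 0, 1) to vanishing points (1120, 240) and (-480, 240), from which the sketch recovers f = 800.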

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Choi, Seong-Gu;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2004.11c / pp.30-32 / 2004
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods fall broadly into two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up the reference points, and the latter has low accuracy. In this paper we present a camera calibration algorithm that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance to the object, so the proposed method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image; the average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one step further, from static environments toward real-world uses such as autonomous land vehicles.
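
The projective invariant underlying this kind of line-width ratio, shown here as a generic illustration rather than the authors' exact derivation, is the cross-ratio of four collinear points: it survives perspective projection, which is what lets width ratios measured in the image constrain metric distances in the scene.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given as scalar positions
    along their line; invariant under any perspective projection."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Invariance check: apply a projective map x -> (2x + 1) / (x + 3)
# to the points 0, 1, 2, 3 and compare cross-ratios.
pts = [0.0, 1.0, 2.0, 3.0]
mapped = [(2 * x + 1) / (x + 3) for x in pts]
```

Both `cross_ratio(*pts)` and `cross_ratio(*mapped)` evaluate to 4/3, unchanged by the projective map.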

Hard calibration of a structured light for the Euclidian reconstruction (3차원 복원을 위한 구조적 조명 보정방법)

  • 신동조;양성우;김재희
    • Proceedings of the IEEK Conference / 2003.11a / pp.183-186 / 2003
  • A vision sensor should be calibrated before a Euclidean shape reconstruction can be inferred. Point-to-point calibration, also referred to as hard calibration, estimates the calibration parameters from a set of 3D-to-2D point pairs. We propose a new method for determining a set of 3D-to-2D pairs for structured-light hard calibration. The pairs are determined simply from the epipolar geometry between the camera image plane and the projector plane, together with a projector-calibrating grid pattern. The projector calibration is divided into two stages: a world 3D data acquisition stage and a corresponding 2D data acquisition stage. After the 3D data points are derived using the cross ratio, the corresponding 2D points in the projector plane can be determined from the fundamental matrix and the horizontal grid ID of the projector-calibrating pattern. Euclidean reconstruction is then achieved by linear triangulation, and experimental results from simulation are presented.
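
The final linear-triangulation step admits a compact sketch (a generic DLT-style triangulation assuming known 3x4 projection matrices, not the authors' specific pipeline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: recover the 3D point seen at pixel x1 in
    camera P1 and pixel x2 in camera P2 by stacking the cross-product
    constraints x ~ P X and taking the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right-singular vector of smallest singular value
    return X[:3] / X[3]
```

With normalized cameras P1 = [I | 0] and P2 = [I | t], a point such as (0, 0, 5) is recovered exactly from its two noise-free projections.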

Development of Color 3D Scanner Using Laser Structured-light Imaging Method

  • Ko, Youngjun;Yi, Sooyeong
    • Current Optics and Photonics / v.2 no.6 / pp.554-562 / 2018
  • This study presents a color 3D scanner based on the laser structured-light imaging method that can simultaneously acquire the 3D shape data and color of a target object using a single camera. The 3D data acquisition is based on the structured-light imaging method, and the color data are obtained from a natural color image. Because both the laser image and the color image are acquired by the same camera, the 3D data and color data for each pixel are obtained efficiently, avoiding a complicated correspondence algorithm. In addition to the 3D data, the color data help enhance the realism of the object model. The proposed scanner consists of two line lasers, a color camera, and a rotation table. The line lasers are deployed on either side of the camera to eliminate shadow areas on the target object. This study addresses the calibration methods for the camera parameters, the plane equations of the sheets swept by the line lasers, and the center of the rotation table. Experimental results demonstrate the accuracy of the color and 3D data acquired by the proposed scanner.
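
The core geometric step in such a scanner, sketched generically here (the camera matrix and laser-plane coefficients below are hypothetical stand-ins for the calibrated values the paper estimates), is intersecting each laser pixel's viewing ray with the calibrated laser plane:

```python
import numpy as np

def laser_point(pixel, K, n, d):
    """Back-project a laser pixel through intrinsics K and intersect
    the viewing ray with the laser plane n . X + d = 0 (camera at the
    origin); returns the 3D point on the laser sheet."""
    ray = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
    t = -d / (np.asarray(n) @ ray)      # ray parameter at the plane
    return t * ray
```

For instance, with f = 800, principal point (320, 240), and the plane z = 2 (n = (0, 0, 1), d = -2), the pixel (400, 240) maps to the 3D point (0.2, 0, 2).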

Multi-camera Calibration Method for Optical Motion Capture System (광학식 모션캡처를 위한 다중 카메라 보정 방법)

  • Shin, Ki-Young;Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.41-49 / 2009
  • In this paper, a multi-camera calibration algorithm for an optical motion capture system is proposed. The algorithm first performs camera calibration using the DLT (direct linear transformation) method and a 3-axis calibration frame with 7 optical markers. A second calibration is then performed by waving a wand of known length (the so-called wand dance) throughout the desired calibration volume. The first calibration yields not only the camera parameters but also the radial lens distortion parameters; these serve as the initial solution for the optimization performed in the second calibration, whose objective function minimizes the difference in distance between the real markers and the reconstructed markers. To verify the proposed algorithm, re-projection errors are calculated, along with the distances among the markers in the 3-axis frame and in the wand, and the proposed algorithm is compared with a commercial motion capture system. For the 3D reconstruction of the 3-axis frame, the average error is 1.7042 mm with the commercial system and 0.8765 mm with the proposed algorithm, a reduction to 51.4 percent of the commercial system's error. For the distance between the markers on the wand, the average error is 1.8897 mm with the commercial system and 2.0183 mm with the proposed algorithm.
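
The DLT stage can be sketched as follows, as a minimal version without the lens-distortion terms the paper also estimates; the calibration points in the usage check are illustrative:

```python
import numpy as np

def dlt_projection(X3d, x2d):
    """Estimate a 3x4 projection matrix from n >= 6 world points X3d
    (n x 3) and their images x2d (n x 2) by solving the homogeneous
    DLT system A p = 0 with an SVD; the result is defined up to scale."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    return np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 4)
```

Because the matrix is recovered only up to scale, the natural check is to reproject the calibration points and compare pixel coordinates, which is exactly the re-projection error the paper reports.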

A Switched Visual Servoing Technique Robust to Camera Calibration Errors for Reaching the Desired Location Following a Straight Line in 3-D Space (카메라 교정 오차에 강인한 3차원 직선 경로 추종을 위한 전환 비주얼 서보잉 기법)

  • Kim, Do-Hyoung;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.125-134 / 2006
  • We consider the problem of designing a servo system that reaches the desired location while keeping all features in the field of view and following a straight line. In addition, robustness to camera calibration errors is considered in this paper. The proposed approach is based on switching from position-based visual servoing (PBVS) to image-based visual servoing (IBVS) and allows the camera path to follow a straight line. To achieve this objective, a pose estimation method is required: the camera's target pose is estimated from the obtained images without knowledge of the object. A switched control law moves the camera, mounted on a robot end-effector, near the desired location along a straight line in Cartesian space, and then positions it at the desired pose with robustness to camera calibration error. Finally, simulation results show the feasibility of the proposed visual servoing technique.
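
For the IBVS half of the switch, the standard image-point interaction matrix (a textbook construction, not necessarily the authors' exact control law) relates the camera's 6-DOF velocity to feature motion, and a velocity command follows from its pseudo-inverse:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z, mapping the camera velocity screw
    (vx, vy, vz, wx, wy, wz) to the point's image velocity."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law v = -gain * L^+ (s - s*), stacking one
    interaction matrix per tracked point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

A feature set already at its desired image location yields a zero commanded velocity, as expected of a regulating law.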

A Study on Depth Data Extraction for Object Based on Camera Calibration of Known Patterns (기지 패턴의 카메라 Calibration에 기반한 물체의 깊이 데이터 추출에 관한 연구)

  • 조현우;서경호;김태효
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2001.06a / pp.173-176 / 2001
  • In this thesis, a new measurement system is implemented for depth data extraction based on camera calibration with a known pattern. The relation between the 3D world coordinates and the 2D image coordinates is analyzed, a new camera calibration algorithm is established from this analysis, and the internal and external parameters of the CCD camera are obtained. Assuming the measurement plane is horizontal, approximate values corresponding to the minima are obtained from the 2D plane equation and the coordinate transformation equation using the Newton-Raphson method, and they are stored in a look-up table for real-time processing. A slit laser light is projected onto the object, and a 2D image is obtained on the x-z plane of the measurement system. A 3D shape image can be built as 2D (x-z) images are continuously acquired while the object moves in the y direction. The 3D shape images are displayed on a computer monitor using OpenGL software. In the measurement results, we found that the depth data have an error of ±1 percent of the pixel resolution. The error components appear to be due to vibration of the mechanical and optical system, and we expect that improving the system will require more mechanical stability and a precision optical system.
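
The Newton-Raphson-plus-look-up-table idea, reduced to a toy example (the actual plane and coordinate-transformation equations are not reproduced here; square roots stand in for them), is to iterate offline once per table entry so the real-time measurement loop only does table reads:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson iteration x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Offline: converge once per table entry (here f(x) = x^2 - u as a
# stand-in equation), then index the table in the real-time loop.
lut = {u: newton_raphson(lambda x, u=u: x * x - u, lambda x, u=u: 2 * x, 1.0)
       for u in range(1, 10)}
```

Note the `u=u` default arguments, which freeze the loop variable in each lambda; without them every table entry would solve the same final equation.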

Calibration of a Rotating Stereo Line Camera System for Indoor Precise Mapping (실내 정밀 매핑을 위한 회전식 스테레오 라인 카메라 시스템의 캘리브레이션)

  • Oh, Sojung;Shin, Jinsoo;Kang, Jeongin;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.31 no.2 / pp.171-182 / 2015
  • We propose a camera system for acquiring indoor stereo omni-directional images, together with its calibration method. These images can be utilized for indoor precise mapping and sophisticated image-based services. The proposed system is configured as a rotating stereo line camera, providing stereo omni-directional images suitable for stable stereoscopy and precise derivation of object point coordinates. Based on the projection model, we derive a mathematical model for the system calibration. After performing the calibration, we can estimate object points with an accuracy better than ±16 cm in indoor space. The proposed system and calibration method will be applied to precise indoor 3D modeling.

The Camera Calibration Parameters Estimation using The Projection Variations of Line Widths (선폭들의 투영변화율을 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Moon, Sung-Young;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2003.07d / pp.2372-2374 / 2003
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods fall broadly into two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up the reference points, and the latter has low accuracy. In this paper we present a camera calibration algorithm that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as focal length, scale factor, pose, orientation, and distance, although radial lens distortion is not modeled. The advantage of this algorithm is that it can estimate the distance to the object, so the proposed method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image. This advances camera calibration one step further, from static environments toward real-world uses such as autonomous land vehicles.

Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering / v.23 no.4 / pp.381-390 / 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured-lighting device. Multiple parallel lines were used as the structured-light pattern in this study. The proposed algorithm is composed of three stages. In the first stage, the camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, is performed using 6 known pairs of points. In the second stage, the height of the object is computed from the shift of the projected laser beam on the object. Finally, using the height information of each 2-D image point, the corresponding 3-D information is computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the 3-D features extracted by the proposed algorithm was less than 1 to 2 mm. The results show that the proposed algorithm is accurate for 3-D geometric feature detection of an object.
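
The shift-to-height relation used in the second stage reduces, for an idealized geometry in which the laser sheet is inclined at a known angle to the vertical viewing direction (an assumption made here for illustration, not the paper's full calibration), to simple triangulation:

```python
import math

def height_from_shift(shift, laser_angle_deg):
    """Object height from the lateral shift of a projected laser line
    (same length units as the shift), for a laser sheet inclined at
    laser_angle_deg to the vertical viewing direction:
    h = shift / tan(angle)."""
    return shift / math.tan(math.radians(laser_angle_deg))
```

At a 45-degree laser angle, for example, a 10 mm lateral shift of the line corresponds to a 10 mm object height; shallower angles trade height range for resolution.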
