• Title/Summary/Keyword: lens calibration


Vision-based Mobile Robot Localization and Mapping using Fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.256-262 / 2004
  • A key task for an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing toward the ceiling is attached to the robot. These features are used in map building and localization. As a preprocessing step, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling and wall regions in the calibrated image. In the initial map building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched to the map. When some features are not matched, those features are added to the map. This map matching and updating process continues until map building is finished. Localization is performed both during map building and when searching for the location of the robot on the map. The features calculated at the position of the robot are matched to the existing map to estimate the real position of the robot, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error in the positioning angle of the robot is ±3 degrees.
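The radial-distortion removal described as the preprocessing step above can be sketched with a simple two-coefficient polynomial model. The coefficients, image geometry, and function name below are illustrative assumptions, not taken from the paper:

```python
def undistort_point(xd, yd, cx, cy, k1, k2):
    """Map a distorted pixel (xd, yd) toward its undistorted position
    using a two-coefficient radial model and fixed-point iteration."""
    x, y = xd - cx, yd - cy          # center the coordinates
    xu, yu = x, y                    # initial guess: no distortion
    for _ in range(10):              # iterate x_u = x_d / (1 + k1*r^2 + k2*r^4)
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / factor, y / factor
    return xu + cx, yu + cy

# Hypothetical principal point and mild barrel distortion
cx, cy, k1, k2 = 320.0, 240.0, -1e-6, 0.0
print(undistort_point(400.0, 300.0, cx, cy, k1, k2))
```

With barrel distortion (negative k1), points away from the principal point are pushed slightly outward, which is the effect the calibration step reverses before segmenting ceiling and wall regions.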


Evaluation of Long-term Stability of Interior Orientation Parameters of a Non-metric Camera (비측량용 카메라 내부표정요소의 장기간 안정성 평가)

  • Jeong, Soo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.3 / pp.283-291 / 2011
  • In the case of metric cameras, not only fiducial marks but also various parameters related to the camera lens are provided to users for the interior orientation process. These parameters are acquired through precise camera calibration in the laboratory by the camera maker and are used in practice for a long time. In the case of non-metric cameras, however, the interior orientation parameters must be determined by the users themselves through camera calibration with a great number of control points, and the long-term stability of these parameters has not been established. Generally, the interior orientation parameters of non-metric cameras are determined anew for every photogrammetric work. This has been an obstacle to using non-metric cameras in photogrammetric projects, because so many control points are required to obtain the interior orientation parameters. In this study, camera calibrations and photogrammetric observations using a non-metric camera were carried out 25 times periodically over 6 months and the results were analyzed. As a result, the long-term stability of the interior orientation parameters of a non-metric camera is evaluated.
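A stability analysis of repeated calibration results, like the 25 periodic calibrations above, can be sketched by summarizing one interior orientation parameter across sessions. The focal-length values below are hypothetical, and the coefficient of variation is only one plausible stability measure, not the paper's specific analysis:

```python
import statistics

def stability(series):
    """Summarize repeated calibration estimates of one interior
    orientation parameter: mean, standard deviation, and the
    coefficient of variation (std/mean) as a stability measure."""
    mean = statistics.mean(series)
    std = statistics.stdev(series)
    return mean, std, std / mean

# Hypothetical focal-length estimates (mm) from periodic calibrations
focal_lengths = [24.31, 24.29, 24.33, 24.30, 24.32, 24.28]
mean, std, cv = stability(focal_lengths)
print(f"mean={mean:.3f} mm, std={std:.4f} mm, CV={cv:.5%}")
```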

Calibrating Stereoscopic 3D Position Measurement Systems Using Artificial Neural Nets (3차원 위치측정을 위한 스테레오 카메라 시스템의 인공 신경망을 이용한 보정)

  • Do, Yong-Tae;Lee, Dae-Sik;Yoo, Seog-Hwan
    • Journal of Sensor Science and Technology / v.7 no.6 / pp.418-425 / 1998
  • Stereo cameras are the most widely used sensing systems for automated machines, including robots, to interact with their three-dimensional (3D) working environments. The position of a target point in 3D world coordinates can be measured with stereo cameras, and camera calibration is an important preliminary step for the task. Existing camera calibration techniques can be classified into two large categories: linear and nonlinear. While linear techniques are simple but somewhat inaccurate, nonlinear ones require a modeling process to compensate for lens distortion and a rather complicated procedure to solve the nonlinear equations. In this paper, a method employing a neural network is described to tackle the problems that arise when existing techniques are applied, and the results are reported. In particular, it is shown experimentally that by utilizing the function approximation capability of multi-layer neural networks, trained by the back-propagation (BP) algorithm to learn the error pattern of a linear technique, the measurement accuracy can be simply and efficiently increased.
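The core idea above, correcting a simple linear calibration by learning its systematic error pattern, can be sketched without the neural network. The paper trains a BP multi-layer network for this mapping; piecewise-linear interpolation of residuals at known control points stands in for it here purely as an illustration, and all values are hypothetical:

```python
def correct(linear_estimate, calib_inputs, calib_residuals):
    """Correct a linear-calibration estimate by interpolating the
    error pattern observed at known calibration points.  A trained
    MLP would replace this interpolation in the paper's method."""
    xs, rs = calib_inputs, calib_residuals
    if linear_estimate <= xs[0]:
        return linear_estimate + rs[0]
    for (x0, r0), (x1, r1) in zip(zip(xs, rs), zip(xs[1:], rs[1:])):
        if linear_estimate <= x1:
            t = (linear_estimate - x0) / (x1 - x0)
            return linear_estimate + r0 + t * (r1 - r0)
    return linear_estimate + rs[-1]

# Hypothetical systematic errors of a linear stereo calibration (mm)
xs = [0.0, 100.0, 200.0]
rs = [0.0, 2.0, -1.0]    # residuals measured at control points
print(correct(150.0, xs, rs))  # estimate plus interpolated correction
```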


Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.35-44 / 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, corresponding contours among image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors from each corresponding point. Then the final parameters are computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
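The first-step error term above, the angle between an epipolar plane and a back-projected ray, can be sketched directly from the geometry. The function name and vectors are illustrative; only the error definition follows the abstract:

```python
import math

def angular_error(plane_normal, ray):
    """Angle (radians) between a back-projected ray and the epipolar
    plane with the given normal: zero when the ray lies in the plane."""
    dot = sum(n * r for n, r in zip(plane_normal, ray))
    nn = math.sqrt(sum(n * n for n in plane_normal))
    rr = math.sqrt(sum(r * r for r in ray))
    # angle between ray and plane = 90 deg minus angle to the normal
    return abs(math.asin(dot / (nn * rr)))

# A ray lying in the plane z = 0 has zero error w.r.t. normal (0, 0, 1)
print(angular_error((0.0, 0.0, 1.0), (1.0, 1.0, 0.0)))  # -> 0.0
print(angular_error((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # -> pi/2
```

Summing this error over all correspondences gives the coarse objective that the first minimization step would drive toward zero.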

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume correspondences previously established among views. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translation and rotation, by using the epipolar constraint on the matched feature points. After choosing the interest points adjacent to more than two contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society / v.7 no.4 / pp.63-70 / 2007
  • Since the fisheye lens has a wide field of view, it can capture the scene and illumination from all directions in a far smaller number of omnidirectional images. Due to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that considers the inlier distribution. First, a one-parameter non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute an essential matrix of the camera with unknown motions, and then determine the camera information: rotation and translation. Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we can achieve a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation.
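The standard-deviation inlier criterion mentioned above can be sketched as follows. The threshold factor and the residual values are hypothetical; only the idea of selecting inliers by their spread around the mean comes from the abstract:

```python
import statistics

def select_inliers(residuals, k=1.0):
    """Keep correspondences whose residual lies within k standard
    deviations of the mean, as a quantitative inlier criterion."""
    mean = statistics.mean(residuals)
    std = statistics.stdev(residuals)
    return [i for i, r in enumerate(residuals)
            if abs(r - mean) <= k * std]

# Hypothetical reprojection residuals; index 4 is a gross outlier
residuals = [0.8, 1.1, 0.9, 1.0, 9.5, 1.2]
print(select_inliers(residuals))  # the outlier is excluded
```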


Automatic Target Recognition for Camera Calibration (카메라 캘리브레이션을 위한 자동 타겟 인식)

  • Kim, Eui Myoung;Kwon, Sang Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.525-534 / 2018
  • Camera calibration is the process of determining parameters such as the focal length of a camera, the position of the principal point, and lens distortions. For this purpose, images of a checkerboard have mainly been used. In automatically recognizing targets in checkerboard images, existing studies had limitations: the user needed a good understanding of the input parameters for recognizing the target, or the whole checkerboard had to appear in the image. In this study, a methodology for automatic target recognition was proposed in which, even if only part of the checkerboard is captured, the index of each target can be automatically assigned by using rectangles containing eight blobs, four in the central portion and four in the outer portion of the checkerboard. In addition, no input parameters are needed. Three conditions were used to automatically extract the center point of a checkerboard target: the distortion of the black and white pattern, the frequency of edge changes, and the ratio of black and white pixels. The direction and numbering of the checkerboard targets were determined using the blobs. Through experiments on two types of checkerboards, it was possible to automatically recognize the checkerboard targets in 36 images within a minute.
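One of the three conditions above, the black/white pixel ratio around a candidate corner, can be sketched on a binarized window. The window size, the 0.4–0.6 tolerance band, and the function name are illustrative assumptions:

```python
def is_target_center(window):
    """Black/white-ratio check, sketched: around a checkerboard
    corner, black and white pixels should each cover roughly half
    of a small window.  `window` is a 2D list of 0 (black) / 1 (white)."""
    flat = [p for row in window for p in row]
    white_ratio = sum(flat) / len(flat)
    return 0.4 <= white_ratio <= 0.6

# Ideal 4x4 window centered on a checkerboard corner
corner = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [1, 1, 0, 0],
          [1, 1, 0, 0]]
print(is_target_center(corner))            # balanced -> True
print(is_target_center([[1, 1], [1, 1]]))  # all white -> False
```

In the paper this check is combined with the pattern-distortion and edge-frequency conditions before a point is accepted as a target center.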

Development of a Fine Digital Sun Sensor for STSAT-2

  • Rhee, Sung-Ho;Lyou, Joon
    • International Journal of Aeronautical and Space Sciences / v.13 no.2 / pp.260-265 / 2012
  • This paper describes a satellite device for fine attitude control of the Science & Technology Satellite-2 (STSAT-2). Based on the mission requirements of STSAT-2, the conventional analog-type sun sensors were found to be inadequate, motivating the development of a compact, fast and fine digital sun sensor (FDSS). The FDSS uses a CMOS image sensor and has an accuracy of better than 0.03 degrees, an update rate of 5 Hz and a weight of less than 800 g. A pinhole-type aperture is substituted for the optical lens to minimize weight. The target processing speed is obtained by utilizing a Field Programmable Gate Array (FPGA), which acquires images from the CMOS sensor, and stores and processes the image data. The sensor accuracy is maintained by a rigorous centroid algorithm. The FDSS design, realization, tests and calibration results are described.
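The centroid computation at the heart of the sensor's accuracy can be sketched as a basic intensity-weighted centroid of the pinhole sun spot. This is the textbook form, not the paper's rigorous variant, and the detector window below is hypothetical:

```python
def centroid(image):
    """Intensity-weighted centroid (row, col) of a spot image;
    sub-pixel spot location is what gives the fine angular accuracy."""
    total = sx = sy = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            total += v
            sy += r * v
            sx += c * v
    return sy / total, sx / total

# Hypothetical pinhole sun spot on a small detector window
spot = [[0, 1, 0],
        [1, 8, 1],
        [0, 1, 0]]
print(centroid(spot))  # -> (1.0, 1.0)
```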

Calibration pattern for accurately-extracting lens array lattice in integral imaging (집적 영상에서 정확한 렌즈 배열 격자 검출을 위한 캘리브레이션 패턴)

  • Jeong, Hyeon-Ah;Cho, Hyunji;Yoo, Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.76-77 / 2017
  • This paper proposes a calibration pattern image for accurately detecting the lattice of a lens array in integral imaging. Detecting the lattice of a lens array requires edge images in the vertical and horizontal directions. If the edges of the input image are not detected well, errors can occur when determining the elemental image size of the lens array. To address this, this paper proposes a calibration pattern image from which edges can be detected well, improving the accuracy. Experiments show that the proposed method outperforms the existing method in detecting the lattice of a lens array in integral imaging.


The Development of adaptive optical dimension measuring system (적응형 광학 치수 측정 장치 개발)

  • 윤경환;강영준;백성훈;강신재
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.690-695 / 2004
  • A new method for measuring the diameter of an object has been developed using laser triangulation. The 3D data of an object are calculated from the two-dimensional image information obtained from the laser stripe using laser triangulation. The system can measure the diameter of a hole not only in a normal plane but also in an inclined plane. Using a zoom lens, experiments can be performed at a magnification optimized for the size of the object. In this paper, the theoretical formulation and calibration of the system are described, and the measuring precision of the system is investigated by experiment.
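The laser-triangulation principle behind the system above can be sketched with the basic range geometry: the laser and the camera view a point from the two ends of a known baseline. The rig dimensions and angles below are hypothetical, not the paper's configuration:

```python
import math

def triangulate_depth(baseline, angle_laser, angle_camera):
    """Perpendicular depth of a laser spot by triangulation, given the
    baseline length and the sightline angles at its two endpoints."""
    # interior angle at the point closes the triangle
    gamma = math.pi - angle_laser - angle_camera
    # law of sines: range from the camera, then perpendicular depth
    range_cam = baseline * math.sin(angle_laser) / math.sin(gamma)
    return range_cam * math.sin(angle_camera)

# Hypothetical rig: 100 mm baseline, both sightlines at 60 degrees
d = triangulate_depth(100.0, math.radians(60), math.radians(60))
print(round(d, 3))
```

Sweeping the laser stripe over the hole and triangulating each stripe pixel this way yields the 3D points from which a diameter can be fitted.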

  • PDF