• Title/Summary/Keyword: 3D Calibration


Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1994.10a
    • /
    • pp.510-514
    • /
    • 1994
  • This paper deals with an application of neural networks to camera calibration with a wide-angle lens and 2-D range finding. A wide-angle lens has the advantage of a wide viewing angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera while accounting for lens distortion, and it is trained by the error back-propagation method. The MLP maps between the camera image plane and the plane formed by the structured light. In experiments, camera calibration was performed with a calibration chart printed on a laser printer at 300 d.p.i. resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to examine whether the neural network could effectively compensate for camera distortion. The 2-D range of several objects was measured with a laser range-finding system composed of a camera, a frame grabber, and a laser structured light. The performance of the range-finding system was evaluated through experiments and analysis of the results.
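
The abstract above describes an MLP, trained by back-propagation, that learns the mapping from image-plane coordinates to structured-light-plane coordinates without an explicit distortion model. A minimal sketch of that idea follows; the network size, the synthetic training data, and the scikit-learn regressor are assumptions for illustration, not the authors' implementation.

```python
# Sketch: learn an implicit image-plane -> light-plane mapping with an MLP.
# Synthetic data stands in for the calibration-chart correspondences; the
# network shape and library choice are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical ground-truth mapping with a radial-distortion-like term.
def image_to_plane(uv):
    r2 = np.sum((uv - 0.5) ** 2, axis=1, keepdims=True)
    return uv * (1.0 + 0.3 * r2)               # toy "distorted" relation

uv = rng.uniform(0.0, 1.0, size=(500, 2))      # normalized pixel coordinates
xy = image_to_plane(uv)                        # 2-D points on the light plane

# Multilayer perceptron trained by back-propagation (as in the paper).
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(uv, xy)

test_uv = rng.uniform(0.0, 1.0, size=(50, 2))
err = np.linalg.norm(net.predict(test_uv) - image_to_plane(test_uv), axis=1)
print("mean mapping error:", err.mean())
```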


Statistical analysis for RMSE of 3D space calibration using the DLT (DLT를 이용한 3차원 공간검증시 RMSE에 대한 통계학적 분석)

  • Lee, Hyun-Seob;Kim, Ky-Hyeung
    • Korean Journal of Applied Biomechanics
    • /
    • v.13 no.1
    • /
    • pp.1-12
    • /
    • 2003
  • The purpose of this study was to design a 3D space calibration method that reduces RMSE, based on statistical analysis, when using the DLT algorithm and a control frame. The control frame for 3D space calibration measured 1 × 3 × 2 m with 162 control points attached to it. For the calculation of 3D coordinates, two treatments of the 2D image coordinates were used: the 2D coordinates on each image frame and their mean coordinates. One-way ANOVA and t-tests were used for the statistical analysis, with a significance level of α = .05. The recommendations for reducing RMSE were as follows. 1. Use a control frame composed of 24-44 equally spaced control points. 2. When photographing, locate the control frame at the center of the image plane (image frame) or use a lens with little distortion. 3. When calculating the 3D coordinates, use the mean of the 2D coordinates obtained from all image frames.
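
The DLT algorithm referred to above recovers 11 parameters that map known 3D control points to their 2D image coordinates, after which reprojection RMSE can be reported. A minimal single-camera DLT fit on synthetic data is sketched below; the projection used to generate the points is an assumption for illustration only.

```python
# Sketch: estimate the 11 DLT parameters from 3D control points and their
# 2D image coordinates by linear least squares, then report reprojection RMSE.
import numpy as np

def dlt_fit(X, uv):
    """X: (N,3) control-point coordinates, uv: (N,2) image coordinates."""
    rows, b = [], []
    for (x, y, z), (u, v) in zip(X, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]); b.append(u)
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(b), rcond=None)
    return L                                    # the 11 DLT parameters

def dlt_project(L, X):
    X = np.asarray(X)
    den = L[8] * X[:, 0] + L[9] * X[:, 1] + L[10] * X[:, 2] + 1.0
    u = (L[0] * X[:, 0] + L[1] * X[:, 1] + L[2] * X[:, 2] + L[3]) / den
    v = (L[4] * X[:, 0] + L[5] * X[:, 1] + L[6] * X[:, 2] + L[7]) / den
    return np.stack([u, v], axis=1)

# Synthetic control points inside a 1 x 3 x 2 m volume and a toy projection.
rng = np.random.default_rng(1)
X = rng.uniform([0, 0, 0], [1, 3, 2], size=(40, 3))
P = np.array([[800, 0, 320, 100], [0, 800, 240, 200], [0, 0, 1, 2000]], float)
h = (P @ np.c_[X, np.ones(len(X))].T).T
uv = h[:, :2] / h[:, 2:3] + rng.normal(0, 0.2, size=(len(X), 2))   # pixel noise

L = dlt_fit(X, uv)
rmse = np.sqrt(np.mean(np.sum((dlt_project(L, X) - uv) ** 2, axis=1)))
print("reprojection RMSE (pixels):", rmse)
```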

Stereo Calibration Using Support Vector Machine

  • Kim, Se-Hoon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.250-255
    • /
    • 2003
  • The position of a three-dimensional (3D) point can be measured using a calibrated stereo camera. To obtain more accurate measurements, more accurate camera calibration is required. Many calibration methods exist. Simple linear methods are usually not accurate because of nonlinear lens distortion. Nonlinear methods are more accurate than linear ones, but they increase the computational cost and require a good initial guess. Multi-step methods require knowledge of some parameters of the camera in use. In recent years, such explicit model-based camera calibration has progressed with the development of more precise camera models that correct for lens distortion, but it still has disadvantages, so implicit camera calibration methods have been derived. One popular implicit camera calibration method uses a neural network. In this paper, we propose an implicit stereo camera calibration method for 3D reconstruction using a support vector machine. The SVM learns the relationship between 3D coordinates and image coordinates and is robust in the presence of noise and lens distortion. Simulation results are shown in Section 4.
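
The implicit calibration described above trains a learner to map stereo image coordinates directly to 3D coordinates. A rough sketch with support vector regression follows; the synthetic rectified stereo geometry and the scikit-learn SVR wrapper are assumptions for illustration, not the authors' formulation.

```python
# Sketch: implicit stereo "calibration" as regression from the two image
# coordinates (u_l, v_l, u_r, v_r) to the 3D point (X, Y, Z) with SVR.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy rectified stereo pair: focal length f, baseline b (assumed values).
f, b = 500.0, 0.1
P = rng.uniform([-1, -1, 2], [1, 1, 5], size=(400, 3))       # 3D points (m)
ul = f * P[:, 0] / P[:, 2]
vl = f * P[:, 1] / P[:, 2]
ur = f * (P[:, 0] - b) / P[:, 2]
vr = vl.copy()
img = np.stack([ul, vl, ur, vr], axis=1)
img += rng.normal(0, 0.3, img.shape)                          # pixel noise

# One SVR per output coordinate learns the image -> 3D relation directly.
model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=0.001))
model.fit(img, P)

pred = model.predict(img)
print("mean 3D error (m):", np.linalg.norm(pred - P, axis=1).mean())
```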


Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.3
    • /
    • pp.216-220
    • /
    • 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, meanwhile, is a very popular fuzzy system that can approximate any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it exhibits not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple camera calibration technique for machine vision using the TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses clustered 3D geometric knowledge. The TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method.
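
A TSK fuzzy model of the kind mentioned above blends local linear models through membership-weighted averaging. The sketch below evaluates a tiny first-order TSK system on a single input; the membership functions and consequent coefficients are made-up values for illustration only, not the paper's rule base.

```python
# Sketch: first-order TSK fuzzy inference for a single input x.
# Output = sum_i w_i(x) * (a_i * x + b_i) / sum_i w_i(x),
# where w_i is a Gaussian membership degree for rule i.
import numpy as np

# Illustrative rule base: (center, width) of each Gaussian antecedent
# and (a, b) of each linear consequent. All values are assumptions.
rules = [
    {"c": 0.0, "s": 1.0, "a": 1.0,  "b": 0.0},
    {"c": 2.0, "s": 1.0, "a": 0.5,  "b": 1.0},
    {"c": 4.0, "s": 1.0, "a": -0.2, "b": 3.0},
]

def tsk_eval(x):
    w = np.array([np.exp(-((x - r["c"]) ** 2) / (2 * r["s"] ** 2)) for r in rules])
    y = np.array([r["a"] * x + r["b"] for r in rules])
    return float(np.dot(w, y) / w.sum())       # weighted average of local models

for x in (0.0, 1.0, 3.0):
    print(x, "->", round(tsk_eval(x), 3))
```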

Marker-less Calibration of Multiple Kinect Devices for 3D Environment Reconstruction (3차원 환경 복원을 위한 다중 키넥트의 마커리스 캘리브레이션)

  • Lee, Suwon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.10
    • /
    • pp.1142-1148
    • /
    • 2019
  • Reconstruction of the three-dimensional (3D) environment is a key aspect of augmented reality and augmented virtuality, which utilize and incorporate a user's surroundings. Such reconstruction can be easily realized by employing a Kinect device. However, multiple Kinect devices are required for enhancing the reconstruction density and for spatial expansion. While employing multiple Kinect devices, they must be calibrated with respect to each other in advance, and a marker is often used for this purpose. However, a marker needs to be placed at each calibration, and the result of marker detection significantly affects the calibration accuracy. Therefore, a user-friendly, efficient, accurate, and marker-less method for calibrating multiple Kinect devices is proposed in this study. The proposed method includes a joint tracking algorithm for approximate calibration, and the obtained result is further refined by applying the iterative closest point algorithm. Experimental results indicate that the proposed method is a convenient alternative to conventional marker-based methods for calibrating multiple Kinect devices. Hence, the proposed method can be incorporated in various applications of augmented reality and augmented virtuality that require 3D environment reconstruction by employing multiple Kinect devices.
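
The refinement step mentioned above is the iterative closest point algorithm, which aligns two point clouds once the joint-tracking step has produced a rough initial pose. A compact point-to-point ICP loop is sketched below; the synthetic clouds and the SVD-based rigid fit are a generic formulation, not the paper's implementation.

```python
# Sketch: point-to-point ICP refining a rigid transform between two point
# clouds, using nearest neighbours and the SVD-based rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(A, B):
    """Rigid (R, t) minimizing ||R*A + t - B|| for paired points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # fix a reflection if it appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)               # closest points in dst
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Toy example: dst is src rotated and shifted. In the paper, joint tracking
# supplies the rough initial alignment; here we simply start from identity.
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, size=(300, 3))
ang = np.deg2rad(8.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.05, -0.02, 0.03])

R_est, t_est = icp(src, dst)
print("rotation error:", np.linalg.norm(R_est - R_true))
```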

A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor (CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구)

  • Kim, Jin-Dae;Lee, Jeh-Won;Shin, Chan-Bai
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.23 no.11 s.188
    • /
    • pp.58-67
    • /
    • 2006
  • Because of the variety of signal processing involved and the complicated mathematical analysis, it is not easy to accomplish 3D bin-picking with a non-contact sensor. To overcome these difficulties, a reliable signal processing algorithm and a good sensing device are required. In this research, a 3D laser scanner and a CCD camera are applied as the sensing devices. With these sensors we develop a two-step bin-picking method and a reliable algorithm for the recognition of 3D bin objects. In the proposed bin-picking, the problem is reduced to 2D initial recognition with the CCD camera first, followed by 3D pose detection with the laser scanner. To obtain accurate motion in the robot base frame, hand-eye calibration between the robot's end effector and the sensing device must also be carried out; in this paper, we examine an auto-calibration technique in the sensor calibration step. A new thinning algorithm and a constrained Hough transform are also studied for robustness in real-environment use. The experimental results show robust bin-picking operation on non-aligned 3D hole objects.
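
Hand-eye calibration of the kind mentioned above recovers the fixed transform between the robot's end effector and the sensing device from paired robot and camera poses. The sketch below feeds simulated pose pairs to OpenCV's generic solver; it is a stand-in under assumed poses, not the paper's auto-calibration procedure.

```python
# Sketch: hand-eye calibration (solving AX = XB) from paired gripper-to-base
# and target-to-camera poses, using OpenCV's generic solver on simulated data.
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def rt_to_T(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Assumed ground truth: camera pose in the gripper frame, target pose in base.
T_cam2grip = rt_to_T(Rotation.from_euler("xyz", [5, -3, 10], degrees=True).as_matrix(),
                     [0.02, 0.05, 0.10])
T_targ2base = rt_to_T(Rotation.from_euler("xyz", [0, 180, 0], degrees=True).as_matrix(),
                      [0.5, 0.0, 0.3])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):                            # ten simulated robot stations
    T_grip2base = rt_to_T(Rotation.from_rotvec(rng.uniform(-1, 1, 3)).as_matrix(),
                          rng.uniform(-0.3, 0.3, 3))
    T_targ2cam = np.linalg.inv(T_grip2base @ T_cam2grip) @ T_targ2base
    R_g2b.append(T_grip2base[:3, :3]); t_g2b.append(T_grip2base[:3, 3].reshape(3, 1))
    R_t2c.append(T_targ2cam[:3, :3]);  t_t2c.append(T_targ2cam[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("translation error (m):", np.linalg.norm(t_est.ravel() - T_cam2grip[:3, 3]))
```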

Modeling and Calibration of a 3D Robot Laser Scanning System (3차원 로봇 레이저 스캐닝 시스템의 모델링과 캘리브레이션)

  • Lee Jong-Kwang;Yoon Ji Sup;Kang E-Sok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.1
    • /
    • pp.34-40
    • /
    • 2005
  • In this paper, we describe the modeling of a 3D robot laser scanning system consisting of a laser stripe projector, a camera, and a 5-DOF robot, and propose its calibration method. Nonlinear radial distortion is included in the camera model to improve the calibration accuracy. The 3D range data are calculated using the optical triangulation principle, which exploits the geometrical relationship between the camera and the laser stripe plane. For optimal estimation of the system model parameters, a real-coded genetic algorithm is applied in the calibration process. Experimental results show that the constructed system is able to measure 3D positions within an error of about 1 mm. The proposed scheme can be applied to kinematically dissimilar robot systems without loss of generality and has potential for recognition of unknown environments.
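
The parameter estimation described above uses a real-coded genetic algorithm, in which candidate parameter vectors are kept as floating-point strings and evolved by selection, blend crossover, and Gaussian mutation. A generic sketch minimizing a stand-in calibration cost follows; the cost function and GA settings are assumptions, not the paper's system model.

```python
# Sketch: a real-coded genetic algorithm minimizing a stand-in calibration
# cost (squared residual against placeholder "true" parameters). Selection is
# by tournament, crossover is a blend of parents, mutation is Gaussian.
import numpy as np

rng = np.random.default_rng(0)
true_params = np.array([1.2, -0.4, 0.8, 2.5])   # placeholder model values

def cost(p):
    return float(np.sum((p - true_params) ** 2))  # stands in for reprojection error

def real_coded_ga(dim=4, pop_size=40, gens=200, bounds=(-5.0, 5.0)):
    pop = rng.uniform(bounds[0], bounds[1], size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([cost(p) for p in pop])
        new = [pop[fit.argmin()].copy()]          # elitism: keep the best
        while len(new) < pop_size:
            parents = []
            for _ in range(2):                    # tournament selection
                i, j = rng.integers(pop_size, size=2)
                parents.append(pop[i] if fit[i] < fit[j] else pop[j])
            a, b = parents
            w = rng.uniform(-0.1, 1.1, dim)       # blend (BLX-style) crossover
            child = w * a + (1.0 - w) * b
            mask = rng.random(dim) < 0.2          # mutate ~20% of the genes
            child = child + mask * rng.normal(0.0, 0.05, dim)
            new.append(np.clip(child, bounds[0], bounds[1]))
        pop = np.array(new)
    return pop[np.argmin([cost(p) for p in pop])]

print("estimated parameters:", np.round(real_coded_ga(), 3))
```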

The Slit Beam Laser Calibration Method Based On Triangulation (삼각법을 이용한 슬릿 빔 레이저 캘리브레이션)

  • 주기세
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.05a
    • /
    • pp.168-173
    • /
    • 1999
  • Many sensors, such as lasers and CCD cameras, have been used to obtain 3D information, but most calibration algorithms are inefficient because they require a large amount of memory and experimental data for laser calibration. In this paper, a calibration algorithm for a slit-beam laser based on the triangulation method is introduced to compute 3D information in the real world. The laser beam, mounted orthogonally on an XY table, is projected onto the floor. A CCD camera observes the intersection of the light plane and the object plane. The 3D information is calculated from the observed data and the calibration data. This method saves memory and experimental data because the 3D information is obtained by a simple triangulation method.
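
Slit-beam triangulation of the kind outlined above intersects the viewing ray of each image pixel with the calibrated laser light plane. A minimal ray-plane intersection is sketched below; the pinhole intrinsics and plane equation are placeholder values, not the paper's calibration data.

```python
# Sketch: triangulation of a 3D point as the intersection of a camera ray
# (through pixel (u, v)) with the calibrated slit-laser light plane.
import numpy as np

# Placeholder pinhole intrinsics and light-plane equation n . X = d
# (both expressed in the camera frame; values are illustrative only).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
n = np.array([0.0, -0.7071, 0.7071])   # unit normal of the laser plane
d = 0.35                               # plane offset in metres

def triangulate(u, v):
    # Viewing ray through the pixel: X = s * r, with r the back-projected direction.
    r = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    s = d / np.dot(n, r)               # scale at which the ray meets the plane
    return s * r                       # 3D point in the camera frame

print(triangulate(400.0, 260.0))       # 3D position of one laser-stripe pixel
```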


3D Reconstruction using the Key-frame Selection from Reprojection Error (카메라 재투영 오차로부터 중요영상 선택을 이용한 3차원 재구성)

  • Seo, Yung-Ho;Kim, Sang-Hoon;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.38-46
    • /
    • 2008
  • Key-frame selection is defined as the process of selecting the images needed for 3D reconstruction from a set of uncalibrated images. Camera calibration of the images is also necessary for 3D reconstruction. In this paper, we propose a new key-frame selection method that minimizes the camera calibration error. Using full auto-calibration, we estimate the camera parameters for all selected key-frames. We remove false matches using the fundamental matrix derived algebraically from the estimated camera parameters, and finally we obtain the 3D reconstructed data. Our experimental results show that the proposed approach requires lower time costs than other methods while giving the smallest reconstruction error. The elapsed time for estimating the fundamental matrix is very short, and the error of the estimated fundamental matrix is similar to that of other methods.
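
The outlier-removal step above relies on the fundamental matrix and on a reprojection-style error. The sketch below uses OpenCV's RANSAC estimator on synthetic correspondences to drop false matches and reports the algebraic epipolar residual; the synthetic data and the use of cv2.findFundamentalMat are illustrative assumptions, not the paper's derivation from the estimated camera parameters.

```python
# Sketch: reject false matches with a RANSAC-estimated fundamental matrix and
# report the algebraic epipolar residual x2^T F x1 for the surviving inliers.
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic views of random 3D points (illustrative camera matrices).
X = np.c_[rng.uniform(-1, 1, (200, 2)), rng.uniform(4, 8, (200, 1)), np.ones(200)]
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]
R = cv2.Rodrigues(np.array([[0.0], [0.15], [0.0]]))[0]
P2 = K @ np.c_[R, np.array([[-0.2], [0.0], [0.0]])]

def project(P, X):
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]

pts1, pts2 = project(P1, X), project(P2, X)
pts2[:20] += rng.uniform(30, 80, (20, 2))            # inject 20 false matches

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inliers = mask.ravel().astype(bool)

h1 = np.c_[pts1, np.ones(len(pts1))]
h2 = np.c_[pts2, np.ones(len(pts2))]
residual = np.abs(np.sum(h2 * (h1 @ F.T), axis=1))   # |x2^T F x1| per match
print("kept matches:", inliers.sum(), "of", len(pts1))
print("mean inlier residual:", residual[inliers].mean())
```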