• Title/Summary/Keyword: 2D camera calibration

Camera Calibration Using Neural Network with a Small Amount of Data (소수 데이터의 신경망 학습에 의한 카메라 보정)

  • Do, Yongtae
    • Journal of Sensor Science and Technology, v.28 no.3, pp.182-186, 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital because it is a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed through complex mathematical modeling and geometric analysis. In contrast, data learning using an artificial neural network can establish a transformation relation between 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for its learning, and collecting extensive data accurately in practice demands a significant amount of time and work with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined from the limited available data. In the second step, the transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained on the generated data. Results of a simulation study show that the proposed method is valid and effective.
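
The two-step idea summarized above can be illustrated with a short sketch: estimate a projection matrix by direct linear transformation (DLT) from a handful of 3D-2D correspondences, use it to synthesize many more pairs, and train a small network on the synthetic set. This is a minimal illustration, not the authors' implementation; the camera parameters, point counts, and network size below are assumptions.

```python
# Minimal sketch (assumed details): DLT from a few points, then synthetic
# data generation, then a small neural network trained on the synthetic set.
import numpy as np
from sklearn.neural_network import MLPRegressor

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix from >= 6 point correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, world_pts):
    """Project 3D points with a 3x4 matrix and return pixel coordinates."""
    homog = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

# Placeholder ground truth used only to fabricate the small measured data set.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
world_small = np.random.uniform(-1.0, 1.0, (8, 3))        # the "small" data set
image_small = project(P_true, world_small)

# Step 1: projection matrix from the limited measured data.
P_est = dlt_projection_matrix(world_small, image_small)

# Step 2: generate abundant synthetic pairs and train the network on them.
world_synth = np.random.uniform(-1.0, 1.0, (5000, 3))
image_synth = project(P_est, world_synth)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(world_synth, image_synth)                         # learns the 3D -> 2D map
```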

Learning the nonlinearity of a camera calibration model using GMDH algorithm (GMDH 알고리즘에 의한 카메라 보정 모델의 비선형성 학습)

  • Kim, Myoung-Hwan;Do, Yong-Tae
    • Journal of Sensor Science and Technology, v.14 no.2, pp.109-115, 2005
  • Calibration is a prerequisite for employing a camera as a 3D sensor in automated machines such as robots. Because accurate sensing is possible only when the vision sensor is calibrated accurately, many different approaches and models have been proposed to increase calibration accuracy. A particularly important factor that greatly affects calibration accuracy is the nonlinearity in the mapping between the 3D world and the corresponding 2D image. In this paper, the GMDH algorithm is used to learn this nonlinearity without physical modeling. The proposed technique can be effective in various situations where the levels of noise and the characteristics of nonlinear distortion differ. In simulations and an experiment, the proposed technique showed good and reliable results.
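
As a rough illustration of the kind of GMDH (Group Method of Data Handling) polynomial network the abstract refers to, the sketch below fits quadratic two-input neurons by least squares, keeps the neurons with the lowest validation error, and stacks a few such layers. It is a generic GMDH sketch, not the authors' model; the layer count and selection size are arbitrary.

```python
# Generic GMDH-style sketch (assumed hyper-parameters): quadratic two-input
# neurons fitted by least squares, selected on a validation split, stacked.
import numpy as np
from itertools import combinations

def quad_features(a, b):
    """Feature matrix of the quadratic polynomial in two inputs a and b."""
    return np.column_stack([np.ones_like(a), a, b, a * b, a * a, b * b])

def fit_neuron(a, b, y):
    coef, *_ = np.linalg.lstsq(quad_features(a, b), y, rcond=None)
    return coef

def gmdh_fit(X_tr, y_tr, X_va, y_va, n_layers=2, keep=4):
    """Return the selected neurons of each layer, best-first."""
    layers = []
    for _ in range(n_layers):
        candidates = []
        for i, j in combinations(range(X_tr.shape[1]), 2):
            c = fit_neuron(X_tr[:, i], X_tr[:, j], y_tr)
            pred_va = quad_features(X_va[:, i], X_va[:, j]) @ c
            candidates.append((np.mean((pred_va - y_va) ** 2), i, j, c))
        candidates.sort(key=lambda t: t[0])
        best = candidates[:keep]
        layers.append(best)
        # Outputs of the kept neurons become the inputs of the next layer.
        X_tr = np.column_stack([quad_features(X_tr[:, i], X_tr[:, j]) @ c
                                for _, i, j, c in best])
        X_va = np.column_stack([quad_features(X_va[:, i], X_va[:, j]) @ c
                                for _, i, j, c in best])
    return layers

def gmdh_predict(layers, X):
    for best in layers:
        X = np.column_stack([quad_features(X[:, i], X[:, j]) @ c
                             for _, i, j, c in best])
    return X[:, 0]          # output of the best neuron in the final layer
```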

Parameter Calibration of Laser Scan Camera for Measuring the Impact Point of Arrow (화살 탄착점 측정을 위한 레이저 스캔 카메라 파라미터 보정)

  • Baek, Gyeong-Dong;Cheon, Seong-Pyo;Lee, In-Seong;Kim, Sung-Shin
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.21 no.1, pp.76-84, 2012
  • This paper presents a measurement system for an arrow's point of impact using a laser scan camera and describes the image calibration method. Calibration of the distorted image is broadly divided into explicit and implicit methods. The explicit method models the camera's optical properties directly and adjusts its physical parameters, while the implicit method relies on a calibration plate that defines the relations between image pixels and target positions. To find the relation between image and target positions in the implicit method, we propose a polynomial model based on performance criteria that overcomes some limitations of conventional image calibration models, such as the over-fitting problem. The proposed method was verified with 2D arrow positions captured by a SICK Ranger-D50 laser scan camera.
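
The implicit, plate-based calibration described above amounts to fitting a polynomial mapping from image pixels to target-plane positions and choosing the polynomial order so that it does not over-fit. The sketch below shows one generic way to do this with a held-out validation split; the order range and data handling are assumptions, not details from the paper.

```python
# Sketch of implicit plate calibration (assumed order range and split):
# fit pixel -> target-plane polynomial mappings and select the order that
# generalizes best on held-out points.
import numpy as np

def poly_terms(u, v, order):
    """All monomials u^i * v^j with i + j <= order."""
    return np.column_stack([u**i * v**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_mapping(uv, xy, order):
    coef, *_ = np.linalg.lstsq(poly_terms(uv[:, 0], uv[:, 1], order), xy, rcond=None)
    return coef

def apply_mapping(coef, uv, order):
    return poly_terms(uv[:, 0], uv[:, 1], order) @ coef

def select_order(uv_tr, xy_tr, uv_va, xy_va, orders=(1, 2, 3, 4, 5)):
    """Pick the polynomial order with the lowest validation error."""
    best = None
    for order in orders:
        coef = fit_mapping(uv_tr, xy_tr, order)
        err = np.mean(np.linalg.norm(apply_mapping(coef, uv_va, order) - xy_va, axis=1))
        if best is None or err < best[0]:
            best = (err, order, coef)
    return best              # (validation error, chosen order, coefficients)
```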

A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor (CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구)

  • Kim, Jin-Dae;Lee, Jeh-Won;Shin, Chan-Bai
    • Journal of the Korean Society for Precision Engineering, v.23 no.11 s.188, pp.58-67, 2006
  • Owing to the variety of signal processing required and the complicated mathematical analysis, it is not easy to accomplish 3D bin-picking with a non-contact sensor. To solve these difficulties, a reliable signal processing algorithm and a good sensing device are needed. In this research, a 3D laser scanner and a CCD camera are applied as sensing devices. With these sensors, we developed a two-step bin-picking method and a reliable algorithm for the recognition of 3D bin objects. In the proposed bin-picking, the problem is reduced first to 2D initial recognition with the CCD camera, and then to 3D pose detection with the laser scanner. To obtain good movement in the robot base frame, hand-eye calibration between the robot's end effector and the sensing device must also be carried out; in this paper, we examine an auto-calibration technique in the sensor calibration step. A new thinning algorithm and a constrained Hough transform are also studied for robustness in real-environment use. The experimental results confirmed robust bin-picking operation on non-aligned 3D hole objects.
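
The hand-eye calibration step mentioned in this abstract can be sketched with OpenCV's calibrateHandEye (available since OpenCV 4.1). The robot and target poses below are random placeholders, so the printed result is meaningless; in a real setup they would come from the robot controller and from extrinsic calibration of the sensor against a known target.

```python
# Sketch of the hand-eye calibration step using cv2.calibrateHandEye
# (OpenCV >= 4.1). All poses below are random placeholders.
import cv2
import numpy as np

R_gripper2base, t_gripper2base = [], []   # gripper pose in the robot base frame
R_target2cam, t_target2cam = [], []       # target pose in the camera frame
for _ in range(10):                       # placeholder data for ten robot poses
    rvec = np.random.uniform(-0.5, 0.5, 3)
    R_gripper2base.append(cv2.Rodrigues(rvec)[0])
    t_gripper2base.append(np.random.uniform(-0.1, 0.1, (3, 1)))
    R_target2cam.append(cv2.Rodrigues(-rvec)[0])
    t_target2cam.append(np.random.uniform(-0.1, 0.1, (3, 1)))

# With real measurements this returns the fixed camera-to-gripper transform;
# with the placeholders above the numbers are meaningless.
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI)
print(R_cam2gripper, t_cam2gripper.ravel())
```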

3D Calibration Method on Large-Scale Hull Pieces Profile Measurement using Multi-Slit Beams (선박용 곡판형상의 실시간 측정을 위한 다중 슬릿빔 보정법)

  • Kim, ByoungChang;Lee, Se-Han
    • Journal of Institute of Control, Robotics and Systems, v.19 no.11, pp.968-973, 2013
  • In the transportation industry, and especially in the shipbuilding process, 3D surface measurement of large-scale hull pieces is needed for fabrication and assembly. We suggest an efficient method for checking the shape of curved plates under the forming operation in a short time by measuring 3D profiles along multiple lines of the target surface. For accurate profile reconstruction, 2D camera calibration and 3D calibration using gauge blocks were performed. The evaluation test shows that the measurement accuracy is within the tolerance required in the shipbuilding process.
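
The 2D camera calibration step mentioned in this abstract is commonly done with a planar checkerboard; a generic OpenCV sketch follows. The board geometry, square size, and image file names are assumptions, not details from the paper.

```python
# Generic checkerboard calibration sketch (assumed board geometry and files).
import glob
import cv2
import numpy as np

pattern = (9, 6)                           # inner-corner count of the assumed board
square = 0.025                             # assumed square size in metres
template = np.zeros((pattern[0] * pattern[1], 3), np.float32)
template[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):      # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(template)
        img_points.append(corners)

if img_points:
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("reprojection RMS error (pixels):", rms)
```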

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete, v.33 no.5, pp.535-544, 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how the coordinates of the 3D real world are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, is essential for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with certain specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement as it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. This approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
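
The PnP step of such a framework can be sketched as follows: given the known 3D corner coordinates of a standardized material and the matching 2D keypoints detected in the image, cv2.solvePnP recovers the rotation and translation, from which the camera position follows. The material dimensions, intrinsics, and pixel coordinates below are illustrative assumptions.

```python
# Sketch of the PnP step (assumed material size, intrinsics, and detections).
import cv2
import numpy as np

# 3D corners of an assumed 1.2 m x 0.6 m concrete form, in metres (z = 0 plane).
object_pts = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0],
                       [1.2, 0.6, 0.0], [0.0, 0.6, 0.0]])
# Matching 2D keypoints detected in the image (placeholder pixel values).
image_pts = np.array([[410.0, 640.0], [980.0, 655.0],
                      [965.0, 330.0], [420.0, 350.0]])
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)                               # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()          # camera centre in the world frame
print("camera position [m]:", camera_position)
```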

Accurate Camera Calibration Using GMDH Algorithm (GMDH 알고리즘을 이용한 정확한 카메라의 보정기법)

  • Kim, Myoung-Hwan;Do, Yong-Tae
    • Proceedings of the KIEE Conference, 2004.11c, pp.592-594, 2004
  • Camera calibration is the important problem of determining the relationship between the 3D real world and the 2D camera image. Existing calibration methods can be classified into linear and non-linear models. Linear methods are simple and robust against noise, but their accuracy is generally poor. In comparison, if the nonlinearity, which is due mainly to lens distortion, is corrected, the accuracy can be better. However, as the optical characteristics of lenses are diverse, no single non-linear method is always effective across different vision systems. In this paper, we propose a new approach that corrects the calibration error of a linear method using the GMDH algorithm. The proposed technique is simple in concept and showed improved accuracy in various cases.
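
The underlying idea of correcting a linear calibration's residual error with a learned model can be sketched briefly; here a cubic polynomial regressor stands in for GMDH purely to keep the example short, and the data are synthetic.

```python
# Sketch of residual correction of a linear calibration; a cubic polynomial
# regressor stands in for GMDH, and the distortion below is synthetic.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# uv_lin: pixels predicted by a linear (DLT-style) calibration for known points;
# uv_obs: the pixels actually observed (here fabricated with a smooth distortion).
uv_lin = np.random.uniform(0.0, 640.0, (200, 2))
uv_obs = uv_lin + 2.0 * np.sin(uv_lin / 100.0)

corrector = make_pipeline(PolynomialFeatures(3), LinearRegression())
corrector.fit(uv_lin, uv_obs - uv_lin)           # learn the residual field

uv_corrected = uv_lin + corrector.predict(uv_lin)
print("mean residual after correction:",
      np.mean(np.linalg.norm(uv_corrected - uv_obs, axis=1)))
```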

Correction of Photometric Distortion of a Micro Camera-Projector System for Structured Light 3D Scanning

  • Park, Go-Gwang;Park, Soon-Yong
    • Journal of Sensor Science and Technology, v.21 no.2, pp.96-102, 2012
  • This paper addresses photometric distortion problems of a compact 3D scanning sensor composed of a micro-size, inexpensive camera-projector system. Recently, many micro-size cameras and projectors have become available; however, erroneous 3D scanning results may arise from the poor and nonlinear photometric properties of these sensors. This paper solves two inherent photometric distortions of the sensor. First, the response functions of both the camera and the projector are derived from least squares solutions of passive and active calibration, respectively. Second, vignetting of the vision camera is corrected using a conventional method, whereas the projector vignetting is corrected using the planar homography between the image planes of the projector and the camera. Experimental results show that the proposed technique enhances the linearity of the phase patterns generated by the sensor.
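
The homography-based projector vignetting correction described above can be sketched roughly as follows: map the camera's view of a full-white projection into the projector's image plane through the camera-projector homography and use the result as a per-pixel gain map. The correspondences, resolutions, and images below are placeholders, not values from the paper.

```python
# Rough sketch of projector vignetting correction via the camera-projector
# homography; correspondences, resolutions, and images are placeholders.
import cv2
import numpy as np

proj_size = (800, 600)                           # assumed projector resolution (w, h)

# Projector pixels of displayed markers and the camera pixels where they were
# observed (placeholder values standing in for real detections).
proj_pts = np.array([[50, 50], [750, 50], [750, 550], [50, 550]], np.float32)
cam_pts = np.array([[120, 90], [1150, 110], [1130, 850], [140, 870]], np.float32)
H_cam2proj, _ = cv2.findHomography(cam_pts, proj_pts)

# Camera image of a uniform white projection (placeholder for a captured frame).
white_cam = np.full((960, 1280), 200.0, np.float32)

# Warp into projector coordinates and turn it into a per-pixel gain map.
vignette = cv2.warpPerspective(white_cam, H_cam2proj, proj_size)
gain = vignette.max() / np.clip(vignette, 1.0, None)

# Any pattern to be projected can then be pre-compensated before display.
pattern = np.random.uniform(0, 255, (proj_size[1], proj_size[0])).astype(np.float32)
compensated = np.clip(pattern * gain, 0, 255)
```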

A 3D Foot Scanner Using Mirrors and Single Camera (거울 및 단일 카메라를 이용한 3차원 발 스캐너)

  • Chung, Seong-Youb;Park, Sang-Kun
    • Korean Journal of Computational Design and Engineering, v.16 no.1, pp.11-20, 2011
  • A structured beam laser is often used to scan an object and build a 3D model. Multiple cameras are usually needed to see occluded areas, which is the main reason for the high price of such scanners. In this paper, a low-cost 3D foot scanner is developed using one camera and two mirrors. The camera and the two mirrors are located below and above the foot, respectively. The occluded area, the top of the foot, is reflected by the mirrors, so the camera measures 3D point data of the bottom and the top of the foot at the same time. The whole foot model is then reconstructed after a symmetrical transformation of the data reflected by the mirrors. The reliability of the scan data depends on the accuracy of the parameters between the camera and the laser, so a calibration method is also proposed and verified by experiments. The experiments show that the worst errors of the system are 2 mm along the x, y, and z directions.
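
The mirror-based reconstruction relies on reflecting the points observed via each mirror back across the mirror plane; a small sketch of that step follows. The plane parameters are illustrative; in the scanner they would come from the camera-mirror calibration.

```python
# Sketch of the mirror-reflection step; the mirror plane below is assumed.
import numpy as np

def reflect_across_plane(points, normal, offset):
    """Reflect Nx3 points across the plane normal . x = offset."""
    normal = np.asarray(normal, dtype=float)
    scale = np.linalg.norm(normal)
    n, d = normal / scale, offset / scale      # unit normal and rescaled offset
    signed_dist = points @ n - d               # signed distance to the plane
    return points - 2.0 * signed_dist[:, None] * n

# Illustrative mirror tilted 45 degrees above the foot.
mirror_normal = np.array([0.0, 1.0, 1.0])
mirror_offset = 0.25
seen_via_mirror = np.random.uniform(0.0, 0.1, (100, 3))   # placeholder scan points
true_points = reflect_across_plane(seen_via_mirror, mirror_normal, mirror_offset)
```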

Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering, v.24 no.2, pp.159-166, 1999
  • A system has been developed to continuously extract real 3-D geometric feature information from the 2-D image of an object fed randomly via a conveyor. Two sets of structured laser lighting were utilized, and the laser structured-light projection image was acquired by the camera on the signal of a photo-sensor mounted on the conveyor. A camera coordinate calibration matrix, which transforms 2-D image coordinates into 3-D world-space coordinates, was obtained using 6 known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the shift amount of the laser light and the height was generated; height information estimated with this correlation showed a maximum error of 0.4 mm within the height range of 103 mm. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment. The extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.
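
The shift-to-height correlation mentioned above is essentially a low-order curve fit between the laser-line displacement in the image and known reference heights; a minimal sketch follows with invented sample values.

```python
# Sketch of the laser-shift-to-height correlation; sample values are invented.
import numpy as np

shift_px = np.array([0.0, 12.4, 25.1, 37.3, 49.8, 62.0])    # measured line shifts
height_mm = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])  # known reference heights

coeffs = np.polyfit(shift_px, height_mm, deg=1)   # the correlation equation
estimate = np.polyval(coeffs, 31.0)               # height for a new shift of 31 px
print(f"estimated height: {estimate:.1f} mm")
```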
