• Title/Summary/Keyword: Camera calibration data


Calibration Technology for Precise Alignment of Large Flat Panel Displays (대형 평판 디스플레이의 정밀 정렬을 위한 캘리브레이션 기술)

  • Hong, Jun-Ho;Shin, Dongwon
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.21 no.3
    • /
    • pp.100-109
    • /
    • 2022
  • In this study, calibration technology to increase alignment accuracy in large flexible flat panels was investigated. For precise calibration, the calibration algorithm was systematized, and a correction technique was developed to revise calibration errors. Coordinate systems for the camera and the UVW stage were established to obtain the global position of the mark, and equations for translational and rotational calibration were derived systematically from geometrical analysis. A correction process for the calibration data was carried out, and alignment experiments were performed sequentially with and without calibration correction. Both cases achieved alignment accuracy better than 1 ㎛; however, the standard deviation with calibration correction was smaller than without it. Calibration correction therefore improved the alignment repeatability.
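The translational and rotational correction described above can be sketched as a rigid 2-D fit from measured mark positions to their reference positions. This is a minimal illustration only; the function name, the use of a least-squares Procrustes solution, and the mark coordinates are assumptions, not the authors' exact derivation:

```python
import numpy as np

def alignment_correction(marks_meas, marks_ref):
    """Recover the in-plane rotation and translation (for a UVW stage)
    that maps measured mark positions onto their reference positions.
    Uses a 2-D Procrustes (Kabsch) fit; illustrative only."""
    P = np.asarray(marks_meas, float)
    Q = np.asarray(marks_ref, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # SVD of the cross-covariance of the centered point sets gives the rotation
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # keep a proper rotation, not a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t
```

Given exact mark pairs related by a rigid motion, the fit recovers the rotation angle and stage offset to machine precision.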

Ground-based Remote Sensing Technology for Precision Farming - Calibration of Image-based Data to Reflectance -

  • Shin B.S.;Zhang Q.;Han S.;Noh H.K.
    • Agricultural and Biosystems Engineering
    • /
    • v.6 no.1
    • /
    • pp.1-7
    • /
    • 2005
  • Assessing the health condition of crops in the field is one of the core operations in precision farming. A sensing system was proposed to remotely detect crop health condition in terms of SPAD readings, which are directly related to the chlorophyll content of the crop, using a multispectral camera mounted on a ground-based platform. Since the image taken by the camera was sensitive to changes in ambient light intensity, the gray-scale image data had to be converted into reflectance, an index of the reflection characteristics of the target crop. A reference reflectance panel consisting of four sub-panels with different reflectances was developed for dynamic calibration, by which the calibration equation was updated for every crop image captured by the camera. The system performance was evaluated in a field by investigating the relationship between corn canopy reflectance and SPAD values. The validation tests revealed that the corn canopy reflectance derived from the Green band of the multispectral camera had the most significant correlation with SPAD values $(r^2=0.75)$, and that the NIR band could be used to filter out unwanted non-crop features such as soil background and empty space in the crop canopy. This research confirmed that it is technically feasible to develop a ground-based remote sensing system for assessing crop health condition.
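The dynamic calibration idea, refitting a gray-to-reflectance relation from the four reference sub-panels in every frame so ambient-light changes are absorbed image by image, can be sketched as a per-image linear fit. A linear model and the function name are assumptions for illustration:

```python
import numpy as np

def reflectance_calibration(panel_gray, panel_reflectance, crop_gray):
    """Convert gray values to reflectance using the reference panel
    visible in the same image. Refit per frame ("dynamic calibration")
    so changes in ambient light are compensated automatically."""
    # least-squares line through the (gray value, known reflectance) samples
    a, b = np.polyfit(panel_gray, panel_reflectance, 1)
    return a * np.asarray(crop_gray, float) + b
```

With the four sub-panels read as, say, gray levels 40/90/140/190 for reflectances 0.05/0.25/0.45/0.65, a crop pixel of gray 100 maps to a reflectance of 0.29 under this fit.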


The Slit Beam Laser Calibration Method Based On Triangulation (삼각법을 이용한 슬릿 빔 레이저 캘리브레이션)

  • 주기세
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.05a
    • /
    • pp.168-173
    • /
    • 1999
  • Many sensors, such as lasers and CCD cameras, have been used to obtain 3D information, but most calibration algorithms are inefficient since laser calibration requires a large amount of memory and experimental data. In this paper, a calibration algorithm for a slit-beam laser based on the triangulation method is introduced to compute 3D information in the real world. The laser beam, mounted orthogonally on an XY table, is projected onto the floor. A CCD camera observes the intersection of the light plane and the object plane. The 3D information is calculated from the observed and calibration data. This method saves memory and experimental data since the 3D information is obtained by simple triangulation.
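The triangulation principle underlying a calibrated slit-beam sensor can be sketched as a ray-plane intersection: a pixel on the laser stripe back-projects to a viewing ray, which is intersected with the known laser light plane, so no lookup table is needed. The intrinsic matrix K and the plane parameters (n, d with n·X + d = 0) are assumed known from calibration; names are illustrative:

```python
import numpy as np

def triangulate_on_laser_plane(pixel, K, plane):
    """3-D point from a stripe pixel: back-project through K to a ray,
    then intersect the ray with the calibrated laser plane (n, d)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray direction
    n, d = np.asarray(plane[:3], float), float(plane[3])
    s = -d / (n @ ray)                               # ray-plane intersection scale
    return s * ray
```

For a camera with focal length 800 px and principal point (320, 240) and a laser plane at z = 2 in the camera frame, the principal-point pixel triangulates to (0, 0, 2).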


Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how coordinates in the 3D real world are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as its position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement in that it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
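The PnP step, recovering extrinsics from matched 3D-2D pairs, is usually done with a library call such as OpenCV's solvePnP. The numpy-only DLT sketch below shows the underlying idea with at least six exact, non-coplanar correspondences; the function name, the linear (noise-free) formulation, and the cube-corner test points are assumptions, not the paper's implementation:

```python
import numpy as np

def pnp_dlt(X, x, K):
    """Extrinsics [R | t] from 3-D points X and their 2-D projections x,
    given intrinsics K, via a direct linear transform (DLT)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    # normalize pixels so we solve for [R | t] directly
    xn = (np.linalg.inv(K) @ np.hstack([x, np.ones((len(x), 1))]).T).T
    A = []
    for (u, v, _), Xi in zip(xn, Xh):
        A.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
        A.append(np.concatenate([np.zeros(4), Xi, -v * Xi]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)               # null vector, up to scale and sign
    P /= np.linalg.norm(P[:, :3], axis=1).mean()
    if np.linalg.det(P[:, :3]) < 0:
        P = -P
    U, _, Vt2 = np.linalg.svd(P[:, :3])    # project onto a true rotation
    return U @ Vt2, P[:, 3]
```

In practice one would use cv2.solvePnP (optionally with RANSAC) rather than this bare linear solve, since real keypoint detections are noisy.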

A Head-Eye Calibration Technique Using Image Rectification (영상 교정을 이용한 헤드-아이 보정 기법)

  • Kim, Nak-Hyun;Kim, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.37 no.8
    • /
    • pp.11-23
    • /
    • 2000
  • Head-eye calibration is the process of estimating the unknown orientation and position of a camera with respect to a mobile platform, such as a robot wrist. We present a new head-eye calibration technique that can be applied to platforms with rather limited motion capability. In particular, the proposed technique can find the relative orientation of a camera mounted on a linear translation platform that has no rotation capability. The algorithm finds the rotation using calibration data obtained from pure translations of the camera along two different axes. We derive the calibration algorithm by exploiting the image rectification technique, such that the rectified images satisfy the epipolar constraint. We present the calibration procedure for both the rotation and translation components of the camera relative to the platform coordinates. The efficacy of the algorithm is demonstrated through simulations and real experiments.
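The core of the pure-translation idea can be sketched as follows: translating along two known platform axes b1, b2 yields the corresponding motion directions c1, c2 observed in the camera frame (e.g. recovered from the epipolar geometry of the rectified pairs), and the rotation R with c_i = R·b_i follows from the two axes and their cross product. Unit-vector inputs and the function name are assumptions; this is not the paper's full derivation:

```python
import numpy as np

def rotation_from_pure_translations(b1, b2, c1, c2):
    """Head-eye rotation R (camera w.r.t. platform) from two pure
    translations: c_i = R b_i, completed by the cross products."""
    B = np.column_stack([b1, b2, np.cross(b1, b2)])
    C = np.column_stack([c1, c2, np.cross(c1, c2)])
    return C @ np.linalg.inv(B)
```

Two orthogonal translation axes make B the identity, so R is read off directly from the observed directions.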


A Visual Calibration Scheme for Off-Line Programming of SCARA Robots (스카라 로봇의 오프라인 프로그래밍을 위한 시각정보 보정기법)

  • Park, Chang-Kyoo;Son, Kwon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.1
    • /
    • pp.62-72
    • /
    • 1997
  • High flexibility and productivity with industrial robots are being achieved in manufacturing lines through off-line robot programming. A good off-line programming system should provide functions for robot modelling, trajectory planning, graphical teach-in, and kinematic and dynamic simulation. Simulated results, however, can hardly be applied to on-line tasks unless a calibration procedure accompanies them. This paper proposes a visual calibration scheme to provide a calibration tool for our own off-line programming system for SCARA robots. The suggested scheme is based on position-based visual servoing and perspective projection. It requires only one camera because it uses saved kinematic data for three-dimensional visual calibration. Predicted images are generated and then compared with camera images to update the positions and orientations of objects. The scheme is simple and effective enough to be used in real-time robot programming.
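Generating a predicted image from saved kinematic data boils down to perspective projection of known model points through the assumed camera pose. A minimal pin-hole sketch of just that projection step (names and conventions are illustrative, not the paper's code):

```python
import numpy as np

def project_points(K, R, t, X):
    """Pin-hole projection: world points X -> predicted pixel positions,
    given intrinsics K and camera pose (R, t). These predicted points
    are what gets compared against the actual camera image."""
    Xc = np.asarray(X, float) @ np.asarray(R, float).T + np.asarray(t, float)
    uv = Xc @ np.asarray(K, float).T
    return uv[:, :2] / uv[:, 2:]     # perspective divide
```

A point on the optical axis projects to the principal point; offsetting it laterally shifts the prediction by focal_length × (offset / depth) pixels.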

A Study of Digitalizing Analog Gamma Camera Using Gamma-PF Board (Gamma-PF 보드를 이용한 아날로그 감마카메라의 디지털화 연구)

  • Kim, Hui-Jung;So, Su-Gil;Bong, Jeong-Gyun;Kim, Han-Myeong;Kim, Jang-Hwi;Ju, Gwan-Sik;Lee, Jong-Du
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.4
    • /
    • pp.351-360
    • /
    • 1998
  • Digital gamma cameras have many advantages over analog gamma cameras, including convenient quality control; easy calibration and operation; and image quantitation, which improves diagnostic accuracy. The digital data can also be used for telemedicine and picture archiving and communication systems. However, many hospitals still operate analog cameras and find it difficult to replace them with digital ones. We studied the feasibility of converting an analog gamma camera into a digital camera using the Gamma-PF interface board. The physical characteristics measured were spatial resolution, sensitivity, uniformity, and image contrast. Patient data obtained with both the analog and digital cameras showed very similar image quality. The results suggest that it may be feasible to upgrade an analog camera into a digital gamma camera in clinical environments.


Assessment of a smartphone-based monitoring system and its application

  • Ahn, Hoyong;Choi, Chuluong;Yu, Yeon
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.3
    • /
    • pp.383-397
    • /
    • 2014
  • Advances in information technology are allowing conventional surveillance systems to be combined with mobile communication technologies, creating ubiquitous monitoring systems. This paper proposes a monitoring system that uses smart camera technology. We discuss the dependence of the interior orientation parameters on calibration target sheets and assess the accuracy of a three-dimensional monitoring system whose camera location is calculated by space resection, using a Digital Surface Model (DSM) generated from stereo images. A monitoring housing was designed to protect the camera from various weather conditions and to supply it with power generated from a solar panel. A smart camera installed in the housing is operated and controlled through an Android application. Finally, the accuracy of the three-dimensional monitoring system was evaluated using a DSM. The proposed system was tested against a DSM created from ground control points determined by Global Positioning System (GPS) measurements and light detection and ranging data. The standard deviation of the differences between the DSMs was less than 0.12 m. The monitoring system is therefore suitable for extracting the position and deformation of objects as well as for monitoring them. Through the incorporation of components such as the camera housing, a solar power supply, and the smart camera, the system can serve as a ubiquitous monitoring system.

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea
    • /
    • v.16 no.3
    • /
    • pp.1-10
    • /
    • 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking head movements is important in the design of eye-controlled human/computer interfaces and in virtual environments. We propose a video-based head tracking system. A camera mounted on the subject's head takes a front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on a computer monitor. The reference points are captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A camera calibration method that provides accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from known positions and orientations of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera positions is about 0.53 cm, the angular errors of the camera orientations are less than $0.55^{\circ}$, and the data acquisition rate is about 10 Hz. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual environments.
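The third step, reading the camera's orientation and position out of a DLT matrix, can be sketched as a decomposition of the 3x4 projection matrix P = K [R | t], with the camera center recovered as C = -Rᵀt. The RQ factorization built from numpy's QR below is a generic illustration (not the paper's specific procedure), and assumes an exact, noise-free P:

```python
import numpy as np

def decompose_dlt(P):
    """Split a 3x4 DLT/projection matrix into intrinsics K, rotation R,
    and camera center C in world coordinates, via RQ factorization."""
    M = P[:, :3]
    Pm = np.flipud(np.eye(3))            # row/column reversal permutation
    # RQ of M from QR of the reversed, transposed matrix
    Q_, R_ = np.linalg.qr((Pm @ M).T)
    K = Pm @ R_.T @ Pm                   # upper triangular
    R = Pm @ Q_.T                        # orthogonal
    D = np.diag(np.sign(np.diag(K)))     # force positive diagonal on K
    K, R = K @ D, D @ R
    t = np.linalg.inv(K) @ P[:, 3]
    C = -R.T @ t                         # camera center in the world frame
    return K / K[2, 2], R, C
```

When the intrinsic parameters are already calibrated, as in the paper's step three, only the extrinsic part (R, C) of this decomposition is actually needed.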


Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.89-96
    • /
    • 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing techniques for calibrating the external transformation between lidar and camera sensors have the disadvantage of requiring special calibration objects, or objects that are too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates from four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data are acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method yields a reprojection error of about 2 pixels, and its performance is analyzed through comparison with existing methods.
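The minimal solver inside such a RANSAC loop, a sphere center from four range points, reduces to a linear system: subtracting the sphere equation at the first point from the other three eliminates the quadratic terms and leaves a 3x3 system for the center. Four non-coplanar points are assumed; the function name is illustrative:

```python
import numpy as np

def sphere_from_four_points(pts):
    """Center and radius of the sphere through four non-coplanar 3-D
    points: |p_i - c|^2 = r^2 differenced against point 0 gives
    2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2, a linear system in c."""
    p = np.asarray(pts, float)
    A = 2 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    c = np.linalg.solve(A, b)
    r = np.linalg.norm(p[0] - c)
    return c, r
```

RANSAC would repeatedly draw four range points, solve this system, and keep the center/radius hypothesis with the most inliers among the remaining sphere returns.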