• Title/Summary/Keyword: Camera orientation


Evaluation of Long-term Stability of Interior Orientation Parameters of a Non-metric Camera (비측량용 카메라 내부표정요소의 장기간 안정성 평가)

  • Jeong, Soo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.3 / pp.283-291 / 2011
  • In the case of metric cameras, fiducial marks as well as various lens-related parameters are provided to users for the interior orientation process. These parameters are acquired by the camera maker through precise laboratory calibration. In the case of non-metric cameras, however, users must determine the interior orientation parameters themselves through camera calibration with a large number of control points. The interior orientation parameters of metric cameras are used in practice for a long time, but for non-metric cameras the long-term stability of the interior orientation parameters has not been established. Generally, the interior orientation parameters of non-metric cameras are determined anew for every photogrammetric task, which has been an obstacle to using non-metric cameras in photogrammetric projects because so many control points are required. In this study, camera calibrations and photogrammetric observations using a non-metric camera were carried out 25 times periodically over 6 months, and the results were analyzed to assess the long-term stability of the interior orientation parameters of a non-metric camera.
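The repeated-calibration analysis described above reduces to tracking how each interior orientation parameter scatters across sessions. A minimal sketch of that bookkeeping; the focal-length values are hypothetical, not taken from the paper:

```python
import math

def stability(series):
    """Mean and sample standard deviation of a repeatedly
    estimated interior orientation parameter."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / (n - 1)
    return mean, math.sqrt(var)

# hypothetical focal-length estimates (mm) from periodic calibrations
focal = [28.113, 28.109, 28.116, 28.111, 28.114]
m, s = stability(focal)
print(f"mean = {m:.3f} mm, std = {s:.4f} mm")
```

A small standard deviation relative to the measurement requirements is what would justify reusing the parameters instead of recalibrating for every project.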

Experiment on Camera Platform Calibration of a Multi-Looking Camera System using single Non-Metric Camera (비측정용 카메라를 이용한 Multi-Looking 카메라의 플랫폼 캘리브레이션 실험 연구)

  • Lee, Chang-No;Lee, Byoung-Kil;Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.4 / pp.351-357 / 2008
  • An aerial multi-looking camera system carries five separate cameras, enabling acquisition of one vertical image and four oblique images at the same time. This provides more diverse information about the site than vertical aerial photographs alone. The geometric relationship between each oblique camera and the vertical camera can be modelled by six exterior orientation parameters. Once this relationship is determined, the exterior orientation parameters of the oblique images can be calculated from those of the vertical image. To examine the relative exterior orientation of the vertical camera and each oblique camera in the multi-looking system, calibration targets were installed in a lab and 14 images were taken from three image stations by tilting and rotating a non-metric digital camera. The interior orientation parameters of the camera and the exterior orientation parameters of the images were estimated. The exterior orientation parameters of the oblique images with respect to the vertical image were then calculated, and the error propagation of the orientation angles and the position of the projection center was examined.
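The pose composition the abstract describes can be sketched directly: once the fixed platform (relative) orientation is known, each oblique camera's exterior orientation follows from the vertical camera's. The convention below (x_cam = R·x_world + t) is an assumption, since the abstract does not state one:

```python
def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def compose(R_rel, t_rel, R_vert, t_vert):
    """Exterior orientation of an oblique camera from the vertical
    camera's pose and the fixed platform calibration (R_rel, t_rel),
    assuming the convention x_cam = R @ x_world + t."""
    R_obl = matmul(R_rel, R_vert)
    t_obl = [a + b for a, b in zip(matvec(R_rel, t_vert), t_rel)]
    return R_obl, t_obl
```

With six relative parameters per oblique camera calibrated once in the lab, only the vertical image's exterior orientation needs to be estimated in flight.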

Stability Analysis of a Stereo-Camera for Close-range Photogrammetry (근거리 사진측량을 위한 스테레오 카메라의 안정성 분석)

  • Kim, Eui Myoung;Choi, In Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.123-132 / 2021
  • To determine 3D (three-dimensional) positions using a stereo-camera in close-range photogrammetry, camera calibration that determines not only the interior orientation parameters of each camera but also the relative orientation parameters between the cameras must be performed first. As time passes after calibration, the interior and relative orientation parameters of non-metric cameras may change due to internal instability or external factors. In this study, to evaluate the stability of a stereo-camera, the stability of the two single cameras and of the stereo-camera was analyzed, and the three-dimensional position accuracy was evaluated using checkpoints. In three camera calibration experiments over four months, the root mean square error of the two single cameras was ±0.001mm, and that of the stereo-camera was ±0.012mm ~ ±0.025mm. In addition, as the distance accuracy at the checkpoints was ±1mm, the interior and relative orientation parameters of the stereo-camera were considered stable over that period.
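The stability figures quoted above are root mean square errors across repeated calibrations. A minimal helper of the kind such an evaluation needs; the inputs are illustrative:

```python
import math

def rmse(estimates, truth):
    """Root mean square error between repeated estimates and a
    reference (e.g. parameters from a baseline calibration, or
    measured vs. known checkpoint distances)."""
    n = len(estimates)
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / n)
```

Applied to each interior or relative orientation parameter in turn, values on the order of the quoted ±0.001 mm to ±0.025 mm would indicate the parameters stayed usable over the test period.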

A Study on the Camera Calibration Algorithm of Robot Vision Using Cartesian Coordinates

  • Lee, Yong-Joong
    • Transactions of the Korean Society of Machine Tool Engineers / v.11 no.6 / pp.98-104 / 2002
  • In this study, we developed an algorithm that determines the position and orientation of a camera system in Cartesian coordinates by attaching the camera to the end-effector of an industrial six-axis robot. Taking Cartesian coordinates as the starting point for evaluating the suggested algorithm, the orientation vector along the line connecting two points in coordinate space could be updated readily by a recursive least squares method, which combines the previous result with each new measurement as the number of image points increases. Simulation confirmed that, when the camera attached to the end-effector is deployed at a production site with a calibration mask of eight or more arranged points, the position and orientation of the camera system in Cartesian coordinates can be determined even without special measuring equipment.

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.39-43 / 2004
  • A typical theme of active vision systems is gaze-fixing of a camera. Here, gaze-fixing means steering the orientation of a camera so that a given point on the object is always at the center of the image. This requires combining a function that analyzes image data with a function that controls the camera's orientation. This paper presents an algorithm for gaze-fixing in which image analysis and orientation control are designed within a single framework. To avoid implementation difficulties and to target real-time applications, the algorithm is designed as a simple closed form that uses no information related to camera calibration or structure estimation.
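The calibration-free, closed-form flavor of such gaze-fixing can be illustrated with a toy proportional controller. This is a stand-in sketch, not the paper's algorithm, and the assumption that one unit of pan shifts the image point by one pixel is a hypothetical camera model:

```python
def gaze_step(u, v, cx, cy, gain=0.2):
    """One proportional steering update: pan/tilt increments that
    push the tracked image point (u, v) toward the centre (cx, cy).
    No camera calibration or scene structure is used."""
    return gain * (u - cx), gain * (v - cy)

# toy simulation: assume a unit pan shifts the image point one pixel
u, v, cx, cy = 420.0, 300.0, 320.0, 240.0
for _ in range(30):
    dpan, dtilt = gaze_step(u, v, cx, cy)
    u -= dpan
    v -= dtilt
```

Under that assumption the pixel error decays geometrically, which is the behavior a gaze-fixing loop is after.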

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea / v.16 no.3 / pp.1-10 / 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of an eye-controlled human/computer interface and in virtual environments. We propose a video-based head tracking system. A camera was mounted on the subject's head and captured the front view containing eight 3-dimensional reference points (passive retro-reflective markers) fixed at known positions on a computer monitor. The reference points were captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A camera calibration method that provides accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the camera positions (3-dimensional) is about 0.53cm, the angular errors of the camera orientations are less than 0.55°, and the data acquisition rate is about 10Hz. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual environments.
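The DLT matrix at the heart of steps two and three maps object space to image space with eleven parameters. A minimal sketch of that standard forward mapping (the pose-extraction step inverts it); the coefficient values below are illustrative:

```python
def dlt_project(L, X, Y, Z):
    """Standard 11-parameter Direct Linear Transformation:
    u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1)
    v = (L5*X + L6*Y + L7*Z + L8) / (L9*X + L10*Y + L11*Z + 1)."""
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w
    return u, v
```

With eight known reference points, each contributing two such equations, the eleven coefficients are overdetermined and can be solved by linear least squares, which is what makes the three-step calibration tractable in real time.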


Semi-automatic Camera Calibration Using Quaternions (쿼터니언을 이용한 반자동 카메라 캘리브레이션)

  • Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.2 / pp.43-50 / 2018
  • The camera is a key element in image-based three-dimensional positioning, and camera calibration, which determines the internal characteristics of the camera, is a necessary process that must precede determination of the three-dimensional coordinates of an object. In this study, a new methodology was proposed to determine the interior orientation parameters of a camera semi-automatically, without being influenced by the size and shape of the checkerboard used for calibration. The proposed method consists of exterior orientation parameter estimation using quaternions, recognition of the calibration target, and interior orientation parameter determination through bundle block adjustment. After determining the interior orientation parameters using the chessboard calibration target, the three-dimensional position of a small 3D model was determined. In the accuracy evaluation using checkpoints, the horizontal and vertical position errors were about ±0.006m and ±0.007m, respectively.
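Quaternion-based exterior orientation estimation rests on the standard quaternion-to-rotation-matrix conversion, which keeps the rotation parameterization free of gimbal lock during the adjustment. A minimal sketch:

```python
def quat_to_matrix(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix,
    normalizing first so any nonzero quaternion is accepted."""
    w, x, y, z = q
    n = (w * w + x * x + y * y + z * z) ** 0.5
    w, x, y, z = w / n, x / n, y / n, z / n
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ]
```

The four quaternion components (with a unit-norm constraint) replace the three rotation angles of the exterior orientation inside the bundle block adjustment.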

Camera Exterior Parameters Based on Vector Inner Production Application: Absolute Orientation (벡터내적 기반 카메라 외부 파라메터 응용 : 절대표정)

  • Chon, Jae-Choon;Sastry, Shankar
    • Journal of Institute of Control, Robotics and Systems / v.14 no.1 / pp.70-74 / 2008
  • In the field of camera motion research, it is widely held that the position (movement) and pose (rotation) of cameras are correlated and cannot be independently separated. A new equation based on the inner product is proposed here to separate the position and pose independently. It is proved that the position and pose are not correlated, and the equation is applied to estimating the camera exterior parameters using a real image and 3D data.

Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery

  • Shin, Sung-Woong;Schenk, Tony
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.223-233 / 2008
  • In the mid 90's, the U.S. government released images acquired by the first generation of photo reconnaissance satellite missions between 1960 and 1972. The Declassified Intelligence Satellite Photographs (DISP) from the Corona mission are of high quality, with an astounding ground resolution of about 2 m. The KH-4A panoramic camera system employed a scan angle of 70° that produces film strips with a dimension of 55 mm × 757 mm. Since GPS/INS did not exist at the time of data acquisition, the exterior orientation must be established in the traditional way, using control information and the interior orientation of the camera. Detailed information about the camera is not available, however. For reconstructing points in object space from DISP imagery to an accuracy comparable to its high resolution (a few meters), a precise camera model is essential. This paper is concerned with the derivation of a rigorous mathematical model for the KH-4A/B panoramic camera. The proposed model is compared with generic sensor models, such as affine transformation and rational functions, and the paper concludes with experimental results concerning the precision of reconstructed points in object space. The rigorous mathematical model for the KH-4A camera system is based on extended collinearity equations, assuming that the satellite trajectory during one scan is smooth and the attitude remains unchanged. As a result, the collinearity equations express the perspective center as a function of the scan time. With the known satellite velocity, this translates into an along-track shift, so the exterior orientation contains seven parameters to be estimated. The reconstruction of object points can then be performed with the exterior orientation parameters, either by intersecting bundle rays with a known surface or by using the stereoscopic KH-4A arrangement with fore and aft cameras mounted at an angle of 30°.
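The extension that yields the seventh parameter is that the perspective center moves during the scan. Under the stated constant-velocity, fixed-attitude assumption, the time-dependent center is just an along-track shift; a minimal sketch with illustrative numbers:

```python
def perspective_center(X0, v, t):
    """Perspective centre at scan time t: the exposure-start centre X0
    shifted by the (assumed constant) satellite velocity v, as in the
    extended collinearity model. Six pose parameters plus the
    scan-time shift give the seven exterior orientation unknowns."""
    return [c + vi * t for c, vi in zip(X0, v)]
```

Each image column is then treated with the ordinary collinearity equations, but evaluated at the center corresponding to its scan time.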

A Wafer Pre-Alignment System Using a High-Order Polynomial Transformation Based Camera Calibration (고차 다항식 변환 기반 카메라 캘리브레이션을 이용한 웨이퍼 Pre-Alignment 시스템)

  • Lee, Nam-Hee;Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.9 no.1 / pp.11-16 / 2010
  • Wafer pre-alignment is to find the center and the orientation of a wafer and to move the wafer to the desired position and orientation. In this paper, an area-camera-based pre-aligning method is presented that captures 8 wafer images at regular intervals during a 360-degree rotation. From the images, wafer edge positions are extracted and used to estimate the wafer's center and orientation using least squares circle fitting. These data are then used to align the wafer. For accurate alignment, camera calibration methods using high-order polynomials convert pixel coordinates into real-world coordinates. A complete pre-alignment system was constructed from mechanical and optical components and tested. Experimental results show that the wafer center and orientation can be aligned with standard deviations of 0.002 mm and 0.028 degrees, respectively.
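The least squares circle fitting step can be sketched with the standard algebraic (Kåsa) fit, which linearizes the circle equation x² + y² = a·x + b·y + c; the small solver and the edge points below are illustrative, not from the paper:

```python
def solve3(A, rhs):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to edge points:
    solves the normal equations of x^2 + y^2 = a*x + b*y + c,
    then recovers centre (a/2, b/2) and radius sqrt(c + cx^2 + cy^2)."""
    n = float(len(points))
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    A = [[sum(x * x for x, _ in points), sum(x * y for x, y in points), sx],
         [sum(x * y for x, y in points), sum(y * y for _, y in points), sy],
         [sx, sy, n]]
    rhs = [sum((x * x + y * y) * x for x, y in points),
           sum((x * x + y * y) * y for x, y in points),
           sum(x * x + y * y for x, y in points)]
    a, b, c = solve3(A, rhs)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, (c + cx * cx + cy * cy) ** 0.5
```

The fitted center gives the wafer's offset from the rotation axis; the notch or flat position found along the fitted edge gives its orientation.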