• Title/Summary/Keyword: Camera orientation


Determination of Camera System Orientation and Translation in Cartesian Coordinate (직교 좌표에서 카메라 시스템의 방향과 위치 결정)

  • 이용중
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2000.04a / pp.109-114 / 2000
  • A new method for determining camera system rotation and translation in 3-D space using a recursive least-squares method is presented in this paper. With this method, the solution is found by a linear algorithm, where the equations are either given or obtained by solving five or more point correspondences. Good results can be obtained when more than eight points are available. A main advantage of the new method is that it decouples rotation from translation and thereby reduces computation. With respect to solution error versus the number of points in the input image data, adding one more feature correspondence to the required minimum improves the solution accuracy drastically; however, further increases in the number of feature correspondences improve the accuracy only slowly. The algorithm proposed in this paper makes the camera system rotation and translation easy to determine even when the camera system is attached to the end effector of a six-degrees-of-freedom industrial robot manipulator deployed in an industrial setting.

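The decoupling of rotation from translation described in the abstract above can be illustrated with a short sketch. This is not the paper's recursive least-squares formulation; it is the standard SVD-based (Kabsch) solution on synthetic 3-D correspondences, with all names assumed:

```python
import numpy as np

def decoupled_pose(P, Q):
    """Estimate R, t with Q ~ R @ P + t: rotation is solved first from
    centered points (Kabsch/SVD), then translation follows from R."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation, det = +1
    t = cQ - R @ cP                             # translation once R is fixed
    return R, t
```

Because translation is recovered after the rotation, the two estimates never interact, mirroring the computational saving the abstract claims.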

An Algorithm for Non-Coplanar Camera Calibration under Ill-Conditioned Cases (악조건하의 비동일평면 카메라 교정을 위한 알고리즘)

  • Ahn, Taek-Jin;Lee, Moon-Kyu
    • Journal of Institute of Control, Robotics and Systems / v.7 no.12 / pp.1001-1008 / 2001
  • This paper presents a new camera calibration algorithm for ill-conditioned cases in which the camera plane is nearly parallel to a set of non-coplanar calibration boards. For such ill-conditioned cases, most existing calibration approaches, such as Tsai's radial-alignment-constraint method, cannot be applied. Recently, for ill-conditioned coplanar calibration, Lee & Lee [16] proposed an iterative algorithm based on the least-squares method. The non-coplanar calibration algorithm presented in this paper is an iterative two-stage procedure that extends the previous coplanar calibration algorithm. In the first stage, camera position and orientation parameters as well as one radial distortion factor are determined optimally for given values of the scale factor and the focal length. In the second stage, the scale factor and the focal length are locally optimized. This process is repeated until no further improvement can be expected. Computational results are provided to show the performance of the developed algorithm.

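The two-stage alternation above can be sketched on a toy one-dimensional radial model (a hypothetical model with assumed parameter names, not the authors' formulation): stage 1 fits a distortion factor with the focal length frozen, stage 2 refits the focal length with the distortion frozen, and the loop repeats until convergence.

```python
import numpy as np

def two_stage_calibrate(r, s, f0=500.0, iters=300):
    """Alternating least squares on s = f * (r + k * r**3):
    stage 1 solves k given f, stage 2 solves f given k."""
    f, k = f0, 0.0
    for _ in range(iters):
        k = np.sum(r**3 * (s / f - r)) / np.sum(r**6)   # stage 1: distortion
        m = r + k * r**3
        f = np.sum(m * s) / np.sum(m * m)               # stage 2: focal length
    return f, k
```

Because the toy model is linear in (f, f*k), this coordinate descent reaches the global least-squares optimum; the paper's problem is nonlinear, so it only optimizes locally, as the abstract notes.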

4S-Van Design for Application Environment

  • Lee, Seung-Yong;Kim, Seong-Baek;Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2002.10a / pp.106-110 / 2002
  • The 4S-Van is being developed to provide spatial data rapidly and accurately. The 4S-Van is a system for spatial data construction, which lies at the heart of 4S technology. The architecture of the 4S-Van system consists of a hardware integration part and a post-processing part. The hardware part comprises GPS, INS, a color CCD camera, a B/W CCD camera, an infrared camera, and a laser. The software part comprises a GPS/INS integration algorithm, coordinate conversion, lens correction, camera orientation correction, and three-dimensional position generation. In this paper, based on various test results, we suggest that the 4S-Van design should be adapted to its application environment.


Availability Evaluation For Generation Orthoimage Using Photogrammetric UAV System (사진측량용 UAV 시스템을 이용한 정사영상 제작 및 활용성 평가)

  • Shin, Dongyoon;Han, Jihye;Jin, Yujin;Park, Jaeyoung;Jeong, Hohyun
    • Korean Journal of Remote Sensing / v.32 no.3 / pp.275-285 / 2016
  • This study analyzes the accuracy of orthoimagery depending on whether camera calibration is performed, using an unmanned aerial vehicle equipped with a smart camera. A photogrammetric UAV system application was developed, image triangulation was performed with the smart camera, and the images were then generated as orthoimagery. Image triangulation was carried out with and without the interior orientation (IO) parameters determined at the camera calibration phase. As a result of the camera calibration, the RMS error was 0.57 pixel, which is more accurate than the result of a previous study using a non-metric camera. When IO parameters were considered in the static experiment, the triangulation RMSE was 2 pixels or less, at least 200% better than when the IO parameters were not considered. After generating the orthoimagery, the accuracy was 89% higher with camera calibration than without it. Therefore, the smart camera has high potential as a payload for UAV systems and is expected to be mounted on current UAV systems to function directly or indirectly.
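The role the interior-orientation parameters play in triangulation can be illustrated with a minimal pinhole projection through an intrinsic matrix (the focal length and principal point below are illustrative values, not the study's calibration results):

```python
import numpy as np

def project(K, X):
    """Project 3xN camera-frame points through the pinhole model:
    the intrinsic matrix K holds the interior orientation (focal
    length on the diagonal, principal point in the last column)."""
    x = K @ X
    return x[:2] / x[2]

# Illustrative interior orientation: f = 1200 px, principal point (320, 240)
K_cal = np.array([[1200.0,    0.0, 320.0],
                  [   0.0, 1200.0, 240.0],
                  [   0.0,    0.0,   1.0]])
```

A wrong principal point shifts every projected ray by the same pixel offset, which is exactly the systematic error that skipping calibration introduces into the triangulated ground coordinates.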

Combined Static and Dynamic Platform Calibration for an Aerial Multi-Camera System

  • Cui, Hong-Xia;Liu, Jia-Qi;Su, Guo-Zhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2689-2708 / 2016
  • Multi-camera systems that integrate two or more low-cost digital cameras are adopted to reach higher ground coverage and improve the base-height ratio in low-altitude remote sensing. To guarantee accurate multi-camera integration, the geometric relationship among the cameras must be determined through platform calibration techniques. This paper proposes a combined two-step platform calibration method. In the first step, static platform calibration is conducted based on the stable relative-orientation constraint and convergent conditions among cameras in a static environment. In the second step, a dynamic platform self-calibration approach is proposed based on not only tie points but also straight lines, in order to correct the small changes in the relative relationship among the cameras during dynamic flight. Experiments with the proposed two-step platform calibration method were carried out on terrestrial and aerial images from a multi-camera system combining four consumer-grade digital cameras onboard an unmanned aerial vehicle. The experimental results show that the proposed platform calibration approach is able to compensate for the varying relative relationship during flight, achieving a mosaicking accuracy of the virtual images smaller than 0.5 pixel. The proposed approach can be extended to calibrate other low-cost multi-camera systems without a rigorous mechanical structure.
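The stable relative-orientation constraint used in the first step can be sketched as follows: on a rigid mount, the rotation between two cameras (expressed with camera-to-world matrices) is unchanged by any platform motion. This is a simplified consistency check under assumed conventions, not the authors' calibration procedure:

```python
import numpy as np

def relative_rotation(R_a, R_b):
    """Rotation from camera-a frame to camera-b frame, given the
    camera-to-world rotations of the two cameras."""
    return R_a.T @ R_b

def rotation_angle_deg(R):
    """Angle of a rotation matrix, recovered from its trace."""
    c = (np.trace(R) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

Comparing `relative_rotation` at two epochs and measuring the angle of their difference quantifies exactly the small in-flight drift that the second (dynamic) calibration step is designed to absorb.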

Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot (이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출)

  • Kwon, Oh-Sang
    • Journal of Sensor Science and Technology / v.14 no.4 / pp.278-285 / 2005
  • This paper describes techniques of camera calibration and artificial landmark detection for the automatic charging of a mobile robot equipped with a fish-eye camera facing its direction of operation for movement or surveillance purposes. For identification against the surrounding environment, three landmarks fitted with infrared LEDs were installed at the charging station. When the robot reaches a certain point, a signal is sent to activate the LEDs, which allows the robot to easily detect the landmarks using its vision camera. To eliminate the effects of outside light interference during this process, a difference image was generated by comparing the two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens introduced significant image distortion. The radial lens distortion was corrected after a linear perspective projection transformation based on the pinhole model. In the experiments, the designed system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
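The on/off difference-image step for suppressing ambient light, and a first-order radial correction of the kind the pinhole-based step implies, can be sketched as follows (array sizes, threshold, and the distortion coefficient are assumptions, not values from the paper):

```python
import numpy as np

def landmark_mask(img_on, img_off, thresh=50):
    """Subtracting the LED-off frame cancels ambient illumination;
    thresholding the difference leaves only the IR landmarks."""
    diff = img_on.astype(np.int32) - img_off.astype(np.int32)
    return diff > thresh

def correct_radius(r_d, k):
    """Simple first-order polynomial correction of a distorted image
    radius, r_u = r_d * (1 + k * r_d**2); k is an assumed coefficient."""
    return r_d * (1 + k * r_d**2)
```

The cast to `int32` before subtraction matters: subtracting `uint8` frames directly would wrap around wherever the off-frame is brighter.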

Agent-based Automatic Camera Placement for Video Surveillance Systems (영상 감시 시스템을 위한 에이전트 기반의 자동화된 카메라 배치)

  • Burn, U-In;Nam, Yun-Young;Cho, We-Duke
    • Journal of Internet Computing and Services / v.11 no.1 / pp.103-116 / 2010
  • In this paper, we propose an optimal camera placement method using agent-based simulation. To derive the importance of each part of a space and to cover the space efficiently, we ran an agent-based simulation based on a classification of the space and an analysis of the movement patterns of people. We developed an agent-based camera placement method that considers camera performance as well as the space priorities extracted from path-finding algorithms. We demonstrate that the method not only determines the optimal number of cameras but also sets the position and orientation of the cameras while considering installation costs. To validate the method, we compare the simulation results with real video footage and present experimental results simulated in a specific space.
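Coverage-driven placement with per-cell priorities and installation costs can be sketched as a weighted set-cover greedy heuristic. This is a generic stand-in for the agent-based method, with all names assumed: each candidate pose covers a boolean set of grid cells, cells carry importance weights from the simulation, and poses are picked by gain per unit cost.

```python
import numpy as np

def greedy_placement(coverage, weights, n_cams, costs=None):
    """coverage: (n_candidates, n_cells) bool matrix of visible cells.
    Repeatedly pick the pose maximizing newly covered importance / cost."""
    covered = np.zeros(coverage.shape[1], dtype=bool)
    costs = np.ones(len(coverage)) if costs is None else costs
    chosen = []
    for _ in range(n_cams):
        gain = np.array([weights[c & ~covered].sum() for c in coverage]) / costs
        best = int(np.argmax(gain))
        if gain[best] <= 0:          # nothing useful left to cover
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

The early exit also answers the "optimal number of cameras" question in this toy setting: the loop stops as soon as an extra camera adds no weighted coverage.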

A Head-Eye Calibration Technique Using Image Rectification (영상 교정을 이용한 헤드-아이 보정 기법)

  • Kim, Nak-Hyun;Kim, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC / v.37 no.8 / pp.11-23 / 2000
  • Head-eye calibration is the process of estimating the unknown orientation and position of a camera with respect to a mobile platform, such as a robot wrist. We present a new head-eye calibration technique that can be applied to platforms with rather limited motion capability. In particular, the proposed technique can find the relative orientation of a camera mounted on a linear translation platform that has no rotation capability. The algorithm finds the rotation using calibration data obtained from pure translations of the camera along two different axes. We derive the calibration algorithm by exploiting the rectification technique, in such a way that the rectified images satisfy the epipolar constraint. We present the calibration procedure for both the rotation and the translation components of the camera relative to the platform coordinates. The efficacy of the algorithm is demonstrated through simulations and real experiments.

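The core geometric idea, that the camera-frame directions of two pure platform translations pin down the head-eye rotation, can be sketched with Gram-Schmidt orthonormalization. This is a simplification of the rectification-based algorithm (noiseless directions assumed, all names illustrative):

```python
import numpy as np

def rotation_from_translations(d1, d2):
    """Given the camera-frame directions of pure translations along two
    platform axes, build the platform-to-camera rotation matrix:
    columns are an orthonormal frame aligned with the motion axes."""
    x = d1 / np.linalg.norm(d1)
    y = d2 - (d2 @ x) * x            # remove the component along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)               # complete the right-handed frame
    return np.column_stack([x, y, z])
```

In the paper the directions come from epipoles of rectified image pairs rather than being given directly, but the final assembly of the rotation follows this pattern.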

3D Positioning Using a UAV Equipped with a Stereo Camera (스테레오 카메라를 탑재한 UAV를 이용한 3차원 위치결정)

  • Park, Sung-Geun;Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX / v.51 no.2 / pp.185-198 / 2021
  • Research aimed at quickly constructing 3D spatial information for small areas using UAVs is being actively conducted. In this study, without using ground control points, a stereo camera was mounted on a UAV to collect images, and three-dimensional positions were quickly constructed through image matching, bundle adjustment, and the determination of a scale factor. In the experiment, when bundle adjustment was performed using stereo constraints, the root mean square error was 1.475 m, and when absolute orientation was performed in consideration of scale, it was 0.029 m. This shows that, with the data processing method proposed in this study for a UAV equipped with a stereo camera, high-accuracy 3D spatial information can be constructed without ground control points.
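The scale-factor step that replaces ground control points can be sketched as follows: a monocular-style reconstruction is defined only up to scale, but the known physical stereo baseline fixes it. Names are hypothetical and the study's bundle adjustment is not reproduced:

```python
import numpy as np

def apply_metric_scale(points, cam_left, cam_right, baseline_m):
    """Scale an up-to-scale reconstruction so the recovered separation of
    the stereo pair matches the known physical baseline (in meters)."""
    s = baseline_m / np.linalg.norm(cam_right - cam_left)
    return s * points, s
```

The same factor applied to every reconstructed point and camera center turns the relative model into a metric one, which is why no ground control points are needed.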

A Distortion Correction Method of Wide-Angle Camera Images through the Estimation and Validation of a Camera Model (카메라 모델의 추정과 검증을 통한 광각 카메라 영상의 왜곡 보정 방법)

  • Kim, Kyeong-Im;Han, Soon-Hee;Park, Jeong-Seon
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.12 / pp.1923-1932 / 2013
  • To solve the problem of severely distorted images from a wide-angle camera, we propose a calibration method that corrects radial distortion in wide-angle images through the estimation and validation of a camera model. First, we estimate a camera model consisting of intrinsic and extrinsic parameters from calibration patterns, where the intrinsic parameters include the focal length and the principal point, and the extrinsic parameters are the relative position and orientation of the calibration pattern with respect to the camera. Next, we validate the estimated camera model by re-extracting the corner points through the inverse of the model applied to the images. Finally, we correct the distortion of the image using the validated camera model. Calibration experiments using lattice-pattern images captured with a general web camera and a wide-angle camera confirm that the proposed method corrects more than 80% of the distortion.
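The estimate-then-validate loop can be sketched for the radial term alone: distort corner radii with the model, invert the model to "re-extract" them, and accept the model only if the round trip is consistent. This uses a first-order model with assumed coefficients, not the full intrinsic/extrinsic camera model of the paper:

```python
import numpy as np

def distort(r_u, k):
    """First-order radial model: distorted radius from undistorted."""
    return r_u * (1 + k * r_u**2)

def undistort(r_d, k, iters=20):
    """Invert the radial model by fixed-point iteration on r_u = r_d / (1 + k*r_u^2)."""
    r_u = np.array(r_d, dtype=float)
    for _ in range(iters):
        r_u = r_d / (1 + k * r_u**2)
    return r_u

def validate(k, r_u, tol=1e-6):
    """Round-trip the corner radii through the model; accept if consistent."""
    err = np.abs(undistort(distort(r_u, k), k) - r_u)
    return bool(err.max() < tol)
```

The fixed-point inversion converges quickly for moderate distortion (|2 k r²| well below 1), which holds for the illustrative coefficient used in the check below.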