• Title/Summary/Keyword: Camera calibration data


Hard calibration of a structured light for the Euclidian reconstruction (3차원 복원을 위한 구조적 조명 보정방법)

  • 신동조;양성우;김재희
    • Proceedings of the IEEK Conference / 2003.11a / pp.183-186 / 2003
  • A vision sensor must be calibrated before a Euclidean shape can be reconstructed. Point-to-point calibration, also referred to as hard calibration, estimates the calibration parameters from a set of 3D-to-2D point pairs. We propose a new method for determining such a set of 3D-to-2D pairs for structured-light hard calibration. The pairs are determined simply from the epipolar geometry between the camera image plane and the projector plane together with a projector calibration grid pattern. The projector calibration is divided into two stages: a world 3D data acquisition stage and a corresponding 2D data acquisition stage. After the 3D data points are derived using the cross ratio, the corresponding 2D points in the projector plane are determined from the fundamental matrix and the horizontal grid ID of the projector calibration pattern. Euclidean reconstruction is then achieved by linear triangulation (see the sketch below), and experimental results from simulation are presented.

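The final step of the entry above is linear triangulation from the calibrated camera/projector pair. Below is a minimal DLT-style triangulation sketch, not the authors' implementation; the projection matrices `P_cam` and `P_proj` and the correspondence `(x_cam, x_proj)` are assumed to come from the hard calibration described in the abstract.

```python
import numpy as np

def triangulate_linear(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation of one point from a camera/projector pair.

    P_cam, P_proj : 3x4 projection matrices (camera and projector).
    x_cam, x_proj : corresponding 2D points (u, v) in each view.
    Returns the Euclidean 3D point.
    """
    A = np.vstack([
        x_cam[0] * P_cam[2] - P_cam[0],
        x_cam[1] * P_cam[2] - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    # The 3D point is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```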

The Laser Calibration Based On Triangulation Method (삼각법을 기반으로 한 레이저 캘리브레이션)

  • 주기세
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.4 / pp.859-865 / 1999
  • Many sensors, such as lasers and CCD cameras, have been used to obtain 3D information, but most laser calibration algorithms are inefficient because they require large amounts of memory and experimental data. The method presented here saves memory and experimental data because the 3D information is obtained by a simple triangulation method. In this paper, a calibration algorithm for a slit laser based on the triangulation method is introduced to compute 3D information in the real world. The laser beam, mounted orthogonally on an XY table, is projected onto the floor, and a CCD camera observes the intersection of the light plane with the object plane. The 3D information is then calculated from the observed and calibration data (see the sketch below).

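The triangulation in the entry above amounts to intersecting a camera viewing ray with the calibrated laser plane. The sketch below shows that generic ray-plane intersection under assumed inputs (intrinsics `K`, plane parameters `plane_n`, `plane_d`); it is an illustration of the principle, not the paper's specific algorithm.

```python
import numpy as np

def laser_point_3d(pixel, K, plane_n, plane_d):
    """Triangulate a 3D point on a slit-laser plane from one camera pixel.

    pixel   : (u, v) image coordinates on the laser stripe.
    K       : 3x3 camera intrinsic matrix (camera frame taken as world frame).
    plane_n : laser plane normal; plane_d : offset, so that n.X + d = 0.
    """
    # Back-project the pixel to a viewing ray through the camera center.
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # Intersect the ray t*ray with the laser plane n.X + d = 0.
    t = -plane_d / (plane_n @ ray)
    return t * ray
```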

A Image-based 3-D Shape Reconstruction using Pyramidal Volume Intersection (피라미드 볼륨 교차기법을 이용한 영상기반의 3차원 형상 복원)

  • Lee Sang-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.1 / pp.127-135 / 2006
  • Image-based 3D modeling is the technique of generating a 3D graphic model from images acquired with cameras, and it is being researched as an alternative to expensive 3D scanners. In this paper, I propose an image-based 3D modeling system that uses a calibrated camera. The proposed algorithm consists of three steps: camera calibration, 3D shape reconstruction, and 3D surface generation. In the camera calibration step, the camera matrix of the image acquisition camera is estimated. In the 3D shape reconstruction step, 3D volume data are computed from silhouettes using pyramidal volume intersection (a minimal voxel-carving sketch follows this entry). In the 3D surface generation step, the reconstructed volume data are converted into a 3D mesh surface. The results show that the system generates relatively accurate 3D models.
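
As referenced in the abstract, here is a plain (non-pyramidal) voxel-carving sketch of volume intersection from silhouettes; the silhouette masks and 3x4 projection matrices are assumed to come from the calibration step, and the coarse-to-fine pyramidal refinement described in the paper is omitted.

```python
import numpy as np

def carve_voxels(voxels, silhouettes, projections):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels      : (N, 3) array of candidate voxel centers.
    silhouettes : list of boolean masks (H, W), one per calibrated view.
    projections : list of 3x4 camera projection matrices, one per view.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                       # project all voxels at once
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep &= inside                          # carve voxels projecting off-image
        keep[inside] &= mask[v[inside], u[inside]]  # carve voxels outside the silhouette
    return voxels[keep]
```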

Performance of CQUEAN camera

  • Choi, Chang-Su;Park, Won-Kee;Jeon, Yi-Seul;Pak, Soo-Jong;Im, Myung-Shin
    • The Bulletin of The Korean Astronomical Society / v.35 no.2 / pp.63.1-63.1 / 2010
  • CQUEAN (Camera for QUasars in EArly uNiverse) is a camera system newly developed by CEOU and optimized for the 0.8-$1.1{\mu}m$ wavelength region. From Aug. 10 to Aug. 17, 2010, the camera was installed on the 2.1 m Otto Struve telescope at McDonald Observatory, USA, and engineering test observations were performed. We obtained data for camera characterization and for scientific purposes using 7 filters (g, r, i, z, Is, Iz, Y); the new Is and Iz filters were used specifically to search for z ~ 5-6 quasars. During the test observations we obtained data on a gamma-ray burst, high-redshift quasars, high-redshift quasar candidates, and calibration frames. We present the general characteristics of the reduced data taken with CQUEAN and the performance of the camera (a sketch of a typical calibration-frame reduction follows this entry).

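The entry above mentions calibration data and reduced data. The sketch below is a generic CCD calibration-frame reduction (bias, dark, flat), not CQUEAN's actual pipeline; the frame names and the exposure-time scaling of the dark are assumptions.

```python
import numpy as np

def reduce_frame(raw, bias, dark, flat, exptime, dark_exptime):
    """Generic CCD reduction: subtract bias and scaled dark, divide by flat.

    raw, bias, dark, flat : 2D arrays of the same shape.
    exptime, dark_exptime : exposure times of the science and dark frames,
    used to scale the dark current to the science exposure.
    """
    dark_scaled = (dark - bias) * (exptime / dark_exptime)
    flat_norm = flat - bias
    flat_norm = flat_norm / np.median(flat_norm)   # normalize the flat field
    return (raw - bias - dark_scaled) / flat_norm
```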

RECTIFICATION OF PURE TRANSLATION 2D CAMERA ARRAY

  • Ota, Makoto;Fukushima, Norishige;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.659-663 / 2009
  • In this paper, we propose a rectification method that converts ray-space data captured by a controlled camera array into ideal data. Ideal data would be obtained if the epipolar lines between horizontally and vertically adjacent cameras were exactly transversal and longitudinal, respectively, but in practice it is difficult to arrange the cameras precisely because they are aligned by hand. The conventional approach is camera calibration, but it leaves residual errors in the output images, and this error becomes a critical problem when arbitrary-viewpoint images are generated. We instead focus on the ideal trajectories of feature points and minimize the error directly by making the observed trajectories parallel to them (see the sketch below). Using the proposed technique, we reduced the error to less than 0.5 pixels, demonstrating its usefulness.

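As a rough illustration of the idea referenced above (aligning an observed feature-point trajectory with its ideal horizontal trajectory), the sketch below levels a single image by the tilt of a tracked trajectory. It is a strong simplification, not the authors' rectification method; the rotation-only model and the sign convention are assumptions.

```python
import numpy as np
import cv2

def rectify_by_trajectory(image, traj_xy):
    """Rotate an image so that a feature-point trajectory becomes horizontal.

    traj_xy : (N, 2) image positions of one feature tracked across the
    horizontally translated cameras; ideally they lie on a horizontal line.
    """
    # Fit a line to the observed trajectory and measure its tilt.
    x, y = traj_xy[:, 0], traj_xy[:, 1]
    slope = np.polyfit(x, y, 1)[0]
    angle_deg = np.degrees(np.arctan(slope))
    # Rotate about the image center to cancel the measured tilt
    # (sign depends on the y-down image coordinate convention).
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(image, M, (w, h))
```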

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong;Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering / v.32 no.7 / pp.633-641 / 2015
  • In this study, we propose a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable, and a new, simple method is proposed for laser-camera calibration. A 3D point cloud of the scanned object's surface is collected by integrating the laser profiles extracted from the laser stripe images at the corresponding rotation angles of the rotary mechanism (see the sketch below). The problem of occluded laser profiles is solved by adding an additional camera at another viewpoint. From the collected point cloud, the 3D model of the scanned object is reconstructed using a facet representation. The reconstructed 3D models demonstrate the effectiveness of the proposed scanning system and its applicability to 3D model-based applications.
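
The point-cloud integration step referenced above can be sketched as rotating each extracted laser profile by the turntable angle at which it was captured. The code below assumes the profiles are already expressed in a turntable-centered frame whose z-axis is the rotation axis; it is an illustration, not the authors' implementation.

```python
import numpy as np

def merge_profiles(profiles, angles_deg):
    """Merge per-step laser profiles into one point cloud on the turntable.

    profiles   : list of (N_i, 3) arrays of 3D laser-stripe points in the
                 turntable frame (z = rotation axis), one per scan step.
    angles_deg : turntable angle at which each profile was captured.
    """
    cloud = []
    for pts, ang in zip(profiles, angles_deg):
        a = np.radians(ang)
        # Rotate each profile by the turntable angle about the z-axis.
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        cloud.append(pts @ Rz.T)
    return np.vstack(cloud)
```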

Self-calibration of a Multi-camera System using Factorization Techniques for Realistic Contents Generation (실감 콘텐츠 생성을 위한 분해법 기반 다수 카메라 시스템 자동 보정 알고리즘)

  • Kim, Ki-Young;Woo, Woon-Tack
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.495-506 / 2006
  • In this paper, we propose a self-calibration method for a multi-camera system based on factorization techniques for realistic contents generation. Traditional self-calibration algorithms for multi-camera systems have focused on stereo(-rig) cameras or on multiple cameras in a fixed configuration, which makes them difficult to apply to 3D reconstruction with a mobile multi-camera system or to other general applications. We therefore present a robust algorithm for generally structured multi-camera systems, including a variant for plane-structured multi-camera systems. We explain the theoretical background and practical usage of a projective factorization and the proposed affine factorization (a minimal affine-factorization sketch follows this entry), and we show experimental results on both simulated data and real images. The proposed algorithm can be used for 3D reconstruction and mobile Augmented Reality.
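
As referenced above, a Tomasi-Kanade style affine factorization can be written in a few lines; the sketch below recovers affine motion and structure from a centered measurement matrix via a rank-3 SVD. The paper's projective factorization and any metric upgrade are not shown, and the input layout of `W` is an assumption.

```python
import numpy as np

def affine_factorization(W):
    """Affine factorization of a measurement matrix (Tomasi-Kanade style).

    W : (2*m, n) matrix stacking the (x, y) image coordinates of n points
        tracked in m views.
    Returns affine camera matrices M (2*m, 3) and structure S (3, n),
    both defined only up to an affine transformation.
    """
    # Center each image's coordinates (removes the affine translation part).
    W_centered = W - W.mean(axis=1, keepdims=True)
    # Rank-3 approximation via SVD.
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])           # affine motion (cameras)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]    # affine structure (points)
    return M, S
```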

Analysis of the Dark Calibration Data of the KOMPSAT-1 EOC (다목적 실용위성 1호 EOC의 Dark Calibration Data 분석)

  • 강치호;전갑호;전정남;최해진
    • Bulletin of the Korean Space Science Society / 2003.10a / pp.101-101 / 2003
  • The EOC (Electro-Optical Camera) on board KOMPSAT-1 has been observing the Korean Peninsula and major land areas around the world since 2000. The EOC consists of a Sensor Assembly (optics) and an Electronics Assembly; it converts the light incident from the ground into digital signals and transmits them to the ground through the PDTS (Payload Data Transmission System). The EOC sensor is a linear array of 2,592 CCD (Charge-Coupled Device) elements operated in a push-broom scanning mode. Before and after each imaging mission, the Aperture Cover Mechanism closes the EOC cover and a short exposure is taken with the cover closed; these images are also transmitted to the ground. They provide indirect information on the dark current contained in EOC images and are defined as the dark calibration data. After the dark calibration data are received on the ground, they are used for the radiometric correction of EOC images. In this study, we analyze the noise components in EOC images through an analysis of the EOC dark calibration data (see the sketch below).

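A minimal sketch of the kind of analysis referenced above: per-element statistics of a stack of cover-closed (dark) exposures, and a simple column-wise dark subtraction for a push-broom image. This is a generic illustration, not the EOC radiometric-correction procedure; the array shapes are assumptions.

```python
import numpy as np

def dark_statistics(dark_frames):
    """Per-element statistics of a stack of dark calibration frames.

    dark_frames : (k, n) array, k cover-closed exposures of an n-element
                  linear CCD array (for the EOC, n would be 2592).
    """
    mean_dark = dark_frames.mean(axis=0)        # average dark level per element
    noise = dark_frames.std(axis=0, ddof=1)     # temporal noise per element
    return mean_dark, noise

def dark_correct(image, mean_dark):
    """Subtract the per-column mean dark level from a push-broom image."""
    return image - mean_dark[np.newaxis, :]
```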

Development and Application of High-resolution 3-D Volume PIV System by Cross-Correlation (해상도 3차원 상호상관 Volume PIV 시스템 개발 및 적용)

  • Kim Mi-Young;Choi Jang-Woon;Lee Hyun;Lee Young-Ho
    • Proceedings of the KSME Conference / 2002.08a / pp.507-510 / 2002
  • A three-dimensional particle image velocimetry (3D-PIV) algorithm was developed for measuring the 3D velocity field of complex flows. The measurement system consists of two or three CCD cameras and one RGB image grabber. The flow field measures $1500{\times}100{\times}180(mm)$, the tracer particles are Nylon12 (1 mm), and the illuminator is a 100 W halogen lamp. Stereo photogrammetry is adopted for the three-dimensional geometric measurement of the tracer particles. For stereo-pair matching, the camera parameters must be determined in advance by camera calibration; the parameters are computed from the collinearity equations, and the eleven parameters of each camera are obtained by calibration (a minimal DLT sketch follows this entry). Epipolar lines are used for stereo-pair matching, and the 3D position of a particle is calculated from the camera parameters, the centers of projection of the three cameras, and the photographic coordinates of the particle, based on the collinearity condition. Velocity vectors are found from the 3D positions in the first and second frames, and the continuity equation is applied to detect erroneous vectors. Various 3D-PIV animation techniques were also developed in this study.

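The eleven camera parameters mentioned above correspond to the standard DLT form of the collinearity condition. Below is a minimal least-squares DLT sketch from 3D-2D correspondences; it is a generic formulation, not the authors' calibration code.

```python
import numpy as np

def dlt_calibrate(world_xyz, image_uv):
    """Estimate the 11 DLT camera parameters from 3D-2D correspondences.

    world_xyz : (N, 3) calibration-target points (N >= 6).
    image_uv  : (N, 2) corresponding image coordinates.
    Returns L, the 11 DLT coefficients of the collinearity model
    u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1), and
    v analogously with L5..L8 in the numerator.
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_xyz, image_uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.extend([u, v])
    # Least-squares solution of A @ L = b.
    L, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return L
```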

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.39-43 / 2004
  • A typical theme in active vision systems is gaze fixing of a camera, which means steering the orientation of the camera so that a given point on the object is always kept at the center of the image. This requires combining a function that analyzes the image data with a function that controls the camera orientation. This paper presents an algorithm for gaze fixing in which image analysis and orientation control are designed within a single framework. To avoid implementation difficulties and to target real-time applications, the algorithm is designed as a simple closed form that uses no information about camera calibration or structure estimation (see the sketch below).
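
The paper derives a closed-form, calibration-free control law; the sketch below is only a crude proportional stand-in that pushes the tracked point toward the image center, intended to illustrate how image error and orientation control are coupled. The gain and the sign conventions of the pan-tilt unit are assumptions.

```python
def gaze_step(target_px, image_size, gain=0.001):
    """One calibration-free gaze-fixing step (simple proportional control).

    target_px  : (u, v) current image position of the tracked point.
    image_size : (width, height) of the image in pixels.
    gain       : maps pixel error to pan/tilt increments (radians per pixel).
    Returns (d_pan, d_tilt) commands that push the point toward the center.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err_u, err_v = target_px[0] - cx, target_px[1] - cy
    # A point right of center needs a positive pan, a point below center a
    # downward tilt; actual signs depend on the pan-tilt hardware.
    d_pan = gain * err_u
    d_tilt = gain * err_v
    return d_pan, d_tilt
```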