• Title/Summary/Keyword: Stereo calibration

Depth error calibration of maladjusted stereo cameras for translation of instrumented image information in dynamic objects (동영상 정보의 계측정보 전송을 위한 비선형 스테레오 카메라의 오차 보정)

  • Kim, Jong-Man;Kim, Yeong-Min;Hwang, Jong-Sun;Lim, Byung-Hyun
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2003.05b / pp.109-114 / 2003
  • The depth error correction effect for maladjusted stereo cameras with a calibrated pixel distance parameter is presented. Camera calibration is a necessary procedure for stereo vision-based depth computation. Intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty is camera alignment for parallel installation: placing the two CCD arrays in a common plane. No effective method for such alignment has been presented before, so some amount of depth error caused by non-parallel installation of the cameras is inevitable. If the pixel distance parameter, one of the intrinsic parameters, is calibrated with known points, this error can be compensated to some extent. The error compensation effect of the calibrated pixel distance parameter is demonstrated with various experimental results.

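The abstract above rests on the standard parallel-stereo depth relation, in which depth is recovered from disparity through the focal length, the baseline, and the pixel distance (pixel pitch) intrinsic parameter. A minimal sketch of that relation, with hypothetical camera values chosen only for illustration (none are taken from the paper):

```python
def depth_from_disparity(disparity_px, focal_length_mm, baseline_mm, pixel_pitch_mm):
    """Parallel-stereo depth: Z = f * B / (d * pixel_pitch).

    disparity_px   : disparity between left/right image points, in pixels
    focal_length_mm: lens focal length
    baseline_mm    : distance between the two camera centres
    pixel_pitch_mm : physical size of one pixel on the CCD (the "pixel
                     distance" intrinsic parameter the paper calibrates)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_mm * baseline_mm / (disparity_px * pixel_pitch_mm)


# Illustrative values only (hypothetical camera): a slightly biased pixel
# pitch shifts every computed depth, which is why calibrating it against
# known points reduces the depth error discussed above.
z_nominal = depth_from_disparity(40, focal_length_mm=8.0, baseline_mm=120.0,
                                 pixel_pitch_mm=0.0074)
z_calibrated = depth_from_disparity(40, focal_length_mm=8.0, baseline_mm=120.0,
                                    pixel_pitch_mm=0.0071)
print(z_nominal, z_calibrated)
```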

A Calibration Method for Multimodal dual Camera Environment (멀티모달 다중 카메라의 영상 보정방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.9 / pp.2138-2144 / 2015
  • A multimodal dual camera system has a stereo-like configuration equipped with an infrared thermal camera and an optical camera. This paper presents a stereo calibration method for such a system using a target board that can be recognized by both the thermal and the optical camera. Whereas a typical stereo calibration method is performed with extracted intrinsic and extrinsic camera parameters, consecutive image processing steps were applied in this paper as follows. First, corner points were detected in the two images, and the pixel error rate, the size difference, and the rotation angle between the two images were calculated from the pixel coordinates of the detected corner points. Second, calibration was performed with the calculated values via an affine transform. Lastly, the result image was reconstructed by mapping regions onto the calibrated image.
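As a rough illustration of the affine-transform step described in the entry above, the sketch below fits an affine mapping between corner points detected in the thermal and optical images with an ordinary least-squares solve; the corner coordinates and function names are placeholders of mine, not the authors' procedure or data.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 2) arrays of matching corner coordinates,
    e.g. corners of the dual-band target board seen by the thermal
    and the optical camera. Returns a 2x3 matrix [A | t].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])              # (N, 3)
    # Solve X @ M.T ~= dst for M (2x3) in the least-squares sense.
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M.T

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# Placeholder corner coordinates (pixels) -- not from the paper.
thermal_corners = np.array([[12, 15], [210, 18], [208, 160], [14, 158]])
optical_corners = np.array([[30, 40], [420, 44], [416, 330], [33, 326]])

M = fit_affine(thermal_corners, optical_corners)
print(apply_affine(M, thermal_corners))     # should land near optical_corners
```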

A study on the transformation of EO parameters using Boresight calibration (Boresight calibration을 이용한 외부표정요소 산출에 관한 연구)

  • 박수영;윤여상;김준철;정주권;주영은
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2003.10a / pp.129-134 / 2003
  • A Mobile Mapping System requires system calibration of its multiple sensors. System calibration is defined as the determination of the spatial and rotational offsets between the sensors. In particular, deriving exterior orientation (EO) parameters from GPS/INS requires knowledge of the calibration to the camera frame. The calibration parameters must be determined with the highest achievable accuracy in order to obtain 3D coordinate points from stereo CCD images. This study applies boresight calibration to the calibration between the GPS/INS and the camera, and evaluates the performance of the calibration.

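The boresight calibration described above amounts to composing the GPS/INS position and attitude with fixed spatial (lever-arm) and rotational (boresight) offsets to obtain the camera's exterior orientation. The sketch below shows that composition under assumed conventions (Z-Y-X Euler angles, body-to-mapping rotation); all numeric values are hypothetical.

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles in radians (Z-Y-X order)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def camera_eo(r_ins, R_ins, lever_arm, R_boresight):
    """Transfer GPS/INS position/attitude to camera EO parameters.

    r_ins       : GPS/INS position in the mapping frame, shape (3,)
    R_ins       : rotation body frame -> mapping frame, shape (3, 3)
    lever_arm   : camera offset from the INS centre, in the body frame (3,)
    R_boresight : rotation camera frame -> body frame (3, 3)
    """
    r_cam = r_ins + R_ins @ lever_arm          # spatial (lever-arm) offset
    R_cam = R_ins @ R_boresight                # rotational (boresight) offset
    return r_cam, R_cam

# Hypothetical numbers, purely to show how the offsets compose.
R_ins = rotation_from_rpy(0.01, -0.02, 1.20)
r_cam, R_cam = camera_eo(np.array([3100.0, 2050.0, 85.0]), R_ins,
                         lever_arm=np.array([0.35, 0.10, -0.25]),
                         R_boresight=rotation_from_rpy(0.002, 0.001, -0.003))
print(r_cam)
```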

Multi-Range Approach of Stereo Vision for Mobile Robot Navigation in Uncertain Environments

  • Park, Kwang-Ho;Kim, Hyung-O;Baek, Moon-Yeol;Kee, Chang-Doo
    • Journal of Mechanical Science and Technology / v.17 no.10 / pp.1411-1422 / 2003
  • The detection of free space between obstacles in a scene is a prerequisite for the navigation of a mobile robot. Especially for stereo vision-based navigation, the problem of correspondence between two images is known to be of crucial importance. This paper describes a multi-range approach to area-based stereo matching for grid mapping and visual navigation in uncertain environments. Camera calibration parameters are optimized by an evolutionary algorithm for successful stereo matching. To obtain reliable disparity information from both images, the stereo images are decomposed into three pairs of images with different resolutions based on the measured disparities. The advantage of the multi-range approach is that more reliable disparities are obtained in each defined range, because disparities from the high-resolution images are used for farther objects while disparities from the low-resolution images are used for close objects. The reliable disparity map is combined through post-processing that rejects incorrect disparity information from each individual disparity map. The real distances obtained from the disparity image are converted into an occupancy grid representation for the mobile robot. The possibility of the multi-range approach for obstacle detection and visual mapping has been investigated through various experiments.
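The final step of the entry above, converting disparities into an occupancy grid, can be sketched as follows. This is a simplified, assumed version of that conversion: a single disparity map, a robot-centred metric grid, and placeholder parameters, without the paper's multi-range decomposition or post-processing.

```python
import numpy as np

def disparity_to_occupancy(disparity, focal_px, baseline_m,
                           grid_res_m=0.1, grid_size=(100, 100)):
    """Convert a disparity map into a simple 2D occupancy grid.

    disparity : (H, W) array of disparities in pixels (0 = no match)
    focal_px  : focal length in pixels; baseline_m : baseline in metres.
    The grid is robot-centred along x (lateral) and z (forward).
    """
    H, W = disparity.shape
    grid = np.zeros(grid_size, dtype=np.uint8)
    cx = W / 2.0
    vs, us = np.nonzero(disparity > 0)
    z = focal_px * baseline_m / disparity[vs, us]      # forward distance
    x = (us - cx) * z / focal_px                       # lateral offset
    gz = (z / grid_res_m).astype(int)
    gx = (x / grid_res_m + grid_size[1] // 2).astype(int)
    inside = (gz >= 0) & (gz < grid_size[0]) & (gx >= 0) & (gx < grid_size[1])
    grid[gz[inside], gx[inside]] = 1                   # mark occupied cells
    return grid

# Toy disparity map: one small "obstacle" patch with disparity 20 px.
d = np.zeros((48, 64))
d[20:28, 30:38] = 20.0
print(disparity_to_occupancy(d, focal_px=400.0, baseline_m=0.12).sum())
```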

A 2-D Image Camera Calibration using a Mapping Approximation of Multi-Layer Perceptrons (다층퍼셉트론의 정합 근사화에 의한 2차원 영상의 카메라 오차보정)

  • 이문규;이정화
    • Journal of Institute of Control, Robotics and Systems / v.4 no.4 / pp.487-493 / 1998
  • Camera calibration is the process of determining the coordinate relationship between a camera image and the real-world space. Accurate calibration of a camera is necessary for applications that involve quantitative measurement from camera images. However, if the camera plane is parallel or nearly parallel to the calibration board on which the 2-dimensional objects are defined (the "ill-conditioned" case), existing solution procedures do not apply well. In this paper, we propose a neural network-based approach to camera calibration for 2D images formed by a mono camera or a pair of cameras. Multi-layer perceptrons are developed to transform the coordinates of each image point into world coordinates. The validity of the approach is tested with data points that cover the whole 2D space concerned. Experimental results for both the mono-camera and stereo-camera cases indicate that the proposed approach is comparable to Tsai's method [8]. Especially for the stereo-camera case, the approach works better than Tsai's method as the angle between the camera optical axis and the Z-axis increases. Therefore, we believe the approach can be an alternative solution procedure for ill-conditioned camera calibration.

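A minimal sketch of the core idea above, training a multi-layer perceptron to map image coordinates to world coordinates, is given below. The training data are synthetic stand-ins for real calibration points, and the use of scikit-learn's MLPRegressor is my choice of tool, not the authors'.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for calibration data: image coordinates (u, v) and the
# corresponding world coordinates (X, Y) measured on a calibration board.
# The "true" mapping here is arbitrary, chosen only so the example runs.
rng = np.random.default_rng(0)
uv = rng.uniform(0, 640, size=(200, 2))
XY = np.column_stack([
    0.5 * uv[:, 0] + 0.02 * uv[:, 1] + 1e-4 * uv[:, 0] * uv[:, 1],
    0.48 * uv[:, 1] - 0.01 * uv[:, 0] + 3.0,
])

# Multi-layer perceptron approximating the image -> world mapping.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
mlp.fit(uv / 640.0, XY)                       # scale inputs for stable training

test_uv = np.array([[320.0, 240.0]])
print(mlp.predict(test_uv / 640.0))           # estimated world coordinates
```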

Real-Time Compressed Video Acquisition System for Stereo 360 VR (Stereo 360 VR을 위한 실시간 압축 영상 획득 시스템)

  • Choi, Minsu;Paik, Joonki
    • Journal of Broadcast Engineering / v.24 no.6 / pp.965-973 / 2019
  • In this paper, a real-time stereo 4K@60fps 360 VR video capture system consisting of video stream capture, video encoding, and stitching modules is designed. The system produces stereo 4K@60fps 360 VR video by stitching six 2K@60fps streams captured in real time from six cameras over an HDMI interface. In the video capture phase, video is captured from each camera in real time using multiple threads. In the video encoding phase, raw frame memory transmission and parallel encoding are used to reduce the resource usage of data transmission between the video capture and video stitching modules. In the video stitching phase, real-time stitching is secured by a stitching calibration preprocessing step.
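The capture phase described above, one thread per camera feeding a downstream stitching stage, can be sketched roughly as below. The frame sources, queue sizes, and timing are placeholders; real HDMI capture and stitching are stubbed out.

```python
import queue
import threading
import time

NUM_CAMERAS = 6          # the paper uses six 2K@60fps HDMI inputs

def capture_worker(cam_id, out_q, stop_evt):
    """Stand-in for one per-camera capture thread (real frames would come
    from an HDMI capture API; here a counter simulates 60 fps frames)."""
    frame_no = 0
    while not stop_evt.is_set():
        time.sleep(1 / 60)                       # simulate 60 fps pacing
        out_q.put((cam_id, frame_no, f"frame-{cam_id}-{frame_no}"))
        frame_no += 1

def stitch_worker(queues, stop_evt):
    """Consume one frame from every camera queue and 'stitch' them."""
    while not stop_evt.is_set():
        frames = [q.get() for q in queues]       # blocks until all six arrive
        stitched = tuple(f[2] for f in frames)   # placeholder for real stitching
        # ... encode / write `stitched` here ...

stop = threading.Event()
qs = [queue.Queue(maxsize=4) for _ in range(NUM_CAMERAS)]
workers = [threading.Thread(target=capture_worker, args=(i, qs[i], stop),
                            daemon=True) for i in range(NUM_CAMERAS)]
workers.append(threading.Thread(target=stitch_worker, args=(qs, stop),
                                daemon=True))
for w in workers:
    w.start()
time.sleep(0.5)                                  # run briefly for the demo
stop.set()
```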

Measurement of Distance and Velocity of Moving Objects using Single Camera Pseudo-Stereo Images

  • Lee, Jae-Soo;Kim, Soo-In;Choi, In-Ho
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.21 no.9 / pp.32-38 / 2007
  • In this study, a new algorithm is proposed for measuring the distance from a camera to a moving object, and the object's velocity, using pseudo-stereo images obtained from a single camera with a stereo adapter. The proposed system is similar to a parallel stereo vision system built from two cameras, but because it obtains pseudo-stereo images from a single camera, it has advantages not only in cost but also in stereo conformity, avoiding the arrangement and calibration of separate left and right cameras during image processing.
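Once the camera-to-object distance has been measured per frame from the pseudo-stereo disparity, the velocity follows from finite differences over the frame timestamps. A minimal sketch of that step, with illustrative distances that are not from the paper:

```python
def object_velocity(distances_m, timestamps_s):
    """Estimate radial velocity of a tracked object from the distances
    measured in consecutive pseudo-stereo frames.

    distances_m  : list of camera-to-object distances, one per frame
    timestamps_s : matching capture times in seconds
    Returns per-interval velocities (m/s); positive = moving away.
    """
    if len(distances_m) != len(timestamps_s) or len(distances_m) < 2:
        raise ValueError("need at least two synchronised measurements")
    return [(d1 - d0) / (t1 - t0)
            for (d0, d1), (t0, t1) in zip(zip(distances_m, distances_m[1:]),
                                          zip(timestamps_s, timestamps_s[1:]))]

# Hypothetical distances recovered from the left/right halves of the
# adapter image at roughly 30 fps (values are illustrative only).
print(object_velocity([5.00, 4.87, 4.75], [0.000, 0.033, 0.066]))
```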

The Image Measuring System for accurate calibration-matching in objects (정밀 켈리브레이션 정합을 위한 화상측징계)

  • Kim, Jong-Man
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2006.11a / pp.357-358 / 2006
  • Accurate calibration matching for maladjusted stereo cameras with a calibrated pixel distance parameter is presented. Camera calibration is a necessary procedure for stereo vision-based depth computation. Intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty is camera alignment for parallel installation: placing the two CCD arrays in a common plane. No effective method for such alignment has been presented before, so some amount of depth error caused by non-parallel installation of the cameras is inevitable. If the pixel distance parameter, one of the intrinsic parameters, is calibrated with known points, this error can be compensated to some extent, as shown by various experiments on the accuracy of the correction.

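The calibration step mentioned above, fitting the pixel distance parameter against known reference points, can be sketched by inverting the parallel-stereo depth relation for a point of known depth. The single-point version below is a simplification with hypothetical numbers; averaging over several known points would be more robust.

```python
def calibrate_pixel_pitch(known_depth_mm, disparity_px,
                          focal_length_mm, baseline_mm):
    """Solve Z = f * B / (d * pitch) for the pixel pitch, given one
    reference point whose true depth is known."""
    return focal_length_mm * baseline_mm / (disparity_px * known_depth_mm)

# Illustrative numbers for a hypothetical rig, not from the paper:
pitch = calibrate_pixel_pitch(known_depth_mm=1500.0, disparity_px=90.0,
                              focal_length_mm=8.0, baseline_mm=120.0)
print(pitch)   # calibrated pixel distance parameter, in mm/pixel
```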

A Study on the Sensor Calibration for Low Cost Motion Capture Sensor using PSD Sensor (PSD센서를 이용한 모션캡쳐 시스템의 센서보정에 관한 연구)

  • Kim, Yu-Geon;Choi, Hun-Il;Ryu, Young-Kee;Oh, Choon-Suk
    • Proceedings of the KIEE Conference / 2005.10b / pp.603-605 / 2005
  • In this paper, we deal with a calibration method for a low-cost motion capture sensor using a PSD (Position Sensitive Detector). The PSD sensor is employed to measure the direction of incident light from moving markers attached to the body being captured. To calibrate the PSD optical module, the conventional camera calibration algorithm introduced by Tsai is applied. The 3-dimensional positions of the markers are measured using stereo camera geometry. The experimental results show that the low-cost motion capture sensor can be used in a real-time system.

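The stereo-geometry step above, recovering a marker's 3D position from the light directions measured by two calibrated modules, reduces to triangulating two rays. A minimal midpoint-triangulation sketch, with hypothetical sensor placements:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation of two viewing rays.

    c1, c2 : sensor (or camera) centres, shape (3,)
    d1, d2 : direction vectors of the incident light measured by each
             sensor, shape (3,)
    Returns the 3D point closest to both rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters s, t minimising |c1 + s*d1 - (c2 + t*d2)|.
    A = np.column_stack([d1, -d2])
    b = c2 - c1
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Hypothetical marker seen by two modules 0.5 m apart.
p = triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.05, 1.0]),
                np.array([0.5, 0.0, 0.0]), np.array([-0.3, 0.05, 1.0]))
print(p)
```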

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map; the disparity map resulting from this compensation process is the multi-sensor fusion disparity map. The multi-sensor 3D reconstruction based on stereo vision and the LRF can then be refined using the multi-sensor fusion disparity map. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.

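The projection step described above, mapping LRF 3D points onto image pixel coordinates with the camera-LRF extrinsics and the camera matrix K, can be sketched as follows. The rotation/translation conventions and all numeric values are assumptions for illustration, not the paper's calibration results.

```python
import numpy as np

def project_lrf_points(points_lrf, R, t, K):
    """Project LRF 3D points into image pixel coordinates.

    points_lrf : (N, 3) points in the LRF frame
    R, t       : extrinsic rotation (3, 3) and translation (3,) taking LRF
                 coordinates into the camera frame (the paper's Phi, Delta)
    K          : 3x3 camera calibration (intrinsic) matrix
    Returns (N, 2) pixel coordinates; points behind the camera are dropped.
    """
    pts_cam = points_lrf @ R.T + t                 # LRF frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]           # keep points in front
    uvw = pts_cam @ K.T                            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                # perspective division

# Hypothetical calibration: identity rotation, small offset, simple K.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.2, -0.1, 2.0], [0.0, 0.0, 3.5], [-0.4, 0.2, 5.0]])
print(project_lrf_points(pts, np.eye(3), np.array([0.05, 0.0, 0.1]), K))
```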