• Title/Summary/Keyword: Multi-camera calibration

Multi-camera Calibration Method for Optical Motion Capture System (광학식 모션캡처를 위한 다중 카메라 보정 방법)

  • Shin, Ki-Young; Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.41-49 / 2009
  • In this paper, a multi-camera calibration algorithm for optical motion capture systems is proposed. The algorithm first performs camera calibration using the DLT (Direct Linear Transformation) method and a 3-axis calibration frame with 7 optical markers. A second calibration is then performed by waving a wand of known length (the so-called wand dance) throughout the desired calibration volume. The first calibration yields not only the camera parameters but also the radial lens distortion parameters, and these are used as the initial solution for the optimization in the second calibration. That optimization minimizes the difference between the real inter-marker distances and the distances between the reconstructed markers. To verify the proposed algorithm, re-projection errors are calculated, the distances among the markers on the 3-axis frame and on the wand are computed, and the proposed algorithm is compared with a commercial motion capture system. For the 3D reconstruction error of the 3-axis frame, the average error is 1.7042 mm with the commercial system and 0.8765 mm with the proposed algorithm, i.e., the proposed algorithm reduces the average error to 51.4 percent of the commercial system's. For the inter-marker distance on the wand, the average error is 1.8897 mm with the commercial system and 2.0183 mm with the proposed algorithm.
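
The first-stage calibration above is standard DLT. As a minimal sketch (assuming at least six known 3D-2D marker correspondences; the function and variable names are illustrative, not from the paper), the 3x4 projection matrix can be recovered as the null vector of a stacked linear system:

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """X3d: (n, 3) known marker positions, x2d: (n, 2) pixel observations.
    Returns the 3x4 projection matrix P, defined up to scale."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The singular vector of the smallest singular value solves A p = 0.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)
```

The paper additionally estimates radial distortion in this stage and refines all parameters in the wand-dance optimization; the sketch covers only the linear DLT core.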

Self-calibration of a Multi-camera System using Factorization Techniques for Realistic Contents Generation (실감 콘텐츠 생성을 위한 분해법 기반 다수 카메라 시스템 자동 보정 알고리즘)

  • Kim, Ki-Young; Woo, Woon-Tack
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.495-506 / 2006
  • In this paper, we propose a self-calibration method for multi-camera systems using factorization techniques for realistic contents generation. Traditional self-calibration algorithms for multi-camera systems have focused on stereo(-rig) camera systems or multiple-camera systems with a fixed configuration, which makes them difficult to exploit for 3D reconstruction with a mobile multi-camera system and other general applications. For this reason, we suggest a robust algorithm for generally structured multi-camera systems, including an algorithm for a plane-structured multi-camera system. We explain the theoretical background and practical usage based on a projective factorization and the proposed affine factorization, and we show experimental results with simulated data as well as real images. The proposed algorithm can be used for 3D reconstruction and mobile Augmented Reality.
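
The affine branch of such factorization methods follows the classic Tomasi-Kanade pattern: stack the tracked image points from all views into one measurement matrix, then split it into motion and structure with a rank-3 SVD truncation. A sketch of that generic step (not the paper's exact projective/affine formulation) follows:

```python
import numpy as np

def affine_factorization(W):
    """W: (2m, n) measurement matrix for m views of n tracked points.
    Returns M (2m, 3) stacked affine cameras and S (3, n) structure,
    both defined up to a common affine ambiguity."""
    W0 = W - W.mean(axis=1, keepdims=True)   # register each row to its centroid
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # structure factor
    return M, S
```

A subsequent metric upgrade (or, in the projective case, recovery of projective depths) turns these factors into calibrated cameras.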

3D Calibration Method on Large-Scale Hull Pieces Profile Measurement using Multi-Slit Beams (선박용 곡판형상의 실시간 측정을 위한 다중 슬릿빔 보정법)

  • Kim, ByoungChang; Lee, Se-Han
    • Journal of Institute of Control, Robotics and Systems / v.19 no.11 / pp.968-973 / 2013
  • In the transportation industry, and especially in the shipbuilding process, 3D surface measurement of large-scale hull pieces is needed for fabrication and assembly. We suggest an efficient method for checking the shape of curved plates during the forming operation in a short time by measuring 3D profiles along multiple lines of the target surface. For accurate profile reconstruction, 2D camera calibration and 3D calibration using gauge blocks were performed. The evaluation test shows that the measurement accuracy is within the tolerance required in the shipbuilding process.
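
The geometric core of slit-beam profiling is ray-plane intersection: each detected slit pixel defines a camera ray, and intersecting it with the calibrated laser plane yields a 3D surface point. A hedged sketch, where K and the plane parameters stand in for the paper's calibration results:

```python
import numpy as np

def triangulate_slit_pixel(u, v, K, plane_n, plane_d):
    """Intersect the ray through pixel (u, v) with the laser plane
    n . X + d = 0, both expressed in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction from the origin
    t = -plane_d / (plane_n @ ray)                  # ray parameter at the plane
    return t * ray                                  # 3D point on the hull surface
```

Sweeping this over every detected pixel of every slit line produces the multi-line profile the abstract describes.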

Experiment on Camera Platform Calibration of a Multi-Looking Camera System using single Non-Metric Camera (비측정용 카메라를 이용한 Multi-Looking 카메라의 플랫폼 캘리브레이션 실험 연구)

  • Lee, Chang-No; Lee, Byoung-Kil; Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.4 / pp.351-357 / 2008
  • An aerial multi-looking camera system is equipped with five separate cameras, which enables acquiring one vertical image and four oblique images at the same time. This provides more diverse information about a site than vertical aerial photographs alone. The geometric relationship between the oblique cameras and the vertical camera can be modelled by 6 exterior orientation parameters. Once the relationship between the vertical camera and each oblique camera is determined, the exterior orientation parameters of the oblique images can be calculated from those of the vertical image. In order to examine the relative exterior orientation of the vertical camera and each oblique camera in the multi-looking system, calibration targets were installed in a lab and 14 images were taken from three image stations by tilting and rotating a non-metric digital camera. The interior orientation parameters of the camera and the exterior orientation parameters of the images were estimated. The exterior orientation parameters of each oblique image with respect to the vertical image were then calculated from the estimated orientations, and the error propagation of the orientation angles and the projection-center position was examined.
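
The platform calibration reduces to composing rigid transforms. Assuming world-to-camera extrinsics of the form x_cam = R X + t (a convention assumed here, not stated in the abstract), the fixed vertical-to-oblique transform and its later reuse look like this sketch:

```python
import numpy as np

def relative_orientation(R_v, t_v, R_o, t_o):
    """Extrinsics of the vertical (R_v, t_v) and an oblique (R_o, t_o) camera.
    Returns (R_rel, t_rel) mapping the vertical-camera frame to the oblique one."""
    R_rel = R_o @ R_v.T
    t_rel = t_o - R_rel @ t_v
    return R_rel, t_rel

def oblique_from_vertical(R_v, t_v, R_rel, t_rel):
    """Exterior orientation of an oblique image, derived from the vertical image's."""
    return R_rel @ R_v, R_rel @ t_v + t_rel
```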

A New Calibration of 3D Point Cloud using 3D Skeleton (3D 스켈레톤을 이용한 3D 포인트 클라우드의 캘리브레이션)

  • Park, Byung-Seo; Kang, Ji-Won; Lee, Sol; Park, Jung-Tak; Choi, Jang-Hwan; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of Broadcast Engineering / v.26 no.3 / pp.247-257 / 2021
  • This paper proposes a new technique for calibrating a multi-view RGB-D camera system using a 3D (three-dimensional) skeleton. Calibrating a multi-view camera system requires consistent feature points, and a high-accuracy calibration result requires those feature points to be accurate. We use the human skeleton as the feature points, since it can be obtained easily with state-of-the-art pose estimation algorithms, and propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through the pose estimation algorithm as feature points. Since the human body information captured by each camera may be incomplete, the skeleton predicted from that image information may also be incomplete. After efficiently integrating a large number of incomplete skeletons into one skeleton, the multi-view cameras can be calibrated by using the integrated skeleton to obtain the camera transformation matrices. To increase the accuracy of the calibration, multiple skeletons are used for optimization through temporal iterations. We demonstrate through experiments that a multi-view camera system can be calibrated using a large number of incomplete skeletons.
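
Once corresponding 3D joints are available in two camera frames, the camera transformation matrix amounts to a rigid alignment, classically solved with the Kabsch (orthogonal Procrustes) algorithm. A minimal sketch of that step; the joint correspondences come from the pose estimator, and the paper's skeleton integration and temporal optimization are omitted:

```python
import numpy as np

def rigid_transform_from_joints(src, dst):
    """src, dst: (n, 3) corresponding joint positions in two camera frames.
    Returns (R, t) such that dst ~= (R @ src.T).T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered joints
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```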

The Design of MSC(Multi-Spectral Camera) Calibration Operation

  • Yong, Sang-Soon; Kang, Geum-Sil; Jang, Young-Jun; Kim, Jong-Ah; Kang, Song-Doug; Paik, Hong-Yul
    • Proceedings of the KSRS Conference / 2004.10a / pp.601-603 / 2004
  • The Multi-Spectral Camera (MSC) is a payload on the KOMPSAT-2 satellite that performs earth remote sensing. The instrument images the earth in a push-broom motion with a swath width of 15 km and a ground sample distance (GSD) of 1 m over the entire field of view (FOV) at an altitude of 685 km. The instrument is designed to have an on-orbit operation duty cycle of 20% over the mission lifetime of 3 years, with programmable gain/offset and onboard image data compression/storage functions. The MSC instrument has one (1) channel for panchromatic imaging and four (4) channels for multi-spectral imaging, covering the spectral range from 450 nm to 900 nm using TDI CCD Focal Plane Arrays (FPA). In this paper, the configuration and interfaces of the MSC hardware and the MSC operation concept are described. The MSC calibration method is also described, and the design of the MSC calibration operation, which measures changes in the MSC after Launch & Early Operation (LEOP) and during normal mission operations, is discussed and analyzed.
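
As a small arithmetic check of the imaging geometry stated above (numbers taken directly from the abstract):

```python
# A 15 km swath sampled at 1 m GSD implies roughly 15,000 samples per
# push-broom line for each imaging channel.
swath_m = 15_000   # 15 km swath width
gsd_m = 1          # 1 m ground sample distance
print(swath_m // gsd_m)   # 15000 samples across track
```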

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong; An, Kwang-Ho; Sung, Chang-Hun; Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). An LRF disparity map can then be generated by interpolating the projected LRF points. In stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map; the disparity map resulting from this compensation process is the multi-sensor fusion disparity map. Using it, the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
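
The projection step above is a rigid transform followed by a pinhole projection. A sketch using the abstract's notation (the convention x_cam = Φ X_lrf + Δ is an assumption):

```python
import numpy as np

def project_lrf_points(points_lrf, Phi, Delta, K):
    """points_lrf: (n, 3) LRF points; Phi (3, 3) rotation and Delta (3,)
    translation of the camera-LRF extrinsics; K (3, 3) camera matrix.
    Returns (n, 2) pixel coordinates."""
    pts_cam = points_lrf @ Phi.T + Delta     # LRF frame -> camera frame
    pts_hom = pts_cam @ K.T                  # camera frame -> homogeneous pixels
    return pts_hom[:, :2] / pts_hom[:, 2:3]  # perspective division
```

Interpolating the projected depths over the image grid then yields the LRF disparity map used for compensation.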

3D Depth Measurement System based on Parameter Calibration of the Multi-Sensors (실거리 파라미터 교정식 복합센서 기반 3차원 거리측정 시스템)

  • Kim, Jong-Man; Kim, Won-Sop; Hwang, Jong-Sun; Kim, Yeong-Min
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2006.05a / pp.125-129 / 2006
  • The depth measurement system with multiple sensors (laser, camera, mirror) has been analyzed and a parameter calibration technique has been proposed. In the proposed system, the laser beam is reflected onto the object by a rotating mirror, and the position of the laser spot is observed by the camera through the same mirror. The depth of the object point hit by the laser beam is computed from the pixel position on the CCD. Several internal and external parameters, such as the inter-pixel distance, the focal length, and the position and orientation of the system components, are involved in the depth measurement error. In this paper, an error sensitivity analysis of these parameters shows that the most important error sources are the angle of the laser beam and the inter-pixel distance.
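
In a simplified triangulation geometry (camera at the origin with focal length f, laser offset by baseline b, beam steered at angle alpha from the optical axis; the paper's mirror geometry adds further terms), depth follows directly from the spot's pixel position, which also makes the sensitivity to alpha and to the pixel pitch easy to see:

```python
import math

def depth_from_pixel(x_p, f, b, alpha):
    """x_p: spot position on the sensor (same units as f), b: baseline,
    alpha: beam angle from the optical axis. Returns depth Z."""
    return b * f / (x_p + f * math.tan(alpha))

# A tiny perturbation of the beam angle visibly shifts the estimated depth,
# consistent with the sensitivity result above (values are illustrative).
print(depth_from_pixel(0.002, f=0.016, b=0.3, alpha=0.100))
print(depth_from_pixel(0.002, f=0.016, b=0.3, alpha=0.101))
```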

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Im, Yi-ji; Choi, Dae-seon
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems used in autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance, and research on deep learning models based on the fusion of a camera and a LiDAR sensor is being actively conducted. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems have focused on suppressing obstacle detection by lowering the confidence score of the object recognition model, but such attacks work only against the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered; moreover, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to recognize that an attack is occurring at all. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR data. Attack performance experiments with scaling at various sizes induced fusion errors of more than 77% on average.
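
As a loosely hedged illustration of the input-manipulation idea (not the paper's algorithm; `calibration_model` is a hypothetical placeholder for a model such as LCCNet), uniformly rescaling the LiDAR points while leaving the paired image untouched perturbs the geometry the calibration model must align:

```python
import numpy as np

def scaled_point_cloud(points, scale):
    """points: (n, 3) LiDAR points. scale != 1 distorts the scene geometry
    while the paired camera image is left unmodified."""
    return points * scale

def misalignment_error(calibration_model, image, points, scale, T_true):
    """Crude attack metric: distance between the model's estimated extrinsic
    matrix and the ground-truth extrinsic T_true."""
    T_est = calibration_model(image, scaled_point_cloud(points, scale))
    return np.linalg.norm(T_est - T_true)
```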

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.; Hwang, H.
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.16-21 / 2003
  • In the agricultural field, machine vision systems have been widely used to automate inspection processes, especially quality grading. Though machine vision is very effective in quantifying geometrical quality factors, it has a deficiency in quantifying color information. This study was conducted to evaluate the color of beef using a machine vision system. Though measuring the color of beef with machine vision has the advantage of covering the whole lean-tissue area at once, compared to a colorimeter, it suffers from sensitivity to system components such as the type of camera, lighting conditions, and so on. The effect of the camera's color balancing control was investigated, and a color calibration process based on a multi-layer BP neural network was developed. The color calibration network model was trained using reference color patches and showed high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources. Results comparing the proposed calibration process with MLR-based calibration are also presented. The color calibration network was successfully applied to measuring the color of beef. However, it is suggested that the reflectance properties of the reference materials used for calibration and of the test materials should be considered to achieve more accurate color measurement.
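
A minimal sketch of the calibration idea, assuming a small multi-layer network that maps camera RGB readings of reference patches to colorimeter L*a*b* values (scikit-learn is used for brevity; the paper's back-propagation architecture and patch data are not specified here, so the data below is placeholder):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rgb_patches = rng.uniform(0, 1, size=(200, 3))   # placeholder patch RGB readings
lab_targets = rng.uniform(0, 1, size=(200, 3))   # placeholder colorimeter L*a*b*

# Train the RGB -> L*a*b* mapping on the reference patches.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(rgb_patches, lab_targets)

# Calibrated color for new camera measurements.
lab_pred = net.predict(rgb_patches[:5])
```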
