• Title/Summary/Keyword: multi-camera calibration

Application of Smartphone Camera Calibration for Close-Range Digital Photogrammetry (근접수치사진측량을 위한 스마트폰 카메라 검보정)

  • Yun, MyungHyun;Yu, Yeon;Choi, Chuluong;Park, Jinwoo
    • Korean Journal of Remote Sensing / v.30 no.1 / pp.149-160 / 2014
  • Studies on developing and using applications built around the sensors and devices embedded in smartphones have recently flourished at home and abroad. Prior to developing a smartphone-based photogrammetric system, this study analyzed the accuracy with which smartphone images can determine the three-dimensional position of close-range objects and evaluated their feasibility for this use. First, camera calibration was conducted for both autofocus and infinite-focus modes. Distortion models with balanced and unbalanced systems were used to determine the lens distortion coefficients, and calibration of 16 projects showed RMS errors of less than 1 mm from bundle adjustment in all cases. For the S and S2 models, the distortion curves for autofocus and infinite focus were almost identical, so the change in distortion pattern with focus mode was judged to be negligible. Comparisons between focus modes, and between the software packages used for multi-image processing, showed standard deviations of less than ±3 mm in all cases, indicating that neither the focus mode nor the distortion model has much effect on three-dimensional position determination. Lastly, checkpoint coordinates measured with a total station were taken as the most probable values and the coordinates determined in each project as the observed values, in order to compute residual statistics for each method. All projects showed relatively large errors in the Y direction, the direction of object distance, compared with the X and Z directions. In terms of the accuracy of three-dimensional position determination for close-range objects, the smartphone camera is therefore judged to be feasible for this use.
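
As a rough illustration of the chessboard-based lens-distortion calibration the abstract describes, the sketch below uses OpenCV's cv2.calibrateCamera; this is a generic procedure, not the paper's own workflow, and the pattern size, square size, and image folder are assumptions.

    import glob
    import cv2
    import numpy as np

    # Chessboard geometry (assumed): inner corners per row/column and square size in metres.
    pattern_size = (9, 6)
    square_size = 0.025

    # 3D object points of the chessboard corners on the Z = 0 plane.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points = [], []
    for path in glob.glob("calib_images/*.jpg"):          # hypothetical image folder
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                       (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # Estimate the intrinsic matrix and the radial/tangential distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                     gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
    print("distortion coefficients (k1, k2, p1, p2, k3):", dist.ravel())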

The compensation of kinematic differences of a robot using image information (화상정보를 이용한 로봇기구학의 오차 보정)

  • Lee, Young-Jin;Lee, Min-Chul;Ahn, Chul-Ki;Son, Kwon;Lee, Jang-Myung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1997.10a / pp.1840-1843 / 1997
  • The task environment of a robot is changing rapidly, and tasks themselves are becoming more complicated owing to current industrial trends toward multi-product, small-lot-size production. A user-friendly off-line programming (OLP) system is being developed to overcome the difficulty of teaching robot tasks. With the OLP system, operators can easily teach robot tasks off-line and verify their feasibility through simulation before on-line execution. However, some task errors are inevitable because of kinematic differences between the robot model in the OLP and the actual robot. Three calibration methods using image information are proposed to compensate for these kinematic differences: a relative position vector method, a three-point compensation method, and a baseline compensation method. A vision system with one monochrome camera is used in the calibration experiments.
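
The relative position vector method mentioned above amounts, in spirit, to correcting the OLP model by the offset observed with the camera. The sketch below is a deliberately simplified, hypothetical version of that idea rather than the authors' formulation; all positions are invented and assumed to be expressed in a common frame.

    import numpy as np

    def compensate_pose(p_goal_model, p_cam_target, p_model_target):
        """Shift a model-planned goal pose by the offset between the target position
        measured with the camera and the position predicted by the OLP robot model
        (a simplified relative-position-vector style correction)."""
        offset = p_cam_target - p_model_target     # kinematic difference observed via the image
        return p_goal_model + offset

    # Hypothetical positions in millimetres, all in one common frame.
    p_model_target = np.array([500.0, 120.0, 300.0])   # target position predicted by the OLP model
    p_cam_target   = np.array([503.2, 118.5, 301.1])   # target position measured by the camera
    p_goal_model   = np.array([520.0, 100.0, 310.0])   # goal pose planned in the OLP simulation

    print(compensate_pose(p_goal_model, p_cam_target, p_model_target))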

The Extract of 3D Road Centerline Using Video Camera (비디오 카메라를 이용한 3차원 도로중심선 추출)

  • Seo Dong-Ju;Lee Jong-Chool
    • International Journal of Highway Engineering / v.8 no.1 s.27 / pp.65-75 / 2006
  • With the development of computer technology, the fourth generation of photogrammetry, digital photogrammetry, has progressed favorably. In particular, the use of a digital video camera is very practical and economical, even for non-experts. For roads, which are central facilities of national industry, this method has been used to acquire road information for safety diagnosis and maintenance. In this study, three-dimensional position information of a road centerline was extracted using a practical and economical digital video camera. The resulting data can serve as a basic source for road information projects.
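
Determining three-dimensional centerline coordinates from video frames ultimately rests on triangulating matched image points between calibrated frames. The sketch below shows generic two-view triangulation with OpenCV's cv2.triangulatePoints; the intrinsics, baseline, and point correspondences are invented for illustration and do not come from the paper.

    import cv2
    import numpy as np

    # Hypothetical calibrated projection matrices of two video frames, P = K [R | t].
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # ~1 m baseline

    # Matched image coordinates of centerline points in both frames (2 x N, invented).
    pts1 = np.array([[620.0, 700.0, 780.0], [400.0, 380.0, 360.0]])
    pts2 = np.array([[310.0, 420.0, 520.0], [400.0, 380.0, 360.0]])

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4 x N result
    X = (X_h[:3] / X_h[3]).T                          # 3D centerline points, one per row
    print(X)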

A 3D Face Modeling Method Using Region Segmentation and Multiple light beams (지역 분할과 다중 라이트 빔을 이용한 3차원 얼굴 형상 모델링 기법)

  • Lee, Yo-Han;Cho, Joo-Hyun;Song, Tai-Kyong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.6 / pp.70-81 / 2001
  • This paper presents a 3D face modeling method using a CCD camera and a projector (an LCD or slide projector). The camera faces the human face and the projector casts white stripe patterns onto it. The 3D shape of the face is extracted from the spatial and temporal locations of the stripe patterns in a series of image frames. The proposed method employs region segmentation and a multi-beam technique for efficient 3D modeling of the hair region and faster 3D scanning, respectively. Each image is segmented into face, hair, and shadow regions, which are processed independently to obtain optimal results for each region. The multi-beam method, which uses a number of equally spaced stripe patterns, reduces the total number of image frames and consequently the overall data acquisition time. Light beam calibration is adopted for efficient light-plane measurement that is not influenced by the direction (vertical or horizontal) of the stripe patterns. Experimental results show that the proposed method provides favorable 3D face modeling results, including the hair region.
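
In a stripe-projection setup like the one described, each surface point is recovered by intersecting the camera's viewing ray with a calibrated light plane. The sketch below shows only that geometric step, under an assumed pinhole model; the intrinsic matrix and plane parameters are hypothetical.

    import numpy as np

    def intersect_ray_plane(pixel, K, plane_n, plane_d):
        """Back-project an image pixel to a viewing ray through the camera centre and
        intersect it with a calibrated light plane n . X = d (pinhole model assumed)."""
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction, camera frame
        t = plane_d / (plane_n @ ray)                                 # scale that puts the point on the plane
        return t * ray                                                # 3D surface point

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])                   # hypothetical intrinsics
    plane_n = np.array([0.0, -0.2588, 0.9659])        # light-plane normal (assumed, unit length)
    plane_d = 0.45                                    # plane offset in metres (assumed)

    print(intersect_ray_plane((350, 260), K, plane_n, plane_d))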

Development of Multi-Laser Vision System For 3D Surface Scanning (3 차원 곡면 데이터 획득을 위한 멀티 레이져 비젼 시스템 개발)

  • Lee, J.H.;Kwon, K.Y.;Lee, H.C.;Doe, Y.C.;Choi, D.J.;Park, J.H.;Kim, D.K.;Park, Y.J.
    • Proceedings of the KSME Conference / 2008.11a / pp.768-772 / 2008
  • Various scanning systems have been studied in many industrial areas to acquire range data or to reconstruct explicit 3D models. Optical technology is now widely used because it is non-contact and highly accurate. This paper describes a 3D laser scanning system developed to reconstruct the 3D surface of a large-scale object such as a curved plate of a ship hull. The scanning system comprises four parallel laser vision modules based on a triangulation technique. For the multi-laser vision modules, a calibration method based on a least-squares technique is applied. For global scanning, an effective method is presented that avoids the difficult problem of matching the scanning results of the individual cameras. A minimal image processing algorithm and a robot-based calibration technique are also applied. A prototype was implemented for testing.
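
The least-squares calibration mentioned above can be pictured as fitting a plane to 3D points observed on each laser sheet. The sketch below is a minimal plane fit of the form z = ax + by + c with numpy; the calibration points are invented and the paper's actual parameterisation may well differ.

    import numpy as np

    def fit_plane_lsq(points):
        """Least-squares fit of a plane z = a*x + b*y + c to 3D calibration points."""
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs                                  # (a, b, c)

    # Hypothetical points measured on one laser plane during calibration (metres).
    pts = np.array([[0.00, 0.00, 0.50],
                    [0.10, 0.00, 0.52],
                    [0.00, 0.10, 0.49],
                    [0.10, 0.10, 0.51],
                    [0.20, 0.05, 0.53]])
    print(fit_plane_lsq(pts))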

Calibration of a UAV Based Low Altitude Multi-sensor Photogrammetric System (UAV기반 저고도 멀티센서 사진측량 시스템의 캘리브레이션)

  • Lee, Ji-Hun;Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.1 / pp.31-38 / 2012
  • The geo-referencing accuracy of the images acquired by a UAV-based multi-sensor system is affected by the accuracy of the mounting parameters describing the relationship between the camera and the GPS/INS system, as well as by the performance of the GPS/INS system itself. Accurate estimation of the mounting parameters of a multi-sensor system is therefore important. We are currently developing a UAV-based low-altitude multi-sensor system that can monitor target areas in real time for rapid response to emergency situations such as natural disasters and accidents. In this study, we propose a system calibration method for estimating the mounting parameters of such a multi-sensor system. We also generate simulation data based on the sensor specifications of our system and, by applying the proposed method to the simulated data, derive an effective flight configuration and the number of ground control points required for accurate and efficient system calibration. The experimental results indicate that the proposed method can estimate accurate mounting parameters using five or more ground control points and a flight configuration composed of six strips. In the near future, we plan to estimate the mounting parameters of our system using the proposed method and to evaluate the geo-referencing accuracy of the acquired sensory data.
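
The mounting parameters discussed above are typically a lever-arm offset and a boresight rotation between the camera and the GPS/INS. The sketch below shows one common way such parameters enter a direct-georeferencing equation; both the model and the numbers are assumptions for illustration, not the calibration method proposed in the paper.

    import numpy as np

    def georeference(x_cam, R_boresight, lever_arm, R_ins, X_gps):
        """Transform a point given in the camera frame into ground coordinates using the
        mounting parameters: X_ground = X_gps + R_ins (lever_arm + R_boresight x_cam)."""
        return X_gps + R_ins @ (lever_arm + R_boresight @ x_cam)

    # Hypothetical values at one exposure time.
    x_cam  = np.array([0.2, -0.1, 50.0])              # scaled image ray / point in the camera frame
    R_bs   = np.eye(3)                                # boresight rotation (assumed aligned)
    a_la   = np.array([0.05, 0.00, -0.10])            # lever arm between GPS antenna and camera (m)
    R_ins  = np.eye(3)                                # INS attitude matrix
    X_gps  = np.array([200000.0, 450000.0, 150.0])    # GPS position in the mapping frame

    print(georeference(x_cam, R_bs, a_la, R_ins, X_gps))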

Adaptive illumination change compensation method for multi-view video coding (다시점 비디오 부호화를 위한 적응적인 조명변화 보상 방법)

  • Hur, Jae-Ho;Cho, Suk-Hee;Hur, Nam-Ho;Kim, Jin-Woong;Lee, Yung-Lyul
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.407-419 / 2006
  • In this paper, an adaptive illumination change compensation method is proposed for multi-view video coding. In multi-view video, illumination changes can occur because of physically imperfect camera calibration and the different positions and orientations of the cameras, among other factors. These characteristics degrade the performance of multi-view video coding that uses inter-view prediction, in which pictures obtained from neighboring views serve as references. With the proposed method, the compression ratio of multi-view video coding is increased, and a PSNR (Peak Signal-to-Noise Ratio) improvement of 0.1~0.6 dB was obtained compared with coding without the proposed method.
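
One simple way to see the idea behind illumination-change compensation in inter-view prediction is mean-removed block matching: subtracting each block's mean (DC) before computing the matching cost so that a brightness offset between views does not dominate it. The sketch below illustrates that simplified idea only; it is not the adaptive scheme proposed in the paper.

    import numpy as np

    def sad_mean_removed(cur_block, ref_block):
        """Mean-removed SAD between two blocks; returns the cost and the DC offset
        that an encoder would compensate for (and signal) separately."""
        dc_cur, dc_ref = cur_block.mean(), ref_block.mean()
        cost = np.abs((cur_block - dc_cur) - (ref_block - dc_ref)).sum()
        return cost, dc_cur - dc_ref

    # Hypothetical 8x8 blocks from neighbouring views that differ only by a brightness offset.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 200, (8, 8)).astype(np.float64)
    cur = ref + 15.0                                   # same content, brighter view

    print(sad_mean_removed(cur, ref))                  # cost ~0, offset 15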

Calibrating Stereoscopic 3D Position Measurement Systems Using Artificial Neural Nets (3차원 위치측정을 위한 스테레오 카메라 시스템의 인공 신경망을 이용한 보정)

  • Do, Yong-Tae;Lee, Dae-Sik;Yoo, Seog-Hwan
    • Journal of Sensor Science and Technology / v.7 no.6 / pp.418-425 / 1998
  • Stereo cameras are the most widely used sensing systems for automated machines, including robots, to interact with their three-dimensional (3D) working environments. The position of a target point in 3D world coordinates can be measured with stereo cameras, and camera calibration is an important preliminary step for this task. Existing camera calibration techniques can be classified into two large categories: linear and nonlinear. Linear techniques are simple but somewhat inaccurate, while nonlinear ones require a modeling process to compensate for lens distortion and a rather complicated procedure to solve the nonlinear equations. In this paper, a method employing a neural network is described that tackles the problems that arise when existing techniques are applied, and the results are reported. In particular, it is shown experimentally that by using the function approximation capability of a multi-layer neural network, trained with the back-propagation (BP) algorithm to learn the error pattern of a linear technique, the measurement accuracy can be increased simply and efficiently.
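
A minimal sketch of the approach described above, with synthetic data: a multi-layer perceptron is trained on the residuals between a linear calibration's 3D estimates and reference coordinates, and the learned correction is added to new measurements. scikit-learn's MLPRegressor stands in here for the back-propagation network used in the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic data: 3D positions from a linear (DLT-style) stereo calibration and the
    # corresponding reference positions, which differ by a smooth systematic error.
    rng = np.random.default_rng(1)
    X_linear = rng.random((200, 3)) * 100.0                        # linear-method estimates (mm)
    X_ref = X_linear + 0.02 * X_linear + np.sin(X_linear / 20.0)   # reference coordinates

    # Train an MLP (back-propagation) to learn the error pattern of the linear technique.
    mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    mlp.fit(X_linear, X_ref - X_linear)                            # target = residual error

    # Corrected measurement = linear estimate + predicted residual.
    X_new = np.array([[50.0, 25.0, 75.0]])
    print(X_new + mlp.predict(X_new))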

Moving object segmentation and tracking using feature based motion flow (특징 기반 움직임 플로우를 이용한 이동 물체의 검출 및 추적)

  • 이규원;김학수;전준근;박규태
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.8 / pp.1998-2009 / 1998
  • An effective algorithm is proposed for tracking rigid or non-rigid moving objects; it segments locally moving parts from an image sequence in the presence of background motion caused by camera movement, predicts their direction of motion, and tracks the objects. It requires neither camera calibration nor knowledge of the camera's installed position. To segment a moving object, feature points that configure the shape of the moving object are first selected, a feature flow field composed of the motion vectors of these feature points is computed, and the moving object(s) are segmented by clustering the feature flow field in a multi-dimensional feature space. We also propose IRMAS, an efficient algorithm that finds the convex hull in order to construct the shape of the moving object(s) from the clustered feature points. Finally, to robustly track objects whose movement characteristics cause abrupt changes in their trajectories, an improved order-adaptive lattice-structured linear predictor is used.
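
A rough sketch of the pipeline outlined above, built from standard OpenCV pieces used as stand-ins: sparse feature flow from Lucas-Kanade tracking, k-means clustering of the flow field, and a convex hull over the clustered points in place of the paper's IRMAS algorithm. The file names and parameter values are assumptions.

    import cv2
    import numpy as np

    prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical consecutive frames
    curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # 1) Select feature points and compute their motion vectors (the feature flow field).
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    good0, good1 = p0[status == 1], p1[status == 1]
    flow = good1 - good0

    # 2) Cluster position + flow; treat the dominant cluster as background motion and the
    #    minority cluster as a moving object (a k-means stand-in for the paper's clustering
    #    in a multi-dimensional feature space).
    samples = np.hstack([good0, flow]).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(samples, 2, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    object_label = 0 if (labels == 0).sum() < (labels == 1).sum() else 1

    # 3) Approximate the object's shape by the convex hull of its clustered feature points.
    object_pts = good0[labels.ravel() == object_label]
    hull = cv2.convexHull(object_pts)
    print(hull.reshape(-1, 2))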

Real-Time Compressed Video Acquisition System for Stereo 360 VR (Stereo 360 VR을 위한 실시간 압축 영상 획득 시스템)

  • Choi, Minsu;Paik, Joonki
    • Journal of Broadcast Engineering / v.24 no.6 / pp.965-973 / 2019
  • In this paper, a real-time stereo 4K@60fps 360 VR video capture system is designed that consists of video stream capture, video encoding, and stitching modules. The system produces stereo 4K@60fps 360 VR video by stitching, in real time, six 2K@60fps streams captured from six cameras through HDMI interfaces. In the video capture phase, video is captured from each camera in real time using multiple threads. In the video encoding phase, raw frame memory transfer and parallel encoding are used to reduce the resource usage of data transmission between the video capture and video stitching modules. In the video stitching phase, real-time stitching is achieved by preprocessing the stitching calibration.
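
A minimal sketch of the multi-threaded capture stage described above, with one thread per camera pushing frames into a shared queue; the device indices, resolution, and queue size are assumptions, and the encoding and stitching stages are omitted.

    import queue
    import threading
    import cv2

    def capture_worker(cam_index, frame_queue, stop_event):
        """Grab frames from one camera in its own thread so several inputs can be read concurrently."""
        cap = cv2.VideoCapture(cam_index)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)      # assumed 2K-class input per camera
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
        while not stop_event.is_set():
            ok, frame = cap.read()
            if ok:
                frame_queue.put((cam_index, frame))  # blocks if the consumer falls behind
        cap.release()

    stop = threading.Event()
    frames = queue.Queue(maxsize=64)
    threads = [threading.Thread(target=capture_worker, args=(i, frames, stop), daemon=True)
               for i in range(6)]                    # six cameras, as in the described setup
    for t in threads:
        t.start()

    # A consumer (encoder / stitcher) would pull frames here, e.g.:
    # cam_id, frame = frames.get()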