• Title/Summary/Keyword: extrinsic calibration

Search results: 40

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society / v.7 no.4 / pp.63-70 / 2007
  • Since a fisheye lens has a wide field of view, it can capture the scene and its illumination in all directions from far fewer omnidirectional images. Owing to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that takes the inlier distribution into account. First, a one-parameter non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute the essential matrix of the camera under unknown motions and then determine the camera information, rotation and translation (a sketch of this step follows the entry). Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we achieve a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation.

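The pose-recovery step described above is standard epipolar geometry. Below is a minimal Python/OpenCV sketch of it, assuming pinhole-style intrinsics K; the function name, RANSAC settings, and the standard-deviation proxy for the paper's inlier-distribution measure are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate relative rotation/translation from matched points.

    pts1, pts2: Nx2 float arrays of correspondences; K: 3x3 intrinsics.
    """
    # Robustly estimate the essential matrix with RANSAC.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into rotation R and translation direction t.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Crude stand-in for the paper's inlier-distribution measure:
    # the standard deviation of inlier positions across the image.
    spread = pts1[mask.ravel() == 1].std(axis=0)
    return R, t, spread
```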

3D Rigid Body Tracking Algorithm Using 2D Passive Marker Image (2D 패시브마커 영상을 이용한 3차원 리지드 바디 추적 알고리즘)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.587-588 / 2022
  • In this paper, we propose a method for tracking a rigid body in 3D space using 2D passive-marker images from multiple motion-capture cameras. First, a calibration with a chessboard is performed to obtain the intrinsic parameters of each camera; in a second calibration step, a triangular structure carrying three markers is moved so that all cameras can observe it, and the data accumulated for each frame are used to correct and update the relative position information between the cameras. The 3D coordinates of the three markers are then restored by converting each camera's coordinate system into the 3D world coordinate system (see the triangulation sketch after this entry), the distance between each pair of markers is calculated, and the difference from the actual distance is compared. As a result, an average error within 2 mm was measured.

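A minimal sketch of the triangulation and distance check described above, assuming the two calibration steps already produced 3x4 projection matrices for a camera pair; the names, array shapes, and the commented evaluation values are assumptions.

```python
import cv2
import numpy as np

def marker_positions(P1, P2, uv1, uv2):
    """Restore 3-D marker positions from two calibrated views.

    P1, P2: 3x4 projection matrices K[R|t] from the calibration steps;
    uv1, uv2: 2xN arrays of matching marker centroids in each image.
    """
    X = cv2.triangulatePoints(P1, P2, uv1.astype(float), uv2.astype(float))
    return (X[:3] / X[3]).T   # homogeneous -> Euclidean, Nx3

# Evaluation as in the abstract: compare restored inter-marker distances
# with the known distances on the triangular rig (values are assumed).
# X = marker_positions(P1, P2, uv1, uv2)
# err = abs(np.linalg.norm(X[0] - X[1]) - true_dist_01)
```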

Sampling-based Control of SAR System Mounted on A Simple Manipulator (간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법)

  • Lee, Ahyun;Lee, Joo-Ho;Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.356-367 / 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with projector-based AR techniques, is unique in its ability to expand the user-interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters with a robot-kinematics method, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits the applicability of conventional kinematics methods. In this paper, we propose a data-driven kinematic control method for a UCR-based RSAR system. The proposed method uses a pre-sampled set of camera calibrations acquired at a sufficient number of kinematic configurations over fixed joint domains; the sampled set is then compactly represented as a set of B-spline surfaces (a sketch follows the entry). The proposed method has two merits. First, it does not require any kinematic model such as link lengths or joint orientations. Second, the computation is simple, since it merely evaluates a few polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate the results for an experimental RSAR system with a PCU on a simple pan-tilt arm.
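
A minimal sketch of the sampled-surface idea, with SciPy's RectBivariateSpline standing in for the paper's B-spline surfaces; the joint grids and sample values below are placeholders, not data from the paper.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Sampled joint domain of the pan-tilt arm (grids are assumptions).
pan = np.linspace(-60.0, 60.0, 13)    # degrees
tilt = np.linspace(-30.0, 30.0, 7)    # degrees

# One calibrated extrinsic component per grid point, e.g. the PCU's
# x-translation; random numbers stand in for real calibration data.
tx_samples = np.random.default_rng(0).normal(size=(pan.size, tilt.size))

# Fit a smooth spline surface over the joint domain once, offline.
tx_surface = RectBivariateSpline(pan, tilt, tx_samples)

# Online control: evaluating the surface replaces forward kinematics,
# so no link lengths, joint axes, or Jacobians are needed.
tx_at_pose = tx_surface(12.5, -4.0)[0, 0]
```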

Simultaneous Measurement of Strain and Temperature During and After Cure of Unsymmetric Composite Laminate Using Fiber Optic Sensors (비대칭 복합적층판의 성형시 및 성형후 광섬유 센서를 이용한 변형률 및 온도의 동시 측정)

  • 강동훈;강현규;김대현;방형준;홍창선;김천곤
    • Proceedings of the Korean Society For Composite Materials Conference / 2001.05a / pp.244-249 / 2001
  • In this paper, we present the simultaneous measurement of fabrication strain and temperature during and after the cure of an unsymmetric composite laminate using fiber optic sensors. Fiber Bragg grating / extrinsic Fabry-Perot interferometric (FBG/EFPI) hybrid sensors are used to measure these quantities. The characteristic matrix of the sensor is derived analytically, so measurements can be made without sensor calibration (a worked two-by-two inversion follows the entry). A wavelength-swept fiber laser is used as the light source. FBG/EFPI sensors are embedded in a graphite/epoxy unsymmetric cross-ply composite laminate in different directions and at different locations. We measure the fabrication strains and temperatures in real time at two points of the laminate during the cure process in an autoclave. The thermal strains and temperatures of the fabricated laminate are also measured in a thermal chamber. Through these experiments, we provide a basis for the efficient smart processing of composites and characterize the thermal behavior of the unsymmetric cross-ply laminate.

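How a characteristic matrix separates strain from temperature can be sketched as a 2x2 linear solve; the sensitivity values below are placeholders, not the matrix the paper derives.

```python
import numpy as np

# Hypothetical characteristic matrix: sensitivities of the FBG wavelength
# shift and the EFPI cavity-length change to strain and temperature.
# These numbers are placeholders, not values from the paper.
K = np.array([[1.2e-3, 1.0e-2],    # FBG:  d_lambda = K11*eps + K12*dT
              [1.0e-3, 5.0e-4]])   # EFPI: d_cavity = K21*eps + K22*dT

signals = np.array([0.35, 0.12])   # measured shifts from the two sensors
strain, temp_change = np.linalg.solve(K, signals)
```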

Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner (카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종)

  • Kim, Hyoung-Rae;Cui, Xue-Nan;Lee, Jae-Hong;Lee, Seung-Jun;Kim, Hakil
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.78-86 / 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between itself and a moving person. Maintaining this distance consists of two parts: object tracking and person-following. Object tracking consists of particle filtering and online learning using shape features extracted from the image. A monocular camera easily fails to track a person because of its narrow field of view and its sensitivity to illumination changes, and it has therefore been used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors (see the projection sketch after this entry), the proposed method demonstrates its robustness in tracking and following a person, with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object passes between the robot and the person.
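
The "geometric relation between the differently oriented sensors" amounts to an extrinsic transform followed by a camera projection; a minimal sketch, with the function name and parameters as assumptions:

```python
import numpy as np

def laser_to_pixels(pts_laser, R, t, K):
    """Project laser-scanner hits (as 3-D points in the scanner frame)
    into the camera image using the sensors' geometric relation.

    R (3x3), t (3,): extrinsics from scanner frame to camera frame;
    K (3x3): camera intrinsics.
    """
    pts_cam = pts_laser @ R.T + t     # scanner frame -> camera frame
    uv = pts_cam @ K.T                # pinhole projection
    return uv[:, :2] / uv[:, 2:3]     # normalize to pixel coordinates
```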

Head Motion Detection and Alarm System during MRI scanning (MRI 영상획득 중의 피험자 움직임 감지 및 알림 시스템)

  • Pae, Chong-Won;Park, Hae-Jeong;Kim, Dae-Jin
    • Investigative Magnetic Resonance Imaging / v.16 no.1 / pp.55-66 / 2012
  • Purpose: During brain MRI scanning, a subject's head motion can adversely affect the MR images. To minimize image distortion caused by head movement, we developed an optical tracking system that detects the 3D movement of subjects. Materials and Methods: The system consists of two CCD cameras, two infrared illuminators, reflective sphere-type markers, and a frame grabber in a desktop PC. Using calibration, the procedure that computes the intrinsic and extrinsic parameters of each camera, and triangulation (see the triangulation sketch after this entry), the system was designed to recover the 3D coordinates of the subject's head movement. We evaluated the accuracy of the reconstructed 3D marker positions both on a test board and in real MRI scans. Results: The stereo system computed the 3D positions of the markers accurately, both for the test board and for a subject wearing glasses with attached reflective markers who was asked to make regular head motions during scanning. The head-motion tracking did not affect the resulting MR images, even under varying magnetic gradients and multiple RF pulses. Conclusion: The system can detect a subject's head motion in real time. Using it, the MRI operator can decide whether to stop or intervene in the acquisition to prevent further image distortion.
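
A minimal sketch of two-camera triangulation via the direct linear transform (DLT), assuming the calibration step yielded a 3x4 projection matrix per CCD camera; this is the generic method, not necessarily the paper's exact formulation.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one reflective marker seen by two
    calibrated cameras; P1, P2 are their 3x4 projection matrices,
    uv1, uv2 the marker's pixel coordinates in each image."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)       # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]               # 3-D marker position
```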

Papers : Simultaneous Monitoring of Strain and Temperature During and After Cure of Unsymmetric Cross - ply Composite Laminate Using Fiber Optic Sensors (논문 : 비대칭 직교적층 복합재료 적층판의 성형시 및 성형후 광섬유 센서를 이용한 변형률 및 온도의 동시 모니터링)

  • Gang, Hyeon-Gyu;Gang, Dong-Hun;Hong, Chang-Seon;Kim, Cheon-Gon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.30 no.1 / pp.49-55 / 2002
  • In this paper, we present the simultaneous monitoring of strain and temperature during and after the cure of an unsymmetric composite laminate using fiber optic sensors. Fiber Bragg grating / extrinsic Fabry-Perot interferometric (FBG/EFPI) hybrid sensors are used to measure these quantities. The characteristic matrix of the sensor is derived analytically, so measurements can be made without sensor calibration (a plausible form of this relation is shown after the entry). A wavelength-swept fiber laser is used as the light source. Two FBG/EFPI sensors are embedded in a graphite/epoxy unsymmetric cross-ply composite laminate in different directions and at different locations. We monitor the fabrication strains and temperatures in real time at two points of the laminate during the cure process in an autoclave. The thermal strains and temperatures of the fabricated laminate are also measured in a thermal chamber. Through these experiments, we provide a basis for the efficient smart processing of composites and characterize the thermal behavior of the unsymmetric cross-ply laminate.
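
A plausible form of the characteristic-matrix relation mentioned above (the notation is an assumption; the paper derives the entries analytically, which is why no per-sensor calibration is needed):

```latex
\begin{pmatrix}\Delta\lambda_{\mathrm{FBG}}\\ \Delta\lambda_{\mathrm{EFPI}}\end{pmatrix}
=
\begin{pmatrix}K^{\mathrm{FBG}}_{\varepsilon} & K^{\mathrm{FBG}}_{T}\\
               K^{\mathrm{EFPI}}_{\varepsilon} & K^{\mathrm{EFPI}}_{T}\end{pmatrix}
\begin{pmatrix}\varepsilon\\ \Delta T\end{pmatrix}
\quad\Longrightarrow\quad
\begin{pmatrix}\varepsilon\\ \Delta T\end{pmatrix}
= K^{-1}
\begin{pmatrix}\Delta\lambda_{\mathrm{FBG}}\\ \Delta\lambda_{\mathrm{EFPI}}\end{pmatrix}
```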

3D Image Construction Using Color and Depth Cameras (색상과 깊이 카메라를 이용한 3차원 영상 구성)

  • Jung, Ha-Hyoung;Kim, Tae-Yeon;Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.1 / pp.1-7 / 2012
  • This paper presents a method for 3D image construction using a hybrid (color and depth) camera system, in which the drawbacks of each camera are compensated for by the other. Prior to image generation, the intrinsic and extrinsic parameters of each camera are extracted through experiments. The geometry between the two cameras is established with these parameters so as to register the color and depth images. After this preprocessing step, the relation between the raw depth value and metric distance is derived experimentally as a simple linear function, and the 3D image is constructed by coordinate transformations of the registered images (see the back-projection sketch after this entry). The scheme has been realized with the Microsoft hybrid camera system Kinect, and experimental results of 3D imaging and distance measurement are given to evaluate the method.
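
A minimal sketch of the construction step: back-project the depth image with the depth camera's intrinsics, apply the experimentally fitted linear depth-to-distance function, and move the points into the color camera's frame. The function name and the coefficients a, b are assumptions.

```python
import numpy as np

def depth_to_color_points(depth, K_d, R, t, a, b):
    """Back-project a depth image to 3-D and express the points in the
    color camera's frame so they can be matched with the color image.

    depth: HxW raw depth image; K_d: depth-camera intrinsics;
    (R, t): extrinsics depth -> color;
    a, b: linear depth-to-distance fit, distance = a * raw_depth + b.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = a * depth + b                            # raw value -> metres
    x = (u - K_d[0, 2]) * z / K_d[0, 0]          # pinhole back-projection
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts @ R.T + t                         # depth -> color frame
```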

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect, an RGB-depth camera, to building a 3D image and spatial information of a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters, namely focal length, principal point, and distortion coefficients, are calculated through a checkerboard experiment (see the calibration sketch after this entry). The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projection-space images are converted to 3D images, yielding spatial information on the basis of the depth and RGB information. The measurement is verified by comparison with the lengths and locations in the 2D images of the target structure.
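
A minimal sketch of checkerboard intrinsic calibration with OpenCV; the board geometry and file names are assumptions. cv2.calibrateCamera returns the focal length and principal point (inside K) and the distortion coefficients mentioned above.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry is an assumption: 9x6 inner corners, 25 mm squares.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

obj_pts, img_pts = [], []
for fname in sorted(glob.glob("checker_*.png")):   # hypothetical files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds focal length and principal point; dist the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```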

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.26-34 / 2008
  • Traversability information and path planning are essential for UGV (Unmanned Ground Vehicle) navigation, and such information can be obtained by analyzing 3D terrain. In this paper, we present a method of 3D terrain modeling that combines color information from a camera, precise distance information from a 2D laser range finder (LRF), and wheel-encoder information from a mobile robot, using relatively little data. We also present a method of 3D terrain modeling from GPS/IMU (Inertial Measurement Unit) and 2D LRF data. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We set up the fused system on a mobile robot and experiment in an indoor environment, and we run an outdoor experiment to reconstruct 3D terrain with the 2D LRF and GPS/IMU. The resulting terrain model is point-based and requires a large amount of data, so to reduce the amount of data we use a cubic-grid-based model instead of the point-based model (see the grid-reduction sketch after this entry).
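
A minimal sketch of the cubic-grid reduction: replace the point-based model with one representative point per occupied grid cell. The function name and the 0.1 m cell size are assumptions.

```python
import numpy as np

def cubic_grid_model(points, cell=0.1):
    """Convert a point-based terrain model into a cubic-grid model by
    keeping one representative point (the centroid) per occupied cell.

    points: Nx3 array of fused (LRF + camera) terrain points;
    cell: grid edge length in metres (assumed value).
    """
    keys = np.floor(points / cell).astype(np.int64)   # cell index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    # Average the points that fall into each occupied cell.
    centroids = np.stack(
        [np.bincount(inv, weights=points[:, d]) / counts for d in range(3)],
        axis=1)
    return centroids
```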