• Title/Summary/Keyword: Camera coordinate

Camera Calibration Using the Fuzzy Model (퍼지 모델을 이용한 카메라 보정에 관한 연구)

  • 박민기
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.5 / pp.413-418 / 2001
  • In this paper, we propose a new camera calibration method based on a fuzzy model instead of the physical camera model used by conventional methods. Camera calibration determines the correlation between camera image coordinates and real-world coordinates. Unlike conventional methods, the method using a fuzzy model cannot estimate the physical camera parameters. However, the proposed method is very simple and efficient because it determines the correlation between camera image coordinates and real-world coordinates, which is the objective of camera calibration, without any restriction. With calibration points acquired from experiments, 3-D real-world coordinates and 2-D image coordinates are estimated using the fuzzy modeling method, and the experimental results demonstrate the validity of the proposed method.
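
The data-driven mapping the abstract describes can be illustrated with a minimal kernel-weighted sketch in the spirit of a zero-order Takagi-Sugeno fuzzy model; the calibration points, Gaussian membership width, and single world axis below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch: map a 2-D image coordinate to a world coordinate through
# Gaussian membership functions over calibration points (a zero-order
# Takagi-Sugeno style rule base). All numeric values are assumptions.

# Calibration pairs: image (u, v) -> world x (one axis for brevity).
img_pts = np.array([[100.0, 100.0], [200.0, 100.0], [300.0, 100.0]])
world_x = np.array([0.0, 50.0, 100.0])
sigma = 60.0                      # membership width (assumed)

def fuzzy_map(uv):
    # Gaussian membership of the query point in each calibration rule.
    d2 = np.sum((img_pts - uv) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Weighted average of the rule consequents (defuzzification).
    return float(np.dot(w, world_x) / np.sum(w))

x = fuzzy_map(np.array([150.0, 100.0]))   # query between the first two points
```

The point of the sketch is that no physical camera parameters appear anywhere: the rule base alone interpolates between calibration pairs.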

A Study on Camera Calibration Using Artificial Neural Network (신경망을 이용한 카메라 보정에 관한 연구)

  • Jeon, Kyong-Pil;Woo, Dong-Min;Park, Dong-Chul
    • Proceedings of the KIEE Conference / 1996.07b / pp.1248-1250 / 1996
  • The objective of camera calibration is to obtain the correlation between camera image coordinates and 3-D real-world coordinates. Most calibration methods are based on a camera model consisting of physical parameters of the camera, such as position, orientation, and focal length, in which case camera calibration means the process of computing those parameters. In this research, we suggest a new approach that is very efficient because the artificial neural network (ANN) model implicitly contains all the physical parameters, some of which are very difficult to estimate with existing calibration methods. Implicit camera calibration, i.e., calibrating a camera without explicitly computing its physical parameters, can be used for both 3-D measurement and generation of image coordinates. By training on calibration points of different heights, we can find the perspective projection point, which can be used to reconstruct 3-D real-world coordinates of arbitrary height and the image coordinates of arbitrary 3-D real-world coordinates. An experimental comparison of our method with the well-known two-stage method of Tsai is made to verify the effectiveness of the proposed method.
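
Implicit calibration of the kind described above can be sketched as a small regression network that learns the image-to-world mapping directly from calibration pairs; the synthetic linear mapping, network size, and training settings below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch of implicit calibration with a small neural network: learn
# the image-(u,v) -> world-(x,y) mapping directly from calibration pairs,
# without recovering any physical camera parameters.

rng = np.random.default_rng(0)
uv = rng.uniform(0.0, 1.0, size=(200, 2))        # image coords (normalized)
xy = uv @ np.array([[0.8, 0.1], [0.2, 0.6]])     # synthetic "world" coords

# One hidden tanh layer trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(5000):
    h = np.tanh(uv @ W1 + b1)
    err = h @ W2 + b2 - xy                  # prediction residual
    gW2 = h.T @ err / len(uv); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = uv.T @ dh / len(uv); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(uv @ W1 + b1) @ W2 + b2 - xy) ** 2))
```

As in the abstract, the learned weights implicitly encode the camera geometry; no parameter of the network corresponds to a named physical quantity.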

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM (사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법)

  • Lee, Jae-Min;Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.33-40 / 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature utilized for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed by the Coupled Line Camera algorithm. In order to fully reconstruct a rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured using a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be deduced from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates can restore the relative pose between the camera and the feature by obtaining the homography. In order to evaluate the performance, we analyzed the result of the proposed method against its reference pose in the Gazebo robot simulator.
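
The homography-based pose recovery mentioned at the end of the abstract can be sketched generically; the DLT homography and planar decomposition below are a standard stand-in (the paper's Coupled Line Camera reconstruction itself is not reproduced), and normalized image coordinates (K = I) plus the synthetic pose are assumptions.

```python
import numpy as np

# Sketch: recover the relative pose between a camera and a rectangle of known
# size from its projected quadrilateral, via a DLT homography and the planar
# decomposition H ~ [r1 | r2 | t] for a plane at Z = 0.

def homography_dlt(src, dst):
    # Direct Linear Transform: two equations per correspondence,
    # null vector of the 8x9 system via SVD.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def pose_from_homography(H):
    # Normalize scale, fix the sign so the plane is in front of the camera,
    # then complete the rotation with r3 = r1 x r2.
    H = H / np.linalg.norm(H[:, 0])
    if H[2, 2] < 0:
        H = -H
    r1, r2, t = H[:, 0], H[:, 1], H[:, 2]
    return np.column_stack([r1, r2, np.cross(r1, r2)]), t

# Synthetic check: project a 1.0 x 0.5 rectangle with a known pose.
th = 0.3
R_true = np.array([[np.cos(th), 0, np.sin(th)],
                   [0, 1, 0],
                   [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([0.1, -0.2, 3.0])
corners = np.array([[0, 0], [1, 0], [1, 0.5], [0, 0.5]], dtype=float)
quad = []
for X, Y in corners:
    p = R_true @ np.array([X, Y, 0.0]) + t_true
    quad.append(p[:2] / p[2])
R_est, t_est = pose_from_homography(homography_dlt(corners, np.array(quad)))
```

With noise-free correspondences the decomposition returns the generating pose exactly, which is the property the SLAM front end relies on.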

A method for image processing by use of inertial data of camera

  • Kaba, K.;Kashiwagi, H.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1998.10a / pp.221-225 / 1998
  • This paper presents a method for recognizing the image of a tracked object by processing the image from a camera whose attitude is controlled in inertial space with an inertial coordinate system. In order to recognize an object, a pseudo-random M-array is attached to the object and observed by the camera, which is controlled on an inertial coordinate basis by an inertial stabilization unit. When the attitude of the camera changes, the observed image of the M-array is transformed by an affine transformation into the image in the inertial coordinate system. Taking the cross-correlation function between the affine-transformed image and the original image, we can recognize the object. As parameters of the camera attitude, we used the azimuth angle of the camera, detected by the gyroscope of an inertial sensor, and the elevation angle of the camera, calculated from the gravitational acceleration detected by a servo accelerometer.
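
The compensate-then-correlate idea can be sketched with a pure rotation as the affine transform; the random binary pattern below stands in for the pseudo-random M-array, and the rotation angle is an illustrative assumption.

```python
import numpy as np

# Sketch: undo a camera attitude change with an affine (here, pure rotation)
# transform and verify recognition by normalized cross-correlation.

rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, size=(31, 31)).astype(float)  # M-array stand-in

def rotate_nn(img, theta):
    # Nearest-neighbour rotation about the image centre (inverse mapping).
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(theta), np.sin(theta)
    xs_src = c * (xs - cx) + s * (ys - cy) + cx
    ys_src = -s * (xs - cx) + c * (ys - cy) + cy
    xi = np.clip(np.rint(xs_src), 0, w - 1).astype(int)
    yi = np.clip(np.rint(ys_src), 0, h - 1).astype(int)
    return img[yi, xi]

def ncc(a, b):
    # Normalized cross-correlation at zero lag.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

observed = rotate_nn(pattern, 0.4)        # attitude change of the camera
compensated = rotate_nn(observed, -0.4)   # affine compensation
```

After compensation the correlation peak with the stored pattern is restored, which is the recognition criterion the abstract describes.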

New Method of Visual Servoing using an Uncalibrated Camera and a Calibrated Robot

  • Morita, Masahiko;Shigeru, Uchikado;Yasuhiro, Osa
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2002.10a / pp.41.4-41 / 2002
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. We consider two coordinate systems, the world coordinate system and the camera coordinate system, and use a pinhole model for the camera. First of all, the essential notions are presented: epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint. These play an important role in designing the visual servoing in the later chapters. The statement of the problem is given. Provided two a priori...
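
The epipolar constraint the abstract leans on can be stated concretely for a pinhole pair: with relative motion (R, t) between the views, the essential matrix is E = [t]×R and corresponding normalized points satisfy x₂ᵀE x₁ = 0. The camera motion and 3-D point below are illustrative assumptions.

```python
import numpy as np

# Sketch of the epipolar constraint for two pinhole views with relative
# motion X2 = R @ X1 + t, so that x2^T E x1 = 0 with E = [t]_x R.

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

th = 0.3
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])   # second-view rotation
t = np.array([0.5, 0.1, 0.0])                  # second-view translation
E = skew(t) @ R                                # essential matrix

X = np.array([0.2, -0.1, 4.0])                 # a 3-D point, view-1 frame
x1 = X / X[2]                                  # normalized coords, view 1
p2 = R @ X + t
x2 = p2 / p2[2]                                # normalized coords, view 2
residual = float(x2 @ E @ x1)                  # epipolar constraint, ~0
```

The residual vanishing for every scene point is what lets the servo law be designed from image measurements alone.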

Coordinate Measuring Technique based on Optical Triangulation using the Two Images (두장의 사진을 이용한 광삼각법 삼차원측정)

  • 양주웅;이호재
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.76-80 / 2000
  • This paper describes a coordinate measuring technique based on optical triangulation using two images. To overcome the drawback of structured-light systems, which measure coordinates point by point, the light source is replaced by a CCD camera: pixels in the CCD camera are considered virtual light sources. The overall geometry including the two camera images is modeled, and from this geometry the formula for calculating the 3D coordinate of a specified point is derived. In short, the ray from a virtual light source is reflected at the measuring point and forms the corresponding image point in the other image. The validity of the formula is verified through simulation results. This method enables multiple points to be acquired from a single pair of photographs.
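
The core computation behind a two-image triangulation formula can be sketched as a standard linear (DLT) triangulation; the normalized pinhole cameras and stereo baseline below are illustrative assumptions, not the paper's derived geometry.

```python
import numpy as np

# Sketch: linear (DLT) triangulation of a 3-D point from its projections in
# two images, solving the homogeneous system built from u*(P[2].X) = P[0].X
# and v*(P[2].X) = P[1].X for both cameras.

def triangulate(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # dehomogenize

# Camera 1 at the origin; camera 2 translated along x (a stereo baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
x1 = X_true[:2] / X_true[2]
p2 = X_true + np.array([-0.5, 0.0, 0.0])
x2 = p2[:2] / p2[2]
X_est = triangulate(P1, P2, x1, x2)
```

Because every pixel pair yields one such system, a whole surface can be measured from a single pair of photographs, which is the advantage the abstract claims over point-by-point structured light.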

Camera Calibration with Two Calibration Planes and Oblique Coordinate Mapping (두 보정면과 사교좌표 매핑을 이용한 카메라 보정법)

  • Ahn, Jeong-Ho
    • Journal of the Korean Society for Precision Engineering / v.16 no.7 / pp.119-124 / 1999
  • A method to find the line-of-sight ray in space corresponding to a point in an image plane is presented. The line-of-sight ray is defined by two points: the intersections of the sight ray with the two calibration planes. Each intersection point is found by an oblique coordinate mapping between the image plane and the calibration plane in space. The proposed oblique coordinate mapping method has advantages over the transformation matrix method in required memory space and computation time.
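
The two-plane construction can be sketched directly: map the image point onto each calibration plane, then take the line through the two intersections. The oblique mapping is modeled here as an affine map with a non-orthogonal in-plane basis, and all numeric values are illustrative assumptions.

```python
import numpy as np

# Sketch: define the line of sight from its intersections with two
# calibration planes, as in the two-plane method.

def plane_point(uv, origin, eu, ev):
    # Oblique coordinates: plane origin plus uv-weighted (non-orthogonal)
    # in-plane basis vectors (a stand-in for the paper's mapping).
    return origin + uv[0] * eu + uv[1] * ev

uv = np.array([0.2, 0.3])                        # image-plane coordinates
near = plane_point(uv,
                   np.array([0.0, 0.0, 1.0]),
                   np.array([1.0, 0.1, 0.0]),
                   np.array([0.2, 1.0, 0.0]))
far = plane_point(uv,
                  np.array([0.0, 0.0, 2.0]),
                  np.array([1.3, 0.1, 0.0]),
                  np.array([0.2, 1.4, 0.0]))

def sight_ray(s):
    # Parametric line of sight through the two plane intersections.
    return near + s * (far - near)

point = sight_ray(2.0)    # a point beyond the far plane on the same ray
```

Only the two per-plane mappings need to be stored, which is where the memory and computation advantage over a full transformation matrix comes from.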

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.555-563 / 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multiple stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data obtained by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To satisfy these requirements, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system using three pairs of camera positions given by GPS and V-SLAM, respectively. The proposed method is composed of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by three camera positions, parallel; 2) transfer the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion mapping the second or third transferred position onto the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion directly transforms the local camera coordinate system into the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from the object to the cameras is proposed. This paper demonstrates the proposed method through a simulation and an experiment.
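
The four steps above can be sketched with rotation matrices in place of quaternions (the two formulations are equivalent for composing rotations); the three camera positions below are illustrative assumptions, generated with a known rotation so the recovery can be checked.

```python
import numpy as np

# Sketch of the four-step alignment: (1) rotate so the normals of the two
# three-point planes are parallel, (2) transfer the V-SLAM positions,
# (3) rotate about the common normal so one transferred point maps onto its
# GPS counterpart, (4) compose the two rotations.

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0.0]])

def rot_between(a, b):
    # Rodrigues rotation taking unit vector a onto unit vector b.
    v = np.cross(a, b)
    c = float(a @ b)
    if np.linalg.norm(v) < 1e-12:
        return np.eye(3)
    return np.eye(3) + skew(v) + skew(v) @ skew(v) / (1.0 + c)

def rot_about(axis, ang):
    # Axis-angle rotation (Rodrigues formula).
    K = skew(axis / np.linalg.norm(axis))
    return np.eye(3) + np.sin(ang) * K + (1.0 - np.cos(ang)) * K @ K

def normal_of(p):
    n = np.cross(p[1] - p[0], p[2] - p[0])
    return n / np.linalg.norm(n)

# Three camera positions in the V-SLAM frame, and the same positions in the
# GPS frame (generated with a known rotation for verification).
slam = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.2]])
R_true = rot_about(np.array([0.3, 1.0, 0.2]), 0.7)
gps = slam @ R_true.T

# Steps 1-2: align the plane normals, transfer the V-SLAM positions.
n = normal_of(gps)
R1 = rot_between(normal_of(slam), n)
p = R1 @ slam[1]
# Step 3: in-plane rotation about n mapping the second point onto its pair.
a = p - n * (n @ p)
b = gps[1] - n * (n @ gps[1])
ang = np.arctan2(n @ np.cross(a, b), float(a @ b))
# Step 4: compose the two rotations.
R_final = rot_about(n, ang) @ R1
```

With exact inputs the composed rotation maps all three V-SLAM positions onto their GPS counterparts, which is the property the paper's final quaternion provides.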

An Estimation Method for Location Coordinate of Object in Image Using Single Camera and GPS (단일 카메라와 GPS를 이용한 영상 내 객체 위치 좌표 추정 기법)

  • Seung, Teak-Young;Kwon, Gi-Chang;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.112-121 / 2016
  • ADAS (Advanced Driver Assistance Systems) and street-furniture data collection vehicles such as an MMS (Mobile Mapping System) require an object-location estimation method for recognizing the spatial information of objects in road images. Conventional methods, however, require an additional hardware module for gathering the spatial information of objects and have high computational complexity. In this paper, a position estimation scheme for objects in road images, such as the coordinates of a road sign in a single camera image, is proposed using the relationship between pixel size and object size in the real world. In this scheme, coordinate values and direction are used to obtain the coordinates of a road sign in images after estimating the equation relating the pixel size and the real size of the road sign. Experiments with a test video set confirm that the proposed method maps the estimated object coordinates onto a commercial map with high accuracy. Therefore, the proposed method can be used for an MMS in commercial regions.
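
The pixel-size/real-size relationship reduces, under a pinhole model, to distance = f_px · H_real / h_px; combined with the camera's GPS position and heading this places the sign on a map. The focal length, sign dimensions, and vehicle pose below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch: range a road sign from its apparent pixel height under a pinhole
# model, then place it on a local flat map from the camera pose.

f_px = 1200.0      # focal length in pixels (assumed)
H_real = 0.9       # real road-sign height in metres (assumed)
h_px = 54.0        # measured sign height in the image, pixels (assumed)

distance = f_px * H_real / h_px            # metres from camera to sign

cam_xy = np.array([10.0, 20.0])            # camera position, local metres
heading = np.deg2rad(30.0)                 # camera viewing direction
sign_xy = cam_xy + distance * np.array([np.cos(heading), np.sin(heading)])
```

Note that only a single calibrated camera and the GPS pose are needed; no extra range sensor appears, which is the hardware saving the abstract emphasizes.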

Golf Green Slope Estimation Using a Cross Laser Structured Light System and an Accelerometer

  • Pham, Duy Duong;Dang, Quoc Khanh;Suh, Young Soo
    • Journal of Electrical Engineering and Technology / v.11 no.2 / pp.508-518 / 2016
  • In this paper, we propose a method combining an accelerometer with a cross structured-light system to estimate the slope of a golf green. The cross-line laser provides two laser planes whose functions are computed with respect to the camera coordinate frame using least-squares optimization. By capturing the projections of the cross-line laser on the golf slope in a static pose with a camera, the functions of two 3D curves are approximated as high-order polynomials in the camera coordinate frame. The curves' functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer's output. The curves provide important information about the green, such as its height and slope angle. The accuracy of the curve estimation is verified through experiments that use an OptiTrack camera system as a ground-truth reference.
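
The rotation estimated from the accelerometer can be sketched with the standard static roll/pitch formulas (yaw about gravity is unobservable from an accelerometer alone); the accelerometer reading and the roll-pitch-yaw sign conventions below are illustrative assumptions.

```python
import numpy as np

# Sketch: estimate roll and pitch from a static accelerometer reading and
# build the rotation that maps camera-frame points into a gravity-aligned
# world frame, as used to re-express the laser curves.

acc = np.array([1.2, -0.8, 9.6])                  # static reading (assumed)
acc = acc / np.linalg.norm(acc)                   # direction of gravity

roll = np.arctan2(acc[1], acc[2])
pitch = np.arctan2(-acc[0], np.hypot(acc[1], acc[2]))

cr, sr = np.cos(roll), np.sin(roll)
cp, sp = np.cos(pitch), np.sin(pitch)
Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
R_wb = Ry @ Rx     # camera/body frame -> gravity-aligned world (yaw free)

# Any curve point expressed in the camera frame can now be re-expressed in
# the world frame: p_world = R_wb @ p_camera.
```

Applying R_wb to the fitted curve points levels them against gravity, from which the green's height profile and slope angle can be read off directly.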