• Title/Abstract/Keywords: 3D world coordinate

Search results: 55 items (processing time 0.022 s)

신경망을 이용한 카메라 보정에 관한 연구 (A Study on Camera Calibration Using Artificial Neural Network)

  • 정경필;우동민;박동철
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1996 Summer Conference Proceedings B
    • /
    • pp.1248-1250
    • /
    • 1996
  • The objective of camera calibration is to obtain the correlation between the camera image coordinate system and the 3-D real-world coordinate system. Most calibration methods are based on a camera model consisting of physical parameters of the camera such as position, orientation, and focal length, and in this case camera calibration means the process of computing those parameters. In this research, we suggest a new approach that is very efficient because the artificial neural network (ANN) model implicitly contains all the physical parameters, some of which are very difficult to estimate with existing calibration methods. Implicit camera calibration, that is, the process of calibrating a camera without explicitly computing its physical parameters, can be used both for 3-D measurement and for generation of image coordinates. By training on calibration points of different heights, we can find the perspective projection point. This point can then be used to reconstruct the 3-D real-world coordinates of a point at arbitrary height, and the image coordinates of an arbitrary 3-D real-world point. An experimental comparison of our method with Tsai's well-known two-stage method is made to verify the effectiveness of the proposed method.

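As a point of reference for the implicit approach above, here is a minimal sketch of the explicit pinhole model whose physical parameters (rotation, translation, focal length) an ANN-based calibration absorbs into its weights; all numeric values are illustrative, not taken from the paper.

```python
import numpy as np

# Explicit pinhole model: the physical parameters (rotation R, translation t,
# focal length f) that an implicit/ANN calibration would fold into a learned
# mapping. All numbers below are illustrative.

def project(world_pt, R, t, f, c=(320.0, 240.0)):
    """Map a 3-D world point to 2-D image coordinates (pinhole model)."""
    Xc = R @ world_pt + t          # world frame -> camera frame
    u = f * Xc[0] / Xc[2] + c[0]   # perspective division
    v = f * Xc[1] / Xc[2] + c[1]
    return np.array([u, v])

R = np.eye(3)                      # camera aligned with world axes
t = np.array([0.0, 0.0, 5.0])      # world origin 5 units in front of camera
pt = project(np.array([1.0, 2.0, 5.0]), R, t, f=800.0)
# camera-frame point (1, 2, 10) -> u = 800*0.1 + 320, v = 800*0.2 + 240
```

An implicit method learns the composite mapping `(X, Y, Z) -> (u, v)` directly, without ever exposing `R`, `t`, or `f`.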

퍼지 모델을 이용한 카메라 보정에 관한 연구 (Camera Calibration Using the Fuzzy Model)

  • 박민기
    • 한국지능시스템학회논문지
    • /
    • Vol. 11 No. 5
    • /
    • pp.413-418
    • /
    • 2001
  • In this paper, we propose a new camera calibration method that uses a fuzzy model in place of the conventional physical camera model. Camera calibration defines the relationship between the camera's image coordinate system and the coordinate system of the real environment. Although the fuzzy-model approach cannot determine the physical parameters used in conventional methods, it can establish the relationship between the camera coordinate system and the real-environment coordinate system, which is the actual goal of calibration, without any particular constraints, so it is a very simple and efficient calibration method. Using the coordinates of a single calibration plane in real space obtained through actual experiments, we demonstrate the validity of the proposed method by predicting 3-D real-space coordinates and 2-D image coordinates with the fuzzy modeling method.


레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발 (Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting)

  • 임동혁;황헌
    • Journal of Biosystems Engineering
    • /
    • Vol. 24 No. 2
    • /
    • pp.159-166
    • /
    • 1999
  • A system has been developed to continuously extract real 3-D geometric feature information from the 2-D image of an object fed randomly via a conveyor. Two sets of structured laser lightings were utilized, and the laser structured-light projection image was acquired with the camera, triggered by the signal of the photo-sensor mounted on the conveyor. A camera coordinate calibration matrix, which transforms 2-D image coordinates into 3-D world-space coordinates, was obtained using 6 known points. The maximum error after calibration was 1.5 mm within the height range of 103 mm. A correlation equation between the shift of the laser light and the height was generated, and height information estimated from this correlation showed a maximum error of 0.4 mm within the same height range. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment, and the extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.

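The 6-point calibration-matrix step described above can be sketched as a linear least-squares problem; the projection matrix and points below are synthetic, not the paper's data.

```python
import numpy as np

# Sketch: estimate a 3x4 calibration (projection) matrix from known
# 3-D -> 2-D point pairs by least squares, fixing P[2,3] = 1 so the
# system is linear in the 11 remaining entries.

def calibrate(world_pts, image_pts):
    """Solve P (3x4, P[2,3] fixed to 1) from >= 6 correspondences."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z]); b.append(v)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(p, 1.0).reshape(3, 4)

# Synthetic check: generate image points from a known P, then recover it.
P_true = np.array([[800., 0., 320., 100.],
                   [0., 800., 240., 50.],
                   [0., 0., 1., 1.]])
world = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [0, 0, 1], [1, 1, 0], [1, 0, 1.]])
homo = np.c_[world, np.ones(6)] @ P_true.T
image = homo[:, :2] / homo[:, 2:3]
P_est = calibrate(world, image)
```

With noiseless points the recovered matrix reprojects the calibration points exactly; with real measurements the least-squares fit spreads the pixel error, which is where the reported 1.5 mm residual comes from.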

원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출- (Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object-)

  • 김시찬;최동엽;황헌
    • Journal of Biosystems Engineering
    • /
    • Vol. 26 No. 1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance on diverse tasks, lack of robustness of task results, high system cost, and failure to earn the operator's trust. This paper proposes a scheme that can overcome these limitations of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it comprises two parts: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3-D coordinates was selected. 3-D coordinate information was obtained by camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a camera a certain distance in the horizontal direction normal to the focal axis and acquiring images at the two locations. The transformation matrix for camera calibration was obtained by a least-squares approach using 6 known pairs of data points in the 2-D image and 3-D world space. 3-D world coordinates were then obtained from the two sets of image pixel coordinates using the calibrated transformation matrices. As the interface between operator and CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated an object by touching its captured image on the touch-pad screen; a local image-processing area of a certain size was then specified around the touch, and image processing was performed within that local area to extract the desired features of the object. MS Windows based interface software was developed using Visual C++ 6.0. The software consists of four modules: remote image acquisition, task command, local image processing, and 3-D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.

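The triangulation step above (two calibrated views giving one 3-D point) can be sketched as follows; the camera matrices and test point are illustrative, not the paper's setup.

```python
import numpy as np

# Sketch: recover a 3-D world point from two image observations with
# calibrated 3x4 projection matrices, as with the horizontally shifted
# stereo pair described above.

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation: stack rows u*(p3) - p1 = 0 and take the SVD null vector."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    X = Vt[-1]
    return X[:3] / X[3]                              # dehomogenize

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]               # camera at origin
P2 = K @ np.c_[np.eye(3), [-0.2, 0., 0.]]            # shifted along x

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.5, -0.3, 4.0])
X_hat = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

With noiseless pixels the recovery is exact; with touch-screen-driven local image processing, the pixel localization error bounds the 3-D accuracy.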

전차량의 3차원 동역학 모델 (Three-Dimensional Dynamic Model of Full Vehicle)

  • 민경득;김영철
    • 전기학회논문지
    • /
    • Vol. 63 No. 1
    • /
    • pp.162-172
    • /
    • 2014
  • A three-dimensional dynamic model for simulating various motions of a full vehicle is presented. The model has 16 independent degrees of freedom (DOF) and consists of three kinds of components: a vehicle body with 6 DOF, 4 independent suspensions, one at each corner of the body, and 4 tire models, each linked to a suspension. The dynamic equations are represented in six coordinate frames: the world fixed frame, the vehicle fixed frame, and four wheel fixed frames. These then lead to an approximate prediction model of vehicle posture. Both lateral and longitudinal dynamics can be computed simultaneously while accounting for various inputs, including steering command, driving torque, gravity, rolling resistance of the tires, and aerodynamic resistance. Simulations show that the proposed 3D model can be useful for the precise design and performance analysis of full-vehicle control systems.
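The chaining of coordinate frames described above (wheel to vehicle to world) can be sketched as below; the yaw angle and offsets are illustrative, not the paper's parameters.

```python
import numpy as np

# Sketch: a point expressed in a wheel-fixed frame is mapped through the
# vehicle-fixed frame into the world-fixed frame. Angles and offsets are
# illustrative only.

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# vehicle frame: yawed 90 degrees and translated in the world frame
R_wv = rot_z(np.pi / 2)
t_wv = np.array([10.0, 5.0, 0.0])
# front-left wheel frame: a fixed offset in the vehicle frame (no extra rotation here)
t_wheel = np.array([1.2, 0.8, -0.3])

p_wheel = np.array([0.0, 0.0, 0.0])      # wheel-frame origin
p_vehicle = p_wheel + t_wheel            # wheel -> vehicle
p_world = R_wv @ p_vehicle + t_wv        # vehicle -> world
# yaw of 90 degrees maps (1.2, 0.8, -0.3) to (-0.8, 1.2, -0.3), then translates
```

A full 16-DOF model updates `R_wv` and `t_wv` every step from the body's 6 DOF and adds suspension travel to each wheel offset; the frame algebra stays exactly this composition.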

합성곱 신경망 기반 맨하탄 좌표계 추정 (Estimation of Manhattan Coordinate System using Convolutional Neural Network)

  • 이진우;이현준;김준호
    • 한국컴퓨터그래픽스학회논문지
    • /
    • Vol. 23 No. 3
    • /
    • pp.31-38
    • /
    • 2017
  • In this paper, we propose a convolutional neural network (CNN) based system that estimates the Manhattan coordinate system from urban images. Estimating the Manhattan coordinate system of an urban image is fundamental to solving computer graphics and vision problems such as image adjustment and 3-D scene reconstruction. The proposed CNN is built on GoogLeNet [1]. To train the network, we collect images with the Google Street View API and compute the Manhattan coordinate system with an existing calibration method to generate a dataset. Unlike PoseNet [2], which must be retrained for every scene, the proposed system learns scene structure to estimate the Manhattan coordinate system, so it can do so even for new scenes not seen during training. When tested on Google Street View images excluded from training, the proposed method estimated the Manhattan coordinate system with a median error of $3.157^{\circ}$. We also confirmed that, on the same validation data, the proposed method shows a lower median error than an existing Manhattan coordinate estimation algorithm [3].

V.F. 모델링을 이용한 주행차량의 진동에 대한 도로영상의 계측오차 보정 알고리듬 (A Measurement Error Correction Algorithm of Road Image for Traveling Vehicle's Fluctuation Using V.F. Modeling)

  • 김태효;서경호
    • 제어로봇시스템학회논문지
    • /
    • Vol. 12 No. 8
    • /
    • pp.824-833
    • /
    • 2006
  • In this paper, an image model of road lane markings is established using a view frustum (VF) model. From this model, a measurement system for lane markings and obstacles is proposed. The system also performs real-time computation of 3-D position coordinates and of the distance from the camera to points in the 3-D world coordinate system, by virtue of camera calibration. To reduce the measurement error, a useful algorithm is proposed that analyzes, with the VF model, the geometric variations caused by the traveling vehicle's fluctuation. In experiments without correction, for instance, a pitching rotation of $0.4^{\circ}$ produces an error of $0.4{\sim}0.6m$ at a distance of 10 m, and the error grows rapidly at longer distances. We confirmed that this algorithm can reduce the error to less than 0.1 m under the same conditions.
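The scale of the reported pitch error can be reproduced with a simple flat-road model; the camera height below is an assumed value, not stated in the paper.

```python
import math

# Flat-road sketch of the pitch-induced range error discussed above: a road
# point at true distance d seen by a camera at height h lies at depression
# angle phi = atan(h/d); a pitch disturbance dphi makes the estimated
# distance h / tan(phi - dphi). The camera height h is an assumed value.

def range_error(d, h, pitch_deg):
    phi = math.atan2(h, d)                            # true depression angle
    d_est = h / math.tan(phi - math.radians(pitch_deg))
    return d_est - d

h = 1.5                                               # assumed camera height [m]
err_10m = range_error(10.0, h, 0.4)                   # roughly half a meter
err_30m = range_error(30.0, h, 0.4)                   # much larger farther out
```

With these assumptions a $0.4^{\circ}$ pitch at 10 m yields an error of about 0.5 m, consistent with the $0.4{\sim}0.6m$ reported, and the error blows up with distance as the ray grazes the road.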

수치지질도의 세계측지계 좌표변환 (ArcToolbox를 이용한 충주 및 황강리 도폭의 사례) (The Coordinate Transformation of Digital Geological Map in accordance with the World Geodetic System (A Case Study of Chungju and Hwanggang-ri Sheets using ArcToolbox))

  • 오현주
    • 자원환경지질
    • /
    • Vol. 48 No. 6
    • /
    • pp.537-543
    • /
    • 2015
  • Since 2010, the use of the internationally accepted world geodetic system has been mandatory in Korea. Accordingly, digital maps currently published by the National Geographic Information Institute are provided in the world geodetic system. However, most digital geological maps are still provided in the Tokyo datum. Users performing 2D/3D geospatial analysis with digital geological maps therefore have no choice but to transform their coordinates into the world geodetic system. This paper thus introduces, for users unfamiliar with defining and transforming the coordinates of digital geological maps, a method of defining coordinates in the Tokyo datum and transforming them to the world geodetic system using ArcToolbox. For objectivity, the adjacent 1:50,000-scale Chungju and Hwanggang-ri geological map sheets were used as examples. In the Tokyo-datum coordinate definition, the Chungju and Hwanggang-ri sheets were defined on the Tokyo-datum Central and East origins, respectively; the Hwanggang-ri sheet was then transformed to the Tokyo-datum Central origin and merged with the Chungju sheet. In the world geodetic transformation, the merged Chungju and Hwanggang-ri sheets were transformed to the world-geodetic Central origin. As a result, the merged sheets matched the position of the digital map sheet (Daeso 367041) exactly. In this way, the coordinate problems of older digital geological maps could be solved effectively. The coordinate definition and transformation method presented in this paper is therefore expected to be useful as a preprocessing step for 2D/3D geospatial analysis of various digital thematic maps.
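The datum step underlying the ArcToolbox workflow can be sketched as a geocentric three-parameter shift between ellipsoids; the shift values below are illustrative placeholders, and real conversions should use the officially published parameters (or the tool itself).

```python
import math

# Sketch: geodetic coordinates on the Bessel ellipsoid (Tokyo datum) are
# converted to geocentric XYZ, shifted by three translation parameters, and
# converted back on the GRS80 ellipsoid (world geodetic system). The shift
# values below are illustrative only.

def geodetic_to_xyz(lat, lon, h, a, f):
    e2 = f * (2 - f)
    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)
    return ((N + h) * math.cos(lat) * math.cos(lon),
            (N + h) * math.cos(lat) * math.sin(lon),
            (N * (1 - e2) + h) * math.sin(lat))

def xyz_to_geodetic(x, y, z, a, f):
    e2 = f * (2 - f)
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - e2))                # initial guess
    for _ in range(10):                              # fixed-point iteration
        N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - N
        lat = math.atan2(z + e2 * N * math.sin(lat), p)
    return lat, lon, h

BESSEL = (6377397.155, 1 / 299.1528128)              # Tokyo-datum ellipsoid
GRS80 = (6378137.0, 1 / 298.257222101)               # world geodetic ellipsoid
SHIFT = (-147.0, 506.0, 687.0)                       # illustrative dX, dY, dZ [m]

lat, lon = math.radians(36.97), math.radians(127.93) # near the Chungju sheet
x, y, z = geodetic_to_xyz(lat, lon, 0.0, *BESSEL)
x, y, z = x + SHIFT[0], y + SHIFT[1], z + SHIFT[2]
lat2, lon2, h2 = xyz_to_geodetic(x, y, z, *GRS80)
# the Tokyo -> world geodetic ground shift in Korea is on the order of
# a few hundred meters, which is why unconverted sheets visibly misalign
```

This is why an unconverted geological sheet overlaid on a world-geodetic base map appears displaced by hundreds of meters, and why the ArcToolbox define-then-transform sequence matters.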

Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 7 No. 3
    • /
    • pp.216-220
    • /
    • 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, on the other hand, is a very popular fuzzy system that approximates any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it demonstrates not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for machine-vision camera calibration using a TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses clustered 3D geometric knowledge. A TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration.
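The blending mechanism of a TSK model can be sketched in one dimension; the rule centers and linear consequents below are illustrative, far simpler than the paper's calibrated rules.

```python
import numpy as np

# Minimal one-input Takagi-Sugeno-Kang sketch: Gaussian memberships blend
# local linear models into one smooth output -- the mechanism used above
# (with far richer rules) to fuse partial 3-D information. All rule
# parameters are illustrative.

centers = np.array([0.0, 5.0, 10.0])     # rule centers
sigma = 1.5                              # membership width
slopes = np.array([1.0, 0.5, 2.0])       # consequent y = a*x + b per rule
offsets = np.array([0.0, 2.5, -12.5])

def tsk(x):
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)  # firing strengths
    y_local = slopes * x + offsets                   # local linear outputs
    return float(np.sum(w * y_local) / np.sum(w))    # normalized weighted blend

# near a rule center the blend approaches that rule's own linear model
y0 = tsk(0.0)    # close to rule 1: 1.0*0 + 0.0
y1 = tsk(5.0)    # close to rule 2: 0.5*5 + 2.5
```

The transparency claimed for TSK models comes from exactly this structure: each rule is a readable local linear model, and the memberships say where it applies.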

멀티뷰 카메라를 사용한 외부 카메라 보정 (Extrinsic calibration using a multi-view camera)

  • 김기영;김세환;박종일;우운택
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003 Signal Processing Society Fall Conference Proceedings
    • /
    • pp.187-190
    • /
    • 2003
  • In this paper, we propose an extrinsic calibration method for a multi-view camera to obtain an optimal pose in 3D space. Conventional calibration algorithms do not guarantee calibration accuracy at mid and long distances, because pixel errors increase as the distance between the camera and the pattern grows. To compensate for the calibration errors, we first apply Tsai's algorithm to each lens to obtain initial extrinsic parameters. Then, we estimate extrinsic parameters using distance vectors obtained from structural cues of the multi-view camera. After obtaining the estimated extrinsic parameters of each lens, we iteratively carry out a non-linear optimization using the relationship between the camera coordinate system and the world coordinate system. The optimal camera parameters can be used in generating 3D panoramic virtual environments and in supporting AR applications.

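The reprojection-error objective that such a non-linear refinement minimizes can be sketched as follows; all numeric values are illustrative.

```python
import numpy as np

# Sketch of the objective behind the iterative refinement above: given
# extrinsics (R, t) and intrinsics K, the mean reprojection error over
# known world/image pairs is what the non-linear optimization drives down.

def reprojection_error(K, R, t, world_pts, image_pts):
    errs = []
    for X, uv in zip(world_pts, image_pts):
        x = K @ (R @ X + t)                      # project through the model
        errs.append(np.linalg.norm(x[:2] / x[2] - uv))
    return float(np.mean(errs))

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 5.])
world = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 1.]])
image = np.array([(K @ (R @ X + t))[:2] / (K @ (R @ X + t))[2] for X in world])

# exact extrinsics give (near) zero error; a perturbed translation does not
e_true = reprojection_error(K, R, t, world, image)
e_off = reprojection_error(K, R, t + np.array([0.05, 0., 0.]), world, image)
```

Iterating updates of `R` and `t` to shrink this mean error, starting from the Tsai-initialized parameters, is the refinement loop described above.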