• Title/Abstract/Keywords: Camera estimation method

Search results: 451 (processing time: 0.039 s)

특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법 (Localization of a Monocular Camera using a Feature-based Probabilistic Map)

  • 김형진;이동화;오택준;명현
    • 제어로봇시스템학회논문지
    • /
• Vol. 21, No. 4
    • /
    • pp.367-371
    • /
    • 2015
• In this paper, a novel localization method for a monocular camera using a feature-based probabilistic map is proposed. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is instead estimated by probabilistic approaches when features are scarce; an extra sensor is then required, because the camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach in the case of an insufficient image dataset, without any extra sensor. In our system, features from a probabilistic map are projected into the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, the pose of the monocular camera is refined from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in 3D space.
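The Mahalanobis-distance cost minimized in this refinement step can be sketched as follows. This is a toy illustration, not the paper's implementation: the projected feature positions, observed features, and per-feature covariances are all invented example values.

```python
import numpy as np

def mahalanobis_cost(projected, observed, covariances):
    """Sum of squared Mahalanobis distances between map features
    projected into the image plane and features extracted from the
    query image; pose refinement would minimize this over the pose."""
    cost = 0.0
    for p, z, S in zip(projected, observed, covariances):
        d = z - p
        cost += d @ np.linalg.inv(S) @ d
    return cost

# Toy example: two image-plane features with per-feature uncertainty.
projected = [np.array([100.0, 50.0]), np.array([200.0, 80.0])]
observed  = [np.array([101.0, 49.0]), np.array([198.0, 82.0])]
covs      = [np.eye(2) * 4.0, np.eye(2) * 4.0]  # 2-pixel std. dev.
print(mahalanobis_cost(projected, observed, covs))
```

Weighting residuals by the inverse covariance is what distinguishes this probabilistic refinement from plain reprojection-error minimization: uncertain features contribute less to the cost.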

불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구 (A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance)

  • 정완식;김경석;신광수;주철;김재확;윤현권
    • 한국정밀공학회지
    • /
• Vol. 16, No. 12
    • /
    • pp.87-98
    • /
    • 1999
• This study concentrates on the development of a real-time estimation model and a vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done based on an estimation model that generalizes the known manipulator kinematics to accommodate unknown relative camera position and orientation as well as manipulator uncertainty. The vision control method is robust and reliable, and overcomes the difficulties of conventional approaches, such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and exact knowledge of the position and orientation of the CCD cameras with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method to control the manipulator's position is provided by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.


스펙트럼 특성행렬을 이용한 효율적인 반사 스펙트럼 복원 방법 (Efficient Method for Recovering Spectral Reflectance Using Spectrum Characteristic Matrix)

  • 심규동;박종일
    • 한국멀티미디어학회논문지
    • /
• Vol. 18, No. 12
    • /
    • pp.1439-1444
    • /
    • 2015
• Measuring spectral reflectance can be regarded as obtaining inherent color parameters, and spectral reflectance has been widely used in image processing. Model-based spectrum recovery, one of the methods for obtaining spectral reflectance, uses an ordinary camera with multiple illuminations. Conventional model-based methods recover spectral reflectance efficiently using only a few parameters; however, they require pre-measured quantities such as the power spectra of the illuminations and the spectral sensitivity of the camera. In this paper, we propose an enhanced model-based spectrum recovery method that needs no pre-measured parameters. Instead of measuring each quantity, spectral reflectance is recovered efficiently by estimating and using the spectrum characteristic matrix, which combines the spectrum parameters: the basis functions, the power spectra of the illuminations, and the spectral sensitivity of the camera. The spectrum characteristic matrix can easily be estimated from images of a color checker captured under multiple illuminations. Additionally, we suggest a fast recovery method that preserves the positivity constraint of the spectrum by using nonnegative basis functions of spectral reflectance. Our method accurately reconstructed spectral reflectance and performed fast constrained estimation with an uncharacterized camera and illumination. Since the method can be applied conveniently, measuring spectral reflectance is expected to become more widely used.
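The linear model behind such recovery can be sketched as follows. This is a minimal illustration under assumed dimensions (31 wavelength samples, 3 basis functions, 6 camera responses), with a random stand-in for the spectrum characteristic matrix; the paper's estimation of that matrix from color-checker images is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 31 wavelength samples, 3 basis functions,
# 3 camera channels x 2 illuminations = 6 responses per pixel.
n_wl, n_basis, n_resp = 31, 3, 6

B = rng.random((n_wl, n_basis))   # nonnegative reflectance basis
M = rng.random((n_resp, n_basis)) # spectrum characteristic matrix
                                  # (folds in illumination power
                                  # spectra and camera sensitivity)

true_coeff = np.array([0.5, 0.3, 0.2])
responses = M @ true_coeff        # simulated camera responses

# Recover basis coefficients by least squares, then the reflectance.
coeff, *_ = np.linalg.lstsq(M, responses, rcond=None)
reflectance = B @ coeff
print(np.allclose(coeff, true_coeff))
```

Because the model is linear in the basis coefficients, a few camera responses under known-enough conditions suffice to recover the full spectral curve, which is what makes the approach efficient.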

여러 장의 영상을 사용하는 3차원 계측용 카메라 교정방법 (A Camera Calibration Method using Several Images for Three Dimensional Measurement)

  • 강동중
    • 제어로봇시스템학회논문지
    • /
• Vol. 13, No. 3
    • /
    • pp.224-229
    • /
    • 2007
• This paper presents a camera calibration method using several images for three-dimensional measurement applications such as stereo systems, mobile robots, and visual inspection systems in factories. Conventional calibration methods that use a single image suffer from errors related to reference point extraction in the image, lens distortion, and the numerical analysis of nonlinear optimization. The parameter values obtained from different images of the same camera are not identical even when the same calibration method is used. The camera parameters obtained from several views of a calibration target usually differ, with large errors, and no particular probability distribution can be assumed when estimating the parameter values. In this paper, the median of the camera parameters from several images is used to improve the estimate within an iterative nonlinear optimization step. The proposed method is validated by experiments using real images.
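The robustness of the median over per-image estimates can be illustrated with a toy set of focal-length values; the numbers are invented for illustration and one view is deliberately an outlier.

```python
import numpy as np

# Hypothetical focal-length estimates (pixels) from five single-image
# calibrations of the same camera; the third view is an outlier.
fx_estimates = np.array([812.4, 805.1, 951.0, 809.8, 807.3])

# The mean is dragged by the outlier view; the median is robust and
# can seed the subsequent nonlinear optimization step.
print(np.mean(fx_estimates))
print(np.median(fx_estimates))
```

Since no particular distribution of the per-image parameter errors can be assumed, an order statistic such as the median is a natural choice of robust initial value.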

단일 카메라와 GPS를 이용한 영상 내 객체 위치 좌표 추정 기법 (An Estimation Method for Location Coordinate of Object in Image Using Single Camera and GPS)

  • 성택영;권기창;문광석;이석환;권기룡
    • 한국멀티미디어학회논문지
    • /
• Vol. 19, No. 2
    • /
    • pp.112-121
    • /
    • 2016
• ADAS (Advanced Driver Assistance Systems) and street-furniture surveying vehicles such as an MMS (Mobile Mapping System) require an object location estimation method for recognizing the spatial information of objects in road images. Conventional methods, however, require an additional hardware module to gather the spatial information of an object and have high computational complexity. In this paper, a scheme for estimating the position of an object in road images, such as the coordinate of a road sign in a single camera image, is proposed using the relationship between pixel size and real-world object size. In this scheme, after estimating the equation relating the pixel size to the real size of a road sign, the coordinate value and heading of the camera are used to obtain the coordinate of the road sign in the image. Experiments with a test video set confirm that the proposed method maps the estimated object coordinates onto a commercial map with high accuracy. Therefore, the proposed method can be used for an MMS in commercial areas.
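The pixel-size-to-distance relationship can be sketched with the pinhole relation Z = f·H/h, combined with the camera heading to place the object relative to the GPS position. The focal length, sign height, and heading below are assumed example values, not the paper's calibration.

```python
import math

def object_offset(focal_px, real_height_m, pixel_height, heading_deg):
    """Estimate the ground offset (east, north) in meters of a road
    sign from the camera, using the pinhole relation Z = f * H / h
    and the camera heading measured clockwise from north."""
    distance = focal_px * real_height_m / pixel_height
    rad = math.radians(heading_deg)
    return distance * math.sin(rad), distance * math.cos(rad)

# Hypothetical numbers: 1000 px focal length, 0.9 m sign height,
# sign spanning 45 px in the image, camera heading due east.
east, north = object_offset(1000.0, 0.9, 45.0, 90.0)
print(round(east, 3), round(north, 3))
```

Adding this offset to the camera's GPS coordinate yields the object's map coordinate without any extra range sensor, which is the core appeal of the single-camera approach.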

다중 입체 영상 획득을 위한 정밀 카메라 캘리브레이션 기법 (Accurate Camera Calibration Method for Multiview Stereoscopic Image Acquisition)

  • 김중희;윤여훈;김준수;윤국진;정원식;강석주
    • 방송공학회논문지
    • /
• Vol. 24, No. 6
    • /
    • pp.919-927
    • /
    • 2019
• This paper proposes an accurate camera calibration method for acquiring stereoscopic images. Camera calibration is generally performed with a checkerboard target pattern: the grid structure is known in advance, and feature matching is easy at the checkerboard corners, so the relationship between 2D image pixels and 3D space can be estimated accurately. Because the camera parameters are estimated through this feature matching, precise calibration requires accurate detection of the checkerboard corners in the image plane. This paper therefore proposes a calibration method based on accurate corner detection: corner candidates are first detected with 1-D Gaussian filtering, outliers are then removed in a corner refinement step, and the corners are localized to sub-pixel accuracy. To validate the proposed method, we examine the reprojection error, which reflects the quality of the estimated intrinsic parameters, and verify the extrinsic parameter estimates on a dataset that provides ground-truth camera positions.
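One common way to localize a peak to sub-pixel precision, as this corner refinement requires, is a three-point parabola fit around the integer maximum of a corner response. The 1-D response below is a synthetic stand-in, not the paper's Gaussian-filtered measure.

```python
import numpy as np

def subpixel_peak(response, i):
    """Refine an integer peak index i to sub-pixel precision by
    fitting a parabola through the responses at i-1, i, and i+1."""
    a, b, c = response[i - 1], response[i], response[i + 1]
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# Synthetic corner response sampled on the pixel grid; the true peak
# lies at x = 3.2, between two integer pixels.
x = np.arange(7, dtype=float)
resp = -(x - 3.2) ** 2   # quadratic peak: exact for a parabola fit
print(subpixel_peak(resp, int(np.argmax(resp))))
```

Applied along both image axes, this kind of refinement is what pushes corner localization, and hence the reprojection error of the calibrated parameters, below one pixel.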

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • 한국컴퓨터정보학회논문지
    • /
• Vol. 21, No. 10
    • /
    • pp.11-19
    • /
    • 2016
• When displays are large and diverse in form, previous gaze-tracking methods do not apply. Mounting the gaze-tracking camera above the display solves the problem of display size and height, but the infrared corneal-reflection information used by previous methods then becomes unavailable. In this paper, we propose a pupil detection method that is robust to eye occlusion, and a simple method for computing the gaze position from the inner eye corner, the pupil center, and the face pose. In the proposed method, frames are captured by switching the camera between wide- and narrow-angle modes according to the person's position. When a face is detected within the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after computing the face position; frames captured in narrow-angle mode contain the gaze direction information of a person at long distance. The gaze direction calculation consists of a face pose estimation step and a gaze direction computation step. The face pose is estimated by mapping the feature points of the detected face to a 3D model. To compute the gaze direction, an ellipse is first fitted using edge information split from the iris boundary of the pupil; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then computed from the pupil center, the inner eye corner, and the face pose information. Experiments show that the proposed gaze-tracking algorithm removes the constraints on display form and effectively computes the gaze direction of a person at long distance using a single camera, as demonstrated over varying distances.

체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘 (Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects)

  • 김경진;박병서;김동욱;서영호
    • 방송공학회논문지
    • /
• Vol. 24, No. 5
    • /
    • pp.765-774
    • /
    • 2019
• In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In computer vision, much attention is devoted to precisely estimating camera positions. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and conventional methods that obtain the camera extrinsic parameters from 2D images carry large errors. To generate an omnidirectional 3D model with eight low-cost RGB-D cameras, we propose a method that uses depth images and function optimization to obtain coordinate transformation parameters whose errors remain within a valid range.
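A core sub-problem in such registration is recovering the rigid transformation between two point clouds. As a generic sketch (the Kabsch algorithm on synthetic correspondences, not the paper's depth-image optimization), it can be written as:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with
    dst ≈ src @ R.T + t, via the SVD-based Kabsch algorithm."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

# Synthetic test: rotate a random cloud 30 degrees about z and shift.
rng = np.random.default_rng(1)
src = rng.random((50, 3))
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With eight cameras, one such transformation per camera maps every partial cloud into a common world frame to form the omnidirectional model.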

PPIV 인식기반 2D 호모그래피와 LM방법을 이용한 카메라 외부인수 산출 (Camera Extrinsic Parameter Estimation using 2D Homography and LM Method based on PPIV Recognition)

  • 차정희;전영민
    • 전자공학회논문지SC
    • /
• Vol. 43, No. 2
    • /
    • pp.11-19
    • /
    • 2006
• In this paper, we propose a method for estimating the camera extrinsic parameters based on projective and permutation invariant point features. In previous work, correspondence computation is difficult because the feature information changes with the camera viewpoint. We therefore extract invariant point features that are independent of the camera position, and propose a new matching method that uses a similarity evaluation function and Graham search to reduce time complexity and compute accurate correspondences. In the extrinsic parameter estimation step, we propose a two-stage camera motion parameter computation that improves the convergence of the LM algorithm. Experiments on various indoor images compare the proposed algorithm with existing methods and demonstrate its superiority.
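The 2D homography underlying such extrinsic estimation is typically initialized with the direct linear transform (DLT) before LM refinement. The sketch below shows a plain DLT on four assumed correspondences; it is a generic illustration, not the paper's two-stage scheme.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    via the DLT: stack two linear constraints per correspondence and
    take the SVD null vector. A typical seed for LM refinement."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Four correspondences under a pure scaling by 2 (toy example).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = homography_dlt(src, dst)
print(np.round(H, 6))
```

The algebraic DLT solution minimizes the wrong (algebraic) error, which is exactly why a geometric refinement such as LM, and good convergence behavior for it, matters.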

Golf Green Slope Estimation Using a Cross Laser Structured Light System and an Accelerometer

  • Pham, Duy Duong;Dang, Quoc Khanh;Suh, Young Soo
    • Journal of Electrical Engineering and Technology
    • /
• Vol. 11, No. 2
    • /
    • pp.508-518
    • /
    • 2016
• In this paper, we propose a method that combines an accelerometer with a cross structured light system to estimate the slope of a golf green. The cross-line laser provides two laser planes whose plane equations are computed with respect to the camera coordinate frame using least squares optimization. By capturing the projections of the cross-line laser on the golf slope in a static pose with a camera, the functions of two 3D curves are approximated as high-order polynomials in the camera coordinate frame. The curve functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer's output. The curves provide important information about the green, such as its height and slope angle. The accuracy of the curve estimation is verified via experiments that use an OptiTrack camera system as a ground-truth reference.
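The accelerometer-based attitude step can be sketched as follows: for a static sensor, the measured acceleration is the gravity vector in the sensor frame, from which roll and pitch follow (yaw is unobservable from gravity alone). The tilt angle and axis convention below are assumed for illustration.

```python
import numpy as np

def roll_pitch_from_accel(a):
    """Roll and pitch (rad) of a static sensor from an accelerometer
    reading; gravity gives no information about yaw."""
    ax, ay, az = a / np.linalg.norm(a)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

# Sensor at rest, tilted 10 degrees about its x-axis (pure roll).
t = np.radians(10)
a = np.array([0.0, np.sin(t) * 9.81, np.cos(t) * 9.81])
roll, pitch = roll_pitch_from_accel(a)
print(round(np.degrees(roll), 3), round(np.degrees(pitch), 3))
```

The rotation matrix built from these two angles is what carries the polynomial curve functions from the camera frame into the gravity-aligned world frame, where height and slope angle are meaningful.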