• Title/Summary/Keyword: Camera Pose Estimation (카메라 위치 추정)

291 search results (processing time 0.03 seconds)

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture a great deal of information about the environment from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, corresponding contours across image sequences are useful for computing camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between epipolar planes and back-projected vectors from each corresponding point. We then compute the final parameters by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
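The first-step angular error described in the abstract (between an epipolar plane and a back-projected ray) can be sketched generically as follows; this is an illustration of that cost term, not the authors' code, and the ray and essential-matrix conventions (`X2 = R @ X1 + t`) are assumptions:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def angular_error(R, t, ray1, ray2):
    """Angle between the back-projected ray in view 2 and the epipolar
    plane induced by ray1 under motion (R, t); zero for a perfect match."""
    E = skew(t) @ R                       # essential matrix
    n = E @ ray1                          # normal of the epipolar plane in view 2
    s = abs(n @ ray2) / (np.linalg.norm(n) * np.linalg.norm(ray2) + 1e-12)
    return np.arcsin(np.clip(s, 0.0, 1.0))
```

Summing this error over all corresponding points gives the function minimized in the coarse step.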

Implementation of Camera-Based Autonomous Driving Vehicle for Indoor Delivery using SLAM (SLAM을 이용한 카메라 기반의 실내 배송용 자율주행 차량 구현)

  • Kim, Yu-Jung;Kang, Jun-Woo;Yoon, Jung-Bin;Lee, Yu-Bin;Baek, Soo-Whang
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.4
    • /
    • pp.687-694
    • /
    • 2022
  • In this paper, we propose an autonomous vehicle platform that delivers goods to a designated destination based on a SLAM (Simultaneous Localization and Mapping) map generated indoors using Visual SLAM technology. To generate the indoor SLAM map, a depth camera was installed on top of a small autonomous vehicle platform, and a tracking camera was installed for accurate location estimation within the SLAM map. In addition, a convolutional neural network (CNN) was used to recognize the destination label, and a driving algorithm was applied to arrive accurately at the destination. A prototype of the indoor delivery autonomous vehicle was manufactured, the accuracy of the SLAM map was verified, and a destination label recognition experiment was performed with the CNN. As a result, the implemented vehicle's increased label recognition success rate verified its suitability for indoor delivery.

A Markerless Augmented Reality Approach for Outdoor/Indoor (실내외 연동을 위한 markerless 증강현실 구현)

  • Kim, Albert Heekwan;Cho, Hyeondal
    • Annual Conference of KIPS
    • /
    • 2009.04a
    • /
    • pp.59-62
    • /
    • 2009
  • Augmented reality overlays virtual objects onto the real environment, and it has great potential for tasks such as visualizing geographic information. However, mobile augmented reality systems studied so far have determined the user's position either by GPS or by attaching markers to the site. Recent studies aim at markerless methods, but these still have many limitations. Indoors in particular, GPS signals are unavailable, so a new technique is needed for indoor positioning. RF-based indoor positioning has been actively studied recently, but it too requires installing many sensors and readers. In this study, we present a localization technique using a single-camera SLAM algorithm, and we develop an augmented reality information visualization program that uses the estimated position. This will form the basis of a seamless indoor/outdoor u_GIS system.
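As an illustration of the overlay step such a system needs (not taken from the paper), projecting a virtual 3D point into the image with the SLAM-estimated camera pose can be sketched as:

```python
import numpy as np

def project_virtual_point(K, R, t, X_world):
    """Project a virtual 3D point into the image using an estimated
    camera pose (R, t) and intrinsics K; returns (u, v) pixel coordinates.
    Convention assumed: X_cam = R @ X_world + t, z forward."""
    X_cam = R @ X_world + t
    assert X_cam[2] > 0, "point must be in front of the camera"
    x = K @ X_cam
    return x[0] / x[2], x[1] / x[2]
```

Rendering the virtual object at these coordinates for every frame, as the pose estimate updates, is what makes the overlay appear anchored to the real scene.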

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.5
    • /
    • pp.444-450
    • /
    • 2015
  • The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a tracked target by using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in both known and unknown environments, and their localization must occur naturally. Estimating the UAV's attitude through environment recognition is one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information from a marker on the floor. The method combines the position observed from GPS sensors with the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image coordinates of the floor marker to the estimated attitude of the UAV. Since the equations are based on the estimated position, measurement error is always present. The proposed method uses the error between the observed and estimated image coordinates to localize the UAV, and a Kalman filter scheme is applied. Its performance is verified by image processing results and experiments.
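The Kalman filter scheme mentioned in the abstract reduces to the standard measurement update driven by the innovation (observed minus predicted image coordinates). A minimal generic sketch, with the state and measurement layout left abstract rather than the paper's specific formulation, is:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: x is the state estimate, P its
    covariance, z the measurement (e.g. marker image coordinates),
    H the measurement matrix, R the measurement noise covariance."""
    y = z - H @ x                          # innovation: observed minus predicted
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a pose-estimation loop, a motion-model prediction step would alternate with this update each frame.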

Nonlinear Destriping Algorithm of Satellite Images (비선형 보정을 이용한 위성영상의 줄무늬잡음 제거 알고리즘)

  • 박종현;최은철;강문기;김용승
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2001.11b
    • /
    • pp.135-138
    • /
    • 2001
  • Images acquired from an Electro-Optical Camera mounted on a satellite exhibit "stripe noise" running in the same direction as the camera's scan direction. This occurs because the sensor characteristics are not identical and image acquisition takes place in the harsh environment of space. In this paper, we propose a nonlinear correction method to remove stripe noise. Based on the assumptions of quasi-homogeneity of the image and time-invariance of the sensor characteristics, the error due to stripe noise is estimated by referring to the columns neighboring the column being corrected, and this error is minimized. To estimate the degree of stripe noise, it is divided into a component caused by bias and a component caused by differences in the trend of the characteristic curves. The bias error is estimated using statistical characteristics along the scan direction. The error due to differences in the characteristic curves is estimated by examining the pixels with the same brightness level in the column to be corrected and analyzing the statistical characteristics of the brightness levels of pixels at the same positions in neighboring columns. By minimizing the estimated error, stripe noise was effectively removed.
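A minimal sketch of the bias part of such a correction (matching each column's mean against its neighbors) looks like the following; the characteristic-curve part of the paper's method is omitted, and the stripe direction is assumed to be column-wise:

```python
import numpy as np

def destripe_bias(image):
    """Remove column-wise bias stripes by shifting each column's mean
    toward the average of its neighboring columns' means. Only the bias
    component is corrected; gain/response-curve differences are not."""
    out = image.astype(float).copy()
    col_mean = out.mean(axis=0)
    # reference level: average of the left and right neighbour column means
    ref = np.convolve(col_mean, [0.5, 0.0, 0.5], mode='same')
    ref[0], ref[-1] = col_mean[1], col_mean[-2]   # edge columns use inner neighbour
    out -= (col_mean - ref)                        # subtract the estimated bias
    return out
```

This relies on the quasi-homogeneity assumption from the abstract: neighboring columns are expected to have similar statistics, so a persistent offset is attributed to the sensor rather than the scene.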


Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.256-262
    • /
    • 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot. These features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched against the map. If some features are not matched, they are added to the map. This map matching and updating process continues until map building is finished. Localization is used during map building and when searching for the robot's location on the map. The features calculated at the robot's position are matched against the existing map to estimate the robot's real position, and the map database is updated at the same time. With the proposed method, map building takes less than 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error in the robot's heading angle is ±3 degrees.
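The radial-distortion removal in the preprocessing step can be illustrated with a generic polynomial model inverted by fixed-point iteration; the paper's actual fisheye model is not specified, so the two-coefficient form and the parameters `k1`, `k2` here are assumptions:

```python
import numpy as np

def undistort_point(xd, yd, k1, k2, cx=0.0, cy=0.0, iters=10):
    """Invert the polynomial radial distortion x_d = x_u*(1 + k1*r^2 + k2*r^4)
    by fixed-point iteration; coordinates are normalized image coordinates
    and (cx, cy) is the distortion center."""
    x, y = xd - cx, yd - cy
    xu, yu = x, y                          # initial guess: distorted position
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / f, y / f              # refine the undistorted estimate
    return xu + cx, yu + cy
```

For mild distortion the iteration converges in a handful of steps; strong fisheye lenses generally need a dedicated projection model rather than this polynomial approximation.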


Online 3D Reconstruction Techniques for Super Multi-view Image Synthesis (초다시점 영상 합성을 위한 온라인 삼차원 복원 기술)

  • Kim, Jeong-Ho;Kim, Je-U;Gwon, In-So
    • Information and Communications Magazine
    • /
    • v.31 no.2
    • /
    • pp.44-51
    • /
    • 2014
  • This paper introduces image-based online 3D reconstruction techniques for super multi-view image synthesis. Methods for improving reconstruction accuracy fall broadly into two categories. The first defines the reprojection error as a cost function and optimizes it via Bundle Adjustment; the second is based on stochastic filtering, which defines probability distributions over the camera poses and the 3D reconstruction result and estimates them sequentially. This paper analyzes the strengths and weaknesses of both approaches and, based on this analysis, proposes a new 3D reconstruction and camera pose estimation method based on stochastic filtering. Its performance is verified by applying it to a large-scale environment.
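The reprojection-error cost that Bundle Adjustment minimizes can be written down directly; a minimal sketch in generic notation (not the authors' implementation) is:

```python
import numpy as np

def reprojection_error(points_3d, poses, observations, K):
    """Sum of squared reprojection errors, the Bundle Adjustment cost.
    points_3d: list of 3D points; poses: list of (R, t) world-to-camera;
    observations: tuples (cam_idx, pt_idx, u, v) of measured pixels."""
    total = 0.0
    for cam_idx, pt_idx, u, v in observations:
        R, t = poses[cam_idx]
        X = R @ points_3d[pt_idx] + t      # world -> camera coordinates
        x = K @ X                          # pinhole projection
        du, dv = x[0] / x[2] - u, x[1] / x[2] - v
        total += du * du + dv * dv
    return total
```

Bundle Adjustment optimizes all poses and points jointly against this cost, whereas the stochastic-filtering alternative propagates a distribution over them frame by frame.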

Cross-covariance 3D Coordinate Estimation Method for Virtual Space Movement Platform (가상공간 이동플랫폼을 위한 교차 공분산 3D 좌표 추정 방법)

  • Jung, HaHyoung;Park, Jinha;Kim, Min Kyoung;Chang, Min Hyuk
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.5
    • /
    • pp.41-48
    • /
    • 2020
  • Recently, as demand in the mobile platform market for virtual/augmented/mixed reality grows, experiential content that gives users a real-world feeling through a virtual environment is drawing attention. In this paper, as a method of tracking a tracker for user pose estimation in a virtual space movement platform for motion capture of trainees, we present a method of estimating 3D coordinates via cross-covariance from the coordinates of markers projected onto the images. The validity of the proposed algorithm is verified through rigid-body tracking experiments.
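The abstract does not detail the cross-covariance estimator itself, but the underlying problem, recovering a marker's 3D coordinates from its projections in multiple cameras, is commonly solved with linear (DLT) triangulation, sketched here only as a baseline:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its projections
    x1, x2 = (u, v) in two cameras with 3x4 projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)            # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```

With more than two cameras the same construction simply stacks two rows per view before the SVD.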

A Geographic Modeling System Using GIS and Real Images (GIS와 실영상을 이용한 지리 모델링 시스템)

  • 안현식
    • Spatial Information Research
    • /
    • v.12 no.2
    • /
    • pp.137-149
    • /
    • 2004
  • To model artificial 3D objects with computers, we have to draw frames and paint the facet images on each side. In this paper, a geographic modeling system that automatically builds 3D geographic spaces using GIS data and real images of buildings is proposed. First, the 3D terrain model is constructed using TIN and DEM algorithms. The images of buildings are acquired with a camera, and the camera position is estimated using vertical lines in the image and the GIS data. The height of each building is computed from the image and the camera position, and is used to construct the building frames. The 3D model of a building is obtained by detecting the facet images of the building and texture mapping them onto the 3D frame. The proposed geographic modeling system is applied to a real area and shows its effectiveness.
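The height-from-image computation can be illustrated with a fronto-parallel pinhole sketch; the geometry below is an assumption for illustration, not the paper's exact derivation:

```python
def building_height(f_pixels, distance_m, y_top, y_base):
    """Fronto-parallel pinhole sketch: a facade at distance distance_m
    spanning (y_base - y_top) pixels vertically has height
    distance_m * span / f_pixels. Image y grows downward, f_pixels is
    the focal length in pixels, distance_m the camera-to-facade distance
    (e.g. recovered from the GIS data and the estimated camera position)."""
    return distance_m * (y_base - y_top) / f_pixels
```

Real facades that are oblique to the image plane would need the full perspective relation rather than this similar-triangles shortcut.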


Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately and build a map of the environment simultaneously. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance. These features are used in the map building and localization process. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling region from the wall region. In the initial map building process, features are calculated for the segmented regions and stored in the map database. Features are continuously calculated from sequential input images and matched against the existing map until map building is finished. If features are not matched, they are added to the existing map. Localization is done simultaneously with feature matching during map building: it is performed when features are matched with the existing map, and the map database is updated at the same time. The proposed method can build a map of a $50m^2$ area in 2 minutes. The positioning accuracy is ${\pm}13cm$, and the average error in the robot's heading angle is ${\pm}3$ degrees.
