• Title/Abstract/Keywords: Vision based localization

134 search results

가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발 (Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes)

  • 전영산;최종은;이정욱
    • 제어로봇시스템학회논문지 / Vol. 20, No. 11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, since GPS is not available indoors, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we propose a vision-based SLAM system that uses vision sensors and an AHRS (Attitude and Heading Reference System) sensor. Feature points in the images captured by the vision sensor are extracted with a GPU (Graphics Processing Unit)-based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information from the AHRS to estimate the position of the small UAV. From the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and that the map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values against the actual values.
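
As a rough illustration of the feature-extraction step described above, the sketch below uses OpenCV's CPU SIFT implementation as a stand-in for the paper's GPU-based SIFT; the function name and the ratio-test threshold are illustrative choices, not taken from the paper.

```python
# Sketch of SIFT feature extraction and matching between two frames,
# using OpenCV's CPU SIFT in place of the paper's GPU implementation.
import cv2
import numpy as np

def extract_and_match(img_prev, img_curr):
    """Detect SIFT keypoints in two frames and return matched point pairs."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_prev, None)
    kp2, des2 = sift.detectAndCompute(img_curr, None)
    # Lowe's ratio test discards ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2
```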

VRML 영상오버레이기법을 이용한 로봇의 Self-Localization (VRML image overlay method for Robot's Self-Localization)

  • 손은호;권방현;김영철;정길도
    • 대한전기학회:학술대회논문집 / 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문 / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could move the robot in the wrong direction or damage it through collision with surrounding obstacles. There are numerous approaches to self-localization, across different sensing modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing this noise, but the attainable accuracy is limited because most approaches are purely statistical. The goal of our research is to measure the robot location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
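
The angular-separation computation the abstract mentions can be made concrete with basic pinhole geometry: a landmark imaged at horizontal pixel coordinate u subtends a bearing of atan((u - cx)/f) from the optical axis, where cx is the principal point and f the focal length in pixels. The sketch below, with hypothetical function names, computes the pairwise separations for three landmarks; the paper's own formulation may differ, and these angles would feed a classical three-point resection to fix the robot pose.

```python
# Minimal sketch of bearing and angular-separation computation
# from a single calibrated image of three landmarks.
import math

def bearing(u, cx, f):
    """Bearing (radians) of a landmark's line of sight from the optical axis."""
    return math.atan2(u - cx, f)

def angular_separations(u1, u2, u3, cx, f):
    """Pairwise angular separations between three landmarks' lines of sight."""
    b1, b2, b3 = (bearing(u, cx, f) for u in (u1, u2, u3))
    return abs(b2 - b1), abs(b3 - b2), abs(b3 - b1)
```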


야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정 (UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments)

  • 최지훈;박용운;주상현;심성대;민지홍
    • 한국군사과학기술학회지 / Vol. 15, No. 5 / pp.557-564 / 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. A conventional GPS/INS integrated system does not guarantee robust localization because GPS is vulnerable to external disturbances. In many outdoor environments, however, a vision system is very effective: compared to open space there are many features, and these features provide rich information for UGV localization. This paper therefore combines scene-matching and pose-estimation-based vision navigation with a magnetic compass and an odometer to cope with GPS-denied environments. A federated filter in NR (no-reset) mode is used to ensure system safety. Experimental results on a predefined path demonstrate improved robustness and accuracy of localization in outdoor environments.
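
As a hedged sketch of how a federated filter's master stage might combine its local filters (the paper does not give its equations here), the snippet below performs standard information-weighted fusion of the local estimates, e.g. from GPS/INS, vision navigation, and compass/odometer filters; in NR (no-reset) mode the fused result is not fed back to the local filters.

```python
# Information-weighted fusion of local filter outputs in a federated
# architecture. Each local filter yields an estimate x_i with covariance P_i.
import numpy as np

def federated_fuse(estimates, covariances):
    """Fuse local estimates (list of vectors) with covariances (list of matrices)."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))                       # fused covariance
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused
```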

옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템 (Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images)

  • 김종록;임미섭;임준홍
    • 제어로봇시스템학회논문지 / Vol. 17, No. 3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose using features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme can be summarized as follows. First, we use an omni-directional camera that captures instantaneous 360° panoramic images around the robot. Second, candidate nodes around the robot are selected by the correlation coefficients of the Circular Horizontal Line between the landmark image and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are pre-filtered using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
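
The FFT-based correlation step can be illustrated as follows, assuming the Circular Horizontal Line is a 1-D intensity scan extracted from the panorama; the normalization and variable names are illustrative, not from the paper. Because the panorama is cyclic, the peak offset of the circular correlation also indicates the relative heading between the two views.

```python
# Normalized circular cross-correlation of two 1-D panoramic scan lines,
# computed via the correlation theorem: corr = IFFT(FFT(a) * conj(FFT(b))).
import numpy as np

def circular_correlation(line_a, line_b):
    """Return (peak correlation, cyclic shift) between two scan lines."""
    a = (line_a - line_a.mean()) / (line_a.std() + 1e-12)
    b = (line_b - line_b.mean()) / (line_b.std() + 1e-12)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)
    shift = int(np.argmax(corr))      # best cyclic offset = relative heading
    return corr[shift], shift
```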

실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구 (A Study on Fisheye Lens based Features on the Ceiling for Self-Localization)

  • 최철희;최병재
    • 한국지능시스템학회논문지 / Vol. 21, No. 4 / pp.442-448 / 2011
  • Much research on SLAM (Simultaneous Localization and Mapping) has been conducted for mobile robot localization. This paper presents a self-localization method that uses feature points on the ceiling, captured by a single camera fitted with a wide-angle fisheye lens. We present the correction of the distorted images produced by the fisheye-lens vision system; the process of extracting robust SIFT (Scale Invariant Feature Transform) feature points and deriving an optimized region function by matching the previous image with the image after movement; and the design of a geometric fitting model. The usefulness of the proposed method is confirmed by applying it in laboratory and corridor environments.
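
A minimal sketch of the distortion-correction step, assuming OpenCV's fisheye camera model; the intrinsic matrix K and distortion coefficients D below are placeholder values standing in for a calibration the paper does not specify.

```python
# Rectify a fisheye ceiling image before SIFT feature extraction.
import cv2
import numpy as np

K = np.array([[300.0,   0.0, 320.0],
              [  0.0, 300.0, 240.0],
              [  0.0,   0.0,   1.0]])      # assumed intrinsics
D = np.array([0.1, -0.05, 0.0, 0.0])       # assumed fisheye coefficients k1..k4

def undistort_ceiling(img):
    """Undo fisheye distortion so standard SIFT matching can be applied."""
    return cv2.fisheye.undistortImage(img, K, D, Knew=K)
```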

로봇의 위치보정을 통한 경로계획 (Path finding via VRML and VISION overlay for Autonomous Robotic)

  • 손은호;박종호;김영철;정길도
    • 대한전기학회:학술대회논문집 / 대한전기학회 2006년 학술대회 논문집 정보 및 제어부문 / pp.527-529 / 2006
  • In this paper, we find a robot's path using a Virtual Reality Modeling Language (VRML) model and a vision overlay. To obtain a correct path, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image-processing and neural-network pattern-matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D vision scene is overlaid with the VRML scene. This paper describes how the self-positioning is realized and shows the overlap between the 2-D and VRML scenes. The method successfully defines the robot's path.
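
The overlay step can be pictured as a standard pinhole projection: once the pose is estimated, the VRML model's 3-D points are projected into the camera image so the rendered and real scenes can be compared. The sketch below assumes a calibrated camera; the names and conventions are illustrative, not the paper's.

```python
# Project 3-D model points into the image for the VRML/vision overlay.
import numpy as np

def project_points(model_pts, R, t, K):
    """Project Nx3 world points into pixel coordinates.

    R, t: estimated camera rotation and translation; K: intrinsic matrix.
    """
    cam = R @ model_pts.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ cam                              # camera -> homogeneous pixels
    return (uv[:2] / uv[2]).T                 # perspective division
```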


이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정 (Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment)

  • 진태석;이민중;이장명
    • 제어로봇시스템학회논문지 / Vol. 13, No. 5 / pp.434-443 / 2007
  • The exploration of an unknown environment is an important task for the new generation of mobile service robots, which are navigated by a number of methods using sensing systems such as sonar or vision. To fully utilize the strengths of both the sonar and visual sensing systems, this paper presents a technique for localizing a mobile robot using the fused data of multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. For the ultrasonic sensors, these features yield range information in the form of circular arcs, generally called RCDs (Regions of Constant Depth). Localization is the continual provision of knowledge of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with the results of a set of experiments using a mobile robot.
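
As a generic sketch of the extended Kalman filter cycle the abstract employs (the paper's specific odometry and sonar/vision RCD measurement models are not reproduced here), one predict/update step looks like the following; f, h and their Jacobians F, H are placeholders to be supplied by the application.

```python
# One EKF predict/update cycle for a robot pose estimate x with covariance P.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """f: motion model, h: measurement model; F, H: their Jacobians at x."""
    x_pred = f(x, u)                        # predict with motion model
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                       # innovation from sonar/vision
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```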

융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구 (Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information)

  • 최재영;김성관
    • 제어로봇시스템학회논문지 / Vol. 18, No. 8 / pp.744-749 / 2012
  • In this paper, we show how to improve the accuracy of a mobile robot's localization by using sensor-network information that fuses a machine-vision camera, an encoder, and an IMU. The heading of the IMU comes from a geomagnetic sensor, which is constantly disturbed by the surrounding magnetic environment. To increase its accuracy, we isolate a template of the ceiling with the vision camera, measure its angle with a pattern-matching algorithm, and calibrate the IMU by comparing the measured angle with the IMU value to obtain an offset. The values used to estimate the robot's position (from the encoder, the IMU, and the camera-derived angle) are transferred to a host PC over a wireless network, and the host PC estimates the robot's location from all of them. As a result, we obtained more accurate position estimates than when relying on the IMU sensor alone.
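
The calibration idea reduces to maintaining a heading offset between the camera-derived ceiling angle and the magnetically disturbed IMU yaw; a minimal sketch with illustrative function names follows.

```python
# Heading calibration of an IMU yaw against a camera-derived reference angle.
def wrap_deg(a):
    """Wrap an angle to [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

def calibrate_heading(imu_yaw_deg, camera_yaw_deg):
    """Offset between the ceiling-pattern heading and the raw IMU yaw."""
    return wrap_deg(camera_yaw_deg - imu_yaw_deg)

def corrected_yaw(imu_yaw_deg, offset_deg):
    """Apply the stored offset to subsequent IMU readings."""
    return wrap_deg(imu_yaw_deg + offset_deg)
```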

천정부착 랜드마크 위치와 에지 화소의 이동벡터 정보에 의한 이동로봇 위치 인식 (Mobile Robot Localization using Ceiling Landmark Positions and Edge Pixel Movement Vectors)

  • 진홍신;아디카리 써얌프;김성우;김형석
    • 제어로봇시스템학회논문지 / Vol. 16, No. 4 / pp.368-373 / 2010
  • A new indoor mobile robot localization method is presented. The robot recognizes well-designed single-color landmarks on the ceiling with its vision system and uses them as references to compute its precise position. The proposed likelihood-prediction-based method enables the robot to estimate its position from the orientation of a landmark alone. The use of single-color landmarks reduces the complexity of the landmark structure and makes the landmarks easy to detect. Edge-based optical flow is further used to compensate for landmark recognition errors. The technique is applicable to navigation in indoor spaces of unlimited size. The prediction scheme and localization algorithm are proposed, and edge-based optical flow and data fusion are presented. Experimental results show that the proposed method estimates the robot position accurately, with a localization error within 5 cm and a directional error of less than 4 degrees.
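
Under assumed conventions (known ceiling height h above the camera, focal length f in pixels, and an upward-looking camera centered on the robot), a ceiling landmark's pixel offset from the image center fixes the robot position, as the rough sketch below shows; the paper's likelihood-prediction and optical-flow steps are omitted, and the names are hypothetical.

```python
# Robot (x, y) from one ceiling landmark with a known map position.
import math

def robot_position(landmark_world, du, dv, h, f, robot_yaw):
    """du, dv: landmark pixel offset from image center; h: ceiling height."""
    scale = h / f                               # meters per pixel at the ceiling
    dx_cam, dy_cam = scale * du, scale * dv     # landmark offset, camera frame
    # Rotate the camera-frame offset into the world frame by the robot yaw.
    c, s = math.cos(robot_yaw), math.sin(robot_yaw)
    dx_w = c * dx_cam - s * dy_cam
    dy_w = s * dx_cam + c * dy_cam
    return landmark_world[0] - dx_w, landmark_world[1] - dy_w
```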

어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘 (Localization using Ego Motion based on Fisheye Warping Image)

  • 최윤원;최경식;최정원;이석규
    • 제어로봇시스템학회논문지 / Vol. 20, No. 1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all of the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera viewing a reflective mirror or by combining multiple camera images, is essential because it is difficult to extract information from the raw image. The core of the proposed algorithm can be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-mounted fisheye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot position and angle with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the algorithm by comparing the position and angle obtained by the proposed algorithm with measurements from a global vision localization system.
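
The motion-vector extraction step can be sketched with OpenCV's pyramidal Lucas-Kanade implementation, as a stand-in for the paper's processing of the warped fisheye panorama; the corner-detection parameters below are illustrative, and the subsequent RANSAC vanishing-point estimation is not shown.

```python
# Track corner features between consecutive frames with pyramidal
# Lucas-Kanade optical flow; returns matched (old, new) point arrays.
import cv2
import numpy as np

def flow_vectors(prev_gray, curr_gray):
    """Motion vectors between two grayscale frames."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.ravel() == 1                  # keep successfully tracked points
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
```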