• Title / Abstract / Keyword: Vision Based Navigation

193 results found (processing time: 0.032 s)

Path Planning through Robot Position Correction (Path Finding via VRML and Vision Overlay for Autonomous Robots)

  • 손은호;박종호;김영철;정길도
    • KIEE Conference Proceedings / Proceedings of the 2006 KIEE Conference, Information and Control Division / pp.527-529 / 2006
  • In this paper, we find a robot's path using the Virtual Reality Modeling Language (VRML) and a vision overlay. To correct the robot's path, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image processing and neural-network pattern-matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid on the VRML scene. This paper describes how to realize the self-positioning and shows the overlap between the 2-D and VRML scenes. The method successfully defines a robot's path.

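The abstract does not name the "well-known localization algorithm" used for self-positioning. As a hedged sketch, assuming the robot measures ranges to two identified landmarks, the geometric core of such a step can be illustrated by plain circle-intersection trilateration (the function name and interface are illustrative, not from the paper):

```python
import math

def trilaterate(p1, r1, p2, r2):
    """Estimate a 2-D position from measured ranges r1, r2 to two known
    landmarks p1, p2: returns both intersection points of the range circles."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)    # distance from p1 to the chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))   # half-length of the chord
    mx = x1 + a * (x2 - x1) / d             # chord midpoint
    my = y1 + a * (y2 - y1) / d
    ox = h * (y2 - y1) / d                  # perpendicular offset
    oy = h * (x2 - x1) / d
    return (mx + ox, my - oy), (mx - ox, my + oy)
```

In practice a second cue (e.g. odometry or a third landmark) disambiguates the two candidate positions.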

Development of an Autonomous Navigation System for Unmanned Ground Vehicle

  • Kim, Yoon-Gu;Lee, Ki-Dong
    • Journal of the Institute of Embedded Engineering of Korea / Vol. 3, No. 4 / pp.244-250 / 2008
  • This paper describes the design and implementation of an unmanned ground vehicle (UGV) and evaluates how well autonomous navigation and remote control of the UGV can be performed through the optimized arbitration of several sensor data streams acquired from vision, obstacle-detection, and positioning sensors. Autonomous navigation requires lane detection and tracing, global positioning, and obstacle avoidance. In addition, two experimental environments were established for remote control: one uses a commercial racing-wheel module, and the other a haptic device suited to virtual-reality user applications. Experimental results show that autonomous navigation and remote control of the designed UGV can be achieved more effectively and accurately through proper arbitration of the sensor data and the navigation plan.

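The abstract does not specify how the "optimized arbitration of several sensor data" is computed. One common realization, shown here purely as an assumption-laden sketch, is inverse-variance weighting of redundant estimates of the same quantity:

```python
def arbitrate(estimates):
    """Fuse redundant scalar estimates (e.g. lateral offset from vision and
    from a positioning system) by inverse-variance weighting, so that less
    noisy sensors get more say. estimates: list of (value, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total   # fused value and its (reduced) variance
```

For example, two equally trusted sensors reporting 10.0 and 12.0 fuse to 11.0 with half the variance of either alone.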

Ultrasonic and Vision Data Fusion for Object Recognition

  • 고중협;김완주;정명진
    • KIEE Conference Proceedings / Proceedings of the 1992 KIEE Summer Conference, Part A / pp.417-421 / 1992
  • Ultrasonic and vision data need to be fused for efficient object recognition, especially in mobile-robot navigation. In the proposed approach, the whole ultrasonic echo signal is utilized, and data fusion is performed based on each sensor's characteristics. The approach is shown to be effective through experimental results.

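The fusion scheme itself is not detailed in the abstract. A minimal sketch of the complementary idea behind ultrasonic/vision fusion (each sensor contributes the quantity it measures well: the sonar a range, the camera a bearing) might look like this; the function and frame conventions are assumptions, not taken from the paper:

```python
import math

def fuse_range_bearing(ultrasonic_range, vision_bearing_deg):
    """Combine an ultrasonic range with a vision-derived bearing into a
    2-D object position (x forward, y left) in the sensor frame."""
    th = math.radians(vision_bearing_deg)
    return ultrasonic_range * math.cos(th), ultrasonic_range * math.sin(th)
```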

Improvement on the Image Processing for an Autonomous Mobile Robot with an Intelligent Control System

  • Kubik, Tomasz;Loukianov, Andrey A.
    • ICROS Conference Proceedings / ICCAS 2001 / pp.36.4-36 / 2001
  • A robust and reliable path-recognition system is a necessary component of autonomous navigation for a mobile robot, helping it determine its current position in its navigation map. This paper describes a computer-vision path-recognition system that uses an on-board video camera as vision-based driving assistance for an autonomously navigating mobile robot. A common problem of visual systems is that their reliability is affected by changing lighting conditions. Here, two image-processing methods for path detection were developed to reduce the effect of luminance: one based on the RGB color model and features of the path, the other based on the HSV color model with the luminance component discarded.

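The HSV-based method is described only at a high level. A minimal sketch of the underlying idea, classifying path pixels by hue and saturation while ignoring the value (luminance) channel so that brightness changes do not flip the decision, might look as follows; the thresholds and the assumption of a reddish path color are illustrative only:

```python
import colorsys

def path_mask(rgb_pixels, hue_lo=0.0, hue_hi=0.05, sat_min=0.5):
    """Classify RGB pixels as path/non-path from hue and saturation only;
    the value (luminance) channel is ignored, so a dark and a bright pixel
    of the same color are classified alike."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(hue_lo <= h <= hue_hi and s >= sat_min)
    return mask
```

A dark red pixel and a bright red pixel are both accepted, while an achromatic gray pixel is rejected regardless of its brightness.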

Vision-based Localization for AUVs Using Weighted Template Matching in a Structured Environment

  • 김동훈;이동화;명현;최현택
    • Journal of Institute of Control, Robotics and Systems / Vol. 19, No. 8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object-detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image-processing step, weighted correlation coefficient-based template matching and color-based image segmentation are proposed to improve on the conventional approach. In the localization step, to apply the landmark-detection results to MCL and EKF-SLAM, dead-reckoning information and landmark-detection results are used in the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
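The paper's weighted correlation coefficient is not given in closed form in the abstract. One plausible reading, a normalized cross-correlation score with a per-pixel weight map emphasizing reliable template pixels, can be sketched as:

```python
import numpy as np

def weighted_ncc(patch, template, weights):
    """Weighted normalized correlation coefficient between an image patch
    and a template; 'weights' emphasizes reliable template pixels.
    Returns a score in [-1, 1]."""
    w = weights / weights.sum()
    dp = patch - (w * patch).sum()        # weighted-mean-centered patch
    dt = template - (w * template).sum()  # weighted-mean-centered template
    den = np.sqrt((w * dp**2).sum() * (w * dt**2).sum())
    return float((w * dp * dt).sum() / den) if den > 0 else 0.0
```

Scanning this score over an image and taking the maximum yields the landmark-detection step the localization phase relies on.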

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems
    • Vol. 13, No. 2 / pp.133-139 / 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest-point detector was used to find the natural features. Using an optical-flow detection algorithm, information about the vehicle's heading and translation was obtained and used to track its movements. Random noise caused by uneven lighting disrupted the calculation of the vehicle translation, so a Kalman filter was used to estimate the vehicle's position. These algorithms were tested on a vehicle in a real environment. The image-processing method recognized the landmarks precisely, and the Kalman filter estimated the vehicle's position accurately. The experimental results confirm that the proposed approaches can be implemented in practical situations.
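A generic linear Kalman filter predict/update cycle of the kind used here to smooth noisy translation estimates can be sketched as follows; the constant-velocity model in the usage example is an assumption for illustration, not taken from the paper:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter:
    state x, covariance P, measurement z, models F, H, noises Q, R."""
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P

# Illustrative constant-velocity tracking of one position coordinate,
# with noisy position measurements standing in for optical-flow output.
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # position += velocity each step
H = np.array([[1.0, 0.0]])                # only position is measured
Q, R = 0.01 * np.eye(2), np.array([[1.0]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0, 5.0]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

After a few updates on a linear trajectory, both the position and the unmeasured velocity estimates converge toward the true values.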

A Real-Time NDGPS/INS Navigation System Based on Artificial Vision for Helicopters

  • 김재형;유준;곽휘권
    • Journal of the Korea Institute of Military Science and Technology / Vol. 11, No. 3 / pp.30-39 / 2008
  • An artificial-vision-aided NDGPS/INS system has been developed and tested in the dynamic environments of ground and flight vehicles to evaluate overall system performance. The results show significant advantages in position accuracy and situational awareness. The accuracy of the NDGPS/INS integration meets CAT-I precision approach and landing requirements, and we confirm that the proposed system is effective enough to improve flight safety through artificial vision. The system design, software algorithms, and flight-test results are presented in detail.

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing

  • 이상훈;송진모;배종수
    • Journal of the Korea Institute of Military Science and Technology / Vol. 18, No. 3 / pp.226-233 / 2015
  • Pose estimation is an important operation in many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method takes a distinctive approach to the pose-estimation problem: we combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and an inertial estimate of the camera's 6-DOF (degrees-of-freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to solve the resulting optimization problem neatly. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
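The specific linear inhomogeneous equation combining feature points and inertial estimates is not reproduced in the abstract, but the SVD solution step it mentions is, in general, the standard least-squares solve sketched below:

```python
import numpy as np

def solve_lsq_svd(A, b):
    """Least-squares solution of the inhomogeneous system A x = b via SVD:
    x = V diag(1/s) U^T b, with small singular values zeroed for stability."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = max(A.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)   # guard rank-deficient cases
    return Vt.T @ (s_inv * (U.T @ b))
```

For a consistent overdetermined system the solve recovers the exact parameter vector; otherwise it returns the minimum-norm least-squares estimate.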

Development of a Hovering Robot System for Calamity Observation

  • Kang, M.S.;Park, S.;Lee, H.G.;Won, D.H.;Kim, T.J.
    • ICROS Conference Proceedings / ICCAS 2005 / pp.580-585 / 2005
  • A QRT (Quad-Rotor Type) hovering robot system is developed for quick detection and observation of circumstances in calamity environments such as indoor fire scenes. The UAV (Unmanned Aerial Vehicle) is equipped with four propellers, each driven by its own electric motor; an embedded DSP-based controller; an INS (Inertial Navigation System) using 3-axis rate gyros; a CCD camera with a wireless transmitter for observation; and an ultrasonic range sensor for height control. The developed hovering robot shows stable flying performance using RIC (Robust Internal-loop Compensator)-based disturbance compensation and a vision-based localization method. The UAV can also avoid obstacles using eight IR and four ultrasonic range sensors. The VTOL (Vertical Take-Off and Landing) flying object flies into indoor fire scenes and sends the images captured by the CCD camera to the operator. Small-sized UAVs of this kind can be widely used in various calamity-observation tasks without endangering humans in harmful environments.


Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles

  • Quan, Nguyen Van;Eum, Hyuk-Min;Lee, Jeisung;Hyun, Chang-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 2 / pp.140-146 / 2013
  • In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates path tracking using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and detect markers in front of it. The other camera is attached so that its view is perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of a marker are also obtained using these two cameras. We propose five movement patterns that guarantee smooth AGV performance during path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives AGVs greater flexibility, including easy layout changes, autonomy, and economy. The algorithm was validated in an experiment using a two-wheeled mobile robot.
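The five movement patterns are listed in the abstract, but the transition logic below is an illustrative assumption: a turn marker detected while moving straight triggers pre-turning, then the turn itself, then straight driving again.

```python
def next_pattern(pattern, marker=None):
    """Advance an AGV through the five movement patterns; 'marker' is the
    marker type ('left', 'right', 'stop') seen by the front camera, if any."""
    if pattern == "starting":
        return "moving_straight"
    if pattern == "moving_straight":
        if marker in ("left", "right"):
            return "pre_turning"     # align with the marker before turning
        if marker == "stop":
            return "stopping"
        return "moving_straight"
    if pattern == "pre_turning":
        return "turning"             # commit to the left/right turn
    if pattern == "turning":
        return "moving_straight"     # resume straight driving after the turn
    return "stopping"                # terminal state
```

Driving a full cycle through the machine (start, straight, turn at a marker, straight, stop) exercises all five patterns.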