• Title/Summary/Keywords: Vision Based Navigation


Development of Sensor Device and Probability-based Algorithm for Braille-block Tracking (확률론에 기반한 점자블록 추종 알고리즘 및 센서장치의 개발)

  • Roh, Chi-Won;Lee, Sung-Ha;Kang, Sung-Chul;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.3 / pp.249-255 / 2007
  • In a fire, it is difficult for a rescue robot to use sensors such as vision, ultrasonic, or laser distance sensors because dense smoke diffuses, refracts, or blocks light and sound. However, the braille blocks installed for the visually impaired in public places such as subway stations can serve as a map for an autonomous mobile robot's localization and navigation. In this paper, we developed a laser sensor scan device that can detect braille blocks despite dense smoke and integrated it into the robot developed at KIST to carry out rescue missions in various hazardous disaster areas. We implemented an MCL algorithm that estimates the robot's attitude from the scanned data, transformed the braille block map into a topological map, and designed a nonlinear path tracking controller for autonomous navigation. Simulations and experiments verified that the developed laser sensor device and the proposed localization method are effective for autonomous tracking of braille blocks, and that the autonomous navigation robot system can be used for rescue under fire.
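The MCL (Monte Carlo Localization) step described in this abstract can be sketched as a standard particle filter update; the motion noise magnitudes, likelihood function, and pose representation below are illustrative assumptions, not the paper's models:

```python
import random

def mcl_step(particles, odometry, measurement, measure_likelihood):
    """One Monte Carlo Localization update: predict, weight, resample.

    particles: list of (x, y, theta) pose hypotheses
    odometry:  (dx, dy, dtheta) motion since the last step
    measure_likelihood: function(pose, measurement) -> non-negative weight
    """
    # Predict: propagate each particle through a noisy motion model
    # (noise magnitudes here are illustrative assumptions).
    moved = [(x + odometry[0] + random.gauss(0, 0.02),
              y + odometry[1] + random.gauss(0, 0.02),
              th + odometry[2] + random.gauss(0, 0.01))
             for (x, y, th) in particles]

    # Weight: score each hypothesis against the sensor measurement.
    weights = [measure_likelihood(p, measurement) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

In the paper's setting, the braille-block map would enter through `measure_likelihood`, scoring each pose hypothesis against the laser-scanned block pattern.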

Stereo-Vision Based Road Slope Estimation and Free Space Detection on Road (스테레오비전 기반의 도로의 기울기 추정과 자유주행공간 검출)

  • Lee, Ki-Yong;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.199-205 / 2011
  • This paper presents an algorithm for detecting free space for autonomous vehicle navigation. The algorithm consists of two main steps: 1) estimation of the longitudinal profile of the road, and 2) detection of free space. The longitudinal road profile is estimated by detecting, in the v-disparity image, the line corresponding to the road slope, using the Hough transform and Dijkstra's algorithm. To detect free space, we find in the u-disparity image the boundary line between free space and obstacle regions using dynamic programming. Free space is then determined from the detected v-line and u-line. The proposed algorithm is shown to be successful through experiments under various traffic scenarios.
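Step 1 above rests on the v-disparity representation: histogramming disparities per image row, so that a planar road collapses to a near-straight line whose slope reflects the road's longitudinal profile. A minimal sketch, with a synthetic input and illustrative value ranges:

```python
import numpy as np

def v_disparity(disp, max_d):
    """Build a v-disparity image: for each image row v, histogram the
    disparity values occurring in that row. A planar road projects to a
    (roughly) straight line in this image, whose slope encodes road slope."""
    h, _ = disp.shape
    vd = np.zeros((h, max_d + 1), dtype=np.int32)
    for v in range(h):
        row = disp[v]
        valid = row[(row >= 0) & (row <= max_d)]  # drop invalid disparities
        vd[v] += np.bincount(valid.astype(np.int64), minlength=max_d + 1)
    return vd
```

The v-line the paper extracts with the Hough transform and Dijkstra's algorithm would then be fitted to the dominant line in `vd`.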

A New Refinement Method for Structure from Stereo Motion (스테레오 연속 영상을 이용한 구조 복원의 정제)

  • 박성기;권인소
    • Journal of Institute of Control, Robotics and Systems / v.8 no.11 / pp.935-940 / 2002
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community and its properties are becoming well understood. In this paper, using a stereo image sequence and a direct method as tools for SFM, we present a new method for overcoming the bas-relief ambiguity. We first show that direct methods based on the optical flow constraint equation are also intrinsically exposed to this ambiguity, even though they employ robust estimation. Therefore, regarding the motion and depth estimates obtained by the robust direct method as approximations, we suggest a method that refines both the stereo displacement and the motion displacement with sub-pixel accuracy, which is the central process for reducing the ambiguity. Experiments with real image sequences show that the proposed algorithm improves estimation accuracy.

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok;Lim, Mee-Seub;Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a method for localizing a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows: First, we utilize an omni-directional camera that captures instantaneous $360^{\circ}$ panoramic images around the robot. Second, nodes around the robot are extracted from the correlation coefficients of the circular horizontal line between the landmark image and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate the computation, node candidates are pre-selected using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
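The FFT acceleration mentioned above relies on computing circular cross-correlation in the frequency domain; this sketch shows the generic trick for two 1-D scans such as circular horizontal lines (function names are placeholders, not the paper's code):

```python
import numpy as np

def circular_correlation(a, b):
    """Circular cross-correlation of two equal-length 1-D signals via FFT.
    Returns c where c[k] = sum_n a[n] * b[(n + k) % N]."""
    fa = np.fft.fft(a)
    fb = np.fft.fft(b)
    # Correlation theorem: the spectrum of the cross-correlation of real
    # signals is conj(FA) * FB; one inverse FFT recovers all N lags at once.
    return np.real(np.fft.ifft(np.conj(fa) * fb))

def best_shift(a, b):
    """Rotation of b that best aligns it with a (argmax of the correlation)."""
    return int(np.argmax(circular_correlation(a, b)))
```

The shift maximizing the correlation would give the relative heading between a stored landmark scan and the current one, in O(N log N) rather than O(N^2).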

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.12 no.4 / pp.389-394 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but accuracy remains limited because most of it is based on statistical approaches. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural network pattern matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
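The angular-separation computation mentioned above follows directly from the pinhole model: each image column defines a bearing through the focal length. A minimal sketch under that assumption (symbols are illustrative, not the paper's notation):

```python
import math

def bearing(u, f, cx=0.0):
    """Bearing (radians) of image column u for a pinhole camera with
    focal length f (in pixels) and principal-point column cx."""
    return math.atan2(u - cx, f)

def angular_separation(u1, u2, f, cx=0.0):
    """Angle between the lines of sight through image columns u1 and u2."""
    return abs(bearing(u1, f, cx) - bearing(u2, f, cx))
```

With three landmarks, two such angles constrain the camera position by triangulation, which is what makes a single image sufficient for localization here.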

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.04a / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but accuracy remains limited because most of it is based on statistical approaches. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural network pattern matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.

Robust Control of Robot Manipulators using Vision Systems

  • Lee, Young-Chan;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.7 no.2 / pp.162-170 / 2003
  • In this paper, we propose a robust controller for trajectory control of n-link robot manipulators using features obtained from visual feedback. To reduce the tracking error of the robot manipulator due to parametric uncertainties, integral action is included in the dynamic control part of the inner control loop. The desired trajectory for tracking is generated from features extracted by the camera mounted on the end effector. The stability of the robust state feedback control system is shown by the Lyapunov method. Simulation and experimental results on a 5-link robot manipulator with two degrees of freedom show that the proposed method has good tracking performance.

Obstacle Avoidance Algorithm using Stereo (스테레오 기반의 장애물 회피 알고리듬)

  • Kim, Se-Sun;Kim, Hyun-Soo;Ha, Jong-Eun
    • Journal of Institute of Control, Robotics and Systems / v.15 no.1 / pp.89-93 / 2009
  • This paper deals with obstacle avoidance for an unmanned vehicle using a stereo system. The DARPA Grand Challenge 2005 showed that a robot can move autonomously through given waypoints; RADAR, IMS (Inertial Measurement System), GPS, and cameras were used for autonomous navigation. In this paper, we focus on a stereo system for autonomous navigation. Our approach is based on that of Singh et al. [5], which has been used successfully on an unmanned vehicle and a planetary robot. We propose an improved algorithm for obstacle avoidance by modifying the cost function of Singh et al. [5]. The proposed algorithm gives sharper contrast in choosing the local path for obstacle avoidance, as verified in experimental results.
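The abstract does not give the modified cost function, so the sketch below only illustrates the general idea of sharpening contrast between candidate local paths; the terms, weights, and exponent are hypothetical, not Singh et al.'s or the authors' formulation:

```python
def path_cost(clearance, heading_error, w_clear=1.0, w_head=0.5, sharpness=2.0):
    """Illustrative local-path cost: low obstacle clearance and large heading
    error are penalized; raising `sharpness` above 1 exaggerates differences
    between candidates, giving sharper contrast when selecting a path."""
    raw = w_clear / max(clearance, 1e-6) + w_head * abs(heading_error)
    return raw ** sharpness

def pick_path(candidates):
    """candidates: list of (clearance_m, heading_error_rad).
    Returns the index of the minimum-cost candidate."""
    costs = [path_cost(c, h) for (c, h) in candidates]
    return costs.index(min(costs))
```

The design point is that a convex sharpening of the raw cost leaves the ranking of clearly separated candidates unchanged while widening the margin between near-ties.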

Determinants for the Development of a Logistics Hub

  • Julian A Barona;Nam Ki Chan;Shin Chang Hoon;Song Jae Young
    • Journal of Navigation and Port Research / v.29 no.2 / pp.127-134 / 2005
  • The purpose of this study is to explore the concept of a logistics hub, identify key factors and milestones for its development, and offer recommendations and implications for developing countries. To this end, the countries competing to be the logistics hub of Northeast Asia (NEA), such as South Korea, Japan, and China, are taken into consideration. These countries have placed among their priority policies a vision of developing a logistics hub to become the central area of the region and achieve microeconomic and macroeconomic prosperity. Based on a review of the relevant literature, five factors emerged as key determinants for the development of a hub project: 1. logistics services support and infrastructure, 2. business environment, 3. economic determinants, 4. political support, and 5. access to international markets. These are analyzed together with their different variables using statistical methods.

Control and Calibration for Robot Navigation based on Light's Panel Landmark (천장 전등패널 기반 로봇의 주행오차 보정과 제어)

  • Jin, Tae-Seok
    • Journal of the Korean Society of Industry Convergence / v.20 no.2 / pp.89-95 / 2017
  • In this paper, we suggest a method for a mobile robot to move safely from an initial position to a goal position in a wide environment such as a building. Estimating the position of a mobile robot with an odometry encoder alone is problematic in such an environment: because of wheel slipping, the encoder measurement error accumulates over time, so it must be compensated using another sensor. A vision sensor is used to correct the position of the mobile robot by observing the regularly attached light panels on the building's ceiling. For global path planning, the building's map is modeled as a graph, so Floyd's shortest path algorithm can be applied to find the path plan. The effectiveness of the method is verified through simulations and experiments.
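The global planning step, modeling the building map as a graph and running Floyd's shortest path algorithm, can be sketched as follows (the example graph, weights, and path-reconstruction helper are illustrative, not the paper's map):

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths on n nodes via Floyd's algorithm.
    edges: {(i, j): weight}, treated as undirected.
    Returns (dist, nxt), where nxt supports path reconstruction."""
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for (i, j), w in edges.items():
        dist[i][j] = dist[j][i] = w
        nxt[i][j], nxt[j][i] = j, i
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Relax: is going through k shorter than the best known i->j?
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, i, j):
    """Reconstruct the node sequence from i to j (assumes j is reachable)."""
    route = [i]
    while i != j:
        i = nxt[i][j]
        route.append(i)
    return route
```

Since the building's node set is fixed, the O(n^3) all-pairs computation can be done once offline, and any start-to-goal query afterward is just a table lookup.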