• Title/Summary/Keyword: Vision navigation

A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference
    • /
    • 2002.07d
    • /
    • pp.2341-2343
    • /
    • 2002
  • This paper describes vision-based line recognition and driving-direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor serves as the navigation guide, and a binary image of the guide stripe captured by a CCD camera is used. To detect the guideline quickly and exactly, a variable-thresholding algorithm is employed. This low-cost line-tracking system runs efficiently on PC-based real-time vision processing. Steering control is realized through a controller driven by the guide-line angle error. The method is tested on a typical AGV with a single camera in a laboratory environment.
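
A minimal sketch of the pipeline the abstract describes, assuming OpenCV; the mean-based adaptive threshold stands in for the paper's unspecified variable-thresholding rule, and the line fit is illustrative:

```python
import cv2
import numpy as np

def guide_line_angle_error(frame):
    """Return the angle (in degrees) between the guide stripe and the
    image's vertical axis, usable as a steering error signal."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # "Variable" threshold: binarize each pixel against its local mean so
    # the dark stripe separates cleanly under uneven corridor lighting.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    ys, xs = np.nonzero(binary)
    if xs.size < 100:                       # stripe not visible
        return None
    # Fit a straight line to the stripe pixels and take its direction.
    vx, vy, _, _ = cv2.fitLine(
        np.column_stack([xs, ys]).astype(np.float32),
        cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if vy < 0:                              # orient the direction downward
        vx, vy = -vx, -vy
    return float(np.degrees(np.arctan2(vx, vy)))  # 0 = stripe straight ahead
```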

Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision (도심 자율주행을 위한 비전기반 차선 추종주행 실험)

  • Suh, Seung-Beum;Kang, Yeon-Sik;Roh, Chi-Won;Kang, Sung-Chul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.480-487
    • /
    • 2009
  • Autonomous lane detection with vision is difficult because of varying road conditions, such as shadowed road surfaces, changing light conditions, and signs painted on the road. In this paper we propose a robust lane detection algorithm that overcomes the shadow problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane with a lane-following controller. In parallel with the lane-following controller, the global position of the robot is estimated by the developed localization method, in order to identify the locations where the lane is discontinued. Experiments, performed in a region where GPS measurement is unreliable, show good performance in detecting and following the lane under complex conditions with shade, water marks, and so on.
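
The statistical method itself is not detailed in the abstract. One hedged sketch of the general idea: threshold each image row against its own intensity statistics, so a shadow that darkens part of the road shifts the local statistics rather than defeating a single global threshold (the per-row model and the constant k below are illustrative, not the paper's method):

```python
import numpy as np

def detect_lane_mask(gray):
    """Shadow-tolerant lane-marking mask: keep pixels that are bright
    relative to their own row's intensity distribution."""
    gray = gray.astype(np.float32)
    row_mean = gray.mean(axis=1, keepdims=True)
    row_std = gray.std(axis=1, keepdims=True) + 1e-6
    # Lane paint is brighter than surrounding asphalt: keep pixels more
    # than k standard deviations above their row's mean.
    k = 1.5
    mask = (gray - row_mean) > k * row_std
    return (mask * 255).astype(np.uint8)
```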

A Survey of Research on Human-Vehicle Interaction in Defense Area (국방 분야의 인간-차량 인터랙션 연구)

  • Yang, Ji Hyun;Lee, Sang Hun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.18 no.3
    • /
    • pp.155-166
    • /
    • 2013
  • We present recent human-vehicle interaction (HVI) research conducted in the area of defense and military applications. Research topics discussed in this paper include: training simulation for overland navigation tasks; expertise effects on overland navigation performance and scan patterns; pilots' perception and confidence in an overland navigation task; effects of UAV (Unmanned Aerial Vehicle) supervisory control on F-18 formation flight performance in a simulator environment; autonomy balancing in a manned-unmanned teaming (MUT) swarm attack; enabling visual detection of IED (Improvised Explosive Device) indicators through Perceptual Learning Assessment and Training; a usability test of DaViTo (Data Visualization Tool); and modeling peripheral vision for moving-target search and detection. These diverse, leading HVI studies in the defense domain suggest future research directions in other emerging HVI areas such as the automotive industry and the aviation domain.

Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.1
    • /
    • pp.24-29
    • /
    • 2013
  • In this paper, we propose a fuzzy inference model as a navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from ultrasonic sensors. Instead of the "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy combines fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the data from the ultrasonic sensors and a vision sensor are fused in the identification process. Experimental results highlight interesting aspects of the goal-seeking, obstacle-avoidance, and decision-making processes that arise from the navigation interaction.
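
A minimal sketch of the command-fusion idea: behavior-level commands are blended by a fuzzy proximity weight rather than fusing raw sensor data. The membership breakpoints (0.2 m, 1.5 m) and the 45-degree avoidance command are illustrative, not the paper's tuned rule base:

```python
import numpy as np

def fuse_commands(goal_bearing, obstacle_bearing, min_obstacle_dist):
    """Blend a goal-seeking steering command with an obstacle-avoidance
    command via a fuzzy proximity weight (weighted-average defuzzification)."""
    # Membership of "obstacle is near": 1 at 0.2 m, fading to 0 by 1.5 m.
    near = np.clip((1.5 - min_obstacle_dist) / (1.5 - 0.2), 0.0, 1.0)
    cmd_goal = goal_bearing                                    # head toward the goal
    cmd_avoid = -np.sign(obstacle_bearing) * np.radians(45.0)  # steer away from the obstacle's side
    return near * cmd_avoid + (1.0 - near) * cmd_goal
```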

Ultrasonic and Vision Data Fusion for Object Recognition (초음파센서와 시각센서의 융합을 이용한 물체 인식에 관한 연구)

  • Ko, Joong-Hyup;Kim, Wan-Ju;Chung, Myung-Jin
    • Proceedings of the KIEE Conference
    • /
    • 1992.07a
    • /
    • pp.417-421
    • /
    • 1992
  • Ultrasonic and vision data need to be fused for efficient object recognition, especially in mobile robot navigation. In the proposed approach, the whole ultrasonic echo signal is utilized, and data fusion is performed based on each sensor's characteristics. Experimental results show the approach to be effective.
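
The abstract does not specify the fusion rule; a minimal sketch of characteristic-weighted fusion, here inverse-variance weighting of two range estimates, illustrates the general idea:

```python
def fuse_estimates(d_ultra, var_ultra, d_vision, var_vision):
    """Inverse-variance fusion of two range estimates of the same object:
    the sensor with the smaller variance (better characteristic for this
    measurement) dominates the fused value."""
    w_u = 1.0 / var_ultra
    w_v = 1.0 / var_vision
    return (w_u * d_ultra + w_v * d_vision) / (w_u + w_v)
```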

Aisle following of a mobile robot using machine vision (영상 정보를 이용한 로보트의 창법 연구)

  • Jang, Moo-Kyung;Han, Min-Hong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1990.10a
    • /
    • pp.591-595
    • /
    • 1990
  • This paper describes a method of aisle following for a mobile robot using machine vision. As a navigation guide, a black strip painted on the lower part of the aisle wall is used. The offset of the vehicle from the center of the aisle and its heading angle are determined from the binary image of the guide strip captured by a CCD camera. To remove the effects of noise, i.e., breaks in the guide strip at doors or reflections of light, a pixel-sampling method is used together with a consistency check of the incline of the sampled pixels.
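
A hedged sketch of the pixel-sampling and incline-consistency idea (sample one strip pixel per row, discard samples whose incline disagrees with the consensus); the tolerances and sampling density are illustrative:

```python
import numpy as np

def strip_offset_and_heading(binary, n_rows=10, incline_tol=0.3):
    """Estimate lateral offset and heading from a binarized guide-strip
    image by sampling a few rows and rejecting inconsistent samples."""
    h, w = binary.shape
    centers = []
    for r in np.linspace(0, h - 1, n_rows).astype(int):
        xs = np.nonzero(binary[r])[0]
        if xs.size:                       # row may miss the strip (door, glare)
            centers.append((xs.mean(), float(r)))
    if len(centers) < 3:
        return None
    pts = np.array(centers)
    # Consistency check: the incline between consecutive samples must
    # agree with the median incline; outliers (reflections, breaks in
    # the strip) are discarded.
    incline = np.diff(pts[:, 0]) / np.diff(pts[:, 1])
    ok = np.abs(incline - np.median(incline)) < incline_tol
    if not ok.any():
        return None
    heading = np.degrees(np.arctan(np.median(incline[ok])))
    offset = pts[-1, 0] - w / 2.0         # lateral offset near the image bottom
    return offset, heading
```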

Obstacle Recognition Using the Vision and Ultrasonic Sensor in a Mobile Robot (영상과 초음파 정보를 이용한 이동로보트의 장애물 인식)

  • Park, Min-Ki;Park, Min-Yong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.9
    • /
    • pp.1154-1161
    • /
    • 1995
  • In this paper, a new method is proposed in which vision and ultrasonic sensors are used to recognize obstacles and to obtain their position and size. Ultrasonic sensors are used to obtain the actual navigation path width of the mobile robot. In conjunction with camera images of the path, recognition of obstacles and determination of their distance, direction, and width are carried out. The characteristics of the sensors and of mobile robots in general make it difficult to recognize all environments; accordingly, a restricted environment is employed for this study.
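
Combining an ultrasonic range with the obstacle's extent in the image suggests a pinhole-model computation. A hedged sketch under that assumption; all parameter names are illustrative, as the abstract gives no calibration details:

```python
import math

def obstacle_width_and_bearing(dist_m, x_left_px, x_right_px, cx_px, fx_px):
    """Recover an obstacle's physical width and bearing from the
    ultrasonic range plus its edges in the image (pinhole model).
      dist_m                -- ultrasonic range to the obstacle (m)
      x_left_px, x_right_px -- obstacle edges located in the image (px)
      cx_px, fx_px          -- principal point x and focal length (px)"""
    width_m = dist_m * (x_right_px - x_left_px) / fx_px
    center_px = 0.5 * (x_left_px + x_right_px)
    bearing_rad = math.atan2(center_px - cx_px, fx_px)  # angle off the optical axis
    return width_m, bearing_rad
```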

LVLN : A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation (LVLN: 시각-언어 이동을 위한 랜드마크 기반의 심층 신경망 모델)

  • Hwang, Jisu;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.9
    • /
    • pp.379-390
    • /
    • 2019
  • In this paper, we propose a novel deep neural network model for Vision-and-Language Navigation (VLN) named LVLN (Landmark-based VLN). In addition to visual features extracted from input images and linguistic features extracted from the natural-language instructions, the model makes use of information about places and landmark objects detected in the images. It also applies a context-based attention mechanism to associate each entity mentioned in the instruction with the corresponding region of interest (ROI) in the image and with the corresponding detected place and landmark object. Moreover, to improve the rate of successfully arriving at the target goal, the model adopts a progress monitor module that checks substantial progress toward the goal. In experiments with the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, we demonstrate the high performance of the proposed model.
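
A minimal sketch of a context-based attention step in the spirit described (each instruction entity attends over detected ROI/landmark features); the dimensions and scaled dot-product scoring are illustrative, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_attention(entity_vecs, roi_feats):
    """Ground each instruction entity in detected ROI / landmark features
    via scaled dot-product attention.
      entity_vecs -- (n_entities, d) embeddings of entities in the instruction
      roi_feats   -- (n_rois, d) features of ROIs / landmarks from the image"""
    scores = entity_vecs @ roi_feats.T / np.sqrt(roi_feats.shape[1])
    attn = softmax(scores, axis=1)   # which ROI each entity likely refers to
    grounded = attn @ roi_feats      # ROI-grounded context per entity
    return grounded, attn
```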

Two Feature Points Based Laser Scanner for Mobile Robot Navigation (레이저 센서에서 두 개의 특징점을 이용한 이동로봇의 항법)

  • Kim, Joo-Wan;Shim, Duk-Sun
    • Journal of Advanced Navigation Technology
    • /
    • v.18 no.2
    • /
    • pp.134-141
    • /
    • 2014
  • Mobile robots use various sensors for navigation, such as wheel encoders, vision sensors, sonar, and laser sensors. Dead reckoning with wheel encoders results in the accumulation of positioning errors, so wheel encoders cannot be used alone. The large amount of information from vision sensors increases the number of features and the complexity of the perception scheme, and sonar sensors are not suitable for positioning because of their poor accuracy. Laser sensors, on the other hand, provide relatively accurate distance information. In this paper we propose to extract angular information from the distance readings of a laser range finder and to use a Kalman filter that matches the heading and distance from the laser range finder with those from the wheel encoders. With a single feature point, the error may grow considerably when the feature point varies or the tracker jumps to a new feature point. To solve this problem, we propose using two feature points and show that the positioning error can be reduced substantially.
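
A minimal sketch of the encoder/laser fusion on a single heading state: wheel-encoder odometry drives the Kalman prediction, and the laser-derived heading (from the tracked feature points) supplies the correction. The scalar model and noise terms are illustrative:

```python
def kf_heading_update(theta, P, d_theta_enc, Q, theta_laser, R):
    """One predict/correct cycle of a scalar Kalman filter on heading.
      theta, P    -- heading estimate and its variance
      d_theta_enc -- heading change from the wheel encoders
      Q, R        -- process / measurement noise variances
      theta_laser -- heading measured from the laser feature points"""
    # Predict with dead reckoning (errors accumulate through Q).
    theta_pred = theta + d_theta_enc
    P_pred = P + Q
    # Correct with the laser-derived heading measurement.
    K = P_pred / (P_pred + R)
    theta_new = theta_pred + K * (theta_laser - theta_pred)
    P_new = (1.0 - K) * P_pred
    return theta_new, P_new
```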

Vision Navigation System by Autonomous Mobile Robot

  • Shin, S.Y.;Lee, J.H.;Kang, H.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.146.3-146
    • /
    • 2001
  • Vision has been integrated into several navigation systems. This paper shows that the system recognizes difficult indoor roads and open areas without any specific mark such as a painted guide line or tape. In this method, the robot navigates with visual sensors, using visual information to guide itself along the road. An artificial neural network system is used to decide where to move. The system is designed with a USB web camera as the visual sensor.
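
A minimal sketch of such a neural decision stage, assuming a small feed-forward network mapping camera features to one of three motions; the architecture and action set are illustrative, as the abstract does not specify them:

```python
import numpy as np

def steering_decision(features, W1, b1, W2, b2):
    """Map image features from the USB camera to one of three motions
    with a small feed-forward network (weights trained offline)."""
    h = np.tanh(features @ W1 + b1)        # hidden layer
    logits = h @ W2 + b2                   # one score per action
    return ("left", "straight", "right")[int(np.argmax(logits))]
```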
