• Title/Summary/Keyword: Vision navigation


Vision-Based Robust Control of Robot Manipulators with Jacobian Uncertainty (자코비안 불확실성을 포함하는 로봇 매니퓰레이터의 영상기반 강인제어)

  • Kim, Chin-Su; Jie, Min-Seok; Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.10 no.2 / pp.113-120 / 2006
  • In this paper, a vision-based robust controller for tracking a desired trajectory with a robot manipulator is proposed. The trajectory is generated to move the feature point to its desired position, and the robot follows the trajectory to reach that position. A robust controller is proposed to compensate for the parametric uncertainties of the robot manipulator that appear in the control input. In addition, a vision-based robust control input is proposed to compensate for uncertainties in the Jacobian. The stability of the closed-loop system is shown by the Lyapunov method. The performance of the proposed method is demonstrated by simulations and experiments on a two-degree-of-freedom 5-link robot manipulator.
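
The abstract does not reproduce the paper's control law, so the following is only a minimal sketch of classical image-based visual servoing, in which an approximate image Jacobian (interaction matrix) built from uncertain depth estimates is pseudo-inverted to obtain a camera velocity command; the feature coordinates, depths, and gain below are hypothetical.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 image Jacobian for a point feature (x, y) in
    normalized image coordinates at (possibly uncertain) depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command v = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    # The pseudo-inverse tolerates an imperfect Jacobian estimate.
    return -gain * np.linalg.pinv(L) @ error

# Example: two tracked feature points driven toward desired image positions.
s      = [(0.10, 0.05), (-0.08, 0.12)]
s_star = [(0.00, 0.00), (-0.05, 0.05)]
print(ibvs_velocity(s, s_star, depths=[1.2, 1.1]))  # (vx, vy, vz, wx, wy, wz)
```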

Estimation of Angular Acceleration By a Monocular Vision Sensor

  • Lim, Joonhoo; Kim, Hee Sung; Lee, Je Young; Choi, Kwang Ho; Kang, Sung Jin; Chun, Sebum; Lee, Hyung Keun
    • Journal of Positioning, Navigation, and Timing / v.3 no.1 / pp.1-10 / 2014
  • Recently, monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues. The issue entails large costs in terms of the national economy and social benefit. To monitor and counteract accidents promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases. However, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by the sensor, and the possibility of using angular acceleration to detect accidents such as jackknifing and rollover is investigated. The feasibility of the proposed method is evaluated by an experiment based on actual measurements.
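
The abstract only states that angular acceleration is estimated from consecutive frames, so the sketch below merely illustrates one plausible pipeline: recover an in-plane rotation angle per frame pair from sparse optical flow, then take second finite differences. The tracker settings and the use of a similarity transform are assumptions, not the paper's estimator.

```python
import numpy as np
import cv2

def frame_yaw(prev_gray, curr_gray):
    """Rough in-plane rotation between two grayscale frames from sparse
    optical flow (illustrative only)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1].reshape(-1, 2)
    good_next = nxt[status.ravel() == 1].reshape(-1, 2)
    # A similarity transform gives a single rotation angle between the frames.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
    return np.arctan2(M[1, 0], M[0, 0])

def angular_acceleration(yaw_angles, dt):
    """Second finite difference of the per-frame yaw sequence."""
    yaw = np.unwrap(np.asarray(yaw_angles))
    rate = np.gradient(yaw, dt)      # angular velocity
    return np.gradient(rate, dt)     # angular acceleration

# Example with a synthetic yaw profile (quadratic => constant acceleration).
t = np.arange(0.0, 2.0, 0.1)
alpha = angular_acceleration(0.5 * 1.2 * t**2, dt=0.1)
print(alpha.mean())  # about 1.2 rad/s^2
```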

Vision-based Reduction of Gyro Drift for Intelligent Vehicles (지능형 운행체를 위한 비전 센서 기반 자이로 드리프트 감소)

  • Kyung, MinGi; Nguyen, Dang Khoi; Kang, Taesam; Min, Dugki; Lee, Jeong-Oog
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.627-633 / 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for vehicle navigation. However, in GPS-denied environments such as dense building areas, tunnels, underground areas, and indoor environments, non-GPS solutions are required. Yaw rates from a single gyro sensor could be one such solution, but the drift problem of gyro sensors must be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error in straight-line movement; however, it shows rather large errors in some moving environments, especially along curved paths. This paper presents VDR (Vision-based Drift Reduction), a system that uses a low-cost vision sensor to compensate for HDR errors.
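
The VDR algorithm itself is not spelled out in the abstract; as a rough illustration of vision-aided drift reduction, here is a minimal complementary-filter sketch, assuming the vision sensor occasionally supplies an absolute heading fix. The rates, bias, and blending factor are made up.

```python
import numpy as np

def fuse_heading(gyro_heading, vision_heading, alpha=0.9):
    """Complementary filter: keep the integrated gyro heading in the short
    term and pull it toward the vision-derived heading to cancel drift."""
    if vision_heading is None:          # no visual heading fix at this step
        return gyro_heading
    return alpha * gyro_heading + (1.0 - alpha) * vision_heading

# Made-up data: a biased gyro on a straight-line run at 100 Hz, with a
# vision heading available only every 10th sample.
dt, bias = 0.01, 0.02                   # 0.02 rad/s of gyro drift
yaw_rates = np.full(1000, bias)         # the true heading never changes
heading = 0.0
for k, w in enumerate(yaw_rates):
    vis = 0.0 if k % 10 == 0 else None  # vision reports the true heading (0)
    heading = fuse_heading(heading + w * dt, vis)
print(heading)  # ~0.02 rad, versus ~0.2 rad if the gyro were used alone
```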

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro; Choi, Donggyu; Kim, Minyoung; Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.41-43 / 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in both the robotics industry and academia. However, the framework has long been focused on, and limited to, the navigation of robots and the manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of enhanced vision with the help of a depth camera, which provides high-quality data for a much more accurate understanding of the environment. The data from the camera are then incorporated into the ROS communication structure for any potential use. In this particular case, the system uses OpenCV libraries to process the camera data and provide face-detection capabilities to the robot while it navigates an indoor environment. The whole system has been implemented and tested on a TurtleBot3 and a Raspberry Pi 4.
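
As a rough illustration of the described setup, the sketch below shows a minimal ROS (Python/rospy) node that runs an OpenCV Haar-cascade face detector on the RGB stream of a depth camera and republishes the annotated image. The topic names and cascade file are assumptions that depend on the actual camera driver and OpenCV installation, not details taken from the paper.

```python
#!/usr/bin/env python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class FaceDetector:
    def __init__(self):
        self.bridge = CvBridge()
        # Cascade path assumes the bundled OpenCV data directory.
        self.cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        self.pub = rospy.Publisher("/faces/image", Image, queue_size=1)
        # Topic name assumes a typical RGB-D driver; adjust to your camera.
        rospy.Subscriber("/camera/rgb/image_raw", Image,
                         self.callback, queue_size=1)

    def callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Draw a box around each detected face and republish the frame.
        for (x, y, w, h) in self.cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        self.pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding="bgr8"))

if __name__ == "__main__":
    rospy.init_node("face_detector")
    FaceDetector()
    rospy.spin()
```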

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak; Lee, Seung-Yong; Cho, Seong-Ik
    • Proceedings of the KSRS Conference / 2007.10a / pp.508-511 / 2007
  • The car navigation system is one of the most important applications in telematics. A recent trend in car navigation is to use real video captured by a camera mounted on the vehicle, because video can bridge the semantic gap between the map and the real world. In this paper, we present a visual car navigation system that visually represents navigation information and route guidance. It can improve drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane-change guidance. We propose a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented-reality display functions. The core of the system is the visual navigation controller, which controls the other modules and dynamically determines how navigation information is visually represented, according to a determination rule based on the current location and driving circumstances. We briefly describe the implementation of the system.
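
The paper's visual navigation controller is not detailed in the abstract; purely as an illustration of overlaying turn guidance on live video, here is a minimal OpenCV sketch in which the arrow geometry, colors, and blending factor are invented rather than taken from the system described above.

```python
import cv2
import numpy as np

def overlay_turn_arrow(frame, direction="left", alpha=0.6):
    """Blend a simple turn-guidance arrow onto a video frame.
    The arrow placement here is illustrative, not the paper's renderer."""
    h, w = frame.shape[:2]
    layer = frame.copy()
    tip_x = int(w * (0.3 if direction == "left" else 0.7))
    base_x = int(w * 0.5)
    y = int(h * 0.75)
    cv2.arrowedLine(layer, (base_x, y), (tip_x, y), (0, 200, 255),
                    thickness=12, tipLength=0.4)
    # Semi-transparent blend so the road stays visible under the guidance.
    return cv2.addWeighted(layer, alpha, frame, 1.0 - alpha, 0)

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
guided = overlay_turn_arrow(frame, direction="left")
```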

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin; Seo, Hoseong; Kim, Pyojin; Lee, Chung-Keun
    • Journal of Advanced Navigation Technology / v.19 no.2 / pp.133-139 / 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides a mobile system to a desired pose; the input velocity is calculated from the difference between features in the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are identified by examining international research on visual navigation technology.
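
As a concrete (but generic) illustration of the visual odometry step described above, i.e., estimating the relative pose between consecutive frames, the sketch below uses OpenCV's ORB features and essential-matrix decomposition; the intrinsic matrix and feature settings are assumptions, not values from the survey.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative rotation R and (unit-scale) translation t
    between two consecutive grayscale frames, given intrinsics K."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC rejects mismatched features.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # translation is recovered only up to scale (monocular VO)
```

Chaining these frame-to-frame poses over an image sequence yields the camera trajectory up to an unknown scale, which is the basic dead-reckoning-like behavior the survey contrasts with visual SLAM.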

Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation (명령융합과 퍼지기반의 지능형 시스템-이동로봇주행적용)

  • Jin, Taeseok; Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1034-1041 / 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance by a mobile robot with an active camera that intelligently searches for the goal location in unknown environments, using command fusion based on situational commands from a vision sensor. Instead of the "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, the "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. We describe experimental results obtained with the proposed method that demonstrate successful navigation using real vision data.
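
The paper's fuzzy rule base is not given in the abstract; the following minimal sketch only conveys the command-fusion idea, blending a goal-approach steering command with an obstacle-avoidance command by a fuzzy "near obstacle" membership. The membership shape, distances, and commands are invented for illustration.

```python
import numpy as np

def near_membership(d, d_safe=0.5, d_far=2.0):
    """Simple ramp membership: 1 when an obstacle is very close,
    0 when it is far away (distances in meters, values assumed)."""
    return float(np.clip((d_far - d) / (d_far - d_safe), 0.0, 1.0))

def fuse_commands(goal_steer, avoid_steer, obstacle_dist):
    """Command fusion: weight the obstacle-avoidance command by how
    'near' the obstacle is, and the goal-approach command by the rest."""
    w_avoid = near_membership(obstacle_dist)
    w_goal = 1.0 - w_avoid
    return w_goal * goal_steer + w_avoid * avoid_steer

# Example: the goal is to the right (+0.3 rad) but an obstacle 0.8 m away
# suggests steering left (-0.6 rad); the fused command leans left.
print(fuse_commands(goal_steer=0.3, avoid_steer=-0.6, obstacle_dist=0.8))
```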

Multi-Range Approach of Stereo Vision for Mobile Robot Navigation in Uncertain Environments

  • Park, Kwang-Ho; Kim, Hyung-O; Baek, Moon-Yeol; Kee, Chang-Doo
    • Journal of Mechanical Science and Technology / v.17 no.10 / pp.1411-1422 / 2003
  • The detection of free space between obstacles in a scene is a prerequisite for the navigation of a mobile robot. Especially for stereo-vision-based navigation, the problem of correspondence between the two images is well known to be of crucial importance. This paper describes a multi-range approach to area-based stereo matching for grid mapping and visual navigation in uncertain environments. Camera calibration parameters are optimized by an evolutionary algorithm for successful stereo matching. To obtain reliable disparity information from both images, the stereo images are decomposed into three pairs of images with different resolutions based on the measured disparities. The advantage of the multi-range approach is that more reliable disparities are obtained in each defined range, because disparities from the high-resolution images are used for distant objects while disparities from the low-resolution images are used for close objects. The reliable disparity map is then combined through post-processing that rejects incorrect disparity information from each individual map. Real distances computed from the disparity image are converted into an occupancy-grid representation for the mobile robot. We have investigated the feasibility of the multi-range approach for obstacle detection and visual mapping through various experiments.
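
As a rough sketch of the multi-range idea, the code below computes block-matching disparities at three resolutions on a rectified grayscale pair and keeps, per pixel, the estimate from the resolution suited to its disparity range; the specific scales, disparity ranges, and matcher parameters are illustrative, not the paper's values, and the paper's evolutionary calibration and post-processing are not reproduced.

```python
import cv2
import numpy as np

def multi_range_disparity(left, right):
    """Fuse StereoBM disparities from three resolutions of a rectified,
    8-bit grayscale stereo pair: high resolution for distant objects
    (small disparity), low resolution for close objects (large disparity)."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    fused = None
    for scale, (lo, hi) in [(1.0, (0, 8)), (0.5, (8, 24)), (0.25, (24, 64))]:
        L = cv2.resize(left, None, fx=scale, fy=scale)
        R = cv2.resize(right, None, fx=scale, fy=scale)
        # StereoBM returns fixed-point disparities (x16); rescale to
        # full-resolution pixel units before fusing.
        d = matcher.compute(L, R).astype(np.float32) / 16.0 / scale
        d = cv2.resize(d, (left.shape[1], left.shape[0]))
        if fused is None:
            fused = np.full_like(d, -1.0)        # -1 marks "no estimate yet"
        mask = (d >= lo) & (d < hi) & (fused < 0)
        fused[mask] = d[mask]
    return fused
```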

Path Planning Algorithm for UGVs Based on the Edge Detecting and Limit-cycle Navigation Method (Limit-cycle 항법과 모서리 검출을 기반으로 하는 UGV를 위한 계획 경로 알고리즘)

  • Lim, Yun-Won; Jeong, Jin-Su; An, Jin-Ung; Kim, Dong-Han
    • Journal of Institute of Control, Robotics and Systems / v.17 no.5 / pp.471-478 / 2011
  • The UGV (Unmanned Ground Vehicle) is not only widely used in various practical applications but is also currently being researched in many disciplines. In particular, obstacle avoidance is considered one of the most important technologies in the navigation of an unmanned vehicle. In this paper, we introduce a simple path-planning algorithm for reaching a destination while avoiding a polygonal static obstacle. To avoid such an obstacle effectively, a path planned near the obstacle is much shorter than a path planned far from it, provided that both paths guarantee that the robot will not collide with the obstacle. To generate a path near the obstacle, we have developed an algorithm that combines an edge detection method and a limit-cycle navigation method. The edge detection method, based on the Hough transform and IR sensors, finds an obstacle's edge, and the limit-cycle navigation method generates a path that is smooth enough to reach the detected edge. We also propose a novel algorithm that resolves local minima by using a virtual wall in the local vision. Finally, we verify the performance of the proposed algorithm through simulations and experiments.
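
The limit-cycle navigation method mentioned above steers the robot along a vector field that converges onto a circle around the obstacle. The sketch below integrates that standard second-order limit-cycle field; the obstacle position, circling radius, and step size are placeholders, and the Hough-based edge detection and virtual-wall scheme of the paper are not reproduced.

```python
import numpy as np

def limit_cycle_step(pos, obstacle, radius, clockwise=True, dt=0.05):
    """One integration step of the limit-cycle vector field around an
    obstacle point: trajectories converge onto a circle of the given
    radius centered on the obstacle and circulate around it."""
    x, y = pos[0] - obstacle[0], pos[1] - obstacle[1]
    shrink = radius**2 - x**2 - y**2      # >0 inside the circle, <0 outside
    s = 1.0 if clockwise else -1.0
    dx = s * y + x * shrink
    dy = -s * x + y * shrink
    v = np.array([dx, dy])
    v = v / (np.linalg.norm(v) + 1e-9)    # unit direction of travel
    return np.asarray(pos) + v * dt

# Example: drive from (2, 2) around an obstacle edge detected at the origin;
# the trajectory spirals onto the unit circle and circulates clockwise.
p = np.array([2.0, 2.0])
for _ in range(200):
    p = limit_cycle_step(p, obstacle=(0.0, 0.0), radius=1.0)
print(p, np.linalg.norm(p))  # final point lies close to the unit circle
```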