• Title/Summary/Keyword: Visual Control

A Study on the Visual Servoing of Autonomous Mobile Inverted Pendulum (자율주행 모바일 역진자의 비주얼서보잉에 대한 연구)

  • Lee, Junmin;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.3
    • /
    • pp.240-247
    • /
    • 2013
  • This paper proposes an optimal three-dimensional coordinate implementation of the vision sensor using two CCD cameras. The PBVS (Position Based Visual Servoing) scheme is implemented using the positional information obtained from images. Stereo vision by the PBVS method, which enhances every frame using calibration parameters, is effective for distance calculation. The IBVS (Image Based Visual Servoing) scheme is also implemented, using the difference between the reference and obtained images. Stereo vision by the IBVS method calculates the distance using the rotation angles of the motors that correspond to the eyes and neck, without enhanced images. The PBVS method is compared with the IBVS method in terms of advantages, disadvantages, computing time, and performance. Finally, the IBVS method is applied to the dual-arm manipulator on the mobile inverted pendulum. The autonomous mobile inverted pendulum is successfully demonstrated using the center of the manipulator's mass.
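
The IBVS idea above can be sketched in a few lines. The following is a hypothetical, minimal IBVS loop (not the paper's implementation), assuming a single tracked feature, a known depth Z, and camera translation parallel to the image plane, for which the interaction matrix reduces to -(1/Z)·I and the classic law v = -gain·pinv(L)·e becomes v = gain·Z·e:

```python
# Minimal IBVS sketch (illustrative assumptions: one feature, known depth,
# translation parallel to the image plane only).

def ibvs_step(s, s_ref, Z, gain):
    """Return (vx, vy) camera velocity driving feature s toward s_ref."""
    ex, ey = s[0] - s_ref[0], s[1] - s_ref[1]
    return (gain * Z * ex, gain * Z * ey)

def simulate(s0, s_ref, Z, gain=0.5, dt=0.1, steps=50):
    s = list(s0)
    for _ in range(steps):
        vx, vy = ibvs_step(s, s_ref, Z, gain)
        # feature motion under pure translation: ds/dt = -(1/Z) * v
        s[0] += -(vx / Z) * dt
        s[1] += -(vy / Z) * dt
    return s

final = simulate((0.4, -0.2), (0.0, 0.0), Z=2.0)
```

With these assumptions the feature error decays exponentially at rate gain, independent of depth, which is why IBVS is often preferred when calibration is rough.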

On Design of Visual Servoing using an Uncalibrated Camera in 3D Space

  • Morita, Masahiko;Kenji, Kohiyama;Shigeru, Uchikado;Lili, Sun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1121-1125
    • /
    • 2003
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. A pinhole camera model is used for the camera. The essential notions are epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing visual servoing. For easier understanding of the proposed method, we first show a design for the case of a calibrated camera. The design consists of four steps, and the motion of the robot arm is fixed to a single constant direction. This means that the estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space.
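
The epipolar constraint the design rests on can be checked numerically. The sketch below uses illustrative assumptions (identity intrinsics, pure camera translation along the x-axis), for which the fundamental matrix equals the skew-symmetric matrix of t = (1, 0, 0):

```python
# Epipolar-constraint sketch: q^T F p = 0 for a true correspondence.
# F below is [t]_x for t = (1, 0, 0) with identity intrinsics (assumed).

F = [[0.0, 0.0,  0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0,  0.0]]

def epipolar_residual(F, p, q):
    """Compute q^T F p for homogeneous image points p and q."""
    Fp = [sum(F[i][j] * p[j] for j in range(3)) for i in range(3)]
    return sum(q[i] * Fp[i] for i in range(3))

# Under x-translation a point shifts horizontally: residual is zero.
good = epipolar_residual(F, (0.3, 0.1, 1.0), (0.5, 0.1, 1.0))
# A vertical shift violates the constraint: residual is nonzero.
bad = epipolar_residual(F, (0.3, 0.1, 1.0), (0.3, 0.4, 1.0))
```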


Automatic Visual Inspection System of Remocon using Camera (카메라를 이용한 리모컨 외관검사 자동화 시스템 구현)

  • Huh, Kyung-Moo;Kang, Su-Min;Park, Se-Hyuk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.11
    • /
    • pp.1106-1111
    • /
    • 2007
  • Visual inspection that relies on human eyes suffers from large variations in the examination results depending on the physical and mental state of the inspector. We automate the remocon (remote control) inspection process using a CCD camera. The developed inspection system can be used in any remocon production line without significant user intervention, and was implemented with a PC, a CCD camera, and Visual C++ for a general workplace. The accuracy of the proposed system improved by about 3.2% over the conventional pattern matching method, and the processing time decreased by about 119 ms. We also showed that the inspection system is more robust to lighting conditions.
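
The conventional pattern matching baseline mentioned above is commonly implemented as normalized cross-correlation (NCC). A toy sketch, with made-up pixel values rather than the paper's data:

```python
# Template matching by normalized cross-correlation (toy data).
import math

def ncc(patch, template):
    """Normalized cross-correlation of two equal-size 2D patches."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    if da == 0.0 or db == 0.0:      # flat patch: correlation undefined
        return -1.0
    return num / (da * db)

def best_match(image, template):
    """Slide the template over the image; return (row, col) of best NCC."""
    th, tw = len(template), len(template[0])
    best, pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(patch, template)
            if score > best:
                best, pos = score, (r, c)
    return pos

image = [[0, 0, 0, 0],
         [0, 9, 1, 0],
         [0, 1, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 1],
            [1, 9]]
```

NCC's mean/variance normalization is what gives it partial robustness to uniform lighting changes, the weakness the paper's method further improves on.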

Stereo Visual Odometry without Relying on RANSAC for the Measurement of Vehicle Motion (차량의 모션계측을 위한 RANSAC 의존 없는 스테레오 영상 거리계)

  • Song, Gwang-Yul;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.321-329
    • /
    • 2015
  • This paper addresses a new algorithm for stereo visual odometry to measure the ego-motion of a vehicle. The new algorithm introduces an inlier grouping method based on Delaunay triangulation and vanishing point computation. Most visual odometry algorithms rely on RANSAC for choosing inliers; such algorithms fluctuate considerably in processing time between images, and their accuracy depends on the number of iterations and the level of outliers. The new approach, on the other hand, reduces the fluctuation in processing time while providing accuracy comparable to the RANSAC-based approaches.
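
The vanishing-point part of the idea can be illustrated deterministically, with no RANSAC iterations. The sketch below uses toy correspondences (not the paper's data): under forward motion, lines through corresponding image points of static inliers meet at the focus of expansion, computable with homogeneous cross products:

```python
# Vanishing point (focus of expansion) from two flow lines, using
# homogeneous coordinates (toy data; assumes static-scene inliers).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def flow_line(p, q):
    """Homogeneous line through image points p and q (each (x, y))."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

# Forward motion scales image points radially away from the FOE (here, origin).
matches = [((0.2, 0.1), (0.4, 0.2)), ((-0.3, 0.5), (-0.6, 1.0))]
l1 = flow_line(*matches[0])
l2 = flow_line(*matches[1])
vp = cross(l1, l2)                      # homogeneous intersection
foe = (vp[0] / vp[2], vp[1] / vp[2])    # dehomogenize
```

Because every step is a closed-form product rather than a randomized loop, the per-frame cost is constant, which matches the paper's motivation of reducing processing-time fluctuation.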

A study on Precise Trajectory Tracking control of Robot system (로봇시스템의 정밀 궤적 추적제어에 관한 연구)

  • Lee, Woo-Song;Kim, Won-Il;Yang, Jun-Seok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.18 no.2
    • /
    • pp.82-89
    • /
    • 2015
  • This study proposes a new approach to the design and control of autonomous mobile robots. In this paper, we describe a fuzzy logic based visual servoing system for an autonomous mobile robot. An existing system always needs to keep a moving object within the overall image, which makes it difficult to move the autonomous mobile robot spontaneously. We first explain the autonomous mobile robot and the fuzzy logic system, and then design a fuzzy logic based visual servoing system: we extract features of the object from the overall image and design a fuzzy logic system that controls the visual servoing system to an exact position. We also introduce a shooting robot that can track an object and hit it. A computer simulation and experiments illustrate that the proposed system achieves the desired performance.
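
A fuzzy controller in the spirit described above might look like the following minimal sketch. The membership functions and rule base are illustrative assumptions, not the paper's design: one input (the object's horizontal offset in the image, normalized to [-1, 1]) is mapped to a steering command by three triangular memberships and a weighted-average defuzzifier:

```python
# Minimal fuzzy-logic steering sketch (assumed memberships and rules).

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(offset):
    """Steering command in [-1, 1] turning the robot toward the object."""
    offset = max(-1.0, min(1.0, offset))  # clamp to the input universe
    neg = tri(offset, -2.0, -1.0, 0.0)    # object left of center
    zero = tri(offset, -1.0, 0.0, 1.0)    # object centered
    pos = tri(offset, 0.0, 1.0, 2.0)      # object right of center
    # Rules: left -> steer left (-1), centered -> hold (0), right -> steer right (+1)
    w = neg + zero + pos
    return (neg * -1.0 + zero * 0.0 + pos * 1.0) / w
```

The weighted average of singleton outputs makes the command vary smoothly with the offset, which is the usual argument for fuzzy control over a bang-bang rule.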

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.1
    • /
    • pp.61-69
    • /
    • 2007
  • In this paper, we developed a reliable sound localization system, including a VAD (Voice Activity Detection) component using three microphones, as well as a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, compensating for errors in the localization of a speaker and effectively rejecting unnecessary speech or noise signals entering from undesired directions. To verify the system's performance, we installed the proposed audio-visual system in a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.

Performance Evaluation of Visual Path Following Algorithm (영상 교시기반 주행 알고리듬 성능 평가)

  • Choi, I-Sak;Ha, Jong-Eun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.9
    • /
    • pp.902-907
    • /
    • 2011
  • In this paper, we deal with the performance evaluation of visual path following using 2D and 3D information. Visual path following first teaches a driving path by selecting milestone images, then follows the same route by comparing the milestone images with the current image. We follow the visual path following algorithms of [8] and [10]. In [8], a robot navigates with 2D image information only, while in [10], local 3D geometries are reconstructed between the milestone images to achieve fast feature prediction, which allows recovery from tracking failures. Experimental results, including diverse indoor cases, show the performance of each algorithm.

Depth Estimation for Image-based Visual Servoing of an Under-actuated System (Under-actuated 시스템에서의 이미지 서보잉을 위한 깊이 추정 기법)

  • Lee, Dae-Won;Kim, Jin-Ho;Kim, H.-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.42-46
    • /
    • 2012
  • A simple and accurate depth estimation algorithm for IBVS (Image-Based Visual Servoing) is presented. This algorithm is particularly useful for under-actuated systems such as vision-guided quadrotor UAVs (Unmanned Aerial Vehicles). Since the image of a marker changes with the pitch and roll angles of the quadrotor, it is difficult to estimate depth. The proposed algorithm compensates for the shape of the marker, so that the system acquires more accurate depth information without complicated processing. Also, the roll and pitch channels are decoupled so that the IBVS algorithm can be used in an under-actuated quadrotor system.
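
The marker-shape compensation can be illustrated with a simplified pinhole model (the focal length and marker size below are assumed values, not the paper's): a planar marker viewed at pitch angle θ appears foreshortened by cos θ, so dividing the observed size by cos θ restores the fronto-parallel size before applying Z = fS/s:

```python
# Depth-from-marker-size sketch with pitch compensation (simplified model).
import math

F_PX = 500.0      # focal length in pixels (assumed)
MARKER_M = 0.20   # true marker height in meters (assumed)

def observed_size(Z, pitch):
    """Apparent marker height in pixels at depth Z under camera pitch."""
    return F_PX * MARKER_M * math.cos(pitch) / Z

def depth_naive(s_obs):
    """Depth estimate ignoring the tilt: biased when pitch != 0."""
    return F_PX * MARKER_M / s_obs

def depth_compensated(s_obs, pitch):
    """Depth estimate after undoing the cos(pitch) foreshortening."""
    return F_PX * MARKER_M / (s_obs / math.cos(pitch))
```

Under this model the naive estimate overstates depth by a factor 1/cos θ, while the compensated one recovers it exactly; on a real quadrotor the pitch would come from the IMU.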

A Novel Visual Servoing Approach For Keeping Feature Points Within The Field-of-View (특징점이 Field of View를 벗어나지 않는 새로운 Visual Servoing 기법)

  • Park, Do-Hwan;Yeom, Joon-Hyung;Park, Noh-Yong;Ha, In-Joong
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.322-324
    • /
    • 2007
  • In this paper, an eye-in-hand visual servoing strategy for keeping feature points within the FOV (field-of-view) is proposed. We first specify the FOV constraint which must be satisfied to keep the feature points within the FOV. It is expressed as an inequality relationship between (i) the LOS (line-of-sight) angle of the center of the feature points from the optical axis of the camera and (ii) the distance between the object and the camera. We then design a nonlinear feedback controller which linearly decouples the translational and rotational control loops. Finally, we show that an appropriate choice of the controller gains ensures that the FOV constraint is satisfied. The main advantage of our approach over previous ones is that the trajectory of the camera is smooth and near-circular. Furthermore, ours can be applied to the large camera displacement problem.

Linear Visual Feedback Control using Binocular Visual Space (양안 시공간을 이용한 Linear Visual Feedback Control)

  • Lim, Seung-Woo;Park, Chang-Kyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.6
    • /
    • pp.74-79
    • /
    • 1995
  • This paper proposes a stereo LVFC robot which imitates the eyes and arms of a human. We derive a linear approximation between visual space and joint space by the least-squares method and apply it to the proposed stereo LVFC robot. Its efficiency is verified by simulation. Compared with stereo VFC, the stereo LVFC robot needs neither the image Jacobian nor the robot Jacobian, so it is possible to control the robot in real time.
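
The least-squares linear approximation between joint space and visual space can be sketched on toy data. The 2-DOF map `J_true` below is an assumption for illustration; the point is that the map is identified from sample pairs rather than derived from analytic image/robot Jacobians:

```python
# Fit a 2x2 linear map s = J q from (joint, feature) sample pairs by
# least squares via the normal equations: J = (sum s q^T)(sum q q^T)^-1.

def fit_linear_map(Q, S):
    """Solve for J (2x2) minimizing sum ||J q_i - s_i||^2."""
    A = [[0.0, 0.0], [0.0, 0.0]]   # sum q q^T
    B = [[0.0, 0.0], [0.0, 0.0]]   # sum s q^T
    for q, s in zip(Q, S):
        for i in range(2):
            for j in range(2):
                A[i][j] += q[i] * q[j]
                B[i][j] += s[i] * q[j]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]
    return [[sum(B[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Toy samples generated from an assumed ground-truth map.
J_true = [[2.0, 1.0], [0.0, 3.0]]
Q = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, -0.5)]
S = [(J_true[0][0] * q[0] + J_true[0][1] * q[1],
      J_true[1][0] * q[0] + J_true[1][1] * q[1]) for q in Q]
J = fit_linear_map(Q, S)
```

With noise-free samples the fit recovers the generating map exactly; with real image measurements it yields the best linear approximation, which is what lets the controller run without either Jacobian.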