• Title/Summary/Keyword: Vision Control Algorithm


비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정 (Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion)

  • 박진성;박영진;박윤식;홍덕화
    • Journal of Institute of Control, Robotics and Systems / Vol. 17, No. 6 / pp. 546-551 / 2011
  • A tilt sensor is required to control the attitude of a biped robot when it walks on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt sensor by comparing the current image with a reference image. However, the vision sensor alone has many technological limitations for controlling a biped robot, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental setup of an inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that the vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome the limitations of the vision sensor, a Kalman filter for multi-rate sensor fusion is applied with a low-quality gyro sensor. This resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Through an inverted pendulum control experiment, it is found that the tilt estimation performance of the fused sensors improves enough to control the attitude of the inverted pendulum.
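The multi-rate fusion scheme this abstract describes can be sketched as a one-dimensional Kalman filter: the fast gyro rate is integrated at every step, and the slower tilt measurement (as a vision sensor would provide) corrects the accumulated drift. All sampling rates, noise variances, and names below are illustrative assumptions, not the paper's implementation.

```python
def fuse_tilt(gyro_rates, vision_tilts, dt=0.001, vision_every=50):
    """Estimate tilt from a high-rate gyro and low-rate tilt measurements."""
    theta, p = 0.0, 1.0          # state estimate and its variance
    q, r = 1e-4, 1e-2            # process and measurement noise (assumed)
    estimates = []
    for k, w in enumerate(gyro_rates):
        theta += w * dt          # predict: integrate the gyro rate
        p += q
        if k % vision_every == 0 and k // vision_every < len(vision_tilts):
            z = vision_tilts[k // vision_every]
            g = p / (p + r)              # Kalman gain
            theta += g * (z - theta)     # correct drift with the tilt reading
            p *= (1.0 - g)
        estimates.append(theta)
    return estimates
```

With a biased gyro and a truthful slow tilt sensor, the fused estimate stays bounded while pure integration drifts away, which is the behavior the abstract reports.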

Corridor Navigation of the Mobile Robot Using Image Based Control

  • Han, Kyu-Bum;Kim, Hae-Young;Baek, Yoon-Su
    • Journal of Mechanical Science and Technology / Vol. 15, No. 8 / pp. 1097-1107 / 2001
  • In this paper, a wall-following navigation algorithm for a mobile robot using a mono vision system is described. The key points of mobile robot navigation are effective acquisition of environmental information and fast recognition of the robot's position; from this information, the robot must be controlled appropriately to follow a desired path. To recognize the robot's relative position and orientation with respect to the wall, features of the corridor structure are extracted with the mono vision system, and the robot's relative position to the wall, i.e., the offset distance and steering angle, is derived for a simple corridor geometry. To alleviate the computational burden of image processing, a Kalman filter is used to reduce the search region for line detection in image space. The robot is then controlled with this information to follow the desired path. The PD wall-following control scheme is composed of two parts, approaching control and orientation control, each performed through the steering and forward-driving motion of the robot. To verify the effectiveness of the proposed algorithm, real-time navigation experiments are performed. The experimental results verify the effectiveness and flexibility of the suggested algorithm in comparison with a purely encoder-guided mobile robot navigation system.
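The two-part PD scheme above (approaching control on the offset distance, orientation control on the heading) can be sketched with a toy lateral-kinematics model. The gains, speeds, and the unicycle-like update are illustrative assumptions, not the paper's values.

```python
def simulate_wall_following(offset0, heading0, kp_d=0.8, kd_d=0.4,
                            kp_a=1.5, v=0.3, dt=0.05, steps=400):
    """Return the offset history as PD steering drives the robot to the wall line."""
    offset, heading = offset0, heading0
    prev_offset = offset0
    history = []
    for _ in range(steps):
        d_offset = (offset - prev_offset) / dt
        prev_offset = offset
        # approaching control (offset PD) + orientation control (heading P)
        steer = -kp_d * offset - kd_d * d_offset - kp_a * heading
        heading += steer * dt
        offset += v * heading * dt      # lateral drift from the heading error
        history.append(offset)
    return history
```

Starting offset from the desired corridor line decays toward zero, mimicking the combined approaching/orientation behavior described in the abstract.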


영상시스템을 이용한 이륜속도차방식 AGV 조향제어 (A study on Vision based Steering Control for Dual Motor Drive AGV)

  • 이현호;이창구;김성중
    • Proceedings of the KIEE 2001 Summer Annual Conference D / pp. 2277-2279 / 2001
  • This paper describes a vision-based steering control method for an AGV that uses dual-motor drive. We suggest an algorithm that detects the guideline quickly and accurately for real-time vision processing and controls the steering by assigning a control point (CP) in the input image. The method is tested on an IAGV (a dual-motor-drive AGV) with a single camera in a laboratory environment.
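For a dual-motor-drive AGV, steering from a control point reduces to turning the CP's lateral pixel error into a wheel-speed difference. The gain, image width, and speed units below are illustrative assumptions.

```python
def wheel_speeds(cp_x, image_width=320, base_speed=100.0, k=0.5):
    """Map the guideline control point's x-position to left/right wheel speeds."""
    error = cp_x - image_width / 2      # lateral error of the CP in pixels
    delta = k * error                   # proportional speed difference
    left = base_speed + delta           # speed up the wheel opposite the error
    right = base_speed - delta
    return left, right
```

A CP right of the image center speeds up the left wheel, turning the vehicle back toward the guideline.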


2개의 비전 센서 및 딥 러닝을 이용한 도로 속도 표지판 인식, 자동차 조향 및 속도제어 방법론 (The Road Speed Sign Board Recognition, Steering Angle and Speed Control Methodology based on Double Vision Sensors and Deep Learning)

  • 김인성;서진우;하대완;고윤석
    • The Journal of the Korea Institute of Electronic Communication Sciences / Vol. 16, No. 4 / pp. 699-708 / 2021
  • This paper presents a speed control algorithm for an autonomous vehicle using two vision sensors and deep learning. A vehicle speed control algorithm is presented in which the road speed sign in the image provided by vision sensor A is recognized with the deep learning framework TensorFlow, and the vehicle then follows the recognized speed. At the same time, a steering angle control algorithm was developed that analyzes the road image transmitted from vision sensor B in real time to detect the lane, computes the steering angle, and controls the front axle through PWM control so that the vehicle tracks the lane. To verify the validity of the proposed steering angle and speed control algorithms, a prototype car based on Python, a Raspberry Pi, and OpenCV was built. The accuracy was then confirmed by verifying steering and speed control scenarios on a test track.
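The two control loops above can be sketched independently: a steering angle from the detected lane center's pixel offset, and a throttle that never exceeds the recognized sign limit. The field-of-view mapping, units, and function names are illustrative assumptions, not the paper's implementation.

```python
def steering_angle(lane_left_x, lane_right_x, image_width=640, fov_deg=60):
    """Steering angle (deg) toward the midpoint of the two detected lane lines."""
    lane_center = (lane_left_x + lane_right_x) / 2
    offset = lane_center - image_width / 2
    # map the pixel offset linearly across half the camera's field of view
    return offset / (image_width / 2) * (fov_deg / 2)

def throttle(current_speed, sign_limit):
    """Accelerate toward the recognized sign limit, never above it."""
    return min(current_speed + 1.0, sign_limit)
```

The steering output would drive a PWM duty cycle on the front axle; the throttle clamps vehicle speed to whatever limit the sign-recognition branch last reported.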

Development of Inspect Algorithm for Pallets Using Vision System

  • Lee, Man-Hyung;Hong, Suh-Il
    • Proceedings of ICROS 2001 ICCAS / pp. 101.6-101 / 2001
  • This paper deals with an inspection algorithm using a vision system. One of the major problems that arises during polymer production is detecting defects in the product (bad pallets); erroneous output can cause large production and financial losses, so new methods for real-time inspection of such defects are in demand. For this reason, we present the development of a vision system algorithm for the defect inspection of PE pallets. First, a differential filter is used to detect the edges of the object, and then a labeling algorithm is applied for feature extraction. This algorithm is designed for the defect inspection of pallets ...
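The two-stage pipeline the abstract names, a differential (gradient) filter followed by connected-component labeling, can be sketched in pure Python. The threshold, connectivity choice, and tiny test geometry are assumptions for demonstration.

```python
def edge_map(img, thresh=50):
    """Binary edge map from horizontal/vertical intensity differences."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal difference
            gy = img[y + 1][x] - img[y][x]   # vertical difference
            if abs(gx) + abs(gy) > thresh:
                edges[y][x] = 1
    return edges

def label_blobs(binary):
    """4-connected component labeling; returns a list of blob pixel counts."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, count = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    count += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(count)
    return sizes
```

A defect decision would then threshold the blob sizes, flagging pallets whose labeled regions exceed an allowed area.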


An Obstacle Avoidance Trajectory Planning for a Quadruped Walking Robot Using Vision and PSD sensor

  • Kong, Jung-Shik;Lee, Bo-Hee;Kim, Jin-Geol
    • Proceedings of ICROS 2002 ICCAS / pp. 105.1-105 / 2002
  • This paper deals with obstacle avoidance for a quadruped robot with a vision system and a PSD sensor. The vision system is needed for the robot to recognize obstacles, and the PSD sensor is another important element for obstacle recognition. We propose an algorithm that recognizes obstacles with one vision sensor and a PSD sensor, and an obstacle avoidance algorithm based on the map produced by the recognition algorithm. Using these algorithms, the quadruped robot can generate a gait trajectory; therefore, the robot can avoid obstacles and move to a target point.
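The abstract does not detail its avoidance algorithm; a generic sketch of planning over the obstacle map it mentions is a breadth-first search on an occupancy grid. The grid representation and 4-connectivity are assumptions, not the paper's method.

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS over a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None.
    """
    h, w = len(grid), len(grid[0])
    prev = {start: None}           # visited set doubling as a back-pointer map
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]      # reconstruct start-to-goal order
        y, x = cell
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 \
                    and (ny, nx) not in prev:
                prev[(ny, nx)] = cell
                q.append((ny, nx))
    return None
```

The resulting cell sequence would then be handed to the gait generator as waypoints toward the target point.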


비전 기반 스마트 와이퍼 시스템을 위한 지능형 레인 감지 알고리즘 개발 (Intelligent Rain Sensing Algorithm for Vision-based Smart Wiper System)

  • 이경창;김만호;임홍준;이석
    • Proceedings of the KSPE 2003 Spring Conference / pp. 1727-1730 / 2003
  • A windshield wiper system plays a key role in ensuring the driver's safety during rainfall. However, because the quantity of rain or snow varies irregularly with time and vehicle speed, with a traditional windshield wiper system the driver must change the wiper's speed and operation period from time to time to secure a sufficient field of view. Because manual operation of the windshield wiper distracts the driver's attention and causes inattentive driving, it is a direct cause of traffic accidents. Therefore, this paper presents the basic architecture of a vision-based smart windshield wiper system and a rain sensing algorithm that automatically regulates the wiper's speed and operation period according to the quantity of rain or snow. This paper also introduces a fuzzy wiper control algorithm based on human expertise and evaluates the performance of the suggested algorithm on a simulator model. In particular, the vision sensor can cover a relatively wide area compared with an optical rain sensor, and hence grasps the rainfall state more accurately when disturbances occur.
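A fuzzy wiper controller of the kind described can be sketched with triangular membership functions over a normalized rain intensity and weighted-average defuzzification. The rule breakpoints and output speeds are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wiper_speed(rain):
    """Weighted-average defuzzification over three rain-intensity rules."""
    rules = [
        (tri(rain, -0.1, 0.0, 0.5), 20.0),   # light rain  -> slow wiping
        (tri(rain, 0.2, 0.5, 0.8), 60.0),    # moderate    -> medium speed
        (tri(rain, 0.5, 1.0, 1.1), 100.0),   # heavy rain  -> fast wiping
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Overlapping memberships give a smooth speed curve between the rule outputs, which is the practical advantage of fuzzy control over hard thresholds here.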


A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWJS 2002 International Welding/Joining Conference-Korea / pp. 314-319 / 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques were applied to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm in order to actuate the robotic manipulator and cancel the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting-diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line whose images are captured by the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing with specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces; the second concerns joining a cylindrical piece to a flat surface. An implementation of this computer vision system using parallel computer processing is being studied.
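The core image-processing step in such laser-line guidance is extracting the stripe's row in each image column and comparing it with the taught trajectory. A minimal sketch, assuming a grayscale image where the laser line is the brightest pixel per column:

```python
def stripe_profile(image):
    """Row index of the brightest pixel in each column: the laser line's profile."""
    w = len(image[0])
    return [max(range(len(image)), key=lambda y: image[y][x]) for x in range(w)]

def seam_deviation(profile, reference_row):
    """Signed per-column deviation of the stripe from the taught trajectory row."""
    return [y - reference_row for y in profile]
```

The deviations would feed the correction algorithm that nudges the torch back onto the desired trajectory; a real system would also fit a sub-pixel centroid rather than taking the raw argmax.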


Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor

  • Jeong, Jeong-Woo;Kang, Hee-Jun
    • International Journal of Precision Engineering and Manufacturing / Vol. 4, No. 1 / pp. 43-48 / 2003
  • A linear laser-vision sensor called 'Perceptron TriCam Contour' is mounted on an industrial robot and often used for various applications of the robot, such as position correction and part inspection. In this paper, a sensor center position calibration is presented for the most accurate use of the robot-Perceptron system. The obtained algorithm is suitable for on-site calibration in an industrial application environment. The calibration algorithm requires the joint sensor readings and the Perceptron sensor measurements on a specially devised jig, which is essential for this calibration process. The algorithm is implemented on the Hyundai 7602 AP robot, and the Perceptron's measurement error is reduced to less than 1.4 mm.
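A heavily simplified 2-D sketch of the idea: if the jig point's position is known, each robot pose yields one estimate of the sensor-center offset in the flange frame, and averaging the estimates calibrates it. The planar model, aligned sensor frame, and all names are assumptions for illustration, not the paper's formulation.

```python
import math

def rot(theta):
    """2-D rotation matrix as nested tuples."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def apply(R, v):
    return (R[0][0] * v[0] + R[0][1] * v[1],
            R[1][0] * v[0] + R[1][1] * v[1])

def transpose(R):
    return ((R[0][0], R[1][0]), (R[0][1], R[1][1]))

def calibrate_offset(poses, readings, jig_point):
    """Average per-pose estimates of the sensor-center offset in the flange frame.

    poses: (theta, ox, oy) flange poses; readings: jig point seen by the sensor.
    """
    ests = []
    for (theta, ox, oy), s in zip(poses, readings):
        # express the known jig point in the flange frame, subtract the reading
        d = apply(transpose(rot(theta)), (jig_point[0] - ox, jig_point[1] - oy))
        ests.append((d[0] - s[0], d[1] - s[1]))
    n = len(ests)
    return (sum(e[0] for e in ests) / n, sum(e[1] for e in ests) / n)
```

In practice noisy readings make this a least-squares problem over many poses; averaging is its simplest instance.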

능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정 (Localization of Mobile Robot Using Active Omni-directional Ranging System)

  • 류지형;김진원;이수영
    • Journal of Institute of Control, Robotics and Systems / Vol. 14, No. 5 / pp. 483-488 / 2008
  • An active omni-directional ranging system combining an omni-directional vision sensor with structured light has many advantages over conventional ranging systems: robustness against external illumination noise, because of the laser structured light, and computational efficiency, because a single image contains 360° of environment information from the omni-directional vision. The omni-directional range data represent a local distance map at a certain position in the workspace. In this paper, we propose a matching algorithm between the local distance map and a given global map database, and thereby localize a mobile robot in the global workspace. Since the global map database in general consists of line segments representing the edges of environment objects, the matching algorithm is based on the relative position and orientation of line segments in the local map and the global map. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.
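The orientation part of such line-segment matching can be sketched by brute-forcing the rotation that best aligns local line angles with the global map's; a full localizer would also match the segments' positions to recover translation. The folding of angles into [0, π) and the 1° search grid are illustrative assumptions, not the paper's algorithm.

```python
import math

def seg_angle(seg):
    """Orientation of a line segment, folded into [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def angle_diff(a, b):
    """Smallest difference between two line orientations (period pi)."""
    d = abs(a - b) % math.pi
    return min(d, math.pi - d)

def best_rotation(local_segs, global_segs, step_deg=1):
    """Brute-force the rotation aligning local line angles with the global map."""
    best_r, best_err = 0.0, float("inf")
    for step in range(0, 180, step_deg):     # line orientations repeat every pi
        r = math.radians(step)
        err = sum(min(angle_diff((seg_angle(s) + r) % math.pi, seg_angle(g))
                      for g in global_segs)
                  for s in local_segs)
        if err < best_err:
            best_r, best_err = r, err
    return best_r
```

Each candidate rotation is scored by the summed angular mismatch of every local segment to its nearest global segment; the minimum identifies the robot's heading relative to the map.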