• Title/Summary/Keyword: vision-based control


Stereo matching algorithm based on systolic array architecture using edges and pixel data (에지 및 픽셀 데이터를 이용한 어레이구조의 스테레오 매칭 알고리즘)

  • Jung, Woo-Young;Park, Sung-Chan;Jung, Hong
    • Proceedings of the KIEE Conference / 2003.11c / pp.777-780 / 2003
  • We have long tried to create a vision system like the human eye, and many studies have produced notable results. Among these, stereo vision is the most similar to human sight: it is the process of recreating 3-D spatial information from a pair of 2-D images. In this paper, we design a stereo matching algorithm based on a systolic array architecture that uses both edges and pixel data. This more advanced vision system addresses some problems of previous stereo vision systems: it decreases noise and improves the matching rate by combining edge and pixel data, and it improves processing speed through a highly integrated single-chip FPGA and compact modules. The system can be applied to robot vision, autonomously controlled vehicles, and artificial satellites.

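The matching cost the abstract combines, pixel data plus edge data, can be sketched for a single scanline as follows. This is an illustrative software sketch, not the authors' systolic-array hardware design; all names and parameters are assumptions.

```python
# Illustrative scanline stereo matcher: sum-of-absolute-differences (SAD)
# over a small window, with a simple edge term added to the pixel cost.
# All names and parameters here are illustrative, not from the paper.

def gradient(row):
    """Horizontal intensity gradient of one image row (edge strength)."""
    return [abs(row[min(i + 1, len(row) - 1)] - row[i]) for i in range(len(row))]

def match_scanline(left, right, max_disp=4, window=1, edge_weight=0.5):
    """Return, for each pixel of `left`, the disparity with the lowest
    combined pixel + edge cost against `right` (single scanline)."""
    gl, gr = gradient(left), gradient(right)
    w = len(left)
    disparities = []
    for x in range(w):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = 0.0
            for k in range(-window, window + 1):
                i, j = x + k, x - d + k
                if 0 <= i < w and 0 <= j < w:
                    cost += abs(left[i] - right[j])            # pixel data term
                    cost += edge_weight * abs(gl[i] - gr[j])   # edge data term
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities
```

The edge term rewards matches whose local gradients agree, which is one way to reduce the noise-induced mismatches the abstract mentions.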

Development of multi-object image processing algorithm in an image plane (한 이미지 평면에 있는 다물체 화상처리 기법 개발)

  • 장완식;윤현권;김재확
    • Institute of Control, Robotics and Systems Conference Proceedings / 2000.10a / pp.555-555 / 2000
  • This study concentrates on the development of a high-speed multi-object image processing algorithm; based on this algorithm, a vision control scheme is developed for real-time robot position control. Recently, the use of vision systems in robot position control has increased rapidly. To apply a vision system to robot position control, it is necessary to transform the physical coordinates of an object into the image information acquired by a CCD camera, a step called image processing. Thus, to control the robot's point position in real time, the center point of each object in the image plane must be known. In particular, for a rigid body, the center points of multiple objects must be calculated in one image plane at the same time. To solve these problems, a multi-object algorithm for rigid-body control is developed.

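The core step the abstract describes, computing the center points of several objects in one image plane at the same time, can be sketched as connected-component labeling followed by per-component centroids. The binary-grid input format and names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: find the center point of each object in one binary
# image plane via 4-connected component labeling, then per-label centroids.

from collections import deque

def object_centers(binary):
    """binary: 2-D list of 0/1. Returns a list of (row, col) centroids,
    one per 4-connected foreground component."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    centers = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Breadth-first flood fill over one object
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Centroid = mean pixel position of the object
                centers.append((sum(p[0] for p in pixels) / len(pixels),
                                sum(p[1] for p in pixels) / len(pixels)))
    return centers
```

One pass over the image yields every object's center simultaneously, which is the requirement the abstract states for rigid-body control.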

A study of a model-based stereo vision system for remote control in an unstructured environment on networks (네트워크 상에서 비구성 환경의 원격제어를 위한 모델 기반의 스테레오 비전 시스템에 관한 연구)

  • Yi, Hyoung-Guk;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 1998.07g / pp.2246-2248 / 1998
  • Controlling a remote system in an unstructured environment requires environment data under certain circumstances. When a machine deals with an unstructured environment, a new model of that environment's structure must be composed. A stereo vision system can acquire both intensity data and range data, so in this paper a data architecture for stereo images is proposed to organize them.

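A minimal sketch of the kind of data architecture the abstract proposes, pairing the two quantities a stereo head yields per pixel, intensity and range; the field and function names are illustrative assumptions, not the paper's design.

```python
# Pairing per-pixel intensity and range (depth) data into one frame
# structure, as the abstract says a stereo vision system provides both.

from dataclasses import dataclass

@dataclass
class StereoPixel:
    intensity: float  # brightness from the image pair
    range_m: float    # triangulated distance to the surface, in meters

def make_stereo_frame(intensities, ranges):
    """Combine per-pixel intensity and range arrays into one frame."""
    return [[StereoPixel(i, r) for i, r in zip(irow, rrow)]
            for irow, rrow in zip(intensities, ranges)]
```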

A Portable Micro-display Driver and Device for Vision Improvement (시력 향상을 위한 휴대형 마이크로디스플레이 구동 드라이버 및 장치)

  • Ryu, Young-Kee;Oh, Choonsuk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.3 / pp.129-135 / 2016
  • There are many visual enhancement devices for people with low vision. However, most conventional devices offer only simple magnification and are expensive, while the symptoms of low vision vary widely; improving visibility requires control of image magnification, brightness, and contrast. We developed a portable micro-display driver and device for visual enhancement. The device is based on four proposed methods: image magnification, specific color control, backlight (BLU) brightness control, and visual-axis control using a prism. Basic clinical experiments with the proposed Head Mounted Visual Enhancement Device (HMVED) were performed. The results show beneficial effects compared with conventional devices, and the device can improve quality of life for people with low vision thanks to its low weight, low cost, and easy portability.
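Two of the adjustments the abstract lists, magnification and brightness/contrast control, can be sketched per pixel as follows. The 0-255 grayscale range and the function names are illustrative assumptions, not the device's driver code.

```python
# Per-pixel display adjustments for a low-vision aid (illustrative sketch).

def adjust(image, brightness=0, contrast=1.0):
    """Apply contrast (gain about mid-gray 128) then brightness (offset),
    clamping each pixel to the 0-255 display range."""
    return [[max(0, min(255, round((p - 128) * contrast + 128 + brightness)))
             for p in row] for row in image]

def magnify(image, factor):
    """Integer nearest-neighbor magnification (factor x factor zoom)."""
    out = []
    for row in image:
        wide = [p for p in row for _ in range(factor)]  # repeat each column
        out.extend([wide] * factor)                     # repeat each row
    return out
```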

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fish-eye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on robot modeling and a curved movement path, and recent research has optimized these algorithms for actual robots. However, comparatively little work has used omni-directional vision SLAM, which acquires the surrounding information at once. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the surroundings of the obstacles. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the positions computed by the proposed algorithm and the real positions collected while avoiding the obstacles.
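The detection step can be sketched as follows: estimate per-block image motion between two frames (a simple block-matching stand-in for the Lucas-Kanade optical flow the paper uses) and steer away from the side of the image with the larger flow magnitude, since nearer obstacles produce larger apparent motion. All thresholds and names are illustrative assumptions, not from the paper.

```python
# Block-matching motion estimate and a left/right steering decision
# (illustrative stand-in for optical-flow-based obstacle avoidance).

def block_flow(prev, curr, block=2, search=1):
    """Return (block_x, (dy, dx)) motion for each block of `curr`
    relative to `prev`, by exhaustive search over small shifts."""
    rows, cols = len(prev), len(prev[0])
    flows = []
    for by in range(0, rows - block + 1, block):
        for bx in range(0, cols - block + 1, block):
            best, best_cost = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cost = 0
                    for y in range(block):
                        for x in range(block):
                            py, px = by + y + dy, bx + x + dx
                            if 0 <= py < rows and 0 <= px < cols:
                                cost += abs(curr[by + y][bx + x] - prev[py][px])
                            else:
                                cost += 255  # penalize out-of-frame shifts
                    if cost < best_cost:
                        best, best_cost = (dy, dx), cost
            flows.append((bx, best))
    return flows

def steer(flows, width):
    """Turn away from the half of the image with the larger total flow."""
    left = sum(abs(dy) + abs(dx) for bx, (dy, dx) in flows if bx < width // 2)
    right = sum(abs(dy) + abs(dx) for bx, (dy, dx) in flows if bx >= width // 2)
    return "turn_right" if left > right else "turn_left"
```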

Development of an Ethernet-based underwater remote image control system for a deep-sea ROV (심해용 ROV를 위한 수중 원격 영상제어 시스템 개발)

  • Kim, Hyun-Hee;Jeong, Ki-Min;Park, Chul-Soo;Lee, Kyung-Chang;Hwang, Yeong-Yeun
    • Journal of the Korean Society of Industry Convergence / v.21 no.6 / pp.389-394 / 2018
  • Remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) have been used for underwater surveys, underwater exploration, resource harvesting, offshore plant maintenance and repair, and underwater construction. Because it is hard for people to work in the deep sea, a vision control system for underwater vehicles is needed that can replace human eyes. However, developing a deep-sea image control system is difficult due to the special deep-sea environment: high pressure, brine, waterproofing, and communication constraints. In this paper, we develop an Ethernet-based remote image control system that can control the camera mounted on an ROV.
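A system like this must move camera frames and control commands over one Ethernet/TCP link, and TCP delivers a byte stream with no message boundaries. A common remedy is a length-prefixed framing protocol, sketched below; the header layout (4-byte length plus 1-byte channel id) and the channel names are assumptions for illustration, not the protocol the authors implemented.

```python
# Length-prefixed message framing over a TCP byte stream (illustrative).

import struct

CH_VIDEO, CH_CONTROL = 0, 1  # hypothetical channel ids

def encode_message(channel, payload):
    """Prefix payload with its length and a channel id so the receiver
    can split the TCP byte stream back into discrete messages."""
    return struct.pack(">IB", len(payload), channel) + payload

def decode_messages(stream):
    """Split a received byte stream into (channel, payload) messages."""
    out, offset = [], 0
    while offset + 5 <= len(stream):
        length, channel = struct.unpack_from(">IB", stream, offset)
        offset += 5
        out.append((channel, stream[offset:offset + length]))
        offset += length
    return out
```

Multiplexing video and control on one framed stream is one way to cope with the single-tether communication constraint the abstract mentions.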

Optical Flow Based Collision Avoidance of Multi-Rotor UAVs in Urban Environments

  • Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.252-259 / 2011
  • This paper is focused on dynamic modeling and control system design as well as vision based collision avoidance for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-winged UAVs with multiple rotors. These multi-rotor UAVs can be utilized in various military situations such as surveillance and reconnaissance. They can also be used for obtaining visual information from steep terrains or disaster sites. In this paper, a quad-rotor model is introduced as well as its control system, which is designed based on a proportional-integral-derivative controller and vision-based collision avoidance control system. Additionally, in order for a UAV to navigate safely in areas such as buildings and offices with a number of obstacles, there must be a collision avoidance algorithm installed in the UAV's hardware, which should include the detection of obstacles, avoidance maneuvering, etc. In this paper, the optical flow method, one of the vision-based collision avoidance techniques, is introduced, and multi-rotor UAV's collision avoidance simulations are described in various virtual environments in order to demonstrate its avoidance performance.
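The attitude controller the abstract describes is built on a proportional-integral-derivative (PID) law, whose discrete form can be sketched as follows. The gains and time step are illustrative assumptions, not the paper's tuned values.

```python
# Minimal discrete PID controller (illustrative sketch).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, None

    def update(self, setpoint, measurement):
        """One control step: returns the actuator command."""
        error = setpoint - measurement
        self.integral += error * self.dt                  # I term accumulates
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt      # D term on error rate
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a quad-rotor, one such loop per axis (roll, pitch, yaw, altitude) maps the error between commanded and measured attitude to rotor speed differences.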

Development of a Web-based Die Discrimination System by Matching Vision Information with a CAD Database (비전정보와 캐드 DB 의 매칭을 통한 웹기반 금형판별 시스템 개발)

  • 김세원;김동우;전병철;조명우
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.277-280 / 2004
  • In the recent die industry, web-based production control systems are widely applied thanks to improvements in IT, and as a result many studies on long-distance remote monitoring have been published. The goal of this study is to develop a die discrimination system, using web-based vision and a CAD API, that lets a client discriminate dies in process from a distance. A special feature of this system is that it matches 2-D vision images with the CAD database. With this system, we can obtain adequate discrimination results in a short time, at somewhat low precision, in web monitoring.

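The matching step the abstract describes, comparing information extracted from the 2-D vision image against CAD database records, can be sketched as nearest-neighbor matching on a feature vector. The feature set (here, illustrative dimensions and a hole count) and the tolerance are assumptions, not the authors' method.

```python
# Nearest-record matching of a measured feature vector against a CAD
# database (illustrative sketch of vision-to-CAD die discrimination).

def discriminate_die(measured, cad_db, tolerance=5.0):
    """measured: feature vector from the vision image; cad_db maps
    die_id -> feature vector. Returns the id of the nearest record
    within tolerance, or None if nothing matches closely enough."""
    best_id, best_dist = None, tolerance
    for die_id, features in cad_db.items():
        dist = sum((m - f) ** 2 for m, f in zip(measured, features)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = die_id, dist
    return best_id
```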

Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from the points. The least-squares method is used to find estimates of the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world points. Experimental results prove the feasibility of the proposed algorithm.
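The estimation step relies on least squares over equations built from the lane control points. As a minimal stand-in, the sketch below solves an overdetermined linear system A x ≈ b via the normal equations (AᵀA) x = Aᵀb; the paper's eight equations are nonlinear, so in practice they would be linearized and this step iterated, but the core computation looks like this. All names are illustrative.

```python
# Linear least squares via the normal equations, solved by Gaussian
# elimination with partial pivoting (small dense systems only).

def least_squares(A, b):
    """Return x minimizing ||A x - b||_2 for an overdetermined system."""
    n = len(A[0])
    # Build the normal equations (A^T A) x = A^T b
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For example, fitting a lane line y = a·x + c through three image points reduces to exactly this call with a 3×2 matrix A.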

A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems / v.22 no.8 / pp.633-642 / 2016
  • For advanced active safety systems and autonomous cars, accurate estimates of the states of nearby vehicles are important to increase safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of preceding vehicles. In particular, we study compensating for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models, here a lateral-compensation mode and a radar-only mode. In addition, a Probabilistic Data Association Filter (PDAF) is used for data association to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method in the Kalman filter efficiently fuses both the radar and vision measurements into a single state estimate. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
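The two-step correction idea can be sketched with a scalar Kalman filter that first corrects its lateral-position estimate with the radar measurement and then corrects again with the vision measurement, each weighted by its noise variance. The full IMM-PDAF machinery (multiple motion models, probabilistic data association) is omitted, and all noise values are illustrative assumptions.

```python
# Scalar Kalman filter with sequential (two-step) measurement correction.

class ScalarKalman:
    def __init__(self, x0, p0, q):
        self.x, self.p, self.q = x0, p0, q  # state, variance, process noise

    def predict(self):
        self.p += self.q  # random-walk motion model

    def correct(self, z, r):
        """Fuse one measurement z with noise variance r."""
        k = self.p / (self.p + r)          # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x

    def step(self, z_radar, r_radar, z_vision, r_vision):
        """Predict, then apply the two corrections in sequence."""
        self.predict()
        self.correct(z_radar, r_radar)            # radar: noisy laterally
        return self.correct(z_vision, r_vision)   # vision: sharper laterally
```

Applying the corrections sequentially is equivalent to a joint update when the two sensors' noises are independent, which is what makes the two-step form efficient.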