• Title/Summary/Keyword: Vision-based Control

A Study on Vision Based Gesture Recognition Interface Design for Digital TV (동작인식기반 Digital TV인터페이스를 위한 지시동작에 관한 연구)

  • Kim, Hyun-Suk;Hwang, Sung-Won;Moon, Hyun-Jung
    • Archives of Design Research, v.20 no.3 s.71, pp.257-268, 2007
  • The development of human-computer interfaces has relied on the development of technology. Mice and keyboards are the most popular HCI devices for personal computing. However, device-based interfaces are quite different from human-to-human interaction and are very artificial. Developing more intuitive interfaces that mimic human-to-human interaction has been a major research topic among HCI researchers and engineers. Meanwhile, technology in the TV industry has developed rapidly, and the market penetration of large-screen TVs has increased sharply; HDTV and digital TV broadcasting are being tested. These changes in the TV environment call for changes in the human-to-TV interface. A gesture recognition-based interface with a computer vision system can replace the remote-control-based interface because of its immediacy and intuitiveness. This research focuses on how people use their hands or arms for command gestures. A set of gestures for TV set-up control is sampled through focus group interviews and surveys. The results of this paper can be used as a reference for designing a computer vision based TV interface.

A seam tracking algorithm based on laser vision (레이저 카메라를 이용한 용접선의 추적)

  • Cho, Hyun-Joong;Ryu, Hyun;Oh, Se-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings, 1996.10b, pp.593-596, 1996
  • A seam tracking control system comprising tool position control and camera orientation control has been developed. For the camera orientation control, a self-organizing fuzzy neural network (SOFNN) was used to learn the expert's control signal. The SOFNN algorithm can adjust the fuzzy set parameters and determine the fuzzy logic structure.
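
The abstract gives no SOFNN equations; the sketch below only illustrates the underlying idea of learning to reproduce an expert control signal with a fuzzy model whose parameters are tuned from data. The rule count, training data, and expert signal are hypothetical, and the structure-learning step of a true SOFNN is omitted.

```python
# Minimal sketch: a zero-order Takagi-Sugeno fuzzy model with Gaussian
# antecedents, trained by gradient descent to imitate a recorded expert
# camera-orientation command. Illustrative only, not the paper's SOFNN.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-1.0, 1.0, 200)        # tracking error (normalized, hypothetical)
y_expert = np.tanh(3.0 * x)            # hypothetical expert control signal

n_rules = 7
c = np.linspace(-1.0, 1.0, n_rules)    # Gaussian membership centers
s = np.full(n_rules, 0.3)              # membership widths
w = rng.normal(0.0, 0.1, n_rules)      # rule consequents (learned)

def predict(x):
    mu = np.exp(-((x[:, None] - c) ** 2) / (2 * s ** 2))  # firing strengths
    return (mu * w).sum(axis=1) / mu.sum(axis=1)

lr = 0.5
for _ in range(2000):
    mu = np.exp(-((x[:, None] - c) ** 2) / (2 * s ** 2))
    norm = mu / mu.sum(axis=1, keepdims=True)
    err = norm @ w - y_expert
    w -= lr * (norm * err[:, None]).mean(axis=0)  # gradient step on consequents

print("RMS imitation error:", np.sqrt(np.mean((predict(x) - y_expert) ** 2)))
```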

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems, v.16 no.4, pp.381-390, 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. A combination of sensors with different characteristics and limited sensing capability offers advantages in terms of complementarity and cooperation, yielding better information on the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from input camera images, which serve as natural landmark points in the self-localization process. In the case of the laser structured light sensor, the robot utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
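
The paper's reliability functions are determined experimentally and are not given in the abstract; as a generic illustration of Bayesian fusion of two position estimates, the sketch below combines two Gaussian measurements by inverse-covariance (information-filter) weighting. All numbers are hypothetical.

```python
# Hedged sketch of Bayesian fusion of two 2-D position estimates: each sensor
# reports a mean and covariance, and the fused estimate is the standard
# Gaussian-product combination. Not the paper's exact formulation.
import numpy as np

def fuse(mu_a, cov_a, mu_b, cov_b):
    """Fuse two Gaussian estimates via inverse-covariance weighting."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mu = cov @ (info_a @ mu_a + info_b @ mu_b)
    return mu, cov

# Hypothetical estimates: monocular vision (edge-line landmarks) and
# laser structured light (corner/plane landmarks).
mu_vis, cov_vis = np.array([1.02, 0.48]), np.diag([0.04, 0.09])
mu_las, cov_las = np.array([0.97, 0.52]), np.diag([0.01, 0.02])

mu, cov = fuse(mu_vis, cov_vis, mu_las, cov_las)
print("fused position:", mu)
```

The better-conditioned sensor dominates each axis, which is the complementarity argument the abstract makes.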

Development of Vision Based Steering System for Unmanned Vehicle Using Robust Control

  • Jeong, Seung-Gweon;Lee, Chun-Han;Park, Gun-Hong;Shin, Taek-Young;Kim, Ji-Han;Lee, Man-Hyung
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2003.10a, pp.1700-1705, 2003
  • In this paper, an automatic steering system for an unmanned vehicle is developed. A vision system is used for lane detection. The paper defines two modes for detecting lanes on a road: a searching mode and a recognition mode. Inverse perspective transform and a linear approximation filter are used for accurate lane detection. PD control theory is used to design a controller for comparison with $H_{\infty}$ control theory, which is used to design a controller that attenuates disturbances. The performance of the PD and $H_{\infty}$ controllers is compared in simulations and tests: the PD controller is easy to tune on the test site, while the $H_{\infty}$ controller is robust to disturbances in the test results.
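
As a rough illustration of the PD baseline only (not the paper's tuned controller; the gains, speed, and simple kinematic bicycle model are assumptions), a minimal lane-keeping sketch:

```python
# PD steering on the lateral lane offset, closed around a kinematic
# bicycle model. Hypothetical gains and vehicle parameters.
import numpy as np

kp, kd = 2.0, 0.8            # hypothetical PD gains
dt, v = 0.02, 10.0           # time step [s], vehicle speed [m/s]

y, y_prev, heading = 1.0, 1.0, 0.0   # initial lateral offset [m]
for _ in range(500):
    y_dot = (y - y_prev) / dt
    steer = np.clip(-(kp * y + kd * y_dot), -0.5, 0.5)  # steering cmd [rad]
    y_prev = y
    heading += v * np.tan(steer) / 2.7 * dt  # bicycle model, 2.7 m wheelbase
    y += v * np.sin(heading) * dt
print("final lateral offset [m]:", round(y, 4))
```

An $H_{\infty}$ design would replace the two gains with a synthesized dynamic controller shaped against a disturbance model, which is the robustness trade the abstract reports.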

Extraction of tire information markings using a surface reflection model (표면의 반사 특성을 이용한 타이어 정보 마크의 추출)

  • Ha, Jong-Eun;Lee, Jae-Yong;Gwon, In-So
    • Journal of Institute of Control, Robotics and Systems, v.2 no.4, pp.324-329, 1996
  • In this paper, we present a vision algorithm to extract the information markings on the sidewall of tires. Since tire marks have the same color as their background, the primary feature distinguishing them from the background is roughness: the surface of tire marks is generally smoother than the background. Light incident on the tire surface is reflected differently according to this roughness, and light reflected from smoother surfaces is much stronger than that from rougher surfaces. Based on these phenomena and observations, we propose an optimal illumination condition based on the Torrance-Sparrow reflection model. We also develop an efficient reflectance-ratio based operator to extract the boundaries of tire marks. Even with a very simple masking operation, we were able to obtain remarkable boundary extraction results in real experiments using many tires. By explicitly using a surface reflection model to explain the intensity variation on the black tire surface, we demonstrate that a physics-based vision method is powerful and feasible for extracting surface markings on tires.
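
The paper's operator and illumination geometry cannot be reproduced from the abstract alone; the sketch below only illustrates the reflectance-ratio idea, flagging boundaries where the ratio of average intensities in adjacent half-windows deviates from one. The window size, threshold, and synthetic image are assumptions.

```python
# Reflectance-ratio style boundary operator (illustrative). The ratio of
# left/right half-window means is ~1 inside a uniform region and deviates
# at a boundary between surfaces of different roughness/brightness.
import numpy as np

def reflectance_ratio_rows(img, half=3, eps=1e-6):
    """Return a per-pixel horizontal intensity-ratio map."""
    img = img.astype(np.float64)
    h, w = img.shape
    ratio = np.ones_like(img)
    for x in range(half, w - half):
        left = img[:, x - half:x].mean(axis=1)
        right = img[:, x:x + half].mean(axis=1)
        ratio[:, x] = (left + eps) / (right + eps)
    return ratio

# Synthetic example: a brighter (smoother) mark on a dark rough background.
img = np.full((8, 40), 20.0)
img[:, 15:25] = 60.0
boundaries = np.abs(np.log(reflectance_ratio_rows(img))) > 0.2
print("boundary columns:", np.unique(np.where(boundaries)[1]))
```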

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok;Lim, Mee-Seub;Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems, v.17 no.3, pp.206-210, 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information involved, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows: First, we utilize an omni-directional camera that can capture instantaneous $360^{\circ}$ panoramic images around the robot. Second, nodes around the robot are extracted using the correlation coefficients of the circular horizontal line between the landmark images and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are assigned using color information, and the correlation values are calculated using fast Fourier transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
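
The FFT-based correlation speed-up mentioned above can be illustrated with a 1-D circular signature; signal content, noise level, and normalization here are assumptions.

```python
# Circular normalized cross-correlation of two horizontal-line signatures
# via the FFT; the argmax recovers the relative rotation between views.
import numpy as np

def circular_correlation(a, b):
    """Circular normalized cross-correlation of equal-length 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return corr / len(a)

# Hypothetical 360-sample panoramic line, and the same scene rotated 40 deg.
rng = np.random.default_rng(1)
line = np.sin(np.linspace(0, 4 * np.pi, 360)) + 0.1 * rng.normal(size=360)
rotated = np.roll(line, 40)
shift = int(np.argmax(circular_correlation(rotated, line)))
print("estimated rotation [deg]:", shift)
```

The FFT form costs O(n log n) instead of the O(n^2) of sliding-window correlation, which is the acceleration the abstract refers to.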

Application of the Laser Vision Sensor for Corrugated Type Workpiece

  • Lee, Ji-Hyoung;Kim, Jae-Gwon;Kim, Jeom-Gu;Park, In-Wan;Kim, Hyung-Shik
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2004.08a, pp.499-503, 2004
  • This application-oriented paper describes an automated welding carriage system that welds thin corrugated workpieces with a welding seam tracking function. Hyundai Heavy Industries has developed an automatic welding carriage system that utilizes a pulsed plasma arc welding process for corrugated sheets; it achieves welding speeds more than two times faster than a traditional TIG-based welding system. The aims of this development are to increase productivity through automatic plasma welding carriage systems, to track the weld seam line automatically using a vision sensor, and to make welding more convenient for the operator. In this paper, a robust image processing algorithm and a distance-based tracking algorithm are introduced for welding corrugated workpieces. The automatic welding carriage is controlled by a programmable logic controller (PLC), and the automatic welding seam tracking system is controlled by an industrial personal computer (IPC) running an embedded OS. The system was tested on actual workpieces to show the feasibility and performance of the proposed algorithm and to confirm the reliability of the developed controller.
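
The abstract does not detail the distance-based tracking algorithm; the sketch below shows only a generic laser-stripe seam-finding step of the kind such a sensor performs, on a synthetic image with hypothetical geometry.

```python
# Illustrative laser-stripe seam tracking: find the brightest row of the
# laser line in each image column, then take the column of maximum
# deviation from the baseline as the seam point. Not the paper's algorithm.
import numpy as np

def seam_offset(stripe_img, center_col):
    """Estimate seam lateral offset [px] from a laser-stripe image."""
    rows = np.argmax(stripe_img, axis=0)           # laser-line row per column
    baseline = np.median(rows)
    seam_col = int(np.argmax(np.abs(rows - baseline)))  # largest deflection
    return seam_col - center_col

# Synthetic stripe: flat line at row 10 with a groove (deflection) at col 37.
img = np.zeros((30, 64))
img[10, :] = 1.0
img[10, 37] = 0.0
img[16, 37] = 1.0                                  # groove pulls line down
print("offset from carriage center [px]:", seam_offset(img, center_col=32))
```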

Guidance Law for Vision-Based Automatic Landing of UAV

  • Min, Byoung-Mun;Tahk, Min-Jea;Shim, Hyun-Chul David;Bang, Hyo-Choong
    • International Journal of Aeronautical and Space Sciences, v.8 no.1, pp.46-53, 2007
  • In this paper, a guidance law for vision-based automatic landing of unmanned aerial vehicles (UAVs) is proposed. Automatic landing is a challenging but crucial capability for UAVs to achieve fully autonomous flight. In an autonomous landing maneuver, deciding where to land and generating the guidance commands to achieve a successful landing are significant problems. This paper focuses on the design of a guidance law applicable to the automatic landing problem of both fixed-wing and rotary-wing UAVs. The proposed guidance law generates an acceleration command as the control input, derived from a specified time-to-go ($t_{go}$) polynomial function. The coefficients of the $t_{go}$-polynomial function are determined to satisfy terminal constraints. Nonlinear simulation results using fixed-wing and rotary-wing UAV models are presented.
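
The abstract does not give the paper's coefficients; the sketch below uses the standard minimum-effort form satisfying terminal position and velocity constraints on a double-integrator axis, $a = 6(x_f - x)/t_{go}^2 - (4v + 2v_f)/t_{go}$, as one common instance of a $t_{go}$-polynomial guidance command.

```python
# One-axis t_go-polynomial guidance sketch (standard minimum-effort law,
# not necessarily the paper's coefficients): the command drives the state
# to the terminal position x_f with terminal velocity v_f at t_f.
import numpy as np

def guidance_accel(x, v, x_f, v_f, t_go):
    """Acceleration command meeting x(t_f)=x_f and v(t_f)=v_f."""
    return 6.0 * (x_f - x) / t_go**2 - (4.0 * v + 2.0 * v_f) / t_go

# Hypothetical vertical landing channel: start 50 m up, touch down softly.
dt, t_f = 0.01, 10.0
x, v, t = 50.0, 0.0, 0.0          # altitude [m], vertical speed [m/s]
while t_f - t > dt:               # stop just before t_go -> 0
    a = guidance_accel(x, v, x_f=0.0, v_f=0.0, t_go=t_f - t)
    v += a * dt
    x += v * dt
    t += dt
print(f"terminal altitude {x:.3f} m, terminal speed {v:.3f} m/s")
```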

Automatic detection system for surface defects of home appliances based on machine vision (머신비전 기반의 가전제품 표면결함 자동검출 시스템)

  • Lee, HyunJun;Jeong, HeeJa;Lee, JangGoon;Kim, NamHo
    • Smart Media Journal, v.11 no.9, pp.47-55, 2022
  • Quality control is an important factor in the smart factory manufacturing process. Currently, quality inspection of home appliance parts produced by the mold process is mostly performed with the naked eye by operators, resulting in a high inspection error rate. To improve quality competitiveness, an automatic defect detection system was designed and implemented. The proposed system acquires an image by photographing an object with a high-performance scan camera at a specific location and identifies products that are defective due to scratches, dents, or foreign substances according to a vision inspection algorithm. In this study, a depth-based branch decision (DBD) algorithm was developed to increase the recognition rate of scratch defects, and the accuracy was improved.
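
The DBD algorithm itself is not described in the abstract; the sketch below shows only a generic surface-defect screening step (local-mean residual thresholding) of the kind such an inspection pipeline might include. The window size, threshold, and synthetic image are assumptions.

```python
# Generic scratch screening: flag pixels deviating strongly from a local
# box-filtered mean, computed with an integral image. Illustrative only.
import numpy as np

def defect_mask(img, thresh=25.0, win=9):
    """Flag pixels deviating strongly from a local mean (possible defects)."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)   # integral image
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    local = (ii[win:win+h, win:win+w] - ii[:h, win:win+w]
             - ii[win:win+h, :w] + ii[:h, :w]) / (win * win)
    return np.abs(img - local) > thresh

# Synthetic part surface: uniform gray with a thin dark scratch.
img = np.full((64, 64), 128.0)
img[20, 10:50] = 60.0
print("defective pixels:", int(defect_mask(img).sum()))
```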

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.20 no.1, pp.70-77, 2014
  • This paper proposes a novel localization algorithm based on ego-motion, using Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the raw image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot's position and angle with an ego-motion method that uses the directions of the motion vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the position and angle it estimates with those measured by a global vision localization system.
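
A minimal sketch of the Lucas-Kanade flow-tracking step using OpenCV; synthetic frames stand in for the fisheye-warped panoramas, and the warping and RANSAC vanishing-point steps are omitted.

```python
# Pyramidal Lucas-Kanade optical flow between two consecutive frames:
# detect corners in the previous frame, track them into the current frame,
# and report the motion vectors that feed the ego-motion estimate.
import cv2
import numpy as np

# Synthetic pair of "warped" frames: a random texture shifted by (3, 1) px.
rng = np.random.default_rng(0)
base = (rng.random((240, 320)) * 255).astype(np.uint8)
prev = cv2.GaussianBlur(base, (5, 5), 1.5)
curr = np.roll(prev, shift=(1, 3), axis=(0, 1))

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                         winSize=(21, 21), maxLevel=3)
good = status.flatten() == 1
vectors = (p1[good] - p0[good]).reshape(-1, 2)
print("tracked points:", int(good.sum()),
      "mean flow (x, y):", vectors.mean(axis=0))
```

In the paper's setting, the dominant direction of these vectors and the RANSAC vanishing point would then yield the robot's translation and rotation between frames.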