• Title/Abstract/Keyword: Vision Systems

1,734 search results

Light Source Target Detection Algorithm for Vision-based UAV Recovery

  • Won, Dae-Yeon;Tahk, Min-Jea;Roh, Eun-Jung;Shin, Sung-Sik
    • International Journal of Aeronautical and Space Sciences / Vol. 9 No. 2 / pp. 114-120 / 2008
  • In the vision-based recovery phase, terminal guidance for a blended-wing UAV requires highly accurate visual information. This paper presents a light source target design and detection algorithm for vision-based UAV recovery. We propose a recovery target design with red and green LEDs; this target frame provides the relative position between the target and the UAV. The target detection algorithm combines HSV-based segmentation, morphology, and blob processing, techniques chosen to give efficient detection results in both day and night net-recovery operations. The performance of the proposed target design and detection algorithm is evaluated through ground-based experiments.
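
The abstract names a three-stage pipeline (HSV-based segmentation, morphology, blob processing) without implementation details. Below is a minimal OpenCV sketch of such a pipeline; the HSV thresholds and minimum blob area are hypothetical values, not the paper's.

```python
import cv2
import numpy as np

def detect_led_targets(bgr):
    """Return centroids of red and green LED blobs in a BGR frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Hypothetical HSV ranges: bright LEDs have high saturation and value.
    # Red wraps around hue 0, so it needs two ranges.
    red = cv2.inRange(hsv, np.array([0, 120, 150]), np.array([10, 255, 255])) | \
          cv2.inRange(hsv, np.array([170, 120, 150]), np.array([180, 255, 255]))
    green = cv2.inRange(hsv, np.array([45, 120, 150]), np.array([75, 255, 255]))

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    centroids = {}
    for name, mask in (("red", red), ("green", green)):
        # Morphology: opening removes speckle, closing fills LED cores.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # Blob processing: connected components filtered by area.
        n, _, stats, cents = cv2.connectedComponentsWithStats(mask)
        centroids[name] = [tuple(cents[i]) for i in range(1, n)
                           if stats[i, cv2.CC_STAT_AREA] > 20]
    return centroids
```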

노광시스템을 위한 자동 정렬 비젼시스템 (An Automatic Visual Alignment System for an Exposure System)

  • 조태훈;서재용
    • Journal of the Semiconductor & Display Technology / Vol. 6 No. 1 / pp. 43-48 / 2007
  • For exposure systems, very accurate alignment between the mask and the substrate is indispensable. In this paper, an automatic alignment system using machine vision for exposure systems is described. The machine vision algorithms are described in detail, including extraction of an alignment mark's center position and camera calibration. Methods for extracting alignment parameters are also presented, with compensation techniques to reduce alignment time. Our alignment system was implemented with a vision system and motion control stages, and its performance has been extensively tested with satisfactory results. The performance evaluation shows an alignment accuracy of 1 μm within a total alignment time of about 2~3 seconds, including stage moving time.

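The abstract mentions extracting an alignment mark's center position without giving the method. A minimal sketch of one common approach follows, assuming a dark mark on a bright substrate; the Otsu thresholding and centroid-of-moments steps are illustrative choices, not the paper's algorithm.

```python
import cv2

def mark_center(gray):
    """Estimate the sub-pixel center of an alignment mark (dark on bright)."""
    # Otsu's threshold separates the mark from the substrate automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    mark = max(contours, key=cv2.contourArea)   # assume the mark is the largest blob
    m = cv2.moments(mark)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of the mark

# The mask-to-substrate offset is the difference between two such centers,
# converted to stage units with a calibrated microns-per-pixel scale.
```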

지능형 철도 시스템 모델 개발을 위한 컬러비전 기반의 소형 기차 위치 측정 (Estimation of Miniature Train Location by Color Vision for Development of an Intelligent Railway System)

  • 노광현;한민홍
    • Journal of Institute of Control, Robotics and Systems / Vol. 9 No. 1 / pp. 44-49 / 2003
  • This paper describes a method of estimating miniature train location by color vision for the development of an intelligent railway system model. In the real world, GPS (Global Positioning System) is indispensable for determining train locations in automatic train control. In our indoor experiment, a color vision system was used instead to estimate the location of trains. Two different rectangular color bars were attached to the top of each train as a means of identification. Several trains were detected and located on the track using color features, geometric features, and moment invariants, and were tracked simultaneously. In the experiment, the identity, location, and direction of each train were estimated and transferred to the control computer over serial communication. A processing speed of up to 8 frames/sec was achieved, which was fast enough for real-time train control.
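
As a rough illustration of the color-bar detection the abstract describes, the sketch below locates one colored bar and returns its center and orientation. The HSV range argument and the use of cv2.minAreaRect are assumptions; identifying a specific train from the pairing of its two bar colors is omitted.

```python
import cv2
import numpy as np

def locate_color_bar(bgr, hsv_lo, hsv_hi):
    """Locate one colored ID bar; return (cx, cy, angle_deg) or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bar = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(bar)   # center, size, rotation
    if w < h:
        angle += 90.0          # report the orientation of the bar's long axis
    return cx, cy, angle
```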

도심 자율주행을 위한 비전기반 차선 추종주행 실험 (Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision)

  • 서승범;강연식;노치원;강성철
    • Journal of Institute of Control, Robotics and Systems / Vol. 15 No. 5 / pp. 480-487 / 2009
  • Vision-based lane detection is a difficult problem because of varying road conditions, such as shadowed road surfaces, changing light conditions, and signs painted on the road. In this paper we propose a robust lane detection algorithm that overcomes the shadow problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane with a lane following controller. In parallel with the lane following controller, the global position of the robot is estimated by the developed localization method to identify locations where the lane is discontinued. Experiments, conducted in a region where GPS measurements are unreliable, show good lane detection and following performance in complex conditions with shadows, water marks, and so on.
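
The abstract does not specify its statistical method, so the sketch below shows one generic shadow-tolerant idea: threshold each image row against its own mean and standard deviation, so lane paint stands out even when the whole row is shadowed. This is a stand-in for the paper's method, and the scale factor k is a hypothetical tuning parameter.

```python
import cv2
import numpy as np

def lane_mask(gray, k=2.0):
    """Flag pixels much brighter than their own image row.

    Row-wise statistics adapt to shading: a shadowed row has a lower
    mean, so lane paint still stands out relative to its surroundings.
    """
    g = gray.astype(np.float32)
    mean = g.mean(axis=1, keepdims=True)
    std = g.std(axis=1, keepdims=True) + 1e-6
    mask = ((g - mean) / std > k).astype(np.uint8) * 255
    # Opening suppresses isolated bright pixels that pass the test.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```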

레이저 비전을 이용한 3차원 측정 시스템 구현 (Development of a 3-Dimensional Measurement System using Laser Vision)

  • 권효근;천영석;서영수;노영식
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 56 No. 5 / pp. 973-979 / 2007
  • A laser vision system is developed to measure three-dimensional features of an object. The system consists of two low-cost cameras and a cross laser. One camera and the cross laser are used to estimate the plane equation of an object's surface; using this information, the other camera measures the size of a hole in the object. The proposed system provides 0.05 mm measurement accuracy at relatively low cost.
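
Given 3-D points triangulated along the two stripes of the cross laser (assumed already available), the plane equation can be recovered by linear least squares. The sketch below shows only that fitting step; the triangulation itself and the hole measurement are omitted, and the sample points are made up.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through N >= 3 points."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return a, b, c

# Example with points sampled along the two stripes of the cross laser:
pts = [(0, 0, 1.00), (10, 0, 1.02), (0, 10, 0.98), (10, 10, 1.00)]
print(fit_plane(pts))   # -> plane coefficients (a, b, c)
```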

Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • Institute of Control, Robotics and Systems Conference Proceedings / ICCAS 2005 / pp. 2379-2384 / 2005
  • We propose a novel method for object recognition using a smart tag system with stereo vision on a pan-tilt mechanism. We developed a smart tag that includes an IRED (infrared emitting diode) device; the smart tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object image is centered in each camera view. The stereo vision system on the pan-tilt mechanism can map the position of the IRED to the robot coordinate system using the pan-tilt angles. Then, to map the size and pose of the object to the robot coordinate system, we used a simple model-based vision algorithm. To increase the reliability of tag-based object recognition, we implemented our approach using techniques that are as easy and simple as possible.

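A minimal sketch of the coordinate mapping the abstract describes: once the pan-tilt head has centered the tag, the tag lies on the optical axis at the stereo-measured depth, and two rotations map it into the robot frame. The axis conventions and the head mounting offset below are assumptions, not the paper's.

```python
import numpy as np

def tag_in_robot_frame(pan_rad, tilt_rad, depth_m, head_pos=(0.0, 0.0, 1.2)):
    """Map a centered tag into the robot frame from pan-tilt angles.

    Assumes the servoing has centered the tag, so it lies on the optical
    axis (here the x-axis) at the stereo-measured depth; pan rotates
    about the robot z-axis, tilt about the intermediate y-axis. The
    head offset `head_pos` (meters) is a hypothetical mounting position.
    """
    cp, sp = np.cos(pan_rad), np.sin(pan_rad)
    ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])    # pan
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])    # tilt
    ray = np.array([depth_m, 0.0, 0.0])                      # along optical axis
    return Rz @ Ry @ ray + np.asarray(head_pos)

# depth_m itself would come from stereo triangulation of the IRED spot,
# e.g. depth = focal_length * baseline / disparity.
print(tag_in_robot_frame(np.deg2rad(30), np.deg2rad(-10), 2.0))
```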

날씨인식 결과를 이용한 GPS 와 비전센서기반 하이브리드 방식의 태양추적 시스템 개발 (A Hybrid Solar Tracking System using Weather Condition Estimates with a Vision Camera and GPS)

  • 유정재;강연식
    • Journal of Institute of Control, Robotics and Systems / Vol. 20 No. 5 / pp. 557-562 / 2014
  • It is well known that solar tracking systems can significantly increase the efficiency of existing solar panels. In this paper, a hybrid solar tracking system has been developed using both astronomical estimates from GPS and the image processing results of a camera vision system. A decision-making process is also proposed to determine current weather conditions from camera images. Based on the decision results, the proposed hybrid tracking system switches between two tracking control methods: one based on astronomical estimates of the current solar position, and the other based on the solar image processing result. The developed hybrid solar tracking system is implemented on an experimental platform, and the performance of the developed control methods is verified.
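
A minimal sketch of the switching logic the abstract describes is given below. The image-contrast test standing in for the paper's weather decision-making process, and its threshold, are hypothetical.

```python
import numpy as np

def solar_tracking_command(sky_gray, sun_centroid, astro_angles):
    """Choose between vision-based and GPS-based tracking commands.

    `sun_centroid` is the detected sun position in the image (or None),
    `astro_angles` the (azimuth, elevation) computed from GPS time and
    position. The contrast test below is a hypothetical stand-in for
    the paper's weather decision process.
    """
    clear_sky = sun_centroid is not None and float(np.std(sky_gray)) > 40.0
    if clear_sky:
        return "vision", sun_centroid       # steer toward the imaged sun
    return "astronomical", astro_angles     # fall back to the GPS estimate
```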

A Machine Vision System for Inspecting Tape-Feeder Operation

  • Cho Tai-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6 No. 2 / pp. 95-99 / 2006
  • A tape feeder of an SMD (Surface Mount Device) mounter is a device that sequentially feeds electronic components on a tape reel to the pick-up system of the mounter. As components get much smaller, feeding accuracy becomes one of the most important factors for successful component pick-up. Therefore, it is critical to keep the feeding accuracy at a specified level in the assembly and production of tape feeders. This paper describes a tape feeder inspection system that was developed to automatically measure and inspect feeding accuracy using machine vision. It consists of a feeder base, an image acquisition system, and a personal computer. The image acquisition system is composed of CCD cameras with lenses, LED illumination systems, and a frame grabber inside the PC. The system loads up to six feeders at a time and inspects them automatically and sequentially. The inspection software was implemented using Visual C++ on Windows with an easy-to-use GUI. Using this system, we can automatically measure and inspect the quality of all feeders in the production process by analyzing the measurement results statistically.
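
As a sketch of the statistical analysis step, the function below summarizes vision-measured feed pitches for one feeder and compares the worst-case error against a tolerance. The nominal 4 mm pitch and 0.05 mm tolerance are assumed values, not the paper's specification.

```python
import numpy as np

def feeder_report(pitches_mm, nominal_mm=4.0, tol_mm=0.05):
    """Summarize vision-measured feed pitches for one tape feeder."""
    err = np.asarray(pitches_mm, dtype=float) - nominal_mm
    report = {
        "mean_error_mm": float(err.mean()),
        "std_mm": float(err.std(ddof=1)),
        "max_abs_error_mm": float(np.abs(err).max()),
    }
    report["pass"] = report["max_abs_error_mm"] <= tol_mm
    return report

print(feeder_report([4.01, 3.98, 4.02, 4.00, 3.99]))
```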

능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정 (Localization of Mobile Robot Using Active Omni-directional Ranging System)

  • 류지형;김진원;이수영
    • Journal of Institute of Control, Robotics and Systems / Vol. 14 No. 5 / pp. 483-488 / 2008
  • An active omni-directional ranging system using omni-directional vision with structured light has many advantages over conventional ranging systems: robustness against external illumination noise because of the laser structured light, and computational efficiency because a single image contains 360° of environment information from the omni-directional vision. The omni-directional range data represent a local distance map at a given position in the workspace. In this paper, we propose an algorithm for matching the local distance map against a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environment objects, the matching algorithm is based on the relative position and orientation of line segments in the local and global maps. The effectiveness of the proposed omni-directional ranging system and matching algorithm is verified through experiments.
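
The sketch below illustrates the segment-matching idea in a simplified 2-D form: segment length is invariant to rotation and translation, so length agreement gates candidate pairs, and a consensus angle difference yields the robot's orientation offset. The tolerances are assumptions, and the translation estimate is omitted.

```python
import numpy as np

def segment_features(seg):
    """Length and orientation of a 2-D segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return np.hypot(x2 - x1, y2 - y1), np.arctan2(y2 - y1, x2 - x1)

def match_segments(local_segs, global_segs, len_tol=0.2, ang_tol=0.15):
    """Pair local-map segments with global-map segments.

    Length is invariant to rotation and translation, so it gates the
    candidate pairs; the robot's orientation offset is then the
    consensus angle difference over those pairs.
    """
    pairs = []
    for i, ls in enumerate(local_segs):
        llen, lang = segment_features(ls)
        for j, gs in enumerate(global_segs):
            glen, gang = segment_features(gs)
            if abs(llen - glen) / max(glen, 1e-9) < len_tol:
                diff = (gang - lang + np.pi) % (2 * np.pi) - np.pi
                pairs.append((i, j, diff))
    if not pairs:
        return [], 0.0
    rot = float(np.median([d for _, _, d in pairs]))   # robust rotation estimate
    pairs = [p for p in pairs
             if abs((p[2] - rot + np.pi) % (2 * np.pi) - np.pi) < ang_tol]
    return pairs, rot
```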

머신비젼 기반의 자율주행 차량을 위한 카메라 교정 (Camera Calibration for Machine Vision Based Autonomous Vehicles)

  • 이문규;안택진
    • Journal of Institute of Control, Robotics and Systems / Vol. 8 No. 9 / pp. 803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of the camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from these points. The least squares method is used to estimate the Group I parameters. Finally, the Group II parameters are determined using point correspondences between the image and the corresponding real-world points. Experimental results prove the feasibility of the proposed algorithm.
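
The sketch below mirrors the Group I estimation step in a simplified form: a least-squares fit of two orientation angles, the focal length, and a horizontal scale factor to control-point correspondences. The projection model and the synthetic data are illustrative assumptions, not the paper's exact formulation with six control points and eight equations.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, world_pts):
    """Simplified pinhole model: orientation (pitch about x, roll about z),
    focal length f (pixels), and horizontal scale factor s."""
    pitch, roll, f, s = params
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    pc = world_pts @ (Rx @ Rz).T              # camera looks along +z
    return np.column_stack([s * f * pc[:, 0] / pc[:, 2],
                            f * pc[:, 1] / pc[:, 2]])

def residuals(params, world_pts, image_pts):
    return (project(params, world_pts) - image_pts).ravel()

# Synthetic check: recover the parameters from noiseless projections of
# six control points (three per lane line, camera-centered coordinates).
truth = np.array([0.10, 0.02, 800.0, 1.02])
world = np.array([[x, -1.4, z] for x in (-1.75, 1.75) for z in (5.0, 10.0, 15.0)])
image = project(truth, world)
fit = least_squares(residuals, x0=[0.0, 0.0, 700.0, 1.0], args=(world, image))
print(np.round(fit.x, 4))   # approaches `truth`
```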