• Title/Abstract/Keyword: Vision-based

3,438 search results

스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템 (Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object)

  • 이충희;임영철;권순;이종훈
    • 대한임베디드공학회논문지 / Vol. 3 No. 3 / pp.143-150 / 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for moving objects. The system consists of stereo matching, obstacle detection, and distance estimation. In the stereo matching part, depth maps are obtained from real road images captured by an adjustable-baseline stereo vision system using the belief propagation (BP) algorithm. In the detection part, various obstacles are detected using only the depth map, with both the v-disparity and column detection methods, under real road conditions. Finally, in the estimation part, asymmetric parabola fitting with the NCC method improves the distance estimation for detected obstacles. This stereo vision system can be applied to many applications such as unmanned vehicles and robots.
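The v-disparity step described above lends itself to a compact illustration: each image row is histogrammed over its disparity values, so the road surface traces a slanted line and upright obstacles appear as strong vertical segments. The sketch below assumes a rectified stereo pair (placeholder file names left.png/right.png) and substitutes OpenCV's block matcher for the paper's BP matcher, so it conveys the idea rather than the published system.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair; the paper uses a BP matcher,
# here OpenCV's StereoBM stands in just to obtain a disparity map.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = matcher.compute(left, right) // 16          # fixed-point -> integer disparity

# v-disparity: for every image row, histogram the disparity values.
max_d = 64
v_disp = np.zeros((disp.shape[0], max_d), dtype=np.int32)
for v in range(disp.shape[0]):
    row = disp[v]
    row = row[(row > 0) & (row < max_d)]
    v_disp[v] = np.bincount(row, minlength=max_d)

# Rows where one disparity bin dominates strongly suggest an upright obstacle;
# the road surface instead shows up as a slanted line in v_disp.
obstacle_rows = np.where(v_disp.max(axis=1) > 0.5 * disp.shape[1])[0]
print("candidate obstacle rows:", obstacle_rows[:10])
```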


저하된 로봇 비전에서의 물체 인식을 위한 진화적 생성 기반의 컬러 검출 기법 (Evolutionary Generation Based Color Detection Technique for Object Identification in Degraded Robot Vision)

  • 김경태;서기성
    • 전기학회논문지 / Vol. 64 No. 7 / pp.1040-1046 / 2015
  • This paper introduces a GP (Genetic Programming)-based color detection model for object detection in humanoid robot vision. Existing color detection methods have used linear or nonlinear transformations of the RGB color model. However, in most cases they have difficulty classifying colors satisfactorily because of interference among color channels and susceptibility to illumination variation. These problems are especially pronounced in degraded images from robot vision. To solve them, we propose an illumination-robust, non-parametric multi-color detection model evolved by GP. The proposed method is compared to existing color models in various environments using the vision of a real humanoid robot, Nao.
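The non-parametric, evolved color detector can be pictured as an expression over the R, G, B channels whose thresholded output marks the target color. The expression and fitness below are illustrative assumptions, not the individuals or training data from the paper; an actual run would evolve such expressions with a GP engine against labelled pixels.

```python
import numpy as np

def candidate_detector(r, g, b):
    """One illustrative individual a GP run might evolve for 'red-ish' pixels:
    channel-ratio expressions tend to be less sensitive to illumination
    than fixed RGB thresholds. This expression is hypothetical."""
    eps = 1e-6
    return (r - g) / (r + g + b + eps) - 0.15 * (b / (g + eps))

def fitness(detector, pixels, labels, thresh=0.1):
    """Fraction of labelled pixels the detector classifies correctly."""
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    pred = detector(r, g, b) > thresh
    return np.mean(pred == labels)

# Toy labelled data: 'red' pixels vs. background, both with brightness jitter.
rng = np.random.default_rng(0)
red = rng.uniform([0.6, 0.0, 0.0], [1.0, 0.3, 0.3], size=(200, 3))
bg = rng.uniform([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], size=(200, 3))
pixels = np.vstack([red, bg]).astype(np.float32)
labels = np.concatenate([np.ones(200, bool), np.zeros(200, bool)])

print("fitness of candidate:", fitness(candidate_detector, pixels, labels))
```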

컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트 (Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques)

  • 조익현;박거태;정순기
    • 한국멀티미디어학회논문지 / Vol. 21 No. 1 / pp.51-60 / 2018
  • We present a real-time interactive particle art installation driven by human motion and based on computer vision techniques. We use computer vision to reduce the amount of equipment required to appreciate media art, and we analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction is applied to detect the audience, and the audience image is converted into particles on a grid of cells. Optical flow is used to detect the audience's motion and create particle effects, and we also define a virtual button for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
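A rough sketch of the vision pipeline the abstract outlines, assuming a single webcam at index 0: MOG2 background subtraction isolates the audience, and dense Farnebäck optical flow, averaged over grid cells, supplies the motion vectors a particle renderer would consume (arrows stand in for the particles here).

```python
import cv2

# Minimal sketch: background subtraction finds the audience, dense optical
# flow on a coarse grid gives per-cell motion for the particle effects.
cap = cv2.VideoCapture(0)                               # assumed camera index
bg_sub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
prev_gray = None
cell = 16                                               # grid-cell size in pixels

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = bg_sub.apply(frame)                       # audience silhouette

    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray.shape
        for y in range(0, h - cell, cell):
            for x in range(0, w - cell, cell):
                # Only cells covered by the audience drive particle motion.
                if fg_mask[y:y + cell, x:x + cell].mean() > 127:
                    dx, dy = flow[y:y + cell, x:x + cell].reshape(-1, 2).mean(axis=0)
                    cv2.arrowedLine(frame, (x, y),
                                    (int(x + dx * 5), int(y + dy * 5)), (0, 255, 0), 1)

    cv2.imshow("particle-art sketch", frame)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:                     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```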

신경회로망을 이용한 지능형 로봇 제어 시스템 설계 (Design of an Intelligent Robot Control System Using Neural Network)

  • 정동연;서운학;한성현
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.279-279 / 2000
  • In this paper, we propose a new approach to the design of a robot vision system to develop technology for the automatic testing and assembly of precision mechanical and electronic parts for factory automation. In order to perform the automatic assembly tasks in complex processes in real time, we developed an intelligent control algorithm based on neural network control theory to enhance precise motion control. The automatic test tasks were implemented with a real-time vision algorithm based on TMS320C31 DSPs, which correctly distinguishes acceptable items from defective ones through pattern recognition of the parts. Finally, the performance of the proposed robot vision system is illustrated by experiments on a model similar to the fifth of twelve cells for automatic testing and assembly at S company.
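The abstract does not detail the DSP vision algorithm, so the sketch below only illustrates the generic accept/reject decision it mentions: compare the inspected part against a golden template with normalized cross-correlation and threshold the score. File names and the threshold are assumptions, and this is not the TMS320C31 implementation.

```python
import cv2

# Hypothetical images: a reference ("golden") part and an inspected part.
template = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)
inspected = cv2.imread("inspected_part.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation: 1.0 means a perfect match with the template.
result = cv2.matchTemplate(inspected, template, cv2.TM_CCOEFF_NORMED)
score = result.max()

ACCEPT_THRESHOLD = 0.85   # assumed tolerance, tuned per part in practice
verdict = "acceptable" if score >= ACCEPT_THRESHOLD else "defective"
print(f"match score {score:.3f} -> {verdict}")
```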


머신비젼 기반의 엔진마운트 부품 자동공급시스템 (An Automated Machine-Vision-based Feeding System for Engine Mount Parts)

  • 이형근;이문규
    • 한국정밀공학회지 / Vol. 18 No. 5 / pp.177-185 / 2001
  • This paper describes a machine-vision-based prototype system for automatically feeding engine-mount parts to a swaging machine which assembles engine mounts. The system consists of a robot, a feeding device with two cylinders and two photo sensors, and a machine vision system. The machine vision system recognizes which type of part is being fed from the feeding device and estimates the angular difference between the part's inner-hole center and the point predetermined for assembly. The robot then picks up each part and rotates it through the estimated angle so that the parts are assembled together as specified. An algorithm has been developed to recognize the different part types and estimate the angular difference. Test results obtained for a set of real specimens indicate that the algorithm performs well enough to be applied to the prototype system.
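One plausible way to realize the angle-estimation step is sketched below: locate the part's inner hole with a Hough circle transform and measure its angular offset, about the part centre, from a predetermined reference direction. The image name, the part-centre estimate, and the reference point are assumptions made for illustration.

```python
import cv2
import numpy as np

# Sketch of the angle-estimation step: locate the part's inner hole and
# measure its angular offset (about the part centre) from the pose the
# swaging machine expects.
img = cv2.imread("engine_mount_part.png", cv2.IMREAD_GRAYSCALE)   # assumed image
img = cv2.medianBlur(img, 5)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
if circles is not None:
    hole_x, hole_y, _ = circles[0][0]                  # strongest circle = inner hole

    part_center = np.array([img.shape[1] / 2, img.shape[0] / 2])   # assumed centre
    reference_point = np.array([img.shape[1] / 2, 0])              # assumed target pose

    v_hole = np.array([hole_x, hole_y]) - part_center
    v_ref = reference_point - part_center

    # Signed angle the robot must rotate the part before assembly.
    angle = np.degrees(np.arctan2(v_hole[1], v_hole[0]) -
                       np.arctan2(v_ref[1], v_ref[0]))
    print(f"rotate part by {angle:.1f} degrees")
```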


Automation of a Teleoperated Microassembly Desktop Station Supervised by Virtual Reality

  • Antoine Ferreira;Fontaine, Jean-Guy;Shigeoki Hirai
    • Transactions on Control, Automation and Systems Engineering / Vol. 4 No. 1 / pp.23-31 / 2002
  • We propose the concept of a desktop micro-device factory for visually servoed, teleoperated microassembly assisted by a virtual reality (VR) interface. It is composed of two micromanipulators equipped with micro tools operating under a light microscope. First, a manipulator control method that makes a micro object follow a planned trajectory in a pushing operation is proposed under vision-based position control. Then, we present a cooperative control strategy for the micro-handling operation under vision-based force control, integrating a sensor fusion framework. A guiding system based on a virtual micro-world, exactly reconstructed from the CAD-CAM databases of the real environment under consideration, is presented to cope with the imprecisely calibrated micro-world. Finally, some experimental results of microassembly tasks performed on millimeter-sized components are provided.
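The vision-based position control for the pushing operation can be pictured as a simple image-space proportional loop: measure the micro object's pixel position, step the micromanipulator toward the next waypoint, repeat. The sketch below simulates both the camera measurement and the stage response, so the gains and waypoints are assumptions rather than values from the paper.

```python
import numpy as np

# Minimal image-based visual-servo sketch for the pushing operation. In the
# real station the position would come from the microscope camera and the
# step from the stage driver; here both are simulated.
object_pos = np.array([120.0, 95.0])           # simulated tracked position (pixels)

def measure_position():
    return object_pos                           # stand-in for camera tracking

def push_step(delta_xy):
    global object_pos
    object_pos = object_pos + delta_xy          # stand-in for stage + contact dynamics

waypoints = [np.array([130.0, 100.0]), np.array([150.0, 110.0])]   # assumed trajectory
K_P, TOL = 0.3, 0.5                             # proportional gain, pixel tolerance

for target in waypoints:
    while np.linalg.norm(target - measure_position()) > TOL:
        error = target - measure_position()
        push_step(K_P * error)                  # vision-based position control step
    print("reached waypoint", target)
```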

조명 변화에 강인한 로봇 축구 시스템의 색상 분류기 (Robust Color Classifier for Robot Soccer System under Illumination Variations)

  • 이성훈;박진현;전향식;최영규
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53 No. 1 / pp.32-39 / 2004
  • Color-based vision systems are used to recognize our team's robots, the opponent team's robots, and the ball in robot soccer. Such systems have the drawback of being very sensitive to color variations caused by brightness changes. In this paper, a neural network trained with data obtained under various illumination conditions is used to classify colors in a modified YUV color space for the robot soccer vision system. For this purpose, a new method to measure brightness using a color card is proposed. After the neural network is trained, a look-up table is generated to replace the network and reduce computation time. Experimental results show that the proposed color classification method is robust under illumination variations.
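The train-then-tabulate idea can be sketched as follows: fit a small network on labelled pixels in (modified) YUV space, then bake its decisions into a look-up table indexed by quantized color so runtime classification costs one array access. scikit-learn's MLPClassifier stands in for the paper's network, and the labelled data and quantization levels are made up.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy labelled pixels in YUV: class 0 = field, 1 = ball, 2 = team marker (stand-ins).
rng = np.random.default_rng(1)
X = rng.uniform(0, 255, size=(600, 3)).astype(np.float32)
y = rng.integers(0, 3, size=600)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
net.fit(X / 255.0, y)

# Bake the trained network into a LUT over quantized YUV (32 levels per channel)
# so runtime classification is a single table lookup instead of a forward pass.
levels = 32
grid = np.stack(np.meshgrid(*([np.arange(levels)] * 3), indexing="ij"), axis=-1)
grid_yuv = (grid.reshape(-1, 3) * (256 // levels) + (256 // levels) // 2) / 255.0
lut = net.predict(grid_yuv).reshape(levels, levels, levels).astype(np.uint8)

def classify(yuv_pixel):
    q = np.asarray(yuv_pixel) // (256 // levels)
    return lut[q[0], q[1], q[2]]

print("class of (200, 30, 120):", classify((200, 30, 120)))
```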

비젼 기반의 포인팅 기기를 위한 퍼지 스크린 검출기 (Fuzzy Screen Detector for a Vision Based Pointing Device)

  • 고재원
    • 전기학회논문지P / Vol. 58 No. 3 / pp.297-302 / 2009
  • In this paper, we propose an improved screen detector, using fuzzy logic, as a tool for selecting the object to track and estimating its distance from the screen in a vision-based pointing device. Our system classifies the line components of the input image into horizontal and vertical lines and applies fuzzy rules to obtain the best line pairs, which constitute the peripheral frame of the screen. The proposed system improves the screen detection rate relative to the detectors used in previous work on hand-held vision-based pointing devices. It also detects the screen even when a small part of it is hidden behind another object.
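A minimal sketch of the line-classification and fuzzy-scoring idea: detect line segments with a probabilistic Hough transform, split them into near-horizontal and near-vertical sets by angle, and rank candidate pairs with triangular membership functions over length and corner gap. The membership shapes and the single rule shown are illustrative, not the paper's rule base.

```python
import cv2
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership peaking at b on [a, c]."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # assumed input frame
edges = cv2.Canny(img, 80, 160)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=80, maxLineGap=10)
if lines is None:
    raise SystemExit("no candidate lines found")

horizontal, vertical = [], []
for x1, y1, x2, y2 in lines.reshape(-1, 4):
    angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    if angle < 30 or angle > 150:
        horizontal.append((x1, y1, x2, y2))
    elif 60 < angle < 120:
        vertical.append((x1, y1, x2, y2))

def screen_score(h_line, v_line):
    """Illustrative fuzzy rule: prefer long horizontal/vertical lines whose
    nearest endpoints almost meet (i.e. could form a screen corner)."""
    def length(l):
        return np.hypot(l[2] - l[0], l[3] - l[1])
    gap = min(np.hypot(h_line[i] - v_line[j], h_line[i + 1] - v_line[j + 1])
              for i in (0, 2) for j in (0, 2))
    return min(tri(length(h_line), 50, 400, 2000),
               tri(length(v_line), 50, 400, 2000),
               tri(gap, -1, 0, 40))

pairs = [(h, v) for h in horizontal for v in vertical]
if pairs:
    best = max(pairs, key=lambda p: screen_score(*p))
    print("best screen-edge pair:", best)
```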

캠시프트와 KLT특징 추적 알고리즘을 융합한 모바일 로봇의 영상기반 사람추적 및 추종 (A vision based people tracking and following for mobile robots using CAMSHIFT and KLT feature tracker)

  • 이상진;원문철
    • 한국멀티미디어학회논문지 / Vol. 17 No. 7 / pp.787-796 / 2014
  • Many mobile robot navigation methods rely on laser scanners, ultrasonic sensors, vision cameras, and so on for obstacle detection and path following, whereas humans navigate using only visual information. In this paper, we study a mobile robot control method based solely on camera vision. A Gaussian mixture model and a shadow removal technique are used to separate the foreground from the background in the camera image, and the mobile robot uses combined CAMSHIFT and KLT feature tracker algorithms on the foreground information to follow a person. The algorithm is verified by experiments in which a person is tracked and followed by the robot in a hallway.
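A condensed sketch of the described pipeline, assuming a test video hallway.avi and a hand-picked initial window: MOG2 foreground segmentation (with its shadow labels suppressed) isolates the person, CamShift tracks the hue distribution of that foreground region, and pyramidal Lucas-Kanade (KLT) tracking of feature points provides a complementary motion cue. The fusion of the two cues is left out for brevity.

```python
import cv2

cap = cv2.VideoCapture("hallway.avi")            # assumed test video
bg_sub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

track_window = (200, 100, 80, 200)               # assumed initial window (x, y, w, h)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
roi_hist, prev_gray, prev_pts = None, None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg_sub.apply(frame)
    fg[fg == 127] = 0                            # drop pixels MOG2 marks as shadow
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    if roi_hist is None:                         # build the color model once
        x, y, w, h = track_window
        roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0],
                                fg[y:y + h, x:x + w], [16], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                           minDistance=7, mask=fg)
    else:
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        back_proj &= fg                          # restrict CamShift to the foreground
        _, track_window = cv2.CamShift(back_proj, track_window, term_crit)

        # KLT feature tracking as a complementary motion cue.
        if prev_pts is not None and len(prev_pts):
            prev_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
            prev_pts = prev_pts[status.flatten() == 1].reshape(-1, 1, 2)

    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("person following (sketch)", frame)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```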

엘리트 유전알고리즘을 이용한 비젼 기반 로봇의 위치제어 (Vision based position control of manipulator using an elitist genetic algorithm)

  • 백주현;김동준;기창두
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2000년도 추계학술대회 논문집 / pp.683-686 / 2000
  • A new approach to the task of aligning a robot using a camera is presented in this paper. We apply an elitist GA to find the manipulator joint angles that reach the target position, instead of using a nonlinear least-error method, because the GA performs a parallel search and handles optimization problems well. In order to improve convergence speed, floating-point coding and geometric constraint conditions are used. Experiments are carried out to demonstrate the effectiveness of vision-based control using the elitist genetic algorithm.
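A minimal sketch of the elitist GA with floating-point coding: individuals are joint-angle vectors, fitness is the negative distance between the forward-kinematics end-effector position and the camera-measured target, and the best individuals pass unchanged into the next generation. The 2-link planar arm, the target, and the GA parameters are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 2-link planar arm; link lengths in arbitrary units.
L1, L2 = 1.0, 0.8
TARGET = np.array([1.2, 0.9])          # target position (would come from the camera)

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def fitness(q):
    return -np.linalg.norm(forward_kinematics(q) - TARGET)   # higher is better

POP, ELITE, GENS, SIGMA = 40, 4, 100, 0.1
LOW, HIGH = -np.pi, np.pi              # geometric (joint-limit) constraints

pop = rng.uniform(LOW, HIGH, size=(POP, 2))    # floating-point coded individuals
for _ in range(GENS):
    scores = np.array([fitness(q) for q in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[:ELITE]]                 # elitism: the best survive unchanged
    # Offspring: pick parents from the elite, apply Gaussian mutation, clip to limits.
    parents = elite[rng.integers(0, ELITE, size=POP - ELITE)]
    children = np.clip(parents + rng.normal(0, SIGMA, parents.shape), LOW, HIGH)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(q) for q in pop])]
print("joint angles:", best,
      "reach error:", np.linalg.norm(forward_kinematics(best) - TARGET))
```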
