• Title/Abstract/Keyword: Vision Systems

Moving Target Tracking using Vision System for an Omni-directional Wheel Robot

  • 김산;김동환
    • Journal of Institute of Control, Robotics and Systems, Vol. 14, No. 10, pp. 1053-1061, 2008
  • In this paper, moving target tracking using binocular vision for an omni-directional mobile robot is addressed. In the binocular vision system, three-dimensional information on the target is extracted by vision processes including calibration, image correspondence, and 3D reconstruction. The robot controller uses an SPI (serial peripheral interface) bus to communicate effectively between the robot master controller and the wheel controllers.
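
The following is a minimal sketch of the 3D reconstruction step mentioned above for a calibrated, rectified binocular pair; the focal length, baseline, and matched pixel coordinates are hypothetical values, not taken from the paper.

```python
import numpy as np

def triangulate_rectified(xl, xr, y, focal_px, baseline_m, cx, cy):
    """Recover a 3D point (in the left-camera frame) from a rectified
    stereo correspondence: (xl, y) in the left image, (xr, y) in the right."""
    disparity = xl - xr                      # pixels; must be > 0
    Z = focal_px * baseline_m / disparity    # depth from similar triangles
    X = (xl - cx) * Z / focal_px             # back-project the left pixel
    Y = (y - cy) * Z / focal_px
    return np.array([X, Y, Z])

# Hypothetical calibration and matched pixel coordinates
point = triangulate_rectified(xl=372.0, xr=341.0, y=215.0,
                              focal_px=700.0, baseline_m=0.12,
                              cx=320.0, cy=240.0)
print(point)   # roughly [0.20, -0.10, 2.71] metres in the left-camera frame
```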

EVALUATION OF SPEED AND ACCURACY FOR COMPARISON OF TEXTURE CLASSIFICATION IMPLEMENTATION ON EMBEDDED PLATFORM

  • Tou, Jing Yi;Khoo, Kenny Kuan Yew;Tay, Yong Haur;Lau, Phooi Yee
    • Korean Institute of Broadcast and Media Engineers Conference Proceedings, IWAIT 2009, pp. 89-93, 2009
  • Embedded systems are becoming more popular as many embedded platforms have become more affordable. They offer a compact solution for many different problems, including computer vision applications. Texture classification can be used to solve various problems, and implementing it on embedded platforms will help in deploying these applications into the market. This paper proposes deploying texture classification algorithms onto an embedded computer vision (ECV) platform. Two algorithms are compared: grey level co-occurrence matrices (GLCM) and Gabor filters. Experimental results show that raw GLCM in MATLAB achieves 50 ms, making it the fastest algorithm on the PC platform. Classification speeds achieved in C on the PC and ECV platforms are 43 ms and 3708 ms, respectively. Raw GLCM achieves only 90.86% accuracy, compared with 91.06% for the combined feature set (GLCM and Gabor filters). Overall, evaluating all results in terms of classification speed and accuracy, raw GLCM is more suitable for implementation on the ECV platform.
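
For readers unfamiliar with GLCM features, the sketch below computes a grey level co-occurrence matrix and two common features with NumPy; the quantisation level, pixel offset, and test patch are illustrative choices rather than the settings used in the paper.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Grey level co-occurrence matrix for one pixel offset (dx, dy).
    `image` is a 2D uint8 array; grey values are quantised to `levels` bins."""
    q = (image.astype(np.int32) * levels) // 256          # quantise to [0, levels)
    h, w = q.shape
    mat = np.zeros((levels, levels), dtype=np.float64)
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(mat, (src.ravel(), dst.ravel()), 1.0)       # count co-occurring pairs
    return mat / mat.sum()                                # normalise to probabilities

def contrast_and_energy(p):
    """Two common GLCM (Haralick-style) features."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum(), (p ** 2).sum()

# Hypothetical 64x64 test patch
patch = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print(contrast_and_energy(glcm(patch)))
```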

INS/Vision Integrated Navigation System Considering Error Characteristics of Landmark-Based Vision Navigation

  • 김영선;황동환
    • Journal of Institute of Control, Robotics and Systems, Vol. 19, No. 2, pp. 95-101, 2013
  • This paper investigates the geometric effect of landmarks on the navigation error in landmark-based 3D vision navigation and introduces an INS/vision integrated navigation system that accounts for this effect. The integrated system uses vision navigation results weighted by the dilution of precision of the landmark geometry, and in turn helps the vision navigation take this geometry into account. An indirect filter with a feedback structure is designed, in which the position and attitude errors serve as the filter measurements. Performance of the integrated system is evaluated through computer simulations. Simulation results show that the proposed algorithm works well and that better performance can be expected when the error characteristics of vision navigation are considered.
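
As a rough illustration of the dilution-of-precision idea, the sketch below computes a geometric DOP from unit line-of-sight vectors to a set of landmarks; the landmark layout and the exact DOP formulation are assumptions for the example and may differ from the paper's.

```python
import numpy as np

def position_dop(landmarks, position):
    """Geometric position DOP from the line-of-sight directions to observed landmarks.
    Rows of `landmarks` and `position` are 3D coordinates in the same frame;
    larger values mean weaker geometry and larger expected position error."""
    diff = landmarks - position
    los = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit line-of-sight vectors
    Q = np.linalg.inv(los.T @ los)                            # covariance shape factor
    return np.sqrt(np.trace(Q))

# Hypothetical landmark constellation and camera position (metres)
landmarks = np.array([[10.0,  0.0, 5.0],
                      [ 0.0, 10.0, 5.0],
                      [-10.0, 0.0, 5.0],
                      [ 0.0, -10.0, 4.0]])
print(position_dop(landmarks, np.array([0.0, 0.0, 0.0])))
```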

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication, Vol. 13, No. 2, pp. 7-13, 2021
  • Machine vision is a technology that allows a computer to recognize and evaluate objects much as a person does. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have matured, conventional machine vision systems have achieved more accurate quality inspection and increased manufacturing efficiency. In machine vision systems using deep learning, the quality of the input image is very important. However, most images obtained in industrial settings for quality inspection typically contain noise, and this noise is a major factor limiting the performance of the machine vision system. Therefore, to improve the performance of the machine vision system, the noise in the image must be removed, and a great deal of research has been devoted to image denoising. In this paper, we propose an autoencoder-based machine vision system that eliminates noise in the image. Experiments show that the proposed model outperforms a basic autoencoder in denoising and image reconstruction capability on the MNIST and Fashion-MNIST data sets.
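
A minimal sketch of a denoising autoencoder of the kind described above, written with TensorFlow/Keras; the network layout, noise level, and training schedule are generic choices and do not reproduce the paper's proposed architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and add synthetic Gaussian noise (noise level is an illustrative choice)
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
x_test = x_test.astype("float32")[..., None] / 255.0
noisy_train = np.clip(x_train + 0.3 * np.random.randn(*x_train.shape), 0.0, 1.0)
noisy_test = np.clip(x_test + 0.3 * np.random.randn(*x_test.shape), 0.0, 1.0)

# Small convolutional denoising autoencoder (generic, not the paper's model)
autoencoder = models.Sequential([
    layers.Input((28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                                      # 7x7 bottleneck
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),   # reconstructed image
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train to map noisy inputs back to the clean images
autoencoder.fit(noisy_train, x_train, epochs=2, batch_size=128,
                validation_data=(noisy_test, x_test))
denoised = autoencoder.predict(noisy_test[:8])
```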

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Institute of Control, Robotics and Systems Conference Proceedings, ICCAS 2005, pp. 68-73, 2005
  • Various techniques have been proposed for detecting and tracking targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, the idea of a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, as well as real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent). We therefore solve the identity-matching problem that arises at handover with a protocol-based approach and propose the identified contract net (ICN) protocol for this purpose. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We apply the ICN protocol to the ubiquitous vision system we constructed for experiments; across several experiments the system shows reliable results and the ICN protocol operates successfully.
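
The sketch below illustrates only the generic contract-net pattern behind a handover (announce, bid, award) with a made-up colour-similarity score; it does not reproduce the ICN protocol's identification mechanism described in the paper.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VisionAgent:
    """Toy stand-in for a camera node; it 'matches' by comparing mean target colour."""
    agent_id: str
    observed_colour: Optional[Tuple[int, int, int]]  # mean RGB of a candidate track, if any

    def match(self, target_colour):
        if self.observed_colour is None:
            return None
        dist = sum((a - b) ** 2 for a, b in zip(self.observed_colour, target_colour)) ** 0.5
        return max(0.0, 1.0 - dist / 441.7)          # 441.7 ~ max possible RGB distance

def contract_net_handover(announcer_id, target_colour, agents):
    """Announce the lost target's model, collect bids, award the track to the best bidder."""
    bids = [(a.agent_id, a.match(target_colour))
            for a in agents if a.agent_id != announcer_id]
    bids = [(aid, score) for aid, score in bids if score is not None]
    return max(bids, key=lambda b: b[1])[0] if bids else None

agents = [VisionAgent("cam1", None),
          VisionAgent("cam2", (200, 40, 40)),
          VisionAgent("cam3", (60, 200, 60))]
print(contract_net_handover("cam1", (205, 45, 38), agents))   # -> cam2
```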

Development and application of unmanned crane system in the warehouse

  • 박남수;김태진
    • Institute of Control, Robotics and Systems Conference Proceedings, 1996 Korea Automatic Control Conference (Domestic Session), POSTECH, Pohang, 24-26 Oct. 1996, pp. 1079-1082, 1996
  • The automatic control system for the warehouse is composed of an unmanned crane system and a vision system. The unmanned crane system is introduced to reject oscillation of a load suspended from the trolley at the moment it arrives at its target position, and the vision system is applied to find the coordinates of coils on trucks using image processing.
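
As one standard way to reject residual sway of a suspended load, the sketch below computes a zero-vibration (ZV) input shaper for the pendulum mode; the cable length and the shaping approach itself are assumptions for illustration, not the control law used in the paper.

```python
import numpy as np

def zv_shaper(natural_freq_hz, damping=0.0):
    """Zero-vibration input shaper impulses for a pendulum-like sway mode.
    Convolving the trolley velocity command with these impulses cancels the
    residual oscillation of the suspended load at the modelled frequency."""
    wn = 2 * np.pi * natural_freq_hz
    wd = wn * np.sqrt(1 - damping ** 2)
    K = np.exp(-damping * np.pi / np.sqrt(1 - damping ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)          # impulse amplitudes
    times = np.array([0.0, np.pi / wd])            # impulse times (s), half a sway period apart
    return amps, times

# Hypothetical sway frequency for a ~2.5 m cable: f = sqrt(g/L) / (2*pi)
f = np.sqrt(9.81 / 2.5) / (2 * np.pi)
print(zv_shaper(f))
```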

A Study on the Improvement of Vehicle Recognition Rate of Vision System

  • 오주택;이상용;이상민;김영삼
    • The Journal of the Korea Institute of Intelligent Transport Systems, Vol. 10, No. 3, pp. 16-24, 2011
  • Vehicle electronic control systems are developing rapidly in step with legal and social demands to ensure driver safety, and with falling hardware prices and higher-performance sensors and processors, various driver assistance systems employing sensors such as radar, cameras, and lasers are being put into practical use. In a preceding study, we developed a vision-system-based dangerous-driving analysis program that uses images acquired from a CCD camera to recognize the test vehicle's driving lane and the vehicles located in or approaching its surroundings, so that the causes and consequences of a driver's dangerous driving can be analyzed. However, the vision system developed in that study showed a sharp drop in lane and vehicle recognition rates where sunlight is insufficient, such as in tunnels and at sunrise or sunset. In this study, we therefore develop a brightness adaptation algorithm and embed it in the vision system so that lane and vehicle recognition rates are improved anytime and anywhere, allowing the causes of a driver's dangerous driving to be analyzed clearly.
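
One simple form of brightness adaptation is an adaptive gamma correction that pulls each frame's mean intensity toward a target value before lane and vehicle detection; the sketch below is such a generic preprocessing step with made-up parameters, not the algorithm proposed in the paper.

```python
import numpy as np

def brightness_adapt(gray, target_mean=128.0):
    """Adaptive gamma correction: pick a gamma that maps the frame's current mean
    intensity to `target_mean`, then apply it through a lookup table.
    `gray` is a 2D uint8 image; returns a corrected uint8 image."""
    mean = float(np.clip(gray.mean(), 1.0, 254.0))
    gamma = np.log(target_mean / 255.0) / np.log(mean / 255.0)   # (mean/255)**gamma = target/255
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[gray]

# Hypothetical under-exposed tunnel frame
dark = np.random.default_rng(1).integers(0, 60, (240, 320)).astype(np.uint8)
print(dark.mean(), brightness_adapt(dark).mean())   # mean brightness rises toward 128
```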

Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • Institute of Control, Robotics and Systems Conference Proceedings, ICCAS 2002, pp. 41.5-41, 2002
  • This paper presents a feature extraction algorithm for vision-based micromanipulation. To guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or a high-magnification lens carry a great deal of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro image processing algorithms to them. Extraction of grasping points is a very important task in micromanipulation because inaccurate grasping points can cause breakdown of the micro gripper or loss of micro objects. To solve those problems and extract grasping points for micromanipulation...

Visual Servoing of a Mobile Manipulator Based on Stereo Vision

  • 이현정;박민규;이민철
    • Journal of Institute of Control, Robotics and Systems, Vol. 11, No. 5, pp. 411-417, 2005
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute its position using the stereo vision system. While a monocular vision system needs additional properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without such information. Many algorithms have been studied and developed for object recognition; however, most of these approaches are computationally complex and therefore inadequate for real-time visual servoing. Color information is useful for simple recognition in real-time visual servoing. This paper addresses color-based object recognition, a stereo matching method that reduces calculation time, recovery of the 3D position, and the visual servoing itself.
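
The sketch below illustrates the color-based recognition plus stereo depth idea: a colour blob is located in both rectified images and depth follows from the disparity of the two centroids; the thresholds, focal length, and baseline are hypothetical, not the paper's values.

```python
import numpy as np

def red_target_centroid(rgb):
    """Locate a saturated red target in an RGB image by simple channel thresholding
    and return the centroid of the detected pixels (or None if nothing matches)."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    mask = (r > 150) & (r - g > 60) & (r - b > 60)     # illustrative thresholds
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

# With a rectified stereo pair, matching the same colour blob in both images gives a
# disparity, and depth follows as Z = f * B / d (f: focal length in pixels, B: baseline).
left = np.zeros((240, 320, 3), dtype=np.uint8);  left[100:120, 200:220] = (220, 30, 30)
right = np.zeros((240, 320, 3), dtype=np.uint8); right[100:120, 180:200] = (220, 30, 30)
(xl, _), (xr, _) = red_target_centroid(left), red_target_centroid(right)
print(700.0 * 0.1 / (xl - xr))   # depth in metres for hypothetical f=700 px, B=0.1 m
```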

3-D vision sensor system for arc welding robot with coordinated motion by transputer system

  • Ishida, Hirofumi;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems Conference Proceedings, 1993 Korea Automatic Control Conference (International Session), Seoul National University, Seoul, 20-22 Oct. 1993, pp. 446-450, 1993
  • In this paper we propose an arc welding robot system in which two robots work in coordination and employ a vision sensor. In this system one robot arm holds the welding target as a positioning device, and the other robot moves the welding torch. The vision sensor consists of two laser slit-ray projectors and one CCD TV camera, and is mounted on top of one robot. The vision sensor detects the 3-dimensional shape of the groove on the target workpiece that needs to be welded, and the two robots are moved in coordination to trace the groove accurately. To realize fast image processing, a total of five high-speed parallel processing units (transputers) are employed. The teaching tasks for the coordinated motions are simplified considerably thanks to this vision sensor. Experimental results demonstrate the applicability of our system.
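
A minimal sketch of how a slit-ray (light-stripe) sensor recovers a 3D point: the camera ray through a lit pixel is intersected with the calibrated laser plane. The plane parameters and camera intrinsics here are made-up example values, not the system's calibration.

```python
import numpy as np

def slit_ray_point(pixel, focal_px, cx, cy, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the calibrated laser plane
    n . X = d (camera frame) to recover the 3D point lit by the slit ray."""
    u, v = pixel
    ray = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])  # ray direction at Z = 1
    t = plane_d / np.dot(plane_n, ray)      # scale so the point lies on the laser plane
    return t * ray

# Hypothetical calibration: laser plane tilted about the X axis, 0.4 m from the camera
n = np.array([0.0, 0.6, 0.8]); n /= np.linalg.norm(n)
print(slit_ray_point((350.0, 260.0), focal_px=700.0, cx=320.0, cy=240.0,
                     plane_n=n, plane_d=0.4))
```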
