• Title/Summary/Keyword: intelligent vision


Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.165-171
    • /
    • 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification in order to achieve such a human-centered system and robot localization in intelligent space. The intelligent space is a space in which many intelligent devices, such as computers and sensors, are distributed. The Intelligent Space achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module, which includes processing and networking parts, has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects in order to realize appropriate services for users in a multi-camera environment. To achieve seamless tracking and location estimation, many camera modules are distributed, which causes errors in object identification among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space to achieve consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
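
An appearance-based identification step of this kind can be sketched as follows: each camera module keeps a normalized color histogram per tracked object and matches identities across modules by histogram intersection. The function names, bin counts, and threshold are illustrative assumptions, not the paper's exact appearance model.

```python
def color_histogram(pixels, bins=4):
    """Build a normalized RGB histogram from (r, g, b) tuples in 0..255."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical appearance models."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def identify(observation, known_models, threshold=0.5):
    """Return the label of the best-matching appearance model, or None."""
    best_label, best_score = None, threshold
    for label, model in known_models.items():
        score = histogram_intersection(observation, model)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A new observation from any camera module is labeled with the identity of its most similar stored model, which is one simple way to keep labels consistent across distributed sensors.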

Design of an Intelligent Robot Control System Using Neural Network (신경회로망을 이용한 지능형 로봇 제어 시스템 설계)

  • 정동연;서운학;한성현
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.279-279
    • /
    • 2000
  • In this paper, we propose a new approach to the design of a robot vision system to develop technology for the automatic testing and assembly of precision mechanical and electronic parts for factory automation. In order to perform the automatic assembly tasks in complex processes in real time, we developed an intelligent control algorithm based on neural network control theory to enhance precise motion control. The automatic test tasks were implemented with a real-time vision algorithm based on TMS320C31 DSPs. The developed vision algorithm correctly distinguishes acceptable items from defective ones through pattern recognition of parts. Finally, the performance of the proposed robot vision system is illustrated by experiments on a model similar to the fifth of the twelve cells for automatic testing and assembly at S company.


3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.3
    • /
    • pp.207-215
    • /
    • 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments for facial expression recognition using both frame-based and sequence-based approaches. Our method provides a 75.9% recognition rate for 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
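
The 2D landmark prediction step described above can be sketched with a pinhole camera model: given the tracked head pose (rotation and translation) and 3D landmark positions on the fitted face model, project each landmark into the image. The focal length, principal point, and coordinates below are illustrative values, not the paper's calibration.

```python
def project_landmarks(landmarks_3d, rotation, translation, focal=500.0,
                      cx=320.0, cy=240.0):
    """Project 3D face-model points into the image after applying the head pose."""
    points_2d = []
    for point in landmarks_3d:
        # Apply rigid head motion: p_camera = R @ p_model + t
        Xc = sum(rotation[0][k] * point[k] for k in range(3)) + translation[0]
        Yc = sum(rotation[1][k] * point[k] for k in range(3)) + translation[1]
        Zc = sum(rotation[2][k] * point[k] for k in range(3)) + translation[2]
        # Pinhole projection onto the image plane
        points_2d.append((focal * Xc / Zc + cx, focal * Yc / Zc + cy))
    return points_2d
```

Tracked 2D positions can then be compared against these predictions to correct the 3D landmarks, closing the tracking loop the abstract describes.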

EVALUATION OF SPEED AND ACCURACY FOR COMPARISON OF TEXTURE CLASSIFICATION IMPLEMENTATION ON EMBEDDED PLATFORM

  • Tou, Jing Yi;Khoo, Kenny Kuan Yew;Tay, Yong Haur;Lau, Phooi Yee
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.89-93
    • /
    • 2009
  • Embedded systems are becoming more popular as many embedded platforms have become more affordable. They offer a compact solution for many different problems, including computer vision applications. Texture classification can be used to solve various problems, and implementing it on embedded platforms will help in deploying these applications to the market. This paper proposes deploying texture classification algorithms onto an embedded computer vision (ECV) platform. Two algorithms are compared: grey level co-occurrence matrices (GLCM) and Gabor filters. Experimental results show that raw GLCM in MATLAB achieves 50 ms, the fastest on the PC platform. The classification speeds achieved on the PC and ECV platforms, in C, are 43 ms and 3708 ms, respectively. Raw GLCM achieves only 90.86% accuracy, compared to 91.06% for the combined feature (GLCM and Gabor filters). Overall, evaluating all results in terms of classification speed and accuracy, raw GLCM is more suitable for implementation on the ECV platform.
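
The raw GLCM features compared in this paper can be sketched as follows: count co-occurring grey-level pairs at a fixed pixel offset, normalize, and derive Haralick-style statistics such as contrast and energy. The number of grey levels and the offset below are illustrative choices.

```python
def glcm(image, levels=4, dx=1, dy=0):
    """Normalized grey level co-occurrence matrix for pairs at offset (dx, dy)."""
    matrix = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    count = 0
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                matrix[image[y][x]][image[ny][nx]] += 1.0
                count += 1
    return [[v / count for v in row] for row in matrix]

def contrast(m):
    """High when co-occurring levels differ strongly (coarse texture)."""
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m)))

def energy(m):
    """High for uniform textures with few distinct pairs."""
    return sum(v * v for row in m for v in row)
```

Several such statistics, possibly over multiple offsets, form the feature vector fed to the classifier; computing them needs only integer counting, which is one reason GLCM suits embedded platforms.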


Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.68-73
    • /
    • 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, the idea of a distributed vision system is required to realize these techniques over a widespread area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, as well as real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent). We then solve the matching problem for the identity of a target during handover with a protocol-based approach, and propose the identified contract net (ICN) protocol for this purpose. The ICN protocol is independent of the number of vision agents and needs no calibration between them, which raises the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system we constructed; in several experiments the system shows reliable results and the ICN protocol operates successfully.
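
A contract-net style handover, the family of protocols the ICN builds on, can be sketched as a three-message exchange: the agent losing sight of a target announces it with its identity, neighboring agents bid with how well they currently observe it, and the announcer awards the track to the best bidder. The message fields and scoring below are illustrative, not the exact ICN message format.

```python
def announce(target_id, appearance):
    """The agent losing the target broadcasts its identity and appearance."""
    return {"type": "announce", "target": target_id, "appearance": appearance}

def bid(agent_id, announcement, visibility_score):
    """A neighboring agent replies with its confidence that it sees the target."""
    return {"type": "bid", "agent": agent_id,
            "target": announcement["target"], "score": visibility_score}

def award(announcement, bids):
    """Hand the track over to the highest-scoring bidder, keeping the identity."""
    if not bids:
        return None
    winner = max(bids, key=lambda b: b["score"])
    return {"type": "award", "target": announcement["target"],
            "agent": winner["agent"]}
```

Because the exchange only names the target and scores local observations, it works for any number of agents and needs no geometric calibration between cameras, which matches the scalability claims in the abstract.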


Visual Servoing of a Mobile Manipulator Based on Stereo Vision

  • Lee, H.J.;Park, M.G.;Lee, M.C.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.767-771
    • /
    • 2003
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute its position using the stereo vision system. While a monocular vision system needs properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without additional information. Many algorithms have been studied and developed for object recognition; however, most of these approaches suffer from computational complexity and are inadequate for real-time visual servoing. In contrast, color information is useful for simple recognition in real-time visual servoing. In this paper, we discuss object recognition using color, the stereo matching method, recovery of 3D space, and visual servoing.
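
The 3D recovery step is the standard triangulation for a rectified stereo pair: the depth of a matched point follows from its disparity as Z = f·B/d, after which the image coordinates back-project to X and Y. The focal length, baseline, and principal point below are illustrative values, not the paper's calibration.

```python
def triangulate(x_left, x_right, y, focal=700.0, baseline=0.12,
                cx=320.0, cy=240.0):
    """Recover the (X, Y, Z) camera-frame position of a matched stereo point."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    Z = focal * baseline / disparity      # depth from disparity
    X = (x_left - cx) * Z / focal         # back-project image x
    Y = (y - cy) * Z / focal              # back-project image y
    return X, Y, Z
```

A color-based detector only has to find the same target blob in both images; its centroid disparity then yields the 3D position used by the servoing loop, with no shape model required.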


Mobile Robot Localization using Ubiquitous Vision System (시각기반 센서 네트워크를 이용한 이동로봇의 위치 추정)

  • Dao, Nguyen Xuan;Kim, Chi-Ho;You, Bum-Jae
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2780-2782
    • /
    • 2005
  • In this paper, we present a mobile robot localization solution using a Ubiquitous Vision System (UVS). The collective information gathered by multiple strategically placed cameras has many advantages; for example, aggregating information from multiple viewpoints reduces the uncertainty about the robots' positions. We construct the UVS as a multi-agent system by regarding each vision sensor as one vision agent (VA). Each VA performs target segmentation using color and motion information as well as visual tracking of multiple objects. Our modified identified contract net (ICN) protocol is used for communication between VAs to coordinate multiple tasks. This protocol raises the scalability and modularity of the system because it is independent of the number of VAs and needs no calibration; furthermore, the handover between VAs using ICN is seamless. Experimental results show the robustness of the solution over a widespread area, and its performance in indoor environments shows the feasibility of the proposed solution in real time.


Estimation of Miniature Train Location by Color Vision for Development of an Intelligent Railway System (지능형 철도 시스템 모델 개발을 위한 컬러비전 기반의 소형 기차 위치 측정)

  • 노광현;한민홍
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.1
    • /
    • pp.44-49
    • /
    • 2003
  • This paper describes a method of estimating miniature train location by color vision for the development of an intelligent railway system model. In the real world, to control trains automatically, GPS (Global Positioning System) is indispensable for determining the location of trains. A color vision system was used for estimating the location of trains in an indoor experiment. Two different rectangular color bars were attached to the top of each train as a means of identifying them. The locations of several trains on the track were detected by color features, geometric features, and moment invariants, and the trains were tracked simultaneously. In the experiment, the identity, location, and direction of each train were estimated and transferred to the control computer via serial communication. A processing speed of up to 8 frames/s was achieved, which was sufficient for real-time train control.
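
The location and direction estimate from two roof-mounted color bars can be sketched as follows: the centroid of each detected bar gives two reference points, the train's position is their midpoint, and its heading is the angle of the line joining them. The coordinates and function names are illustrative, not the paper's exact features.

```python
import math

def centroid(pixels):
    """Mean (x, y) of the pixels classified as one color bar."""
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def train_pose(front_pixels, rear_pixels):
    """Position is the midpoint of the two bar centroids; heading points
    from the rear bar toward the front bar, in degrees."""
    fx, fy = centroid(front_pixels)
    rx, ry = centroid(rear_pixels)
    position = ((fx + rx) / 2.0, (fy + ry) / 2.0)
    heading = math.degrees(math.atan2(fy - ry, fx - rx))
    return position, heading
```

Using two differently colored bars also resolves identity, since each train's color pair is unique; the same two centroids therefore yield identity, location, and direction in one pass.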

A study on map generation of autonomous Mobile Robot using stereo vision system (스테레오 비젼 시스템을 이용한 자율 이동 로봇의 지도 작성에 관한 연구)

  • Son, Young-Seop;Lee, Kwae-Hi
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2200-2202
    • /
    • 1998
  • An autonomous mobile robot provides many functions such as sensing, processing, and driving. For more intelligent jobs, more intelligent functions must be added and the existing functions updated. To execute a job, an autonomous mobile robot needs information about its surrounding environment, so the robot uses sensors such as sonar and vision. The obtained sensor information is used for map generation. This paper focuses on map generation using a stereo vision system.
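
One common way to turn stereo range readings into a map, offered here only as a sketch of the general idea, is a 2D occupancy grid: each (bearing, distance) reading from the vision system marks the corresponding cell around the robot as occupied. The grid size, resolution, and single-cell update model are illustrative assumptions.

```python
import math

def update_grid(grid, robot_xy, readings, resolution=0.1):
    """Mark the cell hit by each (angle_rad, distance_m) reading as occupied."""
    rx, ry = robot_xy
    for angle, distance in readings:
        # Convert the range reading to a world point, then to a grid cell
        ox = rx + distance * math.cos(angle)
        oy = ry + distance * math.sin(angle)
        i, j = int(oy / resolution), int(ox / resolution)
        if 0 <= i < len(grid) and 0 <= j < len(grid[0]):
            grid[i][j] = 1  # occupied
    return grid
```

Repeating this update as the robot moves accumulates the environment map; a fuller treatment would also clear the cells along each ray as free space.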
