• Title/Summary/Keyword: Vision-based recognition


Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object (스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.3
    • /
    • pp.143-150
    • /
    • 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for moving objects. The system consists of stereo matching, obstacle detection, and distance estimation. In the stereo matching stage, depth maps are obtained from real road images captured by an adjustable-baseline stereo vision system using the belief propagation (BP) algorithm. In the detection stage, various obstacles are detected using only the depth map, combining the v-disparity and column detection methods under real road conditions. Finally, in the estimation stage, asymmetric parabola fitting with NCC improves the estimation of detected obstacles. This stereo vision system can be applied to many applications such as unmanned vehicles and robots.

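The v-disparity method mentioned in the abstract accumulates, for each image row, a histogram of the disparities occurring in that row; obstacles show up as rows where one disparity dominates. A minimal sketch of that idea (the synthetic threshold and depth values are illustrative; the paper's BP-based matching and column detection are not shown):

```python
import numpy as np

def v_disparity(depth_map, max_disparity):
    """Accumulate, for each image row, a histogram of the disparity
    values occurring in that row (the v-disparity representation)."""
    rows = depth_map.shape[0]
    hist = np.zeros((rows, max_disparity + 1), dtype=np.int32)
    for v in range(rows):
        vals, counts = np.unique(depth_map[v], return_counts=True)
        hist[v, vals] = counts
    return hist

def detect_obstacle_rows(hist, threshold):
    """Rows where a single disparity dominates correspond to
    near-vertical structures, i.e. obstacle candidates."""
    return np.where(hist.max(axis=1) >= threshold)[0]
```

On a synthetic depth map whose lower rows share a single disparity, those rows are flagged as a frontal obstacle while the varied "road" rows are not.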

Phoneme Recognition based on Two-Layered Stereo Vision Neural Network (2층 구조의 입체 시각형 신경망 기반 음소인식)

  • Kim, Sung-Ill;Kim, Nag-Cheol
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.5
    • /
    • pp.523-529
    • /
    • 2002
  • The present study describes neural networks for stereoscopic vision applied to identifying human speech. In speech recognition based on stereoscopic vision neural networks (SVNN), similarities are first obtained by comparing input vocal signals with standard models. They are then fed into a dynamic process in which both competitive and cooperative interactions occur among neighboring similarities. Through these dynamics, only one winner neuron is finally detected. In a comparative study, the two-layered SVNN achieved recognition accuracies 7.7% higher than the hidden Markov model (HMM). These evaluation results show that SVNN outperformed the existing HMM recognizer.

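The competitive process that leaves "only one winner neuron" is a winner-take-all dynamic over the similarity scores. A toy sketch under assumed dynamics (the `inhibit` rate and update rule are hypothetical, not the paper's two-layer network):

```python
import numpy as np

def winner_take_all(similarities, inhibit=0.1, max_steps=200):
    """Repeatedly inhibit every unit by the mean activity of its
    competitors; weaker units are clipped to zero first, until a
    single winner neuron survives."""
    a = np.asarray(similarities, dtype=float).copy()
    n = len(a)
    for _ in range(max_steps):
        if np.count_nonzero(a) <= 1:
            break  # one winner left
        mean_others = (a.sum() - a) / (n - 1)
        a = np.clip(a - inhibit * mean_others, 0.0, None)
    return int(np.argmax(a))
```

Because the strongest unit receives the least inhibition, the initial ordering is preserved and the highest similarity wins.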

Vision-Based Robot Manipulator for Grasping Objects (물체 잡기를 위한 비전 기반의 로봇 메뉴플레이터)

  • Baek, Young-Min;Ahn, Ho-Seok;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.331-333
    • /
    • 2007
  • A robot manipulator is one of the important features in the service robot area. Until now, there has been much research on robot manipulators that can imitate human functions by recognizing and grasping objects. In this paper, we present a robot arm based on an object recognition vision system. We implemented closed-loop control using feedback from visual information and used a sonar sensor to improve accuracy. We placed a web camera on top of the hand to recognize objects. We also discuss some vision-based manipulation issues and our system's features.

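Closed-loop control from visual feedback can be sketched as a proportional loop on the image-plane error. The `gain` and the direct pixel-space update are illustrative stand-ins; the paper's arm controller (and its sonar correction) would map this error to joint commands:

```python
def servo_loop(target, start, gain=0.5, tol=1.0, max_iters=100):
    """Proportional closed-loop servoing on the image-plane error:
    each iteration moves a fraction `gain` of the remaining pixel
    error until it falls below `tol`."""
    x, y = start
    for _ in range(max_iters):
        ex, ey = target[0] - x, target[1] - y
        if abs(ex) < tol and abs(ey) < tol:
            break  # object centered closely enough
        x += gain * ex
        y += gain * ey
    return x, y
```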

A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference
    • /
    • 2002.07d
    • /
    • pp.2341-2343
    • /
    • 2002
  • This paper describes vision-based line recognition and driving-direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor floor is used as the navigation guide. A binary image of the guide stripe captured by a CCD camera is processed; to detect the guideline quickly and exactly, a variable thresholding algorithm is used. This low-cost line-tracking system runs efficiently using PC-based real-time vision processing. Steering control is achieved through a controller driven by the guide-line angle error. The method was tested on a typical AGV with a single camera in a laboratory environment.

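The variable-thresholding idea, segmenting the dark guide stripe against uneven corridor lighting by thresholding each region relative to its local mean, can be sketched as follows (the block size and offset are assumed values, not from the paper):

```python
import numpy as np

def adaptive_threshold(gray, block=16, offset=10):
    """Binarize each block against its own mean minus an offset, so
    a dark stripe stays segmented under uneven lighting."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = \
                (patch < patch.mean() - offset).astype(np.uint8) * 255
    return out

def line_center(binary, row):
    """Column centroid of stripe pixels in one scanline; its offset
    from the image center yields the steering error."""
    cols = np.where(binary[row] > 0)[0]
    return float(cols.mean()) if cols.size else None
```

The per-scanline centroid gives the lateral error from which a guide-line angle (and hence a steering command) can be derived.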

A Case Study on Distance Learning Based Computer Vision Laboratory (원거리 학습 기반 컴퓨터 비젼 실습 사례연구)

  • Lee, Seong-Yeol
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2005.10a
    • /
    • pp.175-181
    • /
    • 2005
  • This paper describes the development of on-line computer vision laboratories that teach detailed image processing and pattern recognition techniques. The laboratories cover distant image acquisition, basic image processing and pattern recognition methods, lenses and lighting, and communication. The study introduces a case of teaching computer vision in a distance learning environment, showing a schematic of a distant learning workstation and the laboratory contents with image processing examples. The focus is more on the contents of the vision labs than on the Internet delivery method. The study proposes ways to improve the on-line computer vision laboratories and outlines further research perspectives.


Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments (동적 환경에 강인한 장면 인식 기반의 로봇 자율 주행)

  • Kim, Jung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.245-254
    • /
    • 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, many of these methods mainly focus on finding the database image that corresponds to a query image; if the environment changes, for example when objects move, a robot is unlikely to find consistent corresponding points with any of the database images. To solve this problem, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme prepared for the kidnapping problem, which is defined as re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm is based on camera motion estimation to plan the robot's next movement and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the capability of vision-based autonomous navigation in dynamic environments.

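Rejecting correspondences on moving objects can be illustrated with a RANSAC-style consensus over matched points. The translation-only motion model, iteration count, and tolerance here are assumptions for illustration, not the paper's algorithm:

```python
import random

def ransac_translation(matches, iters=100, tol=2.0, seed=0):
    """Hypothesize a 2-D translation from one sampled match, keep the
    matches that agree with it, and return the largest consensus set;
    disagreeing matches (e.g. points on moving objects) are rejected."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in matches
                   if abs((c - a) - dx) <= tol and abs((d - b) - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Matches consistent with the dominant (static-scene) motion survive; the rest are discarded before scene matching.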

Vision-based recognition of a simple non-verbal intent representation by head movements (고개운동에 의한 단순 비언어 의사표현의 비전인식)

  • Yu, Gi-Ho;No, Deok-Su;Lee, Seong-Cheol
    • Journal of the Ergonomics Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.91-100
    • /
    • 2000
  • In this paper, an intent recognition system is presented that recognizes a human's head movements as simple non-verbal intent representations. The system recognizes five basic representations, i.e., strong/weak affirmation, strong/weak negation, and ambiguity, by image processing of nodding or shaking head movements. The vision system for tracking the head movements is composed of a CCD camera, an image processing board, and a personal computer. A modified template matching method, which replaces the reference image with the target image found in the previous step, is used for robust tracking of the head movements. To improve processing speed, the search is performed on a pyramid representation of the original image. By inspecting the variance of the head movement trajectories, the two basic intent representations, affirmation and negation, can be recognized. Also, by focusing on the speed of the head movements, the strength of the intent representation can potentially be recognized.

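The template matching at the core of this tracker can be sketched as an exhaustive normalized cross-correlation (NCC) search; the paper additionally speeds this up with an image pyramid and re-seeds the template from the previous match, which this minimal version omits:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation: 1.0 for a perfect match."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def track_template(frame, template):
    """Exhaustive NCC search over the frame; returns the best
    (row, col) position and its score."""
    th, tw = template.shape
    fh, fw = frame.shape
    best_score, best_pos = -2.0, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            s = ncc(frame[y:y + th, x:x + tw], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```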

Event recognition of entering and exiting (출입 이벤트 인식)

  • Cui, Yaohuan;Lee, Chang-Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2008.06a
    • /
    • pp.199-204
    • /
    • 2008
  • Visual surveillance has recently been an active topic in computer vision, and event detection and recognition is one of its important applications. In this paper, we propose a new method to recognize entering and exiting events based on the human's movement features and the door's state. Without any sensors, the proposed approach relies on a novel and simple vision method combining edge detection, motion history images, and geometric characteristics of the human shape. The proposed method supports applications such as access control in visual surveillance and computer vision.

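A motion history image, one ingredient of the method above, stamps moving pixels with the current time and forgets stale ones, so recent motion leaves a fading trail. A minimal sketch (the `duration` and difference threshold are assumed values):

```python
import numpy as np

def motion_mask(prev, cur, thresh=25):
    """Pixels whose intensity changed more than `thresh` between frames."""
    return np.abs(cur.astype(int) - prev.astype(int)) > thresh

def update_mhi(mhi, mask, timestamp, duration=5):
    """Stamp moving pixels with the current time and clear pixels
    whose last motion is older than `duration`."""
    mhi = mhi.copy()
    mhi[mask] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi
```

The direction of the fading trail near the door region, combined with the door's state, is what distinguishes entering from exiting.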

A Study on Shape Recognition Technology of Die Casting and Forging Parts Based on Robot Vision for Inspection Process Automation in Limit Environment (극한환경 검사공정 자동화를 위한 로봇비전 기반 주단조 부품의 형상인식 기술에 관한 연구)

  • Bae, H.Y.;Kim, H.J.;Paeng, J.I.;Sim, H.S.;Han, S.H.;Moon, J.C.
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.6
    • /
    • pp.369-378
    • /
    • 2018
  • This study proposes a new approach to the real-time implementation of shape recognition technology for die casting and forging parts based on robot vision for the smart factory. The proposed shape recognition and inspection technology for forging and die casting parts is very useful for manufacturing process automation and smart factories, including automatic inspection of the external form of mechanical or electronic parts for precision verification. The reliability of the proposed technology has been illustrated through experiments.

Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.6
    • /
    • pp.91-98
    • /
    • 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors used to classify objects into predetermined targets of interest; the matching result is then further refined by a minimization technique. In the tracking phase, we adopt the meanshift tracking algorithm based on the Bhattacharyya coefficient measurement. In summary, the contributions of our method to underwater robot vision are four-fold: 1) it can handle camera motion and scale changes of objects in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) a shape-based method has advantages over a distinct feature-point-based method (SIFT) in underwater environments with possible turbidity variation; 4) we provide a quantitative comparison of our method with several well-known methods. The results are quite promising for the map-based underwater SLAM task, which is the goal of our research.
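The Bhattacharyya coefficient used in the tracking phase measures the overlap between two normalized histograms (e.g. the target model and a candidate region). A minimal implementation of the measure itself:

```python
import numpy as np

def bhattacharyya(p, q):
    """Overlap between two histograms after normalization:
    1.0 for identical distributions, 0.0 for disjoint ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())
```

Meanshift iteratively shifts the candidate window toward the location that maximizes this coefficient.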