• Title/Summary/Keyword: Vision-based recognition

A Study on Vision-based Robust Hand-Posture Recognition Using Reinforcement Learning (강화 학습을 이용한 비전 기반의 강인한 손 모양 인식에 대한 연구)

  • Jang Hyo-Young;Bien Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.39-49 / 2006
  • This paper proposes a hand-posture recognition method that uses reinforcement learning to improve the performance of vision-based hand-posture recognition. The difficulties in vision-based hand-posture recognition lie in viewing-direction dependency and self-occlusion caused by the high degree of freedom of the human hand. General approaches to these problems include using multiple cameras or limiting the relative angle between the cameras and the user's hand. When multiple cameras are used, however, a fusion technique that induces the final decision must be considered, and limiting the angle of the user's hand restricts the user's freedom. The proposed method combines angular features and appearance features to describe hand postures through a two-layered data structure and reinforcement learning. The validity of the proposed method is evaluated by applying it to a hand-posture recognition system using three cameras.
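
  The abstract does not spell out how the reinforcement signal is applied, so the following is only a loose, hypothetical Python sketch: a bandit-style update that learns, from reward feedback, how much to trust each of three cameras' posture votes. The camera count, posture count, learning rate, and simulated data are all assumptions and do not reproduce the paper's two-layered feature structure.

    import numpy as np

    # Hypothetical sketch: reinforcement-style weighting of per-camera posture
    # classifiers. Camera 0 is simulated as the most reliable view so the
    # learned weights have something to discover.
    N_CAMERAS, N_POSTURES = 3, 5
    rng = np.random.default_rng(0)
    weights = np.ones(N_CAMERAS) / N_CAMERAS   # current trust in each camera
    alpha = 0.1                                 # learning rate

    def fuse(votes):
        """Fuse per-camera class-probability vectors with the current weights."""
        return np.argmax(weights @ votes)       # votes has shape (N_CAMERAS, N_POSTURES)

    for step in range(1000):
        true_label = rng.integers(N_POSTURES)
        votes = rng.dirichlet(np.ones(N_POSTURES), size=N_CAMERAS)
        votes[0, true_label] += 1.0             # camera 0 usually sees the truth
        votes /= votes.sum(axis=1, keepdims=True)

        predicted = fuse(votes)
        reward = 1.0 if predicted == true_label else -1.0
        # Reinforce each camera in proportion to how strongly it supported the
        # fused decision, scaled by the reward, then renormalize.
        weights += alpha * reward * votes[:, predicted]
        weights = np.clip(weights, 1e-3, None)
        weights /= weights.sum()

    print("learned camera weights:", np.round(weights, 3))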

Design of Optimized pRBFNNs-based Night Vision Face Recognition System Using PCA Algorithm (PCA알고리즘을 이용한 최적 pRBFNNs 기반 나이트비전 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Jang, Byoung-Hee
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.1 / pp.225-231 / 2013
  • In this study, we propose the design of an optimized pRBFNNs-based night vision face recognition system using the PCA algorithm. It is difficult to obtain images with a CCD camera because of low brightness in surroundings without lighting. The quality of images degraded by low illuminance is improved using a night vision camera and histogram equalization. The Ada-Boost algorithm is also used to detect faces and distinguish face from non-face image areas. The dimension of the obtained image data is reduced to a low dimension using the PCA method. We also introduce the pRBFNNs as the recognition module. The proposed pRBFNNs consist of three functional modules: the condition part, the conclusion part, and the inference part. In the condition part of the fuzzy rules, the input space is partitioned using Fuzzy C-Means clustering. In the conclusion part of the rules, the connection weights of the pRBFNNs are represented by three kinds of polynomials: linear, quadratic, and modified quadratic. The essential design parameters of the networks are optimized by means of Differential Evolution.
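
  As a rough illustration of the front end only, the hypothetical sketch below reduces flattened face vectors with PCA and classifies them with a plain RBF network fit by least squares. The Fuzzy C-Means partitioning, the polynomial conclusion parts, and the Differential Evolution tuning of the actual pRBFNNs are omitted, and all data sizes are made up.

    import numpy as np

    # Hypothetical sketch (not the paper's pRBFNN): PCA reduction followed by a
    # plain RBF-network classifier whose output weights are fit by least squares.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 64 * 64))          # stand-in flattened face images
    y = rng.integers(0, 4, size=200)             # 4 hypothetical identities

    # PCA: project onto the top-k right singular vectors of the centered data.
    k = 20
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                            # reduced features, shape (200, k)

    # RBF layer: Gaussian activations around randomly chosen centers.
    centers = Z[rng.choice(len(Z), size=16, replace=False)]
    dists = np.linalg.norm(Z[:, None] - centers[None], axis=2)
    sigma = np.median(dists)
    Phi = np.exp(-dists ** 2 / (2 * sigma ** 2))

    # Output weights by least squares against one-hot labels.
    Y = np.eye(4)[y]
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    pred = np.argmax(Phi @ W, axis=1)
    print("training accuracy:", (pred == y).mean())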

Ultrasonic and Vision Data Fusion for Object Recognition (초음파센서와 시각센서의 융합을 이용한 물체 인식에 관한 연구)

  • Ko, Joong-Hyup;Kim, Wan-Ju;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 1992.07a / pp.417-421 / 1992
  • Ultrasonic and vision data need to be fused for efficient object recognition, especially in mobile robot navigation. In the proposed approach, the whole ultrasonic echo signal is utilized, and data fusion is performed based on each sensor's characteristics. The approach is shown to be effective through experimental results.
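
  The abstract leaves the fusion rule unspecified; one standard way to fuse "based on each sensor's characteristic" is inverse-variance weighting of the two range estimates, sketched below in Python with assumed noise variances.

    # Hypothetical sketch: inverse-variance fusion of a range estimate from an
    # ultrasonic sensor and one from vision. The variances are assumptions,
    # not values from the paper.
    def fuse_estimates(z_ultra, var_ultra, z_vision, var_vision):
        """Return the minimum-variance linear combination of two measurements."""
        w_u = 1.0 / var_ultra
        w_v = 1.0 / var_vision
        fused = (w_u * z_ultra + w_v * z_vision) / (w_u + w_v)
        fused_var = 1.0 / (w_u + w_v)
        return fused, fused_var

    # Example: ultrasonic reads 1.95 m (noisier), vision reads 2.10 m.
    print(fuse_estimates(1.95, 0.04, 2.10, 0.01))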

Hand gesture recognition for player control

  • Shi, Lan Yan;Kim, Jin-Gyu;Yeom, Dong-Hae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference / 2011.07a / pp.1908-1909 / 2011
  • Hand gesture recognition has been widely used in virtual reality and HCI (Human-Computer Interaction) systems and is a challenging and interesting subject in the vision-based area. Existing approaches to vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking, and gesture recognition. The purpose of this paper is to combine a finite state machine (FSM) with a gesture recognition method in order to control Windows Media Player, e.g., play/pause, next, previous, and volume up/down.
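
  Only the FSM half of such a pipeline fits a short sketch; in the hypothetical Python example below, the gesture labels, state names, and command set are assumptions, and the vision-based classifier that would produce the gesture labels is not shown.

    # Hypothetical sketch of an FSM that maps recognized gesture labels to
    # media-player commands; unknown (state, gesture) pairs leave the state as-is.
    PLAYER_FSM = {
        ("idle", "open_palm"): ("playing", "play"),
        ("playing", "fist"): ("paused", "pause"),
        ("paused", "open_palm"): ("playing", "play"),
        ("playing", "swipe_right"): ("playing", "next"),
        ("playing", "swipe_left"): ("playing", "previous"),
        ("playing", "point_up"): ("playing", "volume_up"),
        ("playing", "point_down"): ("playing", "volume_down"),
    }

    def step(state, gesture):
        """Advance the FSM and return the (next_state, action) pair."""
        return PLAYER_FSM.get((state, gesture), (state, None))

    state = "idle"
    for g in ["open_palm", "swipe_right", "fist", "open_palm", "point_up"]:
        state, action = step(state, g)
        print(f"gesture={g:12s} -> state={state:8s} action={action}")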

Vision-Based Two-Arm Gesture Recognition by Using Longest Common Subsequence (최대 공통 부열을 이용한 비전 기반의 양팔 제스처 인식)

  • Choi, Cheol-Min;Ahn, Jung-Ho;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.5C / pp.371-377 / 2008
  • In this paper, we present a framework for vision-based two-arm gesture recognition. To capture the motion information of the hands, we perform a color-based tracking algorithm using an adaptive kernel for each frame, and a feature selection algorithm is applied to classify the motion information into four different phases. Using the gesture phase information, we build a gesture model that consists of symbol probabilities and a symbol sequence learned from the longest common subsequence. Finally, we present a similarity measure for two-arm gesture recognition using the proposed gesture models. The experimental results show the efficiency of the proposed feature selection method and the simplicity and robustness of the recognition algorithm.
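
  The longest-common-subsequence part of the model is easy to illustrate: the sketch below scores an observed symbol sequence against a learned model sequence by normalized LCS length. The phase symbols are assumed, and the probabilistic part of the paper's gesture model is omitted.

    # Sketch: gesture similarity as normalized longest-common-subsequence length.
    def lcs_length(a, b):
        """Classic O(len(a) * len(b)) dynamic program for the LCS length."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    def similarity(model_seq, observed_seq):
        """Normalize the LCS length by the model length, giving a score in [0, 1]."""
        return lcs_length(model_seq, observed_seq) / max(len(model_seq), 1)

    model = list("PPSSSSRR")        # assumed phase symbols learned for a gesture
    observed = list("PPSSXSSRRR")   # noisy observation of the same gesture
    print(f"similarity = {similarity(model, observed):.2f}")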

CAD-Based 3-D Object Recognition Using the Robust Stereo Vision and Hough Transform (강건 스테레오 비전과 허프 변환을 이용한 캐드 기반 삼차원 물체인식)

  • 송인호;정성종
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.10a / pp.500-503 / 1997
  • In this paper, a method for recognizing 3-D objects using the 3-D Hough transform and robust stereo vision is studied. A 3-D object is recognized in two steps: a modeling step and a matching step. In the modeling step, features of the object are extracted by analyzing its IGES file. In the matching step, values from the sensed image are compared with those of the IGES model, whose location and orientation are hypothesized in the 3-D Hough transform domain. Since the 3-D Hough transform of the input image is used directly, the sensitivity to noise and the high computational complexity can be significantly alleviated. Cost efficiency is also improved by using robust stereo vision to obtain the depth-map image needed for the 3-D Hough transform. To verify the proposed method, a real telephone model is recognized, and the resulting location and orientation of the model are presented.
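
  The paper votes in a 3-D Hough domain built from a stereo depth map; as a much simpler stand-in that shows only the voting principle, the sketch below accumulates 2-D line votes in a (theta, rho) array and reads off the peak.

    import numpy as np

    # Hypothetical, much-simplified stand-in: 2-D Hough voting for lines using
    # rho = x*cos(theta) + y*sin(theta). Bin counts and ranges are arbitrary.
    def hough_lines(points, n_theta=180, n_rho=200, rho_max=200.0):
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_theta, n_rho), dtype=int)
        for x, y in points:
            rhos = x * np.cos(thetas) + y * np.sin(thetas)
            bins = np.clip(((rhos + rho_max) / (2 * rho_max) * n_rho).astype(int),
                           0, n_rho - 1)
            acc[np.arange(n_theta), bins] += 1   # one vote per (theta, rho) cell
        return acc, thetas

    # Edge points lying roughly on the line y = x, plus one outlier.
    pts = [(i, i) for i in range(50)] + [(10, 40)]
    acc, thetas = hough_lines(pts)
    t_idx, _ = np.unravel_index(acc.argmax(), acc.shape)
    print(f"peak votes={acc.max()} at theta={np.degrees(thetas[t_idx]):.1f} deg")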

Intelligent Pattern Recognition Algorithms based on Dust, Vision and Activity Sensors for User Unusual Event Detection

  • Song, Jung-Eun;Jung, Ju-Ho;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information / v.24 no.8 / pp.95-103 / 2019
  • According to Statistics Korea in 2017, the 10 leading causes of death include cardiac disorders and self-injury. For such conditions, urgent assistance is required when people do not move for a certain period of time. We propose an unusual-event detection algorithm that identifies abnormal user behaviors using dust, vision, and activity sensors in users' houses. Vision sensors can detect personalized activity behaviors within the CCTV range in the house. The pattern algorithm using the dust sensors classifies user movements or dust-generating daily behaviors in indoor areas. The accelerometer sensor in a smartphone is suitable for identifying the activity behaviors of mobile users. We evaluated the proposed pattern algorithms and the fusion method in several scenarios.
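
  The fusion rule is not detailed in the abstract; one plausible reading, sketched below in Python, raises an alert when none of the three sensor streams has reported activity for a long idle window. The sensor interface, threshold, and toy trace are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Sample:
        t: float             # seconds since start
        vision_motion: bool  # motion detected in the CCTV view
        dust_spike: bool     # dust-level change attributed to movement
        accel_active: bool   # smartphone accelerometer activity

    def detect_unusual(samples, max_idle_seconds=3600.0):
        """Return times at which no sensor has reported activity for too long."""
        alerts, last_active = [], samples[0].t if samples else 0.0
        for s in samples:
            if s.vision_motion or s.dust_spike or s.accel_active:
                last_active = s.t
            elif s.t - last_active > max_idle_seconds:
                alerts.append(s.t)
        return alerts

    # Toy trace sampled every 60 s: all activity stops at t = 600 s.
    trace = [Sample(t, t < 600, False, t < 600) for t in range(0, 7200, 60)]
    print(detect_unusual(trace)[:3])   # first alert times after an hour of idling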

Improved Inference for Human Attribute Recognition using Historical Video Frames

  • Ha, Hoang Van;Lee, Jong Weon;Park, Chun-Su
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.120-124 / 2021
  • Recently, human attribute recognition (HAR) has attracted a lot of attention due to its wide application in video surveillance systems. Recent deep-learning-based solutions for HAR require time-consuming training processes. In this paper, we propose a post-processing technique that utilizes historical video frames to improve prediction results without re-training or modifying existing deep-learning-based classifiers. Experimental results on a large-scale benchmark dataset show the effectiveness of our proposed method.
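
  The exact post-processing rule is not specified in the abstract, so the sketch below shows one simple possibility consistent with the idea: a sliding majority vote over the last N frames' attribute predictions, applied without retraining the classifier. The window size and attribute labels are assumptions.

    from collections import deque

    def smooth_predictions(frame_preds, window=15):
        """Replace each frame's label by the majority label of the recent window."""
        history, smoothed = deque(maxlen=window), []
        for label in frame_preds:
            history.append(label)
            counts = list(history)
            smoothed.append(max(set(counts), key=counts.count))
        return smoothed

    # Toy stream: the per-frame classifier flickers on a single frame.
    raw = ["male"] * 10 + ["female"] + ["male"] * 10
    print(smooth_predictions(raw, window=7))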

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems rely on simple hand-detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method that recognizes and detects hand regions in images using their geometrical structures. In addition, our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.
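
  As a loose illustration of a statistical pixel-level detector (not the paper's method), the sketch below thresholds the Mahalanobis distance of each pixel's color to an assumed Gaussian skin model; the geometrical-structure analysis and multi-camera tracking are omitted, and the color statistics are made up.

    import numpy as np

    # Assumed Gaussian skin-color model in RGB; values are illustrative only.
    skin_mean = np.array([150.0, 120.0, 100.0])
    skin_cov_inv = np.linalg.inv(np.diag([400.0, 300.0, 300.0]))

    def hand_mask(image, threshold=9.0):
        """Return a boolean mask of pixels close to the skin model (Mahalanobis)."""
        diff = image.reshape(-1, 3) - skin_mean
        d2 = np.einsum("ij,jk,ik->i", diff, skin_cov_inv, diff)  # squared distance
        return (d2 < threshold).reshape(image.shape[:2])

    # Toy 4x4 "image": top half skin-like, bottom half blue background.
    img = np.zeros((4, 4, 3))
    img[:2] = [152.0, 118.0, 104.0]
    img[2:] = [40.0, 60.0, 200.0]
    print(hand_mask(img).astype(int))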

A Lane Change Recognition System for Smart Cars (스마트카를 위한 차선변경 인식시스템)

  • Lee, Yong-Jin;Yang, Jeong-Ha;Kwak, Nojun
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.46-51 / 2015
  • In this paper, we propose a vision-based method to recognize lane changes of an autonomous vehicle. The proposed method is based on six states of driving situations defined by the positional relationship between the vehicle and its nearest detected lane. A lane change is detected from combinations of these states. The proposed method yields 98% recognition accuracy for lane changes, even in poor situations with partially invisible lanes.
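
  The abstract does not enumerate the six states, so the state names in the Python sketch below are assumptions; it only illustrates how a drift-to-crossing pattern over such states could be turned into a lane-change event.

    # Hypothetical per-frame states describing the vehicle's position relative to
    # its nearest lane marking; a completed drift -> on-line -> crossed pattern
    # toward the left is reported as a left lane change.
    LEFT_CHANGE = ["drift_left", "on_left_line", "crossed_left"]

    def detect_left_lane_change(state_stream):
        """Return frame indices where the left lane-change pattern completes."""
        events, progress = [], 0
        for i, s in enumerate(state_stream):
            if s == LEFT_CHANGE[progress]:
                progress += 1
                if progress == len(LEFT_CHANGE):
                    events.append(i)
                    progress = 0
            elif s == "centered":
                progress = 0      # returning to the lane center aborts the pattern
        return events

    stream = (["centered"] * 5 + ["drift_left"] * 3 + ["on_left_line"] * 2
              + ["crossed_left"] + ["centered"] * 4)
    print(detect_left_lane_change(stream))   # -> [10]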