• Title/Summary/Keyword: Vision-based recognition


Hybrid Facial Representations for Emotion Recognition

  • Yun, Woo-Han;Kim, DoHyung;Park, Chankyu;Kim, Jaehong
    • ETRI Journal / v.35 no.6 / pp.1021-1028 / 2013
  • Automatic facial expression recognition is a widely studied problem in computer vision and human-robot interaction, and a range of studies have addressed facial descriptors for facial expression recognition. Some prominent descriptors were presented in the first Facial Expression Recognition and Analysis challenge (FERA2011), where the Local Gabor Binary Pattern Histogram Sequence descriptor showed the most powerful description capability. In this paper, we introduce hybrid facial representations for facial expression recognition that have more powerful description capability with lower dimensionality. Our descriptors consist of a block-based descriptor, which represents micro-orientation and micro-geometric structure information, and a pixel-based descriptor, which represents texture information. We validate our descriptors on two public databases, and the results show that they perform well with relatively low dimensionality.
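
The pixel-based texture descriptor above is in the Local Binary Pattern family. As a hedged illustration (not the paper's exact hybrid descriptors), the sketch below computes a basic 8-neighbour LBP code map and its normalized histogram with NumPy:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for an image interior.

    Each interior pixel is compared against its 8 neighbours; neighbours
    that are >= the centre contribute one bit to an 8-bit code.
    """
    c = img[1:-1, 1:-1]
    # neighbour offsets ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: a simple pixel-based texture descriptor."""
    h, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
    return h / h.sum()
```

A block-based variant would compute such histograms per image block and concatenate them.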

Design of Optimized RBFNNs based on Night Vision Face Recognition Simulator Using the (2D)^2 PCA Algorithm ((2D)2 PCA알고리즘을 이용한 최적 RBFNNs 기반 나이트비전 얼굴인식 시뮬레이터 설계)

  • Jang, Byoung-Hee;Kim, Hyun-Ki;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.1-6 / 2014
  • In this study, we propose optimized RBFNNs for a night vision face recognition simulator with the aid of the (2D)^2 PCA algorithm. Images acquired through a CCD camera at night have low brightness, which makes face recognition difficult, so a night vision camera is used to capture images at night. The AdaBoost algorithm is used to detect faces, separating face regions from non-face regions, and distortion in the images is minimized using histogram equalization. The resulting high-dimensional images are reduced to low-dimensional representations using the (2D)^2 PCA algorithm. Face recognition is performed by a polynomial-based RBFNN classifier, and the essential design parameters of the classifier are optimized by means of Differential Evolution (DE). The performance of the optimized (2D)^2 PCA-based RBFNNs is evaluated on the night vision face recognition system with IC&CI Lab data.
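
The dimensionality reduction step can be sketched as follows. This is a minimal NumPy illustration of the generic (2D)^2 PCA algorithm (projecting each image matrix from both sides), not the authors' implementation; the shapes and parameter names are assumptions:

```python
import numpy as np

def two_d_squared_pca(images, q, d):
    """(2D)^2 PCA: project image matrices from both the row and column side.

    images : array of shape (M, m, n) -- M face images as 2-D matrices
    q, d   : number of row / column projection directions to keep
    Returns the row projector Z (m x q), the column projector X (n x d),
    and the reduced images of shape (M, q, d).
    """
    A = np.asarray(images, dtype=float)
    centered = A - A.mean(axis=0)
    # column-direction image covariance (n x n): standard 2DPCA
    G_col = np.einsum('kij,kil->jl', centered, centered) / len(A)
    # row-direction image covariance (m x m): alternative 2DPCA
    G_row = np.einsum('kij,klj->il', centered, centered) / len(A)
    # eigenvectors sorted by decreasing eigenvalue
    wc, vc = np.linalg.eigh(G_col)
    wr, vr = np.linalg.eigh(G_row)
    X = vc[:, np.argsort(wc)[::-1][:d]]
    Z = vr[:, np.argsort(wr)[::-1][:q]]
    # reduced image C_k = Z^T A_k X for every image
    reduced = np.einsum('iq,kij,jd->kqd', Z, centered, X)
    return Z, X, reduced
```

The reduced q x d matrices (rather than full m x n images) would then feed the classifier.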

Hierarchical Deep Belief Network for Activity Recognition Using Smartphone Sensor (스마트폰 센서를 이용하여 행동을 인식하기 위한 계층적인 심층 신뢰 신경망)

  • Lee, Hyunjin
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1421-1429 / 2017
  • Human activity recognition has been studied using various sensors and algorithms, and the methods can be divided into sensor-based and vision-based approaches. In this paper, we propose an activity recognition system that uses the acceleration and gyroscope sensors of a smartphone, a sensor-based method. We use a Deep Belief Network (DBN), one of the most popular deep learning methods, to improve the accuracy of human activity recognition. A DBN normally uses the entire input set as a common input. However, because different types of human activity are characterized by different time windows, the RBMs that make up the DBN are configured hierarchically by combining RBMs built from different time windows. When applied to real data, the proposed human activity recognition system showed stable precision.
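
As a rough illustration of building inputs from different time windows (the DBN/RBM training itself is omitted), the sketch below splits a sensor stream into overlapping windows at a chosen scale and computes simple per-window statistics; the function names and feature set are assumptions, not the paper's:

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Split a 1-D sensor stream into overlapping fixed-length windows."""
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

def window_features(signal, window, step):
    """Per-window summary features (mean, std, min, max) -- a simple
    stand-in for inputs built from one time-window scale."""
    w = sliding_windows(np.asarray(signal, dtype=float), window, step)
    return np.column_stack([w.mean(1), w.std(1), w.min(1), w.max(1)])
```

Calling `window_features` with a short and a long window yields the two scales that separate components could consume before their outputs are combined.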

Transformation Based Walking Speed Normalization for Gait Recognition

  • Kovac, Jure;Peer, Peter
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2690-2701 / 2013
  • Humans are able to recognize a small number of people they know well by the way they walk. This ability is the basic motivation for using human gait as a means of biometric identification. Such a biometric can be captured in public places from a distance without the subject's collaboration, awareness, or even consent. Although current approaches give encouraging results, we are still far from effective use in practical applications. In general, methods set various constraints to circumvent influencing factors such as changes of view, walking speed, capture environment, clothing, footwear, and object carrying, which have a negative impact on recognition results. In this paper, we investigate the influence of walking speed variation on different vision-based gait recognition approaches and propose a normalization based on geometric transformations that mitigates its influence on recognition results. Through evaluation on the MoBo gait dataset, we demonstrate the benefits of using such normalization in combination with different types of gait recognition approaches.

A Study on Luminance Contrast Criteria for Tactile Walking Surface Indicators (시각장애인 점자블록의 휘도대비 기준에 대한 연구)

  • Shin, Dong-Hong;Park, Kwang-Jae;Kim, Sang-Woon
    • Journal of The Korea Institute of Healthcare Architecture / v.22 no.1 / pp.7-15 / 2016
  • Purpose: Tactile walking surface indicators of many different colors are installed in Korea because the regulations on indicators for blind and vision-impaired persons are indefinite; yellow indicators in particular show severe color deviation. In this study, the researchers suggest color and luminance references to help blind and vision-impaired persons walk, by analyzing the color standards for tactile walking surface indicators and the luminance contrast between the indicators and the sidewalks currently in use. Method: Reasonable luminance contrast criteria are suggested by reviewing the literature on how color contrast affects object recognition by the visually impaired, and by analyzing Korean standards for tactile walking surface indicators together with the color and luminance contrast criteria of Europe, Japan, and Australia. The colors of the indicators currently used in Korea are then surveyed, and problems are identified by comparing them against the luminance contrast criteria. Finally, color selection criteria are set to improve the recognition rate of the indicators by the visually impaired. Results: To improve the recognition rate, it is more important to always secure at least a certain level of luminance contrast between the indicator and its environment than to fix the color of the indicator itself. Implication: Verification with real visually impaired pedestrians is required to establish color recognition criteria based on the results presented in this paper. The indicators should be understood as a barrier-free element for securing pedestrian safety.
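
As a hedged illustration of the kind of criterion discussed, the snippet below computes a Weber-style luminance contrast percentage between an indicator and the surrounding surface; the exact formula and thresholds vary between national standards and are not taken from this paper:

```python
def luminance_contrast(l_indicator, l_surround):
    """Weber-style luminance contrast between a tactile walking surface
    indicator and the surrounding pavement, as a percentage.

    Uses C = (L_lighter - L_darker) / L_darker * 100, one common
    convention in accessibility guidance; standards differ on the
    formula and on the minimum contrast they require.
    """
    lighter = max(l_indicator, l_surround)
    darker = min(l_indicator, l_surround)
    return (lighter - darker) / darker * 100.0
```

For example, an indicator of luminance 100 cd/m^2 on pavement of 40 cd/m^2 gives a 150% contrast under this convention.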

Design of an Intelligent Robot Control System Using Neural Network (신경회로망을 이용한 지능형 로봇 제어 시스템 설계)

  • 정동연;서운학;한성현
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.279-279 / 2000
  • In this paper, we propose a new approach to the design of a robot vision system to develop technology for the automatic testing and assembly of precision mechanical and electronic parts for factory automation. In order to perform the automatic assembly tasks of complex processes in real time, we developed an intelligent control algorithm based on neural network control theory to enhance precise motion control. The automatic test tasks were implemented with a real-time vision algorithm based on TMS320C31 DSPs, which correctly distinguishes acceptable items from defective ones through pattern recognition of the parts. Finally, the performance of the proposed robot vision system is illustrated by experiments on a model similar to the fifth of the twelve cells for automatic testing and assembly at company S.


An Automated Machine-Vision-based Feeding System for Engine Mount Parts (머신비젼 기반의 엔진마운트 부품 자동공급시스템)

  • Lee, Hyeong-Geun;Lee, Moon-Kyu
    • Journal of the Korean Society for Precision Engineering / v.18 no.5 / pp.177-185 / 2001
  • This paper describes a machine-vision-based prototype system for automatically feeding engine-mount parts to a swaging machine that assembles engine mounts. The system consists of a robot, a feeding device with two cylinders and two photo sensors, and a machine vision system. The machine vision system recognizes the type of each part being fed from the feeding device and estimates the angular difference between the part's inner-hole center and the point predetermined for assembly. The robot then picks up each part and rotates it through the estimated angle so that the parts are assembled together as specified. An algorithm has been developed to recognize the different part types and estimate the angular difference. Test results obtained for a set of real specimens indicate that the algorithm performs well enough to be applied to the prototype system.
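
The angular-difference estimate can be sketched geometrically: given the part centroid and the detected inner-hole center, the direction of the center-to-hole vector is compared with the predetermined assembly angle. This is an assumed, minimal reconstruction, not the authors' algorithm:

```python
import math

def angular_difference(part_center, hole_center, target_angle_deg):
    """Signed rotation (degrees) needed so that the direction from the
    part centroid to its inner-hole center matches a predetermined
    assembly angle.

    part_center, hole_center : (x, y) image coordinates
    target_angle_deg         : desired angle of the centre-to-hole direction
    The result is normalised to the range [-180, 180).
    """
    dx = hole_center[0] - part_center[0]
    dy = hole_center[1] - part_center[1]
    current = math.degrees(math.atan2(dy, dx))
    diff = target_angle_deg - current
    return (diff + 180.0) % 360.0 - 180.0
```

The robot would then rotate the gripped part by the returned angle before placement.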


A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.1118-1133 / 2017
  • Computer vision-based human activity recognition (HAR) has become very popular due to its applications in various fields such as smart home healthcare for elderly people. A video-based activity recognition system typically aims to react to people's behavior so that it can proactively assist them with their tasks. In this work, a novel approach is proposed for depth video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From depth video, the different body parts in human activities are first segmented by means of a trained random forest. Motion features representing the magnitude and direction of each joint in the next frame are then extracted. Finally, the features are used to train a DBN, which is later used for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a promising approach for practical applications in smartly controlled environments.
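
The per-joint motion features (magnitude and direction of each joint's displacement into the next frame) can be sketched as below; this is a minimal NumPy interpretation of the feature description, and the (J, 4) layout is an assumption:

```python
import numpy as np

def joint_motion_features(joints_t, joints_t1):
    """Per-joint motion features between consecutive depth frames.

    joints_t, joints_t1 : (J, 3) arrays of 3-D joint positions
    Returns a (J, 4) matrix: motion magnitude plus a unit direction
    vector for each joint.
    """
    d = np.asarray(joints_t1, float) - np.asarray(joints_t, float)
    mag = np.linalg.norm(d, axis=1)
    # avoid division by zero for stationary joints
    direction = np.divide(d, mag[:, None], out=np.zeros_like(d),
                          where=mag[:, None] > 0)
    return np.column_stack([mag, direction])
```

Concatenating these rows over all joints and frames would give the sequence features fed to a classifier.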

A Vision-Based Method to Find Fingertips in a Closed Hand

  • Chaudhary, Ankit;Vatwani, Kapil;Agrawal, Tushar;Raheja, J.L.
    • Journal of Information Processing Systems / v.8 no.3 / pp.399-408 / 2012
  • Hand gesture recognition is an important area of research in the field of Human-Computer Interaction (HCI). The geometric attributes of the hand play an important role in hand shape reconstruction and gesture recognition. In particular, fingertips are among the most important attributes for the detection of hand gestures and can provide valuable information from hand images. Many methods are available in the scientific literature for fingertip detection with an open hand, but only very poor results are available for fingertip detection when the hand is closed. This paper presents a new method for the detection of fingertips in a closed hand using a corner detection method and an advanced edge detection algorithm. Notably, skin color segmentation did not work for fingertip detection in a closed hand; the proposed method therefore applies Gabor filter techniques to detect edges and then applies a corner detection algorithm to find fingertips along the edges. To check its accuracy, the method was tested on a large number of images taken with a webcam and achieved a high detection accuracy. The method was further implemented on video to test its validity for real-time image capture. Such closed-hand fingertip detection would help in controlling an electro-mechanical robotic hand via hand gestures in a natural way.
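
A hedged sketch of the two stages, Gabor filtering for edges followed by a Harris-style corner response, is given below in plain NumPy; the kernel parameters, the naive convolution, and the corner formula are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real part of a Gabor filter kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def convolve2d(img, k):
    """Naive 'valid'-mode 2-D convolution (sufficient for a small demo)."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    flipped = k[::-1, ::-1]
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def corner_response(edges, k=0.04):
    """Harris-style corner response computed from an edge map's gradients;
    fingertip candidates would be local maxima of this response."""
    gy, gx = np.gradient(edges)
    box = np.ones((3, 3)) / 9.0  # flat smoothing window for the structure tensor
    sxx = convolve2d(gx * gx, box)
    syy = convolve2d(gy * gy, box)
    sxy = convolve2d(gx * gy, box)
    det = sxx * syy - sxy**2
    trace = sxx + syy
    return det - k * trace**2
```

In practice one would filter at several orientations, combine the edge maps, and threshold the corner response to obtain fingertip candidates.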

Deep Convolution Neural Networks in Computer Vision: a Review

  • Yoo, Hyeon-Joong
    • IEIE Transactions on Smart Processing and Computing / v.4 no.1 / pp.35-43 / 2015
  • Over the past couple of years, tremendous progress has been made in applying deep learning (DL) techniques to computer vision. In particular, deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance on standard recognition datasets and tasks such as the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Among them, the GoogLeNet network, a radically redesigned DCNN based on the Hebbian principle and scale invariance, set the new state of the art for classification and detection in ILSVRC 2014. Since various deep learning techniques exist, this review focuses on techniques directly related to DCNNs, especially those needed to understand the architecture and techniques employed in the GoogLeNet network.