• Title/Summary/Keyword: spotter model

Search results: 4

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • In this paper, a recognition system for continuous human action based on depth information is described, using motion history images (MHI) and histograms of oriented gradients (HOG) together with a spotter model; the spotter model, which performs action spotting, is proposed to improve recognition performance. The system consists of three steps: pre-processing, human-action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human-action and spotter modeling step generates sequences from the extracted features. Human action models appropriate for each defined action, together with the proposed spotter model, are created from these sequences using hidden Markov models. Continuous human action recognition then performs action spotting with the spotter model to separate meaningful actions from meaningless ones in a continuous action sequence, and continuously recognizes actions by comparing model probability values over the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
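The motion history image used in the pre-processing step above can be sketched in a few lines: each pixel where motion is detected is stamped with a maximum timestamp value, and all other pixels decay toward zero. This is a minimal NumPy sketch of that update rule only (the frame sizes, decay of 1 per step, and the difference threshold are illustrative assumptions, not the paper's settings); the HOG and HMM stages are omitted.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """One update step of a Motion History Image (MHI).

    Pixels where motion is detected are set to the maximum timestamp
    value tau; all other pixels decay by 1 toward 0, so bright regions
    trace the most recent motion.
    """
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy example: a 4x4 frame pair with motion only in the top-left corner.
frames = [
    np.zeros((4, 4), dtype=np.uint8),
    np.pad(np.full((2, 2), 255, dtype=np.uint8), ((0, 2), (0, 2))),
]
mhi = np.zeros((4, 4), dtype=np.int32)
diff = np.abs(frames[1].astype(int) - frames[0].astype(int)) > 10
mhi = update_mhi(mhi, diff, tau=30)
```

Repeating the update over a frame sequence yields the space-time template from which the HOG features are extracted.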

Recognizing Hand Digit Gestures Using Stochastic Models

  • Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.807-815 / 2008
  • A simple, efficient method for spotting and recognizing hand gestures in video is presented, using a network of hidden Markov models and a dynamic programming search algorithm. The description starts from designing a set of isolated trajectory models that are stochastic and robust enough to characterize highly variable patterns such as human motion, handwriting, and speech. These models are interconnected to form a single large network, termed a spotting network or spotter, that models a continuous stream of gestures as well as non-gestures. Inference over the model is based on dynamic programming. The proposed model is highly efficient and can readily be extended to a variety of recurrent pattern recognition tasks. Test results obtained without any task-specific engineering show its potential for practical application. At the end of the paper we add a related experimental result obtained using a different model, the dynamic Bayesian network, which is also a type of stochastic model.
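The dynamic-programming inference over a spotting network described above can be illustrated with standard Viterbi decoding over a tiny two-state network: one state plays the role of the non-gesture filler model and one the gesture model. This is a generic sketch, not the paper's trajectory models; the discrete symbols and all probabilities below are invented for illustration.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Viterbi decoding over a discrete HMM.

    log_pi: (N,) initial log-probs; log_A: (N, N) transition log-probs;
    log_B: (N, M) emission log-probs; obs: sequence of symbol indices.
    Returns the most likely state path as a list of state indices.
    """
    N, T = len(log_pi), len(obs)
    delta = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (from_state, to_state)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(N)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two-state spotting network: state 0 = non-gesture filler, state 1 = gesture.
# Symbol 0 is typical of background motion, symbol 1 of the gesture.
log_pi = np.log([0.9, 0.1])
log_A = np.log([[0.8, 0.2], [0.2, 0.8]])
log_B = np.log([[0.9, 0.1], [0.1, 0.9]])
path = viterbi(log_pi, log_A, log_B, [0, 0, 1, 1, 1, 0])  # -> [0, 0, 1, 1, 1, 0]
```

The decoded path segments the stream into filler and gesture intervals, which is exactly the spotting behavior the network is built for; a real spotter would interconnect one sub-HMM per gesture class plus the filler.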

Spoken Document Retrieval Based on Phone Sequence Strings Decoded by PVDHMM

  • Choi, Dae-Lim;Kim, Bong-Wan;Kim, Chong-Kyo;Lee, Yong-Ju
    • MALSORI / no.62 / pp.133-147 / 2007
  • In this paper, we introduce a phone-vector discrete HMM (PVDHMM) that decodes a phone sequence string, and demonstrate its applicability to spoken document retrieval. The PVDHMM treats a phone recognizer or a large-vocabulary continuous speech recognizer (LVCSR) as a vector quantizer whose codebook size equals the size of its phone set. We apply the PVDHMM to decode phone sequence strings and compare the outputs with those of a continuous speech recognizer (CSR). We also carry out spoken document retrieval experiments with a PVDHMM word spotter on phone sequence strings generated by a phone recognizer or LVCSR, and compare the results with those of retrieval using a phone-based vector space model.
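The core idea above is that once the recognizer is treated as a vector quantizer, a document is just a string of phone indices that a discrete HMM can score. This is a minimal sketch of that scoring with the standard forward algorithm: a two-state left-to-right keyword model over an assumed 4-phone codebook, with all probabilities invented for illustration (it is not the PVDHMM's actual parameterization).

```python
import numpy as np

def forward_logprob(log_pi, log_A, log_B, obs):
    """Log-likelihood of a discrete symbol sequence under an HMM,
    computed with the forward algorithm in log space for stability."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return float(np.logaddexp.reduce(alpha))

eps = 1e-12  # stand-in for zero probabilities, keeps logs finite
# Left-to-right two-state keyword model over a 4-phone codebook:
# state 0 "expects" phone 2, state 1 "expects" phone 3.
log_pi = np.log([1 - eps, eps])
log_A = np.log([[0.5, 0.5], [eps, 1 - eps]])
log_B = np.log([[0.05, 0.05, 0.85, 0.05],
                [0.05, 0.05, 0.05, 0.85]])
hit = forward_logprob(log_pi, log_A, log_B, [2, 3])    # matches the keyword
miss = forward_logprob(log_pi, log_A, log_B, [0, 1])   # unrelated phones
```

A word spotter in this style slides the keyword model over the decoded phone string and flags positions where the score (here, `hit` versus `miss`) clears a threshold.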

Solitary Work Detection of Heavy Equipment Using Computer Vision

  • Jeong, Insoo;Kim, Jinwoo;Chi, Seokho;Roh, Myungil;Biggs, Herbert
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.4 / pp.441-447 / 2021
  • Construction sites are complex and dangerous because heavy equipment and workers perform various operations simultaneously within limited working areas. Solitary work by heavy equipment on complex job sites can cause fatal accidents, so operators should interact with spotters and obtain information about the surrounding environment during operations. Recently, many computer vision technologies have been developed to automatically monitor construction equipment and detect its interactions with other resources. However, previous methods did not take into account the interactions between equipment and spotters, which are crucial for identifying solitary work by heavy equipment. To address this drawback, this research develops a computer vision-based solitary work detection model that considers interactive operations between heavy equipment and spotters. To validate the proposed model, the research team performed experiments using image data collected from actual construction sites. The results showed that the model was able to detect workers and equipment with 83.4 % accuracy, classify workers and spotters with 84.2 % accuracy, and analyze equipment-to-spotter interactions with 95.1 % accuracy. The findings of this study can be used to automate manual monitoring of heavy equipment operations and to reduce the time and cost of on-site safety management.
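The final interaction-analysis step above can be reduced to a geometric check on detector output: given bounding boxes for a piece of equipment and for classified spotters, equipment is flagged as working solitarily when no spotter is nearby. This is a deliberately simplified sketch; the distance rule, pixel threshold, and function names are assumptions, not the paper's model.

```python
def box_center(box):
    """(x1, y1, x2, y2) bounding box -> (cx, cy) center point."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def is_solitary(equipment_box, spotter_boxes, max_dist=100.0):
    """Flag equipment as working solitarily when no detected spotter's
    box center lies within max_dist pixels of the equipment's center.
    (Center distance in pixels is an illustrative proxy for interaction.)"""
    ex, ey = box_center(equipment_box)
    for sb in spotter_boxes:
        sx, sy = box_center(sb)
        if ((ex - sx) ** 2 + (ey - sy) ** 2) ** 0.5 <= max_dist:
            return False  # a spotter is close enough to interact
    return True

# An excavator box with a spotter standing beside it is not solitary;
# with no spotters (or only distant ones) it is.
near = is_solitary((0, 0, 50, 50), [(60, 0, 110, 50)])       # False
alone = is_solitary((0, 0, 50, 50), [(500, 500, 550, 550)])  # True
```

In the paper's pipeline the boxes would come from a trained detector and worker/spotter classifier; this check only illustrates how the detected-box geometry feeds the final solitary-work decision.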