• Title/Summary/Keyword: Bag of Features (BoF)


Detection of Direction Indicators on Road Surfaces Using Inverse Perspective Mapping and NN (원근투영법과 신경망을 이용한 도로노면 방향지시기호 검출 연구)

  • Kim, Jong Bae
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.201-208 / 2015
  • This paper proposes a method for efficiently detecting direction indicators painted on the road surface, using the black-box camera installed in a vehicle. In the proposed method, direction indicators are detected by inverse perspective mapping (IPM) and a bag-of-visual-features (BoF)-based NN classifier. To make the method suitable for real-time environments, IPM is applied only to candidate direction-indicator regions in the image, and the BoF-based NN classifier is used to classify the feature information extracted from those regions. When the proposed method was applied to road-surface direction-indicator detection and recognition, it achieved a detection accuracy of at least about 89% and maintained a relatively high detection rate under various road conditions. These results suggest that the proposed method can be applied to safe-driving support systems.
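The core geometric step of the abstract above, inverse perspective mapping, warps a road region seen in perspective into a bird's-eye view via a planar homography. The paper does not give its calibration, so the sketch below is a minimal, self-contained illustration: the four source/destination points and the resulting `H` are hypothetical stand-ins, not the authors' values.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H with dst ~ H @ src from four
    point correspondences (direct linear transform, h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, pts):
    """Map Nx2 points through H with the homogeneous divide."""
    p = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical road trapezoid (perspective view) mapped to a
# bird's-eye rectangle, as IPM would do for a candidate region.
src = [(300, 400), (340, 400), (420, 480), (220, 480)]
dst = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography_from_points(src, dst)
print(np.round(warp_points(H, src)))
```

In practice one would warp the whole candidate region (e.g. with an image-warping routine) rather than isolated points, but the point mapping above is the underlying computation.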

BoF based Action Recognition using Spatio-Temporal 2D Descriptor (시공간 2D 특징 설명자를 사용한 BOF 방식의 동작인식)

  • KIM, JinOk
    • Journal of Internet Computing and Services / v.16 no.3 / pp.21-32 / 2015
  • Since spatio-temporal local features for video representation have become an important issue in model-free bottom-up approaches to action recognition, many methods for feature extraction and description have been proposed. In particular, BoF (bag of features) has yielded promising recognition results. The most important question for BoF is how to represent the dynamic information of actions in videos. Most existing BoF methods treat the video as a spatio-temporal volume and describe neighborhoods of 3D interest points as complex volumetric patches. To simplify these complex 3D methods, this paper proposes a novel method that builds the BoF representation by learning 2D interest points directly from video data. The basic idea of the proposed method is to gather feature points not only from the 2D xy spatial planes of traditional frames, but also from 2D planes along the time axis, called spatio-temporal frames. Such spatio-temporal features capture dynamic information from action videos and are well suited to recognizing human actions without 3D extensions of the feature descriptors. The spatio-temporal BoF approach using SIFT and SURF feature descriptors obtains good recognition rates on a well-known action recognition dataset. Compared with the more sophisticated scheme of 3D HoG/HoF descriptors, the proposed method is easier to compute and simpler to understand.
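Both abstracts build on the same BoF pipeline: quantize local descriptors (SIFT/SURF in the action-recognition paper) against a visual codebook into a histogram, then classify with a nearest-neighbour rule. The sketch below shows that pipeline on toy 2-D vectors; the codebook, descriptors, and action labels are illustrative inventions, not data from either paper.

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized bag-of-features histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)                  # nearest codeword index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def nn_classify(hist, train_hists, train_labels):
    """1-nearest-neighbour classification over BoF histograms."""
    dists = np.linalg.norm(train_hists - hist, axis=1)
    return train_labels[dists.argmin()]

# Toy 2-word codebook and descriptors (stand-ins for SIFT/SURF vectors).
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
desc = np.array([[0.1, -0.2], [9.8, 10.1], [10.2, 9.9]])
h = bof_histogram(desc, codebook)                 # -> [1/3, 2/3]

# Hypothetical training histograms for two action classes.
train_hists = np.array([[1.0, 0.0], [0.0, 1.0]])
train_labels = ["walking", "running"]
print(nn_classify(h, train_hists, train_labels))
```

Real systems learn the codebook by clustering (typically k-means) a large sample of descriptors; the fixed codebook here just keeps the sketch self-contained.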