• Title/Abstract/Keyword: Bag of Visual Words model (BoVW)

Search results: 4

Image Classification Using Bag of Visual Words and Visual Saliency Model

  • 장현웅;조수선
    • KIPS Transactions on Software and Data Engineering / Vol. 3, No. 12 / pp.547-552 / 2014
  • With the growth of large-scale social media sharing sites such as Flickr and Facebook, the volume of image data is increasing very rapidly. Accordingly, a variety of studies on retrieving social images accurately are in progress, including work that uses the semantic relatedness of image tags to improve the accuracy of tag-based image retrieval, and work that classifies web images based on a bag of visual words (BoVW). In this paper, we propose applying the GBVS (Graph Based Visual Saliency) model, which locates the important part of an image by discarding low-importance information such as the background, to this existing work. The proposed method proceeds in two steps. First, a bag of visual words is built with the SIFT algorithm over a database pre-classified by the semantic relatedness of image tags. Second, the region of interest of each test image is selected via GBVS and used for testing. Applying GBVS to the existing method, which combines semantically related tags with a SIFT-based bag of visual words, is shown to yield higher accuracy.
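
The BoVW construction this abstract describes (SIFT descriptors quantized into a visual vocabulary) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes OpenCV and scikit-learn, and the vocabulary size is a placeholder.

```python
# Minimal BoVW pipeline sketch: extract SIFT descriptors per image, cluster
# them into a visual vocabulary with k-means, then encode each image as a
# normalized histogram of visual-word assignments.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(image_paths):
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return per_image

def build_vocabulary(per_image, k=200):  # k is an illustrative vocabulary size
    all_desc = np.vstack([d for d in per_image if len(d)])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(desc, vocab):
    hist = np.zeros(vocab.n_clusters, dtype=np.float32)
    if len(desc):
        np.add.at(hist, vocab.predict(desc), 1)
        hist /= hist.sum()  # L1-normalize so image size does not matter
    return hist
```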

A Salient Based Bag of Visual Word Model (SBBoVW): Improvements toward Difficult Object Recognition and Object Location in Image Retrieval

  • Mansourian, Leila;Abdullah, Muhamad Taufik;Abdullah, Lilli Nurliyana;Azman, Azreen;Mustaffa, Mas Rina
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 2 / pp.769-786 / 2016
  • Object recognition and object localization have long drawn much interest, and various computational models have recently been designed. One of the major issues in this domain is the lack of an appropriate model for extracting the important part of a picture and estimating the object's location in the same environment, which has led to low accuracy. To solve this problem, a new Salient Based Bag of Visual Word (SBBoVW) model for object recognition and object location estimation is presented. The contributions of the present study are two-fold. The first is a new approach, the Salient Based Bag of Visual Word model (SBBoVW), for recognizing difficult objects on which previous methods achieved low accuracy. This method extracts SIFT features from both the original picture and its salient parts and fuses them to generate better codebooks with the bag of visual words method. The second contribution is a new algorithm that finds the object location automatically based on the saliency map. Performance evaluation on several data sets proves that the new approach outperforms other state-of-the-art methods.
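
The fusion step described above, which pools SIFT descriptors from the whole image with descriptors restricted to salient regions before codebook construction, might look like the sketch below. The saliency-map input and the threshold are assumptions; any saliency detector producing a map in [0, 1] could supply it.

```python
# Sketch of SBBoVW-style feature fusion: SIFT descriptors from the full
# image are stacked with descriptors extracted only inside salient regions,
# and the fused set feeds codebook construction. `saliency` is assumed to
# be a float map in [0, 1]; the 0.5 threshold is illustrative.
import cv2
import numpy as np

def fused_sift(bgr_image, saliency, thresh=0.5):
    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, desc_full = sift.detectAndCompute(gray, None)
    mask = (saliency >= thresh).astype(np.uint8) * 255  # keep salient pixels only
    _, desc_salient = sift.detectAndCompute(gray, mask)
    parts = [d for d in (desc_full, desc_salient) if d is not None]
    return np.vstack(parts) if parts else np.empty((0, 128), np.float32)
```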

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 7 / pp.2633-2648 / 2015
  • The problems of visual word synonymy and ambiguity are inherent in conventional bag of visual words (BoVW) based object categorization methods. In addition, noisy visual words, the so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag of visual words method based on PLSA and a chi-square model is proposed for object categorization. Firstly, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Secondly, KL divergence is adopted to measure the semantic distance between visual words, which yields semantically related homoionyms. Then, an adaptive soft-assignment strategy is applied to realize the soft mapping between SIFT features and these homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms. An SVM (Support Vector Machine) is then applied to accomplish object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words can be overcome effectively, and that both the semantic resolution of the visual vocabulary and the object classification performance are substantially boosted compared with traditional methods.
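
As a rough illustration of the homoionym idea, the sketch below measures the semantic distance between two visual words as the symmetric KL divergence of their topic distributions and returns each word's closest neighbors. The topic matrix here is a random stand-in; the paper's PLSA training is not reproduced.

```python
# Sketch of the word-relatedness step: given P(topic | word) for each visual
# word (random stand-in values instead of a fitted PLSA model), compute
# symmetric KL divergence between words and pick the closest "homoionyms".
import numpy as np

def sym_kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def homoionyms(word_topic, w, n=3):
    # Indices of the n visual words semantically closest to word w.
    dist = np.array([sym_kl(word_topic[w], word_topic[v]) for v in range(len(word_topic))])
    dist[w] = np.inf  # exclude the word itself
    return np.argsort(dist)[:n]

rng = np.random.default_rng(0)
word_topic = rng.dirichlet(np.ones(10), size=200)  # 200 words, 10 topics
print(homoionyms(word_topic, w=0))
```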

Recognizing Actions from Different Views by Topic Transfer

  • Liu, Jia
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 4 / pp.2093-2108 / 2017
  • In this paper, we describe a novel method for recognizing human actions from different views via view knowledge transfer. Our approach is characterized by two aspects: 1) We propose an unsupervised topic transfer model (TTM) to model two view-dependent vocabularies, under which the original bag of visual words (BoVW) representation can be transferred into a bag of topics (BoT) representation. The higher-level BoT features, which can be shared across views, connect the action models of different views. 2) Our features make it possible to train a discriminative model of action under one view and categorize actions in another view. We tested our approach on the IXMAS data set, and the results are promising given such a simple approach. In addition, we also demonstrate a supervised topic transfer model (STTM), which combines transfer feature learning and discriminative classifier learning in one framework.
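
A minimal sketch of the BoVW-to-BoT mapping follows, using scikit-learn's LDA as a stand-in topic model; the paper's TTM is an unsupervised transfer variant that couples two view-dependent vocabularies, which is not reproduced here. Each input row is a visual-word count histogram for one action clip; each output row is a topic mixture, the higher-level representation shared across views.

```python
# BoVW -> BoT sketch with LDA as a stand-in topic model. The clip count,
# vocabulary size, and topic count are illustrative.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
bovw = rng.integers(0, 5, size=(40, 300))  # 40 clips x 300 visual words
lda = LatentDirichletAllocation(n_components=20, random_state=0)
bot = lda.fit_transform(bovw)              # (40, 20) bag-of-topics features
print(bot.shape, float(bot[0].sum()))      # each row is a topic mixture (~1.0)
```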