• Title/Summary/Keyword: Semantic Image Annotation


KNN-based Image Annotation by Collectively Mining Visual and Semantic Similarities

  • Ji, Qian; Zhang, Liyan; Li, Zechao
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 11, No. 9, pp. 4476-4490, 2017
  • The aim of image annotation is to determine labels that can accurately describe the semantic information of images. Many approaches have been proposed to automate the image annotation task while achieving good performance. However, in most cases, the semantic similarities of images are ignored. To this end, we propose a novel Visual-Semantic Nearest Neighbor (VS-KNN) method that collectively explores visual and semantic similarities for image annotation. First, for each label, visual nearest neighbors of a given test image are constructed from the training images associated with that label. Second, each neighboring subset is determined by mining both the semantic and the visual similarity. Finally, the relevance between images and labels is determined by maximum a posteriori estimation. Extensive experiments were conducted on three widely used image datasets. The experimental results show the effectiveness of the proposed method in comparison with state-of-the-art methods.
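A minimal sketch of the label-wise nearest-neighbor idea described in this abstract, in Python with NumPy. The feature arrays, the distance, and the Gaussian-kernel score are illustrative assumptions, not the authors' exact MAP formulation.

```python
import numpy as np

def vs_knn_scores(test_feat, train_feats, train_labels, num_labels, k=5):
    """Score each label for a test image from the visual nearest neighbors
    among the training images that carry that label.

    test_feat   : (d,) visual feature of the test image
    train_feats : (n, d) visual features of the training images
    train_labels: list of label-id sets, one per training image
    """
    scores = np.zeros(num_labels)
    for label in range(num_labels):
        # Label-wise neighbor pool: training images associated with this label.
        idx = [i for i, labs in enumerate(train_labels) if label in labs]
        if not idx:
            continue
        # Visual distances from the test image to this label's pool.
        dists = np.linalg.norm(train_feats[idx] - test_feat, axis=1)
        nearest = np.sort(dists)[:k]
        # Turn distances into a relevance score; a Gaussian kernel is one
        # simple stand-in for the paper's MAP-based relevance estimate.
        scores[label] = np.exp(-nearest).mean()
    return scores
```

Labels would then be ranked by score and the top few kept as the annotation.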

Semantic Image Annotation and Retrieval in Mobile Environments

  • 노현덕; 서광원; 임동혁
    • Journal of Korea Multimedia Society, Vol. 19, No. 8, pp. 1498-1504, 2016
  • The progress of mobile computing technology is producing a large amount of multimedia content such as images. We therefore need an image retrieval system that searches for semantically relevant images. In this paper, we propose a semantic image annotation and retrieval system for mobile environments. Previous mobile-based annotation approaches cannot fully express the semantics of an image due to the limitations of their current form (i.e., keyword tagging). Our approach allows mobile devices to annotate images automatically using context-aware information such as temporal and spatial data. In addition, since we annotate images using the RDF (Resource Description Framework) model, we can issue SPARQL queries for semantic image retrieval. Our system, implemented in an Android environment, shows that it can represent the semantics of images more fully and retrieve images semantically compared with other image annotation systems.
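A minimal sketch, using the Python rdflib package, of annotating an image with temporal and spatial context in an RDF model and retrieving it with SPARQL. The namespace and property names are hypothetical placeholders, not the paper's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/photo#")  # hypothetical vocabulary

g = Graph()
img = URIRef("http://example.org/images/IMG_0001.jpg")

# Context-aware annotations: when and where the picture was taken.
g.add((img, RDF.type, EX.Photo))
g.add((img, EX.takenAt, Literal("2016-05-01T14:30:00", datatype=XSD.dateTime)))
g.add((img, EX.locatedIn, Literal("Seoul")))

# Semantic retrieval: find photos taken in Seoul.
query = """
PREFIX ex: <http://example.org/photo#>
SELECT ?img WHERE {
    ?img a ex:Photo ;
         ex:locatedIn "Seoul" .
}
"""
for row in g.query(query):
    print(row.img)
```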

Extending Semantic Image Annotation using User-Defined Rules and Inference in Mobile Environments

  • 서광원; 임동혁
    • Journal of Korea Multimedia Society, Vol. 21, No. 2, pp. 158-165, 2018
  • Since the amount of multimedia images has increased dramatically, it is important to search for semantically relevant images. Several semantic image annotation methods using the RDF (Resource Description Framework) model in mobile environments have therefore been introduced. Earlier studies on annotating images semantically focused on both image tags and context-aware information such as temporal and spatial data. However, to fully express the semantics of an image, we need additional annotations described in the RDF model. In this paper, we propose an annotation method that infers additional triples using RDFS entailment rules and user-defined rules. Our approach, implemented in the Moment system, shows that it can represent the semantics of an image more fully with additional annotation triples.
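A minimal sketch of extending annotations by inference, again with rdflib. One RDFS entailment rule (rdfs9: class membership propagates up rdfs:subClassOf) and one hypothetical user-defined rule are applied by naive forward chaining; the Moment system's actual rule engine is not shown.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/photo#")  # hypothetical vocabulary

g = Graph()
img = URIRef("http://example.org/images/IMG_0002.jpg")
g.add((EX.Beach, RDFS.subClassOf, EX.OutdoorScene))
g.add((img, RDF.type, EX.Beach))
g.add((img, EX.locatedIn, EX.Busan))

def infer(graph):
    """Naive forward chaining: apply rules until no new triples appear."""
    added = True
    while added:
        added = False
        new = set()
        # RDFS rule rdfs9: (x type C1), (C1 subClassOf C2) => (x type C2)
        for s, _, c1 in graph.triples((None, RDF.type, None)):
            for _, _, c2 in graph.triples((c1, RDFS.subClassOf, None)):
                new.add((s, RDF.type, c2))
        # Hypothetical user-defined rule: photos taken in Busan are near the sea.
        for s, _, _ in graph.triples((None, EX.locatedIn, EX.Busan)):
            new.add((s, EX.nearTo, EX.Sea))
        for t in new:
            if t not in graph:
                graph.add(t)
                added = True

infer(g)
print(len(g), "triples after inference")  # more annotation triples than asserted
```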

Deep Image Annotation and Classification by Fusing Multi-Modal Semantic Topics

  • Chen, YongHeng; Zhang, Fuquan; Zuo, WanLi
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 1, pp. 392-412, 2018
  • Due to the semantic gap across different modalities, automatic retrieval of multimedia information still faces a major challenge. It is desirable to provide an effective joint model that bridges the gap and organizes the relationships between modalities. In this work, we develop a deep image annotation and classification model that fuses multi-modal semantic topics (DAC_mmst). It can find visual and non-visual topics by jointly modeling an image and its loosely related text for deep image annotation, while simultaneously learning and predicting the class label. More specifically, DAC_mmst relies on a non-parametric Bayesian model to estimate the number of visual topics that best explains the image. To evaluate the effectiveness of the proposed algorithm, we collected a real-world dataset and conducted various experiments. The experimental results show that DAC_mmst performs favorably in perplexity, image annotation, and classification accuracy compared to several state-of-the-art methods.
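DAC_mmst itself is a non-parametric Bayesian topic model; as a loose illustration only, the sketch below fuses precomputed visual and textual topic proportions into label scores with a simple weighted combination. All arrays and weights are assumptions for illustration, not the paper's inference procedure.

```python
import numpy as np

def fuse_topic_scores(visual_topics, text_topics, topic_label_probs, alpha=0.6):
    """Fuse per-image topic proportions from two modalities into label scores.

    visual_topics    : (T,) topic proportions inferred from the image
    text_topics      : (T,) topic proportions inferred from loosely related text
    topic_label_probs: (T, L) probability of each label given each topic
    alpha            : weight of the visual modality
    """
    fused = alpha * visual_topics + (1.0 - alpha) * text_topics  # (T,)
    label_scores = fused @ topic_label_probs                     # (L,)
    return label_scores / label_scores.sum()

# Toy example with 3 topics and 4 labels.
v = np.array([0.7, 0.2, 0.1])
t = np.array([0.3, 0.5, 0.2])
p = np.random.default_rng(0).dirichlet(np.ones(4), size=3)  # (3, 4)
print(fuse_topic_scores(v, t, p))
```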

A Semantic-based Video Retrieval System using Method of Automatic Annotation Update and Multi-Partition Color Histogram

  • 이광형; 전문석
    • The Journal of Korean Institute of Communications and Information Sciences, Vol. 29, No. 8C, pp. 1133-1141, 2004
  • Processing video data efficiently requires a semantic-based retrieval technique that stores information about the content of the video data in a database and can handle users' diverse queries. In this paper, we propose an automated, integrated, agent-based semantic video retrieval system that supports diverse semantic searches by users over large volumes of video data, using both annotation-based and feature-based retrieval. From the user's basic query and the selection of a key-frame image extracted for that query, the agent further refines the semantics of the extracted key frame's annotation. In addition, the key frame selected by the user becomes the query image, and the most similar key frames are retrieved through the proposed feature-based retrieval technique. In an experimental performance evaluation, the designed and implemented system showed a high accuracy of over 90%.
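A minimal sketch of the multi-partition color histogram comparison used in the feature-based stage, in Python with NumPy. The partition grid, bin count, and histogram intersection as the similarity are illustrative choices, not necessarily the paper's parameters.

```python
import numpy as np

def partition_histograms(img, grid=(2, 2), bins=8):
    """Split an RGB image into grid regions and compute a color histogram
    per region, so spatial layout is preserved in the comparison.

    img: (H, W, 3) uint8 array
    """
    h, w, _ = img.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = img[i*h//grid[0]:(i+1)*h//grid[0],
                         j*w//grid[1]:(j+1)*w//grid[1]]
            hist, _ = np.histogramdd(region.reshape(-1, 3),
                                     bins=(bins, bins, bins),
                                     range=((0, 256),) * 3)
            feats.append(hist.ravel() / hist.sum())  # normalize per region
    return np.concatenate(feats)

def similarity(f1, f2):
    """Histogram intersection over all partitions (higher = more similar)."""
    return np.minimum(f1, f2).sum()
```

A query key frame's feature vector would be compared against all stored key frames with similarity(), and the best matches returned.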

Multi-cue Integration for Automatic Annotation

  • 신성윤; 이양원
    • Korean Society of Computer Information: Proceedings of the 42nd Summer Conference, Vol. 18, No. 2, pp. 151-152, 2010
  • WWW images are located in structured, networked documents, so the importance of a word can be indicated by its location and frequency. There are two patterns for multi-cue integration annotation. The multi-cue integration algorithm shows initial promise as an indicator of the semantic keyphrases of web images. Latent semantic automatic keyphrase extraction, which improves with the use of multiple cues, is expected to be preferable.
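A minimal sketch of the idea that a word's importance for a web image depends on where it occurs and how often. The cue names and weights are hypothetical, chosen only to illustrate location- and frequency-weighted scoring.

```python
# Hypothetical weights: words nearer the image, or in prominent fields,
# count more than words in the page body.
CUE_WEIGHTS = {"alt_text": 3.0, "caption": 2.5, "title": 2.0, "body": 1.0}

def term_importance(occurrences):
    """occurrences: list of (word, cue) pairs harvested from the page.
    Returns a dict mapping each word to a location- and frequency-weighted score."""
    scores = {}
    for word, cue in occurrences:
        scores[word] = scores.get(word, 0.0) + CUE_WEIGHTS.get(cue, 1.0)
    return scores

page = [("beach", "alt_text"), ("beach", "body"), ("hotel", "body"),
        ("sunset", "caption"), ("beach", "title")]
print(sorted(term_importance(page).items(), key=lambda kv: -kv[1]))
```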


Collaborative Similarity Metric Learning for Semantic Image Annotation and Retrieval

  • Wang, Bin; Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 5, pp. 1252-1271, 2013
  • Automatic image annotation has become an increasingly important research topic owing to its key role in image retrieval. At the same time, it is highly challenging on large-scale datasets with large variance. Practical approaches generally rely on similarity measures defined over images and on multi-label prediction methods. More specifically, such approaches usually 1) leverage similarity measures that are predefined, or learned by optimizing for ranking or annotation, and may not be adaptive enough to the dataset; and 2) predict labels separately without taking label correlations into account. In this paper, we propose an image annotation method that collaboratively learns a similarity metric from the dataset and models its label correlations. The similarity metric is learned by simultaneously optimizing 1) image ranking, using a structural SVM (SSVM), and 2) image annotation, using correlated label propagation, both with respect to the similarity metric. The learned metric, fully exploiting the available information in the dataset, improves the two collaborative components, ranking and annotation, and consequently the retrieval system itself. We evaluated the proposed method on the Corel5k, Corel30k, and EspGame databases. The annotation and retrieval results show the competitive performance of the proposed method.
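A loose sketch of the annotation side only: scoring labels by propagating them from nearest neighbors under a Mahalanobis-style learned metric. The SSVM training of the metric is omitted (M is assumed given), and label correlation is reduced to a simple co-occurrence boost for illustration.

```python
import numpy as np

def annotate(test_feat, train_feats, train_label_matrix, M, cooc, k=10, beta=0.2):
    """test_feat          : (d,) feature of the test image
    train_feats        : (n, d) training features
    train_label_matrix : (n, L) binary label assignments
    M                  : (d, d) learned positive semi-definite metric matrix
    cooc               : (L, L) row-normalized label co-occurrence matrix
    """
    diffs = train_feats - test_feat                    # (n, d)
    dists = np.einsum("nd,de,ne->n", diffs, M, diffs)  # squared Mahalanobis
    nn = np.argsort(dists)[:k]                         # k nearest neighbors
    weights = np.exp(-dists[nn])
    base = weights @ train_label_matrix[nn]            # (L,) propagated labels
    # Correlated propagation: strongly co-occurring labels reinforce each other.
    return base + beta * (base @ cooc)
```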

Using Context Information to Improve Retrieval Accuracy in Content-Based Image Retrieval Systems

  • Hejazi, Mahmoud R.; Woo, Woon-Tack; Ho, Yo-Sung
    • The HCI Society of Korea: Proceedings of the 2006 Conference, Part 1, pp. 926-930, 2006
  • Current image retrieval techniques have shortcomings that make it difficult to search for images based on a semantic understanding of what an image is about. Since an image is normally associated with multiple contexts (e.g., when and where a picture was taken), knowledge of these contexts can enhance the semantic understanding of an image. In this paper, we present a context-aware image retrieval system that uses context information to infer metadata for captured images as well as for images in different collections and databases. Experimental results show that using this kind of information can not only significantly increase retrieval accuracy in conventional content-based image retrieval systems but also reduce the problems arising from manual annotation in text-based image retrieval systems.
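A minimal sketch of combining content similarity with contextual closeness (capture time and place) to re-rank content-based retrieval results. The scoring function and weights are assumptions for illustration.

```python
from math import exp

def context_score(query_ctx, img_ctx, tau_hours=24.0):
    """Closeness in capture time and place, each contributing half the score."""
    dt = abs((query_ctx["time"] - img_ctx["time"]).total_seconds()) / 3600.0
    time_score = exp(-dt / tau_hours)  # decays smoothly with the hour gap
    place_score = 1.0 if query_ctx["place"] == img_ctx["place"] else 0.0
    return 0.5 * time_score + 0.5 * place_score

def rerank(results, query_ctx, w_content=0.7):
    """results: list of (image_id, content_sim, img_ctx) tuples.
    Returns (image_id, combined_score) pairs, best first."""
    combined = [(iid,
                 w_content * sim + (1 - w_content) * context_score(query_ctx, ctx))
                for iid, sim, ctx in results]
    return sorted(combined, key=lambda kv: -kv[1])
```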


Images Automatic Annotation: Multi-cues Integration

  • 신성윤; 안은미; 이양원
    • Proceedings of the Korea Institute of Maritime Information and Communication Sciences 2010 Spring Conference, pp. 589-590, 2010
  • Together, these images constitute a considerable database, and the semantic meaning of an image is often well represented by its surrounding text and links. However, only a small minority of these images have precisely assigned keyphrases, and manually assigning keyphrases to existing images is very laborious. It is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency, and local word frequency. We then put forward our multi-cue integration method for image annotation and show experimentally that it is superior to the other methods.
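A minimal sketch of combining the four cues named in this abstract (low-level visual features, page tags, overall word frequency, local word frequency) into one keyphrase score by a weighted sum. The weights and per-cue scores are hypothetical, normalized inputs.

```python
# Hypothetical cue weights for combining per-candidate scores in [0, 1].
WEIGHTS = {"visual": 0.2, "page_tags": 0.3, "global_freq": 0.2, "local_freq": 0.3}

def keyphrase_score(cues):
    """cues: dict of cue name -> normalized score for one candidate keyphrase."""
    return sum(WEIGHTS[name] * cues.get(name, 0.0) for name in WEIGHTS)

candidates = {
    "beach":  {"visual": 0.8, "page_tags": 0.9, "global_freq": 0.4, "local_freq": 0.7},
    "footer": {"visual": 0.1, "page_tags": 0.0, "global_freq": 0.9, "local_freq": 0.1},
}
ranked = sorted(candidates, key=lambda w: -keyphrase_score(candidates[w]))
print(ranked)  # keyphrases with strong multi-cue support rank first
```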


A Semantic-based Video Retrieval System using Design of Automatic Annotation Update and Categorizing

  • 김정재; 이창수; 이종희; 전문석
    • Journal of the Korea Computer Industry Society, Vol. 5, No. 2, pp. 203-216, 2004
  • Processing video data efficiently requires a semantic-based retrieval technique that stores information about the content of the video data in a database and can handle users' diverse queries. Existing content-based video retrieval systems search with only a single method, such as annotation-based or feature-based retrieval, so their retrieval efficiency is low; moreover, since they are not fully automated, they demand considerable effort from system administrators or annotators. In this paper, we propose an automated, integrated, agent-based semantic video retrieval system that supports diverse semantic searches by users over large volumes of video data, using both annotation-based and feature-based retrieval. From the user's basic query and the selection of a key-frame image extracted for that query, the agent further refines the semantics of the extracted key frame's annotation. In addition, the key frame selected by the user becomes the query image, and the most similar key frames are retrieved through the proposed feature-based retrieval technique. The system is therefore designed to raise the efficiency of video data retrieval through semantic-based search.
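A minimal sketch of the automatic annotation-update step: when a user selects a key frame returned for a query, the agent strengthens the association between the query keyword and that key frame's annotation. The class, names, and update rule are illustrative assumptions.

```python
from collections import defaultdict

class AnnotationStore:
    """Keyword -> key-frame relevance weights, refined by user selections."""
    def __init__(self):
        self.weights = defaultdict(float)  # (keyword, frame_id) -> weight in [0, 1]

    def update(self, keyword, selected_frame, rate=0.1):
        # A selection is treated as confirmation that the frame's annotation
        # matches the query semantics, so its weight is reinforced toward 1.
        key = (keyword, selected_frame)
        self.weights[key] += rate * (1.0 - self.weights[key])

    def rank(self, keyword, frame_ids):
        return sorted(frame_ids, key=lambda f: -self.weights[(keyword, f)])

store = AnnotationStore()
store.update("sunset", "frame_042")  # user picked frame_042 for query "sunset"
print(store.rank("sunset", ["frame_007", "frame_042"]))
```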
