• Title/Summary/Keyword: Semantic Similarity

Learning Similarity with Probabilistic Latent Semantic Analysis for Image Retrieval

  • Li, Xiong;Lv, Qi;Huang, Wenting
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1424-1440 / 2015
  • It is a challenging problem to search for the intended images among a large number of candidates. Content-based image retrieval (CBIR) is the most promising way to tackle this problem, where the most important topic is to measure the similarity of images so as to cover the variance of shape, color, pose, illumination, etc. While previous works made significant progress, their ability to adapt to a given dataset has not been fully explored. In this paper, we propose a similarity learning method built on a probabilistic generative model, namely probabilistic latent semantic analysis (PLSA). It first derives a Fisher kernel, a function over the parameters and variables, from PLSA. Then, the parameters are determined by simultaneously maximizing the log-likelihood function of PLSA and the retrieval performance over the training dataset. The main advantages of this work are twofold: (1) deriving a similarity measure based on PLSA, which fully exploits the data distribution and Bayesian inference; (2) learning the model parameters by simultaneously maximizing the fit of the model to the data and the retrieval performance. The proposed method (PLSA-FK) is empirically evaluated over three datasets, and the results exhibit promising performance.
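
A minimal NumPy sketch of the Fisher-kernel side of this idea (the paper's joint learning of the parameters against retrieval performance is not shown, and the array layout and the crude length normalization are assumptions, not the authors' formulation):

```python
import numpy as np

def fisher_score(counts, p_w_given_z, p_z_given_d):
    """Gradient of the PLSA log-likelihood of one image's visual-word
    counts with respect to its topic mixture p(z|d)."""
    # p(w|d) = sum_z p(w|z) p(z|d); shapes: (Z, W) and (Z,) -> (W,)
    p_w_given_d = p_z_given_d @ p_w_given_z
    # d/dp(z|d) [sum_w n(w) log p(w|d)] = sum_w n(w) p(w|z) / p(w|d)
    return p_w_given_z @ (counts / np.maximum(p_w_given_d, 1e-12))  # (Z,)

def fisher_kernel(counts_a, counts_b, p_w_given_z, pz_a, pz_b):
    """Similarity of two bag-of-visual-words images as a dot product
    of their Fisher scores."""
    ga = fisher_score(counts_a, p_w_given_z, pz_a)
    gb = fisher_score(counts_b, p_w_given_z, pz_b)
    ga = ga / np.linalg.norm(ga)  # crude stand-in for the inverse
    gb = gb / np.linalg.norm(gb)  # Fisher information normalization
    return float(ga @ gb)
```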

A Study on Building Structures and Processes for Intelligent Web Document Classification (지능적인 웹문서 분류를 위한 구조 및 프로세스 설계 연구)

  • Jang, Young-Cheol
    • Journal of Digital Convergence / v.6 no.4 / pp.177-183 / 2008
  • This paper offers a solution based on intelligent document classification to create a user-centric information retrieval system that allows user-centric linguistic expression. Structures expressing user intention and a fine-grained document classification process using EBL, similarity, a knowledge base, and user intention are proposed. To overcome the problem of requiring large amounts of exact semantic information, a hybrid process is designed that integrates keyword, thesaurus, probability, and user-intention information. A user-intention tree hierarchy is built, and a method for extracting group intention between keywords and user intentions is proposed. These structures and processes are implemented in the HDCI (Hybrid Document Classification with Intention) system. HDCI consists of a user-intention analysis stage and a web document classification stage. The classification stage is composed of a knowledge base process, a similarity process, and a hybrid coordinating process. With the help of the user-intention structures and the hybrid coordinating process, HDCI can efficiently categorize web documents according to a user's complex linguistic expression with little prior information.
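
The abstract gives no formulas, so purely as a sketch of what a hybrid coordinating score over keyword, thesaurus, probability, and user-intention evidence might look like (every weight and helper name here is hypothetical):

```python
def hybrid_score(doc_terms, query_terms, intention_terms, thesaurus,
                 class_prior, w_kw=0.4, w_th=0.3, w_pr=0.2, w_in=0.1):
    """Weighted blend of four evidence sources for one candidate class.
    doc_terms, query_terms, intention_terms: sets of terms;
    thesaurus: term -> list of related terms."""
    kw = len(doc_terms & query_terms) / max(len(query_terms), 1)
    expanded = {s for t in query_terms for s in thesaurus.get(t, [t])}
    th = len(doc_terms & expanded) / max(len(expanded), 1)
    inten = len(doc_terms & intention_terms) / max(len(intention_terms), 1)
    return w_kw * kw + w_th * th + w_pr * class_prior + w_in * inten
```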

The Study of Automatic Hypertext Generation using the Syntactic and Semantic Similarity (구문적 유사도와 의미적 유사도를 이용한 하이퍼텍스트 자동생성에 관한 연구)

  • Kim, Mun-Seok;Nam, Se-Jin;Shin, Dong-Wook
    • Annual Conference on Human and Language Technology / 1996.10a / pp.424-429 / 1996
  • This paper proposes a technique for automatically converting ordinary documents into hypertext. The automatic conversion consists of three steps: recognizing keywords in the target document, dividing the document into nodes, and generating links from keywords to nodes. Whereas previous work used only syntactic similarity to divide a document into nodes, this paper uses semantic similarity as well as syntactic similarity in order to generate high-quality hypertext. Syntactic similarity is computed using tf*idf and the vector product, and semantic similarity is computed using a thesaurus and partial matching. In addition, to prevent erroneous links from being generated, links are created only for terms that exist in the thesaurus.
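
A minimal sketch of the two similarity measures the abstract names (the thesaurus structure and the exact partial-match rule are assumptions):

```python
import math
from collections import Counter

def tfidf_vectors(token_lists):
    """tf*idf weight vector for each node's token list."""
    n = len(token_lists)
    df = Counter(t for tokens in token_lists for t in set(tokens))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(tokens).items()}
            for tokens in token_lists]

def syntactic_similarity(vec_a, vec_b):
    """Vector product of two tf*idf vectors, as in the abstract."""
    return sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())

def semantic_similarity(terms_a, terms_b, thesaurus):
    """Partial match: credit term pairs that are identical or
    thesaurus neighbors."""
    hits = sum(1 for a in terms_a for b in terms_b
               if a == b or b in thesaurus.get(a, ()))
    return hits / max(len(terms_a) * len(terms_b), 1)
```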

Computing Semantic Similarity between ECG-Information Concepts Based on an Entropy-Weighted Concept Lattice

  • Wang, Kai;Yang, Shu
    • Journal of Information Processing Systems / v.16 no.1 / pp.184-200 / 2020
  • Similarity searching is a basic issue in information processing because of the large size of formal contexts and their complicated derivation operators. Recently, some researchers have focused on knowledge reduction methods using granular computing. In this process, suitable information granules are vital to characterizing the quantities of attributes and objects. To address this problem, a novel approach for obtaining an entropy-weighted concept lattice with inclusion degree and similarity distance (ECLisd) is proposed. The approach computes combined weights by merging the inclusion degree and the entropy degree between two concepts. In addition, another method is used to measure the hierarchical distance by considering the different degrees of importance of each attribute. Finally, the rationality of the ECLisd is validated via a comparative analysis.
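
The paper's exact combination of inclusion degree and entropy degree is not reproduced here; the sketch below only illustrates weighting an inclusion degree between two concepts' attribute sets by per-attribute entropy (all structures hypothetical):

```python
import math

def attribute_entropy(context, attr):
    """Binary entropy of an attribute over a formal context,
    where context maps each object to its set of attributes."""
    p = sum(attr in attrs for attrs in context.values()) / len(context)
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p)
                                         + (1 - p) * math.log2(1 - p))

def weighted_inclusion(intent_a, intent_b, context):
    """Entropy-weighted degree to which concept A's intent (attribute
    set) is included in concept B's intent."""
    weights = {a: attribute_entropy(context, a) for a in intent_a}
    total = sum(weights.values())
    shared = sum(weights[a] for a in intent_a & intent_b)
    return shared / total if total else 0.0
```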

Semantic Clustering Model for Analytical Classification of Documents in Cloud Environment (클라우드 환경에서 문서의 유형 분류를 위한 시맨틱 클러스터링 모델)

  • Kim, Young Soo;Lee, Byoung Yup
    • The Journal of the Korea Contents Association / v.17 no.11 / pp.389-397 / 2017
  • Recently, semantic web documents are produced and added to repositories in cloud computing environments, which calls for an intelligent semantic agent for the analytical classification of documents and for information retrieval. Traditional information retrieval methods use keywords as queries and deliver the list of documents returned by the search. Users carry a heavy workload in examining the contents because such methods provide little semantic similarity information. To solve these problems, we suggest a semantic clustering model based on keyword frequency and concept matching, using Hadoop and NoSQL, to improve the classification accuracy of similarity. Implementing the suggested technique in a cloud computing environment offers the ability to classify and discover similar documents with improved classification accuracy. The suggested model is expected to be used in building semantic web retrieval systems that are more flexible in retrieving the proper documents.
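
Setting the Hadoop/NoSQL plumbing aside, the scoring idea the abstract names (keyword frequency plus concept matching) could be sketched as below; the 50/50 blend and the term-to-concept mapping are assumptions:

```python
from collections import Counter

def doc_similarity(doc_a, doc_b, concept_of, alpha=0.5):
    """Blend keyword-frequency cosine with concept-overlap Jaccard.
    doc_a, doc_b: token lists; concept_of: term -> concept label."""
    fa, fb = Counter(doc_a), Counter(doc_b)
    dot = sum(fa[t] * fb[t] for t in fa.keys() & fb.keys())
    norm = (sum(v * v for v in fa.values()) ** 0.5
            * sum(v * v for v in fb.values()) ** 0.5)
    keyword_sim = dot / norm if norm else 0.0
    ca = {concept_of.get(t, t) for t in doc_a}
    cb = {concept_of.get(t, t) for t in doc_b}
    concept_sim = len(ca & cb) / len(ca | cb) if ca | cb else 0.0
    return alpha * keyword_sim + (1 - alpha) * concept_sim
```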

Topic-based Multi-document Summarization Using Non-negative Matrix Factorization and K-means (비음수 행렬 분해와 K-means를 이용한 주제기반의 다중문서요약)

  • Park, Sun;Lee, Ju-Hong
    • Journal of KIISE: Software and Applications / v.35 no.4 / pp.255-264 / 2008
  • This paper proposes a novel method using K-means and non-negative matrix factorization (NMF) for topic-based multi-document summarization. NMF decomposes the weighted term-by-sentence matrix into two sparse non-negative matrices: a semantic feature matrix and a semantic variable matrix. The obtained semantic features are intuitively comprehensible. Weighted similarity between the topic and the semantic features can prevent meaningless sentences that are similar to the topic from being selected. K-means clustering removes noise from the sentences so that biased semantics of the documents are not reflected in the summaries. Besides, the coherence of document summaries can be enhanced by arranging the selected sentences in the order of their ranks. The experimental results show that the proposed method achieves better performance than other methods.
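
A minimal scikit-learn sketch of the pipeline the abstract describes (the paper's weighted topic similarity and rank-based ordering are simplified; treat this as an outline under stated assumptions, not the authors' method):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

def summarize(sentences, topic, n_topics=4, n_pick=3):
    """Assumes len(sentences) >= n_topics."""
    tfidf = TfidfVectorizer()
    A = tfidf.fit_transform(sentences + [topic])  # weighted term-by-sentence matrix
    X, q = A[:-1], A[-1].toarray().ravel()
    nmf = NMF(n_components=n_topics, init="nndsvda", random_state=0)
    W = nmf.fit_transform(X)   # semantic variable matrix (sentence x topic)
    H = nmf.components_        # semantic feature matrix (topic x term)
    # relevance of each sentence to the topic, routed through semantic features
    relevance = W @ (H @ q)
    clusters = KMeans(n_clusters=n_topics, n_init=10,
                      random_state=0).fit_predict(W)
    picked = sorted(np.argsort(relevance)[::-1][:n_pick])  # keep document order
    return [sentences[i] for i in picked], clusters
```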

Research on the Hybrid Paragraph Detection System Using Syntactic-Semantic Analysis (구문의미 분석을 활용한 복합 문단구분 시스템에 대한 연구)

  • Kang, Won Seog
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.106-116 / 2021
  • To increase system quality in subjective-question grading and document classification, paragraph detection is needed, but it is difficult because it requires semantic analysis. Much research on paragraph detection solves the problem using word-based clustering methods. However, word-based methods cannot exploit the order of and dependency relations between words. This paper suggests a paragraph detection system that uses the syntactic-semantic relations between words obtained from Korean syntactic-semantic analysis. The system is a hybrid of word-based, concept-based, and syntactic-semantic-tree-based detection. Experimental results show that it outperforms a word-based system. The system can be utilized in Korean subjective-question grading and document classification.
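
The Korean syntactic-semantic analyzer itself is not shown in the abstract, so the sketch below takes the three per-pair similarity functions as given and only illustrates the hybrid splitting idea (equal weights and the threshold are assumptions):

```python
def paragraph_boundaries(sentences, word_sim, concept_sim, tree_sim,
                         threshold=0.35):
    """Mark a boundary after sentence i when the averaged hybrid
    similarity of sentences i and i+1 drops below the threshold."""
    boundaries = []
    for i in range(len(sentences) - 1):
        a, b = sentences[i], sentences[i + 1]
        hybrid = (word_sim(a, b) + concept_sim(a, b) + tree_sim(a, b)) / 3.0
        if hybrid < threshold:
            boundaries.append(i)
    return boundaries
```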

Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li;Cheng Li;Honglie Zhang
    • Journal of Information Processing Systems / v.19 no.5 / pp.631-640 / 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order, and those optimized variants that do incorporate word order have weights that are hard to determine. In this paper, starting from the keyword-overlap similarity algorithm, a short English text similarity algorithm based on lexical chunk theory (LC-SETSA) is proposed, which introduces lexical chunk theory from cognitive psychology into short English text similarity calculation for the first time. Lexical chunks are used to segment short English texts, the segmentation results capture the semantic connotation and fixed word order of the lexical chunks, and the overlap similarity of the lexical chunks is then calculated accordingly. Finally, comparative experiments are carried out, and the results show that the proposed algorithm is feasible, stable, and effective to a large extent.
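
A toy rendering of the core step, assuming a chunker already exists (the chunker interface and the normalization by the longer chunk list are hypothetical):

```python
def chunk_overlap_similarity(text_a, text_b, chunker):
    """Overlap of lexical chunks between two short texts. Each chunk
    carries its internal word order, so no separate order weight is
    needed; chunker maps a text to a list of chunk strings."""
    chunks_a = set(chunker(text_a))
    chunks_b = set(chunker(text_b))
    denom = max(len(chunks_a), len(chunks_b))
    return len(chunks_a & chunks_b) / denom if denom else 0.0
```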

Approximate Top-k Labeled Subgraph Matching Scheme Based on Word Embedding (워드 임베딩 기반 근사 Top-k 레이블 서브그래프 매칭 기법)

  • Choi, Do-Jin;Oh, Young-Ho;Bok, Kyoung-Soo;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.22 no.8 / pp.33-43 / 2022
  • Labeled graphs are used to represent entities, their relationships, and their structures in real data such as knowledge graphs and protein interactions. With the rapid development of IT and the explosive increase in data, subgraph matching technology has become necessary to provide the information users are interested in. In this paper, we propose an approximate top-k labeled subgraph matching scheme that considers the semantic similarity of labels and differences in graph structure. The proposed scheme utilizes a learning model based on FastText to capture the semantic similarity of labels. In addition, a label similarity graph (LSG) is used for approximate subgraph matching by calculating similarity values between labels in advance. Through the LSG, we resolve the limitation of existing schemes in which subgraph expansion is possible only if labels match exactly. Structural similarity to a query graph is supported by performing searches up to 2 hops. Based on the similarity values, we provide k subgraph matching results. We conduct various performance evaluations to show the superiority of the proposed scheme.
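
A gensim-based sketch of precomputing a label similarity graph like the LSG (the toy corpus, threshold, and vector size are assumptions; a real system would train FastText on actual label text):

```python
from gensim.models import FastText

# Toy corpus of label sequences, purely for illustration.
label_corpus = [["protein", "enzyme", "gene"],
                ["person", "employee", "manager"]]
model = FastText(sentences=label_corpus, vector_size=50,
                 min_count=1, epochs=50)

def build_lsg(labels, model, threshold=0.5):
    """Precompute pairwise label similarities; keep only edges whose
    embedding similarity clears the threshold."""
    lsg = {}
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            sim = float(model.wv.similarity(a, b))
            if sim >= threshold:
                lsg.setdefault(a, {})[b] = sim
                lsg.setdefault(b, {})[a] = sim
    return lsg
```

Precomputing the graph is what lets matching expand through semantically close labels at query time instead of requiring exact label equality.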

Image segmentation preserving semantic object contours by classified region merging (분류된 영역 병합에 의한 객체 원형을 보존하는 영상 분할)

  • 박현상;나종범
    • Proceedings of the IEEK Conference / 1998.06a / pp.661-664 / 1998
  • Since region segmentation at high resolution contains most of the viable semantic object contours in an image, the bottom-up approach to image segmentation is appropriate for applications such as MPEG-4 that need to preserve semantic object contours. However, the conventional region merging methods that follow region segmentation perform poorly at keeping low-contrast semantic object contours. In this paper, we propose an image segmentation algorithm based on classified region merging. The algorithm pre-segments an image into a large number of small regions and classifies them into several classes having similar gradient characteristics. Then only regions in the same class are merged, according to boundary weakness or statistical similarity. The simulation results show that the proposed image segmentation preserves semantic object contours very well, even with a small number of regions.
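
Purely as an outline of the control flow (pre-segmented regions, gradient-based classes, merging only within a class by boundary weakness or statistical similarity), with every threshold hypothetical:

```python
import numpy as np

def classified_merge(regions, boundaries, n_classes=3,
                     weak_edge=4.0, mean_gap=8.0):
    """regions: id -> {'mean': float, 'grad': float} per-region stats;
    boundaries: (id_a, id_b) -> average boundary gradient strength."""
    grads = np.array([r["grad"] for r in regions.values()])
    # split regions into n_classes bins of similar gradient character
    cuts = np.quantile(grads, np.linspace(0, 1, n_classes + 1))[1:-1]
    cls = {rid: int(np.searchsorted(cuts, r["grad"]))
           for rid, r in regions.items()}
    merges = []
    for (a, b), strength in sorted(boundaries.items(), key=lambda kv: kv[1]):
        if cls[a] != cls[b]:
            continue  # merge only regions in the same gradient class
        weak = strength < weak_edge
        similar = abs(regions[a]["mean"] - regions[b]["mean"]) < mean_gap
        if weak or similar:
            merges.append((a, b))
    return merges
```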
