• Title/Summary/Keyword: Latent semantic analysis

Retrieval Model using Subject Classification Table, User Profile, and LSI (전공분류표, 사용자 프로파일, LSI를 이용한 검색 모델)

  • Woo Seon-Mi
    • The KIPS Transactions:PartD
    • /
    • v.12D no.5 s.101
    • /
    • pp.789-796
    • /
    • 2005
  • Because existing information retrieval systems, in particular library retrieval systems, rely on exact keyword matching against the user's query, they return massive result sets containing much irrelevant information, so the user must spend extra effort and time to extract the relevant items. This paper therefore proposes SULRM, a retrieval model that uses a Subject Classification Table, a user profile, and LSI (Latent Semantic Indexing) to provide more relevant results. SULRM applies a document filtering technique to classified data and a document ranking technique to non-classified data in the results of keyword-based retrieval: the filtering technique uses the Subject Classification Table, while the ranking technique uses the user profile and LSI. We performed experiments on the performance of the filtering technique, the user-profile updating method, and the document ranking technique using the retrieval results of our university's digital library system. When many documents are retrieved, the proposed techniques can provide the user with data filtered and ranked according to the user's subject area and preferences.
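
A minimal sketch of the LSI-based ranking step described in the abstract, assuming TF-IDF weighting, scikit-learn's TruncatedSVD as the LSI implementation, and a toy user profile built from previously preferred documents; the subject-classification filtering and profile-update rules of SULRM are not reproduced here.

```python
# Hypothetical sketch: rank documents by LSI similarity to a user-profile vector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["digital library retrieval systems", "latent semantic indexing for search",
        "subject classification of library records", "cooking recipes for beginners"]
preferred = ["semantic indexing and retrieval"]        # documents the user liked before

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                            # term-document matrix
lsi = TruncatedSVD(n_components=2, random_state=0)     # LSI via truncated SVD
X_lsi = lsi.fit_transform(X)                           # documents in latent space

profile = lsi.transform(vec.transform(preferred)).mean(axis=0, keepdims=True)
scores = cosine_similarity(profile, X_lsi).ravel()     # user-profile relevance
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {docs[i]}")
```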

Accelerated Learning of Latent Topic Models by Incremental EM Algorithm (점진적 EM 알고리즘에 의한 잠재토픽모델의 학습 속도 향상)

  • Chang, Jeong-Ho;Lee, Jong-Woo;Eom, Jae-Hong
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.12
    • /
    • pp.1045-1055
    • /
    • 2007
  • Latent topic models are statistical models that automatically capture salient patterns or correlations among the features underlying a data collection in a probabilistic way. They are gaining popularity as an effective tool for automatic semantic feature extraction from text corpora, multimedia data analysis including image data, and bioinformatics. Among the issues important for applying latent topic models to massive data sets is efficient learning of the model. This paper proposes an accelerated learning technique for the PLSA model, one of the popular latent topic models, using an incremental EM algorithm instead of the conventional EM algorithm. The incremental EM algorithm is characterized by a series of partial E-steps performed on subsets of the entire data collection, unlike the conventional EM algorithm, where one batch E-step is performed over the whole data set. By replacing a single batch E-M step with a series of partial E-steps and M-steps, the inference result for the previous data subset is reflected directly in the next inference step, which speeds up learning over the entire data set. The algorithm is also guaranteed to converge to a local maximum and can be implemented with only a slight modification of the existing algorithm based on conventional EM. We present the basic application of the incremental EM algorithm to the learning of PLSA and empirically evaluate the acceleration with several possible data partitioning methods. Experimental results on a real-world news data set show that the proposed approach achieves a meaningful improvement in the convergence rate of latent topic model learning. We additionally present a result suggesting a possible synergistic effect from combining the incremental EM algorithm with parallel computing.
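
The contrast between one batch E-step and a series of partial E-steps can be sketched as below. This is a minimal NumPy illustration in the spirit of the abstract (per-block sufficient statistics are swapped into the global statistics, Neal-Hinton style), not the authors' implementation; the block partitioning, initialization, and toy data are assumptions.

```python
import numpy as np

def incremental_em_plsa(counts, n_topics, n_blocks=4, n_sweeps=20, seed=0):
    """counts: (n_docs, n_words) term-count matrix. Returns P(w|z), P(z|d)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics));  p_z_d /= p_z_d.sum(1, keepdims=True)

    blocks = np.array_split(np.arange(n_docs), n_blocks)
    block_stats = [np.zeros((n_topics, n_words)) for _ in blocks]  # per-block stats
    total_stats = np.zeros((n_topics, n_words))

    for _ in range(n_sweeps):
        for b, docs in enumerate(blocks):
            # Partial E-step: posterior P(z|d,w) for this block only.
            post = p_w_z[:, None, :] * p_z_d[docs].T[:, :, None]   # (K, |B|, W)
            post /= post.sum(0, keepdims=True) + 1e-12
            weighted = post * counts[docs][None, :, :]             # n(d,w) * P(z|d,w)

            # M-step for the block's P(z|d).
            new_pzd = weighted.sum(2).T
            p_z_d[docs] = new_pzd / (new_pzd.sum(1, keepdims=True) + 1e-12)

            # Swap this block's contribution into the global statistics and
            # refresh P(w|z) immediately instead of waiting for a full pass.
            new_stat = weighted.sum(1)
            total_stats += new_stat - block_stats[b]
            block_stats[b] = new_stat
            p_w_z = total_stats / (total_stats.sum(1, keepdims=True) + 1e-12)
    return p_w_z, p_z_d

# n_blocks=1 reduces to conventional batch EM; larger n_blocks refreshes the
# parameters more often per pass, which is where the speedup comes from.
p_w_z, p_z_d = incremental_em_plsa(np.random.default_rng(1).poisson(1.0, (40, 60)), n_topics=5)
```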

Learning Similarity with Probabilistic Latent Semantic Analysis for Image Retrieval

  • Li, Xiong;Lv, Qi;Huang, Wenting
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.4
    • /
    • pp.1424-1440
    • /
    • 2015
  • It is a challenging problem to find the intended images among a large number of candidates. Content-based image retrieval (CBIR) is the most promising way to tackle this problem, and its most important component is measuring the similarity of images so as to cover variance in shape, color, pose, illumination, etc. While previous works have made significant progress, their ability to adapt to a dataset has not been fully explored. In this paper, we propose a similarity learning method based on a probabilistic generative model, namely probabilistic latent semantic analysis (PLSA). It first derives a Fisher kernel, a function over the parameters and variables, from PLSA. The parameters are then determined by simultaneously maximizing the log-likelihood function of PLSA and the retrieval performance over the training dataset. The main advantages of this work are twofold: (1) the similarity measure derived from PLSA fully exploits the data distribution and Bayesian inference; (2) the model parameters are learned by maximizing the fit of the model to the data and the retrieval performance simultaneously. The proposed method (PLSA-FK) is empirically evaluated on three datasets, and the results show promising performance.
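
A rough sketch of the underlying idea: represent each image (as a bag of visual words) by its topic posterior and compare those low-dimensional representations, weighted by an empirical topic prior. This mirrors only the mixing-proportion part of a Fisher kernel and omits the paper's joint optimization of likelihood and retrieval performance; scikit-learn's LatentDirichletAllocation stands in for PLSA, and the visual-word counts are toy assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(6, 50))         # toy visual-word counts per image

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)               # per-image topic proportions
theta /= theta.sum(axis=1, keepdims=True)

p_z = theta.mean(axis=0)                        # empirical topic prior
sim = (theta / p_z) @ theta.T                   # ~ sum_z P(z|d1) P(z|d2) / P(z)
print(np.round(sim, 3))                         # pairwise image similarities
```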

Investigating the Value of Information in Mobile Commerce: A Text Mining Approach

  • Wang, Ying;Aguirre-Urreta, Miguel;Song, Jaeki
    • Asia pacific journal of information systems
    • /
    • v.26 no.4
    • /
    • pp.577-592
    • /
    • 2016
  • The proliferation of mobile applications and the unique characteristics of the mobile environment have attracted significant research interest in understanding customers' purchasing behavior in mobile commerce. In this study, we extend customer value theory by combining predictors of product performance with the customer value framework to investigate how in-store information creates value for customers and influences mobile application downloads. Using a data set collected from the Google Application Store, we find that customers value both textual and non-textual information when making download decisions. We apply latent semantic analysis to customer reviews and product descriptions in the mobile application store to identify the embedded valuable information. The results show that, for mobile applications, price, the number of raters, and helpful information in customer reviews and product descriptions significantly affect the number of downloads, whereas the average rating has no significant effect in the mobile environment. This study contributes to the literature by revealing the role of in-store information in mobile application downloads and by giving application developers useful guidance on increasing downloads through better in-store information management.
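
A minimal sketch of how review and description text could be turned into latent semantic features and combined with non-text predictors in a download model; the toy data, the use of scikit-learn's TruncatedSVD for LSA, and the log-downloads regression are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

reviews = ["very helpful app, easy to use", "crashes often but support is helpful",
           "useful offline maps", "too many ads, not useful"]
price = np.array([0.0, 1.99, 0.0, 0.99])
n_raters = np.array([120, 40, 300, 15])
downloads = np.array([5000, 800, 12000, 300])           # toy target

# LSA on the review text: TF-IDF followed by truncated SVD.
tfidf = TfidfVectorizer().fit_transform(reviews)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

X = np.column_stack([price, n_raters, lsa])             # non-text + text features
model = LinearRegression().fit(X, np.log1p(downloads))  # model of log downloads
print(model.coef_)
```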

An Intelligent Marking System based on Semantic Kernel and Korean WordNet (의미커널과 한글 워드넷에 기반한 지능형 채점 시스템)

  • Cho Woojin;Oh Jungseok;Lee Jaeyoung;Kim Yu-Seop
    • The KIPS Transactions:PartA
    • /
    • v.12A no.6 s.96
    • /
    • pp.539-546
    • /
    • 2005
  • Recently, as the number of Internet users has grown explosively, e-learning has spread widely, along with remote evaluation of intellectual ability. However, only multiple-choice and objective tests have been used in e-learning because of the difficulty of natural language processing. For rapid and fair intelligent marking of short-essay answer papers, this work utilizes heterogeneous linguistic knowledge. First, we construct a semantic kernel from an untagged corpus. Then the answer papers of students and instructors are transformed into vector form. Finally, we evaluate the similarity between the papers using the semantic kernel and decide whether an answer is correct based on the similarity value. For the construction of the semantic kernel, we used latent semantic analysis based on the vector space model, and we further attempted to reduce the information-shortage problem by integrating Korean WordNet. To build the semantic kernel, we collected 38,727 newspaper articles and extracted 75,175 index terms. In the experiments, a correlation coefficient of about 0.894 was obtained between the marking results of this system and those of the human instructors.
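
A minimal sketch of the marking step: both the instructor's model answer and a student's answer are projected through an LSA-based semantic kernel and compared with cosine similarity against a threshold. The tiny background corpus, the threshold value, and the omission of the Korean WordNet integration are all simplifications.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# A tiny background corpus stands in for the 38,727 newspaper articles
# used in the paper to build the semantic kernel.
corpus = ["the cell membrane controls what enters the cell",
          "osmosis moves water across a membrane",
          "photosynthesis converts light into chemical energy",
          "enzymes speed up chemical reactions"]

vec = TfidfVectorizer()
X = vec.fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit(X)   # semantic kernel via LSA

def project(text):
    return lsa.transform(vec.transform([text]))

model_answer = "water moves across the cell membrane by osmosis"
student_answer = "osmosis is the movement of water through a membrane"

sim = cosine_similarity(project(model_answer), project(student_answer))[0, 0]
print("similarity:", round(sim, 3), "-> correct" if sim >= 0.8 else "-> incorrect")
```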

Comparison and Analysis of Subject Classification for Domestic Research Data (국내 학술논문 주제 분류 알고리즘 비교 및 분석)

  • Choi, Wonjun;Sul, Jaewook;Jeong, Heeseok;Yoon, Hwamook
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.8
    • /
    • pp.178-186
    • /
    • 2018
  • Subject classification at the level of individual articles is essential for delivering scholarly information services. To date, however, classification has mostly been journal-based, and there are few article-level subject classification services. For domestic academic papers, article-level subject classification is especially valuable because it covers a larger portion of the service and makes it possible to serve results within a chosen subject range. Classifying themes by field, however, requires experts from many fields, and various verification methods are needed to increase accuracy. In this paper, we classify topics using unsupervised learning algorithms, which must find the correct answer without labeled data, and we compare the results of the subject classification algorithms using coherence and perplexity. The unsupervised learning algorithms are the well-known Hierarchical Dirichlet Process (HDP), Latent Dirichlet Allocation (LDA), and Latent Semantic Indexing (LSI).
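
A minimal sketch of comparing the three unsupervised models with coherence (and, for LDA, perplexity); gensim is assumed as the implementation, the tokenized corpus and topic counts are toy stand-ins, and perplexity is reported only for LDA since LSI is not probabilistic.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel, LdaModel, LsiModel
from gensim.models.coherencemodel import CoherenceModel

# Toy tokenized abstracts standing in for the domestic article corpus.
texts = [["topic", "classification", "of", "academic", "papers"],
         ["latent", "semantic", "indexing", "for", "retrieval"],
         ["dirichlet", "process", "for", "topic", "modeling"],
         ["classification", "of", "papers", "by", "subject"]]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

models = {
    "HDP": HdpModel(corpus=corpus, id2word=dictionary),
    "LDA": LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0),
    "LSI": LsiModel(corpus=corpus, id2word=dictionary, num_topics=2),
}

for name, model in models.items():
    coh = CoherenceModel(model=model, texts=texts,
                         dictionary=dictionary, coherence="c_v").get_coherence()
    print(name, "coherence:", round(coh, 3))

# Perplexity applies only to the probabilistic LDA model in this sketch.
print("LDA log perplexity:", models["LDA"].log_perplexity(corpus))
```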

Generic Summarization Using Generic Importance of Semantic Features (의미특징의 포괄적 중요도를 이용한 포괄적 문서 요약)

  • Park, Sun;Lee, Jong-Hoon
    • Journal of Advanced Navigation Technology
    • /
    • v.12 no.5
    • /
    • pp.502-508
    • /
    • 2008
  • With the increased use of the Internet and the tremendous amount of data it transfers, document summarization has become more necessary. We propose a new method that uses the Non-negative Semantic Variable Matrix (NSVM) and the generic importance of semantic features obtained by Non-negative Matrix Factorization (NMF) to extract sentences for automatic generic summarization. The proposed method uses non-negativity constraints, which are closer to the human cognition process. As a result, it selects more meaningful sentences for summarization than unsupervised methods based on Latent Semantic Analysis (LSA) or clustering. The experimental results show that the proposed method achieves better performance than the other methods.
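
A minimal sketch of weighting sentences by the generic importance of NMF semantic features; the importance definition used here (each feature weighted by its total contribution across sentences) is one plausible reading, not necessarily the exact NSVM formulation of the paper, and the sentences are toy data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

sentences = ["The storm damaged power lines across the region.",
             "Crews worked overnight to restore electricity.",
             "A local bakery introduced a new sourdough recipe.",
             "Officials said power would return within two days."]

A = TfidfVectorizer().fit_transform(sentences)         # sentence-term matrix
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(A)                               # sentence loadings on semantic features

feature_importance = W.sum(axis=0)                     # generic importance of each feature
scores = W @ feature_importance                        # sentence score = weighted loadings
top = np.argsort(-scores)[:2]                          # keep the two highest-scoring sentences
for i in sorted(top):
    print(sentences[i])
```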


Analysis of Correlation Between Cohesion and Item Difficulty in English Reading Section of CSAT (수학능력시험 영어 읽기 지문의 응집성과 문항 난이도 간의 상관관계 분석)

  • Hwang, Leesu;Lee, Je-Young
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.5
    • /
    • pp.344-350
    • /
    • 2020
  • The purpose of this study was to investigate the cohesion-related text factors that affected the correct-answer rate of the English reading section of the Korean CSAT. To this end, reading passages from the 2014-2018 CSAT and CSAT mock tests were collected and analyzed in terms of cohesion, and a correlation analysis was then conducted between the cohesion measures and item difficulty. The conclusions were as follows. First, the correct-answer rate tended to increase when specific arguments were repeated across sentences, reducing the linguistic burden on learners. Second, there was no statistically significant correlation between the eight sub-areas of latent semantic analysis and the correct-answer rate. Finally, the pedagogical implications of this study and suggestions for further research are discussed.
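
A minimal sketch of the correlation step, assuming per-item cohesion indices (e.g., an argument-overlap measure) and correct-answer rates are already available as arrays; the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-item values: an argument-overlap cohesion index and
# the observed correct-answer rate for each reading item.
argument_overlap = np.array([0.12, 0.30, 0.25, 0.08, 0.41, 0.19])
correct_rate     = np.array([0.38, 0.62, 0.55, 0.31, 0.70, 0.47])

r, p = pearsonr(argument_overlap, correct_rate)
print(f"r = {r:.2f}, p = {p:.3f}")
```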

Similar Patent Search Service System using Latent Dirichlet Allocation (잠재 의미 분석을 적용한 유사 특허 검색 서비스 시스템)

  • Lim, HyunKeun;Kim, Jaeyoon;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.8
    • /
    • pp.1049-1054
    • /
    • 2018
  • Keyword searching was used in the past as a method of finding similar patents, and automated classification by machine learning has been used more recently. Keyword searching analyzes data that has been formalized through data refinement. While its accuracy is high for short texts, it cannot capture the meaning contained in long texts such as documents composed of many sentences. At the semantic level, automatic classification is used to classify sentences composed of several words through unstructured data analysis. There have been attempts to find similar documents by combining the two methods, but they run into algorithmic problems because the two analyses handle unstructured and structured data in different ways. In this paper, we study a method that extracts the keywords implied in a document and uses LDA (Latent Dirichlet Allocation) to classify documents efficiently without human intervention and to find similar patents.
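
A minimal sketch of scoring patent similarity from LDA topic distributions; scikit-learn's LatentDirichletAllocation is assumed as the implementation, and the documents are toy stand-ins for patent texts.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

patents = ["battery electrode coating method for lithium cells",
           "lithium battery cathode material and coating process",
           "wireless charging circuit for mobile devices",
           "image sensor pixel layout for low light"]

counts = CountVectorizer().fit_transform(patents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)                     # per-patent topic distribution

query = 0                                             # find patents similar to patent 0
sims = cosine_similarity(theta[query:query + 1], theta).ravel()
for i in np.argsort(-sims):
    print(f"{sims[i]:.3f}  {patents[i]}")
```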

Experiments using query expansion in LSI (LSI에서 질의 확장을 이용한 실험)

  • 안성수;김동주;이기영;김한우
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1999.10b
    • /
    • pp.151-153
    • /
    • 1999
  • Because it is difficult for a user to express and satisfy every information need with a single query, research on query expansion is ongoing. In this paper, we propose a query-expansion method for LSI (Latent Semantic Indexing) that computes the similarity between the user's query and the terms in the semantic space and selects the top-ranked terms, as well as a method using LCA (Local Context Analysis). We also analyze the results of applying three weighting schemes to the document collection and discuss the problems of query expansion and directions for future work.
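
A minimal sketch of the LSI-based expansion described in the abstract: terms are embedded in the LSI space, the query is projected into that space, and the most similar terms are appended to the query. The LCA variant and the three weighting schemes are not reproduced, and scikit-learn's TruncatedSVD is an assumed implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["latent semantic indexing for text retrieval",
        "query expansion improves recall in retrieval",
        "singular value decomposition of the term document matrix",
        "user queries are often short and ambiguous"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                             # documents x terms
lsi = TruncatedSVD(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
term_vecs = lsi.components_.T                           # term coordinates in LSI space
query_vec = lsi.transform(vec.transform(["retrieval"]))

sims = cosine_similarity(query_vec, term_vecs).ravel()  # query-term similarity in LSI space
expansion = [terms[i] for i in np.argsort(-sims)[:4] if terms[i] != "retrieval"][:3]
print("expanded query:", ["retrieval"] + expansion)
```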
