• Title/Summary/Keyword: Latent Semantic Analysis


Reputation Analysis of Document Using Probabilistic Latent Semantic Analysis Based on Weighting Distinctions (가중치 기반 PLSA를 이용한 문서 평가 분석)

  • Cho, Shi-Won;Lee, Dong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.3
    • /
    • pp.632-638
    • /
    • 2009
  • Probabilistic Latent Semantic Analysis has many applications in information retrieval and filtering, natural language processing, machine learning from text, and related areas. In this paper, we propose an algorithm that uses a weighted Probabilistic Latent Semantic Analysis model to find contextual phrases and opinions in documents. Traditional keyword search is unable to find the semantic relations of phrases; overcoming this obstacle requires techniques for automatically classifying such relations. Through experiments, we show that the proposed algorithm works well at discovering semantic relations of phrases and presents those relations in the vector-space model. The proposed algorithm can support a variety of analyses, such as document classification, online reputation analysis, and collaborative recommendation.
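
The paper's weighting scheme is not reproduced here, but the plain PLSA core it builds on can be sketched as a small EM loop. Everything below (the toy count matrix, dimensions, iteration count) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def plsa(X, n_topics=2, n_iter=50):
    """Minimal EM for plain PLSA on a term-by-document count matrix."""
    n_terms, n_docs = X.shape
    p_w_z = rng.random((n_terms, n_topics)); p_w_z /= p_w_z.sum(0)   # P(w|z)
    p_z_d = rng.random((n_topics, n_docs)); p_z_d /= p_z_d.sum(0)    # P(z|d)
    for _ in range(n_iter):
        # E-step: P(z|w,d) proportional to P(w|z) * P(z|d)
        joint = p_w_z[:, :, None] * p_z_d[None, :, :]        # (w, z, d)
        joint /= joint.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate parameters from expected counts
        nwz = (X[:, None, :] * joint).sum(2)
        nzd = (X[:, None, :] * joint).sum(0)
        p_w_z = nwz / (nwz.sum(0) + 1e-12)
        p_z_d = nzd / (nzd.sum(0) + 1e-12)
    return p_w_z, p_z_d

# Toy corpus: docs 0-1 share one vocabulary block, docs 2-3 another.
X = np.array([[3, 2, 0, 0],
              [2, 3, 0, 0],
              [0, 0, 3, 2],
              [0, 0, 2, 3]], dtype=float)
p_w_z, p_z_d = plsa(X)
```

On such block-structured data the learned P(z|d) columns group documents 0-1 under one topic and 2-3 under the other; the paper's contribution would replace the raw counts in X with weighted values.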

Trend Analysis of School Health Research using Latent Semantic Analysis (잠재의미분석방법을 통한 학교보건 연구동향 분석)

  • Shin, Seon-Hi;Park, Youn-Ju
    • Journal of the Korean Society of School Health
    • /
    • v.33 no.3
    • /
    • pp.184-193
    • /
    • 2020
  • Purpose: This study was designed to investigate trends in school health research in Korea using probabilistic latent semantic analysis. The study longitudinally analyzed the abstracts of the papers published in 「The Journal of the Korean Society of School Health」 over the most recent 17 years, from 2004 to August 2020. By classifying all the papers according to the topics identified through the analysis, it was possible to see how the distribution of topics has changed over the years. Based on the results, implications for school health research and educational uses of latent semantic analysis were suggested. Methods: This study investigated the research trends by longitudinally analyzing journal abstracts using Latent Dirichlet Allocation (LDA), a type of LSA. The abstracts in 「The Journal of the Korean Society of School Health」 published from 2004 to August 2020 were used for the analysis. Results: A total of 34 latent topics were identified by LDA. Six topics, namely 「Adolescent depression and suicide prevention」, 「Students' knowledge, attitudes, & behaviors」, 「Effective self-esteem program through depression interventions」, 「Factors of students' stress」, 「Intervention program to prevent adolescent risky behaviors」, and 「Sex education curriculum, and teacher」, were most frequently covered by the journal. Each of them was dealt with in at least 20 papers. The topics related to 「Intervention program to prevent adolescent risky behaviors」, 「Effective self-esteem program through depression interventions」, and 「Preventive vaccination and factors of effective vaccination」 appeared repeatedly over the most recent 5 years. Conclusion: This study introduced an AI-powered analysis method that enables data-centered, objective text analysis without human intervention. Based on the results, implications for school health research were presented, and various uses of latent semantic analysis (LSA) in educational research were suggested.
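
The trend step described here (classify each paper by its dominant LDA topic, then tabulate topics per year) can be sketched as follows. The doc-topic matrix and years below are invented; in the study they would come from a fitted LDA model and the journal metadata:

```python
import numpy as np

# Hypothetical output of an LDA fit: one row per paper, one column per
# topic probability (3 topics here); `years` gives each paper's year.
doc_topic = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.6, 0.3, 0.1],
                      [0.2, 0.1, 0.7],
                      [0.1, 0.7, 0.2]])
years = np.array([2004, 2004, 2010, 2010, 2020])

dominant = doc_topic.argmax(axis=1)     # classify each paper by top topic
trend = {y: np.bincount(dominant[years == y], minlength=3)
         for y in np.unique(years)}     # topic counts per year
```

Plotting `trend` over years is then enough to see which topics rise or fade.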

Comparing the Use of Semantic Relations between Tags Versus Latent Semantic Analysis for Speech Summarization (스피치 요약을 위한 태그의미분석과 잠재의미분석간의 비교 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.47 no.3
    • /
    • pp.343-361
    • /
    • 2013
  • We proposed and evaluated a tag semantic analysis method in which original tags are expanded and the semantic relations between original or expanded tags are used to extract key sentences from lecture speech transcripts. To do so, we first investigated how useful Flickr tag clusters and WordNet synonyms are for expanding tags and for detecting the semantic relations between tags. Then, to evaluate the proposed method, we compared it with a latent semantic analysis (LSA) method. As a result, we found that Flickr tag clusters are more effective than WordNet synonyms and that the mean F measure of the tag semantic analysis method (0.27) is higher than that of the LSA method (0.22).
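
The LSA baseline used for comparison is typically the classic SVD-based extraction (Gong & Liu style): build a term-by-sentence matrix, decompose it, and take the strongest sentence per top singular vector. A minimal sketch with an invented toy matrix:

```python
import numpy as np

# Toy term-by-sentence count matrix (rows: terms, cols: sentences);
# sentences 0 and 2 share one vocabulary block, sentences 1 and 3 another.
A = np.array([[3, 0, 1, 0],
              [1, 0, 2, 0],
              [0, 1, 0, 1],
              [0, 2, 0, 1]], dtype=float)

# SVD; for each of the top singular vectors, pick the sentence whose
# weight in that vector is largest.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
key_sentences = [int(np.abs(Vt[k]).argmax()) for k in range(2)]
```

Each selected sentence represents one dominant latent "concept" of the transcript.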

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2633-2648
    • /
    • 2015
  • The problems of visual word synonymy and ambiguity always exist in conventional bag of visual words (BoVW) model-based object categorization methods. Besides, noisy visual words, so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag of visual words method based on PLSA and a chi-square model for object categorization is proposed. First, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Second, the KL divergence is adopted to measure the semantic distance between visual words, yielding semantically related homoionyms. Then, an adaptive soft-assignment strategy is combined to realize the soft mapping between SIFT features and the homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms. Moreover, an SVM (Support Vector Machine) is applied to accomplish object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words can be overcome effectively, and that both the visual semantic resolution and the object classification performance are substantially boosted compared with traditional methods.
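
The KL-divergence step (measuring semantic distance between visual words via their topic distributions) is easy to sketch. The P(topic | word) rows below are invented; in the paper they come from the fitted PLSA model:

```python
import numpy as np

# Hypothetical P(topic | word) rows: words 0 and 1 have similar topic
# profiles (candidate "homoionyms"); word 2 differs.
p_z_w = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8]])

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

d01 = sym_kl(p_z_w[0], p_z_w[1])   # small: semantically related
d02 = sym_kl(p_z_w[0], p_z_w[2])   # large: semantically distant
```

Word pairs with small divergence would then be grouped for the soft-assignment step.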

Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

  • Al-Sabahi, Kamal;Zuping, Zhang;Kang, Yang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.1
    • /
    • pp.254-276
    • /
    • 2019
  • Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to his or her query. To tackle this issue, researchers are paying much attention to document summarization. The key point in any successful document summarizer is a good document representation. Traditional approaches based on word overlap mostly fail to produce that kind of representation. Word embedding has shown good performance, allowing words to match on a semantic level. Naively concatenating word embeddings makes common words dominant, which in turn diminishes the representation quality. In this paper, we employ word embeddings to improve the weighting schemes used to calculate the Latent Semantic Analysis input matrix. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. They are modified versions of the augment weight and the entropy frequency that combine the strengths of traditional weighting schemes and word embedding. The proposed approach is evaluated on three English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization. Experimental results on the three datasets show that the proposed model achieved competitive performance compared to the state-of-the-art, leading to the conclusion that it provides a better document representation and, as a result, a better document summary.
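
The paper's embedding-based schemes modify traditional weightings of the LSA input matrix; the classical log-entropy baseline they start from looks like this (count matrix invented for illustration):

```python
import numpy as np

# Traditional log-entropy weighting for an LSA input matrix.
counts = np.array([[4, 0, 1],
                   [1, 1, 1],
                   [0, 3, 0]], dtype=float)   # terms x documents

p = counts / counts.sum(axis=1, keepdims=True)
safe = np.where(p > 0, p, 1.0)                # avoid log(0)
entropy = -(p * np.log(safe)).sum(axis=1) / np.log(counts.shape[1])
weights = 1.0 - entropy                       # evenly spread terms get ~0
A = np.log1p(counts) * weights[:, None]       # weighted matrix fed to SVD
```

A term that appears uniformly across documents (row 1) gets weight near zero, while a concentrated term (row 2) keeps full weight; the paper injects embedding similarity into weights of this kind.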

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.749-758
    • /
    • 2004
  • In this paper, we propose a new method that utilizes only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These two techniques can represent complex semantic structures in given contexts, such as text passages. We construct linguistic semantic knowledge using the two techniques and use that knowledge for target word selection in English-Korean machine translation. For target word selection, we utilize grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection and estimate the distance between instances based on these models. In experiments, we use TREC AP news data to construct the latent semantic space and a Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, using correlation calculations, we show how the accuracy relates to two important factors: the dimensionality of the latent space and the k value of k-NN learning.
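
The k-NN step over latent-space instances can be sketched as below. The vectors and labels are invented; in the paper each training instance would be a context represented in the LSA/PLSA space, labeled with the correct Korean target word:

```python
import numpy as np

# Hypothetical latent-space vectors for sense-labeled context instances;
# labels 0/1 stand for two candidate Korean translations.
train = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
labels = np.array([0, 0, 1, 1])

def knn_select(x, train, labels, k=3):
    """Pick the target word by majority vote among the k nearest contexts."""
    d = np.linalg.norm(train - x, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

choice = knn_select(np.array([0.95, 0.15]), train, labels)
```

The distance metric is where the latent models enter: instances close in the semantic space vote for the same translation even when they share no surface words.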

Feature Extraction of Concepts by Independent Component Analysis

  • Chagnaa, Altangerel;Ock, Cheol-Young;Lee, Chang-Beom;Jaimai, Purev
    • Journal of Information Processing Systems
    • /
    • v.3 no.1
    • /
    • pp.33-37
    • /
    • 2007
  • Semantic clustering is important to various fields in the modern information society. In this work we applied the Independent Component Analysis method to the extraction of the features of latent concepts. We used verb and object noun information and formulated a concept as a linear combination of verbs. The proposed method is shown to be suitable for our framework, and it performs better than hierarchical clustering in latent semantic space at finding hidden information in the data.
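
Independent Component Analysis itself can be illustrated with a compact FastICA sketch on synthetic signals (all data and dimensions invented; the paper applies ICA to verb/object co-occurrence data rather than to signals like these):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent non-Gaussian sources, mixed linearly; ICA should recover
# components aligned with the originals (up to sign, order, and scale).
n = 2000
S = np.c_[rng.laplace(size=n), rng.uniform(-1, 1, n)]
X = S @ np.array([[1.0, 0.5], [0.3, 1.0]])      # observed mixtures

# Whitening
Xc = X - X.mean(0)
d, E = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ E / np.sqrt(d)

# FastICA with deflation and the tanh nonlinearity
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        wx = Z @ w
        g, gp = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w = (Z * g[:, None]).mean(0) - gp.mean() * w
        w -= W[:i].T @ (W[:i] @ w)              # decorrelate from found rows
        w /= np.linalg.norm(w)
    W[i] = w

components = Z @ W.T                            # estimated sources
```

In the paper's setting the "sources" are latent concepts and the mixing weights over verbs are the extracted features.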

A Comparison between Factor Structure and Semantic Representation of Personality Test Items Using Latent Semantic Analysis (잠재의미분석을 활용한 성격검사문항의 의미표상과 요인구조의 비교)

  • Park, Sungjoon;Park, Heeyoung;Kim, Cheongtag
    • Korean Journal of Cognitive Science
    • /
    • v.30 no.3
    • /
    • pp.133-156
    • /
    • 2019
  • To investigate how personality test items are understood by participants, their semantic representations were explored using Latent Semantic Analysis. In this thesis, a Semantic Similarity Matrix is proposed, which contains the cosine similarities between the semantic representations of test items and personality traits. The matrix was compared to the traditional factor loading matrix. In a preliminary study, a semantic space was constructed from passages describing the five traits, collected from 154 undergraduate participants. In Study 1, a positive correlation was observed between the factor loading matrix of the Korean shortened BFI and its semantic similarity matrix. In Study 2, a short personality test was constructed from the semantic similarity matrix, and its factor loading matrix was likewise observed to be positively correlated with the semantic similarity matrix. In conclusion, the results imply that the factor structure of a personality test can be inferred from the semantic similarity between its items and factors.
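
The proposed semantic similarity matrix is just a cosine-similarity table between item vectors and trait vectors in the LSA space. A minimal sketch with invented vectors:

```python
import numpy as np

# Invented LSA vectors: one row per test item, one row per trait description.
items = np.array([[0.9, 0.1], [0.2, 0.8]])
traits = np.array([[1.0, 0.0], [0.0, 1.0]])

def cos_matrix(A, B):
    """Cosine similarity between every row of A and every row of B."""
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    return An @ Bn.T

S = cos_matrix(items, traits)   # item-by-trait semantic similarity matrix
```

Each row of S plays the role of an item's "loading" pattern, which the thesis then correlates with the factor loading matrix.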

Semantic-based Genetic Algorithm for Feature Selection (의미 기반 유전 알고리즘을 사용한 특징 선택)

  • Kim, Jung-Ho;In, Joo-Ho;Chae, Soo-Hoan
    • Journal of Internet Computing and Services
    • /
    • v.13 no.4
    • /
    • pp.1-10
    • /
    • 2012
  • In this paper, an optimal feature selection method that considers the semantics of features, a preprocessing step for document classification, is proposed. Feature selection, which consists of removing redundant features and selecting essential ones, is a very important part of classification. LSA (Latent Semantic Analysis) is adopted to take the meaning of features into account. However, because basic LSA is not specialized for feature selection, a supervised LSA, which is better suited to classification problems, is used. We also apply a GA (Genetic Algorithm) to the features obtained from supervised LSA in order to select a better feature subset. Finally, we project documents onto the newly selected feature subset and classify them using a specific classifier, an SVM (Support Vector Machine). High classification performance and efficiency are expected from selecting an optimal feature subset with the proposed hybrid method of supervised LSA and GA. Its efficiency is demonstrated through experiments on internet news classification using few features.
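
The GA part (bitmask chromosomes over features, selection, crossover, mutation) can be sketched as below. The fitness function here is an invented stand-in; in the paper that role is played by classifier accuracy (SVM) on supervised-LSA features:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 10
informative = {0, 3, 7}   # pretend these are the truly useful features

def fitness(mask):
    # Stand-in fitness: reward informative features, penalize subset size.
    return sum(mask[i] for i in informative) - 0.1 * mask.sum()

pop = rng.integers(0, 2, (20, n_features))          # bitmask chromosomes
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # elitist selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = int(rng.integers(1, n_features))      # one-point crossover
        child = np.r_[a[:cut], b[cut:]]
        flip = rng.random(n_features) < 0.05        # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, *children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Because the top half survives unchanged, the best subset found never degrades; the GA converges toward masks that keep the informative features and drop the rest.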

Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • Chang, Jeong-Ho;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.595-604
    • /
    • 2004
  • Automatic analysis of concepts or semantic relations from text documents enables not only efficient acquisition of relevant information but also comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, where latent topics are automatically extracted from document sets and similarity between documents is measured by semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that are related to the same theme or co-occur frequently within a document. In a network representing a multiple-cause model, each topic is identified by a group of words having high connection weights from a latent node. In order to facilitate learning and inference in multiple-cause models, some approximation methods are required, and we utilize an approximation by Helmholtz machines. In an experiment on the TDT-2 data set, we extract sets of meaningful words, where each set contains theme-specific terms. Using semantic kernels constructed from latent topics extracted by multiple cause models, we also achieve significant improvements over the basic vector space model in terms of retrieval effectiveness.
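
A semantic kernel of the kind described here compares documents after projecting their bag-of-words vectors into topic space. The word-by-topic matrix below is invented; in the paper it would come from the trained multiple cause model:

```python
import numpy as np

# Hypothetical word-by-topic matrix (4 words, 2 latent topics).
P = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

def kernel_sim(d1, d2, P):
    """Similarity of two documents under the semantic kernel K = P P^T."""
    t1, t2 = P.T @ d1, P.T @ d2   # project into topic space
    return float(t1 @ t2 / (np.linalg.norm(t1) * np.linalg.norm(t2)))

d_a = np.array([2.0, 1.0, 0.0, 0.0])   # doc about topic 0
d_b = np.array([0.0, 3.0, 0.0, 1.0])   # mostly topic 0, one topic-1 word
d_c = np.array([0.0, 0.0, 1.0, 2.0])   # doc about topic 1

sim_ab = kernel_sim(d_a, d_b, P)
sim_ac = kernel_sim(d_a, d_c, P)
```

Note that d_a and d_b share no words, yet the kernel scores them as highly similar because their words load on the same topic; a plain vector-space cosine would score them much lower.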