• Title/Summary/Keyword: Latent Semantic Analysis (LSA)


Semantic-based Genetic Algorithm for Feature Selection (의미 기반 유전 알고리즘을 사용한 특징 선택)

  • Kim, Jung-Ho; In, Joo-Ho; Chae, Soo-Hoan
    • Journal of Internet Computing and Services / v.13 no.4 / pp.1-10 / 2012
  • In this paper, an optimal feature selection method that considers the semantics of features is proposed as a preprocessing step for document classification. Feature selection, which consists of removing redundant features and selecting essential ones, is a very important part of classification. LSA (Latent Semantic Analysis) is adopted to take the meaning of the features into account. However, because basic LSA is not specialized for feature selection, a supervised LSA, which is better suited to classification problems, is used. We also apply a GA (Genetic Algorithm) to the features obtained from the supervised LSA to select a better feature subset. Finally, we project documents onto the selected feature subset and classify them using a specific classifier, an SVM (Support Vector Machine). Selecting an optimal feature subset with the proposed hybrid method of supervised LSA and GA is expected to yield high classification performance and efficiency. Its efficiency is demonstrated through experiments on internet news classification with a small number of features.
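
A minimal sketch of the kind of pipeline this abstract describes, not the authors' code: plain (unsupervised) TruncatedSVD stands in for their supervised LSA, a toy bit-mask genetic algorithm searches over latent dimensions, and a linear SVM provides the fitness signal. The corpus, population size, and GA parameters are all placeholder assumptions.

```python
# Sketch only: placeholder corpus and parameters, not the paper's implementation.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X = TfidfVectorizer(max_features=2000).fit_transform(data.data)
y = data.target
Z = TruncatedSVD(n_components=100, random_state=0).fit_transform(X)  # LSA space

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(10, Z.shape[1]))            # random bit masks

def fitness(mask):
    # cross-validated SVM accuracy on the selected latent dimensions
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LinearSVC(), Z[:, mask.astype(bool)], y, cv=3).mean()

for generation in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-5:]]                 # keep the best half
    cuts = rng.integers(1, Z.shape[1], size=5)
    children = np.array([np.r_[parents[i, :c], parents[(i + 1) % 5, c:]]
                         for i, c in enumerate(cuts)])     # one-point crossover
    children ^= (rng.random(children.shape) < 0.01)        # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected latent dimensions:", int(best.sum()))
```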

Trend Analysis of School Health Research using Latent Semantic Analysis (잠재의미분석방법을 통한 학교보건 연구동향 분석)

  • Shin, Seon-Hi; Park, Youn-Ju
    • Journal of the Korean Society of School Health / v.33 no.3 / pp.184-193 / 2020
  • Purpose: This study was designed to investigate the trends in school health research in Korea using probabilistic latent semantic analysis. The study longitudinally analyzed the abstracts of the papers published in 「The Journal of the Korean Society of School Health」 over the most recent 17 years, from 2004 to August 2020. By classifying all the papers according to the topics identified through the analysis, it was possible to see how the distribution of the topics has changed over the years. Based on the results, implications for school health research and educational uses of latent semantic analysis were suggested. Methods: This study investigated the research trends by longitudinally analyzing journal abstracts using Latent Dirichlet Allocation (LDA), a type of LSA. The abstracts in 「The Journal of the Korean Society of School Health」 published from 2004 to August 2020 were used for the analysis. Results: A total of 34 latent topics were identified by LDA. Six topics, namely 「Adolescent depression and suicide prevention」, 「Students' knowledge, attitudes, & behaviors」, 「Effective self-esteem program through depression interventions」, 「Factors of students' stress」, 「Intervention program to prevent adolescent risky behaviors」, and 「Sex education curriculum, and teacher」, were most frequently covered by the journal. Each of them was dealt with in at least 20 papers. The topics related to 「Intervention program to prevent adolescent risky behaviors」, 「Effective self-esteem program through depression interventions」, and 「Preventive vaccination and factors of effective vaccination」 appeared repeatedly over the most recent 5 years. Conclusion: This study introduced an AI-powered analysis method that enables data-centered, objective text analysis without human intervention. Based on the results, implications for school health research were presented, and various uses of latent semantic analysis (LSA) in educational research were suggested.
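
A rough sketch of the LDA step described in this abstract, not the study's code or data: fit scikit-learn's LatentDirichletAllocation to a handful of invented abstract-like strings and print the top words of each topic. The texts, the topic count, and the parameters are placeholders.

```python
# Sketch only: invented abstracts and an arbitrary topic count.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "adolescent depression and suicide prevention program in schools",
    "students knowledge attitudes and health behaviors survey",
    "self esteem program through depression intervention for students",
    "factors of student stress and coping in middle school",
    "intervention program to prevent adolescent risky behaviors",
    "sex education curriculum and health teacher training",
]

counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-5:][::-1]              # five highest-weight words
    print(f"topic {k}:", ", ".join(terms[i] for i in top))

# Document-topic proportions could then be tracked by publication year to see
# how the topic distribution changes over time, as the study does.
```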

A comparative study of Entity-Grid and LSA models on Korean sentence ordering (한국어 텍스트 문장정렬을 위한 개체격자 접근법과 LSA 기반 접근법의 활용연구)

  • Kim, Youngsam; Kim, Hong-Gee; Shin, Hyopil
    • Korean Journal of Cognitive Science / v.24 no.4 / pp.301-321 / 2013
  • For the task of sentence ordering, this paper attempts to utilize the Entity-Grid model, a type of entity-based modeling approach, as well as Latent Semantic Analysis (LSA), which is based on vector space modeling. The task is well known as one of the fundamental tools used to measure text coherence and to enhance text generation processes. For the implementation of the Entity-Grid model, we attempt to use the syntactic roles of the nouns in the Korean text for the ordering task and measure their impact on the result, since their contribution has been discussed in previous research. Contrary to the case of German, they show a positive result. To obtain information on the syntactic roles, we use a strategy based on Korean case markers for the nouns. As a result, it is revealed that these cues can be helpful for measuring text coherence. In addition, we compare the results with those of the LSA-based model, discussing the advantages and disadvantages of the models and options for future studies.

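The Entity-Grid side of the comparison needs a parsed corpus, but the LSA side of a sentence-ordering score can be sketched briefly. The following is an illustrative guess at such a baseline, not the authors' implementation: sentences are embedded with TF-IDF plus TruncatedSVD, and an ordering is scored by the mean cosine similarity of adjacent sentences. The sentences and dimensionality are invented.

```python
# Sketch only: toy sentences and an arbitrary 2-dimensional LSA space.
import numpy as np
from itertools import permutations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The committee announced a new school health policy.",
    "The policy focuses on adolescent mental health.",
    "Schools will hire additional counselors next year.",
    "Funding for the counselors comes from the national budget.",
]

vecs = TfidfVectorizer().fit_transform(sentences)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(vecs)

def coherence(order):
    # mean cosine similarity between each pair of adjacent sentences
    sims = [cosine_similarity(lsa[[a]], lsa[[b]])[0, 0]
            for a, b in zip(order, order[1:])]
    return float(np.mean(sims))

best = max(permutations(range(len(sentences))), key=coherence)
print("best ordering:", best, "score:", round(coherence(best), 3))
```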

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop; Chang Jeong-Ho
    • The KIPS Transactions: Part B / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that uses only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent complex semantic structures in given contexts, such as text passages. We construct linguistic semantic knowledge using the two techniques and apply it to target word selection in English-Korean machine translation. For target word selection, we utilize grammatical relationships stored in a dictionary. To resolve the data sparseness problem in target word selection, we use the k-nearest neighbor learning algorithm and estimate the distance between instances based on these models. In the experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than the LSA method. Finally, we show, through correlation analysis, the relationship between accuracy and two important factors: the dimensionality of the latent space and the k value of k-NN learning.
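
An illustrative sketch of the latent-space neighborhood idea above, not the paper's system: build word vectors from a toy term-document matrix with truncated SVD and query the k nearest words. The corpus, the query word, and k are placeholders, and the real method additionally uses dictionary-based grammatical relations and a PLSA variant.

```python
# Sketch only: toy corpus; word vectors are rows of the term-concept matrix.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestNeighbors

docs = [
    "the bank approved the loan for the new branch",
    "the river bank was flooded after heavy rain",
    "interest rates at the bank rose sharply",
    "fishermen sat on the bank of the river",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                      # documents x terms
words = vec.get_feature_names_out()

svd = TruncatedSVD(n_components=3, random_state=0)
svd.fit(X)
word_vectors = svd.components_.T * svd.singular_values_   # terms x latent dims

knn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(word_vectors)
query = word_vectors[list(words).index("loan")].reshape(1, -1)
_, idx = knn.kneighbors(query)                   # neighbors include the query word
print("nearest to 'loan':", [words[i] for i in idx[0]])
```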

Comparing the Use of Semantic Relations between Tags Versus Latent Semantic Analysis for Speech Summarization (스피치 요약을 위한 태그의미분석과 잠재의미분석간의 비교 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science / v.47 no.3 / pp.343-361 / 2013
  • We proposed and evaluated a tag semantic analysis method in which original tags are expanded and the semantic relations between original or expanded tags are used to extract key sentences from lecture speech transcripts. To do that, we first investigated how useful Flickr tag clusters and WordNet synonyms are for expanding tags and for detecting the semantic relations between tags. Then, to evaluate our proposed method, we compared it with a latent semantic analysis (LSA) method. As a result, we found that Flickr tag clusters are more effective than WordNet synonyms and that the mean F measure (0.27) of the tag semantic analysis method is higher than that of the LSA method (0.22).
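
A rough sketch of an LSA baseline of the kind used for comparison here, not the proposed tag semantic analysis method: rank transcript sentences by an SVD of the sentence-term matrix and take the strongest sentence per top latent topic, in the spirit of classic LSA-based extractive summarization. The sentences and the number of key sentences are placeholders.

```python
# Sketch only: invented transcript sentences and an arbitrary number of topics.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Today we introduce latent semantic analysis for text retrieval.",
    "The method factorizes the term-document matrix with SVD.",
    "Low-rank approximation removes noise and reveals latent topics.",
    "Next week we will cover probabilistic topic models such as LDA.",
]

A = TfidfVectorizer().fit_transform(sentences).toarray()   # sentences x terms
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                                      # number of key sentences
key = [int(np.argmax(np.abs(U[:, i]))) for i in range(k)]  # best sentence per topic
for i in sorted(set(key)):
    print("key sentence:", sentences[i])
```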

Target Word Selection using Word Similarity based on Latent Semantic Structure in English-Korean Machine Translation (잠재의미구조 기반 단어 유사도에 의한 역어 선택)

  • 장정호; 김유섭; 장병탁
    • Proceedings of the Korean Information Science Society Conference / 2002.04b / pp.502-504 / 2002
  • In this paper, word similarity is measured based on latent semantics extracted from a large corpus and applied to target word selection in English-Korean machine translation. Latent semantic analysis (LSA) and probabilistic LSA (PLSA) are used to extract the latent semantics. When selecting the target word for a given word, a collocation dictionary is searched first; for unregistered words, the information of registered entries with high similarity to the given word is used, employing the k-nearest neighbor method. Word similarity is computed in the latent semantic space. In experiments, the method achieved up to a 15% improvement over using the collocation dictionary alone, and the PLSA-based method was slightly better than the LSA-based method in target word selection performance.

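The selection flow this abstract describes (dictionary lookup first, latent-space neighbors as a fallback) can be shown with a few lines of hypothetical data; the dictionary entries and 3-dimensional vectors below are made up, and only the control flow reflects the description.

```python
# Sketch only: made-up dictionary entries and latent vectors.
import numpy as np

collocation_dict = {"bank_money": "은행", "bank_river": "둑"}   # toy entries
latent = {                                                     # toy latent vectors
    "bank_money": np.array([0.9, 0.1, 0.0]),
    "bank_river": np.array([0.1, 0.8, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_translation(word, vector):
    if word in collocation_dict:                       # registered: direct lookup
        return collocation_dict[word]
    # unregistered: borrow the entry of the nearest registered word (k = 1 here)
    nearest = max(latent, key=lambda w: cosine(vector, latent[w]))
    return collocation_dict[nearest]

print(select_translation("bank_finance", np.array([0.85, 0.15, 0.05])))  # -> 은행
```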

Representation of ambiguous word in Latent Semantic Analysis (LSA모형에서 다의어 의미의 표상)

  • 이태헌; 김청택
    • Korean Journal of Cognitive Science / v.15 no.2 / pp.23-31 / 2004
  • Latent Semantic Analysis (LSA; Landauer & Dumais, 1997) is a technique for representing the meanings of words using co-occurrence information of words appearing in the same context, usually a sentence or a document. In LSA, a word is represented as a point in a multidimensional space where each axis represents a context, and a word's meaning is determined by its frequency in each context. The space is reduced by singular value decomposition (SVD). The present study elaborates on LSA for the representation of ambiguous words. The proposed LSA applies a rotation of axes in the document space, which makes it possible to interpret the meanings of an ambiguous word. A simulation study was conducted to illustrate the performance of LSA in representing ambiguous words. In the simulation, texts containing an ambiguous word were first extracted and LSA with rotation was performed. By comparing the loading matrices, we categorized the texts according to meaning. The first meaning of an ambiguous word was represented by LSA with a matrix excluding the vectors for the other meanings, and the other meanings were represented in the same way. The simulation showed that this way of representing an ambiguous word can identify the meanings of the word. This result suggests that LSA with axis rotation can be applied to the representation of ambiguous words. We discuss how the use of rotation makes it possible to represent the multiple meanings of ambiguous words and how this technique can be applied in the area of web searching.

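A speculative sketch of LSA with axis rotation, not the authors' code: take the SVD of a tiny term-document matrix and apply a varimax rotation to the document-space loadings, so that each rotated axis should load mainly on documents sharing one sense of the ambiguous word. The data, dimensionality, and the choice of varimax are illustrative assumptions.

```python
# Sketch only: toy counts for two senses of "bank"; varimax chosen as an example rotation.
import numpy as np

def varimax(L, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix L (items x factors)."""
    n, k = L.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / n))
        R = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return L @ R

# toy term-document counts: docs 0-1 use "bank" as a financial institution,
# docs 2-3 use it as a river bank
terms = ["bank", "loan", "money", "river", "water"]
X = np.array([[2, 1, 1, 0, 0],
              [1, 2, 1, 0, 0],
              [2, 0, 0, 1, 2],
              [1, 0, 0, 2, 1]], dtype=float)      # documents x terms

U, s, Vt = np.linalg.svd(X, full_matrices=False)
doc_loadings = U[:, :2] * s[:2]                   # 2-dimensional LSA space
rotated = varimax(doc_loadings)
print(np.round(rotated, 2))                       # rows should roughly group by sense
```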

Genetic Clustering with Semantic Vector Expansion (의미 벡터 확장을 통한 유전자 클러스터링)

  • Song, Wei; Park, Soon-Cheol
    • The Journal of the Korea Contents Association / v.9 no.3 / pp.1-8 / 2009
  • This paper proposes a new document clustering system using a fuzzy logic-based genetic algorithm (GA) and semantic vector expansion technology. It is known from many GA papers that success depends on two factors: the diversity of the population and the capability to converge. We use fuzzy logic-based operators to adaptively adjust the influence between these two factors. In traditional document clustering, the most popular and straightforward approach to representing documents is the vector space model (VSM). However, this approach not only leads to a high-dimensional feature space but also ignores the semantic relationships between some important words, which affects the accuracy of clustering. In this paper, we use latent semantic analysis (LSA) to expand documents into corresponding semantic vectors at the conceptual level, rather than relying on individual terms. Meanwhile, the sizes of the vectors can be reduced drastically. We test our clustering algorithm on the 20 Newsgroups and Reuters collection data sets. The results show that our method outperforms the conventional GA in various document representation environments.
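
A minimal sketch of the semantic vector expansion idea, not the paper's fuzzy-GA clusterer: replace the high-dimensional VSM representation with low-dimensional LSA concept vectors before clustering. KMeans stands in for the genetic clustering step, and the corpus and dimensionalities are placeholders.

```python
# Sketch only: KMeans replaces the fuzzy-logic GA; parameters are arbitrary.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

data = fetch_20newsgroups(subset="test", categories=["sci.med", "rec.sport.hockey"])
vsm = TfidfVectorizer(max_features=5000).fit_transform(data.data)       # ~5000 dims
lsa = TruncatedSVD(n_components=50, random_state=0).fit_transform(vsm)  # 50 dims

for name, X in [("VSM", vsm), ("LSA", lsa)]:
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(name, "ARI:", round(adjusted_rand_score(data.target, labels), 3))
```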

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho; Kim, Myung-Kyu; Cha, Myung-Hoon; In, Joo-Ho; Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.13 no.1 / pp.47-60 / 2010
  • Most research on classification has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are known as learning-based models, or Bayesian classifiers and NNA (Neural Network Algorithm), which are known as statistics-based methods. However, there are limitations of space and time when classifying the large number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which is not good at representing the real meaning of words. In the case of Korean web page classification, there are additional problems because Korean words often have multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed to classify well in this environment (large data sets and word polysemy). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensions. From this decomposition, it is possible to create a new low-dimensional semantic space for representing vectors, which makes classification efficient and allows analysis of the latent meaning of words or documents (or web pages). Although LSA is good for classification, it also has drawbacks. When SVD reduces the dimensions of the matrix and creates the new semantic space, it considers which dimensions represent the vectors well but not which dimensions discriminate them well. This is one reason why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing these drawbacks and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features, removing stopwords, and assigning statistical weights to them.

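A hedged sketch of the selective-dimension idea as described above, not the authors' method: after plain LSA, score each latent dimension by how well it separates the classes (an ANOVA F-value here) and keep the most discriminative ones rather than the first k by singular value. The data, sizes, and the F-value criterion are assumptions.

```python
# Sketch only: F-value scoring stands in for the paper's selection criterion.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train",
                          categories=["comp.graphics", "sci.space", "rec.autos"])
X = TfidfVectorizer(max_features=3000).fit_transform(data.data)
y = data.target

Z = TruncatedSVD(n_components=200, random_state=0).fit_transform(X)

f_scores, _ = f_classif(Z, y)                 # class-separability of each dimension
top = np.argsort(f_scores)[-50:]              # 50 most discriminative dimensions

acc_first = cross_val_score(LinearSVC(), Z[:, :50], y, cv=3).mean()
acc_selected = cross_val_score(LinearSVC(), Z[:, top], y, cv=3).mean()
print(f"first 50 dims: {acc_first:.3f}  selected 50 dims: {acc_selected:.3f}")
```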

Analysis of Structured and Unstructured Data and Construction of Criminal Profiling System using LSA (LSA를 이용한 정형·비정형데이터 분석과 범죄 프로파일링 시스템 구현)

  • Kim, Yonghoon; Chung, Mokdong
    • Journal of Korea Multimedia Society / v.20 no.1 / pp.66-73 / 2017
  • Due to recent rapid changes in society and the widespread use of information devices, diverse digital information is utilized in a variety of economic and social analyses. Information related to crime statistics by type of crime has been used as a major factor in crime analysis. However, statistical analysis using only structured data makes investigation difficult by providing limited information to investigators and users. In this paper, structured and unstructured data are analyzed by applying Korean Natural Language Processing (Ko-NLP) and the Latent Semantic Analysis (LSA) technique. The result is an optimal crime profiling system that can be applied to crime profiling or statistical analysis.
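
A loose sketch of the unstructured-data side, not the authors' system: vectorize short Korean incident descriptions and compare them in an LSA space. A real pipeline would use Korean morphological analysis (Ko-NLP); character n-grams stand in here so the snippet runs without extra tooling, and the report texts are invented.

```python
# Sketch only: invented reports; char n-grams replace Korean morphological analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "심야에 주거지에 침입하여 현금과 금품을 절취함",   # burglary at night
    "새벽에 주거지에 침입하여 현금을 절취함",          # similar burglary
    "음주 상태로 도로에서 차량을 운전함",              # drunk driving
]

X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(reports)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# 3x3 similarity matrix; the two burglary reports should be the closest pair
print(cosine_similarity(Z))
```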