• Title/Summary/Keyword: LDA model

Topic Model Augmentation and Extension Method using LDA and BERTopic (LDA와 BERTopic을 이용한 토픽모델링의 증강과 확장 기법 연구)

  • Kim, SeonWook;Yang, Kiduk
    • Journal of the Korean Society for Information Management / v.39 no.3 / pp.99-132 / 2022
  • The purpose of this study is to propose AET (Augmented and Extended Topics), a novel method for synthesizing LDA and BERTopic results, and to analyze recently published LIS articles as an experimental application. To this end, 55,442 abstracts from 85 LIS journals in the WoS database, spanning January 2001 to October 2021, were analyzed. AET first constructs a Word2Vec-based cosine similarity matrix between the LDA and BERTopic results, extracts AT (Augmented Topics) by repeating matrix reordering and segmentation procedures as long as their semantic relations remain valid, and finally determines ET (Extended Topics) by removing any LDA-related residual subtopics from the matrix and ordering the rest by F1 (BERTopic topic size rank, inverse cosine similarity rank). Compared with the baseline LDA result, AT effectively concretizes the original LDA topic model, and ET discovers new meaningful topics that LDA did not. In the qualitative performance evaluation, AT performs better than LDA, while ET shows similar performance except in a few cases.
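
    A minimal sketch of the first AET step described above, the Word2Vec-based cosine similarity matrix between LDA and BERTopic topics; it is not the authors' implementation, and it assumes a trained gensim KeyedVectors model `w2v` plus lists of top words per topic from each model:

    ```python
    # Sketch (assumption, not the authors' code): represent each topic by the mean
    # word2vec vector of its top words, then compare LDA topics against BERTopic
    # topics with cosine similarity.
    import numpy as np
    from numpy.linalg import norm

    def topic_vector(words, w2v):
        """Average the embeddings of a topic's top words (out-of-vocabulary words skipped)."""
        vecs = [w2v[w] for w in words if w in w2v]
        return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

    def similarity_matrix(lda_topics, bertopic_topics, w2v):
        """Rows = LDA topics, columns = BERTopic topics, entries = cosine similarity."""
        L = np.array([topic_vector(t, w2v) for t in lda_topics])
        B = np.array([topic_vector(t, w2v) for t in bertopic_topics])
        L /= np.maximum(norm(L, axis=1, keepdims=True), 1e-12)
        B /= np.maximum(norm(B, axis=1, keepdims=True), 1e-12)
        return L @ B.T
    ```

    The matrix reordering, segmentation (AT), and residual-removal (ET) steps would then operate on this matrix; they are not reproduced here.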

Topic Expansion based on Infinite Vocabulary Online LDA Topic Model using Semantic Correlation Information (무한 사전 온라인 LDA 토픽 모델에서 의미적 연관성을 사용한 토픽 확장)

  • Kwak, Chang-Uk;Kim, Sun-Joong;Park, Seong-Bae;Kim, Kweon Yang
    • KIISE Transactions on Computing Practices / v.22 no.9 / pp.461-466 / 2016
  • Topic expansion is a method that reflects external data to improve the quality of a learned topic model. Conventional online topic models are not suitable for topic expansion with external data because they cannot reflect unseen words in the learned model. In this study, we propose a topic expansion method using infinite vocabulary online LDA. When an unseen word appears during learning, the proposed method allocates it to a topic after calculating the semantic correlation between the unseen word and each topic. To evaluate the proposed method, we compared it with an existing topic expansion method. The results indicate that, by reflecting external documents, the proposed method incorporates additional information that is not contained in the broadcasting scripts. The proposed method also outperformed the baseline in the coherence evaluation.
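
    A small illustration of the unseen-word allocation idea described above (not the authors' infinite vocabulary online LDA itself); it assumes the unseen word is covered by an external gensim KeyedVectors embedding `w2v` and that each topic is summarized by its top words:

    ```python
    # Sketch: allocate an unseen word to the topic whose top words are, on average,
    # most semantically correlated with it (cosine similarity in the embedding space).
    import numpy as np

    def assign_unseen_word(word, topic_top_words, w2v):
        """Return the index of the most semantically correlated topic for `word`."""
        scores = []
        for top_words in topic_top_words:
            sims = [w2v.similarity(word, w) for w in top_words if w in w2v]
            scores.append(np.mean(sims) if sims else -np.inf)
        return int(np.argmax(scores))
    ```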

High-dimensional linear discriminant analysis with moderately clipped LASSO

  • Chang, Jaeho;Moon, Haeseong;Kwon, Sunghoon
    • Communications for Statistical Applications and Methods / v.28 no.1 / pp.21-37 / 2021
  • There is a direct connection between linear discriminant analysis (LDA) and linear regression, since the direction vector of LDA can be obtained by least squares estimation. This connection motivates penalized LDA when the model is high-dimensional, that is, when the number of predictive variables is larger than the sample size. In this paper, we study penalized LDA for a class of penalties, called the moderately clipped LASSO (MCL), which interpolates between the least absolute shrinkage and selection operator (LASSO) and the minimax concave penalty. We prove that the MCL-penalized LDA correctly identifies the sparsity of the Bayes direction vector with probability tending to one, a result supported by better finite-sample performance than LASSO in concrete numerical studies.
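
    The LDA-regression connection mentioned above can be sketched as follows; this is only an illustration of the link, with a plain LASSO penalty standing in for MCL (which is not available in scikit-learn) and a two-level response coding chosen as a common convention, not necessarily the authors':

    ```python
    # Sketch: a sparse estimate proportional to the LDA direction vector, obtained by
    # penalized least squares on a coded binary response (LASSO as a stand-in for MCL).
    import numpy as np
    from sklearn.linear_model import Lasso

    def penalized_lda_direction(X, y, alpha=0.1):
        """X: (n, p) predictors; y: binary labels in {0, 1}."""
        y = np.asarray(y)
        n, n0, n1 = len(y), np.sum(y == 0), np.sum(y == 1)
        z = np.where(y == 1, n / n1, -n / n0)   # regression coding of the two classes
        return Lasso(alpha=alpha, fit_intercept=True).fit(X, z).coef_
    ```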

Hot Topic Discovery across Social Networks Based on Improved LDA Model

  • Liu, Chang;Hu, RuiLin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3935-3949 / 2021
  • With the rapid development of the Internet and big data technology, various online social network platforms have been established, producing massive amounts of information every day. Hot topic discovery aims to dig out the meaningful content that users commonly care about from this massive information. Most existing hot topic discovery methods focus on a single network data source, can hardly grasp hot spots as a whole, and do not meet the challenges of text sparsity and topic hotness evaluation in cross-network scenarios. This paper proposes a novel hot topic discovery method across social networks based on an improved LDA model, which first integrates the text information from multiple social network platforms into a unified data set and then obtains the latent topic distribution of the text through the improved LDA model. Finally, it adopts a heat evaluation method based on the word frequency of topic label words and takes the latent topic with the highest heat value as the hot topic. This paper obtains data from online social networks and constructs a cross-network topic discovery data set. The experimental results demonstrate the superiority of the proposed method over baseline methods.
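
    The heat-evaluation step described above can be illustrated with a frequency-based score; the helper below is a hypothetical simplification, not the paper's exact hotness formula:

    ```python
    # Sketch: score each latent topic by the corpus frequency of its label words in
    # the merged cross-network data set and return the hottest topic.
    from collections import Counter

    def hottest_topic(topic_label_words, all_tokens):
        """topic_label_words: list of label-word lists, one per topic;
        all_tokens: flat token list from the unified data set."""
        freq = Counter(all_tokens)
        heat = [sum(freq[w] for w in labels) for labels in topic_label_words]
        return max(range(len(heat)), key=heat.__getitem__), heat
    ```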

Multi-Topic Sentiment Analysis using LDA for Online Review (LDA를 이용한 온라인 리뷰의 다중 토픽별 감성분석 - TripAdvisor 사례를 중심으로 -)

  • Hong, Tae-Ho;Niu, Hanying;Ren, Gang;Park, Ji-Young
    • The Journal of Information Systems / v.27 no.1 / pp.89-110 / 2018
  • Purpose: There is much information in customer reviews, but finding the key information across many texts is not easy, and business decision makers need a model to solve this problem. In this study we propose a multi-topic sentiment analysis approach using Latent Dirichlet Allocation (LDA) for user-generated content (UGC). Design/methodology/approach: We collected a total of 104,039 hotel reviews for seven of the world's top tourist destinations from TripAdvisor (www.tripadvisor.com) and extracted 30 hotel-related topics from all customer reviews using the LDA model. Six major dimensions (value, cleanliness, rooms, service, location, and sleep quality) were selected from the 30 extracted topics. Data were analyzed with the R language. Findings: This study proposes a lexicon-based sentiment analysis approach for the keyword-embedded sentences related to the six dimensions within a review. The performance of the proposed model was evaluated by comparing the sentiment analysis results for each topic with the actual attribute ratings provided by the platform; the results show good performance, with high accuracy and recall. The proposed model is expected to support analyzing customer sentiment on different topics for reviews that lack detailed attribute ratings.
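
    A rough sketch of the lexicon-based, per-dimension scoring described above; the dimension keywords and the sentiment lexicon are illustrative placeholders, not the paper's actual resources (and the paper's own analysis was done in R):

    ```python
    # Sketch: average a lexicon-based sentiment score over the sentences of a review
    # that mention keywords of each dimension (value, cleanliness, rooms, ...).
    def dimension_sentiment(review_sentences, dimension_keywords, lexicon):
        """dimension_keywords: dict of dimension -> keyword list;
        lexicon: dict of word -> polarity score."""
        scores = {}
        for dim, keywords in dimension_keywords.items():
            hits = [s for s in review_sentences if any(k in s.lower() for k in keywords)]
            sentence_scores = [sum(lexicon.get(w, 0) for w in s.lower().split()) for s in hits]
            scores[dim] = sum(sentence_scores) / len(sentence_scores) if sentence_scores else None
        return scores
    ```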

Fault Diagnosis of Induction Motor by Fusion Algorithm based on PCA and LDA (PCA와 LDA에 기반을 둔 융합알고리즘에 의한 유도전동기의 고장진단)

  • Jeon, Byeong-Seok;Lee, Dae-Jong;Lee, Sang-Hyuk;Ryu, Jeong-Woong;Chun, Myung-Geun
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.19 no.2 / pp.152-159 / 2005
  • In this paper, we propose a diagnosis algorithm based on a fusion of PCA and LDA to detect fault states of the induction motors used in various industrial fields. After feature vectors are extracted from the experimentally measured current values using PCA and LDA, training data are constructed to produce a matching value for each method. In the diagnosis step, the two matching values yielded by PCA and LDA are fused by a probability model and finally verified. Since the proposed diagnosis algorithm takes only the merits of PCA and LDA, it shows excellent results in noisy environments. Simulation results verifying the usability of the proposed algorithm showed better performance than using conventional PCA or LDA alone.
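
    The two-branch matching and fusion described above might look like the following sketch; the probability-model fusion is replaced here by a simple weighted sum of nearest-centroid matching scores, which is an assumption rather than the paper's exact rule:

    ```python
    # Sketch: extract PCA and LDA features, compute a matching score per class in each
    # feature space (negative distance to the class centroid), then fuse the two scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fused_diagnosis(X_train, y_train, X_test, w_pca=0.5, w_lda=0.5, n_pcs=10):
        y_train = np.asarray(y_train)
        pca = PCA(n_components=n_pcs).fit(X_train)          # n_pcs is illustrative
        lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
        classes = np.unique(y_train)

        def matching_scores(Z_train, Z_test):
            centroids = np.array([Z_train[y_train == c].mean(axis=0) for c in classes])
            return -np.linalg.norm(Z_test[:, None, :] - centroids[None, :, :], axis=2)

        s_pca = matching_scores(pca.transform(X_train), pca.transform(X_test))
        s_lda = matching_scores(lda.transform(X_train), lda.transform(X_test))
        return classes[np.argmax(w_pca * s_pca + w_lda * s_lda, axis=1)]
    ```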

Establishment of ITS Policy Issues Investigation Method in the Road Section applied Textmining (텍스트마이닝을 활용한 도로분야 ITS 정책이슈 탐색기법 정립)

  • Oh, Chang-Seok;Lee, Yong-taeck;Ko, Minsu
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.15 no.6 / pp.10-23 / 2016
  • Given the scrutiny required when using big data, this study attempts to develop and apply a search method for audit issues relating to ITS policies and programs. To this end, the auditing process of the Board of Audit and Inspection was combined with the theoretical frame of boundary analysis proposed by William Dunn as an analysis tool for audit issues. Moreover, we apply a text mining technique to computerize the analysis, which resembles boundary analysis in its approach to meta-problems. For the text mining analysis, we applied an antisymmetry-symmetry compound lexeme-based LDA model built on the Latent Dirichlet Allocation (LDA) methodology proposed by David Blei. Several key issues were found through a case analysis: the lack of traffic information collection by the urban traffic information system operated by the National Police Agency, the overlap between the Ministry of Land, Infrastructure and Transport and the Advanced Traffic Management System, and the fabrication of mileage on digital tachographs.
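
    The LDA step underlying this kind of text-mining analysis can be sketched with gensim; the compound-lexeme preprocessing specific to the paper is not reproduced, and the parameter values below are placeholders:

    ```python
    # Sketch: fit a plain LDA topic model on pre-tokenized documents with gensim.
    from gensim import corpora
    from gensim.models import LdaModel

    def fit_lda(tokenized_docs, num_topics=10):
        """tokenized_docs: list of token lists (already preprocessed)."""
        dictionary = corpora.Dictionary(tokenized_docs)
        corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=num_topics, random_state=0, passes=5)
        return lda, dictionary, corpus
    ```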

WV-BTM: A Technique on Improving Accuracy of Topic Model for Short Texts in SNS (WV-BTM: SNS 단문의 주제 분석을 위한 토픽 모델 정확도 개선 기법)

  • Song, Ae-Rin;Park, Young-Ho
    • Journal of Digital Contents Society / v.19 no.1 / pp.51-58 / 2018
  • As the number of SNS users and the amount of SNS data have increased explosively, research based on SNS big data has become active. In social mining, Latent Dirichlet Allocation (LDA), a representative topic model technique, is used to identify the similarity of texts in unclassified, large-volume SNS text data and to extract trends from it. However, LDA has the limitation that it is difficult to derive high-level topics because of the semantic sparsity caused by infrequent word occurrence in short-sentence data. BTM alleviated this limitation of LDA by modeling pairs of words (biterms). However, BTM also has a limitation: it cannot compute weights that reflect the relation of each word pair to the topics, because it is influenced more by the high-frequency word within the pair. In this paper, we propose a technique that improves the accuracy of the existing BTM by reflecting the semantic relation between words.
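
    The core idea, weighting each biterm by the semantic relatedness of its two words so that frequent but unrelated pairs do not dominate, can be sketched as follows; `w2v` is assumed to be a gensim KeyedVectors model, and this is an illustration rather than the published WV-BTM procedure:

    ```python
    # Sketch: generate (word, word, weight) biterms for one short text, weighting each
    # pair by the cosine similarity of its word vectors.
    from itertools import combinations

    def weighted_biterms(doc_tokens, w2v):
        biterms = []
        for a, b in combinations(sorted(set(doc_tokens)), 2):
            if a in w2v and b in w2v:
                biterms.append((a, b, float(w2v.similarity(a, b))))  # similarity in [-1, 1]
        return biterms
    ```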

The Arms Race on the Road: Exploring Factors of SUVs' Popularity by LDA Topic Model (도로 위의 군비경쟁: LDA 토픽모델을 활용한 SUV의 인기 요인 탐구)

  • Jeon, Seung-Bong;Goh, Taekyeong
    • Journal of Digital Convergence / v.18 no.10 / pp.239-252 / 2020
  • Using text mining, we explore the factors behind the rise in SUV preference. We collected 32,679 posts related to SUVs from "Bobaedream," the largest online automobile community in South Korea, and applied the LDA topic model. While previous studies have explained the SUV boom as an individual risk-aversion strategy against crime, the results show that the topic of 'Safety' is an important factor in the SUV discourse in the context of car accidents and high-speed driving. We conclude that the consumption of SUVs in Korean society serves as a means of relieving individuals' anxiety about danger when driving, and we argue that declining social trust, caused by rising inequality, underlies this perception of risk on the road.

SVD-LDA: A Combined Model for Text Classification

  • Hai, Nguyen Cao Truong;Kim, Kyung-Im;Park, Hyuk-Ro
    • Journal of Information Processing Systems / v.5 no.1 / pp.5-10 / 2009
  • Text data has always accounted for a major portion of the world's information, and as the volume of information increases exponentially, the portion that is text also increases significantly. Text classification therefore remains an important area of research. LDA is a probabilistic model that has been used in many applications across many fields. For text data, LDA also has many applications, to which various enhancements have been applied. However, little attention seems to have been paid to the input given to LDA. In this paper, we suggest a way to map the input space to a reduced space, which may avoid the unreliability, ambiguity, and redundancy of individual terms as descriptors. The purpose of this paper is to show that LDA can be applied effectively in such a "clean and clear" space. Experiments are conducted on the 20 Newsgroups data set. The results show that the proposed method can boost classification results when an appropriate rank for the reduced space is chosen.
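
    The input-reduction step described above can be sketched with a truncated SVD of the TF-IDF term-document matrix; this shows only the dimensionality-reduction stage, not the paper's full SVD-LDA classification pipeline, and `rank` must be smaller than the vocabulary size:

    ```python
    # Sketch: project documents onto a low-rank "clean and clear" space before any
    # topic modelling or classification is applied.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.pipeline import make_pipeline

    def reduced_representation(texts, rank=100):
        """texts: list of raw document strings; returns an (n_docs, rank) matrix."""
        pipeline = make_pipeline(TfidfVectorizer(stop_words="english"),
                                 TruncatedSVD(n_components=rank, random_state=0))
        return pipeline.fit_transform(texts)
    ```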