• Title/Abstract/Keyword: Latent topic model

80 search results

Topic Masks for Image Segmentation

  • Jeong, Young-Seob;Lim, Chae-Gyun;Jeong, Byeong-Soo;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 12, pp. 3274-3292, 2013
  • Unsupervised methods for image segmentation have recently drawn attention because most images carry no labels or tags. A topic model is an unsupervised probabilistic method that captures latent aspects of data, where each latent aspect, or topic, is associated with one homogeneous region. The results of topic models, however, usually contain noise, which degrades overall segmentation performance. In this paper, to improve the performance of image segmentation using topic models, we propose two topic masks applicable to the topic assignments of homogeneous regions obtained from topic models. The topic masks detect noise among the assigned topic labels and remove it by replacement, much as image masks operate on pixels. However, because topic assignments differ in nature from image pixels, the topic masks have properties different from existing pixel masks. This paper makes two contributions. First, the topic masks can be used to reduce the noise in topic assignments obtained from topic models for image segmentation tasks. Second, we test the effectiveness of the topic masks by applying them to segmented images obtained from the Latent Dirichlet Allocation model and the Spatial Latent Dirichlet Allocation model on the MSRC image dataset. The empirical results show that one of the masks successfully reduces topic noise.
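
The mask-and-replace operation described above can be sketched as a simple label filter. The following is a minimal illustration, not the paper's actual masks (whose definitions differ from pixel masks): a 3x3 majority-vote filter over a grid of per-pixel topic assignments, which replaces an isolated noisy label with the dominant label of its neighborhood.

```python
from collections import Counter

def topic_mask(assignments):
    """Apply a 3x3 mode filter to a 2D grid of topic labels."""
    h, w = len(assignments), len(assignments[0])
    out = [row[:] for row in assignments]
    for i in range(h):
        for j in range(w):
            # collect labels in the 3x3 neighborhood (clipped at borders)
            neigh = [assignments[ni][nj]
                     for ni in range(max(0, i - 1), min(h, i + 2))
                     for nj in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = Counter(neigh).most_common(1)[0][0]
    return out

grid = [[0, 0, 0],
        [0, 1, 0],   # an isolated, presumably noisy, topic label
        [0, 0, 0]]
print(topic_mask(grid))  # the isolated 1 is replaced by 0
```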

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying;Zhu, Junhuan;Guo, Lifan;Hu, Xiaohua;Luo, Jiebo;Wang, Haohong
    • Journal of Multimedia Information System, Vol. 1, No. 1, pp. 55-67, 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the rapidly growing social media community. However, due to the gaps between images and texts, there is very limited work in the literature on detecting an image's topics and emotions in a unified framework, although topics and emotions are two levels of semantics that often work together to comprehensively describe an image. In this work, we propose a unified model, the Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA) model, which extends the previous LDA, mmLDA, and JST models to capture topic and emotion information from heterogeneous data at the same time. Specifically, a two-level graphical model is built to share topics and emotions across the whole document collection. Experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions; it significantly outperforms the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and outperforms the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.


Accelerated Learning of Latent Topic Models by Incremental EM Algorithm

  • 장정호;이종우;엄재홍
    • 한국정보과학회논문지:소프트웨어및응용, Vol. 34, No. 12, pp. 1045-1055, 2007
  • A latent topic model probabilistically models and automatically extracts characteristic patterns inherent in data, or the interdependencies among the features that define the data; it has recently been applied widely to automatic extraction of semantic features from text documents, analysis of multimedia data including images, and bioinformatics. When applying latent topic models to large-scale data, one of the key issues for their effectiveness is efficient model training. This paper targets PLSA (probabilistic latent semantic analysis), one of the representative latent topic models, and proposes a technique based on the incremental EM algorithm that accelerates training relative to the conventional approach based on the basic EM algorithm. Instead of a single batch E-step over the entire dataset, the incremental EM algorithm performs a series of partial E-steps over subsets of the data during topic inference, and training is accelerated by immediately reflecting the result of learning on one subset in the learning on the next. It also has the advantages that convergence to a local optimum is theoretically guaranteed and that it is easy to implement without major modification of the existing algorithm. In addition to the basic application of the algorithm, this paper presents possible data-partitioning schemes for practical use and experimentally compares their performance in terms of training speed. Experiments on real-world news document data show that the proposed technique achieves a meaningful speed-up over conventional PLSA training; results of combining the method with parallel training are also briefly presented.
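
The partial E-step schedule described above can be sketched in a few lines. This is an illustrative mixture-of-unigrams toy, not the paper's PLSA implementation; the data and the deterministic initialization are made up for the example. The point is the loop structure: responsibilities for one block of documents are refreshed and immediately folded back into the model before the next block is visited.

```python
def e_step(doc, p_w_z, p_z):
    """Responsibilities P(topic | doc) for one bag-of-words document."""
    scores = []
    for z in range(2):
        s = p_z[z]
        for w in doc:
            s *= p_w_z[z][w]
        scores.append(s)
    total = sum(scores)
    return [s / total for s in scores]

def m_step(docs, resp, vocab):
    """Re-estimate P(topic) and P(word | topic) from current responsibilities."""
    p_z = [sum(r[z] for r in resp) / len(docs) for z in range(2)]
    p_w_z = []
    for z in range(2):
        counts = {w: 1e-6 for w in vocab}   # small smoothing constant
        for d, r in zip(docs, resp):
            for w in d:
                counts[w] += r[z]
        tot = sum(counts.values())
        p_w_z.append({w: c / tot for w, c in counts.items()})
    return p_z, p_w_z

def incremental_em(docs, vocab, n_passes=20, block=1):
    # deterministic, slightly asymmetric start so the two topics can separate
    p_w_z = [{w: 2.0 if i < len(vocab) // 2 else 1.0 for i, w in enumerate(vocab)},
             {w: 1.0 if i < len(vocab) // 2 else 2.0 for i, w in enumerate(vocab)}]
    for z in range(2):
        tot = sum(p_w_z[z].values())
        p_w_z[z] = {w: v / tot for w, v in p_w_z[z].items()}
    p_z = [0.5, 0.5]
    resp = [[0.5, 0.5] for _ in docs]
    for _ in range(n_passes):
        for start in range(0, len(docs), block):
            # partial E-step: refresh responsibilities for one block only ...
            for i in range(start, min(start + block, len(docs))):
                resp[i] = e_step(docs[i], p_w_z, p_z)
            # ... and fold the result into the model right away
            p_z, p_w_z = m_step(docs, resp, vocab)
    return resp

docs = [["cat", "cat", "pet"], ["cat", "pet"],
        ["stock", "bank"], ["bank", "stock", "stock"]]
vocab = ["cat", "pet", "stock", "bank"]
resp = incremental_em(docs, vocab)
# the two "cat" documents end up in one topic, the two "stock" documents in the other
```

A batch EM variant would instead run the inner E-step over all documents before a single M-step; the incremental schedule lets early blocks improve the model used for later blocks within the same pass.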

Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae;Lee, Soo-Young
    • ETRI Journal, Vol. 38, No. 3, pp. 487-493, 2016
  • Two new methods are proposed for unsupervised adaptation of a language model (LM) from a single sentence for automatic transcription tasks. In the training phase, training documents are clustered with Latent Dirichlet Allocation (LDA), and a domain-specific LM is trained for each cluster. In the test phase, an adapted LM is formed as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; therefore, the clustering and weight-estimation algorithms of the trained LDA model are reliable. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
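
The mixture step described above reduces to a weighted sum. The sketch below is illustrative (the toy unigram LMs and weight values are made up, and the paper's LMs are n-gram models, not unigram tables): the adapted probability of a word is the topic-weighted combination of the domain-specific probabilities.

```python
def adapted_prob(word, domain_lms, topic_weights):
    """P_adapted(word) = sum_k weight_k * P_k(word)."""
    return sum(w * lm.get(word, 1e-9) for w, lm in zip(topic_weights, domain_lms))

# two toy domain LMs (e.g. sports vs. finance) as unigram tables
domain_lms = [{"goal": 0.4, "bank": 0.1},
              {"goal": 0.05, "bank": 0.5}]
# hypothetical P(topic | sentence) weights, as would come from a trained LDA model
weights = [0.8, 0.2]
p = adapted_prob("goal", domain_lms, weights)
print(round(p, 3))  # 0.8*0.4 + 0.2*0.05 = 0.33
```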

Topic Modeling of Korean Newspaper Articles on Aging via Latent Dirichlet Allocation

  • Lee, So Chung
    • Asian Journal for Public Opinion Research, Vol. 10, No. 1, pp. 4-22, 2022
  • The purpose of this study is to explore the structure of social discourse on aging in Korea by analyzing newspaper articles on aging. The analysis consists of three steps: first, data collection and preprocessing; second, identifying the latent topics; and third, observing the yearly dynamics of topics. In total, 1,472 newspaper articles that included the word "aging" in the title were collected from 10 major newspapers between 2006 and 2019. The underlying topic structure was analyzed using Latent Dirichlet Allocation (LDA), a topic modeling method widely adopted by text mining researchers. Seven latent topics were generated from the LDA model, defined as social issues, death, private insurance, economic growth, national debt, labor market innovation, and income security. The topic loadings demonstrated a clear increase in public interest in topics such as national debt and labor market innovation in recent years. This study concludes that media discourse on aging has shifted towards productivity- and efficiency-related issues, requiring older people to be productive citizens. Such subjectivation implies a diminished role for government and society, shifting responsibility onto individuals who fail to adapt as productive citizens within the labor market.

A Study on the Research Topics and Trends in South Korea: Focusing on Particulate Matter

  • 박혜민;김태용;권대웅;허준용;이주연;양민준
    • 대한원격탐사학회지, Vol. 38, No. 5_3, pp. 873-885, 2022
  • As associations between particulate matter (PM) and increased mortality and morbidity have been reported worldwide, a wide range of studies has been conducted; in South Korea, the importance of PM has been recognized since the late 1990s, and diverse PM research has followed. In this study, to classify the subjects of "particulate matter" research and identify the trends for each subject, Latent Dirichlet Allocation (LDA) analysis was performed on 2,764 PM-related papers indexed in the Research Information Sharing Service (RISS). Ten topics turned out to be the most suitable number, and the PM research could be classified into "PM reduction (Topic 1)", "government policy and management (Topic 2)", "PM characteristics (Topic 3)", "PM models (Topic 4)", "environmental education (Topic 5)", "bio (Topic 6)", "transportation (Topic 7)", "Asian dust (Topic 8)", "indoor PM pollution (Topic 9)", and "human health risk (Topic 10)". In particular, the shares of papers on "government policy and management (Topic 2)", "PM models (Topic 4)", "environmental education (Topic 5)", and "bio (Topic 6)" increased over time (linear slope > 0), indicating that these topics are on the rise. The results of this study offer researchers in various PM-related fields a new methodology for literature review and provide an understanding of the history and development of the field.
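
The trend test mentioned above (linear slope > 0) can be sketched directly: fit a least-squares line to a topic's yearly share of papers and call the topic "rising" when the slope is positive. The yearly shares below are made-up illustration values, not the paper's data.

```python
def slope(years, shares):
    """Least-squares slope of shares regressed on years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(shares) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, shares))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = [2018, 2019, 2020, 2021, 2022]
policy_share = [0.10, 0.12, 0.15, 0.16, 0.19]  # hypothetical yearly topic proportions
print(slope(years, policy_share) > 0)  # True: a rising topic
```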

A Study on Mapping Users' Topic Interest for Question Routing for Community-based Q&A Service

  • 박종도
    • 정보관리학회지, Vol. 32, No. 3, pp. 397-412, 2015
  • For question routing in a community-based question-answering service, this study analyzed the topics within each category using the question-answer dataset accumulated in the community, and on that basis analyzed the topic interests of the users drawn to those topics. LDA was used to analyze the topics within a specific category and to model users' topic interests. Furthermore, after analyzing the topic of each new question entering the community, a series of methods for recommending users interested in that topic was tested.
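
One common way to realize the routing step described above is to rank users by the similarity between the topic distribution inferred for a new question and each user's interest profile. The sketch below is a hypothetical illustration (the profiles, question distribution, and the choice of cosine similarity are assumptions, not details from the paper); both vectors would come from a trained LDA model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# per-user topic-interest profiles (hypothetical, over 3 topics)
user_profiles = {
    "alice": [0.7, 0.2, 0.1],   # mostly topic 0
    "bob":   [0.1, 0.1, 0.8],   # mostly topic 2
}
question_topics = [0.6, 0.3, 0.1]  # distribution inferred for an incoming question
best = max(user_profiles, key=lambda u: cosine(user_profiles[u], question_topics))
print(best)  # alice
```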

Multi-Topic Sentiment Analysis using LDA for Online Reviews: Focusing on the TripAdvisor Case

  • 홍태호;니우한잉;임강;박지영
    • 한국정보시스템학회지:정보시스템연구, Vol. 27, No. 1, pp. 89-110, 2018
  • Purpose: Customer reviews contain a great deal of information, but finding the key information in a large volume of text is not easy. Business decision makers need a model to solve this problem. In this study we propose a multi-topic sentiment analysis approach using Latent Dirichlet Allocation (LDA) for user-generated content (UGC). Design/methodology/approach: We collected a total of 104,039 hotel reviews covering seven of the world's top tourist destinations from TripAdvisor (www.tripadvisor.com) and extracted 30 hotel-related topics from all customer reviews using the LDA model. Six major dimensions (value, cleanliness, rooms, service, location, and sleep quality) were selected from the 30 extracted topics. The data were analyzed with the R language. Findings: This study proposes a lexicon-based sentiment analysis approach for the keyword-embedded sentences related to the six dimensions within a review. The performance of the proposed model was evaluated by comparing the sentiment analysis results for each topic with the real attribute ratings provided by the platform. The results show good performance, with high accuracy and recall. Through the proposed model, customers' sentiments over different topics can be analyzed even for reviews that lack detailed attribute ratings.
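
The lexicon-based, per-dimension scoring described above can be sketched as follows. The keyword sets and polarity lexicon here are tiny illustrative stand-ins (the paper's dimensions come from LDA topics and its lexicon is far larger): sentences mentioning a dimension keyword are scored by summing word polarities, and the dimension's sentiment is the average over those sentences.

```python
# toy polarity lexicon and dimension keyword sets (illustrative, not the paper's)
LEXICON = {"great": 1, "clean": 1, "comfortable": 1, "dirty": -1, "noisy": -1, "poor": -1}
DIMENSIONS = {"cleanliness": {"clean", "dirty"}, "rooms": {"room", "bed"}}

def dimension_sentiment(review, dimension):
    """Average polarity over sentences mentioning a keyword of the dimension."""
    keywords = DIMENSIONS[dimension]
    scores = []
    for sentence in review.lower().split("."):
        words = sentence.split()
        if keywords & set(words):
            scores.append(sum(LEXICON.get(w, 0) for w in words))
    return sum(scores) / len(scores) if scores else 0.0

review = "The room was great. Bathroom was dirty. Staff were noisy."
print(dimension_sentiment(review, "rooms"))        # 1.0 (only the first sentence counts)
print(dimension_sentiment(review, "cleanliness"))  # -1.0
```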

Topic Modeling and Sentiment Analysis of Twitter Discussions on COVID-19 from Spatial and Temporal Perspectives

  • AlAgha, Iyad
    • Journal of Information Science Theory and Practice, Vol. 9, No. 1, pp. 35-53, 2021
  • The study reported in this paper aimed to evaluate the topics and opinions in COVID-19 discussion on Twitter. It performed topic modeling and sentiment analysis of tweets posted during the COVID-19 outbreak and compared the results over space and time. In addition, by covering a more recent and longer period of the pandemic timeline, it revealed several patterns not previously reported in the literature. Author-pooled Latent Dirichlet Allocation (LDA) was used to generate twenty topics covering different aspects of the pandemic. Time-series analysis of the distribution of tweets over topics was performed to explore how the discussion on each topic changed over time, and the potential reasons behind the change. In addition, spatial analysis of topics was performed by comparing the percentage of tweets in each topic among the top tweeting countries. Afterward, sentiment analysis of tweets was performed at both the temporal and spatial levels, with the aim of analyzing how sentiment differs between countries and in response to certain events. The performance of the topic model was assessed by comparison with alternative topic modeling techniques, measuring topic coherence while varying the number of topics. Results showed that pooling by author before running LDA significantly improved the produced topic models.
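
The author-pooling step named above is a simple preprocessing transform: short tweets by the same author are concatenated into one pseudo-document before topic modeling, giving LDA longer, more coherent documents to work with. A minimal sketch with made-up tweets:

```python
from collections import defaultdict

def pool_by_author(tweets):
    """tweets: list of (author, text) pairs -> one pseudo-document per author."""
    pooled = defaultdict(list)
    for author, text in tweets:
        pooled[author].append(text)
    return {author: " ".join(texts) for author, texts in pooled.items()}

tweets = [("u1", "lockdown extended"), ("u2", "vaccine news"), ("u1", "masks required")]
docs = pool_by_author(tweets)
print(docs["u1"])  # "lockdown extended masks required"
```

The resulting pseudo-documents, rather than the individual tweets, would then be fed to a standard LDA implementation.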

Non-Simultaneous Sampling Deactivation during the Parameter Approximation of a Topic Model

  • Jeong, Young-Seob;Jin, Sou-Young;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 1, pp. 81-98, 2013
  • Since Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) were introduced, many revised or extended topic models have appeared. Due to the intractable likelihood of these models, training any topic model requires the use of an approximation algorithm such as variational approximation, Laplace approximation, or Markov chain Monte Carlo (MCMC). Although these approximation algorithms perform well, training a topic model is still computationally expensive given the large amount of data involved. In this paper, we propose a new method, called non-simultaneous sampling deactivation, for efficient approximation of the parameters of a topic model. Whereas each random variable is normally sampled or obtained over a single predefined burn-in period in traditional approximation algorithms, our method is based on the observation that the random variable nodes in a topic model converge after different numbers of iterations. During the iterative approximation process, the proposed method allows each random variable node to be terminated, or deactivated, once it has converged. Therefore, compared to traditional approximation methods, in which every node is usually deactivated concurrently, the proposed method achieves inference efficiency in terms of time and memory. We do not propose a new approximation algorithm, but a new process applicable to existing approximation algorithms. Through experiments, we show the time and memory efficiency of the method, and discuss the tradeoff between the efficiency of the approximation process and parameter consistency.
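
The general idea of per-node deactivation can be illustrated on a deterministic iterative update (this is not the paper's sampling algorithm; the update rule and tolerance are made-up stand-ins): each variable stops being updated once its change falls below a tolerance, instead of every variable running for the same fixed number of iterations.

```python
def iterate_with_deactivation(values, update, tol=1e-4, max_iters=1000):
    """Update each variable until its own change is below tol, counting updates."""
    active = set(range(len(values)))
    updates_done = 0
    for _ in range(max_iters):
        if not active:
            break
        for i in list(active):
            new = update(i, values[i])
            if abs(new - values[i]) < tol:
                active.discard(i)   # converged: deactivate this node early
            values[i] = new
            updates_done += 1
    return values, updates_done

# each variable relaxes toward its own target at its own rate,
# so the two variables converge at different times
targets = [1.0, 5.0]
rates = [0.9, 0.5]   # variable 0 converges much faster than variable 1
update = lambda i, v: v + rates[i] * (targets[i] - v)
values, n = iterate_with_deactivation([0.0, 0.0], update)
# both variables reach their targets; the fast one stops being updated early,
# so fewer total updates are performed than with a shared iteration count
print([round(v, 3) for v in values], n)
```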