• Title/Summary/Keyword: topic model

Information Technology Application for Oral Document Analysis (구술문서 자료분석을 위한 정보검색기술의 응용)

  • Park, Soon-Cheol;Hahm, Han-Hee
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.2
    • /
    • pp.47-55
    • /
    • 2008
  • The purpose of this paper is to develop an analytical methodology for oral documents through the application of information technologies. The system consists of keyword search, content summarization, clustering, classification, and topic tracing. The integrated model of these five retrieval technologies can be applied exhaustively to the analysis of oral documents, which were collected as the oral history of five men and women in the North Jeolla area. Of the five methods, topic tracing is the most pioneering accomplishment both at home and abroad. Finally, this research will shed light on methodological and theoretical studies of oral history and culture.
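
A minimal sketch of the clustering step named in this abstract, assuming the transcripts are available as plain-text strings; the sample texts, cluster count, and parameter values are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch of the clustering step only; the transcripts, n_clusters,
# and all parameter choices are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

transcripts = [
    "oral history transcript one ...",
    "oral history transcript two ...",
    "oral history transcript three ...",
]

# Represent each transcript as a TF-IDF vector.
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(transcripts)

# Group transcripts into thematic clusters (k chosen arbitrarily here).
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)
print(labels)
```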

Research Trends Analysis of Big Data: Focused on the Topic Modeling (빅데이터 연구동향 분석: 토픽 모델링을 중심으로)

  • Park, Jongsoon;Kim, Changsik
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.15 no.1
    • /
    • pp.1-7
    • /
    • 2019
  • The objective of this study is to examine research trends in big data. Research abstracts were extracted from 4,019 articles published between 1995 and 2018 on Web of Science and were analyzed using topic modeling and time series analysis. The 20 single-term topics that appeared most frequently were as follows: model, technology, algorithm, problem, performance, network, framework, analytics, management, process, value, user, knowledge, dataset, resource, service, cloud, storage, business, and health. The 20 multi-term topics were as follows: sense technology architecture (T10), decision system (T18), classification algorithm (T03), data analytics (T17), system performance (T09), data science (T06), distribution method (T20), service dataset (T19), network communication (T05), customer & business (T16), cloud computing (T02), health care (T14), smart city (T11), patient & disease (T04), privacy & security (T08), research design (T01), social media (T12), student & education (T13), energy consumption (T07), supply chain management (T15). The time series analysis indicated that all 40 single-term and multi-term topics were hot topics. This study provides suggestions for future research.
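
The kind of topic modeling described above can be sketched roughly with gensim's LDA implementation; the sample abstracts and the small number of topics are assumptions for illustration, not the authors' pipeline:

```python
# Illustrative LDA topic modeling over abstracts with gensim; the sample
# abstracts and num_topics are toy-scale stand-ins for the 4,019 abstracts.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

abstracts = [
    "big data analytics framework for cloud storage",
    "machine learning model performance on large datasets",
    "privacy and security in health care data management",
]
tokenized = [doc.lower().split() for doc in abstracts]

dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3,
               random_state=0, passes=10)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```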

Current Studies to Estimate the Economic Values of Welfare-endowed Animal Products (동물복지형 축산물의 경제적 가치추정에 관한 연구 동향)

  • Jung, Yun-Pil;Roh, Sung-Hoon;Ohh, Sang-Jip;Lee, Jong-In
    • Journal of Animal Environmental Science
    • /
    • v.16 no.1
    • /
    • pp.29-40
    • /
    • 2010
  • The purpose of this study is to review current research on the economic value of livestock products produced under animal welfare standards. To this end, published research papers and reports from around the world were reviewed. The review shows that this topic has not yet been researched actively. The main focus of the existing studies was consumer surveys on meat and eggs. The data included questionnaires, Lexis-Nexis databases, meat consumption and price data, and auction data. The analytical tools included random parameters logit and latent class models, willingness-to-pay (WTP) analysis, the Rotterdam model, Pearson's chi-square test, the Mann-Whitney U-test, the Kruskal-Wallis test, structural equation models, regression models, target costing, and conjoint analysis.

A Deep Learning-based Depression Trend Analysis of Korean on Social Media (딥러닝 기반 소셜미디어 한글 텍스트 우울 경향 분석)

  • Park, Seojeong;Lee, Soobin;Kim, Woo Jung;Song, Min
    • Journal of the Korean Society for Information Management
    • /
    • v.39 no.1
    • /
    • pp.91-117
    • /
    • 2022
  • The number of depressed patients in Korea and around the world is rapidly increasing every year. However, most mentally ill patients are not aware that they are suffering from the disease, so adequate treatment is not provided. If depressive symptoms are neglected, they can lead to suicide, anxiety, and other psychological problems. Therefore, early detection and treatment of depression are very important for improving mental health. To address this problem, this study presents a deep learning-based depression tendency model using Korean social media text. After collecting data from Naver KnowledgeiN, Naver Blog, Hidoc, and Twitter, the DSM-5 major depressive disorder diagnostic criteria were used to classify and annotate classes according to the number of depressive symptoms. Afterwards, TF-IDF analysis and word co-occurrence analysis were performed to examine the characteristics of each class of the constructed corpus. In addition, word embedding, dictionary-based sentiment analysis, and LDA topic modeling were performed to generate a depression tendency classification model using various text features. Through this, the embedded text, sentiment score, and topic number for each document were calculated and used as text features. As a result, the highest accuracy of 83.28% was achieved when depression tendency was classified with the KorBERT-based algorithm by combining both the sentiment score and the topic of the document with the embedded text. This study establishes a classification model for Korean depression tendency with improved performance using various text features, and by detecting potential depressive patients early among Korean online community users it enables rapid treatment and prevention, thereby helping to promote the mental health of Korean society.
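
A hedged sketch of the feature-combination idea (document embedding plus sentiment score plus topic number fed to a classifier); the random vectors and the logistic regression below stand in for the paper's KorBERT embeddings, dictionary-based sentiment scores, and LDA topics:

```python
# Sketch of the feature-combination idea only: concatenate a document
# embedding with a sentiment score and a topic id, then classify.
# All values are random stand-ins, not the study's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs, embed_dim = 200, 64

doc_embeddings = rng.normal(size=(n_docs, embed_dim))    # stand-in for KorBERT output
sentiment_scores = rng.uniform(-1, 1, size=(n_docs, 1))  # stand-in for dictionary scores
topic_ids = rng.integers(0, 10, size=(n_docs, 1))        # stand-in for LDA topic numbers
labels = rng.integers(0, 3, size=n_docs)                 # depression tendency classes

# Combine embedding, sentiment score, and topic id into one feature matrix.
X = np.hstack([doc_embeddings, sentiment_scores, topic_ids])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))
```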

Analyzing Changes in Consumers' Interest Areas Related to Skin under the Pandemic: Focusing on Structural Topic Modeling (팬데믹에 따른 소비자의 피부 관련 관심 영역 변화 분석: 구조적 토픽모델링을 중심으로)

  • Nakyung Kim;Jiwon Park;HyungBin Moon
    • Knowledge Management Research
    • /
    • v.25 no.1
    • /
    • pp.173-192
    • /
    • 2024
  • This study aims to understand the changes in the beauty industry due to the pandemic from the consumer's perspective, based on consumers' online opinions about their skin before and after the pandemic. Furthermore, this study tries to derive strategies for companies and governments to support sustainable growth and innovation in the beauty industry. To this end, posts on social media from 2017 to 2022 that contained the keyword 'skin concerns' are collected, and after data preprocessing, 96,908 posts are used for the structural topic model. To examine whether consumers' interest areas related to skin change according to the pandemic situation, the analysis period is divided into 7 periods, and the variables that distinguish each period are used as meta-variables for the structural topic model. As a result, it is found that consumers' interests can be divided into 22 topics, which fall into four main categories: beauty manufacturing, beauty services, skin concerns, and others. The results of this study are expected to be utilized in the construction of product development and marketing strategies by related companies and in the establishment of economic support policies by the government in response to changes in demand in the beauty industry due to the pandemic.
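
Structural topic models are usually fitted with the R 'stm' package; as a rough Python proxy for the period-covariate idea described above, one can fit LDA and compare average topic prevalence across periods. The posts and period labels below are invented stand-ins:

```python
# Rough proxy for using the period as a meta-variable: fit LDA, then compare
# average topic prevalence per period. Posts and periods are illustrative only.
from collections import defaultdict
from gensim.corpora import Dictionary
from gensim.models import LdaModel

posts = [("skin dryness mask irritation", 1),
         ("sunscreen routine outdoor skin concerns", 2),
         ("home care device skin trouble", 3)]

tokenized = [text.split() for text, _ in posts]
dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(t) for t in tokenized]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)

# Average topic proportion per period, a crude stand-in for estimating
# topic prevalence as a function of the period meta-variable.
prevalence = defaultdict(lambda: defaultdict(list))
for (text, period), bow in zip(posts, corpus):
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        prevalence[period][topic_id].append(prob)

for period, topics in sorted(prevalence.items()):
    print(period, {t: sum(p) / len(p) for t, p in topics.items()})
```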

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.1-17
    • /
    • 2015
  • In this paper, we report what we have observed with regard to a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise enlisted men. In spite of their efforts, many commanders have failed to prevent accidents caused by their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, it is hard to prevent accidents, so a proper method must be found. Our motivation for presenting this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings. This requires considerable time and effort, and the results differ significantly depending on the capabilities of the commanders. In this paper, we seek to predict accidents with objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper concerns the application of data mining to identify soldiers' interests, predict accidents, and make use of internal and external (SNS) data. We propose both a topic analysis and a decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS of enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. In order to analyze the SNS data, we require tools such as text mining and topic analysis. We used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: collecting, topic analysis, and converting the topic analysis results into points used as independent variables. In the first phase, we collect enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them. For simplicity, 5 topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, using these 5 topics, we quantify them as personal points. After quantifying the topics, we add these results to the independent variables, which are composed of 15 internal data sets. Then, we build two decision trees. The first tree is composed of internal data only. The second tree is composed of external data (SNS) as well as internal data. After that, we compare the misclassification results from SAS E-Miner. The first model's misclassification rate is 12.1%, whereas the second model's is 7.8%. The second method therefore predicts accidents with an accuracy of approximately 92%. The gap between the two models is 4.3%. Finally, we test whether the difference between them is meaningful using the McNemar test. The result of the test is statistically significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data. Additionally, various independent variables used in the decision tree model are treated as categorical rather than continuous variables, so some information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military. This study is expected to contribute to the prevention of accidents in the military, based on scientific analysis of enlisted men and their proper management.
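
A sketch of the model comparison described above: one decision tree on internal features only, one that also uses topic-derived SNS features, and McNemar's test on their predictions. The synthetic data below are stand-ins, not the study's dataset:

```python
# Compare a decision tree on internal features with one that adds topic-based
# SNS features, then apply McNemar's test to their predictions.
# All data are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 500
internal = rng.normal(size=(n, 15))      # 15 internal record features
topic_points = rng.normal(size=(n, 5))   # 5 topic scores derived from SNS text
y = rng.integers(0, 2, size=n)           # accident occurred or not

X1_tr, X1_te, X2_tr, X2_te, y_tr, y_te = train_test_split(
    internal, np.hstack([internal, topic_points]), y, random_state=0)

pred1 = DecisionTreeClassifier(random_state=0).fit(X1_tr, y_tr).predict(X1_te)
pred2 = DecisionTreeClassifier(random_state=0).fit(X2_tr, y_tr).predict(X2_te)

# 2x2 table of agreement/disagreement in correctness for McNemar's test.
c1, c2 = pred1 == y_te, pred2 == y_te
table = [[np.sum(c1 & c2), np.sum(c1 & ~c2)],
         [np.sum(~c1 & c2), np.sum(~c1 & ~c2)]]
print(mcnemar(table, exact=False))
```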

A Study of Research on Methods of Automated Biomedical Document Classification using Topic Modeling and Deep Learning (토픽모델링과 딥 러닝을 활용한 생의학 문헌 자동 분류 기법 연구)

  • Yuk, JeeHee;Song, Min
    • Journal of the Korean Society for Information Management
    • /
    • v.35 no.2
    • /
    • pp.63-88
    • /
    • 2018
  • This research evaluated differences in classification performance across feature selection methods, using the LDA topic model and Doc2Vec, a deep-learning-based word embedding method, while varying feature corpus sizes and classification algorithms. In addition, to find the feature corpus with the highest classification performance, an experiment was conducted in which the feature corpus was composed differently according to the location within the document and its size was adjusted. In the deep learning experiments, the training frequency and the information considered for context inference were also evaluated. This study constructed a biomedical document dataset, Disease-35083, which consists of biomedical scholarly documents provided by PMC and categorized by disease. Throughout the study, this research verifies which type and size of feature corpus produces the highest performance, and also suggests feature corpora that offer extensibility to specific features by showing efficiency during training time. Additionally, this research compares deep learning with existing methods and suggests an appropriate method for each classification environment.
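
One of the two feature-construction methods compared here, Doc2Vec, can be sketched with gensim; the documents, vector size, and training epochs below are illustrative assumptions rather than the study's settings:

```python
# Minimal Doc2Vec sketch: learn a fixed-length vector per document, which
# can then feed a downstream classifier. Documents and parameters are toy-scale.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    "gene expression analysis in cardiovascular disease",
    "clinical trial outcomes for diabetes treatment",
    "imaging biomarkers of neurodegenerative disease",
]
tagged = [TaggedDocument(words=d.lower().split(), tags=[i])
          for i, d in enumerate(docs)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40, seed=0)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

# Inspect the learned vector for the first document.
print(model.dv[0][:5])
```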

Research Trends Investigation Using Text Mining Techniques: Focusing on Social Network Services (텍스트마이닝을 활용한 연구동향 분석: 소셜네트워크서비스를 중심으로)

  • Yoon, Hyejin;Kim, Chang-Sik;Kwahk, Kee-Young
    • Journal of Digital Contents Society
    • /
    • v.19 no.3
    • /
    • pp.513-519
    • /
    • 2018
  • The objective of this study was to examine research trends on social network services. The abstracts of 308 articles published between 1994 and 2016 were extracted from the Web of Science database. Time series analysis and topic modeling, two text mining techniques, were implemented. The topic modeling results showed that the research mainly covered 20 topics: trust, support, satisfaction model, organization governance, mobile system, internet marketing, college student effect, opinion diffusion, customer, information privacy, health care, web collaboration, method, learning effectiveness, knowledge, individual theory, child support, algorithm, media participation, and context system. The time series regression results indicated that trust, support, satisfaction model, and the remaining topics were all hot topics. This study also provides suggestions for future research.
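
The hot-topic judgment via time series regression can be illustrated by regressing a topic's yearly share on the year; the yearly shares below are made-up numbers, not values from the study:

```python
# Illustrative hot-topic check: regress a topic's yearly share on the year
# and inspect the slope and its p-value. The shares are synthetic.
import numpy as np
import statsmodels.api as sm

years = np.arange(1994, 2017)
rng = np.random.default_rng(0)
topic_share = 0.01 * (years - 1994) + rng.normal(0, 0.02, len(years))

X = sm.add_constant(years)
result = sm.OLS(topic_share, X).fit()

# A significantly positive slope is treated as a "hot" (rising) topic.
print(result.params[1], result.pvalues[1])
```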

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies
    • /
    • v.14 no.4
    • /
    • pp.59-73
    • /
    • 2009
  • Keyword extraction is an important and essential technique for text mining applications such as information retrieval, text categorization, summarization, and topic detection. A set of keywords extracted from large-scale electronic document data is used as significant features for text mining algorithms and contributes to improving the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain from a large document collection of internet news portal sites. Basically, we have used six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we have analyzed the usefulness of keywords extracted from Korean news articles and have presented the changes of the keywords of each news domain over time.
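
A baseline TF-IDF keyword ranking per news domain, as a starting point for the technique described above; the paper's six TF-IDF variants and its cross-domain comparison filtering are not reproduced here, and the toy articles are invented:

```python
# Baseline TF-IDF keyword ranking per news domain; the six weighting variants
# and cross-domain comparison filtering from the paper are not reproduced.
from sklearn.feature_extraction.text import TfidfVectorizer

domain_articles = {
    "economy": "stock market interest rate inflation exports",
    "sports":  "league match goal season player transfer",
}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(domain_articles.values())
terms = vectorizer.get_feature_names_out()

# Rank terms by TF-IDF weight within each domain.
for domain, row in zip(domain_articles, X.toarray()):
    top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
    print(domain, top)
```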

Predicting Bug Severity by utilizing Topic Model and Bug Report Meta-Field (토픽 모델과 버그 리포트 메타 필드를 이용한 버그 심각도 예측 방법)

  • Yang, Geunseok;Lee, Byungjeong
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.9
    • /
    • pp.616-621
    • /
    • 2015
  • Recently developed software systems have many components, and their complexity is thus increasing. Last year, about 375 bug reports per day were submitted to the software repositories of the Eclipse and Mozilla open source projects. With so many bug reports submitted, the time and effort required of developers increase unnecessarily. Since bug severity is determined manually by quality assurance staff, project managers, or other developers in the general bug fixing process, the decision is biased toward them. They may also make mistakes in this manual decision because of the large number of bug reports. Therefore, in this study, we propose an approach to bug severity prediction that addresses these problems. First, we find topics similar to a new bug report and reduce the candidate reports for each topic by using the meta fields of the bug report. Next, we train on the reduced set of reports by applying Multinomial Naive Bayes. Finally, we predict the severity of the new bug report. We compare our approach with other prediction algorithms using bug reports from open source projects. The results show that our approach predicts bug severity better than the other algorithms.
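
A sketch of the final classification step only (Multinomial Naive Bayes over the reduced candidate reports); the topic-similarity and meta-field reduction steps are omitted, and the reports and severity labels are invented examples:

```python
# Train Multinomial Naive Bayes on candidate bug reports and predict the
# severity of a new report. Reports and labels are invented examples; the
# topic-based and meta-field-based candidate reduction steps are not shown.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

candidate_reports = [
    "crash on startup null pointer exception",
    "minor typo in preferences dialog label",
    "data loss when saving large workspace",
]
severities = ["critical", "trivial", "major"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(candidate_reports, severities)

new_report = "application crashes with null pointer on launch"
print(model.predict([new_report])[0])
```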