• Title/Summary/Keyword: topic cluster

Selection of Cluster Topic Words in Hierarchical Clustering using K-Means Algorithm

  • Lee Shin Won; Yi Sang Seon; An Dong Un; Chung Sung Jong
    • Proceedings of the IEEK Conference / 2004.08c / pp.885-889 / 2004
  • Fast, high-quality document clustering algorithms play an important role in data exploration by organizing large amounts of information into a small number of meaningful clusters. Hierarchical clustering improves retrieval performance and makes the results easier for users to understand. To improve clustering quality, we built a hierarchical structure with variety and readability by carefully selecting cluster topic words and deciding the number of clusters dynamically. Selecting topic words is important because the hierarchical clustering structure summarizes the search results. We chose noun words as cluster topic words. The quality of the topic words increased by 33% with the following scheme: only noun words are extracted as topic words for the top-level clusters, and topic words already used by a parent cluster are not reused in its child clusters.
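
A minimal sketch of the scheme described above: two-level K-means clustering in which each cluster is labelled by its highest-weight term and child clusters may not reuse their parent's topic word. The TF-IDF weighting, the fixed cluster counts, and the omission of the noun-only (POS) filter are simplifying assumptions, not the authors' implementation.

```python
# Hypothetical sketch: two-level K-means clustering where each cluster is
# labelled with its highest-weight term, and child clusters may not reuse
# the topic word already assigned to their parent.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def topic_word(X, rows, terms, used):
    """Pick the heaviest term of a cluster, skipping words already used."""
    weights = np.asarray(X[rows].sum(axis=0)).ravel()
    for idx in weights.argsort()[::-1]:
        if terms[idx] not in used:
            return terms[idx]
    return terms[weights.argmax()]

def hierarchical_topics(docs, k_top=3, k_child=2):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    terms = vec.get_feature_names_out()
    top = KMeans(n_clusters=k_top, n_init=10, random_state=0).fit_predict(X)
    tree = {}
    for c in range(k_top):
        rows = np.where(top == c)[0]
        parent = topic_word(X, rows, terms, used=set())
        children = []
        if len(rows) >= k_child:
            sub = KMeans(n_clusters=k_child, n_init=10,
                         random_state=0).fit_predict(X[rows])
            for s in range(k_child):
                children.append(topic_word(X, rows[sub == s], terms, used={parent}))
        tree[parent] = children
    return tree
```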

Enhancing Document Clustering Method using Synonym of Cluster Topic and Similarity (군집 주제의 유의어와 유사도를 이용한 문서군집 향상 방법)

  • Park, Sun; Kim, Kyung-Jun; Lee, Jin-Seok; Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.30-38 / 2011
  • This paper proposes a new method for enhancing document clustering using synonyms of the cluster topic and a similarity measure. The proposed method represents the inherent structure of the document cluster set well by selecting cluster topic terms based on semantic features extracted by non-negative matrix factorization (NMF). It mitigates the "bag of words" problem by expanding the cluster topic terms with synonyms from WordNet. It also improves clustering quality by using the cosine similarity between the expanded cluster topic terms and the documents to assign each document to the appropriate cluster. Experimental results demonstrate that the proposed method outperforms other document clustering methods.
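
A rough Python sketch of the pipeline outlined above: NMF semantic features select cluster topic terms, WordNet synonyms expand them, and cosine similarity between the expanded topic terms and the documents assigns each document to a cluster. Function names, parameter values, and the TF-IDF representation are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch: NMF-based topic terms, expanded with WordNet synonyms,
# then each document is assigned to the cluster whose expanded topic vector
# is most cosine-similar. Requires: nltk.download('wordnet')
from nltk.corpus import wordnet
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_with_synonyms(docs, n_clusters=3, top_terms=5):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    terms = vec.get_feature_names_out()
    H = NMF(n_components=n_clusters, init="nndsvd", random_state=0).fit(X).components_

    topic_vectors = []
    for row in H:                                    # one semantic feature per cluster
        top = [terms[i] for i in row.argsort()[::-1][:top_terms]]
        expanded = set(top)
        for t in top:                                # expand topic terms with WordNet synonyms
            for syn in wordnet.synsets(t):
                expanded.update(l.replace("_", " ").lower() for l in syn.lemma_names())
        topic_vectors.append(" ".join(expanded))

    T = vec.transform(topic_vectors)                 # expanded topics in the same term space
    sims = cosine_similarity(X, T)                   # document-to-topic similarity
    return sims.argmax(axis=1)                       # cluster label per document
```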

Current Research Trends in Entrepreneurship Based on Topic Modeling and Keyword Co-occurrence Analysis: 2002~2021 (토픽모델링과 동시출현단어 분석을 이용한 기업가정신에 대한 연구동향 분석: 2002~2021)

  • Jang, Sung Hee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.17 no.3 / pp.245-256 / 2022
  • The purpose of this study is to provide comprehensive insights into current research trends in entrepreneurship based on topic modeling and keyword co-occurrence analysis. The study queried the Web of Science database with 'entrepreneurship' and collected 14,953 research articles published between 2002 and 2021. The R program was used for topic modeling and the VOSviewer program for keyword co-occurrence analysis. The results are as follows. First, the keyword co-occurrence analysis yielded five clusters: an entrepreneurship and innovation cluster, an entrepreneurship education cluster, a social entrepreneurship and sustainability cluster, an enterprise performance cluster, and a knowledge and technology transfer cluster. Second, the topic modeling analysis identified 12 topics, including start-up environment and economic development, international entrepreneurship, venture capital, government policy and support, social entrepreneurship, management-related issues, regional city planning and development, entrepreneurship research, and entrepreneurial intention. Finally, the study identified two hot topics (venture capital and entrepreneurial intention) and one cold topic (international entrepreneurship). These results are useful for understanding current research trends in entrepreneurship and provide insights for future entrepreneurship research.
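
The keyword co-occurrence step can be sketched as below. The study itself used the VOSviewer program, so this Python version with networkx community detection is only an illustrative stand-in, and the keyword lists are placeholders.

```python
# Hypothetical sketch of keyword co-occurrence analysis: build a co-occurrence
# network from per-article keyword lists and cluster it with community detection
# (standing in for VOSviewer's clustering).
from collections import Counter
from itertools import combinations
import networkx as nx

def cooccurrence_graph(keyword_lists, min_weight=2):
    pair_counts = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):   # each unordered keyword pair per article
            pair_counts[(a, b)] += 1
    G = nx.Graph()
    for (a, b), w in pair_counts.items():
        if w >= min_weight:
            G.add_edge(a, b, weight=w)
    return G

# Placeholder usage with made-up keyword lists.
articles = [["entrepreneurship", "innovation", "smes"],
            ["entrepreneurship", "education", "intention"],
            ["social entrepreneurship", "sustainability", "innovation"]]
G = cooccurrence_graph(articles, min_weight=1)
clusters = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
```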

Is Text Mining on Trade Claim Studies Applicable? Focused on Chinese Cases of Arbitration and Litigation Applying the CISG

  • Yu, Cheon; Choi, DongOh; Hwang, Yun-Seop
    • Journal of Korea Trade / v.24 no.8 / pp.171-188 / 2020
  • Purpose - This is an exploratory study that aims to apply text mining techniques, which computationally extract words from large-scale text data, to legal documents in order to quantify the content of trade claims and enable statistical analysis. Design/methodology - The study is designed to verify the validity of text mining techniques as a quantitative methodology for trade claim studies, which have relied mainly on qualitative approaches. The subjects are 81 Chinese arbitration awards and court judgments applying the CISG, published on the UNCITRAL website. Validation is performed by comparing a manually produced analysis with an automated one. The manual result is a cluster analysis in which the researcher reads and codes each case; the automated result applies text mining techniques to the outcome of that cluster analysis, with topic modeling and semantic network analysis used for the statistical approach. Findings - The results show that the cluster analysis and the text mining results are consistent with each other, confirming internal validity. Words that play a key role in a topic show high degree centrality, words useful for grasping a topic show high betweenness centrality, and important words within a topic show high eigenvector centrality. This indicates that text mining techniques can be applied to content analysis of trade claims for statistical purposes. Originality/value - First, the validity of text mining in the study of trade claim cases is confirmed; prior studies on trade claims have relied on the traditional approach. Second, the study is original in attempting a quantitative analysis of trade claim cases, which have mainly been studied with qualitative methods. Lastly, the study shows that text mining can lower the barrier to extracting information from large amounts of digitized text.
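
The centrality measures reported in the findings (degree, betweenness, and eigenvector centrality) can be computed on a word network roughly as below; the graph construction and the placeholder edge list are assumptions for illustration, not the study's data.

```python
# Hypothetical sketch: centrality measures on a word co-occurrence network,
# as used in the semantic network analysis step.
import networkx as nx

def word_centralities(edges):
    """edges: iterable of (word_a, word_b, co-occurrence count)."""
    G = nx.Graph()
    G.add_weighted_edges_from(edges)
    return {
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, weight="weight", max_iter=1000),
    }

# Placeholder edge list standing in for the coded claim cases.
edges = [("delivery", "conformity", 4), ("conformity", "avoidance", 3),
         ("avoidance", "damages", 2), ("delivery", "damages", 1)]
cent = word_centralities(edges)
```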

An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo Hae-Sung; Choi Young-Soo; Choi Kyung-Hee; Jung Gi-Hyun; Noh Sang-Uk
    • Journal of Internet Computing and Services / v.7 no.3 / pp.155-167 / 2006
  • It is desirable for users surfing the Internet to find Web pages that match their interests as closely as possible. Toward this end, this paper presents a topic-specific Web crawler that computes the degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related Web pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested the crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy using the rule set compiled by CN2 was the best among those of CN2, C4.5, and back-propagation learning algorithms. Second, we measured classification efficiency to determine the best threshold value affecting the degree of relevance. In the third experiment, the consistency of the crawler was measured in terms of the number of resulting URLs that overlapped when crawling started from different URLs. The experimental results imply that the crawler was fairly consistent regardless of the randomly chosen starting URLs.
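
A toy illustration of the relevance computation such a crawler performs: score a fetched page against topic terms using term frequency weighted by document frequency, and keep the page only above a threshold. The scoring formula and threshold are simplifications; the paper's entropy measure and CN2-compiled rules are not reproduced here.

```python
# Hypothetical sketch of the crawler's relevance filter: a page is scored by
# how strongly topic terms occur in it (term frequency weighted by inverse
# document frequency) and kept only if the score passes a threshold.
import math
import re
from collections import Counter

def relevance(page_text, topic_terms, doc_freq, n_docs, threshold=0.1):
    tokens = re.findall(r"[a-z]+", page_text.lower())
    tf = Counter(tokens)
    score = 0.0
    for term in topic_terms:
        if tf[term]:
            idf = math.log(n_docs / (1 + doc_freq.get(term, 0)))
            score += (tf[term] / len(tokens)) * idf
    return score, score >= threshold

# Placeholder usage with made-up document-frequency statistics.
df = {"cluster": 120, "topic": 300, "crawler": 40}
score, keep = relevance("Topic specific crawler collects cluster pages ...",
                        ["topic", "cluster", "crawler"], df, n_docs=1000)
```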

Topic Analysis of Scholarly Communication Research

  • Ji, Hyun; Cha, Mikyeong
    • Journal of Information Science Theory and Practice / v.9 no.2 / pp.47-65 / 2021
  • This study aims to identify the specific topics, trends, and structural characteristics of scholarly communication research, based on 1,435 articles published from 1970 to 2018 in the Scopus database, using Latent Dirichlet Allocation topic modeling, time series analysis, and network analysis to examine topics, trends, and structures, respectively. The results are summarized in three sets. First, scholarly communication research comprises nineteen specific topics, including research resource management and research data, and their research proportions are even. Second, the time series analysis found three upward-trending topics (Topic 6: Open Access Publishing, Topic 7: Green Open Access, and Topic 19: Informal Communication) and two downward-trending topics (Topic 11: Researcher Network and Topic 12: Electronic Journal). Third, the network analysis indicated that topics with high mean profile association were related to institutions, and topics with high triangle betweenness centrality, such as Topic 14: Research Resource Management, shared the citation context. In addition, cluster analysis using parallel nearest neighbor clustering identified six clusters connected with different concepts.
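
The upward/downward trend classification can be approximated by regressing each topic's yearly share on the year, as in the sketch below; the LDA configuration and the slope test are illustrative assumptions rather than the authors' exact procedure.

```python
# Hypothetical sketch: fit LDA, aggregate topic proportions by year, and call
# a topic "upward" or "downward" from the sign of a fitted linear slope.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def topic_trends(abstracts, years, n_topics=19):
    X = CountVectorizer(stop_words="english", max_features=5000).fit_transform(abstracts)
    doc_topic = LatentDirichletAllocation(n_components=n_topics,
                                          random_state=0).fit_transform(X)
    years = np.asarray(years)
    unique_years = np.unique(years)
    trends = {}
    for t in range(n_topics):
        yearly = [doc_topic[years == y, t].mean() for y in unique_years]
        slope = np.polyfit(unique_years, yearly, deg=1)[0]   # linear trend of topic share
        trends[t] = "upward" if slope > 0 else "downward"
    return trends
```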

The Analysis of Knowledge Structure using Co-word Method in Quality Management Field (동시단어분석을 이용한 품질경영분야 지식구조 분석)

  • Park, Man-Hee
    • Journal of Korean Society for Quality Management / v.44 no.2 / pp.389-408 / 2016
  • Purpose: This study was designed to analyze changes in the knowledge structure and trends in research topics in the quality management field. Methods: The network structure and knowledge structure of the words were visualized in map form using co-word analysis, cluster analysis, and a strategic diagram. Results: The research results are summarized as follows. First, the word network derived from the co-occurrence matrix had 106 nodes and 5,314 links, with a density of 0.95. The average betweenness centrality of the word network was 2.37, and the average closeness centrality and average eigenvector centrality were both 0.01. Second, by applying an optimal cluster-number criterion and the K-means algorithm to the word co-occurrence matrix, the 106 words were grouped into seven clusters: standard & efficiency, product design, reliability, control chart, quality model, 6 sigma, and service quality. Conclusion: According to the strategic diagram analysis over time, the traditional quality management research topics related to reliability, 6 sigma, and control charts, located in the third quadrant, declined in research importance. Research topics related to product design and customer satisfaction remained important over the analysis periods. The research topic related to management innovation was in an emerging state, and the scope of the topics related to the process model extended to topics involving system performance. The research topic related to service quality, located in the first quadrant, was identified as the key research topic.
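
A compact sketch of the strategic-diagram step: each word cluster is placed in a quadrant by its internal link density and its external centrality relative to the medians. The density and centrality definitions here follow common co-word conventions and may differ in detail from the paper's.

```python
# Hypothetical sketch: place co-word clusters on a strategic diagram using
# internal link density and external centrality, split at the medians.
import numpy as np
import networkx as nx

def strategic_diagram(G, clusters):
    """G: weighted word co-occurrence graph; clusters: dict name -> set of words."""
    stats = {}
    for name, words in clusters.items():
        internal = [d["weight"] for u, v, d in G.subgraph(words).edges(data=True)]
        external = [d["weight"] for u, v, d in G.edges(words, data=True)
                    if (u in words) != (v in words)]      # edges leaving the cluster
        density = np.mean(internal) if internal else 0.0
        centrality = sum(external)
        stats[name] = (centrality, density)
    med_c = np.median([c for c, _ in stats.values()])
    med_d = np.median([d for _, d in stats.values()])
    return {name: (("high" if c > med_c else "low") + " centrality, " +
                   ("high" if d > med_d else "low") + " density")
            for name, (c, d) in stats.items()}
```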

An Informetric Analysis of Topics in University's General Education (대학 교양교육 주제영역의 계량적 분석연구)

  • Choi, Sanghee
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.26 no.4 / pp.245-262 / 2015
  • As the topics of general education in universities become more diverse, identifying the topics of general education courses is not an easy task. This study aims to identify and visualize the topics of one university's general education courses using informetric analysis methods. A total of 214 syllabi were collected, and their titles, course introductions, goals, and weekly plans were analyzed. From this data set, 278 topic words were extracted and grouped into 8 clusters. In the network analysis, the topic clusters divided into two areas, personal and social. The personal area has 14 sub-topic clusters and the social area has 11 sub-topic clusters. In the personal area, 'language', 'science', and 'personality' were the major topic clusters. In the social area, the 'multi-culture' cluster was the core cluster, connected to four other clusters. The topic network generated in this study can be used by the university and the university library to enhance general education or to develop collections for general education.
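
A rough sketch of the informetric step described above: extract topic words from the syllabus texts and group them into clusters by clustering their document-occurrence profiles. The vectorizer settings, the clustering algorithm, and the counts are illustrative assumptions, not the study's configuration.

```python
# Hypothetical sketch: extract topic words from syllabus texts and group them
# into clusters by clustering the term vectors (each term described by the
# documents it appears in).
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_topic_words(syllabi, n_words=278, n_clusters=8):
    vec = TfidfVectorizer(stop_words="english", max_features=n_words)
    X = vec.fit_transform(syllabi)          # documents x terms
    terms = vec.get_feature_names_out()
    term_vectors = X.T.toarray()            # one row per term
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(term_vectors)
    return {c: [t for t, l in zip(terms, labels) if l == c] for c in range(n_clusters)}
```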

An Analysis of the Research Trends for Urban Study using Topic Modeling (토픽모델링을 이용한 도시 분야 연구동향 분석)

  • Jang, Sun-Young; Jung, Seunghyun
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.661-670 / 2021
  • Research trend analysis can be used to determine the importance of research topics by period, identify under-researched fields, and discover new fields. In this study, research trends for urban spaces, where various problems are arising from population concentration and urbanization, were analyzed by topic modeling. The analysis targets were the abstracts of papers listed in the Korea Citation Index (KCI) published between 2002 and 2019. Topic modeling is an algorithm-based text mining technique that can discover patterns across an entire corpus and lends itself readily to clustering. In this study, keyword frequency, trends by year, topic derivation, clusters by topic, and trends by topic type were analyzed. Research on urban regeneration is increasing continuously and was identified as a field whose detailed topics could expand further; urban regeneration is now becoming an established research field. On the other hand, topics related to development/growth and energy/environment have entered a stagnation period. This study is meaningful in that the correlations and trends among keywords were analyzed using topic modeling across all domestic urban studies.
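
A minimal sketch of the topic-derivation step: fit an LDA model to the abstracts and list the top keywords of each topic. The vectorizer settings and topic count are placeholders, not the study's configuration.

```python
# Hypothetical sketch: derive topics from a corpus of abstracts and list the
# top keywords of each topic, the basic step behind the trend analysis above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def derive_topics(abstracts, n_topics=10, n_top_words=10):
    vec = CountVectorizer(stop_words="english", max_df=0.95, min_df=2)
    X = vec.fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    return {t: [terms[i] for i in comp.argsort()[::-1][:n_top_words]]
            for t, comp in enumerate(lda.components_)}
```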

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has driven vigorous development of related technologies and tools, while the development of IT and the growing penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, attempts to acquire insights through data analysis are continuously increasing, and big data analysis will become even more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who requested it. However, growing interest in big data analysis is stimulating computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering, data analysis technology is spreading, and big data analysis is increasingly expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new Web platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis, and among the many text mining techniques used for research, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents corresponding to each issue, and provides the identified documents as a cluster. It is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This leads to long analysis times when topic modeling is applied to many documents, and to a scalability problem: processing time increases sharply with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
    Because of these difficulties, this approach has not been studied as thoroughly as other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling of the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
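
A minimal end-to-end sketch of the divide-and-conquer idea described above: fit topic models on each local set, build a reduced global set (RGS) from each local topic's most representative documents, fit a topic model on the RGS, and map local topics to RGS topics by the similarity of their topic-word distributions. The names, the cosine-similarity mapping, and the delegate-selection rule are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch of divide-and-conquer topic modeling with global/local
# topic mapping: local LDA models per sub-cluster, a reduced global set (RGS)
# of delegate documents, an RGS-level LDA, and a mapping from each local topic
# to its most similar RGS topic (cosine similarity of topic-word rows).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def map_local_to_global(local_sets, n_topics=5, delegates_per_topic=3):
    vec = CountVectorizer(stop_words="english")
    vec.fit([doc for local in local_sets for doc in local])    # shared vocabulary

    rgs_docs, local_topic_words = [], []
    for local in local_sets:
        X = vec.transform(local)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        doc_topic = lda.fit_transform(X)
        local_topic_words.append(lda.components_)               # topic-word matrix of this local set
        for t in range(n_topics):                                # delegates: strongest docs per topic
            best = np.argsort(doc_topic[:, t])[::-1][:delegates_per_topic]
            rgs_docs.extend(local[i] for i in best)

    rgs_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    rgs_lda.fit(vec.transform(rgs_docs))

    mapping = {}
    for s, H in enumerate(local_topic_words):
        sims = cosine_similarity(H, rgs_lda.components_)         # local topics vs. RGS topics
        for t in range(n_topics):
            mapping[(s, t)] = int(sims[t].argmax())
    return mapping
```

Checking whether a document lands in the same topic under the local model and under the mapped RGS topic would then serve as the accuracy measure the abstract describes.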