• Title/Summary/Keyword: word clustering (단어 클러스터링)


Automatic Construction of Reduced Dimensional Cluster-based Keyword Association Networks using LSI (LSI를 이용한 차원 축소 클러스터 기반 키워드 연관망 자동 구축 기법)

  • Yoo, Han-mook; Kim, Han-joon; Chang, Jae-young
    • Journal of KIISE / v.44 no.11 / pp.1236-1243 / 2017
  • In this paper, we propose a novel way of producing keyword networks, named LSI-based ClusterTextRank, which extracts significant keywords from a set of clusters with a mutual information metric and constructs an association network using latent semantic indexing (LSI). The proposed method reduces the dimensionality of documents through LSI, decomposes the documents into multiple clusters through k-means clustering, and expresses the words within each cluster as a maximal spanning tree graph. Significant keywords are identified by evaluating their mutual information within clusters. The method then calculates the similarities between the extracted keywords using the term-concept matrix, and the results are represented as a keyword association network. To evaluate the performance of the proposed method, we used travel-related blog data and showed that the proposed method outperforms the existing TextRank algorithm by about 14% in terms of accuracy.
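
The pipeline this abstract outlines (LSI via truncated SVD, then k-means over the reduced vectors) can be approximated with standard tooling. Below is a minimal sketch using scikit-learn; the corpus, the LSI rank, and the cluster count are placeholder assumptions, not the authors' settings.

```python
# Minimal LSI + k-means sketch (scikit-learn, not the paper's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Placeholder corpus standing in for the travel-blog data.
docs = ["beach trip with great food",
        "mountain hike and camping trip",
        "street food tour in the old town",
        "camping gear for a mountain trail"]

X = TfidfVectorizer().fit_transform(docs)          # term-document weights
lsi = TruncatedSVD(n_components=2, random_state=0)
X_lsi = lsi.fit_transform(X)                       # reduced-dimension documents

# Decompose the documents into clusters, as the abstract describes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_lsi)
print(labels)
```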

Analyzing Self-Introduction Letter of Freshmen at Korea National College of Agricultural and Fisheries by Using Semantic Network Analysis : Based on TF-IDF Analysis (언어네트워크분석을 활용한 한국농수산대학 신입생 자기소개서 분석 - TF-IDF 분석을 기초로 -)

  • Joo, J.S.; Lee, S.Y.; Kim, J.S.; Kim, S.H.; Park, N.B.
    • Journal of Practical Agriculture & Fisheries Research / v.23 no.1 / pp.89-104 / 2021
  • Based on TF-IDF weights, which evaluate the importance of key words, a semantic network analysis (SNA) was conducted on the self-introduction letters of freshmen at Korea National College of Agriculture and Fisheries (KNCAF) in 2020. The top three words by TF-IDF weight were agriculture, mathematics, study (Q. 1); clubs, plants, friends (Q. 2); friends, clubs, opinions (Q. 3); and mushrooms, insects, fathers (Q. 4). Among the relationships between words, the words with high betweenness centrality were reason, high school, attending (Q. 1); garbage, high school, school (Q. 2); importance, misunderstanding, completion (Q. 3); and processing, feed, farmhouse (Q. 4). The words with high degree centrality were high school, inquiry, grades (Q. 1); garbage, cleanup, class time (Q. 2); opinion, meetings, volunteer activities (Q. 3); and processing, space, practice (Q. 4). The word pairs with a high frequency of simultaneous appearance, that is, high correlation, were 'certification - acquisition', 'problem - solution', 'science - life', and 'misunderstanding - concession'. In the cluster analysis, the number of clusters obtained from the height of the cluster dendrogram was 2 (Q. 1), 4 (Q. 2, 4), and 5 (Q. 3). The cohesion within clusters was high, and the heterogeneity between clusters was clearly shown.
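
Betweenness and degree centrality, the two measures the study reports, are standard graph statistics. The following sketch computes them with networkx on an invented word co-occurrence graph; the word pairs are illustrative, not taken from the KNCAF data.

```python
# Sketch: degree and betweenness centrality over a word co-occurrence
# graph, as in semantic network analysis (toy edges, not KNCAF data).
import networkx as nx

# Hypothetical co-occurring word pairs from one answer.
edges = [("reason", "high school"), ("high school", "attending"),
         ("high school", "grades"), ("grades", "inquiry"),
         ("reason", "attending")]

G = nx.Graph(edges)
print(nx.degree_centrality(G))       # share of possible neighbors per node
print(nx.betweenness_centrality(G))  # share of shortest paths through a node
```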

Clustering Method Of Plagiarism Document To Use Similarity Syntagma Tree (유사 어절 트리를 이용한 표절 문서의 Clustering 방법)

  • Cheon, Seung-Hwan; Kim, Mi-Young; Lee, Guee-Sang
    • Proceedings of the Korea Information Processing Society Conference / 2002.11c / pp.2269-2272 / 2002
  • When evaluating student assignments produced with the Internet and computers, the ease of plagiarism makes accurate detection very difficult and tedious. In particular, because assignments are often written on the same topic, it is not easy to distinguish independently written documents from plagiarized ones. This is an entirely different problem from conventional information retrieval methods, which extract the occurrence frequencies of major words (i.e., index terms) from the documents to be clustered and then use them to find the most suitable clustering. In this paper, we propose a method that generates clusters according to plagiarism similarity using a similar-syntagma tree, so as to provide guidance for assignment evaluation.
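
The similar-syntagma tree itself is specific to this paper, but the underlying idea of scoring plagiarism by shared word sequences can be illustrated with a rough stand-in: pairwise Jaccard similarity over word bigrams. This is our own simplification, not the paper's data structure.

```python
# Sketch: flagging likely plagiarism by word-sequence overlap,
# a rough stand-in for the paper's similar-syntagma-tree similarity.
from itertools import combinations

def ngram_set(text, n=2):
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

docs = {"a": "the quick brown fox jumps over the fence",
        "b": "the quick brown fox leaps over the fence",
        "c": "a completely different essay on another topic"}

# Pairwise Jaccard similarity over word bigrams; high overlap is suspicious.
for (i, x), (j, y) in combinations(docs.items(), 2):
    s, t = ngram_set(x), ngram_set(y)
    print(i, j, round(len(s & t) / len(s | t), 2))
```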

Weighting and Query Structuring Scheme for Disambiguation in CLTR (교차언어 문서검색에서 중의성 해소를 위한 가중치 부여 및 질의어 구조화 방법)

  • Jeong, Eui-Heon; Kwon, Oh-Woog; Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology / 2001.10d / pp.175-182 / 2001
  • This paper proposes a query weighting and structuring method for resolving translation ambiguity in dictionary-based query-translation cross-language text retrieval (CLTR). The query translation process of the proposed method consists of three steps. First, the appropriate sense of each query word is determined through translation-candidate clustering. Second, the interrelationships among the candidate translations are analyzed using contextual and local information. Third, the candidate translations are linked to form candidate queries, each of which is assigned a weight, and the result is generated as a weighted Boolean query. In this way, we present a dictionary-based query-translation CLTR method that is simple and economical yet achieves high performance.
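
The third step, combining weighted candidate translations into a weighted Boolean query, can be sketched as follows. The candidate lists and weights are invented, and the `^weight` boost notation is Lucene-style shorthand chosen for illustration, not the paper's formalism.

```python
# Sketch: assembling a weighted Boolean query from translation
# candidates (illustrative weights; not the paper's scoring formula).

# Hypothetical candidates: source term -> [(translation, weight)].
candidates = {
    "은행": [("bank", 0.8), ("financial institution", 0.2)],
    "계좌": [("account", 0.9), ("ledger", 0.1)],
}

# OR the candidates of one source term, AND across source terms.
def weighted_boolean(cands):
    groups = []
    for term, opts in cands.items():
        group = " OR ".join(f'("{t}")^{w}' for t, w in opts)
        groups.append(f"({group})")
    return " AND ".join(groups)

print(weighted_boolean(candidates))
```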

Design and Implementation of Keywords Extraction System from CQI Reports by the Analysis of Graph Centrality (그래프 중심성 분석에 의한 CQI 보고서 핵심어 추출 시스템의 설계 및 개발)

  • Pheaktra, They; Lim, JongBeom; Lee, JongHyuk; Gil, Joon-Min
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.256-259 / 2019
  • Universities have recently been collecting vast amounts of education-related data, such as CQI (Continuous Quality Improvement) reports, and analyzing them for use in education and management. A keyword is a word that concisely expresses the content of a text, so keyword extraction is the first step toward understanding the meaning of a CQI report. Once keywords are extracted from CQI reports, many downstream tasks such as information retrieval, indexing, classification, clustering, and filtering can be performed easily. Automating keyword extraction from large volumes of CQI reports would therefore greatly aid subsequent summarization and interpretation. This paper proposes a method for automatically extracting keywords to summarize CQI reports.
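
As a rough illustration of graph-centrality keyword extraction, the sketch below builds a co-occurrence graph over adjacent words and ranks nodes with PageRank (TextRank-style); the exact centrality measure and preprocessing used by the authors may differ.

```python
# Sketch: graph-centrality keyword extraction from report text
# (TextRank-style ranking; not the authors' exact measure).
import networkx as nx

text = "quality improvement report describes course quality and improvement plans"
words = text.split()

# Link words that appear adjacently (window of size 2).
G = nx.Graph()
G.add_edges_from((words[i], words[i + 1]) for i in range(len(words) - 1))

scores = nx.pagerank(G)  # centrality score per word
print(sorted(scores, key=scores.get, reverse=True)[:3])  # top-3 keywords
```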

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon; Chang, Jae-Young
    • The Journal of Society for e-Business Studies / v.19 no.3 / pp.107-123 / 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of textual documents written in natural language. The conventional models represent documents as bags of words; they include the vector space model, the Boolean model, the statistical model, and the tensor space model. These models express documents only with term literals for indexing and frequency-based weights for the corresponding terms; that is, they ignore the semantic, sequential-order, and structural information of terms. Most text mining techniques have been developed on the assumption that documents are represented with such 'bag-of-words' models. However, confronting the big data era, a new paradigm of text representation is required that can analyse huge amounts of textual documents more precisely. Our text model regards the 'concept' as an independent space on a par with the 'term' and 'document' spaces used in the vector space model, and it expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia data, in which each article defines a single concept. Consequently, a document collection is represented as a 3-order tensor with semantic information, and the proposed model is accordingly called the text cuboid model in our paper. Through experiments using the popular 20NewsGroup document corpus, we demonstrate the superiority of the proposed text model in terms of document clustering and concept clustering.
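
A toy version of the term-document-concept tensor can make the 'text cuboid' idea concrete. The dimensions and weights below are invented, and the mapping of Wikipedia articles to concepts is only asserted here, not implemented.

```python
# Sketch: a tiny term-document-concept "text cuboid" as a 3-order
# tensor (toy dimensions; Wikipedia-derived concepts are assumed).
import numpy as np

terms = ["travel", "market"]
docs = ["d1", "d2"]
concepts = ["Tourism", "Economy"]  # each Wikipedia article = one concept

# cuboid[t, d, c]: weight of term t in document d under concept c.
cuboid = np.zeros((len(terms), len(docs), len(concepts)))
cuboid[0, 0, 0] = 1.0   # "travel" in d1 relates to concept Tourism
cuboid[1, 1, 1] = 0.7   # "market" in d2 relates to concept Economy

# A term-concept matrix falls out by summing over the document mode.
print(cuboid.sum(axis=1))
```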

Development of big data based Skin Care Information System SCIS for skin condition diagnosis and management

  • Kim, Hyung-Hoon; Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.137-147 / 2022
  • Diagnosing and managing skin condition is a basic and important task for workers in the beauty and cosmetics industries, and doing it accurately requires understanding the skin condition and needs of customers. In this paper, we developed SCIS, a big-data-based skin care information system that supports skin condition diagnosis and management using social media big data. With the developed system, core information for skin condition diagnosis and management can be analyzed and extracted from text. SCIS consists of a big data collection stage, a text preprocessing stage, an image preprocessing stage, and a text word analysis stage. SCIS collected the big data necessary for skin diagnosis and management and extracted key words and topics from the text through simple frequency analysis, relative frequency analysis, co-occurrence analysis, and correlation analysis of key words. In addition, by analyzing the extracted key words and applying various visualization processes such as scatter plots, NetworkX graphs, t-SNE, and clustering, the system can be used efficiently in diagnosing and managing skin conditions.
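
The simple frequency and co-occurrence analyses mentioned in the abstract can be sketched in a few lines; the posts and keywords below are invented stand-ins for the social-media data SCIS collects.

```python
# Sketch: simple frequency and co-occurrence analysis of the kind
# SCIS applies to social-media text (toy posts, invented keywords).
from collections import Counter
from itertools import combinations

posts = ["dry skin needs moisture cream",
         "oily skin needs mild cleanser",
         "dry skin winter moisture routine"]

freq = Counter(w for p in posts for w in p.split())
pairs = Counter()
for p in posts:
    pairs.update(combinations(sorted(set(p.split())), 2))

print(freq.most_common(3))   # simple frequency analysis
print(pairs.most_common(3))  # co-occurrence analysis
```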

A Study on Intellectual Structure of Library and Information Science in Korea (문헌정보학의 지식 구조에 관한 연구)

  • Yoo, Yeong-Jun
    • Journal of the Korean Society for Information Management / v.20 no.3 / pp.277-297 / 2003
  • This study was conducted on the premise that index terms display the intellectual structure of a specific subject field. In this study, an attempt was made to grasp the intellectual structure of Library and Information Science by clustering the index terms of the journals of the related academic societies held at the Library of the National Assembly - such as the Journal of the Korean Society for Information Management, the Journal of the Korean Library and Information Science Society, and the Journal of the Korean Society for Library and Information Science. In the course of the study, index term clusters were generated based on the linkage of index terms and their frequency of co-occurrence; moreover, a time-period analysis was conducted along with a study of first-appearing terms in order to clarify the trends and development process of Library and Information Science. This study also analysed the differences between the two intellectual structures by comparing the structure generated by the index term clusters with the existing structure of traditional classification systems.
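
Clustering index terms by co-occurrence, as this study does, can be approximated with hierarchical clustering over a co-occurrence matrix. The terms, counts, and count-to-distance transform below are illustrative assumptions, not the journals' data.

```python
# Sketch: grouping index terms by co-occurrence with hierarchical
# clustering (toy counts; not the study's index-term data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

terms = ["indexing", "retrieval", "cataloging", "classification"]
# Hypothetical pairwise co-occurrence counts between index terms.
cooc = np.array([[0, 5, 1, 0],
                 [5, 0, 0, 1],
                 [1, 0, 0, 6],
                 [0, 1, 6, 0]])

# Turn counts into distances (more co-occurrence = closer) and cluster.
dist = 1.0 / (1.0 + cooc[np.triu_indices(4, k=1)])  # condensed form
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(terms, labels)))
```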

A Study on Research Paper Classification Using Keyword Clustering (키워드 군집화를 이용한 연구 논문 분류에 관한 연구)

  • Lee, Yun-Soo; Pheaktra, They; Lee, JongHyuk; Gil, Joon-Min
    • KIPS Transactions on Software and Data Engineering / v.7 no.12 / pp.477-484 / 2018
  • Due to the advancement of computer and information technologies, numerous papers have been published, and as new research fields continue to be created, users have great difficulty finding and categorizing papers of interest. To alleviate this difficulty, this paper presents a method for grouping similar papers into clusters. The presented method extracts primary keywords from the abstract of each paper using TF-IDF. Based on the extracted TF-IDF values, it then applies the K-means clustering algorithm to group papers with similar content. To demonstrate the practicality of the proposed method, we use papers from the FGCS journal as real data. On these data, we derive the number of clusters using the elbow method and show the clustering performance using the silhouette method.
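
A compact sketch of this pipeline (TF-IDF features, K-means, then the elbow and silhouette criteria for choosing k) follows, using scikit-learn; the toy abstracts stand in for the FGCS data.

```python
# Sketch: choosing k via elbow (inertia) and silhouette scores for
# TF-IDF-based paper clustering (toy abstracts, not FGCS data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

abstracts = ["cloud scheduling energy efficiency",
             "grid scheduling latency tradeoffs",
             "neural text classification models",
             "deep text embedding models"]

X = TfidfVectorizer().fit_transform(abstracts)
for k in (2, 3):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # Lower inertia = tighter clusters; higher silhouette = better separation.
    print(k, km.inertia_, silhouette_score(X, km.labels_))
```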

A Study on the Structures and Characteristics of National Policy Knowledge (국가 정책지식의 구조와 특성에 관한 연구)

  • Lee, Ji-Sue; Chung, Young-Mee
    • Journal of Information Management / v.41 no.2 / pp.1-30 / 2010
  • This study analyzed the research output in the dominant research areas of 19 national research institutions. The policy knowledge produced by the institutions during the past 5 years mainly concerned 10 policies dealing with economic and social issues. Similarities between the research subjects of the institutions were displayed through MDS mapping. The study also identified the issue attention cycles of the 5 chosen policies and examined the correlation between the issue attention cycles and the yields of policy knowledge. The knowledge structure of each policy was mapped using co-word analysis and Ward's clustering. It was also found that institutions performing research on similar subjects demonstrated citation preferences for each other.
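
Co-word analysis followed by Ward's clustering, the mapping technique this study uses, can be sketched as follows; the policy terms and the term-by-document occurrence matrix are invented for illustration.

```python
# Sketch: co-word analysis with Ward's clustering (toy policy terms;
# the study's actual co-word matrices are not reproduced here).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

terms = ["welfare", "pension", "export", "tariff"]
# Hypothetical term-by-document occurrence matrix (rows = terms).
occ = np.array([[3, 2, 0, 0],
                [2, 3, 0, 1],
                [0, 0, 4, 3],
                [0, 1, 3, 4]], dtype=float)

# Ward linkage over Euclidean distances between term profiles.
Z = linkage(pdist(occ), method="ward")
print(dict(zip(terms, fcluster(Z, t=2, criterion="maxclust"))))
```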