• Title/Summary/Keyword: Text Vectorization (텍스트 벡터화)


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of analysis. Until recently, text mining studies have focused on applications in the second step, such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve the quality of analysis results by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly fed to a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. Mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties is called "embedding". Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, traditional document embedding methods such as doc2Vec generate a vector for each document using all the words contained in the document. As a result, the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so it is difficult to accurately represent a complex document covering multiple subjects with a single vector. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of traditional document embedding methods. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis methods. Since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted and assembled into a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, multiple vectors are generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference.
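The five-step pipeline in this abstract can be sketched as follows. This is a minimal illustration of steps (4) and (5) only, assuming keyword vectors have already been extracted; the toy vectors, cluster count, and function name are illustrative, not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def multi_vector_embedding(keyword_vectors, n_subjects):
    """Steps (4)-(5): cluster a document's keyword vectors, then
    average each cluster into one subject-specific document vector."""
    km = KMeans(n_clusters=n_subjects, n_init=10, random_state=0)
    labels = km.fit_predict(keyword_vectors)
    return np.array([keyword_vectors[labels == c].mean(axis=0)
                     for c in range(n_subjects)])

# Toy document: 6 keyword vectors drawn around two distinct subjects.
rng = np.random.default_rng(0)
subject_a = rng.normal(0.0, 0.1, size=(3, 4)) + np.array([1, 0, 0, 0])
subject_b = rng.normal(0.0, 0.1, size=(3, 4)) + np.array([0, 0, 1, 0])
keywords = np.vstack([subject_a, subject_b])

doc_vectors = multi_vector_embedding(keywords, n_subjects=2)
print(doc_vectors.shape)  # (2, 4): one vector per subject
```

Averaging within clusters keeps each resulting vector close to one subject, which is what prevents the inter-subject interference the paper reports for single-vector embeddings.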

A Language Model based Knowledge Network for Analyzing Disaster Safety related Social Interest (재난안전 사회관심 분석을 위한 언어모델 활용 정보 네트워크 구축)

  • Choi, Dong-Jin;Han, So-Hee;Kim, Kyung-Jun;Bae, Eun-Sol
    • Proceedings of the Korean Society of Disaster Information Conference / 2022.10a / pp.145-147 / 2022
  • This paper points out the limitations of existing information network or knowledge graph construction methods used to discover issues in large-scale text data, and proposes a new method that builds an information network at the sentence level. First, the distributions of word and character counts per sentence were measured, and threshold values were set to remove noise such as onomatopoeia. Next, all sentences were vectorized using a BERT-based language model, and the similarity between two sentence vectors was measured with cosine similarity. To minimize misclassified similarity results, an algorithm was developed that compares the semantic relatedness of nominal words. Reviewing the results of the proposed similar-sentence comparison algorithm, we confirmed that sentence pairs judged similar deal with the same topic and content even though they are phrased differently. The method proposed in this paper is a new way to overcome the difficulty of interpreting word-level knowledge graphs. If applied to future research areas such as issue and trend analysis, it could contribute to gathering social interest in specific topics in a data-driven way and to deriving policy recommendations that reflect demand.
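The sentence-vector comparison step described above can be sketched as follows. This is a minimal sketch: the hand-written vectors stand in for embeddings a BERT-based model would produce, and the values are illustrative only.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity, as used to compare two sentence vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-in sentence vectors; a real pipeline would obtain these from a
# BERT-based language model (e.g. mean-pooled token embeddings).
s1 = np.array([0.8, 0.1, 0.3])
s2 = np.array([0.7, 0.2, 0.4])   # paraphrase of s1
s3 = np.array([-0.5, 0.9, 0.0])  # unrelated sentence

print(cosine_similarity(s1, s2))  # close to 1: likely same topic
print(cosine_similarity(s1, s3))  # much lower: different topic
```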


Modified multi-sense skip-gram using weighted context and x-means (가중 문맥벡터와 X-means 방법을 이용한 변형 다의어스킵그램)

  • Jeong, Hyunwoo;Lee, Eun Ryung
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.389-399 / 2021
  • In recent years, word embedding has been a popular field of natural language processing research, and the skip-gram has become a successful word embedding method. It assigns an embedding vector to each word using its contexts, which provides an effective way to analyze text data. However, due to the limitation of the vector space model, primary word embedding methods assume that every word has only a single meaning. Since multi-sense words, that is, words with more than one meaning, occur in practice, Neelakantan (2014) proposed the multi-sense skip-gram (MSSG), which finds embedding vectors corresponding to each sense of a multi-sense word using a clustering method. In this paper, we propose a modification of the MSSG to improve statistical accuracy. Moreover, we propose a data-adaptive choice of the number of clusters, that is, the number of meanings of a multi-sense word. Some numerical evidence is given through real-data-based simulations.
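The idea of a data-adaptive choice of the number of senses can be sketched as follows. The paper's X-means variant uses a model-selection criterion; here a silhouette score is a simplified stand-in for that criterion, and the toy context vectors are illustrative, not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_num_senses(context_vectors, k_max=5):
    """Pick the number of senses of a word from its context vectors.
    Silhouette scoring is a simplified stand-in for the X-means-style
    data-adaptive choice of k described in the abstract."""
    best_k, best_score = 2, -1.0
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(context_vectors)
        score = silhouette_score(context_vectors, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Contexts of an ambiguous word, drawn around three well-separated senses.
rng = np.random.default_rng(1)
contexts = np.vstack([rng.normal(c, 0.05, size=(10, 2))
                      for c in ([0, 0], [3, 0], [0, 3])])
print(choose_num_senses(contexts))  # 3 senses recovered from the data
```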

DOCST: Document frequency Oriented Clustering for Short Texts (가중치를 이용한 효과적인 항공 단문 군집 방법)

  • Kim, Jooyoung;Lee, Jimin;An, Soonhong;Lee, Hoonsuk
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.331-334 / 2018
  • Machine learning on text data, a representative form of unstructured data, is used in various industries. NOTAMs are aeronautical messages generated by the thousands every day and are currently analyzed manually. While machine learning promises improved work efficiency, these data are short texts mixed with abbreviations, which makes general-purpose analysis difficult. This study proposes a method for clustering documents that are few in number, contain mixed abbreviations, and have very short sentences. By combining LDA, which classifies documents by topic, and Word2Vec, which represents words in a k-dimensional vector space, even noisy short-text data can be clustered efficiently.
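One leg of the approach above, topic modeling followed by clustering, can be sketched as follows; the abstract additionally combines Word2Vec, which is omitted here. The toy messages, topic count, and cluster count are illustrative only, not from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

# Toy short texts standing in for abbreviation-heavy NOTAM messages.
docs = [
    "rwy 09 clsd due maint",
    "rwy 27 clsd snow removal",
    "twy b clsd construction",
    "navaid ils rwy 09 u/s",
    "navaid vor out of service",
    "ils glide path unserviceable",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_dist = lda.fit_transform(counts)  # per-document topic mixture
labels = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(topic_dist)
print(topic_dist.shape, labels)
```

Clustering the low-dimensional topic mixtures rather than raw term counts is what makes the approach robust to the noise and sparsity of very short documents.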

A Case Study on Text Analysis Using Meal Kit Product Review Data (밀키트 제품 리뷰 데이터를 이용한 텍스트 분석 사례 연구)

  • Choi, Hyeseon;Yeon, Kyupil
    • The Journal of the Korea Contents Association / v.22 no.5 / pp.1-15 / 2022
  • In this study, text analysis was performed on meal kit product review data to identify factors affecting the evaluation of meal kit products. The data used for the analysis were collected by scraping 334,498 reviews of meal kit products from the Naver shopping site. After preprocessing the text data, word clouds and sentiment analyses based on word frequency and normalized TF-IDF were produced. A logistic regression model was applied to predict the polarity of reviews of meal kit products. From the logistic regression models derived for each product category, the main factors that caused positive and negative emotions were identified. As a result, it was verified that text analysis can be a useful tool that provides a basis for maximizing positive factors for a specific category, menu, or material and removing negative risk factors when developing a meal kit product.
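The TF-IDF plus logistic regression polarity pipeline above can be sketched as follows, assuming a handful of English toy reviews in place of the scraped Korean data; all texts and labels are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy reviews standing in for the scraped meal kit reviews.
reviews = ["fresh and tasty", "easy to cook and delicious",
           "great portion tasty sauce", "arrived spoiled and bland",
           "bland taste not fresh", "too expensive and spoiled"]
polarity = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(reviews)
clf = LogisticRegression().fit(X, polarity)

print(clf.predict(tfidf.transform(["tasty and fresh"])))   # likely positive
print(clf.predict(tfidf.transform(["spoiled and bland"]))) # likely negative
```

Inspecting `clf.coef_` per term is the step that surfaces the "main factors" behind positive and negative emotions, mirroring the per-category analysis in the study.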

Document Clustering Methods using Hierarchy of Document Contents (문서 내용의 계층화를 이용한 문서 비교 방법)

  • Hwang, Myung-Gwon;Bae, Yong-Geun;Kim, Pan-Koo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.12 / pp.2335-2342 / 2006
  • The current web is accumulating abundant information. In particular, text-based documents are a type used very easily and frequently by humans, so numerous studies have attempted to retrieve text documents using methods such as probability, statistics, vector similarity, and Bayesian approaches. These studies, however, considered neither the subject nor the semantics of documents. To overcome these problems, we propose a document similarity method for the semantic retrieval of the documents users want, which is the core of document clustering. This method first expresses the document content as a semantic hierarchy and assigns weights to the important hierarchy domains of the document. With this, we can measure the similarity between documents using both the domain weights and the coincidence of concepts in the domain hierarchies.
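The weighted domain-hierarchy similarity described above might be sketched as follows. The combining formula (Jaccard overlap of concepts per domain, scaled by a domain weight) is a hedged reconstruction from the abstract, not the paper's exact measure, and all domains, concepts, and weights are illustrative.

```python
def hierarchy_similarity(doc_a, doc_b, weights):
    """Weighted similarity over domain hierarchies: for each domain,
    the coincidence of concepts (here, Jaccard overlap) scaled by the
    domain's importance weight. Formula is a hedged reconstruction."""
    score = 0.0
    for domain, w in weights.items():
        a, b = doc_a.get(domain, set()), doc_b.get(domain, set())
        if a or b:
            score += w * len(a & b) / len(a | b)
    return score

# Illustrative domain hierarchies of concepts for two documents.
doc1 = {"science": {"physics", "energy"}, "sports": {"soccer"}}
doc2 = {"science": {"physics", "matter"}, "sports": {"soccer", "rules"}}
weights = {"science": 0.7, "sports": 0.3}

print(round(hierarchy_similarity(doc1, doc2, weights), 3))  # 0.383
```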

A Study on automatic assignment of descriptors using machine learning (기계학습을 통한 디스크립터 자동부여에 관한 연구)

  • Kim, Pan-Jun
    • Journal of the Korean Society for Information Management / v.23 no.1 s.59 / pp.279-299 / 2006
  • This study utilizes various machine learning approaches to automatically assign descriptors to journal articles. After selecting core journals in the field of information science and organizing a test collection from 11 years of articles, the effectiveness of feature selection and the size of the training set were examined. Regarding feature selection, after reducing the feature set using $\chi^2$ statistics (CHI) and criteria that prefer high-frequency features (COS, GSS, JAC), the trained Support Vector Machines (SVM) performed best. With respect to the size of the training set, it significantly influenced the performance of Support Vector Machines (SVM) and Voted Perceptron (VTP), but had little effect on Naive Bayes (NB).
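The CHI feature reduction followed by SVM training described above can be sketched as follows; the toy articles, descriptor labels, and the value of k are illustrative, not from the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

# Toy articles with descriptor labels, standing in for the information
# science test collection.
articles = ["retrieval ranking query index",
            "query expansion retrieval model",
            "citation impact bibliometrics analysis",
            "citation network bibliometrics"]
labels = ["IR", "IR", "bibliometrics", "bibliometrics"]

vec = CountVectorizer()
X = vec.fit_transform(articles)
selector = SelectKBest(chi2, k=4)       # CHI-based feature reduction
X_reduced = selector.fit_transform(X, labels)
clf = LinearSVC().fit(X_reduced, labels)  # SVM on the reduced features

test = vec.transform(["query retrieval evaluation"])
print(clf.predict(selector.transform(test)))
```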

Deep Learning-Based Model for Classification of Medical Record Types in EEG Report (EEG Report의 의무기록 유형 분류를 위한 딥러닝 기반 모델)

  • Oh, Kyoungsu;Kang, Min;Kang, Seok-hwan;Lee, Young-ho
    • KIPS Transactions on Software and Data Engineering / v.11 no.5 / pp.203-210 / 2022
  • As more and more research groups and companies use health care data, efforts are being made worldwide to vitalize such data. However, the systems and formats used by each institution differ. Therefore, this research established a basic model for classifying the types of medical records in EEG Reports, so that text data from multiple institutions can later be classified by type. For EEG Report classification, four deep learning-based algorithms were compared. As a result of the experiment, the ANN model trained on One-Hot-Encoded vectors showed the highest performance with an accuracy of 71%.
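The best-performing configuration above, one-hot vectorization fed to an ANN, can be sketched as follows, with a small sklearn MLP standing in for the paper's network; the snippets, labels, and layer size are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# Toy report snippets and record-type labels standing in for EEG Reports.
snippets = ["patient awake eeg normal background",
            "awake record normal alpha rhythm",
            "seizure spikes observed temporal lobe",
            "epileptiform spikes during sleep"]
types = ["normal", "normal", "abnormal", "abnormal"]

# binary=True yields one-hot style presence/absence vectors per token.
onehot = CountVectorizer(binary=True)
X = onehot.fit_transform(snippets).toarray()

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(X, types)
print(ann.predict(onehot.transform(["normal awake background"]).toarray()))
```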

Product Planning using Similarity Analysis Technique Based on Word2Vec Model (Word2Vec 모델 기반의 유사도를 이용한 상품기획 모델)

  • Ahn, Yeong-Hwi;Park, Koo-Rack
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.11-12 / 2021
  • Comments and product reviews left by consumers can be key information for product planning. In this paper, similarity analysis was performed on 7,300 online reviews of a vertical silent mouse using Word2Vec, a deep learning technique. The analysis identified sound (.975), button (.972), and weight (.971) as strengths associated with the keyword 'click', and 'light' (.959) as a weakness. This is a meaningful process in which consumers' opinions, attitudes, and preferences regarding purchased products, together with comprehensive opinions about services, are turned into data to analyze product characteristics. Applying a strategy that incorporates deep-learning-based consumer sentiment analysis into the product planning process could improve the economics of time and cost investment in product planning and, furthermore, allow rapidly changing consumer requirements to be reflected in a timely manner.
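The similarity lookup around a keyword can be sketched as follows. The embedding table stands in for vectors a Word2Vec model would learn from the 7,300 reviews; the words and values here are illustrative and do not reproduce the paper's scores.

```python
import numpy as np

# Toy embedding table standing in for learned Word2Vec vectors.
emb = {
    "click":  np.array([0.9, 0.1, 0.0]),
    "sound":  np.array([0.88, 0.12, 0.05]),
    "button": np.array([0.85, 0.2, 0.0]),
    "weight": np.array([0.8, 0.1, 0.2]),
    "price":  np.array([0.0, 0.9, 0.4]),
}

def most_similar(query, k=3):
    """Rank words by cosine similarity to the query keyword, as in the
    paper's similarity analysis around 'click'."""
    q = emb[query]
    scores = {w: float(np.dot(q, v) /
                       (np.linalg.norm(q) * np.linalg.norm(v)))
              for w, v in emb.items() if w != query}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(most_similar("click"))  # sound, button, weight rank highest
```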


Analysis on the Characteristics of Construction Practice Information Using Text Mining: Focusing on Information Such as Construction Technology, Cases, and Cost Reduction (텍스트마이닝을 활용한 건설실무정보의 특성 분석 - 건설기술, 사례, 원가절감 등 정보를 중심으로 -)

  • Seong-Yun, Jeong;Jin-Uk, Kim
    • Journal of the Korean Society for Library and Information Science / v.56 no.4 / pp.205-222 / 2022
  • This study aims to improve information services so that construction engineers and construction project participants without specialized knowledge can easily understand the important words in construction practice and the interrelationships between them. To this end, using text mining and network centrality, word occurrence frequencies, topic models, and network centralities were analyzed for the construction practice information most used in the Construction Technology Digital Library, such as technical information, case information, and cost-reduction information. Through this analysis, design, construction, project management, specifications, standards, and maintenance related to road construction (roads, pavements, bridges, and tunnels) were identified as important in construction practice. In addition, correlations were analyzed for highly important words by measuring Degree Centrality and Eigenvector Centrality. The result was that more useful information could be provided if the technical information were expanded. Finally, we presented the limitations of the study and suggested additional studies to address them.
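The two centrality measures named above can be sketched on a small word co-occurrence network as follows; the terms and edges are illustrative, and eigenvector centrality is computed here by plain power iteration rather than a library call.

```python
import numpy as np

# Toy co-occurrence network of construction practice terms.
terms = ["road", "pavement", "bridge", "tunnel", "maintenance"]
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (2, 4)]

n = len(terms)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0  # undirected adjacency matrix

degree = A.sum(axis=1)  # Degree Centrality (unnormalized edge counts)

# Eigenvector Centrality via power iteration on the adjacency matrix:
# a node is central if it is linked to other central nodes.
x = np.ones(n)
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)

print(terms[int(degree.argmax())])  # most-connected term
print(terms[int(x.argmax())])       # most-influential term
```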