• Title/Summary/Keyword: Word2Vec

Translation Pre-processing Technique for Improving Analysis Performance of Korean News (한국어 뉴스 분석 성능 향상을 위한 번역 전처리 기법)

  • Lee, Ji-Min;Jeong, Da-Woon;Gu, Yeong-Hyeon;Yoo, Seong-Joon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.619-623 / 2020
  • Korean is an agglutinative language in which one or more morphemes make up a word, so text analysis requires a morpheme-separation step. Most natural language processing algorithms were developed in English-speaking countries, and English is an inflectional language in which, apart from certain cases, a single morpheme generally constitutes a word. English text is also tokenized mainly on whitespace, so its analysis tends to be less complex than that of Korean. For these reasons, Korean text analysis is known to face limitations compared with English text analysis. To improve the performance of Korean text analysis, this paper proposes a translation pre-processing technique: the original Korean text is translated into English, pre-processed and analyzed, and the results are then translated back. The study used Korean news article data and English news text data produced by the translation pre-processing technique, and extracted words similar to the topic keywords using Word2Vec (Word to Vector), an algorithm that computes similarity between words. When the extracted similar words were put to a performance-comparison vote among text analysis experts, the English news with translation pre-processing produced a meaningful result, winning over the Korean news by roughly a three-fold margin of votes.

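The pipeline this abstract describes maps onto a few lines of code. Below is a minimal sketch using gensim's Word2Vec; translate_ko_to_en and translate_en_to_ko are hypothetical stand-ins for whatever machine-translation service the authors actually used, and the tokenization and hyper-parameters are placeholder assumptions.

    # Translation pre-processing: translate, analyze in English, translate back.
    from gensim.models import Word2Vec

    def analyze_korean_news(korean_articles, keyword_ko,
                            translate_ko_to_en, translate_en_to_ko):
        # 1. Translate the original Korean articles into English.
        english_articles = [translate_ko_to_en(a) for a in korean_articles]
        # 2. Pre-process in English (here: lowercasing + whitespace tokens).
        sentences = [a.lower().split() for a in english_articles]
        # 3. Train Word2Vec and extract words similar to the translated keyword.
        model = Word2Vec(sentences, vector_size=100, window=5, min_count=2)
        keyword_en = translate_ko_to_en(keyword_ko).lower()
        similar = model.wv.most_similar(keyword_en, topn=10)
        # 4. Re-translate the similar words back into Korean.
        return [(translate_en_to_ko(w), score) for w, score in similar]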

Topic Model Augmentation and Extension Method using LDA and BERTopic (LDA와 BERTopic을 이용한 토픽모델링의 증강과 확장 기법 연구)

  • Kim, SeonWook;Yang, Kiduk
    • Journal of the Korean Society for Information Management / v.39 no.3 / pp.99-132 / 2022
  • The purpose of this study is to propose AET (Augmented and Extended Topics), a novel method of synthesizing both LDA and BERTopic results, and to analyze recently published LIS articles as an experimental application. To achieve this, 55,442 abstracts from 85 LIS journals in the WoS database, spanning January 2001 to October 2021, were analyzed. AET first constructs a Word2Vec-based cosine similarity matrix between the LDA and BERTopic results, extracts AT (Augmented Topics) by repeating matrix reordering and segmentation procedures as long as their semantic relations remain valid, and finally determines ET (Extended Topics) by removing any LDA-related residual subtopics from the matrix and ordering the rest by F1 (BERTopic topic size rank, inverse cosine similarity rank). Compared with the baseline LDA result, AT effectively concretized the original LDA topic model, and ET discovered new meaningful topics that LDA did not. In the qualitative performance evaluation, AT performed better than LDA, while ET showed similar performance except in a few cases.
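The first step of AET, the Word2Vec-based cosine similarity matrix between the two topic models, can be sketched as follows. This assumes each topic is supplied as a list of its top words and that wv is a trained gensim KeyedVectors; representing a topic as the mean of its word vectors is an illustrative assumption, not necessarily the paper's exact aggregation.

    import numpy as np

    def topic_vector(top_words, wv):
        # Represent a topic by the mean vector of its top words (assumption).
        vecs = [wv[w] for w in top_words if w in wv]
        return np.mean(vecs, axis=0)

    def topic_similarity_matrix(lda_topics, bertopic_topics, wv):
        # Rows = LDA topics, columns = BERTopic topics, entries = cosine similarity.
        L = np.array([topic_vector(t, wv) for t in lda_topics])
        B = np.array([topic_vector(t, wv) for t in bertopic_topics])
        L /= np.linalg.norm(L, axis=1, keepdims=True)
        B /= np.linalg.norm(B, axis=1, keepdims=True)
        return L @ B.T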

Research Trends in Record Management Using Unstructured Text Data Analysis (비정형 텍스트 데이터 분석을 활용한 기록관리 분야 연구동향)

  • Deokyong Hong;Junseok Heo
    • Journal of Korean Society of Archives and Records Management / v.23 no.4 / pp.73-89 / 2023
  • This study aims to analyze the frequency of keywords used in Korean abstracts, which are unstructured text data in the domestic record management research field, using text mining techniques, and to identify domestic record management research trends through distance analysis between keywords. To this end, 1,157 articles were extracted from 7 journal types (28 types) retrieved from the Korea Citation Index (KCI) journal statistics (registered and candidate journals) by major category (interdisciplinary studies) and middle category (library and information science), and their 77,578 keywords were visualized. t-Distributed Stochastic Neighbor Embedding (t-SNE) and Scattertext analyses based on Word2Vec were then performed. First, the analysis confirmed that keywords such as "record management" (889 times), "analysis" (888 times), "archive" (742 times), "record" (562 times), and "utilization" (449 times) were treated as significant topics by researchers. Second, Word2Vec analysis generated vector representations of the keywords, whose similarity distances were investigated and visualized using t-SNE and Scattertext. In the visualization, the record management research area divided into two groups: keywords such as "archiving," "national record management," "standardization," "official documents," and "record management systems" occurred frequently in the first group (past), while keywords such as "community," "data," "record information service," "online," and "digital archives" in the second group (current) were attracting substantial attention.
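A Word2Vec + t-SNE keyword map of the kind described here can be reproduced in a few lines. This is a generic sketch, assuming keyword_lists holds one list of author keywords per article; the model sizes, perplexity, and 300-word cutoff are placeholder choices, not the paper's settings.

    from gensim.models import Word2Vec
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    # keyword_lists: one list of author keywords per article (assumed input).
    model = Word2Vec(keyword_lists, vector_size=100, window=5, min_count=5)
    words = model.wv.index_to_key[:300]              # most frequent keywords
    coords = TSNE(n_components=2, perplexity=30,
                  random_state=0).fit_transform(model.wv[words])

    plt.scatter(coords[:, 0], coords[:, 1], s=5)
    for (x, y), w in zip(coords, words):
        plt.annotate(w, (x, y), fontsize=6)          # label each keyword point
    plt.show()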

Development of An Automatic Classification System for Game Reviews Based on Word Embedding and Vector Similarity (단어 임베딩 및 벡터 유사도 기반 게임 리뷰 자동 분류 시스템 개발)

  • Yang, Yu-Jeong;Lee, Bo-Hyun;Kim, Jin-Sil;Lee, Ki Yong
    • The Journal of Society for e-Business Studies / v.24 no.2 / pp.1-14 / 2019
  • Because of the characteristics of game software, it is important to quickly identify users' needs and reflect them in the software after launch. However, most sites where users can download games and post reviews, such as the Google Play Store, provide only very limited and ambiguous classification categories for game reviews. In this paper, we therefore develop an automatic classification system for game reviews that assigns reviews to categories that are clearer and more useful to game providers. The system converts the words in a review into vectors using word2vec, a representative word embedding model, and classifies the review into the most relevant categories by measuring the similarity between those vectors and each category. In particular, to choose the similarity measure that most directly benefits the system's classification performance, we compared three representative measures, Euclidean similarity, cosine similarity, and extended Jaccard similarity, in a real environment. Furthermore, to allow a review to be classified into multiple categories, we use a threshold-based multi-category classification method. Through experiments on real reviews collected from the Google Play Store, we confirmed that the system achieves up to 95% accuracy.
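The classification step can be sketched as below, assuming each category is already represented by a vector (for instance, the mean word2vec vector of seed words for that category) and wv is the trained embedding. Cosine similarity and the 0.5 threshold are illustrative choices; the paper compares cosine against Euclidean and extended Jaccard similarity.

    import numpy as np

    def review_vector(tokens, wv):
        # Average the word2vec vectors of the review's in-vocabulary words.
        vecs = [wv[t] for t in tokens if t in wv]
        return np.mean(vecs, axis=0) if vecs else None

    def classify_review(tokens, category_vectors, wv, threshold=0.5):
        # Threshold-based multi-category classification: assign the review to
        # every category whose cosine similarity exceeds the threshold.
        r = review_vector(tokens, wv)
        if r is None:
            return []
        r = r / np.linalg.norm(r)
        labels = []
        for name, c in category_vectors.items():
            sim = float(r @ (c / np.linalg.norm(c)))
            if sim >= threshold:
                labels.append((name, sim))
        return sorted(labels, key=lambda x: -x[1])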

Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho;Lee, Donghyun;Lim, Minkyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors are generated from a large training corpus using Google's Word2Vec, in keeping with the distributional hypothesis. The 1-of-|V| coded discrete word vectors are then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time on the Wall Street Journal training corpus (corpus length: 37M words) was reduced from 30 days to 14 days.
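The substitution itself is simple to illustrate. A minimal sketch follows, reading the abstract as replacing each word's 20,000-dimensional 1-of-|V| input with a 600-dimensional continuous vector; the random matrix stands in for the pretrained Word2Vec vectors.

    import numpy as np

    V, D = 20000, 600   # vocabulary size and continuous dimension from the abstract

    # E stands in for the pretrained Word2Vec vectors (random here for illustration).
    E = np.random.randn(V, D).astype(np.float32)

    def one_hot_input(word_id):
        # 1-of-|V| coding: a 20,000-dimensional discrete input vector.
        x = np.zeros(V, dtype=np.float32)
        x[word_id] = 1.0
        return x

    def continuous_input(word_id):
        # Replacement: the word's 600-dimensional continuous vector.
        return E[word_id]

    print(one_hot_input(42).shape, continuous_input(42).shape)   # (20000,) (600,)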

Self Introduction Essay Classification Using Doc2Vec for Efficient Job Matching (Doc2Vec 모형에 기반한 자기소개서 분류 모형 구축 및 실험)

  • Kim, Young Soo;Moon, Hyun Sil;Kim, Jae Kyeong
    • Journal of Information Technology Services / v.19 no.1 / pp.103-112 / 2020
  • Job seekers make various efforts to find a good company, and companies attempt to recruit good people. Job applications built around self-introduction essays are now one of the most active parts of this process. Companies spend considerable time and money reviewing the numerous self-introduction essays of job seekers, and job seekers worry about whether their essays will be accepted. This research builds a classification model and conducts experiments to classify self-introduction essays into pass or fail using deep learning and decision tree techniques. Real-world data were sampled using stratified sampling to alleviate the imbalance between passed and failed essays. Documents were embedded using Doc2Vec, a method developed from the existing Word2Vec, and classified using logistic regression analysis. A decision tree model was chosen as the benchmark, and K-fold cross-validation was conducted for performance evaluation. Across several experiments, the area under the curve (AUC) of PV-DM was better than that of the other Doc2Vec models, i.e., PV-DBOW and Concatenate. Furthermore, PV-DM classifies both passed and failed essays, whereas PV-DBOW cannot classify passed essays even though it classifies failed essays well. In addition, the classification performance of the logistic regression model on PV-DM embeddings is better than that of the decision tree-based classifier. The implication of these results is that companies can reduce the cost of recruiting good job seekers, and the suggested model can help candidates pre-evaluate their self-introduction essays.
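A minimal sketch of the PV-DM plus logistic regression pipeline, assuming essays is a list of token lists and labels marks pass (1) or fail (0); the vector size, epoch count, and fold count are placeholder settings, not the paper's.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # essays: list of token lists; labels: 1 = passed, 0 = failed (assumed format).
    docs = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(essays)]

    # dm=1 selects PV-DM, the Doc2Vec variant the abstract reports as best.
    model = Doc2Vec(docs, dm=1, vector_size=100, window=5, min_count=2, epochs=20)
    X = [model.dv[i] for i in range(len(essays))]

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, labels, cv=5, scoring='roc_auc')  # K-fold AUC
    print(scores.mean())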

A Method for Learning the Specialized Meaning of Terminology through Mixed Word Embedding (혼합 임베딩을 통한 전문 용어 의미 학습 방안)

  • Kim, Byung Tae;Kim, Nam Gyu
    • The Journal of Information Systems / v.30 no.2 / pp.57-78 / 2021
  • Purpose: In this study, we first try to produce embedding results that reflect the characteristics of both specialized and general documents. In addition, when disparate documents are combined as training material for natural language processing, we propose a method to quantitatively measure how strongly the characteristics of each individual domain are reflected. Approach: Korean Supreme Court precedent documents and the Korean Wikipedia were selected as the specialized and general documents, respectively. After extracting the most similar word pairs and similarity values for unique words observed only in the specialized documents, we observed how those values changed when the specialized documents were embedded together with the general documents. Findings: According to the measurement methods proposed in this study, the specificity of the specialized documents was diluted in the process of combining them with general documents, and the degree of dilution correlated positively with the size of the general document set.
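The before/after comparison this abstract describes can be sketched as follows, assuming specialized_sentences and general_sentences are tokenized sentence lists; the hyper-parameters and the per-word nearest-neighbor bookkeeping are illustrative assumptions.

    from gensim.models import Word2Vec

    def nearest_pairs(model, vocab):
        # For each term, record its most similar word and the similarity value.
        return {w: model.wv.most_similar(w, topn=1)[0] for w in vocab if w in model.wv}

    # 1. Embed the specialized corpus alone (e.g., court precedent sentences).
    spec_model = Word2Vec(specialized_sentences, vector_size=100, min_count=5)
    # A full implementation would keep only words absent from the general corpus.
    unique_terms = set(spec_model.wv.index_to_key)
    before = nearest_pairs(spec_model, unique_terms)

    # 2. Re-embed after mixing in the general documents (e.g., Wikipedia).
    mixed_model = Word2Vec(specialized_sentences + general_sentences,
                           vector_size=100, min_count=5)
    after = nearest_pairs(mixed_model, unique_terms)

    # 3. Observe how the nearest neighbors and similarity values shifted.
    for w in before:
        if w in after:
            print(w, before[w], '->', after[w])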

On Word Embedding Models and Parameters Optimized for Korean (한국어에 적합한 단어 임베딩 모델 및 파라미터 튜닝에 관한 연구)

  • Choi, Sanghyuk;Seol, Jinseok;Lee, Sang-goo
    • Korean Language Information Society: Conference Proceedings / 2016.10a / pp.252-256 / 2016
  • This paper introduces a method for learning word embeddings optimized for Korean. Word embedding maps each word into a fixed-dimensional vector space so that it carries a distributed meaning, and it is used in many natural language processing tasks such as machine translation and named entity recognition. We experimentally search for the training corpus, embedding model, and hyper-parameters that yield the best performance for Korean, and analyze the results.

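A hyper-parameter search of the kind this paper runs might look like the sketch below. The grid, corpus variable, and the analogy evaluation file 'analogies_ko.txt' are all hypothetical; the abstract does not list the actual search space or evaluation set.

    from itertools import product
    from gensim.models import Word2Vec

    best = None
    # Assumed search space over sg (skip-gram vs. CBOW), vector size, and window.
    for sg, size, window in product([1, 0], [100, 300], [3, 5, 10]):
        model = Word2Vec(corpus, sg=sg, vector_size=size, window=window, min_count=5)
        # 'analogies_ko.txt' is a hypothetical Korean analogy evaluation file.
        score, _ = model.wv.evaluate_word_analogies('analogies_ko.txt')
        if best is None or score > best[0]:
            best = (score, sg, size, window)
    print(best)   # best (score, sg, vector_size, window) combination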

Research on the development of demand for medical and bio technology using big data (빅데이터 활용 의학·바이오 부문 사업화 가능 기술 연구)

  • Lee, Bongmun;Nam, Gayoung;Kang, Byeong Chul;Kim, CheeYong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.345-352 / 2022
  • AI-based convergence businesses have expanded with the growth of ICT-convergence medical devices. AI-based medical devices are also shifting the existing treatment-centered medical system toward a paradigm of customized care, such as preliminary diagnosis and prevention, which in turn is driving change across the medical device industry. Current demand forecasting for the commercialization of medical biotechnology relies on Delphi and AHP methods, but these are difficult to generalize because their results fluctuate with the pool of participants. The purpose of this paper is therefore to forecast demand and identify promising technologies by building up big data in medical biotechnology. The method takes candidate technologies from keywords extracted from SCOPUS and uses word2vec to derive analysis indicators, technological distance similarity, and recommended technological similarity of top-level items. In addition, the method builds up academic big data over five years (2016-2020) to discover commercializable technologies from a demand perspective. Lastly, the paper draws on global data studies to develop domestic and international demand for technology discovery in the medical biotechnology field.
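The abstract does not define its word2vec indicators precisely, but one plausible reading can be sketched. This assumes keyword_sentences holds keyword sequences extracted from SCOPUS records, and that "technological distance similarity" reduces to cosine similarity between technology keyword vectors; both are assumptions for illustration.

    from gensim.models import Word2Vec

    # keyword_sentences: keyword sequences extracted from SCOPUS records (assumed).
    model = Word2Vec(keyword_sentences, vector_size=200, window=5, min_count=5)

    def technological_distance_similarity(tech_a, tech_b):
        # One plausible reading of the indicator: cosine similarity between
        # the vectors of two technology keywords.
        return model.wv.similarity(tech_a, tech_b)

    def recommended_technologies(tech, topn=5):
        # Candidate technologies closest to a given technology keyword.
        return model.wv.most_similar(tech, topn=topn)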

Empirical Comparison of Word Similarity Measures Based on Co-Occurrence, Context, and a Vector Space Model

  • Kadowaki, Natsuki;Kishida, Kazuaki
    • Journal of Information Science Theory and Practice / v.8 no.2 / pp.6-17 / 2020
  • Word similarity is often measured to enhance system performance in information retrieval and related fields. This paper reports an experimental comparison of word similarity measures computed for 50 intentionally selected words from a Reuters corpus. Three families of measures were targeted: (1) co-occurrence-based similarity measures (with co-occurrence frequency counted over documents or over sentences), (2) context-based distributional similarity measures obtained from latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), and the Word2Vec algorithm, and (3) similarity measures computed from the tf-idf weights of each word according to a vector space model (VSM). The Pearson correlation coefficient between the VSM-based similarity measures and the document-level co-occurrence-based similarity measures was the highest. Group-average agglomerative hierarchical clustering was also applied to the similarity matrices computed by the individual measures. Evaluating the resulting cluster sets against an answer set revealed that the VSM- and LDA-based similarity measures performed best.
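The matrix-to-matrix comparison underlying the reported correlations can be sketched briefly. This assumes each measure has already produced a 50-by-50 similarity matrix over the same target words; correlating upper-triangular entries is a standard way to compare such matrices, though the paper's exact procedure is not stated here.

    import numpy as np
    from scipy.stats import pearsonr

    def similarity_correlation(sim_a, sim_b):
        # Compare two word-by-word similarity matrices over the same 50 target
        # words (e.g., VSM-based vs. document-level co-occurrence) by taking
        # the Pearson correlation of their upper-triangular entries.
        iu = np.triu_indices_from(sim_a, k=1)
        r, p = pearsonr(sim_a[iu], sim_b[iu])
        return r, p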