• Title/Abstract/Keywords: word vector


지지벡터기계를 이용한 단어 의미 분류 (Word Sense Classification Using Support Vector Machines)

  • 박준혁;이성욱
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 5, No. 11 / pp. 563-568 / 2016
  • Word sense disambiguation is the problem of determining which of the several dictionary senses of a word is the correct one in a given sentence. We treat this as a multi-class classification problem and classify with support vector machines. Context words of sense-ambiguous words extracted from the Sejong sense-tagged corpus are represented in two vector spaces. The first is a vector space made up of the context words themselves, with binary weights. The second is a vector space onto which the context words, within a given window size, are mapped by a word embedding model. In experiments, we obtained an accuracy of about 87.0% with the context-word vectors and about 86.0% with the word embeddings.
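As a rough illustration of the first feature space above, a binary context-word vector can be built as follows (the tokens, vocabulary, and window size are hypothetical; the paper's actual feature extraction from the Sejong corpus is more involved):

```python
def context_vector(tokens, target_index, vocab, window=2):
    """Binary bag-of-words vector over the context words within the window
    around the ambiguous target word (illustrative feature builder)."""
    vec = [0] * len(vocab)
    lo = max(0, target_index - window)
    hi = min(len(tokens), target_index + window + 1)
    for i in range(lo, hi):
        if i == target_index:
            continue
        j = vocab.get(tokens[i])
        if j is not None:
            vec[j] = 1  # binary weighting, as in the first vector space
    return vec

# toy sentence with "bank" as the ambiguous target word
vocab = {"bank": 0, "river": 1, "money": 2, "deposit": 3}
tokens = ["the", "river", "bank", "money"]
v = context_vector(tokens, 2, vocab)
```

A vector like this would then be fed to a multi-class SVM, one training instance per occurrence of the ambiguous word.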

Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소 (Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model)

  • 김광호;이동현;임민규;김지환
    • 말소리와 음성과학 / Vol. 7, No. 4 / pp. 3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated from a large training corpus with Google's Word2Vec, following the distributional hypothesis. 1-of-|V| coded discrete word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time for the Wall Street Journal corpus (37M words) was reduced from 30 days to 14 days.
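The reduction amounts to replacing each 1-of-|V| history word with a low-dimensional table lookup. A minimal sketch, assuming the 600 network inputs split as 300 dimensions per history word (an assumption; the random table merely stands in for Word2Vec output):

```python
import random

random.seed(0)
DIM = 300  # assumed per-word dimension: 2 trigram history words -> 600 inputs

# hypothetical pretrained table standing in for Word2Vec vectors
embedding = {w: [random.uniform(-1.0, 1.0) for _ in range(DIM)]
             for w in ["the", "stock", "market"]}

def trigram_input(w1, w2):
    """Concatenate the continuous vectors of the two history words,
    replacing their 1-of-|V| (20,000-dimensional) discrete codes."""
    return embedding[w1] + embedding[w2]

x = trigram_input("the", "stock")
```

The network then consumes a 600-dimensional input regardless of vocabulary size, which is what makes the training-time reduction possible.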

SSF: Sentence Similar Function Based on word2vector Similar Elements

  • Yuan, Xinpan;Wang, Songlin;Wan, Lanjun;Zhang, Chengyuan
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp. 1503-1516 / 2019
  • In this paper, to improve the accuracy of long-sentence similarity calculation, we propose a sentence similarity calculation method based on a system similarity function. The algorithm uses word2vec vectors as the system elements for calculating sentence similarity. The higher accuracy of our algorithm derives from two characteristics: the negative effect of the penalty term, and the fact that the sentence similarity function (SSF) based on word2vec similar elements is not commutative in its arguments. In later studies, we found that the time complexity of our algorithm depends on the computation of similar elements, so we build an index of potentially similar elements while training the word vectors. Finally, the experimental results show that our algorithm is more accurate than the word mover's distance (WMD) and has the lowest query time of the three SSF calculation methods.
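The non-commutativity can be illustrated with a simplified stand-in for SSF (the penalty form and the toy vectors are assumptions; the paper's exact function differs):

```python
import math

def cos(a, b):
    """Cosine similarity between two dense vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def ssf(s1, s2, vec, penalty=0.1):
    """Illustrative asymmetric similarity: each word of s1 is matched to its
    most similar element of s2, averaged, minus a length-difference penalty.
    This only shows why ssf(a, b) != ssf(b, a) in general."""
    score = sum(max(cos(vec[w], vec[u]) for u in s2) for w in s1) / len(s1)
    return score - penalty * abs(len(s1) - len(s2))

vec = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]}
s_ab = ssf(["cat"], ["dog", "car"], vec)
s_ba = ssf(["dog", "car"], ["cat"], vec)
```

Swapping the argument order changes which sentence's words pick their best matches, so the two directions generally give different scores.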

한의학 고문헌 데이터 분석을 위한 단어 임베딩 기법 비교: 자연어처리 방법을 적용하여 (Comparison between Word Embedding Techniques in Traditional Korean Medicine for Data Analysis: Implementation of a Natural Language Processing Method)

  • 오준호
    • 대한한의학원전학회지 / Vol. 32, No. 1 / pp. 61-74 / 2019
  • Objectives : The purpose of this study is to help select an appropriate word embedding method when analyzing East Asian traditional medicine texts as data. Methods : Based on prescription data that reflect traditional East Asian medical practice, we examined 4 count-based and 2 prediction-based word embedding methods. To compare these methods intuitively, we proposed a "prescription generating game" and compared its results with those from the 6 methods. Results : When adjacent vectors are extracted, the count-based word embedding methods retrieve the main herbs that are frequently used in conjunction with each other, whereas the prediction-based methods retrieve synonyms of the herbs. Conclusions : Count-based word embedding methods seem more effective than prediction-based methods for analyzing the use of domesticated herbs. Among the count-based methods, the TF vector tends to exaggerate the frequency effect, so the TF-IDF vector or the co-word vector may be a more reasonable choice, and the t-score vector can be recommended when searching for unusual information that frequency alone cannot reveal. Prediction-based embedding, on the other hand, seems effective for deriving words with similar meanings in context.
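A minimal sketch of the co-word (co-occurrence) variant among the count-based methods, with hypothetical herb names standing in for the prescription data:

```python
from collections import Counter

# toy "prescriptions": each is a set of herbs used together (names hypothetical)
prescriptions = [["ginseng", "licorice", "ginger"],
                 ["ginseng", "licorice"],
                 ["ginger", "jujube"]]

def coword_vector(target, docs):
    """Count-based co-word vector: how often each herb appears in the same
    prescription as the target. A sketch of one of the four count-based
    methods compared in the paper."""
    counts = Counter()
    for doc in docs:
        if target in doc:
            counts.update(h for h in doc if h != target)
    return counts

v = coword_vector("ginseng", prescriptions)
```

Extracting the highest-valued entries of such a vector yields the herbs most frequently used alongside the target, which matches the behavior the paper reports for count-based methods.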

Word Embedding 자질을 이용한 한국어 개체명 인식 및 분류 (Korean Named Entity Recognition and Classification using Word Embedding Features)

  • 최윤수;차정원
    • 정보과학회 논문지 / Vol. 43, No. 6 / pp. 678-685 / 2016
  • Korean named entity recognition has been studied extensively, but compared to English it suffers from a shortage of features. In this paper, we propose using word embedding features for named entity recognition to address this feature shortage. Word vectors are generated with the CBOW (Continuous Bag-of-Words) model, and cluster information is derived from the word vectors with the K-means algorithm. The word vectors and the cluster information are then used as word embedding features in CRFs (Conditional Random Fields). In experiments, performance improved over the baseline system by 1.17%, 0.61%, and 1.19% in the TV, Sports, and IT domains, respectively. The proposed method also outperformed other named entity recognition and classification systems, demonstrating its effectiveness.
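The cluster-information step can be sketched with a minimal K-means over toy word vectors (the vectors, k, and iteration count are illustrative; the paper uses CBOW vectors and its own settings):

```python
import random

def kmeans(vectors, k, iters=10, seed=0):
    """Minimal K-means over word vectors. The resulting cluster id serves as
    a categorical feature alongside the raw vector, a sketch of the paper's
    feature setup rather than its exact configuration."""
    rng = random.Random(seed)
    words = list(vectors)
    centers = [list(vectors[w]) for w in rng.sample(words, k)]
    assign = {}
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for w in words:
            assign[w] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(vectors[w], centers[c])))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [vectors[w] for w in words if assign[w] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

vecs = {"seoul": [1.0, 0.0], "busan": [0.9, 0.1],
        "apple": [0.0, 1.0], "pear": [0.1, 0.9]}
clusters = kmeans(vecs, 2)
```

Each word's cluster id can then be emitted as a discrete CRF feature, which generalizes better than the raw vector for rare words.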

A Study on Word Vector Models for Representing Korean Semantic Information

  • Yang, Hejung;Lee, Young-In;Lee, Hyun-jung;Cho, Sook Whan;Koo, Myoung-Wan
    • 말소리와 음성과학 / Vol. 7, No. 4 / pp. 41-47 / 2015
  • This paper examines whether the Global Vector model is applicable to Korean data as a universal learning algorithm. The main purpose of this study is to compare the global vector model (GloVe) with the word2vec models, namely the continuous bag-of-words (CBOW) model and the skip-gram (SG) model. For this purpose, we conducted an experiment using an evaluation corpus consisting of 70 target words for word similarities and 819 pairs of Korean words for analogies. In the word similarity task, the Pearson correlation coefficients with human judgments were 0.3133 for GloVe, 0.2637 for CBOW, and 0.2177 for SG. In the word analogy task, the overall accuracy on semantic and syntactic relations was 67% for GloVe, 66% for CBOW, and 57% for SG.
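The word-similarity evaluation reduces to correlating model similarity scores with human ratings; a sketch with made-up scores (not the paper's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical model cosine similarities vs. human judgments for word pairs
model = [0.9, 0.7, 0.2, 0.1]
human = [9.0, 6.5, 3.0, 1.5]
r = pearson(model, human)
```

Each model's score list is correlated against the same human judgments, and the model with the higher coefficient (GloVe, in the paper's results) better matches human similarity intuitions.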

확장된 벡터 공간 모델을 이용한 한국어 문서 분류 방안 (Korean Document Classification Using Extended Vector Space Model)

  • 이상곤
    • 정보처리학회논문지B / Vol. 18B, No. 2 / pp. 93-108 / 2011
  • This paper proposes an extended vector space model that uses information about ambiguous words and disambiguating words to improve the precision of Korean document classification. A vector in the vector space model may have an additional axis with the same weight, but because conventional methods do nothing with that axis, problems arise when vectors are compared. We define a word that forms such an equally weighted axis as an ambiguous word, and identify ambiguous words by computing the mutual information between words and categories. A word that resolves the ambiguity introduced by an ambiguous word is defined as a disambiguating word, and its strength is determined by computing mutual information over the words that appear in the same documents as the ambiguous word. The proposed method improves document classification precision by extending the vector dimensions with these ambiguous and disambiguating words.
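The mutual information used here can be computed pointwise per word-category pair; a sketch with illustrative counts (the paper's exact formulation may aggregate these differently):

```python
import math

def pmi(word_count, cat_count, joint_count, total):
    """Pointwise mutual information between a word and a category, estimated
    from document counts. A word with similar PMI across several categories
    would be a candidate 'ambiguous word'; counts below are illustrative."""
    p_w = word_count / total        # P(word)
    p_c = cat_count / total         # P(category)
    p_wc = joint_count / total      # P(word, category)
    return math.log2(p_wc / (p_w * p_c))

# toy counts: word in 50 docs, category has 200 docs, 40 docs overlap
score = pmi(word_count=50, cat_count=200, joint_count=40, total=1000)
```

A high PMI for exactly one category means the word discriminates well; comparable PMI in two or more categories is the signature of the ambiguous words the paper targets.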

Analyzing Errors in Bilingual Multi-word Lexicons Automatically Constructed through a Pivot Language

  • Seo, Hyeong-Won;Kim, Jae-Hoon
    • Journal of Advanced Marine Engineering and Technology / Vol. 39, No. 2 / pp. 172-178 / 2015
  • Constructing a bilingual multi-word lexicon is confronted with many difficulties, such as the absence of a commonly accepted gold-standard dataset; moreover, there is no universally agreed definition of what a multi-word unit is. With these problems in mind, this paper evaluates and analyzes the context vector approach, a novel alignment method for constructing bilingual lexicons from parallel corpora, by comparing it with a more conventional method. The approach builds context vectors for both source and target single-word units from two parallel corpora. To adapt the approach to multi-word units, we first identify all multi-word candidates (noun phrases in this work) and then concatenate each of them into a single-word unit, so that the context vector approach can be applied to multi-word units unchanged. In our experiments, the context vector approach showed stronger performance than the other approach. The contribution of this paper is an analysis of the various types of errors in the experimental results. In future work, we will study similarity measures that cover not only a multi-word unit itself but also its constituents.
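The concatenation step, fusing identified multi-word candidates into single tokens so the single-word context-vector machinery applies unchanged, can be sketched as follows (the tokens and candidate set are hypothetical; the paper extracts noun phrases):

```python
def fuse_multiword(tokens, candidates):
    """Greedily replace the longest matching multi-word candidate at each
    position with one underscore-joined token (illustrative sketch)."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(len(tokens) - i, 1, -1):  # try longest span first
            if tuple(tokens[i:i + n]) in candidates:
                out.append("_".join(tokens[i:i + n]))
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = ["machine", "translation", "system", "works"]
cands = {("machine", "translation")}
fused = fuse_multiword(tokens, cands)
```

After fusing, each multi-word unit occupies one vocabulary slot and receives an ordinary context vector like any single word.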

한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상 (Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex))

  • 이정훈;조상현;권혁철
    • 한국멀티미디어학회논문지 / Vol. 25, No. 3 / pp. 493-501 / 2022
  • This paper studies context-sensitive spelling error correction. It uses the Korean WordNet (KorLex)[1], which represents the relationships between words as a graph, to improve the performance of a correction technique[2] based on the vector information of embedded words. The Korean WordNet was constructed for Korean on the model of WordNet[3], developed at Princeton University. To learn a semantic network in graph form, or to exploit it as learned vector information, the graph must be transformed into vectors by embedding learning. For this transformation, a limited number of nodes are listed in a line, like a sentence, before being fed to training; DeepWalk[4] is one learning technique that uses this strategy, and we use it to learn the graph of words in the Korean WordNet. For correction, the graph embedding is concatenated with the word vectors of a trained language model, and the final correction word is determined by the cosine distance between vectors. To test whether the graph embedding information improves context-sensitive spelling error correction, we constructed confused word pairs and evaluated them from the perspective of Word Sense Disambiguation (WSD). In the experiments, the average correction performance over all confused word pairs improved by 2.24% compared to the baseline.
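The node-listing strategy, writing bounded random walks over the graph as "sentences" for a word2vec-style learner, is the core of DeepWalk; a minimal sketch over a toy graph (the graph, walk length, and walk count are assumptions, not KorLex):

```python
import random

def deepwalk_walks(graph, walks_per_node=2, walk_len=4, seed=0):
    """Generate DeepWalk-style random walks: each walk is a 'sentence' of
    node names that a word2vec-style model can then embed."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = graph[walk[-1]]
                if not nbrs:
                    break  # dead end: truncate the walk
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# toy semantic graph standing in for the KorLex relations
graph = {"dog": ["animal"], "cat": ["animal"], "animal": ["dog", "cat"]}
walks = deepwalk_walks(graph)
```

Feeding these walks to a skip-gram learner yields node vectors, which the paper then concatenates with language-model word vectors before the cosine-distance comparison.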

워드 임베딩과 딥러닝 기법을 이용한 SMS 문자 메시지 필터링 (SMS Text Messages Filtering using Word Embedding and Deep Learning Techniques)

  • 이현영;강승식
    • 스마트미디어저널 / Vol. 7, No. 4 / pp. 24-29 / 2018
  • In deep learning, text analysis for natural language processing represents words as vectors through word embedding. In this paper, we propose a method that builds document vectors from SMS text messages using word embedding and deep learning techniques and classifies the messages as spam or legitimate. So that words with similar contexts are placed close together in the vector space, automatic word spacing is applied as a preprocessing step; and because senders distort the jamo (syllable components) of words with special characters to evade spam filtering, the word and sentence vectors are generated from this text with its spelling deliberately corrupted. Sentence vectors are built with two word embedding algorithms, CBOW and skip-gram, and the spam filtering performance of the deep learning approach is evaluated by comparing its accuracy with SVM Light.
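One common way to compose the word vectors of a message into a single document vector is simple averaging; a sketch (the vectors are toy values, and the paper's exact sentence-vector composition may differ):

```python
def document_vector(tokens, word_vecs):
    """Average the word vectors of a message into one document vector,
    skipping out-of-vocabulary tokens (illustrative composition)."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# toy embedding table; spam-like words cluster on the first axis
word_vecs = {"free": [1.0, 0.0], "prize": [0.8, 0.2], "hello": [0.0, 1.0]}
dv = document_vector(["free", "prize"], word_vecs)
```

The resulting fixed-length vector is what a downstream classifier (a deep network here, SVM Light in the baseline) consumes, regardless of message length.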