• Title/Summary/Keyword: Word Embedding


Analysis and Comparison of Query focused Korean Document Summarization using Word Embedding (워드 임베딩을 이용한 질의 기반 한국어 문서 요약 분석 및 비교)

  • Heu, Jee-Uk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.161-167 / 2019
  • Recently, the amount of information being created has grown rapidly with the spread of state-of-the-art technology and the development of various ICT-based web services. Consequently, users must spend a great deal of time and effort to find the information they need within this mass of data. Document summarization is a technique that efficiently produces a summary of a given document by analyzing it and extracting its key sentences and words. However, previous word embedding techniques are difficult to apply to documents written in Korean when analyzing their contents, owing to the characteristics of the language. In this paper, we propose a new query-focused Korean document summarization method that exploits word embedding techniques such as Word2Vec and FastText, and we compare the performance of the two.
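
To make the approach above concrete, here is a minimal sketch of query-focused extractive scoring in Python with gensim: sentences are ranked by the cosine similarity between an averaged query vector and averaged sentence vectors in a FastText space. The toy corpus, tokenization, and vector-averaging scheme are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from gensim.models import FastText

# toy tokenized corpus (the paper works on tokenized Korean text)
sents = [["word", "embedding", "maps", "words", "to", "vectors"],
         ["summarization", "extracts", "key", "sentences", "from", "documents"],
         ["fasttext", "uses", "subword", "information", "for", "rare", "words"]]

# FastText builds vectors from character n-grams, one reason the paper
# compares it against Word2Vec for morphologically rich Korean
model = FastText(sents, vector_size=50, window=3, min_count=1)

def avg_vec(tokens):
    # represent a sentence (or query) as the mean of its word vectors
    return np.mean([model.wv[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = ["key", "sentence", "extraction"]
q = avg_vec(query)

# rank sentences by query similarity; the top-ranked ones form the summary
ranked = sorted(sents, key=lambda s: cosine(avg_vec(s), q), reverse=True)
print(ranked[0])
```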

A Study on the Deduction of Social Issues Applying Word Embedding: With an Emphasis on News Articles related to the Disabled (단어 임베딩(Word Embedding) 기법을 적용한 키워드 중심의 사회적 이슈 도출 연구: 장애인 관련 뉴스 기사를 중심으로)

  • Choi, Garam;Choi, Sung-Pil
    • Journal of the Korean Society for Information Management / v.35 no.1 / pp.231-250 / 2018
  • In this paper, we propose a new methodology for extracting and formalizing topics at a specific point in time using a set of keywords extracted automatically from online news articles. We first extracted a set of keywords by applying TF-IDF, selected through a series of comparative experiments on various statistical weighting schemes that measure the importance of individual words in a large set of texts. To calculate the semantic relations between the extracted keywords effectively, a set of word embedding vectors was constructed using about 1,000,000 separately collected news articles. The extracted keywords were quantified as numerical vectors and clustered with the K-means algorithm. A qualitative in-depth analysis of the resulting keyword clusters showed that most of them formed appropriate topics, with sufficient semantic concentration for labels to be assigned easily.
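
Below is a minimal sketch of the three-stage pipeline described above (TF-IDF keyword extraction, embedding, K-means clustering) in Python with scikit-learn and gensim; the toy documents and tiny Word2Vec model stand in for the news corpus and for the embedding model trained on ~1,000,000 articles.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from gensim.models import Word2Vec

docs = ["news article about welfare policy for the disabled",
        "employment support policy expands again this year",
        "accessibility of public transport remains limited for the disabled"]

# 1) TF-IDF picks salient keywords from the corpus
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
scores = np.asarray(X.sum(axis=0)).ravel()
terms = np.array(tfidf.get_feature_names_out())
keywords = terms[np.argsort(scores)[::-1][:8]]

# 2) quantify each keyword as a numerical vector (toy Word2Vec here)
w2v = Word2Vec([d.split() for d in docs], vector_size=30, min_count=1)
kept = [k for k in keywords if k in w2v.wv]
vecs = np.array([w2v.wv[k] for k in kept])

# 3) K-means groups the keyword vectors into candidate issue clusters
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)
for c in range(2):
    print(c, [k for k, l in zip(kept, labels) if l == c])
```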

A Study on the Law2Vec Model for Searching Related Law (연관법령 검색을 위한 워드 임베딩 기반 Law2Vec 모형 연구)

  • Kim, Nari;Kim, Hyoung Joong
    • Journal of Digital Contents Society / v.18 no.7 / pp.1419-1425 / 2017
  • The ultimate goal of legal knowledge search is to obtain optimal legal information based on statutes and precedents. Text mining research is actively being undertaken to meet the need for efficient retrieval from large-scale data. A typical method is to use a word embedding algorithm based on neural networks. This paper demonstrates how to search for relevant information by applying word embedding to Korean legal information. First, we extract the statutes referenced by each precedent, in citation order, and use these sequences as the input to Law2Vec. The model learns each law by predicting the laws in its surrounding context; the algorithm then moves over every law in the corpus and repeats the training step. After training, the relationships between laws can be inferred from the embeddings. Search performance was evaluated by precision and recall, computed from how closely the results are associated with the search terms. The test results show that the proposed method extracts related laws far more effectively than existing systems that rely on keyword search alone.
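
As a sketch of the Law2Vec idea under stated assumptions: each precedent contributes the ordered list of statutes it cites, and a skip-gram model learns to predict a statute's surrounding context statutes, so laws cited together end up with nearby vectors. The statute identifiers below are hypothetical placeholders, and gensim's Word2Vec stands in for the paper's model.

```python
from gensim.models import Word2Vec

# each "sentence" is the sequence of statutes cited by one precedent
# (identifiers are made up for illustration)
citation_seqs = [
    ["CivilAct_750", "CivilAct_751", "StateCompensationAct_2"],
    ["CivilAct_750", "CivilAct_751", "CommercialAct_395"],
    ["CriminalAct_347", "CriminalAct_355", "CriminalAct_356"],
    ["CivilAct_750", "StateCompensationAct_2"],
]

# skip-gram (sg=1): predict surrounding statutes from a given statute,
# mirroring the "law predicts its context laws" training described above
model = Word2Vec(citation_seqs, vector_size=50, window=3, min_count=1, sg=1)

# statutes that co-occur across precedents now have similar embeddings
print(model.wv.most_similar("CivilAct_750", topn=3))
```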

User Sentiment Analysis on Amazon Fashion Product Review Using Word Embedding (워드 임베딩을 이용한 아마존 패션 상품 리뷰의 사용자 감성 분석)

  • Lee, Dong-yub;Jo, Jae-Choon;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.8 no.4 / pp.1-8 / 2017
  • In modern society, the fashion market continues to grow both at home and abroad. When a product is purchased through e-commerce, the evaluations of that product written by other consumers influence a consumer's purchase decision. By analyzing consumers' evaluation data, a company can reflect their opinions, which can positively affect its performance. In this paper, we propose a method for building a model that analyzes user sentiment using a word embedding space learned from Amazon fashion product reviews. Experiments trained three SVM classifiers, varying the numbers of positive and negative reviews, on a word embedding space learned from 5.7 million Amazon reviews. The results showed the highest accuracy, 88.0%, when the SVM classifier was trained with 50,000 positive and 50,000 negative reviews.
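
A minimal sketch of this classification setup, assuming each review is represented as the average of its word vectors in the learned embedding space and fed to an SVM; the four toy reviews and the averaging scheme are illustrative stand-ins for the 5.7-million-review corpus.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import LinearSVC

# toy labeled reviews (1 = positive, 0 = negative)
reviews = [("great fit and lovely fabric", 1),
           ("terrible quality it ripped fast", 0),
           ("love the color very comfortable", 1),
           ("awful sizing never buying again", 0)]

tokenized = [text.split() for text, _ in reviews]
w2v = Word2Vec(tokenized, vector_size=50, min_count=1)

def review_vec(tokens):
    # fixed-length review representation: mean of its word vectors
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.array([review_vec(t) for t in tokenized])
y = np.array([label for _, label in reviews])

# a linear SVM separates positive from negative reviews in embedding space
clf = LinearSVC().fit(X, y)
print(clf.predict([review_vec("lovely color great fit".split())]))
```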

Investigating Opinion Mining Performance by Combining Feature Selection Methods with Word Embedding and BOW (Bag-of-Words) (속성선택방법과 워드임베딩 및 BOW (Bag-of-Words)를 결합한 오피니언 마이닝 성과에 관한 연구)

  • Eo, Kyun Sun;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.2 / pp.163-170 / 2019
  • Over the past decade, the growth of the Web has explosively increased the amount of available data. Feature selection is an important step in extracting valuable information from such large volumes of data. This study proposes a novel opinion mining model that combines feature selection (FS) methods with Word2vec (word embedding to vector) and BOW (bag-of-words) features. The FS methods adopted in this study are CFS (correlation-based FS) and IG (information gain). To select the optimal FS method, a number of classifiers were tested: LR (logistic regression), NN (neural network), NBN (naive Bayesian network), RF (random forest), RS (random subspace), and ST (stacking). Empirical results on electronics and kitchen datasets showed that the LR and ST classifiers combined with IG applied to BOW features yield the best opinion mining performance. Results on laptop and restaurant datasets revealed that the RF classifier with IG applied to Word2vec features performs best.
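
One combination of the kind the study compares, sketched in Python with scikit-learn: BOW features, information gain approximated by mutual information, and a logistic regression classifier. The toy documents and the choice of k are assumptions for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["battery life is excellent and the screen is bright",
        "screen cracked within a week very poor build",
        "keyboard feels great and battery lasts long",
        "fan noise is unbearable and support was poor"]
y = [1, 0, 1, 0]  # 1 = positive opinion, 0 = negative

# BOW features -> IG-style selection -> classifier: one
# FS/feature/classifier combination from the design space above
pipe = make_pipeline(
    CountVectorizer(),
    SelectKBest(mutual_info_classif, k=5),  # IG approximated by mutual information
    LogisticRegression(max_iter=1000),
)
pipe.fit(docs, y)
print(pipe.score(docs, y))
```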

Error Correction in Korean Morpheme Recovery using Deep Learning (딥 러닝을 이용한 한국어 형태소의 원형 복원 오류 수정)

  • Hwang, Hyunsun;Lee, Changki
    • Journal of KIISE / v.42 no.11 / pp.1452-1458 / 2015
  • Korean morphological analysis is a difficult process. Because Korean is an agglutinative language, one of the most important steps in morphological analysis is morpheme recovery. Methods using heuristic rules and pre-analyzed partial words have been examined for this task, but their performance is limited because they do not use contextual information. In this study, we built a Korean morpheme recovery system based on deep learning, which uses word embeddings to exploit contextual information. In recovering the morphemes '들/VV' and '듣/VV', the system achieved 97.97% accuracy, outperforming an SVM (Support Vector Machine), which achieved 96.22%.
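
A rough sketch of how context can disambiguate the base form, under stated assumptions: the surrounding words are averaged in an embedding space and a small neural classifier chooses between the candidate morphemes. The toy context words and the MLP are illustrative stand-ins for the paper's deep learning model.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

# toy ambiguity: a surface form may come from '들/VV' (lift) or '듣/VV' (hear);
# the surrounding words carry the signal needed to recover the base form
samples = [(["음악", "을", "어제"], "듣/VV"),
           (["노래", "를", "자주"], "듣/VV"),
           (["가방", "을", "힘껏"], "들/VV"),
           (["상자", "를", "같이"], "들/VV")]

contexts = [ctx for ctx, _ in samples]
w2v = Word2Vec(contexts, vector_size=20, min_count=1)

def ctx_vec(tokens):
    # average context-word embeddings: the contextual information that
    # rule-based and pre-analyzed-word methods lack
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.array([ctx_vec(c) for c in contexts])
y = [label for _, label in samples]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([ctx_vec(["음악", "을", "어제"])]))
```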

The Sentence Similarity Measure Using Deep-Learning and Char2Vec (딥러닝과 Char2Vec을 이용한 문장 유사도 판별)

  • Lim, Geun-Young;Cho, Young-Bok
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.10 / pp.1300-1306 / 2018
  • The purpose of this study is to examine whether Char2Vec can serve as an alternative to Word2Vec, the best-known word embedding model, in the deep-learning-based sentence similarity problem. In our experiments, we used the Siamese Ma-LSTM recurrent neural network architecture to measure the similarity of two arbitrary sentences. The model was implemented with TensorFlow; each model was trained for 200 epochs in a GPU environment, which took about 20 hours. We then compared the training results of the Word2Vec-based and Char2Vec-based models. The Char2Vec-based model, initialized with random weights, recorded 75.1% validation accuracy, while the Word2Vec-based model, pretrained on 3 million words and phrases, recorded 71.6%. Char2Vec is therefore a suitable alternative to Word2Vec for mitigating the problem of high system-memory requirements.
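
A minimal sketch of the Siamese Ma-LSTM architecture in TensorFlow/Keras: two token-ID inputs share one embedding layer and one LSTM encoder, and similarity is exp(-||h_a - h_b||_1), which lies in (0, 1]. The sizes below are hypothetical; in the Char2Vec setting the inputs would be character IDs with a randomly initialized embedding, as in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

max_len, vocab_size, emb_dim = 30, 5000, 100  # hypothetical sizes

inp_a = layers.Input(shape=(max_len,), dtype="int32")
inp_b = layers.Input(shape=(max_len,), dtype="int32")

# shared embedding; random initialization matches the Char2Vec setting
embed = layers.Embedding(vocab_size, emb_dim)
encoder = layers.LSTM(50)  # one encoder reused for both sentences (Siamese)

h_a = encoder(embed(inp_a))
h_b = encoder(embed(inp_b))

# Manhattan similarity: exp(-||h_a - h_b||_1), in (0, 1]
diff = layers.Subtract()([h_a, h_b])
sim = layers.Lambda(
    lambda d: tf.exp(-tf.reduce_sum(tf.abs(d), axis=1, keepdims=True))
)(diff)

model = Model(inputs=[inp_a, inp_b], outputs=sim)
model.compile(optimizer="adam", loss="mse")
model.summary()
```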

Word Embedding using Semantic Restriction of Predicate (용언의 의미 제약을 이용한 단어 임베딩)

  • Lee, Ju-Sang;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2015.10a / pp.181-183 / 2015
  • Recently, deep learning has been widely used in natural language processing, and word representation is important for improving its performance. Word embedding represents words as multi-dimensional vectors learned with artificial neural networks. In this paper, we train word embeddings using word2vec's Skip-gram with negative sampling. The training data are constructed from the semantic constraints on the obligatory arguments of predicates in UWordMap, the Korean lexical map, from which a dictionary of 250,183 words is built for training. Experimental results show that word embeddings trained with this semantic constraint information place semantically similar words close to one another.
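
A brief sketch of the training configuration named above, in Python with gensim: skip-gram (sg=1) with negative sampling (negative=5). The toy predicate/argument-class sequences are hypothetical stand-ins for the constraint entries derived from UWordMap.

```python
from gensim.models import Word2Vec

# hypothetical training "sentences" pairing a predicate with the semantic
# classes its obligatory arguments may take (stand-ins for UWordMap data)
constraint_seqs = [
    ["먹다", "음식", "사람"],    # eat: food, person
    ["마시다", "음료", "사람"],  # drink: beverage, person
    ["읽다", "책", "사람"],      # read: book, person
    ["먹다", "과일", "사람"],    # eat: fruit, person
]

# Skip-gram with negative sampling, the configuration used in the paper
model = Word2Vec(constraint_seqs, vector_size=50, window=2,
                 min_count=1, sg=1, negative=5)

# predicates with shared argument constraints end up with nearby vectors
print(model.wv.most_similar("먹다", topn=2))
```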


Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words

  • Lee, Tae-Seok;Lee, Hyun-Young;Kang, Seung-Shik
    • Journal of Information Processing Systems / v.18 no.3 / pp.344-358 / 2022
  • Text summarization is the task of producing a shorter version of a long document while accurately preserving its main contents. Abstractive summarization generates novel words and phrases using a language generation method, through text transformation and prior-embedded word information. However, newly coined and out-of-vocabulary words decrease the performance of automatic summarization because they are not pre-trained in the machine learning process. In this study, we demonstrate an improvement in summarization quality through the contextualized embedding of BERT with out-of-vocabulary masking. In addition, by explicitly providing precise pointing and an optional copy instruction along with the BERT embedding, we achieved higher accuracy than the baseline model. The recall-based word-generation metric ROUGE-1 score was 55.11 and the word-order-based ROUGE-L score was 39.65.
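
A toy illustration of the OOV-masking idea described above: tokens outside the pretrained vocabulary are replaced with [MASK] before encoding, so the model sees a contextual placeholder it was trained on rather than an unknown token. The vocabulary and sentence are made-up stand-ins; the paper applies this within BERT's tokenization, not plain whitespace splitting.

```python
# tiny stand-in vocabulary; a real model would use BERT's wordpiece vocab
vocab = {"the", "model", "generates", "a", "short", "summary", "[MASK]"}

def mask_oov(tokens, vocab):
    # replace out-of-vocabulary tokens with the [MASK] placeholder
    return [t if t in vocab else "[MASK]" for t in tokens]

sent = "the model generates a short doomscrolling summary".split()
print(mask_oov(sent, vocab))
# ['the', 'model', 'generates', 'a', 'short', '[MASK]', 'summary']
```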

SG-Drop: Faster Skip-Gram by Dropping Context Words

  • Kim, DongJae;Synn, DoangJoo;Kim, Jong-Kook
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.1014-1017 / 2020
  • Many natural language processing (NLP) models use pre-trained word embeddings to leverage latent information. One of the most successful word embedding models is Skip-gram (SG). In this paper, we propose the Skip-gram drop (SG-Drop) model, a variation of the SG model designed to reduce training time efficiently. Furthermore, SG-Drop allows training time to be controlled through a hyperparameter. It can train word embeddings faster than simply reducing the number of training epochs, while better preserving their quality.
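
A sketch of the context-dropping idea under stated assumptions: when generating (center, context) skip-gram training pairs, each context word is kept only with some probability, so fewer pairs are trained and training time falls, controlled by the drop probability. The exact sampling scheme of SG-Drop may differ; drop_p here is an illustrative hyperparameter.

```python
import random

def skipgram_pairs_with_drop(tokens, window=5, drop_p=0.5, seed=0):
    """Generate (center, context) skip-gram pairs, randomly dropping
    context words with probability drop_p to reduce training pairs/time."""
    rng = random.Random(seed)
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            if rng.random() < drop_p:  # drop this context word
                continue
            pairs.append((center, tokens[j]))
    return pairs

toks = "the quick brown fox jumps over the lazy dog".split()
print(len(skipgram_pairs_with_drop(toks, window=2, drop_p=0.0)))  # all pairs
print(len(skipgram_pairs_with_drop(toks, window=2, drop_p=0.5)))  # roughly half
```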