• Title/Summary/Keyword: Document Embedding

Prediction of Drug-Drug Interaction Based on Deep Learning Using Drug Information Document Embedding (약물 정보 문서 임베딩을 이용한 딥러닝 기반 약물 간 상호작용 예측)

  • Jung, Sun-woo;Yoo, Sun-yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.276-278
    • /
    • 2022
  • All drugs have a specific action in the body, and in many cases drugs are combined because of complications or new symptoms that arise during an existing drug treatment. In such cases, unexpected interactions may occur within the body, so predicting drug-drug interactions is a very important task for safe drug use. In this study, we propose a deep learning-based predictive model that learns from drug information documents to predict the interactions that may occur when multiple drugs are used. Each drug information document was created by combining several properties, such as the drug's mechanism of action, toxicity, and targets, using DrugBank data. A drug's information document is then paired with that of another drug and used as input to the deep learning-based predictive model, which outputs the interaction between the two drugs. By analyzing how the experimental results differ under various conditions, this study can be used to predict future interactions between new drug pairs.

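As a rough illustration of the pairing scheme described above, the sketch below concatenates two precomputed drug-document embeddings and feeds them to a small classifier. The embedding dimension, layer sizes, and class count are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: pair two drug-document embeddings and predict interaction.
# Assumes each drug's information document is already embedded as a 128-d vector.
import torch
import torch.nn as nn

class DDIPredictor(nn.Module):
    def __init__(self, emb_dim=128, hidden=256, n_interactions=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim * 2, hidden),  # concatenated drug pair
            nn.ReLU(),
            nn.Linear(hidden, n_interactions),
        )

    def forward(self, drug_a, drug_b):
        pair = torch.cat([drug_a, drug_b], dim=-1)  # (batch, 2 * emb_dim)
        return self.mlp(pair)                       # interaction logits

model = DDIPredictor()
a = torch.randn(4, 128)   # placeholder document embeddings for drug A
b = torch.randn(4, 128)   # placeholder document embeddings for drug B
print(model(a, b).shape)  # torch.Size([4, 2])
```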

A Method for Learning the Specialized Meaning of Terminology through Mixed Word Embedding (혼합 임베딩을 통한 전문 용어 의미 학습 방안)

  • Kim, Byung Tae;Kim, Nam Gyu
    • The Journal of Information Systems
    • /
    • v.30 no.2
    • /
    • pp.57-78
    • /
    • 2021
  • Purpose In this study, we first attempt to produce embedding results that reflect the characteristics of both specialized and general documents. In addition, when disparate documents are combined as training material for natural language processing, we propose a method that can quantitatively measure the degree to which the characteristics of each individual domain are reflected. Approach For this study, Korean Supreme Court precedent documents and the Korean Wikipedia were selected as the specialized and general documents, respectively. After extracting the most similar word pairs and the similarities of unique words observed only in the specialized documents, we observed how those values changed when the specialized documents were embedded together with the general documents. Findings According to the measurement methods proposed in this study, it was confirmed that the specificity of the specialized documents was diluted in the process of combining them with general documents, and that the degree of dilution was positively correlated with the size of the general document collection.
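
The measurement idea can be illustrated roughly as below: train Word2Vec on the specialized corpus alone and again on the mixed corpus, then compare how the nearest-neighbor similarity of a probe word changes. The toy corpora and the probe word are placeholders, not the paper's data.

```python
# Illustrative sketch: measure how mixing in general documents dilutes the
# specialized usage of a term, via its nearest-neighbor similarity.
from gensim.models import Word2Vec

specialized = [
    ["the", "plaintiff", "filed", "an", "appeal"],
    ["the", "court", "dismissed", "the", "appeal"],
] * 50
general = [
    ["the", "weather", "was", "nice", "today"],
    ["an", "appeal", "for", "donations", "was", "made"],
] * 50

def top_pair_similarity(corpus, word):
    model = Word2Vec(corpus, vector_size=50, min_count=1, seed=1, workers=1)
    neighbor, sim = model.wv.most_similar(word, topn=1)[0]
    return neighbor, sim

print(top_pair_similarity(specialized, "appeal"))            # specialized meaning
print(top_pair_similarity(specialized + general, "appeal"))  # after mixing
```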

Document Embedding and Image Content Analysis for Improving News Clustering System (뉴스 클러스터링 개선을 위한 문서 임베딩 및 이미지 분석 자질의 활용)

  • Kim, Siyeon;Kim, Sang-Bum
    • Annual Conference on Human and Language Technology
    • /
    • 2015.10a
    • /
    • pp.104-108
    • /
    • 2015
  • As large volumes of news are produced, techniques for organizing them effectively have recently been studied actively. Among these, news clustering depends on the performance of a classifier that determines whether two news articles cover the same event, and in most cases BoW (Bag-of-Words)-based vector similarity is used. In this paper, we propose a method that improves the performance of this classifier by measuring not only the BoW-based vector similarity but also the similarity of the photographs contained in the two documents and the relatedness of their topics, adding these as classifier features. The similarity of the photographs and the relatedness of the topics were measured using a deep learning-based CNN and neural network-based document embedding, which have recently attracted attention. Experimental results show a 3.4% performance improvement when the two proposed features are used, compared with a classifier based only on the conventional BoW-based vector similarity.

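A minimal sketch of the proposed feature design, assuming the CNN image vectors and document embeddings are precomputed; the classifier, data, and labels below are placeholders.

```python
# Sketch of the feature design: BoW similarity plus image- and topic-similarity
# features feeding a same-event classifier. All vectors are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def pair_features(bow_a, bow_b, img_a, img_b, doc_a, doc_b):
    return [cosine(bow_a, bow_b),   # classic BoW vector similarity
            cosine(img_a, img_b),   # CNN image-content similarity
            cosine(doc_a, doc_b)]   # document-embedding topic relatedness

rng = np.random.default_rng(0)
X = np.array([pair_features(*rng.normal(size=(6, 64))) for _ in range(200)])
y = rng.integers(0, 2, size=200)    # 1 = same event (placeholder labels)
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:1]))
```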

Self-Supervised Document Representation Method

  • Yun, Yeoil;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.187-197
    • /
    • 2020
  • Recently, various methods of text embedding using deep learning algorithms have been proposed. In particular, pre-trained language models, which are trained on tremendous amounts of text data, are widely applied for embedding new text data. However, traditional pre-trained language models have a limitation: it is hard for them to capture the unique context of new text data when the text contains too many tokens. In this paper, we propose a self-supervised fine-tuning method for pre-trained language models that infers vectors for long texts. We applied our method to news articles, classified them into categories, and compared the classification accuracy with that of traditional models. As a result, it was confirmed that the vectors generated by the proposed model express the inherent characteristics of a document more accurately than the vectors generated by the traditional models.
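
The paper's self-supervised fine-tuning procedure is not reproduced here; as a simple baseline for the long-text problem it targets, the sketch below splits a document into chunks that fit the model window and averages the chunk vectors, using the Hugging Face transformers API. Model name and chunk size are assumptions.

```python
# Baseline sketch for embedding a document longer than the model window:
# encode fixed-size chunks with a pre-trained LM and average the [CLS] vectors.
# (The paper's self-supervised fine-tuning step is not reproduced here.)
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")

def embed_long_text(text, chunk_tokens=510):
    ids = tok(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + chunk_tokens] for i in range(0, len(ids), chunk_tokens)]
    vecs = []
    with torch.no_grad():
        for chunk in chunks:
            enc = tok(tok.decode(chunk), return_tensors="pt",
                      truncation=True, max_length=512)
            vecs.append(lm(**enc).last_hidden_state[:, 0])  # [CLS] vector
    return torch.cat(vecs).mean(dim=0)                      # document vector

doc_vec = embed_long_text("some very long news article ... " * 200)
print(doc_vec.shape)  # torch.Size([768])
```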

Quantitative and Qualitative Considerations to Apply Methods for Identifying Content Relevance between Knowledge Into Managing Knowledge Service (지식 간 내용적 연관성 파악 기법의 지식 서비스 관리 접목을 위한 정량적/정성적 고려사항 검토)

  • Yoo, Keedong
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.3
    • /
    • pp.119-132
    • /
    • 2021
  • Identification of associated knowledge based on content relevance is a fundamental capability in managing the service and security of core knowledge. This study compares the performance of methods that identify associated knowledge based on content relevance, i.e., how well the keyword-based and word-embedding approaches compose networks of associated documents, to examine which method is superior from quantitative and qualitative perspectives. As a result, the keyword-based approach showed superior performance in core document identification and semantic information representation, while the word-embedding approach showed superior performance in F1-score and accuracy, association intensity representation, and large-volume document processing. This study can be utilized for more practical management of associated knowledge services, reflecting the needs of companies and users.
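
A rough sketch of the two approaches being compared, with toy documents and stand-in embeddings: keyword (TF-IDF) similarity versus embedding similarity, each thresholded into an associated-document network.

```python
# Contrast of the two association methods: TF-IDF keyword similarity vs.
# embedding similarity, each thresholded into a document adjacency matrix.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["knowledge service management", "document network composition",
        "knowledge security service", "word embedding of documents"]

kw_sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))
emb = np.random.default_rng(0).normal(size=(len(docs), 64))  # stand-in embeddings
emb_sim = cosine_similarity(emb)

def network(sim, threshold=0.2):
    adj = (sim >= threshold).astype(int)
    np.fill_diagonal(adj, 0)   # no self-links
    return adj                 # adjacency matrix of the document network

print(network(kw_sim))
print(network(emb_sim))
```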

SMS Text Messages Filtering using Word Embedding and Deep Learning Techniques (워드 임베딩과 딥러닝 기법을 이용한 SMS 문자 메시지 필터링)

  • Lee, Hyun Young;Kang, Seung Shik
    • Smart Media Journal
    • /
    • v.7 no.4
    • /
    • pp.24-29
    • /
    • 2018
  • In deep learning-based natural language processing, text analysis techniques represent words in vector form through word embedding. In this paper, we propose a method of constructing document vectors and classifying text messages as spam or normal, using word embedding and deep learning methods. Automatic word spacing, applied in the preprocessing step, ensures that words appearing in similar contexts are represented adjacently in the vector space. In addition, spam messages contain intentional word-formation errors with non-alphabetic or unusual characters, which are designed to avoid being blocked by spam filters. Two embedding algorithms, CBOW and skip-gram, are used to produce the sentence vectors, and the performance and accuracy of the deep learning-based spam filter model are measured against those of SVM Light.
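
The sentence-vector construction can be sketched with gensim as below: train CBOW (sg=0) and skip-gram (sg=1) models and average a message's word vectors. The toy messages are placeholders, and the deep learning classifier and SVM Light comparison are omitted.

```python
# Sketch of the two sentence-vector pipelines compared above: average the
# CBOW or skip-gram word vectors of a message.
import numpy as np
from gensim.models import Word2Vec

messages = [["free", "prize", "call", "now"],
            ["see", "you", "at", "lunch"]] * 100

cbow = Word2Vec(messages, vector_size=50, sg=0, min_count=1, seed=1)  # CBOW
skip = Word2Vec(messages, vector_size=50, sg=1, min_count=1, seed=1)  # skip-gram

def sentence_vector(model, tokens):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0)   # document vector = mean of word vectors

print(sentence_vector(cbow, ["free", "call"]).shape)  # (50,)
print(sentence_vector(skip, ["free", "call"]).shape)  # (50,)
```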

Word Embeddings-Based Pseudo Relevance Feedback Using Deep Averaging Networks for Arabic Document Retrieval

  • Farhan, Yasir Hadi;Noah, Shahrul Azman Mohd;Mohd, Masnizah;Atwan, Jaffar
    • Journal of Information Science Theory and Practice
    • /
    • v.9 no.2
    • /
    • pp.1-17
    • /
    • 2021
  • Pseudo-relevance feedback (PRF) is a powerful query expansion (QE) technique that reformulates queries using the top-k pseudo-relevant documents, from which expansion terms are chosen. Traditional PRF frameworks have robustly handled the vocabulary mismatch between user queries and pertinent documents; nevertheless, expansion terms are chosen while disregarding their similarity to the original query terms. Word embedding (WE) schemes are techniques of significant interest for QE within the information retrieval domain. Deep averaging networks (DANs) define a framework in which the average of the word vectors is passed through multiple linear layers, so the complete query is represented by the average vector of its terms. This vector may be employed for determining expansion terms pertinent to the entire query. In this study, we propose a DANs-based technique that augments PRF frameworks by integrating WE similarities to facilitate Arabic information retrieval. The technique is based on the principle that the top pseudo-relevant document set is assessed to determine the candidate term distribution, and expansion terms are then selected according to their similarity to the average vector representing the initial query terms. The Word2Vec model is selected for executing the experiments on the standard Arabic TREC 2001/2002 collection. The majority of the evaluations indicate that the PRF implementation in the present study offers a significant performance improvement over the baseline PRF frameworks.
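
A minimal sketch of the DAN-style selection rule described above, with random stand-in vectors instead of a trained Word2Vec model: the query is represented by the average of its term vectors, and candidate terms from the pseudo-relevant documents are ranked by cosine similarity to it.

```python
# Sketch: rank candidate expansion terms from the top-k pseudo-relevant
# documents by cosine similarity to the average query vector (DAN-style).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["bank", "river", "finance", "loan", "water"]
wv = {w: rng.normal(size=50) for w in vocab}   # stand-in word vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expansion_terms(query_terms, candidates, n=2):
    q_avg = np.mean([wv[t] for t in query_terms], axis=0)  # average query vector
    ranked = sorted(candidates, key=lambda c: cosine(wv[c], q_avg), reverse=True)
    return ranked[:n]

# candidates would come from the top-k pseudo-relevant documents
print(expansion_terms(["bank", "loan"], ["river", "finance", "water"]))
```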

Hybrid Word-Character Neural Network Model for the Improvement of Document Classification (문서 분류의 개선을 위한 단어-문자 혼합 신경망 모델)

  • Hong, Daeyoung;Shim, Kyuseok
    • Journal of KIISE
    • /
    • v.44 no.12
    • /
    • pp.1290-1295
    • /
    • 2017
  • Document classification, the task of assigning a category to each document based on its text, is one of the fundamental problems in natural language processing. It may be applied in various settings, such as topic classification and sentiment classification. Neural network models for document classification can be divided into two categories: word-level models and character-level models, which treat words and characters as the basic units, respectively. In this study, we propose a neural network model that combines character-level and word-level models to improve document classification performance. The proposed model extracts the feature vector of each word by combining information obtained from a word embedding matrix with information encoded by a character-level neural network. Based on the word feature vectors, the model classifies documents with a hierarchical structure in which recurrent neural networks with attention mechanisms are used at both the word and the sentence level. Experiments on real-life datasets demonstrate the effectiveness of the proposed model.
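
The word-character combination can be sketched as below, assuming a character-level LSTM as the character encoder; the sizes and the encoder choice are illustrative, and the hierarchical attention layers above it are omitted.

```python
# Sketch of the hybrid word feature: concatenate a word embedding with a
# character-level LSTM encoding of the same word.
import torch
import torch.nn as nn

class HybridWordEncoder(nn.Module):
    def __init__(self, n_words=1000, n_chars=60, w_dim=100, c_dim=25, c_hid=25):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.char_rnn = nn.LSTM(c_dim, c_hid, batch_first=True)

    def forward(self, word_ids, char_ids):
        w = self.word_emb(word_ids)                   # (batch, w_dim)
        _, (h, _) = self.char_rnn(self.char_emb(char_ids))
        return torch.cat([w, h[-1]], dim=-1)          # (batch, w_dim + c_hid)

enc = HybridWordEncoder()
words = torch.randint(0, 1000, (4,))     # 4 word ids
chars = torch.randint(0, 60, (4, 12))    # 12 character ids per word
print(enc(words, chars).shape)           # torch.Size([4, 125])
```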

Improving the effectiveness of document extraction summary based on the amount of sentence information (문장 정보량 기반 문서 추출 요약의 효과성 제고)

  • Kim, Eun Hee;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.3
    • /
    • pp.31-38
    • /
    • 2022
  • In research on extractive document summarization, various methods have been proposed for selecting important sentences based on the relationships between sentences. In Korean document summarization using the summation similarity of sentences, the summation similarity was regarded as the amount of information in a sentence, and summary sentences were extracted by selecting important sentences on that basis. However, this approach does not take into account the varying importance that each sentence contributes to the document as a whole. Therefore, in this study, we propose an extractive summarization method that selects important sentences based on both the quantitative and the semantic amount of information in each sentence. As a result, the agreement of the extracted sentences was 58.56% and the ROUGE-L score was 34, which was superior to the method using the summation similarity alone. Compared with deep learning-based methods, the extractive method is lighter, but its performance is similar. This confirms that compressing information based on semantic similarity between sentences is an important approach in extractive summarization. In addition, an abstractive summarization step can be performed effectively on top of the quickly extracted summary.
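
A rough sketch of the summation-similarity baseline the study builds on, using TF-IDF cosine similarity as a stand-in for the paper's sentence-information measures; the sentences are toy placeholders.

```python
# Sketch: score each sentence by the sum of its similarities to all other
# sentences (its "information amount") and keep the top-scoring ones.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["the court ruled on the appeal",
             "the appeal was filed last year",
             "weather was sunny on the day of the ruling",
             "the ruling on the appeal set a precedent"]

sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
scores = sim.sum(axis=1) - 1           # exclude self-similarity (always 1)
top = np.argsort(scores)[::-1][:2]     # pick the 2 most informative sentences
print([sentences[i] for i in sorted(top)])
```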

Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words

  • Lee, Tae-Seok;Lee, Hyun-Young;Kang, Seung-Shik
    • Journal of Information Processing Systems
    • /
    • v.18 no.3
    • /
    • pp.344-358
    • /
    • 2022
  • Text summarization is the task of producing a shorter version of a long document while accurately preserving its main content. Abstractive summarization generates novel words and phrases using a language-generation method, through text transformation and prior embedded word information. However, newly coined words and out-of-vocabulary words decrease the performance of automatic summarization because they are not pre-trained during the machine learning process. In this study, we demonstrate an improvement in summarization quality through the contextualized embedding of BERT with out-of-vocabulary masking. In addition, by explicitly providing precise pointing and an optional copy instruction along with the BERT embedding, we achieved higher accuracy than the baseline model. The recall-based word-generation metric ROUGE-1 score was 55.11, and the word-order-based ROUGE-L score was 39.65.
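
The out-of-vocabulary masking idea can be sketched simply, assuming a whole-word lookup against the BERT vocabulary; this is a simplified view of the approach, not the paper's full pointing-and-copy pipeline.

```python
# Sketch: replace whole words absent from the model vocabulary with [MASK],
# so BERT supplies a contextualized vector for them during encoding.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
vocab = tok.get_vocab()

def mask_oov(words):
    # words not found in the vocabulary are masked
    return [w if w in vocab else tok.mask_token for w in words]

print(mask_oov(["the", "new", "word", "hallyu", "spread", "fast"]))
# e.g. ['the', 'new', 'word', '[MASK]', 'spread', 'fast']
```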