• Title/Summary/Keyword: text embedding


Analysis and Comparison of Query focused Korean Document Summarization using Word Embedding (워드 임베딩을 이용한 질의 기반 한국어 문서 요약 분석 및 비교)

  • Heu, Jee-Uk
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.19 no.6, pp.161-167, 2019
  • Recently, the amount of information being created has risen rapidly with the spread of state-of-the-art technology and the development of various ICT-based web services. As a result, users need a great deal of time and effort to find the information they actually want within this mass of data. Document summarization is the technique of efficiently producing a summary of a given document by analyzing it and extracting its key sentences and words. However, because of the characteristics of the language, it is hard to apply existing word embedding techniques directly to documents written in Korean when analyzing their content. In this paper, we propose a new query-focused Korean document summarization method that exploits word embedding techniques such as Word2Vec and FastText, and we compare the performance of the two.
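
Below is a minimal sketch (not the authors' code) of the query-focused scoring step described in the abstract: train Word2Vec and FastText with gensim on a pre-tokenized Korean corpus, average word vectors per sentence, and rank sentences by cosine similarity to the query. The toy corpus, query, and hyperparameters are illustrative assumptions.

```python
# Query-focused extractive summarization sketch: compare Word2Vec vs. FastText.
import numpy as np
from gensim.models import Word2Vec, FastText

# Toy pre-tokenized corpus; in practice these would be morpheme-analyzed Korean sentences.
corpus = [
    ["문서", "요약", "은", "핵심", "문장", "추출", "기술", "이다"],
    ["워드", "임베딩", "은", "단어", "를", "벡터", "로", "표현", "한다"],
    ["질의", "기반", "요약", "은", "질의", "와", "관련된", "문장", "을", "선택", "한다"],
]
query = ["질의", "요약"]

def sentence_vector(model, tokens):
    """Average the embeddings of in-vocabulary tokens."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def summarize(model, sentences, query, top_k=1):
    """Rank sentences by cosine similarity to the query vector."""
    q = sentence_vector(model, query)
    scores = []
    for sent in sentences:
        s = sentence_vector(model, sent)
        denom = np.linalg.norm(q) * np.linalg.norm(s) or 1.0
        scores.append(float(q @ s) / denom)
    order = np.argsort(scores)[::-1][:top_k]
    return [" ".join(sentences[i]) for i in order]

for name, cls in [("Word2Vec", Word2Vec), ("FastText", FastText)]:
    model = cls(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
    print(name, summarize(model, corpus, query))
```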

Trends in FTA Research of Domestic and International Journal using Paper Abstract Data (초록데이터를 활용한 국내외 FTA 연구동향: 2000-2020)

  • Hee-Young Yoon; Il-Youp Kwak
    • Korea Trade Review, v.45 no.5, pp.37-53, 2020
  • This study aims to provide implications for research development by comparing domestic and international studies on the subject of FTAs. To this end, papers written between 2000 and July 23, 2020 whose titles contained "FTA" (Free Trade Agreement) were selected as research data: 1,944 papers retrieved from the Korean Citation Index (KCI) for domestic research and 970 from Web of Science and SCOPUS for international research, and research trends were analyzed through keywords and abstracts. Frequency analysis and word embedding (Word2vec) were used to analyze the data, and the results were visualized using t-SNE and Scattertext. The results are as follows. First, 16 of the top 30 keywords were shared between domestic and international research. Many domestic studies analyze the outcomes or expected effects for countries that have concluded or discussed FTAs with Korea, whereas international research covers a more diverse range of subjects. Second, in the word embedding analysis, t-SNE was used to visually represent the connections among the top 60 keywords. Finally, Scattertext was used to show which keywords were used frequently in studies from 2000 to 2010 versus 2011 to 2020. This study is the first to draw implications for academic development through abstract and keyword analysis by applying various text mining approaches to FTA-related research papers. Further in-depth research is needed, including collecting a wider variety of FTA-related text data and comparing FTA studies across countries.
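
A rough sketch, under stated assumptions, of the pipeline the abstract describes: Word2Vec over tokenized abstracts, then a t-SNE projection of the most frequent keywords. The toy abstracts and the choice of eight keywords instead of the paper's sixty are illustrative only.

```python
# Word2Vec + t-SNE keyword-map sketch for research-trend analysis.
from collections import Counter
import numpy as np
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

# Toy tokenized abstracts; the study used KCI / Web of Science / SCOPUS abstracts.
abstracts = [
    ["fta", "tariff", "trade", "agreement", "export"],
    ["fta", "korea", "trade", "effect", "welfare"],
    ["agreement", "negotiation", "tariff", "import", "export"],
] * 10

model = Word2Vec(sentences=abstracts, vector_size=50, window=5, min_count=2, sg=1, epochs=30)

# Pick the most frequent keywords (the paper visualized the top 60).
freq = Counter(tok for doc in abstracts for tok in doc)
top_words = [w for w, _ in freq.most_common(8) if w in model.wv]
vectors = np.array([model.wv[w] for w in top_words])

# Project to 2-D with t-SNE; perplexity must be smaller than the number of points.
coords = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(vectors)
for word, (x, y) in zip(top_words, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```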

A Study on the Law2Vec Model for Searching Related Law (연관법령 검색을 위한 워드 임베딩 기반 Law2Vec 모형 연구)

  • Kim, Nari; Kim, Hyoung Joong
    • Journal of Digital Contents Society, v.18 no.7, pp.1419-1425, 2017
  • The ultimate goal of legal knowledge search is to obtain optimal legal information based on statutes and precedents. Text mining research is actively being undertaken to meet the need for efficient retrieval from large-scale data. A typical method is to use a neural-network-based word embedding algorithm. This paper demonstrates how to search for relevant information by applying word embedding to Korean legal information. First, we extract the reference laws cited in each precedent, in order, and use them as the input of Law2Vec. The model learns each law by predicting the laws cited around it; the algorithm then moves over every law in the corpus and repeats the training step. After training finishes, we can infer the relationships between laws from the resulting embeddings. Search performance was evaluated using precision and recall, computed from how closely the results were associated with the search terms. The test results show that the proposed approach is far more useful for extracting related laws than existing systems that rely on keyword search alone.
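
The Law2Vec training described above maps naturally onto a standard skip-gram model in which each precedent's ordered reference laws form one "sentence". The sketch below assumes gensim Word2Vec and made-up statute identifiers, so it illustrates the idea rather than the authors' implementation.

```python
# Law2Vec-style embedding sketch: laws cited together in precedents become neighbours.
from gensim.models import Word2Vec

# Each precedent contributes the ordered list of laws it cites.
precedents = [
    ["civil_act_750", "civil_act_751", "state_compensation_act_2"],
    ["civil_act_750", "civil_act_756", "civil_act_751"],
    ["criminal_act_347", "criminal_procedure_act_312", "criminal_act_355"],
    ["criminal_act_355", "criminal_act_356", "criminal_procedure_act_312"],
] * 20

# Skip-gram: each law is trained to predict the laws cited around it.
law2vec = Word2Vec(sentences=precedents, vector_size=50, window=2,
                   min_count=1, sg=1, epochs=50)

# Related-law search: nearest neighbours in the embedding space.
for law, score in law2vec.wv.most_similar("civil_act_750", topn=3):
    print(f"{law}\t{score:.3f}")
```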

Improved Spam Filter via Handling of Text Embedded Image E-mail

  • Youn, Seongwook; Cho, Hyun-Chong
    • Journal of Electrical Engineering and Technology, v.10 no.1, pp.401-407, 2015
  • The increase of image spam, a kind of spam in which the text message is embedded in an attached image to defeat spam-filtering techniques, is a major problem of the current e-mail system. For nearly a decade, content-based filtering using text classification or machine learning has been the major trend in anti-spam filtering systems. Recently, spammers have tried to defeat anti-spam filters with many techniques, and embedding text in an attached image is one of them. We previously proposed an ontology-based spam filter; however, that system handles only text e-mail, while the percentage of e-mails with attached images is increasing sharply. The contribution of this paper is that we add image e-mail handling capability to the anti-spam filtering system while keeping the advantages of the previous text-based spam filtering system. The proposed system also gives a low false-negative rate, which means that users' valuable e-mail is rarely regarded as spam.
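
The abstract does not spell out how image e-mail is processed; one common way to add that capability (an assumption here, not necessarily the authors' ontology-based method) is to OCR text out of attached images and pass it to the existing text-based filter, as in the sketch below, where `text_classifier` is a hypothetical callable standing in for such a filter.

```python
# OCR-assisted handling of text-embedded image e-mail (illustrative assumption).
from email import message_from_bytes
from io import BytesIO

from PIL import Image
import pytesseract  # requires a local Tesseract OCR installation

def extract_text_parts(raw_email: bytes) -> str:
    """Collect plain-text bodies plus OCR text recovered from image attachments."""
    msg = message_from_bytes(raw_email)
    chunks = []
    for part in msg.walk():
        ctype = part.get_content_type()
        payload = part.get_payload(decode=True)
        if payload is None:
            continue
        if ctype == "text/plain":
            chunks.append(payload.decode(errors="ignore"))
        elif ctype.startswith("image/"):
            image = Image.open(BytesIO(payload))
            chunks.append(pytesseract.image_to_string(image))
    return "\n".join(chunks)

def is_spam(raw_email: bytes, text_classifier) -> bool:
    """Reuse any existing text-based filter on the combined text."""
    return text_classifier(extract_text_parts(raw_email))
```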

Domain-Specific Terminology Mapping Methodology Using Supervised Autoencoders (지도학습 오토인코더를 이용한 전문어의 범용어 공간 매핑 방법론)

  • Byung Ho Yoon; Junwoo Kim; Namgyu Kim
    • Information Systems Review, v.25 no.1, pp.93-110, 2023
  • Recently, attempts have been made to convert unstructured text into vectors and to analyze vast amounts of natural language for various purposes. In particular, the demand for analyzing texts in specialized domains is rapidly increasing, so studies are being conducted to analyze specialized and general-purpose documents simultaneously. To analyze specialized terms together with general terms, it is necessary to align the embedding space of the specialized terms with that of the general terms. So far, attempts have been made to align the embeddings of specialized terms to the embedding space of general terms through a transformation matrix or mapping function. However, linear transformation based on a transformation matrix has the limitation that it only works well in a local range. To overcome this limitation, various nonlinear vector alignment methods have recently been proposed. We propose a vector alignment model that maps the embedding space of specialized terms onto the embedding space of general terms through end-to-end learning that trains an autoencoder and a regression model simultaneously. In experiments with R&D documents in the "Healthcare" field, we confirmed that the proposed methodology showed superior accuracy compared to the traditional model.
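
A hedged sketch of the joint objective described in the abstract, assuming a simple PyTorch setup: an autoencoder reconstructs domain-specific term vectors while a regression head maps the latent code into the general-purpose embedding space, and both losses are minimized end to end. The dimensions and data here are placeholders, not the authors' architecture.

```python
# Joint autoencoder + regression alignment of a domain embedding space (sketch).
import torch
import torch.nn as nn

DOMAIN_DIM, GENERAL_DIM, LATENT_DIM = 100, 100, 32

class AlignmentModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(DOMAIN_DIM, LATENT_DIM), nn.ReLU())
        self.decoder = nn.Linear(LATENT_DIM, DOMAIN_DIM)      # reconstruct the domain vector
        self.regressor = nn.Linear(LATENT_DIM, GENERAL_DIM)   # map into the general space

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.regressor(z)

# Toy paired data: domain-term vectors and their general-space targets.
domain_vecs = torch.randn(256, DOMAIN_DIM)
general_vecs = torch.randn(256, GENERAL_DIM)

model = AlignmentModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(100):
    recon, mapped = model(domain_vecs)
    # End-to-end objective: reconstruction loss + alignment (regression) loss.
    loss = mse(recon, domain_vecs) + mse(mapped, general_vecs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```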

Exploring Teaching Method for Productive Knowledge of Scientific Concept Words through Science Textbook Quantitative Analysis (과학교과서 텍스트의 계량적 분석을 이용한 과학 개념어의 생산적 지식 교육 방안 탐색)

  • Yun, Eunjeong
    • Journal of The Korean Association For Science Education, v.40 no.1, pp.41-50, 2020
  • Looking at the understanding of scientific concepts from a linguistic perspective, it is very important for students to develop a deep and sophisticated understanding of the words used in scientific concepts as well as the ability to use them correctly. Noting that the foundation for teaching productive knowledge of scientific words is not well established, this study aims to provide a basis for such education by exploring ways to teach the relationships among the words that constitute a scientific concept productively and effectively. To this end, we first extracted the relationships among the words that make up scientific concepts from science textbook text using quantitative text analysis methods; second, we qualitatively examined the meaning of the word relationships extracted by each method; and third, we proposed writing activities to help improve productive knowledge of scientific concept words. We analyzed the text of the "Force and motion" unit of a first-grade science textbook using four quantitative linguistic analysis methods: word clustering, co-occurrence, text network analysis, and word embedding. As a result, this study suggests four writing activities: a sentence-completion activity based on the word cluster analysis, a fill-in-the-blanks activity based on the co-occurrence analysis, material-oriented writing activities based on the text network analysis, and a list of important words derived from the word embedding results.
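
Two of the four analyses named in the abstract, co-occurrence counting and word embedding, can be illustrated with a short sketch; the toy "force and motion" sentences and gensim settings below are assumptions, not the study's data.

```python
# Sentence-level co-occurrence counts and embedding-based neighbours (sketch).
from collections import Counter
from itertools import combinations
from gensim.models import Word2Vec

sentences = [
    ["force", "changes", "motion", "of", "object"],
    ["net", "force", "causes", "acceleration"],
    ["friction", "is", "a", "force", "opposing", "motion"],
] * 10

# Co-occurrence: how often two words appear in the same sentence.
cooccur = Counter()
for sent in sentences:
    for a, b in combinations(sorted(set(sent)), 2):
        cooccur[(a, b)] += 1
print(cooccur.most_common(5))

# Word embedding: nearest neighbours of a target concept word.
model = Word2Vec(sentences=sentences, vector_size=30, window=3, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("force", topn=3))
```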

Fine-Grained Mobile Application Clustering Model Using Retrofitted Document Embedding

  • Yoon, Yeo-Chan; Lee, Junwoo; Park, So-Young; Lee, Changki
    • ETRI Journal, v.39 no.4, pp.443-454, 2017
  • In this paper, we propose a fine-grained mobile application clustering model using retrofitted document embedding. To automatically determine the clusters and their numbers with no predefined categories, the proposed model initializes the clusters based on title keywords and then merges similar clusters. For improved clustering performance, the proposed model distinguishes between an accurate clustering step with titles and an expansive clustering step with descriptions. During the accurate clustering step, an automatically tagged set is constructed as a result. This set is utilized to learn a high-performance document vector. During the expansive clustering step, more applications are then classified using this document vector. Experimental results showed that the purity of the proposed model increased by 0.19, and the entropy decreased by 1.18, compared with the K-means algorithm. In addition, the mean average precision improved by more than 0.09 in a comparison with a support vector machine classifier.
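
A simplified sketch of the two-step idea (not the ETRI implementation): seed clusters from title keywords during the accurate step, learn document vectors from the seeded apps, then assign the remaining apps to the nearest cluster centroid during the expansive step. The seed keywords and toy apps are illustrative, and gensim Doc2Vec stands in for the paper's retrofitted document embedding.

```python
# Two-step app clustering sketch: accurate step on titles, expansive step on descriptions.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

apps = [
    {"title": "fast photo editor", "desc": "edit crop filter photos images".split()},
    {"title": "photo collage maker", "desc": "combine photos into collage grid".split()},
    {"title": "budget tracker", "desc": "track spending budget expenses money".split()},
    {"title": "daily expense log", "desc": "log daily expenses and spending".split()},
    {"title": "pic toolbox", "desc": "filters stickers and photo frames".split()},  # unseeded
]

# Step 1: accurate clustering with titles (here: a single seed keyword per cluster).
seed_keywords = {"photo": 0, "budget": 1, "expense": 1}
tagged, unseeded = [], []
for i, app in enumerate(apps):
    cluster = next((c for k, c in seed_keywords.items() if k in app["title"]), None)
    if cluster is None:
        unseeded.append(i)
    else:
        tagged.append((i, cluster))

# Step 2: expansive clustering with descriptions via learned document vectors.
docs = [TaggedDocument(app["desc"], [str(i)]) for i, app in enumerate(apps)]
model = Doc2Vec(docs, vector_size=30, min_count=1, epochs=100)

centroids = {}
for cluster in set(c for _, c in tagged):
    members = [model.dv[str(i)] for i, c in tagged if c == cluster]
    centroids[cluster] = np.mean(members, axis=0)

for i in unseeded:
    v = model.dv[str(i)]
    best = max(centroids, key=lambda c: float(v @ centroids[c]) /
               (np.linalg.norm(v) * np.linalg.norm(centroids[c])))
    print(f"app {i} ({apps[i]['title']!r}) -> cluster {best}")
```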

Ontology Matching Method Based on Word Embedding and Structural Similarity

  • Hongzhou Duan; Yuxiang Sun; Yongju Lee
    • International Journal of Advanced Smart Convergence, v.12 no.3, pp.75-88, 2023
  • In a given domain, experts have different understandings of domain knowledge or different purposes in constructing an ontology, which leads to multiple different ontologies in that domain. This phenomenon is called ontology heterogeneity, and it creates difficulties for research fields that require cross-ontology operations such as knowledge fusion and knowledge reasoning. In this paper, we propose a novel ontology matching model that combines word embedding with a concatenated continuous bag-of-words model. Our goal is to improve the word vectors and to distinguish semantic similarity from descriptive associations. Moreover, we make the most of textual and structural information from the ontology and from external resources. We represent the ontology as a graph and use the SimRank algorithm to calculate structural similarity. Our approach employs a similarity queue to produce one-to-many matching results, which provide a wider range of insights for subsequent mining and analysis and thereby enhance and refine the ontology matching methodology.
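
A hedged sketch of combining a lexical similarity with SimRank structural similarity over graph-shaped ontologies, roughly in the spirit of the abstract; the toy ontologies, the string-based stand-in for the embedding similarity, and the 0.5/0.5 weighting are all assumptions.

```python
# Lexical + SimRank structural similarity for ontology matching (illustrative sketch).
from difflib import SequenceMatcher
import networkx as nx

# Two tiny ontologies represented as concept graphs.
G1 = nx.Graph([("Thing", "Person"), ("Person", "Author"), ("Thing", "Document")])
G2 = nx.Graph([("Entity", "Human"), ("Human", "Writer"), ("Entity", "Paper")])

# Structural similarity via SimRank on the union graph, anchored by one known match.
union = nx.union(G1, G2, rename=("a_", "b_"))
union.add_edges_from([("a_Thing", "b_Entity")])
structural = nx.simrank_similarity(union)

def lexical_similarity(c1: str, c2: str) -> float:
    """Placeholder for the word-embedding similarity used in the paper."""
    return SequenceMatcher(None, c1.lower(), c2.lower()).ratio()

# Combined score; the equal weighting is illustrative only.
candidates = []
for c1 in G1.nodes:
    for c2 in G2.nodes:
        score = 0.5 * lexical_similarity(c1, c2) + 0.5 * structural[f"a_{c1}"][f"b_{c2}"]
        candidates.append((score, c1, c2))

# Similarity queue: keep the highest-scoring candidate pairs.
for score, c1, c2 in sorted(candidates, reverse=True)[:5]:
    print(f"{c1} ~ {c2}: {score:.3f}")
```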

Application of Domain Knowledge in Transaction-based Recommender Systems through Word Embedding (트랜잭션 기반 추천 시스템에서 워드 임베딩을 통한 도메인 지식 반영)

  • Choi, Yeoungje; Moon, Hyun Sil; Cho, Yoonho
    • Knowledge Management Research, v.21 no.1, pp.117-136, 2020
  • In research on recommender systems, which address users' information overload problem, the use of transactional data has been explored continuously. In particular, because firms can easily obtain transactional data with the development of IoT technologies, transaction-based recommender systems have recently been used in various areas. However, transactional data have limitations: it is hard to reflect domain knowledge, and the data do not directly show user preferences for individual items. Therefore, in this study, we propose a method that applies word embedding in a transaction-based recommender system to reflect preference differences among users as well as domain knowledge. Our approach is based on SAR, which shows high performance among recommender systems, and we improve its components by using FastText, one of the word embedding techniques. Experimental results show that reflecting domain knowledge and preference differences has a significant effect on recommender performance. We therefore expect this study to contribute to the improvement of transaction-based recommender systems and to suggest an expansion of the data used in recommender systems.
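
The sketch below is an assumption about how FastText item embeddings could feed an SAR-style score (user-item affinity times item-item similarity), not the authors' exact design; transactions are treated as "sentences" of item identifiers.

```python
# FastText item embeddings as the similarity component of an SAR-style recommender (sketch).
from gensim.models import FastText

# Each transaction is treated as a sentence of item identifiers.
transactions = [
    ["milk", "bread", "butter"],
    ["milk", "cereal", "banana"],
    ["bread", "butter", "jam"],
    ["cereal", "milk", "banana"],
] * 20

item_model = FastText(sentences=transactions, vector_size=32, window=3,
                      min_count=1, sg=1, epochs=40)

def item_similarity(a: str, b: str) -> float:
    return float(item_model.wv.similarity(a, b))

def sar_style_score(user_history, candidate: str) -> float:
    """SAR combines user-item affinity with item-item similarity;
    here affinity is a simple interaction count (illustrative)."""
    affinity = {item: user_history.count(item) for item in set(user_history)}
    return sum(aff * item_similarity(item, candidate) for item, aff in affinity.items())

history = ["milk", "bread", "milk"]
for candidate in ["butter", "banana", "jam"]:
    print(candidate, round(sar_style_score(history, candidate), 3))
```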

A Study on the Application of Natural Language Processing in Health Care Big Data: Focusing on Word Embedding Methods (보건의료 빅데이터에서의 자연어처리기법 적용방안 연구: 단어임베딩 방법을 중심으로)

  • Kim, Hansang; Chung, Yeojin
    • Health Policy and Management, v.30 no.1, pp.15-25, 2020
  • While healthcare data sets include extensive information about patients, many researchers face limitations in analyzing them due to intrinsic characteristics such as heterogeneity, longitudinal irregularity, and noise. In particular, since the majority of medical history information is recorded as text codes, the use of such information has been limited by the high dimensionality of the explanatory variables. To address this problem, recent studies have applied word embedding techniques, originally developed for natural language processing, and obtained positive results in terms of dimensionality reduction and the accuracy of prediction models. This paper reviews deep learning-based natural language processing techniques (word embedding) and summarizes research cases that have used those techniques in the health care field. We then propose a research framework for applying deep learning-based natural language processing to the analysis of domestic health insurance data.
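
A minimal sketch of the reviewed idea: embed medical-history codes with Word2Vec so each patient is represented by the mean of their code vectors instead of high-dimensional one-hot indicators. The diagnosis codes, corpus, and dimensions below are illustrative assumptions.

```python
# Code embedding for dimensionality reduction of medical-history features (sketch).
import numpy as np
from gensim.models import Word2Vec

# Each patient's visit history as a sequence of (toy) diagnosis codes.
patients = [
    ["E11", "I10", "E78"],          # diabetes, hypertension, dyslipidaemia
    ["I10", "I25", "E78"],
    ["J45", "J30", "J20"],          # respiratory codes
    ["J45", "J20", "J06"],
] * 25

code2vec = Word2Vec(sentences=patients, vector_size=16, window=5,
                    min_count=1, sg=1, epochs=30)

def patient_vector(codes):
    """Low-dimensional patient representation instead of one-hot code indicators."""
    return np.mean([code2vec.wv[c] for c in codes if c in code2vec.wv], axis=0)

X = np.vstack([patient_vector(p) for p in patients])
print(X.shape)  # (n_patients, 16) rather than one column per distinct code
```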