• Title/Summary/Keyword: Word embedding

Association Modeling on Keyword and Abstract Data in Korean Port Research

  • Yoon, Hee-Young;Kwak, Il-Youp
    • Journal of Korea Trade
    • /
    • v.24 no.5
    • /
    • pp.71-86
    • /
    • 2020
  • Purpose - This study investigates research trends by searching the English keywords and abstracts of 1,511 Korean journal articles indexed in the Korea Citation Index over the 2002-2019 period using the term "Port." The study aims to lay the foundation for a more balanced development of port research. Design/methodology - Using the abstract and keyword data, we perform frequency analysis and word embedding (Word2vec). A t-SNE plot shows the main keywords extracted with the TextRank algorithm. To analyze which words were used in what context in our two sub-periods (2002-2010 and 2010-2019), we use Scattertext and scaled F-scores. Findings - First, during the 18-year study period, port research developed through the convergence of diverse academic fields, covering 102 subject areas and 219 journals. Second, our frequency analysis of 4,431 keywords in 1,511 papers shows that the words "Port" (60 times), "Port Competitiveness" (33 times), and "Port Authority" (29 times), among others, attract the most researchers. Third, a word embedding analysis identifies the words most highly correlated with the top eight keywords and visually distinguishes four subject clusters in a t-SNE plot. Fourth, we use Scattertext to compare the words used in the two research sub-periods. Originality/value - This study is the first to apply abstract and keyword analysis and various text mining techniques to Korean journal articles in port research and thus has important implications. Further in-depth studies should collect a greater variety of textual data and analyze and compare port studies from different countries.
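
The Word2vec-plus-t-SNE step described in this abstract can be sketched in a few lines; the following is a minimal illustration assuming gensim and scikit-learn, in which the tokenized abstracts and the plotted keywords are placeholders rather than the study's 1,511-article data (in the paper the plotted keywords come from TextRank).

    # Minimal sketch: train Word2vec on tokenized abstracts, then project
    # selected keyword vectors to 2-D with t-SNE. All data are placeholders.
    import numpy as np
    import matplotlib.pyplot as plt
    from gensim.models import Word2Vec
    from sklearn.manifold import TSNE

    abstracts = [
        ["port", "competitiveness", "logistics", "container", "terminal"],
        ["port", "authority", "governance", "policy", "investment"],
    ] * 50  # placeholder corpus; the study used 1,511 article abstracts

    model = Word2Vec(abstracts, vector_size=100, window=5, min_count=1, sg=0)

    keywords = ["port", "competitiveness", "authority", "logistics"]
    vectors = np.array([model.wv[w] for w in keywords])

    coords = TSNE(n_components=2, perplexity=2,
                  random_state=0).fit_transform(vectors)
    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), word in zip(coords, keywords):
        plt.annotate(word, (x, y))
    plt.show()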

Word-Level Embedding to Improve Performance of Representative Spatio-temporal Document Classification

  • Byoungwook Kim;Hong-Jun Jang
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.830-841
    • /
    • 2023
  • Tokenization is the process of segmenting input text into smaller units, and it is a preprocessing task performed mainly to improve the efficiency of the machine learning process. Various tokenization methods have been proposed in natural language processing, but studies have focused primarily on segmenting text efficiently. Few studies have explored which tokenization methods are suitable for the document classification task in Korean. In this paper, an exploratory study was performed to find the tokenization method best suited to improving the performance of a representative spatio-temporal document classifier for Korean. A convolutional neural network model was used for the experiments, and for the final performance comparison we selected document classification tasks whose performance depends heavily on the tokenization method. The commonly used Jamo, Character, and Word units were adopted as the tokenization methods for the comparative experiments. The experiments confirmed that word-unit tokenization performs best for the representative spatio-temporal document classification task, where the semantic embedding ability of the token itself is important.
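
The three tokenization units compared in this paper can be illustrated as follows; a minimal sketch on a hypothetical Korean phrase, using standard Hangul Unicode arithmetic for the jamo decomposition (not the authors' code).

    # Minimal sketch: Jamo, Character (syllable), and Word tokenization of
    # an illustrative Korean string.
    CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 initial consonants
    JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")       # 21 medial vowels
    JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 28 finals

    def jamo_tokens(text):
        """Decompose Hangul syllables into jamo; pass other characters through."""
        out = []
        for ch in text:
            code = ord(ch) - 0xAC00
            if 0 <= code < 11172:                 # precomposed Hangul syllable
                cho, rest = divmod(code, 588)     # 588 = 21 * 28
                jung, jong = divmod(rest, 28)
                out += [CHO[cho], JUNG[jung]]
                if JONG[jong]:
                    out.append(JONG[jong])
            elif not ch.isspace():
                out.append(ch)
        return out

    sentence = "서울에서 열린 회의"                          # hypothetical example
    print(jamo_tokens(sentence))                           # Jamo units
    print([ch for ch in sentence if not ch.isspace()])     # Character units
    print(sentence.split())                                # Word units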

The Study on Possibility of Applying Word-Level Word Embedding Model of Literature Related to NOS -Focus on Qualitative Performance Evaluation- (과학의 본성 관련 문헌들의 단어수준 워드임베딩 모델 적용 가능성 탐색 -정성적 성능 평가를 중심으로-)

  • Kim, Hyunguk
    • Journal of Science Education
    • /
    • v.46 no.1
    • /
    • pp.17-29
    • /
    • 2022
  • The purpose of this study is to examine qualitatively how efficiently and reasonably a computer can learn themes related to the Nature of Science (NOS). To this end, a corpus was constructed from literature related to NOS (920 abstracts), and the optimal hyperparameters of Word2Vec (CBOW, Skip-gram) were determined. The word-level word embeddings were then evaluated comparatively along the four dimensions of NOS (Inquiry, Thinking, Knowledge, and STS). Based on previous studies and a preliminary performance evaluation, the CBOW model was set to 200 dimensions, five threads, a minimum word frequency of ten, 100 training iterations, and a context window of one, while the Skip-gram model was set to 200 dimensions, five threads, a minimum word frequency of ten, 200 training iterations, and a context window of three. When the models were applied to the four NOS dimensions, Skip-gram performed better on the Inquiry dimension in terms of the types of highly similar words each model produced. On the Thinking and Knowledge dimensions the two models showed no difference in embedding performance, but the highly similar words for each model were drawn from each other's domains, so additional models appear necessary for proper learning. The STS dimension likewise showed embedding performance insufficient to capture comprehensive STS elements, listing an excessive number of words related to problem solving. By having a computer learn NOS-related themes, this study is expected to offer overall implications for models applicable to science education and for the use of artificial intelligence.
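
The reported settings map directly onto the parameters of gensim's Word2Vec; the following minimal sketch reproduces both configurations, with a placeholder corpus standing in for the 920 tokenized NOS abstracts.

    # Minimal sketch: the two Word2Vec variants with the hyperparameters
    # reported above. The corpus is a placeholder, not the NOS literature.
    from gensim.models import Word2Vec

    corpus = [["scientific", "inquiry", "involves", "observation", "evidence"],
              ["scientific", "knowledge", "is", "tentative", "evidence"]] * 100

    # CBOW: 200 dimensions, 5 threads, minimum frequency 10,
    # 100 iterations, context window 1.
    cbow = Word2Vec(corpus, sg=0, vector_size=200, workers=5,
                    min_count=10, epochs=100, window=1)

    # Skip-gram: 200 dimensions, 5 threads, minimum frequency 10,
    # 200 iterations, context window 3.
    skipgram = Word2Vec(corpus, sg=1, vector_size=200, workers=5,
                        min_count=10, epochs=200, window=3)

    # Compare the words most similar to a dimension-related term.
    print(cbow.wv.most_similar("inquiry", topn=5))
    print(skipgram.wv.most_similar("inquiry", topn=5))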

Long Short Term Memory based Political Polarity Analysis in Cyber Public Sphere

  • Kang, Hyeon;Kang, Dae-Ki
    • International Journal of Advanced Culture Technology
    • /
    • v.5 no.4
    • /
    • pp.57-62
    • /
    • 2017
  • In this paper, we apply long short-term memory (LSTM) to classifying political polarity in the cyber public sphere. The data collected from the cyber public sphere are transformed into a word corpus through word embedding. On this corpus we train a recurrent neural network (RNN) whose recurrent units are LSTM cells, with a softmax function applied at the output of the network. We evaluated the proposed system experimentally and plan to enhance it by refining its LSTM units.
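
A minimal sketch of the described architecture, assuming Keras: a word-embedding layer feeding LSTM recurrent units with a softmax output. Vocabulary size, sequence length, embedding width, and class count are illustrative assumptions, not the authors' settings.

    # Minimal sketch: embedding layer -> LSTM -> softmax output, matching
    # the architecture described above. All sizes are assumptions.
    import tensorflow as tf

    vocab_size, seq_len, embed_dim, num_classes = 20000, 100, 128, 2

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len,)),                   # token-id sequences
        tf.keras.layers.Embedding(vocab_size, embed_dim),   # word embedding
        tf.keras.layers.LSTM(64),                           # LSTM recurrent units
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # polarity
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()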

Correlation Analysis of Cancer Biomarkers and COPD Using the Word Embedding (워드 임베딩을 이용한 COPD와 암 관련 바이오마커의 상관관계 분석)

  • Yoon, Byeong-Hun;Kim, Yu-Seop
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.251-254
    • /
    • 2017
  • This study aims to find new biomarkers beyond those already known to be associated with COPD. Cancer-related biomarkers selected from PubMed data are extracted and used to identify the relationship between COPD and the cancer-related biomarkers. Word2vec, one of the word embedding models, is then used to embed the terms. The K-dimensional embeddings of COPD and the cancer-related biomarkers are visualized with t-SNE, and cosine similarity is used to measure the similarity between COPD and each cancer-related biomarker. By combining the cosine similarity and t-SNE results, the correlation between COPD and the cancer-related biomarkers can be identified, and by comparing the cancer-related biomarkers with known COPD-related biomarkers, new biomarkers beyond those already known to be associated with COPD can be found.
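
The similarity step can be sketched as follows, assuming gensim; the training corpus and biomarker terms are placeholders, not the PubMed data used in the paper.

    # Minimal sketch: cosine similarity between the embedding of "COPD" and
    # embeddings of cancer biomarker terms. All data are placeholders.
    import numpy as np
    from gensim.models import Word2Vec

    corpus = [["COPD", "inflammation", "CRP", "elevated"],
              ["lung", "cancer", "IL-6", "CRP", "COPD"]] * 50  # placeholder

    model = Word2Vec(corpus, vector_size=50, min_count=1, sg=1)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    biomarkers = ["CRP", "IL-6"]              # illustrative biomarker terms
    copd_vec = model.wv["COPD"]
    for marker in biomarkers:
        print(marker, cosine(copd_vec, model.wv[marker]))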

A Study on Korean Fake news Detection Model Using Word Embedding (워드 임베딩을 활용한 한국어 가짜뉴스 탐지 모델에 관한 연구)

  • Shim, Jae-Seung;Lee, Jaejun;Jeong, Ii Tae;Ahn, Hyunchul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.199-202
    • /
    • 2020
  • In this paper, we propose a method for improving the performance of fake news detection models by incorporating a word embedding technique. Previous Korean fake news detection studies have mainly relied on detection models built on the sparse term frequency-inverse document frequency (TF-IDF) representation. From the perspective of fake news detection, however, TF-IDF is limited in capturing the linguistic characteristics of news; in particular, it cannot structurally reflect linguistic features that emerge from context. We therefore build the proposed model, a fake news detector whose text preprocessing uses Word2vec, a dense-representation word embedding technique, so that contextual information is also reflected, and we build a TF-IDF-based fake news detection model as the comparison model to validate performance. The comparison confirmed that the proposed Word2vec-based model performs better.
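
The two feature schemes being compared can be sketched as follows, assuming gensim and scikit-learn; the documents, labels, and classifier are illustrative placeholders, not the authors' dataset or model.

    # Minimal sketch: sparse TF-IDF features versus dense averaged-Word2vec
    # features for the same documents. All data are placeholders.
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["충격 단독 보도 사실 무근",
            "정부 부처 공식 발표 확인",
            "출처 불명 소문 확산 주의",
            "기자 현장 취재 검증 완료"]           # hypothetical news snippets
    labels = [1, 0, 1, 0]                         # 1 = fake, 0 = real (assumed)

    # Baseline: sparse TF-IDF document vectors.
    tfidf = TfidfVectorizer().fit_transform(docs)
    clf_sparse = LogisticRegression().fit(tfidf, labels)

    # Proposed direction: dense vectors from averaged Word2vec embeddings.
    tokens = [d.split() for d in docs]
    w2v = Word2Vec(tokens, vector_size=100, min_count=1)
    dense = np.array([np.mean([w2v.wv[t] for t in doc], axis=0)
                      for doc in tokens])
    clf_dense = LogisticRegression().fit(dense, labels)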

Development of chatting program using social issue keyword information (사회적 핵심 이슈 키워드 정보를 활용한 채팅 프로그램 개발)

  • Yoon, Kyung-Suob;Jeong, Won-Hyeok
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.307-310
    • /
    • 2020
  • This paper requires text mining techniques for issue keyword extraction. To extract social issue keywords, we crawl sites that serve as keyword collection sources and then perform morphological analysis to collect meaningful morpheme-level words. For Korean morphological analysis, we use the KoNLPy package in Python. Issue keywords are extracted from the statistics of the words separated by morphological analysis. To analyze related words that support each issue keyword, we perform word embedding, applying the Skip-Gram method of the Word2Vec model. The analyzed issue keywords and their related words are displayed at the top of a chat program that communicates over WebSockets.
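
The described pipeline can be sketched as follows, assuming the KoNLPy and gensim packages named in the abstract; the crawled texts are placeholders, and keyword selection here is plain frequency counting.

    # Minimal sketch: KoNLPy morphological analysis, frequency-based issue
    # keywords, then Skip-gram related words. Texts are placeholders.
    from collections import Counter
    from gensim.models import Word2Vec
    from konlpy.tag import Okt

    texts = ["부동산 가격 상승 이슈가 오늘도 화제입니다",
             "부동산 정책 발표에 시장이 주목합니다"]   # hypothetical crawled texts
    okt = Okt()

    # Morphological analysis: keep nouns as candidate keyword morphemes.
    docs = [okt.nouns(t) for t in texts]

    # Issue keywords from word statistics (here, simple frequency).
    freq = Counter(w for doc in docs for w in doc)
    issue_keywords = [w for w, _ in freq.most_common(5)]

    # Related words via the Skip-gram method of Word2Vec (sg=1).
    model = Word2Vec(docs, sg=1, vector_size=100, window=5, min_count=1)
    related = {k: model.wv.most_similar(k, topn=3)
               for k in issue_keywords if k in model.wv}
    print(issue_keywords, related)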

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining research focused on applications of the second step, such as document classification, clustering, and topic modeling. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve analysis quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form that a computer can understand. Mapping arbitrary objects into a fixed-dimensional space while preserving their algebraic properties, for the purpose of structuring text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, Doc2Vec, which extends Word2Vec and embeds each document into a single vector, is the most widely used. However, the traditional document embedding method represented by Doc2Vec generates a vector for each document from all the words the document contains, so the document vector is affected not only by core words but also by miscellaneous ones. In addition, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to represent a complex document covering multiple subjects accurately. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of traditional document embedding. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after keywords are extracted through various analysis methods; since keyword extraction is not the core subject of the proposed method, we describe the process as applied to documents whose keywords are predefined in the text. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation of traditional document embedding, which is affected by miscellaneous as well as core words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the traditional single-vector approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference among subjects.
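
Steps (3)-(5) of the proposed method can be sketched as follows, assuming gensim and scikit-learn; the training corpus, the document's keywords, and the cluster count are illustrative placeholders, and the per-cluster vector shown here is simply the cluster centroid, one reasonable instantiation of "a vector generated from the keywords constituting each cluster."

    # Minimal sketch of steps (3)-(5): extract one document's keyword
    # vectors, cluster them, and emit one vector per cluster (the centroid).
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.cluster import KMeans

    corpus = [["port", "logistics", "container", "embedding", "clustering",
               "vector", "document"]] * 50        # placeholder training corpus
    model = Word2Vec(corpus, vector_size=50, min_count=1)

    doc_keywords = ["embedding", "clustering", "port", "logistics"]  # one doc
    vectors = np.array([model.wv[w] for w in doc_keywords if w in model.wv])

    k = 2                                         # number of subjects (assumed)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)

    # One vector per identified subject: the centroid of its keyword cluster.
    multi_vectors = km.cluster_centers_
    print(multi_vectors.shape)                    # (k, vector dimension)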

Improving The Performance of Triple Generation Based on Distant Supervision By Using Semantic Similarity (의미 유사도를 활용한 Distant Supervision 기반의 트리플 생성 성능 향상)

  • Yoon, Hee-Geun;Choi, Su Jeong;Park, Seong-Bae
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.653-661
    • /
    • 2016
  • Existing pattern-based triple generation systems built on distant supervision can be flawed by the underlying assumption of distant supervision. To mitigate the flaws arising from this excessive assumption, previous studies have commonly used statistical information to measure the confidence of patterns. In this study, we propose a more accurate confidence measure based on the semantic similarity between patterns and properties. Word embedding, an unsupervised learning method, and WordNet-based similarity measures were adopted to learn word meanings and to measure semantic similarity. To resolve the language mismatch between patterns and properties, we adopted canonical correlation analysis (CCA) to align the bilingual word embedding models, and a translation-based approach for the WordNet-based measure. The results of our experiments indicate that the accuracy of triples filtered by the semantic similarity-based confidence measure was 16% higher than that of the statistics-based approach. These results suggest that a semantic similarity-based confidence measure is more effective than a statistics-based approach for generating high-quality triples.
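
The semantic-similarity confidence idea can be sketched as follows; the embedding model is a placeholder trained inline, and the paper's CCA alignment of the bilingual embedding spaces is assumed to have been done upstream, so only the scoring step is shown.

    # Minimal sketch: score a lexical pattern against a KB property by the
    # cosine similarity of their averaged word vectors. Data are placeholders.
    import numpy as np
    from gensim.models import Word2Vec

    corpus = [["person", "was", "born", "in", "city"],
              ["birth", "place", "of", "person"]] * 50   # placeholder corpus
    wv = Word2Vec(corpus, vector_size=50, min_count=1).wv

    def phrase_vector(words):
        return np.mean([wv[w] for w in words if w in wv], axis=0)

    def confidence(pattern, prop):
        p, q = phrase_vector(pattern), phrase_vector(prop)
        return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

    # Keep only triples whose pattern scores above a confidence threshold.
    print(confidence(["was", "born", "in"], ["birth", "place"]))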