• Title/Summary/Keyword: Word embedding

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.355-364 / 2023
  • As advanced cyber threats continue to increase, new types of cyber attacks are difficult to detect with existing pattern- or signature-based intrusion detection methods, so research on anomaly detection using data-driven artificial intelligence is growing. Supervised anomaly detection is hard to deploy in real environments because it requires sufficient labeled data for training; unsupervised methods, which learn from normal data alone and detect anomalies as deviations from the patterns in the data itself, have therefore been actively studied. This study extracts latent vectors that preserve useful sequence information from sequence log data and builds an anomaly detection model on top of them. Word2Vec was used to create a dense vector representation corresponding to the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the densely embedded sequences. Three autoencoder variants were built: a denoising autoencoder based on the recurrent GRU (Gated Recurrent Unit), which suits sequence data; a one-dimensional convolutional autoencoder, which mitigates the limited short-term memory a GRU can exhibit; and an autoencoder combining the GRU with one-dimensional convolution. The experiments used the time-series NGIDS (Next Generation IDS Dataset) data. The combined GRU-plus-convolution autoencoder proved more efficient than the GRU-only or convolution-only models in terms of training time for extracting useful latent patterns, and showed more stable anomaly detection performance with smaller fluctuations.
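As a point of reference, the following is a minimal Keras sketch of such a combined GRU + one-dimensional-convolution denoising autoencoder. The sequence shape, layer sizes, and noise level are illustrative assumptions, not the paper's configuration.

```python
# Denoising autoencoder over Word2Vec-embedded sequences that combines a
# 1-D convolution (local patterns) with a GRU (sequential dependencies).
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, EMB_DIM, LATENT = 50, 300, 64    # assumed sequence length / embedding dim

inputs = layers.Input(shape=(SEQ_LEN, EMB_DIM))
x = layers.GaussianNoise(0.1)(inputs)                             # corrupt input (denoising)
x = layers.Conv1D(128, 3, padding="same", activation="relu")(x)   # local sequence patterns
latent = layers.GRU(LATENT)(x)                                    # latent vector of the sequence

x = layers.RepeatVector(SEQ_LEN)(latent)
x = layers.GRU(128, return_sequences=True)(x)
outputs = layers.TimeDistributed(layers.Dense(EMB_DIM))(x)        # reconstruct the embeddings

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
```

Trained on normal sequences only, such a model can score a new sequence by its reconstruction error, flagging sequences whose error exceeds a threshold as anomalies.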

Research Trends in Record Management Using Unstructured Text Data Analysis (비정형 텍스트 데이터 분석을 활용한 기록관리 분야 연구동향)

  • Deokyong Hong;Junseok Heo
    • Journal of Korean Society of Archives and Records Management / v.23 no.4 / pp.73-89 / 2023
  • This study analyzes the frequency of keywords appearing in Korean abstracts, the unstructured text data of the domestic records management research field, using text mining techniques, and identifies domestic research trends through distance analysis between the keywords. To this end, 1,157 articles were extracted from 7 of the 28 journals found under the major category (interdisciplinary studies) and middle category (library and information science) in the journal statistics (registered and candidate journals) of the Korean Citation Index (KCI), and their 77,578 keywords were visualized. t-SNE (t-Distributed Stochastic Neighbor Embedding) and Scattertext analyses based on Word2vec were then performed. As a result, first, keywords such as "record management" (889 occurrences), "analysis" (888), "archive" (742), "record" (562), and "utilization" (449) were confirmed to be topics that researchers treat as significant. Second, Word2vec analysis generated vector representations of the keywords, and the similarity distances between them were examined and visualized using t-SNE and Scattertext. In the visualization, the record management research area divided into two groups: keywords such as "archiving," "national record management," "standardization," "official documents," and "record management systems" occurred frequently in the first (past) group, while keywords such as "community," "data," "record information service," "online," and "digital archives" in the second (current) group were drawing substantial attention.
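The embedding-plus-projection step described above can be sketched as follows; the toy keyword lists and all parameter values are placeholders, not the study's data or settings.

```python
# Word2vec keyword embedding followed by t-SNE projection to 2-D,
# so that semantically similar keywords appear close together.
from gensim.models import Word2Vec
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

docs = [["record", "management", "archive"],
        ["digital", "archives", "community", "record"]]   # toy keyword lists per article

model = Word2Vec(docs, vector_size=100, window=5, min_count=1, seed=0)
words = model.wv.index_to_key
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(model.wv[words])

plt.scatter(coords[:, 0], coords[:, 1], s=5)
for (x, y), w in zip(coords, words):
    plt.annotate(w, (x, y))    # nearby labels indicate related research keywords
plt.show()
```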

Generating a Korean Sentiment Lexicon Through Sentiment Score Propagation (감정점수의 전파를 통한 한국어 감정사전 생성)

  • Park, Ho-Min;Kim, Chang-Hyun;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering / v.9 no.2 / pp.53-60 / 2020
  • Sentiment analysis is the automated process of understanding attitudes and opinions about a given topic from written or spoken text. One approach is dictionary-based, in which a sentiment lexicon plays a central role. In this paper, we propose a method to automatically generate a Korean sentiment lexicon from the well-known English sentiment lexicon VADER (Valence Aware Dictionary and sEntiment Reasoner). The proposed method consists of three steps. The first step builds a Korean-English bilingual lexicon from a Korean-English parallel corpus; the bilingual lexicon is a set of pairs between VADER sentiment words and Korean morphemes, the candidates for Korean sentiment words. The second step constructs a bilingual word graph from the bilingual lexicon. The third step runs a label propagation algorithm over the bilingual graph, applying it repeatedly until the values of all vertices converge, which yields the new Korean sentiment lexicon. Empirically, a dictionary-based sentiment classifier using this Korean sentiment lexicon outperforms machine learning-based approaches on the KMU sentiment corpus and the Naver sentiment corpus. In future work, we will apply the proposed approach to generate multilingual sentiment lexica.
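The propagation step can be illustrated with a minimal sketch: VADER entries act as seed vertices with fixed valence scores, and each Korean candidate repeatedly takes the weighted average of its neighbors' scores until convergence. The toy graph, weights, and scores below are assumptions for illustration only.

```python
# Label propagation of sentiment scores over a bilingual word graph.
import networkx as nx

G = nx.Graph()
G.add_edge("good", "좋다", weight=1.0)     # English seed <-> Korean candidate
G.add_edge("bad", "나쁘다", weight=1.0)
G.add_edge("좋다", "나쁘다", weight=0.1)   # weak intra-language link

seeds = {"good": 1.9, "bad": -2.5}          # VADER-style valence scores
scores = {v: seeds.get(v, 0.0) for v in G}

for _ in range(100):                        # iterate until scores stop changing
    new = {}
    for v in G:
        if v in seeds:                      # seed scores stay fixed
            new[v] = seeds[v]
        else:                               # weighted average of neighbors
            den = sum(G[v][u]["weight"] for u in G[v])
            new[v] = sum(G[v][u]["weight"] * scores[u] for u in G[v]) / den
    if max(abs(new[v] - scores[v]) for v in G) < 1e-6:
        scores = new
        break
    scores = new

print(scores["좋다"], scores["나쁘다"])     # propagated Korean sentiment scores
```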

Understanding the semantic change of Hangeul using word embedding (단어 임베딩 기법을 이용한 한글의 의미 변화 파악)

  • Sun, Hyunseok;Lee, Yung-Seop;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.295-308 / 2021
  • As many people post their interests on social media or store documents in digital form thanks to the development of the internet and computer technologies, the amount of text data generated has exploded, and the demand for technology that extracts valuable information from large document collections has grown with it. In this study, we use statistical techniques to investigate how the meanings of Korean words change over time, based on public data consisting of presidential speech records and newspaper articles, and we present a strategy that can be applied to the study of diachronic change in Korean. The purpose is to move beyond accounts of Korean language phenomena grounded in the intuitions of linguists or native speakers, deriving numerical evidence instead from public documents that anyone can use, and thereby explaining changes in word meaning quantitatively.
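The abstract does not spell out the exact procedure, but a standard way to quantify such diachronic change with word embeddings is to train a separate Word2Vec model per time period and compare a word's nearest neighbors across periods; the sketch below uses toy corpora in place of the speech and newspaper data.

```python
# Per-period Word2Vec models; meaning change is measured as the turnover
# of a word's nearest neighbors between periods (Jaccard distance).
from gensim.models import Word2Vec

corpus_early = [["경제", "성장", "수출"], ["국민", "경제", "발전"]]   # toy tokenized docs
corpus_late = [["경제", "위기", "일자리"], ["국민", "소통", "경제"]]

m_early = Word2Vec(corpus_early, vector_size=50, min_count=1, seed=0)
m_late = Word2Vec(corpus_late, vector_size=50, min_count=1, seed=0)

early_nbrs = {w for w, _ in m_early.wv.most_similar("경제", topn=2)}
late_nbrs = {w for w, _ in m_late.wv.most_similar("경제", topn=2)}
change = 1 - len(early_nbrs & late_nbrs) / len(early_nbrs | late_nbrs)
print(change)    # 0 = neighbors unchanged, 1 = completely different context
```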

Trend Analysis of Grow-Your-Own Using Social Network Analysis: Focusing on Hashtags on Instagram

  • Park, Yumin;Shin, Yong-Wook
    • Journal of People, Plants, and Environment / v.24 no.5 / pp.451-460 / 2021
  • Background and objective: The prolonged COVID-19 pandemic has had significant impacts on mental health, which has emerged as a major public health issue around the world. This study analyzed the trends and network structure of 'grow-your-own' (GYO) on Instagram, one of the most influential social media platforms, with a view to encouraging and sustaining home gardening as a source of emotional support and physical health. Methods: A total of 6,388 posts containing the keyword hashtags '#gyo' and '#growyourown', published from June 13, 2020 to April 13, 2021, were collected. Word embedding was performed with the Word2Vec library, and K-means clustering identified 7 clusters: GYO, garden and gardening, allotment, kitchen garden, sustainability, urban gardening, etc. We also conducted social network analysis to determine the centrality of related words and visualized the results using Gephi 0.9.2. Results: Word combinations such as #growourownfood, #growourownveggies, and #growwhatyoueat revealed users' preference for and interest in GYO and appeared to encourage their activities on Instagram. In particular, #gardeningtips, #greenfingers, #goodlife, #gardeninglife, and #gardensofinstagram expressed positive emotions and pride as gardeners sharing their daily gardening lives. Users participated in urban gardening through #allotment, #raisedbeds, and #kitchengarden, and trends toward self-sufficiency and sustainable living could be identified. Conclusion: Based on these findings, trend data on GYO, a form of urban gardening, are expected to serve as baseline data for urban gardening plans that take into account participants' characteristics, such as their emotions, identity, and dispositions.
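The embedding and clustering steps can be sketched as follows; the toy posts stand in for the 6,388 collected posts, while k=7 matches the number of clusters reported in the study.

```python
# Word2Vec over hashtag co-occurrence within posts, then K-means clustering.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

posts = [["#gyo", "#growyourown", "#allotment"],
         ["#kitchengarden", "#growyourown", "#raisedbeds"],
         ["#urbangardening", "#gyo", "#sustainability"]]   # toy posts

model = Word2Vec(posts, vector_size=50, window=5, min_count=1, seed=0)
tags = model.wv.index_to_key
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(model.wv[tags])

for tag, label in zip(tags, labels):
    print(label, tag)    # cluster id per hashtag
```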

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various deep learning models have been actively applied to English text. In deep learning-based sentiment analysis of English text, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the model, where the word vectors are vector representations of the tokens obtained by splitting each sentence on whitespace. One way to derive word vectors is Word2Vec, which was used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in sentiment analysis of reviews of restaurants, movies, laptops, cameras, and so on. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with a rich system of postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for the word '예쁘고', the morphemes are '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it is reasonable to adopt the morpheme as the basic unit of Korean sentiment analysis. This study therefore feeds 'morpheme vectors' to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme, derived by applying an existing word vector mechanism to sentences divided into their constituent morphemes. Several questions then arise. What range of POS (part-of-speech) tags is desirable when deriving morpheme vectors so as to improve a deep learning model's classification accuracy? Is it appropriate to apply a typical word vector model, which relies primarily on word forms, to Korean, with its high ratio of homonyms? Does text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when morpheme vectors are drawn from Korean product reviews full of grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are among the first encountered when applying deep learning models to Korean text, and summarize them as three research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors derived from grammatically correct text of a domain other than the analysis target, or morpheme vectors derived from considerably ungrammatical text of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can deep learning reach a satisfactory level of classification accuracy for Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) that takes the morpheme vectors as input. For training and test data, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping, and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, the data source: Naver News, with high grammatical correctness, versus Naver Shopping's cosmetics reviews, with low grammatical correctness. Second, the degree of preprocessing: sentence splitting only, versus additional spelling and spacing correction after sentence separation. Third, the form of the input fed to the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing correction in addition to sentence splitting, and incorporating morphemes of all POS tags (including the incomprehensible category) lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme appear to have no definite influence on classification accuracy.
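The morpheme-vector derivation itself is compact; a sketch under the paper's stated CBOW settings (window 5, dimension 300) follows. The morphological analysis shown is a hand-made example of the kind of output a Korean analyzer (e.g., KoNLPy's Okt) produces; the analyzer choice and tag names are assumptions.

```python
# CBOW morpheme vectors: sentences split into morphemes (optionally with
# POS tags attached to separate homonyms) are fed to Word2Vec.
from gensim.models import Word2Vec

reviews = [["예쁘/Adjective", "고/Eomi", "배송/Noun", "빠르/Adjective", "다/Eomi"]]

model = Word2Vec(reviews,
                 sg=0,               # sg=0 selects the CBOW model
                 window=5,           # context window 5, as in the paper
                 vector_size=300,    # 300-dimensional vectors, as in the paper
                 min_count=1)
vec = model.wv["예쁘/Adjective"]     # 300-d morpheme vector, the CNN's input unit
```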

Question Retrieval using Deep Semantic Matching for Community Question Answering (심층적 의미 매칭을 이용한 cQA 시스템 질문 검색)

  • Kim, Seon-Hoon;Jang, Heon-Seok;Kang, In-Ho
    • Annual Conference on Human and Language Technology / 2017.10a / pp.116-121 / 2017
  • A cQA (Community-based Question Answering) system lets users post questions and write answers in an online community. When a new question arrives, the most similar question can be retrieved from the accumulated cQA repository and its answer reused as the answer to the new question. Traditional retrieval based on keyword matching, however, cannot exploit the meaning latent in a sentence. Overcoming this requires training on semantically equivalent sentences, but such data are difficult to obtain in large quantities. In this paper, using a large cQA dataset in which each question is separated into a title and a body, we map the title and the body into a semantic vector space and train the model so that the relative distance between the two vectors is small, thereby internalizing a pseudo semantic-similarity property. For the semantic vector representation of titles and bodies, we propose a deep learning method combining semi-training word embedding with a CNN (Convolutional Neural Network). In similar-question retrieval experiments, retrieval with the proposed model outperformed keyword-matching-based retrieval.
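A minimal sketch of the matching idea follows: a shared CNN encoder maps a question's title and its body to vectors trained to be close in cosine similarity. The vocabulary size, dimensions, and loss form are illustrative assumptions, not the paper's exact setup.

```python
# Shared CNN encoder for question title and body; cosine similarity of the
# two encodings is trained toward 1 for true title/body pairs (negative,
# mismatched pairs with target 0 would be added in practice).
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, EMB, SEQ = 20000, 128, 40

def make_encoder():
    inp = layers.Input(shape=(SEQ,), dtype="int32")
    x = layers.Embedding(VOCAB, EMB)(inp)
    x = layers.Conv1D(128, 3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    return models.Model(inp, x)

encoder = make_encoder()                          # weights shared across both inputs
title_in = layers.Input(shape=(SEQ,), dtype="int32")
body_in = layers.Input(shape=(SEQ,), dtype="int32")
sim = layers.Dot(axes=1, normalize=True)([encoder(title_in), encoder(body_in)])

model = models.Model([title_in, body_in], sim)
model.compile(optimizer="adam", loss="mse")
```

At retrieval time, a new question would be encoded once and compared against the stored questions by cosine similarity rather than keyword overlap.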

Neural Theorem Prover with Word Embedding for Efficient Automatic Annotation (효율적인 자동 주석을 위한 단어 임베딩 인공 신경 정리 증명계 구축)

  • Yang, Wonsuk;Park, Hancheol;Park, Jong C.
    • Annual Conference on Human and Language Technology / 2016.10a / pp.79-84 / 2016
  • This study aims to design a system that automatically annotates verified documents produced by specialized institutions onto the many unverified documents on the web, improving their reliability and automatically adding in-depth information. The neural theorem prover, a system applicable to this task, has the fundamental problem that it does not scale to large corpora; to resolve this, we rebuilt it by replacing its internal recurrent module with a word embedding module. To demonstrate the dramatic reduction in training time, we compared the original and rebuilt systems side by side on the task of annotating 28,844 propositions, extracted from the National Cancer Information Center's verified documents on cancer prevention and practice, onto 7,844 propositions extracted from Wikipedia's cancer-related documents. Under identical conditions, the original system's training time was estimated at 553.8 days, whereas the rebuilt system completed training within 93.1 minutes. The merit of this work is that although the neural theorem prover is a modular nonlinear system that can be combined in parallel with other linear-logic and natural language processing modules, its training time had made real-world application impossible, and this problem is now resolved.
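The rebuilt prover is well beyond a short sketch, but the core substitution, comparing symbols by embedding similarity rather than by a recursive symbolic module, can be illustrated. Everything below (the embeddings, the propositions, the position-wise alignment) is a toy assumption, not the authors' system.

```python
# Soft matching of propositions via cosine similarity of word embeddings,
# so a verified proposition can be annotated onto a similar web proposition.
import numpy as np

emb = {"smoking": np.array([0.9, 0.1]), "tobacco": np.array([0.8, 0.3]),
       "causes": np.array([0.1, 0.9]), "cancer": np.array([0.5, 0.5])}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_score(prop_a, prop_b):
    # average pairwise similarity of position-aligned symbols
    return sum(cos(emb[a], emb[b]) for a, b in zip(prop_a, prop_b)) / len(prop_a)

verified = ("smoking", "causes", "cancer")    # from the vetted source
web = ("tobacco", "causes", "cancer")         # from a web document
print(match_score(verified, web))             # high score -> attach annotation
```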

Korean Dependency Relation Labeling Using Bidirectional LSTM CRFs Based on the Dependency Path and the Dependency Relation Label Distribution of Syllables (의존 경로와 음절단위 의존 관계명 분포 기반의 Bidirectional LSTM CRFs를 이용한 한국어 의존 관계명 레이블링)

  • An, Jaehyun;Lee, Hokyung;Ko, Youngjoong
    • Annual Conference on Human and Language Technology / 2016.10a / pp.14-19 / 2016
  • This paper proposes a model that, when a dependency relation holds between eojeols (word phrases) in a sentence, attaches the dependency relation label describing the relationship between the dependent and its head. Korean dependency parsing has been studied actively in Korea, but many systems output only the dependency structure without relation labels. We therefore propose a labeling model based on the dependency path and on syllable embeddings that reflect the distribution of dependency relation labels over syllables. The dependency path, the most informative input sequence obtainable from a sentence, is fed to bidirectional LSTM-CRFs, which perform well on sequence labeling, to decide the relation labels. The proposed method attaches labels sequentially using only eojeol- and syllable-level embeddings along the dependency path, without heavy feature engineering. Using the dependency path improved performance by 4.1%p over a previous model that extracted features from the eojeol order of the whole sentence and analyzed them with CRFs, and the bidirectional LSTM-CRFs with the label-distribution syllable embeddings achieved 96.01% (a 5.21%p improvement), the best performance reported for dependency relation labeling.
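The shape of the labeling model can be sketched as follows: syllable embeddings along the dependency path feed a bidirectional LSTM with a per-position classifier. The paper uses a CRF output layer; a softmax stands in for it here to keep the sketch self-contained, and all sizes are assumptions.

```python
# Bidirectional LSTM tagger over a dependency path of syllable indices.
import tensorflow as tf
from tensorflow.keras import layers, models

PATH_LEN, VOCAB, EMB, N_LABELS = 12, 3000, 100, 40   # assumed sizes

inp = layers.Input(shape=(PATH_LEN,), dtype="int32")
x = layers.Embedding(VOCAB, EMB, mask_zero=True)(inp)   # syllable embeddings
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
out = layers.TimeDistributed(layers.Dense(N_LABELS, activation="softmax"))(x)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```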
