• Title/Summary/Keyword: 단어 유사도 (word similarity)

A Language Model based Knowledge Network for Analyzing Disaster Safety related Social Interest (재난안전 사회관심 분석을 위한 언어모델 활용 정보 네트워크 구축)

  • Choi, Dong-Jin;Han, So-Hee;Kim, Kyung-Jun;Bae, Eun-Sol
    • Proceedings of the Korean Society of Disaster Information Conference / 2022.10a / pp.145-147 / 2022
  • This paper points out the limitations of existing methods for building information networks or knowledge graphs to discover issues in large-scale text data, and proposes a new method that builds the information network at the sentence level. First, the distributions of word and character counts per sentence were measured, and threshold values were set to remove noise such as onomatopoeia. Next, all sentences were vectorized with a BERT-based language model, and the similarity between two sentence vectors was measured with cosine similarity. To minimize misclassified similarity results, an algorithm was developed that compares the semantic relatedness of nominal words. Reviewing the output of the proposed similar-sentence comparison algorithm confirmed that sentence pairs judged similar address the same topic and content even when they are phrased differently. The proposed method is a new way to overcome the difficulty of interpreting word-level knowledge graphs. Applied to future research areas such as issue and trend analysis, it can contribute to gathering social interest in specific topics in a data-driven way and to deriving policy suggestions that reflect demand.

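The core pipeline of this abstract — encode each sentence with a BERT-based model, then score pairs with cosine similarity — can be sketched as below. The checkpoint name (klue/bert-base) and mean pooling over token vectors are assumptions; the paper does not specify its model or pooling scheme.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "klue/bert-base"  # assumed checkpoint; the paper names no specific model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def sentence_vector(sentence: str) -> torch.Tensor:
    """Encode one sentence and mean-pool its token embeddings."""
    inputs = tokenizer(sentence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def cosine(a: torch.Tensor, b: torch.Tensor) -> float:
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Two differently phrased sentences about the same disaster event
v1 = sentence_vector("폭우로 도로가 침수되었다")
v2 = sentence_vector("집중호우로 길이 물에 잠겼다")
print(f"similarity = {cosine(v1, v2):.3f}")
```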

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one way to handle big data in text mining. It must account for the density of the data, which strongly influences sentence-classification performance: higher-dimensional data requires heavy computation and can eventually cause high computational cost and overfitting, so a dimension-reduction step is necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise such as misspellings or informal text to incorporating semantic and syntactic information. Moreover, how text features are expressed and selected affects classifier performance in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that represents the raw data from the observation space. Existing methods use various algorithms such as feature extraction and feature selection; word embeddings, which learn low-dimensional vector representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested modifying the word dictionary according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature-selection algorithm marks words as unimportant, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selectively eliminating words under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance, we use the information gain algorithm to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally remove words similar to those low-information-gain words and build word embeddings. The filtered text and word embeddings are then fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews with more than five helpful votes and a helpful-vote ratio over 70% were classified as helpful; since Yelp shows only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared them with Word2Vec and GloVe embeddings built from all the words and showed that one of the proposed methods performs better: removing unimportant words improves performance, although removing too many words lowers it.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence should be considered when measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to explore combinations of embedding and elimination methods.
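
A minimal sketch of the proposed filtering, under stated assumptions: information gain is approximated here with mutual information over a binary bag-of-words matrix, and "similar" words are taken from a Word2Vec model's nearest neighbors. The cutoff and neighbor count are illustrative, not the paper's values.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["the battery life is great", "terrible screen and slow chip",
        "great value for the price", "slow shipping and bad support"]
labels = [1, 0, 1, 0]  # helpful vs. not helpful

# Step 1: rank words by information gain (mutual information as a proxy).
vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
words = vec.get_feature_names_out()
low_ig = {w for w, g in zip(words, ig) if g <= 0.0}  # illustrative cutoff

# Step 2: expand the removal set with Word2Vec neighbors of low-IG words.
tokens = [d.split() for d in docs]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, seed=1)
to_remove = set(low_ig)
for w in low_ig:
    if w in w2v.wv:
        to_remove |= {sim for sim, _ in w2v.wv.most_similar(w, topn=2)}

filtered = [" ".join(t for t in d.split() if t not in to_remove) for d in docs]
print(filtered)  # cleaned text, ready for CNN / attention-BiLSTM classifiers
```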

Building a Korean-English Parallel Corpus by Measuring Sentence Similarities Using Sequential Matching of Language Resources and Topic Modeling (언어 자원과 토픽 모델의 순차 매칭을 이용한 유사 문장 계산 기반의 위키피디아 한국어-영어 병렬 말뭉치 구축)

  • Cheon, JuRyong;Ko, YoungJoong
    • Journal of KIISE / v.42 no.7 / pp.901-909 / 2015
  • In this paper, we propose a method for finding similar sentences based on language resources and topic modeling, in order to build a Korean-English parallel corpus from Wikipedia. We first applied language resources (a Wiki-dictionary, numbers, and the Daum online dictionary) to match words sequentially. The Wiki-dictionary was constructed from titles in Wikipedia, and to take advantage of Wikipedia we used translation probabilities from the Wiki-dictionary for word matching. In addition, we improved the accuracy of the sentence-similarity measure by using word distributions obtained from topic modeling. In the experiments, a previous study achieved an F1-score of 48.4% with only language resources combined linearly, and 51.6% when additionally considering entire word distributions from topic modeling. Our proposed sequential matching, which adds translation probabilities to the language resources, achieved 58.3%, 9.9 percentage points better than the previous study. When the sequential matching of language resources and topic modeling also considered important word distributions, the proposed system reached 59.1%, 7.5 percentage points better than the previous study.
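
The combination the abstract describes — sequential dictionary matching weighted by translation probability, interpolated with a topic-distribution similarity — might look roughly like the sketch below. The dictionary entries, topic vectors, and the 50/50 interpolation weight are all hypothetical.

```python
import numpy as np

wiki_dict = {"서울": [("Seoul", 0.9), ("Seoul City", 0.1)],  # hypothetical entries
             "수도": [("capital", 1.0)]}

def match_score(ko_tokens, en_tokens):
    """Sequentially match Korean words to English words via the dictionary,
    crediting each match with its translation probability."""
    matched = 0.0
    for ko in ko_tokens:
        for en, prob in wiki_dict.get(ko, []):
            if en.lower() in en_tokens:
                matched += prob
                break
    return matched / max(len(ko_tokens), 1)

def topic_similarity(theta_ko, theta_en):
    """Cosine similarity between the two sentences' topic distributions."""
    a, b = np.asarray(theta_ko), np.asarray(theta_en)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ko = ["서울", "수도"]
en = ["seoul", "is", "the", "capital"]
theta_ko, theta_en = [0.7, 0.2, 0.1], [0.6, 0.3, 0.1]  # e.g. from an LDA model
score = 0.5 * match_score(ko, en) + 0.5 * topic_similarity(theta_ko, theta_en)
print(f"sentence-pair score = {score:.3f}")
```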

A Study on the Integration of Similar Sentences in Automatic Summarizing of Documents (자동초록 작성시에 발생하는 유사의미 문장요소들의 통합에 관한 연구)

  • Lee, Tae-Young
    • Journal of the Korean Society for Library and Information Science / v.34 no.2 / pp.87-115 / 2000
  • The effects of case, part of speech, word and clause location, word frequency, etc. on discriminating similar sentences in Korean text were studied. Word frequency was strongly related to similarity discrimination, title words and functional clauses only slightly, and the other factors were not related. The cosine coefficient and Salton's similarity measure were used to measure the similarity between sentences. The change of clauses between sentences was also used to unify similar sentences into a representative sentence.

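The cosine coefficient named in the abstract reduces to a term-frequency dot product normalized by the vector lengths; a minimal sketch, assuming whitespace tokenization of pre-segmented Korean:

```python
from collections import Counter
import math

def cosine_coefficient(s1: str, s2: str) -> float:
    """Cosine of the angle between two sentences' term-frequency vectors."""
    v1, v2 = Counter(s1.split()), Counter(s2.split())
    dot = sum(v1[t] * v2[t] for t in set(v1) & set(v2))
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

print(cosine_coefficient("정부 는 정책 을 발표 했다",
                         "정부 가 새 정책 을 공개 했다"))
```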

A Study on Utterance Verification Using Accumulation of Negative Log-likelihood Ratio (음의 유사도 비율 누적 방법을 이용한 발화검증 연구)

  • 한명희;이호준;김순협
    • The Journal of the Acoustical Society of Korea / v.22 no.3 / pp.194-201 / 2003
  • In speech recognition, confidence measurement decides whether a recognized result should be accepted or rejected. Confidence is measured by integrating frame-level scores up to the phone and word levels. For word recognition, confidence measurement verifies recognition errors and out-of-vocabulary (OOV) utterances, so post-processing can improve recognizer performance by rejecting them rather than accepting erroneous results. In this paper, we measure confidence with a modification of the log-likelihood ratio (LLR) used in previous confidence measures: when integrating confidence from the frame level to the phone level, only frames whose log-likelihood ratio is negative are accumulated. Compared with the previous method on the outputs of a word recognizer, the false acceptance ratio (FAR) decreases by about 3.49% for OOV and 15.25% for recognition errors when the correct acceptance ratio (CAR) is about 90%.
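
The accumulation rule is simple enough to state in a few lines: when integrating from frames to a phone, keep only the frames whose LLR is negative. A minimal sketch with made-up frame values:

```python
import numpy as np

def negative_llr_confidence(frame_llrs) -> float:
    """Sum only the negative frame-level LLRs; a score near 0 means high
    confidence, a large negative score suggests misrecognition or OOV."""
    llrs = np.asarray(frame_llrs, dtype=float)
    return float(llrs[llrs < 0].sum())

phone_frames = [0.8, -0.2, 1.1, -0.9, 0.3]     # hypothetical frame LLRs
print(negative_llr_confidence(phone_frames))   # -> -1.1
```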

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records fall into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation; for each method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This goes beyond simple word-based similarity calculation and explores a new way of detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of misinterpretation causes. The results showed that initial consonant extraction was the most effective for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because they are not recognized as failures, it proposed diverse text separation methods and found a novel one that improved performance remarkably. Second, if applied to conversational agents or voice recognition services requiring neologism detection, the patterns of errors arising from the voice recognition stage can be specified, so that services matching the user's intended result can be provided even for utterances not flagged as errors.
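
The initial-consonant (choseong) extraction the study found most effective can be sketched from the standard Unicode Hangul syllable layout; the position-wise comparison below is an illustrative stand-in for the paper's similarity step.

```python
CHOSEONG = ["ㄱ", "ㄲ", "ㄴ", "ㄷ", "ㄸ", "ㄹ", "ㅁ", "ㅂ", "ㅃ", "ㅅ",
            "ㅆ", "ㅇ", "ㅈ", "ㅉ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def initial_consonants(word: str) -> str:
    """Map each Hangul syllable to its initial consonant (e.g. 갓성비 -> ㄱㅅㅂ)."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                  # a composed Hangul syllable
            out.append(CHOSEONG[code // 588])  # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)                     # pass non-Hangul through
    return "".join(out)

def choseong_match(a: str, b: str) -> float:
    """Fraction of positions sharing the same initial consonant."""
    x, y = initial_consonants(a), initial_consonants(b)
    n = min(len(x), len(y))
    return sum(x[i] == y[i] for i in range(n)) / max(len(x), len(y))

print(initial_consonants("갓성비"))        # -> ㄱㅅㅂ
print(choseong_match("갓성비", "가성비"))  # high overlap despite a neologism
```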

Word Sense Similarity Clustering Based on Vector Space Model and HAL (벡터 공간 모델과 HAL에 기초한 단어 의미 유사성 군집)

  • Kim, Dong-Sung
    • Korean Journal of Cognitive Science / v.23 no.3 / pp.295-322 / 2012
  • In this paper, we cluster similar word senses by applying the vector space model and HAL (Hyperspace Analogue to Language). HAL measures correlation among words within a context window of a given size (Lund and Burgess 1996). Similarity between a word pair is measured with cosine similarity based on the vector space model, which reduces the distortion of the space between high-frequency and low-frequency words (Salton et al. 1975, Widdows 2004). We use PCA (Principal Component Analysis) and SVD (Singular Value Decomposition) to reduce the large number of dimensions produced by the similarity matrix. For sense-similarity clustering, we adopt both supervised and unsupervised learning: clustering for the unsupervised method, and SVM (Support Vector Machine), the Naive Bayes classifier, and the Maximum Entropy method for the supervised methods.

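A compact sketch of HAL-style counting: within a sliding window, nearer co-occurrences receive larger weights, rows of the matrix serve as word vectors, and SVD reduces the dimensionality. The window size and toy corpus are illustrative.

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 3

# HAL: count left-context co-occurrences, weighting nearer words more heavily.
M = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for d in range(1, window + 1):
        if i - d >= 0:
            M[idx[w], idx[corpus[i - d]]] += window + 1 - d

def cos(a: str, b: str) -> float:
    va, vb = M[idx[a]], M[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(f"cat~dog: {cos('cat', 'dog'):.3f}  cat~rug: {cos('cat', 'rug'):.3f}")

# Dimensionality reduction as in the paper: keep the top singular vectors.
U, S, Vt = np.linalg.svd(M)
reduced = U[:, :2] * S[:2]   # 2-dimensional word vectors
```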

Generating Korean Sentences Using Word2Vec (Word2Vec 모델을 활용한 한국어 문장 생성)

  • Nam, Hyun-Gyu;Lee, Young-Seok
    • Annual Conference on Human and Language Technology / 2017.10a / pp.209-212 / 2017
  • Advanced machine learning and deep learning techniques are solving many problems in fields such as image processing and natural language processing. In particular, NLP techniques that analyze a user's input sentence and generate a sentence in response are widely used in machine translation, automatic summarization, and automatic error correction. Deep-learning-based NLP models learn inter-word dependencies and sentence structure through multiple layers of neural networks, but the amount of computation during training is enormous, so building a model requires considerable time and cost. The Word2Vec model, while trained similarly to a neural network, has a linear structure, so it can compute high-dimensional word vectors with lower time complexity than deep-learning-based NLP techniques. This paper therefore presents a method for generating Korean sentences using a Word2Vec model: we designed a model that composes sentences by inserting highly similar words into a given sentence template, evaluated sentences generated from different training data, and suggested applications of the proposed model.

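The template idea can be sketched with gensim: train Word2Vec, then fill the open slot of a fixed template with the nearest neighbors of a seed word. The toy corpus, template, and seed are assumptions for illustration.

```python
from gensim.models import Word2Vec

sentences = [["나는", "사과", "를", "먹었다"],
             ["나는", "바나나", "를", "먹었다"],
             ["그는", "포도", "를", "샀다"],
             ["그는", "수박", "을", "샀다"]]
model = Word2Vec(sentences, vector_size=50, min_count=1, window=2, seed=1)

template = ["나는", None, "를", "먹었다"]   # None marks the slot to fill
seed = "사과"

# Generate candidate sentences from the seed word's nearest neighbors.
for candidate, score in model.wv.most_similar(seed, topn=3):
    filled = [candidate if tok is None else tok for tok in template]
    print(" ".join(filled), f"(sim={score:.2f})")
```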

Performance Improvement of Word Spotting Using State Weighting of HMM (HMM의 상태별 가중치를 이용한 핵심어 검출의 성능 향상)

  • 최동진
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.305-308 / 1998
  • This paper proposes a new post-processing method to improve keyword-spotting performance. In general, the likelihoods of the top n candidate words detected by a keyword-spotting system are often similar, so acoustically similar keywords are easily confused over the same speech interval. Conventional post-processing evaluates the likelihood with equal weight over all speech intervals, which is ill-suited to comparing similar keywords that share acoustic characteristics. To address this, we propose an algorithm that improves discrimination by reflecting weights based on the partial acoustic differences between candidate words in the likelihood computation. Experimental results show that the proposed method improved discrimination between similar candidate words and, at a recognition rate of 93%, reduced the false alarm rate by 19.6% compared with the likelihood-ratio test method.

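A sketch of the general idea under loose assumptions: rescore candidates with a weighted rather than uniform sum of per-state log-likelihoods. The scores and weights below are made up; the paper's estimation of weights from partial acoustic differences between candidates is not reproduced here.

```python
import numpy as np

def weighted_score(state_loglikes, state_weights) -> float:
    """Weighted sum of per-state log-likelihoods for one candidate word."""
    return float(np.dot(state_loglikes, state_weights))

uniform = np.ones(4) / 4                   # conventional equal weighting
learned = np.array([0.1, 0.4, 0.4, 0.1])   # emphasize discriminative states

cand_a = np.array([-2.0, -1.5, -1.2, -2.1])  # hypothetical state log-likelihoods
cand_b = np.array([-2.1, -2.4, -2.0, -1.9])

# The margin between candidates widens under the discriminative weighting.
for w, name in [(uniform, "uniform"), (learned, "weighted")]:
    print(name, weighted_score(cand_a, w) - weighted_score(cand_b, w))
```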