• Title/Summary/Keyword: N-gram 모델


Self-Organizing n-gram Model for Automatic Word Spacing (자기 조직화 n-gram모델을 이용한 자동 띄어쓰기)

  • Tae, Yoon-Shik; Park, Seong-Bae; Lee, Sang-Jo; Park, Se-Young
    • Annual Conference on Human and Language Technology / 2006.10e / pp.125-132 / 2006
  • Automatic word spacing is a very important problem in Korean natural language processing and information retrieval. Word spacing is difficult enough that spacing errors can be found even in newspaper articles. This paper proposes a method for improving the accuracy of automatic word spacing using a self-organizing n-gram model. The proposed method is based on a variable-length n-gram model in which the model itself determines the context length, yielding higher performance than an ordinary n-gram model. To find the optimal context length, the self-organizing n-gram model compares the probability distribution obtained when the context is lengthened with the distribution when it is not: if the difference is large, the context is lengthened; otherwise, it is automatically shortened. In other words, when more information is needed, the dimensionality of the data is increased to raise accuracy, and the resulting increase in computation is offset by reducing the amount of unnecessary data. Experiments confirm that the self-organizing structure outperforms the basic n-gram model.

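The length-selection idea above can be sketched in a few lines. This is a hypothetical, character-level simplification (the function names, the toy corpus, and the KL-divergence threshold are assumptions, not from the paper): the context grows only while the longer-context distribution differs noticeably from the shorter-context one.

```python
import math
from collections import Counter

def ngram_dist(corpus, context):
    """P(next char | context), estimated by counting in a toy corpus."""
    counts = Counter()
    n = len(context)
    for i in range(len(corpus) - n):
        if corpus[i:i + n] == context:
            counts[corpus[i + n]] += 1
    total = sum(counts.values())
    return {c: v / total for c, v in counts.items()} if total else {}

def choose_context(corpus, history, max_len=4, threshold=0.1):
    """Grow the context from 1 char while the distribution keeps changing."""
    length = 1
    while length < max_len and length < len(history):
        short = ngram_dist(corpus, history[-length:])
        long = ngram_dist(corpus, history[-(length + 1):])
        if not long:
            break
        # KL(long || short); unseen keys in `short` get a small floor
        kl = sum(p * math.log(p / short.get(c, 1e-6)) for c, p in long.items())
        if kl < threshold:
            break  # longer context adds little information, so stop growing
        length += 1
    return length
```

Here the divergence test stands in for the paper's distribution comparison; any distance between the two distributions would fit the same skeleton.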

Context-sensitive Spelling Error Correction using Eojeol N-gram (어절 N-gram을 이용한 문맥의존 철자오류 교정)

  • Kim, Minho; Kwon, Hyuk-Chul; Choi, Sungki
    • Journal of KIISE / v.41 no.12 / pp.1081-1089 / 2014
  • Context-sensitive spelling-error correction methods are largely classified into rule-based and statistical methods, the latter of which are generally preferred in research. Statistical methods treat context-sensitive spelling errors as a word-sense disambiguation problem: a correction pair, consisting of a correction target word and a replacement candidate, is disambiguated according to the context. This paper proposes a method that integrates an eojeol (word-phrase) n-gram model into the conventional model from this research team's previous study, in order to improve the performance of the probability model. The integrated model includes one method that interpolates the sentence probabilities computed by each model and another that applies the two models sequentially. Both types of integrated model exhibit higher accuracy and recall than conventional models or a model that uses only the n-gram.
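The interpolation variant described above amounts to a weighted blend of the two models' sentence probabilities. A minimal sketch with hypothetical names and a made-up weight `lam` (the paper's actual weighting scheme is not given here):

```python
def interpolate(p_conventional, p_eojeol_ngram, lam=0.5):
    """Linearly interpolate two sentence probabilities."""
    return lam * p_conventional + (1 - lam) * p_eojeol_ngram

def pick_candidate(candidates, lam=0.5):
    """Choose the correction candidate whose blended score is highest.

    candidates: list of dicts with keys "word", "p_conv", "p_ngram".
    """
    return max(candidates, key=lambda c: interpolate(c["p_conv"], c["p_ngram"], lam))
```

The sequential variant mentioned in the abstract would instead apply one model first and fall back to the other, rather than blending scores.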

A Detection Method of Similar Sentences Considering Plagiarism Patterns of Korean Sentence (한국어 문장 표절 유형을 고려한 유사 문장 판별)

  • Ji, Hye-Sung; Joh, Joon-Hee; Lim, Heui-Seok
    • The Journal of Korean Association of Computer Education / v.13 no.6 / pp.79-89 / 2010
  • In this paper, we propose a method for finding similar sentences in documents in order to detect plagiarism. The proposed model adopts LSA and n-gram techniques to detect every type of Korean plagiarized sentence. To evaluate the model, we constructed experimental data from students' essays on the same theme, written by intentionally plagiarizing some reference documents. The experimental results show that the proposed model outperforms the conventional n-gram model, the vector model, and the LSA model in precision, recall, and F-measure.

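Only the n-gram half of the paper's LSA-plus-n-gram combination lends itself to a short sketch. Below is a hypothetical character-bigram Jaccard similarity (spaces are stripped so that re-spacing a copied sentence does not hide the overlap); the names and the choice of Jaccard are assumptions, not the paper's exact formulation.

```python
def char_ngrams(sentence, n=2):
    """Set of character n-grams, ignoring spaces."""
    s = sentence.replace(" ", "")
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_similarity(a, b, n=2):
    """Jaccard similarity over character n-gram sets of two sentences."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0
```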

Design of Brain-computer Korean typewriter using N-gram model (N-gram 모델을 이용한 뇌-컴퓨터 한국어 입력기 설계)

  • Lee, Saebyeok; Lim, Heui-Seok
    • Annual Conference on Human and Language Technology / 2010.10a / pp.143-146 / 2010
  • A brain-computer interface (BCI) is a technology for directly controlling a computer or external device through biosignals generated in the brain. For patients who cannot produce language voluntarily, research is needed on interfaces that allow free Korean text input through a BCI. To improve the low information transfer rate of BCIs used for communication, this study implements a language prediction model using syllable n-gram and eojeol n-gram models, and designs a brain-computer Korean typewriter based on it. This differs from previous BCI research, which has focused on improving feature extraction or machine learning methods.

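A syllable-bigram predictor of the kind the abstract describes can be sketched as follows. The training corpus, function names, and top-k interface are illustrative assumptions (shown with Latin characters, though the same counting applies to Korean syllables):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which symbol follows which, one Counter per preceding symbol."""
    model = defaultdict(Counter)
    for word in corpus:
        for prev, nxt in zip(word, word[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, prev, k=3):
    """Top-k candidate next symbols after `prev`, most frequent first."""
    return [s for s, _ in model[prev].most_common(k)]
```

In a BCI typewriter, ranking likely continuations first means fewer selections per character, which is how a language model raises the effective information transfer rate.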

Passage Re-ranking Model using N-gram attention between Question and Passage (질문-단락 간 N-gram 주의 집중을 이용한 단락 재순위화 모델)

  • Jang, Youngjin; Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2020.10a / pp.554-558 / 2020
  • Recent advances in pre-trained models have greatly improved the performance of machine reading comprehension (MRC) systems. However, because an MRC system finds the answer to a question within a given passage, a performance drop is unavoidable in real environments where passages must first be retrieved. In other words, a high-performance retrieval model is essential for an MRC system to perform well in an open-domain setting. This paper therefore proposes a passage re-ranking model for open-domain MRC that compensates for the retrieval model's performance. The proposed model represents questions and passages at the phrase level using convolutional neural networks, and effectively captures the relationship between a question and a passage through cross-attention between their n-gram phrases. In experiments based on KorQuAD, the proposed model achieved 93.0% MRR@10 and 89.4% Top-1 precision.

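The re-ranker itself is neural, but the underlying intuition of matching question and passage at the n-gram phrase level can be illustrated with a deliberately simplified, non-neural stand-in: word-bigram overlap in place of learned cross-attention. All names here are hypothetical.

```python
def word_ngrams(text, n=2):
    """Set of word n-grams in a whitespace-tokenized string."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def rerank(question, passages, n=2):
    """Order candidate passages by how many word n-grams they share with the question."""
    q = word_ngrams(question, n)
    return sorted(passages, key=lambda p: len(q & word_ngrams(p, n)), reverse=True)
```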

A Study on Pseudo N-gram Language Models for Speech Recognition (음성인식을 위한 의사(疑似) N-gram 언어모델에 관한 연구)

  • 오세진; 황철준; 김범국; 정호열; 정현열
    • Journal of the Institute of Convergence Signal Processing / v.2 no.3 / pp.16-23 / 2001
  • In this paper, we propose pseudo n-gram language models for medium-vocabulary speech recognition, in contrast to large-vocabulary speech recognition using statistical n-gram language models. The proposed method is very simple: it keeps the standard ARPA structure but sets the word probabilities arbitrarily. The 1-gram sets every word occurrence probability to 1 (log-likelihood 0.0). The 2-gram likewise sets the probability to 1 and allows connections only between the sentence-start symbol <s> and a word, and between a word and the sentence-end symbol </s>. Finally, the 3-gram also sets the word occurrence probability to 1 and allows only the connection <s>, WORD, </s>. To verify the effectiveness of the proposed method, word recognition experiments were carried out. Preliminary (offline) results show an average word accuracy of 97.7% for 452 words uttered by 3 male speakers. Online word recognition results show an average word accuracy of 92.5% for 20 stock names uttered by 20 male speakers, drawn from a vocabulary of 1,500 stock names. These experiments verify the effectiveness of pseudo n-gram language models for speech recognition.

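In standard ARPA format, the unit-probability scheme described above would look roughly like the following hypothetical fragment (ARPA stores log10 probabilities, so probability 1 appears as 0.0; `WORD` stands for each vocabulary entry):

```
\data\
ngram 1=3
ngram 2=2

\1-grams:
0.0 <s>
0.0 WORD
0.0 </s>

\2-grams:
0.0 <s> WORD
0.0 WORD </s>

\end\
```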

Class Language Model based on Word Embedding and POS Tagging (워드 임베딩과 품사 태깅을 이용한 클래스 언어모델 연구)

  • Chung, Euisok; Park, Jeon-Gue
    • KIISE Transactions on Computing Practices / v.22 no.7 / pp.315-319 / 2016
  • Recurrent neural network language models (RNN LMs) have shown improved results in language modeling research. However, RNN LMs are limited to post-processing stages, such as N-best rescoring in wFST-based speech recognition, and their large vocabularies require considerable computing power for training. In this paper, we investigate a first-pass n-gram model built with word embeddings, a simplified deep neural network; a class-based language model (LM) is one way to approach this issue. We build a class-based vocabulary through word embedding and combine the class LM with a word n-gram LM to evaluate LM performance. In addition, we show that a part-of-speech (POS) tagging based LM improves perplexity in all of the LM tests.
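A class-based LM rests on the standard factorization P(w | h) ≈ P(class(w) | class history) · P(w | class(w)). A minimal sketch with hypothetical table-based probabilities (in the paper the classes come from word embeddings, which is not reproduced here):

```python
def class_lm_prob(word, history, word2class, p_class, p_word):
    """P(word | history) ~= P(class | history classes) * P(word | class).

    p_class: {(class, history-class tuple): probability}
    p_word:  {(word, class): probability}
    """
    c = word2class[word]
    hist = tuple(word2class[w] for w in history)
    return p_class.get((c, hist), 0.0) * p_word.get((word, c), 0.0)
```

Grouping words into classes shrinks the context table from vocabulary-sized to class-sized, which is the computational relief the abstract is after.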

Detecting Spectre Malware Binary through Function Level N-gram Comparison (함수 단위 N-gram 비교를 통한 Spectre 공격 바이너리 식별 방법)

  • Kim, Moon-Sun; Yang, Hee-Dong; Kim, Kwang-Jun; Lee, Man-Hee
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1043-1052 / 2020
  • Signature-based malicious code detection methods share a common limitation: it is very hard to detect modified malicious code or new malware exploiting zero-day vulnerabilities. To overcome this limitation, many studies classify malicious code using n-grams. Although such methods detect malicious code with high accuracy, they have difficulty identifying malware that uses very short code sequences, such as Spectre. We propose a function-level n-gram comparison algorithm to effectively identify Spectre binaries. To test its validity, we built n-gram data sets from 165 benign binaries and 25 malicious binaries. Using Random Forest models, the performance experiments identified Spectre malicious functions with 99.99% accuracy and an F1-score of 92%.
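The function-level grouping the abstract emphasizes can be sketched as follows. This is a hypothetical simplification: real opcode sequences would come from a disassembler, which is not shown, and the example opcodes are illustrative.

```python
def opcode_ngrams(opcodes, n=3):
    """All length-n opcode windows within one function."""
    return [tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1)]

def function_features(functions, n=3):
    """Per-function n-gram sets, so a short suspicious function keeps its own
    signature instead of being diluted in whole-binary statistics."""
    return {name: set(opcode_ngrams(ops, n)) for name, ops in functions.items()}
```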

A Study on Keyword Spotting System Using Pseudo N-gram Language Model (의사 N-gram 언어모델을 이용한 핵심어 검출 시스템에 관한 연구)

  • 이여송; 김주곤; 정현열
    • The Journal of the Acoustical Society of Korea / v.23 no.3 / pp.242-247 / 2004
  • Conventional keyword spotting systems use a connected-word recognition network composed of keyword models and filler models, because language models of word occurrence cannot be constructed effectively for keyword detection in large-vocabulary continuous speech recognition with large text data. To solve this problem, this paper proposes a keyword spotting system that uses a pseudo n-gram language model for keyword detection, and investigates its performance as the occurrence probabilities of the keyword and filler models are varied. When the unigram probabilities of the keywords and filler models were set to 0.2 and 0.8, respectively, the correct acceptance rate for in-vocabulary words (CA) was 91.1% and the correct rejection rate for out-of-vocabulary words (CR) was 91.7%, a 14% improvement in average CA-CR performance over conventional methods in terms of error reduction rate (ERR).

A Study on Machine Learning Based Anti-Analysis Technique Detection Using N-gram Opcode (N-gram Opcode를 활용한 머신러닝 기반의 분석 방지 보호 기법 탐지 방안 연구)

  • Kim, Hee Yeon; Lee, Dong Hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.2 / pp.181-192 / 2022
  • The emergence of new malware is incapacitating existing signature-based detection techniques, and the application of various anti-analysis techniques makes malware difficult to analyze. Recent studies on signature-based malware detection are limited in that malware creators can easily bypass them. In this study, we therefore build a machine learning model that detects and classifies the anti-analysis techniques of the packers applied to malware, rather than relying on characteristics of the malware itself. We extract n-gram opcodes from malicious binaries packed with commercial packers that apply various anti-analysis techniques, extract features with TF-IDF, and use them to detect and classify each anti-analysis technique. Real-world malware samples packed with Themida and VMProtect, each applying multiple anti-analysis techniques, were trained and tested with six machine learning models; the best model achieved 81.25% accuracy for Themida and 95.65% for VMProtect.
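The TF-IDF step over n-gram opcode tokens can be sketched with a small stdlib-only implementation. The function name and toy documents are assumptions; only the weighting itself follows the standard definition tf · log(N/df). (The paper's actual vectorizer is not specified here.)

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists (e.g. joined n-gram opcode strings).

    Returns one {token: weight} dict per document, weight = tf * log(N/df).
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each token once per doc
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weighted.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return weighted
```

Tokens present in every binary (common prologue n-grams, say) get weight zero, leaving the packer-specific n-grams to drive the classifier.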