• Title/Summary/Keyword: N-gram model

A Detection Method of Similar Sentences Considering Plagiarism Patterns of Korean Sentence (한국어 문장 표절 유형을 고려한 유사 문장 판별)

  • Ji, Hye-Sung; Joh, Joon-Hee; Lim, Heui-Seok
    • The Journal of Korean Association of Computer Education / v.13 no.6 / pp.79-89 / 2010
  • In this paper, we propose a method for finding similar sentences across documents in order to detect plagiarism. The proposed model combines LSA and N-gram techniques so that every type of Korean plagiarized sentence can be detected. To evaluate the model, we constructed experimental data from students' essays on a common theme, in which students intentionally plagiarized several reference documents. The experimental results show that the proposed model outperforms the conventional N-gram, vector, and LSA models in precision, recall, and F-measure. (An n-gram similarity sketch follows the entry.)

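The abstract does not specify the similarity measure, so the following is a minimal sketch of one common way to score sentence pairs with word n-gram overlap (a Jaccard coefficient); the n-gram order and the flagging threshold are illustrative assumptions, not values from the paper.

```python
def ngrams(tokens, n):
    """Set of word n-grams occurring in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_jaccard(sent_a, sent_b, n=2):
    """Jaccard overlap of the two sentences' n-gram sets (1.0 = identical)."""
    a, b = ngrams(sent_a.split(), n), ngrams(sent_b.split(), n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Flag a sentence pair as suspicious when bigram overlap is high;
# the 0.4 threshold is purely illustrative.
suspicious = ngram_jaccard("the cat sat on the mat",
                           "the cat sat on a mat") > 0.4
```

The paper additionally applies LSA so that paraphrased sentences, not just copied ones, can be caught; pure n-gram overlap would miss those.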

Context-sensitive Spelling Error Correction using Eojeol N-gram (어절 N-gram을 이용한 문맥의존 철자오류 교정)

  • Kim, Minho; Kwon, Hyuk-Chul; Choi, Sungki
    • Journal of KIISE / v.41 no.12 / pp.1081-1089 / 2014
  • Context-sensitive spelling-error correction methods are largely classified into rule-based methods and statistical, data-driven methods; the latter are preferred in most research. Statistical methods treat context-sensitive spelling errors as a word-sense disambiguation problem: for each correction pair, consisting of a correction-target word and a replacement candidate, the method chooses between the two according to the context. This paper proposes integrating an eojeol (word-phrase) n-gram model into the correction-pair probability model developed in our team's previous study in order to improve that model's performance. Two integration schemes are presented: one interpolates the sentence probabilities computed by the two models, and the other applies the models sequentially. Both integrated models exhibit higher precision and recall than the conventional model or a model that uses the n-gram alone. (The interpolation scheme is sketched below.)
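
As a rough illustration of the first integration scheme, the sketch below linearly interpolates the sentence probabilities returned by the two models and keeps whichever sentence variant scores higher; the function names and the weight value are assumptions, since the abstract does not give implementation details.

```python
def interpolated_prob(sentence, p_base, p_eojeol, lam=0.7):
    """lam * P_base(s) + (1 - lam) * P_eojeol(s); lam = 0.7 is an assumed weight.

    p_base and p_eojeol are callables returning each model's sentence probability.
    """
    return lam * p_base(sentence) + (1.0 - lam) * p_eojeol(sentence)

def choose_correction(with_target, with_candidate, p_base, p_eojeol):
    """Return the sentence variant (target word vs. replacement) the mixture prefers."""
    s_target = interpolated_prob(with_target, p_base, p_eojeol)
    s_candidate = interpolated_prob(with_candidate, p_base, p_eojeol)
    return with_target if s_target >= s_candidate else with_candidate
```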

Spontaneous Speech Language Modeling using N-gram based Similarity (N-gram 기반의 유사도를 이용한 대화체 연속 음성 언어 모델링)

  • Park, Young-Hee; Chung, Minhwa
    • MALSORI / no.46 / pp.117-126 / 2003
  • This paper presents our language model adaptation method for Korean spontaneous speech recognition. Compared with written text corpora, Korean spontaneous speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. Our approach improves the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of their neighboring words. The best result reduces the word error rate by 9.7% relative, showing that n-gram-based relevance weighting captures style differences well and that disfluencies are also good predictors. (A relevance-weighting sketch follows the entry.)

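A minimal sketch of the relevance-weighting idea, under the assumption that each out-of-domain document is scored by the cosine similarity of its n-gram tf*idf vector to an in-domain vector and that its n-gram counts are scaled by that score; the vector construction details are not from the paper.

```python
import math
from collections import Counter

def tfidf_vector(ngram_counts, doc_freq, num_docs):
    """tf*idf over n-grams: tf from one document, idf from corpus document frequency."""
    return {g: c * math.log(num_docs / (1 + doc_freq.get(g, 0)))
            for g, c in ngram_counts.items()}

def cosine(u, v):
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def relevance_weighted_counts(out_docs, in_domain_vec, doc_freq, num_docs):
    """Scale each out-of-domain document's n-gram counts by its tf*idf
    similarity to the in-domain data, then pool the weighted counts."""
    pooled = Counter()
    for counts in out_docs:  # counts: Counter of n-grams for one document
        w = cosine(tfidf_vector(counts, doc_freq, num_docs), in_domain_vec)
        for g, c in counts.items():
            pooled[g] += w * c
    return pooled
```

The pooled weighted counts would then replace raw counts in the usual maximum-likelihood n-gram estimation.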

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee; Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea / v.23 no.2E / pp.51-55 / 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram-based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech shows distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which involve function words and depend on the preceding or following words. To reflect these characteristics and to overcome the shortage of training data for the language model, we estimate an in-domain n-gram model by relevance-weighting out-of-domain text data according to their n-gram-based tf*idf similarity, with the in-domain language model including a disfluency model. Recognition results show that n-gram-based tf*idf similarity weighting effectively reflects the style difference. (A formula-level reading is given below.)
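
Since this entry describes the same relevance-weighting scheme as the previous one, a formula-level reading may be more useful than another code sketch. One plausible formulation (notation assumed, not taken from the paper) is

$$\hat{C}(w_1^n) \;=\; C_{\mathrm{in}}(w_1^n) \;+\; \sum_{d \in D_{\mathrm{out}}} \mathrm{sim}_{tf \cdot idf}(d, D_{\mathrm{in}})\, C_d(w_1^n),$$

where $C_d(w_1^n)$ is the count of the n-gram $w_1^n$ in document $d$, and the weighted counts $\hat{C}$ feed the standard maximum-likelihood n-gram estimates.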

N-gram Adaptation Using Information Retrieval and Dynamic Interpolation Coefficient (정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응)

  • Choi, Joon Ki; Oh, Yung-Hwan
    • MALSORI / no.56 / pp.207-223 / 2005
  • The goal of language model adaptation is to improve a background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data are available for adaptation. We propose using information retrieval (IR) with N-gram language modeling to collect the adaptation corpus from the baseline text data. We also propose a dynamic interpolation coefficient for combining the background language model with the adapted language model. The coefficient is estimated from the word hypotheses obtained by segmenting the input speech data reserved as held-out validation data, which allows the final adapted model to improve on the background model consistently. The proposed approach reduces the word error rate by 13.6% relative to the baseline 4-gram model in two-hour broadcast-news speech recognition. (The coefficient search is sketched below.)

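The dynamic coefficient can be pictured as a one-parameter search that maximizes likelihood on the held-out word hypotheses. The sketch below uses a simple grid search over the mixture weight; the paper estimates the coefficient from recognizer hypotheses, so the interface here is an assumption.

```python
import math

def best_lambda(heldout_events, p_background, p_adapted, grid=21):
    """Pick the interpolation weight maximizing held-out log-likelihood.

    heldout_events: iterable of (word, history) pairs from the word hypotheses.
    p_background / p_adapted: callables returning P(word | history).
    """
    best_lam, best_ll = 0.0, float("-inf")
    for i in range(grid):
        lam = i / (grid - 1)
        ll = sum(math.log(lam * p_background(w, h)
                          + (1.0 - lam) * p_adapted(w, h) + 1e-12)
                 for w, h in heldout_events)
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam
```

Note that `heldout_events` must be re-iterable (e.g., a list), since the loop sums over it once per grid point.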

Self-Organizing n-gram Model for Automatic Word Spacing (자기 조직화 n-gram모델을 이용한 자동 띄어쓰기)

  • Tae, Yoon-Shik; Park, Seong-Bae; Lee, Sang-Jo; Park, Se-Young
    • Annual Conference on Human and Language Technology / 2006.10e / pp.125-132 / 2006
  • Automatic word spacing is a very important problem in Korean natural language processing and information retrieval; spacing is difficult enough that errors can be found even in newspaper articles. This paper proposes a method that improves the accuracy of automatic word spacing using a self-organizing n-gram model. The proposed method builds on a variable-length n-gram model in which the context length can change, letting the model determine the context length automatically; this yields higher performance than an ordinary n-gram model. To find the optimal context length, the self-organizing n-gram model compares the probability distribution obtained when the context is extended with the distribution when it is not: if the difference is large, the context is extended; otherwise, it is automatically shortened. In other words, when more information is needed, the dimensionality of the data is increased to raise accuracy, and the added computation can be offset by pruning unnecessary data. Experiments confirm that the self-organizing structure outperforms the basic n-gram model. (The extension rule is sketched below.)

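The extension rule in the abstract compares the next-token distribution under a longer context with that under a shorter one. A minimal sketch, assuming KL divergence as the comparison measure and a fixed threshold (both assumptions; the paper only says the two distributions are compared):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over next-token distributions given as {token: prob} dicts."""
    return sum(pi * math.log((pi + eps) / (q.get(t, 0.0) + eps))
               for t, pi in p.items())

def context_length(history, dist_given, max_n=5, threshold=0.1):
    """Grow the context while the longer-context distribution still differs
    enough from the shorter-context one.

    dist_given(ctx) -> {token: P(token | ctx)} is an assumed model interface;
    history is the list of preceding syllables.
    """
    n = 1
    while n < max_n and n < len(history):
        shorter = dist_given(tuple(history[-n:]))
        longer = dist_given(tuple(history[-(n + 1):]))
        if kl_divergence(longer, shorter) <= threshold:
            break  # the extra context adds little information; stop growing
        n += 1
    return n
```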

Design of Brain-computer Korean typewriter using N-gram model (N-gram 모델을 이용한 뇌-컴퓨터 한국어 입력기 설계)

  • Lee, Saebyeok; Lim, Heui-Seok
    • Annual Conference on Human and Language Technology / 2010.10a / pp.143-146 / 2010
  • A brain-computer interface (BCI) is a technology that enables direct control of a computer or an external device through biosignals generated by the brain. For patients who cannot produce language voluntarily, research is needed on an interface that allows free Korean text input through a BCI. In this study, to improve the low information transfer rate of communication-oriented BCIs, we implemented a language prediction model using syllable n-gram and eojeol n-gram models, and we designed a brain-computer Korean typewriter that uses it. This approach differs from existing BCI research, which has concentrated on improving feature extraction or machine-learning performance. (The prediction step is sketched below.)

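The prediction component can be sketched as ranking next-syllable candidates with a syllable n-gram model so that likely characters are offered first, reducing the number of brain-signal selections per character; the model format below is an assumption.

```python
def rank_next_syllables(prefix, syllable_bigram, k=5):
    """Offer the k most probable next syllables given the syllables typed so far.

    syllable_bigram: {prev_syllable: {next_syllable: P(next | prev)}},
    an assumed toy model format.
    """
    prev = prefix[-1] if prefix else "<s>"
    candidates = syllable_bigram.get(prev, {})
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

# Toy model: after typing "안", the interface offers "녕" first.
toy = {"안": {"녕": 0.6, "내": 0.2, "심": 0.1}}
print(rank_next_syllables("안", toy))  # ['녕', '내', '심']
```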

Detecting Spectre Malware Binary through Function Level N-gram Comparison (함수 단위 N-gram 비교를 통한 Spectre 공격 바이너리 식별 방법)

  • Kim, Moon-Sun; Yang, Hee-Dong; Kim, Kwang-Jun; Lee, Man-Hee
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1043-1052 / 2020
  • Signature-based malware detection methods share a common limitation: it is very hard to detect modified malware or new malware exploiting zero-day vulnerabilities. To overcome this limitation, many studies classify malware using N-grams. Although such methods can detect malware with high accuracy, they struggle to identify attacks that consist of very short code sequences, such as Spectre. We propose a function-level N-gram comparison algorithm to identify Spectre attack binaries effectively. To validate the algorithm, we built N-gram data sets from 165 normal binaries and 25 malicious binaries. In our experiments, a Random Forest model identified Spectre malicious functions with 99.99% accuracy and an F1-score of 92%. (A feature-extraction sketch follows the entry.)
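
The pipeline in the abstract maps naturally onto extracting opcode n-grams per function and training a Random Forest on the resulting feature vectors. A minimal scikit-learn sketch, assuming each function has already been disassembled into an opcode sequence (the n-gram range and all data are illustrative):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Each function is a space-joined opcode sequence (assumed preprocessing);
# label 1 marks a Spectre-gadget function, 0 a normal function.
functions = ["mov cmp jae shl mov lfence", "push mov pop ret"]
labels = [1, 0]

# Opcode-level n-grams of length 2-4; the range is an assumption.
vectorizer = CountVectorizer(analyzer="word", token_pattern=r"\S+",
                             ngram_range=(2, 4))
X = vectorizer.fit_transform(functions)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["mov cmp jae shl lfence"])))
```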

The Utilization of Local Document Information to Improve Statistical Context-Sensitive Spelling Error Correction (통계적 문맥의존 철자오류 교정 기법의 향상을 위한 지역적 문서 정보의 활용)

  • Lee, Jung-Hun; Kim, Minho; Kwon, Hyuk-Chul
    • KIISE Transactions on Computing Practices / v.23 no.7 / pp.446-451 / 2017
  • The statistical context-sensitive spelling correction technique in this paper is based on Shannon's noisy channel model. Interpolation is used to improve the proposed correction method: the conventional interpolation approach fills in probabilities using (N-1)-gram and (N-2)-gram estimates drawn from the same statistical corpus. The proposed method instead interpolates frequency information from the statistical corpus with frequency information from the document being corrected. Using the frequencies of the correction document has two advantages. First, a probability can be obtained for a coined word that exists only in the correction document. Second, even when two correction candidates have ambiguous probability values, the ambiguity is resolved by referring to the correction document. The proposed method showed better precision and recall than the existing correction model. (A local-global interpolation sketch follows.)
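
A minimal sketch of mixing the global corpus estimate with the local estimate from the document under correction; the weight, add-one smoothing, and vocabulary size are illustrative assumptions.

```python
def local_global_prob(word, corpus_counts, corpus_total,
                      doc_counts, doc_total, lam=0.8, vocab_size=100_000):
    """lam * P_corpus(word) + (1 - lam) * P_document(word), add-one smoothed.

    A coined word that appears only in the document being corrected still
    receives a meaningful probability through the local term.
    """
    p_global = (corpus_counts.get(word, 0) + 1) / (corpus_total + vocab_size)
    p_local = (doc_counts.get(word, 0) + 1) / (doc_total + vocab_size)
    return lam * p_global + (1.0 - lam) * p_local
```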

Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park, Young-Hee; Chung, Minhwa
    • Proceedings of the KSPS conference / 2003.05a / pp.83-86 / 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Compared with written text corpora, Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. We report two approaches to style-based language model adaptation. Both focus on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of their neighboring words. The best result reduces the word error rate by 6.5% absolute and shows that n-gram-based relevance weighting captures style differences well and that disfluencies are good predictors. (Disfluency modeling is sketched after the entry.)

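One way to read "disfluencies as predictors" is to keep filled pauses as ordinary tokens so that they condition the following word instead of being discarded. A toy bigram illustration (the pause token and corpus are assumptions):

```python
from collections import Counter, defaultdict

def bigram_counts(sentences):
    """MLE bigram counts; filled-pause tokens such as '<uh>' are kept,
    so a distribution P(next | '<uh>') is actually estimated."""
    counts = defaultdict(Counter)
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

corpus = ["<uh> i mean the meeting", "the meeting <uh> starts now"]
model = bigram_counts(corpus)
print(model["<uh>"])  # Counter({'i': 1, 'starts': 1}) - words following a pause
```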