• Title/Summary/Keyword: N-Gram

n-Gram/2L: A Space and Time Efficient Two-Level n-Gram Inverted Index Structure (n-gram/2L: 공간 및 시간 효율적인 2단계 n-gram 역색인 구조)

  • Kim Min-Soo;Whang Kyu-Young;Lee Jae-Gil;Lee Min-Jae
    • Journal of KIISE:Databases / v.33 no.1 / pp.12-31 / 2006
  • The n-gram inverted index has two major advantages: it is language-neutral and error-tolerant. Due to these advantages, it has been widely used in information retrieval and in similar-sequence matching for DNA and protein databases. Nevertheless, the n-gram inverted index also has drawbacks: its size tends to be very large, and query performance tends to be poor. In this paper, we propose the two-level n-gram inverted index (simply, the n-gram/2L index), which significantly reduces the size and improves the query performance while preserving the advantages of the n-gram inverted index. The proposed index eliminates the redundancy of the position information that exists in the n-gram inverted index. It is constructed in two steps: 1) extracting subsequences of length m from documents and 2) extracting n-grams from those subsequences. We formally prove that this two-step construction is identical to the relational normalization process that removes the redundancy caused by a non-trivial multivalued dependency. The n-gram/2L index has excellent properties: 1) it significantly reduces the size and improves the performance compared with the n-gram inverted index, and these improvements become more marked as the database size grows; 2) the query processing time increases only very slightly as the query length gets longer. Experimental results using databases of 1 GByte show that the size of the n-gram/2L index is reduced by a factor of 1.9 to 2.7 and, at the same time, the query performance is improved by up to 13.1 times compared with the n-gram inverted index.
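
The two-step construction described in this abstract lends itself to a short illustration. The sketch below (hypothetical names; not the authors' implementation) builds a back-end index over subsequences of length m and a front-end index from subsequences to documents, which is the structural idea behind a two-level n-gram index:

```python
from collections import defaultdict

def build_ngram_2l_index(docs, m=4, n=2):
    """Minimal sketch of a two-level n-gram index.

    Back-end index:  n-gram -> [(subsequence id, offset within subsequence)]
    Front-end index: subsequence id -> [(document id, offset within document)]
    """
    subseq_ids = {}                  # subsequence text -> id
    back_index = defaultdict(list)   # postings over subsequences
    front_index = defaultdict(list)  # postings over documents

    step = m - (n - 1)               # overlap by n-1 so no n-gram is lost at a boundary
    for doc_id, text in enumerate(docs):
        # step 1: extract overlapping subsequences of length m
        for pos in range(0, max(len(text) - n + 1, 1), step):
            sub = text[pos:pos + m]
            if sub not in subseq_ids:
                sid = subseq_ids[sub] = len(subseq_ids)
                # step 2: extract n-grams from each distinct subsequence once
                for off in range(len(sub) - n + 1):
                    back_index[sub[off:off + n]].append((sid, off))
            front_index[subseq_ids[sub]].append((doc_id, pos))
    return back_index, front_index, subseq_ids

back, front, subs = build_ngram_2l_index(["abcdefg", "bcdexyz"], m=4, n=2)
print(len(back), len(front))
```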

Out of Vocabulary Word Extractor based on a Syllable n-gram (음절 n-gram 기반의 미등록 어휘 추정기 구현)

  • Shin, Junsoo;Hong, Chohee
    • Annual Conference on Human and Language Technology / 2013.10a / pp.139-141 / 2013
  • As diverse content is produced, neologisms and out-of-vocabulary (OOV) words also appear in many forms. Such neologisms and OOV words are misanalyzed in the text-processing stage and become a cause of degraded performance. To address this problem, this paper proposes a method for estimating neologisms and OOV words from a large document collection. The proposed method first extracts syllable n-grams from the documents and then additionally extracts (n+1)-grams and (n-1)-grams by expanding and shrinking each n-gram by one syllable. Taking each syllable n-gram as a reference point, the frequency differences against its (n+1)-grams and (n-1)-grams are computed, and spans where the frequency drops sharply are estimated to be neologisms or OOV words. Experimental results show that not only neologisms but also domain-specific OOV words, such as those from Twitter and me2DAY, are extracted.
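
A minimal sketch of the frequency-gap idea described above, assuming simple character-level syllable n-grams and a hypothetical drop-ratio threshold (not the authors' code):

```python
from collections import Counter, defaultdict

def ngrams(text, n):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def oov_candidates(corpus, n=3, min_count=5, drop_ratio=0.2):
    """Flag syllable n-grams whose frequency drops sharply when extended by
    one syllable -- a crude signal that the n-gram ends at a word boundary."""
    freq_n = Counter(g for line in corpus for g in ngrams(line, n))
    freq_n1 = Counter(g for line in corpus for g in ngrams(line, n + 1))

    # highest frequency among the (n+1)-gram extensions of each n-gram
    best_ext = defaultdict(int)
    for g, c in freq_n1.items():
        best_ext[g[:-1]] = max(best_ext[g[:-1]], c)

    return [g for g, c in freq_n.items()
            if c >= min_count and best_ext[g] / c < drop_ratio]
```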

Accurate Intrusion Detection using n-Gram Augmented Naive Bayes (N-Gram 증강 나이브 베이스를 이용한 정확한 침입 탐지)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.285-288 / 2008
  • In many intrusion detection applications, the n-gram approach has been widely applied. However, the n-gram approach has a few problems, including double counting of features. To address these problems, we applied n-gram augmented Naive Bayes directly to classify intrusive sequences and compared its performance with that of Naive Bayes and Support Vector Machines (SVM) with n-gram features in experiments on host-based intrusion detection benchmark data sets. Experimental results on the University of New Mexico (UNM) benchmark data sets show that the n-gram augmented method, which solves the independence violation that arises when n-gram features are applied directly to Naive Bayes (i.e., Naive Bayes with n-gram features), yields intrusion detectors with higher accuracy than Naive Bayes with n-gram features and with accuracy comparable to SVM with n-gram features.
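
For context, the baseline being compared here, Naive Bayes applied directly to n-gram features of system-call sequences, can be sketched as follows; the augmented model proposed in the paper differs precisely in how the overlapping n-grams are treated, which this sketch does not reproduce:

```python
import math
from collections import Counter

def call_ngrams(seq, n=3):
    """Overlapping n-grams of a system-call sequence (list of call names)."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

class NgramNaiveBayes:
    """Multinomial Naive Bayes over n-gram features (the baseline model;
    overlapping n-grams are counted independently, which is exactly the
    double-counting issue the augmented method addresses)."""
    def __init__(self, n=3, alpha=1.0):
        self.n, self.alpha = n, alpha

    def fit(self, sequences, labels):
        label_freq = Counter(labels)
        self.priors = {y: math.log(c / len(labels)) for y, c in label_freq.items()}
        self.counts = {y: Counter() for y in label_freq}
        for seq, y in zip(sequences, labels):
            self.counts[y].update(call_ngrams(seq, self.n))
        self.vocab = set(g for c in self.counts.values() for g in c)
        self.totals = {y: sum(c.values()) for y, c in self.counts.items()}
        return self

    def predict(self, seq):
        grams = call_ngrams(seq, self.n)
        def loglik(y):
            denom = self.totals[y] + self.alpha * len(self.vocab)
            return self.priors[y] + sum(
                math.log((self.counts[y][g] + self.alpha) / denom) for g in grams)
        return max(self.priors, key=loglik)
```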

An investigation of chroma n-gram selection for cover song search (커버곡 검색을 위한 크로마 n-gram 선택에 관한 연구)

  • Seo, Jin Soo;Kim, Junghyun;Park, Jihyun
    • The Journal of the Acoustical Society of Korea / v.36 no.6 / pp.436-441 / 2017
  • Computing music similarity is indispensable in constructing a music retrieval system. This paper focuses on cover song search among the various music-retrieval tasks. We investigate a cover song search method based on chroma n-grams that reduces feature-DB storage and enhances search accuracy. Specifically, we propose a t-tab n-gram, an n-gram selection method, and an n-gram set comparison method. Experiments on a widely used music dataset confirm that the proposed method improves cover song search accuracy as well as reduces feature storage.
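
A rough sketch of chroma n-gram set comparison under one possible reading of the abstract (dominant pitch class per frame, a frame-spacing parameter t, and Jaccard overlap as the set-comparison measure); the paper's exact t-tab n-gram and selection method are not reproduced here:

```python
import numpy as np

def chroma_ngrams(chroma, n=4, t=1):
    """Hypothetical chroma n-grams: reduce each 12-dim chroma frame to its
    strongest pitch class, then take n classes spaced t frames apart."""
    classes = np.argmax(chroma, axis=1)          # dominant pitch class per frame
    return {tuple(classes[i:i + n * t:t]) for i in range(len(classes) - n * t + 1)}

def ngram_set_similarity(set_a, set_b):
    """Jaccard overlap between two songs' n-gram sets (one simple choice
    of set-comparison measure)."""
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

# toy usage: two random "chromagrams" of shape (frames, 12)
rng = np.random.default_rng(0)
a, b = rng.random((200, 12)), rng.random((180, 12))
print(ngram_set_similarity(chroma_ngrams(a), chroma_ngrams(b)))
```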

Scalable and Accurate Intrusion Detection using n-Gram Augmented Naive Bayes and Generalized k-Truncated Suffix Tree (N-그램 증강 나이브 베이스 알고리즘과 일반화된 k-절단 서픽스트리를 이용한 확장가능하고 정확한 침입 탐지 기법)

  • Kang, Dae-Ki;Hwang, Gi-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.4 / pp.805-812 / 2009
  • In many intrusion detection applications, the n-gram approach has been widely applied. However, the n-gram approach has a few problems, including lack of scalability and double counting of features. To address these problems, we applied n-gram augmented Naive Bayes with a k-truncated suffix tree (k-TST) storage mechanism directly to classify intrusive sequences and compared its performance with that of Naive Bayes and Support Vector Machines (SVM) with n-gram features in experiments on host-based intrusion detection benchmark data sets. Experimental results on the University of New Mexico (UNM) benchmark data sets show that the n-gram augmented method, which solves the independence violation that arises when n-gram features are applied directly to Naive Bayes (i.e., Naive Bayes with n-gram features), yields intrusion detectors with higher accuracy than Naive Bayes with n-gram features and with accuracy comparable to SVM with n-gram features. For scalable and efficient counting of n-gram features, we use the k-truncated suffix tree mechanism for storing n-gram features. With the k-truncated suffix tree storage mechanism, we tested the performance of the classifiers up to 20-grams, which illustrates the scalability and accuracy of n-gram augmented Naive Bayes with k-truncated suffix tree storage.
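
The k-truncated suffix tree storage idea can be illustrated with a simplified trie that stores every substring of length at most k, giving counts for all n-grams with n ≤ k in a single pass (a sketch of the storage idea, not the paper's exact data structure):

```python
class TruncatedSuffixTrie:
    """Counts of every substring of length <= k, stored in a trie.

    A simplified stand-in for the k-truncated suffix tree: inserting each
    suffix truncated to k symbols gives, in one pass, the counts of all
    n-grams for every n <= k."""
    def __init__(self, k=20):
        self.k = k
        self.root = {}

    def add_sequence(self, seq):
        for start in range(len(seq)):
            node = self.root
            for sym in seq[start:start + self.k]:
                node = node.setdefault(sym, {'#': 0})   # '#' stores the count
                node['#'] += 1

    def count(self, gram):
        node = self.root
        for sym in gram:
            if sym not in node:
                return 0
            node = node[sym]
        return node['#']

trie = TruncatedSuffixTrie(k=20)
trie.add_sequence(["open", "read", "mmap", "open", "read", "close"])
print(trie.count(["open", "read"]))   # -> 2
```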

A Study on Applying Novel Reverse N-Gram for Construction of Natural Language Processing Dictionary for Healthcare Big Data Analysis (헬스케어 분야 빅데이터 분석을 위한 개체명 사전구축에 새로운 역 N-Gram 적용 연구)

  • KyungHyun Lee;RackJune Baek;WooSu Kim
    • The Journal of the Convergence on Culture Technology / v.10 no.3 / pp.391-396 / 2024
  • This study proposes a novel reverse N-gram approach to overcome the limitations of traditional N-gram methods and to enhance performance in building an entity dictionary specialized for the healthcare sector. The proposed reverse N-gram technique allows more precise analysis and processing of the complex linguistic features of healthcare-related big data. To verify the efficiency of the proposed method, big data on healthcare and digital health announced during the Consumer Electronics Show (CES), held each January, was collected. Using the Python programming language, 2,185 news titles and summaries mentioned from January 1 to 31, 2010 and from January 1 to 31, 2024 were preprocessed with the new reverse N-gram method. This resulted in the stable construction of a dictionary for natural language processing in the healthcare field.
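
The abstract does not spell out the algorithm, but one common reading of "reverse n-gram" is to enumerate n-grams from the end of a token backwards, so that candidates anchored at the suffix position are produced first. A hypothetical sketch of that reading, not the paper's method:

```python
def forward_ngrams(token, n):
    """Ordinary left-to-right character n-grams."""
    return [token[i:i + n] for i in range(len(token) - n + 1)]

def reverse_ngrams(token, n):
    """One possible reading of 'reverse n-grams': the same n-grams enumerated
    right-to-left, so candidates anchored at the end of the token (where
    Korean particles and suffixes sit) come first."""
    return [token[i:i + n] for i in range(len(token) - n, -1, -1)]

print(forward_ngrams("digitalhealth", 5))
print(reverse_ngrams("digitalhealth", 5))
```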

Interference Typo Correction Method by using Surrounding Word N-gram and Syllable N-gram (좌우 어절 N-gram 및 음절 N-gram을 이용한 간섭 오타 교정 방법)

  • Son, Sung-Hwan;Kang, Seung-Shik
    • Annual Conference on Human and Language Technology / 2019.10a / pp.496-499 / 2019
  • We focus on interference typos, i.e., unintended keystrokes caused by the narrow spacing between buttons on a smartphone's QWERTY soft keyboard. From the user's point of view, how well typos are corrected is an important aspect of correction performance, but keeping non-typo words (eojeols) unchanged can be judged even more important, because in practice non-typo words make up the vast majority of the input. Correction methods therefore need to be examined from this perspective. Accordingly, this paper proposes a probability-based method for correcting Korean interference typos using a large Korean corpus. The proposed method selects the most suitable candidate among the possible interference-typo correction candidates for a target word, based on the n-grams of the words to its left and right and the syllable n-grams within the word.
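
A minimal sketch of the candidate-generation and scoring idea (hypothetical key-adjacency map and an English toy example for readability; the paper targets Korean eojeols and also mixes in within-word syllable n-gram scores):

```python
from collections import Counter

# tiny excerpt of a hypothetical QWERTY key-adjacency map
ADJACENT = {"q": "wa", "w": "qeas", "e": "wrds", "r": "etdf",
            "t": "rygf", "s": "awedxz"}

def interference_candidates(word):
    """All strings reachable by replacing one character with a neighbouring
    key -- the kind of 'interference' typo caused by cramped soft-keyboard buttons."""
    cands = {word}
    for i, ch in enumerate(word):
        for alt in ADJACENT.get(ch, ""):
            cands.add(word[:i] + alt + word[i + 1:])
    return cands

def correct(left, word, right, word_trigrams, keep_bonus=1.0):
    """Pick the candidate that the surrounding word n-gram counts like best.
    word_trigrams is a Counter over (left, word, right) triples from a large
    corpus; keep_bonus biases the choice toward leaving the word unchanged."""
    def score(cand):
        return word_trigrams[(left, cand, right)] + (keep_bonus if cand == word else 0.0)
    return max(interference_candidates(word), key=score)

counts = Counter({("the", "water", "ran"): 12})
print(correct("the", "wster", "ran", counts))   # -> "water"
```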

Image Categorization Using Color N×M-grams (Color N×M-grams를 이용한 영상 분류)

  • 이은주;정성환
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.402-404 / 1998
  • With the recent rapid growth of systems that store image data, there has been great interest in techniques that classify and retrieve images based on the similarity of their feature elements. In this paper, we propose Color N×M-grams, a modification of the existing N×M-grams, for classifying color images. Color N×M-grams extract image-specific structural information using the color information of an image and then measure similarity to classify images. To evaluate the proposed method, experiments were performed using 39 pairs of benchmark images. The experimental results show that the proposed Color N×M-grams method achieves a rank-1 classification rate about 19% better than classifying color images with the existing N×M-grams.
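
One plausible reading of Color N×M-grams, sketched below: quantize pixel colors, slide an N×M window over the image, count the window patterns, and compare two images by the overlap of their pattern profiles (illustrative only; the paper's exact feature definition may differ):

```python
import numpy as np
from collections import Counter

def color_nxm_grams(img, n=2, m=2, levels=4):
    """Hypothetical Color NxM-grams: quantize each pixel's color, slide an
    n x m window over the image, and count each window pattern."""
    q = img.astype(int) * levels // 256                  # coarse color quantization
    flat = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    grams = Counter()
    h, w = flat.shape
    for y in range(h - n + 1):
        for x in range(w - m + 1):
            grams[tuple(flat[y:y + n, x:x + m].ravel())] += 1
    return grams

def similarity(g1, g2):
    """Histogram-intersection style similarity between two gram profiles."""
    common = set(g1) & set(g2)
    return sum(min(g1[k], g2[k]) for k in common) / max(sum(g1.values()), 1)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (32, 32, 3))
b = rng.integers(0, 256, (32, 32, 3))
print(similarity(color_nxm_grams(a), color_nxm_grams(b)))
```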

A Study on Pseudo N-gram Language Models for Speech Recognition (음성인식을 위한 의사(疑似) N-gram 언어모델에 관한 연구)

  • 오세진;황철준;김범국;정호열;정현열
    • Journal of the Institute of Convergence Signal Processing / v.2 no.3 / pp.16-23 / 2001
  • In this paper, we propose pseudo n-gram language models for speech recognition with a medium-size vocabulary, in contrast to large-vocabulary speech recognition that uses statistical n-gram language models. The proposed method is very simple: it uses the standard ARPA format and sets the word probabilities arbitrarily. First, the 1-grams set the word occurrence probability to 1 (log likelihood 0.0). Second, the 2-grams also set the word occurrence probability to 1 and can only connect the word start symbol <s> to a WORD, and a WORD to the word end symbol </s>. Finally, the 3-grams also set the word occurrence probability to 1 and can only connect the word start symbol <s>, a WORD, and the word end symbol </s>. To verify the effectiveness of the proposed method, word recognition experiments were carried out. Preliminary (off-line) experimental results show an average word accuracy of 97.7% for 452 words uttered by 3 male speakers. The on-line word recognition results show an average word accuracy of 92.5% for 20 words uttered by 20 male speakers, drawn from a stock-name vocabulary of 1,500 words. Through these experiments, we have verified the effectiveness of the pseudo n-gram language models for speech recognition.
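
The described pseudo n-gram model is essentially an isolated-word grammar written in ARPA n-gram form, with every allowed n-gram given probability 1 (log probability 0.0). A simplified generator is sketched below; real ARPA files also carry proper back-off weights, which are reduced to placeholders here:

```python
def pseudo_ngram_arpa(words):
    """Write a 'pseudo' trigram LM in ARPA format in which every allowed
    n-gram has probability 1 (log10 prob 0.0): each utterance is forced to
    be <s> WORD </s>, i.e. an isolated-word grammar expressed as an n-gram LM."""
    unigrams = ["<s>", "</s>"] + list(words)
    bigrams = [("<s>", w) for w in words] + [(w, "</s>") for w in words]
    trigrams = [("<s>", w, "</s>") for w in words]

    lines = ["\\data\\",
             f"ngram 1={len(unigrams)}",
             f"ngram 2={len(bigrams)}",
             f"ngram 3={len(trigrams)}",
             "", "\\1-grams:"]
    lines += [f"0.0\t{w}\t0.0" for w in unigrams]
    lines += ["", "\\2-grams:"]
    lines += [f"0.0\t{a} {b}\t0.0" for a, b in bigrams]
    lines += ["", "\\3-grams:"]
    lines += [f"0.0\t{a} {b} {c}" for a, b, c in trigrams]
    lines += ["", "\\end\\"]
    return "\n".join(lines)

# hypothetical stock-name vocabulary entries
print(pseudo_ngram_arpa(["SAMSUNG", "HYUNDAI", "POSCO"]))
```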

A Method for Twitter Spam Detection Using N-Gram Dictionary Under Limited Labeling (트레이닝 데이터가 제한된 환경에서 N-Gram 사전을 이용한 트위터 스팸 탐지 방법)

  • Choi, Hyeok-Jun;Park, Cheong Hee
    • KIPS Transactions on Software and Data Engineering / v.6 no.9 / pp.445-456 / 2017
  • In this paper, we propose a method to detect spam tweets containing unhealthy information by using an n-gram dictionary under limited labeling. Spam tweets that contain unhealthy information tend to use similar words and sentences. Based on this characteristic, we show that spam tweets can be detected effectively by applying a Naive Bayesian classifier with n-gram dictionaries constructed from spam tweets and normal tweets. On the other hand, constructing an initial training set is very costly because a large amount of data flows into Twitter in real time. Therefore, a spam detection method is needed that can be applied when the initial training set is very small or does not exist. To solve this problem, we propose a method that generates pseudo-labels by utilizing Twitter's retweet function and uses them to configure the initial training set and to update the n-gram dictionaries. Results from various experiments using 1.3 million Korean tweets collected from December 1, 2016 to December 7, 2016 show that the proposed method outperforms the compared spam detection methods.
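
A compact sketch of the n-gram-dictionary classifier with retweet-based pseudo-labelling (hypothetical threshold and heuristics; the paper's actual pseudo-labelling rules may differ):

```python
import math
from collections import Counter

def char_ngrams(text, n=2):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NgramDictSpamFilter:
    """Naive Bayes over n-gram dictionaries built from spam / normal tweets."""
    def __init__(self, n=2, alpha=1.0):
        self.n, self.alpha = n, alpha
        self.dicts = {"spam": Counter(), "normal": Counter()}
        self.doc_counts = {"spam": 1, "normal": 1}   # add-one smoothing on the priors

    def add(self, text, label):
        self.dicts[label].update(char_ngrams(text, self.n))
        self.doc_counts[label] += 1

    def score(self, text, label):
        vocab = max(len(set(self.dicts["spam"]) | set(self.dicts["normal"])), 1)
        total = sum(self.dicts[label].values()) + self.alpha * vocab
        prior = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        return prior + sum(math.log((self.dicts[label][g] + self.alpha) / total)
                           for g in char_ngrams(text, self.n))

    def is_spam(self, text):
        return self.score(text, "spam") > self.score(text, "normal")

    def update_with_pseudo_labels(self, tweets, retweet_counts, threshold=100):
        """Crude pseudo-labelling: texts retweeted en masse seed the spam
        dictionary, everything else the normal dictionary."""
        for text, rt in zip(tweets, retweet_counts):
            self.add(text, "spam" if rt >= threshold else "normal")
```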