• Title/Summary/Keyword: N-gram Model

71 search results

Dependency relation analysis and mutual information technique for ASR rescoring (음성인식 리스코링을 위한 의존관계분석과 상호정보량 접근방법의 비교)

  • Chung, Euisok;Jeon, Hyung-Bae;Park, Jeon-Gue
    • Annual Conference on Human and Language Technology / 2014.10a / pp.164-166 / 2014
  • Speech recognition can produce multiple candidate results. Each candidate carries combined information in the form of an acoustic model score and a language model score. Recomputing the language model score to improve accuracy is one of the common approaches to improving speech recognition performance, and n-gram based rescoring has been used for this purpose. To obtain an appropriate performance improvement, this paper examines a dependency-relation analysis approach over the words composing a sentence, which takes into account the problems of using large-scale n-gram models, and an approach using mutual information values of word pairs at fixed distances.
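The n-gram rescoring described in this abstract can be sketched as follows; the bigram table, back-off constant, interpolation weight, and N-best hypotheses are all hypothetical, and a real system would use a trained ARPA-format model:

```python
import math

# Toy bigram log-probabilities (hypothetical values for illustration).
BIGRAM_LOGPROB = {
    ("<s>", "recognize"): math.log(0.4),
    ("recognize", "speech"): math.log(0.5),
    ("<s>", "wreck"): math.log(0.1),
    ("wreck", "a"): math.log(0.2),
    ("a", "nice"): math.log(0.3),
    ("nice", "beach"): math.log(0.4),
}
UNSEEN = math.log(1e-4)  # crude back-off for unseen bigrams

def lm_score(words):
    """Sum of bigram log-probabilities over a hypothesis."""
    tokens = ["<s>"] + words
    return sum(BIGRAM_LOGPROB.get(bg, UNSEEN) for bg in zip(tokens, tokens[1:]))

def rescore(nbest, lm_weight=0.7):
    """Re-rank an N-best list by acoustic score plus re-weighted LM score."""
    return sorted(nbest,
                  key=lambda h: h["acoustic"] + lm_weight * lm_score(h["words"]),
                  reverse=True)

nbest = [
    {"words": ["wreck", "a", "nice", "beach"], "acoustic": -5.0},
    {"words": ["recognize", "speech"], "acoustic": -5.5},
]
best = rescore(nbest)[0]
```

The recomputed language model score can promote a hypothesis that the first pass ranked lower, which is the point of rescoring.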


Comments Classification System using Topic Signature (Topic Signature를 이용한 댓글 분류 시스템)

  • Bae, Min-Young;Cha, Jeong-Won
    • Journal of KIISE:Software and Applications / v.35 no.12 / pp.774-779 / 2008
  • In this work, we describe a comments classification system using topic signature. Topic signature is widely used for feature selection in document classification and summarization. Comments are short and contain many word-spacing errors and special characters. We first convert each comment into character 7-grams and treat each 7-gram as a sentence. We then convert the 7-grams into character 3-grams and treat each 3-gram as a word. We select key features using topic signature and classify new inputs with the Naive Bayes method. The experimental results show that the proposed method outperforms previous methods.
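The 7-gram-to-3-gram conversion described in the abstract can be sketched as follows; the example comment and the decision to strip spaces before slicing are assumptions for illustration:

```python
def char_ngrams(text, n):
    """All overlapping character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def comment_to_features(comment):
    """Treat each character 7-gram as a 'sentence' and its
    character 3-grams as 'words', as the abstract describes."""
    sentences = char_ngrams(comment.replace(" ", ""), 7)
    return [char_ngrams(s, 3) for s in sentences]

feats = comment_to_features("good movie")
```

Because the features are character n-grams rather than whitespace tokens, word-spacing errors in the comments matter less.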

Performance Evaluation of Large Vocabulary Continuous Speech Recognition System (대어휘 연속음성 인식 시스템의 성능평가)

  • Kim Joo-Gon;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.99-102 / 2002
  • In this paper, we introduce a Multi-Pass search method to improve the performance of a Korean large-vocabulary continuous speech recognition system and verify its effectiveness. For the continuous speech recognition experiments, we compare HTK, which is widely used for research, with a recognition system using the Multi-Pass search method. The language model used in the large-vocabulary system is a word N-gram language model in the standard ARPA format: recognition proceeds by Multi-Pass search using a 2-gram language model in the first pass and a backward 3-gram language model in the second pass. We configured the Multi-Pass search method to suit Korean continuous speech recognition and ran recognition experiments on various Korean speech databases. In experiments on a noisy stock-trading continuous speech database collected over the telephone network, HTK achieved 59.50% recognition accuracy while the Multi-Pass system achieved 73.31%, an improvement of about 13% over HTK.
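The backward 3-gram scoring used in the second pass can be sketched as follows; the probability table and sentence-end padding convention are hypothetical, and a real model would be trained on reversed text:

```python
import math

# Hypothetical backward-trigram log-probabilities:
# P(w_i | w_{i+1}, w_{i+2}), i.e. each word conditioned on its successors.
BACKWARD_TRIGRAM = {
    ("the", "stock", "rose"): math.log(0.3),
    ("stock", "rose", "</s>"): math.log(0.5),
}
UNSEEN = math.log(1e-4)  # crude back-off for unseen trigrams

def backward_trigram_score(words):
    """Score a hypothesis right-to-left: each word is conditioned
    on the two words that follow it."""
    tokens = words + ["</s>", "</s>"]
    return sum(
        BACKWARD_TRIGRAM.get((tokens[i], tokens[i + 1], tokens[i + 2]), UNSEEN)
        for i in range(len(words))
    )
```

Scoring in the reverse direction lets the second pass use context that the forward bigram first pass could not see.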


Text Mining Analysis Technique on ECDIS Accident Report (텍스트 마이닝 기법을 활용한 ECDIS 사고보고서 분석)

  • Lee, Jeong-Seok;Lee, Bo-Kyeong;Cho, Ik-Soon
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.4 / pp.405-412 / 2019
  • SOLAS requires that ECDIS be installed on ships of more than 500 gross tonnage engaged in international navigation by the first survey occurring after July 1, 2018. With its installation as a major new navigation instrument, several accidents related to the use of ECDIS have occurred. The 12 incident reports issued by MAIB, BSU, BEAmer, DMAIB, and DSB were analyzed, and the causes of the accidents were found to relate to the operation of the navigator and to the ECDIS system. The report text was analyzed using the R program to quantitatively examine words related to the causes of the accidents. We used text mining techniques such as Wordcloud, Wordnetwork, and Wordweight to represent the importance of words according to their frequency. Wordcloud uses the N-gram model to express the frequency of words in cloud form. In the uni-gram analysis, the word "ECDIS" occurred most often, and in the bi-gram analysis the phrase "Safety Contour" was used most frequently. Based on the bi-gram analysis, the causative words were classified into those concerning the officer and those concerning the ECDIS system, and the related words were represented with Wordnetwork. Finally, the words related to the officer and to the ECDIS system were compiled into word corpora, and Wordweight was applied to analyze the change in corpus frequency by year. A trend-line analysis of corpus variation shows that, more recently, the officer corpus has decreased while, conversely, the ECDIS-system corpus is gradually increasing.
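The uni-gram and bi-gram frequency counts underlying the Wordcloud analysis can be sketched as follows; the example token list is invented, not drawn from the accident reports:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Frequency of each word n-gram in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Hypothetical token stream standing in for report text.
report = "check safety contour check safety contour alarm".split()
uni = ngram_counts(report, 1)
bi = ngram_counts(report, 2)
```

The resulting counts are exactly what a wordcloud renderer needs: each n-gram sized by its frequency.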

An Improving Method of Efficiency for Word Clustering Based on Language Model (언어모델 기반 단어 클러스터링 알고리즘의 효율성 향상 기법)

  • Park, Sang-Woo;Kim, Youngtae;Kang, Dong-Min;Ra, Dongyul
    • Annual Conference on Human and Language Technology / 2011.10a / pp.55-60 / 2011
  • Word clustering is an important technique in natural language processing for coping with the data sparseness problem, which makes it difficult to use information about semantic relations between words. The best-known word clustering technique is Brown clustering, proposed for the development of class-based n-gram language models. The biggest problem in using Brown clustering, however, is that its resource requirements in both time and space are enormous. This work carried out experiments on improving the efficiency of this clustering technique. The experimental results show a speed-up of about 7.9 times or more over the naive approach.


Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park Young-Hee;Chung Minhwa
    • Proceedings of the KSPS conference / 2003.05a / pp.83-86 / 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. For style-based language model adaptation, we report two approaches. Our approaches focus on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of neighboring words. The best result reduces the word error rate by 6.5% absolute, and shows that n-gram based relevance weighting captures style differences well and that disfluencies are good predictors.
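The tf*idf relevance weighting described here can be sketched as follows, using unigrams for brevity where the paper uses n-grams; the three toy documents are hypothetical:

```python
import math
from collections import Counter

def tfidf_vector(doc_tokens, docs):
    """tf*idf weights over the terms of one document,
    with idf computed against the whole collection."""
    tf = Counter(doc_tokens)
    n_docs = len(docs)
    return {
        t: tf[t] * math.log(n_docs / sum(1 for d in docs if t in d))
        for t in tf
    }

def cosine(u, v):
    """Cosine similarity between two sparse tf*idf vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values())) or 1.0
    return dot / (norm(u) * norm(v))

docs = [
    ["well", "i", "think", "so"],        # in-domain conversational
    ["i", "think", "not"],               # candidate out-of-domain doc A
    ["annual", "report", "results"],     # candidate out-of-domain doc B
]
v0, v1, v2 = (tfidf_vector(d, docs) for d in docs)
```

An out-of-domain document whose style vector is closer to the in-domain data receives a larger weight when re-estimating the n-gram counts.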


Part-Of-Speech Tagging using multiple sources of statistical data (이종의 통계정보를 이용한 품사 부착 기법)

  • Cho, Seh-Yeong
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.501-506 / 2008
  • Statistical POS tagging is prone to error because of the inherent limitations of statistical data, especially a single source of data. It is therefore widely agreed that the possibility of further enhancement lies in exploiting various knowledge sources; these sources, however, are bound to be inconsistent with each other. This paper shows the applicability of the maximum entropy model to Korean POS tagging. We use n-gram data and trigger-pair data as knowledge sources, and show how the perplexity measure varies when the two sources are combined using the maximum entropy method. The experiment used a trigram model that produced 94.9% accuracy with a Hidden Markov Model and improved to 95.6% when combined with trigger-pair data using the maximum entropy method. This clearly shows the possibility of further enhancement as more knowledge sources are developed and combined with the ME method.

Related Works for an Input String Recommendation and Modification on Mobile Environment (모바일 기기의 입력 문자열 추천 및 오타수정 모델을 위한 주요 기술)

  • Lee, Song-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.602-604 / 2011
  • Due to the wide usage of smartphones and the mobile internet, mobile devices are used in various activities such as sending SMS, participating in SNS, and retrieving information, and the number of users taking advantage of them is growing. The keypads of a mobile device are much smaller than those of a desktop computer, so users have difficulty entering sentences quickly and correctly. In this study, we introduce string recommendation and typo-correction techniques that help a user enter text on mobile devices quickly and correctly. We describe the TRIE dictionary and the n-gram language model, the main technologies behind the keyword recommendation applied in online search engines.
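The TRIE-dictionary lookup behind keyword recommendation can be sketched as follows; the word list is hypothetical, and a production system would also rank completions with an n-gram language model:

```python
class Trie:
    """Minimal trie supporting prefix completion for input suggestion."""
    def __init__(self):
        self.children = {}
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def complete(self, prefix):
        """All stored words beginning with prefix, in sorted order."""
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def walk(n, acc):
            if n.is_word:
                out.append(prefix + acc)
            for ch, child in sorted(n.children.items()):
                walk(child, acc + ch)
        walk(node, "")
        return out

trie = Trie()
for w in ["search", "sea", "send", "phone"]:
    trie.insert(w)
```

Because lookup cost depends only on the prefix length, the trie stays fast even with a large dictionary, which matters on a constrained mobile device.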


A Design on Informal Big Data Topic Extraction System Based on Spark Framework (Spark 프레임워크 기반 비정형 빅데이터 토픽 추출 시스템 설계)

  • Park, Kiejin
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.521-526 / 2016
  • As on-line informal text data are massive in volume and unstructured in nature, there are limitations in applying traditional relational data model technologies to data storage and analysis. Moreover, with massive, dynamically generated social data, real-time analysis of users' reactions is hard to accomplish. In this paper, to easily capture the semantics of massive informal on-line documents with an unsupervised learning mechanism, we design and implement an automatic topic extraction system based on the mass of words that make up a document. The input data for the proposed system are first generated using an N-gram algorithm to build multi-word units that capture the meaning of sentences precisely, and Hadoop and Spark (an in-memory distributed computing framework) are adopted to run the topic model. In the experiments, TB-scale input data were preprocessed and the proposed topic extraction steps applied. We conclude that the proposed system performs well at extracting meaningful topics in good time, as intermediate results come directly from main memory instead of HDD reads.

Development and Evaluation of Information Extraction Module for Postal Address Information (우편주소정보 추출모듈 개발 및 평가)

  • Shin, Hyunkyung;Kim, Hyunseok
    • Journal of Creative Information Culture / v.5 no.2 / pp.145-156 / 2019
  • In this study, we developed and evaluated an information extraction module based on the named entity recognition technique. The module was designed for the problem of extracting postal address information from arbitrary documents without any prior knowledge of the document layout. From the perspective of information technology practice, our approach can be described as a probabilistic n-gram (bi- or tri-gram) method, a generalization of uni-gram based keyword matching. The main difference between our approach and conventional methods in natural language processing is that sentence detection, tokenization, and POS tagging are applied recursively rather than sequentially. Test results on approximately two thousand documents are presented in this paper.
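The contrast between uni-gram keyword matching and a bi-gram method can be sketched as follows; the training lines, digit normalization, and scoring rule are invented illustrations, much cruder than the module the abstract describes:

```python
from collections import Counter

def normalize(tok):
    """Collapse digit tokens so different house numbers share one symbol."""
    return "NUM" if tok.isdigit() else tok.lower()

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

# Hypothetical labeled address lines standing in for training data.
address_lines = [
    "123 Main Street Springfield".split(),
    "45 Oak Avenue Riverside".split(),
]
addr_bigrams = Counter(
    bg for line in address_lines
    for bg in bigrams([normalize(t) for t in line])
)

def line_score(tokens):
    """Fraction of a line's normalized bigrams seen in the address data:
    a crude stand-in for a probabilistic bi-gram matcher."""
    bgs = bigrams([normalize(t) for t in tokens])
    if not bgs:
        return 0.0
    return sum(addr_bigrams[bg] > 0 for bg in bgs) / len(bgs)
```

Pairs of adjacent tokens carry layout-independent context (number followed by street name) that single keywords cannot, which is the motivation for moving beyond uni-gram matching.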