
Enhancement of a language model using two separate corpora of distinct characteristics  

Cho, Sehyeong (MyongJi University, Department of Computer Science)
Chung, Tae-Sun (MyongJi University, Department of Computer Science)
Publication Information
Journal of the Korean Institute of Intelligent Systems, v.14, no.3, 2004, pp. 357-362
Abstract
Language models are essential for predicting the next word in a spoken sentence, thereby enhancing speech recognition accuracy, among other things. However, spoken language domains are numerous, and developers therefore suffer from a lack of corpora of sufficient size. This paper proposes a method of combining two n-gram language models, one constructed from a very small corpus in the domain of interest and the other from a large but less adequate corpus, resulting in a significantly enhanced language model. The method is based on the observation that a small corpus from the right domain yields high-quality n-grams but suffers from a serious sparseness problem, while a large corpus from a different domain provides richer n-gram statistics that are incorrectly biased. In our approach, the two sets of n-gram statistics are combined by extending the idea of Katz's backoff; we therefore call the method dual-source backoff. We ran experiments with 3-gram language models constructed from newspaper corpora of several million to tens of millions of words, together with models from smaller broadcast news corpora. The target domain was broadcast news. We obtained a significant improvement (30%) by incorporating a small corpus of around one thirtieth the size of the newspaper corpus.
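The abstract does not give the exact backoff formulation, but the idea can be illustrated with a small sketch. The Python code below shows one plausible reading of dual-source backoff for the bigram case (the paper's experiments use trigrams): when an in-domain bigram is unseen, the probability mass reserved by absolute discounting is redistributed over an out-of-domain bigram estimate, which in turn backs off to the in-domain unigram. The class name, the backoff order, and the fixed discount value are illustrative assumptions, not the authors' formulation.

from collections import Counter, defaultdict

class DualSourceBackoff:
    """Hypothetical dual-source backoff bigram model.

    Assumed backoff chain (not spelled out in the abstract):
    in-domain bigram -> out-of-domain bigram -> in-domain unigram.
    Absolute discounting reserves mass at each level; the reserved
    mass is renormalized over the next level, in the spirit of Katz.
    Assumes a non-empty in-domain corpus.
    """

    def __init__(self, in_corpus, out_corpus, discount=0.5):
        # Each corpus is a list of tokenized sentences (lists of strings).
        self.d = discount
        self.in_uni, self.in_next = self._count(in_corpus)
        self.out_uni, self.out_next = self._count(out_corpus)
        self.vocab = set(self.in_uni) | set(self.out_uni)
        self.in_total = sum(self.in_uni.values())

    @staticmethod
    def _count(corpus):
        uni = Counter()
        nxt = defaultdict(Counter)  # nxt[prev][w] = bigram count
        for sent in corpus:
            uni.update(sent)
            for a, b in zip(sent, sent[1:]):
                nxt[a][b] += 1
        return uni, nxt

    def _unigram(self, w):
        return self.in_uni[w] / self.in_total

    def _out_bigram(self, w, prev):
        # Discounted out-of-domain bigram, backing off to the in-domain unigram.
        seen = self.out_next[prev]
        if not seen:
            return self._unigram(w)
        if seen[w] > 0:
            return (seen[w] - self.d) / self.out_uni[prev]
        reserved = self.d * len(seen) / self.out_uni[prev]
        norm = sum(self._unigram(v) for v in self.vocab if seen[v] == 0)
        return reserved * self._unigram(w) / norm

    def prob(self, w, prev):
        # P(w | prev): trust the in-domain bigram first, then back off
        # to the out-of-domain statistics (the "dual source").
        seen = self.in_next[prev]
        if not seen:
            return self._out_bigram(w, prev)
        if seen[w] > 0:
            return (seen[w] - self.d) / self.in_uni[prev]
        reserved = self.d * len(seen) / self.in_uni[prev]
        norm = sum(self._out_bigram(v, prev) for v in self.vocab if seen[v] == 0)
        return reserved * self._out_bigram(w, prev) / norm

In this reading, the small in-domain corpus dominates wherever it has evidence, and the large out-of-domain corpus is consulted only for n-grams the small corpus never saw, which matches the motivation stated in the abstract.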
Keywords
language model; speech recognition; backoff; perplexity