• Title/Summary/Keyword: n-grams

Efficient Language Model based on VCCV unit for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 효율적인 언어모델)

  • Park, Seon-Hui;No, Yong-Wan;Hong, Gwang-Seok
    • Proceedings of the KIEE Conference / 2003.11c / pp.836-839 / 2003
  • In this paper, we implement a bigram language model and evaluate smoothing techniques to find the unit with the lowest perplexity. Word, morpheme, and clause units are widely used as the processing units of a language model. We propose VCCV units, which have a smaller vocabulary than morpheme or clause units, and compare them with clause and morpheme units using perplexity. The most common metric for evaluating a language model is the probability the model assigns to test data, and the measure of perplexity derived from it. Smoothing is used to estimate probabilities when there are insufficient data to estimate them accurately. We constructed n-grams of VCCV units with low perplexity and tested the language model using Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing, among others. The experimental results show that modified Kneser-Ney smoothing is the proper smoothing technique for VCCV units.
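
The abstract names modified Kneser-Ney as the winning technique. As a rough sketch of the underlying math (plain interpolated Kneser-Ney for bigrams, not the paper's modified variant, and without the VCCV unit extraction step), the following Python fragment trains a smoothed bigram model and scores perplexity; tokens would be VCCV-unit strings, and out-of-vocabulary test units are assumed to have been mapped to an <unk> token beforehand.

    from collections import Counter, defaultdict
    from math import exp, log

    def train_kn_bigram(tokens, discount=0.75):
        """Interpolated Kneser-Ney bigram model over a token list."""
        bigrams = Counter(zip(tokens, tokens[1:]))
        context_totals = Counter(tokens[:-1])          # c(v)
        followers = defaultdict(set)                   # distinct w after each v
        histories = defaultdict(set)                   # distinct v before each w
        for v, w in bigrams:
            followers[v].add(w)
            histories[w].add(v)
        bigram_types = len(bigrams)

        def prob(v, w):
            p_cont = len(histories[w]) / bigram_types  # continuation probability
            c_v = context_totals[v]
            if c_v == 0:
                return p_cont                          # unseen context: back off fully
            lam = discount * len(followers[v]) / c_v   # mass reserved for backoff
            return max(bigrams[(v, w)] - discount, 0) / c_v + lam * p_cont

        return prob

    def perplexity(prob, tokens):
        logp = sum(log(prob(v, w)) for v, w in zip(tokens, tokens[1:]))
        return exp(-logp / (len(tokens) - 1))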

A Study on Negation Handling and Term Weighting Schemes and Their Effects on Mood-based Text Classification (감정 기반 블로그 문서 분류를 위한 부정어 처리 및 단어 가중치 적용 기법의 효과에 대한 연구)

  • Jung, Yu-Chul;Choi, Yoon-Jung;Myaeng, Sung-Hyon
    • Korean Journal of Cognitive Science / v.19 no.4 / pp.477-497 / 2008
  • Mood classification of blog text is an interesting problem, with potential for a variety of Web services. This paper introduces an approach that enhances mood classification through normalized negation n-grams, which contain mood clues, and corpus-specific term weighting (CSTW). We experimented on blog texts with two different classification methods: Enhanced Mood Flow Analysis (EMFA) and Support Vector Machine based Mood Classification (SVMMC). The experiments show that the normalized negation n-gram method is quite effective in dealing with negations and gave gradual improvements in mood classification with EMFA. From the selection of CSTW, we observed that an appropriate weighting scheme is important for an adequate level of mood classification performance, as it outperforms both TF*IDF and TF.
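
The abstract does not spell out the normalization, but a common way to realize "normalized negation n-grams" is to fold a negation cue into the tokens in its scope so that negated mood clues become distinct n-gram features. A minimal sketch, with a hypothetical cue list and a fixed two-token scope:

    NEGATION_CUES = {"not", "no", "never", "n't"}      # illustrative cue list

    def normalize_negations(tokens, scope=2):
        """Absorb a negation cue into the following tokens so that negated
        mood clues form distinct n-gram features (e.g. NOT_happy)."""
        out, remaining = [], 0
        for tok in tokens:
            if tok.lower() in NEGATION_CUES:
                remaining = scope                      # open a negation window
                continue                               # the cue itself is dropped
            out.append("NOT_" + tok if remaining else tok)
            remaining = max(remaining - 1, 0)
        return out

    # normalize_negations("i am not happy today".split())
    # -> ['i', 'am', 'NOT_happy', 'NOT_today']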

A Corpus Analysis of British-American Children's Adventure Novels: Treasure Island (영미 아동 모험 소설에 관한 코퍼스 분석 연구: 『보물섬』을 중심으로)

  • Choi, Eunsaem;Jung, Chae Kwan
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.333-342 / 2021
  • In this study, we analyzed the vocabulary, lemmas, keywords, and n-grams in 『Treasure Island』 to identify linguistic features of this British-American children's adventure novel. Contrary to the popular claim that frequently used words are important and essential to a story, we found that the most frequent words in 『Treasure Island』 were mostly function words and proper nouns not directly related to the plot. We also ascertained that a keyword list produced by the statistical method of a corpus program was not, by itself, enough to surmise the story. However, we managed to extract 30 keywords through a first quantitative keyword analysis followed by a second qualitative keyword analysis. We also carried out a series of n-gram analyses and discovered lexical bundles that were preferred and frequently used by the author. We hope that the results of this study will help spread interest in British-American children's literature and further advance corpus stylistics.
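
A lexical-bundle search of the kind described reduces to counting frequent word n-grams. A minimal Python sketch (the n = 4 and the frequency cutoff are illustrative, not the study's settings):

    from collections import Counter

    def lexical_bundles(tokens, n=4, min_freq=5):
        """Word n-grams frequent enough to count as lexical bundles."""
        grams = Counter(zip(*(tokens[i:] for i in range(n))))
        return [(g, c) for g, c in grams.most_common() if c >= min_freq]

    # tokens = open("treasure_island.txt").read().lower().split()
    # lexical_bundles(tokens)[:10]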

An end-to-end synthesis method for Korean text-to-speech systems (한국어 text-to-speech(TTS) 시스템을 위한 엔드투엔드 합성 방식 연구)

  • Choi, Yeunju;Jung, Youngmoon;Kim, Younggwan;Suh, Youngjoo;Kim, Hoirin
    • Phonetics and Speech Sciences / v.10 no.1 / pp.39-48 / 2018
  • A typical statistical parametric speech synthesis (text-to-speech, TTS) system consists of separate modules, such as a text analysis module, an acoustic modeling module, and a speech synthesis module. This causes two problems: 1) expert knowledge of each module is required, and 2) errors generated in each module accumulate as they pass through the modules. An end-to-end TTS system avoids these problems by synthesizing voice signals directly from an input string. In this study, we implemented an end-to-end Korean TTS system using Google's Tacotron, an end-to-end TTS system based on a sequence-to-sequence model with an attention mechanism. We used 4392 utterances spoken by a Korean female speaker, an amount corresponding to 37% of the dataset Google used to train Tacotron. Our system obtained a mean opinion score (MOS) of 2.98 and a degradation mean opinion score (DMOS) of 3.25. We discuss the factors that affected training of the system. The experiments demonstrate that the post-processing network needs to be designed with the output language and input characters in mind, and that, depending on the amount of training data, the maximum value of n for the n-grams modeled by the encoder should be kept small.
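
Tacotron's decoder attends over the encoded character sequence at every output step. The sketch below shows the mechanism in its simplest dot-product form (Tacotron itself uses an additive, content-based variant) purely to illustrate how an output step selects input characters:

    import numpy as np

    def attention_step(query, encoder_states):
        """One decoder step: score every encoder state against the query,
        softmax over time, and return the weighted context vector."""
        scores = encoder_states @ query          # (T,) alignment scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax over input positions
        return weights @ encoder_states          # (d,) context vector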

Effect of Glucose, Starch, Sucrose on the Protein Utilization In Weanling Rats (흰쥐에 있어 탄수화물의 종류에 따른 단백질의 체내 이용에 관한 연구)

  • Hong, Myoung-Bock;Kim, Mi-Kyung
    • Journal of Nutrition and Health / v.13 no.4 / pp.167-176 / 1980
  • This study was conducted to compare the effects of various types of dietary carbohydrates, fed with different levels of protein, on protein utilization in weanling rats. Sixty male Sprague-Dawley rats weighing $60{\pm}1.3$ grams were adapted for one week on a 77% starch-15% casein diet. The animals were then divided into 12 groups according to body weight and fed each experimental diet for two weeks. The carbohydrates used were glucose, starch, and sucrose, and the amounts of protein given were 0 g, 1 g, 3 g, and 5 g casein/day. The protein portion of the diet was fed in two separate feedings per day, while the non-protein portion was fed ad libitum. There appeared to be no significant difference in protein utilization among the different kinds of carbohydrate, but in P.E.R., N.P.U., organ weights, and carcass protein and lipid, the glucose groups tended to be slightly lower than the starch and sucrose groups. The larger the amount of casein given, the higher the body weight gain, F.E.R., organ weights, total carcass lipid, and amount of nitrogen retention. On the other hand, the larger the amount of casein given, the lower the intake of the non-protein portion, P.E.R., N.P.U., and percentage of nitrogen retention.

A Study on Amino Acid and Minerals Contained in Bastard Broth with Various Parts and Various Boiling Time (廣魚의 부위별, 가열시간에 따른 추출액중 아미노산과 무기질 함량에 관한 연구)

  • Kim, Eun-Kyung;Yum, Cho-Ahe
    • Korean journal of food and cookery science / v.6 no.2 / pp.15-26 / 1990
  • The material used for the experimental analyses and sensory evaluation in this thesis is eight bastard halibuts (廣魚): four were used as Sample A and the other four as Sample B. Sample A is the broth from 100 grams of flesh and spinal bones, boiled for (1) 15 minutes, (2) 30 minutes, (3) 60 minutes, and (4) 120 minutes. Sample B is the broth from 100 grams of head and spinal bones, boiled for the same durations. The nutrients analyzed are (1) free amino acids, (2) total N, and (3) minerals (Ca, P, Na, K, Zn). The results of the experimental analyses and sensory evaluation of the broth with various boiling times are as follows: (1) The total amounts of free amino acid and total N in the broth are greatest when boiled for 15 minutes, in both Sample A and Sample B. (2) The amounts of minerals in the broth increase as boiling time increases. (3) The sensory evaluation shows that the subjects prefer the taste of the stock boiled for 120 minutes for Sample A, but prefer the stock boiled for 15 minutes for Sample B.

Corpus-Based Ambiguity-Driven Learning of Context- Dependent Lexical Rules for Part-of-Speech Tagging (품사태킹을 위한 어휘문맥 의존규칙의 말뭉치기반 중의성주도 학습)

  • 이상주;류원호;김진동;임해창
    • Journal of KIISE: Software and Applications / v.26 no.1 / pp.178-178 / 1999
  • Most stochastic taggers cannot resolve some morphological ambiguities that can be resolved only by referring to lexical contexts, because they use only contextual probabilities based on tag n-grams and lexical probabilities. Existing lexical rules are effective for resolving such ambiguities because they can refer to lexical contexts. However, they have two limitations. One is that human experts tend to make erroneous rules because the rules are deterministic. The other is that acquiring the rules is hard and time-consuming because they must be written manually. In this paper, we propose context-dependent lexical rules, which are lexical rules based on the statistics of a tagged corpus, and an ambiguity-driven learning method, which automatically acquires the proposed rules from a tagged corpus. Using the proposed rules, the proposed tagger can partially annotate an unseen corpus with high accuracy because it is a kind of memorizing tagger that can annotate a training corpus with 100% accuracy. The proposed tagger is therefore useful for improving the accuracy of a stochastic tagger, and it is also effective for detecting and correcting tagging errors in a manually tagged corpus. Moreover, the experimental results show that the proposed method is also effective for English part-of-speech tagging.
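
A minimal sketch of how such rules might be harvested from a tagged corpus, keeping only near-deterministic lexical contexts (the (previous word, word) template, count threshold, and purity threshold here are illustrative; the paper's ambiguity-driven criterion is richer):

    from collections import Counter, defaultdict

    def acquire_lexical_rules(tagged_sents, min_count=3, min_purity=0.95):
        """Keep (previous word, word) contexts whose tag distribution in
        the training corpus is almost deterministic."""
        stats = defaultdict(Counter)
        for sent in tagged_sents:                       # sent: [(word, tag), ...]
            for (prev, _), (word, tag) in zip(sent, sent[1:]):
                stats[(prev, word)][tag] += 1
        rules = {}
        for ctx, tags in stats.items():
            best_tag, best_count = tags.most_common(1)[0]
            total = sum(tags.values())
            if total >= min_count and best_count / total >= min_purity:
                rules[ctx] = best_tag                   # reliable lexical-context rule
        return rules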

Automatic Word Spacing Using Raw Corpus and a Morphological Analyzer (말뭉치와 형태소 분석기를 활용한 한국어 자동 띄어쓰기)

  • Shim, Kwangseob
    • Journal of KIISE / v.42 no.1 / pp.68-75 / 2015
  • This paper proposes a method for the automatic word spacing of unsegmented Korean sentences. In our method, eojeol unigrams are used for word spacing, as opposed to the syllable n-grams used in previous studies. The use of a Korean morphological analyzer is limited to the correction of typical word spacing errors. Our method gives 98.06% syllable accuracy and 94.15% eojeol recall when 10-fold cross-validated on the Sejong corpus, after filtering out non-hangul eojeols. The processing rate is 250K eojeols, or 1.8 MB, per second on a typical personal computer. Syllable accuracy and eojeol recall are related to the size of the eojeol dictionary; better performance is expected with a bigger corpus.
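
Restoring spaces with eojeol unigrams amounts to finding the most probable segmentation of the input string, which a Viterbi-style dynamic program computes. A minimal sketch under add-one smoothing (the paper's morphological-analyzer correction step is omitted; eojeol_freq, total, and max_len are illustrative inputs):

    from math import log

    def space_sentence(text, eojeol_freq, total, max_len=8):
        """Most probable segmentation of an unsegmented string under an
        eojeol unigram model (add-one smoothed), via dynamic programming."""
        n = len(text)
        best = [(0.0, 0)] + [(float("-inf"), 0)] * n    # (log prob, back-pointer)
        for i in range(1, n + 1):
            for j in range(max(0, i - max_len), i):
                lp = best[j][0] + log((eojeol_freq.get(text[j:i], 0) + 1) / total)
                if lp > best[i][0]:
                    best[i] = (lp, j)
        pieces, i = [], n
        while i > 0:                                    # follow back-pointers
            j = best[i][1]
            pieces.append(text[j:i])
            i = j
        return " ".join(reversed(pieces))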

Classification Protein Subcellular Locations Using n-Gram Features (단백질 서열의 n-Gram 자질을 이용한 세포내 위치 예측)

  • Kim, Jinsuk
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.12-16 / 2007
  • The function of a protein is closely correlated with its subcellular location(s). Given a protein sequence, therefore, determining its subcellular location is a vitally important problem. We have developed a new prediction method for protein subcellular location(s) based on n-gram feature extraction and the k-nearest neighbor (kNN) classification algorithm. It classifies a protein sequence into one or more subcellular compartments based on the locations of the top k sequences that show the highest similarity weights against the input sequence. The similarity weight is a similarity measure determined by comparing n-gram features between two sequences. Currently our method extracts penta-grams as features of protein sequences, computes scores of the potential localization site(s) using the kNN algorithm, and finally presents the locations and their associated scores. We constructed a large-scale data set of protein sequences with known subcellular locations from the SWISS-PROT database; it contains 51,885 entries with one or more known subcellular locations. Our method shows very high prediction precision of about 93% on this data set and, compared with another method, also showed a comparable improvement on a test collection used in previous work.
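
A minimal sketch of the pipeline the abstract outlines: extract penta-gram features, compare them with a similarity weight (cosine here, as a stand-in for the paper's unspecified measure), and vote over the top-k neighbors:

    from collections import Counter

    def pentagrams(seq, n=5):
        """Overlapping n-gram counts of a protein sequence."""
        return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

    def similarity_weight(a, b):
        """Cosine similarity over shared n-gram counts."""
        shared = sum(c * b[g] for g, c in a.items() if g in b)
        norm = (sum(c * c for c in a.values()) *
                sum(c * c for c in b.values())) ** 0.5
        return shared / norm if norm else 0.0

    def knn_locations(query_seq, labeled, k=5):
        """labeled: list of (pentagram Counter, set of locations)."""
        feats = pentagrams(query_seq)
        top = sorted(labeled, key=lambda it: similarity_weight(feats, it[0]),
                     reverse=True)[:k]
        votes = Counter(loc for _, locs in top for loc in locs)
        return votes.most_common()                  # locations with scores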

Enhancement of a language model using two separate corpora of distinct characteristics

  • Cho, Sehyeong;Chung, Tae-Sun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.3 / pp.357-362 / 2004
  • Language models are essential in predicting the next word in a spoken sentence, thereby enhancing speech recognition accuracy, among other things. However, spoken language domains are too numerous, and developers therefore suffer from a lack of corpora of sufficient size. This paper proposes a method of combining two n-gram language models, one constructed from a very small corpus of the right domain of interest and the other from a large but less adequate corpus, resulting in a significantly enhanced language model. The method is based on the observation that a small corpus from the right domain has high-quality n-grams but a serious sparseness problem, while a large corpus from a different domain has more n-gram statistics but is incorrectly biased. In our approach, the two n-gram statistics are combined by extending the idea of Katz's backoff; we therefore call the result a dual-source backoff. We ran experiments with 3-gram language models constructed from newspaper corpora of several million to tens of millions of words, together with models from smaller broadcast news corpora. The target domain was broadcast news. We obtained a significant improvement (30%) by incorporating a small corpus about one thirtieth the size of the newspaper corpus.
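
In spirit, the dual-source backoff consults the small in-domain counts first and falls back to the large out-of-domain model when they are unreliable. A minimal sketch with raw count dictionaries (the fixed back-off weight alpha stands in for the properly normalized Katz back-off weights, and the count threshold is illustrative):

    def dual_source_prob(v, w, small_big, small_ctx, large_big, large_ctx,
                         threshold=1, alpha=0.4):
        """Bigram P(w | v): trust the in-domain count when it is reliable,
        otherwise back off to the out-of-domain estimate.  small_big[(v, w)]
        and small_ctx[v] are raw in-domain counts; likewise for large_*."""
        if small_big.get((v, w), 0) > threshold:
            return small_big[(v, w)] / small_ctx[v]
        if large_ctx.get(v, 0):
            return alpha * large_big.get((v, w), 0) / large_ctx[v]
        return 0.0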