• Title/Summary/Keyword: In Word Probability


A Study on the Behavioral Structure of Production Activity through Statistical Analysis Models - Focused on the Probability Distributions of PERT and Queueing Theory -

  • Kim, Hong Jae
    • Journal of Korean Society for Quality Management / v.19 no.2 / pp.145-157 / 1991
  • This study pursues the behavioral structure of production activity through the statistical models used in PERT and queueing theory. The characteristics of human production behavior can be comprehended through several related attributes of probability and statistics: the Poisson, Beta, and exponential distributions, and P. S. Laplace's natural probability. Human production behavior is related and regressed to these attributes across many divisions. A progressive numerical understanding of essential human behavior underpins the application of practical behavioral standards in production work and operations.
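The full text is not shown here, but the Beta distribution the abstract cites is the basis of the standard PERT three-point estimate; a minimal worked sketch (textbook formula, not taken from this paper):

```python
# Classic PERT three-point estimate, which models each activity
# duration with a Beta distribution.
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Return (expected duration, variance) under the PERT Beta assumption."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Example: an activity estimated at 2 (best), 4 (likely), 8 (worst) days.
e, v = pert_estimate(2, 4, 8)
print(f"expected = {e:.2f} days, variance = {v:.2f}")  # expected = 4.33, variance = 1.00
```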


Performance Improvement of Multilayer Perceptrons with Increased Output Nodes

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.9 no.1 / pp.123-130 / 2009
  • When MLPs (multilayer perceptrons) are applied to pattern classification problems, one output node is generally allocated to each class, and the index of that node denotes the class. In contrast, this paper proposes increasing the number of output nodes per class to improve MLP performance. As theoretical background, the misclassification probability is derived for two-class problems with additional outputs, under the assumption that the two classes are equiprobable and the outputs are uniformly distributed within each class. Simulations on 50-word isolated-word recognition show the effectiveness of the method.
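The abstract does not state the decision rule, so the sketch below assumes a plausible one: group the extra output nodes by class and pick the class with the highest mean activation.

```python
import numpy as np

# Hypothetical decision rule for an MLP with `k` output nodes per class:
# average each class's group of outputs and take the argmax.
# The paper's exact rule may differ.
def classify(outputs: np.ndarray, n_classes: int, k: int) -> int:
    """outputs: shape (n_classes * k,) raw MLP outputs."""
    groups = outputs.reshape(n_classes, k)   # k nodes per class
    return int(np.argmax(groups.mean(axis=1)))

# Example: 2 classes, 3 output nodes each.
y = np.array([0.2, 0.4, 0.3,   # class 0 nodes
              0.7, 0.6, 0.8])  # class 1 nodes
print(classify(y, n_classes=2, k=3))  # -> 1
```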

Statistical Model-Based Voice Activity Detection Using Spatial Cues for Dual-Channel Noisy Speech Recognition

  • Shin, Min-Hwa;Park, Ji-Hun;Kim, Hong-Kook;Lee, Yeon-Woo;Lee, Seong-Ro
    • Phonetics and Speech Sciences / v.2 no.3 / pp.141-148 / 2010
  • In this paper, voice activity detection (VAD) for dual-channel noisy speech recognition is proposed in which spatial cues are employed. In the proposed method, a probability model for speech presence/absence is constructed from spatial cues obtained from the dual-channel input signal, and speech activity intervals are detected through this model. In particular, the spatial cues consist of the interaural time differences and interaural level differences of the dual-channel speech signals, and the probability model for speech presence/absence is based on a Gaussian kernel density. To evaluate the proposed VAD method, speech recognition is performed only on the speech intervals it detects, and its performance is compared with that of several methods: an SNR-based method, a direction-of-arrival (DOA) based method, and a phase-vector based method. The speech recognition experiments show that the proposed method outperforms the conventional methods, with relative word error rate reductions of 11.68%, 41.92%, and 10.15% over the SNR-based, DOA-based, and phase-vector based methods, respectively.
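A minimal sketch of the statistical model the abstract describes, assuming per-frame [ITD, ILD] feature vectors and a simple likelihood-ratio decision (the paper's training procedure and threshold are not given here):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Model the spatial cues of speech and non-speech frames with Gaussian
# kernel densities, then flag frames whose speech-to-nonspeech likelihood
# ratio exceeds a threshold. Training data and threshold are placeholders.
def train_vad(speech_cues, noise_cues):
    """cues: arrays of shape (2, n_frames) holding [ITD; ILD] per frame."""
    return gaussian_kde(speech_cues), gaussian_kde(noise_cues)

def detect(frames, kde_speech, kde_noise, threshold=1.0):
    ratio = kde_speech(frames) / (kde_noise(frames) + 1e-12)
    return ratio > threshold   # True where speech is judged present
```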


Clustering in Tied Mixture HMM Using Homogeneous Centroid Neural Network

  • Park Dong-Chul;Kim Woo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.9C / pp.853-858 / 2006
  • TMHMM (Tied Mixture Hidden Markov Model) is an important approach to reducing the number of free parameters in speech recognition. However, the model suffers a degradation in recognition accuracy due to its GPDF (Gaussian probability density function) clustering error. This paper proposes a clustering algorithm, HCNN (Homogeneous Centroid Neural Network), for clustering the acoustic feature vectors in a TMHMM. The HCNN uses a heterogeneous distance measure to allocate more code vectors in the heterogeneous areas where the probability densities of different states overlap. When applied to Korean isolated-digit recognition, the HCNN reduces the error rate by 9.39% over CNN clustering and by 14.63% over traditional K-means clustering.
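The HCNN's exact update is not given in the abstract; the sketch below shows a generic centroid-neural-network style winner update, with the heterogeneous distance weighting indicated only schematically:

```python
import numpy as np

# Centroid-neural-network style update: the winning centroid moves toward
# each new vector by a shrinking step (1/(n+1)). The paper's HCNN further
# weights distances so that regions where different HMM states overlap
# receive more code vectors; that weighting is paper-specific and only
# hinted at via `overlap_weight` here.
def cnn_step(centroids, counts, x, overlap_weight=None):
    d = np.linalg.norm(centroids - x, axis=1)
    if overlap_weight is not None:       # heterogeneous distance (schematic)
        d = d * overlap_weight
    w = int(np.argmin(d))                # winner
    centroids[w] += (x - centroids[w]) / (counts[w] + 1)
    counts[w] += 1
    return w
```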

SNR of DPCM with the Property of Unequal Bit-Error Probability

  • Choi, Yun-Cheol;Park, Young-Goo;Moon, S.J.
    • Proceedings of the KIEE Conference / 1988.07a / pp.186-189 / 1988
  • In the transmission of DPCM signals, it is desirable to protect the more significant bits of a code word more strongly against errors than the less significant bits. The SNR of DPCM is examined for the case in which the bit error rates of the individual bits of the information word differ from one another. The examination points the way to better DPCM coding.
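A back-of-envelope version of the unequal bit-error analysis: with a uniform quantizer, flipping bit i shifts the decoded level by 2^i·Δ, so protecting the MSB removes most of the channel-error power. The numbers below are illustrative, not the paper's:

```python
# Mean-square channel error for a B-bit uniform quantizer when bit i
# flips independently with probability p[i]: roughly
# sum_i p[i] * (2**i * delta)**2. Standard analysis, not necessarily
# the paper's exact derivation.
def channel_error_power(p, delta):
    return sum(pi * (2**i * delta) ** 2 for i, pi in enumerate(p))

# Example: 4-bit word, MSB protected (lower error rate than the LSBs).
p_uniform   = [1e-3] * 4
p_protected = [1e-3, 1e-3, 1e-3, 1e-5]   # index 3 = MSB
print(channel_error_power(p_uniform, delta=1.0))    # 0.085
print(channel_error_power(p_protected, delta=1.0))  # ~0.0216
```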


Distortion of the Visual Working Memory Induced by Stroop Interference

  • Kim, Daegyu;Hyun, Joo-Seok
    • Korean Journal of Cognitive Science / v.26 no.1 / pp.27-51 / 2015
  • The present study tested the effect of a top-down influence on recall of the colors of Stroop words. Participants remembered the colors of 1, 2, 3, or 6 Stroop words. After a 1-second memory delay, they were asked to recall the color of a cued Stroop word by selecting its corresponding color on a color-wheel stimulus. A recall was counted as correct when the chosen color fell within ±45° of the exact location of the Stroop word's color on the color wheel, and as incorrect otherwise. Analyses of the frequency distribution of responses in the error trials showed that the probability of choosing the color named by the target Stroop word was higher than the probability of choosing any of the other five color names on the color wheel. Further analyses showed that increasing the number of Stroop words to manipulate memory load did not affect the probability of Stroop interference. These results indicate that top-down interference from the Stroop manipulation can systematically distort the stored representation in visual working memory.
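The ±45° criterion implies circular (wraparound) angular distance on the color wheel; a small illustration of that scoring, with hypothetical response values:

```python
# Angular error on a 360-degree color wheel must wrap around:
# 359 degrees and 1 degree differ by 2 degrees, not 358.
def angular_error(response_deg: float, target_deg: float) -> float:
    diff = abs(response_deg - target_deg) % 360
    return min(diff, 360 - diff)

def is_correct(response_deg, target_deg, tolerance=45.0):
    return angular_error(response_deg, target_deg) <= tolerance

print(is_correct(350, 10))  # True: only 20 degrees apart across the wrap
print(is_correct(100, 10))  # False: 90 degrees apart
```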

A Probabilistic Context Sensitive Rewriting Method for Effective Transliteration Variants Generation

  • Lee, Jae-Sung
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.73-83 / 2007
  • An information retrieval system based on exact matching needs preprocessing or query expansion to generate transliteration variants in order to find foreign-word transliteration variants in documents. This paper proposes an effective method for generating other transliteration variants from a given transliteration. Because naive rewriting of confusable characters produces too many false variants, the proposed method controls the generation priority by learning confusion patterns from real usage and calculating their probabilities. In particular, the left and right context of each pattern is considered, and local and global rewriting probabilities are calculated so that more probable variants are produced at an earlier stage. Experiments showed the method to be highly effective, achieving more than 80% recall within the top 20 generated variants on a transliteration-variant set collected from KT SET 2.0.
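A schematic of probability-ranked variant generation (the rewrite rules and probabilities below are invented for illustration; the paper learns context-sensitive ones from data):

```python
import heapq

# Apply character-rewriting rules, score each variant by the product of
# its rule probabilities, and expand the most probable variants first.
RULES = {"케": [("캐", 0.4)], "츠": [("쯔", 0.3)]}   # made-up rules

def generate(word, top_k=5):
    heap = [(-1.0, word)]          # max-heap via negated probability
    seen, out = {word}, []
    while heap and len(out) < top_k:
        neg_p, w = heapq.heappop(heap)
        out.append((w, -neg_p))    # original word emerges first, then variants
        for i, ch in enumerate(w):
            for sub, p in RULES.get(ch, []):
                v = w[:i] + sub + w[i+1:]
                if v not in seen:
                    seen.add(v)
                    heapq.heappush(heap, (neg_p * p, v))
    return out

print(generate("케이크"))  # ranked by rewriting probability
```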

Context-sensitive Spelling Error Correction using Eojeol N-gram

  • Kim, Minho;Kwon, Hyuk-Chul;Choi, Sungki
    • Journal of KIISE / v.41 no.12 / pp.1081-1089 / 2014
  • Context-sensitive spelling-error correction methods are largely classified into rule-based methods and statistical methods, the latter of which are preferred in research. Statistical methods treat context-sensitive spelling errors as word-sense disambiguation problems: a correction pair, consisting of a correction-target word and a replacement candidate, is disambiguated according to the context. This paper proposes integrating an eojeol (word-phrase) n-gram model into the conventional correction-pair model from this team's previous study in order to improve the probability model's performance. Two integration schemes are presented: one interpolates the sentence probabilities computed by the two models, and the other applies the two models sequentially. Both integrated models show relatively high accuracy and recall compared with the conventional models or with a model that uses the n-gram alone.
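The first integration scheme is plain linear interpolation of the two models' sentence probabilities; a sketch, with the component models stubbed out and the weight lam left as a tuning parameter:

```python
import math

# `model_a` (correction-pair model) and `model_b` (eojeol n-gram model)
# are stubs assumed to expose a .logprob(sentence) method.
def interpolated_logprob(sentence, model_a, model_b, lam=0.5):
    pa = math.exp(model_a.logprob(sentence))
    pb = math.exp(model_b.logprob(sentence))
    return math.log(lam * pa + (1.0 - lam) * pb)

def correct(sentence_candidates, model_a, model_b):
    # Pick the candidate sentence the interpolated model scores highest.
    return max(sentence_candidates,
               key=lambda s: interpolated_logprob(s, model_a, model_b))
```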

A Korean Homonym Disambiguation Model Based on Statistics Using Weights

  • 김준수;최호섭;옥철영
    • Journal of KIISE: Software and Applications / v.30 no.11 / pp.1112-1123 / 2003
  • WSD (word sense disambiguation) is one of the most difficult problems in Korean information processing. A Bayesian model using semantic information extracted from a definition corpus (1 million POS-tagged eojeol of Korean dictionary definitions) achieved an accuracy of 72.08% (nouns 78.12%, verbs 62.45%). This paper proposes a statistical WSD model using an NPH (new prior probability of homonym senses) and distance weights. We selected 46 homonyms (30 nouns, 16 verbs) that occur with high frequency in the definition corpus and evaluated the model on 47,977 contexts from the 21C Sejong Corpus (3.5 million POS-tagged eojeol). The WSD model using the NPH improves accuracy by 1.70% on average, and the model using both the NPH and distance weights by 2.01%.
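A schematic naive-Bayes scorer with distance weights, in which context words nearer the ambiguous word count more; the 1/(1+d) weighting and the priors are illustrative, and the paper's NPH prior is not reproduced here:

```python
import math

# Score a sense by its (log) prior plus distance-weighted log conditional
# probabilities of the surrounding context words.
def score_sense(sense, context, target_idx, prior, cond_prob):
    s = math.log(prior[sense])
    for i, w in enumerate(context):
        if i == target_idx:
            continue
        weight = 1.0 / (1 + abs(i - target_idx))   # distance weight (assumed)
        s += weight * math.log(cond_prob[sense].get(w, 1e-7))
    return s

def disambiguate(senses, context, target_idx, prior, cond_prob):
    return max(senses,
               key=lambda k: score_sense(k, context, target_idx, prior, cond_prob))
```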

A Recognition Time Reduction Algorithm for Large-Vocabulary Speech Recognition

  • Koo, Jun-Mo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.10 no.3 / pp.31-36 / 1991
  • We propose an efficient pre-classification algorithm that extracts candidate words in order to reduce the recognition time of a large-vocabulary recognition system, together with spectral and temporal smoothing of the observation probabilities to improve its classification performance. The proposed algorithm computes a coarse likelihood score for each word in the lexicon from the observation probabilities of the speech spectra and the duration information of the recognition units. With this approach we reduced the amount of computation by 74%, with only slight degradation of recognition accuracy, in an 1160-word recognition system based on phoneme-level HMMs. We also observed that the proposed coarse likelihood score is a good estimator of the likelihood score computed by the Viterbi algorithm.
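The two-stage structure the abstract describes, with the scoring functions stubbed out (a sketch of the control flow only, not the paper's scoring):

```python
# Score every word cheaply, keep only the top candidates, and run the
# expensive Viterbi decoder on those.
def recognize(utterance, lexicon, coarse_score, viterbi_score, n_candidates=50):
    ranked = sorted(lexicon, key=lambda w: coarse_score(utterance, w),
                    reverse=True)
    candidates = ranked[:n_candidates]             # cheap first pass
    return max(candidates,
               key=lambda w: viterbi_score(utterance, w))  # exact second pass
```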
