• Title/Summary/Keyword: phoneme


A Parallel Speech Recognition Model on Distributed Memory Multiprocessors (분산 메모리 다중프로세서 환경에서의 병렬 음성인식 모델)

  • 정상화;김형순;박민욱;황병한
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.44-51
    • /
    • 1999
  • This paper presents a massively parallel computational model for the efficient integration of speech and natural language understanding. The phoneme model is based on a continuous Hidden Markov Model with context-dependent phonemes, and the language model is based on a knowledge-base approach. To construct the knowledge base, we adopt a hierarchically structured semantic network and a memory-based parsing technique that employs parallel marker-passing as an inference mechanism. Our parallel speech recognition algorithm is implemented on a multi-Transputer system using distributed-memory MIMD multiprocessors. Experimental results show that the parallel speech recognition system achieves better recognition accuracy than a word network-based speech recognition system. The recognition accuracy is further improved by applying code-phoneme statistics. In addition, speedup experiments demonstrate the possibility of constructing a real-time parallel speech recognition system.

  • PDF
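The HMM-based phoneme model above can be illustrated with a minimal Viterbi decoder. This is a toy discrete HMM, not the paper's continuous context-dependent model; the states, observation symbols, and probabilities are invented for illustration.

```python
# Minimal Viterbi decoder for a discrete HMM -- a toy stand-in for the
# continuous context-dependent phoneme HMMs described in the abstract.
# All states, symbols, and probabilities below are invented.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Two toy phoneme states emitting coarse acoustic symbols.
states = ["a", "t"]
start_p = {"a": 0.6, "t": 0.4}
trans_p = {"a": {"a": 0.7, "t": 0.3}, "t": {"a": 0.4, "t": 0.6}}
emit_p = {"a": {"voiced": 0.9, "burst": 0.1},
          "t": {"voiced": 0.2, "burst": 0.8}}

print(viterbi(["voiced", "burst", "voiced"], states, start_p, trans_p, emit_p))
```

A real recognizer replaces the discrete emission table with Gaussian mixture densities over acoustic features, but the dynamic-programming recursion is the same.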

Speech Recognition Error Compensation using MFCC and LPC Feature Extraction Method (MFCC와 LPC 특징 추출 방법을 이용한 음성 인식 오류 보정)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.11 no.6
    • /
    • pp.137-142
    • /
    • 2013
  • When feature extraction in a speech recognition system yields inaccurate vocabulary input, the result is a word that goes unrecognized or a similar phoneme that is recognized instead. Therefore, in this paper, we propose a speech recognition error correction method that uses phoneme similarity rates and reliability measures based on the characteristics of the phonemes. The phoneme similarity rate was obtained from a learning model of phonemes using the MFCC and LPC feature extraction methods, and was measured together with a reliability rate. By measuring the similar-phoneme rate and its reliability, we minimize errors in which words go unrecognized, and error compensation is performed on speech found to be erroneous during the recognition process. Applying the proposed system yielded a recognition rate of 98.3% and an error compensation rate of 95.5%.
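The LPC side of the feature extraction above can be sketched with the classic Levinson-Durbin recursion, which derives linear-prediction coefficients from a signal's autocorrelation. This is a generic textbook sketch, not the paper's actual front end; the test signal is a fabricated decaying sinusoid.

```python
# Levinson-Durbin recursion: LPC coefficients from autocorrelation.
# A generic sketch of the LPC feature-extraction step; the paper's
# front end and model training are not reproduced here.
import math

def autocorr(x, max_lag):
    return [sum(x[n] * x[n - k] for n in range(k, len(x)))
            for k in range(max_lag + 1)]

def lpc(x, order):
    """Return `order` LPC coefficients and the residual prediction error."""
    r = autocorr(x, order)
    a = [0.0] * (order + 1)
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this order.
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)
    return a[1:], err

# A toy voiced-like signal: a decaying sinusoid.
signal = [math.sin(0.3 * n) * (0.99 ** n) for n in range(400)]
coefs, residual = lpc(signal, order=4)
print([round(c, 3) for c in coefs], round(residual, 6))
```

For a nearly periodic signal like this, a low-order predictor cancels most of the energy, so the residual error is small relative to the signal energy.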

An ERP Study of the Perception of English High Front Vowels by Native Speakers of Korean and English (영어전설고모음 인식에 대한 ERP 실험연구: 한국인과 영어원어민을 대상으로)

  • Yun, Yungdo
    • Phonetics and Speech Sciences
    • /
    • v.5 no.3
    • /
    • pp.21-29
    • /
    • 2013
  • The mismatch negativity (MMN) is known to be a fronto-central negative component of the auditory event-related potential (ERP). Näätänen et al. (1997) and Winkler et al. (1999) argue that the MMN acts as a cue to phoneme perception in the ERP paradigm. In this study, a perception experiment based on an ERP paradigm was conducted to examine how Korean and American English speakers perceive the American English high front vowels. The study found that the MMN obtained from both Korean and American English speakers appeared around the same time after they heard the F1s of English high front vowels. However, when the same groups heard English words containing these vowels, the American English listeners' MMN appeared slightly earlier than the Korean listeners' MMN. These findings suggest that non-speech sounds, such as the F1s of vowels, may be processed similarly across speakers of different languages, whereas phonemes are processed differently: a native-language phoneme is processed faster than a non-native-language phoneme.

The Primitive Representation in Speech Perception: Phoneme or Distinctive Features (말지각의 기초표상: 음소 또는 변별자질)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.157-169
    • /
    • 2013
  • Using a target detection task, this study compared the processing automaticity of phonemes and features in spoken syllable stimuli to determine whether the primitive representation in speech perception is the phoneme or the distinctive feature. For this, we adapted for auditory stimuli the visual search task (Treisman et al., 1992) developed to investigate the processing of visual features (e.g., color, shape, or their conjunction). In our task, distinctive features (e.g., aspiration or coronal) corresponded to visual primitive features (e.g., color and shape), and phonemes (e.g., /tʰ/) to visual conjunctive features (e.g., colored shapes). Automaticity was measured by the set size effect, i.e., the increase in reaction time as the number of distracters increased. Three experiments were conducted, comparing the laryngeal features (experiment 1), the manner features (experiment 2), and the place features (experiment 3) with phonemes. The results showed that distinctive features are consistently processed faster and more automatically than phonemes. Additionally, there were differences in processing automaticity among the classes of distinctive features: the laryngeal features were the most automatic, the manner features moderately automatic, and the place features the least automatic. These results are consistent with previous studies (Bae et al., 2002; Bae, 2010) that showed a perceptual hierarchy of distinctive features.
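The set size effect used as the dependent measure above can be quantified as the slope of reaction time against distracter count. A minimal sketch with ordinary least squares follows; the reaction times are fabricated for illustration and are not the paper's data.

```python
# Quantify a set-size effect as the slope of mean reaction time (ms)
# against distracter count, via ordinary least squares.
# The reaction times below are fabricated for illustration only.

def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

set_sizes = [2, 4, 8, 16]
rt_feature = [420, 422, 425, 423]      # flat: automatic (parallel) search
rt_conjunction = [450, 490, 570, 730]  # steep: non-automatic (serial) search

print(round(slope(set_sizes, rt_feature), 1))      # near-zero slope
print(round(slope(set_sizes, rt_conjunction), 1))  # large slope
```

A near-zero slope indicates automatic, parallel processing (as found for distinctive features), while a steep slope indicates serial search (as found for phonemes).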

Prosodic Contour Generation for Korean Text-To-Speech System Using Artificial Neural Networks

  • Lim, Un-Cheon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2E
    • /
    • pp.43-50
    • /
    • 2009
  • To get more natural synthetic speech from a Korean TTS (Text-To-Speech) system, we have to know all the possible prosodic rules of spoken Korean. We should derive these rules from linguistic and phonetic information or from real speech. In general, all of these rules should be integrated into the prosody-generation algorithm of a TTS system. But such an algorithm cannot cover all the possible prosodic rules of a language and is not perfect, so the naturalness of the synthesized speech cannot be as good as we expect. ANNs (Artificial Neural Networks) can be trained to learn the prosodic rules of spoken Korean. To train and test the ANNs, we need to prepare the prosodic patterns of all the phonemic segments in a prosodic corpus. A prosodic corpus should include meaningful sentences that represent all the possible prosodic rules. Sentences in the corpus were made by picking series of words from a list of PB (Phonetically Balanced) isolated words. These sentences were read by speakers, recorded, and collected as a speech database. By analyzing the recorded speech, we can extract the prosodic pattern of each phoneme and assign these patterns as target and test patterns for the ANNs. The ANNs can learn prosody from natural speech and, when the phoneme strings of a sentence are given as input stimuli, generate the prosodic pattern of the central phonemic segment in each phoneme string as their output response.
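The input/output arrangement described above, phoneme context in, prosodic target for the central segment out, can be sketched as a tiny MLP forward pass. The phoneme inventory, window size, and weights below are invented; a real system would train the weights on prosodic patterns extracted from the recorded corpus.

```python
# Architecture sketch: an MLP mapping a window of phoneme identities to a
# prosodic target (e.g., F0 in Hz) for the central phoneme. The phoneme
# set, window size, and weights are invented for illustration; training
# on corpus-derived prosodic patterns is not shown.
import math
import random

PHONEMES = ["sil", "a", "i", "u", "k", "t", "n", "s"]  # toy inventory

def one_hot_window(window):
    """Concatenate one-hot vectors for each phoneme in the context window."""
    vec = []
    for ph in window:
        v = [0.0] * len(PHONEMES)
        v[PHONEMES.index(ph)] = 1.0
        vec.extend(v)
    return vec

def mlp_forward(x, w1, b1, w2, b2):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

random.seed(0)
in_dim, hid_dim, out_dim = 3 * len(PHONEMES), 8, 1  # 3-phoneme window -> F0
w1 = [[random.uniform(-0.5, 0.5) for _ in range(in_dim)]
      for _ in range(hid_dim)]
b1 = [0.0] * hid_dim
w2 = [[random.uniform(-0.5, 0.5) for _ in range(hid_dim)]
      for _ in range(out_dim)]
b2 = [120.0]  # output bias near a typical mean F0

x = one_hot_window(["k", "a", "n"])  # predict prosody for the central /a/
print(mlp_forward(x, w1, b1, w2, b2))
```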

Phoneme distribution and syllable structure of entry words in the CMU English Pronouncing Dictionary

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.11-16
    • /
    • 2016
  • This study explores the phoneme distribution and syllable structure of entry words in the CMU English Pronouncing Dictionary to provide phoneticians and linguists with fundamental phonetic data on English word components. Entry words in the dictionary file were syllabified using an R script and examined to obtain the following results: First, English words preferred consonants to vowels in their word components. In addition, monophthongs occurred much more frequently than diphthongs. When all consonants were categorized by manner and place, the distribution indicated the frequency order of stops, fricatives, and nasals according to manner and that of alveolars, bilabials and velars according to place. These results were comparable to the results obtained from the Buckeye Corpus (Yang, 2012). Second, from the analysis of syllable structure, two-syllable words were most favored, followed by three- and one-syllable words. Of the words in the dictionary, 92.7% consisted of one, two or three syllables. This result may be related to human memory or decoding time. Third, the English words tended to exhibit discord between onset and coda consonants and between adjacent vowels. Dissimilarity between the last onset and the first coda was found in 93.3% of the syllables, while 91.6% of the adjacent vowels were different. From the results above, the author concludes that an analysis of the phonetic symbols in a dictionary may lead to a deeper understanding of English word structures and components.
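The syllable-count analysis above relies on a property of CMUdict-style transcriptions: each vowel phone carries a stress digit (0, 1, or 2), so an entry's syllable count equals its number of vowel phones. A minimal sketch follows; the sample entries are in CMUdict format but typed in by hand, not read from the actual dictionary file, and the author's R script is not reproduced.

```python
# Syllable counting over CMUdict-style entries: count the phones that end
# in a stress digit. Sample entries are hand-typed in CMUdict format.
from collections import Counter

SAMPLE = """\
PHONEME  F OW1 N IY0 M
SYLLABLE  S IH1 L AH0 B AH0 L
CAT  K AE1 T
"""

def syllable_counts(dict_text):
    counts = {}
    for line in dict_text.splitlines():
        if not line or line.startswith(";;;"):  # skip blanks and comments
            continue
        word, *phones = line.split()
        counts[word] = sum(ph[-1].isdigit() for ph in phones)
    return counts

counts = syllable_counts(SAMPLE)
print(counts)                   # {'PHONEME': 2, 'SYLLABLE': 3, 'CAT': 1}
print(Counter(counts.values())) # distribution of words per syllable count
```

Tallying the resulting distribution over the full dictionary is what yields findings like the dominance of two-syllable words reported above.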

Phoneme Separation and Establishment of Time-Frequency Discriminative Pattern on Korean Syllables (음절신호의 음소 분리와 시간-주파수 판별 패턴의 설정)

  • 류광열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.12
    • /
    • pp.1324-1335
    • /
    • 1991
  • In this paper, phoneme separation and the establishment of discriminative patterns for Korean phonemes are studied experimentally. The separation uses parameters such as pitch extraction, the glottal peak pulse width of each pitch, speech duration, envelope, and amplitude bias. The first pitch is extracted from deviations of glottal peak and width, energy, and normalization of a bias at the top of the vowel envelope; adjacent pitches are then traced across the whole vowel. For vowels, a method that reduces gliding patterns and makes vowel distinction possible using only the second formant is proposed, and a shrunken pitch waveform that is independent of pitch length is estimated. For consonants, patterns of envelope, spectrum, and shrunken waveform are detected, along with a method of analysis based on the mutual relations among phonemes and the manners of articulation. Experimental results show discrimination rates of 90% for vowel phonemes, and 80% and 60% for initial and final consonants, respectively.

  • PDF
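Pitch extraction, the first step in the separation above, can be sketched with a common autocorrelation-based F0 estimator. This is a generic baseline, not the paper's glottal-peak method; the test tone is a synthetic 100 Hz sinusoid.

```python
# F0 estimation by autocorrelation peak picking -- a common baseline;
# the paper's glottal-peak-based extraction is more elaborate and is
# not reproduced here.
import math

def estimate_f0(x, sr, f_min=60.0, f_max=400.0):
    """Return the fundamental frequency of x in Hz."""
    lag_min = int(sr / f_max)
    lag_max = int(sr / f_min)
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(x[n] * x[n - lag] for n in range(lag, len(x)))
        if r > best_r:
            best_r, best_lag = r, lag
    return sr / best_lag

sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(800)]  # 100 Hz
print(round(estimate_f0(tone, sr)))  # -> 100
```

Once the first pitch period is fixed, adjacent periods can be traced by searching for the next autocorrelation peak near the previous lag, which mirrors the pitch-tracing idea in the abstract.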

Retrieval of Player Event in Golf Videos Using Spoken Content Analysis (음성정보 내용분석을 통한 골프 동영상에서의 선수별 이벤트 구간 검색)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.7
    • /
    • pp.674-679
    • /
    • 2009
  • This paper proposes a method of player event retrieval that combines two functions: detection of player names in speech information and detection of sound events in audio information from golf videos. The system consists of an indexing module and a retrieval module. At indexing time, audio segmentation and noise reduction are applied to the audio stream demultiplexed from the golf videos. The noise-reduced speech is then fed into a speech recognizer, which outputs spoken descriptors. The player names and sound events are indexed by these spoken descriptors. At search time, the text query is converted into phoneme sequences, and the lists for each query term are retrieved through a description matcher to identify full and partial phrase hits. For the retrieval of player names, this paper compares the results of word-based, phoneme-based, and hybrid approaches.
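The full-and-partial phrase matching above can be sketched as phoneme-sequence matching: the query is converted to phonemes and scored against indexed phoneme strings by their longest common contiguous run. The toy lexicon and index entries are invented; a real system would use recognizer output and a proper grapheme-to-phoneme module.

```python
# Phoneme-sequence matching for spoken-descriptor retrieval: score each
# indexed clip by the longest contiguous phoneme run it shares with the
# query. Lexicon and index entries below are invented for illustration.

LEXICON = {"tiger": ["T", "AY", "G", "ER"], "woods": ["W", "UH", "D", "Z"]}

def to_phonemes(query):
    phones = []
    for word in query.lower().split():
        phones.extend(LEXICON[word])
    return phones

def longest_common_run(a, b):
    """Length of the longest contiguous phoneme run shared by a and b."""
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

INDEX = {
    "clip_01": ["T", "AY", "G", "ER", "W", "UH", "D", "Z"],  # full hit
    "clip_02": ["T", "AY", "G", "ER", "Z"],                  # partial hit
    "clip_03": ["B", "ER", "D", "IY"],                       # miss
}

query = to_phonemes("tiger woods")
for clip, phones in INDEX.items():
    score = longest_common_run(query, phones) / len(query)
    print(clip, round(score, 2))
```

A score of 1.0 is a full phrase hit; intermediate scores flag partial hits, which is what lets phoneme-based retrieval find names the word recognizer missed.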

A Study of Keyword Spotting System Based on the Weight of Non-Keyword Model (비핵심어 모델의 가중치 기반 핵심어 검출 성능 향상에 관한 연구)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.381-388
    • /
    • 2003
  • This paper presents a method of assigning weights to garbage-class clustering and the filler model to improve the performance of a keyword spotting system, together with a time-saving method for the dialogue speech processing system that calculates keyword transition probabilities through speech analysis of task-domain users. The point of the method is to group phonemes by phonetic similarity, which is more effective for sensing similar phoneme groups than individual phonemes, and the paper suggests five phoneme groups obtained from the analysis of speech sentences used in Korean morphology and in a stock-trading speech processing system. In addition, task-specific filler-model weights are added to the phoneme groups, and the keyword transition probability contained in consecutive speech sentences is calculated and applied to the system to save processing time. To evaluate the performance of the suggested system, a corpus of 4,970 sentences used in the task domains was built, and a test was conducted with five subjects in their twenties and thirties. As a result, the FOM with weights on the proposed five phoneme groups reaches 85%, compared with 88.5% for the seven phoneme groups of Yapanel [1] and 89.8% for LVCSR. In computation time, the proposed method takes 0.70 seconds versus 0.72 seconds for the seven phoneme groups. Lastly, a time-saving test confirmed that 0.04 to 0.07 seconds are saved when the keyword transition probability is applied.
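The keyword transition probabilities used above can be sketched as bigram maximum-likelihood estimates over keyword sequences from task-domain sentences. The keyword sequences below are fabricated for illustration; the paper derived its statistics from analysis of stock-trading dialogue.

```python
# Estimating keyword transition probabilities: bigram MLE over keyword
# sequences observed in task-domain sentences. The sequences below are
# fabricated for illustration only.
from collections import Counter

sessions = [
    ["buy", "samsung", "hundred", "shares"],
    ["sell", "samsung", "fifty", "shares"],
    ["buy", "hyundai", "hundred", "shares"],
]

bigrams = Counter()
unigrams = Counter()
for seq in sessions:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[(prev, nxt)] += 1
        unigrams[prev] += 1

def transition_p(prev, nxt):
    """MLE of P(nxt | prev) over the observed keyword sequences."""
    return bigrams[(prev, nxt)] / unigrams[prev] if unigrams[prev] else 0.0

print(transition_p("buy", "samsung"))     # 1 of 2 'buy' transitions
print(transition_p("hundred", "shares"))  # always followed by 'shares'
```

High-probability transitions let the spotter narrow the set of keywords it searches for next, which is the source of the processing-time savings reported above.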