• Title/Summary/Keyword: Phoneme

Search Results: 458

Improvement of Recognition Speed for Real-time Address Speech Recognition (실시간 주소 음성인식을 위한 인식 시스템의 인식속도 개선)

  • Hwang Cheol-Jun;Oh Se-Jin;Kim Bum-Koog;Jung Ho-Youl;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.74-77
    • /
    • 1999
  • In this paper, we propose a method that applies a new variable pruning threshold to improve the recognition speed of the address speech recognition system developed in our laboratory, and we confirm its effectiveness through experiments. The conventional variable pruning threshold repeatedly decreases a fixed-size threshold once a set number of frames has elapsed, so the search still visits unnecessary parts of the search space. The method proposed here also lowers the threshold by a fixed amount after the first interval, but in subsequent frames it varies the threshold according to the number of candidates that remain to be searched, so the search space is reduced effectively (a minimal sketch of this idea follows this entry). To verify the effectiveness of the proposed method, we applied it to the Korean address input system developed in our laboratory. This system uses 48 continuous-HMM phoneme-like units (PLUs) as the basic recognition units, employs maximum a posteriori probability estimation (MAP) to minimize the degradation of recognition performance caused by changes in the usage environment, and uses the one-pass dynamic programming (OPDP) algorithm for recognition. In recognition experiments on 75 connected address phrases spoken by three male speakers, the fixed pruning threshold gave an average recognition rate of 96.0% with a recognition time of 5.26 s, and the conventional variable threshold gave 96.0% with 5.1 s, whereas the new variable pruning threshold reduced the recognition time to 4.34 s with no loss in recognition rate, i.e., reductions of 0.92 s and 0.76 s, respectively, confirming the effectiveness of the proposed method.

  • PDF
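
The abstract does not give the exact pruning schedule, so the following is only a minimal sketch of the stated idea, assuming a frame-synchronous OPDP-style search: use a fixed beam during a warm-up period, then tighten or relax the beam according to how many candidates survive. All names and constants are hypothetical.

```python
class AdaptiveBeam:
    """Variable pruning threshold for a frame-synchronous (OPDP-style) search.

    For the first `warmup` frames a fixed beam is used; afterwards the beam
    is tightened when too many hypotheses survive and relaxed when too few
    do, so the search space shrinks without starving the best path.
    """

    def __init__(self, base_beam=200.0, warmup=10,
                 target_active=500, tighten=0.9, relax=1.1):
        self.beam = base_beam          # current beam width (log domain)
        self.warmup = warmup           # frames that keep the fixed beam
        self.target = target_active    # desired number of live candidates
        self.tighten, self.relax = tighten, relax

    def prune(self, hyps, frame_idx):
        """hyps: dict mapping search state -> accumulated log score."""
        best = max(hyps.values())
        if frame_idx >= self.warmup:
            n_active = sum(1 for s in hyps.values() if s >= best - self.beam)
            # Candidate-driven adaptation: too many survivors -> tighter
            # beam, too few -> wider beam.
            self.beam *= self.tighten if n_active > self.target else self.relax
        return {st: sc for st, sc in hyps.items() if sc >= best - self.beam}
```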

Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.12
    • /
    • pp.83-92
    • /
    • 2011
  • Typographical errors caused by the author's mistyping occur frequently in documents prepared with word processors, in contrast to conventional publications. In such online documents, the most common orthographic errors are spelling errors that result from hitting keys adjacent to the intended ones. Typical spelling checkers detect and correct these errors using a morphological analyzer: the analyzer checks the well-formedness of input words, and every word it rejects is regarded as misspelled. However, if the morphological analyzer accepts a mistyped word, that word is treated as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that the previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency. If the words generated by deletion, exchange, and transposition operations on each phoneme of a low-frequency word appear in the list of high-frequency words, some of them are considered the correctly spelled forms. Heuristic rules are also presented to reduce the number of candidates. The proposed method can detect some semantic errors rather than syntactic ones, and is useful for scoring candidates.
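
A minimal sketch of the detection step, assuming a character-level approximation (the paper applies the deletion, exchange, and transposition operations per phoneme, i.e., per jamo, of Korean words); the thresholds and helper names are illustrative.

```python
def edit_candidates(word, alphabet):
    """All words reachable by one deletion, one exchange (substitution), or
    one transposition -- the three operations named in the abstract."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    exchanges = {a + c + b[1:] for a, b in splits if b for c in alphabet}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    return (deletes | exchanges | transposes) - {word}

def suspected_typos(freq, low=2, high=10):
    """Flag rare words that are one edit away from a frequent word in the
    same document; the frequent neighbours are the correction candidates.

    freq: dict mapping word -> occurrence count within the document.
    """
    alphabet = {ch for w in freq for ch in w}
    frequent = {w for w, n in freq.items() if n >= high}
    report = {}
    for word, n in freq.items():
        if n <= low:
            hits = edit_candidates(word, alphabet) & frequent
            if hits:
                report[word] = sorted(hits)
    return report
```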

The Design of Keyword Spotting System based on Auditory Phonetical Knowledge-Based Phonetic Value Classification (청음 음성학적 지식에 기반한 음가분류에 의한 핵심어 검출 시스템 구현)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.169-178
    • /
    • 2003
  • This study addresses two issues: the classification of phone-like units (PLUs), the foundation of Korean large-vocabulary speech recognition, and the effectiveness of the Chiljongseong (7 final consonants) and Paljongseong (8 final consonants) systems of Korean. PLUs classify phonemes phonetically according to the place and manner of articulation, and about 50 PLUs are commonly used in Korean speech recognition. Here, auditory phonetic knowledge was applied to the classification to yield 45 PLUs: the vowels 'ㅔ, ㅐ' were merged into the PLU [ee], 'ㅒ, ㅖ' into [ye], and 'ㅚ, ㅙ, ㅞ' into [we]. Second, the Chiljongseong system of the draft for the unified spelling system, which is currently in use, and the Paljongseonggajokyong of the Hunminjeongeum Haerye were examined. Whether the phonemes 'ㄷ' and 'ㅅ' have the same phonetic value in syllable-final position in Korean has long been debated in the academic community. In this study, the transition stages of Korean final consonants were investigated, and both Chiljongseong and Paljongseonggajokyong were applied to speech recognition to verify their effectiveness. The experiments comprised isolated-word recognition and continuous speech recognition. For the isolated-word test, PBW452 was used: about 50 men and women, divided into 5 groups, each vocalized 50 words. For the continuous speech recognition experiment, aimed at a stock-exchange system, a sentence corpus of 71 stock-exchange sentences and a speech corpus of those sentences were collected; 5 men and women each vocalized every sentence twice. The results show that using Paljongseonggajokyong for the final consonants raised recognition performance by about 1.45% on average, and applying the auditory-phonetic PLUs together with Paljongseonggajokyong raised it by 1.5% to 2.02% on average. In the continuous speech recognition experiment, recognition performance was about 1% to 2% higher than with the existing 49 or 56 PLUs.
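
The vowel merges reported in the abstract reduce the PLU inventory to 45 units and can be written as a simple mapping; a sketch using the abstract's own labels (the function name is hypothetical):

```python
# Vowel merges from the abstract: auditorily indistinguishable vowels
# share one phone-like unit (PLU), shrinking the inventory to 45 units.
VOWEL_PLU = {
    "ㅔ": "ee", "ㅐ": "ee",              # merged as [ee]
    "ㅒ": "ye", "ㅖ": "ye",              # merged as [ye]
    "ㅚ": "we", "ㅙ": "we", "ㅞ": "we",  # merged as [we]
}

def to_plu(jamo_seq):
    """Map a sequence of Korean jamo to PLU labels, collapsing the merged
    vowels; all other jamo pass through unchanged."""
    return [VOWEL_PLU.get(j, j) for j in jamo_seq]
```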

Coarticulation Model of Hangul Visual Speech for Lip Animation (입술 애니메이션을 위한 한글 발음의 동시조음 모델)

  • Gong, Gwang-Sik;Kim, Chang-Heon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1031-1041
    • /
    • 1999
  • The existing lip animation methods for Hangul define a few representative lip shapes for the phonemes and animate the lips by interpolating between them. However, the real motion of the lips during articulation is neither a linear nor a simple non-linear function, so generating intermediate motion by interpolation cannot reproduce the lip movement of a phoneme effectively; and because coarticulation is not considered, the lip movements that vary between phonemes cannot be represented either. In this paper we present a new coarticulation model for natural lip animation of Hangul. Using two video cameras, we film the speaker's lips during articulation and extract lip control parameters. Each control parameter is defined as a dominance function based on Löfqvist's speech production gesture theory; the dominance function approximates the real lip motion of a phoneme during its articulation and is used when the lip animation is generated. The dominance functions are combined through blending functions and a demi-syllable-based Hangul composition rule, yielding coarticulated Hangul pronunciation. Therefore our coarticulation model approximates real lip motion more closely than the existing interpolation-based methods and produces motion that accounts for coarticulation.
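
The abstract does not give the exact form of the dominance and blending functions, so this is only a small numeric sketch in the common negative-exponential (Cohen-Massaro) form that builds on Löfqvist's gesture theory; all constants are illustrative.

```python
import math

def dominance(t, center, alpha=1.0, theta=4.0, c=1.0):
    """Negative-exponential dominance function: a phoneme's influence on a
    lip parameter peaks at its temporal center and decays on both sides.
    alpha/theta/c are illustrative shape constants."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blend(t, segments):
    """Blending function: the lip parameter at time t is the dominance-
    weighted average of the phonemes' target values.

    segments: list of (target_value, center_time) pairs.
    """
    weights = [dominance(t, center) for _, center in segments]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, segments)) / total

# Example: lip opening for a wide /a/ followed by a rounded /u/; the
# trajectory moves smoothly between the two targets (coarticulation).
trajectory = [blend(i / 10.0, [(0.9, 0.3), (0.2, 0.7)]) for i in range(11)]
```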

Analysis of Acoustic Characteristics of Vowel and Consonants Production Study on Speech Proficiency in Esophageal Speech (식도발성의 숙련 정도에 따른 모음의 음향학적 특징과 자음 산출에 대한 연구)

  • Choi, Seong-Hee;Choi, Hong-Shik;Kim, Han-Soo;Lim, Sung-Eun;Lee, Sung-Eun;Pyo, Hwa-Young
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.7-27
    • /
    • 2003
  • Esophageal speech uses esophageal air for phonation. Fluent esophageal speakers readily intake air during oral communication, whereas unskilled esophageal speakers have difficulty swallowing enough air. The purpose of this study was to investigate differences in the acoustic characteristics of vowel and consonant production according to proficiency in esophageal speech. Subjects were 13 normal male speakers and 13 male esophageal speakers (5 unskilled, 8 skilled) aged 50 to 70 years. The stimuli were the sustained vowel /a/ and 36 meaningless two-syllable words; the 18 consonants used were /k, n, t, m, p, s, c, $c^h$, $k^h$, $t^h$, $p^h$, h, l, k', t', p', s', c'/. Fundamental frequency (Fx), jitter, shimmer, HNR, and MPT were measured by electroglottography using Lx Speech Studio (Laryngograph Ltd, London, UK); a sketch of the jitter and shimmer computations follows this entry. The 36 meaningless words produced by the esophageal speakers were presented to 3 speech-language pathologists, who phonetically transcribed the responses. Fx, jitter, and HNR differed significantly between skilled and unskilled esophageal speakers (p<.05). Considering manner of articulation, ANOVA showed significant differences between the two proficiency groups: in the unskilled group, glides were confused with other phoneme classes most often and affricates were the most intelligible, whereas in the skilled group fricatives produced the most confusions and nasals were the most intelligible. For place of articulation, the glottal /h/ was the most confused consonant in both groups; bilabials were the most intelligible in the skilled group and velars in the unskilled group. For syllable structure, 'CV+V' caused more confusion in the skilled group, while the unskilled group showed similar confusion for both structures. In unskilled esophageal speech, the significantly different Fx, jitter, and HNR values of the vowel and the frequent confusion of liquids and nasals can be attributed to unstable, improper contact of the neoglottis as the vibratory source, insufficiency of the phonatory air supply, and the higher motoric demand on the remaining articulators due to the morphological characteristics of the vocal tract after laryngectomy.

  • PDF
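
For reference, standard local definitions of two of the measured parameters, computed from cycle-to-cycle pitch periods and peak amplitudes; this is a generic sketch, not the Lx Speech Studio implementation.

```python
import math

def jitter_percent(periods):
    """Local jitter (%): mean absolute difference between consecutive pitch
    periods relative to the mean period. High jitter is consistent with the
    unstable neoglottal vibration described for unskilled speakers."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_db(amplitudes):
    """Local shimmer (dB): mean absolute log-ratio of consecutive cycle
    peak amplitudes."""
    ratios = [abs(20.0 * math.log10(a / b))
              for a, b in zip(amplitudes, amplitudes[1:])]
    return sum(ratios) / len(ratios)
```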

Creation and labeling of multiple phonotopic maps using a hierarchical self-organizing classifier (계층적 자기조직화 분류기를 이용한 다수 음성자판의 생성과 레이블링)

  • Chung, Dam;Lee, Kee-Cheol;Byun, Young-Tai
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.3
    • /
    • pp.600-611
    • /
    • 1996
  • Recently, neural network-based speech recognition has been studied to exploit the adaptivity and learnability of neural network models. However, conventional neural network models have difficulty with coarticulation processing and with detecting the boundaries of similar phonemes in Korean speech. Moreover, when a single phonotopic map is used, learning time may increase dramatically and inaccuracies may arise, because a homogeneous learning and recognition method must be applied to heterogeneous data. In this paper, a neural-net typewriter is designed using a hierarchical self-organizing classifier (HSOC), and the related algorithms are presented. During learning, the HSOC distributes phoneme data over hierarchically structured multiple phonotopic maps using Kohonen's self-organizing feature maps (SOFM); a minimal SOFM sketch follows this entry. We present and evaluate algorithms for deciding the number of maps, the map sizes, the selection and placement of phonemes per map, and an appropriate learning and preprocessing method per map. If the maps were divided according to a priori linguistic knowledge, acquiring that knowledge and applying it (e.g., processing extended phonemes) would be difficult; in contrast, our HSOC has the advantage that multiple phonotopic maps suited to the given input data organize themselves. The resulting three Korean phonotopic maps are optimally labeled, have their own optimal preprocessing schemes, and conform to conventional linguistic knowledge.

  • PDF
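
The building block of the HSOC is Kohonen's self-organizing feature map; a minimal sketch of one map's training loop follows (grid size, rates, and schedule are illustrative, and the hierarchical stacking is omitted).

```python
import numpy as np

def train_sofm(data, rows=8, cols=8, epochs=20, lr0=0.5, radius0=4.0):
    """Minimal Kohonen self-organizing feature map, the unit the paper
    stacks hierarchically (one map per self-organized phoneme group).

    data: (n_samples, n_features) array of speech feature vectors.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=(rows, cols, data.shape[1]))   # node weights
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1).astype(float)
    for epoch in range(epochs):
        frac = epoch / epochs
        lr, radius = lr0 * (1 - frac), max(radius0 * (1 - frac), 0.5)
        for x in data:
            # Best-matching unit = node whose weight is closest to x.
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull nodes near the BMU toward x, weighted by grid distance.
            g = np.exp(-np.linalg.norm(grid - np.array(bmu), axis=-1) ** 2
                       / (2 * radius ** 2))
            w += lr * g[..., None] * (x - w)
    return w
```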

Improvement of Naturalness for a HMM-based Korean TTS using the prosodic boundary information (운율경계정보를 이용한 HMM기반 한국어 TTS 자연성 향상 연구)

  • Lim, Gi-Jeong;Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.9
    • /
    • pp.75-84
    • /
    • 2012
  • HMM-based text-to-speech systems generally utilize context-dependent tri-phone units from a large-corpus speech DB to enhance the synthetic speech. To downsize the DB, acoustically similar tri-phone units are clustered by a decision tree using context-dependent information. That information includes prosodic features as well as the phoneme sequence, because the naturalness of synthetic speech depends heavily on prosody such as pauses, intonation patterns, and segmental durations. However, if the prosodic information is complicated, many context-dependent phonemes have no examples in the training data, and clustering then yields smoothed features that generate unnatural synthetic speech. In this paper, instead of complicated prosodic information, we propose three simple prosodic boundary types (rising, falling, and monotonic tone) and the corresponding decision-tree questions to improve naturalness. Experimental results show that the proposed method improves the naturalness of an HMM-based Korean TTS and achieves high MOS in perception tests.
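
A sketch of how the three boundary types might enter the context labels and tree questions; the HTS-style label format and question syntax here are assumptions, not the paper's exact format.

```python
# The paper's three prosodic boundary types.
BOUNDARY_TONES = ("rise", "fall", "mono")

def context_label(left, center, right, boundary_tone):
    """Tri-phone context label carrying only the simplified boundary tone
    instead of a rich prosody specification (format illustrative)."""
    assert boundary_tone in BOUNDARY_TONES
    return f"{left}-{center}+{right}/B:{boundary_tone}"

# Decision-tree questions over that label: each one splits the clustered
# tri-phone states by boundary tone, keeping the question set small enough
# that every context still has training examples.
QUESTIONS = {f"QS_Boundary_{t}": f"*/B:{t}" for t in BOUNDARY_TONES}
```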

Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.509-514
    • /
    • 2020
  • A conventional TTS system consists of several modules, including text preprocessing, parsing, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and synthesized speech generation. A deep learning TTS system, in contrast, is composed of a Text2Mel stage that generates a spectrogram from text and a vocoder that synthesizes the speech signal from the spectrogram. In this paper, toward an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel stage and introduce WaveNet, WaveRNN, and WaveGlow as vocoders, implementing them to verify and compare their performance. Experimental results show that WaveNet achieves the highest MOS and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to WaveNet with a model of several tens of megabytes, but it also cannot run in real time. WaveGlow can run in real time, but its model is several gigabytes in size and its MOS is the worst of the three. From these results, the paper presents reference criteria for selecting the appropriate method for a given hardware environment when deploying a TTS system.
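
The reported trade-offs can be read as selection criteria; in the sketch below only WaveNet's ~50x real-time factor and the size orders of magnitude come from the abstract, and the remaining numbers are placeholders.

```python
# MOS rank, approximate model size, and real-time factor (synthesis time /
# audio duration). Only WaveNet's ~50x RTF and the size orders of magnitude
# are stated in the abstract; the other numbers are placeholders.
VOCODERS = {
    "WaveNet":  {"mos_rank": 1, "size_mb": 300,  "rtf": 50.0},
    "WaveRNN":  {"mos_rank": 2, "size_mb": 50,   "rtf": 5.0},
    "WaveGlow": {"mos_rank": 3, "size_mb": 3000, "rtf": 0.5},
}

def pick_vocoder(max_size_mb, need_realtime):
    """Best-MOS vocoder that fits the hardware constraints, or None."""
    fits = [(v["mos_rank"], name) for name, v in VOCODERS.items()
            if v["size_mb"] <= max_size_mb
            and (not need_realtime or v["rtf"] <= 1.0)]
    return min(fits)[1] if fits else None

print(pick_vocoder(max_size_mb=500, need_realtime=False))  # -> "WaveNet"
```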

Speech Animation Synthesis based on a Korean Co-articulation Model (한국어 동시조음 모델에 기반한 스피치 애니메이션 생성)

  • Jang, Minjung;Jung, Sunjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.49-59
    • /
    • 2020
  • In this paper, we propose a speech animation synthesis method specialized for Korean through a rule-based coarticulation model. Speech animation is widely used in cultural industries such as movies, animation, and games that require natural and realistic motion. Because audio-driven speech animation techniques have mainly been developed for English, however, the animation results for domestic content are often visually unnatural: at worst, a voice actor's dubbing plays with no mouth motion at all, and at best with an unsynchronized loop of a few simple mouth shapes. Language-independent speech animation models exist, but they do not yet ensure the quality required for domestic content production. We therefore propose a natural speech animation synthesis method, driven by input audio and text, that reflects the linguistic characteristics of Korean. Reflecting the fact that vowels mostly determine the mouth shape in Korean, we define a coarticulation model that separates the lips and the tongue, solving the earlier problems of lip distortion and the occasional loss of some phonemes' characteristics. The model also reflects differences in prosodic features for improved dynamics in the speech animation. Through user studies, we verify that the proposed model synthesizes natural speech animation.
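
A toy sketch of the lip/tongue separation, with the mouth shape driven by the syllable's vowel and the tongue by the consonant; every name and target value here is hypothetical.

```python
# Vowels dominate the Korean mouth shape, so the lip channel tracks the
# vowel while the tongue channel carries consonant cues (values invented).
LIP_TARGETS = {
    "a": {"open": 0.9, "round": 0.1},
    "u": {"open": 0.3, "round": 0.9},
    "i": {"open": 0.2, "round": 0.0},
}
TONGUE_TARGETS = {"n": {"tip_up": 1.0}, "g": {"back_up": 1.0}}

def frame_pose(vowel, consonant=None):
    """Compose one animation frame: lips from the vowel, tongue (when it is
    visible) from the consonant, so consonant cues no longer distort lips."""
    pose = {"lips": LIP_TARGETS.get(vowel, {"open": 0.5, "round": 0.2})}
    if consonant in TONGUE_TARGETS:
        pose["tongue"] = TONGUE_TARGETS[consonant]
    return pose
```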

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol;Lee, Kyoung-Rok;Kim, Jin-Young;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.380-386
    • /
    • 2002
  • Conventional post-processing, such as the confidence measure (CM) proposed by Rahim, calculates each phone's CM from the likelihood between the phoneme model and its anti-model, and then obtains the word CM by averaging the phone-level CMs [1]. With this conventional method, the CMs of some keywords are very low and those keywords are usually rejected. The reason is that the statistics of the phone-level CMs are not consistent: phone-level CMs have a different probability density function (pdf) for each phone, especially for each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach transforms the CM pdf of each tri-phone to a common pdf under the assumption that the CM pdfs are Gaussian. For evaluation we use a common keyword spotting system in which context-dependent HMMs model keyword utterances and context-independent HMMs model non-keyword utterances. The experimental results show that the proposed NCM reduces the FAR (false alarm rate) from 0.44 to 0.33 FA/KW/HR (false alarms per keyword per hour) at an MDR of about 8%, a 25% improvement in FAR.
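
A minimal sketch of the normalization under the paper's Gaussian assumption: standardize each tri-phone's raw CM with statistics estimated on a development set, so all phones share one pdf before averaging. Names are illustrative.

```python
from statistics import mean, stdev

def fit_ncm_stats(cm_samples):
    """Per-tri-phone Gaussian statistics (mean, std) of the raw confidence
    measure, estimated on a development set.

    cm_samples: dict mapping tri-phone -> list of raw CM values.
    """
    return {ph: (mean(v), stdev(v)) for ph, v in cm_samples.items()}

def word_ncm(phone_cms, stats):
    """Normalized confidence of a keyword hypothesis: standardize each
    phone's CM with its own tri-phone statistics so every phone shares the
    same pdf, then average -- the fix for inconsistent phone-level CMs.

    phone_cms: list of (tri_phone, raw_cm) pairs for the hypothesized word.
    """
    z = [(cm - stats[ph][0]) / stats[ph][1] for ph, cm in phone_cms]
    return sum(z) / len(z)
```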