• Title/Summary/Keyword: vowel recognition


Japanese Vowel Sound Classification Using Fuzzy Inference System

  • Phitakwinai, Suwannee;Sawada, Hideyuki;Auephanwiriyakul, Sansanee;Theera-Umpon, Nipon
    • Journal of the Korea Convergence Society
    • /
    • v.5 no.1
    • /
    • pp.35-41
    • /
    • 2014
  • Automatic speech recognition is a popular research problem, and many groups are working on it for different languages, including Japanese. Vowel recognition is an important part of a Japanese speech recognition system. In this research, a vowel classification system based on the Mamdani fuzzy inference system was developed. We tested the system on a blind test data set collected from one male native Japanese speaker and four male non-native Japanese speakers; none of the blind-test subjects appeared in the training data set. The classification rate on the training data set is 95.0%. In the speaker-independent experiments, the classification rate for the native speaker is around 70.0%, whereas that for the non-native speakers is around 80.5%.
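
The Mamdani-style inference named in the abstract can be illustrated with a minimal sketch. The feature choice (first two formants F1/F2 in Hz) and the triangular membership ranges below are illustrative assumptions, not values from the paper.

```python
# Minimal Mamdani-style fuzzy vowel classifier (assumed features: F1/F2).

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Rough textbook formant ranges for the five Japanese vowels (male voice);
# placeholders, not the paper's tuned membership functions.
RULES = {
    "a": ((600, 750, 900), (1000, 1200, 1500)),
    "i": ((200, 300, 400), (2000, 2300, 2600)),
    "u": ((250, 350, 450), (1100, 1300, 1500)),
    "e": ((400, 500, 600), (1700, 1900, 2100)),
    "o": ((400, 500, 650), (700, 850, 1000)),
}

def classify(f1, f2):
    """Fire each rule with min (Mamdani AND) and pick the strongest vowel."""
    strengths = {v: min(tri(f1, *r1), tri(f2, *r2))
                 for v, (r1, r2) in RULES.items()}
    return max(strengths, key=strengths.get)
```

A full Mamdani system would also defuzzify an output set; for a pure classification task, taking the vowel with the largest rule firing strength is the usual shortcut.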

Speech Recognition for the Korean Vowel 'ㅣ' based on Waveform-feature Extraction and Neural-network Learning (파형 특징 추출과 신경망 학습 기반 모음 'ㅣ' 음성 인식)

  • Rho, Wonbin;Lee, Jongwoo;Lee, Jaewon
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.2
    • /
    • pp.69-76
    • /
    • 2016
  • With the recent increase of interest in IoT in almost all areas of industry, computing technologies are increasingly applied in human environments such as houses, buildings, cars, and streets; in these IoT environments, speech recognition is widely accepted as a means of HCI. Existing server-based speech recognition techniques are typically fast and show quite high recognition rates; however, an internet connection is necessary, and complicated server computing is required because a voice is recognized in units of words stored in server databases. Building on our previous work on speech recognition algorithms for the Korean phonemic vowels 'ㅏ' and 'ㅓ', this paper presents speech recognition algorithms for the Korean phonemic vowel 'ㅣ'. We observed that almost all vocal waveform patterns for 'ㅣ' are unique when compared with the patterns of the 'ㅏ' and 'ㅓ' waveforms. We propose specific waveform patterns for the Korean vowel 'ㅣ' and the corresponding recognition algorithms, and we present experimental results showing that adding neural-network learning to our algorithm increases the recognition success rate for the vowel 'ㅣ'. With our algorithms, 90% or more of vocal expressions of the vowel 'ㅣ' were successfully recognized.
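
The abstract does not specify the network, so as a hedged stand-in for the neural-network learning step, here is a single logistic neuron trained by stochastic gradient descent on binary pattern-match features; the feature encoding, learning rate, and epoch count are all assumptions.

```python
import math

def train_logistic(data, labels, epochs=1000, lr=0.5):
    """SGD training of a single sigmoid neuron: p = sigma(w.x + b)."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Threshold the sigmoid output at 0.5."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5
```

On any linearly separable toy feature set (e.g. "both pattern checks fired"), this converges to a perfect separator, which is enough to show how learning can sharpen a rule-based waveform match.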

Speech Recognition of the Korean Vowel 'ㅡ' based on Neural Network Learning of Bulk Indicators (벌크 지표의 신경망 학습에 기반한 한국어 모음 'ㅡ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.11
    • /
    • pp.617-624
    • /
    • 2017
  • Speech recognition is now one of the most widely used technologies in HCI. Many applications in which speech recognition may be used (such as home automation, automatic speech translation, and car navigation) are under active development, and demand for speech recognition systems in mobile environments is rapidly increasing. This paper presents a method for instant recognition of the Korean vowel 'ㅡ', as part of a Korean speech recognition system. The proposed method uses bulk indicators, which are calculated in the time domain rather than the frequency domain, so the computational cost of recognition can be reduced. The bulk indicators representing the predominant sequence patterns of the vowel 'ㅡ' are learned by neural networks, and final recognition decisions are made by the trained networks. Experimental results show that the proposed method achieves 88.7% recognition accuracy and a recognition speed of 0.74 msec per syllable.
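
The exact definition of the paper's bulk indicators is not given in the abstract; frame energy and zero-crossing rate are used below as stand-in time-domain indicators, only to show why such features avoid the cost of a frequency transform.

```python
# Hedged sketch of per-frame time-domain indicators (stand-ins for the
# paper's own "bulk indicators"); no FFT is needed, just sums over frames.

def frame_indicators(samples, frame_len=160):
    """Split a waveform into non-overlapping frames and compute
    (mean energy, zero-crossing rate) for each frame."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        return_val = (energy, zcr)
        feats.append(return_val)
    return feats
```

Sequences of such indicator tuples can then be fed to a small network, as the abstract describes.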

Syllable-Type-Based Phoneme Weighting Techniques for Listening Intelligibility in Noisy Environments (소음 환경에서의 명료한 청취를 위한 음절형태 기반 음소 가중 기술)

  • Lee, Young Ho;Joo, Jong Han;Choi, Seung Ho
    • Phonetics and Speech Sciences
    • /
    • v.6 no.3
    • /
    • pp.165-169
    • /
    • 2014
  • The intelligibility of speech transmitted to listeners can be significantly degraded by ambient noise in environments such as auditoriums and train stations, and noise-masked speech is hard for listeners to recognize. Among the conventional methods for improving speech intelligibility, the consonant-vowel intensity ratio (CVR) approach reinforces the power of all consonants. However, excessively reinforced consonants do not aid recognition, and only some consonants are improved by the CVR approach. In this paper, we propose a corrective weighting (CW) approach that reinforces consonant power differently according to Korean syllable type, i.e., consonant-vowel-consonant (CVC), consonant-vowel (CV), and vowel-consonant (VC), considering the level of listeners' recognition. The proposed CW approach was evaluated with a subjective Comparison Category Rating (CCR) test (ITU-T P.800) and showed better performance: 0.18 and 0.24 higher than the unprocessed speech and the CVR approach, respectively.
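
Syllable-type-dependent consonant reinforcement can be sketched as a per-type gain applied to the consonant portion of the signal; the dB values below are placeholders, not the paper's tuned weights.

```python
# Illustrative per-syllable-type consonant boosts in dB (assumed values).
GAIN_DB = {"CVC": 6.0, "CV": 4.0, "VC": 3.0}

def weight_consonant(samples, syllable_type):
    """Scale consonant samples by a gain chosen from the syllable type."""
    gain = 10 ** (GAIN_DB[syllable_type] / 20)  # dB -> linear amplitude
    return [s * gain for s in samples]
```

The point of the CW idea is visible in the table: a CVC syllable's consonants get a different boost than a CV or VC syllable's, instead of one global CVR gain.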

A Study on Formants of Vowels for Speaker Recognition (화자 인식을 위한 모음의 포만트 연구)

  • Ahn Byoung-seob;Shin Jiyoung;Kang Sunmee
    • MALSORI
    • /
    • no.51
    • /
    • pp.1-16
    • /
    • 2004
  • The aim of this paper is to analyze vowels in imitated and disguised voices and to find phonetic features that remain invariant for a speaker. We examined the formants of the monophthongs /a, u, i, o, ɯ, ɛ, ʌ/. The results are as follows: (1) speakers change their vocal tract features; (2) the vowels /a, ɛ, i/ appear suitable for speaker recognition, since their acoustic features remain invariant under voice modulation; (3) F1 does not change as easily as the higher formants; (4) F3-F2 appears to be a useful constituent for speaker identification in the vowels /a/ and /ɛ/, and F4-F2 in the vowel /i/; (5) according to the F-ratio, differences between formants were more useful for speaker recognition than the individual formants of a vowel.

  • PDF
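
Finding (4) above maps directly to a small lookup: F3-F2 for /a/ and /ɛ/, F4-F2 for /i/. The formant values passed in are assumed to come from a separate measurement step, which is outside this sketch.

```python
# Which formant difference the study found speaker-discriminative per vowel.
DISCRIMINATIVE = {"a": ("F3", "F2"), "ɛ": ("F3", "F2"), "i": ("F4", "F2")}

def speaker_feature(vowel, formants):
    """Return the discriminative formant difference in Hz for a vowel,
    given measured formants, e.g. {"F1": 700, "F2": 1200, ...}."""
    hi, lo = DISCRIMINATIVE[vowel]
    return formants[hi] - formants[lo]
```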

An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel length (모음길이 비율에 따른 발화속도 보상을 이용한 한국어 음성인식 성능향상)

  • 박준배;김태준;최성용;이정현
    • Proceedings of the IEEK Conference
    • /
    • 2003.11b
    • /
    • pp.195-198
    • /
    • 2003
  • The accuracy of an automatic speech recognition system depends on background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, speaking-rate variation, both inter-speaker and intra-speaker, is a serious cause of mis-recognition. In this paper, we propose a method that compensates for speaking rate using the ratio of each vowel's length in a phrase. First, the number of feature vectors in the phrase is estimated from the speaking rate. Second, the estimated number of feature vectors is assigned to each syllable of the phrase according to the ratio of its vowel length. Finally, feature vectors are extracted from each syllable according to the number assigned to it. As a result, recognition accuracy was improved by the proposed speaking-rate compensation method.

  • PDF
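
The described allocation step, distributing an estimated total number of feature vectors across syllables in proportion to each syllable's vowel length, can be sketched as follows; the rounding scheme is an assumption.

```python
def allocate_vectors(total, vowel_lengths):
    """Split `total` feature vectors over syllables proportionally to
    their vowel lengths, then absorb rounding drift into the last count."""
    whole = sum(vowel_lengths)
    counts = [round(total * v / whole) for v in vowel_lengths]
    counts[-1] += total - sum(counts)  # ensure the counts sum to total
    return counts
```

Feature extraction then runs per syllable with its assigned count, which is what compensates fast or slow phrases to a common representation length.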

An Analysis of Formants Extracted from Emotional Speech and Acoustical Implications for the Emotion Recognition System and Speech Recognition System (독일어 감정음성에서 추출한 포먼트의 분석 및 감정인식 시스템과 음성인식 시스템에 대한 음향적 의미)

  • Yi, So-Pae
    • Phonetics and Speech Sciences
    • /
    • v.3 no.1
    • /
    • pp.45-50
    • /
    • 2011
  • The formant structure of speech associated with five emotions (anger, fear, happiness, neutral, sadness) was analysed. The acoustic separability of the vowels (or emotions) associated with a specific emotion (or vowel) was estimated using the F-ratio. According to the results, neutral showed the highest separability of vowels, followed by anger, happiness, fear, and sadness, in descending order. The vowel /A/ showed the highest separability of emotions, followed by /U/, /O/, /I/, and /E/, in descending order. The acoustic results were interpreted in the context of previous articulatory and perceptual studies, and suggestions were made for improving the performance of automatic emotion recognition and speech recognition systems.

  • PDF
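
The F-ratio used here to estimate separability is the ratio of between-class to within-class variance; a minimal one-dimensional version:

```python
def f_ratio(groups):
    """Between-class over within-class variance for 1-D feature groups
    (the classic one-way ANOVA F statistic)."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(x for g in groups for x in g) / n
    between = sum(len(g) * (m - grand) ** 2
                  for g, m in zip(groups, means)) / (k - 1)
    within = sum((x - m) ** 2
                 for g, m in zip(groups, means) for x in g) / (n - k)
    return between / within
```

A larger value means the classes (vowels or emotions, depending on the grouping) are better separated along that acoustic dimension.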

A Study on the Hangul Recognition Using Hough Transform and Subgraph Pattern (Hough Transform과 부분 그래프 패턴을 이용한 한글 인식에 관한 연구)

  • 구하성;박길철
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.3 no.1
    • /
    • pp.185-196
    • /
    • 1999
  • In this paper, a new off-line recognition system using subgraph patterns and a neural network is proposed. After the input characters are thinned, position normalization, which also eliminates noise, is performed. Then, as the first step of recognition, circular elements are extracted and recognized. From the subblock Hough transform (HT), spatial feature points such as endpoints, flex points, and bridge points are extracted, and a subgraph pattern is formed from the relations among them. A region where a vowel can exist is allocated, candidate points of the vowel are extracted, and the vowel is then recognized using the subgraph pattern dictionary. The same method is applied to extract horizontal vowels, which are recognized through a simple structural analysis. To verify the recognition subgraphs, experiments were performed on the most frequently used Myngjo and Gothic fonts for printed characters, and on handwritten characters. The recognition rate was 98.9% for Gothic characters, 98.2% for Myngjo characters, and 92.5% for handwritten characters; the overall rate for mixed handwritten and printed characters in multi-font recognition was 94.8%.

  • PDF
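
The Hough transform (HT) step underlying the stroke features can be sketched as a vote accumulator in (θ, ρ) space over the glyph's foreground pixels; the discretization parameters are illustrative.

```python
import math

def hough_votes(points, n_theta=180):
    """Accumulate (theta-index, rho) votes for a set of foreground pixels;
    the strongest bin corresponds to the most prominent straight stroke."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc
```

Picking `max(acc, key=acc.get)` yields the dominant line; applying this per subblock gives the localized stroke evidence from which feature points are derived.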

Speech Recognition of the Korean Vowel 'ㅐ', Based on Time Domain Sequence Patterns (시간 영역 시퀀스 패턴에 기반한 한국어 모음 'ㅐ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.11
    • /
    • pp.713-720
    • /
    • 2015
  • As computing and network technologies are further developed, communication equipment continues to become smaller, and as a result, mobility is now a predominant feature of current technology. Therefore, demand for speech recognition systems in mobile environments is rapidly increasing. This paper proposes a novel method to recognize the Korean vowel 'ㅐ' as a part of a phoneme-based Korean speech recognition system. The proposed method works by analyzing a sequence of patterns in the time domain instead of the frequency domain, and consequently, its use can markedly reduce computational costs. Three algorithms are presented to detect typical sequence patterns of 'ㅐ', and these are combined to produce the final decision. The results of the experiment show that the proposed method has an accuracy of 89.1% in recognizing the vowel 'ㅐ'.
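
The abstract combines three pattern detectors into a final decision; a majority vote is one plausible combination rule (assumed here, not confirmed by the abstract).

```python
def combined_decision(detections):
    """Final decision from the three detectors' boolean outputs:
    accept if at least two of the three patterns were found."""
    return sum(detections) >= 2
```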

Korean vowel recognition in noise using auditory model

  • Shim, Jae-Seong;Lee, Jae-Hyuk;Yoon, Tae-Sung;Beack, Seung-Hwa;Park, Sang-Hui
Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1988.10b
    • /
    • pp.1037-1040
    • /
    • 1988
  • In this study, we performed recognition tests on Korean vowels using a peripheral auditory model. For an objective comparison, the recognition tests were also performed with LPC cepstrum coefficients extracted from the same data. The same speech data were then mixed quantitatively with Gaussian white noise and the tests were repeated. We thus verified that the auditory model is robust to noise.

  • PDF
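
Mixing Gaussian white noise into speech "quantitatively" usually means at a target signal-to-noise ratio; a hedged sketch (the paper's actual SNR levels are not given in the abstract):

```python
import math
import random

def add_noise(samples, snr_db, seed=0):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    rng = random.Random(seed)
    sig_pow = sum(s * s for s in samples) / len(samples)
    sigma = math.sqrt(sig_pow / (10 ** (snr_db / 10)))
    return [s + rng.gauss(0, sigma) for s in samples]
```

Running the same recognizer on clean and on noise-mixed copies at several SNRs is the standard way to compare noise robustness between the auditory-model and LPC-cepstrum front ends.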