• Title/Summary/Keyword: Vowel classification


Nonlinear Interaction between Consonant and Vowel Features in Korean Syllable Perception (한국어 단음절에서 자음과 모음 자질의 비선형적 지각)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences / v.1 no.4 / pp.29-38 / 2009
  • This study investigated the interaction between consonants and vowels in Korean syllable perception using a speeded classification task (Garner, 1978). Experiment 1 examined whether listeners analytically perceive the component phonemes in CV monosyllables when classification is based on a component phoneme (a consonant or a vowel), and observed a significant redundancy gain and a Garner interference effect. These results imply that the perception of the component phonemes in a CV syllable is not linear. Experiment 2 further examined the relation between consonants and vowels at a subphonemic level by comparing classification times based on glottal features (aspirated and lax), place of articulation features (labial and coronal), and vowel features (front and back). All feature classifications showed significant but asymmetric interference effects: glottal feature-based classification showed the least interference, vowel feature-based classification showed moderate interference, and place of articulation feature-based classification showed the most. These results show that glottal features are relatively independent of vowels, while place features are more dependent on vowels in syllable perception. To examine the three-way interaction among glottal, place of articulation, and vowel features, Experiment 3 used a modified Garner task. Its outcome indicated that glottal consonant features are independent of both place of articulation and vowel features, whereas place of articulation features are dependent on glottal and vowel features. These results were interpreted as showing that speech perception is not abstract and discrete but nonlinear, and that the perception of features corresponds to the hierarchical organization of articulatory features proposed in nonlinear phonology (Clements, 1991; Browman and Goldstein, 1989).
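The redundancy gain and Garner interference measured in these experiments can be illustrated with a minimal sketch. The reaction-time values below are hypothetical, purely for illustration, not the paper's data:

```python
# Sketch of how redundancy gain and Garner interference are computed
# from mean reaction times (RTs) in a speeded classification task.
# The RT values passed in below are illustrative, not from the paper.

def garner_effects(rt_control, rt_correlated, rt_orthogonal):
    """Return (redundancy_gain, garner_interference) in ms.

    rt_control:    irrelevant dimension held constant (baseline)
    rt_correlated: irrelevant dimension covaries with the relevant one
    rt_orthogonal: irrelevant dimension varies independently (filtering)
    """
    redundancy_gain = rt_control - rt_correlated   # faster when dimensions covary
    interference = rt_orthogonal - rt_control      # slower when irrelevant dim varies
    return redundancy_gain, interference

gain, interference = garner_effects(rt_control=520.0,
                                    rt_correlated=495.0,
                                    rt_orthogonal=560.0)
print(gain, interference)  # 25.0 40.0
```

A positive value on both measures, as the sketch produces, is the pattern the paper reports as evidence of nonlinear (integral) perception of the two dimensions.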


Japanese Vowel Sound Classification Using Fuzzy Inference System

  • Phitakwinai, Suwannee;Sawada, Hideyuki;Auephanwiriyakul, Sansanee;Theera-Umpon, Nipon
    • Journal of the Korea Convergence Society / v.5 no.1 / pp.35-41 / 2014
  • Automatic speech recognition is a popular research problem, and many research groups work in this field for different languages, including Japanese. Vowel recognition is an important component of a Japanese speech recognition system. In this research, a vowel classification system based on the Mamdani fuzzy inference system was developed. We tested the system on a blind test data set collected from one male native Japanese speaker and four male non-native Japanese speakers; none of the subjects in the blind test set appeared in the training set. The classification rate on the training data set was 95.0%. In the speaker-independent experiments, the classification rate was around 70.0% for the native speaker and around 80.5% for the non-native speakers.
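A minimal sketch of fuzzy-rule vowel classification in the spirit of the system described above. The choice of the first two formants (F1, F2) as inputs, the membership-function parameters, and the three-vowel rule base are all illustrative assumptions; the paper's actual features and rules are not shown in the abstract:

```python
# Minimal fuzzy-inference sketch for vowel classification.
# Inputs (F1, F2 in Hz) and all membership parameters are assumptions
# for illustration, not the paper's actual design.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_vowel(f1, f2):
    """Fire one illustrative rule per vowel (max-min inference) and pick the strongest."""
    rules = {
        # vowel: (F1 membership params, F2 membership params) -- illustrative values
        "a": ((600, 800, 1000), (1000, 1300, 1600)),
        "i": ((200, 300, 450), (2000, 2300, 2600)),
        "u": ((200, 350, 500), (600, 900, 1200)),
    }
    strengths = {v: min(tri(f1, *p1), tri(f2, *p2)) for v, (p1, p2) in rules.items()}
    return max(strengths, key=strengths.get)

print(classify_vowel(300, 2300))  # "i"
```

Note that a full Mamdani system aggregates rule consequents and defuzzifies (e.g. by centroid); since the output here is a discrete class label, the sketch simply takes the strongest-firing rule.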

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.59-64 / 2015
  • In this paper, we show the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in the BCI field has led to the development of a number of useful applications, such as robot control, game interfaces, exoskeleton limbs, and so on. Imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, but there are some problems in implementing such a system. In a previous paper, we addressed some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although that work required extension to multi-class classification problems. In view of this, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN algorithm, a deep learning algorithm, for multi-class vowel classification, and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, /u/. For the experiment, we obtained 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. The eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. In the analysis, we compared the classification results of a back-propagation artificial neural network (BP-ANN) with those of the DBN. The BP-ANN achieved 52.04% and the DBN 87.96%, meaning the DBN performed 35.92 percentage points better in multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
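The feature-extraction step named in the abstract (eigenvalues of the covariance matrix of multi-channel EEG) can be sketched as follows; the array shapes, the random data, and the descending sort order are assumptions for illustration:

```python
# Sketch of the feature extraction described above: the eigenvalues of the
# covariance matrix of multi-channel EEG serve as the feature vector.
# Shapes and the random stand-in data are assumptions for illustration.
import numpy as np

def eeg_eigen_features(eeg):
    """eeg: (channels, samples) array -> eigenvalues of the channel covariance."""
    cov = np.cov(eeg)                      # (channels, channels) covariance matrix
    eigvals = np.linalg.eigvalsh(cov)      # real eigenvalues (covariance is symmetric)
    return np.sort(eigvals)[::-1]          # descending order, one value per channel

# Stand-in for one vowel epoch: 32 channels, 1000 samples of noise
rng = np.random.default_rng(0)
features = eeg_eigen_features(rng.standard_normal((32, 1000)))
print(features.shape)  # (32,)
```

This yields a fixed-length feature vector (one value per channel) regardless of epoch length, which is what makes it convenient as input to a classifier such as the DBN or BP-ANN.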

Speaker Identification Based on Vowel Classification and Vector Quantization (모음 인식과 벡터 양자화를 이용한 화자 인식)

  • Lim, Chang-Heon;Lee, Hwang-Soo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.8 no.4 / pp.65-73 / 1989
  • In this paper, we propose a text-independent speaker identification algorithm based on VQ (vector quantization) and vowel classification, and study its performance in comparison with a conventional speaker identification algorithm using VQ. The proposed algorithm is composed of three processes: vowel segmentation, vowel recognition, and average distortion calculation. Vowel segmentation is performed automatically using the RMS energy, BTR (Back-to-Total cavity volume Ratio), and SFBR (Signed Front-to-Back maximum area Ratio) extracted from the input speech signal. When the input speech signal is noisy, particularly when the SNR is around 20 dB, the proposed algorithm performs better than the reference algorithm provided correct vowel segmentation is done. The same result is obtained when noisy telephone speech is used as the input.
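The VQ part of such a system can be sketched as follows: a codebook is trained per speaker, and a test utterance is attributed to the speaker whose codebook yields the lowest average distortion. The k-means-style codebook training, the 2-D toy features, and the speaker names are illustrative assumptions, not the paper's design:

```python
# Sketch of VQ-based speaker identification: one codebook per speaker,
# identification by minimum average distortion. Toy 2-D features and a
# simple k-means stand in for the paper's actual front end and training.
import numpy as np

def train_codebook(features, k=4, iters=20, seed=0):
    """Simple k-means codebook: k codewords fitted to the feature vectors."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest codeword, then recompute centroids
        d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook

def avg_distortion(features, codebook):
    """Mean distance from each vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    return d.min(axis=1).mean()

def identify(test_features, codebooks):
    """codebooks: dict speaker -> codebook; pick the lowest-distortion speaker."""
    return min(codebooks, key=lambda s: avg_distortion(test_features, codebooks[s]))

# Toy data: two "speakers" whose features cluster in different regions
rng = np.random.default_rng(1)
spk_a = rng.standard_normal((200, 2))
spk_b = rng.standard_normal((200, 2)) + 5.0
codebooks = {"A": train_codebook(spk_a), "B": train_codebook(spk_b)}
test_utt = rng.standard_normal((50, 2)) + 5.0
print(identify(test_utt, codebooks))  # "B"
```

The paper's contribution is to restrict this distortion computation to automatically segmented vowel frames, which is what improves robustness in noise.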


A Study on the Classification of the Korean Consonants in the VCV Speech Chain (VCV 연쇄음성상에 존재하는 한국어 자음의 분류에 관한 연구)

  • 최윤석;김기석;김원준;황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.39 no.6 / pp.607-615 / 1990
  • In this paper, I propose experimental models to classify the consonants in the Vowel-Consonant-Vowel (VCV) speech chain into four phonemic groups: nasals, liquids, plosives, and the others. To classify fuzzy patterns such as speech, it is necessary to analyze the distribution of acoustic features over many training data. The classification rules are polynomial functions of up to fourth order obtained by regression analysis, which contribute collectively to the result. The final result shows a success rate of about 87% on data spoken by one male speaker.
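Classification with regression-fitted polynomial functions, as described above, can be sketched as one-vs-rest polynomial regression on class-indicator targets. The single scalar acoustic feature, the synthetic data, and the two-class setup are assumptions for illustration:

```python
# Sketch of classification with 4th-order polynomial discriminant functions
# fitted by regression (one polynomial per phonemic group, one-vs-rest).
# The 1-D feature and synthetic data are assumptions for illustration.
import numpy as np

def fit_discriminants(x, labels, classes, order=4):
    """Fit one polynomial per class to a 0/1 class-indicator target."""
    return {c: np.polyfit(x, (labels == c).astype(float), order) for c in classes}

def classify(x_new, polys):
    """Assign the class whose discriminant polynomial scores highest."""
    return max(polys, key=lambda c: np.polyval(polys[c], x_new))

# Synthetic 1-D feature: "nasal" tokens near 0, "plosive" tokens near 3
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(3.0, 0.5, 100)])
labels = np.array(["nasal"] * 100 + ["plosive"] * 100)
polys = fit_discriminants(x, labels, ["nasal", "plosive"])
print(classify(0.0, polys), classify(3.0, polys))
```

Each fitted polynomial acts as a soft membership score for its group, and the groups' scores "contribute collectively" through the arg-max decision.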

The Recognition of Printed HANGUL Character (인쇄체 한글 문자 인식에 관한 연구)

  • Jang, Seung-Seok;Jang, Dong-Sik
    • Journal of Korean Institute of Industrial Engineers / v.17 no.2 / pp.27-37 / 1991
  • In this thesis, a recognition algorithm for Hangul is developed based on structural analysis of Hangul. Four major procedures are proposed: preprocessing, type classification, separation of consonant and vowel, and recognition. In the preprocessing procedure, the thinning algorithm proposed by CHEN & HSU is applied. In the type classification procedure, the thinned Hangul image is classified into one of six formal types. In the consonant/vowel separation procedure, starting from branch points that exist in a vowel, character elements are separated by tracing branch-point pixels one by one and comparing them with proposed templates; at the same time, the vowels are recognized. In the recognition procedure, consonants are extracted from the separated Hangul character and recognized by a modified crossing method. Recognized characters are converted into KS-5601-1989 codes. Experiments show a correct recognition rate of about 80%-90% and a recognition speed of about 2-3 characters per second on three types of input data, using a computer with an 80386 microprocessor.


Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences / v.13 no.2 / pp.57-66 / 2021
  • This study focuses on automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure affected by features from multiple speech dimensions, yet most previous studies are restricted to features from a single dimension. To effectively capture the characteristics of the speech disorder, we extracted features from multiple speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, Harmonics-to-Noise Ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th quartile/75th quartile), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains the Percentage of Correct Phonemes (Percentage of Correct Consonants/Vowels/Total phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralized Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, with an F1-score of 80.15, compared to using features from just one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in automatic severity classification of dysarthria.
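One of the pronunciation features named above, the Vowel Space Area (VSA), is commonly computed as the area of the polygon spanned by the corner vowels in the (F1, F2) plane. A minimal sketch using the shoelace formula, with illustrative formant values (the paper's exact VSA variant and corner-vowel set are not given in the abstract):

```python
# Sketch of the Vowel Space Area (VSA): the area of the polygon formed by
# corner vowels in the (F1, F2) plane, via the shoelace formula.
# The corner-vowel formant values below are illustrative, not measured data.

def vowel_space_area(corners):
    """corners: list of (F1, F2) pairs in order; returns polygon area."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]   # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Illustrative corner-vowel formants in Hz: /i/, /a/, /u/
print(vowel_space_area([(300, 2300), (800, 1300), (350, 900)]))  # 325000.0
```

A reduced VSA indicates centralized (less distinct) vowels, which is why it serves as a vowel-distortion measure in dysarthria assessment.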

Type Classification of Korean Characters Considering Relative Type Size (유형의 상대적 크기를 고려한 한글문자의 유형 분류)

  • Kim, Pyeoung-Kee
    • Journal of the Korea Society of Computer and Information / v.11 no.6 s.44 / pp.99-106 / 2006
  • Type classification is an essential step in recognizing a language with a huge character set, such as Korean. Since most previous research is based on the composition rules of Korean characters, it has been difficult to correctly classify characters with composite vowels, and the problem space was not divided evenly because the last consonant, which is relatively large compared to the other graphemes, was not further classified. In this paper, I propose a new type classification method in which the horizontal vowel is extracted before the vertical vowel, and last consonants are further classified into one of five small groups based on the horizontal projection profile. The new method uses 19 character types, which is more stable than the previous 6 or 15 types. Through experiments on a set of 1,000 frequently used characters and 30,614 characters scanned from several magazines, I show that the proposed method is more useful for classifying the huge Korean character set.


A Study on the Printed Korean and Chinese Character Recognition (인쇄체 한글 및 한자의 인식에 관한 연구)

  • 김정우;이세행
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.11 / pp.1175-1184 / 1992
  • A new classification method and recognition algorithms for printed Korean and Chinese characters are studied for Korean text that contains both. The proposed method utilizes structural features of the vertical and horizontal vowels in Korean characters. Korean characters are classified into 6 groups, and vowel and consonant are separated by means of different vowel extraction methods applied to each group; the time-consuming thinning process is excluded. A modified crossing distance feature is measured to recognize the extracted consonants. For Chinese characters, the average stroke crossing number is calculated for every character, which allows the characters to be classified into several groups; recognition then proceeds in terms of the stroke crossing number and the black dot rate of the character. Classification between Korean and Chinese characters was achieved at a rate of 90.5%, and the classification rate for 2,512 Ming-style Korean characters was 90.0%. The recognition algorithm was applied to 1,278 characters, with a recognition rate of 92.2%. The densest class after classification of 4,585 Chinese characters contained only 124 characters, about 1/40 of the total; the recognition rate was 89.2%.
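The stroke crossing number used above for grouping Chinese characters can be sketched as the count of background-to-stroke transitions along each scan line of the binary character image, averaged over the image. The tiny hand-made image below is illustrative:

```python
# Sketch of the stroke crossing number: background-to-stroke (0 -> 1)
# transitions along a scan line of a binary character image, averaged
# over all rows. The tiny image below is illustrative, not real data.

def crossing_number(row):
    """Count 0 -> 1 transitions in one binary scan line."""
    return sum(1 for a, b in zip([0] + list(row), row) if a == 0 and b == 1)

def avg_crossing_number(image):
    """Average crossing number over all rows of a binary image."""
    return sum(crossing_number(r) for r in image) / len(image)

image = [
    [0, 1, 1, 0, 1, 0],   # scan line crosses two strokes
    [0, 1, 0, 0, 1, 0],   # scan line crosses two strokes
    [0, 0, 1, 1, 1, 0],   # scan line crosses one stroke
]
print(avg_crossing_number(image))
```

Because stroke-dense Chinese characters yield higher averages than simple ones, this single statistic already partitions a large character set into a few manageable groups before recognition proper.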


Classification of Korean Character Type using Multi Neural Network and Fuzzy Inference based on Block Partition for Each Type (형식별 블럭분할에 기초한 다중신경망과 퍼지추론에 의한 한글 형식분류)

  • Pyeon, Seok-Beom;Park, Jong-An
    • The Journal of the Acoustical Society of Korea / v.13 no.4 / pp.5-11 / 1994
  • In this paper, the classification of Korean character types using a multi neural network and fuzzy inference based on block partition is studied. For effective classification of consonants and vowels, a block partition method is proposed that divides the consonant and vowel regions for each type in the character, and the partitioned blocks can be adapted to each type. To improve the classification rate, a multi neural network consisting of a whole network and a part network is constructed, and the character type is decided by fuzzy inference. To verify the validity of the proposed method, computer simulation was carried out; the classification rate of 92.6% confirms the effectiveness of the method.
