Classification of Diphthongs using Acoustic Phonetic Parameters

  • Lee, Suk-Myung (School of Electrical and Electronic Engineering, Yonsei University)
  • Choi, Jeung-Yoon (School of Electrical and Electronic Engineering, Yonsei University)
  • Received : 2012.09.13
  • Accepted : 2012.12.05
  • Published : 2013.03.31

Abstract

This work examines the classification of diphthongs as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are investigated, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database; for 4-way classification, error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, respectively. Adding the acoustic features to the widely used Mel-frequency cepstral coefficients (MFCCs) also improves classification.
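The balanced error rate quoted above averages the per-class error rates, so that the majority class (monophthongs greatly outnumber diphthongs in TIMIT) cannot mask errors on the rarer class. A minimal Python sketch of this metric, with illustrative toy labels rather than the paper's actual data:

```python
# Hedged sketch of the balanced error rate (BER) used for the 2-way
# monophthong/diphthong task. The labels below are toy values for
# illustration only; they are not taken from the paper.

def balanced_error_rate(y_true, y_pred, labels=(0, 1)):
    """Mean of per-class error rates: each class contributes equally,
    regardless of how many examples it has."""
    per_class_errors = []
    for label in labels:
        idx = [i for i, t in enumerate(y_true) if t == label]
        errors = sum(1 for i in idx if y_pred[i] != label)
        per_class_errors.append(errors / len(idx))
    return sum(per_class_errors) / len(labels)

# Toy example: 0 = monophthong, 1 = diphthong.
# Class 0 error = 1/4, class 1 error = 1/2, so BER = 0.375,
# even though plain accuracy would report only 2 errors out of 6.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]
print(balanced_error_rate(y_true, y_pred))  # 0.375
```

Plain error rate on the same toy data would be 2/6 ≈ 0.333; BER is higher because the small diphthong class is weighted equally with the large monophthong class.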
