http://dx.doi.org/10.13064/KSSS.2019.11.3.039

Performance of Korean spontaneous speech recognizers based on an extended phone set derived from acoustic data  

Bang, Jeong-Uk (Department of Control and Robot Engineering, Graduate School, Chungbuk National University)
Kim, Sang-Hun (Electronics and Telecommunications Research Institute)
Kwon, Oh-Wook (School of Electronics Engineering, Chungbuk National University)
Publication Information
Phonetics and Speech Sciences, v.11, no.3, 2019, pp. 39-47
Abstract
We propose a method to improve the performance of spontaneous speech recognizers by extending their phone set using speech data. In the proposed method, we first extract variable-length phoneme-level segments from broadcast speech signals and convert them to fixed-length latent vectors using a long short-term memory (LSTM) classifier. We then cluster acoustically similar latent vectors and build a new phone set by choosing the number of clusters with the lowest Davies-Bouldin index. We also update the lexicon of the speech recognizer by choosing the pronunciation sequence of each word with the highest conditional probability. To analyze the acoustic characteristics of the new phone set, we visualize its spectral patterns and segment durations. Through speech recognition experiments using a larger training data set than in our previous work, we confirm that the new phone set yields better performance than conventional phoneme-based and grapheme-based units in both spontaneous speech recognition and read speech recognition.
Keywords
acoustic units; phone set; spontaneous speech recognition; broadcast data
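
The cluster-selection step described in the abstract can be sketched concretely. The following is a minimal sketch, not the authors' implementation: it assumes k-means clustering over the LSTM latent vectors (the abstract does not name the clustering algorithm), uses scikit-learn's davies_bouldin_score to choose the phone-set size, and the helper name, candidate range, and random placeholder data are all illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def choose_phone_set_size(latent_vectors, k_candidates):
    """Cluster the fixed-length latent vectors for each candidate
    phone-set size and keep the clustering with the lowest
    Davies-Bouldin index (lower means more compact, better-separated
    clusters)."""
    best_k, best_dbi, best_labels = None, np.inf, None
    for k in k_candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(latent_vectors)
        dbi = davies_bouldin_score(latent_vectors, labels)
        if dbi < best_dbi:
            best_k, best_dbi, best_labels = k, dbi, labels
    return best_k, best_dbi, best_labels

# Placeholder data: 2,000 phoneme-level segments embedded as 64-dim latent vectors.
X = np.random.randn(2000, 64).astype(np.float32)
k, dbi, labels = choose_phone_set_size(X, k_candidates=range(40, 101, 10))
print(f"chosen phone-set size: {k} (Davies-Bouldin index {dbi:.3f})")

Each resulting cluster would then serve as one unit of the extended phone set; in practice the candidate range and the embedding dimensionality would come from the trained LSTM classifier rather than the placeholder values used here.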
15 Sainath, T. N., Prabhavalkar, R., Kumar, S., Lee, S., Kannan, A., Rybach, D., Schoglo, V., ... Chiu, C. C. (2018, April). No need for a lexicon? Evaluating the value of the pronunciation lexica in end-to-end models. Proceedings of the International Conference on Acoustics, Speech, Signal Processing (pp. 5859-5863). Calgary, Canada.