
Speech Feature Extraction based on Spikegram for Phoneme Recognition


  • Han, Seokhyeon (Dept. of Electronics Engineering, Kwangwoon University) ;
  • Kim, Jaewon (Dept. of Electronics Engineering, Kwangwoon University) ;
  • An, Soonho (Dept. of Electronics Engineering, Kwangwoon University) ;
  • Shin, Seonghyeon (Dept. of Electronics Engineering, Kwangwoon University) ;
  • Park, Hochong (Dept. of Electronics Engineering, Kwangwoon University)
  • Received : 2019.07.23
  • Accepted : 2019.09.04
  • Published : 2019.09.30

Abstract

In this paper, we propose a method of extracting speech features for phoneme recognition based on the spikegram. Fourier-transform-based features are widely used in phoneme recognition, but they are not extracted in a biologically plausible way and, because of their frame-based operation, cannot achieve high temporal resolution. For better phoneme recognition, therefore, a new feature extraction method is desirable, one that analyzes the speech signal at high temporal resolution following a model of the human auditory system. In this paper, we analyze the speech signal using a spikegram, which models feature extraction and transmission in the auditory system, and then propose a method of extracting features from the spikegram for phoneme recognition. We evaluate the proposed features with a DNN-based phoneme recognizer and confirm that, for short phonemes, they outperform the Fourier-transform-based features. This result verifies the feasibility of phoneme recognition using new speech features extracted from an auditory model.

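To illustrate the spikegram idea described in the abstract, the following Python sketch decomposes a signal into time-shifted gammatone kernels via greedy matching pursuit, yielding a sparse set of "spikes" (kernel, time, amplitude). This is a minimal illustration, not the authors' implementation; the kernel parameters, ERB bandwidth formula, and `spikegram` helper are assumptions chosen for clarity.

```python
import numpy as np

def gammatone(fc, fs, duration=0.032, order=4):
    """Unit-norm gammatone kernel: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t).
    Bandwidth b follows the ERB scale (assumed parameterization)."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * (24.7 + 0.108 * fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.linalg.norm(g)

def spikegram(x, fs, center_freqs, n_spikes=100):
    """Greedy matching pursuit over a gammatone dictionary.
    Each spike is (kernel index, sample offset, amplitude)."""
    kernels = [gammatone(fc, fs) for fc in center_freqs]
    residual = x.astype(float).copy()
    spikes = []
    for _ in range(n_spikes):
        best = None  # (|correlation|, kernel idx, offset, amplitude)
        for k, g in enumerate(kernels):
            corr = np.correlate(residual, g, mode="valid")
            i = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[i]) > best[0]:
                best = (abs(corr[i]), k, i, corr[i])
        _, k, i, a = best
        # Subtract the best-matching shifted kernel from the residual.
        residual[i:i + len(kernels[k])] -= a * kernels[k]
        spikes.append((k, i, a))
    return spikes, residual

# Toy input: a 100 ms 440 Hz tone at 16 kHz.
fs = 16000
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 440 * t)
spikes, res = spikegram(x, fs, center_freqs=[220, 440, 880], n_spikes=20)
```

Because each subtracted kernel is unit-norm, every spike removes the energy of its projection, so the residual norm decreases monotonically; features for a recognizer can then be derived from the spike times and amplitudes rather than from fixed frames.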

