http://dx.doi.org/10.5391/JKIIS.2014.24.1.090

EEG based Vowel Feature Extraction for Speech Recognition System using International Phonetic Alphabet  

Lee, Tae-Ju (School of Electrical and Electronics Engineering, Chung-Ang University)
Sim, Kwee-Bo (School of Electrical and Electronics Engineering, Chung-Ang University)
Publication Information
Journal of the Korean Institute of Intelligent Systems / v.24, no.1, 2014, pp. 90-95
Abstract
Research on brain-computer interfaces, a new class of interface systems that connect humans to machines, has produced user-assistance devices such as wheelchair controllers and character-input systems. Recent studies have attempted to build speech recognition systems from brain waves and to enable silent communication. In this paper, we study how to extract vowel features based on the International Phonetic Alphabet (IPA) as a foundation step toward an electroencephalogram (EEG) based speech recognition system. We conducted a two-step experiment with three healthy male subjects: in the first step the subjects imagined speaking a single vowel, and in the second step they imagined speaking two successive vowels. From the 64 acquired channels we selected 32, covering the frontal lobe, which is related to thinking, and the temporal lobe, which is related to speech function. The eigenvalues of the signal were used as the feature vector, and a support vector machine (SVM) was used for classification. The first step showed that a feature vector of order 10 or higher is needed to analyze the EEG signals of imagined speech; with an 11th-order feature vector, the highest average classification rate was 95.63 % (between /a/ and /o/) and the lowest was 86.85 % (between /a/ and /u/). In the second step, we studied how the speech imagery signals of single vowels differ from those of two successive vowels.
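The feature-extraction step described above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it assumes the "eigenvalues of the signal" are the eigenvalues of the channel covariance matrix of one epoch, and the function name, channel count (32), sampling parameters, and synthetic data are all hypothetical. The resulting 11th-order feature vector would then be fed to an SVM classifier (e.g., scikit-learn's `SVC`), which is omitted here.

```python
import numpy as np

def eigen_features(epoch, order=11):
    """Illustrative eigenvalue feature vector for one EEG epoch.

    epoch : (n_channels, n_samples) array of preprocessed EEG.
    Returns the `order` largest eigenvalues of the channel
    covariance matrix, sorted in descending order.
    """
    cov = np.cov(epoch)                # (n_channels, n_channels) covariance
    eigvals = np.linalg.eigvalsh(cov)  # ascending; symmetric input -> real
    return eigvals[::-1][:order]       # keep the top `order` eigenvalues

# Hypothetical example: one 32-channel epoch of synthetic data,
# 2 s at 256 Hz (parameters assumed, not from the paper).
rng = np.random.default_rng(0)
epoch = rng.standard_normal((32, 512))
features = eigen_features(epoch, order=11)  # 11th-order feature vector
```

One such vector per imagery trial would form a training sample; the paper's pairwise vowel classification (e.g., /a/ vs. /o/) then reduces to binary SVM classification on these vectors.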
Keywords
Brain-computer interface; Speech recognition; Electroencephalogram; Imagined speech;