Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network

  • Lee, Tae-Ju (School of Electrical and Electronics Engineering, Chung-Ang University)
  • Sim, Kwee-Bo (School of Electrical and Electronics Engineering, Chung-Ang University)
  • Received : 2014.06.30
  • Accepted : 2014.11.14
  • Published : 2015.01.01

Abstract

In this paper, we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), in particular for imagined speech. In recent years, growing interest in BCI has led to a number of useful applications, such as robot control, game interfaces, and exoskeleton limbs. Imagined speech, which could be used for communication or military devices, is one of the most exciting BCI applications, but several problems remain in implementing such a system. In a previous paper, we addressed some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although that work still needed to be extended to multi-class classification problems. In this respect, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN, a deep learning algorithm, for multi-class vowel classification and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, and /u/. For the experiment, we recorded 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. The eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. For comparison, we also report the classification results of a back-propagation artificial neural network (BP-ANN). The classification accuracy of the BP-ANN was 52.04%, while that of the DBN was 87.96%; that is, the DBN performed 35.92 percentage points better on multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
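The pipeline described above (eigenvalues of the channel covariance matrix of 32-channel EEG trials, fed to a DBN classifier) can be illustrated with a short sketch. The following Python code is only an illustrative approximation under assumed hyperparameters, not the authors' implementation: it stands in for the DBN with two of scikit-learn's BernoulliRBM layers pretrained greedily in a pipeline and a logistic-regression readout on top; the layer sizes, learning rates, and variable names (eeg_trial, trials, labels) are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler


def eigenvalue_features(eeg_trial):
    """Feature vector for one imagined-speech trial.

    eeg_trial: array of shape (n_channels, n_samples), e.g. (32, T) raw EEG.
    Returns the eigenvalues of the 32 x 32 channel covariance matrix,
    sorted in descending order, giving a fixed-length feature vector.
    """
    cov = np.cov(eeg_trial)            # rows = channels -> (32, 32) covariance
    eigvals = np.linalg.eigvalsh(cov)  # symmetric matrix, ascending eigenvalues
    return eigvals[::-1]               # largest eigenvalue first


# DBN-like classifier: two stacked RBMs pretrained layer by layer, then a
# logistic-regression output layer (hyperparameters are illustrative only).
dbn_like = Pipeline([
    ("scale", MinMaxScaler()),         # BernoulliRBM expects inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=50, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=50, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hypothetical usage: trials is a list of (32, T) EEG arrays, labels holds the
# vowel class of each trial (/a/, /i/, /o/, /u/ mapped to 0..3).
# X = np.vstack([eigenvalue_features(t) for t in trials])
# dbn_like.fit(X, labels)
```

Note that the paper's DBN is fine-tuned with back-propagation after unsupervised pretraining; this sketch replaces that step with a simple logistic-regression readout for brevity.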

Keywords

References

  1. D. Marshall, D. Coyle, S. Wilson, and M. Callaghan, "Games, Gameplay, and BCI: The state of the art," IEEE Trans. on Computational Intelligence and AI in Games, vol. 5, no. 2, pp. 82-99, Jun. 2013. https://doi.org/10.1109/TCIAIG.2013.2263555
  2. Y. Chae, J. Jeong, and S. Jo, "Toward brain-actuated humanoid robots: asynchronous direct control using an EEG-based BCI," IEEE Trans. on Robotics, vol. 28, no. 5, pp. 1131-1144, Oct. 2012. https://doi.org/10.1109/TRO.2012.2201310
  3. A. Frisoli, C. Loconsole, F. Banno, M. Barsotti, C. Chisari, and M. Bergamasco, "A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 6, pp. 1169-1179, Nov. 2012. https://doi.org/10.1109/TSMCC.2012.2226444
  4. D.-E. Kim, S.-M. Park, and K.-B. Sim, "Study on the correlation between grip strength and EEG," Journal of Institute of Control, Robotics and Systems, vol. 19, no. 9, pp. 853-859, 2013. https://doi.org/10.5302/J.ICROS.2013.13.1916
  5. Y.-H. Kim, K.-E. Ko, S.-M. Park, and K.-B. Sim, "Practical use technology for robot control in BCI environment based on motor imagery-P300," Journal of Institute of Control, Robotics and Systems, vol. 19, no. 3, pp. 227-232, 2013. https://doi.org/10.5302/J.ICROS.2013.13.1866
  6. R. Bogue, "Brain-computer interfaces: control by thought," Industrial Robot: An International Journal, vol. 37, no. 2, pp. 126-132, 2010. https://doi.org/10.1108/01439911011018894
  7. T. J. Lee and K. B. Sim, "EEG based vowel feature extraction for speech recognition system using international phonetic alphabet," Journal of Korean Institute of Intelligent Systems, vol. 24, no. 1, pp. 90-95, Feb. 2014. https://doi.org/10.5391/JKIIS.2014.24.1.090
  8. J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, Massachusetts, 1991.
  9. G. Hinton, A Practical Guide to Training Restricted Boltzmann Machines, Version 1, Toronto, 2010.
  10. G. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, Jul. 2006. https://doi.org/10.1126/science.1127647
  11. M. A. Salama, A. E. Hassanien, and A. A. Fahmy, "Deep belief network for clustering and classification of a continuous data," IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Luxor, Egypt, pp. 473-477, Dec. 2010.
  12. R. B. Palm, Prediction as a Candidate for Learning Deep Hierarchical Models of Data, Technical University of Denmark, Lyngby, Denmark, 2012.
  13. H. Lee, P. Pham, Y. Largman, and A. Y. Ng, "Unsupervised feature learning for audio classification using convolutional deep belief networks," Advances in Neural Information Processing Systems 22 (NIPS 2009), Vancouver, Canada, pp. 1-9, Dec. 2009.
  14. G. E. Hinton, "To recognize shapes, first learn to generate images," Computational Neuroscience: Theoretical Insights into Brain Function, vol. 165, pp. 535-547, 2007.