• Title/Summary/Keyword: imagined speech


Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network

  • Lee, Tae-Ju; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.21 no.1, pp.59-64, 2015
  • In this paper, we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in BCI has led to the development of a number of useful applications, such as robot control, game interfaces, and exoskeleton limbs. Imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, but implementing such a system raises several problems. In a previous paper, we addressed some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although that work needed to be extended to multi-class classification. Accordingly, this paper provides a solution for multi-class vowel classification of imagined speech. We used the DBN, a deep learning algorithm, for multi-class vowel classification and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, /u/. For the experiment, we obtained 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. The eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. For comparison with the DBN, we also report the classification results of a back-propagation artificial neural network (BP-ANN). The BP-ANN achieved 52.04% accuracy and the DBN achieved 87.96%, meaning the DBN performed 35.92 percentage points better on multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
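As an illustration of the feature extraction this abstract describes, the following is a minimal Python/NumPy sketch of computing the eigenvalues of the channel covariance matrix of one EEG epoch. The epoch shape, the trial length, and the `eigenvalue_features` helper name are assumptions for illustration only; the abstract does not specify the paper's exact preprocessing.

```python
import numpy as np

def eigenvalue_features(epoch):
    """Eigenvalues of the channel covariance matrix, sorted descending.

    epoch: (n_channels, n_samples) array holding one imagined-speech trial.
    Returns an (n_channels,) feature vector, as described in the abstract.
    """
    cov = np.cov(epoch)                # (n_channels, n_channels) covariance
    eigvals = np.linalg.eigvalsh(cov)  # real eigenvalues, ascending order
    return eigvals[::-1]               # largest-variance components first

# Hypothetical usage on a 32-channel trial of 512 samples
trial = np.random.randn(32, 512)
features = eigenvalue_features(trial)
print(features.shape)  # (32,)
```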

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar; Kumar, Sandeep
    • ETRI Journal, v.44 no.4, pp.672-685, 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network is adopted to recognize these signals in seven EEG frequency bands individually, in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s, which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequencies. To reinforce these findings, the above work is compared with models based on the gated recurrent unit (GRU), the convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% higher average accuracy in the alpha band and 74.54% lower average NPT than the CNN. The maximum accuracy of the GRU was 8.34% less than that of the LSTM network. Deep networks performed better than traditional classifiers.
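The abstract describes band-wise recognition of imagined speech with an LSTM. Below is a minimal sketch, assuming conventional EEG band edges, an assumed 256 Hz sampling rate, and an illustrative PyTorch model; the `SpeechImageryLSTM` name, layer sizes, and band table are assumptions, not the authors' published architecture.

```python
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

FS = 256  # sampling rate in Hz (assumed; not stated in the abstract)

# Conventional EEG bands; the paper evaluates seven bands, alpha performing best.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(epoch, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass applied along each EEG channel."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epoch, axis=-1)

class SpeechImageryLSTM(nn.Module):
    """LSTM over time steps of multichannel EEG; sizes are illustrative."""
    def __init__(self, n_channels=32, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])   # classify from the last time step

# Hypothetical alpha-band trial: filter, then score with the (untrained) model
epoch = np.random.randn(32, 2 * FS)                    # (channels, samples)
alpha = bandpass(epoch, *BANDS["alpha"])
x = torch.tensor(alpha.T[None], dtype=torch.float32)   # (1, time, channels)
logits = SpeechImageryLSTM()(x)                        # (1, 5) class scores
```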

EEG based Vowel Feature Extraction for Speech Recognition System using International Phonetic Alphabet

  • Lee, Tae-Ju; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.1, pp.90-95, 2014
  • Research on brain-computer interfaces, a new class of interface systems connecting humans to machines, has aimed to implement user-assistance devices such as wheelchair controls or character-input systems. Recent work includes several attempts to implement speech recognition systems based on brain waves and to achieve silent communication. In this paper, we study how to extract vowel features based on the International Phonetic Alphabet (IPA) as a foundational step toward a speech recognition system based on the electroencephalogram (EEG). We conducted a two-step experiment with three healthy male subjects: the first step was speech imagery with a single vowel, and the second was imagery with two successive vowels. From the 64 acquired channels we selected 32, covering the frontal lobe, which is related to thinking, and the temporal lobes, which are related to speech function. Eigenvalues of the signal were used as the feature vector, and a support vector machine (SVM) was used for classification. In the first step, we found that a feature vector of at least 10th order is needed to analyze the EEG signal of imagined speech; with an 11th-order feature vector, the highest average classification rate was 95.63% between /a/ and /o/, and the lowest was 86.85% between /a/ and /u/. In the second step, we studied the difference between the speech imagery signals of single and two successive vowels.
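To make the eigenvalue-plus-SVM pipeline of this abstract concrete, here is a minimal scikit-learn sketch of pairwise vowel classification from top-k eigenvalue features. The data shapes, the `top_k_eigenvalues` helper, and the RBF kernel with C=1.0 are assumptions for illustration; the abstract does not state the paper's kernel or hyperparameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def top_k_eigenvalues(epoch, k=11):
    """Feature vector of the k largest eigenvalues of the channel covariance,
    mirroring the 11th-order feature vector mentioned in the abstract."""
    eigvals = np.linalg.eigvalsh(np.cov(epoch))
    return eigvals[::-1][:k]

# Hypothetical data: 40 trials per vowel, each 32 channels x 512 samples
rng = np.random.default_rng(0)
trials_a = rng.standard_normal((40, 32, 512))   # imagined /a/
trials_o = rng.standard_normal((40, 32, 512))   # imagined /o/

X = np.array([top_k_eigenvalues(t)
              for t in np.concatenate([trials_a, trials_o])])
y = np.array([0] * 40 + [1] * 40)               # pairwise /a/ vs /o/ labels

# Binary SVM as in the abstract; kernel choice and C are assumptions
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print(f"mean pairwise accuracy: {scores.mean():.3f}")
```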