• Title/Summary/Keyword: Brain Computer Interface


Analysis of Change of Event Related Potential in Escape Test using Virtual Reality Technology

  • Hyun, Kyung-Yae; Lee, Gil-Hyun
    • Biomedical Science Letters / v.25 no.2 / pp.139-148 / 2019
  • The role of electroencephalography (EEG) in the development of brain-computer interface (BCI) technology is increasing. In particular, the analysis of event-related potentials (ERPs) in various situations is becoming more significant in BCI technology. In the past, studies of maze and fire situations were difficult to conduct because of safety risks and practical constraints. With the development of virtual reality (VR) technology, realistic maze and fire situations can now be realized. In this study, ERPs (P300 and error-related negativity) were analyzed to collect objective data on decision-making in an emergency situation. To overcome the limitations of previous methods that evaluate changes in EEG frequency content, ERPs were derived by setting epochs around each stimulus, standardizing them, and evaluating the resulting waveforms. P3a and P3b, the subcomponents of P300, were analyzed, and the error-related negativity (ERN) was analyzed together with the error positivity (Pe). Statistically significant changes in the ERPs were observed; since there is little related research, this result is considered meaningful as basic medical statistics.
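
As a minimal illustration of the epoching-and-averaging procedure described above, the sketch below cuts fixed windows around stimulus onsets, baseline-corrects and averages them into an ERP, and reads a peak in a typical P300 latency window. The sampling rate, window limits, and latency window are assumptions for illustration, not values from the paper.

```python
# Minimal ERP sketch: epoch a single EEG channel around stimulus onsets and average.
# fs, tmin/tmax and the 0.25-0.50 s P300 window are assumed values, not the paper's.
import numpy as np

def erp_from_events(eeg, event_samples, fs=500, tmin=-0.2, tmax=0.8):
    """eeg: 1-D array (one channel); event_samples: stimulus-onset sample indices."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[s - pre:s + post] for s in event_samples])
    epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
    erp = epochs.mean(axis=0)                                      # average over trials
    t = np.arange(-pre, post) / fs
    p300_amp = erp[(t >= 0.25) & (t <= 0.50)].max()                # peak in a typical P300 window
    return erp, p300_amp

# Toy usage with synthetic data
erp, amp = erp_from_events(np.random.randn(10_000), event_samples=[1000, 3000, 5000, 7000])
```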

Multi-channel EEG classification method according to music tempo stimuli using 3D convolutional bidirectional gated recurrent neural network (3차원 합성곱 양방향 게이트 순환 신경망을 이용한 음악 템포 자극에 따른 다채널 뇌파 분류 방식)

  • Kim, Min-Soo; Lee, Gi Yong; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.40 no.3 / pp.228-233 / 2021
  • In this paper, we propose a method to extract and classify features of multi-channel electroencephalography (EEG) that change according to various musical tempo stimuli. In the proposed method, a 3D convolutional bidirectional gated recurrent neural network extracts spatio-temporal features and long-term temporal dependencies from a 3D EEG input representation produced by the preprocessing step. The experimental results show that the proposed tempo-stimulus classification method is superior to the existing method and demonstrates the feasibility of a music-based brain-computer interface.
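
The abstract names the architecture but not its configuration; the sketch below is a hedged PyTorch approximation of a 3D convolutional network feeding a bidirectional GRU. The 9x9 scalp-grid input layout, layer sizes, and three tempo classes are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch of a 3D-conv + bidirectional-GRU EEG classifier (shapes and sizes assumed).
import torch
import torch.nn as nn

class Conv3DBiGRU(nn.Module):
    def __init__(self, n_classes=3, hidden=64):
        super().__init__()
        # Input: (batch, 1, time, 9, 9) -- channels mapped onto an assumed 9x9 scalp grid.
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(1, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),         # collapse the spatial grid, keep time
        )
        self.gru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                               # x: (B, 1, T, 9, 9)
        f = self.conv(x)                                # (B, 32, T, 1, 1)
        f = f.squeeze(-1).squeeze(-1).permute(0, 2, 1)  # (B, T, 32)
        out, _ = self.gru(f)                            # spatio-temporal sequence features
        return self.fc(out[:, -1])                      # tempo-class logits

logits = Conv3DBiGRU()(torch.randn(4, 1, 128, 9, 9))    # dummy batch of 4 trials
```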

The Development of Signal Processing Software for Single-and Multi-Voxel MR Spectroscopy (단위용적 및 다용적 기법 자기공명분광 신호처리 분석 소프트웨어의 개발)

  • Paik, Moon-Young; Lee, Hyun-Yong; Shin, Oun-Jae; Eun, Choong-Ki; Mu, Chi-Woong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.5 / pp.544-555 / 2002
  • The aim of this study is to develop $^1H$-MRS data postprocessing software for both the single-voxel and multi-voxel techniques, which play an important role as a diagnostic tool in the clinical field. The software is based on a graphical user interface (GUI) running under the Windows operating system on a personal computer (PC). For single-voxel MRS, the raw time-domain data and the frequency-domain spectrum are displayed simultaneously on screen. Several functions, such as DC correction, zero filling, line broadening, Lorentz-Gauss filtering, and phase correction, are included to improve spectrum quality. For multi-voxel analysis, the spectroscopic image reconstructed by 3D FFT is displayed as a spectral grid and overlaid on a previously obtained T1- or T2-weighted image so that the spectra are spatially registered with the image. MRS peaks were analyzed by computing peak-area ratios. For the single-voxel method, statistically processed peak-area ratios of MRS data obtained from normal human brains are presented. For the multi-voxel method, an MR spectroscopic image and a metabolite image acquired from a brain tumor are demonstrated.
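
The postprocessing functions listed above are standard MRS operations; the sketch below strings a few of them (DC correction, exponential line broadening, zero filling, FFT) together for a single FID. The parameter values and the simple tail-based DC estimate are illustrative assumptions, not the software's actual implementation.

```python
# Illustrative MRS postprocessing chain for one free-induction decay (FID); parameters assumed.
import numpy as np

def basic_mrs_postprocess(fid, dwell_time, lb_hz=2.0, zero_fill_to=4096):
    fid = fid - fid[-len(fid) // 8:].mean()            # DC correction from the FID tail
    t = np.arange(len(fid)) * dwell_time
    fid = fid * np.exp(-np.pi * lb_hz * t)             # exponential line broadening
    fid = np.pad(fid, (0, zero_fill_to - len(fid)))    # zero filling
    return np.fft.fftshift(np.fft.fft(fid))            # frequency-domain spectrum

# Toy usage: a decaying complex exponential as a stand-in FID sampled every 0.5 ms
t = np.arange(2048) * 5e-4
fid = np.exp(2j * np.pi * 120 * t) * np.exp(-t / 0.1)
spectrum = basic_mrs_postprocess(fid, dwell_time=5e-4)
```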

DNA (Data, Network, AI) Based Intelligent Information Technology (DNA (Data, Network, AI) 기반 지능형 정보 기술)

  • Youn, Joosang; Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems / v.9 no.11 / pp.247-249 / 2020
  • In the era of the 4th industrial revolution, the demand for convergence between ICT technologies is increasing in various fields. Accordingly, DNA (Data, Network, AI), a term that combines data, network, and artificial intelligence technology, has come into use and has recently become a hot topic. DNA encompasses a range of technologies with the potential to support intelligent applications in the real world. This paper therefore introduces the reviewed papers related to DNA technology: a service image placement mechanism based on a logical fog network, a machine-learning-based mobility support scheme for industrial wireless sensor networks, the prediction of subsequent BCI performance from spectral EEG characteristics, a warning classification method based on an artificial neural network using source-code topics, and a natural language processing model for data-visualization interaction with a chatbot.

Analysis of Dimensionality Reduction Methods Through Epileptic EEG Feature Selection for Machine Learning in BCI (BCI에서 기계 학습을 위한 간질 뇌파 특징 선택을 통한 차원 감소 방법 분석)

  • Tong, Yang; Aliyu, Ibrahim; Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.6 / pp.1333-1342 / 2018
  • To date, electroencephalography (EEG) has been the most important and convenient method for the diagnosis and treatment of epilepsy. However, it is difficult to identify the wave characteristics of epileptic EEG signals because they are very weak, non-stationary, and contain strong background noise. In this paper, we analyze the effect of dimensionality reduction methods on epileptic EEG feature selection and classification. Three dimensionality reduction methods were investigated: Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Linear Discriminant Analysis (LDA). The performance of each method was evaluated using Support Vector Machine (SVM), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Decision Tree (DT), and Random Forest (RF) classifiers. From the experimental results, PCA recorded its highest accuracy of 75% with SVM, LR, and K-NN. KPCA recorded its best performance of 85% with SVM and K-NN, while LDA achieved 100% accuracy with K-NN. Thus, LDA dimensionality reduction is found to provide the best classification result for epileptic EEG signals.
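
A hedged scikit-learn sketch of the reducer/classifier grid described above follows, with synthetic data standing in for the epileptic EEG features; the component counts and classifier settings are illustrative assumptions.

```python
# Sketch of the PCA/KPCA/LDA x classifier comparison; synthetic data stands in for EEG features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=64, random_state=0)  # stand-in features/labels

reducers = {"PCA": PCA(n_components=10),
            "KPCA": KernelPCA(n_components=10, kernel="rbf"),
            "LDA": LinearDiscriminantAnalysis(n_components=1)}   # binary task: at most 1 component
classifiers = {"SVM": SVC(), "LR": LogisticRegression(max_iter=1000),
               "K-NN": KNeighborsClassifier(), "DT": DecisionTreeClassifier(),
               "RF": RandomForestClassifier()}

for r_name, reducer in reducers.items():
    for c_name, clf in classifiers.items():
        acc = cross_val_score(make_pipeline(reducer, clf), X, y, cv=5).mean()
        print(f"{r_name} + {c_name}: {acc:.2f}")
```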

The effects of the methods of eye gaze and visual angles on accuracy of P300 speller (시선응시 방법과 시각도가 P300 문자입력기의 정확도에 미치는 영향)

  • Eom, Jin-Sup; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.17 no.2 / pp.91-100 / 2014
  • This study examined how the visual angle of the matrix, a physical property of the P300 speller, and the eye gaze method, a characteristic of the user, influence the accuracy of the P300 speller. The visual angle of the matrix was manipulated through the distance between the user and the matrix, yielding three groups: 60 cm, 100 cm, and 150 cm. The eye gaze method consisted of three conditions: in the head-moving condition, participants directed their gaze by moving the head; in the pupil-moving condition, they moved only the pupils with the head fixed; and in the eye-fixed condition, the gaze was fixed at the center of the matrix. The results showed a significant difference in P300 speller accuracy according to the eye gaze method: accuracy in the head-moving condition was higher than in the pupil-moving condition, which in turn was higher than in the eye-fixed condition. However, neither the effect of the visual angle of the matrix nor the interaction effect was significant. When the P300 amplitude of the target character was measured according to the gaze method, the amplitude in the head-moving condition was greater than in the pupil-moving condition. There was no significant difference in the error distribution between the head-moving and pupil-moving conditions, while both differed significantly from the fixed-gaze condition. In the head-moving and pupil-moving conditions, errors were located at characters neighboring the target, whereas in the fixed-gaze condition errors were distributed relatively widely and occurred at a high rate for characters far from the center of the matrix.

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words (한국어 감정표현단어의 추출과 범주화)

  • Sohn, Sun-Ju; Park, Mi-Sook; Park, Ji-Eun; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.105-120 / 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool for understanding human feelings. The focus was on the careful extraction of the most widely used feeling words, as well as their categorization into groups of emotions according to their meaning when used in real life. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean 'word frequency list' developed by Yonsei University and various sorting processes, the study condensed the original 64,666 emotion words into a final list of 504 words. In the next step, a total of 80 social work students evaluated and classified each word by its meaning into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, whereas 72 reflected two emotions (i.e., the same word indicating two distinct emotions), and 6 words showed three emotions. Among the 426 words representing a single emotion, 'sadness' was predominant, followed by 'anger' and 'happiness'. Most of the 72 words showing two emotions combined 'anger' and 'disgust', followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of an adaptive list of Korean feeling words that can be combined with other emotion signals, such as facial expressions, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.


Efficient way to input text through eye gazing method. (시선입력 인터페이스 시스템의 효율적 문자입력 방법)

  • Kwon, O-Jae
    • Archives of design research / v.20 no.3 s.71 / pp.289-298 / 2007
  • The EGI system is a new communication method in the limelight for helping disabled users input and handle information on a computer more easily. However, the EGI system's jittery eye movements (JEM) impose heavy psychological and physiological stress on the user when inputting or perceiving target information on a machine. This study illustrates how to resolve the JEM issue and suggests a method that is easy and simple for anyone to control. A demo tool was built and tested to identify and verify the causes of JEM. The evaluation shows that text input with the snap-up technique is less stressful than without it, in both the psychological assessment and the physiological brain-wave test. Whether their disability is congenital or acquired, disabled users can thus be given opportunities for smoother communication, and a more efficient system for communication can be developed.


Wavelet-Based Minimized Feature Selection for Motor Imagery Classification (운동 형상 분류를 위한 웨이블릿 기반 최소의 특징 선택)

  • Lee, Sang-Hong; Shin, Dong-Kun; Lim, Joon-S.
    • The Journal of the Korea Contents Association / v.10 no.6 / pp.27-34 / 2010
  • This paper presents a methodology for classifying left and right motor imagery using a neural network with weighted fuzzy membership functions (NEWFM) and wavelet-based feature extraction. In the first step, wavelet coefficients are extracted from the electroencephalogram (EEG) signal by wavelet transforms. In the second step, sixty initial features are extracted from the wavelet coefficients, based on the frequency distribution and the amount of variability in that distribution. The distributed non-overlap area measurement method then reduces the feature set by removing the worst input feature one at a time, and the six features giving the highest performance are selected. The proposed methodology achieves an accuracy of 86.43% with these six features.
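
The wavelet feature-extraction step can be sketched as below. This is not the paper's exact NEWFM pipeline; the wavelet family, decomposition level, and per-sub-band statistics are illustrative assumptions.

```python
# Illustrative wavelet features for one EEG trial (wavelet, level, and statistics assumed).
import numpy as np
import pywt

def wavelet_features(eeg_trial, wavelet="db4", level=4):
    coeffs = pywt.wavedec(eeg_trial, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for c in coeffs:
        feats.append(np.mean(np.abs(c)))   # rough frequency-distribution statistic per sub-band
        feats.append(np.std(c))            # variability within the sub-band
    return np.array(feats)

feats = wavelet_features(np.random.randn(1000))   # toy trial -> 10 candidate features
```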

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.59-64 / 2015
  • In this paper, we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in the BCI field has led to the development of a number of useful applications, such as robot control, game interfaces, exoskeleton limbs, and so on. However, while imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, there are several problems in implementing such a system. In a previous paper, we already handled some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although the multi-class classification problem still required improvement. In view of this, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN algorithm, a deep learning algorithm, for multi-class vowel classification, and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, /u/. For the experiment, we obtained 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. The eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. In the analysis, we provide the classification results of a back-propagation artificial neural network (BP-ANN) for comparison with the DBN. The classification accuracy of the BP-ANN was 52.04%, while that of the DBN was 87.96%; the DBN thus performed 35.92 percentage points better in multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
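
The covariance-eigenvalue feature described in the abstract can be sketched as follows; the normalization and ordering are illustrative assumptions, and a DBN (or the BP-ANN baseline) would then be trained on these vectors.

```python
# Sketch of the covariance-eigenvalue feature vector for one imagined-vowel EEG epoch.
import numpy as np

def covariance_eigen_features(eeg_epoch):
    """eeg_epoch: array of shape (n_channels, n_samples) for a single trial."""
    cov = np.cov(eeg_epoch)                  # (n_channels, n_channels) channel covariance
    eigvals = np.linalg.eigvalsh(cov)        # real eigenvalues in ascending order
    return eigvals[::-1] / eigvals.sum()     # largest first; normalization is an assumption

feat = covariance_eigen_features(np.random.randn(32, 500))   # toy 32-channel epoch
```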