Adverse Effects on EEGs and Bio-Signals Coupling on Improving Machine Learning-Based Classification Performances

  • SuJin Bak (AI Signal Processing Lab., Advanced Institute of Convergence Technology)
  • Received : 2023.09.02
  • Revised : 2023.10.18
  • Published : 2023.10.31

Abstract


In this paper, we propose a novel approach to investigating brain-signal measurement technology using Electroencephalography (EEG). Traditionally, researchers have combined EEG signals with bio-signals (BSs) to enhance the classification performance of emotional states. Our objective was to explore the synergistic effects of coupling EEG and BSs, and to determine whether the combination of EEG+BS improves the classification accuracy of emotional states compared to using EEG alone or combining EEG with pseudo-random signals (PS) generated arbitrarily by random generators. Employing four feature extraction methods, we examined four combinations: EEG alone, EEG+BS, EEG+BS+PS, and EEG+PS, utilizing data from two widely-used open datasets. Emotional states (task versus rest states) were classified using Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) classifiers. Our results revealed that when using the highest accuracy SVM-FFT, the average error rates of EEG+BS were 4.7% and 6.5% higher than those of EEG+PS and EEG alone, respectively. We also conducted a thorough analysis of EEG+BS by combining numerous PSs. The error rate of EEG+BS+PS displayed a V-shaped curve, initially decreasing due to the deep double descent phenomenon, followed by an increase attributed to the curse of dimensionality. Consequently, our findings suggest that the combination of EEG+BS may not always yield promising classification performance.


I. Introduction

Emotion plays a vital role in activities such as perception, motivation, learning, and rational decision-making. As the emotional aspect is increasingly emphasized, various indicators should be considered to evaluate perceived emotion clearly. Previous studies introduced many indicators, such as facial images, gestures, and speech [1, 2], which have proven effective for emotion detection. For example, facial expressions are known to readily reveal a person’s emotional state [3, 4]. Some researchers have reported differences among positive, negative, and neutral emotions for each gesture [5, 6].

However, conflicting results have also been reported in which emotion detection was found to be difficult using these indicators. A recent study showed an inconsistency between the subjective responses of participants who watched an actress’s short emotional performances (neutral, happy, angry, disgust, fear, sadness, and surprise) and the ratings produced by FaceReader 4.0, an automatic facial expression recognition software [7]. Thus, most of the indicators used in emotion studies are not universal [8], and the emotional state they predict can vary depending on the participant’s will [9]. In contrast, bio-signals (BSs) can be used as objective indicators of changing emotional states because they cannot be freely controlled [10].

A BS is a time-varying reaction within the human body and can be divided into two main categories: physical and physiological BSs. Physical BSs are measured as the result of muscle activity and include pupil size, eye movements, blinks, head movements, respiration, and voice, whereas physiological signals are more directly related to the body’s vital functions. Electrocardiography (ECG) and blood volume pulse (BVP) are associated with cardiac activity. The galvanic skin response (GSR) measures the sweat released by exocrine activity, and electromyography (EMG) measures muscle excitability by recording the electrical signals from skeletal muscles. In particular, emotion recognition based on the electroencephalogram (EEG) is widely used in affective computing, where a central challenge lies in choosing feature extraction methods that achieve the best classification performance [11-13]. In this study, the term BS refers to bio-signals excluding EEG.

Several studies have shown successful cases of detecting specific emotions using BS and EEG together [14-17]. The advantage of BSs, including EEG, is that the activities of the autonomic nervous system (ANS) limit people’s conscious or intentional emotional control: it is impossible for a human to regulate their BSs at will. Therefore, they can be used as objective indicators. However, despite the advantages of combining BSs with EEG signals, we raise some issues for the following reasons:

1) Although human emotions can be classified using features extracted from BSs combined with EEG, as previous studies have demonstrated, most studies only utilize them in an integrated manner without considering the correlation between EEG and BSs. Correlation is a statistic that measures the extent to which two variables move together in relation to each other. Correlation analysis is important, as several studies have demonstrated that correlations between two data sources reflect diverse information, which can affect the classification performance of multivariate data [18, 19]. Despite the importance of this correlation, it is not yet known how the interrelationship between EEG signals and BSs influences emotion classification. According to a study [20], the accuracy for emotion state classification combining EEG with GSR, respiration, blood pressure, and temperature (61.80 %) was lower than that for emotion state classification using a single-channel EEG (63.33 %). The researchers concluded that EEG signals performed better than physiological signals; we further considered the possibility that there is no correlation between EEG and BSs for emotion classification. This is supported by the results of the study by Kim et al., who examined participants’ physiological responsiveness to repeated exposure to stimuli in negative environments: there were no significant differences in heart rate between the pre- and post-states (p > .05) [21].

2) The performance of emotion recognition or classification by EEGs coupled with BSs can depend on the feature extraction and classification methods. For instance, when emotions (neutral, happy, and sad) were classified from EEG signals using extracted power spectra and wavelet energy entropy (single and fused), the fusion features (91.18 %) provided higher accuracy than single features (89.17 %) using a support vector machine (SVM) [22]. Another study demonstrated that the deep physiological affect network (DPAN) and the fully connected long short-term memory (FC-LSTM) classifiers achieved accuracies of 78.72 % and 68.45 %, respectively, in valence classification, and 79.03 % and 66.56 %, respectively, in arousal classification [23]. The accuracy difference between the two classifiers was greater than 10 %. These results suggest that accuracy depends on the feature extraction and/or classification methods, regardless of the combination of BSs.

Therefore, our study aims to investigate whether classification benefits can be gained by combining EEG signals with BSs and/or PSs. We compared the classification error rates of EEG only, EEG+PS, EEG+BS, and EEG+BS+PS. We used four feature extraction methods, and SVM and LSTM classifiers, for the classification performance comparison using AMIGOS [37] and DEAP [38], two representative public datasets. The SVM and LSTM models were also validated using the area under the receiver operating characteristic curve (AUC) to check whether the classification results were reliable. We further investigated the number of PS channels needed to improve the classification performance of EEG only and EEG+BS.

II. Related Works

Changes in BSs that enable emotion recognition were the basis for previous studies reporting that various BSs change together depending on human emotions [24-26]. For instance, people with high blood pressure exhibited reduced emotional responses to positive and negative stimuli [27]. In addition, under negative emotional conditions such as anger and anxiety, the GSR showed a gradual decline [28], and the temperature of the hand skin decreased [29]. On the other hand, heart rate increased during emotional states of hate or anger [30]. These studies were verified from a physiological perspective. Because of these results, most researchers have assumed that BSs can significantly improve classification performance, yet there have been few studies on the actual effects of BSs. Most studies still combine EEG signals with BSs indiscriminately, without considering the characteristics of the BSs [31]. It is therefore necessary to investigate the adverse effects in detail; to this end, EEG signals can be combined not only with BSs measured during EEG recording but also with artificially generated pseudo-random signals (PSs). PSs are arbitrarily generated by random generators and are unrelated to any physiological response during EEG measurement. When EEG only or EEG+BS is combined with PS, the PS acts as a type of noise, creating conditions similar to those in the real world. Noise is known as a main contributor to deteriorating classification performance (high classification error rates), and the more noise signals there are, the poorer the classification performance. It is already known that the process of acquiring brainwave signals introduces several noise sources. However, it has recently been reported that appropriate noise may improve classification performance [32].
For instance, one study found that the error rate of data with mild noise was lower than that of the original data on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a popular image classification benchmark [33]. Another study reported that noise created by random generators helps neural network models distinguish disease states more easily [34]: the noise serves as an enhanced feature set that increases the difference between the diseased and non-diseased states. The beneficial effect of this mild noise is called deep double descent [35]. However, previous studies have not fully identified the reason for this phenomenon. Despite the performance improvement caused by mild noise, it is also necessary to consider whether too much unnecessary information has been added: the curse of dimensionality [36] results in poor classification performance when the amount of information exceeds what is needed.

III. Methods

Fig. 1 depicts the overall experimental process. We used two publicly available datasets: “A dataset for Mood, personality and affect research on Individuals and GrOupS (AMIGOS)” [37], and “Datasets for Emotion Analysis using EEG, Physiological and video signals (DEAP)” [38]. AMIGOS contains EEG and BSs acquired from 40 people while they watched 16 short videos. Likewise, DEAP contains EEG and BSs acquired from 32 people who each watched one-minute excerpts of 40 music videos. In both datasets, neuro-bio signals were captured from the subjects during emotion elicitation. We compared the classification error rates between the task and rest states because these two distinct mental states lead to the highest classification performance (lowest error rates) [39]. It is worth investigating whether EEG signals combined with BSs are efficient in terms of classification and prediction performance for the two datasets. Table 1 briefly summarizes the AMIGOS and DEAP open datasets.


Fig. 1. Overall experimental concepts

Table 1. Summary of AMIGOS and DEAP


2.1 AMIGOS datasets

EEG signals were recorded using an Emotiv EPOC Neuroheadset containing 14 electrodes for the AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 channels (chs). Simultaneously, BSs such as ECG and GSR were recorded with three electrodes: the left/right ECG channels and the GSR channel. Each participant thus had 17 channels in total, consisting of 14 chs of EEG and 3 chs of BS. All EEG data were preprocessed at a sampling frequency of 128 Hz, with ECG and GSR downsampled from 256 Hz to 128 Hz. To remove artifacts, we applied band-pass filters of approximately 8–45 Hz to the EEG and 20–128 Hz to the ECG, along with a 5 Hz low-pass filter to the GSR, according to [40]. These filtering ranges were chosen because they carry information about changes in emotional states in each biological signal, including EEG [41].

2.2 DEAP datasets

EEG signals were recorded using a Biosemi ActiveTwo system containing 32 electrodes for the FP1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, OZ, PZ, FP2, AF4, FZ, F4, F8, FC6, FC2, CZ, C4, T8, CP6, CP2, P4, P8, PO4, and O2 channels. Simultaneously, eight BSs were recorded, including horizontal and vertical electrooculography (EOG), zygomaticus and trapezius EMGs, GSR, respiration, plethysmograph, and temperature. Each person thus had 40 channels in total, consisting of 32 chs of EEG and 8 chs of BS. The preprocessing stage for all data was similar to that of AMIGOS. Specifically, to remove artifacts, we additionally applied a band-pass filter of approximately 8–45 Hz to the EEG. Except for the EEG, the other BSs were filtered with a 5 Hz low-pass filter in accordance with [40].
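The preprocessing for both datasets can be sketched in a few lines. This is a minimal illustration on synthetic data: the paper states only the cut-off frequencies and the 128 Hz sampling rate, so the filter family and order here (zero-phase Butterworth, 4th order) are assumptions.

```python
# Minimal sketch of the filtering described above, applied to synthetic data.
# The filter family and order are assumptions (zero-phase Butterworth, 4th
# order); the text states only the cut-off frequencies and 128 Hz sampling.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # sampling frequency (Hz) after downsampling, per both datasets

def bandpass(x, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase band-pass filter, e.g. 8-45 Hz for EEG."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def lowpass(x, cut_hz, fs=FS, order=4):
    """Zero-phase low-pass filter, e.g. 5 Hz for GSR and the other BSs."""
    b, a = butter(order, cut_hz / (fs / 2), btype="low")
    return filtfilt(b, a, x)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 5)       # 5 s synthetic EEG channel
gsr = rng.standard_normal(FS * 5)       # 5 s synthetic GSR channel
eeg_filtered = bandpass(eeg, 8, 45)     # EEG: 8-45 Hz band-pass
gsr_filtered = lowpass(gsr, 5)          # GSR: 5 Hz low-pass
```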

2.3 Signal combinations for classification performance comparison

We used two combinations (Case 1 combination listed in Table 2 and Case 2 combination listed in Table 3) of EEG signals combined with BSs and/or PSs for our classification and predictive study.

• Case 1 combination: As listed in Table 2, we selected the combination of EEG only, EEG+BS, and EEG+PS to observe the combined effects on the classified error rates using SVM and LSTM. In AMIGOS, PSs generated by random functions had the same data format as that of BSs for three channels. Similarly, in DEAP, the generated PSs matched the data format with BSs for eight channels.
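The channel stacking for the Case 1 combinations can be sketched as follows, using the AMIGOS channel counts (14 chs EEG, 3 chs BS). The data here are synthetic stand-ins, and the uniform PS distribution is an assumption; the paper specifies only that PSs are produced by random generators in the same data format as the BSs.

```python
# Illustrative sketch of the Case 1 combinations with AMIGOS channel counts
# (14 ch EEG + 3 ch BS). Synthetic stand-in data; the PS distribution
# (uniform) is an assumption, not stated in the text.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 128 * 5                           # 5 s at 128 Hz

eeg = rng.standard_normal((14, n_samples))    # stand-in for 14 EEG channels
bs = rng.standard_normal((3, n_samples))      # stand-in for 3 BS channels
ps = rng.random((3, n_samples))               # PS with the same shape as BS

eeg_only = eeg                                # 14 channels
eeg_bs = np.vstack([eeg, bs])                 # 17 channels
eeg_ps = np.vstack([eeg, ps])                 # 17 channels
```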

Table 2. Case 1 combination: Three types of dataset combinations: ‘EEG Only’, ‘EEG+BS’, and ‘EEG+PS’


• Case 2 combination: To explore the effects of combining BSs in the AMIGOS and DEAP datasets, we compared the classification error rates of EEG only and EEG+BS by adding multiples of three or eight PS channels, as listed in Table 3. The added number of PS channels was set to be the multiple of the number of BS channels in the two datasets.

Table 3. Case 2 combination: Original datasets combined with various numbers of PS channels.


2.4 PS artificially created by random number generators

Pseudo-random numbers were produced using random number generators. The quality of the random number generator is important because it can affect the error rates for emotion classification. One way to characterize the quality of a random generator is its periodicity. The precision of a random number generator is expressed by its period: the larger the period, the stronger the randomness and the longer the processing time; conversely, the smaller the period, the weaker the randomness [42]. Based on this principle, we generated PSs using three random generators with different periods, as listed in Table 4, focusing on how close each is to a truly random function.
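The paper does not name the three generators it used, so as a hedged illustration the sketch below uses NumPy's bit generators, whose periods span many orders of magnitude and could stand in for high-, medium-, and low-randomness conditions.

```python
# Hypothetical stand-ins for generators of different periods: NumPy's
# MT19937 (period 2**19937 - 1), Philox (2**256), and PCG64 (2**128).
# These are illustrative choices; the actual generators are unnamed.
import numpy as np

bit_generators = {
    "high": np.random.MT19937(seed=1),    # period 2**19937 - 1
    "medium": np.random.Philox(seed=1),   # period 2**256
    "low": np.random.PCG64(seed=1),       # period 2**128
}

n_samples = 128 * 5
ps_by_randomness = {
    level: np.random.Generator(bg).random(n_samples)
    for level, bg in bit_generators.items()
}
```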

Table 4. Error rates based on Case 2 combination (EEG+BS+PS and EEG+PS) with PS randomness using SVM.


2.5 Feature extractions and reliable classification results

We used feature extraction to obtain useful information from the raw data. Feature extraction is divided into three main categories: time-domain, frequency-domain, and time-frequency-domain analysis. We focused on frequency analysis, which facilitates understanding of the transient characteristics of physiological signals, including EEG signals. We extracted features from the preprocessed AMIGOS and DEAP datasets using a 5 s response from the task and rest states, which yields relatively low error rates. This range corresponds to the middle part of the signal, excluding the beginning and end of each recording, which are highly prone to movement artifacts [9]. We utilized four extraction methods: power spectral density (PSD) and the fast Fourier transform (FFT), representative feature extraction methods for frequency analysis, as well as independent component analysis (ICA) and principal component analysis (PCA), which are also frequently used. Unlike our research, most studies have demonstrated that data combining EEG with BSs contribute to classification performance without considering the adverse effects (high classification error rates) of BSs, asserting only the advantages of BSs based on these four easy-to-implement features [43, 44].

The PSD is independent of the frequency resolution, further facilitating the comparison of fluctuating signals. The PSD is calculated by Fourier transforming the estimated autocorrelation sequence, which is found by nonparametric methods [45]. One of these methods is Welch’s method of estimating the PSD, which represents the energy density distribution of power over the frequency domain of a signal. For the PSD, we divided the filtered signals into segments of 100 samples to obtain the Welch PSD estimate. Each segment was multiplied by a 100-sample Hamming window, with 50 overlapping samples between segments. The length of the discrete Fourier transform (DFT) was $N = 256$, which produced a frequency resolution of $2\pi/256$ rad/sample. The PSD feature therefore had $N/2 + 1 = 129$ points, where $N$ refers to the length of the DFT. Unlike the PSD, the FFT depends on frequency intervals, transforming data that vary over time into frequency components. However, it is not known whether a signal occurs at a certain point in time, because the Fourier transform extracts the frequency components generated over the entire time domain. In this study, FFT features were computed from the original signals as 129 samples, the same length as the PSD features. ICA-based feature extraction attempts to find linear transformations that keep the data as independent as possible, whereas PCA seeks to extract data-driven features that reflect human perceptual characteristics well. We extracted ICA and PCA features with the same number of samples as the PSD and FFT.
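The parameters above (100-sample Hamming window, 50-sample overlap, 256-point DFT) can be reproduced in a short sketch; the synthetic input is an assumption, but the windowing and DFT lengths follow the text, and both feature vectors come out at 129 points.

```python
# Sketch of the PSD and FFT feature extraction with the stated parameters:
# a 100-sample Hamming window, 50-sample overlap, and a 256-point DFT,
# which yields N/2 + 1 = 129 feature points per channel.
import numpy as np
from scipy.signal import welch

FS = 128
rng = np.random.default_rng(0)
x = rng.standard_normal(FS * 5)          # one 5 s synthetic channel

# Welch PSD estimate: 129 points (nfft // 2 + 1)
freqs, psd = welch(x, fs=FS, window="hamming", nperseg=100,
                   noverlap=50, nfft=256)

# FFT magnitude features over a 256-point DFT: the same 129-point length
fft_feat = np.abs(np.fft.rfft(x, n=256))
```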

We also implemented two commonly used classifiers, SVM and LSTM, to investigate whether the simultaneous use of EEG and BSs is useful for improving classification and prediction. We calculated error rates as the classification performance indicator because neural networks perform training and testing based on errors. Moreover, we compared the classification and prediction of EEG signals with added BS and/or PS to determine whether performance improvements over EEG+BS are explained by deep double descent [46] and whether degradations are related to the curse of dimensionality [47].

The SVM classifier was trained using a radial-basis kernel function. We used 10-fold cross-validation, in which the total dataset was divided into 90 % training data and 10 % test data, and classification tests were performed on the 10 % test set. The test error rates were calculated ten times and then averaged. This process was used to avoid overfitting, which increases errors on real-world datasets through overlearning of the training sets. The error rate was calculated as the number of incorrect predictions divided by the total number of predictions. We verified whether the calculated error rates were reliable based on the AUC, the area under the receiver operating characteristic (ROC) curve. The AUC is obtained by measuring the entire two-dimensional area under the ROC curve, whose x-axis and y-axis represent the false positive rate (FPR) and the true positive rate (TPR), respectively. The FPR, also known as the probability of a false alarm, represents the false acceptance rate; the TPR is the true acceptance rate, the opposite concept of the FPR. The lower the FPR and the higher the TPR, the more reliable the classification results, and the larger the AUC value, the higher the reliability. For reliable classification results, the AUC should fall within the range of 0.5–1 (chance-level performance at 0.5, and the most reliable performance at 1).
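A minimal sketch of this pipeline is shown below: an RBF-kernel SVM, 10-fold cross-validated error rate (incorrect predictions over total predictions), and AUC validation. The synthetic features and the injected class difference are assumptions standing in for the extracted EEG/BS features.

```python
# Minimal sketch of the classification pipeline: RBF-kernel SVM, 10-fold
# cross-validated error rate, and AUC validation. Synthetic features stand
# in for the extracted EEG/BS features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 129))          # 200 trials x 129 features
y = np.repeat([0, 1], 100)                   # rest (0) vs. task (1) labels
X[y == 1] += 1.0                             # inject a class difference

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = SVC(kernel="rbf", probability=True)

scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
pred = (scores >= 0.5).astype(int)

error_rate = np.mean(pred != y)              # incorrect / total predictions
auc = roc_auc_score(y, scores)               # should exceed 0.5 to be reliable
```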

We defined an LSTM network consisting of a layer with 200 hidden units and a dropout layer with the default dropout probability of 0.2 to prevent overfitting. Furthermore, adaptive moment estimation (ADAM) [48] was used for training with a learning rate of 0.005. To prevent exploding gradients, the gradient threshold was set to one during training. We decreased the learning rate after 125 epochs by multiplying it by a factor of 0.2. The root-mean-square error (RMSE) was used as the regression loss function. The LSTM network was evaluated with 10-fold cross-validation, and each cross-validation iteration ran for 250 epochs. The LSTM results were also verified by the AUC.

2.6 Pearson’s correlation of EEG, EEG+BS, and EEG+PS

Pearson correlation was analyzed to measure the strength of linearity between two variables. A linear relationship is said to hold when the change in one variable is proportional to the change in the other. Strong linearity between two variables means that their relationship is well approximated by a straight line in the x-y plane. We calculated Pearson correlation coefficients to determine the relationships between EEG channels, and between EEG and BS or PS, in the AMIGOS and DEAP datasets.
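The correlation analysis can be sketched on synthetic data: two strongly coupled channels (stand-ins for a frontal EEG pair such as F3/F4) versus an independent pseudo-random signal. The coupling strength is an illustrative assumption.

```python
# Sketch of the Pearson correlation analysis: a correlated channel pair
# (stand-ins for F3/F4) versus an independent pseudo-random signal.
import numpy as np

rng = np.random.default_rng(42)
f3 = rng.standard_normal(1000)                     # stand-in EEG channel
f4 = 0.8 * f3 + 0.2 * rng.standard_normal(1000)    # correlated EEG channel
ps = rng.standard_normal(1000)                     # independent PS channel

r_eeg = np.corrcoef(f3, f4)[0, 1]   # strong positive correlation
r_ps = np.corrcoef(f3, ps)[0, 1]    # near zero
```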

2.7 Statistical analysis

Levene’s test was used to assess the homogeneity of variance of the features. Thereafter, a one-way analysis of variance (ANOVA) was applied to determine the differences in classification error rates among EEG only, EEG+BS, and EEG+PS for the four features (ICA, PCA, PSD, and FFT) based on the SVM. Dunnett’s T3 test was conducted as a post hoc test based on the studentized maximum modulus [49].
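The statistical steps above can be sketched with SciPy on synthetic error rates (the group means and sample sizes are assumptions). Dunnett's T3 post hoc test is not available in SciPy and is omitted from the sketch.

```python
# Sketch of the statistical pipeline: Levene's test for homogeneity of
# variance, then a one-way ANOVA across the three combinations. The group
# means/sizes are synthetic assumptions; Dunnett's T3 post hoc is omitted
# (not available in SciPy).
import numpy as np
from scipy.stats import levene, f_oneway

rng = np.random.default_rng(1)
err_eeg = rng.normal(5.0, 1.0, 30)       # synthetic error rates, EEG only
err_eeg_bs = rng.normal(6.7, 1.0, 30)    # EEG+BS
err_eeg_ps = rng.normal(5.5, 1.0, 30)    # EEG+PS

w, p_levene = levene(err_eeg, err_eeg_bs, err_eeg_ps)   # variance homogeneity
f, p_anova = f_oneway(err_eeg, err_eeg_bs, err_eeg_ps)  # group mean differences
```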

IV. Results

3.1 Error rates based on Case 1 combination (EEG Only, EEG+BS, and EEG+PS) using SVM

We calculated the error rates of the task classification for the Case 1 combinations (EEG only, EEG+BS, and EEG+PS) using the SVM with the four feature extraction methods. We sought to determine whether the simultaneous use of EEG and BS, compared with the other combinations, improves the error rates for task classification. Fig. 2 illustrates the classified error rates for the classification between the task and rest states in AMIGOS (Fig. 2(a)) and DEAP (Fig. 2(b)) using the SVM.


Fig. 2. Bar graphs showing the classified error rates using SVM for binary classification (i.e., task vs. rest states) based on the four feature extractions: (a) AMIGOS dataset divided into ‘EEG only’ (14 ch EEG), EEG+BS (14 ch EEG and 3 ch bio-signals), and EEG+PS (14 ch EEG and 3 ch pseudo-random signals). Note that the PCA result has an AUC value of 0.27 and is unreliable, as it does not meet the criterion of an AUC greater than 0.5. (b) DEAP dataset divided into EEG only (32 ch EEG), EEG+BS (32 ch EEG and 8 ch bio-signals), and EEG+PS (32 ch EEG and 8 ch pseudo-random signals). Note that EEG+PS shows a lower error rate than EEG+BS in both the AMIGOS and DEAP datasets under all four feature extraction methods.

In Fig. 2(a), the classified error rates increase in the order of FFT, PSD, ICA, and PCA; lower error rates indicate higher classification accuracies. The PCA result, with an AUC value of 0.27, was unreliable because it did not meet the criterion of an AUC larger than 0.5. Furthermore, there was a statistically significant difference in the error rates of ICA and FFT for the Case 1 combinations (p < .05*), indicating a distinct error difference among the three combinations (EEG only vs. EEG+BS vs. EEG+PS) in ICA and FFT. For the FFT method, the error rates increase in the order of EEG only (0.2 %), EEG+PS (2.0 %), and EEG+BS (6.7 %). For the ICA method, the error rates increase in the order of EEG+PS (60.1 %), EEG only (64.2 %), and EEG+BS (68.8 %). Based on FFT-SVM, which had the highest classification accuracy, EEG+BS shows 6.5 % and 4.7 % higher error rates than EEG only and EEG+PS, respectively. As a result, we found that the concurrent use of EEG+BS is less accurate than EEG only or EEG+PS. This result contrasts with previous findings in which EEG+BS exhibited high performance (low error rates) [50].

Similarly, the classified error rates in Fig. 2(b) increase in the order of FFT, ICA, PSD, and PCA. In both the FFT and ICA methods, statistically significant differences were observed in the error rates, meaning that there is a distinct error difference among the three combinations (EEG only vs. EEG+BS vs. EEG+PS) in ICA and FFT. Specifically, EEG only (5.0 %) and EEG+PS (5.0 %) showed lower error rates than EEG+BS (6.0 %) for FFT. In ICA, EEG only (47.0 %) and EEG+PS (48.0 %) also showed lower error rates than EEG+BS (51.0 %). Thus, EEG+BS is still less accurate than EEG+PS, the same result as in Fig. 2(a). We expected EEG+BS to achieve the highest classification performance (lowest error rates), because BS is measured simultaneously with EEG and is directly related to physiological responses; however, we obtained the opposite result. EEG+PS achieves higher classification performance than EEG+BS, even though PS is an arbitrarily generated signal unrelated to physiological responses during EEG measurement.

3.2 Validation of SVM classifier for four feature extraction methods

To check the reliability of the classified error rates from the SVM classifier, we used the AUC, corresponding to the area under the ROC curve. Fig. 3 depicts the two-dimensional ROC curves for the four extraction methods in AMIGOS (Fig. 3(a)) and DEAP (Fig. 3(b)). As the AUC increases, the classification results become more reliable, further indicating that the FFT yields the most reliable error rates in both datasets.


Fig. 3. Two-dimensional ROC curves for verifying the accuracy of the SVM classifier by each feature extraction method: (a) AMIGOS, and (b) DEAP. Model validation used the AUC, the area under the ROC curve. The closer the AUC value is to one, the more reliable the classifier. In both datasets, the FFT results are the most reliable. Fig. 3(a) indicates that the AUC decreases in the order of FFT, PSD, ICA, and PCA. Model validation requires an AUC of 0.5 or higher; our AUC levels exceed 0.5 for all methods except PCA. Fig. 3(b) indicates that the AUC decreases in the order of FFT, PCA, PSD, and ICA, with AUC levels greater than 0.5 for all methods.

Fig. 3(a) depicts reliable results, except for the PCA, in the order of FFT, PSD, ICA, and PCA, the same order as the SVM classification results. The AUC value ranges from 0 to 1 (maximum) and should be at least greater than 0.5 for the classified error rate of an extraction method to be reliable. The AUC values of FFT, PSD, ICA, and PCA were 1.00, 0.65, 0.51, and 0.27, respectively; thus, the classified error rates using PCA were not reliable for the AMIGOS dataset with the SVM classifier. In contrast, Fig. 3(b) illustrates highly reliable results in the order of FFT (1.00), PCA (0.71), PSD (0.69), and ICA (0.64). All four methods had AUC values larger than 0.5, indicating reliable classification results for the DEAP dataset with the SVM classifier. An extreme result such as the FFT’s AUC of 1 could stem from overfitting or underfitting, but according to prior studies [51-52], we excluded this possibility because the superiority of the FFT has already been demonstrated.

3.3 Correlations of EEG, EEG+BS, and EEG+PS

To systematically investigate the correlations between EEG and BSs, we calculated Pearson’s correlation coefficients for EEG only, EEG+BS, and EEG+PS to assess the effects of BS in both AMIGOS and DEAP.

Fig. 4 illustrates (a) the correlation between the F3 and F4 EEG channels, both in the frontal region associated with emotions, and (b) the correlation between F3 (EEG) and GSR (BS) from AMIGOS. Fig. 4(a) depicts a strong positive correlation (r = .738, p < .01**), whereas Fig. 4(b) depicts no discernible relationship (r = .007, NS). The correlations between other EEG channel pairs were similarly strong, as in Fig. 4(a). The blue line in Fig. 4(a) shows the linear relationship, implying that the F3 channel varies in the same direction as the F4 channel. However, GSR has no relation to the F3 channel, as shown in Fig. 4(b).


Fig. 4. Correlation analysis (a) between F3 and F4 in EEGs, and (b) between F3 in EEGs and GSR in BSs from AMIGOS. A positive correlation is observed between F3 and F4; however, no relevance is observed between F3 and GSR. This indicates that no relationship exists between EEG and BS, implying poor error rates for EEG+BS.

Fig. 5 depicts the correlations (a) between the Fp1 and Fp2 channels, both in the frontal region associated with emotions, and (b) between tEMG and Fp1 from DEAP. These results are similar to those obtained with AMIGOS in Fig. 4. Fig. 5(a) depicts a positive correlation (r = .611, p < .01**). Given the disorderly relationship between tEMG and Fp1 (r = .002, NS), we expected poor classification performance. The correlation patterns between EEG channels, and between EEG signals and BSs, are similar in AMIGOS and DEAP (Fig. 4 and Fig. 5).


Fig. 5. Correlation analysis (a) between Fp2 and Fp1 in EEGs, and (b) between tEMG in BSs and Fp1 in EEGs from DEAP. A positive correlation is observed between Fp2 and Fp1; however, no relevance is observed between Fp1 and tEMG. This indicates that no relationship exists between EEG and BS, implying poor error rates for EEG+BS.

We further calculated the correlation coefficient between one channel among the EEG signals and one PS signal generated by the highly randomized generator listed in Table 4. We found weak correlations between F3 (EEG) and PS in AMIGOS (r = .123, p < .05*) and between Fp1 (EEG) and PS in DEAP (r = -.135, p < .05*). Although no significant correlations were found across all of the PS signals, some correlations appeared at the number of PS channels with the lowest error rate. Presumably, one reason why EEG+BS exhibited poorer classification performance than EEG only or EEG+PS may be the lack of correlation between EEGs and BSs.

3.4 Error rates based on Case 2 combination (EEG+BS+PS and EEG+PS) with PS randomness using SVM

To systematically investigate the classification error rates depending on artificial signals such as PSs, we calculated the error rates using the SVM classifier for two data combinations, EEG+BS+PS and EEG+PS, with three different levels of PS randomness, as listed in Table 4. Figs. 6 (AMIGOS) and 7 (DEAP) show the classified error rates of the four feature extraction methods using the SVM for EEG+BS+PS and EEG+PS at the three randomness levels.

In Fig. 6, the averaged classification error rates for AMIGOS decreased in the order PCA, ICA, PSD, and FFT, regardless of whether the combination was EEG+BS+PS (77.823, 63.733, 48.390, and 6.787%) or EEG+PS (75.880, 62.783, 49.073, and 4.647%). As depicted in Fig. 6(a), EEG+PS has lower error rates than EEG+BS+PS, except for PSD (almost equal or slightly higher). However, for medium and low randomness, EEG+PS has higher error rates than EEG+BS+PS, except for ICA (Fig. 6(b)) and PCA (Fig. 6(c)). Hence, it is difficult to draw conclusions based on randomness alone.


Fig. 6. Averaged classification error rates using SVM with PS randomness for EEG+BS+PS and EEG+PS from AMIGOS: (a) high randomness in PS, (b) medium randomness in PS, and (c) low randomness in PS. Here, PCA should be excluded because its AUC value (0.27) does not meet the criterion. PS was obtained from the random generators listed in Table 4 and is not related to physiological states at all. Regardless of whether BS+PS or PS is combined with EEG, the error rates decrease in the order PCA, ICA, PSD, and FFT at all randomness levels. This suggests that the classification results are influenced by the feature extraction method, not by randomness.

Fig. 7 depicts the averaged classification error rates obtained by the extraction methods using the DEAP dataset. The error rates for EEG+BS+PS in the order PCA, ICA, PSD, and FFT were 58.667, 51.700, 50.967, and 15.800%, and those for EEG+PS were 53.533, 50.833, 50.733, and 17.933%. This indicates slightly lower error rates for EEG+PS than for EEG+BS+PS, except for FFT. However, no clear differences in error rates between EEG+BS+PS and EEG+PS were observed across randomness levels. We found differences in error rates according to the feature extraction method, but failed to find any significant dependence on randomness. Therefore, any random number generator can be used to create the PS signals in our study.


Fig. 7. Averaged classification error rates using SVM with randomness for EEG+BS+PS and EEG+PS from DEAP: (a) high randomness in PS, (b) medium randomness in PS, and (c) low randomness in PS. BS was obtained in human emotional states. PS was obtained from the random generators listed in Table 4 and is not related to any physiological state. For DEAP, the FFT feature extraction method has the lowest error rate at all randomness levels, and the remaining extraction methods show similar error rates. This suggests that, except for FFT, both randomness and the feature extraction method are irrelevant to the classification results.

3.5 Error rates of EEG+BS and EEG Only by adding PS channels based on Case 2 combination using SVM

We generated varying numbers of PS channels from dsfmt19937, a Mersenne Twister-style generator that is among the most frequently used and is treated as a high-randomness generator. The generated PS was added to EEG+BS and EEG only to investigate the error rates for classifying the task and rest states. We calculated the classification error rates while increasing the number of PS channels added to EEG+PS and EEG+BS+PS. Fig. 8 depicts the grand-averaged error rates over 10 runs of three extraction methods (excluding PCA because of its low AUC value, less than 0.5) using SVM for the binary classification (i.e., task vs. rest states) in AMIGOS (a) and DEAP (b). For the EEG only+PS case, the classification error rates show a decreasing tendency for the AMIGOS dataset and an increasing tendency for the DEAP dataset as the number of PS channels increases. However, for the EEG+BS+PS case, the error rates initially decreased owing to deep double descent [36] and then increased owing to the curse of dimensionality [37] for both datasets as the number of PS channels increased. The inflection points of EEG+BS+PS appeared at 15-16 PS channels, as depicted in Fig. 8. Consistent with the deep double descent phenomenon, adding an appropriate number of PS data reduces the classification error rates. Although previous work has not yet revealed the cause of deep double descent, we may expect that hyperparameters affecting classifier performance, such as the learning rate and batch size, can be optimally estimated as the number of PSs approaches this inflection point. However, we found that adding PSs beyond this inflection point degrades the classification performance, consistent with the curse of dimensionality.
Overall, PS can regulate the computational performance (high and low error rates) of classification, which indicates that combining EEG+BS with an appropriate number of PSs can achieve enhanced performance. Unlike EEG+BS, EEG only does not show V-shapes, and the cause is unknown. In other words, when EEG is used alone, the V-shape appears neither in Fig. 8 using SVM nor in Fig. 11 using LSTM, whereas EEG+BS shows a V-shape in all cases.
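The channel-addition sweep described above can be sketched as follows, assuming numpy's MT19937 as a stand-in for dsfmt19937 and a toy two-class feature matrix in place of the real EEG+BS features (the class offset, sample counts, and channel counts are illustrative):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.Generator(np.random.MT19937(7))  # Mersenne Twister family

# Toy stand-in for an EEG+BS feature matrix: n samples, d base features.
n, d = 120, 8
X = rng.standard_normal((n, d))
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]
X[y == 1, :2] += 1.2                             # weak class separation

error_rates = []
for n_ps in range(0, 21, 5):                     # add 0, 5, 10, 15, 20 PS channels
    ps = rng.standard_normal((n, n_ps))          # appended pseudo-random channels
    acc = cross_val_score(SVC(), np.hstack([X, ps]), y, cv=5).mean()
    error_rates.append(100 * (1 - acc))
```

On toy data this curve need not reproduce the V-shape reported here; where the inflection point falls depends on the dataset, classifier, and feature extraction method.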


Fig. 8. Grand-averaged error rates for binary classification (task vs. rest states) from the ICA, PSD, and FFT feature extraction methods using SVM in (a) AMIGOS and (b) DEAP. We obtained inflection points for both datasets. For EEG+BS, as the number of PS channels increases, the error rates initially decrease and then increase. For EEG only, the error rates steadily decrease for AMIGOS and steadily increase for DEAP. A more obvious inflection point tends to appear for EEG+BS than for EEG only.

3.6 RMSE of Case 1 combination using LSTM

We calculated RMSEs, which represent the error gap between the observed data and the data forecast by the LSTM model for EEG+BS and EEG+PS. Thus, the larger the RMSE, the higher the predictive error. Figs. 9 and 10 depict the RMSEs of EEG+BS and EEG+PS for AMIGOS and DEAP, respectively. The blue dotted lines are the actually observed data, and the orange solid lines are the data predicted by the LSTM model. In AMIGOS, the RMSEs of EEG+BS and EEG+PS were 1.296 and 1.210, respectively. This suggests that EEG+BS has a higher predictive error (poorer performance) than EEG+PS, whose PS was arbitrarily generated. These results met the AUC reliability criterion (greater than 0.5). In DEAP, Fig. 10 depicts an RMSE of 785.902 for EEG+BS and 0.668 for EEG+PS, but the AUC value of EEG+BS does not meet the reliability criterion. Therefore, the RMSEs for DEAP cannot be compared.
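The RMSE metric itself is straightforward; a minimal numpy sketch (the observed and forecast sequences are arbitrary examples, not the LSTM outputs):

```python
import numpy as np

def rmse(observed, forecast):
    """Root-mean-square error between observed and forecast sequences."""
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean((observed - forecast) ** 2)))

obs = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.5, 2.0, 2.5, 4.0])
err = rmse(obs, pred)  # sqrt((0.25 + 0 + 0.25 + 0) / 4) = sqrt(0.125) ≈ 0.354
```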


Fig. 9. RMSEs using LSTM for predicting the AMIGOS dataset: (a) EEG+BS and (b) EEG+PS. The RMSEs for EEG+BS and EEG+PS are 1.296 and 1.210, respectively. EEG+BS shows a higher RMSE than EEG+PS, suggesting that EEG+BS has lower performance (higher predictive error) than EEG+PS.


Fig. 10. RMSEs using LSTM for predicting the DEAP dataset: (a) EEG+BS and (b) EEG+PS. The RMSEs are 785.902 and 0.668 for EEG+BS and EEG+PS, respectively; thus, EEG+BS shows a higher RMSE than EEG+PS. However, the LSTM classification results are unreliable because the AUC value for EEG+BS is less than 0.5, so the RMSEs for DEAP cannot be compared.

In contrast to existing studies reporting the superiority of EEG+BS with lower error rates [53], we obtained the opposite result: combining EEG with arbitrarily generated PS yielded lower error rates. We should therefore be careful not to unconditionally combine EEG with BSs related to emotional states in the hope of improving predictive performance.

3.7 RMSE for LSTM between EEG+BS and EEG Only based on the Case 2 combination

We investigated RMSEs based on the Case 2 combination using LSTM to compare EEG+BS+PS and EEG only+PS. Fig. 11 depicts the RMSEs as a function of the number of added PS channels using LSTM for (a) AMIGOS and (b) DEAP. For EEG only+PS, the RMSEs exhibited a decreasing and then fluctuating tendency for the AMIGOS dataset and an increasing tendency for the DEAP dataset as the number of PS channels increased. For EEG+BS+PS, the RMSEs initially decreased owing to the deep double descent phenomenon and then increased owing to the curse of dimensionality for both datasets as the number of PS channels increased. For EEG+BS+PS, inflection points were captured for both datasets. The red dotted lines represent the inflection line of EEG+BS+PS, which first shows a decreasing and thereafter an increasing error-rate pattern. The inflection points occurred at approximately 16 added PS channels. The LSTM results were similar to those of the SVM. Thus, adding an appropriate number of dummy data to the original data improved the classification performance (lower RMSEs).


Fig. 11. RMSEs as a function of the number of added PS channels using LSTM: (a) AMIGOS and (b) DEAP. For EEG+BS, inflection patterns appear in both datasets: the RMSEs initially decrease and then increase. For EEG only, the RMSEs decrease and then fluctuate for AMIGOS, and steadily increase for DEAP.

V. Discussion

We compared the classification error rates using SVM and the RMSEs using LSTM for emotional state classification/prediction from EEG signals after adding BSs related to physiological states, such as ECG, GSR, EOG, EMG, respiration, plethysmograph, and temperature. One generally expects improved results when simultaneously measured BSs are added to EEG signals (i.e., EEG+BS). However, better classification performance was obtained by adding randomly generated PSs rather than measured BSs. Our results demonstrate that although EEG signals help in detecting emotional states, adding various BSs to EEG signals is not necessary to further improve classification performance. We obtained various results depending on the PS channels as well as on the data characteristics, feature extraction, and classification methods. Therefore, unconditionally combining EEG signals with BSs does not always improve classification and prediction performance. We observed higher classification performance (lower error rates) for EEG only or EEG+PS than for EEG+BS, and the classification performance of EEG+BS was also improved by adding an appropriate number of dummy PS signals.

4.1 Effects of combining EEG signals with BS

According to the Jamesian theory [54], emotions are only the perception of bodily changes, which emphasizes the important role of bodily responses in the study of emotions. Given the importance of these physiological responses, BSs combined with EEG can help to detect changes in human emotional states [55-57]. However, our work focused on whether the combination of EEG and BS yields any beneficial effect on the classification or prediction performance for two classes, rather than on detecting emotional changes from BS combined with EEG. Our results demonstrate that the error rates for EEG+BS in the binary classification between experimental tasks and rest are higher than those for EEG+PS or EEG only. This implies that EEG signals provide better detection of emotional states than other physiological signals; G. Chanel et al. obtained the same result [58]. We also speculate that combining EEG, which has high temporal resolution, with BSs such as GSR or skin temperature, which have relatively slow biological responses, results in a high error rate [59]. In other words, the temporal resolution, one of the characteristic factors that can be used to distinguish the data, varies among BSs, resulting in poor classification performance (high error rates). Therefore, consistent with existing studies, these differences between data features allow instantaneous emotion detection, but it was unknown whether they would improve performance in the computational classification process. Indeed, we have demonstrated that EEG+BS has a higher error rate than EEG only or EEG+PS, as depicted in Fig. 2. Ultimately, our results indicate that BSs combined with EEG do not always have positive effects on the classification or prediction of emotion recognition.

4.2 Effects of EEG Only and EEG+BS combined with numerous PS channels

As the number of PS channels increases, the error rates for EEG only show a decreasing tendency for the AMIGOS dataset and an increasing tendency for the DEAP dataset using SVM, as depicted in Fig. 8. However, regarding the V-pattern, as depicted in Fig. 11(a), EEG only does not always show linearly increasing or decreasing patterns. If more PSs were added, EEG only might also show a V-pattern; it may simply not be visible owing to the classifier type or data characteristics. On the other hand, the error rates for EEG+BS initially decreased and thereafter increased for both datasets as the number of PS channels increased; therefore, EEG+BS shows V-shaped error rates as the number of PSs increases. By adding an appropriate number of dummy PS channels, we obtained a rather high-performance classification result. This performance improvement from adding PS is the same as the deep double descent phenomenon demonstrated in a previous study [60]. Although the cause of this phenomenon is still unknown, classification performance can change depending on the optimal model size, the number of training sessions, and the amount of data. In this study, unlike EEG only, the deep double descent phenomenon for EEG+BS is clearly observed for up to 15-16 added PS channels, showing a low error rate. Adding PS channels beyond this appropriate number degrades the classification performance: the more PS channels are combined, the higher the observed error rates of EEG+BS, presumably because the combined data include much unnecessary information or the machine learning process fails to find optimized hyperparameters with lower error rates. This matches the curse of dimensionality [61, 62], which holds that a large amount of data does not always maintain lower error rates.
In other words, as the amount of data increases, the error rates increase because of the addition of unnecessary (collinear, redundant, or noisy) data or sparsity. The SVM results depicted in Fig. 8 show a trend similar to parts of the LSTM results depicted in Fig. 11. Thus, we suggest that the classification performance (error rates) of EEG+BS can be improved simply by adding arbitrarily generated PS data up to an inflection point.

4.3 Effects of classification error rates of EEG+PS and EEG+BS+PS with PS randomness

A recent study showed that mild noise can affect classification performance depending on the degree of randomness (high to low) [34]. However, we demonstrated that the classification results were not influenced by randomness, as depicted in Figs. 6 and 7.

We compared the error rates using SVM based on three random number generators with different degrees of randomness (dsfmt19937: high; mlfg633164: medium; mcg16807: low). We did not find any differences in the error rates owing to the randomness among them. However, the error rates were affected by the feature extraction method regardless of randomness: they decreased in the order PCA, ICA, PSD, and FFT, as depicted in Fig. 6. Figs. 7(a) and 7(c) depict the same pattern as Fig. 6, but Fig. 7(b) shows that the methods other than FFT have similar error rates. This is consistent with reports that error rates are affected by the data feature extraction method [63, 64]. Therefore, we could not find any differences in error rates due to randomness.

4.4 Correlation affecting the error rates of classification

We expected that the correlations between EEG signals and BSs might affect classification performance. In Figs. 4 and 5, the error rates of the PS and EEG combinations were lower than those of the BS and EEG combinations. This may be due to some correlation between EEG and PS and no correlation between EEG and BS. We obtained a low error rate especially when there was a correlation, or a high correlation, between EEG channels or between EEG and PSs. This is probably due to the large difference between the features extracted from each of the two mental states being classified. Our results are noteworthy in that BSs, which are not correlated with EEG, can potentially increase the error rates, whereas PS, which is linearly correlated with EEG, can potentially decrease the error rates. We interpret this as the correlation between physiological signals regulating the classification performance of mental states for emotion recognition.

VI. Conclusions

This study demonstrated that EEG signals combined with BSs, which were measured simultaneously and are related to physiological responses, did not always improve classification performance. We summarize this on two grounds.

First, among EEG only, EEG+BS, and EEG+PS, EEG+BS showed the highest error rate in both the AMIGOS and DEAP datasets using the SVM and LSTM models. Hence, combining EEG and BS does not always guarantee a low error rate.

Second, we showed that if PS signals are added appropriately, the error rate of EEG+BS can be improved, depending on the type of classifier. In this process, the error rate of EEG+BS shows a V-pattern, accompanied by deep double descent and the curse of dimensionality.

Therefore, the error rates obtained by combining BS with EEG are not guaranteed to be low. We believe that our work will provide a new paradigm for future emotion recognition research.

ACKNOWLEDGEMENT

This work was supported in part by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2023S1A5A8076043), and in part by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 20LTSM-B156015-01).

References

  1. L. Kessous et al., "Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis," J. Multimodal User Interfaces, Vol. 3, No. 1-2, pp. 33-48, March 2010. DOI: 10.1007/s12193-009-0025-5 
  2. A. Raouzaiou et al., "An intelligent scheme for facial expression recognition," Lecture Notes in Computer Science. Springer, pp. 1109-1116, 2003. DOI: 10.1007/3-540-44989-2_132 
  3. V. Gallese et al., "Action recognition in the premotor cortex," Brain, Vol. 119, No. 2, pp. 593-609, April 1996. DOI: 10.1093/brain/119.2.593 
  4. L. Carr et al., "Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas," Proc. Natl. Acad. Sci. U. S. A., Vol. 100, No. 9, pp. 5497-5502, April 2003. DOI: 10.1073/pnas.0935845100 
  5. T. Flaisch et al., "Emotion and the processing of symbolic gestures: An event-related brain potential study," Soc. Cogn. Affect. Neurosci., Vol. 6, No. 1, pp. 109-118, March 2010. DOI: 10.1093/scan/nsq022 
  6. M. Coulson, "Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence," J. Nonverbal Behav., Vol. 28, No. 2, pp. 117-139, June 2004. DOI: 10.1023/B:JONB.0000023655.25550.be 
  7. A. Vartanov et al., "Facial expressions and subjective assessments of emotions," Cogn. Syst. Res., Vol. 59, pp. 319-328, January 2020. DOI: 10.1016/j.cogsys.2019.10.005 
  8. P. Rani et al., A New Approach to Implicit Human-Robot Interaction Using Affective Cues. I-Tech Education and Publishing, pp. 233-252, December 2006.
  9. K. H. Kim et al., "Emotion recognition system using short-term monitoring of physiological signals," Med. Biol. Eng. Comput., Vol. 42, No. 3, pp. 419-427, February 2004. DOI: 10.1007/BF02344719 
  10. T. Tuncer et al., "A new fractal pattern feature generation function based emotion recognition method using EEG," Chaos Solitons Fract., Vol. 144, pp. 110671, March 2021. DOI: 10.1016/j.chaos.2021.110671 
  11. H. S. Friedman, Encyclopedia of Mental Health. Academic Press, pp. 1-786, 2015. 
  12. J. A. Onton and S. Makeig, "High-frequency broadband modulations of electroencephalographic spectra," Front. Hum. Neurosci., Vol. 3, pp. 61, December 2009. DOI: 10.3389/neuro.09.061.2009 
  13. A. R. Damasio et al., "Subcortical and cortical brain activity during the feeling of self-generated emotions," Nat. Neurosci., Vol. 3, No. 10, pp. 1049-1056, October 2000. DOI: 10.1038/79871 
  14. T. Song et al., "EEG emotion recognition using dynamical graph convolutional neural networks," IEEE Trans. Affect. Comput., Vol. 11, No. 3, pp. 532-541, March 2018. DOI: 10.1109/TAFFC.2018.2817622 
  15. Y.-L. Hsu et al., "Automatic ecg-based emotion recognition in music listening," IEEE Trans. Affect. Comput., Vol. 11, No. 1, pp. 85-99, March 2020. DOI: 10.1109/TAFFC.2017.2781732 
  16. N. Ravaja et al., "Virtual character facial expressions influence human brain and facial EMG activity in a decision-making game," IEEE Trans. Affect. Comput., Vol. 9, No. 2, pp. 285-298, June 2016. DOI: 10.1109/TAFFC.2016.2601101 
  17. F. Agrafioti et al., "ECG pattern analysis for emotion detection," IEEE Trans. Affective Comput., Vol. 3, No. 1, pp. 102-115, March 2011. DOI: 10.1109/T-AFFC.2011.28 
  18. A. Fehske et al., A New Approach to Signal Classification Using Spectral Correlation and Neural Networks, First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, pp. 144-150, Baltimore, MD, USA, December 2005. DOI: 10.1109/DYSPAN.2005.1542629 
  19. A. Tsanas et al., "Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease," IEEE Trans. Bio Med. Eng., Vol. 59, No. 5, pp. 1264-1271, May 2012. DOI: 10.1109/TBME.2012.2183367 
  20. Z. Khalili and M. H. Moradi, Emotion Recognition System Using Brain and Peripheral Signals: Using Correlation Dimension to Improve the Results of EEG, International Joint Conference on Neural Networks pp. 1571-1575, Atlanta, GA, USA, June 2009. DOI: 10.1109/IJCNN.2009.5178854 
  21. J. Kim et al., "Environmental distress and physiological signals: Examination of the saliency detection method," J. Comput. Civ. Eng., Vol. 34, No. 6, pp. 04020046, August 2020. DOI: 10.1061/(ASCE)CP.1943-5487.0000926 
  22. Q. Gao et al., "EEG based emotion recognition using fusion feature extraction method," Multimedia Tool. Appl., Vol. 79, No. 37, pp. 27057-27074, July 2020. DOI: 10.1007/s11042-020-09354-y 
  23. B. H. Kim and S. Jo, "Deep physiological affect network for the recognition of human emotions," IEEE Trans. Affective Comput., Vol. 11, No. 2, pp. 230-243, January 2018. DOI: 10.1109/TAFFC.2018.2790939 
  24. G. Van Der Vloed and J. Berentsen, "Measuring emotional wellbeing with a non-intrusive bed sensor," Springer, Vol. 5727, pp. 908-911, August 2009. DOI: 10.1007/978-3-642-03658-3_108 
  25. R. W. Picard et al., "Toward machine emotional intelligence: Analysis of affective physiological state," IEEE Trans. Pattern Anal. Machine Intell., Vol. 23, No. 10, pp. 1175-1191, October 2001. DOI: 10.1109/34.954607 
  26. J. T. Cacioppo et al., Handbook of Psychophysiology. Cambridge University Press, pp. 1-902, 2007. 
  27. J. A. McCubbin et al., "Cardiovascular emotional dampening: The relationship between blood pressure and recognition of emotion," Psychosom. Med., Vol. 73, No. 9, pp. 743-750, October 2011. DOI: 10.1097/PSY.0b013e318235ed55 
  28. P. Das et al., Emotion Recognition Employing ECG and GSR Signals as Markers of ANS, Conference on Advances in Signal Processing (CASP), pp. 37-42, Pune, India, June 2016. DOI: 10.1109/CASP.2016.7746134 
  29. S. E. Rimm-Kaufman and J. Kagan, "The psychological significance of changes in skin temperature," Motiv. Emot., Vol. 20, No. 1, pp. 63-78, March 1996. DOI: 10.1007/BF02251007 
  30. S. R. Vrana, "The psychophysiology of disgust: Differentiating negative emotional contexts with facial EMG," Psychophysiology, Vol. 30, No. 3, pp. 279-286, May 1993. DOI: 10.1111/j.1469-8986.1993.tb03354.x 
  31. Y. Luo et al., "EEG-based emotion classification using spiking neural networks," IEEE Access, Vol. 8, pp. 46007-46016, March 2020. DOI: 10.1109/ACCESS.2020.2978163 
  32. J. Yim and K.-A. Sohn, Enhancing the Performance of Convolutional Neural Networks on Quality Degraded Datasets, International Conference on Digital Image Computing: Techniques and Applications (DICTA) pp. 1-8, Sydney, NSW, Australia, November 2017. DOI: 10.1109/DICTA.2017.8227427 
  33. O. Russakovsky et al., "Imagenet large scale visual recognition challenge," Int. J. Comput. Vis., Vol. 115, No. 3, pp. 211-252, April 2015. DOI: 10.1007/s11263-015-0816-y 
  34. K. Sriwong et al., "Post-Operative Life Expectancy of Lung Cancer Patients Predicted by Bayesian Network Model," Int. J. Mach. Learn. Comput., Vol. 8, No. 3, pp. 280-285, June 2018. DOI: 10.18178/ijmlc.2018.8.3.700 
  35. A. Krizhevsky et al., "Imagenet classification with deep convolutional neural networks," Adv. Neural Inf. Process. Syst., Vol. 25, pp. 1097-1105, December 2012. 
  36. W.-L. Zheng et al., "Identifying stable patterns over time for emotion recognition from EEG," IEEE Trans. Affect. Comput., Vol. 10, No. 3, pp. 417-429, June 2017. DOI: 10.1109/TAFFC.2017.2712143 
  37. J. A. M. Correa et al., "Amigos: A dataset for affect, personality and mood research on individuals and groups," IEEE Trans. Affect. Comput., Vol. 12, No. 2, pp. 479-493, November 2018. DOI: 10.1109/TAFFC.2018.2884461 
  38. K. R. Scherer, "What are emotions? And how can they be measured?," Social Science Information, Vol. 44, No. 4, pp. 695-729, December 2005. DOI: 10.1177/053901840505821 
  39. X. Yong and C. Menon, "EEG classification of different imaginary movements within the same limb," PLOS ONE, Vol. 10, No. 4, pp. e0121896, April 2015. DOI: 10.1371/journal.pone.0121896 
  40. J. Hu et al., "Removal of EOG and EMG artifacts from EEG using combination of functional link neural network and adaptive neural fuzzy inference system," Neurocomputing, Vol. 151, pp. 278-287, March 2015. DOI: 10.1016/j.neucom.2014.09.040 
  41. K. B. Doelling et al., "Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing," Neuroimage, Vol. 85, No. 2, pp. 761-768, January 2014. DOI: 10.1016/j.neuroimage.2013.06.035 
  42. W.-Y. Jeong and S.-K. Lee, "A study on the self-key generation algorithm for security elevation in Near field communications," The J. Korea Inst. Electron. Commun. Sci., Vol. 7, No. 5, pp. 1027-1032, October 2012. DOI: 10.13067/JKIECS.2012.7.5.1027 
  43. A. R. Subhani et al., "Machine learning framework for the detection of mental stress at multiple levels," IEEE Access, Vol. 5, pp. 13545-13556, July 2017. DOI: 10.1109/ACCESS.2017.2723622 
  44. F. Lotte et al., "A review of classification algorithms for EEG-based brain-computer interfaces," J. Neural Eng., Vol. 4, No. 2, pp. R1-R13, January 2007. DOI: 10.1088/1741-2560/4/2/R01 
  45. O. Faust et al., "Analysis of EEG signals during epileptic and alcoholic states using AR modeling techniques," IRBM, Vol. 29, No. 1, pp. 44-52, March 2008. DOI: 10.1016/j.rbmret.2007.11.003 
  46. P. Nakkiran et al., "Deep double descent: Where bigger models and more data hurt," Journal of Statistical Mechanics: Theory and Experiment, Vol. 2021, No. 12, pp. 124003, December 2021. DOI: 10.1088/1742-5468/ac3a74 
  47. S. Kpotufe, The Curse of Dimension in Nonparametric Regression. San Diego: UC, pp. 1-123, 2010. 
  48. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv Preprint ArXiv:1412.6980, 2014. 
  49. C. W. Dunnett, "Pairwise multiple comparisons in the unequal variance case," J. Am. Stat. Assoc., Vol. 75, No. 372, pp. 796-800, December 1980. DOI: 10.1080/01621459.1980.10477552 
  50. J. Liao et al., "Multimodal Physiological Signal Emotion Recognition Based on Convolutional Recurrent Neural Network." IOP conference series: materials science and engineering IOP Publishing, Vol. 782, No. 3, pp. 032005, March 2020. DOI: 10.1088/1757-899X/782/3/032005 
  51. R. N. Duan, X. W. Wang, and B. L. Lu, "EEG-based emotion recognition in listening music by using support vector machine and linear dynamic system," International Conference, pp. 12-15, Springer, Berlin, Heidelberg, November 2012. DOI: 10.1007/978-3-642-34478-7_57 
  52. D. Nie, X. W. Wang, L. C. Shi, and B. L. Lu, "EEG-based emotion recognition during watching movies," International IEEE/EMBS Conference on Neural Engineering, pp. 667-670, Cancun, Mexico, April 2011. DOI: 10.1109/NER.2011.5910636 
  53. T. Song et al., "MPED: A multi-modal physiological emotion database for discrete emotion recognition," IEEE Access, Vol. 7, pp. 12177-12191, January 2019. DOI: 10.1109/ACCESS.2019.2891579 
  54. G. Hatfield, "Did Descartes have a Jamesian theory of the emotions?," Philos. Psychol., Vol. 20, No. 4, pp. 413-440, August 2007. DOI: 10.1080/09515080701422041 
  55. B. Rim et al., "Deep learning in physiological signal data: A survey," Sensors (Basel), Vol. 20, No. 4, pp. 969, February 2020. DOI: 10.3390/s20040969 
  56. Y. Liu et al., "Multisubject 'learning' for mental workload classification using concurrent EEG, fNIRS, and physiological measures," Front. Hum. Neurosci., Vol. 11, pp. 389, July 2017. DOI: 10.3389/fnhum.2017.00389 
  57. W. Lin et al., "Deep convolutional neural network for emotion recognition using EEG and peripheral physiological signal," Lecture Notes in Computer Science, Vol. 10667, pp. 385-394, December 2017. DOI: 10.1007/978-3-319-71589-6_33 
  58. G. Chanel et al., "Emotion assessment: Arousal evaluation using EEG's and peripheral physiological signals," Lecture Notes in Computer Science, Vol. 4105, pp. 530-537, September 2006. DOI: 10.1007/11848035_70 
  59. S. Siddharth et al., "Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing," IEEE Trans. Affect. Comput., Vol. 13, No. 1, pp. 96-107, May 2019. DOI: 10.1109/TAFFC.2019.2916015 
  60. G. R. Kini and C. Thrampoulidis, Analytic Study of Double Descent in Binary Classification: The Impact of Loss, IEEE International Symposium on Information Theory (ISIT) pp. 2527-2532, Los Angeles, CA, USA, August 2020, DOI: 10.1109/ISIT44484.2020.9174344 
  61. J.-H. Yu and D.-H. Kim, Spectral Matching Using Range Queries Based on Pyramid-Technique in Hyperspectral Image Library, The Korea Contents Association, pp. 83-84, May 2011. 
  62. E. Novak and K. Ritter, "The curse of dimension and a universal method for numerical integration," Multivariate Approximation and Splines, ISNM International Series of Numerical Mathematics, Springer, Birkhauser, Basel, Vol. 125, pp. 177-187, January 1997. DOI: 10.1007/978-3-0348-8871-4_15 
  63. G. Zhao et al., "Multi-target positive emotion recognition from EEG signals," IEEE Trans. Affect. Comput., Vol. 14, No. 1, pp. 370-381, December 2020. DOI: 10.1109/TAFFC.2020.3043135 
  64. J. Hwang and J. W. Yoon, "Random noise addition for detecting adversarially generated image dataset," The J. Korea Inst. Inf. Electron. Commun. Technol., Vol. 12, No. 6, pp. 629-635, December 2019. DOI: 10.17661/jkiiect.2019.12.6.629