• Title/Summary/Keyword: Facial signals


A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.1-6
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions (happiness, sadness, anger, surprise, fear, and dislike) are investigated. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, and feature vectors are obtained through ICA (Independent Component Analysis). For the speech signal, the recognition algorithm is performed independently for each wavelet subband, and the final result is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
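The merging step the abstract describes can be sketched as decision-level fusion, one common scheme for combining two classifiers. This is a minimal illustration, not the paper's actual algorithm; the function name, weights, and posterior values are hypothetical.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fuse_decisions(face_probs, speech_probs, w_face=0.5):
    """Weighted sum of per-emotion posteriors from the two modalities."""
    face_probs = np.asarray(face_probs, dtype=float)
    speech_probs = np.asarray(speech_probs, dtype=float)
    fused = w_face * face_probs + (1.0 - w_face) * speech_probs
    return EMOTIONS[int(np.argmax(fused))], fused

# Face classifier leans toward anger, speech strongly toward sadness;
# the fused decision follows the more confident modality.
label, fused = fuse_decisions([0.1, 0.2, 0.5, 0.1, 0.05, 0.05],
                              [0.1, 0.6, 0.2, 0.05, 0.03, 0.02])
```

With equal weights the speech classifier's stronger evidence wins, which is the kind of behavior that makes fused recognizers outperform either modality alone.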

Lightweight Deep Learning Model for Heart Rate Estimation from Facial Videos (얼굴 영상 기반의 심박수 추정을 위한 딥러닝 모델의 경량화 기법)

  • Gyutae Hwang;Myeonggeun Park;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.2
    • /
    • pp.51-58
    • /
    • 2023
  • This paper proposes a deep learning method for estimating the heart rate from facial videos. Our proposed method estimates remote photoplethysmography (rPPG) signals to predict the heart rate. Although several methods for estimating rPPG signals have been proposed, most cannot be used on low-power single-board computers due to their computational complexity. To address this problem, we construct a lightweight student model and employ a knowledge distillation technique to reduce the performance degradation relative to a deeper network model. The teacher model consists of 795k parameters, whereas the student model contains only 24k parameters; therefore, the inference time was reduced by a factor of 10. By distilling the knowledge of the intermediate feature maps of the teacher model, we improved the accuracy of the student model for estimating the heart rate. Experiments were conducted on the UBFC-rPPG dataset to demonstrate the effectiveness of the proposed method. Moreover, we collected our own dataset to verify the accuracy and processing time of the proposed method on real-world data. Experimental results on an NVIDIA Jetson Nano board demonstrate that our proposed method can infer the heart rate in real time with a mean absolute error of 2.5183 bpm.
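The feature-map distillation the abstract mentions is typically trained with a combined objective: the student's task loss plus a term pulling its intermediate features toward the teacher's. A minimal numpy sketch of such an objective (the weighting `alpha` and all names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def distillation_loss(student_feat, teacher_feat, student_out, target, alpha=0.5):
    """Heart-rate regression error plus an imitation term on feature maps."""
    task = float(np.mean((np.asarray(student_out) - np.asarray(target)) ** 2))
    imitate = float(np.mean((np.asarray(student_feat) - np.asarray(teacher_feat)) ** 2))
    return (1.0 - alpha) * task + alpha * imitate
```

When the student's features already match the teacher's, only the task term remains, so the imitation term acts purely as a regularizer that transfers the teacher's representation.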

Multi-modal Authentication Using Score Fusion of ECG and Fingerprints

  • Kwon, Young-Bin;Kim, Jason
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.2
    • /
    • pp.132-146
    • /
    • 2020
  • Biometric technologies have become widely available in many different fields. However, biometric technologies based on physical features such as fingerprints, facial features, irises, and veins must contend with forgery and spoofing through fraudulent physical characteristics such as fake fingerprints. Thus, a trend toward next-generation biometric technologies using the behavioral biometrics of a living person, such as bio-signals and walking characteristics, has emerged. Accordingly, in this study, we developed a bio-signal authentication algorithm using electrocardiogram (ECG) signals, which are the most uniquely identifiable form of bio-signal available. When using ECG signals with our system, the personal identification and authentication accuracy is approximately 90% during a state of rest. When using fingerprints alone, the equal error rate (EER) is 0.243%; however, when fusing the scores of both the ECG signal and fingerprints, the EER decreases to 0.113% on average. In addition, as a presentation attack detection function on a mobile phone, a method for rejecting a transaction when a fake fingerprint is presented was successfully implemented.
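Score-level fusion of the kind reported above usually requires normalizing the two matchers' scores to a common range before combining them. A hedged sketch of one standard recipe (min-max normalization plus a weighted sum); the weight and scores are illustrative, not the paper's:

```python
import numpy as np

def minmax_norm(scores):
    """Scale one matcher's scores to [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(ecg_scores, fp_scores, w_ecg=0.3):
    """Weighted-sum score-level fusion; higher fused score = more genuine."""
    return w_ecg * minmax_norm(ecg_scores) + (1.0 - w_ecg) * minmax_norm(fp_scores)
```

The EER is then computed on the fused scores by sweeping a decision threshold until the false accept and false reject rates meet.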

Emotion Recognition Using Tone and Tempo Based on Voice for IoT (IoT를 위한 음성신호 기반의 톤, 템포 특징벡터를 이용한 감정인식)

  • Byun, Sung-Woo;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.1
    • /
    • pp.116-121
    • /
    • 2016
  • In the Internet of Things (IoT) area, research on recognizing human emotion has increased recently. Generally, multi-modal features such as facial images, bio-signals, and voice signals are used for emotion recognition. Among these, voice signals are the most convenient to acquire. This paper proposes an emotion recognition method using tone and tempo features based on voice. For this, we build voice databases from broadcast media content. Emotion recognition tests are carried out using tone and tempo features extracted from these databases. The results show a noticeable improvement in accuracy compared with conventional methods that use only pitch.
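The tone feature described above is essentially a fundamental-frequency estimate per voiced frame (tempo can similarly be derived from energy-envelope peaks, not shown here). A minimal autocorrelation-based sketch; the function, parameters, and synthetic test signal are assumptions for illustration only:

```python
import numpy as np

def pitch_autocorr(frame, sr, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency (tone) of a voiced frame by
    picking the autocorrelation peak inside the plausible pitch range."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag bounds for fmax..fmin
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

# Sanity check on a synthetic 200 Hz tone sampled at 8 kHz.
sr = 8000
t = np.arange(800) / sr
f0 = pitch_autocorr(np.sin(2 * np.pi * 200.0 * t), sr)
```

Restricting the lag search to the 80-400 Hz band avoids the trivial zero-lag peak and octave errors outside the speaking range.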

Direction control using signals originating from facial muscle contractions (안면근에 의해 발생되는 신호를 이용한 방향 제어)

  • Yang, Eun-Joo;Kim, Eung-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.4
    • /
    • pp.427-432
    • /
    • 2003
  • EEG is an electrical signal that occurs during information processing in the brain. EEG signals have long been used clinically, but nowadays the main study is the Brain-Computer Interface (BCI): interfacing with a computer through the EEG and controlling machines through the EEG. The ultimate purpose of BCI study is to characterize the EEG in various mental states so as to control computers and machines. This research builds a direction-control system using artifacts generated at the subject's will, for the purpose of controlling a machine correctly and reliably. The system works as follows: first, we select the particular artifact from the EEG mixed with artifacts; then we recognize and classify the signal patterns; finally, we convert the signals into general control signals that the direction-control system can use.
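The select-classify-convert pipeline above can be sketched with a simple amplitude rule: deliberate facial-muscle artifacts are far larger than background EEG, so an RMS threshold per channel can be mapped to a command. This is a hypothetical illustration of the idea, not the paper's classifier; the threshold and channel mapping are assumed.

```python
import numpy as np

def detect_command(left_ch, right_ch, threshold=50.0):
    """Map deliberate artifact bursts in two EEG channels to a direction.

    Burst on the left channel only -> "left", right only -> "right",
    both -> "forward", neither -> "stop". RMS amplitude marks a burst.
    """
    rms = lambda x: float(np.sqrt(np.mean(np.square(np.asarray(x, dtype=float)))))
    l, r = rms(left_ch) > threshold, rms(right_ch) > threshold
    return {(True, False): "left", (False, True): "right",
            (True, True): "forward", (False, False): "stop"}[(l, r)]
```

A real system would replace the fixed threshold with the pattern recognition and classification stage the abstract describes.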

Monophthong Recognition Optimizing Muscle Mixing Based on Facial Surface EMG Signals (안면근육 표면근전도 신호기반 근육 조합 최적화를 통한 단모음인식)

  • Lee, Byeong-Hyeon;Ryu, Jae-Hwan;Lee, Mi-Ran;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.143-150
    • /
    • 2016
  • In this paper, we propose a Korean monophthong recognition method that optimizes muscle mixing based on facial surface EMG signals. We observed that EMG signal patterns and muscle activity may vary according to Korean monophthong pronunciation. We use RMS, VAR, MMAV1, and MMAV2, which showed high recognition accuracy in a previous study, together with cepstral coefficients as feature extraction algorithms, and we classify Korean monophthongs with QDA (Quadratic Discriminant Analysis) and HMM (Hidden Markov Model). The muscle mixing is optimized using the input data in the training phase, and the optimized result is applied in the recognition phase, where new data are input and the Korean monophthong is finally recognized. Experimental results show average recognition accuracies of 85.7% with QDA and 75.1% with HMM.
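The time-domain EMG features named above are straightforward to compute per analysis window. A sketch of RMS, VAR, and plain MAV (MMAV1/MMAV2 are weighted MAV variants that down-weight samples near the window edges; they are omitted here, and the function name is an assumption):

```python
import numpy as np

def emg_features(window):
    """RMS, sample variance, and mean absolute value of one sEMG window."""
    w = np.asarray(window, dtype=float)
    return {
        "RMS": float(np.sqrt(np.mean(w ** 2))),
        "VAR": float(np.var(w, ddof=1)),
        "MAV": float(np.mean(np.abs(w))),
    }
```

One such feature vector per window and per selected muscle channel would then feed the QDA or HMM classifier.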

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.2
    • /
    • pp.271-279
    • /
    • 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion using physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (Electrodermal Activity), SKT (Skin Temperature), PPG (Photoplethysmogram), and ECG (Electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds from the baseline and the emotional state, and 27 features were extracted from these signals. Statistical analysis for emotion classification was done by DFA (discriminant function analysis) (SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during emotional states differed significantly from those during baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study identified that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This work could help emotion recognition studies better recognize various human emotions using physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful in developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for an emotion recognition system in human-computer interaction.
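The baseline-subtraction step described in the method (the DFA itself was run in SPSS) can be sketched in a few lines. Here each channel is summarized by its window mean as a stand-in statistic; the paper extracted 27 features, and the function name and data are illustrative assumptions:

```python
import numpy as np

def difference_features(baseline_win, emotion_win):
    """Summarize each physiological channel over its window, then subtract
    the baseline summary from the emotional-state summary (one vector per trial)."""
    b = np.asarray(baseline_win, dtype=float).mean(axis=-1)
    e = np.asarray(emotion_win, dtype=float).mean(axis=-1)
    return e - b
```

Using differences rather than raw values removes per-subject offsets, which is why the discriminant analysis is run on them.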

Biosign Recognition based on the Soft Computing Techniques with application to a Rehab-type Robot

  • Lee, Ju-Jang
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.29.2-29
    • /
    • 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, a human-friendly interaction/interface is essential. Human-friendly interaction is possible when the system is capable of recognizing human biosigns such as EMG signals, hand gestures, and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experiences of applying soft computing techniques, including fuzzy logic, ANN, GA, and rough set theory, for efficiently recognizing various biosigns and for effective inference. More specifically, we first observe the characteristics of various forms of biosigns and propose a new way of extracting feature sets for such signals. Then we show a standardized procedure for deriving an inferred intention or emotion from the signals. Finally, we present application examples for our rehabilitation robot.


Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.37-47
    • /
    • 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, when subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest some money or credit in one of several projects; their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that behave in reciprocating or unreciprocating ways while expressing socially affective facial expressions. The total money or credit the subject earns is contingent on the partner's choice. From this study, I observed that a subject's appraisal of interaction with cooperative/uncooperative (or friendly/unfriendly) virtual agents in an investment game results in increased autonomic and somatic responses, and that these responses were observed through physiological signals and facial expressions in real time. For assessing user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured with a web camera. After all trials, subjects were asked to answer questions evaluating how much the interaction with the virtual agents affected their appraisals.

Efficacy evaluation on whitening cosmetics in Japan

  • Funasaka, Yoko
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.28 no.3
    • /
    • pp.91-108
    • /
    • 2002
  • Whitening agents are eagerly demanded, especially by oriental women, who often suffer from pigmentary disorders such as melasma and solar lentigines. As these pigmentary disorders are exacerbated by ultraviolet (UV) light, whitening agents could exert their effect not only by inhibiting melanin synthesis but also by inhibiting UV-activated signals. Eumelanin protects against UV-induced DNA damage, so chemicals that reduce UV-induced DNA damage might be ideal lightening agents. The effects of a newly synthesized antioxidant, α-tocopheryl ferulate, on protection against UV-induced DNA damage as well as on inhibiting melanin synthesis are briefly shown. For clinical evaluation, our results on the efficacy of lightening agents in treating pigment macules in combination with chemical peeling are shown. Furthermore, newly developed facial image analyzers for quantitatively evaluating the improvement of pigment macules are introduced.