Facial signals

Implementation of communication system using signals originating from facial muscle contractions

  • Kim, EungSoo; Eum, TaeWan
    • International Journal of Fuzzy Logic and Intelligent Systems, v.4 no.2, pp.217-222, 2004
  • People communicate with each other using language, but a disabled person may be unable to convey ideas through writing or gesture. We implemented an EEG-based communication system so that disabled persons can communicate. After feature extraction from EEG containing facial muscle signals, the facial muscle activity is converted into a control signal, which is then used to select characters and convey ideas.
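
To make the pipeline in the abstract concrete, here is a minimal Python sketch, not the authors' implementation: a facial-muscle signal is reduced to a binary control signal by short-window RMS thresholding. The sampling rate, window length, and threshold are illustrative assumptions.

```python
# Hedged sketch: facial-muscle activity -> binary control signal.
# FS, WIN, and the threshold are assumed values, not from the paper.
import numpy as np

FS = 256          # assumed sampling rate (Hz)
WIN = FS // 4     # 250 ms analysis window

def rms_windows(signal: np.ndarray, win: int = WIN) -> np.ndarray:
    """Short-time RMS energy, one value per non-overlapping window."""
    n = len(signal) // win
    frames = signal[: n * win].reshape(n, win)
    return np.sqrt((frames ** 2).mean(axis=1))

def to_control_signal(signal: np.ndarray, threshold: float) -> np.ndarray:
    """1 where muscle activity exceeds the threshold, else 0."""
    return (rms_windows(signal) > threshold).astype(int)

# Example: a burst of simulated muscle activity becomes a run of 1s.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 1.0, FS)
burst = rng.normal(0, 8.0, FS)   # stronger activity = contraction
print(to_control_signal(np.concatenate([quiet, burst]), threshold=4.0))
```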

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill; Yasunari Yoshitomi; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea, v.21 no.2E, pp.98-104, 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. We extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained with an HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained with an NN. The recognition rates for the combined voice and facial-expression parameters were better than for either of the two isolated parameter sets. The simulation results were also compared with human questionnaire results.
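
The score-level combination implied above could look roughly like the following hedged sketch: HMM log-likelihoods from the voice channel and NN outputs from the facial channel are normalized and mixed per emotion class. The equal weighting and score shapes are assumptions, not the authors' exact scheme.

```python
# Hedged sketch of combining voice-HMM and face-NN scores per emotion.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def combine(hmm_loglik: np.ndarray, nn_scores: np.ndarray, w: float = 0.5) -> str:
    """Pick the emotion with the best weighted, normalized combined score."""
    def norm(x):
        x = x - x.min()
        return x / x.sum() if x.sum() > 0 else np.full_like(x, 1 / len(x))
    fused = w * norm(hmm_loglik) + (1 - w) * norm(nn_scores)
    return EMOTIONS[int(np.argmax(fused))]

print(combine(np.array([-120.0, -95.0, -110.0, -130.0]),  # voice HMM log-likelihoods (assumed)
              np.array([0.1, 0.6, 0.2, 0.1])))            # facial NN outputs (assumed)
```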

HMM-Based Automatic Speech Recognition using EMG Signal

  • Lee, Ki-Seung
    • Journal of Biomedical Engineering Research, v.27 no.3, pp.101-109, 2006
  • It is known that there is a strong relationship between human voices and the movements of the articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme that uses surface electromyogram (EMG) signals alone. The EMG signals were acquired from three articulatory facial muscles. As a preliminary test, 10 Korean digits were used as the recognition vocabulary. Various feature parameters, including filter bank outputs, linear predictive coefficients, and cepstrum coefficients, were evaluated to find appropriate parameters for EMG-based speech recognition. The sequence of EMG signals for each word is modeled in a hidden Markov model (HMM) framework. A continuous word recognition approach was investigated in this work; hence, the model for each word is obtained by concatenating subword models, and embedded re-estimation techniques were employed in the training stage. The findings indicate that such a system can recognize speech with an accuracy of up to 90% when mel-filter bank outputs are used as the recognition features.
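
As a rough illustration of the best-performing feature above, the sketch below frames an EMG signal and computes log filter-bank energies. Linearly spaced bands stand in for the paper's mel-filter bank, and the sampling rate, frame size, hop, and band count are assumed; the HMM training and decoding stages are not shown.

```python
# Hedged sketch: framed log filter-bank features from a raw EMG signal.
import numpy as np

FS, FRAME, HOP, NBANDS = 1000, 128, 64, 8   # assumed EMG analysis setup

def filterbank_features(x: np.ndarray) -> np.ndarray:
    """One row of log filter-bank energies per analysis frame."""
    frames = np.lib.stride_tricks.sliding_window_view(x, FRAME)[::HOP] * np.hanning(FRAME)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    edges = np.linspace(0, spec.shape[1] - 1, NBANDS + 2).astype(int)
    feats = np.stack([spec[:, edges[i]:edges[i + 2] + 1].mean(axis=1)
                      for i in range(NBANDS)], axis=1)
    return np.log(feats + 1e-8)

# Example: 1 s of simulated EMG -> (frames, NBANDS) feature matrix.
print(filterbank_features(np.random.default_rng(1).normal(size=FS)).shape)
```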

Robust Extraction of Heartbeat Signals from Mobile Facial Videos

  • Lomaliza, Jean-Pierre; Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing, v.20 no.1, pp.51-56, 2019
  • This paper proposes an improved heartbeat signal extraction method for ballistocardiography (BCG)-based heart-rate measurement in mobile environments. First, a hand-shake-free head motion signal is extracted from a mobile facial video by tracking facial features and background features at the same time. Then, a novel signal periodicity computation method is proposed to accurately separate the heartbeat signal from the head motion signal. The proposed method robustly extracts heartbeat signals from mobile facial videos and enables more accurate heart-rate measurement than the existing method, reducing measurement errors by 3-4 bpm.
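
A minimal sketch of the kind of periodicity analysis described above, under assumed parameters (30 fps video, a 40-180 bpm search band): the heartbeat period is taken from the dominant autocorrelation peak of the head-motion trace. This illustrates the general BCG idea, not the authors' novel periodicity method.

```python
# Hedged sketch: heart rate from a tracked head-motion signal.
import numpy as np

FPS = 30.0  # assumed video frame rate

def heart_rate_bpm(motion: np.ndarray, fps: float = FPS) -> float:
    """Estimate bpm from the autocorrelation peak in the 40-180 bpm band."""
    x = motion - motion.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fps * 60 / 180), int(fps * 60 / 40)    # lag range for 180..40 bpm
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fps / lag

# Example: a 72 bpm sinusoid buried in tracking noise.
t = np.arange(0, 20, 1 / FPS)
sig = np.sin(2 * np.pi * (72 / 60) * t) \
    + 0.5 * np.random.default_rng(2).normal(size=t.size)
print(round(heart_rate_bpm(sig), 1))   # close to 72.0
```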

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard

  • Kim, Hong-Hyun; Park, Hyun-Seok; Kim, Eung-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2009.10a, pp.289-292, 2009
  • People communicate with each other using language, but a disabled person may be unable to convey ideas through writing or gesture. In this paper, we therefore implemented a communication system using facial muscle signals so that disabled persons can communicate. In particular, after extracting features from EEG containing facial muscle signals, the facial muscle activity is converted into a control signal, which is then used to select characters and communicate through a minimum list keyboard.

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard

  • Kim, Hong-Hyun; Kim, Eung-Soo
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.6, pp.1338-1344, 2010
  • People communicate with each other using language, but a disabled person may be unable to convey ideas through writing or gesture. In this paper, we therefore implemented a communication system using facial muscle signals so that disabled persons can communicate. In particular, after extracting features from EEG containing facial muscle signals, the facial muscle activity is converted into a control signal, which is then used to select characters and communicate through a minimum list keyboard.
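
A hedged sketch of the selection logic such a minimum-keyboard system might use; the actual layout and scanning scheme are not given in the abstracts, so everything here is an assumption. The keyboard is scanned step by step, and a facial-muscle control pulse first picks a row, then a character within it.

```python
# Hedged sketch: row-column scanning selection driven by muscle pulses.
KEYBOARD = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ .,?"]  # assumed layout

def scan_select(pulses, keyboard=KEYBOARD):
    """pulses: scan steps elapsed before each muscle pulse fires.
    The first count selects a row, the second a character in that row."""
    it = iter(pulses)
    row = keyboard[next(it) % len(keyboard)]
    return row[next(it) % len(row)]

# Example: wait 1 row-step then fire, wait 5 column-steps then fire -> 'L'.
print(scan_select([1, 5]))
```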

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.8 no.2, pp.105-110, 2008
  • Humans recognize emotion by fusing information from another person's speech, facial expression, gesture, and bio-signals. Computers need technologies to recognize emotion as humans do, using such combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from speech signals and facial images, and we propose a multimodal method that fuses the two results into a final emotion recognition result. Emotion recognition from both the speech signal and the facial image uses the Principal Component Analysis (PCA) method, and the multimodal stage fuses the two results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
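
The decision-fusion step can be illustrated with the following sketch. The S-function parameters and the product-style combination are assumptions for illustration; the abstract confirms only that an S-type membership function is used.

```python
# Hedged sketch: S-type membership mapping plus a simple decision fusion.
import numpy as np

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

def s_membership(x, a=0.2, b=0.8):
    """Classic S-function: 0 below a, 1 above b, smooth quadratic between."""
    x = np.clip(np.asarray(x, dtype=float), a, b)
    m = (x - a) / (b - a)
    return np.where(m < 0.5, 2 * m ** 2, 1 - 2 * (1 - m) ** 2)

def fuse(speech_scores, face_scores):
    """Combine per-class memberships from both modalities, pick the best."""
    fused = s_membership(speech_scores) * s_membership(face_scores)
    return EMOTIONS[int(np.argmax(fused))]

print(fuse([0.7, 0.9, 0.3, 0.4, 0.2],    # PCA-based speech scores (assumed)
           [0.5, 0.8, 0.4, 0.3, 0.3]))   # PCA-based face scores (assumed)
```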

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.3, pp.1076-1094, 2022
  • Technology for emotion recognition is an essential part of human personality analysis. To define human personality characteristics, existing methods have relied on surveys; however, there are many cases where communication cannot take place without considering emotions. Emotion recognition technology is therefore an essential element of communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence method was subdivided into four levels per axis to classify emotions in more detail; based on current techniques that classify only High/Low, the model was subdivided into multiple levels. Then, signal characteristics were extracted using a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are typically used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with data from actual sensors.
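
A minimal PyTorch sketch of a 1-D CNN classifier of the kind described above: raw 1-D sensor windows in, sixteen emotion classes out. The channel count, window length, kernel sizes, and layer widths are all assumptions, not the paper's architecture.

```python
# Hedged sketch: 1-D CNN over raw physiological-sensor windows.
import torch
import torch.nn as nn

class Emotion1DCNN(nn.Module):
    def __init__(self, in_channels=2, n_classes=16, win_len=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Two 4x poolings shrink the time axis by 16.
        self.classifier = nn.Linear(32 * (win_len // 16), n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: a batch of 8 two-channel sensor windows (channels are assumed).
logits = Emotion1DCNN()(torch.randn(8, 2, 512))
print(logits.shape)                       # torch.Size([8, 16])
```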

The Effects of a Massage and Oro-facial Exercise Program on Spastic Dysarthrics' Lip Muscle Function

  • Hwang, Young-Jin; Jeong, Ok-Ran; Yeom, Ho-Joon
    • Speech Sciences, v.11 no.1, pp.55-64, 2004
  • This study examined the effects of a massage and oro-facial exercise program on spastic dysarthric patients' lip muscle function using an electromyogram (EMG). Three subjects with spastic dysarthria participated in the study. Surface electrodes were positioned on the levator labii superior muscle (LLSM), depressor labii inferior muscle (DLIM), and orbicularis oris muscle (OOM). To examine improvement in lip muscle function, the EMG signals were analyzed in terms of RMS (root mean square) values and median frequency. In addition, diadochokinetic movements and the rate of sentence reading were measured. The results revealed that the RMS values decreased and the median frequency shifted toward a higher frequency region. Diadochokinesis and sentence reading rates were improved.
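
The two EMG measures analyzed in this study are standard and easy to state in code. The sketch below computes the RMS amplitude and the median frequency, i.e. the frequency that splits the power spectrum into equal halves; the sampling rate is an assumed value.

```python
# Hedged sketch: RMS value and median frequency of an EMG segment.
import numpy as np

FS = 1000  # assumed EMG sampling rate (Hz)

def emg_rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude of the segment."""
    return float(np.sqrt(np.mean(x ** 2)))

def median_frequency(x: np.ndarray, fs: float = FS) -> float:
    """Frequency below which half of the spectral power lies."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    cum = np.cumsum(psd)
    return float(freqs[np.searchsorted(cum, cum[-1] / 2)])

x = np.random.default_rng(3).normal(size=FS)  # 1 s of simulated EMG
print(emg_rms(x), median_frequency(x))
```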

Normalization Framework of BCI-based Facial Interface

  • Sung, Yunsick; Gong, Suhyun
    • Journal of Multimedia Information System, v.2 no.3, pp.275-280, 2015
  • Recently, brainwaves have been utilized in diverse fields such as medicine, entertainment, and education. In medicine, brainwaves are analyzed to estimate patients' diseases. Entertainment applications, however, usually utilize brainwaves as control signals without characterizing them. Given that users' brainwaves differ from one another, a normalization method is essential. Traditional brainwave normalization approaches utilize the normal distribution, but they assume that enough brainwave samples have been collected to fit one; when only a few brainwave measurements are available, the accuracy of the control signal based on them becomes low. In this paper, we propose a normalization framework for BCI-based facial interfaces, applied to a novel volume controller, which can normalize small amounts of brainwave data and then generate the control signals of the BCI-based facial interface. In the experiments, two subjects participated to validate the proposed framework, and the normalization processes are described.
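
For context, here is a sketch of the traditional normal-distribution normalization that this paper argues against: a new brainwave feature is z-scored against per-user calibration statistics, which presumes enough samples for a reliable fit. The proposed small-sample framework itself is not detailed in the abstract, so it is not reproduced here.

```python
# Hedged sketch: the conventional normal-distribution (z-score) baseline.
import numpy as np

def zscore_control_signal(history: np.ndarray, new_sample: float) -> float:
    """Map a new brainwave feature to a control value via a normal fit.
    Unreliable when `history` holds only a few calibration samples."""
    mu, sigma = history.mean(), history.std(ddof=1)
    return (new_sample - mu) / sigma if sigma > 0 else 0.0

calib = np.array([4.1, 3.8, 4.4, 4.0, 4.2])   # assumed calibration features
print(round(zscore_control_signal(calib, 4.9), 2))
```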