• Title/Summary/Keyword: Emotion processing


The Effects of Nonverbal Communication of Fast Food Restaurant Servers on Customer Loyalty - Focusing on Customer Emotion and Self-Identification - (패스트푸드업체 서비스종사원의 비언어적 커뮤니케이션이 고객충성도에 미치는 영향 - 고객감정과 자아동일시를 중심으로 -)

  • Yoo, Young-Jin;Park, Yi-Kyung
    • Culinary science and hospitality research / v.22 no.3 / pp.166-182 / 2016
  • This study verified the impact of servers' non-verbal communication in the service industry on the affective path from customers' positive emotion through self-identification to loyalty (behavioral and attitudinal). Data from 397 customers of typical fast food restaurants in the Busan and Gyeongsangbuk-do area were analyzed with SPSS and AMOS, and the hypotheses were tested with a structural equation model after frequency analysis and exploratory and confirmatory factor analysis. According to the empirical analysis, all three components of server non-verbal communication, body language, paralanguage, and physical appearance, in that order of effect size, had positive (+) influences on customers' positive emotion. In addition, customer emotion had a positive (+) influence on brand self-identification, and self-identification in turn had positive (+) influences on behavioral and attitudinal loyalty. The study suggests practical and theoretical implications for restaurant companies seeking to develop emotional loyalty.
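
As an illustration of the kind of structural model this abstract describes, the sketch below encodes the hypothesized paths in semopy, a Python SEM library standing in here for AMOS; the variable names, the CSV file, and the use of composite scores are assumptions for illustration, not the study's actual materials.

```python
# A minimal sketch of the hypothesized path model in semopy; all variable
# names are illustrative placeholders, not the study's measurement items.
import pandas as pd
from semopy import Model

desc = """
positive_emotion ~ body_language + paralanguage + appearance
self_identification ~ positive_emotion
behavioral_loyalty ~ self_identification
attitudinal_loyalty ~ self_identification
"""

df = pd.read_csv("survey_scores.csv")  # hypothetical file of composite scores
model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```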

Similarity Evaluation of Popular Music based on Emotion and Structure of Lyrics (가사의 감정 분석과 구조 분석을 이용한 노래 간 유사도 측정)

  • Lee, Jaehwan;Lim, Hyewon;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.22 no.10 / pp.479-487 / 2016
  • Music streaming services let people listen to almost any type of music without owning it; ironically, this abundance makes it difficult to choose what to listen to. A music recommendation system helps people make that choice, but existing recommendation systems have high computational complexity and do not consider context information. Emotion is one of the most important pieces of context information about music, and lyrics can be processed easily with various language processing techniques and used to extract the emotion of a song. We propose a song-level similarity evaluation method based on the emotion and structure of lyrics. Our results show that it is important to consider semantic information when evaluating the similarity of music.
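
The abstract does not give the paper's exact similarity formula, so the following is only a plausible sketch of combining lyric-content similarity with section-structure similarity; the TF-IDF stand-in for emotion features, the weight `alpha`, and the section labels are all illustrative assumptions.

```python
# Illustrative sketch: blend a lexical (emotion-bearing word) similarity
# with a section-structure similarity; neither component is the paper's own.
from difflib import SequenceMatcher
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def song_similarity(lyrics_a, lyrics_b, structure_a, structure_b, alpha=0.5):
    # Lexical similarity: cosine similarity between TF-IDF vectors of lyrics.
    tfidf = TfidfVectorizer().fit_transform([lyrics_a, lyrics_b])
    emotion_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    # Structure similarity: how alike the section sequences are,
    # e.g. ["verse", "chorus", "verse"] vs. ["verse", "chorus", "bridge"].
    structure_sim = SequenceMatcher(None, structure_a, structure_b).ratio()
    return alpha * emotion_sim + (1 - alpha) * structure_sim

sim = song_similarity("love shines bright tonight", "tears fall all night",
                      ["verse", "chorus", "verse"],
                      ["verse", "chorus", "bridge"])
print(f"similarity: {sim:.3f}")
```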

Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.13B no.7 s.110 / pp.653-662 / 2006
  • In ubiquitous computing, which builds computing environments that provide appropriate services according to the user's context, emotion recognition based on facial expressions serves as an essential means of HCI, making human-machine interaction more efficient and supporting user context-awareness. This paper addresses the problem of recognizing basic emotions from context-sensitive facial expressions with a new Bayesian classifier. The recognition task consists of two steps: a facial feature extraction step based on a color-histogram method, and a classification step that employs a new Bayesian learning algorithm for efficient training and testing. The proposed context-sensitive Bayesian learning algorithm, EADF (Extended Assumed-Density Filtering), recognizes emotions more accurately by using different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when facial expression is modeled as a hidden context.
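
EADF itself is not available in standard libraries, so the sketch below is a much-simplified stand-in for the two-step pipeline the abstract describes: color-histogram features followed by a per-context Bayesian classifier (here an off-the-shelf Gaussian naive Bayes); the data shapes and context labels are assumptions.

```python
# Simplified stand-in for the paper's pipeline: one Gaussian naive Bayes
# per context over color-histogram features (EADF is not reproduced here).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def color_histogram(image, bins=16):
    """Flatten an RGB face image into concatenated per-channel histograms."""
    return np.concatenate(
        [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
         for c in range(3)]).astype(float)

def train_per_context(images, contexts, emotions):
    """images: face crops; contexts: e.g. lighting/pose label per sample;
    emotions: emotion label per sample. Returns one classifier per context."""
    X = np.stack([color_histogram(im) for im in images])
    models = {}
    for ctx in set(contexts):
        idx = [i for i, c in enumerate(contexts) if c == ctx]
        models[ctx] = GaussianNB().fit(X[idx], [emotions[i] for i in idx])
    return models
```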

Discriminative Effects of Social Skills Training on Facial Emotion Recognition among Children with Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.29 no.4 / pp.150-160 / 2018
  • Objectives: This study investigated the effect of social skills training (SST) on facial emotion recognition and discrimination in children with attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Twenty-three children aged 7 to 10 years participated in our SST: 15 diagnosed with ADHD and 8 with ASD. The participants' parents completed the Korean version of the Child Behavior Checklist (K-CBCL), the ADHD Rating Scale, and the Conners' Scale at baseline and post-treatment. The participants completed the Korean Wechsler Intelligence Scale for Children-IV (K-WISC-IV) and the Advanced Test of Attention at baseline, and the Penn Emotion Recognition and Discrimination Task at baseline and post-treatment. Results: No significant changes in facial emotion recognition and discrimination occurred in either group between pre- and post-SST. However, when controlling for the processing speed of the K-WISC and the social subscale of the K-CBCL, the ADHD group showed more improvement than the ASD group in total (p=0.049), female (p=0.039), sad (p=0.002), mild (p=0.015), female extreme (p=0.005), male mild (p=0.038), and Caucasian (p=0.004) facial expressions. Conclusion: SST improved facial expression recognition more effectively for children with ADHD than for children with ASD, who may need additional training to support emotion recognition and discrimination.
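
The group comparison "controlling for the processing speed of the K-WISC and the social subscale of the K-CBCL" is an ANCOVA-style analysis; a minimal sketch with statsmodels follows, in which the data file and column names are invented placeholders.

```python
# A hedged sketch of a covariate-adjusted group comparison (ANCOVA-style);
# the CSV file and column names are illustrative, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sst_outcomes.csv")  # hypothetical per-child outcome table
# change_total: post minus pre emotion-recognition score;
# group: ADHD vs. ASD; covariates: processing speed, CBCL social subscale.
model = smf.ols("change_total ~ C(group) + processing_speed + cbcl_social",
                data=df).fit()
print(model.summary())  # the C(group) term tests the adjusted group difference
```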

Design of a Mirror for Fragrance Recommendation based on Personal Emotion Analysis (개인의 감성 분석 기반 향 추천 미러 설계)

  • Hyeonji Kim;Yoosoo Oh
    • Journal of Korea Society of Industrial Information Systems / v.28 no.4 / pp.11-19 / 2023
  • This paper proposes a smart mirror system that recommends fragrances based on analysis of the user's emotions. It combines text vectorization techniques (CountVectorizer and TF-IDF) with machine learning classification models (decision tree, SVM, random forest, and SGD classifier), builds a model for each combination, and compares the results. Based on the comparison, the paper constructs the personal emotion-based fragrance recommendation mirror around the best-performing pipeline, an SVM emotion classifier over word-embedding features. The proposed system implements the personalized recommendation mirror as a web service built with the Flask web framework. It uses the Google Cloud Speech API to recognize the user's voice and speech-to-text (STT) to transcribe it into text data for analysis. The system also provides users with information about weather, humidity, location, quotes, and time, along with schedule management.
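
A minimal sketch of the vectorizer/classifier comparison this abstract describes, using scikit-learn pipelines; the corpus loader is a hypothetical placeholder, and Korean tokenization details are omitted.

```python
# Compare vectorizer/classifier combinations by cross-validated accuracy;
# load_emotion_corpus() is a hypothetical loader of (text, emotion) pairs.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts, labels = load_emotion_corpus()  # hypothetical loader

for vec in (CountVectorizer(), TfidfVectorizer()):
    for clf in (DecisionTreeClassifier(), LinearSVC(),
                RandomForestClassifier(), SGDClassifier()):
        pipe = make_pipeline(vec, clf)
        score = cross_val_score(pipe, texts, labels, cv=5).mean()
        print(type(vec).__name__, type(clf).__name__, f"{score:.3f}")
```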

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services / v.24 no.2 / pp.11-18 / 2023
  • Recently, in the field of speech emotion recognition (SER), many studies have sought to improve accuracy through both voice features and modeling. In this paper, focusing on the fact that vocal emotion unfolds over time, voice files are split into chunks at fixed time intervals and treated as time series. After this separation, we propose a model that classifies the emotion of speech data by extracting the speech features mel spectrogram, chroma, zero-crossing rate (ZCR), root mean square (RMS) energy, and mel-frequency cepstrum coefficients (MFCC) and feeding them to recurrent neural network models suited to sequential data. In the proposed method, voice features are extracted from all files using the 'librosa' library and applied to the neural network models. The experiments compare the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) English dataset.
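
A minimal sketch of the per-chunk feature extraction step with librosa, covering the five features named in the abstract; the chunk length and the mean-over-frames aggregation are assumptions, not the paper's exact settings.

```python
# Per-chunk extraction of the five features named in the abstract; the
# resulting (n_chunks, n_features) sequence can feed an RNN/LSTM/GRU.
import numpy as np
import librosa

def chunk_features(path, chunk_sec=2.0):
    y, sr = librosa.load(path, sr=None)
    hop = int(chunk_sec * sr)  # assumed chunk length, not the paper's setting
    feats = []
    for start in range(0, len(y) - hop + 1, hop):
        c = y[start:start + hop]
        f = np.concatenate([
            librosa.feature.melspectrogram(y=c, sr=sr).mean(axis=1),
            librosa.feature.chroma_stft(y=c, sr=sr).mean(axis=1),
            librosa.feature.zero_crossing_rate(c).mean(axis=1),
            librosa.feature.rms(y=c).mean(axis=1),
            librosa.feature.mfcc(y=c, sr=sr, n_mfcc=13).mean(axis=1),
        ])
        feats.append(f)
    return np.stack(feats)
```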

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.69-92 / 2015
  • The explosion of social media data has led to the application of text-mining techniques to big social media data in a more rigorous manner. Yet even as social media text analysis algorithms have improved, previous approaches retain some limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis methods, but this approach has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to handle the richer semantic features that existing sentiment analysis has underestimated. The result of applying Word2Vec is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The comparison shows that Word2Vec extracts about three times as many emotion-related words for a given keyword as co-occurrence analysis does, a difference that stems from Word2Vec's vectorization of semantic features; Word2Vec is thus able to catch hidden related words that traditional analysis misses. In addition, part-of-speech (POS) tagging for Korean is used to detect adjectives as "emotional words." The emotional words extracted from the text are converted into word vectors by Word2Vec to find related words, and among these, nouns are selected because each is likely to have a causal relationship with the emotional word in the sentence. This process of extracting the trigger factors of emotional words is named "Emotion Trigger" in this study. As a case study, datasets were collected by searching on three keywords rich in public emotion and opinion: professor, prosecutor, and doctor. Preliminary data collection was conducted to select secondary keywords for gathering the data used in the actual analysis: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate), and prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all text processing and analysis programs were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, Emotion Trigger can detect hidden connections to public emotion that existing methods cannot. Finally, the approach could be generalized to any type of text data.
The limitation of this study is that it is hard to confirm that a word extracted by Emotion Trigger processing has a significant causal relationship with the emotional word in its sentence. Future work will clarify this causal relationship by comparing the extracted pairs with manually tagged relationships. Furthermore, much of the text data used for Emotion Trigger comes from Twitter, which has a number of distinct features not addressed in this study; these will be considered in further work.
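
A minimal sketch of the Word2Vec step of the Emotion Trigger pipeline, written with gensim for illustration (the study's own programs were written in Java); the corpus loader, the example adjective, and the `is_noun` helper are hypothetical placeholders.

```python
# Train Word2Vec on POS-tagged Korean text, then look up words related to
# an emotional word (adjective) and keep the nouns as candidate triggers.
from gensim.models import Word2Vec

# sentences: list of token lists from a Korean morphological analyzer.
sentences = load_tokenized_corpus()  # hypothetical loader

model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1)

related = model.wv.most_similar("분노하다", topn=50)  # "to be enraged"
triggers = [w for w, score in related if is_noun(w)]  # is_noun: assumed helper
print(triggers)
```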

Multiple Regression-Based Music Emotion Classification Technique (다중 회귀 기반의 음악 감성 분류 기법)

  • Lee, Dong-Hyun;Park, Jung-Wook;Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering / v.7 no.6 / pp.239-248 / 2018
  • Many new technologies are being studied with the arrival of the fourth industrial revolution, and emotional intelligence is among the popular issues. Researchers have focused on emotion analysis for music services based on artificial intelligence and pattern recognition, but they do not consider how to recommend appropriate music for a user's specific emotion, which is a practical issue for music-related IoT applications. Thus, this paper proposes a probability-based music emotion classification technique that makes it possible to classify music with high precision over a range of emotions when developing music-related services. For user emotion recognition, Russell's model, one of the popular emotion models, is referenced. As musical features, the average amplitude, peak average, number of wavelengths, average wavelength, and beats per minute were extracted. Multiple regressions were derived from the collected data, and probability-based emotion classification was carried out. In two experiments, the emotion matching rate of the proposed technique was 70.94% and 86.21%, versus 66.83% and 76.85% for the survey participants. The experiments show that the proposed technique improves music emotion classification.
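
A hedged sketch of the regression step: fitting a multiple regression from the five audio features named in the abstract to Russell-model valence/arousal coordinates and classifying by quadrant. The data loader and quadrant labels are illustrative, and the paper's exact probability-based rule is not given in the abstract.

```python
# Map audio features to Russell-model (valence, arousal) via multiple
# regression, then classify by quadrant; data and labels are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# X: rows of [avg_amplitude, peak_avg, n_wavelengths, avg_wavelength, bpm]
# y: columns of [valence, arousal] rated by survey participants.
X, y = load_music_dataset()  # hypothetical loader

reg = LinearRegression().fit(X, y)  # one multiple regression per target

def classify(features):
    valence, arousal = reg.predict(np.asarray(features).reshape(1, -1))[0]
    if valence >= 0 and arousal >= 0: return "happy/excited"
    if valence < 0 and arousal >= 0: return "angry/tense"
    if valence < 0 and arousal < 0: return "sad/depressed"
    return "calm/relaxed"
```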

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Kim, Dong-Kyu;Lee, So Hwa;Bong, Jae Hwan
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1137-1144 / 2022
  • In this study, an artificial intelligence (AI) was developed to help users practice facial expressions for conveying emotion. The AI feeds multimodal inputs, sentences and facial images, to deep neural networks (DNNs) and computes the similarity between the emotion predicted from a sentence and the emotion predicted from a facial image. The user practices facial expressions for the situation given by a sentence, and the AI returns numerical feedback based on that similarity. A ResNet34 network was trained on the public FER2013 dataset to predict emotions from facial images, and a KoBERT model was fine-tuned via transfer learning on the conversational speech dataset for emotion classification released by AIHub to predict emotions from sentences. The image-based DNN reached 65% accuracy, comparable to human emotion classification ability, and the sentence-based DNN reached 90% accuracy. The performance of the developed AI was evaluated through facial expression experiments in which an ordinary person participated.
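
The numerical feedback reduces to a similarity between two emotion predictions; below is a minimal sketch using cosine similarity between the two models' probability outputs, where `text_model` and `image_model` stand in for the trained KoBERT and ResNet34 networks and the shared label order is an assumption.

```python
# Feedback score as cosine similarity between two emotion probability
# distributions; the models and label order are assumed placeholders.
import numpy as np

def emotion_similarity(text_probs, image_probs):
    """Cosine similarity between two softmax outputs over the same labels."""
    t = np.asarray(text_probs)
    f = np.asarray(image_probs)
    return float(t @ f / (np.linalg.norm(t) * np.linalg.norm(f)))

# e.g. probabilities over [anger, happiness, sadness, neutral, ...]
score = emotion_similarity(text_model(sentence), image_model(face_image))
print(f"expression feedback: {100 * score:.1f}/100")
```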

THE EFFECTIVENESS AND CHARACTERISTICS OF 3 POINT TASK ANALYSIS AS A NEW ERGONOMIC AND KANSEI DESIGN METHOD

  • Yamaoka, Toshiki;Matsunobe, Takuo
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2001.05a / pp.15-19 / 2001
  • This paper describes the effectiveness and characteristics of 3P (point) task analysis as a new ergonomic and Kansei design method, particularly for extracting user demands. The key point of 3P task analysis is to describe the flow of tasks and extract the problems in each task; the solution to a problem constitutes a user demand. 3P task analysis can eliminate oversights in check items by examining the user's information processing level, which is divided into three stages for problem extraction: acquisition of information, then understanding and judgment, then operation. The three stages carry fourteen cues for extracting problems, such as difficulty of seeing, lack of emphasis, and mapping. To link the analysis results to the formulation of a product concept, a column was added on the right side of the table for writing the requirements (user demands) that resolve the problems extracted from each task; these requirements are extracted using seven cues. Finally, 3P task analysis was compared with a group interview to clarify its characteristics, especially for extracting user demands.
