Artificial Emotion Model

Emotional Model Focused on Robot's Familiarity to Human

  • Choi, Tae-Yong;Kim, Chang-Hyun;Lee, Ju-Jang
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.1025-1030
    • /
    • 2005
  • This paper deals with the emotional model of a software robot. A software robot requires several capabilities, such as sensing, perceiving, acting, communicating, and surviving. There are already many studies of emotional models, such as KISMET and AIBO. This paper proposes a new emotional model that uses a modified friendship scheme. Quite often, the available emotional models have time-invariant human-response architectures: conventional emotional models make the sociable robot mingle with humans and obey human commands during operation, which makes the robot very different from a real pet. Like a real pet, the proposed emotional model with the modified friendship capability has a time-varying property that depends on the interaction between human and robot (a minimal sketch of such a state follows below).

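The paper gives no equations for the modified friendship scheme, so the following is a minimal, hypothetical sketch of what a time-varying familiarity state could look like: familiarity decays when the robot is ignored and grows with positive interaction, which is the qualitative behaviour the abstract attributes to real pets. All names and rates below are assumptions.

```python
import math

class FamiliarityModel:
    """Hypothetical sketch of a time-varying familiarity state.

    The abstract describes familiarity that changes with human-robot
    interaction rather than a fixed response mapping; this update rule
    (exponential decay plus interaction-driven growth) is an assumption,
    not the authors' equations.
    """

    def __init__(self, decay_rate=0.01, learn_rate=0.1):
        self.familiarity = 0.0   # kept in [0, 1]
        self.decay_rate = decay_rate
        self.learn_rate = learn_rate

    def step(self, dt, interaction=0.0):
        """Advance time by dt; `interaction` in [-1, 1] scores the
        valence of the human's behaviour toward the robot."""
        # Familiarity decays toward 0 when the robot is ignored ...
        self.familiarity *= math.exp(-self.decay_rate * dt)
        # ... and grows (or shrinks) with positive (negative) contact.
        self.familiarity += self.learn_rate * interaction * (1.0 - self.familiarity)
        self.familiarity = min(max(self.familiarity, 0.0), 1.0)
        return self.familiarity

model = FamiliarityModel()
for _ in range(10):
    model.step(dt=1.0, interaction=0.5)   # repeated friendly contact
print(f"familiarity after play: {model.familiarity:.2f}")
```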

Empowering Emotion Classification Performance Through Reasoning Dataset From Large-scale Language Model (초거대 언어 모델로부터의 추론 데이터셋을 활용한 감정 분류 성능 향상)

  • NunSol Park;MinHo Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.59-61
    • /
    • 2023
  • This paper proposes a way to use a reasoning dataset obtained from a large-scale language model to improve emotion classification performance. The approach is inspired by and applies Google Research's 'Chain of Thought', and the reasoning data were generated with a large-scale language model such as ChatGPT. The goal of this paper is to exploit a machine learning model's ability to understand and apply reasoning data in order to improve performance on the emotion classification task. An emotion classification model was trained on the reasoning dataset extracted from the large-scale language model (ChatGPT), and it showed improved performance on emotion classification, demonstrating that reasoning datasets can be of great value for this task. The study also shows that the model using the reasoning data achieved higher performance than a model trained only on the datasets conventionally used for emotion classification. This work demonstrates that reasoning datasets can be generated from large-scale language models at low cost and presents a new way to improve emotion classification performance. The proposed approach can be applied not only to emotion classification but also to other natural language processing tasks, suggesting that more sophisticated natural language understanding and processing is possible. A minimal sketch of the data-generation step follows below.

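As a minimal sketch of the data-generation step, the snippet below asks an LLM for a chain-of-thought style rationale for each training sentence and appends it to the input text. The prompt wording, the model name, and the augmentation format are assumptions; the abstract only states that reasoning data were generated with a large-scale model such as ChatGPT.

```python
# Hypothetical sketch: augmenting an emotion-classification corpus with
# chain-of-thought style rationales from an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Sentence: {sentence}\n"
    "Explain step by step which emotional cues the sentence contains, "
    "then state the single most likely emotion label."
)

def generate_rationale(sentence: str) -> str:
    """Ask the LLM for a reasoning trace for one training sentence."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
    )
    return response.choices[0].message.content

# Each training example becomes (sentence + rationale, gold label), so the
# downstream classifier sees the reasoning text as extra input.
corpus = [("I can't believe they forgot my birthday.", "sadness")]
augmented = [(s + "\n" + generate_rationale(s), label) for s, label in corpus]
```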

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering
    • /
    • v.23 no.3
    • /
    • pp.351-360
    • /
    • 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio in an in-the-wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model that converts one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is used for better emotion recognition (a simplified sketch of the conversion follows below). Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism that is robust to those emotions. Finally, so-called emotion-adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. On the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
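As a simplified, assumed version of the 1D-to-2D landmark conversion, the sketch below rasterises a normalised 68-point landmark vector into a small grayscale image per frame, producing a frame stack a CNN-LSTM could consume. The image size and one-pixel-per-landmark drawing are assumptions, not the paper's exact mapping.

```python
# Hypothetical sketch of turning a 1-D facial-landmark vector into a 2-D
# image that a CNN-LSTM can consume.
import numpy as np

def landmarks_to_image(landmarks: np.ndarray, size: int = 64) -> np.ndarray:
    """landmarks: shape (68, 2), coordinates normalised to [0, 1]."""
    img = np.zeros((size, size), dtype=np.float32)
    xs = np.clip((landmarks[:, 0] * (size - 1)).astype(int), 0, size - 1)
    ys = np.clip((landmarks[:, 1] * (size - 1)).astype(int), 0, size - 1)
    img[ys, xs] = 1.0          # one bright pixel per landmark
    return img

frames = np.random.rand(16, 68, 2)             # 16 frames of 68 landmarks
video_tensor = np.stack([landmarks_to_image(f) for f in frames])
print(video_tensor.shape)                      # (16, 64, 64) -> CNN-LSTM input
```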

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) was developed to help users practice facial expressions in order to express emotions. The developed AI feeds multimodal inputs consisting of sentences and facial images to deep neural networks (DNNs), which compute the similarity between the emotions predicted from the sentences and the emotions predicted from the facial images. The user practices facial expressions for the situation given by a sentence, and the AI provides numerical feedback based on the similarity between the emotion predicted from the sentence and the emotion predicted from the facial expression (a sketch of such a similarity score follows below). A ResNet34 network was trained on the public FER2013 data to predict emotions from facial images. To predict emotions in sentences, a KoBERT model was trained by transfer learning on the conversational speech dataset for emotion classification released publicly by AIHub. The DNN that predicts emotions from facial images achieved 65% accuracy, which is comparable to human emotion classification ability, and the DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which ordinary participants changed their facial expressions.
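A sketch of such a similarity-based feedback score, assuming both networks output probability distributions over the same emotion set and that cosine similarity is the metric (the paper does not specify which similarity measure it uses):

```python
# Hypothetical sketch of the practice-feedback score: cosine similarity
# between the emotion distribution predicted from the prompt sentence and
# the one predicted from the user's face.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def similarity_feedback(p_sentence: np.ndarray, p_face: np.ndarray) -> float:
    """Both inputs are probability vectors over EMOTIONS."""
    cos = np.dot(p_sentence, p_face) / (
        np.linalg.norm(p_sentence) * np.linalg.norm(p_face)
    )
    return float(100.0 * cos)     # scale to a 0-100 practice score

p_text = np.array([0.02, 0.01, 0.02, 0.85, 0.03, 0.05, 0.02])  # "happy" prompt
p_img  = np.array([0.05, 0.02, 0.03, 0.60, 0.10, 0.10, 0.10])  # user's expression
print(f"practice score: {similarity_feedback(p_text, p_img):.1f}")
```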

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that use information to create value is growing day by day, and advances in IT make information easy to collect and use, so many companies actively use customer information for marketing across a variety of industries. Into the 21st century, companies have actively used culture and the arts, closely linked to their commercial interests, to manage their corporate image and marketing. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a common tool of differentiation among firms, and many firms have turned customer experience into a new marketing strategy to respond effectively to competitive markets. Accordingly, the need for personalized services that provide new experiences based on personal profile information, which captures the characteristics of the individual, is emerging rapidly. Personalized service using a customer's individual profile information, such as language, symbols, behavior, and emotions, is very important today; through it we can judge the interaction between people and content and maximize customer experience and satisfaction. Various related works provide customer-centered services, and emotion recognition research in particular has been emerging recently. Existing research has mostly performed emotion recognition using bio-signals, focusing on voice and face, which show large emotional changes; however, limitations of equipment and service environments make it difficult to predict people's emotions this way. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers; this paper develops a model that recognizes people's emotional states from body gesture and posture using the difference image method, and we identify the best-validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and we showed participants suitable stimulus movies while collecting their body gestures and postures as their emotions changed. We then extracted body movements using the difference image method (a simplified sketch appears below) and refined the data to build the proposed model with a neural network. The proposed emotion prediction model used three time-frame sets (20, 30, and 40 frames), and we adopted the model with the best performance among them. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The proposed emotion prediction model was constructed as an artificial neural network: we used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron with one hidden layer and four output nodes. Based on the test set, learning was stopped at 50,000 iterations after reaching the minimum error, in order to find the best stopping point. We finally computed each model's accuracy and identified the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings of this research are expected to provide an effective algorithm for personalized services in various industries such as advertising, exhibitions, and performances.
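A sketch of the difference image method described above: absolute pixel differences between consecutive grayscale frames are thresholded into binary motion masks, and a window of frames is summarised by a simple motion-energy feature. The threshold value and the scalar summary are assumptions.

```python
# Hypothetical sketch of the difference-image step used to extract body
# movement within the paper's 20/30/40-frame windows.
import numpy as np

def difference_image(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    """prev, curr: uint8 grayscale frames of equal shape."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)   # 1 where motion occurred

def motion_energy(frames: np.ndarray) -> float:
    """Fraction of changed pixels across a window of frames, one simple
    scalar feature that could be fed to the neural network."""
    masks = [difference_image(frames[i - 1], frames[i]) for i in range(1, len(frames))]
    return float(np.mean(masks))

window = np.random.randint(0, 256, size=(20, 120, 160), dtype=np.uint8)
print(f"motion energy over 20 frames: {motion_energy(window):.3f}")
```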

Recent Trend in Measurement Techniques of Emotion Science (감성과학을 위한 측정기법의 최근 연구 동향)

  • Jung, Hyo-Il;Park, Tae-Sun;Lee, Bae-Hwan;Yun, Sung-Hyun;Lee, Woo-Young;Kim, Wang-Bae
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.235-242
    • /
    • 2010
  • Emotion science is one of the rapidly expanding engineering and scientific disciplines with a major impact on human society. The growing interest in emotion science and engineering owes much to the recent trend of merging various academic fields. In this paper, we review recent techniques for measuring emotion-related elements and their applications, including animal model systems for investigating neural networks and behaviour, artificial nose and neuronal chips for an in-depth understanding of how outer stimuli are sensed, and metabolic control using emotional stimulants such as sound. In particular, microfabrication techniques have made it possible to construct nano- and micron-scale sensing parts and chips that accommodate olfactory cells and neurons, giving us new opportunities to investigate emotion precisely. Recent developments in measurement techniques will help combine the social and natural sciences and consequently expand the scope of such studies.


Convergence Implementing Emotion Prediction Neural Network Based on Heart Rate Variability (HRV) (심박변이도를 이용한 인공신경망 기반 감정예측 모형에 관한 융복합 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.5
    • /
    • pp.33-41
    • /
    • 2018
  • The purpose of this study is to develop a more accurate and robust emotion prediction neural network (EPNN) model by combining heart rate variability (HRV) and a neural network. To improve prediction performance more reliably, the proposed EPNN model employs various types of activation functions, including hyperbolic tangent, linear, and Gaussian functions, all embedded in its hidden nodes (a sketch of such a mixed-activation hidden layer follows below). To verify the validity of the proposed EPNN model, a number of HRV metrics were calculated from 20 valid and qualified participants whose emotions were induced using a money game. To add rigor to the experiment, the participants' valence and arousal were measured and used as the output nodes of the EPNN. The experimental results reveal F-measures of 80% for valence and 95% for arousal, showing that the EPNN yields robust and well-balanced performance. The EPNN was compared with competing models, including a neural network, logistic regression, a support vector machine, and a random forest, and was more accurate and reliable than all of them. The results of this study can be effectively applied to many types of wearable computing devices once the ubiquitous digital health environment becomes feasible and permeates our everyday lives.
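A sketch of a hidden layer that mixes the three activation types named in the abstract. How the EPNN actually distributes activations across its hidden nodes is not specified, so the even three-way split below is an assumption.

```python
# Hypothetical sketch of an EPNN-style hidden layer mixing hyperbolic
# tangent, linear, and Gaussian activations across its nodes.
import numpy as np

rng = np.random.default_rng(0)

def epnn_hidden(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """x: (n_features,), W: (n_hidden, n_features), b: (n_hidden,)."""
    z = W @ x + b
    n = len(z) // 3
    h = np.empty_like(z)
    h[:n] = np.tanh(z[:n])                 # hyperbolic-tangent nodes
    h[n:2 * n] = z[n:2 * n]                # linear nodes
    h[2 * n:] = np.exp(-z[2 * n:] ** 2)    # Gaussian (RBF-style) nodes
    return h

hrv_features = rng.normal(size=8)          # stand-ins for SDNN, RMSSD, LF/HF, ...
W, b = rng.normal(size=(9, 8)), rng.normal(size=9)
print(epnn_hidden(hrv_features, W, b))
```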

GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model

  • Ahn, Hyunchul;Kim, Seongjin;Kim, Jae Kyeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.6
    • /
    • pp.2056-2069
    • /
    • 2014
  • In order to implement interactive and personalized Web services properly, it is necessary to understand the tangible and intangible responses of users and to recognize their emotional states. Recently, some studies have attempted to build emotional state estimation models based on facial expressions. Most of these studies have applied multiple regression analysis (MRA), artificial neural networks (ANN), or support vector regression (SVR) as the prediction algorithm, but the prediction accuracies have been relatively low. To improve the prediction performance of the emotion prediction model, we propose a novel SVR model that is optimized using a genetic algorithm (GA). Our proposed algorithm, GASVR, is designed to optimize the kernel parameters and the feature subsets of SVRs in order to predict the levels of two aspects of users' emotions: valence and arousal (a sketch of this search loop follows below). To validate the usefulness of GASVR, we collected a real-world data set of facial responses and emotional states via a survey, and applied GASVR and other algorithms, including MRA, ANN, and conventional SVR, to the data set. We found that GASVR outperformed all of the comparative algorithms in predicting the valence and arousal levels.
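A sketch of the kind of search loop GASVR implies: each chromosome encodes an RBF kernel's C and gamma together with a binary feature mask, and fitness is the cross-validated error of the resulting SVR. Population size, genetic operators, and encoding details are assumptions; only the idea of jointly tuning kernel parameters and feature subsets comes from the abstract.

```python
# Hypothetical sketch of a GA tuning SVR kernel parameters and feature
# subsets jointly, in the spirit of GASVR.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                    # stand-in facial features
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=120)  # e.g. valence

def fitness(chrom):
    c, gamma, mask = 10 ** chrom[0], 10 ** chrom[1], chrom[2:] > 0.5
    if not mask.any():
        return -np.inf                            # empty feature subset is invalid
    model = SVR(kernel="rbf", C=c, gamma=gamma)
    return cross_val_score(model, X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

pop = rng.uniform(-1, 1, size=(20, 12))           # [log10 C, log10 gamma, 10 mask genes]
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    cut = rng.integers(1, 12, size=10)            # one-point crossover
    kids = np.array([np.concatenate([parents[i][:cut[i]],
                                     parents[(i + 1) % 10][cut[i]:]])
                     for i in range(10)])
    kids += rng.normal(scale=0.1, size=kids.shape)  # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best log10(C), log10(gamma):", best[:2])
```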

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.557-561
    • /
    • 2021
  • Metadata has become an essential element for recommending video content to users, but it is typically generated manually by video content providers. This paper studies a method for automatically generating metadata to replace the existing manual input process. In addition to the emotion-tag extraction method of our previous study, we investigated a method for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using a transfer-learned ResNet34 artificial neural network (a sketch of this branch follows below), and the language of the speakers in the movie was detected through speech recognition. Through this, we confirmed the possibility of automatically generating metadata with artificial intelligence.
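A sketch of the genre branch under stated assumptions: a mel-spectrogram is fed to a transfer-learned ResNet34. The class count, preprocessing, and library choices are assumptions, and the speech-recognition language branch is omitted for brevity.

```python
# Hypothetical sketch: mel-spectrogram -> transfer-learned ResNet34 genre
# classifier, per the abstract's description of the genre branch.
import torch
import torchaudio
import torchvision

N_GENRES = 8                                        # assumed label-set size

genre_net = torchvision.models.resnet34(weights="IMAGENET1K_V1")
genre_net.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                  padding=3, bias=False)   # 1-channel spectrogram
genre_net.fc = torch.nn.Linear(genre_net.fc.in_features, N_GENRES)
genre_net.eval()

to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)

def predict_genre(waveform: torch.Tensor) -> int:
    """waveform: (1, n_samples) mono audio at 16 kHz."""
    spec = to_mel(waveform).unsqueeze(0)            # (1, 1, 128, time)
    with torch.no_grad():
        return int(genre_net(spec).argmax(dim=1))

waveform = torch.randn(1, 16000 * 5)                # 5 s of stand-in audio
print("predicted genre id:", predict_genre(waveform))
```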

Detects depression-related emotions in user input sentences (사용자 입력 문장에서 우울 관련 감정 탐지)

  • Oh, Jaedong;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.12
    • /
    • pp.1759-1768
    • /
    • 2022
  • This paper proposes a model that detects depression-related emotions in a user's utterances, using the wellness dialogue scripts provided by AI Hub, topic-specific daily conversation datasets, and chatbot datasets published on GitHub. There are 18 depression-related emotions, including depression and lethargy, and the emotion classification task is performed with the KoBERT and KoELECTRA models, which show high performance among language models. For model-specific performance comparisons, we build diverse datasets and compare classification results while adjusting batch sizes and learning rates for the models that perform well. Furthermore, to reflect that a person can feel multiple emotions at the same time, the model performs multi-label classification by selecting as correct answers all labels whose output values exceed a specific threshold (a sketch of this thresholding step follows below). The best-performing model derived through this process is called the depression model, and it is then used to classify depression-related emotions in user utterances.
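A sketch of the thresholding step: every emotion whose sigmoid score clears a fixed cut-off is returned, so one utterance may carry several of the 18 depression-related emotions at once. The label names shown and the 0.5 threshold are assumptions; only "all labels above a specific threshold" comes from the abstract.

```python
# Hypothetical sketch of threshold-based multi-label emotion selection.
import numpy as np

LABELS = ["depressed", "lethargic", "anxious", "lonely"]  # 4 of the 18 labels

def select_emotions(logits: np.ndarray, threshold: float = 0.5) -> list[str]:
    probs = 1.0 / (1.0 + np.exp(-logits))           # independent sigmoids
    return [lab for lab, p in zip(LABELS, probs) if p >= threshold]

logits = np.array([2.1, -0.3, -1.0, 1.5])           # stand-in model outputs
print(select_emotions(logits))                       # ['depressed', 'lonely']
```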