• Title/Summary/Keyword: artificial emotion

165 search results

Local and Global Attention Fusion Network For Facial Emotion Recognition (얼굴 감정 인식을 위한 로컬 및 글로벌 어텐션 퓨전 네트워크)

  • Minh-Hai Tran;Tram-Tran Nguyen Quynh;Nhu-Tai Do;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.493-495 / 2023
  • Facial emotion recognition has recently attracted much attention, and deep learning methods with attention mechanisms have been incorporated to improve it. Fusion approaches have further improved accuracy by combining various types of information. This research proposes a fusion network with self-attention and local attention mechanisms built on a multi-layer perceptron. The network extracts discriminative features from facial images using models pre-trained on the RAF-DB dataset, and it outperforms other fusion methods on RAF-DB with impressive results.
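
A minimal sketch (not the authors' implementation) of the fusion idea: a global self-attention branch and a local gating branch over pre-extracted facial feature tokens, merged by an MLP head. The dimensions, gating form, and head layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse a global self-attention branch with a local attention branch."""
    def __init__(self, dim=512, heads=4, num_classes=7):
        super().__init__()
        # Global branch: self-attention across all spatial tokens.
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Local branch: per-token sigmoid gate emphasizing salient regions.
        self.local_gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        # MLP head over the concatenated global + local descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes))

    def forward(self, tokens):                  # tokens: (batch, n_tokens, dim)
        g, _ = self.global_attn(tokens, tokens, tokens)
        loc = tokens * self.local_gate(tokens)
        fused = torch.cat([g.mean(dim=1), loc.mean(dim=1)], dim=-1)
        return self.mlp(fused)

# E.g. a 7x7 grid of CNN backbone features -> 7 basic-emotion logits.
logits = AttentionFusion()(torch.randn(2, 49, 512))
print(logits.shape)                             # torch.Size([2, 7])
```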

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.713-725 / 2022
  • Artificial intelligence mostly shows no definite change in emotion, which makes it hard to demonstrate empathy in communication with humans. If frequency modification is applied to a neutral voice, or a different emotional frequency is added to it, artificial intelligence with emotions becomes possible. This study proposes emotion conversion using voice frequency synthesis based on a Generative Adversarial Network (GAN). The proposed method extracts frequencies from the speech of twenty-four actors and actresses; that is, it extracts the voice features of their different emotions, preserves the linguistic features, and converts only the emotions. It then generates frequencies with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) to model prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency using amplitude scaling: with spectral conversion on a logarithmic scale, the frequency is converted in a way that reflects human hearing characteristics. The proposed technique thus converts the emotion of speech so that artificially generated voices can express emotions.
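
A hedged sketch of the amplitude-scaling step: correcting a converted magnitude spectrum on a logarithmic scale so the adjustment tracks human loudness perception. The interpolation rule is an illustrative assumption, not the paper's exact formula.

```python
import numpy as np

def amplitude_scale(src_mag, tgt_mag, alpha=0.7, eps=1e-8):
    """Move a source magnitude spectrogram toward a target emotion's
    energy profile in the log (perceptual) domain."""
    log_src = np.log(src_mag + eps)
    log_tgt = np.log(tgt_mag + eps)
    # Interpolate log-magnitudes; alpha=1 reproduces the target energy.
    return np.exp((1 - alpha) * log_src + alpha * log_tgt)

rng = np.random.default_rng(0)
neutral = np.abs(rng.standard_normal((513, 100)))   # |STFT| of neutral speech
angry = neutral * 1.5                               # stand-in target profile
converted = amplitude_scale(neutral, angry)         # partially emotionalized
```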

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyun;Sim Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.3 / pp.284-288 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features: an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature; the selection of emotion features and of a suitable classification method are still disputed, and we hope this paper serves as a reference in those disputes.
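
The comparison the abstract describes can be sketched with off-the-shelf classifiers over one shared feature matrix. Below, synthetic features stand in for real prosodic statistics, and a nearest-centroid classifier stands in for the LBG vector-quantization codebook; both substitutions are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestCentroid
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 24))      # placeholder pitch/energy features
y = rng.integers(0, 4, 400)             # four emotion classes

models = {
    "ANN": MLPClassifier(max_iter=500, random_state=0),
    "Bayesian": GaussianNB(),
    "PCA+ANN": make_pipeline(PCA(n_components=8),
                             MLPClassifier(max_iter=500, random_state=0)),
    "centroid (LBG-like)": NearestCentroid(),
}
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, model in models.items():
    print(name, model.fit(Xtr, ytr).score(Xte, yte))
```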

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyeon;Sim Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.347-350 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features: an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature; the selection of emotion features and of a suitable classification method are still disputed, and we hope this paper serves as a reference in those disputes.

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. Emotion recognition in the robot is based on facial images, using the motion and position of many facial features. A tracking algorithm is applied so that the mobile robot can recognize a moving user, and a facial region detection algorithm eliminates the skin color of hands and the background outside the facial region in the captured user image. After normalization, in which the image is enlarged or reduced according to the distance of the detected facial region and rotated according to the angle of the face, the mobile robot obtains facial images of a fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multi-layer perceptron, a form of Artificial Neural Network (ANN), is used as the pattern recognition method, with the Back Propagation (BP) algorithm for learning. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates change according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. With this system, complex human emotions are expressed through an avatar on the LCD.
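
A hedged sketch of the normalization step described above: crop the detected face region, rotate it by the estimated face angle, and resize it to a fixed size so the MLP always receives a canonically aligned input. The detection outputs (`box`, `angle`) are assumed to come from the tracking and face-detection stages.

```python
import cv2
import numpy as np

def normalize_face(image, box, angle, size=(64, 64)):
    """Rotate and rescale a detected face region to a fixed-size patch."""
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    # Rotate around the patch center to undo head tilt.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    face = cv2.warpAffine(face, m, (w, h))
    # Enlarge or reduce to a fixed size regardless of camera distance.
    return cv2.resize(face, size).astype(np.float32) / 255.0

frame = np.random.randint(0, 256, (240, 320, 3), np.uint8)  # stand-in frame
patch = normalize_face(frame, box=(100, 60, 80, 80), angle=12.0)
print(patch.shape)   # (64, 64, 3), ready to flatten into the MLP input
```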

Artificial Intelligence and Literary Sensibility (인공지능과 문학 감성의 상호 연결)

  • Seunghee Sone
    • Science of Emotion and Sensibility / v.26 no.4 / pp.115-124 / 2023
  • This study explores the intersection of literary studies and artificial intelligence (AI), focusing on the common theme of human emotions to foster complementary advancements in both fields. By adopting a comparative perspective, the paper investigates emotion as a shared focal point, analyzing various emotion-related concepts from both literary and AI perspectives. Despite the scarcity of research on the fusion of AI and literary studies, this study pioneers an interdisciplinary approach within the humanities, anticipating future developments in AI. It proposes that literary sensibility can contribute to AI by formalizing subjective literary emotions, thereby enhancing AI's understanding of complex human emotions. This paper's methodology involves the terminology-centered extraction of emotions, aiming to blend subjective imagination with objective technology. This fusion is expected to not only deepen AI's comprehension of human complexities but also broaden literary research by rapidly analyzing diverse human data. The study emphasizes the need for a collaborative dialogue between literature and engineering, recognizing each field's limitations while pursuing a convergent enhancement that transcends these boundaries.
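
A toy sketch of what terminology-centered emotion extraction could look like: counting occurrences of emotion-bearing terms in a passage. The lexicon here is a tiny illustrative stand-in, not the study's actual terminology.

```python
from collections import Counter

# Hypothetical mini-lexicon mapping literary terms to emotion labels.
EMOTION_LEXICON = {
    "grief": "sadness", "mourn": "sadness",
    "rapture": "joy", "delight": "joy",
    "dread": "fear", "tremble": "fear",
}

def extract_emotions(text):
    """Tally emotion labels for every lexicon term found in the text."""
    tokens = text.lower().split()
    return Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)

print(extract_emotions("she would tremble with dread yet mourn in delight"))
# Counter({'fear': 2, 'sadness': 1, 'joy': 1})
```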

Multiclass Music Classification Approach Based on Genre and Emotion

  • Jonghwa Kim
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.27-32 / 2024
  • Reliable and fine-grained musical metadata are required for efficient search of rapidly increasing numbers of music files. In particular, since the primary motives for listening to music are its emotional effect, diversion, and the memories it awakens, emotion classification alongside genre classification is crucial. In this paper, as an initial approach toward a "ground-truth" dataset for music emotion and genre classification, we carefully generated a music corpus through labeling by a large number of ordinary people. To verify the suitability of the dataset through classification results, we extracted features according to the MPEG-7 audio standard and applied different machine learning models, both statistical and deep neural networks, to classify the dataset automatically. Using standard hyperparameter settings, we reached an accuracy of 93% for genre classification and 80% for emotion classification, and we believe our dataset can serve as a meaningful comparative dataset in this research field.
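
A minimal sketch of the pipeline the abstract outlines: frame-level spectral descriptors (in the spirit of MPEG-7 low-level audio features) summarized per clip and fed to a standard classifier. The random signals and labels are placeholders for a real labeled corpus, and the feature set is an assumption.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(y, sr=22050):
    """Summarize frame-level spectral descriptors as per-clip statistics."""
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    flatness = librosa.feature.spectral_flatness(y=y)
    rms = librosa.feature.rms(y=y)
    frames = np.concatenate([centroid, flatness, rms], axis=0)
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

rng = np.random.default_rng(0)
X = np.stack([clip_features(rng.standard_normal(22050)) for _ in range(40)])
y = rng.integers(0, 4, 40)              # e.g. four emotion (or genre) labels
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))                  # training accuracy on the toy corpus
```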

Optimized patch feature extraction using CNN for emotion recognition (감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출)

  • Irfan Haider;Aera kim;Guee-Sang Lee;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.510-512 / 2023
  • In order to enhance a model's capability for detecting facial expressions, this research proposes a pipeline that makes use of a GradCAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching module divides the original face image into four equal parts, each of which is fed into a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using GradCAM, and this token is merged with the feature vector using principal component analysis. A convolutional neural network based on transfer learning is then used to extract deep features. Applied to the public MMI dataset, this technique achieved a validation accuracy of 96.06%, showing the effectiveness of the method.
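
A hedged sketch of the patching module described above: split a face image into four equal quadrants and run each through a small 2D convolutional layer to get one feature vector per patch. The GradCAM weighting and PCA merge are omitted, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# One shared conv layer turning each quadrant into a feature vector.
conv = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())

def patch_features(img):                 # img: (batch, 3, H, W)
    """Divide the image into four equal parts and featurize each."""
    _, _, h, w = img.shape
    quadrants = [img[..., :h // 2, :w // 2], img[..., :h // 2, w // 2:],
                 img[..., h // 2:, :w // 2], img[..., h // 2:, w // 2:]]
    return torch.stack([conv(q) for q in quadrants], dim=1)

feats = patch_features(torch.randn(2, 3, 112, 112))
print(feats.shape)                       # torch.Size([2, 4, 16])
```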

Emotion Adjustment Method for Diverse Expressions of Same Emotion Depending on Each Character's Characteristics (캐릭터 성격에 따른 동일 감정 표현의 다양화를 위한 감정 조정 방안)

  • Lee, Chang-Sook;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society / v.10 no.2 / pp.37-47 / 2010
  • Along with language, emotion is an effective means of expression: by expressing our emotions as well as speaking, we deliver our message better. Because each person expresses the same emotion differently, emotional expression is a useful gauge of individual personality. To avoid monotonous emotional expression in virtual characters, it is therefore necessary to adjust the creation and deletion of the same emotion depending on each character's personality. This paper defines the personality characteristics that have an impact on each emotion and proposes a method to adjust the emotions accordingly. The relationship between a particular emotion and personality characteristics is defined by matching the significance of specified personality characteristics with their lexical meaning. In addition, using the raw score, the weight needed for the adjustment, continuation, and deletion of each emotion is defined, and the emotion is adjusted accordingly. When the same emotion was adjusted using actual personality test data, different results were observed for different personalities. The study used the NEO Personality Inventory (NEO-PI), which consists of 5 broad domains and 30 sub-domains.
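
A toy sketch of the adjustment idea: scale a raw emotion intensity by a weight derived from the character's personality scores, deleting the emotion when it falls below a threshold. The trait-to-emotion mapping and the weighting formula are illustrative assumptions, not the paper's NEO-PI derivation.

```python
# NEO-PI-style domain scores on a 0-100 scale (raw-score stand-ins).
personality = {"neuroticism": 75, "extraversion": 30}

# Which trait amplifies or dampens each emotion (assumed mapping).
TRAIT_FOR_EMOTION = {"anger": "neuroticism", "joy": "extraversion"}

def adjust(emotion, intensity, traits, threshold=0.2):
    """Weight an emotion's intensity by personality; delete weak emotions."""
    weight = traits[TRAIT_FOR_EMOTION[emotion]] / 50.0   # 1.0 = average trait
    adjusted = intensity * weight
    # Below the threshold the emotion is deleted rather than expressed.
    return adjusted if adjusted >= threshold else 0.0

print(adjust("anger", 0.6, personality))   # amplified to ~0.9 (expressed)
print(adjust("joy", 0.1, personality))     # dampened below threshold: 0.0
```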

Pattern Recognition Methods for Emotion Recognition with speech signal

  • Park Chang-Hyun;Sim Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.2 / pp.150-154 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features: an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section.