• Title/Summary/Keyword: Speech Emotion Recognition (음성감정인식)


Speech Emotion Recognition using Feature Selection and Fusion Method (특징 선택과 융합 방법을 이용한 음성 감정 인식)

  • Kim, Weon-Goo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.8
    • /
    • pp.1265-1271
    • /
    • 2017
  • In this paper, a speech parameter fusion method is studied to improve the performance of a conventional emotion recognition system. For this purpose, the combination of parameters showing the best performance is selected by fusing the cepstrum parameters used in the conventional system with various pitch parameters. The pitch parameters were generated from the pitch of speech using numerical and statistical methods. Performance was evaluated on an emotion recognition system using a Gaussian mixture model (GMM) to select the pitch parameters that perform best in combination with the cepstrum parameters, with sequential feature selection as the selection method. In an experiment distinguishing the four emotions of normal, joy, sadness, and anger, fifteen of the 56 pitch parameters were selected and showed the best recognition performance when fused with the cepstrum and delta cepstrum coefficients. This represents a 48.9% reduction in error compared to an emotion recognition system using only pitch parameters.
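
As a rough illustration of the approach described above, the Python sketch below performs sequential forward selection of pitch-derived features fused with fixed cepstral features, scored by a per-emotion GMM classifier. The synthetic data, dimensions, GMM settings, and stopping rule are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
EMOTIONS = ["normal", "joy", "sadness", "anger"]

# Synthetic stand-ins: 12 cepstral dims (always kept) and 56 candidate
# pitch-derived dims, mirroring the counts mentioned in the abstract.
X_cep = rng.normal(size=(400, 12))
X_pitch = rng.normal(size=(400, 56))
y = rng.integers(0, len(EMOTIONS), size=400)

def gmm_accuracy(X, y):
    """Train one diagonal-covariance GMM per emotion; classify by max log-likelihood."""
    models = [GaussianMixture(n_components=4, covariance_type="diag",
                              random_state=0).fit(X[y == c])
              for c in range(len(EMOTIONS))]
    scores = np.stack([m.score_samples(X) for m in models], axis=1)
    return float(np.mean(scores.argmax(axis=1) == y))

selected, remaining = [], list(range(X_pitch.shape[1]))
best_acc = gmm_accuracy(X_cep, y)          # baseline: cepstral features only
while remaining and len(selected) < 15:    # the paper reports 15 selected features
    acc, j = max((gmm_accuracy(np.hstack([X_cep, X_pitch[:, selected + [j]]]), y), j)
                 for j in remaining)
    if acc <= best_acc:                    # stop when no candidate helps
        break
    best_acc = acc
    selected.append(j)
    remaining.remove(j)

# NOTE: accuracy is computed on the training data for brevity; a real
# evaluation would use held-out utterances.
print(f"selected pitch features: {selected}, accuracy: {best_acc:.3f}")
```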

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • 김원구
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.470-473
    • /
    • 2004
  • In this paper, speech feature parameters for emotion recognition from speech signals are compared. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with and without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters served as prosodic information, and MFCC parameters as phonetic information. In the experiments, a vector quantization (VQ) based system was used for speaker- and context-independent emotion recognition. The results showed that the VQ-based emotion recognizer using MFCC parameters outperformed the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
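
The following is a minimal sketch of the VQ-based recognizer the abstract describes: one k-means codebook per emotion trained on MFCC frames, with an utterance assigned to the emotion whose codebook yields the lowest average quantization distortion. The simulated MFCC frames and the 16-entry codebook size are assumptions; real features would come from, e.g., librosa's MFCC extraction with cepstral mean subtraction.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
EMOTIONS = ["normal", "joy", "sadness", "anger"]

# Simulated training data: per emotion, a (frames x 13) MFCC matrix with a
# class-specific offset so the codebooks are distinguishable.
train = {e: rng.normal(loc=i, size=(500, 13)) for i, e in enumerate(EMOTIONS)}

# One 16-entry codebook per emotion (codebook size is an assumption).
codebooks = {e: KMeans(n_clusters=16, n_init=4, random_state=0)
                 .fit(X).cluster_centers_
             for e, X in train.items()}

def classify(mfcc_frames):
    """Return the emotion whose codebook quantizes the frames with least distortion."""
    def distortion(cb):
        d = np.linalg.norm(mfcc_frames[:, None, :] - cb[None, :, :], axis=-1)
        return d.min(axis=1).mean()      # mean distance to nearest codeword
    return min(EMOTIONS, key=lambda e: distortion(codebooks[e]))

test_utt = rng.normal(loc=2, size=(120, 13))   # frames resembling "sadness"
print(classify(test_utt))
```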


Emotion Recognition Using Tone and Tempo Based on Voice for IoT (IoT를 위한 음성신호 기반의 톤, 템포 특징벡터를 이용한 감정인식)

  • Byun, Sung-Woo;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.1
    • /
    • pp.116-121
    • /
    • 2016
  • In the Internet of Things (IoT) area, research on recognizing human emotion has been increasing recently. Generally, multi-modal features such as facial images, bio-signals, and voice signals are used for emotion recognition. Among these, voice signals are the most convenient to acquire. This paper proposes an emotion recognition method using tone and tempo features based on voice. For this, we build voice databases from broadcast media content. Emotion recognition tests are carried out with tone and tempo features extracted from these databases. The results show a noticeable improvement in accuracy compared to conventional methods using only pitch.
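
As a hedged sketch of the two feature types the paper uses, the snippet below extracts a tone proxy (fundamental frequency via the pYIN estimator) and a tempo estimate with librosa. The paper's exact feature definitions may differ; this only illustrates the general idea on a mono waveform.

```python
import numpy as np
import librosa

# Load any mono signal; librosa's bundled example is used here for
# convenience (it is downloaded on first use).
y, sr = librosa.load(librosa.ex("trumpet"))

# Tone: F0 track from the pYIN pitch estimator, summarized over voiced frames.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"))
tone_mean = np.nanmean(f0[voiced])
tone_std = np.nanstd(f0[voiced])

# Tempo: global tempo estimate (beats per minute) from the beat tracker.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

feature_vector = np.array([tone_mean, tone_std, float(tempo)])
print(feature_vector)   # [mean F0, F0 std, BPM]
```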

VR Companion Animal Communion System for Pet Loss Syndrome (펫로스 증후군을 위한 VR 반려동물 교감 시스템)

  • Choi, Hyeong-Mun;Moon, Mikyeong;Lee, Gun-ho
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.563-564
    • /
    • 2021
  • As the number of households with companion animals increases, the number of owners suffering from pet loss syndrome after losing a pet is also increasing. To help heal pet loss syndrome, it is necessary to let owners meet their pets, even if only virtually, and speak and act with them as they used to, so that they can gradually say goodbye. This paper describes a system in which an owner can directly commune with a 3D-modeled companion animal through VR. The system helps owners speak and act with their departed pets as they did in everyday life, allowing a gradual catharsis of emotion.


Speech Emotion Recognition Based on Deep Networks: A Review (딥네트워크 기반 음성 감정인식 기술 동향)

  • Mustaqeem, Mustaqeem;Kwon, Soonil
    • Annual Conference of KIPS
    • /
    • 2021.05a
    • /
    • pp.331-334
    • /
    • 2021
  • In recent years, a significant amount of research and development has been devoted to the use of deep learning (DL), and in particular convolutional neural networks (CNN), for speech emotion recognition (SER). These techniques typically focus on applying CNNs to emotion recognition applications. Numerous deep-learning-based mechanisms have been proposed, which matter for SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based methods are producing quite promising results in many fields, including automatic speech recognition, and therefore attract many studies and investigations. This article reviews and evaluates the advances in the SER domain while also discussing the existing DL- and CNN-based SER studies.
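
Since the review surveys CNN-based SER, the following minimal PyTorch sketch shows the typical pipeline such work describes: a small CNN classifying fixed-size log-mel spectrograms into emotion classes. The architecture, input shape, and class count are illustrative assumptions, not any specific model from the review.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_emotions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> (B, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, n_emotions)

    def forward(self, x):                      # x: (B, 1, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))

model = SpectrogramCNN()
logits = model(torch.randn(8, 1, 64, 128))    # batch of 8 fake spectrograms
print(logits.shape)                           # torch.Size([8, 4])
```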

Building Living Lab for Acquiring Behavioral Data for Early Screening of Developmental Disorders

  • Kim, Jung-Jun;Kwon, Yong-Seop;Kim, Min-Gyu;Kim, Eun-Soo;Kim, Kyung-Ho;Sohn, Dong-Seop
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.8
    • /
    • pp.47-54
    • /
    • 2020
  • Developmental disorders are impairments of the brain and/or central nervous system and refer to disorders of brain function that affect language, communication skills, perception, sociality, and so on. In diagnosing developmental disorders, behavioral responses such as expressing emotions appropriate to the situation are among the observable indicators of whether an individual has a disorder. However, diagnosis by observation allows subjective evaluation, which can lead to erroneous conclusions. This research presents a technological environment and data acquisition system for AI-based screening of autism. The environment was built around the activities of two screening protocols, the Autism Diagnostic Observation Schedule (ADOS) and the Behavior Development Screening for Toddlers (BeDevel). The activities between therapist and child during screening are fully recorded, and the proposed software supports recording, monitoring, and data tagging for training AI algorithms.

The AI Promotion Strategy of Korea Defense for the AI Expansion in Defense Domain (국방분야 인공지능 저변화를 위한 대한민국 국방 인공지능 추진전략)

  • Lee, Seung-Mok;Kim, Young-Gon;An, Kyung-Soo
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.2
    • /
    • pp.59-73
    • /
    • 2021
  • Recently, artificial intelligence has spread rapidly, becoming popularized and expanding into areas such as voice-recognition personal services, and major countries have established AI promotion strategies. In South Korea's defense domain, however, the influence of AI remains low despite the country's geopolitical situation with North Korea. This paper presents six strategies for promoting AI in South Korea's defense, including establishing roadmaps, securing manpower, installing an AI foundation, and strengthening cooperation among stakeholders, in order to increase the impact of defense AI and promote it successfully. These suggestions are expected to establish a foundation for expanding the adoption of artificial intelligence.

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.29-40
    • /
    • 2012
  • A challenging research issue of growing importance to those working in human-computer interaction is endowing a machine with emotional intelligence. Emotion recognition technology thus plays an important role in human-computer interaction research, enabling more natural, human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system using face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed with 2D-PCA of the MCS-LBP image and a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model over pitch and mel-frequency cepstral coefficient features. The individual matching scores from face and speech are combined by a weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared to the uni-modal approaches, confirming a significant performance improvement.
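
The weighted-summation fusion step described above can be sketched as follows. The min-max normalization used to make face distances and speech log-likelihoods comparable, the example scores, and the weight value are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

EMOTIONS = ["neutral", "joy", "sadness", "anger"]

def to_match_scores(values, higher_is_better):
    """Map raw scores to [0, 1] so modalities are comparable."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)   # min-max normalize
    return v if higher_is_better else 1.0 - v         # flip distances

face_distances = [4.2, 1.1, 3.5, 2.8]                 # smaller = better match
speech_loglik = [-310.0, -290.5, -305.2, -298.7]      # larger = better match

w = 0.6   # face weight; 1 - w goes to speech (value is illustrative)
fused = (w * to_match_scores(face_distances, False)
         + (1 - w) * to_match_scores(speech_loglik, True))
print(EMOTIONS[int(fused.argmax())])                  # -> "joy"
```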

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.557-561
    • /
    • 2021
  • Metadata has become an essential element for recommending video content to users, but it is generated manually by video content providers. This paper studies a method for automatically generating metadata in place of the existing manual input process. In addition to the emotion-tag extraction of our previous study, we examine automatically generating genre and country-of-production metadata from movie audio. The genre is predicted from the audio spectrogram using a ResNet34 neural network with transfer learning, and the language of the speakers in the movie is detected through speech recognition. The results confirm the feasibility of automatically generating metadata with artificial intelligence.
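
A hedged sketch of the transfer-learning step described above: a torchvision ResNet34 pretrained on ImageNet, with its final layer replaced for genre classification on spectrogram images. The genre count and the way spectrograms are rendered as images are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

N_GENRES = 10   # illustrative; the paper's label set may differ

# Load ImageNet weights (downloaded on first use) and freeze the backbone.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head with a new trainable layer.
model.fc = nn.Linear(model.fc.in_features, N_GENRES)

# Spectrograms rendered as 3-channel 224x224 images, as ResNet expects.
x = torch.randn(4, 3, 224, 224)
print(model(x).shape)            # torch.Size([4, 10])
```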

Framework Switching of Speaker Overlap Detection System (화자 겹침 검출 시스템의 프레임워크 전환 연구)

  • Kim, Hoinam;Park, Jisu;Cha, Shin;Son, Kyung A;Yun, Young-Sun;Park, Jeon Gue
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.1
    • /
    • pp.101-113
    • /
    • 2021
  • In this paper, we introduce a speaker overlap detection system and examine the process of porting an existing system to a specific artificial intelligence framework. Speaker overlap occurs when two or more speakers speak at the same time during a conversation; it degrades performance in speech recognition and speaker recognition, so much research aims to detect it and prevent that degradation. Recently, as applications of artificial intelligence increase, so does the demand for switching between AI frameworks. However, performance differences arising from the unique characteristics of each framework make such switching difficult. This paper explains the process of converting a Keras-based speaker overlap detection system to a PyTorch-based one and examines the components involved. After the conversion, the PyTorch-based system showed better performance than the existing Keras-based system, so this work has value as a foundational study on systematic framework conversion.
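
To illustrate the kind of conversion the paper discusses, the sketch below defines the same tiny overlap-detection classifier first in Keras and then in PyTorch. The layer sizes and input dimension are illustrative, not the paper's model; the point is the one-to-one mapping of components (Dense to Linear, sigmoid output for overlap / no-overlap).

```python
# Keras version of a minimal per-frame overlap classifier.
from tensorflow import keras

keras_model = keras.Sequential([
    keras.Input(shape=(40,)),                         # e.g. 40 acoustic features
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),      # P(overlap)
])

# Equivalent PyTorch version, layer for layer.
import torch
import torch.nn as nn

torch_model = nn.Sequential(
    nn.Linear(40, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

print(torch_model(torch.randn(2, 40)).shape)          # torch.Size([2, 1])
```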