• Title/Summary/Keyword: 음성 피쳐 (speech features)

Search results: 7

Conformer-based Elderly Speech Recognition using Feature Fusion Module (피쳐 퓨전 모듈을 이용한 콘포머 기반의 노인 음성 인식)

  • Minsik Lee; Jihie Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.39-43 / 2023
  • Automatic Speech Recognition (ASR) is the technology by which a computer converts human speech into text. ASR systems are used in a wide range of applications, including voice command and control, voice search, text transcription, and automatic speech translation. Despite these efforts, elderly speech recognition (ESR) remains difficult. This study proposes an elderly speech recognition model based on a Conformer and a Feature Fusion Module (FFM). Training and evaluation are carried out on the VOTE400 (Voice Of The Elderly 400 Hours) dataset. The work is significant in that it presents a deep learning model for elderly speech recognition using a Conformer with fused features, a combination that has rarely been explored. By achieving higher accuracy than a plain Conformer model, it also contributes to research on deep learning models for elderly speech recognition. A minimal illustrative sketch of the fusion idea follows this entry.

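The abstract does not specify the internals of the Feature Fusion Module, so the following is only a minimal sketch of one plausible fusion scheme: two acoustic feature streams (e.g., a filterbank stream and a pitch/energy stream) are projected to a common size and concatenated before being fed to a Conformer-style encoder. All names and dimensions here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a feature-fusion front end (not the paper's FFM).
import torch
import torch.nn as nn

class SimpleFeatureFusion(nn.Module):
    """Project two feature streams to a common size and fuse by concatenation."""
    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_out)   # e.g., log-mel filterbank stream
        self.proj_b = nn.Linear(dim_b, dim_out)   # e.g., pitch/energy stream
        self.mix = nn.Linear(2 * dim_out, dim_out)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.proj_a(feats_a), self.proj_b(feats_b)], dim=-1)
        return torch.relu(self.mix(fused))        # (batch, time, dim_out) for the encoder

# Toy usage: batch of 2 utterances, 100 frames, 80-dim fbank + 4-dim pitch features.
fusion = SimpleFeatureFusion(dim_a=80, dim_b=4, dim_out=144)
out = fusion(torch.randn(2, 100, 80), torch.randn(2, 100, 4))
print(out.shape)  # torch.Size([2, 100, 144])
```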

On the Importance of Tonal Features for Speech Emotion Recognition (음성 감정인식에서의 톤 정보의 중요성 연구)

  • Lee, Jung-In; Kang, Hong-Goo
    • Journal of Broadcast Engineering / v.18 no.5 / pp.713-721 / 2013
  • This paper examines the effectiveness of chroma-based tonal features for speech emotion recognition. Just as the tonality associated with major or minor keys affects the perception of musical mood, the tonality of speech affects the perception of the emotional state of spoken utterances. To justify this assertion, subjective listening tests were carried out using signals synthesized from chroma features; the results show that tonality contributes especially to the perception of negative emotions such as anger and sadness. In automatic emotion recognition tests, the modified chroma-based tonal features produce a noticeable improvement in accuracy when they supplement conventional log-frequency power coefficient (LFPC) based spectral features. A short chroma-extraction sketch follows this entry.
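As a hedged illustration only (the paper's modified tonal features are not detailed in the abstract), the sketch below extracts standard 12-bin chroma features with librosa; the file path and frame parameters are placeholders.

```python
# Illustrative chroma extraction (standard librosa features, not the paper's modified variant).
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)                    # placeholder path
chroma = librosa.feature.chroma_stft(y=y, sr=sr,
                                     n_fft=512, hop_length=160)    # shape: (12, n_frames)

# One simple utterance-level summary: mean and std of each chroma bin.
tonal_vector = np.concatenate([chroma.mean(axis=1), chroma.std(axis=1)])
print(tonal_vector.shape)  # (24,)
```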

Voice Activity Detection Using Global Speech Absence Probability Based on Teager Energy in Noisy Environments (잡음환경에서 Teager Energy 기반의 전역 음성부재확률을 이용하는 음성검출)

  • Park, Yun-Sik; Lee, Sang-Min
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.1 / pp.97-103 / 2012
  • In this paper, we propose a novel voice activity detection (VAD) algorithm to effectively distinguish speech from non-speech in various noisy environments. Global speech absence probability (GSAP), derived from the likelihood ratio (LR) of a statistical model, is widely used as the feature parameter for VAD. However, the feature parameter based on conventional GSAP is not sufficient to distinguish speech from noise at low signal-to-noise ratios (SNRs). The presented VAD algorithm uses GSAP based on Teager energy (TE) as the feature parameter to improve decisions on speech segments in noisy environments. The performance of the proposed VAD algorithm is evaluated with objective tests under various environments and yields better results than conventional methods. A simplified Teager-energy sketch follows this entry.
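The discrete Teager energy operator itself is standard, psi[n] = x[n]^2 - x[n-1]*x[n+1]; the frame-level thresholding below is only a simplified stand-in for the paper's statistical GSAP-based decision, with frame length and threshold chosen arbitrarily for illustration.

```python
# Teager energy operator with a naive frame-level VAD decision (illustrative only;
# the paper derives a global speech absence probability from a statistical model).
import numpy as np

def teager_energy(x: np.ndarray) -> np.ndarray:
    """psi[n] = x[n]^2 - x[n-1] * x[n+1], defined for the interior samples."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def naive_vad(x: np.ndarray, frame_len: int = 320, threshold: float = 1e-4) -> np.ndarray:
    """Flag each frame as speech if its mean Teager energy exceeds an arbitrary threshold."""
    te = teager_energy(x)
    n_frames = len(te) // frame_len
    frames = te[:n_frames * frame_len].reshape(n_frames, frame_len)
    return frames.mean(axis=1) > threshold

# Toy signal: one second of silence followed by a 500 Hz tone at 16 kHz.
fs = 16000
tone = 0.3 * np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
sig = np.concatenate([np.zeros(fs), tone])
print(naive_vad(sig))  # roughly: first half of the frames False (silence), rest True (tone)
```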

Analysis of Voice Quality Features and Their Contribution to Emotion Recognition (음성감정인식에서 음색 특성 및 영향 분석)

  • Lee, Jung-In; Choi, Jeung-Yoon; Kang, Hong-Goo
    • Journal of Broadcast Engineering / v.18 no.5 / pp.771-774 / 2013
  • This study investigates the relationship between voice quality measurements and emotional states, in addition to conventional prosodic and cepstral features. Open quotient, harmonics-to-noise ratio, spectral tilt, spectral sharpness, and band energy are analyzed as voice quality features, and prosodic features related to fundamental frequency and energy are also examined. ANOVA tests and Sequential Forward Selection are used to evaluate significance and verify performance. Classification experiments show that the proposed features increase overall accuracy and, in particular, reduce confusion between happy and angry. The results also show that adding voice quality features to conventional cepstral features leads to an increase in performance. A feature-selection sketch follows this entry.
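The sketch below shows generic Sequential Forward Selection over a feature matrix using scikit-learn; the random data, the SVM classifier, and the number of selected features are placeholder assumptions, not the study's setup.

```python
# Generic Sequential Forward Selection over placeholder emotion features (illustrative only).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))     # 200 utterances x 20 features (e.g., HNR, tilt, F0 statistics)
y = rng.integers(0, 4, size=200)   # 4 emotion classes (random labels, illustration only)

sfs = SequentialFeatureSelector(SVC(kernel="rbf"),
                                n_features_to_select=8,
                                direction="forward",
                                cv=5)
sfs.fit(X, y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```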

A Study on the Intelligent Man-Machine Interface System: The Experiments of the Recognition of Korean Monotongs and Cognitive Phenomena of Korean Speech Recognition Using Artificial Neural Net Models (통합 사용자 인터페이스에 관한 연구 : 인공 신경망 모델을 이용한 한국어 단모음 인식 및 음성 인지 실험)

  • Lee, Bong-Ku; Kim, In-Bum; Kim, Ki-Seok; Hwang, Hee-Yeung
    • Annual Conference on Human and Language Technology / 1989.10a / pp.101-106 / 1989
  • As part of an intelligent man-machine interface system for exchanging information with a computer through speech and text, a system for recognizing Korean monophthongs was implemented using artificial neural network models, and cognition experiments were also carried out on the word recognition module required at the upper interface of the recognition system. The first, second, and third formants were used as inputs for vowel recognition, and the targets were the eight Korean monophthongs [a, eo, o, u, eu, i, ae, e]. The neural network model used was a Multilayer Perceptron, trained with the Generalized Delta Rule, and a recognition rate of about 94% was obtained for a single male speaker. In addition, to study cognitive phenomena in speech recognition, about 20 words were stored at the lexical level of the network, and experiments were conducted on speech distortion, lexical effects in perception, and categorical perception. For these experiments, the Interactive Activation and Competition model was used, with hypothetical speech feature data as the speech input. A small formant-based MLP sketch follows this entry.

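Purely as an illustration of the formant-to-vowel mapping described above (the original used a Multilayer Perceptron trained with the Generalized Delta Rule, i.e., backpropagation), the sketch below trains a small MLP on synthetic F1/F2/F3 values; the formant ranges and class structure are placeholder assumptions, not measured Korean vowel data.

```python
# Small MLP mapping (F1, F2, F3) formants to 8 vowel classes, on synthetic data (illustration only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_classes, per_class = 8, 50
# Placeholder class centers in Hz for (F1, F2, F3); not measured Korean vowel formants.
centers = rng.uniform([250, 800, 2200], [850, 2500, 3200], size=(n_classes, 3))
X = np.vstack([c + rng.normal(scale=[30, 80, 100], size=(per_class, 3)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```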

Speech Enhancement Algorithm Based on Teager Energy and Speech Absence Probability in Noisy Environments (잡음환경에서 Teager 에너지와 음성부재확률 기반의 음성향상 알고리즘)

  • Park, Yun-Sik; An, Hong-Sub; Lee, Sang-Min
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.3 / pp.81-88 / 2012
  • In this paper, we propose a novel speech enhancement algorithm for effective noise suppression in various noisy environments. To improve decisions on speech and noise segments, the proposed method uses local speech absence probability (LSAP) based on the Teager energy (TE) of noisy speech, rather than conventional LSAP, as the feature parameter for voice activity detection (VAD) in each frequency subband. In addition, the global SAP (GSAP) derived in each frame is used as a weighting parameter to modify the TE operator and improve its performance. The performance of the proposed algorithm is evaluated with objective tests under various environments and yields better results than conventional methods. A simplified gain-based suppression sketch follows this entry.
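The sketch below is not the paper's algorithm; it only illustrates the general shape of probability-weighted spectral suppression: an STFT, a crude per-band speech-presence estimate, and a gain applied before resynthesis. The noise estimate, the presence estimate, and the gain floor are arbitrary placeholders.

```python
# Crude probability-weighted spectral suppression (illustrative shape only, not the paper's method).
import numpy as np
from scipy.signal import stft, istft

def enhance(x: np.ndarray, fs: int = 16000, noise_frames: int = 10) -> np.ndarray:
    f, t, X = stft(x, fs=fs, nperseg=512)
    power = np.abs(X) ** 2
    noise_psd = power[:, :noise_frames].mean(axis=1, keepdims=True)  # assume leading frames are noise
    snr = power / np.maximum(noise_psd, 1e-12)
    speech_presence = snr / (1.0 + snr)       # crude stand-in for 1 - speech absence probability
    gain = np.maximum(speech_presence, 0.1)   # gain floor to limit musical noise
    _, x_hat = istft(gain * X, fs=fs, nperseg=512)
    return x_hat

# Toy usage: half a second of noise only, then a 500 Hz tone in white noise.
fs = 16000
rng = np.random.default_rng(0)
tone = 0.3 * np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
noisy = np.concatenate([np.zeros(fs // 2), tone]) + 0.05 * rng.normal(size=fs + fs // 2)
print(enhance(noisy, fs).shape)
```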

Hi, KIA! Classifying Emotional States from Wake-up Words Using Machine Learning (Hi, KIA! 기계 학습을 이용한 기동어 기반 감성 분류)

  • Kim, Taesu; Kim, Yeongwoo; Kim, Keunhyeong; Kim, Chul Min; Jun, Hyung Seok; Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility / v.24 no.1 / pp.91-104 / 2021
  • This study explored users' emotional states identified from the wake-up phrase "Hi, KIA!" using a machine learning algorithm, in the context of the voice user interface of passenger cars. We targeted four emotional states, namely excited, angry, desperate, and neutral, and created a total of 12 emotional scenarios in the context of car driving. Nine college students participated and recorded sentences as guided by the visualized scenarios. The wake-up words were extracted from the whole sentences, resulting in two data sets. We used the soundgen package and the svmRadial method of the caret package in open-source R code to collect acoustic features of the recorded voices and performed machine learning-based analysis to assess the predictability of the modeled algorithm. Across the nine participants and the four emotion categories, accuracy for the wake-up words (60.19%; 22%~81%) was higher than for the whole sentences (41.51%). Individual differences in accuracy and sensitivity were noticeable, while the selected features were relatively constant. This study provides empirical evidence for the potential application of wake-up words to emotion-driven user experience in communication between users and artificial intelligence systems. A parallel scikit-learn sketch follows this entry.
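The study used R (soundgen for acoustic features and caret's svmRadial for classification); as a hedged parallel in Python, the sketch below fits an RBF-kernel SVM on placeholder acoustic features. The feature matrix, labels, and parameters are illustrative assumptions, not the study's data or settings.

```python
# RBF-kernel SVM on placeholder acoustic features, as a rough Python parallel to caret's svmRadial.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(108, 15))     # e.g., 9 speakers x 12 scenarios, 15 acoustic features (placeholder)
y = rng.integers(0, 4, size=108)   # 4 emotion classes: excited/angry/desperate/neutral (random labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```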