• Title/Summary/Keyword: Voice Feature

Search Results: 232

A study of the preconsonantal vowel shortening in Chinese

  • Yun, Ilsung
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.39-44
    • /
    • 2018
  • This study aimed to examine whether preconsonantal vowel shortening, which occurs in many languages, exists in Chinese. To this end, we compared 15 pairs of Chinese bi-syllabic words with intervocalic unaspirated/aspirated stops. The results revealed that (1) the effect of the aspiration feature of the following stop on the preceding vowel (V1) was neither significant nor consistent, though V1 tended to be slightly longer before an unaspirated stop; (2) the closure (C) of the following unaspirated stop was similar to or longer than that of its aspirated cognate; (3) the durational sum of V1 and C was longer when the stop was unaspirated, and V1 and C had no compensatory relationship; (4) voice onset time (VOT) was significantly longer when the stop was aspirated than when it was unaspirated; (5) the vowel (V2) following the VOT was significantly longer when the stop was unaspirated, so the differences in VOT were partially compensated; (6) despite this partial compensation, the sum of VOT and V2 was longer when the stop was aspirated; and (7) words with an intervocalic aspirated stop were longer than those with its unaspirated cognate. It is concluded that while VOT is the most important factor in determining the timing structure of Chinese words with intervocalic stops, closure duration is crucial for Korean and many other languages.
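The pairwise duration comparisons above can be sketched with a paired t statistic over matched word pairs. This is a minimal illustration, not the study's analysis; the durations below are invented for the example.

```python
import math

# Hypothetical V1 durations (ms) for five matched word pairs.
# These values are illustrative only, not the study's data.
v1_unaspirated = [112.0, 98.0, 105.0, 120.0, 101.0]
v1_aspirated   = [108.0, 97.0, 103.0, 115.0, 100.0]

def paired_t(a, b):
    """Paired t statistic for equal-length samples a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

t = paired_t(v1_unaspirated, v1_aspirated)
print(round(t, 3))
```

A positive t here would mirror the study's observation that V1 tends to be slightly longer before an unaspirated stop; whether it reaches significance depends on the degrees of freedom.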

Extraction and Analysis of Voice Feature Parameter of Chungbuk News Announcers (충북방송 뉴스 진행자의 음성적 특징 추출 및 분석)

  • Kim, Bong-Hyun;Lee, Se-Hwan;Ka, Min-Kyoung;Cho, Dong-Uk;J.Bae, Young-Lae
    • Annual Conference of KIPS
    • /
    • 2009.11a
    • /
    • pp.363-364
    • /
    • 2009
  • As the broadcasting industry develops technologically and structurally, viewer standards rise, and the cultural industry changes rapidly, broadcasting has grown enormously in modern society. Amid these changes, the continuing focus of interest is the level and shifting tastes of viewers, and it is the announcer's role to grasp these and lead smooth broadcasts. Accordingly, in this paper we collected the voices of news announcers at the three broadcasters in the Chungbuk region, applied a variety of voice-analysis measures, and, based on the resulting values, conducted experiments to extract characteristic information about the announcers' voices. In particular, to analyze the influence that a voice can convey, we applied diverse voice-analysis measures including pitch, jitter, shimmer, stability, and spectrograms, and compared and analyzed the results.
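Two of the measures named above, jitter and shimmer, are ratios of cycle-to-cycle variation. A minimal sketch, using hypothetical glottal period and amplitude sequences rather than the announcers' data:

```python
def jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """Local shimmer: the same ratio, applied to peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

periods_ms = [8.0, 8.1, 7.9, 8.05, 8.0]   # ~125 Hz voice, invented values
amps = [0.70, 0.72, 0.69, 0.71, 0.70]
print(jitter(periods_ms), shimmer(amps))
```

A perfectly steady voice would score zero on both; higher values indicate irregular vocal-fold vibration.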

Construction of Customer Appeal Classification Model Based on Speech Recognition

  • Sheng Cao;Yaling Zhang;Shengping Yan;Xiaoxuan Qi;Yuling Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.258-266
    • /
    • 2023
  • To address the problems of poor customer satisfaction and low accuracy in customer classification, this paper proposes a customer classification model based on speech recognition. First, this paper analyzes the temporal characteristics of customer demand data, identifies the influencing factors of customer demand behavior, and determines the process of feature extraction from customer voice signals. Then, the emotional association rules of customer demands are designed, and the classification model of customer demands is constructed through cluster analysis. Next, the Euclidean distance method is used to preprocess customer behavior data, and the fuzzy clustering characteristics of customer demands are obtained by the fuzzy clustering method. Finally, on the basis of the naive Bayesian algorithm, a customer demand classification model based on speech recognition is completed. Experimental results show that the proposed method improves the accuracy of customer demand classification to more than 80% and customer satisfaction to more than 90%. It thus resolves the poor customer satisfaction and low classification accuracy of existing methods and has practical application value.
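The final classification step above rests on the naive Bayes rule: pick the class maximizing prior times per-feature likelihood. A minimal Gaussian naive Bayes sketch, with invented features and labels rather than the paper's speech-derived ones:

```python
import math

def fit(X, y):
    """Per-class mean/variance for each feature, plus class priors."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(col) for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[c] = (means, vars_, len(rows) / len(y))
    return model

def predict(model, x):
    """Pick the class with the highest log posterior."""
    def log_post(c):
        means, vars_, prior = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, vars_))
        return math.log(prior) + ll
    return max(model, key=log_post)

X = [[0.2, 1.0], [0.3, 1.2], [0.9, 3.0], [1.1, 3.2]]
y = ["routine", "routine", "complaint", "complaint"]
model = fit(X, y)
print(predict(model, [1.0, 3.1]))  # → complaint
```

In the paper's pipeline, the feature vectors would instead be the fuzzy-clustering characteristics extracted from customer voice signals.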

GMM-Based Gender Identification Employing Group Delay (Group Delay를 이용한 GMM기반의 성별 인식 알고리즘)

  • Lee, Kye-Hwan;Lim, Woo-Hyung;Kim, Nam-Soo;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.6
    • /
    • pp.243-249
    • /
    • 2007
  • We propose an effective voice-based gender identification method using group delay (GD). Generally, features for speech recognition are composed of magnitude information rather than phase information. In our approach, we examine the difference between male and female speakers in GD, which is the derivative of the Fourier transform phase. We also propose a novel feature-fusion scheme that combines GD with magnitude-based information such as mel-frequency cepstral coefficients (MFCC), linear predictive coding (LPC) coefficients, reflection coefficients, and formants. The experimental results indicate that GD is effective in discriminating gender, and that performance is significantly improved when the proposed feature-fusion technique is applied.
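The group-delay feature above is the negative derivative of the unwrapped Fourier-transform phase with respect to frequency. A minimal NumPy sketch, verified on a synthetic signal (frame size and test signal are illustrative choices, not the paper's setup):

```python
import numpy as np

def group_delay(frame):
    """Group delay: negative first difference of the unwrapped phase."""
    spectrum = np.fft.rfft(frame)
    phase = np.unwrap(np.angle(spectrum))
    return -np.diff(phase)          # one value per frequency-bin step

# Sanity check: a pure delay of d samples has linear phase -2*pi*f*d,
# so its group delay is constant at d samples.
d = 3
frame = np.zeros(64)
frame[d] = 1.0                      # unit impulse delayed by d samples
gd = group_delay(frame) / (2 * np.pi / 64)   # convert bins to samples
print(np.allclose(gd, d))
```

Real speech frames would first be windowed, and the per-bin group-delay values would feed the GMM alongside the magnitude features.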

Prediction of Closed Quotient During Vocal Phonation using GRU-type Neural Network with Audio Signals

  • Hyeonbin Han;Keun Young Lee;Seong-Yoon Shin;Yoseup Kim;Gwanghyun Jo;Jihoon Park;Young-Min Kim
    • Journal of information and communication convergence engineering
    • /
    • v.22 no.2
    • /
    • pp.145-152
    • /
    • 2024
  • Closed quotient (CQ) represents the proportion of time for which the vocal folds remain in contact during voice production. Because CQ values serve as an important reference point in vocal training for professional singers, they have been measured mechanically or electrically, either by inverse filtering of airflows captured by a circumferentially vented mask or by post-processing of electroglottography waveforms. In this study, we introduce a novel algorithm that predicts CQ values from audio signals alone, eliminating the need for mechanical or electrical measurement. The algorithm is based on a gated recurrent unit (GRU)-type neural network. To enhance efficiency, the audio signal is pre-processed with a pitch feature extraction algorithm; GRU-type layers then extract features, followed by a dense layer for the final prediction. The mean squared error between predicted and measured CQ values demonstrates the capability of the proposed algorithm.
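The GRU building block named above combines an update gate, a reset gate, and a candidate state. A single GRU step in NumPy makes this concrete; the weights below are random placeholders, and the paper's full network, pitch preprocessing, and dense output layer are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU update. W, U, b each stack the z (update), r (reset),
    and n (candidate) parameters along the first axis."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    z = sigmoid(Wz @ x + Uz @ h + bz)        # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)        # reset gate
    n = np.tanh(Wn @ x + Un @ (r * h) + bn)  # candidate state
    return (1 - z) * h + z * n               # interpolated new state

rng = np.random.default_rng(0)
dx, dh = 4, 8                                # toy input / hidden sizes
W = rng.normal(size=(3, dh, dx))
U = rng.normal(size=(3, dh, dh))
b = np.zeros((3, dh))
h = np.zeros(dh)
for x in rng.normal(size=(10, dx)):          # e.g. a 10-frame pitch sequence
    h = gru_step(x, h, W, U, b)
print(h.shape)
```

Because each new state is a convex combination of the old state and a tanh candidate, the hidden state stays bounded in [-1, 1], which is part of what makes GRUs stable over long sequences.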

Deep Learning-Based Speech Emotion Recognition Technology Using Voice Feature Filters (음성 특징 필터를 이용한 딥러닝 기반 음성 감정 인식 기술)

  • Shin Hyun Sam;Jun-Ki Hong
    • The Journal of Bigdata
    • /
    • v.8 no.2
    • /
    • pp.223-231
    • /
    • 2023
  • In this study, we propose a deep learning-based model that extracts and analyzes features from speech signals, generates filters, and uses these filters to recognize emotions in speech signals. We evaluate the emotion recognition accuracy of the proposed model. According to the simulation results, the average emotion recognition accuracies of the DNN and RNN were very similar, at 84.59% and 84.52%, respectively. However, the simulation time of the DNN was approximately 44.5% shorter than that of the RNN, enabling quicker emotion prediction.

A Study on the Development of Embedded Serial Multi-modal Biometrics Recognition System (임베디드 직렬 다중 생체 인식 시스템 개발에 관한 연구)

  • Kim, Joeng-Hoon;Kwon, Soon-Ryang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.1
    • /
    • pp.49-54
    • /
    • 2006
  • Recent fingerprint recognition systems have unstable factors, such as copying of fingerprint patterns and hacking of fingerprint feature points, which may cause significant system errors. In this research, we therefore used the fingerprint as the main recognition device and implemented a serial multi-biometric recognition system by adding speech recognition, which has recently come into wide use. In this system, the fingerprint recognition process runs only after the speech is successfully recognized. Among existing speech recognition algorithms (VQ, DTW, HMM, NN), the speaker-dependent DTW (Dynamic Time Warping) algorithm is used for effective real-time processing, while the KSOM (Kohonen Self-Organizing feature Map) algorithm, an artificial intelligence method, is applied to fingerprint recognition because of its modest computational load. In experiments, the implemented multi-biometric system showed an FRR (False Rejection Ratio) 2% to 7% lower than single recognition systems using fingerprint or voice alone, and an FAR (False Acceptance Ratio) of zero, which is the most important factor in a recognition system. Moreover, there was almost no difference in recognition time (about 1.5 seconds on average) compared with existing single biometric recognition systems; based on these experiments, the implemented multi-biometric system proved a more efficient security system than single recognition systems.
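The DTW matching step named above aligns two feature sequences of different lengths by dynamic programming. A minimal sketch with 1-D toy sequences standing in for real speech features:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic DTW: minimal cumulative alignment cost between sequences
    a and b, allowing insertion, deletion, and match moves."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # insertion
                D[i][j - 1],      # deletion
                D[i - 1][j - 1],  # match
            )
    return D[n][m]

template = [1.0, 2.0, 3.0, 2.0, 1.0]
utterance = [1.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0]  # same shape, time-warped
print(dtw_distance(template, utterance))  # → 0.0
```

A speaker-dependent recognizer compares an input utterance against each enrolled template and accepts the one with the lowest DTW cost, which tolerates differences in speaking rate.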

Automatic detection and severity prediction of chronic kidney disease using machine learning classifiers (머신러닝 분류기를 사용한 만성콩팥병 자동 진단 및 중증도 예측 연구)

  • Jihyun Mun;Sunhee Kim;Myeong Ju Kim;Jiwon Ryu;Sejoong Kim;Minhwa Chung
    • Phonetics and Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.45-56
    • /
    • 2022
  • This paper proposes an optimal methodology for automatically diagnosing chronic kidney disease (CKD) and predicting its severity from patients' utterances. In patients with CKD, the voice changes due to weakening of the respiratory and laryngeal muscles and vocal fold edema. Previous studies have phonetically analyzed the voices of patients with CKD, but no study has classified them. In this paper, the utterances of patients with CKD were classified using a variety of utterance types (sustained vowel, sentence, general sentence), feature sets [handcrafted features, the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS), CNN-extracted features], and classifiers (SVM, XGBoost). A total of 1,523 utterances, 3 hours, 26 minutes, and 25 seconds in length, were used. F1-scores of 0.93 for automatic diagnosis, 0.89 for the 3-class problem, and 0.84 for the 5-class problem were achieved. The highest performance was obtained with the combination of general sentence utterances, the handcrafted feature set, and XGBoost. The results suggest that general sentence utterances, which can reflect all of a speaker's speech characteristics, together with an appropriate feature set extracted from them, are adequate for the automatic classification of CKD patients' utterances.
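The F1-scores reported above combine precision and recall per class. A small sketch of the metric for a toy 3-class problem (the labels here are invented; the paper's classes are CKD severity levels):

```python
def f1_scores(y_true, y_pred, labels):
    """Per-class F1 = 2PR/(P+R) and their unweighted (macro) average."""
    per_class = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        per_class[c] = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    macro = sum(per_class.values()) / len(labels)
    return per_class, macro

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
per_class, macro = f1_scores(y_true, y_pred, [0, 1, 2])
print(per_class, round(macro, 3))
```

Macro-averaging weights every severity class equally, which matters when the class distribution is imbalanced, as is typical in clinical data.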

Hi, KIA! Classifying Emotional States from Wake-up Words Using Machine Learning (Hi, KIA! 기계 학습을 이용한 기동어 기반 감성 분류)

  • Kim, Taesu;Kim, Yeongwoo;Kim, Keunhyeong;Kim, Chul Min;Jun, Hyung Seok;Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility
    • /
    • v.24 no.1
    • /
    • pp.91-104
    • /
    • 2021
  • This study explored users' emotional states identified from the wake-up words "Hi, KIA!" using a machine learning algorithm, in the context of the voice user interface of passenger cars. We targeted four emotional states, namely excited, angry, desperate, and neutral, and created a total of 12 emotional scenarios in the context of car driving. Nine college students participated and recorded sentences as guided in the visualized scenarios. The wake-up words were extracted from the whole sentences, resulting in two data sets. We used the soundgen package and the svmRadial method of the caret package in open-source R code to collect acoustic features of the recorded voices, and performed machine learning-based analysis to determine the predictability of the modeled algorithm. Across all nine participants and the four emotional categories, we compared the accuracy for wake-up words (60.19%; range 22%-81%) with that for whole sentences (41.51%). Individual differences in accuracy and sensitivity were noticeable, while the selected features were relatively constant. This study provides empirical evidence regarding the potential application of wake-up words in emotion-driven user experience for communication between users and artificial intelligence systems.
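Among the acoustic features collected above, fundamental frequency (pitch) is central to separating excited or angry speech from neutral speech. A minimal autocorrelation-based pitch estimator in NumPy, tested on a synthetic 200 Hz tone (frame length and search range are illustrative choices, not soundgen's actual defaults):

```python
import numpy as np

def f0_autocorr(frame, sr, fmin=75, fmax=500):
    """Estimate F0 by picking the autocorrelation peak inside the
    plausible pitch-period range [sr/fmax, sr/fmin] samples."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.05 * sr)) / sr           # one 50 ms frame
tone = np.sin(2 * np.pi * 200 * t)
print(round(f0_autocorr(tone, sr), 1))       # → 200.0
```

Applied frame by frame, this yields a pitch contour whose mean, range, and variability serve as inputs to a classifier such as an RBF-kernel SVM.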

Between Monster and Hero -Characters with Supernatural Powers of Fantasy Dramas (괴물과 영웅 사이 -판타지 드라마의 초능력 인물)

  • Kim, Kyung-Min
    • Journal of Popular Narrative
    • /
    • v.26 no.1
    • /
    • pp.9-39
    • /
    • 2020
  • The aim of this study is to examine how heroic characters with supernatural powers are portrayed, and what shortcomings and desires are present in the societies they are born into, with reference to three Korean television fantasy series featuring superheroes, out of the many motifs of Korean fantasy drama. The common feature of the superheroes represented in these three dramas is that they are viewed as monsters symbolizing vigilance and alienation rather than as typical heroes who are objects of praise and admiration. All three dramas criticize the corruption and limitations of bureaucratic powers such as the judiciary, prosecution, and police. The protagonists showcase their heroics by correcting such problems and helping the weak and the victimized through their supernatural powers. At the same time, they broach uncomfortable topics, highlight truths that some may wish to hide, and call into question the concepts of 'normality' and the 'world of the natural'. For this reason, they are treated as monsters and alienated. Although they are called upon to solve problems in reality, the deficiencies and contradictions of our society are also revealed through them. This expression of desires repressed in reality is akin to the attributes of fantasy, which criticizes and overthrows reality in order to meet those desires. This study verified not only the subversive character of fantasy but also its limitations when combined with the characteristics of the television medium. The significance of this study is that it gives attention to a genre previously neglected by Korean productions but now gaining traction, and suggests many tasks for researching more subdivided and diversified fantasy dramas in the future.

