• Title/Summary/Keyword: Cepstral parameters


Improvements on MFCC by Elaboration of the Filter Banks and Windows

  • Lee, Chang-Young
    • 음성과학 / Vol. 14, No. 4 / pp.131-144 / 2007
  • In an effort to improve the performance of mel frequency cepstral coefficients (MFCC), we investigate the effects of varying the filter bank parameters and their associated windows on speech recognition rates. Specifically, the mel and bark scales are combined with various types of filter bank windows. The suggested methods are compared and evaluated in two independent ways: by speech recognition experiments and by the Fisher discriminant objective function. It is shown that the Hanning window based on the bark scale yields a 28.1% relative improvement in speech recognition error rate over the triangular window with the mel scale. Further work on incorporating PCA and/or LDA as a postprocessor to MFCC extraction would be desirable.

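The scale/window combinations the abstract compares can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the scale formulas (O'Shaughnessy mel, Traunmüller Bark) and the default parameters are assumptions:

```python
import numpy as np

def hz_to_mel(f):
    # O'Shaughnessy mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def hz_to_bark(f):
    # Traunmüller's Bark approximation
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(z):
    return 1960.0 * (z + 0.53) / (26.28 - z)

def filter_bank(n_filters=24, n_fft=512, sr=16000, scale="mel", window="triangular"):
    """Filter bank with center frequencies equally spaced on the chosen
    perceptual scale, using either triangular or Hanning-shaped filters."""
    fwd, inv = (hz_to_mel, mel_to_hz) if scale == "mel" else (hz_to_bark, bark_to_hz)
    centers_hz = inv(np.linspace(fwd(0.0), fwd(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * centers_hz / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, right = bins[i], bins[i + 2]
        width = max(right - left, 2)
        for b in range(left, right):
            if window == "triangular":
                fb[i, b] = 1.0 - abs(2.0 * (b - left) / width - 1.0)
            else:  # Hanning-shaped filter
                fb[i, b] = 0.5 - 0.5 * np.cos(2.0 * np.pi * (b - left) / width)
    return fb

fb_mel = filter_bank(scale="mel", window="triangular")
fb_bark = filter_bank(scale="bark", window="hanning")
```

Swapping `scale` and `window` as above reproduces the four combinations the paper evaluates against recognition rates.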

육체피로와 음성신호와의 상관관계 (Correlation between Physical Fatigue and Speech Signals)

  • 김태훈;권철홍
    • 말소리와 음성과학 / Vol. 7, No. 1 / pp.11-17 / 2015
  • This paper deals with the correlation between physical fatigue and speech signals. A treadmill task to induce fatigue and a subjective questionnaire for rating tiredness were designed. The questionnaire results and the collected bio-signals showed that the designed task imposes physical fatigue. A paired-samples t-test between the speech signals and fatigue showed that the parameters statistically significant with respect to fatigue are fundamental frequency, first and second formant frequencies, long-term average spectral slope, smoothed pitch perturbation quotient, relative average perturbation, pitch perturbation quotient, cepstral peak prominence, and harmonics-to-noise ratio. The experimental results indicate that the mouth opens less and the voice becomes breathy as physical fatigue accumulates.

Phonation types of Korean fricatives and affricates

  • Lee, Goun
    • 말소리와 음성과학 / Vol. 9, No. 4 / pp.51-57 / 2017
  • The current study compared the acoustic features of the two phonation types of Korean fricatives (plain: /s/, fortis: /s'/) and the three types of affricates (aspirated: /tsʰ/, lenis: /ts/, fortis: /ts'/) in order to determine the phonetic status of the plain fricative /s/. Considering the different manners of articulation of fricatives and affricates, we examined four acoustic parameters (rise time, intensity, fundamental frequency, and Cepstral Peak Prominence (CPP)) in the productions of 20 native Korean speakers. The results showed that, unlike for Korean affricates, F0 cannot distinguish the two fricatives, and voice quality (CPP) distinguishes the phonation types of Korean fricatives and affricates only by grouping the non-fortis sibilants together. Therefore, based on the similarity found between /tsʰ/ and /ts/ and the idiosyncratic pattern found in /s/, this research concludes that the non-fortis fricative /s/ cannot be categorized as belonging to either phonation type.
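CPP, used in this study and in the fatigue studies above, can be computed roughly as follows. This is a minimal sketch assuming the common definition (the height of the cepstral peak in the expected pitch range above a regression line fitted to the cepstrum), not the exact procedure of any paper listed here:

```python
import numpy as np

def cpp(frame, sr=16000, fmin=60.0, fmax=330.0):
    """Cepstral Peak Prominence (dB): peak of the cepstrum in the pitch
    range, measured above a linear regression line over that range."""
    n = len(frame)
    log_mag = 20.0 * np.log10(np.abs(np.fft.fft(frame * np.hanning(n))) + 1e-12)
    cep = np.fft.ifft(log_mag).real
    q_lo, q_hi = int(sr / fmax), int(sr / fmin)   # quefrency search range
    q = np.arange(q_lo, q_hi + 1)
    peak = q_lo + int(np.argmax(cep[q_lo:q_hi + 1]))
    slope, intercept = np.polyfit(q, cep[q_lo:q_hi + 1], 1)
    return float(cep[peak] - (slope * peak + intercept))

# A strongly periodic frame should score higher than a noise frame
t = np.arange(1024) / 16000.0
voiced = np.sign(np.sin(2 * np.pi * 150.0 * t))
noise = np.random.default_rng(0).standard_normal(1024)
```

Higher CPP indicates a more periodic (less breathy) voice, which is why the fatigue papers above report it falling as breathiness increases.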

켑스트럼 파라미터와 다중대역 여기신호를 사용한 음성 합성 시스팀 (A Speech Synthesis System based on Cepstral Parameters and Multiband Excitation Signal)

  • 김기순
    • 한국음향학회 학술대회논문집 / Proceedings of the 12th Speech Communication and Signal Processing Workshop (SCAS Vol. 12, No. 1) / pp.211-215 / 1995
  • To generate clear and natural Korean speech, we propose a speech synthesis system that uses a multiband excitation signal. On the analysis side, we propose an automatic voiced/unvoiced segment discrimination method that uses a voiced/unvoiced decision spectrum based on cepstral parameters. To overcome the quality limitations of synthetic speech driven by a source composed only of simple impulses and white noise under a single global voiced/unvoiced decision, the synthesis side introduces a multiband excitation signal for voiced sounds. Listening tests on the proposed method confirmed that speech synthesized with the multiband excitation signal is far more intelligible than speech synthesized with the commonly used simple voiced/unvoiced parameters, particularly in voiced segments such as noisy voiced fricatives and vowel transitions.

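The core idea, per-band voiced/unvoiced decisions instead of one global switch, can be sketched as below. The FFT-masking band split, band edges, and F0 value are illustrative assumptions, not the paper's method:

```python
import numpy as np

def multiband_excitation(n_samples, sr, f0, band_edges, voiced_bands, seed=0):
    """Excitation signal where each frequency band is filled with either
    a periodic impulse train (voiced) or white noise (unvoiced)."""
    rng = np.random.default_rng(seed)
    impulses = np.zeros(n_samples)
    impulses[::int(round(sr / f0))] = 1.0          # periodic source
    noise = rng.standard_normal(n_samples)          # aperiodic source
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sr)
    spec_imp, spec_noise = np.fft.rfft(impulses), np.fft.rfft(noise)
    spec_out = np.zeros_like(spec_imp)
    for (lo, hi), voiced in zip(band_edges, voiced_bands):
        mask = (freqs >= lo) & (freqs < hi)
        spec_out[mask] = spec_imp[mask] if voiced else spec_noise[mask]
    return np.fft.irfft(spec_out, n=n_samples)

# One second at 8 kHz: periodic below 2 kHz, noisy above (e.g. a voiced fricative)
bands = [(0, 1000), (1000, 2000), (2000, 4000)]
excitation = multiband_excitation(8000, 8000, 100.0, bands, [True, True, False])
```

A mixed excitation of this kind is what lets voiced fricatives keep both their harmonic low band and their noisy high band.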

마이크로폰어레이를 이용한 사용자 정보추출 (Personal Information Extraction Using A Microphone Array)

  • 김혜진;윤호섭
    • 로봇학회논문지 / Vol. 3, No. 2 / pp.131-136 / 2008
  • This paper proposes a method to extract personal information using a microphone array. Useful personal information, particularly about customers, includes age and gender. On the basis of this information, robot service applications can satisfy users by offering services adapted to the needs of specific user groups, such as adults and children or females and males. We applied a Gaussian Mixture Model (GMM) as the classifier and Mel Frequency Cepstral Coefficients (MFCCs) as the voice feature. The major aim of this paper is to discover the voice source parameters of age and gender and to classify these two characteristics simultaneously. In a ubiquitous environment, speech obtained from selected channels of a microphone array is useful for reducing background noise.

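The classification step can be sketched as follows, simplified to a single Gaussian per class rather than the paper's full GMM; the class labels and "MFCC" vectors are toy stand-ins:

```python
import numpy as np

class GaussianClassifier:
    """Single-Gaussian-per-class stand-in for a GMM: each group is
    modeled by the mean and diagonal variance of its feature vectors,
    and a new vector is assigned to the highest-likelihood class."""
    def fit(self, features_by_class):
        self.params = {}
        for label, x in features_by_class.items():
            x = np.asarray(x, dtype=float)
            self.params[label] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
        return self

    def log_likelihood(self, x, label):
        mu, var = self.params[label]
        return float(-0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var)))

    def predict(self, x):
        return max(self.params, key=lambda lbl: self.log_likelihood(x, lbl))

# Toy 13-dimensional "MFCC" vectors for two hypothetical user groups
rng = np.random.default_rng(1)
clf = GaussianClassifier().fit({
    "adult": rng.normal(0.0, 1.0, size=(200, 13)),
    "child": rng.normal(3.0, 1.0, size=(200, 13)),
})
```

A real GMM replaces each single Gaussian with a weighted mixture, which captures the multimodal spread of MFCCs within a group.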

Locating the damaged storey of a building using distance measures of low-order AR models

  • Xing, Zhenhua;Mita, Akira
    • Smart Structures and Systems / Vol. 6, No. 9 / pp.991-1005 / 2010
  • The key to detecting damage to civil engineering structures is to find an effective damage indicator. The damage indicator should promptly reveal the location of the damage and accurately identify the state of the structure. We propose to use the distance measures of low-order AR models as a novel damage indicator. The AR model has been applied to parameterize dynamical responses, typically the acceleration response. The premise of this approach is that the distance between the models, fitting the dynamical responses from damaged and undamaged structures, may be correlated with the information about the damage, including its location and severity. Distance measures have been widely used in speech recognition. However, they have rarely been applied to civil engineering structures. This research attempts to improve on the distance measures that have been studied so far. The effect of varying the data length, number of parameters, and other factors was carefully studied.
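One widely used distance between low-order AR models, borrowed from speech recognition, is the LPC-cepstral distance. The sketch below assumes least-squares AR fitting and a toy "acceleration response"; it illustrates the idea, not necessarily the specific measure the paper studies:

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of AR coefficients a_1..a_p: x[t] = sum a_k x[t-k] + e[t]."""
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    coef, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    return coef

def ar_to_cepstrum(a, n_cep):
    """Standard LPC-to-cepstrum recursion for an AR model."""
    p, c = len(a), np.zeros(n_cep)
    for n in range(1, n_cep + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def cepstral_distance(x, y, order=6, n_cep=12):
    cx = ar_to_cepstrum(fit_ar(x, order), n_cep)
    cy = ar_to_cepstrum(fit_ar(y, order), n_cep)
    return float(np.sqrt(np.sum((cx - cy) ** 2)))

def simulate(a1, n=2000, seed=0):
    """Toy response of a stable second-order system driven by noise."""
    rng = np.random.default_rng(seed)
    x, e = np.zeros(n), rng.standard_normal(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] - 0.5 * x[t - 2] + e[t]
    return x

# Responses of the same system should be closer than those of a changed one
d_same = cepstral_distance(simulate(1.2, seed=2), simulate(1.2, seed=3))
d_diff = cepstral_distance(simulate(1.2, seed=2), simulate(0.4, seed=4))
```

The premise matches the abstract: when "damage" changes the system dynamics (here, the coefficient `a1`), the model distance grows.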

정신피로와 음성특징과의 상관관계 측정 (Measuring Correlation between Mental Fatigues and Speech Features)

  • 김정인;권철홍
    • 말소리와 음성과학 / Vol. 6, No. 2 / pp.3-8 / 2014
  • This paper deals with how mental fatigue affects the human voice. For this, a monotonous task to increase the feeling of fatigue and a subjective questionnaire for rating it were designed. The questionnaire responses confirmed that the designed task was monotonous. To investigate the statistical relationship between fatigue and the speech features extracted from the collected speech data, a paired-samples t-test was used. The statistical analysis shows that the speech parameters most strongly related to fatigue are the first formant bandwidth, jitter, H1-H2, cepstral peak prominence, and harmonics-to-noise ratio. The experimental results indicate that the voice becomes breathy as mental fatigue increases.
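The paired-samples t-test used in this study and the physical-fatigue study above can be sketched in a few lines; the measurement values below are hypothetical (e.g. a CPP-like measure in dB for six speakers, before and after the task):

```python
import numpy as np

def paired_t(before, after):
    """Paired-samples t statistic for one speech parameter measured on
    the same speakers before and after the fatigue task."""
    d = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return float(t), n - 1   # statistic and degrees of freedom

before = np.array([5.1, 4.8, 5.0, 5.3, 4.9, 5.2])
after = np.array([5.8, 5.5, 5.9, 6.0, 5.4, 5.7])
t_stat, dof = paired_t(before, after)
```

If `t_stat` exceeds the critical value for `dof` degrees of freedom (2.571 for a two-tailed 5% test at df = 5), the parameter is reported as significantly related to fatigue.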

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / Vol. 19, No. 3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes the speaker's emotions from speech information. SER succeeds when distinctive features are selected and classified in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional CNN (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, examining the distribution of recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
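The key design choice here is treating the MFCC matrix (coefficients x frames) as an image. A minimal forward pass through one conv/ReLU/pool stage shows the shapes involved; the 13x40 MFCC size, kernel count, and random weights are hypothetical, not the paper's architecture:

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2D convolution: x is (H, W), kernels is (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """size x size max pooling over each feature map, truncating edges."""
    K, H, W = x.shape
    H2, W2 = H // size, W // size
    return x[:, :H2 * size, :W2 * size].reshape(K, H2, size, W2, size).max(axis=(2, 4))

# Hypothetical 13x40 MFCC matrix (13 coefficients, 40 frames), 8 random 3x3 kernels
rng = np.random.default_rng(3)
mfcc = rng.standard_normal((13, 40))
feat = max_pool(relu(conv2d(mfcc, rng.standard_normal((8, 3, 3)))))
```

In a real SER model such feature maps would pass through further conv layers and a softmax classifier over the emotion labels.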

RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델 (RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC)

  • 임현택;김수형;이귀상;양형정
    • 스마트미디어저널 / Vol. 12, No. 5 / pp.28-35 / 2023
  • This study proposes RoutingConvNet (Routing Convolutional Neural Network), a new lightweight model with a small number of parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of trainable parameters, the proposed model concatenates bidirectional MFCC (Mel-Frequency Cepstral Coefficient) features channel-wise to learn long-term emotion dependencies and extract contextual features. A lightweight deep CNN is constructed for low-level feature extraction, and self-attention is used to capture channel and spatial information in the speech signal. In addition, dynamic routing is applied to improve accuracy, yielding a model robust to feature variations. In experiments on the speech emotion datasets EMO-DB, RAVDESS, and IEMOCAP, the proposed model reduces parameters while improving accuracy, achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. This study also proposes a metric that quantifies the trade-off between parameter count and accuracy, for evaluating performance relative to model size.

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • 전기전자학회논문지 / Vol. 17, No. 3 / pp.241-247 / 2013
  • This paper presents the design of a System on Programmable Chip (SoPC) based on a Field Programmable Gate Array (FPGA) for speech recognition, in which Mel-Frequency Cepstral Coefficients (MFCC) are used for speech feature extraction and Vector Quantization (VQ) for recognition. The speech recognition system is implemented in the following steps: feature extraction, codebook training, and recognition. In the feature extraction step, the input voice data are transformed into spectral components and the main features are extracted using the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components using VQ. In our experiment, Altera's DE2 board with a Cyclone II FPGA is used to implement the recognition system, which can recognize 64 words. The execution speed of the blocks in the speech recognition system is surveyed by counting the clock cycles spent executing each block. The recognition accuracy is also measured under different system parameters. These execution-speed and recognition-accuracy results can help the designer choose the best configuration for speech recognition on an SoPC.
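The codebook-training and recognition steps can be sketched with k-means VQ in NumPy. Codebook size, dimensions, and the toy "MFCC" data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def train_codebook(vectors, size=8, iters=20, seed=0):
    """k-means codebook training: the codebook for one word is the set
    of centroids of that word's training MFCC vectors."""
    rng = np.random.default_rng(seed)
    code = vectors[rng.choice(len(vectors), size, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - code[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        for k in range(size):
            members = vectors[nearest == k]
            if len(members):
                code[k] = members.mean(axis=0)
    return code

def distortion(vectors, code):
    """Mean distance from each vector to its nearest codeword."""
    dists = np.linalg.norm(vectors[:, None, :] - code[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

def recognize(vectors, codebooks):
    """An utterance is recognized as the word whose codebook gives minimum distortion."""
    return min(codebooks, key=lambda w: distortion(vectors, codebooks[w]))

# Toy 12-dimensional "MFCC" frames for two words, plus a test utterance
rng = np.random.default_rng(4)
books = {
    "word_a": train_codebook(rng.normal(0.0, 0.5, size=(100, 12))),
    "word_b": train_codebook(rng.normal(4.0, 0.5, size=(100, 12))),
}
test_utterance = rng.normal(0.0, 0.5, size=(30, 12))
```

On the FPGA, the same nearest-codeword search dominates the recognition block, which is why the paper profiles clock cycles per block.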