• Title/Summary/Keyword: Cepstral parameters

Search Result 58

Classification of Diphthongs using Acoustic Phonetic Parameters (음향음성학 파라메터를 이용한 이중모음의 분류)

  • Lee, Suk-Myung; Choi, Jeung-Yoon
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.167-173 / 2013
  • This work examines the classification of diphthongs as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are examined, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, respectively, in 4-way classification. Adding these acoustic features to the widely used mel-frequency cepstral coefficients also improves classification.

A MFCC-based CELP Speech Coder for Server-based Speech Recognition in Network Environments (네트워크 환경에서 서버용 음성 인식을 위한 MFCC 기반 음성 부호화기 설계)

  • Lee, Gil-Ho; Yoon, Jae-Sam; Oh, Yoo-Rhee; Kim, Hong-Kook
    • MALSORI / no.54 / pp.27-43 / 2005
  • Existing standard speech coders provide high-quality speech communication, but they degrade the performance of speech recognition systems that operate on the reconstructed speech. The main cause of the degradation is that the spectral envelope parameters in speech coding are optimized for speech quality rather than for recognition performance. For example, mel-frequency cepstral coefficients (MFCCs) are generally known to give better speech recognition performance than linear prediction coefficients (LPCs), a typical parameter set in speech coding. In this paper, we propose a speech coder using MFCC instead of LPC to improve the performance of a server-based speech recognition system in network environments. The main challenge in using MFCC, however, is developing an efficient MFCC quantization at a low bit rate. First, we exploit the interframe correlation of MFCCs, which leads to predictive quantization of MFCC. Second, a safety-net scheme is proposed to make the MFCC-based speech coder robust to channel errors. As a result, we propose an 8.7 kbps MFCC-based CELP coder. A PESQ test shows that the proposed speech coder has speech quality comparable to 8 kbps G.729, while speech recognition using the proposed coder performs better than with G.729.
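
The interframe predictive quantization idea can be sketched as follows: predict each MFCC frame from the previously decoded frame and quantize only the residual, so that correlated frames cost few bits. This is a toy first-order closed-loop sketch with a uniform scalar quantizer, not the paper's coder; `alpha` and `step` are illustrative values.

```python
# Toy predictive quantization of MFCC vectors exploiting interframe correlation.
def quantize(x, step):
    """Uniform scalar quantizer: round to the nearest multiple of step."""
    return round(x / step) * step

def encode_decode(frames, alpha=0.9, step=0.05):
    """First-order closed-loop predictive quantizer.

    frames: list of MFCC vectors (lists of floats).
    Predicts each coefficient as alpha * (previous decoded value),
    quantizes the prediction residual, and reconstructs.
    """
    decoded, prev = [], [0.0] * len(frames[0])
    for frame in frames:
        rec = [quantize(c - alpha * p, step) + alpha * p
               for c, p in zip(frame, prev)]
        decoded.append(rec)
        prev = rec                 # closed loop: predict from decoded frame
    return decoded
```

Because the encoder predicts from the *decoded* frame, the reconstruction error per coefficient is bounded by half the quantizer step, with no error accumulation across frames.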


Speech Query Recognition for Tamil Language Using Wavelet and Wavelet Packets

  • Iswarya, P.; Radha, V.
    • Journal of Information Processing Systems / v.13 no.5 / pp.1135-1148 / 2017
  • Speech recognition is one of the fascinating fields in computer science. The accuracy of a speech recognition system may be reduced by noise present in the speech signal, so noise removal is an essential step in an Automatic Speech Recognition (ASR) system; this paper proposes a new technique called combined thresholding for noise removal. Feature extraction is the process of converting the acoustic signal into a compact set of parameters. This paper also concentrates on improving Mel Frequency Cepstral Coefficient (MFCC) features by introducing the Discrete Wavelet Packet Transform (DWPT) in place of the Discrete Fourier Transform (DFT) block to provide more efficient signal analysis. Because the resulting feature vector varies in size, a Self-Organizing Map (SOM) is used to choose the correct feature vector length. As a single classifier does not provide enough accuracy, this research proposes an Ensemble Support Vector Machine (ESVM) classifier that takes the fixed-length feature vector from the SOM as input, termed ESVM_SOM. The experimental results show that the proposed methods provide better results than the existing methods.
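
The DWPT-for-DFT substitution can be illustrated with the simplest wavelet packet, the Haar transform: the frame is recursively split into low/high subbands, and per-subband log energies replace the DFT magnitude spectrum. This is a minimal sketch (Haar basis, not the wavelet family used in the paper), assuming an even-length input divisible by 2**depth.

```python
import math

def haar_step(x):
    """One Haar analysis step: split x (even length) into low/high subbands."""
    lo = [(a + b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    hi = [(a - b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    return lo, hi

def wavelet_packet_energies(x, depth):
    """Full wavelet packet tree to the given depth; log energy per leaf band."""
    bands = [x]
    for _ in range(depth):
        bands = [half for b in bands for half in haar_step(b)]
    return [math.log(sum(v * v for v in b) + 1e-12) for b in bands]
```

Because the Haar basis is orthonormal, total energy is preserved across the 2**depth subbands, so the log-energy vector carries the same spectral energy information the DFT filterbank stage would.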

Realization a Text Independent Speaker Identification System with Frame Level Likelihood Normalization (프레임레벨유사도정규화를 적용한 문맥독립화자식별시스템의 구현)

  • 김민정; 석수영; 김광수; 정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.1 / pp.8-14 / 2002
  • In this paper, we realize a real-time text-independent speaker recognition system using Gaussian mixture models (GMMs), and apply the frame-level likelihood normalization method, which has shown its effectiveness in verification systems. The system has three parts: front-end, training, and recognition. In the front-end, cepstral mean normalization and silence removal are applied to account for variation in the speaker's speech. In training, a GMM is used to model each speaker's acoustic features, with maximum likelihood estimation for GMM parameter optimization. In recognition, likelihood scores are calculated against the speaker models and test data at the frame level. Text-independent sentences were used as test material; the ETRI 445 and KLE 452 databases were used for training and testing, with cepstral coefficients and regression coefficients as feature parameters. The experimental results show that frame-level likelihood normalization achieves higher recognition rates than the conventional method, independent of the number of registered speakers.


Music Identification Using Pitch Histogram and MFCC-VQ Dynamic Pattern (피치 히스토그램과 MFCC-VQ 동적 패턴을 사용한 음악 검색)

  • Park Chuleui; Park Mansoo; Kim Sungtak; Kim Hoirin
    • The Journal of the Acoustical Society of Korea / v.24 no.3 / pp.178-185 / 2005
  • This paper presents a new music identification method using the probabilistic and dynamic characteristics of melody. The proposed method uses pitch and MFCC parameters as feature vectors characterizing music notes, and represents a melody pattern by a pitch histogram and a temporal sequence of codeword indices. We also propose a new pattern matching method for this hybrid representation. We tested the proposed algorithm in a small search space (drama OSTs) and a broad one (1,005 popular songs). The experimental results on both search areas showed better performance of the proposed method over conventional methods, with average error reduction rates of 9.9% and 10.2%, respectively.
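
The pitch-histogram half of the hybrid representation can be sketched as follows: fold note pitches onto one octave, normalize the bin counts, and compare query and reference histograms by histogram intersection. This is an illustrative sketch, not the paper's exact matcher; the MIDI-style pitch sequences below are hypothetical.

```python
# Octave-folded pitch histogram and histogram-intersection similarity.
def pitch_histogram(pitches, n_bins=12):
    """Fold integer pitches onto 12 pitch classes; return a normalized histogram."""
    hist = [0] * n_bins
    for p in pitches:
        hist[p % n_bins] += 1
    total = sum(hist)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = [60, 62, 64, 60, 67]     # hypothetical query melody (MIDI pitches)
same  = [72, 74, 76, 72, 79]     # same melody transposed up one octave
other = [61, 61, 63, 66, 68]     # an unrelated melody
```

Octave folding makes the histogram invariant to octave transposition, which is why `query` and `same` match perfectly; the MFCC-VQ codeword sequence then supplies the temporal (dynamic) evidence the histogram discards.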

The Effect of Voice Therapy for the Treatment of Functional Aphonia: A Preliminary Study (기능적 실성증에 대한 음성치료의 효과 분석: 기초 연구)

  • Kim, No Eul; Kim, Jun Seok; Oh, Jae Hwan; Kim, Dong Young; Woo, Joo Hyun
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.32 no.2 / pp.75-80 / 2021
  • Background and Objectives Functional aphonia refers to a condition in which patients present only a whispering voice or produce very high-pitched, tense voices. Voice therapy is the most effective treatment, but there is a lack of consensus on its application. The purpose of this study was to examine the vocal characteristics of functional aphonia and the effect of voice therapy applied accordingly. Materials and Method From October 2019 to December 2020, 11 patients with functional aphonia were treated using voice therapy proceeding in three stages: vocal hygiene, trial therapy, and behavioral therapy. Of these, 7 patients who completed voice evaluation before and after voice therapy were enrolled in this study. By retrospective chart review, clinical information such as sex, age, symptoms, duration, social and medical history, course of voice therapy, and subjective and objective findings was analyzed. Voice parameters before and after voice therapy were compared. Results In the GRBAS evaluation, grade, roughness, and asthenia, and in the Consensus Auditory-Perceptual Evaluation of Voice, overall severity, roughness, pitch, and loudness were significantly improved after voice therapy. In the Voice Handicap Index, the total and all sub-category scores decreased significantly. In objective voice analysis, jitter, cepstral peak prominence, and maximum phonation time improved significantly. Conclusion Voice therapy was effective for the treatment of functional aphonia, restoring the patient's vocalization and improving voice quality, pitch, and loudness.

Study for Correlation between Objective and Subjective Voice Parameters in Patients with Dysphonia (발성장애 환자에서 주관적 음성검사와 객관적 음성검사의 연관성 연구)

  • Park, Jung Woo; Kim, Boram; Oh, Jae Hwan; Kang, Tae Kyu; Kim, Dong Young; Woo, Joo Hyun
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.30 no.2 / pp.118-123 / 2019
  • Background and Objectives Voice evaluation is classified into subjective tests, such as auditory-perceptual rating and self-assessment, and objective tests, such as acoustic and aerodynamic analysis. When evaluating dysphonia, subjective and objective test results do not always match. The purpose of this study was to analyze the relationship between subjective and objective evaluation in patients with dysphonia and to identify meaningful parameters by disease. Materials and Method A total of 322 patients who visited the voice clinic from May 2017 to May 2018 were included in this study. Laryngeal lesions were identified using stroboscopy. The Pearson correlation test was performed to analyze correlations between subjective tests, including the GRBAS scale and voice handicap index, and objective tests, including jitter, shimmer, noise-to-harmonic ratio (NHR), cepstral peak prominence (CPP), maximal phonation time (MPT), mean flow rate, and subglottic pressure. Results In vocal nodule and sulcus vocalis, among the GRBAS components, grade and breathiness correlated well with CPP, and roughness correlated well with jitter or shimmer. In unilateral vocal cord paralysis (UVCP), grade and breathiness correlated very well with CPP, and also correlated well with jitter, shimmer, NHR, and MPT; asthenia also correlated well with CPP and MPT. In vocal polyp, correlations were limited compared with the other diseases. Conclusion In patients with dysphonia, grade and breathiness correlated well with CPP, jitter, and shimmer; in particular, CPP and MPT reflected the state of voice change well in UVCP.
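
The correlation analysis above amounts to computing Pearson's r between a perceptual rating and an acoustic measure. A minimal sketch, assuming hypothetical paired values (GRBAS grade on a 0-3 scale versus CPP in dB, which typically falls as severity rises):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired ratings: GRBAS grade vs. CPP (dB).
grade = [0, 1, 1, 2, 2, 3, 3]
cpp   = [14.2, 12.8, 12.1, 10.5, 9.9, 8.0, 7.4]
r = pearson_r(grade, cpp)   # strongly negative: worse grade, lower CPP
```

A strongly negative r of this kind is what the study reports as a "good correlation" between grade and CPP; significance would additionally require a p-value from the t-distribution with n-2 degrees of freedom.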

Korean speech recognition using deep learning (딥러닝 모형을 사용한 한국어 음성인식)

  • Lee, Suji; Han, Seokjin; Park, Sewon; Lee, Kyeongwon; Lee, Jaeyong
    • The Korean Journal of Applied Statistics / v.32 no.2 / pp.213-227 / 2019
  • In this paper, we propose an end-to-end deep learning model combining a Bayesian neural network with Korean speech recognition. In the past, Korean speech recognition was a complicated task due to the many parameters of its numerous intermediate steps and the need for Korean linguistic expertise. Fortunately, Korean speech recognition becomes manageable with the aid of recent breakthroughs in end-to-end models, which decode mel-frequency cepstral coefficients directly into text without any intermediate processes. Connectionist Temporal Classification (CTC) loss and attention-based models are representative end-to-end approaches. In addition, we incorporate a Bayesian neural network into the end-to-end model to obtain Monte Carlo estimates. Finally, we carry out our experiments on the "WorimalSam" online dictionary dataset and obtain a 4.58% word error rate, improving on the results of the Google and Naver APIs.