• Title/Abstract/Keyword: cepstral features


자율이동로봇의 명령 교시를 위한 HMM 기반 음성인식시스템의 구현 (Implementation of Hidden Markov Model based Speech Recognition System for Teaching Autonomous Mobile Robot)

  • 조현수;박민규;이민철
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.281-281 / 2000
  • This paper presents an implementation of a speech recognition system for teaching an autonomous mobile robot. Using human speech as the teaching method provides a more convenient user interface for the mobile robot. In this study, to make teaching easy, an autonomous mobile robot with a speech recognition function is developed. In the speech recognition system, an algorithm based on HMMs (Hidden Markov Models) is used to recognize Korean words. A filter-bank analysis model is used as the spectral analysis method to extract features. A recognized word is converted into a command for controlling robot navigation.

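The filter-bank spectral analysis used for feature extraction in the paper above can be sketched in pure Python. This is a generic mel-style triangular filter bank, not the authors' exact configuration; the function names and parameters (number of filters, FFT size, sample rate) are illustrative.

```python
import math

def hz_to_mel(f):
    # Standard mel-scale mapping.
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    """Build triangular mel filters over the FFT bin centre frequencies."""
    lo, hi = hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0)
    mel_points = [lo + i * (hi - lo) / (n_filters + 1) for i in range(n_filters + 2)]
    bin_points = [int((n_fft + 1) * mel_to_hz(m) / sample_rate) for m in mel_points]
    banks = []
    for j in range(1, n_filters + 1):
        left, centre, right = bin_points[j - 1], bin_points[j], bin_points[j + 1]
        fb = [0.0] * (n_fft // 2 + 1)
        for k in range(left, centre):      # rising edge of the triangle
            fb[k] = (k - left) / (centre - left)
        for k in range(centre, right):     # falling edge of the triangle
            fb[k] = (right - k) / (right - centre)
        banks.append(fb)
    return banks

def filterbank_energies(power_spectrum, banks):
    # Log energy in each band: one feature vector per analysis frame.
    return [math.log(max(sum(w * p for w, p in zip(fb, power_spectrum)), 1e-10))
            for fb in banks]
```

In a full recognizer these per-frame log energies would feed the HMM as the observation sequence.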

화자 식별에서의 배경화자데이터를 이용한 히스토그램 등화 기법 (Histogram Equalization Using Background Speakers' Utterances for Speaker Identification)

  • 김명재;양일호;소병민;김민석;유하진
    • 말소리와 음성과학 / Vol. 4, No. 2 / pp.79-86 / 2012
  • In this paper, we propose a novel approach to improving histogram equalization for speaker identification. Our method collects all speech features of the UBM training data to form a reference distribution. The rank of each feature vector is computed in the sorted union of the UBM training data and the test data, and these ranks are used to perform order-based histogram equalization. The proposed method improves the accuracy of speaker recognition with short utterances. We evaluate the proposed system on four speech databases and compare it with cepstral mean normalization (CMN), mean and variance normalization (MVN), and histogram equalization (HEQ). Our system reduced the relative error rate by 33.3% from the baseline system.
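The order-based equalization described above can be illustrated for a single feature dimension. This sketch assumes the simplest reading of the idea, ranking a test value in the pooled sorted list and mapping it onto the reference distribution; all names are illustrative, and the paper's multi-dimensional details are omitted.

```python
import bisect

def order_based_heq(test_feats, reference_feats):
    """Map each test feature to the reference (UBM training) distribution
    by its rank in the pooled, sorted list of both data sets."""
    pooled = sorted(reference_feats + test_feats)
    ref_sorted = sorted(reference_feats)
    n_pool, n_ref = len(pooled), len(ref_sorted)
    out = []
    for x in test_feats:
        rank = bisect.bisect_left(pooled, x)       # position in the pooled order
        q = rank / max(n_pool - 1, 1)              # normalised rank in [0, 1]
        # Replace the feature with the reference value at the same quantile.
        out.append(ref_sorted[min(int(q * (n_ref - 1) + 0.5), n_ref - 1)])
    return out
```

An outlying test value is thereby pulled onto the support of the reference distribution, which is the intended robustness effect for short utterances.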

Dysarthric speaker identification with different degrees of dysarthria severity using deep belief networks

  • Farhadipour, Aref;Veisi, Hadi;Asgari, Mohammad;Keyvanrad, Mohammad Ali
    • ETRI Journal / Vol. 40, No. 5 / pp.643-652 / 2018
  • Dysarthria is a degenerative disorder of the central nervous system that affects the control of articulation and pitch; it therefore affects the uniqueness of the sound produced by the speaker, making dysarthric speaker recognition a challenging task. In this paper, a feature-extraction method based on deep belief networks is presented for identifying a speaker suffering from dysarthria. The effectiveness of the proposed method is demonstrated and compared with well-known mel-frequency cepstral coefficient features. For classification, a multi-layer perceptron neural network with two structures is proposed. Our evaluations using the Universal Access speech database produced promising results and outperformed other baseline methods. In addition, speaker identification under both text-dependent and text-independent conditions is explored. The highest accuracy achieved by the proposed system is 97.3%.

Combination of Classifiers Decisions for Multilingual Speaker Identification

  • Nagaraja, B.G.;Jayanna, H.S.
    • Journal of Information Processing Systems / Vol. 13, No. 4 / pp.928-940 / 2017
  • State-of-the-art speaker recognition systems may work well for the English language, but when the same system is used to recognize speakers of other languages, performance can be poor. In this work, the decisions of a Gaussian mixture model-universal background model (GMM-UBM) and a learning vector quantization (LVQ) classifier are combined to improve the recognition performance of a multilingual speaker identification system. The difference between these classifiers lies in their modeling techniques: the former is based on a probabilistic approach and the latter on the fine-tuning of neurons. Since the approaches differ, each modeling technique identifies a different set of speakers for the same database, so the classifiers' decisions can be combined to improve performance. In this study, multitaper mel-frequency cepstral coefficients (MFCCs) are used as features, and monolingual and cross-lingual speaker identification studies are conducted using NIST-2003 and our own database. The experimental results show that the combined system improves performance by nearly 10% compared with the individual classifiers.
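A minimal sketch of score-level combination in the spirit described above (the paper fuses GMM-UBM and LVQ decisions; the min-max normalisation and the `weight` parameter here are assumptions, not the authors' exact rule):

```python
def combine_decisions(scores_a, scores_b, weight=0.5):
    """Fuse two classifiers' per-speaker scores and pick the arg-max speaker.

    scores_a, scores_b: dicts mapping speaker id -> raw score from each system.
    weight: relative trust in system A (illustrative; would be tuned on held-out data).
    """
    def norm(scores):
        # Min-max normalise so the two systems' score ranges are comparable.
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {spk: (v - lo) / span for spk, v in scores.items()}

    na, nb = norm(scores_a), norm(scores_b)
    fused = {spk: weight * na[spk] + (1 - weight) * nb.get(spk, 0.0) for spk in na}
    return max(fused, key=fused.get)
```

Because the two systems tend to err on different speakers, the fused score can recover cases where one classifier alone is wrong.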

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / Vol. 19, No. 3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information; SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (a fully connected network (FCN) and a convolutional neural network (CNN)) with mel-frequency cepstral coefficients (MFCCs) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional convolutional neural network (2D-CNN) with MFCCs showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, examining the distribution of recognition accuracies across the neural network models shows that the 2D-CNN with MFCCs can be expected to achieve an overall accuracy of 75% or more.
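The MFCC matrix that a 2D-CNN consumes can be sketched as a DCT-II over per-frame log filter-bank energies, stacked over time. The frame count and coefficient count below are illustrative, not the paper's settings.

```python
import math

def mfcc_from_log_energies(log_energies, n_coeffs=13):
    """DCT-II of one frame's log filter-bank energies: one MFCC vector."""
    n = len(log_energies)
    return [sum(e * math.cos(math.pi * k * (i + 0.5) / n)
                for i, e in enumerate(log_energies))
            for k in range(n_coeffs)]

def mfcc_matrix(frames_of_log_energies, n_coeffs=13):
    """Stack per-frame MFCCs into the (time x coefficient) 2D input image."""
    return [mfcc_from_log_energies(f, n_coeffs) for f in frames_of_log_energies]
```

Treating this matrix as a single-channel image is what lets a 2D-CNN exploit both spectral and temporal structure.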

Musical Genre Classification Based on Deep Residual Auto-Encoder and Support Vector Machine

  • Xue Han;Wenzhuo Chen;Changjian Zhou
    • Journal of Information Processing Systems / Vol. 20, No. 1 / pp.13-23 / 2024
  • Music brings people pleasure and relaxation, so classifying musical genres by scene is useful; identifying favorite genres in massive music data is a time-consuming and laborious task. Recent studies suggest that machine learning algorithms are effective at distinguishing musical genres, but meeting practical requirements for accuracy and timeliness remains challenging. In this study, a hybrid machine learning model that combines a deep residual auto-encoder (DRAE) and a support vector machine (SVM) is proposed for musical genre recognition. Eight features manually extracted from mel-frequency cepstral coefficients (MFCCs) are employed in the preprocessing stage as the hybrid music data source. During training, the DRAE extracts feature maps, which are then used as input to the SVM classifier. The experimental results show that this method achieves a 91.54% F1-score and 91.58% top-1 accuracy, outperforming existing approaches. This approach, which combines a deep architecture with a conventional machine learning algorithm, opens a new horizon for musical genre classification.

Multi-constrained optimization combining ARMAX with differential search for damage assessment

  • Lakshmi, K.;Rama Mohan Rao, A.
    • Structural Engineering and Mechanics / Vol. 72, No. 6 / pp.689-712 / 2019
  • Time-series models such as AR-ARX and ARMAX provide a robust way to capture the dynamic properties of structures, and their residuals can be used effectively as features for damage detection. Although several research papers discuss the implementation of AR-ARX and ARMAX models for damage diagnosis, they have so far been exploited mainly for detecting the time instant of damage and its spatial location. The inverse problem of damage quantification, i.e., estimating the extent of damage using time-series models, has not been reported in the literature. In this paper, an approach to estimating the extent of damage is presented that combines the ARMAX model with a newly developed hybrid adaptive differential search with dynamic interaction, formulating the inverse problem as a multi-constrained optimization problem. The proposed variant of the differential search technique employs several small populations that search independently and exchange information with a dynamic neighborhood. Adaptive and local-search features are built into the algorithm to improve its convergence characteristics and overall performance. The multi-constrained optimization formulation of the inverse problem of damage quantification using time-series models, attempted here for the first time, can considerably improve the robustness of the search process. Numerical simulation studies on three examples demonstrate the effectiveness of the proposed technique in robustly identifying the extent of damage. Issues related to modeling errors and measurement noise are also addressed.
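The residual-feature idea underlying the abstract above can be illustrated with a toy AR(2) model: fit coefficients on the healthy-state response, then use the one-step prediction error on new data as the damage-sensitive feature. The paper uses ARMAX models and a differential-search optimizer; this sketch covers only the residual step, and all names are illustrative.

```python
def fit_ar2(x):
    """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2].

    Solves the 2x2 normal equations directly (a toy stand-in for the
    ARMAX identification step in the paper).
    """
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]; s12 += x[t-1] * x[t-2]; s22 += x[t-2] * x[t-2]
        b1  += x[t]   * x[t-1]; b2  += x[t]   * x[t-2]
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

def residual_energy(x, a1, a2):
    """Mean squared one-step prediction error: the damage-sensitive feature.
    It grows when the structure's dynamics deviate from the fitted model."""
    errs = [(x[t] - a1 * x[t-1] - a2 * x[t-2]) ** 2 for t in range(2, len(x))]
    return sum(errs) / len(errs)
```

Quantifying the *extent* of damage, the paper's contribution, then amounts to searching for the stiffness change that best explains the observed residual growth.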

Noise-Robust Speaker Recognition Using Subband Likelihoods and Reliable-Feature Selection

  • Kim, Sung-Tak;Ji, Mi-Kyong;Kim, Hoi-Rin
    • ETRI Journal / Vol. 30, No. 1 / pp.89-100 / 2008
  • We consider the feature recombination technique in a multiband approach to speaker identification and verification. To overcome the ineffectiveness of conventional feature recombination in broadband noisy environments, we propose a new subband feature recombination which uses subband likelihoods, together with a subband reliable-feature selection technique based on an adaptive noise model. In the decision step of speaker recognition, a few very low, unreliable feature likelihood scores can cause the system to make an incorrect decision. To overcome this problem, reliable-feature selection adjusts the likelihood score of an unreliable feature by comparing it with that of an adaptive noise model, which is estimated by maximum a posteriori adaptation using noise features obtained directly from the noisy test speech. To evaluate the effectiveness of the proposed methods in noisy environments, we use the TIMIT database and the NTIMIT database, the telephone version of the TIMIT database. The proposed subband feature recombination with subband reliable-feature selection achieves better performance than the conventional feature recombination system with reliable-feature selection.

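The reliable-feature selection step can be sketched with one-dimensional Gaussian models: a frame whose speaker-model log-likelihood falls below the noise model's is treated as unreliable and scored by the noise model instead. This is a simplified reading; the paper operates on subband likelihoods in a GMM framework, and the parameter tuples here are illustrative.

```python
import math

def gauss_loglik(x, mean, var):
    # Log-likelihood of x under a 1-D Gaussian (mean, var).
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def reliable_score(frames, spk_model, noise_model):
    """Sum per-frame scores, but cap how far an unreliable (noise-dominated)
    frame can drag the total down by falling back to the noise model."""
    total = 0.0
    for x in frames:
        l_spk = gauss_loglik(x, *spk_model)
        l_noise = gauss_loglik(x, *noise_model)
        total += max(l_spk, l_noise)  # noise-like frames stop hurting the speaker
    return total
```

Without this guard, a single noise-corrupted frame with a huge negative speaker likelihood could flip the final identification decision.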

MSVQ/TDRNN을 이용한 음성인식 (Speech Recognition Using MSVQ/TDRNN)

  • 김성석
    • 한국음향학회지 / Vol. 33, No. 4 / pp.268-272 / 2014
  • This paper proposes a hybrid speech recognition method using Multi-Section Vector Quantization (MSVQ) and a time-delay recurrent neural network (TDRNN). MSVQ generates a codebook that normalizes the length of an utterance to a fixed number of sections, and the TDRNN recognizes speech using this codebook. The TDRNN is structured so that it can learn the temporal context information of speech well. Perceptual linear prediction (PLP) coefficients are used as speech features. Speech recognition experiments show that the MSVQ/TDRNN recognizer achieves a speaker-independent recognition rate of 97.9 %.
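The section normalisation in MSVQ can be sketched as follows, assuming each codeword is the average of the feature vectors falling in one of a fixed number of equal time segments. This is an illustrative simplification of codebook training, not the paper's exact procedure.

```python
def msvq_codebook(frames, n_sections):
    """Normalise an utterance of any length to n_sections codewords by
    averaging the feature vectors inside each section."""
    n = len(frames)
    book = []
    for s in range(n_sections):
        lo, hi = s * n // n_sections, (s + 1) * n // n_sections
        seg = frames[lo:max(hi, lo + 1)]          # never let a section be empty
        dim = len(seg[0])
        book.append([sum(f[d] for f in seg) / len(seg) for d in range(dim)])
    return book

def msvq_distance(book_a, book_b):
    """Section-wise squared distance between two codebooks; the recognizer
    would pick the word model whose codebook is nearest."""
    return sum((a - b) ** 2
               for ca, cb in zip(book_a, book_b)
               for a, b in zip(ca, cb))
```

Because every utterance maps to the same number of sections, words of different durations become directly comparable, which is what makes the fixed-input TDRNN stage workable.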

재생 정보 기반 우연성 지향적 음악 추천에 관한 연구 (A Study on Serendipity-Oriented Music Recommendation Based on Play Information)

  • 하태현;이상원
    • 대한산업공학회지 / Vol. 41, No. 2 / pp.128-136 / 2015
  • With the recent interests with culture technologies, many studies for recommendation systems have been done. In this vein, various music recommendation systems have been developed. However, they have often focused on the technical aspects such as feature extraction and similarity comparison, and have not sufficiently addressed them in user-centered perspectives. For users' high satisfaction with recommended music items, it is necessary to study how the items are connected to the users' actual desires. For this, our study proposes a novel music recommendation method based on serendipity, which means the freshness users feel for their familiar items. The serendipity is measured through the comparison of users' past and recent listening tendencies. We utilize neural networks to apply these tendencies to the recommendation process and to extract the features of music items as MFCCs (Mel-frequency cepstral coefficients). In that the recommendation method is developed based on the characteristics of user behaviors, it is expected that user satisfaction for the recommended items can be actually increased.