• Title/Summary/Keyword: Mel frequency cepstral coefficient

Search results: 65

Audio Fingerprint Retrieval Method Based on Feature Dimension Reduction and Feature Combination

  • Zhang, Qiu-yu; Xu, Fu-jiu; Bai, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 2 / pp. 522-539 / 2021
  • To solve the problems of existing audio fingerprinting methods when extracting fingerprints from long speech segments, such as an overly large fingerprint dimension, poor robustness, and low retrieval accuracy and efficiency, a robust audio fingerprint retrieval method based on feature dimension reduction and feature combination is proposed. First, the Mel-frequency cepstral coefficients (MFCC) and linear prediction cepstral coefficients (LPCC) of the original speech are extracted, and the MFCC and LPCC feature matrices are combined. Second, an information-entropy-based method is used to reduce the column dimension of the combined matrix, and an energy-based method is then used to reduce the row dimension of the result. Finally, the audio fingerprint is constructed from the dimension-reduced combined feature matrix. When a user retrieves speech, matching is performed with the normalized Hamming distance. Experimental results show that the proposed method produces audio fingerprints of smaller dimension with better robustness for long speech segments, and achieves higher retrieval efficiency while maintaining high recall and precision.
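
The matching step described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the fingerprint is reduced to a toy binarization of an MFCC matrix (the paper combines MFCC with LPCC and applies entropy- and energy-based dimension reduction first), and all function names and the threshold are hypothetical.

```python
import numpy as np
import librosa  # assumed available for feature extraction

def binary_fingerprint(y, sr, n_mfcc=13):
    """Toy fingerprint: binarized MFCC deltas. The paper instead combines
    MFCC and LPCC matrices and applies entropy- and energy-based
    dimension reduction before fingerprint construction."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return (np.diff(mfcc, axis=1) > 0).astype(np.uint8)

def normalized_hamming_distance(fp_a, fp_b):
    """Fraction of differing bits: 0 = identical, 1 = all bits differ."""
    n = min(fp_a.size, fp_b.size)
    return np.count_nonzero(fp_a.ravel()[:n] != fp_b.ravel()[:n]) / n

def retrieve(query_fp, database, threshold=0.35):
    """Return the best-matching entry in `database` (name -> fingerprint)
    if its normalized Hamming distance is below a hypothetical threshold."""
    name = min(database,
               key=lambda k: normalized_hamming_distance(query_fp, database[k]))
    dist = normalized_hamming_distance(query_fp, database[name])
    return (name, dist) if dist < threshold else (None, dist)
```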

Noise-Robust Speaker Recognition Using Subband Likelihoods and Reliable-Feature Selection

  • Kim, Sung-Tak; Ji, Mi-Kyong; Kim, Hoi-Rin
    • ETRI Journal / Vol. 30, No. 1 / pp. 89-100 / 2008
  • We consider the feature recombination technique in a multiband approach to speaker identification and verification. To overcome the ineffectiveness of conventional feature recombination in broadband noisy environments, we propose a new subband feature recombination that uses subband likelihoods, together with a subband reliable-feature selection technique based on an adaptive noise model. In the decision step of speaker recognition, a few very low likelihood scores from unreliable features can cause the system to make an incorrect decision. To overcome this problem, reliable-feature selection adjusts the likelihood scores of an unreliable feature by comparing them with those of an adaptive noise model, which is estimated by the maximum a posteriori adaptation technique using noise features obtained directly from the noisy test speech. To evaluate the effectiveness of the proposed methods in noisy environments, we use the TIMIT database and the NTIMIT database, the telephone version of TIMIT. The proposed subband feature recombination with subband reliable-feature selection achieves better performance than the conventional feature recombination system with reliable-feature selection.
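
The reliable-feature selection idea, flooring the likelihood of an unreliable subband feature using an adaptive noise model, can be sketched as follows. This is a schematic reading of the abstract with hypothetical inputs (per-frame log-likelihoods from speaker and noise GMMs), not the authors' code.

```python
import numpy as np

def adjust_unreliable_scores(speaker_ll, noise_ll):
    """speaker_ll, noise_ll: per-frame log-likelihoods of one subband
    feature under the speaker GMM and under an adaptive noise GMM
    (MAP-adapted from noise frames of the test utterance, per the paper).
    A feature scoring below the noise model is treated as unreliable and
    its score is floored by the noise-model score, so a few very low
    scores cannot dominate the decision."""
    speaker_ll, noise_ll = np.asarray(speaker_ll), np.asarray(noise_ll)
    return np.where(speaker_ll >= noise_ll, speaker_ll, noise_ll)

def utterance_score(subband_speaker_ll, subband_noise_ll):
    """Recombine in the likelihood domain: sum adjusted scores over all
    subbands and frames to get one utterance-level speaker score."""
    return sum(adjust_unreliable_scores(s, n).sum()
               for s, n in zip(subband_speaker_ll, subband_noise_ll))
```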

Emotion recognition from speech using Gammatone auditory filterbank

  • 레바부이; 이영구; 이승룡
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol. 38, No. 1(A) / pp. 255-258 / 2011
  • This paper describes an application of a Gammatone auditory filterbank to emotion recognition from speech. The Gammatone filterbank, a bank of Gammatone filters, is used as a preprocessing stage before feature extraction to obtain the features most relevant to emotion recognition from speech. In the feature extraction step, the energy of the output signal of each filter is computed and combined with those of all the other filters to produce a feature vector for the learning step. Feature vectors are estimated over short time windows of the input speech signal to take advantage of its time-domain dependence. Finally, in the learning step, a Hidden Markov Model (HMM) is used to create a model for each emotion class and to recognize the emotion of a particular input speech. In our experiments, feature extraction based on the Gammatone filterbank (GTF) shows better results than features based on the Mel-Frequency Cepstral Coefficient (MFCC), a well-known feature extraction method for speech recognition as well as for emotion recognition from speech.
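
As a rough sketch of the preprocessing and feature extraction stages described above, the following computes per-filter log energies using a truncated gammatone impulse response and ERB-rate-spaced center frequencies. The spacing, filter order, and truncation length are common choices assumed here, not taken from the paper.

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore), in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, sr, duration=0.05, order=4):
    """Truncated impulse response of a gammatone filter centered at fc."""
    t = np.arange(int(duration * sr)) / sr
    b = 1.019 * erb(fc)  # standard bandwidth scaling
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

def gammatone_energies(frame, sr, n_filters=32, fmin=100.0):
    """Log energy of each gammatone filter's output for one frame,
    mirroring the per-filter energy features described in the abstract."""
    fmax = sr / 2.0
    # ERB-rate spacing of center frequencies (a common convention; the
    # paper does not state its spacing in the abstract).
    e_lo, e_hi = (21.4 * np.log10(4.37e-3 * f + 1.0) for f in (fmin, fmax))
    fcs = (10.0 ** (np.linspace(e_lo, e_hi, n_filters) / 21.4) - 1.0) / 4.37e-3
    feats = np.empty(n_filters)
    for i, fc in enumerate(fcs):
        out = np.convolve(frame, gammatone_ir(fc, sr), mode="same")
        feats[i] = np.log(np.sum(out ** 2) + 1e-10)
    return feats
```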

Development of Age Classification Deep Learning Algorithm Using Korean Speech

  • 소순원; 유승민; 김주영; 안현준; 조백환; 육순현; 김인영
    • 대한의용생체공학회:의공학회지 / Vol. 39, No. 2 / pp. 63-68 / 2018
  • In modern society, speech recognition is emerging as an important identification technology in electronic commerce, forensics, law enforcement, and other systems. In this study, we develop an age classification algorithm that extracts only the MFCC (Mel Frequency Cepstral Coefficient) features expressing the characteristics of speech from Korean speech and applies them to deep learning. The algorithm extracts 13th-order MFCCs from Korean speech data to construct a data set, and uses a deep artificial neural network to classify males in their 20s, 30s, and 50s and females in their 20s, 40s, and 50s. Finally, our model achieved classification accuracies of 78.6% and 71.9% for males and females, respectively.
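
A minimal sketch of this pipeline, assuming librosa for 13th-order MFCC extraction and using an sklearn MLP as a stand-in for the paper's deep neural network (the exact architecture and pooling are not given in the abstract):

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, n_mfcc=13):
    """13th-order MFCCs averaged over time: a simple utterance-level
    summary (the paper's exact pooling is not stated in the abstract)."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def train_age_classifier(paths, labels):
    """Train a small fully connected network on utterance-level MFCCs;
    `labels` are age bands such as "20s", "30s", "50s"."""
    X = np.stack([mfcc_features(p) for p in paths])
    clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500,
                        random_state=0)
    return clf.fit(X, labels)
```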

Multimodal Emotion Recognition using Face Image and Speech

  • 이현구; 김동주
    • 디지털산업정보학회논문지 / Vol. 8, No. 1 / pp. 29-40 / 2012
  • A challenging research issue of growing importance to those working in human-computer interaction is endowing a machine with emotional intelligence. Emotion recognition technology thus plays an important role in human-computer interaction research, as it allows more natural and human-like communication between humans and computers. In this paper, we propose a multimodal emotion recognition system that uses both face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed using 2D-PCA of the MCS-LBP image and a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model based on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined by weighted summation, and the fused score is used to classify the human emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with the unimodal approaches, confirming that it achieves a significant performance improvement and is very effective.
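
The weighted-summation fusion step can be illustrated directly. The min-max normalization below is an assumption (the abstract does not state how the distance and likelihood scores are put on a common scale), and the emotion labels and score values are made up for the example.

```python
import numpy as np

def fuse_scores(face_dist, speech_ll, w_face=0.5):
    """Weighted-summation fusion of a face-based distance and a
    speech-based log-likelihood per emotion class. Both vectors are
    min-max normalized to [0, 1]; the distance is inverted so that
    larger always means 'more likely'."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    face_score = 1.0 - norm(face_dist)   # small distance -> high score
    speech_score = norm(speech_ll)       # high likelihood -> high score
    return w_face * face_score + (1.0 - w_face) * speech_score

emotions = ["angry", "happy", "sad", "neutral"]
fused = fuse_scores([1.2, 0.4, 2.0, 1.5], [-35.0, -20.5, -41.2, -30.0])
print(emotions[int(np.argmax(fused))])   # -> "happy"
```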

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri; Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / Vol. 23, No. 8 / pp. 9-16 / 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify the emotional content contained in speech signals and the type of emotion expressed. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, built from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel Frequency Cepstral Coefficient (MFCC) and the Zero-Crossing Rate (ZCR), and then classified emotions with several algorithms: machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC feature extraction method with the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this system in recognizing emotions in spoken Arabic.
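
A hedged sketch of the MFCC+ZCR feature map and a small CNN classifier follows; the layer sizes, padding length, and sampling rate are illustrative assumptions, since the abstract does not specify the architecture.

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_zcr_features(path, n_mfcc=40, max_frames=200):
    """Stack MFCC and zero-crossing-rate frames into a fixed-size map
    with a trailing channel axis for the CNN."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    zcr = librosa.feature.zero_crossing_rate(y)        # shape (1, T)
    feat = np.vstack([mfcc, zcr])[:, :max_frames]
    feat = np.pad(feat, ((0, 0), (0, max_frames - feat.shape[1])))
    return feat[..., np.newaxis]                       # (n_mfcc+1, T, 1)

def build_cnn(n_classes=4, n_mfcc=40, max_frames=200):
    """Small CNN for the four classes (anger, happiness, sadness,
    neutral); the real architecture is not given in the abstract."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_mfcc + 1, max_frames, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
```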

Speaker Identification Using Dynamic Time Warping Algorithm

  • 정승도
    • 한국산학기술학회논문지 / Vol. 12, No. 5 / pp. 2402-2409 / 2011
  • In addition to the information it conveys, speech carries acoustic characteristics unique to the speaker. Speaker recognition determines who is speaking from these acoustic differences between speakers. It is divided into speaker verification and speaker identification: speaker verification checks whether a single voice belongs to the claimed person, while speaker identification determines who the claimant is by finding the most similar model among multiple pre-registered text-dependent utterances. In this paper, feature vectors are constructed from extracted MFCC (Mel Frequency Cepstral Coefficient) features, and similarity between features is compared using the Dynamic Time Warping (DTW) algorithm. Two text-dependent sentences per speaker are used as training data to describe common characteristics based on phonetic properties, so that the same speaker can be identified even when words not stored in the database are used.
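
The DTW comparison at the core of this method is standard and can be written compactly. The path-length normalization below is one common convention; the paper may use another.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two MFCC sequences of shape
    (frames, coefficients), with the standard three-step local path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)                       # path-length normalization

def identify(query_mfcc, enrolled):
    """Return the enrolled speaker (name -> reference MFCC sequence)
    whose reference is nearest to the query under DTW."""
    return min(enrolled, key=lambda s: dtw_distance(query_mfcc, enrolled[s]))
```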

Feature Extraction Algorithm for Underwater Transient Signal Using Cepstral Coefficients Based on Wavelet Packet

  • 김주호; 팽동국; 이종현; 이승우
    • 한국해양공학회지 / Vol. 28, No. 6 / pp. 552-559 / 2014
  • In general, the number of underwater transient signals available for research on automatic recognition is very limited, and data-dependent feature extraction is one of the most effective methods in this case. We therefore propose the WPCC (wavelet packet cepstral coefficient) as a feature extraction method. A wavelet packet best tree for each data set is formed using an entropy-based cost function, and the terminal nodes of the best trees are then counted to build a common wavelet best tree, which corresponds to a flexible, non-uniform filter bank reflecting the characteristics of the data set. A GMM (Gaussian mixture model) is used to classify five classes of underwater transient data sets. The error rate of the WPCC is compared with that of MFCC (Mel-frequency cepstral coefficients). The error rates of WPCC-db20, WPCC-db40, and MFCC are 0.4%, 0%, and 0.4%, respectively, when the training data consist of six of the nine pieces of data in each class; however, when the training data consist of only three pieces, WPCC-db20 and WPCC-db40 show error rates of 2.98% and 1.20%, respectively, while MFCC shows 7.14%. This shows that WPCC is less sensitive than MFCC to the amount of training data and could thus be a more appropriate method for underwater transient recognition. These results may help in developing an automatic recognition system for underwater transient signals.
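
A simplified WPCC sketch using PyWavelets follows. Note the strong simplification: the paper selects a data-dependent best tree with an entropy cost and merges the best trees of all signals into a common non-uniform filter bank, whereas this sketch uses a full fixed-level decomposition as a stand-in for that step.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.fftpack import dct

def wpcc(frame, wavelet="db20", level=4, n_coeffs=13):
    """Simplified WPCC: log subband energies from a full wavelet packet
    decomposition, followed by a DCT to decorrelate them into cepstral
    coefficients. The full fixed-level decomposition here replaces the
    paper's entropy-selected common best tree."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")     # frequency-ordered leaves
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    log_e = np.log(energies + 1e-10)
    return dct(log_e, type=2, norm="ortho")[:n_coeffs]
```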

Music Identification Using Pitch Histogram and MFCC-VQ Dynamic Pattern

  • 박철의; 박만수; 김성탁; 김회린
    • 한국음향학회지 / Vol. 24, No. 3 / pp. 178-185 / 2005
  • This paper proposes a hybrid content-based music information retrieval method that exploits both the temporal variation and the statistical characteristics of a melody. To reflect a real broadcasting environment, the proposed method was evaluated not only on the narrow search range of drama OSTs but also on the wide search range of 1,005 popular songs. The method represents acoustic characteristics with pitch and MFCC (Mel Frequency Cepstral Coefficient) feature vectors, and represents the melody with a pitch histogram and a temporal sequence of VQ (Vector Quantization)-coded MFCCs, so that both the temporal and the statistical characteristics of the melody can be applied to music retrieval. By combining the pitch histogram and the MFCC-VQ temporal method with a suitable pattern-matching scheme, the hybrid method reduced the error rate by an average of 9.9% on the drama OST search range and 10.2% on the 1,005-song search range, compared with the existing single MFCC-VQ temporal method.
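
The two melody representations can be sketched as below, with assumed design details (pitch tracker, octave folding, bin count, codebook training) that the abstract does not specify.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def pitch_histogram(y, sr, n_bins=24):
    """Statistical melody representation: octave-folded pitch-class
    histogram computed from a YIN pitch track."""
    f0 = librosa.yin(y, fmin=80, fmax=800, sr=sr)
    semitones = 12 * np.log2(f0 / 440.0)               # relative to A4
    hist, _ = np.histogram(np.mod(semitones, 12), bins=n_bins, range=(0, 12))
    return hist / max(hist.sum(), 1)

def mfcc_vq_sequence(y, sr, codebook: KMeans):
    """Temporal melody representation: the VQ code index of every MFCC
    frame, using a KMeans codebook trained offline on reference songs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)
    return codebook.predict(mfcc)
```

A hybrid matcher in the spirit of the paper would then combine a histogram distance (statistical) with a sequence distance over the VQ codes (temporal), weighted into a single retrieval score.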

Performance comparison of wake-up-word detection on mobile devices using various convolutional neural networks

  • 김상홍; 이보원
    • 한국음향학회지 / Vol. 39, No. 5 / pp. 454-460 / 2020
  • Artificial intelligence assistants that provide speech recognition operate through highly accurate cloud-based speech recognition, in which wake-up-word detection plays an important role in activating a device on standby. This paper compares the performance of convolutional neural networks for low-computation wake-up-word detection on mobile devices, using spectrogram and Mel-frequency cepstral coefficient features as input on Google's public Speech Commands dataset. The convolutional neural networks used in this paper are a multilayer perceptron, a plain convolutional neural network, VGG16, VGG19, ResNet50, ResNet101, ResNet152, and MobileNet; we also propose a network that reduces the model size to 1/25 of MobileNet while maintaining its performance.
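
As a sketch of the kind of compact network compared here, the following builds a small depthwise-separable CNN over an MFCC patch. The layer sizes are illustrative and are not the authors' 1/25-size MobileNet variant; the 12-class output mirrors the common Speech Commands setup.

```python
import tensorflow as tf

def small_wakeword_net(n_mfcc=40, n_frames=98, n_classes=12):
    """Compact depthwise-separable CNN over a one-second MFCC patch.
    Sizes are illustrative, not the authors' shrunken MobileNet; the
    12 classes follow the common Speech Commands setup (10 keywords
    plus 'silence' and 'unknown')."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_mfcc, n_frames, 1)),
        tf.keras.layers.Conv2D(32, 3, strides=2, padding="same",
                               activation="relu"),
        tf.keras.layers.SeparableConv2D(64, 3, padding="same",
                                        activation="relu"),
        tf.keras.layers.SeparableConv2D(64, 3, strides=2, padding="same",
                                        activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = small_wakeword_net()
model.summary()   # check the parameter count against an on-device budget
```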