• Title/Summary/Keyword: 음성 특징 추출 (speech feature extraction)


Speech Feature Parameter Extraction Algorithm for Improving the Speech Recognition Rate (음성 인식률 향상을 위한 음성의 특징 파라미터 추출 알고리즘)

  • Choi, Jae-Seung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.686-687
    • /
    • 2017
  • This paper proposes an extraction algorithm for mel-frequency cepstral coefficient (MFCC) parameters that is robust against noise and effective for speech recognition. The proposed algorithm uses a Wiener filter on clean continuous speech mixed with background noise to reduce the background noise contained in the speech, and then extracts the speech feature parameters using the MFCC feature extraction method.
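A minimal sketch of the pipeline described above, assuming a mono 16 kHz input file; the file name, Wiener window size, and MFCC settings are illustrative choices, not values taken from the paper:

```python
import numpy as np
import librosa
from scipy.signal import wiener

# Load noisy continuous speech (resampled to 16 kHz mono).
noisy, sr = librosa.load("speech.wav", sr=16000, mono=True)

# Wiener filtering to reduce the background noise mixed into the speech.
denoised = np.nan_to_num(wiener(noisy, mysize=29))

# Mel-frequency cepstral coefficients as the speech feature parameters.
mfcc = librosa.feature.mfcc(y=denoised, sr=sr, n_mfcc=13,
                            n_fft=400, hop_length=160)  # 25 ms window, 10 ms hop
print(mfcc.shape)  # (13, n_frames)
```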


Effective Feature Extraction in the Individual frequency Sub-bands for Speech Recognition (음성인식을 위한 주파수 부대역별 효과적인 특징추출)

  • 지상문
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.4
    • /
    • pp.598-603
    • /
    • 2003
  • This paper presents a sub-band feature extraction approach in which the feature extraction method for each frequency sub-band is chosen according to speech recognition accuracy. As in the multi-band paradigm, features are extracted independently in frequency sub-regions of the speech signal. Since the spectral shape is well structured in the low-frequency region, an all-pole model is effective for feature extraction there, whereas in the high-frequency region a non-parametric transform, the discrete cosine transform, is effective for extracting the cepstrum. Using the sub-band-specific feature extraction method, the linguistic information in the individual frequency sub-bands can be extracted effectively for automatic speech recognition. The validity of the proposed method is shown by comparing speech recognition results for our method with those obtained using a full-band feature extraction method.
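The band-specific idea (an all-pole model in the low band, a DCT-based cepstrum in the high band) can be sketched as follows; the 2 kHz cutoff, model orders, and frame position are assumptions for illustration only:

```python
import numpy as np
import librosa
from scipy.signal import butter, sosfiltfilt
from scipy.fft import dct

y, sr = librosa.load("speech.wav", sr=16000, mono=True)

# Split the signal into a low band (< 2 kHz) and a high band (> 2 kHz).
y_low = sosfiltfilt(butter(6, 2000, btype="low", fs=sr, output="sos"), y)
y_high = sosfiltfilt(butter(6, 2000, btype="high", fs=sr, output="sos"), y)

start, win = 8000, 400  # one 25 ms frame taken from mid-utterance

# Low band: all-pole (LPC) model coefficients.
lpc_low = librosa.lpc(y_low[start:start + win], order=10)

# High band: cepstrum from the DCT of the log magnitude spectrum.
spectrum = np.abs(np.fft.rfft(y_high[start:start + win] * np.hamming(win)))
cep_high = dct(np.log(spectrum + 1e-10), type=2, norm="ortho")[:13]

features = np.concatenate([lpc_low[1:], cep_high])  # sub-band specific feature vector
```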

Voice Recognition Performance Improvement using the Convergence of Voice signal Feature and Silence Feature Normalization in Cepstrum Feature Distribution (음성 신호 특징과 셉스트럽 특징 분포에서 묵음 특징 정규화를 융합한 음성 인식 성능 향상)

  • Hwang, Jae-Cheon
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.5
    • /
    • pp.13-17
    • /
    • 2017
  • Existing speech feature extraction methods can yield incorrect recognition rates because the threshold separating speech from non-speech is not well defined. This paper presents a modeling method that improves speech recognition performance by converging feature extraction on speech frames with silence feature normalization applied to non-speech frames. The proposed method minimizes the influence of noise: speech-signal feature extraction is performed on each speech frame and converged with silence feature normalization, and because the energy spectrum of the speech signal is treated in a manner similar to entropy, the resulting features are less affected by noise. Silence feature normalization also improves performance at low signal-to-noise ratios, and a fixed threshold in the cepstral domain is used to classify speech and non-speech. In a performance analysis comparing the proposed method with conventional HMM and CHMM systems, the recognition rate improved by 2.7 percentage points in the speaker-dependent task and by 0.7 percentage points in the speaker-independent task.
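A rough sketch of the silence-feature-normalization idea, assuming a simple log-energy threshold as the speech / non-speech criterion (the paper's fixed cepstral-domain threshold is not reproduced here):

```python
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000, mono=True)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)

# Crude speech / non-speech split from frame log-energy.
rms = librosa.feature.rms(y=y, frame_length=400, hop_length=160)[0][:mfcc.shape[1]]
log_e = np.log(rms + 1e-10)
silence = log_e < log_e.min() + 0.3 * (log_e.max() - log_e.min())

# Normalize all frames by the mean cepstrum estimated on the silence frames,
# so that noise captured during non-speech intervals is removed from the features.
if silence.any():
    mfcc_norm = mfcc - mfcc[:, silence].mean(axis=1, keepdims=True)
else:
    mfcc_norm = mfcc  # no silence detected; leave the features unchanged
```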

Voice Synthesis Detection Using Language Model-Based Speech Feature Extraction (언어 모델 기반 음성 특징 추출을 활용한 생성 음성 탐지)

  • Seung-min Kim;So-hee Park;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.3
    • /
    • pp.439-449
    • /
    • 2024
  • Recent rapid advancements in voice generation technology have enabled the natural synthesis of voices using text alone. However, this progress has led to an increase in malicious activities, such as voice phishing (vishing), where generated voices are exploited for criminal purposes. Numerous models have been developed to detect the presence of synthesized voices, typically by extracting features from the voice and using these features to determine the likelihood of voice generation. This paper proposes a new model for extracting voice features to address misuse cases arising from generated voices. It utilizes a deep learning-based audio codec model and the pre-trained natural language processing model BERT to extract novel voice features. To assess the suitability of the proposed voice feature extraction model for voice detection, four generated-voice detection models were created using the extracted features, and performance evaluations were conducted. For comparison, three voice detection models based on Deepfeature proposed in previous studies were evaluated alongside the other models in terms of accuracy and EER. The model proposed in this paper achieved an accuracy of 88.08% and a low EER of 11.79%, outperforming the existing models. These results confirm that the voice feature extraction method introduced in this paper can be an effective tool for distinguishing between generated and real voices.
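The abstract does not disclose which audio codec is used or exactly how the codec tokens are passed to BERT, so the following is only one plausible arrangement: facebook/encodec_24khz and bert-base-uncased are stand-in models, and reusing BERT's text embedding table on codec indices is purely illustrative.

```python
import torch
import librosa
from transformers import AutoProcessor, EncodecModel, BertModel

audio, _ = librosa.load("utterance.wav", sr=24000, mono=True)

# 1) Discretize the waveform into tokens with a pretrained neural audio codec.
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
codec = EncodecModel.from_pretrained("facebook/encodec_24khz")
inputs = processor(raw_audio=audio, sampling_rate=24000, return_tensors="pt")
with torch.no_grad():
    encoded = codec.encode(inputs["input_values"], inputs["padding_mask"])
codes = encoded.audio_codes[0, 0, 0, :512]  # first codebook, truncated to BERT's length limit

# 2) Run the token sequence through a pretrained BERT encoder and pool the output
#    into one fixed-size voice feature vector per utterance.
bert = BertModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    hidden = bert(input_ids=codes.unsqueeze(0)).last_hidden_state
voice_feature = hidden.mean(dim=1)
# voice_feature would then be fed to a binary real / generated-voice classifier.
```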

Selective Speech Feature Extraction using Channel Similarity in CHMM Vocabulary Recognition (CHMM 어휘인식에서 채널 유사성을 이용한 선택적 음성 특징 추출)

  • Oh, Sang Yeon
    • Journal of Digital Convergence
    • /
    • v.11 no.10
    • /
    • pp.453-458
    • /
    • 2013
  • HMM-based speech recognition systems have a few weaknesses, including failure to recognize speech when environmental noise is mixed with other voices. This paper proposes a speech feature extraction method using CHMM that extracts the selected target voice from a mixture of voices and noise, using channel similarity and inter-channel correlation to compose the selective feature extraction. The proposed method was validated by showing that the average separation distortion decreased by 0.430 dB, demonstrating that the selective feature extraction performs better than the comparison system.
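A loose illustration of selection by channel similarity, with a mel filterbank standing in for the paper's channels and an arbitrary 0.5 correlation threshold; neither choice comes from the paper:

```python
import numpy as np
import librosa

y, sr = librosa.load("mixture.wav", sr=16000, mono=True)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40, n_fft=400, hop_length=160)

# Similarity of each channel envelope to its upper neighbour (Pearson correlation).
sims = np.array([np.corrcoef(mel[c], mel[c + 1])[0, 1] for c in range(len(mel) - 1)])
keep = np.concatenate([sims > 0.5, [True]])  # keep channels with coherent neighbours

# Cepstral features computed only from the selected (target-dominated) channels.
features = librosa.feature.mfcc(S=librosa.power_to_db(mel[keep]), n_mfcc=13)
```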

Target Speech Segregation Using Non-parametric Correlation Feature Extraction in CASA System (CASA 시스템의 비모수적 상관 특징 추출을 이용한 목적 음성 분리)

  • Choi, Tae-Woong;Kim, Soon-Hyub
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.1
    • /
    • pp.79-85
    • /
    • 2013
  • Feature extraction in a CASA system uses time continuity and channel similarity, and builds a correlogram of auditory elements for this purpose. When the cross-correlation coefficient is used to quantify channel similarity, the feature extraction incurs high computational complexity. This paper therefore proposes a feature extraction method based on a non-parametric correlation coefficient to reduce the computational cost, and evaluates it by segregating target speech with a CASA system. Measuring the SNR (signal-to-noise ratio) of the segregated target speech, the proposed method shows a slight improvement of 0.14 dB on average over the conventional method.
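A sketch of the non-parametric alternative: Spearman's rank correlation between adjacent channel envelopes in place of the cross-correlation coefficient. A mel filterbank replaces the gammatone front end of a full CASA system here for brevity, and the 0.6 threshold is an assumption:

```python
import numpy as np
import librosa
from scipy.stats import spearmanr

y, sr = librosa.load("mixture.wav", sr=16000, mono=True)
env = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, n_fft=400, hop_length=160)

# Rank-based (non-parametric) similarity between neighbouring channels.
rank_sim = np.array([spearmanr(env[c], env[c + 1])[0] for c in range(len(env) - 1)])

# Channels whose neighbours are strongly rank-correlated are grouped into the
# target stream; the remaining channels are left to the background.
target_channels = np.flatnonzero(rank_sim > 0.6)
```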

Speaker Recognition Technique by Extracting Speech Feature Vector using Wiener Filter Method (위너필터 방법을 사용한 음성 특징 벡터 추출에 의한 화자인식 기법)

  • Choi, Jae-seung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.617-618
    • /
    • 2017
  • To obtain adequate speech recognition performance, a speech feature vector that is optimal under noisy conditions must be selected. This paper proposes a speech recognition method that uses the Wiener filter method and mel-frequency cepstral coefficients (MFCCs), which exploit the characteristics of the human auditory system. The proposed speech feature vector is obtained by removing the background noise from the speech and then extracting the vector from the clean speech signal, and speech recognition is implemented by feeding the MFCCs into a multilayer perceptron neural network for training. In the experiments, speech recognition tests were carried out using the MFCC feature vectors on speech mixed with white noise.
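The back end described above (MFCC feature vectors classified by a multilayer perceptron) might look roughly like this; the file names, class labels, and network size are placeholders:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_frames(path):
    """Frame-level MFCC feature vectors for one utterance."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160).T

x0, x1 = mfcc_frames("class_a.wav"), mfcc_frames("class_b.wav")
X = np.vstack([x0, x1])
labels = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

# Multilayer perceptron trained on the MFCC feature vectors.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
mlp.fit(X, labels)
print(mlp.predict(mfcc_frames("test.wav"))[:10])
```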


Voice Recognition Performance Improvement using the Convergence of Bayesian method and Selective Speech Feature (베이시안 기법과 선택적 음성특징 추출을 융합한 음성 인식 성능 향상)

  • Hwang, Jae-Chun
    • Journal of the Korea Convergence Society
    • /
    • v.7 no.6
    • /
    • pp.7-11
    • /
    • 2016
  • Speech recognition systems operating in environments with white noise fail to recognize speech correctly when variable voices are mixed in. This paper therefore proposes a method that converges a Bayesian technique with selective speech feature extraction for effective recognition. Filter-bank frequency response coefficients are used for the selective extraction: considering all possible pairs of observed variables, the noise information in the speech signal is used to extract speech features selectively according to the energy ratio of the output. This provides noise elimination, and the recognition rate is improved by combining the result with Bayesian recognition. Experiments confirmed a vocabulary recognition rate 2.3% higher than that of the HMM and CHMM methods.
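A very loose illustration of selecting frames by filter-bank energy ratio; the band limits and the 0.7 threshold are assumptions, and the paper's Bayesian combination step is not shown:

```python
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000, mono=True)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40, n_fft=400, hop_length=160)

# Ratio of energy in a nominal speech band to the total energy, per frame.
ratio = mel[4:30].sum(axis=0) / (mel.sum(axis=0) + 1e-10)
selected = ratio > 0.7  # keep only frames dominated by speech-band energy

mfcc = librosa.feature.mfcc(S=librosa.power_to_db(mel), n_mfcc=13)[:, selected]
```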

Speech Recognition Error Compensation using MFCC and LPC Feature Extraction Method (MFCC와 LPC 특징 추출 방법을 이용한 음성 인식 오류 보정)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.11 no.6
    • /
    • pp.137-142
    • /
    • 2013
  • When inaccurate vocabulary is input to a speech recognition system, feature extraction can produce unrecognized results or recognize a similar phoneme instead. This paper therefore proposes a speech recognition error compensation method that uses phoneme similarity rates and reliability measures based on the characteristics of the phonemes. Phoneme similarity rates are obtained from phoneme models trained with the MFCC and LPC feature extraction methods and are measured together with a reliability rate; measuring the similarity and reliability of candidate phonemes minimizes unrecognized results, and error compensation is performed on utterances that are misrecognized during the recognition process. Applying the proposed system yielded a recognition rate of 98.3% and an error compensation rate of 95.5%.
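A sketch of extracting both feature types named above (MFCC and LPC) for one frame and scoring two phoneme templates with cosine similarity; the file names, model orders, and frame position are placeholders, and the paper's reliability measure is not reproduced:

```python
import numpy as np
import librosa

def frame_features(path, start=8000, win=400):
    """MFCC + LPC feature vector for one 25 ms frame of an utterance."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    frame = y[start:start + win]
    mfcc = librosa.feature.mfcc(y=frame, sr=sr, n_mfcc=13, n_fft=win, hop_length=win)[:, 0]
    lpc = librosa.lpc(frame, order=12)[1:]  # drop the leading 1.0
    return np.concatenate([mfcc, lpc])

a = frame_features("phoneme_a.wav")
b = frame_features("phoneme_b.wav")
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"phoneme similarity: {similarity:.3f}")
```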

A study on Gabor Filter Bank-based Feature Extraction Algorithm for Analysis of Acoustic data of Emergency Rescue (응급구조 음향데이터 분석을 위한 Gabor 필터뱅크 기반의 특징추출 알고리즘에 대한 연구)

  • Hwang, Inyoung;Chang, Joon-Hyuk
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1345-1347
    • /
    • 2015
  • To estimate the surroundings of an emergency caller from the ambient acoustic signal delivered to the call-taker when an emergency is reported, this paper introduces a Gabor filter-bank based feature vector extraction technique, which excels at modeling the spectral and temporal-modulation characteristics of sound, together with a deep neural network with strong classification performance. The proposed Gabor filter-bank feature extraction first separates speech from non-speech with a non-speech interval detector, extracts 23rd-order mel filter-bank coefficients from the non-speech intervals, applies Gabor filters to these coefficients to obtain feature vectors for estimating the surrounding situation, and estimates the caller's location information through a deep neural network trained on these features. The proposed technique was evaluated in various scenario environments and showed excellent classification performance.
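A sketch of Gabor-filter features computed over a 23-band log-mel spectrogram, loosely following the description above; the Gabor parameters are illustrative, and the non-speech detector and deep neural network stages are omitted:

```python
import numpy as np
import librosa
from scipy.signal import convolve2d

y, sr = librosa.load("background.wav", sr=16000, mono=True)
logmel = librosa.power_to_db(
    librosa.feature.melspectrogram(y=y, sr=sr, n_mels=23, n_fft=400, hop_length=160))

def gabor_kernel(theta, sigma=2.0, wavelength=4.0, size=9):
    """2-D Gabor kernel capturing spectro-temporal modulation at one orientation."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

# Small bank of orientations; each filtered map is pooled into one value per frame.
feature_matrix = np.stack(
    [convolve2d(logmel, gabor_kernel(t), mode="same").mean(axis=0)
     for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)])
# feature_matrix (n_orientations, n_frames) would be the input to the classifier.
```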