• Title/Abstract/Keyword: speech database

Search results: 330 items

DSP를 이용한 자동차 소음에 강인한 음성인식기 구현 (Implementation of a Robust Speech Recognizer in Noisy Car Environment Using a DSP)

  • 정익주
    • 음성과학 / Vol. 15, No. 2 / pp.67-77 / 2008
  • In this paper, we implemented a robust speech recognizer using the TMS320VC33 DSP. For this implementation, we built a speech and noise database suitable for the recognizer, which uses the spectral subtraction method for noise removal. The recognizer has an explicit structure in the sense that the speech signal is enhanced through spectral subtraction before endpoint detection and feature extraction. This makes the operation of the recognizer clear and helps build HMM models with minimal model mismatch. Since the recognizer was developed for controlling car facilities and for voice dialing, it has two recognition engines: a speaker-independent one for controlling car facilities and a speaker-dependent one for voice dialing. We adopted a continuous HMM for the former and a conventional DTW algorithm for the latter. Through various off-line recognition tests, we selected optimal values of several recognition parameters for the resource-limited embedded recognizer, which led to HMM models with three mixtures per state. The car-noise-added speech database is also enhanced by spectral subtraction before HMM parameter estimation, to reduce the model mismatch caused by the nonlinear distortion of spectral subtraction. The hardware module developed includes a microcontroller for the host interface, which handles the protocol between the DSP and a host.
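As a rough illustration of the enhancement front-end described in this abstract, the sketch below applies magnitude spectral subtraction to a noisy signal before any endpoint detection or feature extraction. The frame length, hop size, noise-estimation scheme, and spectral floor are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np


def spectral_subtraction(noisy, frame_len=400, hop=160, noise_frames=10, floor=0.002):
    """Magnitude spectral subtraction with a noise estimate from the leading frames."""
    window = np.hanning(frame_len)
    # Slice the signal into overlapping, windowed frames.
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i * hop:i * hop + frame_len] * window for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)

    # Assume the first few frames contain noise only and average their magnitude.
    noise_mag = mag[:noise_frames].mean(axis=0)

    # Subtract the noise estimate and apply a spectral floor to limit musical noise.
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)

    # Rebuild the enhanced signal by overlap-add with the original phase
    # (window-sum normalization is omitted to keep the sketch short).
    enhanced = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1)
    out = np.zeros(len(noisy))
    for i, frame in enumerate(enhanced):
        out[i * hop:i * hop + frame_len] += frame
    return out
```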


음성 인식용 데이터베이스 검증시스템을 위한 새로운 음성 인식 성능 지표 (A New Speech Quality Measure for Speech Database Verification System)

  • 지승은;김우일
    • 한국정보통신학회논문지 / Vol. 20, No. 3 / pp.464-470 / 2016
  • In this paper, we introduce the development of a verification system for speech recognition databases that uses speech characteristic indicators, and we describe the indicator extraction algorithm that is the core technology of the system. In a previous study, to obtain an effective speech recognition performance measure for this system, we generated a new measure by combining several speech characteristic indicators that are highly correlated with the word error rate (WER), the representative measure of speech recognition performance. The combined measure showed a higher correlation with WER in various noise environments than any individual indicator used alone, demonstrating that it is effective for predicting speech recognition performance. In the present experiments, the acoustic-model likelihood extracted from the secondary speech recognizer used in the previous combination is replaced with a Gaussian mixture model (GMM) acoustic-model likelihood, which reduces the system's dependence on another speech recognizer.
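A minimal sketch of the idea of combining per-utterance indicators into a single performance measure, with a GMM acoustic score standing in for a secondary recognizer's acoustic-model likelihood. The placeholder data, the choice of indicators (GMM score and SNR), and the linear combiner are assumptions; the paper's actual indicators and combination are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder data standing in for real MFCC frames and measured WERs (assumptions).
X_ref = rng.normal(size=(2000, 13))                           # reference frames for the GMM acoustic model
dev_feats = [rng.normal(size=(300, 13)) for _ in range(20)]   # per-utterance MFCC frames
dev_snr = rng.uniform(0, 30, size=20)                         # per-utterance SNR estimates (dB)
dev_wer = rng.uniform(0, 1, size=20)                          # measured word error rates

# GMM acoustic model replaces the likelihood of a secondary recognizer.
gmm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(X_ref)

def indicators(feats, snr_db):
    """Per-utterance indicators: mean GMM log-likelihood plus any other measure (here just SNR)."""
    return [gmm.score(feats), snr_db]

# Combine the indicators into one quality measure by regressing against WER on a dev set.
X_dev = np.array([indicators(f, s) for f, s in zip(dev_feats, dev_snr)])
combiner = LinearRegression().fit(X_dev, dev_wer)

# The combined measure predicts recognition performance for a new utterance.
print(combiner.predict([indicators(rng.normal(size=(300, 13)), 15.0)])[0])
```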

Unit Generation Based on Phrase Break Strength and Pruning for Corpus-Based Text-to-Speech

  • Kim, Sang-Hun;Lee, Young-Jik;Hirose, Keikichi
    • ETRI Journal / Vol. 23, No. 4 / pp.168-176 / 2001
  • This paper discusses two important issues in corpus-based synthesis: synthesis unit generation based on phrase break strength information, and pruning of redundant synthesis unit instances. First, a new sentence set for recording was designed to build an efficient synthesis database that reflects the characteristics of the Korean language. To obtain prosodic-context-sensitive units, we graded major prosodic phrases into five distinct levels according to pause length and then discriminated intra-word triphones using those levels. Using the synthesis units with phrase break strength information, synthetic speech was generated and evaluated subjectively. Second, a new pruning method based on weighted vector quantization (WVQ) was proposed to eliminate redundant synthesis unit instances from the synthesis database. WVQ takes the relative importance of each instance into account when clustering similar instances with the vector quantization (VQ) technique. The proposed method was compared with two conventional pruning methods through objective and subjective evaluations of synthetic speech quality: one that simply limits the maximum number of instances, and another based on normal VQ-based clustering. For the same reduction rate in the number of instances, the proposed method showed the best performance. The synthetic speech at a 45% reduction rate showed almost no perceptible degradation compared to synthetic speech without instance reduction.
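The sketch below illustrates the general idea behind weighted, clustering-based pruning: instances of one synthesis unit are clustered with per-instance weights, and only the instance closest to each centroid is kept. The k-means clustering, the frequency-based weights, and the cluster count are assumptions, not the WVQ formulation of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def prune_instances(instances, weights, keep):
    """Cluster unit instances with per-instance weights and keep one representative per cluster.

    instances: (n, d) feature vectors for all instances of one synthesis unit
    weights:   (n,) relative importance of each instance (e.g., usage frequency)
    keep:      number of instances to retain
    """
    km = KMeans(n_clusters=keep, n_init=10).fit(instances, sample_weight=weights)
    kept = []
    for c in range(keep):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue
        # Retain the member closest to the (weighted) cluster centroid.
        d = np.linalg.norm(instances[members] - km.cluster_centers_[c], axis=1)
        kept.append(int(members[np.argmin(d)]))
    return sorted(kept)


# Toy usage with placeholder features and hypothetical usage-frequency weights.
rng = np.random.default_rng(0)
inst = rng.normal(size=(200, 12))
freq = rng.integers(1, 50, size=200).astype(float)
print(prune_instances(inst, freq, keep=20))
```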


Noisy Speech Recognition Based on Noise-Adapted HMMs Using Speech Feature Compensation

  • Chung, Yong-Joo
    • 융합신호처리학회논문지 / Vol. 15, No. 2 / pp.37-41 / 2014
  • The vector Taylor series (VTS) based method usually employs clean-speech hidden Markov models (HMMs) when compensating speech feature vectors or adapting the parameters of trained HMMs. It is well known that noisy-speech HMMs trained with multi-condition training (MTR) or the multi-model-based speech recognition framework (MMSR) perform better than clean-speech HMMs in noisy speech recognition. In this paper, we propose a method for using such noise-adapted HMMs within the VTS-based speech feature compensation method. We derive a novel mathematical relation between the training and the test noisy-speech feature vectors in the log-spectrum domain, and the VTS is used to estimate the statistics of the test noisy speech. An iterative EM algorithm estimates the training noisy speech from the test noisy speech along with the noise parameters. The proposed method was applied to the noise-adapted HMMs trained by MTR and MMSR, and significantly reduced the relative word error rate in noisy speech recognition experiments on the Aurora 2 database.
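For reference, the standard additive-noise relation in the log-spectral domain and its first-order Taylor expansion, on which VTS-style compensation builds, can be sketched as follows. This is the generic textbook formulation, not the specific train/test relation derived in the paper.

```python
import numpy as np


def noisy_log_spectrum(x, n):
    """Standard additive-noise relation in the log-spectral domain: y = x + log(1 + exp(n - x))."""
    return x + np.log1p(np.exp(n - x))


def vts_first_order(x, n0):
    """First-order vector Taylor series expansion of y around a fixed noise estimate n0.

    Returns the expansion point y0 and the Jacobian dy/dn, which VTS-based methods use to
    propagate Gaussian statistics of the noise into the noisy-speech domain.
    """
    y0 = noisy_log_spectrum(x, n0)
    dy_dn = 1.0 / (1.0 + np.exp(x - n0))   # equals sigmoid(n0 - x)
    return y0, dy_dn


# Toy usage with placeholder log-spectral values.
x = np.array([2.0, 1.0, 0.5])    # clean-speech log spectrum
n0 = np.array([0.0, 0.5, 1.0])   # noise estimate
print(vts_first_order(x, n0))
```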

음성인식을 이용한 자동 호 분류 철도 예약 시스템 (A Train Ticket Reservation Aid System Using Automated Call Routing Technology Based on Speech Recognition)

  • 심유진;김재인;구명완
    • 대한음성학회지:말소리 / No. 52 / pp.161-169 / 2004
  • This paper describes automated call routing for a train ticket reservation aid system based on speech recognition. We focus on the task of automatically routing telephone calls based on a user's fluently spoken response, instead of touch-tone menus, in an interactive voice response system. A vector-based call routing algorithm is investigated, and a mapping table for key terms is proposed. The Korail database collected by KT is used for the call routing experiments, and we evaluate call-classification performance on transcribed text from this database. With small amounts of training data, an average call-routing error reduction of 14% is observed when the mapping table is used.
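A minimal sketch of vector-based routing over transcribed caller utterances, with a small key-term mapping table applied before vectorization. The destinations, key terms, and example texts are invented placeholders, not the Korail data or the paper's mapping table.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical key-term mapping table: surface variants -> canonical routing terms.
KEY_TERMS = {"ktx": "train", "mugunghwa": "train", "refund": "cancel"}

def normalize(text):
    return " ".join(KEY_TERMS.get(tok, tok) for tok in text.lower().split())

# Placeholder training utterances per routing destination.
route_texts = {
    "reserve": ["i want to book a ktx to busan", "reserve two seats tomorrow"],
    "cancel": ["please refund my ticket", "cancel my reservation"],
}

vectorizer = TfidfVectorizer()
vectorizer.fit([normalize(t) for texts in route_texts.values() for t in texts])

# One prototype vector per destination: the mean of its training utterance vectors.
routes = list(route_texts)
prototypes = np.vstack([
    np.asarray(vectorizer.transform([normalize(t) for t in route_texts[r]]).mean(axis=0))
    for r in routes
])

def route(utterance):
    v = vectorizer.transform([normalize(utterance)])
    return routes[int(np.argmax(cosine_similarity(v, prototypes)))]

print(route("please refund my reservation"))   # matches the "cancel" examples most closely
```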


심층신경망을 이용한 조음 예측 모형 개발 (Development of articulatory estimation model using deep neural network)

  • 유희조;양형원;강재구;조영선;황성하;홍연정;조예진;김서현;남호성
    • 말소리와 음성과학 / Vol. 8, No. 3 / pp.31-38 / 2016
  • Speech inversion (acoustic-to-articulatory mapping) is not a trivial problem, despite its importance, because of its highly non-linear and non-unique nature. This study investigated the performance of a deep neural network (DNN) compared with that of a traditional artificial neural network (ANN) on this problem. The Wisconsin X-ray Microbeam Database was employed, with the acoustic signal as model input and the articulatory pellet information as output. Results showed that the performance of the ANN deteriorated as the number of hidden layers increased, whereas the DNN showed lower and more stable RMS error even with up to 10 hidden layers, suggesting that the DNN can learn the acoustic-to-articulatory inverse mapping more efficiently than the ANN.
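A small sketch of the regression setup described above: acoustic frames are mapped to articulatory pellet coordinates with a multi-layer network whose depth can be varied. The features, layer sizes, and placeholder data are assumptions rather than the configuration used with the Wisconsin X-ray Microbeam Database.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: acoustic frames (e.g., MFCCs with context) and pellet x/y coordinates.
X = rng.normal(size=(2000, 39))        # acoustic input frames
Y = rng.normal(size=(2000, 16))        # 8 pellets x 2 coordinates
X_test, Y_test = rng.normal(size=(200, 39)), rng.normal(size=(200, 16))

scaler_x, scaler_y = StandardScaler().fit(X), StandardScaler().fit(Y)

# A deeper network; the depth can be varied to compare shallow (ANN-like) and deep settings.
model = MLPRegressor(hidden_layer_sizes=(256,) * 6, activation='relu',
                     max_iter=50, random_state=0)
model.fit(scaler_x.transform(X), scaler_y.transform(Y))

pred = scaler_y.inverse_transform(model.predict(scaler_x.transform(X_test)))
rmse = np.sqrt(np.mean((pred - Y_test) ** 2))   # RMS error in the original coordinate units
print(f"RMSE: {rmse:.3f}")
```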

한국인의 영어 음성코퍼스 설계 및 구축 (Design and Construction of Korean-Spoken English Corpus (K-SEC))

  • 이석재;이숙향;강석근;이용주
    • 대한음성학회:학술대회논문집 / Proceedings of the May 2003 Conference / pp.12-20 / 2003
  • K-SEC (Korean-Spoken English Corpus) is a speech database under construction by the authors of this paper. This article discusses the need for the K-SEC in various academic disciplines and industries, and introduces the characteristics of the K-SEC design as well as the catalogue and contents of the recorded database, exemplifying the considerations drawn from both Korean and English phonetics and phonology. The K-SEC marks the beginning of a parallel speech corpus, and we suggest that similar corpora be expanded for future advances in experimental phonetics and speech information technology.


제한된 학습 데이터를 사용하는 End-to-End 음성 인식 모델 (End-to-end speech recognition models using limited training data)

  • 김준우;정호영
    • 말소리와 음성과학 / Vol. 12, No. 4 / pp.63-71 / 2020
  • Speech recognition is one of the fields being most actively commercialized in deep learning and machine learning. However, most speech recognition systems currently under development work well only for adult male and female speakers, because most recognition models are trained on adult speech databases. As a result, they tend to have trouble recognizing the speech of the elderly, children, and speakers with dialects. To recognize elderly and child speech well, one could build big data or adapt an adult-oriented recognition engine with elderly and child data; in this paper, however, we propose a new end-to-end model consisting of a recurrent encoder based on acoustic data augmentation and a transformer decoder capable of linguistic prediction. The performance of the proposed method is evaluated on Korean elderly and child speech recognition with a limited dataset.
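The abstract mentions acoustic data augmentation as the basis of the proposed recurrent encoder. One common form of such augmentation is masking a log-mel spectrogram along time and frequency, sketched below; this particular scheme is an assumption and is not necessarily the augmentation used in the paper.

```python
import numpy as np


def mask_augment(spec, n_freq_masks=2, n_time_masks=2, max_f=8, max_t=20, rng=None):
    """Randomly zero out frequency bands and time spans of a (time, freq) log-mel spectrogram."""
    rng = rng or np.random.default_rng()
    out = spec.copy()
    T, F = out.shape
    for _ in range(n_freq_masks):
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(F - f, 1))
        out[:, f0:f0 + f] = 0.0
    for _ in range(n_time_masks):
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(T - t, 1))
        out[t0:t0 + t, :] = 0.0
    return out


# Each training utterance can be augmented several times to enlarge a limited dataset.
spec = np.random.default_rng(0).normal(size=(300, 80))   # placeholder (time, mel) spectrogram
augmented = [mask_augment(spec) for _ in range(3)]
```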

다층 퍼셉트론 신경회로망을 이용한 후두 질환 음성 식별 (Detection of Laryngeal Pathology in Speech Using Multilayer Perceptron Neural Networks)

  • 강현민;김유신;김형순
    • 대한음성학회:학술대회논문집 / Proceedings of the November 2002 Conference / pp.115-118 / 2002
  • Neural networks are known to have great discriminative power in pattern classification problems. In this paper, multilayer perceptron neural networks are employed to automatically detect laryngeal pathology in speech. New feature parameters are also introduced that reflect the periodicity of speech and its perturbation. These parameters, together with cepstral coefficients, are used as the input to the multilayer perceptron networks. In experiments using a Korean disordered-speech database, combining the new parameters with cepstral coefficients outperforms using cepstral coefficients alone.
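A compact sketch of the classification setup: per-utterance cepstral coefficients are concatenated with periodicity/perturbation measures and fed to a multilayer perceptron. Jitter and shimmer are used here only as generic stand-ins for the paper's new parameters, and the data are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder data: per-utterance cepstra plus perturbation measures and normal/pathological labels.
utterances = [(rng.normal(size=(200, 12)), rng.uniform(0, 2), rng.uniform(0, 5)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)

def build_features(cepstra, jitter, shimmer):
    """Concatenate averaged cepstral coefficients with the periodicity-perturbation measures."""
    return np.concatenate([cepstra.mean(axis=0), [jitter, shimmer]])

X = np.vstack([build_features(c, j, s) for c, j, s in utterances])
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```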


음성인식기를 이용한 발음오류 자동분류 결과 분석 (Performance Analysis of Automatic Mispronunciation Detection Using Speech Recognizer)

  • 강효원;이상필;배민영;이재강;권철홍
    • 대한음성학회:학술대회논문집 / Proceedings of the October 2003 Conference / pp.29-32 / 2003
  • This paper proposes an automatic pronunciation correction system that provides users with correction guidelines for each pronunciation error. For this purpose, we develop an HMM speech recognizer that automatically classifies the pronunciation errors made when Koreans speak a foreign language, and we collect a speech database of native and non-native speakers using phonetically balanced word lists. We analyze the types of mispronunciation found in automatic mispronunciation detection experiments with this speech recognizer.
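One simple way to categorize pronunciation errors in the spirit of this abstract is to align the recognizer's phone output against the canonical phone sequence and label substitutions, deletions, and insertions. The alignment below uses a generic sequence matcher and toy phone labels, which are assumptions rather than the paper's classification scheme.

```python
import difflib


def classify_errors(canonical, recognized):
    """Align the canonical and recognized phone sequences and list the mismatch types."""
    errors = []
    sm = difflib.SequenceMatcher(a=canonical, b=recognized)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == 'replace':
            errors.append(('substitution', canonical[i1:i2], recognized[j1:j2]))
        elif op == 'delete':
            errors.append(('deletion', canonical[i1:i2], []))
        elif op == 'insert':
            errors.append(('insertion', [], recognized[j1:j2]))
    return errors


# Example: an /r/-/l/ confusion shows up as a substitution.
print(classify_errors(['r', 'ai', 't'], ['l', 'ai', 't']))
```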
