• Title/Abstract/Keywords: Speech feature

Search results: 712 (processing time: 0.023 s)

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • 전기전자학회논문지
    • /
    • Vol. 17, No. 3
    • /
    • pp.241-247
    • /
    • 2013
  • This paper presents the design of a System on Programmable Chip (SoPC) based on a Field Programmable Gate Array (FPGA) for speech recognition, in which Mel-Frequency Cepstral Coefficients (MFCC) are used for speech feature extraction and Vector Quantization (VQ) for recognition. The implementation of the speech recognition system proceeds in the following steps: feature extraction, codebook training, and recognition. In the feature extraction step, the input voice data are transformed into spectral components and the main features are extracted using the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components; Vector Quantization is applied here. In our experiment, Altera's DE2 board with a Cyclone II FPGA is used to implement a recognition system that can recognize 64 words. The execution speed of the blocks in the speech recognition system is surveyed by counting the clock cycles spent executing each block. Recognition accuracies are also measured under different system parameters. These results on execution speed and recognition accuracy can help the designer choose the best configuration for speech recognition on an SoPC.
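As a rough software illustration of the recognition step described in this abstract (not the authors' FPGA implementation), a VQ match can be sketched as follows: each word has a codebook of centroid feature vectors, and the word whose codebook yields the lowest average distortion over the input frames wins. The function names and toy data below are illustrative.

```python
import numpy as np

def vq_distortion(frames, codebook):
    """Average distortion of feature frames against one codebook:
    each frame is matched to its nearest codeword (squared Euclidean)."""
    # distances: (n_frames, n_codewords)
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def recognize(frames, codebooks):
    """Return the word whose codebook best explains the input frames."""
    return min(codebooks, key=lambda w: vq_distortion(frames, codebooks[w]))

# Toy example: two "words" with 2-D feature codebooks.
codebooks = {
    "yes": np.array([[0.0, 0.0], [1.0, 0.0]]),
    "no":  np.array([[5.0, 5.0], [6.0, 5.0]]),
}
frames = np.array([[0.1, 0.1], [0.9, -0.1]])   # close to "yes" codewords
print(recognize(frames, codebooks))            # → yes
```

In the paper the same matching is done in FPGA logic; the numpy version only shows the arithmetic being parallelized.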

The Autonomy of Tenseness as a Feature

  • Yun, Il-Sung
    • 음성과학
    • /
    • Vol. 10, No. 3
    • /
    • pp.117-131
    • /
    • 2003
  • The feature tenseness has long been a controversial issue. Many scholars have been reluctant to accept tenseness as a distinctive feature, due to the absence of consistent and objective phonetic evidence for it, especially in English. Instead, they claim that voicing is the primary feature, and some even say that no other feature can be independent of voicing. However, the voicing feature does not explain everything, and significant aerodynamic and physiological correlates of tenseness have been reported in English as well as in some other languages that have the tense/lax distinction in their obstruents. It is suggested that voicing is a simple and direct feature while tenseness is a complex and indirect one, and that its autonomy as a distinctive feature should be acknowledged. This will enable us to describe phonetic reality more properly across languages as well as in individual languages.


유ㆍ무성음 척도를 포함한 재구성 특징 파라미터의 음성 인식 성능평가 (Performance Evaluation of Speech Recognition Using the Reconstructed Feature Parameter with Voiced-Unvoiced Measure)

  • 이광석;한학용;고시영;허강인
    • 한국정보통신학회논문지
    • /
    • Vol. 7, No. 2
    • /
    • pp.177-182
    • /
    • 2003
  • To make speech recognition robust to similar-sounding speech, this study adds voiced/unvoiced measures of speech to the feature parameters and performs recognition at the syllable and phoneme levels. For this purpose, we propose measures of the degree of voicing based on the spectral information of the Harmonic Product Spectrum (HPS), an algorithm used for pitch detection. The proposed measures are the kurtosis of the HPS, the number of peaks, and a height measure. Feature parameters are reconstructed to include these measure values, and to verify the effectiveness of the proposed features, speech recognition experiments were conducted on a CVC-type similar-syllable database in comparison with conventional feature parameters.
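The Harmonic Product Spectrum used here multiplies progressively downsampled copies of the magnitude spectrum so that harmonics reinforce one another at the fundamental. A minimal numpy sketch of the HPS itself is below; the kurtosis, peak-count, and height measures built on top of it are this paper's contribution and are not reproduced.

```python
import numpy as np

def harmonic_product_spectrum(signal, n_harmonics=3):
    """Magnitude spectrum times its 2x..Nx downsampled copies:
    the harmonics line up at the fundamental-frequency bin."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    hps = spectrum.copy()
    for k in range(2, n_harmonics + 1):
        down = spectrum[::k]                 # k-times downsampled spectrum
        hps[:len(down)] *= down
    return hps

# Toy example: a 200 Hz tone with harmonics, sampled at 8 kHz.
sr, n = 8000, 2048
t = np.arange(n) / sr
x = sum(np.sin(2 * np.pi * 200 * h * t) / h for h in (1, 2, 3))
hps = harmonic_product_spectrum(x)
f0_bin = int(np.argmax(hps[1:])) + 1         # skip the DC bin
f0 = f0_bin * sr / n
print(round(f0))                             # close to 200 Hz
```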

FPGA-Based Hardware Accelerator for Feature Extraction in Automatic Speech Recognition

  • Choo, Chang;Chang, Young-Uk;Moon, Il-Young
    • Journal of information and communication convergence engineering
    • /
    • Vol. 13, No. 3
    • /
    • pp.145-151
    • /
    • 2015
  • We describe in this paper a hardware-based scheme for improving the speed of a real-time automatic speech recognition (ASR) system by designing a parallel feature extraction algorithm on a Field-Programmable Gate Array (FPGA). A computationally intensive block in the algorithm is identified and implemented in hardware logic on the FPGA. One such block is the mel-frequency cepstral coefficient (MFCC) algorithm used in the feature extraction process. We demonstrate that the FPGA platform can perform feature extraction for the speech recognition system more efficiently than a general-purpose CPU, including an ARM processor. The Xilinx Zynq-7000 System on Chip (SoC) platform is used for the MFCC implementation. From the implementation described in this paper, we confirmed that the FPGA platform is approximately 500× faster than a sequential CPU implementation and 60× faster than a sequential ARM implementation. We thus verified that a parallelized and optimized MFCC architecture on the FPGA platform can significantly improve the execution time of an ASR system compared to the CPU and ARM platforms.

숨은마코프모형을 이용하는 음성 끝점 검출을 위한 이산 특징벡터 (A Discrete Feature Vector for Endpoint Detection of Speech with Hidden Markov Model)

  • 이재기;오창혁
    • 응용통계연구
    • /
    • Vol. 21, No. 6
    • /
    • pp.959-967
    • /
    • 2008
  • The aim of this study is to propose a discrete feature vector for speech endpoint detection with hidden Markov models that is robust in noisy environments and has a low computational load, and to demonstrate its properties empirically. The proposed feature is the gradient representing the rate of change of the energy of the one-dimensional sound signal, discretized into three values to reduce the load of HMM-related computations. In endpoint-detection experiments at various noise levels, the proposed feature vector proved robust even in noisy environments.
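A feature of this kind can be sketched in a few lines: compute per-frame log energies, take the first difference as the gradient, and quantize it into three symbols. The frame length and threshold below are illustrative choices, not the paper's values.

```python
import numpy as np

def discrete_energy_gradient(signal, frame_len=160, threshold=0.1):
    """Frame energies -> first difference (gradient) -> 3-level symbols:
    0 = falling, 1 = flat, 2 = rising (threshold is illustrative)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    gradient = np.diff(log_energy)
    symbols = np.ones(len(gradient), dtype=int)       # flat by default
    symbols[gradient > threshold] = 2                 # energy rising
    symbols[gradient < -threshold] = 0                # energy falling
    return symbols

# Toy signal: silence, then a louder "speech" burst, then silence.
rng = np.random.default_rng(0)
sig = np.concatenate([0.01 * rng.standard_normal(800),
                      1.00 * rng.standard_normal(800),
                      0.01 * rng.standard_normal(800)])
print(discrete_energy_gradient(sig))
```

A discrete HMM over these three symbols then needs only a 3-column emission table per state, which is where the computational saving comes from.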

스펙트럼 분석과 신경망을 이용한 음성/음악 분류 (Speech/Music Discrimination Using Spectrum Analysis and Neural Network)

  • 금지수;임성길;이현수
    • 한국음향학회지
    • /
    • Vol. 26, No. 5
    • /
    • pp.207-213
    • /
    • 2007
  • This study proposes an effective speech/music discrimination method using spectrum analysis and a neural network. The proposed method analyzes the spectrum to extract the MSDF (Maximum Spectral Duration Feature), a durational feature from spectral peak tracks, and combines it with the conventional feature parameter MFSC (Mel-Frequency Spectral Coefficients) as the features of a speech/music classifier. A neural network is used as the classifier, and to evaluate the proposed method, various evaluations were performed according to the selection and amount of training patterns and the network configuration. The speech/music discrimination results showed improved performance over existing methods and stability with respect to training-pattern selection and model configuration. Using MSDF and MFSC as feature parameters with more than 50 seconds of training patterns, classification rates of 94.97% for speech and 92.38% for music were obtained, an improvement of 1.25% for speech and 1.69% for music over using MFSC alone.
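MFSC, the baseline feature combined with MSDF above, is the log energy of a mel-spaced triangular filterbank applied to the power spectrum (essentially MFCC without the final DCT). Below is a minimal numpy sketch under common default choices; the peak-track-based MSDF itself is this paper's contribution and is not reproduced.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfsc(frame, sr=16000, n_filters=20):
    """Log mel filterbank energies of one frame (Hamming-windowed)."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # Triangular filters with centers evenly spaced on the mel scale.
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2),
                                  n_filters + 2))
    feats = np.empty(n_filters)
    for i in range(n_filters):
        lo, center, hi = edges[i], edges[i + 1], edges[i + 2]
        up = (freqs - lo) / (center - lo)      # rising slope of triangle
        down = (hi - freqs) / (hi - center)    # falling slope of triangle
        tri = np.clip(np.minimum(up, down), 0.0, 1.0)
        feats[i] = np.log(np.dot(tri, power) + 1e-10)
    return feats

frame = np.sin(2 * np.pi * 440 * np.arange(512) / 16000)
print(mfsc(frame).shape)   # (20,)
```

The classifier in the paper simply concatenates such MFSC vectors with the MSDF value before feeding the neural network.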

Knowledge-driven speech features for detection of Korean-speaking children with autism spectrum disorder

  • Seonwoo Lee;Eun Jung Yeo;Sunhee Kim;Minhwa Chung
    • 말소리와 음성과학
    • /
    • Vol. 15, No. 2
    • /
    • pp.53-59
    • /
    • 2023
  • Detection of children with autism spectrum disorder (ASD) based on speech has relied on predefined feature sets because of their ease of use and their speech-analysis capabilities. However, clinical impressions may not be adequately captured, due to the broad range and large number of features included. This paper demonstrates that knowledge-driven speech features (KDSFs) specifically tailored to the speech traits of ASD are more effective and efficient for distinguishing the speech of ASD children from that of children with typical development (TD) than a predefined feature set, the extended Geneva Minimalistic Acoustic Standard Parameter Set (eGeMAPS). The KDSFs encompass various speech characteristics related to frequency, voice quality, speech rate, and spectral features that have been identified as corresponding to distinctive attributes of ASD speech. The speech dataset used for the experiments consists of 63 ASD children and 9 TD children. To alleviate the imbalance in the number of training utterances, a data augmentation technique was applied to the TD children's utterances. The support vector machine (SVM) classifier trained with the KDSFs achieved an accuracy of 91.25%, surpassing the 88.08% obtained using the predefined set. This result underscores the importance of incorporating domain knowledge in the development of speech technologies for individuals with disorders.
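The class imbalance mentioned above (63 ASD vs. 9 TD children) is a common obstacle in clinical speech datasets. One simple augmentation scheme, shown here purely as an illustration (the abstract does not specify the exact technique used), replicates minority-class feature vectors with small Gaussian jitter until the classes are balanced:

```python
import numpy as np

def augment_minority(features, labels, rng=None):
    """Balance classes by adding noise-jittered copies of minority samples."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    minority = classes[np.argmin(counts)]
    majority_count = counts.max()
    pool = features[labels == minority]
    need = majority_count - len(pool)
    picks = pool[rng.integers(0, len(pool), size=need)]
    # Jitter scaled to 1% of each feature's standard deviation.
    jitter = 0.01 * pool.std(axis=0) * rng.standard_normal(picks.shape)
    new_feats = np.vstack([features, picks + jitter])
    new_labels = np.concatenate([labels, np.full(need, minority)])
    return new_feats, new_labels

# Toy feature matrix mirroring the 63-vs-9 split in the abstract.
X = np.vstack([np.random.randn(63, 4) + 1, np.random.randn(9, 4) - 1])
y = np.array(["ASD"] * 63 + ["TD"] * 9)
Xa, ya = augment_minority(X, y)
print(np.unique(ya, return_counts=True))   # both classes now 63
```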

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin;Kim, Sang-Hun
    • ETRI Journal
    • /
    • Vol. 28, No. 6
    • /
    • pp.807-810
    • /
    • 2006
  • This letter presents a prediction model for sentence-final intonations for Korean conversational-style text-to-speech systems in which we introduce the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build our prediction model using a tree structured classification algorithm. In order to show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment is performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality makes a higher contribution to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves up to 25% when the feature of modality is introduced.


다단계 인식기반의 POI 인식기 개발 (Multi-stage Recognition for POI)

  • 전형배;황규웅;정훈;김승희;박준;이윤근
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집
    • /
    • pp.131-134
    • /
    • 2007
  • We propose a multi-stage recognizer architecture that reduces the computational load and speeds up recognition. To improve the performance of the baseline multi-stage recognizer, we introduce a new feature: we use a confidence vector for each phone segment instead of the best phoneme sequence. The multi-stage recognizer with the new feature achieves better n-best performance and greater robustness.


ON IMPROVING THE PERFORMANCE OF CODED SPECTRAL PARAMETERS FOR SPEECH RECOGNITION

  • Choi, Seung-Ho;Kim, Hong-Kook;Lee, Hwang-Soo
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1998년도 제15회 음성통신 및 신호처리 워크샵(KSCSP 98 15권1호)
    • /
    • pp.250-253
    • /
    • 1998
  • In digital communication networks, speech recognition systems conventionally reconstruct speech and then extract feature parameters. In this paper, we consider a useful alternative that incorporates speech coding parameters directly into the speech recognizer. Most speech coders employed in such networks represent spectral parameters as line spectral pairs (LSPs). In order to improve the recognition performance of an LSP-based speech recognizer, we introduce two approaches: one devises weighted distance measures for LSPs, and the other transforms LSPs into a new feature set, named a pseudo-cepstrum (PCEP). Experiments on speaker-independent connected-digit recognition showed that the weighted distance measures significantly improved recognition accuracy over the unweighted LSP distance. We obtained further improvement by using PCEP. Compared to conventional methods employing mel-frequency cepstral coefficients, the proposed methods achieved higher recognition accuracies.
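The two ideas can be sketched numerically. The pseudo-cepstrum formula below is one formulation reported in the LSP literature (c_n = (1/n) Σ_i cos(n·ω_i) over the LSP frequencies ω_i in radians), and the weighting scheme, emphasizing closely spaced LSPs where spectral peaks sit, is an illustrative assumption rather than the paper's exact measure:

```python
import numpy as np

def pseudo_cepstrum(lsp, n_coeffs=12):
    """Pseudo-cepstrum from LSP frequencies (radians), one common
    formulation: c_n = (1/n) * sum_i cos(n * w_i)."""
    n = np.arange(1, n_coeffs + 1)[:, None]        # (n_coeffs, 1)
    return np.cos(n * lsp[None, :]).sum(axis=1) / n[:, 0]

def weighted_lsp_distance(a, b):
    """Weighted squared distance between two LSP vectors; the weight
    emphasizes closely spaced LSPs (an illustrative choice)."""
    gaps = np.minimum(np.diff(a, prepend=0.0),
                      np.append(np.diff(a), np.pi - a[-1]))
    w = 1.0 / np.maximum(gaps, 1e-3)
    return float(np.sum(w * (a - b) ** 2))

lsp_a = np.linspace(0.2, 3.0, 10)      # toy 10th-order LSPs in (0, pi)
lsp_b = lsp_a + 0.01
print(pseudo_cepstrum(lsp_a).shape)    # (12,)
print(weighted_lsp_distance(lsp_a, lsp_b) > 0)
```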
