• Title/Summary/Keyword: Speech Processing

A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter (Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구)

  • Lee, Jun-Hwan;Lee, Sang-Beom
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2238-2245 / 2000
  • In this paper, feature parameters extracted from Korean isolated-word speech using the wavelet transform are used as features for speech detection and recognition. For speech detection, the proposed features are shown to locate speech boundaries more accurately than the conventional method based on energy and zero-crossing rate. For recognition, MFCC feature parameters derived from the wavelet transform give results equal to those of MFCC feature parameters computed with the FFT. These results verify the usefulness of wavelet-transform feature parameters for speech analysis and recognition (an illustrative feature sketch follows this entry).

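Not from the paper itself: a minimal sketch of the kind of frame-level comparison the abstract describes, contrasting short-time energy/zero-crossing-rate features with wavelet sub-band energies. The wavelet family (db4), decomposition level, and frame sizes below are assumptions, and PyWavelets stands in for whatever transform implementation the authors used.

```python
import numpy as np
import pywt  # PyWavelets

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (sizes are assumptions)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def energy_zcr(frames):
    """Conventional endpoint-detection features: log energy and zero-crossing rate."""
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zcr

def wavelet_subband_energies(frames, wavelet="db4", level=3):
    """Per-frame log energies of the wavelet approximation/detail sub-bands."""
    feats = []
    for f in frames:
        coeffs = pywt.wavedec(f, wavelet, level=level)  # [cA_3, cD_3, cD_2, cD_1]
        feats.append([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])
    return np.array(feats)

# Thresholding either feature set marks speech frames; the paper reports that
# the wavelet-based features give more exact boundaries than energy/ZCR alone.
frames = frame_signal(np.random.randn(8000))
e, z = energy_zcr(frames)
w = wavelet_subband_energies(frames)
```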

Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information (청각 및 시각 정보를 이용한 강인한 음성 인식 시스템의 구현)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.719-725 / 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike usual speech recognition systems, our system uses the visual signal containing the speaker's lip movements together with the acoustic signal to obtain recognition performance that is robust against environmental noise. The procedures for acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly improves recognition performance in noisy conditions compared with acoustic-only recognition.
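The abstract does not give the integration rule, so the following is only an assumed late-fusion sketch: per-class scores from an acoustic-only and a visual-only (lip-movement) classifier are combined with a reliability weight that can be lowered for the acoustic stream as noise increases.

```python
import numpy as np

def fuse_streams(audio_logp, visual_logp, audio_weight=0.7):
    """Weighted combination of per-class log scores from the two streams.
    audio_weight in [0, 1]; a smaller value lets lip-reading dominate in noise."""
    fused = audio_weight * audio_logp + (1.0 - audio_weight) * visual_logp
    return int(np.argmax(fused)), fused

# Toy example with two candidate words: in heavy noise (weight 0.3) the
# visual stream overrides an unreliable acoustic score.
word, scores = fuse_streams(np.array([-2.8, -3.1]), np.array([-4.0, -1.5]), audio_weight=0.3)
```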

Pre-Processing for Performance Enhancement of Speech Recognition in Digital Communication Systems (디지털 통신 시스템에서의 음성 인식 성능 향상을 위한 전처리 기술)

  • Seo, Jin-Ho;Park, Ho-Chong
    • The Journal of the Acoustical Society of Korea / v.24 no.7 / pp.416-422 / 2005
  • Speech recognition in digital communication systems suffers very low performance because of the spectral distortion introduced by speech codecs. In this paper, the spectral distortion caused by speech codecs is analyzed, and a pre-processing method that compensates for it is proposed to enhance recognition performance. Three standard speech codecs, IS-127 EVRC, ITU-T G.729 CS-ACELP, and IS-96 QCELP, are considered for algorithm development and evaluation, and a single method that can be applied commonly to all three codecs is developed. The performance of the proposed method is evaluated on the three codecs; using speech features extracted from the compensated spectrum, the recognition rate improves by a maximum of 15.6% compared with the degraded speech features.
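A hedged sketch of the general idea rather than the paper's actual algorithm: estimate, from parallel clean/decoded data, a per-band gain that undoes the codec's average spectral distortion, and apply it to the magnitude spectrum before the usual MFCC front end. The function names and averaging scheme are assumptions.

```python
import numpy as np

def estimate_compensation_gain(clean_mag, coded_mag, eps=1e-10):
    """Offline: per-bin gain from average clean vs. codec-decoded spectra.
    clean_mag, coded_mag: (n_frames, n_bins) magnitude spectra of parallel data."""
    return (clean_mag.mean(axis=0) + eps) / (coded_mag.mean(axis=0) + eps)

def compensate(mag, gain):
    """Online pre-processing: flatten codec distortion before MFCC extraction."""
    return mag * gain[None, :]

# A single gain curve estimated over data from all three codecs would give the
# common pre-processing step the abstract mentions.
```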

Design and Implementation of a Speech Synthesis Engine and a Plug-in for Internet Web Page (인터넷 웹페이지의 음성합성을 위한 엔진 및 플러그-인 설계 및 구현)

  • Lee, Hee-Man;Kim, Ji-Yeong
    • The Transactions of the Korea Information Processing Society / v.7 no.2 / pp.461-469 / 2000
  • In this paper, the design and implementation of a Netscape plug-in and a speech synthesis engine that generate speech from the text of web pages are described. Speech is produced from a web page in the following steps: the speech synthesis plug-in is activated when Netscape finds the audio/xesp MIME data type embedded in the browsed web page; the HTML file referenced in the EMBED HTML tag is downloaded from the referenced URL and passed to the commander object located in the plug-in; the speech-synthesis-engine control tags and the text characters are extracted from the downloaded HTML document by the commander object; and the synthesized speech is generated by the speech synthesis engine. The speech synthesis engine interprets the command stream from the commander object and calls the member functions that process the speech segment data in the data banks. The commander object and the speech synthesis engine are designed as independent objects to enhance flexibility and portability (a hypothetical sketch of the commander role follows this entry).

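The following is only a hypothetical illustration of the commander object's role described above; the control-tag names and the use of Python's html.parser are assumptions, not the paper's interface. Control tags and plain text are pulled out of the downloaded HTML document and turned into a command stream for the synthesis engine.

```python
from html.parser import HTMLParser

class CommanderSketch(HTMLParser):
    """Turns a downloaded HTML document into (command, payload) pairs: assumed
    control tags become engine commands, visible text becomes SAY payloads."""
    CONTROL_TAGS = {"rate", "pitch", "pause"}  # hypothetical control-tag names

    def __init__(self):
        super().__init__()
        self.command_stream = []

    def handle_starttag(self, tag, attrs):
        if tag in self.CONTROL_TAGS:
            self.command_stream.append((tag.upper(), dict(attrs)))

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.command_stream.append(("SAY", text))

commander = CommanderSketch()
commander.feed("<p>Hello <rate value='slow'></rate>world</p>")
# commander.command_stream ->
# [('SAY', 'Hello'), ('RATE', {'value': 'slow'}), ('SAY', 'world')]
# A synthesis engine would walk this stream and call the matching member functions.
```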

Dimension Reduction Method of Speech Feature Vector for Real-Time Adaptation of Voice Activity Detection (음성구간 검출기의 실시간 적응화를 위한 음성 특징벡터의 차원 축소 방법)

  • Park Jin-Young;Lee Kwang-Seok;Hur Kang-In
    • Journal of the Institute of Convergence Signal Processing / v.7 no.3 / pp.116-121 / 2006
  • In this paper, we propose a method for reducing the dimensionality of multi-dimensional speech feature vectors for real-time adaptation in various noisy environments. The method reduces the dimensionality non-linearly by mapping each feature vector to the likelihoods of a speech model and a noise model, and the LRT (Likelihood Ratio Test) is used to classify speech and non-speech. The detection results are similar to those obtained with the full multi-dimensional speech feature vectors, and speech recognition on the detected speech segments likewise performs similarly to the multi-dimensional (10th-order MFCC, Mel-Frequency Cepstral Coefficient) features (a minimal LRT sketch follows this entry).

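A minimal sketch of the likelihood-ratio idea in the abstract, assuming single Gaussian models for the speech and noise classes (the paper's actual models may differ): the multi-dimensional MFCC vector is collapsed to one scalar, its log-likelihood ratio, which is then thresholded.

```python
import numpy as np
from scipy.stats import multivariate_normal

def make_llr(speech_mean, speech_cov, noise_mean, noise_cov):
    """Return a function mapping an MFCC frame to one scalar (the LRT statistic)."""
    speech_pdf = multivariate_normal(speech_mean, speech_cov)
    noise_pdf = multivariate_normal(noise_mean, noise_cov)
    return lambda frame: speech_pdf.logpdf(frame) - noise_pdf.logpdf(frame)

def is_speech(llr_value, threshold=0.0):
    """LRT decision: speech if the ratio exceeds the threshold."""
    return llr_value > threshold

# Toy example; the 10 dimensions match the abstract's 10th-order MFCC.
llr = make_llr(np.ones(10), np.eye(10), np.zeros(10), np.eye(10))
print(is_speech(llr(np.full(10, 0.8))))
```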

On the Role of Prefabricated Speech in L2 Acquisition Process: An Information Processing Approach

  • Boo, Kyung-Soon
    • Annual Conference on Human and Language Technology / 1991.10a / pp.196-208 / 1991
  • This study focused on the role of prefabricated speech (routines and patterns) in the L2 acquisition process. The data for this study consisted of spontaneous speech samples and various observational records of three Korean children learning English as an L2 in a nursery school. The specific questions addressed here were: (1) What routines, patterns, and creative constructions did the children use? (2) What was the general trend in the three children's use of routines, patterns, and creative constructions over time? The data were collected over a period of one school year by observing the children in their school. The findings were discussed from the perspective of human information processing. This study found that prefabricated speech played a significant role in the three children's L2 acquisition. The automatic processing of prefabricated speech appeared to reduce the burden on the children's information processing systems, making the saved resources available for other aspects of language development. The children's language development was also evident in their increased use of patterns: they moved from heavy dependence on wholly unanalyzed routines to greater use of partly unanalyzed patterns. This increased control was the result of an increase in procedural knowledge.

A Study on the Performance of TDNN-Based Speech Recognizer with Network Parameters

  • Nam, Hojung;Kwon, Y.;Paek, Inchan;Lee, K.S.;Yang, Sung-Il
    • The Journal of the Acoustical Society of Korea / v.16 no.2E / pp.32-37 / 1997
  • This paper proposes an isolated-word recognition method for Korean digits using a TDNN (Time Delay Neural Network), which is able to capture time-varying properties of speech. We also investigate the effect of the TDNN's network parameters: the number of hidden layers and the time delays. The TDNNs in our experiments have two or three hidden layers and several time-delay settings. The experimental results show that the TDNN structure with two hidden layers gives good recognition results for Korean digits. Misrecognition caused by the time delays can be reduced by changing the TDNN structure, and misrecognition unrelated to the time delays can be reduced by changing the input patterns (a minimal TDNN sketch follows this entry).

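A minimal sketch of a TDNN expressed as 1-D convolutions over the time axis; the layer widths, kernel spans, and the use of PyTorch are assumptions, not the paper's configuration. It shows two hidden layers, each with a fixed time-delay span, followed by temporal integration and a 10-way digit output.

```python
import torch
import torch.nn as nn

class SmallTDNN(nn.Module):
    def __init__(self, n_feats=16, n_classes=10):
        super().__init__()
        self.hidden1 = nn.Conv1d(n_feats, 8, kernel_size=3)  # time delay of 2 frames
        self.hidden2 = nn.Conv1d(8, 4, kernel_size=5)         # time delay of 4 frames
        self.out = nn.Linear(4, n_classes)

    def forward(self, x):              # x: (batch, n_feats, n_frames)
        h = torch.sigmoid(self.hidden1(x))
        h = torch.sigmoid(self.hidden2(h))
        h = h.mean(dim=2)              # integrate evidence over the remaining frames
        return self.out(h)

logits = SmallTDNN()(torch.randn(1, 16, 40))  # one utterance of 40 feature frames
```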

Development of Embedded Fast/Light Phoneme Recognizer for Distributed Speech Recognition (분산음성인식을 위한 내장형 고속/경량 음소인식기 개발)

  • Kim, Seung-Hi;Hwang, Kyu-Woong;Jeon, Hyun-Bae;Jeong, Hoon;Park, Jun
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.395-396 / 2007
  • The ETRI Speech/Language Information Research Center is developing a fast, low-memory phoneme recognizer for distributed speech recognition. Fixed information such as the acoustic model, language model, and search network is built in binary form before the recognizer runs and stored as ROM data, which greatly reduces the amount of RAM actually required. In the tied-state triphone model, only unique HMMs are used, which greatly reduces recognition time and memory usage. The monophone recognizer uses 179 KB of RAM, while the triphone recognizer shows 435 KB of RAM usage and an RTF (Real Time Factor) of 0.02 (a sketch of the general ROM-image technique follows this entry).

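Not ETRI's code; only a sketch of the general technique the abstract describes: fixed model tables are packed into one binary image offline and memory-mapped read-only at run time, so they can live in ROM/flash instead of consuming RAM. The file layout and array names are assumptions.

```python
import numpy as np

def build_model_image(path, acoustic_params, transition_probs):
    """Offline: serialize the fixed model tables into a single binary image."""
    with open(path, "wb") as f:
        np.asarray(acoustic_params, dtype=np.float32).tofile(f)
        np.asarray(transition_probs, dtype=np.float32).tofile(f)

def load_model_image(path, n_acoustic, n_trans):
    """Run time: map the image read-only instead of copying it into RAM."""
    image = np.memmap(path, dtype=np.float32, mode="r")
    return image[:n_acoustic], image[n_acoustic:n_acoustic + n_trans]

build_model_image("model.bin", np.random.rand(1000), np.random.rand(200))
acoustic, trans = load_model_image("model.bin", 1000, 200)
```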

Noise reduction system using time-delay neural network (시간지연 신경회로망을 이용한 잡음제거 시스템)

  • Choi Jae-Seung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.3 s.303 / pp.121-128 / 2005
  • In research on speech signals, neural networks are mainly used for category classification in speech recognition and are also applied to signal processing. Accordingly, this paper proposes a noise reduction system using a time-delay neural network that implements a mapping from the space of noise-degraded speech signals to the space of clean speech signals. Using this noise reduction system, which restores the amplitude component of the fast Fourier transform, the method is confirmed to be effective for speech degraded not only by white noise but also by colored noise.
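A minimal sketch of the amplitude-restoration pipeline the abstract describes, with the trained time-delay network replaced by a placeholder mapping (the placeholder and frame length are assumptions): only the FFT magnitude is mapped from noisy toward clean, and the noisy phase is reused for resynthesis.

```python
import numpy as np

def restore_frame(noisy_frame, amplitude_mapper):
    """Map the magnitude spectrum of one frame and resynthesize with the noisy phase."""
    spec = np.fft.rfft(noisy_frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = amplitude_mapper(mag)  # stands in for the trained time-delay network
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy_frame))

# Placeholder mapper: simple spectral-floor subtraction instead of the network.
denoised = restore_frame(np.random.randn(256), lambda m: np.maximum(m - 0.1, 0.0))
```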

A New Pruning Method for Synthesis Database Reduction Using Weighted Vector Quantization

  • Kim, Sanghun;Lee, Youngjik;Keikichi Hirose
    • The Journal of the Acoustical Society of Korea / v.20 no.4E / pp.31-38 / 2001
  • A large-scale synthesis database for a unit-selection-based synthesis method usually retains redundant synthesis unit instances that contribute nothing to synthetic speech quality. In this paper, to eliminate such instances from the synthesis database, we propose a new pruning method called weighted vector quantization (WVQ). WVQ reflects the relative importance of each synthesis unit instance when clustering similar instances with the vector quantization (VQ) technique. The proposed method is compared with two conventional pruning methods through objective and subjective evaluations of synthetic speech quality: one that simply limits the maximum number of instances, and one based on normal VQ-based clustering. The proposed method shows the best performance at reduction rates below 50%; above 50%, the synthetic speech quality is perceptibly but not seriously degraded. Using the proposed method, the synthesis database can be efficiently reduced without serious degradation of synthetic speech quality (a minimal weighted-VQ sketch follows this entry).

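A minimal sketch of weighted-VQ pruning under assumed details (how instance weights are obtained, and keeping exactly one instance per cluster, are assumptions rather than the paper's exact procedure): instances are clustered with importance-weighted centroids and only the most important member of each cluster is kept.

```python
import numpy as np

def wvq_prune(features, weights, n_clusters=8, n_iter=20, seed=0):
    """features: (n_instances, dim) unit features; weights: (n_instances,) importance."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(n_iter):
        dists = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        for k in range(n_clusters):
            members = assign == k
            if members.any():
                w = weights[members][:, None]
                centroids[k] = (w * features[members]).sum(axis=0) / w.sum()
    # keep the highest-weight instance per cluster; prune the rest
    keep = [np.flatnonzero(assign == k)[np.argmax(weights[assign == k])]
            for k in range(n_clusters) if (assign == k).any()]
    return sorted(keep)

kept_indices = wvq_prune(np.random.rand(100, 12), np.random.rand(100))
```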