• Title/Summary/Keyword: Speaker independent


A Study on a Model Parameter Compensation Method for Noise-Robust Speech Recognition (잡음환경에서의 음성인식을 위한 모델 파라미터 변환 방식에 관한 연구)

  • Chang, Yuk-Hyeun;Chung, Yong-Joo;Park, Sung-Hyun;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.5
    • /
    • pp.112-121
    • /
    • 1997
  • In this paper, we study a model parameter compensation method for noise-robust speech recognition. Model parameter compensation is performed on a sentence-by-sentence basis and no other information is used. Parallel model combination (PMC), well known as a model parameter compensation algorithm, is implemented and used as a reference for performance comparison. We also propose a modified PMC method which tunes the model parameters with an association factor that controls the average variability of the Gaussian mixtures and the variability of a single Gaussian mixture per state for more robust modeling. We obtain a re-estimation solution of the environmental variables based on the expectation-maximization (EM) algorithm in the cepstral domain. To evaluate the performance of the model compensation methods, we perform experiments on speaker-independent isolated word recognition. The noise sources used are white Gaussian noise and driving-car noise. To obtain corrupted speech, we added noise to clean speech at various signal-to-noise ratios (SNR). We use a noise mean and variance modeled from 3 frames of noise data. Experimental results show that the vector Taylor series (VTS) approach is superior to the other methods. The zero-order VTS approach is similar to the modified PMC method in that it adapts the mean vector only, but its recognition rate is higher than that of PMC and of the modified PMC method based on the log-normal approximation.
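  • For readers unfamiliar with PMC, the sketch below illustrates the standard log-normal mean-combination step in the cepstral domain: clean-speech and noise cepstral means are mapped to the linear spectral domain, added, and mapped back. This is an illustrative sketch only, not the paper's implementation; the association factor and variance re-estimation described above are omitted, and the function name and dimensions are hypothetical.

```python
import numpy as np
from scipy.fftpack import dct, idct

def pmc_lognormal_mean(clean_cep_mean, noise_cep_mean, gain=1.0):
    """Log-normal PMC (mean only): combine a clean-speech cepstral mean
    with a noise cepstral mean via the linear spectral domain.
    Simplified sketch; variances and the association factor are omitted."""
    # cepstrum -> log spectrum (inverse DCT) -> linear spectrum
    clean_lin = np.exp(idct(clean_cep_mean, norm='ortho'))
    noise_lin = np.exp(idct(noise_cep_mean, norm='ortho'))
    # additive combination of speech and noise in the linear domain
    noisy_lin = gain * clean_lin + noise_lin
    # back to the log spectrum, then to the cepstrum (DCT)
    return dct(np.log(noisy_lin), norm='ortho')

# toy usage with hypothetical 13-dimensional cepstral means
clean_mu = np.random.randn(13)
noise_mu = 0.1 * np.random.randn(13)
noisy_mu = pmc_lognormal_mean(clean_mu, noise_mu)
```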


A Content-based Video Rate-control Algorithm Interfaced to Human-eye (인간과 결합한 내용기반 동영상 율제어)

  • 황재정;진경식;황치규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.3C
    • /
    • pp.307-314
    • /
    • 2003
  • In a general multiple-video-object coder, objects of greater interest, such as a speaker or a moving object, are consistently coded with higher priority. Since the priority of each object may not be fixed over the whole sequence and may vary from frame to frame, it must be adjusted within each frame. In this paper, we analyze the independent rate-control algorithm and the global algorithm, in which the QP value is controlled by static parameters: object importance or priority, target PSNR, and weighted distortion. The priority among the static parameters is analyzed and adjusted into dynamic parameters according to the visual interest or importance obtained through a camera interface. Target PSNR and weighted distortion are derived proportionally from magnitude, motion, and distortion. We apply these parameters to the weighted-distortion control and the priority-based control, resulting in an efficient bit-rate distribution. As a result, fewer bits are allocated to video objects with less visual importance and more bits to those with higher visual importance. The time needed for the visual quality to stabilize is reduced to less than 15 frames of the coded sequence. In terms of PSNR, the proposed scheme shows a quality gain of more than 2 dB over the conventional schemes. Thus the coding scheme interfaced to the human eye proves to be an efficient video coder for handling multiple video objects.
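  • As an illustration of the kind of priority-based bit distribution discussed above, the sketch below allocates a frame's bit budget across video objects in proportion to a dynamic priority built from importance, motion, and size. This is an assumed simplification, not the paper's scheme: the actual method derives target PSNR and weighted-distortion terms, and the field names and weighting formula here are hypothetical.

```python
def allocate_object_bits(frame_budget, objects):
    """Distribute a frame's bit budget across video objects in proportion
    to a dynamic priority combining visual importance, motion, and size.
    Illustrative sketch only; target PSNR and weighted distortion are not
    modeled here."""
    weights = [o["importance"] * (1.0 + o["motion"]) * o["size"] for o in objects]
    total = sum(weights) or 1.0
    return [frame_budget * w / total for w in weights]

# toy usage: the speaker object carries higher visual importance than the background
objects = [
    {"name": "speaker",    "importance": 0.7, "motion": 0.5, "size": 0.3},
    {"name": "background", "importance": 0.3, "motion": 0.1, "size": 0.7},
]
print(allocate_object_bits(frame_budget=20000, objects=objects))
```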

A Speech Translation System for Hotel Reservation (호텔예약을 위한 음성번역시스템)

  • 구명완;김재인;박상규;김우성;장두성;홍영국;장경애;김응인;강용범
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.24-31
    • /
    • 1996
  • In this paper, we present a speech translation system for hotel reservation, KT-STS (Korea Telecom Speech Translation System). KT-STS is a speech-to-speech translation system which translates a spoken utterance in Korean into one in Japanese. The system has been designed around the task of hotel reservation (dialogues between a Korean customer and a hotel reservation desk in Japan). It consists of a Korean speech recognition system, a Korean-to-Japanese machine translation system, and a Korean speech synthesis system. The Korean speech recognition system is an HMM (hidden Markov model)-based, speaker-independent, continuous speech recognizer with a vocabulary of about 300 words. A bigram language model is used as the forward language model, and a dependency grammar is used as the backward language model. For machine translation, we use a dependency grammar and the direct transfer method. The Korean speech synthesizer uses demiphones as the synthesis unit and the method of periodic waveform analysis and reallocation. KT-STS runs in nearly real time on a SPARC20 workstation with one TMS320C30 DSP board. We achieved a word recognition rate of 94.68% and a sentence recognition rate of 82.42% in the speech recognition tests. In Korean-to-Japanese translation tests, we achieved a translation success rate of 100%. We also conducted an international joint experiment in which our system was connected over a leased line with a system developed by KDD in Japan.
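  • As a rough illustration of the forward bigram language model mentioned above, the sketch below scores a word sequence as a sum of bigram log-probabilities. The vocabulary, probabilities, and function name are hypothetical and are not taken from KT-STS.

```python
import math

def bigram_logprob(sentence, bigram_probs, unk=1e-6):
    """Score a word sequence with a bigram language model:
    log P(w1..wn) ~= sum_i log P(w_i | w_{i-1}).
    Toy sketch; real recognizers use smoothing and proper <s>/</s> handling."""
    words = ["<s>"] + sentence + ["</s>"]
    score = 0.0
    for prev, cur in zip(words, words[1:]):
        score += math.log(bigram_probs.get((prev, cur), unk))
    return score

# hypothetical tiny model for a hotel-reservation utterance
probs = {("<s>", "book"): 0.2, ("book", "a"): 0.5,
         ("a", "room"): 0.4, ("room", "</s>"): 0.6}
print(bigram_logprob(["book", "a", "room"], probs))
```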


Reconsideration of the Linguistic Category of Mediation in Language: a Comparative Approach between French and Korean (언어의 '매개작용' 범주 고찰: 프랑스어와 한국어 비교 연구)

  • Suh, Jungyeon
    • Cross-Cultural Studies
    • /
    • v.46
    • /
    • pp.297-325
    • /
    • 2017
  • In this paper, I reconsider the evidential category (or mediation category) in language, focusing on its language-specific values in Korean and French evidentials. We analyze how evidentials are represented in both languages, including their linguistic markers (grammatical, lexical, or discursive) and their semantic meanings. Building on previous studies from a general linguistic point of view, we reconsider the semantic meanings of both languages' grammatical markers, namely the so-called Korean retrospective marker '-te-' and the French conditional, within the framework of the enunciative operation theory of Desclés & Guentchéva (2000), which proposes to classify types of discourse using language-independent description tools conceived after the enunciation theory of Bally (1965), Benveniste (1956), and Culioli (1973). Through this approach, we hope to contribute to establishing a linguistic basis not only for general linguistic research aimed at determining the invariant meaning of linguistic evidentials and their system, but also for applied linguistics in the field of language engineering.

Automatic Recognition of Pitch Accent Using Distributed Time-Delay Recursive Neural Network (분산 시간지연 회귀신경망을 이용한 피치 악센트 자동 인식)

  • Kim Sung-Suk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.6
    • /
    • pp.277-281
    • /
    • 2006
  • This paper presents a method for the automatic recognition of pitch accents over syllables. The proposed method is based on the time-delay recursive neural network (TDRNN), a neural network classifier with two different representations of dynamic context: the delayed input nodes allow the representation of an explicit trajectory F0(t) along time, while the recursive nodes provide long-term context information that reflects the characteristics of pitch accentuation in spoken English. We apply the TDRNN to pitch accent recognition in two forms: in the normal TDRNN, all of the prosodic features (pitch, energy, duration) are used as an entire set in a single TDRNN, while in the distributed TDRNN, the network consists of several TDRNNs, each taking a single prosodic feature as its input. The final output of the distributed TDRNN is a weighted sum of the outputs of the individual TDRNNs. We used the Boston Radio News Corpus (BRNC) for experiments on speaker-independent pitch accent recognition. The experimental results show that the distributed TDRNN achieves an average recognition accuracy of 83.64% over both pitch events and non-events.
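  • The sketch below illustrates the weighted-sum combination used by the distributed TDRNN as described above: each per-feature network (pitch, energy, duration) produces class posteriors, and the final output is their weighted sum. The networks themselves are not implemented here, and the weights and dimensions are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def distributed_combine(feature_outputs, weights):
    """Combine per-feature classifier outputs (one network per prosodic
    feature: pitch, energy, duration) into a final score by weighted sum.
    The per-feature outputs are assumed to be class posteriors of equal
    dimensionality; the weights are normalized before combining."""
    outs = np.stack(feature_outputs)   # shape: (n_features, n_classes)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize the combination weights
    return w @ outs                    # weighted sum over features

# toy usage: three per-feature posteriors for {accent, no-accent}
pitch_out    = np.array([0.80, 0.20])
energy_out   = np.array([0.60, 0.40])
duration_out = np.array([0.55, 0.45])
print(distributed_combine([pitch_out, energy_out, duration_out], [0.5, 0.3, 0.2]))
```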