• Title/Summary/Keyword: speaker dependent system

Search Result 76, Processing Time 0.033 seconds

Continuous Digit Recognition Using the Weight Initialization and LR Parser

  • Choi, Ki-Hoon;Lee, Seong-Kwon;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.2E
    • /
    • pp.14-23
    • /
    • 1996
  • This paper is a study on a neural network for phoneme recognition, a weight initialization method to reduce learning time, and an LR parser for continuous speech recognition. The neural network spots phonemes in continuous speech, and the LR parser parses the output of the neural network. The phonemes to be recognized are divided into several groups by phoneme similarity, and a neural network is built for each group. Each group's network consists of networks that recognize the phonemes of that group and a VGNN (Verify Group Neural Network) that judges whether an input belongs to the group or not. The weights of the neural networks are not initialized with random values but from the learning data, in order to reduce learning time. The LR parsing method applied in this paper does not trace a unique path but several possible paths, because the output of the neural network is not exact. The parser processes the continuous speech frame by frame, accumulating the output of the neural network along the possible paths. If an accumulated path value drops below a threshold, that path is deleted from the set of possible parsing paths. This continuous speech recognition system is applied to continuous Korean digit recognition. The recognition rate for isolated digits is 97% in the speaker-dependent case and 75% in the speaker-independent case, and the recognition rate for continuous digits is 74% in the speaker-dependent case.

  • PDF
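The frame-by-frame path accumulation and threshold pruning described in the abstract above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function name `prune_paths`, the score values, and the threshold are all assumptions.

```python
def prune_paths(paths, frame_scores, threshold):
    """Advance each candidate parse path by one frame and prune.

    paths: dict mapping path id -> accumulated score so far.
    frame_scores: dict mapping path id -> this frame's network output score.
    Paths whose accumulated score falls below `threshold` are deleted.
    """
    survivors = {}
    for pid, acc in paths.items():
        new_acc = acc + frame_scores.get(pid, float("-inf"))
        if new_acc >= threshold:
            survivors[pid] = new_acc
    return survivors

# Toy usage: two candidate paths, one frame of (assumed) log-score updates.
paths = {"path-A": 0.0, "path-B": 0.0}
paths = prune_paths(paths, {"path-A": -0.2, "path-B": -3.0}, threshold=-1.0)
# path-B's accumulated score (-3.0) drops below the threshold and is deleted.
```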

A Study on MLP Neural Network Architecture and Feature Extraction for Korean Syllable Recognition (한국어 음절 인식을 위한 MLP 신경망 구조 및 특징 추출에 관한 연구)

  • 금지수;이현수
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.672-675
    • /
    • 1999
  • In this paper, we propose an MLP neural network architecture and a feature extraction method for Korean syllable recognition. In the proposed syllable recognition system, the onset is first classified by an onset-classification neural network, and the result of this classification is used to select the features of the input pattern vectors. Feature extraction for Korean syllables is based on sonority: a threshold rate is used to separate the syllable, and the results of this separation are used as features for the onset, nucleus, and coda. ETRI's SAMDORI was used as the speech DB. The recognition rate is 96% in the speaker-dependent case and 93.3% in the speaker-independent case.

  • PDF

A New Vocoder based on AMR 7.4Kbit/s Mode for Speaker Dependent System (화자 의존 환경의 AMR 7.4Kbit/s모드에 기반한 보코더)

  • Min, Byung-Jae;Park, Dong-Chul
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.9C
    • /
    • pp.691-696
    • /
    • 2008
  • A new Code Excited Linear Predictive (CELP) vocoder based on the Adaptive Multi Rate (AMR) 7.4 kbit/s mode is proposed in this paper. The proposed vocoder achieves a better compression rate in a Speaker Dependent Coding System (SDSC) environment and is efficient for systems, such as OGM (outgoing message) and TTS (text-to-speech), that need only one person's speech. In order to enhance the compression rate of the coder, a new Line Spectral Pairs (LSP) codebook is built using the Centroid Neural Network (CNN) algorithm. In comparison with the original AMR 7.4 kbit/s coder, the new coder shows a 27% higher compression rate while preserving synthesized speech quality in terms of Mean Opinion Score (MOS).
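Codebook training of the kind the abstract above describes can be sketched with plain k-means clustering; note this is a stand-in for illustration only, since the paper uses the Centroid Neural Network algorithm, and the data, `train_codebook` name, and deterministic initialization are all assumptions.

```python
def train_codebook(vectors, k, iters=20):
    """Toy k-means codebook training over feature vectors (tuples of floats).

    Stands in for the centroid-based codebook construction the paper applies
    to LSP parameters; real LSP vectors would replace the toy data below.
    """
    centroids = [tuple(v) for v in vectors[:k]]  # simple deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            # Assign each vector to its nearest centroid (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, centroids[c])))
            clusters[i].append(v)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centroid as the cluster mean
                centroids[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centroids

# Toy usage: two well-separated clusters of 2-D "features".
vecs = [(0.0, 0.0), (10.0, 10.0), (1.0, 1.0), (0.0, 1.0), (9.0, 10.0), (10.0, 9.0)]
codebook = train_codebook(vecs, k=2)
# Codewords converge near the two cluster means, roughly (0.33, 0.67) and (9.67, 9.67).
```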

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.4
    • /
    • pp.494-500
    • /
    • 2008
  • In the field of human interface technology, the interactions between humans and machines are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts characteristics of the speech signal for emotion recognition using PLP (perceptual linear prediction) analysis. The PLP analysis technique was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its efficiency for speaker recognition tasks. This paper therefore proposes an algorithm that can evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, whereas the average recognition rate is 75%. The proposed system has a simple structure but is efficient enough to be used in real time.

An embodiment of a mouse pointing system using a 3-axis accelerometer and sound-recognition module (3축 가속도센서 및 음성인식 모듈을 이용한 마우스 포인팅 시스템의 구현)

  • Lee, Seung-Joon;Shin, Dong-Hwan;Kasno, Mohamad Afif B.;Kim, Joo-Woong;Park, Jin-Woo;Eom, Ki-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.934-937
    • /
    • 2010
  • In this paper, we implemented a mouse pointing system that helps handicapped people, and people unfamiliar with electronics, use electronic devices easily. By combining speech recognition and a 3-axis acceleration sensor with a headset, a new mouse pointing system is constructed. We used a speaker-dependent recognition module, which generates BCD codes from recognized voice commands, because it has a higher recognition rate than a speaker-independent system. The headset mouse system consists of a 3-axis accelerometer, a sound-recognition module, and a TMS320F2812 processor. The main controller, the TMS320F2812 DSP processor, communicates with the host computer via SCI. The PC side of the system is operated by a Visual Basic program.

  • PDF

A Study on Design and Implementation of Speech Recognition System Using ART2 Algorithm

  • Kim, Joeng Hoon;Kim, Dong Han;Jang, Won Il;Lee, Sang Bae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.149-154
    • /
    • 2004
  • In this research, we selected speech recognition as the method for controlling an electric wheelchair by voice alone, and used DTW (Dynamic Time Warping), which is speaker-dependent and has a relatively high recognition rate among speech recognition methods. However, the system must have a small memory footprint and fast processing speed for real-time operation. Thus, we introduced VQ (Vector Quantization), which is widely used as a compression algorithm in speaker-independent recognition, to secure fast recognition and small memory use. However, we found that the recognition rate decreased after applying VQ. To improve it, we applied the ART2 (Adaptive Resonance Theory 2) algorithm as a post-processing step and obtained about a 5% improvement in recognition rate. To utilize ART2, an error range must be applied: when the difference between the second-smallest and the smallest distances obtained by DTW is 20 or more, the error range is applied. In this way, ART2 was applied and we obtained fast processing and a high recognition rate. Moreover, since this system is mounted on a moving object, it should be implemented as an embedded system. Thus, we selected the TMS320C32 chip, which can process a significant amount of computation relatively fast. Because the data stored in memory is speech, we used 128 kbytes of RAM and 64 kbytes of ROM to hold the large amount of data. For speech input, we used a 16-bit stereo audio codec, securing relatively accurate data through its high resolution.
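The DTW template matching at the core of the abstract above is a standard dynamic-programming recurrence; a minimal sketch follows (scalar features for brevity; real systems would compare cepstral vectors, and the VQ compression and ART2 post-processing are not shown).

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two feature sequences.

    D[i][j] is the minimum accumulated cost aligning a[:i] with b[:j];
    each cell extends the best of the three predecessor alignments.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A stretched copy of a sequence aligns with zero cost.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

A recognizer would compute `dtw_distance` between the input and every reference template and pick the smallest; the paper's error-range rule then compares the best and second-best distances before invoking ART2.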

An Integrated Neural Network Model for Domain Action Determination in Goal-Oriented Dialogues

  • Lee, Hyunjung;Kim, Harksoo;Seo, Jungyun
    • Journal of Information Processing Systems
    • /
    • v.9 no.2
    • /
    • pp.259-270
    • /
    • 2013
  • A speaker's intentions can be represented by domain actions (domain-independent speech act and domain-dependent concept sequence pairs). Therefore, it is essential that domain actions be determined when implementing dialogue systems because a dialogue system should determine users' intentions from their utterances and should create counterpart intentions to the users' intentions. In this paper, a neural network model is proposed for classifying a user's domain actions and planning a system's domain actions. An integrated neural network model is proposed for simultaneously determining user and system domain actions using the same framework. The proposed model performed better than previous non-integrated models in an experiment using a goal-oriented dialogue corpus. This result shows that the proposed integration method contributes to improving domain action determination performance.

A Study on Design and Implementation of an Embedded System for Speech Recognition Processing

  • Kim, Jung-Hoon;Kang, Sung-In;Ryu, Hong-Suk;Lee, Sang-Bae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.201-206
    • /
    • 2004
  • This study developed a speech recognition module applied to a wheelchair for the physically handicapped. In the proposed speech recognition module, a TMS320C32 was used as the main processor, and 12th-order Mel-Cepstrum coefficients were applied in the pre-processing step to increase the recognition rate in noisy environments. DTW (Dynamic Time Warping) was used and proved to give excellent results for the speaker-dependent recognition part. In order to utilize this algorithm more effectively, the reference data was compressed to 1/12 of its size using vector quantization so as to decrease memory use. In this paper, the diverse technology required (end-point detection, DMA processing, etc.) was handled so that the speech recognition system can be used in real time.

Recognition of Korean Vowels using Bayesian Classification with Mouth Shape (베이지안 분류 기반의 입 모양을 이용한 한글 모음 인식 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.8
    • /
    • pp.852-859
    • /
    • 2019
  • With the development of IT technology and smart devices, various applications utilizing image information are being developed. To provide an intuitive interface for pronunciation recognition, there is a growing need for research on pronunciation recognition using mouth-shape feature values. In this paper, we propose a system that distinguishes Korean vowel pronunciations by detecting feature points of the lip region in images and applying a Bayesian learning model. Because the proposed recognition system is based on Bayes' theorem, it can improve recognition accuracy by accumulating input data, whether speaker-independent or speaker-dependent, even with a small amount of learning data. Experimental results show that Korean vowels can be effectively distinguished by probability-based Bayesian classification using only visual information such as mouth-shape features.
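A Bayesian classifier over lip-feature values, as in the abstract above, can be illustrated with a minimal Gaussian naive Bayes model. This is a generic sketch under assumed data, not the paper's system; the feature values, labels, and function names are hypothetical.

```python
import math

def fit_gaussian_nb(samples):
    """samples: dict label -> list of feature tuples.
    Returns per-label (mean, variance) statistics for each feature."""
    model = {}
    for label, vecs in samples.items():
        stats = []
        for dim in zip(*vecs):
            mu = sum(dim) / len(dim)
            var = sum((x - mu) ** 2 for x in dim) / len(dim) + 1e-6  # smoothed
            stats.append((mu, var))
        model[label] = stats
    return model

def predict(model, x):
    """Pick the label maximizing the Gaussian log-likelihood of x."""
    def loglik(stats):
        return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
                   for xi, (mu, var) in zip(x, stats))
    return max(model, key=lambda lbl: loglik(model[lbl]))

# Hypothetical lip features (e.g. normalized mouth width, height) per vowel.
samples = {"a": [(0.9, 0.8), (1.0, 0.9)], "i": [(0.2, 0.1), (0.3, 0.2)]}
model = fit_gaussian_nb(samples)
print(predict(model, (0.95, 0.85)))  # "a"
```

Accumulating more samples per label and refitting mirrors the accuracy improvement from accumulated input data that the abstract mentions.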

A study on the Recognition of Continuous Digits using Syntactic Analysis and One-Stage DP (구문 분석과 One-Stage DP를 이용한 연속 숫자음 인식에 관한 연구)

  • Ann, Tae-Ock
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.3
    • /
    • pp.97-104
    • /
    • 1995
  • This paper is a study on the recognition of continuous digits for the implementation of a voice dialing system, and proposes a speech recognition method using syntactic analysis and One-Stage DP. To perform the recognition, we first build DMS models by a section-division algorithm and then recognize continuous digit data through the proposed One-Stage DP method with syntactic analysis. In this study, 21 kinds of 7-digit continuous strings, each pronounced two or three times by 8 male speakers, are used. Speaker-dependent and speaker-independent recognition are performed on this data with both the conventional One-Stage DP and the proposed One-Stage DP using syntactic analysis, under laboratory conditions. The recognition experiments show that the proposed method outperforms the conventional one: the recognition accuracy of the proposed One-Stage DP using syntactic analysis is about 91.7% speaker-dependent and 89.7% speaker-independent.

  • PDF
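The One-Stage DP used in the abstract above extends within-template DTW with word-boundary transitions, so connected digits are decoded in a single pass. The sketch below is a bare-bones Ney-style illustration under assumed scalar features; backtracking for the recognized digit string, the DMS models, and the syntactic constraints are all omitted.

```python
def one_stage_dp(frames, templates, dist):
    """Minimal One-Stage DP for connected-word recognition.

    frames: input feature frames. templates: dict word -> template frames.
    dist: frame-level distance. Returns the best accumulated distance over
    all word sequences; backtracking is omitted for brevity.
    """
    INF = float("inf")
    D = {w: [INF] * len(tpl) for w, tpl in templates.items()}
    for t, x in enumerate(frames):
        # Best accumulated cost at any template's final frame, last time step:
        # entering a new word continues from here (word-boundary transition).
        prev_best_end = 0.0 if t == 0 else min(D[w][-1] for w in templates)
        newD = {}
        for w, tpl in templates.items():
            col = [INF] * len(tpl)
            for j in range(len(tpl)):
                if t == 0:
                    pred = 0.0 if j == 0 else INF  # only word starts reachable
                elif j == 0:
                    pred = min(D[w][0], prev_best_end)
                else:
                    pred = min(D[w][j], D[w][j - 1])  # within-template moves
                col[j] = dist(x, tpl[j]) + pred
            newD[w] = col
        D = newD
    return min(D[w][-1] for w in templates)

# Toy usage: the input matches template "A" followed by template "B" exactly.
best = one_stage_dp([1, 1, 5, 5], {"A": [1, 1], "B": [5, 5]},
                    lambda a, b: abs(a - b))
print(best)  # 0.0
```

The syntactic analysis the paper adds would restrict which word-boundary transitions are allowed at each point, pruning implausible digit sequences.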