• Title/Summary/Keyword: Connected speech

A Study on Connected Digits Recognition Using the K-L Expansion (K-L 전개를 이용한 연속 숫자음 인식에 관한 연구)

  • 김주곤;오세진;황철준;김범국;정현열
    • Journal of the Institute of Convergence Signal Processing, v.2 no.3, pp.24-31, 2001
  • The K-L (Karhunen-Loève) expansion compresses the dimensionality of feature vectors and thus reduces the computational cost of the recognition process; it is also well known in statistical pattern recognition that features can be extracted this way without much loss of information. In this paper, a method that effectively applies the K-L expansion to speech feature parameters is proposed to improve the recognition accuracy of a Korean speech recognition system. The recognition performance of the feature parameters obtained by the proposed method (K-L coefficients) is compared with that of conventional Mel-cepstrum and regression coefficients through speaker-independent connected digit recognition experiments. The experimental results show that the K-L coefficients combined with their regression coefficients achieve higher average recognition rates than the conventional Mel-cepstrum with its regression coefficients.
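
A minimal NumPy sketch of the underlying idea, assuming Mel-cepstral feature vectors stacked row-wise; the frame count and the number of retained dimensions are illustrative placeholders, not values from the paper:

```python
import numpy as np

def kl_expansion(features, k):
    """Project feature vectors onto the top-k eigenvectors of their
    covariance matrix (Karhunen-Loeve expansion, i.e. PCA)."""
    mean = features.mean(axis=0)
    centered = features - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]     # indices of the top-k components
    basis = eigvecs[:, order]                 # (dim, k) projection basis
    return centered @ basis                   # (frames, k) K-L coefficients

# Hypothetical example: 1000 frames of 26-dim Mel-cepstral features
# compressed to 12 K-L coefficients per frame.
frames = np.random.randn(1000, 26)
kl_coeffs = kl_expansion(frames, k=12)
print(kl_coeffs.shape)  # (1000, 12)
```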

Speech emotion recognition using attention mechanism-based deep neural networks (주목 메커니즘 기반의 심층신경망을 이용한 음성 감정인식)

  • Ko, Sang-Sun;Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea, v.36 no.6, pp.407-412, 2017
  • In this paper, we propose a speech emotion recognition method using a deep neural network based on the attention mechanism. The proposed method combines CNN (Convolutional Neural Network), GRU (Gated Recurrent Unit), and DNN (Deep Neural Network) components with an attention mechanism. The spectrogram of a speech signal contains characteristic patterns that depend on the emotion, so we model these patterns by applying tuned Gabor filters as the convolutional filters of a typical CNN. In addition, we apply the attention mechanism over the CNN and FC (Fully-Connected) layers to obtain attention weights that take the context of the extracted features into account, and we use the resulting weighted representation for emotion recognition. To verify the proposed method, we conducted recognition experiments on six emotions. The experimental results show that the proposed method achieves higher speech emotion recognition performance than conventional methods.
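
A minimal PyTorch sketch of this kind of architecture, under assumptions: a plain learned convolution stands in for the paper's tuned Gabor filters, and all layer sizes and the six-class output are illustrative:

```python
import torch
import torch.nn as nn

class AttentionEmotionNet(nn.Module):
    """CNN front end -> GRU -> FC attention -> six-emotion classifier."""
    def __init__(self, n_mels=64, n_classes=6):
        super().__init__()
        # Convolution over the (1, n_mels, time) spectrogram; a learned
        # filter bank stands in for the paper's tuned Gabor filters.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        self.gru = nn.GRU(16 * (n_mels // 2), 128, batch_first=True)
        self.attn = nn.Linear(128, 1)       # FC layer scoring each time step
        self.out = nn.Linear(128, n_classes)

    def forward(self, spec):                # spec: (batch, 1, n_mels, time)
        h = self.cnn(spec)                  # (batch, 16, n_mels/2, time/2)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)
        h, _ = self.gru(h)                  # (batch, time/2, 128)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)            # context vector
        return self.out(ctx)                # emotion logits

logits = AttentionEmotionNet()(torch.randn(2, 1, 64, 100))
print(logits.shape)  # torch.Size([2, 6])
```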

Conversational Quality Measurement System for Mobile VoIP Speech Communication (모바일 VoIP 음성통신을 위한 대화음질 측정 시스템)

  • Cho, Jae-Man;Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.10 no.4, pp.71-77, 2011
  • In this paper, we propose a conversational quality measurement (CQM) system that provides an objective QoS measure for high-quality mobile VoIP voice communication. To measure conversational quality, a VoIP telecommunication system is implemented on two smartphones connected via VoIP. The system consists of echo cancellation, noise reduction, speech encoding/decoding, packet generation with RTP (Real-time Transport Protocol), jitter-buffer control, and POS (Play-Out Scheduling) with LC (Loss Concealment). The CQM system is connected to the microphone and speaker of each smartphone; the voice signal of each talker is recorded and used to measure CE (Conversational Efficiency), CS (Conversational Symmetry), PESQ (Perceptual Evaluation of Speech Quality), and the CE-CS-PESQ correlation. We validate the CQM system by measuring CE, CS, and PESQ under the various SNR, delay, and packet-loss conditions imposed by the IP network environment.
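
The abstract does not define CE and CS; purely as an assumed illustration, the sketch below derives efficiency- and symmetry-style ratios from two speakers' voice-activity masks (PESQ itself is the standardized ITU-T P.862 algorithm and is not reimplemented here):

```python
import numpy as np

def conversation_metrics(vad_a, vad_b):
    """Illustrative efficiency/symmetry ratios from two speakers'
    boolean voice-activity masks (one value per 10 ms frame).
    These stand-in definitions are NOT the paper's CE/CS formulas."""
    vad_a, vad_b = np.asarray(vad_a, bool), np.asarray(vad_b, bool)
    talk_a, talk_b = vad_a.sum(), vad_b.sum()
    double_talk = (vad_a & vad_b).sum()   # both talking (collisions)
    silence = (~vad_a & ~vad_b).sum()     # mutual silence (turn gaps)
    total = len(vad_a)
    efficiency = 1.0 - (double_talk + silence) / total
    symmetry = min(talk_a, talk_b) / max(talk_a, talk_b)
    return efficiency, symmetry

# Hypothetical 60 s of 10 ms frames for each conversation side.
rng = np.random.default_rng(0)
a, b = rng.random(6000) < 0.4, rng.random(6000) < 0.4
print(conversation_metrics(a, b))
```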

Implementation of Speech Recognition and Flight Controller Based on Deep Learning for Control of the Primary Control Surface of an Aircraft

  • Hur, Hwa-La;Kim, Tae-Sun;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information, v.26 no.9, pp.57-64, 2021
  • In this paper, we propose a device that can control the primary control surface of an aircraft by recognizing speech commands. The command set consists of 19 commands, and the learning model is built on a total of 2,500 data samples. The model is a CNN constructed with the Sequential API of TensorFlow's Keras, and features are extracted from the training speech files with the MFCC algorithm. The model consists of two convolution layers for feature extraction and a fully connected classifier made up of two dense layers. Accuracy on the validation dataset was 98.4%, and performance evaluation on the test dataset showed an accuracy of 97.6%. In addition, a Raspberry Pi-based control device was designed and implemented, and normal operation was confirmed. In the future, the system can serve as a virtual training environment for speech-controlled flight and aviation maintenance.
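
Since the abstract names the ingredients (Keras Sequential API, two convolution layers, a two-dense-layer classifier, MFCC input, 19 commands), a sketch along those lines follows; the filter counts, kernel sizes, and MFCC input shape are assumptions, as the paper does not list them:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed MFCC input shape: 40 coefficients x 100 frames x 1 channel.
model = models.Sequential([
    layers.Input(shape=(40, 100, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # feature extraction
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),           # classifier head
    layers.Dense(19, activation="softmax"),         # 19 speech commands
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```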

A study of broad-class classification of Korean digits using symbol processing (심볼을 이용한 한국어 숫자음의 광역 음소군 분류에 관한 연구)

  • Lee, Bong-Gu;Lee, Guk;Hwang, Hee-Yoong
    • Proceedings of the KIEE Conference, 1989.07a, pp.481-485, 1989
  • The objective of this paper is the design of a broad-class classifier for connected Korean digits. Many approaches have been applied to speech recognition systems: parametric vector quantization, dynamic programming, and hidden Markov models. In the 1980s the neural network method, which is expected to solve complex speech recognition problems, came back into favor. We have chosen a rule-based system for our model. The phoneme groups we wish to classify are vowel_like, plosive_like, fricative_like, and stop_like. The data are 1,380 connected digits spoken by three untrained male speakers. We observed a 91.5% classification rate.
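
The paper's actual rules are not given; the sketch below is a hypothetical illustration of the rule-based idea, assigning a frame to a broad phoneme group from short-time energy and zero-crossing rate with invented thresholds:

```python
import numpy as np

def broad_class(frame, energy_thresh=0.01, zcr_thresh=0.15):
    """Toy rule-based broad-class decision for one speech frame.
    Thresholds and rules are hypothetical, not the paper's."""
    energy = np.mean(frame ** 2)                        # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # zero-crossing rate
    if energy < energy_thresh and zcr < zcr_thresh:
        return "stop_like"       # low energy, low ZCR: closure/silence
    if energy < energy_thresh:
        return "fricative_like"  # low energy, high ZCR: frication noise
    if zcr < zcr_thresh:
        return "vowel_like"      # high energy, low ZCR: voiced/sonorant
    return "plosive_like"        # high energy, high ZCR: burst-like

frame = np.sin(np.linspace(0, 20 * np.pi, 160))  # hypothetical voiced frame
print(broad_class(frame))  # vowel_like
```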

A hardware architecture for connected speech recognition and its FPGA implementation (연결 단어 음성인식을 위한 하드웨어 아키텍쳐 및 FPGA 구현)

  • Kim, Yong;Jeong, Hong
    • Proceedings of the IEEK Conference, 2006.06a, pp.381-382, 2006
  • In this paper, we present an efficient architecture for connected speech recognition that can be implemented on an FPGA. The architecture consists of a newly derived two-level dynamic programming (TLDP) scheme that uses only bit additions and shift operations. The advantages of this architecture are its spatial efficiency, accommodating more words in limited space, and its computational speed, gained by avoiding the propagation delays of multipliers. The architecture is highly regular, consisting of identical and simple processing elements with only nearest-neighbor communication; external communication occurs only at the end processing elements.
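
As a software-level illustration of the two-level dynamic programming the hardware implements, the sketch below assumes the level-1 (within-word DTW) costs are precomputed and performs the level-2 search for the best word sequence; the cost table and vocabulary are placeholders:

```python
def tldp(word_cost, n_frames):
    """Level-2 search: word_cost[w][(s, e)] is the precomputed level-1
    DTW cost of matching word w to input frames s..e."""
    INF = float("inf")
    best = [INF] * (n_frames + 1)   # best[e]: min cost covering frames 0..e
    best[0] = 0.0
    back = [None] * (n_frames + 1)  # (start_frame, word) backpointers
    for e in range(1, n_frames + 1):
        for w, spans in word_cost.items():
            for (s, e2), cost in spans.items():
                if e2 == e and best[s] + cost < best[e]:
                    best[e] = best[s] + cost
                    back[e] = (s, w)
    words, e = [], n_frames         # trace back the recognized word sequence
    while e > 0:
        s, w = back[e]
        words.append(w)
        e = s
    return words[::-1], best[n_frames]

# Hypothetical cost table: 6 input frames, two-word vocabulary.
word_cost = {"one": {(0, 3): 1.0, (3, 6): 2.5},
             "two": {(0, 3): 2.0, (3, 6): 0.8}}
print(tldp(word_cost, 6))  # (['one', 'two'], 1.8)
```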

A study on recognition improvement of velopharyngeal insufficiency patient's speech using various types of deep neural network (심층신경망 구조에 따른 구개인두부전증 환자 음성 인식 향상 연구)

  • Kim, Min-seok;Jung, Jae-hee;Jung, Bo-kyung;Yoon, Ki-mu;Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea, v.38 no.6, pp.703-709, 2019
  • This paper proposes speech recognition systems that combine Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) structures with a Hidden Markov Model (HMM) to effectively recognize the speech of VeloPharyngeal Insufficiency (VPI) patients, and compares their recognition performance with Gaussian Mixture Model (GMM-HMM) and fully connected Deep Neural Network (DNN-HMM) based systems. The initial model is trained on normal speakers' speech, and simulated VPI speech is used to generate a prior model for speaker adaptation. For VPI speaker adaptation, selected layers are retrained in the CNN-HMM based model, and the dropout regularization technique is applied in the LSTM-HMM based model, yielding a 3.68 % improvement in recognition accuracy. The experimental results demonstrate that the proposed LSTM-HMM based system is effective for VPI speech with small amounts of speech data, compared to the conventional GMM-HMM and fully connected DNN-HMM systems.
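
A minimal PyTorch sketch of the selected-layer adaptation idea: freeze a pretrained acoustic model, then unfreeze only the chosen layers before fine-tuning on the small VPI dataset. The model and the choice of layers are placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Placeholder acoustic model standing in for a pretrained CNN-HMM front end.
model = nn.Sequential(
    nn.Conv1d(40, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 100, 500),       # assumes 100-frame feature inputs
)

# Speaker adaptation: freeze everything, then unfreeze selected layers only.
for p in model.parameters():
    p.requires_grad = False
for p in model[2].parameters():     # second conv layer, chosen for adaptation
    p.requires_grad = True
for p in model[5].parameters():     # output layer
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
print(sum(p.numel() for p in model.parameters() if p.requires_grad),
      "trainable parameters out of",
      sum(p.numel() for p in model.parameters()))
```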

Speech Data Collection for Korean Speech Recognition (한국어 음성인식을 위한 음성 데이터 수집)

  • Park, Jong-Ryeal;Kwon, Oh-Wook;Kim, Do-Yeong;Choi, In-Jeong;Jeong, Ho-Young;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea, v.14 no.4, pp.74-81, 1995
  • This paper describes the development of speech databases for the Korean language constructed at the Communications Research Laboratory at KAIST. The procedure and environment used to construct the databases are presented in detail, along with their phonetic and linguistic properties. The databases are intended for use in designing and evaluating speech recognition algorithms. They consist of five different sets of speech material: trade-related continuous speech with 3,000 words, variable-length connected digits, 75 phoneme-balanced isolated words, 500 isolated Korean provincial names, and Korean A-set words.

Speech Stimuli in the Diagnostic Evaluation of Speech with Cleft Lip and Palate: Clinical Use and Literature Review (구개열 환자 말 평가 시 검사어에 대한 고찰 : 임상현장의 말 평가 어음자료와 문헌적 고찰을 중심으로)

  • Choi, Seong-Hee;Choi, Jae-Nam;Nam, Do-Hyun;Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics, v.16 no.1, pp.33-48, 2005
  • Differential diagnosis of articulation and resonance problems in cleft lip and palate speech is required to evaluate the various factors that contribute to speech problems, such as VPI, dental occlusion, palatal fistulae, and learning. However, the validity of the speech stimuli used is a current issue in accurately evaluating each problem in cleft speech. This study investigated the speech stimuli used in clinical settings and reviewed the literature published from 1990 to 2005 to help develop standardized speech samples. The results yield the following recommendations for properly evaluating velopharyngeal function during a diagnostic evaluation: 1) For identifying hypernasality, the speech stimuli should include low-pressure consonants to eliminate the effects of nasal emission and compensatory articulation. 2) Speech stimuli should consist of visible, front sounds to avoid compensatory articulation and to make stimulation easy. 3) For early diagnosis and treatment, speech stimuli need to be developed for infants and preschoolers. 4) Stimulus length for nasalance scores should be at least 6 syllables. 5) Regarding the phonetic context for nasalance scores, the /i/ vowel should be taken into consideration, excluding paragraph-level material. 6) Connected speech stimuli should be developed for evaluating intelligibility and VP function.

On Detecting the Transition Regions of Phonemes by Using the Asymmetrical Rate of Speech Waveforms (음성파형의 비대칭율을 이용한 음소의 전이구간 검출)

  • Bae, Myung-Jin;Lee, Eul-jae;Ann, Sou-Guil
    • The Journal of the Acoustical Society of Korea, v.9 no.4, pp.55-65, 1990
  • To recognize continuous speech, it is necessary to segment the connected acoustic signal into phonetic units. In this paper, we propose a new asymmetry rate as a parameter for detecting transition regions in continuous speech. The proposed rate represents the rate of change of the magnitude of the speech signal. By comparing this rate with those of adjacent frames, the state of each frame can be classified as steady or transient.
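
The paper's exact formula is not reproduced here; as an assumed illustration, the sketch below measures per-frame waveform asymmetry as the normalized imbalance between positive and negative sample magnitudes and flags frames where it jumps relative to the previous frame:

```python
import numpy as np

def asymmetry_rate(frame):
    """Illustrative asymmetry measure: imbalance between positive and
    negative sample magnitudes, in [0, 1]. Not the paper's exact formula."""
    pos = frame[frame > 0].sum()
    neg = -frame[frame < 0].sum()
    return abs(pos - neg) / (pos + neg + 1e-12)

def transition_frames(signal, frame_len=160, threshold=0.2):
    """Flag frames whose asymmetry jumps relative to the previous frame,
    suggesting a transition between phonetic units."""
    frames = signal[: len(signal) // frame_len * frame_len]
    frames = frames.reshape(-1, frame_len)
    rates = np.array([asymmetry_rate(f) for f in frames])
    return np.where(np.abs(np.diff(rates)) > threshold)[0] + 1

# Hypothetical signal: a symmetric sine followed by an asymmetric segment.
t = np.linspace(0, 1, 1600, endpoint=False)
sig = np.concatenate([np.sin(2 * np.pi * 50 * t[:800]),
                      np.sin(2 * np.pi * 50 * t[:800]) ** 2])
print(transition_frames(sig))  # flags the frame at the segment boundary
```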
