• Title/Abstract/Keyword: Connected speech

Search results: 146 items

Recurrent Neural Network with Backpropagation Through Time Learning Algorithm for Arabic Phoneme Recognition

  • Ismail, Saliza;Ahmad, Abdul Manan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICCAS 2004, Institute of Control, Robotics and Systems
    • /
    • pp.1033-1036
    • /
    • 2004
  • Research on speech recognition and understanding has been conducted for many years. In this paper, we propose a new recurrent neural network architecture for speech recognition, in which each output unit is connected to itself and is also fully connected to the other output units and all hidden units [1]. In addition, we apply the Backpropagation Through Time (BPTT) learning algorithm, which is well suited to this architecture. The aim of the study is to discriminate the Arabic letters from "alif" to "ya". The purpose of this research is to improve people's knowledge and understanding of the Arabic alphabet and words by using a Recurrent Neural Network (RNN) trained with the BPTT learning algorithm. Recordings from four speakers (a mixture of male and female), made in a quiet environment, are used for training. Neural networks are well known for their ability to classify nonlinear problems, and much research has applied them to speech recognition [2], including Arabic. The Arabic language poses a number of challenges for speech recognition [3]. Even though positive results have been obtained in previous studies, research on minimizing the error rate continues to attract attention. This research uses a recurrent neural network to discriminate the letters "alif" through "ya".
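A minimal sketch, in PyTorch, of the kind of output-recurrent network this abstract describes: each output unit is fed its own previous value, the previous values of the other output units, and all hidden units, and BPTT arises from unrolling the loop over frames under autograd. The layer sizes, the extra output-to-hidden connection, and the training loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class OutputRecurrentRNN(nn.Module):
    def __init__(self, n_features=13, n_hidden=64, n_phonemes=28):
        super().__init__()
        self.in2hid = nn.Linear(n_features, n_hidden)
        self.hid2hid = nn.Linear(n_hidden, n_hidden)
        self.hid2out = nn.Linear(n_hidden, n_phonemes)
        self.out2out = nn.Linear(n_phonemes, n_phonemes)  # outputs connected to themselves and to each other
        self.out2hid = nn.Linear(n_phonemes, n_hidden)    # outputs also feed the hidden layer (assumed)

    def forward(self, x):                       # x: (time, n_features)
        h = torch.zeros(self.hid2hid.out_features)
        y = torch.zeros(self.out2out.out_features)
        logits = []
        for t in range(x.shape[0]):
            h = torch.tanh(self.in2hid(x[t]) + self.hid2hid(h) + self.out2hid(y))
            y = torch.tanh(self.hid2out(h) + self.out2out(y))
            logits.append(y)
        return torch.stack(logits)              # (time, n_phonemes)

# One BPTT training step on a dummy utterance (random data stands in for phoneme frames).
model = OutputRecurrentRNN()
optim = torch.optim.SGD(model.parameters(), lr=0.01)
frames = torch.randn(50, 13)                    # 50 frames of 13 features each
targets = torch.randint(0, 28, (50,))           # frame-level phoneme labels
loss = nn.CrossEntropyLoss()(model(frames), targets)
loss.backward()                                 # gradients flow through all time steps (BPTT)
optim.step()
```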


GMM based Speaker Identification using Pitch Information (피치 정보를 이용한 GMM 기반의 화자 식별)

  • Park Taesun;Hahn Minsoo
    • MALSORI
    • /
    • No. 47
    • /
    • pp.121-129
    • /
    • 2003
  • This paper describes the use of pitch information for speaker identification. The recognition system is GMM based and uses a speech database of four connected Korean digits. The mean pitch period in the voiced sections of speech is shown to be useful for discriminating between speakers. Adding this feature to the Gaussian mixture model in the speaker identification system gave a marked improvement, up to 6% over the baseline Gaussian mixture model.
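A minimal sketch of how pitch information could be combined with a GMM speaker-identification back end as the abstract outlines. The specific fusion (appending each utterance's mean voiced pitch period to its MFCC frames), the libraries used (librosa, scikit-learn), and all parameter values are assumptions rather than the paper's exact setup.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def features_with_pitch(wav_path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T          # (frames, n_mfcc)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    mean_period_ms = 1000.0 / np.nanmean(f0[voiced])                  # mean pitch period over voiced frames
    pitch_col = np.full((mfcc.shape[0], 1), mean_period_ms)
    return np.hstack([mfcc, pitch_col])

def train_speaker_models(train_utterances):
    """train_utterances: dict mapping speaker id -> list of wav paths (connected-digit utterances)."""
    models = {}
    for spk, paths in train_utterances.items():
        X = np.vstack([features_with_pitch(p) for p in paths])
        models[spk] = GaussianMixture(n_components=8, covariance_type="diag").fit(X)
    return models

def identify(models, wav_path):
    X = features_with_pitch(wav_path)
    # pick the speaker whose GMM gives the highest average log-likelihood
    return max(models, key=lambda spk: models[spk].score(X))
```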


Improved Text-to-Speech Synthesis System Using Articulatory Synthesis and Concatenative Synthesis (조음 합성과 연결 합성 방식을 결합한 개선된 문서-음성 합성 시스템)

  • 이근희;김동주;홍광석
    • Proceedings of the IEEK Conference
    • /
    • Proceedings of the 2002 IEEK Summer Conference (4)
    • /
    • pp.369-372
    • /
    • 2002
  • In this paper, we present an improved TTS synthesis system using articulatory synthesis and concatenative synthesis. In concatenative synthesis, segments of speech are excised from spoken utterances and connected to form the desired speech signal. We adopt LPC as a parameter, VQ to reduce the memory capacity, and TD-PSOLA to solve the naturalness problem.
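A simplified sketch of the concatenative step only: excised speech units are joined with a short raised-cosine crossfade at each boundary. The paper's actual pipeline (LPC parameters, VQ codebooks, and TD-PSOLA prosody modification) is considerably more involved and is not reproduced here.

```python
import numpy as np

def concatenate_units(units, sr=16000, fade_ms=10):
    """units: list of 1-D numpy arrays, each one excised speech segment."""
    fade = int(sr * fade_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, fade)))   # raised-cosine ramp
    out = units[0].astype(float)
    for u in units[1:]:
        u = u.astype(float)
        # overlap-add the last `fade` samples of `out` with the first `fade` samples of `u`
        out[-fade:] = out[-fade:] * ramp[::-1] + u[:fade] * ramp
        out = np.concatenate([out, u[fade:]])
    return out
```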


A Study on the Sound Effect for Improving Customer's Speech Recognition in the TTS-based Shop Music Broadcasting Service (TTS를 이용한 매장음원방송에서 고객의 인지도 향상을 위한 음향효과 연구)

  • Kang, Sun-Mee;Kim, Hyun-Deuc;Chang, Moon-Soo
    • Phonetics and Speech Sciences
    • /
    • Vol. 1, No. 4
    • /
    • pp.105-109
    • /
    • 2009
  • This paper describes a method for delivering clear voice announcements in a shop music broadcasting service using TTS (Text-To-Speech) technology. Providing a high-quality TTS sound service for each shop is expensive. According to research on architectural acoustics, room acoustic indexes such as reverberation time and early decay time are closely connected with the subjective perception of acoustics. Based on this result, customers should be able to recognize voice announcements better when a sound effect is applied to the speech files generated by TTS. An aural comprehension test showed better results for almost all parameters when a reverb effect was applied to the TTS sound.
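A minimal sketch of applying a reverb effect to a TTS waveform before broadcasting, as the study does. The impulse response below is synthetic (exponentially decaying noise shaped by a target reverberation time); the function name, sample rate, and RT60 value are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def add_reverb(tts_wave, sr=16000, rt60=0.6, ir_len_s=1.0):
    """Convolve a mono TTS signal with a synthetic impulse response.

    rt60: target reverberation time in seconds (time for a 60 dB decay).
    """
    n = int(sr * ir_len_s)
    t = np.arange(n) / sr
    decay = 10 ** (-3.0 * t / rt60)            # amplitude reaches -60 dB at t = rt60
    ir = np.random.randn(n) * decay
    ir[0] = 1.0                                # keep the direct sound
    wet = fftconvolve(tts_wave, ir)[: len(tts_wave)]
    return wet / (np.max(np.abs(wet)) + 1e-9)  # normalise to avoid clipping
```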


A Study on the Diphone Recognition of Korean Connected Words and Eojeol Reconstruction (한국어 연결단어의 이음소 인식과 어절 형성에 관한 연구)

  • ;Jeong, Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 14, No. 4
    • /
    • pp.46-63
    • /
    • 1995
  • This thesis describes an unlimited-vocabulary connected speech recognition system using Time Delay Neural Networks (TDNN). The recognition unit is the diphone, which includes the transition section between two phonemes; there are 329 diphone units. The recognition of Korean connected speech consists of three parts: feature extraction from the input speech signal, diphone recognition, and post-processing. In the feature extraction stage, diphone intervals are extracted from the input speech signal and feature vectors of 16 filter-bank coefficients are calculated for each frame in the diphone interval. Diphone recognition is organized as a three-stage hierarchical structure and is carried out using 30 Time Delay Neural Networks; in particular, the structure of the TDNN is modified to increase the recognition rate. In the post-processing stage, misrecognized diphone strings are corrected using phoneme transition probabilities and phoneme confusion probabilities, and eojeols (Korean words or phrases) are formed by combining the recognized diphones.
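A sketch of a small Time Delay Neural Network classifier over the 16 filter-bank coefficients mentioned above, written as 1-D convolutions over the frame axis in PyTorch. The layer sizes and the single-network formulation (rather than the paper's three-stage hierarchy of 30 networks) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DiphoneTDNN(nn.Module):
    def __init__(self, n_filterbank=16, n_diphones=329):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_filterbank, 32, kernel_size=3),   # each unit sees a 3-frame time window
            nn.Tanh(),
            nn.Conv1d(32, 64, kernel_size=5),             # wider temporal context in the next layer
            nn.Tanh(),
            nn.AdaptiveAvgPool1d(1),                       # collapse the remaining frames
            nn.Flatten(),
            nn.Linear(64, n_diphones),
        )

    def forward(self, x):          # x: (batch, n_filterbank, frames) — one diphone interval per example
        return self.net(x)

model = DiphoneTDNN()
segment = torch.randn(1, 16, 20)   # a 20-frame diphone interval of 16 filter-bank coefficients
print(model(segment).shape)        # torch.Size([1, 329]) — one score per diphone unit
```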


An Alteration Rule of Formant Transition for Improvement of Korean Demisyllable Based Synthesis by Rule (한국어 반음절단위 규칙합성의 개선을 위한 포만트천이의 변경규칙)

  • Lee, Ki-Young;Choi, Chang-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 15, No. 4
    • /
    • pp.98-104
    • /
    • 1996
  • This paper proposes an alteration rule that compensates for the formant transitions of connected vowels, to improve the unnatural continuous speech produced when demisyllables are concatenated without coarticulated formant transitions in demisyllable-based synthesis by rule. To fill in each formant transition part, a database of 42 stationary vowels, segmented from the stable part of each vowel, is appended to the Korean demisyllable database, and the resonance circuit used in formant synthesis is employed to change the formant frequencies of the speech signal. To evaluate speech synthesized with this rule, we applied the alteration rule to the connected vowels of demisyllable-based synthesized speech and compared spectrograms and MOS test scores with the original speech and with demisyllable-based synthesis without the rule. The results show that the proposed rule synthesizes more natural speech.
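A sketch of the second-order digital resonator used in formant synthesis, which the paper employs to impose a desired formant frequency on the transition region. The per-frame application with a linearly interpolated formant track is an assumption about how such an alteration rule could be realised, not the paper's exact rule.

```python
import numpy as np
from scipy.signal import lfilter

def formant_resonator(x, formant_hz, bandwidth_hz, sr=16000):
    """Filter x with y[n] = A*x[n] + B*y[n-1] + C*y[n-2], concentrating energy around formant_hz."""
    r = np.exp(-np.pi * bandwidth_hz / sr)
    theta = 2 * np.pi * formant_hz / sr
    B = 2 * r * np.cos(theta)
    C = -r * r
    A = 1 - B - C                      # unity gain at DC
    return lfilter([A], [1.0, -B, -C], x)

def alter_transition(x, f_start, f_end, bandwidth_hz=80, sr=16000, frame=160):
    """Approximate a formant transition by re-filtering successive frames
    with a formant frequency interpolated from f_start to f_end."""
    n_frames = len(x) // frame
    freqs = np.linspace(f_start, f_end, n_frames)
    out = [formant_resonator(x[i * frame:(i + 1) * frame], f, bandwidth_hz, sr)
           for i, f in enumerate(freqs)]
    return np.concatenate(out)
```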


Telephone Speech Recognition with Data-Driven Selective Temporal Filtering based on Principal Component Analysis

  • Jung Sun Gyun;Son Jong Mok;Bae Keun Sung
    • Proceedings of the IEEK Conference
    • /
    • Proceedings of the 2004 IEEK Conference
    • /
    • pp.764-767
    • /
    • 2004
  • The performance of a speech recognition system is generally degraded in telephone environments because of distortions caused by background noise and varying channel characteristics. In this paper, data-driven temporal filters are investigated to improve performance on a specific recognition task, telephone speech. Three different temporal filtering methods are presented, with recognition results for Korean connected-digit telephone speech. Filter coefficients are derived from the cepstral-domain feature vectors using principal component analysis.
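A minimal sketch of deriving a temporal filter from cepstral trajectories with PCA and applying it along the time axis, in the spirit of the data-driven filtering described above. The window length, the use of only the leading principal component, and sharing one filter across all cepstral dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def learn_temporal_filter(cepstra_list, context=9):
    """cepstra_list: list of (frames, n_cep) arrays from training utterances."""
    segments = []
    for cep in cepstra_list:
        for d in range(cep.shape[1]):                 # each cepstral dimension's time trajectory
            traj = cep[:, d]
            for t in range(len(traj) - context):
                segments.append(traj[t:t + context])  # overlapping temporal windows
    pca = PCA(n_components=1).fit(np.array(segments))
    return pca.components_[0]                         # leading component = FIR filter coefficients

def apply_temporal_filter(cep, fir):
    """Filter each cepstral coefficient trajectory along the time axis."""
    return np.column_stack([np.convolve(cep[:, d], fir, mode="same")
                            for d in range(cep.shape[1])])

# Usage with dummy data standing in for connected-digit telephone speech features:
train = [np.random.randn(120, 13) for _ in range(5)]
fir = learn_temporal_filter(train)
smoothed = apply_temporal_filter(np.random.randn(80, 13), fir)
```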


A study on the interactive speech recognition mobile robot (대화형 음성인식 이동로봇에 관한 연구)

  • 이재영;윤석현;홍광석
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • Vol. 33B, No. 11
    • /
    • pp.97-105
    • /
    • 1996
  • This paper studies the implementation of a speech recognition mobile robot to which an interactive speech recognition technique is applied. The speech command is uttered as a sentential connected word sequence and is transmitted through a wireless microphone system. LPC cepstrum and short-time energy are computed from the received signal on the DSP board and transferred to a notebook PC. On the notebook PC, a DP matching technique is used as the recognizer, and the recognition results are transferred to the motor control unit, which outputs pulse signals corresponding to the recognized command and drives the stepping motors. A grammar network is applied to reduce the recognition time, so that real-time recognition is achieved. Misrecognized commands are corrected through conversation with the mobile robot, so the user can move the robot in the intended direction.
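A minimal sketch of the DP-matching (dynamic time warping) comparison such a recognizer performs between an incoming LPC-cepstrum sequence and stored command templates. The local distance, path constraints, and the hypothetical command set are illustrative assumptions.

```python
import numpy as np

def dp_match(test, template):
    """Dynamic time warping distance between two (frames, dims) feature sequences."""
    T, R = len(test), len(template)
    D = np.full((T + 1, R + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, R + 1):
            cost = np.linalg.norm(test[i - 1] - template[j - 1])      # local Euclidean distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T, R] / (T + R)                                          # length-normalised path cost

def recognize(test, templates):
    """templates: dict mapping command string -> reference feature sequence."""
    return min(templates, key=lambda cmd: dp_match(test, templates[cmd]))

# Usage with dummy 12-dimensional LPC-cepstrum frames and hypothetical commands:
templates = {"forward": np.random.randn(40, 12), "stop": np.random.randn(35, 12)}
command = recognize(np.random.randn(42, 12), templates)
```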


Characteristics of Laryngeal-Diadochokinesis (L-DDK) in Nonfluent Speakers (비유창성 화자의 후두 교호운동 특성)

  • Han, Ji-Yeon;Lee, Ok-Bun;Park, Hee-Jun;Lim, Hye-Jin
    • Speech Sciences
    • /
    • Vol. 14, No. 2
    • /
    • pp.55-64
    • /
    • 2007
  • Laryngeal DDK involves the rate, pattern, and regularity (periodicity) of vocal fold opening and closing. This study investigated the characteristics of laryngeal DDK in nonfluent and fluent speakers. One speaker with ataxic dysarthria (cerebellar lesion), one speaker who stutters, and 13 normal speakers were evaluated. L-DDK was analyzed with the MSP (Motor Speech Profile, CSL 4400). DDK measures included DDKavr, DDKcvp, DDKjit, and DDKavp. The ataxic dysarthric speaker and the stutterer showed slower and more aperiodic L-DDK (both adductory and abductory movements) than the normal speakers, but the average L-DDK period (ms) for adductory movement was shorter for the speaker who stutters than for the other speaker. Results from this study are preliminary; nonetheless, the L-DDK produced by the nonfluent speakers suggests a possible relation to the slow rate of phonatory initiation in connected speech. In the future, perceptual studies are needed in conjunction with acoustic and speech production studies.
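A sketch of simplified analogues of the DDK measures named above (average rate and period, period jitter, and coefficient of variation of the period), computed from detected onset times of the repeated syllables. These are plain descriptive statistics offered for orientation, not the algorithms implemented in the MSP/CSL 4400 software.

```python
import numpy as np

def ddk_measures(onset_times_s):
    """onset_times_s: onset time (in seconds) of each repetition in an L-DDK trial."""
    periods = np.diff(np.asarray(onset_times_s))          # inter-onset intervals
    mean_p = periods.mean()
    return {
        "DDK_rate_per_s": 1.0 / mean_p,                   # average repetitions per second
        "DDK_avg_period_ms": 1000.0 * mean_p,
        "DDK_jitter_pct": 100.0 * np.mean(np.abs(np.diff(periods))) / mean_p,  # cycle-to-cycle variation
        "DDK_cov_pct": 100.0 * periods.std() / mean_p,    # coefficient of variation of the period
    }

print(ddk_measures([0.00, 0.21, 0.40, 0.62, 0.81, 1.03]))
```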


Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • Vol. 19, No. 3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) recognizes the speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them appropriately. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional convolutional neural network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
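A minimal PyTorch sketch of a 2D-CNN that treats an MFCC matrix as a one-channel image, the kind of model reported as best performing above. The layer sizes, the 40×126 input shape, and the five-class output are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class EmotionCNN2D(nn.Module):
    def __init__(self, n_emotions=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),                 # fixed-size summary regardless of utterance length
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, n_emotions))

    def forward(self, x):              # x: (batch, 1, n_mfcc, frames)
        return self.classifier(self.features(x))

# A batch of MFCC matrices (e.g., 40 coefficients × 126 frames) treated as one-channel images:
model = EmotionCNN2D()
mfcc_batch = torch.randn(8, 1, 40, 126)
print(model(mfcc_batch).shape)         # torch.Size([8, 5]) — scores for the five emotions
```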