• Title/Summary/Keyword: speech process

Search Results: 525

Implementation and Evaluation of an HMM-Based Speech Synthesis System for the Tagalog Language

  • Mesa, Quennie Joy;Kim, Kyung-Tae;Kim, Jong-Jin
    • MALSORI / v.68 / pp.49-63 / 2008
  • This paper describes the development and assessment of a hidden Markov model (HMM) based Tagalog speech synthesis system; Tagalog is the most widely spoken indigenous language of the Philippines. Several aspects of the design process are discussed here. In order to build the synthesizer, a speech database was recorded and phonetically segmented. The constructed speech corpus contains approximately 89 minutes of Tagalog speech organized in 596 spoken utterances. Furthermore, contextual information is determined. The quality of the synthesized speech is assessed by subjective tests employing 25 native Tagalog speakers as respondents. Experimental results show that the new system obtains an MOS of 3.29, which indicates that it is able to produce highly intelligible, neutral Tagalog speech with stable quality even when a small amount of speech data is used for HMM training. (A sketch of how such an MOS is computed from listener ratings follows this entry.)

  • PDF
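
The MOS of 3.29 reported above is simply the mean of the listeners' opinion ratings on the usual five-point scale. The Python sketch below shows that computation with a normal-approximation confidence interval; the individual ratings are hypothetical, since the paper does not list them.

```python
import math
import statistics

def mean_opinion_score(ratings):
    """Average listener ratings on the 1-5 opinion scale and attach a
    95% confidence interval (normal approximation)."""
    mos = statistics.mean(ratings)
    half_width = 1.96 * statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mos, half_width

# Hypothetical ratings from 25 listeners (the paper's raw ratings are not published here).
ratings = [3, 4, 3, 3, 4, 3, 3, 4, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4]
mos, ci = mean_opinion_score(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```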

Usability Test Guidelines for Speech-Oriented Multimodal User Interface (음성기반 멀티모달 사용자 인터페이스의 사용성 평가 방법론)

  • Hong, Ki-Hyung
    • MALSORI / no.67 / pp.103-120 / 2008
  • Basic components of multimodal interfaces, such as speech recognition, speech synthesis, gesture recognition, and multimodal fusion, have their own technological limitations. For example, the accuracy of speech recognition decreases for large vocabularies and in noisy environments. In spite of these limitations, there are many applications in which speech-oriented multimodal user interfaces are very helpful to users. However, in order to expand the application areas of speech-oriented multimodal interfaces, the interfaces must be designed with usability in mind. In this paper, we introduce usability and user-centered design methodology in general. There has been much work on evaluating spoken dialogue systems; we summarize PARADISE (PARAdigm for Dialogue System Evaluation) and PROMISE (PROcedure for Multimodal Interactive System Evaluation), two generalized evaluation frameworks for voice and multimodal user interfaces. We then present usability components for speech-oriented multimodal user interfaces and usability testing guidelines that can be used in a user-centered multimodal interface design process. (A sketch of a PARADISE-style performance function follows this entry.)

  • PDF
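
PARADISE, as usually summarized in the literature, scores a dialogue system as a weighted, normalized task-success measure minus weighted, normalized dialogue costs, with the weights fitted by regressing user-satisfaction scores. The sketch below illustrates only that general form; the cost names, weights, and data are hypothetical and are not taken from the paper above.

```python
from statistics import mean, pstdev

def zscore(value, sample):
    """Normalize a measure against its distribution over the logged dialogues."""
    sd = pstdev(sample)
    return (value - mean(sample)) / sd if sd else 0.0

def paradise_performance(task_success, costs, logs, alpha, weights):
    """Performance = alpha * N(task success) - sum_i w_i * N(cost_i),
    the usual PARADISE formulation; alpha and w_i are normally fitted by
    regression on user satisfaction, but are set by hand here."""
    score = alpha * zscore(task_success, logs["task_success"])
    for name, value in costs.items():
        score -= weights[name] * zscore(value, logs[name])
    return score

# Hypothetical dialogue logs: a kappa-style task-success value and two cost measures per dialogue.
logs = {
    "task_success": [0.6, 0.8, 0.9, 0.7],
    "turns": [12, 9, 8, 15],
    "asr_errors": [3, 1, 0, 5],
}
score = paradise_performance(
    task_success=0.8,
    costs={"turns": 9, "asr_errors": 1},
    logs=logs,
    alpha=0.5,
    weights={"turns": 0.3, "asr_errors": 0.2},
)
print(f"PARADISE-style performance: {score:.2f}")
```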

A Reliability Study on the Auditory-perceptual Evaluation of Parkinsonian Dysarthria (파킨슨증으로 인한 마비말장애의 청지각적 평가에 대한 신뢰도 연구)

  • Kim, Hyang-Hee;Lee, Mi-Sook;Kim, Sun-Woo;Lee, Won-Yong
    • Speech Sciences / v.11 no.4 / pp.129-141 / 2004
  • Auditory-perceptual evaluation has long been utilized in assessing dysarthric speech. The process involves subjective judgement, and the results may vary depending on the clinical experience or training of the listeners. This study investigated the reliability of the auditory-perceptual evaluation of 22 multi-dimensional variables in 6 patients with Parkinsonian speech disorders. Listeners were divided into two groups: one consisted of 6 speech therapists with three or more years of clinical experience, and the other of 6 graduate students without any previous clinical background. The results showed that the former evaluated dysarthric speech with higher inter-rater and intra-rater reliability than the latter. Furthermore, speech variables such as 'precise consonant', 'speech intelligibility', and 'SMR regularity' were more influenced than others by clinical experience. We therefore postulate that a reliable auditory-perceptual evaluation of dysarthric speech may require an adequate amount of clinical training for listeners. (A sketch of one way to quantify inter-rater reliability follows this entry.)

  • PDF
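
The paper does not state here which reliability statistic was used, so the sketch below shows just one common way to summarize inter-rater reliability: the mean pairwise correlation between raters. The ratings are hypothetical.

```python
from itertools import combinations
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two raters' scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def mean_interrater_correlation(ratings_by_rater):
    """Average pairwise correlation over all rater pairs, one simple
    summary of inter-rater reliability."""
    return mean(pearson(a, b) for a, b in combinations(ratings_by_rater, 2))

# Hypothetical 7-point severity ratings from three listeners over six speakers.
raters = [
    [2, 5, 4, 6, 3, 5],
    [3, 5, 4, 6, 2, 5],
    [2, 4, 5, 6, 3, 4],
]
print(f"mean inter-rater r = {mean_interrater_correlation(raters):.2f}")
```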

Audio/Speech Codec Using Variable Delay MDCT/IMDCT (가변 지연 MDCT/IMDCT를 이용한 오디오/음성 코덱)

  • Sangkil Lee;In-Sung Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.2 / pp.69-76 / 2023
  • A high-quality audio/speech codec using the MDCT/IMDCT process can perfectly restore the current frame through an overlap-add process with the previous frame. In the overlap-add process, an algorithmic delay equal to the frame length occurs. In this paper, we propose an MDCT/IMDCT process that reduces this delay by introducing a variable phase shift, and we build a low-delay audio/speech codec by applying the low-delay MDCT/IMDCT algorithm to the ITU-T standard G.729.1 codec. The algorithmic delay in the MDCT/IMDCT process can be reduced from 20 ms to 1.25 ms. The decoded output of the audio/speech codec with low-delay MDCT/IMDCT is evaluated through the PESQ test, an objective quality measure. Despite the reduction in transmission delay, it was confirmed that there is no difference in sound quality from the conventional method. (A sketch of the conventional MDCT/IMDCT overlap-add path, which is where the one-frame delay comes from, follows this entry.)

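The one-frame delay mentioned above comes from the 50%-overlap analysis/synthesis structure of the conventional MDCT/IMDCT: a sample is only fully reconstructed after the next frame has been transformed and overlap-added. The numpy sketch below shows that baseline path with a sine (Princen-Bradley) window; it does not implement the paper's variable-phase-shift modification, and the frame size is arbitrary.

```python
import numpy as np

def mdct(frame, window):
    """MDCT of one 2N-sample frame (N coefficients), windowed at analysis time."""
    N = len(frame) // 2
    n, k = np.arange(2 * N), np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (window * frame) @ basis

def imdct(coeffs, window):
    """IMDCT back to 2N samples, windowed again at synthesis time."""
    N = len(coeffs)
    n, k = np.arange(2 * N), np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return window * (2.0 / N) * (basis @ coeffs)

N = 8                                                        # hop size; each frame spans 2N samples
window = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # sine (Princen-Bradley) window
x = np.random.randn(6 * N)

# 50%-overlap analysis/synthesis: a sample is finished only after the *next*
# frame has been decoded and added, which is the one-frame algorithmic delay.
out = np.zeros_like(x)
for start in range(0, len(x) - 2 * N + 1, N):
    out[start:start + 2 * N] += imdct(mdct(x[start:start + 2 * N], window), window)

# Samples covered by two consecutive frames are restored exactly (time-domain
# aliasing cancellation); only the first and last half-frames remain aliased.
print(np.allclose(out[N:-N], x[N:-N]))
```
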
Google speech recognition of an English paragraph produced by college students in clear or casual speech styles (대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.9 no.4 / pp.43-50 / 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process the natural speech of people without any prior training. However, little research has reported on the use of speech recognition tools in the field of pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles in order to diagnose and resolve students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect data on the word recognition rates of the paragraph. Results showed that the total word recognition rate was 73% with a standard deviation of 11.5%. The word recognition rate of clear speech was around 77.3%, while that of casual speech amounted to 68.7%. The reasons for the lower recognition rate of casual speech were attributed to both individual pronunciation errors and the software itself, as shown in its fricative recognition. Various distributions of unrecognized words were observed depending on the participant and proficiency group. From these results, the author concludes that speech recognition software is useful for diagnosing individual or group pronunciation problems. Further studies on progressive improvement of learners' erroneous pronunciations would be desirable. (A sketch of a word-recognition-rate computation follows this entry.)

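A word recognition rate such as the 73% above can be estimated by aligning the recognizer output against the reference paragraph and counting the matched words. Below is a small sketch of that idea using a word-level alignment from Python's difflib; the exact counting rules of the study may differ, and the example sentences are hypothetical.

```python
import difflib

def word_recognition_rate(reference: str, hypothesis: str) -> float:
    """Fraction of reference words that appear, aligned in order, in the
    recognizer output; one simple way to score a read paragraph."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    matcher = difflib.SequenceMatcher(a=ref, b=hyp, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(ref)

# Hypothetical reference sentence and recognizer output.
reference = "please call stella ask her to bring these things with her from the store"
hypothesis = "please call stella ask her to bring this thing with her from the store"
print(f"word recognition rate: {word_recognition_rate(reference, hypothesis):.1%}")
```
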
Phonological Process of Children with Cleft Palate (구개파열 아동의 음음변동에 관한 연구)

  • Choi, Jae-Nam;Sung, Soo-Jin;Nam, Do-Hyun;Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.16 no.1 / pp.49-52 / 2005
  • Background and Objectives: Children with cleft palate may be impaired in articulation and resonance. This study examined the phonological process usage of 3-, 4-, and 5-year-old children with cleft palate. Materials and Method: Twenty-seven children with cleft palate, aged 3, 4, and 5 years, participated. The authors performed speech evaluation using a picture consonant test for children with cleft palate. The percentage of consonants correct (PCC) and the mean value for each phoneme by place and manner of articulation were evaluated. Results: By place of articulation, omission of velar consonants was the most frequent; by manner of articulation, omission of nasal consonants was the most frequent. Backing, in particular to a glottal stop, was the most prominent phonological process in children with cleft palate. Conclusion: These results may indicate that differences between the articulation disorder associated with cleft palate and other articulation disorders should be considered in the interpretation of speech evaluations. (A sketch of the PCC computation follows this entry.)

  • PDF
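
Percentage of consonants correct (PCC) is the share of target consonants that the child produced correctly. A minimal sketch of that computation with a hypothetical target/production pair; clinical PCC scoring rules (for distortions, additions, and so on) are more detailed than this position-by-position comparison.

```python
def percentage_consonants_correct(target, produced):
    """PCC: share of target consonants produced correctly, compared
    position by position (clinical scoring rules are more detailed)."""
    correct = sum(t == p for t, p in zip(target, produced))
    return 100.0 * correct / len(target)

# Hypothetical target vs. produced consonants; "" marks an omission.
target   = ["k", "t", "p", "n", "s", "m", "k", "t"]
produced = ["ʔ", "t", "p", "n", "ʔ", "m", "",  "t"]
print(f"PCC = {percentage_consonants_correct(target, produced):.1f}%")
```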

Experiment of VoIP Transmission with AMR Speech Codec in Wireless LAN (무선랜 환경에서 AMR 음성부호화기를 적용한 VoIP 전송 실험)

  • Shin, Hye-Jung;Bae, Keun-Sung
    • Speech Sciences / v.11 no.4 / pp.67-73 / 2004
  • Packet loss, jitter, and delay in the Internet are caused mainly by a shortage of network bandwidth, which results from queuing and routing processes in the intermediate nodes of the packet network. In the Internet, where the available bandwidth changes rapidly over time depending on the number of users and the data traffic, controlling the peak transmission bit-rate of a VoIP system according to the channel condition can be very helpful for making use of the available network bandwidth. Adapting the packet size to the channel condition can reduce packet loss and thus improve speech quality. It has been shown in [1] that a VoIP system with an AMR speech codec provides better speech quality than VoIP systems with fixed-rate speech codecs. Using the adaptive codec mode assignment algorithm proposed in [1], in this paper we performed voice transmission experiments over a wireless LAN through the real Internet environment. The experimental results are analyzed and discussed together with our findings. (A sketch of channel-adaptive AMR mode selection follows this entry.)

  • PDF
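
The adaptive mode assignment the authors apply is the algorithm of [1], which is not reproduced here. The sketch below only illustrates the general idea of trading AMR bit-rate against channel condition: as the measured packet loss grows, a lower (more robust, smaller-payload) mode is selected. The thresholds are made up.

```python
# AMR narrowband modes in kbit/s, lowest (most robust) to highest quality.
AMR_MODES = [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2]

def select_amr_mode(packet_loss_rate: float) -> float:
    """Pick a lower AMR bit-rate as the measured packet loss grows, shrinking
    the payload and the load on the wireless channel.  Hypothetical thresholds,
    not the mode-assignment algorithm of [1]."""
    if packet_loss_rate < 0.01:
        return AMR_MODES[-1]        # clean channel: 12.2 kbit/s
    if packet_loss_rate < 0.03:
        return 7.95
    if packet_loss_rate < 0.05:
        return 6.70
    return AMR_MODES[0]             # congested channel: 4.75 kbit/s

for loss in (0.005, 0.02, 0.04, 0.08):
    print(f"loss {loss:.1%} -> AMR {select_amr_mode(loss)} kbit/s")
```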

Phonological processes of vowels from orthographic to pronounced words in the Buckeye Corpus by sex and age groups

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.10 no.2 / pp.25-31 / 2018
  • This paper investigated the phonological processes of monophthongs and diphthongs in the pronounced words present in the Buckeye Corpus and compared the frequency distribution of these processes by sex and age group to provide a clearer understanding of spoken English to linguists and phoneticians. Both orthographic and pronounced words were extracted from the transcribed label scripts of the Buckeye Corpus using R. Next, the phonological processes of monophthongs and diphthongs in the orthographic and pronounced labels were tabulated using R scripts, and a frequency distribution by vowel process type, as well as by sex and age group, was created. The results revealed that 95% of the orthographic words contained the same number of syllables, whereas 5% had different numbers of vowels, showing that speakers tend to preserve vowels in spontaneous speech. In addition, deletion processes were preferred in natural speech, and most vowel deletions occurred in unstressed syllables. Chi-square tests were performed to test for dependence in the distribution of phonological process types between the male and female groups and between the young and old groups. The results showed a very strong correlation, indicating that vowel processes occurred in approximately the same pattern in natural and spontaneous speech data regardless of sex and age, and regardless of whether or not the vowel processes were identical. Based on these results, the author concludes that an analysis of phonological processes in spontaneous speech corpora can greatly enhance practical understanding of spoken English. (A sketch of this kind of tabulation and chi-square test follows this entry.)

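The paper's tabulation was done with R scripts over the Buckeye label files; the Python sketch below mimics the same two steps on toy data: classify each canonical/pronounced vowel pair into a process type, then run a chi-square test of independence between two speaker groups. The classification rule and the data are hypothetical simplifications.

```python
from collections import Counter
from scipy.stats import chi2_contingency

def classify_vowel_process(canonical, produced):
    """Very coarse process label based only on the vowel sequences."""
    if len(produced) < len(canonical):
        return "deletion"
    if len(produced) > len(canonical):
        return "insertion"
    return "same" if produced == canonical else "substitution"

# Hypothetical (canonical, produced) vowel sequences for two age groups.
young = [(["ih", "ah"], ["ih"]), (["iy"], ["iy"]), (["ae", "ow"], ["ae", "ow"])]
old = [(["ih", "ah"], ["ih", "ah"]), (["iy"], ["ih"]), (["ae", "ow"], ["ae"])]

young_counts = Counter(classify_vowel_process(c, p) for c, p in young)
old_counts = Counter(classify_vowel_process(c, p) for c, p in old)

labels = sorted(set(young_counts) | set(old_counts))
table = [[young_counts[l] for l in labels], [old_counts[l] for l in labels]]
chi2, p_value, dof, _ = chi2_contingency(table)
print(labels, table, f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```
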
A Study on the Pitch Detection of Speech Harmonics by the Peak-Fitting (음성 하모닉스 스펙트럼의 피크-피팅을 이용한 피치검출에 관한 연구)

  • Kim, Jong-Kuk;Jo, Wang-Rae;Bae, Myung-Jin
    • Speech Sciences / v.10 no.2 / pp.85-95 / 2003
  • In speech signal processing, exact pitch detection is very important for speech recognition, synthesis, and analysis. If the pitch of a speech signal is detected exactly, it can be used in analysis to obtain the vocal tract parameters properly, in synthesis to change or maintain the naturalness and intelligibility of speech quality easily, and in recognition to eliminate speaker-specific characteristics for speaker independence. In this paper, we propose a new pitch detection algorithm. First, positive center clipping is performed using the slope of the speech signal in order to emphasize the pitch period of the glottal component, with the vocal tract characteristics removed, in the time domain. A rough formant envelope is then computed by peak-fitting the spectrum of the original speech signal in the frequency domain, and a smoothed formant envelope is obtained from it by linear interpolation. The flattened harmonics spectrum is obtained as the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope. An inverse fast Fourier transform (IFFT) of the flattened harmonics then yields the residual signal, from which the vocal tract component has been removed. The performance was compared with the LPC, cepstrum, and ACF methods. With this algorithm, the accuracy of pitch detection is improved and the gross error rate is reduced in voiced speech regions and in transition regions between phonemes. (A sketch of the spectrum-flattening idea follows this entry.)

  • PDF
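
The core idea above is to remove the formant envelope from the spectrum so that only the harmonic (pitch) structure remains, and then transform back to find the period. The sketch below follows that outline very loosely, substituting a moving-average envelope for the paper's peak-fitting and linear interpolation and using a cepstrum-like peak pick on the flattened log spectrum; the test signal is synthetic.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Flatten the log spectrum by removing a smoothed envelope (a moving
    average here, standing in for the paper's peak-fitting plus linear
    interpolation), transform back, and pick the strongest peak in the
    plausible pitch-lag range."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    log_spec = np.log(spectrum + 1e-12)
    envelope = np.convolve(log_spec, np.ones(31) / 31.0, mode="same")  # rough formant envelope
    flattened = log_spec - envelope                # harmonics with the envelope removed
    residual = np.fft.irfft(flattened)             # periodicity shows up as a peak
    lag_lo, lag_hi = int(fs / fmax), int(fs / fmin)
    lag = lag_lo + np.argmax(residual[lag_lo:lag_hi])
    return fs / lag

# Synthetic voiced frame: 120 Hz fundamental with decaying harmonics.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
frame = sum(np.sin(2 * np.pi * 120 * h * t) / h for h in range(1, 20))
print(f"estimated F0: {estimate_pitch(frame, fs):.1f} Hz")
```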

Development of Integrated Speech Training Aids for Hearing Impaired (청각 장애인용 통합형 발음 훈련 기기의 개발)

  • 박상희;김동준
    • Journal of Biomedical Engineering Research / v.13 no.4 / pp.275-284 / 1992
  • In this study, a speech training aid that can display the vocal tract shape and other speech parameters together in real time in a single system is implemented, and a self-training program for this system is developed. To estimate the vocal tract shape, the speech production process is assumed to follow an AR model. Through LPC analysis, the vocal tract shape, intensity, and log spectrum are calculated, and the fundamental frequency and nasality are measured using vibration sensors. (A sketch of LPC-based vocal tract area estimation follows this entry.)

  • PDF
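
Estimating a vocal tract shape from an AR (LPC) model of speech is conventionally done through the reflection (PARCOR) coefficients of a lossless tube model, whose section areas follow A_{m+1} = A_m * (1 - k_m) / (1 + k_m). The sketch below shows that chain on a synthetic frame; it is a generic textbook formulation, not the implementation of the 1992 system, and sign and ordering conventions for k vary across texts.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the LPC normal equations (autocorrelation method) and return
    the reflection (PARCOR) coefficients k_1..k_order."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - np.dot(a[1:i], r[i - 1:0:-1])
        ki = acc / err
        k[i - 1] = ki
        a_prev = a.copy()
        a[i] = ki
        for j in range(1, i):
            a[j] = a_prev[j] - ki * a_prev[i - j]
        err *= 1.0 - ki * ki
    return k

def vocal_tract_areas(k, lip_area=1.0):
    """Section areas of a lossless tube model: A_{m+1} = A_m * (1 - k_m) / (1 + k_m)."""
    areas = [lip_area]
    for km in k:
        areas.append(areas[-1] * (1.0 - km) / (1.0 + km))
    return np.array(areas)

# Hypothetical vowel-like frame: two decaying resonances, Hamming-windowed.
fs = 10000
t = np.arange(300) / fs
frame = (np.sin(2 * np.pi * 700 * t) * np.exp(-40 * t)
         + 0.5 * np.sin(2 * np.pi * 1200 * t) * np.exp(-60 * t)) * np.hamming(300)
r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
print(vocal_tract_areas(levinson_durbin(r, order=10)))
```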