• Title/Summary/Keyword: vowel duration


Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex; Leena Mary
    • ETRI Journal / v.45 no.4 / pp.678-689 / 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAE) in learning the speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOP) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. The initial comparative studies on VAEs and traditional autoencoders (AE) suggest that the former can efficiently learn speaker representations. Investigations on the impact of gender information in speaker recognition also point out that gender-dependent impostor banks lead to higher accuracies. Finally, the evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition.
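The abstract above outlines the core mechanism: a VAE trained on syllable-level prosodic features, then adapted per speaker from a universal model. As a rough illustration, the following is a minimal PyTorch sketch of a VAE over fixed-length prosodic feature vectors; the feature dimensionality, network sizes, and the adaptation workflow noted in the comments are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProsodyVAE(nn.Module):
    """Toy VAE over fixed-length syllable-level prosodic feature vectors."""
    def __init__(self, feat_dim=9, latent_dim=8, hidden=32):
        super().__init__()
        self.enc = nn.Linear(feat_dim, hidden)
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# A universal VAE trained on pooled data could be copied and fine-tuned per speaker
# to obtain a speaker-dependent model (hypothetical workflow, not the paper's recipe).
model = ProsodyVAE()
x = torch.randn(16, 9)          # 16 syllables x 9 prosodic features (dummy data)
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
loss.backward()
```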

Acoustic characteristics and perceptual cues for Korean Stops (한국어 파열음의 음향적 특성과 지각 단서)

  • Lee, Kyung-Hee; Jung, Myung-Sook
    • Speech Sciences / v.7 no.2 / pp.139-155 / 2000
  • The aim of this research is to investigate the acoustic characteristics of the three types of Korean stops (plain, tense, and aspirated) and to use these as a basis for determining which one(s) can serve as perceptual cues. We examined the acoustic characteristics of Korean stops, in particular voice onset time (VOT), closure duration (CD), the pitch of the following vowels, and differences in how the intensity of the stops builds up after the onset of voicing. These characteristics pattern differently in word-initial and word-medial positions: in word-initial position, the three Korean stops are distinguished by VOT and pitch, whereas in word-medial position they are distinguished by CD, VOT, and pitch. However, the acoustic characteristics do not all have the same value as perceptual cues. In both word-initial and word-medial positions, the immediately following vowels play the most important role in perceiving Korean stops, and in word-medial positions CD and VOT also play important perceptual roles. For a more fine-grained distinction among Korean stops, future research should investigate which factor(s) of the following vowels is/are the most decisive perceptual cue(s). Based on our investigation, however, we conclude that it is highly plausible that pitch is one of the most important perceptual cues for distinguishing the three Korean stops.
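Since the abstract centres on VOT and closure duration as the measured cues, here is a small, hedged Python sketch of how those two durations could be computed from hand-labelled landmarks; the landmark names and example times are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class StopToken:
    """Hypothetical hand-labelled landmarks (seconds) for one stop token."""
    closure_onset: float   # end of preceding vowel / start of closure
    burst: float           # stop release burst
    voicing_onset: float   # onset of voicing in the following vowel
    vowel_f0: float        # F0 (Hz) at the onset of the following vowel

def vot(tok: StopToken) -> float:
    """Voice onset time: voicing onset minus release burst."""
    return tok.voicing_onset - tok.burst

def closure_duration(tok: StopToken) -> float:
    """Closure duration: burst minus closure onset (measurable word-medially)."""
    return tok.burst - tok.closure_onset

tok = StopToken(closure_onset=0.120, burst=0.195, voicing_onset=0.260, vowel_f0=210.0)
print(f"VOT = {vot(tok) * 1000:.0f} ms, CD = {closure_duration(tok) * 1000:.0f} ms")
```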


The final stop consonant perception in typically developing children aged 4 to 6 years and adults (4-6세 정상발달아동 및 성인의 종성파열음 지각력 비교)

  • Byeon, Kyeongeun; Ha, Seunghee
    • Phonetics and Speech Sciences / v.7 no.1 / pp.57-65 / 2015
  • This study aimed to identify the developmental pattern of final stop consonant perception using a gating task. Sixty-four subjects participated: 16 children aged 4 years, 16 children aged 5 years, 17 children aged 6 years, and 15 adults. One-syllable words with consonant-vowel-consonant (CVC) structure, mokㄱ-motㄱ and papㄱ-patㄱ, were used as stimuli. To remove the redundancy of acoustic cues in the stimulus words, segments of 40 ms (-40 ms) and 60 ms (-60 ms) were deleted from the full duration of the final consonant. Three conditions (the whole word, -40 ms, and -60 ms) were used in the speech perception experiment, and 48 tokens (4 stimuli × 3 conditions × 4 trials) in total were presented to each participant. The results indicated that 5- and 6-year-olds showed final consonant perception similar to adults for the stimuli papㄱ-patㄱ, while only the 6-year-old children showed adult-like perception for the stimuli mokㄱ-motㄱ. The results suggest that younger typically developing children require more acoustic information than older children and adults to accurately perceive final consonants, and that final consonant perception ability may become adult-like around 6 years of age. The study provides fundamental data on the developmental pattern of speech perception in typically developing children, which can be compared with that of children with communication disorders.
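The gating manipulation described above (deleting 40 ms and 60 ms of the final consonant) can be illustrated with a short Python sketch; the use of the soundfile library, the file names, and the simplification of cutting from the end of the recording rather than from a labelled final-consonant interval are all assumptions.

```python
import soundfile as sf  # assumed available; any WAV I/O library would do

def gate_final_consonant(in_path, out_path, cut_ms):
    """Remove the last cut_ms milliseconds of the recording, approximating the
    -40 ms / -60 ms gating conditions (simplified: cuts from the very end of
    the file rather than from a labelled final-consonant interval)."""
    audio, sr = sf.read(in_path)
    n_cut = int(sr * cut_ms / 1000)
    sf.write(out_path, audio[:-n_cut], sr)

for ms in (40, 60):
    gate_final_consonant("pap.wav", f"pap_minus{ms}ms.wav", ms)  # hypothetical files
```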

Acoustic Characteristics of 'Short Rushes of Speech' using Alternate Motion Rates in Patients with Parkinson's Disease (파킨슨병 환자의 교대운동속도 과제에서 관찰된 '말 뭉침'의 음향학적 특성)

  • Kim, Sun Woo; Yoon, Ji Hye; Lee, Seung Jin
    • Phonetics and Speech Sciences / v.7 no.2 / pp.55-62 / 2015
  • It is widely accepted that Parkinson's disease (PD) is the most common cause of hypokinetic dysarthria, and its characteristic 'short rushes of speech' become more evident as the severity of the motor disorder increases. Speech alternate motion rates (AMRs) are particularly useful for observing not only rate abnormalities but also deviant speech. However, apart from their perceptual characteristics, relatively little is known about 'short rushes of speech' in the AMRs of PD. The purpose of this study was to examine which acoustic features of 'short rushes of speech' in AMR tasks are robust indicators of Parkinsonian speech. Syllabic repetitions (/pə/, /tə/, /kə/) in AMR tasks were analyzed acoustically by inspecting spectrograms in the Computerized Speech Lab in 9 patients with PD. Acoustically, we found three characteristics of 'short rushes of speech': 1) vocalized consonants without closure duration (VC), 76.3%; 2) no consonant segmentation (NC), 18.6%; 3) no vowel formant frequency (NV), 5.1%. Based on these results, 'short rushes of speech' may reflect a failure to reach and maintain phonatory targets. To best achieve therapeutic goals and make treatment most efficacious, it is important to incorporate training methods based on both phonation and articulation.
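The percentages reported above come from tallying spectrogram-based labels over syllabic repetitions. A toy Python sketch of such a tally is shown below; the label names and example sequence are hypothetical and only mirror the categories named in the abstract.

```python
from collections import Counter

# Hypothetical per-syllable labels from spectrogram inspection of /pə tə kə/ repetitions:
# "VC" = vocalized consonant without closure duration, "NC" = no consonant segmentation,
# "NV" = no vowel formant frequency, "OK" = canonically produced syllable.
labels = ["OK", "VC", "VC", "NC", "OK", "VC", "NV", "VC"]

abnormal = [lab for lab in labels if lab != "OK"]
counts = Counter(abnormal)
for category, n in counts.items():
    print(f"{category}: {100 * n / len(abnormal):.1f}% of abnormal syllables")
```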

Classification of Diphthongs using Acoustic Phonetic Parameters (음향음성학 파라메터를 이용한 이중모음의 분류)

  • Lee, Suk-Myung; Choi, Jeung-Yoon
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.167-173 / 2013
  • This work examines classification of diphthongs, as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are examined, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, for 4-way classification, respectively. Adding the acoustic features to widely used Mel-frequency cepstral coefficients also improves classification.
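The paper reports a balanced error rate, i.e., the mean of per-class error rates. A short Python sketch of that metric, with dummy labels, is given below as an assumption of how "balanced error rate" is computed here.

```python
import numpy as np

def balanced_error_rate(y_true, y_pred, classes):
    """Mean of per-class error rates, so each diphthong counts equally
    regardless of how often it occurs in the test set."""
    errs = []
    for c in classes:
        mask = (y_true == c)
        errs.append(np.mean(y_pred[mask] != c))
    return float(np.mean(errs))

# Dummy labels for illustration only.
y_true = np.array(["aw", "ay", "oy", "aw", "ay", "oy"])
y_pred = np.array(["aw", "oy", "oy", "ay", "ay", "oy"])
print(balanced_error_rate(y_true, y_pred, ["aw", "ay", "oy"]))
```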

A Comparative Study of Glottal Data from Normal Adults Using Two Laryngographs

  • Yang, Byung-Gon; Wang, Soo-Geun; Kwon, Soon-Bok
    • Speech Sciences / v.10 no.1 / pp.15-25 / 2003
  • A laryngograph was developed in our laboratory to measure the opening and closing movements of the vocal folds. This study attempted to evaluate its performance by comparing its glottal data with those of the original laryngograph. Ten normal Korean adults participated in the experiment, and each subject produced a sustained vowel /a/ for about five seconds. The study compared F0 values, contact quotients (the duration of vocal fold closure over one glottal pulse), and area quotients (closed over open portions of the glottal wave) derived from glottal waves using both the original and the new laryngograph. Results showed that the means and standard deviations of the two laryngographs were almost comparable, with a correlation coefficient of 0.662, although a minor systematic shift below the values of the original laryngograph was observed. The absolute mean difference converged to about 1 Hz, which suggests the possibility of adopting a threshold for rejecting inappropriate pitch values. The contact quotient of the normal subjects was slightly over 50% in citation speech. Finally, the area quotient converged to 1. We will pursue further studies on patients with voice disorders in the future.
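As a rough illustration of the contact quotient mentioned above (the closed-phase fraction of each glottal cycle), the following Python sketch applies a simple amplitude threshold per cycle; real laryngograph software uses more robust cycle detection, so this is only a schematic approximation with assumed parameter values.

```python
import numpy as np

def contact_quotient(egg, sr, f0_hz, threshold=0.5):
    """Crude contact quotient: within each glottal cycle (about sr/f0 samples),
    the fraction of samples whose normalised EGG amplitude exceeds a threshold
    is taken as the closed phase."""
    period = int(round(sr / f0_hz))
    cqs = []
    for start in range(0, len(egg) - period, period):
        cycle = egg[start:start + period]
        norm = (cycle - cycle.min()) / (np.ptp(cycle) + 1e-9)  # scale cycle to 0..1
        cqs.append(np.mean(norm > threshold))
    return float(np.mean(cqs))

# Stand-in signal: a pure sine gives a contact quotient near 0.5.
sr, f0 = 10_000, 120.0
t = np.arange(sr) / sr
egg = np.sin(2 * np.pi * f0 * t)
print(contact_quotient(egg, sr, f0))
```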


Guidance to the Praat, a Software for Speech and Acoustic Analysis (음성 및 음향분석 프로그램 Praat의 임상적 활용법)

  • Seong, Cheol Jae
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.33 no.2 / pp.64-76 / 2022
  • Praat is a useful analysis tool for linguists, engineers, doctors, speech-language pathologists, music majors, and natural scientists. Basic parameters such as duration, pitch, and energy, and perturbation parameters such as jitter and shimmer, can easily be measured and manipulated in the sound editor. When a more in-depth analysis is needed, it is recommended to understand the advanced menus of the object window and learn how to use them. Among the object window menus, vowel formant analysis, spectrum analysis, and cepstrum analysis are especially useful in the clinical field. The spectrum object shows the energy distribution along the frequency axis and can therefore be used for voice quality measurement and the diagnosis of patients with voice disorders. The cepstrum object is useful for speech analysis when the periodicity of a sound object cannot be measured. The low-to-high spectral ratio obtained from the spectrum object and the CPPs measured from the cepstrum object have attracted many researchers, and the CPPs measured in Praat have been shown to perform relatively well.
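The measurements named above (duration, pitch, jitter, shimmer) can also be scripted. The sketch below uses parselmouth, a Python wrapper around Praat, rather than Praat's own scripting language; the file name and analysis parameters are assumptions, and the commands invoked are the standard Praat commands for these measures.

```python
import parselmouth                      # Python interface to Praat (assumed installed)
from parselmouth.praat import call

snd = parselmouth.Sound("voice_sample.wav")            # hypothetical recording
duration = call(snd, "Get total duration")
pitch = snd.to_pitch()
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")

# Point process of glottal pulses, used for jitter/shimmer (assumed 75-600 Hz range).
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 600)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

print(duration, mean_f0, jitter_local, shimmer_local)
```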

Effects of phonological and phonetic information of vowels on perception of prosodic prominence in English

  • Suyeon Im
    • Phonetics and Speech Sciences / v.15 no.3 / pp.1-7 / 2023
  • This study investigates how the phonological and phonetic information of vowels influences the perception of prosodic prominence by linguistically untrained listeners, using public speech in American English. We first examined the phonetic realization of vowels in the speech material (i.e., maximum F0, F0 range, phone rate [as a measure of duration that takes the speech rate of the utterance into account], and mean intensity). Results showed that the high vowels /i/ and /u/ tended to have the highest maximum F0, while the low vowels /æ/ and /ɑ/ tended to have the highest mean intensity; both high and low vowels had similarly high phone rates. Next, we examined the effects of the vowels' phonological and phonetic information on listeners' perception of prosodic prominence. The results showed that vowels significantly affected the likelihood of perceived prominence independently of acoustic cues. The high and low vowels affected the probability of perceived prominence less than the mid vowels /ɛ/ and /ʌ/, although the former were more likely to be phonetically enhanced in the speech than the latter. Overall, these results suggest that the perception of prosodic prominence in English is not directly influenced by signal-driven factors (i.e., vowels' acoustic information) but is mediated by expectation-driven factors (e.g., vowels' phonological information).
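For the acoustic measures listed above, a hedged Python sketch is given below; the exact definition of "phone rate" in the paper is not spelled out, so the duration-normalisation used here is only one plausible reading and is labelled as an assumption in the code.

```python
import numpy as np

def vowel_prosody_measures(f0_track, intensity_track, vowel_dur, n_phones_utt, utt_dur):
    """Sketch of the per-vowel measures named in the abstract. 'phone_rate' here
    normalises the vowel's duration by the utterance's phones-per-second rate;
    this normalisation is an assumption, since the paper's exact formula is not given."""
    voiced = f0_track[f0_track > 0]                  # keep voiced frames only
    speech_rate = n_phones_utt / utt_dur             # phones per second in the utterance
    return {
        "max_f0": float(voiced.max()),
        "f0_range": float(voiced.max() - voiced.min()),
        "mean_intensity": float(np.mean(intensity_track)),
        "phone_rate": vowel_dur * speech_rate,       # duration relative to speech rate
    }

# Dummy per-vowel F0 (Hz) and intensity (dB) tracks, 10-ms frames, for illustration only.
f0 = np.array([0.0, 210.0, 220.0, 230.0, 225.0, 0.0])
intensity = np.array([58.0, 62.0, 65.0, 66.0, 64.0, 60.0])
print(vowel_prosody_measures(f0, intensity, vowel_dur=0.06, n_phones_utt=40, utt_dur=3.2))
```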

COMPARISON OF SPEECH PATTERNS ACCORDING TO THE DEGREE OF SURGICAL SETBACK IN MANDIBULAR PROGNATHIC PATIENTS (하악골 전돌증 수술 후 하악골 이동량에 따른 발음 양상에 관한 비교 연구)

  • Shin, Ki-Young; Lee, Dong-Keun; Oh, Seung-Hwan; Sung, Hun-Mo; Lee, Suk-Hang
    • Maxillofacial Plastic and Reconstructive Surgery / v.23 no.1 / pp.48-58 / 2001
  • After mandibular setback surgery, we found changes in speech patterns and speech organs. This investigation was undertaken to examine the nature and degree of changes in speech patterns according to the amount of surgical setback in mandibular prognathic patients. Thirteen patients with skeletal Class III malocclusion were studied preoperatively and postoperatively over 6 months. They had undergone mandibular setback surgery via bilateral sagittal split ramus osteotomy (BSSRO). The patients were split into two groups: Group 1 included patients whose mandibular setback was 6 mm or less, and Group 2 those with a setback above 6 mm. The control group consisted of two adults with normal speech patterns. A phonetician performed narrow phonetic transcriptions of tape-recorded words and sentences produced by each of the patients, and the acoustic characteristics of the plosives, fricatives, and flaps were analyzed with a phonetic computer program (Computerized Speech Lab (CSL) Model 4300B, USA). The results are as follows: 1. In general, patients showed longer closure duration of plosives, shorter VOT (voice onset time), and a higher ratio of closure duration to VOT. 2. Patients showed a diffuse distribution of frication noise energy in fricatives more frequently than the control group. 3. In fricatives, compact forms were more frequent in Group 1 than in Group 2. 4. In general, a short closure duration for /ㄹ/ was not realized in the patients' flaps; instead, it was realized as a fricative, a sonorant with a vowel-like formant structure, or a trill-type consonant. 5. The abnormality of the patients' articulation was reduced, but the adaptation of their articulation after surgery was not complete, and the degree of adaptation differed according to the degree of surgical setback.


Improvements on Speech Recognition for Fast Speech (고속 발화음에 대한 음성 인식 향상)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.25 no.2 / pp.88-95 / 2006
  • In this paper, a method for improving the performance of an automatic speech recognition (ASR) system for conversational speech is proposed, which mainly focuses on increasing robustness against rapidly spoken utterances. The proposed method does not require an additional speech recognition task to represent speaking rate quantitatively. The energy distribution in selected frequency bands is employed to detect vowel regions, and the number of vowels per second is then computed as the speaking rate. In previous methods for improving performance on fast speech, a sequence of feature vectors is expanded by a given scaling factor, which is computed as the ratio between the standard phoneme duration and the measured one. In the method proposed herein, however, utterances are classified by their speaking rates, and the scaling factor is determined individually for each class using a maximum likelihood criterion. Results of ASR experiments on 10-digit mobile phone numbers confirm that the overall error rate was reduced by 17.8% when the proposed method was employed.
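The abstract describes two steps: estimating speaking rate from detected vowels per second, and expanding the feature-vector sequence by a class-dependent scaling factor. A minimal NumPy sketch of both steps follows; the threshold, scaling factors, and feature dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def expand_features(feats, scale):
    """Linearly interpolate a (T, D) feature sequence to length round(T * scale),
    mimicking the frame-expansion step described for fast speech."""
    T, D = feats.shape
    new_T = max(1, int(round(T * scale)))
    old_idx = np.linspace(0, T - 1, T)
    new_idx = np.linspace(0, T - 1, new_T)
    return np.stack([np.interp(new_idx, old_idx, feats[:, d]) for d in range(D)], axis=1)

def speaking_rate(n_vowels, utt_seconds):
    """Detected vowels per second, used here as the speaking-rate measure."""
    return n_vowels / utt_seconds

# Hypothetical class-dependent scaling factors (e.g., chosen by a maximum likelihood
# criterion over held-out data); the paper's actual values are not reproduced here.
CLASS_SCALE = {"normal": 1.0, "fast": 1.2}

feats = np.random.randn(120, 13)              # 120 frames of 13-dim MFCCs (dummy data)
rate = speaking_rate(n_vowels=28, utt_seconds=4.0)
cls = "fast" if rate > 6.0 else "normal"      # assumed threshold, for illustration only
stretched = expand_features(feats, CLASS_SCALE[cls])
print(rate, cls, stretched.shape)
```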