• Title/Summary/Keyword: phonetics


The fundamental frequency (f0) distribution of Korean speakers in a dialogue corpus using Praat and R (Praat과 R로 분석한 한국인 대화 음성 말뭉치의 fundamental frequency(f0)값 분포)

  • Byunggon Yang
    • Phonetics and Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.17-25
    • /
    • 2023
  • This study examines the fundamental frequency (f0) distribution of 2,740 Korean speakers in a dialogue speech corpus. Praat and R were used to collect and analyze the acoustic f0 data after removing extreme values based on the interquartile f0 range of the intonational phrases produced by each individual speaker. Results showed that the average f0 value across all speakers was 185 Hz and the median was 187 Hz. The f0 data were positively skewed, with a skewness of 0.11, and the kurtosis was -0.09, close to a normal distribution. The pitch values of daily conversations varied over a range of 238 Hz. Separate examination of the male and female groups showed distinct median f0 values: 114 Hz for males and 199 Hz for females, and a t-test between the two groups yielded a significant difference. The skewness, which describes the shape of the distribution, was 1.24 for the male group and 0.58 for the female group; the kurtosis was 5.21 and 3.88 for the male and female groups, respectively, with the male group appearing leptokurtic. A regression analysis between median f0 and age yielded a slope of 0.15 for the male group and -0.586 for the female group, indicating a divergent relationship. In conclusion, a normative f0 distribution across Korean age and sex groups can be examined in a conversational speech corpus recorded by a large number of participants. However, more rigorous data may be required to define the relation between age and f0 values.
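The distribution statistics reported above (IQR-based outlier removal, mean/median, skewness, kurtosis, t-test, regression of median f0 on age) can be reproduced with standard tools. The following is a minimal Python sketch, not the authors' R code; it assumes per-speaker f0 measurements are already extracted, and the function and variable names are illustrative.

```python
# Minimal sketch of the distribution analysis described above (the study itself used Praat and R).
# Assumes f0 values have already been extracted per speaker; names are illustrative.
import numpy as np
from scipy import stats

def iqr_filter(f0_values):
    """Drop extreme f0 values outside 1.5 * IQR (a generic outlier rule)."""
    f0 = np.asarray(f0_values, dtype=float)
    q1, q3 = np.percentile(f0, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return f0[(f0 >= lo) & (f0 <= hi)]

def describe_f0(f0_values):
    """Summary statistics of the f0 distribution after outlier removal."""
    f0 = iqr_filter(f0_values)
    return {
        "mean": np.mean(f0),
        "median": np.median(f0),
        "skewness": stats.skew(f0),
        "kurtosis": stats.kurtosis(f0),  # excess kurtosis: 0 for a normal distribution
    }

def compare_sex_groups(male_medians, female_medians):
    """Welch t-test between per-speaker median f0 values of two groups."""
    return stats.ttest_ind(male_medians, female_medians, equal_var=False)

def f0_age_slope(ages, medians):
    """Slope of a simple linear regression of median f0 on age."""
    slope, intercept, r, p, se = stats.linregress(ages, medians)
    return slope, p
```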

Aspects of Korean rhythm realization by second language learners: Focusing on Chinese learners of Korean (제 2언어 학습자의 한국어 리듬 실현양상 -중국인 한국어 학습자를 중심으로-)

  • Youngsook Yune
    • Phonetics and Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.27-35
    • /
    • 2023
  • This study aimed to investigate the effect of Chinese on the production of Korean rhythm. Korean and Chinese are typologically classified into different rhythmic categories; even so, the phonological properties of the two languages are similar in some respects and different in others. As a result, Chinese can exert both positive and negative influences on the realization of Korean rhythm. To investigate the influence of L2 learners' native-language rhythm on their target language, we conducted an acoustic analysis of the speech of five Korean native speakers (KS) and ten advanced Chinese learners of Korean (CS) using rhythm metrics such as %V, VarcoV, and nPVI. The analyzed material was a short paragraph of five sentences containing a variety of syllable structures. The results showed that KS and CS rhythms were similar in %V, VarcoV, and nPVI_S. However, unlike KS, CS showed characteristics closer to those of a stress-timed language in the %V and VarcoV values, and there was also a significant difference in nPVI_V values. These results demonstrate a negative influence of the native language on the realization of Korean rhythm, which can be attributed to the fact that not all vowels in a Chinese sentence are pronounced with the same emphasis because of the neutral tone. In this sense, this study allowed us to observe the influence of L1 on the L2 production of rhythm.
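The rhythm metrics named in this abstract (%V, VarcoV, nPVI) have standard definitions in the rhythm-metric literature. The sketch below is a generic Python illustration of those definitions, assuming vocalic and consonantal interval durations have already been segmented; it is not the authors' analysis script.

```python
# Illustrative implementations of the rhythm metrics named above (%V, VarcoV, nPVI),
# assuming vocalic and consonantal interval durations (in seconds) are already segmented.
import numpy as np

def percent_v(vowel_intervals, consonant_intervals):
    """%V: percentage of total utterance duration that is vocalic."""
    v = np.sum(vowel_intervals)
    c = np.sum(consonant_intervals)
    return 100.0 * v / (v + c)

def varco(intervals):
    """Varco (VarcoV or VarcoC): standard deviation normalized by the mean, times 100."""
    d = np.asarray(intervals, dtype=float)
    return 100.0 * np.std(d, ddof=1) / np.mean(d)

def npvi(intervals):
    """Normalized pairwise variability index over successive interval durations."""
    d = np.asarray(intervals, dtype=float)
    pair_terms = np.abs(d[1:] - d[:-1]) / ((d[1:] + d[:-1]) / 2.0)
    return 100.0 * np.mean(pair_terms)
```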

A comparison of the absolute error of estimated speaking fundamental frequency (AEF0) among etiological groups of voice disorders (음성장애의 병인 집단 간 추정 발화 기본주파수 절대 오차 비교)

  • Seung Jin Lee;Jae-Yol Lim;Jaeock Kim
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.53-60
    • /
    • 2023
  • This study compared the absolute error of estimated speaking fundamental frequency (AEF0) obtained from voice range profile (VRP) and speech range profile (SRP) tasks across etiological groups with voice disorders. Additionally, we explored the association between AEF0 and related voice parameters within each etiological group. The participants were 120 individuals: 30 each from the functional (FUNC), organic (ORGAN), and neurological (NEUR) voice disorder groups, and a normal control group (NC). Each participant performed the VRP and SRP tasks, and the fundamental frequency of connected speech was measured using electroglottography (EGG). When the AEF0 measures were compared across the etiological groups, there were no differences in Grade or Severity among the patients. However, variations were observed in AEF0VRP and AEF0SUM: AEF0VRP was higher in the ORGAN group than in the FUNC and NC groups, whereas AEF0SUM was higher in the ORGAN group than in the NC group. Furthermore, within the FUNC and NEUR groups, AEF0 showed a positive correlation with Grade, while in the ORGAN group it was positively correlated with the mean closed quotient (CQ). Attention should be paid to the application of AEF0 measures and related voice variables according to the etiological group. This study provides foundational information for the clinical application of AEF0 measures.
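As a rough illustration of the error measure itself, AEF0 can be read as the absolute difference between a speaking F0 estimated from a range-profile task and a reference speaking F0 measured from connected speech. The sketch below computes such an error in Hz and in semitones; the semitone conversion and the names used here are assumptions for illustration, not the paper's exact operational definition.

```python
# Hedged illustration of an absolute-error measure for estimated speaking F0 (AEF0).
# The paper's exact definition may differ; this only shows the general idea.
import math

def aef0_hz(estimated_f0, reference_f0):
    """Absolute error in Hz between an estimated and a reference speaking F0."""
    return abs(estimated_f0 - reference_f0)

def aef0_semitones(estimated_f0, reference_f0):
    """The same error expressed in semitones (12 * log2 of the frequency ratio)."""
    return abs(12.0 * math.log2(estimated_f0 / reference_f0))

# Example: F0 estimated from a range-profile task vs. F0 measured from connected speech (EGG).
print(aef0_hz(210.0, 198.0))                    # 12.0 Hz
print(round(aef0_semitones(210.0, 198.0), 2))   # ~1.02 semitones
```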

Comparison of mean airflow rate before and after treatment in patients with sulcus vocalis according to aerodynamic analysis methods (성대구증 환자의 공기역학적 검사 방법에 따른 치료 전과 후의 평균호기류율 비교)

  • Seung Yeon Lee;Hong-Shik Choi;Jaeock Kim
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.61-69
    • /
    • 2023
  • Sulcus vocalis is characterized by incomplete closure of the vocal folds, with a high mean airflow rate (MFR) as a distinctive feature. The MFR is measured using two aerodynamic analysis methods of the phonatory aerodynamic system (PAS), the maximum sustained phonation protocol (MXPH) and the voicing efficiency protocol (VOEF), and the results may vary depending on the method. This study compared the differences in MFR before and after treatment (microsurgery and voice therapy) according to the MXPH and VOEF protocols of the PAS in 30 patients with sulcus vocalis. Additionally, we examined whether there were differences in subjective voice evaluation (voice handicap index, VHI), perceptual voice evaluation (GRBS), and fundamental frequency (F0) before and after treatment. The results showed significant differences between the two methods both before and after treatment in patients with sulcus vocalis. However, there was no significant difference between the methods in the change from before to after treatment. The VHI and GRBS scores decreased significantly after treatment, whereas F0 showed no significant difference before and after treatment. This study indicates that when evaluating MFR changes in patients with sulcus vocalis, either aerodynamic analysis method (MXPH or VOEF) can be used.
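The comparisons described above are paired within patients (two protocols, two time points). The sketch below shows how such paired comparisons and change scores could be set up in Python; it is a generic illustration, not the authors' actual statistical procedure, and the argument names are assumptions.

```python
# Sketch of paired comparisons of MFR, assuming one value per patient per condition.
# Illustrative only; not the authors' statistical procedure.
import numpy as np
from scipy import stats

def compare_methods(mfr_mxph, mfr_voef):
    """Paired comparison of the two PAS protocols (MXPH vs. VOEF) in the same patients."""
    return stats.ttest_rel(mfr_mxph, mfr_voef)

def compare_pre_post(mfr_pre, mfr_post):
    """Paired comparison of MFR before and after treatment in the same patients."""
    return stats.ttest_rel(mfr_pre, mfr_post)

def compare_change_scores(pre_mxph, post_mxph, pre_voef, post_voef):
    """Compare the treatment-related change in MFR between the two protocols."""
    change_mxph = np.asarray(post_mxph, float) - np.asarray(pre_mxph, float)
    change_voef = np.asarray(post_voef, float) - np.asarray(pre_voef, float)
    return stats.ttest_rel(change_mxph, change_voef)
```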

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.71-80
    • /
    • 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, it aimed to develop a deep-learning-based identification model for Korean speakers who stutter using a convolutional neural network (CNN). To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and each segment was labeled as 'fluent', 'blockage', 'prolongation', or 'repetition'. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used to detect and classify each type of stuttered disfluency. However, only five instances of prolongation were found, so this type was excluded from the classification model. Results showed that the accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were as follows: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the CNN-based automatic classifier for detecting stuttered disfluencies was validated, its performance was inadequate, especially for the blockage and prolongation types. Consequently, building a large speech database that collects data according to the types of stuttered disfluencies was identified as a necessary foundation for improving classification performance.
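The pipeline described here (phrase-level audio segments, MFCC features, a CNN classifier over three retained labels) can be sketched with widely used libraries. The code below is a generic baseline using librosa and Keras, not the authors' architecture; the MFCC patch size, layer choices, and label set are assumptions for illustration.

```python
# Generic sketch of an MFCC + CNN disfluency classifier (not the authors' exact model).
# Assumes phrase-level audio segments and three labels: fluent, blockage, repetition.
import numpy as np
import librosa
import tensorflow as tf

N_MFCC, N_FRAMES, N_CLASSES = 13, 200, 3

def mfcc_patch(wav_path, sr=16000):
    """Load one phrase-level segment and return a fixed-size MFCC patch."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)
    mfcc = librosa.util.fix_length(mfcc, size=N_FRAMES, axis=1)  # pad/trim time axis
    return mfcc[..., np.newaxis]  # shape: (N_MFCC, N_FRAMES, 1)

def build_cnn():
    """Small 2-D CNN over the MFCC patch, softmax over the disfluency classes."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu", padding="same",
                               input_shape=(N_MFCC, N_FRAMES, 1)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```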

Modified AWSSDR method for frequency-dependent reverberation time estimation (주파수 대역별 잔향시간 추정을 위한 변형된 AWSSDR 방식)

  • Min Sik Kim;Hyung Soon Kim
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.91-100
    • /
    • 2023
  • Reverberation time (T60) is a typical acoustic parameter that provides information about reverberation. Since the impact of reverberation varies across frequency bands even in the same space, frequency-dependent (FD) T60, which offers more detailed insight into the acoustic environment, can be useful. However, most conventional blind T60 estimation methods, which estimate T60 from speech signals, focus on fullband T60 estimation, and the few blind FDT60 estimation methods commonly show poor performance in the low-frequency bands. This paper introduces a modified approach based on the attentive-pooling-based weighted sum of spectral decay rates (AWSSDR), previously proposed for blind T60 estimation, extending its target from fullband T60 to FDT60. The experimental results show that the proposed method outperforms conventional blind FDT60 estimation methods on the acoustic characterization of environments (ACE) challenge evaluation dataset. Notably, it consistently exhibits excellent estimation performance in all frequency bands. This demonstrates that the mechanism of the AWSSDR method is valuable for blind FDT60 estimation: by processing the spectral decay rates associated with the physical properties of reverberation in each frequency band, it reflects the frequency-dependent variation in the impact of reverberation and aggregates information about FDT60 from the speech signal.
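As background for the feature this family of methods builds on, a spectral decay rate can be obtained as the slope of the log-magnitude envelope over time within each frequency band of a short-time spectrogram. The sketch below computes such per-band decay rates; it is a simplified illustration of the underlying feature only, not the AWSSDR model or its attentive pooling, and the frame and segment sizes are arbitrary.

```python
# Simplified illustration of per-band spectral decay rates (the feature underlying
# AWSSDR-style blind T60 estimation); not the attentive-pooling model itself.
import numpy as np
import librosa

def band_decay_rates(y, sr, n_fft=512, hop=128, seg_frames=20):
    """Slope (dB/s) of the log-magnitude over time in each frequency bin, per short segment."""
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    log_S = 20.0 * np.log10(S + 1e-10)            # dB magnitude, shape: bins x frames
    t = np.arange(seg_frames) * hop / sr          # time axis of one segment (seconds)
    rates = []
    for start in range(0, log_S.shape[1] - seg_frames, seg_frames):
        seg = log_S[:, start:start + seg_frames]  # bins x seg_frames
        slopes = np.polyfit(t, seg.T, deg=1)[0]   # least-squares slope for every bin at once
        rates.append(slopes)
    return np.array(rates)                        # shape: segments x frequency bins
```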

Machine-learning-based out-of-hospital cardiac arrest (OHCA) detection in emergency calls using speech recognition (119 응급신고에서 수보요원과 신고자의 통화분석을 활용한 머신 러닝 기반의 심정지 탐지 모델)

  • Jong In Kim;Joo Young Lee;Jio Chung;Dae Jin Shin;Dong Hyun Choi;Ki Hong Kim;Ki Jeong Hong;Sunhee Kim;Minhwa Chung
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.109-118
    • /
    • 2023
  • Cardiac arrest is a critical medical emergency in which an immediate response is essential for patient survival. This is especially true for out-of-hospital cardiac arrest (OHCA), for which the actions of emergency medical services in the early stages significantly impact outcomes. However, in Korea, a challenge arises from the shortage of dispatchers who handle a large volume of emergency calls. In such situations, a machine-learning-based OHCA detection program can assist responders and improve patient survival rates. In this study, we address this challenge by developing a machine-learning-based OHCA detection program that analyzes transcripts of conversations between responders and callers to identify instances of cardiac arrest. The proposed system includes an automatic transcription module for these conversations, a text-based cardiac arrest detection model, and the server and client components needed to deploy the program. Importantly, the experimental results demonstrate the model's effectiveness, achieving an F1 score of 79.49% and reducing the time needed for cardiac arrest detection by 15 seconds compared with dispatchers. Despite working with a limited dataset, this research highlights the potential of a cardiac arrest detection program as a valuable tool for responders, ultimately enhancing cardiac arrest survival rates.
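The text-based detection stage can be illustrated with a simple baseline: bag-of-words/TF-IDF features over call transcripts fed to a linear classifier. The sketch below is such a baseline, not the model reported in the paper; the transcripts and labels shown are placeholders, and the real system works on Korean STT output.

```python
# Baseline sketch of text-based cardiac-arrest detection from call transcripts.
# Not the authors' model; the transcripts and labels below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

transcripts = [
    "caller reports the patient is not breathing and is unresponsive",
    "caller requests help for a minor hand injury",
]
labels = [1, 0]  # 1 = suspected cardiac arrest, 0 = other emergency

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
detector.fit(transcripts, labels)

preds = detector.predict(transcripts)
print(f1_score(labels, preds))  # F1 on the (toy) training data
```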

Change in acoustic characteristics of voice quality and speech fluency with aging (노화에 따른 음질과 구어 유창성의 음향학적 특성 변화)

  • Hee-June Park;Jin Park
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.45-51
    • /
    • 2023
  • Voice issues that arise with age, such as voice weakness, can have social and emotional impacts, potentially leading to feelings of isolation and depression. This study aimed to investigate the changes in acoustic characteristics resulting from aging, focusing on voice quality and speech fluency. To this end, sustained vowel phonation and paragraph-reading tasks were recorded for 20 elderly and 20 young participants. Voice-quality-related variables, including F0, jitter, shimmer, and cepstral peak prominence (CPP), were analyzed along with speech-fluency-related variables, namely average syllable duration (ASD), articulation rate (AR), and speech rate (SR). The results showed that, in the voice-quality measurements, F0 was higher for the elderly and voice quality was diminished, as indicated by increased jitter and shimmer and lower CPP values. The speech fluency analysis likewise showed that the elderly spoke more slowly, as indicated by the ASD, AR, and SR measurements. Correlation analysis between voice quality and speech fluency revealed significant relationships between shimmer and CPP values and between ASD and SR values. This suggests that changes in speech fluency can be identified early by measuring variations in voice quality. The study further highlights the reciprocal relationship between voice quality and speech fluency, emphasizing that deterioration in one can affect the other.
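The fluency measures named above are simple ratios of syllable counts and durations. The sketch below uses common operational definitions (articulation time excludes pauses, total time includes them); the paper's exact definitions may differ.

```python
# Minimal sketch of the fluency measures named above, using common operational
# definitions; the study's exact definitions may differ.
def fluency_measures(n_syllables, total_time_s, pause_time_s):
    articulation_time = total_time_s - pause_time_s   # time spent actually speaking
    return {
        "ASD": articulation_time / n_syllables,        # average syllable duration (s)
        "AR": n_syllables / articulation_time,         # articulation rate (syllables/s)
        "SR": n_syllables / total_time_s,              # speech rate (syllables/s)
    }

# Example: 120 syllables read in 40 s, of which 8 s were pauses.
print(fluency_measures(120, 40.0, 8.0))
```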

Spectral moment analysis of distortion errors in alveolar fricatives in Korean children (치조 마찰음 왜곡 오류 유무에 따른 아동 발화 적률분석 비교)

  • Yunju Han;Do Hyung Kim;Ja Eun Hwang;Dae-Hyun Jang;Jae Won Kim
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.33-40
    • /
    • 2024
  • This study investigated acoustic features through spectral moment analysis, comparing accurate articulations with distorted alveolar fricatives showing dentalization, palatalization, or lateralization. A retrospective analysis was conducted on speech samples from 61 children (mean age: 5.6±1.5 years; 19 girls, 42 boys) assessed with the Assessment of Phonology & Articulation for Children (APAC) and the Urimal-test of Articulation and Phonology I (U-TAP I). Spectral moment analysis was applied to 169 speech samples. The results revealed that the center of gravity of accurate articulations was higher than that of palatalization, and that of palatalization was lower than that of dentalization. The variance of dentalization was higher than that of both accurate articulations and palatalization. The skewness of both dentalization and palatalization was higher than that of accurate articulations. The kurtosis of palatalization was higher than that of both accurate articulations and dentalization. No significant differences were observed by fricative position (initial, medial) or tense type (plain, tense) for any of the spectral moment variables within each distortion type. This study confirmed distinct patterns in center of gravity, variance, skewness, and kurtosis depending on the type of alveolar fricative distortion. The objective values provided in this study can serve as foundational data for diagnosing alveolar fricative distortions in children with speech sound disorders.
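The four spectral moments used above are weighted moments of the power spectrum of a fricative segment. The sketch below shows one common way to compute them in Python; the Hann window and the use of power (rather than, say, dB) as the weighting are generic choices, not necessarily those of the study.

```python
# Minimal sketch of spectral moment analysis for a fricative segment; the windowing
# and power weighting here are generic choices, not necessarily those of the study.
import numpy as np

def spectral_moments(segment, sr):
    """Center of gravity, variance, skewness, and kurtosis of the power spectrum."""
    x = np.asarray(segment, dtype=float)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    p = spectrum / np.sum(spectrum)                  # treat power as a distribution over frequency
    cog = np.sum(freqs * p)                          # center of gravity (Hz)
    var = np.sum((freqs - cog) ** 2 * p)             # variance (Hz^2)
    sd = np.sqrt(var)
    skew = np.sum((freqs - cog) ** 3 * p) / sd ** 3  # skewness
    kurt = np.sum((freqs - cog) ** 4 * p) / sd ** 4 - 3.0  # excess kurtosis
    return cog, var, skew, kurt
```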

Evaluation of the readability of self-reported voice disorder questionnaires (자기보고식 음성장애 설문지 문항의 가독성 평가)

  • HyeRim Kwak;Seok-Chae Rhee;Seung Jin Lee;HyangHee Kim
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.41-48
    • /
    • 2024
  • The significance of self-reported voice assessments that address patients' chief complaints and quality of life has increased, making readability assessments of questionnaire items essential. In this study, readability analyses were performed on the 11 Korean versions of self-reported voice disorder questionnaires (KVHI, KAVI, KVQOL, K-SVHI, K-VAPP, K-VPPC, TVSQ, K-VDCQ, K-VFI, K-VTDS, and K-VoiSS) in terms of text grade and complexity, vocabulary frequency and grade, and lexical diversity. Additionally, a comparative readability assessment was conducted on the original versions of these questionnaires to identify differences from their Korean counterparts and from the questionnaires for children. The results indicated that the voice disorder questionnaires can be used without difficulty by populations with lower literacy levels. Evaluators should nevertheless consider subjects' reading levels when conducting assessments, and future development and revision of questionnaires should take readers' difficulties into account.
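Of the measures listed, lexical diversity and sentence length are the simplest to illustrate. The sketch below computes a type-token ratio and a mean sentence length for questionnaire text; it does not reproduce the study's Korean-specific readability formulas or morphological analysis.

```python
# Illustrative readability-related measures for questionnaire items: type-token ratio
# (lexical diversity) and mean sentence length. The study's Korean-specific formulas
# and morphological analysis are not reproduced here.
import re

def type_token_ratio(text):
    """Distinct word forms divided by total word tokens (higher = more diverse)."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_sentence_length(text):
    """Average number of word tokens per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"\w+", text)
    return len(tokens) / len(sentences) if sentences else 0.0
```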