• Title/Summary/Keyword: 말소리 (speech sounds)

Search Results: 1,337

Efficacy of laughing voice treatment (SKMVTT) in benign vocal fold lesions (양성성대질환의 웃음 음성치료(SKMVTT))

  • Jung, Dae-Yong; Wi, Joon-Yeol; Kim, Seong-Tae
    • Phonetics and Speech Sciences, v.10 no.4, pp.155-161, 2018
  • The purpose of this study was to evaluate the efficacy of a multiple voice therapy technique (SKMVTT®) using laughter for the treatment of various benign vocal fold lesions. To this end, 23 female patients diagnosed with vocal nodules, vocal polyps, or muscle tension dysphonia through videostroboscopy were enrolled in vocal hygiene education and SKMVTT®. All of the patients were treated once a week for 4 to 12 sessions. The GRBAS scale was used to confirm changes in voice quality before and after treatment. Acoustic analysis was performed to evaluate jitter, shimmer, NHR, fundamental frequency variation, amplitude variation, PFR, and dB range. Videostroboscopy was performed to confirm changes in laryngeal features before and after treatment. After the SKMVTT®, the perceptual evaluation demonstrated that the G, R, and B scales significantly improved. The acoustic evaluation likewise demonstrated that jitter, shimmer, NHR, vAm, vFo, PFR, and dB range significantly improved. On videostroboscopy, the vocal nodules and vocal polyps decreased in size or disappeared after treatment. In addition, the cuneiform tubercles decreased in size, the aryepiglottic folds lengthened, and the supraglottic compression improved after the SKMVTT®. These results suggest that the SKMVTT® is effective in improving the vocal quality of patients with benign vocal fold lesions. In conclusion, laughter and inspiratory phonation appear to suppress abnormal laryngeal elevation and lower laryngeal height, which seems to improve hyperfunctional phonation.
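The perturbation measures listed in this abstract (jitter, shimmer) have simple cycle-to-cycle definitions. A minimal sketch of the common "local" variants, using made-up period and amplitude values rather than any study data:

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Local shimmer (%): mean absolute difference between consecutive
    cycle peak amplitudes, divided by the mean amplitude."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Made-up cycle-to-cycle values (periods in seconds, amplitudes in
# arbitrary units), purely to show the computation.
periods = [0.0050, 0.0051, 0.0049, 0.0050, 0.0052]
amplitudes = [0.80, 0.78, 0.82, 0.79, 0.81]
print(round(local_jitter(periods), 2), round(local_shimmer(amplitudes), 2))
```

Praat's "local" jitter and shimmer follow this shape; clinical voice reports typically add further variants (RAP, PPQ, APQ) on top of it.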

Acoustic-phonetic characteristics of fricatives distortion in functional articulation disorders (기능적 조음음운장애아동의 치조 마찰음 왜곡의 음향음성학적 특성)

  • Yang, Minkyo; Choi, Yaelin; Kim, Eun Yeon; Yoo, Hyun Ji
    • Phonetics and Speech Sciences, v.10 no.4, pp.127-134, 2018
  • This study aims to explain the difficulties that children with articulation and phonological disorders have in producing alveolar fricatives. It compares how typically developing children and children with articulation and phonological disorders produce alveolar fricatives across five acoustic variables, in order to identify objective differences between the groups. Specifically, the study compared 10 children with articulation and phonological disorders and 10 typically developing children according to the phonation type of the alveolar fricative (/s/ and /s*/), vowel context (/i/, /ε/, /u/, /o/, /ɯ/, /ʌ/, /ɑ/), and syllable structure (CV, VCV), using acoustic variables that included the central moment, skewness, kurtosis, center of gravity, and variance. The results show that, compared to typically developing children, children with articulation and phonological disorders have difficulty concentrating a brief, forceful frication when articulating alveolar fricatives, which require strong energy and accompanying tension. Furthermore, the acoustic values for children with articulation and phonological disorders were spread more evenly around the mean, meaning that the range of standard deviation values for these children was wider than that of typically developing children. In a future study, if misarticulations involving omission, substitution, and addition can be compared and analyzed across various target groups, the results could be used effectively to help children with functional phonological disorders.
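The spectral-moment measures used in this abstract (center of gravity, variance, skewness, kurtosis) treat a power spectrum as a probability distribution over frequency. A minimal sketch, using a synthetic white-noise frame in place of a fricative recording:

```python
import numpy as np

def spectral_moments(frame, sr):
    """Treat the frame's power spectrum as a probability distribution
    over frequency and return (center of gravity, variance, skewness,
    excess kurtosis)."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = power / power.sum()
    cog = np.sum(freqs * p)                    # 1st moment
    var = np.sum((freqs - cog) ** 2 * p)       # 2nd central moment
    skew = np.sum((freqs - cog) ** 3 * p) / var ** 1.5
    kurt = np.sum((freqs - cog) ** 4 * p) / var ** 2 - 3.0
    return cog, var, skew, kurt

# Synthetic stand-in for a fricative frame: white noise spreads energy
# across the whole band, so its center of gravity lands near mid-band.
rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)
cog, var, skew, kurt = spectral_moments(frame, sr=16000)
print(round(float(cog)))
```

For real fricatives, /s/-type sounds concentrate energy high in the spectrum (high center of gravity, negative skewness), which is what makes these moments useful for separating well-formed from distorted productions.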

Normalized gestural overlap measures and spatial properties of lingual movements in Korean non-assimilating contexts

  • Son, Minjung
    • Phonetics and Speech Sciences, v.11 no.3, pp.31-38, 2019
  • The current electromagnetic articulography study analyzes several articulatory measures and examines whether, and if so, how they are interconnected, with a focus on cluster types and an additional consideration of speech rates and morphosyntactic contexts. Using articulatory data on non-assimilating contexts from three Seoul-Korean speakers, we examine how speaker-dependent gestural overlap between C1 and C2 in a low vowel context (/a/-to-/a/) and their resulting intergestural coordination are realized. Examining three C1C2 sequences (/k(#)t/, /k(#)p/, and /p(#)t/), we found that three normalized gestural overlap measures (movement onset lag, constriction onset lag, and constriction plateau lag) were correlated with one another for all speakers. Limiting the scope of analysis to C1 velar stop (/k(#)t/ and /k(#)p/), the results are recapitulated as follows. First, for two speakers (K1 and K3), i) longer normalized constriction plateau lags (i.e., less gestural overlap) were observed in the pre-/t/ context, compared to the pre-/p/ (/k(#)t/>/k(#)p/), ii) the tongue dorsum at the constriction offset of C1 in the pre-/t/ contexts was more anterior, and iii) these two variables are correlated. Second, the three speakers consistently showed greater horizontal distance between the vertical tongue dorsum and the vertical tongue tip position in /k(#)t/ sequences when it was measured at the time of constriction onset of C2 (/k(#)t/>/k(#)p/): the tongue tip completed its constriction onset by extending further forward in the pre-/t/ contexts than the uncontrolled tongue tip articulator in the pre-/p/ contexts (/k(#)t/>/k(#)p/). Finally, most speakers demonstrated less variability in the horizontal distance of the lingual-lingual sequences, which were taken as the active articulators (/k(#)t/=/k(#)p/ for K1; /k(#)t/
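The idea of normalized gestural overlap lags and their pairwise correlation can be sketched as follows; the landmark timestamps and the exact lag definitions below are illustrative assumptions, not the study's articulatory data or definitions:

```python
import numpy as np

# Hypothetical gesture landmarks (ms) for four C1C2 tokens.
c1_onset = np.array([0.0, 0.0, 0.0, 0.0])
c1_con_onset = np.array([40.0, 45.0, 38.0, 50.0])
c1_con_offset = np.array([90.0, 95.0, 85.0, 100.0])
c2_onset = np.array([60.0, 70.0, 55.0, 80.0])
c2_con_onset = np.array([110.0, 120.0, 100.0, 135.0])
c1_duration = np.array([150.0, 160.0, 140.0, 170.0])

# Lags between C1 and C2 landmarks, normalized by C1 duration
# (the paper's exact normalization may differ).
movement_lag = (c2_onset - c1_onset) / c1_duration
constriction_lag = (c2_con_onset - c1_con_onset) / c1_duration
plateau_lag = (c2_con_onset - c1_con_offset) / c1_duration

# Pairwise correlations among the three normalized overlap measures
r = np.corrcoef([movement_lag, constriction_lag, plateau_lag])
print(np.round(r, 2))
```

A positive correlation matrix like this is the kind of evidence behind the abstract's claim that the three normalized overlap measures pattern together across speakers.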

Performance of Korean spontaneous speech recognizers based on an extended phone set derived from acoustic data (음향 데이터로부터 얻은 확장된 음소 단위를 이용한 한국어 자유발화 음성인식기의 성능)

  • Bang, Jeong-Uk; Kim, Sang-Hun; Kwon, Oh-Wook
    • Phonetics and Speech Sciences, v.11 no.3, pp.39-47, 2019
  • We propose a method to improve the performance of spontaneous speech recognizers by extending their phone set using speech data. In the proposed method, we first extract variable-length phoneme-level segments from broadcast speech signals and convert them to fixed-length latent vectors using a long short-term memory (LSTM) classifier. We then cluster acoustically similar latent vectors and build a new phone set by choosing the number of clusters with the lowest Davies-Bouldin index. We also update the lexicon of the speech recognizer by choosing the pronunciation sequence of each word with the highest conditional probability. To analyze the acoustic characteristics of the new phone set, we visualize its spectral patterns and segment durations. Through speech recognition experiments using a larger training data set than in our previous work, we confirm that the new phone set yields better performance than conventional phoneme-based and grapheme-based units in both spontaneous speech recognition and read speech recognition.
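The cluster-selection step described here (pick the number of clusters with the lowest Davies-Bouldin index) can be sketched with plain NumPy; the k-means implementation, the farthest-first seeding, and the synthetic stand-ins for the LSTM latent vectors are all assumptions for illustration:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Lloyd's algorithm with deterministic farthest-first seeding."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def davies_bouldin(X, labels, centers):
    """Mean over clusters of the worst (scatter_i + scatter_j) /
    center-distance ratio; lower means tighter, better-separated clusters."""
    k = len(centers)
    s = np.array([np.linalg.norm(X[labels == j] - centers[j], axis=1).mean()
                  if np.any(labels == j) else 0.0 for j in range(k)])
    return float(np.mean([max((s[i] + s[j]) /
                              np.linalg.norm(centers[i] - centers[j])
                              for j in range(k) if j != i)
                          for i in range(k)]))

# Synthetic stand-ins for the LSTM latent vectors: three groups in 8-D.
rng = np.random.default_rng(42)
latents = np.vstack([rng.normal(c, 0.3, size=(60, 8)) for c in (-2.0, 0.0, 2.0)])

# Choose the cluster count with the lowest Davies-Bouldin index.
scores = {k: davies_bouldin(latents, *kmeans(latents, k)) for k in range(2, 7)}
best_k = min(scores, key=scores.get)
print(best_k)
```

On these well-separated synthetic groups the criterion recovers the underlying group count; on real latent vectors the minimum defines the size of the extended phone set.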

Performance comparison of various deep neural network architectures using Merlin toolkit for a Korean TTS system (Merlin 툴킷을 이용한 한국어 TTS 시스템의 심층 신경망 구조 성능 비교)

  • Hong, Junyoung; Kwon, Chulhong
    • Phonetics and Speech Sciences, v.11 no.2, pp.57-64, 2019
  • In this paper, we construct a Korean text-to-speech (TTS) system using the Merlin toolkit, an open-source system for speech synthesis. HMM-based statistical parametric speech synthesis is widely used in TTS, but the quality of its synthesized speech is known to degrade owing to limitations of the acoustic modeling scheme for context factors. In this paper, we propose acoustic modeling architectures that use deep neural network techniques, which show excellent performance in various fields. The architectures examined include a fully connected deep feedforward neural network (DNN), a recurrent neural network (RNN), a gated recurrent unit (GRU), a long short-term memory (LSTM) network, and a bidirectional LSTM (BLSTM). Experimental results show that performance improves when sequence modeling is included in the architecture, and that the architectures with LSTM or BLSTM perform best. Including delta and delta-delta components in the acoustic feature parameters was also found to be advantageous for performance improvement.
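The delta and delta-delta components mentioned in the last sentence are typically computed with a regression formula over a window of neighboring frames. A minimal sketch (the window width and the exact formula used by Merlin may differ):

```python
import numpy as np

def delta(feats, width=2):
    """Regression delta over +/- `width` frames:
    d_t = sum_n n * (c_{t+n} - c_{t-n}) / (2 * sum_n n^2)."""
    T = len(feats)
    padded = np.pad(feats, ((width, width), (0, 0)), mode="edge")
    denom = 2.0 * sum(n * n for n in range(1, width + 1))
    out = np.zeros_like(feats, dtype=float)
    for n in range(1, width + 1):
        out += n * (padded[width + n:width + n + T]
                    - padded[width - n:width - n + T])
    return out / denom

# A ramp stands in for one acoustic feature track: its delta is the
# constant slope, and the delta-delta is zero away from the edges.
feats = np.arange(10, dtype=float)[:, None]
d1 = delta(feats)
d2 = delta(d1)
print(d1[5, 0], d2[5, 0])
```

Stacking the static features with d1 and d2 gives the model access to local feature dynamics, which is the advantage reported in the abstract.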

Text-to-speech with linear spectrogram prediction for quality and speed improvement (음질 및 속도 향상을 위한 선형 스펙트로그램 활용 Text-to-speech)

  • Yoon, Hyebin
    • Phonetics and Speech Sciences, v.13 no.3, pp.71-78, 2021
  • Most neural-network-based speech synthesis models utilize neural vocoders to convert mel-scaled spectrograms into high-quality, human-like voices. However, neural vocoders combined with mel-scaled spectrogram prediction models demand considerable memory and time during the training phase and suffer from slow inference in environments where a GPU is not available. This problem does not arise in linear spectrogram prediction models, as they do not use neural vocoders, but these models suffer from low voice quality. As a solution, this paper proposes a Tacotron 2 and Transformer-based linear spectrogram prediction model that produces high-quality speech without a neural vocoder. Experiments suggest that this model can serve as the foundation of a high-quality text-to-speech model with fast inference speed.
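Linear spectrogram prediction models can skip neural vocoders because a magnitude-only linear spectrogram is invertible with classical phase reconstruction such as Griffin-Lim (the abstract does not name the specific inversion used in the paper). A minimal NumPy sketch of that inversion, run on a synthetic tone rather than model output:

```python
import numpy as np

def stft(x, win, hop):
    """Frame-based STFT (complex spectrogram, frames in rows)."""
    n = len(win)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(spec, win, hop):
    """Overlap-add inverse STFT with squared-window normalization."""
    n, T = len(win), spec.shape[0]
    x = np.zeros((T - 1) * hop + n)
    norm = np.zeros_like(x)
    frames = np.fft.irfft(spec, n=n, axis=1)
    for t in range(T):
        x[t * hop:t * hop + n] += frames[t] * win
        norm[t * hop:t * hop + n] += win ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(mag, win, hop, n_iter=50):
    """Recover a plausible phase for a magnitude-only spectrogram by
    alternating iSTFT and STFT projections."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        x = istft(mag * phase, win, hop)
        phase = np.exp(1j * np.angle(stft(x, win, hop)))
    return istft(mag * phase, win, hop)

# Illustrative target: the magnitude spectrogram of a 440 Hz tone.
sr, win, hop = 16000, np.hanning(512), 128
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
mag = np.abs(stft(tone, win, hop))
recon = griffin_lim(mag, win, hop)
```

Because this inversion is a handful of FFTs per iteration, it runs quickly on a CPU, which is the inference-speed advantage linear-spectrogram models trade against voice quality.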

Statistical analysis on long-term change of jitter component on continuous speech signal (음성신호의 Jitter 성분의 장시간 변화에 관한 통계적 분석)

  • Jo, Cheolwoo
    • Phonetics and Speech Sciences, v.12 no.4, pp.73-80, 2020
  • In this study, a method for measuring the jitter component in continuous speech is presented. Conventionally, jitter is measured from sustained vowels. In continuous speech, such as a spoken sentence, the existing measurement method is distorted by the prosodic pitch movement of the sentence. Therefore, we propose a method to reduce the pitch fluctuation due to prosody in continuous speech. To remove this fluctuation component, a curve representing it is obtained via polynomial interpolation of the pitch track in the analysis interval, and the drift is removed according to the curve. Subsequently, the variability of the pitch frequency is obtained by measuring jitter from the pitch trajectory with the drift removed. To assess the proposed method, parameter values before and after the operation are compared using samples from the Kay Pentax MEEI database. Statistical analysis of the experimental results showed that jitter components in continuous speech can be measured effectively by the proposed method and that the values are comparable to the parameters of a sustained vowel from the same speaker.
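The drift-removal idea described here (fit a curve to the pitch track, subtract it, then measure jitter on the residual) can be sketched with a polynomial fit; the polynomial order, the synthetic pitch track, and the re-centering step are assumptions for illustration:

```python
import numpy as np

def remove_drift(f0, order=3):
    """Fit a low-order polynomial to the pitch track (the slow
    prosodic declination), subtract it, and re-center on the mean F0.
    The polynomial order is an assumption here."""
    t = np.arange(len(f0))
    trend = np.polyval(np.polyfit(t, f0, order), t)
    return f0 - trend + f0.mean()

def jitter_percent(f0):
    """Local jitter (%) computed on the period track 1/F0."""
    periods = 1.0 / f0
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# Synthetic pitch track: a falling declination line (prosodic drift)
# plus small cycle-to-cycle perturbation (the jitter to preserve).
rng = np.random.default_rng(1)
n = np.arange(200)
f0 = 200.0 - 0.2 * n + rng.normal(0.0, 1.0, size=200)
flat = remove_drift(f0)

# Drift inflates overall F0 variability, while the cycle-to-cycle
# jitter survives detrending almost unchanged.
print(round(float(np.std(f0)), 1), round(float(np.std(flat)), 1))
print(round(float(jitter_percent(f0)), 2), round(float(jitter_percent(flat)), 2))
```

The overall F0 standard deviation collapses once the declination is removed, while the jitter value barely changes, which is the behavior the proposed method relies on.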

Change in lip movement during speech by aging: Based on a double vowel (노화에 따른 발화 시 입술움직임의 변화: 이중모음을 중심으로)

  • Park, Hee-June
    • Phonetics and Speech Sciences, v.13 no.1, pp.73-79, 2021
  • This study investigated the change in lip movement during speech according to aging. Fifteen elderly women with a mean age of 69 years and 15 young women with a mean age of 22 years were selected. To measure the movement of the lips, the ratio between the minimum and maximum points of movement when pronouncing a double vowel was analyzed in pixel units using image-analysis software. For clinical utility, the software applied an automated algorithm, and its output was compared with the results of manual measurement. The study found that the range of lip width and length in double-vowel tasks was smaller for the elderly than for the young. A strong positive correlation was found between the manual and automated methods, indicating that both are useful for extracting lip contours. These results show that the range of lip movement during speech decreases as aging progresses. Therefore, monitoring lip performance by simply measuring lip movement before aging progresses, and performing exercises to maintain lip range, may help prevent pronunciation problems caused by aging.
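The min/max movement ratio described here can be sketched in a few lines; the per-frame lip-width values and the exact ratio definition are hypothetical stand-ins for the study's pixel measurements:

```python
import numpy as np

def movement_range_ratio(widths):
    """Ratio between the minimum and maximum lip opening (pixels)
    over an utterance; values closer to 1 indicate a smaller
    articulatory range."""
    widths = np.asarray(widths, dtype=float)
    return float(widths.min() / widths.max())

# Hypothetical per-frame lip-width tracks (pixels) for a double vowel:
young = [12, 18, 30, 44, 50, 38, 20]    # wide excursion
elderly = [22, 26, 33, 40, 42, 35, 28]  # reduced excursion
print(round(movement_range_ratio(young), 2),
      round(movement_range_ratio(elderly), 2))
```

A higher ratio for the elderly track reflects the reduced movement range the study reports; the same measure works whether the contour is traced by hand or by an automated algorithm.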

Longitudinal music perception performance of postlingual deaf adults with cochlear implants using acoustic and/or electrical stimulation

  • Chang, Son A; Shin, Sujin; Kim, Sungkeong; Lee, Yeabitna; Lee, Eun Young; Kim, Hanee; Shin, You-Ree; Chun, Young-Myoung
    • Phonetics and Speech Sciences, v.13 no.2, pp.103-109, 2021
  • In this study, we investigated the longitudinal music perception of adult cochlear implant (CI) users and how acoustic stimulation combined with CI affects their music performance. Data from a total of 163 participants were analyzed retrospectively: 96 participants used acoustic stimulation with CI and 67 used electrical stimulation only via CI. The music performance data (melody identification, appreciation, and satisfaction) were collected pre-implantation and at 1 and 2 years post-implantation. A mixed repeated-measures ANOVA and pairwise comparisons with Tukey adjustment were used for the statistics. In both groups, melody identification, music appreciation, and music satisfaction improved significantly at 1 and 2 years post-implantation compared with pre-implantation, but there was no significant difference between 1 and 2 years on any of the variables. The group using acoustic stimulation with CI also showed better melody identification than the CI-only group. However, no differences were found in music appreciation or satisfaction between the two groups, and possible explanations are discussed. In conclusion, acoustic and/or electrical hearing devices benefit recipients' music performance over time. Although acoustic stimulation accompanied by electrical stimulation can benefit recipients' listening skills, those benefits may not extend to the subjective acceptance of music. These results suggest the need for improved sound-processing mechanisms and music rehabilitation.

The pattern of use by gender and age of the discourse markers 'a', 'eo', and 'eum' (담화표지 '아', '어', '음'의 성별과 연령별 사용 양상)

  • Song, Youngsook; Shim, Jisu; Oh, Jeahyuk
    • Phonetics and Speech Sciences, v.12 no.4, pp.37-45, 2020
  • This paper quantitatively calculated the speech frequency and duration of the discourse markers 'a', 'eo', and 'eum' using the Seoul Corpus, a spontaneous speech corpus. The sound durations were confirmed with Praat, the Seoul Corpus was analyzed with EmEditor, and the results were presented through statistical analysis with R. Based on the corpus analysis, the study investigated whether particular markers are preferred by speakers of particular categories. The most prominent finding is that the sound durations of female speakers were longer than those of male speakers when using the discourse marker 'eum' in final position. Among the age-related variables, teenagers uttered 'a' more than 'eo' in initial position when compared with people in their 40s. This study is significant in that it quantitatively analyzed the discourse markers 'a', 'eo', and 'eum' by gender and age. To continue the discussion, more precise research should be conducted that considers the context. In addition, similarities can be found with 'e' and 'ma' in Japanese (Watanabe & Ishi, 2000) and 'uh' and 'um' in English (Gries, 2013); a future study identifying commonalities and differences through cross-linguistic analysis of discourse can be anticipated.
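The frequency and duration tallies described here can be sketched over hypothetical annotation rows (the real study worked from Seoul Corpus annotations, with durations confirmed in Praat):

```python
from collections import defaultdict

# Hypothetical annotation rows (gender, marker, duration in seconds,
# position); illustrative values only, not corpus data.
rows = [
    ("F", "eum", 0.42, "final"), ("F", "eum", 0.51, "final"),
    ("M", "eum", 0.30, "final"), ("M", "a", 0.18, "initial"),
    ("F", "a", 0.21, "initial"), ("M", "eo", 0.25, "initial"),
]

# Tally frequency and mean duration of each marker per gender.
freq = defaultdict(int)
durations = defaultdict(list)
for gender, marker, seconds, position in rows:
    freq[(gender, marker)] += 1
    durations[(gender, marker)].append(seconds)

mean_duration = {k: sum(v) / len(v) for k, v in durations.items()}
print(freq[("F", "eum")], round(mean_duration[("F", "eum")], 3))
```

Grouping the same tallies by position or age band instead of gender gives the other comparisons reported in the abstract.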