• Title/Summary/Keyword: Disfluency


Event Valence Matters: Investigating the Moderating Role of Event Valence on Event Markers' Systematic Effect

  • Lee, Hyejin; Choi, Jinhee
    • Asia Marketing Journal / v.16 no.4 / pp.59-73 / 2015
  • Previous research has revealed that people feel a past target event is more distant when they recall more intervening events (event markers) that are both accessible in memory and perceived to be related to that target event (Zauberman, Levav, Diehl, and Bhargave 2010). This phenomenon is called the systematic effect of event markers (SEEM). In this research, we explore the moderating effect of the valence of the target event on SEEM and propose the difficulty of recalling event markers as a possible mechanism. Study 1 shows that SEEM occurs mainly when the valence of the target event is negative rather than positive. Study 2 shows that even though people have more difficulty recalling four event markers than one regardless of event valence, the difficulty of recalling event markers mediates SEEM only when the target event valence is negative. Furthermore, when the target event is positive, SEEM does not appear, confirming that the mediating role of the difficulty of recalling event markers on SEEM is moderated by the valence of the target event.

Therapeutic Use of Music for Stuttering Children (말더듬 아동을 위한 음악치료적 접근)

  • Cho, Jung Min
    • Journal of Music and Human Behavior / v.4 no.1 / pp.21-30 / 2007
  • Unlike other common forms of speech disorder, such as phonological disorder or dysphonia, stuttering has rarely been studied within the context of music therapy. Most people who stutter show no difficulty in singing, yet fluency within a musical structure does not translate into fluency in speech. Hence, a musical approach has generally been considered ineffective for the treatment of stuttering. However, the fundamentals of music therapy support its application to a wide variety of speech disorders, including stuttering. This paper presents case studies designed to validate the efficacy of music therapy as a remedy for stuttering. The study enrolled six children who stutter and conducted 20 individual sessions over a period of 10 weeks. The sessions focused on Melodic Intonation Therapy, reinforcement of speech rhythm, songwriting, and singing. Musical elements were structured to enhance verbal expression and rhythmic sense, as well as to facilitate the initiation of verbal communication. The results were as follows. First, disfluency decreased from before to after music therapy in every child, although the degree varied by child, and the overall difference was statistically significant; by category, the difference was significant for the frequency of stuttering. Across the sessions, disfluency rose and fell repeatedly before gradually declining. Second, Communication Attitude scores also decreased significantly from before to after therapy; avoidance behavior likewise decreased overall, but fluctuated irregularly across sessions.
All the results described above show that music therapy has a positive effect in reducing the disfluency of children who stutter and in improving their Communication Attitude, suggesting new possibilities for an effective musical approach to stuttering.


Intonation Patterns of Korean Spontaneous Speech (한국어 자유 발화 음성의 억양 패턴)

  • Kim, Sun-Hee
    • Phonetics and Speech Sciences / v.1 no.4 / pp.85-94 / 2009
  • This paper investigates the intonation patterns of Korean spontaneous speech through an analysis of four dialogues in the domain of travel planning. The speech corpus, a subset of a spontaneous speech database recorded and distributed by ETRI, was labeled in accentual phrases (APs) and intonational phrases (IPs) based on the K-ToBI system, using Momel, an intonation stylization algorithm. It was found that, unlike in English, a significant number of APs and IPs include hesitation lengthening, a disfluency phenomenon attributed to speech planning. This paper also claims that hesitation lengthening differs from IP-final lengthening and should be treated as a new category, as it greatly affects the intonation patterns of the language. Apart from the fact that 19.09% of APs show hesitation lengthening, spontaneous speech shows the same AP patterns as read speech, but with a higher frequency of falling patterns such as LHL, whereas read speech shows more LH and LHLH patterns. The IP boundary tones of spontaneous speech show the same five patterns as read speech (L%, HL%, LHL%, H%, and LH%), but with a higher frequency of rising patterns (H% and LH%) and contour tones (HL%, LH%, LHL%), while read speech, on the contrary, shows more falling patterns and simple tones at the end of IPs.
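The pattern-frequency comparison described above (e.g. 19.09% of APs showing hesitation lengthening, LHL vs. LH/LHLH) boils down to tallying label frequencies over a labeled corpus. A minimal sketch, using invented AP boundary-tone labels rather than the paper's actual data:

```python
from collections import Counter

# Hypothetical AP labels in the K-ToBI style discussed in the paper
# (LHL, LH, LHLH, ...); these example values are invented for illustration.
ap_labels = ["LHL", "LH", "LHL", "LHLH", "LHL", "LH", "LHL", "HESITATION"]

counts = Counter(ap_labels)
total = len(ap_labels)

# Relative frequency of each pattern, analogous to the paper's report
# that 19.09% of APs showed hesitation lengthening.
for pattern, n in counts.most_common():
    print(f"{pattern}: {n}/{total} ({100 * n / total:.2f}%)")
```

The same tally over IP boundary tones would reproduce the read-vs-spontaneous comparison of rising, falling, and contour patterns.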


A study of speaking rate on Parkinson's disease with palilalia (동어반복증을 동반한 파킨슨병 환자의 말속도 연구)

  • Kim, Sun Woo
    • Phonetics and Speech Sciences / v.8 no.3 / pp.61-66 / 2016
  • The purpose of this study is to examine the speaking rate (overall speaking rate and articulatory rate) of Parkinson's disease patients with palilalia (PDP). Palilalia is traditionally characterized not only by compulsive repetitions of words and phrases, but also by an increased rate of speech as judged by auditory perception. Since Souques (1908) first characterized palilalia as fast speech from the perspective of auditory perception, few studies have evaluated PDP speech using acoustic methods. To compare speech rate between PDP and normal subjects, we included five PDP and eight control subjects (all over age 55), using data acquired from reading tasks (a standardized Korean paragraph). The difference in the median overall speaking rate was not statistically significant between the PDP group (median 5.25, IQR 1.30) and the normal group (median 4.76, IQR 0.71). The PDP group, however, had a significantly higher articulatory rate in syllables per second (median 6.60, IQR 1.04) than the normal subjects (median 5.60, IQR 0.52). Results indicated no differences between the two groups in pauses over 250 msec or in disfluency duration. To provide useful insight into PDP speech, multiple levels of analysis should be employed.
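The distinction the study rests on, overall speaking rate versus articulatory rate, can be sketched as follows. This is an illustrative reconstruction of the standard formulas, with made-up numbers; the only detail taken from the abstract is the 250 ms pause criterion.

```python
def overall_speaking_rate(n_syllables: int, total_dur_s: float) -> float:
    """Syllables per second over the whole reading, pauses included."""
    return n_syllables / total_dur_s

def articulatory_rate(n_syllables: int, total_dur_s: float,
                      pauses_s: list, threshold_s: float = 0.250) -> float:
    """Syllables per second after excluding pauses longer than 250 ms."""
    pause_time = sum(p for p in pauses_s if p > threshold_s)
    return n_syllables / (total_dur_s - pause_time)

# Invented example: 120 syllables over 25 s, with four measured pauses.
print(overall_speaking_rate(120, 25.0))                    # 4.8 syl/s
print(articulatory_rate(120, 25.0, [0.4, 0.3, 0.1, 0.6]))  # ~5.06 syl/s
```

Because articulatory rate removes pause time from the denominator, a speaker can show a normal overall rate but an elevated articulatory rate, which is exactly the pattern the PDP group exhibited.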

A Study on Laryngeal Behavior of Persons Who Stutter with Fiber-Optic Nasolaryngoscope (후두 내시경(Fiber-Optic Nasolaryngoscope)을 이용한 말더듬인의 후두양상에 관한 연구)

  • Jung, Hun; Ahn, Jong-Bok; Choi, Byung-Heun; Kwon, Do-Ha
    • Speech Sciences / v.15 no.3 / pp.159-173 / 2008
  • The purpose of this study was to use a fiber-optic nasolaryngoscope to find differences in laryngeal behavior during utterance between persons who stutter (PS) and persons who do not stutter (NS). To this end, the study sampled 5 PS and 5 NS, all of whom participated in the experiment. The findings were as follows. First, there was no significant difference between the stuttering group and the controls in laryngeal behavior during fluent utterances. Second, there were differences between the two groups in laryngeal behavior during repetition and prolongation, the types of disfluency revealed in nonfluent utterances. Third, as reported in prior studies, the laryngeal behavior of the stuttering group's nonfluent utterances differed depending on stuttering type. A variety of laryngeal behaviors unreported in prior studies were also found, and notably, stutterers showed different laryngeal behaviors depending on their individual stuttering types. In the block condition, Subject 1 showed laryngeal behaviors fAB, INT, and fAD; Subject 2 showed fAB, fAD, and rAD; Subject 3 showed fAD and rAD; Subject 4 showed only fAD; and Subject 5 showed fAB, fAD, and rAD. Summing up, these findings imply that when stutterers utter nonfluent words, they may show a variety of laryngeal behaviors depending on their individual stuttering types. Moreover, there are some differences between NS and PS in the production of nonfluent utterances. In particular, it is notable that one common trait of the nonfluent utterances of PS is evidently excessive laryngeal tension, whichever type of stuttering they show.


A study on the change of prosodic units by speech rate and frequency of turn-taking (발화 속도와 말차례 교체 빈도에 따른 운율 단위 변화에 관한 연구)

  • Won, Yugwon
    • Phonetics and Speech Sciences / v.14 no.2 / pp.29-38 / 2022
  • This study aimed to analyze the speech in the National Institute of Korean Language's Daily Conversation Speech Corpus (2020) and reveal how speech rate and the frequency of turn-taking affect changes in prosodic units. The analysis showed a positive correlation between intonation-phrase frequency, word-phrase frequency, and speaking duration as the speech rate increased; however, the correlation was low, and the fit of the regression model on speech rate was 3%-11%, giving it weak explanatory power. There was a significant difference in mean speech rate according to the frequency of turn-taking, with speech rate decreasing as turn-taking frequency increased. In addition, as turn-taking frequency increased, the frequency of intonation phrases, the frequency of word phrases, and the speaking duration all decreased, with a high negative correlation. The fit of the regression model on turn-taking frequency was calculated as 27%-32%. The frequency of turn-taking thus functions as a factor changing the speech rate and prosodic units, presumably influenced by the disfluency of the dialogue, the characteristics of turn-taking, and the active interaction between speakers.
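The "fit of the regression model" figures quoted above (3%-11% vs. 27%-32%) are coefficients of determination. A minimal sketch of how such an R² is computed for a single predictor, with invented data points (not the corpus values):

```python
def r_squared(x: list, y: list) -> float:
    """R^2 of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented example: more turn-taking, fewer intonation phrases per turn.
turns = [2, 4, 6, 8, 10, 12]
ips   = [9.1, 8.0, 7.2, 6.1, 5.3, 4.0]
print(r_squared(turns, ips))  # close to 1 for this near-linear toy data
```

An R² of 0.27-0.32, as reported for turn-taking frequency, means the predictor accounts for roughly a third of the variance in the prosodic-unit measures; the 3%-11% figures for speech rate explain why the authors call that model weak.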

Development of the video-based smart utterance deep analyser (SUDA) application (동영상 기반 자동 발화 심층 분석(SUDA) 어플리케이션 개발)

  • Lee, Soo-Bok; Kwak, Hyo-Jung; Yun, Jae-Min; Shin, Dong-Chun; Sim, Hyun-Sub
    • Phonetics and Speech Sciences / v.12 no.2 / pp.63-72 / 2020
  • This study aims to develop a video-based smart utterance deep analyser (SUDA) application that semi-automatically analyzes the utterances that a child and mother produce during interactions over time. SUDA runs on Android, iPhone, and tablet PC platforms, and supports video recording and uploading to a server. User modes are divided into three: expert mode, general mode, and manager mode. In the expert mode, which is useful for speech and language evaluation, the subject's utterances are analyzed semi-automatically by measuring speech and language factors such as disfluency, morphemes, syllables, words, articulation rate, and response time. In the general mode, the outcome of the utterance analysis is provided in graph form, while the manager mode is accessible only to the administrator controlling the entire system, including utterance analysis and video deletion. SUDA helps reduce clinicians' and researchers' workload by saving time on utterance analysis. It also helps parents easily receive detailed information about their child's speech and language development. Further, this device will contribute to building a longitudinal database large enough to explore predictors of stuttering recovery and persistence.

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park; Chang Gyun Lee
    • Phonetics and Speech Sciences / v.15 no.4 / pp.71-80 / 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, it aimed to develop a deep learning-based identification model for Korean speakers who stutter, utilizing the convolutional neural network (CNN) algorithm. To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned to them. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency. In the case of prolongation, however, only five instances were found, so this type was excluded from the classifier model. Results showed that the accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were as follows: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the automatic classifier using CNNs to detect stuttered disfluencies was validated, its performance was found to be inadequate, especially for the blockage and prolongation types. Consequently, establishing a large speech database that collects data by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
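The per-class F1-scores reported above are computed from a confusion matrix over the predicted labels. A minimal sketch of that evaluation step, using an invented confusion matrix rather than the paper's actual results:

```python
LABELS = ["fluent", "blockage", "repetition"]

# confusion[i][j] = items with true label i predicted as label j (invented data)
confusion = [
    [50, 0, 0],   # fluent
    [2, 4, 2],    # blockage
    [1, 2, 7],    # repetition
]

def f1_per_class(cm):
    """Precision/recall-based F1 for each class of a square confusion matrix."""
    scores = {}
    for i, label in enumerate(LABELS):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(len(cm))) - tp  # column minus diagonal
        fn = sum(cm[i]) - tp                             # row minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores[label] = round(f1, 2)
    return scores

print(f1_per_class(confusion))
# {'fluent': 0.97, 'blockage': 0.57, 'repetition': 0.74}
```

As in the study, a class with few or ambiguous examples (here 'blockage') drags its F1 down even when overall accuracy looks high, which is why the authors call for a larger per-type database.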