• Title/Abstract/Keywords: Human speech

568 search results

Digital enhancement of pronunciation assessment: Automated speech recognition and human raters

  • Miran Kim
    • Phonetics and Speech Sciences, v.15 no.2, pp.13-20, 2023
  • This study explores the potential of automated speech recognition (ASR) for assessing English learners' pronunciation. We employed ASR technology, valued for its impartiality and consistency, to analyze speech audio files, including synthesized speech in both native-like and Korean-accented English as well as recordings from a native English speaker. Through this analysis, we established baseline values for the word error rate (WER). These were then compared with the scores of human raters in perception experiments that assessed the speech of 30 first-year college students before and after a pronunciation course. Our sub-group analyses revealed positive training effects for both Whisper, an ASR tool, and the human raters, and identified distinct human rater strategies across assessment aspects such as proficiency, intelligibility, accuracy, and comprehensibility that were not observed in ASR. Despite challenges such as recognizing accented speech traits, our findings suggest that digital tools such as ASR can streamline pronunciation assessment. With ongoing advances in ASR technology, its potential not only as an assessment aid but also as a self-directed learning tool for pronunciation feedback merits further exploration.
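The baseline metric above, word error rate, has a standard definition; the following sketch (illustrative only, not the study's code) computes it via Levenshtein alignment over word tokens:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference
    length, computed with standard Levenshtein alignment over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion -> 1/6
```

A WER of 0 means the hypothesis matches the reference exactly; accented or disfluent speech typically raises it, which is why the study needed baselines from native-like speech.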

A Speech Enhancement Algorithm based on Human Psychoacoustic Property (심리음향 특성을 이용한 음성 향상 알고리즘)

  • Jeon, Yu-Yong;Lee, Sang-Min
    • The Transactions of The Korean Institute of Electrical Engineers, v.59 no.6, pp.1120-1125, 2010
  • In speech systems such as hearing aids and speech communication, speech quality is degraded by environmental noise. In this study, we propose an algorithm that reduces noise and reinforces speech in order to enhance quality degraded by environmental noise. The minima-controlled recursive averaging (MCRA) algorithm is used to estimate the noise spectrum, and a spectral weighting factor is applied to reduce the noise. A partial masking effect, one of the properties of human hearing, is then introduced to reinforce the speech. We compared waveforms, spectrograms, Perceptual Evaluation of Speech Quality (PESQ) scores, and segmental signal-to-noise ratio (segSNR) across the original speech, noisy speech, noise-reduced speech, and speech enhanced by the proposed method. The enhanced speech is reinforced in the high frequencies degraded by noise, and both PESQ and segSNR improve, indicating improved speech quality.
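The noise-reduction step described above, weighting each spectral bin by an estimated gain, can be illustrated with a minimal Wiener-style gain function. This is a generic sketch, not the paper's algorithm: it assumes a noise magnitude estimate is already available (the paper obtains it with MCRA) and omits the partial-masking reinforcement entirely.

```python
import numpy as np

def spectral_gain(noisy_mag, noise_mag, floor=0.1):
    """Wiener-style spectral weighting: attenuate each frequency bin according
    to its estimated a priori SNR; `floor` limits attenuation to reduce
    musical-noise artifacts."""
    snr = np.maximum(noisy_mag**2 - noise_mag**2, 0.0) / np.maximum(noise_mag**2, 1e-12)
    gain = snr / (snr + 1.0)          # gain in [0, 1): high SNR -> keep bin
    return np.maximum(gain, floor)    # low SNR -> attenuate toward the floor

# toy example: one frame of magnitude spectra
noisy = np.array([1.0, 2.0, 0.5])
noise = np.array([0.5, 0.5, 0.5])
enhanced = spectral_gain(noisy, noise) * noisy
print(enhanced)
```

In a full system this gain would be applied frame by frame to the STFT of the noisy signal before resynthesis.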

A Study on the Human Auditory Scaling (인간의 청각 척도에 관한 고찰)

  • Yang, Byung-Gon
    • Speech Sciences, v.2, pp.125-134, 1997
  • Human beings can perceive various aspects of sound, including loudness, pitch, length, and timbre. Recently, many studies have been conducted to clarify the complex auditory scales of the human ear. This study critically reviews some of these scales (decibel, sone, and phon for loudness perception; mel and bark for pitch) and proposes applying them to normalize acoustic correlates of human speech. One of the most important aspects of human auditory perception is its nonlinearity, which should be incorporated into linear speech analysis and synthesis systems. Further studies using more sophisticated equipment are desirable to refine these scales through the analysis of human auditory perception of complex tones or speech. This will lead scientists to develop better speech recognition and synthesis devices.
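One of the pitch scales covered in the review, the mel scale, has a widely used closed form (O'Shaughnessy's 2595·log10(1 + f/700) variant); a small sketch of the conversion shows the nonlinearity the abstract emphasizes:

```python
import math

def hz_to_mel(f):
    """Convert frequency in Hz to mels: roughly linear below ~1 kHz,
    logarithmic above, matching perceived pitch spacing."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# nonlinearity: equal steps in Hz shrink on the mel axis as frequency rises
for f in (500, 1000, 2000, 4000):
    print(f, round(hz_to_mel(f), 1))
```

This kind of warping is exactly the normalization step the author proposes for linear analysis/synthesis systems (the bark scale plays a similar role with a different formula).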


Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot (모의 지능로봇에서 음성신호에 의한 감정인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the KSPS conference, 2005.11a, pp.163-166, 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from the microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to that of a human.
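The pipeline described above, utterance-level feature statistics fed to a six-way classifier, can be sketched in miniature. Everything here is hypothetical: the feature values are synthetic stand-ins for the paper's phonetic/prosodic statistics, and a nearest-class-mean rule stands in for their Gaussian SVM purely to keep the sketch dependency-free.

```python
import numpy as np

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad", "surprised"]
rng = np.random.default_rng(0)

# synthetic 8-dimensional utterance features standing in for statistics of
# log energy, shimmer, formants, Teager energy, pitch, jitter, duration, rate
X = rng.normal(size=(120, 8)) + np.repeat(np.arange(6.0), 20)[:, None]
y = np.repeat(np.arange(6), 20)

# nearest-class-mean decision rule (the paper uses a Gaussian SVM instead)
centroids = np.stack([X[y == k].mean(axis=0) for k in range(6)])

def classify(x):
    """Assign the emotion whose class centroid is closest in feature space."""
    return EMOTIONS[int(np.argmin(np.linalg.norm(centroids - x, axis=1)))]

print(classify(X[0]), classify(X[100]))
```

The real system's advantage comes from the SVM's nonlinear decision boundaries and from features robust to the 2 m far-field recording conditions.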


A Validity Study on Measurement of Mental Fatigue Using Speech Technology (음성기술을 이용한 정신피로 측정에 관한 타당성 연구)

  • Song, Seungkyu;Kim, Jongyeol;Jang, Junsu;Kwon, Chulhong
    • Phonetics and Speech Sciences, v.5 no.1, pp.3-10, 2013
  • This study proposes a method of measuring mental fatigue using speech technology, an approach not used in previous research and simpler than existing complex and difficult methods. It aims to establish a relationship between the human voice and mental fatigue based on experiments measuring the influence of mental fatigue on the voice. Two monotonous tasks of simple calculation, such as finding the sum of three one-digit numbers, were used to induce the feeling of monotony, and two sets of subjective questionnaires were used to measure mental fatigue. While thirty subjects performed the experiment, responses to the questionnaires and speech data were collected. Speech features related to the speech source and the vocal tract filter were extracted from the speech data. According to the results, the speech parameters most closely related to mental fatigue are the mean and standard deviation of the fundamental frequency, jitter, and shimmer. This study shows that speech technology is a useful method for measuring mental fatigue.
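The fatigue-sensitive parameters named above (F0 statistics, jitter, shimmer) have standard cycle-level definitions. A minimal sketch of local jitter and shimmer follows, using the common Praat-style definitions rather than the authors' exact extraction:

```python
import numpy as np

def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, relative to the mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amplitudes):
    """Local shimmer: the same ratio computed on cycle peak amplitudes."""
    amps = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amps))) / np.mean(amps)

# toy cycle-level measurements (periods in seconds, amplitudes in arbitrary units)
periods = [0.0100, 0.0102, 0.0099, 0.0101]
amps = [0.80, 0.78, 0.82, 0.79]
print(jitter_local(periods), shimmer_local(amps))
```

Both measures quantify cycle-to-cycle irregularity of voicing; the study's finding is that such irregularity, along with F0 mean and spread, shifts measurably with mental fatigue.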

Progress, challenges, and future perspectives in genetic researches of stuttering

  • Kang, Changsoo
    • Journal of Genetic Medicine, v.18 no.2, pp.75-82, 2021
  • Speech and language functions are highly cognitive, human-specific features. The underlying causes of normal speech and language function are believed to reside in the human brain. Developmental persistent stuttering, a speech and language disorder, has been regarded as the most challenging disorder in which to determine genetic causes because of the high percentage of spontaneous recovery among stutterers. This mysterious characteristic hinders speech pathologists from discriminating recovered stutterers from completely normal individuals. Over the last several decades, several genetic approaches have been used to identify the genetic causes of stuttering, and remarkable progress has been made in genome-wide linkage analysis followed by gene sequencing. So far, four genes, namely GNPTAB, GNPTG, NAGPA, and AP4E1, are known to cause stuttering. Furthermore, the generation of mouse models of stuttering and morphometry analysis have created new ways for researchers to identify brain regions that participate in human speech function and to understand the neuropathology of stuttering. In this review, we investigate previous progress, challenges, and future perspectives in understanding the genetics and neuropathology underlying persistent developmental stuttering.

Speech sound and personality impression (말소리와 성격 이미지)

  • Lee, Eunyung;Yuh, Heaok
    • Phonetics and Speech Sciences, v.9 no.4, pp.59-67, 2017
  • Regardless of their intention, listeners tend to assess speakers' personalities based on the sounds of the speech they hear. The assessment criteria, however, have not been fully investigated, so it is unclear whether there is any relationship between the acoustic cues of produced speech and the perceived personality impression. If properly investigated, the potential relationship between the two would provide crucial insights into human communication and, further, into human-computer interaction. Since human communication has the distinctive characteristics of simultaneity and complexity, this investigation focuses on identifying the minimum essential factors linking speech sounds and perceived personality impression. The purpose of this study, therefore, is to identify significant associations between speech sounds and the personality impression of the speaker as perceived by listeners. Twenty-eight subjects participated in the experiment, and eight acoustic parameters were extracted from the recorded speech using Praat. The subjects also completed the NEO Five-Factor Inventory test so that their personality traits could be measured. The results show that four major factors (duration average, pitch difference value, pitch average, and intensity average) play crucial roles in defining the significant relationship.

Evaluation of English speaking proficiency under fixed speech rate: Focusing on utterances produced by Korean child learners of English

  • Narah Choi;Tae-Yeoub Jang
    • Phonetics and Speech Sciences, v.15 no.1, pp.47-54, 2023
  • This study tested the hypothesis that Korean evaluators can score L2 speech appropriately even when speech-rate features are unavailable. Two perception experiments, preliminary and main, were conducted sequentially. The purpose of the preliminary experiment was to categorize English-as-a-foreign-language (EFL) speakers into two groups, advanced learners and lower-level learners, based on the proficiency scores given by five human raters. In the main experiment, a set of stimuli was prepared such that all data tokens were modified to a uniform speech rate. Ten human evaluators were asked to score the stimulus tokens on a 5-point scale. These scores were statistically analyzed to determine whether there was a significant difference in utterance production between the two groups. The results of the preliminary experiment confirm that higher-proficiency learners speak faster than lower-proficiency learners. The results of the main experiment indicate that, under controlled speech-rate conditions, human raters can appropriately assess learner proficiency, probably thanks to the linguistic features the raters considered during the evaluation process.

HUMAN MOTION AND SPEECH ANALYSIS TO CONSTRUCT DECISION MODEL FOR A ROBOT TO END COMMUNICATING WITH A HUMAN

  • Otsuka, Naoki;Murakami, Makoto
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.719-722, 2009
  • The purpose of this paper is to develop a robot that moves independently, communicates with a human, and explicitly extracts information from the human mind that is rarely expressed verbally. In a spoken dialog system for information collection, it is desirable to continue communicating with the user as long as possible, but not if the user does not wish to communicate. The system should therefore be able to terminate the communication before the user starts to object to using it. In this paper, to enable the construction of a decision model by which the system decides when to stop communicating with a human, we acquired speech and motion data from individuals who were asked many questions by another person. We then analyzed their speech and body motion when they did not mind answering the questions and when they wished the questioning to cease. From the results, we identified differences in speech power, length of pauses, speech rate, and body motion.


Development of TTS for a Human-Robot Interface (휴먼-로봇 인터페이스를 위한 TTS의 개발)

  • Bae Jae-Hyun;Oh Yung-Hwan
    • Proceedings of the KSPS conference, 2006.05a, pp.135-138, 2006
  • The communication method between human and robot is an important part of human-robot interaction, and speech is an easy and intuitive method of communication for human beings. By using speech as the communication channel, we can interact with a robot in a familiar way. In this paper, we developed a TTS system for human-robot interaction. The synthesis algorithms were modified for efficient utilization of the robot's restricted resources, and the synthesis database was reconstructed for efficiency. As a result, we reduced the computation time with only a slight degradation of speech quality.
