• Title/Abstract/Keywords: speech factors

Search results: 352

한국어 단음절 낱말 인식에 미치는 어휘적 특성의 영향 (Analysis of Lexical Effect on Spoken Word Recognition Test)

  • 윤미선;이봉원
    • 대한음성학회지:말소리 / No. 54 / pp.15-26 / 2005
  • The aim of this paper was to analyze lexical effects on the spoken word recognition of Korean monosyllabic words. The lexical factors examined were the frequency, density, and lexical familiarity of the words. The analysis showed that frequency was a significant predictor of the spoken word recognition score for monosyllabic words; the other factors were not significant. This result suggests that word frequency should be considered in speech perception tests.

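The kind of item-level analysis described above can be sketched as a simple regression of recognition scores on the lexical factors. The snippet below is a minimal illustration only; the file name and column names (recognition_score, frequency, density, familiarity) are hypothetical placeholders, not the paper's materials.

```python
# A minimal sketch, assuming a hypothetical CSV of test items: regress the
# word recognition score on frequency, density, and lexical familiarity.
import pandas as pd
import statsmodels.formula.api as smf

# Each row: one monosyllabic test word, its lexical properties, and the
# proportion of listeners who recognized it correctly.
words = pd.read_csv("monosyllable_items.csv")  # hypothetical file

model = smf.ols(
    "recognition_score ~ frequency + density + familiarity",
    data=words,
).fit()

# Check which lexical predictors reach significance (the paper reports that
# frequency is significant while the other factors are not).
print(model.summary())
```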

마이크로폰의 종류 및 설치거리에 따른 음성인식성능변화의 검토 (The Validation of Speech Recognition Performance Change according to the kind and established distance of the Microphone)

  • 김연화;이광현;최대림;김봉완;이용주
    • 대한음성학회:학술대회논문집 / October 2003 Conference Proceedings / pp.141-143 / 2003
  • Speech recognition performance depends on various factors, one of which is the characteristics and installation distance of the microphone used when speech data are collected. In the present experiment, test speech databases are therefore created for each microphone type and installation distance. Acoustic models are then built from these databases, and each model is assessed against the test data to determine how recognition performance varies with microphone type and installation distance.

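A minimal sketch of scoring recognition output per microphone condition is shown below, assuming hypothetical reference/hypothesis transcript pairs and using the jiwer package for word error rate; it does not reproduce the paper's recognizer, data, or acoustic-model training.

```python
# A minimal sketch: compute word error rate (WER) separately for each
# microphone condition. Transcripts and condition names are placeholders.
from jiwer import wer

# Hypothetical (reference, hypothesis) transcript pairs per condition.
conditions = {
    "headset_10cm":  [("음악 재생", "음악 재생"), ("창문 열어 줘", "창문 열어 줘")],
    "desktop_100cm": [("음악 재생", "음악 재"), ("창문 열어 줘", "창문 열 어저")],
}

for name, pairs in conditions.items():
    refs = [r for r, _ in pairs]
    hyps = [h for _, h in pairs]
    print(f"{name}: WER = {wer(refs, hyps):.2%}")
```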

말늦은 아동의 말소리 발달 종단 연구 (A longitudinal study of phonological development in Korean late-talkers)

  • 김수진;이수향;홍경훈
    • 말소리와 음성과학 / Vol. 9, No. 4 / pp.115-122 / 2017
  • This study attempts to determine the extent to which late talkers are at risk of delayed phonological development, in order to identify at-risk groups and to find factors affecting delayed phonological development. A group of 1,452 children (51% boys, 49% girls) was recruited from the nationwide Panel Study on Korean Children. The current study collected data from 418 children who had been identified as late talkers (LT) at around age three (Time 1: expressive vocabulary test) and again three years later (Time 2: phonological test). Their phonological outcomes at Time 2 were analyzed and compared with those of a group of 1,056 children with typical language development at age three (NLT: not late talkers) in terms of the number of incorrect consonants and the speech sound disorder rating scores. LT children showed lower articulation scores than NLT children, and boys showed lower scores than girls. These findings indicate that late onset of speech and the gender of young children could be potential risk factors for speech sound disorders.
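
The group comparison described above can be illustrated with a simple two-way analysis of articulation scores by group and gender. The sketch below uses hypothetical file and column names and is not the panel data or the paper's exact statistical procedure.

```python
# A minimal sketch, assuming a hypothetical Time 2 data file with columns:
# articulation_score, group ("LT"/"NLT"), and gender ("M"/"F").
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

children = pd.read_csv("phonology_time2.csv")  # hypothetical file

# Two-way model: main effects of group and gender, plus their interaction.
model = smf.ols("articulation_score ~ C(group) * C(gender)", data=children).fit()
print(anova_lm(model, typ=2))
```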

Effects of phonological and phonetic information of vowels on perception of prosodic prominence in English

  • Suyeon Im
    • 말소리와 음성과학 / Vol. 15, No. 3 / pp.1-7 / 2023
  • This study investigates how the phonological and phonetic information of vowels influences prosodic prominence among linguistically untrained listeners, using public speech in American English. We first examined the phonetic realization of vowels in the speech material (i.e., maximum F0, F0 range, phone rate [as a measure of duration that takes the speech rate of the utterance into account], and mean intensity). Results showed that the high vowels /i/ and /u/ tended to have the highest maximum F0, while the low vowels /æ/ and /ɑ/ tended to have the highest mean intensity. Both high and low vowels had similarly high phone rates. Next, we examined the effects of the vowels' phonological and phonetic information on listeners' perception of prosodic prominence. The results showed that vowels significantly affected the likelihood of perceived prominence independently of the acoustic cues. The high and low vowels affected the probability of perceived prominence less than the mid vowels /ɛ/ and /ʌ/, although the former were more likely to be phonetically enhanced in the speech than the latter. Overall, these results suggest that the perception of prosodic prominence in English is not directly driven by signal-driven factors (i.e., the vowels' acoustic information) but is mediated by expectation-driven factors (e.g., the vowels' phonological information).
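
One way to test a vowel effect on perceived prominence beyond the acoustic cues is a regression of the prominence judgments on vowel category plus the cues. The sketch below is a minimal illustration with hypothetical file and column names; it does not reproduce the paper's actual model (for example, any per-listener random effects).

```python
# A minimal sketch, assuming a hypothetical token-level CSV with columns:
# prominent (0/1), vowel (category label), max_f0, f0_range, phone_rate,
# mean_intensity.
import pandas as pd
import statsmodels.formula.api as smf

tokens = pd.read_csv("prominence_ratings.csv")  # hypothetical file

# Logistic regression: vowel terms test the phonological effect over and
# above the acoustic cues entered as covariates.
model = smf.logit(
    "prominent ~ C(vowel) + max_f0 + f0_range + phone_rate + mean_intensity",
    data=tokens,
).fit()
print(model.summary())
```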

자동차 주행 환경에서의 음성 전달 명료도와 음성 인식 성능 비교 (Comparison of Speech Intelligibility & Performance of Speech Recognition in Real Driving Environments)

  • 이광현;최대림;김영일;김봉완;이용주
    • 대한음성학회지:말소리 / No. 50 / pp.99-110 / 2004
  • In a moving car, various noises and structural factors make it difficult to obtain the normal transmission characteristics of sound. The resulting channel distortion of the source sound recorded by the microphones seriously degrades speech recognition performance in real driving environments. In this paper we analyze intelligibility under various channel distortion conditions, according to driving speed, in terms of the speech transmission index (STI), and compare the STI with speech recognition rates. We examine the correlation between intelligibility measures for different sound pick-up positions and speech recognition performance, and thereby consider the optimal microphone location in a single-channel environment. In our experiments, we find a high correlation between STI and speech recognition rates.

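The STI-versus-recognition-rate correlation reported above can be computed directly once both measures are available per recording condition. The sketch below uses placeholder numbers, not values from the paper.

```python
# A minimal sketch: correlate the speech transmission index (STI) with the
# recognition rate across recording conditions. Values are placeholders.
from scipy.stats import pearsonr

sti      = [0.75, 0.68, 0.61, 0.55, 0.47]  # STI per microphone position / driving speed
accuracy = [92.0, 88.5, 83.0, 78.5, 70.0]  # recognition rate (%) in the same conditions

r, p = pearsonr(sti, accuracy)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # a high r would mirror the paper's finding
```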

PROSODY CONTROL BASED ON SYNTACTIC INFORMATION IN KOREAN TEXT-TO-SPEECH CONVERSION SYSTEM

  • Kim, Yeon-Jun;Oh, Yung-Hwan
    • 한국음향학회:학술대회논문집 / 1994 Fifth Western Pacific Regional Acoustics Conference, Seoul, Korea / pp.937-942 / 1994
  • A Text-to-Speech (TTS) conversion system can convert any words or sentences into speech. To synthesize speech the way human beings do, careful control of prosody, including intonation, duration, accent, and pause, is required; it helps listeners understand the speech clearly and makes it sound more natural. In this paper, a prosody control scheme that makes use of function-word information is proposed. Among the many factors of prosody, intonation, duration, and pause are closely related to syntactic structure, and these relations have been formalized and embodied in the TTS system. To evaluate the speech synthesized with the proposed prosody control, a subjective evaluation method, the MOS (Mean Opinion Score), was used: the synthesized speech was tested on 10 listeners, each of whom scored it between 1 and 5. The evaluation experiments show that the proposed prosody control helps the TTS system synthesize more natural speech.

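As a rough illustration of prosody control keyed to function words, the toy sketch below appends a pause marker after word units ending in selected particles. The particle list and the rule itself are illustrative assumptions, not the paper's formalized rules.

```python
# A toy sketch of function-word-driven prosody marking: append a short-pause
# marker after eojeols ending in selected particles. In a full TTS system the
# same syntactic cue would also adjust pitch and duration. Illustrative only.
FUNCTION_WORDS = ("은", "는", "에서", "으로", "부터", "까지")  # illustrative particle list

def mark_pauses(eojeols: list[str]) -> list[str]:
    """Insert a short-pause marker after eojeols ending in a listed particle."""
    marked = []
    for w in eojeols:
        marked.append(w)
        if w.endswith(FUNCTION_WORDS):
            marked.append("<pause:short>")
    return marked

print(" ".join(mark_pauses("철수가 학교에서 책을 읽는다".split())))
```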

잡음 환경 하에서의 입술 정보와 PSO-NCM 최적화를 통한 거절 기능 성능 향상 (Improvement of Rejection Performance using the Lip Image and the PSO-NCM Optimization in Noisy Environment)

  • 김병돈;최승호
    • 말소리와 음성과학 / Vol. 3, No. 2 / pp.65-70 / 2011
  • Recently, audio-visual speech recognition (AVSR) has been studied to cope with noise problems in speech recognition. In this paper we propose a novel method for determining the weighting factors used in audio-visual information fusion, adopting particle swarm optimization (PSO) for this purpose. The AVSR experiments show that PSO-based normalized confidence measures (NCM) improve the rejection performance for mis-recognized words by 33%.

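The weighting-factor search described above can be illustrated with a small, self-contained particle swarm optimization over a scalar fusion weight. The confidence scores, labels, and objective below are dummy placeholders rather than the paper's NCM formulation or data.

```python
# A minimal sketch: use PSO to pick a weight w that fuses audio and visual
# confidence scores for an accept/reject decision. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Dummy development data: per-utterance confidences and labels
# (1 = correctly recognized, 0 = mis-recognized and should be rejected).
audio_conf = rng.uniform(0.0, 1.0, 200)
visual_conf = rng.uniform(0.0, 1.0, 200)
labels = (0.6 * audio_conf + 0.4 * visual_conf + rng.normal(0, 0.1, 200) > 0.5).astype(int)

def objective(w: float) -> float:
    """Accept/reject accuracy when fusing the two confidences with weight w."""
    fused = w * audio_conf + (1.0 - w) * visual_conf
    return float(np.mean((fused > 0.5).astype(int) == labels))

# Plain PSO over the scalar weight w in [0, 1].
n_particles, n_iters = 20, 50
pos = rng.uniform(0, 1, n_particles)
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=n_particles), rng.uniform(size=n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"best weight w = {gbest:.3f}, accuracy = {objective(gbest):.3f}")
```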

한국인 표준 음성 DB 구축 (Developing a Korean Standard Speech DB)

  • 신지영;장혜진;강연민;김경화
    • 말소리와 음성과학 / Vol. 7, No. 1 / pp.139-150 / 2015
  • The purpose of this study is to develop a speech corpus of standard Korean speech. For the samples to viably represent the state of spoken Korean, demographic factors were considered so as to obtain a balanced spread of age, gender, and dialect: nine regional dialects were categorized, and five age groups were established, from speakers in their 20s to those in their 60s. A speech-sample collection protocol was developed for this study in which each speaker performs five tasks: two reading tasks, two semi-spontaneous speech tasks, and one spontaneous speech task. This configuration gathers rich and well-balanced samples across various speech types and is expected to improve the utility of the corpus. Samples from 639 individuals were collected using the protocol; together with speech samples collected from other sources, the corpus covers a combined total of 1,012 individuals. The data accumulated in this database will be used to develop a speaker identification system, and may also be applied to, but not limited to, the fields of phonetics, sociolinguistics, and language pathology. We plan to supplement the large-scale speech corpus next year, in terms of both research methodology and content, to better answer the needs of these diverse fields.

한국인 표준 음성 DB 구축(II) (Developing a Korean standard speech DB (II))

  • 신지영;김경화
    • 말소리와 음성과학 / Vol. 9, No. 2 / pp.9-22 / 2017
  • The purpose of this paper is to report the whole process of developing the Korean Standard Speech Database (KSS DB). The project was supported by an SPO (Supreme Prosecutors' Office) research grant for three years, from 2014 to 2016. The KSS DB is designed to provide speech data for acoustic-phonetic and phonological studies and for speaker recognition systems. For the samples to represent spoken Korean, sociolinguistic factors such as region (9 regional dialects), age (5 age groups over 20), and gender (male and female) were considered. The goal of the project was to collect speech from over 3,000 male and female speakers across the nine regional dialects and five age groups, employing both direct and indirect collection methods. Speech samples from 3,191 speakers (2,829 collected directly and 362 indirectly) were collected and databased. The KSS DB collects read and spontaneous speech from each speaker through five speech tasks: three (pseudo-)spontaneous tasks (producing prolonged simple vowels, completing 28 blanked sentences, and spontaneous talk) and two read speech tasks (reading 55 phonetically and phonologically rich sentences and reading three short passages). The KSS DB includes a 16-bit, 44.1 kHz speech waveform file and an orthographic file for each speech task.
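
A minimal sketch of pairing each waveform file with its orthographic file in such a corpus is shown below. The directory layout and file naming are assumptions for illustration, not the actual KSS DB specification.

```python
# A minimal sketch: walk a hypothetical corpus directory, read each 16-bit /
# 44.1 kHz WAV file's header, and pair it with an assumed same-named .txt
# orthographic transcription.
import wave
from pathlib import Path

corpus_dir = Path("kss_db")  # hypothetical location

for wav_path in sorted(corpus_dir.glob("*.wav")):
    txt_path = wav_path.with_suffix(".txt")  # assumed orthographic file name
    with wave.open(str(wav_path), "rb") as wav:
        rate = wav.getframerate()        # expected: 44100
        bits = wav.getsampwidth() * 8    # expected: 16
        seconds = wav.getnframes() / rate
    transcript = txt_path.read_text(encoding="utf-8").strip()
    print(f"{wav_path.name}: {rate} Hz, {bits}-bit, {seconds:.1f} s -> {transcript[:40]}")
```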