• Title/Summary/Keyword: Human speech


Analysis and synthesis of Korean Vowels by LP Method (LP 방법에 의한 한국모음의 분석과 합성)

  • Son, Ho-In;Sin, Dong-Jin;An, Su-Gil
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.18 no.1
    • /
    • pp.41-50
    • /
    • 1981
  • Human speech contains many redundancies. To economize on communication-channel capacity or memory size in the computerized synthesis of human voices, it is necessary to compress the data before transmission. We have treated the human speech organ as an eighth-order dynamic system whose parameters vary in time as the person speaks. Using an analyzer of our own design, eight parameters were obtained for each of the Korean vowels [아], [어], [오], [우], [으], [이], [애], and [외], with considerable discrepancies between speakers. Supplying those parameters to a synthesizer we built, we succeeded in simulating human speech for the above-mentioned Korean vowels and observed that the synthesized vowels bear all the features of the original speakers. (A minimal LP analysis/resynthesis sketch follows this entry.)

  • PDF
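
The abstract above describes the classic linear-prediction pipeline: fit an eighth-order all-pole model to each vowel frame, then re-excite the model to resynthesize the vowel. Below is a minimal sketch of that idea in Python (NumPy/SciPy); the synthetic test frame, sampling rate, and pitch are illustrative assumptions, not the authors' analyzer or synthesizer hardware.

```python
# Minimal 8th-order LP analysis/resynthesis sketch (assumptions noted inline).
import numpy as np
from scipy.signal import lfilter

def lp_coefficients(frame, order=8):
    """Levinson-Durbin recursion: windowed frame -> (LP polynomial, residual energy)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation r[0..N-1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coefficient
        prev = a.copy()
        a[1:i] = prev[1:i] + k * prev[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

fs, pitch_hz, n = 8000, 120, 800          # assumed sample rate, pitch, and frame length
period = fs // pitch_hz

# Stand-in for one recorded vowel frame: an impulse train driven through a
# known two-formant all-pole filter, then windowed for analysis.
roots = 0.97 * np.exp(1j * np.pi * np.array([0.2, -0.2, 0.5, -0.5]))
excitation = np.zeros(n)
excitation[::period] = 1.0
frame = lfilter([1.0], np.poly(roots).real, excitation) * np.hamming(n)

a, err = lp_coefficients(frame)                              # the eight parameters
vowel = lfilter([np.sqrt(max(err, 1e-12))], a, excitation)   # resynthesized vowel
```

In this scheme only the eight coefficients, a gain, and a pitch value need to be stored or transmitted per frame, which is the data compression the abstract refers to.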

Acoustic-Phonetic Phenotypes in Pediatric Speech Disorders: An Interdisciplinary Approach

  • Bunnell, H. Timothy
    • Proceedings of the KSPS conference
    • /
    • 2006.11a
    • /
    • pp.31-36
    • /
    • 2006
  • Research in the Center for Pediatric Auditory and Speech Sciences (CPASS) is attempting to characterize, or phenotype, children with speech delays based on acoustic-phonetic evidence and to relate those phenotypes to chromosome loci believed to be related to language and speech. To achieve this goal we have adopted a highly interdisciplinary approach that merges fields as diverse as automatic speech recognition, human genetics, neuroscience, epidemiology, and speech-language pathology. In this presentation I will trace the background of this project and the rationale for our approach. Analyses based on a large amount of speech recorded from 18 children with speech delays will be presented to illustrate the approach we will take to characterize the acoustic-phonetic properties of disordered speech in young children. The ultimate goal of our work is to develop non-invasive and objective measures of speech development that can be used to better identify which children with apparent speech delays are most in need of, or would benefit most from, therapeutic services.

  • PDF

On the Role of Prefabricated Speech in L2 Acquisition Process: An Information Processing Approach

  • Boo, Kyung-Soon
    • Annual Conference on Human and Language Technology
    • /
    • 1991.10a
    • /
    • pp.196-208
    • /
    • 1991
  • This study focused on the role of prefabricated speech (routines and patterns) in the L2 acquisition process. The data consisted of spontaneous speech samples and various observational records of three Korean children learning English as an L2 in a nursery school. The specific questions addressed were: (1) What routines, patterns, and creative constructions did the children use? (2) What was the general trend in the three children's use of routines, patterns, and creative constructions over time? The data were collected over one school year by observing the children in their school, and the findings were discussed from the perspective of human information processing. The study found that prefabricated speech played a significant role in the three children's L2 acquisition. The automatic processing of prefabricated speech appeared to reduce the burden on the children's information processing systems, freeing resources for other language-development activities. The children's language development was also evident in their increased use of patterns: they were moving from heavy dependence on wholly unanalyzed routines to increased use of partly unanalyzed patterns. This increased control was the result of growth in procedural knowledge.

  • PDF

Common Speech Database Collection and Validation for Communications (한국어 공통 음성 DB구축 및 오류 검증)

  • Lee Soo-jong;Kim Sanghun;Lee Youngjik
    • MALSORI
    • /
    • no.46
    • /
    • pp.145-157
    • /
    • 2003
  • In this paper, we briefly introduce the Korean common speech database project, which has been constructing a large-scale speech database since 2002. The project aims to support the R&D environment of speech technology for industry, encouraging domestic speech companies and stimulating the domestic speech-technology market. In the first year, the resulting common speech database consisted of 25 kinds of databases covering various recording conditions such as telephone, PC, and VoIP. The speech database will be widely used for speech recognition, speech synthesis, and speaker identification. On the other hand, although the database was originally corrected manually, it still retains unknown errors and human errors. To minimize the errors in the database, we tried to find them based on recognition errors and to classify them into several kinds. To be more effective than typical recognition techniques, we will develop an automatic error detection method (a simple recognition-based error-flagging sketch follows this entry). In the future, we will try to construct new databases reflecting the needs of companies and universities.

  • PDF
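
The error screening described above compares corrected transcripts against recognizer output. The sketch below shows one plausible form of such screening: flag utterances whose transcript disagrees heavily with an ASR hypothesis. The word-error-rate threshold and the `recognize()` hook are hypothetical, not the project's actual method.

```python
# Recognition-based screening for suspect transcriptions (illustrative only).
def word_error_rate(ref, hyp):
    """Levenshtein distance over words, normalized by reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                       # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

def flag_suspect_entries(entries, recognize, threshold=0.5):
    """entries: (utterance_id, audio, transcript) triples; recognize: audio -> text."""
    return [uid for uid, audio, text in entries
            if word_error_rate(text, recognize(audio)) > threshold]
```

Flagged utterances would then be re-checked by hand, concentrating manual effort on the likeliest transcription errors.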

New Speech Enhancement Method using Psychoacoustic Criteria (심리 음향 기준을 이용한 새로운 음질 개선 방법)

  • 김대경;박장식;손경식
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.1
    • /
    • pp.56-66
    • /
    • 2001
  • A spectral subtraction algorithm using a criterion based on human perception has recently been developed. Speech processed with Virag's algorithm sounds more pleasant to a human listener than speech obtained by the classical methods. However, Virag's algorithm requires a robust voice activity detector (VAD). In the ESS (extended spectral subtraction) algorithm, which needs no VAD, the residual noise becomes more noticeable as the SNR decreases. In this paper we propose a new speech enhancement method: a combination of Wiener filtering and spectral subtraction based on the noise-masking characteristics of the human auditory system. No VAD is needed because the noise estimate can be continuously updated, even during speech activity, by the Wiener filter. Adjusting the subtraction parameter according to the masking threshold makes the residual noise inaudible (a simplified sketch of masking-controlled subtraction follows this entry). The proposed method has been compared with conventional spectral subtraction algorithms. Objective and subjective evaluations of the proposed system were performed with several noise types having different time-frequency distributions. The application of objective measures, the study of speech spectrograms, and subjective listening tests all confirm that speech enhanced with the proposed algorithm is more pleasant to a human listener.

  • PDF
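
The key idea in the abstract above is to let a psychoacoustic masking threshold control how aggressively noise is subtracted, so the residual noise stays inaudible. The sketch below is a deliberately simplified stand-in, not the authors' exact Wiener/subtraction combination: the masking threshold is approximated by heavy spectral smoothing, and the oversubtraction factor is relaxed where masking is strong.

```python
# Simplified masking-controlled spectral subtraction (assumptions noted inline).
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, fs, noise_frames=10, alpha_hi=4.0, alpha_lo=1.0, floor=0.02):
    f, t, X = stft(noisy, fs, nperseg=256)
    power = np.abs(X) ** 2
    noise = power[:, :noise_frames].mean(axis=1, keepdims=True)  # initial noise estimate
    # Crude masking proxy: smoothed noisy power; strong spectral regions mask
    # more, so less oversubtraction is needed there. (A real implementation
    # would use a Bark-scale spreading function and the absolute threshold.)
    kernel = np.hanning(9)
    kernel /= kernel.sum()
    masking = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, power)
    ratio = masking / (masking.max(axis=0, keepdims=True) + 1e-12)
    alpha = alpha_hi - (alpha_hi - alpha_lo) * ratio   # high masking -> gentle subtraction
    clean_power = np.maximum(power - alpha * noise, floor * noise)
    gain = np.sqrt(clean_power / (power + 1e-12))
    _, out = istft(gain * X, fs, nperseg=256)
    return out
```

A full implementation would also update the noise estimate frame by frame from the Wiener-filter output, which is what removes the need for a VAD in the paper's method.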

An evaluation of Korean students' pronunciation of an English passage by a speech recognition application and two human raters

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.19-25
    • /
    • 2020
  • This study examined thirty-one Korean students' pronunciation of an English passage using a speech recognition application, Speechnotes, and two Canadian raters' evaluations of their speech according to the International English Language Testing System (IELTS) band criteria, to assess the possibility of using the application as a teaching aid for pronunciation education. The results showed that the grand average percentage of correctly recognized words was 77.7%. From this moderate recognition rate, the pronunciation level of the participants was construed as intermediate or higher. The recognition rate varied depending on the composition of content words and function words in each given sentence. Frequency counts of unrecognized words by group level and word type revealed the participants' typical pronunciation problems, including fricatives and nasals. The IELTS bands chosen by the two native raters for the rainbow passage had a moderately high correlation with each other. A moderate correlation was found between the number of correctly recognized content words and the raters' bands, while an almost negligible correlation was found between the function words and the raters' bands (both measurements are sketched after this entry). From these results, the author concludes that the speech recognition application could serve as a partial aid for diagnosing an individual's or a group's pronunciation problems, but further studies are still needed to match human raters.
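
The two quantities the study relies on, per-passage word recognition rate and the correlation between recognized-word counts and rater bands, are simple to compute. Below is a minimal sketch; the matching rule is a crude set-membership proxy, and the per-student counts and bands are placeholder values, not the study's data.

```python
# Recognition rate and rate-vs-band correlation (illustrative values only).
import numpy as np

def recognition_rate(reference, recognized):
    """Percentage of reference words that appear in the recognizer output.
    (Crude proxy: ignores word order and repeated words.)"""
    ref = reference.lower().split()
    hyp = set(recognized.lower().split())
    return 100.0 * sum(w in hyp for w in ref) / max(len(ref), 1)

# Hypothetical per-student figures: correctly recognized content-word counts
# and the average IELTS band assigned by the two raters.
content_hits = np.array([52, 47, 60, 41, 55])
rater_bands = np.array([6.0, 5.5, 7.0, 5.0, 6.5])
r = np.corrcoef(content_hits, rater_bands)[0, 1]   # Pearson r
print(f"correlation between content-word hits and rater bands: {r:.2f}")
```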

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
Proceedings of the Korea Information Convergence Society Conference
    • /
    • 2008.06a
    • /
    • pp.53-56
    • /
    • 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: 1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); 2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is a real-time, speech-based question-answering system developed at LIMSI (RITEL). The architecture of the system is based on distributed modules exchanging messages through a network protocol (a toy sketch of this message-passing pattern follows this entry). The main components of the system are: RITEL, a question-answering system searching raw text, which produces a text (the answer) and attitudinal information; this attitudinal information is then processed to deliver expressive tags; and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audiovisual scene. The project also puts considerable effort into realistic 3D visual and audio rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering.

  • PDF
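
The ACQA architecture above is described as distributed modules exchanging messages over a network. The sketch below illustrates that general pattern with a length-prefixed JSON message carrying an answer plus attitudinal tags; the field names, port, and payload are invented for illustration, since the abstract does not specify the actual RITEL/ACQA protocol.

```python
# Toy message-passing pattern between modules (protocol details are invented).
import json
import socket

def send_answer(host, port, text, expressive_tags):
    """Send one length-prefixed JSON message, e.g. from the QA module toward
    the expressive-speech/animation modules."""
    msg = json.dumps({"type": "answer", "text": text,
                      "tags": expressive_tags}).encode("utf-8")
    with socket.create_connection((host, port)) as s:
        s.sendall(len(msg).to_bytes(4, "big") + msg)  # 4-byte big-endian length header

# Example (hypothetical endpoint and tags):
# send_answer("localhost", 5005, "The answer text.",
#             {"attitude": "confident", "emphasis": ["answer"]})
```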

A review of speech perception: The first step for convergence on speech engineering (말소리지각에 대한 종설: 음성공학과의 융복합을 위한 첫 단계)

  • Lee, Young-lim
    • Journal of Digital Convergence
    • /
    • v.15 no.12
    • /
    • pp.509-516
    • /
    • 2017
  • People observe many events in their environment and have no difficulty perceiving them, including speech. As with the perception of biological motion, two main theoretical camps have debated speech perception. The purpose of this review article is to briefly describe speech perception and compare these two theories. Motor theorists claim that speech perception is special to humans because we both produce and perceive articulatory events, which are processed by innate neuromotor commands. Direct perception theorists, however, claim that speech perception is no different from non-speech perception because we only need to detect information directly, as with all other kinds of events. Grasping the fundamental idea of how humans perceive articulatory events is important for convergence with speech engineering. Thus, this basic review of speech perception is expected to be useful for AI, voice recognition technology, speech recognition systems, etc.