• Title/Summary/Keyword: natural speech

Search Results: 320

A Parallel Speech Recognition Model on Distributed Memory Multiprocessors (분산 메모리 다중프로세서 환경에서의 병렬 음성인식 모델)

  • 정상화;김형순;박민욱;황병한
    • The Journal of the Acoustical Society of Korea / v.18 no.5 / pp.44-51 / 1999
  • This paper presents a massively parallel computational model for the efficient integration of speech and natural language understanding. The phoneme model is based on a continuous hidden Markov model with context-dependent phonemes, and the language model is based on a knowledge-base approach. To construct the knowledge base, we adopt a hierarchically structured semantic network and a memory-based parsing technique that employs parallel marker-passing as an inference mechanism. Our parallel speech recognition algorithm is implemented on a multi-Transputer system, a distributed-memory MIMD multiprocessor. Experimental results show that the parallel speech recognition system achieves better recognition accuracy than a word-network-based speech recognition system, and the accuracy improves further when code-phoneme statistics are applied. In addition, speedup experiments demonstrate the feasibility of a real-time parallel speech recognition system. (A toy marker-passing sketch follows this entry.)

  • PDF
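As a rough illustration of the memory-based parsing idea, the sketch below passes markers upward through a toy semantic network and reads off the nodes where markers launched from different recognized words collide. The network, node names, and collision rule are all invented for illustration; they are not from the paper.

```python
# Minimal sketch of parsing by parallel marker-passing over a toy
# semantic network; nodes and links are hypothetical examples.
from collections import deque

# Hierarchically structured semantic network: node -> more general concepts.
NETWORK = {
    "BA-212":  ["flight"],
    "flight":  ["air-travel"],
    "depart":  ["air-travel"],
    "gate-3":  ["airport-location"],
    "air-travel": ["atc-domain"],
    "airport-location": ["atc-domain"],
}

def pass_markers(seed, network):
    """Propagate a marker upward from one recognized word (seed node)."""
    marked, frontier = {seed}, deque([seed])
    while frontier:
        node = frontier.popleft()
        for parent in network.get(node, []):
            if parent not in marked:
                marked.add(parent)
                frontier.append(parent)
    return marked

def interpret(words, network):
    """Nodes reached by every word's marker: candidate interpretations."""
    markings = [pass_markers(w, network) for w in words if w in network]
    return set.intersection(*markings) if markings else set()

print(interpret(["BA-212", "depart"], NETWORK))
# {'air-travel', 'atc-domain'}: the two words' markers collide in the
# shared concepts, suggesting an air-travel reading of the utterance.
```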

A Relationship of Tone, Consonant, and Speech Perception in Audiological Diagnosis

  • Han, Woo-Jae;Allen, Jont B.
    • The Journal of the Acoustical Society of Korea / v.31 no.5 / pp.298-308 / 2012
  • This study was designed to examine the phoneme recognition errors of hearing-impaired (HI) listeners on a consonant-by-consonant basis, to show (1) how each HI ear perceives individual consonants differently and (2) how standard clinical measurements (i.e., tone- and word-based tests) fail to predict these differences. Sixteen English consonant-vowel (CV) syllables at six signal-to-noise ratios in speech-weighted noise were presented at the most comfortable level to ears with mild-to-moderate sensorineural hearing loss. The findings were as follows: (1) individual HI listeners with symmetrical pure-tone thresholds showed different consonant-loss profiles (CLPs), i.e., the likelihood of misperceiving each of the 16 English consonants, in the right and left ears. (2) A similar result was found across subjects: paired ears of different HI individuals with identical pure-tone thresholds showed different CLPs from each other. (3) Paired HI ears having the same average consonant score demonstrated completely different CLPs. We conclude that the standard clinical measurements are limited in their ability to predict the extent to which speech perception is degraded in HI ears; they are thus a necessary, but not sufficient, measurement for HI speech perception. This suggests that the CV measurement would be a useful clinical tool.
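A minimal sketch of the consonant-loss-profile idea, assuming made-up error counts rather than the study's data: two ears can share the same average consonant score while misperceiving entirely different consonants.

```python
# Illustrative consonant-loss profiles (CLPs) for two ears.
# All counts below are invented placeholders, not study data.
CONSONANTS = ["p", "t", "k", "b", "d", "g", "f", "s"]

def clp(errors, trials):
    """Map each consonant to its probability of being misperceived."""
    return {c: errors[c] / trials[c] for c in errors}

right_ear = clp({"p": 1, "t": 9, "k": 2, "b": 0, "d": 8, "g": 1, "f": 2, "s": 1},
                trials={c: 20 for c in CONSONANTS})
left_ear  = clp({"p": 8, "t": 1, "k": 1, "b": 9, "d": 0, "g": 2, "f": 2, "s": 1},
                trials={c: 20 for c in CONSONANTS})

avg = lambda profile: sum(profile.values()) / len(profile)
print(f"average error: right={avg(right_ear):.2f}, left={avg(left_ear):.2f}")
# Equal averages, yet /t, d/ vs /p, b/ dominate the losses in different ears:
print({c: (right_ear[c], left_ear[c]) for c in ["p", "t", "b", "d"]})
```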

Implementation of Text-to-Audio Visual Speech Synthesis Using Key Frames of Face Images (키프레임 얼굴영상을 이용한 시청각음성합성 시스템 구현)

  • Kim MyoungGon;Kim JinYoung;Baek SeongJoon
    • MALSORI / no.43 / pp.73-88 / 2002
  • In this paper, a key-frame-based lip-synch algorithm using RBFs (radial basis functions) is presented for natural facial synthesis. For lip synthesis, viseme range parameters are derived from the phoneme and duration information produced by a text-to-speech (TTS) system, and the viseme information matching each phoneme is extracted from an audio-visual (AV) database. A dominance function is applied to model the coarticulation phenomenon, and bilinear interpolation is applied to reduce computation time. Lip-synch is then performed by playing the images interpolated between phonemes in time with the TTS speech output. (An illustrative dominance-function sketch follows this entry.)

  • PDF
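The sketch below illustrates coarticulation modelling with dominance functions, in the general Cohen-Massaro style the abstract alludes to. The Gaussian shape, lip-opening targets, and timing values are assumptions for illustration, not the paper's parameters.

```python
# Sketch of coarticulation via dominance functions: each viseme exerts
# a time-varying influence on the lip shape, and influences are blended.
import math

def dominance(t, center, width):
    """Gaussian dominance: how strongly a viseme controls the lips at t."""
    return math.exp(-((t - center) / width) ** 2)

def lip_opening(t, visemes):
    """Blend per-viseme lip-opening targets, weighted by dominance."""
    weights = [dominance(t, center, width) for (_, center, width) in visemes]
    targets = [target for (target, _, _) in visemes]
    return sum(w * x for w, x in zip(weights, targets)) / sum(weights)

# (target opening, center time in s, width) per phoneme, with the timing
# imagined to come from TTS duration output as in the abstract.
visemes = [(0.1, 0.00, 0.08),   # /m/ -- closed lips
           (0.9, 0.15, 0.10),   # /a/ -- wide open
           (0.3, 0.30, 0.08)]   # /u/ -- rounded
for t in (0.0, 0.1, 0.2, 0.3):
    print(f"t={t:.1f}s  opening={lip_opening(t, visemes):.2f}")
```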

Japanese Speech Based Fuzzy Man-Machine Interface of Manipulators

  • Izumi, Kiyotaka;Watanabe, Keigo;Tamano, Yuya;Kiguchi, Kazuo
    • 제어로봇시스템학회:학술대회논문집 (Proceedings of the Institute of Control, Robotics and Systems Conference) / 2003.10a / pp.603-608 / 2003
  • Recently, personal and home robots have been under development by many companies and research groups. Speech is considered a generally effective interface for the users of such robots. In this paper, a Japanese-speech-based man-machine interface system is discussed that uses fuzzy reasoning to reflect the fuzziness of natural language in robot control. The present system consists of a part that derives an action command and a part that modifies the derived command. In particular, a problem unique to Japanese is solved by applying the morphological analyzer ChaSen. The proposed system is applied to the motion control of a robot manipulator. The experimental results show that the system can modify the same voice command into different actual command levels according to the current state of the robot. (A toy fuzzy-scaling sketch follows this entry.)

  • PDF
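A minimal sketch of how a fuzzy adverb in a spoken command could scale a motion command via membership functions and centroid defuzzification. The adverbs, membership functions, and gains are invented for illustration; the paper's actual rule base is not reproduced here.

```python
# Toy fuzzy scaling of a motion command by a spoken Japanese adverb.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a normalized "amount" universe [0, 1].
ADVERBS = {"sukoshi (a little)": lambda x: tri(x, 0.0, 0.2, 0.5),
           "motto (more)":       lambda x: tri(x, 0.3, 0.6, 0.9),
           "takusan (a lot)":    lambda x: tri(x, 0.6, 1.0, 1.4)}

def defuzzify(mu, n=101):
    """Centroid of a membership function sampled over [0, 1]."""
    xs = [i / (n - 1) for i in range(n)]
    return sum(x * mu(x) for x in xs) / sum(mu(x) for x in xs)

base_step = 10.0  # degrees of joint motion for the bare command "move"
for adverb, mu in ADVERBS.items():
    print(f"'move {adverb}' -> {base_step * defuzzify(mu):.1f} deg")
```

In the paper's setting, the robot's current state would additionally feed the reasoning, so the same utterance can yield different motion levels; the sketch omits that input for brevity.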

Teaching English Restructuring and Post-lexical Phenomena (영어 발화의 재구조와 후-어휘 음운현상의 지도)

  • Lee Sunbeom
    • Proceedings of the KSPS conference / 2002.11a / pp.169-172 / 2002
  • English is a stress-timed language with a dynamic rhythm, stress, and a tendency toward isochrony of stressed syllables. This gives rise to various kinds of utterance restructuring, irrespective of the pauses at syntactic boundaries, and to post-lexical phonological phenomena. In real speech acts in particular, the natural utterances of fluent speakers and broadcast speech exhibit even more varied restructuring and phonological phenomena. This has been an obstacle for students in speaking fluent English and in understanding natural speech. Therefore, this study first focuses on the most problematic factors in English speaking and listening, namely restructuring and post-lexical phonological phenomena caused by stress-timed rhythm, and second points out the importance of teaching English rhythm with these factors in mind.

  • PDF

Emotion Recognition based on Multiple Modalities

  • Kim, Dong-Ju;Lee, Hyeon-Gu;Hong, Kwang-Seok
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.228-236 / 2011
  • Emotion recognition plays an important role in human-computer interaction research, as it allows more natural, human-like communication between humans and computers. Most previous work on emotion recognition has focused on extracting emotion from the face, speech, or EEG separately. This paper therefore presents a novel approach that combines face, speech, and EEG to recognize human emotion. The individual matching scores obtained from face, speech, and EEG are combined by a weighted summation, and the fused score is used to classify the emotion. In the experiments, the proposed approach gives an improvement of more than 18.64% over the most successful unimodal approach and also outperforms approaches that integrate only two modalities. These results confirm that the proposed approach achieves a significant performance improvement.
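A minimal sketch of the weighted-summation score fusion described above, assuming placeholder per-emotion matching scores and weights rather than the paper's tuned values:

```python
# Weighted-sum fusion of per-emotion matching scores from three
# modalities; scores and weights below are illustrative placeholders.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(scores_by_modality, weights):
    """Weighted sum of per-emotion scores; the argmax is the decision."""
    fused = {e: sum(weights[m] * scores_by_modality[m][e] for m in weights)
             for e in EMOTIONS}
    return max(fused, key=fused.get), fused

scores = {
    "face":   {"happy": 0.55, "sad": 0.10, "angry": 0.20, "neutral": 0.15},
    "speech": {"happy": 0.30, "sad": 0.15, "angry": 0.40, "neutral": 0.15},
    "eeg":    {"happy": 0.40, "sad": 0.20, "angry": 0.25, "neutral": 0.15},
}
label, fused = fuse(scores, weights={"face": 0.5, "speech": 0.3, "eeg": 0.2})
print(label, {e: round(v, 3) for e, v in fused.items()})
```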

A Study of Speech Coding for the Transmission on Network by the Wavelet Packets (Wavelet Packet을 이용한 Network 상의 음성 코드에 관한 연구)

  • Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.3028-3030 / 2000
  • In general, speech coding is dedicated to compression performance or speech quality. The speech coding in this paper, however, focuses on transmission that adapts flexibly to the network speed. This calls for subband coding, here based on the wavelet packet concept from signal analysis. Extracting each frequency band is difficult with general signal-analysis methods, and reconstructing the signal after coding each band is also a difficult problem. With the wavelet packet concept (perfect reconstruction) and its fast computation algorithm, however, both band extraction and reconstruction become more natural. The paper also describes a direct solution for voice transmission over a network and implements the algorithm in a TCP/IP network environment on PCs. (A small wavelet-packet sketch follows this entry.)

  • PDF
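A small sketch of wavelet-packet subband coding using the PyWavelets library, whose WaveletPacket class provides the perfect-reconstruction decomposition the abstract relies on. Dropping the upper bands stands in for adapting the bit rate to the network speed; the test signal, wavelet choice, and band split are assumptions for illustration.

```python
# Wavelet-packet subband split and reconstruction with PyWavelets.
import numpy as np
import pywt

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 2500 * t)

# Decompose one frame into 2**3 = 8 roughly equal-width frequency bands.
wp = pywt.WaveletPacket(speech, wavelet="db4", mode="symmetric", maxlevel=3)
bands = wp.get_level(3, order="freq")   # leaf nodes, low to high band

# "Transmit" only the lowest bands when the network is slow: zero the rest.
for node in bands[4:]:
    node.data = np.zeros_like(node.data)

decoded = wp.reconstruct(update=False)[: len(speech)]
snr = 10 * np.log10(np.sum(speech**2) / np.sum((speech - decoded) ** 2))
print(f"kept 4/8 bands, SNR = {snr:.1f} dB")
```

With all eight bands kept, the reconstruction is essentially perfect, which is the property that makes the per-band coding and flexible dropping workable.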

Voice Similarities between Brothers

  • Ko, Do-Heung;Kang, Sun-Mee
    • Speech Sciences / v.9 no.2 / pp.1-11 / 2002
  • This paper aims to provide a guideline for modelling speaker identification and speaker verification by comparing voice similarities between brothers. Five pairs of brothers believed to have similar voices participated in this experiment. Before the experiment, perceptual tests were conducted to verify that the voices of each pair were indeed similar. The words were measured in both isolation and context, and the subjects were asked to read the material five times with about three seconds between readings. Recordings were made at natural speed in a quiet room. The data were analyzed for pitch and formant frequencies using CSL (Computerized Speech Lab), PCQuirer, and MDVP (Multi-Dimensional Voice Program). The data for initial vowels were found to be much more similar and homogeneous than those for vowels in other positions. The acoustic data showed that voice similarities are strikingly high in both pitch and formant frequencies. It was also found that the correlation between the above parameters was not significant. (A small correlation sketch follows this entry.)

  • PDF
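A minimal sketch of the kind of similarity comparison described above: correlating pitch (F0) and formant measurements between two speakers. The measurement values are illustrative placeholders, not the study's data.

```python
# Correlate pitch and formant series between two speakers.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two measurement series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical F0 and F1 values (Hz) for the same five initial vowels,
# one value per vowel for each brother.
f0_a = [118, 125, 112, 130, 121]
f0_b = [121, 128, 110, 133, 124]
f1_a = [730, 270, 530, 660, 390]
f1_b = [740, 280, 520, 650, 400]

print(f"F0 correlation: {pearson(f0_a, f0_b):.3f}")
print(f"F1 correlation: {pearson(f1_a, f1_b):.3f}")
```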

Voice Similarities between Sisters

  • Ko, Do-Heung
    • Speech Sciences / v.8 no.3 / pp.43-50 / 2001
  • This paper deals with voice similarities between sisters, who can be assumed to share physiological characteristics inherited from the same biological mother. Nine pairs of sisters believed to have similar voices participated in this experiment. The speech samples from one pair of sisters were excluded from the analysis because their perceptual score was relatively low. The words were measured in both isolation and context, and the subjects were asked to read the text five times with about three seconds between readings. Recordings were made at natural speed in a quiet room. The data were analyzed for pitch and formant frequencies using CSL (Computerized Speech Lab) and PCQuirer. The data for initial vowels were found to be much more similar and homogeneous than those for vowels in other positions. The acoustic data showed that voice similarities are strikingly high in both pitch and formant frequencies. The statistical data obtained from this experiment may serve as a guideline for modelling speaker identification and speaker verification.

  • PDF

PASS: A Parallel Speech Understanding System

  • Chung, Sang-Hwa
    • Journal of Electrical Engineering and Information Science / v.1 no.1 / pp.1-9 / 1996
  • A key issue in spoken language processing is the integration of speech understanding and natural language processing (NLP). This paper presents a parallel computational model for the integration of speech and NLP. The model adopts a hierarchically structured knowledge base and memory-based parsing techniques. Processing is carried out by passing multiple markers in parallel through the knowledge base. Speech-specific problems such as insertion, deletion, and substitution have been analyzed, and parallel solutions are provided. The complete system has been implemented on the Semantic Network Array Processor (SNAP) and is operational. Results show an 80% sentence recognition rate for the Air Traffic Control domain. Moreover, a 15-fold speed-up is obtained over an identical sequential implementation, with the speed advantage increasing as the size of the knowledge base grows.

  • PDF