• Title/Abstract/Keyword: Human speech

Search results: 571 (processing time: 0.024 s)

A New Pruning Method for Synthesis Database Reduction Using Weighted Vector Quantization

  • Kim, Sanghun;Lee, Youngjik;Keikichi Hirose
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 20, No. 4E
    • /
    • pp.31-38
    • /
    • 2001
  • A large-scale synthesis database for a unit-selection-based synthesis method usually retains redundant synthesis unit instances that contribute little to synthetic speech quality. In this paper, to eliminate those instances from the synthesis database, we propose a new pruning method called weighted vector quantization (WVQ). WVQ reflects the relative importance of each synthesis unit instance when clustering similar instances with the vector quantization (VQ) technique. The proposed method was compared with two conventional pruning methods through objective and subjective evaluations of synthetic speech quality: one that simply limits the maximum number of instances, and another based on normal VQ-based clustering. The proposed method showed the best performance at reduction rates below 50%. Above a 50% reduction rate, the synthetic speech quality is perceptibly, though not seriously, degraded. Using the proposed method, the synthesis database can be efficiently reduced without serious degradation of synthetic speech quality.

  • PDF
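
The pruning idea in the abstract above can be sketched as a weighted variant of k-means clustering. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature vectors, the importance weights, and the rule of retaining the single highest-weight instance per cluster are all invented for the example.

```python
import random

def weighted_vq_prune(instances, weights, k, iters=20, seed=0):
    """Cluster unit instances with weight-aware k-means, then keep only
    the most important instance of each cluster (hypothetical sketch)."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(instances, k)]
    assign = [0] * len(instances)
    for _ in range(iters):
        # assign each instance to its nearest centroid
        for i, v in enumerate(instances):
            dists = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centroids]
            assign[i] = dists.index(min(dists))
        # weighted centroid update: heavier (more important) instances pull harder
        for j in range(k):
            members = [i for i, a in enumerate(assign) if a == j]
            if not members:
                continue
            wsum = sum(weights[i] for i in members)
            centroids[j] = [
                sum(weights[i] * instances[i][d] for i in members) / wsum
                for d in range(len(instances[0]))
            ]
    # prune: one representative per cluster, chosen by importance weight
    kept = []
    for j in range(k):
        members = [i for i, a in enumerate(assign) if a == j]
        if members:
            kept.append(max(members, key=lambda i: weights[i]))
    return sorted(kept)
```

A plain VQ baseline corresponds to setting all weights equal; the weighted update lets perceptually important instances shape the codebook before pruning.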

Some effects of audio-visual speech in perceiving Korean

  • Kim, Jee-Sun;Davis, Chris
    • Korean Institute of Information Scientists and Engineers, SIG on Language Engineering: Conference Proceedings (Hangul and Korean Information Processing)
    • /
    • The 11th Conference on Hangul and Korean Information Processing (1999)
    • /
    • pp.335-342
    • /
    • 1999
  • The experiments reported here investigated whether seeing a speaker's face (visible speech) affects the perception and memory of Korean speech sounds. In order to exclude the possibility of top-down, knowledge-based influences on perception and memory, the experiments tested people with no knowledge of Korean. The first experiment examined whether visible speech (Auditory and Visual - AV) assists English native speakers (with no knowledge of Korean) in the detection of a syllable within a Korean speech phrase. It was found that a syllable was more likely to be detected within a phrase when the participants could see the speaker's face. The second experiment investigated whether English native speakers' judgments about the duration of a Korean phrase would be affected by visible speech. It was found that in the AV condition participants' estimates of phrase duration were highly correlated with the actual durations, whereas those in the auditory-only (AO) condition were not. The results are discussed with respect to the benefits of communication with multimodal information and future applications.

  • PDF

인간의 청각모델에 기초한 잡음환경에 적응된 잡음억압 시스템 (Adaptive Noise Suppression system based on Human Auditory Model)

  • 최재승
    • Korea Institute of Information and Communication Engineering: Conference Proceedings
    • /
    • 2008 Spring Conference of the Korea Institute of Maritime Information and Communication Sciences, Session A
    • /
    • pp.421-424
    • /
    • 2008
  • In this paper, we propose a noise suppression system adapted to noisy environments and based on a human auditory model, in order to enhance speech degraded by various background noises. The proposed system first detects voiced and unvoiced intervals, then performs adaptive auditory-model processing on each input frame. Finally, it removes the noise signal using a neural network that incorporates both amplitude and phase components, and then enhances the speech. Experiments using a signal-to-noise-ratio evaluation confirm that the system is effective for speech signals degraded by various types of noise.

  • PDF

음성기반 멀티모달 인터페이스 기술 현황 및 과제 (The Status and Research Themes of Speech based Multimodal Interface Technology)

  • 이지근;이은숙;이혜정;김봉완;정석태;정성태;이용주;한문성
    • The Korean Association of Speech Sciences: Conference Proceedings
    • /
    • Proceedings of the November 2002 Conference of the Korean Association of Speech Sciences
    • /
    • pp.111-114
    • /
    • 2002
  • The complementary use of several modalities in human-to-human communication ensures high accuracy, and few communication problems occur. Multimodal interfaces are therefore considered the next-generation interface between humans and computers. This paper presents the current status and research themes of speech-based multimodal interface technology. It first introduces the concept of a multimodal interface, then surveys recognition technologies for input modalities and synthesis technologies for output modalities, followed by modality integration technology. Finally, it presents research themes for speech-based multimodal interface technology.

  • PDF

인간의 청각 메커니즘을 적용한 웨이블렛 분석을 통한 음성 향상에 대한 연구 (A Study of Speech Enhancement through Wavelet Analysis Using the Auditory Mechanism)

  • 이준석;길세기;홍준표;홍승홍
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2002 IEEK Summer Conference (4)
    • /
    • pp.397-400
    • /
    • 2002
  • This paper studies a speech enhancement method for noisy environments. To this end, we draw on the human auditory mechanism, a highly refined system, and apply the wavelet transform. The multi-resolution property of the wavelet transform enables multiband spectrum analysis similar to that of the human ear. The method was verified to be very effective for enhancing noisy speech.

  • PDF
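
The multiband analysis that the abstract credits to the wavelet transform's multi-resolution property can be illustrated with a minimal Haar DWT. The Haar filters and the two-level depth below are illustrative assumptions, not the wavelet actually used in the paper.

```python
import math

def haar_step(x):
    """One Haar DWT level: split a signal into a low-pass approximation
    and a high-pass detail band (orthonormal scaling by sqrt(2))."""
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def multiband(x, levels):
    """Repeatedly split the low band, yielding octave-spaced subbands:
    detail bands from fine to coarse, then the final approximation."""
    bands = []
    for _ in range(levels):
        x, d = haar_step(x)
        bands.append(d)
    bands.append(x)
    return bands
```

Each level halves the bandwidth of the low band, producing the octave-spaced frequency resolution that is loosely analogous to the ear's coarser resolution at high frequencies.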

3세와 5세 유아의 혼잣말과 어머니의 비계설정과의 관계 (The Relationship between 3- and 5-year-old Children's Private Speech and Their Mothers' Scaffolding)

  • 박영순;유안진
    • Korean Journal of Human Ecology
    • /
    • Vol. 14, No. 1
    • /
    • pp.59-68
    • /
    • 2005
  • The purposes of this study were to investigate the relationship between children's private speech during an individual session and maternal scaffolding during a mother-child session. The subjects were twenty 3-year-old children and twenty 5-year-old children, with their mothers, recruited from day-care centers in Seoul. Mother-child interaction was videotaped for 15 minutes, and maternal utterances were transcribed for analysis of maternal scaffolding. Each child's individual session, held 3-5 days later, was videotaped for 15 minutes, and the children's utterances were transcribed. Subcategories of maternal scaffolding were significantly related to children's private speech during the individual session, and there appeared to be an age difference in this relationship. Among the verbal scaffolding strategies used by mothers of 3-year-olds, the other-regulation and control and praise strategies were significantly related to children's private speech; among those used by mothers of 5-year-olds, the other-regulation and control and teaching strategies were. Regarding maternal physical control, the withdrawal of physical control over the maze task over time was significantly related to children's private speech, as was the withdrawal of physical control over 5-year-olds' physical performance.

  • PDF

자연스런 인간-로봇 상호작용을 위한 음성 신호의 AM-FM 성분 분해 및 순간 주파수와 순간 진폭의 추정에 관한 연구 (AM-FM Decomposition and Estimation of Instantaneous Frequency and Instantaneous Amplitude of Speech Signals for Natural Human-robot Interaction)

  • 이희영
    • Speech Sciences
    • /
    • Vol. 12, No. 4
    • /
    • pp.53-70
    • /
    • 2005
  • Vowels in speech signals are multicomponent signals composed of AM-FM components whose instantaneous frequency and instantaneous amplitude are time-varying. Changes in emotional state cause variation in the instantaneous frequencies and instantaneous amplitudes of the AM-FM components. It is therefore important to estimate these quantities exactly in order to extract key information representing emotional states and their changes in speech signals. In this paper, a method for decomposing speech signals into AM-FM components is first addressed. Second, the fundamental frequency of a vowel sound is estimated by a simple spectrogram-based method; this estimate is used in the AM-FM decomposition. Third, an estimation method is suggested for separating the instantaneous frequencies and instantaneous amplitudes of the decomposed AM-FM components, based on the Hilbert transform and the demodulation property of the extended Fourier transform. The estimates of the instantaneous frequencies and amplitudes can be used for modifying the spectral distribution and for smoothly connecting two words in corpus-based speech synthesis systems.

  • PDF
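
The Hilbert-transform part of the estimation method can be sketched numerically: build the analytic signal in the frequency domain, then read instantaneous amplitude and frequency off its magnitude and unwrapped phase. The naive O(N²) DFT and the 25 Hz test tone are illustrative choices; this sketch does not reproduce the paper's extended-Fourier-transform demodulation.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform (for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def analytic_signal(x):
    """Analytic signal via the spectrum: zero the negative frequencies and
    double the positive ones (the discrete Hilbert-transform construction)."""
    N = len(x)
    X = dft(x)
    gain = [0.0] * N
    gain[0] = 1.0
    if N % 2 == 0:
        gain[N // 2] = 1.0  # Nyquist bin kept at unit gain
    for k in range(1, (N + 1) // 2):
        gain[k] = 2.0
    return idft([X[k] * gain[k] for k in range(N)])

def inst_amp_freq(x, fs):
    """Instantaneous amplitude and frequency (Hz) of a mono-component
    signal, from the magnitude and unwrapped phase of the analytic signal."""
    z = analytic_signal(x)
    amp = [abs(c) for c in z]
    phase = [cmath.phase(c) for c in z]
    unwrapped = [phase[0]]
    for p in phase[1:]:
        step = p - unwrapped[-1]
        step -= 2 * math.pi * round(step / (2 * math.pi))  # unwrap to (-pi, pi]
        unwrapped.append(unwrapped[-1] + step)
    freq = [(b - a) * fs / (2 * math.pi)
            for a, b in zip(unwrapped, unwrapped[1:])]
    return amp, freq
```

For a pure tone on a DFT bin, the analytic signal is exactly a complex exponential, so the recovered amplitude and frequency are constant; real AM-FM speech components would show the time variation the abstract describes.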

MRI를 이용한 조음모델시뮬레이터 구현에 관하여 (On the Implementation of Articulatory Speech Simulator Using MRI)

  • 조철우
    • Speech Sciences
    • /
    • Vol. 2
    • /
    • pp.45-55
    • /
    • 1997
  • This paper describes the implementation of an articulatory speech simulator that models the human articulatory organs and synthesizes speech from the model. The images required to construct the vocal tract model were obtained by MRI and then used to construct 2D and 3D vocal tract shapes. The 3D vocal tract shapes were constructed by spatially concatenating and interpolating sectional MRI images. The 2D vocal tract shapes were constructed and automatically analyzed into a digital filter model, and speech sounds corresponding to the model were then synthesized from the filter. All procedures in this study were implemented in MATLAB.

  • PDF
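
A common way to turn a vocal-tract shape into a digital filter, in the lossless-tube tradition this line of work builds on, is to convert the cross-sectional area function into reflection coefficients for a Kelly-Lochbaum lattice. The conversion below is that textbook step, not the authors' MATLAB code, and the area values in the example are invented.

```python
def reflection_coeffs(areas):
    """Reflection coefficient at each junction of a concatenated-tube
    vocal tract model: k_i = (A_{i+1} - A_i) / (A_{i+1} + A_i).
    A uniform tube gives all-zero reflections; area discontinuities
    create the reflections that shape the formant structure."""
    return [(a2 - a1) / (a2 + a1) for a1, a2 in zip(areas, areas[1:])]
```

These coefficients drive a lattice (all-pole) synthesis filter, which is one concrete form the "digital filter model" of the abstract can take.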

Spectrum 강조특성을 이용한 음성신호에서 Voiced-Unvoiced-Silence 분류 (Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of the Spectrum)

  • 배명수;안수길
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 4, No. 1
    • /
    • pp.9-15
    • /
    • 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on parameters measured from the signal. The parameters measured for the voiced-unvoiced classification are the areas of the zero-crossing intervals, given by multiplying the signal magnitude by the inverse zero-crossing rate of the speech signal. The parameter employed for the unvoiced-silence classification is the summation of positive areas over each four-millisecond interval of the high-frequency-emphasized speech signal.

  • PDF
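
A toy version of such a frame classifier, using frame energy and zero-crossing rate as crude stand-ins for the paper's zero-crossing-interval area and positive-area parameters (the thresholds are invented), might look like this:

```python
def classify_segment(x, zcr_thresh=0.2, energy_thresh=0.01):
    """Label a frame as voiced, unvoiced, or silence from two simple
    parameters: mean energy and zero-crossing rate."""
    n = len(x)
    energy = sum(s * s for s in x) / n
    zcr = sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / (n - 1)
    if energy < energy_thresh:
        return "silence"  # too little energy to be speech at all
    # voiced speech is low-frequency dominated (few zero crossings);
    # unvoiced (fricative-like) speech crosses zero far more often
    return "unvoiced" if zcr > zcr_thresh else "voiced"
```

The intuition matches the abstract: voiced frames have long zero-crossing intervals (large areas), unvoiced frames have short ones, and silence has negligible amplitude either way.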