• Title/Abstract/Keywords: Acoustical parameters


MBE 부호화용 스펙트럼 V-UV 구간 검출에 관한 연구 (On a Detection of V-UV Segments of Speech Spectrum for the MBE Coding)

  • 김을제
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1992년도 학술논문발표회 논문집 제11권 1호
    • /
    • pp.43-48
    • /
    • 1992
  • In the area of speech vocoders, the MBE vocoder offers high quality at a low bit rate. Among the MBE parameter-detection steps, the V/UV decision methods proposed so far depend heavily on other parameters, namely the fundamental frequency and formant information. In this paper, we therefore propose a new V/UV detection method that uses the zero-crossing rate of the flattened harmonic spectrum. This method reduces the influence of those other parameters on V/UV region detection.

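The zero-crossing idea in this abstract can be sketched as below: a minimal illustration assuming a simple mean-removal flattening and a hand-picked threshold, which may differ from the paper's actual flattening and decision rule.

```python
import numpy as np

def spectral_zcr_is_voiced(frame, n_fft=256, threshold=0.25):
    """Classify a frame as voiced from the zero-crossing rate of its
    flattened magnitude spectrum.  A strongly harmonic (voiced) frame
    concentrates energy in a few peaks, so the mean-removed spectrum
    crosses zero only near those peaks; a noise-like (unvoiced) frame
    fluctuates around its mean almost everywhere."""
    spec = np.abs(np.fft.rfft(frame, n=n_fft))
    flat = spec - spec.mean()                      # crude spectral flattening
    zcr = np.count_nonzero(np.diff(np.signbit(flat))) / len(flat)
    return zcr < threshold
```

A strongly periodic frame yields very few spectral mean-crossings, while white noise yields a rate near 0.5, so a mid-range threshold separates the two.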

A Robust Non-Speech Rejection Algorithm

  • Ahn, Young-Mok
    • The Journal of the Acoustical Society of Korea
    • /
    • 제17권1E호
    • /
    • pp.10-13
    • /
    • 1998
  • We propose a robust non-speech rejection algorithm that uses three pitch-related parameters: (1) the pitch range, (2) the difference between successive pitch values, and (3) the number of successive pitches satisfying constraints on the previous two parameters. The acceptance rate for speech commands was 95% on a -2.8 dB signal-to-noise ratio (SNR) speech database of 2440 utterances. In an office environment, the rejection rate for non-speech sounds was 100% while the acceptance rate for speech commands was 97%.

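The three pitch tests described in this abstract can be sketched roughly as follows; all threshold values here are invented for illustration, since the paper's exact constraints are not given in the abstract.

```python
def looks_like_speech(pitch_track, fmin=60.0, fmax=400.0,
                      max_jump=20.0, min_run=5):
    """Hypothetical sketch of the three pitch-based tests:
    (1) each pitch must fall in a plausible speech range [fmin, fmax] Hz,
    (2) successive pitches must not jump more than max_jump Hz, and
    (3) at least min_run consecutive pitch pairs must satisfy both
    constraints before the input is accepted as speech."""
    run = 0
    for prev, cur in zip(pitch_track, pitch_track[1:]):
        in_range = fmin <= cur <= fmax
        smooth = abs(cur - prev) <= max_jump
        run = run + 1 if (in_range and smooth) else 0
        if run >= min_run:
            return True
    return False
```

A smooth, in-range pitch contour passes all three tests, while the erratic pitch estimates produced by door slams or keyboard noise fail the continuity constraint.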

실의 잔향시간 측정방법 고찰 (Measurement of the Reverberation Time of Rooms with Reference to Other Acoustical Parameters)

  • 오양기;주진수;정광용;김선우
    • 한국소음진동공학회:학술대회논문집
    • /
    • 한국소음진동공학회 2001년도 춘계학술대회논문집
    • /
    • pp.230-233
    • /
    • 2001
  • Revision of KS, the Korean Standards, is currently under active discussion; it is time for new, world-class standards under the new WTO (World Trade Organization) system. This paper is part of "Researches on the Standards in the Building Acoustic Field", one of the KS revision projects. The aim of this study is to define the requirements for measuring the reverberation time of rooms with reference to other acoustical parameters. The former KS contained no items serving this purpose, so reverberation time and other room acoustical parameters could not be measured with a reliable test procedure. On this basis, a new part of KS is proposed, and the remaining problems and points for further discussion in the proposed draft are described.

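For reference, the measurement such requirements target is commonly implemented with Schroeder's backward integration of a measured room impulse response. The sketch below is a generic version of that procedure, not the KS draft's prescribed method; it evaluates the -5 to -25 dB slope (a T20-style estimate) and extrapolates it to a 60 dB decay.

```python
import numpy as np

def reverberation_time(ir, fs, db_start=-5.0, db_end=-25.0):
    """Estimate reverberation time from an impulse response:
    backward-integrate the squared IR (Schroeder curve), fit a line to
    the decay between db_start and db_end, and scale the fitted slope
    to the time needed for a 60 dB decay."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]        # backward integration
    decay = 10.0 * np.log10(energy / energy[0])    # decay curve in dB
    t = np.arange(len(ir)) / fs
    mask = (decay <= db_start) & (decay >= db_end) # evaluation range
    slope, _ = np.polyfit(t[mask], decay[mask], 1) # dB per second
    return -60.0 / slope
```

On a synthetic exponential decay the estimate recovers the designed decay time, which is a convenient sanity check before applying the routine to measured responses.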

고분자 압전 박막 센서를 이용한 사과의 충격 음파 특성 분석 (Analysis of Impact Acoustic Property of Apple Using Piezo-Polymer Film Sensor)

  • 김만수;이상대;박정학;김기복
    • 비파괴검사학회지
    • /
    • 제28권2호
    • /
    • pp.144-150
    • /
    • 2008
  • This study was conducted to examine whether the internal quality of apples can be evaluated using a PVDF (polyvinylidene fluoride) piezoelectric sensor. After a mechanical impact was applied to the apple surface, the acoustic signal transmitted through the apple was received by a PVDF sensor attached to the opposite side. As acoustic parameters for analyzing the transmitted signal, the rise time (RT), ring-down count (RC), energy (EN), event duration (ED), and peak amplitude (PA) were used in the time domain, and the spectral density (SD) in the frequency domain. As the storage period of the apples increased, the magnitude of the response signal decreased and its center frequency shifted toward lower frequencies. The acoustic parameters used in the analysis were found to be closely related to the storage period. Multiple regression analysis of the acoustic parameters against storage period yielded a very high coefficient of determination of 0.97. It is therefore expected that the internal quality of apples over storage can be evaluated using a PVDF piezoelectric sensor and these acoustic parameters.
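The time-domain parameter set named in this abstract (RT, RC, EN, ED, PA) can be sketched as below. The threshold and the exact definitions are assumptions based on standard acoustic-emission practice, not taken from the paper.

```python
import numpy as np

def impact_parameters(sig, fs, threshold):
    """Time-domain acoustic-emission-style parameters of an impact
    signal: peak amplitude, rise time (first threshold crossing to
    peak), event duration (first to last crossing), ring-down count
    (number of upward threshold crossings), and total energy."""
    above = np.flatnonzero(np.abs(sig) >= threshold)
    first, last = above[0], above[-1]
    peak = int(np.argmax(np.abs(sig)))
    return {
        "peak_amplitude": float(np.max(np.abs(sig))),
        "rise_time": (peak - first) / fs,
        "event_duration": (last - first) / fs,
        "ring_down_count": int(np.count_nonzero(
            (np.abs(sig[:-1]) < threshold) & (np.abs(sig[1:]) >= threshold))),
        "energy": float(np.sum(sig ** 2)),
    }
```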

Low Complexity Vector Quantizer Design for LSP Parameters

  • Woo, Hong-Chae
    • The Journal of the Acoustical Society of Korea
    • /
    • 제17권3E호
    • /
    • pp.53-57
    • /
    • 1998
  • Spectral information in a speech coder must be quantized accurately enough to keep the output speech perceptually transparent. In a low-bit-rate speech coder, the spectral information is usually transformed into the corresponding line spectrum pair (LSP) parameters and quantized with a vector quantization algorithm. Since vector quantization generally has high complexity in its optimal code-vector search routine, we investigate reducing that complexity using the ordering property of the line spectrum pair. When the proposed complexity-reduction algorithm is applied to the well-known split vector quantization algorithm, a 46% complexity reduction is achieved in the distortion measure computation.

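A common form of search-complexity reduction in code-vector search is partial-distance elimination, sketched below. This is a generic illustration of cutting work in the distortion computation; the paper's method additionally exploits the LSP ordering property, which this sketch does not reproduce.

```python
import numpy as np

def nearest_code_vector(x, codebook):
    """Full-search vector quantization with partial-distance
    elimination: accumulate the squared error dimension by dimension
    and abandon a candidate as soon as its partial distance already
    exceeds the best full distance found so far."""
    best_idx, best_dist = -1, np.inf
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_dist:        # early exit: cannot beat current best
                break
        else:                         # completed all dimensions: new best
            best_idx, best_dist = i, d
    return best_idx, best_dist
```

The result is identical to an exhaustive search; only the number of multiply-accumulate operations in the distortion computation shrinks.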

망각소자를 갖는 t-분포 강인 연속 추정을 이용한 음성 신호 추정에 관한 연구 (Robust Sequential Estimation based on t-distribution with forgetting factor for time-varying speech)

  • 이주헌
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1998년도 제15회 음성통신 및 신호처리 워크샵(KSCSP 98 15권1호)
    • /
    • pp.470-474
    • /
    • 1998
  • In this paper, to estimate the time-varying parameters of a speech signal, we use a robust sequential estimator (RSE) based on the t-distribution and introduce a forgetting factor for time-varying signals. Using an RSE based on a t-distribution with a small number of degrees of freedom, we can efficiently alleviate the effect of outliers and obtain better parameter-estimation performance. Moreover, with the forgetting factor, the proposed algorithm can estimate parameters accurately even under rapid variation of the speech signal.

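The combination of a t-distribution influence weight and a forgetting factor can be sketched as a robust recursive least-squares AR estimator; this is an illustration of the idea under assumed update equations, not the paper's exact recursion.

```python
import numpy as np

def robust_rls(x, order=2, nu=3.0, lam=0.99):
    """Recursive least-squares AR parameter estimation with
    (a) forgetting factor lam, which discounts old samples so the
    estimate can track time-varying parameters, and
    (b) a t-distribution weight that shrinks the update for large
    (outlier) residuals instead of letting them dominate."""
    theta = np.zeros(order)
    P = np.eye(order) * 1e3
    for n in range(order, len(x)):
        phi = x[n - order:n][::-1]          # regressor of past samples
        e = x[n] - phi @ theta              # prediction residual
        w = (nu + 1.0) / (nu + e * e)       # t-distribution weight: small for outliers
        k = P @ phi / (lam / w + phi @ P @ phi)
        theta = theta + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return theta
```

On synthetic AR(2) data the recursion recovers the generating coefficients; the weight w approaches a constant for well-behaved residuals and collapses toward zero for gross outliers.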

실내공간의 잔향시간과 음향변수 측정방법 (Measurement of the Reverberation Time of Rooms with Reference to Other Acoustical Parameters)

  • 오양기;주진수;정광용;김선우
    • 한국소음진동공학회:학술대회논문집
    • /
    • 한국소음진동공학회 2001년도 추계학술대회논문집 I
    • /
    • pp.392-396
    • /
    • 2001
  • Revision of KS, the Korean Standards, is currently under active discussion; it is time for new, world-class standards under the new WTO (World Trade Organization) system. This paper is part of "Researches on the Standards in the Building Acoustic Field", one of the KS revision projects. The aim of this study is to define the requirements for measuring the reverberation time and other major room acoustical parameters.


Spectrum 강조특성을 이용한 음성신호에서 Voiced - Unvoiced - Silence 분류 (Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of the Spectrum)

  • 배명수;안수길
    • 한국음향학회지
    • /
    • 제4권1호
    • /
    • pp.9-15
    • /
    • 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on measurements made on the signal. The parameter measured for the voiced-unvoiced classification is the area of each zero-crossing interval, given by multiplying the signal magnitude by the inverse zero-crossing rate. The parameter employed for the unvoiced-silence classification is the summation of positive areas over four-millisecond intervals of the high-frequency-emphasized speech signal.

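The zero-crossing-interval area measure can be sketched as below: a simplified reading of the description, splitting a frame at its zero crossings and summing the magnitude over each interval.

```python
import numpy as np

def zc_interval_areas(frame):
    """Split a frame at its zero crossings and return the absolute
    area of each interval.  Voiced speech yields a few large-area
    intervals (high magnitude, long intervals between crossings);
    unvoiced speech yields many small ones."""
    crossings = np.flatnonzero(np.diff(np.signbit(frame)))
    edges = np.concatenate(([0], crossings + 1, [len(frame)]))
    return [float(np.sum(np.abs(frame[a:b])))
            for a, b in zip(edges[:-1], edges[1:])]
```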

입술 파라미터 선정에 따른 바이모달 음성인식 성능 비교 및 검증 (Performance Comparison and Verification of Lip Parameter Selection Methods in the Bimodal Speech Recognition System)

  • 박병구;김진영;임재열
    • 한국음향학회지
    • /
    • 제18권3호
    • /
    • pp.68-72
    • /
    • 1999
  • In a bimodal speech recognition system, the recognition rate depends greatly on which lip parameters are selected and how robustly they are extracted. In this paper, we therefore extracted lip parameters with an automatic extraction algorithm and showed that the inner-lip parameters influence the bimodal speech recognition system more than the outer-lip parameters. We also verified the reliability of the automatic extraction algorithm by comparing it with manual (hand-labeled) extraction.


Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • 제21권2E호
    • /
    • pp.98-104
    • /
    • 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained with an HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained with an NN for recognition. The recognition rates for the combined voice and facial-expression parameters showed better performance than either set of parameters alone. The simulation results were also compared with human questionnaire results.
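As a concrete example of "their derivatives", the standard delta-regression commonly applied to such prosodic feature tracks can be sketched as follows; this is a generic formulation, since the paper's exact derivative computation is not specified in the abstract.

```python
import numpy as np

def delta(features, width=2):
    """Delta (derivative) features for a (frames x dims) feature
    matrix: a linear regression over +/- width neighbouring frames,
    with edge frames padded by repetition."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    num = sum(k * (padded[width + k:len(features) + width + k]
                   - padded[width - k:len(features) + width - k])
              for k in range(1, width + 1))
    return num / (2 * sum(k * k for k in range(1, width + 1)))
```

For a linearly rising pitch track, the interior delta values equal the per-frame slope, which makes the routine easy to sanity-check.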