• Title/Abstract/Keyword: Auditory Scene Analysis

Search results: 13 items

Towards Size of Scene in Auditory Scene Analysis: A Systematic Review

  • Kwak, Chanbeom;Han, Woojae
    • Journal of Audiology & Otology / Volume 24, Issue 1 / pp.1-9 / 2020
  • Auditory scene analysis is defined as a listener's ability to segregate a meaningful message from meaningless background noise in a listening environment. To gain a better understanding of auditory perception in terms of message integration and segregation among concurrent signals, we aimed to systematically review the size of auditory scenes among individuals. A total of seven electronic databases were searched from 2000 to the present using related key terms. Applying our inclusion criteria, 4,507 articles were classified through four sequential steps: identification, screening, eligibility, and inclusion. Following study selection, the quality of the four included articles was evaluated using the CAMARADES checklist. In general, the studies concluded that the size of the auditory scene increased with the number of sound sources; however, once the number of sources reached five or more, the listener's auditory scene analysis reached its maximum capability. Unfortunately, the study-quality scores were not very high, and the number of articles available for calculating mean effect size and statistical significance was insufficient to draw firm conclusions. We suggest that further studies use designs and materials that reflect realistic listening environments to deepen understanding of the nature of auditory scene analysis across various groups.

Functional Analysis of Music Used in Film

Application of Shape Analysis Techniques for Improved CASA-Based Speech Separation

  • 이윤경;권오욱
    • Malsori (Journal of the Korean Society of Phonetic Sciences and Speech Technology) / No. 65 / pp.153-168 / 2008
  • We propose a new method for applying shape analysis techniques to a computational auditory scene analysis (CASA)-based speech separation system. A conventional CASA-based system extracts speech signals from a mixture of speech and noise. In the proposed method, we recover missing speech signals by applying shape analysis techniques such as labelling and distance functions. In speech separation experiments, the proposed method improves the signal-to-noise ratio by 6.6 dB. When used as a front-end for speech recognizers, it improves recognition accuracy by 22% under speech-shaped stationary noise and by 7.2% under two-talker noise at target-to-masker ratios greater than or equal to -3 dB.

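As a rough illustration of how labelling and a distance function can repair a CASA time-frequency mask, the sketch below fills small gaps inside speech-dominant regions and discards tiny spurious fragments. It is a minimal sketch, not the authors' system: the mask layout, the `repair_mask` helper, and the `max_gap`/`min_region` thresholds are all assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, label

def repair_mask(mask, max_gap=2, min_region=20):
    """mask: 2-D boolean array (frequency x time), True = speech-dominant unit."""
    # Distance function: fill T-F units lying close to existing speech regions,
    # recovering speech "holes" punched out by noise.
    dist = distance_transform_edt(~mask)   # distance to the nearest speech unit
    filled = mask | (dist <= max_gap)
    # Labelling: group connected T-F units and drop tiny spurious fragments.
    labels, _ = label(filled)
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_region
    keep[0] = False                        # label 0 is the background
    return keep[labels]

# Example: a small gap inside a speech region gets filled back in.
m = np.zeros((64, 100), dtype=bool)
m[10:20, 30:60] = True
m[14, 45] = False                          # missing unit inside speech
print(repair_mask(m)[14, 45])              # True
```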

A Study on Voice Activity Detection Using Auditory Scene and Periodic-to-Aperiodic Component Ratio in a CASA System

  • 김정호;고형화;강철호
    • Journal of the Institute of Electronics Engineers of Korea / Volume 50, Issue 10 / pp.181-187 / 2013
  • Through auditory scene analysis, human hearing can pick out a speech signal of interest even against background noise or when several people speak at once. A CASA system, which models this human auditory capability, can be used to separate speech. However, the performance of a CASA system degrades when the position of speech within a CASA segment is determined incorrectly. In this paper, to reduce the performance loss caused by incorrectly located voice regions in the CASA system, we propose a voice activity detection algorithm that combines the auditory scene with the periodic-to-aperiodic component ratio (PAR). To evaluate voice activity detection performance, experiments were conducted under white noise and car noise across a range of signal-to-noise ratios. Comparing the proposed algorithm against existing algorithms (pitch-based and Guoning Hu's) at signal-to-noise ratios from 15 dB down to 0 dB, voice activity detection accuracy improved by up to 4% at 15 dB and up to 34% at 0 dB for both white noise and car noise.
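
The abstract does not give the PAR computation, but a common reading is the ratio of correlated (periodic) to uncorrelated (aperiodic) frame energy derived from the normalized autocorrelation peak. The sketch below illustrates only that PAR half of the detector, without the auditory-scene cues; the `par`/`vad` helpers and the threshold are hypothetical.

```python
import numpy as np

def par(frame, fs, f0_min=80.0, f0_max=400.0):
    """Periodic-to-aperiodic energy ratio of one frame, from the normalized
    autocorrelation peak in the plausible pitch range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0.0:
        return 0.0
    lo, hi = int(fs / f0_max), int(fs / f0_min)
    r = np.clip(ac[lo:hi].max() / ac[0], 0.0, 0.999)
    return r / (1.0 - r)          # correlated share of energy vs. the rest

def vad(signal, fs, frame_len=400, hop=160, threshold=1.0):
    """One boolean per frame: True where PAR exceeds the threshold."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([par(signal[i * hop:i * hop + frame_len], fs) > threshold
                     for i in range(n_frames)])

fs = 16000
t = np.arange(fs) / fs
voiced = 0.5 * np.sin(2 * np.pi * 150 * t)            # periodic, speech-like
noise = 0.5 * np.random.randn(fs)                     # aperiodic
print(vad(voiced, fs).mean(), vad(noise, fs).mean())  # near 1.0 vs. near 0.0
```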

CASA-Based Approach to Estimate Acoustic Transfer Function Ratios

  • 신민규;고한석
    • The Journal of the Acoustical Society of Korea / Volume 33, Issue 1 / pp.54-59 / 2014
  • This paper proposes an algorithm for estimating the relative transfer function (RTF) between microphones in nonstationary noise environments. Multi-microphone noise reduction is widely used in speech-driven devices, and it requires estimating the relationship between the input signals at each microphone. We propose a method that grafts computational auditory scene analysis (CASA) onto the conventional estimation scheme based on the optimally-modified log-spectral amplitude (OM-LSA). To verify the proposed method, RTF estimation experiments were performed using utterances from ten speakers in nonstationary white Gaussian noise. In an environment where the noise level rose and fell by 8 dB per second, the signal blocking factor (SBF) improved by 2.65 dB on average.
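
A minimal sketch of the quantity being estimated, assuming the speech-dominant frames have already been selected: the RTF is the ratio of accumulated cross- and auto-spectra over those frames. The CASA-plus-OM-LSA frame selection that is the paper's actual contribution is not reproduced here; `estimate_rtf` and its parameters are illustrative.

```python
import numpy as np

def estimate_rtf(x1, x2, speech_frames, nfft=512, hop=256):
    """x1, x2: time-domain mic signals; speech_frames: indices of frames judged
    speech-dominant. Returns the RTF H(f) = A2(f)/A1(f) relative to mic 1."""
    win = np.hanning(nfft)
    cross = np.zeros(nfft // 2 + 1, dtype=complex)
    power = np.zeros(nfft // 2 + 1)
    for i in speech_frames:
        s = slice(i * hop, i * hop + nfft)
        X1 = np.fft.rfft(win * x1[s])
        X2 = np.fft.rfft(win * x2[s])
        cross += X2 * np.conj(X1)                 # cross-PSD accumulation
        power += np.abs(X1) ** 2                  # reference auto-PSD
    return cross / np.maximum(power, 1e-12)

# Toy check: mic 2 sees mic 1's signal delayed by 3 samples and halved.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(16000)
x2 = 0.5 * np.roll(x1, 3)
h = estimate_rtf(x1, x2, speech_frames=range(40))
print(np.round(np.abs(h[1:10]), 2))               # magnitudes near 0.5
```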

Separation of Single Channel Mixture Using Time-domain Basis Functions

  • Jang, Gil-Jin;Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea / Volume 21, Issue 4E / pp.146-155 / 2002
  • We present a new technique for achieving source separation when given only a single-channel recording. The main idea is to exploit the inherent time structure of sound sources by learning a priori sets of time-domain basis functions that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single-channel data and the sets of basis functions. For each time point we infer the source parameters and their contribution factors. This inference is possible due to the prior knowledge of the basis functions and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observations, and our experimental results exhibit a high level of separation performance for simulated mixtures as well as real-environment recordings of two different sources. We show separation results for two music signals as well as for two voice signals.
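
A toy version of the frame-wise inference step may help: each mixture frame is explained as A1·s1 + A2·s2 with a sparse prior on the coefficients, and each source is rebuilt from its own basis set's part of the solution. The bases below (smooth cosine atoms vs. impulsive identity atoms) are stand-ins for the learned time-domain basis functions, and the ISTA solver replaces the paper's maximum-likelihood inference with a Laplacian-prior MAP estimate.

```python
import numpy as np

def ista(A, x, lam=0.5, n_iter=500):
    """Sparse coefficients s minimizing ||x - A s||^2 + lam * ||s||_1."""
    L = np.linalg.norm(A, 2) ** 2                    # gradient Lipschitz scale
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = s + A.T @ (x - A @ s) / L                # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)  # shrinkage
    return s

N = 64
A1 = np.cos(np.pi * np.outer(np.arange(N) + 0.5, np.arange(8)) / N)  # smooth atoms
A2 = np.eye(N)                                       # impulsive atoms
A = np.hstack([A1, A2])

src1 = A1 @ np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # "tonal" source
src2 = np.zeros(N)
src2[[10, 40]] = 3.0                                 # "click" source
s = ista(A, src1 + src2)
rec1, rec2 = A1 @ s[:8], A2 @ s[8:]                  # per-source reconstructions
print(round(np.corrcoef(rec1, src1)[0, 1], 2),       # both close to 1.0
      round(np.corrcoef(rec2, src2)[0, 1], 2))
```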

Visual, Auditory, and Acoustic Study on Singing Vowels of Korean Lyric Songs

  • 이재강
    • Proceedings of the Korean Society of Phonetic Sciences and Speech Technology Conference / October 1996 / pp.362-366 / 1996
  • This paper consists of two parts: a study of the vowels in Korean singers' lyric songs in terms of Daniel Jones' Cardinal Vowels, and an acoustic study of the vowels in the author's own singing of Korean lyric songs. The analysis data are a KBS concert video tape and CSL .NSP files of the author's singing; the informants are famous singers (three sopranos, one mezzo-soprano, two tenors, one baritone) and the author. The aim is to determine the quality of eight Korean vowels ([equation omitted]) in singing. The vowels are described as closed, half-closed, half-open, or open, rounded or unrounded, and by their formants. The first study was carried out by pausing the monitor on the scene to be analyzed; the second by analyzing spectrograms converted from the CSL .SP files. The results are as follows. Visually and auditorily, Korean vowel quality in singing shows three tendencies: vowels are more rounded than in ordinary spoken Korean, centralized toward the center of the Cardinal Vowel diagram, and more diverse in quality. The acoustic analysis examined four formants. F1 and F2 show patterns similar to speech, with F1 values nearly identical, which suggests the vocal organs register the singing situation. The range of F3 is the widest of all, so F3 may be a distinctive characteristic of singing. In conclusion, compared with ordinary spoken Korean vowels, the vowels of Korean lyric songs tend to be rounded, centralized toward the center of the Cardinal Vowel diagram, and diverse in quality, with the widest variation in F3.

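For the acoustic half of the study, formant values like the F1-F4 discussed above are conventionally measured by linear prediction. The sketch below shows a standard LPC root-finding formant estimator in place of the CSL software used in the paper; the frame length, model order, and pole-sharpness threshold are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def formants(frame, fs, order=8):
    """Estimate formant frequencies as the angles of sharp LPC poles."""
    frame = frame * np.hamming(len(frame))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Autocorrelation method: Toeplitz normal equations for the LPC coefficients.
    a = solve_toeplitz(ac[:order], ac[1:order + 1])
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[(roots.imag > 0) & (np.abs(roots) > 0.9)]  # sharp resonances only
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# Toy "vowel" with resonances near 700 Hz (F1-like) and 1200 Hz (F2-like).
fs = 8000
t = np.arange(512) / fs
frame = (np.sin(2 * np.pi * 700 * t) + 0.7 * np.sin(2 * np.pi * 1200 * t)
         + 0.01 * np.random.randn(512))
print(np.round(formants(frame, fs)))      # approximately [ 700. 1200.]
```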

CASA-Based Front-End Using Two-Channel Speech for the Performance Improvement of Speech Recognition in Noisy Environments

  • 박지훈;윤재삼;김홍국
    • Proceedings of the IEEK 2007 Summer Conference / pp.289-290 / 2007
  • To improve the performance of a speech recognition system in the presence of noise, we propose a noise-robust front-end that uses two-channel speech signals, separating speech from noise based on computational auditory scene analysis (CASA). The main cues for the separation are the interaural time difference (ITD) and interaural level difference (ILD) between the two-channel signals. From the separated speech components, 39 cepstral coefficients are extracted. Speech recognition experiments show that the proposed front-end outperforms the ETSI front-end operating on single-channel speech.

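A compact sketch of the two cues the front-end relies on: per time-frequency unit, the interaural phase (time) and level differences are compared with the values expected for the target direction, and a binary mask keeps the matching units. An STFT stands in for the auditory filterbank a CASA system would normally use, and the thresholds and the `itd_ild_mask` helper are assumptions.

```python
import numpy as np

def itd_ild_mask(left, right, nfft=512, hop=256, max_ipd=0.5, max_ild_db=3.0):
    """Binary mask keeping T-F units whose interaural cues match a broadside
    target (ITD ~ 0, ILD ~ 0 dB). Returns (masked left STFT, mask)."""
    win = np.hanning(nfft)
    frames = range((len(left) - nfft) // hop + 1)
    SL = np.array([np.fft.rfft(win * left[i * hop:i * hop + nfft]) for i in frames])
    SR = np.array([np.fft.rfft(win * right[i * hop:i * hop + nfft]) for i in frames])
    ipd = np.angle(SL * np.conj(SR))          # interaural phase (time) difference
    ild = 20 * np.log10((np.abs(SL) + 1e-12) / (np.abs(SR) + 1e-12))
    mask = (np.abs(ipd) < max_ipd) & (np.abs(ild) < max_ild_db)
    return SL * mask, mask

# Toy scene: target identical at both mics, interferer delayed at the right mic.
rng = np.random.default_rng(1)
target = rng.standard_normal(16000)
interferer = rng.standard_normal(16000)
left = target + interferer
right = target + np.roll(interferer, 8)       # 8-sample ITD for the interferer
masked, mask = itd_ild_mask(left, right)
print(round(mask.mean(), 2))                  # fraction of T-F units retained
```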