• Title/Abstract/Keyword: Sound Signal

Search results: 898 items (processing time: 0.027 s)

Wavelet Transform을 이용한 Heart Sound Analysis (Analysis of Heart Sound Using the Wavelet Transform)

  • 위지영;김중규
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 제13회 신호처리 합동 학술대회 논문집
    • /
    • pp.959-962
    • /
    • 2000
  • A heart sound analysis algorithm has been developed that separates the heart sound signal into four parts: the first heart sound, the systolic period, the second heart sound, and the diastolic period. The algorithm applies discrete intensity envelopes of the approximations obtained from the wavelet transform to the phonocardiogram (PCG) signal. The heart sound is a highly nonstationary signal, so in its analysis it is important to study both frequency and time information. Furthermore, the wavelet transform provides additional features and characteristics of the PCG signal that help physicians obtain qualitative and quantitative measurements of the heart sound.

  • PDF
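
As a rough illustration of the envelope idea in the abstract above (not the authors' implementation), the sketch below reconstructs a wavelet approximation of a PCG signal with PyWavelets and thresholds its intensity envelope to mark candidate heart-sound lobes; the wavelet choice, decomposition level, and threshold factor are assumptions, not values from the paper.

```python
# A minimal sketch, assuming PyWavelets and a mono PCG array `pcg` sampled at `fs` Hz.
import numpy as np
import pywt

def heart_sound_envelope(pcg, wavelet="db6", level=4):
    """Normalised intensity envelope of the level-`level` wavelet approximation."""
    coeffs = pywt.wavedec(pcg, wavelet, level=level)
    # Keep only the approximation band; zero the detail bands before reconstruction.
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    approx = pywt.waverec(approx_only, wavelet)[: len(pcg)]
    env = approx ** 2                      # intensity (energy) envelope
    return env / (env.max() + 1e-12)

def segment_candidates(env, fs, threshold=0.2, min_gap=0.05):
    """Indices where the envelope rises above `threshold` (candidate S1/S2 lobes)."""
    above = env > threshold
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    keep, last = [], -np.inf
    for i in onsets:
        if (i - last) / fs > min_gap:      # merge onsets belonging to the same lobe
            keep.append(i)
            last = i
    return np.asarray(keep)
```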

뇌파 측정을 이용한 차량 깜빡이 소리의 음질 평가 (Sound Quality Evaluation of Turn-signal of a Passenger Vehicle based on Brain Signal)

  • 신태진;이영준;이상권
    • 한국소음진동공학회논문집
    • /
    • 제22권11호
    • /
    • pp.1137-1143
    • /
    • 2012
  • This paper presents the correlation between psychological and physiological acoustics for automotive sound. The purpose of this research is to evaluate the sound quality of the turn-signal sound of a passenger car based on the EEG signal. The previous method for the objective evaluation of sound quality uses sound metrics based on psychoacoustics; the method presented here uses not only psychoacoustics but also physiological acoustics. For this work, the turn-signal sounds of 7 premium passenger cars were recorded and evaluated subjectively by 30 people. The correlation between these subjective ratings and the sound metrics was calculated based on psychoacoustics. Finally, the correlation between the subjective ratings and the EEG signal measured from the brain was also calculated. Based on these results, a new evaluation system for the sound quality of passenger-car interior sound has been developed using bio-signals.
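
A minimal sketch of the correlation step described above, not the paper's pipeline: Pearson correlations between mean jury ratings for the seven recorded sounds and (a) a psychoacoustic metric and (b) an EEG-derived feature. All array names and numbers are hypothetical placeholders, not data from the study.

```python
# Hypothetical values only; illustrates the correlation computation with SciPy.
import numpy as np
from scipy import stats

ratings   = np.array([6.1, 5.4, 7.2, 4.8, 6.6, 5.9, 7.0])          # placeholder mean jury ratings
loudness  = np.array([3.2, 3.8, 2.9, 4.1, 3.0, 3.5, 2.8])          # placeholder psychoacoustic metric
eeg_power = np.array([0.42, 0.55, 0.33, 0.61, 0.37, 0.50, 0.35])   # placeholder EEG band power

for name, feature in [("loudness", loudness), ("EEG power", eeg_power)]:
    r, p = stats.pearsonr(ratings, feature)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```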

통계적 모델링 기법을 이용한 연속심음신호의 자동분류에 관한 연구 (Automatic Classification of Continuous Heart Sound Signals Using the Statistical Modeling Approach)

  • 김희근;정용주
    • 한국음향학회지
    • /
    • 제26권4호
    • /
    • pp.144-152
    • /
    • 2007
  • Previous studies on heart sound classification have mostly relied on artificial neural networks. However, an analysis of the statistical characteristics of heart sound signals showed that signal modeling with hidden Markov models (HMMs) is well suited to the task. In this study, heart sound signals representing various diseases were modeled with HMMs, and recognition performance was found to depend strongly on how the heart sound signals are clustered. Moreover, heart sound signals in real environments are continuous signals whose start and end points are not known, so heart sound classification with HMMs normally requires extracting one segmented cycle from the continuous signal. Because manual segmentation introduces segmentation errors and is unsuitable for real-time heart sound recognition, we propose a modified ergodic HMM that requires no segmentation step. The proposed HMM showed very high performance in classification experiments on continuous heart sounds.
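
A minimal sketch of the HMM-based classification described above, assuming hmmlearn and precomputed frame features (for example, MFCC-like vectors): one Gaussian HMM per heart-sound condition, using hmmlearn's default fully connected (ergodic) topology, scored directly on an unsegmented recording. Model sizes and the feature pipeline are illustrative, not taken from the paper.

```python
# A sketch under stated assumptions; feature extraction is assumed to happen elsewhere.
import numpy as np
from hmmlearn import hmm

def train_models(features_by_class, n_states=4):
    """features_by_class: dict class_name -> list of (T_i, D) feature arrays."""
    models = {}
    for name, seqs in features_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)          # ergodic (full) transition matrix by default
        models[name] = m
    return models

def classify(models, features):
    """Return the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(features))
```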

마이크로폰 배열로 발생되는 입력 시간차를 이용한 음원의 방향 추정 장치에 관한 연구 (A Study about Direction Estimate Device of the Sound Source using Input Time Difference by Microphones′ Arrangement)

  • 윤준호;최기훈;유재명
    • 한국정밀공학회지
    • /
    • 제21권5호
    • /
    • pp.91-98
    • /
    • 2004
  • Humans use level differences and time differences to obtain spatial information. This paper therefore presents a method that estimates the direction of a sound source from time differences and displays the estimated position, where the position means the direction from the geometric center of the sensors to the sound source. To obtain the time differences between the microphone inputs, the arrangement of the microphones used as sensors to capture the sound signal is described, including the distances among the three microphones and the distance between the microphones and the sound source. Second, the input signals are digitized and passed to the processor; a DSP (digital signal processor) is used so that the signals can be handled in real time. Finally, the position of the sound source is estimated by the algorithm described in this paper.
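
A minimal sketch of the time-difference idea above (the paper's DSP implementation is not reproduced here): estimate the inter-microphone delay by cross-correlation and convert it to a bearing under a far-field assumption. The microphone spacing and sampling rate are illustrative parameters.

```python
# A sketch, assuming two synchronised microphone signals and far-field propagation.
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Relative delay in seconds (positive means sig_a arrives later than sig_b)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

def bearing_from_tdoa(delay, d, c=343.0):
    """Far-field direction (radians) for a microphone pair separated by `d` metres."""
    # delay = d * cos(theta) / c  ->  theta = arccos(c * delay / d)
    return np.arccos(np.clip(c * delay / d, -1.0, 1.0))
```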

신호 모델링 기법을 이용한 소총화기 신호 검출에 대한 연구 (A Study on the Detection of Small Arm Rifle Sound Using the Signal Modelling Method)

  • 신민철;박규식
    • 정보과학회 컴퓨팅의 실제 논문지
    • /
    • 제21권7호
    • /
    • pp.443-451
    • /
    • 2015
  • This paper proposes an algorithm that uses a signal modeling method to effectively detect the shock wave (SW) and muzzle blast (MB) acoustic signals generated by small arms. To locate a sniper on the battlefield, it is important to detect the shock wave and muzzle blast produced by the sniper's rifle accurately and to estimate the enemy sniper's bearing and range from them. To verify the performance of the proposed algorithm, live-fire experiments with small arms were carried out at a domestic military shooting range. The results show that, compared with existing algorithms, the proposed algorithm improves shock wave detection performance by up to nearly 20% and muzzle blast detection performance by about 5%.
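
The abstract does not spell out the signal model, so the sketch below is only a generic stand-in for the detection step: a short-term/long-term average (STA/LTA) energy detector that flags impulsive onsets such as shock-wave or muzzle-blast arrivals. Window lengths and the trigger ratio are assumptions, not values from the paper.

```python
# Generic impulsive-onset detector; not the paper's modeling method.
import numpy as np

def sta_lta_onsets(x, fs, sta_win=0.002, lta_win=0.05, ratio=4.0):
    """Sample indices where the STA/LTA energy ratio first exceeds `ratio`."""
    energy = x.astype(float) ** 2
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same") + 1e-12
    trig = sta / lta > ratio
    return np.flatnonzero(np.diff(trig.astype(int)) == 1)
```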

Heart Sound Recognition by Analysis of Wavelet Transform and Neural Network

  • Lee, Jung-Jun;Lee, Sang-Min;Hong, Seung-Hong
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 ITC-CSCC -2
    • /
    • pp.1045-1048
    • /
    • 2000
  • This paper presents the application of wavelet transform analysis and a neural network method to the phonocardiogram (PCG) signal. The heart sound is an acoustic signal generated by the cardiac valves, the myocardium, and blood flow; it is a very complex, nonstationary signal composed of many sources. Heart sounds can be divided into normal heart sounds and heart murmurs. Murmurs have a broader frequency bandwidth than normal sounds and can occur at random positions in the cardiac cycle. In this paper, we classified heart sounds into normal heart sound (NO), pre-systolic murmur (PS), early systolic murmur (ES), late systolic murmur (LS), and early diastolic murmur (ED). The wavelet transform was used to reduce artifacts and strengthen the low-level signal. The ANN was trained and tested with the backpropagation algorithm on a large data set of examples, normal and abnormal signals classified by experts. The best ANN configuration used 15 hidden-layer neurons, and the proposed algorithm achieved an accuracy of 85.6%.

  • PDF
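
A minimal sketch of the classifier stage, assuming scikit-learn and wavelet-derived feature vectors prepared elsewhere: a backpropagation-trained MLP with the 15 hidden neurons reported above and the five classes NO/PS/ES/LS/ED. The data loading, train/test split ratio, and iteration count are assumptions.

```python
# A sketch under stated assumptions; X and y must be supplied by the caller.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

CLASSES = ["NO", "PS", "ES", "LS", "ED"]   # normal plus four murmur types

def train_heart_sound_mlp(X, y):
    """X: (n_samples, n_features) wavelet features; y: labels drawn from CLASSES."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000)
    clf.fit(X_tr, y_tr)                     # gradient training via backpropagation
    return clf, clf.score(X_te, y_te)       # model and held-out accuracy
```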

Class Determination Based on Kullback-Leibler Distance in Heart Sound Classification

  • Chung, Yong-Joo;Kwak, Sung-Woo
    • The Journal of the Acoustical Society of Korea
    • /
    • 제27권2E호
    • /
    • pp.57-63
    • /
    • 2008
  • Stethoscopic auscultation is still one of the primary tools for the diagnosis of heart diseases due to its easy accessibility and relatively low cost. It is, however, a difficult skill to acquire. Much research effort has been devoted to the automatic classification of heart sound signals to support clinicians in heart sound diagnosis. Recently, hidden Markov models (HMMs) have been used quite successfully in the automatic classification of heart sound signals. In classification using HMMs, however, there are so many heart sound signal types that it is not reasonable to assign a separate class to each of them. In this paper, rather than constructing an HMM for each signal type, we propose to build an HMM for a set of acoustically similar signal types. To define the classes, we use the KL (Kullback-Leibler) distance between different signal types to determine whether they should belong to the same class. In classification experiments on heart sound data consisting of 25 different types of signals, the proposed method proved quite efficient in determining the optimal set of classes. We also found that this class determination approach produced better results than a heuristic class assignment method.
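
The abstract does not give the exact KL computation, so this is a hedged sketch of one common approach: a symmetrized Monte Carlo estimate of the KL distance between two fitted hmmlearn GaussianHMMs, with a hypothetical threshold for deciding whether two signal types should share a class.

```python
# A sketch under stated assumptions; model_p and model_q are fitted
# hmmlearn GaussianHMM instances (one per heart-sound signal type).

def kl_distance(model_p, model_q, n_samples=2000):
    """Symmetrised Monte-Carlo KL estimate between two fitted HMMs."""
    Xp, _ = model_p.sample(n_samples)
    Xq, _ = model_q.sample(n_samples)
    kl_pq = (model_p.score(Xp) - model_q.score(Xp)) / n_samples
    kl_qp = (model_q.score(Xq) - model_p.score(Xq)) / n_samples
    return 0.5 * (kl_pq + kl_qp)

def should_share_class(model_p, model_q, threshold=1.0):
    """Merge two signal types into one class if their KL distance is small."""
    return kl_distance(model_p, model_q) < threshold
```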

LSP 파라미터를 이용한 음성신호의 성분분리에 관한 연구 (A Study on a Method of U/V Decision by Using The LSP Parameter in The Speech Signal)

  • 이희원;나덕수;정찬중;배명진
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 하계종합학술대회 논문집
    • /
    • pp.1107-1110
    • /
    • 1999
  • In speech signal processing, an accurate voiced/unvoiced decision is important for robust word recognition and analysis and for high coding efficiency. In this paper, we propose a voiced/unvoiced decision method using the LSP parameters, which represent the spectral characteristics of the speech signal. Voiced sounds have more LSP parameters in the low-frequency region; unvoiced sounds, by contrast, have more LSP parameters in the high-frequency region. That is, the LSP parameter distribution of voiced sounds differs from that of unvoiced sounds. In addition, the minimum interval between successive LSP parameters occurs in the low-frequency region for voiced sounds and in the high-frequency region for unvoiced sounds. We make the voiced/unvoiced decision using these characteristics. We applied the proposed method to continuous speech and achieved good performance.

  • PDF
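
A minimal sketch of the LSP-based idea above, not the authors' exact decision rule: compute line spectral frequencies (LSFs) for a frame from its LPC polynomial, then call the frame voiced when the smallest gap between adjacent LSFs lies in the low half of the band. The LPC order and the pi/2 decision boundary are assumptions.

```python
# A sketch, assuming librosa for LPC; LSFs are found as root angles of P(z) and Q(z).
import numpy as np
import librosa

def lsf(frame, order=10):
    """Line spectral frequencies (radians, ascending) of one speech frame."""
    a = librosa.lpc(frame.astype(float), order=order)
    p = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])   # P(z)
    q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])   # Q(z)
    angles = np.concatenate([np.angle(np.roots(p)), np.angle(np.roots(q))])
    return np.sort(angles[(angles > 0) & (angles < np.pi)])

def is_voiced(frame, order=10):
    """Voiced if the minimum adjacent-LSF spacing occurs below pi/2 (low band)."""
    w = lsf(frame, order)
    gaps = np.diff(w)
    return w[np.argmin(gaps)] < np.pi / 2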

Sound System Analysis for Health Smart Home

  • CASTELLI Eric;ISTRATE Dan;NGUYEN Cong-Phuong
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2004년도 ICEIC The International Conference on Electronics Informations and Communications
    • /
    • pp.237-243
    • /
    • 2004
  • A multichannel smart sound sensor capable of detecting and identifying sound events in noisy conditions is presented in this paper. Sound information extraction is a complex task, and the main difficulty consists in extracting high-level information from a one-dimensional signal. The input of the smart sound sensor is composed of data collected by 5 microphones, and its output is sent through a network. For real-time operation, the sound analysis is divided into three steps: sound event detection on each channel, fusion of simultaneous events, and sound identification. The event detection module finds impulsive signals in the noise and extracts them from the signal flow. The smart sensor must be able to identify not only impulsive signals but also the presence of speech in a noisy environment. The classification module is launched as a parallel task on the channel chosen by the data fusion process. It identifies the sound event among seven predefined sound classes using a Gaussian mixture model (GMM) method. Mel-frequency cepstral coefficients are used in combination with additional features such as the zero-crossing rate, spectral centroid, and roll-off point. This smart sound sensor is part of a medical telemonitoring project aimed at detecting serious accidents.

  • PDF
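
A minimal sketch of the classification step, assuming librosa features and scikit-learn mixtures: extract the feature set named above (MFCCs plus zero-crossing rate, spectral centroid, and roll-off), train one Gaussian mixture per sound class, and label a new event by maximum likelihood. Mixture sizes and frame parameters are illustrative, not the project's settings.

```python
# A sketch under stated assumptions; audio clips are 1-D arrays at sample rate sr.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def features(y, sr):
    """Frame-wise feature matrix (n_frames, n_features) for one sound event."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    zcr = librosa.feature.zero_crossing_rate(y)
    cent = librosa.feature.spectral_centroid(y=y, sr=sr)
    roll = librosa.feature.spectral_rolloff(y=y, sr=sr)
    return np.vstack([mfcc, zcr, cent, roll]).T

def train_gmms(events_by_class, sr, n_components=8):
    """events_by_class: dict class_name -> list of audio clips for that class."""
    gmms = {}
    for name, clips in events_by_class.items():
        X = np.vstack([features(y, sr) for y in clips])
        gmms[name] = GaussianMixture(n_components=n_components).fit(X)
    return gmms

def classify_event(gmms, y, sr):
    """Pick the class whose GMM gives the highest average log-likelihood."""
    X = features(y, sr)
    return max(gmms, key=lambda name: gmms[name].score(X))
```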

Aurally Relevant Analysis by Synthesis - VIPER a New Approach to Sound Design -

  • Daniel, Peter;Pischedda, Patrice
    • 한국소음진동공학회:학술대회논문집
    • /
    • 한국소음진동공학회 2003년도 춘계학술대회논문집
    • /
    • pp.1009-1009
    • /
    • 2003
  • VIPER, a new tool for the VIsual PERception of sound quality and for sound design, is presented. A requirement for the visualization of sound quality is a signal analysis that models the information processing of the ear. The first step of the signal processing implemented in VIPER calculates an auditory spectrogram using a filter bank adapted to the time and frequency resolution of the human ear. The second step removes redundant information by extracting time and frequency contours from the auditory spectrogram, in analogy to contours in the visual system. In a third step, the contours and/or the auditory spectrogram can be resynthesized, confirming that only aurally relevant information was extracted. The visualization of the contours in VIPER makes it possible to grasp the important components of a signal intuitively. The contribution of parts of a signal to the overall quality can easily be auralized by editing and resynthesizing the contours or the underlying auditory spectrogram. Resynthesis of the time contours alone allows, for example, impulsive components to be auralized separately from the tonal components. Further processing of the contours determines tonal parts in the form of tracks. Audible differences between two versions of a sound can be inspected visually in VIPER with the help of auditory distance spectrograms. Applications are shown for the sound design of several car interior noises.

  • PDF
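
VIPER's ear-adapted filter bank is not described in detail in the abstract, so the sketch below only approximates its first step: an auditory-style spectrogram built from a mel filter bank as a crude stand-in for an ear-adapted analysis. The band count and hop size are assumptions.

```python
# A rough stand-in for an auditory spectrogram, not VIPER's actual filter bank.
import numpy as np
import librosa

def auditory_spectrogram(y, sr, n_bands=64, hop_ms=2.0):
    """Log-compressed mel spectrogram as a crude auditory representation."""
    hop = int(sr * hop_ms / 1000)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_bands, hop_length=hop)
    return librosa.power_to_db(S, ref=np.max)
```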