• Title/Summary/Keyword: Acoustic event recognition


Frequency-Cepstral Features for Bag of Words Based Acoustic Context Awareness (Bag of Words 기반 음향 상황 인지를 위한 주파수-캡스트럴 특징)

  • Park, Sang-Wook;Choi, Woo-Hyun;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.33 no.4 / pp.248-254 / 2014
  • Among acoustic signal analysis tasks, acoustic context awareness is one of the most formidable in terms of complexity, since it requires a sophisticated understanding of individual acoustic events. Conventional context awareness methods employ individual acoustic event detection or recognition to generate a decision about the current context. However, this approach may perform poorly in practical situations, because events can occur simultaneously and acoustically similar events are difficult to distinguish from each other. In particular, the babble noise occurring in a bus or subway environment can confuse the context awareness task, since babble sounds similar in any environment. This paper therefore proposes a frequency-cepstral feature vector to mitigate this confusion in a binary situation awareness task: bus or metro. With a Support Vector Machine (SVM) as the classifier, the proposed feature vector scheme is shown to outperform the conventional scheme.
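
A minimal sketch of the pipeline this abstract describes, using standard MFCCs as a stand-in for the paper's frequency-cepstral feature (the exact feature is not specified here) and a hypothetical `load_clips()` data loader:

```python
# Sketch: frame-level cepstral features pooled per clip, then a binary SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_feature(path, sr=16000, n_mfcc=20):
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Pool frame-level cepstra into one fixed-length clip descriptor.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths, labels = load_clips()  # hypothetical loader: paths, 0 = bus / 1 = metro
X = np.stack([clip_feature(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)  # binary bus-vs-metro classifier
```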

A Real-Time Sound Recognition System with a Decision Logic of Random Forest for Robots (Random Forest를 결정로직으로 활용한 로봇의 실시간 음향인식 시스템 개발)

  • Song, Ju-man;Kim, Changmin;Kim, Minook;Park, Yongjin;Lee, Seoyoung;Son, Jungkwan
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.273-281 / 2022
  • In this paper, we propose a robot sound recognition system that detects various sound events. The proposed system is designed to detect various sound events in real time using a microphone on a robot. To achieve real-time performance, we use a VGG11 model, which comprises several convolutional layers, with a real-time normalization scheme. The VGG11 model is trained on a database augmented across 24 environment combinations (12 reverberation times and 2 signal-to-noise ratios). Additionally, a decision logic based on the random forest algorithm is designed to generate event signals for robot applications; for specific classes of acoustic events, this logic performs better than using the network outputs directly. Experimental results demonstrate the performance of the proposed sound recognition system on a real-time device for robots.
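
A rough sketch of the decision-logic stage: the network's per-class scores are fed to a random forest that produces the final event decision. The `cnn_scores` array below is a placeholder for the VGG11 model's outputs:

```python
# Sketch: random-forest decision logic over placeholder CNN output scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_clips, n_classes = 500, 10
cnn_scores = rng.random((n_clips, n_classes))    # placeholder VGG11 outputs
event_labels = rng.integers(0, n_classes, n_clips)

forest = RandomForestClassifier(n_estimators=100).fit(cnn_scores, event_labels)
decisions = forest.predict(cnn_scores)           # final event signals for the robot
```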

A study on training DenseNet-Recurrent Neural Network for sound event detection (음향 이벤트 검출을 위한 DenseNet-Recurrent Neural Network 학습 방법에 관한 연구)

  • Hyeonjin Cha;Sangwook Park
    • The Journal of the Acoustical Society of Korea / v.42 no.5 / pp.395-401 / 2023
  • Sound Event Detection (SED) aims to identify not only the sound category but also the time interval of target sounds in an audio waveform. It is a critical technique in the field of acoustic surveillance and monitoring systems. Recently, various models have been introduced through the Detection and Classification of Acoustic Scenes and Events (DCASE) Task 4. This paper explores how to design optimal parameters for a DenseNet-based model, an architecture that has achieved outstanding performance in other recognition systems. In the experiments, DenseRNN, the SED model, consists of DenseNet-BC and bi-directional Gated Recurrent Units (GRU), and is trained with the Mean Teacher method. Using an event-based F-score, evaluation is performed for parameters related to both model architecture and model training, under the assessment protocol of DCASE Task 4. Experimental results show that performance improves with these parameters and then saturates near the best result. They also indicate that DenseRNN is trained more effectively without the dropout technique.
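
A minimal sketch of the Mean Teacher scheme mentioned above, assuming a PyTorch setup: the teacher is an exponential moving average (EMA) of the student's weights, and a consistency loss (not shown) ties their predictions on unlabeled audio. The GRU stands in for the full DenseRNN:

```python
# Sketch: Mean Teacher EMA update after each optimizer step on the student.
import copy
import torch

def ema_update(teacher, student, alpha=0.999):
    # teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

student = torch.nn.GRU(input_size=64, hidden_size=32, bidirectional=True)  # stand-in model
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never trained by backprop

# after each optimizer step on the student:
ema_update(teacher, student)
```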

Combining multi-task autoencoder with Wasserstein generative adversarial networks for improving speech recognition performance (음성인식 성능 개선을 위한 다중작업 오토인코더와 와설스타인식 생성적 적대 신경망의 결합)

  • Kao, Chao Yuan;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.38 no.6 / pp.670-677 / 2019
  • As background noise in an acoustic signal degrades the performance of speech or acoustic event recognition, extracting noise-robust acoustic features from a noisy signal remains challenging. In this paper, we propose a combined structure of a Wasserstein Generative Adversarial Network (WGAN) and a MultiTask AutoEncoder (MTAE) as a deep learning architecture that integrates the strengths of both, estimating not only the noise but also the speech features from a noisy acoustic source. The proposed MTAE-WGAN structure estimates the speech signal and the residual noise by employing a gradient penalty and a weight initialization method for the Leaky Rectified Linear Unit (LReLU) and Parametric ReLU (PReLU). With the adopted gradient penalty loss function, the MTAE-WGAN structure enhances the speech features and achieves substantial Phoneme Error Rate (PER) improvements over the stand-alone Deep Denoising Autoencoder (DDAE), MTAE, Redundant Convolutional Encoder-Decoder (R-CED), and Recurrent MTAE (RMTAE) models for robust speech recognition.
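
A sketch of a standard WGAN gradient-penalty term of the kind the abstract refers to; the paper's exact critic and feature dimensions are not given, so the linear critic and 40-dimensional features below are illustrative:

```python
# Sketch: WGAN-GP term penalizing deviation of the critic's gradient norm from 1
# on interpolates between real (clean) and generated (enhanced) feature batches.
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp)
    grad, = torch.autograd.grad(score.sum(), interp, create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

critic = torch.nn.Sequential(torch.nn.Linear(40, 64),
                             torch.nn.LeakyReLU(0.2),   # LReLU, as in the paper
                             torch.nn.Linear(64, 1))
real, fake = torch.randn(8, 40), torch.randn(8, 40)     # placeholder feature batches
gp = gradient_penalty(critic, real, fake)               # added to the critic loss
```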

Dual CNN Structured Sound Event Detection Algorithm Based on Real Life Acoustic Dataset (실생활 음향 데이터 기반 이중 CNN 구조를 특징으로 하는 음향 이벤트 인식 알고리즘)

  • Suh, Sangwon;Lim, Wootaek;Jeong, Youngho;Lee, Taejin;Kim, Hui Yong
    • Journal of Broadcast Engineering / v.23 no.6 / pp.855-865 / 2018
  • Sound event detection is a research area that models human auditory cognitive characteristics by recognizing events in an environment with multiple acoustic events and determining the onset and offset time of each event. DCASE, a research community for acoustic scene classification and sound event detection, runs challenges to encourage researcher participation and to stimulate sound event detection research. However, the dataset provided by the DCASE Challenge is relatively small compared to ImageNet, the representative dataset for visual object recognition, and few open-source acoustic datasets are available. In this study, sound events that can occur indoors and outdoors are collected on a larger scale and annotated to construct a dataset. Furthermore, to improve performance on the sound event detection task, we develop a dual CNN structured sound event detection system that adds a supplementary neural network to a convolutional neural network to determine the presence of sound events. Finally, we conduct comparative experiments against the baseline systems of both DCASE 2016 and 2017.
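
A rough sketch of the dual-CNN idea, assuming PyTorch: a supplementary branch predicts whether any event is present, and its output gates the main branch's class scores. Layer sizes are illustrative, not the paper's:

```python
# Sketch: main CNN classifies events; supplementary CNN gates by event presence.
import torch
import torch.nn as nn

class DualCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        def branch(out_dim):
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, out_dim))
        self.classifier = branch(n_classes)   # which event is it?
        self.presence = branch(1)             # is any event active at all?

    def forward(self, x):                     # x: (batch, 1, mel_bins, frames)
        gate = torch.sigmoid(self.presence(x))          # (batch, 1)
        return torch.sigmoid(self.classifier(x)) * gate # gated class scores

out = DualCNN()(torch.randn(4, 1, 64, 128))   # (4, 10) gated event scores
```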

Acoustic Event Detection and Matlab/Simulink Interoperation for Individualized Things-Human Interaction (사물-사람 간 개인화된 상호작용을 위한 음향신호 이벤트 감지 및 Matlab/Simulink 연동환경)

  • Lee, Sanghyun;Kim, Tag Gon;Cho, Jeonghun;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications / v.10 no.4 / pp.189-198 / 2015
  • Most IoT-related approaches have tried to establish relations by networking things together. This research presents how the pervasive interactions of an ecosystem formed by humans touching objects can be recognized intentionally. By collecting and sharing the detected patterns among all kinds of things, we can construct an environment that enables individualized interactions with different objects. To this end, we utilize technical procedures such as event-driven signal processing, pattern matching for signal recognition, and hardware-in-the-loop simulation. We also implement a prototype sensor processor based on an Arduino MCU, integrated with the system through an Arduino-Matlab/Simulink hybrid interoperation environment. In the experiments, a piezo transducer detects vibration, or vibrates the surface with an acoustic wave, which has a specific frequency spectrum and an individualized time-domain signal shape. The signal distortion in the time and frequency domains is recorded into a memory tracer within the sensor processor, and meaningful patterns are extracted by comparing the stored records with a lookup table (LUT). This paper contributes initial prototypes of the acoustic touch processor using an off-the-shelf MCU, and an integrated framework based on a Matlab/Simulink model that provides individualized touch sensing for the user.
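
A simplified, hypothetical illustration of the LUT-based pattern matching in Python (the actual implementation runs on an Arduino MCU with Matlab/Simulink): a recorded touch signal is compared against stored templates by normalized correlation:

```python
# Sketch: pick the stored template that best correlates with the new signal.
import numpy as np

def match_pattern(signal, lut):
    """Return the name of the best-matching template in the lookup table."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.dot(a, b) / len(a))   # normalized cross-correlation
    return max(lut, key=lambda name: ncc(signal, lut[name]))

lut = {"tap": np.sin(np.linspace(0, 20, 256)),   # placeholder templates
       "swipe": np.linspace(1.0, 0.0, 256)}
event = match_pattern(np.sin(np.linspace(0, 20, 256)) + 0.05, lut)  # -> "tap"
```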

Convolutional neural network based amphibian sound classification using covariance and modulogram (공분산과 모듈로그램을 이용한 콘볼루션 신경망 기반 양서류 울음소리 구별)

  • Ko, Kyungdeuk;Park, Sangwook;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.37 no.1 / pp.60-65 / 2018
  • In this paper, a covariance matrix and a modulogram are proposed for amphibian sound classification using a CNN (Convolutional Neural Network). First, a database is established by collecting amphibian sounds, including those of endangered species, in their natural environment. To apply the database to a CNN, acoustic signals of different lengths must be standardized. For this standardization, a covariance matrix, which captures distribution information, and a modulogram, which captures change over time, are extracted and used as input to the CNN. Experiments are conducted by varying the number of convolutional layers and fully-connected layers. For performance assessment, several conventional methods representing various feature extraction and classification approaches are considered. The results confirm that the convolutional layers have a greater impact on performance than the fully-connected layers, and that the CNN-based approach attains the highest recognition rate among the considered methods, at 99.07 %.
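
A minimal sketch of the length-standardization idea, assuming MFCCs as the frame-level features: however long the clip, the covariance of its features is a fixed-size matrix usable as CNN input (the modulogram channel is omitted here):

```python
# Sketch: a fixed-size covariance matrix from a variable-length clip.
import numpy as np
import librosa

def covariance_input(path, sr=22050, n_mfcc=20):
    y, _ = librosa.load(path, sr=sr)
    feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.cov(feats)   # (n_mfcc, n_mfcc) regardless of clip length

x = covariance_input("frog_call.wav")   # hypothetical file; x.shape == (20, 20)
```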

Performance assessments of feature vectors and classification algorithms for amphibian sound classification (양서류 울음 소리 식별을 위한 특징 벡터 및 인식 알고리즘 성능 분석)

  • Park, Sangwook;Ko, Kyungdeuk;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.36 no.6 / pp.401-406 / 2017
  • This paper presents a performance assessment of several key algorithms for amphibian species sound classification. First, 9 target species, including endangered species, are defined and a database of their sounds is built. For the assessment, three feature vectors, MFCC (Mel Frequency Cepstral Coefficient), RCGCC (Robust Compressive Gammachirp filterbank Cepstral Coefficient), and SPCC (Subspace Projection Cepstral Coefficient), and three classifiers, GMM (Gaussian Mixture Model), SVM (Support Vector Machine), and DBN-DNN (Deep Belief Network - Deep Neural Network), are considered. In addition, an i-vector based classification system, widely used for speaker recognition, is assessed on this task. Experimental results indicate that SPCC-SVM achieves the best performance at 98.81 %, while the other methods also perform well, all above 90 %.
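
A minimal sketch of the kind of assessment protocol described, assuming scikit-learn: cross-validate each classifier on a common feature set and compare accuracies. Random features stand in for the MFCC/RCGCC/SPCC extractors, and the GMM and i-vector pipelines are omitted:

```python
# Sketch: compare classifiers on one feature set via 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X = np.random.randn(270, 40)           # placeholder: 9 species x 30 clips, 40-dim features
y = np.repeat(np.arange(9), 30)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("DNN", MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```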

Bird sounds classification by combining PNCC and robust Mel-log filter bank features (PNCC와 robust Mel-log filter bank 특징을 결합한 조류 울음소리 분류)

  • Badi, Alzahra;Ko, Kyungdeuk;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.39-46 / 2019
  • In this paper, combining features is proposed as a way to enhance the classification accuracy of sounds in noisy environments using a CNN (Convolutional Neural Network) structure. A robust log Mel-filter bank feature, computed using a Wiener filter, and PNCCs (Power Normalized Cepstral Coefficients) are extracted and combined into a 2-dimensional feature used as input to the CNN. An eBird database is used to classify 43 bird species recorded in their natural environment. To evaluate the performance of the combined features in noisy environments, the database is augmented with 3 types of noise at 4 different SNRs (Signal-to-Noise Ratios): 20 dB, 10 dB, 5 dB, and 0 dB. The combined feature is compared to the log Mel-filter bank, with and without the Wiener filter, and to the PNCCs. The combined feature outperforms the other features in clean environments, with a 1.34 % increase in overall average accuracy. Additionally, accuracy in noisy environments across the 4 SNR levels increases by 1.06 % and 0.65 % for shop and schoolyard noise backgrounds, respectively.
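
A sketch of the feature-combination step, assuming librosa: two per-clip feature maps are stacked as channels of a single CNN input. Since librosa provides no PNCC extractor, a power-law compressed mel spectrogram stands in for true PNCCs, and the Wiener filtering is omitted:

```python
# Sketch: stack a log-mel map and a PNCC-like map as a 2-channel CNN input.
import numpy as np
import librosa

def combined_feature(y, sr=22050, n_mels=40):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)      # log Mel-filter bank (Wiener step omitted)
    pncc_like = mel ** (1.0 / 15.0)         # power-law nonlinearity, PNCC stand-in
    return np.stack([log_mel, pncc_like])   # (2, n_mels, frames)

y = np.random.randn(22050).astype(np.float32)   # placeholder 1-second clip
x = combined_feature(y)                          # shape (2, 40, frames)
```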