• Title/Summary/Keyword: Sound Event Detection

Audio Event Detection Using Deep Neural Networks (깊은 신경망을 이용한 오디오 이벤트 검출)

  • Lim, Minkyu;Lee, Donghyun;Park, Hosung;Kim, Ji-Hwan
    • Journal of Digital Contents Society / v.18 no.1 / pp.183-190 / 2017
  • This paper proposes an audio event detection method using Deep Neural Networks (DNN). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate output probabilities for twenty audio events at each frame. Mel-scale filter bank (FBANK) features are extracted from each frame, and five consecutive frames are concatenated into one vector that serves as the input feature of the FFNN. The output layer of the FFNN produces audio event probabilities for each input feature vector. A run of more than five consecutive frames whose event probability exceeds a threshold is detected as an audio event, and the event is considered to continue as long as it is re-detected within one second. The proposed method achieves 71.8 % accuracy on 20 classes drawn from the UrbanSound8K and BBC Sound FX datasets. A minimal sketch of the frame stacking and thresholding steps follows below.
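
As a rough illustration of the front end described above, the following sketch (not the authors' code; the FBANK matrix, threshold, and frame counts are placeholders) stacks five consecutive feature frames into one input vector and turns per-frame probabilities into events by requiring a run of more than five frames above a threshold.

```python
import numpy as np

def stack_frames(fbank, context=5):
    """fbank: (num_frames, num_bands) -> (num_frames-context+1, context*num_bands)."""
    n, _ = fbank.shape
    return np.stack([fbank[i:i + context].reshape(-1) for i in range(n - context + 1)])

def detect_events(probs, threshold=0.5, min_frames=5):
    """probs: per-frame probability of one event class -> list of (start, end) frames."""
    events, start = [], None
    for i, p in enumerate(probs):
        if p > threshold and start is None:
            start = i
        elif p <= threshold and start is not None:
            if i - start > min_frames:
                events.append((start, i))
            start = None
    if start is not None and len(probs) - start > min_frames:
        events.append((start, len(probs)))
    return events

fbank = np.random.rand(100, 40)      # stand-in for 40-band FBANK features
x = stack_frames(fbank)              # stacked inputs that a FFNN would consume
probs = np.random.rand(len(x))       # stand-in for FFNN output probabilities
print(detect_events(probs))
```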

Sound Event Detection based on Deep Neural Networks (딥 뉴럴네트워크 기반의 소리 이벤트 검출)

  • Chung, Suk-Hwan;Chung, Yong-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.14 no.2 / pp.389-396 / 2019
  • In this paper, various deep neural network architectures were applied to sound event detection and their performance was compared on a common audio database. FNN, CNN, RNN, and CRNN models were implemented with hyper-parameters optimized for the database and for the architecture of each network. Among the implemented networks, the CRNN performed best under all test conditions, followed by the CNN. Although the RNN has a merit in tracking temporal correlations in audio signals, it showed poor performance compared with the CNN and CRNN. A minimal CRNN sketch of the kind compared here is given below.
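
For reference, a minimal CRNN of the kind compared in the paper might look like the PyTorch sketch below; the layer sizes and the number of classes are hypothetical, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 4)),                      # pool frequency only
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        self.rnn = nn.GRU(64 * (n_mels // 16), 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                              # x: (batch, 1, time, mels)
        z = self.cnn(x)                                # (batch, ch, time, mels/16)
        z = z.permute(0, 2, 1, 3).flatten(2)           # (batch, time, ch * mels/16)
        z, _ = self.rnn(z)
        return torch.sigmoid(self.fc(z))               # per-frame class probabilities

model = CRNN()
print(model(torch.randn(2, 1, 100, 64)).shape)         # torch.Size([2, 100, 10])
```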

A Real-Time Sound Recognition System with a Decision Logic of Random Forest for Robots (Random Forest를 결정로직으로 활용한 로봇의 실시간 음향인식 시스템 개발)

  • Song, Ju-man;Kim, Changmin;Kim, Minook;Park, Yongjin;Lee, Seoyoung;Son, Jungkwan
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.273-281 / 2022
  • In this paper, we propose a robot sound recognition system that detects various sound events. The proposed system is designed to detect sound events in real time using a microphone mounted on a robot. To achieve real-time performance, we use a VGG11 model, which consists of several convolutional layers, together with a real-time normalization scheme. The VGG11 model is trained on a database augmented over 24 environment combinations (12 reverberation times and 2 signal-to-noise ratios). Additionally, a decision logic based on the random forest algorithm is designed to generate event signals for robot applications; this logic yields better performance for specific classes of acoustic events than using the outputs of the network model alone (a sketch of the idea is shown below). Experimental results demonstrate the performance of the proposed sound recognition system on a real-time device for robots.
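
The decision-logic idea could be sketched as follows, assuming the network's class-probability vectors are available as features for a scikit-learn random forest; the shapes and data below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_classes = 8
probs_train = rng.random((500, n_classes))   # stand-in for VGG11 output probabilities
labels_train = rng.integers(0, 2, 500)       # 1 = event of interest, 0 = otherwise

# Train a random forest on the probability vectors instead of thresholding them directly.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(probs_train, labels_train)

probs_new = rng.random((5, n_classes))       # network outputs at inference time
print(forest.predict(probs_new))             # final event decision per segment
```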

Retrieval of Player Event in Golf Videos Using Spoken Content Analysis (음성정보 내용분석을 통한 골프 동영상에서의 선수별 이벤트 구간 검색)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.28 no.7 / pp.674-679 / 2009
  • This paper proposes a method of player event retrieval that combines two functions: detection of player names in the speech information and detection of sound events in the audio information of golf videos. The system consists of an indexing module and a retrieval module. At indexing time, audio segmentation and noise reduction are applied to the audio stream demultiplexed from the golf videos. The noise-reduced speech is then fed into a speech recognizer, which outputs spoken descriptors, and the player names and sound events are indexed by these descriptors. At search time, the text query is converted into phoneme sequences, and the lists for each query term are retrieved through a description matcher that identifies full and partial phrase hits (a toy matching sketch follows below). For the retrieval of player names, this paper compares the results of word-based, phoneme-based, and hybrid approaches.
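
A toy version of the phoneme-level matching might look like the sketch below; the phoneme index and the partial-match length are invented for illustration and are not the paper's matcher.

```python
def phoneme_hits(query, index, partial_len=3):
    """query: list of phonemes; index: {doc_id: list of phonemes} -> full/partial hits."""
    hits = {}
    q = " ".join(query)
    for doc_id, phones in index.items():
        ref = " ".join(phones)
        if q in ref:
            hits[doc_id] = "full"
        elif any(" ".join(query[i:i + partial_len]) in ref
                 for i in range(len(query) - partial_len + 1)):
            hits[doc_id] = "partial"
    return hits

# Hypothetical phoneme index built from speech-recognizer output.
index = {"clip1": ["k", "i", "m", "h", "j", "o", "ng"],
         "clip2": ["p", "a", "k", "s", "e", "o"]}
print(phoneme_hits(["k", "i", "m"], index))   # {'clip1': 'full'}
```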

Time-domain Sound Event Detection Algorithm Using Deep Neural Network (심층신경망을 이용한 시간 영역 음향 이벤트 검출 알고리즘)

  • Kim, Bum-Jun;Moon, Hyeongi;Park, Sung-Wook;Jeong, Youngho;Park, Young-Cheol
    • Journal of Broadcast Engineering / v.24 no.3 / pp.472-484 / 2019
  • This paper proposes a time-domain sound event detection algorithm using a DNN (Deep Neural Network). In this system, time-domain waveform data, without conversion to the frequency domain, is used as the input to the DNN. The overall structure is a CRNN, to which GLU, ResNet, and squeeze-and-excitation blocks are applied, and the proposed structure also combines features extracted from several layers. In addition, under the assumption that it is practically difficult to obtain strongly labeled training data, this study trained the model with a small amount of weakly labeled data and a large amount of unlabeled data. To use the small amount of training data efficiently, data augmentation methods such as time stretching, pitch change, DRC (dynamic range compression), and block mixing were applied (a sketch of such augmentations is given below), and the unlabeled data supplemented the insufficient training data after pseudo-labels were attached. When the neural network and data augmentation methods proposed in this paper are used, the sound event detection performance improves by about 6 % (in F-score) compared with a CRNN trained in the conventional manner.
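
A hedged sketch of the waveform-level augmentations mentioned above is given below, using librosa for time stretching and pitch shifting; the parameter values are placeholders and block mixing is simplified here to mixing two clips.

```python
import numpy as np
import librosa

def augment(y, sr, other):
    stretched = librosa.effects.time_stretch(y, rate=1.1)        # time stretching
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)   # pitch change
    n = min(len(y), len(other))
    mixed = 0.5 * y[:n] + 0.5 * other[:n]                        # simplified block mixing
    return stretched, shifted, mixed

sr = 16000
y = np.random.randn(sr).astype(np.float32)       # stand-in for a 1 s waveform
other = np.random.randn(sr).astype(np.float32)
for aug in augment(y, sr, other):
    print(aug.shape)
```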

Performance analysis of weakly-supervised sound event detection system based on the mean-teacher convolutional recurrent neural network model (평균-교사 합성곱 순환 신경망 모델을 이용한 약지도 음향 이벤트 검출 시스템의 성능 분석)

  • Lee, Seokjin
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.139-147 / 2021
  • This paper introduces and implements a Sound Event Detection (SED) system based on weakly supervised learning, where only part of the data is labeled, and analyzes the effect of its parameters. The SED system estimates the classes and the onset/offset times of events in an acoustic signal. To train the model, all information on the event classes and onset/offset times must be provided; unfortunately, the onset/offset times are hard to label exactly. Therefore, in the weakly supervised task, the SED model is trained with "strongly labeled data" including the event class and activations, "weakly labeled data" including only the event class, and "unlabeled data" without any label. Recently, SED systems based on the mean-teacher model have been widely used for this task, and they involve several parameters that must be chosen carefully because they affect performance (a minimal mean-teacher sketch appears below). In this paper, a performance analysis was carried out on parameters such as the feature, the moving-average parameter, the weight of the consistency cost function, the ramp-up length, and the maximum learning rate, using the data of DCASE 2020 Task 4, and the effects and optimal values of the parameters were discussed.
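
The mean-teacher mechanics referred to above can be sketched roughly as follows in PyTorch: an exponential-moving-average teacher, a consistency cost on unlabeled data, and a ramp-up on its weight. All hyper-parameter values here are placeholders, not the values analyzed in the paper.

```python
import math
import torch
import torch.nn as nn

def ema_update(student, teacher, alpha=0.999):
    # Teacher weights are an exponential moving average of the student weights.
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(alpha).add_(ps, alpha=1.0 - alpha)

def consistency_weight(step, ramp_up_length=1000, max_weight=1.0):
    # Ramp the consistency weight up over the first ramp_up_length steps.
    t = min(step / ramp_up_length, 1.0)
    return max_weight * math.exp(-5.0 * (1.0 - t) ** 2)

student = nn.Linear(64, 10)
teacher = nn.Linear(64, 10)
teacher.load_state_dict(student.state_dict())

x = torch.randn(8, 64)                         # stand-in for an unlabeled batch
consistency = nn.functional.mse_loss(torch.sigmoid(student(x)),
                                     torch.sigmoid(teacher(x)).detach())
loss = consistency_weight(step=100) * consistency
loss.backward()
ema_update(student, teacher)
print(float(loss))
```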

Dilated convolution and gated linear unit based sound event detection and tagging algorithm using weak label (약한 레이블을 이용한 확장 합성곱 신경망과 게이트 선형 유닛 기반 음향 이벤트 검출 및 태깅 알고리즘)

  • Park, Chungho;Kim, Donghyun;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.414-423 / 2020
  • In this paper, we propose a Dilated Convolution Gated Linear Unit (DCGLU) to mitigate the lack of sparsity and the small receptive field caused by the segmentation-map extraction process in sound event detection with weak labels. With the advent of deep learning frameworks, segmentation-map extraction approaches have shown improved performance in noisy environments. However, these methods must maintain the size of the feature map to extract the segmentation map, so the model is built without pooling operations; as a result, their performance deteriorates from a lack of sparsity and a small receptive field. To mitigate these problems, we utilize a GLU to control the flow of information and Dilated Convolutional Neural Networks (DCNNs) to increase the receptive field without additional learnable parameters (a block-level sketch is given below). For the performance evaluation, we employ the URBAN-SED dataset and a self-organized bird sound dataset. The experiments show that the proposed DCGLU model outperforms the other baselines. In particular, the method is shown to be robust against natural sound noise at three Signal-to-Noise Ratio (SNR) levels (20 dB, 10 dB, and 0 dB).
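
A block-level sketch of the dilated-convolution-plus-GLU idea is shown below; the channel count, kernel size, and dilation are illustrative, not the DCGLU configuration from the paper.

```python
import torch
import torch.nn as nn

class DilatedGLUBlock(nn.Module):
    def __init__(self, channels=64, dilation=2):
        super().__init__()
        # Dilated convolution enlarges the receptive field without pooling;
        # it outputs two branches: one for values and one for the sigmoid gate.
        self.conv = nn.Conv2d(channels, 2 * channels, kernel_size=3,
                              padding=dilation, dilation=dilation)

    def forward(self, x):
        values, gate = self.conv(x).chunk(2, dim=1)
        return values * torch.sigmoid(gate)       # gated linear unit

block = DilatedGLUBlock()
x = torch.randn(1, 64, 100, 40)                   # (batch, channels, time, freq)
print(block(x).shape)                             # torch.Size([1, 64, 100, 40])
```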

Irregular Sound Detection using the K-means Algorithm (K-means 알고리듬을 이용한 비정상 사운드 검출)

  • Lee Jae-yeal;Cho Sang-jin;Chong Ui-pil
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.341-344 / 2004
  • The operation, monitoring, and diagnosis of the generation equipment and machinery running in power plants are very important. Condition monitoring is used to detect anomalies in a power plant, and when an anomaly occurs, a fault-diagnosis procedure follows to analyze the cause of the failure and plan appropriate actions. This paper studies an algorithm that acquires the sounds generated by machines during operation in industrial sites and decides whether they are normal or abnormal. Sound monitoring technology can proceed in two steps: classifying the observed signal into acoustic events and then labeling each classified event as normal or abnormal. Existing techniques have applied simple frequency analysis and pattern recognition; in this paper, an algorithm was developed that classifies sounds into acoustic events using the K-means clustering algorithm and then labels the classified sounds as normal or abnormal (a toy sketch of this two-step idea follows below).
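
A toy version of the two-step idea (cluster with K-means, then flag sounds far from the normal clusters) might look like the following; the features and the threshold rule are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (200, 16))             # stand-in features of normal machine sound
test = np.vstack([rng.normal(0.0, 1.0, (20, 16)),
                  rng.normal(5.0, 1.0, (20, 16))])    # second half imitates a fault

# Cluster the normal sounds into acoustic events.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(normal)

# Flag test sounds whose distance to the nearest normal cluster is unusually large.
dist = kmeans.transform(test).min(axis=1)
threshold = kmeans.transform(normal).min(axis=1).max()
print((dist > threshold).astype(int))                 # 1 = abnormal sound, 0 = normal
```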

Sound event detection model using self-training based on noisy student model (잡음 학생 모델 기반의 자가 학습을 활용한 음향 사건 검지)

  • Kim, Nam Kyun;Park, Chang-Soo;Kim, Hong Kook;Hur, Jin Ook;Lim, Jeong Eun
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.479-487 / 2021
  • In this paper, we propose a Sound Event Detection (SED) model that uses self-training based on a noisy student model. The proposed SED model consists of two stages. In the first stage, a mean-teacher model based on a Residual Convolutional Recurrent Neural Network (RCRNN) is constructed to provide target labels for weakly labeled or unlabeled data. In the second stage, a self-training-based noisy student model is constructed by applying different noise types: feature noise such as time-frequency shift, mixup, and SpecAugment, and dropout-based model noise. In addition, a semi-supervised loss function that acts as label-noise injection is applied to train the noisy student model (a minimal pseudo-labeling sketch is shown below). The performance of the proposed SED model is evaluated on the validation set of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 Challenge Task 4. The experiments show that the single model and the ensemble model of the proposed noisy-student SED improve the F1-score by 4.6 % and 3.4 %, respectively, compared with the top-ranked model in DCASE 2020 Challenge Task 4.
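
The pseudo-labeling core of a noisy-student setup can be sketched as below; the models, the additive feature noise, and the thresholds are placeholders rather than the paper's RCRNN pipeline.

```python
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(64, 10))                    # stand-in for the trained teacher
student = nn.Sequential(nn.Dropout(0.5), nn.Linear(64, 10))   # model noise via dropout
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

unlabeled = torch.randn(32, 64)                               # stand-in for unlabeled features
with torch.no_grad():
    pseudo = (torch.sigmoid(teacher(unlabeled)) > 0.5).float()   # teacher pseudo-labels

student.train()
for _ in range(10):                                           # a few noisy-student training steps
    optimizer.zero_grad()
    noisy_input = unlabeled + 0.1 * torch.randn_like(unlabeled)  # simple feature noise
    loss = criterion(student(noisy_input), pseudo)
    loss.backward()
    optimizer.step()
print(float(loss))
```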

PC-Camera based Monitoring for Unattended NC Machining (무인가공을 위한 PC 카메라 기반의 모니터링)

  • Song, Shi-Yong;Ko, Key-Hoon;Choi, Byoung-Kyu
    • IE interfaces / v.19 no.1 / pp.43-52 / 2006
  • In order to make the best use of NC machine tools with minimal labor costs, they need to be in operation 24 hours a day without being attended by human operators except for setup and tool changes. Thus, unattended machining is becoming a dream of every modern machine shop. However, without a proper mechanism for real-time monitoring of the machining process, unattended machining could lead to a disaster. This paper investigates ways of using a PC camera as a real-time monitoring system for unattended NC milling operations. The study defines five machining states (READY, NORMAL MACHINING, ABNORMAL MACHINING, COLLISION, and END-OF-MACHINING) and models them with the DEVS (discrete event system) formalism. An image change detection algorithm was developed to detect table movements (a frame-differencing sketch is given below), and a flame and smoke detection algorithm to detect an unstable cutting process. Spindle on/off and cutting status could be successfully detected from the sound signals. Initial experiments show that the PC camera can serve as a reliable monitoring system for unattended NC machining.
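
A simple frame-differencing sketch of the image change detection step is given below; the pixel and area thresholds are illustrative, not the paper's algorithm.

```python
import numpy as np

def changed(prev, curr, pixel_thresh=25, ratio_thresh=0.01):
    """Return True when enough pixels differ between two consecutive camera frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return np.mean(diff > pixel_thresh) > ratio_thresh

prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in camera frames
curr = prev.copy()
curr[100:140, 150:200] += 60                                   # simulate table movement
print(changed(prev, curr))                                     # True -> machining activity detected
```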