• Title/Abstract/Keywords: audio event classification


Convolutional Neural Network based Audio Event Classification

  • Lim, Minkyu;Lee, Donghyun;Park, Hosung;Kang, Yoseb;Oh, Junseok;Park, Jeong-Sik;Jang, Gil-Jin;Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 6, pp. 2748-2760, 2018
  • This paper proposes an audio event classification method based on convolutional neural networks (CNNs). CNNs have a strong ability to distinguish complex shapes in images, and the proposed system uses audio features as the input image of a CNN. Mel-scale filter bank features are extracted from each frame, and the features of 40 consecutive frames are concatenated and treated as an input image. The output layer of the CNN generates probabilities of audio events (e.g., dog bark, siren, forest). The event probabilities of all images in an audio segment are accumulated, and the audio event with the highest accumulated probability is taken as the classification result. The proposed method classified thirty audio events with an accuracy of 81.5% on the UrbanSound8K, BBC Sound FX, DCASE2016, and FREESOUND datasets.
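
A minimal sketch of the feature-image construction and segment-level decision described above. The sampling rate, FFT size, and hop length are assumptions; the paper specifies only mel-scale filter bank features stacked over 40 consecutive frames.

```python
import numpy as np
import librosa

def audio_to_feature_images(path, n_mels=40, frames_per_image=40):
    # Load audio; 16 kHz is an assumed sampling rate.
    y, sr = librosa.load(path, sr=16000)
    # Mel-scale filter bank (log-mel) features, one column per frame.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels,
                                         n_fft=400, hop_length=160)
    logmel = librosa.power_to_db(mel)
    # Concatenate 40 consecutive frames into one "image" per window.
    images = [logmel[:, i:i + frames_per_image]
              for i in range(0, logmel.shape[1] - frames_per_image + 1,
                             frames_per_image)]
    return np.stack(images)  # (num_images, n_mels, frames_per_image)

def classify_segment(cnn_probs):
    # cnn_probs: (num_images, num_events) event probabilities from the CNN.
    # Accumulate over the whole segment and pick the top event.
    return int(np.argmax(cnn_probs.sum(axis=0)))
```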

Audio Event Classification Using Deep Neural Networks

  • 임민규;이동현;김광호;김지환
    • 말소리와 음성과학, Vol. 7, No. 4, pp. 27-33, 2015
  • This paper proposes an audio event classification method using Deep Neural Networks (DNNs). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate, for each frame, probabilities of ten audio events (dog bark, engine idling, and so on). For each frame, the mel-scale filter bank features of its consecutive frames are used as the input vector of the FFNN. The event probabilities are accumulated per event, and the classification result is the event with the highest accumulated probability. On the same dataset, the best previously reported accuracy was about 70%, obtained with a Support Vector Machine (SVM). The proposed method achieves its best accuracy of 79.23% on the UrbanSound8K dataset when 80 mel-scale filter bank features from each of 7 consecutive frames (560 in total) are used as the input vector of an FFNN with two hidden layers of 2,000 neurons each, with the rectified linear unit as the activation function.
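
A minimal sketch of this best-performing configuration, assuming PyTorch: 560 inputs (80 mel features from each of 7 frames), two hidden layers of 2,000 ReLU units, and ten event outputs whose per-frame probabilities are accumulated over a clip.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(80 * 7, 2000), nn.ReLU(),   # hidden layer 1
    nn.Linear(2000, 2000), nn.ReLU(),     # hidden layer 2
    nn.Linear(2000, 10),                  # ten audio event classes
    nn.Softmax(dim=-1),                   # per-frame event probabilities
)

# Per-frame probabilities are accumulated over the clip; the event with
# the highest accumulated probability is the classification result.
frames = torch.randn(100, 560)            # 100 frames of stacked features
pred = model(frames).sum(dim=0).argmax()
```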

Intelligent User Pattern Recognition based on Vision, Audio and Activity for Abnormal Event Detections of Single Households

  • Jung, Ju-Ho;Ahn, Jun-Ho
    • 한국컴퓨터정보학회논문지, Vol. 24, No. 5, pp. 59-66, 2019
  • According to KT telecommunication statistics, people stay inside their homes for an average of 11.9 hours a day, and according to NSC statistics in the United States, people of all ages are injured in their homes for a variety of reasons. For this research, we investigated an abnormal event detection algorithm that classifies infrequently occurring behaviors in daily life, such as accidents and health emergencies. We propose a fusion method that combines three classification algorithms, based on vision, audio, and activity patterns, to detect unusual user events. The vision pattern algorithm identifies people and objects in video data collected through home CCTV. The audio and activity pattern algorithms classify user audio and activity behaviors using data collected from the built-in sensors of smartphones in the home. We evaluated the proposed individual pattern algorithms and the fusion method on multiple scenarios.
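
The abstract does not spell out the combination rule. The sketch below assumes a simple late fusion by weighted averaging of the per-class probabilities produced by the three pattern classifiers; the weights are hypothetical.

```python
import numpy as np

def fuse(vision_probs, audio_probs, activity_probs,
         weights=(0.4, 0.3, 0.3)):
    """Each argument is a length-C array of class probabilities from one
    modality; returns the fused class index."""
    stacked = np.stack([vision_probs, audio_probs, activity_probs])
    fused = np.average(stacked, axis=0, weights=weights)
    return int(np.argmax(fused))
```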

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security, Vol. 22, No. 9, pp. 175-182, 2022
  • In our daily lives we come across different types of information, for example in multimedia and text formats: we watch and read the news, listen to the radio, and watch different types of videos. However, we sometimes run into problems when a specific type of content is wanted. For example, someone listening to the radio wants jazz, but all the channels play pop music mixed with advertisements; the listener is stuck with pop and gives up searching for jazz. Such a problem can be solved by an automatic audio classification system. Deep Learning (DL) models can make this easy, but they are expensive and difficult to deploy on edge devices such as the Nano BLE Sense or Raspberry Pi, because they require large computational resources such as a graphics processing unit (GPU). To address this, we propose a low-complexity DL model for Audio Event Detection (AED). We extract mel-spectrograms of dimension 128×431×1 from the audio signals and apply normalization. Three data augmentation methods are applied: frequency masking, time masking, and mixup. In addition, we design a Convolutional Neural Network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions, inspired by VGGNet [1], and further reduce the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. Our model achieves a validation loss of 0.33 and an accuracy of 90.34% at a model size of 132.50 KB.
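
A hedged sketch of the CNN and the float16 quantization step described above, in Keras/TensorFlow. Layer counts, filter sizes, and the number of classes are assumptions; the abstract fixes only the 128×431×1 input, the layer types, and float16 quantization of the trained model.

```python
import tensorflow as tf

def build_model(num_classes=10):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(128, 431, 1)),
        tf.keras.layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.SpatialDropout2D(0.1),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()

# Float16 post-training quantization to shrink the trained model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
```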

Classification of Pornographic Videos Based on the Audio Information

  • 김봉완;최대림;이용주
    • 대한음성학회지:말소리, No. 63, pp. 139-151, 2007
  • As the Internet becomes prevalent in our lives, harmful content such as pornographic videos has been increasing on the Internet, which has become a serious problem. To prevent this, there are many filtering systems, mainly based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on audio information. As the feature vector we use the mel-cepstrum modulation energy (MCME), a modulation energy calculated on the time trajectory of the mel-frequency cepstral coefficients (MFCC), as well as the MFCCs themselves. For the classifier, we use the well-known Gaussian mixture model (GMM). The experimental results show that the proposed system correctly classified 98.3% of pornographic data and 99.8% of non-pornographic data. We expect the proposed method can be applied to a more accurate classification system that uses both video and audio information.
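
A minimal sketch of the GMM classification scheme described above, with one GMM per class compared by average frame log-likelihood. For brevity only MFCC features are shown and the training data are placeholders; the paper's feature vector additionally includes MCME, and the GMM size is an assumption.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

# Placeholder training frames; in practice, pool MFCC (+ MCME) frames
# from labeled pornographic / non-pornographic material.
train_porn = np.random.randn(1000, 13)
train_clean = np.random.randn(1000, 13)

# One GMM per class (64 mixture components is an assumption).
gmm_porn = GaussianMixture(n_components=64).fit(train_porn)
gmm_clean = GaussianMixture(n_components=64).fit(train_clean)

def classify(path):
    x = mfcc_features(path)
    # Compare the average per-frame log-likelihood under each class model.
    return "pornographic" if gmm_porn.score(x) > gmm_clean.score(x) else "clean"
```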


A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning

  • 이유진;낭종호
    • 정보과학회 논문지, Vol. 43, No. 11, pp. 1281-1297, 2016
  • With the recent spread of smart devices and of network environments that allow video content to be freely created and quickly and conveniently shared, personal videos have been increasing rapidly. However, since a personal video consists of multiple modalities whose data change over time, these properties must be taken into account in event classification. This paper proposes a method that classifies personal video events by extracting high-level features from the modalities in a video, rearranging them in temporal order, and learning the relationships between the modalities with a Deep Neural Network (DNN). The proposed method extracts images and audio from a video in temporal synchronization and then extracts high-level information from each using GoogLeNet and a Multi-Layer Perceptron (MLP), respectively. These features are rearranged in the temporal order expressed in the video and combined into a single feature per video, and a DNN trained on these features classifies personal video events.
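
A hedged sketch of this pipeline, assuming PyTorch/torchvision: GoogLeNet supplies high-level image features and an MLP supplies audio features, which are kept in temporal order, flattened into one feature per video, and classified by a DNN. All dimensions, the segment count, and the audio MLP shape are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import googlenet

image_net = googlenet(weights="DEFAULT")
image_net.fc = nn.Identity()              # expose 1024-d image features
image_net.eval()

audio_mlp = nn.Sequential(                # assumed audio feature extractor
    nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 256))

classifier = nn.Sequential(               # event classifier over one video
    nn.Linear(10 * (1024 + 256), 1024), nn.ReLU(), nn.Linear(1024, 20))

def classify_video(frames, audio):
    # frames: (10, 3, 224, 224) images; audio: (10, 128) features,
    # extracted in temporal synchronization (10 segments assumed).
    with torch.no_grad():
        img_feats = image_net(frames)     # (10, 1024)
    aud_feats = audio_mlp(audio)          # (10, 256)
    # Keep temporal order; flatten into a single per-video feature.
    video_feat = torch.cat([img_feats, aud_feats], dim=1).flatten()
    return classifier(video_feat).argmax().item()
```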

Classification of Pornographic Videos Using Audio Information

  • 김봉완;최대림;방만원;이용주
    • 대한음성학회:학술대회논문집, 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집, pp. 207-210, 2007
  • As the Internet becomes prevalent in our lives, harmful content has been increasing on the Internet, which has become a serious problem. Among such content, pornographic video is particularly harmful to children. To prevent this, there are many filtering systems based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on audio information. We use Mel-Cepstrum Modulation Energy (MCME), a modulation energy calculated on the time trajectory of the Mel-Frequency Cepstral Coefficients (MFCC), together with the MFCCs as the feature vector, and a Gaussian Mixture Model (GMM) as the classifier. In the experiments, the proposed system correctly classified 97.5% of pornographic data and 99.5% of non-pornographic data. We expect the proposed method can be used as a component of a more accurate classification system that uses video and audio information simultaneously.


Acoustic Monitoring and Localization for Social Care

  • Goetze, Stefan;Schroder, Jens;Gerlach, Stephan;Hollosi, Danilo;Appell, Jens-E.;Wallhoff, Frank
    • Journal of Computing Science and Engineering, Vol. 6, No. 1, pp. 40-50, 2012
  • The increase in the number of older people due to demographic change poses great challenges to social healthcare systems in both Western and Eastern countries. Supporting older people through formal caregivers requires enormous temporal and personnel effort, so one of the most important goals is to increase the efficiency and effectiveness of today's care. This can be achieved by the use of assistive technologies, which can increase the safety of patients or reduce the time needed for tasks that do not involve direct interaction between the caregiver and the patient. Motivated by this goal, this contribution focuses on applications of acoustic technologies to support users and caregivers in ambient assisted living (AAL) scenarios. Acoustic sensors are small and unobtrusive and can easily be added to existing care or living environments. The information gathered by the acoustic sensors can be analyzed to calculate the position of the user by localization, and the context by detection and classification of acoustic events in the captured acoustic signal. In this way, potentially dangerous situations such as falls, screams, or an increased number of coughs can be detected and appropriate actions initiated by an intelligent autonomous system for the acoustic monitoring of older persons. The proposed system reduces the false alarm rate compared to existing, commercially available approaches that rely only on the acoustic level, because it explicitly distinguishes between the various acoustic events and provides information on the type of emergency that has taken place. Furthermore, the system determines the position of the acoustic event as contextual information using only the acoustic signal, so the position of the user is known even if he or she does not wear a localization device such as a radio-frequency identification (RFID) tag.
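
The paper derives the position of an acoustic event from the microphone signals alone. A common building block for this (an assumption here, not necessarily the authors' method) is estimating the time difference of arrival (TDOA) between two microphones with GCC-PHAT:

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Estimate the TDOA (in seconds) of `sig` relative to `ref`
    via the GCC-PHAT cross-correlation."""
    n = len(sig) + len(ref)
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n=n)   # phase transform
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```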

Comparison of Audio Event Detection Performance using DNN

  • 정석환;정용주
    • 한국전자통신학회논문지, Vol. 13, No. 3, pp. 571-578, 2018
  • Deep learning techniques have recently shown excellent performance in various kinds of pattern recognition. However, there has been some debate about whether a DNN outperforms traditional machine learning techniques in classification experiments with small training datasets. In this study, we compared the performance of a DNN with that of the GMM and the SVM, which have traditionally been used for audio event detection. Recognition experiments on the same data showed that the DNN performed best overall, but the SVM outperformed the DNN in the segment-based F-score.
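
A hedged sketch of this kind of comparison: GMM, SVM, and a small neural network trained on the same features and evaluated with the same metric. The features, labels, and model sizes below are placeholders/assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X, y = rng.normal(size=(2000, 40)), rng.integers(0, 2, 2000)  # placeholder data
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

svm = SVC().fit(X_tr, y_tr)
dnn = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300).fit(X_tr, y_tr)
# One GMM per class; predict by the higher per-sample log-likelihood.
gmms = [GaussianMixture(8).fit(X_tr[y_tr == c]) for c in (0, 1)]
gmm_pred = np.argmax([g.score_samples(X_te) for g in gmms], axis=0)

for name, pred in [("SVM", svm.predict(X_te)),
                   ("DNN", dnn.predict(X_te)),
                   ("GMM", gmm_pred)]:
    print(name, f1_score(y_te, pred))
```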

A study on the waveform-based end-to-end deep convolutional neural network for weakly supervised sound event detection

  • 이석진;김민한;정영호
    • 한국음향학회지, Vol. 39, No. 1, pp. 24-31, 2020
  • This paper studies a deep neural network for sound event detection. In particular, for the weakly supervised problem, which includes weakly labeled and unlabeled training data, we build an end-to-end network that produces event detection results directly from the input audio waveform. The proposed system is based on a deep stack of one-dimensional convolutional layers, with additional structures such as skip connections and gating mechanisms to improve performance. Performance is further improved by sound activity detection and post-processing, and a mean-teacher model is adopted during training to handle the weakly labeled data. The proposed system was evaluated on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 Task 4 data and achieved a segment-based F1-score of about 54% and an event-based F1-score of about 32%.
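
A hedged sketch of one building block of the kind the paper describes: a 1D convolution with a GLU-style gating mechanism and a skip connection. Channel counts and kernel sizes are assumptions; the abstract specifies only deeply stacked 1D convolutions with skip connections and gating.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    def __init__(self, channels=64, kernel_size=9):
        super().__init__()
        # Twice the channels: half for the content, half for the gate.
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                      # x: (batch, channels, time)
        h, gate = self.conv(x).chunk(2, dim=1)
        return x + h * torch.sigmoid(gate)     # gating + skip connection

block = GatedConvBlock()
out = block(torch.randn(2, 64, 16000))         # e.g. 1 s of features at 16 kHz
```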