• Title/Summary/Keyword: Sound Classification


Floor impact sound classification and setting Acceptable limit based on psychoacoustical evaluation (감성평가 기반 바닥충격음 등급화 및 수인한도 설정)

  • Kim, Sung Min;Hong, Joo Young;Jeon, Jin Yong
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2014.10a
    • /
    • pp.7-9
    • /
    • 2014
  • An auditory experiment was conducted to establish annoyance criteria for floor impact noise in apartment buildings. Heavyweight floor impact sounds were recorded using an impact ball, and the impact sound pressure level (SPL) was analyzed together with the temporal decay rate (DR), quantified as the dB drop per second. For the experiment, A-weighted exposure levels of the heavyweight floor impact sounds ranging from 34 dB to 73 dB were evaluated at 3 dB intervals. Participants used a 7-point verbal scale to rate the annoyance of the floor impact noise. The results show that annoyance increases with increasing impact SPL and decreasing DR. Consequently, a classification and an acceptable limit for floor impact sounds were proposed.

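The decay rate used above (dB drop per second) can be illustrated with a minimal sketch: given a level-vs-time envelope already expressed in dB, DR is the negated slope of a linear fit. The sampling rate and envelope here are illustrative, not the paper's measurement setup.

```python
import numpy as np

def decay_rate_db_per_s(level_db, fs):
    """Estimate DR (dB lost per second) as the negated slope of a
    linear fit to a dB level-vs-time envelope sampled at fs Hz."""
    t = np.arange(len(level_db)) / fs
    slope, _ = np.polyfit(t, level_db, 1)
    return -slope

# synthetic envelope: starts at 73 dB and falls 30 dB per second
fs = 100
t = np.arange(0, 2, 1 / fs)
level = 73.0 - 30.0 * t
print(round(decay_rate_db_per_s(level, fs), 1))  # 30.0
```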

Lung Sound Classification Using Hjorth Descriptor Measurement on Wavelet Sub-bands

  • Rizal, Achmad;Hidayat, Risanuri;Nugroho, Hanung Adi
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1068-1081
    • /
    • 2019
  • Signal complexity is one way to analyze biological signals, arising from the physiological processes of biological systems. It can be used to extract features that differentiate a pathological signal from a normal one. In this research, Hjorth descriptors, one family of signal complexity measures, were computed on signal sub-bands as features for lung sound classification. The lung sound signal was decomposed using two wavelet analyses: the discrete wavelet transform (DWT) and wavelet packet decomposition (WPD). A multi-layer perceptron and N-fold cross-validation were used in the classification stage. Using DWT, the highest accuracy obtained was 97.98%, while using WPD it was 98.99%. These results are better than those of the multi-scale Hjorth descriptor used in previous studies.
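The Hjorth descriptors named above have standard definitions based on the variance of a signal and its successive differences. A minimal NumPy sketch (the wavelet decomposition into sub-bands, e.g. via PyWavelets, is assumed to happen beforehand and is not shown):

```python
import numpy as np

def hjorth(x):
    """Hjorth descriptors: activity (variance), mobility (normalized
    spectral width), and complexity (change of mobility)."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# a pure sine has complexity close to 1 (the least "complex" waveform)
t = np.arange(0, 1, 1 / 1000)
a, m, c = hjorth(np.sin(2 * np.pi * 5 * t))
```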

Comparison of environmental sound classification performance of convolutional neural networks according to audio preprocessing methods (오디오 전처리 방법에 따른 콘벌루션 신경망의 환경음 분류 성능 비교)

  • Oh, Wongeun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.3
    • /
    • pp.143-149
    • /
    • 2020
  • This paper presents the effect of the feature extraction methods used in audio preprocessing on the classification performance of Convolutional Neural Networks (CNNs). We extract the mel spectrogram, log mel spectrogram, Mel Frequency Cepstral Coefficients (MFCC), and delta MFCC from the UrbanSound8K dataset, which is widely used in environmental sound classification studies, and scale the data to three distributions. Using these data, we assess the performance of four CNN models, including VGG16 and MobileNetV2, according to the audio features and scaling. The highest recognition rate is achieved when using the unscaled log mel spectrogram as the audio feature. Although this result may not hold for all audio recognition problems, it is useful for classifying the environmental sounds in UrbanSound8K.
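A log mel spectrogram of the kind used as CNN input above can be sketched in plain NumPy; the FFT size, hop length, mel count, and triangular filter construction below are illustrative defaults, not the paper's exact preprocessing settings.

```python
import numpy as np

def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)

def log_mel_spectrogram(x, sr, n_fft=1024, hop=512, n_mels=40):
    """Power STFT -> triangular mel filterbank -> log compression."""
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # mel filterbank: band edges equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return np.log(spec @ fb.T + 1e-10)

sr = 22050
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of 440 Hz
M = log_mel_spectrogram(x, sr)
print(M.shape)  # (42, 40): 42 frames, 40 mel bands
```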

A literature review on diagnostic markers and subtype classification of children with speech sound disorders (원인을 모르는 말소리장애의 하위유형 분류 및 진단 표지에 관한 문헌 고찰)

  • Yi, Roo-Dah;Kim, Soo-Jin
    • Phonetics and Speech Sciences
    • /
    • v.14 no.2
    • /
    • pp.87-99
    • /
    • 2022
  • A review of the indicators used in Korean research is needed to develop a diagnostic marker system for Korean children with speech sound disorders (SSD). This literature review examined research conducted to reveal the characteristics of children with SSD of unknown origin in Korea. Researchers in Korea have used diverse variables as indicators to identify the characteristics of children with SSD, including indicators related to external characteristics of speech sounds as well as comorbid features beyond the speech sounds themselves. Attention has so far focused on a few specific indicators. This implies that some indicators may still require closer study from various angles because of their influence, while others may deserve more attention because of the limited amount of research on them. This article argues that more research is necessary to comprehensively describe the unique characteristics of children with SSD of unknown origin, suggests a direction for future research on diagnostic markers and subtype classification of SSD, and proposes potential diagnostic markers and a set of assessments for the subtype classification of SSD.

Convolutional neural network based traffic sound classification robust to environmental noise (합성곱 신경망 기반 환경잡음에 강인한 교통 소음 분류 모델)

  • Lee, Jaejun;Kim, Wansoo;Lee, Kyogu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.6
    • /
    • pp.469-474
    • /
    • 2018
  • As urban populations grow, research on urban environmental noise is attracting more attention. In this study, we classify abnormal noises occurring in traffic situations using a deep learning algorithm, an approach that has shown high performance in recent environmental noise classification studies. Specifically, we classify four classes (tire skidding sounds, car crash sounds, car horn sounds, and normal sounds) using convolutional neural networks. In addition, we add three environmental noises, including rain, wind, and crowd noise, to our training data so that the classification model is more robust in real traffic situations with environmental noise. Experimental results show that the proposed traffic sound classification model outperforms existing algorithms, particularly under harsh conditions with environmental noise.
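Adding environmental noise to training clips, as described above, is typically done at a controlled signal-to-noise ratio. A small sketch of that mixing step; the target SNR and the signals themselves are illustrative, not the paper's data.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale noise so the clean-to-noise power ratio equals snr_db, then mix."""
    noise = noise[:len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s tone
noise = rng.standard_normal(16000)                          # stand-in for rain/wind/crowd
mixed = mix_at_snr(clean, noise, snr_db=10)
```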

Search for Optimal Data Augmentation Policy for Environmental Sound Classification with Deep Neural Networks (심층 신경망을 통한 자연 소리 분류를 위한 최적의 데이터 증대 방법 탐색)

  • Park, Jinbae;Kumar, Teerath;Bae, Sung-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.854-860
    • /
    • 2020
  • Deep neural networks have shown remarkable performance in various areas, including image classification and speech recognition. The variety of data generated by augmentation plays an important role in improving neural network performance: transforming the data during augmentation lets networks learn more general representations from more diverse forms. In image processing, researchers have proposed not only new augmentation methods but also methods for finding an optimal augmentation policy adapted to the dataset and network structure. Inspired by that work, this paper searches for an optimal augmentation policy for sound data. We carried out many experiments randomly combining augmentation methods such as adding noise, pitch shifting, and time stretching to determine empirically which combination is most effective. By applying the resulting optimal data augmentation policy, we achieve improved classification accuracy on the environmental sound classification dataset (ESC-50).
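Randomly combining augmentations into a candidate policy, as described above, can be sketched as follows. The augmentation set, parameter ranges, and the naive linear-interpolation time stretch are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(x, std):
    return x + rng.normal(0, std, len(x))

def time_stretch(x, rate):
    """Naive stretch by linear resampling (also shifts pitch; a rough stand-in)."""
    n = int(len(x) / rate)
    return np.interp(np.linspace(0, len(x) - 1, n), np.arange(len(x)), x)

AUGMENTS = {
    "noise": lambda x: add_noise(x, 0.005),
    "stretch": lambda x: time_stretch(x, rng.uniform(0.9, 1.1)),
}

def sample_policy(k=2):
    """Randomly pick k distinct augmentations to apply in sequence."""
    return rng.choice(list(AUGMENTS), size=k, replace=False)

x = np.sin(np.linspace(0, 100, 22050))
policy = sample_policy()
for name in policy:
    x = AUGMENTS[name](x)
```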

Proposal of a new method for learning of diesel generator sounds and detecting abnormal sounds using an unsupervised deep learning algorithm

  • Hweon-Ki Jo;Song-Hyun Kim;Chang-Lak Kim
    • Nuclear Engineering and Technology
    • /
    • v.55 no.2
    • /
    • pp.506-515
    • /
    • 2023
  • This study seeks a method for learning the post-start-up engine sound of a diesel generator installed in a nuclear power plant with an unsupervised deep learning algorithm (a CNN autoencoder), and a new method for predicting diesel generator failure using it. Sound data recorded before and after the start-up of two diesel generators was used to train the algorithm. Recordings of 20 min and 2 h were cut into 7 s segments, and each segment was converted into a spectrogram image, yielding 1200 and 7200 spectrogram images, respectively. Using two different deep learning algorithms (a CNN autoencoder and binary classification), it was investigated whether the post-start sounds could be learned as normal. Both could accurately determine the post-start sounds as normal and the pre-start sounds as abnormal, and the algorithms could also detect virtual abnormal sounds created by mixing unusual sounds with the post-start sounds. The unsupervised anomaly detection algorithm achieved an accuracy about 3% higher than that of the binary classification algorithm.
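The decision rule behind autoencoder-based anomaly detection, reconstruction error thresholded on normal data, can be illustrated with a linear (PCA) autoencoder as a stand-in for the paper's CNN autoencoder. The "spectrograms" below are synthetic feature vectors, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in "spectrograms": normal samples share a low-rank structure
basis = rng.standard_normal((5, 64))
normal = rng.standard_normal((200, 5)) @ basis + 0.05 * rng.standard_normal((200, 64))
abnormal = rng.standard_normal((20, 64))  # no shared structure

# linear autoencoder via PCA: encode/decode with top components of normal data
mean = normal.mean(0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
W = Vt[:5]  # keep rank matching the normal subspace

def recon_error(x):
    """Mean squared reconstruction error per sample."""
    z = (x - mean) @ W.T
    return np.mean((x - (mean + z @ W)) ** 2, axis=-1)

# threshold = high percentile of reconstruction error on normal data;
# anything reconstructing worse than that is flagged abnormal
thr = np.percentile(recon_error(normal), 99)
print((recon_error(abnormal) > thr).mean())  # fraction of abnormals flagged
```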

Convolutional Neural Network based Audio Event Classification

  • Lim, Minkyu;Lee, Donghyun;Park, Hosung;Kang, Yoseb;Oh, Junseok;Park, Jeong-Sik;Jang, Gil-Jin;Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2748-2760
    • /
    • 2018
  • This paper proposes an audio event classification method based on convolutional neural networks (CNNs). CNNs are highly effective at distinguishing complex shapes in images, and the proposed system uses audio features as the CNN's input image. Mel-scale filter bank features are extracted from each frame, and the features of 40 consecutive frames are concatenated to form an input image. The output layer of the CNN generates the probability of each audio event (e.g., dog bark, siren, forest). The event probabilities for all images in an audio segment are accumulated, and the audio event with the highest accumulated probability is taken as the classification result. The proposed method classified thirty audio events with an accuracy of 81.5% on the UrbanSound8K, BBC Sound FX, DCASE2016, and FREESOUND datasets.
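Concatenating 40 consecutive feature frames into an input image, as described above, amounts to a sliding-window stacking step. The frame hop and feature dimension below are illustrative choices.

```python
import numpy as np

def frames_to_images(feats, width=40, hop=20):
    """Stack runs of `width` consecutive feature frames into
    (width x n_features) 'images' suitable as CNN input."""
    images = [feats[i:i + width] for i in range(0, len(feats) - width + 1, hop)]
    return np.stack(images)

# 200 frames of 40 filter-bank features each (random stand-in data)
feats = np.random.default_rng(0).standard_normal((200, 40))
imgs = frames_to_images(feats)
print(imgs.shape)  # (9, 40, 40): nine 40-frame images
```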

Implementation of Music Source Classification System by Embedding Information Code (정보코드 결합을 이용한 음원분류 시스템 구현)

  • Jo, Jae-Young;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology
    • /
    • v.10 no.3
    • /
    • pp.250-255
    • /
    • 2006
  • In the digital multimedia era, digital audio formats (MP3, WAV, etc.) have largely replaced analog music. If a digital code carrying useful music information is embedded while the audio is generated, recorded, or transmitted, an MP3 player with an embedded sound source classification system can easily select as well as classify music titles. In this paper, a sound source classification system that can classify and search music information in a user-friendly way is implemented, and experiments with the implemented system verify the validity of the proposed scheme.


Enhanced Sound Signal Based Sound-Event Classification (향상된 음향 신호 기반의 음향 이벤트 분류)

  • Choi, Yongju;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.5
    • /
    • pp.193-204
    • /
    • 2019
  • The explosion of data brought about by improvements in sensor technology and computing performance has become the basis for analyzing situations in industrial fields, and attempts to detect events from such data are increasing. In particular, sound signals collected from sensors capture field information efficiently at relatively low cost and are therefore used as important information for classifying events in various application fields. However, the performance of sound event classification in the field cannot be guaranteed unless noise is removed; a practically applicable system must guarantee robust performance under various noise conditions. In this study, we propose a system that classifies sound events after generating an enhanced sound signal with a deep learning algorithm. To remove noise from the sound signal itself, enhanced sound data is generated using SEGAN, a GAN-based speech enhancement model, combined with a VAE technique. An end-to-end sound event classification system then classifies the sound events, feeding the enhanced sound signal directly into a CNN without a separate data conversion process. The performance of the proposed method was verified experimentally using sound data obtained from industrial fields, yielding F1 scores of 99.29% (railway industry) and 97.80% (livestock industry).
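The enhance-then-classify pipeline shape described above can be illustrated with classic spectral subtraction as a crude stand-in for the learned SEGAN enhancement stage; this is not the paper's method, and the FFT size, hop, and signals are illustrative.

```python
import numpy as np

def spectral_subtract(noisy, noise_profile, n_fft=512, hop=256):
    """Crude magnitude-domain denoiser (a stand-in for learned enhancement):
    subtract the average noise magnitude spectrum from each frame, keep the
    noisy phase, and resynthesize by overlap-add."""
    win = np.hanning(n_fft)

    def stft(x):
        frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
        return np.fft.rfft(frames * win, axis=1)

    spec = stft(noisy)
    noise_mag = np.abs(stft(noise_profile)).mean(axis=0)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
    frames = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), axis=1)
    out = np.zeros(len(noisy))
    for i, f in enumerate(frames):  # overlap-add resynthesis
        out[i * hop:i * hop + n_fft] += f
    return out

rng = np.random.default_rng(1)
sr = 8000
clean = np.sin(2 * np.pi * 400 * np.arange(sr) / sr)
noise = 0.3 * rng.standard_normal(sr)
enhanced = spectral_subtract(clean + noise, noise)
# `enhanced` would then be fed to the classifier in place of the noisy signal
```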