• Title/Summary/Keyword: Mel-Spectrogram


Emergency Sound Classification with Early Fusion (Early Fusion을 적용한 위급상황 음향 분류)

  • Jin-Hwan Yang;Sung-Sik Kim;Hyuk-Soon Choi;Nammee Moon
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1213-1214 / 2023
  • The growing number of CCTV installations at home and abroad has raised concerns about privacy invasion and high installation costs. This study therefore proposes an emergency sound classification model that applies Early Fusion. Feature vectors are extracted from acoustic data using STFT (Short Time Fourier Transform), Spectrogram, and Mel-Spectrogram, fused into three channels via Early Fusion, and trained with ResNet, DenseNet, and EfficientNetV2. Experimental results show that the Early Fusion approach performed best, with DenseNet and EfficientNetV2 both achieving 0.972 in Accuracy and F1-Score.
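
The abstract stacks three time-frequency views of the same signal as channels. Below is a minimal Python/librosa sketch of that early-fusion step; the target shape, nearest-neighbour resizing, and all parameter values are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
import librosa

def early_fusion_features(path, sr=16000, n_fft=1024, hop=512,
                          n_mels=128, size=(128, 128)):
    y, sr = librosa.load(path, sr=sr)
    stft_mag = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))  # STFT magnitude
    spec_db = librosa.amplitude_to_db(stft_mag, ref=np.max)          # log spectrogram
    mel_db = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                       hop_length=hop, n_mels=n_mels),
        ref=np.max)                                                  # log Mel-spectrogram

    def resize(m):
        # Nearest-neighbour resize so all three views share one (freq, time) shape
        fi = np.linspace(0, m.shape[0] - 1, size[0]).astype(int)
        ti = np.linspace(0, m.shape[1] - 1, size[1]).astype(int)
        return m[np.ix_(fi, ti)]

    # Early fusion: stack the three views as channels -> (3, H, W),
    # the input format expected by ResNet/DenseNet/EfficientNetV2 backbones.
    return np.stack([resize(stft_mag), resize(spec_db), resize(mel_db)])
```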

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.57-78 / 2023
  • COVID-19, which emerged in Wuhan, China in November 2019, spread beyond China in 2020 and worldwide by March 2020. For a highly contagious virus such as COVID-19, prevention and active treatment of confirmed cases are important, but because the virus spreads quickly, it is even more important to identify confirmed cases rapidly and prevent further spread. However, PCR testing is costly and time consuming, and although self-test kits are easy to access, purchasing a kit for every test is a burden. If COVID-19 positivity could be determined from the sound of a cough, anyone could easily check their status anytime, anywhere, with considerable economic benefit. In this study, we experimented with methods for identifying COVID-19 infection from cough sounds. Cough sound features were extracted using MFCC, Mel-Spectrogram, and spectral contrast. To ensure sound quality, noisy data were removed based on SNR, and only the cough segments were extracted from each audio file through chunking. Since the objective is binary classification of COVID-19 positive and negative cases, models were trained with XGBoost, LightGBM, and FCNN, algorithms commonly used for classification, and their results were compared. Additionally, we conducted a comparative experiment on model performance using multidimensional vectors obtained by converting cough sounds into both images and vectors. The experimental results showed that the LightGBM model, using features obtained by converting basic health-status information and cough sounds into multidimensional vectors through MFCC, Mel-Spectrogram, spectral contrast, and Spectrogram, achieved the highest accuracy of 0.74.
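
As a sketch of how such acoustic features can feed a gradient-boosting classifier like LightGBM, the snippet below pools each feature matrix over time into one fixed-length vector. The mean/std pooling and all parameter values are assumptions; the abstract does not state the paper's exact vectorization.

```python
import numpy as np
import librosa

def cough_feature_vector(path, sr=22050):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    mel_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    feats = []
    for m in (mfcc, mel_db, contrast):   # pool each (freq, time) matrix over time
        feats.append(m.mean(axis=1))
        feats.append(m.std(axis=1))
    return np.concatenate(feats)         # fixed-length vector per recording

# Usage: X = np.stack([cough_feature_vector(p) for p in paths]) can then be
# passed to lightgbm.LGBMClassifier together with health-status columns.
```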

Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.6 no.2 / pp.509-514 / 2020
  • A conventional TTS system consists of several modules, including text preprocessing, parsing, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and synthesized speech generation. A deep learning TTS system, by contrast, is composed of a Text2Mel process that generates a spectrogram from text and a vocoder that synthesizes speech signals from the spectrogram. In this paper, toward an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel process and, as vocoders, introduce, implement, and compare WaveNet, WaveRNN, and WaveGlow. Experimental results show that WaveNet has the highest MOS and a trained model hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to WaveNet with a model size of several tens of megabytes, but it also cannot run in real time. WaveGlow can run in real time, but its model is several GB in size and its MOS is the worst of the three vocoders. Based on these results, this paper presents reference criteria for selecting an appropriate vocoder according to the hardware environment in which the TTS system is deployed.
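
The speed comparison above can be made concrete as a real-time factor (RTF), the ratio of synthesis time to audio duration. The sketch below shows the measurement; synthesize is a hypothetical stand-in for any of the WaveNet/WaveRNN/WaveGlow inference functions, and the sample rate is illustrative.

```python
import time

def real_time_factor(synthesize, mel, sr=22050):
    """synthesize: hypothetical vocoder call mapping a mel-spectrogram to a waveform."""
    start = time.perf_counter()
    audio = synthesize(mel)                 # mel-spectrogram -> waveform samples
    elapsed = time.perf_counter() - start
    duration = len(audio) / sr              # seconds of synthesized speech
    return elapsed / duration               # RTF > 1 means slower than real time

# The paper's findings correspond to roughly RTF ~ 50 for WaveNet and
# RTF < 1 (real-time capable) for WaveGlow.
```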

Parallel Network Model of Abnormal Respiratory Sound Classification with Stacking Ensemble

  • Nam, Myung-woo;Choi, Young-Jin;Choi, Hoe-Ryeon;Lee, Hong-Chul
    • Journal of the Korea Society of Computer and Information / v.26 no.11 / pp.21-31 / 2021
  • As the COVID-19 pandemic rapidly changes healthcare around the globe, the need for smart healthcare that allows remote diagnosis is increasing. Current classification of respiratory diseases is costly and requires a face-to-face visit with a skilled medical professional, so the pandemic significantly hinders monitoring and early diagnosis. The ability to accurately classify and diagnose respiratory sounds with deep learning-based AI models is therefore essential to modern medicine as a remote alternative to the stethoscope. In this study, we propose a deep learning-based respiratory sound classification model using data collected from medical experts. The sound data were preprocessed with a BandPassFilter, and the relevant respiratory audio features were extracted as Log-Mel Spectrogram and Mel Frequency Cepstral Coefficients (MFCC). A parallel CNN network model was then trained on these two inputs, combined through stacking ensemble techniques with various machine learning classifiers, to efficiently detect and classify abnormal respiratory sounds with high accuracy. The proposed model classified abnormal respiratory sounds with an accuracy of 96.9%, approximately 6.1% higher than the classification accuracy of the baseline model.
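
A minimal sketch of the preprocessing pipeline described above: a Butterworth band-pass filter followed by log-Mel and MFCC extraction for the two parallel CNN branches. The cut-off frequencies, filter order, and feature sizes are illustrative assumptions, not the paper's values.

```python
import librosa
from scipy.signal import butter, sosfiltfilt

def preprocess(path, sr=4000, low=100.0, high=1800.0):
    y, sr = librosa.load(path, sr=sr)
    sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    y = sosfiltfilt(sos, y)                    # keep only the respiratory band
    log_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return log_mel, mfcc                       # one input per parallel CNN branch
```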

Temporal attention based animal sound classification (시간 축 주의집중 기반 동물 울음소리 분류)

  • Kim, Jungmin;Lee, Younglo;Kim, Donghyeon;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.406-413 / 2020
  • In this paper, to improve the classification accuracy of bird and amphibian sounds, we utilize a GLU (Gated Linear Unit) and self-attention, which encourage the network to extract important features from the data and to discriminate the relevant frames among all input sequences for further performance improvement. To utilize the acoustic data, we convert the 1-D acoustic signal to a log-Mel spectrogram. Undesirable components such as background noise in the log-Mel spectrogram are then reduced by the GLU, and the proposed temporal self-attention is employed to improve classification accuracy. The data consist of 6 species of birds and 8 species of amphibians, including endangered species, recorded in the natural environment. As a result, the proposed method achieves an accuracy of 91 % on the bird data and 93 % on the amphibian data, an overall improvement of about 6 %-7 % in accuracy compared to existing algorithms.
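
The sketch below illustrates the two mechanisms named above in PyTorch: a GLU gate that can suppress noisy feature components, followed by temporal self-attention that weights frames before pooling. Dimensions and the single-layer structure are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GLUTemporalAttention(nn.Module):
    def __init__(self, n_mels=64, hidden=128, n_classes=14):  # 6 birds + 8 amphibians
        super().__init__()
        self.gate = nn.Linear(n_mels, 2 * hidden)  # GLU halves this to `hidden`
        self.score = nn.Linear(hidden, 1)          # one attention score per frame
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                            # x: (batch, time, n_mels)
        h = nn.functional.glu(self.gate(x), dim=-1)  # gating suppresses noisy components
        w = torch.softmax(self.score(h), dim=1)      # attention over the time axis
        pooled = (w * h).sum(dim=1)                  # weight important frames more
        return self.classifier(pooled)

# logits = GLUTemporalAttention()(torch.randn(8, 200, 64))  # (batch, classes)
```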

Light weight architecture for acoustic scene classification (음향 장면 분류를 위한 경량화 모형 연구)

  • Lim, Soyoung;Kwak, Il-Youp
    • The Korean Journal of Applied Statistics / v.34 no.6 / pp.979-993 / 2021
  • Acoustic scene classification (ASC) categorizes an audio file based on the environment in which it was recorded, a task long studied in the Detection and Classification of Acoustic Scenes and Events (DCASE) challenges. In this study, we address the constraint ASC faces in real-world applications: the model must have low complexity. We compared several models that apply light-weight techniques. First, a base CNN model was proposed using log Mel-spectrogram, delta, and delta-delta features. Second, depthwise separable convolution and linear bottleneck inverted residual blocks were applied to the convolutional layers, and quantization was applied to the models to obtain low-complexity variants. The low-complexity models performed similarly to, or slightly worse than, the base model, but the model size was significantly reduced, from 503 KB to 42.76 KB.
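
Of the light-weight techniques listed, depthwise separable convolution is the most self-contained to illustrate; a minimal PyTorch sketch follows, with illustrative channel sizes.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Replaces a dense k x k convolution with a per-channel (depthwise) convolution
    followed by a 1x1 (pointwise) channel mix, cutting parameters roughly k*k-fold."""
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel,
                                   padding=kernel // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):                       # x: (batch, in_ch, freq, time)
        return torch.relu(self.bn(self.pointwise(self.depthwise(x))))

# Per block, parameters drop from in_ch*out_ch*k*k to in_ch*k*k + in_ch*out_ch,
# one reason the model size above can shrink so sharply.
```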

A Study on the Classification of Fault Motors using Sound Data (소리 데이터를 이용한 불량 모터 분류에 관한 연구)

  • Il-Sik, Chang;Gooman, Park
    • Journal of Broadcast Engineering / v.27 no.6 / pp.885-896 / 2022
  • Motor failures in manufacturing play an important role in future after-sales service (A/S) and reliability. Motor failure is detected by measuring sound, current, and vibration. The data used in this paper are the sounds of a car side-mirror motor gearbox, divided into three classes. The sound data are converted through MelSpectrogram and fed into the network models. In this paper, various methods were applied: data augmentation to improve fault-motor classification performance, and several approaches to class imbalance, namely resampling, reweighting, changes to the loss function, and a two-stage scheme of representation learning followed by classification. In addition, curriculum learning and self-paced learning were compared across five network models (Bidirectional LSTM Attention, Convolutional Recurrent Neural Network, Multi-Head Attention, Bidirectional Temporal Convolution Network, and Convolutional Neural Network) to find the optimal configuration for motor sound classification.
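
Among the imbalance-handling methods listed, reweighting is the simplest to sketch: inverse-frequency class weights fed to a weighted cross-entropy loss. The class counts below are illustrative, not the paper's data.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical imbalanced counts for the three motor-sound classes
labels = np.array([0] * 800 + [1] * 150 + [2] * 50)
counts = np.bincount(labels)
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights

# Rare fault classes now contribute proportionally more to the training loss
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))
```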

Shooting sound analysis using convolutional neural networks and long short-term memory (합성곱 신경망과 장단기 메모리를 이용한 사격음 분석 기법)

  • Kang, Se Hyeok;Cho, Ji Woong
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.312-318 / 2022
  • This paper proposes a model that classifies the type of gun and information about the sound source location using a deep neural network. The proposed classification model is composed of convolutional neural networks (CNN) and long short-term memory (LSTM). For training and testing, we use the Gunshot Audio Forensic Dataset generated by a project supported by the National Institute of Justice (NIJ). The acoustic signals are transformed into Mel-Spectrograms and provided as training and test data for the proposed model. The model is compared with a control model consisting of convolutional neural networks only, and achieves a high accuracy of more than 90 %.
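
A minimal PyTorch sketch of the CNN + LSTM structure described above: convolutional layers extract local patterns from the Mel-spectrogram, and an LSTM models their sequence over time. Layer sizes and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(   # pool only the frequency axis, keep time steps
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(32 * (n_mels // 4), 128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 1, n_mels, time)
        h = self.cnn(x)                         # (batch, 32, n_mels//4, time)
        h = h.permute(0, 3, 1, 2).flatten(2)    # (batch, time, features)
        _, (hn, _) = self.lstm(h)               # summarize the whole sequence
        return self.fc(hn[-1])                  # gun-type logits
```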

Acceleration signal-based haptic texture recognition according to characteristics of object surface material using conformer model (Conformer 모델을 이용한 물체 표면 재료의 특성에 따른 가속도 신호 기반 햅틱 질감 인식)

  • Hyoung-Gook Kim;Dong-Ki Jeong;Jin-Young Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.3 / pp.214-220 / 2023
  • In this paper, we propose a method to improve texture recognition from haptic acceleration signals, which represent the texture characteristics of object surface materials, using a Conformer model that combines the advantages of a convolutional neural network and a transformer. In the proposed method, the three-axis acceleration signals generated by impact sound and vibration while a person touches the surface of a material with a tool such as a stylus are combined into one-dimensional acceleration data, and a logarithmic Mel-spectrogram is extracted from the haptic acceleration signal in the same way as from an audio signal. The Conformer is then applied to the extracted logarithmic Mel-spectrogram to learn the main local and global frequency features for recognizing the textures of various materials. Experiments on the Lehrstuhl für Medientechnik (LMT) haptic texture dataset, consisting of 60 materials, show that the proposed method recognizes the texture of object surface materials more effectively than existing methods.
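
A minimal sketch of the preprocessing described above. The Euclidean-norm combination of the three axes is an assumption (the abstract does not state the exact combination method), and the sampling rate and FFT parameters are illustrative.

```python
import numpy as np
import librosa

def haptic_log_mel(acc_xyz, sr=3000, n_mels=64):
    """acc_xyz: (3, N) acceleration samples recorded during tool-surface contact."""
    acc_1d = np.linalg.norm(acc_xyz, axis=0)   # combine axes into a 1-D signal (assumed)
    mel = librosa.feature.melspectrogram(y=acc_1d, sr=sr, n_fft=512,
                                         hop_length=128, n_mels=n_mels)
    return librosa.power_to_db(mel)            # log Mel-spectrogram fed to the Conformer
```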

Hierarchical Flow-Based Anomaly Detection Model for Motor Gearbox Defect Detection

  • Younghwa Lee;Il-Sik Chang;Suseong Oh;Youngjin Nam;Youngteuk Chae;Geonyoung Choi;Gooman Park
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.6 / pp.1516-1529 / 2023
  • In this paper, a motor gearbox fault-detection system based on a hierarchical flow-based model is proposed. The proposed system is used for anomaly detection in a motion sound-based actuator module. The flow-based model, a generative model, learns by directly modeling the data distribution; since the objective function is the maximum likelihood of the input data, training is stable and the model is simple to use for anomaly detection. The operating sound of a car's side-view mirror motor, consisting of a folding signal and an unfolding signal, is converted into Mel-spectrogram images and used as training data in this experiment. The proposed system is composed of an encoder and a decoder: data extracted from the layers of a pretrained feature extractor in the encoder are passed to the decoder, where they are used through an interlayer cross-scale convolution operation. The experimental results indicate that the context information of various dimensions extracted from the interlayer hierarchical data improves defect detection accuracy. This paper is notable for using acoustic data and a normalizing flow model to detect outliers based on the features of the experimental data.
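
The anomaly-scoring step described above reduces to thresholding the negative log-likelihood that the trained flow assigns to each Mel-spectrogram image. In the sketch below, flow_log_prob is a hypothetical stand-in for the trained model's log-likelihood function.

```python
import numpy as np

def detect_defects(mel_images, flow_log_prob, threshold):
    """threshold is tuned on a small labeled validation set."""
    scores = np.array([-flow_log_prob(m) for m in mel_images])  # negative log-likelihood
    return scores > threshold                                   # True = defect (anomaly)

# Because the flow is trained by maximum likelihood on normal sounds only,
# no defect examples are needed for training, only for setting the threshold.
```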